added (string) | created (string) | id (string) | metadata (dict) | source (string) | text (string) | version (string)
---|---|---|---|---|---|---
2018-12-27T05:55:35.776Z | 2015-11-26T00:00:00.000 | 73685568 |
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://www.nepjol.info/index.php/JIST/article/download/13949/11307",
"pdf_hash": "4ff46521944622d040d658844391eec1518a00d2",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43374",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "4ff46521944622d040d658844391eec1518a00d2",
"year": 2015
}
| pes2o/s2orc |
Assessing the Impact of Land Use on Water Quality across Multiple Spatial Scales in U-tapao River Basin, Thailand
The study investigated the linkages between land use and water quality in the U-tapao river basin, Thailand, in order to examine the impact of land use changes on river water quality at full-basin, sub-watershed and buffer zone scales (1000 m, 500 m and 200 m) through Geographical Information Systems (GIS) and statistical analyses. Correlation and regression analyses were applied to ten water quality parameters. In the scale analysis, in most cases the sub-watershed scale showed a clearer relationship between land use and water quality than the full-basin and buffer zone scales. This indicates that the strength of the relationship between land use and water quality depends upon scale; therefore, the relationship between water quality parameters and land uses should be studied at multiple scales, which will help to develop effective river basin management in the future.
INTRODUCTION
Over the last decade, land uses in the U-tapao river basin have been changing sharply, causing a decline in agricultural area and a significant increase in urban land. Such changes modify the surface characteristics of the basin, can have considerable influence on runoff quality and quantity, and may be responsible for the increase of various pollutants (Tu & Xia 2006, Tong & Chen 2002, Ngoye & Machiwa 2004). Generally, the surface water quality of a river is contaminated by either point source or non-point source pollution. Point source pollution can usually be monitored easily at a single place, but non-point sources are difficult to identify since they generally cover large areas, and the changing land use pattern is believed to be one of the causes of increasing non-point source pollution (Sliva & Williams 2001, Zampella et al. 2007, Ojutiku & Kolo 2011). In past decades, many researchers have linked water quality with land use practices at basin or watershed levels (Tong & Chen 2002, Ngoye & Machiwa 2004, Sliva & Williams 2001, Li et al. 2009, Basnyat et al. 1999, Ahearn et al. 2005). Tong and Chen (2002) found that an increment in agricultural land had a strong positive correlation with conductivity and pH but a negative correlation with heavy metals, while an increment in residential land had a positive correlation with heavy metals, biological oxygen demand, and conductivity in the watersheds of Ohio State, USA. Similarly, Li et al. (2009) demonstrated that temperature had a negative correlation with vegetation and bare land, pH had a negative correlation with urban land, and nitrite had a positive correlation with bare land in the Han River Basin, China. Ahearn et al. (2005) demonstrated that nitrite and total suspended solids had positive correlations with agriculture, urban and grass land and negative correlations with forest land in the Sierra Nevada, California.
In explaining the relationship between land use and water quality parameters, the geographical or spatial scale plays a vital role. Many previous studies have considered the scale factor when linking land use variables with water quality parameters (Tu & Xia 2006, Tong & Chen 2002, Ngoye & Machiwa 2004, Sliva & Williams 2001, Basnyat et al. 1999, Azyana & NikNorulaini 2012, Jarvie et al. 2002). The analysis of scale is important because it determines what area researchers use to link land use/cover with a stream site's chemical and physical properties (Pratt et al. 2012). However, there is still an ongoing dispute regarding whether the land use of the entire watershed or that of the buffer zone is more important in influencing water quality, all other factors remaining constant (Sliva & Williams 2001). Some researchers have advocated that the entire catchment explains the relationship between land use and water quality better than the buffer zone approach (Sliva & Williams 2001, Azyana & NikNorulaini 2012). In contrast, others have argued that the buffer zone approach gives a better explanation of the relationship between land use and water quality than the entire catchment scale (Hunsaker & Levine 1995, Johnson et al. 1997). Although many researchers have linked land use and water quality at different scales, we know of none who have done so in the U-tapao river basin. The objectives of this study were therefore to determine the effects of land use activities on water quality and to compare the influence of land use on water quality at different spatial scales. In this study, three types of scale were used: i) full-basin: the whole basin and mean water quality parameters of different locations (Fig. 1); ii) sub-watershed: the whole upstream drainage area of each water quality monitoring site (Fig. 2); and iii) buffer zone scales (1000 m, 500 m and 200 m): circular radii around the corresponding monitoring station (Fig. 3). This is the first study to compare land use across these spatial scales in the basin, and it gives a clearer idea of the impact of land use on water quality parameters as well as guidance on selecting the appropriate scale for linking land use with water quality.
Study Area
The study was conducted in the U-tapao river basin, a sub-basin of the Songkhla lake basin in southern Thailand. The basin is about 60 km long from north to south and 40 km wide from west to east, with a total coverage of about 2,305 km². The basin extends from 100°10′ to 100°37′ E and from 6°28′ to 7°10′ N (Fig. 1). The dominant land use of the basin is agriculture, especially rubber plantation. Land use change in this area was mainly caused by urban sprawl. The area consists of 10 watersheds, defined by Southern Remote Sensing, Thailand. All the sub-watersheds are mutually exclusive, and their sizes range from 104 to 348 km². The U-tapao river is one of the most important rivers of the Songkhla Lake Basin; it originates from Bantad Mountain and flows through Hatyai municipality before emptying into the outer part of Songkhla Lake. During its 90 km course, it receives a pollution load from both point and non-point sources.
Water quality - Owing to data limitations, only water quality data from the years 2000-2009 were collected, drawn from the existing monitoring framework of the Regional Environment Office 16, Songkhla. Water quality monitoring stations were located at 10 sites throughout the U-tapao river basin (Fig. 1). The water quality parameters for this study were temperature (TEMP), pH, biological oxygen demand (BOD), dissolved oxygen (DO), electrical conductivity (EC), suspended solids (SS), dissolved solids (DS), fecal coliform bacteria (FCB), ammonia (NH3) and total phosphorus (TP).
Spatial scales - The whole basin was divided into 10 sub-watersheds; all are mutually exclusive, and their sizes range from 104.78 to 349.90 km². ArcView GIS software was used to determine the composition of land use and its characteristics within the 10 sub-watersheds. For each monitoring station, ArcView's buffer facility was used to extract landscape data for 1000 m, 500 m and 200 m buffers, allowing a comparison of the influence of land use on water quality parameters, as sketched below.
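For illustration, the buffer extraction step can be reproduced with open-source tools. The following is a minimal sketch using the R "sf" package; the study itself used ArcView GIS, and the file, layer and column names below are hypothetical.

```r
# Minimal sketch of the buffer-zone land use extraction with the R "sf" package.
# The study used ArcView GIS; file and column names below are hypothetical.
library(sf)

stations <- st_read("monitoring_stations.shp")  # the 10 monitoring sites
landuse  <- st_read("landuse.shp")              # polygons with a 'class' column
# Both layers are assumed to be in a projected CRS with metre units.

landuse_in_buffer <- function(radius_m) {
  buffers <- st_buffer(stations, dist = radius_m)     # circular buffer per station
  clipped <- st_intersection(landuse, buffers)        # land use inside each buffer
  clipped$area <- as.numeric(st_area(clipped))
  # percentage composition of land use classes per station
  comp <- aggregate(area ~ station_id + class, data = clipped, FUN = sum)
  comp$pct <- ave(comp$area, comp$station_id, FUN = function(a) 100 * a / sum(a))
  comp
}

comp_1000 <- landuse_in_buffer(1000)
comp_500  <- landuse_in_buffer(500)
comp_200  <- landuse_in_buffer(200)
```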
Statistical analysis
The impact of land use changes on water quality was assessed by analysing both spatial and temporal relationships between land use and water quality parameters (WQP). In this study, the spatial relationship refers to how WQP vary with land use over space, while the temporal relationship refers to how WQP evolve with land use change over time. Descriptive statistics were used to describe the general characteristics of land use and WQP. One-way ANOVA was used to examine the spatial and temporal variations of land use indicators and WQP. To characterise the relationship between land use and water quality and compare these relationships across spatial scales, Karl Pearson's correlation analysis was used to determine whether land use factors have a positive or negative influence on the different water quality variables. To compare the influence at different scales, simple linear bivariate regression was used. For each combination of variables, coefficients of determination (R²) and significance levels (p) were compared to determine the relative importance of land use variables affecting water quality and the relative significance of the different scales in terms of the impact on water quality. When R² and p were the same for a combination of the same variables at different scales, the regression slopes (β1) were further compared, with higher slopes indicating higher levels of impact.
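As a concrete sketch of this workflow, the per-scale correlation and regression comparison could be expressed as follows; the data frame and column names (scale, landuse_pct, wqp) are hypothetical placeholders for one land use indicator paired with one water quality parameter.

```r
# Sketch of the per-scale comparison described above. 'd' is assumed to hold one
# row per station and scale, with a land use percentage and one WQP value.
compare_scales <- function(d) {
  do.call(rbind, lapply(split(d, d$scale), function(s) {
    ct  <- cor.test(s$landuse_pct, s$wqp, method = "pearson")  # Pearson's r
    fit <- lm(wqp ~ landuse_pct, data = s)                     # bivariate regression
    data.frame(scale = s$scale[1],
               r     = unname(ct$estimate),
               r_p   = ct$p.value,
               R2    = summary(fit)$r.squared,
               beta1 = unname(coef(fit)[2]),                   # regression slope
               reg_p = summary(fit)$coefficients[2, 4])
  }))
}
# Scales with higher R2 (and, when R2 and p tie, higher beta1) indicate a
# stronger land use influence on the water quality parameter.
```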
Land use distribution of the U-tapao river basin (URB)
The U-tapao river basin has experienced land use change over the last decade. Agriculture was the dominant land cover type in the basin, declining from about 80 percent in 2000 to about 74 percent in 2009 (Fig. 4). Between 2001 and 2009, agricultural land use decreased by about 7 percent, whereas forest land use increased slightly (0.34%), and urban and water body land uses increased by about 4 percent and 3 percent, respectively. Agricultural land was mostly converted to urban land for socio-economic reasons, and the change became drastic after 2006. In the correlation analysis, agricultural land use showed significant negative correlations with forest, urban, and water body (r = -0.74, -0.97 & -0.96, p<0.05), and urban land use showed a significant positive correlation with forest (r = 0.72, p<0.05). These results suggest that agricultural land is decreasing and being converted to other land uses.
Spatial and Temporal variations of land use
In the one-way ANOVA for temporal variation of the land use pattern, agriculture, urban, and water body showed significant variation (F=14.45, p<0.05; F=9.26, p<0.05; and F=6.43, p<0.05), whereas forest land use did not. For the spatial variation of agricultural land use, there was a significant difference in mean values of agricultural land across space (F=93.00, p<0.05). In the post-hoc analysis, the highest significant mean percentage difference of agricultural land use was observed between watershed V and watershed VIII (24.29%) (Fig. 5). For forest land use, there was a significant spatial difference in mean values (F=379.97, p<0.05), and the highest significant mean difference was observed between watershed II and watershed VIII (-24.59%). For urban land use, there was a significant spatial difference in mean values (F=36.91, p<0.05), and the highest significant mean difference was observed between watershed I and watershed IX (-24.59%). For water body, there was a significant spatial difference in mean values (F=28.87, p<0.05), and the highest significant mean difference was observed between watershed II and watershed IX (-12.47%). There were significant differences in mean TEMP at both the spatial and temporal level (F=11.936, p<0.05 & F=3.453, p<0.05). There were significant differences in mean pH, EC, DO, BOD, and FCB at the spatial level but not the temporal level (F = 3.29, 5.80, 3.47, 4.13, 20.67, p<0.05). There were significant differences in mean TP and NH3 at the temporal level but not the spatial level (F = 10.78, 3.30, p<0.05). There were no significant differences in mean SS and DS at either the spatial or temporal level.
Correlation between land use and water quality
The different types of land use showed correlations with some water quality variables at different scales. Agricultural land use showed significant negative correlations with TEMP at the sub-watershed scale (r = -0.58, p<0.05) and the 1000 m buffer zone scale (r = -0.52, p<0.05), and with DS at the sub-watershed scale (r = -0.71, p<0.05) and full-basin scale (r = -0.61). Agricultural land use showed significant positive correlations with DO at the sub-watershed scale (r = 0.37, p<0.05) and FCB at the full-basin scale (r = 0.72, p<0.05), and negative correlations with BOD at the sub-watershed scale (r = -0.42, p<0.05) and SS at the 1000 m buffer zone scale (r = -0.49, p<0.05). In contrast to other studies, agricultural land use did not appear to be a major source of water quality degradation at any scale (Basnyat et al. 1999). At the full-basin scale, urban land use had significant positive correlations with DS (r = 0.60, p<0.05) and FCB (r = 0.75, p<0.05). At the sub-watershed scale, urban land use had significant positive correlations with TEMP (r = 0.49, p<0.05), BOD (r = 0.24, p<0.05), and DS (r = 0.40, p<0.05) and a negative correlation with DO (r = -0.24, p<0.05). At the 1000 m buffer zone scale, urban land use had a significant positive correlation with TEMP (r = 0.46, p<0.05). Overall, urban land use showed a water-quality-degrading effect at all scales.
At the full-basin scale, forest land use had significant positive correlations with SS (r = 0.58, p<0.05) and DO (r = 0.66, p<0.05). At the sub-watershed scale, forest land had significant correlations with FCB (r = 0.20, p<0.05) and DS (r = 0.36, p<0.05). Although forest did not decrease SS, the increase of DO with increasing forest land is a sign of improving water quality.
Comparing the relationship between land use and water quality at different scales - To compare the strength of the relationships of water quality parameters with land use indicators using the bivariate regression model, we used R², the p value and β1. For TEMP, there was no significant relationship with land use at the full-basin scale, but at the sub-watershed scale TEMP had significant relationships with agricultural land, urban land and water body. In the 1000 m buffer zone, TEMP had significant relationships with agricultural land and urban land; for the 500 m and 200 m buffer zones, TEMP had a significant relationship with agriculture. Comparing all these scales, TEMP showed the highest level of relationship with land use indicators at the sub-watershed scale. DO had significant relationships with forest and water body at the full-basin scale and with agriculture, urban land, and water body at the sub-watershed scale, but no significant relationship with any type of land use at the buffer zone scales (1000 m, 500 m, and 200 m). For linking forest land use and water body with DO, the full-basin approach is more appropriate. For BOD, there was no significant relationship with land use indicators at the full-basin scale or the 1000 m, 500 m, and 200 m buffer zone scales; at the sub-watershed scale, BOD had significant relationships with agriculture, urban land and water body. For linking land use indicators with BOD, the sub-watershed approach is therefore more appropriate than the other approaches. DS had significant relationships with agricultural land, forest land, and urban land at the full-basin scale, and with agricultural land, forest land, urban land and water body in the sub-watershed approach, but no relationship with any type of land use at the 1000 m, 500 m and 200 m buffer zone scales. Comparing the different scales for linking DS with land use indicators, forest and urban land uses are better explained at the full-basin scale, whereas agriculture and water body are better explained at the sub-watershed scale; both scales can therefore be used in this case.
SS had significant relationships with forest land at the full-basin scale and with agricultural land and urban land in the 1000 m buffer zone; it likewise had significant relationships with agricultural land and urban land in the 500 m and 200 m buffer zones. However, there was no relationship of SS with land uses in the sub-watershed approach, so SS is better linked at the full-basin or buffer zone scales. EC had significant relationships with water body in the 1000 m and 500 m buffer zones, but no significant relationship at the full-basin, sub-watershed or 200 m buffer zone scales. FCB had significant relationships with agricultural land, urban land, and water body at the full-basin scale, and with forest land at the sub-watershed scale; there was no significant relationship between FCB and any type of land use in the 1000 m, 500 m and 200 m buffer zone approaches. To sum up, for linking FCB with agriculture, urban land and water body the full-basin scale is appropriate, whereas for linking it with forest the sub-watershed approach is appropriate.
DISCUSSION
Land use/land cover change is one of the major environmental changes happening around the globe, and it has been affecting river water quality. The land uses of the U-tapao river basin have been changing rapidly, especially from agricultural to urban land between 2006 and 2009, owing to rapid population growth, urbanization and industrialization in the basin. The analysis of urban development during the 2000-2009 period indicated that most of the urban growth occurred in the portion of the basin where agricultural land was available for new development. During this period, urban land more than doubled (from approximately 84.206 km² to 180.589 km², or 7.83% of the total basin area). Urban land showed positive correlations with DS, FCB and TEMP and a negative correlation with DO, showing that the increase of urban land puts pressure on water quality. Owing to unsystematic urbanization, surface water pollution in the U-tapao river is extremely high, driven by uncontrolled and unregulated effluents and wastewater from residential and industrial areas. The results of the study strongly highlight the negative impact of urbanization on the river system; water quality has been highly influenced by pollution from both point and non-point sources in urban areas. Generally, forest is associated with good water quality, and forest land in the riparian zone of a river plays a vital role in reducing the amount of pollutants; in the U-tapao river, however, there is no forest in the riparian zone, which might be one of the causes of pollution of the river. Surprisingly, agriculture did not act as a water quality degrading factor, as suggested by other studies (Tong & Chen 2002, Li et al. 2009, Ahearn et al. 2005). Agriculture was negatively associated with TEMP, BOD, DS and SS and positively associated with DO. Watersheds with a lower percentage of agricultural land have a much higher percentage of developed area, which might become the primary pollution source for the river water (Azyana & NikNorulaini 2012). Since agricultural land use had a negative correlation with urban land, and built-up locations are mostly surrounded by agriculture, urban land is likely to take over agricultural land. This is evidence that urbanization is a major factor in the decrease of agriculture, and it might be one of the causes of the declining water quality of the U-tapao river.
In this study, TEMP had significant relationships with the changing land uses of agriculture, urban land and water body at the sub-watershed scale but showed no significant relationship with land use at the full-basin scale. DO had significant relationships with forest and water body at the full-basin scale and with agriculture, urban land and water body at the sub-watershed scale. BOD had significant relationships with agriculture, urban land and water body at the sub-watershed scale and showed no significant relationship at the other scales. DS had significant relationships with agriculture, forest, and urban land at the full-basin scale and with agriculture, forest, urban land, and water body at the sub-watershed scale, and showed no significant relationship with changing land uses at any buffer zone scale. SS had significant relationships with forest at the full-basin scale and with agriculture and urban land at the 1000 m, 500 m, and 200 m buffer zone scales. FCB showed significant relationships with agriculture, urban land and water body at the full-basin scale and with forest at the sub-watershed scale. EC showed a significant relationship with water body at the 1000 m and 500 m buffer zone scales. Knowledge of the appropriate water quality parameters and spatial scales is thus very important for linking land uses with water quality: in this study, only parameters such as TEMP, EC, DO, BOD, DS, SS, and FCB should be chosen for linking with land use, and the choice depends on the spatial scale. For the scale analysis, a relationship between land use and a water quality parameter found at one scale might not be the same, or might even be opposite, at other scales; spatial scale is therefore an important factor when linking land use and water quality. For example, agricultural land use had a significant negative correlation with DS and a positive correlation with FCB at the full-basin scale; significant negative correlations with TEMP, BOD and DS and a significant positive correlation with DO at the sub-watershed scale; and only significant negative correlations with TEMP and SS at the 1000 m, 500 m, and 200 m buffer zone scales. Since four water quality parameters can be linked with agricultural land use at the sub-watershed scale, it is better to use the sub-watershed scale for linking agricultural land use with water quality parameters. Urban land had significant positive correlations with DS and FCB at the full-basin scale; significant positive correlations with DS, BOD, and TEMP and a significant negative correlation with DO at the sub-watershed scale; and a significant positive correlation with SS at the 1000 m, 500 m, and 200 m buffer zone scales. The results suggest that the sub-watershed approach is the best for linking urban land use to water quality parameters. This relationship may reflect the influence of point source as well as non-point source pollution commonly associated with urbanized areas (Sliva & Williams 2001). After agriculture and urban land uses, forest was the land use that appeared important in determining water quality. Forest had a significant negative correlation with DS and significant positive correlations with DO and SS at the full-basin scale, and a significant negative correlation with DS and a significant positive correlation with FCB at the sub-watershed scale.
There is no forest land in the riparian zone of the river, so the relationship of forest and water quality parameters at the buffer zone scales does not exist. Since forest land use is distributed in the outer part of the basin, it is better to link water quality with forest at the full-basin scale rather than the sub-watershed scale. Water body had significant negative correlations with DO and FCB at the full-basin scale; significant positive correlations with TEMP, BOD, and DS and a significant negative correlation with DO at the sub-watershed scale; and only a positive correlation with EC at the 1000 m and 500 m buffer zone scales. Comparing all scales, it is better to link water quality with water body at the sub-watershed scale.
Several researchers have addressed the issue of whether land use near the river is a better predictor of water quality than land use over the entire watershed (Sliva & Williams 2001). Our results show that water quality correlates slightly better with land use at the sub-watershed scale than with the buffer zone and full-basin approaches, even though forest land use was better explained at the full-basin scale. The sub-watershed approach has also become commonly used for land-water studies. From a river basin management perspective, it is therefore better to link water quality with land uses at the sub-watershed scale, and the approach can be further extended to other water quality parameters as well as hydrological and meteorological parameters.
CONCLUSION
In this study, the water quality parameters TEMP, DO, BOD, SS, DS, FCB and EC proved appropriate for linking with land use indicators, since they showed significant relationships with agriculture, forest, urban land, and water body at different scales. Among land uses, agriculture and urban land are the most important indicators to link with water quality variables. For the scale analysis, the sub-watershed approach is the best for linking land use and water quality parameters for effective river basin management. Understanding the appropriate water quality variables and important land use indicators helps to link land use and water quality parameters in the decision-making process of river basin management. This study also demonstrates the importance of considering geographical scale when linking land use with water quality parameters. Understanding the linkage between land use and river water quality across scales is critical to managing a healthy basin ecosystem, and may help future planning and efforts to alleviate the water quality problems of the river basin.
| v3-fos-license | 2014-10-01T00:00:00.000Z | 2010-07-27T00:00:00.000 | 16234757 |
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.nature.com/articles/6605788.pdf",
"pdf_hash": "f8aa87d11f8593bea77540959b712379d54516c3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43376",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "f8aa87d11f8593bea77540959b712379d54516c3",
"year": 2010
}
| pes2o/s2orc |
Activation of the EGFR/ERK pathway in high-grade mucoepidermoid carcinomas of the salivary glands
Background: Mucoepidermoid carcinoma (MEC) shows differences in biological behaviour depending mainly on its histological grade. High-grade tumours usually have an aggressive biological course and they require additional oncological treatment after surgery. Methods: In a series of 43 MECs of the salivary glands, we studied the epidermal growth factor receptor (EGFR) gene by using dual-colour chromogenic in situ hybridisation (CISH). Moreover, we assessed the protein expressions of the EGFR and the activated extracellular signal-regulated kinases (pERK1/2) by using immunohistochemistry. These results were correlated with the histological grade of the tumours and the outcome of the patients. Results: The CISH study demonstrated a high-EGFR gene copy number, with balanced chromosome 7 polysomy, in 8 out of 11 high-grade MECs (72.7%), whereas 27 low-grade and 15 intermediate-grade tumours had a normal EGFR gene copy number (P<0.001). The EGFR gene gains correlated with disease-free interval (P=0.003) and overall survival of the patients (P=0.019). The EGFR protein expression had a significant correlation with the histological grade of the tumours but not with the outcome of the patients. The pERK1/2 expression correlated with histological grade of tumours (P<0.001), disease-free interval (P=0.004) and overall survival (P=0.001). Conclusions: The EGFR/ERK pathway is activated in high-grade MECs with aggressive behaviour. Patients with these tumours who require oncological treatment in addition to surgery could benefit from EGFR and mitogen-activated protein kinase pathway inhibitors.
Mucoepidermoid carcinoma (MEC) is the most frequent malignant tumour that originates in the major and minor salivary glands, and represents about one third of all malignant salivary gland tumours (Spiro, 1986; Goode and El-Naggar, 2005; Ellis and Auclair, 2008). It is a heterogeneous neoplasm that may present different biological behaviour, depending mainly on the histological grade of the tumour (Goode et al, 1998; Goode and El-Naggar, 2005; Ellis and Auclair, 2008; Nance et al, 2008). Surgical resection is the standard treatment for all grades of MEC. Radiotherapy after wide surgical excision of the tumour is recommended for high-grade MECs. Lymphadenectomy and adjuvant external beam radiotherapy are indicated when cervical metastases are present. Chemotherapy is indicated in the treatment of metastatic disease and in the palliation of locoregional disease not amenable to either salvage surgery or radiation therapy (Agulnik and Siu, 2004; Nance et al, 2008). Low-grade MECs usually do not recur; most patients are cured after surgery and the 5-year survival rate is 76-95%.
Conversely, high-grade MECs are aggressive neoplasms that frequently have an infiltrative pattern of growth, recur and even metastasize, and their 5-year survival rate is 30-50% (Goode et al, 1998; Nance et al, 2008).
To date, clinical trials using targeted therapies on salivary gland tumours are scarce, probably because of the low number of these cases in each institution. Only one phase II study of Herceptin (trastuzumab), with disappointing results, has been published in patients with advanced salivary gland tumours overexpressing HER2/neu (Haddad et al, 2003). Incomplete clinical trials using epidermal growth factor receptor (EGFR) antagonists have been performed on patients with salivary gland cancer. Moreover, studies on the oncogenetic pathways in salivary gland MECs predictive of response to targeted therapies are scarce and incomplete. In recent years, strategies against the EGFR family and the mitogen-activated protein kinase (MAPK) signalling pathway have received special attention in the treatment of cancer. The EGFR family, comprising the four distinct receptors EGFR/ErbB1, HER2/ErbB2, HER3/ErbB3 and HER4/ErbB4, has been identified as a potential therapeutic target in solid tumours. The EGFR/ErbB1 gene is located on chromosome 7p12 and has emerged as a significant factor in the development and growth of many types of cancer, playing an important role in cancer-cell proliferation, angiogenesis and metastasis. This gene encodes a 170-kDa membrane glycoprotein that can be activated by phosphorylation and induce a downstream signalling transduction cascade. A major signalling route of the EGFR is the Ras-Raf-MAPK pathway (Klapper et al, 2000). Activation of Ras initiates a multistep phosphorylation cascade that leads to the activation of MAPKs. The MAPK extracellular signal-regulated kinases ERK1/2 are the best characterised and are most strongly associated with human cancer. The ERK1/2 are activated by dual phosphorylation on a tyrosine and a threonine residue by dual-specificity kinases; they subsequently regulate cell transcription and have been linked to proliferation, survival and transformation (Lewis et al, 1998).
The EGFR antagonists are included in treatment protocols for advanced stages of non-small cell carcinoma of the lung, colorectal cancer and head and neck squamous cell carcinoma (Ciardiello and Tortora, 2008). In head and neck tumours, EGFR can be abnormally activated, and protein overexpression by the neoplastic cells is frequently detected by immunohistochemistry (Grandis and Sok, 2004). However, EGFR amplifications are not frequent and EGFR activating mutations are very rare (Kalyankrishna and Grandis, 2006). EGFR overexpression has been correlated with poor prognosis in head and neck cancer (Kalyankrishna and Grandis, 2006). To date, several clinical trials have been carried out to identify the molecular characteristics of tumours predictive of response to EGFR antagonists. EGFR activating mutations and increased EGFR gene copy number identify the most sensitive population in these tumours (Cappuzzo et al, 2005; Hirsch et al, 2005; Tsao et al, 2005; Sartore-Bianchi et al, 2007). In most MECs of the salivary gland, the EGFR protein is overexpressed (Gibbons et al, 2001; Shang et al, 2007), but EGFR activating mutations are extremely rare (Han et al, 2008; Dahse and Kosmehl, 2008; Dahse et al, 2009). However, studies of the EGFR gene copy number have not been performed before in a series of salivary gland MECs.
In our previous studies, we observed that high-grade MECs with an aggressive course differ molecularly from low-grade tumours. These high-grade tumours overexpress the oncogenic glycoprotein MUC1 (Alos et al, 2005; Handra-Luca et al, 2005). MUC1 acts as a proto-oncogene that interacts with EGFR and correlates with MAPK activation in mouse models (Schroeder et al, 2001) and inhibits the ligand-mediated ubiquitination and degradation of EGFR in vitro (Pochampalli et al, 2007). Moreover, the expression of ERK1/2 MAPKs has been related to aggressive tumour behaviour in MECs of the salivary glands (Handra-Luca et al, 2003). We therefore hypothesised that histological high-grade MECs, which have a clinically aggressive course, may harbour EGFR protein overexpression and high EGFR gene copy numbers linked to aggressive tumour biology. To investigate this, we studied the EGFR gene by using chromogenic in situ hybridisation (CISH) with a dual-colour probe in a series of 43 MECs. This technique yields the same results as fluorescence in situ hybridisation (FISH) and offers potential advantages over FISH for detecting gene copy number, including the ability to distinguish between areas of tumour and normal tissue.
In addition to genetic analysis, the immunohistochemical study of the EGFR protein was performed and activated ERK1/2 were assessed by using an antibody specific for the dually phosphorylated and activated ERK1 and ERK2 (MAPK phospho-p44/42). These molecular studies have been correlated with the histological characteristics of the tumours and the follow-up of the patients.
Selection of cases
Forty-three MECs diagnosed at the Department of Pathology of the Hospital Clinic, and Hospital Princeps d'Espanya, Bellvitge, University of Barcelona, from 1996 until 2005, were reviewed. The medical records were obtained from patients' files in the Departments of Otorhinolaryngology and Maxillofacial Surgery. The study was approved by the Local Ethical Committee and patients gave their informed consent. At diagnosis, the tumours were staged according to the American Joint Committee on Cancer (Sobin and Wittekind, 2002). All patients underwent primary surgery as standard treatment. Lymph node dissection was performed only in cases with lymph node metastases. Full-dose radiotherapy was applied after tumour excision with positive margins, when lymph node metastases were assessed, and in locoregional recurrences. Chemotherapy with cisplatin was added for palliative purposes, in patients with lymph node metastases (N2 or N3) and in cases with tumoural persistence after surgery and resistance to radiotherapy.
Histological grading of MECs
Haematoxylin-eosin and alcian blue-stained slides and paraffin wax-embedded material were available for all cases. The MECs were graded following the 2005 World Health Organization Classification of Tumours (Goode and El-Naggar, 2005).
CISH and immunohistochemistry
Representative paraffin wax blocks were selected from each of the 43 cases for CISH and immunohistochemistry.
The CISH was performed on a 4-μm section of each tumour that was deparaffinised in two changes of xylene and three washes of graded ethanol for 3 min each. The slides were pretreated with CISH pretreatment buffer (Dako, Carpinteria, CA, USA), heated to 92°C, and then rinsed with distilled water. The tissues were digested for 10 min with pepsin digestion solution (Dako) at room temperature, washed twice in distilled water for 5 min each, dehydrated in 70, 85 and 96% alcohol for 2 min each and dried. A measure of 10 μl of dual-colour EGFR Spectrum-red/CEP7 Spectrum-blue probe (Dako) was applied to each slide. Sections were covered with coverslips and denatured on a hot plate at 82°C for 5 min. Hybridisation was done overnight at 37°C. The slides were then washed in 2× SSC at 73°C for 2 min and three times in distilled water. The sections were then blocked with H2O2 in absolute methanol and incubated with a blocking reagent for 10 min at room temperature. The hybridisation signals were detected after sequential incubations with anti-mouse anti-DIG (60 min at room temperature), polymerised horseradish peroxidase anti-mouse antibody (60 min) and 3,3′-diaminobenzidine (DAB). The sections were counterstained with haematoxylin.
Immunohistochemical studies were carried out using the automated immunohistochemical system TechMate 500 (Dako) and the EnVision system (Dako). Briefly, 4-μm sections were deparaffinised and hydrated using graded alcohols and water. For antigen retrieval, an autoclave pretreatment at 120°C for 5 min was performed. Peroxidase was blocked for 7.5 min in ChemMate peroxidase-blocking solution (Dako). The slides were incubated with the primary antibodies for 30 min and washed in ChemMate buffer solution (Dako). The peroxidase-labelled polymer was then applied for 30 min. After being washed in ChemMate buffer solution, the slides were incubated with DAB substrate chromogen solution, washed in water, counterstained with haematoxylin, washed, dehydrated and mounted. The primary antibodies used in the study were: EGFR (Dako; dilution 1:100) and pERK1/2 (Phospho-p44/42; Thr202/Tyr204) (Cell Signaling Technology, Beverly, MA, USA; dilution 1:50). Appropriate positive and negative controls were used.
The CISH and immunohistochemical results were evaluated by two independent observers (BL and LA). For CISH evaluation, a light microscope with a ×40 objective was used. A total of 100 tumour cells were evaluated per case. The centromeric blue signal and the EGFR red signal in each cell were counted, and the ratio of centromeric to EGFR signals was calculated. A case was considered normal if two blue and two red signals were visualised in each nucleus. Polysomy was considered when ≥3 blue and red signals (in equal number) were seen in each nucleus. EGFR amplification was defined as red signals >1.5× blue signals.
The immunostain for EGFR was scored as: 0, no positive cells; 1+, low discontinuous membrane staining; 2+, unequivocal membrane staining of moderate intensity; and 3+, strong and complete membrane staining. Only cases with 2+ and 3+ staining patterns were considered positive. The pERK1/2 showed nuclear positivity. For analytical purposes, positivity for EGFR and pERK1/2 was considered when ≥10% of tumour cells were positive. High pERK1/2 expression was considered when ≥30% of cells were positive.
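The scoring rules above reduce to simple thresholds; the following is a minimal illustrative sketch in R, with hypothetical per-case signal counts and staining percentages as inputs.

```r
# Illustrative encoding of the evaluation rules above (inputs are hypothetical
# per-case summaries over the 100 evaluated tumour cells).
classify_cish <- function(mean_blue, mean_red) {
  if (mean_red > 1.5 * mean_blue)            "EGFR amplification"
  else if (mean_blue >= 3 && mean_red >= 3)  "chromosome 7 polysomy"
  else                                       "normal"
}

score_ihc <- function(pct_egfr_23plus, pct_perk_pos) {
  c(EGFR_positive = pct_egfr_23plus >= 10,  # 2+/3+ staining in >=10% of cells
    pERK_positive = pct_perk_pos    >= 10,
    pERK_high     = pct_perk_pos    >= 30)
}

classify_cish(mean_blue = 3.2, mean_red = 3.2)  # "chromosome 7 polysomy"
```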
Statistical analysis
The continuous clinical variables considered were follow-up and age (median and range were calculated). Overall survival was calculated from diagnosis to the death of the patient or loss to follow-up. Disease-free interval was the time from surgical excision of the tumour to the first recurrence or metastasis. Both overall survival and disease-free interval were analysed by the Kaplan-Meier method. The categorical clinical variables were gender (female/male), location of tumours (parotid/submaxillary/minor salivary gland) and stage (I/II/III/IV). The categorical histological variables considered were histological grade (1/2/3) and molecular results: EGFR protein expression (positive/negative), EGFR gene copy (normal/polysomy) and ERK1/2 expression (>30%/<30% of positive cells). Fisher's exact test was used for comparisons between qualitative variables, and Student's t-test and ANOVA were applied to quantitative variables according to the application conditions. All tests were two sided. Survival differences were analysed by the log-rank method. Differences were considered statistically significant with an α risk of 0.05.
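A minimal sketch of these analyses with the standard R survival machinery follows; the data frame mec and its columns are hypothetical.

```r
# Sketch of the survival and association tests described above, using base R
# and the "survival" package ('mec' and its columns are hypothetical).
library(survival)

# Kaplan-Meier estimate of overall survival by EGFR gene copy status
fit <- survfit(Surv(os_months, death) ~ egfr_copy, data = mec)
plot(fit, xlab = "Months from diagnosis", ylab = "Overall survival")

# Log-rank test for the difference between groups
survdiff(Surv(os_months, death) ~ egfr_copy, data = mec)

# Fisher's exact test for qualitative variables,
# e.g. EGFR gene copy status versus histological grade
fisher.test(table(mec$egfr_copy, mec$grade))
```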
Clinicopathological characteristics of the patients
The clinicopathological characteristics of the patients at diagnosis, the treatment details and outcome are summarised in Table 1.
The statistical associations of the disease-free interval and overall survival with histological grade of tumours and molecular results are expressed in Table 2. Patients with high-grade tumours had a shorter disease-free interval (P = 0.001) and overall survival (P = 0.001) than those with low- and intermediate-grade tumours.
EGFR gene analysis
Eight cases (18.6%) had chromosome 7 polysomy. In these cases, there were >2 signals of both the centromere and EGFR in over 70% of cells, but the ratio between the two signals was 1:1. In two cases there were 3 signals (low polysomy) and in six cases there were >3 signals (high polysomy). No cases with EGFR amplification were detected. All eight cases with chromosome 7 polysomy were high-grade MECs, whereas the rest of the tumours (27 low grade, 5 intermediate grade and 3 high grade) showed a normal pattern (P<0.001). Chromosome 7 polysomy was associated with shorter disease-free interval (P = 0.003) and overall survival (P = 0.019) (Figure 1).
DISCUSSION
This study shows that high-grade MECs with aggressive behaviour harbour an increased EGFR gene copy number and high expression of pERK1/2 MAPKs. Although EGFR amplification was not seen in any of the 43 cases of this series, six of them showed high polysomy with ≥4 EGFR gene copies. The EGFR gene is rarely amplified in human cancers, but an increased EGFR gene copy number with balanced chromosome 7 polysomy in cancer cells is relatively frequent, occurring in ~24-40% of patients with non-small cell lung cancer, squamous cell carcinoma of the head and neck or colorectal cancer. Chromosome 7 polysomy has been linked to tumour aggressiveness and poor clinical outcome (Hirsch et al, 2003; Ciardiello and Tortora, 2008). In this study, all cases with EGFR gene gains had a significantly shorter disease-free interval and overall survival. The EGFR product is a membrane glycoprotein composed of an extracellular ligand-binding domain, a transmembrane lipophilic component and an intracellular protein kinase domain. Ligand binding induces EGFR dimerisation, activation of the intrinsic tyrosine kinase and tyrosine phosphorylation, with the activation of a cascade of biochemical and physiological responses (Lewis et al, 1998). This downstream signalling activates MAPKs through phosphorylation by MAPK kinases, and the activation of this pathway is associated with cell proliferation and oncogenic transformation (Grandis and Sok, 2004). In this series, there was a significant correlation between increased EGFR copy number and high expression of pERK1/2 (P = 0.002). High expression of activated ERK1/2 has been related to tumour progression in several neoplasms (Albanell et al, 2001; Adeyinka et al, 2002) and in salivary gland MECs (Handra-Luca et al, 2003). In this series, pERK1/2 expression was significantly correlated with shorter disease-free interval and overall survival. Furthermore, MAPKs can also be activated through upstream activation of HER2/neu or RAS. About one third of salivary gland MECs have HER2/neu gene amplification (Press et al, 1994) and about one fifth of MECs harbour H-RAS mutations (Yoo and Robinson, 2000a), but K-RAS mutations are extremely rare (Yoo and Robinson, 2000b). However, to define the molecular mechanisms underlying the biological behaviour of high-grade MECs, in vitro experiments with cell lines should be carried out.
The immunohistochemical expression of EGFR in the majority of MECs that we have observed is concordant with other studies (Gibbons et al, 2001; Shang et al, 2007). All cases with chromosome 7 polysomy had expression of EGFR protein in over 60% of cells. Nevertheless, most immunohistochemically positive cases failed to show an increased EGFR gene copy number. This discrepancy between the EGFR gene copy number and the immunohistochemical detection of the protein has been reported before in several cancers, and has been attributed to a post-transcriptional phenomenon mediated at the mRNA level (Grandis and Tweardy, 1993). There was a significant correlation between EGFR protein expression and the histological grade of the tumours, but not with the clinical outcome of the patients. The current grading system is three-tiered, and tumours are classified into low-, intermediate- and high-grade MECs depending on the architecture and cellular characteristics of the neoplasms. Low-grade tumours are usually well-defined, often cystic, with a predominance of mucous cells, whereas high-grade MECs usually have an infiltrative pattern of growth, are solid and are mainly composed of intermediate-type and epidermoid cells. A high mitotic index, cellular anaplasia, necrosis and perineural invasion are characteristics of high-grade tumours (Spiro, 1986; Ellis and Auclair, 2008). Significant differences in the outcome of the patients related to histological grade have been repeatedly confirmed in series of MECs of the salivary glands (Goode et al, 1998; Alos et al, 2005; Nance et al, 2008). In this study, a statistical correlation between the histological grade and the disease-free interval and overall survival of the patients was found. The prognostic value of EGFR polysomy and of the EGFR and pERK1/2 protein expressions was related to the histological grade. Strategies against EGFR include monoclonal antibodies able to bind to the extracellular domain of the receptor, such as cetuximab, and small-molecule ATP-competitive tyrosine kinase inhibitors (TKIs), such as gefitinib and erlotinib. Some clinical, histopathological and molecular characteristics have been proposed for identifying the population sensitive to EGFR-TKI treatment in non-small cell carcinoma of the lung (Sone et al, 2007). Activating mutations in exons 18, 19 and 21 of the EGFR gene have proved to be a significant factor in predicting response to EGFR-TKIs in non-small cell carcinoma of the lung. However, these mutations are less common in the United States and European populations than in the Asian population (Sone et al, 2007), and data from large randomised studies indicate that increased EGFR gene copy number is probably the best factor for predicting response and evaluating overall survival of the patients (Tsao et al, 2005; Cappuzzo et al, 2005). Interestingly, a good response to EGFR antagonists in head and neck and lung carcinomas with expression of MAPKs has been observed (Albanell et al, 2001; Gandara et al, 2004). Moreover, immunohistochemical positivity for activated ERK1/2 has been correlated with a good response to MAPK inhibitors in clinical trials on cutaneous melanomas (Jilaveanu et al, 2009). Therefore, the high-grade MECs in this series, with increased EGFR gene copy number and high pERK1/2 expression, could be sensitive to EGFR or MAPK antagonists.
The MECs of the lung share histological and molecular characteristics with salivary gland MECs. Some series on lung MECs have shown a lack of EGFR mutations in these tumours and a chromosome 7 polysomy rate of 17%, similar to the results in our series (Macarenco et al, 2008). However, some lung MECs in the Asian population have been described as having activating EGFR mutations (Han et al, 2008). MECs and adenosquamous carcinomas share histological characteristics, and the differential diagnosis between the two tumour types may be challenging in the head and neck region and lung (Alos et al, 2004; Rossi et al, 2009). Adenosquamous carcinomas are aggressive tumours arising from the upper or lower airways, whereas MECs have a salivary or bronchial gland origin and a prognosis that depends on the histological grade. Adenosquamous carcinomas usually harbour EGFR activating mutations, whereas MECs do not (Kang et al, 2007; Han et al, 2008; Macarenco et al, 2008; Rossi et al, 2009). Previous studies on salivary gland MECs have found that EGFR mutations are extremely rare (Han et al, 2008; Dahse and Kosmehl, 2008; Dahse et al, 2009).
To date, only a few cases of metastatic salivary gland MEC with EGFR gene gains with chromosome 7 polysomy and a good response to the EGFR monoclonal antibody cetuximab have been published (Grisanti et al, 2008). However, clinical trials that include a large series of salivary gland MECs are difficult to carry out because of the low number of these cases in each institution.
In conclusion, we have identified that high-grade salivary gland MECs usually have an increased EGFR gene copy number and highly express pERK1/2. The activation of the EGFR/ERK pathway in these tumours is associated with aggressive behaviour and could represent a potential indicator of response to EGFR antagonists or MAPK pathway inhibitors.
| v3-fos-license | 2021-11-16T14:12:08.050Z | 2021-11-16T00:00:00.000 | 244119191 |
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2021.780461/pdf",
"pdf_hash": "1ce788da03af6beaa675dec33cbd7b0e6453c856",
"pdf_src": "Frontier",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43379",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "1ce788da03af6beaa675dec33cbd7b0e6453c856",
"year": 2021
}
| pes2o/s2orc |
Comprehensive Analysis Uncovers Prognostic and Immunogenic Characteristics of Cellular Senescence for Lung Adenocarcinoma
Cellular senescence plays a crucial role in tumorigenesis, development and immune modulation in cancers. However, to date, a robust and reliable cellular senescence-related signature and its value in clinical outcomes and immunotherapy response remain unexplored in lung adenocarcinoma (LUAD) patients. Through exploring the expression profiles of 278 cellular senescence-related genes in 936 LUAD patients, a cellular senescence-related signature (SRS) was constructed and validated as an independent prognostic predictor for LUAD patients. Notably, patients with high SRS scores exhibited upregulation of senescence-associated secretory phenotype (SASP) and an immunosuppressive phenotype. Further analysis showed that SRS combined with immune checkpoint expression or TMB served as a good predictor for patients’ clinical outcomes, and patients with low SRS scores might benefit from immunotherapy. Collectively, our findings demonstrated that SRS involved in the regulation of the tumor immune microenvironment through SASP was a robust biomarker for the immunotherapeutic response and prognosis in LUAD.
INTRODUCTION
Lung cancer has the highest incidence and mortality of any cancer worldwide (Sung, et al., 2021). The 5-year survival rate is less than 20% (Miller, et al., 2019). Lung adenocarcinoma (LUAD) is the main histological subtype of non-small-cell lung cancer (NSCLC), accounting for approximately 60% of NSCLC cases (Behera, et al., 2016). Although our understanding of LUAD genomics and breakthroughs in targeted therapies and immunotherapies have substantially expanded treatment modalities, challenges associated with LUAD remain. Therefore, better prognostic tools and biomarkers that accurately predict the characteristics of tumors are urgently needed to stratify patients and personalize treatment strategies for LUAD. Cellular senescence is one of the key processes of ageing (Campisi 2013) and serves as a link between ageing and cancer (Partridge, et al., 2018). However, the linkage between senescence and cancer, especially lung cancer, is complex and poorly understood at present. Previous studies have highlighted that senescence plays a double-edged role in tumorigenesis and development. On the one hand, when senescent cells enter permanent cell cycle arrest, senescence ensures tissue homeostasis and prevents tumorigenesis (Krizhanovsky, et al., 2008; Perez-Mancera, et al., 2014). Senescence acts as a barrier to tumor development in early tumorigenesis when it is followed by immune clearance and tissue remodelling (Xue, et al., 2007). On the other hand, cellular senescence can have a detrimental outcome when senescent cells are not cleared by the immune system and accumulate. This accumulation promotes the senescence-associated secretory phenotype (SASP), which releases cytokines, growth factors, extracellular matrix (ECM) components and ECM-degrading enzymes (Lasry and Ben-Neriah 2015; Lopes-Paciencia, et al., 2019), contributing both to the ageing process and to tumor development (Coppe, et al., 2010; Cuollo, et al., 2020). Therefore, an improved understanding of the impact of senescence on tumor immunity associated with invasion and development is required to frame novel treatment paradigms for tumors.
According to recent studies, tumor cells can undergo senescence as an evolutionary process, involving both tumor-intrinsic characteristics (dramatic gene expression changes along with chromatin remodelling and engagement of a persistent DNA damage response, DDR) and extrinsic immune pressure (a temporal cascade in the development of the SASP) (Lasry and Ben-Neriah 2015; Berben, et al., 2021; Eggert, et al., 2016; Hernandez-Segura, et al., 2018; Kumari and Jat 2021). Notably, the deleterious effects of the SASP overshadow its beneficial properties (Cuollo, et al., 2020). We hypothesized that, accompanied by the accumulation of senescent cells, the inflammatory SASP remodels the tumor immune microenvironment (TIME) by recruiting immunosuppressive protumorigenic cells, such as cancer-associated fibroblasts (CAFs), macrophages and neutrophils, and decreasing cytotoxic lymphocytes (T and NK cells), thereby promoting tumor cell evasion from immunosurveillance, growth, and metastasis and contributing to poor prognosis in LUAD (Figure 1).
To systematically assess the correlations between cellular senescence and prognosis in LUAD, we established a novel risk model based on cellular senescence-related genes and explored their potential importance as predictive biomarkers for prognosis and immunotherapy response. Subsequently, the relationships among risk subgroups, immune checkpoints, and immune cell infiltration were thoroughly analysed based on the cellular senescence-related signature. Further exploration of the mechanisms suggested that tumor cellular senescence affects the TIME through the SASP. This study provides new insights into the regulatory mechanisms of cellular senescence associated with the TIME and strategies for LUAD immunotherapy.
Data Acquisition and Processing
Clinical information and transcriptional profiles of patients with LUAD were obtained from The Cancer Genome Atlas (TCGA, https://portal.gdc.cancer.gov) and the Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo). After filtering, a total of 500 patients with both mRNA expression and corresponding clinical data in the TCGA cohort were included in the training cohort. Fragments per kilobase million (FPKM) data of the TCGA cohort were then transformed into transcripts per million (TPM) data for further analysis, as sketched below. Three additional independent datasets, GSE30219 (Rousseaux, et al., 2013) (n = 83), GSE31210 (Okayama, et al., 2012) (n = 226) and GSE50081 (Der, et al., 2014) (n = 127), were enrolled as the validation cohorts. For microarray data processing, mean expression values were used when genes matched multiple probes. Moreover, IMvigor210 (Mariathasan, et al., 2018), an immunotherapy cohort of 348 metastatic urothelial cancer patients treated with an anti-PD-L1 agent, was downloaded from http://research-pub.gene.com/IMvigor210CoreBiologies/; data processing methods are provided in the IMvigor210CoreBiologies package. Detailed clinical information for the five patient datasets is shown in Supplementary Table S1. The flow diagram of this study is depicted in Supplementary Figure S1.
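The FPKM-to-TPM transformation is a simple per-sample rescaling; a minimal sketch in R (expr is a hypothetical gene-by-sample FPKM matrix):

```r
# FPKM -> TPM: rescale each sample (column) so its values sum to one million.
fpkm_to_tpm <- function(fpkm) {
  apply(fpkm, 2, function(x) x / sum(x) * 1e6)  # TPM_i = FPKM_i / sum(FPKM) * 10^6
}
tpm <- fpkm_to_tpm(expr)  # 'expr' is a hypothetical gene x sample FPKM matrix
```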
Development and Validation of the Cellular Senescence-Related Signature
The list of genes was obtained from CellAge (Avelar, et al., 2020) (https://genomics.senescence.info/cells/), which contains manually curated data on human genes associated with cellular senescence. A total of 278 genes (Supplementary Table S2) were included in this study. We first screened cellular senescence-related differentially expressed genes (DEGs) between normal samples (n = 59) and tumor samples (n = 513) based on the thresholds of an adjusted p < 0.01 and |log2 (fold change)| > 1. Univariate Cox proportional hazards regression analysis was performed to identify cellular senescence-related prognostic genes (p < 0.001). Next, the DEGs and prognostic genes were intersected using the R package "venn" to acquire prognostic cellular senescence-related DEGs, and correlations were visualized with the R package "circlize" (Gu, et al., 2014). Least absolute shrinkage and selection operator (LASSO) Cox regression (Tibshirani 1997) was conducted with a random seed using the R package "glmnet" (Friedman, et al., 2010) to construct the risk score model (cellular senescence-related signature, SRS) best predicting survival in the training cohort, and was repeated 1,000 times. The optimal value of the penalty parameter lambda was determined through 10-fold cross-validation. Based on the median risk score calculated by the SRS, patients in the training and validation cohorts were divided into high- and low-risk groups, and the performance of the SRS was subsequently evaluated.
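A minimal sketch of the LASSO Cox step with "glmnet" follows, under the assumption that x is a patient-by-gene expression matrix of the candidate genes and time/status hold overall survival (hypothetical names; the 1,000 repeated runs are omitted).

```r
# LASSO Cox regression sketch with cv.glmnet (alpha = 1 is the LASSO penalty).
library(glmnet)
library(survival)

set.seed(2021)                                  # a random seed, as in the text
cvfit <- cv.glmnet(x, Surv(time, status),
                   family = "cox",
                   alpha  = 1,
                   nfolds = 10)                 # 10-fold cross-validation for lambda

coef(cvfit, s = "lambda.min")                   # genes retained in the signature
risk <- predict(cvfit, newx = x, s = "lambda.min", type = "link")  # risk score

# Median split into high- and low-risk groups
group <- ifelse(risk > median(risk), "high", "low")
```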
Signature Genes Analyses
Expression of the five signature genes was analysed in The Gene Expression Profiling Interactive Analysis (GEPIA2 (Tang, et al., 2019), http://gepia2.cancer-pku.cn/) database, and these analyses were based on tumor and normal samples from the TCGA and GTEx databases. Pan-cancer expression analysis of the five genes was also performed using the Oncomine (https://www.oncomine.org/) database. UALCAN (Chandrashekar, et al., 2017) (http://ualcan.path.uab.edu), another powerful interactive online tool, was used to reveal the promoter methylation levels of the signature genes.
Pathway and Functional Enrichment Analysis
Kyoto Encyclopedia of Genes and Genomes (KEGG) (Kanehisa, et al., 2016) and Gene Ontology (GO) (The Gene Ontology 2019) enrichment analyses were applied using the R package clusterProfiler (Yu, et al., 2012). The DEGs between the high- and low-risk groups were subjected to pathway and functional enrichment analysis. Gene set enrichment analysis (GSEA) (Subramanian, et al., 2005) was also performed in the javaGSEA desktop application (GSEA 4.1.0) to identify the underlying pathways or processes in patients with high or low scores. Significantly enriched gene sets were defined as gene sets with a normalized enrichment score (NES) > 1.5 and p < 0.05.
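The clusterProfiler calls for the GO and KEGG steps can be sketched as follows; `deg_entrez` is a hypothetical vector of Entrez IDs for the DEGs between the risk groups (GSEA itself was run in the desktop application, with enriched sets retained at NES > 1.5 and p < 0.05).

```r
library(clusterProfiler)
library(org.Hs.eg.db)

# GO enrichment (biological process) of the risk-group DEGs
ego <- enrichGO(gene = deg_entrez, OrgDb = org.Hs.eg.db,
                ont = "BP", pvalueCutoff = 0.05, readable = TRUE)

# KEGG pathway enrichment of the same gene list
ekegg <- enrichKEGG(gene = deg_entrez, organism = "hsa",
                    pvalueCutoff = 0.05)
```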
Assessment of SRS and Response to Immune Checkpoint Inhibitors
The immunophenoscore (IPS), which has been demonstrated to predict patients' response to immune checkpoint inhibitor (ICI) treatment, was downloaded from The Cancer Immunome Atlas (TCIA (Charoentong, et al., 2017), https://tcia.at). A higher IPS indicates a better immunotherapy response. Tumor Immune Dysfunction and Exclusion (TIDE (Jiang, et al., 2018), http://tide.dfci.harvard.edu/), which was developed to assess immune evasion mechanisms, is another robust biomarker used to predict immunotherapy response. A higher TIDE score means that tumor cells are more prone to escape from immunosurveillance, suggesting a lower response rate to immunotherapy. TIDE scores were obtained after uploading the input data as described in the instructions. The TMB for each patient in the TCGA cohort was calculated as the number of nonsynonymous mutations per megabase. PD-L1 expression on tumor-infiltrating immune cells (ICs) of patients in the IMvigor210 cohort was assessed by immunohistochemistry. IC0 and IC1 exhibit low PD-L1 expression, while IC2 indicates high PD-L1 expression in our study.
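The per-patient TMB computation described above can be sketched in R as below; `maf` is a hypothetical data frame of TCGA somatic mutations (with Tumor_Sample_Barcode and Variant_Classification columns), and the 38-Mb captured-exome size is a commonly used estimate, assumed here rather than taken from the text.

```r
# Variant classes usually counted as nonsynonymous in TMB calculations
nonsyn <- c("Missense_Mutation", "Nonsense_Mutation", "Nonstop_Mutation",
            "Frame_Shift_Del", "Frame_Shift_Ins", "In_Frame_Del",
            "In_Frame_Ins", "Splice_Site", "Translation_Start_Site")

# Count nonsynonymous mutations per patient, then divide by exome size (Mb)
counts <- tapply(maf$Variant_Classification %in% nonsyn,
                 maf$Tumor_Sample_Barcode, sum)
tmb <- counts / 38  # nonsynonymous mutations per megabase
```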
Clinical Specimens
We retrospectively collected 74 paraffin-embedded LUAD specimens and 74 adjacent normal tissues from the biobank of the National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (Beijing, China) and constructed a tissue microarray (TMA). All biospecimens were obtained from LUAD patients who underwent radical resection and had received no prior chemotherapy or radiotherapy. Informed consent was obtained from all patients. This study was approved by the Ethics and Research Committees of the National Cancer Center/Cancer Hospital, Chinese Academy of Medical Sciences, and Peking Union Medical College.
Statistical Analysis
Data analysis and graph generation were all performed in R version 3.5.1 (https://www.r-project.org), SPSS Statistics V25.0 and GraphPad Prism 8.0. For comparisons of two groups, the unpaired Student's t-test was applied to analyse the statistical significance of normally distributed variables, and the Wilcoxon rank-sum test was adopted to estimate nonnormally distributed variables. Categorical variables were compared using the χ2 test. The Kaplan-Meier survival curve for overall survival (OS) analysis was plotted with the R package "survminer". Receiver operating characteristic (ROC) curves for 1-, 3-, and 5-year survival were delineated to evaluate the predictive efficacy of the SRS score and were generated using the R package "timeROC" (Blanche, et al., 2013). Univariate and multivariate Cox regression analyses were utilized to evaluate the association between OS and clinicopathological characteristics as well as SRS scores. All statistical analyses were two-tailed, and p < 0.05 was considered statistically significant.
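A brief sketch of the survival-analysis calls in R follows; `df` is a hypothetical data frame holding os_time (in years), os_status (1 = event), and the risk_score/risk_group values defined above.

```r
library(survival)
library(survminer)
library(timeROC)

# Kaplan-Meier curves by risk group, annotated with the log-rank p value
fit <- survfit(Surv(os_time, os_status) ~ risk_group, data = df)
ggsurvplot(fit, data = df, pval = TRUE)

# Time-dependent ROC: AUCs for 1-, 3- and 5-year survival
troc <- timeROC(T = df$os_time, delta = df$os_status,
                marker = df$risk_score, cause = 1, times = c(1, 3, 5))
troc$AUC
```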
Identification of Differentially Expressed Senescence-Related Genes in LUAD
To comprehensively characterize the expression pattern of cellular senescence-related genes, the 278 genes downloaded from CellAge (Avelar, et al., 2020) were compared between tumor tissues and normal tissues in the TCGA-LUAD cohort, and we identified 69 differentially expressed genes (DEGs). Among them, 42 genes were upregulated, whereas 27 were downregulated (Figure 2A; Supplementary Table S3). GO and KEGG analyses were performed to clarify the biological processes of the DEGs. As expected, the DEGs were remarkably enriched in cell cycle- and cellular senescence-related pathways, consistent with their curation as cellular senescence-associated genes (Supplementary Figure S2; Supplementary Table S4).
Development of a Cellular Senescence-Related Signature in LUAD
To construct a cellular senescence-related signature (SRS) for survival prediction, the 10 genes mentioned above were analysed by LASSO-Cox regression analysis. A 5-gene signature was constructed according to the optimum λ value (Figures 2E,F). We then established a risk score formula based on the expression of the five genes for patients with LUAD: risk score = (0.0089 × expression value of FOXM1) + (0.1233 × expression value of HJURP) + (0.3092 × expression value of PKM) + (0.0851 × expression value of PTTG1) + (0.0003 × expression value of TACC3). The risk score of every patient was then calculated using this formula, and patients in the TCGA cohort were stratified into low- and high-risk groups according to the median value of the risk score.
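Applying the published formula is a single matrix product; in the sketch below, `expr` is a hypothetical samples × genes expression matrix on the same scale used to fit the model.

```r
# Published SRS coefficients for the five signature genes
srs_coef <- c(FOXM1 = 0.0089, HJURP = 0.1233, PKM = 0.3092,
              PTTG1 = 0.0851, TACC3 = 0.0003)

# Risk score per patient, then median split into risk groups
risk_score <- as.numeric(as.matrix(expr[, names(srs_coef)]) %*% srs_coef)
risk_group <- ifelse(risk_score > median(risk_score), "high", "low")
```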
The distribution of the SRS score, the survival status, and a heatmap exhibiting the expression profiles of the selected genes in the high-and low-risk groups are presented in Figures 3A-C.
Kaplan-Meier survival analysis demonstrated that patients in the high-risk group had a significantly shorter OS time compared with those in the low-risk group (Figure 3D, HR = 2.048, 95% CI = 1.529-2.743, log-rank p < 0.0001). The 5-year survival rate of the high-risk group was 30.2%, which was significantly lower than that of the low-risk group (49.7%). Time-dependent receiver operating characteristic (ROC) analysis was performed, and the areas under the curve (AUCs) for 2-, 3-, and 5-year OS were 0.675, 0.660, and 0.607, respectively (Figure 3E). In addition, our formula also worked well when applied to patients with different clinical stages. As shown in Figures 3F,G, a significant difference in OS time was observed in both early-stage (HR = 1.955, 95% CI = 1.357-2.816, log-rank p = 0.0002) and advanced-stage LUAD (HR = 1.725, 95% CI = 1.034-2.879, log-rank p = 0.0478).
To further verify whether the SRS-based risk score was an independent prognostic factor for LUAD, univariate and multivariate Cox regression analyses of clinicopathological factors in the TCGA cohort were performed. The T stage, N stage, TNM stage and risk score were correlated with OS in univariate analysis. After multivariable adjustment, the risk score remained a significantly independent prognostic factor (HR = 2.746, 95% CI = 1.738-4.339, p < 0.001) for patients with LUAD (Figure 4A). We also analysed the correlation between the SRS and patients' clinicopathological parameters, including age, sex, T stage, N stage and TNM stage, in the TCGA cohort. Significantly higher percentages of patients with lymphatic metastasis and late-stage LUAD were identified in the high-risk group (Supplementary Figure S3), indicating that a higher SRS score was related to the malignant progression of LUAD.
Validation of SRS in Three Independent Cohorts
To validate the predictive function of the SRS on OS benefit, three independent datasets from the GEO database were enrolled. As illustrated in Figures 4B-D, patients with high risk scores exhibited significantly worse OS in all three cohorts, including GSE30219 (HR = 2.163, 95% CI = 1.188-3.938, p = 0.0118), GSE31210 (HR = 6.699, 95% CI = 3.450-13.01, p < 0.0001) and GSE50081 (HR = 2.842, 95% CI = 1.628-4.962, p = 0.0002). The area under the ROC curve (AUC) values in the GSE30219 cohort were 0.687, 0.722, and 0.732 for 2, 3 and 5 years, respectively (Figure 4E). In the GSE31210 cohort, all AUC values were greater than 0.7 (Figure 4F). For the GSE50081 cohort, the AUCs of the SRS at 2, 3, and 5 years were 0.690, 0.681, and 0.717, respectively (Figure 4G). Moreover, we also observed that high expression of the five genes in four different cohorts was consistently indicative of poor prognosis for LUAD patients (Supplementary Figure S4). These results confirmed that the SRS could serve as a good predictive factor to classify patients with different OS.
Biological Processes Analysis of SRS
Multicohort evaluation confirmed a robust prognostic value of SRS, which prompted us to further explore the possible mechanism underlying the predictive role of the signature. As shown in Supplementary Figures S5A,B, all five genes were abnormally upregulated in LUAD and many other types of cancer, including colorectal cancer, liver cancer, and brain cancer. We then analysed the relationship between methylation and the expression of the five genes. Significantly lower methylation levels of PTTG1 and TACC3 promoters were found in tumor tissues compared with normal tissues, which may account for the abnormal expression of the signature genes in LUAD (Supplementary Figure S5C).
Regarding the downstream effects, we first extracted the DEGs between subgroups categorized by the risk signature by applying the criteria FDR < 0.05 and |log2FC| ≥ 1. In total, 1,164 genes were differentially expressed between the two groups (Supplementary Figure S6A; Supplementary Table S6). Based on these SRS-related DEGs, GO analysis and KEGG analysis were performed. As expected, the results indicated that the DEGs were involved in cellular senescence and cell cycle-related biological processes, such as nuclear division and chromosome segregation (Supplementary Figures S6B,C; Supplementary Table S7). In addition, GSEA revealed prominent enrichment in hallmark gene sets, such as MTORC1 signalling, glycolysis, and the unfolded protein response, in the high-risk group compared to the low-risk group in the TCGA cohort. Similar trends were also observed in the three validation cohorts (Supplementary Figure S6D). These results suggested a more malignant phenotype in patients with high-risk scores, which may lead to a poor prognosis in LUAD patients.
SRS Is Associated With Alterations in SASP and Immune Cell Infiltration
Cellular senescence occurs when cells are confronted by excessive extracellular or intracellular stress in vivo or in vitro (Ben-Porath and Weinberg 2004; Ben-Porath and Weinberg 2005). As displayed in Figure 5A, cellular senescence-associated pathways, including oncogene-induced, DNA damage/telomere stress-induced, and oxidative stress-induced senescence, were significantly enriched in patients with high SRS scores. Intriguingly, we noticed that the senescence-associated secretory phenotype (SASP) pathway was also prominently enriched. The SASP refers to the large set of proteins secreted by senescent cells, which may induce changes in the tumor microenvironment, thus promoting tumor recurrence and progression (Acosta, et al., 2008;Coppe, et al., 2008;Green 2008;Kuilman, et al., 2008). Our results revealed overexpression of different types of SASP factors in the high-risk group (Figure 5B), including interleukins (IL-1A, IL-1B, IL-6, and IL-15), chemokines (CCL3, CCL8, CCL11, CCL20, CCL26, CXCL1, CXCL5, CXCL8, and CXCL11), and growth factors and regulators. Notably, some upregulated SASP factors, including IL-6, CXCL8, and VEGF, possess immunosuppressive properties (Kato, et al., 2018;Lamano, et al., 2019;Liu, et al., 2013;Sharma, et al., 2017). For example, IL-6, secreted by CAFs, regulates immunosuppressive TIL populations in the TIME (Kato, et al., 2018). Thus, we hypothesized that patients with high SRS scores, who had higher SASP levels, might exhibit an immunosuppressive phenotype via the SASP. To characterize the SRS-related immune landscape, RNA-seq-derived infiltrating immune cell populations were estimated by the TIMER, EPIC, xCell, CIBERSORT-ABS, and quanTIseq algorithms in TIMER2.0 and TISIDB. We found that patient risk groups stratified by SRS showed distinct immune infiltrate patterns. Correlation analysis showed that the infiltration levels of B cells, CD4+ T cells, and CD8+ T cells were negatively correlated with the SRS score, whereas a higher SRS score indicated an increased abundance of neutrophils, cancer-associated fibroblasts, Tregs, and resting NK cells (Supplementary Figure S7A). GSEA revealed a significant enrichment of signatures associated with upregulation of TGF-β signalling, whereas no significant difference in IFN-γ signalling was observed in the high SRS score versus the low SRS score group (Supplementary Figure S7B). Taken together, our results implied that a high level of cellular senescence may remodel a suppressive TIME via the SASP.
Impact of SRS and Immune Checkpoints on Clinical Outcome
Previous studies have emphasized the importance of immune checkpoint genes in modulating immune infiltration (Juneja, et al., 2017;Kumagai, et al., 2020), and our results also revealed significant relevance between cellular senescence and tumor immunity. Thus, to further investigate the complex crosstalk that occurs among immune infiltration, immune checkpoint genes and the SRS, we first compared the expression pattern of immune checkpoint genes between patient groups divided based on the SRS. As shown in Figures 6A-D, patients with high SRS scores tended to express higher levels of immune checkpoint genes (PD-L1, PD1 and CTLA4) than the low SRS group, which was further confirmed in the three validation cohorts. Other immune checkpoints, such as LAG3 and TIM3, which are also considered exhausted T cell markers, exhibited a trend of overexpression in the high SRS score group across the multiple cohorts, suggesting that the SRS has the potential to identify immune dysfunction in LUAD patients (Supplementary Figure S8A).
Next, we considered the SRS in combination with immune checkpoint expression to assess whether the SRS influences OS in patients with similar immune checkpoint expression. Survival analysis of the four groups stratified by SRS and immune checkpoint gene expression was conducted. As depicted in Figures 6E-G, patients with low PD-L1 and low risk had prolonged OS compared to those with low PD-L1 and high risk (p = 0.0064). Among patients with high PD-L1 expression, a lower risk score signified remarkably better survival (p = 0.0005). Similar survival patterns were also observed among the four patient groups stratified by SRS and PD1 or CTLA4 expression in the TCGA cohort. In addition, among various immune checkpoint genes, multivariate Cox regression modelling showed that the SRS score remained an independent predictor of overall survival (HR = 2.083, 95% CI = 1.533-2.830, p < 0.001). We then repeated the same analysis in the three validation cohorts. Consistent with the TCGA dataset, patients with low SRS scores had significantly better survival than those with high SRS scores despite similar expression levels of immune checkpoint genes in cohorts GSE50081 (Supplementary Figure S8B) and GSE31210 (Supplementary Figure S8C). However, no significant result was observed in cohort GSE30219 (Supplementary Figure S8D).
In addition to immune checkpoints, TMB is also considered an independent prognostic predictor in various cancer types. We first calculated the TMB of each group and found that patients with higher SRS scores had noticeably increased TMB relative to the low-risk group (Supplementary Figure S9A). Subsequently, the survival distribution of patient groups classified by SRS and TMB level was also compared. As shown in Supplementary Figure S9B, patients with high SRS scores suffered unfavourable OS irrespective of patients' TMB level.
These results imply that SRS combined with immune checkpoint expression or TMB might serve as a promising predictor of patients' clinical outcomes.
Predictive Potential of SRS in Immunotherapy Response
Growing evidence has shown that immune checkpoint inhibitors (ICIs) improve the survival of NSCLC patients, but responses vary; thus, accurate predictive biomarkers are urgently needed (Mok, et al., 2019;Reck, et al., 2016). Given the association between the SRS and immune infiltration, we further explored the predictive potential of the SRS for ICI response by analysing the correlation of the SRS with recognized immunotherapy predictors, including TIDE (Jiang, et al., 2018) and IPS (Charoentong, et al., 2017). We discovered that patients in the high-risk group tended to achieve higher TIDE scores in the TCGA cohort, and this result was further confirmed in the three validation cohorts (Figure 7A). In addition, the IPS was significantly increased in the low SRS score group (p < 0.001), and patients' predicted response to anti-CTLA4 treatment was relatively higher in the low-risk group (p < 0.001, Figure 7B). These results indicate that patients with low SRS scores may benefit from ICIs.
Considering the immunotherapy response predictive potential of the SRS, we next performed Kaplan-Meier survival analysis to investigate the predictive role of the SRS for immunotherapeutic overall survival using the immunotherapy cohort IMvigor210. As expected, a beneficial trend of low SRS scores in immunotherapeutic OS was observed in the IMvigor210 cohort (HR = 1.368, 95% CI = 1.036-1.808, p = 0.0197, Figure 7C), and the low-risk group also exhibited significantly better OS than the high-risk group among the PD-L1-high (p = 0.0445, Figure 7D) and TMB-low populations (p = 0.0366, Supplementary Figure S9C). Collectively, the SRS in combination with TMB or PD-L1 is a promising candidate for predicting the clinical benefit of immunotherapy.
Validation of Signature Gene Expressions in LUAD Tissues
To further explore the protein expression of the genes that constitute the SRS, three genes that exhibited significantly higher expression levels in lung cancer in GEPIA2 (Supplementary Figure S5A) were quantified by IHC and compared between tumor tissues (n = 74) and adjacent normal tissues (n = 74). As expected, IHC staining revealed that the protein expression of FOXM1, HJURP, and PTTG1 was significantly elevated in tumors compared with adjacent normal tissues (Figures 8A-F), indicating that the SRS genes may play an important role in lung cancer progression.
DISCUSSION
Senescence is a complex biological process with both cell-autonomous and paracrine effects that has a significant impact on the microenvironment (Lasry and Ben-Neriah 2015;Hernandez-Segura, et al., 2018). Increasing evidence indicates that senescent cells can be eliminated through a SASP-provoked immune response, which involves both innate and adaptive immunity (Schneider, et al., 2021). Conceivably, the SASP has several positive functions in the short term. In the long term, however, these functions can become detrimental in the immunosuppressive context of cancer and promote tumor development (Lopes-Paciencia, et al., 2019;Coppe, et al., 2010;Basisty, et al., 2020;Birch and Gil 2020). Yet how senescent cells interact with tumor immune infiltration, and their value in evaluating the immune infiltrate of tumors and clinical outcomes, have not been reported, particularly in lung cancer. Thus, modelling lung cancer will be important to decipher whether senescence molecular determinants reshape tumor microenvironments and whether this modification has implications for the prognosis and immunotherapy response of LUAD patients. Importantly, uncovering how cellular senescence influences the TIME can provide a window for discoveries of how we can effectively improve the immunosuppressive milieu by senolytic therapies (van Deursen 2019).
FIGURE 7 | Predictive potential of SRS in immunotherapeutic benefits. (A) The distribution of TIDE scores between patients with a higher SRS score and those with a lower SRS score in four different cohorts as indicated. (B) The distribution of IPS in the high-risk and low-risk groups in the TCGA dataset. (C) Kaplan-Meier curves for high and low SRS score patient groups in the IMvigor210 cohort. (D) Kaplan-Meier curves for four patient groups stratified by SRS and PD-L1 expression. *, **, and *** represent p < 0.05, p < 0.01, and p < 0.001, respectively.
In this study, we analysed the expression patterns, prognostic values, and effects on the TIME of cellular senescence-related genes in LUAD. Using the LASSO method, we constructed a novel survival prediction model (SRS) based on the expression of five senescence features in the TCGA dataset. Furthermore, the SRS was well validated in three different public GEO datasets. We also explored the features of the immune microenvironment, including immune cell distribution and inflammatory activities, in patients with high and low SRS scores. Markedly, we identified distinct SASP factors affecting TIME remodelling as potential mechanisms underlying immune escape and tumor progression. Additionally, we found that the SRS score was an independent prognostic factor for LUAD patients and could be coupled with specific immune checkpoint factors or TMB as a predictive biomarker of ICI response. This study represents one of the first reports to examine the differential expression of cellular senescence-relevant genes and then identify their prognostic values using the TCGA and GEO databases. Markedly, we identified five significantly upregulated genes, FOXM1, HJURP, PKM, PTTG1, and TACC3, which constituted the cellular senescence-related signature reported in this study. Interestingly, these five signature genes are reported as negative regulators of cellular senescence in many human cancers and play important roles in tumor development (Ao, et al., 2017;Caporali, et al., 2012;Chen, et al., 2010;Francica, et al., 2016;Kato, et al., 2007;Schmidt, et al., 2010;Tao, et al., 2014). Forkhead box protein M1 (FOXM1) is significantly associated with immunotherapy resistance in lung cancer patients (Galbo, et al., 2021;Wang, et al., 2014). Holliday junction recognition protein (HJURP), a histone H3 chaperone, affects cell cycle progression, DNA repair and chromosome segregation during mitosis. HJURP is overexpressed in cancers and associated with poor prognosis in NSCLC (Wei, et al., 2019). Pyruvate kinase M (PKM) is a glycolytic enzyme required for tumor proliferation and progression. PKM2 acts as the key factor mediating Th17 cell differentiation (Damasceno, et al., 2020), and silencing PKM2 mRNA can decrease PD-L1 expression and cancer evasion of immune surveillance (Guo, et al., 2019). As an oncogene acting during spindle formation and chromosome segregation (Bernal, et al., 2002;Li, et al., 2013), pituitary tumor-transforming gene-1 (PTTG1) is an independent poor prognostic factor in NSCLC patients (Wang, et al., 2016) and can elicit an immunogenic response in NSCLC patients (Chiriva-Internati, et al., 2014). Transforming acidic coiled-coil protein 3 (TACC3) is involved in chromosomal alignment, separation, and cytokinesis and is correlated with p53-mediated apoptosis (Schneider, et al., 2008;Zhang, et al., 2018). Additionally, TACC3 serves as a prognostic biomarker for prostate cancer (Qie, et al., 2020), osteosarcoma (Matsuda, et al., 2018) and NSCLC (Jiang, et al., 2016), and high TACC3 expression is associated with increased immune cell infiltration and T cell exhaustion (Fan, et al., 2021). These published experimental efforts provide further evidence supporting that the SRS has the potential to mirror LUAD prognosis based on immune landscape alterations.
As the role of cellular senescence is largely underexplored in cancer, it is important to gain more extensive insight into the linkage of cancer, senescence and the immune environment. However, to date, the effect of senescence on the tumor immune infiltrate, and whether this would impact the response to ICIs, has only been poorly studied. By performing a detailed characterization of the tumor immune infiltrate in patients with LUAD, we observed that cellular senescence-related genes could have substantial effects on the composition and distribution of the tumor immune infiltrate. In this study, we found that the SRS score was inversely associated with the infiltration levels of B cells, CD4+ T cells and CD8+ T cells, whereas neutrophils, CAFs, Tregs and resting NK cells were positively correlated with the SRS score in LUAD. This result suggested that patients with a higher SRS score might have an immunosuppressive tumor microenvironment, which prevented immune clearance of tumor cells. GSEA also showed that upregulation of the TGF-β-associated pathway, which has been widely reported as an important factor restraining antitumor immunity (Mariathasan, et al., 2018;Sheng, et al., 2021), was prominently enriched in the high-risk group. Second, to further explore the mechanisms of immune remodelling by the increasing burden of senescent cells in tumors, we uncovered that alterations of the SASP could impact TIME establishment, which ultimately contributes to immune escape and provokes tumor development. The high SRS score group exhibited increases in proinflammatory cytokines, including IL-1b, IL-6, and IL-8; growth factors, such as EGF, VEGF and IGFBP; receptors, such as ICAMs; and proteases, such as MMPs. These factors may modulate immune cell recruitment and have a tumor-promoting effect (Coppe, et al., 2010;Cuollo, et al., 2020;Basisty, et al., 2020;Lau and David 2019). Third, as mentioned above, cellular senescence-related immune remodelling may explain the diminished efficacy of checkpoint blockade. Intriguingly, we noticed that the exhausted T cell markers PD-(L)1, CTLA-4, LAG-3, and TIM3 were aberrantly increased in LUAD specimens with high SRS scores, indicating that T cells become progressively hypofunctional and hyporesponsive with senescence upregulation. This finding may explain the lower response to immunotherapy in older individuals. Therefore, our findings have clear clinical significance. On the one hand, significantly prolonged survival was observed for patients with low SRS scores, suggesting that high-risk patients should receive more frequent clinical surveillance and corresponding measures to prevent disease recurrence and progression. On the other hand, given that only a proportion of patients can derive durable benefits from ICIs, we need more accurate biomarkers with clinical utility. The developed cellular senescence-related signature can be applied not only as a prognostic tool but also as guidance for individualized immunotherapy. In addition, small molecules targeting FOXM1 (Gormally, et al., 2014), PKM2 (Ning, et al., 2017) and TACC3 (Akbulut, et al., 2020;Polson, et al., 2018) have been developed and have demonstrated promising anticancer capacity in in vitro and in vivo experiments. These findings highlight the potential of these compounds for future clinical application.
Furthermore, we propose that controlling cellular senescence-associated inflammation by targeting specific inflammatory mediators may have a beneficial therapeutic effect in the treatment of cancer. A new group of drugs, named senolytic drugs, including quercetin, navitoclax, and fisetin, has received increased attention, and preclinical and clinical data on their potential role in combination with immunotherapy are emerging. Thus, this group of drugs may have vast implications (van Deursen 2019; Campisi, et al., 2019;Kolb, et al., 2021;Prasanna, et al., 2021).
Although our study informs the prediction of prognosis and immunotherapy benefit in LUAD, it has several limitations. First, the five-gene signature was developed and validated in public datasets; thus, external validation in multicentre cohorts is needed. Second, it is necessary to perform prospective clinical trials to verify the applicability of our research results in LUAD patients receiving immunotherapy. Third, the regulatory mechanisms by which cellular senescence-related genes reshape the TIME warrant further in vivo and in vitro investigations. Moreover, further studies are also needed to illustrate how the aged TIME contributes to lung cancer development. Finally, the preliminary interpretation of the mechanisms underlying the association between cellular senescence-related genes and worse response to ICIs must be further elucidated using basic experiments.
In conclusion, our study identified and validated a cellular senescence-related signature based on five cellular senescence-related genes as an indicator of immune cell infiltration in the TIME with independent prognostic significance for patients with LUAD. Importantly, the SRS was significantly associated with the immune cell infiltration levels of LUAD patients and was involved in the regulation of the LUAD immune microenvironment through the SASP. Finally, we characterized the complex interplay between the SRS score and immune checkpoint genes in patient outcomes and suggested the potential of the SRS score coupled with specific immune checkpoint factors as a predictive biomarker of ICI response to enable a more precise selection of patients who will benefit from checkpoint inhibitor immunotherapy. Therefore, identifying cellular senescence-related genes affecting tumor immune responses and further studying their regulatory mechanisms might assist risk stratification and provide promising targets for improving the response of LUAD to immunotherapy.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by National Cancer Center/Cancer Hospital, Chinese Academy of Medical Sciences, and Peking Union Medical College. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
YG and JH designed the study. ZW, FS, YY, ZC, and XF collected the data. WL and XW performed the data analysis and interpreted the data. XW and WL drafted the manuscript. YG revised the manuscript. All authors read and approved the final manuscript.
Research and development of a novel subunit vaccine for the currently circulating pseudorabies virus variant in China
Pseudorabies (PR) is a devastating viral disease which leads to fatal encephalitis and respiratory disorders in pigs. Commercial gE-deleted live pseudorabies virus (PRV) vaccine has been widely used to control this disease in China. However, the newly emerging variants of PRV compromise the protection provided by current vaccines and have led to outbreaks of PR in vaccinated pig herds. Several killed and live vaccine candidates based on current PRV variants have been reported to be effective in controlling the disease. A subunit vaccine based on gB protein, one major PRV glycoprotein which elicits strong humoral and cellular immune responses, however, has never been evaluated for protection against the currently circulating PRV variants. In this study, full-length PRV gB protein was successfully expressed in baculovirus/insect cells in soluble form and was tested on 3-week-old piglets as a subunit vaccine. Compared with unvaccinated pigs, the gB-vaccinated pigs developed specific antibody-mediated responses and were protected from virulent PRV HN1201 challenge. All vaccinated pigs survived without showing any PRV-specific respiratory or neurological signs, but all unvaccinated pigs died within 7 days after HN1201 challenge. Hence, this novel gB-based vaccine could be applied as an effective subunit vaccine to control the PRV variant in China.
Introduction
Pseudorabies virus (PRV), also known as Aujeszky's disease virus, causes an economically important swine viral disease worldwide [1]. In China, the disease has been well controlled in recent decades by the application of gene-deleted modified vaccines accompanied by the use of ELISA serological screening [2]. However, since 2011, PRV variants have been reported to be circulating in vaccinated pig herds, causing PR-specific clinical symptoms with a high mortality rate [3][4][5]. A range of killed and live vaccines based on current PRV variant strains were developed and tested on pigs, and the results showed these vaccines can provide useful protection against PRV infection [6][7][8]. By contrast, the development and efficacy of subunit vaccines based on current PRV variants has not been reported. Compared to the other types of vaccines, a gB-based subunit vaccine is safer and less expensive but provides good protection against PRV infection.
PRV has a double-stranded linear DNA genome 150 kb in length that contains a unique long region, a unique short region, a terminal repeat sequence, and internal repeat sequences [9]. At least 11 different glycoproteins of PRV have been identified, and some of them have been implicated as important antigens in protective immunity against infection [10]. Among these, gB is the major glycoprotein that induces both humoral and cellular immunity, and its role in the induction of a protective immune response against virus infection has been reported in vaccination experiments with mouse and pig models [11]. Many gB-based subunit or DNA vaccines have been tested in mice and pigs and found to provide useful substitutes for the current commercial killed/live PRV vaccines [12][13][14]. However, the gB protein of the currently circulating variants differs considerably as a consequence of multiple mutations, insertions and deletions compared to common PRV vaccine strains such as Bartha-K61 and other PRV isolates [3]. The alterations of some important amino acids may compromise the efficacy of a gB-based subunit vaccine against PRV infection. Therefore, in this study, a novel gB subunit vaccine based on PRV variant HN1201 was developed and its efficacy tested on 3-week-old piglets.
Expression of gB protein in the baculovirus/insect cells expression system
The full-length PRV HN1201 gB gene was amplified by PCR and inserted into the pFastBac I plasmid (Invitrogen, Carlsbad, CA, USA). After transformation into DH10Bac cells, white recombinant bacmid colonies were selected and their DNA extracted. The recombinant bacmid DNA was used to transfect Sf9 insect cells for subsequent gB protein expression and purification. The recombinant gB protein was purified from the culture supernatant by immunoaffinity chromatography using Ni-NTA His·Bind Resin (Novagen, Madison, WI, USA) according to the manufacturer's instructions. The recombinant gB protein was subjected to SDS-PAGE and western blotting. The specificity of the recombinant protein was verified by anti-His and gB-specific monoclonal antibodies.
Preparation of gB subunit vaccine and animal experiments
Two hundred micrograms of purified gB protein was homogenized with Montanide™ ISA 206 adjuvant (SEPPIC, Puteaux, France) at a ratio of 46∶54 (v/v) to make a final one-dose vaccine (2 mL).
Ten 3-week-old pigs free of PRV, porcine reproductive and respiratory syndrome virus, classical swine fever virus, and porcine circovirus 2 were randomly divided into a vaccinated group and an unvaccinated group. Pigs in the vaccinated group were inoculated intramuscularly with one dose of 2 mL (200 μg) gB subunit vaccine, and pigs in the unvaccinated group received 2 mL of Dulbecco minimum essential medium (DMEM). After vaccination, rectal temperature and clinical signs were recorded daily. Serum samples were collected to monitor PRV gB and gE antibodies on designated days. Four weeks after vaccination, pigs were challenged intranasally with 2 mL of HN1201 (10^7.0 TCID50/mL) to assess protection. After challenge, pigs were observed and scored daily for 7 days for clinical signs of disease as previously described [15,16]. At 14 days post challenge (dpc), all surviving pigs were euthanized, and different tissue samples were collected for histopathological and immunohistochemical examination. The animal trial in this study was approved by the Animal Care and Ethics Committee of the China National Research Center for Veterinary Medicine, and conventional animal welfare regulations and standards were followed.
Enzyme-linked immunosorbent assay
PRV-specific gB and gE (BioChek, Holland) antibodies were detected by using commercially available enzyme-linked immunosorbent assay kits according to the manufacturer's instructions.
Histopathology and immunohistochemistry staining
The tonsil, lung, brain, cerebellum and trigeminal samples were collected for hematoxylin and eosin staining with a Leica fully automatic dyeing machine according to standard procedures, and immunohistochemistry staining using previously described methods [17]. The slides were photographed at 400× magnification.
Statistical analysis
The differences in body temperature, bodyweight gain and clinical score of pigs between two groups were determined by t-test (GraphPad Prism 5.0 Software, San Diego, CA, USA). Differences at P < 0.05 were considered statistically significant.
Expression and purification of gB protein
The gB gene was amplified and inserted into the pFastBac I vector. Sequencing confirmed that the gB gene was in the correct orientation and reading frame in the expression plasmid (data not shown). The plasmid was transformed into DH10Bac cells, and the positive recombinant bacmid was selected by white/blue colony screening. The recombinant bacmid was then transfected into Sf9 insect cells. After 3 days, the supernatant containing the recombinant gB protein was collected when an apparent cytopathic effect (CPE) was observed. SDS-PAGE analysis showed that the recombinant gB protein was successfully expressed in the culture supernatant (Fig. 1a). The recombinant protein was next purified by immunoaffinity chromatography and was recognized by His-tag and gB-specific monoclonal antibodies, respectively (Fig. 1b).
Clinical symptoms after vaccination and PRV HN1201 challenge
The purified gB protein was combined with the mineral oil-based Montanide™ ISA 206 adjuvant for pig vaccination. All pigs in both the vaccinated and control groups remained clinically healthy and showed no adverse reactions after vaccination (data not shown). After 28 days, the pigs in both groups were challenged via intranasal inoculation with 10^7.0 TCID50 of virulent PRV variant HN1201. Pigs in both groups showed fever (≥40 °C) after challenge, as shown in Fig. 2a. However, the body temperature of vaccinated pigs returned to normal at 5 dpc. By contrast, the body temperature of unvaccinated pigs kept increasing to 41 °C until death at 5 to 7 dpc.
Besides fever, the vaccinated pigs did not develop any other clinical symptoms. In contrast, the unvaccinated pigs showed typical PR clinical symptoms such as respiratory distress, excessive salivation and neurological signs including convulsions, muscle tremors, posterior paralysis, and ataxia. The average clinical scores in the unvaccinated group were significantly higher than those in the vaccinated group (Fig. 2b).
The unvaccinated pigs lost body weight for the first 5 days after PRV HN1201 challenge (Fig. 3a). In contrast, the vaccinated pigs gained weight, and the body weight of the vaccinated group was significantly higher than that of the unvaccinated group (Fig. 3a). One pig in the unvaccinated group was found dead at 5 dpc, three pigs were found dead at 6 dpc, and one pig was found dead at 7 dpc (Fig. 3b). All pigs in the vaccinated group were euthanized at 14 dpc.
Antibody response after vaccination and PRV HN1201 challenge
Three out of five pigs in the vaccinated group developed gB-specific antibodies at 7 days post-immunization (dpi), and all pigs had seroconverted by 14 dpi (Fig. 4a). Antibody titers kept increasing steadily, reaching an average OD405 of 3.0 before challenge. The unvaccinated pigs did not develop any gB antibodies after inoculation with DMEM, as expected. After PRV HN1201 challenge, the gB-antibody titer in the vaccinated group dropped to an average OD405 of 2.0 at 5 dpc and returned to an average OD405 of 2.2 at the end of the study (Fig. 4a). By contrast, the gB-antibody titer in the DMEM group kept increasing and reached an average OD405 of 0.5 at 5 dpc.
As for the gE antibody, as expected, neither vaccinated nor unvaccinated pigs developed gE antibodies before PRV HN1201 challenge (28 dpi), and only unvaccinated pigs were gE antibody positive at 5 dpc (Fig. 4b). All vaccinated pigs were gE antibody positive at the end of the study (14 dpc).
Pathological examination
All pigs in the unvaccinated group were found dead and were subjected to necropsy before the end of study. The unvaccinated pigs showed hemorrhage and necrosis in the tonsil, severe pulmonary consolidation and necrosis in the lung, encephalic hemorrhage in the brain and cerebellum. All pigs in the vaccinated group were euthanized at 14 dpc.
There were no visible gross pathological changes for the pigs in the vaccinated group.
Histopathological examination results showed that none of the vaccinated piglets displayed any histopathological changes (Fig. 5a-5e). The unvaccinated piglets had hemorrhages and necrosis in tonsil and lung samples (Fig. 5f-5g). The infected pigs also showed nonsuppurative meningoencephalitis (Fig. 5h), Purkinje cell degeneration and necrosis in the cerebellum (Fig. 5i), and infiltration of lymphocytes around the trigeminal ganglia (Fig. 5j). Consistent with the pathological results, immunohistochemistry showed that no positive staining was detected in the above tissues of vaccinated pigs (Fig. 6a-6e). Positive staining was observed in the tonsil, lung, brain, cerebellum and trigeminal ganglia samples of unvaccinated pigs (Fig. 6f-6j).
Discussion
The newly emerging PRV variants have recently been reported to be widespread in China and have caused significant economic losses in vaccinated pig herds [3,5]. These PRV variants are antigenically different from previously common PRV, and current commercially available PRV vaccines cannot provide complete protection against the circulating variants [2,3]. To address this problem, several killed and modified live vaccines based on current circulating PRV variants were developed and showed useful protection against infection [6][7][8]. Although the origin of the current circulating PRV variants remains unknown, the massive use of modified live vaccines could have provided sufficient selection pressure for these variants to emerge. Therefore, the development of a more effective and safe PRV vaccine, such as a novel subunit vaccine, is urgently needed to control the disease.
PRV vaccines based on the glycoproteins gB, gC and gD of vaccine strains have been expressed in different systems including baculovirus, adenovirus and Sindbis virus [11,13,18]. These vaccines showed useful protection against viral challenge by previously common PRV strains in mouse and/or pig models because they elicit good humoral and cellular immunity. However, with the emergence of new PRV variants, the gB gene has changed, with multiple sites of insertions, deletions and mutations compared with previously common PRV, which may alter important epitopes that induce protective immunity. Therefore, in this study, the recombinant PRV variant HN1201 gB protein was produced in a baculovirus/insect cell expression system and tested on 3-week-old piglets as a subunit vaccine.
The gB subunit vaccine appears to be safe, as no pigs developed clinical symptoms, including fever, after vaccination. At 28 dpi, the pigs were challenged with virulent PRV HN1201 to test the efficacy of the vaccine. The unvaccinated pigs started to show PR-specific clinical symptoms and died between 5 and 7 dpc, demonstrating the high virulence of the PRV HN1201 variant as previously reported [5]. In contrast, no vaccinated pigs showed any clinical symptoms except a transient fever. As found with the killed PRV vaccine [17], this fever in the vaccinated group lasted for 4 days before body temperature returned to normal. In contrast, the unvaccinated pigs had lasting high fever until they died.
The glycoprotein gB of PRV is immunogenic and related to viral defense, and the level of gB antibodies may work as an indicator that correlates with the degree of protection [19]. The vaccinated pigs developed high levels of gB antibodies after vaccination (Fig. 4a). However, unlike the killed PRV vaccine previously reported [17], the gB antibody titers of vaccinated pigs dropped to an average OD405 of 2.0 after PRV HN1201 challenge. Although this subunit vaccine may not induce gB-specific antibodies lasting as long as the killed vaccine did, the high titer of gB antibodies was still sufficient to provide useful protection against PRV HN1201 challenge. A two-dose vaccination regime may work better than the current single-dose protocol and may need to be explored in the near future.
The gE antibodies work as an indicator of field PRV infection, since the gB subunit vaccine does not elicit the generation of gE antibodies. As shown in Fig. 4b, all vaccinated pigs were gE antibody negative. In contrast, three out of five unvaccinated pigs were gE antibody positive, which indicated infection with PRV HN1201. These results indicate that vaccination with the gB subunit vaccine may provide protection at the early stage of virus infection. Consistent with other experimental PRV vaccines, including killed and modified live vaccines, all vaccinated pigs were gE antibody positive at 14 dpc. To explore the mechanisms behind these phenomena, cellular immunity, besides humoral antibody responses, could also be important in the elimination of PRV from the tissues and needs to be studied further.
The immunohistochemistry results showed that PRV antigen could not be detected in the tonsil, lung, brain and cerebellum samples of the vaccinated pigs. However, PRV antigen was present in the trigeminal ganglia of three out of five vaccinated pigs (Table 1). In pigs, neurons in the trigeminal ganglia are the primary site of PRV latency [20]. No previous PRV vaccine studies included antigen detection in the trigeminal ganglia because it is technically demanding. This is the first report of PRV antigen distribution in the neurons of the trigeminal ganglia after vaccination; it may indicate that no current PRV vaccine can prevent the transmission of virus to the neurons in the trigeminal ganglia and subsequent latency, which partially explains the generation of gE-specific antibodies.
Conclusions
To conclude, PRV gB was successfully produced in a baculovirus/insect cell expression system and tested for efficacy as a novel subunit vaccine combined with a commercial adjuvant on 3-week-old pigs. The results showed that vaccination can provide complete protection against virulent PRV HN1201 challenge, as evidenced by high levels of gB antibodies and the absence of viral antigen in multiple tissues. Therefore, this subunit vaccine is likely to provide a useful substitute for other types of PRV vaccines.
Note: The positive staining signals were interpreted as negative (-), low (+), moderate (++), or intense (+++) according to the intensity of staining. Each row represents one pig from the corresponding group.
Diversity of flesh flies (Sarcophagidae, Sarcophaginae) of pond habitats in rural areas in the Croatian part of Baranja
Abstract The diversity of grey flesh flies (Sarcophagidae: Sarcophaginae) from the Croatian part of Baranja was studied during 2019 to 2021, resulting in 37 species, of which the following are new for the area: Ravinia pernix (Harris, 1780); Sarcophaga (Het.) depressifrons Zetterstedt, 1845; S. (Het.) filia Rondani, 1860; S. (Het.) haemorrhoides Böttcher, 1913; S. (Het.) pumila Meigen, 1826; S. (Het.) vagans Meigen, 1826; S. (Lis.) dux Thomson, 1869; S. (Lis.) tuberosa Pandellé, 1896; S. (Meh.) sexpunctata (Fabricius, 1805); S. (Pan.) protuberans Pandellé, 1896; S. (Sar.) carnaria (Linnaeus, 1758); S. (Sar.) variegata (Scopoli, 1763), and S. (Pse.) spinosa Villeneuve, 1912. New locality records are provided for 25 species. Sarcophaga (Sar.) croatica Baranov, 1941 was the most abundant with 37%, followed by S. (Sar.) lehmanni Müller, 1922 (21%), and S. (Pas.) albiceps Meigen, 1826 (5%), together making up 63% of all collected specimens. Most species (35) were collected in the Zmajevac locality, while the fewest (3) were collected in the Bilje locality. During this study, S. (Pse.) spinosa was recorded in Croatia for the first time. Combined with previous records, 42 species of flesh flies have been recorded from Croatian Baranja, which comprise 27% of the flesh flies known to occur in Croatia. The total number of species of the family Sarcophagidae currently known in Croatia has increased to 156.
Study area
Baranja is a Pannonian Plain region of Hungary (its northern portion) and Croatia (its southern portion). It is situated in the eastern part of Croatia and forms part of Osijek-Baranja County. Triangular in shape, it covers an area of 1147 km² between the Drava, the Danube, and the state border with Hungary (Bognar et al. 1975). The Croatian part of Baranja is a predominantly lowland area (elevation ≤ 259 m). Bansko brdo (Bansko Hill) is the most prominent part of Baranja in terms of relief and extends NE-SW for 21 km, whereas its width is much smaller (Bognar et al. 1975). The steppe, the natural vegetation that once covered Bansko Hill, has completely disappeared. The belt along the Danube and Drava is a flooded area (~ 63% of the territory) with many secondary tributaries and wetlands (Kopački rit) (Bognar et al. 1975). The Kopački rit Nature Park is one of the largest fluvial-marshy plains in Europe (Schneider-Jacoby 1994), and its basic ecological features relate to the river dynamics (Schneider-Jacoby 1994;Mihaljević et al. 1999). Forests cover ~ 20% of the Croatian part of Baranja. The climate is moderately continental with significant temperature fluctuations. The average January temperature is ~ -1.3 °C, and the July temperature is ~ 22 °C; the average annual rainfall is ~ 650 mm (Bognar et al. 1975). The Croatian part of Baranja contains 54 settlements, some of which contain pond habitats, which increasingly serve as places for recreation of people and their pets. All seven sampling sites are situated at the periphery of the settlements (Fig. 1). Geographical coordinates of these seven sampling sites are given in Table 1. Pond habitats in the settlements of Kotlina, Suza, and Zmajevac are surrounded by species of willow (Salix) and poplar (Populus). Sampling sites in the settlements of Petlovac and Popovac are overgrown mainly with reeds (Phragmites australis), sedges (Carex ssp.), and rush (Typha ssp.) without forest vegetation at their edges. A similar type of vegetation is present at the pond in Darda, with the addition of water lilies (Nymphaea alba) and nenuphar (Nuphar luteum), while pond habitats in the Bilje settlement are overgrown with different species of low grasses exposed to the open sun throughout the day.
Figure 1 Sampling sites: 1. Bilje; 2. Darda; 3. Kotlina; 4. Petlovac; 5. Popovac; 6. Suza; 7. Zmajevac.
Sampling and identification
Collections of sarcophagids from pond habitats were made frequently over a period of seven months (April-October) from 2019 to 2021. During 2019 and 2020, from April to October, sampling at the pond habitat in Zmajevac was done 1-5 times a month. Samplings in 2021 were carried out once per month from May to August. Flesh flies were sampled from 1 pm to 5 pm using a standard insect sweep net to selectively collect flies resting on soil and vegetation, or attracted to animal faeces and the remains of discarded food. The collected specimens were preserved in 96% ethanol. Male terminalia were prepared for species identification following the method of Richet et al. (2011). After two days in ethanol, male abdomens were dissected and soaked in a 10% KOH solution for 72 h. They were then immersed in 10% acetic acid for 1 min and rinsed with water for 1 min. They were then dehydrated in beech-wood creosote for 4 h. The phallus, pregonites and postgonites, sternite 5, cerci, and surstyli were separated from the rest of the abdomen and placed into a plastic vial (volume of 2 ml) with 96% ethanol solution. Identifications were carried out using keys for Sarcophagidae (Pape 1987;Povolný and Verves 1997;Richet et al. 2011) and descriptions and illustrations in Whitmore (2009, 2010, 2011) and Whitmore et al. (2013). Nomenclature and classification follow the Fauna Europaea database (Pape 2004). For all samples, the following information is provided: locality and date of collection, collector(s), number and sex of specimens, and depository. Also, 33 unidentified specimens from May 2018 have now been identified and their data included in this study. One species newly recorded for the Croatian fauna is marked with a black triangle (▲). Specimens examined for this study are deposited in the collections of the Department of Biology, Josip Juraj Strossmayer University of Osijek, Osijek, Croatia (DBUO) and the Staatliches Museum für Naturkunde, Stuttgart, Germany (SMNS).
Terminology
Subgeneric names are abbreviated as follows:
Results
A total of 1293 flesh flies belonging to 37 species was collected (Table 2). Sarcophaga (Sar.) croatica Baranov, 1941 was the most abundant with 37%, followed by S. (Sar.) lehmanni Müller, 1922 (21%) and S. (Pas.) albiceps Meigen, 1826 (5%) (Table 2). Most species (35) and specimens (75%) were collected in Zmajevac (Table 2). The lowest number of species was collected in Bilje (3), whereas in the other localities the number of collected species was between 8 and 11. The largest number of specimens and species was collected during 2019 (Table 3). New records for Baranja are provided for 25 species, with the record of S. (Pse.) spinosa representing a first record for Croatia.
Sarcophaga (Heteronychia) pumila Meigen, 1826
New records for Croatian Baranja. Popovac, 11.VI.2021 Baranov, 1941 (1♂) (DBUO); Kotlina, 18.VI.2021 (Povolný and Verves 1997), whereas one, S. (Het.) pseudobenaci, is restricted to southeastern Europe (Pape 2004). Sarcophaga (Pse.) spinosa represents a new record for Croatia, which is not surprising as this species is recorded from neighbouring Hungary and Serbia, and has also been recorded from Albania, French mainland, Italian mainland, North Macedonia, Romania, and Ukraine (Pape 2004). Most species were found in natural and semi-natural habitats, although some of the species recorded in this study, i.e., S. (Ren et al. 2018). All these five species were recorded in this study (Table 2). Sarcophaga (Pad.) similis was recorded in four localities (Darda, Petlovac, Popovac, Zmajevac). (Table 2). In this study, a large number of flesh fly specimens were collected on or nearby pet animal faeces and on discarded leftover food. This is not surprising since it is known that S. (Pas.) albiceps, S. (Thy.) incisilobata, and Ravinia pernix visit excrements from humans and (other) animals (Papp 1992a(Papp , 1992b. Several species of flesh flies are also known to visit different animal carcasses (Szpila et al. 2015). Among them, the following species were recorded during this study: R. pernix, S. ) incisilobata were also collected in this study. Eighty years ago, Baranov (1940) confirmed the presence of five flesh fly species in a laystall in the village of Metajna on the Island of Pag, four of which were also recorded in this study.
Sarcophaga (Sarcophaga) croatica
In a similar study from the Polish Baltic coast, a number of species were recorded from a marshy habitat (15) and a sandy habitat (24); some of these, including S. lehmanni, were also collected in this study. The large differences in the number of recorded species between Zmajevac and the other localities may be explained by the much higher number of samples taken at Zmajevac. This is clearly shown by the fact that in 2021 only 15 species were collected at the Zmajevac locality, compared to the total of 35 species sampled there during all three years (2019-2021). The lower number of species recorded at the Bilje locality was influenced by environmental factors such as exposure to open sun throughout the day (with afternoon temperatures ≥ 32 °C) and a lack of animal faeces and food remains. The seven localities around pond habitats are polluted by various organic contaminants resulting from human activities, which can attract certain species for feeding and breeding.
Conclusions
The
|
v3-fos-license
|
2016-06-02T05:21:20.197Z
|
2014-06-01T00:00:00.000
|
18839898
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://content.sciendo.com/downloadpdf/journals/intox/7/2/article-p111.pdf",
"pdf_hash": "48e97d7f700c822cd73661b1f12f1641630c2486",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43386",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"sha1": "3906662fb30a4dcbbd47debb12d6dd8615db0db7",
"year": 2014
}
|
pes2o/s2orc
|
Effects of cadmium and monensin on renal and cardiac functions of mice subjected to subacute cadmium intoxication
Cadmium (Cd) is a well-known nephrotoxic agent. Cd-induced renal dysfunction has been considered as one of the causes leading to the development of hypertension. The correlation between Cd concentration in blood and urine and cardiovascular diseases has been discussed in many epidemiological studies. A therapy with chelating agents is utilized for the treatment of toxic metal intoxication. Herein we present novel information indicating that monensin (applied as tetraethylammonium salt) is a promising chelating agent for the treatment of Cd-induced renal and cardiac dysfunction. The study was performed using the ICR mouse model. Adult ICR male mice were divided into three groups with six animals in each group: control (received distilled water and food ad libitum for 28 days); Cd-intoxicated (treated orally with 20 mg/kg b.w. Cd(II) acetate from day 1 to day 14 of the experimental protocol), and monensin treated group (intoxicated with Cd(II) acetate as described for the Cd-intoxicated group followed by oral treatment with 16 mg/kg b.w. tetraethylammonium salt of monensic acid for 2 weeks). Cd intoxication of the animals resulted in an increase of the organ weight/body weight indexes. Cd elevated significantly creatinine and glucose level in serum. Monensin treatment improved the organ weight/body weight ratios. The therapy of the Cd-intoxicated animals with monensin ameliorated the creatinine and glucose level in serum and decreased the concentration of the toxic metal ions in the heart and kidneys by 54% and 64%, respectively.
Introduction

Cadmium is a toxic element. When it enters the bloodstream, it binds to albumin and other high molecular weight proteins and is then transferred to the liver (Nordberg, 2004). The accumulation of Cd in the liver triggers synthesis of metallothionein (MT), which has been considered a primary defensive mechanism against Cd intoxication (Nordberg & Nordberg, 2004). The Cd-MT complex passes through the glomeruli and is taken up by the renal tubular cells (Nordberg et al., 1994). In the kidneys, this complex is destroyed: MT is catabolized in the lysosomes and Cd is released (Nordberg et al., 1994). The accumulation of Cd in the kidneys causes damage to the renal proximal tubules. Injury of the renal proximal tubules was observed in animals exposed to Cd in environmentally relevant doses, demonstrating the necessity of developing an effective therapy for the treatment of Cd intoxication (Thijssen et al., 2007). The renal dysfunction induced by Cd has been considered one of the causes of the development of hypertension (Satarug, 2005). The effect of Cd on the vascular system and cardiac function was discussed by Sompamit et al. (2010), Prozialeck et al. (2006, 2008), Molloaoglu et al. (2006), Manna et al. (2008), and Donpunha et al. (2011). Furthermore, the correlation between blood and urine Cd concentrations and diseases such as idiopathic dilated cardiomyopathy (Smetana et al., 1987), peripheral arterial disease (Nordberg et al., 1994), stroke, heart failure, and atherosclerosis was documented in many epidemiological studies (Tellez-Plaza et al., 2010; Peters et al., 2010; Ross, 1999).

Different chelators were shown to decrease Cd concentration in the kidneys of animals subjected to Cd intoxication, yet they were not found to be very effective in reducing Cd concentration in other organs (Blanusa et al., 2005; Flora & Pachauri, 2010; Andujar, 2010). Furthermore, most of the chelating agents were administered s.c. or i.p., which might cause severe side effects (Blanusa et al., 2005; Flora & Pachauri, 2010). Among the oral chelating agents, DMSA (2,3-dimercaptosuccinic acid) has been used in the therapy of heavy metal intoxication. However, this chelating agent is hydrophilic and does not reach the toxic metal ions accumulated in the intracellular space (Flora & Pachauri, 2010). Furthermore, the effect of DMSA on Cu homeostasis should be considered when this agent is utilized for the treatment of heavy metal intoxication in humans (Flora & Pachauri, 2010). Recent studies on animal models showed that the polyether ionophorous antibiotic monensin is much more effective than DMSA in the therapy of lead (Pb) intoxication (Hamidinia et al., 2006). In our previous paper we reported that this antibiotic reduced the concentration of Cd in organs of mice subjected to subacute Cd intoxication (Ivanova et al., 2012). We found that monensin did not disturb the homeostasis of Cu and Zn. Furthermore, the antibiotic significantly improved Fe metabolism in mice subjected to subacute Cd intoxication (Ivanova et al., 2012). These promising results indicated that the possible application of monensin as a chelating agent for the therapy of Cd intoxication should be studied in detail.
The present study was designed to assess the effect of monensin on Cd-induced renal dysfunction and cardiac impairment.
Animal model
Sixty-day-old adult male ICR mice were purchased from the Animal Care Unit Slivnica (Bulgaria). They were housed at the Institute of Experimental Morphology, Pathology and Anthropology with Museum (Bulgarian Academy of Sciences, Sofia) under conventional conditions at room temperature with a 12 h light/12 h dark cycle and controlled humidity. The animals were divided into three groups of six mice each. The first (control) group received a standard diet and had free access to distilled water during the experimental protocol. The second group (Cd-treated) was exposed to 20 mg/kg body weight Cd(CH3COO)2 × 2H2O in drinking (distilled) water once daily for two weeks. During the following 14 days of the experiment, the animals from this group received distilled water and food ad libitum. The third group (monensin-treated mice) was administered Cd(CH3COO)2 × 2H2O as described above, followed by treatment with the tetraethylammonium salt of monensic acid (16 mg/kg body weight in distilled water) from day 15 to day 28 of the experiment. On day 29 of the experimental protocol, all animals were sacrificed under light ether anesthesia and samples were collected for analysis. The organs were stored at -20 °C prior to analysis. Blood samples were collected in heparinized tubes and centrifuged, and the resulting plasma samples were stored at -20 °C. The animal studies were approved by the Ethics Committee of the Institute of Experimental Morphology, Pathology and Anthropology with Museum, BAS.
Preparation of monensic acid
Monensic acid A monohydrate was prepared from sodium monensin (711 mg, 1 mmol) by applying the procedure previously described (Ivanova et al., 2010).
Biochemical analysis
The biochemical analysis was performed in the clinical laboratory "Ramus" (Sofia, Bulgaria) using established analytical protocols. The laboratory "Ramus" is certified by the Ministry of Health (Sofia, Bulgaria) to perform clinical analyses.
Atomic absorption analysis
The organs were digested with concentrated HNO3 (free of metal ions), as described previously (Ivanova et al., 2012). The determination of Cd in the kidneys was performed on a flame atomic absorption spectrophotometer (Perkin Elmer Analyst 400, air-acetylene flame). The analysis of Cd in the hearts of the animals was conducted on an electrothermal analyzer (Zeeman Perkin Elmer 3030, HGA 600). Certified Reference Materials from the International Atomic Energy Agency (IAEA-H-8, kidney; IAEA-H-4, animal muscle) were used to control analytical accuracy.
Statistical calculations
The results for the three study groups are presented as the mean value ± SD (n=6 for each group). Student's t-test was applied to determine the significance of the differences between the experimental results of two groups. The difference between two groups was considered significant at p<0.05.
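As a concrete illustration of the comparison described above, the following sketch reports two groups of six animals as mean ± SD and applies a two-sided Student's t-test at p<0.05. All numbers are placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements for two groups of n=6 mice each
# (placeholder values, e.g. serum creatinine in arbitrary units).
control = np.array([0.52, 0.48, 0.55, 0.50, 0.47, 0.53])
cd_treated = np.array([0.68, 0.72, 0.65, 0.70, 0.74, 0.66])

# Report each group as mean ± SD, as in the paper.
for name, grp in (("control", control), ("Cd-treated", cd_treated)):
    print(f"{name}: {grp.mean():.3f} ± {grp.std(ddof=1):.3f}")

# Student's t-test (equal-variance, two-sided); p < 0.05 is taken as significant.
t_stat, p_value = stats.ttest_ind(control, cd_treated, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```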
Results and discussion
In this study we present novel data regarding the effect of monensin on the kidney and heart weight of Cd-intoxicated animals (see Figures 1 and 2).
The administration of Cd(II) acetate to mice over 2 weeks resulted in a significant elevation of heart weight by 20% compared to the controls (p<0.05). The weight of the kidneys was also affected by Cd intoxication. Cd-induced organomegaly has been associated with inflammatory processes triggered by this metal (Donpunha et al., 2011). Treatment of the Cd-intoxicated mice with monensin returned the weight of the heart to normal. Monensin also decreased the weight of the kidneys in the Cd-treated animals, but the difference between the Cd-intoxicated and monensin-treated groups was not statistically significant. The data showed that the effect of monensin on Cd-induced cardiac toxicity was more pronounced than its effect on the alterations in renal function triggered by Cd.
The biochemical studies showed that Cd increased creatinine in the serum of Cd-intoxicated animals (Figure 3). Our finding supports the conclusion that serum creatinine concentration can be used to monitor cadmium-induced renal injury (Bharavi et al., 2011). The 30% decrease in the creatinine level in monensin-treated mice (p<0.05) confirmed the protective effect of this antibiotic on the renal function of mice exposed to subacute Cd intoxication.
Treatment of the animals with Cd(II) acetate led to a significant increase of glucose concentration in serum (Figure 4). Different cellular and physiological mechanisms have been proposed for the Cd-induced elevation of the serum glucose level (Edward & Prozialeck, 2009). Animal studies performed by Edward and Prozialeck (2009) demonstrated a direct effect of Cd on the pancreas: Cd was found to alter insulin release from β-pancreatic cells. In vitro studies on mouse renal cortical cells showed that Cd decreased both glucose uptake and expression of SGLT1, a Na+-dependent glucose symporter (Blumenthal et al., 1998). These results corroborated studies on Cd-treated rats showing that Cd elevated the activities of enzymes responsible for gluconeogenesis in kidney tissue (Chapatwala et al., 1982). In the present study, treatment of the Cd-intoxicated mice with monensin attenuated the Cd-induced increase of glucose concentration in serum. Figure 5 presents the results of the biochemical analysis of the lipid profile in the three groups of mice studied. The lipid profile (total cholesterol, HDL-cholesterol, and triglycerides) was affected neither by cadmium treatment nor by monensin therapy. The study by Messner et al. (2009) demonstrated that the plaque formation induced by Cd did not always correlate with alteration of the lipid profile; the authors concluded that Cd exerted its atherogenic activity by causing endothelial damage (Messner et al., 2009). Histological studies on cardiac and renal tissues revealed that treatment of Cd-intoxicated mice with monensin significantly improved the morphology of both organs studied (data not shown).
The data from the atomic absorption analysis showed that the highest Cd concentrations were measured in the kidneys and hearts of the Cd-treated animals (second group) (Figures 6 and 7). The Cd concentrations in both organs were higher than those reported in our previous study, in which the animals received 10 mg/kg Cd(II) acetate daily for 2 weeks (Ivanova et al., 2012). The data presented in this study confirmed that the accumulation of Cd in the organs was dose dependent. Monensin decreased the Cd concentration in the kidneys and heart of Cd-intoxicated animals by 57% and 64%, respectively (p<0.05). These data are in good agreement with the results of the biochemical and histological analyses and support the conclusion that the polyether ionophorous antibiotic monensin could be a promising chelating agent for the treatment of renal dysfunction and cardiac impairment in cases of Cd intoxication.

Figure 5. Lipid profile in serum of the experimental animals. Each column represents mean±SD, n=6; asterisk (*) indicates significant differences between the Cd-treated group and normal controls (p<0.05); double asterisk (**) indicates significant differences between the monensin-treated group and the Cd-intoxicated animals (p<0.05).

Figure 6. Cd concentration in the kidneys of the experimental animals. Each column represents mean±SD, n=6; asterisk (*) indicates significant differences between the Cd-treated group and normal controls (p<0.05); double asterisk (**) indicates significant differences between the monensin-treated group and the Cd-intoxicated animals (p<0.05).

Figure 7. Cd concentration in the heart of the experimental animals. Each column represents mean±SD, n=6; asterisk (*) indicates significant differences between the Cd-treated group and normal controls (p<0.05); double asterisk (**) indicates significant differences between the monensin-treated group and the Cd-intoxicated animals (p<0.05).
|
v3-fos-license
|
2022-03-25T15:14:03.590Z
|
2022-03-22T00:00:00.000
|
247647053
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2223-7747/11/7/843/pdf",
"pdf_hash": "496a883a6d2254563aa344a2a872ae4cf4997c39",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43388",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "04c1f6476faabbfa2b2c28287a0b624214cdaea1",
"year": 2022
}
|
pes2o/s2orc
|
The Delay of Raphanus raphanistrum subsp. sativus (L.) Domin Seed Germination Induced by Coumarin Is Mediated by a Lower Ability to Sustain the Energetic Metabolism
In the present study, the mode of action of coumarin was investigated using the germination process as a target. A dose–response curve, built using a range of concentrations from 0 to 800 µM, allowed us to identify a key concentration (400 µM) inhibiting the germination process and reducing its speed without compromising seed development. Subsequently, short time-course (0–48 h) experiments were carried out to evaluate the biochemical and metabolic processes involved in the coumarin-induced germination delay. The results pointed out that coumarin delayed K+, Ca2+, and Mg2+ reabsorption, suggesting a late membrane reorganisation. Similarly, seed respiration was inhibited during the first 24 h but recovered after 48 h. These results agreed with ATP levels, which followed the same trend. In addition, the untargeted metabolomic analysis identified amino acid metabolism, the TCA cycle, and the glyoxylate pathway among the pathways significantly impacted by the treatment. The results highlighted that coumarin interferes with membrane reorganisation, delaying it and reducing the production of ATP, as also supported by the pathway analysis and cell respiration. The in vivo 31P-NMR analysis supported the hypothesis that the chosen concentration was able to affect plant metabolism while maintaining seed viability, which is extremely important for studying natural compounds' mode of action.
Introduction
The study of natural compounds with phytotoxic activity to develop new weed-management strategies is a challenging task due to the complexity of their mode of action and their potential multi-target activity [1]. Specialised metabolites' biosynthesis is intrinsically connected with the evolutionary forces that drive species' evolution and their survival strategies adopted in response to biotic and abiotic stresses in specific ecosystems. This ecological pressure led to new specialised metabolic pathways usable by plants for their defence and adaptation, which produce a wide array of structurally different chemicals with high biological activity [2]. This phenomenon has been compared by Dayan and Duke [2] to a high-throughput screen where scientists test several compounds over a brief period on a single biochemical target. However, natural processes selected specialised metabolites over millions of years based on their biological activities on whole plants and involved several chemical interactions with countless organisms and target sites. Nowadays, more than 2,140,000 specialised metabolites have been isolated and are classified according to their vast diversity in biosynthesis, structure, and function [3]. Few of them have been examined for their phytotoxic potential, and the modes of action (MOAs) of even fewer have been elucidated [4].
Among the different classes of specialised metabolites, hydroxycinnamic acids and their derivatives are highly potent natural phytotoxins [5]. In particular, one of the most biologically active classes of molecules is the coumarins class, deriving from the lactonisation of o-hydroxycinnamic acid. The simplest compound representing this class is coumarin (also known as 1,2-benzopyrone), studied mainly for its inhibitory ability on seed germination and seedlings' growth [6][7][8][9][10][11][12]. It has recently been demonstrated that coumarin's phytotoxic effects on seedling growth are due to an accumulation of cyclin B, which alters the root apical meristem architecture, undermining the integrity of the cortical microtubule array organisation. Such alterations are related to a reduction in basipetal auxin transport to the apical root meristem and its accumulation in the maturation zone, stimulating lateral root formation [13]. Despite the vast amount of studies focused on seedlings' growth and metabolism in response to coumarin [14,15], only a few manuscripts tried to elucidate its effects on seed germination.
The inhibition or delay of seed germination is an important effect that plays a significant ecological role in natural ecosystems, increasing the competitive ability of the species that can retard the growth of the competitors. McCalla and Norstard [16] defined germination as the vegetative stage most sensitive to phytotoxins. In fact, a short period of inhibition or stimulation, at this stage, could strongly increase or reduce the ability of the seedling to compete with other plants [17,18]. The plants' ability to delay the germination of neighbouring species through the release of coumarin in soil was largely documented, even if only a few hints were suggested concerning its mode of action [19].
One of the first studies on coumarin's mode of action was reported by Nutile [20], who described, for the first time, the inhibitory activity of coumarin on the germination of lettuce seeds, connecting it to the induction of light-sensitive dormancy in lettuce seeds. Mayer and Evenari [21], studying the coumarin structure-activity relationship, reported that the germination inhibitory activity was due to its specific structure (an unsaturated lactone linked to an unsubstituted benzene nucleus), but they also added that any change in coumarin structure caused a slight reduction, but not a destruction, of its activity as a germination inhibitor. Subsequently, Mayer and Poljakoff-Mayber [22] reported that although coumarin did not prevent the breakdown of sucrose, it did prevent the accumulation of glucose. At the same time, a block in lipase activity was observed, suggesting that coumarin might operate primarily as an enzyme inhibitor with no marked specificity, but no studies confirmed this hypothesis. It was also suggested that the coumarin-induced dormancy in lettuce seeds could be attributable to its ability to antagonise gibberellins' function, since the exogenous addition of these hormones was unable to revert its effects [23,24].
More recent research highlighted the role of the seed coat in mediating the inhibitory activity of coumarin on radish germination: since the molecule's biological activity was significantly reduced after removal of the tegument, Aliotta, Fuggi, and Strumia [12] suggested that coumarin could induce a secondary seed dormancy. In addition, they demonstrated that coumarin (200 µM) significantly inhibited radish germination by 50% and that this molecule inhibits the elongation of cells in the differentiating zone of the root in seeds germinated in the treatment.
One of the earliest studies on specific enzymes involved in durum wheat seed germination was published by Abenavoli et al. [9], who reported that coumarin delayed the reactivation of peroxidases, enhanced superoxide dismutase activity, and reduced the activity of selected marker enzymes involved in metabolic resumption. Later, Pergo et al. [6] suggested that coumarin acts as a cytostatic agent, retarding germination and growth of Bidens pilosa and exerting, at higher concentrations, stimulation of lipoxygenase activity. Chen et al. [8], working on the embryo of rice seeds, observed that coumarin (in a range between 1 mM and 20 mM) delays germination by decreasing expression of the ABA 8'-hydroxylase 2 and 3 genes (OsABA8ox2/3), inhibiting, as a consequence, the catabolism of abscisic acid. In addition, in explaining the role of endogenous hormones and their interaction with coumarin during germination, recent studies highlighted that this specialised bioactive metabolite might delay seed germination by reducing endogenous GA4 and decreasing, as a consequence, the accumulation of ROS [7].
Till now, experiments focused on germination response to coumarin were carried out on seeds treated for several days with the molecule at significantly high concentrations.
Considering the peculiar events that characterise the first phase of seed germination, such as membrane repair and reorganisation, activation of different metabolic pathways, and the induction of macromolecule biosynthesis, analysing coumarin's effects in the first instances of germination appears crucial to better investigate the mechanisms underlying its toxicity. Therefore, to highlight the early effects of coumarin on this sensitive physiological process, we decided to work with a short time course, using the molecule at a sublethal concentration and carrying out in vivo (classical physiological and biochemical methods joined to 31P-NMR experiments) and destructive untargeted metabolomics experiments. These approaches allowed us to describe the metabolic changes induced by coumarin treatment on radish seed germination, highlighting the pathways affected by its inhibitory activity.
Germination Index and Seed Respiration
The dose-response curve showed that coumarin treatment significantly affected the GT index at concentrations higher than 200 µM, inducing inhibition of 18% and 58% at 400 and 800 µM, respectively (Figure 1a). On the contrary, the S parameter highlighted a significant reduction in germination speed already at a concentration of 200 µM (25% lower than the control), which reached 70% inhibition at 400 µM and 90% at 800 µM (Figure 1b). Since 400 µM was the lowest concentration able to significantly inhibit seed germination and delay this process, we decided to focus all the following experiments on this concentration.
Ion Leakage and Reabsorption and Seed Respiration
The ion leakage, monitored in the external medium in which the seeds were incubated, highlighted significant differences in Ca2+ and Mg2+ only after 48 h of treatment, at which point nine-fold higher Mg2+ and four-fold higher Ca2+ contents in the external medium were observed (Figure 2b,c). On the contrary, the K+ content was higher than the control at 12 h (one-fold), 24 h (seven-fold), and 36 h (four-fold) of treatment, but reached control values after 48 h (Figure 2a).
Concerning seed respiration, evaluated on control and treated seeds (400 µM) in a time-course experiment (6-48 h), no changes were observed after 6 h of treatment (data not shown), whereas 12 h and 24 h of exposure reduced this parameter by 36% and 30%, respectively (Figure 2d). On the contrary, no differences were observed after 48 h, when an almost complete recovery was observed in the treated seeds (Figure 2d).
In Vivo NMR Analysis
The in vivo 31P-NMR analysis carried out on seedlings germinated on coumarin (0 µM and 400 µM) for 24 h and 48 h pointed out significant differences among the spectra and the parameters evaluated (Figure 3). However, from the metabolic point of view, all spectra showed the resolution (i.e., the presence of peaks referable to the main P-metabolites and the possibility to discriminate the inorganic phosphate pools corresponding to the cytoplasm and the vacuole, respectively) typical of an active tissue, allowing us to exclude general toxicity symptoms in the seeds incubated in 400 µM coumarin.
Concerning the cytoplasmic pH, no differences were observed after 24 h and 48 h, whereas after 48 h of treatment a reduction in the vacuolar pH was observed in control plants (Table 1). The resonance assignments (Figure 3) are as follows: peak 1, phosphocholine; peak 2, cytoplasmic inorganic phosphate; peak 3, vacuolar inorganic phosphate; peak 4, γ-phosphate of NTP and β-phosphate of nucleoside diphosphate (NDP); peak 5, α-phosphates of NTP and NDP; peak 6, UDP-Glc; peak 7, β-phosphate of NTP; peak 8, UDP-Glc and NAD(P)(H). Chemical shifts are quoted relative to 85% H3PO4. In the three independent experiments, the resonance intensities differed by 15% at most.
A significant reduction in the amount of cytoplasmic inorganic phosphate was observed in control seedlings after 48 h (Figure 4). On the contrary, the vacuolar inorganic phosphate was significantly higher in seedlings treated with coumarin for 48 h (Figure 4). Concerning ATP content (Figure 4), coumarin-treated seedlings were characterised by a lower content after 24 h of treatment and a general recovery to the control level after 48 h. A similar trend was also observed for phosphocholine content, which was significantly lower after 24 h of treatment and recovered to the control level at the end of the experiment (48 h) (Figure 4).
GC-MS Untargeted Metabolomic
To obtain more insights into coumarin-induced metabolic changes in R. sativus seeds during germination, a GC/MS-driven untargeted-metabolomic analysis was carried out.
The GC-MS-driven analysis was performed on seeds treated with coumarin for 0 (T0), 24 (T1), and 48 h (T2). The analysis revealed grouped and individual metabolites that allowed discrimination of the samples. Across all the analysed stages, the metabolomic analysis allowed us to annotate and quantify 72 metabolites (mainly primary metabolites) and extract 161 unknown shared EI-MS features. Subsequently, the unknown features were putatively annotated in silico through MS-Finder, which allowed us to assign a putative structure to 73 metabolites (mainly specialised metabolites).
Unsupervised Principal Component Analysis (PCA) was carried out on blank samples and the two sample groups (treated and untreated, from T0 to T2) to demonstrate system suitability (Figure S1a-d). The PCA score plot, built on the first (PC1) and second (PC2) components, revealed good discrimination of sample groups against blanks, highlighting model robustness (Figure S1a). The components used allowed the separation of the two treatments and the three growth stages (T0-T2) with no outliers (Figure S1a-d), indicating that our metabolomic analysis was reliable and could sufficiently reflect the changes in the metabolic profile of the seeds during germination. The unsupervised PCA run on MS-DIAL-suggested metabolites (Figure S1c) and unknown features (Figure S1d) revealed clear discrimination of sample groups.
Subsequently, after manual feature annotation (in both MS-DIAL and MS-Finder) and the discarding of falsely annotated metabolites, the putative metabolites and their normalised intensities were analysed through Metaboanalyst 5.0 [25]. An unsupervised PCA carried out on the annotated metabolites provided a global view of the time-course changes in the metabolic patterns of developing R. sativus seeds during coumarin treatment (Figure 5a). The first principal component (PC1), accounting for 53.5% of the total variance (PC2 accounted for 14.5%), reflected time-dependent seed germination development in response to coumarin. Clear separation among times and treatments was observed (Figure 5a). The evaluation of the PCA loading plots pointed out that PC1 was mainly dominated by 3-methoxytyramine-betaxanthin, tryptophan, D-glutamine, L-(-)-proline, and raffinose, among others (Table S1), whereas PC2 was mainly dominated by specialised metabolites such as 5-hydroxy-L-tryptophan, eremopetasitenin A1, gentiflavine, and solanapyrone, among others (Table S1).
To obtain maximal covariance between the metabolite levels and the treatments at each time point, a partial least-squares discriminant analysis (PLS-DA) was applied (Figure 5b). The PLS-DA model was well described using the first four components (Figure S2), which explained a total variance higher than 75% (Figure S2). Moreover, the cross-validation and permutation tests validated the PLS-DA model's robustness, highlighting high R2 and Q2 values for both latent variables and p ≤ 0.05 (Figure S3). PLS-DA-derived variable importance in projection (VIP) scores (built on the first 30 metabolites with a VIP score higher than 1.4) revealed 3-methoxytyramine-betaxanthin, tryptophan, putrescine, raffinose, D-glutamine, and proline, among others, as those with the highest VIP scores for the three germination steps (Figure 5c). Finally, a random forest analysis revealed lactose, quercetin-3-O-glucuronide, GABA, D-arabino-hexose-2-ulose, and salicin, among others, as the metabolites with the highest mean decrease in accuracy (features ranked by their contribution to classification accuracy) for the three sample groups (Figure 5d).
The univariate analysis highlighted 126 significantly altered metabolites among the three times of exposure and the two concentrations assayed (for the full list of significantly altered metabolites, refer to Supplementary Table S1). Among them, the organic acids (Figure 6) were generally more abundant than in the control after 24 h of treatment but then dropped significantly after 48 h (Figure 6). On the contrary, the abundance of the majority of the amino acids significantly increased compared to T0 after 24 h (T1) and 48 h (T2) of treatment, but their content was, in general, lower in treated plants than in untreated plants (Figure 7). Attention should be paid to lysine, which showed peculiar behaviour: treated seeds had a higher level after 24 h but a lower one after 48 h compared to control seeds (Figure 7). A KEGG-based pathway analysis, which combines enrichment and topology analysis, was carried out separately to compare coumarin's effects at T1 and T2. The analysis revealed that 29 pathways (at T1) and 35 pathways (at T2) were significantly altered by the treatment (Table 2 and Supplementary Table S1). However, only 22 at T1 and 25 at T2 were characterised by an impact higher than 0.1 (Table 2 and Supplementary Table S1). Among them, amino acid metabolism, the glyoxylate pathway, and the TCA cycle, all playing pivotal roles during seed germination, were significantly impacted by coumarin treatment (Table 2).

Table 2. Results of the "Pathway Analysis" (topology + enrichment analysis) carried out on the metabolites identified in R. sativus seeds during the germination process (T0-T2) in response to different coumarin doses (0 µM and 400 µM).
Discussion
Identifying the mode of action (MoA) of a natural compound is a challenging task that requires biochemical and physiological knowledge as well as the choice of the right concentration and the right time of exposure to the chemical [1]. The choice of the effective concentration is generally based on standard parameters such as the IC50 and LD50, among others [26]. At the same time, the main problem encountered during the study of a molecule's MoA is the choice of lengthy exposures to the chemical, which do not allow the observation of the primary effects on plant metabolism but only of the side effects, mainly due to a cascade of biochemical and physiological responses to the stressing factor. In the present study, focused on coumarin's effects on seed germination, we used radish (R. sativus) as the target species, since it is considered a sensitive species for phytotoxicity tests. Moreover, it is characterised by quick and synchronised germination (~24 h), which is essential to reduce sample variability, and it is known to be sensitive, during germination, to coumarin exposure [10][11][12]. In addition, before the experiments, the seeds were decoated to avoid the seed-coat-mediated dormancy already observed during coumarin treatments [12]. Concerning the choice of the concentration, we focused our attention on the lowest effective dose that inhibited the germination process and delayed its speed. In particular, we focused our study on the concentration of 400 µM, which, compared with other old and recent studies that analysed the effects of coumarin at concentrations ≥1 mM [8,9,27], can be considered a low dose. For this study, we decided to combine classical physiological methods, such as respiration monitoring, with in vivo (31P-NMR) and destructive (GC-MS-driven untargeted metabolomics) metabolomics techniques in order to monitor metabolic changes in response to the induced stress.
The dose-response curve built on the germination data confirmed the phytotoxicity of this molecule at concentrations ≥400 µM, in agreement with previous studies carried out on the same species [28]; however, the reduction in this parameter was already evident at 200 µM. During the first stages of the germination process, the seed is a disorganised structure that must reorganise its membranes before activating the biochemical processes that lead to energy production, cell division, and root protrusion [29]. During this reorganisation stage, several ions, such as K+, Ca2+, and Mg2+, leak from the seeds and are then reabsorbed after membrane reorganisation [30,31]. Our results suggest that coumarin affected plasma membrane reorganisation and the reactivation of transport activities, delaying ion reabsorption. In particular, in the presence of coumarin, after an early (12 h and 24 h) increase in K+ release into the germination medium, K+ was reabsorbed at the same rate as in control seeds. On the contrary, lower reabsorption of Mg2+ and Ca2+ was observed after 48 h of treatment. Similar effects (K+ reabsorption and Mg2+ leakage) were observed in radish seeds treated with phytotoxic concentrations of nickel, which did not alter plasma membrane reorganisation but only delayed ion reabsorption, slowing down the germination process [32]. This effect was attributed to the lower availability of energy to sustain transport activities [32]. Considering that it was recently highlighted that treatment with Ca2+ mobilisation inhibitors hindered radicle protrusion in rice (Oryza sativa) seeds [33], the observed Ca2+ accumulation in the germination medium and the delay in its reabsorption could suggest a slower reactivation of germination processes in coumarin-treated seeds. The central role of calcium in metabolic reactivation is also well supported by the study conducted by Negrini and co-workers [34], who demonstrated that reducing Ca2+ availability, obtained by adding Na-EGTA to the incubation medium, negatively affected the activation of the calmodulin pathway (i.e., the formation of the Ca-calmodulin complex), an effect strictly related to a delay in radish seed germination. In conclusion, the specific effect of coumarin on Ca2+ reabsorption, which was not observed for K+, could represent a crucial point in its inhibitory action. Phosphocholine is an intermediate in phospholipid biosynthesis related to membrane biosynthesis, a crucial event in seed germination [35,36]. In the present study, phosphocholine, whose production is strictly dependent on ATP, was significantly lower after 24 h of treatment (in agreement with the lower ATP level observed) but was restored to the control level after 48 h. These data strongly support the hypothesis that during the first 24 h of treatment coumarin delays membrane synthesis and organisation. It has been reported that in A. thaliana the abundance of phosphatidic acid, mediated by phosphatidic acid phosphohydrolase activity, modulates CTP:phosphocholine cytidylyltransferase activity to govern phosphocholine content [37].
To support the hypothesis that coumarin did not block seed reorganisation but only delayed it, we decided to monitor seed respiration, which was already known to be altered by coumarin treatment in roots [38] but has barely been described during seed germination. The data pointed out no differences in seed respiration after 6 h of coumarin exposure (data not shown); this is expected, since the biochemical activities of seeds during the first stages of germination are extremely low, and the main sources of gas release are colloidal or gases simply pushed out of gas-filled spaces by the water rushing in [39]. On the contrary, significantly lower O2 consumption was observed between 12 h and 30 h, whereas after 48 h treated seeds were able to restore respiration to control levels. These results are in agreement with the in vivo 31P-NMR data, where an evident reduction in ATP content was observed after 24 h, which was compensated after 48 h of treatment. As previously reported, dry seeds contain only small amounts of ATP, but it is rapidly produced during cellular hydration in association with respiration [39][40][41]. Moreover, in coumarin-treated seeds, a higher value of vacuolar phosphate was observed after 48 h, suggesting that (i) phosphate mobilisation from phosphate storage forms (i.e., phytate) was not affected by coumarin treatment, and (ii) the phosphate transported to the cytosol and used by treated seeds was lower than in the control, probably due to a delay in the reactivation of seed metabolic processes. The 31P-NMR data, which joined the previously described data in supporting a delay in respiration processes and a reduction in ATP production, were strongly supported by the untargeted metabolomic data. In fact, the pathway analysis highlighted an alteration of the TCA and glyoxylate cycles, the latter pivotal in oilseeds (such as R. sativus) during reserve mobilisation [42]. Most of the metabolites involved in those pathways, also identified by the random forest analysis as potential biomarkers (i.e., isocitric acid, malic acid, succinic acid, and fumaric acid), were significantly accumulated in treated seeds after 24 h of treatment. It is known that many key enzyme activities and products of the TCA cycle increase during early germination [29,43]; hence, our results suggest that the germinating seeds were not able to consume them for energy production because of reduced respiration. They then dropped after 48 h, confirming a potential recovery of the physiological processes (i.e., respiration, among others) in which they are involved.
Additionally, the majority of the identified amino acids, known to strongly accumulate during seed germination [29,44], significantly increased after 24 and 48 h of treatment, but in coumarin-treated samples, their content was lower than the control, further suggesting that the normal biochemical processes in germinating seeds were slowed down. The aspartate amino acid family (lysine, methionine, aspartic acid, among others) seems to be crucial for seed germination, and it has been observed that during this physiological process, aspartate is the most increased amino acid [45]. Additionally, lysine, an aspartate metabolism product, was proved to provide substrate for the TCA cycle since the manipulation of biosynthetic and catabolic enzymes involved in its metabolism has a major effect on the level of metabolites of the TCA cycle, suggesting a strict connection between lysine metabolism and the cellular energy metabolism [29,46,47]. In particular, it has been observed that transgenic lines, characterised by increased lysine synthesis during seed germination, presented a slowing down of this process, altering several metabolites connected to the TCA cycle [46,48]. These results suggest that the catabolism of the amino acids belonging to the aspartic amino acid family is an important contributor to the energy status of plants and the start of autotrophic-growth-associated processes during germination [46]. Interestingly, as observed in our experiments, where lysine was accumulated more than the control at 24 h and dropped after 48 h, the increase in lysine was also accompanied by a decrease in aspartic acid, phenylalanine, fumaric acid, malic acid, and succinic acid [46].
Plant Material, Treatment, and Germination Indexes
To allow the synchronisation of seed germination, the seeds of Raphanus raphanistrum subsp. sativus (L.) Domin were poured into a beaker filled with deionised water and incubated for one hour in a refrigerator at 4 °C to soften the integuments. Subsequently, the seeds were blotted on filter paper, and the teguments were removed using a razor blade. Shelled seeds were then transferred to Petri dishes (9 cm in Ø), with the bottom covered by a single layer of filter paper. The experiments were carried out using 10 seeds per Petri dish; the experiments were replicated 4 times and validated by repeating them twice.
To identify the dose that delayed but did not inhibit germination, a coumarin dose-response curve was built by pouring into each Petri dish 5 mL of aqueous solution at different coumarin concentrations: 0, 25, 50, 100, 200, 400, and 800 µM.
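A dose-response relationship of this kind is often summarised by fitting a log-logistic curve to the germination index and reading off the half-maximal inhibitory concentration. The sketch below assumes that model and uses placeholder response values; the paper does not state which function, if any, was fitted:

```python
import numpy as np
from scipy.optimize import curve_fit

# Coumarin concentrations tested (µM) and hypothetical germination indices
# relative to the control (placeholder values, not the measured data).
dose = np.array([0, 25, 50, 100, 200, 400, 800], dtype=float)
response = np.array([1.00, 0.99, 0.97, 0.95, 0.82, 0.42, 0.10])

def log_logistic(d, ic50, hill):
    """Two-parameter log-logistic decay, equal to 1 at d = 0."""
    return 1.0 / (1.0 + (d / ic50) ** hill)

# Fit only the non-zero doses so the power term stays well-defined,
# and bound both parameters to remain positive.
params, _ = curve_fit(log_logistic, dose[1:], response[1:],
                      p0=[300.0, 2.0], bounds=([1e-3, 0.1], [1e4, 10.0]))
ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.0f} µM, Hill slope ~ {hill:.2f}")
```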
Ion Concentration in the Incubation Medium
The concentration of ions in the incubation medium was measured at 12 h, 24 h, 36 h, and 48 h using a Varian 820 ICP-MS (Varian, Inc., Palo Alto, CA, USA). An aliquot of a 2 mg L−1 internal standard solution (45Sc, 89Y, 159Tb) was added to both the samples and the calibration curve to give a final concentration of 20 µg L−1. Typical analysis interferences were removed by using the collision-reaction interface of the ICP-MS with an H2 flow of 40 mL min−1.
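The internal standard added here serves to correct analyte signals for instrumental drift and matrix effects before applying the external calibration. A minimal sketch of that ratio correction, with hypothetical count values and assumed calibration parameters:

```python
import numpy as np

# Hypothetical ICP-MS counts for an analyte and the 89Y internal standard
# (placeholder numbers, not measured values).
analyte_counts = np.array([12000.0, 11800.0, 12500.0])
istd_counts = np.array([50000.0, 48500.0, 51200.0])

# Internal-standard ratio corrects for drift and matrix effects.
ratio = analyte_counts / istd_counts

# External calibration (ratio vs concentration), assumed fitted elsewhere.
slope, intercept = 0.045, 0.001          # assumed calibration parameters
conc = (ratio - intercept) / slope       # concentration in µg/L
print(conc.round(2))
```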
Oxygen-Uptake Rate Evaluation
Oxygen-uptake rates were measured using a Gilson differential respirometer IGRP 20 (Gilson Medical Electronics, Middleton, WI, USA). Seeds, previously incubated for different times in a flask, were transferred into the reaction vessels with 3 mL of the same solution as previously used. The central well of each flask contained a fluted filter paper wetted with 0.2 mL of 1 M KOH. The rate of oxygen uptake was measured for 15 min at 26 °C in the dark after 30 min of equilibration.
Nuclear Magnetic Resonance Spectroscopy
The in vivo 31P-NMR spectra were recorded on a standard broad-band 10 mm probe on a Bruker AMX 600 spectrometer (Bruker Analytische Messtechnik, Ettlingen, Germany) equipped with TopSpin software, version 1.3. The 31P-NMR spectra were recorded at 242.9 MHz without lock, with Waltz-based broad-band proton decoupling and a spectral window of 16 kHz.
Chemical shifts were measured relative to the signal from a glass capillary containing 33 mM methylenediphosphonate (MDP), which is at 18.5 ppm relative to the signal from 85% H3PO4. The experiments were carried out by packing the seedlings, previously incubated for 24 or 48 h in the absence or presence of 400 µM coumarin, into a 10 mm-diameter NMR tube equipped with a perfusion system connected to a peristaltic pump, in which the aerated, thermoregulated (26 °C) medium (1 mM MDP, 0.4 mM CaSO4, 1 mM MES-BTP (pH 6.1) ± 400 µM coumarin) flowed at 10 mL min−1. The spectra were determined using a 90° pulse and a recycle time of 1 s (fast acquisition conditions) or 6 s (to give fully relaxed resonances, except for vacuolar phosphate). Resonances were assigned according to Roberts et al. [49] and Kime et al. [50]. Metabolite concentrations in the tissue were determined according to Spickett et al. (1992) by comparing the resonance intensities with that of a glass capillary containing 33 mM MDP, previously calibrated against standard solutions. The areas of the 31P peaks were normalised by the percentage volume of the tissue in the NMR tube [51]. Values of cytoplasmic pH (pHc) and vacuolar pH (pHv) were estimated from the chemical shift of the Pi resonance after the construction of a standard titration curve [52].
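Reading pH off the Pi chemical shift amounts to inverting a Henderson-Hasselbalch-type titration curve. The sketch below illustrates the inversion; the pKa and the limiting acid/base shifts are illustrative values, not the calibration actually constructed for this study:

```python
import numpy as np

# Assumed titration-curve parameters for inorganic phosphate
# (illustrative values only; the study built its own standard curve).
PKA = 6.75          # apparent pKa of the H2PO4-/HPO4(2-) couple
DELTA_ACID = 0.6    # chemical shift (ppm) of the fully protonated form
DELTA_BASE = 3.2    # chemical shift (ppm) of the fully deprotonated form

def ph_from_shift(delta_obs):
    """Invert delta(pH) = delta_acid + (delta_base - delta_acid) * f,
    with f = 1 / (1 + 10**(pKa - pH)), to recover pH from a Pi shift."""
    frac = (delta_obs - DELTA_ACID) / (DELTA_BASE - DELTA_ACID)
    return PKA + np.log10(frac / (1.0 - frac))

# Example: distinct cytoplasmic and vacuolar Pi resonances (ppm, hypothetical).
for label, shift in (("cytoplasmic Pi", 2.45), ("vacuolar Pi", 1.0)):
    print(f"{label}: delta = {shift} ppm -> pH ~ {ph_from_shift(shift):.2f}")
```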
Untargeted Metabolomic Analysis
To evaluate the effects of coumarin on the metabolism of R. sativus seeds during germination, shelled seeds were treated, as previously described, with coumarin at a concentration of 400 µM. Seeds were collected at different time points (T0 = 0 h; T1 = 6 h; T2 = 12 h; T3 = 24 h; T4 = 48 h), snap-frozen in liquid nitrogen (to quench the endogenous metabolism), and powdered, and 100 mg of plant material for each replicate was transferred into a 2 mL vial.
Extraction was performed by adding 1400 µL of methanol (at −20 °C) and vortexing for 10 s after the addition of 60 µL of ribitol (0.2 mg mL−1 stock in ultrapure H2O) as an internal quantitative standard for the polar phase. Samples were shaken in a thermomixer at 70 °C for 10 min (950 rpm) and then centrifuged for 10 min at 11,000 g. The supernatants were collected and transferred to glass vials, where 750 µL of CHCl3 (−20 °C) and 1500 µL of ultrapure H2O (4 °C) were sequentially added. All the samples were vortexed for 10 s and then centrifuged for another 15 min at 2200 g. The upper polar phase (150 µL) of each replicate was collected, transferred to a 1.5 mL tube, and dried in a vacuum concentrator without heating. Then, 40 µL of methoxyamine hydrochloride (20 mg/mL in pyridine) was added to the dried samples, which were immediately incubated for 2 h in a thermomixer (950 rpm) at 37 °C. The methoximated samples were then silylated by adding 70 µL of MSTFA to the aliquots. Samples were further shaken for 30 min at 37 °C. Derivatised samples (110 µL) were then transferred into glass vials for GC/MS analysis.
GC-Quadrupole/MS Analysis
The derivatised extracts were injected into a MEGA-5MS capillary column (30 m × 0.25 mm × 0.25 µm, equipped with a 10 m pre-column) using a gas chromatograph (Agilent 7890A GC) equipped with a single quadrupole mass spectrometer (Agilent 5975C). The injector and source were set at 250 °C and 260 °C, respectively. One µL of sample was injected in splitless mode with a helium flow of 1 mL/min using the following temperature programme: isothermal for 5 min at 70 °C, followed by a 5 °C/min ramp to 350 °C and a final 5 min hold at 330 °C. Mass spectra were recorded in electron impact (EI) mode at 70 eV, scanning the 40-600 m/z range with a scan time of 0.2 s. The mass spectrometric solvent delay was set to 9 min. Pooled samples serving as quality controls (QCs), n-alkane standards, and blank solvents (pyridine) were injected at scheduled intervals to monitor instrumental performance, support tentative identification, and track shifts in retention indices (RI). Solvent blanks were run between samples, and each mass was checked against the blank run to exclude possible contamination sources.
GC/MS Data Analysis Using MS-DIAL
MS-DIAL, with its open-source, publicly available EI spectra library, was used for raw peak extraction, baseline filtering and calibration, peak alignment, deconvolution analysis, peak identification, and integration of peak height. An average peak width of 20 scans and a minimum peak height of 1000 amplitudes were applied for peak detection, and a sigma window value of 0.5 and an EI spectra cut-off of 5000 amplitudes were used for deconvolution. For peak identification, the retention time tolerance was set at 0.2 min, the m/z tolerance was 0.5 Da, the EI similarity cut-off was 60%, and the identification score cut-off was 80%. In the alignment parameter settings, the retention time tolerance was 0.5 min, and the retention time factor was 0.5.
For relative quantification purposes, when we encountered multiply silylated (n-TMS) features of well-annotated metabolites, we maintained the major (higher in abundance) compounds and discarded minor compounds (lower in abundance) for consistent comparison across all samples.
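The rule just described, keeping only the most abundant derivative when one metabolite yields several n-TMS features, is a simple group-and-filter step. A possible pandas sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical aligned feature table: one row per annotated EI-MS feature.
features = pd.DataFrame({
    "metabolite": ["glycine", "glycine", "proline", "malic acid", "malic acid"],
    "derivative": ["2TMS", "3TMS", "2TMS", "3TMS", "2TMS"],
    "mean_intensity": [1.2e5, 4.8e5, 2.0e5, 9.1e5, 3.0e4],
})

# For each metabolite, keep the derivative with the highest mean intensity
# across samples and discard the minor ones.
major = (features.sort_values("mean_intensity", ascending=False)
                 .drop_duplicates(subset="metabolite", keep="first")
                 .reset_index(drop=True))
print(major)
```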
Once the compounds and features were identified and annotated, only shared metabolites were reported as quantified and confidently identified. For metabolite annotation and assignment of the EI-MS spectra, we followed the Metabolomics Standards Initiative (MSI) guidelines for metabolite identification [55]: Level 2, identification based on a spectral database (match factor > 80%); Level 3, only compound groups known (e.g., specific ions and RT regions of metabolites); and Level 4, in silico annotation. The Level 4 identification of unknown EI-MS features that did not match the existing spectral libraries was carried out using MS-FINDER version 3.44 [56].
Statistical Analysis
Experiments were carried out using a randomised design with four replications for germination experiments and GC-MS-driven untargeted metabolomics (N = 4), and three replications for NMR, ionomic analysis, and respiration experiments (N = 3).
Germination and respiration parameters and the NMR and ionomic data were first tested for normality and homogeneity of variance and subsequently analysed through one-way ANOVA using the LSD test as post hoc (p ≤ 0.05).
Metabolomic data were analysed using the software Metaboanalyst 5.0 [25]. The data were normalised using the internal standard and the QCs with the LOESS-based normalisation functions available in the MS-DIAL software for batch correction. The missing values of the LOESS-normalised dataset were replaced with half of the minimum value found in the dataset. Subsequently, the data were log2-transformed and Pareto-scaled. The data were then explored through principal component analysis (PCA) to obtain an overview of the quality of the data acquisition step, while PLS-DA was employed to identify the differential metabolites by calculating the corresponding variable importance in projection (VIP) values.
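The preprocessing chain described above (half-minimum imputation of missing values, log2 transformation, Pareto scaling, and a PCA overview) could be sketched as follows; the intensity matrix is simulated, and the upstream internal-standard/LOESS normalisation is assumed to have been done in MS-DIAL:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical normalised intensity table: rows = samples, columns = metabolites.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.lognormal(mean=8, sigma=1, size=(24, 72)))
data.iloc[0, 3] = np.nan  # simulate one missing value

# 1) Replace missing values with half of the dataset minimum.
data = data.fillna(data.min().min() / 2.0)

# 2) Log2 transform.
logged = np.log2(data)

# 3) Pareto scaling: centre each metabolite, divide by the square root of its SD.
pareto = (logged - logged.mean()) / np.sqrt(logged.std(ddof=1))

# 4) PCA overview of the scaled data.
pca = PCA(n_components=2)
scores = pca.fit_transform(pareto)
print("explained variance:", np.round(pca.explained_variance_ratio_, 3))
```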
A within-subjects two-way ANOVA was used, and the significance threshold was defined as the corrected p-value < 0.05. The False Discovery Rate was chosen for multiple testing corrections. For ASCA, the leverage threshold and alpha threshold were set to be 0.8 and 0.05, respectively.
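Per-feature testing with Benjamini-Hochberg FDR control, as applied above, can be illustrated with the simplified sketch below, which substitutes a per-metabolite two-group t-test for the full within-subjects two-way ANOVA and uses simulated data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_metabolites = 126
# Hypothetical intensities: 12 control vs 12 treated samples per metabolite.
control = rng.normal(0.0, 1.0, size=(n_metabolites, 12))
treated = rng.normal(0.3, 1.0, size=(n_metabolites, 12))

# Raw p-values (one test per metabolite), then Benjamini-Hochberg correction.
p_raw = stats.ttest_ind(control, treated, axis=1).pvalue
reject, p_fdr, _, _ = multipletests(p_raw, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} metabolites significant after FDR correction at 0.05")
```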
Enrichment and pathway analyses were carried out using the Metaboanalyst 5.0 tools, setting Arabidopsis thaliana as the metabolome reference database.
Conclusions
The potential use of coumarin as a botanical herbicide has been widely discussed in the literature, but no clear information has been reported concerning the primary target of this molecule, since the majority of the experiments carried out on it have used long exposure times.
Understanding the mode of action of natural compounds is challenging, since it requires biochemical and physiological knowledge as well as the choice of the right concentration and time of exposure. To highlight the primary biochemical and physiological processes altered by exogenous molecules, and to avoid observing side effects due to a cascade of biochemical and physiological responses, we decided to work with low effective concentrations, monitoring the seed responses over a short period.
The results highlighted that coumarin interferes with membrane reorganisation, delaying it and reducing the production of ATP, as also supported by the pathway analysis and cell respiration. The NMR analysis supported the hypothesis that the chosen concentration was able to affect plant metabolism while maintaining seed viability, which is extremely important for studying natural compounds' mode of action. In addition, coumarin treatment strongly affected the seed metabolome, whose trend generally followed the control but was characterised by lower metabolite levels. The metabolomic study also highlighted the distinct behaviour of the amino acid lysine, whose content was significantly increased after 24 h of treatment; interestingly, such an increase is generally connected, as also observed in our experiment, to germination delay and alteration of TCA cycle metabolites. Although this is a preliminary screening that has allowed us to highlight the main biochemical and physiological targets and processes involved in coumarin phytotoxicity on seeds, further studies are required to investigate them in depth and better understand coumarin's mode of action.
This study further confirmed that coumarin is an extremely biologically active molecule representing a promising pharmacophore for the development of new agrochemicals with a low impact on the environment and human health. Moreover, its ability to delay germination in several species, reducing their fitness and competitiveness with crops, is in agreement with the agroecological perspective, which requires new agrochemicals that reduce weed pressure on crops without altering the field biodiversity.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/plants11070843/s1, Figure S1: MS-DIAL PCA score plot of annotated and unknown compounds; Figure S2: Pairwise score plot for the top 5 components of the PLS-DA; Figure S3: Cross-validation (a) and permutation test (b) used to validate the PLS-DA model's robustness; Table S1: Metabolomic dataset and statistical analyses.
Author Contributions: All the authors equally contributed to the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding:
The MS was partially funded by the project RTI2018-094716-B-100 granted by the "Ministerio de Ciencia, Innovaciòn y Universidades".
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2022-09-22T15:07:24.141Z
|
2022-09-01T00:00:00.000
|
252422417
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4425/13/9/1671/pdf?version=1663582747",
"pdf_hash": "7f4ce4e3cb6891d7306dbe707318fb8e7ed67077",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43393",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"sha1": "0382c4b5befbcfed54030ff57480f212ce32b79c",
"year": 2022
}
|
pes2o/s2orc
|
Blood Transcriptome Analysis of Beef Cow with Different Parity Revealed Candidate Genes and Gene Networks Regulating the Postpartum Diseases
Maternal parity is an important physiological factor influencing beef cow reproductive performance. However, there are few studies on the influence of different calving periods on early growth and postpartum diseases. Here, we conducted a blood transcriptomic analysis on cows of different parities for gene discovery. We used Short Time-series Expression Miner (STEM) analysis to determine gene expression levels in cows of various parities and divided the multiple parities into three main periods (nulliparous, primiparous, and multiparous) for subsequent analysis. Furthermore, the top 15,000 genes with the lowest median absolute deviation (MAD) were used to build a co-expression network using weighted correlation network analysis (WGCNA), and six independent modules were identified. Combining Exon-Wide Selection Signature (EWSS) and protein-protein interaction (PPI) analyses revealed that TPCN2, KIF22, MICAL3, RUNX2, PDE4A, TESK2, GPM6A, POLR1A, and KLHL6 are involved in early growth and postpartum diseases. The GO and KEGG enrichment showed that the parathyroid hormone synthesis, secretion, and action pathway and stem cell differentiation function-related pathways were enriched. Collectively, our study revealed candidate genes and gene networks regulating early growth and postpartum diseases and provided new insights into the potential mechanisms underlying the reproductive advantages of different parity selection.
Introduction
Beef cow reproductive performance is a key factor affecting a ranch. The introduction of excellent breeds [2], strengthened disease prevention, and the adoption of advanced technologies [3] have a positive effect on stabilizing and improving the reproductive performance of beef cows [1]. In fact, maternal parity is one of the important physiological factors affecting the reproductive performance of beef cows.
Calving is the basic factor determining the profit of beef cow production [4]. Provided that a basic calving frequency and calving quality are met [5], the management plan for calving cows should differ by parity. For example, primiparous cows need more nutrients to maintain their own development, while multiparous cows have higher postpartum morbidity and need preventive care [6]. Predicting the risk of postpartum disease and evaluating the calving quality of cows of different parities can effectively prolong the profitable period of a beef cow [7,8]; reduce the number of replacement cows that must be fed and the costs of semen consumption and herd renewal [9,10]; and, with a constant herd size, allow pastures to carry out a higher proportion of active culling [11], increasing the intensity of selection [12] and thereby accelerating the genetic progress of herds [13], while also reducing the costs of ranching operations and veterinary drugs [14].
Japanese black cattle are well-known for producing high-quality beef with a high intramuscular (marbling) fat content, which has been improved through genetic selection over the last half-century [15]. Previous research on Japanese black cattle has concentrated on genetic traits and management strategies to increase high-quality beef yields [7,16,17]. In fact, the effect of cattle genetic improvement through breeding will be more far-reaching and lasting, but it is slower than that of management decision-making. Appropriate breeding decision-making schemes can maximize the potential of herds in a limited time, improve economic efficiency, and accelerate genetic advance. In this study, we look at RNA-seq transcriptome profiles to find candidate genes in the blood of beef cows of various parities. We believe that these findings will aid in our understanding of the molecular mechanisms of early growth and development, as well as postpartum diseases in beef cattle of various parities, and will serve as a reference for decision-making programs for reproductive advantages of cows of various parities.
Laboratory Animals and Feeding Management
A total of 50 Japanese Black cattle of different parities were used in this study (Supplementary Table S1). The cows were raised under similar feeding strategies and conditions at the Inner Mongolia Mengdelong Dairy Farm (Hohhot City, Inner Mongolia Province). There were no restrictions on the food and water given to the animals in this experiment; they were all raised in the same batch and fed the same diet.
Sequencing Data Analysis
Adaptors, raw reads with over 5% unknown nucleotides, and other low-quality reads with quality scores of Q20 or lower were removed from the raw reads to obtain high-quality clean reads. Subsequently, the clean reads were mapped to the reference genome (Bos taurus ARS-UCD1.2) by HISAT2 (v2.2.1, https://daehwankimlab.github.io/hisat2/, accessed on 5 May 2022) [18]. The alignment of effective reads to gene regions was calculated using genomic location information from the bovine reference genome. SAMtools (v1.9, http://samtools.sourceforge.net/, accessed on 15 May 2022) was used to sort the BAM alignment files generated by HISAT2. StringTie (v2.1.1, https://ccb.jhu.edu/software/stringtie/index.shtml, accessed on 15 May 2022) was used to estimate read counts and normalize reads as FPKM (fragments per kilobase of exon model per million mapped fragments) for each sample [19,20].
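To make the FPKM step concrete, here is a minimal Python sketch of the calculation StringTie performs internally; the function, sample names, and toy counts are hypothetical, and the real pipeline derives them from the BAM files.

```python
import pandas as pd

def fpkm(counts: pd.DataFrame, gene_lengths: pd.Series) -> pd.DataFrame:
    """FPKM = fragments * 1e9 / (total mapped fragments * exonic length in bp).

    counts: genes x samples matrix of fragment counts.
    gene_lengths: exonic length in base pairs, indexed like `counts`.
    """
    per_million = counts / counts.sum(axis=0) * 1e6      # fragments per million mapped
    return per_million.div(gene_lengths / 1e3, axis=0)   # then per kilobase of gene

# Toy example (hypothetical gene lengths and counts)
counts = pd.DataFrame({"cow_01": [120, 30], "cow_02": [90, 45]},
                      index=["RUNX2", "PDE4A"])
lengths = pd.Series({"RUNX2": 2200, "PDE4A": 3100})
print(fpkm(counts, lengths))
```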
Short Time-Series Expression Miner Analysis
Short time-series expression miner (STEM) analysis is a well-validated and widely used bioinformatics approach that identifies statistically significant time-dependent gene expression profiles and the significantly enriched biological pathways within them [21]. An initial step of STEM is to select a distinct set of temporal model profiles that represent potential time-dependent gene expression patterns.
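The matching step at the heart of STEM can be sketched as follows; this is not the STEM implementation itself, only an illustration of assigning each gene's temporal profile to the most similar model profile by Pearson correlation, and the array shapes are assumptions.

```python
import numpy as np

def assign_model_profiles(expr: np.ndarray, profiles: np.ndarray) -> np.ndarray:
    """Assign each gene to its best-matching temporal model profile.

    expr: genes x timepoints (e.g., mean expression per parity stage).
    profiles: n_profiles x timepoints predefined model profiles.
    Returns the index of the closest profile for each gene.
    """
    def z(a: np.ndarray) -> np.ndarray:
        return (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)

    # Pearson correlation of every gene against every model profile
    corr = z(expr) @ z(profiles).T / expr.shape[1]
    return corr.argmax(axis=1)
```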
Co-Expression Network Analysis
WGCNA can be used to investigate gene-gene relationship patterns and measure the correlation between modules and targeted traits, and it has become a popular method for identifying candidate biomarkers. In the current study, hierarchical clustering based on gene expression levels (FPKM) was performed to reduce the effect of outlier samples on the network analysis. Using the TBtools R package, the top 15,000 genes with the lowest MAD were used to build a co-expression network [22]. An appropriate soft-thresholding power for scale-free network construction was determined when the fit index reached 0.8. The topological overlap matrix (TOM) for network interconnectedness was then calculated using the average linkage hierarchical clustering method. A dynamic tree-cutting algorithm was used for module division, with a minimum of 40 genes per module and a threshold of 0.25 for merging similar modules. Furthermore, we identified stage-specific modules with strong correlations between GS and MM values (p-value < 0.05) and highly correlated module-trait relationships (correlation coefficient > 0.5) [23].
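The gene-filtering and soft-thresholding logic can be sketched in a few lines. The study used R packages (TBtools and WGCNA); the Python below only mirrors the idea, `expr` is assumed to be a samples x genes FPKM table, and note that typical WGCNA workflows keep the most variable genes, so sorting by descending MAD here is an assumption.

```python
import numpy as np
import pandas as pd

def top_genes_by_mad(expr: pd.DataFrame, n: int = 15000) -> pd.DataFrame:
    """Keep the top-n genes ranked by median absolute deviation (MAD)."""
    mad = (expr - expr.median(axis=0)).abs().median(axis=0)
    keep = mad.sort_values(ascending=False).index[:n]   # descending: most variable first
    return expr[keep]

def soft_threshold_adjacency(expr: pd.DataFrame, power: int = 5) -> np.ndarray:
    """Unsigned WGCNA-style adjacency: a_ij = |cor(x_i, x_j)| ** power."""
    corr = np.corrcoef(expr.to_numpy(), rowvar=False)   # gene-gene correlation matrix
    return np.abs(corr) ** power
```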
Exon Wide Selection Signature
Association analysis at the transcriptome level was used to assist in screening important functional genes. Although the number of SNPs in the coding region is small, any base change in an exon may affect protein translation and the expression of biological traits; SNPs are therefore of great significance in the study of trait expression and disease. The Exon Wide Selection Signature is an SNP screening method at the transcription level that detects an index of population differentiation: Fst is used to screen selection signals among populations and then identify selection signal regions and important genes related to target traits. The population differentiation index was calculated with the vcftools program; loci with negative Fst were eliminated, and SNPs with Fst > 0.15 were retained as selection signals among beef cows of different parities.
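The Fst screen can be illustrated with a short post-processing sketch. vcftools (run with two `--weir-fst-pop` group files) writes a tab-separated per-site table with columns CHROM, POS, and WEIR_AND_COCKERHAM_FST; the file names below are hypothetical.

```python
import pandas as pd

# Load the per-site Fst table produced by vcftools for two parity groups
fst = pd.read_csv("nulliparous_vs_multiparous.weir.fst", sep="\t")

fst = fst.dropna(subset=["WEIR_AND_COCKERHAM_FST"])
fst = fst[fst["WEIR_AND_COCKERHAM_FST"] >= 0]          # discard negative Fst estimates
selected = fst[fst["WEIR_AND_COCKERHAM_FST"] > 0.15]   # keep highly differentiated SNPs

print(f"{len(selected)} candidate SNPs retained")
selected.to_csv("selected_snps.tsv", sep="\t", index=False)
```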
Data Analysis of Transcriptome
A total of 1246 million clean reads from 50 samples were generated by RNA-seq. After aligning the clean reads to the cattle reference genome (ARS-UCD1.2), the mapping rate was approximately 95.35% (ranging from 93.73% to 96.13%; Supplementary Table S1). We observed 24,577 genes expressed across the 50 samples (Supplementary Table S2). By STEM analysis, we found that gene expression changed greatly across the four parities and divided all beef cows into three main stages: nulliparous, primiparous, and multiparous (Figure 1). In spite of differences in sample characteristics, gene expression profiles clustered samples from the same group (Figure 2A).
Differentially Expressed Genes across Three Periods
We examined the differences in gene expression among samples from the three periods. In the primiparous vs. nulliparous comparison, 1010 DEGs were identified, including 457 upregulated genes and 553 downregulated genes (Figure 2B and Supplementary Table S3). In the multiparous vs. primiparous comparison, a total of 1060 DEGs were identified, including 616 upregulated genes and 444 downregulated genes (Figure 2C and Supplementary Table S4). In the multiparous vs. nulliparous comparison, 2245 DEGs were found, with 1182 upregulated genes and 1063 downregulated genes (Figure 2D and Supplementary Table S5).
Co-Expression Network Construction and Module Detection
We used WGCNA to investigate the relationships among the three time periods to better understand gene function. According to the results of scale independence and mean connectivity, we took the power as 5, at which the correlation index reached 0.85 (Figure 3A), and the 15,000 genes were divided into 19 modules (Figure 3B,C). The scale-free network was built with the blockwiseModules function, with 51 genes in the light yellow module and 1439 in the purple module.
Period-Specific Module Identification
To investigate the period-specific modules, the GS and MM of all genes in each module were calculated over the three periods. According to our definitions, GS is the correlation between a gene and its development period, and MM is the correlation between a gene's expression profile and its module. A strong correlation between GS and MM (p-value < 0.05) demonstrates that genes strongly associated with a trait are frequently the most important components of the modules associated with that trait. Finally, we found six period-specific modules (average module-trait relationship > 0.5 and p < 0.05), of which the midnight blue, light green, and grey modules were positively correlated with the nulliparous period and the light cyan module was positively correlated with the multiparous period. In contrast, the light cyan module was negatively correlated with the nulliparous period, and the light green module was negatively correlated with the multiparous period (Figure 4).
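As a rough illustration of how GS and MM can be computed (the study used WGCNA in R; this Python sketch assumes `expr` is a samples x genes table and the trait is encoded as a numeric per-sample vector, e.g., a 0/1 period indicator):

```python
import numpy as np
import pandas as pd

def gene_significance(expr: pd.DataFrame, trait: pd.Series) -> pd.Series:
    """GS: correlation of each gene's expression with the trait."""
    return expr.apply(lambda gene: gene.corr(trait))

def module_membership(expr: pd.DataFrame, module_genes: list) -> pd.Series:
    """MM: correlation of each module gene with the module eigengene (1st PC)."""
    sub = expr[module_genes].to_numpy()
    sub = (sub - sub.mean(axis=0)) / sub.std(axis=0)          # standardize each gene
    u, s, _ = np.linalg.svd(sub, full_matrices=False)
    eigengene = pd.Series(u[:, 0] * s[0], index=expr.index)   # PC1 sample scores
    return expr[module_genes].apply(lambda gene: gene.corr(eigengene))
```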
Function Enrichment Analysis
A PPI network and Metascape were used to identify potential genes, and Cytoscape was used to visualize them. We present the significantly enriched GO terms and the pathways associated with each module in Figure 5. Detailed information on the GO terms and pathways is shown in Figures 6 and 7 and Supplementary Table S7. For this study, the important pathways identified were Pathways in cancer, MAPK signaling pathway, MicroRNAs in cancer, PI3K-Akt signaling pathway, Platinum drug resistance, Cell cycle, Chemokine signaling pathway, Transcriptional misregulation in cancer, Hippo signaling pathway, Progesterone-mediated oocyte maturation, and Cytokine-cytokine receptor interaction. Multiple significant GO terms are related to cell cycle, positive regulation of cell proliferation, cell division, mitotic cell cycle, immune response, neuron differentiation, positive regulation of ERK1 and ERK2 cascade, positive regulation of gene expression, mitotic spindle assembly checkpoint, chromosome segregation, and DNA replication.
Exon Wide Selection Signature
In this study, SAMtools and BCFtools were used to screen SNP sites in cows of different parities. Based on the population differentiation index Fst, the differences in each genetic variant and its frequency between groups were compared, and highly differentiated SNP sites were analyzed. We screened 2541, 2744, and 1763 highly differentiated SNP loci across the different parity comparisons (Figure 8). The selected SNPs were then mapped to the reference genome (Bos taurus ARS-UCD1.2); with each highly differentiated SNP region as the center, 1-Mb extensions upstream and downstream were used as new candidate regions. Finally, we mapped 723, 945, and 548 genes over the three periods (Supplementary Table S6).
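Mapping the retained SNPs to candidate genes is an interval lookup; a minimal pandas sketch is below, where `snps` (CHROM/POS) comes from the Fst screen and `genes` (CHROM/START/END/GENE) would come from the ARS-UCD1.2 annotation (the column names are assumptions for illustration).

```python
import pandas as pd

def genes_near_snps(snps: pd.DataFrame, genes: pd.DataFrame,
                    flank: int = 1_000_000) -> pd.Series:
    """Return genes whose span overlaps a +/-1 Mb window around any SNP."""
    pairs = snps.merge(genes, on="CHROM")               # same-chromosome SNP-gene pairs
    in_window = (pairs["END"] >= pairs["POS"] - flank) & \
                (pairs["START"] <= pairs["POS"] + flank)
    return pairs.loc[in_window, "GENE"].drop_duplicates()
```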
Top Genes Expressed in Three Periods
In this study, we found that DEGs in the nulliparous vs. primiparous comparison were mainly concentrated in signaling pathways such as Oxidative phosphorylation, Pathways of neurodegeneration-multiple diseases, and Thermogenesis. In the primiparous vs. multiparous comparison, they were mainly concentrated in signaling pathways related to postpartum disease, such as Salmonella infection, the AMPK signaling pathway, Transcriptional misregulation in cancer, and the Insulin signaling pathway. This is highly consistent with the postpartum disease patterns we observed previously. Based on the DEG, WGCNA, and EWSS results, we suggest TPCN2, KIF22, MICAL3, RUNX2, PDE4A, TESK2, GPM6A, POLR1A, and KLHL6 as promising candidate genes for cows of different parities (Figure 9).
Discussion
In the livestock industry, reproductive traits are important economic traits, and they are influenced by both genetics and the environment. In this experiment, the beef cows were all raised in the same environment. In this study, we found several notable genes: TPCN2 in the nulliparous vs. primiparous group; KIF22, MICAL3, RUNX2, and PDE4A in the primiparous vs. multiparous group; and TESK2, PDE4A, GPM6A, POLR1A, KLHL6 and RUNX2 in the nulliparous vs. multiparous group. Both RUNX2 and PDE4A were specifically expressed in the primiparous and multiparous groups, and they act together on the parathyroid hormone synthesis, secretion, and action signaling pathway. This pathway has been demonstrated to be involved in translation initiation and the translation process, which are critical in the growth, metabolism, reproduction, and aging of organisms [27].
RUNX2, also known as CCD or AML3, is located on chromosome 6. The nuclear protein encoded by this gene contains a Runt DNA-binding domain and is a member of the RUNX family of transcription factors. This protein is required for osteoblast differentiation and skeletal morphogenesis and acts as a scaffold for nucleic acids and regulatory factors involved in skeletal gene expression. This gene has also been associated with carcass and growth-related traits in broilers [28] and goats [29]. According to some findings, RUNX2 may play an oncogenic role in esophageal carcinoma by activating the PI3K/AKT and ERK pathways [30]. PDE4A, also known as DPDE2 or PDE46, is located on chromosome 19. The protein encoded by this gene belongs to the cyclic nucleotide phosphodiesterase (PDE) family and the PDE4 subfamily. This PDE hydrolyzes the second messenger cAMP, which is a regulator and mediator of a number of cellular responses to extracellular signals. PDE4 inhibitors might be useful therapeutic targets for myelodysplastic syndromes (MDS) [31]. PDE4A-transgenic T-cells were also partially protected from regulatory T-cell suppression [32]. Furthermore, PDE4A effectively suppressed PGE2-mediated upregulation of the inhibitory surface markers CD73 and CD94 on CD8 T-cells [33].
In the nulliparous vs. primiparous group, we identified only one gene. TPCN2, also known as TPC2 or SHEP10, is located on chromosome 11. TPCN2 protects mildly against diet-induced weight gain, and this protection is likely independent of glucose tolerance, insulin sensitivity, and fasting plasma and hepatic lipid levels [34]. The endolysosomal two-pore cation channel TPCN2 is a key factor in neovascularization and immune activation [35]. TPCN2 acts on autophagy progression and extracellular vesicle transport in cancer cells [36]. KIF22, on chromosome 11, is also known as OBP or KNSL4. This gene encodes a protein that belongs to the kinesin-like protein family and plays an important role in metaphase chromosome alignment and maintenance. KIF22 may be involved in the regulation of cell proliferation in colon cancer [38]. In addition, it is highly expressed in pancreatic cancer; it regulates the MEK/ERK/P21 signaling axis and promotes the cell cycle and the development of pancreatic cancer [39]. MICAL3, on chromosome 22, is mainly involved in actin binding and actin filament depolymerization. MICAL3 knockout results in an increased frequency of cytokinetic failure and delayed abscission. MICAL3 directs the adaptor protein ELKS and Rab8A-positive vesicles to the midbody via a mechanism unrelated to its enzymatic activity, and ELKS and Rab8A deficiency causes cytokinesis defects [40].
The KLHL6 gene on chromosome 3 encodes a protein that is involved in B-lymphocyte antigen receptor signaling and germinal-center B-cell maturation [41]. TESK2 is a serine/threonine protein kinase with an N-terminal protein kinase domain that is structurally similar to the kinase domains of testis-specific protein kinase-1 and LIM motif-containing protein kinases [42]. GPM6A, also known as M6A, is located on chromosome 11. It is predicted to enable calcium channel activity and is involved in neuron migration and stem cell differentiation. POLR1A, also known as A190, RPA1, and RPO14, is located on chromosome 2. The protein encoded by this gene is the largest subunit of the RNA polymerase I complex and is a catalytic subunit of the complex that transcribes DNA into ribosomal RNA precursors. Tp53-dependent neuroepithelial apoptosis, decreased neural crest cell proliferation, and cranioskeletal abnormalities can all result from its disruption [43].
Conclusions
In this study, blood transcriptome analysis was performed on Japanese Black cattle of different parities. DEG, EWSS, and WGCNA analyses revealed that TPCN2, KIF22, MICAL3, KLHL6, and POLR1A might serve as candidate genes for postpartum diseases, while RUNX2, PDE4A, TESK2, and GPM6A could be used as candidate genes for early growth and development. These findings provide a theoretical basis for beef cattle breeding evaluation.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/genes13091671/s1, Table S1: Number of reads, and number of aligned reads, per cattle sample;
Internationalization initiatives of Taiwan's higher education: a stepping stone to regional talent circulation or reproduction of unbalanced mobility scheme?
Purpose – As an emerging market of international education, Asian countries ambitiously launched internationalization initiatives and strategies to attract international talent. Since the 1990s, Taiwan's government has implemented various internationalization policies. Partly affected by the political forces of neighboring China, Taiwan's government launched the New Southbound Policy (NSP) as the main regional strategy in 2016. One of the aims of this strategy was to promote mutual talent mobility between Taiwan and New Southbound Policy countries (NSPC). The purpose of this study is to explore how the NSP influences the student mobility scheme in Taiwan. Design/methodology/approach – This study adopted qualitative document analysis to investigate and compare the major Asian countries' internationalization focus and summarize Taiwan's internationalization development process and policy priorities. Moreover, a qualitative approach was adopted in order to collect data from 2005 to 2018 to examine Taiwan's student mobility scheme under the policy change. Findings – Under the influence of the NSP after 2016, the student mobility scheme between Taiwan and NSPC could be categorized into five categories in accordance with the mobility rate. Although the nation-driven policy was considered powerful, the unbalanced flow between Taiwan and NSPC became severe. Research limitations/implications – The study lacked statistics on the degree level of outbound Taiwanese students going to NSPC. It could not compare the student mobility scheme between Taiwan and NSPC by degree level. Originality/value – The research looked at the initiatives Asian countries have developed in order to raise higher education internationalization and regional status, which shed light on the national/regional approaches under the global change.
Introduction
The number of global international students has exponentially increased over the past four decades, having risen from 0.8 million in the late 1970s to 4.6 million in 2015 (Organisation for Economic Co-operation and Development [OECD], 2017). Furthermore, it is predicted to reach 8 million in 2025 (ICEF Monitor, 2017). In this study, talent refers to international students who study cross-nationally (UNESCO-UIS/OECD/Eurostat, 2019). Such mobility has two distinct facets across the bachelor's, master's and doctoral stages in higher education: credit mobility and degree mobility (King et al., 2011). Since the onset of global student flows, the center of the international student market has long been dominated by Western/English-speaking countries, which are also traditionally economically strong countries (Altbach and Knight, 2007; Chen and Barnett, 2000; Prazeres, 2013). The conformity of academic hegemony with world economic and political performance was considered consistent with World System Theory (Chen and Barnett, 2000).
The high performance of US and British universities in world rankings particularly reflected Anglo-American academic hegemony; however, universities in the Asia Pacific region that were rising in the world rankings were predicted to soon overtake Anglo-American academic hegemony (Jöns and Hoyler, 2013). The potentially growing number of Asian students was another noteworthy phenomenon besides the growing reputation of higher education institutions. According to the OECD (2019), students from Asia accounted for the largest share of global international students across the OECD in 2017, approximately 56%. Asian countries were recognized as new study destinations for international students (Chan, 2012; Chen and Barnett, 2000; Jon et al., 2014; Jöns and Hoyler, 2013), reflecting the emerging pattern of international student mobility (Collins, 2013).
Since 2010, major Asian countries, in particular Singapore, South Korea, Hong Kong and Malaysia, have adopted the internationalization strategy of becoming an "education hub" to enhance global competitiveness (Dessoff, 2012). The education hub policy highlighted Asian countries' ambitions of recruiting students globally and raising their status and competitiveness regionally (Knight and Morshidi, 2011).
Taiwan, as one of the major Asian countries, also aimed to be an education hub. In 2016, the Taiwanese government launched the New Southbound Policy (NSP) as the new internationalization strategy to promote regional talent cultivation and sharing (Executive Yuan of Taiwan, 2016a; 2016b), with the ambition to expand and strengthen connections with Southeast Asian countries and to re-position Taiwan (Office of the President of Taiwan, 2016). In 2017, the New Southbound Talent Development Program (2017–2020) was proposed, with the aim of promoting mutual talent mobility between Taiwan and New Southbound Policy countries (NSPC) [1]. Therefore, this study looks at the following three research questions:
RQ1. What is the student mobility scheme between Taiwan and NSPC by country after the implementation of the NSP in 2016?
RQ2. What is the student mobility scheme between Taiwan and NSPC by degree level after the implementation of the NSP in 2016?
RQ3. How might the NSP affect the current student mobility relationship between Taiwan and NSPC after 2016?
2. Asia as the leading force in reshaping the global flow of international students
Since higher education is an integrated space, any change in any location could subsequently lead to another change (Li and Bray, 2007). Global players in the international student market also have a dynamic dependency relationship. Both the economic and political growth of peripheral countries and the changing role of Asian countries from sending countries to host countries affected the traditionally assumed relationship (Waters, 2012). The simplistic dichotomies of mobility were challenged by juxtaposed multi-centered countries globally. South-to-south mobility, or regional circulation, deconstructed traditional international student mobility (Madge et al., 2014). Asian students contributed to global student mobility growth over the past three decades, especially those from low-income and middle-income countries. Ziguras and Pham (2014) stated that students from countries with relatively low income were more inclined to study abroad for a few years to obtain a degree. Kuroda et al. (2018) concluded that outbound Asian students have grown steadily since the late 1990s and more than tripled from 771,496 in 1999 to 2,328,887 in 2015. Furthermore, intra-regional mobility, rather than students from Western countries, contributed to the rapid growth of Asia's inbound students. It could be said that the de facto regionalization of international student mobility and the intensive concerns of countries have made Asia both the source and hub for international students since the turn of the 21st century.
The regional change corresponded with the UNESCO Institute for Statistics' research, which stated that there were two major shifts in international student mobility: (1) students preferred to choose destination countries closer to home and (2) regional hubs attracted a great share of global students and became ideal destinations for regional students (UNESCO Institute for Statistics, 2016). China and Japan were reported as traditional destinations of international students, while Malaysia, South Korea and Thailand were predicted to be hot spots as international students' new choices (Kuroda et al., 2018). Regardless of whether they have been included in the rankings or not, most Asian countries, for example China, Japan, Taiwan, South Korea and Malaysia, actively transformed themselves from pure traditional senders to top destinations in the international student market (Luo, 2017).
At the macro level, Asia's growing economic and political power has reshaped global international student mobility. This is especially the case where comparably high-quality education with relatively affordable fees, a promising employment environment and job opportunities attracted more international students, particularly within the region (QS Asia News Network, 2017). According to a recent QS report, an increasing number of prospective Asian international students saw graduate employability as a major concern. It was reported that these students continued to value the quality of education but also took into account career support from universities and a university's reputation among employers when choosing which universities to apply for (Quacquarelli Symonds, 2019).
3. Major Asian countries view international students as the panacea for domestic development
Embedded in the globalization context driven by economic and academic forces in the 21st century, the national, regional, institutional and individual levels must adopt certain policies or practices in response to the global academic environment, so-called "internationalization" (Altbach and Knight, 2007). Countries could adopt strategies as approaches to the internationalization of higher education. These strategies could include student study abroad programs, recruitment of international students, cooperation with overseas universities, development of education centers, or international curricula or journal publications (Ho et al., 2015). As Japan and Korea are both relatively well developed Asian countries and share a similar background with Taiwan (see Table 1), their talent recruitment policies can to some extent reflect national attitudes toward regional student mobility and domestic environmental changes in the Asian region. Unlike Japan and Korea, which target global talent, Taiwan grasped regional students' growing interest in the Asian region and tried to use its advantages to build connections with regional countries and attract regional talent.
Japan was one of the leading countries in the development of higher education in Asia. Since 1983, Japan has actively evolved and expanded cross-border academic activities (Luo, 2017). The Japanese government devised the "100,000 Foreign Students Plan" to recruit international students as its main strategy for higher education internationalization (Sugimura, 2016); its goal was to receive 100,000 students by 2000 (McVeigh, 2015). Unlike the previous policy direction of an aid mentality toward developing countries in the region, after the 2000s the Japanese government, facing a demographic crisis, used international students to supplement skilled labor and activate domestic university reform (Sanders, 2018). In other words, the shortfall of domestic university enrollments caused by the demographic crisis and the oversupply of universities caused by rapid expansion from the early 1990s to 2014 accelerated the deterioration of Japan's higher education (Luo, 2017; Sanders, 2018).
In order to improve higher education in Japan, the Japanese government launched the "Top Global University" initiative in 2014 as its main internationalization policy, to last until 2023. The aim of this initiative was to accelerate the internationalization of Japanese universities and to gain a higher rank in the global rankings (MEXT of Japan, 2015). The policy focused on (1) university reform and (2) global human resources development (MEXT of Japan, 2020a) in order to enhance global competitiveness (Shimmi and Yonezawa, 2015). Alongside these two points of focus was the goal of doubling international students from 36,545 in 2013 to 73,536 in 2023 in order to diversify the student population in universities (MEXT of Japan, 2020b). By 2016, the policy had achieved considerable success, which could be attributed to a diversified recruitment strategy targeting regional countries (Luo, 2017).
The South Korean government initiated The General Plan for Promoting Recruitment of International Students in 2001 as its first international student policy to promote the internationalization of higher education. In 2004, with a new goal of becoming the northeast Asian hub, South Korea proposed The Study Korea Project, followed by The Development Plan for Study Korea Project in 2008, to expand the recruitment of international students (Bae, 2015). The inbound mobility rate reached its peak in 2007 at 51.3% and then decreased gradually (Ko, 2012). Due to the declining sustainable population, the South Korean government launched the Study Korea 2020 Project as the latest policy and announced its intention to increase the number of international students from 90,000 in 2011 to 200,000 in 2020 (Korean Association of International Educators, 2013).
The Study Korea 2020 Project expanded the scope of the Global Korea Scholarship Program (GKSP) (Korean Association of International Educators, 2013) and focused more on the supporting system, employment and quality management for international students (Ko, 2012). Although the number of international students decreased for three consecutive years from 2012 to 2014 (Luo, 2017), the government kept the ambitious target of recruiting international students but postponed the deadline to 2023 (Mani, 2018). With a declining birth rate, the freshman-aged population dramatically decreased after 2010 (Kwon, 2013), a decline predicted to leave 160,000 surplus university places by 2023 (ICEF Monitor, 2015). In other words, due to the low birth rate and the decrease in domestic college students (Kwon, 2013), international students were seen as a remedy for the low university enrollment rate and as an intensifier of higher education's international competitiveness in South Korea (Mani, 2018).
4. The context of Taiwan from a historical perspective
4.1 Building the international environment of universities
Taiwan's internationalization of higher education can be traced back to the 1990s. At that time, Taiwan's higher education underwent vast development in regard to both student and institution growth (Hou, 2012). Since the late 1980s, policies had been directed toward denationalization, decentralization and autonomization of higher education institutions (Mok, 2000), and Taiwan's government revised the University Law in 1994 to empower higher education institutions. Universities had much more academic freedom and university autonomy in terms of controlling education affairs (MOE of Taiwan, 2001; Mok, 2006).
Directly following the revision of the University Law, Taiwan's government launched The Education Report of the Republic of China: Toward the Education Vision of the 21st Century in 1995 to give universities the responsibility of expanding academic exchanges and foreign cultural and educational relations (MOE of Taiwan, 1995). Not long after, in 2001, the White Paper of University Education was released, marking the beginning of universities' active involvement in internationalization affairs. The white paper stated that universities should formulate a budget for promoting international communication and cooperation to improve the international environment of universities (MOE of Taiwan, 2001). At this stage, the recruitment of international students served the purpose of actively strengthening academic, cultural and educational exchanges with foreign countries and enhancing the international competitiveness of domestic universities (Executive Yuan of Taiwan, 2002).
4.2 Developing world-class universities and expanding recruitment of international students
Moving toward the target of becoming world-class universities, the government issued a series of national programs comprised of the Promotion of University Teaching Excellence Program (2005–2016), the Development Plan for World-Class Universities and Research Centers for Excellence and the Aim for Top University Project to pursue excellence. On the basis of a competition mechanism, the specific goals included (1) accelerating the internationalization of top universities and expanding students' horizons; (2) enhancing the quality of research, development and innovation in universities and strengthening international academic influence; (3) recruiting and cultivating talent to build up a human resource pool and (4) training top talent in response to social and industrial needs (Wang, 2014).
It was not until 2011 that the government announced its determination to expand international student recruitment, under the pressure of growing trends including an aging population, a low birth rate and the active recruitment of international students by traditional exporting countries, such as European countries and the United States, as well as neighboring countries, such as Japan, South Korea and China. Different from the previous stage, the recruitment of international students at this phase not only focused on the improvement of university education quality but also aimed at solving the insufficient enrollment problem of some private universities (MOE of Taiwan, 2011).
4.3 Cultivating domestic talent and exporting education to pursue the East Asian higher education hub
To construct Taiwan as the East Asian higher education hub, the government proposed to increase the number of international students from 56,135 in 2011 to 150,000 in 2021 (Executive Yuan of Taiwan, 2012). At that time, both recruiting and cultivating talent were necessary for Taiwan, due to the fading domestic atmosphere of studying abroad and the external threat of the rapid rise of China, India and South Korea (MOE of Taiwan, 2013). The government began to develop students' global mobility competence and to cultivate talented students who were familiar with Southeast Asian countries in order to enhance Taiwan's economic influence and establish an overseas base for future development (MOE of Taiwan, 2016).
4.4 Promoting talent circulation and responding to regional education needs
Recently, confronting political and diplomatic pressure from neighboring China, Taiwan's government declared that it would reduce its economic reliance on China and increase trade relations with regional countries (Chiang, 2018;ICEF Monitor, 2016;Hawksley, 2019). Beginning with the implementation of the New Southbound Talent Development Program (2017-2020), Taiwan changed its policy practice from one-way recruitment of international students to mutual talent mobility and co-cultivation with regional countries. Considering the educational and industrial needs of NSPC, Taiwan provided customized professional training courses and Taiwan scholarships to actively attract outstanding talent. Domestically, Taiwan also encouraged young people to participate in exchanges and perform research or internships in NSPC to achieve the goal of regional talent sharing and circulation (MOE of Taiwan, 2017).
Overall, Taiwan's internationalization of higher education could be categorized into four stages according to the main target of national policies: (1) from 1994 to 2002: building the international environment of universities; (2) from 2003 to 2011: developing world-class universities and expanding the recruitment of international students; (3) from 2012 to 2016: cultivating domestic talent and exporting education to pursue the East Asian higher education hub goal and (4) after 2017: promoting talent circulation and responding to regional education needs. Different from Japan and Korea's internationalization policy direction toward global international student recruitment, Taiwan's latest internationalization policy mainly targeted regional students. Taiwan's internationalization policy shift could be viewed as the embodiment of the regionalization of international student mobility, which also reflects national/regional initiatives under the global change.
5. Research method
A qualitative document analysis approach was adopted to summarize Taiwan's internationalization development process and investigate the government's phased development focus (see Table 2). Official policy documents were collected from 1994 to 2018. Moreover, a qualitative approach was adopted to collect Taiwan's governmental data from 2005 to 2018 in order to evaluate the trend of the student mobility scheme between Taiwan and NSPC and investigate the mobility scheme change after the implementation of the NSP in 2016.
6. Major findings
Generally speaking, the NSP was a turning point in Taiwan's higher education internationalization policy, achieving success after 2016. As Figure 1 shows, there was a significant growth in students from NSPC from 33% in 2016 to 41% in 2018. By analyzing the governmental data, this study drew five major findings in regard to the mobility scheme between Taiwan and NSPC.
6.1 Mutual mobility scheme between Taiwan and NSPC became more unbalanced after 2016
As the New Southbound Talent Development Program (2017-2020) stated, promoting talent sharing and circulation was one of the main goals of talent cultivation (MOE of Taiwan, 2017). However, this seemed to be a difficult goal. Although students from NSPC have shown gradual growth in recent years, Taiwan's students seemed uninterested in going to NSPC. It could be said that the mobility scheme between Taiwan and NSPC became more unbalanced after the implementation of the NSP in 2016 (see Figure 2). One of the reasons for this may be the economic gap; only four of the NSPC were high-income economies economically equivalent to Taiwan (see Table 3). According to statistics from the Taiwanese government, the study found that Taiwanese students have mostly chosen to study in Australia and New Zealand rather than other NSPC during the past decades (see Figure 3).
6.2 Inbound students from Vietnam, Indonesia, Thailand, India and the Philippines had rapid growth after 2016
Figure 4 shows the inbound students from NSPC from 2005 to 2018. For more than a decade, Malaysia, Vietnam and Indonesia have been the top three home countries of international students among all NSPC. The ranking appeared to be the same as in previous years; however, the growth rate changed dramatically after the issue of the NSP in 2016, particularly for Vietnam, Indonesia, Thailand, India and the Philippines. The largest increase was in students from Vietnam. The inbound rate may indicate Vietnam's potential as an international student market, as Figure 5 shows. The rapidly increasing number of overseas programs demonstrated Vietnamese students' growing need for higher education.
6.3 The percentage of non-degree-oriented students from NSPC to Taiwan increased after the implementation of the NSP
Generally speaking, students from NSPC came to Taiwan over the past few years mostly to undertake a degree. Degree-oriented students invariably accounted for more than 60% of all students from NSPC annually. However, after the implementation of the policy, the number of non-degree-oriented students skyrocketed from 12,790 in 2016 to 23,647 in 2019 (see Figure 6). Responding to the government's goal, the NSP customized many short-term vocational courses to provide professional training for students from NSPC (MOE of Taiwan, 2017).
6.4 The growth rate of students doing a bachelor's degree ranked first among inbound students from NSPC
Figure 7 shows the distribution of students from NSPC studying at different education levels in Taiwan. At the bachelor's level, the percentage of students rose the fastest, especially students from South and Southeast Asian countries. Although attracting such students was one of the major goals (Glaser et al., 2018), the statistics showed that there is still much room for improvement. For talent recruitment, the government needs to improve education quality or present other incentives to attract top talent from NSPC.
6.5 Student mobility scheme between Taiwan and NSPC under the impact of the NSP after 2016
By analyzing the governmental statistics, the study calculated the inbound rate of students from NSPC and the outbound rate of Taiwanese students to NSPC separately for an overall comparison. Focusing on the period after 2016 and on the changes in mobility rate, the study categorized the student mobility scheme between Taiwan and NSPC into five types and presented these categories in a quadrant graph (see Figure 8). Different mobility rates reflected different attraction levels between Taiwan and NSPC. Furthermore, the study visualized the changing student mobility rates in Figure 9 to better demonstrate the interactive relationship between Taiwan and NSPC after 2016.
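The five-type scheme in Figure 8 amounts to a simple decision rule on each country's mobility rates. A minimal Python sketch is below; the 0.3% floor for "no interaction" follows the text, while the exact inputs (rates and their post-2016 changes, in percent) are hypothetical.

```python
def classify_mobility(inbound: float, outbound: float,
                      d_inbound: float, d_outbound: float) -> str:
    """Assign a country to one of the five mobility types.

    inbound/outbound: mobility rates in percent; d_*: change after 2016.
    """
    if inbound < 0.3 and outbound < 0.3:
        return "Type 5: virtually no interaction"
    if d_inbound > 0 and d_outbound > 0:
        return "Type 1: growing inbound, growing outbound"
    if d_inbound < 0 and d_outbound > 0:
        return "Type 2: declining inbound, growing outbound"
    if d_inbound < 0 and d_outbound < 0:
        return "Type 3: declining inbound, declining outbound"
    return "Type 4: growing inbound, declining/unchanged outbound"

# Hypothetical example: a country whose inbound rate rose while outbound fell
print(classify_mobility(inbound=3.1, outbound=1.0, d_inbound=1.2, d_outbound=-0.2))
```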
6.5.1 First type of student mobility model (first quadrant) - Growing inbound rate with growing outbound rate: India and Thailand. In regard to the first type of student mobility model, countries located in the first quadrant indicated that student mobility was an attractive prospect for both Taiwan and NSPC. Students from NSPC and Taiwanese students became increasingly interested in studying on both sides after the implementation of the NSP in 2016. Even though there was only a slight increase in the number of outbound Taiwanese students to NSPC and inbound students from NSPC to Taiwan, an upward trend in the student mobility relationship between Taiwan and NSPC was still exhibited.

6.5.2 Second type of student mobility model (second quadrant) - Declining inbound rate with growing outbound rate: Australia and Malaysia. In regard to the second type of student mobility model, countries located in the second quadrant were more attractive to Taiwanese students; in contrast, students in these countries showed less interest in studying in Taiwan. The mobility model was reshaped by the policy and by changes in students' choices. For instance, Australia had long been ranked the first study destination among the 18 NSPC and continued to attract Taiwanese students after 2016. Even though there were still ups and downs, the ratio has been above 60% in the past few years (see Figure 10). Another emerging destination, Malaysia, has begun to attract an increasing number of Taiwanese students in recent years, even though the growth rate was still in its infancy (see Figure 11).

6.5.3 Third type of student mobility model (third quadrant) - Declining inbound rate with declining outbound rate: New Zealand and Singapore. In regard to the third type of student mobility model, countries located in the third quadrant indicated a lower attraction on both sides. The number of inbound students from NSPC and outbound students from Taiwan simultaneously declined after 2016. The inbound rate of students from NSPC to Taiwan was much lower, approaching zero. This type of student mobility scheme demonstrated that the policy strength of the NSP could not be exerted on students in these countries. In addition, New Zealand and Singapore are relatively advanced, in terms of economic development, among NSPC. Compared with other less developed NSPC, it might be assumed that Taiwanese students would have a relatively high interest in them. However, it was interesting to discover that there was weak interest from Taiwanese students toward New Zealand and Singapore (see Figures 12 and 13).

6.5.4 Fourth type of student mobility model (fourth quadrant) - Growing inbound rate with declining or unchanged outbound rate: Cambodia, Indonesia, Myanmar, Philippines and Vietnam. In regard to the fourth type of student mobility model, countries located in the fourth quadrant indicated a lower attractiveness to Taiwanese students; in contrast, Taiwan indicated a higher attractiveness to students from NSPC. Students from NSPC belonging to this type of student mobility model presented a high interest toward Taiwan. For instance, Figure 14 shows a large growth in Vietnamese students going to Taiwan after 2016.
However, fewer Taiwanese students chose NSPC as their study destinations. The gap between inbound students from NSPC and outbound Taiwanese students became larger after the NSP was implemented.
6.5.5 Fifth type of student mobility model (countries without interaction) - Bangladesh, Bhutan, Brunei Darussalam, Lao PDR, Nepal, Pakistan and Sri Lanka. The fifth type of student mobility model indicated that it was not an attractive prospect for students from Taiwan to study in Bangladesh, Bhutan, Brunei Darussalam, Lao PDR, Nepal, Pakistan or Sri Lanka, or for students from these countries to study in Taiwan. Among the 18 NSPC, seven countries had very little interaction with Taiwan. Most of these countries are South Asian countries, except Lao PDR, which is a Southeast Asian country. Both the inbound rate of NSPC students to Taiwan and the outbound rate of Taiwanese students to NSPC were lower than 0.3%, most often at 0%, even after 2016. The reason behind the figures, apart from certain academic or economic concerns, could be regarded as a lack of mutual understanding between Taiwan and NSPC (Sung and Lin, 2018). Thus, the policy could not play any role.

7. Discussion
7.1 A powerful nation-led policy for whom?
From the statistical data, the study found a greater influence of the Taiwanese government on students from NSPC than on Taiwanese students. Driven by Taiwan's policy, the inbound rate of students from NSPC grew rapidly after 2016. In contrast, the rate of outbound Taiwanese students to NSPC did not seem to be particularly influenced by the Taiwanese government. Sung and Lin (2018) stated that the "go out" policy seemed to have little impact compared to the "attracting" strategy. For students in relatively developed countries, such as Taiwan, academic development, economic development and students' personal factors may be a major concern in regard to studying abroad. However, Taiwan's better education development, cultural similarity, scholarship provision and geographic proximity may be attractive for some developing countries (Chang, 2017). As Kondakci (2011) pointed out, the pull force of highly economically developing countries was much stronger at the individual level than in terms of macro-level dynamics.
7.2 A remedy or a strong assistance?
For talent recruitment, it is always important to clarify the major motives. Since international students were viewed as a remedy for the demographic crisis in many Asian countries, the recruiting goal was mainly focused on growth in numbers; having more international students was considered more internationalized. However, in regard to long-term national construction or the establishment of human resources, governments should clarify the purpose of recruiting students at different education levels and enact appropriate recruitment strategies. Taking Taiwan's talent recruitment for instance, one of the major policy goals was to attract excellent students from NSPC. However, most of the incoming students were at the bachelor's level. These students may be an instant remedy for Taiwan's higher education but may not be the excellent talent Taiwan needs.
8. Conclusion
Over the past few decades, Taiwan's government has been committed to promoting the internationalization of higher education. In the internationalization policies of many Asian countries, international students were mainly viewed as a remedy for the low birth rate or low university enrollment ratio. Similarly, international students were viewed as an indicator for evaluating Asian countries' degree of internationalization. However, advanced talent is needed for national construction and development. How to strike a balance between the recruitment of international students and the establishment of a human resource pool may become a major concern for most Asian countries.
In order to promote talent sharing and circulation with regional countries, especially NSPC, Taiwan's government launched a series of scholarships and provided customized courses to attract students from NSPC. Statistics show that after the implementation of the policy, the number of students from NSPC going to Taiwan has increased. However, the mobility scheme between Taiwan and NSPC seemed to remain one-way, and the mutual mobility became more unbalanced than it was previously. In general, due to academic development, economic development and graduate employment considerations, Taiwanese students had less interest in studying in NSPC.
Despite the overall unbalanced mobility that occurred, the NSP changed the mobility scheme between Taiwan and NSPC. Using the quadrant graph, the study demonstrated the subdivided mobility scheme, which shed light on the Taiwanese government's internationalization policy targeting NSPC. Countries located in the different quadrants required distinct strategies, in terms of recruiting and on the domestic side, to accomplish the mutual mobility goal. Certainly, the nation-driven policy was effective in promoting talent mobility, but a comprehensive assessment of costs and benefits is required to achieve policy goals and stand out in the international market.
Recreational Angler Attitudes and Perceptions Regarding the Use of Descending Devices in Southeast Reef Fish Fisheries
Reducing discard mortality in recreational fisheries remains an important component of stock rebuilding for many reef fish species. Discard mortality for these species can be high due in part to barotrauma injury sustained during capture coupled with high catch rates, but recent advances in fish descending devices can mitigate some of these declines. Despite high survival rates with rapid recompression strategies, recreational angler opinions and perceived effectiveness of the devices are relatively unknown. This study surveyed the perceptions, opinions, and attitudes of 538 recreational anglers regarding the use of descending devices in the reef fish fisheries of the Gulf of Mexico and U.S. South Atlantic, with particular emphasis on Red Snapper Lutjanus campechanus. In total, 1,074 descending devices were distributed to marine recreational anglers from North Carolina to Texas. After using the device for an average of 8 months and 15 fishing trips, recipients completed a questionnaire assessing their perceptions on the efficacy of the device. While 72% of respondents had little to no knowledge of descending devices prior to the study, 70% indicated that they preferred this release method over venting after the study. Survey respondents released over 7,000 Red Snapper and 4,000 other reef fish species with their descending devices, and 76% were likely to continue employing the device on their vessel. Eighty-nine percent of respondents believed descending Red Snapper would significantly reduce discard mortality in the recreational fishery. We discovered that recreational anglers perceive the devices to be highly useful in reducing discard mortality and are willing to employ them when releasing reef fish experiencing barotrauma. Other studies have demonstrated that these descending devices do reduce discard mortality of reef fishes, and this study indicates that recreational anglers are very willing to use them as a conservation tool.

Recreational fishing is an important outdoor leisure activity to over 33 million people in the USA (Southwick Associates 2012). It generates substantial income to local, regional, and national economies while providing users an alternative means of domestic consumption (Arlinghaus et al. 2007). Recreational fishing is one of the most popular outdoor activities, with economic impacts for saltwater recreational fishing totaling over US$63 billion annually (NMFS 2015). The highest concentration of saltwater recreational anglers resides in the Southeast (North Carolina to Texas), a region that supports over five million saltwater recreational anglers and generates $15 billion in revenue for the economy (NMFS 2012), making it an ideal location to study angler perceptions on fishery-related issues.
Over 50 species of reef-associated fish from nine families are managed by the South Atlantic Fishery Management Council and Gulf of Mexico Fishery Management Council, many of which have been historically overfished or are still undergoing overfishing. Combined recreational landings for South Atlantic and Gulf of Mexico reef fish totaled over 12 million pounds in 2017 (NMFS 2017, 2018), making this southeastern region the largest federally managed recreational fishery in the nation. Yet, many fisheries in the region are overfished and rely on strict regulatory measures, including minimum size limits and closed seasons. For reef fish species such as Red Snapper Lutjanus campechanus, discard rates resulting from these regulations are very high and can be even higher outside of the directed fishery due to short, or even absent, summer fishing seasons. While many recreational anglers retain their catch for consumption, approximately 57% of fish caught in the USA are released (Bartholomew and Bohnsack 2005), and 88% of anglers participate in catch and release during some part of their fishing activity (USFWS 2006). Catch-and-release fishing has become an increasingly popular method to conserve fishery resources through both voluntary practices and also as a requirement through mandated regulations (Cowx 2002; Cooke and Schramm 2007; Brownscombe et al. 2017). Reductions in season length and/or bag limit can result in very high regulatory discard rates, which in some cases are greater than landings for the directed fishery. For example, Gulf of Mexico Red Snapper recreational discard rates have historically on occasion been several times higher out of season than in season (SEDAR 2015). Furthermore, many anglers targeting other species unintentionally catch Red Snapper and are federally mandated to release them when this species is caught out of season. The decision to discard a captured fish can rely on various reasons, such as the fish being perceived as bycatch, regulations in place that require release (bag limits, size limits, closed season), belief that the fish will survive to be captured at a later date, or for ethical reasons (Cooke and Suski 2005). However, an essential assumption in the catch-and-release and discard process is that fish survive long term. While this assumption holds true for many species, postrelease survival for deepwater, physoclistous reef fish is further complicated by barotrauma, which leads to higher discard mortality rates than traditional catch and release occurring at shallower depths (Rummer 2007; Campbell et al. 2014). Barotrauma occurs due to rapid decompression experienced during ascent and has the potential to significantly reduce the odds of survival in Red Snapper and other deepwater reef fish (Rummer and Bennett 2005; Rudershausen et al. 2014; Curtis et al. 2015). Overcoming the issues surrounding barotrauma in catch-and-release fisheries is arguably one of the most important and unresolved complications facing managers today (Arlinghaus et al. 2007).
Fishery managers previously attempted to address barotrauma-induced mortality in the Gulf of Mexico reef fish fishery by promulgating regulations that required anglers to possess a venting tool onboard any vessel fishing in federal waters (GMFMC 2007). Soon after the enactment of the amendment, Wilde (2009) challenged the efficacy of venting discarded reef fish exhibiting barotrauma to increase survival. Additionally, Scyphers et al. (2013) determined that angler experience and knowledge on proper use of the tools was poor, possibly minimizing potential benefits from venting. Wilde (2009) performed a broad meta-analysis to examine the effectiveness of venting to reduce discard mortality in a variety of fish species.
His results suggested that venting should be avoided; however, a more recent meta-analysis found it to have positive effects (Eberts and Somers 2017). Additionally, the recent development of alternative methods to mitigate barotrauma, such as descending devices, has made barotrauma mitigation more widely available and more frequently used. Thus, the requirement to possess a venting needle in federal waters of the Gulf of Mexico was rescinded and, more recently, several fishery-governing bodies have initiated new policies to incorporate descending devices into their fishery management plans (GMFMC 2018).
Rapidly recompressing fish using descending devices has been shown to be a successful method to reduce discard mortality in offshore reef fishes (Jarvis and Lowe 2008; Brown et al. 2010; Sumpton et al. 2010; Curtis et al. 2015; Runde and Buckel 2018). However, few studies have specifically examined angler perceptions or their willingness to use descending devices in recreational or commercial fisheries. Crandall et al. (2018) examined the motivating factors for Florida fishers to use venting and descending devices and their willingness to use these devices for barotrauma mitigation through online survey data. Dick (2017) interviewed fishery specialists, scientists, and managers to determine various challenges involved with the devices and to what extent mandating their use in the South Atlantic Red Snapper fishery would be possible. Study participants raised concerns due to a lack of scientific research, limited survey data, and the issue with the multispecies complex in the reef fish fishery. Participants also discussed the importance of angler involvement in the regulatory process and that trust between managers and stakeholders in the fishery would be vital for moving forward. Thus, more information as to whether anglers are willing to use these tools is certainly essential if managing entities wish to mandate anglers to recompress discarded fish using descending devices as a management strategy in the future.
The primary goal of this study was to evaluate the perceptions and opinions of Gulf of Mexico and South Atlantic recreational anglers regarding the use of descending devices in offshore reef fish fisheries. There perhaps could not be a better model fishery to test the perceptions of these devices than the Gulf of Mexico and South Atlantic Red Snapper recreational fishery. Anglers in these regions have recently been faced with shortened Red Snapper seasons despite recent improvements in the stocks, and the recreational sector contributes a large proportion of the total catch of the fishery. The specific objectives for this project were to (1) obtain angler perspectives on the use and effectiveness of a popular descending device by distributing them to recreational anglers from North Carolina to Texas and following up with a survey questionnaire and (2) compare the perceptions and opinions acquired from survey responses among the three recreational subsectors (private, charter boat, and headboat anglers) to evaluate the potential for required use of the tools in recreational reef fish fisheries.
METHODS
SeaQualizer distribution.-To examine recreational angler perceptions regarding the use of descending devices, partnerships and collaborations were formed with various sportfishing entities to distribute approximately 1,100 descending devices to recreational anglers for use from June 2015 to October 2016. In collaboration with FishSmart (www.fishsmart.org), a science-based program that researches and promotes methods to reduce release mortality in recreational fisheries, agency and nonprofit collaborators distributed a standard model (50-100-150 feet) SeaQualizer to recreational anglers along with information on FishSmart's "best practices" for releasing deepwater fish afflicted with barotrauma. The best practices information includes guidance on assessing fish condition, the effects of barotrauma, and the different types of deepwater release techniques (e.g., venting tools and descending devices) and directs anglers to supplementary online resources on these topics. The best practices state that for deepwater releases, rapidly returning the fish to depth using recompression techniques is the preferred method of choice, followed by venting when rapid descent is not possible. The target population consisted of offshore recreational anglers of the Gulf of Mexico and South Atlantic that targeted reef fish. The majority of devices (79%) were distributed online at www.takemefishing.org/fishsmart, although additional devices were distributed in person at fishing tournaments and club gatherings (5%), through dockside creel stations by project collaborators (5%), and from state agency personnel and other organizations (11%) from the eight Gulf of Mexico and South Atlantic states (Texas, Louisiana, Mississippi, Alabama, Florida, Georgia, South Carolina, North Carolina) (see Acknowledgments). Potential recipients were identified as individuals that were offshore-vessel owners or operators or individuals that were chartering offshore vessels targeting reef fish. Anglers were directed to the FishSmart Web site through articles, postings on social media oriented toward saltwater anglers, and direct mail and email to memberships of angling organizations in the region. The only selection criterion imposed on the Web site registration was that a participant's shipping address had to be within the coastal areas of the study region. Dockside distribution was conducted by agency personnel based on their knowledge and familiarity with the anglers in individual ports. All three subsectors of the federal recreational fishing sector (private anglers; charter captains, owners, and operators; and headboat captains, owners, and operators) were represented in the project. Prior to survey development, researchers engaged with potential participants and established partners from previous projects through informal feedback and conversations to determine appropriate survey questions that would provide optimum data for investigating angler perceptions. Initial attitudes and opinions of anglers acquired during the distribution phase assisted in the construction of the survey questionnaire.
Survey implementation.-Project participants were asked to complete a 30-question online survey concerning the extent of their use, their opinions, and the perceived effectiveness of the SeaQualizer after 8 months (see the appendix for the full survey questionnaire). This provided sufficient time to test their techniques and newly acquired tool on their vessels. Incentives were offered to complete the survey in the form of a random prize drawing, where one of two items could be awarded: a Shimano offshore fishing rod or reel valued at $269.99 or $549.99, respectively. Participants were classified as a private recreational angler; a charter boat captain, owner, or operator; or a headboat captain, owner, or operator and were assigned a home state based on primary fishing port. Estimates of the number of fish released with descending devices during the study were calculated by extrapolating survey responses that quantified the number of discards recompressed by each angler. Questions of particular interest assessed what percent of fish the participants believed survive long term after being released from a descending device and to what extent they will use the device on their vessel in the future. Because Red Snapper are a species of recent debate and concern regarding barotrauma mitigation, anglers were asked to provide their opinions on how such devices could be helpful or hurtful specifically towards the Red Snapper recreational fishery.
To determine if differences in income, education, fishing experience, and fishing habits affected responses, a secondary portion of the survey was designed to evaluate demographic information and fishing practices. Once respondents had completed the initial portion of the survey, they were offered a secondary incentive for answering an additional nine questions. After completing the secondary portion of the survey, they would be entered into another free drawing to win a separate Shimano rod or reel valued at $269.99 or $649.99, respectively. Demographic questions addressed gender, age, zip code, combined household income, and highest level of education.
To determine fishing experience, participants were asked how many days they fished last year and the total number of years they have spent targeting offshore reef fish. Most commonly targeted fishing depth and distance from shore were also identified.
The survey questionnaire was created using SurveyMonkey (SurveyMonkey 2017). Due to the various types of questions in the survey, responses involved multiple formats. The majority of answers were on an ordinal scale (e.g., very unlikely to very likely), though not all answers followed the same ordinal categories. For example, the question addressing angler likeliness to use a descending device to release fish requiring submergence assistance yielded an ordinal scale from "not likely to use at all" to "likely to use it on all fish," while the question asking how helpful respondents believe descending devices would be in reducing discard mortality in Red Snapper yielded an ordinal scale from "not helpful" to "very helpful." Other questions provided nominal answers, binary yes or no answers, and percentage "slide-bar" answers.
Statistical analysis.-The variety of data collected from diverse answer categories necessitated a variety of appropriate statistical analyses to assess various aspects of the survey responses. A key objective in this study was defining differences in perceptions and attitudes about descending devices based on each respondent identifying with a recreational subsector. Analyses with significance testing were used to examine differences between the private and charter subsectors; however, the number of headboat responses prevented statistical comparisons with the other subsectors. Ordinal logistic regression (OLR) was performed when answer categories were on an ordinal scale, a chi-squared test of independence was performed when answers were nominal, and a Kruskal-Wallis test was performed when respondents chose a percentage of 0% to 100% using a slide bar. Likelihood-ratio tests (LRT) were performed when post hoc analysis of OLR models was required. All tests were performed using the statistical package R (R Core Team 2017). Analysis of variance (α = 0.05) was used where quantitative comparisons were possible.
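As a sketch of this test-per-answer-type logic, the following Python snippet reproduces the three analyses on invented data. The original analysis was performed in R, so the libraries, column names, and values here are all illustrative assumptions rather than the authors' code:

```python
# Minimal sketch of the test-per-answer-type pipeline described above.
# Hypothetical data; the paper's analysis used R (R Core Team 2017).
import pandas as pd
from scipy.stats import chi2_contingency, kruskal
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.DataFrame({
    "sector": ["private"] * 6 + ["charter"] * 6,           # respondent subsector
    "likeliness": [4, 5, 3, 5, 4, 2, 5, 3, 4, 4, 5, 3],    # ordinal 1-5 answer
    "prefers_descending": ["yes", "yes", "no", "yes", "no", "yes",
                           "no", "yes", "no", "no", "yes", "no"],  # nominal answer
    "pct_survive": [60, 70, 55, 80, 65, 50, 58, 62, 49, 71, 66, 54],  # slide bar
})

# Ordinal answers: ordinal logistic regression on subsector.
exog = pd.get_dummies(df["sector"], drop_first=True).astype(float)
olr = OrderedModel(df["likeliness"], exog, distr="logit").fit(method="bfgs",
                                                              disp=False)
print(olr.summary())

# Nominal answers: chi-squared test of independence.
table = pd.crosstab(df["sector"], df["prefers_descending"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.3f}, df={dof}, p={p:.3f}")

# Percentage slide-bar answers: Kruskal-Wallis test between subsectors.
groups = [g["pct_survive"].to_numpy() for _, g in df.groupby("sector")]
stat, p = kruskal(*groups)
print(f"H={stat:.3f}, p={p:.3f}")
```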
RESULTS

Survey Responses
We distributed 1,074 SeaQualizers to recreational anglers in coastal regions from North Carolina to Texas. Of those recipients, 538 completed the survey sent via email, a relatively high response rate (50%), with the majority of responses coming from saltwater anglers in Texas (23%), Alabama (27%), and Florida (28%) (Figure 1). All other states made up less than 10% of respondents. Most respondents (79%) received their SeaQualizer via online registration on FishSmart's Web site. Anglers received educational materials (written and/or video) that included information on best practices for releasing fish (including when to use a descending device) and appropriate use of the SeaQualizer. Approximately 67% of recipients believed the combination of materials they received improved their knowledge and skills regarding recognition of barotrauma and proper fish handling and release methods.
The vast majority of respondents identified as private recreational anglers (n = 451, 84%), while 81 (15%) and 6 (1%) respondents identified as charter boat and headboat captains, owners, or operators, respectively (Figure 1). On average, respondents owned their SeaQualizer 8 months and used it on 15 trips prior to completing the survey. Sixty-eight percent of anglers targeted reef fish in water depths of 38 m or less, 18% targeted depths between 38 m and 53 m, and the remaining respondents (14%) targeted depths greater than 53 m (Figure 2). Similarly, the majority of respondents fished closer to shore when targeting reef fish, with 67% fishing within 30 miles (48 km) of shore.
Most respondents had used a venting tool at some point in the past (89%), with significantly more charter respondents having vented in the past than private anglers (chi-square test: χ² = 4.314, P < 0.05). When employing venting tools in the past, 78% of respondents vented all or most fish when they exhibited signs of barotrauma. When asked what cues or combination of cues anglers used to determine if submergence assistance was required, 80, 75, 68, 57, and 41% considered a protruding stomach, bloated abdomen, inability to submerge, exophthalmia, and sluggishness to be effective cues, respectively. Twenty-three percent of respondents considered all of those symptoms as useful signs. Thirteen percent used a venting tool or descending device on all fish regardless of symptoms, while 3% never used either. Sixty-three percent of respondents stated they still used venting tools to release fish exhibiting barotrauma. Responses were not significantly different between private anglers and charter boat captains (chi-square test: χ² = 1.758, df = 1, P = 0.185). For those that did not currently employ venting tools to release fish, 19% stopped using them because they did not think they worked, 17% believed the fish were able to submerge without the help of venting, and 5% stopped using venting tools because they thought they were too time consuming. Sixty-seven percent chose the "other" category and were required to specify their reason. Of those 150 "other" respondents, 66 specifically mentioned they preferred rapid recompression to venting. The mean percentage of fish believed to survive the venting process was 57%, and this was not significantly different between private and charter respondents (Kruskal-Wallis test: χ² = 0.152, df = 1, P = 0.697).
Previous knowledge concerning the use of descending devices was generally low. Seventy-two percent of respondents had little to no knowledge about descending devices prior to acquiring their SeaQualizer. Only 45 of the 517 respondents (<9%) had a high to very high amount of knowledge prior to receiving their SeaQualizer. Charter boat captains were more likely to possess previous knowledge on the devices than private anglers (OLR: β = 0.521, χ² = 5.365, P < 0.05).
The likelihood of respondents to use a descending device to release fish exhibiting barotrauma was very high (Figure 3), and no differences in likeliness-to-use existed between private and charter sectors (OLR: β = −0.2095, χ² = 0.821, P = 0.365). Only eight individuals were not likely to use a descending device at all, whereas 33% were likely to use one to release all fish, 43% to release most fish, and 14% to release approximately half of the fish they catch exhibiting barotrauma.
The vast majority of respondents (89%) believed descending devices would be at least "moderately helpful" in reducing discard mortality in the Red Snapper fishery (Figure 4). Seventy-nine percent believed they would be "helpful" to "very helpful." When answers were compared between private anglers and charter captains, private anglers believed the devices to be only slightly more helpful than charter captains did. However, these differences were not statistically significant (OLR: β = −0.407, χ² = 2.940, P = 0.086). Mean perceived survival of reef fish released with descending devices was higher than for vented fish for both private respondents (ANOVA: F(1, 824) = 327.72, P < 0.0001) and charter respondents (ANOVA: F(1, 140) = 41.48, P < 0.0001) (Figure 5). The mean predicted survival rate of both descended and vented fish was very similar between private and charter respondents (Figure 5).
A range of the approximate total number of fish released by anglers during this study was calculated by multiplying the number of respondents in one category by the range of the minimum and maximum number of fish released in that category. Throughout the course of this study, survey respondents released a minimum of 7,068 to a maximum of 11,235 Red Snapper and a minimum of 4,316 to a maximum of 6,790 other species of fish. On average, charter captains and private anglers released approximately 28 and 16 Red Snapper per person, respectively, between the time of acquiring their SeaQualizer and completing the survey.
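A minimal sketch of this min/max extrapolation in Python, with hypothetical bins and respondent counts (the survey's actual answer categories are not reproduced here):

```python
# Sketch of the release-total extrapolation described above; bin labels and
# respondent counts are invented, not the survey's actual categories.
release_bins = {"1-10": (1, 10), "11-25": (11, 25), "26-50": (26, 50)}
respondents = {"1-10": 120, "11-25": 80, "26-50": 40}

low = sum(n * release_bins[b][0] for b, n in respondents.items())
high = sum(n * release_bins[b][1] for b, n in respondents.items())
print(f"Estimated total releases: {low} to {high}")
```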
After receiving and using the SeaQualizer, 70% of all participants preferred descending to venting, with private respondents more likely to prefer descending to venting than charter respondents (chi-squared test: χ² = 24.567, P < 0.001) (Figure 6). After operating the SeaQualizer, 74% of private anglers preferred to release fish with a descending device, whereas 55% of charter respondents preferred descending to other methods. Likewise, 17% of charter captains still preferred venting compared with only 7% of private anglers. Only 4% of respondents still ...

[FIGURE 3. Percent response of private, charter, and headboat anglers when asked how likely they were to use a descending device to release fish exhibiting barotrauma. The sample sizes listed under the stacked bars refer to the number of respondents from each recreational subsector.]

Of the original 538 survey participants, 476 agreed to complete the secondary portion of the survey assessing fishing habits and demographic information. Fifty-four percent of respondents had been fishing for more than 20 years, 20% for 11 to 20 years, 17% for 5 to 10 years, 9% for 1 to 4 years, and only two respondents had been fishing for less than 1 year (0.4%). Charter captains were more likely to have greater fishing experience than private anglers (OLR: β = 0.862, χ² = 10.404, P < 0.001). When asked how many days they targeted reef fish last year, 41% took more than 20 trips, 24% took 11-20 trips, and the remaining 34% took 10 trips or less, with charter respondents having fished more days than private angler respondents (OLR: β = 2.349, χ² = 50.219, P < 0.001).
The majority of survey participants were males (96%) between the ages of 41 and 65 (66%). The highest level of education for 58% of respondents was a bachelor's degree or higher, and 66% held a combined household income of at least $75,000. Compared to charter captains, private anglers were more likely to have earned a higher education (OLR: β = −1.192, χ² = 21.824, P < 0.001) and hold a higher household income (OLR: β = −0.559, χ² = 5.190, P = 0.025). Education was not a significant predictor of either angler willingness to use descending devices (LRT: P = 0.243) or of perceived benefit of the devices to reduce discard mortality in the Red Snapper fishery (LRT: P = 0.123). Fishing experience was also not a significant predictor of angler willingness to use descending devices (LRT: P = 0.090) nor of the perceived benefit of the devices to reduce discard mortality (LRT: P = 0.991).
DISCUSSION
This study surveyed the perceptions, opinions, and attitudes of recreational anglers regarding the use of descending devices to reduce discard mortality in offshore reef fish. The majority of survey participants had positive perspectives on the benefits associated with using the tools to release discarded fish experiencing barotrauma. Slight differences in opinions existed between the three subsectors of the recreational fishing sector regarding their utility, but the majority believed they were effective tools for improving survival of discarded fish. Headboat respondents were less likely to use the devices due to the time-consuming process required to release a single discard while meeting client demands, although the low sample size (n = 6) for this fishing sector necessarily made this observation only qualitative. Nevertheless, all subsectors perceived descending devices to be beneficial tools for reducing discard mortality in the Red Snapper fishery, and more respondents preferred using fish descending devices to venting practices after the opportunity to use these devices. These results provide evidence that recreational anglers positively perceive and are willing to use fish descending devices to improve discard survival.
Descending devices offer anglers an alternative release strategy to invasive venting techniques. While studies have shown that both venting and rapid recompression can reduce mortality (Eberts and Somers 2017), rapid recompression does not require anglers to possess knowledge regarding fish anatomy and physiology, whereas venting does. In this way, descending devices can prevent well-intentioned anglers unfamiliar with venting procedures from injuring fish. For example, Scyphers et al. (2013) and Hazell et al. (2016) discovered that a substantial number of venting-tool users were inserting their hypodermic needles in improper locations, potentially puncturing vital organs and reducing the chance of survival. Furthermore, angler experience was not correlated with knowledge of proper venting technique (Scyphers et al. 2013). These complications associated with improper venting technique and location are prevented by employing descending devices. Even when venting tools are operated correctly, descending strategies may result in greater chances of fish survival (Curtis et al. 2015).
Based on the questionnaire results, the majority of anglers surveyed in this study indicated positive attitudes toward, and desire to learn, successful release practices using descending devices. Despite charter respondents having more fishing experience and previous knowledge regarding rapid recompression devices, a greater proportion of private respondents preferred descending devices to venting after operating the SeaQualizer. Seventy percent of survey respondents stated that their barotrauma mitigation preference was descending by the end of the study, suggesting a transition towards favoring descending devices over venting tools as a preferred release method. Moreover, the perceived benefit of these devices to increase survival of discarded fish is very high and even greater than when using venting tools. However, each subsector of the recreational fishery has differing motives for using descending devices during fishing trips. For example, charter captains and deck hands aboard headboats have additional challenges and expectations of clients in providing a quality fishing trip that private recreational anglers do not, and these demands might influence their willingness to use these devices under certain circumstances. Private anglers may prefer to focus on proper release techniques because they are not required to tend to clients and assist numerous anglers at once. Charter captains and deckhands likely experience more time-sensitive situations in which multiple fish require release simultaneously, potentially resulting in their higher likelihood to continue employing venting strategies as the less time-consuming method. Private anglers made up the majority of survey responses, while headboats comprised a relatively small portion; therefore, observations regarding the perceptions of headboat captains and deckhands should be interpreted cautiously and additional data collection for this sector is certainly needed. Further research and development of more efficient strategies to descend fish and how these can best be implemented on headboat charters with many anglers catching fish simultaneously would be extremely useful.
Despite existing variations in attitudes and feasibility in use between the subsectors, the majority of all anglers surveyed believed descending devices could be beneficial in reducing discard mortality in the fishery. This preference was in contrast with results obtained in other studies, where the majority of offshore recreational anglers preferred venting to descending (Hazell et al. 2016; Crandall et al. 2018). A key difference between this study and those above was that anglers in this study received a free SeaQualizer and information on best practices for use. These participants also were able to use a descending device in the field while making judgments on its efficacy and utility prior to taking the survey, whereas 32% of the anglers in Crandall et al. (2018) were not aware of the descending devices prior to survey completion. Moreover, 53% of respondents in Crandall et al. (2018) targeted fishing depths of less than 18 m (60 feet), where many reef fish may not require assistance submerging. Only 13% of Gulf of Mexico and South Atlantic anglers from this study targeted depths of less than 23 m (75 feet), and 71% preferred descending to venting after employing a SeaQualizer on their vessel for an average of 8 months. Prior to acquiring the SeaQualizer for this study, 88% had used a venting tool and, of those, 78% had vented all or most fish exhibiting barotrauma. After using the descending devices, 70% preferred descending to using a venting tool. These results indicate that anglers in our study may have changed their preference of barotrauma mitigation techniques from venting to descending after employing a descending device on their vessel.
Conversely, the receipt of the free SeaQualizer device and distribution of best practices materials could have played an influential role in promoting a positive perception and preference reported by anglers through the removal of purchasing barriers and priming respondents with preconceived benefits of these devices. The FishSmart best practices brochure distributed along with the descending devices states that recompression is the method of choice for returning barotrauma-afflicted fish to the water. It is possible that this information could potentially bias the angler's perception of these devices and, while this does not negate their response, it could have potentially inflated the positive response towards descending devices relative to venting tools by a small unknown percentage. Crandall et al. (2018) reported that one of the barriers to using descending devices was the expense in purchasing, as well as the lack of training or knowledge of devices and the extra time requirement. Thus, the removal of these barriers to use seems to be a critical component in facilitating the use of these descending devices that may be achieved through complementary promotional programs and increasing angler knowledge and awareness through dissemination of materials on best practices and device use.
Angler knowledge and perception are often overlooked when formulating hypotheses and methods to improve release mortality and, in many instances, angler opinion, observation, and participation can be highly useful in assisting with research, management, conservation, and sustainable use of fishery resources (Aswani and Hamilton 2004; Granek et al. 2008; Boudreau and Worm 2010; Brownscombe et al. 2017). For fishery management agencies seeking to implement future regulations that require the use of specific tools to reduce mortality in released reef fish, studies such as this are imperative for successful integration. Cooke and Schramm (2007) noted the importance of gathering and disseminating data on the utility and effectiveness of new regulations prior to enforcing them. If angler knowledge regarding the use of such devices is rudimentary or even nonexistent, appropriate dissemination of methodological instructions and best-use practices would be an essential complement to the actual devices before anglers could be expected to use them. Unlike other barotrauma mitigation techniques, descending devices offer anglers an easy-to-operate tool that does not require extensive knowledge on the physiology of various species, which likely contributed to the strong preference for descending over venting release strategies.
While our survey response rate of 50% is extremely high for these types of studies, it is important to note that the perceptions of survey respondents may not necessarily be representative of the entire angling population. One characteristic that indicates that the survey respondents may be more representative of more avid anglers is the number of fishing trips reported during the duration of the study. The average angler had completed 15 offshore fishing trips over 8 months, which is far above average for the typical offshore reef fish angler. Additional demographic data from supplementary questions also indicates that more avid anglers were the likely participants for this study. The lack of survey results from some states also indicates that participation may be more positively influenced through better outreach and engagement channels. Future studies should seek to promote these outreach mechanisms and fill participation data gaps in order to obtain the most representative view of how descending devices are perceived in the entire recreational fishing community. Nevertheless, this study provides one of the most comprehensive collections of survey data on the perceptions of descender device use for the recreational fishery.
Overall, both charter and private recreational reef fish anglers were found to have positive perspectives and attitudes towards descending devices for improving release survival in fish exhibiting barotrauma. Moreover, 70% of survey respondents indicated a preference of descending over venting by the study's end after the opportunity to test these devices. Despite requiring more time and effort to deploy a descending device, recreational anglers perceived their benefit to outweigh the time saved by venting. Headboat operators were less likely to employ the devices due to the extra time requirement to operate them; however, most believed the devices would be successful in reducing discard mortality. These data provide managers with essential information regarding the opinions of fishery stakeholders towards reducing discard mortality using rapid recompression techniques. Rapid recompression gives anglers perceived confidence that their discards will survive to be captured again in the future, and they are receptive to employing descending devices in the recreational reef fish fishery to increase survival of discarded fish.
ACKNOWLEDGMENTS
We thank FishSmart for organizing survey distribution, compiling response data, and overall management of SeaQualizer recipient and respondent information. We also want to thank the Shimano Corporation for their contribution of offshore fishing gear used to collect and release Red Snapper during this study and for donating survey incentives in the form of two offshore rod and reel combinations. The American Sportfishing Association-FishAmerica Foundation, Recreational Boating and Fishing Foundation, and SeaQualize provided technical support during the distribution and testing phase. Many individuals from the Harte Research Institute, including David Norris, Ashley Ferguson, Quentin Hall, Matt Streich, Jason Williams, and Tara Topping, provided field support and Megan Robillard and Jennifer Wetz assisted with administrative matters. Special thanks to Daryl Gatewood and Peter Young for their contributions during SeaQualizer distribution. Gil Radonski provided invaluable support for planning and outreach through the FishSmart program. This study was made possible by funds from the American Sportfishing Association, FishAmerica Foundation, Brunswick Public Foundation, Grizzly Smokeless Tobacco, Guy Harvey Ocean Foundation, National Oceanic and Atmospheric Administration Fisheries Award NA14NMF4720224, and National Fish and Wildlife Foundation Award #0303.15.048009. Outreach to anglers was conducted through partnerships with the International Game Fish Association, Alabama Marine Resources Division, Texas Parks and Wildlife, Coastal Conservation Association, Florida Fish and Wildlife Conservation Commission, Georgia Department of Natural Resources, South Carolina Department of Natural
Measuring Agarwood Formation Ratio Quantitatively by Fluorescence Spectral Imaging Technique
Agarwood is a kind of important and precious traditional Chinese medicine. With the decrease of natural agarwood, artificial cultivation has become more and more important in recent years. Quantifying the formation of agarwood is essential work that could provide information for guiding cultivation and controlling quality, but previously the amount of agarwood could only be judged qualitatively by experience. In this paper, a fluorescence multispectral imaging method is presented to measure agarwood quantitatively. A spectral cube from 450 nm to 800 nm was captured under a 365 nm excitation source. The nonagarwood, agarwood, and rotten wood in the same sample were distinguished by analyzing the spectral cube. Then the area ratio of agarwood to the whole sample was computed, which gives quantitative information on the agarwood area percentage. To our knowledge, this is the first time that the formation of agarwood has been quantified accurately and nondestructively.
Introduction
Agarwood (Aquilaria sinensis Lour. Gilg) is a dark resinous heartwood that forms in Aquilaria trees, which are distributed in southern China, in provinces such as Hainan, Guangxi, and Guangdong [1-3]. It forms when Aquilaria trees are infected with a type of mould. Prior to infection, the heartwood is relatively light and pale coloured; however, as the infection progresses, the tree produces a dark aromatic resin in response to the attack, which results in a very dense, dark, resin-embedded heartwood [4].
Agarwood is valued in many cultures for its distinctive fragrance and thus is used for incense and perfumes. It is also known as a famous traditional Chinese medicine. It can be used as a sedative, analgesic, and digestive medicine in the orient [5-7]. Moreover, it has been reported that agarwood can help heal rheumatism, cardiovascular and cerebrovascular diseases, and many other diseases related to the artery and the heart [8, 9].
One of the main reasons for the relative rarity and high cost of agarwood is the depletion of the wild resource. Since 1995, Aquilaria malaccensis, the primary source, has been listed as a potentially threatened species by the Convention on International Trade in Endangered Species of Wild Fauna and Flora. In 2004 all Aquilaria species were listed as potentially threatened species. For China, it was claimed in the China Plant Red Data Book in 1992 that, because deforestation of the trees that can generate agarwood has made agarwood rare, cultivation should be the main method of obtaining agarwood [10]. Nowadays the most common way of cultivating agarwood is the Hong inoculated knot [2, 11]. For better quality control of agarwood cultivation, it is essential to choose suitable Aquilaria trees. There are several factors that show which trees are suitable, of which the agarwood formation ratio is one of the most significant. The agarwood formation ratio is the ratio of agarwood cross-sectional area to tree cross-sectional area. The larger the area of the agarwood is, the better the tree is for forming agarwood. So it is necessary to test the area of the agarwood in different trees and then use the statistical data to determine which tree is more suitable. This is essential and important for selective breeding and evaluating the quality of agarwood. However, there is no effective method to measure the area except rough estimation by human eyes. In this paper, the technology of fluorescence spectral imaging is presented to measure the area of the agarwood. As far as we know, this is the first time that the agarwood formation ratio has been measured quantitatively.
Spectral imaging is a new technology by which the signal light from a detected sample is divided into several narrow bands, and the images at each band are then captured by the detector in sequence. Both the spatial data and the spectral data can be obtained simultaneously. Spectral imaging technology has been applied in many fields such as food safety [12], fruit quality [13, 14], and medicine testing [15, 16]. In this paper, the fluorescence spectral imaging method was used to measure the formation ratio of agarwood.
Sample Preparation.
The tested samples were cut from seven-year-old Aquilaria sinensis at a height of 1.3 meters above the ground. The Aquilaria sinensis grew in an artificial forest in Huazhou, Guangdong province, China, were treated with whole-body liquid transfusion technology for agarwood formation, and were cut two years later. The samples were dried, polished to a flat surface, and then stored at 25 ± 3 °C before measurement.
Testing System. The testing system was designed in the Guangdong Key Laboratory for Innovative Development and Utilization of Forest Plant Germplasm, South Agricultural University. The main components of the system are two 365 nm UV lamps (EA-160/FA, Spectronics Corp., NY, USA), optical filters (300 nm-800 nm, Thorlabs Inc., NJ, USA), a CCD (1/2", 1280 × 1024 pixels) with a camera lens (M0814-MP, Computer Company, Japan), and a host computer.
The two UV lamps are used as the excitation light source, which excites the sample plane uniformly. The optical filters are used to select the wavelengths of the emission signals of the samples. The working wavelength range is from 450 nm to 800 nm with a 10 nm interval. The spatial resolution is 5 lp/mm. The ray path of the system is shown in Figure 1. The UV light coming from the light source reaches the samples on the underlay, and the samples are stimulated to emit fluorescence light. The emission light is filtered by the filters and captured by the CCD. Finally, a spectral cube of the sample is obtained and analyzed. Each sample has one spectral cube.
Data Analysis.
A spectral cube is shown in Figure 2, which consists of 16 spectral images. The pictures in the same cube are spatially matched, but spectrally different.

There are four parts in each picture: nonagarwood, agarwood, rotten wood, and the background. Since the spatial distribution of nonagarwood, agarwood, and rotten wood is irregular, the spectral differences of these parts were used to determine the spatial position pixel by pixel. Here the pictures whose wavelengths were between 500 nm and 650 nm were used because the spectral differences were the biggest in this wavelength range. The data analysis involved four steps.
Obtaining the Spectral Curve of Each Part.
For positioning the spatial distribution of nonagarwood, agarwood, and rotten wood in one sample, the spectral cube of the sample should be analyzed to get the spectral curve of each part. As an example, to get the spectral curve of agarwood: first, a mask was designed based on prior knowledge to choose a 3 × 3 area of agarwood on one picture of the spectral cube; second, the average intensity of the chosen area was calculated, which represents the intensity of agarwood at the wavelength of this picture; third, the same areas of the other pictures were analyzed in the same way to get a group of intensities at different wavelengths, which are the spectral curve data of the agarwood. The spectral curves of the nonagarwood and the rotten wood were obtained in the same way.
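A minimal sketch of this per-part curve extraction, assuming the cube is a NumPy array of shape (bands, height, width) and the 3 × 3 region centers are picked from prior knowledge (all names and coordinates are illustrative):

```python
import numpy as np

def roi_spectrum(cube: np.ndarray, row: int, col: int, half: int = 1) -> np.ndarray:
    """Mean intensity of a (2*half+1)^2 region at every band.

    cube: (bands, height, width) fluorescence spectral cube.
    (row, col): center of the region chosen from prior knowledge.
    """
    patch = cube[:, row - half:row + half + 1, col - half:col + half + 1]
    return patch.mean(axis=(1, 2))  # one value per band = the spectral curve

# Hypothetical usage: a 16-band cube and hand-picked centers for each part.
cube = np.random.rand(16, 1024, 1280)       # stand-in for the measured cube
curve_agarwood = roi_spectrum(cube, 500, 640)
curve_nonagarwood = roi_spectrum(cube, 300, 400)
curve_rotten = roi_spectrum(cube, 700, 900)
```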
Obtaining the Outline of the Sample. The outline of the whole sample is needed for getting the relative area of the whole sample. To obtain the outline of the sample, a high-pass filter [17, 18] in the spatial domain was used, shown in the following expression:

$$g(I) = \begin{cases} I, & I \geq I_0 \\ 0, & I < I_0 \end{cases} \quad (1)$$

in which $I$ is the intensity of a pixel and $I_0$ is the cutoff intensity. A signal that is higher than or equal to $I_0$ can pass the filter, but a signal that is lower than $I_0$ is attenuated.
The filter shown in expression (1) was used to scan the spectral image line by line from the left side. The scanning does not stop until the first pixel that can pass the filter appears; that pixel is the left edge of the sample in the corresponding line. Then the filter begins a second scan from the right side to look for the right edge.
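The line-by-line scan could be sketched as follows, with the image and the cutoff value as illustrative assumptions:

```python
import numpy as np

def sample_outline(img: np.ndarray, cutoff: float):
    """Per-row left/right sample edges via the threshold filter of expression (1).

    Returns (left, right) column indices per row; -1 marks rows where no
    pixel passes the filter (i.e., the row does not intersect the sample).
    """
    passed = img >= cutoff                       # high-pass: keep I >= I0
    left = np.full(img.shape[0], -1)
    right = np.full(img.shape[0], -1)
    for r in range(img.shape[0]):
        cols = np.nonzero(passed[r])[0]
        if cols.size:                            # row intersects the sample
            left[r], right[r] = cols[0], cols[-1]
    return left, right
```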
Distinguishing Nonagarwood and Rotten Wood. Distinguishing agarwood directly from the sample is difficult, so the nonagarwood and rotten wood were distinguished separately first. The wavelengths were selected based on the differences in the spectral curves of these three parts. An edge detection filter [19-22] was designed to detect the edge of nonagarwood. The edge detection filter worked in four steps. First, smooth the picture to reduce the noise with a Gaussian filter. Second, calculate the local gradient and the edge direction at each point to get the ridges with the following expressions:

$$M(x, y) = \sqrt{G_x^2 + G_y^2}, \qquad \alpha(x, y) = \arctan\left(\frac{G_y}{G_x}\right) \quad (2)$$

Here, $M(x, y)$ is the local gradient of each point, $G_x$ is the partial derivative of pixel $(x, y)$ in the $x$ direction, and $G_y$ is the partial derivative of pixel $(x, y)$ in the $y$ direction. $\alpha(x, y)$ is the direction of the edge. The edge point is defined as the point with the largest gray value of the local area along the gradient direction. All these edge points form the ridges of the gradient magnitude. Third, track the tops of all the ridges and set to zero all pixels that are not on the top of a ridge, which is known as nonmaximum suppression. Fourth, define the effective range of gray values of the ridges based on the tested sample; here, the range [0.2, 0.25] was used. Ridge pixels in the range were integrated with 8-connectivity and linked into strong ridge pixels, which formed the edge between nonagarwood and agarwood.
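A compact sketch of steps 1 and 2 of this Canny-style procedure; the smoothing sigma is an assumption, and steps 3 and 4 (nonmaximum suppression and ridge linking) are only approximated here by thresholding the normalized magnitude into the effective range:

```python
import numpy as np
from scipy import ndimage

def gradient_edges(img: np.ndarray, low: float = 0.2, high: float = 0.25) -> np.ndarray:
    """Smooth, compute gradients per expression (2), keep effective-range ridges."""
    smooth = ndimage.gaussian_filter(img, sigma=1.0)  # step 1: Gaussian denoising
    gx = ndimage.sobel(smooth, axis=1)                # partial derivative in x
    gy = ndimage.sobel(smooth, axis=0)                # partial derivative in y
    mag = np.hypot(gx, gy)                            # gradient magnitude M(x, y)
    direction = np.arctan2(gy, gx)                    # edge direction alpha(x, y);
    del direction                                     # would guide NMS in steps 3-4
    mag = mag / mag.max()                             # scale gray values to [0, 1]
    # Approximation of steps 3-4: keep pixels in the effective gray-value range.
    return (mag >= low) & (mag <= high)
```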
The band ratio algorithm was used for distinguishing rotten wood. A band ratio is created by dividing the spectral values of one band by the spectral values of another band from a spectral cube. It is used to enhance the spectral differences between bands and to reduce the effects of topography [23, 24]. Here, the band ratio algorithm was used to enhance the differences between the rotten wood and the other parts of the sample. The band ratio method is expressed as

$$R_{x,y} = \frac{I_{m,x,y}}{I_{n,x,y}} \quad (3)$$

in which $I_{m,x,y}$ and $I_{n,x,y}$ are the gray values of the pixel $(x, y)$ on the band $m$ and band $n$ pictures and $R_{x,y}$ is the ratio value of the pixel $(x, y)$. If the denominator of the expression is zero, $R_{x,y}$ is assigned to zero.
To express the range of function (3) in a linear fashion using the standard 8-bit encoding (range from 0 to 255), a normalization function is applied before further processing:

$$O_{x,y} = \operatorname{Int}\left(255 \times \frac{R_{x,y} - R_{\min}}{R_{\max} - R_{\min}}\right) \quad (4)$$

in which $O_{x,y}$ refers to the output gray value of the pixel $(x, y)$, $R_{\min}$ and $R_{\max}$ are the minimum and maximum ratio values over the image, and "Int" refers to integer conversion.
After the image enhanced by the band ratio algorithm was processed by the edge detection filter described above, the edge between agarwood and rotten wood was obtained.
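A sketch of expressions (3) and (4) applied to two bands of the cube; the band indices standing in for 510 nm and 570 nm are assumptions:

```python
import numpy as np

def band_ratio_8bit(cube: np.ndarray, m: int, n: int) -> np.ndarray:
    """Expressions (3) and (4): per-pixel band ratio scaled to 0-255."""
    num, den = cube[m].astype(float), cube[n].astype(float)
    ratio = np.zeros_like(num)
    np.divide(num, den, out=ratio, where=den != 0)   # zero where denominator is 0
    rmin, rmax = ratio.min(), ratio.max()
    scaled = 255 * (ratio - rmin) / (rmax - rmin) if rmax > rmin else ratio
    return scaled.astype(np.uint8)                   # the "Int" conversion of (4)

# Hypothetical usage: bands 6 and 12 standing in for 510 nm and 570 nm.
# enhanced = band_ratio_8bit(cube, m=6, n=12)
# rotten_edge = gradient_edges(enhanced / 255.0)     # reuse the filter sketched above
```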
Obtaining the Agarwood Formation Ratio.
Since the no-rotten-wood area and the no-nonagarwood area have been obtained, two steps are needed to obtain the agarwood formation ratio: obtaining the edge of the agarwood and connecting the region. Function (5) was used to obtain the edge:

$$E = A \cap B \quad (5)$$

in which $A$ and $B$ refer to the two images obtained in Section 2.3.3 and $E$ refers to the edge of agarwood. The edge of agarwood is obtained by intersecting the complementary sets of the nonagarwood and rotten wood areas. To get the area of agarwood, the region surrounded by the edge must be connected. The edge is like a ring with two sides; tracking the outside of the edge and the corresponding inside of the edge and then connecting both sides yields the agarwood area.
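The final ratio computation can be sketched as follows, assuming binary masks corresponding to the bright regions of Figures 6(a) and 6(b) and a filled outline mask from Section 2.3.2 (all array names are illustrative):

```python
import numpy as np
from scipy import ndimage

def formation_ratio(not_nonagarwood: np.ndarray, not_rotten: np.ndarray,
                    sample_mask: np.ndarray) -> float:
    """Area percentage of agarwood relative to the whole sample cross section."""
    edge_region = not_nonagarwood & not_rotten          # expression (5): intersection
    agarwood = ndimage.binary_fill_holes(edge_region)   # connect the ring into a region
    return 100.0 * agarwood.sum() / sample_mask.sum()

# For the sample reported in the paper, this kind of pixel count gave 2.06%.
```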
Spectral Curve.
The picture at 550 nm was used to select the areas of agarwood, nonagarwood, and rotten wood. The selected areas are shown in Figure 3, and the three spectral curves are shown in Figure 4. The differences among the three spectral curves are obvious from Figure 4: the fluorescence intensity at each wavelength of nonagarwood is much higher than that of agarwood and rotten wood.
Here, the picture at 550 nm was used. Comparing the spectra of agarwood and rotten wood, both have peaks at 510 nm and 570 nm. The peak of agarwood at 510 nm is higher than that of rotten wood; however, the peak of agarwood at 570 nm is lower than that of rotten wood. So the band ratio of 510 nm and 570 nm was used to enhance the differences between rotten wood and agarwood.
The Outline of the Sample. The outline of the sample, obtained by the high-pass filter, is shown in Figure 5. Comparing Figure 5 with Figure 3, the outline of the sample was obtained accurately.
The Area Ratio of Agarwood.
The results of distinguishing nonagarwood and rotten wood are shown in Figure 6. In Figure 6(a), the area of nonagarwood is all dark, while the areas of agarwood and rotten wood are filled with bright pixels. In Figure 6(b), the area of rotten wood is all dark, while the areas of agarwood and nonagarwood are full of bright pixels. The common area of bright pixels in the two pictures is the area of agarwood. The result obtained by intersecting Figures 6(a) and 6(b) is shown in Figure 7(a). The edges of the agarwood were tracked accurately. Filling the area between the inside and outside edges gives the agarwood, which is shown in Figure 7(b). The results are consistent with the judgment made by experts in traditional Chinese medicines [25, 26]. The relative area of agarwood can be obtained by counting the number of bright pixels in Figure 7(b).
Counting the number of pixels surrounded by the outline gives the relative area of the whole sample. The area percentage of agarwood is the ratio of the relative area of agarwood to the relative area of the whole sample. For the sample shown in the paper, the area percentage of agarwood is 2.06% (Table 1). The results show that the technique works well for detecting the agarwood formation ratio.
Discussion
The fluorescence spectral imaging technique tests samples pixel by pixel using spectral information, and it has been applied successfully by our group to test the spatial distribution of the constituents in traditional Chinese medicines [16, 27]. The spatial distribution of agarwood was detected by the fluorescence spectra of agarwood in this paper. Before the fluorescence spectral imaging technique was presented, agarwood could only be judged manually. Experts could point out the distribution of agarwood but could not provide quantitative data. The new technique can test the formation ratio quantitatively. The validity of the results was confirmed by an expert in the field of identification of Chinese medicine.
Conclusions
The agarwood that formed in Aquilaria sinensis was measured by the fluorescence spectral imaging technique in this paper. The agarwood formation ratio is an important factor for indicating better trees, better liquid transfusion, and better planting techniques for agarwood formation. To our knowledge, this is the first time that the agarwood formation ratio has been measured quantitatively. Compared with qualitative estimation by visual inspection, the technique is much more accurate. It is concluded that fluorescence spectral imaging is a precise, noninvasive, and fast technique for measuring the agarwood formation ratio and for quality control of agarwood cultivation.
ANALYZING ITEMS USED IN DEVELOPING DESCRIPTIVE ESSAY MARKING RUBRICS: FOCUSSING ON VOCABULARY USE
Descriptive writing is an excellent language learning strategy to improve students' vocabulary and ultimately their language competence. Teachers have found that a longer time is required to develop students' descriptive writing skills, and it gets more complex because there are not many appropriate, or rather standardized, guidelines to help teachers teach descriptive writing. Most commonly, teachers employ the five-paragraph-writing format to teach writing. It was found in this study that students focus on vocabulary in describing scenes, characters, plots, or settings. Consequently, the aim of this study is to explore the criteria or items of the existing rubrics used in descriptive writing and to develop an exclusive one that emphasizes vocabulary. Twenty prominent rubrics for descriptive writing were gathered and analyzed using the summative content analysis method. The most frequently used components were listed and integrated into a new rubric. The findings revealed that there are several components used to assess descriptive writing, namely description, word choice, and vocabulary competence. This study also found that with a specific descriptive writing rubric, teachers are able to better prepare their lessons in teaching descriptive writing. The exclusive descriptive writing rubric also helps students to better learn and develop their vocabulary.
INTRODUCTION
Writing is one of the main components of language learning. Shanahan (2015) lists the Common Core State Standards' writing types as argumentative, informational/explanatory, and narrative writing. Shanahan (2015) defines argumentative writing as the development of rational arguments supporting the writer's position or claim in a given text, usually with appropriate evidence. The expository structure of informational or explanatory writing seeks to improve readers' knowledge or assist readers in understanding a certain process, procedure, or concept. A narrative writing style can be applied to a wide range of tasks, such as informational, instructional, persuasive, or entertainment purposes.
However, apart from the Common Core State Standards, Melly (2006) revealed that there are five types of writing styles, each with a different purpose or objective. Expository writing serves to explain or inform, while descriptive writing aims to show or describe. Persuasive writing can be employed to argue against or for an issue, whereas creative writing is commonly used for fiction, poetry, drama, and autobiographies; finally, narrative writing is used to convey a story.
These existing writing styles clearly tend to overlap one another. For example, while narrative writing may convey a particular story, it may turn out to be creative writing as well. Thus, the grading rubrics for all types of writing have been similar, i.e., utilizing the five-paragraph marking rubric, despite the fact that each writing style serves a different purpose or objective.
It is crucial to keep in mind that a given style of writing demands a specific vocabulary usage, or word choice, in order to fulfil the objective or purpose of that particular piece of writing. As a result, it is thought that, in order to improve students' writing abilities, a marking rubric tailored to a certain writing style is required. The main goal of this study was therefore to create a marking scheme specifically for descriptive writing, by investigating the criteria or items utilised in various forms of descriptive writing marking rubrics in order to construct a unique descriptive writing marking rubric.
LITERATURE REVIEW
According to Rivers (2018), writing is not straightforward compared to the other language skills. It is complicated because it requires learners to display their competence in language use, from words, sentences, and grammar to morphing those segments into written form. As such, it can be assumed that, for a learner to produce good written work in a specific language, he must develop the cognitive abilities to distinguish certain segments of that language. Simply put, Rivers (2018) asserted that writing is complicated because a writer must be able to recognise word use and sentence structure in order to produce a well-written piece. Thus, the writing process involves not only the exploration of ideas but also the transformation or conversion of these ideas into readable texts. This transformation carries a silent requirement: the cognitive ability to utilize the nuanced segments of language and channel them into written form.
Teaching writing skills involves objectives and markers that learners are required to achieve. According to the National Education Department, BSNP (2006), the objective of teaching writing skills in Junior High School is to obtain a functional level of comprehension, whereby students can adequately communicate in verbal and written form to perform their daily activities. After reaching this functional level, students should also be able to write monologues, brief functional texts, and essays in the process, descriptive, recount, narrative, and report forms. According to the National Education Department, BSNP (2006), employing proper grammar and a sufficient vocabulary to obtain a comfortable level of linguistic competence is another indicator of reaching the functional level. There are various ways to accomplish these goals. According to Brookhart (2018), there are two methods in teaching writing: these methods concentrate either on the final outcome of the writing process or on the writing process itself. According to Brookhart (2018), those who support a process approach to writing focus on the writing process; teachers, accordingly, should focus on the many phases of every piece as it progresses.
To accomplish these goals, certain writing strategies, including the process approach, are used. In essence, they assist pupils in comprehending the content and expressing themselves in a grammatically correct manner in English. A process approach to writing, according to Suastra and Menggo (2020), analyses the act of composing from a different perspective, putting as much emphasis on the process itself. The researchers made it clear that the process method also emphasises the operational stages involved in the initial and subsequent revisions of a piece of work. In the process method, the types of activities change from being language-focused to learner-centered, giving students more control over what they write and how they write it (Richards 2002).
However, there are also criticisms of the process approach in the published literature. According to Sumekto and Setyawati (2018), the process-centered approach falls short of adequately preparing university-bound pupils to perform at the requisite level. For instance, law students are frequently required to write in an argumentative style for their projects, so that they can utilise the process writing skills they have learned to present stronger arguments. Since they can complete their coursework and develop thinking skills appropriate to their level, it makes sense that these students will become skilled language users.
Most significantly, it was also found that the process approach prioritises the writing process just as much as the final product. Pourdana and Asghari (2021) identified planned process writing exercises carried out in the classroom to boost the motivation of both the teacher and the students; these exercises included writing poetry and using a computer. Such activities can be significantly adapted for learning additional languages effectively and at various levels of instruction. As a result, it is reasonable to assume that the process method can be developed through consistent writing exercises employing efficient activities that produce superior input and thus enhance students' writing abilities.
METHODOLOGY
In this qualitative study, 20 sets of regular rubrics related to descriptive writing were gathered and analysed using the summative content analysis method. These rubrics were obtained from the global rubric bank that stores all rubrics used by English language teachers. They were carefully checked for suitability and selected for use in this study. A process known as the systematic literature review (SLR) was carried out to categorise the rubrics. According to Dewey and Drahota (2016), an SLR allows researchers to "identify, select and critically appraise literature in order to answer research questions". It involves clear methods to (1) perform a comprehensive literature search, (2) write a critical judgement of the individual studies gathered, and (3) integrate the valid studies using appropriate statistical techniques. As explained by Vlachopoulos and Makri (2017), the SLR in this study followed six steps. Using these six steps, 20 rubrics were selected and analyzed. The assessment criteria included in the rubrics were listed in a table; the more criteria that appeared across studies, the more columns were created in the table, in order to see whether each criterion appeared in the other rubrics. Ticks were inserted into the table to be statistically calculated later.
Table 1 below presents the analysis of the 20 rubrics that were studied to construct a descriptive paragraph marking rubric focusing on vocabulary use. The table shows the criteria used in each of the rubrics, and the most frequently used relevant criteria were included in the descriptive paragraph marking rubric. The most recurring components used as assessment criteria were listed and integrated into an exclusive descriptive writing rubric. It was found that the rubrics studied had their strengths but needed improvement: they featured non-descriptive components as assessment criteria, such as main topic, thesis statement, topic sentences, supporting details, evidence, and creativity. This is certainly not wrong; however, these rubrics give very little weight to description and variety of vocabulary use, which is precisely what descriptive writing requires.
FINDINGS AND DISCUSSIONS
Table 2 shows the number of occurrences of each item found in the 20 rubrics that were analysed using the summative content analysis method. Table 2 presents the important criteria commonly used in a descriptive essay marking rubric. The criteria in the rubrics were categorised into 11 sections; certain criteria with the same functions or descriptions were labelled with different names, so the researcher grouped them into one suitable section. These include content (ideas/style), task fulfilment, language and grammar, mechanics (capitalization, spelling and punctuation), main topic/thesis statement/topic sentences/supporting details, vocabulary/word choice, creativity, descriptive language (metaphors/similes/collocations), organization, sentence fluency, and format. The analysis in Table 2 shows the criteria used most frequently in international rubrics such as TOEFL, IELTS and other prominent ones in the rubric bank. The analysis shows that not much focus needs to be given to the format of descriptive writing, because descriptive writing does not involve the kind of format commonly seen in letter writing and report writing. The analysis strongly indicated that language, grammar, mechanics, creativity, sentence fluency and organization are the most common criteria in descriptive essay writing rubrics. However, there was also an indication that vocabulary usage, which includes word choice such as verbs, nouns, adverbs and adjectives, and descriptive language, which includes metaphors, similes and collocations, were also used to assess descriptive writing.
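The tallying step described above is straightforward to automate. The sketch below is a minimal illustration of the summative counting, assuming each analysed rubric is represented as a list of its assessment criteria; the rubric contents shown are hypothetical placeholders, not the actual 20 rubrics from the rubric bank.

```python
from collections import Counter

# Hypothetical tick table: one list of assessment criteria per analysed rubric.
rubrics = [
    ["content", "language & grammar", "mechanics", "organization"],
    ["content", "vocabulary/word choice", "mechanics", "sentence fluency"],
    ["descriptive language", "vocabulary/word choice", "organization"],
]

# Summative content analysis step: count how many rubrics use each criterion.
counts = Counter(criterion for rubric in rubrics for criterion in rubric)
for criterion, n in counts.most_common():
    print(f"{criterion}: appears in {n} of {len(rubrics)} rubrics")
```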
Based on the findings gathered in this study, the following descriptive writing rubric template was developed. As seen in Table 3, this template can be used by teachers and lecturers to insert criteria descriptors that suit their students' level of study. After thorough analysis, a descriptive writing rubric template was created for teachers to use when assessing descriptive writing. The template in Table 3 allows teachers to easily comprehend the criteria that will be examined during marking. The template was refined several times to prevent any confusion or discrepancy while marking took place.
The descriptive writing rubric template is brief and comprehensive compared with other marking rubrics that focus too heavily on the content or mechanics of writing alone. The template was constructed after the researcher studied and considered several reliable and widely used descriptive essay marking rubrics, making it highly consistent and valid. It was constructed precisely for marking descriptive essays, focusing mainly on vocabulary and word choice. Four criteria were assessed in the students' paragraph writing task. The first, Criterion 1 (C1), focused on content and task fulfillment. The second criterion (C2) looked at the use of language and the mechanics of writing, concentrating on grammar use, spelling, capitalization and punctuation. Criterion 3 (C3) is the distinctive feature of this marking rubric: the researcher realized that students greatly need to use a good amount of descriptive words to write descriptively. As this study focused on vocabulary use in students' descriptive writing, the marking rubric was created to assess vocabulary use, which is not seen in other descriptive writing rubrics. It was observed that most descriptive essay marking rubrics do not focus extensively on vocabulary use, making them little different from the common six-trait or five-paragraph essay marking rubric. Finally, Criterion 4 (C4) looked at the organization of the paragraph, focusing on sentence construction, coherence, flow of ideas and the length of the paragraph. The suggested total mark for each essay was 40, with 10 marks allocated to each criterion. The score range was categorized as Excellent (9-10 marks), Good (7-8 marks), Average (5-6 marks), Weak (3-4 marks) and Poor (1-2 marks).
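The scoring scheme above maps directly to a small function. The sketch below encodes the stated bands and the /40 total; the function and variable names are illustrative, not part of the rubric itself.

```python
# Bands and marks as stated in the rubric: each of the four criteria
# (C1-C4) is marked out of 10, for a suggested total of 40.
BANDS = [(9, "Excellent"), (7, "Good"), (5, "Average"), (3, "Weak"), (1, "Poor")]

def band(mark: int) -> str:
    """Map a 1-10 criterion mark to its descriptive band."""
    if not 1 <= mark <= 10:
        raise ValueError("mark must be between 1 and 10")
    for lower, label in BANDS:
        if mark >= lower:
            return label

def total_score(marks: list[int]) -> int:
    """Sum the four criterion marks (each out of 10) into the /40 total."""
    return sum(marks)

print(band(8), total_score([8, 7, 9, 6]))  # -> Good 30
```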
CONCLUSION
In conclusion, a significant percentage of the studies reviewed considered it critical to use guidelines or rubrics to help students write different types of essays. It is crucial that each type of essay be distinguished by its requirements for writing style and the use of certain word forms. As such, descriptive writing, the focus of this study, requires the use of extensive vocabulary to write descriptively, compared with the other types of writing. It was also observed that teachers consider descriptive writing a way to develop their students' language use. With the rubric created as a template in this study, teachers no longer need to use the common five-paragraph writing technique to teach descriptive writing. They can emphasize each of the items or criteria needed to write a descriptive essay without taking up too much class time, which was the primary concern in teaching different types of writing. The limitation of this study is that it focused on descriptive writing alone. In future, it is hoped that rubrics for other types of writing will also be developed, to ease teachers' teaching and simplify students' learning, as students would then know what it takes to complete these types of essays.
Table 1: Analysis of Criteria Used in Descriptive Essay Marking Rubrics
Table 3: A Descriptive Writing Rubric Template
Gla-Rich Protein Is a Potential New Vitamin K Target in Cancer: Evidences for a Direct GRP-Mineral Interaction
Gla-rich protein (GRP) was described in sturgeon as a new vitamin-K-dependent protein (VKDP) with a high density of Gla residues and associated with ectopic calcifications in humans. Although VKDPs function has been related with γ-carboxylation, the Gla status of GRP in humans is still unknown. Here, we investigated the expression of recently identified GRP spliced transcripts, the γ-carboxylation status, and its association with ectopic calcifications, in skin basal cell and breast carcinomas. GRP-F1 was identified as the predominant splice variant expressed in healthy and cancer tissues. Patterns of γ-carboxylated GRP (cGRP)/undercarboxylated GRP (ucGRP) accumulation in healthy and cancer tissues were determined by immunohistochemistry, using newly developed conformation-specific antibodies. Both GRP protein forms were found colocalized in healthy tissues, while ucGRP was the predominant form associated with tumor cells. Both cGRP and ucGRP found at sites of microcalcifications were shown to have in vitro calcium mineral-binding capacity. The decreased levels of cGRP and predominance of ucGRP in tumor cells suggest that GRP may represent a new target for the anticancer potential of vitamin K. Also, the direct interaction of cGRP and ucGRP with BCP crystals provides a possible mechanism explaining GRP association with pathological mineralization.
Introduction
Gla-rich protein (GRP), also known as cartilage matrix associated protein or upper zone of growth plate and cartilage matrix associated protein (UCMA) [1-4], was identified in sturgeon as a new vitamin-K-dependent protein (VKDP), exhibiting an unprecedented high density of Gla residues (16 Gla residues among 74 amino acids) and a high affinity for calcium mineral [1]. Highly conserved GRP orthologs presented conserved features specific to all VKDPs, in particular a remarkably well conserved Gla domain, thus suggesting GRP to be a new vertebrate-specific γ-carboxylated protein [1]. While in sturgeon GRP was predominantly found in cartilaginous tissues [1], in mammals it was shown to have a wider tissue distribution and to accumulate in both skeletal and connective tissues, including bone, cartilage, skin, and vasculature [1,2]. GRP was found to be a circulating protein and to be associated with calcifying pathologies affecting skin and arteries, where it accumulates at sites of ectopic calcifications and colocalizes with calcium mineral deposits [2].
Although the function of GRP is still unknown, it has been suggested to act as a negative regulator of osteogenic differentiation [4], a modulator of calcium availability in the extracellular matrix [1,2], and a potential inhibitor of soft tissue calcification in connective tissues [2]. In concordance, recent functional studies pointed to an essential role of GRP in zebrafish skeletal development and calcification [5], albeit GRP-deficient mice did not reveal evident phenotypic alterations in skeletal architecture, development, or calcification [6]. While four alternatively spliced transcripts of the GRP gene (GRP-F1, -F2, -F3, and -F4) were described in mouse chondrocytes [7] and zebrafish [5,8], new alternatively spliced transcripts were recently identified in humans. Besides GRP-F1, the new variants GRP-F5 and GRP-F6 are characterized by the loss of the full γ-carboxylation and partial secretion functional motifs, due to deletion of exon 3 in F5 and of exons 2 and 3 in F6 [9].
Although the precise function of the novel human GRP variants and the importance of their γ-carboxylation need to be further addressed, the perception that crucial differences exist between mouse and human GRP highlights the need for additional characterization of GRP expression/accumulation patterns in health and disease. Considering VKDPs whose function is known, γ-carboxylation was shown to be required for biological activity [10-12], and under- or uncarboxylated species are generally regarded as proteins with low or no functional activity [13,14]. Several factors, such as insufficient dietary intake of vitamin K, mutations in the γ-glutamyl carboxylase enzyme (GGCX), and warfarin treatment, result in decreased γ-carboxylation of VKDPs, which has been associated with an increased risk of osteoporotic bone loss [15,16] as well as of arterial and skin calcifications [12,17-20]. Microcalcifications are also often associated with different types of cancer and are considered a hallmark for early detection of breast cancer, with recognized prognostic relevance. Accumulating evidence suggests that mineralization occurring in breast cancer is a cell-specific regulated process sharing molecular mechanisms with arterial pathological mineralization and physiological mineralization in bone [21]. Several reports have described an association of VKDPs with different types of cancer, namely matrix Gla protein (MGP) and uncarboxylated prothrombin (des-γ-carboxy prothrombin, DCP) [22-28]. Although a relation between MGP γ-carboxylation status and neoplasias is still unknown, for prothrombin it was shown that increased circulating DCP can be used as a diagnostic and prognostic marker for hepatocellular carcinoma [22,28,29]. It is remarkable that, despite the widely reported anticancer potential of vitamin K [10,30, and references therein], its mechanism of action in cancerous processes remains elusive. Therefore, the identification of new vitamin K targets in cancer, such as GRP, may contribute to unveiling the role and functional mechanism of vitamin K in cancer development.
Here, we report on the association of GRP in human skin and breast cancers with the specific accumulation pattern of carboxylated (cGRP) and undercarboxylated (ucGRP) protein forms. In these studies we used newly developed conformation-specific antibodies against ucGRP and cGRP. We investigated the association and γ-carboxylation status of GRP with microcalcifications occurring in skin and breast cancers, and tested the mineral-binding capacity of both protein forms, which may help in understanding the mechanism behind the previously reported association of GRP with pathological mineralization.

Total RNA was obtained as described [31]. RNA integrity was evaluated by agarose-formaldehyde gel electrophoresis and concentration determined by spectrophotometric analysis at 260 nm.
Gene Expression.
One microgram of total RNA was treated with RQ1 RNase-free DNase (Promega) and reverse-transcribed at 37 °C with MMLV-RT (Invitrogen) using a dT adapter. PCR amplifications for GRP-F1, -F5, and -F6 splice variants were performed with SsoFast EvaGreen Supermix (BioRad) for 50 cycles and specific primer sets A/B, C/D, and C/E, respectively. Ribosomal 18S was used as loading control. A list of all PCR primer sequences is presented in Table 3.
Quantitative Real-Time PCR (qPCR).
Quantitative PCR was performed with an iCycler iQ apparatus (Bio-Rad) using 25 ng cDNA and the conditions described above. In addition to GRP-F1, -F5, -F6 and 18S, MGP, GGCX, VKOR (vitamin K epoxide reductase), OPN (osteopontin), TNFα (tumor necrosis factor alpha), and GAPDH were amplified using primer sets as described in Table 3. Fluorescence was measured at the end of each extension cycle in the FAM-490 channel, and melting profiles of each reaction were checked for unspecific product amplification. Levels of gene expression were calculated using the comparative method (ddCt) and normalized against the expression levels of both GAPDH and 18S housekeeping genes, with the iQ5 software (BioRad); qPCR was performed in duplicate and a normalized SD was calculated.
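For reference, the comparative (ddCt) calculation reduces to a few lines. The sketch below uses hypothetical Ct values and normalizes the target gene against the geometric mean of the two reference genes; the exact normalization implemented by the iQ5 software may differ in detail.

```python
from statistics import geometric_mean  # Python 3.8+

def ddct_fold_change(ct_target_sample, ct_refs_sample,
                     ct_target_control, ct_refs_control):
    """Relative expression by the comparative (ddCt) method.

    The target Ct is normalized against the geometric mean of the
    reference-gene Cts (here GAPDH and 18S); assumes ~100% PCR efficiency.
    """
    dct_sample = ct_target_sample - geometric_mean(ct_refs_sample)
    dct_control = ct_target_control - geometric_mean(ct_refs_control)
    return 2.0 ** (-(dct_sample - dct_control))

# Hypothetical Ct values: GRP-F1 in a tumor vs. a control sample,
# with [GAPDH, 18S] reference Cts for each.
print(f"{ddct_fold_change(24.1, [18.0, 9.5], 25.3, [18.2, 9.4]):.2f}")  # ~2.29
```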
Conformation-Specific Antibodies against Carboxylated (cGRP) and Undercarboxylated (ucGRP) GRP Protein Forms.
Affinity-purified chicken polyclonal antibody against cGRP (cGRP pAb) (GenoGla Diagnostics, Faro, Portugal) was produced by immunizing chickens with a synthetic peptide corresponding to a γ-carboxylated region of the human GRP Gla domain located within exon 4 (aa 29-42: QRNEFENFVEEQND, in which all E are Gla residues; termed cGRP29-42, Figure 1). An equivalent, but noncarboxylated, peptide (aa 29-42, where all E are Glu residues) was termed ucGRP29-42 (Figure 1). The conformation-specific affinity-purified antibody was obtained by passing the chicken serum through an ucGRP29-42 affinity column, followed by immunopurification of the flow-through on a cGRP29-42 column.
For production of the ucGRP monoclonal antibody, spleen cells from immunized mice were fused with a mouse myeloma cell line (Sp 2/01-Ag, CRL 8006, ATCC). Clones strongly reacting with ucGRP31-54 and nonreactive with cGRP31-54 were selected, and the antibodies produced were purified by protein G affinity chromatography.
CTerm-GRP polyclonal antibody (GenoGla Diagnostics) detecting total GRP was produced against a synthetic peptide corresponding to the C-terminus of rat GRP, following a previously described procedure [2].
Negative controls consisted of the substitution of the primary antibody with TBT. Counterstaining was performed with haematoxylin. Microscopic images were acquired with a Zeiss AXIOIMAGER Z2 microscope, an AxioCam ICc3 camera, and AxioVision software version 4.8 (Carl Zeiss) at the light microscopy facility, Department of Biomedical Sciences and Medicine, University of Algarve (Portugal).
Cloning of hGRP-F1 into pET151 Expression Vector.
The complete open reading frame (ORF) of the human GRP-F1 isoform (hGRP) was amplified by nested PCR from reverse-transcribed Sk total RNA, using HsGRPCDS1 Fw and HsGRPCDS3 Rv specific primers in the first PCR, followed by nested amplification with HsGRPCDS2 Fw/HsGRPEx5R2 primers. PCR products were cloned into pCR II TOPO (Invitrogen) and sequenced (CCMAR sequencing facilities, Faro, Portugal). Human GRP cDNA coding for the secreted GRP-F1 protein was amplified using an Sk-derived positive clone and specific primers ReHsGRP CFw/ReHsGRP Rv designed to allow directional cloning into a pET151/D-TOPO vector (Champion pET Directional TOPO Expression kit, Invitrogen). A His6 tag, a V5 epitope, and a tobacco etch virus (TEV) protease cleavage site were added to the N-terminus of the expressed protein. Correct cloning was verified by DNA sequencing (CCMAR). A list of all PCR primer sequences is presented in Table 3.
Recombinant Protein Expression and Purification.
Escherichia coli BL21 Star (DE3) cells (Champion pET Directional TOPO Expression kit) were transformed according to the manufacturer's instructions, and induction was performed with 1 mM IPTG for 4 h. Cells were pelleted by centrifugation, resuspended in binding buffer (20 mM sodium phosphate, 0.5 M NaCl, 20 mM imidazole, pH 7.4), and sonicated for 3 min in series of 10 s pulses at 60 V. The cleared supernatant was loaded onto a 1 mL HisTrap HP column (GE Healthcare) according to the manufacturer's instructions, and recombinant protein was eluted with 20 mM sodium phosphate, 0.5 M NaCl, 500 mM imidazole, pH 7.4. Recombinant human GRP (rhGRP) protein purity was assessed by SDS-PAGE.
Extraction and Purification of GRP and MGP from Calcified Tissues.
Sturgeon GRP (sGRP) was extracted and purified as previously described [1]. The identity of the purified protein, obtained after RP-HPLC purification, was confirmed by N-terminal amino acid sequencing. Bovine MGP (bMGP) was extracted from bovine calcified costal cartilage, obtained from a local slaughterhouse, as described [32]. Briefly, the formic acid demineralized fraction containing mineral-binding proteins was dialyzed against 50 mM HCl using 3,500 molecular weight tubing (Spectra/Por 3, Spectrum) over two days, then freeze-dried and dissolved in 6 M guanidine-HCl, 0.1 M Tris, pH 9.0. Subsequent partial purification was achieved by precipitation of insoluble proteins (mainly MGP) through dialysis against 5 mM ammonium bicarbonate. Precipitated MGP was dissolved in 6 M guanidine-HCl, 0.1 M Tris, pH 9.0. HisTrap-purified rhGRP was further purified by RP-HPLC as described above for sGRP, and recombinant Thermus thermophilus S6 ribosomal protein (S6) was a kind gift from Professor Eduardo Melo (CBME, University of Algarve, Portugal).
Protein Mineral Complex (PMC) In Vitro Assay.
Basic calcium phosphate (BCP) crystals were produced as previously described [33] by incubating 2 mM CaCl2 and 10 mM sodium phosphate buffer, pH 7.0, for 2 h at 37 °C, followed by centrifugation at 20,000 × g for 20 min at RT. BCP crystals were incubated for 30 min at 37 °C with approximately 5 μg of each protein (rhGRP, sGRP, bMGP, and S6) in 25 mM boric acid, pH 7.4. After centrifugation at 20,000 × g for 20 min at RT, supernatants containing non-mineral-bound proteins were collected, lyophilized, and analyzed by SDS-PAGE. Pellets containing PMCs were first washed with 25 mM boric acid, pH 7.4, and then demineralized with 30% (v/v) formic acid for 2 h at 4 °C with agitation. After centrifugation at 20,000 × g for 20 min at 4 °C, the supernatant containing the BCP-binding proteins was collected, lyophilized, and analyzed by SDS-PAGE.
Electrophoresis and Dot-Blot Analysis.
Aliquots of protein were separated on 4-12% gradient SDS-PAGE gels (NuPage, Invitrogen), and proteins were visualized by Coomassie Brilliant Blue (CBB, USB) staining as described [2]. For dot-blot immunodetection, 100, 50, and 25 ng of synthetic peptides were applied onto a nitrocellulose membrane (BioRad), as described [2], and incubated O/N with 1:1000 dilutions of cGRP pAb and ucGRP mAb, respectively. Immunodetection was achieved using species-specific horseradish-peroxidase-conjugated secondary antibodies and Western Lightning Plus-ECL (PerkinElmer).
GRP-F1 as Main GRP Splice Variant Expressed in Human Skin and Mammary Gland.
In order to study the expression pattern of GRP splice variants in skin (Sk) and mammary gland (MG) control tissues, specific primers were designed (Figure 2(a)) and tested for the unique amplification of each splice variant using GRP-F1, -F5, and -F6 clones. Primer sets A-B, C-D, and C-E were shown to specifically amplify GRP-F1, -F5, and -F6, respectively (results not shown). GRP-F1 was consistently amplified in all control samples of Sk and MG analyzed, while GRP-F5 and -F6 expression was heterogeneous: barely detectable in some Sk samples and mostly undetectable in the MG tissues analyzed (Figure 2(b)). Overall, the expression pattern of GRP splice variants shows that GRP-F1, coding for the full protein, is the main transcript present in control skin and mammary gland.
GRP Accumulates in Both Skin and Mammary Gland Control Tissues.
The pattern of total GRP accumulation was determined in control human skin (Figures 3(a)-3(c)) and mammary gland (Figure 3(d)). Strong positive staining for GRP was observed in all strata of the epidermis (Ep), in small blood vessels (BV) at the dermis level (Figure 3(a)), in hair follicles (results not shown), and in sweat (SwG; Figure 3(b)) and sebaceous (SG; Figure 3(c)) glands; this is consistent with the previously described pattern of GRP accumulation in human skin [2]. In normal mammary tissue, GRP was mainly detected in the cytoplasm of ductal cells (DC) forming the lobules (Figure 3(d)) and in small arterioles (results not shown). Negative controls (NC) showed absence of signal.
GRP-F1 and Genes Involved in γ-Carboxylation Share Gene Expression Patterns in Skin and Breast Cancers.
Expression levels of GRP splice transcripts were determined in control and cancerous tissues and correlated with gene expression of MGP, GGCX, VKOR, and the tumor markers OPN and TNFα (Figures 4 and 5). Both in skin cancer (SC) and in the control samples (Sk), the levels of GRP-F1 were found to be heterogeneous, without a clear tendency for up- or downregulation in cancer cases (Figure 4). Interestingly, the same heterogeneous pattern was found for MGP, GGCX, and VKOR, while OPN and TNFα were clearly upregulated in tumor samples (Figure 4). These results suggest a concerted expression of the VKDPs GRP and MGP and of the genes involved in the γ-carboxylation process, which cannot be associated with growth, progression, or metastasis of cancer processes at this time. Of note, the skin cancer samples analyzed were devoid of microcalcifications, as determined by von Kossa staining and confirmed by histological evaluation by pathologists (Table 1). Similar gene expression results were found in control MG and breast cancer (BC) samples (Figure 5), with heterogeneous levels of GRP-F1, MGP, GGCX, and VKOR and increased expression of OPN and TNFα in cancer cases (Figure 5). However, higher levels of GRP-F1, MGP, GGCX, and VKOR were found in BC samples that included microcalcifications (Table 1), suggesting an upregulation associated with calcification, but not necessarily with tumor development. Gene expression of GRP-F5 and -F6 transcripts was nearly undetectable in the majority of samples from both skin and breast cancers (results not shown), highlighting the predominance of the GRP-F1 transcript in all tissues and conditions analyzed.
Validation of Novel Conformation-Specific Antibodies against Human Carboxylated (cGRP) and Undercarboxylated (ucGRP) GRP Protein Forms.
Conformational specificity of the cGRP and ucGRP antibodies was initially screened by a cross-reactivity test of immune sera with each peptide in both γ-carboxylated and non-γ-carboxylated forms. Cross-reactivity of purified cGRP pAb and ucGRP mAb with all available synthetic peptides was further tested by dot-blot, confirming the specificity of cGRP pAb for cGRPpep29-42 and of ucGRP mAb for ucGRPpep31-54 (Figure 6). In addition, quenching assays were performed for both antibodies to validate their use in IHC. Blocking of cGRP pAb and ucGRP mAb epitope recognition was performed using the corresponding synthetic peptides in both forms; incubations of cGRP pAb with cGRPpep29-42 and of ucGRP mAb with ucGRPpep31-54 resulted in decreased signal intensity, while incubations of cGRP pAb with ucGRPpep29-42 and of ucGRP mAb with cGRPpep31-54 showed signal staining similar to the non-blocked antibody assays (results not shown).

Differential Accumulation Pattern of Human cGRP and ucGRP in Skin and Breast Cancers.
Fifteen individual cases (eight skin and seven breast cancers, see Tables 1 and 2) were analyzed by IHC using cGRP and ucGRP antibodies and compared with control tissues (Figure 7). In control skin, cGRP and ucGRP (Figures 7(a) and 7(e), resp.) colocalized with total GRP (Figure 3(a)), although most of the fibroblasts were stained predominantly for cGRP. In healthy mammary glands, cGRP was consistently detected in the cytoplasm of ductal cells (DC) forming the lobules (Figures 7(i) and 7(j)) and colocalized with total GRP (Figure 3(d)), while ucGRP was found to be either colocalized (Figure 7(m)) or undetectable (Figure 7(n)), depending on the samples analyzed. In contrast, in IDC samples ucGRP was consistently detected throughout the cytoplasm of tumor cells (TC; Figures 7(o) and 7(p)), while cGRP was highly localized, with a pointed spot pattern, to certain tumor cells (Figures 7(k) and 7(l)). Overall, ucGRP is the predominant form accumulating in IDC tumor cells, while cGRP preferentially accumulates in healthy mammary gland. Of note, not all areas of IDC analyzed were found positive for GRP, but in areas with a positive signal the described patterns were always observed. Negative controls were performed for each antibody and each sample analyzed and showed absence of signal (results not shown).
Both cGRP and ucGRP Protein Forms Accumulate at Sites of Microcalcifications in BCC and IDC.
Of all samples analyzed by von Kossa staining, two BCC and four IDC samples were found to contain microcalcifications (results not shown), classified as light, moderate, or massive according to the quantity and size of the mineral present (Tables 1 and 2). In all samples containing microcalcifications, both cGRP (Figures 8(a)-8(c)) and ucGRP (Figures 8(d)-8(f)) were detected colocalizing with mineral deposits in BCC (Figures 8(a) and 8(d)) and IDC (Figures 8(b), 8(c), 8(e), and 8(f)). These results strongly suggest that both cGRP and ucGRP have a high affinity for calcium mineral deposits.
In Vitro Association of Both cGRP and ucGRP Protein Forms with Basic Calcium Phosphate (BCP) Crystals.
The calcium/phosphate (Ca/P) mineral-binding capacity of carboxylated and noncarboxylated GRP protein forms was evaluated using protein-mineral complex (PMC) assays.
Human recombinant GRP-F1 protein (rhGRP) was expressed as a noncarboxylated secretory 107-aa protein, comprising the 74-aa GRP-F1 protein with 33 aa of His6 tag, V5 epitope, and TEV recognition site (TEV RS) at its N-terminus. Purified rhGRP, with an apparent molecular weight of 14 kDa on SDS-PAGE (Figure 9(a)), was further identified by LC-MS/MS analysis (Mass Spectrometry facilities, ITQB-Lisbon, results not shown). Protein fractions obtained in the PMC assays using noncarboxylated rhGRP, carboxylated sturgeon GRP (sGRP), S6, and bovine MGP (bMGP) were analyzed by SDS-PAGE. The results demonstrate that most of the rhGRP, sGRP, and bMGP (used as positive control) was present in the demineralized fraction corresponding to mineral-bound proteins, while S6, used as negative control, was predominantly found in the supernatant containing the non-mineral-bound proteins (Figure 9(b)). These results confirm the BCP-binding capacity of both carboxylated and noncarboxylated GRP protein forms.
Discussion
γ-carboxylation of VKDPs is widely accepted to be determinant for their proper function, highlighting the importance of investigating the γ-carboxylation status of GRP in humans. Human GRP carboxylation had been hypothesized on the basis of its high sequence similarity with the sturgeon protein, previously shown to be γ-carboxylated, and the identification of specific domains and motifs conserved in other VKDPs [1]. In this work, we first investigated the GRP γ-carboxylation status in human healthy tissues and further determined its association with ectopic calcification in cancers. Recently, we found additional GRP alternatively spliced transcripts in human tissues (GRP-F5 and -F6, [9]), which differ from those previously described for mouse [7] and zebrafish [5,8]. However, the GRP-F1 transcript was clearly shown to be the predominantly expressed variant in the control and cancer tissues analyzed. The newly developed conformation-specific antibodies were designed to detect the complete form of human secreted GRP (i.e., the GRP-F1 isoform), containing 15 Glu residues potentially γ-carboxylated. Since both GRP-F5 and -F6 contain exons 4 and 5, it would be possible that, in certain conditions, both cGRP pAb and ucGRP mAb colocalize different GRP isoforms. However, in this study the expression of GRP-F5 and -F6 was undetectable in the majority of the samples analyzed, and their contribution to the GRP accumulation pattern was considered negligible. Using the new conformation-specific antibodies, we were able to demonstrate the differential accumulation patterns of cGRP and ucGRP species in healthy skin and mammary gland tissues, their relation with neoplasias, and their particular association with microcalcifications in skin and breast cancers. In healthy tissues, cGRP and ucGRP were found colocalized, suggesting an incomplete GRP γ-carboxylation status under normal physiological conditions. This result is consistent with the knowledge that all extrahepatic Gla proteins investigated to date are undercarboxylated in non-vitamin-K-supplemented healthy individuals [13,14]. Moreover, in tumor cells (both in BCC and IDC) cGRP was clearly lower than in non-affected areas, whereas ucGRP preferentially associated with tumor cells; high amounts of ucGRP were also found at sites of microcalcifications. Since the conformation-specific GRP antibodies were produced against synthetic peptides covering small regions of GRP, and possible γ-carboxylated Glu residues are present throughout the entire mature protein, the possibility of simultaneous detection of c/ucGRP protein forms cannot yet be completely discarded. Further characterization of monospecificity against native GRP species is currently under investigation. Nevertheless, these antibodies were found to have high specificity towards the respective synthetic GRP-related peptides used as antigens and clearly demonstrate different patterns of cGRP and ucGRP protein accumulation in the human tissues analyzed.
We have previously suggested that GRP may be a physiological inhibitor of soft tissue calcification accumulating at sites of mineral deposition [2], and the clear association of GRP with the microcalcifications present in BCC and IDC further supports a global association of GRP with ectopic calcifications, independent of disease etiology. The presence of high amounts of ucGRP at sites of calcification, together with (i) the knowledge that Gla residues increase the calcium-binding capacity of VKDPs and (ii) the fact that calcification inhibitors are known to accumulate at sites of mineral deposition [34-36], suggests a pivotal role for GRP in the regulation of mineralization, which can be compromised in situations of low γ-carboxylase activity (e.g., by poor vitamin K status). In analogy, impaired carboxylation of MGP, leading to the accumulation of substantial amounts of ucMGP at sites of calcification, was previously suggested to be associated with a suboptimal capacity for inhibition of arterial and skin calcification [18,19,36]. In concordance, our results show higher levels of GRP-F1 expression in IDC cases where ectopic calcifications were present. Although larger sample sets, not yet available in our laboratory, would be required to clearly establish a relation between GRP-F1 expression levels and cancer development, it is interesting to note that the expression pattern of GGCX, VKOR, and MGP was found to be highly similar to that of GRP-F1. This suggests that genes required for γ-carboxylation respond in a concerted manner according to demands of substrate and should not be limiting factors for carboxylation of VKDPs such as GRP and MGP. However, increased gene expression might not reflect protein functionality, and impaired GGCX activity has been associated with insufficient γ-carboxylation of prothrombin in cancers [37,38], while levels of GGCX mRNA have been shown to be either increased or heterogeneous among hepatocellular carcinomas [39]. Although prolonged subclinical vitamin K deficiency has been demonstrated to be a risk factor for cancer development [40], a relation between vitamin K status or intake and decreased carboxylation of VKDPs is still controversial [24,39]. However, the increased ucGRP accumulation in BCC and IDC and the concomitant decrease of cGRP in relation to healthy tissues could be explained by decreased levels of vitamin K in tumor areas in contrast to non-tumorous ones, as previously reported [39]. Although a number of studies have shown that different forms of vitamin K (notably vitamin K2) exert antitumor activity on various rodent- and human-derived neoplastic cell lines [10,30,41], most of these effects have been correlated with increased γ-carboxylation of prothrombin leading to decreased levels of DCP [38,39,42]. Moreover, although levels of MGP mRNA have been suggested as a molecular marker for breast cancer prognosis, with overexpression and downregulation of the MGP gene reported in different types of cancer and cell lines [reviewed in [25,26]], its γ-carboxylation status in neoplasias remains unknown. Special attention should be given to the suggested therapeutic effect of vitamin K on cancer progression and to the potential detrimental effects of vitamin K antagonists [43], widely used in the therapy of patients with cancer, on the functionality of VKDPs present in tumor tissues such as GRP and MGP.

Figure 7: cGRP preferentially accumulates in healthy tissues while ucGRP is the predominant form associated with tumor cells. Immunolocalization of cGRP and ucGRP in control skin (Sk; (a, e), resp.), basal cell carcinoma (BCC; (b-d), (f-h), resp.), control mammary gland (MG; (i-j), (m-n), resp.), and invasive ductal carcinoma (IDC; (k-l), (o-p), resp.) tissue sections was performed with cGRP and ucGRP antibodies, respectively. In control skin, cGRP and ucGRP are similarly accumulated in the epidermis, although cGRP is the predominant form in blood vessels (BV) and fibroblasts (Fb) ((a, e), resp.). In BCC tumor cells (TC), cGRP levels are significantly decreased (b, c), while ucGRP is the predominant form (f, g), compared to both healthy skin (a, e) and non-affected areas adjacent to tumor cells (d, h). In control mammary gland (MG), cGRP is accumulated in the ductal cells (DC; (i, j)), while ucGRP accumulation is either similar (m) or decreased (n) compared to cGRP. In IDC tumor cells, the amount of cGRP is significantly decreased (k, l) in relation to ucGRP (o, p). Sections were counterstained with haematoxylin. Scale bar represents 100 μm.
Our protein-mineral complex in vitro studies provide insights into a possible mechanism explaining the accumulation of GRP at sites of pathological calcifications, since we demonstrated that both cGRP and ucGRP have calcium mineral-binding capacity and can directly bind BCP crystals. Similarly, MGP was shown to directly interact with HA crystals involving both phosphoserine and Gla residues; also for MGP the direct protein-HA interaction was suggested to be the mechanism underlying MGP arterial calcification inhibition [44]. Interactions between proteins and biological calcium crystals are believed to play a central role in preventing or limiting mineral formation in soft tissues and biological fluids, being determinant in several physiological processes and associated with pathological conditions. Additional studies are required to further clarify the role of cGRP and ucGRP species in calcium crystal nucleation and growth and to determine their precise mechanism of action at the molecular level.
Although further efforts will be made to highlight the relevance of GRP in cancer processes, we showed that GRP is associated with pathological mineralization in cancer and has the in vitro capacity to directly interact with calcium crystals. Our results emphasize that the involvement of this protein should be considered whenever conditions for correct carboxylation of VKDPs are affected. Furthermore, the measurement of the carboxylation degree of Gla proteins, such as MGP, osteocalcin, and prothrombin, has been proposed as a marker for certain pathological conditions and vitamin K status [14,40,45,46]. Further investigations aiming to correlate the circulating levels and γ-carboxylation status of GRP with the degree of calcification and disease progression are currently in progress in our labs. We expect that our work will contribute to the evaluation of the potential use of GRP as an additional marker for ectopic calcification.
Conclusions
Here we report the γ-carboxylation status of GRP-F1 in human healthy tissues and its association with skin and breast cancers. The new conformation-specific GRP antibodies enabled us to demonstrate that, in healthy tissues, cGRP and ucGRP are colocalized, suggesting an incomplete GRP γ-carboxylation status under normal physiological conditions, while ucGRP was the predominant form associated with tumor cells.
Our results strengthen the previously reported association of GRP with ectopic calcifications, which are particularly relevant in the diagnosis of breast tumors. Our findings suggest that GRP may represent a new target for the anticancer potential of vitamin K, while the degree of GRP γ-carboxylation might be useful as a potential marker for vitamin K status and the occurrence of ectopic calcification.
Using LinkedIn Endorsements to Reinforce an Ontology and Machine Learning-Based Recommender System to Improve Professional Skills
Nowadays, social networks have become highly relevant in the professional field, in terms of the possibility of sharing profiles, skills and jobs. LinkedIn has become the social network par excellence, owing to its content of professional and training information; it also offers endorsements, which are validations of the skills of users that can be taken into account in the recruitment process as well as in a recommender system. In order to determine how endorsements influence Lifelong Learning course recommendations for professional skills development and enhancement, a new version of our Lifelong Learning course recommendation system is proposed. The recommender system is based on an ontology, which allows modelling the data of knowledge areas and job performance sectors to represent the professional skills of users obtained from social networks. Machine learning techniques are applied to group entities in the ontology and make predictions on new data. The recommender system has a semantic core, content-based filtering, and heuristics to perform the formative suggestion. In order to validate the data model and test the recommender system, information was obtained from web-based lifelong learning courses and collected from LinkedIn professional profiles, incorporating the skill endorsements into the user profile. All possible settings of the system were tested. The best result was obtained with the setting based on the density-based spatial clustering of applications with noise (DBSCAN) algorithm, with an accuracy of 94% and a recall of 80%.
Introduction and Related Studies
Employability requirements have changed to address the new realities and challenges faced by organizations. According to [1], cited by [2], in several fields discrepancies emerge between employee competences and labor market demands. In view of this, professionals must improve and/or develop their competences and skills according to trends in the labor market in order to remain competitive. Nowadays, organizations can use different Social Network Sites (SNSs) to research candidates [3], as well as to gather information on labor trends so as to adjust academic plans, analyze student profiles and extract their skills, and identify the most desired skills for the purpose of adjusting curricula [4].
In the professional field, LinkedIn has played a very important role in the dissemination of jobs; according to the work presented in [5], one of its missions is to match jobs with suitable professionals.
To support this process, Recommender Systems (RSs) have emerged, which collect the information linking users to items and use it to make relevant and meaningful suggestions. According to [6], RSs base the prediction of user interests on their explicit or implicit preferences.

Table 1. Use of ML in recommender systems.

Ref. | Year | Objective | Filtering Technique | ML Technique | Metrics
[20] | 2015 | Recommend LLL courses according to common professional interests | Collaborative + ML | ML supervised | Accuracy/Recall
[21] | 2018 | Recommend courses according to academic profile and jobs on offer | Collaborative + Content + ML | ML supervised | Accuracy/Recovery
[22] | 2019 | Recommend exercises based on information regarding academic performance | ML | ML non-supervised | NDCG **, MAE * and F1 ***
[23] | 2019 | Recommend courses to maximize learning based on past performance | ML | ML by way of support | Accuracy/F1
[24] | 2019 | Recommend consultancy according to academic-industrial research interests | ML | ML supervised | NDCG
[25] | 2020 | Recommend learning resources according to context, taking colleagues' learning into account | ML | ML non-supervised | Accuracy/Recall/F1

MAE *: Mean absolute error; NDCG **: Normalized Discounted Cumulative Gain; F1 ***: Harmonic mean between precision and recall.
In some cases, ML is used in RSs to make inferences about how past actions correspond to future outcomes. [23] applies this technique to predict future performance based on students' academic record in order to re-schedule courses for target course preparation and exam preparation. [24] proposes an RS for different users in academia, using ML to extract information about research areas of interest to professors from publications and curricula, and then combining them with their background to assign courses and research work for supervisory purposes.
Additionally, semantic web techniques have been combined with ML techniques, allowing knowledge to be exploited from available information, updated and new relationships between data inferred [14]. The combination of an ontology-based recommender system with ML techniques has been used as an approach to improve the accuracy of recommendations, in addition to addressing information overload, with a view to solving cold start problems [26].
Based on the above, [2] proposes a hybrid RS to suggest lifelong learning (LLL) courses to improve the professional competences of users whose profiles are built from their LinkedIn profile data. The RS core involves semantic filtering that uses an ontology modelling employment sectors and areas of knowledge to represent professional competences. The ontology is updated using events based on professional record profile data extracted from LinkedIn, using ML to cluster entities in order to make predictions on the new data. As a line of future work, the authors recommend considering the use of LinkedIn endorsements to enrich the user profile and thus improve recommendations.
Recently, there have been several research works oriented to the analysis of data provided by LinkedIn, which, in addition to data related to academic and professional training, also offers endorsements, one of LinkedIn's key features, in which a viewer is asked to endorse a skill of the candidate as proof of their skill level [27].
In a review of the relevant literature, we found the research shown in Table 2 related to the use of LinkedIn endorsements. This is highlighted in [4], in which an RS is proposed that classifies profiles according to skills and peer endorsements, identifying the most desired skills that should be covered by curricula and areas of learning, and suggesting possible corrections to learning programs. [32] indicates that, in addition to this information, the skill endorsements of LinkedIn users can be analyzed to complement the profile.
The system proposed by [32] recommends relevant job opportunities for users seeking employment in the area of information technology, based on the resume entered into the system. In addition, it makes it easy for recruiters to search for the best talent based on job requirements by analyzing not only the resume but also LinkedIn endorsements, which could improve the user profile.
Based on the above, an improvement to the RS of [2] is proposed, incorporating LinkedIn endorsements as an additional system input. These are taken into account in order to validate the skills determined in the RS profiles and thereby improve recommendations.
In the literature on the subject, we found some RSs aimed at developing professional skills through a wide range of training. In this respect, the objective of the RSs proposed by [33-35] is to identify the professional competencies that users need to develop in their current job, so that they can be taken into account in their training plan.
For [4], along with RSs, the analysis of SNS data has become a common practice for collecting information about users since, according to [36], SNSs such as Facebook and LinkedIn have become integrated into everyday life.
Ref. [37] indicates that social media provide innovative pedagogical frameworks for teaching and learning that allow students to develop digital skills deemed useful for a successful professional career. He also points out that semantic social networks offer a series of advantages related to their strategic use, such as the creation of a network of contacts, which can have an impact on their professional development by connecting with professionals and following labor market trends.
As the use of SNSs has increased decade by decade, so has interest in using SNSs in recruitment and hiring processes [3]. In this sense, for [28], LinkedIn is the most influential web tool in terms of professional use, unlike other SNSs such as Facebook or Twitter, which focus on social relationships. In LinkedIn, users have greater visibility in professional terms, as they can share their training, skills and work experience [4].
For [37], LinkedIn focuses on professional networking and career development, designed to help people make connections, share experiences and resumes, and find jobs. It is also a tool that can be used to find relevant, quality content. According to [3], LinkedIn allows users to provide written recommendations and endorsements of skills that appear on a user's profile; it has also introduced endorsements, in which users are asked to endorse the skills of other users [27].
The data contained in SNSs are very diverse [3], and RSs also obtain information from multiple sources; to handle heterogeneity, they have adopted semantic knowledge representation as one of the theories to help deal with this [26].
Based on this review and lending continuity to the RS presented by [2], it is proposed that LinkedIn skills endorsements be incorporated in order to validate the skills obtained in the user profiling stage to optimize performance and prediction of the continuing education course recommendation system. The system uses LinkedIn user records in the area of software development.
This article is structured as follows: Section 2 details the RS proposal; Section 3 describes the evaluation of the proposal; and Section 4 provides conclusions and recommendations for future research.
Materials
In the field of learning and in personalized recommendation services, different user dimensions, such as personal data, interests, knowledge levels and context, among others, must be taken into account in order to respond appropriately to user needs [38]. Based on this, the LinkedIn records used by [2] were employed to create user profiles containing the personal, academic and professional data of users, since these contain relevant information to help determine the area of knowledge, skills and labor sector that are taken into account when recommending courses. Additionally, LinkedIn endorsements were incorporated. For the purposes of evaluating the proposal, this was confined to the specific domain of software development.
For user and course profiles, professional skills and job sectors were coded using the taxonomies defined in [2].
The ontology used was stored in Neo4j; this tool was selected in [2] because it is a NoSQL graph database manager in which the relationship is the most important database element, representing the interconnection between nodes. This makes it ideal for representing knowledge graphs, allowing nodes and relationships to be managed and efficient operations to be performed on them.
As for the different programs that make up the system, the ones developed by [2] were used as a basis and programmed using Python 3. Cypher, the Neo4j query language, was used to query the graph.
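As an illustration, the following minimal Python sketch shows how such a graph could be queried through the Neo4j driver with Cypher. The node labels, relationship types and property names (Position, Skill, REQUIRES, frequency) are illustrative assumptions of ours, not the schema published in [2]:

```python
# Hypothetical sketch: querying the skills ontology stored in Neo4j.
# Labels, relationship types and properties are assumptions for illustration.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (p:Position)-[r:REQUIRES]->(s:Skill {code: $code})
RETURN p.name AS position, r.frequency AS freq
ORDER BY r.frequency DESC LIMIT 10
"""

with driver.session() as session:
    # List the positions most frequently associated with a given skill code.
    for record in session.run(query, code="SD.WEB.BACKEND"):
        print(record["position"], record["freq"])
driver.close()
```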
Methods
In order to recommend continuing education courses to improve and develop professional skills, a hybrid RS based on ontology and ML is proposed to determine the skills to be updated and/or developed according to labor market trends. User profiles are created from their LinkedIn records, taking endorsements into account in order to validate and determine the level of skills. The recommendation process has three filtering stages: at its core is semantic filtering, which is combined with content-based filtering for the initial prediction of courses and with a heuristic stage to obtain the final ranking of courses to be recommended.
The following is established as a basis for designing the RS:
- The system input data are obtained from LinkedIn, to create user profiles, and from the web for courses, without requiring any additional information to be uploaded by users.
- Courses will be recommended using the skills they develop as a criterion. Skills in the user and course profiles need to be defined under the same terms in the recommendation process.
- Modular design allows new algorithms to be incorporated into the system and multiple configuration options to be offered, including parameterizations for the different algorithms and the enabling or disabling of filtering stages, in order to validate and evaluate the proposals.
The RS architecture proposed is shown in Figure 1.
The system consists of two main phases: an off-line phase (I), in which the user profiles (A) and (B) are built, definitions are loaded and data models are built from the ontology (C) and ML (D) update processes; and an on-line phase (II), in which the course recommendation process is performed, for which the profile of the user to be given the recommendation (1) must be built, and which is performed in three filtering stages (2, 3 and 4).
Off-Line Phase

Profile Creation
A semantic profiling based on ontology is performed in order to generate the different profiles used, making use of taxonomies. This process consists of the user profiling process (A) and the course profiling process (B).
After analyzing the records of LinkedIn users and LLL courses extracted from the Web, two taxonomies were then defined to hierarchically code the areas of knowledge and job performance sectors needed for the recommendation process. The levels defined for the areas of knowledge are: area; sub-area; specialty; sub-specialization; and knowledge. For job performance sectors, the hierarchy levels are: sector; field; type; domain; and position.
The following are coded with a taxonomy of areas of knowledge: user skills with their level of specialization (LS) and their degree of updating (DU), the skills to be developed on LLL courses with an LS, and also the skills required to qualify for a course. The job performance sector taxonomy is used to hierarchically code the different positions held by users.
The user profile information is given by:
- Demographic information.
- Job performance sectors. This is a set of hierarchical codes defined by a taxonomy for job performance sectors, their description and dates.
- Skill sets, where each skill is coded with three arguments: a hierarchical code that is defined by a taxonomy of areas of knowledge, a level of specialization and a degree of update.
Course profile information is provided by:
- Demographic information.
- Skills to be developed. Each skill is coded with two arguments: a hierarchical code defined by a taxonomy of areas of knowledge and a level of specialization.
- Required skills, where each is coded with a hierarchical code that is defined by a taxonomy of areas of knowledge.
- Related skills. These are given by a set of codes of areas of knowledge, where the skills developed on the course may be of interest or complement them.
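The following minimal Python sketch illustrates one possible in-memory representation of these profiles; the class and field names are ours, not those of [2]:

```python
# Illustrative data structures for user and course profiles; names are
# assumptions, not the implementation of [2].
from dataclasses import dataclass, field


@dataclass
class Skill:
    code: str          # hierarchical code from the areas-of-knowledge taxonomy
    ls: int = 0        # level of specialization (LS)
    du: float = 0.0    # degree of updating (DU)


@dataclass
class UserProfile:
    demographics: dict
    job_sectors: list[str] = field(default_factory=list)  # taxonomy codes
    skills: list[Skill] = field(default_factory=list)


@dataclass
class CourseProfile:
    demographics: dict
    skills_developed: list[Skill] = field(default_factory=list)  # code + LS
    required_skills: list[str] = field(default_factory=list)     # codes only
    related_skills: list[str] = field(default_factory=list)      # codes only
```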
Ontology and ML
A key element for the smooth performance of RSs is to ensure rich semantic data mining and to prevent the loss of information obtained during data retrieval. Another process associated with this phase aims at building data models to represent the domain of skills and job performance sectors, according to areas of knowledge.
The ontology, in its broadest sense, seeks to represent positions, skills and the existing relationships between them. Additionally, it should offer the capacity to represent different groupings of positions under a criterion of similarity according to the skills they use. Likewise, the different relationships that can be found between skills must be represented. The classes defined in the ontology are shown in Table 3, and the class hierarchy is given by the different relationships between the classes of the ontology, as described in Table 4. The ontology is updated in two stages from user profiles: through events (C), where the model is updated with the relationships between job performance sectors and areas of knowledge according to user profile competences; and through machine learning (D), where the positions of the entities in the ontology are grouped together using the density-based spatial clustering of applications with noise (DBSCAN) and k-means algorithms.
For [39], the k-means algorithm mainly uses the Euclidean distance function to measure the similarity between data objects, and the sum of squared errors is used to find spherical clusters with a more uniform distribution of data.
The work presented by [40] defines DBSCAN as a clustering algorithm based on data point distribution density. The algorithm can identify the degree of data density and classify the data points in the distribution. At the same time, sporadic data points can be identified as noise, rather than being classified within a class.
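As a minimal sketch of this ML update, assuming toy position and skill names of our own, the clustering of positions by skill similarity could look as follows. The eps = 0.3 value is the one reported for configuration 6 in the evaluation; the metric here is cosine distance, whereas the paper uses the similarity of Equation (1):

```python
# Minimal sketch (not the authors' code) of grouping job positions by the
# similarity of their associated skills. Position/skill names are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

positions = {
    "backend_developer":  {"python", "sql", "docker", "git"},
    "data_engineer":      {"python", "sql", "spark", "git"},
    "frontend_developer": {"javascript", "css", "react", "git"},
    "web_developer":      {"javascript", "css", "python", "git"},
}
skills = sorted(set().union(*positions.values()))

# One binary row per position: 1 if the position uses the skill.
X = np.array([[1 if s in sks else 0 for s in skills]
              for sks in positions.values()])

# DBSCAN with eps = 0.3; label -1 marks noise positions.
db_labels = DBSCAN(eps=0.3, min_samples=2, metric="cosine").fit_predict(X)

# k-means with an assumed number of clusters.
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for name, d, k in zip(positions, db_labels, km_labels):
    print(f"{name}: DBSCAN={d}, k-means={k}")
```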
In the event-driven update, the ontology stores the relationships between positions and areas of knowledge, which are given by skills with attributes: the number of times the skill is present, its average LS and its average DU, the latter calculated taking into account the number of users that possess the skill. The relationships are determined, and their attributes calculated, from a training dataset of user profiles via events in the ontology.
For the ontology substrate, new entities, relationships and attributes are defined in the taxonomies, as well as the inferences that are allowed to be made, which will be used in semantic filtering. Among the new relationships, synonyms and uses of terms in other languages are contemplated; for the purposes of testing this proposal, the use of terms in both Castilian Spanish and English was considered, due to their common use in the chosen domain. In the case of areas of knowledge, relationships of the type "is of interest" are used in their entities to indicate that users who possess a skill related to those entities may be interested in developing other skills with other knowledge, in order to obtain a more comprehensive profile.
Additionally, in job performance sectors, the position entities can have "opt for" relationships, whereby from one position one can opt for similar or higher positions in an orderly relationship defined by a ladder. Job performance clusters are created in the machine learning update. An alternative for the calculation of related skills involves using ML: it is proposed that unsupervised clustering algorithms be used on instances of position entities, in order to group similar positions, with these groupings being based on the similarity of the skills associated with the relevant positions. Job position clustering, based on the similarity of the skill set, can be used to determine the set of related positions for a particular position, or for a particular user's skill set. From the job grouping determined, related skills can be established as those skills that appear most frequently in the positions in the grouping, and which the user does not possess.
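A hedged sketch of this last step, with function and variable names of our own, follows:

```python
# Deriving "related skills": the most frequent skills in the positions of the
# user's cluster that the user does not already possess. Names are ours.
from collections import Counter

def related_skills(user_skills, cluster_positions, top_n=5):
    """cluster_positions: iterable of skill sets for positions in the cluster."""
    freq = Counter(s for sks in cluster_positions for s in sks)
    candidates = [(s, c) for s, c in freq.most_common() if s not in user_skills]
    return [s for s, _ in candidates[:top_n]]

# Example with the toy positions sketched above:
print(related_skills({"python", "sql", "git"},
                     [{"python", "sql", "docker", "git"},
                      {"python", "sql", "spark", "git"}]))
# -> ['docker', 'spark']
```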
On-Line Stage
The course recommendation process is carried out at this stage. As shown in Figure 1, it is made up of the target user profile construction process (1) and the recommendation process, which is carried out in three filtering stages (2), (3) and (4).
(1) Process involved in creating the target user profile

This process is similar to the profile construction of the off-line stage. It differs in the semantic transduction stage in that it incorporates coding of the endorsements into the user profile. The endorsements are part of the information obtained from the LinkedIn record of the user to whom the recommendation is to be made; using the skill coding algorithms, the hierarchical code of the area of knowledge is determined for each endorsed skill. To determine the LS and DU, heuristics are used that take into account the relative frequency of the number of users who endorse the skill. The endorsements form part of the profile of the user to whom the recommendation is to be made and are taken into account in the recommendation process as skills that the user has acquired.
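One possible form of such a heuristic is sketched below; the scaling and the mapping to LS/DU are assumptions of ours, since the paper does not publish the exact formula:

```python
# Illustrative sketch: LS and DU estimated from the relative frequency of
# endorsers per skill. Thresholds and scales are assumptions.
def skill_level_from_endorsements(endorsements, max_level=5):
    """endorsements: dict mapping skill code -> number of endorsing users."""
    total = sum(endorsements.values()) or 1
    levels = {}
    for skill, count in endorsements.items():
        rel = count / total                   # relative endorsement frequency
        ls = max(1, round(rel * max_level))   # level of specialization (LS)
        du = rel                              # degree of updating (DU) in [0, 1]
        levels[skill] = (ls, du)
    return levels

print(skill_level_from_endorsements({"SD.PY": 30, "SD.SQL": 15, "SD.DOCKER": 5}))
```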
Recommendation process
The recommendation process followed is the one proposed by our previous work [2], where the first stage corresponds to a modification of the semantic filtering algorithm (2) in order to determine the user's own, related and ontological skills, as given by the incorporation of LinkedIn endorsements. Then a second content-based filtering (3) is applied for initial course prediction, and finally, filtering and sorting heuristics (4) are applied for the final recommendation of LLL courses to improve and/or develop professional skills based on a user's record.
The process diagram represented in Figure 2 is followed in order to prepare course recommendations for each user.
(2) Semantic filtering to determine similar skills

In order to determine related skills for a user, we first determine his or her related work performance sectors using the ontology, based on his or her skills. For this process, the set of work performance sectors where the user's own skills appear is determined and filtered as follows: those related work performance sectors whose skills are covered by a given percentage of the user's skills are selected, and/or those related work performance sectors whose associated skills cover a percentage of the user's own skills are selected. In any case, the user's own job performance sectors are excluded from the set of related job performance sectors.
Alternatively, from the user's job performance sectors it can be determined to which groupings (clusters) they belong, constructed according to ML algorithms. In the case of new data, i.e., a job performance sector that is not part of any of the clusters obtained during training, it is possible to predict the cluster to which the user belongs by making use of the user's skills. Given the groupings, the set of job performance sectors that belong to each of them will form the user's related performance sectors.
In the case of DBSCAN, to predict the clustering for new data, a core node at a distance of less than an epsilon value is located and the new point is associated with the cluster to which that node belongs; in the case of k-means, the centroid at the smallest distance is located and the point is associated with that cluster. In both cases, the complement of one of the similarity functions is used as the distance, according to Equation (1).
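A minimal sketch of this prediction step follows; the distance used here is cosine distance as a stand-in, since the exact similarity function of Equation (1) is not reproduced in this excerpt, and all names are ours:

```python
# Predicting the cluster of a new job performance sector:
# DBSCAN -> cluster of the nearest core point within eps; k-means -> nearest
# centroid. Distance is taken as 1 - similarity.
import numpy as np

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def predict_dbscan(x, core_points, core_labels, eps):
    dists = [cosine_distance(x, c) for c in core_points]
    i = int(np.argmin(dists))
    return core_labels[i] if dists[i] < eps else -1   # -1: noise, no cluster

def predict_kmeans(x, centroids):
    return int(np.argmin([cosine_distance(x, c) for c in centroids]))
```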
Finally, those related work performance sectors that are deemed prior to the user's own work performance sectors, in the orderly relationship imposed by the ladder, are determined and removed from the related performance sectors. Subsequently, from the remaining sectors of work performance, the user's related skills are determined: these are the most frequent skills associated with the related sectors of work performance that add value, in other words, skills that do not exist in the user's own skill set or, if they do, that have a higher level of specialization or degree of update than the user's skills. Finally, those skills that add value and are included under the relationship "is of interest" in the knowledge network are added to the related skills.
Given the inputs:
- UJS, set of user job performance sectors;
- USK, set of user skills;
- UESK, set of validated user skills.
The following Algorithm 1 is used to semantically determine related skills (RSK); it is the result of modifying the algorithm described in [2] to take endorsements into consideration. Where:
- Onto_Get_Jobs(sk) returns the job performance sectors whose users have the skill sk registered via events in the ontology; it can be configured to use registered skills belonging to the same sub-specialization as sk when no job performance sectors are obtained for it.
- Onto_CutOff_Jobs(jobs, sks, uskp, jskp) performs filtering of job performance sectors js ∈ jobs, according to a percentage jskp of coverage of the sks skill set over the skills associated with job performance sector js, or a percentage uskp of coverage of the skills of sector js over the sks skill set.
- Onto_Cluster_Pred(js) predicts the cluster for a job sector js according to the clustering by machine learning in the ontology.
- Onto_Previous(RJS, UJS) returns the sectors in RJS prior to UJS according to the orderly relationship given by the ladder in the ontology.
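Since the listing of Algorithm 1 itself is not reproduced in this excerpt, the following Python-style pseudocode reconstructs its flow from the description above. The helpers most_frequent_skills, adds_value and skills_of_interest are hypothetical names of ours for the operations described in the text, not functions from [2]:

```python
# Python-style pseudocode reconstruction of Algorithm 1 (our interpretation).
def related_skills_semantic(UJS, USK, UESK, uskp, jskp):
    skills = USK | UESK                    # endorsed skills join the user's own
    RJS = set()
    for sk in skills:
        RJS |= Onto_Get_Jobs(sk)           # sectors where the skill is registered
    RJS = Onto_CutOff_Jobs(RJS, skills, uskp, jskp)
    RJS -= UJS                             # exclude the user's own sectors
    RJS -= Onto_Previous(RJS, UJS)         # drop sectors prior on the ladder
    RSK = set()
    for js in RJS:
        for sk in most_frequent_skills(js):    # hypothetical helper
            if adds_value(sk, skills):         # new skill, or higher LS/DU
                RSK.add(sk)
    RSK |= skills_of_interest(RSK, skills)     # "is of interest" relationships
    return RSK
```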
(3) Content-based filtering to predict initial course recommendation
In this stage, the algorithm proposed in [2] is applied, where filtering by content is used to filter the courses from the course catalog that raise the level of specialization and/or degree of update of the user's own skills, or help develop the user's related skills. From this filtering, an initial recommendation is obtained, which is refined in the next stage.
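A minimal sketch of this filter, reusing the profile structures sketched earlier (names are ours, with own_skills as a dict from skill code to Skill), might look as follows:

```python
# Content-based filter sketch: keep courses that raise the LS/DU of the user's
# own skills or develop related skills. Not the implementation of [2].
def filter_courses(courses, own_skills, related_codes):
    selected = []
    for course in courses:
        for cs in course.skills_developed:        # Skill objects (code, ls, du)
            own = own_skills.get(cs.code)
            improves = own is not None and cs.ls > own.ls
            develops = own is None and cs.code in related_codes
            if improves or develops:
                selected.append(course)
                break
    return selected
```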
(4) Heuristics of filtering and ordering courses for final prediction
The initial course prediction from the previous phase is complemented and filtered using heuristics proposed by [2], while the following is verified for the user profile and course catalog:
1. That the user has the necessary skills to approach the courses; otherwise, courses that develop them are selected by applying demographic restrictions;
2. User demographic restrictions are applied to the initial course prediction and to the result of the previous stage;
3. In the course prediction, the courses whose skills to be developed are the same as, or a subset of, those of another course are eliminated, retaining those with the highest score, to thus determine the course recommendation for the user.
Evaluation and Results
According to [7], RS evaluation can be conducted through on-line and off-line experimentation. The work presented by [2] performed an off-line RS evaluation. This type of evaluation, which is used in experimental environments, allows different algorithms and approaches to be assessed, since the utmost consistency is desired in order to compare the performance of different proposals under the same conditions. Likewise, off-line evaluation involves metrics that reflect the effectiveness of the system from the user perspective and provide a widely accepted evaluation, due to the robustness of the metrics used.
To evaluate our proposal, and to compare the results with our previous work [2], we performed an off-line evaluation and calculated the metrics used in it (Table 5).
Table 5. Metrics used for the off-line evaluation of the system:
- Coverage: users to whom the system has made a recommendation; with CR_i the set of recommendations for user u_i ∈ U, C(S) = |{u_i ∈ U : CR_i ≠ ∅}|.
- Precision: the fraction of the recommendation that is relevant to the user.
- MAE: mean absolute error of the given recommendation vs. the recommendation expected by the user, CE_i.
- Recall: the ratio between the recommendation and the user's preferences.
- Novelty: the portion of recommendations made to the user that the user is not familiar with or has not seen before.
- Serendipity: the fraction of the recommendation that is unexpected and valuable to the user.
- F1: the harmonic mean between precision and recall, F1 = 2 * Precision * Recall / (Precision + Recall).
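For concreteness, these set-based metrics could be computed as in the following sketch; this is a minimal illustration with names of our own, not the evaluation harness of [2]:

```python
# Off-line metrics over per-user recommendation (CR) and expected (CE) sets.
def precision(cr, ce):
    return len(cr & ce) / len(cr) if cr else 0.0

def recall(cr, ce):
    return len(cr & ce) / len(ce) if ce else 0.0

def f1(p, r):
    return 2 * p * r / (p + r) if (p + r) else 0.0

def coverage(recommendations):
    """recommendations: dict user -> set of recommended courses (CR_i)."""
    return sum(1 for cr in recommendations.values() if cr) / len(recommendations)
```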
RSs are evaluated in batches, with a set of data containing the profiles of both users and courses, as well as user preferences. These were taken from a survey conducted using a Google form, where it was requested that a list of courses be classified in the categories of desirable, preferred, novel and serendipitous, in order to ascertain the preferences of users regarding choice of LLL courses.
The off-line phase of the system was implemented, whereby data were loaded into the Neo4j database, which had previously been populated with the ontology substrate definitions. This was updated via events using the training dataset, and clusters of job performance sectors were determined using the ML algorithms k-means and DBSCAN.
In order to evaluate RS performance and compare it to our previous work, metrics were calculated for each of the following configurations [2]:
1. Content filtering using only the user's own skills; no related skills are determined, although skills of interest are considered;
2. Collaborative filtering using only the user's own work performance sectors; semantically related skills are determined;
3. Semantic filtering using rules to determine related job performance sectors and related skills;
4. Semantic filtering using 75% coverage of user skills to determine related job performance sectors; in other words, those positions that cover 75% of the user's skills are selected as related job performance sectors;
5. Semantic filtering using 50% coverage of skills from the job performance sector to determine related job performance sectors; in other words, those job performance sectors in which the user's skills cover 50% of the skills associated with the position;
6. Semantic filtering using DBSCAN clustering with ε = 0.3 to determine related job performance sectors, and semantic rules to determine related skills;
7. Semantic filtering using k-means clustering to determine related job performance sectors, and semantic rules to determine related skills;
8. Semantic filtering using DBSCAN and k-means clustering to determine related job performance sectors, and semantic rules to determine related skills.
Each of the configurations described above was run on the different data sets (training, testing and total), obtaining the following results.
The test results are shown next, along with the results obtained in the previous work [2], in order to compare them and evaluate the improvement in the RS.
The results, using the training data (70% of the data set) to validate model performance, are shown in Table 6. The results obtained from the use of the different configurations with the total number of samples from the dataset are shown in Table 7. Finally, the system was run under the different configurations using the test portion (30%) of the data, with the results shown in Table 8. From the results obtained, it can be observed that both this proposal and the system proposed in [2] exhibit similar behavior. As such, the configurations showing the best performance for all data sets were semantic filtering using rules to determine related job performance sectors and related skills (3) and semantic filtering using DBSCAN clustering with ε = 0.3 to determine related job performance sectors, and semantic rules to determine related skills (6). However, there was an improvement in the MAE and RMSE scores, and a slight increase in precision.
A measure that summarizes both precision and recall is the harmonic mean between them (F1). Table 9 shows the harmonic means for the best performing configurations. From the harmonic mean comparison table, it can be seen in Figure 3 that there is a slight improvement in configuration 6, which makes use of the DBSCAN algorithm, while configuration 3, which corresponds to semantic filtering, remains unchanged. Across the different tests, the clustering performed by k-means shows an inferior performance, which could be explained by the nature of the domain and the distribution of the positions in space. In general, in the graph analysis we observe better clustering (related positions) by DBSCAN than by k-means. Improvements in the recall and serendipity metrics can be associated with the use of the ontology, the estimation of associated skills or skills of interest, and the filtering of positions prior to the current one in the orderly relationship imposed by the ladder.
In our review of different works, we found some similar RS proposals, so to evaluate the performance of ours it is advisable to compare results. The work by [41], geared to university students and company coordinators, recommends jobs according to user skills, while that by [21] is designed to recommend courses to students, using multiple data sources. On the other hand, the work by [42] recommends career paths and the skills required for different jobs to users, based on their skills and interests. Finally, [35] recommends online courses to professionals according to their professional competences and professional development preferences. Table 10 provides a summary of the results obtained by each of these papers, which use the precision and recall metrics for the evaluation of their RSs, together with the best performing configuration from our proposal. When comparing these works with the best results obtained from our proposal, an improvement over the metrics of all similar RS proposals can be seen, as can an increase in accuracy, recall and harmonic mean; the latter measure combines the performance of accuracy and recall.
Discussion and Conclusions
In this paper we have presented a new version of the RS proposed by [2], incorporating endorsements of users' LinkedIn skills and evaluating the resulting RS performance. In the review of the literature on LinkedIn endorsements, we noted that very few papers analyze this element, and those that do are more oriented toward analyzing the validity of endorsements as an element to be taken into account in the data extracted from SNSs.
The strategy proposed for recommendation of LLL courses was based on establishing a relationship between users according to work performance sectors and professional skills in order to identify those skills that should be improved or developed for their current job, or to access another, higher level job and, based on these identified skills, to determine an initial prediction of courses that may develop them. This strategy made it possible to establish a mechanism for relating the data, which generally speaking, did not initially have relationships on which to base recommendations.
By incorporating the endorsements, it was possible to obtain more information on the user profile, which in turn made it possible to incorporate skills that were not evident in the data related to current employment, and which were useful when refining course recommendation. When evaluating the incorporation of endorsements in creating user profiles, an improvement in RS performance was observed. The configuration that makes use of the DBSCAN algorithm improves the precision value by 3%, and results in a decrease in the root mean square error (RMSE) and mean absolute error (MAE).
When we sought to compare our proposal with similar works, we did not find RSs that made use of endorsements; therefore, we selected some proposals that are similar in terms of RS objectives. One of the main differences concerns the nature of the data, as it was observed that most works used structured data. The work by [42] obtained data directly from a university database, while [41] obtained them from a data repository used for ML experimentation, and [35] extracted them from employee profiles and the training plan directly from a company. The work by [21], on the other hand, proposed using information from LinkedIn to complement the data entered directly by users through the application or obtained from the university database. From these data, a taxonomy was proposed to represent course, job and student information.
When these works were analyzed to compare results with this proposal, the metrics used by most of them were found to be recall and accuracy. In terms of the scores obtained, we can conclude that, compared to the results of the tests performed on the total data set, this proposal improves on the metrics obtained by these previous works by two percentage points in terms of accuracy and ten percentage points in recall.
Research on LinkedIn endorsements is oriented towards assessing the veracity of endorsements on LinkedIn profiles. [30] propose a framework for assessing the trustworthiness of endorsements. The work of [31] proposes to measure the trustworthiness of job candidates based on their skills and endorsements. Based on this research, one line of future work is to incorporate the validation of endorsements into the RS to determine their veracity when taking them into account for recommendations.
In order to evaluate system behavior for multiple domains, the ontology could be updated with new instances associated with new domains; the use of ontologies already built could also be evaluated, together with new ways to represent areas of knowledge and job performance sectors.
With regard to information sources, and given the continuous changes, the use of other SNSs should be evaluated, as well as different platforms, such as RocketReach and DataLead, from which professional profiles can be obtained.
Online applications could also be offered for course recommendations, not only for online evaluation of the system, but also to determine opportunities for improvement based on user suggestions.

Funding: This research received no external funding.
Data Availability Statement:
The data presented in this study are available on request from the corresponding authors.
Associated bacteria of Botryococcus braunii (Chlorophyta)
Botryococcus braunii (Chlorophyta) is a green microalga known for producing hydrocarbons and exopolysaccharides (EPS). Improving the biomass productivity of B. braunii and hence, the productivity of the hydrocarbons and of the EPS, will make B. braunii more attractive for industries. Microalgae usually cohabit with bacteria which leads to the formation of species-specific communities with environmental and biological advantages. Bacteria have been found and identified with a few B. braunii strains, but little is known about the bacterial community across the different strains. A better knowledge of the bacterial community of B. braunii will help to optimize the biomass productivity, hydrocarbons, and EPS accumulation. To better understand the bacterial community diversity of B. braunii, we screened 12 strains from culture collections. Using 16S rRNA gene analysis by MiSeq we described the bacterial diversity across 12 B. braunii strains and identified possible shared communities. We found three bacterial families common to all strains: Rhizobiaceae, Bradyrhizobiaceae, and Comamonadaceae. Additionally, the results also suggest that each strain has its own specific bacteria that may be the result of long-term isolated culture.
Bacteria can grow in close proximity to the microalgal cells due to the presence of EPS substances secreted by the microalgae (Bell & Mitchell, 1972). The presence of bacteria within, or close to this EPS layer can lead to mutually beneficial interactions as well as interactions that are antagonistic in nature. Beneficial interactions for microalgae normally provide environmental advantages, such as nutrient exchange and community resilience to invasion by other species (Eigemann et al., 2013;Hays et al., 2015;Jasti et al., 2005;Ramanan et al., 2015). Antagonistic interactions will usually result in inhibition of the microalgal growth, either causing cell lysis, or directly competing for nutrients (Cole, 1982;Cooper & Smith, 2015;Segev et al., 2016). Studies investigating interactions of microalgae with bacteria show how important these interactions can be for the cultivation process (Guerrini et al., 1998;Kazamia et al., 2012;Kim et al., 2014;Windler et al., 2014). Understanding the interactions of microalgae and bacteria, and how it can enhance the cultivation for industrial process, could lead to increased biomass productivity.
So far, the bacterial community of B. braunii has been described in only a few studies. The earliest work is from Chirac et al. (1982), who described the presence of Pseudomonas sp. and Flavobacterium sp. in two strains of B. braunii. Rivas, Vargas & Riquelme (2010) identified the presence of Pseudomonas sp. and Rhizobium sp. in the B. braunii UTEX strain. One study using the B. braunii Ba10 strain showed the presence of rod-shaped bacteria in the rim of the colony aggregations and proposed them as growth-promoting bacteria closely related to Hyphomonadaceae spp. (Tanabe et al., 2015). One important finding was that B. braunii is a vitamin B12 autotroph, so it does not depend on bacteria for the synthesis of this important metabolite (Tanabe, Ioki & Watanabe, 2014). A more recent study using a B. braunii (race B) strain revealed the presence of several Rhizobiales, such as Bradyrhizobium, and the presence of Bacteroidetes sp. (Sambles et al., 2017). So far, all studies have focused on only a few strains, making it difficult to have a good overview of which bacterial community dominates B. braunii.
In this study, we looked at twelve strains of B. braunii obtained from several culture collections to investigate the bacterial community composition that is associated with B. braunii.
DNA extraction
On sampling days, five mL of fresh culture was harvested with sterilized membrane filters (0.2 µm; Merck-Millipore, Darmstadt, Germany) using a vacuum apparatus. The filters were cryopreserved at -80 °C until further processing. DNA was extracted from the cryopreserved filters, which were cut into small pieces with a sterile scissor. Filter pieces were transferred to a two mL sterilized tube with zirconia/silica beads (Biospecs, Bartlesville, OK, USA), and one mL S.
16S rRNA gene amplification and Miseq sequencing
Amplicons from the V1-V2 region of 16S rRNA genes were generated by a two-step PCR strategy consisting of a forward primer (27F-DegS = 5′-GTTYGATYMTGGCTCAG-3′) and an equimolar mixture of reverse primers (338R I = 5′-GCWGCCTCCCGTAGGAGT-3′ and II = 5′-GCWGCCACCCGTAGGTGT-3′, where M = A or C; R = A or G; W = A or T; Y = C or T). Eighteen-bp Universal Tags 1 and 2 (Unitag1 = GAGCCGTAGCCAGTCTGC; Unitag2 = GCCGTGACCGTGACATCG) were appended at the 5′ end of the forward and reverse primer, respectively (van den Bogert et al., 2011; Daims et al., 1999; Tian et al., 2016). The first PCR mix (50 µL) contained 10 µL 5× HF buffer (Thermo Scientific™, Waltham, MA, USA), 1 µL dNTP mix (10 mM; Promega, Leiden, the Netherlands), 1 U of Phusion® Hot Start II High-Fidelity DNA polymerase (Thermo Scientific™, Waltham, MA, USA), 1 µM of the 27F-DegS forward primer, 1 µM of the 338R I and II reverse primers, 1 µL template DNA and 32.5 µL nuclease-free water. Amplification included an initial denaturation at 98 °C for 30 s; 25 cycles of denaturation at 98 °C for 10 s, annealing at 56 °C for 20 s and elongation at 72 °C for 20 s; and a final extension at 72 °C for 10 min. The PCR product size was examined by 1% gel electrophoresis. The second PCR mix (100 µL) contained 62 µL nuclease-free water, 5 µL of PCR1 product, 20 µL 5× HF buffer, 2 µL dNTP mix, 2 U of Phusion® Hot Start II High-Fidelity DNA polymerase, and 500 nM of a forward and reverse primer equivalent to the Unitag1 and Unitag2 sequences, respectively, each appended with an eight-nt sample-specific barcode. Amplification included an initial denaturation at 98 °C for 30 s; five cycles of denaturation at 98 °C for 10 s, annealing at 52 °C for 20 s and elongation at 72 °C for 20 s; and a final extension at 72 °C for 10 min. The concentration of PCR products was quantified with a Qubit fluorometer (Life Technologies, Darmstadt, Germany) in combination with the dsDNA BR Assay kit (Invitrogen, Carlsbad, CA, USA). Purified products were then pooled in equimolar amounts of 100 ng mL−1 and sequenced on a MiSeq platform (GATC-Biotech, Konstanz, Germany).
Processing MiSeq data
Data were processed using the Quantitative Insights into Microbial Ecology (QIIME) pipeline, version 1.8.0. In short, paired-end libraries were filtered to contain only read pairs perfectly matching barcodes. Low-quality or ambiguous reads were removed, and chimeric reads were then detected and removed. Sequences with a relative abundance of less than 0.1% were discarded. The remaining filtered sequences were assigned to operational taxonomic units (OTUs) at a 97% threshold using an open reference method and a customized SILVA 16S rRNA gene reference (Quast et al., 2013). Seven samples from day 4 were removed from the results due to contamination during the PCR steps: AC755, AC759, AC760, AC767, AC768, CCAP, and UTEX572. The 16S rRNA gene dataset obtained in this study is deposited in the Sequence Read Archive, NCBI, under accession number SRP102970.
Microbial community analysis
For the interpretation of the microbial community data at family level, the OTU abundance table was converted to relative abundance and visualized as heatmaps using JColorGrid (Joachimiak, Weisman & May, 2006). Ordination analyses to estimate the relationship of the B. braunii strains, based on the dissimilarity of the microbial community compositions among the individual samples, were performed for (a) all strains of B. braunii used in this study and (b) all strains received from the ALGOBANK-CAEN culture collection. For both analyses, a standardized 97% OTU table (decostand function, method = hellinger) and the nMDS function metaMDS (distance = Bray-Curtis) from the vegan package in R (R version 3.0.2) were used (Oksanen et al., 2016; R Core Team, 2014). Beta dispersion and a permutation test were performed to test the homogeneity of dispersion within a group of samples. Adonis from the vegan package in R (v.3.0.2) was used to test for significant differences in the bacterial community between strains. Hierarchical clustering analysis was performed using the hclust function in R with method = average.
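As a minimal sketch in R (the language the analysis itself used), assuming an OTU abundance matrix otu (samples by OTUs) and a metadata data frame meta with a strain column (object names are ours), the workflow could look as follows:

```r
# Sketch of the ordination and significance tests described above.
library(vegan)

otu_hel <- decostand(otu, method = "hellinger")      # standardize 97% OTU table
nmds    <- metaMDS(otu_hel, distance = "bray")       # nMDS ordination

# Homogeneity of dispersion within strains, with a permutation test.
disp <- betadisper(vegdist(otu_hel, method = "bray"), meta$strain)
permutest(disp, permutations = 1000)

# Test for significant differences in bacterial community between strains.
adonis(otu_hel ~ strain, data = meta, permutations = 999)

# Hierarchical clustering with average linkage.
hc <- hclust(vegdist(otu_hel, method = "bray"), method = "average")
plot(hc)
```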
RESULTS

Figure 1 shows the bacterial families with a relative abundance above 1% and a total of four bacterial phyla associated with B. braunii strains. The four phyla found associated with B. braunii are the Bacteroidetes, Gemmatimonadetes, Planctomycetes, and Proteobacteria. Proteobacteria is the predominant bacterial phylum and representatives of this taxon are found in all 12 strains. Bacteroidetes is found in all strains except AC761, AC768, and CCAP. Gemmatimonadetes is found only in the CAEN culture strains (with AC prefix), except AC755. Planctomycetes is found in the AC760, CCALA, K1489, Showa, and UTEX strains. Three families are found across all 12 B. braunii strains, and all are Proteobacteria: Rhizobiaceae, Bradyrhizobiaceae, and Comamonadaceae. Rhizobiaceae is represented by 1-59% of the bacterial reads, Bradyrhizobiaceae was found within the 1-8% range, and Comamonadaceae was found between 1% and 5%. Two families of bacteria are only found in the strains obtained from the CAEN culture collection: Erythrobacteraceae, with bacterial reads ranging from 1% to 29%, and Rhodocyclaceae, with 1-18%. Some families of bacteria are particularly dominant in specific strains. Sinobacteraceae is dominant in CCAP, with relative abundances ranging from 59% to 78%. Planctomycetaceae is dominant in the K1489 strain, with relative abundances between 46% and 51%. Rhizobiaceae is dominant in AC761, with relative abundances between 55% and 64%. Other families of bacteria become dominant as the cultures grow older. Rhodobacteraceae is present in the AC755 strain with relative abundances ranging from 28% at day 1 to 40% at day 11. Sphingomonadaceae is present in UTEX with 10% at day 1, increasing to 47% at day 11. Cytophagaceae is dominant in the CCALA strain, with relative abundances ranging from 10% at day 1 to 52% at day 11.
[Figure 1 (strains as in Table 1). Each bar displays the bacterial family relative abundance above 1%. Strains are labelled below with the sample day within square brackets. Bacterial families are organized according to the phyla (in italics) they belong to. Full-size DOI: 10.7717/peerj.6610/fig-1]

Because we found three common families across all strains, we wanted to investigate the bacterial composition of these selected families in more detail and see if we could identify a unique microorganism present in all strains. Therefore, we zoomed in and looked at the distribution of the OTUs belonging to the three families: Rhizobiaceae, Bradyrhizobiaceae, and Comamonadaceae. In addition, we picked the OTUs found only in the strains obtained from the CAEN culture collection, which belong to two families: Erythrobacteraceae and Rhodocyclaceae. The most abundant OTUs were selected and a total of 28 OTUs were investigated. From Fig. 2 it is clear that there is no OTU that is found across all strains; rather, each family comprises several different OTUs. The second important observation is that the CCAP strain has no representative OTUs for Bradyrhizobiaceae and Rhizobiaceae among the most abundant OTUs. The most represented family taxon is Rhizobiaceae, with 12 OTUs. From the three families found in the 12 strains, OTU 233, assigned to the genus Rhizobium, has the highest OTU frequency abundance with 10% and is present in seven out of 12 strains. The OTUs 143, 88, and 131, assigned to the genus Shinella, are present in nine out of 12 strains. The OTUs 477, 475, and 484, assigned to the genus Bosea, cover 11 out of 12 strains. From the two families found only in the cultures originating from the CAEN culture collection, OTUs 333 and 539 are found in all seven CAEN strains, with the assigned genera Porphyrobacter and Methyloversatilis, respectively. The most abundant OTUs (as listed in Fig. 2) were subjected to a Blast search against the NCBI database to infer their nearest neighbors (Table 2). OTUs 88, 115, 143, and 233 are similar in their nearest neighbors, with four different Rhizobium spp. as candidates. Similar Blast results are also seen for OTUs 566 and 567, whose nearest neighbors are Hydrogenophaga spp. The OTUs 819 and 832, with Dyadobacter spp. as nearest neighbor, dominate the CCALA bacterial community. Some OTUs show different species as closest neighbors, such as OTUs 45 and 69, with Frigidibacter albus, Paracoccus sediminis, and Nioella nitratireducens as neighbors. The OTU 415, highly abundant in K1489 and belonging to Planctomycetaceae, has uncultured bacteria as its closest neighbors, the third closest being an uncultured Planctomyces sp. with 87% identity. The OTU 333, present only in the strains from the CAEN culture collection, has 100% identity with Sphingomonas spp. as its two closest neighbors, the third neighbor, also with 100% identity, being Porphyrobacter.
Non-metric multidimensional scaling ordination was performed for the 12 strains to determine the bacterial community dissimilarities (Fig. 3A). B. braunii strains from the CAEN culture collection cluster together when compared to the other strains, indicating that these strains are similar to each other in bacterial community composition. This is supported by hierarchical cluster analysis showing the CAEN strains in their own cluster (Fig. S1). The strains K1489, UTEX, CCAP, CCALA, and Showa represent separate clusters. The test of homogeneity of dispersion within each strain with 1,000 permutations showed no significant difference (F = 0.323). Using adonis to test for bacterial community similarities between all strains, the results show that the bacterial communities are significantly different (DF = 11, Residuals = 28, R2 = 0.921, P = 0.001). Figure 3B zooms in on the strains belonging to races B and L.
DISCUSSION
It is evident that B. braunii possesses a highly diverse bacterial community, as seen in the range of bacterial phyla and families present in all the strains used in this study (Fig. 1; for a more comprehensive list see Fig. S2). From the bacterial community analysis (Figs. 3A and 3B), it appears that each B. braunii strain has a specific bacterial community and no OTU is shared between all strains. The strains from the CAEN culture collection cluster together, while B. braunii strains from other culture collections appear as separate groups. This implies that the culture collection from which a strain was obtained could potentially have an effect. With this study we are not able to fully deduce the potential impact of the culture collection on the bacterial community because the experimental design was not set up to do so. The presence of weak (within a culture collection) and strong (between culture collections) migration barriers may explain the bacterial profiles obtained in our study, and they may be a result of historical contingencies (Fenchel, 2003) rather than pointing toward highly specific interactions for a large number of OTUs. OTUs 539 and 333 are only found in the CAEN cultures and contribute toward these strains clustering in close proximity. OTU 333 is especially high in relative abundance and contributes to the distinctive clustering of the CAEN culture collection strains. The remaining strains also contain their specific OTUs that contribute toward their own clustering: OTUs 819 and 832 with CCALA, OTU 310 with UTEX, and OTU 415 with K1489. The bacterial communities of the three race B and three race L strains are mixed together (Fig. 3B). Therefore, no correlation was found between the bacterial community and the type of hydrocarbons produced by the two races. Similar observations were made in another study using six strains of B. braunii, in which the authors did not find a correlation between the bacteria and the type of hydrocarbon produced (Chirac et al., 1985). Three bacterial families were found to be present in all twelve strains of B. braunii: Bradyrhizobiaceae, Rhizobiaceae, and Comamonadaceae. Two families were found abundantly only in the strains from the CAEN culture collection: Erythrobacteraceae and Rhodocyclaceae. The Blast hits for OTUs 88, 115, 143, and 233 show that these are related to Rhizobium spp. (Table 2). Rhizobium spp. are known to form nodules in the roots of several plants within the family of legumes and are best known for nitrogen fixation. Nitrogen-fixing bacteria have been investigated in association with microalgae, and it has been shown that they can enhance microalgal growth (Hernandez et al., 2009). Rhizobium spp. associated with B. braunii could have a similar role. Rivas, Vargas & Riquelme (2010) also found a Rhizobium sp. associated with B. braunii, in particular UTEX LB572, and Kim et al. (2014) showed the presence of Rhizobium sp. with B. braunii 572. Sambles et al. (2017) identified Rhizobium sp. closely associated with B. braunii after subjecting the cultures to a wash step and antibiotic treatment. Recent studies also show Rhizobium spp. present with Chlamydomonas reinhardtii, Chlorella vulgaris, and Scenedesmus spp. (Kim et al., 2014). Rhizobium spp. seem important to B. braunii strains, as they appear in all 12 strains, with more prominence in the CAEN cultures and K1489, with three to four OTUs (Fig. 2). For the remaining strains CCALA, CCAP, Showa, and UTEX, Rhizobium spp. are represented by only one OTU.
OTU 475 from the Bradyrhizobiaceae family shows 100% similarity with the species Hyphomicrobium nitrativorans as its two closest neighbors and is present in 10 out of 12 B. braunii strains. H. nitrativorans is a known denitrifier isolated from a seawater treatment facility (Martineau et al., 2013). Denitrification is the process of reducing nitrate into a variety of gaseous compounds, the final one being dinitrogen. Because denitrification mainly occurs in the absence of oxygen, it is unlikely that this is happening within our cultures, which are well oxygenated. The third closest neighbor for OTU 475 is Bosea lathyri, which is associated with root nodules of legumes (De Meyer & Willems, 2012).
OTUs 555, 566, and 567 from the Comamonadaceae family appeared in seven out of 12 strains. The three closest neighbors of OTU 555 were Variovorax spp., and for OTUs 566 and 567 these were Hydrogenophaga spp. Variovorax and Hydrogenophaga spp. are not known for being symbionts but may be able to support ecosystems by their ability to degrade toxic compounds and assist in nutrient recycling, therefore potentially producing benefits for other microorganisms (Satola, Wübbeler & Steinbüchel, 2013; Yoon et al., 2008). Comamonadaceae also appeared as one of the main bacterial families associated with the cultivation of microalgae in bioreactors using a mix of fresh water and municipal water as part of a water treatment strategy (Krustok et al., 2015).
Erythrobacteraceae and Rhodocyclaceae were only found in the strains from the CAEN culture collection. The first two closest neighbors of OTU 333 (Erythrobacteraceae) are Sphingomonas spp., and the third closest neighbor is a Porphyrobacter sp. isolated from water in a swimming pool. Most isolated Porphyrobacter spp. originate from aquatic environments (Tonon, Moreira & Thompson, 2014) and are associated with freshwater sediments (Fang et al., 2015). Porphyrobacter spp. have also been associated with other microalgae, such as Tetraselmis suecica (Biondi et al., 2016). The second and third closest neighbors of OTU 539 (Rhodocyclaceae) are Methyloversatilis discipulorum, a bacterium found in biofilm formation in engineered freshwater installations (Van Der Kooij et al., 2017). It is not clear why OTUs 333 and 539 are found only in the strains originating from the CAEN culture collection, but they could be species introduced during handling. Nonetheless, these two OTUs are present in high relative abundance (Fig. 2), and it would be interesting to know whether they have a positive or negative influence on the growth of the CAEN strains. It would be interesting to confirm such a statement by attempting the removal of these OTUs and investigating the biomass growth.
Sinobacteraceae is dominant in CCAP (Fig. 1). This family was proposed in 2008 with the characterization of a bacterium from polluted soil in China (Zhou et al., 2008). A recently described bacterium related to hydrocarbon degradation shows similarities with Sinobacteraceae (Gutierrez et al., 2013). OTU 63 is highly abundant in CCAP and could have a negative impact on the cultivation of the CCAP strain by reducing its hydrocarbon content.
The Bacteroidetes family Cytophagaceae dominates the culture CCALA at later stages of growth (Fig. 1). Cytophagaceae has also been found in laboratory-scale photobioreactor cultivation using wastewater for the production of microalgal biomass (Krustok et al., 2015). The two OTUs that dominate the bacterial community in CCALA are OTU 819 and OTU 832. Blast searches against the NCBI database approximate these two OTUs as Dyadobacter spp., which have also been found co-habiting with Chlorella spp. (Otsuka et al., 2008).
Planctomycetaceae dominates the bacterial community in the K1489 strain (Fig. 1) with a single OTU, 415. This family can be found in freshwater biofilms and is also strongly associated with macroalgae (Abed et al., 2014; Lage & Bondoso, 2014). Species in this family could possibly be involved in metallic-oxide formation and be co-players in sulfate reduction, the latter also involving a sulfur-reducing bacterium (Shu et al., 2011).
Rhodobacteraceae is present with up to 55% of bacterial relative abundance in AC755. Members of this family have also been isolated from other microalgae, namely Chlorella pyrenoidosa and Scenedesmus obliquus (Schwenk, Nohynek & Rischer, 2014). Blast searches for OTUs 45 and 69 against the NCBI database show the closest neighbors to be F. albus, P. sediminis, and N. nitratireducens (Table 2). All three neighbors were isolated from water environments (Li & Zhou, 2015; Pan et al., 2014).
Sphingomonadaceae is mostly found in freshwater and marine sediments (Newton et al., 2011). OTUs 302, 310, and 355 from this family were found in 6 out of 12 strains above 1% relative abundance. OTU 310 is only found in the UTEX strain with Sphingomonas spp. as the two closest neighbors. Sphingomonas spp. are shown to co-habit with other microalgae such as Chlorella sorokiniana and Chlorella vulgaris (Ramanan et al., 2015;Watanabe et al., 2005). Sphingomonas spp. have been shown to be able to degrade polycyclic aromatic hydrocarbons (Tang et al., 2010) and could possibly be degrading the hydrocarbons secreted by B. braunii as its carbon source.
Another characteristic of many bacteria is the ability to produce EPS, such as species from the Rhizobiaceae and Bradyrhizobiaceae families (Alves, De Souza & Varani, 2014; Bomfeti et al., 2011; Freitas, Alves & Reis, 2011). This characteristic could play a role in the colony aggregation of B. braunii, as EPS is known to be essential for biofilm formation (Flemming, Neu & Wozniak, 2007). Therefore, it would be interesting to study this possible relationship in the future, as B. braunii is a colony-forming organism. Such studies could involve the introduction of bacteria associated with colony formation, such as Terrimonas ferruginea, which has been associated with inducing flocculation in Chlorella vulgaris cultures (Lee et al., 2013).
With its present high microbial diversity, B. braunii shows resilience toward microbial activity, probably due to its colonial morphology and protective phycosphere made of hydrocarbons and EPS (Weiss et al., 2012). A number of microbes are potentially beneficial, such as Rhizobium spp., which have been shown to have a positive effect on the biomass productivity of B. braunii UTEX (Rivas, Vargas & Riquelme, 2010), and Hydrogenophaga, with the ability to degrade toxic compounds (Yoon et al., 2008). There are also microbes that may cause detrimental effects on the hydrocarbon productivity of B. braunii, such as Sphingomonas spp. (OTU 310), with their ability to degrade hydrocarbons (Tang et al., 2010). The removal of such detrimental microbes could enhance cultivation, leaving more nitrogen available for biomass production, and increase the hydrocarbon accumulation of B. braunii as well as EPS production at a larger industrial scale.
CONCLUSION
Botryococcus braunii can host a diverse microbial community, and it is likely that some form of interaction is taking place with members of the Rhizobiaceae, Bradyrhizobiaceae, and Comamonadaceae families, which all belong to the phylum Proteobacteria. There is no specific bacterial community correlated with the different types of hydrocarbons produced by races B and L, and most likely also not with race A. B. braunii has many strains, and each seems to have its own species-specific bacterial community. With a diverse microbial community present, it is also likely that some bacteria have antagonistic effects on B. braunii, such as competition for nutrients and degradation of hydrocarbons. Botryococcus is a microalga of high scientific interest, and it is important to better understand the associated bacteria. Botryococcus-associated bacteria are hard to get rid of (J. Gouveia, 2016, unpublished data) and, therefore, it is important to start mass cultivation without those bacteria that are most harmful to the process.
ADDITIONAL INFORMATION AND DECLARATIONS
Funding
This project was carried out with financial support from the European Community under the Seventh Framework Programme (Project SPLASH, contract no. 311956), and Jie Lian was supported by the China Scholarship Council (No. 201406310023). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Effect of calcium doping on the anodic behavior of E-AlMgSi (Aldrey) conducting aluminum alloy in NaCl electrolyte medium
Introduction
Aluminum and its alloys are widely used in electrical engineering as conductor and structural materials. As a conductor material, aluminum offers high electrical and thermal conductivity: among all technically used metals, its conductivity is second only to that of copper. Aluminum also has a low density, high resistance to atmospheric corrosion, and good resistance to chemicals [1].
Another property of aluminum is its neutrality toward insulating materials: for example, it is inert to oils, varnishes and thermoplastics, even at high temperatures. Aluminum also differs from other metals in its low magnetic susceptibility, and in an electric arc it forms a non-conductive, easily removable powder (Al2O3) [2, 3].
The use of aluminum and its alloys is regulated by specific instructions or general construction guidelines, especially regarding materials for switching devices, overhead transmission line poles, and electric motor or switch casings.
The economic feasibility of using aluminum as a conductive material is explained by the favorable ratio of its cost to the cost of copper. In addition, one should take into account that the cost of aluminum has remained virtually unchanged for many years [2].
When conductive aluminum alloys are used to manufacture thin wire, winding wire, etc., difficulties may arise from their insufficient strength and the small number of kinks they withstand before fracture [1].
A well-known conducting alloy is E-AlMgSi (Aldrey), a heat-strengthened aluminum alloy possessing good plasticity and high strength.
After appropriate heat treatment this alloy acquires high electrical conductivity. Wires made from this alloy are used exclusively for overhead transmission lines [1-3].
The task of increasing the corrosion resistance of aluminum alloys is of great practical importance since power transmission lines made from these alloys are used in the open air [4][5][6].
The aim of this work is to study the effect of calcium doping on the corrosion and electrochemical behavior of the E-AlMgSi (Aldrey) aluminum conducting alloy containing 0.5 Si and 0.5 Mg (wt.%).
Experimental
The alloys were synthesized in the 750-800 °C range in an SShOL-type laboratory shaft resistance furnace. A6 grade aluminum, additionally doped with the calculated amounts of silicon and magnesium, was used as the charge in the preparation of the E-AlMgSi alloy. When doping aluminum with silicon, the metallic silicon (0.1 wt.%) already present in primary aluminum was taken into account. Magnesium wrapped in aluminum foil was introduced into the molten aluminum using a bell, and calcium was introduced into the melt as a master alloy with aluminum. The alloys were chemically analyzed for silicon and magnesium contents at the Central Industrial Laboratory of the State Unitary Enterprise Tajikistan Aluminum Company. The alloy compositions were controlled by weighing the charge and the alloys; synthesis was repeated if the alloy weight deviated from the target by more than 1-2% rel.u. The alloys were then cleaned from slag and cast into graphite molds to obtain samples for corrosion and electrochemical studies. The cylindrical samples had a diameter of 8 mm and a length of 140 mm.
Specimens for electrochemical studies were positively polarized from the potential established upon immersion in the test NaCl solution (E_fc, the free or steady-state corrosion potential) up to the potential at which the current density increased sharply (Fig. 1, Curve I). The specimens were then polarized in the reverse direction (Fig. 1, Curves II and III) down to a potential of -1.3 V; this polarization caused dissolution of the oxide film. Finally, the specimens were positively polarized again (Fig. 1, Curve IV), and during this polarization the pitting formation potential E_pf of the alloys was recorded at the transition from cathodic to anodic current.
The following main electrochemical characteristics of the alloys were determined from the recorded polarization curves: the steady-state (free) corrosion potential E_fc, the repassivation potential E_rp, the pitting potential E_p, the corrosion potential E_cor, and the corrosion current I_cor.
The corrosion current was calculated from the Tafel slope of the cathodic curve (A = 0.12 V), taking into account that pitting corrosion of aluminum and its alloys in neutral media is controlled by the cathodic oxygen ionization reaction. The corrosion rate, in turn, is a function of the corrosion current. The reproducibility of the electrochemical potential data was ±1 to ±2 mV.
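The conversion equation itself did not survive the source formatting. A plausible reconstruction, assuming the standard Faraday-law conversion for aluminum (the numerical factor below follows from aluminum's molar mass and three-electron dissolution, and is our assumption rather than a constant quoted by the paper):

$$K = \frac{M_{\mathrm{Al}}}{zF}\, i_{\mathrm{cor}} \times 3600 \;\approx\; 0.335\, i_{\mathrm{cor}},$$

where K is the corrosion rate in g·m⁻²·h⁻¹, i_cor is the corrosion current density in A·m⁻², M_Al = 26.98 g/mol, z = 3 and F = 96485 C/mol.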
Results and discussion
The results of the corrosion and electrochemical studies of the calcium-doped E-AlMgSi (Aldrey) aluminum alloy in NaCl electrolyte are summarized in the Table and shown in Figs. 2-5. Figure 2 shows the free corrosion potential (E_fc, V) as a function of time for specimens of the calcium-containing alloy in NaCl electrolyte. Upon immersion of the specimens in the NaCl electrolyte, the E_fc potential shifts toward the positive region.
The Table shows that doping the initial aluminum alloy with 0.01 to 0.5 wt.% calcium shifts the corrosion, repassivation and pitting potentials in the test media toward the positive region. This is accompanied by an increase in the pitting corrosion resistance of the alloys. Figure 3 shows the corrosion rate of the E-AlMgSi (Aldrey) aluminum alloy as a function of calcium content in 0.03, 0.3 and 3.0% NaCl electrolyte. Calcium doping reduces the corrosion rate of the alloy in all the test media by 15-20%.
An increase in the concentration of the NaCl electrolyte leads to an increase in the corrosion rate of the alloy (Fig. 4). At a calcium concentration of 0.5 wt.%, the corrosion rate and corrosion current density of the E-AlMgSi (Aldrey) aluminum alloy were lowest; this composition is thus optimal for corrosion resistance.
The anodic branches of the polarization curves for the calcium-doped E-AlMgSi (Aldrey) aluminum alloy are shown in Fig. 5. The curves show that increasing the calcium content shifts all the electrochemical potentials of the alloy in NaCl electrolyte toward positive values, indicating a lower anodic dissolution rate for the calcium-doped alloys compared with the initial alloy.
Conclusion
The anodic behavior of the calcium-containing E-AlMgSi (Aldrey) aluminum alloy was studied using a potentiostatic technique (at a 2 mV/s potential sweep rate) in NaCl electrolyte. We show that calcium doping up to 0.5 wt.% increases the corrosion resistance of the initial alloy. The pitting corrosion resistance of the alloy also grows, as indicated by a shift of the pitting corrosion potentials toward positive values.
An increase in the concentration of chloride ions in the electrolyte leads to a 1.5-fold increase in the corrosion rate of the alloy.
Experiments suggest that calcium doping within 0.1-0.5 wt.% is optimal for the design of new compositions based on the Aldrey alloy.
Knowledge translation of prediction rules: methods to help health professionals understand their trade-offs
Clinical prediction models are developed with the ultimate aim of improving patient outcomes, and are often turned into prediction rules (e.g. classifying people as low/high risk using cut-points of predicted risk) at some point during the development stage. Prediction rules often have reasonable ability to either rule-in or rule-out disease (or another event), but rarely both. When a prediction model is intended to be used as a prediction rule, conveying its performance using the C-statistic, the most commonly reported model performance measure, does not provide information on the magnitude of the trade-offs. Yet, it is important that these trade-offs are clear, for example, to health professionals who might implement the prediction rule. This can be viewed as a form of knowledge translation. When communicating information on trade-offs to patients and the public there is a large body of evidence that indicates natural frequencies are most easily understood, and one particularly well-received way of depicting the natural frequency information is to use population diagrams. There is also evidence that health professionals benefit from information presented in this way. Here we illustrate how the implications of the trade-offs associated with prediction rules can be more readily appreciated when using natural frequencies. We recommend that the reporting of the performance of prediction rules should (1) present information using natural frequencies across a range of cut-points to inform the choice of plausible cut-points and (2) when the prediction rule is recommended for clinical use at a particular cut-point the implications of the trade-offs are communicated using population diagrams. Using two existing prediction rules, we illustrate how these methods offer a means of effectively and transparently communicating essential information about trade-offs associated with prediction rules.
Clinical prediction models are developed with the ultimate aim of improving patient outcomes [8,14]. Prediction models take as inputs various patient characteristics or risk factors (e.g. age, gender, comorbidities) and provide as an output a prediction of the probability of either having or developing a particular disease or outcome (called an "event"), for example future heart disease, cancer recurrence or lack of response to some treatment. When used to predict the likelihood of having a particular disease they are referred to as diagnostic models, and when used to predict outcomes they are referred to as prognostic models. Subsequent to model development, prediction models should be internally and externally validated, and then the performance of the prognostic model evaluated in an implementation study, so that the impact on clinical outcomes can be determined [16,29,31].
There are numerous ways that prediction models can be translated for use in clinical practice. One approach is to formulate a directive decision rule based on cut-points for the predicted probabilities (for example, low or high risk) [2,6,8]. We refer to this as a prediction rule [25]. Patient care might then be stratified on the basis of these cut-points, and consequently the model can be thought of as acting like a prediction "rule" [16,26]. An alternative is to provide individual predicted risks, which can be used by the health care professional in guiding therapeutic decisions for individual patients. A recent systematic review found that three quarters of prediction models in cancer report associated prediction rules [22]. Here, our focus is on prediction models which are used to risk-stratify patients or recommend treatment or management strategies based on cut-points of predicted risk. Examples of commonly used prediction rules are the Ottawa ankle score [28] and the Framingham heart score [32]. Other examples are the Canadian Syncope Risk Score [30] and the QRISK2 score [4,5], which we include as case studies (Tables 1 and 3). Although prediction rules can be based on multiple risk strata, for example low, medium or high risk, for simplicity we focus on the scenario where predicted probabilities are dichotomised into two groups: low and high risk.
The importance of considering the implications of misclassification of patients at risk

When a prediction model is intended to be used as a prediction rule, it is important that the implications of the rule's imperfect performance are clear. These implications include two types of misclassification: classifying patients as high risk when they will not go on to have the event, and classifying patients as low risk when they will go on to have the event. The consequences of misclassification are highly contextual, depending on the implications in the particular clinical setting. For example, as illustrated in Case Study 1 (Table 1), the ensuing decision can have serious consequences when a patient who is truly at high risk is misclassified as "not at risk". On the other hand, the ensuing decision can also have consequences when patients who are not at risk are misclassified as "at risk" (see Case Study 2, Table 3).
The extent of these potential misclassifications should be transparent at all stages of reporting, whether the model is at the development stage or at the impact assessment stage. This is because transparent reporting of the extent of the potential misclassification, along with contextual knowledge of the consequences of these misclassifications, can allow the users of these rules to determine how much confidence to place in them. Transparency is important at the development stage because it can help inform the potential impact [26]: for example, in Case Study 1, a prediction rule that clearly misclassifies too many people as low risk might not be deemed suitable to take forward to an impact study. Transparency is also important at the impact assessment stage: for example, in Case Study 2, if the rule under assessment were known to misclassify too many people as high risk, then health care providers might reconsider the extent to which they follow the prediction rule. Clear and complete reporting of the performance of prediction rules is thus important at all stages of model development. Reporting results in a transparent way is a form of knowledge translation, where information on model performance has to be translated by the researchers so that it is understood by the intended users, the health professionals. We underscore that our concern is the communication of the trade-offs, or accuracy, of the prediction rule at hand, which is different from communicating the estimated risk from the model [1].
Common ways of reporting model performance
Table 1 Case Study 1: the Canadian Syncope Risk Score (CSRS)
The Canadian Syncope Risk Score (CSRS) was developed to help identify patients presenting to the emergency department with syncope who are at risk of developing a serious adverse event, which typically occurs with a prevalence of about 4% [30]. The model was proposed as a risk stratification tool with cut-points signalling very low, low, high and very high risk. The reported internally validated C-statistic for the developed model was 0.88 (95% CI 0.85 to 0.90), with a sensitivity of 93% and a specificity of 53% for the "low risk" cut-point. The rule was summarised by the statement "the tool will be able to accurately stratify the risk of serious adverse events among patients presenting with syncope, including those at low risk who can be discharged home quickly". From the sensitivity (93%), specificity (53%) and prevalence (0.036) reported in the development dataset [30], we estimate the natural frequencies at several cut-points and present a population diagram at one of the reported cut-points for illustration. Figure 1 illustrates how population diagrams can help quantify the implications of using this model at the "low risk" cut-point: whilst only two of the 540 patients identified as "low risk" by the model have a serious adverse event, for every 1000 patients assessed, 460 will be classified as "at risk", of whom only 36 will have a serious adverse event. The model used at this cut-point is therefore reasonably able to rule out a serious adverse event, but at the cost of a large proportion of patients undergoing monitoring (i.e. it is not good at ruling in). Possible consequences of this misclassification are longer hospital stays for those classified as at risk, and a small proportion of patients classified as "low risk" progressing to a serious adverse event out of hospital. Whilst these might be appropriate trade-offs, they are not obvious when the performance is summarised by a C-statistic and sensitivity alone, but become transparent when population diagrams are shown. Table 2 presents these natural frequencies across a range of cut-points. For example, if there was concern that the rule misclassifies too many people as "at risk" when they would not have the event, increasing the cut-point to 3 would reduce the number classified as "at risk" from 460 to 119, but would increase the number classified as "not at risk" who would have the event from 2 to 12.

Fig. 1 Population diagram to illustrate the clinical ramifications of the Canadian Syncope Risk Score for acute management of syncope ("low risk" cut-point). Each circle in the figure represents one person (1000 in total) presenting to the emergency department with syncope, of whom approximately 36 will sustain a serious adverse event (shaded circles) and 964 will not (unshaded circles). Red circles (460) indicate people deemed "at risk" using the risk score at the "low risk" cut-point; green circles are people deemed not "at risk". These natural frequencies are derived from the reported sensitivity of 93%, specificity of 56% (for the low-risk cut-point) and prevalence (0.036) in the internally validated model [30]. The internally validated C-statistic for the developed model was 0.88 (95% CI 0.85, 0.90).

Table 2 Summary of performance measures across a range of cut-points for the Canadian Syncope Risk Score using natural frequencies.

The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative is a checklist of 22 minimum items for reporting the conduct and results of prognostic model studies [3, 23]. The TRIPOD guidelines recommend that model performance metrics be reported (item 10d) and, whilst not directive in its recommendations, the checklist includes measures of calibration and discrimination, C-statistics, sensitivity and specificity, and decision curve analysis [23]. The most commonly reported measure of performance of a prediction model is the C-statistic [22,23]. Indeed, in a recent review of prognostic models for COVID-19, this was the only measure of predictive performance reported in a set of externally validated models [37]. The C-statistic is a summary measure of performance across all possible cut-points, but it does not quantify the performance at a specific cut-point that may be used to guide management decisions. Because it is a summary performance measure, the C-statistic does not convey the performance of the model when used as a prediction rule. Other performance measures which describe the model's ability to predict risk values close to actual levels of risk, called calibration, also summarise overall model performance [26].
To determine how a model performs when used as a prediction rule to risk-stratify patients or guide decision-making at recommended cut-points, the performance must be summarised at the given cut-points. Two useful metrics are sensitivity and specificity, which describe the rule's ability to discriminate between those who will and will not have the event at those cut-points [26]. Sensitivity is the ability of the model (at a specified cut-point) to correctly identify those with the event, and specificity is the ability of the model (at a specified cut-point) to correctly identify those without the event. Whilst a chosen cut-point may maximise sensitivity, the trade-off may be poor specificity. Measures such as sensitivity and specificity make the trade-offs at different cut-points transparent. For example, in scenarios such as Case Study 1 (Table 1), where it is important not to miss an event, preference would be given to a cut-point that maximises sensitivity (i.e. a rule that is good at ruling out). In other scenarios, such as Case Study 2 (Table 3), where there may be potential for over-treatment, it might be important not to falsely diagnose an event, and preference would then be given to a cut-point that maximises specificity (i.e. a rule that is good at ruling in). However, whilst sensitivity and specificity in theory allow the consequences of misclassification to be apparent, there is evidence that these concepts may be misunderstood by health professionals, for example by confusing sensitivity with the probability of a patient having the event (when in fact it represents the probability of testing positive if the patient has the event) [12,13].
Alternative ways of summarising a prediction rule's ability to discriminate (again at specified cut-points) are the positive and negative predictive values, i.e. the probability that a patient does (or does not) have the event when classified as "at risk" (or "not at risk") [12]. Positive and negative predictive values also allow the consequences of the trade-offs to be transparent at different cut-points. However, whilst positive and negative predictive values prevent the type of misinterpretation commonly observed when interpreting sensitivity and specificity, they are also conditional probabilities, which can be difficult to interpret [18]. Conditional probabilities are hard to understand because people need to know both the probability that the person does (or does not) have the event of interest when classified as "at risk" (or "not at risk") and contextual information on the likelihood of the event. Negative and positive predictive values thus communicate only one part of this information and do not convey the underlying risk.

Table 3 Case Study 2: the QRISK2 prediction model
The QRISK2 prediction model is a widely endorsed and validated model for assessing cardiovascular risk [15]. The QRISK2 model was developed using data from 531 general practices in the UK, with information from 2.3 million patients, to identify those patients for whom interventions (i.e. statins) or more intensive follow-up may be required. The model is commonly used as part of directive decision-making at a cut-point of 20% predicted risk [15]. The models were reported to perform well, with C-statistics in the region of 0.80, and were subsequently validated in large cohorts [4,5]. We report natural frequencies for this prognostic rule at the 20% cut-point, derived using information reported in the external validation cohort study for males, which had a reported C-statistic of 0.77 ([4], Table 4). Using data from this validation cohort, it is expected that out of 1000 (male) individuals between the ages of 35 and 74 years, approximately 90 will have a cardiovascular event over a 10-year period and 910 will not (i.e. a prevalence of 0.09) [4]. From data reported in Table 4 of Collins [4], we estimated the sensitivity of the rule to be 40% and the specificity to be 88%. Figure 2 illustrates that, when used at the 20% cut-point, the prediction rule does not do terribly well at identifying those who will have an event: the rule correctly identifies 36 of the 90 who will have an event, but misclassifies 110 of the 910 individuals who will not have an event as "at risk". So, for every person identified as needing treatment, another three will be treated unnecessarily, and two thirds of those in need of treatment will not be treated. Thus, despite a C-statistic close to 0.8, the model does not do terribly well at either ruling in or ruling out future events [32]. If the extra treatment poses no harm, which might arguably be the case for statin use, then over-treating the low-risk patients might not be of concern. Nonetheless, presenting the results using a population diagram makes the full implications of potential under- and over-treatment transparent. Table 4 presents these natural frequencies under two different assumed prevalences: for example, if the actual prevalence in the population were lower than the assumed 9%, the model would identify slightly fewer people as at risk, but proportionately more of those at higher risk would be identified as "at risk".
Comparing performance across several prediction rules
Sometimes the performance of several prediction rules is compared. For example, at the derivation stage, the performance of a prediction model might be reported across multiple cut-points, one or two of which are then recommended as cut-points for implementation in practice (as in Case Study 1, Table 1). Sometimes the comparison might be with an existing treatment strategy (such as treating or monitoring everyone). Reporting sensitivity and specificity (or negative and positive predictive values) across a range of cut-points allows readers to infer whether a model would work well according to preferences in the particular setting, but is again limited by the potential for these metrics to be misunderstood.
Decision curves have been proposed as an alternative. Decision curves allow inferences about whether the prediction rule under consideration has a superior net-benefit over alternative strategies (such as treat everyone) [33]. It is recommended that decision curves be presented over a range of cut-points that represent plausible regions of acceptable trade-offs. At any given cut-point, readers can then compare the net-benefit across a set of different strategies (e.g. the prediction rule under consideration and a strategy of treating everyone). The strategy or rule that maximises the net-benefit at a given cut-point is the optimal strategy/rule, under the assumption that the trade-offs at that cut-point are acceptable. Yet decision curves are often viewed as difficult to understand and thus are unlikely to be best suited to conveying information to the health professionals who might use the rule in practice [34]. Furthermore, whilst decision curves are sometimes wrongly assumed to convey information about cut-points that optimise trade-offs, they actually offer a means of comparing net-benefit across different strategies (which might include a prediction rule) under the assumption that the trade-offs are acceptable [17,34].
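To make concrete what a decision curve computes at a single cut-point, the sketch below implements the standard net-benefit formula from the decision-curve literature cited above [33], net benefit = TP/n - (FP/n) x pt/(1 - pt). The worked comparison with "treat everyone", using the Case Study 2 counts, is our own illustration rather than an analysis from the source.

```python
def net_benefit(tp, fp, n, threshold):
    """Net benefit of a classification strategy at a given risk threshold."""
    w = threshold / (1 - threshold)    # weight on false positives at this cut-point
    return tp / n - (fp / n) * w

# Case Study 2 counts per 1000 men at the 20% cut-point: 36 true positives and
# 110 false positives; "treat everyone" treats all 90 events and 910 non-events.
print(net_benefit(tp=36, fp=110, n=1000, threshold=0.20))   # ~0.009
print(net_benefit(tp=90, fp=910, n=1000, threshold=0.20))   # ~-0.138
```

At this threshold the rule has a higher net benefit than treating everyone, which is the kind of comparison a decision curve displays across a whole range of thresholds.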
What can be learnt from other areas of communication
There is a large body of research that tells us that, when trying to determine whether a trade-off is acceptable, people need information about negative and positive predictive values together with contextual information on the likelihood of an event [12,18,27]. The combination of these two sources of information is known as natural frequencies [13]. For example, when deciding whether to participate in a screening programme for Down's syndrome, people need to know both the probability that the baby does (or does not) have Down's syndrome when classified as "at risk" (or "not at risk") and the likelihood that the baby has Down's syndrome. This body of work underscores the fact that, to increase understanding amongst patients and members of the public, and consequently facilitate more informed decisions, presenting numerical information using natural frequencies is optimal [27]. Presenting natural frequencies in a visual form has also been shown to increase understanding [24]. Population diagrams (see the case studies) are one way of visually presenting natural frequencies [18]. Visual presentations have been used successfully to communicate the trade-offs of deciding to participate in screening programmes [7,18,27]. Whilst health care professionals tend to be better able to interpret statistical information than patients and the public [11], they still tend to have some difficulty interpreting statistical concepts [9,12,38]. Furthermore, there is evidence from a systematic review of randomised trials that presenting information using natural frequencies and visual aids increases the understanding of health professionals [13,35].

Fig. 2 Population diagram to illustrate the clinical ramifications of the QRISK2 score (20% cut-point). Each circle (1000 in total) represents one male between the ages of 35 and 74 years, of whom approximately 90 will have a cardiovascular event over 10 years of follow-up (shaded circles) and 910 will not (unshaded circles). Red circles indicate people deemed "at risk" using the QRISK2 score at a cut-point of ≥20%; green circles are people deemed "not at risk". These natural frequencies were derived from the external validation cohort for males ([4], Table 4); the externally validated C-statistic was 0.77, the estimated sensitivity 40%, the specificity 88%, and the prevalence 0.09 over 10 years.
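Readers who wish to reproduce the natural frequencies in the case studies can derive them directly from a (sensitivity, specificity, prevalence) triple. The short sketch below is our own illustration of that arithmetic, not code from the source; the Case Study 1 inputs (93% sensitivity, 56% specificity per the Fig. 1 legend, prevalence 0.036) recover the reported counts up to rounding.

```python
def natural_frequencies(sensitivity, specificity, prevalence, n=1000):
    """Turn rule performance at one cut-point into counts per n patients."""
    events = prevalence * n            # patients who will have the event
    non_events = n - events            # patients who will not
    tp = sensitivity * events          # events correctly flagged "at risk"
    fn = events - tp                   # events missed ("not at risk")
    tn = specificity * non_events      # non-events correctly "not at risk"
    fp = non_events - tn               # non-events flagged "at risk"
    return {"at risk": round(tp + fp),
            "at risk, with event": round(tp),
            "not at risk": round(tn + fn),
            "not at risk, with event": round(fn)}

# Case Study 1 (CSRS, "low risk" cut-point):
print(natural_frequencies(0.93, 0.56, 0.036))
# ~458 classified "at risk" (reported: 460), ~3 missed events (reported: 2)
```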
Conveying performance of prognostic rules using natural frequencies and population diagrams
We use two case studies to illustrate how natural frequencies and population diagrams can help health professionals decide whether a prediction rule has the potential to improve treatment or management strategies (Tables 1 and 3). For each case study, we present a population diagram for the prognostic rule at either a recommended cut-point or a cut-point in common use (Figs. 1 and 2). For Case Study 1, we additionally illustrate the trade-offs behind the choice of cut-point using natural frequencies; cut-points considered to have acceptable trade-offs might then be taken forward as candidate prediction rules for an impact study. In Case Study 2 (Table 3) we present the natural frequencies at one cut-point only, simply because this is the accepted cut-point used in practice. Presenting the associated population diagram in Case Study 2 allows intended users of the tool (for example, health professionals in an impact study) to understand the scope for misclassification in a rule they have been asked to implement.
Population diagrams are referred to by a variety of names, such as pictograms and decision aids, and can be presented in a variety of ways. Following others, we base the diagrams on a population size of 1000 [10]. The representation of each member of the population might take any of a number of forms, for example a pictorial representation of a person or, as in our diagrams, a circle [18,20]. The diagrams need a coding system that allows two overlaid two-way classifications; we follow the format used by Loong [20], whilst noting that alternative presentations, such as scaled rectangular diagrams, might be equally if not more appealing [21]. Others have suggested that the natural frequency information should be communicated alongside the consequences of misclassification [36]. We reiterate that even when presentation of these natural frequencies suggests an apparently well-performing rule, this is not sufficient to indicate that the model should be used in clinical practice: all prediction rules should undergo an impact analysis [26]. Indeed, presentation of prediction rules in this way might moreover suggest that the models should be used as an aid in the decision process and not as a substitute or decision rule [19].
Both case studies illustrate the trade-offs to be made when using prognostic rules. In Case Study 1, natural frequencies help reveal that when the model is used at the suggested cut-point, whilst it is reasonably able to rule out a serious adverse event, there is a cost: a large proportion of patients are flagged as "at risk" and so would undergo monitoring (i.e. the rule is not good at ruling in). Whilst these might be appropriate trade-offs, they are not obvious when the performance is summarised by a C-statistic and sensitivity alone, but become transparent when population diagrams are shown. In Case Study 2, for every person identified as needing treatment (i.e. identified as "at risk"), another three will be treated unnecessarily, and two thirds of those in need of treatment will not be treated. Thus, despite having a C-statistic of close to 0.8 (actual value 0.77), the model does not do terribly well at either ruling in or ruling out future events [32].
Recommendations for reporting prognostic rules to allow trade-offs to be transparent

When prediction models are recommended for use as prediction rules, there will be trade-offs to be made at the chosen cut-point. These trade-offs should be transparent to the proposed end user, the health professional. Whilst we have not carried out a formal evaluation, these case studies illustrate how, in any knowledge translation of prediction rules, population diagrams and natural frequencies are good methods for ensuring that the performance of prediction rules can be properly understood. Our goal is to prevent poorly performing rules from being adopted in clinical practice because of a misconception that they work well. We advocate not the replacement of current metrics but rather an effective communication tool at the point where researchers have to translate their results to guide clinical decision-making. We make a distinction between (1) providing information in a way that allows the implications to be compared across multiple cut-points (to facilitate the choice of candidate cut-points that represent a range of acceptable trade-offs in an impact assessment study) and (2) providing information in a way that allows the implications of the trade-offs at one cut-point to be considered (to facilitate understanding of the limitations of a rule when used in clinical practice).
Modified effect of active or passive smoking on the association between age and abdominal aortic calcification: a nationally representative cross-sectional study
Objective The deleterious effects of smoking on atherosclerosis are well known; however, the interaction among ageing, smoking and atherosclerosis remains unclear. This study tested the hypothesis that the association between age and vascular calcification, a critical marker of atherosclerosis, is modified by smoking. Design Cross-sectional study. Setting A nationally representative sample, the National Health and Nutrition Examination Surveys 2013–2014. Participants This study included 3140 adults aged 40–80 years with eligible data for abdominal aortic calcification (AAC). Active and passive smoking exposure was identified through self-reports and tobacco metabolites (serum cotinine and urinary 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol). Primary outcome measures The AAC score was determined using dual-energy X-ray absorptiometry (DXA) scans. ORs were estimated using logistic regression to assess the association between age and the presence of severe or subclinical AAC, stratified by smoking exposure. The survey-weighted Wald test was used to evaluate potential interactions. Results AAC was positively associated with age in the general population. After adjustment for age, sex, race/ethnicity and other cardiovascular risk factors, age was significantly associated with the odds of severe AAC (OR for each 5-year increase in age: 1.66, 95% CI 1.48 to 1.87, p<0.001). As expected, the association between age and vascular calcification was especially strong in smokers compared with never smokers (p value for interaction ≤0.014). According to spline fitting, the progression of vascular calcification was significantly increased after 45 years of age in smokers compared with after 60 years in never smokers. Quitting smoking may lessen this vascular damage, especially in younger adults. However, the difference in age-related calcification among never smokers with or without secondhand smoke exposure was minor, regardless of whether exposure was defined by self-report, serum cotinine, or urinary 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol. Conclusions Smoking significantly accelerated the progression of age-related subclinical atherosclerosis. Early smoking cessation should be encouraged among young smokers. The effect of passive smoking exposure on arteriosclerosis should be assessed further.
Major

1. Introduction. As the authors mentioned (page 6, 2nd paragraph), vascular calcification or atherosclerosis is a common aging-related process. Smoking is also a well-established risk factor for vascular calcification or atherosclerosis. It is obvious that smoking exposure affects the association between age and aortic calcium burden. Therefore, the scientific premise of the present study seems unclear to me.
2. Study population. How many US adults participated in NHANES 2013-2014? How many of them underwent DXA scans to measure the abdominal aortic calcium score? What is the participation rate? Did the authors consider the possibility of selection bias?
3. Outcome, abdominal aortic calcium score. The definitions of the AAC-24 and AAC-8 scores remain unclear. Please describe them more specifically in the Methods section.
4. Statins promote vascular atheroma calcification, which ultimately results in plaque stabilization and less likelihood of progression to unstable plaques. I suggest the authors take statin use into account in the analysis.
5. Statistical analysis. In general, the aortic calcium score is likely to show a skewed distribution, not a normal distribution. Did the authors confirm its distribution? I do not think it is appropriate to analyze the data using linear regression.
6. Too many covariates in a multivariable model may cause the problem of overfitting. Did the authors examine the multicollinearity of the covariates when performing the multivariable analysis? For example, hypertension was defined as systolic/diastolic BP of ≥140/90 mmHg or antihypertensive medication use; however, hypertension, ACEI/ARB use, and beta-blocker use were simultaneously included in the multivariable model.
7. Results. It will be important to present the association of smoking status (current, former, and never smoking) with abdominal aortic calcium burden as one of the results.
8. Discussion, page 15, 2nd paragraph, "the data on smoking and abdominal arterial calcification were limited". However, some prior studies provided evidence in this area: Hisamatsu et al. J Am Heart Assoc. 2016 Aug 29;5(9):e003738; and Pham et al. Int J Cardiol. 2020 Sep 1;314:89-94.
9. The authors conclude that age-related vascular calcification is partly reduced by quitting smoking. To obtain more solid scientific evidence, the authors should examine the association of age with abdominal aortic calcium burden in current smokers and in former smokers grouped by years since quitting vs. never smokers.
Minor

1. Page 9, line 3. The sentence "The detailed protocols have been presented in previous studies" seems to need appropriate references.
2. I suggest the results based on AAC-8 also be provided in the supplement.
3. Page 16, line 6, "suggesting the potential benefits of quit smoking to prevent atherosclerosis". Please consider three prior papers which report the association of time since quitting in former smokers with atherosclerosis burden (
4. Please change "non smoker" to "never smoker".
5. Table 1. Why did the authors divide participants into 2 groups based on 65 years of age? Please describe the reasons in the statistical analysis of the Methods section.
6. Figures 1-3, legends. Please explain the definition of AAC-24.
VERSION 1 - AUTHOR RESPONSE
Comments to the Author: The authors present a cross-sectional observational study from NHANES 2013-2014 in 3140 US individuals aged 40-80 years to test the hypothesis that the association of age with abdominal aortic calcium burden is modified by smoking exposure. The strength of the present study is the use of a nationally representative sample of the US general population. Data including smoking exposure and abdominal aortic calcium score were based on standardized and validated protocols. However, there are some concerns with this paper.
Response: We would like to thank the reviewer for the careful reading and helpful comments. We have done our best to reply to each comment; if any answer is inaccurate or insufficient, please point it out. Thank you for your patient review.

Major

1. Introduction. As the authors mentioned (page 6, 2nd paragraph), vascular calcification or atherosclerosis is a common aging-related process. Smoking is also a well-established risk factor for vascular calcification or atherosclerosis. It is obvious that smoking exposure affects the association between age and aortic calcium burden. Therefore, the scientific premise of the present study seems unclear to me.

Response: We suspect the reviewer is referring to confounding rather than interaction. We agree with the reviewer that a variety of cardiovascular risk factors, such as age, sex, smoking, and metabolic syndrome, are associated with vascular calcification. Each risk factor independently contributes to the outcome, and their effects are cumulative. Therefore, multivariable analysis is used to adjust for smoking and other risk factors (potential confounders) and to observe the independent correlation between age and vascular calcification. However, that does not mean smoking and age have an interaction effect on calcification, that is, an amplification produced by the combination of aging and smoking. Vetter and Mascha recently reviewed the difference between confounding and interaction and its application (Thomas R Vetter, Edward J Mascha. Bias, Confounding, and Interaction: Lions and Tigers, and Bears. Anesth Analg. 2017;125(3):1042-1048). We drew a graph of the difference between the two, as follows. To avoid ambiguity, we have reorganized the Introduction. Thank you very much. (P6/7)

2. Study population. How many US adults participated in NHANES 2013-2014? How many of them underwent DXA scans to measure the abdominal aortic calcium score? What is the participation rate? Did the authors consider the possibility of selection bias?

Response: Thanks for your comments. We have added a detailed description in the Methods. There were 10,175 individuals in NHANES 2013-2014, of whom 3,815 were adults 40 years of age and older. Pregnant females were excluded (n=3). Dual-energy X-ray absorptiometry (DXA) scans were conducted in participants aged 40 years and older, and aortic calcification of the lateral spine (vertebrae L1-L4) was assessed. After excluding participants without a scan (n=482) or with invalid images (n=190), all those with eligible AAC scores were included in this study (n=3,140). NHANES is an ongoing, nationally representative, stratified, multistage probability-sampling survey designed by the US Centers for Disease Control and Prevention. NHANES provides several subsample weights to minimize selection bias. For NHANES analyses, the correct sample weight depends on the variables used: a good rule of thumb is to use the "lowest common denominator", i.e., the variable of interest collected on the smallest number of respondents, and the sample weight applied to that variable is the appropriate weight for that particular analysis (https://wwwn.cdc.gov/nchs/nhanes/tutorials/module3.aspx). (P8)

3. Outcome, abdominal aortic calcium score. The definitions of the AAC-24 and AAC-8 scores remain unclear. Please describe them more specifically in the Methods section.

Response: Thanks for your suggestions. We added the definitions of the AAC-24 and AAC-8 scores.
Abdominal aortic calcification can be easily assessed on lateral spine scans obtained by DXA, which has several advantages: it is inexpensive, easy, rapid and safe. The AAC-24 score is calculated from the length of calcification of the posterior and anterior aortic walls contiguous to lumbar vertebrae L1-L4. Treating lines across the middle of the intervertebral spaces as segment boundaries, the abdominal aorta is divided into 8 segments. AAC in each segment is scored from 0 to 3 according to the calcified length of the aortic wall (0 points: no calcification; 1 point: ≤1/3; 2 points: 1/3-2/3; 3 points: >2/3). The AAC-8 score is a simplified method derived from AAC-24: aortic calcification of the anterior and posterior aortic walls in front of L1-L4 is scored 0 to 4 for each wall (0 points: no calcification; 1 point: no more than the length of one vertebra; 2 points: no more than the length of two vertebrae; 3 points: no more than the length of three vertebrae; 4 points: more than the length of three vertebrae). The AAC-8 score is therefore less influenced by small calcifications but requires more skillful technologists than AAC-24 (Pawel Szulc. Bone. 2016 Mar;84:25-37). (P9)

4. Statins promote vascular atheroma calcification, which ultimately results in plaque stabilization and less likelihood of progression to unstable plaques. I suggest the authors take statin use into account in the analysis.

Response: Thanks for your comments. We agree with the reviewer that statin treatment improves lipid metabolism and plaque stabilization. In our manuscript, the multivariable regression analysis included lipid-lowering agents, of which statins accounted for 94%. Furthermore, when the variable "lipid-lowering agents" was replaced by "statin treatment", the results were consistent.
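As a concrete reading of the AAC-24 definition given in the response to comment 3 above, here is a minimal sketch that scores the eight aortic-wall segments from the calcified fraction of each segment. It encodes only the thresholds stated in that response and is our own illustration, not code used in the study; the ≥6-point definition of severe AAC quoted in a later response can then be applied to the total.

```python
def aac24(calcified_fractions):
    """AAC-24 score from the calcified length fraction of each of the 8
    segments (anterior and posterior aortic walls alongside L1-L4)."""
    assert len(calcified_fractions) == 8
    def segment_score(f):
        if f == 0:
            return 0        # no calcification in this segment
        if f <= 1/3:
            return 1        # up to one third of the wall calcified
        if f <= 2/3:
            return 2        # between one and two thirds
        return 3            # more than two thirds
    return sum(segment_score(f) for f in calcified_fractions)  # range 0-24

score = aac24([0.5, 0.2, 0.0, 0.0, 0.8, 0.1, 0.0, 0.0])
print(score, "severe" if score >= 6 else "not severe")  # 7 severe
```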
5. Statistical analysis. In general, the aortic calcium score is likely to show a skewed distribution, not a normal distribution. Did the authors confirm its distribution? I do not think it is appropriate to analyze the data using linear regression.

Response: We thank the reviewer for this professional comment. We transformed the AAC-24 scores into binary data: following previous studies, severe AAC was defined as AAC-24 ≥6 points and subclinical AAC as AAC-24 ≥2 points. The distribution of the aortic calcium score was indeed skewed, but the residuals of the linear regression model were close to a normal distribution, which was emphasized as a necessary condition in generalized linear models (including linear, logistic and Cox regression). The trends in both models were consistent. We agree with the reviewer's rigorous approach: logistic regression with odds ratios is shown in Table 1 as the primary analysis, and the linear regression results are presented in the supplementary document.
6. Too many covariates in a multivariable model may cause the problem of overfitting. Did the authors examine the multicollinearity of the covariates when performing the multivariable analysis? For example, hypertension was defined as systolic/diastolic BP of ≥140/90 mmHg or antihypertensive medication use; however, hypertension, ACEI/ARB use, and beta-blocker use were simultaneously included in the multivariable model.

Response: Thanks for your comments. We agree with the reviewer that overfitting should be considered. More than two thirds of the users of ACEIs/ARBs or β-blockers had hypertension. We therefore removed these two variables, and the multivariable models were assessed using the variance inflation factor (VIF). The VIF of each variable was no more than 2.64, which does not suggest multicollinearity of the covariates. We have added the necessary descriptions in the Methods. (P12)

7. Results. It will be important to present the association of smoking status (current, former, and never smoking) with abdominal aortic calcium burden as one of the results.

Response: We have added an analysis of the correlation between smoking status and AAC (Table S2).
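A minimal sketch of the multicollinearity check described in the response to comment 6, using the variance inflation factor from statsmodels; the covariate names in the usage comment are illustrative placeholders, not the actual NHANES variable list.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(covariates: pd.DataFrame) -> pd.Series:
    """VIF per covariate; the authors report a maximum of 2.64, well below
    common rule-of-thumb thresholds (5 or 10) for worrying collinearity."""
    X = sm.add_constant(covariates)  # include an intercept, as in the fitted model
    return pd.Series({col: variance_inflation_factor(X.values, i)
                      for i, col in enumerate(X.columns) if col != "const"})

# Hypothetical usage after dropping ACEI/ARB and beta-blocker use:
# print(vif_table(df[["age", "sex", "bmi", "hypertension", "diabetes"]]))
```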
Neurological Applications of Celery (Apium graveolens): A Scoping Review
Apium graveolens is an indigenous plant in the family Apiaceae, or Umbelliferae, that contains many active compounds. It has been used traditionally to treat arthritic conditions, gout, and urinary infections. The authors conducted a scoping review to assess the quality of available evidence on the overall effects of celery when treating neurological disorders. A systematic search was performed using predetermined keywords in selected electronic databases. The 26 articles included upon screening consisted of 19 in vivo studies, 1 published clinical trial, 4 in vitro studies and 2 studies comprising both in vivo and in vitro methods. A. graveolens and its bioactive phytoconstituent, 3-n-butylphthalide (NBP), have demonstrated their effect on neurological disorders such as Alzheimer’s disease, Parkinson’s disease, stroke-related neurological complications, depression, diabetes-related neurological complications, and epilepsy. The safety findings were minimal, showing that NBP is safe for up to 18 weeks at 15 mg/kg in animal studies, while there were adverse effects (7%) reported when consuming NBP for 24 weeks at 600 mg daily in human trials. In conclusion, the safety of A. graveolens extract and NBP can be further investigated clinically on different neurological disorders based on their potential role in different targeted pathways.
Introduction
Neurodegenerative illnesses are defined by a loss of functionality and the eventual death of nerve cells in the brain or peripheral nervous system [1]. One in three people are estimated to experience a neurological condition at some point in their lives, making neurological conditions the second largest cause of mortality and the primary source of disability [2,3]. Most available prevalence data focus on dementia, as it is the largest contributor to neurodegenerative disease burden. However, apart from the most common neurodegenerative diseases such as Parkinson's disease, Alzheimer's disease, multiple sclerosis, and stroke [4], there is also a wide range of other neurological diseases, such as prion disease, motor neuron diseases, Huntington's disease, spinocerebellar ataxia and spinal muscular atrophy [5]. Anatomical (functional systems), cellular (neuronal groups), protein-level (structural change, biochemical modification and altered physiological function), and genetic changes all affect how these diseases develop. Persistent neuroinflammation often occurs, and the pathogenesis of a neurological disease is often complex, with all these factors interlinking and perpetuating each other [6]. Current therapeutic options for neurological diseases mostly provide symptomatic support for patients and caregivers, while a successful cure is yet to be found. Early diagnosis is essential for treatment planning and can help optimize support for patients and their families in the long run [7]. Recent reviews have been published on the use of herbal medicine for the treatment of neurodegenerative diseases [8,9]. Herbal treatments should therefore be considered potential therapeutic candidates for tackling neurological disorders.
Celery is among the plants that have recently gained popularity in research [10,11]. Celery (A. graveolens) is an indigenous plant of the family Apiaceae, or Umbelliferae, originating in the Mediterranean [12,13]. It is most easily identified by its thick, very erect stem, and it is used as a food in most parts of the world. Celery contains many active compounds, including polysaccharides (apiuman) [14], flavonoids (luteolin, apigenin) [15], phthalides (sedanolide, 3-n-butylphthalide) [16,17], furanocoumarins (bergapten, xanthotoxin) [18], terpenes (d-limonene) [17], amino acids (L-tryptophan) [16], polyacetylenes (falcarinol, falcarindiol) [19,20], and vitamins (alpha-tocopherol) [21]. One of its bioactive compounds, butylphthalide, a light-yellow viscous compound comprising a family of optical isomers that includes l-3-N-butylphthalide (L-NBP), d-3-N-butylphthalide (D-NBP), and dl-3-N-butylphthalide (DL-NBP), is known for its therapeutic value. Based on its traditional use, A. graveolens has been known to relieve joint pain, gout, and urinary infections [22]. It has also been used traditionally to increase urine excretion, promote menstrual discharge, and treat dengue fever and inflammation or pain in muscles or joints [23]. In vivo and in vitro studies have demonstrated the pharmacological efficacy of A. graveolens, showing antimicrobial, antifungal, anti-parasitic, anti-inflammatory, anti-cancer, anti-ulcer, antioxidant, anti-diabetic, anti-infertility, anti-platelet, spasmolytic, hepatoprotective, cardioprotective, neuroprotective, cytoprotective, hypolipidemic, and analgesic activity [24]. There is a need to review all relevant studies to assess whether celery has an effect on neurological disorders. Despite the growing evidence, no systematic or scoping review narrating the effect of celery on neurological disorders has been published. Therefore, this scoping review aimed to collate and assess the quality of the currently available scientific evidence on the overall potential use of celery in neurological disorders.
Study Inclusion
A total of 208 records were identified from the initial search, and 26 articles were finally included. One clinical study was identified, while the rest comprised 19 in vivo studies, 4 in vitro studies, and 2 studies employing both in vivo and in vitro methods. The study selection process is presented in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart (Figure 1).
Characteristics of Included Studies
Overall, the included studies focused on the efficacy of A. graveolens, with the majority of studies investigating NBP as the main phytoconstituent and its derivatives and analogues (n = 20); this was followed by extracts (n = 5), with one study not describing the intervention in sufficient detail. The extracts were mostly sourced from the whole plant or the aerial parts of A. graveolens, while a majority did not mention the source of the NBP and its derivatives. Among the included studies, the in vivo studies were mainly focused on Alzheimer's disease (n = 4) and stroke-related neurological complications (n = 4), followed by depression (n = 3), the general mechanisms of action of neurological disorders (n = 3), diabetes-related neurological complications (n = 2), epilepsy (n = 2) and others (Parkinson's disease (n = 1), anxiolytics (n = 1), and neurotoxicity (n = 1)). In vitro studies were mostly on Parkinson's disease (n = 3), followed by others (diabetes (cognitive decline) and stroke in support of in vivo findings (n = 2), and Charcot-Marie-Tooth disease (n = 1)). The clinical study focused on therapy for Parkinson's disease (n = 1).
Among the included studies, 4 of 26 underwent an authentication process through deposition of a voucher specimen of the plant. Four of the 26 studies reported using a qualitative analysis to determine the phytochemicals associated with A. graveolens, and 3 of 26 performed a quantitative analysis to determine the composition of the associated phytochemicals. Only one study [28] reported using a standardized formulation of the methanolic extract of the whole A. graveolens plant. The routes of administration of the interventions included oral, intranasal, intravenous, and intraperitoneal. Detailed information on the qualitative and quantitative phytochemical analyses, as well as the standardization formulas of the herbal interventions of all included studies, is presented in the Supplementary Material: Table S3.
In Vivo Studies
Nineteen of the included studies were in vivo studies, and 2 further in vivo studies were supported by additional in vitro findings, which further explored potential mechanisms of action. Four studies conducted between the years 2010 and 2016 focused on Alzheimer's disease using L-3-n-butylphthalide (L-NBP), with an oral dosage of 15 mg/kg for a treatment duration of three months or more. The findings showed that L-NBP improved synaptic functions; reduced Aβ plaque load, oxidative stress, and microglia activation; and inhibited abnormal tau hyperphosphorylation [33,34,39,45].
Another four studies focused on stroke-related neurological disorders such as cerebral ischemia reperfusion, focal ischemic stroke, and intracerebral hemorrhage using either DL-3-n-butylphthalide (DL-NBP) or L-NBP; the studies employed various doses and routes of administration (intranasal, intraperitoneal and intragastric) for durations of 2 to 14 days. The findings showed that DL-NBP significantly decreased neurological deficit scores and increased the diameter of collaterals (arteriogenic effect), while L-NBP inhibited the expression of tumor necrosis factor-alpha (TNF-α) and matrix metallopeptidase 9 (MMP-9), thereby reducing inflammatory reactions due to intracerebral hemorrhage [30,37,43,44].
For depression-related neurological disorders, three studies used a crude 70% methanolic extract of A. graveolens or DL-NBP at dosages between 10 mg/kg and 500 mg/kg, administered either orally or intraperitoneally for durations of two to six weeks. The findings for the methanolic extract of A. graveolens showed a significant improvement in immobility and climbing times at all treatment intervals, comparable with the fluoxetine treatment. In terms of the cognitive-enhancing effects measured using the Morris water maze and object recognition tests, A. graveolens increased the novel exploration time more than the donepezil treatment (p < 0.05) in a non-dose-dependent manner [26]. DL-NBP showed significant findings with regard to increased locomotor activity, increased sucrose preference in the sucrose preference test, decreased immobility time in the forced swimming test, and an increased number of crossing and rearing behaviors in the open-field test [27,40].
An aqueous extract of A. graveolens and DL-NBP were studied for diabetes-related neurological disorders at doses between 20 mg/kg and 120 mg/kg for four to eight weeks. DL-NBP showed neuroprotective effects in diabetes-associated cognitive decline through hippocampal morphology normalization, by improving synaptic plasticity, and by reducing neuronal apoptosis [36]. However, there was no mention of the dosage and duration of administration of the A. graveolens aqueous extract, which demonstrated only positive results in the step-through latency test, with no significant improvements in the initial latency and Y-maze tests [35].
With regard to celery's role in epilepsy, two studies [29,41] using either an aqueous extract of A. graveolens or L-NBP at doses between 80 mg/kg and 1000 mg/kg administered intraperitoneally showed increased minimal clonic seizure (MCS) latency and a significant amelioration of epileptiform activity (p < 0.05) compared to the saline or vehicle (tween-80) based on electroencephalography readings.
Risk of Bias Assessment of In Vivo Studies
All studies (100%) showed a low risk of bias in selective reporting, while more than 70% of the studies showed a low risk of bias for the baseline characteristics and attrition bias (as incomplete outcome data). A further 10% of the studies were assessed as having a high risk for other biases due to a lack of details regarding the origins of the test item and the study funding.
In Vitro Studies
In total, 4 out of the 26 included studies were in vitro studies, and 2 were additional in vitro findings from in vivo studies that further explored potential mechanisms of action. Most in vitro studies (n = 3) focused on Parkinson's disease in relation to neurological disorders. NBP in its enantiomeric and racemic forms (L-NBP/DL-NBP) showed protective effects in Parkinson's disease cell models through reducing cytotoxicity, preserving the dendritic processes surrounding cells, decreasing apoptotic cells, and inhibiting tau protein hyperphosphorylation [46,47,49]. Another two studies [36,37] were supportive in vitro findings to the in vivo studies, exploring the mechanisms of action in stroke and diabetes (cognitive decline) models. One study [48] analyzed the effects of L-NBP on a hereditary disease known as Charcot-Marie-Tooth disease (CMT), which harms the peripheral nerves. The scientific evidence on the pharmacological properties of A. graveolens and its phytoconstituents is described in the tables and narratively, as follows (Table 2):
Clinical Trial
One clinical trial was included. It was a prospective, single-center, parallel-group, randomized controlled trial using DL-NBP as therapy for Parkinson's disease. Patients with idiopathic Parkinson's disease were treated with 200 mg of DL-NBP, thrice daily for 24 weeks, alongside the concomitant existing medications that patients were already taking. The findings showed that NBP therapy improved symptoms such as bradykinesia and stiffness (based on the non-tremor score), sleep quality (via the Pittsburgh sleep quality index scores), and quality of life [50].
Safety Study
In total, 3 out of 26 studies contained safety findings; these were in vivo studies (n = 2) [33,45] and one clinical trial [50]. One study using L-NBP at 15 mg/kg showed no significant toxicity in mice after monitoring their general health for 18 weeks [33]. However, another study using L-NBP at a similar dose for 18 weeks reported that the mice gradually died due to the poor physical condition of aging [45]. For the clinical trial [50], it was reported that 3 adverse events out of 43 were directly associated with the treatment in the NBP group at 200 mg three times a day for six months; these events included itching and skin rash (n = 1), a slight elevation in alanine transaminase (ALT) levels, and a mild gastrointestinal reaction.
Discussion
According to the Pan American Health Organization, neurological disorders account for 533,172 deaths, 7.5 million years of life lost due to premature mortality, and 8.2 million years lived with disability [51]. The included studies show that A. graveolens and its compounds have potential applications in various neurological disorders, although most of the reported studies were in the in vivo stage.
Parkinson's Disease
Parkinson's disease is the only application in which the NBP compound, instead of a plant extract, has successfully reached the clinical trial stage. NBP has been shown to improve behavioral abnormalities in a Parkinson's disease mouse group; reduce oxidative stress via reducing malondialdehyde levels and increasing glutathione peroxidase and the percentage inhibition of oxygen; and protect the dopaminergic neurons by reducing the activity of monoamine oxidase types A and B [28]. These findings were further supported by a study that combined DL-NBP with mesenchymal stem cells, showing enhanced neuroprotection in Parkinson's disease caused by concussive head injury [52].
Alzheimer's Disease
In Alzheimer's disease studies, L-NBP has been shown to improve synaptic functions; reduce Aβ plaque load, oxidative stress, and microglia activation; and inhibit abnormal tau hyperphosphorylation, which plays a role in the Aβ tau synergy [45]. It is now thought that there are two different types of interactions: major physical interactions between the two proteins at the synapse, or indirect interactions caused by Aβ and tau's effects on neuronal physiology (activating kinases, preventing tau degradation, regulating excitability and gene expression, and activating glia) in slowing the progression of Alzheimer's disease [53]. Celery's potential in treating Alzheimer's disease needs to be further assessed by capturing the spatiotemporal progression of Aβ and tau pathology and other disease characteristics, as well as considering the contribution of complex genetic and environmental variables that influence disease phenotypes [53].
Stroke-Related Neurological Disorders
In stroke, NBP has potential with its dual role; it has arteriogenic effects and the ability to inhibit the expressions of TNF-α and MMP-9. Arteriogenic effects may benefit the maintenance of pial collaterals, which are small arterial connections joining the terminal cortical branches of the major cerebral arteries along the surface of the brain, and are therefore important in supporting a functional brain environment [54]. Conversely, TNF-α production is triggered by ischemia during stroke as an inflammatory response, leading to the activation of MMP-9 expression related to secondary bleeding in the BBB [55]. These collectively show the pleiotropic effects of NBP, which may be beneficial, given that the pathogenesis of stroke is multifactorial and involves multiple pathways of neuroexcitotoxicity, neuroinflammation, structural damages, oedema in the BBB, oxidative damages, as well as overall neurodegeneration.
Other Neurological Disorders
Other targeted pathways reported for the effects of A. graveolens in neurological disorders include the Nrf2 and NF-κB pathways; BDNF/ERK/mTOR (antidepressant); CNTF/CNTFRα/JAK2/STAT3 (decreased cerebral blood flow); and SIRT1/PGC-1α (obstructive sleep apnea). All these signaling pathways are essential contributors to chronic neuroinflammation and oxidative stress in the brain. To further understand the complex transcriptional regulation of brain function in various disease models, in-depth research on the effects of celery and its bioactive constituents, and their temporal effects on upstream regulators and downstream effector signaling pathways in neuroinflammation and neuronal damage, needs to be carried out [56].
Celery's Mechanisms of Action in Neurological Diseases
Celery and its bioactive compounds play a role in oxidative stress, inflammatory responses, and neuronal apoptosis [57]. Most neurological disorders have three common underlying mechanisms. The first is oxidative stress, which causes cellular damage involving Nrf2 [58]. NBP is a known potent antioxidant, activating the Nrf2-enhanced expression of antioxidant enzymes [59,60]. These enzymes reduce reactive oxygen species (ROS) and prevent mitochondrial damage [61,62]. The second central mechanism is prolonged and unregulated neuroinflammation related to NF-κB, with the production of pro-inflammatory cytokines and chemokines associated with the self-potentiation of the neuroinflammatory cycle. NBP plays a role in the downregulation of TNF-α and MMP-9 expressions, leading to the inhibition of microglia activation via TLR4/NF-κB signaling, and reducing inflammation [63][64][65]. The third pathway involves neurodegeneration through apoptosis, autophagy, and necrosis. Neurodegeneration is significantly influenced by oxidative stress and chronic neuroinflammation through the regulation of p53 activity. In a molecular docking study, NBP showed its potential to suppress glial apoptosis by limiting p53 degradation by inhibiting NAD(P)H quinone oxidoreductases [66]. In addition, an animal study of Alzheimer's disease supported NBP's role in decreasing the expression of p53 in the cortex, improving learning and memory abilities [67]. The role of celery's action in neurological diseases is summarized in Figure 4.
Limitations
Most of the included studies share a similar limitation. Although their findings support the role of A. graveolens in signaling pathways, they lack an in-depth understanding of the molecular mechanisms that contribute to celery's neuroprotective and pharmacological effects. For example, NBP was studied for its neuroprotective role in BBB disruption following ischemic stroke without an in-depth study of the internal relationship, possible targets, or molecular mechanism by which NBP protects the BBB after cerebral ischemic reperfusion [30]. When considering potential therapeutic candidates for diseases of the central nervous system, the route of administration used to deliver the drug to the brain is an important consideration. This is to ensure that the drug interaction occurs at the specific targeted site. Based on our included studies, the routes of administration include oral, intranasal, intravenous, and intraperitoneal. A distribution study that evaluated the metabolic profile of NBP in rats via a radiochromatograph showed that the delivery of NBP from the blood to the brain is limited by the BBB [68]. Therefore, much research is needed to develop herb-based formulations with improved delivery using exosomes, nanoparticles, active transporters or brain permeability enhancers, and other non-invasive techniques [69]. As this is a scoping review, we did not perform meta-analyses of the data. However, with time, when sufficiently homogeneous literature is available for any single neurological disorder, systematic reviews with meta-analyses may be performed.
Materials and Methods
A scoping review of the literature was conducted in accordance with the methodology by Levac et al. [70]. The Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines were followed, which are a set of 20 essential items and 2 optional items that were created to help improve the quality, completeness, and transparency of scoping reviews; this is presented in the Supplementary material: Table S1 [71].
Review Objective
This scoping review was conducted to evaluate the worldwide scientific evidence on the pharmacological properties and safety of the A. graveolens plant for neurological disorders.
Type of Study
This review considered both clinical and preclinical (in vivo, in vitro) articles. Conference proceedings articles were excluded due to a lack of information for critical appraisal.
Type of Participants
This review included studies that recruited human subjects with any neurological disorder, or that used animal models or cell studies, addressing both the central and peripheral nervous systems.
Type of Intervention
This review considered any form of A. graveolens, including all plant parts and preparations, such as crude preparations, extracts, standardized extracts, and finished products in pharmaceutical forms (e.g., capsules, tablets, powder, liquid) containing A. graveolens as a sole active ingredient, as well as its representative compounds.
To assess celery as a whole or as its constituents, as well as a single contributing intervention, this review excluded studies using co-intervention in combination with celery.
Type of Outcomes
The following primary and secondary outcomes were selected prior to screening and the selection of studies to facilitate a systematic assessment of the outcome measures. These outcomes were selected based on the effects of the compounds on central and peripheral nervous system disorders found in a published literature review and general web search [72][73][74][75][76][77].
Primary Outcomes
Pharmacological properties of A. graveolens in neurological disorders. Preclinical and clinical outcomes of A. graveolens efficacy studies. Mechanism of action of A. graveolens in efficacy studies.
Secondary Outcomes
Safety: this included adverse events and safety monitoring information from clinical studies, as well as toxicity and safety pharmacology studies from animals that were related to applications in neurological disorders.
Search Strategy
The electronic databases MEDLINE, Web of Science, LILACS, and the Cochrane Central Register of Controlled Trials (CENTRAL) were searched for published studies from inception until November 2022. There were no restrictions applied in terms of publication period or language. In addition to database searches, the team screened the reference lists and citations of retrieved articles to further identify studies for inclusion. In cases of ambiguity, attempts were made to contact the authors of relevant articles that met the inclusion criteria for this review.
The search strategies (Supplementary Table S2) were translated into the other databases using the appropriate controlled vocabulary, as applicable. The general search terms used were celery and neurological disorders and their synonyms.
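For illustration only, since the exact strings are given in Supplementary Table S2 and are not reproduced here, a query combining the two concepts might look like ("Apium graveolens" OR celery OR butylphthalide) AND ("neurological disorders" OR Alzheimer* OR Parkinson* OR stroke OR epilepsy OR depression); the terms shown here are assumptions, not the review's actual strategy.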
Study Selection
A pair of review authors independently screened titles and abstracts from the search strategy according to the inclusion and exclusion criteria, with disagreements resolved via discussion, with the help of a third author as an arbiter if required.
Data Extraction and Management
A pair of review authors independently coded all data from each included study using a pro forma designed specifically for this review. The interventions defined in the study were compared against our pre-defined intervention. Any disagreement among the review authors was resolved by discussion leading to a consensus, with referral to a third review author if necessary.
Risk of Bias Assessment
Two review authors (XYL, TT) independently assessed each article included for risk of bias in animal studies using the Systematic Review Centre for Laboratory animal Experimentation (SYRCLE) risk of bias tool. These authors scored the risk of bias in each domain and the overall risk was reported using the Cochrane Review Manager (RevMan, version 5.4) software [78]. Any disagreement among the review authors was resolved by discussion leading to a consensus and involved a third review author if necessary.
Conclusions
In conclusion, A. graveolens, especially its phytoconstituent NBP, can be further investigated regarding different neurological disorders based on its potential to have pleiotropic effects on different targeted pathways for neurological pathogenesis. The safety of celery extracts and NBP needs to be further established with better quality standards of reporting for a meaningful evaluation of its dosage, efficacy, and safety before its application in future clinical trials.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules28155824/s1, Table S1: PRISMA checklist; Table S2: Search strategy for each electronic database; Table S3: Qualitative, quantitative and standardization details of Apium graveolens interventions. Reference [79] is cited in the supplementary materials.
PM10 and gaseous pollutants trends from air quality monitoring networks in Bari province: principal component analysis and absolute principal component scores on a two years and half data set
Background The chemical composition of aerosols and particle size distributions are the most significant factors affecting air quality. In particular, exposure to finer particles can cause short- and long-term effects on human health. In the present paper, the trends of PM10 (particulate matter with aerodynamic diameter lower than 10 μm), CO, NOx (NO and NO2), Benzene and Toluene monitored in six monitoring stations of the Bari province are shown. The data set used was composed of bi-hourly means for all parameters (12 bi-hourly means per day for each parameter) and refers to the period from January 2005 to May 2007. The main aim of the paper is to provide a clear illustration of how large data sets from monitoring stations can give information about the number and nature of the pollutant sources, and mainly to assess the contribution of the traffic source to the PM10 concentration level by using multivariate statistical techniques such as Principal Component Analysis (PCA) and Absolute Principal Component Scores (APCS). Results Comparing the night and day mean concentrations (per day) for each parameter, it has been pointed out that parameters such as CO, Benzene and Toluene behave differently between night and day, unlike PM10. This suggests that CO, Benzene and Toluene concentrations are mainly connected with transport systems, whereas PM10 is mostly influenced by other factors. The statistical techniques identified three recurrent sources, associated with vehicular traffic and particulate transport, covering over 90% of the variance. The contemporaneous analysis of gases and PM10 has allowed the differences between the sources of these pollutants to be underlined. Conclusions The analysis of pollutant trends from large data sets and the application of multivariate statistical techniques such as PCA and APCS can give useful information about air quality and pollutant sources. This knowledge can provide useful advice to environmental policies in order to reach the WHO recommended levels.
Background
Knowledge of the chemical composition and sources of polluted air is required in any program aimed at controlling pollutant levels in order to evaluate and reduce their impact on human health.
The inhalation of air polluted with particulate matter (PM10) and/or irritant gases such as NO2 and SO2 is, in fact, associated with both short-term and long-term health effects, most of which impact the respiratory and cardiovascular systems [1]. For example, the atmospheric concentrations of NO2 have been linked to the deaths of severely asthmatic patients in Barcelona [2], child asthma cases in Toronto and Southern California [3,4], heart rate dysfunction in Taiwan and Switzerland [5,6], and ischemic heart disease in elderly residents of French cities [7]. Similar examples can be chosen to illustrate the damaging effects of PM10 inhalation, whether it be asthma in Madrid or Sydney [8,9] or all-cause mortality (especially stroke) in Boston [10].
The federal Clean Air Act Amendments of 1990 mandate that the U.S. EPA determine a set of urban hazardous air pollutants (HAPs, or 'air toxics') that potentially pose the greatest risks in urban areas, in terms of contribution to population health risk. The current set of 188 HAPs includes toxic metals and volatile organic compounds (VOCs). The U.S. EPA identified 33 urban HAPs based on emissions and toxicities in a 1995 ranking analysis [11] and developed concurrent monitoring and modelling programs to evaluate potential exposures and risks to these top-ranked 33 HAPs. Developing effective control strategies to reduce population exposure to certain HAPs requires identifying sources and quantifying their contributions to the mixture of HAPs and the associated health risks. One approach is to use receptor-based source apportionment models to distinguish sources. Most source apportionment studies focus on analysing either VOCs [12,13] or fine particle (PM2.5) mass [14][15][16]. Only a few studies used source apportionment modelling to identify common sources of both VOCs and PM2.5. In other source apportionment studies that included both non-organic trace elements on PM and gaseous pollutants [17][18][19][20], the gaseous species usually were non-VOCs (such as CO, SO2, and NO).
In recent years, there has been an increased interest in the application of chemometrics [21] to different environmental research fields, ranging from water to air pollution and cultural heritage [22][23][24][25]. One aspect of the application of chemometrics to environmental pollution research is often referred to as source apportionment, receptor modelling and/or mixture analysis discipline. Recent examples of such work can be found in Europe [26,27], the US [28,29] and Asia [30,31]. In the fields of pollution sciences (air or water), source apportionment models aim to re-construct the emissions from different sources of pollutants based on ambient data registered at monitoring sites [32].
In the present paper, a bihourly data set of PM10, CO, NOx, Benzene and Toluene collected in six air quality monitoring stations of the Bari territory from January 2005 to May 2007 is used. The main aim of this paper is to provide a clear illustration of how large data sets from monitoring stations can give information about the number and nature of the pollutant sources, and mainly to assess the contribution of the traffic source to the PM10 concentration level by using multivariate statistical techniques.
This knowledge could provide useful advice to environmental policies in order to reach the WHO recommended levels. In fact, legislative efforts to reduce the health effects of air pollutants are currently being applied throughout the developed world, with the imposition of averaged limit values which vary for different pollutants. In the case of PM10, the World Health Organization has recommended the progressive achievement of four pollution thresholds which cascade down through three Interim Targets (IT1 = 70 μg/m3; IT2 = 50 μg/m3; IT3 = 30 μg/m3) to reach the ultimate objective: an Air Quality Guideline (AQG) annual mean of just 20 μg/m3 PM10 [33,34]. Moreover, considering the latest Italian law [35,36], for PM10 the annual limit value is 40 μg/m3, while the daily limit value is 50 μg/m3; for NOx the annual limit value is 40 μg/m3, while the hourly limit value is 200 μg/m3; for Benzene the annual limit value is 5 μg/m3; and for CO the 8-hour mean limit value is 10 mg/m3.
Results and discussion
Table 1 summarizes the basic statistics for each site. Among all the available sampling sites, only those with at least 5000 data points were used, considering only days with complete data (12 daily data). The high variability is explained by the length of the period (2.5 years). Pollutant concentrations are reported in μg/m3, except for CO, which is expressed in mg/m3.
From the data collected, night and daily mean concentrations (per day) have been obtained for each parameter. Night and daily mean values have been plotted for each parameter and each sampling site, as Figures 1, 2, 3 and 4 show.
Observing Figures 1, 2, 3 and 4, shown as examples, parameters such as CO, Benzene and Toluene show different trends between night and daily values, with daily mean values higher than night ones. In particular, for the data shown in Figures 1, 2 and 3, the percentage ratio between (daily mean - night mean) and daily mean for CO, Benzene and Toluene is 53%, 49% and 54%, respectively. Considering the Toluene trend shown in Figure 3, it is possible to note for some days, e.g. 05/05/2005 or 22/02/2006, very high daily mean values, unlike the Benzene values shown in Figure 2. The reason is the presence of another pollution source affecting the monitoring site, probably identifiable in the painting of pedestrian crossings and road stripes.
Considering the PM10 night and daily mean concentrations (Figure 4), it is possible to note that they do not show a clear difference between day and night: in fact, the ratio for PM10 is 16%. Moreover, for some days, e.g. 25/03/2005 and 06/02/2006, the thermodynamic conditions in the planetary boundary layer (PBL) adversely affected pollutant dispersion, leading to PM10 night values higher than daily ones, in spite of the reduction of emission sources during the night.
The different night and daily behavior suggests that parameters such as CO, Benzene and Toluene are mainly connected with transport systems, whereas PM10 is mostly influenced by other factors.
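As a minimal sketch of how the day/night comparison above can be computed from the bi-hourly means, assuming the data sit in a pandas DataFrame indexed by timestamp (the column names and the 08:00-20:00 day window are illustrative assumptions, not taken from the paper):

```python
import pandas as pd

def day_night_ratio(df: pd.DataFrame, day_start: int = 8, day_end: int = 20) -> pd.DataFrame:
    """Percentage ratio (daily mean - night mean) / daily mean, per day and parameter.

    df: bi-hourly means indexed by timestamp; one column per pollutant
        (e.g. 'CO', 'Benzene', 'Toluene', 'PM10' -- hypothetical names).
    """
    is_day = (df.index.hour >= day_start) & (df.index.hour < day_end)
    day, night = df[is_day], df[~is_day]
    day_mean = day.groupby(day.index.date).mean()
    night_mean = night.groupby(night.index.date).mean()
    return 100.0 * (day_mean - night_mean) / day_mean

# Values near 50% (CO, Benzene, Toluene) point to daytime traffic sources;
# values near 16% (PM10) point to sources active around the clock.
```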
The parameter trends shown in Figures 1, 2, 3 and 4, related to the Viale Archimede data, are similar to those of the other sites. So the different behaviour between PM10 and the other parameters (CO, Benzene, Toluene) can be considered common to the whole area investigated: Bari and the Bari province.
Moreover, as we have shown in a previous paper [37], the results obtained both by automatic monitoring stations and by sampling campaigns in several sites of the Apulia region suggest that the PM10 amount monitored in this area presents a common contribution even among monitoring stations located 70 km from each other: this common contribution apparently does not depend on local sources. Moreover, in reference [37] we pointed out that PM10 concentrations do not show a seasonal trend, contrary to the PM10 trend shown in the towns of Northern Italy [38,39].
In order to identify the pollutant sources that contribute to PM10 concentrations and to try to distinguish the contribution of local sources, such as vehicular traffic, with respect to "a common regional source" (that is, resuspended matter, dust intrusions, the calcium carbonate source), the APCS model has been applied to the data collected. According to the criteria described in the methods section, the ODV90 criterion was chosen, revealing that three components are necessary and sufficient to run the model properly.
Table 2 shows, as an example, the loading values for the PC analysis applied to the data collected at all the sites during January 2007. Three factors explain almost 92% of the total variance of the data for all the sites. Factor loadings are used to obtain information about source profiles. The first factor (or first principal component, PC1), accounting for a percentage of the total variance ranging between 40% and 51%, was dominated by high loading values of Benzene, Toluene and CO, or by NOx and CO, depending on the site; the second factor (or second principal component, PC2), accounting for a percentage ranging between 24% and 31% of the total variance, is dominated by PM10 or by Benzene and Toluene, while the third factor, explaining a percentage ranging between 21% and 25% of the total variance, had high loading values for Benzene and Toluene or for PM10.
Applying PCA to the whole data set, we generally found that for each sampling site one of the three factors is characterized by high loading values of PM10, while the other two factors are characterized by high loading values of NOx, CO, Benzene and Toluene.
Observing Figure 5, it is possible to note that PM10 is the dominant parameter on the second component, with high loading values.
In order to identify the three sources, the Absolute Principal Component Scores model has been applied to the data sets. Tables 3 and 4 show the parameter distributions in the three pollution sources, averaged over the whole monitoring period: the profile of the second source is mostly characterized by PM10, while the other two sources are characterized to different degrees by NOx, Benzene, Toluene and CO, with a small contribution from PM10.
Moreover, comparing the source profile concentrations between the Summer and Winter seasons, it is possible to note a constant increase of the NOx concentration from Summer to Winter for all sites and sources. In particular, the first source shows, for all sites, higher NOx concentrations in Winter than in Summer. The first source can therefore be considered a mixed source between vehicular traffic and domestic heating. Figure 6 represents the percentage distribution of the parameters in the three sources. The plot is obtained from the monthly source profiles averaged over the whole sampling period and among all monitoring sites.
Over 85% of the mass of PM10 is attributed to the second source. The first and third sources, composed of NOx, CO and aromatic compounds, with low levels of PM10, are characterized by similar levels of benzene and toluene. In particular, the Toluene-to-Benzene concentration ratios in the first and third source profiles are greater than 2 (except for the San Nicola sport stadium monitoring site): in the literature this value is associated with vehicular traffic emissions. Moreover, NOx and CO are predominant in the first source. The amount of PM10 in the third source, even if low, is 50% higher than in the first source.
These observations suggest that the second source can be identified as a "Particulate source", while the first and third sources can be considered different components of vehicular traffic emissions. In fact, no industrial plants or similar facilities are located close to the sampling sites, and traffic is the most important pollution source of anthropic nature. The two traffic sources might be originated by different kinds of vehicles or engines, for example gasoline and diesel. These different fuels are known to be responsible for different emissions of pollutants. In particular diesel, before the introduction of filters, was the major source of particulate matter among the several fuels used for road transport, with lower emissions of NOx and CO. Considering also the constant increase of the NOx concentration from Summer to Winter for all sites and sources (Tables 3 and 4), the first source can be identified with a mixed source between vehicular traffic and domestic heating, and the third source with vehicular traffic. Another proof linking the first and third sources to vehicular emissions is the daily profile of the bihourly mean concentration contributions of the three sources (Figure 7). Figure 7 clearly shows that the particulate source has a rather constant trend during the day and is uncorrelated with the traffic sources. The other two sources show, instead, a typical traffic profile, with emission peaks at 8 in the morning and 20 in the evening, corresponding to the rush hours of people commuting to and from work. Table 5 shows the coefficients of correlation among the six sites for the three sources in the APCS profiles matrix. According to these data, we can observe that the Particulate source shows high correlation among four sites of different zones (Bari and Province). This supports our hypothesis of a regional character for PM10 concentrations [37]; the Monopoli and San Nicola sites do not show correlation, and this can be explained considering the different nature of these sites: Monopoli is an urban site, while San Nicola is a suburban site bordered by a busy street, with high vehicular traffic peaks during sport events (generally at the weekend).
On the contrary, considering the vehicular traffic sources, it is possible to observe low correlation among the sites, due to the different locations of the sampling sites. Table 6 shows the percentage reconstruction error of the APCS model for each parameter. The error shows high variability over the period. PM10 concentrations show the lowest reconstruction error, while CO concentrations show the largest. The model, in fact, suffers from low robustness when values are low (as is the case for carbon monoxide). Anyway, in most cases the error was acceptable, allowing a fairly good reconstruction of the concentration trends.
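For reference, a common form of the percent relative root mean square error, consistent with the description in the model section below (the normalization by the mean measured concentration is an assumption; reference [49] fixes the exact definition), is

$$\mathrm{RRMSE}_i = 100\,\sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(\frac{X_{it}-X^{r}_{it}}{\bar{X}_i}\right)^{2}}$$

where $X_{it}$ is the measured concentration of parameter $i$ at time $t$, $X^{r}_{it}$ its APCS reconstruction, $\bar{X}_i$ the mean measured concentration and $n$ the number of observations.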
The air quality monitoring network
Bari is a town of about 350000 inhabitants located in the South-East of Italy (latitude 41°08', longitude 16°45'). Its main industrial activities are in the mechanical (carpentry and industrial vehicles), food and clothing sectors; its industrial area, with a thermoelectric power station, is located in the neighbouring towns.
Prevailing winds are from NNW and WNW in December, January and February, from East in March and September, and from NNE and South in October and November. Rainy days number 80-90 per year, with maxima of 40-50 mm. The region is characterized by active photochemistry, mostly in the summer season.
Like many other Italian cities, its urban area is characterized by high motor-vehicle traffic density, mostly in the centre of the city.
The air quality monitoring network of the Bari Municipality is composed of six fixed monitoring stations, a mobile laboratory and a data elaboration centre. The province of Bari, which extends over 3825 km2 and includes 41 towns, has four fixed monitoring stations located in the towns of Casamassima, Altamura, Andria and Monopoli.
In this paper, some stations of the Bari and province monitoring networks have been selected as representative sites of the investigated area. In Bari, the selected monitoring stations are located in a residential area (viale King), in an urban area (viale Archimede) and in a suburban area (S. Nicola sport stadium).
In the province of Bari, the three selected stations are located in the urban and residential areas of the following towns: Altamura (67000 inhabitants), located 47 km south-west of Bari; Andria (98000 inhabitants), 55 km north of Bari; and Monopoli (50000 inhabitants), a coastal town 40 km south of Bari. All the considered sites can be classified as urban background sites, except for Monopoli, which is an urban site, and San Nicola, which is a suburban site bordered by a busy street, with high vehicular traffic peaks during sport events.
The instrumentation
Each station is provided with automatic analysers of CO, nitrogen oxides and the other parameters (Advanced Pollution Instrumentation). Nitrogen oxides, NO and NO2, were analysed using the chemiluminescence method. The measurement of ozone is based upon the capacity of this gas to absorb ultraviolet rays of suitable wavelengths, generated by a built-in lamp. Carbon monoxide is analysed through the absorption of infrared rays (IR).
The measurement of PM10 is based upon the beta ray attenuation method on standard 47 mm membrane filters; the data are collected bihourly. Benzene/Toluene/Xylene are measured using the capillary gas chromatographic technique in the gaseous phase, which enables the rapid separation and identification (15 minutes) of the components of the gas sample.
The data
The data are collected by the system every hour for all parameters, except for PM10, which is collected every two hours. Therefore, all data are considered as two-hour means (at even hours).
In order to simplify the subsequent statistical elaborations, only days with complete data, that is, days with all 12 bihourly means, were included in the data set.
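As a minimal sketch (not the authors' processing chain), the two-hour averaging and the complete-day filter could be implemented as follows, assuming hourly records in a pandas DataFrame:

```python
import pandas as pd

def to_complete_bihourly_days(hourly: pd.DataFrame) -> pd.DataFrame:
    """Aggregate hourly data to bi-hourly means and keep only complete days.

    hourly: DataFrame indexed by timestamp, one column per pollutant.
    A day is complete when all 12 bi-hourly means are present.
    """
    # Two-hour means aligned on even hours (00, 02, ..., 22).
    bihourly = hourly.resample("2h").mean()
    # Count non-missing bi-hourly means per calendar day and parameter.
    counts = bihourly.notna().groupby(bihourly.index.date).sum()
    complete_days = counts[(counts == 12).all(axis=1)].index
    mask = pd.Series(bihourly.index.date, index=bihourly.index).isin(complete_days)
    return bihourly[mask.values]
```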
The data collected by the monitoring network were validated according to the following protocol: a preliminary validation was carried out by the software, which invalidated all data acquired during calibration hours and data identified as artifacts; then, a manual validation was carried out by operators, considering the relations existing among the several parameters: for example, the validation of parameters monitored by the same instrument (i.e. benzene and toluene, or the nitrogen oxides) was carried out simultaneously, as was that of parameters linked to the same hypothetical source (i.e. carbon monoxide and aromatic compounds, typical traffic pollutants). In this way it is possible to verify that critical data relate to real pollution situations and are not artifacts due to instrument malfunction. Moreover, meteorological data (rain, wind speed and direction) were used to investigate the influence of natural events on high or low concentration situations.
The data were collected from January 2005 to May 2007 at the investigated sites.
Table 1 summarizes the basic statistics for each site.
Conclusions
Multivariate statistical techniques such as receptor models offer a valid tool for handling complex data sets and allow the extraction of information not directly inferable from the original data matrix by traditional approaches.
In our case the model suggests that most of the PM10 is not directly linked to vehicular traffic. It is probably due to long- and medium-range transport of PM10 and to the formation of secondary particulate. The model confirms a common regional contribution to PM10 among the sites and the absence of a PM10 seasonal trend.
Even if the model is applied to only a few parameters, it is able to provide information about the nature of the pollution sources. However, for the determination of other important pollution sources, such as domestic heating, it is necessary to measure parameters that allow this source to be identified.
Moreover, the results obtained by the models confirm that the PM10 concentration cannot be considered a good air quality indicator because it does not reflect the real pollution sources.
The model description
The aim of the application of receptor models is the apportionment of the pollutant sources. The two main approaches of receptor models are Chemical Mass Balance (CMB) and multivariate factor analysis (FA). CMB gives the most objective source apportionment and needs only one sample; however, it assumes knowledge of the number of sources and their emission patterns. On the other hand, FA attempts to apportion the sources and to determine their composition on the basis of a series of observations at the receptor site only [40]. Among multivariate techniques, Principal Component Analysis (PCA) is often used as an exploratory tool to identify the major sources of air pollutant emissions [38,[41][42][43]. The great advantage of using PCA as a receptor model is that there is no need for a priori knowledge of emission inventories [44].
PCA is a statistical method that identifies patterns in data, revealing their similarities and differences [45]. PCA creates new variables, the principal component scores (PCS), that are orthogonal and uncorrelated to each other, being linear combinations of the original variables. They are obtained in such a way that the first PC explains the largest fraction of the original data variability, the second PC explains a smaller fraction of the data variance than the first one, and so forth [46][47][48]. Varimax rotation is the most widely employed orthogonal rotation in PCA, because it tends to simplify the unrotated loadings for an easier interpretation of the results. It simplifies the loadings by rigidly rotating the PC axes such that the variable projections (loadings) on each PC tend to be either high or low.
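As a hedged sketch of the procedure described above, the following numpy-only implementation performs PCA on the correlation matrix followed by a varimax rotation; the function name and the regression-based score estimate are illustrative choices, not the paper's exact computation:

```python
import numpy as np

def pca_varimax(X: np.ndarray, n_comp: int, n_iter: int = 100, tol: float = 1e-8):
    """PCA on the correlation matrix followed by varimax rotation.

    X: samples x parameters concentration matrix.
    Returns the scaled data Z, the rotated loadings V (parameters x n_comp)
    and the rotated principal component scores (samples x n_comp).
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)      # autoscaled data
    G = np.corrcoef(Z, rowvar=False)              # correlation matrix
    eigval, eigvec = np.linalg.eigh(G)
    order = np.argsort(eigval)[::-1]              # descending explained variance
    A = eigvec[:, order[:n_comp]] * np.sqrt(eigval[order[:n_comp]])  # loadings

    # Varimax: rigid rotation maximizing the variance of the squared loadings.
    p = A.shape[0]
    R = np.eye(n_comp)
    d_old = 0.0
    for _ in range(n_iter):
        L = A @ R
        u, s, vt = np.linalg.svd(A.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < d_old * (1 + tol):
            break
        d_old = s.sum()
    V = A @ R                                     # rotated loadings
    scores = Z @ V @ np.linalg.inv(V.T @ V)       # regression-based scores
    return Z, V, scores
```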
Moreover, the source profile and contribution matrices can be successfully reconstructed by the APCS (Absolute Principal Component Scores) method [49].
The observed pollutant concentration in the atmosphere at a certain time, $C_i$, can be considered as a linear combination of contributions from p sources:

$$C_i = \sum_{k=1}^{p} a_{ik} S_k$$

where $S_k$ is the contribution from each source and $a_{ik}$ is the fraction of source k contribution possessing property i at the receptor. One of the most used methods to decompose the concentration matrix into the product of the source pattern and contribution matrices is the APCS. The starting point is the matrix X (samples × parameters). In the APCS method the first step is the search for the Eigenvalues and Eigenvectors of the data correlation matrix G. Only the most significant p Eigenvectors (or factors) are taken into account. Generally two methods are used in order to choose the p Eigenvectors: the Kaiser method (PCs with eigenvalues greater than 1) and the ODV80 method (PCs representing at least 80% of the original data variance).
The p Eigenvectors are then rotated by an orthogonal or oblique rotation. The most used rotation algorithm is Varimax, which performs an orthogonal rotation of the loadings. After the rotation all the components should assume positive values; small negative values are set to zero. An abstract image of the source contributions to the samples can be obtained by multivariate linear regression:

$$Z = \mathrm{PCS} \cdot V^{T}$$

where Z is the scaled data matrix, PCS is the principal component scores matrix, and $V^{T}$ is the transposed rotated loadings (Eigenvectors) matrix. In order to pass from the abstract contributions to real ones, a fictitious sample $Z_0$, where all concentrations are zero, is built [43,50]. Details about the method can be found in reference [49]: the APCS matrix can be identified with the estimated contributions matrix $F_r$. A regression on the data matrix X allows the estimated source profiles matrix $A_r$ to be obtained. Finally, the product of the matrices $F_r$ and $A_r$ allows the data matrix $X_r$ (reconstructed data matrix) to be recalculated. The reconstruction percentage error of the model has been calculated as percent relative root mean square errors (RRMSE), as shown in reference [49].
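Building on the `pca_varimax` sketch above, the APCS steps (zeroing small negatives, zero-sample score, absolute scores, regression of concentrations on scores, reconstruction) could be rendered as follows; this is an illustrative reading of the method, not the authors' code:

```python
import numpy as np

def apcs(X: np.ndarray, n_comp: int):
    """Absolute Principal Component Scores source apportionment (sketch).

    X: samples x parameters concentration matrix.
    Returns the estimated contributions F_r, the estimated source
    profiles A_r and the reconstructed data matrix X_r.
    """
    Z, V, _ = pca_varimax(X, n_comp)
    V[V < 0] = 0.0                            # small negatives after rotation set to zero
    scores = Z @ V @ np.linalg.inv(V.T @ V)   # regression-based scores

    # Score of the fictitious sample Z0 where all concentrations are zero.
    z0 = (np.zeros(X.shape[1]) - X.mean(axis=0)) / X.std(axis=0)
    scores0 = z0 @ V @ np.linalg.inv(V.T @ V)

    F_r = scores - scores0                    # absolute principal component scores

    # Least-squares regression of the raw concentrations on the absolute
    # scores yields the source profiles (no intercept term shown here).
    A_r, *_ = np.linalg.lstsq(F_r, X, rcond=None)

    X_r = F_r @ A_r                           # reconstructed data matrix
    return F_r, A_r, X_r
```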
The authors declare no experimental research has been performed on animals or humans in the frame of the research activities related to this paper. No ethics committee exists for this kind of research.
Rare papillary renal neoplasm with reverse polarity: A case report and review of the literature
Papillary renal neoplasm with reverse polarity (PRNRP) is a rare renal tumour and was newly named in 2019. This study reports the case of a 30-year-old female patient with a left renal tumour and no clinical symptoms, in whom a CT scan of the left kidney showed a mass of 2.6 cm×2.3 cm that was considered to be renal clear cell carcinoma. Laparoscopic partial nephrectomy was performed, and histopathology and immunohistochemistry confirmed papillary renal neoplasm with reverse polarity, which has unique clinicopathological features, immunophenotype, KRAS gene mutation and relatively indolent biological behaviour. For newly diagnosed cases, rigorous and regular follow-up is necessary. In addition, a literature review was performed from 1978 to 2022, and 97 cases of papillary renal neoplasm with reverse polarity were identified and analysed.
Introduction
Papillary renal neoplasm with reverse polarity is a newly reported papillary renal tumour that accounts for approximately 4% of all previously diagnosed papillary renal cell carcinomas (PRCC) (1). This tumour was initially classified as papillary renal cell carcinoma, but it has unique morphological features and a better prognosis. In 2019, Al-Obaidy et al. first described papillary renal neoplasm with reverse polarity and proved that it was different from papillary renal cell carcinoma in pathological morphology, immunohistochemistry and chromosomal features (2). Because PRNRP is rare, it is easily misdiagnosed preoperatively as other types of renal tumours. Surgeons often choose the surgical method based on their experience in the treatment of common renal tumours. The key to treatment is to completely remove the tumour. Current data suggest a good prognosis after resection of PRNRP (1,2); however, the long-term outcome is unclear, and regular follow-up is necessary. Here, we report a case of papillary renal neoplasm with reverse polarity and review the relevant literature to further clarify the clinical features, pathology, treatment and prognosis of PRNRP, and to strengthen awareness of this rare disease.
Case presentation
A 30-year-old female patient was admitted to the hospital with a left renal mass found on physical examination. During the course of the disease, there was no low back pain, haematuria, frequent or painful urination, dizziness, palpitations, fever or chills. The patient had not previously received any specific treatment, followed a healthy diet and lifestyle, and had no family history of this or similar diseases. The patient's vital signs were normal. There was no swelling, tenderness or pain induced by tapping over either kidney area. Urological ultrasound showed a moderately echogenic mass of approximately 2.6 cm×2.3 cm in the middle and upper parts of the left kidney, with a clear boundary and regular shape and no blood flow signal (Figure 1). Chest CT showed no abnormalities. Abdominal CT showed a left renal mass, and renal clear cell carcinoma was considered (Figure 2). The preoperative diagnosis was a left renal mass, and the patient underwent laparoscopic partial left nephrectomy. The tumour capsule was intact in the resected specimen, and brown fish-like tissue was observed after a longitudinal incision of the tumour. Histopathological studies of the resected tumour revealed that the tumour was well demarcated and had a complex branched papillary structure with a fibrous vascular axis, and the papillary surface was covered with a monolayer of cuboidal or columnar cells, with eosinophilic cytoplasm and characteristic nuclei located at the top of the cytoplasm away from the basement membrane (Figures 3A, B). Immunohistochemical studies of the tumour showed that the lesion was positive for the expression of GATA3, KRT7, p504s, EMA, PAX-2, PAX-8, SDHB, Ki-67 and TTF-1 and negative for vimentin, CD5, CD10, WT-1, CAIX, TFE-3, HMB-45, CD117, ALK and TG (Figures 3C, D). The histomorphology and immunophenotype were consistent with papillary renal neoplasm with reverse polarity. The patient declined further molecular genetic testing for financial reasons. The patient did not receive any treatment for 7 months after the operation, and there was no recurrence or metastasis.
Systematic review of literature
The PubMed database was searched for case reports and case series of papillary renal cell carcinoma and papillary renal neoplasm with reverse polarity published between 1978 and 2022. Using the following keywords: (oncocytic papillary renal cell carcinoma) or (oncocytic PRCC) or (papillary renal neoplasm with reverse polarity) or (PRNRP), 403 results were retrieved. After removing unrelated studies, 11 publications describing 97 cases were finally identified (Table 1). The review series included 97 patients (56 men and 41 women) with a definite diagnosis of papillary renal tumour with reverse polarity. The evaluation showed that 31 cases of PRNRP occurred in the left kidney and 43 cases occurred in the right kidney. The age of the PRNRP patients ranged from 35 to 82 years, with an average age of 62.2 years. The diameter of the PRNRP lesions ranged from 0.8 to 8.5 cm, with an average diameter of 2.1 cm. Most tumours had no clinical symptoms and were diagnosed incidentally during imaging examination. The World Health Organization (WHO)/International Society of Urological Pathology (ISUP) nuclear grade was low (13), and most of the reported PRNRP cases were staged as pT1. Among them, 52 patients underwent laparoscopic partial nephrectomy, 10 patients underwent laparoscopic radical nephrectomy, and 2 patients underwent renal biopsy, all of which were confirmed as PRNRP by histopathology. Seventy-four of the 97 patients were followed up for 1 month to 222 months, and no tumour recurrence was observed during the follow-up period.
Discussion
Papillary renal neoplasm with reverse polarity is a rare type of renal neoplasm reported recently. In 2003, Allory et al. found that some papillary renal cell carcinomas had a good prognosis and called them "oncocytoid-type papillary renal cell carcinoma" (14). In 2005, Lefevre et al. named the entity "oncocytic papillary renal cell carcinoma", and this term became widely used (15). In 2017, Saleeb et al. classified papillary renal cell carcinoma into 4 types based on immunohistochemical and molecular phenotypes and proposed the term "low-grade eosinophilic papillary renal cell carcinoma, type 4" (16). In 2019, Al-Obaidy et al. named this tumour papillary renal neoplasm with reverse polarity for the first time and proved that it was different from papillary renal cell carcinoma types I and II in terms of pathological morphology, immunophenotype and chromosomal characteristics (2). Subsequently, 89 additional cases of PRNRP were reported; 8 patients without detailed clinical data (1) were therefore not included in Table 1.
According to the study reported by Al-Obaidy et al., the incidence of PRNRP was similar in males and females, and the age ranged from 46 to 80 years, with an average age of 64 years (2). However, our systematic review showed that the incidence of PRNRP was slightly higher in males than in females, with an average age of 62.2 years. In terms of age, the evaluation indicated that the youngest previously reported patient was 35 years old, while the present patient was 30 years old, the youngest identified to date. Previous data indicated that the tumour size was 3.0 cm or less, with an average of 1.6 cm. Our evaluation showed that the average tumour diameter of the 97 patients with PRNRP was 2.1 cm (range: 0.8-8.5 cm). PRNRPs are usually asymptomatic and are often discovered incidentally on imaging. Although our evaluation shows that the majority of PRNRPs are small, as they gradually grow they may compress surrounding organs, impair kidney function, and even rupture and bleed.
PRNRP usually has no specific clinical symptoms, thus posing significant preoperative diagnostic challenges. Imaging examinations do not provide much diagnostic information because of the tumour's rarity. At present, there is a lack of literature on the imaging features of PRNRP, and more data need to be collected and further explored. Histopathology and immunohistochemistry are the gold standard for the diagnosis of PRNRP. Chang et al. proposed the following four diagnostic criteria: (I) mainly protruding thin papillary or tubular papillary growth; (II) focal or diffuse interstitial vitrification; (III) eosinophilic fine granular cytoplasm; (IV) tumour nuclei neatly arranged at the top of the cytoplasm, far away from the basement membrane, showing the characteristic "reverse polarity", with uniform size and low nuclear grade (1). The present patient was a 30-year-old female who was preoperatively considered to have clear cell carcinoma and underwent laparoscopic partial nephrectomy. The postoperative pathological diagnosis was PRNRP, and the clinical stage was pT1. The patient did not receive any further treatment and had no discomfort for 7 months after the operation.
Immunophenotypically, PRCC strongly expresses vimentin and P504S but not GATA3 or 34βE12. PRNRP strongly expresses GATA3 and KRT7, expresses P504S to varying degrees, may express 34βE12, and does not express vimentin. The present tumour strongly expressed KRT7 and partially expressed P504S but did not express CD10, consistent with a PRCC phenotype; unlike other PRCC subtypes, however, it expressed GATA3, which is typical of PRNRP. PRNRP does not express vimentin, CD10, CAIX, CD117, TFE-3 or ALK, which helps in the differential diagnosis from other rare types of renal cell carcinoma, and a low Ki-67 proliferation index suggests a good prognosis. These immunophenotypes are consistent with those reported in the literature (2), supporting the diagnosis of PRNRP in this case. In addition, the 7 cases of PRNRP reported by Zhou et al. were all positive for 34βE12 in addition to the specific expression of GATA3 (4), a finding not recorded in other studies. These results provide new insights into the diagnosis and even the treatment of papillary renal neoplasm with reverse polarity.
Recent studies have shown that PRNRP harbours high-frequency KRAS gene mutations: KRAS mutations were found in 85% of tested cases and were concentrated in codon 12 of exon 2 (3,10). The G12V missense mutation was the hotspot (mutation rate 33.3%-75.0%), followed by G12D (0-30.7%), G12R (3.8%-25.0%) and G12C (0-11.1%); a BRAF V600R mutation was detected in one KRAS wild-type case (3,5,17). Fluorescence in situ hybridization (FISH) analysis showed that 32% (14/44) of papillary renal tumours with reverse polarity had abnormalities of chromosomes 7 and 17, and only 2 cases (2/44) had chromosome Y deletion (10), further proving that the tumour is similar but not identical to classic PRCC. In this study, because the patient declined molecular genetic testing, we could not determine whether she had gene mutations or chromosomal variations. PRNRP therefore has unique clinicopathological features, immunophenotypes and KRAS gene mutations, and it can be clinically differentiated from common papillary renal cell carcinoma, renal papillary adenoma, clear cell papillary renal cell carcinoma and Xp11.2 translocation-associated renal cell carcinoma.

To date, there is no consensus on the optimal treatment strategy for PRNRP. The preferred treatment for any nonmetastatic solid renal mass is surgical resection, preferably by a minimally invasive approach (18). For localized renal tumours, surgical treatment mainly comprises radical nephrectomy (RN) and partial nephrectomy (PN). Nephron-sparing partial nephrectomy is recommended for suitable patients, and a negative surgical margin should be achieved while removing the renal mass. Compared with radical nephrectomy, partial nephrectomy preserves normal renal parenchyma while removing the tumour, reduces the incidence of long-term renal insufficiency and of cardiovascular events, and improves the quality of life of patients with renal tumours (19)(20)(21)(22). Other options for treating renal masses smaller than 3 cm include thermal ablation, cryoablation and radiofrequency ablation. Renal mass biopsy should be performed in all patients receiving these regimens to establish the histological diagnosis and guide subsequent treatment and follow-up; however, patients should be advised that these options carry an increased risk of local recurrence or tumour persistence (18). At present, there are no literature reports on treating PRNRP with ablation, and further studies are needed to verify its effect. Active surveillance is an acceptable option for some patients with renal masses smaller than 2 cm (grade C); it is suitable for elderly patients with serious comorbidities or a short life expectancy, but serial imaging must be performed to monitor changes in tumour size, and patients and their families need to understand the risks of active surveillance. For patients who choose active surveillance, renal mass biopsy is recommended for further risk stratification (18). If the benefit of intervention exceeds that of active surveillance, active treatment should be chosen. Combined with the literature review, PRNRP is usually treated by partial or radical nephrectomy with good results, and urologists can choose the appropriate treatment according to the patient's specific situation and their own clinical experience.
Although PRNRP has a good prognosis, the current data are insufficient to draw conclusions about the long-term efficacy of treatment for this tumour, and regular follow-up is necessary. In our study, the patient had no clinical symptoms, no abnormalities on chest CT, and no surrounding organ infiltration or regional lymph node enlargement on abdominal CT, so a localized renal tumour was diagnosed. After we discussed the treatment plan with the patient, she underwent laparoscopic partial nephrectomy to remove the tumour, with no recurrence at follow-up; continued surveillance with renal ultrasound or CT will be necessary.
Patient perspective
The kidney tumour had caused me great trouble and anxiety, affecting my daily life. After talking to my doctor, I underwent a laparoscopic partial nephrectomy to remove the tumour. When histopathology and immunohistochemistry confirmed PRNRP, my fears and concerns disappeared, and I achieved physical and psychological healing. I consider my treatment successful and will follow the doctor's advice for regular follow-up in the future.
Conclusion
PRNRP is a newly recognized low-grade renal tumour with relatively indolent biological behaviour. Its pathological morphology, immunophenotype and molecular genetic changes differ from those of classical PRCC types 1 and 2. It may be a special subtype of PRCC, one not included in the WHO (2016) classification of renal tumours (23); in the WHO 2022 classification, PRNRP is a provisional subtype of papillary RCC but has not yet been recognized as an independent histological type or subtype (24). Urological surgeons should recognize this rare disease to distinguish it from other renal tumours. Because of its rarity, its pathogenesis and histological origin still require further study, and more cases and follow-up data need to be accumulated to explore its biological behaviour. Distinguishing PRNRP from papillary renal cell carcinoma is therefore of positive clinical significance for targeted therapy.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of Chengdu Second People's Hospital. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. Written informed consent was obtained from the participant/ patient(s) for the publication of this case report.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
|
v3-fos-license
|
2021-08-20T18:26:35.731Z
|
2021-01-01T00:00:00.000
|
240711773
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://iiste.org/Journals/index.php/CER/article/download/56245/58089",
"pdf_hash": "b26f67b45d26d8c88cd6a5321a02ad60b1c0394a",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43414",
"s2fieldsofstudy": [
"Environmental Science",
"Geography",
"Agricultural and Food Sciences",
"Economics"
],
"sha1": "01a1bc9b6dce6b7e4844b427da6270988d6a0fb8",
"year": 2021
}
|
pes2o/s2orc
|
Impacts of Urbanization Rate and Land Cover Change on Urban Farmland: A Case of Wolayta Zone Sodo Zuria Woreda
Human intervention in the agricultural system has contributed to food insecurity and rising crop production costs in the local market, driven by the expansion of urbanization in the study area, which grew by 2.66% over 9 years to meet the demand for residential land. Beyond this simulation result, the impact of deforestation on urban agriculture and livelihoods has not been scientifically studied in most of the southern Ethiopian highlands and their surroundings; in Wolayta Zone Sodo Zuria Woreda in particular, urbanization grew by 0.295% per year while forestland declined by 0.09%, converted to residential land and public service use during the selected study period. This study focused on the rate of urbanization and on land use/land cover change detection analysis, using primary and secondary data sources integrated with Geographic Information Systems (GIS) and remote sensing methods, in order to manage negative impacts on the geospatial environment in the context of the Ethiopian Agricultural Transformation (ATP) goal of sustainable development. Evaluation of image classification accuracy, defined as the process of comparing the classified image with geographic data considered accurate and referential (typically ground truth), was based on supervised classification, with a Consistency Ratio (CR) of 0.016 obtained for the accuracy assessment of the 30 m Landsat 7 satellite image. In general, a set of reference shapefiles covering 7 major land-use classes was overlaid on the classified image, and eigenvector matrices computed by the AHP method from the pairwise comparison matrix were used to produce the best-fit set based on the listed per-class pixel land-use areas.
Introduction
Globally, urbanization is growing; population growth and rural-urban migration are among its main drivers. Urbanization is a man-made phenomenon whose activities bear directly on the environment and natural resources, and human intervention in the natural environment influences ecosystem processes and the rate at which they operate, especially on farmland around urban areas (Angel et al., 2011; Jiang, 2015). One of its main factors is deforestation, the removal of vegetation cover to the extent required for human residence. According to the United Nations Human Settlements Programme, 54.5 per cent of the world's population resided in urban settings in 2016; by 2030, urban areas are projected to accommodate 60 per cent of the world population, and the urban population of developing countries will increase at a rate of about 2 per cent (UN, 2016). In other words, urbanization can be defined as the transformation of forest and urban farmland into residential areas and the shifting of cultivated land to urban use (FAO/UNEP, 1982). However, this definition does not include "logging". More inclusive was Myers's 1980 definition, in which deforestation refers "generally to the destruction of forest cover through clearing for agriculture ... so that not a tree remains, and the land is given over to non-forest purposes, [and where] very heavy and unduly negligent logging results in a decline of biomass and depletion of ecosystem services so severe that the residual forest can no longer qualify as forest in any practical sense". On this basis, the present study focuses on the impact of human activity, through the expansion of urban residential areas, on urban farmland in the case of Wolayta Sodo city, southern Ethiopia. Human beings depend heavily on the natural environment for their livelihoods; in the 21st century, however, environmental variability and climate change have significantly affected the livelihoods of poor and marginal societies in developing countries. For this study, the FAO's latest definitions (1993) are used: FAO (2012) defines forests as "ecosystems with a minimum of 10-20% crown cover of trees and/or bamboo, generally associated with wild flora, wildlife, and natural soil conditions, and not subject to agricultural practices" and deforestation as a "change of land use with a depletion of tree crown cover to less than 10% crown cover". One of the major challenges in the study area is environmental degradation arising from population pressure and the expansion of urban residential land, combined with weak urban agricultural crop production, a lack of market value chains, a low level of integrated urban agricultural technology transfer, and inadequate input supply and marketing systems (Shiferaw & Singh, 2010). This study reflects human intervention in urban agricultural land and its degradation with regard to residential land, and the findings from this site may have wider application to other highland regions of the country.
Beyond this assumption, however, the impact of deforestation on urban agricultural livelihoods has not been scientifically studied in southern Ethiopia and its surroundings, particularly in Wolayta Zone Sodo Zuria Woreda. The study therefore focuses on urban agricultural land-use change based on change detection analysis, using primary and secondary data sources integrated with Geographic Information System (GIS) overlay analysis, to manage negative impacts on the geospatial environment in the context of the Ethiopian ATP 2 goal of sustainable and balanced land-use planning for agricultural economic development and livelihoods in the study area.

2. Ground factors of urbanization in Wolayta Sodo Zuria Woreda

Food, clothing and housing are significant basic needs of mankind, and land is the most expensive resource in most developing parts of Ethiopia; Wolayta is among the most densely populated areas owing to the rapidly growing population in the urban and rural parts of the study area (Barana B., Senbetie T. & Aklilu B., 2016). According to Wolayta Sodo Municipality (2018/19), the right to adequate housing is enshrined in several international human rights instruments and has long been regarded as essential to ensuring the wellbeing and dignity of the human person; housing rights have been included in the most authoritative international statements on human rights, and Ethiopia is a member of these worldwide agreements. Ethiopia has also endorsed domestic legislation intended to respond to the residential land demand of urban residents, notably the contested proclamation no. 721/2011, whose contribution to solving the currently escalating urban housing demand has been examined (Kidus M., Gebrehiwot E. & Kahsay G., 2017). Together with rapid population growth, this demand for housing is assumed to drive urban expansion.
Materials and Method
Wolayta Zone Sodo Zuria Woreda Administration is part of the Southern Nations, Nationalities and Peoples Regional (SNNPR) State, Ethiopia (Figure 3.1). It covers a total area of 16569.06 hectares and is located about 396 km southwest of Addis Ababa. Its geographical extent lies between 746338.79 m and 763697.57 m N and between 351399.84 m and 368966.93 m E, with altitude ranging from 1400 to 2140 m a.m.s.l. (Ethiopian demography and health organization institute, 2018). It comprises kifle ketemas and kebeles. Sodo Zuria Woreda is bounded by "Boloss Sore" to the northwest, "Damote Gale" to the northeast, "Offa" to the southwest and "Humbo" to the south, within Wolayta Zone Administration of Ethiopia; according to the CSA (2007) demographic projection, the population in 2020 is assumed to exceed 350,000.
Materials
Data for this study were collected from local government authorities: the Ethiopian Mapping Authority (EMA) for coordinate transformation, and Wolayta Sodo Municipality for land-use shapefiles of 2010 and 2019. A Landsat 7 (ETM+) satellite image taken on 09 July 2019, path/row 169/55, was obtained from USGS (user registry of Bereket) for cross-checking information. The image was recorded on 6 bands with a spatial resolution of 30 meters, while band 6, the thermal band, has a coarser 60 m resolution; the radiometric resolution of the image was 8 bits, and it was extracted by mask in Spatial Analyst using the study area boundary shapefile (Table 3.1, Data and their sources).

4. Methodology of the research

A Landsat 7 (ETM+) satellite image with a resolution of 30 m, together with GIS datasets (existing land use, forest cover, livelihood data, study area administrative boundaries and village locations), was used for this study. The GIS data cover the period from September 2010 to July 2019, 9 years over which the data were processed for change detection analysis; the main source of the satellite images was the Ethiopian Mapping Authority, for examining spatial relationships concerning urban farmland change.
Data analysis, results and discussions
As scholars such as Wubante et al. (2019) indicated in their study, the demand for land for urbanization is primarily met by converting rural land through expropriation. However, land expropriation adversely affects the previous land users by reducing their crop production and sources of income. The rate of urbanization expansion can be monitored with the help of two Landsat 7 (ETM+) images obtained on different dates (Ramada E., 2003).
Supervised classification
According to Peter E. (2018), in GIS image analysis for change detection the normalized difference vegetation index (NDVI) tool performs image algebra on the red band (band 3 for Landsat 7 ETM+) and the near-infrared band (band 4 for Landsat 7 ETM+) to produce a new single-band layer that shows "greenness" or relative biomass, where a brighter (higher) value indicates a higher percentage of vegetation, healthier vegetation, or plant species differences. The formula for the calculation is NDVI = (IR - R)/(IR + R) (Newlands, Davidson, Zhang, et al., 2014). This study, however, focused mainly on urbanization change rather than NDVI, and used land use/land cover shapefiles for both the 2010 and 2019 images; the main source of ground-truth land cover data was Wolayta Sodo Municipality, and these shapefiles were overlaid on the two images for further analysis. Supervised classification was applied with three algorithms, namely maximum likelihood, Mahalanobis distance and minimum distance; the maximum likelihood algorithm is one of the most widely used in the classification of satellite imagery (Vorovencii, 2005) (Figure 3, reclassified land use/land cover of 2010; source: author, 2019).

According to the Wolayta Sodo Municipality 2010 urban land-use plan, the ground truth of land cover in 2010 was generalized into seven major classes, and the supervised classification shown in the table below consisted of choosing the training sites, performing the classification and evaluating the result with an eigenvector matrix using the AHP approach (Lilles and Kiefer, 1999). During the training phase, 80 training sites were selected from the original detailed GIS land use/land cover shapefile (Wolayta Sodo Municipality, 2018/19), and on-screen extraction of the required land use/land cover polygons was generalized into the 7 major classes. The spectral classes obtained in this way were transferred to the "signature editor" of the classification module of the Image Analysis extension of ArcGIS 10.4.1. After the chosen spectral signatures were edited and checked, signatures belonging to the same spectral class were combined into a single class and used for image classification, and each training field was assigned a number from 1 to 7 representing the land cover classes (see Table 4.1 below).

Per-class accuracy was calculated by dividing the number of pixels correctly classified in each class by the total number of pixels in that class (row total), and overall accuracy by dividing the total number of correctly classified pixels (sum of the major diagonal) by the total number of tested pixels (Lilles & Kiefer, 2000). In this study, a total of 146 points, randomly equalized over each class, were used in assessing the classification accuracy. Given the number of pixels correctly classified out of the 14 pixels belonging to each class in the case of the maximum likelihood algorithm, the users' accuracy for the land classes was computed, and the diagonal eigenvector matrices were used in the AHP analysis, with values in per cent compared against geographic data considered accurate and referential, typically ground truth, based on the supervised classification.
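As a small illustration of the NDVI computation just described (red is band 3 and near-infrared is band 4 for Landsat 7 ETM+), the following Python sketch applies the formula to NumPy arrays; the arrays and values are synthetic stand-ins, not the study's data.

```python
import numpy as np

def ndvi(nir, red):
    """Compute NDVI = (IR - R) / (IR + R) pixel by pixel.

    nir, red: 2-D arrays of DN or reflectance values
    (band 4 and band 3 of a Landsat 7 ETM+ scene).
    """
    nir = nir.astype("float64")
    red = red.astype("float64")
    denom = nir + red
    with np.errstate(divide="ignore", invalid="ignore"):
        out = (nir - red) / denom
    # Fill pixels where both bands are zero (e.g. no-data areas) with 0.
    return np.where(denom == 0, 0.0, out)

# Tiny synthetic 2x2 scene: higher NIR relative to red means greener pixels.
red_band = np.array([[30, 60], [90, 120]])
nir_band = np.array([[90, 80], [95, 110]])
print(ndvi(nir_band, red_band))   # values near +1 indicate dense vegetation
```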
In general, a set of reference shapefiles covering 7 major land-use classes was overlaid on the classified image. The relationship between the two images is expressed in the error matrix, also known as the confusion matrix or contingency table, evaluated here using the Analytic Hierarchy Process (AHP).
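To make the error-matrix bookkeeping concrete, here is a minimal Python sketch of how a confusion matrix yields the per-class and overall accuracies defined above; the 3-class matrix is invented for illustration and is not the study's 7-class table.

```python
import numpy as np

# Rows = reference (ground truth) classes, columns = classified classes.
# A hypothetical 3-class error matrix of pixel counts.
error_matrix = np.array([
    [14,  1,  0],
    [ 2, 12,  1],
    [ 0,  2, 13],
])

correct = np.diag(error_matrix)                       # correctly classified pixels
producers_acc = correct / error_matrix.sum(axis=1)    # per reference class (row totals)
users_acc = correct / error_matrix.sum(axis=0)        # per classified class (column totals)
overall_acc = correct.sum() / error_matrix.sum()      # sum of diagonal / total tested pixels

print(producers_acc, users_acc, overall_acc)
```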
Analytic Hierarchical Process
Evaluation of classification accuracy can be defined as the process of comparing the classified image with geographic data considered accurate and referential, typically ground truth based on supervised classification. In general, a set of reference shapefiles covering 7 major land-use classes was overlaid on the classified image, and the relationship between the two images is expressed in the error matrix (confusion matrix or contingency table) evaluated with the Analytic Hierarchy Process (AHP) (Saaty, T., 2004). The AHP method employs an eigenvector matrix for the accuracy of principal component assessment (PCA), together with an underlying scale with values from 1 to 9 to rate the relative preferences between the two supervised classifications of the image. The computed eigenvector matrix, an output of the pairwise comparison matrix, produces the best-fit set based on the per-class pixel areas listed (Table 4.1, column 9, land-use area in %), calculated from the GIS attribute table (Table 4.1) above (Eastman, J. R., 1997). The number of rows and columns in the error matrix must equal the number of categories whose precision is being evaluated, and the diagonal sums for each land-use class should total 100 (Saaty, T., 2004). The consistency ratio (CR) indicates the probability that the matrix ratings were generated randomly; as a rule, a matrix with a CR larger than 0.10 should be re-evaluated, and a CR of 0.09 indicates a reasonable level of consistency in the pairwise comparisons (Saaty, T., 2004). The critical ratio of the calculated eigenvector matrix (EVM) for the PCA of this study was 0.08 (see Table 4.3 above, row 13), which is acceptable.
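The consistency check used here can be reproduced as follows: take the principal eigenvalue λmax of the pairwise comparison matrix, form the consistency index CI = (λmax − n)/(n − 1), and divide by Saaty's random index RI for size n to get CR, accepting values below 0.10. The sketch below uses a made-up 3×3 matrix rather than the study's 7-class comparisons.

```python
import numpy as np

# Saaty's random consistency index, indexed by matrix size n.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights_and_cr(A):
    """Return priority weights and consistency ratio for a pairwise matrix A."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                 # normalized priority vector
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)              # consistency index
    return w, ci / RI[n]                         # CR = CI / RI

# Hypothetical pairwise comparisons among three land-use classes.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights_and_cr(A)
print(w, cr)   # a CR well below 0.10 means acceptably consistent judgments
```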
Analysis of overall accuracy assessment for supervised classification
After applying the AHP algorithms, two thematic images were obtained, showing the 7 land cover categories considered representative of the study area (Figs. 4.3 and 4.4). The supervised classification method, applied with the help of the land use/land cover geospatial data provided by the city, led to the results highlighted in the error matrix (Djenaliev, A. K., 2007). Thus, in the case of maximum likelihood, the overall classification accuracy assessment obtained for the 2010 Landsat 7 (ETM+) satellite image, and the AHP eigenvector matrices of developed land use for 2019 with producers' accuracy assessment followed by correlation, are summarized in Table 4.6 below, with an acceptable standard consistency ratio of 0.08 (for example, the proposed industry park class scored 92.89, with a CR of 0.017). The basic problem with principal component assessment in transformation is that it estimates the distribution matrix of multivariate random variables and derives the eigenvalues and eigenvectors of that matrix based only on band correlations; the original data are then linearly transformed using the eigenvectors as the transform matrix, without considering ground land-use geometry (He D., Cai J., Zhou J., Wang Z., 2008). This study therefore used the AHP eigenvector matrix algorithm rather than PCA for the evaluation of classification accuracy, defined as the process of comparing the classified image with ground truth for environmental impact assessment.
Results and discussion
The results of the LULC classification for the 2010-2019 time series are presented in Tables 4.3 and 4.4. Based on this classification, the AHP-based eigenvector error matrix for accuracy assessment yielded a CR of 0.087 for the 2010 Landsat 7 (ETM+) image of the study area and 0.017 for the 2019 image classification; according to Saaty (2004) and Haas & Meixner (2016), a CR should be less than 0.10 to be acceptable. The changes in the urbanizing area between the two satellite images, 9 years apart, were computed in GIS using Arc Toolbox > Spatial Analyst Tools > Math > Minus with the classified layers as input rasters. The resulting values range from +6 to -4 and correspond to the class values of the original colour map (Fig. 4.4): following Peter E. (2018), a +6 indicates a very fast, 90 to 100% spatial increase in urbanization between the years, a 0 equals no change, and a -4 indicates a 90 to 100% loss in the area. This is a very significant result for Wolayta Sodo city, and it was finally reclassified based on yearly sprawl data (see Fig. 5.1, change detection outcome of the study; source: author, 2019). As the analysis indicates, +6 represents a 90 to 100% increase in urbanization land cover over the 9 years, 0 to 1 equals no change, and -4 indicates a 90 to 100% loss of forest, farmland, bare land and uncultivated land. Given this very significant result for the study area, the research can relate tree loss or gain to processes associated with urban development using the rate-of-change formula, and the outcome was reclassified spatially into 5 ranking scores (Haas, R. & Meixner, N., 2016), where 'r' is the mean annual rate of LULC change for a given study period, L1 and L2 are the LULC areas (ha) in 2010 and 2019, respectively (Tables 4.3 and 4.4), and 't' is the interval between 2010 and 2019 (9 years).
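The rate-of-change expression itself did not survive extraction here; a common form consistent with the variables the authors define (and with the FAO sources they cite) is the compound formula r = (1/t) × ln(L2/L1) × 100, sketched below with invented area figures. This is an assumption about the lost equation, not a quotation of it.

```python
import math

def annual_rate_of_change(l1, l2, t):
    """Mean annual rate of LULC change, in percent per year.

    l1, l2: class areas (ha) at the start and end of the period;
    t: interval in years. Uses the compound (logarithmic) form
    r = (1/t) * ln(L2/L1) * 100, one common convention; the paper's
    exact formula was lost in extraction.
    """
    return (1.0 / t) * math.log(l2 / l1) * 100.0

# Hypothetical example: a class growing from 1000 ha to 1100 ha over 9 years.
print(annual_rate_of_change(1000.0, 1100.0, 9))   # about 1.06 % per year
```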
Conclusion
As different scholars have investigated (Alemayehu A., Assefa A., Asmamaw L. & Diogenes L., 2016), change should be analysed through each spatially related sprawl, and this study likewise identified changes in land use/land cover and their relationship with the growing sprawl of urbanization in Wolayta Sodo city (southwestern Ethiopia) for the period between 2010 and 2019. The study showed that urbanization increased by approximately 2.66% over the 9 years, or 0.295% per year, while forestland declined by about 0.09%, the land having been converted to public service use during the study period; urban farmland became bare land at a rate of 6.7%, estimated as a loss of crop production in domestic agricultural livelihoods, and 3.2% of the area has been proposed for industrial zoning by the municipality. As can be seen from the table above, the study's urbanization rate of 2.66% compares with the roughly 2% rate obtained from the equation adopted above (Ningal et al., 2008; FAO, 2012). According to the 1999/2000 agricultural sample survey conducted by the Central Statistical Authority (CSA), the average farm size for the country as a whole is 0.97 hectare, whereas 29 per cent of farming households have half a hectare of land or less (CSA, 2007). In the areas of the southern highlands of Ethiopia,
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2002-06-30T00:00:00.000
|
14915693
|
{
"extfieldsofstudy": [
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://angeo.copernicus.org/articles/20/795/2002/angeo-20-795-2002.pdf",
"pdf_hash": "ec5d76e122380c95d5540aad327d82c75feaadfc",
"pdf_src": "CiteSeerX",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43416",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"sha1": "ec5d76e122380c95d5540aad327d82c75feaadfc",
"year": 2002
}
|
pes2o/s2orc
|
Annales Geophysicae (2002) 20: 795–805 © European Geophysical Society 2002
Simultaneous all-sky camera and HF radar observations of the visual and E-region radar aurora in the westward electrojet suggest a close relationship between a pair of parallel east-west-aligned auroral arcs, separated by 30 km, and a region of strong radar backscatter. Poleward of this a broader region of radar backscatter is observed, though the spectral characteristics of the echoes in these two regions differ considerably. We suggest that the visual aurorae and their radar counterparts are produced in a region of upward field-aligned current (FAC), whereas the backscatter poleward of this is associated with downward FAC. Relatively low electric fields (~10 mV m−1) are observed in the vicinity of the arc system, suggesting that in this case, two-stream waves are not directly generated through the electrodynamics of the arc. Rather, the generation of irregularities is most probably associated with the gradient drift instability operating within horizontal electron density gradients produced by the filamentary nature of the arc FAC system. The observation of high Doppler shift echoes superimposed on slow background flow within the region of backscatter poleward of the visual aurora is argued to be consistent with previous suggestions that the ion-acoustic instability threshold is reduced in the presence of upwelling thermal electrons carrying downward FAC.
Introduction
Considerable work was conducted in the 1970s concerning the relationship between optical auroral arcs and VHF coherent radar backscatter echoes from the E-region (Balsley et al., 1973; Greenwald et al., 1973, 1975; Romick et al., 1974; Tsunoda et al., 1974, 1976; Hunsucker et al., 1975; Tsunoda and Fremouw, 1976a, b). In general, it was found that backscatter was observed adjacent to auroral arcs as opposed to the two phenomena being co-located. Two mechanisms for the generation of metre-scale ionospheric irregularities (the targets from which VHF coherent radars scatter) were suggested to explain this interrelationship. In the first, it was the electron density gradients at the edges of auroral arcs that provided the seed for irregularity formation by the gradient drift instability (Balsley et al., 1973; Tsunoda et al., 1974). The growth rate of this instability becomes positive when the local electron density gradient has a component parallel to the background electric field. In the eastward electrojet the convection electric field is directed poleward and hence, it is the electron density gradient at the equatorward edge of the arc that is most favorable for the generation of irregularities, and so it is here that backscatter should be observed. This situation is reversed in the westward electrojet.
The second theory suggested instead that the irregularities are generated in the electrojet region where the current is greatest, and that this region is found adjacent to, but not co-located with, visual aurora (Greenwald et al., 1973). One conclusion of this work was that the most intense current was located in a region of relatively low electron density (low conductivity) compared with that expected to be associated with the adjacent auroral precipitation, and hence, the electric field in this region must be enhanced. In the observations of Greenwald et al., the eastward electrojet was found equatorward of the visual aurora. Perhaps the discrepancy between these two generation scenarios can be understood in terms of the distinction made by Tsunoda et al. (1974) between diffuse and discrete radar echoes; the former is perhaps more associated with the electrojet and the latter with discrete visual aurora.
Although there has been considerable interest in E-region echoes observed by HF radars, i.e. backscatter from irregularities of decametre scales (e.g. much work in the 1960s, such as Bates, 1965, and more recently, Villain et al., 1987, 1990; Hanuise et al., 1991; Milan and Lester, 1998, 1999, 2001), it is only in the last few years that the subject of their interrelation with auroral arcs has been revisited (Milan et al., 2000, 2001). The advantage of more modern radar systems, such as SuperDARN (Super Dual Auroral Radar Network, Greenwald et al., 1995), over previous apparatus is that in addition to the measurement of solely backscatter power, Doppler velocity, spectral width, and to some extent altitude information are also available. This allows for the echo characteristics to be defined much more accurately, and hence, hopefully different generation mechanisms can be distinguished. Milan et al. (2000) demonstrated that indeed auroral arcs and E-region HF radar backscatter features appeared related, though unfortunately in their study, the viewing geometry between the radar and the all-sky camera was not optimal and did not allow for the exact interlocation to be determined. A superior viewing geometry allowed Milan et al. (2001) to suggest that backscatter may appear between pairs of auroral arcs (in that case separated by 200 to 300 km), or rather to be bounded at one edge by one arc and then sometime later to be bounded at the other edge by another arc. The present study reports on subsequent observations which demonstrate a remarkable correspondence between closely-spaced (∼30 km) arc pairs and HF radar E-region backscatter.
Experimental arrangement
The observations to be presented in this study were collected on 23 September 2000, by an all-sky camera (ASC) and HF radar located in Iceland. To allow for an accurate comparison of the radar and optical observations, it is essential that we are confident of the calibration and mapping of our two systems, which will form the focus of this section.
The ASC, situated at Tjörnes (66.20° N, 342.90° E) as shown in Fig. 1, comprised a video camera which recorded images to videotape at a rate of 24 frames per second. For the purposes of the present study, snapshot images were digitized from this videotape once every 30 s. Due to the nature of the automatic gain control of the camera and the digitization process, no absolute brightness measurements are available. Figures 2a and 2b show two ASC snapshots from 02:56 and 03:40 UT, digitized and presented using a colour scale (arbitrary units), where the brightest features appear red. Bright spots within the images are heavenly bodies, with Jupiter being the brightest feature in the southern portion of the field-of-view in both images. Comparison of the locations of these features with star maps allowed the alignment of the camera to be accurately determined. The centre of the image (the zenith of the field-of-view) could then be identified, as could the direction of geographic north (shown in the top left corner of Fig. 2a). The image was then rotated so as to be aligned with the local geomagnetic meridian (solid vertical line); this meridian is employed later in the study to form keograms. Dotted circles show zenith angle loci in steps of 15°. The presence of hills and buildings in the vicinity of the ASC means that the viewing horizon occurs at a zenith angle between approximately 60° and 75°, represented by an irregular solid line. A line along the top of each image occurs at the edge of the video picture recorded on the videotape. In this study, we assume that the altitude of the main optical emission is 110 km. The field-of-view of the ASC is shown in Fig. 1, with concentric circles marking the 30°, 60°, and 75° zenith angle loci, projected to an altitude of 110 km, along with the projection of the geomagnetic meridian. Figure 3 shows the way in which this mapping is achieved.
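The projection of a zenith angle to a ground position at an assumed 110 km emission altitude (Fig. 3) reduces to a triangle between the Earth's centre, the camera and the emission point. The sketch below is an illustrative reconstruction of that geometry, not the authors' code, and compares it with the flat-Earth shortcut d = h·tan(θ).

```python
import math

R_E = 6371.0   # mean Earth radius, km

def ground_distance(zenith_deg, h_km=110.0):
    """Great-circle distance (km) from the camera to the point below an
    emission seen at the given zenith angle, assuming the emission lies
    at altitude h_km (110 km is the standard assumption in the text)."""
    z = math.radians(zenith_deg)
    # Triangle: Earth centre, observer (radius R_E), emission (R_E + h).
    alpha = math.asin(R_E * math.sin(z) / (R_E + h_km))  # angle at the emission point
    central = z - alpha                                   # angle at the Earth's centre
    return R_E * central

# Spherical mapping vs. the flat-Earth approximation d = h * tan(zenith).
for z in (15, 30, 45, 60, 75):
    flat = 110.0 * math.tan(math.radians(z))
    print(z, round(ground_distance(z), 1), round(flat, 1))
```

The two agree closely near zenith and diverge towards the horizon, which is why off-zenith locations carry the ambiguity the authors discuss.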
The radar measurements were made by the Iceland East SuperDARN radar located at Pykkvibaer (63.77° N, 339.46° E), shown in Fig. 1. The radar sounds along sixteen different beam directions separated by 3.24° in azimuth, with the radar boresite pointing at an azimuth of 30° east of north. SuperDARN radar returns are gated into 75 range bins, usually of 45 km in length, with a range of 180 km to the first gate, giving a maximum range of 3555 km. However, E-region backscatter can only be identified unambiguously in the near ranges of the radar field-of-view, and hence, the "myopic" radar scan mode, designed specifically for studying E-region echoes, reduces the length of the 75 range gates to 15 km each, increasing the spatial resolution in the "usable" portion of the field-of-view. The maximum range of this mode is 1305 km. This reduced field-of-view is shown in Fig. 1, which also illustrates the ranges which correspond to range gates 0, 15, 30, etc. From the slant range r of each echo (time of flight), the ground range is determined assuming straight-line propagation to an altitude of 110 km (Fig. 3), a standard SuperDARN assumption for E-region backscatter. In the present study, the 16 beams are scanned clockwise from beam 0 to beam 15, with a dwell time of 3 s each, with a full scan of the field-of-view being completed every 49 to 50 s. The radar operates at a frequency of 10.5 MHz, corresponding to a wavelength λ ≈ 28.6 m. Bragg scatter is then observed from irregularities with a wavelength of 14.3 m. In each radar cell, the spectral characteristics of the backscatter (power, mean Doppler shift and spectral width) are determined directly from a 17-lag complex auto-correlation function: the spectral width is a measure of the decorrelation time of the ACF, while the velocity is obtained from the rate of change of phase with lag; the reader is directed to Hanuise et al. (1993) for a full description of the ACF analysis technique.
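The slant-range-to-ground-range mapping described above (straight-line propagation to scatter at an assumed 110 km altitude) solves the same Earth-centred triangle the other way round, and the quoted 14.3 m irregularity scale follows from the Bragg condition λ_irr = λ_radar/2 at 10.5 MHz. The following sketch is a plausible reconstruction under those stated assumptions, not SuperDARN's actual range-finding code.

```python
import math

R_E = 6371.0       # mean Earth radius, km
C = 299_792.458    # speed of light, km/s

def ground_range(slant_km, h_km=110.0):
    """Ground range (km) to a radar echo, assuming straight-line
    propagation and scatter at altitude h_km (the standard 110 km
    E-region assumption in the text)."""
    # Elevation angle from the law of cosines in the Earth-centre triangle.
    sin_el = ((R_E + h_km)**2 - R_E**2 - slant_km**2) / (2 * R_E * slant_km)
    el = math.asin(sin_el)
    # Central angle from the law of sines, then arc length along the ground.
    gamma = math.asin(slant_km * math.cos(el) / (R_E + h_km))
    return R_E * gamma

# Bragg condition: scatter comes from irregularities at half the radar wavelength.
f_mhz = 10.5
wavelength_m = C * 1e3 / (f_mhz * 1e6)   # ~28.6 m at 10.5 MHz
print(wavelength_m / 2)                  # ~14.3 m irregularity scale
print(ground_range(180.0))               # ground range for the 180 km first gate
```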
Backscatter from field-aligned irregularities can only be observed where the radar wave vector is perpendicular to the local magnetic field, known as the orthogonality condition. At auroral latitudes, radars operating at VHF frequencies (frequencies at which the radar beam is not significantly refracted by the ionosphere) can only achieve the orthogonality condition near the peak of the E-region, over a relatively narrow latitudinal range, and only with a poleward-directed radar looking close to the horizon. HF radars, on the other hand, exploit the refractive nature of HF radio-wave propagation in the ionosphere to achieve orthogonality over a range of altitudes both in the E- and F-regions (see, for instance, Milan et al., 1997b). Although our range-determination algorithm assumes that backscatter originates at an altitude of 110 km, more accurate estimates of the altitude of each echo can be found from a knowledge of the slant range to the backscatter volume r and a measurement of the elevation angle of the radar returns (Fig. 3), determined using the interferometric technique described by Milan et al. (1997a, 2001). Altitude estimates made in this way will be presented later in this study, which show that the radar backscatter originates predominantly from the altitude range 100 to 140 km. We note that refraction of the radar beam is not taken into account in these altitude estimates, i.e. we assume straight-line propagation. In addition, calibration of the radar interferometer is somewhat tricky and so a small systematic error in the altitude estimates may be expected, but we are confident that differences in altitude between different regions of backscatter are real.
As discussed at length in our previous studies of the relationship between radar and visual auroral forms (Milan et al., 2000, 2001), a fundamental limitation of ground-based observations of the aurora is the ambiguity in determining the location of an optical feature without a detailed knowledge of the altitude profile of the emission intensity. Only auroral features situated directly above the observing site, i.e. at the zenith of the ASC field-of-view, can be located with certainty. Another inherent limitation is that as the emission generally occurs over a range of altitudes, optical features off-zenith may appear somewhat smeared out as the "top" and "bottom" of the aurora will appear at different zenith angles, i.e. the well-defined lower border of the aurora appears at higher zenith angles and the more tenuous higher altitude emissions appear at lower zenith angles (see Fig. 3). Again, this is less of a problem above the ASC. We have the option of two different strategies for comparing the radar and optical observations. In the first, the ASC observations are projected to a geographic frame, employing an assumed emission altitude, to compare with the radar measurements. Alternatively, the radar observations are projected onto the ASC field-of-view and compared directly with the optical images, though again, an optical emission altitude must be assumed to determine the radar mapping. For the present study, we have elected to employ the latter technique. Figures 2c-2f illustrate the mapping of the radar field-of-view onto that of the ASC, assuming an optical emission altitude of 110 km; here, we show the same field-of-view grid as seen in Fig. 1. The geomagnetic meridian of the ASC field-of-view runs from beam 0, gate 30, through beam 7, gate 10 at zenith, to beam 15, gate 5. In the following analysis, we employ this meridian to produce "radar keograms" for direct comparison with optical keograms.
Observations
In Fig. 1, the radar backscatter power (signal-to-noise ratio) recorded during the scan starting 02:55:43 UT on 23 September 2000 is colour-coded. Backscatter is only shown where the SNR of the echoes was sufficient to be able to calculate the spectral parameters of Doppler velocity and width. Running east-west over the location of Tjörnes is a high backscatter power feature, which is of the order of 2-3 radar gates in latitudinal width. The location of the statistical auroral oval (Holzworth and Meng, 1975) is also shown in Fig. 1, and it is found that the enhancement in the backscatter power is closely aligned along this, though the statistical oval is much broader. This study will concentrate on the behaviour of this "radar arc" and its relationship to simultaneously observed optical features. Other backscatter features are observed poleward of this location, and these will be discussed as well.
Figure 2 shows a direct comparison of visual and radar aurora observed at 02:56 UT (the same time as Fig. 1) and 03:40 UT. In both all-sky camera images (Figs. 2a and 2b), a pair of parallel, approximately east-west-aligned arcs are present near the zenith of the camera, with a separation of the order of 20-30 km. Such arc pairs are characteristic of our optical observations throughout the night of 22-23 September 2000. At times, such as Fig. 2b, considerable additional structure is apparent, though on a gross scale it is the arc pair that dominates. Turning to the radar observations in Fig. 2c, the radar arc described above is clearly seen to be co-located with the visual arc pair, with the power maximum appearing to fall in the gap between the two auroral arcs. The line-of-sight Doppler velocity indicated in Fig. 2e clearly distinguishes between the backscatter associated with the arc pair, with velocities of the order of −100 to −200 m s−1 (negative line-of-sight velocities represent Doppler shifts away from the radar), and backscatter poleward and equatorward of the radar arc in which, in general, velocities smaller than −100 m s−1 are measured. In the second example shown, Figs. 2b, 2d, and 2f, the relationship between the visual and radar signatures appears less clear. However, as will be seen below, backscatter features are still distinguishable that relate specifically to the auroral arc pair.

At the start of the interval no significant optical features are observed. However, at 02:08 UT, a feature appears poleward of Tjörnes at a zenith angle near 60°. Inspection of the individual ASC images shows that this is an east-west-aligned optical arc. For the next 20 min or so, this feature drifts equatorwards, reaching the ASC zenith at approximately 02:30 UT, before moving just south of the zenith. At this time, 02:33 UT, the arc brightens and a second feature appears adjacent to and polewards of it, forming an arc pair as described above. This arc pair continues to straddle the ASC zenith for the next 30 min until 03:03 UT. During this time each arc varied individually in brightness, most noticeably between 02:45 and 02:51 UT when the poleward-most arc faded considerably. At 03:03 UT, the auroral intensity increased and there was a rapid poleward motion of the poleward-most arc until 03:08 UT, when the intensity dropped and only a single arc remained. Between 03:12 and 03:16 UT, no auroral emission was observed. An auroral arc became apparent once again at 03:16 UT, poleward of zenith. This subsequently drifted equatorwards, across the ASC zenith at 03:30 UT, to the south of zenith, reaching its southern-most position near 04:00 UT. After 03:25 UT, it is clear that this auroral feature had bifurcated into an arc pair, though there is also considerable additional structure within this feature, as suggested above in relation to Fig. 2b.
Careful examination of the four radar keograms in Fig. 4 indicates that there is a backscatter feature which can be directly associated with the optical auroral arcs observed by the ASC. This feature is first observed near 02:15 UT at a zenith angle of 30-40°. It subsequently moves equatorwards, reaching the ASC zenith at 02:30 UT. At this time, it increases in backscatter power, and remains near-zenith until 03:03 UT when it disappears. Comparison of Figs. 1 and 2 reveals that this is the same radar arc located at the latitude of Tjörnes (the zenith of the ASC) at ∼02:56 UT. This backscatter feature is observed once more at 03:17 UT, at a zenith angle of 30°. Finally it drifts equatorwards, crossing the zenith near 03:30 UT, reaching its most southerly position at 04:00 UT, when it once more fades from view. We can characterize the echoes associated with the radar arc as follows. The echoes are narrow, with spectral widths, in general, below 100 m s−1, with line-of-sight velocities which vary between −50 and −100 m s−1. In addition, the echoes appear to originate from an altitude between 100 and 110 km. We also note that the backscatter power within this feature appears to be greatest when two parallel optical arcs are present, for instance, between 02:32 and 03:03 UT. There is also a suggestion that the line-of-sight velocity is reduced when the arc pair appears: the line-of-sight velocity is greatest before 02:33 UT, before the second arc appears, and between 02:47 and 02:51 UT when the poleward arc fades. As will be demonstrated below, the direction of electron drift deduced from the Doppler shifts is directed predominantly eastward, placing our observations in the westward electrojet, consistent with the location of the radar in the early dawn sector.
A direct comparison of the relative locations of the optical and radar arcs is made in Fig. 5. Here, the optical keogram, somewhat smoothed to allow for contouring, has been superimposed over the radar power keogram. Two main conclusions are reached. When only a single auroral arc is present (approx. at 02:10-02:30 UT, 02:40-02:50 UT, and 03:20-03:30 UT), the radar arc lies at the poleward edge of the optical luminosity. However, when an arc pair is observed (for instance, at 02:30-02:40 UT and 02:50-03:04 UT), the centre of the radar arc lies between the two peaks in luminosity.
Additional regions of backscatter, other than the radar arc, are also observed throughout this period, especially northward of the radar arc, but to the south as well (Figs. 2 and 4). This can also be seen in Fig. 6, which shows the backscatter power and Doppler shift observed during four radar scans from the 8 min period following 02:49 UT; these scans are shown since they are typical of the radar observations throughout the interval of study. Figure 6 shows several interesting features. The radar arc is clearly apparent in the backscatter power measurements of each scan, being in general the strongest feature observed (near gate 5 in beam 0). Sporadic backscatter is seen at lower latitudes. Other L-shell aligned backscatter features are seen at higher latitudes in the more westerly beams, approximately beams 0 to 10. These features occur in a region where no auroral luminosity is observed. Ground clutter echoes, identified by virtue of their characteristic low Doppler shift and spectral width (see below), are observed in the more easterly beams, indicated in grey in the velocity panels of Fig. 6. As discussed in association with Fig. 2, the Doppler shift measurements reveal that the line-of-sight velocities observed within the radar arc are considerably different from those observed outside it, i.e. a distinct flow signature exists within the arc. This flow signature will be investigated in more detail below. Finally, the echoes observed poleward of the radar arc typically have relatively low Doppler shifts (|v_los| < 150 m s−1), but small regions of echoes are observed in each scan which have considerably higher Doppler shifts (|v_los| > 150 m s−1), greater than the typical ion-acoustic speed, seen as irregular patches of red. These patches become more evident after 03:35 UT, when such echoes are seen consistently in Fig. 4b, where again, they appear superimposed on a background of lower Doppler shift echoes. These observations are similar to those described previously by Milan et al. (2001).
The spectral characteristics of the backscatter, as a whole, are examined more quantitatively in Fig. 7. In this way, we hope to determine whether the spectral characteristics of the radar arc are materially different from those of the other E-region echoes, so as to give a clue to their generating mechanism. In Fig. 7, three panels show the backscatter (a) power, (b) spectral width and (c) altitude as a function of Doppler shift, for all echoes observed between 02:00 and 04:10 UT; the contours indicate the density of data points on a logarithmic scale; approx. 54 000 echoes were observed in total during this period. Plots of this nature have been discussed at length in Milan and Lester (1999, 2001) and Milan et al. (2001), and the backscatter characteristics presented here are consistent with these previous studies. Three main populations of echoes are observed: a high Doppler shift population (|v_los| > 350 m s−1), which originates at altitudes between 100 and 160 km; a low Doppler shift population (|v_los| < 300 m s−1) from altitudes between 80 and 150 km; and a very low Doppler shift (|v_los| < 50 m s−1) and very low spectral width (Δv < 30 m s−1) population which originates at altitudes above 180 km. The latter population is the ground clutter described above and of little interest to the present study. This ground clutter appears to originate at high altitude as it necessarily violates our straight-line propagation assumption. The high Doppler shift population comprises the sporadic echoes observed poleward of the radar arc, as mentioned above; these will be discussed more fully below. Of most immediate interest is the low Doppler shift population, some of the echoes of which comprise the radar arc. To investigate the characteristics in more detail, we concentrate on the interval 02:30 to 03:03 UT when the radar arc remained almost stationary near the zenith of the keogram meridian. During this interval we have identified all echoes associated with the arc, and plotted their characteristics in a similar manner. For comparison, all other echoes from this interval are shown in Figs. 8a-8c (ignoring echoes with Doppler shifts in excess of −300 m s−1). Again, a contribution from ground clutter is evident in this figure, especially in panel (c). Differences are observed between the radar arc and the surrounding backscatter, which can be summarized as follows: a slightly broader range of backscatter powers is observed in the radar arc, extending up to 30 dB, as opposed to 25 dB; the Doppler shifts are concentrated between −50 and −200 m s−1, as opposed to +50 and −200 m s−1; the spectral widths tend to be lower, Δv < 100 m s−1, whereas, in general, Δv ranges up to 300 m s−1. In addition, the range of altitudes from which the radar arc appears to originate is narrow, 90 to 110 km, in comparison to the surrounding backscatter.
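Because the three populations are separated by simple thresholds on Doppler shift, spectral width and altitude, the sorting described above can be expressed as a small rule-based classifier. The thresholds below are taken from the text; the function and its inputs are otherwise hypothetical.

```python
def classify_echo(v_los, width, altitude_km):
    """Sort a backscatter echo into the populations described in the text.

    v_los: line-of-sight Doppler velocity (m/s), width: spectral width (m/s),
    altitude_km: interferometer altitude estimate (km).
    """
    if abs(v_los) < 50 and width < 30 and altitude_km > 180:
        return "ground clutter"
    if abs(v_los) > 350:
        return "high Doppler shift (sporadic echoes poleward of the arc)"
    if abs(v_los) < 300:
        return "low Doppler shift (includes the radar arc)"
    return "intermediate / unclassified"

print(classify_echo(-20, 15, 200))    # ground clutter
print(classify_echo(-420, 120, 120))  # high Doppler shift population
print(classify_echo(-120, 80, 105))   # low Doppler shift population
```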
We can also deduce that the convection flow associated with the radar arc is somewhat different from that observed just poleward and equatorward of it, as suggested before. Applying a beam-swinging analysis to the Doppler shifts allows for the direction and magnitude of the plasma drift within a population of backscatter echoes to be inferred, as discussed in some detail in Milan and Lester (2001) and Milan et al. (2001). This analysis is shown in Fig. 9 for backscatter within the radar arc (Fig. 9b) and backscatter polewards and equatorwards of it (Fig. 9a) for the interval 02:30 to 03:03 UT. These panels show the Doppler shift, v_los, plotted as a function of the angle between the radar look direction and the local L-shell. We term this angle the "L-shell angle", φ, and it is measured anti-clockwise from geomagnetic east. The easternmost beam of the radar points almost parallel to the local L-shell, whereas the westernmost beam points rather more meridionally, so L-shell angles range from 5° to 63° over the field-of-view. The grey scale shows the density of points, self-normalized and plotted on a linear scale.
Superimposed are the median (dots) and upper- and lower-quartiles (bars) of the distribution in bins of L-shell angle 3° wide. If the flow within these populations is consistently of a similar magnitude and direction, then a cosine-dependence between v_los and φ should exist, and indeed, for the main body of the echoes, this is the case. Within the radar arc (Fig. 9b), this cosine-dependence is found to be consistent with a flow speed of 150 m s−1 and a flow direction of 0° (this cosine-dependence is represented by the dashed curves in Fig. 9), i.e. parallel to the auroral arc pair. Outside of the radar arc (Fig. 9a), the Doppler shifts are consistent with a flow speed of 260 m s−1 and a flow direction of 332° (solid curves). This relationship breaks down for φ < 30°, though this represents only a very small proportion of the backscatter echoes. Some doubt can be cast on the exact velocities and angles deduced using this technique, especially as the measurements presented in Fig. 4 suggest that there is some variability in the line-of-sight Doppler shifts observed during this interval. However, the results suggest that only relatively low drift velocities (certainly less than the ion-acoustic speed) are present in the vicinity of the arc system. Poleward of this, the background flow is again relatively low, though as we saw earlier, irregular patches of high Doppler shift echoes are observed superimposed upon this.
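The beam-swinging estimate amounts to fitting v_los(φ) = v·cos(φ − φ0) to the line-of-sight velocities as a function of L-shell angle; writing it as v_los = a·cos φ + b·sin φ turns the fit into linear least squares. The sketch below demonstrates this on synthetic data with an assumed 150 m s−1 L-shell-aligned flow; it is an illustration of the method, not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": a 150 m/s flow along the L-shell (phi0 = 0),
# sampled over the radar's L-shell angles of 5-63 degrees with noise added.
phi = np.radians(np.linspace(5, 63, 200))
v_true, phi0_true = 150.0, np.radians(0.0)
v_los = v_true * np.cos(phi - phi0_true) + rng.normal(0, 15, phi.size)

# v*cos(phi - phi0) = a*cos(phi) + b*sin(phi): solve for a, b by least squares.
G = np.column_stack([np.cos(phi), np.sin(phi)])
(a, b), *_ = np.linalg.lstsq(G, v_los, rcond=None)

speed = np.hypot(a, b)                      # fitted flow speed
direction = np.degrees(np.arctan2(b, a))    # flow direction relative to the L-shell
print(round(speed, 1), round(direction, 1)) # should recover ~150 m/s, ~0 degrees
```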
Discussion and conclusions
For a period of some two hours, an optical auroral feature is observed by an ASC located at Tjörnes, Iceland. At times, this feature takes the form of a single east-west aligned arc, though for the majority of the interval, a pair of parallel arcs, separated by approximately 30 km, is observed. Simultaneous radar observations show that backscatter is associated with this arc system, appearing to be co-located with the gap between the arc pair. In addition, other backscatter features are observed poleward of this, in regions where no auroral luminosity is present. However, the spectral characteristics of the radar arc are sufficiently different from the other regions of radar backscatter, suggesting that they are produced by different generating mechanisms.
As discussed in the Introduction, several mechanisms for the generation of arc-associated radar backscatter have been suggested by previous workers. One favoured mechanism posits that irregularities are produced in a region of high electric field adjacent to the arc (Greenwald et al., 1973). Indeed, a higher electric field is observed poleward of the arc, of the order of 13 mV m−1 (as deduced from the fitted flow velocity of Fig. 9a), than within the arc itself, 8 mV m−1 (Fig. 9b). If the electric field were high enough then it might be expected that these irregularities are generated by the two-stream instability, but this requires a threshold of ∼20 mV m−1. Higher Doppler shift echoes, consistent with electric fields above the two-stream instability growth threshold, are observed some distance poleward of the arc system, but these are thought to originate through another mechanism that will be discussed below. An alternative mechanism for the generation of backscatter is the growth of the gradient drift instability in the horizontal electron density gradients associated with the arc, produced by enhancement of the E-region by the arc electron precipitation (Balsley et al., 1973). In the westward electrojet (eastward electron drift), the southward-directed convection electric field favours the growth of gradient drift waves in southward-directed electron density gradients, i.e. on the northern side of the arc. This is indeed the situation observed when a single arc is present (e.g. 02:10 to 02:30 UT, Fig. 5). When multiple arcs appear, the luminosity is highly structured (see, for instance, Fig. 2b), and presumably short scale-length electron density gradients, suitable for the generation of irregularities, abound.
Auroral arcs are generally associated with an upward field-aligned current (FAC), carried by the precipitating electrons which produce the auroral luminosity, and current closure is achieved through an adjacent downward FAC sheet. We show in Fig. 10 a schematic of the FAC structure that we envisage being associated with our observations. The auroral luminosity is produced within a region of upward FACs, with the arc pair being produced by filamentary FACs within this region; such an instance of parallel arcs present within a single upward FAC region has been observed by Janhunen et al. (2000). A broad region of downward FAC lies poleward of this, with the FAC being carried by thermally upwelling electrons. These upwelling electrons produce a depletion of the E-region electron density, and if the downward FAC is filamentary, in the same way that the upward FAC appears to be, then this can also provide electron density gradients that generate gradient drift waves, thus giving rise to the backscatter regions observed poleward of the radar arc. The main body of backscatter echoes associated with this region have Doppler shifts that are consistent with an electric field below the threshold that is necessary for the excitation of the two-stream instability (see above). However, sporadic regions have higher Doppler shifts in excess of the ion-acoustic speed. Villain et al. (1987, 1990) argued that the upwelling electrons that carry downward FACs could feed energy into the two-stream or electrostatic ion cyclotron instabilities, producing the growth of waves with Doppler shifts exceeding the ion-acoustic speed, in regions of low background electric field. Milan et al. (2001) employed this argument to explain the appearance of high Doppler shift echoes superimposed on regions of low Doppler shift flow adjacent to an auroral arc, and a similar situation appears to apply in the present case.

Fig. 10. A schematic showing our suggestion of the FAC system associated with the radar backscatter and visual arcs. The radar arc and visual arcs are confined to a region of upward FAC (dark grey region), adjacent to a broader region of downward FAC lying to the north (light grey region). Also indicated is the approximate electron drift vector within the two regions, deduced from the beam-swinging analysis.
Clearly, the exact relationship between radar backscatter and auroral FACs is still an open question. However, there is no doubt that FACs control the generation of E-region irregularities through one mechanism or another: they are associated with field-parallel particle motions, which can feed energy into instability growth; these field-parallel motions produce modification of the ionosphere and structure the E-region electron density, giving rise to the horizontal electron density gradients which seed the gradient-drift instability; and they are responsible for the transfer of momentum from the magnetosphere to the ionosphere, i.e. they are present wherever strong or divergent horizontal electric fields exist in the ionosphere. Optical observations, while instructive, do not give an unambiguous identification of the locations of FACs. Low-altitude satellite measurements of the particle and/or magnetic field signatures of FACs should, on the other hand, allow the role of FACs in the excitation of E-region instabilities to be investigated in detail. In the future, we intend to compare the comprehensive DMSP and FAST data sets with SuperDARN observations in order to extend our understanding of the factors which govern the growth of ionospheric irregularities in the E-region, and to characterize the spectral properties of coherent backscatter from these irregularities.
Fig. 1. A map showing the vicinity of Iceland and southeastern Greenland, indicating the location of the Pykkvibaer radar and the Tjörnes all-sky camera. The radar field-of-view is mapped assuming a backscatter altitude of 110 km. Colour-coded is the backscatter power from the radar scan starting 02:55:43 UT on 23 September 2000. The projection of the ASC field-of-view is shown, with the three concentric circles marking the locii of zenith angles of 30°, 60° and 75° mapped to an altitude of 110 km. The geomagnetic meridian employed to produce keograms is also shown. Superimposed is the statistical auroral oval.

Fig. 2. Two all-sky images taken at (a) 02:56 UT and (b) 03:40 UT on 23 September 2000. Note that panel (a) corresponds to the time of the radar observations in Fig. 1. The circles mark the locii of zenith angle in steps of 15°. The irregular solid line marks the horizon due to buildings, etc. The vertical solid line shows the geomagnetic meridian passing through zenith, which is employed to produce keograms. The direction of geographic north is indicated in the top left corner.

Fig. 3. A schematic diagram showing the method of mapping between radar and ASC observations, where r is radar range, is elevation angle, and χ is zenith angle. In our algorithms, a curved Earth is assumed.

Fig. 4. Radar keograms of (a) backscatter power or signal-to-noise ratio, (b) Doppler shift or line-of-sight velocity, (c) spectral width, and (d) estimate of altitude of scatter volume. (e) Keogram of the visual aurora.

Figure 4 presents radar and optical observations from the interval 02:00 to 04:10 UT. Panels (a)-(d) are radar keograms of backscatter power, Doppler velocity, spectral width, and the estimated altitude of origin of each echo. In panel (a), black indicates where the SNR was insufficient for spectral characteristics, such as velocity and width, to be determined; these parameters are only indicated in panels (b)-(d), where significant SNR was received. Panel (e) presents an optical keogram. In all five keograms, a dashed line indicates the location of the ASC zenith. We will discuss the optical observations first.

Fig. 5. A comparison of the backscatter power keogram, assuming an optical emission altitude of 110 km, and the visual luminosity keogram (contours).

Fig. 6. Backscatter power and Doppler shift from four radar scans between 02:49 and 02:58 UT. Every third scan from this interval is shown.

Fig. 7. The density of echoes in (a) power-velocity, (b) width-velocity, and (c) altitude-velocity space for the interval 02:00 to 04:10 UT on 23 September 2000. The shading indicates density on a logarithmic scale.

Fig. 8. Similar to Fig. 7, though only for the interval 02:30 to 03:03 UT, and concentrating on low Doppler shifts. Panels (d)-(f) show the backscatter characteristics of echoes which comprise the radar arc, whereas panels (a)-(c) show the characteristics of all other echoes.

Fig. 9. The density of echoes in φ-v_los space for (b) the echoes which comprise the radar arc and (a) all other echoes. The grey scale shows the density of echoes on a linear scale, self-normalized. The dots and bars show the median and upper- and lower-quartiles of the distributions in 3° wide bins of L-shell angle. The dashed and solid curves show the v_los cosine dependence on φ expected for flows of (i) 150 m s−1 aligned eastwards along the local L-shell and (ii) 260 m s−1 oriented at 28° clockwise from the L-shell direction, respectively.
THE IMAGINARY OF FAMILIES OF SCHOOLCHILDREN ON EVERYDAY HEALTH PROMOTION
ABSTRACT

Objective: to understand the imaginary of families on health promotion, as well as its limits and strengths in the family routine.

Method: a qualitative research study of the descriptive-exploratory type, conducted with 12 families whose children attend a municipal preschool and elementary educational center. Data collection was conducted between October and November 2018 by means of individual semi-structured interviews. The material was organized by coding the information until reaching the sense nucleus of the text, thus discovering the classes and their connections. Data analysis involved processes of preliminary analysis, ordering, key connections, coding, and categorization.

Results: three subcategories emerged: the imaginary of fathers and mothers about health promotion in the family routine; strengths that can contribute to the promotion of health in the family routine; and limits of the families to promote health in the family routine.

Conclusion: from the perspective of the imaginary, this research strengthens the possibility of sustaining strategies which favor an improvement in the everyday life of these families, further reinforcing community health.
INTRODUCTION
The World Health Organization (WHO) considers that health promotion targeted to young people, especially younger adolescents, has great potential to foster health in the population. Fostering healthy behaviors is an effective way for the youth and their families to better control and improve their health; sports, for instance, are considered an important and healthy alternative for the youth. On the other hand, the WHO recognizes the need to expand knowledge on health so that the population can better control it and treat diseases by managing risk in the best possible way. The communication strategy allows for greater access to information, improving knowledge on health, health-related decision-making, perception, and risk evaluation. 1

The alliance established between the health centers and the educational institutions is of great relevance. It is expected that they advance collaboratively in the elaboration and execution of a work plan for the health center with the preschool level, as well as with the Municipal and maintained schools, coherently with the guidance provided by the Health Promoting Educational Institutions (Establecimientos Educacionales Promotores de Salud, EEPS) and by the City's Educational Plan. 2

From the family perspective, parents are their children's model: if we want children who take care of themselves, we must set the example in practice. If we do not have self-care habits in our routines, we convey to our children that self-care is not important for leading a full life. Resigning one's own needs and putting oneself last is considered a sign of low self-esteem. Taking care of our health, eating well, practicing sports, having a space for personal care, or enjoying leisure activities is one of the most powerful messages that we can convey to our children. Enjoying life, and verbalizing the moments when we really enjoy it, conveys positive emotions that will mark the physical and emotional self-care of our children for life. 3

According to the WHO, in 2016, 41 million children under the age of five were overweight or obese. In the same year, there were also more than 340 million overweight or obese children and adolescents (from 5 to 19 years old). Worldwide, obesity and overweight present a high and increasing prevalence in many countries since the early years of life; obesity is now a global health problem, which is why the term 'globesity' is used. 1,4

Chile has not remained aloof from this phenomenon, presenting a significant increase in its obesity and overweight rates, a situation that has drawn the attention of the governments. Although it has been approached as an urgent problem, the public policies that were launched have not yet managed to revert the high obesity rates. The Nutritional Map of the National Board for School Help and Scholarships shows a prevalence of obesity of 24.4% for 2018, with a similar percentage for overweight, resulting in 51.7% of students being overweight or obese. Magallanes is the leading region in this regard, with 33% of the students presenting obesity and 13.3% morbid obesity, a tendency repeated in the southern regions of the country. 5

In the last 100 years, a strong urbanization process occurred in Chile, in which most of the rural population moved to the cities. The result has been a drastic change in the eating habits of the population, with more calories derived from highly processed foods containing sugar, refined carbohydrates, sodium, and saturated fats.
The current nutritional situation in Chile is related to the economic and sociodemographic changes both in diet and in lifestyles. 2,6

Everyday life is the object of study in the different scenarios of Nursing and Health research, which allows approaching an expression of a way of life in a given context. Consequently, and paradoxically, daily routines are not only a scenario but, above all, they integrate the scenes of living and co-living. 7 The daily routines are the way of living of human beings that is evident in everyday life, expressed by their interactions, beliefs, values, symbols, meanings, images, and imaginaries, which describe their process in life, in an initiative to stay healthy and not fall ill, punctuating their life cycle. There is a certain pace that characterizes our way of life, influenced both by the must-be and by the needs and wishes of everyday life, which is called the rhythm of life. 7

It is observed that people lead more sedentary lives, burning fewer calories daily; hence the importance of incorporating healthy life habits in children's routines, since that contributes to good health. Currently, Chile is characterized by a marked decrease in malnutrition, by an increase in obesity, and by risk factors for non-communicable chronic diseases. 8 Thus, it is important to intervene at early ages, avoiding the risk of increasing the number of future illnesses. 9

Due to the increase in childhood obesity in the Magallanes Region, research work with the families of preschool and elementary boys and girls is important, in order to know the limits and strengths they have for health promotion in their family routines. We shall understand as strengths the advantages that allow taking an opportunity or facing a threat; and, as limits, those limitations that prevent facing a threat or taking an opportunity. 9 Additionally, it is pertinent to consider the imaginary as a common source of emotions, feelings, affections, and ways of life, by self-identifying and by identifying oneself in the other. 10

When reflecting on health promotion in the family routine, it is feasible to wonder: What is the imaginary of fathers and mothers about health promotion in the family routine? What are the families' limits and strengths for health promotion in the family routine of preschool and elementary boys and girls?
This study was designed with the aim of understanding the families' imaginary of Health Promotion, its limits and strengths in the family routine of preschool and elementary boys and girls, in a school from the Barranco Amarillo sector, Region of Magallanes and Chilean Antarctica.
This study sets out the premise that health problems are also related to people's everyday life, their interactions, beliefs, images, and symbols constructed in society life. Its purpose is to help expand the construction of the Nursing knowledge which implies care, teaching, research and extension, with health promotion as the driving factor of family care, expanding and reasserting the importance of studies on the theme.
METHOD
This is a study with a qualitative approach, in the descriptive-exploratory modality, based on Michel Maffesoli's Comprehensive Sociology of Everyday Life. From this proposal, the objective of science is to open mutual comprehension and to promote tolerance and sensitivity towards other ways of describing and explaining events. 11 The Comprehensive Sociology of everyday life seeks to understand, not to explain, social phenomena, valuing everyday knowledge and common senses, 10 involving the way of living of the individuals and social groups in their imaginary, sublimating sensitive reason. 12 Anonymity of the study participants was guaranteed with the Informed Consent. In the record for each participant, the letter "E" for student ("estudiante" in Spanish) was assigned, followed by the numerical code according to the sequence of the interviews, thus avoiding identification.
The study locus was a municipal preschool and elementary educational center located in the north exit of the city of Punta Arenas, Chile, at kilometer 8 ½ North, Barranco Amarillo sector.
The representatives of the 12 families in this study voluntarily agreed to participate, after the researcher presented the study to them in a proxy meeting which is held in that school once a month. When they agreed to participate, they signed consent documents, and the interviews were held in their homes.
The inclusion criteria to participate in the study were being over 18 years old, with children enrolled in the 2018 academic year, as well as agreeing to welcome the researcher in their homes.
Data were collected by means of semi-structured interviews, following a script with questions; the interviews were digitally recorded. They were conducted between October and November 2018 and lasted approximately an hour and a half each. A field diary was kept to help build the data interpretations; notes were taken in each interview with the following codes: Interaction Notes (Notas de Interacción, NI), Methodological Notes (Notas Metodológicas, NM), Theoretical Notes (Notas Teóricas, NT), and Reflexive Notes (Notas Reflexivas, NR).
After the interviews were collected and recorded, they were transcribed in their entirety into Word format. Data analysis involved processes of preliminary analysis, ordering, key connections, coding, and categorization. 13 Text clippings were performed, thus generating recording units; the material was coded seeking to attain the sense nucleus of the text, so that the meaningful classes, their characteristics, and their connections were unveiled. Three categories emerged: The imaginary of fathers and mothers about health promotion in the family routine; Strengths that can contribute to the promotion of health in the family routine; and Limits of the families to promote health in the family routine.
Profile of the research participants
The participants of this study were eight mothers and four fathers responsible for 12 families, with an age range between 28 and 53 years old. The families had the following characteristics: four are couples (no civil union), one has its parents separated, and seven are married couples. Their number of children varies from one to four. The parents had different occupations, distributed as follows: bilingual (English and Spanish) secretary, Nursing technician, three housewives, foreman (construction chief), seamstress, gas installer, kindergarten education technician, confectioner, machinist, and cab driver. As regards their schooling level, three attained eighth grade (elementary school); two, fourth year (high school); one, second year (high school); one, third grade (elementary school); one, higher education; and four, professional technician level. Regarding the children participating in this study, 17 were preschool and elementary students in the 2018 academic year, ten boys and seven girls, with an age range between 5 and 12 years old. They attended from kindergarten up to 6th grade (elementary school), distributed as follows: one pre-kindergarten student; one kindergarten student; two 1st grade students; five 2nd grade students; one 3rd grade student; two 4th grade students; four 5th grade students; and one 6th grade student. No anthropometric assessments were performed on the pupils. The individuals responsible for the families contributed information on the pupils' weight and height, data through which very few cases of obesity and overweight were verified.
The Imaginary of fathers and mothers on health promotion in the family routine
The imaginary of the family on health promotion in the family routine emerges in the fathers' and mothers' reports. They show how, in their daily routines, the families are concerned with "trying", one way or another, to perform physical activities or implement changes in their eating habits in the family environment, always "trying" to contribute to health promotion.
Jogging, going out for a walk, going out for jogging, walking is what they demand anyway walking two hours a day two hours a day (E9).
It can also be physical well-being that is permanently staying physically active either walking riding a bike or any other related kind (E11).
6/14
The reports reveal to us that health promotion is intimately linked to the pre-established imaginary of healthy eating. As such, images emerge which represent the quality of life of the family, that is, family health.
Eating healthy stuff, not junk, that healthy food I know are the casseroles the desserts, tea, it's a healthy food not to gain weight (E3).
Starting to eat better, starting to leave aside much of the issue that it has to be big pasta, big meat, that we're trying to quit sugar anyway (E5).
About eating healthy eating well for their health for their well-being (E8).

From their imaginary, they set forth their doubts about what it means to speak of health promotion. It is when they receive information on how to take care of themselves, and what is related to a healthy lifestyle: It's the guidance that it gives us, that they provide us to lead a healthy life (E1).
It's like when they put the posters (E10).
Publicize what is being done, right! […] Nationwide for health care, that is especially promoting healthy lifestyles. To promote is to take care of yourself, if you're sick, to promote healthy lifestyles (E2).
From the families' imaginary, the metaphors of health promotion emerge as a cycle, a wheel. If something fails, health "goes down".
It is a cycle in its entirety, it is like a wheel: something fails in health prevention and "everything goes wrong", it makes no greater interference, it is like a chain.
In the statements of some participants, we can also observe that there is a lack of knowledge: they say "I don't know" or "I don't understand" in their speeches.
Strengths that can contribute to health promotion in the family routine
Among the strengths that can contribute to everyday health promotion, based on the speeches provided by the families, the following stand out: the search for healthy eating; guiding the children not to eat so much; instilling the mentality of healthy eating and well-being; and health promotion beginning with oneself (proactivity).
It's what we're starting to do now, starting to eat better, starting to leave aside much of the issue that it has to be big pasta, big meat, … we're trying to quit sugar anyway because we were very good at consuming sugar […].

It is worth mentioning that the parental or family model exerts a strong influence on the biopsychosocial health condition of the family members. This is equally true in the formation of risk factors and of protective factors, in their beliefs, and in the expectations of the family. The following speeches derive thereof:

Being in good health […] It is motivating they are my two sons… they're what they need all the help because they're growing up (E4).
For the father and mother to be there… if we're fine our daughters are going to be fine, if we're down our daughters are going to be down (E3).
It is motivating they are my two sons […] they're what they need all the help because they're growing up they are in good health (E4).
Limits to promote health in the family routine

In the imaginary of fathers and mothers, among the limits to promote health in the family routine, the temptations to indulge in unhealthy eating stand out.
The temptations suddenly we eat a hot dog, which is not healthy, suddenly we feel like drinking unhealthy drinks, we dive into the cookie with sugar, bread, dough, those things (E1).
They prefer preparing easy foods, junk, French fries, hamburgers and all that kind of things (E2).
It is difficult to establish a limit or contrast between the habits or customs and the imaginary of what is healthy in the routines of these families, in relation to health promotion: We're quite a bunch in the house, suddenly there are different ideas and things like how to treat certain themes and certain customs, we have different customs and that kind of weakens a little the theme of prevention in health (E2).
We're not used to physical activity (E5).

The accelerated rhythm of contemporary life is shown as a limit for health promotion; the theme of lack of time emerges.
Physical activity is kind of well-varied because or suddenly I'm not there or when I am I want to rest. I believe that it's not even lack of time, but, it's not wanting to […] I'm all day seated, because of my work, all day in the car, simply there's no exercise, zero physical activity (E11).
Another limit pointed out corresponds to the images of what it means to be healthy, transmitted across different family generations.
When I have healthy stuff I eat all that I find, because I know it's not going to do me any harm and if I get fat, 'as I am', I gain weight but healthy. My granny used to say that if you find something to eat, something that'll do you no harm you don't put yourself any limit […] because in the end it's not going to do you harm because it's something healthy (E6).
The families report that they do not choose certain food products that could be healthier due to their higher cost. Thus, in relation to these economic issues, it is difficult to lead a healthy life for it demands from the families an expenditure which surpasses their budgets.
Buying healthy stuff…, the thing is that it's very expensive here. In sum everything is expensive, especially the healthy stuff. It's difficult to lead a healthy life (E6).
DISCUSSION
Seeking to understand the imaginary, we can observe each family by their words, their way of thinking and of living, and by their reflections on health promotion. The imaginary is that entire world of meanings, of ideas, of fantasies, memories of perceived or unperceived figures, the values, the beliefs, that merges with the images. It is an ambivalent force that joins the emotional and rational aspects, where the human being is immersed and social life is molded. 14 Image is a cosmological reality; it is the collective that allows bringing to bear the multidimensional potentialities of each person, in a joint manner. This imaginary is also a reference both for the interactions that involve the health of the families and for the healthy being. 14-15 The data presented allow us to understand how each person and each family has their own beliefs, values, and knowledge, and how they somewhere hear or read new ways of understanding and performing health care. As this participant reports, "it's like when they place the posters", thus generating a communicational dynamics that leads to learning new ways of caring for their health and for their surroundings, which juxtapose or impose on the inherited knowledge and practices with respect to what is good or not as regards health.
Knowing which food products are healthy and which are not can foster favorable behaviors and environments in preschool children regarding food selection and consumption, as well as with respect to the prevention of risk factors related to childhood obesity. In small children, prevention usually yields better results than those offered by the Health Promotion programs. 2 By essence, the family has a self-care nature, which seeks to "understand and foster health promotion to create other ways of caring". 15 In this sense, Nursing takes on a fundamental role in valuing users' knowledge, understanding their values, beliefs, experiences, and life events based on an open-dialog relationship, so that information and guidance are not narrowly limited to the user, opening the perspective of elaborating healthy ways to care for family health. 16 When they express how they understand health promotion, the traits of their imaginary permeate their self-declared knowledge. The social imaginary portrays a system of meanings that is inherent to every community, whose senses represent a network of meanings that enable cohesion of the existing environment/disarray. This refers to the manifestations of the symbolic dimension; even the imaginary uses the symbolic to express itself, and it reflects social practices that materialize beliefs, rituals, and myths. 17 As such, the imaginary is configured in the routines of these families, dictating customs and care actions in everyday life. Such care actions appear in the mothers' and fathers' speeches when they report on their eating habits.
An eating preference involves a complex interaction between family and social influence and the environment the child is immersed in, in addition to the association between preferences, tastes, and access to knowledge about food products.
Health Promotion practices can be activators of strengths in the elaboration of measures that result in the strengthening of the subjects and of the collectives, in the expansion of their autonomy, and in fostering participation in and use of the health networks. 18 The adoption of healthy lifestyles, such as regularly practicing physical activity and healthy eating, is directly associated with health promotion, since the latter seeks means to improve the individuals' quality of life. However, it is necessary to consider its determinants and conditionants, as highlighted in the Ottawa Charter. These are certain health pre-requisites, which include peace, adequate economic and nutrition resources, housing, a stable ecosystem, and sustainable use of the resources. 19 In the current society, an increase has occurred in the intake of hypercaloric food products, as well as a reduction in physical activity, as a result of the change in behavioral patterns that lead to the more sedentary lifestyles typical of life in the cities. In turn, the importance of empowering the families through Health Education, as well as of fostering personal skills and self-esteem, is fundamental to favor health, a fundamental promotion tool in the contexts of Primary Health Care and of education. 20 Health care begins within each person, as a concern for taking care of oneself. And the concern for care appears when it is important that someone else exists. Consequently, it is necessary to devote oneself to that person, to participate in their destiny, their searches, their sufferings, and their successes in life. To care, then, means concern, effort, attention, promptness, zeal, and good manners. We are in front of a fundamental action, a way of being in which the person comes out of themselves and focuses on the other with utmost care and concern. 21 In post-modernity, being together is not a reason, it is a feeling experienced by the people who integrate in a group, being part of the "tribe". In many aspects, feelings are the most irrefutable reality of our time, being used both for doing good and for doing evil. 10 It is important to highlight family co-living, or "being together", happy moments, enjoying with the family, taking into account certain values that guarantee happiness and union, for the balance and health of the nuclear members of the family. 22
Care can be understood as a configuration of practices with meanings. The notion of configuration allows us to think of actions that converge in the conformation of a related skill, interweaving and following a certain logic or organization which includes and tries to visualize tensions and overlappings with other structures of signification in everyday life. When we speak about care actions and caring, we do not refer to a neutral concept but to one that materializes inequalities and differences, which is why it needs to be reflected upon in order to create well-being through social policies and interventions by the public institutions. 23 The strength lies exactly in the fact that each of the actions is, simultaneously, the expression of a certain alienation and of a certain form of resistance. It is a compound of triviality and exception, of slowness and excitement; it is the place of a real feeling of reappropriation of existence. 10 The feeling of being together, caring for the children, providing them with a good education and good health strengthens, cultivates, and fosters the bonds and the manners for a good social relationship. It is there that children are taught to respect, to care for things, and to manage sadness and happiness. Shared maternity and paternity represent an important advancement, boosting the opportunity for more egalitarian relationships and for new socializing models in the education of sons and daughters. 24 In this sense, we are living in times of great cultural and technological transformations, among others, which call on us to adjust our outlook on this emerging reality. More than ever, it is necessary to stimulate each human being, involving their strengths. 13 The sociodemographic, social, and community characteristics of the parents can influence issues like the frequency with which they practice physical activity and how they feed their children. 24 In the data presented, a number of decisions emerge clearly, such as the impulse to eat food products they identify as unhealthy, as stated by the participants, who also report the food options they eat at their homes. It can be interpreted that health care is increasingly difficult every day, due to the diverse food options available in the market and to the fact that the most common preferences are for fast foods.
Many parents have to divide themselves among multiple work and household duties, and they find it more practical to offer fast food to their children. Nutrition is an important conditioning factor for children's growth and development; consequently, it is very important to guarantee them an adequate nutritional intake, as well as to educate them on a healthy lifestyle, an effective health promoter. Among the most functional strategies to attain this goal, the following can be mentioned: establishing a regular time schedule for each meal, serving varied and healthy food, setting the example by eating a healthy diet, discouraging quarrels centered on food, and encouraging the children to participate in the process of food elaboration or selection, always following the guidelines of a balanced and healthy diet. 25 Undoubtedly, the family is the main nucleus of social life where children and adolescents grow up. Many of the habits, values, beliefs, and lifestyles that will be present, to a greater or lesser extent, in the adult life of the person are established in the family. For this reason, the family is one of the most important cornerstones for the promotion of healthy habits. As stressed by the FAROS Foundation, the ideal way to promote better eating habits and healthy activities in the children is by involving the whole family. This approach centered on the family simply means that everyone, both parents and children, works together as a team to attain a healthy life. 26 Through the statements of fathers and mothers, the reports show the habits and customs of the participating families of the study, and it is they, as the responsible adults of each household, who exert an influence, to a greater or lesser extent, toward either deficient nutrition or healthy eating. They also account for the importance of trans-generational beliefs associated with eating excesses and their effect on health.
Together with eating habits, sedentarism is another of the main factors with an effect on health and quality of life. Sedentarism is defined as performing less than 150 minutes of physical activity a week, either vigorous or moderate. Among the multiple studies conducted around sedentarism, several signal that various factors associated with diverse diseases could be prevented by practicing physical activity at least fifty minutes a week. 27 According to the information collected among the participants of this research, they are people with sedentary habits. The reasons they point out as preventing them from engaging in physical activities are the following: lack of time, followed by lack of motivation; tiredness due to their jobs; and lack of habit. Therefore, for many of them the idea of practicing physical exercise ends up being a sacrifice.
The reports explicitly point to economic restrictions as an important obstacle to health promotion, quality of life, and being healthy. It is evidenced that the families' strategies are precarious when it comes to establishing actions focused on promoting healthy lifestyles and habits.
In order for the families, and the smallest boys and girls in particular, to enjoy a quality and healthy life, it is necessary to strengthen the efficacy of the health promotion strategies. This implies developing policies and formulating strategies, actions, and interventions which go beyond a sectorial logic of the health scope. Only new approaches of inter-sectorial policies will be able to act on the social conditioning and determinant factors of health, with interdisciplinary support, with adequate inter-sectorial coordination at the local, regional, and national levels, and with an Open Government approach, in which the effectiveness of the mechanisms for citizen participation and accountability is enhanced. Persisting in bureaucratic initiatives will not boost the development of healthy lifestyles among people, families, and communities, in the spaces where they live, study, and work. 28 In this context, health promotion must be based on a solid articulation between the logics of the health and educational teams and institutions. This is one of the challenges that must be solved to advance towards efficient and effective health promotion strategies. 14 Health is essential for personal, social, and economic development, emerging as a crucial component of quality of life. Political, economic, socio-cultural, environmental, behavioral, and biological factors can favor or impair health. 29 As such, the studies and the knowledge developed by the social sciences warn us that work must be conducted with the families, strengthening their knowledge and helping them face and overcome the limits and obstacles they experience, both in their imaginary and in their routines, so that these become effective spaces for the promotion of a healthy life.
CONCLUSION
This study sought to understand the imaginary of the families on health promotion, with its limits and strengths in the family routines of a specific group of boys and girls in the South of Chile/Chilean Antarctica, Magallanes Region. The relatives' imaginary, a set of actions and interactions that is a component of the families, expresses different customs and habits that can be favorable or unfavorable to health. This shows that promoting health requires focusing on healthy eating, knowing how to eat, engaging in physical activity, and knowing the families' limits, obstacles, socioeconomic situation, and beliefs.
Nursing has an important social commitment to fulfill in the field of Health Promotion, especially in the school setting, which is not limited to offering education on nutrition so that individuals, families, and populations maintain healthy eating habits; it also involves conveying scientific information and studies that favor the understanding of the importance of some prohibitions or reductions and of the increase in the intake of certain food products. It is also necessary to make known the existence of public policies and programs targeted at the well-being of human beings, with their accomplishments and failures, as well as their successes and limitations. Additionally, it is relevant that Nursing imposes on itself the challenge of developing actions towards and with the families, coordinated and articulated among the technical teams and health/education professionals, that provide them with information and education opportunities as regards healthy habits, always considering the social determinants.
The study allows setting out that, in order to reinforce the families' knowledge, nurses need to become further involved in quality-of-life practices, with educational interventions targeted not only at providing information and guidance but, above all, at favoring reflection on and analysis of various aspects of family life and society, in order to develop the necessary abilities and strengths in the families. It is definitely advisable that other studies be conducted on the theme of the imaginary and the everyday routines in Family Health Promotion, and that such research also include the reality of the imaginary and the everyday routines of the health professionals themselves.
Climbing up ladders and sliding down snakes: an empirical assessment of the effect of social mobility on subjective wellbeing
We examine how intergenerational mobility impacts on subjective wellbeing (SWB), drawing on data from the British Cohort Study. Our SWB measures encapsulate both life satisfaction and mental health, and we consider both relative and absolute movements in income. We find that relative income mobility is a significant predictor of life satisfaction and mental health, whether people move upward or downward. For absolute income, mobility is only a consistent predictor of SWB and mental health outcomes if the person moves downwards, and in this case the impact is far larger than for relative mobility. For both relative and absolute income mobility, downward movements impact SWB to a greater extent than upward movements, consistent with loss aversion. Notably, we find that social class mobility does not affect SWB. We present evidence that the significant relative and absolute mobility effects we find operate partially through financial perceptions and consumption changes which can occur because of income mobility.
Introduction
Income mobility is regularly touted as a means through which individuals who were born into lower socioeconomic backgrounds can access a 'better' life. Mechanically, 'better' implies more income, and possibly even better quality of work. However, intuitively 'better' should also mean higher levels of subjective wellbeing (SWB). In this paper, we consider how social mobility impacts on SWB, with SWB being measured either as changes in an individual's self-reported life satisfaction or mental health. We consider three different measures of income mobility, which capture both absolute and relative mobility movements.
Related to our work is a large literature that looks at how relative income impacts SWB (e.g., Dolan et al. 2008; Bechtel et al. 2012). The main message from this literature is that SWB is adversely affected if you are surrounded by people who are richer than you. Relative income has been measured in a host of ways, but usually the comparison group is people of a similar age and gender at a given point in time (Knight and Song 2006; Luttmer 2005; Card et al. 2010; Li et al. 2011; Senik 2004). That is, people who I find comparable with myself with respect to some key demographic. Additionally, the comparison group could be the income that an individual has experienced in their past. This accommodates the notion that people feel relative changes in income more intensely than absolute levels of income (Rabin 2004). Where comparisons with past income have been considered, it has been usual to consider the income that the individual themselves has earned in the recent past. There are two exceptions. First, Clark and D'Angelo (2009) look at how upward social class mobility affects SWB by drawing on 15 waves of the British Household Panel Survey. They find that individuals with greater mobility also have higher levels of life satisfaction. Their scope is more limited than our study, as they only consider upward mobility, defined as a binary indicator. In contrast, we study downward mobility and also the extent of the adjustment, by considering percentile change alongside. Second, McBride (2001) utilises the answer to the following question to create an inter-generational measure of mobility: 'compared to your parents when they were the age you are now, do you think your own standard of living now is: much better, somewhat better, about the same, somewhat worse, or much worse?' The author finds that respondents who perceive their parents as having had a higher standard of living in comparison to their own report lower levels of SWB. This study is also limited, however, in its cross-sectional nature and by the fact that the respondent is asked to recall their parents' standard of living 1 . In this work, we explore how both upward (positive) and downward (negative) income mobility impact on SWB. We draw on the British Cohort Study (BCS) to show how income mobility affects SWB, and consider both relative and absolute inter-generational income movements.
Overall, we find that upward mobility augments SWB and downward mobility deteriorates SWB, with the overall effects of downward mobility always being the greatest in magnitude. Notably, upward relative income mobility augments SWB more than upward absolute income mobility (where the latter effects are zero). In contrast, downward absolute income mobility deteriorates SWB more than downward relative income mobility. The estimates implied by downward absolute mobility movements are substantive, while all other effects are modest. To give some context, earning £100 less than your parents on a weekly basis (which is less than the movement experienced by the average person in the data we consider) produces a deterioration in life satisfaction of the same size as the effect unemployment has been shown to have on life satisfaction. Crucially, our results are robust to several specifications. We also present suggestive evidence that the income mobility effects we find operate partially through financial perceptions (i.e., how a person is feeling about their financial situation) and consumption changes which occurred because of mobility.
Income mobility and SWB
To consider how income mobility impacts SWB, we envisage a utility function with a reference point for income determined by the individual's own past income. We assume that new cohorts begin with aspirations that are at least as high as their own parents'. Specifically, we expect that individuals want to consume amounts more than or equal to those of their own parents. We also expect that the experience of mobility is worse for those who are downwardly mobile, with slower rates of adaptation. We expect that adaptation will be quicker for those that are upwardly mobile, implying little or no legacy effects on current SWB. Our hypothesis is consistent with the notion that losses in social mobility resonate more, and for longer, than gains, i.e. akin to classic loss aversion. Evidence of loss aversion abounds in many other contexts (e.g., Shea 1995a; Shea 1995b; Bowman et al. 1999). Assuming that loss aversion affects mobility, it follows that the absolute impact on SWB of a loss of one dollar, from an initial reference position, is greater than the effect of a gain of one dollar (Tversky and Kahneman 1991).
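A minimal reference-dependent utility specification consistent with this hypothesis can make the asymmetry precise. The functional form below is our illustration, not something estimated in this paper; r denotes reference income (here, parental income):

\[
u(y) \;=\;
\begin{cases}
(y - r)^{\alpha}, & y \ge r,\\
-\lambda\,(r - y)^{\alpha}, & y < r,
\end{cases}
\qquad \lambda > 1,\; 0 < \alpha \le 1.
\]

Loss aversion corresponds to λ > 1: a downward move of a given size lowers utility by more than an upward move of the same size raises it, which is precisely the asymmetry tested for below by entering upward and downward mobility as separate regressors.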
In our work, we explore inter-generational upward and downward income mobility. We see four pathways that are not mutually exclusive through which income mobility can affect SWB. Given the data at our disposal, we attempt to disentangle which of these channels is the most important. These are: (i) stress (ii) prosperity concerns (iii) identity and (iv) consumption changes.
For the first pathway, we envisage individuals fully internalizing their new status and gaining a 'feeling of pride' when they are mobile and a 'feeling of despair' when they are dis-mobile. This internalization is a direct pathway through which mobility can impact on SWB. Our second pathway relates to a person's perception of their own financial security, conditional on actual earnings. Feelings about earnings can impact on SWB over and above the impact of the level of earnings. A long literature highlights that poorer perceptions of one's current financial situation are associated with lower SWB and that perceptions of change in financial circumstances predict well-being (Wildman and Jones 2002; Brown et al. 2005; Johnson and Krueger 2006).
For the first and second pathways, SWB is affected through increased or decreased stress levels. Johnston and Lordan (2012) document the mechanisms by which stress affects SWB. We argue that any stress effects caused in the immediate aftermath of downward income mobility can also be exacerbated further, as individuals who report low levels of SWB are also less likely to commit to the future and be optimistic. Consequently, they are less likely to pursue a lifestyle that includes regular exercise and managing a nutritious diet, which have been linked directly to SWB (Pakrashi 2014 and Pakrashi 2015). Alternatively, they may engage in risky health behaviors such as excessive drinking and smoking (Macinko et al. 2003). For individuals who are upwardly mobile, there may also be an alleviation of stress as they move from a situation with less disposable income (and vice versa for the downwardly mobile) to one where they no longer need to make ends meet.
The third pathway, the identity hypothesis, stems from evidence that changing comparison groups can affect an individual's sense of identity (Akerlof and Kranton 2010). All animals, including human ones, need to feel that they belong to a group; being mobile in income, even if mobility is upward, can result in an individual neither feeling part of their former group nor part of their new group. This process is used to explain why children from poor backgrounds who win college scholarships are not as happy as other peers from more affluent backgrounds (Aries and Seider 2005). With respect to this study, identity loss can potentially affect those that are both upwardly and downwardly mobile if the person no longer socializes with old friends and family members regularly, and misses the experience.
The fourth pathway concerns consumption changes. This suggests that individuals may not fully realise the utility (disutility) of their new income status. If true, individuals who are upwardly income mobile consume less. This may occur because these individuals do not feel secure in their newfound status and want to ensure they can smooth future consumption. Finally, having grown up in a lower income environment, they may not view themselves as needing the same level of consumption as those who have grown used to it. This suggests that individuals who are upwardly mobile are slow to adapt, as it takes time to discover and pursue the new consumption bundles available to them. Conversely, downward mobility may impact SWB if individuals still spend in accordance with their former reference group, and take on too much debt. It follows that they also worry about their financial situation (our first pathway) despite consuming more.
Data and methods
We draw on the 1970 British Cohort Study (BCS70). The BCS70 began by including more than 17,000 births between April 5th and 11th in 1970. It is estimated that these births represent more than 95% of births over these days in England, Scotland, Wales and Northern Ireland. We draw on data from 1975, 1980, 1986, 1991, 1996, 2000, 2004 and 2008. Added to the three major childhood surveys (ages 5, 10 and 16) are children who were born outside of the country during the week of April 5th through 11th and could be identified from school registers at later ages.
Income mobility measures
This work focuses on the impact of income mobility as defined by changes in household income from age 10 (survey taken in 1980) through to ages 30 (survey taken in 2000) and 34 (survey taken in 2004). Age 10 is chosen as it is the earliest year that income information was gathered from the BCS families (please see Appendix A, A.1 for more detailed information on the income variable). Using multiple years of income in adulthood helps abate concerns that income gathered in a 'one snapshot' fashion is not a good measure of permanent income. It is, however, worth noting that for surveys like these the correlation between current income and permanent income is very strong (0.74) (Blanden et al. 2012).
Ages 30 and 34 are chosen as they are deemed ages when a person is likely to be settling into their permanent income level. They are also the years when the most questions are asked regarding mental health and life satisfaction. Considering two different time points is important for two distinct reasons. First, for some careers (for example, an academic who is tenure tracked) a person may not have settled into a particular income by age 30. Second, a person who finds that they are doing better/ worse than their parents at age 30 may have SWB gains/losses at that time but adapt as they realize their gains/losses are permanent.
We consider three measures of income mobility: two measures of relative mobility and one that captures absolute mobility. Our first relative mobility measure is defined as the intergenerational movement between income quintiles. A person is defined as upwardly mobile if they moved upward at least one quintile from their parents' household income quintile in 1980 by age 30/34. Conversely, a person is defined as downwardly mobile if they moved downward at least one income quintile 2 . In both cases these variables take on a value of 1 or 0.
Our second measure of relative income mobility is based on absolute percentile change in income. A person is defined as upwardly mobile if they moved upward at least one percentile from their parents' household income percentile in 1980 by age 30/34. For example, if a respondent's parents were in the 30th income percentile at age 10, and the respondent reaches the 70th percentile among their own cohort at age 30, the variable takes on the value of 40. A second variable then captures downward mobility in the same way. Overall, this measure is derived by first calculating the difference between the BCS child's income in percentiles minus their parent's income in percentiles. Subsequently we create two variables to capture upward mobility and downward mobility. Upward mobility is defined as equal to this difference if it is positive and zero otherwise, and vice versa for downward mobility. Thus, we capture the intensity of relative income mobility movements 3 .
Our third income mobility measure is defined as the difference between adult and childhood income. Because the income bands reported in 1980 relate to gross income, it is necessary to calculate an approximation of what the take-home pay would have been. To do this, we convert the mid-points of the 1980 income bands into approximate net terms.
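To make the three definitions concrete, the sketch below shows how they could be constructed in Python/pandas. This is our own illustrative reconstruction: the column names are hypothetical (not BCS variable names), both incomes are assumed to already be in comparable weekly net terms, and quintile/percentile boundaries are computed within the estimation sample.

```python
import pandas as pd

def mobility_measures(df: pd.DataFrame) -> pd.DataFrame:
    """Add the three income mobility measures to df.

    Expects hypothetical columns 'income_child' (parental household
    income, 1980) and 'income_adult' (own income at age 30/34).
    """
    # Measure 1 (relative): movement between income quintiles, 0/1 flags
    q_child = pd.qcut(df["income_child"], 5, labels=False)
    q_adult = pd.qcut(df["income_adult"], 5, labels=False)
    df["up_quintile"] = (q_adult > q_child).astype(int)
    df["down_quintile"] = (q_adult < q_child).astype(int)

    # Measure 2 (relative): percentile change, split so that each
    # variable captures the intensity of movement in one direction
    p_child = df["income_child"].rank(pct=True) * 100
    p_adult = df["income_adult"].rank(pct=True) * 100
    diff = p_adult - p_child
    df["up_percentile"] = diff.clip(lower=0)       # e.g. 30th -> 70th gives 40
    df["down_percentile"] = (-diff).clip(lower=0)  # magnitude of downward moves

    # Measure 3 (absolute): weekly income gap in hundreds of pounds
    gap = (df["income_adult"] - df["income_child"]) / 100
    df["up_absolute"] = gap.clip(lower=0)
    df["down_absolute"] = (-gap).clip(lower=0)
    return df
```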
SWB outcomes
Our primary analysis considers how inter-generational income mobility that occurred between 1980 and 2000/2004 affects the SWB of the BCS respondents. The first proxy of SWB is based on a life satisfaction question that takes a value from 0 to 10 where 10 is the highest level of satisfaction and is available at ages 30 and 34. At age 30 we also proxy SWB with a measure of mental health, specifically the Rutter Malaise Inventory (Rutter et al. 1970), which is a set of 24 questions that combine to measure levels of psychological distress and depression. At age 30, its scores range from 0 to 24, with each question scoring a value of 1. For age 34, only nine of the questions usually asked in the Rutter Malaise Inventory were included in the survey and we therefore rely on a sub-index, which takes on values from 0 through 9.
We also measure mental health using the 12-item version of the General Health Questionnaire (GHQ) at age 30. The GHQ is a commonly used self-reported measure of mental health and consists of questions regarding the respondent's emotional and behavioral health over the past few weeks. Each response to the GHQ garners one point, yielding a score that can potentially range from 0 to 12. The GHQ is not available at age 34, but this wave of the survey did include four questions usually included in the Kessler scale, an alternative proxy for mental health. The Kessler scale in the BCS data has 6 items, whereas the full scale is a 10-item questionnaire (Kessler et al. 2002). We follow the same method used to aggregate the 10-item index when creating the sub-index we rely on in this study.
Descriptive statistics for all variables used in this study are provided in Table 1, and further details of the SWB outcome variables are provided in appendix A.4.
Econometric approach
This study relies on the following model to estimate the impact of income mobility on SWB:

SWB_it = α + β1·UP_i,t−1980 + β2·DOWN_i,t−1980 + γ′x_i + δ′y_it + ε_it    (1)

In Eq. (1), i indexes the BCS child and t indicates the study wave, either at 30 years or at 34 years. UP_t−1980 denotes upward social mobility and DOWN_t−1980 denotes downward social mobility. x is a vector of childhood variables⁴ and y denotes a vector of adult variables that can affect SWB, taken at age 30 or age 34 depending on the timing of the outcome of interest⁵. From Eq. (1) we can identify whether income mobility is a predictor of SWB, holding constant adult and childhood income as well as the usual demographics. We cannot claim a causal effect, given that mobility may be correlated with many other factors that are also correlated with SWB but are not measured in the BCS. Later, we make substantial efforts to explore what these factors may be, to identify further plausible pathways through which any mobility effects found may be operating.
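A minimal sketch of how Eq. (1) can be estimated by OLS, using synthetic data and an abbreviated control set (the full childhood and adult control vectors are listed in the notes to Table 2; all variable names here are ours):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "up_mobility": rng.uniform(0, 50, n),
        "down_mobility": rng.uniform(0, 50, n),
        "child_income": rng.uniform(50, 250, n),
        "adult_income": rng.uniform(100, 900, n),
        "female": rng.integers(0, 2, n),
    })
    # Synthetic outcome for demonstration only.
    df["life_satisfaction"] = (7 + 0.01 * df["up_mobility"]
                               - 0.02 * df["down_mobility"]
                               + rng.normal(0, 1.8, n))

    # Eq. (1): SWB on upward and downward mobility, holding income constant.
    # The real specification adds the full childhood (x) and adult (y) control sets.
    fit = smf.ols("life_satisfaction ~ up_mobility + down_mobility + "
                  "child_income + adult_income + female", data=df).fit()
    print(fit.params[["up_mobility", "down_mobility"]])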
Results
We document the estimates for the age 30 analysis in Table 2. We note that higher values of life satisfaction denote higher levels of SWB. In contrast, lower values of malaise and GHQ imply higher levels of SWB (i.e. better mental health). A few stylized facts emerge from considering Table 2. First, the coefficients are always of the expected sign and are mostly significant. Overall, they imply that upward mobility augments SWB and downward mobility deteriorates SWB.
Second, the effects of downward mobility are always greater than those for upward mobility. This is consistent with the loss aversion hypothesis described earlier in the study; that is, downward mobility hurts more, i.e. losses hurt more than gains. For example, in the case of absolute income upward mobility, the estimate is centered around zero (0.005) and not significant. This compares to the estimate of −0.832 for downward income mobility, implying that if an individual earned 100 GBP less than their parents per week (an amount that is less than the average movement observed in the data), their life satisfaction would decrease by 0.832 units. It follows that earning 225 GBP less per week than one's parents, approximately the maximum decrease experienced (see Table 1), corresponds to a deterioration in SWB of about 1 standard deviation (standard deviation = 1.848). This is a very substantive decline in the context of the life satisfaction literature, where few things have been shown to influence this outcome with no adaptation. For example, much is made of the effects of unemployment on SWB, an exception to this rule, where one year after becoming unemployed the effects of being unemployed are still 0.26 of a standard deviation. The effects of downward absolute income mobility for malaise and GHQ are also relatively substantive. Specifically, a 100 GBP decrease in weekly income compared to the cohort child's own parent worsens their malaise by about 1/6 of a standard deviation (standard deviation = 3.491, see Table 1). For GHQ this deterioration is about 0.25 of a standard deviation (standard deviation = 4.5, see Table 1). Notably, the implied effects of absolute income upward mobility on malaise and GHQ are significant, but very modest. For percentile-based relative income mobility, the estimates are also much smaller for upward mobility as compared to downward mobility; in addition, the effects are much less substantive than those of absolute income mobility. Specifically, a person would need to move about 80 percentiles of the income distribution to experience the same deterioration in life satisfaction as would be suggested by receiving 100 GBP less than their parents in weekly income. The percentile movements implied for GHQ and malaise are equally large, and unlikely. In contrast, the effects of relative income upward mobility, as measured by percentile change, are always significant and more substantive than those implied by absolute income mobility, albeit still modest. Finally, for quintile-based relative income mobility, the estimates for life satisfaction imply that downward mobility hurts twice as much as the gains experienced for upward mobility (−0.319 vs. 0.161). The estimates for both malaise and GHQ are also significantly larger for downward mobility. Third, specific to upward mobility, relative movements in income matter more than absolute movements in income.

Notes to Table 2: Quintile-based income mobility = 1 if a person moved up or down one income quintile. Percentile income mobility represents the difference in percentiles that an individual moved as compared to their parent; the closest comparison to quintile-based relative mobility is to multiply the coefficient of percentile-based mobility by 20. Absolute income mobility represents the difference between current and past parental income, divided by 100; it therefore represents the inter-generational weekly income gap in 100s of British pounds. The regressions also include controls for 11 possible regions of residence at age 10, along with the following childhood variables measured at age 10: household weekly income, birth weight, gender, maternal education (indicators as to whether she has a degree, a vocational qualification, 'A' levels, 'O' levels, a trade qualification or 'other' qualification), mother's age, maternal employment, paternal education (consistent with the definition of maternal education), father's age, father's employment, household size, household size squared, tenure (lives in a rural area, lives in an urban area, lives in a council estate, lives in a suburb, lives in 'other' area), number of younger siblings, number of older siblings, region of birth, and a dummy indicating whether the child had no father figure. Additionally, we control for the following adult variables measured at age 30: weekly household income at age 30, social class (a set of fixed effects that denote one of the six registrar general social classes), marital status (disaggregated into fixed effects representing married, cohabiting, single and separated/divorced/widowed), whether or not the BCS child has a degree, household size and household size squared. When data at age 10 are missing for mother's or father's education, age, income or employment because the child lives in a single-parent household, the variable is coded as 0 (imputing at the mean does not change the estimates). Note that birth weight was collected at birth. The estimated effect is the OLS regression coefficient. *, ** and *** denote significance at the 0.10, 0.05 and 0.01 levels.
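For concreteness, the standardized magnitudes quoted in the discussion of Table 2 follow directly from the reported coefficient and standard deviation:

\[
\frac{0.832}{1.848} \approx 0.45\ \text{SD per 100 GBP}, \qquad
\frac{2.25 \times 0.832}{1.848} \approx 1.01\ \text{SD at the maximum gap of roughly 225 GBP.}
\]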
However, the effects of downward absolute income mobility on life satisfaction, malaise and GHQ have a more substantive imprint on SWB than this relative measure. In other words, if you are moving upward socially, to get the greatest SWB gains you should be moving on your own within your comparison group (this conclusion is drawn because the coefficients for the relative mobility regressions are larger than those for the absolute mobility regressions when we consider upward mobility). However, absolute income losses relative to your parents are felt more heavily than relative losses in terms of SWB. To reach the equivalent 'pain' in terms of wellbeing deterioration from a relative downward movement, as compared to a 100 GBP weekly decrease in absolute income relative to the cohort child's parent, a person would have to move a large number of income quintiles; for example, the implied movement would be three income quintiles in the case of GHQ.

Table 3 is in the same format as Table 2 and documents the estimates drawing on the BCS data at age 34. Comparing across Table 2 and Table 3, the overall conclusions drawn thus far remain true. First, all of the estimates have the expected sign, implying that downward mobility hurts and upward mobility brings gains to SWB when income movements are relative (we note that all of the estimates for upward absolute income mobility are centered around zero and not significant). The estimates in Table 3 are also of a decent order of magnitude larger for downward losses as compared to upward gains for SWB. These differences are very striking for absolute income mobility (for example, −0.452 vs. 0.002 for life satisfaction), but are still roughly 1.5 times the size when we consider the relative mobility measures. Last, specific to upward mobility, relative income movements matter more than absolute movements; however, the estimates suggest only a modest augmentation of SWB. In contrast, the effect of downward absolute income mobility on life satisfaction, malaise and GHQ leaves a more substantive imprint than the relative measures, and the magnitude is also substantive.

Notes to Table 3: The mobility definitions and the childhood controls are as given in the notes to Table 2. The adult variables are measured at age 34 rather than age 30: weekly household income at age 34, social class (a set of fixed effects that denote one of the six registrar general social classes), marital status (disaggregated into fixed effects representing married, cohabiting, single and separated/divorced/widowed), whether or not the BCS child has a degree, household size and household size squared. When data at age 10 are missing for mother's or father's education, age, income or employment because the child lives in a single-parent household, the variable is coded as 0 (imputing at the mean does not change the estimates). The estimated effect is the OLS regression coefficient. *, ** and *** denote significance at the 0.10, 0.05 and 0.01 levels.
Recall the suggestion that any effects of income mobility on SWB could potentially operate through four non-mutually exclusive channels: (i) stress, (ii) prosperity concerns, (iii) identity and (iv) consumption changes. Table 4 presents some results that allow us to explore these potential pathways further. First, we explore the identity hypothesis drawing on data from the 2000 survey (age 30). Specifically, the BCS child is asked how often they see their mother, with the following response options: (i) more than once a week, (ii) more than once a month, (iii) less than once a month, (iv) never and (v) lives with mother. In Table 4, the results under the heading 'maternal contact regressions' detail results from regressions that add these five fixed effects to the model described in Eq. 1. The intuition is that an individual's 'old' identity will be stronger if an income-mobile individual has kept in closer contact with their own mother, and weaker if they never see her. We note that there will be measurement error in this variable, as it also includes individuals whose mother has passed away, whom we retain so as to keep the same samples as in the previous regressions. However, in a robustness check that excludes individuals whose mother has passed away, there are no significant changes to the coefficients. Two things are worth noting from Table 4. First, maternal contact does not seem to be an important predictor of SWB outcomes. The exception is the group that never see their mother, where the estimate is substantive but only significant for malaise. Second, the estimates for upward and downward mobility are never attenuated when we include these sets of variables, although in a few cases they are augmented. This implies that our identity proxy does not explain the underlying relationship between income mobility and SWB that we documented in Table 3.
The section of Table 4 labelled 'prosperity concerns' explores whether perceptions about financial security can explain the link between SWB and income mobility documented in Tables 2 and 3. Specifically, we add to Eq. 1 a set of fixed effects representing a measure of perceived financial prosperity gathered from the BCS respondents at age 30, which takes on values one through five, representing the response to the question: 'how well are you managing financially these days?'. The options for the BCS respondent are: (1) living comfortably, (2) doing alright, (3) just about getting by, (4) finding it quite difficult or (5) finding it very difficult. From Table 4, we note that prosperity concerns are a viable pathway through which income mobility is operating. In particular, all of the income mobility estimates are attenuated, with most centered around zero and not significant. We note that the estimates for downward mobility across all three income mobility measures remain substantive, although they are attenuated, implying that financial security is a partial pathway through which the estimates documented in Tables 2 and 3 were operating.

Finally, we explore consumption changes by drawing on the intuition that if individuals are consuming less, they will necessarily save more. Using information on savings habits gathered at age 34, we add two variables to Eq. 1: (i) an indicator (yes/no) for whether the BCS respondent saves monthly; and (ii) how much they save monthly in £s (equal to zero if the binary indicator represents no savings). The results from these regressions are documented in Table 4 under the heading 'savings regressions'. First, we note that savers overall have higher levels of wellbeing. Second, adding the savings variables to the regressions does attenuate most of our estimates, although not as substantively as the prosperity channel.
Our work has documented a persistent relationship between income mobility, both relative and absolute, and a variety of SWB outcomes. We have presented evidence that these effects are likely caused by financial perceptions and consumption changes. We note that our proxy for identity changes is less than ideal, and this may explain why we do not find any evidence in favor of this pathway. Given the impact of income mobility on SWB, particularly downward absolute mobility, the last question is whether or not this is a causal relationship. It is feasible that some of the effects we find are determined by characteristics of the individual that make them more likely to be mobile (for example, being more or less gritty). Additionally, individual personality factors may be correlated with the reporting of a certain level of SWB and also with the likelihood of mobility. To consider this possibility further we include some personality proxies in our life satisfaction regressions. We focus on life satisfaction because of data availability for our lagged robustness test (see below). Specifically, we include an index of emotional and behavioral problems at age 10 and age 16. These indexes are labeled as non-cognitive skills (Heckman 2008, Lekfuangfu and Lordan 2018) and are based on the Rutter behavioral problems index. Further, for the life satisfaction outcomes it is possible to add a lagged dependent variable to Eq. 1, as we observe life satisfaction with a lag of four years (that is, at age 26 for the age 30 outcome and at age 30 for the age 34 outcome). Including a lagged dependent variable should control for any negative 'feelings' associated with being mobile, as its information was gathered at a time when the BCS child would already have had some knowledge of their income attainment in comparison to their parents; consequently, any adaptation would already have begun. The results are documented in Table 5 (the specification follows Table 2, and the notes to Table 2 are relevant). When we control for non-cognitive skills in panel 1, the overall conclusions of Tables 2 and 3 are robust, with the estimates not changing substantively. Relative income mobility predicts life satisfaction modestly and significantly, whereas for absolute mobility only downward mobility matters and the implied effects are substantive. Considering the second panel of Table 5, when we add lagged life satisfaction most of the estimates are attenuated; however, they remain significant for downward mobility across all three measures of income mobility. That is, regardless of how we measure downward mobility (relative or absolute), it remains a significant negative predictor of life satisfaction at ages 30 and 34, despite lagged life satisfaction being included as a control variable.

This work has considered income mobility; however, the data at our disposal also include a measure of social class, specifically the Registrar General's definition, which divides individuals into six distinct social classes. Utilizing this information, we can re-estimate Eq. 1 and consider the effects of social class mobility. These estimates are documented in Table 6. A note of caution: unlike our income mobility estimates, which control for both childhood income and adult income, we cannot control for childhood and adult social class. This means our estimates are likely to be upward biased. This problem arises owing to multicollinearity.
Overall, Table 6 suggests that social class mobility does not affect any of our outcomes significantly, allowing us to conclude that income mobility matters more than social class mobility for predicting SWB.

Notes to Table 6: These regressions also include the controls detailed in Tables 2 and 3. The estimated effect is the OLS regression coefficient. *, ** and *** denote significance at the 0.10, 0.05 and 0.01 levels. Birth weight was collected at birth.

Conclusion

In this work, we examine how intergenerational mobility affects SWB drawing on the British Cohort Study. We consider several outcomes that capture life satisfaction and mental health. We define mobility as income movements inter-generationally, both relative and absolute. We define relative mobility based on changes across income quintiles and percentiles. The advantage of the former is that the quintiles are derived from external data that arguably better represent the income distribution in the UK at that time, whereas the latter allows for greater numbers of individuals to be 'winners' and 'losers' (i.e. there is more variation and we can model the intensity of movements). Overall, we find that upward mobility augments SWB and downward mobility deteriorates SWB, with the effects of downward mobility being greater. This is consistent with the theory of loss aversion; essentially, downward mobility hurts more. Interestingly, for upward mobility, relative movements in income matter significantly more than absolute movements, but the estimates imply only modest effects overall. The effect of downward absolute income mobility on SWB has a much more substantive imprint than relative movements. In other words, if you are moving upward socially, to get SWB gains you should be moving a large distance in relative income as compared to your family. In contrast, absolute losses are felt more heavily than relative losses when you are moving down. To give some context, earning £100 less than your parents on a weekly basis gives the same deterioration in life satisfaction as unemployment has been shown to produce elsewhere in the literature. We proposed four pathways through which income mobility effects may operate: (i) stress/alleviation of stress; (ii) prosperity concerns; (iii) changes in sense of identity and (iv) realised or unrealised consumption changes. We do not have data to explore whether (i) is a viable pathway. We have, however, presented highly suggestive evidence that the income mobility effects identified by our models are partially caused by financial perceptions and consumption changes. The effects found for consumption changes echo the importance of research considering consumption data rather than income when exploring the effects of windfalls on SWB. We note that our proxy for identity changes is a very crude measure, and ideally we would have information on changes to social networks. This may explain why we do not find any evidence in favor of this pathway, and considering better quality identity proxies is an area for future research.
Of course, individuals are not randomly assigned to a mobility status. We have tested the sensitivity of our results to alternative specifications and the conclusions we draw remain robust. Unambiguous proof of a causal effect of social mobility requires data that do not exist. We also consider how social mobility measured using the Registrar General's framework affects SWB. We do not find any significant associations between social class mobility and SWB. This contrasts with the results found by Clark and D'Angelo (2009); however, we note that they identify effects of upward class mobility from a comparison with all others, where 'others' includes those who are downwardly mobile. Overall, we conclude that income mobility matters much more than social class mobility for SWB.
A natural question arising from our work is how income mobility should be measured to best capture how a person decides whether they are doing better or worse than their parents. The answer is that we do not know. We do, however, believe that children compare themselves to their parents. Additionally, the significance of the results we present should convince our audience that children make these comparisons based on income and some notion of changes in standard of living.
We are more circumspect in saying anything about the policy recommendations of this research because it raises many normative issues about how to appropriately weigh the many factors that go into the conceptualisation and derivation of the social welfare function. First, it should be noted that income at ages 30 and 34 is also a significant independent predictor of SWB. That is, the mobility estimates we document are already conditional on both personal and childhood income. Therefore, to the extent that you would like the world to remain equitable with respect to who gets this SWB income effect, there is a separate argument for promoting mobility so that different individuals get to experience the SWB effects owed to enjoying higher levels of personal income. Last, much of the deterioration of SWB can be explained by prosperity concerns and a lack of saving among the downwardly mobile that are larger than for others experiencing the same level of income. This suggests a role for policy in helping people to stop living beyond their means.
Appendix A

A.1 The Income Variable

The BCS child's parents in 1980 were asked the following question: 'Please show the following income ranges and ask for the range in which the family's total gross weekly income falls (before deductions). An estimate will be acceptable.' Interviewers were instructed to include all earned and unearned income of both mother and father before deductions for tax, national insurance etc., and to exclude any income of other household members and child benefit.

At ages 30 and 34 the BCS child was asked to state, in £s, both their own and their partner's usual take-home pay. That is, they were asked for the monetary amount that they take home after 'all deductions for tax, National Insurance, union dues, pension and so on, but including overtime, bonuses, commission and tips.' We combine these to get a measure of household income. Specifically, if both are employed we take the simple sum of these incomes. For those households in which only one person works, household income is assigned equal to the value of his/her wages alone.
A.2 Income Mobility Based on Inter-Generational Mobility in Income Quintiles

Our work defines income mobility as the intergenerational movement between income quintiles. For this measure a person is defined as upwardly mobile if they move upward at least one quintile inter-generationally. Conversely, a person is defined as downwardly mobile if they move down at least one quintile inter-generationally. For example, if the BCS child's parents were in the bottom income quintile but the child is in the top income quintile, the child is defined as upwardly mobile. We therefore need to relate the incomes reported in the BCS in 1980, 2000 and 2004 to a relevant income quintile.
We rely on the Family Expenditure Survey to define our income quintiles for 1980. In this case the relevant income quintiles were drawn from the same-year data sets, based on the variable representing gross normal household income. Clearly, the reported bands do not allow us to match these quintiles exactly. However, regardless of whether we define the quintile boundary above or below the reported matched bands, the results are robust. In this work the reported results pertain to the following quintile boundaries: >£55, >£110, >£160 and >£225, and we cut off the bands below each boundary; that is, these boundaries collapse into >£50, >£100, >£150 and >£200.
For 2000 we also rely on the Family Expenditure Survey, and the quintile boundaries used are: >£148, >£281, >£464 and >£719. Because the income data in 2000 are reported as a continuous variable, we can use these quintiles 'as is'. For 2004, the Expenditure and Food Survey had replaced the Family Expenditure Survey, albeit for our purposes similar data were collected. For this year the relevant quintile boundaries are: >£205, >£375, >£579 and >£885.
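A minimal sketch of the quintile assignment implied by these boundaries (the example incomes are hypothetical):

    import bisect

    # Lower boundaries of quintiles 2-5 (GBP per week), as quoted above.
    QUINTILE_BOUNDS = {
        1980: [50, 100, 150, 200],   # collapsed band boundaries
        2000: [148, 281, 464, 719],
        2004: [205, 375, 579, 885],
    }

    def income_quintile(income, year):
        """Return the quintile (1 = lowest) of a weekly income in a given year."""
        return bisect.bisect_right(QUINTILE_BOUNDS[year], income) + 1

    # Example: a parent on 120 GBP/week in 1980 and a child on 800 GBP/week in 2004.
    parent_q = income_quintile(120, 1980)   # -> 3
    child_q = income_quintile(800, 2004)    # -> 4
    upward = int(child_q > parent_q)
    downward = int(child_q < parent_q)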
A.3 Relative Mobility Based on Percentile Differences in Income

While our relative mobility measure based on quintiles has the advantage of not being affected by attrition in the BCS, it also has the disadvantage of throwing away information. We therefore consider a measure that is defined by the BCS data but retains more information. That is, we calculate the difference between the percentile income of the BCS child in adulthood (age 30 and 34) and that of their parents (age 10). Upward mobility is then defined as all positive values of this result, with negative values recoded to zero. Conversely, downward mobility is defined as all negative values of this result, with positive values recoded to zero. This percentile measure of relative mobility has the limitation of being based on a sample that may be biased by attrition; however, it has the advantage of retaining more information than the quintile measure of relative mobility (i.e., this variable denotes the size of percentile movements, whereas the quintile measure is a binary variable that is either 1 or 0).

For the absolute measure, we define mobility as weekly net income from adulthood (age 30 or 34 in 2004 prices) minus weekly net income from childhood (age 10 in 2004 prices). As in the percentile measure, upward mobility is defined as the positive values of this result, with negative values recoded to zero. Similarly, downward mobility is defined as the negative values of this result, with positive values recoded to zero.
A.4 Further Details of Subjective Wellbeing (SWB) Outcome Variables

The life satisfaction outcome variable is based on the response to the following question: 'Here is a scale from 0-10 where '0' means that you are completely dissatisfied and '10' means that you are completely satisfied. Please enter the number which corresponds with how satisfied or dissatisfied you are about the way your life has turned out so far.'
At age 30 we also proxy SWB with a measure of mental health, specifically the Rutter Malaise Inventory (Rutter et al. 1970), which is a set of 24 questions that combine to measure levels of psychological distress and depression. At age 30, its scores range from 0 to 24, with each question scoring a value of 1. The index is derived through the number of yes scores to having backaches, feeling tired, feeling miserable and depressed, having headaches, worrying, having difficulty in falling asleep or staying asleep, waking unnecessarily early in the morning, worrying about health, getting into a violent rage, getting annoyed by people, having twitches, becoming scared for no reason, being scared to be alone, being easily upset, being frightened of going out alone, being jittery, suffering from indigestion, suffering from upset stomach, having poor appetite, being worn out by little things, experiencing racing heart, having bad pains in your eyes, being troubled by rheumatism, and having had a nervous breakdown.
For age 34, only nine of the questions usually asked in the Rutter Malaise Inventory were included in the survey and we therefore rely on a sub-index, which takes on values from 0 through 9. We derive this sub-malaise index by aggregating the number of yes responses to: feeling tired, feeling miserable and depressed, worrying, getting into a violent rage, becoming scared for no reason, being scared to be alone, being easily upset, being jittery, suffering from indigestion, suffering from upset stomach, having poor appetite, being worn out by little things, experiencing racing heart.
We also measure mental health using the 12-item version of the General Health Questionnaire (GHQ) at age 30. The GHQ is a commonly used self-reported measure of mental health and consists of questions regarding the respondent's emotional and behavioral health over the past few weeks. Each response to the GHQ garners one point, yielding a score that can potentially range from 0 to 12. The 12 items in the GHQ are: ability to concentrate, sleep loss due to worry, perception of role, capability in decision making, whether constantly under strain, problems in overcoming difficulties, enjoyment of day-to-day activities, ability to face problems, whether unhappy or depressed, loss of confidence, self-worth, and general happiness. For each of the 12 items, the respondent indicates on a four-point scale the extent to which they have been experiencing a particular symptom. For example, the respondent is asked 'have you recently felt constantly under strain', to which they can respond: not at all (a score of 0), no more than usual (1), rather more than usual (2), much more than usual (3).
The GHQ is not available at age 34, but this wave of the survey did include four questions usually included in the Kessler scale, an alternative proxy for mental health. The Kessler scale in the BCS data has 6 items, whereas the full scale is a 10-item questionnaire (Kessler et al. 2002). We follow the same method used to aggregate the 10-item index when creating the sub-index we rely on in this study. The specific questions ask how often, during the last 30 days, the respondent felt (i) so depressed that nothing could cheer them up; (ii) hopeless; (iii) restless or fidgety; (iv) that everything was an effort. The possible responses are: all of the time (a score of 1), most of the time (2), some of the time (3), a little of the time (4) and none of the time (5). This results in an index with a range between 4 and 20, with 20 being the best outcome with respect to mental health.
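A minimal sketch of the sub-index scoring described above (under the response coding quoted in the text):

    def kessler_subindex(responses):
        """Sum four Kessler items, each coded 1 ('all of the time') to 5
        ('none of the time'); under the coding quoted in the text, higher
        totals indicate less frequent distress."""
        assert len(responses) == 4 and all(1 <= r <= 5 for r in responses)
        return sum(responses)

    # Example: mostly 'none of the time' answers give a score near the top of 4-20.
    score = kessler_subindex([5, 4, 5, 5])  # -> 19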
Porous tantalum-composited gelatin nanoparticles hydrogel integrated with mesenchymal stem cell-derived endothelial cells to construct vascularized tissue in vivo
Abstract The ideal scaffold material for angiogenesis should have mechanical strength and provide appropriate physiological microporous structures to mimic the extracellular matrix environment. In this study, we constructed an integrated three-dimensional scaffold material using porous tantalum (pTa) and gelatin nanoparticles (GNPs) hydrogel, seeded with bone marrow mesenchymal stem cell (BMSC)-derived endothelial cells (ECs), for vascular tissue engineering. The characteristics and biocompatibility of pTa and GNPs hydrogel were evaluated by mechanical testing, scanning electron microscopy, cell counting kit, and live-cell assay. The BMSCs-derived ECs were identified by flow cytometry and angiogenesis assay. BMSCs-derived ECs were seeded on the pTa-GNPs hydrogel scaffold and implanted subcutaneously in nude mice. Four weeks after the operation, the scaffold material was evaluated by histomorphology. The pTa-GNPs hydrogel scaffold showed superior biocompatibility. Our in vivo results suggested that, 28 days after implantation, the formation of a stable capillary-like network in the scaffold material was promoted significantly. The novel, integrated pTa-GNPs hydrogel scaffold is biocompatible with the host and exhibits biomechanical and angiogenic properties. Moreover, combined with BMSCs-derived ECs, it could be used to construct vascularized engineered tissue in vivo. This study may provide a basis for applying pTa in bone regeneration and autologous BMSCs in tissue-engineered vascular grafts.
Introduction
Tissue engineering (TE) is expected to become a new approach for creating alternative tissues to treat congenital deficiencies or pathological tissues. For tissue regeneration, the key problem in TE is vascular deficiency within the engineered structure [1]. Given the limitation of diffusion distance, rebuilding a network of blood vessels to supply nutrients, exchange gases, and eliminate waste products is challenging [2,3]. To overcome the problem of vascularization, many scholars have proposed numerous methods, such as embedding angiogenic factors into stents to advance the growth of microvessels, manufacturing techniques for developing polymers that include vascular-like networks, and vascularization of the matrix before cell inoculation [4]. A three-dimensional (3D) scaffold is a critical element in TE research. It provides the matrix for cellular anchorage, migration, and growth. In addition, it promotes neovascularization and tissue regeneration by supplying nutrients and oxygen. Furthermore, the fibrous porous structures of a 3D scaffold can mimic the extracellular matrix (ECM) environment and significantly improve cell functions, including angiogenesis [5,6].
Tantalum (Ta) has received extensive attention in biomedicine due to its excellent biocompatibility and chemical stability [7]. Since 1940, Ta has been used in clinical practice and widely applied in diagnostics and implantation, such as radiographic markers, vascular clips, endovascular stents, cranioplasty plates, orthopedics, and dental implants [8]. Compared with traditional metal implant materials (e.g. stainless steel, titanium alloys and cobalt-based alloys), Ta has superior ductility, toughness, biocompatibility, and high corrosion resistance. Among biomaterials, porous metal materials have attracted extensive attention. Porous tantalum (pTa) has been regarded as an ideal orthopedic implant material with promising compatibility and mechanical support [9]. Clinical studies have shown that pTa implant material has good biocompatibility and integrates well with bone tissue; in addition, this combination has superior long-term stability and rarely loosens [10,11]. Angiogenesis is the primary element of bone regeneration; intramembranous and endochondral ossification occur around growing vascular tissues [1]. In bone regeneration, such as the repair of bone defects, the ideal scaffold should have fibrous porous structures within which revascularization and new bone formation can proceed [12]. However, pTa does not have an appropriate structure to mimic the ECM microstructure. It therefore lacks the capacity to promote angiogenesis, which limits its application in bone regeneration.
Recently, many investigations have made attempts at vascularized structures in vitro. One method is to inoculate seed cells on a suitable scaffold with adequate mechanical properties, stimulating rapid cell growth and directed differentiation in vitro. When implanted in vivo, the designed structure then undergoes reconstruction and maturation to repair or form functional tissue [13]. The inherent angiogenic ability of endothelial cells (ECs) can be used to avoid angiogenesis induced by prefabricated channels or growth factors. However, the clinical application of mature ECs from autologous vascular tissue has certain limitations: (i) the isolation process requires an invasive operation; (ii) the proliferation potential of mature ECs is relatively low; and (iii) obtaining sufficient cells from small quantities of autologous tissue biopsies is challenging. These limitations have prompted the search for EC sources with greater proliferative and angiogenic abilities. Bone marrow-derived hematopoietic stem cells (HSCs), endothelial progenitor cells, mononuclear cells, and mesenchymal stem cells (MSCs) can differentiate into ECs. Meanwhile, bone marrow mesenchymal stem cells (BMSCs) are pluripotent precursors that can differentiate into various cell types derived from mesoderm, including osteocytes, chondrocytes, adipocytes, and stromal cells [14]. Nevertheless, the angiogenic potential of BMSCs-derived ECs in 3D scaffold materials in vivo is rarely reported.
Colloidal gels are a special class of hydrogel materials with various applications, including ceramic processing, the food industry, and biomedical engineering. Previous studies confirm that hydrogels provide a physiologically relevant microenvironment for cell and tissue regeneration [15,16]. However, traditional permanent hydrogels with high cross-linking density cannot adapt to the complexity of the local environment and the irregular structures of bone, because their elasticity dominates over viscoelasticity [17]. Adaptable hydrogels can exhibit ideal viscoelasticity and withstand complex mechanical environments and irregular shapes because of the reversibility of their physical network connections [18]. Previously we reported a novel class of colloidal gels assembled from gelatin nanoparticles (GNPs) by electrostatic attraction. These gels are composed of a porous particulate network dispersed in a continuous phase of an aqueous solvent. This particulate microstructure gives colloidal gels superior properties such as tailorable viscoelasticity, the capacity to incorporate other components, and fascinating mechanical behavior. The reversible nature of the interparticle forces renders the colloidal gels shear-thinning and self-healing, thereby enabling these gels to be injectable and moldable. More importantly, GNPs within the colloidal gels can sustain the release of therapeutics, including proteins or small-molecule drugs, which is an attractive feature for regenerative medicine [19,20]. However, adapting to the mechanical environment and local structure in vivo remains a challenge.
Here, to expand the application of pTa in bone regeneration, we constructed artificial cellular scaffolds that could mimic the microstructure of natural tissue for vascular TE applications. In the present study, the combination of pTa and GNPs hydrogel was designed to construct a 3D scaffold material expected to combine the advantages of the two materials, providing both mechanical strength and a physiologically relevant microenvironment for cell and tissue regeneration. The characteristics and biocompatibility of the pTa and GNPs hydrogel scaffold were evaluated by mechanical testing, scanning electron microscopy (SEM), cell counting kit, and live-cell assay. We then combined the pTa-GNPs hydrogel scaffold with endothelial-differentiated BMSCs and, using an in vivo animal study, determined whether the BMSCs-derived ECs could promote angiogenesis in the scaffold.
Materials and methods

Ta sample preparation and morphological analysis
The columnar and disc-shaped pTa samples were designed with computer-aided design (CAD) software. The unit cell structure (body-centered cubic; Fig. 1a) was imported as an .stl file, and the sample model was generated in the CAD software. Porous Ta samples were then fabricated by selective laser melting (SLM; AM400, Renishaw, UK) based on the predesigned CAD data.
The laser power and laser scanning speed were set at 320 W and 500 mm/s, respectively, and the Ta powder layer thickness and hatching distance were 30 and 70 μm, respectively. The pTa cylinders (Φ6 mm × H 10 mm) and pTa discs (Φ6 mm × H 2 mm; Fig. 1b and c) were used for mechanical testing and biological experiments, respectively. The disc samples were autoclaved at 121 °C for 30 min in preparation for the cytotoxicity and in vivo experiments. By design, the porosity, strut size, and pore diameter of the pTa were 88.6%, 430 μm, and 1300 μm, respectively. The gravimetric method was used to calculate the porosity of the pTa cylindrical samples [21]. A SEM (SU3500, Hitachi, Japan) was used to examine the size and morphology of the samples, and ImageJ 1.8 (National Institutes of Health, USA) was used to analyze the data.
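As an illustration of the gravimetric method, a minimal sketch assuming porosity is one minus the ratio of the sample's apparent density to the density of solid tantalum (16.69 g/cm³); the example sample mass is hypothetical:

    import math

    TA_DENSITY = 16.69  # g/cm^3, density of solid tantalum

    def gravimetric_porosity(mass_g, diameter_mm, height_mm):
        """Porosity = 1 - apparent density / solid density for a cylinder."""
        r_cm = diameter_mm / 20.0          # mm diameter -> cm radius
        volume_cm3 = math.pi * r_cm**2 * (height_mm / 10.0)
        apparent_density = mass_g / volume_cm3
        return 1.0 - apparent_density / TA_DENSITY

    # Example: a 6 mm x 10 mm cylinder weighing 0.54 g gives ~88.6% porosity.
    print(gravimetric_porosity(0.54, 6.0, 10.0))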
Mechanical testing
The mechanical properties of five cylindrical pTa samples (Φ6 mm × 10 mm) were evaluated by compression testing. A universal testing machine (ETM 503A, Wance, China) was used to conduct the mechanical tests at a constant deformation rate of 1 mm/min. The compressive elastic modulus was determined from the linear region of the stress–strain curve.
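A minimal sketch of extracting the compressive modulus from the linear region of a stress–strain curve (synthetic data; the strain window chosen for the fit is an assumption, not the study's protocol):

    import numpy as np

    # Synthetic stress-strain data for illustration (strain dimensionless, stress in MPa).
    strain = np.linspace(0, 0.05, 100)
    stress = 800 * strain + np.random.default_rng(1).normal(0, 0.2, 100)

    # Fit a line over an assumed linear region (here 0.5%-3% strain);
    # the slope is the compressive elastic modulus.
    mask = (strain >= 0.005) & (strain <= 0.03)
    modulus_mpa, _ = np.polyfit(strain[mask], stress[mask], 1)
    print(f"E = {modulus_mpa / 1000:.2f} GPa")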
The rheological properties of the GNPs were measured with a rheometer (DHR, TA Instruments, USA). All measurements were performed using a flat plate with a diameter of 20 mm at 25 °C with a gap distance of 800 μm. First, the storage modulus G′ and loss modulus G″ were determined using an oscillatory time sweep test for 300 s at a constant strain of 0.5% and a constant frequency of 1 Hz. Subsequently, a frequency sweep measurement was performed at a constant strain of 0.5% over an angular frequency range from 0.628 to 628.319 rad/s. To measure the viscosity, the gels were subjected to steady rate sweeps within a shear rate range of 0.1–100 s⁻¹. The self-healing properties of the sample were quantitatively characterized by monitoring the evolution of the storage (G′) and loss (G″) moduli of the gels during multiple cycles of destructive shearing (oscillatory strain sweep with increasing strain from 0.1% to 500% at a fixed frequency of 1 Hz) and recovery (oscillatory time sweep at 0.5% strain and a frequency of 1 Hz for 200 s).
Preparation of GNPs
The two-step desolvation method was used to prepare GNPs [5]. Briefly, 1 g gelatin was dissolved in 20 ml of distilled water with constant heating, and 25 ml of acetone was added to precipitate the high-molecular-weight gelatin chains and obtain a gelatin solution (5% w/v). After removing the supernatant, the gelatin was redissolved in water at 40 °C, and the pH of the gelatin solution was adjusted to 2.5. Subsequently, GNPs were formed by adding 75 ml of acetone to the gelatin solution under vigorous stirring (1000 rpm) at a constant rate of 3.75 ml/min. Afterward, 660 μl of 25 wt% glutaraldehyde was added to stabilize the GNPs. Unreacted aldehyde groups were blocked by adding 100 ml of 100 mM glycine solution after 16 h of cross-linking of the GNPs. The suspension was then centrifuged at 5000 rpm for 60 min and resuspended in deionized water by vortexing. After three cycles of washing, the suspension was adjusted to pH 7.0 and freeze-dried for 48 h. The GNPs thus obtained were sterilized by gamma irradiation.
Preparation of pTa-GNPs hydrogel scaffold

A 0.13 g portion of freeze-dried GNPs and 1 ml of sterile water were loaded into a syringe and mixed by repetitive extrusion; the injectable and moldable hydrogel formed within 30 s. The pTa was filled with the injectable hydrogel using the syringe. The morphology of the scaffold material was observed using SEM. The scaffold material was frozen and fractured at −80 °C, followed by freeze-drying to remove the water from the hydrogel. To evaluate the surface morphology of the scaffold material, the cross-section was coated with gold to improve conductivity.
Isolation, culture, and identification of BMSCs
BMSCs were isolated from the bone marrow of the tibia and femur of a 6-week-old athymic nude mouse. The extracted marrow was centrifuged with heparin, then mixed and resuspended in a solution containing 10% fetal bovine serum (FBS; Corning, USA), DMEM/F12 (HyClone, USA), and 1% penicillin-streptomycin (HyClone, USA). After 2 days, the culture was rinsed with phosphate-buffered saline (PBS; HyClone, USA) to eliminate floating cells, and the culture medium was replaced.
Osteogenic and adipogenic differentiation experiments were used to evaluate the differentiation capacity of the BMSCs. The BMSCs at the 4th passage were used for induction experiments. The BMSCs at the 3rd passage were harvested and cultured in growth medium at a density of 5 × 10⁴ cells/well in six-well plates. The osteogenic induction medium was used for BMSCs identification after confluence in the growth medium [22].
Alkaline phosphatase (ALP) and Alizarin Red staining were performed on Days 14 and 21 of induction culture, respectively. For Alizarin Red staining, the medium was removed, and the cell layer was rinsed with PBS and fixed in 4% paraformaldehyde solution at room temperature for 20 min; the fixed cells were then incubated with 0.1% Alizarin Red S at 37 °C for 30 min. For ALP staining, the BCIP/NBT ALP Color Development Kit was used to incubate the fixed cells at 37 °C for 30 min. For adipogenesis, BMSCs at the 3rd passage were cultured in adipogenic differentiation medium at a density of 5 × 10⁴ cells/well in six-well plates. After 3 weeks of culture, oil red O staining was used to assess the adipogenic ability of the BMSCs.
Endothelial cell differentiation and identification
The BMSCs at the 3rd passage were harvested and cultured with the growth medium at a density of 5 × 10⁴ cells/well in six-well plates.
The cells were then cultured for 7 days in induction medium containing 50 ng/ml VEGF (PeproTech, USA) and 5% FBS.
In vitro analysis of capillary formation was performed using an Angiogenesis Assay Kit (Merck Millipore, Germany). One well of a 96-well plate was filled with 50 μl of gel matrix solution and incubated at 37 °C for 60 min. Endothelial-differentiated BMSCs (5 × 10³ cells) were suspended in 150 μl of induction medium, plated onto the gel matrix, and incubated for 12 h.
Immunofluorescence was performed to detect the expression of CD31, an endothelial-specific marker, in endothelial-differentiated BMSCs. First, the endothelial-differentiated BMSCs were fixed with 4% paraformaldehyde for 5 min and permeabilized with 0.2% Triton for 30 min. After blocking for 60 min with 10% FBS, the primary antibody rabbit anti-mouse CD31 (1:100 dilution; ab222783, Abcam, UK) was added and incubated overnight at 4 °C. Next, the cells were washed with PBS and incubated with the secondary antibody Alexa Fluor 488-conjugated goat anti-rabbit (1:200 dilution; ab150077, Abcam, UK). Finally, nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI), and fluorescent images were obtained using laser scanning confocal microscopy (Eclipse Ti, Nikon, Japan).
Cytotoxicity of the pTa by cell counting kit-8
The cytotoxicity of the pTa was evaluated using BMSC culture. BMSCs at the 3rd passage were seeded at a concentration of 5.0 × 10⁴ cells/ml in the wells of a 24-well plate, and a sterilized pTa scaffold was added to each well. No pTa sample was added in the control group. After 1, 3, 5, and 7 days of culture, the pTa scaffold and the medium were removed, and fresh culture medium with 10% cell counting kit-8 reagent (CCK-8; Dojindo, Japan) was added to each well and incubated for 120 min. The medium containing 10% CCK-8 solution was then transferred to a 96-well plate and measured with a microplate reader (Epoch 2, BioTek, USA) at 450 nm.
Characteristics of cell adhesion on pTa in vitro
Cell adhesion was evaluated by direct contact assay, with BMSCs employed to examine the cytocompatibility of pTa. BMSCs at 1 × 10⁵ cells/ml were suspended and seeded on pTa. On day 7 of co-culture, the samples were rinsed with PBS and fixed with 2% glutaraldehyde for 120 min. The samples were then serially dehydrated and critical-point dried. To improve conductivity, the samples were coated with gold. The surface morphology was observed using SEM.
Live cell assay
GNPs hydrogel was loaded into the wells of a 24-well culture plate, and BMSCs at 5 × 10⁵ cells/ml were suspended and inoculated onto the surface of the GNPs hydrogel. Uniform colonization was assessed by live-cell assay (Molecular Probes, USA) after 1, 3, and 5 days of co-culture. The scaffold material was covered with calcein AM solution (2 μM) and incubated for 30 min in the dark. The samples were imaged using laser scanning confocal microscopy (Eclipse Ti, Nikon, Japan).
In vivo vasculogenesis experiments
The experimental scheme was carried out according to the China Animal Research Guidelines, and all animal procedures were approved by the Animal Ethics Committee of Dalian University. All animals received humane care in compliance with the 'Guide for Care of Laboratory Animals' issued by the National Ministry of Science. Twenty 6-week-old male athymic nude mice (30 ± 5 g) were used in this study. The pTa scaffold material was filled with composited GNPs hydrogel (13% w/v). Endothelial-differentiated BMSCs and BMSCs dispersed as individual cells in a 50:50 ratio, 100% endothelial-differentiated BMSCs, or 100% BMSCs were prepared for cell-type control experiments. Cells of the different combinations were suspended at a concentration of 5 × 10⁵ cells/ml and seeded on the pTa-GNPs hydrogel scaffold. The control group was a pTa-GNPs hydrogel scaffold without cell implantation (Table 1). After 24 h of co-culture, the composite scaffold materials were divided into four groups and implanted in the subcutaneous tissue of the back through a subcutaneous incision. Two composite scaffold materials were implanted per mouse (Fig. 2). Each experimental group contained five mice. The general experimental protocol is shown in Schemes 1 and 2.
Histology
All mice in each group were sacrificed 28 days after surgery. The subcutaneous samples were collected and fixed in 10% formaldehyde solution in phosphate buffer (pH 7.4). The specimens were then dehydrated in a graded ethanol series and embedded in plastic for hard-tissue sectioning. Sections were cut at 30 μm thickness for Van Gieson's stain. Four visual fields per slice were chosen randomly and imaged with an optical microscope (BX53, Olympus, Japan).
Microvessel density analysis was used to evaluate the angiogenic ability of the samples. Vascular luminal structures were identified as microvessels and counted. Image analysis was used to estimate the area of the hydrogel. To calculate the microvessel density of each sample, the total number of microvessels was divided by the total area of hydrogel (expressed as vessels/mm²).
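A minimal sketch of this computation (the per-field vessel counts and hydrogel areas are hypothetical outputs of the image analysis):

    def microvessel_density(vessel_counts, hydrogel_areas_mm2):
        """Total microvessels divided by total hydrogel area (vessels/mm^2),
        pooled over the randomly chosen fields of a sample."""
        return sum(vessel_counts) / sum(hydrogel_areas_mm2)

    # Example: four fields from one slice.
    density = microvessel_density([12, 9, 15, 11], [1.8, 1.6, 2.0, 1.7])
    print(f"{density:.1f} vessels/mm^2")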
Statistical analysis
All results are expressed as mean ± standard deviation (SD). Group data were analyzed by analysis of variance (ANOVA) followed by post-hoc multiple comparisons using Tukey's test. P values < 0.05 were considered statistically significant.
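A minimal sketch of this pipeline with SciPy and statsmodels, using synthetic group data:

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(2)
    # Synthetic microvessel densities for groups A-D (vessels/mm^2), n = 5 each.
    groups = {g: rng.normal(mu, 1.5, 5) for g, mu in zip("ABCD", [5, 6, 10, 14])}

    # One-way ANOVA across the four groups.
    f_stat, p_value = stats.f_oneway(*groups.values())

    # Tukey's HSD post-hoc pairwise comparisons at alpha = 0.05.
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), 5)
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))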
Results

Morphology and mechanical properties of pTa scaffolds
The porosity and pore size of the pTa scaffold samples corresponded with the intended design (Table 2). The compression stress and elastic modulus are shown in Table 3, and the stress–strain curve of pTa is analyzed in Fig. 3b. The surface morphology of pTa was observed by SEM, and the dark grey metallic luster was visible (Fig. 3c and d).
Characteristics and morphology of pTa-GNPs hydrogel scaffold
GNPs form a rather elastic but also self-healing and shear-thinning colloidal gel on account of the cohesive interactions between globally positively charged but locally amphoteric GNPs [20,23]. This allows the colloidal gel of GNPs to be extruded through a syringe and easily mixed with other materials for molding. The rheological properties of the GNPs colloidal gel were characterized by rheometer. The modulus of the GNPs colloidal gel was time-independent (Fig. 4a).

Scheme 2. Schematic for constructing vascularized scaffolds in mouse subcutaneous tissue with the pTa-GNPs hydrogel scaffold combined with BMSCs and BMSCs-derived ECs. BMSCs were harvested from mouse tibia and femur. pTa-GNPs hydrogel scaffolds were seeded with BMSCs and BMSCs-derived ECs, respectively. The composite material was implanted in mouse subcutaneous tissue for 4 weeks.
The storage modulus G′ was about 4.5 kPa, and the loss modulus G″ was 0.1 kPa. The oscillatory frequency sweep tests indicated that the modulus of the GNPs was weakly frequency-dependent under constant shear (Fig. 4b); the storage modulus G′ increased from 4.5 to 5.5 kPa. The viscosity of the GNPs colloidal gel decreased with increasing shear rate (Fig. 4c). Moreover, the GNPs colloidal gel revealed a robust degree of mechanical recovery after network destruction (Fig. 4d). As shown in Fig. 4d, in region I the GNPs colloidal gel showed higher G′ than G″. Subsequently, in region II, a destructive shear strain led to network destruction and the transformation from a solid gel to a liquid-like material (G″ > G′). Upon release of the destructive shear (recovery at 0.5% strain for 200 s), the GNPs immediately recovered more than 90% of the initial G′ value.
In this study, we prepared the gel at a GNPs concentration of 13% w/v to obtain a colloidal gel with high mechanical strength; the high concentration could eliminate structural changes after shear-induced mixing. The GNPs hydrogel was injectable and adaptable to irregular shapes before completely solidifying (Fig. 4e). The SEM images showed the GNPs hydrogel tightly bonded to the pTa (Fig. 4h). The surface morphology, with a pore diameter of 76 ± 15.3 μm, and the representative microstructures of the GNPs hydrogel were observed (Fig. 4i).
Identification of BMSCs
The typical adherent spindle-like shape of the cells isolated from mice tibia and femur marrow samples was imaged using a microscope ( Fig. 5a; CKX41, Olympus, Japan). The BMSC markers CD44 and CD90 were expressed by the extracted cells, but CD34a and CD45 were rarely expressed (Fig. 5b).
The results indicated that the cells isolated from the marrow expressed alkaline phosphatase after 2 weeks of culture in an osteogenic induction medium (Fig. 5c). After 3 weeks of culture in osteogenic induction medium, Alizarin Red staining showed calcified nodules, indicating that the isolated cells had reached the stage of matrix mineralization (Fig. 5d). Oil red O staining showed intracellular lipid droplets after 3 weeks of culture in adipogenic induction medium (Fig. 5e).
Angiogenesis evaluation
Compared to the isolated cells, after 7 days of culture in the vascularized induction medium the cells appeared bigger and rounder, with shrunken cytoplasm spreading in a cobblestone-like pattern (Fig. 6a). Immunofluorescent staining for CD31 was carried out for the basal characterization of ECs. For CD31 staining, BMSCs showed almost no specific signal (Fig. 6b). However, the fluorescence intensity of endothelial differentiated BMSCs increased significantly after 7 days of induction (Fig. 6c).
In the BMSCs group, flow cytometry analysis showed that the percentage of CD31+ cells was 0.3%, whereas 31.5% CD31+ cells were observed in the endothelial differentiated BMSCs group (Fig. 6d and e).
The in vitro angiogenesis kit was applied to investigate the ability of BMSCs and endothelial differentiated BMSCs to form capillaries in a semi-solid medium. Two kinds of cells were seeded on EC matrix gel solution. After 12 h, nearly 97% of cells were found rounded in the BMSCs group (Fig. 6f). However, differentiated endothelial BMSCs showed visible tube-like structures (Fig. 6g).
Adhesion and proliferation of BMSCs on pTa in vitro
The adhesion and morphology of BMSCs were observed by SEM. After 3 days of co-culture, the cells were attached to the surface of pTa, displayed an active morphology, and their pseudopodia spread well (Fig. 7a). These results indicate the superior cytocompatibility and adhesiveness of pTa. We expected that the pTa would gradually be covered by the proliferating cells.
The proliferation of cells on pTa was analyzed. CCK-8 results showed that the proliferation of BMSCs co-cultured with pTa for 1, 3, 5 and 7 days was not inhibited compared with the control group (P > 0.05, Fig. 7b). We concluded that the pTa scaffold had no cytotoxicity toward BMSCs.
Live cell assay
BMSCs were seeded on GNPs hydrogel, and a live-cell assay was performed after 1, 3 and 5 days. The bottom scaffold surface of each sample was imaged. On day 1 of co-culture, a small number of BMSCs were adherent to the GNPs hydrogel (Fig. 7c). On day 3 of co-culture, the BMSCs adhered nonhomogeneously to the GNPs hydrogel, and several cell aggregates had formed (Fig. 7d). A trend of cell proliferation on the GNPs hydrogel was observed on day 5 of co-culture (Fig. 7e).
Histological evaluation
After 28 days of implantation, four groups of the subcutaneous samples in mice were taken out. Van Gieson's staining results showed the difference in the degree of vascularization in vivo (Fig. 8a-d).
Quantitative analysis of vascular lumens determined the microvessel density, and the density of the four groups is shown in Fig. 8e. Pairwise comparisons among groups B, C, and D were statistically significant (P < 0.05). Compared with group A, group B had similar microvessel density without significant difference (P > 0.05), whereas both groups C and D had significantly higher microvessel density than group A (P < 0.05).
Discussion
Tissue vascularization is the major challenge for the clinical translation of TE. This study aimed to construct artificial cellular scaffolds with mechanical strength and an appropriate physiological microenvironment to mimic the microstructure of natural tissue, for pTa application in vascularized TE.
In summary, we developed a 3D pTa-GNPs hydrogel scaffold, which is biocompatible with the host and exhibits biomechanical behavior. Furthermore, we explored the methods of endothelial induction of BMSCs and 3D co-culture with the scaffold. Then, the angiogenic properties of pTa-GNPs hydrogel scaffold combined with BMSCs-derived ECs in vivo models were assessed.
The 3D printing technology has developed rapidly in recent years and is widely used in TE. It offers superior flexibility in manufacturing 3D samples with complex geometries [24]. Therefore, TE is constantly developing novel scaffold materials to improve the corresponding mechanical properties. In addition, 3D printing can directly use biomaterial powder to manufacture implants with complex structures to substitute tissue and achieve the desired effect through precise structural design [25,26]. Given its excellent tissue compatibility, high porosity, appropriate surface friction coefficient, and low elastic modulus, pTa has been recognized as an ideal orthopedic implant material. Moreover, with the continuous progress of 3D printing technology, pTa has a broad scope of applicability [27]. Wauthle et al. used SLM technology to manufacture highly porous Ta with entirely interconnected open pores, which approximated the mechanical properties of human bone; their pTa had excellent osteoconductive properties, normalized fatigue strength, and high strength ductility [28]. Wang et al. compared Ti and Ta samples manufactured by SLM with the same computer-designed pore structure as a traditional porous Ti implant, and suggested that pTa has the same promotion effect on bone fixation and equal biological performance [29]. Consistent with our previous studies [30], most studies of pTa as a biomaterial have focused on compatibility and mechanical support. The key problem for bone regeneration in vivo is vascular deficiency in the engineered structure [1]. However, pTa has no microstructure to mimic the ECM environment and lacks the characteristics to promote vascularization. Therefore, we used SLM technology to manufacture pTa with a specific structure and added a suitable filling material in the macropores of pTa to construct a suitable microstructure.
Hydrogels with local adaptability and long-term volume stability, usually formed by reversible interactions (such as electrostatic interactions, hydrogen bonds, hydrophilic/hydrophobic interactions, and host-guest interactions), have become attractive biomaterials [31]. According to our previous studies [32], the GNPs hydrogel exhibited a network mechanism, and the self-assembled colloidal network could disperse biological stress. The scaffold of gelatin colloidal gels contains a porous but interconnected particulate network dispersed in the aqueous solvent. In TE, nanostructured colloidal gelatin gels are excellent carriers that can promote the programmed and sustained release of multiple proteins and also support cells to adhere, spread, and proliferate in vitro. However, due to the inherent weakness of reversible bonds, these adaptable hydrogels show insufficient mechanical strength, limiting their wide application as biomaterials in vivo.
Therefore, we manufactured a new type of 3D scaffold material combining pTa and GNPs hydrogel, to give the material sufficient mechanical strength for cell and tissue regeneration while providing a physiologically relevant microenvironment. We designed the pTa parameters in this study as 1300 µm (pore size), 430 µm (strut size), and 88.6% (porosity). Mechanical testing showed that the compressive strength and the elastic modulus were 8.01 MPa and 1.12 GPa, respectively. This design can optimize mechanical properties and provide enough space for filling materials to promote vessel formation and growth. Meanwhile, SLM technology was used to control the structure of pTa precisely. Rheological characterization confirmed that the GNPs colloidal gel was a typical viscoelastic gel-like material. The excellent combination of pTa scaffold and gel was attributed to the viscoplasticity of the gel. Shear-thinning and self-healing behavior of GNPs was observed. Due to the self-healing behavior, the GNPs colloidal gel could be injected through a syringe needle. This property also means that it remained resident in the implantation site. The scaffold material should maintain an interconnected and continuous network of pores, which is the critical factor for uniform cell seeding and for promoting nutrient transportation and waste removal, thus improving the vascularization performance of the scaffold material [33]. The surface morphology of GNPs and the representative microstructures were observed using SEM, showing a porous structure with a pore diameter of 76 ± 15.3 µm. The pore size of the scaffold material affects cell adhesion and vascular growth [34]. Previous studies have shown that a scaffold pore size larger than 50 µm allows the diffusion of nutrients and oxygen and the excretion of metabolites, and that porous material with a pore size of 100–200 µm is conducive to vascularization [35,36]. According to our previous studies [20,37], GNPs colloidal gel had superior performance combining mechanical properties and self-healing capacity. Moreover, it was developed by elaborate control of particle assembly, a basic understanding of the formation mechanism, and accurate adjustment of the structure and composition of the gel network. The organic and inorganic colloidal building blocks were composed of amphoteric soft GNPs and negatively charged hard silica nanoparticles, respectively. When the net charge of the GNPs alters from negative to positive, the self-assembly reaction between the oppositely charged GNPs and silica is triggered. Although long-range attractive electrostatic interactions caused the formation of the gel network, the subsequent formation of additional short-range particle interactions (such as hydrogen bonds and van der Waals forces) exceeded the repulsive electrostatic forces, maintaining the integrity of the gel network.
The biocompatibility of an implant can be assessed by the interaction between the implant and BMSCs [38]. Cell adhesion and proliferation assays were performed to assess the interactions between the material and BMSCs, evaluated by SEM cell morphology, CCK-8 kit, and live-cell assay. Our results showed that the pTa and GNPs hydrogels have superior biocompatibility for cell adhesion, viability, and proliferation.
Previous studies have shown that it is possible to establish a microvascular network using mature ECs derived from vascular tissue. However, the clinical application of ECs taken from autologous vascular tissue is limited because there is no program to obtain sufficient cells [2]. MSCs have great potential in regenerative medicine and have application value in complex TE [14]. Many studies report BMSCs for treating bone and cartilage defects, as well as damaged myocardium after acute myocardial infarction [39]. Studies have confirmed that BMSCs can differentiate into different kinds of connective tissue cells, including osteocytes, chondrocytes, and adipocytes, and also into ECs and vascular smooth muscle cells [40]. In this study, the expression patterns and the multiple lineage differentiation capacities of BMSCs were assessed by flow cytometry and differentiation experiments, respectively. The results showed that our extracted cells were identified as BMSCs with the potential for multiple differentiation. We successfully induced BMSCs into EC-like cells in the induction medium. The expression of the EC surface marker CD31 was detected by flow cytometry. The endothelial differentiated BMSCs formed visible tube-like structures, demonstrating functional properties of ECs.
BMSCs cannot differentiate into blood vessels spontaneously in vivo; however, recent evidence suggests that they can differentiate into perivascular cells in vivo to promote angiogenesis. Therefore, BMSCs play the role of pericytes in the formation of new blood vessels, and can stabilize and maintain the development of the vascular system [41]. Furthermore, there is a heterogeneous interaction between BMSCs and ECs, providing survival advantages for ECs and perivascular cells [42]. The vascularization performance of a material can be effectively evaluated by implanting it on the back of mice [43,44]. In this study, BMSCs-derived ECs and BMSCs were seeded on the scaffold materials to participate in angiogenesis. Subsequently, we compared the angiogenic properties of the scaffold material on the back of nude mice. After 28 days of implantation, the histomorphometric statistics of neovascularization density suggested that group C (40 ± 8 vessels/mm²) was significantly superior to groups A (20 ± 9 vessels/mm²) and B (21 ± 7 vessels/mm²). These results indicate that BMSCs-derived ECs had a significant impact on vascularization in vivo and that these cells have great potential in regenerative medicine research. Compared to group C (40 ± 8 vessels/mm²), the values of group D (75 ± 6 vessels/mm²) suggest an interaction between BMSCs-derived ECs and BMSCs that could form vessels in the composite scaffold material, and that the co-culture of the two cell types in vivo could affect the maturation and stability of the vascular network (Fig. 7). Meanwhile, group B (21 ± 7 vessels/mm²) was similar to group A (20 ± 9 vessels/mm²), suggesting that BMSCs played a synergistic rather than a decisive role in the process of vascularization. In addition, these results demonstrated that the pTa-GNPs hydrogel scaffold was biologically active in vivo and promoted the formation of a capillary-like network. Due to its insufficient mechanical strength, GNPs hydrogel alone cannot maintain its original shape after subcutaneous implantation; therefore, a GNPs hydrogel-only group was not included in this study.
Based on our previous studies [23,45], the colloidal hydrogel has specific structural and performance characteristics, such as an inherent porous matrix, the large specific surface area of gelatin, and electrostatic and hydrophobic interactions with a strong affinity for proteins. Moreover, it can continuously release biomolecules. In this study, the GNPs hydrogel probably provided a physiologically relevant microenvironment for tissue regeneration and acted as an absorbent that prolonged the release of biologically active substances; thereby, it could induce EC generation and angiogenesis. In addition, although cells were randomly implanted, the live-cell assay showed that the BMSCs adhered nonhomogeneously, and tissue sections showed that the newly formed blood vessels were unevenly distributed. The influence of the mechanical strength and microporous structure of GNPs on cellular immunity and migration might underlie this phenomenon; however, it was not explored in this study.
In recent years, 3D bioprinting technology has been developing rapidly. Extrusion-based systems, inkjet printing systems, and laser-based technologies have been developed to accurately distribute cells in 3D structures [46,47]. The precise distribution of human umbilical vein ECs and smooth muscle cells with biolaser printing technology can control the formation of the vessel network, as well as the size and shape of the lumen [48]. Conversely, using thermal inkjet printer manufacturing technology, a thrombin solution containing ECs was deposited on a fibrinogen matrix to create a 3D tubular microvascular structure [49]. Instead of using a computer-controlled deposition system, we randomly implanted cells in the scaffold material to allow the cells to organize themselves. However, BMSCs-derived ECs were not labeled and tracked in this study; therefore, the location and number of these cells in the neovascularization could not be determined.
Conclusion
In summary, novel integrated 3D scaffold materials were designed with biocompatible, biomechanical and angiogenic properties, providing an approach for TE to construct vascularized engineered tissue. We used SLM technology to manufacture pTa and filled the macropores of pTa with GNPs hydrogel. The pTa and GNPs hydrogel had no inhibitory effect on the proliferation of BMSCs in vitro. BMSCs-derived ECs were implanted on the pTa-GNPs hydrogel scaffold and could promote the formation of a capillary-like network in vivo. Therefore, endothelial differentiated BMSCs played a significant role in TE. This study might provide a basis for applying pTa in bone regeneration; meanwhile, tissue-engineered vascular grafts based on autologous BMSCs deserve further investigation. In bone development, vascularization and mineralization coincide, making bone a highly vascularized tissue. This work demonstrated the potential to realize the vascularization of TE bone in further studies.

Conflict of interest statement. None declared.
|
v3-fos-license
|
2020-01-30T09:15:28.447Z
|
2020-01-29T00:00:00.000
|
213592020
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://res.mdpi.com/d_attachment/diversity/diversity-12-00054/article_deploy/diversity-12-00054-v2.pdf",
"pdf_hash": "c0f330ba781735c95aa8520bf97647792eea923d",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43425",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "ec4f9daddb4fb277f686b272d8f04fc2fbe8ce86",
"year": 2020
}
|
pes2o/s2orc
|
Niche Complementarity and Resistance to Grazing Promote the Invasion Success of Sargassum horneri in North America
Invasive species are a growing threat to conservation in marine ecosystems, yet we lack a predictive understanding of ecological factors that influence the invasiveness of exotic marine species. We used surveys and manipulative experiments to investigate how an exotic seaweed, Sargassum horneri, interacts with native macroalgae and herbivores off the coast of California. We asked whether the invasion (i.e., the process by which an exotic species exhibits rapid population growth and spread in the novel environment) of S. horneri is influenced by three mechanisms known to affect the invasion of exotic plants on land: competition, niche complementarity and herbivory. We found that the removal of S. horneri over 3.5 years from experimental plots had little effect on the biomass or taxonomic richness of the native algal community. Differences between removal treatments were apparent only in spring at the end of the experiment when S. horneri biomass was substantially higher than in previous sampling periods. Surveys across a depth range of 0–30 m revealed inverse patterns in the biomass of S. horneri and native subcanopy-forming macroalgae, with S. horneri peaking at intermediate depths (5–20 m) while the aggregated biomass of native species was greatest at shallow (<5 m) and deeper (>20 m) depths. The biomass of S. horneri and native algae also displayed different seasonal trends, and removal of S. horneri from experimental plots indicated the seasonality of native algae was largely unaffected by fluctuations in S. horneri. Results from grazing assays and surveys showed that native herbivores favor native kelp over Sargassum as a food source, suggesting that reduced palatability may help promote the invasion of S. horneri. The complementary life histories of S. horneri and native algae suggest that competition between them is generally weak, and that niche complementarity and resistance to grazing are more important in promoting the invasion success of S. horneri.
Introduction
Marine ecosystems are increasingly threatened by invasive species as global trade expands and human-mediated introductions via commercial shipping occur at escalating rates [1][2][3][4][5]. Developing a predictive understanding of factors influencing the success of marine invasive species has clear implications for managing their spread and impacts. Yet relative to terrestrial systems, little is known about the ecological processes that influence marine invasions [6,7]. In terrestrial ecosystems, once an introduced species becomes established, biotic interactions with native species can play a major role in limiting population growth, spread and ecological impacts [8][9][10][11]. These interactions can either hinder or facilitate the success of the invader. If the invasion success of S. horneri relies on its ability to outcompete native algae, then we expected the biomass and taxonomic richness of native algae to increase in areas where we experimentally removed S. horneri. Alternatively, if the invasion success of S. horneri relies on its ability to occupy underutilized resources, then we expected to see little change in the native algal assemblage in response to S. horneri removal. We also performed a field experiment involving the major herbivores to examine their grazing preferences for S. horneri versus other algae. Using a combination of feeding assays and distributional surveys, we tested the hypothesis that herbivores facilitate S. horneri by preferentially consuming native algae.
Study System
Field experiments and surveys were conducted on rocky reefs on the leeward side of Santa Catalina Island, located 35 km offshore of Los Angeles, CA, USA. Study reefs consisted of bedrock, boulders and cobble distributed along a moderate slope that transitioned to sand at depths of about 30 m. The reefs were dominated by native macroalgae and the invasive Sargassum horneri. Native macroalgae included the canopy-forming giant kelp Macrocystis pyrifera, subcanopy-forming species of kelp (e.g., Eisenia arborea and Agarum fimbriatum) and fucoid algae (e.g., Sargassum palmeri, Stephanocystis neglecta and Halidrys diocia), and understory-forming foliose and calcified algae. Sessile invertebrates occupied only about 3% of the reef surface. S. horneri has become one of the most common macrophytes on shallow reefs at Santa Catalina Island since its introduction in 2006.
The primary grazers at Santa Catalina Island include sea urchins and herbivorous snails. Centrostephanus coronatus, the most abundant species of urchin, takes refuge in crevices and forages within <1 m from its shelter during the night before returning to the same location before sunrise [38]. This behavior leads to the formation of urchin "halos" where they commonly graze down algae within small home ranges.
Competition
To test the effects of Sargassum horneri on the abundance and taxonomic richness of native algae, we compared the native algal assemblages in experimental plots from which S. horneri was continually removed (hereafter referred to as S−) with those in unmanipulated control plots with S. horneri left intact (S+) over 3.5 years. We also measured the reduction in the amount of light permeating through its canopy as a potential mechanism of competition. This experiment was conducted at Isthmus Reef (33.4476° N, 118.4898° W) at 6 m depth, within the range where S. horneri is most abundant. Twenty-four 1 m² plots separated by a distance of at least 2 m were established on areas of reef comprised of >90% rock and with a high density (i.e., at least 30 individuals) of S. horneri. S. horneri was removed from 12 randomly assigned plots (S−) beginning in spring 2014 and every 6 to 12 weeks thereafter until summer 2017. S− plots had a 30 cm wide buffer zone around the perimeter where S. horneri was removed to minimize potential edge effects such as shading by individuals outside of the plot. Removal entailed divers using knives to pry all S. horneri holdfasts off the substrate, minimizing disturbance to the other biota within the plot as much as possible. Since competitive interactions may vary with time and among seasons, we sampled the algal communities in all S+ and S− plots just prior to the initial removal of S. horneri in spring 2014 and quarterly thereafter (i.e., summer, autumn, winter and spring) over three consecutive growing seasons (2014-2015, 2015-2016 and 2016-2017).
Algae were identified to the lowest taxonomic level possible, which in most cases was species (Table S1), and measurements of all understory and subcanopy-forming algae were taken in order to estimate the damp biomass of algae in each plot. The abundance of low-lying understory algae was measured as percent cover using a uniform point contact (UPC) method that involved recording the presence and identity of all algae intersecting 49 points distributed in a grid within each 1 m² plot. Percent cover was determined as the fraction of points a taxon intercepted × 100. Although multiple organisms may intersect a single point if they overlay one another, a taxon was only recorded once at a given point even if it intersected that point multiple times. Using this technique, the percent cover of all taxa combined in a plot can exceed 100%, but the percent cover of any individual species or morphological group cannot. This sampling resolution was sufficient to detect species covering at least 2% of the area in a quadrat. If a species was present in the plot but not recorded at one of the 49 points, then it was assigned a percent cover value of 0.5%. Since percent cover does not necessarily scale with biomass for larger subcanopy-forming algae, we recorded the density and the average size of these taxa. Damp biomass was estimated from density and size data of subcanopy algae and percent cover data of understory algae using taxon-specific relationships obtained from the literature [27,[39][40][41] or developed specifically for this project (Table S2).
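As a sketch, the UPC percent-cover rule described above can be written as a small Python function; the data structure is hypothetical:

```python
# Percent cover from uniform point contact (UPC) sampling of one 1 m^2 plot.
# `hits` maps each of the 49 grid points to the list of taxa recorded there;
# `present` lists taxa seen in the plot but not intercepted by any point.
def percent_cover(hits, present=(), n_points=49, trace=0.5):
    counts = {}
    for taxa in hits.values():
        for taxon in set(taxa):              # a taxon counts once per point
            counts[taxon] = counts.get(taxon, 0) + 1
    cover = {t: 100.0 * c / n_points for t, c in counts.items()}
    for taxon in present:                    # present but never intercepted
        cover.setdefault(taxon, trace)       # assigned 0.5% per the methods
    return cover
```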
All but two species of algae recorded in the study plots were native to the region; the non-native Sargassum muticum and Codium fragile occurred in low abundance. Both of these species and S. horneri were excluded from analyses to test specifically for the effects of S. horneri on the native algal assemblages [42]. The surface canopy-forming giant kelp, Macrocystis pyrifera, was present at the beginning of the experiment, but it declined quickly during a warming trend and disappeared by December 2014 for the duration of the study. Consequently, its presence did not factor into our analyses.
The effects of S. horneri removal on the taxonomic richness and aggregate biomass of native algae were evaluated using linear mixed effects models [43]. Taxonomic richness was calculated as the number of unique native algal taxa within each plot, and aggregate biomass was calculated as the summed damp biomass of all native algae within each plot. Since we hypothesized that treatment effects may differ among seasons and develop over time, we included season, treatment (S+ or S−) and days since the start of the experiment (elapsed time) as main effects in the model. To account for variation associated with resampling individual plots, we included plot and the summed damp biomass of native algae within each plot at the start of the experiment prior to the first removal of S. horneri as random effects. Full models with the main effects in question (i.e., season, removal treatment, elapsed time and the interactions of time-removal treatment and season-removal treatment) were compared against null or full models without the effects in question using likelihood ratio tests with chi-square test statistics to select the best fit based on the Akaike Information Criterion (AIC). Model assumptions of normality and homoscedasticity were validated through visual inspection of the residuals, and biomass data were square-root transformed to meet model assumptions. To identify which time periods contributed to the time-by-removal treatment interaction, we used Tukey's Honest Significant Difference (HSD) post hoc analysis to compare the means of S+ and S− treatments for each sampling period.
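The model-comparison logic might look roughly like the following in Python with statsmodels; the column names are hypothetical, and the random-effects structure is simplified to a random intercept for plot:

```python
# Likelihood-ratio comparison of mixed models for native algal biomass.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("plot_samples.csv")          # hypothetical input file
df["sqrt_biomass"] = np.sqrt(df["biomass"])   # transform to meet assumptions

full = smf.mixedlm("sqrt_biomass ~ season * treatment + elapsed_days * treatment",
                   df, groups=df["plot"]).fit(reml=False)
null = smf.mixedlm("sqrt_biomass ~ season + elapsed_days",
                   df, groups=df["plot"]).fit(reml=False)

lr = 2 * (full.llf - null.llf)                # chi-square LRT statistic
df_diff = len(full.params) - len(null.params)
print("LRT chi2 =", lr, ", p =", stats.chi2.sf(lr, df_diff))
```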
Differences in the composition of the algal community between S+ and S− plots were tested using non-metric multi-dimensional scaling (nMDS) and analysis of similarities (ANOSIM). We compared the mean biomass of each taxon in S+ and S− plots in spring and summer 2017, during and after the sampling period when S. horneri removal had a significant effect. We used an unrestricted permutation of raw data (999 permutations) on Bray-Curtis similarity matrices with square-root transformation applied. A similarity percentage (SIMPER) analysis was used to determine the taxa that contributed most to dissimilarity between S+ and S− plots.
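The study ran these analyses in PRIMER; an approximate Python substitute is sketched below with randomly generated placeholder data standing in for the plots-by-taxa biomass matrix:

```python
# Approximate nMDS + ANOSIM workflow (illustrative; the paper used PRIMER v7).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from skbio.stats.distance import DistanceMatrix, anosim

rng = np.random.default_rng(0)
biomass_matrix = rng.random((24, 30))            # placeholder plots-x-taxa data
treatments = ["S+"] * 12 + ["S-"] * 12           # removal treatment labels

X = np.sqrt(biomass_matrix)                      # square-root transformation
d = squareform(pdist(X, metric="braycurtis"))    # Bray-Curtis dissimilarities

coords = MDS(n_components=2, metric=False, dissimilarity="precomputed",
             random_state=0).fit_transform(d)    # non-metric MDS ordination

result = anosim(DistanceMatrix(d), grouping=treatments, permutations=999)
print(result["test statistic"], result["p-value"])
```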
To determine the amount of shading caused by the S. horneri canopy we calculated the percent transmission of photosynthetically active radiation (PAR, 400-700 nm) during the spring sampling periods in S− and S+ plots. Light was measured using a handheld spherical quantum sensor (LI-COR Model LI-192) oriented vertically in the center of each plot 30 cm above the bottom. Ten readings of Photosynthetic Photon Flux Density (PPFD in µmol m⁻² s⁻¹) were taken in each plot and averaged. Percent transmission was calculated from the average of 10 PPFD readings taken at the surface before and after the dive as:

% transmission = (mean PPFD at the bottom / mean PPFD at the surface) × 100

We assessed how percent transmission of PAR was affected by S. horneri canopy biomass in S+ plots during spring using linear regression. We also tested the hypothesis that the removal of S. horneri increases PAR reaching the bottom compared to unmanipulated plots during spring following the initial removal of S. horneri, using a repeated-measures ANOVA with removal treatment as a fixed factor, and plot and year as random factors. We used one-tailed t-tests to determine how the years differed from each other with respect to light transmission because we had an a priori expectation that light would be lower in S+ plots than S− plots. Percent transmission light data were arcsine-transformed prior to analyses to meet the assumptions of ANOVA.
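A direct translation of this calculation as a Python sketch (argument names are mine):

```python
def percent_transmission(bottom_ppfd, surface_ppfd):
    """Percent of surface PAR reaching the bottom of a plot.

    bottom_ppfd: the 10 PPFD readings taken 30 cm above the bottom
    surface_ppfd: the surface PPFD readings taken before and after the dive
    """
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * mean(bottom_ppfd) / mean(surface_ppfd)
```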
Complementarity
We examined seasonal patterns of biomass of Sargassum horneri and native algae in the experimental plots described above to test their degree of temporal complementarity. Comparisons of native algae and S. horneri in S+ plots were used to determine whether the seasonality in biomass differed between the two, while comparisons of native algae in S+ and S− plots were used to determine whether seasonal fluctuations in biomass of native algae occurred independent of S. horneri abundance.
We examined the degree of spatial complementarity between S. horneri and native algae by comparing their biomass across the depth range within which most species of brown algae at Santa Catalina Island occur (0-30 m). Scuba divers counted the number of recruit (defined as <5 cm tall) and adult (defined as >5 cm tall) S. horneri and native species of subcanopy-forming macroalgae within 1 m² quadrats placed every 5 m along transects at four sites that ran perpendicular to shore from the intertidal to 30 m depth or where the reef transitioned to sand, whichever came first. Density data were converted to units of damp biomass using the method described above (see 2.2 Competition). Since these algae grow only on hard bottom substrate, we visually estimated the percent cover of rock within each quadrat and expressed density estimates per m² of hard bottom. We performed these surveys in April of 2016, the time of year when the biomass of S. horneri reaches its peak [27]. Although smaller native understory species may also compete with S. horneri, limits on bottom time prevented us from sampling them.
Measured depths were adjusted relative to the Mean Lower Low Water (MLLW) and quadrats were binned into depth intervals of 5 m. Between one and three quadrats were sampled within each depth interval at each site, depending on the grade of the reef. The aggregate biomass of native algae within a quadrat was calculated as the sum of the biomass of the juvenile and adult stages of all native species measured. A two-way ANOVA was used to test whether the biomass of S. horneri and the aggregate biomass of native algae varied by depth interval and taxa.
Herbivory
We performed grazing assays and surveys of benthic algae within and adjacent to urchin halos to assess whether the palatability of S. horneri differed from that of other algae. In September 2016, replicate arrays consisting of Sargassum horneri, its native and introduced congeners S. palmeri and S. muticum and the native kelps Macrocystis pyrifera and Eisenia arborea were deployed at Isthmus Reef for periods of 48 h. Arrays were either exposed to grazing by urchins and snails or placed inside cages nearby that were designed to exclude these grazers. Cages were constructed from 1 cm-gauge plastic mesh and were cylindrical in shape (1 m in height and 0.5 m in diameter) with mesh covering the top. Cages were open at the bottom and a 1 m-wide weighted skirt secured them to the reef and prevented grazers >1 cm from entering. All urchins and snails were removed from the cages at the beginning of each assay.
During each of the four deployments, 15 arrays containing one sample of each of the five target species of algae were placed in urchin halos while another 15 were placed inside cages. Urchin halos were defined as sections of the reef adjacent to a small ledge where >10 urchins were found and grazing activity was apparent from a lack of algae growing within a 30 cm radius. Some herbivorous snails were also present in the halos, including Tegula eiseni, Tegula aureotincta, Megastrea undosa and Norrisia norrisii. Cages were left in the same location for the duration of the experiment, but we selected unique halos for each deployment so that herbivores would be naïve to the arrays. In the day preceding each deployment, we collected and weighed similarly sized blades or thalli of the five target species. Damp weights were quantified prior to deployment and immediately after collection by spin-drying samples for 10 s before weighing them. Three repeat measurements of each sample were taken by re-hydrating the sample and repeating the drying and weighing process. The average of three replicate measurements for each sample was used to optimize our ability to detect small changes in tissue loss.
Herbivore preference was assessed by comparing algal weights measured before and after each deployment in the exposed versus caged arrays. We calculated the percent of biomass lost as:

% biomass lost = ((G_initial − G_final) / G_initial) × 100

where G_initial and G_final represent the mean of the three replicate weights measured for each sample before and after deployment, respectively. For each deployment, exposed and caged arrays were randomly paired and the biomass of each species of algae lost due to grazing was calculated as the difference in the change in biomass between paired arrays. One-way ANOVA was used to evaluate whether the biomass lost due to grazing differed by species, and post hoc contrasts were tested for significance with a Tukey HSD test to determine which species were preferentially consumed. Model assumptions of normality and homoscedasticity were validated through visual inspection of the residuals.
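The paired grazing calculation could be sketched as follows (function and argument names are mine, not from the paper):

```python
def percent_lost(g_initial, g_final):
    """Percent of algal biomass lost over a deployment."""
    return 100.0 * (g_initial - g_final) / g_initial

def grazing_loss(exposed_initial, exposed_final, caged_initial, caged_final):
    """Loss attributable to grazing: change in the exposed array minus the
    change in its randomly paired caged (grazer-excluded) array."""
    return (percent_lost(exposed_initial, exposed_final)
            - percent_lost(caged_initial, caged_final))
```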
To provide a more time-integrated assessment of the feeding preferences of grazers, we tested whether the relative abundance of S. horneri differed from that of native algae in heavily grazed areas during the final deployment. We did this by measuring the percent cover of all subcanopy and understory algae in 1 m² quadrats placed adjacent to the 15 urchin halos and at 15 nearby reference locations with high algal cover. Percent cover was assessed using the uniform point contact sampling method described above (see 2.2 Competition). We standardized estimates of cover for individual algal taxa to the total cover of subcanopy and understory algae within each quadrat to compare the relative algal composition adjacent to and away from halos. We ignored encrusting algae and unoccupied space in order to focus on the differences between the foliose algal species that are likely to be consumed by the grazers. Algae were identified to the lowest taxonomic level possible, and were analyzed in the following groups: S. horneri, S. palmeri and other native algae (Table S3). We used a two-way ANOVA to test whether the cover of these taxonomic groups differed adjacent to and away from urchin halos, and Tukey HSD post hoc contrasts were used to determine how the taxonomic groups differed from one another. Standardized percent cover data were arcsine-transformed prior to analyses to meet the assumptions of ANOVA.
Software Used for Statistical Analysis
All univariate statistical models and tests were completed using RStudio (version 1.1.414) for R Statistical Computing Package [44]. Linear mixed models were fit using the lme4 package [45], and post hoc comparisons were performed using the multcomp library [46]. All multivariate analyses were conducted using PRIMER v7.0 [47] and PERMANOVA+ for PRIMER [48].
Competition
The aggregated biomass and taxonomic richness of native algae varied significantly by season (Table 1). Biomass peaked during summer and autumn, declined by winter and remained low into spring (Figure 1a), while richness also peaked in summer and declined slightly through spring (Figure 1b). The effects of experimentally removing Sargassum horneri on the biomass and species richness of native algae were dependent on season (see season × removal interactions in Table 1).
Although there was a significant interaction between season and removal for both biomass and species richness, post hoc tests revealed no particular season as driving the difference (p > 0.05 for all comparisons). Closer examination of the data revealed that the effects of S. horneri removal varied dramatically with days since the start of the experiment (Figure 2), as post hoc testing showed a significant difference in algal biomass between treatments in spring 2017 only, approximately 1200 days since the start of the experiment (Tukey's HSD, p = 0.002, indicated by * in Figure 2a; all other periods p > 0.05). This difference was driven by a bloom in native algae in S− plots that coincided with a dramatic increase in the biomass of S. horneri in S+ plots (Figure 2a). The biomass of native algae in S− and S+ plots began to converge again by summer 2017 when S. horneri biomass declined. The taxonomic richness of native algae decreased over the course of the study (Figure 2b), independent of the removal of S. horneri (Table 1b).

Figure caption: The left y-axis shows percent transmission of PAR (mean ± SE) in S. horneri removal (S−; grey bars) and non-removal (S+; white bars) plots, and the right y-axis shows damp biomass of S. horneri (± SE) in non-removal (S+) plots when light measurements were taken. Asterisks indicate sample dates where t-tests indicated significant differences between treatments (*, **, ***: p < 0.05, 0.01, and 0.001, respectively).
Since S. horneri manipulation had no significant effect on the total biomass of native algae until spring 2017, we restricted our analysis of community structure in S+ and S− plots to data collected during spring and summer 2017. S. horneri removal significantly influenced the native algal assemblages in the spring (Figure 4a; PERMANOVA: pseudo-F(1,21) = 2.90, p = 0.016) and summer (Figure 4b; pseudo-F(1,22) = 2.12, p = 0.041). SIMPER analysis (Table 2) revealed that nearly fifty percent of the dissimilarity between S− and S+ treatments was explained by just two species in spring (Sargassum palmeri and Zonaria farlowii) and three species in summer (Z. farlowii, S. palmeri and Colpomenia sinuosa).
Complementarity
Sargassum horneri displayed a different seasonal pattern in biomass compared to the aggregated biomass of native algae. There was strong seasonality in the biomass of S. horneri in S+ plots, remaining low during summer and autumn, increasing slightly in winter and dramatically in the spring (Figure 5). By contrast, the aggregated biomass of native algae fluctuated much less throughout the year, with highest mean values recorded in summer and biomass declining through winter. In S+ plots, the biomass of native algae continued to decrease into spring, while in S− plots an increase in the biomass of native algae occurred, driven primarily by the native congener S. palmeri in spring 2017. Results of the depth surveys were consistent with the hypothesis that spatial complementarity with native algae facilitates the invasiveness of S. horneri. Two-way ANOVA revealed that the effect of depth on biomass differed for S. horneri and native algae (F(5,1) = 11.78, p < 0.0001 for the depth × taxa interaction), and the two were inversely related (Figure 6a). S. horneri was present from the intertidal to the deepest depths sampled, but was most abundant between depths of 5-20 m, while the biomass of native algae showed peaks at <5 and >20 m (Figure 6b). The occurrence of specific taxa of native algae varied with depth (Table S4). Biomass of fucoid species (such as Stephanocystis neglecta, Halidrys dioica and Sargassum palmeri) as well as the native kelp Eisenia arborea peaked at shallow depths, while E. arborea also occurred at deeper depths in addition to another native kelp, Agarum fimbriatum.
Herbivory
The effects of grazing on the biomass of algae remaining after 48 h assays differed significantly among the five species of algae tested (Figure 7a; ANOVA, F(4) = 35.146, p < 0.001). Approximately five times more biomass of Macrocystis pyrifera and four times more biomass of Eisenia arborea was lost due to grazing compared to the three species of Sargassum. Surveys revealed that the taxonomic composition of algae varied between areas adjacent to and away from urchin halos (Figure 7b; Table S3). There was a significant interaction between taxonomic group and proximity on the relative percent cover (ANOVA, F(2,1) = 12.97, p < 0.0001). Post hoc tests revealed that the cover of S. horneri was approximately two times greater near the halos (p = 0.01). By contrast, the proximity to halos had no effect on the cover of S. palmeri (p = 0.98), while that of other native algae taxa near halos was about one third of the level away from halos (p = 0.001).
Discussion
The ability of invasive plants to outcompete native flora for limited resources has been well documented [13,49,50] and is the primary mechanism that has been attributed to the successful invasion of Sargassum muticum in the coastal waters off Washington state, USA [30]. Its congener, S. horneri, has a similar potential to displace native algae as a result of shading caused by the high canopy biomass it achieves during the spring [27]. However, we found little evidence that competitive superiority explains the high invasiveness of S. horneri in California as its sustained removal had a minimal effect on the biomass and composition of native algae over a 3.5-year period. Taxonomic richness of the native flora declined over the course of this study but was unresponsive to S. horneri removal. The total biomass of native algae was also unaffected by S. horneri manipulation until 2017, when it increased sharply in plots where S. horneri had been removed. The increase was driven primarily by a perennial congener, S. palmeri. This bloom of S. palmeri coincided with a large increase in the ambient biomass of S. horneri in spring 2017, which dramatically reduced the amount of light reaching the bottom in non-removal plots. Studies of aquatic plants and animals, marsh grasses and marine macroalgae have shown that impacts scale with the abundance of an invader (e.g., [51][52][53][54]). In this study, S. horneri had no detectable effects until it reached extremely high abundance, at which point only modest impacts to the native algal community occurred, driven primarily by a single closely related species.
The strength of competition between introduced and native species can vary spatially and temporally, depending on fluctuations in biomass driven by species' life histories or environmental factors [55]. The seasonal phenology of the macroalgal community suggested that S. horneri's peak biomass was generally complementary to that of most of the native macroalgae, whose biomass tended to be highest in summer. This pattern was consistent regardless of the presence of S. horneri (i.e., in removal and non-removal plots) except during spring 2017 when S. horneri was extremely abundant, suggesting it was not a consequence of S. horneri, but rather a natural cycle. This conclusion is substantiated by similar estimates of seasonal biomass of native algae at Santa Catalina Island and elsewhere in southern California prior to invasion by S. horneri [39,56]. Since the giant kelp, M. pyrifera, was absent from our survey and experimental sites throughout nearly the entire course of this study, it did not factor into our analyses. However, like the other native algae we observed, the biomass of M. pyrifera in southern California often peaks in the summer and autumn and drops during winter and spring due to wave-induced disturbance to the canopy [57]. Hence, the success of S. horneri may be attributed in part to the decreased abundance of native algae during its period of peak growth and reproduction. The depth distribution of S. horneri relative to that of native subcanopy algae could reflect the strength of their competitive interactions or physiological preferences for different parts of the environment. We found that S. horneri displayed spatial complementarity with other subcanopy algae as it was most abundant at intermediate depths (5-20 m), while native algae were most abundant at shallower (<5 m) and deeper (>20 m) depths. That the depth distributions of native subcanopy algae observed in our surveys were similar to those reported by others at Santa Catalina Island prior to the arrival of S. horneri [58][59][60][61] suggests that their lower abundance at intermediate depths was not due to competition with S. horneri.
The reasons for the peak in S. horneri abundance at intermediate depths in our study are unknown. However, the distribution of S. horneri in other regions indicates great versatility in light requirements, and opportunistic growth in situations where competition is minimal. For example, in its native range in Japan, S. horneri grows from the intertidal to 20 m [62] but is most common on shallow reefs from the low intertidal to 4 m [63]. In Baja California, Mexico, near the southern extent of its invaded range, S. horneri has been reported to occur from the intertidal [64,65] to at least 8 m depth [66]. Perhaps robust subcanopy-forming macroalgal communities at Santa Catalina Island deter S. horneri at very deep (>20 m) and very shallow (<5 m) depths, while increased space and light available at intermediate depths allow S. horneri to thrive with minimal competition. Such appears to be the case for the annual Asian kelp, Undaria pinnatifida, whose invasion success in the United Kingdom has been attributed in part to its broad depth range as well as its niche dissimilarities with native algae as the abundances of U. pinnatifida and native algae were inversely correlated along a depth gradient [67].
Our findings revealed that S. horneri has the greatest biomass at depths where, and times when, the abundance of native macroalgae is lowest. The consistent phenology of S. horneri in its native and invaded range [27] and of most native algae in the presence or absence of S. horneri suggest that niche complementarity between them occurs throughout the year. Recent work by Sullaway and Edwards [68] at nearby sites at Santa Catalina Island supports this idea, showing that S. horneri increased rather than decreased levels of community production and respiration in this system. They concluded that S. horneri takes advantage of environmental conditions that disturb native algae and thrives as a consequence of disturbance, rather than causing an ecosystem shift due to its ability to outcompete the native flora [69]. Consistent with this idea is the observation by Caselle et al. [7] that S. horneri abundance at nearby Anacapa Island was significantly lower in older, well-established marine protected areas (MPAs) where the abundance of native algae was high relative to newly established MPAs. These authors argued that the differences in S. horneri abundance between new and old MPAs reflect stronger competition between native algae and S. horneri in the older MPAs where native algae flourish. Thus, niche complementarity may allow S. horneri to achieve high abundance only in places where competition from native algae is not strong.
Herbivores can influence the invasion success of freshwater and marine macrophytes directly through consumption of the invader, or they can mediate interspecific competition through preferential consumption of native species [23,36,37,70]. These preferences may arise from morphological differences or chemical defenses. For example, algae in the order Fucales (which includes the genus Sargassum) typically have high levels of phenolic compounds that are known to deter grazing [37]. Our results are consistent with this hypothesis, demonstrating that grazers consumed the native kelps M. pyrifera and E. arborea while avoiding S. horneri and its congeners S. palmeri and S. muticum. Our results also support the hypothesis posed by Caselle et al. [7] that urchins avoid S. horneri and preferentially consume native algae in areas where they co-occur, thereby reducing the potential for competition between them.
The composition of the benthic algal community reflected the grazer preferences we observed. Centrostephanus coronatus, the most abundant species of sea urchin in our study, is known to display strong feeding preferences, decreasing the abundance of favored species dramatically before switching to less-preferred species [38]. We found that native foliose algae were reduced and S. horneri was more dominant adjacent to urchin halos compared to nearby reference areas. Interestingly, we found no biomass response to grazing by its perennial congener S. palmeri, which is native to southern California. Thus, while grazers avoided both species of Sargassum in favor of native foliose algae, only S. horneri responded to a lack of herbivory with increased abundance. It may be that S. horneri is able to colonize space created on the reef more readily than S. palmeri due to its annual life history and high fecundity. Traits related to rapid growth and high fecundity, as well as deterrence to herbivory, are often associated with invasive plants [71]. However, defenses often come at a fitness cost [72] and shorter lived, r-selected plants are not typically heavily defended [73]. Yet S. horneri is a species with r-selected traits that allow it to rapidly colonize available space, and it is also a member of an order of algae that typically displays high levels of chemical defense. These traits undoubtedly contribute to the ability of S. horneri to proliferate in places where interactions with native species are weak.
Conclusions
We found that the high propensity of S. horneri to invade southern California reefs results largely from its ability to occupy resources underutilized by native species in space and time and to resist grazing relative to native algae. Its annual life history, high fecundity and capacity for widespread dispersal further enhance its ability to colonize novel habitats. The complementary phenology of S. horneri and native algae suggest competition between them is generally weak, which is consistent with the results of our 3.5-year manipulative experiment. Our findings indicate the greatest potential for competitive interactions between S. horneri and native algae is at intermediate depths during spring when S. horneri peaks in biomass. Future work testing the effects of S. horneri on native algae should focus on this depth range and season. Collectively, our results highlight the importance of considering exotic marine species in the context of the invasibility of native assemblages when assessing their invasiveness and developing management strategies for controlling their spread.
|
v3-fos-license
|
2019-04-30T13:06:29.406Z
|
2018-01-23T00:00:00.000
|
66289248
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ajtas.20180701.15.pdf",
"pdf_hash": "fb24f4facea685fae3cfa9713c82fd44d2a46de3",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43426",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"sha1": "eecdddba50d1a82897bc0e3380ea5d19cfa14c2d",
"year": 2018
}
|
pes2o/s2orc
|
Desirability and Design of Experiments Applied to the Optimization of the Reduction of Decarburization of the Process Heat Treatment for Steel Wire
This study contributes directly to the understanding of the causative agent of the loss of carbon from steel wire during heat treatment (a phenomenon called decarburization). This carbon loss disqualifies the material for its originally envisaged applications, since with the reduction of the amount of the chemical element carbon the steel becomes less resistant to traction and less hard, which would prevent its use for various mechanical applications. The aim of this research is to show the application of the desirability method to decarburization and hardness in SAE 51B35 drawn steel wires. Data were generated from the application of design of experiments methodology (by means of the Minitab Statistical Software) and the results revealed that all variables considered in the study have significant influence. Statistical modeling was carried out by means of the multiple linear regression method, which allowed obtaining models that properly represent the process itself. Results for the response variables decarburization and hardness were submitted to the desirability method, and the process was optimized at the best adjustment condition of the input variables relative to their specifications.
Introduction
The practice of applying statistical methods is now at its time of greatest use. In manufacturing, process industries, hospitals and services, statistical thinking is being used to decrease costs, reduce defects and control variability.
Contrary to popular belief, statistics is usually not just data analysis; it is also the planning of the experiments in which those data are collected. Perhaps we should even say that it is mainly about planning, which is more sophisticated than the analysis that follows: the lack of planning is often the cause of failure of an investigation, and yet very few researchers think about statistics before performing their experiments.
This work used design of experiments as the main statistical method to plan the experiments and diagnose the problems of decarburization and hardness in steel wires during heat treatment, and also used multiple regression methods to statistically model the process and the desirability method to optimize the process through appropriate adjustment of the factors.
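For orientation, the desirability approach can be sketched as follows (a Derringer-style formulation; the limits and targets below are placeholders, not the study's specifications):

```python
# Sketch of the desirability approach used to combine the two responses.
import numpy as np

def d_smaller_is_better(y, y_min, y_max, weight=1.0):
    """Desirability for a response to be minimized (e.g., decarburization)."""
    d = (y_max - y) / (y_max - y_min)
    return np.clip(d, 0.0, 1.0) ** weight

def d_target_is_best(y, low, target, high, weight=1.0):
    """Desirability for a response with a target value (e.g., hardness)."""
    d = np.where(y <= target, (y - low) / (target - low),
                 (high - y) / (high - target))
    return np.clip(d, 0.0, 1.0) ** weight

# Overall desirability is the geometric mean of the individual values.
d1 = d_smaller_is_better(y=0.08, y_min=0.0, y_max=0.20)              # placeholder mm
d2 = d_target_is_best(y=285.0, low=250.0, target=290.0, high=320.0)  # placeholder HB
D = (d1 * d2) ** 0.5
print(round(D, 3))
```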
This study contributes directly to the understanding of the causative agent of the loss of carbon from steel wire during heat treatment (a phenomenon called decarburization). This carbon loss disqualifies the material for its originally envisaged applications, since with the reduction of the amount of the chemical element carbon the steel becomes less resistant to traction and less hard, which would prevent its use for various mechanical applications. The importance of this study is that, knowing the cause of decarburization during heat treatment, it is possible to neutralize the causative agent so that the problem does not recur, making it possible to reduce the loss (scrapping) of this material due to the occurrence of this phenomenon. However, it is important to note that this study demonstrated the occurrence of this phenomenon only for this specific process, with specific equipment and a particular steel, so it is not safe to say that the phenomenon observed in this study will also be reproduced if any pre-established condition is changed or if the process to be compared has some characteristic different from the conditions experienced in this research.
Decarburization
Decarburization is a phenomenon that can occur during the heat treatment of steels and involves the loss of carbon at the surface of the material. Decarburization is related to the microstructure of the material and, consequently, to its properties. The main consequences of decarburization are the loss of surface hardness, tensile strength, wear resistance and fatigue strength due to the depletion of carbon from the surface, which may disqualify the material for the functions it would normally perform. Decarburization is more serious for applications where the material is not subjected to a surface treatment such as, for example, carburizing.
According to Tschiptschin (1980), decarburization can occur in a variety of situations, depending on the specific characteristics of the thermal treatment. The loss of carbon from the surface of the material results from factors such as treatment temperature and time, furnace atmosphere (presence of oxidizing gases such as oxygen, carbon dioxide, and water steam), and the carbon steel's alloying elements. It can occur through chemical reactions of the material with hydrogen or with iron oxides, in the latter case forming the slag that constitutes the top layer of rust. By comparison with standards, decarburization can be sorted into three basic types (HERNANDEZ JR.; FONSECA; DICK, 2010):
1. Type 1: a surface region of measurable thickness with ferrite, free of carbides; under this ferrite layer, the pearlite fraction increases with distance from the surface;
2. Type 2: a loss at the surface exceeding 50% of the average carbon content of the steel, but without complete decarburization of this region;
3. Type 3: a loss at the surface of less than 50% of the average carbon content of the steel.
Surface oxidation is made up of three iron oxides (shown in Figure 1), so this factor was chosen as the main factor, since there was a chance that the oxide would react at the internal furnace temperature during heat treatment and reduce the carbon in the surface layer of the steel wire (Tschiptschin, 1980).

Hardness

According to Callister (2002) and Chiaverini (2012), hardness is a measure of a metal's resistance to penetration. The most common methods to determine a metal's hardness are Brinell, Vickers, and Rockwell. In this research, only the Brinell method (BH) is used. Brinell hardness values (BH), as shown in Figure 2, are calculated by dividing the applied load by the penetration area. The penetrator of diameter D is a hardened steel ball for materials of medium or low hardness, or tungsten carbide for high-hardness materials. The test machine has a light microscope for measuring the circle diameter (d, in mm) corresponding to the projection of the spherical cap printed on the sample. Brinell hardness (BH) is given by the applied load (P, in kgf) divided by the print area, as in equation 1:

BH = 2P / (πD(D - √(D² - d²))) (1)
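As a quick numeric illustration of equation 1, the sketch below computes a Brinell hardness value from a load and indentation diameter; the sample numbers are hypothetical, not measurements from this study.

```python
import math

def brinell_hardness(P_kgf: float, D_mm: float, d_mm: float) -> float:
    """Brinell hardness per equation 1: applied load P divided by the
    spherical-cap indentation area of a ball of diameter D, given the
    measured impression diameter d."""
    area = (math.pi * D_mm / 2.0) * (D_mm - math.sqrt(D_mm ** 2 - d_mm ** 2))
    return P_kgf / area

# Hypothetical test: 3000 kgf load, 10 mm ball, 4.2 mm impression diameter.
print(round(brinell_hardness(3000.0, 10.0, 4.2), 1))  # ~206.5 BH
```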
Design of Experiments
According to Lima, to perform a factorial design one must specify the levels at which each factor should be studied. The most important of these special cases is the 2^k factorial design, which uses k factors, each at two levels. In this type of experiment, a full replicate requires 2 x 2 x ... x 2 = 2^k observations (NETO et al., 2007). Montgomery and Runger (2003) state that multiple linear regression is used in situations involving more than one regressor, and the models can include interaction effects. An interaction between two variables can be represented by a cross term: if we assume that x3 = x1x2 and β3 = β12, then the model, including interaction terms, will be as shown in equation 2:

Y = β0 + β1x1 + β2x2 + β12x1x2 + ε (2)

In this expression, Y is the dependent variable; the independent variables are represented by x1, x2, ..., xn; and ε is the random error term. The term "linear" is used because the equation is a linear function of the unknown parameters β0, β1, β2, ..., βn. In this model, the parameter β0 is the plane intercept, and β1, β2, ..., βn are the partial regression coefficients.
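To make the 2^k bookkeeping and the interaction model of equation 2 concrete, the following sketch enumerates a full replicate of a 2² design in coded units and fits Y = β0 + β1x1 + β2x2 + β12x1x2 + ε by least squares; the response values are synthetic placeholders, not data from this study.

```python
import numpy as np
from itertools import product

# Full replicate of a 2^2 design: 2 x 2 = 4 runs, replicated 4 times.
base = np.array(list(product([-1.0, 1.0], repeat=2)))
runs = np.tile(base, (4, 1))
x1, x2 = runs[:, 0], runs[:, 1]

# Synthetic response with a known structure plus noise.
rng = np.random.default_rng(0)
y = 0.145 - 0.054 * x1 + 0.018 * x2 + rng.normal(0.0, 0.005, len(runs))

# Design matrix: intercept, main effects, and the x1*x2 cross term.
X = np.column_stack([np.ones(len(runs)), x1, x2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(4))  # estimates of [b0, b1, b2, b12]
```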
A mathematical model consists of a set of equations that quantitatively represent the assumptions used in building the model. Such equations are solved on the basis of values known or provided from the real world and can be tested by comparison with known data (SODRÉ, 2007).
According to Benyounis and Olabi (2008), the multiple regression technique, when used in addition to design of experiments, is very efficient for developing statistical models that quantify the influence of the process input variables for the prediction of output variables. Multiple regression is used in situations involving more than one regressor, as in (3):

Y = β0 + β1x1 + β2x2 + ... + βnxn + ε (3)

In this expression, Y represents the dependent variable; the independent variables are represented by x1, x2, ..., xn; and ε is the random error term. The unknown parameters are β0, β1, β2, ..., βn. In this model, the parameter β0 is the plane intercept, and β1, β2, ..., βn are the partial regression coefficients. Models that include interaction effects, according to Montgomery and Runger (2003), can also be analyzed by the multiple regression method. An interaction between two variables can be represented by a cross term: if we let x3 = x1x2 and β3 = β12, then the model including interaction terms uses (4):

Y = β0 + β1x1 + β2x2 + β12x1x2 + ε (4)

As a result of the geometric mean represented by equation 5, the value D evaluates, in a general way, the levels of the combined set of responses:

D = (d1 × d2 × ... × dm)^(1/m) (5)

D is an index also belonging to the interval [0, 1] and is maximized when all responses approach their specifications as closely as possible. The closer D is to one, the closer the original responses are to their respective specification limits. The general optimal point of the system is the optimal point achieved by maximizing the geometric mean calculated from the individual desirability functions. The advantage of using the geometric mean is that the overall solution is achieved in a balanced way, allowing all responses to reach the expected values and forcing the algorithm to approach the imposed specifications (PAIVA, 2008).
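A minimal sketch of the composite index of equation 5, taken here as the geometric mean of the individual desirabilities; the input values are arbitrary examples.

```python
import math

def composite_desirability(d_values):
    """Composite desirability D (equation 5): the geometric mean of the
    individual desirabilities. Any d_i equal to zero forces D to zero,
    which is what keeps the optimization balanced across responses."""
    product = math.prod(d_values)
    return product ** (1.0 / len(d_values))

print(round(composite_desirability([0.90, 0.80, 0.95]), 4))
```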
According to Derringer and Suich (1980), the algorithm will depend on the type of optimization desired for the response (maximization, minimization, or normalization), on the desired limits within the specification, and on the weights of each response. The main characteristics of the different optimization types are as follows: 1. Minimize function: the desirability function value increases as the original response value approaches a minimum target value; 2. Normalize function: when the response moves toward the target, the desirability function value increases; 3. Maximize function: the desirability function value increases when the response value increases. Paiva (2008) and Wu (2005) state that when maximization of a response is desired, the transformation formula is shown in equation 6 (written here in its standard one-sided form):

d_i = 0 if y_i < L_i; d_i = [(y_i - L_i) / (T_i - L_i)]^R if L_i ≤ y_i ≤ T_i; d_i = 1 if y_i > T_i (6)

where L_i, T_i, and H_i are, respectively, the lowest acceptable value, the target value, and the highest acceptable value for the i-th response.
The R value in equation 6 indicates a preponderance of the superior limit (LSL). Values higher than unity should be used when the response (y_i) increases rapidly above L_i; d_i then increases slowly while the response value is being maximized. Consequently, to maximize D, the i-th response must be much larger than L_i. One can choose R < 1 when it is critical to find values for the response below the fixed limits.
In cases where the objective is to reach a target value, the transformation formulation stops being unilateral and becomes bilateral. The bilateral formulation, represented by equation 7, occurs when the response of interest has two restrictions, one maximum and one minimum (again in the standard form):

d_i = [(y_i - L_i) / (T_i - L_i)]^s if L_i ≤ y_i ≤ T_i; d_i = [(H_i - y_i) / (H_i - T_i)]^t if T_i < y_i ≤ H_i; d_i = 0 otherwise (7)
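The two transforms discussed above can be sketched as follows, assuming the standard Derringer-Suich forms for the one-sided (maximize, equation 6) and two-sided (target, equation 7) cases; the limit names L, T, and H follow the text.

```python
def desirability_max(y: float, L: float, T: float, R: float = 1.0) -> float:
    """One-sided 'larger is better' transform (cf. equation 6): 0 below
    the lower limit L, 1 at or above the target T, a power curve between."""
    if y <= L:
        return 0.0
    if y >= T:
        return 1.0
    return ((y - L) / (T - L)) ** R

def desirability_target(y: float, L: float, T: float, H: float,
                        s: float = 1.0, t: float = 1.0) -> float:
    """Two-sided transform (cf. equation 7): rises from the lower limit L
    to the target T, then falls back to zero at the upper limit H."""
    if y < L or y > H:
        return 0.0
    if y <= T:
        return ((y - L) / (T - L)) ** s
    return ((H - y) / (H - T)) ** t
```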
Material, Factor Selection, and Experimental Organization
Drawn steel wire is a product widely used in mechanical construction and is the raw material for the manufacture of various products such as screws, chains, bearings, and covers for sails.
This product is in great demand among consumers in Brazil and worldwide, because it is used in machines in various sectors, especially by the automotive industry.
The material used in this work was SAE 51B35 steel wire, cold drawn, round, with a diameter of 12.85 mm. Chemical analysis was carried out in the chemical laboratory of the company funding the research, using an ARL optical emission spectrometer. The results are presented in Table 1.
Characteristics of Heat Treatment Furnace
For this study, a high-convection bell-type furnace was used, with a heat treatment capacity of 20 tons of wire per cycle.
The operating principle of bell-type furnaces comes down basically to heating and cooling the material loaded at the base, protected by a canopy, with the internal pressure always positive. This pressure is obtained by injecting an inert protective gas (N2) at a flow rate of 200 m³/h during the first 90 minutes of purging and 300 m³/h after the initial bleed. The goal of maintaining positive internal pressure is to prevent oxygen from entering the base (canopy).
According to Hernandez Jr., Fonseca, and Dick (2010), this heat treatment is widely used on medium- and high-carbon steels in order to produce a structure of globular carbides in a ferritic matrix. This structure reduces hardness and increases ductility and machinability.
Loads of SAE 51B35 steel used in this research were treated according to cycles X and Y, whose time and temperature settings are presented in Table 2.
Selection of Factors, Response Variables, and Choice of the Design of Experiments Array
To select the factors, the possible causes that could influence decarburization of the wire were raised, and the following were selected:
1. Oxidation, assuming, for the sake of argument, that the iron oxides present on the wire could somehow react with the surface layer of the steel wire during the thermal treatment, subtracting carbon;
2. Heat treatment cycle, assuming that time and temperature had an influence on decarburization;
3. Pressure (dew point), assuming that the amount of oxygen inside the furnace had an influence on decarburization; this factor was characterized by measuring the internal pressure of the heat treatment furnace, and the values indicate measurements performed by the furnace's specific equipment;
4. Moisture, assuming that oxygen would be emitted by a sample in wet condition: the moisture from wet material would evaporate after heating, and the released oxygen could react chemically with the surface layer of the steel due to the high temperature in the furnace.
The levels of the factors were selected based on the actual condition of the process (the minimum and maximum for all factors).
[Table: experimental design matrix with columns Experiments, Oxidation, Heat treatment cycle, Pressure, and Moisture.]
For the execution of the experimental plan, reduced variables (β) were used instead of the physical variables (real adjustments) of the investigated factors, in order to preserve the confidential data of the company funding the research. The variable reduction was calculated according to Montgomery and Runger (2003): the physical value (α) to be tested is subtracted from the mean (µ) of the minimum and maximum factor adjustments, and the result is divided by half the amplitude (R) between the minimum and maximum adjustment values. Thus, the dimensionality of the reduced variables is restricted to the range [-1, 1], according to equation 8 (see the sketch below):

β = (α - µ) / (R/2) (8)

The raw material (SAE 51B35 drawn steel wire) selected for the experiments was obtained from a single manufacturing batch, to obtain the lowest possible variation in decarburization, since it would hardly be possible to obtain material entirely free of this feature.
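A small sketch of the variable reduction in equation 8, mapping a physical adjustment onto the coded interval [-1, 1]; the range used below is hypothetical, since the real adjustments are confidential.

```python
def coded_value(alpha: float, low: float, high: float) -> float:
    """Reduced variable of equation 8: subtract the midpoint (mu) of the
    factor's range from the physical value (alpha), then divide by half
    the amplitude (R/2), mapping [low, high] onto [-1, +1]."""
    mu = (low + high) / 2.0
    half_amplitude = (high - low) / 2.0
    return (alpha - mu) / half_amplitude

# Hypothetical factor adjusted between 700 and 740 (units omitted):
print(coded_value(700.0, 700.0, 740.0))  # -1.0
print(coded_value(720.0, 700.0, 740.0))  #  0.0
print(coded_value(740.0, 700.0, 740.0))  #  1.0
```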
The sample was sent to the laboratory of the company funding the research to measure the initial decarburization; this analysis found an average depth of 0.03 mm.
Sequence of Experiments and Statistical Analysis
Table 5 presents the order in which the experiments were executed, their settings and adjustments, and the experimental results for decarburization and hardness (the responses). Factor significance was tested at a 90% confidence level (p < 0.10). This analysis was carried out separately so that factor significance for decarburization (response) could be verified, as shown in Table 5. Analysis of Table 5 and Figure 3 verified, at 90% confidence, that the influential factors on decarburization are oxidation and furnace pressure. No interaction proved influential. As shown in Figure 4, decarburization increases when oxidation is present (adjustment -1), because the oxidation reacts chemically at the heat treatment temperature, subtracting carbon from the surface of the steel and causing carbon depletion at the surface of the material. It can also be concluded from Figure 5 that decarburization increases when the furnace pressure is -25 (adjustment 1): since the pressure is achieved by injecting nitrogen gas into the furnace in order to expel oxygen before heat treatment, this pressure adjustment indicates a lower nitrogen flow, which could mean a greater likelihood that some oxygen residue remained inside the furnace, and it is known from the literature that oxygen in contact with the heat treatment temperature can cause decarburization in steel. Therefore, in this study, the lowest pressure (-25) presented the greatest possibility of decarburization, due to possible oxygen residue remaining during heat treatment.
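The main-effect reading behind plots such as Figures 4 and 5 amounts to comparing mean responses at the two coded levels of a factor, as in the sketch below; the response values are placeholders, not the study's data.

```python
import numpy as np

# Hypothetical decarburization responses paired with the coded oxidation level.
oxidation = np.array([-1, -1, -1, -1, 1, 1, 1, 1])
decarb = np.array([0.20, 0.19, 0.21, 0.18, 0.09, 0.10, 0.08, 0.11])

# Main effect: mean response at the high level minus mean at the low level.
effect = decarb[oxidation == 1].mean() - decarb[oxidation == -1].mean()
print(round(effect, 3))  # negative here: less decarburization without oxidation
```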
Considering only the influential factors in the construction of the mathematical model for decarburization (as shown in Table 5), the model is (9):

Decarburization = 0.14519 - 0.05394 (oxidation) + 0.01769 (furnace pressure) (9)

Factor significance was tested at a 90% confidence level (p < 0.10). This analysis was carried out separately so that factor significance for hardness (response) could be verified, as shown in Table 6. Analysis of Table 6 and Figure 6 verified, at 90% confidence, that the influential factors on hardness are oxidation and furnace pressure. No interaction proved influential. As shown in Figure 7, hardness decreases when oxidation is present (adjustment -1), because the oxidation subtracts carbon from the steel surface, reducing hardness, since hardness is directly related to the amount of the chemical element carbon. Considering only the influential factors in the construction of the mathematical model for hardness (as shown in Table 6), the model is (10):

Hardness = 273.063 - 14.330 (oxidation) + 3.562 (furnace pressure) (10)
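The fitted models of equations 9 and 10 can be evaluated directly at the coded corner settings, as in the sketch below; note that the factor values are in coded units, not physical adjustments.

```python
def decarburization(oxidation: float, furnace_pressure: float) -> float:
    """Fitted regression model of equation 9 (coded factor units)."""
    return 0.14519 - 0.05394 * oxidation + 0.01769 * furnace_pressure

def hardness(oxidation: float, furnace_pressure: float) -> float:
    """Fitted regression model of equation 10 (coded factor units)."""
    return 273.063 - 14.330 * oxidation + 3.562 * furnace_pressure

# Predictions at the four corner settings of the design.
for ox in (-1, 1):
    for fp in (-1, 1):
        print(ox, fp, round(decarburization(ox, fp), 5), round(hardness(ox, fp), 3))
```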
Application of the Desirability Function for Optimization
For process optimization by means of the desirability function, it was first necessary to formulate the specifications required for decarburization and hardness. For decarburization, the desired outcome is that there be no decarburization, so the smaller the decarburization, the better for the process. For hardness, the desired outcome is that the material be as hard as possible.
The composite desirability (D) is the overall index calculated from the combination of the response variables through a geometric mean; this index shows the best condition for optimizing all response variables at the same time. Looking at Figure 8, it can be seen that the D value, belonging to the [0, 1] interval, is maximized when all responses are close to their specifications: the closer D is to 1, the closer the original responses are to their respective specifications. The general optimal point of the system is the optimum point achieved by maximizing the geometric mean calculated from the individual desirability functions (d), whose values for each response variable are: 1. for the response variable hardness, d = 0.96733; 2. for the response variable decarburization, d = 0.86602. The values obtained for the composite desirability (D = 0.9153) and the individual desirabilities (d) show that the process was well optimized, since these indices are very close to the optimum condition. Thus, the values obtained for this optimized condition are in accordance with the required specifications: 1. for hardness, y = 292.5625 BH; 2. for decarburization, y = 0.0914 mm.
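The reported composite desirability can be checked directly from the two individual desirabilities: with two responses, the geometric mean of equation 5 reduces to a square root, reproducing the 0.9153 figure.

```python
import math

d_hardness = 0.96733   # individual desirability reported for hardness
d_decarb = 0.86602     # individual desirability reported for decarburization

D = math.sqrt(d_hardness * d_decarb)  # geometric mean of two responses
print(round(D, 4))  # 0.9153
```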
Conclusion
It was concluded that oxidation and furnace pressure are the factors that cause the decarburization and hardness reduction of SAE 51B35 drawn steel wire during the heat treatment process.
It was concluded that decarburization increases when oxidation is present (adjustment -1), because the oxidation reacts chemically at the heat treatment temperature, subtracting carbon from the surface of the steel and causing carbon depletion at the surface of the material. It was also observed that decarburization increases when the furnace pressure is -25: since the pressure is achieved by injecting nitrogen gas into the furnace in order to expel oxygen before heat treatment, this pressure adjustment indicates a lower nitrogen flow, which could mean a greater likelihood that some oxygen residue remained inside the furnace, and it is known from the literature that oxygen in contact with the heat treatment temperature can cause decarburization in steel. Therefore, in this study, the lowest pressure (-25) presented the greatest possibility of decarburization, due to possible oxygen residue remaining during heat treatment.
It was also observed that hardness decreases when oxidation is present, because the oxidation subtracts carbon from the steel surface, reducing hardness, since hardness is directly related to the amount of the chemical element carbon.
Through the use of the desirability method, the best adjustments of the influential factors were found: the best condition was obtained with oxidation at 1.0 and furnace pressure at 1.0, which in practice amounts to using material with a total absence of oxidation and a furnace pressure of -35 (which represents the total absence of oxygen inside the furnace).
This conclusion indicated the need to better plan the operational practice of this process, so that the steel wires are previously sandblasted with steel shot or chemically pickled to remove surface oxidation before heat treatment.
It was also possible to reduce the decarburization depth by more than 50% during the heat treatment. However, it is necessary to standardize the removal of oxidation before the material is treated thermally. In addition, it is important, whenever possible, to keep the material stored in appropriate locations, protected from the action of rain, thus preventing wet (moist) material from being placed in the furnace for heat treatment in that condition, since avoiding this considerably reduced the decarburization caused during heat treatment.
It is important to note that decarburization above 0.11 mm in this steel wire results in its disqualification for the required mechanical applications. It is therefore very important to standardize the removal of oxidation from the surface layer of the wire before the wire is subjected to heat treatment.
|
v3-fos-license
|
2019-05-06T14:08:50.756Z
|
2013-12-16T00:00:00.000
|
145173392
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://journals.sagepub.com/doi/pdf/10.1177/2158244013516771",
"pdf_hash": "b017f10a1c9ea4cfd5df21e46f962081198c2ca6",
"pdf_src": "Sage",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43427",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "2b36e4c7a5786a19599cea43081557291998aaf8",
"year": 2013
}
|
pes2o/s2orc
|
Psychometric Analysis of a Scale to Assess Norms of Reciprocity of Social Support in Community-Based and Congregation-Based Groups
Reciprocity, a core component of social capital, is rarely theorized or measured, leaving the relationship between reciprocity and health ambiguous. Historically, reciprocity measures have not been used in the contexts for which they were designed, causing measurement error. This multi-phased study was designed to develop and validate a reciprocity measure for formal and informal groups within communities and congregations as part of a more comprehensive social capital measure. In-depth interviews (n = 72), cognitive interviews (n = 40), and an expert review panel guided item development and selection for content validity. South Carolina residents (n = 500) completed the 10-item Reciprocity of Social Support (RSS) Scale during 2008-2010. Construct validity was supported through an exploratory factor analysis (EFA) that confirmed a two-factor model for the scale for community- and congregation-based groups. Cronbach's α values indicated items were highly correlated for community groups and church groups. Psychometric analyses of the RSS Scale support convergent and divergent validity for the community- and congregation-based groups. Mean RSS Scale scores were not statistically different between community- and congregation-based groups. This scale has proven psychometric properties for use in future research investigating reciprocity of social support in community- and congregation-based groups and will be useful for examining whether reciprocity (by context and type of group) is associated with physical and/or mental health.
Social capital has been defined as connections among individuals' social networks, including group membership, characterized by trust between individuals and norms of reciprocity that facilitates collective action and cooperation for mutual benefit (Kawachi, Kennedy, & Glass, 1999;Putnam, 2000;Putnam, Leonardi, & Nanetti, 1993). Social capital is an established health determinant, independent of other social and behavioral determinants (Giordano, Ohlsson, & Lindström, 2011), and its dimensions at the individual and community levels are associated with individual and population health (Kawachi, Kennedy, Lochner, & Prothrow-Stith, 1997;Moore, Haines, Hawe, & Shiell, 2006;Viswanath, Randolph-Steele, & Finnegan, 2006;Yip et al., 2007). However, questions still surround the relevance of social capital for health outcomes, and researchers are seeking measures within specific social contexts to investigate its effects (Giordano et al., 2011).
To clarify the associations between social capital and health, a critical step is to differentiate between structural social capital and cognitive social capital (Harpham, 2008;Hurtado, Kawachi, & Sudarsky, 2011). Structural social capital refers to behaviors (e.g., participation in associations) that facilitate access and influence networks providing social support or other potentially beneficial resources (Harpham, 2008;Hurtado et al., 2011). Cognitive social capital (e.g., trust and reciprocity) refers to values, perceptions, and expectations regarding social behaviors that provide a sense of community belonging and safe and stable representation of reality (Harpham, 2008;Hurtado et al., 2011). In other words, structural social capital refers to what people do (e.g., participation in associations and networks), and cognitive social capital refers to what people think (e.g., values and perceptions) that influences individuals' behaviors toward mutually beneficial collective action (Harpham, 2008;Krishna & Uphoff, 2002) and is therefore likely to precede the actions included in structural social capital. For the purposes of this article, reciprocity will be examined through the lens of cognitive social capital, that is, what people value or believe they or others in their network would do under certain circumstances.
Reciprocity has long been of theoretical interest in the social sciences. In Alvin Gouldner's (1960) seminal publication, he provided a theoretical explanation of reciprocity and clarified the concept as patterns of returning or responding to benefits received. He described reciprocity as "moral cement" that stabilizes social relationships by creating a sense of obligation to one another (Gouldner, 1960). Harpham (2008) explained reciprocity as a two-way behavioral relationship: when someone has helped another, there is an expectation that the favor will be returned when needed. Stone (2001) emphasized reciprocity as a core construct of social capital because it is an indicator of the quality of social relationships that affects people's abilities to solve common problems (Stewart-Weeks & Richardson, 1998).
Social Capital in Faith Groups and Community Groups
Social capital depends on context at individual and group levels (Kawachi & Berkman, 2000). Tangible and intangible resources are a function of specific social connections (Cattell, 2001). Community participation, through formal and informal groups, is thought to produce more social capital (Putnam, 2000) because it facilitates access to resources (Hurtado et al., 2011) and provides individuals with a sense that they can solve their problems through collective action (Hawe, 1994;Zakus & Lysack, 1998). Putnam (2000) suggests that faith communities may be the single most important source of social capital in the United States. Faith communities commonly contribute to social services and community cohesion through social capital (Dinham, Furbey, & Lowndes, 2009). In the United States, Christianity, Judaism, and Islam all have commitments to peace, justice, honesty, service, personal responsibility, forgiveness, respect, and obligation to others that can contribute to the development of trusting relationships (Furbey, Dinham, Farnell, Finneron, & Wilkinson, 2006). Thus far, no studies have examined whether faith groups also contribute to reciprocity in relationships. Wald and Calhoun-Brown (2007) estimated that between three fifths to three fourths of the American adult population were members of a Christian church. In 2009, 41.6% of Americans reported attending church at least once a week or almost every week (Newport, 2010). Church attendance levels varied across U.S. regions, with the highest levels in the South and Midwest (45%-63%; Newport, 2010). Furthermore, according to the Corporation for National and Community Service (CNCS, 2006), U.S. adults have been more likely to volunteer through religious organizations compared with civic, educational, political, professional, hospital, or recreational organizations with proportions volunteering varying by age from 36.5% among 45-to 54-yearolds to 45.5% among those 65 years and older. No other type of organization has comparable levels of involvement (CNCS, 2006). This high level of volunteerism helps account for the role of faith communities in fostering social capital.
Measuring Reciprocity
Reciprocity is a core part of cognitive social capital (Abbott & Freeth, 2008;Stone, 2001), and there is evidence that reciprocity of social support may impact health outcomes. For instance, a study by Moskowitz, Morris, and Schmidt (2010) examined reciprocity of social support in a low-income population and found that a balanced proportion of giving and receiving may buffer the effect of stressors more than the absolute amount of received support.
However, in most social capital studies, scholars rarely or inadequately theorized or measured reciprocity (Abbott & Freeth, 2008;O'Brien, Burdsal, & Molgaard, 2004;Stone, 2001), leaving the relationship between reciprocity and health ambiguous (Abbott & Freeth, 2008). This may be in large part because social capital constructs are multi-dimensional, not entirely distinct from one another, and the same questions are often used to measure different constructs, particularly among the constructs of trust and reciprocity (Blaxter, 2004). The reciprocity measures that do exist are often used out of the contexts for which they were designed (Abbott & Freeth, 2008). As a result, measures are often worded with underlying assumptions about relationships (e.g., "friends" and "neighbor"), which may not be meaningful to respondents (Abbott, 2009;Harpham, 2008) if the terms are not further defined for the respondent. Common reciprocity measures that use the terms neighbors and friends include "Have you assisted neighbors or friends? Have your neighbors or friends assisted you?" (Ziersch, Baum, MacDougall, & Putland, 2005). In relation to the term friends, respondents may have various meanings ranging from friends who receive greeting cards to those called on during difficult times (Abbott, 2009). Therefore, inferences regarding the respondents' relationships to their "friends" remain unclear and may have varied impacts on health (Abbott, 2009). Common reciprocity measures that use the term neighbor or neighborhood include the ones mentioned previously, as well as "In my neighborhood, most people are willing to help others" (Pollack & von dem Knesebeck, 2004). The term neighbors can also be confusing because respondents may define neighbor differently (e.g., a neighbor who lives on the respondent's street compared with a neighbor who lives in the respondent's community). Such general terms (i.e., friends, neighbors, and neighborhoods) may be imprecise indicators of particularized social capital, that is, social capital that occurs within specific groups of people; therefore, such general terms may not accurately depict the respondent's relationship in a particular context. To address these contextual measurement issues, Dudwick, Kuehnast, Nyhan Jones, and Woolcock (2006) recommend that field researchers have a thorough understanding of the context in which the measures are developed, so the measures for different groups are relevant and understandable to the local population being studied. For example, researchers may rely on local cultural or ethnic idioms that more clearly convey the intended relationship to the respondent.
Other common measures for reciprocity solicit level of agreement with generalized statements such as "Most people try to be helpful" in the General Social Survey (Kawachi et al., 1999) or "People are helpful" and "People look after themselves" from the World Values Survey (Inglehart, Basáñez, Díez-Medrano, Halman, & Luijkx, 2004). Such generalized questions, though simple, do not capture the complexities of social relationships (Abbott, 2009) or the social context (Abbott & Freeth, 2008;Stone, 2001). Some social capital literature presents measures that conflate reciprocity and trust measures, even though they are two separate constructs. Letki and Evans (2005) use both in an effort to measure trust in a single scale. Newton (1997) advised that failure to conceptualize separate dimensions of social capital will confuse our understanding of how these dimensions empirically operate.
Furthermore, many studies have examined social capital based on outcomes (i.e., volunteerism and political participation) but do not examine mechanisms of social capital (Calhoun-Brown, 2005), such as reciprocity among community members, including faith communities. During a broad-based consultative workshop (Dinham & Shaw, 2012), reciprocity was identified as a valued aspect of faith communities by congregation-based groups. Members of congregation-based groups said that reciprocity should be measured to demonstrate contributions of faith communities and to assess the level of support generated within faith communities. However, few quantitative measures capture reciprocity in the context of communities, and no measures exist to assess reciprocity in the context of faith communities (Dinham & Shaw, 2012). This lack of reciprocity measures leaves the relationship between reciprocity and health unknown in particular contexts; it is therefore necessary to develop and refine measures of reciprocity in specific community settings using qualitative data (Dudwick et al., 2006), which will help determine whether different types of groups (i.e., community-based groups and congregation-based groups) differentially affect health outcomes (Morgan & Swann, 2004).
The purpose of this study was to develop and validate a measure of reciprocity for formal and informal groups within specific contexts (community and congregation) to determine whether levels of reciprocity differ by context. A scale that measures reciprocity will allow future studies to examine whether reciprocity (by context and type of groups) is associated with physical and/or mental health.
Study Design
A multiphase study was conducted using scale development methods outlined by DeVellis (2003). The reciprocity scale development was part of a larger project to develop and test the Relationships in Community Groups (RCG) questionnaire, a multi-dimensional measure of social capital. The overview of study phases for the construction of the Reciprocity of Social Support (RSS) Scale is outlined in Table 1.
In Phase I, perceptions of norms of reciprocity in community- and congregation-based groups were assessed using in-depth interviews to support content validity. In Phase II, the qualitative data from Phase I were used to generate items and response categories, which were subsequently reviewed by an expert panel for content validity. In Phase III, the items and response categories were evaluated using cognitive interviews, which led to revisions of the items and response categories, further assessing content validity. In Phase IV, a variety of psychometric methods were utilized to evaluate reliability and validity of the scale, including assessments of construct validity (i.e., convergent and divergent validity). Mean comparisons were also performed for community- and congregation-based groups. These methods and the results for each study phase are described in subsequent sections.
The study protocol was approved by the University of South Carolina's Institutional Review Board. Written informed consent was obtained for all participants in Phase I and Phase IV. Oral consent was obtained from participants in Phase III. The development of the RSS Scale was part of a study that developed a more comprehensive multi-dimensional measure of social capital for community-and congregation-based groups. Participants in Phase I and III received US$20 for their time in the study (between 40 and 90 min) and participants in Phase IV received US$10 for their time in the study (between 20 and 60 min). The limitation to Christians was solely for the qualitative portion of the survey development: the structure and language of the scale reflected those of Christian faith because of the initial setting of preliminary work.
Study Setting
The setting for all study phases was South Carolina. In an effort to include participants with varied racial, educational, and income levels, participants were recruited from eight counties across the state.
Phase I: Conceptual Framework and Qualitative Methods
The methods by which the reciprocity of social support items were developed contributed to content validity. Phase I was guided by a conceptual framework (see Figure 1). Trust has been shown to be inversely associated with common mental illnesses (De Silva, McKenzie, Harpham, & Huttly, 2005). Therefore, it would not be surprising for people with mental illness to score low on individual social capital scales (i.e., trust and/or reciprocity scales; De Silva et al., 2005). As Figure 1 illustrates, high levels of trust often lead to formal and informal group participation, and high levels of social participation often result in increased trust between group members (Lindström, 2004), although limitations of physical health may reduce social participation (Yong, 2012). The research team theorized that as members experience interpersonal trust with group members, they may also experience reciprocity of social support and a sense of belonging to the group, leading to future experiences of trust, reciprocity of social support, and a continued sense of belonging between group members. Furthermore, individuals may experience enhanced positive mental health outcomes, with downstream effects of protective and improved physical health outcomes.
From the theoretical framework, investigators developed an in-depth interview guide using open-ended questions to elucidate participants' values, perceptions, attitudes, and opinions regarding social capital constructs, including reciprocity between group members (Krishna & Shrader, 1999). Interview items included (a) study participants' group participation in the past year; (b) identification of the one group that was most important to them; (c) for the identified group, participants commented on the other group members' interest in helping others (Krishna & Shrader, 1999), which investigated the cultural norm and values held about reciprocity within the group (Stone, 2001); and (d) the benefits received from the connections within the groups, which informed whether participation in social networks was due to a norm of reciprocity (e.g., act out of obligation or for the common good), which are indicators of quality of the social networks (Stone, 2001).
Phase I: In-Depth Interviews
Prior to the current project, several of the authors implemented a congregation-based holistic health intervention aimed at older adults meeting in small, interracial groups. In the program, named Heart, Soul, Mind, and Strength (HSMS), groups met weekly for 2 hr over a 1-year period. Participants from that program were included in Phase I of the study because investigators thought there might be additional mechanisms related to social capital that resulted in broader health benefits.
In 2008, in-depth interviews were conducted (n = 72). Phase I inclusion criteria for participants were African American and White adults (ages 50 and above) who were residents of South Carolina. Phase I study groups were defined by three levels of religious participation: (a) HSMS participants (n = 24), (b) regular attendees of religious services who were not HSMS participants (i.e., attended religious services at least once a month; n = 24), and (c) infrequent/non-attendees of religious services (i.e., did not attend religious services more than twice a year; n = 24), each level stratified by race (African American and White) for a total of six study groups of n = 12 each. The HSMS participants were randomly selected to participate in the in-depth interviews. Regular attendees of religious services and infrequent/non-attendees of religious services were selected using a snowball sampling method from eight counties in South Carolina.
All interviews were audio recorded, transcribed, and researchers provided hand-written notes for each interview. Interviews were coded using QSR NVIVO7 software (QSR International, Burlington, Massachusetts, United States). Each transcript was independently reviewed and analyzed by two researchers (H.C.P. and M.C.M.) using an open coding process (Strauss & Corbin, 1990). Themes were identified across the six study groups based on supporting comments from at least two participants. Coders met on a frequent basis to discuss and compare the themes they identified independently and discussed coding issues until agreement was reached. The codebook contained a list of defined codes.
Phase I: Results
Seventy-six percent (n = 55) of the participants were women. All participants were 50 years or older (M = 66 years, SD = 10.90). Table 2 highlights Phase I participant demographic characteristics. Thematic analysis identified the types of groups to which participants belonged (i.e., community- and/or congregation-based groups); the most important groups in which they participated (i.e., community- and/or congregation-based groups); their group's interest in helping others, whether within and/or outside their group; and reported benefits of group participation, usually described in terms of the social support participants provided and/or received in their group. Comments reflected the types of social support experienced: tangible, emotional, informational, positive social interaction, and spiritual (Krause, 2002;Sherbourne & Stewart, 1991).

[Table 3. Reciprocity of Social Support (RSS) Scale items. How likely would you be there for one or more members of your community group . . . (1) to get through a difficult time emotionally; (2) to do something enjoyable with; (3) to share your own experiences and knowledge; (4) to provide spiritual support (i.e., pray for them); (5) to take a meal if they were sick. How likely would one or more members of your community group be there for you . . . (6) to get through a difficult time emotionally; (7) to do something enjoyable with; (8) to share your own experiences and knowledge; (9) to provide spiritual support (i.e., pray for you); (10) to bring you a meal if you were sick. Note: When congregation-based groups were assessed, the phrase church group was used in place of community group.]
Phase II: Item Development
The social capital literature and the results of the thematic analysis guided the development of a large pool of items and served to support content validity of the measure. During the second study phase, investigators developed 63 items to measure reciprocity of social support. An expert review panel with relevant expertise in medicine, social work, social determinants of health, epidemiology, and health promotion played an integral role by helping to guide the development of the questionnaire. The panel met twice a month for approximately 6 months to determine which questions to include in the social capital measures. Item inclusion was based on items that were theoretically congruent with the reciprocal nature of social support provided and received in a group setting, consistent with the literature or qualitative findings, and structurally appropriate (i.e., concise, not double-barreled, without multiple negatives; DeVellis, 2003).
Phase II: Item Selection
The reciprocity scale development was part of a larger project to develop and test a comprehensive measure of social capital for community groups. Therefore, to reduce participant burden in this study and in future studies, 10 items were selected from the original pool of 63 based on their relevance to the study population. Five items were selected to measure the likelihood the participant would provide social support to group members, and five items were selected to measure the likelihood the participant would receive social support from group members. Items were informed primarily by the work of Sherbourne and Stewart (1991). The item on emotional support, that is, a positive feeling from experience with group members, was derived from participant comments about helping others or being helped during times of personal and emotional difficulties, producing the phrase following the stem "to get through a difficult time emotionally." Positive social interaction, defined as the availability of sharing enjoyed activities, was reflected in the item "to do something enjoyable with." Informational support, offering or receiving advice, information, guidance, or feedback, was assessed by the item "to share (your/their) experiences and knowledge." Spiritual support (Krause, 2002), characterized as support in the realm of faith beliefs, was based on participant comments about feeling supported on their faith journey, finding spiritual guidance, and most frequently providing and receiving prayers, which led to the item "to provide spiritual support, that is, pray for (you/them)." Sherbourne and Stewart (1991) define tangible support as provision of material aid and behavioral assistance. Participants commented on assisting others and/or being assisted, usually during an illness, which led to the item "to take a meal if (they/you) were sick."
Phase III: Cognitive Interviews
The items and question formats for the social capital measures, including the RSS Scale items, were examined using cognitive interviews (n = 40) to support content validity. Cognitive interview participants' demographic characteristics were similar to those of participants who completed the survey in Phase IV (see Table 2). Participants were asked to discuss their reactions to the 10 reciprocity items to identify issues related to order, comprehension, wording, clarity, and response categories (Willis, 2005). Field researchers (H.C.P. and M.C.M.) and the expert review panel iteratively reviewed field notes and made subsequent changes to the scale based on participant feedback.
Phase III: Results
The cognitive interviews led to modifications to the page format, directions, item wording, and response categories. The sample size was expanded until no new instrumentation issues emerged; ultimately, cognitive interviews were used to examine nine versions of the RSS Scale. See Table 3 for the final RSS Scale items.
Phase IV: Procedures Used for Testing Psychometric Properties
Following content validation from the in-depth interviews, expert panel review of scale items, and cognitive interviews, a variety of psychometric methods were utilized in Phase IV to evaluate the reliability and validity of the RSS Scale, and separate analyses were run for the community- and congregation-based groups.
Participants for Phase IV were a convenience sample of 500 adults. Researchers (H.C.P. and M.C.M.) recruited individuals from various community settings (e.g., parks, workplaces, laundromats, flea markets, convenience stores, recreational centers, fund raising events, civic clubs, and churches) to achieve a diverse sample that varied by age, race, and education.
Phase IV: Study Measures
In addition to the RSS Scale, other measures developed for the larger project to assess social capital in community groups were administered at the same time, including an 11-item RCG Trust Scale and a 2-item RCG Sense of Belonging Scale, which were developed by the authors using the same rigorous procedures as the RSS Scale and will be reported in separate manuscripts.
RSS Scale.
The 10-item scale assessed the social support provided and received within each participant's community- and/or congregation-based group. The RSS Scale is displayed in Table 3. A five-point Likert-type response scale was used for all items (1 = not at all to 5 = very likely), and the scale was scored as the mean of the responses to the 10 items. Higher scores indicated higher levels of reciprocity of social support in the group.
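As a small illustration of the scoring rule, the sketch below averages the ten Likert responses; the response vector is hypothetical.

```python
# Hypothetical responses to the 10 RSS items (1 = not at all, 5 = very likely).
responses = [5, 4, 5, 3, 4, 5, 5, 4, 3, 4]

rss_score = sum(responses) / len(responses)
print(rss_score)  # 4.2 -- higher scores indicate more reciprocity of support
```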
RCG Trust Scale.
An 11-item scale was developed to assess trust in community- and/or congregation-based groups. The scale measured four components of trust: openness, benevolence, honesty, and dependability (Hoy & Tschannen-Moran, 1999).
RCG Sense of Belonging Scale.
A two-item scale was developed to assess an individual's sense of belonging in a community- and/or congregation-based group.
Lubben Social Network Scale-6 (LSNS-6). The six-item self-reported scale measured active network size of friends and family, potential instrumental support, and perceived confidants (Levy-Storms & Lubben, 2006;Lubben, 1988). This scale has demonstrated high levels of internal consistency, stable factor structures, and high correlations with criterion variables.
Socio-demographic characteristics.
Items elicited key sociodemographic characteristics: gender, level of education, marital status, race/ethnicity, age, length of residency in current community, and length of membership and frequency of participation in selected group.
Phase IV: Analysis
Data were scanned into Excel© spreadsheets using Teleform© software, and imported into SAS© version 9.2 for data management and analysis, including descriptive statistics of participants' demographic characteristics and scores from the RSS Scale, RCG Trust Scale, RCG Sense of Belonging Scale, and the LSNS-6.
To assess construct validity, an exploratory factor analysis (EFA) was conducted separately, using promax rotation, for community- and congregation-based groups to determine whether the survey items assessed the same latent dimensions and whether all items loaded similarly for each group type. According to Hatcher (1994), the minimal number of subjects for EFA should be greater than 100, or 5 times the number of variables being analyzed; therefore, the study groups were an adequate sample size for the analysis.
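A minimal sketch of the analysis described above, assuming the third-party factor_analyzer package for a two-factor EFA with promax rotation; the data matrix here is random placeholder Likert data, so the loadings are meaningless except as a demonstration of the mechanics.

```python
# pip install factor_analyzer   (third-party package; API assumed current)
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(200, 10)).astype(float)  # placeholder 10-item data

fa = FactorAnalyzer(n_factors=2, rotation="promax")
fa.fit(X)
print(fa.loadings_.round(2))  # 10 x 2 matrix; loadings >= 0.40 read as meaningful
```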
Convergent validity, a form of construct validity, was tested by Pearson correlation analyses between the RSS Scale and the RCG Trust Scale in community- and congregation-based groups. The theoretical framework predicted the two scales should have strong positive correlations. Convergent validity was also tested by Pearson correlation analyses between the RSS Scale and the RCG Sense of Belonging Scale. It was hypothesized that the scales should have strong positive correlations because they are theoretically related. Divergent validity, a form of construct validity, was tested by Pearson correlation between the RSS Scale and the LSNS-6. It was hypothesized that the RSS Scale and the LSNS-6 assess related although distinct constructs; therefore, it was predicted that there would be weak positive correlations between the RSS Scale and the LSNS-6. Last, Cronbach's α (Cronbach, 1951) assessed internal consistency of the 10-item RSS Scale in community- and congregation-based groups. A "high" value of α (≥.70) is considered desirable in most social science research studies (Nunnally & Bernstein, 1978).
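Cronbach's α can be sketched directly from its definition, α = k/(k-1) × (1 - Σ item variances / variance of the total score); the small response matrix below is hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Hypothetical Likert responses: 5 respondents x 4 items.
X = np.array([[5, 4, 5, 4],
              [3, 3, 4, 3],
              [4, 4, 4, 5],
              [2, 3, 2, 3],
              [5, 5, 4, 4]], dtype=float)
print(round(cronbach_alpha(X), 2))
```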
Phase IV: Results
The average age of the participants (n = 500) was 45.95 (SD = 17.95); 68% (n = 342) were female, 54% (n = 268) were White, 50% (n = 251) had graduated from college or completed some graduate study, 46% (n = 230) were married or living with a partner, and 34% (n = 172) had lived in their community 1 to 10 years. Socio-demographic variables were controlled for in all subsequent analyses. Phase IV participant socio-demographic characteristics are presented in Table 2.
The RSS Scale took a few minutes to complete, but was administered along with other measures of social capital, so the entire survey took on average 40 min to complete (ranging from 20 to 60 min).
A repeated-measures ANOVA was used to determine whether RSS Scale scores varied for individuals who participated in both a community-based group and a congregation-based group (n = 233). Results indicated that congregation-based groups had a slightly higher mean RSS Scale score (M = 4.45, SD = 0.58) than community-based groups (M = 4.36, SD = 0.73), though the difference was not statistically significant, F(1, 232) = 2.79, p = .0963.
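With a single within-subject factor at two levels, the repeated-measures ANOVA reported above is equivalent to a paired t test (F(1, n-1) = t²). A sketch under that equivalence, using synthetic paired scores rather than the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
community = rng.normal(4.36, 0.73, 233)                 # placeholder scores
congregation = community + rng.normal(0.09, 0.30, 233)  # paired, slightly higher

t, p = stats.ttest_rel(congregation, community)
print(round(t ** 2, 2), round(p, 4))  # F = t^2 for the two-level case
```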
Note. In Table 4, bold items indicate high loadings (above 0.40). Items 4 and 9 (spiritual support) and Items 5 and 10 (tangible support, i.e., take a meal if they were sick) evidenced low loadings (below .40) on both factors for community group members; when these items were dropped, the two-factor model was run again with the six items that originally had high loadings (above .40). The total number of participants exceeds n = 500 because each participant could report belonging to community- or congregation-based groups, or both.
EFA
The responses to the 10-item RSS Scale were subjected to an EFA using squared multiple correlations as prior communality estimates. The principal factor method was used to extract the factors. A rotation was not possible. A scree test suggested two meaningful factors. The results of the EFA are displayed in Table 4. A factor loading ≥0.40 is considered desirable in social science research (Costello & Osborne, 2005).
EFA and Community-Based Groups
For both types of groups, a two-factor model was used based on the hypothesis that social support is provided and received by members of a group. For members of community-based groups, Items 1, 2, and 3, all related to providing support to other group members, loaded strongly on the first factor (factor loadings from 0.64 to 0.85), and Items 6, 7, and 8 loaded strongly (from 0.60 to 0.78) on the second factor, related to support received from members of the community group. Items 4 and 9 (spiritual support) and Items 5 and 10 (tangible support, that is, take a meal if they were sick) evidenced low loadings (below 0.40) on both factors for community group members. When the items with low loadings were dropped (Items 4, 5, 9, and 10), the two-factor model was run again with the six items that originally had high loadings (above 0.40). Results indicated a good fit for a two-factor model. From the original items, Items 1, 2, and 3 loaded strongly on the first factor, social support provided to members of a community group, now ranging from 0.68 to 0.74, and Items 6, 7, and 8 loaded strongly on the second factor, social support received from members of a community group, ranging from 0.69 to 0.77.
EFA and Congregation-Based Groups
For congregation-based groups, a parallel two-factor model was tested. Items 1 to 5 loaded strongly on the first factor (0.51-0.82), for social support provided to members of a congregation-based group. Items 6 to 10 loaded strongly on the second factor (0.60-0.80), social support received from members of a congregation-based group.
Convergent Validity
The RSS Scale had a very strong positive association with the RCG Trust Scale for community- (r = .71) and congregation-based groups (r = .72; p ≤ .0001 for both). The RSS Scale had a strong positive association with the RCG Sense of Belonging Scale for community- (r = .60) and congregation-based groups (r = .625; p ≤ .0001 for both); therefore, convergent validity was established based on these analyses.
Divergent Validity
The RSS Scale had a weak positive association with the LSNS-6 for community- (r = .26, p ≤ .0001) and congregation-based groups (r = .15, p = .0110), which established divergent validity.
Internal Reliability
Cronbach's α values were high and consistent within the RSS Scale for both study groups. The raw α coefficient for the reciprocity scale was .92 for community-based groups and .93 for congregation-based groups, well above the acceptable level of .70 (Nunnally & Bernstein, 1978). In community-based groups, the raw α coefficient for the subscale social support provided was .87 and was .89 for the subscale social support received. In congregation-based groups, the raw α coefficient for the subscale social support provided was .87 and was .91 for the subscale social support received. Removal of any items did not dramatically alter the internal reliability of the scale. See Table 4 for the results of the EFA using a two-factor model and Cronbach's α values if an item was removed for community-based groups and congregation-based groups.
Discussion
To our knowledge, this is the first scale developed to measure reciprocity of social support suitable for community- and congregation-based groups that has been created using a rigorous scale development process. Results indicated that the scale performed very well on tests of reliability and validity: content validity and different forms of construct validity, including convergent and divergent validity. Norms of reciprocity were explored in community- and congregation-based groups using in-depth interviews. Through cognitive interviews and an expert review panel, items and response categories were assessed, strengthening the scale's content validity (Knafl et al., 2007) and readability, allowing the scale to be self-administered.
The RSS Scale performed as expected in validity tests. Scree test findings suggested two meaningful factors. This procedure helped inform the decision to use a two-factor model, which reinforced the research team's hypothesis about the conceptual nature of reciprocity of social support, specifically that social support is provided (Factor 1) and received (Factor 2). The EFA provided evidence that the items capture reciprocity of social support as two-factor models for both group types. It was hypothesized that there would be two factors of reciprocity of social support: the first, social support provided to group members (Items 1-5), and the second, social support received from group members (Items 6-10). Results indicated that the norms for reciprocity of social support in community-based groups include providing and receiving emotional social support, positive social interaction, and informational social support, which have been found to have protective effects on mental health (Berkman & Glass, 2000;Gjerdingen, Froberg, & Fontaine, 1991;Janevic et al., 2004). Spiritual and tangible social support did not have adequate factor loadings in a two-factor model for community-based groups; therefore, these items (4, 5, 9, and 10) can be excluded when measuring reciprocity of social support in community-based groups.
However, the results of the EFA for norms of reciprocity in congregation-based groups included all 10 items. Therefore, the additive value of belonging to a congregation-based group compared with a community-based group may stem from providing and receiving spiritual and tangible social support. For future studies, the full scale may be used to explain the buffering effects of congregation-based groups on health outcomes compared with community-based groups.
As hypothesized, tests of convergent validity supported very strong positive correlations between the RSS Scale and the RCG Trust Scale for the community- and congregation-based groups (r = .71, .72; p ≤ .0001 for both study groups) and strong positive correlations between the RSS Scale and the RCG Sense of Belonging Scale for the community- and congregation-based groups (r = .60, .625; p ≤ .0001 for both study groups). Tests of divergent validity of the RSS Scale and the LSNS-6 demonstrated a weak positive association for the community- (r = .26, p ≤ .0001) and congregation-based groups (r = .15, p = .0110).
The Cronbach's α values obtained in this study indicated that the items for community groups (α = .90-.91) and church groups (α = .92-.93) were highly correlated.
Results of this study provided evidence that community- and congregation-based groups contribute to reciprocal social support. Although results indicated that congregation-based groups contribute to higher levels of reciprocity of social support, levels were not significantly higher than in community-based groups and thus do not confirm a difference in reciprocity of social support for these groups. However, the differences in item loadings for four items suggest there are aspects of reciprocity that differ between the two types of groups. Currently, there is a lack of literature comparing reciprocity of social support in community- and congregation-based groups, so there is no empirical literature with which results can be compared. Further research is needed to explore whether significant differences in levels of reciprocity of social support exist within particular community- (e.g., volunteer groups) and congregation-based groups (e.g., faith-based support groups, prayer groups).
Although reciprocity is a core construct of social capital (Stone, 2001), little is known about it, as it is very rarely theorized, defined, or measured (Abbott & Freeth, 2008). Previous studies suggest that reciprocity is an indicator of the quality of social relationships (Stone, 2001), results in tangible and intangible resources (Cattell, 2001), facilitates access to resources (Hurtado et al., 2011), and impacts people's abilities to solve common problems through collective action (Hawe, 1994; Stewart-Weeks & Richardson, 1998; Zakus & Lysack, 1998). Therefore, a valid and reliable measure of reciprocity is needed to determine if reciprocity of social support experienced in different types of groups (i.e., community-based groups and congregation-based groups) differentially affects health outcomes (Morgan & Swann, 2004).
To address this paucity, the RSS Scale will now allow for more sophisticated social capital theories to be examined and will assist in the advancement of understanding the relationship between reciprocity of social support in community- and congregation-based groups and the associations of protective health effects, particularly on mental health. If an association does exist, reciprocity of social support may be a targeted strategy for the prevention and management of chronic illnesses or the maintenance of healthy behaviors. Kawachi and Berkman (2001) found that protective effects of social ties on mental health are not uniform across groups in society. Therefore, this measure may enable detection of variation in social support reciprocity among community- and congregation-based groups by demographic characteristics (i.e., age, gender, ethnicity, marital status, and socioeconomic status).
Limitations
The RSS Scale demonstrated reliability and validity in the sample in which it was developed; however, these findings should be interpreted in light of several limitations. This study used a convenience sample. Participants' reciprocal behaviors related to social support may not have been representative of the larger population. Importantly, religious homogeneity of the qualitative work and validity sample limits the generalizability of the scale's validity. The in-depth interviews were conducted with individuals belonging to a Christian Protestant denomination, and the validity sample included only individuals of Christian faith. This indicates that the scale is relevant for Christian populations but may be less applicable in other faith traditions (e.g., Judaism or Eastern religious traditions).
In addition, the in-depth interviews that guided item formation were performed in a population of adults aged 50 and above; other aspects of reciprocity may not have been captured that are relevant to younger age groups. Moreover, women were overrepresented in the in-depth interviews and during administration of the instrument. However, reliability and validity testing indicated positive findings when administered to the study sample that ranged from ages 18 to 94 and to men, which indicates the scale is appropriate to use in various age groups and for men and women.
Finally, this scale may not be appropriate to use outside the Southern United States. The scales were developed in communities across South Carolina, which included high-income and very-low-income areas; however, further psychometric testing is needed for the scales if used in varied geographic and developing regions, so the items reflect the norms of reciprocity specific to the contexts of interest. For example, participants in this study did not identify self-esteem as an important dimension of social support, which refers to others' communications indicating the person is valued (e.g., letting the person know they are competent at something or have an admired personal quality; Cohen & Wills, 1985). However, self-esteem has been found to be an important dimension of support among various populations in previous studies (Brookings & Bolton, 1988; Cohen & Wills, 1985).
Conclusion and Future Research
The RSS Scale for community- and congregation-based groups was developed over a 2-year process, which included qualitative work for item development and validity and reliability testing. To our knowledge, this scale is the first with proven psychometric properties that can be recommended for utilization in future research investigating reciprocity of social support in community- and congregation-based groups. An interesting finding from the study, as indicated by the EFA, was that the norms of reciprocity of social support for church groups include spiritual and tangible social support. Future research should focus on understanding the additive value of belonging to a congregation-based group compared with other community-based groups, and whether the additive value of a congregation-based group is associated with protective mental and physical health outcomes. Furthermore, our results showed small differences in mean RSS scores across different group settings (i.e., higher, but not significantly different scores in congregation-based groups compared with community-based groups). Further research is needed to investigate these differences and determine the implications of reciprocity in specific group settings. Recommendations for future research include adapting the scale for use in other faith communities and additional cultures, including developing countries and various ethnic and religious populations.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This article was supported by a grant from the John Templeton Foundation.
|
v3-fos-license
|
2019-10-17T09:04:42.744Z
|
2019-10-01T00:00:00.000
|
211771712
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.econjournals.com/index.php/ijeep/article/download/8694/4680",
"pdf_hash": "222b16131edd3cc8a3c92dbc3a15dcd9f2cc9ca7",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43428",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "3c4d6929594be49f498fe6e5cc56ec44e5879104",
"year": 2019
}
|
pes2o/s2orc
|
Experimenting the Long-Haul Association between Components of Consuming Renewable Energy: ARDL Method with Special Reference to Malaysia
Global warming is one of the most critical current environmental issues, caused largely by emissions of greenhouse gases such as carbon dioxide from the burning of fossil fuels. Carbon dioxide emissions vary across countries in Asia. It is increasingly recognized that countries must act to promote greater use of renewable energy resources as part of efforts to mitigate climate change. This paper presents a survey of the energy demand situation in Malaysia and Indonesia. Renewable energy resources are becoming attractive for sustainable energy development in Malaysia, and it is essential for Malaysia to increase its consumption of renewable energy to reduce its reliance on polluting fossil fuels for electricity generation. This paper examines the factors influencing renewable electricity consumption (REC) in Malaysia. In particular, our study aims to investigate the long-run relationship among REC, economic growth, CO2 emissions, foreign direct investment (FDI) and trade openness over the period from 1980 to 2015. Using the autoregressive distributed lag (ARDL) bounds testing cointegration approach, we find that economic growth and FDI are the main drivers of REC. Trade openness, however, is found to have a negative effect on REC in the long run. Interestingly, the effect of CO2 emissions on REC is not significant. In addition, the vector error correction model (VECM) Granger causality test finds a unidirectional causality relationship running from GDP to REC, confirming the validity of the conservation hypothesis for Malaysia. Some important policy implications are also discussed. It is accordingly recommended that policymakers focus on the green energy generation sector by expanding renewable energy production from existing sources.
INTRODUCTION
Energy is required in nearly all our daily activities, including agriculture, transportation, telecommunication and industrial activities that affect economic development. Economic development is measured by gross domestic product (GDP), and in Malaysia GDP is closely linked to the energy consumption of the nation (Rabie, 2018). Growth in the Malaysian economy is dependent on a continuous supply of energy.
According to estimates by the International Energy Agency, worldwide energy consumption will expand by 53% by 2030, with 70% of the growth in demand originating from developing countries (Oh et al., 2010). Malaysia is among the more developed ASEAN nations, after Singapore, with GDP of US$15,400 per capita (PPP basis) and GDP growth of 4.6% in 2009 (IMF, 2010). The Malaysian economy grew at 5% in 2005, and overall energy demand is expected to increase at an average rate of 6% per annum (Saidur et al., 2009). In parallel with Malaysia's rapid economic development, final energy consumption grew at a rate of 5.6% from 2000 to 2005 and reached 38.9 Mtoe in 2005. Final energy consumption is expected to reach 98.7 Mtoe, almost three times the 2002 level. The industrial sector will have the highest growth rate of 4.3%; it accounted for some 48% of total energy use in 2007, the highest share of any sector.
Renewable energy is energy produced from natural resources, for example sunlight, wind, rain, tides and geothermal heat; it is generated from natural processes that are continuously replenished (Promsri, 2018). Alternative energy is a term used for an energy source that is an alternative to fossil fuels.
Renewable Energy in Malaysia
The Government adopted the four-fuel strategy, supplementing the national depletion policy, aimed at ensuring reliability and security of supply. The depletion of fossil fuels will require Malaysia to use more renewable energy sources for the sustainability of its development. The National Biofuel Policy encourages the use of biofuels in accordance with the country's Five-Fuel Diversification Policy. The principal target of the National Biofuel Policy is to reduce reliance on fossil fuels, which are associated with environmental problems such as greenhouse gas emissions. In April 2009, Malaysia formulated the National Green Technology Policy to reflect Malaysia's seriousness in driving the message that 'clean and green' is the way forward towards creating an economy based on sustainable solutions. It will also be the basis for all Malaysians to enjoy an improved quality of life (Pineda and Maderazo, 2018). The government wants to promote green technology utilization to push for economic growth under the new economic model.
Renewable Energy Consumption in Malaysia
Renewable energies include wind, solar, biomass and geothermal energy sources. This covers all energy sources that replenish themselves within a short timeframe or are permanently available. Energy from hydropower is only partly a renewable energy. This is certainly the case with river or tidal power plants. Otherwise, various dams or reservoirs also produce mixed forms, for example by pumping water into their reservoirs at night and recovering energy from them during the day when there is increased demand for electricity. Since it is not possible to clearly determine the amount of energy generated in this way, all energies from hydropower are shown separately.
The primary indigenous renewable sources in Malaysia include palm oil biomass waste, hydropower, solar power, solid waste and landfill gas, and wind energy. Among these resources, hydropower is suitable for both small- and large-scale applications, and solar photovoltaic power is also promising (Malek et al., 2010). Furthermore, Malaysia's domestic oil production occurs offshore, mainly near Peninsular Malaysia. At the end of 2015, Malaysia's crude oil reserve, including condensate, was 5.5 billion barrels of oil equivalent. Malaysia also has a rich natural gas reserve; at the end of 2015, Malaysia's proven natural gas reserves were 14.66 billion barrels of oil equivalent. Figures 1 and 2 show the renewable sources in Malaysia. Malaysia's hydropower potential is assessed at 29,000 megawatts, with 85% of potential sites located in East Malaysia. Biomass resources come mostly from palm oil, wood and agro-industries. In 2015, renewable energies accounted for around 5.2% of actual total consumption in Malaysia. The corresponding chart shows the percentage share from 1990 to 2015.
LITERATURE REVIEW
Malaysia introduced renewable energy as the fifth fuel in the energy mix under the National Energy Policy in 2001. A target was set of 500 MW of grid-connected power generation from renewable energy sources by 2005. The Small Renewable Energy Power (SREP) program was launched at the same time with financial incentives to support this initiative (Abd Halim, 2012). Malaysia has enormous potential renewable energy resources in biomass, solar and hydro. Nevertheless, the implementation of SREP fell short of expectations because of several barriers and difficulties faced by the authorities and developers, and the target was revised in 2006 to 350 MW by 2010. At COP15 in Copenhagen, Malaysia pledged a voluntary reduction of up to 40% in the emissions intensity of GDP by 2020 compared with 2005 levels. With this commitment, the Renewable Energy Act was enacted in 2011 with the provision of a Feed-in Tariff, giving more attractive incentives to spur the implementation of grid-connected power generation from renewable energy resources. With the new RE Act 2010, the target was revised to 985 MW by 2015. This paper describes the development of the renewable energy policy structure, strategies and initiatives for renewable energy implementation in Malaysia, in an effort to reduce carbon emissions as pledged at COP15. This paper also gives instances of renewable power generation currently implemented and the ongoing research and development activities to improve the exploitation of renewable energy resources in Malaysia (Rahman and Zhang, 2017).
Malaysia has a good mix of energy resources, including oil, natural gas, coal and renewable energies such as biomass, solar and hydro. Despite these many resources, the country is dependent on fossil fuels for the industrial and transportation sectors (Andriyana, 2011). In 2009, most electricity was generated using fossil fuels such as natural gas, coal, diesel oil and fuel oil. Until recently, Malaysia was still a net energy exporter. Concerns about energy security, the fluctuation of crude oil prices and climate change are driving significant changes in how energy, and electricity specifically, is generated, transmitted and consumed in Malaysia. In this respect, renewable energy resources are becoming attractive for sustainable energy development in Malaysia, because renewable sources of energy are abundant there, the major ones being biomass and solar (Rahman, 2017). This article presents a review of the present energy situation and energy policies for the energy sector in Malaysia. It examines various renewable energies and the energy and environmental issues associated with them, and assesses the current utilization of renewable energy sources and their potential implementation to provide solutions for the nation. Figure 3 shows the renewable electricity consumption in Malaysia. A related study applied a panel vector error correction model to investigate the relationship in nineteen emerging economies; its findings establish that renewable energy substantially reduces carbon emissions in the long run, although it failed to find a significant effect of renewables on carbon emissions in the short run.
The similar study of Azlina and Mustapha (2012) also examined this association in Malaysia. Its results revealed the link between renewables and environmental degradation and suggest a unidirectional connection between the variables, indicating that an increase in renewables is likely to reduce environmental deterioration in the form of carbon emissions.
Sources of Data Collection
The study uses annual time series data for the period from 1980 to 2015 for Malaysia. The data are obtained from International Energy Statistics, the World Development Indicators (WDI) and the Emissions Database for Global Atmospheric Research (EDGAR).
Variables used in this study
The dependent variable in the model is renewable electricity consumption (REC), measured as renewable electricity net consumption in billion kWh. The explanatory variables are economic growth, carbon dioxide (CO2) emissions, trade openness and FDI, which are proxied by GDP per capita (US$), CO2 emissions in metric tons per capita, the share of imports and exports in GDP, and the ratio of foreign investment to GDP, respectively. To reduce variation and induce stationarity in the variance-covariance matrix, the natural logarithm (ln) is applied to all of the variables.
Procedure
The analysis begins by determining the order of integration of the variables using unit root tests. The Augmented Dickey-Fuller (ADF) test is one of the best-known unit root tests, based on the model of a first-order autoregressive process (Box and Jenkins, 1970). In addition, the Phillips-Perron (PP) test allows for milder assumptions on the error distribution and controls for higher-order serial correlation in the series as well as heteroscedasticity. After determining the order of integration, the existence of a long-run relationship between the variables is tested. The autoregressive distributed lag (ARDL) method is proposed because of its effectiveness for small sample sizes compared with the Johansen (1988) and Engle and Granger (1987) tests. ARDL is also applicable irrespective of whether the variables are integrated of order zero or one, or mutually cointegrated, as long as none is of order two. Additionally, Banerjee et al. (1998) showed that ARDL does not push the short-run coefficients into the residuals. Essentially, the ARDL method involves the estimation of an unrestricted error correction model (UECM) in first-difference form, augmented with one-period lags of all variables in the model. The UECM can be written as:

ΔlnREC_t = α0 + Σ β_i ΔlnREC_(t-i) + Σ γ_i ΔlnGDP_(t-i) + Σ θ_i ΔlnCO2_(t-i) + Σ λ_i ΔlnTRADE_(t-i) + Σ φ_i ΔlnFDI_(t-i) + δ1 lnREC_(t-1) + δ2 lnGDP_(t-1) + δ3 lnCO2_(t-1) + δ4 lnTRADE_(t-1) + δ5 lnFDI_(t-1) + ε_t

where REC is renewable electricity consumption, GDP is economic growth, CO2 is carbon dioxide emissions, TRADE is trade openness, FDI is foreign direct investment, Δ is the first-difference operator and ε_t is the error term. The optimal lag length is chosen based on Akaike's Information Criterion (AIC). The F-statistic obtained from a Wald test is used to determine the joint significance of the coefficients on the lagged levels of the variables. The null hypothesis of no cointegration is H0: δ1 = δ2 = δ3 = δ4 = δ5 = 0. The upper critical bound (UCB) assumes that all the regressors are I(1), while the lower critical bound (LCB) assumes I(0). Given that the sample size of this study is relatively small (T = 36), rejection of the null hypothesis is judged against the critical values simulated by Narayan (2005). Cointegration exists between the variables if the F-statistic is greater than the UCB; conversely, the null hypothesis cannot be rejected if the F-statistic is lower than the LCB, which indicates that the variables are not cointegrated. Following Bardsen (1989), the long-run coefficients are estimated as the ratio of the coefficient of each independent variable to the coefficient on the lagged dependent variable.
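A minimal sketch of the bounds-testing step is given below, estimating the UECM by OLS and applying a Wald F-test to the lagged-level terms. The data frame and column names (ln_rec, ln_gdp, ln_co2, ln_trade, ln_fdi) are hypothetical, and a single common short-run lag is used for brevity; the paper's ARDL(1, 4, 0, 2, 3) specification selects a separate lag order per variable by AIC, and the resulting F-statistic must be compared with Narayan's (2005) small-sample critical bounds rather than asymptotic ones. Stationarity pre-checks can use statsmodels' adfuller in the same way.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def uecm_bounds_fstat(df, lags=1):
    """Bounds-test F-statistic for H0: no cointegration (delta_1 = ... = delta_5 = 0).

    df: DataFrame with columns ln_rec, ln_gdp, ln_co2, ln_trade, ln_fdi,
    dependent variable first (column names are illustrative).
    """
    d = df.diff()
    X = pd.DataFrame(index=df.index)
    for col in df.columns:                    # one-period lagged levels (the delta terms)
        X[col + "_lvl1"] = df[col].shift(1)
    for col in df.columns[1:]:                # contemporaneous differences of the regressors
        X["d_" + col + "_l0"] = d[col]
    for col in df.columns:                    # lagged short-run dynamics (common lag here)
        for i in range(1, lags + 1):
            X["d_" + col + "_l" + str(i)] = d[col].shift(i)
    data = pd.concat([d[df.columns[0]].rename("dy"), X], axis=1).dropna()
    res = sm.OLS(data["dy"], sm.add_constant(data.drop(columns="dy"))).fit()
    hypothesis = ", ".join(c + " = 0" for c in X.columns if c.endswith("_lvl1"))
    return float(np.squeeze(res.f_test(hypothesis).fvalue))  # compare with Narayan (2005) bounds
```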
This is followed by Granger causality tests to examine the short-run causal relationships between the variables. If the variables are not cointegrated, a vector autoregressive (VAR) model in first-difference form is used. Conversely, if cointegration is found between the variables, the study estimates the direction of causality using a vector error correction model (VECM) of the form:

ΔlnREC_t = μ + Σ_(i=1..k) a_i ΔlnREC_(t-i) + Σ_(i=1..k) b_i ΔlnGDP_(t-i) + ... + λ ECT_(t-1) + ε_t

where Δ is the first-difference operator, k is the optimal lag order selected by AIC, ECT_(t-1) is the lagged error correction term and ε_t is the error term.
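The short-run causality step can be sketched with a pairwise Granger test on differenced series, as below. This is a simplified stand-in for the paper's VECM-based test (which also conditions on the error correction term), and it assumes the hypothetical column names used in the earlier sketch.

```python
from statsmodels.tsa.stattools import grangercausalitytests

# Tests H0: the second column does NOT Granger-cause the first.
# Here: does lagged ln_gdp help predict changes in ln_rec?
pair = df[["ln_rec", "ln_gdp"]].diff().dropna()
granger_results = grangercausalitytests(pair, maxlag=2)
```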
Table 2 displays the results of the ADF and PP unit root tests for the five variables, both at level and at first difference of the natural log values. Interestingly, all of the variables in the ADF test are non-stationary at level except FDI. The variables become stationary when first differenced at the 1% significance level, while trade is the only variable significant at the 5% level. Moreover, the PP test produces results similar to the ADF test: only FDI achieves stationarity at level, while the other variables, REC, GDP, CO2 emissions and trade, become stationary at first difference at the 5% significance level. As the variables are found to be of order I(0) and I(1), we use the ARDL bounds test to determine the long-run cointegration of GDP, CO2 emissions, trade and FDI with REC in Malaysia. This mix of I(0) and I(1) variables is also compatible with VECM Granger causality (Sinha and Sinha, 1998). Table 3 displays the results of the bounds test based on REC and its determinants. The ARDL (1, 4, 0, 2, 3) model is selected to fit the data, and the optimal lag chosen is one, based on the AIC. The computed F-statistic of 13.591 in the ARDL bounds test is greater than the upper critical bound value of 6.250 at the 1% significance level. The rejection of the null hypothesis of no cointegration suggests the existence of a steady-state long-run relationship among GDP, CO2 emissions, trade, FDI and REC in Malaysia. This is in line with the studies of Sadorsky (2009) and Sebri and Ben-Salha (2014), who find a long-run relationship among the variables.
ARDL bounds test
The Breusch-Godfrey serial correlation LM test shows that the model is free from serial correlation, and no heteroskedasticity problem is found by the ARCH test. Table 4 shows the results for the long-run elasticities of the explanatory variables with respect to REC. Strikingly, all the explanatory variables except CO2 emissions are found to be significant in explaining REC in the long run for Malaysia. It is apparent that GDP has a positive and significant coefficient in the long run at the 10% significance level, showing that a 1% increase in GDP would raise REC by 1.189%. The estimated income elasticity of REC demand is greater than 1, indicating that renewable electricity can be considered a superior good in Malaysia. The result is within our expectations, as higher economic growth would give the economy more resources to promote the use of greener energy sources, including renewable electricity. Meanwhile, better economic performance enables people in the country to have more income to spend on environmental protection and to demand more renewable energy. The positive effect of GDP on REC is confirmed by past studies, for example Marques and Fuinhas (2011). Moreover, the finding further implies the importance of economic growth in boosting REC by increasing the country's capacity to develop technologies related to renewable electricity in Malaysia.
An additional one percent increase in FDI likewise significantly raises REC, by 0.299%. This suggests that technology transfer through FDI has effectively enhanced the consumption of renewable electricity. Our finding agrees with the prevalent view that FDI is essential to improving REC.
Trade openness is found to be negatively related to REC in the long run: REC falls by 1.352% with a one percent increase in trade openness. The result demonstrates that foreign trade does not encourage the transfer of renewable technologies in electricity generation; in other words, it does not promote the consumption of renewable electricity in Malaysia. However, this contradicts conclusions documented in the literature, for example Ben Jebli and Ben Youssef (2015) and Omri and Nguyen (2014), who claim that trade openness leads to an increase in the use of renewable energy.
On the other hand, our results show that CO2 emissions do not significantly affect REC. The coefficient is positive but not significant, so our finding is not in line with past research. A positive relationship between CO2 emissions and REC is nevertheless consistent with claims that a rise in CO2 emissions is a major driver of REC. In the case of Malaysia, a positive yet insignificant relationship between the two variables can be explained by the fact that public awareness of the importance of using renewable electricity is still weak among Malaysians.
VECM Granger causality test
The results of the Granger causality test, shown in Table 5, demonstrate that GDP Granger-causes REC in a unidirectional manner at the 10% significance level. This result confirms the importance of Malaysia's economic growth in promoting REC. Similar to the findings obtained by Kula (2014), a unidirectional causality is found from GDP to REC. However, our finding differs from the studies by Farhani and Shahbaz (2014) and Al-mulali et al. (2014), who found no causality and a bidirectional connection between economic growth and REC, respectively.
Likewise, a unidirectional causal relationship is found from trade openness to CO2 emissions at the 1% significance level, indicating that trade liberalization contributes to the deterioration of environmental quality in Malaysia. This is consistent with the findings of Kasman and Duman (2015), who also find a unidirectional causality from trade openness to CO2 emissions in the case of new EU member countries. Our result, however, conflicts with the finding of Ohlan (2015), who suggests that there is no causality between the two variables. Table 6 shows the results of the variance decomposition analysis (VDC), which separates the variation of each endogenous variable into the component shocks to the VECM. The VDC of REC clearly shows that TRADE and GDP are crucial in explaining the evolution of REC (Hussain et al., 2019). The shocks to REC in response to a one-standard-deviation innovation in TRADE and GDP range up to 20.96% and 20.93%, respectively (Johari et al., 2018). In line with the findings of the ARDL and IRF analyses, these VDC findings confirm the dynamic impact of shocks from TRADE and GDP on REC (Saudi et al., 2019). This suggests that the growth and development of the Malaysian economy is critical to improving the consumption of renewable electricity. On the other hand, the shocks of CO2 (8.77%) and FDI (6.46%) are found to contribute only a minor impact on the shocks of REC over the discrete time periods.
Variance decomposition analysis
In addition, the VDC of GDP demonstrates the most important shock effect from CO2 emissions (12.53%) on the shocks of GDP, compared with REC (11.15%), TRADE (4.02%) and FDI (0.15%), respectively. Moreover, the VDC of CO2 finds that the shock effect of TRADE responds strongly to a one-standard-deviation innovation in CO2 emissions. This is in line with the VECM Granger causality finding that TRADE Granger-causes CO2 emissions. Meanwhile, a one-standard-deviation shock in GDP explains CO2 emissions by 27.25%. The existence of an inverted U-shaped relationship between GDP and CO2 emissions in Malaysia is demonstrated, as emissions increase with GDP at the initial stage and begin to decline after reaching the peak point. The contributions of REC, GDP, CO2 and FDI to TRADE amount to 18.65%, 4.88%, 5.50% and 4.62%, respectively. Moreover, the variations of FDI are explained by one-standard-deviation shocks in REC, GDP, CO2 and TRADE at the levels of 1.73%, 32.82%, 14.66% and 11.87%, respectively.
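For readers who wish to reproduce a decomposition of this kind, the sketch below uses a VAR on differenced series and the forecast-error variance decomposition in statsmodels. The paper's figures come from a VECM, so this is an approximation, and the data frame df with the earlier hypothetical column names is assumed.

```python
from statsmodels.tsa.api import VAR

# df columns: ln_rec, ln_gdp, ln_co2, ln_trade, ln_fdi (illustrative names).
var_model = VAR(df.diff().dropna())
var_res = var_model.fit(maxlags=4, ic="aic")
fevd = var_res.fevd(10)    # share of each variable's forecast-error variance
print(fevd.summary())      # attributable to each shock, over a 10-period horizon
```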
CONCLUSION
This paper investigates the role of renewable energy consumption in influencing the economic prosperity of Malaysia using annual data over the period from 1980 to 2015. Malaysia has substantial renewable resources in biomass, hydro and solar energy. The National Energy Policy and associated strategies are already in place for a significant contribution of renewables to the electricity generation mix. Identifying the determinants of REC is important for Malaysia to design appropriate policies and strategies to combat environmental problems. In this study, we investigate the drivers of and barriers to the consumption of renewable electricity in Malaysia. Results obtained from the ARDL bounds testing cointegration approach reveal that economic growth and FDI are the fundamental drivers of REC in Malaysia. This implies that REC could be further improved through increases in GDP and FDI. On the other hand, trade openness is found to negatively affect REC in the long run, indicating that trade in goods and services among countries tends to hinder the consumption of renewable electricity. CO2 emissions are found to affect REC insignificantly. Moreover, a unidirectional causality relationship is found running from GDP to REC, suggesting that the conservation hypothesis is valid for Malaysia. It is also confirmed that a unidirectional Granger causality runs from trade openness to CO2 emissions, but not the other way around. We find that FDI positively affects REC; this result indicates that FDI can be used as an instrument to promote the use of renewables. While attracting more FDI, the Malaysian government needs to ensure that only those foreign investors who develop and adopt renewable energy are welcomed to the country.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2011-02-24T00:00:00.000
|
13386083
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CC0",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1289/ehp.1002560",
"pdf_hash": "6c552f72978e1ce2b5aa164d82a35a7b786774b8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43430",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3d18dcbd2721cad1dc774594b7df0efff6e9cf41",
"year": 2011
}
|
pes2o/s2orc
|
Medium-Term Exposure to Traffic-Related Air Pollution and Markers of Inflammation and Endothelial Function
Background Exposure to traffic-related air pollution (TRAP) contributes to increased cardiovascular risk. Land-use regression models can improve exposure assessment for TRAP. Objectives We examined the association between medium-term concentrations of black carbon (BC) estimated by land-use regression and levels of soluble intercellular adhesion molecule-1 (sICAM-1) and soluble vascular cell adhesion molecule-1 (sVCAM-1), both markers of inflammatory and endothelial response. Methods We studied 642 elderly men participating in the Veterans Administration (VA) Normative Aging Study with repeated measurements of sICAM-1 and sVCAM-1 during 1999–2008. Daily estimates of BC exposure at each geocoded participant address were derived using a validated spatiotemporal model and averaged to form 4-, 8-, and 12-week exposures. We used linear mixed models to estimate associations, controlling for confounders. We examined effect modification by statin use, obesity, and diabetes. Results We found statistically significant positive associations between BC and sICAM-1 for averages of 4, 8, and 12 weeks. An interquartile-range increase in 8-week BC exposure (0.30 μg/m3) was associated with a 1.58% increase in sICAM-1 (95% confidence interval, 0.18–3.00%). Overall associations between sVCAM-1 and BC exposures were suggestive but not statistically significant. We found a significant interaction with diabetes—where diabetics were more susceptible to the effect of BC—for both sICAM-1 and sVCAM-1. We also observed an interaction with statin use, which was statistically significant for sVCAM-1 and suggestive for sICAM-1. We found no evidence of an interaction with obesity. Conclusion Our results suggest that medium-term exposure to TRAP may induce an increased inflammatory/endothelial response, especially among diabetics and those not using statins.
Research
There is strong epidemiological evidence that short-term air pollution exposure (i.e., < 24 hr to 3 weeks) is related to mortality and other cardiovascular events (Brook et al. 2010). Much of this evidence involves exposure to ambient particulate matter (PM) with aerodynamic diameter ≤ 2.5 μm (PM2.5), which comprises many components and varies regionally. Contributions of specific components and sources to these effects are not well understood but are critical for informing the development of regulations.
Short-term exposure studies often use stationary monitors to estimate exposure in a nearby region. However, specific components of traffic-related air pollution (TRAP) vary substantially within cities, and traffic variables may contribute to this variation (Brauer et al. 2003;Clougherty et al. 2008;Kinney et al. 2000). This suggests that a geographically based approach could substantially improve assessment of black carbon (BC) exposure.
Long-term exposure to TRAP (i.e., averages ≥ 1 year) has also been associated with cardiovascular mortality, often based on studies using nitrogen dioxide as a surrogate for traffic pollutants and spatial modeling of exposures (Brunekreef et al. 2009; Gehring et al. 2006; Yorifuji et al. 2010). In a Boston, Massachusetts-area case-crossover analysis, we reported an association between mortality and BC exposure on the day before death, using spatiotemporally modeled BC exposure estimates as a marker of TRAP (Maynard et al. 2007).
Intercellular adhesion molecule-1 (ICAM-1) and vascular cell adhesion molecule-1 (VCAM-1) are markers of inflammation and endothelial function that are expressed on cell surfaces and are also found in soluble form in the plasma (sICAM-1 and sVCAM-1). These markers are independently and jointly associated with increased risk of cardiovascular disease (Albert and Ridker 1999; Pradhan et al. 2002; Rana et al. 2011; Ridker et al. 2003). Recent studies have reported associations between inflammatory markers and short-term exposure to TRAP, but studies have varied in both PM exposure type and inflammatory markers examined (Delfino et al. 2008; Madrigano et al. 2010; Zeka et al. 2006).
Obesity and diabetes are becoming increasingly prevalent, and these conditions may increase susceptibility to the adverse health effects of air pollution (Forastiere et al. 2008; O'Neill et al. 2005; Zanobetti and Schwartz 2002). Prospective and randomized clinical trials, as well as laboratory studies, have demonstrated that statins, a widely prescribed class of drugs with anti-inflammatory and antioxidant activity (Haendeler et al. 2004), can decrease C-reactive protein (CRP), ICAM-1, and VCAM-1 levels (Albert et al. 2001; Blanco-Colio et al. 2007; Liang et al. 2008; Montecucco et al. 2009). Thus, it is plausible that statin use could attenuate proinflammatory effects of air pollution, and reduced effects of air pollution on inflammation and heart-rate variability among statin users compared with nonusers have been reported (Schwartz et al. 2005; Zeka et al. 2006).
The primary objective of the present study was to estimate the cumulative effects of exposure to TRAP over several weeks on inflammation and endothelial function. Previous findings based on BC measured at a central-site monitor suggested associations between inflammatory markers and air pollution exposure averaged over the prior 4 weeks (Zeka et al. 2006). Because BC concentration varies spatially and averaging over a longer period of time decreases the temporal variation in the data, we wanted to examine effects of 4- to 12-week average exposures estimated using a land-use regression model. We chose sICAM-1 and sVCAM-1 as outcomes because they reflect changes in inflammation, are associated with cardiovascular disease risk, and are relatively stable within individuals over 4-week periods (Eschen et al. 2008).
We hypothesized that increases in medium-term BC concentrations estimated by land-use regressions would be associated with an increased inflammatory response in elderly men in the Greater Boston area. We also examined whether that effect was modified by statin use, obesity, and diabetes.
Materials and Methods
Study population. We studied participants in the Normative Aging Study (NAS), a longitudinal study established by the VA in 1963 (Bell et al. 1972). In brief, the NAS enrolled 2,280 men from the Greater Boston area who were initially free of known chronic medical conditions. All participants provided written informed consent, and the study was approved by the institutional review boards of all participating institutions. Participants visited the Boston VA Hospital study center every 3 years to undergo physical examinations. At each of these visits, blood samples and extensive physical examination, laboratory, anthropometric, and questionnaire data were collected. Information about cigarette smoking, medical history, and medication use was obtained by self-administered questionnaire. Each subject was interviewed to confirm the identity and purpose of medications used, and all new disease diagnoses were noted.
Diabetes was defined as a physician diagnosis of diabetes, and obesity was defined as a body mass index (BMI) of at least 30 kg/m2. Self-reported data on diabetes status and statin use were updated at each study visit. In addition, BMI and obesity were updated based on height and weight measurements at each visit. Thus, NAS data reflect changes in disease status and medication use over time.
Measurements of sICAM-1 and sVCAM-1 began in 1999. For the present study, we included the 642 NAS participants with at least one measurement of sICAM-1 and sVCAM-1 and whose home address was in the Greater Boston area (1,423 total person-visits). Subjects who moved out of the area were excluded.
BC exposure prediction. The BC exposure model and the stationary air monitors used to develop the model have been described in detail previously. Briefly, 82 sites were used; most sites measured BC continuously using aethalometers, and other sites collected particles on a filter over 24 hr and measured elemental carbon (EC) using reflectance analysis. The monitoring data used to develop our model included 6,031 observations from 2,079 unique exposure days.
Using a spatiotemporal model that we developed and validated previously, we estimated the 24-hr average BC concentration at each geocoded participant address. Predicted daily concentrations showed a > 3-fold range of variation in exposure across measurement sites (adjusted R2 = 0.83). A validation sample at 30 additional monitoring sites showed an average correlation of 0.59 between predicted and observed daily BC levels. We averaged the 24-hr predictions to form estimates for the 4, 8, and 12 weeks before each participant visit. We also averaged the 24-hr predictions to form estimates for the 4-week average during the 5-8 weeks before the study visit and the 4-week average during the 9-12 weeks before the study visit, which are components of the 8- and 12-week averages, to use as a sensitivity analysis.
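A sketch of this exposure-averaging step is shown below. The series bc_daily stands for the modeled daily BC predictions at one geocoded address (a hypothetical name), and window boundaries are approximate at the day level.

```python
import pandas as pd

def exposure_windows(bc_daily, visit_date):
    """Average modeled daily BC over the windows used in the analysis.

    bc_daily: pd.Series of 24-hr BC predictions indexed by date.
    Returns cumulative 4/8/12-week means plus the block lags (weeks 5-8, 9-12).
    """
    t = pd.Timestamp(visit_date)
    wk = lambda n: t - pd.Timedelta(weeks=n)
    return {
        "bc_4wk": bc_daily.loc[wk(4):t].mean(),
        "bc_8wk": bc_daily.loc[wk(8):t].mean(),
        "bc_12wk": bc_daily.loc[wk(12):t].mean(),
        "bc_wk5_8": bc_daily.loc[wk(8):wk(4)].mean(),
        "bc_wk9_12": bc_daily.loc[wk(12):wk(8)].mean(),
    }
```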
Covariates in the BC prediction model included measures of land use for each address (cumulative traffic density within 100 m, population density, distance to nearest major roadway, and percent urbanization), geographic information system (GIS) location (latitude, longitude), daily meteorological factors (apparent temperature, wind speed, and height of the planetary boundary layer), and other characteristics (day of week, day of season). The Boston central-site monitor was also included as a predictor to reflect average pollutant concentrations over the entire region on each day.
Separate models were fit for warm and cold seasons. Interaction terms between the temporal meteorological predictors and source-based geographic variables allowed for space-time interactions. Regression splines allowed main effect terms to nonlinearly predict exposure levels, and thin-plate splines modeled the residual spatial variability not explained by the spatial predictors. A latent variable framework was used to integrate BC and EC exposure data, where BC and EC measurements were treated as surrogates of some true, unobservable traffic exposure variable; see Gryparis et al. (2007) for further details.
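A highly simplified sketch of such a land-use regression is given below, using regression splines in a statsmodels formula. The predictor names are placeholders for the covariates listed above, and the actual model additionally used thin-plate spatial splines, season-specific fits, and the latent-variable treatment of BC/EC, all of which this sketch omits.

```python
import statsmodels.formula.api as smf

# Nonlinear main effects via B-splines (patsy's bs), plus one space-time
# interaction between a meteorological and a traffic predictor.
lur = smf.ols(
    "bc ~ bs(traffic_density, df=4) + bs(apparent_temp, df=4)"
    " + population_density + dist_major_road + central_site_bc"
    " + apparent_temp:traffic_density",
    data=monitor_days,   # hypothetical monitor-day table
).fit()
```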
Estimation of health effects. We log-transformed sICAM-1 and sVCAM-1 levels to increase normality and stabilize variance in the residuals. Model covariates were selected a priori, and all models included age, BMI, diabetes, smoking status, pack-years, and season. We used mixed models to account for correlation among measurements on the same subject across different medical visits. Mixed models here have the form

Y_ij = β0 + β1 BC_ij + γ'X_ij + u_i + ε_ij,

where Y_ij is log(sICAM-1) or log(sVCAM-1) in subject i on day j, BC_ij is the average BC exposure, X_ij is the vector of model covariates, and u_i represents a subject-specific intercept that reflects unexplained heterogeneity in the outcome. BC averages and model covariates are modeled as fixed linear effects, and u_i is modeled as a random effect. We assume that the u_i are generated from a normal distribution with common variance, yielding the simple compound symmetry variance structure. This model requires estimation of two variance components, which represent between- and within-subject variation. Models with unbalanced data (i.e., varying numbers of repeated measurements on each subject) typically yield accurate estimates of within-subject variation, provided a sufficient number of repeated measurements contribute to the estimate. Models used to examine effect modification by obesity, diabetes, and statin use included interaction terms that allowed associations between BC and the outcomes to vary among subgroups. Diabetes, obesity, and statin use were treated as time-varying covariates, where the status was updated at each visit to reflect changes since the last visit. The percentages of subjects whose status changed for these factors over the study period were 3% for diabetes, 9% for obesity, and 21% for statin use.
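Under the model above, a random-intercept fit can be sketched with statsmodels as follows; the data frame visits and its column names are hypothetical stand-ins for the person-visit data.

```python
import statsmodels.formula.api as smf

# Random intercept per subject gives the compound-symmetry structure described above.
mixed = smf.mixedlm(
    "log_sicam ~ bc_8wk + age + bmi + diabetes + smoking + pack_years + C(season)",
    data=visits,
    groups=visits["subject_id"],
).fit()
print(mixed.summary())
```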
We also performed a sensitivity analysis to investigate whether the interaction with statin use was heavily influenced by the people who began taking statins after the study period began. Specifically, we limited the interaction analysis to the participants who never used statins during the study period (n = 284) and those who used statins throughout the entire study period (n = 223).
Because the outcomes were log-transformed, effect estimates are reported as percent changes in sICAM-1 and sVCAM-1 concentrations associated with a 0.30-μg/m3 increase in BC, which corresponds to the interquartile range (IQR) for average BC exposures over all three time intervals (4, 8, and 12 weeks). A level of α = 0.05 was used to determine statistical significance.
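Because the outcome is modeled on the log scale, a fitted coefficient converts to the reported percent change per IQR as in this small sketch; the 0.30-μg/m3 IQR comes from the text, and beta is assumed to be the BC coefficient per μg/m3.

```python
import numpy as np

def pct_change_per_iqr(beta, iqr=0.30):
    """Percent change in the outcome per IQR increase in BC, given a log-scale beta."""
    return 100.0 * (np.exp(beta * iqr) - 1.0)
```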
Regression analysis. We found significant positive associations between sICAM-1 and BC averaged over the previous 4, 8, and 12 weeks (Table 2). Associations between average BC for 4, 8, and 12 weeks and sVCAM-1 were in the same direction, with slightly smaller effect size, but were not statistically significant. As a sensitivity analysis, we also estimated the effects of BC exposure in weeks 5-8 and weeks 9-12 separately. For sICAM-1, the associations with the longer BC lags were positive, with effect sizes slightly smaller than for weeks 1-4, and not quite statistically significant. A 0.30-μg/m3 increase in BC exposure during weeks 5-8 was associated with a 1.22% increase in sICAM-1 [95% confidence interval (CI), -0.10% to 3.00%], and during weeks 9-12, with a 1.09% increase in sICAM-1 (95% CI, -0.18% to 2.38%). For sVCAM-1, the estimated effects for the longer lags had the same effect size and direction as the effects for weeks 1-4 but were not statistically significant (data not shown). Overall, we saw the same pattern of effects in the lagged 4-week intervals, with only slight weakening in effect when the lags were considered individually rather than cumulatively.
Regression analysis and interactions. When we examined the effect of BC on sICAM-1 and sVCAM-1 by diabetes status, we saw effects only among diabetics, whereas nondiabetics showed no effect (Figure 2; interaction p-values < 0.05 for all models except 12-week BC and sVCAM-1). When we examined the effects of BC by statin use, interaction models suggested that effects were present only in participants who did not use statins, with little evidence of associations among statin users (Figure 3; interaction p-values < 0.05 for sVCAM-1 only). In contrast, we found no difference in the estimated effect of BC by obesity status on either outcome (data not shown).
We restricted our sensitivity analysis of statin use interaction to participants who never used statins or always used statins during the study period. The results were comparable, with the effect sizes estimated to be slightly larger in all categories in the restricted model, and statistically significant (data not shown). These differences could reflect some overall difference in health between statin users and non users because statin use was not randomized.
Discussion
We found that address-specific BC exposures averaged over 4, 8, and 12 weeks were positively associated with markers of inflammation and endothelial dysfunction in this elderly cohort. Exposure estimates based on our validated land-use regression model are more accurate than estimates based on ambient monitoring, which are often used for cohorts of this size. Our analyses suggest that diabetics are more susceptible to adverse effects of TRAP than are nondiabetics, but we found no evidence of effect modification by obesity. In addition, we observed a null effect among participants who used statins and a positive association for those not taking statins, but further investigation is needed to clarify this potential effect.

TRAP and inflammatory response. Several studies have examined the effects of short-term (i.e., < 24 hr to 3 weeks) TRAP and various markers of inflammation. For example, in a large epidemiological study, daily increases in ambient PM levels were positively associated with plasma fibrinogen levels, with the strongest association observed in participants with chronic obstructive lung disease (Schwartz 2001). In a study of a small (n = 29) elderly population, Delfino et al. (2008) examined inflammatory markers and PM components measured at each subject's home for 1-9 days on average. The authors reported that several traffic-related components were significantly associated with CRP and interleukin-6, and observed positive but nonsignificant associations with sICAM-1 and sVCAM-1.
In other studies of NAS participants, traffic-related PM (BC and organic carbon) exposure has also been positively associated with plasma total homocysteine concentrations (Park et al. 2008), and this association is modified by polymorphisms in genes related to oxidative stress (Ren et al. 2010). PM2.5 and BC averages of 1-3 days measured at the central site have been associated with increased vascular cell adhesion in NAS participants (Madrigano et al. 2010). Traffic-related PM (particle number and BC) has also been positively associated with inflammatory markers (CRP, white blood cell count, sedimentation rate, and fibrinogen) in NAS participants, with stronger associations with particle numbers than with BC and stronger associations with BC averaged over 4 weeks than averaged over 48 hr or 1 week (Zeka et al. 2006).
Few controlled studies of PM on human inflammatory response have been conducted, but one reported that plasma fibrinogen increased after exposure to urban PM (Ghio et al. 2000), and another reported that peripheral neutrophils, sVCAM-1, and sICAM-1 increased after a 1-hr exposure to diesel PM (Salvi et al. 1999).
Mechanisms and interactions. Diabetics have impaired endothelial function compared with nondiabetics (Calles-Escandon and Cipolla 2001), and there is increasing evidence that diabetics are more susceptible to the effects of air pollution. Inverse associations between 60-day average BC exposure and brachial artery flow-mediated dilation (FMD) were stronger in diabetics than in nondiabetics. Associations between 24-hr exposure to PM2.5 and FMD were stronger among diabetics with markers of severe insulin resistance compared with other diabetics (Schneider et al. 2008).
In contrast with our findings, obese NAS participants have been reported to have stronger associations between short-term BC exposure and plasma CRP, erythrocyte sedimentation rate (Zeka et al. 2006), and sVCAM-1 (Madrigano et al. 2010) than nonobese participants.
Our finding that statin users are less susceptible than those not taking statins is generally consistent with other studies. In diabetics, stronger associations of PM2.5 and BC with sVCAM-1 were reported among those not using statins than in statin users (O'Neill et al. 2007). Associations between CRP and traffic PM exposures of 5 and 9 days were reported to be stronger among those using statins than those not using statins (Delfino et al. 2008). Statins promote endothelial nitric oxide release, which reduces cell adhesion, thus suggesting a mechanism for this association. However, we cannot rule out the possibility that statin use is an indicator of health status or some other unmeasured factor that may explain why BC did not appear to influence sICAM-1 and sVCAM-1 in statin users.
The question of which CAM is most closely associated with PM remains unresolved. Differences in the associations reported in epidemiological studies of the two CAMs could reflect differences in cell types that express the molecules or differences in the process of cleavage and shedding from endothelial cells. In the present study, the effects on both sICAM-1 and sVCAM-1 were consistent: The effect sizes were similar and the direction of the effects and the interactions were the same, even though we found differences in statistical significance. Thus, our study does not support the idea of a different underlying mechanism for the effects of TRAP on these two CAMs.
The expression of both ICAM-1 and VCAM-1 on the surface of vascular endothelial cells is associated with the formation of early atherosclerotic lesions. Although the relationship between the degree of cellular ICAM-1 and VCAM-1 expression and plasma concentrations of soluble forms is not entirely clear, multiple studies have shown that sICAM-1 and sVCAM-1 predict risk of cardiovascular disease (Albert and Ridker 1999; Pradhan et al. 2002; Rana et al. 2011; Ridker et al. 2003). In a recent study investigating the variability of sICAM-1 and sVCAM-1 measures over a 4-week period, Eschen et al. (2008) reported an estimated intrasubject variability of 7.6% for sICAM-1 and 9.5% for sVCAM-1, which suggests that these markers are relatively stable over a 4-week period.
Exposure estimation. A major advantage of the present study is the use of a validated land-use regression model to characterize the individual-level differences in exposures instead of classifying exposure based on measurements at the nearest monitor, land-use regression models without BC measurements, or a weighted form of distance to roadway. Although the study is still limited by the lack of individual-level monitoring data at the home, our validation study suggests that our estimates are highly correlated with actual exposure measurements at locations in Boston other than those used to fit the model and are much more closely correlated with these measurements than are exposure estimates based on a central-site monitor. Although some exposure misclassification will still occur based on our model estimates, we expect that most of the residual error is Berkson type, based on a previous validation study for this model analyzing measurement error (Gryparis et al. 2009). Thus, we expect that the exposure misclassification will not bias effect estimates. The lack of individual activity data also presents a limitation, but the NAS population consists of elderly men who spend a considerable amount of time at or near their home address compared with other population groups.
Our sensitivity analysis of the block lags of weeks 5-8 and weeks 9-12 shows slight attenuation compared with the cumulative lags, suggesting that the cumulative effect may be dominated by the more recent weeks. The effect sizes for cumulative exposures do not attenuate when averaged up to 12 weeks. The 4-week block lags do not isolate the effects of those time windows because each week is correlated in both space and time; thus, future studies to model the specific contributions of different lag weeks would be beneficial in understanding these effects.
Sources and components. Many studies have examined PM at different diameters, but fewer studies have looked at which components of PM are associated with adverse health effects. Our study links a health biomarker with BC, a specific component of PM2.5. In addition, we consider BC to be a surrogate for primary traffic PM (a specific source of air pollution), where the spatial variability of BC on a given day reflects the variability of traffic in the region, which is the basis of the BC model we developed and used for this study.
We did not examine other pollutants such as total PM2.5 or PM2.5 components other than BC in this study. We think it is unlikely that total PM2.5 or any other copollutant is driving the effect we observed for BC because the BC model is based largely on the daily spatial variation exhibited by BC. Other components of PM2.5, such as sulfates and organic PM, are more homogeneous over the study region. Although we cannot completely rule out confounding by copollutants or other factors, the correlation between the model-predicted 4-week BC and the corresponding 4-week averages of PM2.5 measured at the central site was low (0.108), which supports our belief that the effects observed are not driven by any temporal correlation with PM2.5 or one of its components.
Generalizability. A limitation of this study is the restricted demographics of the study population. Study subjects were all elderly men, most of them white. Thus, we cannot generalize our results to other populations. However, the elderly represent a particularly susceptible population, and the growth in the number and proportion of older adults in the United States is unprecedented: by 2030, the number of U.S. adults ≥ 65 years of age will double to about 71 million, or one in every five Americans (U.S. Census 2005).
Conclusions
We observed positive associations between BC exposures and blood levels of sICAM-1 and sVCAM-1, with statistically significant effect estimates for sICAM-1 in the population as a whole and for sVCAM-1 among diabetics and participants who were not using statins. Effects of BC on both markers appeared to be limited to diabetics and possibly those not using statins. Overall, our results suggest that exposure to traffic PM over 4-12 weeks may induce an increased inflammatory and endothelial response, particularly among diabetics, and that statin use may be protective against this effect.
|
v3-fos-license
|
2023-08-14T15:04:49.377Z
|
2023-08-12T00:00:00.000
|
260879974
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.researchsquare.com/article/rs-2849661/latest.pdf",
"pdf_hash": "65a0ef619b6ff09c57df5a5ced28333823372de7",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43432",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"sha1": "6413400cec81dde39a5f456f6a8cc1701e8707d4",
"year": 2023
}
|
pes2o/s2orc
|
Synthesis, characterization and non-isothermal degradation kinetics of Rose Bengal end-capped poly(aniline)/Cr2O3 nanocomposite
Solution polymerization of Ani was carried out in the presence of peroxydisulfate as a free radical initiator under N2 atmosphere at 0-5 °C for 2 h, both in the presence and absence of bulk Cr2O3, under vigorous stirring. During the polymerization reaction, Rose Bengal dye was added as an end-capping agent. The synthesized polymers were characterized by FTIR, UV-visible, fluorescence emission, XPS, XRD, DSC, TGA, SEM, HR-TEM, viscosity and conductivity measurements. The added bulk Cr2O3 controlled the structure of poly(aniline) (PANI), as confirmed by FTIR spectroscopy. The Tg of the Cr2O3-mediated PANI was somewhat higher than that of the pristine PANI. The XPS showed Cr 2p3/2 and Cr 2p1/2 peaks, confirming the nano-sized crystalline Cr2O3. Further, the thermal degradation was studied by non-isothermal degradation kinetics and the associated thermodynamic parameters were determined. The experimental data were compared with the available literature data.
Introduction
Electrically conducting polymers are well known for their easy preparation and excellent conducting properties, particularly poly(aniline) (PANI), which has various technological applications (Goswami et al. 2023; Azar et al. 2023; Micioiu et al. 2021). PANI is a dark green colored polymer with excellent electrical conductivity, near the metallic regime in its doped form (Djara et al. 2020). Such an electronic material nevertheless has some drawbacks, such as low molecular weight, poor thermal properties, poor mechanical properties, poor processability and solubility, and low optical activity. The present work was undertaken to increase the processability of PANI by improving its thermal stability and optical properties. The focus is first set on reviewing the various synthesis routes available for the preparation of PANI. A novel 1D PANI nanorod was synthesized by a facile and green approach (Subalakshmi et al. 2020).
Ammonium persulfate mediated chemical synthesis of PANI was reported by Tarmizi et al. (2021). Electrochemical synthesis of PANI was studied by Nuepane et al. (2021). Fe(III) (Toshima et al. 2000), peroxosalts (Anbarasan and Gopalan 1998) and K2Cr2O7 (Yelilarasi et al. 2009a, b) were used as chemical initiators for the above polymerization. The processability of PANI can be increased to some extent by functionalization and copolymerization. PANI was functionalized with acrylic acid via graft copolymerization (Kang et al. 1997). Rhodamine 6G dye was grafted onto PANI (Jangid et al. 2015). Cysteamine-functionalized PANI was reported in the literature (Tao et al. 2012). Hexadecylbromide (Massoumi et al. 2012) and polysulfone (Alam et al. 2012) were grafted onto PANI. Aniline and auramine O-based copolymers were reported by Ponprabhakaran and co-workers (Ponprabhakaran et al. 2019). Chemical grafting on the phenyl ring or the -NH group of PANI leads to a decrease in electrical conductivity through a ring flipping process. A new approach is therefore required to increase the processability of PANI without disturbing its electrical conductivity. The literature survey indicates that end capping of PANI with functional groups is a new and novel route to increase the solubility, thermal and optical properties of PANI. This constitutes the novelty of the present investigation.
The thermal properties of PANI can be improved by making nanocomposites with various metal or metal oxide nanoparticles, because the added nanomaterial can occupy the space between the polymer chains as a filler. PANI has been made into nanocomposites with CuO (Porselvi et al. 2014), ZrO2 (Prasanna et al. 2016), kaolinite (Morsi et al. 2019), MnO2 (Shafiee et al. 2018), CdS (Raut et al. 2013), TiO2 (Guo et al. 2012), V2O5 (Yelilarasi et al. 2010), As2O3 (Yelilarasi et al. 2009a, b), Sb2O3 (Jeyakumari et al. 2009) and ZnO (Moskafei et al. 2012). In this way, the thermal stability of PANI was increased. In the present work, Cr2O3 was considered because of its application as a pseudo-capacitor electrode material (Ullah et al. 2015) and spintronic material (Sahoo et al. 2007), among other electrochemical applications. Cr2O3 is commercially and easily available at low cost, and during the in situ polymerization the Cr2O3 nanoparticle forms readily. These are the main reasons for selecting Cr2O3 as the filler material in this work. Moreover, Cr2O3-based PANI nanocomposites are rarely reported in the literature, which motivated the authors to take up the present research work.
Rose Bengal (RB) is a fluorescent xanthene dye with a λmax value of 540 nm in aqueous medium. Under strongly acidic pH, it has two functional groups: -OH and -CO2H. It is well known that PANI is a colored, fluorescent and water-insoluble polymer. In order to increase the processability of PANI, its structure is modified here with fluorescent dyes, particularly RB. Copolymerization of RB with Ani is not possible since RB has only one -OH group, which leads to a chain-end process. When RB interacts with PANI through the -OH group, chemical grafting of PANI onto RB occurs via a free radical mechanism. If it interacts with PANI through the -CO2H group of RB, as a doping agent, the steric effect increases and the doping process is restricted. Above all, there is no free position on the phenyl ring for extension of the polymer chain, so RB terminates the PANI chain end and acts as a chain-end capping agent. Apart from dyeing in textile industries, RB is used to produce reactive oxygen species in cancer treatment (Redmond et al. 2019) and in solar cells for electron harvesting (Sayyeed et al. 2010). Depending on the experimental conditions, RB can act as a chain-end capping agent or as a reactive oxygen species generator: the end-capping reaction involves an oxidizing agent, whereas the photosensitizing reaction involves light (hν).
The thermal stability of PANI is low owing to its low molecular weight. It is believed that an added metal oxide increases the thermal stability of PANI. When PANI is subjected to high-temperature applications for long periods, it loses its doping agent and the structure slowly degrades. In order to increase the thermal stability of PANI, RB, with a molecular weight of 1017.6 g/mol, was included in the PANI backbone as an end-capping agent. Hence, it is necessary to study the influence of the end-capping agent on the thermal stability of PANI under air atmosphere. Moreover, RB has a rigid structure with fused ring systems. In order to extend the application of PANI in the thermal engineering field, it was structurally modified with RB as an end-capping agent. This is the key idea of the present research work. Degradation kinetics of PANI under non-isothermal conditions have been reported in the literature (Alves et al. 2018). Yang et al. (2000) studied the thermal decomposition of PANI. Aging and thermal degradation of poly(N-methylaniline) have also been reported (Abthagir et al. 2004). PANI with different dispersion degrees was studied under non-isothermal degradation conditions (Doca et al. 2009). Amita and co-workers (2019) reported the Ea value for the degradation of the PANI/CdO nanocomposite system. The Ea for the PANI/CoFe2O4 nanocomposite system was determined as ~50 kJ/mol (Gandhi et al. 2011). The Ea and thermodynamic parameters were determined for the PANI/ZnS system (Bompilwar et al. 2010). The literature survey indicates that the RB end-capped PANI/Cr2O3 nanocomposite has not yet been reported. Hence, this study examines the TD parameters of CP1 and CP2.
Synthesis of poly(aniline) and poly(aniline)/Cr2O3 nanocomposite systems
Poly(aniline) and its nanocomposite with bulk Cr2O3 were prepared by the solution polymerization technique. 1 M dilute HCl was prepared with DDW and used as the dopant. 1 M Ani solution was prepared with the aid of the 1 M HCl solution. 0.10 M PDS solution was prepared in 1 M HCl solution. 0.001 M RB solution was prepared in NMP, which was used as the solvent for the solution polymerization. The preparation procedure, in brief, is as follows: 10 mL each of the monomer solution, initiator solution, end-capping agent solution, HCl and NMP were taken in a 100 mL RBF, at the concentrations mentioned above. The contents were mixed thoroughly under N2 atmosphere and kept in an ice bath maintained at 0-5 °C for 3 h. The free radical polymerization of Ani was indicated by a change of color to green (Yelilarasi et al. 2009a, b). At the end of two hours of reaction, the contents were filtered through a G4 sintered crucible, washed 3 times with acetone, and then dried at 110 °C overnight. The dried dark green precipitate was collected from the crucible, weighed and stored in a polythene cover under N2 atmosphere to avoid further aerial oxidation. This sample is named RB end-capped PANI (CP1). A similar procedure was followed for the preparation and storage of the RB end-capped PANI/Cr2O3 nanocomposite; here, 3% weight loading of bulk Cr2O3 was used and the resultant polymer was named CP2. The 3% loading of bulk chromic oxide was selected by trial and error: higher weight loadings of bulk chromic oxide powder increase the viscosity of the reaction medium and decrease the % yield of polymer, since the increased viscosity restricts the interaction between monomer and initiator as well as between monomer and bulk chromic oxide. Hence, 3% weight loading of bulk chromic oxide powder was chosen.
The polymerization reaction mechanism is given in Scheme 1. The present system follows a free radical mechanism involving initiation, propagation and termination steps. In aqueous medium, even at 0-5 °C, PDS undergoes cleavage to form two sulfate radicals; this is the initiation step. The sulfate radical interacts with the Ani monomer to form an Ani radical cation, and the interaction of two Ani monomer radical cations leads to the formation of a dimeric structure (Yelilarasi et al. 2009a, b). The propagation reaction proceeds in this way with the formation of Ani oligomer radical cations. Similarly, the sulfate radical can interact with RB, through the hydroxyl group of RB, to form an RB radical. The termination reaction occurs via the interaction between the Ani oligomer radical cation and the RB radical, leading to the formation of the final end product. Primary oxidation leads to the benzenoid form, whereas secondary oxidation leads to the quinonoid form of PANI. Meanwhile, the bulk Cr2O3 accepts free electrons available in the reaction medium to form nano Cr2O3. The sulfate radical withdraws one electron from the amino group of Ani to form the Ani radical cation, liberating one electron to the reaction medium; hence, the conversion of bulk Cr2O3 into nano Cr2O3 involves the consumption of multiple electrons. The nano Cr2O3 thus formed attaches to the imino group of the PANI chain. In this way, PANI/Cr2O3 nanocomposites were formed with HCl doping and the RB end-capping effect. The metal oxide nanoparticles bind readily with the amino or imino group rather than with an -SO3H, -SO2 or other group. The metal oxide nanoparticle does not bind with the RB dye, since RB has no suitable binding site in its structure. Once the Cr2O3 nanoparticles are formed, with their larger surface area, they no longer act as an oxidizing agent; hence they do not affect the benzenoid to quinonoid ratio in the PANI chain. Bulk Cr2O3 is a mild oxidizing agent and may oxidize the Ani monomer or oligoaniline. The new feature of this reaction scheme is the incorporation of RB as an end-capping agent on the PANI backbone via a free radical mechanism. This is expected to enhance the thermal and optical properties of PANI owing to the increase in molecular weight and resonance structure.
Characterizations
The binding energies of the polymer samples were determined using a Thermo Scientific Theta Probe (UK) instrument. The electrical conductivity of the polymer samples was determined using a Keithley four-probe conductivity meter (India). The UV-visible spectra of the polymer samples were measured with a Shimadzu 3600 NIR (Japan) instrument from 200 to 800 nm in DDW. TGA was done with a Universal V4.4A TA Instruments (USA) analyzer under air at a heating rate of 10 K min-1. FTIR spectra were recorded from 400 to 4000 cm-1 using a Shimadzu-8400S (Japan) instrument. An Elico SL 174 (India) instrument was used for fluorescence emission measurements in the range 350 to 700 nm. Surface morphology was recorded with a JEOL 6300 SEM (USA). HR-TEM images were recorded with a JEM 2100 (Japan) instrument. XRD was recorded on a Bruker K 8600 (USA) instrument from 10 to 80° at a scanning rate of 5°/min. DSC of the polymer samples was determined on a Universal V4.4A TA Instruments (USA) analyzer under N2 atmosphere at a heating rate of 10 K min-1.
Non-isothermal degradation kinetics
The ultimate aim of the present work is the determination of the energy of activation (E_a) for the degradation of PANI and its nanocomposite. For this purpose, the Flynn-Wall-Ozawa (FWO), Augis-Bennett and Kissinger methods were followed (Amita et al. 2019). The thermodynamic parameters were also determined and tabulated.
FWO method
The equation used for the determination of E_a by the FWO method is given below:
ln(β) = ln(A·E_a/(R·g(α))) - 5.331 - 1.052·E_a/(R·T)    (1)
where R is the gas constant (8.314 J/(K·mol)), β is the heating rate (°C/min), T is the degradation temperature, A is the pre-exponential factor, g(α) is the integral conversion function, and E_a is the apparent activation energy (kJ/mol). In this method, a plot of ln(β) versus 1/T was made, and the E_a value was calculated from the slope, which equals -1.052·E_a/R.
Augis-Bennett model
The Augis and Bennett equation used to find the E_a value for PANI degradation is given below:
ln(β/T_d) = ln(A) - E_a/(R·T_d)    (2)
where T_d is the degradation temperature, determined exactly from the DTA curve, and A is the pre-exponential factor. The E_a value can be obtained from the slope of the plot of ln(β/T_d) versus 1/T_d.
Kissinger model
The Kissinger equation for the determination of the E_a value is given below:
ln(β/T_d²) = ln(A·R/E_a) - E_a/(R·T_d)    (3)
The E_a can be obtained from the slope of the plot of ln(β/T_d²) versus 1/T_d.
Determination of TD parameters
The TD parameters can be determined from the following equations (Cincovic et al. 2013):
A = (β·E_a/(R·T_max²))·exp(E_a/(R·T_max))
ΔS≠ = R·ln(A·h/(e·χ·k_B·T_max))
ΔH≠ = E_a - R·T_max
ΔG≠ = ΔH≠ - T_max·ΔS≠
where A (min^-1) is the frequency factor, e is the Neper number (2.7183), T_max is the peak temperature, χ is the transmission coefficient, k is the rate constant, h is Planck's constant, k_B is the Boltzmann constant, and ΔG≠, ΔS≠ and ΔH≠ are the changes in Gibbs free energy, entropy and enthalpy, respectively, for the activated complex formation.
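To make the workflow concrete, the following is a minimal Python sketch (an illustration, not part of the original study) of how E_a could be extracted from the slopes of the three plots and how the TD parameters then follow from the Kissinger-based relations above. The (β, T_d) pairs are hypothetical placeholders standing in for values read from the DTA curves, and χ ≈ 1 is assumed.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical (heating rate, peak degradation temperature) pairs;
# real values would come from the DTA curves at the five heating rates.
beta = np.array([5.0, 10.0, 15.0, 20.0, 25.0])        # K/min
T_d = np.array([712.0, 721.0, 727.0, 732.0, 736.0])   # K

def slope(x, y):
    """Least-squares slope of y against x (linear fit)."""
    return np.polyfit(x, y, 1)[0]

# FWO: ln(beta) vs 1/T_d; slope = -1.052*Ea/R (Doyle approximation)
Ea_fwo = -slope(1.0 / T_d, np.log(beta)) * R / 1.052

# Augis-Bennett: ln(beta/T_d) vs 1/T_d; slope = -Ea/R
Ea_ab = -slope(1.0 / T_d, np.log(beta / T_d)) * R

# Kissinger: ln(beta/T_d^2) vs 1/T_d; slope = -Ea/R
Ea_kis = -slope(1.0 / T_d, np.log(beta / T_d**2)) * R

Ea = np.mean([Ea_fwo, Ea_ab, Ea_kis])  # average over the three methods

# Thermodynamic parameters of the activated complex at T_max,
# following the Kissinger-based relations (chi taken as ~1).
h, k_B, e = 6.626e-34, 1.381e-23, np.e
T_max = T_d[1]                      # peak temperature at beta = 10 K/min
beta_s = beta[1] / 60.0             # K/s, for consistent units in A
A = (beta_s * Ea_kis / (R * T_max**2)) * np.exp(Ea_kis / (R * T_max))
dS = R * np.log(A * h / (e * k_B * T_max))   # J/(K*mol)
dH = Ea_kis - R * T_max                      # J/mol
dG = dH - T_max * dS                         # J/mol

print(f"Ea (FWO/AB/Kissinger avg): {Ea/1000:.1f} kJ/mol")
print(f"dS = {dS:.0f} J/(K*mol), dH = {dH/1000:.2f} kJ/mol, dG = {dG/1000:.1f} kJ/mol")
```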
Results and discussion
Determination of the % yield is very important for analyzing the role of the oxidizing agent during the polymerization reaction. PANI (CP1) is a simple, sterically unhindered polymer. In the present work, RB end-capped PANI was prepared using peroxydisulfate as the oxidizing agent, with a yield of 95.2% (Table 1). With the addition of 3% by weight of Cr2O3 under the same experimental conditions, CP2 gave a yield of 97.3%. Thus, in the presence of the metal oxide (Cr2O3) nanoparticle, the % yield of PANI increased slightly. This can be explained as follows. During the in situ polymerization, the bulk Cr2O3 is reduced to nano-sized Cr2O3. The Cr2O3 nanoparticle has a high surface area, resulting in a surface catalytic effect, so more Ani monomer was oxidized to Ani radical cations. At the same time, the added oxidizing agent also helped to form Ani radical cations from the Ani monomer. These two combined effects led to the higher % yield of PANI in the case of CP2.
The Rp gives an idea of the rate of formation of the polymer. CP1 formed at a rate of 2.83 × 10^-4 mol/L/s, whereas the CP2 system exhibited an Rp value of 2.90 × 10^-4 mol/L/s. This reflects the mild catalytic effect of the Cr2O3 nanoparticle discussed above. The Cr2O3 nanoparticle formed during the in situ polymerization has a larger surface area. Owing to their small size, the Cr2O3 nanoparticles act as a filler; due to this filling effect, the thermal, optical and electrical properties of PANI are improved, as discussed in the forthcoming sections. Once the Rp of Ani is increased, the molecular weight of PANI automatically increases along with the resonance effect, which ultimately improves the thermal and optical properties of PANI.
The functional groups present in the RB end-capped PANI backbone (CP1) were characterized by FTIR spectroscopy (Fig. 1a); the spectrum contains quinonoid (1582 cm-1), benzenoid (1488 cm-1), C-N stretching (1304 cm-1), aromatic stretching (817, 688 cm-1), N-C stretching (1027 cm-1), C-H out-of-plane bending (752 cm-1) and chloride ion stretching (616 cm-1) bands. This is in accordance with our earlier report (Yelilarasi et al. 2009a, b). Figure 1b represents the FTIR spectrum of the CP2 system. This system also presents the above peaks, with one new peak due to Cr-O stretching (591 cm-1) (Ponprabhakaran et al. 2019). This confirmed the PANI/Cr2O3 nanocomposite formation. One important point to be noted here is that the benzenoid band in Fig. 1a appears as a doublet due to the presence of PANI and RB, whereas in Fig. 1b it appears as a single sharp peak due to the added Cr2O3. This means the added Cr2O3 controls the structure of PANI. Moreover, during the polymerization of Ani, emeraldine and pernigraniline structures were formed due to homogenization by the bulk Cr2O3, a mild oxidizing agent. In this work, bulk-sized Cr2O3 was added; during the polymerization it gains electrons from the reaction medium and is reduced to its nano-size. This is further confirmed by the XPS and HR-TEM techniques. Electrons are released to the reaction medium by the oxidation of Ani by the persulfate radical and bulk Cr2O3. An advantage of the present work is the synthesis of PANI with the simultaneous formation of Cr2O3 nanoparticles. Above all, the PANI was formed with a unique structure, i.e., a backbone of benzenoid and quinonoid forms, whereas in the absence of Cr2O3 the benzenoid structures of pernigraniline and emeraldine salts were formed; this is supported by the stereoselective reaction of Cr2O3 and the increase in electrical conductivity. Thus, the present methodology offers further advantages to the conducting polymer field. The absorbance peaks were analyzed further using the FTIR spectrum software.
HCl-doped PANI is a dark green colored polymer with a λmax value of 600 nm (Amita et al. 2019), with a broad peak corresponding to the conducting form of PANI. In the case of RB end-capped PANI (Fig. 2a), a sharp peak appeared with a λmax value of 550 nm (Roy et al. 2008). Here, the PANI chains are chemically grafted with RB through its sterically free -OH group. The same chemical reaction in the presence of Cr2O3 shifted the peak to a higher λmax value of 565 nm (Fig. 2b). The red shift is attributed to the association of Cr2O3 with the ether oxygen atom of RB, which leads to increased resonance stabilization. Generally, nano-sized metal oxides degrade the structure of dyes through photodegradation reactions, but in this case such a reaction was suppressed by the low reaction temperature (0-5 °C). Hence, the nanocomposite formation enhanced the optical properties of the RB end-capped PANI. It is necessary to determine the direct band gap (BG) energy in the present work, because the system contains a colored polymer conjugated with the RB dye and semiconducting Cr2O3. From the UV-visible data, a Tauc plot was made, shown in Fig. 2c for CP1, giving a band gap value of 2.16 eV (Table 1). A similar plot was made for the CP2 system (Fig. 2d), and the band gap value was calculated as 2.09 eV (Table 1). It was found that after the nanocomposite formation the BG energy decreased. This confirmed the chemical grafting of RB onto the PANI backbone, i.e., the chain-end capping effect. The decrease in BG energy also confirmed the existence of a chemical interaction between the -NH group of PANI and the Cr2O3 nanoparticles.
Even though PANI is a dark green colored polymer, its emission activity is poor compared with the pristine dyes; the present work was undertaken in part to improve the emission behavior of PANI. The fluorescence emission spectrum (FES) of CP1 is given in Fig. 3a and the data in Table 1. The spectrum showed one emission peak at 557 nm with an intensity of 574 cps (Baynakutan et al. 2020). The CP2 system showed an emission peak at 579 nm (Fig. 3b) with an intensity of 863 cps. In comparison, the emission intensity and wavelength were both increased in the case of the CP2 system.
The surface morphology (SEM image) of CP1 is given in Fig. 4a, showing a dried sky-like morphology with micro voids. The edges of the polymer particles are white, which confirmed the fluorescent nature of the RB end-capped PANI. The particle sizes varied in length from 150 to 800 nm. Figure 4b represents the SEM image of the CP2 system. The arrow-marked portion confirmed the cage-like structure. The size of the Cr2O3 nanoparticles was determined as ~50 nm; this will be further confirmed by the HR-TEM technique. Here, the entire image is white, which implies that the Cr2O3 nanoparticles enhanced the fluorescent nature of PANI. Hence, the SEM study demonstrated the fluorescence-enhancing activity of the Cr2O3 nanoparticles.
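Returning to the Tauc analysis above: the short Python sketch below illustrates how a direct band gap can be estimated by extrapolating the linear region of an (αhν)² versus hν plot to zero. The synthetic absorbance data and the 2.16 eV edge used to generate it are placeholders standing in for the measured CP1 spectrum.

```python
import numpy as np

# Hypothetical absorbance data near the absorption edge; real input
# would be the measured UV-visible spectrum of CP1 or CP2.
wavelength_nm = np.linspace(450, 650, 200)
E = 1239.84 / wavelength_nm                 # photon energy hv in eV
Eg_true = 2.16                              # pretend edge, eV (CP1 value)
# Direct-allowed-transition model: alpha*hv ~ sqrt(hv - Eg)
alpha = np.sqrt(np.clip(E - Eg_true, 0.0, None)) / E

y = (alpha * E) ** 2                        # Tauc ordinate for a direct gap

# Fit the steep linear region and extrapolate to y = 0: Eg = -intercept/slope
mask = y > 0.5 * y.max()
m, c = np.polyfit(E[mask], y[mask], 1)
print(f"Estimated direct band gap: {-c / m:.2f} eV")
```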
The size and shape of the Cr2O3 present in the CP2 system are given in Fig. 5. The rhombohedral structure of the Cr2O3 nanoparticle is shown by the arrow mark in Fig. 5a. Figure 5b shows the presence of various crystal planes, confirming the crystalline structure of the Cr2O3 nanoparticles. Figure 5c, d confirm the presence of perfectly ordered regions of Cr2O3. The circled portion in Fig. 5d denotes the lattice-ordered structure propagated at the edge of the grains (Ohyma et al. 1989). In Fig. 5e, the arrow mark confirms the presence of lattice planes. Figure 5f shows the SAED pattern of the Cr2O3 present in the CP2 system. The SAED indicated the presence of the (012), (104), (113) and (024) crystal planes in the Cr2O3 nanoparticles (Galvan et al. 2014). Hence, the HR-TEM confirmed the presence of Cr2O3 in crystalline form, with rhombohedral shape and clear edges.
The crystalline nature of the polymer samples was confirmed by XRD. Figure 6a shows the XRD pattern of the CP1 system. It is well known that PANI is an amorphous powder; here the end-capping agent influenced the crystallinity of PANI. The diffractogram showed peaks at 11.3° (002), 15.7° (030), 20.9° (010), 24.5° (102) and 26.9° (200). The appearance of these peaks confirmed the PANI formation due to the repetition of benzenoid and quinonoid rings (Mostafei et al. 2009). The other peaks correspond to the end-capping agent. The XRD pattern of the CP2 system is shown in Fig. 6b. Here also, the above peaks corresponding to RB and PANI are seen. Apart from these, new peaks corresponding to Cr2O3 nanoparticles appear at 24.7° (012), 33.4° (104), 35.9° (110), 40.6° (113), 50.6° (024), 55.0° (110), 63.5° (a small hump, (214)) and 65.1° (a small hump, (300)). The appearance of these peaks confirmed the rhombohedral structure of α-Cr2O3 (Ullah et al. 2015). At the same time, the crystalline peaks of α-Cr2O3 were suppressed. Hence, the XRD results further support the HR-TEM and SAED reports.
The intrinsic viscosity (IV) measurement gives an idea of the molecular weight of the polymer chain. The IV values of CP1 and CP2 were determined as 0.51 and 0.48 dL/g, respectively (Table 1). The CP2 system showed a lower IV value due to the increase in viscosity of the reaction medium when bulk Cr2O3 was added during the solution polymerization of Ani. Moreover, once the tetramer is formed it precipitates immediately from the reaction medium, and further growth of the polymer chain is restricted. This depends on the nature of the solvent and monomer considered for the polymerization: if the solute-solvent interaction dominates over the solute-solute interaction, there is a chance for the IV value of PANI to increase. In the present work, NMP was used as the reaction medium, so the chain length of PANI could increase slightly, leading to an increase in the IV value of the polymer. This is further supported by the TGA results.
The electrical conductivity values of the CP1 and CP2 systems are given in Table 1. The CP2 system exhibited the higher electrical conductivity value of 6.82 × 10^-2 S/cm. This is due to the small size of the Cr2O3 nanoparticle: the increase in electrical conductivity arises not only from the increased resonance stabilization of the PANI structure but also from the surface catalytic effect offered by the Cr2O3 nanoparticles. During the in situ polymerization reaction, the bulk Cr2O3 is simultaneously converted into nano-sized Cr2O3. The nanoparticle has more surface area due to its nano-size, which might accelerate the Rp and increase the polymer chain length. However, the reverse also operates here: the solution polymerization of Ani was started with the addition of PDS and bulk Cr2O3, which restricts the interaction between the monomer and initiator molecules and results in a decrease in the IV of PANI. The increase in electrical conductivity indirectly indicates that the chain length of PANI increased along with the resonance effect. Once the PANI chain length increases, the IV of PANI automatically increases as well; and if the molecular weight of PANI increases, the thermal properties such as Tg and Td (Table 1) also increase. This shows that an increase in electrical conductivity indirectly accompanies enhancement of the other thermal and optical properties.
A main aim of the present work is to increase the processability of PANI by increasing its solubility. Hence, the solubility of PANI in its doped form was tested in different polar and non-polar solvents. The RB end-capped PANI is soluble in DMSO, DMF, HCO2H, THF, NMP, CHCl3 and EtOH. In DDW, it is soluble only after prolonged ultrasonication. Thus, the solubility of PANI in its HCl-doped form is increased by the end-capping process.
The Tg of PANI synthesized in the presence and absence of Cr2O3 was determined and compared. The CP1 system showed a Tg value of 59.4 °C (Fig. 8a, Table 1), whereas the CP2 system showed a Tg of 64.8 °C (Fig. 8b, Table 1). The CP2 system showed the higher Tg due to the presence of Cr2O3; the crystallinity induced by the RB and Cr2O3 is responsible for this higher Tg. The DSC study showed that the Tg of PANI was increased by the addition of Cr2O3. Compared with the literature (Bhadra and Khastgir 2009), the present system yielded an excellent Tg value, meaning that the thermal properties of PANI were enhanced by the added Cr2O3.
The TGA thermogram of RB end-capped PANI (CP1) is shown in Fig. 9a, with a three-step degradation process. The minor weight loss around 115 °C is due to the removal of moisture and HCl dopant. The major degradation at 458 °C is due to the degradation of the PANI backbone. A minor weight loss step around 521 °C is ascribed to the degradation of the rigid phenyl structure of the RB dye. Table 1 shows the TGA data. Above 750 °C, 15.5% of the weight remained as residue. Compared with the report of Kumar et al. (2020), the present system exhibited a higher Td value. Figure 9a-e shows the TGA thermograms of PANI obtained at five different heating rates under atmospheric oxygen, and the DTA thermograms of CP1 are given in Fig. 9f-j. It was found that with increasing heating rate the Td of CP1 increased slowly; this can be explained on the basis of the fast scanning process. In order to find the energy of activation (Ea) for the CP1 backbone, three different methods were adopted. The first involves the plot of 1000/Td vs ln(β) (Fig. 9k), known as the Flynn-Wall-Ozawa method; the second involves the plot of 1000/Td vs ln(β/Td) (Fig. 9l), known as the Augis-Bennett method; and the third involves the plot of 1000/Td vs ln(β/Td²) (Fig. 9m), known as the Kissinger method. From the slopes of the plots, the average Ea value for step 2 was calculated as 87.28 kJ/mol (Table 2). In the same way, the Ea value for the degradation of the RB dye (step 3) was determined by plotting 1000/Td vs ln β (Fig. 9n), 1000/Td vs ln(β/Td) (Fig. 9o) and 1000/Td vs ln(β/Td²) (Fig. 9p); the average Ea value was determined as 107.28 kJ/mol (Table 2). In comparison, step 3 consumed a greater amount of thermal energy than step 2 for its degradation, owing to the more rigid structure of the RB dye. Compared with the report of Amita and co-workers (Amita et al. 2019), the present system yielded a lower Ea value due to the presence of the RB end-capping agent.
Figure 10a-e shows the TGA thermograms of the CP2 system, with a three-step degradation process (Table 1). The minor weight losses around 130 and 215 °C are due to the removal of HCl dopant and NMP solvent, respectively. The major weight loss occurred at 396 °C. Above 750 °C, 42.5% of the weight remained as residue. In comparison, the degradation temperature (Td) of PANI decreased in the presence of Cr2O3, which can be explained as follows: (i) the bulk Cr2O3 added during the solution polymerization of Ani increased the viscosity and reduced the interaction between Ani radical cations; (ii) the interaction between persulfate radicals and the Ani monomer was restricted; (iii) at 0-5 °C, the decomposition rate of persulfate in the presence of Cr2O3 may be delayed. As a result, the molecular weight of the resultant PANI chains decreased. At the same time, the % weight remaining as residue above 750 °C rose due to the added Cr2O3. Overall, the CP2 system exhibited a lower Td owing to the lower molecular weight of PANI; the decrease in molecular weight was confirmed by the IV measurements. Chemical polymerization of aniline in the presence of bulk Cr2O3 was thus carried out, and this system also exhibited a three-step degradation process. As usual, the de-doping process was noticed below 200 °C. The PANI backbone degradation was noticed at 395 °C, whereas the RB structure degradation occurred at 439 °C. The TGA thermograms of the CP2 system at various heating rates are shown in Fig. 10a-e, and the DTA thermograms in Fig. 10f-j. It was found that with increasing heating rate the Td of PANI increased proportionally. In order to find the Ea for the degradation of the PANI backbone, the following plots were made for step 2: 1000/Td vs ln β (Fig. 10k), 1000/Td vs ln(β/Td) (Fig. 10l) and 1000/Td vs ln(β/Td²) (Fig. 10m), each giving a straight line with a decreasing trend. A tangent was made by the linear fitting method, the slopes of the plots were noted, and the average Ea value was calculated as 77.15 kJ/mol. The Ea value for step 3 of CP2 was determined using the same plots: 1000/Td vs ln β (Fig. 10n), 1000/Td vs ln(β/Td) (Fig. 10o) and 1000/Td vs ln(β/Td²) (Fig. 10p), again straight lines with decreasing trends. The average Ea value for the degradation of the rigid RB structure was calculated as 85.06 kJ/mol. Compared with step 2, step 3 consumed a greater amount of thermal energy, owing to the rigid structure of the RB dye. In the overall comparison, the PANI/Cr2O3 nanocomposite system consumed less thermal energy for the PANI backbone degradation: under atmospheric air, the Cr2O3 nanoparticles can act as an oxidizing agent and catalyst for the degradation of the PANI backbone, continuously enhanced through the surface catalytic effect of the Cr2O3 nanoparticles. When different formulae are used for the calculation of Ea, the final value also varies, but within limits: when Td is squared in the denominator, the Ea value is slightly lowered. In any case, the three methods used for the determination of Ea produced values within a 2% error only.
Determination of the TD parameter values for both the CP1 and CP2 systems is very important for analyzing the degradation mechanism. During the degradation reaction under air atmosphere at different heating rates, the TD parameters, namely the entropy (S), enthalpy (H) and free energy (G) of the products, vary. It is well known that PANI and its nanocomposite with Cr2O3 were prepared by the solution polymerization method via a free radical mechanism; the synthesis involves an exothermic reaction. The various TD parameters were determined and are tabulated in Table 3. The thermodynamic parameter values for CP1 stage I were calculated at various heating rates. The average ΔS value for the degradation of step 2 was calculated as -242 J/(K·mol), and the average ΔH and ΔG values as -6144 J/mol and 175,022 J/mol, respectively. From the TGA curves, the thermodynamic parameters for step 3 of the CP1 system were also determined and compared: the average ΔS value was -244 J/(K·mol), the average ΔH value -6692 J/mol, and the average ΔG value 190,268 J/mol (Table 3). Bompilwar et al. (2010) reported that H2SO4-doped PANI exhibited ΔS and ΔG values of -43.16 J/(mol·K) and 24.4 kJ/mol, respectively. Table 3 also indicates the TD parameter values of the CP2 system, obtained at five different heating rates. The average ΔS value for the PANI backbone (stage 2) degradation in the presence of the Cr2O3 nanoparticle was calculated as -241 J/(K·mol); compared with the neat polymer system, this is almost the same value. The average ΔH value for the degradation of step 2 was calculated as -5995 J/mol, somewhat lower than for the neat polymer system. The ΔG value for the degradation of step 2 was calculated as 169,104 J/mol, somewhat higher than for the neat polymer system. The average ΔS, ΔH and ΔG values for stage 3 of the CP2 system were calculated as -333 J/(K·mol), -5638.9 J/mol and 220,395 J/mol, respectively (Table 3). In the overall comparison, the ΔG value is higher for the CP2 system, which reflects the filling effect offered by the Cr2O3 nanoparticle. The PANI nanoparticle has been reported to exhibit high ΔG, ΔH and ΔS values (Ebrahimi et al. 2018), and the present system exhibited similar values. From these results one can conclude that the TD parameters depend on the size, shape and nature of the nanoparticle.
Conclusions
From the present research work, the important results are collected and given here as conclusions. The Cr2O3-mediated solution polymerization decreases the % yield of PANI due to the increase in viscosity of the reaction medium. During the solution polymerization reaction, the bulk-sized Cr2O3 was simultaneously converted into nano-sized Cr2O3. The benzenoid band appearing around 1500 cm-1 in the FTIR spectrum was enhanced by the Cr2O3 nanoparticles. Both the absorbance and emission spectra showed enhanced results, with red shifts in their peaks due to the chain-end capping effect. The band gap value of PANI decreased slightly on adding bulk Cr2O3. An increase in the Tg value was noticed for PANI due to the influence of the Cr2O3 nanoparticle; at the same time, the IV was reduced owing to the high viscosity of the reaction medium caused by the added bulk Cr2O3. The XRD results declared the crystalline nature of the PANI/Cr2O3 nanocomposite, further evidenced by the SAED pattern. The SEM image showed a fluorescent surface with cage-like morphology. The HR-TEM confirmed the rhombohedral geometry of the Cr2O3 nanoparticle. The Cr2O3-mediated synthesis of PANI exhibited Cr 2p3/2 and Cr 2p1/2 peaks in the XPS, confirming the formation of the PANI/Cr2O3 nanocomposite. Due to the presence of the Cr2O3 nanoparticle, the electrical conductivity of PANI increased slightly. With increasing heating rate, the Td of the RB end-capped PANI also increased due to the fast scanning. Higher thermal energy was consumed by the PANI system for its degradation. The ΔS value was higher for the CP2 system due to the catalytic degradation effect offered by the Cr2O3 nanoparticle. Next, our research team will test the supercapacitance activity of PANI and its nanocomposite with the Cr2O3 nanoparticle.
Fig. 1 FTIR spectra of (a) CP1 and (b) CP2 systems
This leads to the conclusion that the chemical structures of CP1 and CP2 are different. Abdullah et al. (2014) reported the band gap energy of Cr2O3 as 3.2 eV; the present research work yielded a lower band gap value and exhibited excellent results. In the present work, the added bulk Cr2O3 slightly shifted the λmax value, confirming the existence of a chemical interaction between the Cr2O3 nanoparticle and the PANI chains. The -NH group of PANI interacted with the Cr2O3 nanoparticle.
Fig. 4 SEM images of (a) CP1 and (b) CP2 systems
Fig. 6 XRD patterns of (a) CP1 and (b) CP2 systems
Fig. 7 XPS of (a) CP1 and (b) CP2 systems
Fig. 8 DSC thermograms of (a) CP1 and (b) CP2 systems
|
v3-fos-license
|
2019-04-16T13:29:16.128Z
|
2018-10-01T00:00:00.000
|
116726930
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.org.za/pdf/jsaimm/v118n10/08.pdf",
"pdf_hash": "39eb70cd86f5542445c8f985ff4c96c532712b9a",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43433",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "39eb70cd86f5542445c8f985ff4c96c532712b9a",
"year": 2018
}
|
pes2o/s2orc
|
Graphical analysis of underground coal gasification: Application of a carbon-hydrogen-oxygen (CHO) diagram, by S. Kauchali
Underground coal gasification (UCG) is recognized as an efficient mining technique capable of chemically converting the coal from deep coal seams into synthesis gas. Depending on the main constituents of the synthesis gas, chemicals, electricity, or heat can be produced at the surface. This paper provides a high-level graphical method to assist practitioners in developing preliminary gasification processes and experimental programmes prior to detailed designs or field trials. The graphical method identifies theoretical limits of operation for sensible gasification within a thermally balanced region, based primarily on the basic coal chemistry. The analyses of the theoretical outputs are compared to actual field trials from sites in the USA and Australia, with very favourable results. A South African coal is studied to determine the possible synthesis gas outputs achievable using various UCG techniques: controlled retractable injection point (CRIP) and linked vertical wells (LVW). For CRIP techniques, an important result suggests that pyrolysis, and subsequent char production, are important intermediate phenomena allowing for increased thermal efficiencies of UCG. The conclusion is that South African coals need to be studied for pyrolysis-char behaviour as part of any future UCG programme. The results also suggest that UCG with CRIP would be a preferred technology choice for Bosjesspruit coal, where pyrolysis dynamics are important. Lastly, the use of CO2 as oxidant in the gasification process is shown to produce syngas with significant higher heating value.
Keywords: underground coal gasification, pyrolysis, char, thermal balance.
Coal is a commonly utilized fossil fuel, providing over 40% of global electricity demand and about 90% of South Africa's primary energy needs. However, less than 20% of the known world resources are suitable for possible extraction using conventional surface and underground mining techniques (Andrianopolous, Korre, and Durucan, 2015). Underground coal gasification (UCG) has the potential to recover the energy stored in coal in an environmentally responsible manner by exploiting seams that are deemed unmineable by traditional methods. The UCG process, if successfully developed, can increase coal reserves substantially. For example, in the Limpopo region of South Africa alone, the estimated potential for UCG gas, based on existing geological records, is over 400 trillion cubic feet (TCF) natural gas equivalent - this is about a hundred times more gas than the existing 4-TCF Pande-Temane natural gas field reserve in Mozambique (de Pontes, Mocumbi, and Sangweni, 2014).
Sasol has been producing synthesis gas from surface gasifiers for over 60 years using South African bituminous coal that is mined using traditional methods (van Dyk, Keyser, and Coertzen, 2006). The authors acknowledge that South Africa will, for many years, rely on its abundant coal resources for energy, with gasification technology playing an enabling role.
The gasification propensity of low-grade South African coal was studied by Engelbrecht et al. (2010) in a surface fluidized bed reactor. The coal samples from the New Vaal, Matla, Grootegeluk, and Duvha coal mines were high in ash (up to 45%), rich in inertinite (up to 80%), had a high volatile matter content (20%) and low porosity. The study established that these low-grade South African coals were able to gasify to produce syngas for downstream processes.
UCG is a thermo-chemical process which converts coal into a gas with significant heating value. The process requires the reaction of coal with air/oxygen (and possibly with the addition of steam and carbon dioxide) within the underground seam to produce synthesis gas (syngas). The primary components of syngas are the permanent gases hydrogen, carbon monoxide, carbon dioxide, and methane, along with tars, hydrogen sulphide, and carbonyl sulphide. The ash is deliberately left below the ground within the cavity. A typical gasification cavity is carefully controlled to operate just below the hydrostatic pressure to ensure ingress of subsurface water into the cavity and the retention of products within the gasification system. The nature of UCG processes is such that a limited number of parameters can be either controlled or measured. Furthermore, UCG processes require multidisciplinary integration of knowledge from geology and hydrogeology, and a fundamental understanding of the gasification process.
Recent review articles by Perkins (2018a, 2018b) provide an excellent basis for UCG practitioners. Perkins (2018a) covered the various methods of UCG as well as the performance of the methods at actual field sites worldwide. Of particular interest are the descriptions of drilling orientations, linking, and operational methods utilized for UCG: linked vertical wells (LVW), controlled retractable injection point (CRIP), and the associated variations. The factors affecting the performance of various UCG trials were studied, along with an assessment of economic and environmental issues around UCG projects. Guidelines are provided for site and oxidant selection based on field trials from the USA, Europe, Australia, and Canada. Huang et al. (2012) studied the feasibility of UCG in China using field research, trial studies, and fundamental laboratory work comprising petrography, reactivity, and mechanical tests of roof material. In contrast, Hsu et al. (2014) performed a laboratory-scale gasification simulation of a coal lump and used X-ray tomography to assess the cavity formation. The cavity formation in the experiment was consistent with the teardrop pattern typical of UCG trials. The cavity shape and the effect of operating parameters on the UCG cavity during gasification were studied by Jokwar, Sereshki, and Najafi (2018) using commercial COMSOL software.
Andrianopolous, Korre, and Durucan (2015) developed models to represent the chemical processes in UCG. In this study, models previously developed for surface gasifiers were adapted for UCG processes. The molar compositions and syngas production from the models were compared to reported results from a laboratory-scale experiment, and a high correlation between the experimental and modelling results was achieved. Zogala (2014a, 2014b) studied a simplistic coal gasification simulation method based on thermodynamic calculations for the reacting species, as well as kinetic and computational fluid dynamics (CFD) models. Mavhengere et al. (2016) developed a modified distributed activation energy model (DAEM) for incorporation into advanced CFD calculations for gasification processes. Yang et al. (2016) reviewed the practicalities of worldwide UCG projects and research activities over a five-year period. Their studies included developments in computational modelling as well as laboratory and field test results. The techno-economic prospects of combining UCG with carbon capture and storage (CCS) were also discussed. Klebingat et al. (2018) developed a thermodynamic UCG model to maximize syngas heating values and minimize tar production from early UCG field trials at Centralia-PSC, Hanna-I, and Pricetown. The optimization suggested that tar production in the field trials could be eliminated, with significant improvements to the syngas heating values.
UCG development has been largely concerned with establishing methods to enhance well interconnectivity, as well as techniques for drilling horizontal in-seam boreholes. In addition, methods are sought for the ignition of the coal and for appropriate process control to ensure syngas quality. Site selection criteria have been considered crucial, while the contribution from laboratory work is considered to be limited. This underlines the need for site-specific piloting and testing.
In this study, the focus is restricted to the development of the UCG process based on the inherent chemical nature of coal and the specific reactions required to complete the conversion of solid coal into syngas. A graphical method is presented that allows an engineer with a basic competence in chemistry to develop high-level UCG processes without the need for detailed studies of kinetics, equilibrium, geology, and hydrogeology. The information obtained from such an exercise provides a target for the subsequent, and costly, field trials. The results obtained from the high-level graphical analyses are compared to UCG outputs from the Rocky Mountain (USA) and Chinchilla (Australia) trials. An interesting outcome is that the field trial outputs lie in a predictively narrow region, regardless of the UCG technique used. This is useful when new designs, with different coals, are being planned for UCG. Furthermore, the underground gasification of a South African coal from Bosjesspruit mine is studied to determine the possible regions of operation for producing syngas with the highest heating value suitable for power generation. A key result here shows that the preferred method for applying UCG to the coal from Bosjesspruit mine is the CRIP method, whereby the coal undergoes pyrolysis and char production prior to gasification.
The representation of gasification reactions on a bond equivalent phase diagram was advocated by Battaerd and Evans (1979). The bond equivalent phase diagram is a ternary representation of carbon, hydrogen, and oxygen (CHO) where species are represented by the bonding capacity of the constituent elements. To obtain the bond equivalent fractions for a species CxHyOz, the contribution by carbon is 4(x), by hydrogen 1(y), and by oxygen 2(z), which is normalized for each species. Thus, CH4 (methane) is represented by the midpoint between C and H. Similarly, CO2 (carbon dioxide) and H2O (water) are midway between C-O and H-O respectively. CO (carbon monoxide) is one-third of the way along C-O, as shown in Figure 1. According to Kauchali (2017), the important gasification reactions are obtained by consideration of the intersections of the feed (coal)-oxidant (steam, oxygen, or carbon dioxide) line with the following lines: H2-CO, H2-CO2, H2-CH4, CH4-CO and CH4-CO2 (Figure 1). These intersections represent the stoichiometric region in which sensible gasification occurs; outside of these regions an excess amount of coal (carbon) or oxidant is evident, implying that it does not react within the gasification system. A further analysis of the intersections indicates the inherent thermal nature of the reactions, some of which are endothermic while others are exothermic. The endothermic and exothermic nature of the important reactions will be further explained in the examples that follow from the various field trials.
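As a small illustration of the bond-equivalent convention just described, the Python snippet below (not from the original paper) computes the normalized (C, H, O) coordinates for any species CxHyOz. The printed checks reproduce the placements quoted above, and the last call locates the Rocky Mountain coal discussed later.

```python
def cho_bond_equivalents(x, y, z):
    """Bond-equivalent fractions of C, H, O for a species CxHyOz:
    carbon contributes 4x bonds, hydrogen y, oxygen 2z (then normalize)."""
    c, h, o = 4.0 * x, 1.0 * y, 2.0 * z
    total = c + h + o
    return c / total, h / total, o / total

# Checks against the species placed on the diagram:
print(cho_bond_equivalents(1, 4, 0))   # CH4 -> (0.5, 0.5, 0.0), midpoint of C-H
print(cho_bond_equivalents(1, 0, 2))   # CO2 -> (0.5, 0.0, 0.5), midpoint of C-O
print(cho_bond_equivalents(0, 2, 1))   # H2O -> (0.0, 0.5, 0.5), midpoint of H-O
print(cho_bond_equivalents(1, 0, 1))   # CO  -> (2/3, 0, 1/3), one-third along C-O

# Rocky Mountain coal CH0.811O0.167 (see the field trial section below):
print(cho_bond_equivalents(1, 0.811, 0.167))
```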
In an idealized underground gasification process the system must be overall thermally balanced, so that there is no net heat released or added to the cavity. This requirement further limits the region of operation of thermally balanced gasification reactions.
In addition, the following criteria (Wei, 1979; Kauchali, 2017) are used to decide on reactions that will form the overall mass and energy balances (the sketch following this list illustrates the underlying line-intersection computation):
- Feed components may not appear in the product. For example, any gasification reaction that uses steam/oxygen as oxidant cannot have water as a product.
- The reactions on the CHO diagram represent the maximum region they enclose - mathematically, the intersection points represent the extreme points of a linearly independent reaction system.
- The extreme reaction points, representing overall stoichiometry, will lie on one of the lines H2-CO, H2-CO2, H2-CH4, CH4-CO and CH4-CO2.
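To show how the extreme reaction points on the product lines arise, the hedged sketch below finds the intersection of the coal-oxidant feed line with the H2-CO product line, working in (C, H) bond-equivalent coordinates (O = 1 - C - H). The coal coordinates reuse the values computed above; the function name `intersect` is illustrative.

```python
import numpy as np

def intersect(p1, p2, q1, q2):
    """Intersection of line p1-p2 with line q1-q2; points are given as
    (C, H) bond-equivalent coordinates, with O = 1 - C - H implied."""
    d1, d2 = np.subtract(p2, p1), np.subtract(q2, q1)
    A = np.array([d1, -d2]).T            # solve p1 + t*d1 = q1 + s*d2
    t, _ = np.linalg.solve(A, np.subtract(q1, p1))
    return tuple(np.add(p1, t * d1))

# Feed line: Rocky Mountain coal to oxygen; product line: H2 to CO.
coal = (0.777, 0.158)   # from cho_bond_equivalents(1, 0.811, 0.167)
O2 = (0.0, 0.0)         # pure oxygen: C = H = 0
H2 = (0.0, 1.0)
CO = (2.0 / 3.0, 0.0)
print(intersect(coal, O2, H2, CO))  # syngas point on the H2-CO line
```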
The graphical representation of the UCG processes is depicted on a ternary CHO diagram. The three different coals (USA, Australia, and South Africa) and the oxidants (steam, oxygen, or carbon dioxide) are represented on the diagrams as feed points. From the representation of the feed points and the various intersections with the product lines (H2-CO, H2-CO2, H2-CH4, CH4-CO and CH4-CO2), a region of stoichiometrically acceptable gasification products is obtained. This stoichiometric region is a mass balance region indicating the possible combinations of elements (C, H, and O) resulting from the various reaction schemes during gasification, and thus represents the maximum allowable area and possible products that can be obtained.
Once the reactions governing the stoichiometry are obtained, the possible pairing of endothermic-exothermic reactions can be established. This requires the thermodynamic properties (heat of formation) of each species participating in the reaction. The combinations of the reaction pairs (exothermic and endothermic) lead to thermally balanced points where the reactions have a heat of reaction of zero (kJ/mol). Such a thermally balanced point represents a 'balanced' UCG process and is also plotted on the CHO diagram. Depending on the number of possible exothermic and endothermic stoichiometric reactions, a number of thermally balanced points exist.
A study of the thermally balanced reaction points identifies a smaller subset of reactions that form the basic reactions, i.e. the extreme reactions that bound all other thermally balanced reactions. These extreme reactions are referred to as 'linearly independent thermally balanced reactions' and are unique for every coal used. The linearly independent reactions are also plotted on the CHO diagram, and the region enclosed by them is shaded to indicate the 'thermally balanced region' for the specific coal. These calculations can be repeated for chars resulting from the drying and pyrolysis of the parent coal, provided that the data is available.
The information thus obtained enables the determination of important gasification parameters such as the type of oxidant to use, the ratio of C:H or C:O going into the gasification process, and the UCG technique required for maximum energy and product recovery.
The following sections essentially provide the graphical development for a US coal, an Australian coal, and a South African coal and char.

Subbituminous coal from the Rocky Mountain site was gasified using UCG (Dennis, 2006). The coal had a chemical formula CH0.811O0.167 and a calculated heat of formation (from the coal CV) of -203.1 kJ/mol. Table I presents the ultimate analysis. Table II shows the syngas output from the two UCG operations employed, namely extended linking well (ELW)/linked vertical well (LVW) and controlled retractable injection point (CRIP) (Dennis, 2006). The ELW technique used two vertical wells about 40 m apart linked to a horizontally drilled gas production well. The CRIP method used two directionally drilled horizontal wells into the coal seam: one for steam and oxygen injection and the second for syngas recovery. The ELW and CRIP methods produced syngas with different compositions.
In the final technical report on the site, Dennis (2006) discussed two technologies, both using a combination of steam and oxygen as oxidants. The report details the dry gas composition for the ELW and CRIP operations. The ELW site used a steam-to-oxygen ratio of approximately 1.88 and the CRIP site a ratio of approximately 2.04. Tables III and IV present the stoichiometric reaction scheme adapted for the Rocky Mountain coal and the thermally balanced reactions respectively. Table V lists the standard heat of formation of each compound required to determine the heats of reaction for all samples considered in this study. In Table V, the standard heat of formation for coal was calculated from the coal CV, assuming total combustion to liquid water and carbon dioxide only. For the char, an estimate of the CV of char from South African coals was used, as derived by Theron and le Roux (2015).
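The back-calculation of a coal's heat of formation from its CV, as described for Table V, can be sketched as follows. This is a hedged illustration rather than the authors' procedure: the combustion stoichiometry and the standard heats of formation of CO2 and liquid water are textbook values, while the CV of 19.8 MJ/kg is an assumed input chosen so that the Rocky Mountain figure of about -203 kJ/mol is approximately reproduced.

```python
# Back-calculate the standard heat of formation of a coal CHxOy from its
# calorific value (HHV), assuming complete combustion to CO2 and liquid H2O:
#   CHxOy + (1 + x/4 - y/2) O2 -> CO2 + (x/2) H2O(l)
# so  dHf(coal) = dHf(CO2) + (x/2)*dHf(H2O,l) + HHV_molar.

DHF_CO2 = -393.5    # kJ/mol, standard heat of formation of CO2
DHF_H2O_L = -285.8  # kJ/mol, liquid water

def coal_heat_of_formation(x, y, cv_mj_per_kg):
    """dHf of coal CHxOy (kJ/mol) from its gross calorific value."""
    molar_mass = 12.011 + 1.008 * x + 15.999 * y   # g/mol
    hhv_molar = cv_mj_per_kg * molar_mass          # kJ/mol (MJ/kg * g/mol)
    return DHF_CO2 + (x / 2.0) * DHF_H2O_L + hhv_molar

# Rocky Mountain coal CH0.811O0.167 with an assumed CV near 19.8 MJ/kg
# gives roughly the -203 kJ/mol quoted in the text.
print(coal_heat_of_formation(0.811, 0.167, 19.8))   # ~ -202.5 kJ/mol
```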
Table III is obtained by considering the intersections of the line joining the Rocky Mountain coal with oxygen/steam and the lines H2-CO, H2-CO2, CH4-CO, and CH4-CO2 (Figure 1). Eight reactions (r1 to r8) form the basis of the stoichiometric region within which gasification occurs; two of these reactions, r2 and r6, are exothermic. Table IV is then obtained by taking linear combinations of exothermic-endothermic pairs such that the overall heat of reaction is zero, leading to a further 16 reactions. At these conditions the gasification reactions are considered thermally balanced, which is the desirable operation from a mass and energy perspective. For UCG this implies that the cavity is self-sustaining from an energy perspective, assuming that there are no heat or mass losses from the system.
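The exothermic-endothermic pairing can be illustrated in a few lines of Python (a sketch, not the paper's actual r2/r6 coefficients): the blending fraction that zeroes the combined heat of reaction follows directly from the linearity of reaction enthalpies.

```python
# Blend an exothermic reaction (dH < 0) with an endothermic one (dH > 0)
# so that the combined heat of reaction is zero:
#   lam * dH_exo + (1 - lam) * dH_endo = 0
#   => lam = dH_endo / (dH_endo - dH_exo), with 0 < lam < 1.

def thermally_balanced_fraction(dh_exo, dh_endo):
    assert dh_exo < 0 < dh_endo, "need one exothermic and one endothermic reaction"
    return dh_endo / (dh_endo - dh_exo)

# Illustrative values (roughly carbon combustion and steam gasification of
# carbon), not the specific reactions tabulated in the paper:
dh_exo, dh_endo = -394.0, 131.0   # kJ/mol
lam = thermally_balanced_fraction(dh_exo, dh_endo)
print(f"lam = {lam:.3f}; check: {lam*dh_exo + (1-lam)*dh_endo:.2e} kJ/mol")
```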
A matrix analysis of the thermally balanced reactions in Table IV indicates that there are in fact only four linearly independent thermally balanced reactions (zero heat of reaction). Also included are the calculated standard-state higher heating values (HHV), in MJ/m3, of the syngas produced (with air as the source of oxygen), as given by Li et al. (2004).
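The 'matrix analysis' referred to here amounts to stacking the net stoichiometric coefficients of the balanced reactions and computing a matrix rank. A minimal sketch follows; the coefficient rows are placeholders for illustration, not the entries of Table IV.

```python
import numpy as np

# Each row is one reaction's net stoichiometric coefficients, one column per
# species; the number of linearly independent reactions is the matrix rank.
reactions = np.array([
    # coal   O2    H2O   H2    CO    CO2   CH4   (illustrative rows only)
    [ -1.0,  0.0, -1.0,  1.0,  1.0,  0.0,  0.0],   # C + H2O -> H2 + CO
    [ -1.0, -1.0,  0.0,  0.0,  0.0,  1.0,  0.0],   # C + O2  -> CO2
    [ -1.0,  0.0,  0.0, -2.0,  0.0,  0.0,  1.0],   # C + 2H2 -> CH4
    [ -2.0,  0.0, -2.0,  2.0,  2.0,  0.0,  0.0],   # 2x the first row
])

print(np.linalg.matrix_rank(reactions))   # -> 3: the last row is dependent
```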
Figure 1 illustrates the thermally balanced region (shaded grey area) based on the four basic reactions A, C, L, and J. It is of interest to note the position of the syngas (X) from the ELW and CRIP UCG field trials, which shows the proximity of the field trial results to the theoretical development (grey shaded region) based only on the coal thermodynamic properties. Furthermore, the theoretical HHV ranges from 6.95-14.34 MJ/m3 (for pure oxygen-blown operation) with an average of 10.64 MJ/m3, consistent with the actual values of about 9.5 MJ/m3 reported by Perkins (2018a). The highest HHV, at L, is not achievable due to equilibrium considerations, as the high temperatures required for gasification favour the destruction of methane and the production of CO2, leading to lower HHV values.

The Australian UCG projects were performed on the Macalister coal seam of the Walloon Coal Measures. At the Bloodwood Creek location the coal seam was about 200 m deep and 13 m thick, while at Chinchilla the depth was 130 m and the seam thickness 4 m. Coal quality data was obtained from the Queensland Department of Mines and Energy (1999) with respect to the use of Walloon coals (subbituminous) for power generation. Though analysis of the coal was reported on both the as-received and dry ash-free bases, the product gas was reported (in Kačur et al., 2014) only on a moisture-free basis. For this reason, the Macalister coal points are plotted as dry only, as seen in Table VII. Figure 2 represents the gasification propensity of the Macalister coal. The thermally balanced region, represented by the shaded grey area, outlines the region where favourable UCG conditions may occur. The field test results (X) for the syngas composition from Chinchilla and Bloodwood Creek fall within the thermally balanced region. This confirms the method of UCG operation practised by the operators. Of particular interest is that all UCG methods (CRIP, LVW, etc.) appear to operate within or near the thermally balanced region. The range of HHV for the UCG syngas is predicted to be 5.19-11.18 MJ/m3 (Table XI). The Macalister coal seam data exhibits another interesting feature in that a portion of the thermally balanced region crosses the H2-CO line toward the CH4-CO line, indicating that methane formation at equilibrium (high temperature) may be feasible. This could also be a possible reason for the presence of coal seam methane in Australian coals, not seen in the USA coals (Figure 1). Lastly, the operation of a UCG cavity for power generation at L (where the HHV appears to be the highest) may not be feasible because the equilibrium favours H2-CO. However, the equilibrium may be favourable at J, allowing the production of methane; hence a higher heating value syngas (7.95 MJ/m3) may be obtained from air-blown gasification alone, without the need for steam. The natural ingress of water into the cavity may not permit air-blown UCG only, in which case, to obtain the highest HHV, the UCG would be operated along line J-L rather than L-C as trialled at the different sites. It is noted that the field trials operated along L-C in order to produce syngas for downstream liquid fuels production.
Based on the findings from the CHO diagrams for the US and Australian coals developed here, this study attempts to predict syngas production by UCG of a South African coal from Sasol's Bosjesspruit Colliery. The colliery is in the Highveld Coalfield, South Africa. The bituminous coal is high in ash and typically inertinite-rich. The CHO diagram is used to demonstrate the stoichiometric region in which sensible gasification (i.e. conversion of solid coal to syngas) occurs.
From the analysis of the thermally balanced reactions, a region for UCG is determined, from which various syngas compositions are analysed for downstream processes: syngas for the Fischer-Tropsch (FT) process, requiring a 2H2:1CO ratio, and syngas for power production. An analysis of the ELW and CRIP methods for UCG of the Bosjesspruit coal is presented. Lastly, the feasibility of using CO2 as an oxidant for UCG is considered.
The characteristics of the Bosjesspruit coal are provided in Table XII, from which the molecular formula is determined to be CH0.75O0.16 (for coal as received), with the heat of formation being -212.6 kJ/mol (Pinheiro, 1999). Figure 3 represents the gasification reactions for the Bosjesspruit coal and char. The thermally balanced region for the coal is represented by the light grey area bounded by points A, C, L, and J, and that for the char by Ac, Cc, Lc, and Jc (dark grey). There is a resemblance to the Rocky Mountain coal (Figure 1), as both the molecular formulae and the heat of formation values are similar, and hence the thermally balanced regions appear very similar. The syngas resulting from the CRIP method for the Rocky Mountain coal appears to be an outlier from the thermally balanced region in Figure 1. However, if the similarity of the Bosjesspruit coal to the Rocky Mountain coal is invoked, then the Bosjesspruit char thermally balanced region should be sufficient to predict the Rocky Mountain char gasification behaviour. In that case, the CRIP result for the Rocky Mountain coal falls within the char gasification thermally balanced region. This is an important result, suggesting that UCG using CRIP leads to pyrolysis and subsequent char gasification, which is not prominent in LVW methods.
The effect of coal drying and pyrolysis is evident from Figure 3, where the char thermally balanced region is significantly enlarged, with a higher achievable HHV (6.5-21.2 MJ/m3) than for the coal (3.5-5.2 MJ/m3). This enlarged thermally balanced region is more efficient and shows the importance of allowing the coal to dry and pyrolysis to occur prior to gasification. Also, the equilibrium at Jc is favourable, allowing the production of methane and carbon monoxide with a higher HHV (13.3 MJ/m3) with air as oxidant.
The models for UCG methodologies are complex (Perkins, 2018a). Andrianopolous, Korre, and Durucan (2015) attempted to model LVW and CRIP. Their description of the mechanisms for CRIP suggests that there are roof-top and floor-bottom (spalled roof material that falls to the bottom) gasification steps resulting in different gas compositions that ultimately mix and exit the reactor cavity. This suggests a greater degree of drying and pyrolysis products mixing with the syngas from char gasification. In the LVW method, by comparison, the high-temperature gasification zone is localized near the reactor injection point, implying that any pyrolysis product from freshly exposed coal surfaces will eventually react to form the final exit gases. The implication of this analysis is that LVW follows the UCG thermally balanced results obtained for the coal (Figure 3, light grey area), while CRIP follows the char reactions (Figure 3, dark grey area). These results for LVW (or ELW) are confirmed by the USA (Figure 1) and Australian (Figure 2) trials, where both ELW/LVW results lie within the thermally balanced region for the coals (not the char). This leads to the conclusion that South African coals need to be studied further to determine their pyrolysis-char behaviour prior to deciding on the UCG method. The results also suggest that CRIP would be the preferred technology choice for the Bosjesspruit coal, where the pyrolysis dynamics are important.
Based on the analysis above, Figure 4 depicts the possible outputs for CRIP and LVW (dotted semicircles) with the optimal steam-oxygen ratios (solid semicircles) for liquid fuel production. The outputs are based on the assumption that field trials will obtain gasification outputs similar to surface gasifiers, which are typically designed for 2H2:1CO; this ratio is satisfied along line PQ in Figure 4. The estimated HHV for CRIP would be around 8 (maximum 13.3) MJ/m3, and 3.5 (maximum 4.4) MJ/m3 for LVW. For power generation, however, UCG with CRIP would operate close to Jc, where the maximum equilibrium HHV is 13.3 MJ/m3 for an air-blown system.
The CHO phase diagram proved to be a useful tool for analysing gasification systems, in particular UCG, where only a limited number of control parameters exist. The development of a thermally balanced system for the coals allowed the prediction of the syngas output within a narrow region; these regions were tested against the US and Australian field trials and were found to correlate with reasonable accuracy. The method was able to predict the syngas output without prior knowledge of the UCG technique employed, the flow rates of oxidants, reaction kinetics, heat and mass transfer kinetics, or hydrogeology. It was shown that only four reactions govern the output of any thermally balanced UCG system.
A South African coal was assessed and the effects of pyrolysis were shown to enhance the thermodynamic efficiency of the system, leading to a key conclusion that the determination of pyrolysis propensity and char characteristics should form part of any future UCG programme. It was suggested that the CRIP method be used for the Bosjesspruit coal, for which a theoretical maximum syngas HHV of 13.3 MJ/m3 can be obtained when air is used as oxidant. The use of CO2 in addition to steam and air indicates that a UCG process for the Bosjesspruit char would be possible and capable of producing syngas with an HHV as high as 8.8 MJ/m3.

A part of this work was presented at the workshop held in 2016 by the South African Underground Coal Gasification Association (SAUCGA). My thanks to Keeshan Moodley for the presentation, the development of the ternary CHO diagram, and some of the literature field data collation.
Table VIII provides the syngas compositions obtained from the various UCG methods and trials (Queensland Department of Mines and Energy, 1999). The chemical formula for the Macalister coal seam is CH0.898O0.108, with the heat of formation being -112.27 kJ/mol. Table IX considers the eight stoichiometric basis reactions for gasification of the Macalister coal with steam and oxygen, and Table X provides the corresponding thermally balanced reactions.
It must be noted that the volatile matter and char analyses used here were not determined experimentally but are derived from another South African subbituminous coal (van Dyk, 2014). The molecular formulae and heats of formation of the Bosjesspruit coal and the Rocky Mountain coal are similar. An analysis of the Bosjesspruit coal, similar to that for the US and Australian coals, is considered based on the details in Table XII. The oxidants are assumed to be air and steam, from which Tables XIII and XIV are derived for the stoichiometric basis reactions and the four thermally balanced independent reactions respectively. Tables XV and XVI are for the char resulting from the drying and pyrolysis of the Bosjesspruit coal. The char analyses for the US and Australian coals have not been considered due to a lack of information on the pyrolysis and char products of those coals.
Nerve Growth Factor(s) Mediated Hypothalamic Pituitary Axis Activation Model in Stress Induced Genesis of Psychiatric Disorders
Apart from their established role in embryonic development, nerve growth factors (NGFs) have diverse functions in the nervous system, and their role in the integration of its physiological functioning is now attracting attention. In the present analysis we propose a new paradigm concerning a novel role of NGFs: we hypothesize that NGFs play an imperative role in maintaining the psychological integrity of an individual as a biological system. This function may be mediated through HPA-axis-operated homeostatic mechanisms, stress-induced disruption of which may lead to psychiatric disorders. The current literature suggests the existence of constitutive homeostatic regulatory mechanisms for NGFs, disruption of which may have important behavioural effects. NGFs have been shown to play crucial parts in endocrine regulation; this is especially true of the prototype 'NGF' and brain-derived neurotrophic factor (BDNF), which have been observed to play an important role in maintaining neuro-endocrine homeostasis and thereby to have a profound impact on the psychological health of an individual. The roles of NGFs and of HPA-axis activation (in separate studies) in the development of psychiatric disorders, especially those born of stress, have been established, and the literature suggests a unique interplay between them producing a common effect which might be implicated in the stress-induced genesis of psychiatric disorders. This aspect therefore needs to be elucidated further as a disease etiogenesis model, which may yield important insights into the evolution of psychiatric disorders and may open ways to new therapeutic approaches.
INTRODUCTION
The emerging role of nerve growth factors (NGFs) in the integration of physiological functioning of the nervous system is gaining a lot of interest, especially for the prototype 'NGF' and BDNF.
NGFs in adult brain: The omnipresent molecules
NGFs and their receptors have been detected ubiquitously in all regions of the adult brain in various animal studies, and more specifically in the hippocampus and neocortex (24-26). The hippocampus has been noted to have the highest NGF synthesis in the brain, and among all NGFs, BDNF was the most abundant (25). The expression of the NGFs was heterogeneous and showed region-specific dominance across brain regions; in the cortex, one study reported that in rat brain the expression of the prototype 'NGF', BDNF, and NT-3 varied across different layers (26).
In the hippocampus, NGFs were present more specifically in pyramidal- and granular-layer neurons and showed variation in expression across the different CA regions; in the cerebellum they were more specifically localized to Purkinje and granule cell neurons (27).
Similar to the NGFs, their receptors were also heterogeneously distributed and showed region-specific dominance indicating the target regions of their ligands; i.e., TrkB, the receptor for BDNF, was more expressed in the ependyma and periventricular brain parenchyma, while TrkA, the high-affinity receptor for the prototype 'NGF', was more densely expressed in the basal forebrain regions and striatum (28).
Uniquely, intraventricular injections of NGFs in animal brains showed that they can also diffuse to other brain regions within differential limits; i.e., the prototype 'NGF' reached the cortical regions, BDNF was limited to the periventricular regions, and NT-3 reached intermediate destinations (28).
The mode of NGF secretion in neurons has been a matter of contention. Current evidence suggests that NGF secretion is not only constitutive (target-derived, showing retrograde axonal transport and serving trophic functions in neurons) but also has a prominent activity-dependent, regulated component that moves anterograde via dendrites and axons to act at synapses with similar processes of other neurons, where it contributes extensively to synaptic transmission and plasticity (29-32).
The prototype 'NGF' was found more specifically expressed in inhibitory GABAergic neurons (co-labelled GAD65/67) than in excitatory neurons (co-labelled CaMKIIa). GABAergic neurons are considered the primary source of prototype 'NGF' production in the CNS (33).
Expression of NGFs and their receptors was also found in different subcortical regions (striatum, thalamus, hypothalamus, basal forebrain nuclei, septal regions) and in brainstem nuclei. The expression was not restricted to neurons, being concomitantly present in glial cells and nerve fibre bundles as well (27, 28, 34).
NGFs homeostasis: as a concept
NGFs have been demonstrated to have an extensive role in physiological homeostasis regulation (3, 35, 36), and it seems reasonable to speculate that they possess a homeostatic control system of their own (3), disruption of which may result in psychiatric disorders (37, 38). An evolutionary link has been seen among the different NGFs, which share a common moiety, receptors, signalling cascades, and mode of action (10).
Neurons have been found to co-localize more than one NGF or NGF receptor (39). NGFs have also been noticed to regulate their own or other family members' expression in an autocrine/paracrine manner in the CNS (14), and their regulation follows a circadian rhythm (40, 41). A change in NGF levels is also reflected in the levels of HPA hormones and various neurotransmitters in the brain (42, 43). NGFs further show neural-activity-dependent secretion, changes in level with sleep and physical activity, and responsiveness to environmental changes (44-46). All this evidence suggests that NGFs maintain a homeostatic control system of their own which plays in concert with the other neuro-endocrine homeostatic systems in the body.
Role of NGFs in maintenance of normal behaviour
The significant role of NGFs in normal behavioural processes is supported by peer-reviewed research (47, 48). The role of NGFs in the various neurophysiological functions which determine normal behaviour has long been known; specific NGFs have been found to be involved in synaptic transmission (20, 49), memory and learning (50-52), sleep (53-55), and neuronal protection (18, 56).
NGFs have been reported to mediate behaviour-induced cerebral plasticity (19, 20). A forebrain-specific BDNF gene knock-out study in adult mice revealed significant (30% reduction) changes in the density of dendritic spines of cortical neurons, reflected in smaller brain size and compromised behaviour such as difficulty in spatial learning and a greater inclination toward depression (48).
A study on male hamsters showed that differential action of BDNF was responsible for the dominant-subordinate relationship in a resident-intruder model in which individual hamsters were identified as winners or losers on the basis of behaviour. Losing animals had significantly more BDNF mRNA in the basolateral and medial nuclei of the amygdala, while winners had more BDNF mRNA in the dentate gyrus of the dorsal hippocampus, indicating a role of BDNF in subsequent experience-dependent behavioural plasticity. That the experience-dependent learning of dominant or subordinate behaviour was mediated by BDNF was further confirmed when the investigators were able to block acquisition of such behaviour using K252a, a Trk receptor antagonist (57).
Single-nucleotide polymorphic variants of the BDNF (rs6265, Val66Met) and NGF (rs6330, Ala35Val) genes have been reported to be associated with anxiety phenotypes in human populations (58). Another study implicated BDNF in depression-related personality traits in healthy volunteers (59).
Egan et al. found a polymorphic variant of BDNF (Val66Met) associated with poor episodic memory and abnormal hippocampal activation (60). A few other studies additionally found the BDNF (Val66Met) polymorphism to be associated with trait anxiety, depression, HPA and SAM (sympathetic adrenal medullary) axis hyperactivity, and a higher anticipatory cortisol response to psychological stress (61-63).
NGFs have also been implicated in various causes of psychosocial stress which may cause deviation from normal social behaviour, such as social displacement, disharmony, future uncertainty, bereavement, complicated upbringing, and intense conflicts (64-66).
NGFs have also been implicated in the regulation of reproductive behaviours through the hypothalamus-pituitary-gonadal (HPG) axis, a close functionary of the HPA. An experimental study claimed that the prototype 'NGF' is a constitutive element of semen and prostatic secretions, and that it can mediate the influence of the male on the sexual behaviour of the female and can induce ovulation through the HPG axis (67).
The prototype 'NGF' has also been associated with intimate sexual behaviours; one study found serum levels of the prototype 'NGF', and not of other NGFs, to be higher in persons in early romantic love (68).
NGFs have a definite role in neurogenesis, network organization, and the formation and remodelling of synapses in the developing brain, which keeps being refined afterwards under their regulation (43). In the adult brain (including in humans), where neurogenesis continues as a physiological event, as in the hippocampal dentate gyrus and the subventricular zone (SVZ) of the lateral ventricles (69, 70), the role of NGFs in induced neurogenesis, remodelling of neuronal connections, and synaptic organization with new experiences provides the individual with the ability to adapt to new life situations (71, 72), a disruption of which may lead to aberrant behavioural changes alternatively described as psychiatric disorders (73).
NGFs have also been seen to interact with dopaminergic neurons, facilitating dopamine release at synapses (74, 75). Dopamine is the chief substrate of the mesolimbic reward pathway, and such a function is crucial for experience-dependent modulation of normal behaviour (76). A dopamine-BDNF link in reward pathway function has been well substantiated by animal-model studies of social defeat stress (77, 78).
Neuronal-activity-dependent secretion of NGFs has been shown to involve cAMP response element binding protein (CREB) as the chief mediator. CREB is known to mediate many neurotransmitters and neuromodulators in the brain. It also activates many transcription factors and influences the expression of immediate early genes involved in synaptic transmission and plasticity and in experience-dependent remodelling of neuronal circuits (79-81). The extensive role of CREB in normal neuronal function and resultant behaviour, and its NGF-mediated regulation, further substantiate that NGFs may be crucial for the maintenance of normal mental health and behaviour.
Plausible mechanisms for NGFs mediated HPA axis activation model
An increasing number of studies link neurotrophins or NGFs, and stress, to the pathophysiology of psychiatric disorders (104-106). NGFs work as a homeostatic interface between organism and environment which is robust to the ordinary perturbations of daily life and helps the organism adapt to changing survival conditions (107). A persistent state of mounting stress may disrupt NGF homeostasis in the body (107, 108) and in consequence may dysregulate NGF-mediated HPA axis activation (107, 109).
The ambiguity raised in the NGFs-HPA axis regulatory system negatively affects the mood and cognition of the individual, which in turn causes further NGFs-HPA axis dysregulation; a vicious cycle of this kind (Fig. 2) may lead to stable changes in the individual's behaviour in an attempt to re-establish the disrupted homeostasis (110-112). If the NGFs-HPA axis homeostasis is not re-established, the deviant changes in behaviour may progress further into manifest psychiatric disorders (19, 113).
Fig. 2. Stress induced HPA axis mediated disruption in NGFs homeostasis
The published literature supports the notion that stress-induced, NGF-mediated dysregulation of the HPA axis may end up as a psychiatric disorder if the stress-causing factors persist. NGFs show activity- and stress-based synthesis in the various brain regions involved in stress response regulation (1, 105, 114), and these regions are known to regulate the HPA axis differentially; for example, the prefrontal cortex and hippocampus inhibit it, but the amygdala excites it (115). In a similar context, chronic immobilization stress in male rats was observed to produce inverse neuronal BDNF secretion, and hence growth changes mediated by it, in the hippocampus and amygdala (116).
The differential regulation of the HPA axis by the various brain regions involved in stress response regulation may perhaps be the reason for the phenotypic diversity observed in psychiatric disorders (115, 117).
Experimental studies suggest that the hippocampus is the most severely affected brain region (owing to prolonged glucocorticoid secretion) in chronic restraint stress (118-120).
Hippocampal inhibitory control of HPA axis activity, which is a normal phenomenon in homeostasis, is also known to be compromised in chronic restraint stress (115, 121), and along with glucocorticoids, NGFs are known to be central players in causing this dysregulation (109, 112, 122). Hippocampal pyramidal neurons secrete NGFs in response to stress, corresponding to HPA axis activity (123, 124), and TrkB, the receptor for BDNF, has been found co-localized with the glucocorticoid receptor (GR) in these neurons, with a mutual influence on each other's signalling (122). A study by Lambert et al. showed in primary rat cortical neurons that BDNF induces a structural change in GR (phosphorylation at serines 155 and 287) and causes a significant change in its transcriptome (125). Similar evidence for an essential glucocorticoid-BDNF interaction in the stress response has also been noted in the hypothalamus, controlling CRH synthesis at the PVN (114). A schematic depiction of NGF-mediated hippocampal-HPA axis regulation in normal health and chronic restraint stress is presented in Fig. 3. In normal health, secretion of HPA hormones occurs in response to fresh stressful stimuli: an increase in cortisone release at the adrenal as a stress response sends negative feedback through the glucocorticoid receptor (GR) in the hippocampus (with GR expression maintained), which in turn inhibits HPA activity, and the NGF secretion response to the challenge of fresh stress is maintained. In chronic restraint stress, HPA axis regulation becomes epigenetically modified, causing an enduring increase in cortisone secretion at the adrenal; this increases feedback inhibition at the hippocampus, resulting in a significant reduction in GR expression, which further decreases hippocampal inhibitory control of the HPA axis, facilitating its persistent hyperactivation and an exaggerated response to fresh stressful stimuli. The NGF secretion response is also reduced. (In Fig. 3, the thickness of the signalling lines and feedback loops indicates their strength) (126).
Evidence from animal model studies
Multiple animal-model studies support NGF-mediated regulation of the HPA axis. Studies have also established dysregulation of particular NGFs (the prototype 'NGF' and BDNF) and of the HPA axis in chronic stress, giving way to the genesis of psychiatric disorders.
Givalois et al. showed that a single intracerebroventricular injection of BDNF in non-anaesthetized adult rats modified HPA axis activity. In the paraventricular nuclei of the hypothalamus, an increase of BDNF was found to alter CRH (corticotrophin-releasing hormone) and AVP (arginine vasopressin) synthesis, which in turn led to ACTH (adrenocorticotrophic hormone) release from the anterior pituitary and cortisone release from the adrenal cortex. This change in HPA axis activity in response to exogenous BDNF was similar to the physiological condition when a rat is subjected to immobilization stress, suggesting a role for BDNF as a stress-responsive intracellular messenger (127).
Naert et al. examined the effect of chronic stress (restraint for 3 hours/day for 3 weeks) on behaviour and HPA axis activity in adult rats, in parallel with studying BDNF levels in the hypothalamus, pituitary, and hippocampus. Chronic stress induced anxiety-, anhedonia-, and depression-like states in these model animals. HPA axis activity was highly modified, with increases in the basal levels of hypothalamic CRH and AVP synthesis and in the plasma levels of ACTH and cortisone. In addition, basal BDNF levels were increased in the hypothalamus, pituitary, and hippocampus, and the BDNF response to a subsequently applied acute novel stress was modified. All these findings indicated a plausible role of BDNF in the chronic stress induced genesis of psychiatric disorders (42).
In a subsequent study, Naert et al. tested the effect of continuous intracerebroventricular administration of BDNF on HPA axis activity in adult rats and found that it modified not only HPA activity but also biological rhythms. The authors took the view that the change in HPA activity and biological rhythms occurred through a regulatory effect of BDNF on AVP mRNA expression (AVP mRNA was upregulated after continuous BDNF infusion) in the suprachiasmatic nucleus (SCN) of the hypothalamus, which is regarded as the biological clock (128). BDNF and its cognate receptor TrkB are known to be expressed in the SCN and to show circadian variation (41).
Furthermore, Naert et al. tested the effect of partial inhibition of endogenous BDNF on HPA activity in adult rats. BDNF knock-down by small interfering RNA (siRNA) decreased endogenous BDNF production in different brain regions; although this did not influence basal HPA activity, the knock-down rats showed decreased BDNF production and concomitantly altered ACTH and cortisone production in response to restraint stress, indicating an essential role of BDNF in stress adaptation (129).
Both acute and chronic intracerebroventricular administration of BDNF into the brains of rat pups produced changes in HPA activation, but only chronic administration caused stable changes in HPA axis regulation, resulting in persistently increased secretion of cortisone by the adrenal cortex and leading to deviant behaviour, while the changes induced by acute doses of BDNF were adjusted by bodily homeostatic systems (127, 128). The HPA axis changes induced by chronic administration of BDNF resembled those caused by chronic stress and are believed to involve epigenetic mechanisms (130, 131). Epigenetic mechanisms are supposed to induce structural adaptations in chromosomal regions so as to register, signal, or perpetuate altered activity states (132). The essential role of epigenetics in producing stable changes in HPA axis regulation has been effectively proved in animal-model studies of chronic stress. Maternal separation of rat pups (regarded as an equivalent of chronic stress) induced stable changes in HPA axis regulation leading to a hyper-cortisone response to stress challenges. These HPA axis changes were mediated by epigenetic mechanisms in the form of hypermethylation of the promoter region of the glucocorticoid receptor (GR) gene, reducing expression of GR, which in turn promoted cortisone hypersecretion by weakening the negative feedback loop (133-135). Conversely, better maternal care caused hypomethylation of the GR gene promoter region, increasing GR expression and leading to a reduced cortisone secretion response in the face of stress challenges (135).
In experiments, blocking methylation of the GR promoter region with a histone deacetylase inhibitor (which introduces a change in chromatin structure that reduces the DNA length available for methyl-group attachment) blocked any increase of cortisone secretion under stress challenge in rat pups which had received comparatively little maternal care (135). In contrast, the addition of methyl groups (by introducing methyl donors) to the GR gene promoter region in adult rats which had received good maternal care as pups induced cortisone hypersecretion, thus reversing the established maternal programming of the stress response (136). Environmental enrichment could also reverse the maternal separation induced modification of the stress response (137), a good indication of the plausible reversibility of HPA axis mediated persistent stress-induced disorders. Epigenetic modification of the HPA axis stress response also occurred at levels of the axis higher than GR synthesis at the adrenal, and was also mediated through epigenetic mechanisms other than methylation, such as phosphorylation (125), histone modifications, and micro-RNA mediated transcriptional regulation (138).
In an ELS (early-life stress) study in male mice, the maternal separation induced increase in cortisone response to subsequent stress challenges was found to be associated with sustained hypomethylation of the promoter region of the POMC gene (a precursor of ACTH). The mice also presented despair-like behaviour and memory deficits, supposedly mediated through AVP signalling and epigenetic adaptations at the AVP enhancer locus (139). A persistent increase in AVP expression in the hypothalamic paraventricular nuclei (PVN), associated with decreased DNA methylation at the AVP enhancer locus, was found in another ELS study in mice (140).
In a different ELS study, in adult rats, decreased methylation of the AVP enhancer locus in the amygdala was found to be associated with the development of active coping mechanisms when the animals were presented with a predatory stress challenge (141).
Not only BDNF expression but also that of the prototype 'NGF' was found to be altered significantly under stress of various causes in animal models (123, 142); it was raised under maternal deprivation in rats (123), and specific hypothalamic nuclei were reported to have raised 'NGF' levels during aggression in mice (95). Occasional reports have also implicated NT-3 in stress (108).
A dopaminergic modulation of the NGFs-HPA axis mediated stress response has also been shown in animal models of social defeat. Social defeat raised BDNF levels in the nucleus accumbens (NAc), a destination centre for the dopaminergic neurons residing in the ventral tegmental area (VTA) of the mesolimbic reward pathway. The raised BDNF level was associated with learning social avoidance behaviour in the model animals and was mediated through CRH (from the PVN of the hypothalamus) (77, 78).
Translational studies in humans
There are also reports translating the animal-model observations of stress response modification under persistent stress to human subjects (142-144). A single-nucleotide polymorphism of BDNF (Val66Met) in human populations has also been found to be associated with an altered stress response (63, 145). An increase in BDNF synthesis accompanying any neuronal activity is a neurophysiological phenomenon, and a rapid increase in its synthesis in specific brain regions is a characteristic finding in the face of stress challenges (124, 146), although methylation of the BDNF promoter (exon IV) in specific brain regions was found in chronic stress only (42, 147). Hypermethylation of a CpG nucleotide at the 5' end of the binding site of the transcription factor nerve growth factor inducible protein A (NGFIA) in the GR promoter region has also been reported in all rat pups which received reduced maternal care. An alteration in the DNA methylation pattern for NGFIA is considered an essential step in the epigenetic reprogramming of the stress response (135, 148).
The persistently raised basal levels of BDNF and cortisone hypersecretion, along with other stable alterations in HPA axis set points, may induce transcriptomic changes in the expression of genes related to synaptic function (149, 150), causing structural and functional changes in neurons in the form of altered synaptic organization and synaptic transmission and leading to altered behaviour of the organism (20, 151), which is perceived as a psychiatric disorder.
Fig. 1a. Superfamily of neurotrophins. NGFs: nerve growth factors; * = in invertebrates only. b. Nerve growth factors and their cognate receptors. (Solid and dashed lines show signalling through the specific high-affinity Trk receptors and the common low-affinity receptor p75NTR, respectively.)
Fig. 3. Hippocampal-HPA axis regulation in normal health and chronic restraint stress
Role of epigenetic modifications in the NGFs mediated HPA activation model: behavioural adaptations to persistent stress manifesting as psychiatric disorders

NGFs have been intensively implicated in creating an epigenetic memory of environmental stimuli (65, 104). MeCP2 (methyl-CpG-binding protein 2), a methyl-binding-domain (MBD) protein and transcription repressor essential for accomplishing the gene-silencing effect of methylation, needs to dissociate from the BDNF promoter region in order to reprogram the stress response.
A Comprehensive Price Prediction System Based on Inverse Multiquadrics Radial Basis Function for Portfolio Selection
Price prediction plays a crucial role in portfolio selection (PS). However, most price prediction strategies make only a single prediction and have no efficient mechanism for making a comprehensive price prediction. Here we propose a comprehensive price prediction (CPP) system based on the inverse multiquadrics (IMQ) radial basis function. First, a novel radial basis function (RBF) system based on the IMQ function, rather than the traditional Gaussian (GA) function, is proposed; it centers on multiple price prediction strategies and aims at improving the efficiency and robustness of price prediction. Under the novel RBF system, we then create a portfolio update strategy based on a kernel and the trace operator. To assess the system's performance, extensive experiments are performed on 4 data sets from different real-world financial markets. Interestingly, the experimental results reveal that the novel RBF system effectively realizes the integration of different strategies and that CPP outperforms other systems in investing performance and risk control, even considering a certain degree of transaction costs. Besides, CPP calculates quickly, making it applicable to large-scale and time-limited financial markets.
Introduction
The target of PS is to achieve long-term financial goals by constructing an effective investment strategy that reasonably allocates wealth among a set of assets. There are two main theories about PS; one is the mean-variance theory. Earlier work proved that MQ performed best in dealing with the interpolation of scattered data, but MQ is only conditionally positive definite. Therefore, a more application-oriented IMQ was proposed. The outstanding advantages of IMQ are its good global behaviour, strict positive definiteness, and stable eigenvalues [14] [18]. For example, Abbasbandy [14] pointed out that IMQ could be used to approximate an unknown analytic function to obtain a more stable and accurate solution of a global optimization problem. And Tanbay [19] pointed out that, compared with GA in the solution of the neutron diffusion equation, IMQ had more stable performance and could obtain highly accurate numerical solutions. Therefore, we incorporate IMQ rather than the traditional GA in our novel RBF system to increase efficiency and robustness.
In this paper, a comprehensive price prediction (CPP) system based on the IMQ radial basis function is constructed. The system first uses the IMQ basis function to construct a novel RBF system. Then, combined with the novel RBF system, a portfolio update strategy based on a kernel and the trace operator is constructed. Consider H different price prediction strategies; this paper focuses on three, namely EMA, L1-median, and PP. First, CPP selects the best-performing strategy according to the investing performance of all strategies within the recent window and gives it the largest influence in future price prediction. Second, CPP exploits the similarity between the best-performing strategy and the other price prediction strategies to calculate the influence of the other strategies. This system effectively integrates the advantages of all the price prediction strategies and innovatively measures influence by investing performance. In general, this paper's main contributions are as follows: 1) Propose a novel RBF system based on the IMQ radial basis function and centered on multiple price predictions, forming a comprehensive price prediction.
2) Propose a comprehensive combination of aggressive strategies and moderate strategies to achieve a better balance between returns and risks.
3) Propose a portfolio update strategy based on a kernel and the trace operator.
The rest of this paper is organized as follows. Section 2 describes the relevant problem setting and related work on PS. The CPP system is introduced and described in detail in Section 3. Experiments on 4 benchmark data sets to assess CPP are presented in Section 4. Finally, conclusions are drawn in Section 5.
The Relevant Problem Setting
In this paper, d assets with a time span of n periods in a financial market are considered. For ease of understanding, one may think of a period as a day. The asset prices of the tth period are represented by the close price vector $p_t \in \mathbb{R}^d_+$, and the corresponding price relative vector is

$$s_t = p_t \oslash p_{t-1}, \qquad (1)$$

where $\oslash$ denotes element-wise division. At the beginning of each trading period, wealth needs to be allocated across the range of assets; the proportion of each asset in the total wealth is recorded as the portfolio vector. Supposing there are d assets, their portfolio vector in the tth period is

$$v_t \in \Delta_d = \{ v \in \mathbb{R}^d : v \ge \mathbf{0},\ \mathbf{1}^\top v = 1 \}, \qquad (2)$$

where $\Delta_d$ is the d-dimensional simplex. The non-negativity constraint indicates no short selling, and the equality constraint indicates self-financing, which means that it is not allowed to borrow money and all of the wealth is reinvested.
Since all the wealth of the previous period is invested over the next period, the cumulative wealth (CW) increases at a multiplicative rate, i.e. $W_t = W_{t-1} \cdot v_t^\top s_t$, where $v_t^\top s_t$ is the increasing factor. So after n periods the final cumulative wealth is

$$W_n = W_0 \prod_{t=1}^{n} v_t^\top s_t,$$

where $W_0$ is the initial wealth. In this paper, for convenience of calculation, it is assumed that $W_0 = 1$. The ultimate purpose of a PS system is to maximize the final cumulative wealth $W_n$ by constructing a set of portfolio vectors $\{v_t\}_{t=1}^{n}$. From the above equation, this is equivalent to maximizing the increasing factor $v_t^\top s_t$ in each period. Note that this optimization problem does not require statistical assumptions about the changes in asset prices.
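A minimal Python sketch of this wealth recursion (illustrative, with toy numbers rather than market data):

```python
import numpy as np

# Cumulative wealth after n periods: W_n = W_0 * prod_t (v_t . s_t),
# where v_t is the portfolio (non-negative, sums to 1) and s_t is the
# vector of price relatives p_t / p_{t-1}.

def cumulative_wealth(portfolios, price_relatives, w0=1.0):
    w = w0
    for v, s in zip(portfolios, price_relatives):
        w *= float(np.dot(v, s))   # increasing factor of period t
    return w

v = [np.array([0.5, 0.5]), np.array([0.2, 0.8])]
s = [np.array([1.02, 0.99]), np.array([0.98, 1.05])]
print(cumulative_wealth(v, s))   # ~ 1.005 * 1.036
```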
Related Work
In this subsection, some classical prediction strategies are introduced to help us understand how to build a PS system.
The UBAH strategy [5], which is generally used as a market strategy to generate a market index, starts with an equal distribution of wealth among the d assets and then holds. Both OLMAR [10] and RMR [8], which keep a moderate attitude, use the mean reversion phenomenon to predict future asset prices. OLMAR points out that future asset prices should revert to a historical moving average and proposes the exponential moving average (EMA). The EMA exploits all historical price information to achieve price prediction:

$$\hat{s}_{E,t+1} = \alpha \mathbf{1} + (1-\alpha)\,\mathrm{EMA}_t \oslash s_t, \qquad (7)$$

where $\mathrm{EMA}_t$ represents the previous EMA-based prediction, $\mathbf{1}$ is a d-dimensional vector with components of 1, $0 < \alpha < 1$ is a decaying factor, and $s_t$ is the real price relative on the tth period.
When $\mathrm{EMA}_t$ is expanded recursively (Equation (8)), it can be seen that EMA makes full use of all historical prices and gives larger weight to more recent price information. Unlike OLMAR, RMR no longer uses the simple mean, but instead exploits the robustness of the L1-median [26] [27] to predict future asset prices. Statistically speaking, the L1-median has a more attractive property than the simple mean because its breakdown point is 0.5, meaning that only when 50% of the points in the data set are contaminated can the L1-median be driven beyond all bounds; a higher breakdown point means a more stable estimator, and the breakdown point of the simple mean is 0. The L1-median of the recent price window is

$$\tilde{p}_{t+1} = \arg\min_{\mu} \sum_{k=0}^{\omega-1} \| p_{t-k} - \mu \|,$$

where $\|\cdot\|$ represents the Euclidean norm, and the corresponding future price relative of RMR is $\hat{s}_{R,t+1} = \tilde{p}_{t+1} \oslash p_t$. OLMAR and RMR exploit the same optimization approach to update their portfolios: stay close to the previous portfolio while meeting a minimum predicted increasing factor,

$$v_{t+1} = \arg\min_{v \in \Delta_d} \tfrac{1}{2}\|v - v_t\|^2 \quad \text{s.t.} \quad v^\top \hat{s}_{t+1} \ge \epsilon.$$

EMA and the L1-median essentially exploit the principle of mean reversion and are cautious in their price predictions. However, there is plenty of evidence in real financial markets that irrational investment can sustain price trends; therefore, the importance of trend-following strategies should not be ignored. In real financial markets most investors profit from rising prices, so they are more concerned with recent maximum prices. The PPT system suggests using the peak prices (PPs) of the different assets within a time window [6].
The PP-based prediction is

$$\hat{s}_{P,t+1} = \Big(\max_{0 \le k \le \omega-1} p_{t-k}\Big) \oslash p_t,$$

where the maximum is taken element-wise; $\hat{s}_{P,t+1}$ can also be understood as the growth potential of the assets.
EMA and the L1-median belong to trend reversion; both are conservative, moderate investment strategies. In contrast, PP is an active, aggressive strategy, as it belongs to trend following. Depending on the financial environment, aggressive strategies are sometimes needed to achieve high returns, while moderate strategies are sometimes needed to avoid risk. All of this motivates us to construct a comprehensive price prediction system that can effectively integrate the advantages of the different strategies, sketched below.
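To make the three base predictors concrete before the RBF machinery is introduced, here is a hedged Python sketch of EMA-, L1-median-, and peak-price-style predictions. The window matrix P and parameter values are toy inputs, and the EMA recursion follows the OLMAR form reconstructed above; none of this is the authors' code.

```python
import numpy as np

# Toy window of close prices: w = 4 periods, d = 2 assets; p_t = P[-1].
P = np.array([[10.0, 20.0],
              [10.5, 19.0],
              [ 9.8, 21.0],
              [10.2, 20.5]])

def ema_price_relative(prev_pred, s_t, alpha=0.5):
    """OLMAR-style EMA prediction: alpha*1 + (1-alpha)*prev_pred/s_t."""
    return alpha * np.ones_like(s_t) + (1.0 - alpha) * prev_pred / s_t

def l1_median(X, iters=200, tol=1e-9):
    """Weiszfeld iteration for the L1-median of the rows of X (RMR)."""
    mu = X.mean(axis=0)
    for _ in range(iters):
        dist = np.linalg.norm(X - mu, axis=1)
        dist = np.where(dist < tol, tol, dist)   # guard against division by zero
        w = 1.0 / dist
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

def peak_price_relative(X):
    """PPT-style prediction: window peak price over the latest close price."""
    return X.max(axis=0) / X[-1]

s_t = P[-1] / P[-2]                         # latest real price relative
print(ema_price_relative(np.ones(2), s_t))  # EMA predictor (prev_pred taken as 1)
print(l1_median(P) / P[-1])                 # RMR predictor
print(peak_price_relative(P))               # PPT predictor
```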
Novel RBF System Based on IMQ Function
The classical expression of an RBF system is

$$f(x) = \sum_{h=1}^{H} w_h\, \varphi(\| x - c_h \|), \qquad (13)$$

where $x$ and $f(x)$ are the input and output respectively, $c_h$ are the centers, and $w_h$ the weights; the traditional choice of $\varphi$ is the Gaussian (GA) function. Besides GA, IMQ is another radial basis function that cannot be ignored. Here, a novel RBF system based on the IMQ radial basis function is constructed, that is, Equation (13) with

$$\varphi_{\mathrm{IMQ}}(r) = \frac{1}{\sqrt{r^2 + \sigma^2}}. \qquad (14)$$

There is a theoretical basis for this improvement: GA and IMQ are essentially of the same family, and both are positive definite functions [28]. However, in practical applications IMQ's performance is more stable and better than GA's [14] [19].
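The two kernels can be compared directly; the sketch below uses the standard forms of the Gaussian and IMQ functions (the paper's exact parameterization may differ).

```python
import numpy as np

# Gaussian (GA) and inverse multiquadrics (IMQ) radial basis functions;
# r is the distance to a center and sigma is a width parameter.

def gaussian_rbf(r, sigma):
    return np.exp(-(r ** 2) / (2.0 * sigma ** 2))

def imq_rbf(r, sigma):
    return 1.0 / np.sqrt(r ** 2 + sigma ** 2)

r = np.linspace(0.0, 3.0, 7)
print(gaussian_rbf(r, 1.0))
print(imq_rbf(r, 1.0))   # decays more slowly: better behaved far from centers
```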
As for the formulation in this paper, the H price prediction strategies are denoted by their predictions $\hat{s}_{h,t+1}$, $h = 1, \dots, H$, each with an associated portfolio computed by Equation (15), in which $\Delta_d$ is defined as in Equation (2); $v_{h,t-k}^\top s_{t-k}$ represents the increasing factor of the hth price prediction strategy in the (t-k)th period, and $s_{t-k}$ is the actual price relative generated by Equation (1). The method first computes these increasing factors over a recent window [31]. It then selects the best-performing strategy $\hat{s}_{*,t+1}$, where best-performing means the strategy that obtains the highest return even in the worst financial environment. The general process is that the smallest increasing factor of each strategy within the time window is found first, and then the largest increasing factor is selected from the set composed of these smallest increasing factors:

$$h^* = \arg\max_{1 \le h \le H}\ \min_{0 \le k \le \omega-1}\ v_{h,t-k}^\top s_{t-k}.$$

This approach ensures that we obtain the best price prediction strategy for the worst trading environment, which is the key to improving the overall robustness of the system.
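The maximin selection can be written in a few lines; in this sketch the matrix factors of per-strategy increasing factors over the window is assumed to be precomputed, and the values are toy numbers.

```python
import numpy as np

# Maximin selection of the best-performing predictor: for each strategy h,
# take its worst increasing factor over the recent window, then pick the
# strategy whose worst case is best. factors[h, k] = v_{h,t-k} . s_{t-k}.

def best_strategy(factors):
    worst_case = factors.min(axis=1)   # worst factor per strategy
    return int(np.argmax(worst_case))  # strategy with the best worst case

factors = np.array([[1.01, 0.97, 1.03],   # strategy 0
                    [1.00, 0.99, 1.01],   # strategy 1
                    [1.05, 0.90, 1.10]])  # strategy 2
print(best_strategy(factors))             # -> 1 (worst cases 0.97, 0.99, 0.90)
```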
In the novel RBF system of Equation (14), all the qualified portfolios calculated by Equation (15) serve as the centers, and the best strategy $\hat{s}_{*,t+1}$ serves as the fixed input. The specific form of the novel RBF system in Equation (14) is then transformed into Equation (17), in which $\Delta v_{t+1}$ represents the update increment of the portfolio for the (t+1)th period. The reason for this representation will be explained in Section 3.3. From the formula it can be seen that the system quantifies the degree of similarity between the best strategy and each of the other strategies and weights their contributions accordingly. The proposed novel RBF system in Equation (17) differs from that in Equation (13) in several ways, mainly in the problem setting, the data characteristics, and the selection of the radial basis function.
2) Although the inputs of both RBF systems are fixed, $\hat{s}_{*,t+1}$ is determined by the recent investing performance of all the prediction strategies. This means that each price prediction strategy may become the input.
3) The basis functions of the two RBF systems are different: the system in Equation (13) uses the Gaussian radial basis function, while Equation (17) uses the IMQ radial basis function.
4) The objective of Equation (13) is to fit $y$, so back-propagation methods can be used to solve the problem [29] [30]. However, the objective of Equation (17) is to maximize the generalized increasing factor, and the solution method differs from that for Equation (13).
Comprehensive Price Prediction System
In order to apply the theory in practice, the next step is to construct a PS model using the proposed novel RBF system. As described in Section 2.1, the quantity to improve is the increasing factor; here a generalized increasing factor is defined through the trace operator applied to the product of $V$ and $\Psi$, where $\mathrm{tr}$ is the trace operator, $V$ collects the strategy portfolios $v$ as components, and $\Psi$ is a diagonal matrix with the RBF weights $\psi$ as its diagonal elements. Compared with the classical form of the increasing factor, this simplification is possible because $\hat{v}_t$ is fixed. Therefore, the optimization goal now switches from the portfolio itself to the update increment.
Solution Algorithm
In this subsection, the solution algorithm of CPP is introduced in detail; it is briefly summarized in Proposition 1. It is worth noting that our solution is suboptimal, not only because there is a certain bias in estimating the future with historical data, but also to avoid over-fitting; it is therefore not necessary to obtain the optimal solution.
Proof. The first step is to prove that $u_{t+1}$ satisfies all the constraints in Equation (19). On the one hand, the right-hand side of Equation (23) can be simplified using the idempotence of the projection operator. On the other hand, the left-hand side of Equation (23) can be bounded using the Cauchy-Schwarz inequality.
Taking Equations (23), (25), and (26) into consideration, we deduce $\|\hat{u}\| > \varepsilon$, which contradicts $\|\hat{u}\| \le \varepsilon$. It is thus proved that the optimization problem in Equation (19) attains its maximum at the proposed point. Similarly, according to Equations (25) and (29) we would obtain $\|u^*\| > \varepsilon$, which contradicts the constraint $\|u^*\| \le \varepsilon$.
To sum up, $u_{t+1}$ is obtained by projecting onto the $\varepsilon$-Euclidean ball. Compared with Equation (17), in order to satisfy the constraint conditions in Equation (19), $u_{t+1}$ adds two projection operators on top of $\Delta v_{t+1}$. This also explains why the update increment of the portfolio is represented as in Equation (17).
Then the portfolio $v_{t+1}$ for the next period is obtained by combining the current portfolio with the update increment. The complete CPP system is outlined in Algorithm 1. CPP is a fast algorithm because it uses only ordinary matrix calculations without any iterative computation, which significantly reduces the running time.
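Since Algorithm 1 itself is not reproduced here, the following hedged Python sketch shows the flavour of the update step: scale the RBF-driven increment to the ε-ball, add it to the current portfolio, and project back onto the simplex. The simplex projection is the standard Euclidean one (Duchi et al.); proj_ball, delta_v, and the toy numbers are assumptions, not the paper's exact operators.

```python
import numpy as np

def proj_ball(x, eps):
    """Scale x onto the Euclidean ball of radius eps if it lies outside."""
    n = np.linalg.norm(x)
    return x if n <= eps else x * (eps / n)

def proj_simplex(y):
    """Euclidean projection of y onto the probability simplex."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(y) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(y - theta, 0.0)

def cpp_update(v_t, delta_v, eps):
    """One sketched CPP step: constrained increment, then simplex projection."""
    u = proj_ball(delta_v, eps)
    return proj_simplex(v_t + u)

v_t = np.array([0.4, 0.3, 0.3])           # current portfolio
delta_v = np.array([0.9, -0.5, -0.4])     # hypothetical RBF-driven increment
print(cpp_update(v_t, delta_v, eps=0.5))  # valid portfolio: >= 0, sums to 1
```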
Data Sets and Comparison Approaches
In this subsection, in order to comprehensively assess performance, several state-of-the-art systems, including RMR, OLMAR, PPT [32], SSPO [33], and AICTR [34], are chosen to compete with the CPP system we propose.
Detailed descriptions of these systems are as follows.
1) RMR: RMR uses the L1-median to make price predictions, as described in Section 2.2.
2) OLMAR: OLMAR uses moving averages to make price predictions, as described in Section 2.2.
3) PPT: PPT is an aggressive strategy that uses the maximum values of the different assets to make price predictions, as mentioned in Section 2.2. In general, the parameters of CPP are determined based on the final cumulative wealth (CW), in the same way as in previous studies; the calculation of final CW is described in detail in Section 2.1. First, we set the window size ω = 5, which is widely used and consistent with the other systems.
Second, we vary one parameter while fixing the others in the experiments. Since ε is the updating strength, it is roughly estimated to be a large value, while σ_h² is a parameter used to evaluate the difference between two portfolios and should therefore be small. On the one hand, we first set ω = 5 and ε = 1400 and let σ_h² vary between 0.0007 and 1.0012; according to the results in Figure 1, the investing performance of CPP around σ_h² = 0.0008 is stable and good. On the other hand, we set ω = 5 and σ_h² = 0.0008 and let ε vary between 1200 and 1700; the results are shown in Figure 2, from which we see that CPP is stable and good around ε = 1400. Therefore, the parameters of CPP are set as σ_h² = 0.0008 and ε = 1400.
Experimental Results
In this paper, a scheme containing seven evaluation indicators is designed to assess the performance of the different systems. These seven indicators can be roughly divided into three categories, namely investing performance, risk metrics, and application issues. Investing performance includes CW, mean excess return (MER), and the α Factor. Risk metrics consist of the Sharpe ratio (SR) and information ratio (IR). As for application issues, we chose transaction costs and running times to assess them. These indicators are discussed in the following subsections. On CW, DJIA proves a challenging data set, because many systems, such as PPT and OLMAR, do not perform well on it; CPP, however, reaches a value of 4.74, which is 53.90% higher than PPT. These results show that CPP is an effective PS system and can accumulate more wealth in real financial markets.
In order to show the superiority of CPP, the CW trajectories of each system on DJIA are plotted in Figure 3, which shows the excellent investing performance of CPP more intuitively.
2) MER: Return is a financial term that describes the proportion of wealth that a PS system gains or loses over one investing period. In this paper, $r_t$ and $r_t^*$ denote the daily returns of the attended PS system and of the market baseline on the $t$-th period, respectively. Note that the UBAH system is defined as the market baseline.
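In code, the daily returns and the MER can be derived directly from the price relatives. The following sketch is illustrative only: it assumes simple arithmetic daily returns and a uniform buy-and-hold (UBAH) baseline; the paper's exact return convention may differ.

```python
import numpy as np

def ubah_daily_returns(price_relatives):
    """UBAH baseline: buy equal amounts once and hold.

    Wealth_t is the mean over assets of the cumulative product of
    their price relatives; the daily return is the wealth ratio - 1.
    """
    cum = np.cumprod(price_relatives, axis=0)        # (T, m) asset growth
    wealth = np.concatenate(([1.0], cum.mean(axis=1)))
    return wealth[1:] / wealth[:-1] - 1.0

def mean_excess_return(system_returns, price_relatives):
    """MER: average per-period gap between a system and the UBAH baseline."""
    return float(np.mean(system_returns - ubah_daily_returns(price_relatives)))

rng = np.random.default_rng(1)
x = 1.0 + 0.01 * rng.standard_normal((250, 3))
r_sys = x @ np.full(3, 1.0 / 3.0) - 1.0              # uniform rebalancing returns
print(mean_excess_return(r_sys, x))
```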
The MER results from the various PS systems are reported in Table 2. In addition, a small MER gap is likely to produce a larger CW gap in the long term. Therefore, the above results demonstrate that CPP can achieve an outstanding investing performance.
3) α Factor: MER measures the investing performance of a PS system without considering market risks. However, in the real financial market, the volatility of the market will undoubtedly affect the performance of assets. The capital asset pricing model (CAPM) [36] points out that the expected return of a PS system can be divided into two parts: the first part comes from the market return, and the second comes from the inherent excess return, also called the α Factor [37] [38]. Therefore, the α Factor is able to evaluate the investing performance of different PS systems; the results are summarized in Table 3. CPP achieved the highest α on three data sets and ranked second on the NYSE(N). For example, CPP (0.0042, 0.0002) achieves a higher α than the competing systems, which shows that CPP is still able to achieve a higher inherent excess return in the face of market volatility. In addition, the statistical t-test is used to determine whether α is significantly larger than 0, proving that the inherent excess return is not achieved by luck; the p-values are presented in Table 3.

1) Transaction Costs: The results are presented in Figure 4. CPP achieves the highest CW on all data sets when the transaction cost rate γ fluctuates between 0% and 0.15%. In addition, even when γ reaches a high value ($0.15\% \leq \gamma \leq 0.5\%$), CPP still outperforms other PS systems on three data sets. Therefore, CPP can bear moderate transaction costs and can be applied in real-world financial markets.
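The cost experiment can be simulated with the standard proportional transaction cost model, in which a rate γ is charged on the turnover between the target weights and the weights left over after the previous period's price drift. The sketch below is an assumption about the cost model behind Figure 4, not a transcription of it; the initial buy-in cost is ignored for simplicity.

```python
import numpy as np

def cw_with_costs(price_relatives, target_weights, gamma, w0=1.0):
    """Cumulative wealth with proportional transaction costs.

    target_weights: (T, m) weights chosen at the start of each period.
    gamma: cost rate on turnover, sum|b_t - b_hat| / 2, where b_hat is
    the allocation left over after the previous period's price drift.
    """
    T, m = price_relatives.shape
    wealth = w0
    b_hat = target_weights[0]                     # initial buy-in cost ignored
    for t in range(T):
        b_t = target_weights[t]
        turnover = 0.5 * np.abs(b_t - b_hat).sum()
        wealth *= (1.0 - gamma * turnover)        # pay the rebalancing fee
        growth = float(b_t @ price_relatives[t])  # gross period growth
        wealth *= growth
        b_hat = b_t * price_relatives[t] / growth # weights after price drift
    return wealth

rng = np.random.default_rng(2)
x = 1.0 + 0.01 * rng.standard_normal((250, 3))
b = np.full((250, 3), 1.0 / 3.0)                  # constant rebalancing
for g in (0.0, 0.0015, 0.005):                    # 0%, 0.15%, 0.5%
    print(g, cw_with_costs(x, b, g))
```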
2) Running Times: Running time is an important indicator of whether a system can be applied in a large-scale, time-limited environment, such as high-frequency trading (HFT) [42]. Measured on a regular desktop computer, CPP completes its computations quickly on all benchmark data sets, including DJIA and HS300. Therefore, CPP has good computational efficiency and can be applied in large-scale financial markets.
Conclusion
In this paper, we proposed a new CPP system based on the IMQ radial basis function, integrating three different aggressive and moderate strategies for effective and robust PS. Instead of a traditional Gaussian function, we chose a more stable and accurate function, the IMQ, for the novel RBF system, which centers on multiple strategies. With regard to the portfolio update, departing from the traditional increasing factor, we proposed a generalized growth factor based on a kernel and trace operation. CPP is fast and can be applied in large-scale, time-limited financial environments. Extensive experiments on 4 worldwide benchmark data sets indicate that CPP can effectively integrate the advantages of different strategies, and it proved to be effective for PS. On the one hand, in most cases CPP outperforms other commonly used systems on the performance indicators CW, MER, and α Factor. On the other hand, CPP achieves the highest SR and IR compared with the other systems.
The results show that CPP has not only excellent investing performance but also good risk-control ability. In addition, CPP can withstand reasonable transaction costs and operates quickly, both of which matter in the real financial market. In conclusion, CPP is an efficient and robust PS system and deserves further investigation. Of course, the CPP system has its own shortcomings. On the one hand, the problem of reducing transaction costs was not considered at the initial stage of modeling. On the other hand, this paper considers only the three strategies. In the future, we can improve the performance of the system from these two aspects.
Acknowledgements
Thanks to Jinan University for the resources provided, and to tutors for their valuable advice on this manuscript. This research is supported by the National
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.
Nitrogen and Sulfur Fertilization in Soybean: Impact on Seed Yield and Quality
Summary
Over time, plant breeding efforts for improving soybean [Glycine max (L.) Merr.] yield were prioritized and effects on seed nutritional quality were overlooked, decreasing protein concentration. This research aims to explore the effect of nitrogen (N) and sulfur (S) fertilization on soybean seed yield, seed protein, and sulfur amino acid concentration. In 2018, ten field trials were conducted across the main US soybean producing region. The treatments were 1) fertilization at planting (NSP); 2) fertilization during vegetative growth (NSV); 3) fertilization during reproductive growth (NSR); and 4) an unfertilized control (Control). Nitrogen fertilization was applied at a rate of 40 lb/a using urea ammonium nitrate (UAN), and S at 9 lb/a via ammonium sulfate (AMS). A meta-analysis was performed to account for small variations among experimental designs. A summary of the effect sizes did not show effects on seed yield. However, fertilization at planting (NSP) increased seed protein by 1% more than the control across all sites. Overall, sulfur amino acid concentration increased by 1.5% relative to the control, but the most consistent benefit came from fertilization during reproductive growth (NSR), which increased sulfur amino acids by 1.9%. Although N and S fertilization did not affect seed yields, applying N and S at different stages of crop growth can increase protein concentration and improve protein composition, providing the opportunity to open new US soybean markets.
Introduction
Soybean [Glycine max (L.) Merr.] demands a great amount of nitrogen (N) during the seed filling period compared to other legumes and cereals. The plant N assimilation from the soil supply and biological nitrogen fixation (BNF) frequently does not match the requirements for a high yielding crop. This gap between assimilation and requirement forces the plant to prematurely remobilize N from other organs and consequently establish a "self-destruction" status, hampering the synthesis of highly energetic compounds in the seed, such as proteins and amino acids (Sinclair and de Wit, 1975). Over the last decades, plant breeding efforts overlooked changes in seed quality (defined here as nutritional composition) and concentrated on increasing soybean yields. The latter was achieved, increasing production and profitability, but the former was diminished, creating a concern for the global industry and producers. This study aims to explore the effect of N and S fertilization on seed yield, seed protein, and the concentration of sulfur amino acids such as cysteine and methionine. We hypothesized that N and S fertilization as a management practice can help offset the reduction in protein levels and protein quality (amino acid concentration), especially when adopted during the seed filling period.
Sites and Measurements
This research project was conducted across seven states of the main soybean producing region in the United States (KS, MN, AR, IL, IA, SD, and IN), investigating management practices with potential effects on soybean nutritional quality. However, experimental designs and treatments are slightly different across locations, requiring a preliminary selection of studies to perform the analysis in this report. Selection of trials, from the 2018 season, was done considering the presence of the following treatments in at least one variety × planting date combination (defined as the study): 1) fertilization at planting (NSP); 2) fertilization during vegetative growth (NSV); 3) fertilization during reproductive growth (NSR); and 4) an unfertilized control. A description of the 10 selected studies and their soil properties before planting is presented in Table 1. Regarding fertilizers and nutrient rates, ammonium sulfate (AMS) was applied to provide 9 lb/a of S-SO4, and urea ammonium nitrate (UAN) to provide 40 lb/a of N. At harvest, seed yield was recorded and seed samples were analyzed for protein and sulfur amino acid concentration with the near infrared (NIR) method (Pazdernik et al., 1997).
Statistical Analysis
A meta-analysis was adopted considering the different experimental procedures and designs across locations. The response ratio effect sizes, on a logarithmic scale, of each treatment relative to the control were estimated according to Borenstein et al. (2009). First, the effect sizes were calculated per study and associated with the within-study variability. The between-study variability was also estimated in order to assign specific weights to each study (random effects model). Finally, the summary of the effect sizes was calculated for each of the treatments and variables. The I² parameter, the percentage of between-study variance over the total variance, was calculated for each model and could be associated with specific conditions of each study (e.g., weather and soil), besides the random error. The R software (R Core Team, 2019) was used to perform calculations, analysis, and figures.
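To make the effect-size calculation concrete, the sketch below implements the log response ratio and a DerSimonian-Laird random-effects summary following the formulas in Borenstein et al. (2009). It is a generic Python illustration with invented numbers, not the project's actual analysis script, which was written in R.

```python
import numpy as np

def log_response_ratio(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Effect size ln(RR) and its within-study variance."""
    lnrr = np.log(mean_t / mean_c)
    var = sd_t**2 / (n_t * mean_t**2) + sd_c**2 / (n_c * mean_c**2)
    return lnrr, var

def random_effects_summary(effects, variances):
    """DerSimonian-Laird tau^2, weighted summary effect, and I^2 (%)."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances
    fixed_mean = np.sum(w * effects) / w.sum()
    q = float(np.sum(w * (effects - fixed_mean)**2))
    df = len(effects) - 1
    c = w.sum() - np.sum(w**2) / w.sum()
    tau2 = max(0.0, (q - df) / c)
    w_star = 1.0 / (variances + tau2)                # random-effects weights
    summary = float(np.sum(w_star * effects) / w_star.sum())
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return summary, tau2, i2

# Hypothetical protein means (%) for NSP vs. control in three studies.
eff, var = zip(*[log_response_ratio(36.2, 1.1, 4, 35.8, 1.2, 4),
                 log_response_ratio(34.9, 0.9, 4, 34.6, 1.0, 4),
                 log_response_ratio(35.5, 1.3, 4, 35.2, 1.1, 4)])
summary, tau2, i2 = random_effects_summary(eff, var)
print(f"RR = {np.exp(summary):.4f}, tau^2 = {tau2:.5f}, I^2 = {i2:.1f}%")
```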
Responses on Seed Yield and Quality
The summary of effect sizes shows no yield response from N and S fertilization applied at any time of the soybean season (Figure 1). Seed protein concentration across sites was increased by 1% more than the control only by the fertilization at planting (Figure 2). The sulfur amino acids were always enhanced after N and S application, increasing 1.7%, 1.5%, and 1.9% when applied at NSP, NSV, and NSR, respectively, all relative to the control. In addition, for sulfur amino acids, the summary of effect sizes for the late fertilization (NSR) was the most precisely estimated, with smaller 95% confidence intervals (CI). Overall, the magnitude of changes in protein and amino acids was relatively small, around 1-2% over the control, which represents less than 1% of change on the basis of concentration by dry weight.
Final Considerations and Next Steps
Much of the between-study variance is not explained by the current meta-analysis model. A future step for fine-tuning this model could be to consider the input of weather and soil variables to improve the estimation of the summary effect size. In addition, more studies from the literature or from different field locations should be explored, minimizing the weight of specific sites on the final results, and even allowing statistical comparison of fertilization timings during the season.
Figure 1. Treatment effect sizes for soybean seed yield across studies. Squares are located on the log of the response ratios (RR), or effect sizes. The size of each square represents the weight of the study in the final summary, and horizontal bars represent the 95% confidence intervals (CI). The width of the gray bar on the effects summary determines whether the treatment had a positive, negative, or no effect (ns) on seed yield (95% CI). Percentages (left of the summary) indicate the final RR, and I² represents the between-study variability. Nitrogen (N) and sulfur (S) application at planting is presented in the left panel (NSP), during the vegetative growth in the center (NSV), and during the reproductive growth in the right panel (NSR).
Figure 2. Treatment effect sizes for seed protein concentration across studies. Squares are located on the log of the response ratios (RR), or effect sizes. The size of each square represents the weight of the study in the final summary, and horizontal bars represent the 95% confidence intervals (CI). The width of the gray bar on the effects summary determines whether the treatment had a positive, negative, or no effect (ns) on protein (95% CI). Percentages (left of the summary) indicate the RR, and I² represents the between-study variability. Nitrogen (N) and sulfur (S) application at planting is shown in the left panel (NSP), during the vegetative growth in the center (NSV), and reproductive growth in the right (NSR).
Figure 3. Treatment effect sizes for sulfur amino acids concentration across studies. Squares are located on the log of the response ratios (RR), or effect sizes. The size of each square represents the weight of the study in the final summary, and horizontal bars represent the 95% confidence intervals (CI). The width of the gray bar on the effects summary determines whether the treatment had a positive, negative, or no effect (ns) on sulfur amino acids (95% CI). Percentages (left of the summary) indicate the summary RR, and I² represents the between-study variability. Nitrogen (N) and sulfur (S) application at planting is presented in the left panel (NSP), during the vegetative growth in the center (NSV), and during the reproductive growth in the right panel (NSR).
Table 1. Description of the ten studies relative to planting date, maturity group (MG), and soil properties (pH, clay, and soil organic matter). Studies were located in Indiana (IN) and South Dakota (SD), with study codes representing single combinations of planting dates and MG.
The Therapeutic Effect of PLAG against Oral Mucositis in Hamster and Mouse Model
Chemotherapy-induced mucositis can limit the effectiveness of cancer therapy and increase the risk of infections. However, no specific therapy for protection against mucositis is currently available. In this study, we investigated the therapeutic effect of PLAG (1-palmitoyl-2-linoleoyl-3-acetyl-rac-glycerol, acetylated diglyceride) in 5-fluorouracil (5-FU)-induced oral mucositis animal models. Hamsters were administered 5-FU (80 mg/kg) intraperitoneally on days 0, 6, and 9. The animals’ cheek pouches were then scratched equally with the tip of an 18-gage needle on days 1, 2, and 7. PLAG was administered daily at 250 mg/kg/day. PLAG administration significantly reduced 5-FU/scratching-induced mucositis. Dramatic reversal of weight loss in PLAG-treated hamsters with mucositis was observed. Histochemical staining data also revealed newly differentiated epidermis and blood vessels in the cheek pouches of PLAG-treated hamsters, indicative of recovery. Whole blood analyses indicated that PLAG prevents 5-FU-induced excessive neutrophil transmigration to the infection site and eventually stabilizes the number of circulating neutrophils. In a mouse mucositis model, mice with 5-FU-induced disease treated with PLAG exhibited resistance to body-weight loss compared with mice that received 5-FU or 5-FU/scratching alone. PLAG also dramatically reversed mucositis-associated weight loss and inhibited mucositis-induced inflammatory responses in the tongue and serum. These data suggest that PLAG enhances recovery from 5-FU-induced oral mucositis and may therefore be a useful therapeutic agent for treating side effects of chemotherapy, such as mucositis and cachexia.
Introduction

Neutrophils are recruited to infected tissue, where they function to eliminate pathogens (5). Neutrophils destroy pathogen cells via phagocytosis, degranulation, and the neutrophil extracellular trap (NET) (6,7). Although neutrophils play a critical role in the innate immune system, excessive transmigration of neutrophils can result in neutropenia and severe inflammation in the host tissue (8). Mucositis may, therefore, limit the effectiveness of chemotherapy, resulting in an extension of the treatment period and perhaps a decrease in patient survival (9). The anti-metabolite drug 5-fluorouracil (5-FU) is widely used for the treatment of solid tumors, including colorectal and breast cancers (10). 5-FU is an analog of uracil that non-specifically blocks DNA synthesis, thus inhibiting cell division (11). PLAG (1-palmitoyl-2-linoleoyl-3-acetyl-rac-glycerol) is an acetylated form of diacylglycerol and a mono-acetyl-diglyceride that was first isolated from the antlers of sika deer (12). PLAG can be chemically synthesized using glycerol, palmitic acid, and linoleic acid, and the synthetic form has been confirmed to be identical with the naturally isolated form (13). In a previous study, PLAG was shown to exert a therapeutic effect with pegfilgrastim in treating chemotherapy-induced neutropenia by modulating neutrophil transmigration (14). Because PLAG regulates neutrophil transmigration, we hypothesized that PLAG administration would ameliorate the sequelae associated with oral mucositis.
To characterize the effect of PLAG on oral mucositis, we established oral mucositis models in mice and hamsters in which the disease is induced by 5-FU administration and scratching of the cheek pouches and/or tongue with the tip of an 18-gage needle (15). The scratching increases the risk of infection and thus induces neutrophil recruitment into the inflamed tissues. Using these models, we found that PLAG administration significantly reduced the symptoms of oral mucositis. PLAG reversed the weight loss, ulceration, and severe inflammation associated with 5-FU/scratching-induced mucositis. These data indicate that PLAG could be therapeutically useful in reducing the complications associated with chemotherapy, such as oral mucositis and cachexia, and thus may be an excellent supplementary agent for anticancer therapy.
Materials and Methods

Animal Experiments
Male Syrian Golden Hamsters were obtained from Japan SLC (Shizuoka Prefecture, Japan). The hamsters were 6 weeks old, weighed 120-140 g, and were maintained under specific pathogenfree conditions. BALB/c mice were obtained from Koatech Co. (Pyongtaek, Republic of Korea) and maintained under specific pathogen-free conditions. The mice were 6-8 weeks of age and weighed 20-22 g at the time of the experiments.
All animal experimental procedures were performed in accordance with the Guide for the Care and Use of Laboratory Animals (Institute of Laboratory Animal Resources).
Oral Mucositis Experimental Models
Hamsters were intraperitoneally administered 5-FU (Sigma Aldrich, MO, USA) at 80 mg/kg on days 0, 6, and 9. For the scratching group, hamsters were anesthetized with 2,2,2-tribromoethanol (150 mg/kg, Sigma Aldrich) by intraperitoneal injection, and then a 1-cm2 area of each cheek pouch was scratched with the tip of an 18-gage needle (Koreavaccine, Ansan, Republic of Korea) at an equal force and depth on days 1, 2, and 7. The scratching was blinded to the treatment groups (n = 3 per group). PLAG (Enzychem Lifesciences Co., Daejeon, Republic of Korea) was then administered orally at 250 mg/kg/day.
Mice were intraperitoneally administered 5-FU (Sigma Aldrich) at 100 mg/kg on day 0. For the scratching group, mice were anesthetized with 2,2,2-tribromoethanol (150 mg/kg, Sigma Aldrich) by intraperitoneal injection, and then a 0.5-cm 2 area of the tongue was scratched using the tip of an 18-gage needle (Koreavaccine) at an equal force and depth on days 7 and 8. The scratching was blinded to the treatment groups (n = 5 per group). PLAG (Enzychem Lifesciences) was then administered orally at 250 mg/kg/day beginning on day 7.
Assessment of Mucositis
Mucositis was scored by a blinded investigator using published criteria based on the following parameters: erythema, vasodilation, erosion, bleeding, fibrosis, and ulcers. Scores were assigned as follows: 1, presence of erythema but no evidence of erosion in the cheek pouch; 2, severe erythema, vasodilation, and surface erosion; 3, formation of ulcers on one or more faces of the mucosa, but not affecting more than 25% of the surface area of the cheek pouch, as well as severe erythema and vasodilation; 4, cumulative formation of ulcers over about 50% of the surface area of the cheek pouch; 5, complete ulceration of the cheek pouch mucosa, in which the fibrosis makes oral mucosa exposure difficult.
Hematoxylin and Eosin Staining
Samples of cheek pouch tissue were obtained on day 13, fixed in 10% buffered formalin for 24 h, embedded in paraffin, and sectioned at a thickness of 4 μm. The tissue sections were stained with hematoxylin and eosin (H&E), and the degree of inflammatory cell infiltration was assessed. Sections were observed under a light microscope (Olympus, Tokyo, Japan).
Peripheral Blood Analysis
For hamsters, whole blood was collected from the orbital sinuses using capillary tubes (Kimble Chase Life Science and Research Products LLC, FL, USA) and collection tubes (Greiner Bio-One International, Frickenhausen, Germany) containing K3E-K3EDTA. Blood cells were enumerated by CBC analysis.

Figure 1. Experimental groups: control, 5-FU, scratching/5-FU, and scratching/5-FU/PLAG (a). 5-FU was administered intraperitoneally at 80 mg/kg on days 0, 6, and 9. PLAG was administered orally at 250 mg/kg/day. For the scratching model, hamsters were anesthetized and the cheek pouches were scratched with an equal force and depth on days 1, 2, and 7. The cheek pouches were isolated on day 13 (b). The volume (c) and weight (d) of the isolated cheek pouches were measured. Mucositis scores were determined on days 2, 7, 11, and 13 (e). ▬, control; ▲, 80 mg/kg 5-FU; ✕, scratching with 80 mg/kg 5-FU; •, scratching with 80 mg/kg 5-FU and 250 mg/kg PLAG. Average values are shown, and the bars represent error ranges. Statistical significance was assessed using Student's t-test.
ELISA
The concentrations of interleukin (IL)-6, tumor necrosis factor (TNF), and IL-1β in the supernatant of homogenized tongue, and serum samples were measured using a mouse IL-6 ELISA kit, mouse TNF ELISA kit, and mouse IL-1β ELISA kit (all from BD Bioscience, NJ, USA), respectively, according to the manufacturer's instructions. The cytokine levels were estimated by interpolation from a standard curve generated using an ELISA reader (Molecular Devices) at a wavelength of 450 nm.
Statistical Analyses
Statistical analyses were performed using paired Student's t-tests. Differences were considered statistically significant at p < 0.001, p < 0.01, and p < 0.05.
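As a minimal illustration of this analysis, the snippet below runs a paired Student's t-test with SciPy on invented mucositis scores; the values and group sizes are hypothetical and serve only to show the test being applied.

```python
from scipy import stats

# Hypothetical day-13 mucositis scores for matched animals.
five_fu_scratch = [4, 5, 4]          # 5-FU/scratching group
plag_treated = [2, 2, 3]             # 5-FU/scratching + PLAG group

t_stat, p_value = stats.ttest_rel(five_fu_scratch, plag_treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```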
PLAG Alleviated the Symptoms of Severe Mucositis in the Hamster Model
To investigate whether PLAG has a therapeutic effect on 5-FU-induced mucositis in the hamster model, 5-FU was intraperitoneally administered on days 0, 6, and 9 (Figure 1A). The cheek pouches were scratched equally on days 1, 2, and 7, and PLAG was administered orally every day thereafter. To quantify the level of oral mucositis, the cheek pouches were isolated on day 13. The 5-FU/scratching group exhibited severe ulceration, fibrosis, and festering wounds (Figure 1B). PLAG treatment significantly decreased ulcer formation and diminished the degree of wound festering. No fibrosis was observed in the cheek pouches of PLAG-treated hamsters. Inflammation of cheek pouches induced by 5-FU/scratching was ameliorated by PLAG administration, which significantly decreased the volume and weight of isolated pouches (Figures 1C,D). PLAG administration also had an effect on the mucositis score (Figure 1E). Compared with the 5-FU/scratching group, PLAG-treated hamsters exhibited a lower mucositis score (p < 0.01). These data indicate that PLAG has a significant therapeutic effect against chemotherapy-induced oral mucositis.
PLAG Administration Reversed Weight Loss in the Hamster Mucositis Model
Cancer patients usually experience a decline in body weight and difficulty in eating. This condition, known as cachexia, is associated with a poor therapeutic prognosis (17). To characterize the effect of PLAG on mucositis-associated cachexia, the body weight of hamsters was monitored. PLAG administration had a significant effect in preventing weight loss associated with 5-FU/scratching-induced mucositis (Figure 2A). By day 13, hamsters in the 5-FU/scratching group exhibited a 15% decline in body weight compared to controls (Figure 2B).
Hamsters subjected to 5-FU/scratching and PLAG administration lost only 5% of their body weight. These results suggest that chemotherapy-associated cachexia may be prevented by PLAG administration.
PLAG Administration Suppressed Mucositis-Induced Inflammation in Hamsters
For histopathologic analysis, isolated hamster cheek pouches were stained with H&E. Hamsters in the 5-FU/scratching group exhibited dermal necrosis and severe fibrosis, with an increase in the thickness of the mucosal epidermis (Figure 3).
Hamsters subjected to 5-FU/scratching and PLAG administration exhibited newly formed epidermis and blood vessels in the cheek pouches, indicative of recovery. These data also suggest that PLAG is useful for treating or preventing chemotherapy-induced oral mucositis. Circulating blood cells were counted on days 7, 9, and 11. The neutrophil count was significantly reduced by 5-FU treatment and steadily declined in hamsters subjected to 5-FU/scratching (Figure 4A). Hamsters that received PLAG with 5-FU/scratching had a greater number of neutrophils than hamsters treated with 5-FU alone or 5-FU/scratching alone. Moreover, the number of circulating neutrophils in 5-FU/scratching hamsters treated with PLAG was similar to that of control animals (Figure 4B).
These findings indicate that PLAG plays a role in reversing the decline in the number of circulating neutrophils associated with chemotherapy-induced neutropenia.
PLAG Administration Reversed Weight Loss and Cured Ulceration in the Mouse Mucositis Model
Treatment of hamsters subjected to 5-FU/scratching with PLAG prevented the progression of oral mucositis. To investigate the therapeutic effect of PLAG in mice, the animals were administered 5-FU on day 0, and after 7 days, PLAG was orally administered with scratching of the tongue (Figure 5A). After scratching, the mice exhibited a rapid and significant loss of body weight (Figure 5B). However, in mice also treated with PLAG, the body weight returned to the control level by day 15 (Figure 5C). Figure 5D shows that 5-FU/scratching resulted in severe ulceration of the tongue. In PLAG-treated mice, however, the scratching did not lead to ulceration. These data suggest that PLAG is a useful therapeutic agent for treating side effects of chemotherapy such as mucositis and cachexia.
PLAG Administration Diminished the Mucositis-Induced Inflammatory Response in the Tongue and Serum of Mice
To determine whether PLAG has an anti-inflammatory effect in mice with oral mucositis, levels of inflammatory cytokines were measured in the supernatant of homogenized tongue tissue and in serum. Levels of the representative inflammatory cytokine IL-6 increased as a result of 5-FU/scratching treatment but were significantly reduced in both the tongue (Figure 6A) and serum (Figure 6B) following PLAG administration. PLAG administration also resulted in decreases in the levels of TNF in the serum and IL-1β in the tongue. These data indicate that PLAG plays an anti-inflammatory role and could therefore be a useful therapeutic agent for treating mucositis-associated inflammation.
Discussion
We previously reported that PLAG administration efficiently blocks neutrophil extravasation and increases the number of circulating neutrophils when used with pegfilgrastim during gemcitabine treatment (14). These data suggest that PLAG exerts a therapeutic effect with pegfilgrastim on chemotherapy-induced neutropenia by regulating neutrophil transmigration. In cancer patients, inflammation is regarded as a critical factor indicative of tumor malignancy or chronic inflammation-induced sepsis (18,19). To characterize the effect of PLAG on the severity of inflammation associated with chemotherapy, mucositis indexes were evaluated in the induced oral mucositis hamster and mouse models. As shown in Figure 1, mucositis scores and the volume and weight of cheek pouches were increased by 5-FU/scratching treatment and significantly decreased following PLAG administration. This indicates that neutrophil activation for transmigration is induced by 5-FU/scratching treatment and could result in increased neutrophil infiltration with subsequent formation of ulcers (i.e., mucositis). Moreover, the histochemical staining data showed that the dermal necrosis induced by 5-FU/scratching was alleviated by PLAG administration (Figure 3). PLAG, therefore, appears to block neutrophil recruitment and minimize the mucositis burden by preventing excessive neutrophil transmigration into the scratched tissues.
Cachexia is a complication associated with both cancer chemotherapy and mucositis. Tumor-induced inflammatory molecules can affect host metabolism, leading to apoptosis (20). Although the incidence of cachexia is high in cancer patients, there is no specific medicine available for treating the condition. PLAG has the potential not only to prevent the weight loss induced by mucositis but also to promote the recovery of lost body weight (Figures 2 and 5). Both the viability and ease of movement of PLAG-treated mice were much better than those of mice subjected to 5-FU/scratching alone.
Effect of Gait Speed on Gait Rhythmicity in Parkinson's Disease: Variability of Stride Time and Swing Time Respond Differently
Background: The ability to maintain a steady gait rhythm is impaired in patients with Parkinson's disease (PD). This aspect of locomotor dyscontrol, which likely reflects impaired automaticity in PD, can be quantified by measuring the stride-to-stride variability of gait timing. Previous work has shown an increase in both the variability of the stride time and swing time in PD, but the origins of these changes are not fully understood. Patients with PD also generally walk with a reduced gait speed, a potential confounder of the observed changes in variability. The purpose of the present study was to examine the relationship between walking speed and gait variability.
Introduction
Falls are one of the most serious complications of the gait disturbance in Parkinson's disease (PD) [1][2][3][4][5][6][7]. Beyond the acute trauma that they may cause, falls may lead to fear of falling, self-imposed restrictions in activities of daily living, and nursing home admission [1][2][3][4][5][6]. While traditional measures of gait and postural control do not adequately predict falls in PD [8], increased stride variability has been associated with an increased fall risk in older adults in general, as well as in patients with PD [9][10][11][12][13], suggesting that this aspect of gait may have clinical utility as an aid in fall risk assessment. More specifically, as a result of PD pathology, the ability to maintain a steady gait rhythm and a stable, steady walking pattern with minimal strideto-stride changes is impaired in PD, i.e., stride variability is increased in PD [11,[14][15][16][17][18][19][20].
The mechanisms underlying the increased stride variability in PD have not been widely investigated. The increased stride variability and impaired rhythmicity of gait in PD may reflect reduced automaticity and damaged locomotor synergies [15,16,21]. Indeed, external pacing and cues decrease stride variability in PD [20,22,23]. Levodopa therapy also reduces variability in PD, demonstrating the role dopaminergic pathways play in the impaired gait rhythmicity in PD [11]. Nonetheless, another possible explanation for the increased gait variability observed in PD is that it is simply a byproduct of bradykinesia and a lower gait speed, and not intrinsic to the disease. In addition to their effect on variability, levodopa and external cues also may increase gait speed in PD [11,24,25], and several studies suggest that stride variability increases if gait speed is lower than an optimal value [26,27]. Conversely, other reports indicate that walking speed and stride variability may be independent. No significant increase in stride time variability was observed in healthy elderly subjects even though they walked significantly slower than young adults [28,29]. Maki demonstrated that among older adults, variability was related to fall risk, while walking speed was related to fear of falling [13]. Miller et al. observed a significant increase in gait speed, but no significant changes in variability measures after rhythmic training of PD subjects [30]. Hausdorff et al. found that gait variability measures were significantly increased in patients with Huntington's disease and patients with PD, compared to controls, whereas gait speed was significantly lower in PD, but not in Huntington's patients [16]. Thus, further work is needed to better understand the relationship between gait speed and stride variability in PD.
Previously, we described the effects of a treadmill on the gait of patients with PD at their comfortable walking speed [22]. Here we report on the influence of different walking speeds on the stride-to-stride variations in gait, specifically, stride time variability and swing time variability. The influence of speed was examined both in subjects with PD and in healthy controls to determine the degree to which any observed effects were specific to PD. We evaluated the effects of speed by studying subjects on a treadmill, where the speed could easily be fixed. In addition, subjects were tested while walking on level ground, both with and without the use of a walking aid.
Subjects
Thirty-six patients with idiopathic PD, as defined by the UK Brain Bank criteria [31], were recruited from the outpatient clinic of the Movement Disorders Unit at the Tel-Aviv Sourasky Medical Center. Patients were invited to participate if their disease stage was between 2 and 2.5 on the Hoehn and Yahr scale [32], if they did not experience motor response fluctuations, if they were able to ambulate independently, and if they did not use a treadmill for at least six months prior to the study. The PD patients were compared to thirty healthy control subjects of similar age who were recruited from the local community. Both PD and control subjects were excluded if they had clinically significant musculo-skeletal disease, cardio-vascular disease, respiratory disease, uncontrolled hypertension, diabetes or symptomatic peripheral vascular disease, other neurological disease (or PD in the case of the controls), dementia according to DSM IV criteria and MMSE, major depression according to DSM IV criteria, or uncorrected visual disturbances. The study was approved by the Human Studies Committee of Tel-Aviv Sourasky Medical Center. All subjects gave their written informed consent according to the declaration of Helsinki prior to entering the study.
The study population was characterized with respect to age, gender, height, weight, Mini-Mental State Exam (MMSE) scores [33] (a gross measure of cognitive function widely used to screen for dementia), and the Timed Up and Go test (TUG) (a gross measure of balance and lower extremity function) [34][35][36][37]. Subjects were also asked about their history of falls in the past year. The Unified Parkinson's Disease Rating Scale [38] (UPDRS) was used to quantify disease severity and extra-pyramidal signs in the subjects with PD.
Protocol
After providing informed consent, subjects were familiarized with walking on a 35 meter walkway and walking on a motorized treadmill (Woodway LOKO System®, Germany). Subjects were tested four times on the walkway and four times on the treadmill at different speeds. Each test lasted two minutes. On level ground (the walkway), subjects were tested under four conditions in the following order: a) at their comfortable walking speed (CWS); b) at a self-selected slow speed, i.e., subjects were asked to walk at about 20% less than their CWS; c) at their self-selected CWS while using a walker (four rolling wheels, Provo Rolator, Premis Inc., Holland); and d) at a self-selected slow speed while using the walker (i.e., at 20% less than the CWS with the walker). On the treadmill, subjects were studied at four treadmill speeds: 1) the CWS observed when using a walker on the level walkway; 2) 80% of this CWS; 3) 90% of this CWS; and 4) 110% of this CWS. The order of the walking conditions on the treadmill was randomized.
Average gait speed on level ground was determined using a stopwatch by measuring the average time the subject walked the middle 10 meters of the 35 meter walkway during the two minutes of testing. Under all walking conditions, subjects walked with a safety harness around the waist that was attached only during the treadmill walking. Subjects walked on the treadmill with full weight bearing. Because the subjects walked while holding on to the handrails (of the walker or treadmill), the gait speed under condition (1), i.e., comfortable walking on the treadmill, was set to the gait speed under condition (c).
Initially, subjects walked up and down the 35 meters walkway to become familiar with the testing conditions. Before testing on the treadmill, subjects were given time to walk on the treadmill. This familiarization period was completed when the subject reported feeling comfortable walking on the treadmill at his or her preferred gait speed. Afterwards, subjects were given 5 minutes of rest to minimize any fatigue effects. Measurements on the treadmill were taken after about 30 seconds of gradually increasing the treadmill speed to the desired speed i.e., data collection was started only after subjects had reached a steady pace.
Apparatus
A previously described computerized force-sensitive system was used to quantify gait and stride-to-stride variability [22,39]. The system measures the forces underneath the foot as a function of time. The system consists of a pair of shoes and a recording unit. Each shoe contains 8 load sensors that cover the surface of the sole and measure the vertical forces under the foot. The recording unit (19 × 14 × 4.5 cm; 1.5 kg) is carried on the waist. Plantar pressures under each foot are recorded at a rate of 100 Hz. Measurements are stored in a memory card during the walk and, after the walk, are transferred to a personal computer for further analysis. The following gait parameters were determined from the force record using previously described methods [9-11,17,22]: average stride time, swing time (%), stride time variability, and swing time (%) variability. Average stride length was calculated by multiplying the average gait speed by the average stride time. Variability measures were quantified using the coefficient of variation, e.g., stride time variability = 100 × (standard deviation of stride time)/(average stride time). Because values between the left and right feet were significantly correlated, we report here only the values based on the right foot.
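As a worked example of the variability measure defined above, the sketch below computes the coefficient of variation, 100 × SD/mean, from a synthetic series of per-stride timings; the input values are invented for illustration.

```python
import numpy as np

def coefficient_of_variation(series):
    """CV (%) = 100 * standard deviation / mean."""
    series = np.asarray(series, dtype=float)
    return 100.0 * series.std(ddof=1) / series.mean()

# Synthetic right-foot stride times (s) and swing times (% of stride)
# over a 2-minute walk.
rng = np.random.default_rng(3)
stride_times = 1.10 + 0.02 * rng.standard_normal(100)
swing_pct = 38.0 + 0.8 * rng.standard_normal(100)

print(f"stride time variability: {coefficient_of_variation(stride_times):.2f}%")
print(f"swing time variability:  {coefficient_of_variation(swing_pct):.2f}%")
```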
Statistical Analysis
Descriptive statistics are reported as mean ± SD. We used Student's t and Chi-square tests to compare the PD and control subjects with respect to different background characteristics (e.g., age, gender). To evaluate the effect of speed on gait parameters and to compare the groups, we used mixed effects models for repeated measures. For each gait parameter, a separate model was applied. The dependent variable was the gait parameter, and the independent variables were the group (PD patients or controls), the walking condition (e.g., treadmill or walker), walking speed, and the interaction term group × walking condition × walking speed. P values reported are based on two-sided comparisons. A p-value ≤ 0.05 was considered statistically significant. All statistical analyses were performed using SPSS 11.5 and SAS 8.2 (Proc Mixed).
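A present-day equivalent of this analysis (which the authors ran in SPSS and SAS Proc Mixed) might look like the sketch below, using the statsmodels formula interface with a random intercept per subject. The data frame, column names, and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per subject x condition x speed.
rng = np.random.default_rng(4)
n = 66 * 8
df = pd.DataFrame({
    "subject": np.repeat(np.arange(66), 8),
    "group": np.repeat(np.where(np.arange(66) < 36, "PD", "control"), 8),
    "condition": np.tile(np.repeat(["ground", "treadmill"], 4), 66),
    "speed": np.tile([0.8, 0.9, 1.0, 1.1], 132),
})
df["stride_time_cv"] = 2.0 + 0.5 * (df.group == "PD") - 0.8 * df.speed \
    + 0.3 * rng.standard_normal(n)

# Random intercept for subject; fixed effects mirror the paper's model.
model = smf.mixedlm("stride_time_cv ~ group * condition * speed",
                    df, groups=df["subject"])
print(model.fit().summary())
```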
Subject Characteristics
Demographic, anthropometric, and clinical characteristics of the patient and control groups are summarized in Table 1. Both groups were similar with respect to age, gender, height, weight, and the MMSE. Among the PD subjects, 63.9% were men; 60% of the controls were men (p = 0.746). As expected, subjects with PD took longer to perform the Timed Up and Go test. In terms of PD characteristics, the mean Hoehn and Yahr stage of the patients was 2.1 ± 0.2. The average score on the UPDRS (total) was 36.1 ± 11.5, and scores on Part I (mental), Part II (activities of daily living), and Part III (motor) were 2.2 ± 1.5, 10.5 ± 4.2, and 23.4 ± 7.4, respectively. On level ground, while using the walker, patients with PD walked more slowly and with increased variability of the stride time and swing time, compared to controls (see Table 1). Table 2 summarizes the effects of walking at a self-selected slow speed on gait on level ground. When asked to walk at a slow speed, the patients and the controls significantly reduced their gait speed (p < 0.001), by 17% and 15% when walking without a walker, respectively, and by 16% and 17% when walking with a walker, respectively. At the lower gait speed, both in the patients with PD and in the controls, the average stride length was significantly reduced and the average stride time and stride time variability were significantly increased. In contrast, swing time variability was not significantly changed when subjects walked at slower gait speeds. For all measures, among the patients with PD, the changes in gait that were made in response to the slower walking speed paralleled the changes made in the control subjects (i.e., there were no significant Group × Walking Condition × Speed interactions on level ground, p = 0.092 for stride time variability and p > 0.445 for all other measures). Table 3 summarizes the effects of treadmill speed on gait. On the treadmill, the effects were generally similar to those observed on level ground. Both in the patients with PD and in the controls, the average stride length and the average swing time were significantly reduced at the slowest treadmill speed (80% of CWS) and increased at 110% of CWS. Average stride time was increased at the slowest treadmill speed. For all gait measures, the effects of the different walking speeds on the treadmill were similar in the patients with PD and the control subjects (there was no significant Group × Speed interaction, p > 0.172). As can be discerned from the examples shown in Figure 1, all gait measures responded to the changes in speed in a more or less parallel fashion in the two groups. In both groups, there was a significant linear relationship between gait speed and average stride time (p < 0.0001), stride time variability (p = 0.0002), average swing time (p < 0.0001), and stride length (p < 0.0001). Note that while a significant relationship existed between speed and other measures, the changes with speed were, nonetheless, relatively small (see Table 3 and Figure 1). In both groups, swing time variability was not related to gait speed (p > 0.451).
Discussion
Consistent with previous studies, we find a reduced stride length and average swing time, and an increased stride time variability and swing time variability in patients with PD [11,[14][15][16][17][18][19][20]. The key findings of the present study are the relationships between gait speed and these measures. Stride length, stride time, swing time, and stride time variability were related to gait speed, both on level ground and on the treadmill, most notably at the slowest speeds, while swing time variability was independent of gait speed. Similar relationships were observed in the patients with PD and in the controls.
Yamasaki et al. described a U-shaped relationship between stride length variability and gait speed when healthy subjects walked on a treadmill [26]. Minimum values were obtained at the CWS and increased when subjects walked slower or faster than the CWS. Similar U-shaped relationships in stride time variability and stride length variability have also been reported by others [27,40,41]. Yamasaki et al. suggested that minimal variability of stride length occurs at the CWS because, mechanically, the most efficient gait occurs at this speed and metabolic energy expenditures are at a minimum. Studies of mechanical and energetic expenditures on the treadmill support this explanation [42,43]. In the present study, we observed a linear relationship between gait speed and stride time variability and not a U-shaped relationship. The range of walking speeds tested may explain this apparent contradiction between previous studies. The linear trend that we observed for stride time variability may reflect one arm of the U-shape. Differences in study populations may also play a role here. Most of the previous investigations that examined the relationship between variability and gait speed studied healthy young adults. The present study focused on patients with PD and older adults. Mechanical and energy expenditure optimizations may be affected by aging and disease [44]. Interestingly, in a study of young and older adults, Grabiner et al. [45] reported that gait speed did not affect the variability of walking velocity, stride length, or stride time. To our knowledge, the present study is the first to examine the influence of speed on swing time variability. If the present results are confirmed, then it appears as if swing time variability may be used as a speed-independent marker of steadiness and fall risk. Nonetheless, future studies should evaluate the relationship between variability and gait speed over a wider range of speeds and perhaps also in young and older adults.
In previous studies that quantified stride time variability and swing time variability, these two measures were typically affected by disease and aging to similar degrees [9,16,46]. While both measures were different in PD and controls, walking speed affected stride time variability, but not swing time (%) variability in the present study. More than 20 years ago, Gabell and Nayak speculated about the differences between these two measures of variability [28]. They suggested that stride time variability is determined predominantly by the gait-patterning mechanism (repeated sequential contraction and relaxation of muscle groups resulting in walking), whereas swing time (double support time) variability is determined predominantly by balance-control mechanisms. Maybe because stride time variability reflects automatic rhythmic stepping mechanisms, it is more sensitive to different rhythmic rates, and hence walking speeds. Other studies have also observed that measures of gait variability may, at times, show independent behavior [45,47]. Additional biomechanical studies are needed to better understand the differences between stride time variability and swing time variability and the factors that contribute to each.
While more studies are needed to further clarify the relationship between gait speed and variability, the present findings support two conclusions. First, dysrhythmicity in gait in PD is caused by disease-related pathology. Stride time variability is influenced to a small degree by gait speed, but a close look at Table 3 suggests that the increased variability in PD is not simply the result of a reduced walking speed. The increased swing time variability in PD is apparently independent of gait speed. Furthermore, even when patients with PD walk at the same speed as controls (i.e., 90% of CWS in controls ≈ 100% of CWS in PD), swing time variability is increased in PD. Second, when studying gait variability, one should try to control for and take into account gait speed, perhaps by dictating the gait speed with a treadmill. When this is not possible, study of swing time variability may provide a marker of dysrhythmicity and instability that is independent of gait speed.
Conflict of interest statement
The author(s) declare that they have no competing interests.
Authors' contributions
SFT, NG, and JMH designed the study. SFT and TH participated in data collection. CP, JMH and LG performed the data analysis. SFT and JMH drafted the manuscript. All authors helped with the interpretation of the results, reviewed the manuscript, and participated in the editing of the final version of the manuscript.
Figure 1. Stride length, stride time variability, and swing time variability as measured at four different gait speeds on the treadmill. There were small but significant associations between gait speed and stride length and between gait speed and stride time variability, but swing time variability was not related to gait speed. CWS: comfortable walking speed. Values shown are based on mixed model estimates.
Progress Toward Hepatitis B Control and Elimination of Mother-to-Child Transmission of Hepatitis B Virus — Western Pacific Region, 2005–2017
Hepatitis B vaccine (HepB), which has been available since 1982, provides lifelong protection against hepatitis B virus (HBV) infection and the associated 20%-30% increased lifetime risk for developing cirrhosis or hepatocellular carcinoma among >95% of vaccine recipients (1). Before HepB introduction into national childhood immunization schedules, the estimated hepatitis B surface antigen (HBsAg) prevalence in the World Health Organization (WHO) Western Pacific Region (WPR)* was >8% in 1990 (2). In 2005, the WPR was the first WHO region to establish a hepatitis B control goal, with an initial target of reducing HBsAg prevalence to <2% among children aged 5 years by 2012. In 2013, the WPR set more stringent control targets to achieve by 2017, including reducing HBsAg prevalence to <1% in children aged 5 years and increasing national coverage with both timely HepB birth dose (HepB-BD) (defined as administration within 24 hours of birth) and the third HepB dose (HepB3) to ≥95% (3). All WPR countries/areas endorsed the Regional Action Plan for Viral Hepatitis in the Western Pacific Region 2016-2020 in 2015 (4) and the Regional Framework for the Triple Elimination of Mother-to-Child Transmission of human immunodeficiency virus (HIV), Hepatitis B and Syphilis in Asia and the Pacific 2018-2030 (triple elimination framework) in 2017 (5). These regional targets and strategies are aligned with program targets established by the WHO Global Health Sector Strategy on Viral Hepatitis 2016-2021 that aim to reduce HBsAg prevalence among children aged 5 years to ≤1% by 2020 and to ≤0.1% by 2030 (6). This report describes progress made to achieve hepatitis B control in the WPR and the steps taken to eliminate mother-to-child transmission (MTCT) of HBV during 2005-2017. During this period, regional timely HepB-BD and HepB3 coverage increased from 63% to 85% and from 76% to 93%, respectively. As of December 2017, 15 (42%) countries/areas achieved ≥95% timely HepB-BD coverage; 18 (50%) reached ≥95% HepB3 coverage; and 19 (53%) countries/areas as well as the region as a whole were verified to have achieved the regional and global target of <1% HBsAg prevalence among children aged 5 years. Continued implementation of proven vaccination strategies will be needed to make further progress toward WPR hepatitis B control targets. In addition to high HepB-BD and HepB3 coverage, enhanced implementation of complementary hepatitis B prevention services through the triple elimination framework, including routine HBsAg testing of pregnant women, timely administration of hepatitis B immunoglobulin to exposed newborns, and antiviral treatment of mothers with high viral loads, will be needed to achieve the global hepatitis B elimination target by 2030.
Immunization Activities
HepB-BD and HepB3 coverage data are reported annually to WHO and the United Nations Children's Fund (UNICEF) from 36 of the 37 WPR countries and areas.† WHO and UNICEF estimate vaccination coverage for 27 countries in the region, using annual government-reported survey and administrative data; for the remaining areas and territories, government-reported coverage data are used. By 2005, all countries/areas in the region had introduced at least three HepB doses into national immunization schedules. By 2012, 34 (94%) of 36 countries/areas provided universal HepB-BD (Table 1); since 1987, Japan and New Zealand have provided selective administration of timely HepB-BD to infants born to mothers who are HBsAg-positive or whose HBsAg status is unknown. During 2005-2017, regional HepB-BD coverage increased from 63% to 85%, and HepB3 coverage increased from 76% to 93%. In 2017, 15 (42%) of 36 countries/areas achieved ≥95% timely HepB-BD coverage, and 18 (50%) countries/areas reached ≥95% HepB3 coverage.
HBsAg Seroprevalence Surveys
Surveillance for acute hepatitis B infection and its sequelae cannot fully capture the population prevalence of HBV infections, because most infants and children remain asymptomatic during acute infection. Nationally representative HBsAg seroprevalence surveys allow countries to assess the prevalence of chronic HBV infection among children born after the introduction of hepatitis B vaccine in the national immunization schedule and track progress toward achievement of regional hepatitis B control targets. In 1990, before HepB was introduced into childhood immunization schedules in most WPR countries/areas, HBsAg seroprevalence among children aged 5 years was estimated to be >8% in 22 (61%) of 36 countries/areas (2), a level of chronic infection considered to be highly endemic (1). As of December 2017, all countries/areas in the WPR had completed serosurveys except for Nauru and New Caledonia; seven countries/areas conducted serosurveys before 2009 (Table 2). By December 2017, HBsAg seroprevalence among children aged 5 years declined to <1% in 25 (69%) countries/areas based on national serosurveys (Table 2). Notably, China, which has the largest birth cohort in the region, was successful in decreasing the prevalence of HBV infection among children to <0.5%. In Laos, Papua New Guinea, Vietnam, and several of the Pacific Island countries, estimated HBsAg prevalence among children aged 5 years still exceeds 2%.
Regional Verification of Hepatitis B Control Goal
In 2007, the WPR established the Hepatitis B Immunization Expert Resource Panel that independently advises on the status of and strategies for achieving the regional hepatitis B control goal.§ The Expert Resource Panel is an interdisciplinary team of 10-15 experts recognized in the field of hepatitis B, and members have expertise in immunization, epidemiology, virology, and hepatology. The Expert Resource Panel convenes verification panels to assess whether countries have met established regional control targets. The verification process includes review of data from nationally representative serosurveys conducted among children aged 5 years, who have already passed through the period of highest risk for perinatal and horizontal transmission of HBV. Panel members also review national and subnational HepB coverage data and supporting evidence of countries' progress in monitoring and targeting high-risk populations including HBsAg-positive mothers and their exposed newborns. As of December 2017, 19 (53%) of the 36 countries/areas were verified by the Expert Resource Panel as having met the regional <1% prevalence target based on serosurvey and vaccination coverage data (Table 2).
Progress Toward Elimination of Mother-to-Child Transmission of HBV
In 2017, the WPR established a goal for elimination of mother-to-child transmission (MTCT) of HBV by 2030, defined as achievement of a 90% reduction in new cases of chronic HBV infection, equivalent to 0.1% HBsAg seroprevalence among children aged 5 years, as part of the triple elimination framework. Key components of the WPR's strategy to achieve elimination of MTCT of HBV include achieving ≥95% HepB-BD and HepB3 coverage, screening ≥95% of pregnant women for chronic HBV infection, administering hepatitis B immunoglobulin (HBIG) to infants born to HBsAg-positive mothers, and treating pregnant women eligible for treatment with antiviral drugs (5). As of December 2017, 19 (53%) of 36 countries/areas had developed national plans for viral hepatitis prevention; 26 (70%) countries/areas reported having established integrated routine antenatal screening programs for HIV and syphilis (data not shown); and 20 (56%) countries/areas had a national policy for routine antenatal HBsAg testing (Table 3). However, only two (6%) countries reported testing ≥95% of pregnant women for HBsAg, and eight countries (22%) reported providing antivirals to infected mothers. In addition to providing timely HepB-BD and HepB3 vaccination for HBV-exposed infants, 10 (28%) countries/areas administered HBIG to newborns of HBsAg-positive mothers, and seven (19%) provided postvaccination serologic testing to determine the infection status of exposed infants.
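For clarity on the arithmetic behind this target, a 90% reduction measured against the 1% regional control benchmark yields the stated 0.1% elimination threshold. The short sketch below makes the relationship explicit; note that the 1% starting point is inferred from the regional control target, not a figure given in this report.

```python
# Illustrative check only: the 1% baseline is assumed from the regional
# 2017 control target; the report itself states only the equivalence.
baseline_prevalence = 0.01          # 1% HBsAg seroprevalence (control target)
reduction = 0.90                    # 90% reduction in new chronic infections
elimination_target = baseline_prevalence * (1 - reduction)
print(f"{elimination_target:.1%}")  # -> 0.1%
```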
Discussion
During 2005-2017, the WPR achieved remarkable progress toward the regional hepatitis B control goal and elimination of MTCT of HBV. HepB has been introduced in all countries/areas; almost all countries/areas provide universal HepB-BD; coverage with HepB-BD and HepB3 increased by 35% and 22%, respectively; and 19 (53%) countries/areas were verified to have achieved the 2017 regional control target by December 2017. This success was corroborated by a 2016 disease modeling study that estimated regional prevalence to be 0.93% among children born in 2012.¶ This model also showed that immunization programs in the region prevented more than 37 million cases of chronic HBV infection, and averted more than seven million deaths related to hepatitis B that would have occurred in the lifetime of children born between 1990 and 2014 had hepatitis B vaccination programs not been established (2).

¶ Cambodia, Commonwealth of the Northern Mariana Islands, Federated States of Micronesia, French Polynesia, Guam, Niue, and Tokelau presented serosurvey evidence of reaching the <1% HBsAg prevalence target among children aged 5 years after 2016, suggesting that regional prevalence might have further declined.

Interventions implemented to increase HepB-BD vaccination coverage included promoting community awareness about the need for HepB vaccination, especially the administration of a timely HepB-BD; building capacity and knowledge of health care staff to administer a timely birth dose; using HepB-BD outside the cold chain; and promoting institutional deliveries (7). WPR countries/areas have extensive experience using the highly heat-stable monovalent HepB-BD outside the cold chain in areas that lack a reliable cold chain or have high home birth rates with limited health facility access. This use of the HepB-BD outside the commonly recommended storage temperatures of 35°F-46°F (2°C-8°C) for limited periods under monitored and controlled conditions has been demonstrated to be safe and effective, with WHO suggesting outside-the-cold-chain use in settings where HepB-BD administration is restricted by access to cold storage (1). Use of HepB-BD outside the cold chain has increased timely HepB-BD administration by 27% in Laos, 70% in China, and 150% in the Solomon Islands (7,8). Cambodia and China have national policies that encourage pregnant women to deliver in health facilities to reduce maternal and neonatal mortality by ensuring that mothers and newborns are examined by health care professionals within 24 hours of delivery. Institutional delivery also facilitates coordination of postnatal care services between maternal, neonatal, and child health programs and national immunization programs that can improve timely HepB-BD coverage (7,9).

[Table 2 footnotes] Abbreviations: CI = confidence interval; HBsAg = hepatitis B surface antigen; ND = not done; NS = not submitted to the regional verification commission; SAR = special autonomous region; UR = under review by the regional verification commission. * Verification is done by a regional commission of experts from the Hepatitis B Immunization Expert Resource Panel that determines if the country or area has reached the target of <1% HBsAg seroprevalence among children aged 5 years. † Preliminary data. § By December 2017, Cambodia and the Federated States of Micronesia had conducted nationally representative serosurveys and were subsequently verified as meeting the <1% HBsAg seroprevalence target in 2018. ¶ Fiji completed a subnational hepatitis B serosurvey in 2008 and is planning its first nationally representative survey for 2019. ** The Philippines conducted a nationally representative serosurvey in 2018 with preliminary results indicating a 0.7% HBsAg prevalence among children aged 5 years.
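A note on the "increased by 35% and 22%" figures above: they read most naturally as relative increases over the 2005 baselines (63% to 85% and 76% to 93%) rather than percentage-point gains, as this hedged check suggests.

```python
# Hedged reading: (85 - 63) / 63 ~ 35% and (93 - 76) / 76 ~ 22%,
# i.e., relative increases, not percentage-point differences.
def relative_increase(before, after):
    return (after - before) / before

print(f"timely HepB-BD: {relative_increase(63, 85):.0%}")  # -> 35%
print(f"HepB3:          {relative_increase(76, 93):.0%}")  # -> 22%
```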
To reach the global hepatitis B elimination goal of ≤0.1% HBsAg prevalence among children aged 5 years by 2030, WPR countries/areas need to achieve elimination of MTCT of HBV, because perinatal transmission accounts for a high proportion of chronic HBV infections among children (1). The triple elimination framework provides guidance for a coordinated delivery of services for immunization, HIV, sexually transmitted infections, and reproductive, maternal, neonatal, and child health to ensure that a timely HepB-BD is administered and high HepB3 coverage is achieved. The framework also provides guidance for implementation of complementary interventions in addition to vaccination to prevent perinatal HBV transmission, including routine testing of pregnant women and timely administration of hepatitis B immunoglobulin to exposed newborns. In addition, the framework suggests possible administration of antiviral drugs to mothers with high viral loads, while awaiting global guidance on its use to prevent MTCT of HBV (5).** In Cambodia, a modeling analysis indicated that offering an integrated package of services through the triple elimination platform could reduce MTCT of HBV by 76%, from 14.1% to 3.4%; syphilis by 51%, from 9.4% to 4.6%; and HIV by 8%, from 6.6% to 6.1%. It could prevent approximately 3,200 infant HBV infections annually at a cost of US$114 per disability-adjusted life-year (10).
The WPR has significantly decreased the incidence of chronic HBV infection, with a few countries still requiring programmatic improvement in vaccination to achieve hepatitis B control. As the WPR expands implementation of interventions for elimination of MTCT of HBV, global and regional guidance is needed on 1) the use of monitoring indicators to assess the effect of these interventions on elimination of MTCT, 2) the appropriate frequency of costly serosurveys for verification of achievement of low HBsAg seroprevalence targets, and 3) the use of models to estimate infection prevalence from programmatic data to support countries in their control efforts and the elimination of MTCT verification process.
[Table 3 footnotes] Abbreviations: ANC1 = at least 1 antenatal care visit; ANC4 = at least 4 antenatal care visits; EMTCT = elimination of mother-to-child transmission; HBsAg = hepatitis B surface antigen; HBIG = hepatitis B immunoglobulin; HBV = hepatitis B virus; HepB-BD = birth dose of monovalent hepatitis B vaccine; HepB3 = third dose of a hepatitis B containing vaccine; PVST = postvaccination serological testing; SBA = skilled birth attendant; UNICEF = United Nations Children's Fund. * The Pitcairn Islands is excluded from analysis because it does not report immunization coverage. † Information collected from countries in preparation for the Midterm Review of the Regional Action Plan for Viral Hepatitis in the Western Pacific 2016-2020, December 13-14, 2018, Manila, Philippines. § UNICEF data, Monitoring the Situation of Children and Women; https://data.unicef.org/ (updated June 2018; data from 2006-2017). ¶ China, Wang et al., Bull World Health Organ 2015;93:52-6; Republic of Korea, unpublished study of the liver, 2013; Mongolia, reported at the Informal Consultation on Validation of Elimination of Mother-to-Child Transmission of HIV, Hepatitis B and Syphilis: Developing the Method for Validating Hepatitis B Elimination, February 27-28, 2018. ** WHO and UNICEF estimates for all 27 countries in the Western Pacific Region, unless otherwise specified; reported coverage for the remaining nine reporting areas and territories in the Western Pacific Region. †† Japan and New Zealand are excluded for this indicator as these countries selectively offer hepatitis B birth dose to newborns of HBsAg-positive mothers or mothers with unknown HBsAg status.
TABLE 1. Hepatitis B vaccine (HepB) schedule and estimated coverage* with a timely birth dose† and third dose of HepB, by country/area - World Health Organization (WHO) Western Pacific Region, 2005, 2012, and 2017. [Table body not extracted; columns: Country/Area; HepB schedule; Year birth dose introduced; % coverage with timely HepB-BD† and HepB3 in 2005, 2012, and 2017.]

[Table 1 footnotes] Abbreviations: HepB-BD = birth dose of monovalent hepatitis B vaccine; HepB3 = third dose of hepatitis B containing vaccine; NA = not applicable; NK = not known; NR = not reported; SAR = special autonomous region; UNICEF = United Nations Children's Fund. * WHO-UNICEF estimates except for areas and territories (American Samoa, French Polynesia, Guam, Hong Kong, Macao, New Caledonia, Commonwealth of the Northern Mariana Islands, Tokelau, and Wallis and Futuna), and where otherwise specified. † Timely hepatitis B birth dose is defined as administration of a dose of hepatitis B vaccine within 24 hours of birth. § WHO-UNICEF estimates not available; reported coverage used instead. ¶ Year of introduction of selective birth dose vaccination of newborns of mothers who are HBsAg positive or of unknown HBsAg status. ** Approximate year of birth dose introduction into the routine immunization program.
§ At the September 2018 Expert Resource Panel meeting (https://iris.wpro.who.), new interim goals were proposed: 1) to reduce HBsAg prevalence to <1% among children aged 5 years in all countries and areas by 2025; and 2) in countries and areas that already have <1% HBsAg prevalence in children aged 5 years, to further reduce HBsAg prevalence to <0.3% by 2025. These are intended to be interim targets for countries to reach the hepatitis B elimination target of <0.1% HBsAg prevalence among children aged 5 years by 2030.
|
v3-fos-license
|
2018-04-03T05:06:32.319Z
|
2012-12-01T00:00:00.000
|
41542496
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.scielo.br/j/eins/a/JBMXRqHQZMV8rxrVHypBmYD/?format=pdf&lang=en",
"pdf_hash": "b0d4abdaa40d7cb690eb94c170a8b975dd8ab193",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43446",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b0d4abdaa40d7cb690eb94c170a8b975dd8ab193",
"year": 2012
}
|
pes2o/s2orc
|
PCSK9 and its clinical importance with the new therapeutic targets against dyslipidemia
Conflict of interest: none. ABSTRACT This represents remarkable progress; since the discovery of statins, there has been no new way of significantly reducing cholesterol and the LDL fraction. It is also clear that this statin-induced decrease is related to a reduction in future cardiovascular lesions, being useful in primary and secondary prophylaxis. The authors present studies investigating the lowering of blood cholesterol by means of antibodies that inhibit the proprotein PCSK9, as well as agents that act through RNA interference. Two immediate advantages emerge: the approach is an option for patients with statin-associated myopathy, and the fact that it is injected every 15 days may contribute to better treatment adherence.
INTRODUCTION
The proprotein convertase subtilisin/kexin type 9, also known as PCSK9, is an enzyme that is encoded by the PCSK9 gene (1) in humans and has orthologous genes in virtually all species. This gene encodes a proprotein convertase of the proteinase K subfamily of the secretory subtilase family. The encoded protein is synthesized as a soluble zymogen, which undergoes intramolecular autocatalytic processing in the endoplasmic reticulum. This protein plays an essential role in the regulation of cholesterol homeostasis. PCSK9 binds to the epidermal growth factor-like repeat A (EGF-A) domain of the low-density lipoprotein receptor (LDL-R), inducing degradation of the receptor. The resulting reduction in LDL-R levels decreases the clearance of low-density lipoprotein cholesterol (LDL-c), which can lead to hypercholesterolemia (2,3).
CLINICAL RELEVANCE
Inhibiting the function of PCSK9 is being explored as a way to reduce cholesterol levels. Research has advanced in the study of antibodies against this enzyme, aiming at its inhibition. It is worth mentioning that the use of statins is well established in clinical practice. Statins interfere with cholesterol production by inhibiting the enzyme HMG-CoA reductase (4) and stimulating the production of LDL receptors; these antibodies, in turn, allow more LDL receptors to remain available (5).
The most recent phase II study was published online on October 31, 2012, in the New England Journal of Medicine, and was carried out by a team led by Dr. Eli Roth. The project involved 92 patients with LDL levels ≥100 mg/dL after receiving atorvastatin 10 mg for at least 7 weeks. They were randomized to 8 weeks of treatment in a 1:1:1 ratio, receiving atorvastatin 80 mg/day + REGN727/SAR236553; atorvastatin 10 mg/day + REGN727/SAR236553; or atorvastatin 80 mg/day + placebo. The drug REGN727/SAR236553 was administered every two weeks, subcutaneously. The exclusion criteria included type 1 diabetes mellitus, uncontrolled type 2 diabetes mellitus (glycated hemoglobin >8.5%) or use of insulin, triglycerides >350 mg/dL, or any cardiovascular event in the 6 months before the study. The results for the monoclonal anti-PCSK9 antibody REGN727/SAR236553 showed an important reduction in LDL levels with both associations (atorvastatin 10 mg and 80 mg) in patients with primary dyslipidemia (Table 1). In the placebo group, the drop was only 2.7%. As in previous studies with the same therapy, there was a decrease of approximately one-third in lipoprotein(a) [Lp(a)] levels in patients who received the antibody (7).
The ODYSSEY study of this antibody has been in phase III since July 2012. A total of 22,000 patients will be enrolled, distributed across 2,000 centers all over the world (United States, Europe, Canada, Australia, Asia, and South America, including Brazil). It is conducted by Professor Dr. Henry Ginsberg, of the Columbia University Medical Center, in New York.
Another way to inhibit the action of PCSK9 is to use therapeutic agents that act by interfering with RNA, that is, RNA interference (RNAi) (8). RNAi is a great advance in medicine: it reflects our understanding of how genes are switched on and off in cells and provides an entirely new approach to the discovery and development of new medications. Its discovery was hailed as "a great scientific advance that occurs once in every ten or more years," and it represents one of the most promising findings, having been awarded the Nobel Prize in Physiology, in 2006 (9,10). Alnylam Pharmaceuticals recently demonstrated, in initial (phase 1) clinical trials, positive results for the new drug ALN-PCS, which has the same action and effectively inhibits this proprotein.
New studies will be published soon, enabling a better evaluation of these promising therapeutic targets.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2011-12-29T00:00:00.000
|
12576191
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0028721&type=printable",
"pdf_hash": "2f1c054b823e85914201646794ff74dffaae53f0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43447",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "2f1c054b823e85914201646794ff74dffaae53f0",
"year": 2011
}
|
pes2o/s2orc
|
Porcine FcγRIIb Mediates Enhancement of Porcine Reproductive and Respiratory Syndrome Virus (PRRSV) Infection
Antibody-dependent enhancement (ADE) of virus infection, caused by the uptake of virus-antibody complexes by FcγRs, is a significant obstacle to the development of effective vaccines to control certain human and animal viral diseases. Activating FcγRs, including FcγRI and FcγRIIa, have been shown to mediate ADE of virus infection. In the present paper, we show that porcine FcγRIIb, an inhibitory FcγR, mediates ADE of PRRSV infection. Stable Marc-145 cell lines expressing poFcγRIIb (Marc-poFcγRII) were established. The relative yield of progeny virus was significantly increased in the presence of sub-neutralizing anti-PRRSV antibody. The Fab fragment and normal porcine sera had no effect. Anti-poFcγRII antibody inhibited the enhancement of infection when cells were infected in the presence of anti-PRRSV antibody, but not when cells were infected in the absence of antibody. These results indicate that enhancement of infection in these cells by anti-PRRSV antibody is FcγRII-mediated. Identification of an inhibitory FcγR mediating ADE of infection should expand our understanding of the mechanisms of pathogenesis for a broad range of infectious diseases and may open many approaches for improvements to the treatment and prevention of such diseases.
Introduction
Porcine reproductive and respiratory syndrome virus (PRRSV) is an enveloped positive-strand RNA virus in the family Arteriviridae [1]. PRRS can cause severe reproductive failure in sows and is associated with the porcine respiratory disease complex in combination with secondary infection [2][3][4]. The virus is present in a majority of swine producing countries around the world and gives rise to significant economic losses in pig farming [5]. Swine are the only known host of PRRSV, and myeloid cells, particularly macrophages and dendritic cells, are the primary permissive cells [6].
Various features of PRRSV infection and the ensuing immune response suggest that PRRSV immunity is aberrant. The acute, viremic infection lasts for 4 to 6 weeks and is followed by a period of persistent viral proliferation in lymphoid tissues that lasts for several months before complete resolution of infection [7]. PRRSV infection can induce significant and specific antibody and B-cell responses to a variety of PRRSV proteins [8,9].
Fcγ receptor (FcγR)-mediated entry of infectious PRRSV immune complexes into macrophages is hypothesized to be a key event in the pathogenesis of the disease [10][11][12]. Infection of alveolar macrophages by PRRSV is significantly enhanced in vitro in the presence of diluted anti-PRRSV antisera [10], and the mean level and duration of viremia are greater in pigs injected with sub-neutralizing antibodies prior to virus challenge than in pigs injected with normal IgG [10,12]. The prolonged duration of viremia and virus isolation from the tissues of piglets with low maternal antibodies also suggest antibody-dependent enhancement (ADE) of PRRSV [13]. These observations strongly suggest that ADE of PRRSV infection has the potential to enhance the severity of disease and possibly the susceptibility to PRRSV infection in pigs with declining levels of PRRSV-specific antibodies of maternal origin, or with antibodies induced by exposure to wild-type or vaccine strains of PRRSV.

IgG Fc receptors (FcγRs) comprise a multigene family of integral membrane glycoproteins that exhibit complex activating or inhibitory effects on cell functions after aggregation by complexed IgG. Four different classes of FcγRs, known as FcγRI (CD64), FcγRII (CD32), FcγRIII (CD16) and FcγRIV, have been extensively characterized in mice and humans [14]. Both FcγRI and FcγRIIa have previously been shown to facilitate antibody-mediated dengue virus enhancement in human macrophages [15,16], and FcγRIIa appeared to be the most effective [17]. FcγRII is a 40-kDa molecule detected on monocytes, neutrophils, eosinophils, platelets and B cells. It has a low affinity for monomeric IgG, preferentially binding multivalent IgG. The amino acid sequence of porcine FcγRII, previously characterized by our research group, shows high similarity to human and mouse FcγRIIb. Since the cytoplasmic domain of this receptor contains a conserved immunoreceptor tyrosine-based inhibitory motif (ITIM), the swine receptor may also have a similar inhibitory function [18]. Therefore, it is important to elucidate the role of porcine FcγRIIb in PRRSV infections in order to better understand PRRSV-porcine cell interactions and the pathogenesis of PRRSV infections.
Results
Establishment of stable Marc-145 cell lines expressing poFcγRII (Marc-poFcγRII)

The Marc-145 cell line was selected for transfection with poFcγRII because it is permissive for PRRSV infection. Three Marc-145 cell lines stably expressing poFcγRII were established (data not shown), and one of the cell lines, Marc-poFcγRII/1, was selected for further studies. The expression of poFcγRII was verified by flow cytometry and RT-PCR (Fig. 1A and B). Expression of poFcγRII was stable for as long as 20 continuous passages, over a period of 2 months.

Efficiency of PRRSV replication and progeny virus production in Marc-poFcγRII

Virus infection and virion production were quantified in the absence of PRRSV antibody using both Marc-poFcγRII and parent Marc-145 cells (Fig. 2). The growth kinetics of PRRSV BJ-4 were similar in the two cell lines. The results indicate that the infection, efficiency of PRRSV replication and mature virion production in Marc-poFcγRII were similar to those of the parent cell line.

Augmentation of progeny virus production in the Marc-poFcγRII cell line by anti-PRRSV antibody

PRRSV-specific antibodies were detected with the IDEXX commercial ELISA kit, and the neutralizing antibody test was performed using Marc-145 cells. Sera from one pig with an S/P ratio of 1.2 and a neutralizing antibody titer of 1/5.9 were used in the ADE test. Marc-poFcγRII cells were infected with the PRRSV BJ-4 strain in the presence or absence of serially diluted anti-PRRSV sera, and the virus yield was examined 48 h later. Marc-poFcγRII cells infected with PRRSV in the presence of 1/2^4 to 1/2^10 dilutions of anti-PRRSV sera produced significantly more virus than those infected in the presence of normal porcine sera (Fig. 3). The anti-PRRSV serum-enhanced infection was optimal at a dilution of 1/128. Normal porcine serum, which does not contain anti-PRRSV antibodies, did not enhance infection at any dilution ranging from 1/2 to 1/2^10. These results suggest that infection with virus-antibody complexes enhances viral production.

Analysis of PRRSV-ADE infection in susceptible cells bearing poFcγRII

In order to test whether this enhancement of infection applied specifically to poFcγRII-positive cells, we pre-incubated PRRSV BJ-4 virions with whole anti-PRRSV IgG or its F(ab')2 fragment at 80 µg/ml and then added the opsonised virions to the Marc-poFcγRII cell line, porcine alveolar macrophages (PAMs) or Marc-145 cells. Anti-PRRSV IgG enhanced PRRSV infection in both FcR-bearing cell types, the Marc-poFcγRII cell line and PAMs (Table 1). No enhancement was seen in the normal Marc-145 cells (Table 1). Thus, the enhancement of PRRSV infection appears to be due to the presence of poFcγRII.

Inhibition of anti-PRRSV antibody-mediated infection by anti-poFcγRII polyclonal antibodies

Rabbit anti-poFcγRII polyclonal antibodies, which are known to inhibit the binding of porcine IgG to poFcγRII, were used to prove that enhancement of PRRSV infection by anti-PRRSV antibody is FcγRII-mediated [19]. The sera were heat inactivated (56°C for 30 min) to remove intrinsic complement activity. Marc-poFcγRII cells and PAMs were incubated with anti-poFcγRII polyclonal antibodies and were then infected with PRRSV in the presence or absence of anti-PRRSV antibody. Anti-poFcγRII antibody inhibited the enhancement of infection when cells were infected in the presence of anti-PRRSV antibody, but not when cells were infected in the absence of antibody (Fig. 4A and B). Control rabbit serum did not inhibit infection in the presence or absence of anti-PRRSV antibody (data not shown). These results indicate that the observed enhancement of infection in the presence of anti-PRRSV antibody is FcγRII-mediated.

The infection rates of PRRSV-ADE infection in Marc-poFcγRII cell lines

PRRSV infectivity in Marc-poFcγRII cells under conditions of ADE infection and control infection was compared at 12, 24 and 48 h post-infection. At 12 h post-infection, there was increased infectivity for ADE-treated cells, but this finding was not statistically significant compared with the control (p = 0.096). As the data show, a clear and significant increase in infectivity was found after 24 h, with higher numbers of infected cells under ADE infection conditions (Table 2). Therefore the initial PRRSV infectivity did not differ between ADE and non-ADE infection in Marc-poFcγRII cell lines, suggesting that post-entry intracellular activities are of greater importance.
Discussion
The basis for vaccine development against a wide range of infective agents is the development of protective antibodies. However, the discovery of the phenomenon of ADE in the 1960s gave rise to the concern that the development of antibodies could, at times, exacerbate the reaction to a natural infection. Pigs infected with PRRSV develop a strong and rapid humoral response, but the protective effectiveness of these antibodies might be reduced by ADE. The development of significant levels of non-neutralizing antibodies, especially early in infection, may actually mediate ADE [20]. Antibody-FcγR interaction is known to play a key role in the ADE phenomenon. The FcγR-mediated mechanism of ADE in virus infections was first suggested by Halstead et al., who reported that F(ab')2 fragments prepared from IgG did not enhance infection of peripheral blood leukocyte cultures by dengue virus (DV) while whole IgG did [21]. To date, four different classes of FcγRs, known as FcγRI, FcγRII, FcγRIII and FcγRIV, have been recognized in mice. FcγRI (CD64) is a high affinity receptor found mainly on myelomonocytic cells that can bind monomeric IgG, whereas FcγRII (CD32) and FcγRIII (CD16) are of lower affinity, binding primarily aggregated IgG or IgG in immune complexes [22], and FcγRIV is a recently identified receptor of intermediate affinity and restricted subclass specificity [23]. Kontny et al. showed that FcγRI mediated ADE of DV infection in U937 cells [15]. In a related study, FcγRIIa was also reported to mediate ADE of DV infection in a human erythroleukemic cell line (K562), which has only FcγRIIa [16]. In the present paper, we have shown that porcine FcγRIIb mediates ADE of PRRSV infection. The relative viral yield was significantly increased in the presence of sub-neutralizing levels of anti-PRRSV antibody in stable Marc-145 cell lines expressing poFcγRIIb (Marc-poFcγRII) (Fig. 3). The enhancement was not evident for Marc-145 cells not bearing FcγRs. The Fab fragment and the normal porcine sera had no effect (Table 1). Thus, the enhancement of PRRSV load was due to poFcγRII.

Unlike the previously identified FcγRI and FcγRIIa mediating ADE of virus infection, poFcγRIIb is an inhibitory FcγR that can prevent activation of immune cells by recruitment of SHIP (Src homology 2-containing 5′-inositol phosphatase) to its cytoplasmic ITIM (immunoreceptor tyrosine-based inhibition motif) [18]. During initial studies on ADE it had been assumed that increased virus output resulted from the avid attachment of opsonised virus to FcγRI and FcγRIIa receptors, therefore yielding a larger number of infected cells [24,25]. An alternative proposal was that infection via Fc and FcR ligation also alters the intracellular signaling pathways, resulting in their switching from an anti-viral mode to a viral-facilitating mode. This process has been termed intrinsic antibody-dependent enhancement (iADE) [26,27]. Infection under conditions of iADE may not only facilitate the process of viral entry into macrophages but also modify the innate and adaptive intracellular antiviral mechanisms. The innate immune response suppression involves a decreased production of reactive nitrogen radicals via nitric oxide synthase (NOS2) suppression and downregulation of tumor necrosis factor alpha (TNF-α) and IFN-β production through abolished interferon regulatory factor 1 (IRF-1) and NF-κB gene expression, while a marked increase in IL-10 gene transcription and protein production is also observed [28,29]. In the present work, the infection rates under poFcγRII-mediated ADE and non-ADE conditions were compared, and the initial PRRSV infectivity did not differ between ADE and non-ADE infection in Marc-poFcγRII cell lines, suggesting that iADE behaviour is of great importance in PRRSV infection (Table 2). The intracellular mechanisms and implications of the enhanced pathogenesis of iADE may be the result of the inhibitory signals generated after interactions of infectious immune complexes with FcγRIIb, the inhibitory Fc receptor. The mechanisms of the inhibitory signal of the FcγRIIb receptor in regulating viral replication and production need further investigation.

Antibody-dependent enhancement of virus infection is a significant obstacle to the development of effective vaccines for the control of certain human and animal viral diseases. ADE in PRRSV infection has been suspected as one of the possible reasons for the relative ineffectiveness of vaccination in controlling PRRS. In the present paper we have shown that poFcγRIIb, an inhibitory FcγR, can mediate ADE of PRRSV infection, as do FcγRI and FcγRIIa, the activating FcγRs, in DENV infection. Identification of inhibitory FcγR involvement in the ADE process expands our understanding of the mechanisms of pathogenesis for a broad range of infectious diseases and opens approaches for improvement in the treatment and prevention of such diseases. The new ADE assay using Marc-poFcγRII cells is simple and practical, and is useful for defining the role of ADE in the pathogenesis of PRRSV infection.
Cells, virus and antibodies
Marc-145 cells were used for virus titration and maintained in Dulbecco's modified Eagle's minimal essential medium (Gibco) supplemented with 10% (v/v) fetal bovine serum (FBS, Life Technologies). The PAMs were collected from 4 to 6 week old piglets free of PRRSV, by lung lavage as previously described (Zhang et al., 2006). PAMs were dispensed at 1×10^5 cells/well into 24-well plates and maintained in RPMI-1640 (Gibco) supplemented with 10% FBS containing an antibiotic-antimycotic mixture of 100 µg/ml streptomycin, 100 IU/ml penicillin and 25 µg/ml amphotericin B (Sigma) and incubated at 37°C in a humidified atmosphere containing 5% CO2.

The PRRSV strain BJ-4 is a typical North American (VR2332)-like PRRSV isolated in 1996 in China, and its complete genomic sequence has been determined and deposited in GenBank (accession no. AF331831). The virus was supplied by Dr. Hanchun Yang of China Agricultural University. Anti-PRRSV sera were obtained from 3 pigs 50 days following nasal inoculation with 10^4 TCID50 of the PRRSV BJ-4 strain. Control sera were obtained from PRRSV antibody-free pigs of similar age. F(ab')2 fragments were generated by pepsin digestion of isolated IgG and then depleted of undigested antibody and Fc fragments by passage over a protein A-sepharose column. The PRRSV-specific antibody titers were determined by using the commercially available PRRSV antibody detection kit (HerdCheck PRRS; IDEXX) according to the manufacturer's instructions. The virus neutralization (VN) antibody titer was determined in 96-well microtitration plates using Marc-145 cells. Serum samples were heat inactivated at 56°C for 30 min prior to performing the test and serially diluted 2-fold. Each dilution of serum was mixed with an equal volume of PRRSV BJ-4 strain containing 2×10^2 TCID50/ml and incubated for 1 h at 37°C. The serum-virus mixture was transferred to a 96-well plate containing confluent Marc-145 cells, and the cells were analyzed for CPE at 5 days post inoculation. The VN antibody titer was defined as the reciprocal of the highest dilution that inhibited CPE in 50% of the inoculated wells.
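To make the VN read-out concrete, here is a minimal sketch of the endpoint rule described above (the reciprocal of the highest serum dilution that still inhibited CPE in at least 50% of inoculated wells). The well counts are hypothetical, not data from this study.

```python
# Minimal sketch of the VN endpoint rule; all well counts are made up.
def vn_titer(dilution_factors, protected_wells, total_wells):
    """dilution_factors: e.g. [2, 4, 8, ...] for 2-fold serial dilutions."""
    titer = None
    for d, p, n in zip(dilution_factors, protected_wells, total_wells):
        if p / n >= 0.5:        # CPE inhibited in at least 50% of wells
            titer = d           # keep updating: highest qualifying dilution
    return titer

print(vn_titer([2, 4, 8, 16], [6, 5, 3, 1], [6, 6, 6, 6]))  # -> 8
```

The fractional titer reported above (1/5.9) suggests the authors interpolated between dilutions rather than taking the raw endpoint; the sketch shows only the verbal rule.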
The experimental procedure for the collection of porcine alveolar macrophages was authorized and supervised by the Ethical and Animal Welfare Committee of Key Laboratory of Animal Immunology of the Ministry of Agriculture of China.
Stable expression of poFcγRII on Marc-145 cells

The recombinant eukaryotic expression vector pcDNA3-poFcγRII was constructed as previously described [18]. Marc-145 cells were transfected with BglII-linearized poFcγRII cDNA constructs using Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocols. Transfected cells were selected with 500 µg/ml G418 (Invitrogen) for two weeks and then further selected by the limiting dilution method. The G418-resistant clones were screened by RT-PCR using the PCR protocol described previously [18]. The surface expression of poFcγRII was verified by the binding of porcine aggregated IgG and assessed by flow cytometry.
IgG-binding assay
Surface expression of poFcγRII was examined by an IgG-binding assay on stably transfected Marc-145 cells as described previously [19]. Porcine IgG was aggregated at 62°C for 20 min. Aggregated IgG was added to the transfected cells. Cells were incubated with FITC-conjugated goat anti-porcine IgG for 30 min at 4°C, then pelleted and washed twice with PBS. Fluorescence spectra were analyzed by a BD FACSCalibur flow cytometer counting 10,000 cells per sample.
Infection of cells with PRRSV and PRRSV-antibody complex
Infection of cells with PRRSV-antibody complex was conducted as previously described [10]. BJ-4-specific antibody-positive and antibody-free sera were de-complemented by heat inactivation at 56°C for 45 min and serially diluted 2-fold from 2^1 to 2^10 in DMEM growth media. PRRSV-antibody complex was prepared by mixing each dilution of the serum sets with an equal volume of DMEM media containing 10^4.5 TCID50/ml of PRRSV BJ-4 strain. Virus-antibody mixtures were incubated for 60 min at 37°C. One-tenth of a milliliter of virus-antibody mixture was inoculated in triplicate onto cell monolayers in 24-well cell culture plates. The plates were incubated at 37°C for 60 min. After virus absorption, cells were washed with 1 ml of DMEM and overlaid with maintenance medium with 2% FCS for an additional 48 h at 37°C. At the end of the incubation period, the cells were subjected to 3 cycles of freeze-thawing. The amount of PRRSV in each cell lysate was quantitated by virus titration as described [10]. Virus titers were determined by the method of Reed and Muench and expressed as TCID50/0.1 ml.
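The Reed-Muench endpoint named above is a simple cumulative-proportion interpolation. The sketch below is a generic textbook implementation on hypothetical well counts, not the authors' calculation.

```python
import itertools

def reed_muench_log10_tcid50(log10_dilutions, infected, total):
    """Illustrative Reed-Muench endpoint (not the authors' code).

    log10_dilutions: e.g. [-1, -2, -3, ...], most concentrated first.
    infected/total: wells showing CPE / wells inoculated at each dilution.
    """
    uninfected = [t - i for i, t in zip(infected, total)]
    # A well infected at a dilute inoculum would also be infected at any
    # more concentrated one, so infected counts accumulate toward the top,
    # while uninfected counts accumulate toward the dilute end.
    cum_inf = list(itertools.accumulate(reversed(infected)))[::-1]
    cum_uninf = list(itertools.accumulate(uninfected))
    pct = [100 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for k in range(len(pct) - 1):
        if pct[k] >= 50 > pct[k + 1]:          # bracket the 50% endpoint
            pd = (pct[k] - 50) / (pct[k] - pct[k + 1])
            step = log10_dilutions[k] - log10_dilutions[k + 1]
            return log10_dilutions[k] - pd * step
    raise ValueError("50% endpoint not bracketed by the dilution series")

# Hypothetical 10-fold series: returns about -2.64, i.e. the inoculum
# contains roughly 10^2.64 TCID50 per inoculated volume.
print(reed_muench_log10_tcid50([-1, -2, -3, -4],
                               infected=[6, 5, 2, 0], total=[6, 6, 6, 6]))
```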
The infection rates following treatment with PRRSV-specific antibody were determined by immunofluorescence microscopy. Cells infected as above and incubated for 48 h were fixed in cold methanol for 10 min. After washing, the cells were incubated for 2 h at 37°C with a monoclonal antibody, 2D6, specific for the nucleocapsid protein of PRRSV (VMRD), followed by the addition of FITC-conjugated sheep anti-mouse IgG antibodies (Sigma) for 1 h at 37°C, after which the cells were counted using a fluorescence microscope. Between antibody incubations and prior to viewing under the microscope, the cells were washed three times with sterile PBS. Ten fields were counted, and mean infectivity (±SE) was calculated.
Statistical analysis
Data were subjected to one-way analysis of variance (one-way ANOVA). If the P value from the ANOVA was less than or equal to 0.05, pairwise comparisons of the different treatment groups were performed by a least-significant difference test at a rejection level of P < 0.05.
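As a concrete illustration of this analysis pipeline (made-up titre values; Fisher's LSD is one common reading of "least-significant difference test," so the pairwise step here is an assumption):

```python
import numpy as np
from scipy import stats

# One-way ANOVA, then pairwise LSD tests only if the ANOVA p <= 0.05.
groups = [np.array([4.8, 5.0, 4.9]),   # hypothetical log10 titers, group 0
          np.array([5.9, 6.1, 6.0]),   # group 1
          np.array([4.7, 4.9, 4.8])]   # group 2
f_stat, p_anova = stats.f_oneway(*groups)
if p_anova <= 0.05:
    n_total = sum(len(g) for g in groups)
    df_within = n_total - len(groups)
    mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_within
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            gi, gj = groups[i], groups[j]
            t = (gi.mean() - gj.mean()) / np.sqrt(mse * (1/len(gi) + 1/len(gj)))
            p = 2 * stats.t.sf(abs(t), df_within)  # two-sided p-value
            print(f"group {i} vs {j}: p = {p:.4f}")
```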
|
v3-fos-license
|
2024-03-22T15:28:33.961Z
|
2024-03-01T00:00:00.000
|
268560543
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2077-0383/13/6/1797/pdf?version=1710954201",
"pdf_hash": "0cb3af62f323f20783a9947e0c4d331ff47e3eb2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43448",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "22819d95b523815091b0cca77068a65dd5aaf4c6",
"year": 2024
}
|
pes2o/s2orc
|
Novel Antidiabetic Drugs and the Risk of Diabetic Retinopathy: A Systematic Review and Meta-Analysis of Randomized Controlled Trials
Background: The aim of this study is to compare the effect of sodium–glucose cotransporter-2 inhibitors (SGLT-2i), glucagon-like peptide-1 receptor agonists (GLP-1RA), and dipeptidyl peptidase-4 inhibitors (DPP-4i) on the risk of diabetic retinopathy (DR) in patients with type 2 diabetes (DM2). Methods: We systematically searched the databases PubMed, Embase, and ClinicalTrials.gov up to October 2, 2023, for randomized clinical trials (RCTs) of drugs from the GLP-1RA, SGLT-2i, and DPP-4i groups, with at least 24 weeks' duration, including adult patients with DM2 and reported ocular complications. A pairwise meta-analysis was performed to calculate the odds ratio (OR) of DR incidents. Results: Our study included 61 RCTs with a total of 188,463 patients and 2773 DR events. Pairwise meta-analysis showed that the included drug groups did not differ in the risk of DR events: GLP-1RA vs. placebo (OR 1.08; 95% CI 0.94, 1.23), DPP-4i vs. placebo (OR 1.10; 95% CI 0.84, 1.42), SGLT-2i vs. placebo (OR 1.02; 95% CI 0.76, 1.37). Empagliflozin may be associated with a lower risk of DR, but this sub-analysis included only three RCTs (OR 0.38; 95% CI 0.17, 0.88, p = 0.02). Conclusions: Based on currently available knowledge, it is challenging to conclude that the new antidiabetic drugs significantly differ in their effect on DR complications.
Introduction
Diabetic retinopathy (DR) stands as one of the leading causes of visual impairment in developed countries [1]. Hyperglycemia plays an important role in the pathophysiology of DR as it affects vascular endothelial function [2]. In recent years, an increasing number of new antidiabetic drugs have become available. Besides their varying abilities to lower blood glucose levels, these drugs also exhibit diverse effects on the vascular endothelium, potentially influencing the onset and progression of DR [3][4][5]. The SUSTAIN 6 trial has indicated a higher incidence of DR complications with the use of Semaglutide compared to placebo [6]. However, some analyses do not support this relationship, suggesting the potential role of the rate of glucose-lowering as a contributing factor, as the magnitude of HbA1C reduction has been associated with increased DR risk in the glucagon-like peptide-1 receptor agonist (GLP-1RA)-treated population [7,8]. Previous meta-analyses of randomized clinical trials (RCTs) have suggested a potential association between the use of GLP-1RA and Canagliflozin and a higher risk of vitreous hemorrhage in patients with type 2 diabetes mellitus (DM2) [9,10]. Nonetheless, conflicting results from other studies challenge these findings [11][12][13][14]. The current body of evidence remains inconclusive. Considering the expected increase in the incidence of diabetes and its complications in the coming years, it is crucial to determine how new antidiabetic drugs may impact the risk of DR [15]. We conducted a pairwise meta-analysis and meta-regression of randomized clinical trials, including patients with DM2, comparing the risk of DR complications between the new antidiabetic drugs sodium-glucose cotransporter-2 inhibitors (SGLT-2i), GLP-1RA, and dipeptidyl peptidase-4 inhibitors (DPP-4i), and placebo. The aim of our study was to determine the potential impact of these drugs on the risk of DR complications. The secondary aim was to investigate whether other factors, such as differences in changes of glycated hemoglobin blood concentration (HbA1c) between intervention and control groups, HbA1C at baseline, body mass index (BMI) at baseline, age, and duration of diabetes, might contribute to variations in this risk.
Materials and Methods
We conducted our meta-analysis in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 (PRISMA) [16]. The protocol of the systematic review was registered in the International Prospective Register of Systematic Reviews (PROSPERO) under the registration number CRD42022336459.
Search Strategy and Study Selection
We systematically searched the databases PubMed, Embase, and ClinicalTrials.gov using the search strategy included in Text S1. Studies published up to 2 October 2023 were included. Only trials reported in English were included in our study. Two independent reviewers assessed titles, abstracts, and full texts using the Rayyan online tool [17]. Additionally, we examined the bibliographies of included papers. Inclusion criteria were as follows: a randomized clinical trial of at least 24 weeks' duration, adult patients with DM2, reported ocular complications, and drugs from the SGLT-2i, GLP-1RA, and DPP-4i groups (Canagliflozin, Empagliflozin, Ertugliflozin, Dapagliflozin, Sotagliflozin, Luseogliflozin, Linagliptin, Saxagliptin, Teneligliptin, Alogliptin, Omarigliptin, Vildagliptin, Albiglutide, Lixisenatide, Semaglutide, Dulaglutide, Liraglutide, Efpeglenatide, Exenatide). A list of counted DR complications is available in Text S2.
Data Collection and Risk of Bias Assessment
Two independent reviewers collected data and assessed the risk of bias. In cases of conflicting opinions, a third reviewer resolved the conflict. Data were collected from full articles, protocols, clinical study reports, and the ClinicalTrials.gov database. We collected the following data: author, publication year, trial name, intervention, control, mean age, percentage of male participants, number of subjects, follow-up duration, background treatment, characteristics of the patient population, HbA1C levels at baseline and their changes at the study endpoint for each group, BMI, and DR events. For the HbA1C endpoint, we selected the longest time point measurement where at least half of the study's initial population remained. In the case of missing data, which only occurred for the variables analyzed in the meta-regression, the study was omitted from the calculation. The risk of bias was assessed using the Cochrane risk-of-bias tool for randomized trials (RoB 2) [18]. Five domains were analyzed: risk of bias arising from the randomization process, risk of bias due to deviations from the intended interventions (effect of assignment to intervention), missing outcome data, risk of bias in the measurement of the outcome, and risk of bias in the selection of the reported result.
Certainty Assessment
Certainty in the body of evidence for each outcome was assessed using the GRADE approach (The Grading of Recommendations Assessment, Development, and Evaluation) [19]. This method is used to rate the certainty of evidence in systematic reviews through the assessment of five domains: risk of bias, inconsistency, indirectness of evidence, imprecision of the effect estimates, and risk of publication bias. Evaluation of each domain can lower the level of evidence, as there are four levels: very low, low, moderate, and high.
Statistical Analysis
We conducted a pairwise meta-analysis using a random effects model to calculate the odds ratio (OR) and 95% confidence interval (95% CI) for the risk of diabetic retinopathy events between different drug groups and placebo. Sub-analyses were performed for drugs with three or more RCTs. The results of the meta-analysis were presented as a forest plot. To assess heterogeneity between studies, the I² statistic was used. Subgroup analyses and meta-regression were conducted to explore possible causes of heterogeneity. Sensitivity analysis was performed to determine the impact of individual studies on the OR of DR incidents. Publication bias was evaluated using funnel plot analysis and Egger regression. Meta-regression was performed to analyze the influence of the factors HbA1C change during the trial between intervention and control, HbA1C at baseline, diabetes duration at baseline, BMI at baseline, and age at baseline on the OR of DR incidents. Statistical analyses were conducted using Statistica v 13 (TIBCO Software Inc., Santa Clara, CA, USA) with plus kit v 5.0.96, under the license for Wroclaw Medical University.
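For readers who want to see the machinery, the following is a minimal, generic sketch of DerSimonian-Laird random-effects pooling on the log-OR scale together with the I² statistic. It illustrates the named methods; it is not the Statistica code the authors ran. Per-trial inputs would be log ORs and their standard errors (for a 2x2 table with cells a, b, c, d: log OR = ln(ad/bc), SE = sqrt(1/a + 1/b + 1/c + 1/d)).

```python
import numpy as np

def pool_random_effects(log_or, se):
    """DerSimonian-Laird random-effects pooling on the log-OR scale.

    Generic sketch, not the authors' implementation.
    Returns pooled OR, its 95% CI, and I^2 (%).
    """
    log_or, se = np.asarray(log_or, float), np.asarray(se, float)
    w = 1.0 / se**2                               # fixed-effect weights
    mu_fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - mu_fixed) ** 2)      # Cochran's Q
    df = len(log_or) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = 1.0 / (se**2 + tau2)                   # random-effects weights
    mu = np.sum(w_re * log_or) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    ci = np.exp([mu - 1.96 * se_mu, mu + 1.96 * se_mu])
    return np.exp(mu), ci, i2
```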
Results
From the 13,694 preliminary studies found, we selected 966 studies for full-text analysis (Figure S1). Ultimately, 61 RCTs were included in the study (Table 1). A total of 188,463 subjects were included in the meta-analysis, with 2773 diabetic retinopathy events. The mean treatment duration was 1.57 years, and participants had an average diabetes duration of 9.91 years at baseline. On average, 58.4% of the subjects in each RCT were male. Characteristics of included studies are presented in Table 1, and HbA1C data are available in Table S1.
Risk of Bias
The majority of RCTs included in the study exhibited some concerns or a high risk of bias (50.8% vs. 37.7%, respectively). This was primarily attributed to the measurement methods for diabetic retinopathy complications, missing data, and the methods of analysis used to estimate the effect of assignment to intervention. The analyzed RCTs mostly did not perform regular fundoscopy, and some of them did not have pre-defined ocular complications. Most of the studies assessed in domain 2 (effect of assignment to intervention) showed some concerns due to uncertainty about the validity of the method of adverse events analysis. A significant portion of RCTs analyzed adverse events using an as-treated approach. Additionally, six studies had a high risk of bias due to an open-label design. Finally, only seven studies were assessed as low risk. RoB individual study ratings are available in Table S2.
Certainty Assessment
Assessments of certainty are presented in Table S3. Five out of twelve outcomes were graded as moderate certainty, and the rest were graded as low or very low. Indirectness was rated as serious in every outcome because most of the included RCTs differed in terms of background therapy and ophthalmic events, which were collected from adverse event summaries. Imprecision was assessed based on the absolute effect, as the analyzed trials had large populations and low event rates.
Pairwise Meta-Analysis
The meta-analysis did not reveal a significant difference in the risk of DR events between any drug group and placebo (Table 2, Figures 1-3, and Figure S2). Sub-analysis was performed for drugs with three or more RCTs (Figures S3-S10). Empagliflozin was associated with a lower risk of DR compared to placebo (OR 0.38; 95% CI 0.17, 0.88, p = 0.02); however, this sub-analysis included only three RCTs. SGLT-2i was not compared to GLP-1RA or DPP-4i due to the limited number of studies available for each comparison (one and two studies, respectively).
Heterogeneity Analysis
Strong heterogeneity was identified when comparing GLP-1RA to DPP-4i (I² = 76.16%, p < 0.001). This is mainly due to the inclusion of the NCT01098539 study, which has an older population and a longer duration of diabetes compared to the other studies in this group (mean age at baseline 63.3 years vs. 55.6 years; mean diabetes duration at baseline 11.23 years vs. 7.3 years). The meta-regression results described below indicate that higher values of these two factors are associated with a lower risk of DR incidents with GLP-1RA when compared to DPP-4i. This relationship remains significant even after removing the NCT01098539 study from the analysis. While heterogeneity was also elevated in the Semaglutide vs. placebo comparison, it did not reach statistical significance (I² = 37.88%, p = 0.12).
Publication Bias
Based on Egger's test and visual inspection of the funnel plots, significant publication bias was found in the Albiglutide vs. placebo (Egger p = 0.02), Linagliptin vs. placebo (Egger p = 0.24), and Liraglutide vs. placebo (Egger p = 0.28) comparisons. However, it is worth mentioning that these are analyses with a small number of studies (each fewer than 10 RCTs).
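Egger's test, as typically implemented, regresses each trial's standardized effect on its precision and examines the intercept; funnel-plot asymmetry shows up as an intercept that differs from zero. A small generic sketch (again, not the authors' code) follows.

```python
import numpy as np
import statsmodels.api as sm

def egger_test(log_or, se):
    """Egger regression for funnel-plot asymmetry (illustrative sketch).

    Regresses the standardized effect on precision; returns the intercept
    and its two-sided p-value, which is the quantity reported above.
    """
    z = np.asarray(log_or) / np.asarray(se)   # standardized effects
    precision = 1.0 / np.asarray(se)
    x = sm.add_constant(precision)            # column of ones + precision
    fit = sm.OLS(z, x).fit()
    return fit.params[0], fit.pvalues[0]      # intercept, p-value
```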
Sensitivity Analysis
In the sensitivity analysis, we assessed whether the exclusion of individual studies would result in a change in the OR of DR incidents. When comparing DPP-4i with placebo, the exclusion of the CARMELINA trial would lead to a higher risk of DR incidents with DPP-4i use compared to placebo (OR 1.26; 95% CI 1.05, 1.53; p = 0.02). Additionally, when comparing Semaglutide vs. placebo, removal of the PIONEER 9 study would have resulted in a statistically significant increased risk of DR complications with Semaglutide compared with placebo (OR 1.30; 95% CI 1.05, 1.60; p = 0.01). For the sub-analysis of the SGLT-2i group, Empagliflozin vs. placebo, sensitivity analysis also identified studies whose removal would significantly alter the outcomes. However, these are groups with a small number of studies (three).
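A leave-one-out loop of the kind described here can be sketched in a few lines, reusing the pool_random_effects() helper from the Methods sketch above (inputs are hypothetical; study labels would come from Table 1).

```python
import numpy as np

# Leave-one-out sensitivity analysis: re-pool after dropping each study.
def leave_one_out(log_or, se):
    log_or, se = np.asarray(log_or), np.asarray(se)
    for k in range(len(log_or)):
        pooled, ci, _ = pool_random_effects(np.delete(log_or, k),
                                            np.delete(se, k))
        print(f"without study {k}: OR {pooled:.2f} "
              f"(95% CI {ci[0]:.2f}, {ci[1]:.2f})")
```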
Regression Analysis
A multivariate and univariate meta-regression of 44 RCTs found no effect of the difference in HbA1C change between intervention and placebo, HbA1C at baseline, diabetes duration, age, or BMI on the risk of DR complications (Table S4). In univariate sub-analysis, a higher BMI at baseline was associated with an increased risk of DR complications with GLP-1RA use (Table S5). Additionally, a smaller difference in HbA1C change between DPP-4i and placebo use was linked to a higher risk of DR complications with DPP-4i use (Table S6). SGLT-2 inhibitors showed higher DR risk with higher HbA1C levels at the start of therapy (Table S7), and when comparing GLP-1RA to DPP-4i, older age and longer duration of diabetes at baseline lowered the DR risk in favor of the GLP-1RA group (Tables S8 and S9). Studies with missing data were excluded. Other sub-analyses were not included as they did not show a significant effect of the analyzed variables on the risk of DR incidents.
Discussion
Data from 61 RCTs involving a total of 188,463 patients and 2773 DR incidents were analyzed in our study. The analysis did not reveal an increased risk of DR events with the use of any drug group. The use of Empagliflozin showed a potential association with a lowered risk of DR, but this finding is based on a sub-analysis involving only three RCTs. Further research with a larger number of studies in this subgroup may alter this observation.
SGLT-2i
The cardiovascular effects of SGLT-2i have garnered substantial attention, particularly through clinical trials like EMPA-REG OUTCOME, CANVAS, and DECLARE-TIMI 58, which demonstrated reductions in cardiovascular death and hospitalization for heart failure during SGLT-2i use [33,36,38]. These benefits are thought to be associated with their diuretic and natriuretic effects [75]. Additionally, SGLT-2i appears to exert protective effects on vascular endothelial function, potentially benefiting retinal health by enhancing glycemic control, managing hypertension and hyperlipidemia, and protecting the blood-retinal barrier and retinal capillaries [76]. Indeed, rodent studies have shown the beneficial effects of SGLT-2i on ophthalmic complications of diabetes, and human studies indicated the ability to reduce diabetic macular edema [77][78][79][80]. However, in our study, we did not observe a lower risk of DR complications with SGLT-2i use. These findings align with the meta-analysis and systematic review by Li et al., which also found no evidence of SGLT-2i providing benefits in reducing DR incidents or total ocular events in patients with type 2 diabetes [11]. The result of the meta-analysis by Zhou et al. partially supports this observation; compared to other antidiabetic drugs or placebo, the use of SGLT-2i was not associated with a reduction in the overall number of ocular complications in DM2 patients. However, subgroup analysis suggested that Ertugliflozin and Empagliflozin may reduce the risk of retinal disease and DR, respectively. Canagliflozin, on the other hand, may increase the risk of vitreous disease compared to placebo [9]. Our study results are in agreement with the beneficial effect of Empagliflozin use, as the risk of DR complications was lower when compared to placebo. Unlike in the aforementioned study published by Zhou et al., DR events were analyzed together and were not grouped, so the lack of effect of Canagliflozin on DR complications remains consistent with the results. It is important to note that our study included only one RCT on Ertugliflozin, so we did not perform a sub-analysis for this drug.
GLP-1RA
The effect of GLP-1RA extends beyond glycemic control alone, as GLP-1 receptors are present in many tissues, including the brain and heart [81]. Studies have also shown a protective effect on the retina, with GLP-1RA accelerating its regeneration and inhibiting the progression of DR [82,83]. Puddu et al. and Dorecka et al. detected GLP-1 receptors on the retinal pigment epithelium, suggesting a potential mechanism for the positive effect on reducing DR complications [84,85]. Additionally, Zhou et al. and Fu et al. showed a protective effect of GLP-1RA on retinal ganglion cells under conditions of high glucose levels [86]. However, not all studies agree with the protective effect of GLP-1RA on the diabetic retina. Hebsgaard et al. showed that GLP-1R expression is low in healthy eyes and virtually absent in eyes affected by proliferative diabetic retinopathy [87]. In our study, we did not show a higher risk of DR incidents with GLP-1RA use compared to placebo. This result is consistent with previously performed meta-analyses of RCTs [12][13][14]. The exception was the study by Avgerinos et al., where the use of GLP-1RA was linked to a higher risk of vitreous hemorrhage [10]. The SUSTAIN 6 trial indicated a significantly higher rate of retinopathy complications in the Semaglutide group compared to placebo [6]. However, when analyzing the SUSTAIN 1-5 and Japanese trials, no significant difference was demonstrated when compared to the control groups. The authors of this analysis suggested that this phenomenon in the SUSTAIN 6 study might be attributed to a rapid reduction in HbA1C during the initial 16 weeks in patients treated with Semaglutide and insulin, particularly those already suffering from retinopathy with poor glycemic control [7]. In our study, we did not observe a higher risk of DR incidents associated with the use of Semaglutide compared to placebo.
DPP-4i
DPP-4i is suspected to have effects on the cardiovascular system and vascular endothelium [88]. Studies in rodents have indicated that DPP-4i may exert retinoprotective effects [89,90]. Sitagliptin has been shown to have a beneficial effect on endothelial cell function during retinal inflammation [91]. The effect of DPP-4i after topical administration in the form of eye drops is also being studied: Ramos et al. demonstrated the anti-oxidative and anti-inflammatory effects of topically administered sitagliptin in the diabetic retina [92]. However, some authors disagree on the protective effect of DPP-4i. Studies have suggested that prolonged use of DPP-4 inhibitors may induce vascular leakage, possibly by destabilizing barriers formed by retinal endothelial cells [93,94]. In a cohort study published in 2018, the use of DPP-4i did not result in a higher risk of DR incidents compared to other oral antidiabetic drugs at longer follow-up. Nonetheless, with a shorter duration of use (less than 12 months), the risk of DR complications was higher than in the never-use control group [95]. In a retrospective study, Chung et al. were among the first to show that the use of DPP-4i was an independent inhibitor of DR progression compared to the other antidiabetic drugs included in the study; however, this study included only eighty-two participants [96]. In another, larger study, the authors demonstrated that DPP-4i did not increase the risk of DR progression compared to sulphonylureas [97]. In 2020, Taylor et al. published a meta-analysis of 18 studies, including RCTs, to determine the effect of DPP-4i on microvascular and macrovascular complications of diabetes. Among the data analyzed, there was no significant evidence of a protective effect of DPP-4i on the onset and progression of DR [98]. Consistent with these findings, our study also did not find an association between the use of DPP-4 inhibitors and the risk of DR incidents. In 2018, Tang et al. published a systematic review and meta-analysis of RCTs considering older and newer antidiabetic drugs and their impact on DR complications in patients with DM2. Their pairwise meta-analysis indicated that the use of DPP-4i was associated with a higher risk of DR incidents compared to placebo. However, they suggested that this association was largely influenced by the inclusion of the TECOS trial and speculated that, with the inclusion of more studies, the relationship might become statistically nonsignificant [99]. In our study, we included a larger number of RCTs and, as predicted by Tang et al., did not confirm this relationship. Our results are also consistent concerning the other drug groups studied by those authors, where we likewise did not find any statistically significant difference in DR risk with the use of any drug group compared to placebo.
GLP-1RA vs. DPP-4i
We found no association between GLP-1RA use and the risk of DR incidents when compared to DPP-4i. This result agrees with a cohort study integrating data from Sweden and Denmark, which compared the risk of DR incidents in patients with a history of DR after their first prescription of GLP-1RA or DPP-4i. The authors found no association between the use of GLP-1RA and DR complications, with DPP-4i as an active comparator [100]. As there is no strong evidence of a higher risk of DR with either drug, our result appears to be consistent with the available data.
Meta-Regression Analysis
In our study, the effect of the difference in HbA1C change between intervention and placebo on the risk of DR incidents was only demonstrated for DPP-4i use, where a greater reduction of HbA1C in the intervention group was associated with a lower risk of DR incidents. Previously, the opposite was reported: worsening of DR was related to greater efficacy in lowering HbA1c by GLP-1RA [14]. HbA1C concentration was also significant in the SGLT-2i vs. placebo comparison, where higher baseline HbA1C resulted in a higher DR risk with SGLT-2i use. This highlights the impact of high glycemia on ocular complications. In addition, we demonstrated a higher DR risk during GLP-1RA use in patients with higher BMI. The association of BMI with DR complications has been previously described; however, results remain controversial [101][102][103]. The GLP-1RA and DPP-4i comparison showed strong heterogeneity. It may be explained by our meta-regression results, which showed that older age and longer duration of diabetes at baseline were related to a lower DR risk in the GLP-1RA group and a higher risk with DPP-4i. The lack of significant correlations when considering all included RCTs may be attributed to the different mechanisms of action of the drug groups or the limited number of studies analyzed.
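For readers unfamiliar with the technique, the moderator analysis described above can be sketched as an inverse-variance weighted regression of per-trial log risk ratios on a baseline covariate. The snippet below is a minimal illustration, not our actual analysis pipeline; all values and names (log_rr, var, hba1c_baseline) are hypothetical, and a full analysis would typically use random-effects weights and dedicated meta-analysis software.

```python
# Minimal sketch of a meta-regression on hypothetical trial-level data.
import numpy as np
import statsmodels.api as sm

log_rr = np.array([-0.10, 0.05, -0.20, 0.12, -0.03])   # hypothetical log risk ratios per trial
var = np.array([0.02, 0.03, 0.025, 0.04, 0.015])        # hypothetical sampling variances
hba1c_baseline = np.array([8.1, 8.7, 7.9, 9.0, 8.3])    # hypothetical moderator (%)

X = sm.add_constant(hba1c_baseline)                 # intercept + moderator column
model = sm.WLS(log_rr, X, weights=1.0 / var).fit()  # inverse-variance weighting
print(model.params)  # a positive slope would suggest higher baseline HbA1C -> higher DR risk
```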
Strengths and Limitations
Strengths: The study includes 61 RCTs and a large population of 188,463 subjects. In addition, the meta-regression performed allowed us to determine the effect of additional factors on the risk of DR with the use of the studied drug groups. In the subgroup analysis, we determined the risk of DR when using specific drugs, not only whole groups. Limitations: A notable limitation of our study is that the majority of included trials were primarily designed to evaluate the impact of tested drugs on cardiovascular events or glycemic control. Detailed fundoscopic examinations were conducted in only 19 of the included studies. DR endpoints came primarily from adverse event reporting, which may cause the number of DR events to be significantly underreported. Most of the studies did not report data on pre-existing retinopathy, so we could not explore the tested drugs' effects in this population. As fewer than three studies each compared SGLT-2i with DPP-4i or GLP-1RA, we were not able to compare these drug groups directly.
Conclusions
In light of currently available knowledge, it is challenging to conclude that the new antidiabetic drugs differ significantly in their effect on diabetic retinopathy complications.
The available data suggest a potential decrease in the risk of diabetic retinopathy incidents with empagliflozin use, but more studies are needed to confirm this observation. Controlling glycemia may offer potential benefits in reducing this risk when using incretin-based drugs and SGLT-2i, and using GLP-1RA in older populations may be beneficial compared to using DPP-4i. Studies show potential mechanisms by which these drugs could protect the retina, but most of the available randomized trials do not support these statements and do not include a detailed ophthalmic evaluation. Further RCTs, including detailed ophthalmic evaluation, are required to accurately assess the impact of new antidiabetic drugs on diabetic retinopathy. This is particularly important in light of their increasing use and the growing number of people suffering from diabetes.
Table 1. Characteristics of included studies.
Systematic Review of Solubility, Thickening Properties and Mechanisms of Thickener for Supercritical Carbon Dioxide
Supercritical carbon dioxide (CO2) has extremely important applications in the extraction of unconventional oil and gas, especially in fracturing and enhanced oil recovery (EOR) technologies. It can not only relieve the water wastage and environmental pollution caused by traditional mining methods, but also effectively store CO2 and mitigate the greenhouse effect. However, the low viscosity of supercritical CO2 gives rise to challenges such as viscous fingering, limited sand-carrying capacity, high filtration loss, low oil and gas recovery efficiency, and potential rock adsorption. To overcome these challenges, low-rock-adsorption thickeners are required to enhance the viscosity of supercritical CO2. Based on a survey of the literature, this article reviews the solubility and thickening characteristics of four types of polymer thickeners in supercritical CO2: surfactants, hydrocarbons, fluorinated polymers, and silicone polymers. The thickening mechanisms of polymer thickeners are also analyzed, including intermolecular interactions, LA-LB interactions, hydrogen bonding, and functionalized polymers.
Introduction
In recent years, carbon dioxide (CO2) has become the focus of carbon emission reduction. As a major greenhouse gas, the utilization and storage of CO2 (e.g., as a buffer gas when producing hydrogen) are required to be environmentally friendly and economically sustainable [1][2][3][4][5][6]. Regarding the chemical properties of CO2, the carbon atom in the CO2 molecule is sp-hybridized, and the electrons form two mutually perpendicular π bonds. The bond length of the carbon-oxygen double bond (O=C=O) is shorter than that of the carbonyl bond (C=O). Thus, the CO2 structure is stable and its chemical properties are not very reactive [7]; the phase diagram of CO2 can be seen in Figure 3 of Ref. [8]. The critical point of CO2 is at 31 °C and 74 bar [8]. When the temperature and pressure exceed the critical point, CO2 transitions into the supercritical state, forming supercritical CO2.
Supercritical CO2 exhibits properties intermediate between those of a gas and a liquid. It possesses advantageous characteristics such as high diffusivity, low viscosity, low surface tension, and controllable solubility. This unique nature of supercritical CO2 finds extensive applications in oil displacement and fracturing technology [9], effectively addressing the limitations associated with hydraulic fracturing [10,11]. These limitations include excessive water consumption, clay swelling, reservoir damage caused by residual working fluids, and inadequate flowback leading to groundwater pollution [12,13]. However, the low viscosity of pure supercritical CO2, only 0.02-0.05 mPa•s, causes a series of problems including viscous fingering, limited sand-carrying capacity, and high filtration loss. The viscous fingering problem stems from the viscosity difference between supercritical CO2 and crude oil, which causes supercritical CO2 to form finger-like flows in the reservoir, bypassing the oil layer and reducing recovery efficiency [14]. High filtration loss means that part of the fracturing fluid is adsorbed, retained, and permeated into the formation during the fracturing process. The problem of limited sand-carrying capacity is due to the low viscosity and high diffusivity of supercritical CO2, which limit its ability to carry sand particles and affect the effective support of fractures [10,[15][16][17].
Therefore, understanding how to increase the viscosity of supercritical CO2 has become extremely important. The most direct and effective method is to add a thickener to supercritical CO2. An ideal supercritical CO2 thickener should be effective in increasing viscosity at low doses. From a quantitative point of view, a thickening ratio of 200-300 times is sufficient, and this has been achieved with fluorinated thickeners [18]; yet fluorinated thickeners are toxic, limiting their on-site application. In principle, the higher the viscosity of supercritical CO2, the better, within the required range. In oilfield applications, the ideal thickening efficiency depends on the actual application needs. For a broad range of applications, an increase of 5 to 10 times will serve the purpose. For oil extraction in the Middle East, a viscosity increase of 10 times is more than adequate. And in the context of supercritical CO2 fracturing technology, the viscosity enhancement should range from 20 to 30 times the original value. In addition to increasing viscosity, an ideal thickener should also be cheap, environmentally friendly, safe, highly efficient, and soluble in supercritical CO2 but insoluble in water [19]. At present, supercritical CO2 thickeners are generally divided into the following four categories: surfactants, hydrocarbon polymers, fluorinated polymers, and silicone polymers [20]. In particular, fluorine-containing thickeners have the best solubility and thickening effects, but their use is restricted as they are expensive and not environmentally friendly [21,22]. Furthermore, they are not effective at low concentrations and also adsorb to rock [23]. This article mainly reviews the research progress of the four types of thickeners in terms of solubility, thickening properties, and mechanisms.
Characterization Parameters of Solubility and Thickening Properties
Extensive literature research shows that an ideal supercritical CO2 thickener requires both efficient solubility and thickening properties in CO2 without any additional cosolvents. The characterization parameters of solubility and thickening properties are summarized below.
Solubility Properties
In terms of solubility, the main influencing factors are the interaction between the polymer and CO2 molecules, the density of the CO2 solvent, and the relative molecular mass and polarity of the solute, with density being especially important: solubility increases exponentially with the density of the supercritical CO2 system [24]. The thickening properties are mainly influenced by the spatial network structure formed by the interactions between thickener molecules. This structure effectively impedes the flow of CO2 molecules, resulting in the thickening of supercritical CO2.
The dissolving properties of thickeners in supercritical CO2 can be described by the solubility parameter δ [25], which is equal to the square root of the cohesive energy density. The closer the polymer's solubility parameter is to that of CO2, the better its solubility in CO2. The addition of cosolvents can reduce the difference in solubility parameters [25].
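As a minimal numerical sketch of this criterion, δ can be computed from an assumed cohesive energy and molar volume; the function name and the input values below are illustrative assumptions, not data from the cited studies.

```python
import math

def solubility_parameter(cohesive_energy, molar_volume):
    """Hildebrand solubility parameter, delta = sqrt(E_coh / V_m).

    cohesive_energy in J/mol, molar_volume in m^3/mol; returns MPa^0.5.
    """
    ced_pa = cohesive_energy / molar_volume  # cohesive energy density, J/m^3 = Pa
    return math.sqrt(ced_pa) / 1.0e3         # sqrt(Pa) -> MPa^0.5

# Hypothetical polymer segment: E_coh = 25 kJ/mol, V_m = 1.0e-4 m^3/mol
delta_polymer = solubility_parameter(25_000.0, 1.0e-4)
print(f"delta = {delta_polymer:.1f} MPa^0.5")  # ~15.8; the closer to CO2's delta, the better
```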
According to Enick's research on thermodynamics, in order for a thickener to dissolve in supercritical CO2, the Gibbs free energy of mixing must be negative, that is, $\Delta G_{mix} < 0$ [26], where

$\Delta G_{mix} = \Delta H_{mix} - T \Delta S_{mix}$ (1)

Here, $\Delta H_{mix}$, $\Delta S_{mix}$, and T are the mixing enthalpy change, mixing entropy change, and absolute temperature, respectively. Therefore, the problem becomes one of how to decrease $\Delta H_{mix}$ and increase $\Delta S_{mix}$.
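The criterion can be illustrated with a one-line calculation; the numerical inputs below are assumed for illustration only and do not come from the cited work.

```python
def gibbs_mixing(delta_h_mix, delta_s_mix, temperature):
    """Delta G_mix = Delta H_mix - T * Delta S_mix (J/mol).

    Dissolution in supercritical CO2 is thermodynamically favorable
    when the returned value is negative.
    """
    return delta_h_mix - temperature * delta_s_mix

# Hypothetical values: weak polymer-CO2 attraction keeps Delta H_mix small,
# while chain flexibility and free volume raise Delta S_mix.
dg = gibbs_mixing(delta_h_mix=1500.0, delta_s_mix=8.0, temperature=308.15)
print(f"Delta G_mix = {dg:.1f} J/mol")  # -965.2 J/mol < 0 -> soluble for these assumed inputs
```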
For $\Delta H_{mix}$, the main influencing factors are the density of the CO2 mixed solution and the interactions between CO2 molecules, between the polymer thickener molecules, and between the polymer thickener and CO2 molecules. The interaction between the polymer thickener and the CO2 molecules is critical in determining whether the thickener can dissolve in liquid CO2: this interaction effectively promotes enthalpy reduction, which in turn reduces the free energy, allowing the thickener to dissolve in supercritical CO2 [27]. The strength of the interaction between molecules can be described by the cohesive energy density [25]. For CO2 molecules, the electron distribution is concentrated near the oxygen atoms, and the CO2 molecule has a zero dipole moment; however, its large quadrupole moment and low polarizability [28] cause weak interactions with non-polar covalently bonded fragments like C-C motifs, but reasonably strong interactions with non-hydrogen-bonded polar functional groups like esters, ethers, C-F groups, or aromatic structures. Therefore, in order to dissolve in CO2, a thickener should be weakly polar and have a low cohesive energy density, or contain a certain number of CO2-philic groups.
As for $\Delta S_{mix}$, the solubility of the polymer can be improved by increasing the entropy of the system. For example, increasing the free volume of the polymer, improving the flexibility of the polymer chain, and lowering the glass transition temperature can all help reduce the interaction between polymer molecules and thus promote dissolution. Additionally, increasing the degree of branching of the polymer thickener has a similar effect [29].
The interaction between the polymer and CO2 is one of the key factors determining the solubility of the polymer. If the interaction energy between the polymer and CO2 is strong, the solubility of the polymer in supercritical CO2 is usually better, because this strong interaction helps to overcome the attraction between the polymer molecules or between the CO2 molecules, so that the thickener can be dispersed into the solvent. The interaction energy between the polymer and the CO2 molecules provides a measure of the strength of this interaction and can be calculated via Formula (2) [30]:

$E_{inter} = E_{polymer-CO_2} - (E_{polymer} + E_{CO_2})$ (2)

The total energy $E_{CO_2}$ of supercritical CO2, the total energy $E_{polymer}$ of the polymer, and the total energy $E_{polymer-CO_2}$ of the mixed system can be calculated through molecular dynamics (MD). The larger the absolute value of $E_{inter}$, the stronger the interaction. Hu et al. used MD methods to study poly(vinyl acetate-alt-maleate) copolymers. The results show that copolymerizing vinyl acetate reduces the interaction energy between this type of polymer and CO2, but it remains higher than the interaction energy between PVAc and CO2 [31].
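A minimal sketch of Formula (2), assuming hypothetical ensemble-average energies from three MD runs (the mixture, the pure polymer, and pure CO2); the numbers are placeholders, not simulation output.

```python
def interaction_energy(e_mixture, e_polymer, e_co2):
    """Formula (2): E_inter = E_polymer-CO2 - (E_polymer + E_CO2).

    Inputs are ensemble-average total energies from MD runs; a more
    negative result (larger |E_inter|) indicates a stronger
    polymer-CO2 attraction.
    """
    return e_mixture - (e_polymer + e_co2)

# Hypothetical MD averages in kJ/mol:
print(interaction_energy(e_mixture=-5400.0, e_polymer=-2100.0, e_co2=-3050.0))
# -> -250.0 kJ/mol, i.e., net attractive for these assumed energies
```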
In addition, the adsorption behavior of polymers in supercritical CO2 can be reflected by the potential of mean force (PMF), calculated as shown in Formula (3) [32]:

$E(r) = -k_B T \ln g(r)$ (3)

where E(r) is the mean force potential, $k_B$ is Boltzmann's constant, T is the absolute temperature, and g(r) is the radial distribution function between the polymer and CO2. The physical significance of g(r) is the ratio of the local density of B atoms at a distance r from a central atom A to the bulk density. The change in PMF reflects the positional preference of polymers in CO2: if the PMF value is negative, the polymer is more stable at that position in supercritical CO2, which contributes to polymer solubilization, while a positive value indicates a barrier that CO2 molecules must overcome to approach the thickener. Moreover, the solubility of polymers in CO2 is affected by polymer-polymer interactions to a similar degree: the weaker the polymer-polymer interaction and the stronger the polymer-CO2 interaction, the more favorable the solubility. Among the polymer-CO2 interactions, the Lewis acid-base (LA-LB) interaction can effectively promote dissolution of the polymer [25]. In an LA-LB interaction, the Lewis acid acts as an electron pair acceptor, while the Lewis base acts as an electron pair donor. For example, Gong et al. [25] found that the LA-LB interaction between O atoms in PVAc and C atoms in CO2 can enhance the solubility of PVAc in supercritical CO2. This interaction helps more CO2 molecules distribute around the carbonyl groups of the PVAc molecular chain, thereby increasing the solubility of PVAc in supercritical CO2.
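A minimal sketch of Formula (3), applied to a few hypothetical RDF samples; the temperature and g(r) values are assumptions for illustration only.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def pmf_from_rdf(g_r, temperature):
    """Formula (3): E(r) = -k_B * T * ln g(r), evaluated per RDF sample.

    Negative wells mark separations where CO2 preferentially sits around
    the polymer; bins with g(r) = 0 are returned as +inf (inaccessible).
    """
    g_r = np.asarray(g_r, dtype=float)
    out = np.full_like(g_r, np.inf)
    mask = g_r > 0
    out[mask] = -K_B * temperature * np.log(g_r[mask])
    return out

# Hypothetical RDF samples around a thickener group at 308.15 K:
print(pmf_from_rdf([0.2, 1.0, 1.8, 1.1], temperature=308.15))
# g(r) > 1 (local enrichment of CO2) yields a negative PMF well
```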
Thickening Properties
The viscosity of the mixed solution can be calculated by the following formula [33]:

$\eta = \frac{\tau_w}{\gamma_w}, \qquad \tau_w = \frac{D \Delta p}{4L}, \qquad \gamma_w = \frac{8\nu}{D}$

where η (Pa•s) is the viscosity, and $\tau_w$ and $\gamma_w$ are the wall shear stress and apparent shear rate, respectively; D is the capillary diameter, ∆p is the capillary pressure difference, L is the capillary length, and ν is the flow velocity of CO2 in the thickened liquid. Research shows that the thickened supercritical CO2 system is a non-Newtonian fluid whose viscosity has a non-linear relationship with the shear rate [33]. Moreover, the viscosity can also be acquired by fitting the transverse current autocorrelation function obtained through MD simulation [32].
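Assuming the standard capillary relations reconstructed above, the apparent viscosity can be evaluated directly; the instrument readings below are hypothetical and chosen only to land in the pure-CO2 viscosity range quoted earlier.

```python
def capillary_viscosity(diameter, delta_p, length, velocity):
    """Apparent viscosity from capillary flow data: eta = tau_w / gamma_w,
    with tau_w = D*dp/(4L) and gamma_w = 8*v/D (SI units; returns Pa*s)."""
    tau_w = diameter * delta_p / (4.0 * length)  # wall shear stress, Pa
    gamma_w = 8.0 * velocity / diameter          # apparent shear rate, 1/s
    return tau_w / gamma_w

# Hypothetical readings: 0.5 mm capillary, 1 m long, 2.0 kPa drop, 0.5 m/s flow.
eta = capillary_viscosity(diameter=5.0e-4, delta_p=2000.0, length=1.0, velocity=0.5)
print(f"eta = {eta * 1e3:.3f} mPa*s")  # ~0.031 mPa*s, within the 0.02-0.05 range for pure CO2
```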
Surfactants
Surfactant thickeners are polymers composed of polar hydrophilic groups and nonpolar hydrophobic groups [34,35]. In supercritical CO2, the hydrophilic groups of the surfactant undergo physical interactions with a small amount of water, while the hydrophobic groups are exposed to and interact with CO2, forming a reverse micelle structure that further develops into a spatial network structure [36,37]. This network structure can restrict the mobility of CO2 molecules, thereby serving a thickening function.
The solubility in supercritical CO2 of the polymer ZCJ-01 (a copolymer of styrene with modified sulfonated fluorinated acrylates), the surfactant thickener APRF-2 (consisting of sodium succinate (-2-ethyl) sulfonate, ethanol, H2O, etc.), and the surfactant thickener SC-T-18 was investigated by Zhai et al. [38]. The main component of SC-T-18 is a comb copolymer with a polydimethylsiloxane main chain and amino side groups. The results show that SC-T-18 has the highest solubility among them, and the experimental values are in good agreement with the theoretical values [38]. SC-T-18 and supercritical CO2 form a single, stable, homogeneous emulsion micelle after sufficient mixing, because the side-chain amino groups in SC-T-18 effectively increase its solubility in CO2 through an LA-LB interaction. At 25 °C and 6.894 MPa, Enick [39] studied the use of a 1 wt% tributyltin fluoride surfactant thickener with a 40-45 wt% pentane cosolvent to increase the viscosity of the supercritical CO2 system. The results showed that this combination can increase the viscosity by 10-100 times. However, the flaw of this thickener is the necessity of using a considerable amount of cosolvent to facilitate its dissolution in supercritical CO2, which makes the process very inefficient. Considering that the addition of cosolvents brings more serious environmental problems, Shi et al. [40][41][42] introduced a fluoroalkyl group into the trialkyltin fluoride polymer molecule to obtain a semi-fluorinated trialkyltin fluoride thickener. Research shows that this thickener has high solubility in supercritical CO2 systems without the use of any cosolvents. At 10-18 MPa, a 4% mass fraction of semi-fluorinated trialkyltin fluoride can increase the system viscosity by up to 3.3 times. Enick et al. [43] also reacted perfluoropolyether glycol with fluorinated diisocyanate to synthesize a fluorinated polyurethane thickener. At 25 °C and 25 MPa, 4% fluorinated polyurethane can increase the viscosity of the system by 1.7 times.
Trickett et al. [44] designed a surfactant that can not only dissolve in CO2, but also form rod-shaped micelles with enhanced viscosity when a small amount of water is added. They replaced the Na+ in dialkyl sulfosuccinic acid with M2+ (Co2+ or Ni2+), as shown in Figure 1, turning the spherical micelles into rod-shaped micelles, which then form reverse micelles in supercritical CO2. Through the interaction between these micelles, the viscosity of the CO2 mixed system increased. Research results show that 6-10 wt% of Co(di-HCF4)2 or Ni(di-HCF4)2 can increase the viscosity of the CO2 mixed system by 20-90%. For surfactant thickeners that are difficult to dissolve in supercritical CO2, CO2-philic groups can be introduced, such as fluorinated amine- and oxygen-containing groups [45]. Semi-fluorinated and fluorinated surfactant thickeners are soluble in liquid CO2 and can also increase CO2 viscosity with the addition of a small amount of water [46]. A study of the solubility of oxygenated hydrocarbon surfactants in CO2 found that these thickeners reach a level similar to that of fluorinated surfactants, both showing high solubility, indicating that the O atoms in oxygenated hydrocarbon surfactants can increase solubility [47].
The principle of surfactant thickening of CO2 involves the formation of reverse micelles by surfactants. These reverse micelles overlap and entangle with each other, creating a spatial network structure that restricts the flow of CO2 molecules, resulting in the thickening of the system. In addition, hydrocarbon, polar, or ionic groups can be introduced into surfactants to increase the viscosity of supercritical CO2 systems [48]. The interaction between ion charges and water dipoles, as well as the Van der Waals force between alkyl chains, are also important factors in the formation of ionic surfactant micelles. The former is required to be stronger than the interaction between ions and CO2, and as for the latter, specific functional groups can be introduced to enhance the Van der Waals forces. In order to enhance the solubility of thickener molecules in supercritical CO2, CO2-philic groups, such as fluoroalkyl groups, carbonyl groups, and oxygen atoms, can be introduced into the surfactant thickener molecules.
Hydrocarbon Polymers
Generally, this type of thickener contains carbon (C) and hydrogen (H), and may contain oxygen (O). Hydrocarbon polymer thickeners usually have low solubility and weak thickening properties. Heller et al. [49] studied commercially available hydrocarbon polymer thickeners and found that the viscosity of the supercritical CO2 system did not significantly increase upon their introduction. They discovered that only a portion of the thickeners was soluble in supercritical CO2. The solubility of this portion was attributed to the polymers' amorphous and irregular structure, which lacked a compact crystalline arrangement. This structural characteristic allowed for greater free space, facilitating the penetration and solubilization of CO2 molecules. Moreover, the solubility of this type of polymer in supercritical CO2 was influenced by the interactions between the polymers, as well as between the polymers and CO2. The solubility is higher when the interaction between the polymer and CO2 is stronger.
Sarbu et al. [50] proposed that polymers soluble in CO2 should have monomer units capable of LA-LB interactions with CO2, as well as monomer units with high free volume and high flexibility. Shen et al. [51] confirmed that among all the hydrocarbon thickeners, polyvinyl acetate (PVAc, Mw = 125,000) has the best solubility in supercritical CO2. The reason is that PVAc contains acetate groups, which effectively increase solubility, but its ability to thicken CO2 is relatively weak. Because of this, PVAc has become an ideal design material for supercritical CO2 thickeners. Shen et al. [52] used azobisisobutyronitrile (AIBN) as a catalyst to synthesize a polyvinyl acetate telomer through a free radical reaction, and then polymerized it with styrene to form a binary copolymer, namely a styrene-vinyl acetate copolymer; the reaction mechanism is shown in (1)-(3) of Ref. [52]. This copolymer molecule has both CO2-philic groups and thickening groups, namely acetate groups and styrene groups, respectively, and is expected to become an economical and environmentally friendly thickener. Zhang Jian [53] used AIBN as an initiator to synthesize four-arm PVAc through reversible addition-fragmentation chain transfer (RAFT) polymerization. With ethanol as a cosolvent, at 35 °C and 15 MPa, adding 1 wt% four-arm PVAc and 5 wt% ethanol can increase the viscosity of the supercritical CO2 system by 31-55%. However, this still does not meet the standards for practical use.
In recent years, MD simulations have been used to study the solubility and thickening properties of thickeners in supercritical CO2. Xue et al. [32] used MD simulations to investigate the thickening mechanism of a supercritical CO2 thickener and found that polyvinyl acetate-co-polyvinyl ether (PVAEE) molecular chains form a spatial network structure by intertwining with each other. Due to the interactions between CO2 molecules and the polymer (including electrostatic and Van der Waals interactions), the polymer confined CO2 molecules within the network structure formed by the PVAEE (degree of polymerization of each chain N = 50) molecular chains. This restriction reduced the flow of supercritical CO2 molecules, thereby increasing the viscosity of the supercritical CO2 fluid. They also obtained the PMF by calculating the radial distribution function (RDF). The contact minimum (CM) and the second solvent separation minimum (SSM) are at 0.4 nm and 0.85 nm, respectively; their corresponding energy values determine the binding stability of CO2 and the polymer in the first and second solvent layers. A higher energy barrier exists between the CM and the SSM, which is the barrier of the solvent layer (BS). The binding and dissociation energy barriers between CO2 and polymer groups are calculated as $\Delta E^+ = E_{BS} - E_{SSM}$ and $\Delta E^- = E_{BS} - E_{CM}$, respectively. The corresponding binding and dissociation energy barriers for the ether (ester) groups are 700 kJ/mol and 300 kJ/mol (540 kJ/mol and 190 kJ/mol), respectively. These values indicate that the ester group binds more readily to CO2. Goicochea et al. [54] also used MD simulation to study the interaction between polymers and CO2 molecules. Research shows that both intermolecular interactions and branching can improve the viscosity of supercritical CO2. In particular, intermolecular π-π stacking plays a crucial role in the thickening effect of supercritical CO2. These studies show that MD simulation is a very effective method for studying supercritical CO2 systems at the molecular level.
Double-chain polyether carbonate (TPA-PEC, Mw = 2168, N = 30), tri-chain polyether carbonate (TMA-PEC, Mw = 2211, N = 30), and four-chain polyether carbonate (TFA-PEC, Mw = 2254, N = 30) were synthesized as CO2 thickeners by Chen et al. [55]. Their dissolution properties were studied using the MD simulation method at 24.85 °C and 20 MPa. The results show that both TPA-PEC and TMA-PEC have better solubility than TFA-PEC due to their stronger interaction with the CO2 molecules, but their thickening effect is poorer. TFA-PEC gives the highest viscosity and needs an addition of only 0.95 wt% to thicken supercritical CO2 by 11 times, while TMA-PEC needs to be added at 0.72 wt% to thicken CO2 by 3.9 times. Among the three, TPA-PEC has the worst CO2-thickening ability. From the perspective of solubility and thickening properties, the multi-chain structure is beneficial to thickening ability but not to solubility. Polyether carbonate is also a polymer thickener that is easily degraded under natural conditions and has the advantage of being environmentally friendly.
Afra et al. [23] investigated a Poly-1-decene (P1D, Mw = 2950) supercritical CO2 system in sandstone. It was shown that 1.5 wt% P1D increased the viscosity of the supercritical CO2 system by a factor of six at 24.13 MPa and 35 °C. When the temperature was increased to 90 °C, the viscosity increased by a factor of 4.8. It was also shown that the large number of methyl groups in the P1D molecule contributes to its solubility in CO2, while the branched structure of the molecule positively affects the thickening effect as well. In addition, poly-1-decene (an oligomer of about 20 repeating units) is not only very effective in CO2 viscosification, but also reduces the remaining water saturation from 40% to about 27% at 24.13 MPa and 90 °C, which can improve the storage efficiency of CO2.
Sun et al. [56] used AIBN as the initiator and synthesized a series of copolymers, P(HFDA-co-MMA) and P(HFDA-co-EAL), from HFDA (1H,1H,2H,2H-perfluorodecyl acrylate), EAL (ethyl acrylate), and MMA (methyl methacrylate), as shown in Scheme 1 of Ref. [56]. The microstructure and intermolecular interactions in the supercritical CO2 system were studied through MD simulations. According to their research, an increase in EAL content enhances the interaction between copolymer chains and reduces their flexibility, leading to a decrease in solubility. Moreover, the intermolecular association of the copolymer is strengthened, resulting in an increased thickening ability. At 35.05 °C and 30 MPa, the P(HFDA0.19-co-EAL0.81, Mw = 3576) copolymer with the highest EAL content increases the viscosity of supercritical CO2 by 96 times at 5 wt% concentration and has the best thickening properties among all the copolymers. The solubility and thickening properties of P(HFDA0.37-co-EAL0.63) are higher than those of P(HFDA0.39-co-MMA0.61): P(HFDA0.37-co-EAL0.63, Mw = 3394) increases the viscosity of supercritical CO2 by 70 times, while P(HFDA0.39-co-MMA0.61) only increases it by 40 times. This shows that although EAL and MMA are isomers, the differences in their structures and compositions lead to large differences in the copolymer-CO2 intermolecular interactions and the association between copolymer chains. The presence of methyl groups in the main chain of P(HFDA-co-MMA) increases steric hindrance, which reduces intermolecular association, free volume, and chain flexibility.
Furthermore, at 344.3 K and 25-45 MPa, a coarse-grained molecular modeling study by Kazuya [18,57], optimized via the particle swarm optimization algorithm, showed that branched poly-1-decene oligomers (especially the model with six repeating units and Mw = 1000) exhibit significantly higher solubility in supercritical CO2 than straight-chain alkanes of the same molecular weight, by up to a factor of 270. This increase is attributed to the greater number of branches in the molecular structure, especially structural edges (methyl groups), which have enhanced interactions with CO2 and thus increase solubility. The branched structure of the thickener not only increases its solubility in CO2 but also reduces the adsorption of the thickener onto the rock, compared to changing the chemical composition [57]. These findings provide important molecular design principles for the development of thickeners with high solubility in supercritical CO2. Ding et al. [58] studied the solubility and thickening properties of oligomers of 1-decene (O1D) with six repeat units and branched oligomers of 1-dodecene and 1-hexadecene (O1D1H). The research confirmed that branches and methyl groups promote solubility. At approximately 13.8 MPa and 35 °C, the solubility of O1D in supercritical CO2 is 0.6 wt%, while at 24.1 MPa and 35 °C, the solubility of O1D1H is 0.3 wt%. However, in terms of relative viscosity, O1D1H at 0.3 wt% provides better viscosity performance than O1D at 0.6 wt%.
In general, the biggest problem with hydrocarbon polymer thickeners is their low solubility and the difficulty of dissolving them completely in supercritical CO2. At present, the main quick fix for the solubility problem is to add a large amount of cosolvent, but this raises environmental concerns and is neither economical nor environmentally friendly. Therefore, it is still necessary to modify and design the hydrocarbon thickener molecules themselves to find thickeners with both high solubility and high thickening properties.
Fluorinated Polymers
Compared with hydrocarbon polymer thickeners, fluorinated polymer thickeners obtained after fluorination have stronger CO2-philic properties and can be effectively dissolved in liquid CO2 without adding cosolvents. At the same time, these polymers have a better thickening effect.
DeSimone et al. [59] demonstrated for the first time that fluoropolymers can dissolve readily in supercritical CO2 without the assistance of cosolvents and show good thickening properties. Research shows that at 50 °C and 300 bar, 3.7 wt% poly(1,1-dihydroperfluorooctyl acrylate) (PFOA, Mw = 1,400,000) in supercritical CO2 can increase the viscosity of the system from 0.08 cP to 0.20-0.25 cP. However, PFOA is toxic to aquatic organisms, which may disrupt aquatic ecosystems.
Huang et al. [60] synthesized copolymers (PolyFAST) from a fluorinated acrylate and styrene. The fluorocarbon group is CO2-philic and improves the solubility of PolyFAST in supercritical CO2, while the styrene group is CO2-phobic and thickens CO2 but also reduces the solubility of PolyFAST in supercritical CO2. The ratio of fluorinated acrylate to styrene is crucial in determining the thickening effect. Multiple experiments found that the most significant increase in viscosity occurs at a ratio of 71 mol% fluorinated acrylate to 29 mol% styrene [60]. Additionally, incorporating 1-5 wt% PolyFAST in supercritical CO2 can increase the viscosity by 5-400 times. However, the production cost of this polymer is relatively high, and it is not environmentally friendly.
Heller et al. [49] studied telechelic polymer thickeners, which have ionic groups at each end and form a network structure through ion pair aggregation. Enick et al. [61] synthesized a poly-sulfonated polyurethane. Fluorinated telechelic ionic polymers have good solubility in CO2, and the addition of 4 wt% of the polymer can increase the viscosity of CO2 by 2.7 times at 25 °C and 25 MPa.
Shi et al. [42] synthesized a series of semi-fluorinated trialkyltin fluorides. Among them, 4 wt% tris(1,1,2,2-tetrahydroperfluorohexyl)tin fluoride is soluble in CO2 and can thicken the supercritical CO2 system by 3.3 times. The mechanism is that the positively charged Sn atoms and the negatively charged F atoms form Sn-F bridges, creating transient polymer chains.
Sun et al. [62] used all-atom MD to simulate molecular models of polymer-CO2 systems and studied the solubility and thickening properties of copolymers in supercritical CO2. Research shows that 5 wt% of P(HFDA0.49-co-VAc0.51, Mw = 3023) can increase the viscosity of the CO2 system by 62 times. The thickening performance of 1.5 wt% of P(HFDA0.31-co-VAc0.69, Mw = 3539) is higher than that of P(HFDA0.49-co-VAc0.51) under the same conditions, but it does not dissolve easily at higher concentrations. The main reason is that the high VAc content increases the number of methyl groups in the polymer chain, resulting in a decrease in chain flexibility. Therefore, the length and composition of polymer side chains can greatly affect the thickening performance.
Kilic et al. [63] synthesized a series of aromatic acrylate-fluoroacrylate copolymer supercritical CO2 thickeners and studied their structure and their mechanism of thickening supercritical CO2. The results show that the thickening ability of these copolymers first increases and then decreases as the content of aromatic acrylate groups increases. The best composition is a 29% phenyl acrylate-71% fluoroacrylate copolymer: only 5 wt% of this copolymer is needed to increase the viscosity of the supercritical CO2 system by 205 times at 21.85 °C and 41.4 MPa. It was also found that 26% phenyl acrylate (PHA)-74% fluoroacrylate (FA) has a better thickening effect than 27% cyclohexyl acrylate (CHA)-74% FA. This proves that π-π stacking between aromatic rings plays a crucial role in thickening supercritical CO2.
In addition, Goicochea et al. [54] also used molecular simulation to study the thickening properties of the polymer HFDA. Research shows that the thickening principle of fluorinated polymers has two main aspects. On the one hand, the fluorocarbon groups in the molecules effectively enhance the CO2-philic properties of the polymers; on the other hand, the coupling between polymer molecules, namely the π-π association between styrene units, is stronger than the intramolecular interaction, making it difficult for polymer molecules to diffuse and aggregate and hindering the flow of CO2 molecules. This further enhances the thickening of the supercritical CO2 system.
According to the above research results, fluorine-containing polymer thickeners have impressive CO2-philic and CO2-thickening properties. However, the economic cost of such fluorinated polymers is too high, and they cannot be metabolized by organisms in the ecosystem. They can also cause varying degrees of damage to organisms, such as weakening germ cell activity, interfering with enzyme activity, and damaging cell membrane structures [22]. Nevertheless, research on this type of polymer provides theoretical guidance for the future design of economical, environmentally friendly, and pollution-free supercritical CO2 thickeners.
Silicone Polymer
Silicone polymers show reliable thickening performance and are also pollution-free [64]; thus, they are an ideal potential supercritical CO2 thickener.
Bae et al. [65,66] used polydimethylsiloxane (PDMS) as a thickener for supercritical CO2. Research shows that at 54 °C and 17.2 MPa, the viscosity of the 4% PDMS thickener + 20% toluene cosolvent + 76% liquid CO2 system increases to a maximum of 1.2 mPa•s. Compared with pure supercritical CO2, the viscosity increased by 30 times, but the disadvantage is that a large amount of cosolvent needs to be added. Zhao et al. [67] also used PDMS to thicken supercritical CO2, the difference being that kerosene was used as the cosolvent, because kerosene has a better solubilizing effect than toluene. The results show that at 51.85 °C, the viscosity of the 5% PDMS thickener + 5% kerosene cosolvent + 90% liquid CO2 system increases to 4.67 mPa•s, an increase of 54 times, while the amount of cosolvent is reduced.
Fink et al. [68] studied the feasibility of side-chain functionalization to improve the thickening properties of silicone polymers. The results show that silicone polymers with appropriate amounts of side-chain functionalization behave similarly to fluorinated polyether materials in supercritical CO2. Kilic et al. [68,69] enhanced the solubility of PDMS in supercritical CO2 through functionalization with propyldimethylamine. O'Brien et al. [70] synthesized a series of aromatic amidated polydimethylsiloxanes (PDMS), as shown in Figure 5 of Ref. [70], and studied their solubility and thickening properties in supercritical CO2. The results show that PDMS with anthraquinone-2-carboxamide (AQCA) end groups can thicken supercritical CO2 with hexane as a cosolvent, as shown in Figure 6 of Ref. [70]. The reason is that the content of CO2-philic groups and benzene ring groups in AQCA-containing PDMS is low, so hexane is needed to thicken supercritical CO2.
Li et al. [71] synthesized a silicone terpolymer using 0.09 g of tetramethylammonium hydroxide catalyst and a 2:1 molar ratio of aminopropyltriethoxysilane to methyltriethoxysilane. At 35 °C and 12 MPa, 3 wt% silicone terpolymer with 7 wt% toluene can increase the viscosity of the supercritical CO2 system by 5.7 times. The mechanisms of the silicone terpolymer and toluene are shown in Figure 2. CO2 interacts with the amino groups; specifically, the N in the amino groups donates electrons to the C in CO2, and CO2 sits above the N. The hydroxyl groups enhance the stability of the spatial network structure formed by the siloxane and CO2 molecules. The reason this type of polymer can thicken supercritical CO2 is that the hydroxyl group reinforces the spatial network structure; additionally, the chain structure generated by intermolecular interactions also plays a binding role, thereby increasing the flow resistance of CO2.
Wang et al. [72] synthesized epoxy-terminated polydimethylsiloxane, as shown in Figure 1 of Ref. [72], and studied its thickening performance in supercritical CO2. The results show that when the shear rate increases, the polymer network structure is destroyed by shear and the viscosity of the system decreases, that is, shear thinning. When the temperature rises, the activity and migration rate of the molecules in the system are enhanced, which weakens the intermolecular interactions, thereby destroying the network structure of the polymer and reducing the viscosity of the system. When the pressure increases in the range of 8-14 MPa, the degree of damage to the polymer's spatial network structure decreases and the viscosity increases.
Shen et al. [6] used benzoyl peroxide as the initiator and synthesized a graft copolymer of methylsilsesquioxane and vinyl acetate through graft polymerization, as shown in Figure 3 of Ref. [6]. This thickener does not contain fluorine. Studies have shown that the grafting of PVAc enhances the solubility of the siloxane polymer in supercritical CO2, while the thickening effect arises from the network structure generated by the polymethylsilsesquioxane. This research provides ideas for solving the solubility problem of polymers in supercritical CO2.
Thickening Mechanism
An ideal thickener should have a certain number of CO2-philic groups (ether, carbonyl, acetate, acetyl, sugar ester groups, etc.) and CO2-phobic groups in the molecule. CO2-philic groups help improve the solubility of the thickener, while CO2-phobic groups enhance the viscosity of supercritical CO2 through intermolecular association.
The introduced chain-like CO2-philic groups should have good flexibility, low cohesive energy, and high free volume. CO2-phobic groups can associate, or their chains can cross and entangle with each other, to form a spatial network structure that restricts the flow of CO2 molecules [32]. According to the results of Sagisaka et al. [73], at a certain concentration the surfactant self-assembles into linear or rod-like micelles that intertwine with each other, forming a network structure and increasing the viscosity of CO2. It was also observed that rod-like reverse micelles with different length-to-diameter ratios exhibit different thickening effects on supercritical CO2 at the same temperature and pressure. For instance, at 45 °C and 350 bar, about 5-7 wt% of anisotropic reverse micelles with rod lengths of approximately 166 Å and 583 Å increase the viscosity by 24% and 200%, respectively. Meanwhile, neither group can be too abundant or too scarce: if there are too few CO2-phobic groups, the intermolecular association is insufficient to achieve the desired thickening effect, while too many CO2-phobic groups impair the solubility of the thickener [27]. Kilic et al. [63] showed that the thickening properties of aromatic acrylate-fluoroacrylate copolymers first increase and then decrease with the content of aromatic acrylate groups. Copolymers containing 29% phenyl acrylate and 71% fluoroacrylate were found to be the most desirable; the addition of only 5 wt% of the copolymer could increase the viscosity of supercritical CO2 by up to a factor of 205. Thus, research seeking an optimal ratio or dosage is still needed.
Generally, for surfactant thickeners, one end should be soluble in CO2 and the other end should be soluble in water or organic solvents, to reduce the surface tension of water or organic solvents in CO2. At the same time, two conditions should be satisfied to form reverse micelles: one is a multiply branched non-polar tail chain with a low cohesive energy density, and the other is a hydrogen-bonding interaction between the polar head group and water [74].
Hydrocarbon thickeners should meet two requirements. On the one hand, they require large free volume, high chain flexibility, weak intermolecular interactions, a low glass transition temperature, and small steric hindrance, which help the polymer dissolve in CO2. On the other hand, the polymer chains should cross and entangle with each other to form a spatial network structure, which hinders the flow of CO2 molecules and thickens the CO2 [27].
Fluorine-containing polymer thickeners are obtained by fluorination of hydrocarbon polymers. They are usually weakly polar and have dipole-quadrupole interactions with CO2 molecules. At the same time, their molecular chains can cross and entangle with each other to form a spatial network structure.
Silicone thickeners generally require cosolvents to improve their solubility and thickening effects. The π-π stacking between phenyl groups produces intermolecular interactions, which gives the thickening effect in supercritical CO2.
Regarding the thickening properties of a thickener in supercritical CO2, in addition to the molecular structure and the ratio of CO2-philic to CO2-phobic groups mentioned above, temperature, pressure, and thickener molecular weight are also important influencing factors. Temperature and pressure conditions vary across different reservoir depths, leading to differences in the corresponding properties, and the viscosity of supercritical CO2 changes in response. Much research has been conducted [18,75] on the effects of temperature and pressure on CO2 viscosity; the results reveal a decreasing trend in CO2 viscosity with increasing temperature and an increasing trend with increasing pressure.
The Flow of CO2 in Porous Media
The displacement process of CO2 in heterogeneous porous media is one of the most important mechanisms [76]. Fluid physical parameters cause phase-flow instability during the CO2 displacement process. Research shows that when CO2 is injected into deep saline aquifers, it displaces the pore fluid in a supercritical state. The dominant force in the displacement process is the viscous force, which affects the form and distribution of fluid flow during displacement [77]. Simulation results show that under low viscosity enhancement, the displacement process is clearly unstable and exhibits a pronounced fingering phenomenon throughout; conversely, under higher viscosity enhancement, the displacement process is more stable, and no obvious fingering occurs [78].
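The stabilizing effect of viscosity enhancement can be illustrated with a simplified viscosity-only mobility ratio, ignoring relative permeability effects; the fluid viscosities and enhancement factors below are assumed for illustration only.

```python
def mobility_ratio(mu_displaced, mu_displacing):
    """Viscosity-only mobility ratio M = mu_displaced / mu_displacing.

    M > 1 favors viscous fingering of the injected CO2 through the oil;
    bringing M toward 1 stabilizes the displacement front. Relative
    permeability effects are ignored in this simplification.
    """
    return mu_displaced / mu_displacing

mu_oil = 2.0    # hypothetical crude oil viscosity, mPa*s
mu_co2 = 0.04   # pure supercritical CO2, mPa*s
for factor in (1, 10, 30):  # assumed viscosity enhancement from a thickener
    print(f"{factor:>2}x thickening -> M = {mobility_ratio(mu_oil, mu_co2 * factor):.1f}")
# 1x -> 50.0 (strong fingering), 10x -> 5.0, 30x -> 1.7 (far more stable)
```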
Adsorption in Porous Media
During CO2 fracturing, CO2 thickeners may remain in the shale reservoir, and these chemicals may pollute the reservoir environment. Therefore, low adsorption of CO2 thickeners in porous media is needed in practical industrial applications: if the adsorption within the porous medium is excessive, it may block the pores [58]. Afra et al. [23] conducted experiments on a variety of commercially available thickeners, and the results showed that several fluorine-containing thickeners adsorbed significantly onto the surface of porous media [23]. They also used MD simulations to study the adsorption problem and found good agreement between the simulation and experimental results, providing an effective theoretical approach for studying the adsorption of thickeners in porous media [23]. Li et al. [79] modified a thickener to prepare a new type of PDMS and investigated its contact angle. The results showed that the contact angle of conventional PDMS decreased markedly from 138° to 99° with increasing temperature, while the contact angle of the new PDMS decreased only from 135° to 127° [79]. By comparison, the novel PDMS adsorbs less on the reservoir surface, which is more favorable for reducing contamination of the reservoir by the thickener.
In conclusion, the thickener should not exhibit excessive adsorption on the surface of porous media, as this would lead to a poorer process, economic problems, and excessively large reductions in permeability due to wettability alteration. If the thickener is brine-soluble (which is unlikely, given the low mutual solubility of CO2 and water), it may separate out into the brine within the porous media. If the thickener is crude-oil-soluble, a portion of it may ultimately contaminate the crude oil product and potentially cause problems in downstream processing equipment within refineries [23].
Summary and Outlook
This article provides a comprehensive review of four types of polymer thickeners, namely surfactants, hydrocarbons, fluorine-containing polymers, and silicones. We focused on analyzing their solubility and thickening characteristics in supercritical CO2 systems, and also explained the thickening mechanisms. Furthermore, we discussed the flow and adsorption of thickeners in porous media.
For surfactants, the thickening performance is adequate, while the solubility is far from satisfactory [39,43]. For example, 1 wt% of the surfactant thickener tributyltin fluoride with 40-45 wt% pentane as cosolvent can increase the viscosity by 10-100 times [39]. However, the solubility of the thickener is poor, requiring a significant amount of cosolvent or CO2-philic groups. Silicones show similar solubility and thickening characteristics to surfactants: 5% PDMS thickener with a small amount of cosolvent increases the viscosity to 4.67 mPa·s, an increase of up to 54 times [67]. The economic cost and environmental problems of cosolvents have become an urgent issue to be addressed.
Among the four thickeners, fluorinated thickeners have the most outstanding thickening properties, brought about by the interactions between fluorine and CO2. According to our research, the addition of 5 wt% PolyFast can increase the viscosity of supercritical CO2 by up to 400 times [67], the highest on record to our knowledge. This copolymer also has excellent solubility under reservoir conditions. However, this type of thickener is not commonly used, mainly because of its economic cost and biological toxicity. At present, most other polymer thickeners still require cosolvents to thicken liquid CO2, which is not environmentally friendly.
It was found that PVAc is one of the best CO2-philic hydrocarbon homopolymers because of its acetate group [73], yet its thickening performance is not ideal at present. However, PVAc is an economical and environmentally friendly thickener, and an abundance of research has been conducted to improve the viscosity achievable with it [32,52,53], for example by forming binary copolymers or spatial network structures. This has made PVAc-based systems mainstream thickeners at work sites.
In recent years, the thickening mechanism and the improvement of thickeners have been investigated through molecular modeling of polymer-CO2 systems. Regarding the thickening mechanism, it has been recognized that CO2-soluble polymers should have a moderately branched structure, high free volume, a low solubility parameter, and Lewis acid-base groups. By introducing CO2-philic groups, the interaction between the thickener molecules and CO2 can be strengthened, thereby increasing the solubility of the thickener in supercritical CO2. The polymers should also contain CO2-phobic groups, which can combine with neighboring CO2-phobic groups to form a viscosity-enhancing network structure. Furthermore, the thickeners should exhibit low adsorption onto rock to minimize blockage of rock pores, maintain the fluidity of the fracturing fluid, and reduce pollution and damage to the rock environment. Therefore, further research may focus on these aspects, addressing economic and technological barriers as well as environmental concerns. The development of efficient, environmentally friendly, and cost-controllable thickeners can help promote applications at engineering sites.
One study introduced a fluoroalkyl group into the trialkyltin fluoride polymer molecule to obtain a semi-fluorinated trialkyltin fluoride thickener. Research shows that this thickener has high solubility in supercritical CO2 systems without the use of any cosolvent. Under 10-18 MPa, a 4% mass fraction of semi-fluorinated trialkyltin fluoride can increase the system viscosity by up to 3.3 times. Enick et al. [43] also reacted perfluoropolyether glycol with fluorinated diisocyanate to synthesize a fluorinated polyurethane thickener. At 25 °C and 25 MPa, 4% fluorinated polyurethane can increase the viscosity of the system by 1.7 times.
De Sitter Vacua in No-Scale Supergravity
No-scale supergravity is the appropriate general framework for low-energy effective field theories derived from string theory. The simplest no-scale Kähler potential with a single chiral field corresponds to a compactification to flat Minkowski space with a single volume modulus, but generalizations to single-field no-scale models with de Sitter vacua are also known. In this paper we generalize these de Sitter constructions to two- and multi-field models of the types occurring in string compactifications with more than one relevant modulus. We discuss the conditions for stability of the de Sitter solutions and holomorphy of the superpotential, and give examples whose superpotential contains only integer powers of the chiral fields.
Introduction
If one assumes that N = 1 supersymmetry holds down to energies hierarchically smaller than the Planck mass, low-energy dynamics must be governed by some N = 1 supergravity. It is known that the energy density in the present vacuum is very small compared, e.g., to typical energy scales in the Standard Model. It was therefore natural to look for N = 1 supergravity theories that yielded a vanishing cosmological constant without unnatural fine tuning, and a total scalar potential that is positive definite. The unique Kähler potential for such an N = 1 supergravity model with a single chiral superfield φ (up to canonical field redefinitions) was found in [1] to be
K = −3 ln(φ + φ†) .    (1)
In [2] this was dubbed 'no-scale supergravity', because the scale of supersymmetry breaking is undetermined at the tree level, and it was suggested that the scale might be set by perturbative corrections to the effective low-energy field theory. The single-field model (1) was explored in more detail in [3] (EKN), and the generalization to more superfields was developed in [4]. It was shown subsequently that no-scale supergravity emerges as the effective field theory resulting from a supersymmetry-preserving compactification of ten-dimensional supergravity, used as a proxy for compactification of heterotic string theory [6].
In recent years interest has grown in the possibility of string solutions in de Sitter space, for at least a couple of practical reasons. One is the discovery that the expansion of the Universe is accelerating due to non-vanishing vacuum energy that is small relative to the energy scale of the Standard Model [7]. The other is the growing observational support for inflationary cosmology [8], according to which the Universe underwent an early epoch of near-exponential quasi-de Sitter expansion driven by vacuum energy that was large compared with the energy scale of the Standard Model, but still hierarchically smaller than the Planck scale. At the time of writing there is an ongoing controversy whether string theory in fact admits consistent solutions in de Sitter space [9].
If string theory does indeed admit de Sitter solutions and approximate supersymmetry with scales hierarchically smaller than the string scale, their low-energy dynamics should be described by some suitable supergravity theory that is capable of incorporating the breaking of supersymmetry that is intrinsic to de Sitter space. Since string compactifications yield no-scale supergravity as an effective low-energy field theory, it is natural to investigate how de Sitter space could be accommodated within no-scale supergravity. This question was studied already in [3], and the purpose of this paper is to analyze it in more detail and generality, extending the previous single-field analyses of [3,11] to no-scale models with multiple superfields that are characteristic of generic string compactifications. These models may provide a useful guide to the possible forms of effective field theories describing the low-energy dynamics in de Sitter solutions of string theory, assuming that they exist.
The outline of this paper is as follows. In Section 2 we review the original motivation and construction of no-scale supergravity with a vanishing cosmological constant [1], and also review the construction in [3,11] of no-scale supergravity models with non-vanishing vacuum energy. Section 3 describes the extensions of these models to no-scale supergravity models with two chiral fields, which have an interesting geometrical visualization. The de Sitter construction is extended to multiple chiral fields in Section 4. In each case, we discuss the requirements of stability of the vacuum and holomorphy of the superpotential, and give examples of models whose superpotentials contain only integer powers of the chiral fields. Finally, Section 5 summarizes our conclusions and presents some thoughts for future work.
Single-Field Models
No-Scale Supergravity Models
We recall that the geometry of an N = 1 supergravity model is characterized by a Kähler potential K that is a Hermitian function of the complex chiral fields φ_i. The kinetic terms of these fields are
L_kin = K^j_i ∂_μ φ^i ∂^μ φ†_j , where K^j_i ≡ ∂²K / ∂φ^i ∂φ†_j    (2)
is the Kähler metric. Defining also K_i ≡ ∂K/∂φ^i and analogously its complex conjugate K^i, the tree-level effective potential is
V = e^K [ (W_i + K_i W) K^{−1 i}_j (W†^j + K^j W†) − 3 |W|² ] + ½ D^a D^a ,    (3)
where K^{−1 j}_i is the inverse of the Kähler metric (2) and ½ D^a D^a is the D-term contribution, which is absent for chiral fields that are gauge singlets, as we assume here.
In this Section we consider the case of a single chiral field φ, in which case it is easy to verify that the first term in (3) can be written in the form
V_F = e^K K^{−1 φ}_φ |W_φ + K_φ W|² ,
so that V = 0 for constant W requires K^{−1 φ}_φ K_φ K^φ = 3. It is then clear that the unique form of K with a Minkowski solution, for which V = 0, is
K = −3 ln( f(φ) + f†(φ†) ) ,
where f is an arbitrary analytic function. In fact, since physical results are unchanged by canonical field transformations, one can transform f(φ) → φ and recover the simple form (1) of the Kähler potential for a no-scale supergravity model with a single chiral field. We note that this Kähler potential describes a maximally-symmetric SU(1,1)/U(1) manifold whose Kähler curvature R^j_i ≡ ∂_i ∂^j ln K^j_i obeys the simple proportionality relation
R^j_i = (2/3) K^j_i ,
which is characteristic of an Einstein-Kähler manifold. This model was generalized in EKN [3], where general solutions for all flat potentials were found. The SU(1,1)/U(1) structure³ is preserved by the generalization, which corresponds (up to irrelevant field redefinitions) to the extended Kähler potential
K = −3α ln(φ + φ†) ,    (9)
where we assume α > 0, and W(φ) is the superpotential⁴. In this case the effective potential is
V = (φ + φ†)^{−3α} [ (φ + φ†)²/(3α) |W_φ − 3αW/(φ + φ†)|² − 3 |W|² ] .
EKN found 3 classes of solutions with a constant scalar potential [3], namely
1) W = a ,    (11)
2) W = a φ^{3α/2} ,    (12)
3) W = a ( φ^{3(α+√α)/2} − φ^{3(α−√α)/2} ) .    (13)
Solution 1) corresponds to the V = 0 Minkowski solution discussed above, whereas solutions 2) and 3) yield potentials that are constant in the real direction, but are unstable in the imaginary direction. As we discuss further below, stabilization in the imaginary direction is straightforward and allows these solutions to be used for realistic models with constant non-zero potentials in the real direction. We find that 2) leads to anti-de Sitter solutions with V = −3 · 2^{−3α} · a² and 3) leads to de Sitter solutions⁵ with V = 3 · 2^{2−3α} · a². We note that in the particular case α = 1 this reduces to W = a (φ³ − 1), which yields the de Sitter solution discussed in [11]. This was utilized in [14] when making the correspondence between no-scale supergravity and R² gravity. In the following subsections, we first generalize the Minkowski solution (11), and then show that de Sitter solutions can be obtained as combinations of Minkowski solutions. These aspects of the solutions will subsequently be used to generalize them to model theories with multiple moduli.
³ We note that in extended SU(N,1) no-scale models [4] that include N − 1 matter fields y_i, with the Kähler potential K = −3α log(φ + φ† − y_i y†_i/3), the Kähler curvature becomes R = (N + 1)/3α. Our constructions can be generalized to this case, but such generalizations lie beyond the scope of this paper.
⁴ Starobinsky-like models with α = 1 were discussed in [12]. Such models were later dubbed α-attractors in [13,11].
⁵ We correct here a typo in the third solution given in [3].
Minkowski Solutions
We consider the N = 1 no-scale supergravity model with a single complex chiral field φ described by the Kähler potential given in (9), where the superpotential W(φ) is a monomial of the form W = a φⁿ, and we seek the value of n that admits a Minkowski solution with V = 0. Defining φ ≡ x + iy, the potential along the real field direction x is given by
V = 2^{−3α} a² x^{2n−3α} [ (4/(3α)) (n − 3α/2)² − 3 ] .
We can obtain a Minkowski solution by setting to zero the term in the brackets:
(4/(3α)) (n − 3α/2)² − 3 = 0 .
Solving the above equation for n, we find two solutions [11]:
n± = (3/2) ( α ± √α ) .    (17)
We note that n− = 0 for α = 1, corresponding to the EKN solution (11) listed above. However, we see that in addition to this n = 0 solution, n = 3 also yields a Minkowski solution with V = 0 in all directions in field space. Although such solutions exist for any α, for the superpotential to be holomorphic we need n− ≥ 0, which requires α ≥ 1. Clearly, integer solutions for n are obtained whenever α is a perfect square [11].
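As a consistency check, the exponents in (17) can be recovered symbolically. The sketch below evaluates the scalar potential on the real slice, under the conventions reconstructed above, and solves for the exponents n that make it vanish; it is illustrative only and is not code from the paper.

```python
# Symbolic check of the Minkowski exponents n_pm of eq. (17).
import sympy as sp

x, a, alpha, n = sp.symbols("x a alpha n", positive=True)
W   = a * x**n                        # monomial superpotential on the real slice
Kx  = -3*alpha/(2*x)                  # K_phi for K = -3*alpha*ln(phi + phi^dagger)
Kpp = 3*alpha/(2*x)**2                # Kahler metric K_{phi phibar}
DW  = sp.diff(W, x) + Kx*W            # W_phi + K_phi*W at phi = x
bracket = sp.simplify(sp.powsimp((DW**2/Kpp - 3*W**2)/W**2, force=True))
print(sp.solve(bracket, n))
# -> [3*alpha/2 - 3*sqrt(alpha)/2, 3*alpha/2 + 3*sqrt(alpha)/2], i.e. eq. (17)
```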
It is possible to go from one superpotential to another via a Kähler transformation K → K + λ(φ) + λ†(φ†), W → e^{−λ(φ)} W, with λ(φ) = ±3√α ln φ. In general, the solutions (17) can be thought of as corresponding to the endpoints of a line segment of length 3√α centred at 3α/2. Though this appears trivial, extensions of this geometric visualization will be useful in the generalizations to multiple fields discussed below. For α ≠ 1, the two solutions yield V = 0 only along the real direction, and the mass squared of the imaginary component y along the real field direction for x > 0 and y = 0 vanishes at α = 1 and is positive for α > 1, with an x-dependence whose exponent carries a ± corresponding to the two solutions n±. From this it is clear that the Minkowski solutions are stable for α ≥ 1.
There are two aspects of the single-field model that we emphasize here, because they generalize in an interesting way to multi-field models. The first is the fact that there are two solutions for n and the second is that, when α = 1, we get a Minkowski solution with a potential that vanishes everywhere.
De Sitter Solutions
As was shown in EKN, de Sitter solutions can be found with the Kähler potential (9) and a superpotential of the form (13), which may be written as
W = a ( φ^{n+} − φ^{n−} ) ,
where n± are given in (17). In this case the potential along the real field direction y = 0 is
V = 3 · 2^{2−3α} a² .    (21)
Thus, the de Sitter solution is obtained by taking the difference of the two "endpoint" solutions mentioned above. Unfortunately, this de Sitter solution is not stable for finite α. However, this can be remedied by deforming the Kähler potential to the following form [15,12]:
K = −3α ln( φ + φ† + b (φ − φ†)⁴ ) .
The addition of the quartic stabilization term does not modify the potential in the real direction, which is still given by (21). However, the squared mass of the imaginary component y is now modified, and the stability requirement m²_y ≥ 0 is achieved when α ≥ 1. In Fig. 1 we plot the stabilized potential for a = b = α = 1, and we see that the potential is completely flat along the line y = 0 and is stable for all values of x > 0.
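A quick numerical spot-check of the constancy of (21), again under the reconstructed conventions:

```python
# Verify that W = a*(phi^{n+} - phi^{n-}) gives a constant de Sitter potential
# V = 3*2^{2-3*alpha}*a^2 along the real direction.
import numpy as np

alpha, a = 2.0, 1.0
n_p = 1.5*(alpha + np.sqrt(alpha))    # n_plus  from eq. (17)
n_m = 1.5*(alpha - np.sqrt(alpha))    # n_minus

def V(x):                             # scalar potential on the real slice y = 0
    W  = a*(x**n_p - x**n_m)
    Wp = a*(n_p*x**(n_p - 1) - n_m*x**(n_m - 1))
    DW = Wp - 3*alpha/(2*x)*W
    return (2*x)**(-3*alpha) * (DW**2 * (2*x)**2/(3*alpha) - 3*W**2)

x = np.linspace(0.5, 5.0, 5)
print(V(x))                           # constant array of 0.1875
print(3*2**(2 - 3*alpha)*a**2)        # eq. (21): also 0.1875 for alpha = 2
```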
Two-Field Models
Several of the features of the single-field model that we discussed in Section 2 generalize in an interesting geometrical way to models with N > 1 fields. We illustrate this first by considering in this Section the simplest generalization, i.e., two-field models.
Minkowski Solutions
We consider the following Kähler potential with two complex chiral fields:
K = −3α₁ ln(φ₁ + φ₁†) − 3α₂ ln(φ₂ + φ₂†) ,    (24)
with the following ansatz for the superpotential:
W = a φ₁^{n₁} φ₂^{n₂} .
Denoting the real and imaginary parts by φᵢ = xᵢ + iyᵢ, we find that the potential along the real field directions yᵢ = 0 is given by
V = 2^{−3(α₁+α₂)} a² x₁^{2n₁−3α₁} x₂^{2n₂−3α₂} [ (4/(3α₁))(n₁ − 3α₁/2)² + (4/(3α₂))(n₂ − 3α₂/2)² − 3 ] .
We see immediately that by setting
(4/(3α₁))(n₁ − 3α₁/2)² + (4/(3α₂))(n₂ − 3α₂/2)² = 3 ,    (27)
we obtain a Minkowski solution, V = 0. We observe that (27) describes an ellipse in the (n₁, n₂) plane centred at (3α₁/2, 3α₂/2). All choices of (n₁, n₂) lying on this ellipse yield a Minkowski solution. In this way, the line segment centred at 3α/2 in the single-field model that yielded Minkowski endpoints is generalized, and we obtain a one-dimensional continuum subspace of Minkowski solutions. We can conveniently parametrize the solutions for nᵢ in (27) as the points on the ellipse corresponding to unit vectors r = (r₁, r₂) with r₁² + r₂² = 1:
nᵢ = (3/2) [ αᵢ + rᵢ ( r₁²/α₁ + r₂²/α₂ )^{−1/2} ] , i = 1, 2 .    (28)
The unit vector r should be located starting at the centre of the ellipse, and defines a direction on its circumference. The operation r → −r in equation (28) takes a point on the ellipse to its antipodal point, an observation we use later to construct de Sitter solutions. We note also that holomorphy requires both n₁, n₂ ≥ 0, i.e.,
rᵢ ≥ −αᵢ ( r₁²/α₁ + r₂²/α₂ )^{1/2} , i = 1, 2 .    (29)
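The parametrization (28) can be checked numerically: every direction r yields exponents on the ellipse (27). A short sketch, assuming the reconstructed form of (28):

```python
# Check that eq. (28) always satisfies the Minkowski condition, eq. (27).
import numpy as np

alpha = np.array([0.5, 1.5])
for theta in np.linspace(0.0, 2*np.pi, 7):
    r = np.array([np.cos(theta), np.sin(theta)])       # unit vector
    S = np.sum(r**2 / alpha)
    n = 1.5*(alpha + r/np.sqrt(S))                     # eq. (28)
    lhs = np.sum(4.0/(3*alpha) * (n - 1.5*alpha)**2)   # LHS of eq. (27)
    print(f"r = ({r[0]:+.2f}, {r[1]:+.2f}) -> n = {n.round(3)}, condition = {lhs:.6f}")
# The printed condition equals 3 for every direction.
```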
As in the case of the single-field model, we can move from one point on the ellipse to another point via a Kähler transformation. This is possible because the superpotential is just a monomial.
Integer solutions for the values of nᵢ are also possible in the two-field case. The full set of single-field solutions remains valid for n₁± when n₂+ = n₂− (and similarly with 1 ↔ 2). More generally, solutions can be found by writing
nᵢ+ = 3λᵢ , nᵢ− = λᵢ ,
with λᵢ non-negative and λ₁ + λ₂ = 3. As one example out of an infinite number of solutions, choosing λ₁ = 1 and λ₂ = 2 gives (n₁+, n₁−) = (3, 1) and (n₂+, n₂−) = (6, 2). In general, points around the ellipse yield potentials that are flat only in the real direction and, as in the single-field model, may not be stable in the imaginary directions. The masses of the imaginary component fields y₁, y₂ are given by expressions whose positivity is controlled by the products n_{i+} n_{i−}, and the stability requirement m²_{yᵢ} ≥ 0 leads to the conditions (32). It is easy to see that if the stability conditions are satisfied, then the holomorphy conditions (29) are satisfied. Since the left-hand side of (32) is proportional to n_{i+} n_{i−}, points on the ellipse that give stable Minkowski solutions are those that are holomorphic, so long as their antipodal points are also holomorphic. However, given a choice of unit vector r, this condition is not satisfied for all choices of αᵢ. We show in Fig. 2 the allowed domain in the (α₁, α₂) plane for which the stability conditions (32) (and hence also the holomorphy conditions (29)) are satisfied, for two illustrative choices of the unit vector r. The allowed region for r = (1/√2, 1/√2) is shaded green, and behind it (shaded blue) is the allowed region for r = (1/√10, 3/√10). For both choices of r, there is a kink in the allowed domain where it meets the line given by α₁ + α₂ = 1. At the kink, for all choices of r, the potential is completely flat and vanishes in all directions in field space. The position of the kink can be calculated by solving the stability condition along this line: for the two examples shown in Fig. 2, r₁ = 1/√2 implies α₁ = 1/2 at the kink, and r₁ = 1/√10 implies α₁ = 1/4. In fact, because of the sign ambiguity, there are four unit vectors for each solution, corresponding to the ambiguous signs of r₁ and r₂. Another projection of the domain of stability is shown in Fig. 3, which displays the allowed regions in the (α₁, r₁²) plane for the fixed values α₂/α₁ = 1, 2, 3, 5, 10, as illustrated by the respective curves. Each pair of curves (red, green, purple, blue and black for increasing α₂/α₁) corresponds to the two equalities in (32), and the positivity inequalities are satisfied to the right of each pair of lines for a given value of α₂/α₁. For example, when α₂/α₁ = 1 (shown by the solid red curves), all values of r₁² are allowed if α₁ ≥ 1, while no values are allowed when α₁ < 1/2. The point where the curves meet corresponds to the kink at α₁ = α₂ = 1/2 and r₁² = 1/2 that was seen in Fig. 2, where the green shaded region touches the black line. When α₂/α₁ = 3 (shown by the medium-dashed purple curves), the kink occurs where these two curves meet, at α₁ = 1/4 and r₁² = 1/10. The lower ellipse (27) in the (n₁, n₂) plane shown in Fig. 4 corresponds to this second example. As this corresponds to the position of the kink, only a single value of r₁² = 1/10 is allowed. The four red spots in the figure correspond to the four different vectors r = (±1/√10, ±3/√10). These four unit vectors correspond to four different superpotentials via the relation (28), which give (n₁, n₂) = (3/4, 9/4), (3/4, 0), (0, 9/4), (0, 0).
When (α 1 , α 2 ) = (1/4, 3/4), each of the four superpotentials defined by the pair n i yields a true Minkowski solution. However, because we are at the kink, there are no other stable solutions.
Choosing a larger value of α₁ while keeping α₂/α₁ fixed would increase the allowed range in r₁² (as seen in Fig. 3) and would allow a continuum of stable Minkowski solutions along the real direction in field space. This is seen in the upper ellipse in Fig. 4, where we have chosen α₁ = 1/2 and α₂ = 3/2. In this case, the stability constraint, which can be read off Fig. 3 for α₂/α₁ = 3 at the chosen value of α₁, yields r₁ < 1/2. Unit vectors with r₁ < 1/2 correspond to arcs along the upper ellipse in Fig. 4. These are further shortened by the holomorphy requirement that nᵢ ≥ 0, and the resulting allowed solutions are shown by the red arc segments in the upper ellipse.
De Sitter Solutions
We recall that in the single-field model we were able to construct a de Sitter solution by combining the two superpotentials corresponding to Minkowski solutions that can be visualized as opposite ends of a line segment. In the two-field model, we have a continuum of superpotentials that give Minkowski solutions, which are described by an ellipse (27). In this case it is possible to construct new de Sitter solutions by combining superpotentials corresponding to antipodal points on the ellipse (27). For example, consider the following combined superpotential:
W = a ( φ₁^{n₁+} φ₂^{n₂+} − φ₁^{n₁−} φ₂^{n₂−} ) ,
where nᵢ± are given by (28) with unit vectors ±r. It is easy to see that the scalar potential in the real field direction is a de Sitter solution:
V = 3 · 2^{2−3(α₁+α₂)} a²    (36)
in this case. For the example described by the lower ellipse in Fig. 4, one example of a de Sitter solution is found by taking antipodal points corresponding to the red spots. When r = (1/√10, 3/√10), we have W = a(φ₁^{3/4} φ₂^{9/4} − 1), which is the unique solution with a holomorphic superpotential that results in a flat de Sitter potential in the real direction. However, as we discuss further below, this solution is actually not stable.
As an alternative example, we consider a two-field model with α₁ = 1 and α₂ = 2. The Minkowski solutions in this case are described by the ellipse (27) in (n₁, n₂) space shown in Fig. 5, whose centre is at (3/2, 3). In this case, the entire ellipse can be used to construct de Sitter solutions, as all possible unit vectors r are allowed since α₁ ≥ 1 (see Fig. 3). As in the previous example, we can use antipodal points to construct de Sitter solutions, as illustrated in Fig. 5. One such pair of antipodal points is (3, 3), (0, 3), corresponding to r = (1, 0), indicated by the horizontal orange line in Fig. 5. The corresponding superpotential is
W = a ( φ₁³ φ₂³ − φ₂³ ) ,
so that the fields appear in the superpotential with positive integer powers. This example yields a de Sitter potential with the potential value
V = 3 · 2^{2−3(α₁+α₂)} a² = (3/128) a² .
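A numerical spot-check of this example, under the same reconstructed conventions (the constant implied by (36) is 3/128 for a = 1):

```python
# Two-field de Sitter check for alpha = (1, 2) with antipodes (3,3) and (0,3).
import numpy as np

a = 1.0
alpha   = np.array([1.0, 2.0])
n_plus  = np.array([3.0, 3.0])
n_minus = np.array([0.0, 3.0])

def V(x):                             # potential on the real slice y_i = 0
    Wp, Wm = a*np.prod(x**n_plus), a*np.prod(x**n_minus)
    W = Wp - Wm
    total = 0.0
    for i in range(2):
        Wi  = (n_plus[i]*Wp - n_minus[i]*Wm) / x[i]    # dW/dphi_i at phi = x
        DWi = Wi - 3*alpha[i]/(2*x[i]) * W
        total += DWi**2 * (2*x[i])**2 / (3*alpha[i])
    return np.prod((2*x)**(-3*alpha)) * (total - 3*W**2)

for x in ([1.0, 1.0], [0.7, 2.0], [3.0, 0.4]):
    print(V(np.array(x)))             # each equals 3/128 = 0.0234375
```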
Stability Analysis
As in the single-field case, the de Sitter solutions of the two-field model require modification in order to be stable. Stable solutions can easily be found by deforming the Kähler potential to include stabilizing quartic terms:
K = −3α₁ ln( φ₁ + φ₁† + b₁ (φ₁ − φ₁†)⁴ ) − 3α₂ ln( φ₂ + φ₂† + b₂ (φ₂ − φ₂†)⁴ ) .
With this modification the potential along the real field directions is still given by equation (36).
To prove the stability of the two-field de Sitter solution with the quartic modification of the Kähler potential, we calculate the Hessian matrix ∂²V/∂yᵢ∂yⱼ (i, j = 1, 2) along the real field directions and demand that it be positive semi-definite. The Hessian matrix along the real field directions is positive semi-definite if the condition (45) is satisfied. The stability condition (45) for generic α₁, α₂, b₁, b₂ and r is complicated, and a general stability analysis is intractable, so we have considered the simplified case α₁ = α₂ ≡ α and r = (1/√2, 1/√2), for which the positivity condition (45) simplifies. Eliminating x₂ in favour of x₁ and w via equation (44), the inequality becomes (48). We note that the term (96 b₁ x₁³ − 1) dominates for x₁ ≫ 1 and the term (96 b₂ w^{√(2/α)} x₁³ − 1) dominates for x₁ ≪ 1, implying that there is an extremum at some intermediate value of x₁, and that it is a global extremum. Whether it is a maximum or a minimum depends on the sign of 2α(1 + w + w²) − (1 + w)², which is non-negative for α ≥ (1 + w)² / [2(1 + w + w²)]. This is a necessary condition for the inequality (48) to be satisfied. We have not explored the full range of possible values of b₁ and b₂ when α₁ = α₂ = α, but have checked that the stability condition (48) is always satisfied if b₁ = b₂ = 1 and α ≥ 2/3, irrespective of the value of w. We have also found that when α₁ ≠ α₂, stability requires α₁ + α₂ ≥ 4/3. We have also considered the case r = (0, 1) with b₁ = b₂ = 1. The inequality (46) reduces in this case to a condition that is always satisfied for α₂ ≥ 1. It is easy to check that the same is true for the case r = (1, 0). Based on these cases and the previous example with r = (1/√2, 1/√2), we expect that there are generic stable solutions for a range of r in the first and third quadrants, where r₁/r₂ > 0. However, the situation is different when r₁/r₂ < 0. We find that the inequality (45) cannot be satisfied for r = (−1/√2, 1/√2) and b₁ = b₂ = 1, so there are no stable de Sitter solutions there, and we expect the same to be the case for other choices of r in the second or fourth quadrant.
In summary, we have established the existence of stable de Sitter solutions only when r is in either the first or third quadrant.
N-field models
Finally, we generalize the above set of examples to models with multiple fields N > 2.
Minkowski Solutions
The natural generalization of the Kähler potential in (24) is simply a sum of N similar terms:
K = − Σᵢ₌₁ᴺ 3αᵢ ln( φᵢ + φᵢ† ) .
Similarly, we adopt the following ansatz for the superpotential:
W = a φ₁^{n₁} ··· φ_N^{n_N} ,
in which case the potential along the real field directions xᵢ is
V = a² Πᵢ (2xᵢ)^{−3αᵢ} xᵢ^{2nᵢ} [ Σᵢ (4/(3αᵢ)) (nᵢ − 3αᵢ/2)² − 3 ] .
We can obtain Minkowski solutions along the real field directions by setting
Σᵢ (4/(3αᵢ)) (nᵢ − 3αᵢ/2)² = 3 ,    (54)
which describes an ellipsoid in (n₁, ..., n_N) space whose centre is at (3α₁/2, ..., 3α_N/2). Once again we find a continuum of Minkowski solutions. The points on the ellipsoid can be parametrized conveniently using an N-dimensional unit vector r:
nᵢ = (3/2) [ αᵢ + rᵢ ( Σⱼ rⱼ²/αⱼ )^{−1/2} ] ,
where the unit vector r is to be considered as anchored at the centre of the ellipsoid. To ensure holomorphy of the superpotential we need nᵢ ≥ 0. For stability, we impose conditions (57) similar to (32), namely that the masses-squared of the imaginary field components satisfy m²_{yᵢ} ≥ 0 for all i. As in two-field models, ensuring these stability conditions are satisfied implies that the holomorphy conditions are also satisfied. For a given unit vector r, one can ask what values of α₁, ..., α_N satisfy the stability conditions. We find a multidimensional region analogous to that in Fig. 2, with a vertex that satisfies α₁ + ... + α_N = 1. We show in Fig. 6 the allowed region of α₁, α₂ and α₃ for a three-field model with r = (1/√3, 1/√3, 1/√3). The vertex is a special solution that corresponds to V = 0 in both the real and imaginary field directions. When the sign of one of the components of r is changed, the region in (α₁, ..., α_N) space that satisfies (57) remains the same. Therefore, there are 2^N unit vectors, each differing only in the sign of the components, that have the same vertex solution.
The above observations on the vertex solution can be summarized as follows. When Σᵢ αᵢ = 1, there are 2^N superpotentials of the form
W = a φ_{i₁}^{3α_{i₁}} φ_{i₂}^{3α_{i₂}} ··· φ_{iₙ}^{3α_{iₙ}} ,
where {i₁, ..., iₙ} (n ≤ N) is a subset of {1, 2, ..., N}, that all give V = 0 in both the real and imaginary field directions.
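The vertex construction is easy to verify by enumerating the 2^N subsets; a sketch for N = 3, assuming the real-slice potential reconstructed above:

```python
# At the vertex sum(alpha) = 1, all 2^N monomials with exponents 3*alpha_i or 0
# give V = 0 along the real directions.
import numpy as np
from itertools import product

a = 1.0
alpha = np.array([0.2, 0.3, 0.5])     # sums to 1 (the vertex)

def V(n, x):                          # real-slice potential for W = a*prod(x_i^n_i)
    W  = a*np.prod(x**n)
    DW = (n/x - 3*alpha/(2*x)) * W    # component-wise W_i + K_i*W
    total = np.sum(DW**2 * (2*x)**2 / (3*alpha))
    return np.prod((2*x)**(-3*alpha)) * (total - 3*W**2)

x = np.array([0.8, 1.3, 2.1])
for mask in product([0, 1], repeat=3):        # the 2^3 subsets S
    n = 3*alpha*np.array(mask)                # exponents 3*alpha_i or 0
    print(mask, f"V = {V(n, x):+.3e}")        # zero to machine precision
```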
De Sitter Solutions
Finally we discuss de Sitter solutions in N-field models. Here the Kähler potential is again the N-field sum introduced above and, as in the two-field case, the superpotential may be constructed from two antipodal points of the ellipsoid (54):
W = a ( Πᵢ φᵢ^{nᵢ+} − Πᵢ φᵢ^{nᵢ−} ) ,
where the exponents are given by
nᵢ± = (3/2) [ αᵢ ± rᵢ ( Σⱼ rⱼ²/αⱼ )^{−1/2} ] ,
and the potential along the real field directions is then
V = 3 · 2^{2−3Σᵢαᵢ} a² .
We use a simple three-field model with α₁ = 2, α₂ = 2 and α₃ = 4 for illustration. The Minkowski solutions are described by an ellipsoid in (n₁, n₂, n₃) space centred at (3, 3, 6), which is shown in Fig. 7. To construct de Sitter solutions for this model, we choose the antipodal points (3, 3, 9), (3, 3, 3) corresponding to the unit vector r = (0, 0, 1), which yield the superpotential:
W = a φ₁³ φ₂³ ( φ₃⁹ − φ₃³ ) .
This yields a de Sitter potential along the real field directions with potential
V = 3 · 2^{2−3(α₁+α₂+α₃)} a² = 3 · 2^{−22} a² .
Stability Analysis
The stability analysis of the de Sitter solution in the N-field model is difficult, as it requires finding the eigenvalues of an N × N matrix. However, as in the two-field model, we do not expect the solution to be stable unless the Kähler potential is deformed, e.g., to
K = − Σᵢ 3αᵢ ln( φᵢ + φᵢ† + bᵢ (φᵢ − φᵢ†)⁴ ) .
With this modification, for any given unit vector r there should exist a region in (α₁, ..., α_N) space where the de Sitter solution is stable.
To demonstrate this in a specific three-field example, we consider the model with three chiral fields S, T, U that was considered in [16]. This model is defined by the Kähler potential
K = −ln(S + S†) − 3 ln(T + T†) − 3 ln(U + U†) ,
and is of particular interest as it arises in the compactification of Type IIB string theory on T⁶/Z₂ × Z₂. The three chiral fields are then the axiodilaton S, a volume modulus T and a complex structure modulus U. One expects that the perturbative contribution to the superpotential should be a polynomial and that the non-perturbative contribution would have a decaying exponential form. For our analysis we assume that the powers of the fields in the superpotential could also be fractional. In our notation, this STU model has α₁ = 1/3, α₂ = 1 and α₃ = 1. We first construct a Minkowski solution. We can use the stability conditions to find a unit vector r and construct an appropriate superpotential. One such unit vector is r = (0, 1, 0). This leads to a superpotential of the form
W = a S^{1/2} T³ U^{3/2} ,
which gives a stable Minkowski solution V = 0 along the real field directions. In order to construct de Sitter solutions we add stabilization terms to the Kähler potential:
K = −ln( S + S† + b_S (S − S†)⁴ ) − 3 ln( T + T† + b_T (T − T†)⁴ ) − 3 ln( U + U† + b_U (U − U†)⁴ ) .
As discussed above, we use antipodal points to construct the superpotential, choosing r = (0, ±1, 0), in which case:
W = a S^{1/2} U^{3/2} ( T³ − 1 ) .
With this we get a de Sitter solution along the real field directions with potential
V = 3 · 2^{2−3(α₁+α₂+α₃)} a² = (3/32) a² .
In order to check whether the de Sitter solution is stable for the antipodal points that we have chosen, we calculate the Hessian matrix ∂²V/∂yᵢ∂yⱼ (i, j = 1, 2, 3) along the real field directions, and find that it is diagonal, so the eigenvalues are simply the diagonal entries.
For the Hessian matrix to be positive semi-definite we need a single non-trivial condition, which is independent of b_S and b_U. Therefore, we simply need this condition to be satisfied by a suitable choice of b_T, with no restriction on b_S and b_U.
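As a consistency check of the superpotential reconstructed above and of the quoted de Sitter value, the real-slice potential can be evaluated numerically:

```python
# STU de Sitter check: alpha = (1/3, 1, 1), antipodes with and without T^3.
import numpy as np

a = 1.0
alpha   = np.array([1/3, 1.0, 1.0])              # (S, T, U)
n_plus  = np.array([0.5, 3.0, 1.5])              # W_+ = a*S^{1/2} T^3 U^{3/2}
n_minus = np.array([0.5, 0.0, 1.5])              # W_- = a*S^{1/2} U^{3/2}

def V(x):
    Wp, Wm = a*np.prod(x**n_plus), a*np.prod(x**n_minus)
    W  = Wp - Wm
    DW = (n_plus*Wp - n_minus*Wm)/x - 3*alpha/(2*x)*W
    total = np.sum(DW**2 * (2*x)**2 / (3*alpha))
    return np.prod((2*x)**(-3*alpha)) * (total - 3*W**2)

for x in ([1.0, 1.0, 1.0], [0.6, 2.0, 1.3]):
    print(V(np.array(x)))                        # each equals 3/32 = 0.09375
```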
Conclusion and Outlook
Generalizing previous discussions of de Sitter solutions in single-field no-scale models [3,11,14], in this paper we have discussed de Sitter solutions in multi-field no-scale models as may appear in realistic string compactifications with multiple moduli. As a preliminary, we showed that the space of Minkowski vacua in multi-field no-scale models is characterized by the surface of an ellipsoid. The parameters in these models are the coefficients (α₁, ..., α_N) in the generalized no-scale Kähler potential and a unit vector r that selects a particular pair of antipodal points on this ellipsoid, whose centre is located at (3α₁/2, ..., 3α_N/2). Requiring the stability of Minkowski solutions for a fixed r leads us to a region in (α₁, ..., α_N) space with a vertex that is a special point where Σᵢ₌₁ᴺ αᵢ = 1. Such points describe Minkowski vacua with potentials that are flat in both the real and imaginary field directions. In this way we constructed 2^N monomial (in each field) superpotentials for models with Σᵢ₌₁ᴺ αᵢ = 1 that yield acceptable Minkowski vacua. The exponent of each monomial is determined by the coefficients αᵢ and the components of the unit vector r. We then constructed de Sitter solutions by combining the superpotentials at antipodal points, generalizing a construction given originally in the single-field case in [3]. These de Sitter solutions are unstable if the simple no-scale Kähler potential is used, and require stabilization. We showed that modifying the Kähler potential with a quartic term stabilizes a specific two-field model with α₁ = α₂ = α and r = (1/√2, 1/√2) for α ≥ 2/3, and we expect the stability to hold for other generic r for suitable ranges of α₁, α₂. We also expect that similar stable de Sitter solutions exist for N-field models under certain conditions, as demonstrated explicitly in a specific three-field model motivated by the compactification of Type IIB string theory [16].
We note that satisfying the stability requirement also ensures that the superpotential is holomorphic in the Minkowski case, i.e., contains only positive powers of the chiral fields, whereas this is not necessarily true in the de Sitter case. It is easy to find infinite discrete series of models for which these powers are integral, and we have provided a number of illustrative single- and multi-field examples.
As noted in the Introduction, it is currently debated whether string theory admits de Sitter solutions [9]. If this were not the case, measurements of the accelerating expansion of the Universe [7] and the continuing success of cosmological inflation [8] would suggest that our Universe lies in the swampland. Our working hypothesis is that this is not the case, and that deeper understanding of string theory will reveal how it can accommodate de Sitter solutions. Since no-scale supergravity is the appropriate framework for discussing cosmology at scales hierarchically smaller than the string scale, assuming also that N = 1 supersymmetry holds down to energies ≪ m Planck , the explorations in this paper may provide a helpful guide to the structure of the low-energy effective field theories of de Sitter string solutions. As such, they may even provide some useful signposts towards the construction of such solutions.
Southern-Hemisphere high-latitude stratospheric warming revisited
Previous studies showed significant stratospheric warming at Southern-Hemisphere (SH) high latitudes in September and October over 1979-2006. The warming trend center was located over the Southern Ocean poleward of the western Pacific in September, with a maximum trend of about 2.8 K/decade. The warming trends in October showed a dipole pattern, with the warming center over the Ross and Amundsen Seas and a maximum warming trend of about 2.6 K/decade. In the present study, we revisit the problem of SH stratospheric warming in the recent decade. It is found that the SH high-latitude stratosphere continued warming in September and October over 2007-2017, but with very different spatial patterns. Multiple linear regression demonstrates that ozone increases play an important role in the SH high-latitude stratospheric warming in September and November, while changes in the Brewer-Dobson circulation contribute little to the warming. This is different from the situation over 1979-2006, when the SH high-latitude stratospheric warming was mainly caused by the strengthening of the Brewer-Dobson circulation and the eastward shift of the warming center. Simulations forced with observed ozone changes over 2007-2017 show warming trends, suggesting that the observed warming trends over 2007-2017 are at least partly due to ozone recovery. The warming trends due to ozone recovery have important implications for stratospheric, tropospheric and surface climates in the SH.
Introduction
It is well known that the ozone layer experienced depletion in the last quarter of the twentieth century. In particular, severe ozone depletion was found in the Antarctic stratosphere in austral spring, the so-called Antarctic Ozone Hole. Associated with ozone depletion, the Antarctic stratosphere demonstrated strong cooling trends in austral spring and summer from the late 1970s to the late 1990s (Randel and Wu 1999; Solomon 1999; Thompson and Solomon 2002). It is thought that the cooling trend is mainly caused by the radiative effect of severe ozone depletion (Ramaswamy et al. 2001), and that increasing greenhouse gases also partly contributed to the cooling trends (Langematz et al. 2003) through their radiative cooling effect in the stratosphere.
In contrast to the cooling trend, Johanson and Fu (2007) and Hu and Fu (2009) found warming trends over a large portion of the Southern-Hemisphere (SH) high-latitude stratosphere in September and October, with the warming and cooling trends together demonstrating a wavenumber-1 pattern. Their results, derived from satellite Microwave Sounding Unit (MSU) observations, show the largest warming of 7-8 °C in September and October over the period 1979-2006. Warming trends are also found at all stratospheric levels in the NCEP/NCAR reanalysis, and the maximum warming is about 11 °C at 30 and 20 hPa. In fact, warming signals can also be identified in earlier works (Ramaswamy et al. 1996; Randel and Wu 1999). However, little attention was paid to the warming trends because all these studies had concentrated on ozone-induced stratospheric cooling.
It is known that polar stratospheric temperatures are determined by both radiative and dynamical processes (Andrews et al. 1987; Hu and Tung 2002). The radiative effects of both ozone depletion and increasing greenhouse gases lead to stratospheric cooling. The warming trends are therefore likely a result of wave-driven dynamic heating. Indeed, Hu and Fu (2009) demonstrated that the SH high-latitude stratospheric warming is caused by enhanced wave-driven dynamic heating. That is, increasing wave activity caused a strengthening of the Brewer-Dobson circulation (BDC), which consequently leads to stronger downward motion in the polar region and enhanced adiabatic heating at SH high latitudes. They showed that the stratospheric warming trends are closely related to sea surface temperature (SST) warming in the tropical Pacific, and their simulations forced by observed SST demonstrated that increasing wave activity in the SH stratosphere is caused by SST warming. The results of Hu and Fu (2009) were further confirmed by Lin et al. (2009) and Fu et al. (2010, 2015). In particular, they showed how ozone-induced radiative cooling and wave-driven dynamic heating cancel each other and lead to the wavenumber-one pattern of stratospheric temperature trends over SH high latitudes. Moreover, the maximum covariance analysis by Lin et al. (2012) showed that La Niña-like and central-Pacific El Niño-like SST anomalies result in stronger stratospheric planetary wave activity and phase shifts of the stratospheric stationary waves.
As pointed out by Hu and Fu (2009), the warming trends may have important effects on the recovery of the Antarctic ozone hole. First, the warming may reduce the formation of the polar stratospheric clouds, which would consequently reduce ozone depletion in the Antarctic stratosphere. Second, increasing wave activity may cause weakening of the polar vortex or the polar night jet, which would lead to more high-ozone air mixed into the polar region. Both effects would benefit the recovery of the ozone hole.
The most recent ozone assessment report has pointed out that the ozone hole is recovering (WMO 2018) as a result of the Montreal Protocol. Observations show that Antarctic ozone started increasing in the twenty-first century (Salby et al. 2011; Solomon et al. 2016). A recent study presented satellite observational evidence of significant recovery of O₃ in the lower, middle and upper stratosphere, and of total column ozone, in the Southern-Hemisphere middle and high latitudes (Wespes et al. 2019). It was suggested that zonal-mean warming trends in the Antarctic lower stratosphere are associated with ozone recovery, based on results from reanalyses and simulations by chemistry-climate models (Maycock et al. 2018; Randel et al. 2017; Solomon et al. 2017), although the model studies showed a large spread of trends in polar stratospheric temperatures (Maycock et al. 2018; Thompson et al. 2012). However, Philipona et al. (2018) found that the Antarctic lower stratosphere kept cooling in the twenty-first century, using four radiosonde stations on the Antarctic continent. These results imply that there are zonally non-uniform distributions of temperature trends in the Antarctic lower stratosphere.
The purpose of the present study is to revisit the warming trends in the SH high-latitude stratosphere in the recent decade, using up-to-date satellite observations. In particular, we focus on the spatial patterns of the temperature trends. We also perform simulations, with stratospheric ozone forcing, to demonstrate how stratospheric ozone changes in the recent decade generate stratospheric temperature trends.
Data
To investigate changes of temperatures in the lower stratosphere (TLS), we use the Advanced Microwave Sounding Unit (AMSU) lower-stratospheric channel brightness temperature data, version 4.0, from 1979 to 2017 (Mears and Wentz 2009). Monthly-mean temperature data with 2.5° × 2.5° horizontal resolution are used in this study (available at http://www.remss.com/measurements/upper-air-temperature/). Here, TLS is a combination of MSU channel 4 and AMSU channel 9. The weighting function of TLS has peak sensitivity over the approximate range 150-40 hPa (13-22 km) (Young et al. 2011), which captures lower-stratospheric conditions well. The monthly gridded global dataset from the Center for Satellite Applications and Research (STAR) of the National Oceanic and Atmospheric Administration (NOAA), which merges the Stratospheric Sounding Unit (SSU) and AMSU-A (version 3.0) at a resolution of 2.5° × 2.5°, is used to calculate temperature trends in the middle and upper stratosphere (Zou and Qian 2016). The weighting functions of these channels for temperatures in the mid-stratosphere (TMS), upper stratosphere (TUS), and top stratosphere (TTS) peak at approximately 10, 3, and 1.5 hPa, respectively. The data are available at ftp://ftp.star.nesdis.noaa.gov/pub/smcd/emb/mscat/data/SSU/SSU_v3.0/.
Monthly mean total column ozone data are from the Multi Sensor Re-analysis version 2 (MSR2) over 1979-2017 (van der A et al. 2015a). The MSR2 is constructed from all available satellite observations and surface Brewer and Dobson observations, using a data assimilation model. It has a horizontal resolution of 0.5° × 0.5°. The unit of total column ozone is the Dobson Unit (DU). The data are available at http://www.temis.nl/protocols/O3global.html.
ERA-Interim reanalysis data produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) (Dee et al. 2011) are used to calculate eddy heat fluxes. We use the monthly mean data with a spatial resolution of 1.5° × 1.5° from 1979 to 2017. The data are available at http://apps.ecmwf.int/datasets/data/interim-full-moda/levtype=sfc/.
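For orientation, the basic operation behind the trend maps discussed below is a per-gridpoint least-squares slope. A minimal sketch with placeholder arrays; the actual analysis uses the RSS/STAR files described above:

```python
# Per-gridpoint linear trends (K/decade) from annual September means.
import numpy as np

years = np.arange(2007, 2018, dtype=float)           # 2007-2017
tls_sep = np.random.randn(years.size, 72, 144)       # (year, lat, lon) placeholder

def trend_map(field, t):
    """Least-squares slope at each grid point, converted to K/decade."""
    ta = t - t.mean()
    fa = field - field.mean(axis=0)
    slope = np.tensordot(ta, fa, axes=(0, 0)) / np.sum(ta**2)   # K/yr
    return 10.0 * slope

print(trend_map(tls_sep, years).shape)               # (72, 144)
```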
GCM experiments
The model used in the present study is the Whole Atmosphere Community Climate Model (WACCM) (Marsh et al. 2013), which is built from the National Center for Atmospheric Research (NCAR) Community Earth System Model version 1.2 (CESM1.2). The specified-chemistry WACCM (SC-WACCM) is used, in which ozone concentrations in the atmosphere are prescribed (Smith et al. 2014). SC-WACCM has a horizontal resolution of 1.9° × 2.5° and 66 vertical levels from the ground to 4.5 × 10⁻⁶ hPa.
To study the effect of ozone recovery on SH stratospheric temperatures in the recent decade, we perform two simulations with different ozone prescriptions, while greenhouse gases and SSTs are fixed at year-2000 values. In the control experiment, monthly mean ozone at year 2000 is prescribed (case F_2000_WACCM_SC), and the model is run for 30 years. In the ozone recovery experiment (O3R), output from the control experiment is used as initial conditions, the net ozone change over 2007-2017 is added to the column ozone of year 2000, and the simulation is also run for 30 years. Here, the recovery of total column ozone is obtained from the linear trend over 2007-2017, multiplied by 10 years (shown in Fig. 3). The vertical distributions of the total column ozone recovery are obtained from the vertical profile used in the reference simulation (REF-B2) of WACCM in the Chemistry-Climate Model Validation (CCMVal-2) (Eyring et al. 2005, 2010). The effects of ozone recovery on stratospheric temperatures are characterized by the difference between the last 15-year averages of the O3R simulation and the control simulation.

Stratospheric temperature and ozone changes

Figure 1 shows spatial distributions of SH TLS trends in austral spring for 1979-2006 (upper panels) and 2007-2017 (lower panels), derived from satellite observations. The warming trends in the upper panels are almost the same as those in Hu and Fu (2009), although they are from different MSU/AMSU datasets. Warming trends in September are located over the sector of the western Pacific Ocean (Fig. 1a). Significant warming trends in October are located over the Amundsen Sea and Ross Sea (Fig. 1b), with significant cooling trends over the sector of the Indian Ocean. In November (Fig. 1c), there are only significant cooling trends over the southern Indian Ocean. Warming trends in September and October are about 2.8 and 2.6 K/decade, respectively. Trends in all 3 months show a zonal wavenumber-1 or dipole pattern. To demonstrate temperature changes at higher stratospheric levels, we plot trends in TMS, TUS, and TTS over 2007-2017 in Fig. 2. In September, warming trends decrease with altitude, and the area of warming trends becomes smaller with increasing altitude. The spatial patterns of the warming trends show westward and poleward tilting. At the same time, cooling trends become stronger and dominant with increasing altitude. In October, all these levels show dominant cooling trends, in contrast to the warming trends in the lower stratosphere. In November, all levels demonstrate warming trends over the sector of the western Pacific Ocean. The cooling trends are in the polar region in the upper and top stratosphere, in contrast to the warming trends in the polar region in the lower stratosphere. The vertical structures of the temperature trends are consistent with the "mirrored shape" between the lower- and upper-stratospheric zonal-mean Antarctic temperature changes (Solomon et al. 2017).
The above results demonstrate that warming in the SH high-latitude lower-stratosphere continued in the recent decade. An important question is what caused the warming trends in the recent decade. Previous works have attributed the warming over 1979-2006 to increasing wave-driven dynamic heating in the SH stratosphere (Hu and Fu 2009;Lin et al. 2009). It is known that Antarctic ozone started increasing since the late 1990s. Whether increasing ozone contributed to the warming trends in the recent decade is the major question that we want to address in the present study. In the following, we first show ozone changes in the recent decade before we perform attribution analysis.
The upper panels in Fig. 3 show spatial distributions of trends in total column ozone over 1979-2006, derived from MSR2. Severe ozone depletion in the Antarctic stratosphere can be clearly seen in Fig. 3a-c. The maximum ozone decreases are about 42, 55, and 40 DU/decade in September, October, and November, respectively. It is the radiative cooling effect of ozone depletion that contributed to the cooling trends in the Antarctic stratosphere (Fig. 1a-c). The lower panels show ozone increases over 2007-2017. The maximum ozone increase is about 66 and 103 DU over the 10 years in September and November, respectively (Fig. 3d, f). It is important to note that the spatial patterns of the ozone trends closely resemble those of the temperature trends for 2007-2017 (Fig. 1d-f). The spatial correlation coefficients between the changes in ozone and lower-stratospheric temperature from 45° S to 90° S are 0.91, 0.89, and 0.99 in September, October, and November, respectively, indicating that the warming trends over 2007-2017 might be largely related to the increase in ozone.
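The quoted pattern correlations are area-weighted spatial correlations over 45°-90° S. A minimal sketch of that calculation with placeholder trend maps (the real inputs are the MSR2 and TLS trend fields):

```python
# Area-weighted (cos latitude) spatial correlation between two trend maps.
import numpy as np

lats = np.arange(-88.75, -45.0, 2.5)                 # 2.5-deg grid, 45S-90S
nlat, nlon = lats.size, 144
o3  = np.random.randn(nlat, nlon)                    # placeholder ozone-trend map
tls = 0.05*o3 + 0.01*np.random.randn(nlat, nlon)     # placeholder TLS-trend map
w   = np.broadcast_to(np.cos(np.deg2rad(lats))[:, None], (nlat, nlon))

def wcorr(xx, yy, w):
    m = lambda f: np.sum(w*f)/np.sum(w)              # area-weighted mean
    xa, ya = xx - m(xx), yy - m(yy)
    return np.sum(w*xa*ya)/np.sqrt(np.sum(w*xa**2)*np.sum(w*ya**2))

print(f"weighted pattern correlation: {wcorr(o3, tls, w):.2f}")
```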
Attribution of the TLS trend
In this section, we perform attribution analysis to reveal what caused the warming trends in the recent decade. Following Lin et al. (2009), we use the method of multiple linear regression to attribute the respective contributions of wave-driven dynamic heating, ozone changes, and wave-phase shifts to temperature changes. Lin et al. (2009) defined the ozone index as the area-weighted spatial-mean total ozone poleward of 45° S to represent the ozone effect on high-latitude stratospheric warming. Here, to better characterize the zonally non-uniform distribution of ozone changes, we define the ozone index for each grid point, which is slightly different from the ozone index in Lin et al. (2009). The high-latitude stratosphere is heated adiabatically by the descending branch of the BDC. To denote the strength of the BDC, an "eddy-heat flux index" is defined as the area-weighted three-month-mean (including the previous 2 months) eddy heat flux at 150 hPa over 45°-90° S, calculated from the ERA-Interim reanalysis. In addition, the zonal shift of the phase of the wavenumber-1 temperature pattern alters the zonal distribution of temperature trends and thus leads to temperature changes, although it does not affect the zonal-mean temperature. Thus, we define a "phase index" as the longitude of the largest temperature averaged over 50°-70° S in the lower stratosphere. These results indicate that ozone increase and the phase shift of the wavenumber-1 pattern make the major contribution to the TLS trend, while the eddy heat flux contributes little. We will further address this using multiple linear regression later. Figure 5 shows the time series of the area-weighted TLS over the warming area enclosed by the contour of 2 K/decade in October over 2007-2017, the eddy-heat flux index, the area-weighted spatial-mean ozone over the same warming area, and the phase index in October. TLS increases over 2007-2017, and its linear trend is about 2.6 K over the recent decade, about half of that in September. The eddy-heat flux index also increases over the recent decade, but the trend is statistically insignificant. The ozone change is about 22 DU over 2007-2017. The phase index shows a positive trend, about 11° of eastward shift over the decade. The correlation coefficients between TLS and the eddy-heat flux index, TLS and ozone, and TLS and the phase index are 0.46, 0.97, and −0.41, respectively. These results suggest that both BDC strengthening and ozone increase may contribute to the warming trends at the SH high latitudes in October. Figure 6 shows the time series of these variables in November. TLS shows a significant warming trend over 2007-2017, with a value of about 9.5 K over the recent decade, about twice as large as that in September. The linear trend of the eddy heat flux, about 0.49 K m/s, is weak and insignificant. The increase of total column ozone is about 70 DU over 2007-2017, and its linear trend is statistically significant at the 90% confidence level. The phase index shows an insignificant negative trend, about 46° of westward shift over the decade.
The correlation coefficients between TLS and the eddy-heat flux index, TLS and ozone, and TLS and the phase index are 0.64, 0.90, and 0.39, respectively. These results suggest that the significant increase of ozone might be the major contributor to the warming trends at the SH high latitudes in November.
Using the method of multiple linear regression, Lin et al. (2009) attributed TLS warming over 1979-2006 to the BDC strengthening and the phase shift of the wavenumber-1 temperature pattern, as well as their cancellation with ozone depletion. In particular, they found that the BDC strengthening played the major role in causing the warming in the SH high-latitude stratosphere over 1979-2006. Following Lin et al. (2009), we carry out multiple linear regression of gridded monthly mean TLS upon the eddy-heat flux index, the ozone index, the phase-shift index, and a residual term. The contributions to the TLS trend are then obtained by multiplying the regression coefficients by the linear trends in the corresponding indexes. Figure 7 shows the multiple regression results for September, October, and November over 2007-2017. In September, the ozone increase has the largest contribution to the observed TLS warming trend, and the spatial pattern of the ozone changes captures that of the observed TLS trends well. In contrast, both the eddy heat flux and the phase shift have minor contributions to the TLS trend pattern. The residual term is relatively small. This is consistent with the time series in Fig. 4. In October, it is also the ozone changes that have the major contribution to the observed TLS trend pattern, while phase changes have a weak negative contribution. For November, likewise, the ozone increase over the polar region has the major contribution to the polar warming, and the contributions of the other two terms are negligible. Overall, the observed TLS trends over 2007-2017 can be well explained by the ozone changes.
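The regression-and-attribution step can be summarized compactly: regress TLS on the indexes, then scale each regression coefficient by that index's linear trend. A sketch with placeholder series; in the actual analysis this is done at every grid point:

```python
# Multiple-linear-regression attribution of a TLS trend to three indexes.
import numpy as np

yrs   = np.arange(2007, 2018, dtype=float)
tls   = np.random.randn(yrs.size)              # placeholder gridpoint TLS series
eddy  = np.random.randn(yrs.size)              # eddy-heat-flux index
ozone = np.random.randn(yrs.size)              # gridpoint ozone index
phase = np.random.randn(yrs.size)              # wave-1 phase index

X = np.column_stack([np.ones_like(yrs), eddy, ozone, phase])
beta, *_ = np.linalg.lstsq(X, tls, rcond=None) # regression coefficients

trend = lambda v: np.polyfit(yrs, v, 1)[0]     # per-year slope
contrib = {name: 10.0*beta[k]*trend(v)         # K/decade attributable to index
           for k, (name, v) in enumerate(
               [("eddy", eddy), ("ozone", ozone), ("phase", phase)], start=1)}
print(contrib, "| residual:", 10.0*trend(tls - X @ beta))
```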
Simulation results
To further verify the contribution of ozone changes to the observed temperature trends in the recent decade, we carried out the simulations described in Sect. 2. Simulated temperature responses to observed ozone changes over 2007-2017 are shown in Fig. 8. For September (left panels of Fig. 8), the warming trend pattern and its location are close to those in Figs. 1d and 2 (left panels). The warming trends are weaker than observed. The warming trend patterns also show westward and poleward tilting with increasing altitude, similar to Figs. 1d and 2 (left panels). The middle panels of Fig. 8 show weak temperature responses to ozone recovery in October. The simulated warming trends in TLS are rather weak. At higher stratospheric levels, the temperature trends become negative. They are consistent with the observational results in Fig. 1e and the middle panels of Fig. 2, but with weaker trends. For November (right panels), the simulated TLS trends largely resemble those in Fig. 1f. Temperature trends at higher stratospheric levels also resemble the observations in the right panels of Fig. 2. The simulated maximum warming trends in September and November are about 2 and 2.6 K, respectively. In summary, the simulation results are in general consistent with the observations, especially in September and November, although the simulations cannot reproduce the magnitudes of the observed warming trends.
Discussion and conclusions
In this study, we have revisited the lower-stratospheric warming over SH high latitudes in austral spring found by Johanson and Fu (2007) and Hu and Fu (2009). Updated satellite observations demonstrate that the warming continued in the lower stratosphere in the recent decade, especially in September and November. The maximum warming trends in September and November are up to 7 and 13 K over 2007-2017, respectively. However, the spatial patterns of the temperature trends are largely different from those over 1979-2006. The wavenumber-1 pattern in September shifted by about 90° in longitude compared to that over 1979-2006, and it shifted by about 180° in October. In November, warming trends are mainly in the lower-stratospheric polar region.
Satellite observations show vertically tilting structures of temperature trends. The warming trends in September tilt westward and poleward with increasing altitude. In October, the lower stratosphere shows warming, while the middle and upper stratosphere show cooling trends. For November, large warming trends are found in the lower Antarctic stratosphere, while the polar region shows cooling with increasing altitude.
Multiple linear regression shows that ozone recovery in the Antarctic stratosphere plays an important role in causing the observed warming trends over 2007-2017. This is different from that over 1979-2006 when the strengthening of the BDC was the major factor in causing the high-latitude warming (Hu and Fu 2009;Lin et al. 2009). The time series of eddy heat fluxes shows that wave activity had little increase in the recent decade, as shown in Figs. 4 and 5, while the recovery of Antarctic ozone became the major factor that contributed to the observed warming.
Our simulations forced by ozone recovery can reasonably reproduce the spatial patterns of temperature trends, especially in September and November. The simulation results also generate the observed vertically tilting structures of temperature trends at different SH high-latitude stratospheric levels. However, the magnitudes of the simulated temperature trends are weaker than observations, and the simulation could not well reproduce the warming trends in October. Thus, the simulation results here suggest that at least part of the high-latitude stratospheric warming trends over 2007-2017 is due to ozone recovery.
Note that further studies are required to confirm the contribution of ozone recovery to the observed warming trends. An important question is why planetary wave activity in the SH stratosphere has not continued to increase in the recent decade. According to the analyses by Hu and Fu (2009) and Lin et al. (2012), stratospheric wave activity over 1979-2006 was enhanced because of SST warming over the tropical western Pacific Ocean. If this is true, the result here implies that SST over the tropical western Pacific did not increase over 2007-2017, which is verified by observations. One possibility is that the Pacific Decadal Oscillation (PDO) switched from its warm phase to its cool phase in the late 1990s. Whether this is the case requires future analysis and simulation studies.
It is worth noting that ozone recovery and the associated Antarctic stratospheric warming have important implications not only for stratospheric climate but also for tropospheric and surface climates. Recent works have shown that ozone recovery would lead to a retreat of Antarctic sea ice, due to dynamic or radiative effects (Bitz and Polvani 2012; Xia et al. 2019). It would also cause the SH Hadley cell to narrow (Tao et al. 2016). All of these would cause consequent surface climate changes in the SH.

[Figure 7 caption: Top panels: September, middle panels: October, and bottom panels: November. From left to right: observed TLS trends, TLS trends attributable to the BDC as represented by the eddy heat flux index, TLS trends attributable to the ozone index, TLS trends attributable to phase shifts, the summation of the three terms, and the residual term, i.e., the difference between the observed trends and the summation. The unit is K/decade.]

[Figure 8 caption fragment: From left to right, the panels are for September, October, and November, respectively. Color interval is 0.5 K.]
|
v3-fos-license
|
2020-05-07T09:15:11.425Z
|
2020-05-01T00:00:00.000
|
218532571
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/20/9/2607/pdf",
"pdf_hash": "c92b3e3d46fd953ea815fd24bc38c93af09345a0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43455",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "50cba3cbbd3115937085d689222fbac564dcdc7e",
"year": 2020
}
|
pes2o/s2orc
|
A Prototype Microwave System for 3D Brain Stroke Imaging
This work focuses on brain stroke imaging via microwave technology. In particular, the open issue of monitoring patients after stroke onset is addressed here in order to provide clinicians with a tool to control the effectiveness of administered therapies during the follow-up period. In this paper, a novel prototype is presented and characterized. The device is based on a low-complexity architecture which makes use of a minimum number of properly positioned and designed antennas placed on a helmet. It exploits a differential imaging approach and provides 3D images of the stroke. Preliminary experiments involving a 3D phantom filled with brain tissue-mimicking liquid confirm the potential of the technology in imaging a spherical target of radius 1.25 cm mimicking a stroke.
Introduction
Brain stroke is one of the main causes of permanent injury and death worldwide, with an incidence of over 5 million annual deaths [1]. Since prompt intervention (such as the administration of specific drugs that can affect the acute stage of the stroke) can significantly improve the prognosis, a rapid diagnosis of the disease and continuous monitoring after its onset represent important clinical goals.
Currently, the most effective tool for brain stroke diagnosis is the imaging-based diagnostics performed in an emergency department after the recognition of stroke-like symptoms. In this respect, magnetic resonance imaging (MRI) or X-ray based computerized tomography (CT) are the clinically adopted techniques. However, although they are continuously evolving, these technologies still have several limitations. In particular, despite its high resolution and accuracy, MRI is not widely available in emergency settings and is therefore actually used only as a secondary diagnostic tool [2,3]. On the other hand, non-contrast CT may be limited by the fact that the early signs of ischemia may not be easily recognizable by non-experienced personnel. Moreover, due to the use of ionizing radiation, CT is not suitable for repeated examinations, which are especially useful for post-acute monitoring purposes. Furthermore, both MRI and CT equipment is bulky and so not currently suited for ambulance use or as bedside devices.
The above circumstances have led to increased interest in the development of different diagnostic imaging techniques [3]. Among others, microwave imaging (MWI) [4] has emerged as a complementary technique which is able to address the different needs arising in stroke diagnosis and management, namely the early-possibly prehospital-diagnosis of the kind of stroke (ischemia or hemorrhage), bedside brain imaging and continuous brain monitoring for stroke in the post-acute stage. MWI takes advantage of the different electric properties (electric permittivity and conductivity) that human tissues exhibit at microwave frequencies depending on their kind (e.g., blood versus gray or white matter) and pathological status. These differences permit a functional map of the inspected anatomical region to be obtained. The benefits of MWI mainly stem from the non-ionizing nature of microwave radiation and the reduced intensity required to obtain reliable imaging (at an intensity comparable to that currently used for mobile phones), which make it completely safe and suitable for repeated applications. Moreover, MWI technology is cost-effective and benefits from a reduced size, as it makes use of miniaturized, low-cost, off-the-shelf components that are available in the microwave frequency range for signal generation and acquisition [5] and low-cost accelerators to speed up processing [6].
Recently, several MWI devices and prototypes have been proposed [7][8][9][10][11][12][13][14][15][16][17]. Among them, the two most prominent examples (which are already being tested on humans) are the Strokefinder, developed by Medfield Diagnostics [7,8], and the EMTensor BrainScanner [10]. The Strokefinder is a device which aims to discriminate between ischemic and hemorrhagic strokes in the early stage of patient rescue, based on an automated classification which is carried out by comparing the measured data to a database (obtained by data collected from already examined patients). This device is characterized by its very simple and compact hardware, consisting of a small number of printed antennas mounted on a support that can be adapted to the patient's head. Some initial clinical trials have been reported for the Strokefinder [7], but it should be remarked that it does not provide images; thus, its intended role is to complement standard imaging tools. The EMTensor BrainScanner aims to perform brain stroke tomography. The system is characterized by a high complexity, as it consists of a large number of radiating elements (177 truncated waveguides, loaded with ceramic material of appropriate permittivity [10]), which considerably increase its cost and size, thus reducing to some extent the advantages of its use. In addition, the image reconstruction task involves the processing of a considerable amount of measured data and has to face the pitfalls of dealing with a non-linear and ill-posed inverse problem. This entails long elaboration times and possibly results in false solutions; i.e., producing images which fulfill the underlying optimization but are different from the ground truth.
In this paper, we describe the realization, characterization and initial experimental validation of a prototype device representing a different approach to dealing with a still open issue in stroke management; that is, continuous monitoring during the hospitalization of the patient in order to evaluate the effectiveness of the administered therapies [18]. This specific application aims to image only a "small" variation occurring in the brain and not its overall structures and features. As a consequence, it is possible to keep the device complexity low, and therefore also its size and cost, as well as to rely on the Born approximation to model the scattering phenomenon, thus enabling reliable real-time imaging. Accordingly, the proposed device is based on the low-complexity architecture designed with the rigorous procedure as described in [18]. Moreover, it adopts a differential imaging approach, where data gathered at two different acquisition times are processed [19] with simple and fast imaging algorithms based on the distorted Born approximation [18,19].
The proposed system provides 3D images of the head by relying on data measured through an array of 24 printed monopole antennas organized as an anatomically conformal shape mimicking a wearable and adaptable helmet. Each antenna is enclosed in a box of graphite-silicon material, acting as the coupling medium, and connected to a two-port vector network analyzer (VNA) through a 24 × 24 switching matrix, which allows the whole differential scattering matrix required for imaging to be acquired. The use of a semi-solid matching medium is a distinct feature of the system which allows for an increased simplicity of operation and repeatability, as compared to other arrangements that make use of a coupling liquid [10]. Finally, as detailed below, the device presented here is equipped with a "digital twin" based on a proprietary electromagnetic (EM) solver that allows us to properly characterize and foresee its behavior, as well as to provide the building blocks needed for the imaging.
In the following sections, the different components of the device are described and discussed, and a first experimental assessment on an anthropomorphic head phantom is presented. This phantom consists of a plastic shell with the shape and size of a human head, which is filled up with a homogeneous material whose dielectric properties are equal to an average value of the properties of the different tissues present in the brain. The reported experimental results provide an initial demonstration of the capabilities of the developed device.
It is worth noting that the presented system is not the only example of a low-complexity device for brain imaging, as other devices, using a low number of antennas (8-16) arranged in a circular array, have been proposed [12-16]. However, these devices only provide 2D maps of the transverse cross-section of the head in the array plane, whereas the device herein presented provides a full 3D image of the head.
Three-Dimensional Microwave Imaging System Design
In this section, we present the choices made in the design of the proposed imaging system (i.e., the operating frequency, coupling medium, antenna number and arrangement).
The operating frequency as well as the properties of the selected coupling medium were set according to previous findings obtained from theoretical formulations and experimentally validated [20] and [4] (Chapter 2). In particular, a working frequency of around 1 GHz and a coupling medium with a relative permittivity of around 20 were determined to be optimal and therefore chosen for the realization of the hardware. As all the simulations and measurements were made at 1 GHz, this operating frequency will be considered as implicit in the following.
To design the layout of the array of antennas (the number and position of the radiating/measuring elements), a recently proposed rigorous procedure was adopted [18,21]. This procedure is based on the analysis of the spectral properties of the discretized scattering operator [22], assuming the antennas to be located on a surface conformal to the head (a helmet) and taking into account the dynamic range and signal-to-noise ratio (SNR) of the adopted measurement device as well as the actual size of the antennas. The result of this study allowed us to identify a 24-element array as the suitable candidate to perform the desired imaging task while keeping the system complexity as low as possible. The expected performances were confirmed by a preliminary numerical analysis [18].
As far as the choice of the imaging algorithm was concerned, the key aspect was that the targeted application was to monitor the time evolution of the stroke. Hence, a differential approach was a suitable choice [18,23]. In particular, the input data of the imaging problem, denoted as ∆S in the following, were represented by the difference between the scattering matrices measured at two different times, while the output was a 3D image showing the possible variation of the electric contrast of the brain tissues-i.e., ∆χ-occurring between these two different times. The differential electric contrast function ∆χ is defined as ∆χ = ∆ε/ε_b, where ε_b is the complex permittivity of the non-homogeneous background at the reference instant (e.g., a map of the brain at the first diagnosis) and ∆ε is the complex permittivity variation between the two time instants.
Since the contrast variation ∆χ was localized in a small portion of the imaging domain, it was possible to take advantage of the distorted Born approximation [22], so that a linear relationship held between ∆S and ∆χ:

∆S(r_p, r_q) = S{∆χ} = −jω ∫_D ε_b(r_m) E_b(r_m, r_p) · E_b(r_m, r_q) ∆χ(r_m) dr_m, r_m ∈ D, (1)

where S is a linear and compact integral operator, whose kernel is −jω ε_b E_b(r_m, r_p) · E_b(r_m, r_q) with r_m ∈ D; r_p and r_q denote the positions of the transmitting and receiving antennas, and r_m denotes the positions of the points in which the imaging domain D is discretized. E_b is the background field in the unperturbed scenario; that is, the field radiated inside the imaging domain by each element of the array. The symbol "·" denotes the dot product between vectors, ω = 2πf is the angular frequency, and j the imaginary unit. As a reliable and well-established method to invert (1), we exploit the truncated singular value decomposition (TSVD) scheme [22], which allows us to obtain the unknown differential contrast function through the explicit inversion formula:

∆χ̃ = Σ_{n=1}^{L_t} (1/σ_n) ⟨∆S, u_n⟩ v_n, (2)

where σ_n are the singular values and u_n and v_n the left and right singular vectors of the discretized scattering operator S, respectively. L_t is the truncation index of the SVD, which acts as a regularization parameter and was chosen to obtain a good compromise between the stability and accuracy of the reconstruction [22].
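A minimal numerical sketch of the TSVD inversion in (2) is given below, assuming the operator has already been discretized into a matrix; the array shapes and the truncation rule (consistent with the −20 dB threshold used later in the Results) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tsvd_invert(S_op, delta_S, threshold_db=-20.0):
    """Invert delta_S = S_op @ delta_chi by truncated SVD, as in Eq. (2).

    S_op    : (n_meas, n_vox) complex matrix, discretized scattering operator
              (n_meas = number of TX/RX pairs, n_vox = voxels in domain D)
    delta_S : (n_meas,) complex vector, differential scattering data
    The truncation keeps singular values within `threshold_db` of the largest,
    playing the role of the regularization parameter L_t.
    """
    U, s, Vh = np.linalg.svd(S_op, full_matrices=False)
    keep = s >= s[0] * 10 ** (threshold_db / 20)   # e.g., -20 dB cut-off
    # Sum over retained singular triplets: (1/sigma_n) <delta_S, u_n> v_n
    coeffs = (U[:, keep].conj().T @ delta_S) / s[keep]
    delta_chi = Vh[keep].conj().T @ coeffs
    return delta_chi  # (n_vox,) complex differential contrast map
```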
Three-Dimensional Microwave Imaging System Realization
The realized 3D microwave imaging system prototype is shown in Figure 1. It consists of several parts, described in the following sub-sections, all of which are controlled by a laptop.
Vector Network Analyzer and Switching Matrix
All the signals are generated and received by a standard VNA (Keysight N5227A, 10 MHz-67 GHz), where the input power is set to 0 dBm and the intermediate frequency (IF) filter to 10 Hz. The two ports of the VNA are connected, via flexible coaxial cables, to the two input ports of the 2 × 24 switching matrix. The switching matrix has been realized with two single-pole-four-throw (SP4T), eight single-pole-six-throw (SP6T), and 24 single-pole-double-throw (SPDT) electro-mechanical coaxial switches. The internal connections between the switches are made with semi-rigid coaxial cables to maximize the isolation and minimize the insertion losses. Then, the 24 output ports of the switching matrix are connected, via flexible coaxial cables, to the 24 antennas placed on the helmet that hosts the 3D anthropomorphic head phantom. As detailed in [23], the switching matrix has been realized so that there are 24 paths from VNA port 1 to the corresponding 24 antennas, as well as 24 paths back to VNA port 2; all the paths were designed to have the same electrical length. In this way, each antenna can work as a transmitter (TX) or as a receiver (RX), and while one antenna is transmitting, the signals collected by the other 23 antennas are received in sequence by the VNA.
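To make the TX/RX sequencing concrete, a hypothetical control loop is sketched below. The `vna` and `switch` objects and their methods are placeholders invented for illustration, not the actual instrument API.

```python
import numpy as np

def acquire_scattering_matrix(vna, switch, n_ant=24):
    """Collect the full n_ant x n_ant transmission matrix.

    While antenna p transmits (routed to VNA port 1), each other antenna q
    is routed in sequence to VNA port 2 and the coupling S21 is recorded.
    The self-terms (p == q) are left at zero, as in the measured matrices.
    """
    S = np.zeros((n_ant, n_ant), dtype=complex)
    for p in range(n_ant):
        switch.set_tx_path(p)          # route VNA port 1 to antenna p
        for q in range(n_ant):
            if q == p:
                continue
            switch.set_rx_path(q)      # route antenna q to VNA port 2
            S[p, q] = vna.read_s21()   # complex transmission coefficient
    return S
```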
Brick Antenna Array
The antenna array is the core of the imaging system. The antenna numbers, positions and orientations were determined according to the design procedure described in Section 2.1. In the prototype shown in Figure 2, the 24 antennas are placed on a 3D printed plastic (acrylonitrile butadiene styrene, ABS) support with the shape of a helmet conformal to the head phantom. This prototype configuration allows us to easily change or remove the antennas, if needed. Each antenna, denoted as "brick" antenna in the following, was a monopole antenna printed on a standard FR4 substrate, with a thickness equal to 1.55 mm, and embedded in a dielectric brick. The dielectric brick was made of a mixture of urethane rubber and graphite powder and was designed in order to reach a relative dielectric permittivity of ε_r ≅ 20 and to minimize the losses. The actual EM properties obtained with this mixture were ε_r = 18.3 and σ = 0.19 S/m. The graphite powder was needed to increase the relative dielectric constant of the urethane rubber; however, it also increased the conductivity, although still within acceptable levels compared to other matching media used in medical microwave imaging (e.g., [23,24]). Moreover, the adopted matching medium is usually liquid [10], which is inherently inconvenient for a helmet-like device. Here, instead, the implemented matching medium was solid rubber, which can be easily placed conformally to the head, as shown in Figure 2. The overall dimensions of each brick were 5 × 5 × 7 cm³ to accommodate the need to place 24 brick antennas around the head. Figure 3 reports the 24 × 24 scattering matrix measured by the microwave imaging system in the presence of the head phantom; the self-terms were set to zero in order to highlight the range variation of the measured coupling coefficients, which are the input data of the used TSVD imaging algorithm. It can be noticed that the measured signals are above −100 dB; considering that the VNA noise floor is −110 dBm (at 1 GHz and with an intermediate frequency (IF) filter equal to 10 Hz), with an input power of 0 dBm the measured data are then above the VNA noise floor [25]. Moreover, as expected, the 24 × 24 matrix is approximately symmetric, which confirms the reciprocity of the realized system.
Anthropomorphic Head Phantom
The 3D anthropomorphic head phantom used for the validation and testing of the microwave imaging system was made of polyester casting resin. It was realized by additive manufacturing from a stereo-lithography (STL) file derived from MRI scans. The STL file was obtained with computer-aided design (CAD) software by modifying an original file from the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital [26].
The phantom consisted of a cavity, shown in Figure 4a, whose wall thickness and height were equal to 3 mm and 26 cm, respectively. Its maximum cross section was approximately an ellipse whose minor and major axes were equal to 20 cm and 26 cm, respectively. The cavity was filled with a liquid mixture, made of Triton X-100, water and salt, which mimicked the average value of the dielectric characteristics of different brain tissues (white matter, gray matter) [27]. The measured dielectric characteristics of the mixture are reported in Figure 4b.
Digital Twin of the Device
The developed device was equipped with a digital twin, namely an accurate full-wave numerical model simulating its behavior, thanks to the use of proper CAD and EM simulation software. This tool had a two-fold purpose. First, it allowed us to foresee the expected outcomes of the planned experiments and to analyze them before running the actual experiments. Second, it provided the EM fields E_b needed to build the imaging kernel, as described in Section 2.1. Figure 5 shows the CAD model of the device, which includes both the antenna array and the 3D anthropomorphic head phantom. Each brick, placed on the head, represented one TX/RX antenna together with the dielectric coupling medium; the antenna ports were placed at the end of the coaxial cables leading out from the bricks. The dielectric characteristics of the bricks were the same as the nominal ones used in the realized system (see Section 2.2.2), i.e., ε_r = 18.5 and σ = 0.2 S/m. The head phantom was made of a dielectric medium representing the average brain tissues with ε_r = 42.5 and σ = 0.75 S/m, according to the properties of the medium adopted in the experimental validation (Section 2.2.3).
To perform the EM simulation, which provided both the S-parameters at the antenna ports and the EM fields in the whole scenario, the CAD model was introduced into an in-house full-wave software, based on the finite element method (FEM) [28]. The implemented software used the standard "curl-curl" formulation for the electric field and Galerkin testing. The whole volume, including the CAD model, was discretized with edge-basis functions, defined over a mesh of tetrahedral cells. The antenna metal parts were modeled as perfect electric conductors (PEC), and all the dielectrics were modeled via sub-volumes with given r and σ values. The tetrahedral mesh was terminated at the volume boundaries with appropriate absorbing boundary conditions (ABC). Each antenna port was modeled as a coaxial cable section where, if excited, a tangential electric field distribution was enforced.
In the numerical modeling of the MWI system, the 24 antennas were excited one at a time, setting all the others as receivers in order to generate the 24 × 24 scattering matrix. Moreover, the field radiated by each antenna was evaluated at different locations within the head to generate the discretized scattering operator S in (1).
Experiment Set-Up
The microwave imaging system was validated using the 3D anthropomorphic head phantom in which a 1.25 cm-radius plastic sphere was inserted, as shown in Figure 6. The sphere was fixed via a monofilament polymer line (fishing line) to an external support located above the head and was immersed in the liquid mixture at a location and height that could be varied. The sphere density was higher than the liquid mixture filling the head phantom, and the sphere was therefore not floating on the liquid and could be easily positioned in different locations within the head phantom. Three positions were considered, as shown in Figure 7. However, as their exact location could be affected by inaccuracies, it is obvious that the actual positions may have slightly differed from the expected ones. The aim of these experiments was to identify the 3D shape and location of the sphere that was supposed to represent the region of the brain affected by the stroke. In this respect, it is important to remark that, while the plastic sphere adopted here for the sake of simplicity was not intended to mimic a hemorrhage or a clot, it showed a dielectric contrast with brain tissues which was comparable to a hemorrhage but with the opposite sign. Thus, the experiment was expected to provide a meaningful, though initial, validation of the imaging capabilities of the prototype device.
Results
In the following, we describe the validation of the realized 3D microwave imaging system (Section 2.2), whose experimental set-up is detailed above in Section 2.2.5. First, by means of the digital twin, the outcomes of the experiments are foreseen and assessed. Then, the results of the actual experiments are reported. All the imaging results were obtained by using the TSVD algorithm described in Section 2.1. Figure 8 shows the singular values calculated for the relevant scattering operator, which were computed with the digital twin. The truncation index L_t in (2) was set according to a −20 dB threshold on the normalized singular values for all the reported cases. To quantitatively assess the quality of the reconstructed images, the root mean square error (RMSE) was evaluated as

RMSE = sqrt( (1/N_s) Σ_{m=1}^{N_s} |∆χ̃(r_m) − ∆χ(r_m)|² ), (3)

where N_s is the number of samples of the discretized domain, ∆χ̃ is the retrieved differential contrast and ∆χ is the actual contrast.
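A direct transcription of this metric (a sketch; the array names are assumptions):

```python
import numpy as np

def rmse(delta_chi_retrieved, delta_chi_true):
    """Root mean square error between retrieved and actual contrast maps,
    both given as complex arrays over the N_s samples of the domain."""
    diff = delta_chi_retrieved - delta_chi_true
    return np.sqrt(np.mean(np.abs(diff) ** 2))
```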
Numerical Assessment
To confirm the validity of the above statement, the digital twin of the system described in Section 2.2.4 was exploited. To this end, the experiment for the target corresponding to the blue sphere in Figure 7 was simulated by including a 1.25 cm-radius plastic sphere with ε_r = 2.1 in the CAD head phantom. The simulation was repeated for the case of noiseless measurements (to provide an ideal benchmark) and for two SNR levels, namely 65 dB and 55 dB. Noise was modeled as additive Gaussian white noise, which was added to the simulated data given by the S matrices computed with and without the target, respectively. Figure 9 shows the outcomes of the simulations. In particular, the first row shows the amplitude of the scattering matrices, where, in agreement with the actual experimental situation, the self-contributions have been omitted. It appears that the considered noise level severely affects the data matrix. The middle and bottom rows of Figure 9 show the normalized amplitude of the reconstructed differential contrast ∆χ arising from the TSVD algorithm. It is worthwhile to note that the target is properly imaged, even when the noise level is comparable to or higher than the maximum values of the differential S matrix and the corresponding matrices appear very noisy (see Figure 9b,c), as the unavoidable occurrence of some artifacts does not restrict the interpretation of the result. The RMSE values obtained for the reconstructions at the three considered noise levels are 0.04, 0.06 and 0.11, respectively. Once the kinds of results that the device was expected to provide in the experiments had been characterized, the second goal was to show whether, and to what extent, the experiment was relevant for the considered clinical scenario. To this end, the same simulation as before was repeated considering a spherical target with the properties of blood (ε_r = 63.4 and σ = 1.6 S/m) instead of plastic, thus mimicking a hemorrhagic stroke. The results of the simulations are shown in Figure 10. Figure 10d-i show the imaging results, which are comparable with the previous ones except for an expected increase in (not meaningful) artifacts. In agreement with this, the RMSE values are essentially the same as for the case of the plastic target, at 0.04, 0.06 and 0.13 for the different considered levels of noise, respectively. According to the above results, we can conclude that the developed prototype will be able to pass the planned experimental validation and that the considered experiments provide a relevant, though initial, test-bed for the designed microwave imaging system.
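The noise model used in this assessment can be sketched as follows; defining the noise power relative to the mean power of the simulated scattering matrix is an assumption made for illustration.

```python
import numpy as np

def add_awgn(S, snr_db, rng=None):
    """Add complex additive white Gaussian noise to a scattering matrix S,
    with noise power set snr_db below the mean power of S."""
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(np.abs(S) ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    sigma = np.sqrt(p_noise / 2)  # per real/imaginary component
    noise = (rng.normal(scale=sigma, size=S.shape)
             + 1j * rng.normal(scale=sigma, size=S.shape))
    return S + noise
```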
Experimental Validation
To perform the experimental validation of the system, for each sphere location, two measurement sets were taken at different times. The first data-set was measured in the absence of the sphere, and the second one was taken when the sphere was positioned inside the phantom. Each pair of measured 24 × 24 scattering matrices were then differentiated and given as an input to the TSVD algorithm. It can be observed that VNA port calibration was not needed as possible systematic measurement errors were cancelled out via the use of differential data or did not affect the obtained qualitative imaging of the differential contrast function. The time needed to perform the total measurement with the current prototype was less than 4 min for each sphere location, and the elapsed time between the two measurement sets was around 10 min. The kernel simulated with the digital twin (Section 2.2.4) was exploited to generate the images, and the data processing time needed by the TSVD algorithm was negligible (less than 1 s).
The 24 × 24 differential scattering matrices obtained for the three cases are shown in Figure 11a-c. The second and third rows of Figure 11 display the images obtained for the different sphere locations indicated in Figure 7. The second row corresponds to the horizontal cross-section passing through the sphere center; the expected sphere location and size are highlighted with a red circle. The third row displays horizontal cross-sections at the different levels indicated in Figure 7b. In all cases, the targets are very well retrieved, and the images agree with the simulation results, which confirms our expectations. The RMSE values for these reconstructions are 0.16, 0.19 and 0.18 for the three considered positions of the target, respectively. Moreover, the last row of Figure 11 shows the 3D rendering of the imaged stroke obtained by plotting the values of the normalized differential contrast amplitude, which are above −3 dB. Finally, Figure 12 reports horizontal cross-sections at various levels of the 3D reconstructed image obtained by differentiating two different sets of measurements of the same scenario; all the images are normalized with respect to the maximum value of Figure 11b, which represents a case when the target is present.
Discussion
The main goal of the developed device was to image (qualitatively) possible anomalies (clots or hemorrhages) in the head to support clinicians in the evaluation of the effectiveness of the administered therapies. The results shown in the previous section, although preliminary, confirm the potential of the technology in providing reliable results, as it is capable of imaging a target as small as 1.25 cm in radius.
A second important achievement of the analysis carried out here is represented by the validation of the proposed system through its digital twin, which provides simulated data which are quantitatively consistent with the measured data. As such, the adopted modeling tool provides a reliable representation of the device, making it possible both to build the imaging kernel and to synthetically reproduce laboratory experiments. As a matter of fact, the results from the simulations and measurements appear to be essentially the same, except for a slight deterioration of the images in the case of measured data. In particular, the worse RMSE results in the experimental tests, with respect to the corresponding numerical cases, are due to the possible inaccuracies in the expected positions of the target as well as to model inaccuracies in the digital twin.
A very important feature of the presented device is its robustness against false positives, assessed through a specific experiment, the result of which is shown in Figure 12. We can observe that the reconstructed values, which represent only the overall noise between different sets of measurements, are significantly lower than 0 dB (the maximum value in the 3D reconstructed image is equal to −4.75 dB) and are therefore clearly different from the cases when the target is present (see Figure 11).
To the best of our knowledge, this paper presents the first system based on a low-complexity antenna arrangement conformal to the head which is able to provide full 3D images; other imaging systems available in the literature only provide 2D images and often exploit a large number of antennas.
With this work representing a preliminary validation of the developed hardware, it is worthy of note that, for practical reasons, the target used in the experimental test does not exhibit the same dielectric properties as a stroke. On the other hand, the digital twin can help us to predict what will happen in an experiment dealing with a target mimicking a hemorrhage, for example. As a matter of fact, by comparing results from simulations and measurements, it can be observed that the differential scattering matrices exhibit a similar pattern but are lower in amplitude (by about 10 dB) in the case of simulations due to the lower maximum amplitude of the differential contrast between blood and the average brain with respect to plastic and the average brain (0.49 and −0.95, respectively). This implies that, in an experiment dealing with a target mimicking a hemorrhage, slightly weaker useful signals should be collected. However, this is not a significant limitation, as the amplitudes of the differential scattering matrices are well above the VNA noise floor, which represents the ultimate limitation for accurate measurements [25].
Finally, while the repeatability of the experiment has not been tested in this paper with respect to the possible misalignment of the phantom in the two gathered data sets, from our previous studies, we expect that such uncertainties will produce "structured" artifacts in the final image, which are easily attributable to positioning errors [19].
Conclusions and Future Work
In this paper, a prototype of a novel low-complexity device, dedicated to brain stroke monitoring in the post-acute stage, has been presented. The reported experiments aimed to provide an initial validation of the device and confirm that there is an agreement between the design of the system [18] and its performance. This is assessed by the comparison of the achieved results with those obtained from its digital twin. The availability of such a model also allows us to show the kinds of outcomes that are expected to be obtained by the system when operated with more realistic targets.
The next steps of the system validation and development will involve an assessment with the more realistic anthropomorphic head phantom described in [27], which includes an additional cavity modeling a stroke [29]. As this phantom is derived from an STL file, a numerical validation using the digital model of the system will also be carried out in this case. In addition, improvements of the prototype will be performed by considering image reconstruction procedures using multi-frequency data calibration techniques as well as sparsity-promoting regularization schemes; furthermore, there will be a refinement of the antenna array support to make it wearable.
Conflicts of Interest:
The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
|
v3-fos-license
|
2021-10-15T15:10:25.798Z
|
2021-10-15T00:00:00.000
|
238988335
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.biochemia-medica.com/assets/images/upload/xml_tif/bm-31-3-030901.pdf",
"pdf_hash": "ac6fea187e0decf14e6a36cc9790e8ec96252419",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43456",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "8a95ca6d3ca2088a64ef3e1792f89cfd8df51a3f",
"year": 2021
}
|
pes2o/s2orc
|
Seroprevalence of SARS-CoV-2 in Croatian solid-organ transplant recipients
Introduction The data on the coronavirus disease (COVID-19) in solid-organ transplant recipients (SOTRs) in Croatia is unknown. The aim of this study was to analyze the seroprevalence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in Croatian SOTRs. Materials and methods From 7 September to 27 November 2020 (beginning of the second COVID-19 pandemic wave), a cross-sectional screening for COVID-19 was performed in the adult outpatient liver (LTRs; N = 280) and kidney transplant recipients (KTRs; N = 232). Serum samples were initially tested for SARS-CoV-2 IgG antibodies using a commercial enzyme-linked immunosorbent assay (ELISA; Vircell Microbiologists, Granada, Spain). All positive samples were confirmed using a virus neutralization test (VNT). Data on risk exposure and COVID-19 related symptoms were collected using a questionnaire. Results The transplanted cohort’s seroprevalence detected by ELISA and VNT was 20.1% and 3.1%, respectively. Neutralizing (NT) antibodies developed in 15.6% of anti-SARS-CoV-2 ELISA IgG positive SOTRs. The difference in seropositivity rates between LTRs and KTRs was not statistically significant (ELISA 21.1% vs. 19.0%, P = 0.554; VNT 3.6% vs. 2.6%, P = 0.082). Overall VNT positivity rates were higher in patients who reported participation in large community events (5.9% vs. 1.0%; P = 0.027) as well as in patients who reported COVID-19 related symptoms in the past six months. In addition, symptomatic VNT positive patients showed significantly higher (P = 0.031) NT antibody titers (median 128, interquartile range (IQR) = 32-128) compared to asymptomatic patients (median 16, IQR = 16-48). Conclusions This study showed that 15.6% of anti-SARS-CoV-2 ELISA positive Croatian SOTRs developed NT antibodies indicating protective immunity. Further studies are needed to determine the dynamic of NT antibodies and COVID-19 immunity duration in immunocompromised populations such as LTRs and KTRs.
Introduction
Patients after solid-organ transplantation have been severely affected by the coronavirus disease 2019 (COVID-19) pandemic. Disruption of medical services, a higher risk of infection, and its consequences are major reasons for concern in this immunocompromised group of patients. Immunosuppression modulates the immune system, affecting humoral immune responses against various pathogens, including viruses. The serologic immunoglobulin G (IgG) response against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been an area of intense investigation; however, the immune response to SARS-CoV-2 after transplantation remains unknown and directs further research. Screening with a SARS-CoV-2 IgG enzyme-linked immunosorbent assay (ELISA) followed by virus neutralization test (VNT) confirmation is a reliable approach for seroepidemiological studies, which is crucial to assess infection attack rates in the population and herd immunity (1). Currently, there is a lack of seroprevalence data for specific patient populations, such as transplanted patients. In Croatia, the first case of COVID-19 was reported on 25 February 2020; by 2 May 2021, more than 334,229 people had been affected, resulting in 7130 deaths. During the first (February-May 2020) and second (August 2020-February 2021) pandemic waves, cases were recorded in all Croatian counties. However, there are few published studies on the prevalence of SARS-CoV-2 in selected population groups, and the COVID-19 seroprevalence in solid-organ transplant recipients (SOTRs) is unknown. In this study, we analyzed the seroprevalence of SARS-CoV-2 at the beginning of the second pandemic wave in liver (LTRs) and kidney transplant recipients (KTRs) in Croatia.
Subjects
We performed a cross-sectional screening for COVID-19 in 512 adult outpatient SOTRs in a single transplant center in Croatia: LTRs (N = 280) and KTRs (N = 232), from 7 September to 27 November 2020, at the beginning of the second-wave COVID-19 epidemic curve in Croatia. All transplanted patients who had an appointment in the transplant outpatient clinic in this period and signed the informed consent were included in the study. All patients were asymptomatic at the time of sampling. The study was approved by the Hospital Ethics Committee (reference: 03/1-7732/9).
Methods
Each patient completed a COVID-19 questionnaire regarding risk exposure (attendance at large gatherings such as weddings/funerals/concerts, traveling abroad, or receiving blood products) and COVID-19 related symptoms (fever, cough, ageusia, anosmia, headache, and myalgia) since the beginning of 2020, as well as previous SARS-CoV-2 reverse transcription polymerase chain reaction (RT-PCR) testing. The questionnaire response rate was 82%. All participants were enrolled in the study, regardless of their response to the questionnaire. Blood samples were collected by venipuncture in serum blood tubes (without anticoagulants) in a non-fasting state at any time of day. After centrifugation (3000 rpm for 10 min), serum samples were stored at -20 °C until testing.
Statistical analysis
Statistical analysis was performed using SPSS Version 17.0.1 (SPSS Inc., Chicago, IL, USA). Differences between the groups were tested using the χ² or Fisher's exact test (categorical variables) and the Mann-Whitney U test (ordinal or numerical variables). The level of statistical significance was set at P < 0.05.
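For readers who wish to reproduce this style of analysis outside SPSS, the named tests map directly onto scipy; the counts and titers below are made-up placeholders, not study data.

```python
from scipy import stats

# 2x2 contingency table of VNT positivity by exposure (placeholder counts)
table = [[6, 96], [3, 297]]
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

# Fisher's exact test is preferred when expected cell counts are small
odds_ratio, p_fisher = stats.fisher_exact(table)

# Mann-Whitney U test for NT antibody titers (ordinal data)
titers_symptomatic = [128, 32, 128, 64, 128]   # placeholder values
titers_asymptomatic = [16, 16, 48, 16, 32]
u_stat, p_mw = stats.mannwhitneyu(titers_symptomatic, titers_asymptomatic,
                                  alternative="two-sided")
```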
Results
Anti-SARS-CoV-2 IgG antibodies were found in both asymptomatic patients (N = 301) and patients who reported COVID-19 related symptoms in the past six months (N = 119). In the KTR group, significantly higher VNT positivity rates were documented in patients who reported fever, cough, anosmia and ageusia compared to patients who reported no symptoms (Table 1). In the LTR group, NT antibodies were more frequently detected in patients who reported cough and breathing difficulties (Table 2).
Discussion
This is the first study to demonstrate the anti-SARS-CoV-2 seroprevalence in Croatian LTRs and KTRs. The study was conducted at the beginning of the second COVID-19 pandemic wave (September-November 2020), demonstrating a seroprevalence of 20.1% in the transplanted cohort using ELISA, with 15.6% of anti-SARS-CoV-2 IgG positive SOTRs developing NT antibodies. The seroprevalence in this population group was found to be significantly higher compared to some other populations. At the end of the first wave, the SARS-CoV-2 seroprevalence in Croatia was very low. Using an immunochromatography assay, IgG antibodies were detected in only 1.27% of industry workers in Split-Dalmatia and Šibenik-Knin Counties (5). In addition, 2.7% of healthcare workers tested seropositive using ELISA, while NT antibodies were found in only 1.5% of those tested (4). Additionally, 2.2% of the general population were shown to be ELISA positive after the first wave, with low titers of NT antibodies detected in only 0.2% of individuals (6). A more recent study conducted among children from Children's Hospital Zagreb showed that the prevalence of anti-SARS-CoV-2 antibodies differed significantly between the first wave (2.9%) and the second wave (8.4%) of the COVID-19 pandemic (7).
The presentation of SOTRs with COVID-19 appears similar to the general population, and the majority develop symptoms such as fever, cough, or diarrhea; however, some patients may remain asymptomatic (8,9). As in our cohort, anti-SARS-CoV-2 IgG was found in both symptomatic and asymptomatic SOTRs, with a significant difference in VNT positivity among patients who reported COVID-19 related symptoms in the past six months (KTRs: fever, cough, anosmia and ageusia; LTRs: cough and breathing difficulties) compared to the asymptomatic patients.
Patients with more severe disease tend towards higher antibody levels (9). Similar to the results of other studies, our study showed that symptomatic patients developed higher NT antibody titers compared to asymptomatic ones. Compared to a previous Croatian study in the general population, the prevalence of NT antibodies was slightly higher in the SOTRs (3.1% vs. 2.2%) (6). However, it is important to note that NT antibodies were detected in 15.6% of ELISA positive SOTRs compared to 8.3% of the general population. The study in the Croatian general population was conducted after the first wave, while SOTRs were tested at the beginning of the second wave, when the number of COVID-19 cases increased sharply, which could explain, at least in part, the observed difference in VNT positivity rates.
It was previously shown that most asymptomatic and mildly ill patients did not produce significant levels of IgM antibodies, indicating that IgM-based diagnosis is not sensitive and efficient. In contrast, similar IgG responses were detected in all patients (13). Therefore, IgM antibodies were not tested in this study, which should be regarded as one of its limitations.
In addition, the study's cross-sectional design precludes us from drawing conclusions regarding the duration of anti-SARS-CoV-2 IgG in SOTRs or distinguishing between impaired antibody production and a rapid antibody decline, as we lack data on the level of immunosuppression as well as serial time-point measurements with longer follow-up.
In immunocompromised patients such as SOTRs, it is necessary to understand the immune response to SARS-CoV-2 and identify factors associated with inadequate antibody response among those who failed to seroconvert. Therefore, further studies are needed to assess the dynamic of NT antibodies and COVID-19 immunity duration in immunocompromised populations such as LTRs and KTRs.
|
v3-fos-license
|
2021-06-28T05:07:34.513Z
|
2021-06-01T00:00:00.000
|
235654006
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2079-4991/11/6/1620/pdf",
"pdf_hash": "3f0d7f2a265fa4f653ba3eefc1287dbdc2926b37",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43458",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "3f0d7f2a265fa4f653ba3eefc1287dbdc2926b37",
"year": 2021
}
|
pes2o/s2orc
|
Ultra-Low Percolation Threshold Induced by Thermal Treatments in Co-Continuous Blend-Based PP/PS/MWCNTs Nanocomposites
The effect of the crystallization of polypropylene (PP) forming an immiscible polymer blend with polystyrene (PS) containing conductive multi-wall carbon nanotubes (MWCNTs) on its electrical conductivity and electrical percolation threshold (PT) was investigated in this work. PP/PS/MWCNTs composites with a co-continuous morphology and a concentration of MWCNTs ranging from 0 to 2 wt.% were obtained. The PT was greatly reduced by a two-step approach. First, a 50% reduction in the PT was achieved by exploiting the double-percolation effect in the blend system compared to PP/MWCNTs. Second, with the additional thermal treatments, referred to as slow-cooling treatment (with a cooling rate of 0.5 °C/min) and isothermal treatment (at 135 °C for 15 min), ultra-low PT values were achieved for the PP/PS/MWCNTs system: 0.06 wt.% of MWCNTs upon the use of the slow-cooling treatment and 0.08 wt.% of MWCNTs upon the isothermal treatment. This reduction is attributed to the volume exclusion effect of PP crystals, with no alteration in the blend morphology.
Introduction
Conductive thermoplastic composites have gained a lot of attention in different research fields since they are used in a variety of industrial applications. Among their many applications, these materials are used in sensors, intelligent medical devices, energy harvesting, actuators, flexible electronics, robotics, static dissipation and electromagnetic interference (EMI) shielding. The interest in these composite materials stems from their advantages, namely the ability to achieve the electrical conductivity in a wide range, processability into products of complex shapes, as well as flexibility, lightweight, and corrosion resistance [1][2][3][4][5][6][7][8][9][10][11][12].
Several methods to reduce the PT concentration have been reported in the literature. One of the simplest methods is to use well-dispersed conductive particles with the highest possible aspect ratio as a filler. This has been experimentally found to lower the PT, as was predicted by Bruggeman, V.D., and Böttcher, C. in the 1940s [38,39]. In order to further reduce the PT, more recent studies have dealt with the modification of nanoparticles, whilst others have made use of the controlled matrix morphology to tailor the location of nanoparticles. In particular, the use of immiscible polymer blends (PBs), presenting a co-continuous morphology, has been suggested. By controlling the interfacial tension between the filler and the polymers forming the blend, one can tailor the location of the conductive filler at the interface, thus lowering the PT. Depending on the used binary blend, the nanoparticles were added alone or with a compatibilizer [6,9,24-26,40-43]. For example, Chen, J. et al. have shown that introducing 5 wt.% of SEBS-g-MA in a PP/PS (70/30 wt.%) co-continuous matrix reduces the PT from 1.22 wt.% for PP/PS/MWCNTs to 0.66 wt.% for PP+SEBS-g-MA/PS/MWCNTs [44]. Other additives like graphene, graphene oxide (GO), organoclay, CB, and noncovalent and covalent modifiers were also reported in the literature to reduce the PT of PB/CNTs co-continuous systems [9,11,24,28,41,45-47].
Furthermore, other researchers suggested that the thermal annealing of polymer/filler or PB/filler composites above the melting or softening temperature could further decrease the PT and improve the electrical conductivity of a system [11,14,18,29-31,35,42,48-52]. In polymer blend-based composites, this effect is usually accompanied by changes in the blend morphology. First, the morphology coarsens (the domain size becomes larger) and then it is stabilized by increasing the annealing time of the filled composites. This leads to the easier creation of the conductive filler network due to the reduction in the interphase area where the filler is distributed. One of the most dramatic effects was, for example, achieved by Chen, Y. et al. who showed that annealing at 200 °C for 2 h decreased the PT from 0.48 to 0.09 wt.% for a PP/PMMA (30/70 wt.%) co-continuous blend containing MWNTs [49]. All these strategies have been used for polymer pairs containing both amorphous and semi-crystalline polymers. However, the influence of crystallization on the PT and electrical conductivity of immiscible polymer blends containing at least one semi-crystalline polymer has to date not been adequately evaluated.
To our knowledge, there are few studies that have evaluated the effect of crystallization on the PT of polymer/filler conductive composites [53-55]. Wang, J. et al. have investigated the effect of the cooling rate on the electrical conductivity of PP/MWCNTs composites containing a sorbitol-based external nucleating agent (NA). They showed that the PT was reduced for both PP/MWCNTs and PP/MWCNTs/NA from 0.75 wt.% at a fast cooling rate of 150 °C/min to 0.36 wt.% at a slow cooling rate of 1.5 °C/min [53]. Huang, C. et al. have shown the effect of matrix crystallinity on the PT of PLLA/MWCNTs composites containing 0.15 wt.% of a NA. The PT was reduced from 0.96 to 0.75 wt.% for samples which were treated at 130 °C for 0.1 and 6 min, respectively [55]. Other researchers have reported the effect of stereocomplex (SC) crystallization on the PT and electrical conductivity of miscible PLLA/PDLA blend composites containing a conductive filler [56-60]. In these cases, the change in PT happens due to the volume exclusion effect of stereocomplex crystals.
The present study aimed to shed light on the effect of the semi-crystalline polymer's crystallization within the PB on the reduction in the percolation threshold. In particular, for this we chose PP and PP/PS blend-based composites containing MWCNTs obtained by the melt-mixing process. It will be shown that two types of treatments aiming to affect PP crystal growth can significantly improve the electrical conductivity and reduce the PT to ultra-low values. These treatments are proposed to achieve lower PTs for other semi-crystalline-based PB co-continuous systems. In addition, it will be shown that the proposed treatments do not significantly change the PB morphology compared to thermal annealing above the melting or softening temperature.
Materials
Commercial polypropylene (PP)-PP4712E1 grade from ExxonMobil with a density of 0.9 g·cm⁻³-and polystyrene (PS)-MC3650 from PolyOne with a density of 1.04 g·cm⁻³-were used in this work. Multi-walled carbon nanotubes (MWCNTs), grade NC7000™ from Nanocyl, with an average diameter and length of 9.5 nm and 1.5 µm, respectively (aspect ratio ~160), and with a nominal electrical conductivity of 10⁶ S·m⁻¹, were used as the conductive filler in the composites.
Extrusion
The materials were prepared by a melt-mixing process using a Haake Rheomix OS PTW16 twin-screw extruder (Thermo Fisher Scientific Inc., Waltham, MA, USA). The temperature was fixed at 220 °C in all zones, and the screw speed was adjusted to 100 rpm for all compositions. First, a masterbatch of PP with 10 wt.% of MWCNTs was prepared. Second, PP composites with a MWCNTs concentration varying from 0 to 2 wt.%, as well as co-continuous 50/50 PP/PS blends with a MWCNTs concentration also varying from 0 to 2 wt.%, were prepared by dilution of the PP/MWCNTs masterbatch.
To achieve a co-continuous morphology for the PP/PS/MWCNTs composites, the co-continuous composition range was estimated from viscosity measurements of both PP and PS. The viscosity of both PP and PS was measured using a capillary rheometer at a temperature of 200 °C. For the effective shear rate of 100 s⁻¹ experienced in the twin-screw extruder, both polymers manifested similar viscosities (as can be seen in Figure S1 in the Supplementary Materials) [61,62]. A 50/50 wt.% PP/PS concentration was, therefore, chosen following the analysis of Jordhamo, G. et al. [63].
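As a quick illustration of this criterion, the sketch below evaluates the phase-inversion parameter of Jordhamo et al.; the viscosity values are placeholders chosen only to reflect the "similar viscosities" observation, not measured data.

```python
def jordhamo_ratio(eta_1, eta_2, phi_1, phi_2):
    """Jordhamo phase-inversion parameter: co-continuity is expected
    when (eta_1/eta_2) * (phi_2/phi_1) is close to 1."""
    return (eta_1 / eta_2) * (phi_2 / phi_1)

# Similar PP and PS viscosities at 100 1/s (illustrative values, Pa*s)
r = jordhamo_ratio(eta_1=450.0, eta_2=470.0, phi_1=0.5, phi_2=0.5)
# r ~ 0.96, close to 1 -> a 50/50 composition should be co-continuous
```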
Thermal Treatments
Three thermal treatments were performed by compression molding and were designated here as: (1) fast cooling; (2) slow cooling; and (3) isothermal. Figure 1 shows a schematic of the temperature vs. time profile for all thermal treatments.
The three treatments consisted of three steps each, with the first two steps being the same and a third one differing for each treatment. The first step was at a temperature of 200 °C under a constant pressure of 0.8 MPa for 10 min. The second one was at a temperature of 200 °C under a pressure of 10 MPa for an additional 10 min.
1. For the fast-cooling treatment, the third step was fast cooling to room temperature, performed at a rate of 50 °C/min under a pressure of 10 MPa. The whole treatment, all three steps, took 22 min (Figure 1).
2. For the slow-cooling treatment, the third step was fast cooling to 160 °C, followed by slow cooling from 160 to 135 °C at a rate of 0.5 °C/min, and then fast cooling from 135 °C to room temperature under a pressure of 10 MPa. The whole treatment took 1 h 10 min (Figure 1). The starting temperature of the treatment (160 °C) was chosen to prevent the so-called annealing effect [11,14,29-31,49], which could involve the coarsening of the blend morphology. The temperature of 135 °C at the end of the treatment corresponds to the highest onset temperature of crystallization, evaluated by the DSC analysis.
3. For the isothermal treatment, the third step was fast cooling to 135 °C, which was maintained for 15 min. Then, the sample was fast cooled to room temperature under a pressure of 10 MPa. The whole treatment took 36 min (Figure 1).
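For concreteness, the three profiles can be written down as piecewise-linear temperature-time paths. The sketch below reconstructs them from the rates and holds stated above, assuming room temperature is 25 °C and that all fast segments run at 50 °C/min; the resulting totals land close to the reported 22 min, 1 h 10 min, and 36 min.

```python
# Reconstruction of the Figure 1 temperature-time profiles as piecewise-linear
# (time_min, temp_C) points, from the rates and holds given in the text.
# Room temperature (25 C) and fast-segment rate (50 C/min) are assumptions.
T_ROOM = 25.0

def fast_cooling():
    # 10 min at 200 C (0.8 MPa) + 10 min at 200 C (10 MPa) + cool at 50 C/min.
    t_cool = (200.0 - T_ROOM) / 50.0
    return [(0.0, 200.0), (20.0, 200.0), (20.0 + t_cool, T_ROOM)]

def slow_cooling():
    # Fast cool to 160 C, 0.5 C/min from 160 to 135 C, then fast cool to RT.
    t1 = 20.0 + (200.0 - 160.0) / 50.0
    t2 = t1 + (160.0 - 135.0) / 0.5          # 50 min slow segment
    return [(0.0, 200.0), (20.0, 200.0), (t1, 160.0), (t2, 135.0),
            (t2 + (135.0 - T_ROOM) / 50.0, T_ROOM)]

def isothermal():
    # Fast cool to 135 C, hold 15 min, then fast cool to RT.
    t1 = 20.0 + (200.0 - 135.0) / 50.0
    return [(0.0, 200.0), (20.0, 200.0), (t1, 135.0), (t1 + 15.0, 135.0),
            (t1 + 15.0 + (135.0 - T_ROOM) / 50.0, T_ROOM)]

for name, profile in [("fast", fast_cooling()), ("slow", slow_cooling()),
                      ("isothermal", isothermal())]:
    print(f"{name}: ends at {profile[-1][0]:.0f} min")
```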
Fast- and slow-cooling treatments were carried out to study how the cooling rate, which has a direct effect on the crystallization kinetics and crystal morphology, affects the electrical conductivity through the volume exclusion effect. Slow cooling was performed at 0.5 °C/min to allow the growth of larger crystals.
Crystallization Studies
Differential scanning calorimetry (DSC) was performed using a Pyris 1 Differential Scanning Calorimeter (PerkinElmer, Waltham, MA, USA). The nitrogen gas flow rate was set to 20 mL/min. The samples were encapsulated in standard aluminum pans and covers. The DSC was calibrated using indium and zinc standards.
Two different cycles were used to determine the crystallinity of the composites as well as the crystallization kinetics.
Non-Isothermal Crystallization
The samples were heated from 50 to 200 °C at 10 °C/min and then cooled from 200 to 50 °C at 10 °C/min under a nitrogen atmosphere. This thermal cycle was performed twice for all samples to erase their thermo-mechanical history. The data from the second heating and cooling cycle were used for calculations. Prior to that, the same thermal cycle was run with empty pans to obtain a baseline.
The crystallinity for all compositions was calculated using Equation (1):

$$X_c = \frac{\Delta H}{(1 - w)\,\Delta H_m} \times 100\%$$ (1)

where $X_c$ is the weight fraction of the crystalline phase, $\Delta H$ is the heat of fusion of the sample, $\Delta H_m$ is the heat of fusion of 100% crystalline PP (207 J/g) [64], and $w$ is the MWCNTs weight fraction for PP/MWCNTs composites and the (MWCNTs + PS) weight fraction for PP/PS/MWCNTs composites.
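As a small numerical illustration of Equation (1), the sketch below applies it to a hypothetical DSC measurement; the sample heat of fusion used is a placeholder, not data from this study.

```python
# Degree of crystallinity from DSC, following Equation (1):
# X_c = dH / ((1 - w) * dH_m) * 100%, with dH_m = 207 J/g for 100% crystalline PP.

DH_M_PP = 207.0  # J/g, heat of fusion of 100% crystalline PP [64]

def crystallinity(dh_sample: float, w_filler: float) -> float:
    """Weight-fraction crystallinity (%) normalized by the PP mass fraction."""
    return dh_sample / ((1.0 - w_filler) * DH_M_PP) * 100.0

# e.g. a PP/MWCNTs composite with 0.5 wt.% filler and 95 J/g measured (placeholder):
print(f"X_c = {crystallinity(95.0, 0.005):.1f} %")
```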
Isothermal Crystallization
The samples were heated from 50 to 200 °C at 50 °C/min and kept at this temperature for 5 min to eliminate any previous thermal history; then, they were cooled to 135 °C at 50 °C/min and kept at 135 °C for 30 min for isothermal crystallization. Again, in this case, prior to testing the sample, the same thermal cycle was run with empty pans to obtain a baseline.
Polarized Optical Microscopy
The growth and morphology of the crystals were studied using polarized optical microscopy (POM) with an OLYMPUS BX51 microscope (Olympus Co., Tokyo, Japan) equipped with a hot stage. The morphology evolution (crystal growth) at 135 °C was observed; in this case, the heat treatment was performed without any applied pressure. Furthermore, the crystal morphology of pure PP and PP/MWCNTs composites at room temperature, obtained by the fast-cooling, isothermal, and slow-cooling treatments described above, was also observed.
Electrical Conductivity
The electrical properties as a function of frequency for all compositions were evaluated using a broadband dielectric spectrometer (BDS) (Novocontrol Technologies GmbH & Co. KG, Montabaur, Germany) in the frequency range from 10⁻² to 3 × 10⁵ Hz under an excitation voltage of 3 V RMS applied across the sample.
The electrical conductivity was evaluated from the measurement of the AC complex conductivity as a function of frequency, $\sigma^*(\omega)$, which is related to the complex permittivity by

$$\sigma^*(\omega) = i\,\omega\,\varepsilon_0\,\varepsilon^*(\omega)$$ (2)

where $\omega$ is the angular frequency, $\varepsilon_0$ is the vacuum permittivity, and $\varepsilon^*(\omega)$ is the complex permittivity, which includes the contribution of the electrical conductivity and can be expressed as

$$\varepsilon^*(\omega) = \varepsilon'(\omega) - i\,\varepsilon''_{tot}(\omega), \qquad \varepsilon''_{tot}(\omega) = \varepsilon''_P(\omega) + \frac{\sigma}{\omega\,\varepsilon_0}$$ (3)

where $\varepsilon'$ is the real part of the complex permittivity, $\varepsilon''_{tot}(\omega)$ the total imaginary part, $\varepsilon''_P$ the imaginary part of the permittivity due to polarization phenomena, and $\sigma$ the electrical conductivity. By combining these two equations, we obtain

$$\sigma^*(\omega) = \sigma'(\omega) + i\,\sigma''(\omega)$$ (4)

where the real part of the complex conductivity is

$$\sigma'(\omega) = \omega\,\varepsilon_0\,\varepsilon''_{tot}(\omega) = \sigma + \omega\,\varepsilon_0\,\varepsilon''_P(\omega)$$ (5)

The equipment measures the total imaginary part $\varepsilon''_{tot}(\omega)$, since the device cannot distinguish between the two contributions. Accordingly, the electrical conductivity cannot be formally isolated. However, since it does not increase with frequency, unlike the contribution from $\varepsilon''_P(\omega)$, the occurrence of a low-frequency plateau in the plot of $\sigma'(\omega)$ as a function of frequency indicates that the electrical conductivity dominates the real part of the complex conductivity, which then becomes very close to the true DC conductivity at low frequencies. The electrical conductivity values presented in this work refer to the value of $\sigma'(\omega)$ at the lowest frequency (1 × 10⁻² Hz). Consequently, this value is always higher than the true conductivity, particularly below the percolation threshold; however, once the percolation threshold is reached, it gives a very good approximation of the conductivity. Disks of 25 mm in diameter and 1 mm in thickness, covered on both sides with 20 nm of gold, were used for the measurements.
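A minimal numerical sketch of this extraction procedure, using synthetic rather than measured spectra, is given below: σ′(ω) is computed from ε″_tot(ω) and read off at the lowest frequency.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def sigma_prime(freq_hz: np.ndarray, eps_imag_tot: np.ndarray) -> np.ndarray:
    """Real part of the complex conductivity: sigma'(w) = w * eps0 * eps''_tot(w)."""
    omega = 2.0 * np.pi * freq_hz
    return omega * EPS0 * eps_imag_tot

# Synthetic percolated sample: eps''_tot dominated by a DC term sigma/(w*eps0).
freq = np.logspace(-2, 5.5, 50)                       # 1e-2 .. ~3e5 Hz
sigma_dc = 1e-4                                       # S/m, illustrative value
eps_imag = sigma_dc / (2 * np.pi * freq * EPS0) + 0.05  # DC term + small polarization part

s = sigma_prime(freq, eps_imag)
# Report the value at the lowest frequency, as done in the text (1e-2 Hz):
print(f"sigma'(1e-2 Hz) = {s[0]:.3e} S/m (close to the DC plateau)")
```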
Scanning Electron Microscopy
The morphology of PP/PS/MWCNTs composites for all treatments was observed by scanning electron microscopy (SEM) using an S3600 Hitachi microscope (Hitachi, Ltd., Tokyo, Japan) in secondary electron mode. The samples were fractured in liquid nitrogen, and the polystyrene phase was then extracted using butanone at room temperature under continuous stirring for two hours. The samples were subsequently dried under vacuum at room temperature for 12 h. After drying, the samples were coated with gold using a gold sputter coater, model K550X. All porous samples after PS extraction were imaged at an accelerating voltage of 5 kV.
Effect of the Treatments on Electrical Conductivity and Morphology of PP/PS/MWCNTs Composites
Figure 2 shows the electrical conductivity as a function of MWCNTs mass fraction for PP/PS/MWCNTs composites after the fast-cooling, isothermal, and slow-cooling treatments. Equation (6) was used to calculate the percolation threshold of composites that underwent the different thermal treatments:

$$\sigma = k\,(p - p_c)^t$$ (6)

where $\sigma$ is the electrical conductivity of the composite, $p$ is the mass fraction of MWCNTs, $p_c$ is the percolation threshold (PT), $t$ is a fitted exponent that depends only on the dimensionality of the system, and $k$ is a scaling factor. It should be noted that this equation is valid for $p > p_c$. A linear regression fit of log(σ) vs. log(p − p_c) was employed to determine the percolation threshold. The results of these fits for each treatment of PP/PS/MWCNTs composites are presented in Table 1.

The results presented in Figure 2 and Table 1 show that the isothermal and slow-cooling treatments resulted in a much lower percolation threshold. The percolation threshold was drastically reduced from 0.28 wt.% to 0.08 wt.% and 0.06 wt.% of MWCNTs for the isothermal treatment and slow-cooling treatment, respectively, where an increase of 10 orders of magnitude in electrical conductivity was observed. Here, a double effect on reducing the PT of PP/PS/MWCNTs composites was achieved. On the one hand, the PT was reduced due to the effect of double percolation arising from the co-continuous morphology of the PP/PS/MWCNTs composites. Indeed, the PT was reduced from 0.6 wt.% for PP/MWCNTs composites (as can be seen in Figure S2 and Table S1 in the Supplementary Materials) to 0.28 wt.% for PP/PS/MWCNTs composites after the fast-cooling treatment. On the other hand, an ultra-low PT was achieved for PP/PS/MWCNTs composites after the isothermal and slow-cooling treatments. These results can be explained by the exclusion of the MWCNTs by the PP crystalline structure, as was observed by Wang, J. et al. [53].
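A hedged sketch of the fitting procedure described above: a linear regression of log σ against log(p − p_c), scanning candidate p_c values for the best fit. The conductivity values here are placeholders rather than the measured data of Table 1.

```python
import numpy as np

# Percolation law (Equation (6)): sigma = k * (p - pc)**t, valid for p > pc.
# Fit log(sigma) = log(k) + t*log(p - pc) by linear regression, scanning pc.
p = np.array([0.1, 0.3, 0.5, 1.0, 2.0])          # wt.% MWCNTs (placeholder)
sigma = np.array([1e-6, 3e-4, 2e-3, 2e-2, 1e-1])  # S/m (placeholder)

best = None
for pc in np.linspace(0.01, 0.09, 81):            # candidate thresholds
    mask = p > pc
    x, y = np.log10(p[mask] - pc), np.log10(sigma[mask])
    t, logk = np.polyfit(x, y, 1)                 # slope t, intercept log10(k)
    resid = np.sum((np.polyval([t, logk], x) - y) ** 2)
    if best is None or resid < best[0]:
        best = (resid, pc, t, 10 ** logk)

_, pc, t, k = best
print(f"pc = {pc:.3f} wt.%, t = {t:.2f}, k = {k:.3e} S/m")
```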
PP/PS/MWCNTs composites formed a co-continuous structure, whose morphology can be quantified by calculating the characteristic domain size ξ, the total area of the SEM image per total interfacial length between the PP and PS phases. The evolution of the blend morphology for all composites and treatments was investigated using SEM images. The characteristic domain size ξ was obtained by averaging at least five different SEM images for the same treatment and the same concentration of MWCNTs using the following equation [30,31]:

$$\xi = \frac{A_{SEM}}{L_{int}}$$ (7)

where $A_{SEM}$ is the total area of the SEM image and $L_{int}$ is the interface length between the two phases, estimated using a homemade image analysis script (as can be seen in Figure S3 in the Supplementary Materials, which shows how $L_{int}$ was estimated).

Figures 3 and 4 show SEM images and the plot of the characteristic domain size of PP/PS/MWCNTs composites, for which the PS phase was extracted, at different filler concentrations after the fast-cooling treatment. It can be seen that upon the addition of MWCNTs the characteristic domain size drastically decreases, from 11.3 µm for the neat PP/PS blend to 1.3 µm for the PP/PS/MWCNTs composite with 0.5 wt.% of MWCNTs. Indeed, the composites were prepared by adding PP/MWCNTs to PS. PP/MWCNTs presents a higher viscosity than pure PP and therefore transfers more stress to the PS phase, resulting in a finer morphology. Furthermore, the better affinity of MWCNTs to PS favors their migration toward the PS phase, preventing its coalescence and coarsening, which leads to a drastic decrease in the characteristic domain size [29,30]. This decrease indicates that MWCNTs refined the morphology and that the number of viable electrical paths increased, explaining the increase in electrical conductivity shown in Figure 2.

SEM images of the composites after the different treatments are shown in Figure 5a-c. Figure 6 shows the characteristic domain size (ξ) of the PP/PS blends as a function of MWCNTs concentration for the different treatments. It can be seen that, for composites with the same amount of MWCNTs but subjected to different treatments, the blend morphology did not change, although a larger electrical conductivity was observed after the slow-cooling and isothermal treatments. For example, the characteristic domain size is 2.8 µm, 2.7 µm, and 2.7 µm for the PP/PS/MWCNTs composite with 0.1 wt.% of MWCNTs subjected to the fast-cooling, isothermal, and slow-cooling treatments, respectively, whereas the electrical conductivity is 3.4 × 10⁻¹⁴ S/m, 2.0 × 10⁻⁴ S/m, and 1.7 × 10⁻⁴ S/m for the same three treatments.

These observations indicate that the increase in electrical conductivity upon thermal treatment, in the case of the blends studied here, may not originate from an evolution of the blend morphology as suggested by several researchers [11,14,18,29-31,42,48,50]. Rather, it stems from an evolution of the crystalline morphology of the semi-crystalline polymer. The next section presents an analysis of the crystallization of PP that will help to understand the obtained results.
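For illustration, Equation (7) can be approximated from a binarized SEM image by counting phase transitions between neighbouring pixels. The pure-numpy sketch below uses a toy image; it is an approximation, not the homemade script of Figure S3.

```python
import numpy as np

def characteristic_domain_size(binary_img: np.ndarray, pixel_size_um: float) -> float:
    """Estimate xi = A_SEM / L_int (Equation (7)) from a binary phase image.

    binary_img: 2D bool array, True = PS phase (e.g. from thresholding a SEM
    image after PS extraction). pixel_size_um: physical size of one pixel.
    The interface length is approximated by counting phase changes between
    4-connected neighbouring pixels.
    """
    img = binary_img.astype(np.uint8)
    # Horizontal and vertical phase transitions approximate the interface.
    l_int_px = np.sum(img[:, 1:] != img[:, :-1]) + np.sum(img[1:, :] != img[:-1, :])
    area_um2 = img.size * pixel_size_um ** 2
    return area_um2 / (l_int_px * pixel_size_um)

# Toy example: a single square PS domain in a PP matrix (placeholder image).
img = np.zeros((200, 200), dtype=bool)
img[50:150, 50:150] = True
print(f"xi = {characteristic_domain_size(img, 0.1):.1f} um")
```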
Crystallization Behavior and Electrical Conductivity of PP/MWCNTs and PP/PS/MWCNTs Nanocomposites
DSC thermograms of non-isothermal and isothermal crystallization were used to investigate the crystallization state as well as the isothermal crystallization behavior of PP/MWCNTs and PP/PS/MWCNTs nanocomposites. Figure 7 shows typical thermograms obtained by DSC during cooling scans at 10 °C/min for PP and the PP/MWCNTs composite containing 0.1 wt.% of MWCNTs. Similar results were obtained for all composites. These curves were used to infer the peak crystallization temperature, the onset and end of crystallization, as well as the crystallinity rate for all composites. Melting temperatures for all compositions were also determined by DSC during heating scans at 10 °C/min. The data for the peak crystallization temperature (T_c), as well as the degree of crystallinity as a function of MWCNTs concentration for both PP/MWCNTs and PP/PS/MWCNTs, are presented in Figure 8a,b. The data for the melting temperatures and the onset and end of crystallization temperatures can be found in Table S2 in the Supplementary Materials.
The peak crystallization temperature increases upon the addition of MWCNTs (Figure 8a). This increase in crystallization temperature originates from the nucleating effect of the MWCNTs. The effect is more pronounced for PP/MWCNTs composites as opposed to PP/PS/MWCNTs composites, which could be due to a favored location of the MWCNTs at the interface between both phases of the blends.
Finally, the addition of MWCNTs and the blending of the polymers do not change the degree of crystallinity of PP, either for PP/MWCNTs composites or for PP/PS/MWCNTs composites (Figure 8b).

Figure 9a,b show typical heat flow curves and relative crystallinity curves as a function of time during isothermal crystallization tests at a temperature of 135 °C for the MWCNTs-containing composites studied here. Similar curves were obtained for all composites. The heat flow curves (Figure 9a) were used to infer the relative crystallinity X(t) as a function of time, which can be obtained from the area under the exothermic peak up to time t, divided by the total exothermic peak area, as expressed in Equation (8) [65]:

$$X(t) = \frac{\int_0^t (dH_c/dt)\,dt}{\int_0^\infty (dH_c/dt)\,dt}$$ (8)

where $dH_c$ is the heat flow required for crystallization during a time interval $dt$. Figure 9b shows the relative crystallinity curves as a function of time (obtained by applying Equation (8) to the Figure 9a data) for the composites studied here at different MWCNTs concentrations at 135 °C. These curves were used to infer the crystallization half time (t_1/2), which is the time required to complete 50% of the crystallization, and the induction time, which is the time at which the nuclei start to form. The data for t_1/2 and the induction time as a function of MWCNTs concentration are shown in Figure 10a,b.
The results presented in Figure 10a show that, upon the addition of MWCNTs to PP/MWCNTs composites, the crystallization half time drastically decreases, indicating that MWCNTs act as a nucleating agent. The crystallization half time was found to be equal to 62, 6, 3, and 2.6 min for composites with 0, 0.1, 0.3, and 0.5 wt.%, respectively. For PP/PS/MWCNTs composites, not only the effect of the MWCNTs amount on relative crystallinity should be taken into account, but also the matrix composition. The crystallization half time did not change significantly with increasing MWCNTs concentration: it was determined to be 12, 7, and 5 min for composites with 0.1, 0.3, and 0.5 wt.%, respectively, which is approximately two times longer than for PP/MWCNTs composites with the same MWCNTs amount. This could be caused by the selective localization of MWCNTs at the interface inside the PP/PS matrix, which weakens the nucleation effect since nucleation is not a factor for PS. However, t_1/2 for the pure PP/PS blend is three times smaller than for pure PP; the presence of PS in the mixture could have changed the energy needed for the ultimate crystallization of PP. The same behaviour was observed for the induction time of crystallization for both PP/MWCNTs and PP/PS/MWCNTs (Figure 10b).
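As a numerical illustration of Equation (8) and of how t_1/2 is read from the relative crystallinity curve, the sketch below uses a synthetic Avrami-like exotherm rather than measured DSC data.

```python
import numpy as np

# Relative crystallinity from an isothermal DSC exotherm (Equation (8)):
# X(t) = integral_0^t (dHc/dt) dt / integral_0^inf (dHc/dt) dt
t = np.linspace(0.0, 30.0, 3001)                    # min
# Synthetic Avrami-like exotherm: dHc/dt ~ d/dt [1 - exp(-K t^n)]
n, K = 2.5, 0.02
heat_flow = n * K * t ** (n - 1) * np.exp(-K * t ** n)  # arbitrary units

# Cumulative trapezoidal integration, normalized to the total peak area.
cum = np.concatenate(([0.0],
                      np.cumsum(0.5 * (heat_flow[1:] + heat_flow[:-1]) * np.diff(t))))
x_rel = cum / cum[-1]                               # relative crystallinity X(t)

t_half = float(np.interp(0.5, x_rel, t))            # crystallization half time
print(f"t_1/2 = {t_half:.1f} min")
```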
The crystallization of pure PP and of PP/MWCNTs with 0.1 wt.% of MWCNTs after the fast-cooling, isothermal, and slow-cooling treatments was investigated by polarized optical microscopy, and the crystal morphology is shown in Figure 11a-f. A concentration of 0.1 wt.% was chosen because it is not possible to visualize the spherulites by optical microscopy at higher MWCNTs concentrations. The treatment temperature of 135 °C was chosen for the isothermal and slow-cooling treatments to obtain larger crystals, as mentioned previously. It can be clearly seen that the cooling rate affects the crystal size when fast cooling is compared to the other two treatments. The crystals of pure PP that was fast cooled are so small that they cannot be easily identified, compared to the crystals obtained from the isothermal and slow-cooling treatments, whose size is around 100 µm for both treatments. The crystal size for the PP/MWCNTs composite with 0.1 wt.% of MWCNTs decreased compared to neat PP. Nevertheless, there is a large difference in crystal size between the samples treated by fast cooling and those treated by the isothermal and slow-cooling treatments, although crystal sizes for the composites are difficult to identify by POM.
Figure 12a,b show the dynamic process of filler conductive network formation, which was observed by measuring the electrical conductivity as a function of time for the PP/PS/MWCNTs composites at 135 °C. It can be seen that, at a very low concentration of MWCNTs, the electrical conductivity was constant for around 11.5 min of treatment and then drastically increased by five orders of magnitude (Figure 12a). The electrical conductivity of the composite with 0.1 wt.% of MWCNTs is low at the start of the treatment due to the poor connection of nanoparticles inside the polymer matrix, but once the critical point of particle connection is reached, the particles create an electrical pathway and the conductivity rapidly increases. Figure 12b shows the electrical conductivity as a function of time for the PP/PS/MWCNTs composites with 0.3-1 wt.% of MWCNTs. It can be seen that the electrical conductivity starts to increase from the first seconds of treatment and approaches a plateau after 5 min. In this case, the concentration of nanoparticles was sufficient for the creation of a conductive network from the start of the treatment.
Similar behavior was observed for the electrical conductivity of PP/MWCNTs as a function of time (as can be seen in Figures S4 and S5 in the Supplementary Materials). At very low concentrations of MWCNTs (0.1-0.3 wt.% of MWCNTs, below the percolation threshold), the electrical conductivity did not change during the isothermal treatment, and its values were close to those of unfilled PP. However, at MWCNTs concentrations close to the PT and higher (0.5-1 wt.% of MWCNTs), the electrical conductivity monotonically increased with time.
Discussion
There are several ways to reduce the PT of CNT-based thermoplastic composites prepared by melt mixing. One of these ways is to use immiscible polymer blends with a co-continuous morphology as a matrix. In this case, the reduction in PT can be achieved thanks to a double percolation effect resulting from the co-continuous morphology of the polymer blend used [6,9,24-26]. Furthermore, the reduction in PT can be achieved through both covalent and non-covalent modifications of either the CNTs or the matrices of CNT-based thermoplastic composites with co-continuous morphology [7,25,28]. Some researchers have suggested adding different nanoparticles to help trap CNTs at the interface [41,46]. Another way is to optimize the mixing parameters (time, mixing speed, etc.), as well as to use various post-mixing thermal treatments, as was discussed in the introduction [11,35,49,51]. Table 2 summarizes some literature studies in which ultra-low PTs of PB/CNTs composites were achieved, along with the employed modifications and treatments. The PT reached in the present work for PP/PS/MWCNTs composites is comparable to, or even lower than, those reported in the literature. The electrical conductivity values achieved in this study, at several PT concentrations as well as at 1 wt.% of CNTs, were greater than those reported in other studies. These values were obtained primarily due to the induced crystallization of PP during the post-mixing thermal treatments. As a result, ultra-low PTs for PP/PS/CNTs composites have been achieved. The results reported in this work also show that the thermal treatments did not influence the blend morphology, in contrast to annealing treatments performed above the melting or softening temperature, during which coarsening of the PB matrix morphology occurs [11,29-31,49].
The dynamic percolation threshold for the system studied in the present work was also investigated using electrical conductivity measurements. It was found that the electrical conductivity of PP/PS/MWCNTs composites with the low concentration of 0.1 wt.% of MWCNTs (close to the PT concentration) drastically increased after 11.5 min of thermal treatment at 135 °C, as can be observed in Figure 12a. For the samples with a larger MWCNTs concentration, the electrical conductivity monotonically increased with time, depending on the MWCNTs concentration. Figure 13 shows the time corresponding to the complete crystallization of PP as a function of MWCNTs concentration, as well as the time the electrical conductivity takes to reach plateau values, as a function of the MWCNTs concentration for PP/PS/MWCNTs composites. It can be seen that the electrical conductivity reached plateau values before the crystallization was complete. This is, we believe, an indication of an improvement of the MWCNTs particle connections at the PP/PS interface. In this case, MWCNTs were pushed to the borders of the PP crystals, where they were stopped by the PS phase, which does not crystallize. Generally, the PS phase is more favorable for MWCNTs, but their diffusion into it is significantly impeded because the treatment temperature does not provide enough energy to promote MWCNTs diffusion. As a result, due to the accumulation of MWCNTs connections at the PP/PS interface and in the PP phase, the electrical conductivity increases.
Conclusions
PP/PS/MWCNTs blend composites with co-continuous morphology were prepared by melt mixing, using a twin-screw extruder, through the dilution of a PP/MWCNTs masterbatch with PP and PS. Due to the effect of double percolation, the PT of PP/PS/MWCNTs was reduced by over 50% compared to PP/MWCNTs composites. Furthermore, thermal treatments aimed at exploiting PP crystal growth to enhance the electrical conductivity and reduce the PT of PP/PS/MWCNTs composites were performed. It was shown that extremely low PTs of 0.06 wt.% and 0.08 wt.% of MWCNTs were obtained after the slow-cooling and isothermal treatments, respectively. These treatments promoted the selective localization of MWCNTs through the volume exclusion of MWCNTs by the growing PP crystals. Moreover, microscopy observations (SEM) and the calculation of the characteristic domain size of the PP/PS/MWCNTs co-continuous morphology confirmed that these treatments did not change the PB morphology, in contrast to thermal annealing above the melting or softening temperature.
Supplementary Materials:
The following are available online at https://www.mdpi.com/article/10.3390/nano11061620/s1. Figure S1: Viscosity as a function of shear rate measured by capillary rheometer for pure PP and PS. Figure S2: Effect of treatments on electrical conductivity as a function of MWCNTs concentration for PP/MWCNTs composites. Figure S3: Image treatment analysis for distinguishing between two phases: (a) original SEM picture for PP/PS/MWCNTs composite with 0.3 wt.% of MWCNTs; (b) image phase separation; (c) final treated image used for the estimation of L_int, the interface length between the two phases, which in this case is the perimeter of the white phase (PS phase). Table S1: Percolation threshold and fitting values of experimental data according to Equation (6) for PP/MWCNTs composites after each treatment. Figure S4: Electrical conductivity as a function of time for PP/MWCNTs composite with 0-0.3 wt.% of MWCNTs, measured every 10 s at a frequency of 1 Hz and 135 °C. Figure S5
Data Availability Statement:
The data presented in this study are available in this article.
|
v3-fos-license
|
2018-12-29T10:25:21.918Z
|
2017-01-01T00:00:00.000
|
65048457
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1080/23311916.2017.1398702",
"pdf_hash": "7f96408f78d26dba42cb524526addcf8cd7fb6ad",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43459",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "1975c70b9618f8218a84e790da026dc4ec06ba7a",
"year": 2017
}
|
pes2o/s2orc
|
Predicting upper limb discomfort for plastic surgeons wearing loupes based on multi-objective optimization
Plastic surgeons report neck, shoulder and back pain when wearing head-mounted magnifiers (loupes) during operations. Many factors contribute to such pain. In order to explore these factors, this paper developed a novel application of Multi-objective Optimization (MOO), which used postural constraints on anthropometric models to determine Rapid Upper Limb Assessment (RULA) scores. For the pain experienced by surgeons wearing loupes, the analyses showed that adjusting the table height and choosing a suitable loupe working distance for the surgeon could decrease the neck flexion angle. The results demonstrated that it is possible to predict RULA scores for a range of postures, and we propose that this approach could be used to quantify risk assessment, particularly in the selection and fitting of loupes and in the specification of working height for surgeons. Subjects: Ergonomics & Human Factors; Musculoskeletal Disorders Ergonomics; Plastic & Esthetic Surgery
ABOUT THE AUTHOR

Zhelin Li is a professor in the Design School of South China University of Technology. He is the director of the Laboratory Center at SCUT School of Design. His research fields are ergonomics evaluation of products, digital human modeling, new interface technologies and interactive product design. His recent research projects include ergonomics analysis of head-mounted products, human-machine interface techniques, and the development of control software. He was a visiting research fellow in the School of Engineering, Birmingham University, UK in 2015, where his research focused on observing plastic surgeons during surgery and defining possible postures that may contribute to cervical musculoskeletal disorders, using ergonomic methods (motion capture techniques).

PUBLIC INTEREST STATEMENT

Plastic surgeons report neck, shoulder and back pain when wearing head-mounted magnifiers (loupes) during operations. Many factors contribute to such pain. In order to explore these factors, this paper developed a novel application of Multi-objective Optimization, which used postural constraints on anthropometric models to determine Rapid Upper Limb Assessment scores. For surgeons wearing loupes, the analyses showed that adjusting the table height and choosing a suitable loupe working distance for the surgeon could decrease the neck flexion angle. The approach is applied to surgeons who wear head-mounted magnifiers and report neck injuries.

Introduction

Plastic surgery involves a range of specialisms directed at the reconstruction or correction of dysfunctional or defective parts of the body. Given the nature of the work (particularly when working on children or on the hand), it is common for plastic surgeons to employ some form of visual aid, such as microscopes or head-mounted magnifying glasses called loupes. While these visual aids can enhance surgeons' vision, there can be a need to adopt uncomfortable postures during an operation in order to see the operating site while performing actions on that site. Not surprisingly, therefore, plastic surgeons who wear loupes during surgery report a high incidence of musculoskeletal injuries (Capone, Parikh, Gatti, Davidson, & Davison, 2010; Nimbarte, Zreiqat, & Chapman, 2012). A survey of European surgeons reported that more than 80% (n = 284) had discomfort in the neck, shoulder and back muscles associated with operating (Wauben, van Veelen, Gossot, & Goossens, 2006). Sivak-Callcott reported that 58% of ophthalmic plastic surgeons (n = 139) had neck pain associated with operating; nearly 10% had to cease operating as a result of neck pain (Sivak-Callcott et al., 2011). In a recent survey, the authors identified contributory factors as the age of the respondent (older respondents were more likely to report pain), the number of hours operating while wearing loupes (more than 15 h per week performing operations led to a higher incidence of pain), and the magnification of the loupes (higher magnification resulted in more reports of symptoms). While the first two factors could be seen as self-explanatory, it is worth considering the third and why it relates to the design of the loupes.
Surgical loupes consist of magnifying lenses mounted on prescription spectacles. Loupes are custom-fitted for an individual surgeon based on two factors: the working distance and the declination angle (Chang, 2014). The working distance is influenced by the magnification of the loupes, which ranges from around 2x to 5x. For some procedures, there is a recommendation to use 2.5x (1, 2). In our survey, 23% of respondents used 2.5x while 67% used 3.5x (the remainder using 2x, 3x or 4x). The magnification will influence the size of the visual field that can be seen clearly at a given viewing distance; e.g. higher magnification loupes of 4x or 5x will have a smaller visual field in sharp focus (and might be used for vascular surgery, for instance). Loupes have specified viewing distances (ranging from 34 to 50 cm), which are intended to be the distances at which images are clear. However, viewing distance will also be influenced by the stature of the surgeon and the working height of the operation. Consequently, wearing loupes creates the need to trade off the loupes' viewing distance (for a clear image) against the surgeon's working distance (to gain access to the patient). Because of the limitations on viewing direction and working distance, the flexion angle of the neck can increase as the surgeon adjusts their posture during an operation.
Our preliminary study, using Vicon motion capture, compared postures of four experienced surgeons performing simple tasks with and without loupes, while sitting or standing, and at different operating heights, as shown in Figure 1. From the analysis of surgeon posture, it was found that neck and head angles were larger when surgeons wore loupes than when they did not, across the different table heights. This indicates that differences in head and neck angle result from wearing loupes, and that these differences vary with the height of the table on which the task is performed.
In this paper, the relationship between surgeon stature (SS), the working distance of the loupes (WD, the distance from the surgeon's eye to the patient's operation position) and the table height (TH, the vertical distance from the patient's operation position to the floor) will be analyzed. The definition of working distance varies across specialisms and types of operation; some surgeons (particularly those operating on hands or feet) might prefer to remain seated during an operation, whereas surgeons performing other types of operation might prefer to stand. From initial discussions with surgeons, it is noted that loupes are often fitted while the surgeon is seated, with the viewing distance based on this. This suggests potential problems when moving from sitting to standing. In this paper, the authors focus on standing surgeons.
Methods
In this paper, the authors employ an MOO method combining digital anthropometric modelling and RULA. Figure 2 illustrates the approach taken in this paper.
Digital anthropometric models
A digital anthropometric model is used in this approach to define the working posture in terms of the parameters which can be optimized. In order to simplify the model, a 2-dimensional body-link model is built based on the joints and links, as shown in Figure 3.
In Figure 3(a), each joint has a local coordinate system (Y axis perpendicular to the floor, X axis in line with the floor). The table height (TH) is set midway between the umbilicus and the sternum. In a study of experienced surgeons performing discectomy (on a spine surgery simulator) while wearing loupes, it was proposed that this table height is optimal for reducing surgeon musculoskeletal fatigue (Park et al., 2012). The working distance (WD) is defined by the dotted line linking the operating position (O), being worked on using the instrument (I), and the eye (E).
Figure 3(b) shows the model of the head wearing loupes. This has a reference line connecting the top of the ears to the corner of the eyes. The Reference Line Angle (RLA) is the deviation of this line from the horizontal and is taken as 12° (Chang, 2014). The declination angle of the loupes is the angle between the reference line and the optical axis of the loupes, as fitted for through-the-lens (TTL) loupes.

(Figure 2 notes: S_b refers to the scores, specifically the neck and trunk scores, from RULA Table B. Based on the UK PEOPLE SIZE 1998 database (Freer, Marshall, & Summerskill, 2008), 10 digital anthropometric models were built. Multi-Objective Optimization is used to predict the posture which minimizes the RULA score, defined as the minimum S_b, of a surgeon based on the joint restraints in a digital anthropometric model.)

In order to reduce model complexity, the authors assume that some joints do not contribute significantly to postural variability during the course of an operation, and so these can be frozen in the model. The frozen joints are the hip joint (J_8), thigh joint (J_9), leg joint (J_10), ankle joint (J_11) and foot joint (J_12). This assumes that postural change happens from the waist upwards; this paper is particularly interested in the extension and flexion of the neck during an operation. Thus, 7 active joints (J_1-J_7) are used for the standing model. a_i represents the joint angle between L_(i-1) and L_i, as shown in Figure 3. Joint angle limits for each degree of freedom are listed in Table 1.
In this study it is assumed that the surgeon will also work with surgical instruments, which provide a focal point for their vision with reference to their hand position. Table 2 shows the corresponding dimensions of scissors (Item: MB150R, Length: 140 mm) (Aesculap, Inc, 2008). It is assumed that half of the length of the instrument is in the hand, so the effective length of the instrument is l_t = 70 mm. The angle between the Y axis and the instrument is defined as a_t = 120°.
In order to accommodate a wide range of surgeon statures, ten digital human models are defined in terms of the standing height of adults, from the 5th percentile female up to the 95th percentile male, based on UK PEOPLE SIZE 1998 (Freer et al., 2008). The stature range is from 1,534 to 1,864 mm. Table 3 lists the body data for female and male models at 20 or 25 percentile intervals: 5, 25, 50, 75 and 95%. In Table 3, F25 means the 25th percentile female, M25 means the 25th percentile male, and so on. Each column in Table 3 represents the stature and the lengths of the links. For example, in the third column, stature = 1,596 and L1 = 274 mean that the stature of F25 is 1,596 mm and the length of the upper arm is 274 mm (Figure 3(a) shows the position of L1).
The RULA method
RULA is a popular tool to evaluate the risk of musculoskeletal injury (McAtamney & Corlett, 1993). RULA supports the classification of posture in terms of potential musculoskeletal risk through a simple pencil-and-paper pro-forma. This can be completed from observation (either directly in the field or from video recordings) and can provide a consistent and reliable identification of postures which could be harmful. For this paper, the authors are interested in whether it is possible to use RULA predictively, i.e. as a means of identifying postural problems from models rather than observation. In this respect, this paper follows the lead of Plantard, who captured human posture using the Microsoft KINECT sensor and derived corresponding RULA scores for a large set of poses and sensor placements (Plantard, Auvinet, Pierres, & Multon, 2015). The authors are interested in relating the RULA classification scheme to the postures defined by an anthropometric model. A core problem that needs to be solved prior to implementing such an approach is the need to determine which parameters are most significant in contributing to the musculoskeletal risk classified by RULA. As there are multiple parameters which can contribute to risk, this requires the solution of a multi-objective problem.
In the RULA method, there are three score tables. Table B describes risks associated with neck and trunk angle. Since neck discomfort is the most common disorder reported by plastic surgeons, this paper employs Table B (McAtamney & Corlett, 1993). The score of Table B is defined as S_b. In order to analyse the continuously changing angles of the neck and trunk, two functions, Equations (1) and (2), were created using a quadratic fit of the RULA score against the specific joint angles, according to the studies by McAtamney and Corlett (1993). In order to obtain a more precise definition of S_b, Equation (3) was created using a linear fit based on Table B.
In Equation (1), S_n is the score for the neck angle, and a_ne is the angle of the neck. In Equation (2), S_t is the score for the trunk angle, and a_tr is the angle of the trunk. For consistency with SAMMIE, the authors defined these joint angles as: a_ne = a_4 + a_5; a_tr = a_6 + a_7.
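The coefficients of Equations (1)-(3) are not reproduced in the extracted text, but the idea can be sketched: fit a smooth curve to RULA's stepwise angle scores so that S_n and S_t vary continuously. The band midpoints below follow the published RULA bands; the fitted coefficients are illustrative, not the paper's exact equations.

```python
import numpy as np

# Illustrative reconstruction of Equations (1) and (2): quadratic fits of the
# stepwise RULA scores against joint angle, evaluated at band midpoints.
# Bands follow McAtamney & Corlett (1993); the fitted coefficients are
# illustrative, not the exact ones used in the paper.
neck_angle_mid = np.array([5.0, 15.0, 30.0])         # deg, midpoints of 0-10, 10-20, >20
neck_score = np.array([1.0, 2.0, 3.0])
trunk_angle_mid = np.array([0.0, 10.0, 40.0, 70.0])  # upright, 0-20, 20-60, >60
trunk_score = np.array([1.0, 2.0, 3.0, 4.0])

s_n = np.poly1d(np.polyfit(neck_angle_mid, neck_score, 2))    # S_n(a_ne)
s_t = np.poly1d(np.polyfit(trunk_angle_mid, trunk_score, 2))  # S_t(a_tr)

for a in (10.0, 20.0, 25.0):
    print(f"a_ne = {a:4.0f} deg -> S_n = {s_n(a):.2f}")
print(f"a_tr = 30 deg -> S_t = {s_t(30.0):.2f}")
```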
Multi-objective optimization
The process of systematically and simultaneously optimizing a set of objective functions is called Multi-Objective Optimization (Marler & Arora, 2004). In general, a multi-objective optimization problem can be posed as follows:

(a) Minimize

$$F(x) = [F_1(x), F_2(x), \ldots, F_k(x)]^T$$

where k is the number of objective functions;

(b) subject to

$$g_j(x) \le 0, \; j = 1, 2, \ldots, m; \qquad h_l(x) = 0, \; l = 1, 2, \ldots, e.$$ (4)
Here x ∈ Eⁿ is a vector of design variables, where n is the number of independent variables x_i. F(x) ∈ Eᵏ is a vector of objective (or cost) functions F_i(x): Eⁿ → E¹. x_i* is the point that minimizes the objective function F_i(x). The feasible design space X (often called the constraint set) is defined as the set {x | g_j(x) ≤ 0, j = 1, 2, ..., m; and h_l(x) = 0, l = 1, 2, ..., e} (Marler & Arora, 2004).
In this paper, S_b is used as the objective value. The joint angles a_i are taken as the predictor variables. Using Multi-Objective Optimization, the minimal S_b (where the risk to the neck and trunk is lowest) can be calculated when the a_i are optimal. Based on the standing posture of surgeons, joint angle restraints are implemented. In the following definition, the reader can refer to Figure 3(a) for the specific links (l) and specific angles (a) used in the calculation. In this model, the end of the instrument must touch the operating position on the operating table. It is assumed that the avatar in the digital anthropometric model should see the instrument (I) at the operating position (O) at the working distance (WD) and declination angle (DA) (see Figure 3(b)). l_h is equal to the table height (TH), and l_e is the length of the eye line in Figure 3. The values of a indicate the angles of the various joints in the model (see Table 1). The optimization problem is defined as follows:

Find

$$a = (a_1, a_2, \ldots, a_{12})$$

Minimize

$$F(a) = S_b = F\big(F_1(a_{ne}), F_2(a_{tr})\big)$$ (5)

Subject to:

$$h_1 = \big|l_6\cos(a_7+a_6) + l_7\cos(a_7) + l_8 + l_9 + l_{10} + l_{11}\big| - \big|l_5\cos(a_7+a_6+a_5) + l_e\cos(a_7+a_6+a_5+a_4+a_e) + l_v\cos(a_7+a_6+a_5+a_4+a_v) + l_h\big|$$ (6)

$$h_2 = \big|l_1\cos(180-a_1) + l_2\cos(180-a_1+a_2) + l_3\cos(180-a_1+a_2) + l_t\cos(a_t)\big| - \big|l_5\cos(a_7+a_6+a_5) + l_v\cos(a_7+a_6+a_5+a_4+a_v) + l_e\cos(a_7+a_6+a_5+a_4+a_e)\big|$$ (7)

$$h_3 = \big|l_1\sin(180-a_1) + l_2\sin(180-a_1+a_2) + l_3\sin(180-a_1+a_2) + l_t\sin(a_t)\big| - \big|l_5\sin(a_7+a_6+a_5) + l_v\sin(a_7+a_6+a_5+a_4+a_v) + l_e\sin(a_7+a_6+a_5+a_4+a_e)\big|$$ (8)

$$h_4 = a_3 - a_2$$ (9)

$$g_1 = a_1 - a_6$$ (10)

$$a_i^L \le a_i \le a_i^U$$ (11)

Equations (6)-(11) define constraints for the model as follows. Equation (6) defines the vision constraint allowing the standing surgeon to see the instrument clearly based on the assigned WD and TH; h_1 must equal zero when the minimal S_b is attained. Equation (7) defines the vertical position of the instrument and Equation (8) its horizontal position; h_2 and h_3 must equal zero when the minimal S_b is attained, and together they ensure that the surgeon touches the operating field with the instrument while seeing the instrument. Equation (9) keeps the wrist joint unbent; h_4 is zero when the minimal S_b is attained. Equation (10) constrains the upper arm to swing forward; g_1 is less than or equal to zero when the minimal S_b is attained. Equation (11) defines the inequality constraints on the joint angles based on Table 1.
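A heavily reduced sketch of this constrained problem in Python (scipy's SLSQP) is shown below. It keeps only two joint angles, one equality reach constraint and bounds; the link lengths, the target WD and the smooth score function are hypothetical stand-ins for the paper's Equations (5)-(11), since the exact fitted score functions are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Reduced sketch of the posture optimization (not the paper's full 7-joint
# model): minimize a smooth neck/trunk score subject to one reach equality
# constraint and joint-angle bounds, via SLSQP. Lengths are placeholders.
L_TRUNK, L_NECK, ARM_REACH = 500.0, 150.0, 620.0   # mm, hypothetical

def score(a):
    a_ne, a_tr = a                      # neck and trunk flexion, deg
    # Illustrative smooth S_b surrogate, not the paper's fitted equations.
    return (1 + (a_ne / 15.0) ** 2) + (1 + (a_tr / 20.0) ** 2)

def reach(a):
    # Horizontal distance from eyes to the work point must match a target WD.
    a_ne, a_tr = np.radians(a)
    wd_target = 450.0                   # mm, loupe working distance (assumed)
    eye_x = L_TRUNK * np.sin(a_tr) + L_NECK * np.sin(a_tr + a_ne)
    return eye_x + ARM_REACH * 0.5 - wd_target   # equality constraint h(a) = 0

res = minimize(score, x0=[10.0, 5.0], method="SLSQP",
               bounds=[(0.0, 60.0), (0.0, 60.0)],
               constraints=[{"type": "eq", "fun": reach}])
print(f"a_ne = {res.x[0]:.1f} deg, a_tr = {res.x[1]:.1f} deg, S_b = {res.fun:.2f}")
```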
Based on the surgeon's standing posture, a_i will be limited by the upper and lower values that satisfy all of the constraints in Equations (6)-(11). Thus, the aim is to define the working posture to be modelled, in terms of the key constraints that affect the posture when performing a given type of task using a given set of equipment, and then to calculate the potential risk score (defined by RULA Table B) that relates to a person of a given stature adopting this working posture.
Results
In order to demonstrate the application of the approach followed in this paper, ten avatars are considered, with standing heights from the 5th percentile female up to the 95th percentile male digital human models. The interrelations among S_b, SS, WD and TH are then analysed based on these ten digital human models.
The surgeon stature (SS) range is from 1,534 to 1,864 mm. Table height (TH) ranges from 800 to 1,250 mm (in increments of 50 mm). When SS and TH are varied over these values, the minimal S_b can be calculated using Multi-Objective Optimization. The results are shown in Table 4. Each minimal S_b is obtained at the optimal neck angle (AN) and WD; the corresponding AN and WD are shown in Tables 5 and 6, respectively. Table 4 shows the interactions between S_b, SS and TH. The top zone, with the "-" symbol, indicates no solution. In the green zone, 2 ≤ S_b < 3. In the yellow zone, 3 ≤ S_b < 4. In the red zone, S_b ≥ 4. From Table 4, the authors make the following observations:
(5) When TH ∈ {800, 850}, S_b > 4 for all SS greater than F5. These table heights should be avoided.

Table 5 shows the interactions between AN, SS and TH; AN corresponds to the S_b values in Table 4. The top zone, white and with the "-" symbol, indicates no solution. In the green zone, 12 ≤ AN < 24. In the yellow zone, 24 ≤ AN < 35. In the red zone, AN ≥ 35. Assuming that cervical symptoms are more common when neck flexion exceeds 15° (Capone et al., 2010), and that maximum head tilt should be less than 20° (Valachi, 2008), the authors define 15° as the criterion for a risky posture and 20° as critical.
Calculating neck angle (AN)
In Table 5, risky values of AN are shown in bold and critical values in bold italic. Table 5 shows how TH ∈ {800, 850, 900, 950} creates problems for most values of SS, and even the range of 1,000 to 1,150 mm poses a risk in terms of neck angle.

Table 6 shows the interactions between WD, SS and TH; WD corresponds to the S_b values in Table 4. The top zone, white and with the "-" symbol, indicates no solution. In the green zone, 380 ≤ WD < 700. In the yellow zone, 560 ≤ WD < 760. In the red zone, 620 ≤ WD < 760. Recall that the recommended working distance when wearing loupes is 34-50 cm. From this, it is assumed that values above 500 mm could be problematic, because there will be a compromise between the viewing distance at which the loupes focus clearly and the working distance of the surgeon. In Table 6, these values are shown in bold and imply that the majority of surgeons will need to adjust their posture in order to work at these table heights.

(Table 4: S_b scores when wearing 25° loupes. Table 5: AN when wearing 25° loupes (degrees). Table 6: WD when wearing 25° loupes (mm).)
Discussion
Tables 4-6 show how it is possible to decrease the neck angle by adjusting the table height or by changing the working distance. For the same SS, WD has an inverse correlation with TH: if TH becomes too low, WD will increase, leading to an increase in AN. For example, consider M_50 working at different table heights. For all values of TH, the working distance (WD) exceeds the proposed value of 500 mm; from TH = 1,200 mm downward, the neck angle is greater than 15°, and the RULA risk increases beyond 3 when TH < 1,000 mm.
From the analysis of the ten digital human models, TH = 1,100 mm appears suitable for most surgeons. Using this assumption, and the results shown in Tables 4-6, a virtual scene is built in SAMMIE, as shown in Figure 4. Figure 4 shows that surgeons of different stature can each adopt an optimal posture when they work together. In order to achieve a comfortable neck angle, they must wear loupes with a matching viewing distance, stand at the correct distance from the table, and maintain appropriate arm joint angles to ensure that the instrument can touch the object. While this is the ideal (based on our analysis), it also highlights the potential for risk to vary as the operation progresses. For example, it is often necessary to change posture, either to effect a task, to improve vision, or to collect or pass over an instrument. Consequently, while the ideal model illustrates the desirable, static postures, these are likely to change with the operation. The point we would make here is that there are regions (as shown in Tables 4-6) in which the predicted risk of neck injury (defined by RULA scores) will increase for surgeons of specific stature working at a WD defined by the viewing distance of their loupes. Notes to Figure 4: ten digital anthropometric models are shown; P is the patient; T is the operation table.
Given the number of reports of musculoskeletal injury (particularly to the neck), it is possible that current practice does not consider the range of the postures that will be encountered. It is proposed that, rather than complicating the fitting procedure, it could be beneficial to use the process outlined in this paper (or a modified and simplified version) to conduct additional checks on the use of the loupes and to provide advice and guidance on appropriate settings for TH and WD, given loupes of a particular prescription and specification and surgeon of a particular stature.
Conclusions
The study shows how RULA scores change for different table heights and working distances during an operation. Of particular interest to our work is the fact that head-mounted magnifiers (loupes) constrain the working distance by their viewing distance. It is shown how WD interacts with the stature of the surgeon. RULA is a useful assessment measure for posture prediction. The results show that a reasonable table height and loupe working distance could decrease the surgeon's neck flexion angle. Certainly, when loupes are fitted to a surgeon (and all loupes are custom-modified and fitted as bespoke equipment for surgeons), there is a need to consider how the viewing distance and angle correspond to the likely activity of the surgeon. Anecdotal evidence suggests that fitting typically takes place when the surgeon is seated, without performing surgical (or at least simulated surgical) tasks, and in an environment that differs from the operating theatre. Consequently, the fitting will rely on the experience of the surgeon (in judging likely values of WD) and the expertise of the loupe-fitter (in terms of calculating viewing angle).
Practitioner summary
This paper demonstrates how the risk of musculoskeletal discomfort (defined using RULA scores) can be predicted using virtual human models. The approach is applied to surgeons who wear head-mounted magnifiers and report neck injury.
Funding
This work was supported by the China Scholarship Council (CSC). The CSC provided Zhelin Li with expenses, including travel, living, and health care, for his academic visit to the University of Birmingham.
|
v3-fos-license
|
2015-03-06T19:42:58.000Z
|
2012-01-08T00:00:00.000
|
10725208
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://arthritis-research.biomedcentral.com/track/pdf/10.1186/ar3555",
"pdf_hash": "552f516ffaf04c707fc2d3e327b0264729e8ccf1",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43462",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "e288e0caaf5ce3fb2895994e44324ea5eaf91547",
"year": 2012
}
|
pes2o/s2orc
|
Plasma proteins present in osteoarthritic synovial fluid can stimulate cytokine production via Toll-like receptor 4
Introduction Osteoarthritis (OA) is a degenerative disease characterized by cartilage breakdown in the synovial joints. The presence of low-grade inflammation in OA joints is receiving increasing attention, with synovitis shown to be present even in the early stages of the disease. How the synovial inflammation arises is unclear, but proteins in the synovial fluid of affected joints could conceivably contribute. We therefore surveyed the proteins present in OA synovial fluid and assessed their immunostimulatory properties. Methods We used mass spectrometry to survey the proteins present in the synovial fluid of patients with knee OA. We used a multiplex bead-based immunoassay to measure levels of inflammatory cytokines in serum and synovial fluid from patients with knee OA and from patients with rheumatoid arthritis (RA), as well as in sera from healthy individuals. Significant differences in cytokine levels between groups were determined by significance analysis of microarrays, and relations were determined by unsupervised hierarchic clustering. To assess the immunostimulatory properties of a subset of the identified proteins, we tested the proteins' ability to induce the production of inflammatory cytokines by macrophages. For proteins found to be stimulatory, the macrophage stimulation assays were repeated by using Toll-like receptor 4 (TLR4)-deficient macrophages. Results We identified 108 proteins in OA synovial fluid, including plasma proteins, serine protease inhibitors, proteins indicative of cartilage turnover, and proteins involved in inflammation and immunity. Multiplex cytokine analysis revealed that levels of several inflammatory cytokines were significantly higher in OA sera than in normal sera, and levels of inflammatory cytokines in synovial fluid and serum were, as expected, higher in RA samples than in OA samples. As much as 36% of the proteins identified in OA synovial fluid were plasma proteins. Testing a subset of these plasma proteins in macrophage stimulation assays, we found that Gc-globulin, α1-microglobulin, and α2-macroglobulin can signal via TLR4 to induce macrophage production of inflammatory cytokines implicated in OA. Conclusions Our findings suggest that plasma proteins present in OA synovial fluid, whether through exudation from plasma or production by synovial tissues, could contribute to low-grade inflammation in OA by functioning as so-called damage-associated molecular patterns in the synovial joint.
Introduction
Osteoarthritis (OA) is a degenerative disease of the joints that is characterized by destruction of articular cartilage, inflammation of the synovial membrane (synovitis), and remodeling of periarticular bone. Which of these pathogenic processes occurs first is unknown. One proposed scenario is that cartilage breakdown (due to injury or mechanical stress) releases components of the damaged extracellular matrix (ECM) into synovial fluid, and that these ECM components elicit the local production of inflammatory molecules by binding to receptors on resident synovial cells or infiltrating inflammatory cells [1,2]. The inflammatory molecules produced may in turn stimulate production of cartilage-degrading enzymes and recruit inflammatory cells to the affected joint [3,4], thus establishing a vicious cycle of cartilage destruction and inflammation that perpetuates and promotes the OA pathology. Therefore, OA has been described as a chronic wound in which molecules in synovial fluid function as damage-associated molecular patterns (DAMPs; that is, endogenous molecules produced during injury that signal through inflammatory toll-like receptors (TLRs) to effect tissue remodeling) [2,5,6]. Although the identities of the endogenous molecules that mediate synovial inflammation have yet to be confirmed in OA patients or animal models, a continuous supply of DAMPs could perpetuate the early response to injury and thereby damage the joint.
Besides ECM components, many other molecules may act as DAMPs [2]. One such molecule is fibrinogen, which stimulates macrophage production of chemokines in a TLR4-dependent manner [7][8][9]. Fibrinogen is present at abnormally high levels in OA synovial fluid [10], and the amount of fibrin (the thrombin-cleaved form of fibrinogen [11]) deposited in the synovial membrane correlates with the severity of OA [12]. Although classically a plasma protein, fibrinogen exudes from the vasculature at sites of inflammation, such as the inflamed OA joint, owing to the retraction of inflamed endothelial cells [11]. Fibrinogen is not the only protein to extravasate at sites of inflammation, however, and several other plasma proteins have been detected in OA synovial fluid [10,13]. The extravascular function of most of these plasma proteins is unclear. It is possible that, like fibrinogen, some of these plasma proteins could have an immunoregulatory role at sites of inflammation or tissue damage.
Inflammation is present even in the early stages of OA [14,15], and clinical signs of synovitis correlate with radiographic progression of knee OA [16]. Insight into the cause of synovial inflammation is therefore important in understanding the pathogenesis of OA. Here we used proteomic techniques to survey the proteins present in OA synovial fluid and to evaluate levels of inflammatory cytokines in OA serum and synovial fluid. We then determined whether a subset of the identified proteins could promote inflammation by functioning as immunostimulatory DAMPs.
Synovial fluid and serum samples
Serum and synovial fluid samples were obtained from patients with knee OA, patients with rheumatoid arthritis (RA), or healthy individuals under protocols approved by the Stanford University Institutional Review Board and with the patients' informed consent. Synovial fluid aspiration was performed by a board-certified rheumatologist by fine-needle arthrotomy, and the synovial fluid samples obtained were free from obvious contamination with blood or debris. OA serum and synovial fluid samples were obtained from patients diagnosed with knee OA (of Kellgren-Lawrence score 2 to 4 [17]) according to the 1985 criteria of the American Rheumatism Association [18]. For mass spectrometric analysis, OA synovial fluid samples were from five Caucasian men aged 50 to 75 years who met the 1985 OA criteria [18]; exclusion criteria included radiographic evidence of chondrocalcinosis or evidence of crystals under polarizing microscopy. Demographics and clinical characteristics of these five individuals are shown in Table 1. Synovial fluids from the other OA patients and from the RA patients were provided as de-identified remnant clinical samples, and patient demographics were therefore unavailable for these samples. All RA patients met the 1987 American Rheumatism Association criteria for RA [19] and had RA of less than 6 months' duration; exclusion criteria included concurrent infectious or crystal arthritis. Samples of "normal" serum were obtained from healthy individuals who had no joint pain and no radiographic evidence of knee arthritis [20]. OA and normal sera were matched by age, sex, and BMI. Serum and synovial fluid samples were not matched but were derived from patients with the characteristics described earlier. All samples were aliquoted and stored at -80°C.
Mass spectrometric analysis
Synovial fluid proteins were separated by 1D or 2D polyacrylamide gel electrophoresis (PAGE), trypsinized, and identified by liquid chromatography tandem mass spectrometry (LCMS), as follows. Fifty microliters of frozen synovial fluid was diluted to a final volume of 1 ml in phosphate-buffered saline (PBS) containing Halt protease and phosphatase inhibitor (Thermo Fisher Scientific), and then depleted of the highly abundant proteins albumin and immunoglobulin G (IgG) by using the ProteoPrep Immunoaffinity Albumin & IgG Depletion Kit (Sigma-Aldrich) according to the manufacturer's instructions. In brief, synovial fluids were twice passed over spin columns prepacked with a mixture of two beaded media containing recombinantly expressed, small, single-chain antibody ligands. The flow-through fractions containing synovial fluid depleted of albumin and IgG were diluted 1:1 with Laemmli Sample Buffer (BioRad) and then subjected to 1D-PAGE or 2D-PAGE analysis. Because a small number of proteins other than albumin and IgG may bind to the medium in the spin columns, the bound proteins were eluted with Laemmli sample buffer and also subjected to PAGE analysis. For 1D-PAGE analysis, proteins were boiled for 10 minutes and separated on Precast Criterion XCT gels (4% to 12% linear gradient, BioRad). After electrophoresis, the gels were stained for 1 hour with Gelcode blue (Pierce) and destained overnight. For 2D-PAGE analysis, methods were as previously described [21]. In brief, 100 μg of synovial fluid proteins was dissolved in 150 μl of isoelectric focusing (IEF) buffer (ReadyPrep Sequential Extraction Kit Reagent 3, BioRad). For the first-dimension electrophoresis, 150 μl (at a 1 μg/μl concentration) of sample solution was applied to an 11-cm ReadyStrip Immobilized pH Gradient (IPG) strip, pH 3 to 10 (BioRad). The IPG strips were soaked in the sample solution for 1 hour, to allow uptake of the proteins, and then actively rehydrated in the Protean IEF cell (BioRad) for 12 hours at 50 V. IEF was performed for 1 hour at each of 100, 200, 500, and 1,000 V, and then for 10 hours at 8,000 V. For second-dimension electrophoresis, IPG strips were equilibrated for 20 minutes in 50 mM Tris-HCl, pH 8.8, containing 6 M urea, 1% SDS, 30% glycerol, and 65 mM dithiothreitol (DTT), and then reequilibrated for 20 minutes in the same buffer containing 260 mM iodoacetamide in place of DTT. Precast Criterion XCT gels (4% to 12% linear gradient, BioRad) were used for the second-dimension electrophoresis, as was done for the 1D-PAGE. After electrophoresis, the gels were stained for 1 hour with Gelcode blue (Pierce) and destained overnight. The stained protein bands and spots (from the 1D-PAGE and 2D-PAGE, respectively) were cut out of the gels, immersed in 10 mM ammonium bicarbonate containing 10 mM DTT and 100 mM iodoacetamide, treated with 100% acetonitrile, and then digested overnight at 37°C with 0.1 mg trypsin (Sigma-Aldrich) in 10 mM ammonium acetate containing 10% acetonitrile. The trypsinized proteins were identified with LCMS by using the Agilent 1100 LC system and the Agilent XCT Ultra Ion Trap (Agilent Technologies, Santa Clara, CA) as previously described [22]. We scanned the LCMS data against the SwissProt database by using the SpectrumMill software (Agilent). We required the detection of at least two peptides for identification of a protein, and a significance level of P ≤ 0.05 for identification of each peptide.
The significance level of peptide identification takes into account the number of ionization forms of the fragmented peptide that match with a particular protein in the SwissProt database (with penalties for ionization forms not identified), as well as the total intensity of each ionization form [23].
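As a minimal illustration of this acceptance rule (the data frame and column names below are hypothetical, not the SpectrumMill output format), the two-peptide, P ≤ 0.05 filter could be expressed as:

```python
import pandas as pd

# Hypothetical peptide-level search results (one row per identified peptide).
peptides = pd.DataFrame({
    "protein": ["ALBU_HUMAN", "ALBU_HUMAN", "A2MG_HUMAN", "GELS_HUMAN"],
    "peptide": ["LVNEVTEFAK", "QTALVELVK", "SSGLVSNAPGVQIR", "AGALNSNDAFVLK"],
    "p_value": [0.001, 0.004, 0.020, 0.030],
})

# Keep peptides identified at P <= 0.05, then require at least two distinct
# peptides per protein, mirroring the acceptance rule described in the text.
confident = peptides[peptides["p_value"] <= 0.05]
counts = confident.groupby("protein")["peptide"].nunique()
identified = counts[counts >= 2].index.tolist()
print(identified)  # ['ALBU_HUMAN']
```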
Multiplex cytokine analysis
Multiplex analysis of cytokines and chemokines in human serum and synovial fluid samples was performed by using both the 27-plex and the 21-plex Bio-Plex Pro Human Cytokine Assay (BioRad) run on the Luminex 200 platform, as recommended by the manufacturers. Performing the Bio-Plex assay with the kit reagents, we found that several commercial reagents designed to block the confounding effect of heterophilic antibodies, including ones we used previously with other cytokine assay kits [24], did not significantly affect the readout of the Bio-Plex assay; we therefore did not use such blocking reagents with the Bio-Plex assay. Data processing was performed by using Bio-Plex Manager 5.0, and analyte concentrations (in picograms per milliliter) were interpolated from standard curves. Statistical differences in cytokine levels were calculated with significance analysis of microarrays (SAM [25]), and the SAM-generated results with a false discovery rate (FDR) of less than 10% were selected. To identify relations and to display our results most effectively, we normalized the analyte concentrations as follows: all values less than 1 were designated as 1, and the mean concentration of each analyte in the "normal serum" samples was calculated; the analyte value in the sample was then divided by the mean analyte value in normal serum, and finally, a log-base-2 transformation was applied. Results were subjected to unsupervised hierarchic clustering by using Cluster 3.0, which arranges the SAM-generated results according to similarities in cytokine levels, and the clustering results were displayed by using Java Treeview (Version 1.1.3).
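The normalization just described is straightforward to express in code. The sketch below is a minimal illustration with made-up values; the sample names, concentrations, and the choice of average linkage with Euclidean distance are assumptions standing in for the Bio-Plex Manager/Cluster 3.0 workflow.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage

# Hypothetical analyte concentrations (pg/mL): rows = samples, columns = cytokines.
data = pd.DataFrame(
    [[12.0, 0.4, 150.0],
     [30.0, 2.5, 420.0],
     [8.0, 0.9, 90.0]],
    index=["OA_serum_1", "RA_serum_1", "normal_serum_1"],
    columns=["IL-6", "IL-1b", "IP-10"],
)

# Step 1: designate all values less than 1 as 1.
floored = data.clip(lower=1.0)

# Step 2: divide each analyte by its mean value in the normal-serum samples.
normal_mean = floored.loc[["normal_serum_1"]].mean()
ratios = floored / normal_mean

# Step 3: apply a log-base-2 transformation.
log2_values = np.log2(ratios)

# Unsupervised hierarchical clustering of the samples.
tree = linkage(log2_values.values, method="average", metric="euclidean")
print(log2_values.round(2))
```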
Macrophage stimulation assays
To generate mouse macrophages, we differentiated bone-marrow cells isolated from wild-type C57BL/6 mice and from B6.B10ScN-Tlr4 lps-del mice (Jackson Laboratory) according to standard procedures [26]. In brief, the femur and tibia were flushed with α-minimal essential medium (α-MEM; Invitrogen) by using a 1-ml syringe and a 25-gauge needle. The resulting cell suspension was lysed with ACK Lysing Buffer (Invitrogen) for removal of erythrocytes. Cell clumps were removed by filtering through a 70-μm cell strainer (BD). The remaining cells in the suspension were cultured on 100-mm culture dishes in α-MEM supplemented with 10% fetal bovine serum (FBS), 100 units/ml of penicillin, 100 μg/ml of streptomycin, and 2 mM glutamine (Invitrogen) for 16 to 24 hours in 5% CO2 at 37°C. Nonadherent cells were collected, plated on 100-mm dishes, and differentiated into bone-marrow-derived macrophages (BMMs) for 6 days in the presence of 30 ng/ml of macrophage colony-stimulating factor (PeproTech). To generate human monocyte-derived macrophages (MDMs), we collected peripheral blood mononuclear cells (PBMCs) by performing density-gradient centrifugation of LRS chamber content (Stanford Blood Center) over Ficoll (Invitrogen), purified human monocytes by negative selection by using a monocyte isolation kit (Miltenyi Biotec), and differentiated the monocytes into macrophages by culturing them for 7 days in RPMI containing 10% FBS and 30 ng/ml of human M-CSF.
For stimulation assays, mouse BMMs were plated in 96-well plates at 1 × 10^5 cells/well, and human macrophages at 7 × 10^4 cells/well. Cells were incubated for 24 hours with lipopolysaccharide (LPS; Sigma-Aldrich), peptidoglycan (InvivoGen), α1-microglobulin (Cell Sciences), α2-macroglobulin (EMD Chemicals), α1-acid glycoprotein (EMD Chemicals), Gc-globulin (also known as vitamin D-binding protein; Abcam), haptoglobin (Sigma-Aldrich), or human serum albumin (Sigma-Aldrich). We measured levels of interleukin-1β (IL-1β), interleukin-6 (IL-6), and vascular endothelial growth factor (VEGF) in the culture supernatants with Luminex analysis, by using a 27-plex Bio-Plex Pro Human Cytokine Assay kit (BioRad) according to the manufacturer's instructions. We measured TNF levels with enzyme-linked immunosorbent assay (ELISA; PeproTech). For the TNF ELISA, the limits of detection were 16 to 2,000 pg/ml for mouse TNF, and 23 to 1,500 pg/ml for human TNF. For the Luminex assay, the limits of detection were 3.2 to 3,261 pg/ml for IL-1β, 2.3 to 18,880 pg/ml for IL-6, and 5.5 to 56,237 pg/ml for VEGF. To exclude a contribution of endotoxin contamination, we included 10 μg/ml of polymyxin B (Sigma-Aldrich) in some of the stimulation assays. As an additional control for endotoxin contamination, we tested whether preincubating the plasma proteins with proteinase K and β-mercaptoethanol at 55°C for 4 hours (and then at 100°C for 10 minutes to inactivate the proteinase K) abrogated their ability to induce the production of cytokines (the plasma proteins, but not any contaminating endotoxin, would be denatured under these conditions).
Statistical analysis
One-way ANOVA and unpaired t tests (GraphPad Software) were used to analyze differences in levels of cytokines. P values less than 0.05 were considered significant.
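For illustration, the equivalent tests in Python's SciPy would look like the sketch below; the duplicate TNF readouts are invented numbers, and GraphPad was the tool actually used in the study.

```python
from scipy import stats

# Hypothetical TNF readouts (pg/ml) from duplicate stimulation wells.
unstimulated = [55.0, 60.0]
gc_globulin = [410.0, 455.0]
a1m = [380.0, 340.0]

# Unpaired t test comparing one stimulation condition with the control.
t_stat, p_ttest = stats.ttest_ind(gc_globulin, unstimulated)

# One-way ANOVA across all conditions.
f_stat, p_anova = stats.f_oneway(unstimulated, gc_globulin, a1m)

print(f"t test P = {p_ttest:.4f}; ANOVA P = {p_anova:.4f}")
```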
Results and Discussion
We first used mass spectrometry to survey the proteins present in the synovial fluid of patients with knee OA. Synovial fluid proteins from five OA patients were separated by 1D- or 2D-PAGE and then identified by LCMS. Analysis of all five samples identified a total of 111 unique proteins; three of these were keratin proteins, skin proteins most likely obtained as a result of the cutaneous puncture performed during aspiration of the synovial joints. Eliminating these keratins left 108 unique proteins (Tables 2 and 3), most of which were detected in all synovial fluid samples analyzed. Of these, 44 were identified in a previous proteomic survey of highly abundant proteins in OA synovial fluid [10] (Table 2). Thus, we confirmed the presence of serine protease inhibitors (for example, antithrombin III, α1-antitrypsin, α1-antichymotrypsin, kininogen 1) and of proteins important in regulating proteases that degrade cartilage ECM. We also confirmed the presence of proteins involved in cartilage (for example, fibronectin) and/or collagen (for example, gelsolin and collagen α1, α2, and α3 chains) metabolism, and of proteins involved in inflammation or immunity (for example, fibrinogen, AGP 1, complement factors, immunoglobulins, cytokines) (Table 2), findings consistent with the inflammation, ECM degradation, and immune-cell infiltration that characterize OA. Among the 64 proteins that we newly identified (Table 3) were histone-related proteins, macrophage-related proteins, proinflammatory receptors, and proteins related to the proinflammatory transcription factor nuclear factor kappa B (Table 4), presumably reflecting the turnover of resident synovial cells or infiltrating inflammatory cells.
Our mass-spectrometric findings revealed the presence of many molecules associated with inflammation. Although cytokines are also classically associated with inflammation, PAGE-based mass spectrometry is not well suited to the detection of small proteins such as cytokines. We therefore used a multiplex immunoassay to measure levels of inflammatory cytokines and chemokines in synovial fluid samples from 12 patients with knee OA and 14 patients with RA, as well as in serum samples from 24 patients with knee OA, 23 patients with RA, and 35 healthy individuals. Samples from patients with RA, a classic inflammatory arthritis, were used as a comparator. Figure 1 shows a heatmap of the relative levels of cytokines in the five groups of samples. Compared with cytokine levels in normal sera, cytokine levels in OA sera were generally slightly higher, and those in RA sera were much higher (Figure 1). SAM analysis revealed that levels of several inflammatory cytokines (for example, IL-1β and IL-6), chemokines (for example, IP-10 (also known as CXCL10), MCP-1, IL-8, MIG, and MIP-1β), and growth factors (for example, VEGF and SCGF-β) were significantly higher in OA sera than in normal sera (FDR < 10%; Figure 2), consistent with previous reports of the association of OA with such inflammatory mediators [27]. Interestingly, we also found OA-associated elevations in levels of IL-9 and cutaneous T-cell attracting chemokine (CTACK), supporting the concept that T cells play a role in OA [28]. As expected, cytokine levels were significantly higher in RA sera than in OA sera (Figure 3; FDR < 10%). Unlike RA, OA is considered a disorder that is restricted to the joints. Indeed, levels of multiple cytokines were much higher in OA synovial fluids than in OA sera (Figure 1 and Table 5); levels of TNF were negligible in OA sera but substantial in OA synovial fluid (Figure 1 and Table 5). Our findings suggest that the abnormally high levels of cytokines in OA sera largely reflect overproduction of these cytokines in the joint, consistent with the finding that levels of high-sensitivity C-reactive protein in the serum of OA patients correlate with the degree of inflammatory infiltrate in the patients' joints [29]. Thus, OA is associated with low-grade inflammation that may originate in the joints.

Interestingly, 39 (36%) of the proteins we identified in OA synovial fluid are classically considered plasma proteins (Tables 2 and 3). Indeed, plasma proteins form a large proportion of the proteins enriched in OA synovial fluid relative to healthy synovial fluid [10]. What might these plasma proteins be doing in the OA joint? Like certain products of ECM breakdown [2,5,6], the plasma protein fibrinogen can function as a DAMP and has been proposed to contribute to the pathogenesis of inflammatory arthritis [7][8][9]. We therefore examined whether other plasma proteins in OA synovial fluid can function as immunostimulatory DAMPs that could contribute to the low-grade inflammation associated with OA.
Key players in OA-associated inflammation are the macrophages [27,30,31]. The cell infiltrate in human OA joints consists mainly of macrophages, and mice depleted of macrophages are relatively resistant to collagenase-induced OA [30]. Macrophages from OA joints produce a number of growth factors, such as VEGF, and inflammatory cytokines, such as the major OA-associated cytokines IL-1β and TNF [30]. We detected VEGF, IL-1β, and TNF in OA synovial fluid in our cytokine screen (Figure 1 and Table 5) and found that levels of VEGF and IL-1β are significantly higher in OA sera than in normal sera (Figure 2). VEGF may promote OA pathology by inducing angiogenesis (and thereby osteophyte formation) and by inducing matrix metalloprotease production (and thereby cartilage degradation) [32]. The cytokines produced by macrophages amplify the inflammation in the joints by inducing synovial cells to produce further cytokines and chemokines, as well as matrix metalloproteases [30]. Moreover, macrophages express many of the receptors that mediate DAMP signaling, and they can thus trigger an inflammatory cascade in response to DAMPs present in OA synovial fluid [7][8][9]. We therefore assessed whether a subset of the identified plasma proteins could induce macrophages to produce TNF, a key cytokine that is thought to drive the inflammatory cascade in OA [27]. We tested α1-microglobulin (α1m), α1-acid glycoprotein 1 (AGP 1; also known as orosomucoid 1), α2-macroglobulin (α2m), Gc-globulin (also known as vitamin D-binding protein), albumin, and haptoglobin, all of them plasma proteins detected in our survey of synovial fluid proteins (Table 2) and shown to be enriched in OA synovial fluid [10]. With mouse macrophages, we found that α1m, α2m, and Gc-globulin, at concentrations similar to those measured in synovial fluid [33][34][35], each dose-dependently stimulated the production of TNF, whereas AGP 1, albumin, and haptoglobin did not (Figure 4a). The plasma proteins ceruloplasmin, complement component C3, complement component C4, and β2-glycoprotein (also known as apolipoprotein H) also did not stimulate TNF production (data not shown).

Figure 1 Inflammatory cytokines are associated with osteoarthritis. Relative cytokine levels in serum and synovial fluid (SF) samples from patients with osteoarthritis (OA) or rheumatoid arthritis (RA) and in serum samples from healthy individuals (normal sera). Cytokine levels were measured with a multiplex bead-based immunoassay. Samples from individual patients are listed above the heatmap, and the individual cytokines are listed to the right of the heatmap. IL, interleukin; IFN, interferon; MIG, monokine induced by IFN-γ; IP-10, interferon gamma-induced protein 10; IL-1ra, interleukin-1 receptor antagonist; VEGF, vascular endothelial growth factor; GM-CSF, granulocyte macrophage colony-stimulating factor; FGF, fibroblast growth factor; MCP, monocyte chemotactic protein; IL-2Rα, interleukin-2 receptor α chain; HGF, hepatocyte growth factor; GROα, growth-regulated oncogene α; MIP-1, macrophage inflammatory protein; β-NGF, β nerve growth factor; SCF, stem cell factor; M-CSF, macrophage colony-stimulating factor; SCGF-β, stem cell growth factor β; LIF, leukemia inhibitory factor; SDF-1α, stromal cell-derived factor 1α; G-CSF, granulocyte colony-stimulating factor; CTACK, cutaneous T-cell attracting chemokine.
We next examined the effect of α1m, α2m, and Gc-globulin on cytokine production in human macrophages. Because the endotoxin LPS is a common contaminant and is itself an agonist of TLR4, we tested the stimulatory properties of the plasma proteins in the presence of polymyxin B, a compound that neutralizes LPS. In the presence of polymyxin B, α1m-, α2m-, and Gc-globulin-induced TNF production was not significantly reduced, whereas LPS-induced TNF production was abrogated (Figure 4b). Additionally, pretreatment with proteinase K significantly abrogated TNF production induced by the plasma proteins but not TNF production induced by LPS (Figure 5). Although we cannot exclude the possibility that a small component of the observed stimulation is due to endotoxin, this result confirms that the plasma proteins are themselves immunostimulatory. Gc-globulin, α1m, and α2m were also able to induce the production of several other inflammatory cytokines that were upregulated in OA serum and synovial fluid (Figures 1 and 2): IL-1β, IL-6, and VEGF (Figure 4c). Thus, Gc-globulin, α1m, and α2m can each induce the production of TNF, IL-1β, IL-6, and VEGF, all molecules implicated in the pathogenesis of OA [27,30,31]. But how do these plasma proteins stimulate cytokine production? To determine whether these immunostimulatory plasma proteins signal through TLR4, we examined whether Gc-globulin, α1m, and α2m could also induce TNF production in TLR4-deficient macrophages. TLR4 deficiency inhibited Gc-globulin-, α1m-, and α2m-induced TNF production (Figure 6). Confirming that the defect in inflammatory signaling in the Tlr4 lps-del macrophages was specific to the TLR4 pathway, the TLR2-specific agonist peptidoglycan was able to induce TNF production in these cells, in fact to a greater degree than in wild-type cells (possibly because of compensatory mechanisms operating within the TLR family) (Figure 6). Thus, Gc-globulin-, α1m-, and α2m-induced production of TNF is dependent on TLR4.

Figure 3 Levels of inflammatory cytokines are higher in RA than in OA sera. Cytokines whose levels differ significantly between sera from individuals with osteoarthritis (OA) and sera from individuals with rheumatoid arthritis (RA) (FDR < 10%); the heatmap columns group into mostly OA sera and mostly RA sera. Significance analysis of microarrays (SAM) was used to identify statistically significant differences, and the SAM-generated results were subjected to unsupervised hierarchic clustering. Cytokine levels were measured with a multiplex bead-based immunoassay. Samples from individual patients are listed above the heatmap, and the individual cytokines are listed to the right of the heatmap. IL, interleukin; IL-1ra, interleukin-1 receptor antagonist; FGF, fibroblast growth factor; MCP, monocyte chemotactic protein; HGF, hepatocyte growth factor; MIP-1, macrophage inflammatory protein; M-CSF, macrophage colony-stimulating factor; SCGF-β, stem cell growth factor β; G-CSF, granulocyte colony-stimulating factor.
Interest in the putative immunomodulatory effects of α1m, α2m, and Gc-globulin is increasing, with both proinflammatory and anti-inflammatory properties suggested for each of them [36][37][38].
For example, α1m has been shown to bind to the surface of various inflammatory cells and to either stimulate or inhibit the activation of human lymphocytes [38]. The immunoregulatory role of α1m in health and disease is likely to be context dependent. Gc-globulin, however, appears to be primarily proinflammatory: it enhances the neutrophil- and monocyte-chemotactic activity of the anaphylatoxin C5a [36] and, in its sialic-acid-free form, activates macrophages [39]. Here, we uncover an additional mechanism by which these plasma proteins could promote inflammation. We speculate that exudation into extravascular spaces at sites of tissue damage and inflammation may render these plasma proteins inflammatory by bringing them into contact with TLR-expressing macrophages. Our finding that certain plasma proteins present in OA synovial fluid can induce macrophage production of inflammatory cytokines supports the model of local production of inflammatory mediators in the joints in OA.
Conclusions
We identified 108 proteins in OA synovial fluid and showed that OA is associated with low-grade inflammation. We found that plasma proteins form a large proportion of the proteins present in OA synovial fluid and that certain of these plasma proteins can signal through TLR4 to induce the production of an array of inflammatory cytokines, including those upregulated in OA. Our findings suggest that plasma proteins present in OA synovial fluid, whether through exudation from the plasma or production by synovial tissues, could contribute to low-grade inflammation in OA by functioning as DAMPs.

Figure 5 Induction of TNF production by plasma proteins is not due to endotoxin contamination. RAW264.7 macrophages were stimulated for 24 hours with 50 μg/ml of Gc-globulin, α1-microglobulin (α1m), or α2-macroglobulin (α2m) that had been incubated with proteinase K (20 μg/ml) at 55°C for 4 hours in the presence of β-mercaptoethanol and then heated to 100°C for 10 minutes. TNF levels in the supernatants were determined with ELISA. Lipopolysaccharide (LPS; 1 ng/ml) was used as a positive control. Data are shown as the mean ± SEM of duplicates from one of two representative experiments. *P < 0.05; **P < 0.01.
|
v3-fos-license
|
2019-12-05T09:08:59.420Z
|
2019-12-04T00:00:00.000
|
208642265
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0225750&type=printable",
"pdf_hash": "03ce5bddd8390a728a579af08623c4792cd4549c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43464",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Biology"
],
"sha1": "8885e7cb81207f21136870be9608976ff2d22f3b",
"year": 2019
}
|
pes2o/s2orc
|
Comparative study of the composition of cultivated, naturally grown Cordyceps sinensis, and stiff worms across different sampling years
Natural Cordyceps sinensis, which is a valuable anti-tumor, immunomodulatory, and antiviral agent in Asia, has been overexploited in recent years. Therefore, it is important for cultivated C. sinensis to be recognized in the market. In this research, the main components of entirely cultivated C. sinensis, naturally grown C. sinensis, and stiff worms across different sampling years were detected and compared by HPLC-MS and UV spectrometry. The results indicated that the mean levels of adenosine and cordycepin were significantly higher, whereas the mean levels of mannitol and polysaccharides were remarkably lower, in the cultivated type than in the natural type. No distinct difference in the average soluble protein content was observed. The composition of the stiff worms was similar to that of the natural herb, except that the total soluble protein content was higher and that of mannitol was lower. In addition, the ultraviolet absorption spectra of the three types showed high similarity at 260 nm. This research indicates that the main nutritional composition of cultivated and natural C. sinensis is identical and that the cultivated type can be used as an effective substitute.
Introduction
Cordyceps sinensis (Berkeley) Sacc. is a unique entomopathogenic fungus and valuable Chinese medicine resource that has been employed for treating various human conditions, including autoimmune disease, cancer, chronic inflammation, fatigue, and type II diabetes [1][2][3]. In winter, the fungus, mostly Hirsutella sinensis [4], parasitizes the ghost moth larvae (Hepialus armoricanus Obertheir, belonging to the order Lepidoptera) and proliferates until the larva is converted into fungal hyphae; in summer, the stroma grows out of the dead caterpillar, leaving the exoskeleton intact (the fruiting body) [5]. Thus, this characteristic Chinese medicine is referred to as 'winter worm summer grass' (Dong Chong Xia Cao in Chinese). The parasitic complex of the fungus and the caterpillar is mainly found in the soil of the prairie at an elevation of 3500-5000 m in Tibet, Qinghai, Sichuan, and Yunnan provinces in China [6,7]. Natural C. sinensis, whose availability is limited due to its extreme host-range specificity and confined geographic distribution [8], has been overcollected to the brink of extinction, and the technology for artificial breeding is not yet mature. There has been a massive disparity between supply and demand, resulting in skyrocketing prices in recent decades [9,10]. The most common cultivated products on the market today are various health-care products consisting of fermentation liquid extracted from the mycelia of C. sinensis and other similar fungi [11]. However, because of the differences in product form and in the source of the effective components between the cultivated and natural types, the cultivated type has not been well accepted by consumers, even though its price is lower. On the other hand, the entirely cultivated type, which not only morphologically resembles the wild one but also exhibits similar medicinal effects with controllable heavy-metal contamination, deserves more recognition by the market.
The quality assessment of C. sinensis is still in a preliminary stage [12]. The Chinese Pharmacopoeia specifies only the content of adenosine as a quality control marker [13]. Studies have shown that cordycepic acid (mannitol), cordycepin, and polysaccharides are also main effective components [14,15]; they are significant markers for the evaluation of the cultivated and natural types. Previous studies show that the nutritional value in terms of the levels of nucleosides, nucleotides, and adenosine is virtually the same between artificially cultivated and natural samples [16,17], that there is no difference in the chemical components detected between cultivated and natural Chinese cordyceps [18], and that the extracts of both cultured and natural mycelia exhibit direct, potent antioxidant activities [19]. However, one study reported that the contents of crude fat, total amino acids, and minerals were significantly different between natural and cultured samples [20], and natural and cultured samples display significant differences in their metabolic profiles [21]. Thus, the available findings are inconsistent. Moreover, these reports have focused on fermentation extract or mycelia [22,23]. A comprehensive exploration of the differences in the main components among different types by HPLC-MS and UV spectrometry has yet to be performed.
This study quantitatively and qualitatively described and compared the cordycepin, mannitol, adenosine, polysaccharides, and total soluble protein of entirely cultivated C. sinensis, naturally grown C. sinensis, and stiff worms. It aims to provide useful information for understanding the differences and for wider acceptance of the cultivated substitute, in order to reduce the use of natural C. sinensis, an endangered species. This study also shows that the artificial cultivation of this precious herb is technically feasible.
Materials and instruments
Sample collection. A total of 8 samples were divided into three types (Fig 1, Table 1). Entirely cultivated C. sinensis (B, C1-C4) was obtained through cultivation and inoculation by us. All the strains were the same Hirsutella sinensis, which were reserved separately for the next cultivation. The cultivation conditions of the strains and inoculated C. sinensis were also the same. Natural C. sinensis (A) was collected from Pan'an village, Xiaojin County, Aba Tibetan and Qiang Autonomous Prefecture, Sichuan Province, China (102°1′-102°59′E, 30°35′-31°43′N), at an altitude of 3800-4500 m. The sampling site is publicly owned. All the wild samples were acquired legally from the local people, and all the cultivated samples were bred by us, so we didn't need any permits to carry out this study. The stiff worms were ghost moth larvae that were not able to grow fruiting bodies after artificial inoculation (C5, C6). Standard samples of cordycepin, mannitol, and adenosine were purchased from the China Food and Drug Testing Institute. We here state that we didn't involve any endangered, threatened, or protected species or locations in this study.
Sample processing. The samples were rinsed with deionized water, wrapped in filter paper, and placed in an oven at 80°C for 24 hours. They were then pulverized at 6000 rpm for 3 min into fine powder and passed through a 60-mesh screen. From each sample, 0.25 g ± 0.005 g was weighed accurately and placed in a 50 mL volumetric flask, and 30 mL of Na3PO4 solution was then added at a concentration of 0.01 mol/L. The sealed bottle was shaken vigorously for 30 min and extracted in an ultrasonic extractor at a constant 60°C for 30 min. The extract was filtered through two layers of coarse filter paper, and the 50 mL filtrate was taken as the test solution.
Instruments. A Thermo BioMate 3S automated nucleic acid and protein analyzer and a Waters XEVO TQ mass spectrometer were used in this study. High-performance liquid chromatography (HPLC) was conducted with a Waters ACQUITY UPLC I-Class instrument. A Thermo shaker was also used.
The mass spectrometry conditions were as follows: ion source: electrospray ionization (ESI), positive mode; scanning mode: multiple reaction monitoring (MRM); analytical temperature: 350°C; desolvation gas flow rate: 700 L/hr; capillary voltage: 3.3 kV. For ADE (adenosine), an ion pair of 268.22 > 136.07 was selected; for CORD (cordycepin), an ion pair of 252.22 > 136.07; and for MAN (mannitol), an ion pair of 183.14 > 69.03 (the former is the parent ion, and the latter is the most stable daughter ion) (Table 2).
Determination of main components
The three representative components adenosine (C10H13N5O4), mannitol (C6H14O6), and cordycepin (C10H13N5O3) were measured by high-performance liquid chromatography-mass spectrometry (HPLC-MS).
Polysaccharides were determined by the sulfuric acid-phenol method [24]. The total soluble protein content was measured by placing the prepared test solution into the sample chamber of the fully automatic nucleic acid and protein analyzer to read the parameters after 10 seconds.
Each test sample was measured by a UV spectrometer at λ = 260 nm to compare the absorption spectra.
Standard samples and standard curves
The standard samples of cordycepin, mannitol, and adenosine were accurately weighed to obtain samples of 500, 250, 50, 25, 10, 5, and 2.5 mg, respectively, which were then dissolved in H2O. The solutions had a constant volume of 500 mL. A 1 mL aliquot of each sample was diluted again, and the volume was set as 1000 mL. Standard solutions with concentrations of 1000, 500, 100, 50, 20, 10, and 5 μg/L were prepared. After the HPLC-MS test, the standard curves of concentration versus peak area were obtained (Fig 2) as follows: adenosine: y = 656.088x + 101.914, r = 0.997589; cordycepin: y = 6.67068x - 12.245, r = 0.999771; mannitol: y = 630.794x - 4898.04, r = 0.997564, where x is the concentration, and y is the peak area.
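For illustration, the fitting and back-calculation can be reproduced as in the sketch below; the peak areas are simulated around the reported adenosine line, so the numbers and the helper function are assumptions rather than actual instrument output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Calibration concentrations (ug/L) with peak areas simulated around the
# reported adenosine curve y = 656.088*x + 101.914 (r = 0.997589).
conc = np.array([5.0, 10.0, 20.0, 50.0, 100.0, 500.0, 1000.0])
area = 656.088 * conc + 101.914 + rng.normal(0.0, 2000.0, conc.size)

slope, intercept = np.polyfit(conc, area, 1)   # linear standard curve
r = np.corrcoef(conc, area)[0, 1]              # correlation coefficient

def area_to_conc(peak_area):
    """Interpolate an unknown sample's concentration from its peak area."""
    return (peak_area - intercept) / slope

print(f"y = {slope:.3f}x + {intercept:.3f}, r = {r:.6f}")
print(f"peak area 5400 -> {area_to_conc(5400.0):.2f} ug/L")
```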
Data analysis
All data in the article were processed using SPSS 25.0.
HPLC-MS results
The HPLC-MS results showed that adenosine, mannitol, and cordycepin could be detected in cultivated C. sinensis. The peak times of all the samples were nearly the same, which indicated that cultivated C. sinensis did not differ from the wild fungus in its main component types, although the concentrations varied. The adenosine contents of the eight samples varied from 1.2 μg/mL to 50.0 μg/mL, indicating that the adenosine contents of samples from different sampling years and of different types were significantly different (Table 3). The adenosine content of the cultivated type harvested in 2018 (B, 8.05 μg/mL) was significantly higher than that of the wild type (A, 3.00 μg/mL) in the same year (P<0.01). The adenosine contents of cultivated C. sinensis of different quality also exhibited extreme differences (P<0.01): the content of C1 (in which the worm body was filled, and the fruiting body was short and robust) was 50.0 μg/mL, whereas that of C4 (in which the appearance of the worm body was dry, and the fruiting body was thin and long) was 1.20 μg/mL. The mean total adenosine content of cultivated C. sinensis harvested from 2015 to 2018 (B, C1-4, 18.13 μg/mL) was significantly higher than that of the natural type (A, 3.0 μg/mL). There was no significant difference in adenosine content between the stiff worms (C5: 3.40 μg/mL, C6: 2.10 μg/mL) and the natural type (P>0.05).
Cordycepin contents. Fig 4 shows that the average peak time of cordycepin was approximately 4.5 min, and the peak time and retention time were longer than those of adenosine and mannitol. The stiff worms (C5, C6) were prone to exhibit close twin peaks where the second peak was notably lower than the first peak.
The cordycepin content ranged from 2.4 μg/mL to 22.0 μg/mL and fluctuated less than the adenosine content (Table 3). However, the data indicated a significant difference in cordycepin content among C. sinensis samples from different sampling years and of different types, even within the same year. The cordycepin contents of the cultivated (B, 22.10 μg/mL) and wild (A, 3.00 μg/mL) types in 2018 were extremely significantly different (P<0.01). The average cordycepin content of cultivated C. sinensis from different years was 7.36 μg/mL, which was markedly higher than the corresponding value of 3.0 μg/mL for the natural type. The cordycepin contents of the stiff worms produced in 2017 and 2018 (C5 and C6) were 2.7 μg/mL and 2.8 μg/mL, respectively, and this difference was not significant (P>0.05).
Mannitol content. The mannitol test results showed similar peak times and retention times for all the samples, while the contents differed significantly (Fig 5). The differences in the mannitol contents were the largest among all of the indicators tested.
The measured concentration of mannitol was the highest among all indicators, indicating that mannitol was one of the major components. It also presented the greatest range (Table 3), from 0.60 μg/mL (C6) to 578.30 μg/mL (C1), compared with the other indicators. The mannitol contents of C. sinensis from different sampling years varied remarkably, from 128.10 μg/mL (C2, 2017) to 578.30 μg/mL (C1, 2016). The mannitol contents of cultivated C. sinensis harvested in the same year (2017) with different quality levels also showed significant differences (P<0.01), ranging from 128.10 μg/mL (C2) to 514.0 μg/mL (C4). The mannitol concentrations of the cultivated type ranged from 128.10 to 578.30 μg/mL with an average content of 437.58 μg/mL, which was notably lower than that of the natural type (502.10 μg/mL). The mannitol concentration was significantly different in stiff worms from different years (321.80 μg/mL (C5) versus 0.60 μg/mL (C6)), and the average mannitol level of the stiff worms (161.2 μg/mL) was notably lower than those of the wild and cultivated types.

Overall, the HPLC-MS results demonstrated that all of the main effective substances, including adenosine, cordycepin, and mannitol, could be detected in the cultivated type and that the peak times were similar to those of the natural type. The only difference between cultivated and natural C. sinensis was in the concentrations detected. This analysis indicates that the components of cultivated C. sinensis and the wild type are virtually identical.
Assessment of polysaccharides
The results of analysis by the sulfuric acid-phenol method showed that the contents of polysaccharides differed between samples, with the distribution ranging from 126.30 μg/mL to 286.63 μg/mL. The polysaccharide content of cultivated C. sinensis (B, 2018) was 286.63 μg/mL, which was significantly higher (P<0.01) than that of the natural type (A, 228.23 μg/mL). However, the average polysaccharide content of cultivated C. sinensis was 177.20 μg/mL, which was significantly lower than that of the natural type (228.23 μg/mL) (P<0.01). The average content of polysaccharides in the stiff worms was 182.1 μg/mL, which was also significantly lower than that of the wild type.
Assessment of total soluble proteins
The data revealed that soluble proteins were not the main component (3.35-4.57 μg/mL). The soluble protein content of the cultivated type (B, 2018) was 4.57 μg/mL, whereas the content of natural one from the same year (A, 2018) was 3.97 μg/mL, and the difference was significant (P<0.01). The soluble protein contents of cultivated C. sinensis from different sampling years were significantly different (P<0.01). The average soluble protein content of the stiff worms was 4.36 μg/mL, which was significantly higher than those of the cultivated type (3.74 μg/mL) and the wild type (3.97 μg/mL) (P<0.01).
Ultraviolet absorption spectrum results (λ = 260 nm)
As shown in Fig 6, the UV absorption spectrum of each sample was broadly similar, but Fig 6A, Fig 6B, Fig 6C, and Fig 6D (cultivated C. sinensis) reveal a similar pattern in which the central peak is intense and there are several consecutive small peaks around the central peak. The UV profiles of Fig 6E and Fig 6F (both stiff worms) could be classified into one type in which the central peak is followed by a small peak before eventually leveling out. The UV absorption spectra of Fig 6G (A, natural C. sinensis) and Fig 6H (B, cultivated C. sinensis) were highly similar. The summarized indicators (Table 4) revealed distinct differences. However, with further analysis, we could see that the levels of small molecules such as adenosine and cordycepin were significantly higher in cultivated C. sinensis than in the wild type. Although there were differences between the stiff worms and the natural type, the absolute difference was not significant. The content of polysaccharides was significantly different between the three types in the following order (P<0.01): wild type > stiff worm > cultivated type. Soluble protein was not a main component of C. sinensis (below 0.01%); there was no significant difference between the natural and cultivated types (P>0.05), but the content in the stiff worms was significantly different (P<0.01), with the following order observed: stiff worm > cultivated type > natural type.
Discussion
Cordyceps sinensis has gained public popularity and global scientific attention owing to its wide range of nutritive and medicinal properties. However, natural C. sinensis has been excessively harvested in the last two decades, leading to a drastic decrease in wild populations. Hence, this study investigates the differences among the entirely cultivated type, the wild type, and stiff worms by identifying and comparing their major components. It aims to provide useful information for understanding these differences and to encourage the use of cultivated alternatives.
The results indicate that all the characteristic components of the natural herb can be detected in the cultivated type and stiff worms, but the concentrations vary among types and across harvest years. Our results are in some respects similar to those of previous studies comparing the constituents of cultivated and natural C. sinensis and related species [20,23], but in the present study the representative components were comprehensively compared by employing artificially bred C. sinensis from different years that has the same appearance as the natural type, rather than fermentation extract or mycelia (Fig 1). Adenosine, cordycepin, cordycepic acid (mannitol), and polysaccharides are four major effective components of C. sinensis. Among these, adenosine has been used as the premier marker for quality control of C. sinensis [13] and is well known to depress the excitability of CNS neurons, to inhibit the release of various neurotransmitters presynaptically, and to possess anticonvulsant activity [7]. Cordycepin (C10H13N5O3) is essentially a derivative of adenosine and was the first nucleoside antibiotic isolated from Cordyceps militaris, a species related to C. sinensis that is commonly used as a substitute [25]. Whether natural and cultured C. sinensis contain cordycepin is still controversial [26][27][28][29], but the presence of cordycepin in both the natural and cultivated types is confirmed in this and other reports [30,31]. Adenosine and its derivatives, including cordycepin, are very useful owing to their powerful bactericidal, antiviral, fungicidal, and anticancer functions, presenting strong pharmacological and therapeutic potential against many dreadful diseases [32]. This study showed that the contents of adenosine-related substances (adenosine and cordycepin) in cultivated C. sinensis were significantly higher than in the natural type, whereas the contents in the stiff worms were not significantly different from those in natural ones. These results echo previous studies reporting that the contents of nucleosides (cordycepin, adenosine, etc.) in cultured Cordyceps were higher than in wild samples [29,31], and that the levels of adenine and adenosine in cultured samples are considerably higher than in natural ones [20,29]. One study also states that the amount of nucleosides, especially adenosine, in cultured C. sinensis is higher than that in the natural type, and that cultured C. militaris exhibits a much higher content of cordycepin [33]. Chassy and Suhadolnik reported on the biosynthesis of cordycepin in C. militaris by radioimmunoassay [34], from which we can speculate that the higher adenosine and cordycepin in the cultured type might be induced by the favorable controlled lighting, moisture, and temperature during the initial period of asexual reproduction, which facilitate the absorption and transportation of adenosine and cordycepin.
Cordycepic acid, also known as mannitol, is mainly used as a dehydrating agent and diuretic in medical treatment. It has pharmacological effects such as increasing plasma osmotic pressure, antitussive and anti-free-radical activities [7], and cerebrovascular dilation [35]. It can be employed in the treatment of meningioma as a liquid chemoembolization agent [36], in the treatment of intracerebral hemorrhage [37], and for reducing intracranial pressure [38]. In the present study, there was a significant difference in the contents of mannitol among the three types of specimens, in the following order: natural type > cultivated type > stiff worm, although the difference between the absolute values of the last two specimen types was not significant. A previous study reports consistent findings: the natural herb contains more free mannitol and a small amount of glucose, whereas mannitol in cultured C. sinensis and cultured C. militaris is much lower and free glucose is detected in only a few samples [39]. Additionally, natural products have a significantly higher content of mannitol compared with submerged cultured mycelia [40]. Adenosine, cordycepin, and mannitol are small molecules that are the primary building blocks of nucleic acids and polysaccharides. From the perspective of molecular structure, mannitol is the reduction product of mannose, which is the C-2 epimer of glucose [41]. Therefore, the discrepancy in the content of mannitol might have been caused by different transformation processes and equilibrium sites associated with varying transformation efficiencies under different biological conditions. However, the specific transformation processes of the three compounds in the ghost moth body and the C. sinensis body are still poorly understood, which necessitates further research.
Polysaccharides are one of the most abundant components of C. sinensis [42]. Since Miyazaki first obtained water-soluble polysaccharides from C. sinensis fruiting bodies [43], researchers have conducted extensive research on the functions of C. sinensis polysaccharides. Studies have demonstrated the pharmacological use of C. sinensis polysaccharides for immunostimulation, antitumor activity, and free-radical scavenging [44]. A more recent study found relatively high similarities among the polysaccharides from different batches of cultivated C. militaris, and also between the polysaccharides from cultivated C. militaris and natural C. sinensis [45]. Polysaccharide content has been reported to be significantly higher in cultured than in natural samples [20]. Conversely, in our research, noticeably higher contents were detected in the natural herb. The results of this study indicated significant differences in the following order: natural type > stiff worm > cultivated type. The reasons for these findings may be twofold. First, there are differences between the structures of animal tissues and fungal tissues: the main dry-matter component of animal tissues is protein, whereas the main component of fungi is polysaccharide. When ghost moth larvae are transformed by the fungus after successful inoculation, the original animal tissue components are turned into fungal hyphae. Therefore, the polysaccharide contents of both the cultivated type and the stiff worm were significantly lower. Second, notable differences exist between the diets of wild ghost moths and artificially reared ghost moths. Wild ghost moth larvae are omnivorous insects that feed on varied diets, including underground plant roots and soil humus [46]. A wide variety of food sources (generally more than ten species) are available to wild ghost moths. In contrast, the diet of artificially cultivated ghost moths is usually limited to 1-2 species. As a result of this simplified diet, the monosaccharides available to synthesize polysaccharides are markedly less abundant than in wild ghost moths. Therefore, C. sinensis might be affected by different cultivation conditions and harvesting times, resulting in different polysaccharide contents, which implies that the origin and growth environment can considerably affect the chemical composition of C. sinensis. This is in line with previous studies showing that the contents of various components of C. sinensis differ significantly among habitats [47] and harvesting times [48]. Limited by the structural diversity and complexity of polysaccharide molecules and by current research methods, the structure of the polysaccharides in C. sinensis is currently inconclusive and remains to be further studied [49].
Proteins in C. sinensis play roles in biological processes such as ribosome formation, stress adaptation to temperature reduction and cell cycle control [50], although they are not the main effective component. Previous research has reported proteomic analyses of C. sinensis to determine its proteins [50] and provided a basic proteome profile for further study [16]. However, very limited studies are available comparing the soluble protein of natural and cultivated C. sinensis. In this study, the soluble protein content of the stiff worms was significantly higher than those of the natural type and cultivated type (P<0.01), but no significant difference was detected between the latter two. This could be explained by the fact that less effective transformation of animal tissues into fungal tissues drives the protein contents higher in the cultivated type than in the natural type. Likewise, a large number of animal components (insect proteins, glycolipids) in the original stiff worm body were not effectively transformed due to failed fruiting body formation, resulting in a higher soluble protein content in the stiff worms. Additionally, ghost moth larvae have a life cycle of up to 5 years, most of which is spent underground [51]. By comparison, the life span of artificially reared ghost moths is compressed to as little as one year due to the beneficial controlled environment. Thus, rapid growth leads to a shortened nutrient accumulation period. Therefore, the abundant nutrient resources from varied food and the long growth time of the wild ghost moth guarantee the synthesis of various proteins in its body and the effective accumulation of various dry-matter components.
A limitation of this study is that, in order to dehydrate the fresh samples as soon as possible and avoid bacterial, viral, and insect contamination, the test solution for the soluble protein assay was prepared at a higher temperature, which might lower the measured soluble protein; however, this could be offset to some extent by the subsequent ultrasonic extraction step, which breaks up macromolecules. Besides, although the major components have been identified and compared in natural and cultivated C. sinensis, extensive work is still needed to define the transformation processes and the exact roles of these components.
Conclusions
This study compared the main components of wholly cultivated C. sinensis, natural C. sinensis, and stiff worms. The test results showed that all five examined effective components of natural C. sinensis could be detected in the cultivated type. More importantly, the contents of adenosine and cordycepin were even higher in the cultivated type. Additionally, the adenosine content of the cultivated type in different years exceeded 0.01%, meeting the quality control requirements specified in the Chinese Pharmacopoeia (2015 version) [13]. Although the contents of cultivated C. sinensis were inconsistent, showing remarkable differences among cultivation years, we can conclude that cultivated C. sinensis could be used as a reliable substitute for the natural herb for mass production of medicinal fungal materials.
Analysis of corrosion risk due to chloride diffusion for concrete structures in marine environment
Abstract The chloride-induced corrosion of steel is one of the main causes of deterioration of reinforced concrete structures exposed to marine environments. Chloride ingress into reinforced concrete structures is all the more complex because it depends on random parameters linked to the transport and chemical properties of materials, which results in variability of corrosion initiation. This variation raises the need for statistical approaches to evaluate the risk of corrosion initiation due to chloride ingress. To address this issue, we use sensitivity analysis to identify the influence of input parameters on the critical length of time before corrosion initiation predicted by our chloride diffusion model. Exceedance probabilities of the corrosion initiation time given that input parameters exceed certain thresholds were also calculated. Results showed that the corrosion initiation time was most sensitive to the effective chloride diffusion coefficient De in concrete, a parameter controllable by relevant stakeholders, and to the surface chloride concentration Cs, a non-controllable parameter depending on surrounding conditions. Reducing the chloride diffusion coefficient makes it possible to postpone the maintenance of structures. However, the interaction between controllable parameters and non-controllable surrounding conditions was revealed to influence the reliability of results. For instance, the probability that the corrosion initiation time exceeds 15 years given an effective diffusion coefficient (De) equal to 0.1 × 10⁻¹² m²·s⁻¹ can vary from 19 to 41% according to stochastic variations of the chloride concentration (Cs). Postponing the corrosion initiation time was thus combined with a decreasing probability of its occurrence.
Introduction
Corrosion of steel reinforcement due to chloride ingress is one of the major causes of degradation of Reinforced Concrete (RC) structures [1]. According to the Tuutti diagram [2], corrosion can be divided into two stages: corrosion initiation and corrosion propagation. The corrosion initiation stage corresponds to the process of chloride ingress into concrete until the chloride concentration reaches the steel rebar and exceeds a threshold value. The steel is then de-passivated and corrosion propagates into the reinforcement. We define the service life as the period of penetration of chloride into the concrete cover until the chloride content exceeds a threshold value at the position of the reinforcing steel bar. Indeed, at the end of that period, maintenance operations are required: most current maintenance operations consist in removing the chloride-contaminated concrete and replacing it with new concrete [3], thus inducing additional costs mainly caused by concrete production. There is thus a balance to ensure both long service life and minimum costs. In a broader perspective, we can also consider that there should be a balance for environmental impacts, because cement concrete is an important contributor to climate change [4]. Consequently, it is important to improve service life predictions, but also to determine influential parameters and to evaluate levels of potential risk in order to provide recommendations for longer service life to engineering designers when designing concrete structures exposed to chloride. Current studies are increasingly interested in the reliability assessment of results on corrosion initiation from chloride ingress modelling. For instance, a probabilistic approach to assess chloride ingress into concrete was performed by analyzing a model of chloride penetration in a stochastic framework [5]. More precisely, this last study applied Monte Carlo simulations and Latin hypercube sampling to consider the propagation of uncertainties related to material properties and surrounding conditions. Likewise, a probabilistic analysis of corrosion initiation time was performed by focusing on the initiation phase of chloride-induced reinforcement corrosion when assessing five concrete durability options [6]. Recently, the probabilistic de-passivation time for RC structures exposed to chloride ingress was also estimated according to three simple diffusion models, based on experimental data from a concrete structure exposed to the atmospheric marine environment [7]. Meanwhile, probabilistic risk analyses were also conducted for other causes of corrosion of concrete structures, such as corrosion risk due to carbonation [8]. However, in these and other studies, the individual influence of the model's input parameters and the influence of their interactions on the chloride ingress phenomenon were not investigated. This needs to be investigated in depth, in addition to the probabilistic analysis of corrosion initiation time.
Our research aims at combining service life prediction models in order to give recommendations to design engineers for extending service life. This kind of approach was already developed and applied to a RC structure located in Madrid and subjected to carbonation [9]. In complement to this previous work, the present study focuses on the probabilistic assessment of corrosion risk due to chloride ingress into RC structures exposed to a marine environment. First, a model for chloride ingress into concrete is chosen, based on appropriate simplifications, in order to use it as a decision tool for design engineers. Several studies focused on modeling chloride ingress by exclusively considering the diffusion process or by considering diffusion and convection [10]. Our approach requires a simplified durability model based on parameters (i.e. concrete mix design, construction parameters) that are available and controllable by the engineering designer, as well as relevant (i.e. influential) non-controllable parameters (meteorology, salinity, …). Second, a probabilistic analysis of the corrosion risk due to chloride ingress is performed. That includes a sensitivity analysis (SA) to identify the individual influences of the model's parameters and of their interactions on the service life of marine RC structures.
Various SA studies have already been conducted on chloride ingress into cement concrete. For example, Boddy et al. [11] applied a one-at-a-time (OAT) technique to investigate the sensitivity of a chloride transport model to variations of the parameters controlling the rate of diffusion, temperature of exposure, critical chloride level, diffusion coefficient, permeability coefficient, surface chloride concentration and position of the steel rebar. Likewise, Kirkpatrick et al. [12] studied a probabilistic model to predict chloride corrosion initiation depending on the position of the steel rebar, the surface chloride concentration and the apparent diffusion coefficient. A deeper deterministic SA than the two previous studies was conducted by applying a differential analysis technique to a corrosion model depending on four governing parameters: apparent chloride diffusion coefficient, position of the steel rebar, surface chloride concentration, and chloride threshold [13]. A global SA using the Sobol method was used to study the sensitivity of the probability of failure to the assumed coefficients of variation of the properties of pre-stressed concrete [14]. Likewise, the Sobol method was applied to investigate independent and cooperative effects of material parameters on masonry compressive strength [15]. In addition to SA, exceedance probabilities of the model output are calculated given that input parameters exceed certain thresholds, for reliability assessment of service life improvement.
Our study proposes to apply successively the Morris [16] and Sobol [17] approaches as complementary methods for SA of a diffusion-based chloride model. The originality of our work is to go beyond recent studies in the literature on uncertainties in models describing the degradation of reinforced concrete structures, by considering the individual influence of available input parameters as well as the influence of their interactions on the model output. A nonlinear chloride transport model that is easy to implement is used, enabling us to investigate the potential interactions between input parameters, for which few experimental results are available in the literature. SA methods are applied to better understand the phenomena involved in the chloride model and to identify the input parameters (i.e. action levers) that the sector's actors can control to improve the service life behavior of concrete structures. These controllable (or technological) parameters are the action levers that will result in the proposition of possible pathways for structure owners to extend the length of time before corrosion initiation. Furthermore, probabilistic analyses of the critical length of time to corrosion initiation given the most influential parameters are performed to provide information on the reliability of the results.
Chloride transport model
Various approaches were developed to model chloride ingress through cement-based materials. The main difference between models concerns whether they consider concrete exposed to saturated and/or unsaturated conditions. For saturated conditions, the diffusion process obeying Fick's second law is generally sufficient to model chloride ingress [18]. Likewise, the time-dependence of the apparent diffusion coefficient can be considered, as well as the chloride concentration at the exposed surface [19,20], by using empirical laws to establish time-dependency functions.
For models accounting for unsaturated conditions, some studies have modeled chloride transport by taking into account both diffusion and convection [21][22][23][24][25][26][27]. These models used semi-empirical laws found by fitting experimental data to obtain the moisture diffusivity and the chloride diffusion coefficient. These models can be mono-species (only chloride is considered) or multi-species (ions contained in the pore solution are also considered). They are considered sophisticated models because they take into account physical or chemical phenomena occurring in concrete, such as chemical binding, the electrical double layer, and the activity of the pore solution. However, models suitable for unsaturated conditions require many input parameters that are not currently measured for concrete design, because they are expensive or time-consuming to collect and are not required by the standards [28,29]. Moreover, reinforced concrete in a marine environment can generally be considered saturated. Indeed, when cast on site, concrete is initially saturated and persistently exposed to high Relative Humidity (RH), which, for instance, stays above 80% on the French Atlantic coast [30]. In the case of maritime structures it is thus natural to describe chloride displacement into concrete by a diffusion equation, as was done in several studies to analyze chloride profiles obtained from reinforced concrete structures in unsaturated conditions [30][31][32][33][34]. From an engineering point of view, Fick's second law is simple, and the orders of magnitude of the diffusion coefficient (or chloride profile) that can be obtained with this law are correct and acceptable [35]. In addition, the numerical implementation of this law is not time-consuming, which makes statistical investigation feasible.
Thus, according to our objective, our choice goes to a simple model suitable for saturated conditions. This model is summarized in Fig. 1. The equations are justified below.
Fick's second law of diffusion is written as:

∂C(x,t)/∂t = D_a ∂²C(x,t)/∂x² (1)

where C(x,t) (kg·m⁻³ of concrete) is the chloride concentration at position x (m) at time t (s).
The previous equation can be solved by considering finite or semi-infinite boundary conditions. The latter case is often considered for computing the diffusion coefficient or the chloride concentration, because the unsteady state is not time-consuming. By assuming a time-invariant chloride concentration C_s at the concrete surface and a time-invariant diffusion coefficient, Crank's solution of Fick's second law is given by Crank [36]:

C(x,t) = C_s · erfc( x / (2√(D_a t)) ) (2)

under the initial condition C = 0 for x > 0 and t = 0. In equation (2), C_s (kg·m⁻³ of solution) is the chloride concentration at the concrete surface, D_a (m²·s⁻¹) is the apparent diffusion coefficient and erfc() is the complementary error function. The input parameters C_s and D_a are not time-dependent in this paper, as in Refs. [31][32][33]. Moreover, it was shown that the chloride diffusion coefficient decreases particularly during the first few months before stabilizing, even with admixtures such as slag [37].
The apparent diffusion coefficient D_a previously mentioned is given as [1]:

D_a = D_e / (1 + (1/p) · ∂C′_b/∂C_f) (3)

where D_e (m²·s⁻¹) is the effective diffusion coefficient, p is the porosity, and C′_b (kg·m⁻³ of concrete) and C_f (kg·m⁻³ of pore solution) are the concentrations of bound and free chlorides, respectively [38].
In order to make the statistical study easy to use, we note that, as a first step, we considered the diffusion coefficient as an intrinsic property of the material and therefore "constant". Taking into account the time dependency of the diffusion coefficient by introducing an ageing factor [39,40] is a possible and interesting step not taken in this paper. The chloride binding isotherm is the relationship between free and bound chloride ions at a given temperature, such that the partial derivative ∂C′_b/∂C_f can be determined. The chloride binding capability of concrete is a very complex phenomenon described by thermodynamic equilibrium, kinetic control and surface complexation [41], and the effects of different types of chloride binding (physically and chemically bound chloride) can be taken into account to describe chloride ingress [42]. Usually, to evaluate the relationship ∂C′_b/∂C_f, three types of binding isotherms (linear, Langmuir, Freundlich) have been proposed by fitting isotherm models to measurements [43]. To experimentally determine this isotherm, researchers usually use powder or thin discs extracted from concrete cores. In that way, the best fit to experimental data was obtained with the Freundlich isotherm in many cases [37,44]. However, this experimental procedure did not match observations on reinforced concrete in a natural maritime environment. So, to establish the relationship between free and bound chloride, it is necessary to expose concrete specimens to a marine environment for a long time [45,46]. For the different concrete compositions tested in these studies, the obtained chloride binding capacity fitted very well with a linear relationship. So, the linear isotherm is used as:

C′_b = α · C_f (4)

That results in the apparent diffusion coefficient given as follows:

D_a = D_e / (1 + α/p) (5)

In summary, the chloride diffusion model is presented in Fig. 1 with the main input parameters involved in chloride ingress into RC structures. We are interested in determining the critical time t = t_crit that corresponds to the period of time during which chlorides penetrate the concrete but no damage is observed [2]:

C(x_rebar, t_crit) = C_crit (6)

where the critical chloride concentration C_crit (kg·m⁻³ of concrete) is the chloride concentration threshold above which the passivation of steel is destroyed [47]. The critical chloride concentration involved in equation (6) can take three forms [10]: the concentration of total chlorides [48], the concentration of free chlorides [49], and the [Cl⁻]/[OH⁻] ratio [50]. These levels are highly variable because they depend on particular surrounding conditions, such as concrete mix design and experimental procedures [51], as well as rebar surface conditions [52,53]. The concentration of total chlorides was shown to be more relevant than the concentration of free chlorides alone or the [Cl⁻]/[OH⁻] ratio [54,55]; thus we choose the total chloride content as the threshold value for our critical chloride concentration.
The critical length of time t_crit to corrosion initiation is defined as the time at which the chloride concentration at the position of the steel rebar x_rebar exceeds the critical value. The time t_crit can be analytically expressed as:

t_crit = (1/(4 D_a)) · ( x_rebar / erfc⁻¹(C_crit/C_s) )² (7)
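To make equations (2), (5) and (7) concrete, the following minimal Python sketch computes the critical time for one illustrative parameter set. All numerical values are placeholders of the right order of magnitude, not the paper's calibrated inputs, and D_a follows the form of equation (5) above.

```python
# Minimal sketch: critical time to corrosion initiation from Crank's
# solution of Fick's second law (equations (2) and (7)).
import numpy as np
from scipy.special import erfc, erfcinv

D_e = 0.25e-12        # effective diffusion coefficient (m^2/s), illustrative
p = 0.12              # porosity, illustrative
alpha = 0.05          # linear binding isotherm coefficient, illustrative
D_a = D_e / (1.0 + alpha / p)   # apparent diffusion coefficient, equation (5)

C_s = 9.75            # surface chloride concentration (kg/m^3)
C_crit = 2.0          # critical total chloride content (kg/m^3), illustrative
x_rebar = 0.05        # concrete cover depth (m), illustrative

# Equation (7): invert C(x_rebar, t_crit) = C_crit, with
# C(x, t) = C_s * erfc(x / (2 * sqrt(D_a * t))).
t_crit = (x_rebar / (2.0 * np.sqrt(D_a) * erfcinv(C_crit / C_s))) ** 2
print(f"t_crit = {t_crit / (365.25 * 24 * 3600):.1f} years")

# Sanity check: the chloride content at the rebar at t_crit equals C_crit.
assert np.isclose(C_s * erfc(x_rebar / (2 * np.sqrt(D_a * t_crit))), C_crit)
```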
Risk analysis study
According to our objective of obtaining the best service life values, a risk analysis of the corrosion induced by chloride ingress is conducted in three steps: 1) performing an SA to evaluate the influence of the model's various input parameters on its output [17]; 2) calculating conditional probabilities of the critical length of time to corrosion initiation; and 3) analyzing favorable and unfavorable scenarios for corrosion initiation.
Sensitivity analysis
At the first step, the Morris and Sobol SA methods are applied to the model. The method of Morris [16] provides sensitivity information (influence, sense of variation) about input parameters over the interval range of interest. As a complement, the method of Sobol [17] evaluates the contribution of the variation of each input parameter to the total variation of the model output via quantitative indices. Sobol indices also quantify the contribution of the interactions between input parameters. The Morris and Sobol methods are successively conducted based on the same methodology, originated by Andrianandraina et al. [56]. More precisely, in this methodology, the model is first defined with its intermediary models, constants and input parameters. All input parameters are then characterized by their pdf and grouped in two categories according to the actor's possibility of action: technological parameters, which are controllable by the engineering designer and correspond to potential action levers, and surrounding parameters, which are not controllable by the engineering designer. The Morris and Sobol methods are successively applied using the variation ranges and pdf of the parameters. More details on these two SA methods are provided in the Supplementary Material (Section B).
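As a rough illustration of how such a variance-based SA can be set up, the sketch below uses the open-source SALib package (v1.4-style API). This is an assumption of convenience, since the paper does not state which implementation was used, and the uniform bounds are placeholders rather than the pdfs of Table 1.

```python
# Minimal sketch of a Sobol sensitivity analysis of t_crit using SALib.
import numpy as np
from scipy.special import erfcinv
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["D_e", "x_rebar", "C_s", "C_crit"],
    # Placeholder uniform bounds spanning plausible ranges:
    "bounds": [[0.1e-12, 5e-12], [0.03, 0.07], [5.0, 15.0], [1.0, 3.0]],
}

def t_crit(X):
    """Vectorized equation (7); porosity and alpha fixed for simplicity."""
    D_e, x_rebar, C_s, C_crit = X.T
    D_a = D_e / (1.0 + 0.05 / 0.12)
    return (x_rebar / (2 * np.sqrt(D_a) * erfcinv(C_crit / C_s))) ** 2

X = saltelli.sample(problem, 1024)      # Saltelli sampling for Sobol indices
Si = sobol.analyze(problem, t_crit(X))
print(dict(zip(problem["names"], Si["S1"])))   # first-order indices S_i
print(dict(zip(problem["names"], Si["ST"])))   # total-order indices ST_i
```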
Conditional probability of critical length of time to corrosion initiation
At the second step, conditional distributions of the critical length of time t_crit to corrosion initiation are calculated given the most influential input parameters. More precisely, this consists in evaluating exceedance probabilities of t_crit given that input parameters X₁ and X₂ exceed certain thresholds x₁ and x₂, respectively, while setting the other input parameters at their default or mean value:

P( t_crit > t | X₁ > x₁, X₂ > x₂ ) (8)

In practice, we consider the input parameters X₁ and X₂ that have the most important contribution to variations of the critical length of time t_crit.
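A minimal Monte Carlo sketch of this conditional exceedance probability is given below, conditioning on a single D_e value while C_s varies. The truncated-normal parameters for C_s and the remaining fixed values are illustrative assumptions; the sketch does not attempt to reproduce Table 3.

```python
# Minimal sketch of equation (8): Monte Carlo estimate of
# P(t_crit > t | D_e = d) under stochastic variation of C_s.
import numpy as np
from scipy.special import erfcinv

rng = np.random.default_rng(0)
d = 0.1e-12                       # conditioning value of D_e (m^2/s)

# Truncated-normal sample of C_s via rejection (mean 9.75 kg/m^3;
# the sigma and lower bound are illustrative assumptions).
C_s = rng.normal(9.75, 3.0, size=20000)
C_s = C_s[C_s > 1.0][:5000]

x_rebar, C_crit, p, alpha = 0.05, 2.0, 0.12, 0.05  # other inputs fixed
D_a = d / (1.0 + alpha / p)                        # equation (5)
t_crit = (x_rebar / (2 * np.sqrt(D_a) * erfcinv(C_crit / C_s))) ** 2  # eq. (7)

# If C_s <= C_crit the threshold is never reached: t_crit is infinite.
years = np.where(C_s > C_crit, t_crit / (365.25 * 24 * 3600.0), np.inf)
print("P(t_crit > 15 yr | D_e = d) ~", (years > 15.0).mean())
```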
Comparisons of simulated scenarios
At the third step, unfavorable, favorable and medium scenarios are simulated and compared. The first scenario, called "unfavorable", aims at simulating the degradation of service life that the engineering designer may cause by choosing values of parameters that decrease service life. Thus all influential technological parameters are set at their most unfavorable values for the shortest service life and all surrounding parameters at their default or mean value [56]. The second scenario, called "favorable", aims at simulating the reliability of the possible improvement of service life that the engineering designer can obtain by choosing values of parameters that increase service life. Thus all influential technological parameters are set at their most favorable values for the longest service life and all surrounding parameters at their default or mean value [56]. The last scenario, called "medium", aims at obtaining an average scenario, for which the values of the parameters are not known and thus all are set to their mean values. Probability distributions of the simulated critical length of time t_crit are then plotted for the three scenarios. The comparison among the probability distributions of t_crit and their descriptive statistics (mean and standard deviation) provides information about the uncertainty and potential variation (extension or reduction) of the corrosion initiation time in each simulated scenario.
All calculations (SA indices, conditional probabilities, various scenarios) were performed over 5000 repetitions by Monte Carlo simulation. This number of repetitions was sufficient to reach stable and relevant results with a reasonable computation time. The input parameters are assumed to be independent, as required by the methodology of the two SA methods applied.
Characterization of input parameters
According to our objective, we investigate the individual influence of the input parameters D_e, x_rebar, C_s, C_crit, p and α, as well as the influence of their interactions, on the corrosion initiation time t_crit.
Technological parameters are the position of the steel rebar in concrete, since it is governed by European normalization (EN 1992-1-1 [29]), and the effective diffusion coefficient D_e and porosity p, because the engineering designer can control them through other factors such as the water/binder ratio. Surrounding parameters are the chloride concentration at the concrete surface, the linear isotherm α, and the critical chloride content, which is sensitive to the chemical characteristics of concrete components (binder and type of aggregates).
Probability density functions (pdf) of the input parameters considered in this study are shown in Table 1, and we explain below how they were obtained.
Descriptive statistics of parameters related to properties of the materials (chloride diffusion, porosity) were calculated from a literature review of experimental studies performed on 13 references with 52 different concrete mixtures and different types of binders [37,[57][58][59][60][61][62][63][64][65][66][67]. The references were selected because they determine the effective diffusion coefficient D_e in concrete from accelerated steady-state migration tests with a method close to the French standard XP-P-18461. The distributions of observed values of the effective diffusion coefficient D_e and porosity p obtained from our literature review were assumed lognormal and normal, respectively (Table 1). Descriptive statistics of the surface chloride concentration C_s for structures exposed to marine breeze were taken from Table 2a in Ref. [68] (page 1607), and those of the critical chloride concentration from Table 3 in Ref. [68] (page 1608). A normal distribution of C_s is assumed in this study [13,68]. More precisely, no conclusions were drawn in Ref. [68] about the statistical distribution of C_s for structures exposed to marine breeze; in our work, we considered a normal distribution of C_s truncated at the minimum value of its variation range.
The linear isotherm parameter α is experimental and does not have a known pdf in the literature. The α values calculated in one study varied from 0.01452 to 0.06195 [45]. In that study, concrete specimens were exposed to a marine environment for 10-30 years before samples were collected to determine total and free chloride contents. The water/binder ratios varied from 0.45 to 0.55 and different binders were studied: ordinary Portland cement, slag cement and fly ash cement. Concrete specimens with water/binder ratios varying from 0.45 to 0.65 were also used with different binders (cement types CEMI, CEMV, as well as binders with fly ash replacing Portland cement type CEMI) [46]. In that study, the concrete specimens were exposed to seawater for seven years before total and free chloride contents were determined. The α values calculated from these data varied from 0.01584 to 0.0423. From these two references we assume that the α parameter can vary inside the [0.01; 0.1] interval. As we only have minimum and maximum values, but no information on its pdf, we assume a uniform probability distribution. The critical chloride concentration marking the end of the passivation state [2], chosen as the concentration of total chlorides [54,55] as previously explained, depends on many parameters such as the type of steel or the electrochemical environment in concrete [68]. Its pdf being unknown, we also assume a uniform probability distribution.
Finally, the position of the steel rebar is assumed to follow a normal distribution truncated at a lower bound equal to 10 mm [69].
Sensitivity analysis
Results of the SA concerning the influence of the input parameters D_e, x_rebar, C_s, C_crit, p and α of the studied model are given in Table 2.
Individual influences
According to the algebraic sign of the Morris index μ, the time t_crit may be reduced (μ < 0) by an increase of D_e, p or C_s. Conversely, the time t_crit may be extended (μ > 0) by an increase of C_crit, x_rebar or α. These trends for D_e, p and α are expected from the analytical expression of the apparent diffusion coefficient D_a in equation (5): increasing α decreases the apparent diffusion coefficient D_a, while increasing p increases D_a. Then, looking at the individual effect of the parameters (S_i) as well as the total effect (ST_i), the time t_crit is found most sensitive to variations of D_e (S_i = 29.90%). That emphasizes the key role of the diffusion phenomenon in the chloride ingress process, in comparison with the binding capacity and porosity of the concrete, as experimentally found by Pradelle et al. [70]. Our results were affected by calculation bias, particularly for estimated Sobol indices close to 0.
A comparison can be established with the deterministic [13] and probabilistic [71] SA performed on a diffusion-based corrosion initiation model by considering four governing parameters: apparent chloride diffusion coefficient, position of the steel rebar, surface chloride concentration, and critical chloride concentration. An approximation of the considered model was developed by applying analytical differential techniques based on a Taylor series, which do not take into account the pdf of the parameters [13]. Their results are in agreement with ours on the positive trend of t_crit with x_rebar and C_crit and the negative trend of t_crit with C_s and D_e. However, the individual effects of the parameters differ between the two studies. Different types of steels (conventional black carbon steel and corrosion-resistant steels) and exposure conditions (light, moderate, high and severe) were considered in Ref. [13], such that the ranking of parameter importance varied between these different cases. In one case, the time t_crit was most sensitive to variations of x_rebar, then D_e; in another case, it was most sensitive to C_s and C_crit. Importance factors of the governing parameters were calculated in Ref. [71] by using their statistical distributions, assumed to be lognormal. These factors were influenced both by time within the design life of the structure and by the coefficients of variation of the parameters. According to these importance factors, the input parameter x_rebar was ranked first, followed by D_e, C_s and C_crit.

Table 2. Sensitivity analysis (Sobol and Morris indices) for the input parameters involved in the chloride transport model. Mean values of Sobol first-order indices are highlighted in grey.
These SA results cannot be entirely compared with ours, since SA results must be interpreted with regard to the variation ranges and statistical distributions considered for the input parameters, as well as the type of concrete structures studied. Furthermore, one of the main advantages of the approach applied in this work is the quantification of the influence of interactions between input parameters, which was not done in previous studies.
Influences of interactions
Possible interactions between parameters were indicated by values of the ratio σ_i/μ*_i greater than 0.5 for all parameters in Table 2.
Interactions detected by the Morris method are confirmed by the Sobol method. Results show that the influence of interactions between input parameters is most important for D_e and C_s (via the difference ST_i − S_i ≈ 30%) and non-negligible for x_rebar (ST_i − S_i ≈ 20%). The strongest interaction is observed between D_e and C_s, with S_ij = 15.38%, then between D_e and x_rebar, with S_ij = 11.63%. Furthermore, significant interactions of third order (between three parameters) were also revealed by calculating ST_i − S_i − Σ_j S_ij, shown to be greater than 10% for D_e, C_s and C_crit.
Various bivariate graphs are shown below to highlight interactions. For the sake of visibility, 1000 points were plotted in Figs. 2 and 3, out of the 5000-sample size used for simulation. As the parameter D_e is found to be the most influential, with strong interactions with other parameters (Table 2), we investigated bivariate plots of D_e as a function of parameters such as C_s (Fig. 2), and C_crit and α (Fig. 3). For instance, the time t_crit (described by the size of points in Fig. 2) varies simultaneously according to D_e and C_s, which emphasizes the strong interaction between these two parameters. In addition, around the mean value C_s = 9.75 kg·m⁻³, D_e ranges from 0.3 × 10⁻¹² to 5 × 10⁻¹² m²·s⁻¹ (Fig. 2b) with various t_crit values. Note that C_s depends not only on the surrounding conditions but also on the concrete mix design, reflected by the chloride binding capacity of the binder, which in turn depends on its free chloride content and so on its diffusivity. This dependency between the parameters D_e and C_s was experimentally found by Othmen et al. [30] for total chloride profiles determined from 30 cores extracted at the same level above the sea from a reinforced concrete beam exposed to the Atlantic Ocean. The authors observed an increase of about 10 times in the diffusion coefficient for the mean chloride surface concentration (compared with about 16 times in Fig. 2b).
No interaction between D_e and C_crit, or between D_e and α (Table 2), is found, and the time t_crit varies only according to D_e, as shown in Fig. 3a and b. This trend is in accordance with the results obtained by Pradelle et al. on experimental data [70]. The weak effect of correlations between the chloride diffusion coefficient, the linear isotherm, and the critical chloride content was globally highlighted through a sensitivity analysis: introducing correlations in the study would not modify the relative influence of each input parameter.
Conditional probabilities of critical length of time given effective diffusion coefficient and surface chloride concentration
Given the influence of D_e and C_s pointed out by the analysis presented so far, the probabilistic analysis of t_crit with regard to the co-occurrence of D_e and C_s was investigated by applying equation (8) in the form

P( t_crit > t | D_e = d, C_s = c )

where d and c are predefined values of D_e and C_s, respectively. According to the methodology described in Sub-section 2.2.2, conditional probabilities of t_crit were calculated given the co-occurrence of both D_e and C_s, fixing the other parameters x_rebar, C_crit, p and α at their mean values. For instance, considering the case in which one hundred C_s values (generated according to their pdf) were associated with the single minimum value D_e = 0.1 × 10⁻¹² m²·s⁻¹, the probability of t_crit exceeding 15 years given D_e and each C_s was about 19-41% (Fig. 4a, c and Table 3). That probability logically decreased when C_s increased. Likewise, considering one hundred D_e values (generated according to their pdf) associated with the mean value C_s = 9.75 kg·m⁻³, the probability of t_crit exceeding 15 years lay between 0 and 38% (Fig. 4b, d and Table 3). Therefore, achieving an extension of t_crit beyond 15 years was more reliable with D_e = 0.1 × 10⁻¹² m²·s⁻¹ and various C_s than with C_s = 9.75 kg·m⁻³ and various D_e. In addition, the conditional probabilities of t_crit decreased more quickly as D_e increased than as C_s increased. Thus, the extension of t_crit was combined with a decreasing probability of its occurrence.
Comparisons of simulated scenarios
To investigate more deeply the time t_crit as a function of D_e, beyond the conditional probabilities calculated so far, three scenarios were simulated: a medium scenario in which D_e was set to its mean value, and favorable and unfavorable scenarios in which D_e was set to its minimum and maximum values, respectively. In the three scenarios, the technological parameters x_rebar, p and α were set at their mean values and the surrounding parameters C_s and C_crit were sampled according to their pdf. Probability distributions of the simulated critical length of time to corrosion initiation are plotted in Fig. 5, and Table 4 presents the mean and standard deviation of the simulated values of t_crit for the three scenarios. The comparison between the favorable and medium scenarios indicates that the corrosion initiation time can be extended by a factor of 12 through a decrease of D_e. However, this improvement is coupled with a large increase of the uncertainty affecting the reliability of the service life extension of t_crit in the favorable scenario. That is consistent with the conditional probabilities of t_crit previously shown in Fig. 4. Conversely, the comparison between the unfavorable and medium scenarios indicates that the corrosion initiation time is reduced by a factor of 3 by an increase of D_e, coupled with a reduction of the uncertainty. This means a better reliability of the average length of time to corrosion initiation in the unfavorable scenario. In addition, a scenario corresponding to an average value of t_crit close to 100 years (95 years in mean with a standard deviation of 60.6 years), associated with D_e = 0.25 × 10⁻¹² m²·s⁻¹, was presented as a reference for the service life behavior of structures for the sector's actors.
Concluding remarks
This study has pointed out the influential role of both parameters related to material properties and surrounding conditions, and their interactions, on the reliability of results from a chloride transport model. Sensitivity Analysis (SA) and conditional probabilities of length of time to corrosion initiation have been used in a probabilistic approach to investigate the corrosion risk due to chloride ingress into RC structures.
The SA has shown that the key input data for predicting the time to corrosion initiation are clearly the chloride diffusion coefficient, the surface chloride concentration and the position of the steel rebar. The only controllable parameters are the effective diffusion coefficient and the concrete cover thickness. It is crucial to define with enough accuracy the mean value and the coefficient of variation of the pdf of these input data. The chloride diffusion coefficient in particular is an action lever that enables relevant stakeholders to influence the service life behavior of structures, in comparison with the binding capacity and porosity of the concrete. That finally allows limiting the expensive and time-consuming experimental tests necessary to determine input data for predicting the service life of reinforced concrete structures in a marine environment. In further work, it would be interesting to take into account the time dependency of the diffusion coefficient.

An emphasis is also placed on the effect of the interaction between the parameters D_e and C_s. Acting on the controllable parameter D_e to extend the time at which the chloride concentration reaches a critical level requires considering possible variations in surrounding parameters, such as the chloride concentration at the concrete surface. These interactions emphasize the fact that the length of time to corrosion initiation results from complex phenomena involving the sea water and the concrete structure, and not only from the individual influence of the transport and chemical properties of concrete materials. SA methods that focus on the combined influences of groups (triplets, …) of input parameters should be investigated more deeply.
The results of the conditional probabilities of the length of time t_crit to corrosion initiation with regard to the influence of the co-occurrence of D_e and C_s have contributed to the reliability assessment of the extension of t_crit. The sector's actor faces the choice either of extending the service life of structures but with a lower reliability of t_crit's extension, or of accepting a lower quality of service life but with a higher reliability of t_crit's reduction.
Table 3. Variation ranges of the conditional probability of the critical length of time t_crit to corrosion initiation given (a) the co-occurrence of the effective diffusion coefficient D_e = 0.1 × 10⁻¹² m²·s⁻¹ and one hundred values of the surface chloride concentration C_s (kg·m⁻³), and (b) the co-occurrence of C_s = 9.75 kg·m⁻³ and one hundred values of D_e. (Columns: length of time t (year); variation range of the conditional probability of t_crit (%).)
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Simulation of melanoblast displacements reveals new features of developmental migration
ABSTRACT To distribute and establish the melanocyte lineage throughout the skin and other developing organs, melanoblasts undergo several rounds of proliferation, accompanied by migration through complex environments and differentiation. Melanoblast migration requires interaction with the extracellular matrix of the epidermal basement membrane and with surrounding keratinocytes in the developing skin. Migration has been characterized by measuring speed, trajectory and directionality of movement, but there are many unanswered questions about what motivates and defines melanoblast migration. Here, we have established a general mathematical model to simulate the movement of melanoblasts in the epidermis based on biological data, assumptions and hypotheses. Comparisons between experimental data and computer simulations reinforce some biological assumptions, and suggest new ideas for how melanoblasts and keratinocytes might influence each other during development. For example, it appears that melanoblasts instruct each other to allow a homogeneous distribution in the tissue and that keratinocytes may attract melanoblasts until one is stably attached to them. Our model reveals new features of how melanoblasts move and, in particular, suggests that melanoblasts leave a repulsive trail behind them as they move through the skin.
Potential functions
We assign to each Kc a "potential" Ψ(t) at time t, which can be interpreted as a factor density but which can also have the abstract mathematical meaning of the capacity of a Kc to attract Mb. For a free Kc, we assume that the time evolution of Ψ is modeled by a first-order differential equation (an increase at a given speed towards a maximum value for a free Kc, and a decrease at another speed for a bound Kc; the calibrated speeds are given below). These Kc potentials enable us to define a density of factors by a diffusion process of the factor. At each time step the Kc potential diffusion is obtained by repeating (up to three times) the following two-step process, implemented as sketched after this list:
- At each polygon vertex, a potential is calculated as the average of the potentials of the Kc to which the vertex belongs.
- The new Kc potential is the average of the potentials of all its vertices.
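A minimal sketch of this two-step averaging is given below, assuming a simple adjacency-dictionary representation of the polygonal mesh; the function and variable names are ours, not the authors'.

```python
# Minimal sketch (assumed data layout) of the vertex/cell averaging used to
# diffuse the keratinocyte (Kc) potential over the polygonal mesh.
# `cells_of_vertex` maps each vertex to the Kc polygons sharing it, and
# `vertices_of_cell` maps each Kc to its polygon vertices.
def diffuse_potential(psi, vertices_of_cell, cells_of_vertex, w=1.0, passes=3):
    psi0 = dict(psi)  # keep the initial potentials for the final blending
    for _ in range(passes):
        # Step 1: vertex potential = mean over the Kc sharing that vertex.
        vert_psi = {v: sum(psi[c] for c in cells) / len(cells)
                    for v, cells in cells_of_vertex.items()}
        # Step 2: new Kc potential = mean over its own vertices.
        psi = {c: sum(vert_psi[v] for v in verts) / len(verts)
               for c, verts in vertices_of_cell.items()}
    # Weighted average of initial and diffused potentials:
    # w = 0 gives no diffusion, w = 1 maximal diffusion.
    return {c: (1 - w) * psi0[c] + w * psi[c] for c in psi}
```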
Thus the factor is propagated up to three Kc away, after which a value for the potential is computed as a weighted average of the initial value and of the new one. If the weight is 0, there is no diffusion, and if it is 1, the diffusion is maximal. This density of factors enables the calculation of a potential of attraction at each node as an average of the potentials of the Kc lying around the node. The potential of repulsion between Mb, computed at a given node, is equal to 0 or 1 according to whether all nodes adjacent to it are occupied by another Mb or not. To model the additional assumption that a Mb can repulse another Mb for a short time after it has left a node, we consider more generally that, at a given time, a node that was occupied by a Mb k time steps earlier has a residual potential that decays geometrically with k (by a constant factor between 0 and 1). The potential of repulsion is then computed at a node as the minimum of the potentials of all nodes adjacent to it (with the possibility of considering several layers of adjacent nodes). Some of the experimental sets of trajectories we want to simulate exhibit a clear anisotropy of the trajectory directions without a specific orientation along this direction, possibly due to some experimental and non-biological cause. To take this anisotropy into account, a term is added to the potential in order to favour one direction. This term is proportional to the absolute value of cos θ, where θ is the angle between the displacement direction and the favoured direction.
Melanoblast velocity
The mean speed of a Mb is the trajectory length divided by the time duration, and in a simulation only the number of time steps is known, not the duration of a time step. Thus, we determine the time step duration so that the average of all Mb mean speeds is equal to a given experimental value. Let us remark that if the number of time steps is large enough, all of the trajectory lengths will be almost equal to the number of time steps multiplied by the mean value of the Kc polygon side lengths. In this case, the distribution of lengths is very narrow and only depends on geometric properties (e.g. the Kc polygonal shapes). However, the trajectory lengths must also depend on biological features of the Mb. The classical way to correct this model drawback is to allow a Mb to sometimes be stationary. That is why, in order to obtain a realistic distribution of Mb mean speeds, the following process is added: each Mb is assigned a probability of moving, with these probabilities uniformly distributed between a minimal value and 1. At each time step and for each Mb, a uniformly distributed random number is drawn between 0 and 1 and the Mb moves only if this number is below its moving probability. Hence the minimal probability controls the standard deviation of the Mb mean velocities. This minimal value has been calibrated to about 0.3, but it can depend on the simulation when the aim is to match a specific experiment.
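The following minimal sketch illustrates this movement rule, with the convention that a melanoblast moves when the random draw falls below its moving probability; names such as p_min are ours, and the population sizes follow the values quoted later in this section.

```python
# Minimal sketch of the stochastic movement rule: each melanoblast i is
# assigned a moving probability p_i drawn uniformly in [p_min, 1], and at
# each time step it moves only if a fresh uniform draw falls below p_i.
import numpy as np

rng = np.random.default_rng(1)
n_mb, n_steps, p_min = 220, 500, 0.3
p_move = rng.uniform(p_min, 1.0, size=n_mb)          # one probability per Mb

moved = rng.uniform(size=(n_steps, n_mb)) < p_move   # which Mb moves at each step
steps_taken = moved.sum(axis=0)                      # effective path length (in edges)
mean_edge = 10.0                                     # mean Kc edge length (micrometers)
path_lengths = steps_taken * mean_edge

# The coefficient of variation of the mean speeds (proportional to the
# path lengths for a fixed duration) is controlled by p_min.
print("CV of mean speeds:", path_lengths.std() / path_lengths.mean())
```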
Trajectory analysis tools
In this section we analyze a set of trajectories by writing them as complex linear combinations of a few basic curves. This is mathematically equivalent to performing a Principal Component Analysis (PCA) on the set of Mb displacement steps, but it is more intuitive here. Keeping in mind that a plane trajectory defined by N points defines a complex vector of dimension N, a set of trajectories is in fact a set of complex vectors of dimension N. The basic trajectories are built using a complex Singular Value Decomposition (SVD) of this set of vectors, a mathematical tool close to PCA, which can be outlined as follows: the first basic trajectory is the trajectory which, after a similitude (i.e. a rotation and a dilatation), best fits all the trajectories in the mean-square sense. After subtracting this first approximation from all trajectories, we obtain a second set of trajectories. The second basic curve is built from this second set of trajectories in the same way as the first one, and so on. Figure 4 shows the application of this analysis to the trajectories of experiment WT2: the first basic curve is close to a straight line and shows the direction of the trajectory (and its start-end length). The second basic trajectory shows a simple oscillation of the trajectory around this line (which can also be an away-and-return motion). The third basic trajectory shows a double oscillation, and so on (the basic trajectories of Figure 4A have been scaled by a weight proportional to their average contribution to fitting the trajectories). This analysis is similar to a Fourier series decomposition, but with basic functions that are best adapted to each set of trajectories. Actually, fewer than 7 basic trajectories are needed to represent all trajectories with an error of less than 2%. To sum up, for each set of trajectories an optimal set of a few basic trajectories is calculated such that each trajectory is a complex linear combination of these basic trajectories. Since multiplication by a complex number is a similitude (i.e. a rotation and a dilatation), each trajectory is a linear combination of trajectories similar to the basic ones. Figure 4 shows the rebuilding of a set of experimental trajectories taking into account 1, 2 or 3 basic functions. Now, to compare two sets of trajectories by the above analysis, basic trajectories are extracted from one set and both sets of trajectories (experimental and simulated) are expanded in this basis. Note that in order to make a meaningful comparison between two sets of trajectories, some preprocessing must be applied to the data. First, the time interval must be the same for the two sets; hence we reduce the longer one to the shorter one. Then, since a trajectory is in fact a set of points, i.e. a complex vector, all the trajectories must have the same number of points. That is why all trajectories are interpolated with 100 points, to prevent loss of precision.
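A minimal sketch of this decomposition on toy data is given below: trajectories are stored as complex row vectors and a complex SVD supplies the basic trajectories. The synthetic random walks merely stand in for interpolated experimental trajectories.

```python
# Minimal sketch of the trajectory decomposition: each planar trajectory of
# 100 interpolated points is stored as a complex vector (x + iy), and a
# complex SVD yields the "basic trajectories" (right singular vectors).
import numpy as np

rng = np.random.default_rng(2)
n_traj, n_pts = 50, 100
# Toy data standing in for interpolated experimental trajectories:
steps = rng.normal(size=(n_traj, n_pts)) + 1j * rng.normal(size=(n_traj, n_pts))
Z = np.cumsum(steps, axis=1)             # complex positions, one row per trajectory

U, s, Vh = np.linalg.svd(Z, full_matrices=False)
basis = Vh                                # rows = basic trajectories
coeffs = U * s                            # complex coefficients (similitudes)

k = 7                                     # fewer than 7 basic curves suffice above
Z_k = coeffs[:, :k] @ basis[:k]           # rank-k reconstruction of all trajectories
rel_err = np.linalg.norm(Z - Z_k) / np.linalg.norm(Z)
print(f"relative reconstruction error with {k} basic trajectories: {rel_err:.3f}")
```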
Crossing number assessment
The aim is to assess the number of crossings of two Mb trajectories or, to be more precise, the number of intersections of the two curves defined by the trajectories. Two irregular curves can have many intersections, without a clear meaning to their number, due to the lack of precision of experimental data and calculations. That is why we consider it more meaningful to count the number of pairs of curves that have any intersection. In short, we do not count all crossings between two trajectories, but only those occurring during the memory-effect times, and we count at most one crossing between two trajectories, in order to avoid two neighbouring trajectories producing a large number of crossings. The following method is used: the domain of study (a square) is divided into small squares and we count the number of pairs of curves that go through a common small square, but only once. Notice that this method does not distinguish genuine crossings from one-sided contacts.
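A minimal sketch of this pair-counting scheme is shown below; the grid size, domain and toy trajectories are illustrative, and the memory-effect time window mentioned above is omitted for brevity.

```python
# Minimal sketch of the crossing count: the square domain is divided into
# small cells, and a pair of trajectories is counted at most once if both
# pass through at least one common cell.
import numpy as np
from itertools import combinations

def cells_visited(traj, domain=200.0, n_cells=50):
    """Set of (row, col) grid cells visited by a trajectory (array of x, y)."""
    idx = np.clip((traj / domain * n_cells).astype(int), 0, n_cells - 1)
    return set(map(tuple, idx))

def count_crossing_pairs(trajectories):
    visited = [cells_visited(t) for t in trajectories]
    # A pair counts once if its two sets of visited cells intersect.
    return sum(1 for a, b in combinations(visited, 2) if a & b)

# Toy usage with five random walks in a 200 x 200 domain:
rng = np.random.default_rng(3)
trajs = [100.0 + np.cumsum(rng.normal(0.0, 2.0, size=(80, 2)), axis=0)
         for _ in range(5)]
print(count_crossing_pairs(trajs), "pairs of trajectories share a cell")
```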
Justification of method choices and parameter assessment
The geometric parameters (quasi-random polygon centers, the ratio of randomness, and the minimal edge length) are chosen to mimic the biological data, here mainly the Kc polygonal shapes (see Figure 2A). The number of Mb and the density of Kc are the same as in the experiments (about 220 Mb and 4,000 Kc, with an average edge size of 10 µm).
The main choices are in the definition of the Kc potential:
- The maximum of the Kc potential depends on the Kc polygon size, but after some tests we chose to leave it invariable and, since its value has no meaningful effect on the trajectories, it is fixed to 1.
- In order to get a good match between experiment and simulation, the Kc attracting potential must combine suitable spatial and time variations: both are obtained with a quickly decreasing potential for bound Kc and a suitable diffusion of the potential. With other choices the potential is either too flat, leading to spatial concentrations of Mb, or too irregular, leading to overly random-like trajectories. The speed of increase of a free Kc potential, the speed of decrease of a bound Kc potential and the speed of diffusion of the factors associated with these potentials are strongly correlated with these qualitative properties and, after some qualitative and quantitative tests, we calibrated these speeds to 0.2, 10 and 2, respectively, with a coefficient weight for the diffusion equal to 1, in order to get a noticeable and spatially consistent potential variation after a few time steps.
- Note that the attractive and repulsive potentials are of the same order of magnitude; hence the ratio between the Kc potential and the Mb repulsion potential is of the order of magnitude 1. Smaller or larger values lead to neglecting the effect of one of the potentials. The results are not very sensitive to the exact value. The potential term favouring a direction has a small coefficient, between 0 and 0.15, depending on the experiment to be matched.

Table S1. Values and parameters used in the mathematical model
The effect of extracorporeal shock wave therapy on large neurogenic heterotopic ossification in a patient with pontine hemorrhage: A case report and literature review
Rationale: Heterotopic ossification (HO), an ectopic bone formation in soft tissue around the joint, is a complication observed in stroke patients. HO around the hip joint causes a reduction in the functional ability of patients by generating pain and limiting range of motion (ROM). In addition, it results in impaired mobility, ultimately affecting quality of life and increasing the mortality of patients. Extracorporeal shock wave therapy (ESWT) has demonstrated efficacy in treating soft tissue inflammation and has been used to reduce patients’ pain in HO. However, almost none of the studies reported degradation in the size of HO on images obtained before and after ESWT application. Patient concerns and Diagnosis: We report a case of a 36-year-old man who developed HO around both hip joints 3 months after bilateral pontine hemorrhage. Interventions: Seven months after HO development, ESWT was administered to the area of HO every other day for a total of 10 sessions. Outcomes: Immediately following treatment, the ROM of both hip joints increased. Thus the patient was able to maintain a sitting posture without having to be bound to the wheelchair. In addition, the tolerable sitting time before groaning increased from less than ten minutes to almost 60 minutes by the end of all ESWT sessions. Unlike other previous reports, a diminished HO size was confirmed by comparing plain X-rays and bone scans obtained before and after treatment sessions. Lessons: In this case, we report an objective size reduction in HO in radiologic findings after applying ESWT to both hips. ESWT is a safe, easy-to-apply, and noninvasive modality. We would like to emphasize the use of ESWT as a treatment option for HO to decrease the extent of HO, as well as to improve pain, spasticity and function in patients with stroke.
Introduction
Heterotopic ossification (HO) is characterized by the progressive formation of pathological ectopic bone in soft tissues around the joint. [1] HO associated with disease or injury of the central nervous system can be classified as neurogenic HO (NHO). [2] NHO was reported in 10% to 53% of patients after neurologic injury, mainly in patients with traumatic brain injury or spinal cord injury. [3] The prevalence of HO in stroke patients is known to be 0.5% to 1.2%. [4] HO can occur in both the upper and lower extremities, and among sites, the most common is the hip joint of the paretic limb. [5] HO around the hip joint causes pain and reduces range of motion (ROM), resulting in impairment of mobility, ultimately reducing quality of life and increasing the mortality of patients. [6] Currently, effective and safe methods for treating HO are not clearly established; medications are prescribed mostly for prophylactic purposes, and surgical management burdens patients with a risk of infection or nerve damage. [1] Extracorporeal shock wave therapy (ESWT) is a generator of high-energy acoustic shockwaves, which allows for the initiation of microscopic environmental changes in the tissue where the pulse energy is propagated. It has proved effective in treating orthopedic disorders, such as plantar fasciitis or lateral epicondylitis. [7] Few studies have applied ESWT for the treatment of HO, and the results have indicated that ESWT was effective in reducing patients' pain or improving ROM and quality of life. [5,[8][9][10][11] However, almost none of the studies reported degradation in the size of HO on images obtained before and after ESWT application. Here, we present a case of a 36-year-old male patient with pontine hemorrhage who had severe neurogenic HO on both hip joints. We report an objective size reduction in HO on radiologic findings after applying ESWT to both hips.
Case report
A 36-year-old man with underlying hypertension became unconscious and quadriplegic when he suffered a spontaneous bilateral pontine hemorrhage. He was immediately admitted to the intensive care unit, where he underwent tracheostomy and received ventilator care for 1 month. Two months later, his level of consciousness returned to an alert state; however, his cognition was still poor, and he was not able to obey commands beyond 1 step. During the intensive care unit period, he was bedridden and could not undergo rehabilitation treatment; as a result, contractures and limitation of range of motion (ROM) developed in multiple joints. Eventually, he was transferred to the general ward without significant improvement in mobility. Three months after stroke onset, computed tomography (CT) and whole-body bone scans were obtained and revealed abundant HO around both hip joints. CT and bone scans revealed that HO was present from the lateral border of the iliac bone all the way down to the proximal portion of the femur and involved ossifying myositis in the vastus muscles. At this point, the patient began receiving rehabilitation treatment, such as tilt table and ROM exercises.
Physical and neurological examinations
It was approximately 10 months after the onset of pontine hemorrhage and 7 months after the first discovery of HO when the patient was admitted to our institution. He displayed circadian rhythms and stayed alert during the day. Although the Mini-Mental-State-Examination score was not accessible, the patient responded to sound by opening and closing the eyes from time to time and made a moaning and groaning sound in response to painful stimuli, scoring 9 on the revised Coma Recovery Scale. Persistent decerebrate posture was seen in all 4 limbs, and the muscle power of all extremities displayed a Medical Research Council grade of 0-1 (zero to trace). Spasticity was measured to be worse than grade 4 on the Modified Ashworth Scale in both hip flexion and extension and grade 2 to 3 in knee and elbow flexion. Deep tendon reflexes were brisk, and ankle clonus was present in the right leg. Hard and firm HO could be palpated on the lateral side of both hip joints. Because of the HO on the hip joints, the patient could not use a wheelchair in a natural, proper position. He needed a reclining wheelchair in a fully reclined position with leg support. The patient moaned repeatedly when he remained stationary in the wheelchair for more than several minutes. His mother, who is his caregiver, claimed that the patient responded in that way when he felt pain. This response was the same as when we applied a noxious stimulus, such as pressing the nailbed hard or scratching the sternum, for motor function evaluation. To improve the patient's sitting posture and pain, we decided to apply ESWT to the HO around his hip joints.
Intervention
ESWT (MP200, Storz Medical Masterplus®, Tagerwilen, Switzerland) treatment was conducted by a single physiatrist throughout the treatment period. The patient was placed in the supine position. One session of ESWT involved 2000 shocks delivered at a rate of 10 Hz with an energy of 1.2 bar (1 bar = 0.1 MPa = 0.1 N/mm²). The treatment was performed on both hips every other day, for a total of 10 sessions. Each ESWT session was performed at the same time of day. During the ESWT treatment, there was no change in medication. Additionally, the amount and type of physical therapy (PT) and occupational therapy (OT) before and during the ESWT sessions were the same. According to the South Korean health insurance standard, PT was performed 2 times per day for 1 hour in total. The first 30 minutes were assigned mainly to the tilt table, and the latter 30 minutes to ROM exercises provided by a therapist.
During the intervention period, ESWT was administered prior to administration of physical therapy. The passive ROM of both hips was measured immediately after each ESWT session using a standard goniometer, with the patient in either the supine or the decubitus position. The amount of pain sensation was indirectly evaluated by counting the tolerable wheelchair-sitting time before the patient started groaning and moaning loudly. In addition, the serum alkaline phosphatase level, a bone formation marker, was also measured.
Changes in radiologic findings and the patient's function
Plain hip X-rays and bone scans were obtained before and after the intervention (Fig. 1). To evaluate the size of the HO in a 2-dimensional manner on a plain hip X-ray, the contour of the ectopic bone was drawn to measure the estimated area of HO. At the beginning of ESWT, the estimated area of HO was approximately 1740.84 mm² on the right side and 21,182.94 mm² on the left side. When all of the sessions of ESWT were completed, the area of HO on the right side was reduced to 15,062.10 mm², and the area of HO on the left side changed to 18,932.80 mm², indicating that the size of the HO had decreased (Fig. 1A). Additionally, when comparing the bone scans obtained at the completion of all treatments with previous images obtained 1 month prior to the intervention, the area with active metabolite uptake around the hip joints, measured by tracing the contour of the lesion, was revealed to have decreased from 1824 mm² to 1254 mm² on the right side and from 2566 mm² to 2107 mm² on the left side (Fig. 1B). The serum alkaline phosphatase level, which was 234 IU/L before treatment, decreased to 177 IU/L after all sessions. After the intervention, the patient's spasticity on hip flexion and extension slightly improved from MAS grade 4 to 3. Additionally, bilateral hip ROM measured with a goniometer showed gradual improvement as the ESWT sessions continued (Fig. 2). As the ROM increased, the patient was able to maintain a sitting posture without having to be bound to the wheelchair (Figs. 3 and 4). In addition, the tolerable sitting time before groaning increased from less than ten minutes to almost 60 minutes by the end of all ESWT sessions (Figs. 3 and 4). No adverse effects associated with ESWT, such as pain or skin lesions, were found during or after the intervention. The patient's guardian (his mother) provided written informed consent for the inclusion of the patient's clinical and imaging details in the manuscript for the purpose of publication. The case study was approved by the institutional review board of our hospital.
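For reference, a 2-D area estimate from a traced contour can be reproduced with the shoelace formula; the sketch below (Python) is illustrative only, since the report does not state which software performed the measurement, and the example coordinates are hypothetical:

def contour_area_mm2(points):
    # Shoelace formula: area of a closed polygon given as (x, y) vertices in mm.
    s = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# e.g. a hypothetical traced HO contour (a 150 mm x 120 mm rectangle):
print(contour_area_mm2([(0, 0), (150, 0), (150, 120), (0, 120)]))  # 18000.0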
Discussion
HO is a localized and progressive formation of pathological ectopic bone mainly located in the soft tissue around joints. [5] HO around the hip joint causes pain and reduces ROM, resulting in impaired mobility, ultimately reducing the quality of life and increasing the mortality of patients. [6] In this case, we reported a male patient who showed an objective reduction in the extent of HO in radiologic findings, as well as functional improvement, after ESWT application on both hips. This finding is significant and suggests that, by applying ESWT to HO, one can expect a reduction in the extent of HO, improvement of ROM, and a better-adjusted sitting position.
HO: mechanisms, classification system, and treatment
Although the mechanism of HO formation is not clearly known, the previous literature suggests that multipotent cells in the local tissue constitute a cellular origin of HO. [1] When an inciting event occurs, bone morphogenic protein initiates the secretion of neuroinflammatory factors, such as substance P and calcitonin gene-related peptide, from the sensory nerve. [12] These inflammatory factors stimulate immune cells, such as mast cells, platelets, and neutrophils, and when mast cells degranulate, they secrete various proteases and matrix metalloproteinases. In turn, these secreted agents stimulate the peripheral nerves to change the activity of the osteoblasts, chondrocytes, and brown adipose tissue that coordinate bone formation in ectopic locations, as well as the creation of new vessels and nerves around the newly formed bone. [13] The classification systems for HO have varied. Brooker et al classified HO by severity, from the lowest class (class 1), in which an island of bone occurs in the soft tissue around the hip, to the highest class (class 4), involving ankylosis of the hip. [14] The Della Valle classification was simplified into 3 classes. [15] Later, Schmidt et al suggested a more practical classification based on the surgical approach, in which the region and extent of HO are classified separately. [16] In our case, the patient was categorized as class 4 (ankylosis of the hip) in the Brooker classification and class 3 in the Della Valle classification (spurs leaving < 1 cm between opposing surfaces or bony ankylosis), the highest levels in each classification system. In the Schmidt classification, Region II (HO below and above the tip of the greater trochanter) and Grade C (ankylosis by means of firm bridging from the femur to the pelvis) best describe the patient's HO.
Currently, methods for treating HO include NSAIDs, bisphosphonates, or radiation, but these methods are only prophylactic and are ineffective when HO has already formed. Physical therapy might improve the ROM and functional ability of patients, but there is controversy regarding whether HO can be aggravated when physical therapy is applied in the acute stage. Operative intervention is an effective method of removing HO all at once, but it carries a risk of infection or nerve injury. [1] Accordingly, the need for other methods to treat HO is increasing, and previous studies have applied ESWT for the treatment of HO.
Mechanisms of ESWT for the treatment of HO
Extracorporeal shock wave therapy (ESWT) induces microscopic interstitial and extracellular responses in the tissue by generating high-energy acoustic shockwaves and concentrating maximal beneficial pulse energy in the target area. [7] Previous studies (Table 1) have indicated that ESWT was effective in reducing patients' pain and improving ROM and quality of life in HO. In previous studies, the ESWT protocols varied according to the cause, site and severity of HO. Among cases, the most reported cause of HO was NHO, and the most common site was the hip. The applied ESWT protocols also varied greatly, with 3000 to 4000 shocks applied at frequencies ranging from 3 to 12 Hz and intensities distributed from 1 to 5 bars. [5,[8][9][10][11][17][18][19] In this case, the ESWT settings were established by referring to the frequency and intensity used in the previous study by Li et al, in which there was an actual change in the size of HO on radiological findings. In the study of Li et al, 4000 shocks were applied to the unilateral hip with HO at 8 Hz and an energy of 1 bar. In this study, 2000 shocks were applied to each hip so that a total of 4000 shocks were applied; the frequency and energy intensity were slightly modified to 10 Hz and 1.2 bar, respectively. The mechanisms of ESWT for musculoskeletal pathologies have been well documented. According to a recently updated review, ESWT promotes the activation of molecular and immunological reactions, improving blood microcirculation, stimulating angiogenesis and increasing neovascularization, activating the anti-inflammatory reaction, and suppressing leukocyte infiltration. [20] In this study, the pain reduction effect of ESWT could only be indirectly assessed by observing diminished moaning and groaning responses and elongated tolerable sitting. However, in previous studies, in which the severity of pain was assessed by the visual analog scale (VAS), one case presented a maximum decrease in the VAS score from 9 to 0. In another study involving 26 patients, the mean VAS score of HO patients decreased from 4.32 to 1.14 after ESWT. Possible mechanisms for the pain relief provided by ESWT treatment have been discussed. Shock waves could stimulate nociceptors to send high-frequency nerve impulses, as in hyperstimulation; thus, the propagation of nerve impulses is blocked according to gate-control theory. [21] ESWT also changes the chemical environment of the cell membrane by generating free radicals, in turn producing pain-inhibiting chemicals in the vicinity of the cells. Another possible mechanism is that ESWT inhibits chronic pain by interfering with the neural circuit that promotes chronic pain, reorganizing pathologic memory traces. [21] Although the mechanisms of ESWT action on spasticity due to central nervous system injury are still unknown, various mechanisms have been proposed. Low-energy ESWT enhances the neuroprotective effect of vascular endothelial growth factor and improves neurological function. [20] ESWT also stimulates the activity of macrophages and Schwann cells, which contribute to the survival and regeneration of neurons. [20] Since the efficacy of ESWT on spasticity is achieved through the reduction of motor neuron excitability, it is expected that application to myotendinous junctions, where the Golgi tendon organ resides, will provide the best outcome. [22] However, in a previous study in which 151 patients were divided into 2 groups, 1 group with ESWT application to the belly muscle and the other to the myotendinous junction, the MAS and Modified Tardieu Scale scores of both groups decreased after ESWT application without a significant difference. [23] Because the patient in our study had large HO on both hips, there was no remarkable change in spasticity.
The effect of ESWT in our case and possible mechanisms for the size reduction of HO
In this case, although the degree of pain could not be measured using a standard scale due to cognitive impairment, the patient's tolerance of sitting gradually increased after ESWT treatment and was accompanied by improvement in the passive hip flexion angles. The increase in the patient's tolerance of sitting after ESWT was a meaningful improvement for the patient. Earlier, he would start moaning loudly after only a few minutes and could not receive effective physical therapy or other treatment. Since his tolerance of sitting increased to 1 hour, both the patient's and the caregiver's quality of life improved, as the patient stopped moaning between various tests or physical therapy exercises while waiting and sitting in a wheelchair. Previous studies have also reported positive behavioral and functional effects, along with reductions in pain and spasticity accompanied by microscopic changes, after ESWT on both hip joints. [5] In addition to reduced pain and improvement in function, the patient showed a marked size reduction in the extent of HO after ESWT on X-ray and bone scans, confirmed by a radiologist and a nuclear medicine physician. For HO to be degraded without surgical removal, it is speculated that changes such as a reduction in angiogenesis and calcium production, fragmentation of calcified deposits by shockwave pressure, and absorption of ectopic bone into the surrounding tissue by phagocytosis should occur. [19,24] Previous reports have scarcely mentioned radiologic changes in HO following ESWT. There was only one article depicting a meaningful radiological improvement due to ESWT, but the treatment period was almost an entire year. [19] A major underlying condition that the article and our case have in common is the severity of the patient's HO, which was graded as Class IV by the Brooker classification.
HO stage and site also affect the outcome of ESWT treatment. Three-phase bone scans are the most sensitive method for the early detection of HO, since HO appears on bone scans 1 to 4 weeks before it can be detected on plain X-rays. The intensity of metabolic activity in the bone scan peaks within a few months after HO formation and decreases after 6 to 12 months, but the activity is maintained in an elevated state as the HO matures. [25] Comparing the 2 bone scan images before and after ESWT application, a clear decrease in the metabolic activity and extent of HO was observed (Fig. 1). The difference in the time points at which the 2 images were obtained might have had an effect, but this effect alone is insufficient to explain this large area of reduction. It is inferred that the anti-inflammatory effect of ESWT also contributed to the changes in the bone scans. When comparing the extent by roughly drawing the contour of the area with uptake on the bone scan image, the reduction in the size of the HO is meaningful since it corresponds to that on plain X-ray (Fig. 1). Additionally, the reason for the reduction in the contour of the mass after applying ESWT in this case might be that our patient had massive HO located close to the surface of the skin; thus, the effect of ESWT would have been greater. Since the large size of the HO and its location close to the surface seem to have played a role in the reduction in HO size, it is necessary to further investigate which classifications of HO are indications for ESWT.
Limitations
There are a few limitations of this study. Foremost, this study is a preliminary case study, and due to the patient's poor mental status, we could not evaluate pain using a standard scale. Because the patient was in a bedridden state, dramatic functional improvement was not observed. Regarding the radiologic reduction in HO size, although the opinions of 2 radiologists were obtained, the use of reformatted CT images or artificial intelligence calculations of HO volume would have been a more objective and definite method. Additionally, the bone scan was not as active after the treatment session, and we cannot conclude that the improvement was only related to ESWT since we did not have a control. The expectation is that the activity would decrease over time, and the finding might have been a coincidence. In future studies, it will be necessary to prepare for objective volume measurements and enroll a control group for bone scanning.
Conclusion
In this case, we reported an objective size reduction in HO on radiologic findings after applying ESWT to both hips. ESWT is a safe, easy-to-apply, and noninvasive modality, and we would emphasize its use as a treatment option for HO to decrease the extent of HO, as well as to improve pain, spasticity and function in patients with stroke.
|
v3-fos-license
|
2020-02-13T09:23:37.253Z
|
2020-02-07T00:00:00.000
|
211551276
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/ijae/2020/4905698.pdf",
"pdf_hash": "bd4de55ee39f89afef47d6c3dfcf54454603d54b",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43469",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "0302fad40cfdcae0fdf9c744b86d01142abe4bf9",
"year": 2020
}
|
pes2o/s2orc
|
Decoupling Attitude Control of a Hypersonic Glide Vehicle Based on a Nonlinear Extended State Observer
College of Engineering, China Agricultural University, Beijing 100083, China Synergistic Innovation Center of Jiangsu Modern Agricultural Equipment and Technology, Jiangsu University, Zhenjiang 212013, China College of Water Resources & Civil Engineering, China Agricultural University, Beijing 100083, China State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Introduction
Owing to its great space application value, the hypersonic vehicle has attracted attention all over the world. This kind of aircraft is extremely fast; moreover, its long flight in near space and the complexity of its flight trajectory make it difficult to control. At present, a feasible scheme is to use a hypersonic glide vehicle with a "boost+glide" motion (as shown in Figure 1). It is a kind of plane-symmetric hypersonic vehicle with a lifting-body configuration. This vehicle reenters at hypersonic speed and flies in near space (20-100 km) most of the time, at altitudes between those of aircraft and spacecraft. Such a vehicle usually adopts a large lift-to-drag-ratio wave-rider configuration, and its flight Mach number is greater than 5. As shown in Figure 2, it can be sent to a predetermined height by a launch vehicle, or released by a space-based platform. After entering the atmosphere, lift is generated by its own special aerodynamic configuration to skip glide, and it is able to fly thousands or even tens of thousands of kilometers by the "boost+glide" motion [1]. A typical representative of the hypersonic glide vehicle is the common aero vehicle (CAV) in the Falcon program jointly launched by the United States Department of Defense and the United States Air Force in 2003 [2]. The Apollo command module is basically a single-skip reentry vehicle, as is the Chinese Chang'e 5-T1. For novel vehicles like the Orion spacecraft, complicated multiskip reentry is proposed.
The special flight mode of the hypersonic glide vehicle places very high demands on the design of the control system. Long-range hypersonic glide vehicles need some lateral maneuverability, and adopting the skid-to-turn (STT) control mode presents certain difficulties. When the aircraft uses the STT control mode to achieve lateral maneuvering, it needs to move sideways with a certain sideslip angle; meanwhile, the heating and aerodynamic characteristics of the aircraft vary greatly, which complicates the overall design. The larger rudder deflection needed to match the turn further complicates the structural design and thermal protection design. If the bank-to-turn (BTT) control mode is adopted, the sideslip angle can be kept small, the complexity of designing the control surfaces and the thermal protection system is reduced, and the aerodynamic efficiency is improved. Therefore, during flight, the aircraft maneuvers laterally by bank-to-turn (BTT). In order to keep gliding in near space, such an aircraft needs to maintain a large angle of attack to provide adequate lift, while the sideslip angle should be kept as close to zero as possible to avoid the impact of local heating effects on the body. Moreover, in order to obtain a high lift-to-drag ratio to increase the flight range, the hypersonic glide vehicle is generally designed as a plane-symmetric lifting body, which enhances the coupling between the channels. In addition, the reentry flight speed, flight altitude, and flight range vary greatly; the flight parameters are strongly coupled and change with time. To sum up, the couplings induced by the above factors between the three channels of the aircraft are very serious, and the controlled plant (the aircraft model) is a multivariable, strongly coupled, time-varying system. In particular, the coupling between the lateral channels will result in a large sideslip angle, which is a serious threat to flight safety [3,4]. In order to guarantee accurate tracking of the guidance command, the flight control system must be able to overcome the adverse effects of channel coupling. Therefore, it is necessary to study decoupling control methods for Multiple-Input Multiple-Output (MIMO) systems such as hypersonic glide vehicles.
In engineering, the traditional gain scheduling method is usually used to design the flight control system, but it requires a lot of storage and calculation, and it is difficult to handle the influence of the coupling effect. In [5], the decoupling control method of the aircraft is designed according to variable structure control theory, but the high-frequency chattering caused by the frequent switching of the controller threatens the flight safety of the aircraft. In [6,7], the decoupling controller is designed using the nonlinear dynamic inversion method, but the robustness of the system cannot be guaranteed when the model is not accurately known. In [8,9], a robust decoupling control method for the linear model of the aircraft is given with good results, but it is not suitable for nonlinear systems such as hypersonic glide vehicles.
In [10][11][12][13], Tornambè extends decentralized control theory from the linear Single-Input Single-Output (SISO) system [10] and the linear MIMO system [11] to a class of nonlinear MIMO systems [12,13], which allows the subchannel controllers to be designed independently when only part of the nonlinear system dynamics is known. The channel coupling dynamics are obtained by the observer method and added as a compensation signal to the subchannel controller. Based on the decentralized robust control structure for a class of nonlinear MIMO systems whose stability is guaranteed [12,13], the motivation of this paper is to estimate the channel coupling dynamics using a nonlinear extended state observer (NESO) and finally achieve an independent three-channel controller design for a hypersonic glide vehicle.
The rest of this paper is organized as follows: a decoupling control algorithm based on the decentralized control theory of Tornambè and the NESO is proposed in Section 2. A nonlinear MIMO model and a decoupling attitude controller of the hypersonic glide vehicle are presented in Section 3. Attitude control simulations are verified on the hypersonic glide vehicle model in Section 4, and Section 5 concludes this work.
2. Decoupling Control Based on NESO
2.1. MIMO System Decentralized Control. It can be shown that the decentralized control law proposed by Tornambè, whose subcontrollers are separately designed to guarantee the stability of the system, can be locally applied to the class of square multi-input multi-output nonlinear systems described in state-space form by equations of the following type [12,13]:

ẋ = f(x) + Σ_{i=1}^{m} g_i(x) u_i = f(x) + G(x) u,  y_i = h_i(x), i = 1, ⋯, m,  (1)

where f, g_i, and h_i, i = 1, ⋯, m, are differentiable functions of their arguments: f, g_i ∈ C^p(R^n, R^n) and h_i ∈ C^p(R^n, R), with p > 0 a suitable integer, n the dimension of the state vector, y = (y_1, ⋯, y_m)^T, u = (u_1, ⋯, u_m)^T, G = (g_1, ⋯, g_m), and h = (h_1, ⋯, h_m)^T. Tornambè applied this separation-principle procedure to a highly nonlinear robotic manipulator model [12,13]. Thus, in this paper, a class of nonlinear MIMO systems is considered as follows:

ẋ = f(x) + g(x) u,  y = x.  (2)

Here, x ∈ R^n is the state variable of the system, u ∈ R^n is the control input of the system, y = x ∈ R^n is the output of the system, and f(x) and g(x) are continuously differentiable nonlinear functions. Due to uncertainty and the influence of unmodeled dynamics, the values of f(x) and g(x) cannot be accurately obtained. For a hypersonic glide vehicle, the elements of f(x) contain aerodynamic force and moment parameters, and the elements of g(x) contain the rudder parameters, which are highly uncertain.
The control objective is to make the real output of the system accurately track the expected value while ensuring the stability of the system:

lim_{t→∞} ‖x_des − x‖ = 0,  (3)

where x_des is the desired output of system (2). Define the output tracking error:

ε = x_des − x.  (4)

Based on the decentralized control idea of Tornambè in [12,13], the ith equation in the differential equations of system (2) is rewritten as follows:

ẋ_i = f_i(x) + Σ_{j=1}^{n} g_ij(x) u_j.  (5)

Define the generalized uncertainties of the system [12,13]:

Δ_i = f_i(x) + Σ_{j=1}^{n} g_ij(x) u_j − u_i.  (6)

Then, the MIMO system (2) can be expressed as follows:

ẋ_i = Δ_i + u_i.  (7)

In equation (7), the generalized uncertainty Δ_i not only contains the parameter uncertainty and unmodeled dynamics of the system but also the dynamic effect caused by the coupling terms.
After obtaining the estimated value Δ̂_i of Δ_i with the observer approach of the later Subsection 2.2, the control law for system (2) can be given as follows:

u_i = −Δ̂_i + ẋ_i,des + k_i ε_i,  (8)

where k_i is a feedback gain and ε_i = x_i,des − x_i represents the tracking error between the desired output x_i,des and the state x_i. By utilizing the control law given by equation (8), system (2) is converted into ε̇_i + k_i ε_i = 0. If k_i is chosen to be a positive value, the characteristic roots (eigenvalues) of system (2) are placed in the open left half-plane and hence the system is stable. The dynamic performance of system (2) can be shaped by tuning the value of k_i.
It can be seen from (8) that the control law can be computed without knowing the exact values of f(x) and g(x), so it is robust to parameter uncertainties and unmodeled dynamics. In addition, the system states and system inputs are in one-to-one correspondence, which shows that the designed control law is decentralized and decoupling.
2.2. Nonlinear Extended State Observer (NESO).
The nonlinear extended state observer (NESO) can be used to estimate the uncertainties of a nonlinear system. It can be seen from equation (2) that the ith equation in the MIMO system represents the ith channel of the system, for which the controller can be designed independently. Correspondingly, the ith NESO needs to be designed to give the estimated value of the generalized uncertainty.
Using the second-order NESO [14], the generalized uncertainty of the ith state equation is extended to a new state, and the following observer equations can be obtained:

e_1,i = z_1,i − x_i,
ż_1,i = z_2,i − β_1,i fal(e_1,i, a, δ) + u_i,  (9)
ż_2,i = −β_2,i fal(e_1,i, a, δ),

where

fal(e, a, δ) = |e|^a sign(e) if |e| > δ, and fal(e, a, δ) = e / δ^(1−a) if |e| ≤ δ,  (10)

and β_1,i, β_2,i, a, and δ are the NESO observer parameters to be determined. The extended state z_2,i is an estimate of Δ_i, and e_2,i = z_2,i − Δ_i is the estimation error between z_2,i and Δ_i.
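As a concrete reference, a per-channel implementation of equations (9)-(10) might look as follows (Python); the discretisation step and the default parameter values are our assumptions, and the fal form follows the standard active-disturbance-rejection literature:

import numpy as np

def fal(e, a, delta):
    # Nonsmooth fal function (10): linear inside |e| <= delta, power law outside.
    if abs(e) <= delta:
        return e / (delta ** (1.0 - a))
    return (abs(e) ** a) * np.sign(e)

class NESO2:
    # Second-order NESO (9) for one channel of x_i' = Delta_i + u_i (equation (7)).
    def __init__(self, beta1, beta2, a=0.5, delta=0.05):
        self.z1 = 0.0   # estimate of the channel state x_i
        self.z2 = 0.0   # extended state: estimate of the generalized uncertainty Delta_i
        self.beta1, self.beta2, self.a, self.delta = beta1, beta2, a, delta

    def step(self, x_i, u_i, dt):
        # One explicit Euler step of the observer dynamics.
        e1 = self.z1 - x_i
        self.z1 += dt * (self.z2 - self.beta1 * fal(e1, self.a, self.delta) + u_i)
        self.z2 += dt * (-self.beta2 * fal(e1, self.a, self.delta))
        return self.z2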
There is a sign function in the fal function (10) of NESO. The sign function uses switching and sliding-mode strategies to cope with the effects of system uncertainties, so the conventional high-frequency chattering phenomenon arising from the sliding-mode technique might occur in the system's observer and control signals. Thus, some scholars replace the nonsmooth fal function with more complex nonlinear smooth functions to overcome this drawback. For ease of engineering implementation, in this paper we mitigate the high-frequency chattering phenomenon by tuning the parameters and gains. However, the phenomenon is hard to eliminate completely since the NESO uses the sliding-mode technique to observe and compensate uncertainties. Actually, the steady-state error of the controller with the NESO will sometimes be a little larger than that of the controller without the NESO when the high-frequency chattering phenomenon occurs. However, the overshoot and settling time of the controller with the NESO are both superior to those of the controller without the NESO. The performance of the controller with the NESO is better than that of the controller without the NESO in general.
The convergence proof of NESO and the relevant parameters' tuning rules are given in Theorem 2.
Theorem 2 [14]. For the nonlinear system (2) and the designed NESO (9), assume that |Δ̇_i| is bounded. If the undetermined parameters satisfy 0.5β²_1,i > β_2,i > |Δ_i| and α = α*, then a δ exists such that the estimation error of the NESO is bounded; that is, |e_2,i| = |z_2,i − Δ_i| remains bounded.

2.3. Decoupling Control Based on NESO. Differing from references [12,14], the first innovation of this paper is presented in this section. Combining the estimation result of the NESO, the decoupling control law is rewritten as follows:

u_i = −z_2,i + ẋ_i,des + k_i ε_i.  (13)

The estimation result of the NESO is added to the control law as an estimate of the generalized uncertainty. Theorem 3 proves that the stability of the closed-loop system and the convergence of the tracking error can be guaranteed with the decoupling control law (13).
Theorem 3.
For the closed-loop system composed of the MIMO system (2) and NESO (9), if the control law (13) is adopted, then the tracking error of the closed-loop system is uniformly bounded.
Proof. Consider the following Lyapunov function:

V = (1/2) Σ_{i=1}^{n} ε_i².  (14)

Substituting the expression of the control law (13) into the error dynamics gives

V̇ = Σ_{i=1}^{n} ε_i ε̇_i = Σ_{i=1}^{n} (−k_i ε_i² + ε_i e_2,i),  (15)

where the extended state z_2,i is the estimate of the uncertainty Δ_i from the NESO equation (9) and e_2,i = z_2,i − Δ_i is the estimation error. Obviously, e_2,i has a great influence on the boundedness of the Lyapunov function; thus, Theorem 3 is proved with this Lyapunov function together with the NESO. According to the conclusion given by Theorem 2 (the convergence proof of the NESO), there exists a bound σ_i such that

|e_2,i| ≤ σ_i.  (16)

When the following condition holds:

|ε_i| > σ_i / k_i,  (17)

we have V̇ < 0, so the tracking error of the closed-loop system is uniformly bounded. Figure 3 is the control block diagram of the closed-loop system. The input signals of the decoupling controller are the MIMO system's tracking error and the NESO's estimate of the generalized uncertainties. The control law is designed subchannel-by-subchannel according to decentralized control theory. The input signals of the NESO are the control input and the system output of the MIMO system. It can be seen from the block diagram that the NESO-based decoupling control method combines the traditional subchannel design method and the observer method. On the basis of guaranteeing the decoupling control effect and the robustness of the system, traditional subchannel controller design experience can be reused, facilitating the engineering implementation.
Figure 3: Block diagram of a NESO-based decoupling controller.
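To make the closed loop of Figure 3 concrete, the sketch below applies the control law (13), together with the NESO2 class sketched in Subsection 2.2, to a toy two-channel coupled plant; the plant, gains, and references are invented for illustration and are not the vehicle model of Section 3:

import numpy as np

# Toy coupled plant: x1' = -x1 + 0.8*x2 + u1, x2' = 0.5*x1*x2 + u2.
# Each channel has the form x_i' = Delta_i + u_i, with Delta_i lumping the coupling.
dt, T = 0.001, 5.0
x = np.zeros(2)
obs = [NESO2(beta1=100.0, beta2=600.0) for _ in range(2)]
k = np.array([5.0, 5.0])         # feedback gains k_i
x_des = np.array([1.0, -0.5])    # constant desired outputs (so x_des' = 0)

for _ in range(int(T / dt)):
    eps = x_des - x              # tracking errors eps_i
    u = np.array([-obs[i].z2 + k[i] * eps[i] for i in range(2)])  # control law (13)
    x += dt * np.array([-x[0] + 0.8 * x[1] + u[0],
                        0.5 * x[0] * x[1] + u[1]])
    for i in range(2):
        obs[i].step(x[i], u[i], dt)   # per-channel NESO update

print(x)  # should end close to [1.0, -0.5], with a small NESO-dependent residual

Each channel's controller uses only its own tracking error and its own observer output, which is precisely the decentralized property claimed above.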
3. Decoupling Control of the Hypersonic Glide Vehicle
3.1. Modeling of the Hypersonic Glide Vehicle. Assuming that the hypersonic glide vehicle is a plane-symmetric rigid body and ignoring the Earth's rotation and the mass change of the aircraft, the following simplified model can be derived according to the principles of theoretical mechanics [15]. The centroid dynamics equation (18) is established in the track coordinate system; the five terms on its right side are the aerodynamic force Q, gravity G, the centrifugal force S_v caused by high-speed flight, the centrifugal force S_e caused by the Earth's rotation, and the projection of the Coriolis inertial force S_c caused by the Earth's rotation in the ballistic coordinate system. In hypersonic reentry flight, the effects of the centrifugal and Coriolis inertial forces caused by high-speed flight cannot be neglected compared with the external forces; in particular, the centrifugal force caused by high-speed flight may be very large.
The position of the aircraft is determined by three parameters: the geocentric distance r = R_E + H, the longitude λ, and the latitude ϕ; the navigation kinematics equations are represented in polar coordinates. The moment equations (19) are established in the body coordinate system, where M_x1, M_y1, M_z1 are the projections of the combined external moment acting on the aircraft in the body coordinate system and ω_x1, ω_y1, ω_z1 are the components of the absolute angular velocity of the aircraft in the body coordinate system. Hypothesis 1. Ignoring the influence of wind, the flight speed V is equal to the airspeed U and the speed coordinate system coincides with the airflow coordinate system; the angle of attack α is the angle between the projection of the flight speed on the longitudinal symmetry plane of the body and the longitudinal axis of the body, the sideslip angle β is the angle between the flight speed and the longitudinal symmetry plane of the airframe, and the bank angle γ_s is the roll angle of the aircraft around the flight-speed vector.
Hypothesis 2. The effects of the implicated inertial forces are ignored; this term is several orders of magnitude smaller than the other forces.
Hypothesis 3. The products of inertia are small and negligible, i.e., J_xy = J_yz = J_xz = 0, so J = diag{J_x, J_y, J_z}.
By simplifying (18) and (19), the kinematics and dynamics equations of the center of mass are obtained, where r is the distance of the aircraft from the center of the Earth, ϕ is the latitude, λ is the longitude, V is the magnitude of the flight speed, θ is the ballistic inclination angle, ψ_s is the ballistic declination angle (positive when the projection of the flight speed onto the horizontal plane lies to the left of north), g is the gravitational acceleration, D is the drag acceleration, L is the lift acceleration, Y is the side-force acceleration, ω_D is the Earth's rotation angular velocity, and γ_s is the bank angle.
According to the angular-velocity transfer relationship between the coordinate systems, the attitude kinematics hold with ṙ = V sin θ. In addition, since the ballistic inclination of the glide section is usually very small, cos θ ≈ 1 and sin θ ≈ 0; therefore, the nonlinear dynamics f_1(Ω, u) in (31) can be further simplified. Finally, the model of the hypersonic glide vehicle flight attitude control system is obtained as a cascade affine nonlinear system. Because the flight state variables have significant differences in time scale, singular perturbation theory is used to decompose the vehicle state variables into loops of different speeds. The control law is applied to the controlled plant, the hypersonic glide vehicle, by controlling the rudder surfaces. When the rudder surfaces deflect, the vehicle's angular rate ω = [ω_x1, ω_y1, ω_z1]^T changes first, and then the attitude angles Ω = [α, β, γ_s]^T change. Therefore, ω = [ω_x1, ω_y1, ω_z1]^T is called the fast state of the system and Ω = [α, β, γ_s]^T is called the slow state.
3.2. Coupling Analysis of the Hypersonic Glide Vehicle.
The hypersonic glide vehicle is a complex MIMO system.
The control inputs are the rudder, elevator, and aileron deflections of the yaw, pitch, and roll channels, and the outputs are the angle of attack, sideslip angle, and bank angle. With the traditional subchannel control design method, the pitch channel state variables are the angle of attack and the pitch rate, the yaw channel state variables are the sideslip angle and the yaw rate, and the roll channel state variables are the bank angle and the roll rate. Moreover, when designing the channel control laws, the coupling between the channels is not taken into account by the traditional subchannel control design method.
It can be seen from equations (37) to (39) that the coupling between the channels is reflected in the following three aspects [16]:
(1) kinematic coupling terms;
(2) coupling terms caused by the products of inertia between the roll and yaw channels;
(3) coupling terms caused by aerodynamic forces between the channels.
Due to the special flight mode of the hypersonic glide vehicle, that is, high-angle-of-attack flight and large bank-angle turnover, the values of the above coupling terms become very large, so the coupling of the aircraft between the three channels is very serious and beyond the control capability of the traditional subchannel method. Considering the strict requirements of the aircraft for guidance-command tracking and attitude stabilization, it is necessary to design an attitude decoupling controller to ensure the success of the flight mission.
3.3. Attitude Decoupling Control of the Hypersonic Glide Vehicle. The purpose of the hypersonic glide vehicle's attitude decoupling control is to track the airflow-angle command given by the guidance system under the premise of ensuring attitude stability, i.e., Ω → Ω_des.

Remark 4 (time-scale separation principle). The influence of the steering force generated by the deflection of the control surfaces of the hypersonic glide vehicle is much smaller than that of the steering torque, so the attitude dynamics of the hypersonic glide vehicle are divided into two ("fast and slow") loops for the design. Ω = [α, β, γ_s]^T is the state variable of the slow loop, and ω = [ω_x1, ω_y1, ω_z1]^T is the state variable of the fast loop. Then, singular perturbation theory can be used to design a fast-and-slow-loop controller with time-scale separation for the hypersonic glide vehicle [17,18]. The slow loop produces a slow-loop control output ω_des = [ω_x1,des, ω_y1,des, ω_z1,des]^T based on the desired guidance command Ω_des = [α_des, β_des, γ_s,des]^T, which is used as the desired value for the fast loop to design the desired commands of the control surfaces δ = [δ_a, δ_r, δ_e]^T. The structure of the fast-and-slow-loop flight control system is shown in Figure 4. Based on equation (39), by the coupling analysis and the time-scale separation principle of the hypersonic glide vehicle, the second innovation of this paper is that the dynamic equations of the vehicle can be written as the following affine nonlinear equations, to which the control law (13) derived in Section 2.3 can be directly applied:

Ω̇ = f_s(Ω) + g_s(Ω) ω,
ω̇ = f_f(ω) + g_f(Ω) δ,

where f_s(Ω) = f_1(Ω), g_s(Ω) = N(Ω), f_f(ω) = f_2(Ω, ω), and g_f(Ω) = B(Ω) cannot be accurately calculated due to parameter uncertainties. The uncertain parameters include aerodynamic parameters, vehicle body structure parameters, and environment parameters. The decoupling controllers are designed for the fast and slow loops channel by channel ("pitch, yaw, and roll"), respectively. There is a one-to-one relationship between the three airflow angles and the control deviations of the three channels, so the feedback controller can first be designed according to decentralized control theory without taking the coupling terms and uncertainties between the channels into account. Then, the nonlinear extended state observer is designed; the coupling terms and the uncertainties are treated as the generalized uncertainty, and its estimate from the observer is added as a compensation signal to the decoupling control law, which eliminates the influence of the coupling terms and the uncertainties. Figure 5 is the decoupling control block diagram of the hypersonic glide vehicle. The deflection angles δ_e, δ_r, and δ_a are the inputs of the vehicle's attitude dynamics model, corresponding to u_1, u_2, and u_3, respectively, in equation (7). Theorem 3 guarantees the convergence of the closed-loop system's tracking error.
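A schematic of the fast-and-slow-loop structure of Figure 4 is sketched below (Python); the single-gain per-channel form absorbs N(Ω) and B(Ω) into the generalized uncertainties estimated by the NESOs, which is a simplifying assumption, and all numerical values are placeholders:

import numpy as np

def slow_loop(Omega_des, Omega, z2_slow, k_s):
    # Slow loop: attitude angles Omega = [alpha, beta, gamma_s] -> desired body rates.
    eps = Omega_des - Omega
    return -z2_slow + k_s * eps   # omega_des = [wx1_des, wy1_des, wz1_des]

def fast_loop(omega_des, omega, z2_fast, k_f):
    # Fast loop: body rates omega -> control-surface commands [delta_a, delta_r, delta_e].
    eps = omega_des - omega
    return -z2_fast + k_f * eps

# One control cycle with placeholder values:
deg = np.pi / 180.0
Omega_des = np.array([10.0, 0.0, 30.0]) * deg   # commanded alpha, beta, gamma_s
Omega = np.array([8.0, 0.5, 25.0]) * deg
omega = np.zeros(3)
omega_des = slow_loop(Omega_des, Omega, z2_slow=np.zeros(3), k_s=2.0)
delta = fast_loop(omega_des, omega, z2_fast=np.zeros(3), k_f=10.0)
print(omega_des, delta)

In a full simulation, each loop would carry its own three NESOs (as in Subsection 2.2) whose z2 outputs replace the zero placeholders above.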
4. Simulation Results
In order to verify the effectiveness of the proposed method, the mathematical model of the hypersonic glide vehicle is simulated under the following conditions:
(1) the relevant parameters of the aircraft can be found in the literature [19];
(2) in the simulation, the aerodynamic parameters are offset by 30% and the moments of inertia are offset by 10% to verify the robustness of the control method.
The dynamic performances of the decoupling control based on the NESO are better than those of the traditional subchannel feedback control method. The maximum tracking error is about 2 degrees smaller, and the airflow angles converge faster to within the allowable error range. In the yaw channel, the traditional subchannel control method yields a relatively large sideslip angle, and the local thermal effect induced by this situation is a serious threat to the safe flight of the hypersonic glide vehicle. The method designed in this paper can limit the sideslip angle to less than 1 degree. As for the steady-state error, the steady-state error of the controller with the NESO will sometimes be a little larger than that of the controller without the NESO when the high-frequency chattering phenomenon occurs; the chattering phenomenon is hard to eliminate completely due to the characteristics of the sliding-mode technique used in the NESO, and Figures 6 and 7 also indicate this phenomenon. However, the overshoot and settling time of the controller with the NESO are both superior to those of the controller without the NESO. The performance of the controller with the NESO is better than that of the controller without the NESO in general.
5. Conclusions and Future Work
In this paper, a decoupling control method is proposed for a class of nonlinear MIMO systems based on the decentralized control theory of Tornambè and the NESO. According to the idea of decentralized control, the coupling dynamics and uncertainties are lumped into generalized uncertainties, and the NESO method is then used to provide an estimate that is added as a compensation signal to the decentralized control law. The proposed method is applied to the attitude control problem of a hypersonic glide vehicle. The theoretical derivation and simulation results verify the effectiveness of the proposed method, which is superior to the traditional subchannel control design method.
Future work will focus on how to design a new linear motion controller to form a closed-loop linear motion system; the stability of the linear motion dynamics will then be addressed to make the research work more convincing.
Data Availability
The data used to support the findings of this study are currently under embargo while the research findings are commercialized. Requests for data, 6 months after publication of this article, will be considered by the corresponding author.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding this work.
|
v3-fos-license
|
2018-04-03T00:22:48.045Z
|
2017-09-01T00:00:00.000
|
11370131
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.ajol.info/index.php/ahs/article/download/161247/150803",
"pdf_hash": "a83afaa86c2c2d0b630d52ce210d3af619398019",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43473",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "a83afaa86c2c2d0b630d52ce210d3af619398019",
"year": 2017
}
|
pes2o/s2orc
|
HIV counseling and testing practices among clients presenting at a market HIV clinic in Kampala, Uganda: a cross-sectional study
Background Uptake of HIV counseling and testing (HCT) among informal sector workers is not well documented. Objective To assess HCT practices among clients presenting for HIV services at a market HIV clinic in Kampala, Uganda. Methods Between August 1 and September 15, 2009, clients presenting for HIV services at a market HIV clinic were invited to participate in the study. Socio-demographic and HCT data were collected from consenting adults aged 16+ years. Descriptive statistics were performed using STATA version 14.1. Results Of 224 individuals who consented to the interview, n=139 (62%) were market vendors while n=85 (38%) were engaged in other market-related activities. The majority of the respondents, n=165 (73.7%), had ever tested for HIV; of these, n=148 (89.7%) had tested 2+ times. The main reasons for repeat testing were the need to confirm previous HIV test results, n=126 (85.1%), and the belief that the previous HIV test results were false, n=35 (23.6%). Uptake of couples' HCT was low, n=63 (38.2%), despite the fact that n=200 (89%) had ever heard of couples' HCT. Conclusion These findings indicate high rates of repeat testing but low rates of couples' HCT uptake in this population.
Introduction
HIV counseling and testing (HCT) has been promoted on the premise that it provides an entry point into HIV prevention, treatment and care services. Expanding the availability and use of HCT services is a critical step towards the attainment of the UNAIDS global targets dubbed 90-90-90; i.e. 90% of people living with HIV are diagnosed; 90% of those who have been diagnosed with HIV are linked to HIV care; and 90% of those in HIV care have achieved viral suppression 1 . HCT presents an opportunity to share information with clients and promote measures to reduce the risk of HIV infection and transmission 2 . In 1999, Weinhardt et al. 3 concluded that HCT can reduce sexual risk-taking behaviors among HIV-positive individuals and HIV-discordant couples. In 2003, Allen et al. 4 found that condom use among HIV discordant couples in Zambia increased from <5% before HCT to over 80% following HCT. Eleven years later, in 2014, a community-based study conducted in 34 communities in Africa and 14 in Thailand found that HCT can reduce HIV incidence by up to 14% 5 . Collectively, these studies demonstrate that HCT has got numerous risk-reduction benefits.
Despite these benefits, up to 40% of people living with HIV are not aware of their HIV status, with the highest proportion reported in sub-Saharan Africa 6,7 . A recent UNAIDS report suggests that up to 5.3 million people living with HIV in Eastern and Southern Africa are not aware of their HIV sero-positive status 8 , highlighting a need for interventions to improve HIV diagnosis in this region of the world. Recent guidelines from WHO suggest a need to target couples and male partners for HIV testing as one of the priority populations 9 , highlighting a need for intensified promotion of couples' HIV counseling and testing (couples' HCT). Couples' HCT provides an opportunity for mutual awareness of HIV status - an important ingredient for couple-based HIV prevention programming - and offers multiple benefits related to timely linkage of couples to appropriate HIV prevention, treatment and care practices 2 . However, uptake of couples' HCT in the general population remains generally low 2,[9][10][11] . The proportion of those who are aware of their own or each other's HIV status becomes even lower among those working in the informal sector, including market vendors and other people engaged in market-related activities. This is because such populations are usually considered highly mobile and hard-to-reach, and hence are less targeted by conventional HIV counseling and testing promotion programs 12 . Indeed, a recent study among female market vendors in Kenya found that 11.5% had migrated (changed residence over a county or national boundary) in the past year and 39.3% in the past five years. Over one-third (38.3%) spent nights away from their main residence in the past month, with 11.4% spending more than a week away 13 . At the time of the study, there were virtually no data available on the HCT practices of people working in the market sub-sector, including market vendors and other people engaged in other market-related activities in Uganda.
The objective of this study was to assess HCT practices among clients presenting for HIV services at a market HIV clinic operated by the Market Vendors AIDS Project (MAVAP) in Kampala, the capital city of Uganda. This study was not intended to assess the effect of MAVAP activities on HCT uptake but to assess and characterize the HCT practices among people accessing HIV services at the MAVAP clinic in order to inform future HCT interventions.
Study design
This was a cross-sectional study conducted among 224 individuals aged 16 years or older, presenting for HCT services at St Balikuddembe market HIV clinic in Kampala, Uganda, between August 1 and September 15, 2009.
The study was implemented as part of a large program evaluation of the MAVAP project activities.
Setting
The Market Vendors AIDS Project (MAVAP) is one of the projects implemented under the umbrella of Development Initiatives International, a registered not-for-profit non-government organization in Kampala, Uganda. The project, which started in 2004, operates in fifteen markets: fourteen in Kampala and one in Mukono district. MAVAP operates an HIV clinic that is located within St Balikuddembe market, one of the fourteen markets in Kampala, Uganda. The MAVAP HIV clinic offers HIV/AIDS education, sexually transmitted infections testing and treatment, HCT services, treatment of opportunistic infections, and referral for anti-retroviral therapy among HIV-positive clients, among other services. MAVAP also conducts outreaches to several other market work places on designated "MAVAP days", where services similar to those provided at the HIV clinic are provided to interested individuals. MAVAP serves an estimated market population of 112,700 market operators and 300,000 customers who visit the markets daily. The project receives technical support from an advisory committee that comprises representatives from the Ministry of Health, Kampala Capital City Authority, The AIDS Support Organization (TASO), Uganda Cares and the United Market Vendors Association.
Study population
The study population was composed of individuals who accessed HIV services at the MAVAP clinic between August 1 and September 15, 2009. Individuals were included in the study if they were 16 years or older and were either involved in market vending or any other market-related activities within St. Balikuddembe market. Prior engagement in a MAVAP-related activity was not a requirement for enrolment into the study. Individuals who did not meet these criteria were excluded from the study.
Sample size determination
Using the Kish-Leslie formula, with a 5% level of precision, a standard critical value of 1.96 (representing 95% confidence), and assuming that 40% of those accessing HIV services at the market-based HIV clinic had ever tested for HIV, we obtained a sample size of 246. After accounting for an estimated 20% non-response 14 , a sample of 307 respondents was obtained.
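For reference, the Kish-Leslie expression for a single proportion and the non-response inflation step, as we read the stated procedure (the base figure of 246 is taken from the text as given), are

n_0 = Z^2 p (1 - p) / e^2,   n = n_0 / (1 - r) = 246 / (1 - 0.20) ≈ 307,

where Z = 1.96 is the critical value for 95% confidence, e = 0.05 is the level of precision, p is the assumed proportion, and r = 0.20 is the anticipated non-response rate.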
Data collection methods and procedures
A trained research assistant introduced the purpose of the study to the clients receiving HIV services at the market HIV clinic, explained the eligibility criteria and allowed individuals to ask questions pertaining to their participation in the study prior to enrollment. Thereafter, individuals were asked if they were willing to participate in the study, and those who responded in the affirmative were asked to provide verbal informed consent prior to participation. Individuals who were not interested in the study were informed that this would not affect their access to the HIV services offered at the HIV clinic. After obtaining consent, individuals were taken through a screening process to identify those who were eligible for the study. Eligible respondents were administered structured, pilot-tested, interviewer-administered questionnaires in Luganda, the main local language spoken in the study area. Data were collected on socio-demographic (age, sex, education, marital status, etc.) and behavioral characteristics, including HIV risk perception and prior HIV testing, either individually or together with a partner. The data collection process took an average of 40 minutes. It is important to note that respondents were approached at their own convenience, either before or after they had received the HIV services that they came for, so the implementation of the study at the HIV clinic did not alter service provision in any way. After data collection, all completed questionnaires were edited in the field to allow clarification of unclear questions or responses with the respondent before the teams left the field. Edited questionnaires were entered into an EpiData version 3.1 dataset in preparation for analysis.
Measures
We used the term "HCT practices" to denote the practice of individuals receiving pre-test counseling, HIV test results and the associated HIV post-test counseling support. Individuals were asked if they had ever received HCT, and those who responded in the affirmative were asked how many times they had ever tested for HIV (coded as 1, 2-4 or 5+ times) and whether or not they had ever tested together with their partners. Prior HCT uptake was assessed by socio-demographic and behavioral characteristics to describe prior HCT practices among clients accessing HIV services at the market HIV clinic. Age was categorized into three groups (16-24, 25-34, and 35+); education was categorized as primary, secondary or post-secondary; and respondents were categorized as either market vendors or engaged in other market-related activities (e.g., customer/buyer). Marital status was categorized as 'never married', 'currently married', or 'previously married' (divorced/widowed/separated). HIV risk perception was assessed using three parameters: belief that one is personally at risk of HIV infection; belief that one's sexual partner is at risk of HIV infection; and belief that market operators are a high-risk group. These parameters were assessed dichotomously, with those holding each belief coded '1' for "Yes" and those opposing the belief coded '2' for "No".
Data analysis
We conducted descriptive analyses (frequencies and percentages) to compute respondents' socio-demographic characteristics stratified by sex, HIV risk perception, and respondents' prior individual and couples' HIV counseling and testing practices. Due to small numbers, we were not able to compute inferential statistics beyond the descriptive analyses reported in this paper. Data analysis was conducted using STATA statistical software, version 14.1.
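As a rough illustration of this descriptive analysis: the study used STATA 14.1, and the pandas equivalent below is only a sketch, with hypothetical column names (sex, age_group, ever_tested) standing in for the actual EpiData export.

import pandas as pd

# Hypothetical extract of the dataset; values are invented.
df = pd.DataFrame({
    "sex": ["M", "F", "F", "M", "F"],
    "age_group": ["25-34", "16-24", "35+", "25-34", "25-34"],
    "ever_tested": [1, 1, 0, 1, 1],
})

# Frequencies and row percentages of prior HIV testing, stratified by sex.
counts = pd.crosstab(df["sex"], df["ever_tested"])
percents = pd.crosstab(df["sex"], df["ever_tested"], normalize="index") * 100
print(counts)
print(percents.round(1))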
Ethical considerations
The study was implemented as part of a larger program evaluation commissioned by the MAVAP project; no ethical review was requested. However, to elicit consent to participate in the study, potential respondents were provided with detailed information about the study and invited to participate. Interested respondents gave verbal informed consent prior to participating in the study. No compensation was provided for participation.
Results

Respondents' characteristics
Overall, 238 (77.5%) individuals accepted to be interviewed. Of those who were not interviewed (n=69), 80% indicated that they did not have time to participate in the interview or promised to return on another day but did not return, while 20% outright refused to participate in the study. This paper is based on 224 individuals for whom complete interview data were available for analysis (Table 1). Of these, 139 (62%) were market vendors while 85 (38%) were engaged in other activities in the market. There were slightly more males (44.6%) engaged in market vending than females (32.1%), although this difference was not significant (P=0.49). The majority of those engaged in market vending (n=76, 54.7%) had been in this business for 3 or more years (data not shown). Most respondents were aged 25-34 years (n=116, 51.8%), followed by those aged 35+ (n=69, 30.8%), and had either primary (n=101, 45.1%) or secondary education (n=104, 46.4%). There were slightly more females with secondary education (49.1%) than males (43.8%), but this difference was not statistically significant (P=0.42). Of those who had ever tested, 148 (89.7%) reported that they had tested two or more times. Of these, 122 (73.9%) had tested 2-4 times, while 26 (15.8%) had tested five or more times (data not shown). However, there was no significant difference in repeat HIV testing between males and females (P=0.98). When respondents were asked why they tested more than once, 126 (85.1%) reported that they wanted to confirm their previous HIV test results, and 35 (23.6%) were concerned that their previous HIV test results might have been false.
Couples' HIV counseling and testing practices
The majority of the respondents (n=200, 89.3%) had ever heard of the term "couples' HIV counseling and testing" and nearly all respondents (n=217, 96.9%) agreed with the statement, "Testing as a couple is the best option for married people". However, as shown in Table 3, only a small proportion of those who had ever tested reported that they had ever received their HIV test results together as a couple (n=63, 38.2%). Prior receipt of couples' HCT was much lower among those aged 16-24. When those who had ever received couples' HCT were asked what motivated them to do so, the majority (n=35, 52.2%) reported that they had always wanted to test together as a couple; n=16 (23.9%) reported that this was because HCT services were provided in the home or at the work place; n=13 (19.4%) reported being motivated by a peer educator; n=11 (16.4%) reported that they received an invitation to test as a couple; n=10 (14.9%) reported that both partners attended the same outreach session where couples' HIV testing services were provided; while n=10 (14.9%) reported that it was because their partner attended a session where couples' HIV counseling and testing was emphasized.
Of those who had never received couples' HCT services, n=37 (23.6%) reported that their partner refused to go with them for couples' HIV testing; n=31 (19.8%) had never discussed HIV testing issues with their partner; n=17 (10.8%) reported that their partners did not see any need for couples' HCT; n=13 (8.3%) simply did not want to test with their partner; n=10 (6.4%) suspected their partner to be infected with HIV; while n=5 (3.2%) feared that their partner might find out that they are HIV-positive. The remaining n=63 (40.1%) gave other reasons for not testing with their partners.
Discussion
In this study of HCT practices among clients accessing HIV services at a market HIV clinic in Kampala, Uganda, we found that nearly three-quarters had ever tested for HIV; of these, nine out of every ten had tested more than once. However, although prior HIV testing was high, only 38% of those that had ever tested reported that they had ever tested together with their sexual partners. The high rates of prior HIV testing among clients accessing HIV services at the market HIV clinic are likely a result of the proximity of HIV testing services within the market, since the MAVAP HIV clinic is located within St Balikuddembe market itself. Indeed, 42% of those who had ever tested indicated that they took their most recent test at a MAVAP HIV clinic or MAVAP outreach session. It is also likely that the high rates of repeat testing were a result of prior exposure to MAVAP health education activities or to other HCT promotional campaigns at the time. Nevertheless, the presence of the HIV clinic within the market could have been a strong motivator for people to go for repeat HIV testing, since they did not need to incur any travel costs to seek HCT services. However, since 58% of those reporting a previous HIV test sought HCT services from outside the MAVAP HIV clinic, convenience alone is unlikely to be the sole reason for repeat testing.
Repeat testers usually advance a number of reasons to explain their repeat testing behaviors, including preparation for a new relationship15; confirmation of previous HIV status; or fear that they might have acquired HIV in the period after the last test16. We found that the majority of those who returned for repeat testing wanted to confirm their previous HIV test results, but nearly a quarter of the respondents returned following concerns that their previous HIV test results were false. These findings suggest a need for enhanced pre- and post-test counseling messages for individuals seeking HIV counseling and testing at MAVAP clinics, to address their concerns about the authenticity of the HIV test results received.
Our finding of high rates of repeat testing is consistent with previous studies that have found high rates of repeat testing in high-risk groups15,17,18. Indeed, some studies show that it is because of high-risk behaviors that individuals seek repeat testing16, while others show that repeat testing itself is likely to prompt people into high risk-taking behaviors18,19. Compared with first-time testers, MacKellar18 found that repeat testers were more likely to report recent risk behaviors and to acquire HIV (7% versus 4%); over 75% of repeat testers who sero-converted acquired HIV within 1 year of their last test. These findings suggest that repeat testers are more likely to engage in higher-risk behaviors than first-time or non-testers. However, since we did not collect sexual risk behavior data, there is a need for further research to examine the sexual risk-taking patterns of market operators in order to fully characterize the repeat testing behaviors observed in this population. This will help to inform the development of appropriate HIV prevention interventions targeting this group.
The finding that couples' HCT uptake was low in this population is consistent with findings from other studies of couples' HCT in general21-23, indicating limitations in improving joint awareness of couple HIV status through couples' HCT. Several reasons have been reported to inhibit couples' HCT uptake, including low risk perception among couples coupled with the belief that a partner's HIV test results should be similar to a spouse's, also known as 'HIV testing by proxy'24; fears of receiving HIV-discordant results; low levels of male involvement; and fears of marital dissolution resulting from joint awareness of HIV test results, especially in the event that one or both partners are HIV-positive25. Since there are high rates of re-testing in this population, it is important that all tested individuals are encouraged to share their HIV test results with their partners, where appropriate with assistance from a professional counselor. Otherwise, any interventions aimed at improving couples' HCT uptake in this population should address the apparent fears expressed by the participants but also explore alternative ways of encouraging communication between partners about couples' HIV testing26, in the hope that this discussion will arouse interest in the partner to accept couples' HCT.
The findings of this study should be interpreted with caution. First, the study was conducted in a market setting with very close proximity to HIV testing services. It is possible that the uptake rates reported in this setting, including the high levels of re-testing, reflect the proximity of services rather than a dire need to repeat the test. However, since nearly 60% of those who tested for HIV sought the services elsewhere, our findings may well reflect genuine interest in re-testing in this population. Second, the findings reported in this paper are based on individual self-reports, which are subject to reporting bias. It is also possible that our study targeted a highly self-selected group of repeat testers; if this were true, then the repeat HIV testing rates reported in this paper might not be representative of the HIV testing behaviors of market vendors or other people engaged in market-related activities. Because there are no prior studies on this subject, we do not know the extent to which our study population differs from or is similar to other informal sector workers, or whether those interviewed were similar to or different from the average market vendors or other people engaged in market-related activities in Uganda. Further research is warranted in other markets to obtain the data necessary to fully characterize the HIV testing behaviors of these informal sector workers.
Given the relative lack of HIV data on the study population, this study would have benefitted from collecting data on the risk profiles of clients accessing HIV services at the market HIV clinic, including their sexual risk behaviors, biomarkers of HIV risk exposure (e.g., history of sexually transmitted infections), and other factors that increase their vulnerability to HIV infection. A recent study among female market vendors in Kenya found that 25.6% were HIV-positive13, suggesting that people engaged in market vending and other market-related activities could be at heightened risk of HIV infection. Future research should include a component that explores the sexual risk-taking characteristics of all people involved in the informal sector in order to inform the design of target-specific interventions. It is also important to note that while the proportion of those who had ever received couples' HCT was expressed out of those that had ever tested for HIV, we do not know if all of them were involved in sexual relationships or living with a sexual partner at the time they tested, because we did not include any questions on this. For this reason, the reported proportion of prior couples' HCT uptake should be interpreted with caution. Nevertheless, despite these limitations, our study is unique in that it is the first to document the HIV testing practices of individuals accessing HIV services at a market HIV clinic, and it provides useful information to guide HIV prevention planning for this neglected but important HIV risk group.
Conclusion
Our findings show high rates of prior HIV testing and re-testing but low rates of couples' HCT uptake in this population, suggesting that interventions to increase the uptake of couples' HCT services among informal sector workers are urgently needed.
Functional Analysis of Neuronal MicroRNAs in Caenorhabditis elegans Dauer Formation by Combinational Genetics and Neuronal miRISC Immunoprecipitation
Identifying the physiological functions of microRNAs (miRNAs) is often challenging because miRNAs commonly impact gene expression under specific physiological conditions through complex miRNA::mRNA interaction networks and in coordination with other means of gene regulation, such as transcriptional regulation and protein degradation. Such complexity creates difficulties in dissecting miRNA functions through traditional genetic methods using individual miRNA mutations. To investigate the physiological functions of miRNAs in neurons, we combined a genetic “enhancer” approach complemented by biochemical analysis of neuronal miRNA-induced silencing complexes (miRISCs) in C. elegans. Total miRNA function can be compromised by mutating one of the two GW182 proteins (AIN-1), an important component of miRISC. We found that combining an ain-1 mutation with a mutation in unc-3, a neuronal transcription factor, resulted in an inappropriate entrance into the stress-induced, alternative larval stage known as dauer, indicating a role of miRNAs in preventing aberrant dauer formation. Analysis of this genetic interaction suggests that neuronal miRNAs perform such a role partly by regulating endogenous cyclic guanosine monophosphate (cGMP) signaling, potentially influencing two other dauer-regulating pathways. Through tissue-specific immunoprecipitations of miRISC, we identified miRNAs and their likely target mRNAs within neuronal tissue. We verified the biological relevance of several of these miRNAs and found that many miRNAs likely regulate dauer formation through multiple dauer-related targets. Further analysis of target mRNAs suggests potential miRNA involvement in various neuronal processes, but the importance of these miRNA::mRNA interactions remains unclear. Finally, we found that neuronal genes may be more highly regulated by miRNAs than intestinal genes. Overall, our study identifies miRNAs and their targets, and a physiological function of these miRNAs in neurons. It also suggests that compromising other aspects of gene expression, along with miRISC, can be an effective approach to reveal miRNA functions in specific tissues under specific physiological conditions.
Introduction
Classical genetics have uncovered important developmental functions of numerous microRNAs (miRNAs). However, these approaches have been limited in understanding other physiological functions of this extensive class of regulatory RNAs, partly because many miRNAs are dispensable under favorable, non-stressful conditions [1]. Particularly, the elimination of many miRNA families resulting in few apparent defects suggests that the nature of miRNA regulation is much more complex [2]. This complexity potentially arises from redundancy and coordination between multiple aspects of gene expression, ensuring robust modulation and control over developmental and nondevelopmental processes. Thus, compromising single miRNAs or even miRNA families may have limited effects on the entire regulatory network, and therefore would be a challenging approach to elucidating the biological functions of miRNAs.
We, and others, have found that compromising miRISC and total miRNA function helps to reveal processes sensitive to miRNA regulation [3,4,5]. Thus by compromising either Argonaute or GW182 function, miRNA-related processes can be identified [5]. To this end, compromising other regulatory mechanisms, in combination with miRISC, may further influence regulatory networks to reveal unknown physiological functions of miRNAs. In order to understand the physiological functions of miRNAs and their roles within complex regulatory networks, we investigated a genetic interaction between a neuronal transcription factor, UNC-3, and miRISC. UNC-3 is a conserved COE (Collier/Olf-1/Early B-cell Factor) transcription factor that is important in controlling motor neuron development as well as the identity of ASI chemosensory neurons that have been shown to play critical roles in dauer development [6,7,8]. Through investigating this complex genetic interaction, we found that miRNAs largely function in neurons to exert control over dauer development. The process of dauer development is highly regulated through multiple signaling pathways, and the robustness of these pathways is further reinforced through miRNA regulation to ensure ideal induction of dauer development.
Complementary to this genetic approach, we further identified miRNAs and miRNA-targets within neuronal tissue. In doing so, we found individual miRNAs and potential targets that modulate dauer formation. More broadly, our analysis suggests that neuronal miRNAs can modulate a wide range of neuronal processes and activities, including dauer formation.
Results

AIN-1 and AIN-2 are two Caenorhabditis elegans (C. elegans) GW182 proteins that act semi-redundantly in miRNA-mediated gene silencing [10,11]. We made an unc-3(e151 lf) ain-1(ku322 lf) double mutant and observed that double mutant animals formed more dauers than the unc-3(lf) mutant at 25°C, while ain-1(lf) alone does not display obvious defects in dauer formation (Figure 1A). The rate of dauer formation of unc-3(lf) ain-1(lf) mutants can be significantly reduced when a functional AIN-1::GFP or UNC-3 is expressed through an extra-chromosomal array (Figure 1B). These results suggest that both unc-3(lf) and ain-1(lf), and not a linked mutation, are responsible for the partial dauer-constitutive (Daf-c) phenotype of unc-3(lf) ain-1(lf) animals. This synergistic Daf-c phenotype suggests that miRNAs play an important role in modulating dauer responses and that at least some of these functions are UNC-3 independent. Because ain-1(ku322 lf); ain-2(tm1863 rf) double mutants, which further compromise miRNA function, are not Daf-c, miRNAs appear to act in concert with multiple mechanisms to provide robustness to the regulatory system. However, we did not observe a strong increase in the rate of dauer formation in the unc-3(e151); ain-2(tm2432 lf) mutant, potentially suggesting that AIN-1-containing miRISC complexes play a more prominent role in modulating dauer formation (Figure 1A and Figure S1A).
To determine where miRNAs execute this function, we restricted the spatial expression of the AIN-1::GFP transgene in unc-3(lf) ain-1(lf) animals. Neuronal expression and intestinal expression were accomplished through the rgef-1 and ges-1 promoters, as described previously [12,13]. Expression of either neuronal AIN-1::GFP or AIN-2::GFP in the unc-3(lf) ain-1(lf) mutant was found to be sufficient to reduce the rate of abnormal dauer formation (Figure 1B and Figure S1B), indicating that neuronal miRNAs are largely responsible for the observed role of miRNAs in repressing dauer formation.
In contrast, we found that intestinally expressed AIN-1::GFP was not only insufficient to suppress the Daf-c defect, but also slightly increased the rate of dauer formation (Figure 1B). This intriguing result suggested that intestinal miRNAs may have a role in promoting dauer formation, contrary to the repressive role of neuronal miRNAs observed in unc-3(lf) ain-1(lf) animals. The potential role in promoting dauer formation is consistent with our previous finding that intestinal miRNAs repress activities of the insulin/IGF-1 signaling (IIS) pathway and with the finding by others that miRNAs repress IIS signaling for longevity-related functions [3,14]. In order to confirm the potential role of intestinal miRNAs in promoting dauer formation, we tested the relationship of UNC-31 and AIN-1. UNC-31 is involved in insulin secretion and signaling. Interestingly, it was previously shown that unc-31(lf); unc-3(lf) mutants have a strong Daf-c phenotype [9]. We found that unc-31(e928 lf); ain-1(tm3681 lf) double mutants formed dauers far less frequently than the unc-31(e928 lf) single mutant (Figure 1C), which is consistent with the idea that miRNAs could also play a role in promoting dauer formation and not simply repressing it, revealing a highly complex role of miRNAs in dauer-related signaling. To further test whether this dauer-promoting function was intestinal, we expressed an AIN-1::GFP fusion in the intestine of unc-31(e928 lf); ain-1(tm3681 lf) animals. We found that restoring AIN-1 in the intestine increased the rate of dauer formation, partially rescuing the unc-31(e928 lf); ain-1(tm3681 lf) phenotype (Figure S1C). Other tissues are likely also involved, as the rescue was not complete. Nonetheless, this result supports a role for intestinal miRNA function in promoting dauer formation. However, this role of intestinal miRNAs seems to be weak, as we see only slight increases (5-15%) in overall dauer formation in transgenic animals expressing AIN-1::GFP in the intestine.
Altogether, these results suggest that miRNAs within different tissues can exert opposing effects on biological processes. Furthermore, loss of neuronal AIN-1 appears to overcome the loss of intestinal AIN-1, resulting in an increased rate of dauer formation.
Author Summary

MicroRNAs (miRNAs) are important in the regulation of gene expression and are present in many organisms. To identify specific biological processes that are regulated by miRNAs, we disturbed total miRNA function under a certain genetic background and searched for defects. Interestingly, we found a prominent developmental defect that was dependent on a mutation in another gene involved in regulating transcription in neurons. Thus, by compromising two different aspects of gene regulation, we were able to identify a specific biological function of miRNAs. By investigating this defect, we determined that neuronal miRNAs likely function to help modulate cyclic guanosine monophosphate signaling. We then took a systematic approach and identified many miRNAs and genes that are likely to be regulated by neuronal miRNAs, and in doing so, we found genes involved in the initial defect. Additionally, we found many other genes, and show that genes expressed in neurons seem to be more regulated by miRNAs than genes in the intestine. Through our study, we identify a biological function of neuronal miRNAs and provide data that will help in identifying other important, novel, and exciting roles of this important class of small RNAs.

The dauer constitutive phenotype of unc-3(lf) ain-1(lf) mutants is suppressed by constitutive DAF-7/TGF-β and IIS signaling

TGF-β and Insulin/IGF-1 signaling (IIS) are two well-known signaling cascades that regulate dauer formation [15]. Moreover, the release of insulin-like and TGF-β signaling ligands is predominantly neuronal and serves as a means to link environmental cues to development and metabolism. Our data suggest that neuronal miRNAs regulate dauer formation; because of the neural component of these pathways, we tested the dependence of these signaling cascades on our mutant phenotype.
Major miRNA roles in dauer formation are independent of serotonin activity

The observation that miRNAs were required in neurons to repress aberrant dauer formation, and the suppression of the unc-3(lf) ain-1(lf) double mutant phenotype by hyperactive TGF-β and IIS signaling, raised the possibility of defects upstream of both TGF-β and IIS. Serotonin has been suggested to regulate both TGF-β and IIS signaling in C. elegans (Figure 2D) [19,20]. Additionally, a previous study has shown that compromising serotonin signaling, by mutations in the serotonin biosynthesis gene tph-1 (tryptophan hydroxylase), could enhance the dauer formation of unc-3(lf) mutants (Figure 2A) [21]. Consistent with this observation, we found that mutations in the LIM-4 homeodomain protein, which decreases TPH-1 expression in ADF neurons, also enhanced the dauer formation of unc-3(lf) mutants (Figure 2B) [22]. Combined, these results suggest that serotonin, produced through TPH-1 in ADF neurons, is important in preventing dauer formation in an unc-3(lf) mutant. Interestingly, the rate of dauer formation of the unc-3(lf); tph-1(mg280 lf) mutant and the unc-3(lf) lim-4(yz12 lf) mutant was similar to that of the unc-3(lf) ain-1(lf) mutant (Figure 2A-2B).

Figure 1. Neuronal miRNAs act in parallel to UNC-3 to repress undesired dauer formation by potentially affecting IIS and TGF-β signaling. A. Bar graph representing the percent of dauer progeny of indicated genotypes. * indicates that ain-1(lf) mutants were not tested in parallel to these specific experiments. Alleles used are ain-1(ku322 lf), unc-3(e151 lf), and ain-2(tm2432 lf). B. Relative dauer rate of unc-3(e151 lf) ain-1(ku322 lf) transgenic mutants carrying the designated extrachromosomal array. Relative dauer rate was determined by normalizing the percent of dauer progeny for transgenic and non-transgenic animals to the percent of dauer progeny of non-transgenic progeny on the same plate, to account for plate-to-plate variation. The AIN-1 protein in each transgene was translationally fused to GFP. C. The ain-1(tm3681 lf) mutation represses aberrant dauer formation of unc-31(e928 lf) at 27°C. D. daf-5(e1386 lf) and daf-16(mu86 lf) suppress the aberrant dauer formation phenotype of unc-3(e151 lf) ain-1(ku322 lf). The percent of dauer progeny is shown relative to the original unc-3(lf) ain-1(lf) mutant; relative dauer rate was used to account for day-to-day variation. Error bars represent S.E.M. from at least three biological replicates. A Student's t-test was used to determine statistical significance for all experiments. doi:10.1371/journal.pgen.1003592.g001
To test whether miRNAs regulate serotonin, we compromised serotonin synthesis in the unc-3(lf) ain-1(lf) mutant. We found that both the tph-1(lf); unc-3(lf) ain-1(lf) and lim-4(lf) unc-3(lf) ain-1(lf) triple mutants had significant increases in dauer formation rate when compared to the corresponding double mutant (Figure 2A-2B). Thus, regulation of dauer formation by miRNAs is largely independent of serotonin production, and the Daf-c phenotype associated with unc-3(lf) ain-1(lf) mutants is likely to be unrelated to serotonin activity.
We also compared the rate of dauer formation of ain-1(lf); tph-1(lf) mutants with that of tph-1(lf) mutants, and compared lim-4(lf) ain-1(lf) mutants with lim-4(lf) mutants. Interestingly, we found that ain-1(lf); tph-1(lf) double mutants do not have a higher rate of dauer formation than tph-1(lf) at 25°C (Figure 2C). Rather, the enhancement of dauer formation is evident at 27°C, confirming that at least a portion of miRNA activity functions in parallel to serotonin to regulate dauer formation (Figure 2C). These results also suggest that compromising both ain-1 and tph-1 is not sufficient to generate the strong dauer phenotype. Instead, an additional insult is necessary for the dramatic induction of dauer formation. Thus, the presence of another condition (the unc-3 mutant background or 27°C) reveals the effect of miRNA and serotonin activity on regulating dauer formation. Strangely, lim-4(lf) ain-1(lf) mutants and lim-4(lf) mutants do not show any increase in the rate of dauer formation, even at 27°C (Figure S1D). This suggests that the lim-4(lf) and tph-1(lf) mutants differ in their response to a 27°C environment, or that there is a difference in the genetic backgrounds of the lim-4(lf) and tph-1(lf) mutant strains.
To investigate if miRNAs affected the levels of cGMP or the cGMP signal transduction machinery, we exogenously supplemented unc-3(lf) ain-1(lf) mutants with the 8-bromo-cGMP analog, which has been shown to decrease dauer formation of daf-11(lf) mutants but not of TGF-β mutants [26]. By increasing the levels of cGMP, we were able to decrease the rate of dauer formation of unc-3(lf) ain-1(lf) mutants (Figure 3B). Additionally, we saw that increasing the levels of cGMP failed to decrease the rate of dauer formation of unc-3(lf) tph-1(lf) mutants, suggesting that the effect of 8-bromo-cGMP supplementation is specific to the dauer formation of unc-3(lf) ain-1(lf) and not to unc-3(lf)-involved dauer formation in general. Interestingly, the supplementation of 8-bromo-cGMP to unc-3(lf) tph-1(lf) mutants seemed to cause an increased rate of aberrant dauer formation; the reason for this is unclear. Because we can partially rescue the Daf-c phenotype by increasing levels of cGMP, it is likely that a portion of miRNA activity is dedicated to maintaining cGMP signaling, either directly or indirectly. However, the inability to completely suppress dauer formation through 8-bromo-cGMP supplementation suggests that miRNAs are not only influencing cGMP signaling, but are also likely regulating other mechanisms of dauer signaling.
Overall, these results suggest that dauer formation of unc-3(lf) ain-1(lf) is caused, in part, by defects in maintaining proper cGMP signaling. Moreover, the evidence that ain-1(lf) mutant animals do not form constitutive dauers similar to daf-11(lf) mutants suggests that cGMP signaling is only modestly affected by miRNAs. Interestingly, the modest regulation of cGMP signaling by miRNAs is only revealed in the unc-3(lf) mutant background, as defects in cGMP-dependent dauer formation are only apparent in the unc-3(lf) ain-1(lf) mutant and not the ain-1(lf) mutant.
Multiple neuronal miRNAs regulate dauer formation
Our genetic evidence for a function of neuronal miRNAs in regulating dauer formation prompted us to further analyze neuronal miRNAs by a biochemical approach. We applied tissue-restricted immunoprecipitations of neuronal miRISCs from an asynchronous population of worms, as we have done previously for the intestine and muscle [4]. Briefly, a neuron-specific rgef-1prom::AIN-2::GFP transgene, which can rescue the unc-3(lf) ain-1(lf) phenotype (Figure S1B), was integrated into the genome through single-copy ballistic transformation. An antibody against GFP was used for immunoprecipitation to enrich for neuronal miRISCs.

Figure 3. miRNAs may modulate cGMP signaling. A and B. Bar graphs representing the percent of dauer progeny of indicated genotypes and conditions. A. All mutants were assayed in the same experiment. Alleles used are unc-3(e151 lf), ain-1(ku322), and tax-2(p691 lf). B. The original unc-3(e151 lf) ain-1(ku322 lf) mutant was used in this experiment. This assay was scored blindly between (-)/(+) cGMP plates. Each trial consisted of four independent plates, and the median percent dauer between scored plates was taken for each trial. Error bars represent S.E.M. from at least three biological replicates. A Student's t-test was done to determine statistical significance for all experiments. doi:10.1371/journal.pgen.1003592.g003
We subjected four biological replicates of an asynchronous population to immunoprecipitation, small RNA isolation, and Illumina deep-sequencing. We identified 16 miRNAs that were significantly enriched in neurons when compared to total whole-worm lysate (p<0.01) (Dataset S1). Of the 16 miRNAs we identified, 12 have been analyzed by promoter::GFP analysis [27]. Of these 12 miRNA promoters, 9 show some indication of neuronal expression (Dataset S1). Despite the previous lack of visible GFP expression of the other 3 miRNAs in neurons, this method was sensitive enough to detect the enrichment of the mature miRNA sequence within neurons. Additionally, our neuronal miRISC IP identified 33 miRNAs that were highly depleted from neurons when compared to total whole-worm lysate. Of these 33 depleted miRNAs, 18 were analyzed by promoter::GFP analysis, and 12 showed no indication of being expressed in neurons. Thus, a portion of the miRNAs identified in our IPs is supported by previous promoter::GFP analysis. Moreover, the subset of miRNAs we identified in this asynchronous, neuronal immunoprecipitation is different from our previously identified L4 intestine-specific miRNAs (Figure 4A). This comparison is limited by the fact that the miRNAs are from a different stage and a different tissue, but the methods used to isolate, clone, and sequence the miRNAs between these two datasets were exactly the same and thus the most comparable. However, our data did not show a statistically significant enrichment of lsy-6, a known neuron-specific miRNA (p<0.15). Because lsy-6 is expressed in a small subset of chemosensory neurons [28], and our systematic investigation of neuronal miRISCs did not discriminate between neuronal subtypes, this method may not be sensitive enough to identify miRNAs that are expressed in a small subset of neurons.
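As an illustration of how such per-miRNA enrichment calls can be made, the sketch below runs a Student's t-test on log2 relative abundances across the four replicates. The values are invented, and the paired design is an assumption, since the text specifies only a Student's t-test.

from scipy.stats import ttest_rel

# Hypothetical log2 relative abundances of one miRNA across 4 paired replicates.
ip_log2 = [9.1, 8.7, 9.4, 8.9]     # neuronal miRISC IP
total_log2 = [6.2, 6.8, 6.5, 6.4]  # whole-worm lysate

stat, p = ttest_rel(ip_log2, total_log2)  # paired test across replicates
mean_enrichment = sum(i - t for i, t in zip(ip_log2, total_log2)) / 4
print(f"mean log2 enrichment = {mean_enrichment:.2f}, p = {p:.4f}")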
The neuronal specificity of this IP analysis is also consistent with the following findings. First, we noticed a moderate depletion of miR-1, a muscle-specific miRNA, from neuronal miRISCs (p<0.06) [4,29]. Second, this neuronal miRISC IP was depleted of miRNAs that were enriched in the intestine, such as miR-72 [4]. Finally, we observed that several miRNAs enriched in the neuronal IP were depleted from the intestinal IP, such as miR-790. Although there are some limitations, the miRNAs identified in this neuronal IP are a subset of miRNAs that are highly expressed in neurons.
To test the roles of several of these identified miRNAs in dauer formation, we ablated the function of single miRNAs in unc-3(lf) mutants. In addition to mir-81, mir-234, mir-235, and mir-240, which were identified in our IPs, we also tested mir-124, a highly conserved neuronal miRNA that was not enriched in the IP [30]. We found that deletion mutations in mir-81/82, mir-234, or mir-124 significantly enhanced the Daf-c phenotype of unc-3(lf) mutants (Figure 4B and Figure S1E), suggesting that these miRNAs have a role in dauer formation. Because the rate of dauer formation of each of these double mutants was less than that of the unc-3(lf) ain-1(lf) double mutant, these results suggest that there is a complex network of regulation exerted by multiple miRNAs in modulating the neuronal component of dauer formation. Additionally, we found that ablation of mir-81/82, mir-124, or mir-235 in unc-3(lf) ain-1(lf) did not further increase the rate of dauer formation (Figure 4D). This observation suggests that the ain-1(lf) background compromises a sufficient amount of the activity of these miRNAs to exert a detectable phenotype. However, we found that further deletion of mir-234 in the unc-3(lf) ain-1(lf) mutant increased the rate of dauer formation relative to its double mutant sibling control. Yet the absolute rate of dauer formation of the unc-3(lf) ain-1(lf); mir-234(lf) mutant is no different from that of the other unc-3(lf) ain-1(lf) clones (i.e., the unc-3(lf) ain-1(lf) control for mir-81/82 or mir-124), which may simply be a result of a varying genetic background introduced specifically in the unc-3(lf) ain-1(lf) control clone for mir-234. Nonetheless, the finding that these specific miRNAs function in repressing aberrant dauer formation in an unc-3(lf) mutant background supports the validity of this tissue-specific IP approach in identifying neuronal miRNAs.
Because of the effect of mir-81/82 in repressing aberrant dauer formation in an unc-3(lf) mutant background, we tested mir-58, a family member of mir-81/82. We found that unc-3(lf); mir-58(lf) mutants behaved similarly to the unc-3(lf) mutant and that unc-3(lf) ain-1(lf); mir-58(lf) mutants were similar to unc-3(lf) ain-1(lf) mutants in dauer formation (Figure 4B, 4D). A previous study found that the mir-58 family is required for dauer formation, as deletion of the entire family (mir-58, -80, -81, -82) resulted in an inability to form dauer larvae (Daf-d) [2], which is opposite to the function of the miRNAs discussed above. Interestingly, that study found that restoring mir-81 in this familial deletion did not rescue the Daf-d phenotype, which is consistent with our observation of mir-81 repressing dauer formation. Moreover, we found mir-58 to be 19-fold depleted from neuronal miRISC (p = 0.00124) while mir-81 was enriched in neuronal miRISC (Figure 4C). Thus, although family members can function redundantly in targeting similar mRNAs, the differing spatial expression of miRNA family members can allow for different, even opposing, biological functions.
Identification of neuronal miRNA targets
In addition to identifying neuronal miRNAs, we applied microarray analysis to identify potential miRNA targets that co-immunoprecipitated with asynchronous neuronal miRISCs, in five biological replicates (Dataset S2). In each replicate, the ratio of IP signal/Total signal was converted to percent ranks (Figure 5A), as described previously [4,31]. Through our microarray analysis, we identified 747 (p<0.001) probes that mapped to unique genes and that were associated with neuronal miRISCs. Of these 747 probes, 728 were readily identified with a gene. In order to confirm whether these associated mRNAs were miRNA targets, we analyzed the 3' UTRs of these associated genes. We found that the 3' UTRs of these genes were enriched for perfect 7-mer (2-8 nt) binding sites of C. elegans miRNAs per 1000 nucleotides (Figure 5C). In contrast, there was minimal enrichment when the reverse complement of perfect 7-mer binding sites was used in the analysis. Moreover, the relative enrichment of perfect 7-mer binding sites was even more robust for miRNAs that we identified through our neuronal IP and deep-sequencing analysis. Additionally, the median 3' UTR length of this subset of mRNAs is longer than the median 3' UTR of all testable genes in our microarray experiments (Figure 5B). We also found that the percentage of genes with at least one perfect 7-mer binding site (to all C. elegans miRNAs) was higher for genes associated with neuronal miRISCs than for all testable genes. These observations are similar to what was observed in our previous intestine- or muscle-restricted IPs [4]. Furthermore, we compared our data against our previous IPs and found that a portion of genes enriched in neuronal miRISC were also enriched in our whole-body, muscle, and intestine IPs, suggesting that we had previously identified a subset of these miRNA targets (Figure 5D). Collectively, these observations suggest that the genes identified through our IPs are likely miRNA targets in neurons.
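A minimal sketch of the seed-density statistic described above (perfect 7-mer matches to miRNA nucleotides 2-8, per 1000 nt of 3' UTR) follows. The mature miRNA sequence and UTR are made-up examples, not data from the study.

def seed_site(mirna):
    """Perfect 7-mer site: reverse complement of miRNA nucleotides 2-8."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]  # nt 2-8 of the mature miRNA
    rc = "".join(comp[b] for b in reversed(seed))
    return rc.replace("U", "T")  # match against DNA-encoded UTR sequences

def count_sites(utr, site):
    """Count occurrences of a 7-mer site in a UTR (overlaps allowed)."""
    return sum(utr.startswith(site, i) for i in range(len(utr) - 6))

def seed_density(utrs, mirnas):
    """Perfect 7-mer seed matches per 1000 nt of 3' UTR, pooled over genes."""
    sites = {seed_site(m) for m in mirnas.values()}
    total_nt = sum(len(u) for u in utrs.values())
    hits = sum(count_sites(u, s) for u in utrs.values() for s in sites)
    return 1000.0 * hits / total_nt

# Hypothetical inputs; the miRNA sequence is illustrative only.
mirnas = {"miR-X": "UGAGAUCAUCGUGAAAGCUAGU"}
utrs = {"geneA": "ATGCTAGCTTTCACGATGATCTCAAA"}
print(seed_density(utrs, mirnas))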
Additionally, we compared the relative mRNA levels of several of our top targets in ain-1(ku322 lf); ain-2(tm1863 rf) double mutants and N2 worms, as done previously [4] (Figure S2). We found that 8 of the 9 genes that we tested show a higher relative mRNA level in ain-1(lf); ain-2(rf) worms than in N2. The one gene that did not show a higher relative mRNA level (lev-1) was previously shown to be up-regulated by a GFP 3' UTR analysis, suggesting that regulation may occur via translational efficiency. It is important to note, however, that the up-regulation of these 8 genes may be indirect, but their association with neuronal miRISC supports the idea that their up-regulation is likely a result of a lack of direct miRNA targeting. Overall, these results support the idea that the mRNAs associated with neuronal miRISC are up-regulated when miRISC is compromised.
To further characterize the tissue-specificity of our IP, we compared the average enrichment of genes known to be expressed in various tissues through previous IP-microarray analysis of PAB-1 associated mRNAs in specific tissues [12,31,32]. The average percent rank of all testable genes in our dataset was 0.48. We found the average percent rank of genes highly expressed in muscle and intestine to be 0.51 (p = 1.4×10^-4) and 0.45 (p = 1.66×10^-5), respectively (Figure 6A) [31,32]. Thus, there is a small enrichment of highly expressed muscle genes in our dataset. In contrast, genes that are highly expressed in the intestine are depleted from the neuronal miRISC IP. Not surprisingly, we found minimal overlap between genes enriched in our neuronal miRISC IP and genes highly expressed in either muscle or intestine. We found the average percent rank of transcripts that are highly expressed in neurons to be 0.73 (p = 5.5×10^-241), with extensive overlap between genes enriched in the neuronal miRISC IP and genes highly expressed in neurons. Moreover, we found that genes that were highly expressed in neurons, but not associated with neuronal miRISCs, also had a percent rank higher than average (data not shown). Thus, the overlap between neuronal miRISC-associated genes and highly expressed neuronal genes is not responsible for the enrichment of neuronal genes in our dataset. Overall, these data suggest that our neuronal IP is specific for neuronal genes.
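The overlap statistics referred to here and in the figure legends use a hypergeometric distribution. In the sketch below, the 17673 testable genes and 747 enriched genes come from the text, while the tissue gene-set size and overlap count are hypothetical.

from scipy.stats import hypergeom

# N testable genes, K genes highly expressed in a tissue,
# n genes enriched in the neuronal miRISC IP, k genes in the overlap.
N, K, n, k = 17673, 1500, 747, 180  # K and k are invented for illustration

expected = n * K / N                      # expected overlap under independence
p_enrich = hypergeom.sf(k - 1, N, K, n)   # P(overlap >= k)

print(f"expected {expected:.1f}, observed {k}, p = {p_enrich:.3g}")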
The enrichment of highly expressed neuronal genes in our dataset could suggest that we were simply systematically enriching for genes that are expressed at higher levels in neurons. However, the possibility that the AIN-IP method favors genes expressed at higher levels was dismissed in our previous analysis of intestine- and muscle-restricted IPs [4]. To test if this was also true in neurons, we further analyzed the 3' UTRs of the neuronal and intestinal PAB-1 datasets [11,32]. We saw a similar trend in the neuronal PAB-1 dataset as we did with our neuronal miRISC dataset. As a reference, we used all 3' UTRs identified previously [33]. Genes that are highly expressed in neurons have a longer median 3' UTR, more perfect 7-mer seed sites within the 3' UTR, and a higher percentage of genes with at least one perfect 7-mer than all 3' UTRs in the database (Figure S3). This contrasts with genes that are highly expressed in the intestine. Additionally, these neuronally enriched genes are also slightly enriched in our previous intestine- and muscle-restricted IPs (Figure S3). Overall, these data suggest that a proportion of neuronally expressed genes are miRNA targets. Thus, the enrichment of previously identified neuronal genes in our neuronal miRISC IP indicates that the previous neuronal PAB-1 IP is also enriched for miRNA targets. Interestingly, this may suggest that genes highly expressed in neurons are more likely to be regulated by miRNAs than genes in other tissues.
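The median 3' UTR comparison relies on a nonparametric median test. Below is a sketch using Mood's median test from SciPy as a stand-in for the web tool cited in the Methods, with randomly generated UTR lengths in place of the real data.

import numpy as np
from scipy.stats import median_test

rng = np.random.default_rng(0)
# Hypothetical 3' UTR lengths (nt): miRISC-associated vs. all testable genes.
utr_enriched = rng.gamma(shape=2.0, scale=150.0, size=400)
utr_all = rng.gamma(shape=2.0, scale=100.0, size=400)

stat, p, grand_median, table = median_test(utr_enriched, utr_all)
print(f"grand median = {grand_median:.0f} nt, chi2 = {stat:.2f}, p = {p:.3g}")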
Neuronal miRNA targets have broad functions
We analyzed mRNAs that were significantly associated with neuronal miRISCs by utilizing GOrilla (http://cbl-gorilla.cs.technion.ac.il/), an online resource for GO term enrichment analyses (Table S1). We found that a wide range of GO terms involved with neuronal activity were enriched in neuronal miRISC-associated genes when compared to testable genes (Figure 6A). These GO term enrichments show that a wide variety of processes may be modulated by neuronal miRNAs. Additionally, we examined genes previously found to be enriched in dauer larvae, in GABA neurons, and in touch receptor neurons [34,35,36]. We found that genes from these datasets were overrepresented within the subset of genes associated with neuronal AIN-2 (Figure 6B). This overrepresentation may imply that neuronal miRNAs help regulate these processes, reinforcing the idea that neuronal miRNAs have broad physiological functions.
We searched for genes that promote dauer formation within our dataset, found a variety of genes that affected multiple aspects of dauer signaling, and examined their 3' UTRs for potential miRNA binding sites via TargetScan (Figure 6C) [33]. For example, pde-3 and pdl-1 were enriched in our IPs. Both genes encode predicted cGMP phosphodiesterases that may regulate cGMP levels. Also, mef-2, which functions downstream of EGL-4 and cGMP signaling, was enriched in our IPs [37]. Thus, aspects of cGMP signaling are likely regulated by miRNAs. Additionally, ins-1, an insulin antagonist, and daf-16 were enriched in our IPs, indicating possible miRNA modulation of insulin signaling. daf-12, a nuclear hormone receptor, was also identified in our IPs and has previously been shown to be regulated by miRNAs to influence dauer formation [38]. In addition to identifying genes that promote dauer formation, we found several genes that repress dauer formation (daf-4, daf-28, insulin-like peptides). However, it is unclear whether there is a switch between targeting genes that promote dauer formation versus genes that repress dauer formation in response to specific environmental stimuli. The identification of many dauer-signaling-related mRNAs and of genes enriched in dauer larvae suggests that miRNA regulation of dauer formation is complex. It is likely that multiple miRNAs target multiple genes that control several different dauer pathways. Moreover, the broad function of miRNAs in modulating neuronal activity may also influence dauer signaling.

Figure 4 (legend, partial; the opening of the legend is truncated). [...] pair was done in parallel, but independently from other miRNA mutants (i.e., unc-3(lf) and unc-3(lf); miR-124(lf) were run in parallel to each other, but not in parallel to unc-3(lf) miR-81/82(lf) mutants). The original unc-3(lf) was used compared to unc-3(lf); miR-124(lf). * indicates that the individual miRNA mutant was run separately from the unc-3(lf) and unc-3(lf); miR-(lf) mutant. Error bars represent S.E.M. from at least three biological replicates. A Student's t-test was used to determine statistical significance between unc-3(lf) and unc-3(lf) miR-(lf). Alleles used are unc-3(e151 lf), ain-1(ku322 lf), miR-58(n4640 lf), miR-81/82(nDf54 lf), miR-124(n4255 lf), miR-234(n4520 lf), and miR-235(n4504 lf). C. Chart showing the relative abundance (IP vs. Total) of miR-58 family members and statistical significance determined by a Student's t-test from deep-sequencing experiments. D. Bar graph representing the percent of dauer progeny. Assays for each miRNA mutant were run in parallel to the corresponding unc-3(lf) ain-1(lf) mutant but not in parallel to assays for other miRNA mutants (which may have resulted in the variation seen in unc-3(lf) ain-1(lf) mutants across different experiments). The original unc-3(lf) ain-1(lf) strain was used as a control in miR-81/82 and miR-124 experiments. Error bars represent S.E.M. from at least three biological replicates. E. Bar graph showing the percent of dauer progeny. Assays were scored blindly between (-)/(+) cGMP. Error bars represent S.E.M. from at least three biological replicates, and a Student's t-test was used to determine statistical significance. doi:10.1371/journal.pgen.1003592.g004
Discussion
Understanding the role and importance of miRNA regulation on physiological processes is a challenging aspect of miRNA biology that has often proven difficult to address using classical genetics. This difficulty is likely due mainly to two important aspects of miRNA-involved regulation of gene expression. First, miRNAs often function to enhance the robustness of gene expression related to specific physiological functions; disrupting miRNA regulation may not alter expression to the extent that generates phenotypes observable by established assays. Second, miRNA roles in a specific physiological process are often executed as a miRNA-target interaction network involving multiple miRNAs and a large number of their targets [1,2,4,5,39,40]. Therefore, in this study, we used a sensitized genetic background to interrogate miRNA functions, similarly to previous studies [3,4,5]. Within the ain-1(lf) genetic background, we found that compromising UNC-3 resulted in an enhancement of the unc-3(lf) Daf-c phenotype. Thus, the unc-3(lf) genetic background permitted the discovery of functions of miRNAs that would otherwise have been masked by "genetic redundancy" [5]. The induction of dauer formation is a physiological process that orchestrates organismal-level changes. As such, the process is tightly regulated through a variety of different mechanisms involving cGMP, serotonin, TGF-β, IIS, and hormonal signaling between various tissues [15]. Few miRNAs have been functionally linked to dauer formation, except for the miR-58 and let-7 families of miRNAs [2,38].

Figure 5. mRNAs associated with neuronal miRISC are likely neuronal miRNA targets. A. Bar graph showing the distribution of genes identified through microarray analysis. Testable genes were defined as having reliable signals in at least two microarray experiments (see Methods). For multiple probes for a single transcript, the probe with the smallest p-value from a one-tailed t-test was used (see Methods). For alternative transcripts of a single gene, the transcript with the smallest p-value was used. There were 17673 testable genes and 747 enriched genes in the graph. B. Chart showing the median length of 3' UTRs and the percentage of 3' UTRs with at least one perfect 7-mer binding site to all annotated C. elegans miRNAs. A nonparametric median test was used to determine the difference between median 3' UTR lengths (see Methods). Statistics from a hypergeometric distribution were used for determining the difference in percentages of having at least one 7-mer. C. Bar graph showing relative seed density. Seed density is calculated as (# of perfect 7-mer seed matches)/(1000 nt UTR). The density is relative to that of testable genes. Enrichment values shown above enriched genes (e.g., 1.45×) show relative changes in seed density compared to control. "Top 20 miRNAs" indicates the miRNAs with the top 20 highest raw read counts in our IPs. "Enriched miRNAs" indicates miRNAs that were enriched and statistically significant in our IPs. D. Chart showing the overlap between genes identified in previous datasets and genes associated with neuronal AIN-2. P-values and expected values were calculated using a hypergeometric distribution. Percentages represent 100×[(# genes in overlap)/(# genes associated with neuronal AIN-2)]. doi:10.1371/journal.pgen.1003592.g005
However, through our genetic investigations of the unc-3(lf) ain-1(lf) mutant, we have shown a physiological role of neuronal miRNAs in repressing aberrant dauer formation, through possible modulation of cGMP signaling and other unidentified mechanisms. Moreover, we have biochemically identified miRNAs that are highly expressed in neurons and verified the function of several of these individual miRNAs in dauer formation. Again, the use of the unc-3(lf) mutant background has allowed us to discover physiologically important functions of these individual miRNAs. For example, a previous study on miR-124 observed no overt phenotypes for the mir-124(lf) mutant [30]. But, a physiological function of miR-124 is revealed within the unc-3(lf) mutant background. Thus, within various genetic backgrounds, the biological function of miRNAs can be further elucidated.
Our investigation into the functions of neuronal miRNAs highlights the complexity behind miRNA regulation and genetic redundancy. We identified a function of several individual miRNAs in modulating dauer formation, revealing that multiple miRNAs can function similarly in modulating the same biological process. Because these miRNAs have different seed sites, it is likely that they target different subsets of mRNAs, which may or may not overlap. Moreover, it is likely that the collective misregulation of many miRNA target genes, in combination with the genes misregulated in the unc-3(lf) background, interacts to produce an overt phenotype, and dissecting the specific miRNA::target interactions will prove difficult.
Furthermore, even though we observed that the collective function of miRNAs is important in modulating dauer formation, we also found that this function depends on the spatial expression of miRNAs. We observed that miRNAs can either function neuronally to repress dauer formation, or partially in the intestine to promote dauer formation. Combined with the observation of a previous study, our data suggest that the differing spatial expression of miR-58 family members may allow for opposing biological functions, adding to the complexity of the system [2]. Thus, analyzing the function of miRNAs in a specific tissue can allow for a clearer understanding of their impact on cellular and organismal physiology, as exemplified previously [4].
By identifying miRNAs and potential mRNA targets within neuronal tissue through our systematic co-immunoprecipitation analyses, we discovered that neuronal miRNAs target mRNAs involved in many different neuronal activities. As an example, we found that genes enriched in dauer larvae were overrepresented in our neuronal miRISC IP. It is therefore plausible that miRNAs serve as a parallel mechanism to repress these genes, independent of dauer signaling pathways, under non-dauer-inducing conditions. However, we also found that miRNAs target mRNAs involved in known dauer signaling pathways, possibly reinforcing the robustness of these mechanisms. Interestingly, targeted mRNAs both promote and repress dauer formation. The precise mechanism by which mRNAs that promote and repress dauer formation differentially associate with neuronal miRISCs under different environmental conditions is intriguing but remains unclear. However, studies investigating changes in miRNA expression in response to dauer formation may provide some insight into this phenomenon [41,42]. Interestingly, these studies identify changes in mir-234 in response to stress, and we identified a potential function of mir-234 in repressing dauer formation in this study.
In addition to mRNAs involved with dauer formation, we also found mRNAs that are enriched in GABAergic and touch receptor neurons. However, this systematic approach is unable to distinguish whether these GABAergic mRNAs are expressed in GABA neurons to maintain function, or in other neurons to repress GABAergic qualities in non-GABA neurons. The same reasoning applies to the touch receptor neuron genes, and these processes warrant further investigation. Aside from these processes, we unexpectedly found that neuronal mRNAs seem to have a higher propensity for being regulated by miRNAs than mRNAs in the intestine in C. elegans. We believe that the constant role of neurons in relaying messages throughout the organism requires tight regulation of neuronal processes at multiple levels to ensure the integrity of neuronal signaling; this is, in part, exerted by miRNAs. The observation that many GO terms involved in neuronal activity were enriched in our neuronal miRISC IP reveals the broad functions of neuronal miRNAs and supports the idea of neuronal miRNAs maintaining neuronal activity. Additionally, miRNA regulation may be well suited for regulating neuronal genes that require translation along the axon.
Overall, our investigation of miRNA function in neurons emphasizes the complexity of miRNA function. By identifying hundreds of mRNAs associated with neuronal miRISCs, we have only begun to understand the physiological roles of these mRNAs. We have already identified a functional role of miRNAs in modulating dauer formation, but the specific miRNA::target interactions responsible for this process will be difficult to dissect. Like dauer formation, other functions of neuronal miRNAs may be masked by other neuronal controls of gene expression, and using sensitized backgrounds that compromise these other controls of gene expression may help uncover other important roles of neuronal miRNA regulation.
Materials and Methods

Generation of transgenic lines
The transgenic strains used to perform immunoprecipitations were constructed as previously described, with the addition of a 3 kb region upstream of rgef-1 used as a pan-neuronal promoter [4,10]. For transgenic animals used in dauer assays, AIN-1 was translationally fused to GFP in pBSIIKS. The 3.1 kb region upstream of ain-1 was used as the endogenous promoter, a 3 kb region upstream of rgef-1 was used as the neuronal promoter, and a 3.1 kb region upstream of ges-1 was used as the intestinal promoter. A fosmid spanning unc-3 was used for transgenic rescue (WRM0622bH08). Transgenic animals used in dauer assays were generated through microinjection.
Dauer assays
L4/young adults grown at 20°C were singled onto OP50 plates and placed at 25°C. Progeny were scored 65-90 hours after worms were picked. Each trial consisted of three plates. At least three biological replicates were completed for each strain unless otherwise noted. The total percent of dauer progeny from the three plates was counted, and larvae younger than L2 were ignored. For transgenic animals, L4/young adults containing the transgene of interest were singled. Scoring distinguished between transgenic and non-transgenic siblings within the same plate. The percent of dauer formation for both transgenic and non-transgenic animals was normalized to the percent of dauer progeny of non-transgenic animals. Assays comparing miRNA mutants were done by blindly scoring plates of unc-3(lf) vs. unc-3(lf); mir(lf) and unc-3(lf) ain-1(lf) vs. unc-3(lf) ain-1(lf); mir-(lf). Additionally, assays involving additional mutations in an unc-3(lf) or unc-3(lf) ain-1(lf) background were compared to unc-3(lf) or unc-3(lf) ain-1(lf) clones derived from non-mutant siblings of heterozygous parents, unless otherwise noted. For example, in assays comparing unc-3(lf) and unc-3(lf); mir-58(lf) mutants, both mutants were derived from the same unc-3(-/-) mir-58(+/-) parent to minimize genetic differences. Also, assays comparing specific mutants were run on the same day, unless otherwise noted.
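A trivial sketch of the relative dauer rate normalization described above; the function and numbers are illustrative only.

def relative_dauer_rate(pct_transgenic, pct_nontransgenic):
    """Normalize dauer rates to non-transgenic siblings on the same plate
    to control for plate-to-plate variation, as described in the text.
    """
    return pct_transgenic / pct_nontransgenic

# Hypothetical plate: 12% dauers among transgenic animals,
# 30% among non-transgenic siblings.
print(relative_dauer_rate(12.0, 30.0))  # 0.4 -> transgene suppresses dauer formation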
8-bromo-cGMP supplementation
The procedure was performed as described previously with the following exceptions [25]. 8-bromo-cGMP was used at a concentration of 6.5 mM, added to 100 µl of fresh overnight OP50 culture, and then spotted onto NGM plates.
Immunoprecipitation and microarray analysis
The procedure was performed on a mixed-stage population of worms as described previously with the following exceptions [4,10]. First, four of the five biological replicates were analyzed using an Agilent C. elegans microarray chip. Testable genes were defined as having reliable signals in two different biological replicates on the Agilent chip. Multiple probes for the same gene were removed, keeping the probe with the lowest p-value as defined by a one-tailed t-test comparing all testable probes versus probes for a designated gene. The testable data were then supplemented with the enrichment values (as percent ranks) of a fifth replicate from another microarray platform (which had a single probe for each gene), and statistical significance was re-calculated. In order to utilize the data from both microarray platforms, we decided that keeping the probe with the lowest p-value was the best way of compiling the data. The percent ranks were not altered by filtering out these probes.
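To make the probe-collapsing and percent-rank steps concrete, here is a hedged Python/pandas sketch; the column names ('gene', 'probe_id', 'p_value') and helper names are illustrative assumptions, not the authors' pipeline.

```python
# Sketch of the probe-collapsing and percent-rank steps described above.
import pandas as pd

def collapse_probes(probes: pd.DataFrame) -> pd.DataFrame:
    """Keep, for each gene, the single probe with the lowest p-value."""
    idx = probes.groupby("gene")["p_value"].idxmin()
    return probes.loc[idx].reset_index(drop=True)

def add_percent_ranks(df: pd.DataFrame, enrichment_cols: list[str]) -> pd.DataFrame:
    """Express each replicate's enrichment (IP vs. Total) as a percent rank,
    making replicates from different microarray platforms comparable."""
    out = df.copy()
    for col in enrichment_cols:
        out[col + "_pct_rank"] = out[col].rank(pct=True)
    return out
```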
MicroRNA cloning and analysis, and 3′ UTR analysis
Small RNA isolation, cloning, and analysis were performed essentially as described previously for the L4-staged immunoprecipitations [4]. However, the procedure was performed on a mixed-staged population of worms instead of an L4-synchronized population. For the miRNA analysis, the relative concentration of a miRNA was given by (# reads of miRNA)/(# total reads in experiment). This relative concentration was then converted to relative abundance by normalizing every relative concentration to the lowest relative concentration. The relative abundance was then expressed in log2, which was used to compare IP RNA vs. Total RNA. A nonparametric median test comparing 3′ UTR lengths can be found at http://www.fon.hum.uva.nl/Service/Statistics/Median_Test.html. To limit data input, we compared 400 points for each dataset (every 0.25 percentile length).
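The relative-abundance calculation above can be written out as a short Python sketch; the miRNA names and read counts below are invented for illustration.

```python
# Sketch of the miRNA relative-abundance calculation described above.
import numpy as np

def log2_relative_abundance(read_counts: dict[str, int]) -> dict[str, float]:
    """Relative concentration = reads / total reads; relative abundance =
    each relative concentration normalized to the smallest one, in log2."""
    total = sum(read_counts.values())
    rel_conc = {m: n / total for m, n in read_counts.items()}
    floor = min(rel_conc.values())
    return {m: float(np.log2(c / floor)) for m, c in rel_conc.items()}

print(log2_relative_abundance({"mir-58": 5000, "mir-71": 1250, "mir-124": 10}))
# mir-124 (lowest) -> 0.0; mir-58 -> ~8.97; mir-71 -> ~6.97
```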
Quantitative RT-PCR
The procedure was done on asynchronous, well-fed worms grown on OP50 as described previously [4].
Accession numbers
The Gene Expression Omnibus accession number for the microarray data reported in this paper is GSE45871.
Supporting Information
Dataset S1 Neuronal miRISC IP - deep sequencing. Analyzed deep sequencing data from 4 biological replicates. Dataset shows relative enrichment values and p-values from a Student's t-test. (XLSX)

Dataset S2 Neuronal miRISC IP - microarray. Analyzed data from 5 biological replicates on two different microarray platforms. Replicates B-E were run with Agilent chips. Replicate A came from another microarray platform. The enrichment value (IP vs. Total) of each gene in each replicate is expressed in terms of percent ranks as described previously [4]. (XLSX)

Figure S1 Additional dauer assays. A, D. Chart showing percentage of dauer of indicated strains; two biological replicates were done. B, C. Relative rate of dauer formation of indicated strains at the indicated temperature. Data are the average of at least two independent lines. E, F. Rates of dauer formation of indicated strains. (PDF)

Figure S2 qPCR of potential miRNA targets. Chart showing the relative log2 enrichment of the indicated gene in ain-1(ku322 lf); ain-2(tm1863 rf) vs. N2 from four biological replicates. * indicates only three biological replicates were done. (PDF)

Figure S3 Relevant to Figure 5. Neuronal genes are likely miRNA targets. A. Chart showing the overlap and enrichment of highly expressed genes in various tissues within the neuronal miRISC dataset. A Student's t-test was used to determine differences in average percent rank against all testable genes (average rank of 0.48). P-values and expected values were calculated using a hypergeometric distribution. Percentages were calculated from 100*[(# genes in overlap)/(# genes associated with neuronal AIN-2)]. B. Chart showing median 3′ UTR length and percent of 3′ UTRs with at least one perfect 7-mer binding site to all annotated miRNAs; methods for statistics are similar as before. C. Bar graph showing the relative seed density of perfect 7-mers in the 3′ UTRs of highly expressed intestinal and neuronal genes. The analysis is also done for miRNAs that were enriched and statistically significant in neurons. D. Chart showing the average percent rank of highly expressed neuronal genes (from ref. 11) within a given dataset. A Student's t-test was used to determine statistical significance. (PDF)
Cross-Sectional and Time Series Data as the Basis for Panel Modelling: The Case of Kidnappings in México From 2010 to 2019
This paper presents the elements involved in building a panel data model from both cross-sectional and time-series dimensions, as well as the assumptions required for its application, with the objective of focusing on the main elements of panel data modelling: its construction, the estimation of its parameters and their validation. Following the methodology of operations research, a practical exercise is carried out to estimate the number of kidnapping cases in Mexico from several economic indicators. Of the two types of panel data models analyzed in this research, the best fit is obtained with the random-effects model, and the most meaningful variables are Gross Domestic Product growth and the informal employment rate in each state over the period 2010 to 2019. Thus, it is illustrated that panel data models can fit such data better than alternatives such as linear regression and time-series analysis.
Introduction
Nowadays, social, economic, financial and biological phenomena, among others, largely show complex behaviors, mainly due to the structure of the data, which tend to be either cross-sectional (evaluation of the phenomenon at a certain point in time) or time series (evaluation of the phenomenon through time). That is, according to Lavado (2012):

Cross-sectional data: $Y_i,\; i = 1, 2, \dots, N$, where $i$ indexes the objects observed at a specific moment in time. (1)

Time-series data: $Y_t,\; t = 1, 2, \dots, T$, where $t$ indexes a specific moment in time. (2)

Chart 1. Data types. Source: Econometría de corte transversal (Lavado, 2012).

Examples of these types of data include: the estimation of gasoline prices during the period 2000-2018, taking as a reference the crude oil price and the economic growth in that period; the growth of a plant over a period of 125 days as a function of the quantity of fertilizer and water applied, as well as the time of exposure; and the variation of global temperature over the last 150 years as a function of greenhouse gas emissions and economic growth.
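To make the distinction between the data types concrete, here is a small illustrative Python sketch; the states and figures are invented for illustration only.

```python
# Illustrative only: hypothetical data showing the three layouts.
import pandas as pd

# Cross-sectional: many objects, one point in time
cross_section = pd.DataFrame(
    {"state": ["Sonora", "Jalisco", "Puebla"], "kidnappings_2019": [12, 30, 21]}
)

# Time series: one object, many points in time
time_series = pd.Series([18, 25, 30], index=[2017, 2018, 2019], name="kidnappings")

# Panel: many objects observed over many points in time
panel = pd.DataFrame(
    {
        "state": ["Sonora", "Sonora", "Jalisco", "Jalisco"],
        "year": [2018, 2019, 2018, 2019],
        "kidnappings": [10, 12, 25, 30],
    }
).set_index(["state", "year"])  # the (i, t) structure used by panel models
print(panel)
```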
In view of these phenomena, the main goal of this paper is to present the elements comprising panel data models, their construction, the estimation of their parameters and their validation. To fulfill this goal, a practical application is presented to estimate the number of kidnapping cases in Mexico from 2010 to 2019, taking as a frame of reference different economic indicators such as Gross Domestic Product (GDP), economic growth, unemployment rate and employment informality rate in each of the Mexican states.
The study follows the methodology of operations research, which comprises the following phases (Ackoff & Sasieni, 1977): Phase I: Mathematical formulation: a linear association is set out between the dependent and independent variables.
Phase II: Model estimation: the parameters are estimated; two models are tested, the fixed-effects model (FEM) and the random-effects model (REM), and by means of hypothesis testing the model with the better fit is selected. In addition, the significance level is assessed.
Phase III: Model validation: to validate the model, the following assumptions must be fulfilled: normally distributed residuals, homoscedasticity, and no collinearity among the independent variables.
Phase IV: Interpreting results: once the model has been validated, the parameters are interpreted and projections of the phenomenon are made.
For this practical exercise, RStudio was used, since its programming language allowed the analysis to be carried out efficiently.
Theoretical Background
Panel data models arise when information on the phenomenon is available over time for a sample of individual units; in other words, there is a variable $Y_{it}$ in which $i = 1, 2, 3, \dots, N$ indexes the observed objects over $t = 1, 2, 3, \dots, T$ periods of time (Arellano, 1991).

Chart: panel data structure. Source: Modelos de Datos Panel (Albarrán, 2010).

According to Lavado (2012), its mathematical expression is:

$Y_{it} = \beta_0 + \beta_1 X_{it} + u_{it}$ (3)

where: $Y_{it}$ is the expected value of the phenomenon under study for object $i$ at a specific point in time $t$; $X_{it}$ is the independent variable that may affect the behavior of the phenomenon under observation for object $i$ at a specific point in time $t$; $u_{it}$ is the margin of error that cannot be explained by the linear association between $Y$ and $X$; and $\beta_j$, $j = 0, 1$, are the parameters to be estimated through the method of least squares.¹

¹ The method of least squares consists in minimizing the sum of the squares of the vertical distances between the data values and the estimated regression, i.e., reducing the residual sum of squares, the residual being the difference between the observed data and the values given by the model (Mendenhall, Wackerly & Scheaffer, 2008).

The main purpose of panel data models is to capture non-observable heterogeneity, which is not taken into consideration in traditional regression models and may cause negative effects in the estimation of the phenomenon under study. Panel data models are classified into fixed-effects models (FEM) and random-effects models (REM).
In the FEM, it is assumed that the differences among the objects of study can be captured through differences in the constant term, which are deterministic. According to Baronio & Vianco (2014):

$\mathrm{cov}(X_{it}, Z_i) \neq 0$ (4)

such that:

$Y_i = i\,\alpha_i + X_i \beta + u_i$ (5)

where $i$ denotes a column vector of ones.
A drawback of this method arises with large samples: the within transformation, which handles the variables as deviations from the temporal mean of each object, removes the object effect and, as a consequence, prevents analyzing the effect of variables that are invariant in time.
In the REM, by contrast, the individual effects are not independent among themselves; rather, they are randomly distributed around a given value. These models contemplate not only the impact of the independent variables but also the specific features of each cross-sectional unit. According to Baronio & Vianco (2014), the model is expressed as:

$Y_{it} = \alpha + \beta X_{it} + u_i + e_{it}$ (6)

where $u_i$ is the random disturbance that allows distinguishing the effect of each individual in the panel.
For the purpose of estimation, the stochastic components are grouped, so that the outcome, according to Torres (2007), is:

$Y_{it} = \alpha + \beta X_{it} + w_{it}$, where $w_{it} = u_i + e_{it}$ (7)

According to Labra & Torrecillas (2014), these models assume that the individual effects are not correlated with the independent variables,
such that:

$\mathrm{cov}(B_i, X) = 0$

where $B_i$ are the individual effects and $X$ are the independent variables.

To decide which model fits better, the Hausman test is used, which consists of comparing the $\hat\beta$ coefficients obtained from the estimators of both models, FEM and REM, in order to identify whether the differences between them are significant. The hypothesis statement is the following (Ramoni & Orlandoni, 2013):

$H_0: \hat\beta_{FEM} - \hat\beta_{REM} = 0$
$H_1: \hat\beta_{FEM} - \hat\beta_{REM} \neq 0$

If the p-value is higher than the significance level ($\alpha$), $H_0$ is not rejected: there is no correlation between the individual effects and the independent variables, and therefore the random-effects estimator must be used. Once the model with the best fit is selected (FEM vs. REM), it must fulfill the following assumptions (Molina & Rodrigo, 2010): the residuals must be close to a normal distribution, $u_{it} \sim N(0, \sigma^2)$, and be homoscedastic; and if the VIF (variance inflation factor) of an independent variable is higher than 5 units, there is high collinearity. In light of the foregoing, an exercise is conducted that encompasses all the tools for the construction of panel data models, beginning with a descriptive analysis of the information and ending with the fitting of the model with the best fit.
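Although the paper carries out the estimation in R (via RStudio), the same comparison can be sketched in Python. The following is a minimal, illustrative sketch assuming a hypothetical DataFrame `df` indexed by (state, year) with invented column names; it uses the third-party linearmodels and scipy packages, and is not the authors' code.

```python
# Minimal sketch: FEM vs. REM estimation and a Hausman test.
# Assumes `df` is a pandas DataFrame with a (state, year) MultiIndex.
import numpy as np
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

y = df["kidnappings"]
X = df[["gdp", "growth", "informality", "unemployment"]]

fem = PanelOLS(y, X, entity_effects=True).fit()  # fixed-effects model (FEM)
rem = RandomEffects(y, X).fit()                  # random-effects model (REM)

# Hausman statistic: H = (b_FE - b_RE)' [V_FE - V_RE]^(-1) (b_FE - b_RE)
common = list(X.columns)
d = (fem.params[common] - rem.params[common]).values
V = (fem.cov.loc[common, common] - rem.cov.loc[common, common]).values
H = float(d @ np.linalg.inv(V) @ d)
p_value = stats.chi2.sf(H, df=len(common))

# If p_value > alpha (0.05), H0 is not rejected and the REM is preferred.
print(f"Hausman H = {H:.3f}, p-value = {p_value:.3f}")
```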
Construction of the Model
One of the main problems Mexican society faces is insecurity, a phenomenon that has grown at an accelerated pace. Among its manifestations, kidnapping has been one of the most damaging practices. Based on the Mexican Legal Dictionary, from the point of view of criminal law this activity is defined as follows (Cámara de Diputados, 2019): "the seizure and retention of a person for the purpose of ransom in money or in goods", and in Mexico it is also referred to as "plagio". Studies have shown that from 1970 to 1984 Mexico presented very low numbers of kidnappings (300 cases). After this period, the activity accelerated strongly, so much so that in 2012 the Public Security Bureau reported more than 1,117 cases, and in 2019 the same agency reported more than 1,206 cases (Yam & Trujano, 2014).
According to the World Bank, developing countries where this illicit activity manifests exhibit very unfavorable economic indicators for their populations: they are characterized by high unemployment rates, low economic growth, very weak tax collection and a high rate of informal economy, with the worsening of human capital as a consequence (González, 2012).
In this context, the aim is to estimate the degree of incidence that these economic variables have on the kidnapping rate of each Mexican state during the period from 2010 to 2019, that is:

$Y_{jt} = \beta_0 + \beta_1 X_{1jt} + \beta_2 X_{2jt} + \beta_3 X_{3jt} + \beta_4 X_{4jt} + \beta_5 X_{5jt} + u_{jt}$ (8)

where:
$Y_{jt}$ is the number of kidnapping cases in the j-th entity at time t;
$X_{1jt}$ is the Gross Domestic Product, in millions of Mexican pesos, of the j-th entity at time t;
$X_{2jt}$ is the economic growth rate, in percentage terms, of the j-th entity at time t;
$X_{3jt}$ is the employment informality rate of the j-th entity at time t;
$X_{4jt}$ is the unemployment rate of the j-th entity at time t;
$X_{5jt}$ is the time elapsed for the j-th entity.
Having identified the variables that theoretically have an impact on the phenomenon, the estimation of its dynamics is carried out: the mathematical formulation, the estimation of the parameters, their validation and, finally, the interpretation of the results.
Mathematical Formulation
From the foregoing, the model to estimate is the one specified in equation (8), fitted under both the fixed-effects and the random-effects specifications. When comparing both models, it can be observed that in the fixed-effects model the variables $X_{1jt}$ (GDP) and $X_{4jt}$ (unemployment rate) are significant at the 0.05 level, while in the random-effects model the variable $X_{1jt}$ is significant. However, the random-effects model presents a better fit, since its coefficient of determination ($R^2$) is greater than that of the fixed-effects model. Therefore, the REM is more suitable to predict the dynamics of kidnapping cases in Mexico.
Chart 5. Hypothesis test to choose the model with the best fit. Source: own elaboration.

The information above is confirmed by the Hausman test (Chart 5): since its p-value is higher than the 0.05 significance level, the null hypothesis is not rejected, so the random-effects model is the most appropriate and shows the better fit.
Estimation of the Model
With respect to the random-effects model, eliminating the variables that are not significant and making the information symmetrical, the regression to estimate becomes:

$Y_{jt} = \beta_0 + \beta_1 X_{1jt} + \beta_2 X_{2jt} + u_{jt}$ (11)

where $X_{1jt}$ is the GDP and $X_{2jt}$ is the employment informality rate (the regressors having been renumbered after the elimination). Having run the data, the estimates in Chart 6 are obtained.

Chart 6. Model with the best fit. Source: own elaboration.
Substituting the estimated parameters into equation 11 yields equation 12. With a confidence level of 0.95 and a significance level of 0.05, equation 12 retains 31.05 percent of the variability of the data; that is, the equation explains 31.05 percent of the dynamics of kidnapping cases in Mexico.
Validation of the Model
Taking equation 12 as a reference, the validation of the model proceeds through the fulfillment of the following assumptions:

Chart 7. Fulfillment of the assumptions. Source: own elaboration.

As Chart 7 shows, the estimated model fulfills all the assumptions: its residuals are close to a normal distribution, are homoscedastic and are not correlated; at a significance level of 0.05, the p-value of each assumption test is found above the significance level (α = 0.05).
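As an illustration of these Phase III checks, the following hedged Python sketch applies standard tests (Shapiro-Wilk for normality, Breusch-Pagan for homoscedasticity, and the variance inflation factor for collinearity), reusing the hypothetical `rem` and `X` objects from the previous sketch; it is not the procedure used by the authors.

```python
# Sketch of the model-validation checks, with thresholds from the text
# (alpha = 0.05 for the tests, VIF > 5 signalling high collinearity).
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

resid = np.asarray(rem.resids)

# 1. Residual normality (Shapiro-Wilk): p > 0.05 -> close to normal
_, p_norm = stats.shapiro(resid)

# 2. Homoscedasticity (Breusch-Pagan): p > 0.05 -> constant variance
Xc = sm.add_constant(np.asarray(X))
_, p_bp, _, _ = het_breuschpagan(resid, Xc)

# 3. Collinearity: VIF above 5 units signals high collinearity
vifs = [variance_inflation_factor(Xc, i) for i in range(1, Xc.shape[1])]

print(f"normality p={p_norm:.3f}, Breusch-Pagan p={p_bp:.3f}, VIFs={vifs}")
```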
Interpretation of Data
With respect to equation 12, the interpretation of its parameters is the following: for each additional $1,000 pesos in $X_{1jt}$ (Gross Domestic Product), one additional kidnapping case is expected in the country, the rest of the variables remaining constant; for each percentage point that $X_{2jt}$ (employment informality rate) rises, 9 additional kidnapping cases are expected in the country, the other variables remaining constant. With this model there is enough evidence of the effects produced by the Gross Domestic Product and the employment informality rate, the latter showing the higher incidence. Furthermore, in this model time is not an element that determines the behavior of the phenomenon under study.
Summary
As observed, the construction of a panel data model requires information with both cross-sectional and time-series dimensions, the aim being to estimate the dynamics of a phenomenon with these features, which is often difficult to model through linear regression or time-series analysis.
One of the benefits of this type of model is the ability to estimate heterogeneous objects, which is not possible with linear regression (which handles the information in a homogeneous way) or with time-series analysis (which depends on the asymptotic properties of the temporal dimension and therefore needs a sufficiently large number of observations), shortcomings that reduce the fit to the information.
With panel data models, richer information about the phenomenon is captured; in other words, they collect observations on multiple objects of the phenomenon under study over specific periods of time.
Bacterial Alkyl-4-quinolones: Discovery, Structural Diversity and Biological Properties
The alkyl-4-quinolones (AQs) are a class of metabolites produced primarily by members of the Pseudomonas and Burkholderia genera, consisting of a 4-quinolone core substituted by a range of pendant groups, most commonly at the C-2 position. The history of this class of compounds dates back to the 1940s, when a range of alkylquinolones with notable antibiotic properties were first isolated from Pseudomonas aeruginosa. More recently, it was discovered that an alkylquinolone derivative, the Pseudomonas Quinolone Signal (PQS) plays a key role in bacterial communication and quorum sensing in Pseudomonas aeruginosa. Many of the best-studied examples contain simple hydrocarbon side-chains, but more recent studies have revealed a wide range of structurally diverse examples from multiple bacterial genera, including those with aromatic, isoprenoid, or sulfur-containing side-chains. In addition to their well-known antimicrobial properties, alkylquinolones have been reported with antimalarial, antifungal, antialgal, and antioxidant properties. Here we review the structural diversity and biological activity of these intriguing metabolites.
Introduction
The bacterial alkylquinolones are a class of microbial metabolites consisting of a 4-quinolone core, typically substituted with alkyl groups, most often at the 2-position [1]. The best-known of these are the Pseudomonas Quinolone Signal (PQS, O7), a potent modulator of quorum sensing behaviour in the Pseudomonas genus, and its biosynthetic precursor, 4-hydroxy-2-heptylquinoline (HHQ, H7). The discovery of these alkylquinolones began with the use of a Pseudomonas aeruginosa (Bacillus pyocyaneus) extract by Bouchard for the prevention of anthrax in rabbits [2]. Emmerich and Löw later demonstrated the antibiotic activity of the cell-free extracts of P. aeruginosa and named the antibiotic extract 'pyocyanase', because at that time the antibiotic activity was attributed to the presence of enzymes [3]. Several years later, Hays et al. isolated the actual antibiotic substances from P. aeruginosa and confirmed that they were small molecules. The names Pyo Ib, Ic, II, III and IV were proposed and their antibiotic activity was demonstrated [4], though it took several years for the exact structures of these compounds to be determined. After studies involving chemical degradation, UV spectrophotometry, and total synthesis, the Pyo compounds were identified as alkylquinolones [5,6].
Initially, only the antimicrobial activity of these molecules was the focus of attention, but in 1999 Pesci et al. made a breakthrough when they found that cell-cell communication in P. aeruginosa was not solely a function of the homoserine lactones. They isolated 3-hydroxy-2-heptylquinolone (O7) and named it the Pseudomonas Quinolone Signal (PQS), describing its activity as an auto-inducer, acting as a pivotal element in the quorum sensing system [12]. Later, other alkylquinolones responsible for quorum sensing were also identified [13]. It was also discovered that P. aeruginosa was not the only microbial species capable of producing alkylquinolones: bacteria belonging to the Alteromonas, Pseudoalteromonas and Burkholderia genera have been shown to produce compounds from this class; so far, more than 57 alkylquinolones have been isolated from different sources. The Pseudomonas Quinolone Signal itself has recently been reviewed in detail [14]; however, the properties of the wider alkylquinolone class are less well-described. Thus, in this review, we take a look at the microbial alkylquinolones in general, their structural diversity, bioactivity and their role in quorum sensing.
Structural Diversity and Distribution
The central structural motif in the alkylquinolones is the 4-quinolone core, an aromatic nitrogen-containing heterocyclic compound that can participate in both electrophilic and nucleophilic substitution reactions (Figure 1). The quinoline scaffold commonly exists in various natural products, exhibiting a broad range of biological activities [15]. In the context of the alkylquinolones, the quinolone nucleus forms the core around which the diverse substituents are arranged. Most of these molecules have alkyl (both saturated and unsaturated) substitution at position 2, though substitutions at the quinolone nitrogen atom and C-3 positions are also relatively common [1]. Many of the quinolones exist in equilibrium between the 4-quinolone and 4-hydroxy-quinoline tautomeric forms [16]; the predominance of one over the other is determined largely by the pH [17,18] (Figure 1). Heeb et al. described a nomenclature based on structural predominance at physiological pH [19]. For the purposes of this review, we have represented the structures in the quinolone form wherever possible. The bacterial quinolones discussed in this review can be divided into six major categories, some of which are widely distributed, whilst others are found in only a few rare microbial strains (Figure 2).

The first of these are the classical alkyl-4-quinolones, also known as the hydroxy-alkylquinolones (HAQs), characterized by HHQ and its derivatives, which are substituted solely at the 2-position. The side-chains are most often linear saturated or monounsaturated alkyl chains. While these are most strongly associated with the Pseudomonas genus, examples have also been reported from the Alteromonas, Pseudoalteromonas, and Burkholderia genera (Table A1). By far the most widely distributed of these compounds is HHQ itself, but numerous other examples have been elucidated over the years. Alkylquinolones with 1- and 4-11-carbon linear chains have been isolated and fully characterized, though MS-based studies have detected quinolone derivatives with 1-13 carbon side-chains [20][21][22]. Alkylquinolones with odd numbers of carbons are more commonly found than those with even numbers of carbon atoms, with seven-carbon and nine-carbon examples being particularly prominent; this may reflect their biosynthesis [23]. More recently, several branched-chain examples have been described from Pseudomonas (H7a, H8b) [24] and Pseudoalteromonas (H5a, H6a) strains, respectively [22]. Numerous alkylquinolones with unsaturated alkyl chains have also been described (Figure 2), the best known and most widely distributed of which is 2-(Δ1′-nonenyl)-4-quinolone (H9Δ1), one of the originally reported Pyo series of compounds [6]. The position of the double bond can vary, with quinolones being reported with double bonds at the 1′, 2′, 3′, and 4′ positions (Table A1). While the majority of alkylquinolones reported have simple saturated or unsaturated hydrocarbon side chains, a handful have been reported with more complex substituents. These include one example containing a cyclopropyl ring in the side chain (H12a), first reported from a P. aeruginosa strain [25], and sulfide- and benzyl-substituted quinolones (H3a, H7b) from a Chinese P. aeruginosa isolate [24].
The second class are the Pseudomonas Quinolone Signal (PQS) and its derivatives, which are hydroxylated at the 3 position. PQS itself (O7), which possesses a seven-carbon alkyl chain at the 2-position, was initially reported from a P. aeruginosa strain in 1959, long before its importance as a quorum sensing agent was known [26]. Since then, only a single additional analogue has been isolated, the nonyl-substituted O9 from a Streptomyces species [27], however, other hydroxylated derivatives have been detected using mass spectrometry in several Pseudomonas strains [21,28].
Next are the alkyl-4-quinolone N-oxides (AQNOs), which attracted much of the early attention on the quinolones due to their significant antibacterial activity. The N-oxides also exist as tautomers (hydroxylamine and N-oxide forms). The most commonly encountered examples are the heptyl- and nonyl-substituted derivatives, though eight- and eleven-carbon analogues have also been reported, along with a small suite of unsaturated derivatives (Table A1). To date, bacterial quinolone N-oxides have only been isolated from members of the Pseudomonas genus, with only one exception: a C-3 methylated N-oxide (CN9∆2) produced by an Arthrobacter species [29]. However, a range of unsaturated derivatives have been detected by MS in Burkholderia strains [30].
While the majority of quinolones are alkylated solely at the 2-position, a small number are also alkylated at C-3. The most common of these are a group of quinolones bearing a methyl group at the 3-position, known as hydroxy-methyl-alkylquinolines (HMAQs) or methylalkylquinolones (MAQs), which are widely distributed in Burkholderia species [31]. The first example, 2-(2-heptenyl)-3-methyl-4-quinolone (C7∆2), was reported from Burkholderia pyrrocinia (originally described as Pseudomonas pyrrocinia) in 1967 [32]. Several derivatives have since been discovered: the range of sizes of the C-2 substituents is similar to that of other alkylquinolones, with isolated examples incorporating hydrocarbon chains between 5 and 9 carbon atoms long (Table A1). In contrast to the HAQs, all unsaturated HMAQs reported thus far from the Burkholderia genus possess unsaturation exclusively at the 2-position of the side-chain. A number of C-3 methylated N-oxides have also been detected by metabolic profiling methods (see Section 2), though only one has been isolated and fully characterized [29]. Overall, across all of the quinolone sub-classes substituted at the 2-position, it can be seen that seven- and nine-carbon side-chains are the most common (Figure 3). Special note should be made of an intriguing subclass of the alkylquinolones: those containing prenylated side-chains. In a report by Dekker et al., a suite of quinolones (HG-HGc and CG-CGc) incorporating geranyl-derived side chains was described from a marine Pseudonocardia species, half of which were also methylated at the C-3 position [33]. Some of these compounds were also N-alkylated, a feature common in the plant quinolones but very rare in bacterial examples.
Another class of structurally distinct prenylated quinolones are the quinolone-type aurachins, which are substituted by isoprenoid chains at the C-3 position (Table A2), and have been reported from Stigmatella, Rhodococcus, and Streptomyces species [34][35][36][37]. These are unique in that they are the only reported bacterial 4-quinolones to possess alkyl chains larger than a methyl group at the C-3 position. The last class, the tetrahydroquinolines, are included here as they commonly co-occur with the alkylquinolones and share a biosynthetic origin (Table A2). To date, they have only been reported from Pseudomonas species [4,8,9,24,25]. The first isolation of 3-n-heptyl-3-hydroxy-1,2,3,4-tetrahydroquinoline-2,4-dione (T7) and 3-n-nonyl-3-hydroxy-1,2,3,4-tetrahydroquinoline-2,4-dione (T9) from P. aeruginosa was reported by Hays in 1945 as part of the original Pyo series of quinolones [4]. Subsequently, Kitamura isolated T7 from P. methanica [38]. Budzikiewicz reported that T7 and T9 were formed when P. aeruginosa is grown under iron deficiency [9]. MS profiling studies have revealed the presence of additional analogues with alternative side chains [23].
A handful of bacterial quinolones do not fit into any of these classes. The siderophores quinolobactin and thioquinolobactin are produced by several Pseudomonas fluorescens strains, especially under conditions of iron limitation [39,40]. The metabolite 2,4-dihydroxyquinoline (DHQ) has been reported from both Pseudomonas aeruginosa and Streptomyces sindenensis [20,27].
Metabolic Profiling
In addition to those quinolones that have actually been isolated and characterized, numerous additional derivatives have been detected by MS-based metabolic profiling. One of the first of these studies was conducted by Taylor et al. in 1995, where GC-MS profiling of a clinical isolate of Pseudomonas aeruginosa revealed the presence of a large number of alkylquinolones (Table A3) [20]. The major components were the saturated quinolones H7 and H9, in addition to a number of unsaturated analogues with 7-, 9-, and 11-carbon chains. GC-MS profiling was also used during the course of a biosynthetic study by Bredenbruch et al., which revealed the presence of both PQS and AQ derivatives [28]. Lepine et al. used an LC-MS-based method for the analysis of a P. aeruginosa strain, which revealed several series of quinolone derivatives, including saturated and unsaturated HAQ derivatives, N-oxides, and tetrahydroquinolines. Perhaps most significant were the first detections of C-3 hydroxylated quinolones other than PQS itself. In a 2017 study, Depke et al. used a new MS-based unsupervised clustering method to analyse an extract from P. aeruginosa PA14, identifying a wide range of HAQ, AQNO, and PQS derivatives. Most notable was the first detection of quinolones containing polyunsaturated side-chains, though only for the longer-chain derivatives (C9 and higher) [41]. In a recent study, a rapid LC-MS method was developed for the analysis of both microbial strains and clinical samples: a total of 28 quinolone derivatives were detected [42]. Other genera also produce quinolones: LC-MS analysis of a marine Pseudoalteromonas isolate revealed a suite of saturated alkylquinolone derivatives, including two branched-chain derivatives [22].
Burkholderia strains have also been studied: in a 2008 report, eleven different Burkholderia strains were profiled using LC-MS, three of which produced a range of medium-chain HMAQs, HAQs, and methyl-alkylquinoline N-oxides (MAQNOs), with dramatically different distributions between the three species [31]. In a biosynthetic study on the effects of KynB on quinolone biosynthesis in Burkholderia pseudomallei, LC-MS profiling was carried out, revealing a similar chemical profile to the prior study [43]. A molecular networking study on the effects of the antibiotic trimethoprim on the secondary metabolites of a Burkholderia thailandensis isolate revealed a small suite of C-3 methylated and non-methylated alkylquinolones [44]. A 2020 report described the LC-MS analysis of quinolone derivatives in three microbial strains by multiple reaction monitoring (MRM). With the aid of synthetic standards, the authors were able to confirm the location of the double bonds in the unsaturated derivatives: all quinolones with an unsaturated side chain produced by the Burkholderia strains possessed a double bond at the 2′-position, while those produced by Pseudomonas species had double bonds at either the 1′ or 2′ positions [30].
Biosynthesis
Over the past few decades, the biosynthesis of the alkylquinolones has been elucidated [21,45]. The synthesis of HHQ and PQS (Figure 4) is initiated by the coenzyme A (CoA) ligase PqsA, which catalyzes the activation of anthranilic acid with ATP/Mg2+ to produce the intermediate anthranilyl-AMP. Although 2-ABA-CoA, formed downstream, is highly susceptible to spontaneous cyclization to form 2,4-dihydroxyquinoline (DHQ), which has been shown to be fundamental in P. aeruginosa pathogenicity [50], in vivo this is counterbalanced by the activity of PqsE, which acts as a 2-ABA-CoA thioesterase to release 2-aminobenzoylacetate (2-ABA) [45]. 2-ABA is another branching point in the pathway: it can undergo decarboxylation to 2-aminoacetophenone (2-AA), a secondary metabolite reported to promote chronic infection phenotypes of P. aeruginosa and to modulate the host innate immune response. 2-ABA is transformed into HHQ by the heterodimeric PqsBC bearing an octanoyl chain. Finally, the flavin monooxygenase PqsH oxidizes HHQ into PQS [51]. In addition, 2-ABA can be converted into its hydroxylamine form by the oxidase PqsL and subsequently transformed into 4-hydroxy-2-heptylquinoline-N-oxide (HQNO) by the octanoyl-PqsBC complex [52].
Biological Activity
With their structural diversity, the alkylquinolones also bring a significant range of biological effects. Although the focus has most recently centred on their quorum sensing properties, these molecules were initially discovered as antimicrobial agents. Since then, many of these compounds have been isolated for their antibiotic, antifungal and anti-algal activity against human, plant and animal pathogens.
Earlier Discoveries
Based on the works of Bouchard, Emmerich and Löw, Hays et al. described their antibiotic effect and partially characterized the alkylquinolone antibiotics produced by P. aeruginosa [4]. They employed serial-dilution assays to test the antibiotic effect of crude extracts, which were later identified as mixtures of 2-alkyl-4-quinolones and their N-oxides [4]. These compounds were found to be highly active against Gram-positive bacteria but showed only slight activity against Gram-negative bacteria [4]. It was also shown that the N-oxides N7-N9 and N11 (designated Pyo-II by Hays et al.) were ten times more potent than the reduced compounds, displaying bactericidal activity at high concentrations and bacteriostatic effects at lower concentrations [4,6,7]. Lightbown and Jackson later reported that the N-oxides are also potent antagonists of streptomycin and dihydrostreptomycin, and inhibit the cytochrome systems of heart muscle and certain bacteria by interfering with the respiratory chain [53,54].
Antibacterial Activity
After the earlier discoveries and the introduction of quinolones as front-line antibiotics, a hunt began for new natural antimicrobial molecules based on the quinolone scaffold [16]. The naturally occurring alkylquinolones have proven to be an interesting starting point for the synthesis of molecules with a broad spectrum of activity and lower toxicity [16,19]. Wratten et al. reported the isolation of HHQ (H7) and its shorter congener, 2-n-pentyl-4-quinolone (H5/PQ), from a marine Pseudomonas bromoutilis isolate. These molecules showed antibiotic activity against both Gram-positive and Gram-negative bacteria, including Staphylococcus aureus and the common marine pathogens Vibrio harveyi and Vibrio anguillarum [55]. While investigating the ecological and biogeochemical role of the previously reported alkylquinolone 2-n-pentyl-4-quinolone (H5/PQ) from a marine Alteromonas sp. SWAT 5, it was determined that H5 inhibits the growth and motility of several particle-associated bacteria from different phyla, including α-Proteobacteria, Bacteroidetes and γ-Proteobacteria. It was also found that H5 targets DNA synthesis and motility at concentrations as low as 10 µM [56]. In a 1998 report, Debitus and co-workers isolated several quinolone derivatives (H7, H9, H9∆1, T7, N7) from a sponge-derived Pseudomonad, with H7 displaying strong antimicrobial activity [57].
Homma et al. discovered that Pseudomonas cepacia RFM25 (subsequently reclassified as Burkholderia cepacia) was inhibitory to several soil-borne plant pathogens [58]. Later they managed to isolate 2-(2-heptenyl)-3-methyl-4-quinolone (C7∆2), previously isolated by Hashimoto and Hattori, and 2-(2-nonenyl)-3-methyl-4-quinolone (C9∆2) from the same strain [32]. Both molecules displayed similar activity against Corynebacterium michiganense [58]. Helicobacter pylori infection is one of the most common causes of gastritis and peptic ulcers. While exploring the effects of extracellular substances produced by different bacteria on H. pylori, it was found that a clinical strain of P. aeruginosa inhibited the growth of H. pylori. The active fractions were found to contain 2-heptyl-quinolone (H7), 2-nonyl-quinolone (H9) and their corresponding N-oxides. H7 and N7 were found to be active against both metronidazole-sensitive and metronidazole-resistant strains of H. pylori, with MIC values of 0.1-0.5 mg/mL in an agar-well plate assay [59]. Another group of bacterial quinolones with antimicrobial activity against H. pylori are the geranylated quinolones (CG-CGc; HG-HGc) isolated from Pseudonocardia spp. CL38489, several of which possessed nanomolar activity. The epoxide derivative 3-dimethyl-2-(6,7-epoxygeranyl)-4-hydroxy-quinolone (CGb) was found to be the most potent of all the isolates, with a MIC of 0.1 ng/mL [33]. Because of their inactivity against other microbes, and their consequent potential to pose no harm to the normal gut flora, these molecules hold great promise in clinical settings as anti-ulcer agents. One possible explanation for their selective action was postulated to be the selective inhibition of part of the microaerophilic respiratory chain of H. pylori.
Later, 4-hydroxy-3-methyl-2(1H)-quinolone (CX1) was isolated from Burkholderia sp. 3Y-MMP [66]. This compound had previously been isolated as a plant metabolite from Isatis tinctoria (Brassicaceae) and displayed anti-tuberculosis activity with an IC90 of 6.8 µM [66][67][68]. An extensive isolation report in 2020 described six new HAQs, including the highly unusual sulfide and benzyl derivatives H3a and H7b, in addition to fifteen known AQs, three AQNOs, and the tetrahydroquinoline T7 [24]. While no significant biological studies were reported in the initial isolation report, a subsequent synthetic investigation revealed that the natural products H3a, H7b and H7∆2 inhibited the growth of S. aureus, with the latter reducing growth by over 80% at 1.1 µM. The latter two compounds also reduced the swarming motility of B. subtilis [69].
Kunze et al. described the antimicrobial effect of the aurachins, several of which are isoprenoid quinolone alkaloids characterized by a farnesyl residue, from the myxobacterium Stigmatella aurantiaca [34]. Aurachins C (AC15) and D (AD15) were shown to block NADH oxidation and inhibit the growth of several Gram-positive bacteria including B. subtilis, S. aureus, Arthrobacter aurescens, Brevibacterium ammoniagenes, and Corynebacterium fascians, potentially by interfering with bacterial respiration [34]. Additional antimicrobial aurachins have been reported: Kitagawa et al. isolated aurachin RE (ARE15) from Rhodococcus erythropolis JCM 6824, which exhibited broader and more potent activity than aurachin C [34,35]. It was found to inhibit both Gram-positive (B. subtilis, Nocardia pseudosporangifera, Streptomyces griseus) and Gram-negative bacteria (Sinorhizobium meliloti and Deinococcus grandis). Aurachins Q (AQ15) and R (AR15) from Rhodococcus sp. Acta 2259 displayed low to moderate activity against S. epidermidis, B. subtilis and P. acnes, respectively [34][35][36]. The O-methylated aurachin SS (ASS10), isolated from a Streptomyces strain, possessed reduced antimicrobial activity compared to aurachins C and D, though it is unclear whether this is due to the presence of the methoxy group or the shortened side-chain [37].
Many researchers have studied the structure-activity relationships of the synthetic quinolone antibiotics [70], but those of the bacterial alkylquinolones are not as well established. While it has commonly been observed that the alkylquinolone N-oxides (AQNOs) have a broader spectrum of activity than their reduced counterparts, it is clear that the length, degree of unsaturation and branching of the alkyl chain also play a role. For example, Szamosvári (2017) reported that the unsaturated compound trans-(non-1-enyl)-4-quinolone N-oxide exhibited up to 20-fold higher bacteriostatic activity against Staphylococcus aureus strains than the most potent saturated AQNO [71]. Due to the diverse test organisms and methods used in the studies reviewed in this section, it is difficult to establish firm rules for antibacterial activity. Nonetheless, alkylquinolones possessing an unsaturated chain and either N-methylation or N-oxidation tend to have improved antimicrobial properties. Alkylquinolones with unsaturated carbon chains and methylation at C-3 do tend to possess good antibacterial activity, but such structural modifications are not essential for it [32,64]. An unusual example is the geranylated quinolones, which possess potent and selective activity against H. pylori, with epoxidation of the geranyl residue (CGb) significantly increasing the potency of the molecule [33]. For the aurachins, the addition of a double bond at position 4 of the prenylated chain (AQ15, AR15) results in loss of activity, whereas the addition of a hydroxy group in the side-chain (ARE15) increases the spectrum of activity to a great extent [34,35].
Antifungal and Anti-Oomycete Activity
Fungal and oomycete pathogens cause massive economic losses by affecting crops in both temperate and tropical areas.
During a co-culture experiment it was found that P. aeruginosa exhibited a fungicidal effect on Cryptococcus neoformans. On further investigation it was discovered that the antifungal action was due to pyocyanin and the alkylquinolones PQS (O7) and HHQ (H7) produced in the co-culture [77]. The quinolones H1, H7, H9, H9∆1, H11, O7 and O9 from an alkali-tolerant Streptomyces sindenensis were found to inhibit the growth of C. albicans, one of the most prevalent causes of opportunistic fungal infections in humans [27]. Aurachins C and D were also shown to inhibit the growth of the yeasts Saccharomyces cerevisiae and Debaryomyces hansenii [34]. A small suite of quinolones (C7, C7∆2, H9∆2, C9∆2), isolated from a Burkholderia species, displayed antifungal activity against the fungal pathogens Rhizopus oryzae and Trichophyton rubrum [64]. Mossialos et al. discovered the iron-chelating properties of quinolobactin, isolated from a Pseudomonas fluorescens ATCC 17400 strain [39]. Although quinolobactin (QB), originally reported along with thioquinolobactin in 1980 [78], did not show any significant activity besides iron chelation, it was later demonstrated that quinolobactin is produced by the hydrolysis of the unstable but highly active 8-hydroxy-4-methoxy-2-quinoline thiocarboxylic acid (thioquinolobactin). Thioquinolobactin (TQB) not only possesses iron-chelation properties but also significantly inhibits the growth of Pythium debaryanum, Rhizoctonia solani and Sclerotinia sepivorium [40]. Later, Kilani-Feki et al. also isolated C7 and C7∆2 from B. cepacia and demonstrated their activity against the common food pathogen Aspergillus niger [79,80]. C9∆2 from outer membrane vesicles (OMVs) of B. thailandensis displayed antifungal activity at 100 µM against C. albicans and Cryptococcus neoformans [63]. The alkylquinolones C7∆2, C8∆2 and C9∆2, obtained via the Conrad-Limpach approach and Suzuki-Miyaura coupling reactions, showed moderate activity against C. neoformans [65]. Overall, for the antifungal and anti-oomycete activity of the alkylquinolones, methylation at position 3 and the presence of an unsaturated alkyl chain at C-2 are strong predictors of activity.
Quorum Sensing/Biofilm Formation
The ability of bacteria to communicate and act as a community for collective tasks was underestimated for many years. It was believed that these organisms act on the principle of 'every cell for itself', but it was later realized that bacterial cells communicate by a phenomenon called quorum sensing, or auto-induction [81]. The concentrations of auto-inducer molecules released by a single bacterium are insufficient to bring about behavioural and metabolic changes. Thus, this phenomenon occurs in a cell-density-dependent manner, where the bacteria must reach the critical mass necessary for collective action, activating or suppressing target genes by releasing auto-inducers [82]. The collective action of such molecules and the regulation of an array of genes help these bacteria move to a friendlier environment (with better nutrients), adopt a new growth strategy, or aid in protection from harsh or deleterious environments by biofilm formation [82].
At a transcriptional level, the biosynthesis of alkylquinolones is controlled by PqsR, which activates the expression of the pqsABCDE and phnAB operons in P. aeruginosa [85]. A homologue of the pqsABCDE operon, named hmqABCDEFG, has been found in Burkholderia spp. (Figure 4) [31,86]. Some of these autoinducers, i.e., PQS and HHQ, regulate their own expression by binding to PqsR itself [87]. It has been revealed that HHQ induces a conformational change in PqsR, as binding of PqsR to the pqsA promoter in vitro is enhanced by HHQ, although not as much as with PQS [88]. Other AQs such as 2-nonyl-4-quinolone (H9) can also activate PqsR and as such could potentially be considered autoinducers, although they are not as potent as PQS [84]. Alkylquinolones are highly lipophilic molecules, which might hinder their ability to act as effective signaling molecules; however, several of the biochemical changes they induce work to minimize this problem. PQS increases the production of rhamnolipids, which increases the solubility of this lipophilic molecule within aqueous solutions [89]. Another induced change that may help to overcome solubility issues is the promotion of the biogenesis of outer membrane vesicles (OMVs), which package the highly lipophilic PQS for transport within the population [90]. It has been found that the 3-hydroxy position is absolutely critical for the formation of OMVs and is the reason why PQS, and not HHQ, can stimulate OMV formation [91]. The kinetics of both HAQs and HMAQs show that these molecules start to accumulate near the end of the log phase of growth of the respective bacteria [31].
Besides regulating adaptation responses and signal integration, these alkylquinolone derivatives are primarily involved in the regulation of various genes responsible for virulence. The virulence factors mainly regulated by these auto-inducers include elastase, pyocyanin, hydrogen cyanide, rhamnolipids and LecA lectin [86,92]. The production of these molecules gives a competitive advantage to the bacterium producing them [93]. This advantage is exerted by modulating swarming motility, repressing the growth of rival bacteria by depriving them of iron (i.e., iron chelation), exerting cytotoxicity via the production of reactive oxygen species, and regulating the production of antimicrobial AQs like 4-hydroxy-2-heptylquinoline-N-oxide (HQNO, N7) [91,94-98]. Although not an actual quorum sensing molecule itself, HQNO has been found to induce antibiotic tolerance by triggering programmed cell death. The DNA released due to cell death can induce biofilm formation, making the bacteria resistant to antibiotics. This antibiotic tolerance can be triggered not only in the species producing the molecule, i.e., P. aeruginosa, but also in bacteria in the surrounding environment, e.g., S. aureus [99,100]. It should be noted that the growth-repression function is distinct from the bacteriostatic or bactericidal effect of antibiotics [94]. Biofilm formation is a complex phenomenon whose mechanism is not fully understood, but it has been inferred that alkylquinolone quorum sensing molecules promote biofilm formation via LecA activation and/or the formation of extracellular DNA, and may also promote or inhibit biofilm formation in other bacteria [97,101,102].
Reen et al. discovered that the C-3 position is crucial for the wide range of activities exhibited by PQS and HHQ, including the production of virulence factors and biofilm formation. Absence of this position, or substitution with halogens or an NH group, results in loss of activity in modulating inter-species and intra-species behaviour [103]. It has also been observed that PQS not only acts by transcriptional regulation within the cells but may also interact directly with proteins, e.g., PQS binds MexG (an RND-type efflux pump) and MgtA (a Mg2+ transporter) [104]. It has also been demonstrated that the two quorum sensing systems in Pseudomonas and Burkholderia spp. work interdependently. The acyl-homoserine lactones (AHLs) regulate the expression of HAQs via LasR and RhlR, which bind to PqsR [105]. On the other hand, the hydroxymethyl-alkylquinolines (HMAQs) found in Burkholderia spp. regulate the expression of AHLs, and it has been demonstrated that the methyl group of HMAQs is essential for this function [31]. The role of quinolones in quorum sensing, virulence, and interspecies interactions is summarized in Figure 5.
Miscellaneous Activities
Alkylquinolones have mostly been analyzed for their antimicrobial and quorum sensing abilities. However, despite their limited structural diversity, these quinolones have proven to be remarkably diverse in terms of their activity. 2-Nonyl-4-quinolone (H9), one of the oldest alkylquinolones known from P. aeruginosa, was also isolated from the Brazilian shrub Raulinoa echinata by Biavatti et al. in 2002 and showed moderate antitrypanosomal activity against T. cruzi, with an IC50 of 100.9 µg/mL [6,107]. 2-Undecyl-4-quinolone isolated from Pseudomonas sp. 1531 E7 displayed antiviral activity against HIV at an ID50 of 10−3 µg/mL.
2-n-Heptyl-4-hydroxy-quinoline-N-oxide (N7), 2-n-heptyl-4-quinolone (H7), and 3-n-heptyl-3-hydroxy-1,2,3,4-tetrahydroquinoline-2,4-dione (T7) were analyzed for their anti-asthma activity via 5-lipoxygenase inhibition. Although all these compounds showed 5-lipoxygenase inhibitory activity, N7 proved to be the most potent and selective inhibitor (IC50 = 1.5 × 10−7 M) [38]. N7 was later also shown to significantly suppress antigen-induced bronchoconstriction in guinea pigs and to inhibit histamine release [108]. 2-n-Heptyl-4-quinolone (H7) from the marine Pseudoalteromonas sp. M2 was investigated for its anti-inflammatory activity and proved to be a very promising candidate for neuro-inflammatory disorders, owing to its ability to inhibit NO and ROS production and the expression of iNOS and COX-2 [109,110]. The same compound (H7) proved to be the most potent inhibitor of melanin synthesis when tested for anti-melanogenic activity. A range of AQs also showed strong activity against hepatitis C virus (HCV), with an IC50 of 1.4 ± 0.2 µg/mL and no cytotoxicity. This activity was stronger than that of ribavirin, and an investigation into the mechanism of action revealed that H9 acts against HCV by inhibiting viral entry and replication [111]. H9 has also been found to possess iron-chelating properties [112]. Aurachins, including AC12 and AD12 from Rhodococcus sp. Acta 2259, showed weak inhibition of glycogen synthase kinase 3β (GSK-3β), which could lead to the exploration of these molecules for the treatment of neurodegenerative and other disorders caused by perturbation of GSK-3β [36].
Conclusions
Overall, the alkylquinolones are a chemically and biologically diverse class of compounds. Initially discovered from the Pseudomonas genus, recent studies have shown that these compounds are distributed amongst several genera, including not only Burkholderia, but also more distantly related genera such as Streptomyces and Rhodococcus. Likewise, while many of the earlier-discovered alkylquinolones incorporated only linear C-2 side chains, more recent isolation studies have greatly diversified the number of structural motifs encountered in this structure class. While the majority of studies have focused on their antimicrobial properties (Table A1 in Appendix A, Figure 6), it is clear that these compounds have potential as anti-fungal, anti-malarial, and anti-inflammatory agents as well, and that future investigators would do well to broaden the scope of biological activities under investigation.
Conflicts of Interest:
The authors declare no conflict of interest.
Deep Brain Stimulation Approach in Neurological Diseases
Deep brain stimulation (DBS) originated in the early 1960s; nowadays it is widely used in the treatment of various movement disorders, along with some psychiatric disorders. Advances in DBS for different neurodegenerative diseases and in managing patients with refractory brain disorders are closely tied to developments in technology. Device advancement, together with the safe coupling of DBS to high-resolution imaging, is helping to shape our knowledge of brain-wide networks and circuits linked to clinical outcomes. DBS has also been found to be useful in learning and memory. Moreover, whereas traditional epilepsy surgeries are more complicated, DBS is technologically easier and more feasible. DBS treatment has mild adverse effects, and a number of studies have shown positive outcomes in movement disorders as well as in many kinds of psychiatric disorders.
Introduction
Neuromodulation is a rapidly growing field in the successful treatment of neurological disorders [1,2]. Neurostimulation allows highly flexible alteration of disease symptoms. Many medications fail because severe side effects outweigh their benefit; neurostimulation, by contrast, has long been considered a potential treatment option for several movement disorders [2], and it produces only mild side effects in other disorders, albeit with an unknown mechanism of action [3].
Types of Neuromodulation techniques
1. Deep brain stimulation (DBS) is an approved option for the treatment of intractable forms of various diseases. It involves modulating dysfunctional neuronal networks by long-term electrical stimulation, delivered through electrodes implanted at the target neurological site to excite the neuronal circuits [4]. In recent years, the evolution of DBS has revolutionized the treatment of several neurological diseases, especially movement disorders [5].
2. Vagal nerve stimulation (VNS) uses a device to stimulate the vagus nerve via electrical impulses. VNS is very helpful for people who have not responded to medication.
Deep brain stimulation (DBS)
DBS is an electrode implantation method that uses stereotactic techniques to place electrodes in deep regions of the brain in order to modulate neuronal function, with the intention of treating neurological and psychiatric conditions. An implantable pulse generator (IPG) is attached below the clavicle. The IPG is battery-powered and delivers the electrical stimulation, which patients can regulate externally with a remote. Stimulation parameters such as frequency, pulse width, and voltage can be adjusted to attain maximum treatment efficacy. DBS is believed to act through excitation and inhibition of the neurons near the electrodes, but the exact mechanism is still unknown.
Low-frequency stimulation seems to excite nearby neurons, while high-frequency stimulation may decrease local activity, producing a reversible functional lesion. This simplistic view of the mechanism of action has been challenged in recent years, and a more comprehensive understanding may enable improved DBS treatments [8].
Early history
In the early 1900s, the first stereotactic frame was designed, allowing stimulation of deeper regions of the brain. In 1947, X-ray pneumoencephalography was developed, enabling surgeons to locate targets with the help of the detailed stereotactic atlases developed later on. In 1950, stereotactic techniques were used for tremor treatment. Later, in 1963, Albe-Fessard reported for the first time that high-frequency (~100-200 Hz) electrical stimulation of the ventral intermediate thalamic nucleus could substantially alleviate Parkinsonian tremor [9].
The last 50 years
In the 1960s, the development of levodopa treatment proved highly effective for Parkinsonian symptoms, with less risk and expense than DBS implantation, and this curtailed early forms of DBS research.
In spite of the drawbacks, research on the use of DBS never stopped. DBS continued to see restricted use in the treatment of intractable chronic pain, and Medtronic Inc. (Minneapolis, MN, USA) released the first fully implanted, commercially accessible DBS systems for this purpose in the mid-1970s [10].
Other study groups investigated the use of thalamic DBS to treat disorders of consciousness and reported some benefits. By the end of the 1980s, it was evident that the promising effects of levodopa did not hold after years of therapy, as patients developed wearing-off phenomena along with side effects such as dyskinesias. Meanwhile, the technology of implantable medical devices had improved to the point that chronically implanted devices, such as cardiac pacemakers and spinal cord stimulators, were in routine use. Animal model studies were eventually translated into clinical practice, and the first subthalamic nucleus (STN) DBS study was published [11]. By the beginning of this century, the clinical use of DBS in Parkinsonian disorders had become common [8].
Rationale and mechanisms of action
Although the exact mechanisms of action of DBS remain elusive in spite of extensive research, several theories have been put forward. These proposed mechanisms can be divided, according to the latency of onset of effects from the time of stimulation, into acute (seconds to hours) and chronic (days to months). The two major proposed mechanisms are as follows: (1) electrophysiological and neurotransmitter modulation, which likely explains the acute effects; and (2) plasticity-related network reorganization, which likely explains the chronic effects.
However, there is considerable overlap among the proposed mechanisms, and one group of mechanisms influences the other, as described in detail in the following sections. Furthermore, depending on the methods used to investigate the mechanisms of action, different aspects of the stimulation effects are tested. With an integrative approach that combines investigations employing different modalities, one can understand the general effects of DBS.
Modalities used to study the mechanisms of DBS
Different methods have been used to quantify the changes produced by DBS at the cellular, tissue, and system levels in order to study its mechanisms of action. These modalities can be broadly classified into electrophysiological, imaging, biochemical, and molecular methods. Imaging techniques such as positron emission tomography (PET) and functional MRI (fMRI) provide information on both local- and system-level changes. These are complementary methods: functional imaging studies have high spatial resolution, whereas electrophysiological methods have high temporal resolution. Moreover, electrophysiological methods directly measure neuronal activity, rather than inferring it indirectly from the blood-flow changes measured by imaging methods [13].
There are several hypotheses, proposed by different schools of thought, to explain the processes by which DBS works. The accepted and popular hypothesis relies on the alteration of pathological brain-circuit activity induced by stimulation [12,14]. The stimulation effects responsible for this disruption occur at the protein, ionic, cellular, and network levels to produce symptom improvements [15]. While it is presently unclear which of the wide-ranging impacts of DBS are necessary and sufficient to generate therapeutic results, it is evident that high-frequency (~100 Hz) pulse trains (~0.1 ms pulse width) produce network responses that are essentially distinct from those of low-frequency (~10 Hz) stimulation. The electrodes implanted into the brain redistribute charged particles (such as Na+ and Cl− ions) throughout the extracellular space, generating an electric field that ultimately manipulates the voltage sensors of the sodium channel proteins embedded in the neuronal membrane [16]. At the cellular level, the opening of sodium channels may generate an action potential that initiates in axons and can propagate in both orthodromic and antidromic directions. Axons activated by DBS are able to follow stimulus rates of ~100 Hz with very high fidelity, but high-frequency synaptic transmission is less robust and more complex than axonal transmission [17,18].
Under such high-frequency activity, postsynaptic receptors can become depressed and axon terminals can exhaust their releasable pool of neurotransmitters [19,20]. Even though these synapses appear to be active during DBS, theories of information processing suggest that they could act as filters that block low-frequency signal transmission [21]. This general mechanism, termed "synaptic filtering," may play a crucial role in DBS, hampering the transmission of oscillatory activity patterns through the related brain networks via neurons [22].
The simple biophysical consequences of DBS offer a background against which the patterns of network activity observed in patients can begin to be interpreted. Because stimulation intensity remains unchanged during DBS, the oscillation frequency of the stimulation signal is virtually zero, which could produce what is known as an "information lesion" in stimulated neurons [23]. According to this theory, the action potentials induced by DBS essentially override endogenous activity within the stimulated neurons and thereby impede the transmission of oscillatory activity through the network. Nonetheless, few studies support the statement that high-frequency DBS causes an actual lesion. Research in anesthetized and behaving primates indicates that DBS can serve as a filter, permitting certain sensorimotor-related modulation of neuronal activity in the stimulated area while specifically suppressing the propagation of pathological low-frequency oscillations [24,25]. Certain basal ganglia functions, such as reward-based decision-making or motor sequence learning, can often be retained during STN or globus pallidus DBS [26].
Certain factors may also play significant roles in the therapeutic mechanism of DBS for PD: high-frequency DBS could provide an information lesion that inhibits the propagation of low-frequency oscillations while, unlike low-frequency synchronization, having no adverse impact on broader network function [27,28]. One advantage of this mechanism is that high-frequency DBS can act as a general tool to overcome various clinical manifestations of low-frequency activity, such as tremor, dystonia, and akinetic rigidity [29].
The mechanism proposed above goes some way toward explaining the acute effects of DBS in movement disorders, but it does not explain the long-latency, chronic-adaptive alterations that arise in individuals with dystonia following DBS, and which may also underlie the psychological response to DBS. It is possible that low oscillation frequencies are strongly reinforced by long-term potentiation, while high-frequency stimulation has a smaller plasticity effect. Therefore, replacing low-frequency patterns with high-frequency ones may reverse symptoms associated with chronic disease [30]. DBS often takes months to achieve maximum benefit in various disorders, such as dystonia, depression, and epilepsy [31].
Open- vs. closed-loop stimulation systems
Nowadays, an open-loop system is implanted in most DBS cases; its parameters, such as frequency, amplitude, and duty cycle, are adjusted by trained physicians. In this method, stimulation is fixed for the initial months of treatment and later adjusted based on the patient's symptoms and overall condition. A closed-loop system, by contrast, receives continuous feedback from the patient's neuronal brain circuits through a preset, programmed algorithm, and thus delivers adaptive stimulation whose parameters are adjusted in real time. The implanted device produces physiological changes over both the long and short term via automatic delivery of therapeutic parameters, with the ability to sense brain signals. Although there are no randomized controlled trials comparing the therapeutic effects of open- vs. closed-loop systems, some researchers hold that closed-loop methods are more effective. In one study, a novel closed-loop method driven by the activity of two recorded neurons demonstrated that a closed-loop system with electrodes implanted in the GPi region produced better results for motor symptoms in PD patients than open-loop and standard high-frequency systems [32,33].
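The difference between the two modes can be sketched in a few lines of code. The snippet below is a conceptual illustration only, not a clinical algorithm: the sensed biomarker, target, gain, and amplitude bounds are illustrative assumptions standing in for whatever signal (e.g., beta-band power) and safety limits a real device would use.

```python
# Conceptual open- vs. closed-loop DBS sketch (illustrative values only).
import random

def sense_beta_power() -> float:
    """Stand-in for a sensing electrode: a noisy pathological biomarker."""
    return max(0.0, 1.0 + 0.4 * random.gauss(0, 1))

def open_loop_amplitude() -> float:
    """Open loop: amplitude stays at the clinician-programmed value."""
    return 2.0  # mA, fixed until the next programming visit

def closed_loop_amplitude(beta: float, target: float = 0.8,
                          gain: float = 2.0, max_ma: float = 3.5) -> float:
    """Closed loop: proportional adjustment against a biomarker target,
    clamped to a safe amplitude range."""
    return min(max_ma, max(0.0, gain * (beta - target)))

for t in range(5):
    beta = sense_beta_power()
    print(f"t={t}s beta={beta:.2f} open={open_loop_amplitude():.1f} mA "
          f"closed={closed_loop_amplitude(beta):.2f} mA")
```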
DBS in different neurodegenerative diseases
In AD, the most common form of dementia, this technique has been used, with limited success, to modulate dysfunctional neuronal circuits showing abnormalities in cortical and subcortical areas of the brain; pharmacological treatment, by comparison, relies on cholinesterase inhibitors and NMDA receptor antagonists [31]. DBS is a significant option for the treatment of movement disorders that are intractable to drugs, namely Parkinson's disease, essential tremor, and dystonia, and it has recently been shown to be effective in the treatment of OCD, depression, and Tourette syndrome [5,31].
DBS in movement disorders
Over the last 25 years, DBS has become the standard therapy for individuals with refractory motor circuit disorders, most notably PD, dystonia, and essential tremor. DBS use has so far been largely confined to high-income and developing countries [34]. Hospital-discharge-based studies of a US database showed that >30,000 DBS surgeries were performed between 2002 and 2011, and publications on DBS have also risen over the same period [35].
Parkinson's disease
Over the last 10 years, the STN has been used as the principal target for DBS in PD [36]. The GPi is also used as a target, and the choice between STN and GPi is often guided by the biomedical team based on the medical context of the patient.
Multiple studies have shown that STN DBS produces continuous symptom relief even after 5-10 years of treatment, albeit with regression in cognition and gait due to the unremitting progression of the underlying degenerative disorder [37]. In PD, DBS has been called the "second honeymoon" (dopaminergic therapy being the first). Postural instability and freezing can be improved by DBS of the pedunculopontine nucleus [38].
Based on previous studies, the general view is that DBS can help PD patients with advanced symptoms, such as motor fluctuations and dyskinesias secondary to chronic levodopa, as well as those with marked refractory tremor. However, the EARLYSTIM findings showed that DBS can also improve early stages of PD [39]. Given these advantages, DBS is now under clinical trial in patients previously excluded from surgery because of age, along with patients with motor fluctuations in whom medication is still effective. Because of the inherent risks of DBS, such as hemorrhage and infection, these trials face ethical issues [40].
Epilepsy
It was earlier thought that DBS could replace open resective surgery in epilepsy, but this expectation was tempered after studies of DBS of the anterior nucleus of the thalamus (ANT). These studies spoke well for the efficacy of DBS but simultaneously demonstrated that many patients did not attain seizure freedom after DBS treatment [41,42]. Closed-loop stimulation is a promising technology in epilepsy: it can sense seizure activity with an electrode and deliver electrical stimulation to the brain to thwart seizure propagation [43].
Essential tremor
After various studies, DBS was approved by the FDA in 1997 for the tremor symptoms of this movement disorder [44]. Alongside DBS, other therapies such as lesional surgery have also been used for the treatment of essential tremor. DBS is the better choice because of its safety and the adjustability of the stimulation, which lesional therapy does not provide [45]. Thalamic DBS is also used for tremor in multiple sclerosis patients [46].
Dystonia
DBS has played a crucial role in dystonia treatment [47,48]. Pallidal DBS, for instance, is the first-line treatment in childhood generalized dystonia. The most significant determinants of outcome were the age at which surgery was performed and the duration of the disorder [49-51]. The genetic makeup of patients is also important in evaluating the outcome, as individuals with DYT1 dystonia benefit more than those with DYT6 dystonia [52]. Therefore, genetic testing of patients undergoing DBS treatment can suggest which candidates will benefit most [53]. The posteroventral lateral GPi is the most recognized DBS target in dystonia [54]. GPi stimulation offers significant recovery in dystonic patients, with adverse effects occurring at low stimulation frequencies. The STN and the thalamus are two other targets for DBS. Despite positive outcomes of STN DBS, its therapeutic use is still restricted [55]. An additional important target is the sensorimotor thalamus, which was considered the standard target in the era of radiofrequency lesioning [56,57]. The mode of action of DBS in producing clinical improvement is quite intricate because of the delayed and progressive effects exhibited over a period of months. The underlying mechanism has been hypothesized to involve the alteration of maladaptive plasticity, progressive motor learning, and modification of pathological oscillatory activity in basal ganglia circuitry [58]. Dystonia can recur within minutes to hours after discontinuation of stimulation during the initial postoperative period, whereas the benefits of stimulation administered for several years can persist for days to weeks afterwards [59,60]. DBS therefore acts as a true treatment for dystonia, for which progressive treatments are absent or poorly successful. This rationale has motivated an EARLYSTIM study in dystonia [61].
Alzheimer disease (AD)
AD is perhaps the most prevalent neurodegenerative disease and is characterized by years of gradual decline in neurocognitive parameters. Several DBS targets have been identified for AD, including areas anterior to the fornix, the entorhinal cortex, and the nucleus basalis of Meynert (NBM). Several studies suggest that DBS can affect cognitive function in AD. Nonetheless, outcome-influencing factors, such as the baseline neuroanatomical substrate, surgical technique, lead placement, and choice of target population, remain challenges for DBS [62].
DBS in psychiatric disorders
Psychiatric disorders are heterogeneous conditions affecting multiple, overlapping pathways. Such disorders have few (if any) biochemical markers to guide treatment and outcomes, and there is a lack of data for outcome assessment in patients, which complicates the design of clinical trials. In addition, the quality of surgical trials is hindered by significant selection barriers [63]. In an attempt to alleviate refractory psychiatric symptoms, numerous prospective studies have evaluated whether focal disruption at specific anatomic targets can effect circuit-wide or network-wide changes. Though the strategy is enticing, challenges remain.
Tourette syndrome
Owing to the behavioral and cognitive issues in these patients, fewer than 300 patients have undergone DBS treatment across the world. According to a meta-analysis, patients with chronic symptoms improve less than those with mild symptoms [64]. A randomized controlled trial in 2017 did not report any significant improvement of tics in Tourette syndrome patients treated with anteromedial GPi stimulation during the initial blinded phase of the study; however, tic improvement was documented during the study's open-label period [65]. More randomized controlled trials are needed for the further development of DBS treatment in these patients.
Major depression
Major depression is a serious disorder that can significantly impact quality of life, day-to-day functioning, and, eventually, life expectancy [66,67]. Advances in imaging techniques suggest that depression arises from alterations in mood-related circuits, which may be reversed with neuromodulation alongside other antidepressant therapies.
Bipolar disorder
Bipolar disorders are associated with acute and intense emotional states that occur episodically, known as mood episodes; these disorders are less common than major depression but are linked to an increased risk of suicide. The targets thought to be effective for DBS in bipolar disorders are the subcallosal cingulate (SCC), the nucleus accumbens, and the superolateral medial forebrain bundle (slMFB), but studies are scarce [68].
Obsessive-compulsive disorder
Obsessive-compulsive disorder (OCD) is a debilitating psychological condition characterized by obsessions combined with time-consuming and subjectively anxiety-relieving compulsive behaviors. Several targets have been proposed for OCD treatment, but STN DBS was found to be the most effective, producing a significant reduction in OCD symptoms [69].
Anorexia nervosa
Anorexia nervosa is a severe and prevalent condition with one of the highest mortality rates of any psychiatric disorder. The limbic and emotional circuits are involved in activating and sustaining the disorder. The limited availability of treatments for refractory anorexia nervosa, together with the positive outcomes of DBS on mood-related circuits, has raised interest in DBS targets for this condition. Several research articles report significant reductions in depression and anxiety with the SCC DBS target [70]. However, further studies are needed to establish a convincing target for DBS.
Pain
For patients with pain, the analysis of DBS outcomes is more challenging than in motor movement disorders because of the subjective nature of pain self-assessment. Though nociceptive pain can usually be kept in check with opiate medication, DBS targets in the thalamus or cingulum are considered for patients with severe refractory neuropathic pain [30,71,72].
Positive influence of the DBS treatment
A number of medication side effects are greatly reduced by neuromodulation techniques. Seizure frequency and mortality have been decreased, although these results were not formally evaluated; successful results have been reported for DBS in movement disorders and for vagal nerve stimulation in epilepsy [73,74]. DBS is one of the best ways to treat extrapyramidal motor disorders, namely dyskinesia, tremor, rigidity, and dystonia [75-77]. GPi DBS proved very successful in primary generalized dystonia and can be used as an effective treatment option [78]. Although the treatment has mild side effects, a number of studies have shown positive outcomes in chronic disorders of consciousness, with an unknown mechanism of action [79]. DBS has been found beneficial in enhancing impaired learning and memory: in rodent models of dementia, mesial temporal DBS has shown positive results, and improvement in visual memory has been seen in patients who underwent unilateral amygdalohippocampal DBS [80]. DBS also helps patients with post-brain-injury disorders of consciousness regain learning, memory, and impaired communication skills [81].
Negative influence of the treatment
Severe harmful effects of DBS have been seen with stimulation of the dominant-side hippocampal region, and bilateral hippocampal DBS may cause memory dysfunction in epilepsy patients. Although DBS is considered safe, adverse events occur in 7.5-8.5% of patients, the major ones being infections, intraoperative seizures, and other complications [7].
Evolution in DBS technologies
Technological evolution has advanced pain management with DBS. Several technologies related to spinal cord stimulation, such as expanded MRI labeling, pulse modifiers (generators as well as shrinkers), and dorsal root ganglion stimulating leads, have benefited greatly from high-frequency and high-density software strategies [82-85]. The major remaining problem in DBS is inappropriate dosing, for which no new technology has been developed over the past two decades; hence, there is a lack of competitiveness in DBS technology [30].
Summary
DBS is a neurosurgical procedure that utilizes chronically implanted lead electrodes placed in target areas of the brain and connected to a pulse generator, which excites the neuronal circuits [1,4,5]. It is an invasive neuromodulation technique that originated in the early 1960s [1]. Recently, DBS has become widely practiced in the treatment of various movement disorders, along with some psychiatric disorders [4,5]. Compared with other neurosurgical options, this technique carries a lower chance of complications [5]. Although treatment side effects are mild, a number of studies have shown positive outcomes in chronic disorders of consciousness, with an unknown mechanism of action [79]. Growth in DBS, with respect to its pathways and its impact on neuronal circuits, has been propelled mainly by preclinical, neurophysiological, and computational research. Significant needs and prospects have driven innovative techniques and technologies that have improved tolerability as well as research design, but DBS is still developing in many areas toward managing cerebral diseases safely and efficiently.
Mediterranean Dietary Patterns Related to Sleep Duration and Sleep-Related Problems among Adolescents: The EHDLA Study
Purpose: The aim of the present study was to examine the association of adherence to the Mediterranean Diet (MD) and its specific components with both sleep duration and sleep-related disorders in a sample of adolescents from the Valle de Ricote (Region of Murcia, Spain). Methods: This cross-sectional study included a sample of 847 Spanish adolescents (55.3% girls) aged 12–17 years. Adherence to the MD was assessed by the Mediterranean Diet Quality Index for Children and Teenagers. Sleep duration was reported by adolescents for weekdays and weekend days separately. The BEARS (Bedtime problems, Excessive daytime sleepiness, Awakenings during the night, Regularity and duration of sleep, and Sleep-disordered breathing) screening was used to evaluate issues related to sleep, which include difficulties at bedtime, excessive drowsiness during the day, waking up frequently during the night, irregularity and length of sleep, and breathing issues while sleeping. Results: Adolescents who presented a high adherence to the MD were more likely to meet the sleep recommendations (OR = 1.52, 95% CI 1.12–2.06, p = 0.008) and less likely to report at least one sleep-related problem (OR = 0.56, 95% CI 0.43–0.72, p < 0.001). These findings remained significant after adjusting for sex, age, socioeconomic status, waist circumference, energy intake, physical activity, and sedentary behavior, indicating a significant association of adherence to the MD with sleep outcomes (meeting sleep recommendations: OR = 1.40, 95% CI 1.00–1.96, p = 0.050; sleep-related problems: OR = 0.68, 95% CI 0.50–0.92, p = 0.012). Conclusions: Adolescents with high adherence to the MD were more likely to report optimal sleep duration and fewer sleep-related problems. This association was more clearly observed for specific MD components, such as fruits, pulses, fish, having breakfast, dairy products, sweets, and baked goods/pastries.
Introduction
Two factors intrinsic to health are diet and sleep, which may well influence one another [1]. For instance, sleep exerts an integral role in emotional regulation, modulating affective neural systems and reprocessing recent emotional experiences [2], which could lead to unhealthy food intake [3]. Furthermore, whole diets rich in vegetables, fruits, pulses, and other sources of dietary melatonin and tryptophan seem to predict favorable sleep-related outcomes [1]. Getting enough sleep is essential for overall well-being and good health [4]. Adequate sleep is vital for proper brain function and physical health, including maintaining metabolism, controlling hunger, and ensuring the proper functioning of the immune, hormonal, and cardiovascular systems [5]. However, teens are particularly at risk for not getting enough sleep [6]. The overall percentage of sleep complaints and sleep problems in the young population is very high, reaching approximately 80% in certain parts of the world [7]. Ensuring proper sleep is crucial for promoting healthy growth and development in adolescents, since a lack of sufficient sleep or inconsistent sleep patterns can lead to an increased risk of both physical and mental health issues [8,9].
Importantly, dietary behavior has been shown to influence sleep via the food groups consumed and their nutrient content, as well as the timing when they were ingested prior to going to bed [10]. Recently, there has been much scientific research focusing on the connection between diet and sleep with the aim of better understanding how certain aspects of diet can directly impact the quality of sleep [11]. Sleep is influenced not only by the calorie content of a diet but also by the specific types of macronutrients that are consumed, such as proteins, fats, and carbohydrates [12]. Therefore, consuming a high-carbohydrate diet and foods rich in tryptophan, melatonin, and plant-based compounds shows potential in enhancing both the length and quality of sleep [11]. Supporting this notion, using diet management to improve sleep has been identified as a feasible, affordable, and convenient strategy [13].
The Mediterranean diet (MD), a plant-based eating plan that is becoming increasingly popular around the world, is considered to be one of the healthiest types of diets [14]. Research suggests that individuals can gain many benefits by incorporating elements of this diet into their eating habits [15]. This eating pattern includes a great consumption of plant-based foods (vegetables, fruits, breads and other cereals, beans, seeds, nuts, and potatoes); seasonally fresh, minimally processed, and locally grown foods; olive oil intake as the main fat source; and a moderate intake of dairy products (mostly as yogurt and cheese), among other components [14]. In addition, MD has been proposed as a "gold standard" diet due to its numerous benefits for health and nutrition, richness in biodiversity, great sociocultural food value, low environmental impact, and positive economic impacts on local communities [16].
The available studies suggest that higher adherence to the MD is linked with adequate sleep duration and with some indicators of greater sleep quality. However, most of these studies have focused on adults [17], and this relationship has been less studied among adolescents [18,19]. A deeper comprehension of the specific dietary elements that are directly associated with sleep duration and sleep-related problems is of utmost importance for establishing future intervention programs aimed at improving sleep outcomes in this population. Thus, the objective of the present study was to examine the association of adherence to the MD and its specific components with both sleep duration and sleep-related disorders in a sample of adolescents from the Valle de Ricote (Region of Murcia, Spain).
Study Design and Population
This study is cross-sectional and uses data obtained from the Eating Healthy and Daily Life Activities (EHDLA) study, which analyzed a sample of adolescents aged between 12 and 17 years from Valle de Ricote (Region of Murcia, Spain) as representative participants. This study involved three secondary schools and collected data during the 2021/2022 academic year. The detailed methods of the EHDLA study are available elsewhere [20]. Initially, 1378 adolescents (100.0%) were randomly chosen. Of them, 277 (20.1%) were excluded due to the lack of data on sleep-related outcomes. In addition, other participants were removed because of missing data on adherence to a Mediterranean diet (n = 157; 11.4%), waist circumference (n = 45; 4.1%), sedentary behavior and physical activity (n = 10; 1.1%), and energy intake (n = 42; 4.7%). Thus, a total of 847 adolescents (55.3% girls) were included in the analyses. The following inclusion criteria were established: (1) aged between 12 and 17 years old; (2) registered and/or living in Valle de Ricote; and (3) provided consent by parents or legal guardians and assent by the student. Participants were excluded when they (1) were exempt from the Physical Education class at school, as data collection was conducted in the physical education lessons; (2) had any pathology that demanded special attention or contraindicated physical activity; or (3) were under pharmacological treatment due to a chronic medical condition.
To take part in this research, written consent was obtained from the parents or guardians of the selected adolescent participants, and they were provided with an information sheet outlining the aims of the study, along with the tests and questionnaires that would be given. The adolescents were also asked for their agreement to participate.
This research was granted ethical clearance by the Bioethics Committee of the University of Murcia (ID 2218/2018), the Ethics Committee of the Albacete University Hospital Complex, and the Albacete Integrated Care Management (ID 2021-85). In addition, it was conducted in compliance with the guidelines of the Helsinki Declaration.
Adherence to the Mediterranean Diet
The Mediterranean Diet Quality Index for Children and Teenagers (KIDMED) was used to evaluate adherence to the MD. This index has previously been validated in the young Spanish population [21]. The KIDMED includes a 16-question test and ranges from −4 to 12 points. Questions about unhealthy aspects of the MD are given a score of −1, and those about healthy aspects are given a score of +1. The total score is then divided into three categories: high MD (≥8 points), moderate MD (4–7 points), and low MD (≤3 points).
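As a minimal illustration of this banding (assuming the cut-off of 8 or more points for high adherence, consistent with the KIDMED score ≥ 8 used in the analyses below; the function name is ours):

```python
def kidmed_category(score: int) -> str:
    """Map a KIDMED score (-4 to 12) to the adherence bands of the study."""
    if score >= 8:
        return "high"
    if score >= 4:
        return "moderate"
    return "low"

assert kidmed_category(9) == "high"
assert kidmed_category(6) == "moderate"  # the study's mean score was 6.6 points
assert kidmed_category(2) == "low"
```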
Sleep Recommendations and Sleep-Related Problems (Outcomes)
The duration of sleep was assessed by asking the adolescents to separately report their typical bedtime and wake-up time for both weekdays and weekend days as follows: "What time do you usually go to bed?" and "What time do you usually get up?". The daily sleep duration was calculated by averaging the night-time sleep duration on weekdays and weekends, and the formula [(average nocturnal sleep duration on weekdays × 5) + (average nocturnal sleep duration on weekends × 2)]/7 was used. As per the guidelines of the US National Sleep Foundation, individuals who slept less than 9-11 h (for 12-13 years old) or 8-10 h (for 14-17 years old) were considered not to meet the recommended sleep guidelines [5]. In addition, the BEARS (Bedtime problems, Excessive daytime sleepiness, Awakenings during the night, Regularity and duration of sleep, and Sleep-disordered breathing) self-report questionnaire [22] was used to screen for sleep-related problems in the study since it has been shown to be an easy-to-use tool with good accuracy in detecting sleep-related problems [23].
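The weighting and the age-specific bands translate directly into code; the small sketch below simply restates the formulas from this section (function names are ours):

```python
def daily_sleep_hours(weekday_h: float, weekend_h: float) -> float:
    """Weighted daily sleep duration: (weekday x 5 + weekend x 2) / 7."""
    return (weekday_h * 5 + weekend_h * 2) / 7

def meets_sleep_recommendation(age: int, hours: float) -> bool:
    """US National Sleep Foundation bands: 9-11 h (12-13 y), 8-10 h (14-17 y)."""
    low, high = (9, 11) if age <= 13 else (8, 10)
    return low <= hours <= high

h = daily_sleep_hours(weekday_h=7.5, weekend_h=9.0)  # ~7.93 h
print(h, meets_sleep_recommendation(15, h))          # False: under 8 h
```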
Covariates
Sex and age were self-reported by adolescents. Socioeconomic status was assessed with the Family Affluence Scale (FAS-III) [24]. The FAS-III score was calculated as the sum of the responses to six different items related to family affluence (i.e., vehicles, bedrooms, computers, bathrooms, dishwasher, and travel). The final score ranged from 0 to 13 points, with a higher score indicating greater socioeconomic status. Based on standard protocols, waist circumference was measured at the navel level to the nearest 0.1 cm, using a measuring tape under constant tension. The Youth Activity Profile (YAP), a self-administered 7-day (previous week) recall questionnaire containing 15 items separated into three sections (i.e., activity outside of school, activity at school, and sedentary habits), was applied to determine physical activity and sedentary behavior among the adolescents [25].
Statistical Analysis
The descriptive data are presented as numbers and percentages for categorical variables and as means and standard deviations for continuous variables. The Chi-square test was used to examine the relationship of meeting the different components of the MD, and of the overall dietary pattern, with sleep outcomes. Since preliminary analyses showed no interaction between sex and adherence to the MD in relation to meeting sleep recommendations or sleep-related problems (meeting sleep recommendations: p = 0.689; sleep-related problems: p = 0.638), both sexes were analyzed together. Binary logistic regression analyses were performed to determine the odds ratio (OR) and the 95% confidence interval (CI) for the relationship between adherence to the MD (i.e., low/moderate MD or high MD) and each sleep outcome (i.e., meeting sleep recommendations or reporting sleep-related problems). The Hosmer-Lemeshow test was used as a statistical goodness-of-fit test for the logistic regression models. Sample size calculations for logistic regression were performed following Peduzzi et al. [26], applying the formula N = 10k/p, where p indicates the smallest proportion of the sleep-related outcomes (i.e., meeting the sleep recommendations (24.8%); no sleep-related problems (43.3%)) in our sample and k is the number of independent variables (k = 8: adherence to the MD, sex, age, socioeconomic status, waist circumference, energy intake, physical activity, and sedentary behavior). Following this recommendation, the sample sizes required for the analyses were 323 for sleep recommendations and 185 for sleep-related problems. Sex, age, socioeconomic status, waist circumference, energy intake, physical activity, and sedentary behavior were included as covariates [27,28]. All analyses were conducted using SPSS software (IBM Corp., Armonk, NY, USA) version 28.0 for Windows. Statistical significance was set at p ≤ 0.05.
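The sample-size rule of thumb is easy to verify; a quick sketch reproducing the reported figures (323 and 185) follows:

```python
from math import ceil

def peduzzi_min_n(k: int, p: float) -> int:
    """Peduzzi et al. rule for logistic regression: N = 10 * k / p,
    with k predictors and p the smallest outcome proportion."""
    return ceil(10 * k / p)

print(peduzzi_min_n(8, 0.248))  # 323: meeting sleep recommendations
print(peduzzi_min_n(8, 0.433))  # 185: no sleep-related problems
```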
Results
The descriptive data of the study participants are reported in Table 1. The mean KIDMED score was 6.6 ± 2.5 points. The proportion of adolescents meeting the sleep recommendations was 24.8%. A total of 57.7% of the adolescents reported at least one sleep-related problem.
The proportion of the different KIDMED components met and MD adherence in relation to meeting the sleep recommendations or the presence of sleep-related problems are shown in Table 2. A higher proportion of daily fruit intake was reported by adolescents meeting the sleep recommendations (p < 0.016) and not reporting any sleep-related problem (p < 0.031). Similarly, having breakfast and not eating sweets were more frequent in those meeting sleep recommendations (p < 0.011) and not presenting any sleep-related problems (p < 0.014). Participants meeting the sleep recommendations showed a greater proportion of second daily fruit intake and of pulse intake more than once a week than those who did not meet these recommendations. Furthermore, participants not reporting any sleep-related problems reported a higher proportion of fish intake, higher intake of dairy products for breakfast, and lower intake of commercially baked goods/pastries for breakfast in comparison with their counterparts reporting sleep-related problems (p < 0.05 for all). Last, adolescents meeting sleep recommendations and not reporting any sleep-related problem had a higher adherence to the MD (i.e., KIDMED score ≥ 8 points) (p < 0.05 in both cases). Figure 1 depicts the unadjusted and covariate-adjusted probability of meeting sleep recommendations or reporting any sleep-related problem according to adherence to the MD. In unadjusted analyses, adolescents who presented a high adherence to the MD were more likely to meet the sleep recommendations (OR = 1.52, 95% CI 1.12-2.06, p = 0.008) and less likely to report at least one sleep-related problem (OR = 0.56, 95% CI 0.43-0.72, p < 0.001). These findings remained significant even after adjusting for sex, age, socioeconomic status, waist circumference, energy intake, physical activity, and sedentary behavior, indicating an independent association between adherence to the MD and sleep outcomes (meeting sleep recommendations: OR = 1.40, 95% CI 1.00-1.96, p = 0.050; at least one sleep-related problem: OR = 0.68, 95% CI 0.50-0.92, p = 0.012). The Hosmer-Lemeshow test indicated a good fit of the models (sleep recommendations: p = 0.817; sleep-related problems: p = 0.217).
Discussion
Overall, the findings from the present study suggest that greater adherence to the MD was related to higher odds of meeting sleep recommendations, as well as with lower odds of reporting sleep-related problems among adolescents. These results are consistent with a previous review including adults and adolescents by Scoditti et al. [17], which pointed out that higher adherence to the MD is linked with adequate sleep duration and with numerous markers of better sleep quality. The MD includes a balanced ratio of fat, carbohydrates, and proteins and a particularly high content of vitamins and polyphenols, mostly provided by the moderate-to-high amounts of fruits, nuts, vegetables, cereals, olive oil, and fish [15]. Thus, the potential interactions between these foods and nutrients may provide an explanation for the positive effects of the MD on sleep outcomes [17]. Conversely, consuming a lot of processed meat, saturated fat, sugary drinks, and foods that are not commonly found in the MD has been associated with poorer sleep quality and shorter sleep duration, as well as insomnia symptoms [18,29]. In addition, certain micronutrient deficiencies (i.e., low availability of tryptophan) have been related to suboptimal hormonal regulation provoking disrupted sleep [30]. Likewise, macronutrient imbalances have also been noted, although not consistently, to affect sleep (i.e., energy-rich foods such as fats or refined carbohydrates) [30]. For instance, one study among US adolescents showed that those not meeting the sleep guidelines consumed a smaller variety of foods in comparison with their counterparts who did [31]. It is possible that high adherence to MD avoids nutritional deficiencies through an adequate and balanced intake of micronutrients and macronutrients [14] and, therefore, a higher quality and duration of sleep can be achieved.
Importantly, our results depicted the crucial role of fruit intake on sleep outcomes. There are some possible reasons justifying this finding. Fruits are rich in vitamin C, which is an antioxidant that scavenges free radicals. A growing body of evidence supports that vitamin C may have a role in sleep health due to the association between free radical formation and oxidative stress with sleep and sleep-related problems [32]. In addition, fruits are rich in some B-complex vitamins, which are associated with sleep outcomes. An example of this is that vitamin B6 helps in the creation of serotonin, which is a neurotransmitter that influences mood and hunger, and it also plays a role in making melatonin, which is an important hormone that regulates sleep, through the conversion of 5-hydroxytryptophan [12]. Moreover, fruits usually contain a high amount of vitamin B9 (i.e., folic acid). Indeed, deficiency in this vitamin has been associated with insomnia and sleep disturbances, possibly because vitamin B9 seems to be involved in the conversion of tryptophan into serotonin [33]. Another alternative explanation may be the high fiber content of fruits, since low fiber intake has been related to lighter and less restorative sleep, with greater arousals [29].
Moreover, our results showed that skipping breakfast was associated with not meeting sleep recommendations and sleep-related problems. Consistent with this finding, skipping breakfast and inconsistent eating schedules have been closely related to poor sleep quality [12]. Similarly, another study among Greek adolescents depicted that those skipping breakfast had greater odds of not meeting the sleep recommendations [34]. Skipping breakfast seems to be related to higher psychosocial health problems among the young population [35], and increased exposure to these health problems has been linked with current and later sleep-related problems among adolescents [36], which could justify this finding.
In addition, in our results, adolescents who usually had commercially baked goods or pastries for breakfast or sweets and candy several times daily reported more sleep-related problems than those who had not. Similarly, the proportion of adolescents meeting the sleep recommendations was lower in those who ate sweets and candy frequently. One possible explanation is that these foods are ultra-processed foods (e.g., highly profitable, ready-to-consume, hyperpalatable) according to the NOVA classification [37], which have been linked to lower sleep quality and duration among adolescents [38]. As an example, a study conducted on Brazilian adolescents found that consuming a high amount of highly processed foods, such as sweets, was related to a greater likelihood of experiencing poorer sleep quality [39]. Furthermore, another study among Iranian female adolescents showed that ultra-processed food consumption was linked to greater odds of reporting insomnia [40]. Likewise, sugar and high saturated fat intake (characteristic of ultra-processed foods) are associated with less restorative sleep [29]. The intake of ultra-processed foods may be associated with increased levels of inflammatory biomarkers, which are related to harmful sleep outcomes [41]. Furthermore, the consumption of ultra-processed foods has been related to higher depressive symptoms [42], anxiety-induced sleep disturbance [43], and mental health complications [44], likely owing to various mechanisms (e.g., inflammation, neuroplasticity, hypothalamic-pituitary-adrenal axis function).
Another finding of this study was that adolescents who more frequently consumed fish, pulses, and dairy products (for breakfast) were less likely to have negative sleep outcomes than those who consumed these foods less frequently. This is possibly due to the healthy nutritional profile of these foods. For example, fish intake could provide more long-chain polyunsaturated fatty acids (e.g., omega-3), which have been related to better sleep health per se [45,46]. In addition, Jansen et al. [45] showed that plasma docosahexaenoic acid was associated with earlier sleep timing and longer weekend sleep duration in Mexican adolescents. Furthermore, a meta-analysis by Dai et al. [46] concluded that omega-3 fatty acids (e.g., docosahexaenoic acid, eicosapentaenoic acid) could potentially enhance specific aspects of sleep well-being throughout childhood. Concerning pulses, their high tryptophan content could (at least partially) explain our findings. For instance, beans contain a high amount of the amino acid tryptophan, and the impact of beans on sleep is related to the role they play in the formation of the brain chemical serotonin [8]. Tryptophan is utilized to produce serotonin, and this chemical is then converted into melatonin [47]. In addition, pulse intake may produce favorable changes in the ratio of tryptophan to other large neutral amino acids (LNAAs) (valine, isoleucine, leucine, tyrosine, phenylalanine, methionine, and histidine). This could lead to greater transport of tryptophan to the brain [48] and, therefore, to better sleep health [30]. In relation to dairy products, our findings showed a lower proportion of at least one sleep-related problem in those adolescents consuming this type of food for breakfast. It is possible that adolescents who include dairy in their breakfasts obtain higher levels of tryptophan (e.g., from milk [49]), which likely improves some sleep-related parameters (e.g., decreased sleep latency and increased sleep time and efficiency) [50].
Our results must be interpreted while considering some limitations. Because this study used a cross-sectional design, it is not possible to draw a cause-and-effect relationship from the findings obtained. Further studies with different designs (e.g., experimental) are required to examine whether higher adherence to MD and its components reduces the risk of sleep disturbances in adolescents. Although clinical trials are needed to confirm a causal impact of dietary patterns on sleep and elucidate the underlying mechanisms, the available data illustrate a cyclical relation between these lifestyle factors [1]. Similarly, the use of questionnaires to gather information on MD and sleep outcomes may result in bias due to the potential for differences in the desire to provide information or inaccuracies in recalling information. The potential effects of melatonin obtained from exogenous sources (such as diet) on the sleep-wake cycle may vary depending on the time of day at which it is consumed (i.e., morning, afternoon, or evening) [51]. Therefore, because the timing of adolescents' meals was not available, we cannot affirm that a better sleep profile in adolescents with greater adherence to the MD is due to the intake of foods rich in bioactive nutrients related to sleep, such as tryptophan and melatonin. Conversely, a strength of this study is that it uses a sample of adolescents from Valle de Ricote (Region of Murcia, Spain) which is representative and makes our results generalizable to a broader population. Another strength is that, to our knowledge, the results from this study offer cross-sectional evidence of the association between MD and sleep outcomes (i.e., sleep recommendations, sleep-related problems) in an understudied population (i.e., adolescents).
Conclusions
Adolescents with high adherence to the MD were more likely to report optimal sleep duration and fewer sleep-related problems. This association was more clearly observed for specific MD components, such as fruits, pulses, fish, having breakfast, dairy products, sweets, and baked goods/pastries. Prospective studies are required to evaluate whether promoting MD is an effective strategy to prevent negative sleep outcomes among this population.
Improving Generalized Discrete Fourier Transform (GDFT) Filter Banks with Low-Complexity and Reconfigurable Hybrid Algorithm
With ever-increasing wireless network demands, low-complexity reconfigurable filter design is expected to continue to require research attention. Extracting and reconfiguring channels of choice from multi-standard receivers using a generalized discrete Fourier transform filter bank (GDFT-FB) is computationally intensive. In this work, a lower-complexity algorithm is developed for this transform. The design employs two different approaches: hybridization of the generalized discrete Fourier transform filter bank with frequency response masking and coefficient decimation method 1; and the improvement and implementation of the hybrid generalized discrete Fourier transform using a parallel distributed arithmetic-based residue number system (PDA-RNS) filter. The design is evaluated using MATLAB 2020a. Synthesis of area, resource utilization, delay, and power consumption was done on an Altera Quartus II 9.0 platform using the very high-speed integrated circuits (VHSIC) hardware description language. During MATLAB simulations, the proposed HGDFT algorithm attained a 66% reduction in the number of multipliers compared with existing algorithms. From co-simulation on the Altera Quartus II 9.0 platform, optimization of the filter with PDA-RNS resulted in a 77% reduction in the number of occupied lookup table (LUT) slices, an 83% reduction in power consumption, and an 11% reduction in execution time, when compared with existing methods.
Introduction
The high computational complexity and low reconfigurability of generalized discrete Fourier transform filter banks (GDFT-FBs) render them unfit to handle upcoming radio standards in software-defined radio (SDR) handsets. The main cause of such high complexity is the huge number of multipliers consumed during channelization operations. Multipliers contribute remarkably to the complexity of digital filters and channelization algorithms, as evidenced by the high filter orders obtained during implementation. They slow down computational speed, limit filter bank reconfigurability, increase resource utilization, and increase production costs and power consumption. The extent of complexity and reconfigurability differs among existing channelization algorithms, from uniform channelization algorithms, such as the per-channel (PC), pipelined/binary, and pipelined frequency transform (PFT) approaches, to non-uniform ones. A review of these algorithms is summarized in Table 1. As Table 1 shows, the major challenges of channelization algorithms are high filter orders, with attendant computational complexity, and low reconfigurability. The computational load is rated on the following scale: Very High, High, and Low. Very High indicates higher filter order and more filter coefficients, High indicates an averagely high filter order, and Low denotes low filter order and few filter coefficients. The reconfigurability performance in Table 1 is rated on a similar qualitative scale (e.g., Good).

In order to address these challenges, different approaches have been proposed to reduce the effect of multipliers in the design of FIR filters. Distributed arithmetic (DA) is a multiplierless, memory-based architecture that was proposed to replace the multiplications in signal processing with a combinational lookup table (LUT) [27][28][29][30]. DA replaces the multiply-accumulate (MAC) operation of convolution with bit-serial lookup table read-write operations. This approach reduces the number of multipliers to the barest minimum, but compromises operating speed and memory requirements. Many researchers have addressed the problems facing DA. Partial or fully parallel structures [31,32] can be used to overcome the speed limitations of bit-serial DA, but at the cost of an exponential increase in memory requirements. Yoo and Anderson also proposed an LUT-less architecture comprised of multiplexer and adder pairs; however, the gain in area reduction was offset by an increased critical path. LUT decomposition, or slicing of the LUT, has been suggested in [33]. An indexed-LUT DA FIR filter has been proposed [27], which consists of indexed LUT pages (each of size 2^n) and an m-bit multiplexer unit as a page-selection module. Indexing the LUT controls the exponential DA growth and eliminates the need for adders. LUT partitioning has been proposed by [34] to reduce the memory usage of the LUT for higher-order FIR filters. This design provides lower latency, less memory usage, and high throughput, when compared with conventional DA. The author in [35] proposed a memoryless distributed arithmetic-based adaptive filter for low power and area efficiency. In this case, the conventional DA was replaced by 2:1 multiplexers, in order to reduce area. By replacing the normal adder with a 4:2 compressor adder, a further reduction in area complexity was attained.
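The LUT-based shift-accumulate mechanism can be made concrete with a short sketch. The following Python example is illustrative only (the coefficients, inputs, and bit width are invented, and this is not any cited paper's implementation); it shows how a four-tap inner product is computed with one LUT access per input bit plane and no multipliers:

```python
# Minimal sketch of bit-serial distributed arithmetic (DA): the MAC of an
# N-tap inner product sum_i h[i]*x[i] is replaced by lookup-and-shift steps,
# where the LUT stores all 2^N partial sums of the coefficients.

def build_da_lut(h):
    # LUT[a] = sum of h[i] for every i where bit i of address a is set.
    return [sum(c for i, c in enumerate(h) if (a >> i) & 1)
            for a in range(1 << len(h))]

def da_inner_product(h, x, bits=8):
    """Unsigned bit-serial DA: process one bit plane of the inputs per cycle."""
    lut = build_da_lut(h)
    acc = 0
    for b in range(bits):                    # one LUT access per bit plane
        addr = 0
        for i, xi in enumerate(x):
            addr |= ((xi >> b) & 1) << i     # gather bit b of every input
        acc += lut[addr] << b                # shift-accumulate, no multiplier
    return acc

h = [3, 1, 4, 2]                             # illustrative coefficients
x = [10, 20, 30, 40]                         # illustrative 8-bit inputs
assert da_inner_product(h, x) == sum(hi * xi for hi, xi in zip(h, x))
```

The LUT holds 2^N entries for N taps, which is precisely the exponential memory growth that the decomposition, indexing, and partitioning schemes cited above set out to control.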
The author in [29] proposed the use of a modified DA method to compute the sum of products, saving a considerable number of multiply-and-accumulate blocks and reducing the circuit size considerably; 40% fewer LUT flip-flop pairs were used, at the expense of speed. A DA-based LMS adaptive filter using offset binary coding without a LUT has been presented, in order to improve the performance of bit-serial operation [36]. Additionally, a DA-RNS-based filter implementation has been used for the effective calculation of modular inner products in an FIR filter [28]. The RNS enables high-speed processing, due to the absence of carry propagation, thus offering a solution to the shortcomings of the conventional DA approach. Reducing the memory requirements of DA by reducing area utilization and delay is very important for the implementation of a digital FIR filter. The residue number system (RNS) has been proposed to offer such a solution, providing high operating speeds with reduced word length, area utilization, and power consumption. The RNS is a non-weighted number system that can speed up arithmetic operations, due to its peculiar features of carry-free propagation and parallelism. This results in carry-free addition, multiplication, and subtraction [37][38][39]. The most important factor to consider when choosing an RNS for an FIR filter is the moduli set. The choice of moduli set greatly influences the area utilization, speed, cost, and power consumption of the hardware design. Different research efforts on the influence of the moduli set on hardware complexity can be found in the literature. Sweidan and Hiasat proposed an algorithm that requires four binary adders, of which two operate in parallel mode, resulting in higher speed and a smaller silicon area [40]. Furthermore, Amir Sabbagh and Keivan [41] presented two residue-to-binary converters using the {2^n, 2^(n+1) + 1, 2^(n+1) - 1} moduli set. This moduli set consists of well-formed moduli and a balanced set, resulting in better and faster RNS implementation. Prem Kumar, in [42], described a residue-number-to-binary converter for the moduli set {2n + 2, 2n + 1, 2n}, with 2 as a common factor; this algorithm achieved a faster conversion, in terms of speed. Moreover, [43] discussed a high-speed realization of a residue-to-binary converter for the {2^n - 1, 2^n, 2^n + 1} moduli set, which improved upon the best-known implementation twofold, in terms of overall delay time. The algorithm employed certain symmetry properties in its implementation, in order to reduce the hardware requirement by n - 1 full adders, and it also reduced redundancy in its implementation. Another approach has been proposed to perform inner-product computation based on distributed-arithmetic principles [44]. The input data are represented in the residue domain and encoded with a thermometer code, while the output data are encoded in a one-hot code format. The operating speed of a one-hot-coded modular adder was superior to that of the conventional binary code. A non-recursive digital filter was presented based on the moduli set {2^n - 1, 2^n, 2^n + 1}, using diminished-1 representation [45]. The method investigated the usage of an (n + 1)-bit circuit for a 2^n + 1 channel.
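As a minimal sketch of why RNS arithmetic is carry-free (illustrative values; the moduli set follows the {2^n - 1, 2^n, 2^n + 1} form discussed above), the example below adds two numbers channel-by-channel and reconstructs the result with the Chinese remainder theorem (CRT):

```python
# Illustrative sketch (not any cited paper's implementation): carry-free RNS
# arithmetic on the moduli set {2^n - 1, 2^n, 2^n + 1}, with CRT reconstruction.
from math import prod

def to_rns(x, moduli):
    return [x % m for m in moduli]

def rns_add(a, b, moduli):
    # Each channel adds independently; no carries propagate across channels.
    return [(x + y) % m for x, y, m in zip(a, b, moduli)]

def crt(residues, moduli):
    # Reverse conversion: valid because the moduli are pairwise coprime.
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return total % M

n = 4
moduli = [2**n - 1, 2**n, 2**n + 1]        # {15, 16, 17}, pairwise coprime
a, b = 1000, 2345
s = rns_add(to_rns(a, moduli), to_rns(b, moduli), moduli)
assert crt(s, moduli) == (a + b) % prod(moduli)
```

Each residue channel is narrow and independent, which is what allows the short, parallel adders and the speed advantages reported in the converters surveyed above.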
A forward converter for RNS with diminished-1 encoded channels has been proposed by [46]. Furthermore, multiplication was eliminated in the design of an RNS converter [43]; thus, fewer multipliers and adders were used in the design, which invariably reduced the hardware complexity and increased the speed. A dual-sum carry look-ahead adder [47], which consists of a circular carry generator and a multiplexer, has been designed with reduced complexity. Jemmy Yung Shem eliminated the carry-propagation bottleneck present in existing modular adder designs. This method resulted in reduced dynamic and leakage power.
Vinnakota and Rao discussed an RNS-to-binary converter [48] and showed it to be a simple modification of the well-known mixed-radix conversion techniques. Evaluation of this algorithm and comparison with existing algorithms showed improvements in terms of speed and cost, but not in terms of delay and area. A conjugate moduli set was presented in hardware-efficient two-level implementations of the weighted-to-RNS and RNS-to-weighted conversions [49]. The design offered 25 to 40% hardware savings, reduced the complexity of the CRT by 80%, and achieved a higher dynamic range.
Kotha et al. [39] proposed a new modular multiplication for {2^k - 1, 2^k, 2^(k+1) - 1} for a fixed-point-coefficient FIR filter. This algorithm improved the clock rates and reduced the area and power consumption, compared with conventional modular multiplication. Ahmad Hiasat [40] designed a converter consisting of three 4n-bit carry-save adders (CSAs), together with an additional modulo 2^(4n) - 1 adder. This led to a reduction in hardware requirements, concerning area, delay, power, and energy efficiency. Richard Conway and John Nelson, in [50], used a moduli set of the form {2^n - 1, 2^n, 2^n + 1}, based primarily on CSAs and one carry-propagation adder (CPA), without the need for a lookup table (LUT). Their design occupied less silicon area and, therefore, was very fast. The authors proposed a new CRT property, in order to reduce the total dynamic range. The overall result was faster and more efficient, with improved delay and area cost.
Kazeem Alagbe Gbolade et al. [51] used the CRT to obtain a reverse converter that uses mod (2^n - 1) operations, instead of mod (2^n + 1)(2^n - 1) and (2^n)(2^n - 1). This approach is traditional in nature, but the results yielded better performance in terms of conversion time, area, cost, and power consumption. Mohan [48] compared the designs of Vinnakota and Rao and of Piestrak, together with the design of Andraros and Ahmad. The design of Andraros and Ahmad was found to be more cost-effective, in terms of delay and speed, than Vinnakota and Rao's design. That design used the moduli set {2^n - 1, 2^n, 2^n + 1}, which is a variation of the mixed-radix conversion technique. Ahmad Hiasat [52] used the Chinese remainder theorem (CRT) approach to produce a simpler converter structure for the four-moduli set {2^n - 1, 2^n, 2^(2n) + 1, 2^(2n+p)}, using common factors. This led to considerable reductions in area, delay, time, energy, and power utilization, when compared with other published works.
From the foregoing, it can be stated that most of the research in the literature has focused on the speed improvement of FIR filters, while the costs of the area utilization and delay time are too high for the future trends of software-defined radio. Therefore, the goal of this work was to improve the performance of the generalized discrete Fourier transform, in terms of speed, area utilization, and delay time. This was approached by: (i) hybridization of a generalized discrete Fourier transform filter bank with frequency response masking and the coefficient decimation method 1; and (ii) filter bank design using a parallel distributed arithmetic-based residual number systems (PDA-RNS) filter. The final algorithm can, therefore, be described as hybridized GDFT with a PDA-RNS filter design. The design methodology and a detailed analysis are presented in the following.
Methodology
As proposed, two designs are investigated herein. The first approach is based on the hybridization of frequency-response masking with coefficient decimation filters and the classical generalized discrete Fourier transform (GDFT) filter bank. For ease of reference, this will be referred to as the hybrid GDFT (HGDFT). The second approach explores the improvement of the HGDFT using a parallel distributed arithmetic based residual number system (PDA-RNS). The algorithms for the two approaches and their simulation methods using the VHSIC hardware description language in the Altera DSP builder platform are presented in the following.
Proposed Hybrid Generalized Discrete Fourier Transform (HGDFT-FB)
The HGDFT-based filter bank consists of two branches: the upper and the lower branch. The upper branch is made up of FRM-interpolated coefficient-decimated filters and the masking filter, whereas the lower branch consists of complementary FRM-interpolated coefficient-decimated filters and the complementary masking filter. A low-pass interpolated coefficient-decimated linear-phase FIR filter, H_a(z^(L/M)), is formed from the cascade of the base interpolating filter, H_a(z^L), and the coefficient-decimating filter, H_cd(z^(1/M)), in order to extract the sharp narrow-band channel of choice. Furthermore, a bandpass edge-complementary interpolating coefficient-decimated base filter, H_c(z^(L/M)), is formed from the cascade of the complementary base interpolating filter, H_a(z^L), and the complementary coefficient-decimating filter, H_cd(z^M), in order to isolate multi-band frequency responses. The low-pass interpolated coefficient base filter, H_a(z^(L/M)), cascades with the masking filter, H_ma(z), in the upper branch, while the bandpass complementary interpolated coefficient base filter cascades with the complementary masking filter, H_mc(z), in the lower branch, in order to produce reconfigurable, low-complexity multi-narrow frequency bands. The desired passband edge (ω_p) and stopband edge (ω_s) of the base filter response, H_a(z), are calculated as indicated in Table 2. The transfer function of the FRM interpolated coefficient-decimated filter is given by Equation (1). The interpolated coefficient-decimated base and complementary filters are symmetrical and asymmetrical linear-phase FIR filters, respectively. A half-band filter is introduced into the FRM interpolated coefficient-decimated filter, in order to further reduce its computational complexity. This is possible as a result of the symmetry properties possessed by the half-band filter: the time-domain impulse response of the CDM-1 technique requires every other component to be zero, except for the component at the centre, i.e., it is symmetrical around the centre. This translates to reduced complexity, in terms of the number of multipliers required by the filter. The transfer function of the half-band FRM interpolated coefficient-decimated filter can be expressed in terms of two polyphase components, as in Equation (2). The masking filters are replaced with two GDFT-FBs, as shown in Figure 1. The transfer function of the H-GDFT can be expressed as in Equation (3); applying polyphase decomposition yields Equation (4), where E_Ai(z^k) and E_Bi(z^k) are the k-th polyphase components of A(z) and B(z), respectively. The GDFT-FB modulated bandpass filters are obtained from the lowpass prototype filter by applying complex modulations, as in Equation (5). Finally, Equation (6) represents each of the modulated bandpass filters, as depicted in Figure 2. The transition band of the H-GDFT FB is centered at π/2 rad, whereas the complementary filter bank is centred at 2πK/M, where K is an integer ranging from 0 to (M - 1).
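The complex-modulation step of Equation (5) follows the standard GDFT construction: each bandpass filter is a frequency-shifted copy of the lowpass prototype. The sketch below assumes the common form h_k[n] = h[n]·exp(j2π(k + k0)(n + n0)/K); the prototype, band count, and offsets k0 and n0 are placeholders rather than the paper's exact values:

```python
# Hedged sketch of GDFT modulation: derive K bandpass analysis filters from a
# single lowpass prototype by complex exponential modulation.
import numpy as np

def gdft_modulate(h, K, k0=0.5, n0=0.0):
    # k0 and n0 are the GDFT generalization offsets (assumed values here).
    n = np.arange(len(h))
    return [h * np.exp(2j * np.pi * (k + k0) * (n + n0) / K) for k in range(K)]

# Example: an 8-band bank from a simple windowed-sinc lowpass prototype.
taps = 64
prototype = np.hanning(taps) * np.sinc((np.arange(taps) - (taps - 1) / 2) / 8)
bank = gdft_modulate(prototype, K=8)
print(len(bank), bank[0].shape)   # 8 bandpass filters, 64 taps each
```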
Proposed Design Steps
The design steps for the proposed filter bank are outlined below:

4. Calculate the decimation factor M of the masking filter as M = π/ω_ms, and the interpolation factor as L = π/ω_ms, where ω_si is the stopband frequency of each channel. The fractional rate for the masking filter is then L_ma/M_ma.

5. Calculate the decimation factor of the complementary filter as M = π/(π + ω_mcs), and the interpolation factor as L = π/(π + ω_mcs). The fractional rate for the complementary filter is then L_mc/M_mc.

6. Determine the transition bandwidths for the masking and complementary filters, tbw_i, such that tbw_k = tbw_k × L_k.

10. Find the stopband ripple δ_s1.

11. Calculate the modal passband peak ripple as δ_p,modal = min(δ_p1, δ_p2, ..., δ_pn).
12. Calculate the cutoff frequencies of the prototype and masking filters using Table 2.

13. Determine the prototype filter order and the individual channel filter orders using the Bellanger formula N = -2 log10(δ_p δ_s) / (3 Δ_TBW) - 1 [58] (illustrated in the sketch after this list).
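A small numerical sketch of steps 4-5 and 13, as reconstructed from the text above; the band-edge and ripple values below are placeholders, not the paper's design figures:

```python
# Hedged sketch of the design-step arithmetic (assumed inputs, illustrative only).
import math

def bellanger_order(delta_p, delta_s, tbw):
    # N = -2*log10(delta_p*delta_s) / (3*tbw) - 1, as reconstructed from the text.
    return math.ceil(-2 * math.log10(delta_p * delta_s) / (3 * tbw) - 1)

def masking_decimation_factor(omega_ms):
    # Step 4: M = pi / omega_ms (the text gives the same expression for L).
    return math.pi / omega_ms

def complementary_decimation_factor(omega_mcs):
    # Step 5: M = pi / (pi + omega_mcs).
    return math.pi / (math.pi + omega_mcs)

# Placeholder ripples and transition bandwidth, chosen only for illustration.
print(bellanger_order(delta_p=0.01, delta_s=0.001, tbw=0.0026))
print(masking_decimation_factor(omega_ms=0.3 * math.pi))
print(complementary_decimation_factor(omega_mcs=0.1 * math.pi))
```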
Improving HGDFT with Parallel Distributed Arithmetic-Based Residual Number System (PDA-RNS)
In an attempt to lower the filter complexity and improve the reconfigurability of the proposed hybrid filter bank, we propose our second approach, known as the parallel distributed arithmetic-based residual number system. The design and co-simulation were carried out on the Quartus 11 Altera DSP Builder 10, using the very high-speed integrated circuits (VHSIC) hardware description language. Consider an input signal sampling rate of 40 MHz for the filter design example in Section 2.2.1, with the filter specifications given in Table 3. The moduli set {2^n - 1, 2^n, 2^n + 1} was selected from the literature, due to its high speed and reduced hardware complexity. With n = 5 bits, the corresponding moduli set is {31, 32, 33}. The filter coefficients of the reconfigurable filter generated in Table 3 are converted to binary using num2bin(Q, 1, b), where the quantizer parameters (word length, fractional length) specify binary numbers in signed fixed-point mode. The input signal and the filter coefficients use a 16-bit precision format, with a word length of 16 and a fractional length of 15. Distributed arithmetic can be expressed as in Equation (7), where H_i represents the filter coefficients and X_{i,j} denotes the input signal vectors. The fixed-point binary values of the input signals and the filter coefficients are converted into residue form using the forward-converter mechanism outlined below. The inputs y_i in Equation (7) are converted to RNS, as shown in Equation (8). Assume that the input block X is partitioned into blocks of bits B_{k-1}, B_{k-2}, ..., B_0, as in Equation (9). The total residue of X is calculated as the sum of the residues of the partitioned bit blocks, with respect to the chosen modulus, as depicted in Equation (10). The reverse converter, which converts from residual to binary numbers, operates at the back end of the architecture; the shift-and-add method is used for its implementation. The 2^k possible values of r_1, r_2, and r_3 are pre-computed and stored in a 2^k × D-bit LUT. After the residual values are computed, Equation (8) becomes Equation (11). Without loss of generality, assume that r_k is an N-bit residual of Q1, that is, an (N - 1)-format number, as in Equation (12). The dot product of Equation (12) can be written as Equation (13), and rearranging the terms yields Equation (14). For K = 2 and N = 3, the rearrangement forms the ROM entries indicated in Equation (15). The input values and filter coefficients pre-stored in the LUTs are partitioned into different LUT tables, and a modulo accumulator (ACC) performs the modulo shift-accumulate operation, in order to generate y_i in D cycles, as shown in Figure 3. Performance was evaluated in terms of resource utilization, power consumption, and delay.
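The bit-block residue computation behind Equation (10) can be sketched as follows. This is an illustrative software model, not the hardware design: each k-bit block contributes its value times a pre-computable weight 2^(jk) mod m, so wide division is avoided, mirroring what the LUTs store in hardware:

```python
# Minimal sketch of RNS forward conversion by bit-block partitioning.
# Moduli set {2^n - 1, 2^n, 2^n + 1} with n = 5 -> {31, 32, 33}, as in the text.

def residues_by_blocks(x: int, moduli, block_bits: int = 8):
    """Compute x mod m for each modulus m by summing weighted bit blocks."""
    out = []
    for m in moduli:
        acc, weight, v = 0, 1 % m, x
        while v:
            block = v & ((1 << block_bits) - 1)   # next k-bit block
            acc = (acc + block * weight) % m      # modular accumulate
            weight = (weight << block_bits) % m   # weight of the next block
            v >>= block_bits
        out.append(acc)
    return out

n = 5
moduli = [2**n - 1, 2**n, 2**n + 1]               # {31, 32, 33}
x = 12345                                          # illustrative input word
assert residues_by_blocks(x, moduli) == [x % m for m in moduli]
print(residues_by_blocks(x, moduli))
```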
Application of HGDFT with PDA-RNS Filter Bank to Non-Uniform Channels: BT, ZIGBEE, WCDMA
The HGDFT channelization algorithm was applied to non-uniform input channels. The three multi-standard channels considered were: Bluetooth (BT), Zigbee, and wideband code division multiple access (WCDMA). The input sampling rate used was F_s = 40 MHz, with the channel bandwidths for BT, Zigbee, and WCDMA specified as 1 MHz, 4 MHz, and 5 MHz, respectively. The transition bandwidths for BT, Zigbee, and WCDMA were specified as 50 kHz, 200 kHz, and 500 kHz, respectively. The passband and stopband ripple specifications for BT and Zigbee were 0.1 and -40 dB, while those of the WCDMA channel were 0.1 and -55 dB, respectively. The filters H_a(z), H_ma(z), and H_mc(z) were the base, masking, and complementary filters, respectively, characterised using Case 1 of Table 2. The filter specifications shown in Table 4 were simulated following the design steps in Section 2.1.1. The following parameters were used to compare the performance of the new results: passband and stopband width, passband ripples, and stopband attenuation. The results obtained were compared with the designs in [53,54], using the same filter specifications and parameters. The realized HGDFT filter design specifications are shown in Tables 3-6. The filter coefficients obtained for BT, Zigbee, and WCDMA in Table 3 were revised by converting them into RNS format. The double-precision 16-bit values of the different filter coefficients were quantized and converted into integer values. These values were transformed into three modular RNS representations. The parallel distributed arithmetic architecture was used to implement the addition of these three RNS values. The co-simulations of HGDFT with PDA-RNS used the following parameters on the Quartus 11 Altera software: LUT slices, total slices, slice registers and slice LUTs, flip-flops, power, and delay. Performance comparisons were made against NU-MDFT CSD optimized with Pareto ABC [55], NU-MDFT SID-CSE [55], and the SDR channelisers in [56,57].
Results and Discussion
Using the information contained in Table 3, the normalized channel bandwidths of Bluetooth (BT), ZigBee, and wideband code division multiple access (WCDMA) were 0.05, 0.2, and 0.25, respectively. From Step 2 of Section 2.1.1, the passband width of the prototype filter was thus set to 0.025. The fractional rate of each channel was calculated using the formula in Step 3 of Section 2.1.1. The fractional sampling rate for the modal filter was 39/40, while those of the masking filters for BT, Zigbee, and WCDMA were 39/40, 9/10, and 7/8, respectively. When the fractional rate of 39/40 was applied to the modal filter, the computed transition bandwidth was 0.002375, with a passband peak ripple of 0.1 dB, a stopband peak ripple of -50 dB, and a filter length of 196. When the fractional rate of 39/40 was applied to the BT channel, the transition bandwidth was 0.0026, with a passband peak ripple of 0.00975 dB, a stopband peak ripple of -39 dB, and a filter length of 159. When the fractional rate of Zigbee was 9/10, the transition bandwidth was calculated to be 0.011, with a passband ripple of 0.09, a stopband peak ripple of -39 dB, and a filter order of 34. When the fractional rate of WCDMA was 7/8, the transition bandwidth was calculated to be 0.021, with a passband ripple of 0.0875, a stopband peak ripple of -48.125 dB, and a filter order of 19. The frequency characteristics of the input are shown in Table 3, while Figures 4-7 show the magnitude responses for the modal filter, BT masking filter, Zigbee masking filter, and WCDMA masking filter. The stopband edge of the complementary masking frequency, ω_mcs, was also calculated using the equation in Table 2. The stopband edge, passband edge, and fractional rate values were calculated using design steps 5 and 9 in Section 2.1.1. The complementary masking decimator factors for the modal filter, BT, Zigbee, and WCDMA were 8/9, 8/9, 8/9, and 7/8, respectively. The complementary masking transition bandwidths for the modal filter, BT, Zigbee, and WCDMA were 0.00222, 0.00222, 0.0089, and 0.021875, with filter orders of 209, 150, 37, and 13, respectively. Table 5 shows the filter characteristics of the complementary masking filter using the HGDFT channelisation algorithm.
The total number of multiplications used was 526, while the multiplications used in [53][54][55] were found to be 1745, 1545, and 1090, respectively. The number of multipliers utilized by the proposed HGDFT filter bank was compared and found to be lower than those of the CDFB [54] and ICDM [53,55] methods, as indicated in Table 7. Figures 5-7 show the magnitude responses of the modal filter, BT masking filter, Zigbee masking filter, and WCDMA masking filter.
The results obtained from improving the HGDFT with the PDA-RNS filter are as follows. The filter coefficients obtained in Table 2 were quantized with a 16-bit representation, as it showed better passband ripples and stopband attenuation than the 8- and 12-bit representations, as indicated in Table 8. By replacing the multipliers in the HGDFT with the PDA-RNS filter, there was a 100% decrease in multipliers, from 526 to 0, as the use of multipliers was totally eliminated. Tables 9 and 10 show a device utilization comparison. The total hardware resources occupied by HGDFT with PDA-RNS were as follows: 941 total slices, 2073 slice LUTs, 2338 flip-flops, a total power of 333.53 mW, and a total delay of 3.328 ns. NU-MDFT CSD with Pareto ABC occupied 2406 total slices, utilized 8950 slice LUTs and 8980 flip-flops, consumed 1751 mW of power, and had a total delay of 3.75 ns. The NU-MDFT filter optimized with SID-CSE consumed 1633 total slices, utilized 5901 slice LUTs with 5911 flip-flops, and showed a total power of 1281 mW and a delay time of 2.6 ns.

From these performance results, the plot in Figure 8 shows that HGDFT with PDA-RNS utilized 12.97% of the total LUTs and 14% of the LUT slices, with a 12.7% reduction in flip-flops used, 17% of the power consumption, and a 4% reduction in execution delay, when compared with NU-MDFT CSD optimized with Pareto ABC [55]. It was observed that the filter achieved an 83% reduction in the number of occupied slices, from 2406 to 941 slices, an 83% reduction in power consumption, and a 75% reduction in execution delay time. However, when the HGDFT PDA-RNS filter was compared with the NU-MDFT SID-CSE in [55], the delay execution time of NU-MDFT SID-CSE was found to be lower and, thus, faster.

The total resources utilised are compared in Table 10. The slice registers utilized by the HGDFT with PDA-RNS filter bank numbered 2279 out of 239,616. This showed a drastic reduction in slice registers when compared with the NU-MDFT FB in [55], which consumed 29,797 out of 301,440; the design in [56], which used 15,295 out of 58,880; and the design in [57], which used 29,797 out of 301,440. The total slice LUTs used by the HGDFT with PDA-RNS filter were 2022 out of 119,808, while the NU-MDFT filter in [55] used 5901 out of 298,600, the design in [57] used 21,169 out of 150,720, and that in [56] used 14,726 out of 58,880. Thus, Table 10 shows that the hardware resource utilization of the HGDFT with PDA-RNS filter bank was lower than that of the SDR channelisers in [55][56][57]. The lower filter order of the proposed design, coupled with the modularity of the RNS, clearly contributed to its lower slice requirements and lower power consumption, when compared with the designs in [39,55].
Conclusions
The proposed HGDFT with PDA-RNS method was found to be an effective channelization algorithm for low-complexity reconfigurable filters in multi-standard receivers. Two improvement methods were used for the realization of the algorithm: the first was achieved by hybridizing CDM-1 and FRM filters with the GDFT; the performance was then further improved by using a parallel distributed arithmetic-based residue number system. The HGDFT filter bank demonstrated a reduction in the number of multiplications and filter coefficients used, compared with FRM-based or modified GDFT designs. The HGDFT filter bank was also optimized with PDA-RNS, after which the number of adders and multipliers and the overall filter complexity were reduced to the barest minimum, while reconfigurability was preserved. This resulted in remarkable reductions in resource utilization, execution time, and power consumption.
|
v3-fos-license
|
2021-09-23T06:23:24.658Z
|
2021-09-22T00:00:00.000
|
237593382
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/1931471A7AED69D9479E2FE72A406285/S0924933821022343a.pdf/div-class-title-neuroanatomical-correlates-of-reality-monitoring-in-patients-with-schizophrenia-and-auditory-hallucinations-div.pdf",
"pdf_hash": "2116e18665d70f7f9225eb59dbfb5517f9788126",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43480",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"sha1": "6835f413eb1b0bef0ba5928a5438662108a1e22a",
"year": 2021
}
|
pes2o/s2orc
|
Neuroanatomical correlates of reality monitoring in patients with schizophrenia and auditory hallucinations
Background The reality-monitoring process enables the discrimination of memories of internally generated information from memories of externally derived information. Studies have reported impaired reality-monitoring abilities in schizophrenia patients with auditory hallucinations (AHs), specifically an exacerbated externalization bias, as well as alterations in neural activity within frontotemporoparietal areas. In healthy subjects, impaired reality-monitoring abilities have been associated with reduction of the paracingulate sulcus (PCS). The current study aimed to identify neuroanatomical correlates of reality monitoring in patients with schizophrenia. Methods Thirty-five patients with schizophrenia and AHs underwent a reality-monitoring task and a 3D anatomical MRI scan at 1.5 T. PCS lengths were measured separately for each hemisphere, and whole-brain voxel-based morphometry analyses were performed using the Computational Anatomy Toolbox (version 12.6) to evaluate gray-matter volume (GMV). Partial correlation analyses were used to investigate the relationship between reality-monitoring and neuroanatomical outcomes (PCS length and GMV), with age and intracranial volume as covariates. Results The right PCS length was positively correlated with reality-monitoring accuracy (Spearman's ρ = 0.431, p = 0.012) and negatively with the externalization bias (Spearman's ρ = −0.379, p = 0.029). Reality-monitoring accuracy was positively correlated with GMV in the right angular gyrus, whereas externalization bias was negatively correlated with GMV in the left supramarginal gyrus/superior temporal gyrus, the right lingual gyrus, and the bilateral inferior temporal/fusiform gyri (voxel-level p < 0.001 and cluster-level p < 0.05, FDR-corrected). Conclusions Reduced reality-monitoring abilities were significantly associated with a shorter right PCS and reduced GMV in temporal and parietal regions of the reality-monitoring network in schizophrenia patients with AHs.
Introduction
Reality monitoring is a crucial cognitive process in daily life that allows differentiating memories of thoughts and imagination from memories of externally derived information [1]. For instance, this process allows us to determine whether an event was generated by our imagination or really did occur.
A deficit in the reality-monitoring abilities has been repeatedly observed in patients with schizophrenia compared with healthy individuals (e.g., [2]; for recent review, see [3]). More specifically, several studies have pointed out that patients with schizophrenia and auditory hallucinations (AHs) were more likely to misattribute internally generated stimuli as being perceived from the environment than patients with schizophrenia without AHs and healthy individuals ( [4][5][6]; for review, see [7,8]). This tendency to misattribute imagined events as being perceived is called an externalization bias and is assumed to partly underlie AHs. Indeed, a prominent cognitive model of AHs suggests that they might arise from a misattribution of internal mental events such as inner speech as being externally perceived [9,10].
The neural network underlying reality-monitoring process has been explored in both healthy individuals and patients with schizophrenia in the literature. The prefrontal cortex (PFC), and particularly its medial and anterior part, was found to be a key structure of this network (for review, see [3,11]). Interestingly, a functional neuroimaging study has reported that the externalization bias was correlated with a reduced activation in this specific brain region [12]. In patients with schizophrenia, deficits in the neural activity of the medial PFC have been observed during reality-monitoring performances [13,14]. The medial PFC is not the only brain region that may account for the reality-monitoring process. Indeed, the contribution of temporoparietal areas, and particularly their abnormal overactivation, into the experience of externalization bias has been supported by neuroimaging studies [15] as well as noninvasive brain stimulation studies [16].
Although neuroimaging studies have broadly investigated brain activity linked to reality-monitoring performances, less is known about the neuroanatomical correlates of reality monitoring. In recent years, the morphology of a specific structure of the medial PFC, the paracingulate sulcus (PCS), has been investigated. The PCS is a tertiary sulcus that lies in the medial wall of the PFC and runs dorsal and parallel to the cingulate sulcus in a rostro-caudal direction. The PCS presents a great morphological variability within the general population, in that it can be found in none, one, or both hemispheres [17], and its presence affects the morphometry [18,19] and the cytoarchitectonic organization of surrounding cortices [20,21]. The PCS was found to be associated with a wide array of executive and cognitive functions [22], including reality monitoring [23]. Namely, healthy individuals with bilaterally absent PCS showed significantly reduced reality-monitoring performances compared with individuals with present PCS in at least one hemisphere [23]. In schizophrenia patients, some studies showed that reduced PCS length was associated with AHs [24,25]. However, the relationship between the PCS length and reality-monitoring performances remains unclear in patients with schizophrenia and AHs. Particular anatomical features in the medial PFC and specific morphology of the PCS could underpin the relationship between brain activity within these areas and reality-monitoring process.
The present study aimed to identify whether reality-monitoring performances were linked to specific neuroanatomical features, including PCS length and gray-matter volume (GMV), in hallucinating patients with schizophrenia. Therefore, we conducted a magnetic resonance imaging (MRI) study combining an investigation of reality-monitoring performances, a morphological analysis of the PCS, and a whole-brain voxel-based morphometry (VBM) analysis. We hypothesized that reality-monitoring deficits, and particularly the externalization bias, would be negatively correlated with the PCS length. These hypotheses were based on three lines of work presented above showing that: (a) the absence of the PCS is associated with poor reality-monitoring performances [23], (b) shorter PCS length is associated with AHs [24], and (c) AHs are associated with a specific deficit in reality monitoring, the externalization bias [8]. In addition, we hypothesized that poorer reality-monitoring performances, including a higher externalization bias, would be associated with lower GMV in the brain regions identified as functionally involved in reality monitoring [11] and in the externalization bias [12] (e.g., the medial PFC).
Participants
Thirty-five patients meeting the DSM-IV-TR criteria for schizophrenia were recruited from our clinical unit for treatment-resistant schizophrenia at Le Vinatier Hospital between 2009 and 2015. All participants were native French speakers and presented daily treatment-resistant AHs, defined as persistent daily AHs despite an antipsychotic treatment at an adequate dosage for more than 6 weeks. Patients' diagnoses were assessed through a formal interview with a trained psychiatrist using the Mini-International Neuropsychiatric Interview [26]. Participants were assessed for the severity of their symptoms using the Positive and Negative Syndrome Scale (PANSS) [27]. Patients' current antipsychotic medication classes (typical, atypical including clozapine, and combination of classes) were reported in Table 1. Written informed consent was obtained from all participants. All experiments were approved by a local ethic committee (CPP SUD EST VI, Clermont-Ferrand, France) and performed in compliance with relevant guidelines and regulations.
Reality-monitoring task
The task was divided into a presentation phase and a test phase, according to the task used and validated by Brunelin et al. [28]. Briefly, during the presentation phase, 16 words were presented one by one on a computer screen for 3 s, each preceded by an instruction also presented for 3 s. Instructions were "Imagine yourself hearing the following word" or "Listen to the following word." During the test phase, performed immediately after the presentation phase, a 24-word list was presented, including the 16 words previously presented (8 imagined and 8 listened) and 8 new words (distractors). Patients had to determine the source of each word (i.e., "Imagined," "Heard," or "New"). Before the task, patients performed a short practice trial to become acquainted with the task requirements and to ensure good comprehension.
Three main outcomes were computed according to previous studies [29,30]. (a) Reality-monitoring accuracy was calculated from the response counts, where fii is the number of imagined words correctly recognized as imagined, fih is the number of imagined words identified as heard, fhh is the number of heard words correctly identified as heard, and fhi is the number of heard words identified as imagined. This measure of reality monitoring, also known as the average conditional source identification measure [31], reflects the proportion of correct source judgments among the items correctly recognized as old. (b) The externalization bias was defined as the number of imagined words recognized as heard among all imagined words incorrectly judged (i.e., as new or heard). (c) Item memory accuracy was calculated as the standardized hit rate (z-score of the hit rate, i.e., the proportion of old items identified as old) minus the standardized false-alarm rate (z-score of the false-alarm rate, i.e., the proportion of new items identified as old). Before calculation, hit and false-alarm rates were corrected to avoid values of 0 and 1, as recommended by Snodgrass and Corwin [32]. This measure of item memory, also known as the Signal Detection Theory metric d′ [33], reflects the sensitivity to discriminate between old and new items.
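For concreteness, the three outcomes can be sketched in Python as below. The exact accuracy formula is not reproduced above, so the sketch implements one plausible reading (the proportion of correct source judgments among old items recognized as old); the externalization-bias denominator uses a hypothetical count fin (imagined words judged new), and all example counts are invented:

```python
# Hedged sketch of the three task outcomes described in the text.
from statistics import NormalDist

def rm_accuracy(fii, fih, fhh, fhi):
    # Correct source judgments among old items recognized as old
    # (one plausible reading of the measure described in the text).
    return (fii + fhh) / (fii + fih + fhh + fhi)

def externalization_bias(fih, fin):
    # Imagined words judged "heard" among all incorrectly judged imagined
    # words; fin = imagined words judged "new" (hypothetical name).
    return fih / (fih + fin)

def item_memory_dprime(hits, old_n, false_alarms, new_n):
    # z(hit rate) - z(false-alarm rate), with the Snodgrass & Corwin
    # correction to keep rates strictly inside (0, 1).
    z = NormalDist().inv_cdf
    return z((hits + 0.5) / (old_n + 1)) - z((false_alarms + 0.5) / (new_n + 1))

print(rm_accuracy(fii=6, fih=2, fhh=7, fhi=1))          # 13/16 correct sources
print(externalization_bias(fih=2, fin=1))
print(item_memory_dprime(hits=14, old_n=16, false_alarms=1, new_n=8))
```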
Magnetic resonance imaging acquisition
MRI acquisitions were performed on a 1.5-T Siemens Magneton scanner. A 3D anatomical T1-weighted sequence covering the whole brain volume was acquired with the following parameters: 176 transverse slices, TR = 1,970 ms, TE = 3.93 ms, field of view = 256 mm², and voxel size = 1 mm³.
Paracingulate sulcus measurements
The PCS was measured following the measurement protocol described by Garrison et al. [25] (see Figure 1 as an example).
To validate the procedure, inter- and intrarater reliabilities were calculated. See the Supplementary Material for more details.
Voxel-based morphometry analysis
All images were preprocessed and analyzed with the Computational Anatomy Toolbox (CAT, version 12.6; http://www.neuro.uni-jena.de/cat/) implemented in Statistical Parametric Mapping (SPM12) (Wellcome Trust Centre for NeuroImaging, London, UK; http://www.fil.ion.ucl.ac.uk/spm/software/spm12/) using MATLAB (R2018a, MathWorks, Inc., Massachusetts, USA). Both processing and analysis were performed following the standard protocol (http://www.neuro.uni-jena.de/cat12/CAT12-Manual.pdf) with default settings, unless otherwise indicated. This method has been previously validated and provides a good compromise between quality and speed of processing [34]. Prior to preprocessing, each image was visually inspected for artifacts. Then, T1 images were corrected for bias-field inhomogeneities; segmented into gray matter, white matter, and cerebrospinal fluid; spatially normalized into standard Montreal Neurological Institute (MNI) space using the DARTEL algorithm; and modulated to allow comparison of the absolute amount of tissue. A second quality control for intersubject homogeneity and overall image quality was performed using the automated quality-check protocol of the CAT12 toolbox. After the quality check, the total intracranial volume of each subject was estimated, to be used as a covariate in the second-level analyses to account for intersubject variation in brain size. Finally, images were smoothed using an 8-mm full-width-at-half-maximum (FWHM) kernel.
Statistical analyses
Statistical analyses were conducted using R software (version 3.5.2) [35]. Normality of the data was tested using the Shapiro-Wilk test. Partial Spearman's rank correlations were calculated to assess the relationship between PCS lengths (separately for each hemisphere) and outcomes of the reality-monitoring task (reality-monitoring accuracy, externalization bias, and item memory), with total intracranial volume and age as confounding variables. For all analyses, a significance level of p < 0.05 was employed. As exploratory analyses, we investigated whether PCS lengths were also related to total positive symptoms, by computing partial Spearman's rank correlations between PCS lengths and total PANSS positive scores, with total intracranial volume and age as confounding variables. VBM statistical analyses were performed with the CAT12 toolbox (version 12.6). A multiple linear regression model was used to test for voxel-wise correlations between GMV and reality-monitoring outcomes. Total intracranial volume and age were used as confounding covariates in these analyses. A 0.1 absolute masking threshold was applied to avoid artifacts at the gray/white-matter boundary. For all voxel-based analyses, statistical maps were thresholded at an uncorrected p < 0.001 at the voxel level and a false discovery rate (FDR)-corrected p < 0.05 at the cluster level. Significant clusters were labeled using the Anatomical Automatic Labelling toolbox in SPM.
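The analyses above were run in R; as an illustration of the partial Spearman approach, the following Python sketch rank-transforms the variables, regresses the covariates out of both, and correlates the residuals. All data here are synthetic, and variable names are placeholders:

```python
# Hedged sketch of a partial Spearman correlation: Spearman = Pearson on
# ranks, so partial Spearman = Pearson on the rank residuals after removing
# the (ranked) covariates from both variables.
import numpy as np

def partial_spearman(x, y, covariates):
    def rank(v):
        # Simple ranking; ties are broken arbitrarily (fine for continuous data).
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v))
        return r
    rx, ry = rank(x), rank(y)
    Z = np.column_stack([np.ones(len(x))] + [rank(c) for c in covariates])
    res_x = rx - Z @ np.linalg.lstsq(Z, rx, rcond=None)[0]
    res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

rng = np.random.default_rng(0)
pcs = rng.normal(60, 10, 35)                     # synthetic right PCS lengths
acc = 0.4 + 0.005 * pcs + rng.normal(0, 0.05, 35)  # synthetic accuracy
age = rng.normal(35, 8, 35)
tiv = rng.normal(1500, 100, 35)                  # synthetic intracranial volume
print(partial_spearman(pcs, acc, [age, tiv]))
```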
Results
Patients' demographic and clinical characteristics, as well as reality-monitoring outcomes, total intracranial volumes, and PCS lengths for each hemisphere, are presented in Table 1. Details of patients' scores on each individual item of the PANSS positive subscale are provided in the Supplementary Material.
Reality monitoring and PCS length
While controlling for age and total intracranial volume, the PCS length was positively correlated with reality-monitoring accuracy in the right hemisphere (Spearman's partial ρ = 0.431, p = 0.012; Figure 2A) but not in the left hemisphere (Spearman's partial ρ = 0.052, p = 0.773). There was a significant negative correlation between the length of the right PCS and the externalization bias (Spearman's partial ρ = −0.379, p = 0.029; Figure 2B), but no significant correlation was found between the left PCS length and the externalization bias (Spearman's partial ρ = 0.171, p = 0.340). No significant correlations were found between PCS lengths and item memory (right PCS: Spearman's partial ρ = 0.137, p = 0.448; left PCS: Spearman's partial ρ = −0.003, p = 0.988).
Reality monitoring and GMV
VBM analysis revealed a significant positive correlation between reality-monitoring accuracy and GMV in the right angular gyrus (peak MNI coordinates [23, −59, 44], t = 4.03, p < 0.001; see Table 2 and Figure 3A).
Discussion
The present study sought to identify the neuroanatomical correlates of reality monitoring in a sample of schizophrenia patients with AHs. We reported two main findings: (a) the right hemisphere PCS length was positively correlated with reality-monitoring accuracy and negatively correlated with the externalization bias, that is, the misattribution of imagined words to an external source, and (b) the reality-monitoring accuracy was positively correlated with the GMV in the right angular gyrus, whereas the externalization bias was negatively correlated with the GMV in a set of temporal and parietal areas.
We demonstrated a significant correlation between the reality-monitoring abilities of hallucinating patients with schizophrenia and the length of the PCS in the right hemisphere: the shorter the PCS, the poorer the reality-monitoring accuracy and the greater the externalization bias. On the one hand, these results are highly coherent with those found in healthy subjects associating the absence of the PCS with worse overall reality-monitoring accuracy [23]. On the other hand, the region containing the PCS has been associated with both AHs and reality-monitoring abilities [23][24][25]. Moreover, a recent study has demonstrated that this region causally supports reality monitoring: in healthy subjects, active real-time fMRI neurofeedback training of the paracingulate cortex has been reported to improve reality-monitoring accuracy for imagined items, as well as the functional activity of the paracingulate cortex [36]. Although the relationship between the PCS morphology and the functional role of the paracingulate cortex remains unclear, taken together these findings suggest that the PCS morphology may be the structural basis for the causal role of the paracingulate cortex in reality-monitoring abilities and hallucinations. Indeed, the PCS morphology is known to influence the topography of the medial PFC [21] and to generate great interindividual variability in the location of the neural activity evoked in the medial PFC during a given cognitive task in healthy subjects [37]. Future fMRI studies should consider this morphological variability when reporting differences in brain activity in the medial PFC during reality-monitoring paradigms. The differences in medial PFC activity observed at the group level during a reality-monitoring task could reflect a different location of the neural activity due to intersubject differences in PCS morphology. Taking this neuroanatomical feature into account when studying functional patterns of reality monitoring would provide more reliable evidence of a deficit in populations experiencing AHs.
It is noteworthy that the PCS is one of the last sulci to develop in utero, appearing at the 36th week of ontogeny and maturing into the perinatal period in humans [22,38]. This sulcus is thus exposed to environmental factors able to interfere with its development. The reality-monitoring impairment found to be correlated with the PCS length may thus result from defective neurodevelopmental mechanisms. In this line, abnormal reality-monitoring performances have also been observed before the onset of a frank psychotic episode in individuals at risk for schizophrenia within the continuum of psychosis [39]. A deeper investigation of sulcal ontogeny, and even more of the developmental factors that may influence PCS morphometry, could improve the understanding of its relationship with reality-monitoring deficits. Consistent with the right lateralization of our findings, a recent study found a reduction of the PCS length only in the right hemisphere of both psychotic and nonclinical voice hearers [40], suggesting the right PCS length reduction to be a specific marker of AHs whatever the clinical condition. By contrast, some studies identified bilateral PCS reductions in schizophrenia patients with AHs compared with schizophrenia patients without AHs, nonclinical subjects with AHs, and healthy controls [24,25], and others found a specific left PCS reduction in schizophrenia patients with AHs compared with those without AHs and healthy controls [41]. Further studies are thus needed to clarify whether the length of the right PCS may be considered a specific neuroanatomical marker of AHs or whether the PCS is bilaterally reduced in schizophrenia patients with AHs.
Surprisingly, reality-monitoring performances did not correlate with GMV in medial frontal areas, even though the functional capacity of the medial PFC has been widely implicated in the reality-monitoring process in both patients with schizophrenia and healthy individuals [13,42], and reduced GMV has been observed in these brain areas in patients with schizophrenia [43]. In addition, the presence/absence of the PCS has been associated with GMV in the surrounding frontal regions, and these volumetric changes were related to reality-monitoring performances [23]. Further studies are now needed to investigate a potential relationship between PCS variability and the surrounding prefrontal volume, and its implication for prefrontal functional capacity during reality monitoring.
As we hypothesized, most of the regions for which the GMV correlated with reality-monitoring performances correspond to the temporoparietal areas previously identified by functional imaging during reality-monitoring tasks. We found several brain structures whose GMVs negatively correlate with the externalization bias, indicating that schizophrenia patients with AHs with reduced GMV in these structures are more likely to misattribute internally generated information to an external source.
First, we observed a negative correlation between the externalization bias and a cluster encompassing the left supramarginal gyrus and the left superior temporal gyrus, which is considered part of Wernicke's area (BA 40), involved in auditory and speech processing. Disruption of this system would induce inadequate processing of the verbal items presented in reality-monitoring tasks and contribute to patients' source misattributions. In addition, a recent meta-analysis on motor agency specifically highlighted the left BA 40 as an integral part of the body-ownership network [44]. This cluster can thus be considered an element of both verbal and nonverbal self-production recognition, suggesting its modality-general implication in reality-monitoring processes. Consistently, the GMV and activity of this temporoparietal region have also been associated with AHs in schizophrenia patients [45][46][47]. The causal implication of temporoparietal regions in reality monitoring has finally been demonstrated by noninvasive stimulation over this region, which modulated the externalization bias in both healthy subjects and schizophrenia patients and alleviated AHs in schizophrenia patients [16,[48][49][50].
The VBM analysis also revealed negative correlations between the externalization bias and gray matter in several postero-inferior temporal regions. Considered associative visual areas, these structures have mainly been associated with visual processing and visual hallucinations [47,51,52]. For now, the implications of the correlation between their GMV and the externalization bias in our semantic task are unclear, and future studies should clarify the relationship between reduced GMV in these areas and the incorrect source attributions observed in schizophrenia patients with AHs. However, a substantial body of functional studies has already reported activation of the right lingual gyrus during Theory-of-Mind tasks, which involve, among other things, making the distinction between internal and external space [53]. In turn, the left inferior temporal gyrus has been shown to activate specifically in the reality-monitoring contrast of "correct attributions" versus "misattributions" in healthy participants [54].
We identified a significant positive correlation between reality-monitoring accuracy and the GMV of the right angular gyrus. This result replicates, in a population of schizophrenia patients with AHs, the findings reported by Buda et al. [23] in a sample of healthy subjects. The right angular gyrus is engaged in a wide range of tasks reflecting our ability to discriminate the internal from the external environment, such as Theory-of-Mind or agency-attribution tasks [55,56]. Moreover, several case reports have described its causal involvement in out-of-body experiences, a phenomenon referring to an autoscopic experience during which the subject perceives the world from an out-of-body position [57,58]. In this way, our findings contribute to defining the right angular gyrus as a pivotal neural locus for the distinction between the self and the external world. Its increased GMV may underlie its overactivity and, in turn, sustain decreased reality-monitoring performances in schizophrenia patients with AHs.
In addition to the sample size, which could be considered limited for correlation analyses (estimated post hoc power of 0.75), the main limitation of this study is the lack of comparison groups. Additional groups of healthy participants, healthy voice hearers, and patients with schizophrenia without AHs would have allowed us to determine whether the structural correlates of reality monitoring are specific to schizophrenia or whether they extend to the general population. However, despite this limitation, our study has the advantage of investigating reality monitoring in a homogeneous sample of patients with severe, daily, treatment-resistant AHs, as compared with the mixed samples of patients with heterogeneous symptoms usually enrolled in the literature. The particularity of our patient sample, in terms of treatment resistance and severity of AHs, might also contribute to the differences in right PCS length observed between our study and other studies including patients with AHs [24,25,41]. Second, the question of the specificity of the findings reported in the current study remains open. VBM findings suggested that reality-monitoring performances and item memory were linked to GMV changes in different brain regions. In addition, the PCS length seems to be specifically linked to reality-monitoring performances, that is, to reality-monitoring accuracy and externalization bias, but not to item memory or total positive symptoms. However, further investigations might assess whether reality monitoring is related to other sulci. Third, one could question how the PCS, which can be considered a static brain structure, could be related to a dynamic process such as reality monitoring. Although the PCS is expected to remain stable after its maturation during the perinatal period, some changes in PCS length over time have been described in a longitudinal study of adolescent-onset psychosis [59]. Nevertheless, the observed correlation between reality-monitoring outcomes and PCS lengths does not necessarily imply that the PCS length is the only anatomical substrate of reality-monitoring deficits (and of the emergence of AHs). Rather, we could hypothesize a two-hit process, with a reality-monitoring deficit that predates the emergence of AHs (reality-monitoring deficits are also reported in people with an at-risk mental state for psychosis and in unaffected relatives of patients with schizophrenia [39]), which might be linked to the PCS length, followed by a second phase of aggravation of reality-monitoring deficits, together with other neuroanatomical features, such as GMV alterations.
In summary, this study demonstrated that reality-monitoring performances correlated with both the PCS morphology and the GMV of crucial brain regions engaged in the reality-monitoring neural network in patients with schizophrenia. Although the exact relationship between the structural evidence we have highlighted and its functional implications remains poorly understood, these correlations propose anatomical substrates for the reality-monitoring errors observed in schizophrenia patients with AHs. Such associations should lead future studies to clarify the relationship between PCS and GMV variability and reality-monitoring abilities. Finally, further research should investigate whether similar structural features are associated with AHs in nonclinical hallucinating individuals or whether they specifically characterize AHs in schizophrenia. Data Availability Statement. The data that support the findings of this study are available from the corresponding author, M.M., upon reasonable request.
|
v3-fos-license
|
2022-09-24T15:10:59.926Z
|
2022-09-21T00:00:00.000
|
252492847
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2504-4990/4/4/40/pdf?version=1663761073",
"pdf_hash": "070cd2790113716f411c6643b47de5b30c261753",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43484",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"sha1": "64179eb4b3d96a6816b219305f366e41a66c7690",
"year": 2022
}
|
pes2o/s2orc
|
Artificial Intelligence Methods for Identifying and Localizing Abnormal Parathyroid Glands: A Review Study
Background: Recent advances in Artificial Intelligence (AI) algorithms, and specifically Deep Learning (DL) methods, demonstrate substantial performance in detecting and classifying medical images. Recent clinical studies have reported novel optical technologies which enhance the localization or assess the viability of Parathyroid Glands (PGs) during surgery, or preoperatively. These technologies could become complementary to the surgeon's eyes and may improve surgical outcomes in thyroidectomy and parathyroidectomy. Methods: The study explores and reports the use of AI methods for identifying and localizing PGs, Primary Hyperparathyroidism (PHPT), Parathyroid Adenoma (PTA), and Multiglandular Disease (MGD). Results: The review identified 13 publications that employ Machine Learning and DL methods for preoperative and operative implementations. Conclusions: AI can aid in PG, PHPT, PTA, and MGD detection, as well as in discriminating PG abnormalities, both during surgery and non-invasively. One reviewed study, for example, applied DL to identify abnormal PGs in early MIBI, late MIBI, and TcO4 thyroid scan images, using 632 parathyroid scans (414 PG, 168 nPG). The proposed model, called ParaNet, exhibits top performance, reaching an accuracy of 96.56% in distinguishing abnormal from normal PG scans. Its sensitivity and specificity are 96.38% and 97.02%, respectively, and its PPV and NPV values are 98.76% and 91.57%, respectively.
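The reported ParaNet percentages follow directly from confusion-matrix counts. In the sketch below, the counts are inferred so as to reproduce the reported values (399 true positives and 15 false negatives among the 414 abnormal scans; 163 true negatives and 5 false positives among the 168 normal scans); they are illustrative, not taken from the original paper:

```python
# Hedged sketch relating the reported classification metrics to
# confusion-matrix counts (tp/fp/tn/fn values inferred for illustration).
def metrics(tp, fp, tn, fn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on abnormal PG scans
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

print(metrics(tp=399, fp=5, tn=163, fn=15))
# -> accuracy 0.9656, sensitivity 0.9638, specificity 0.9702,
#    ppv 0.9876, npv 0.9157 (matching the reported figures)
```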
Introduction
A parathyroid adenoma (PTA) is a noncancerous (benign) tumor of the Parathyroid Glands (PGs). PGs are located in the neck, near or attached to the back side of the thyroid gland. PTA is part of a spectrum of parathyroid proliferative disorders that includes parathyroid hyperplasia, PTA, and parathyroid carcinoma [1].
Approximately eighty percent of primary hyperparathyroidism (PHPT) is caused by a PTA [2], followed by four-gland hyperplasia with ten to fifteen percent [2], and multiple adenomas with five percent [1].
Computer-Aided Diagnostic (CAD) assistance tools in PTA identification could significantly alleviate human tiredness and routine in everyday clinical practice, allowing the experts to put their efforts into nontrivial tasks. In addition, online surgical assisting tools that detect and localize important areas can aid in error prevention. Identification and preservation of the parathyroid glands (PGs) during thyroid surgery are very important. Damaging, devascularizing, autotransplanting, or inadvertently removing PGs can cause post-operative hypocalcemia. To this end, near infrared-induced autofluorescence (NI-RAF) can deliver normal and pathologic PG localization in real time. Such tools are already embedded into modern image acquisition technologies and computer-enabled surgery frameworks. However, more modern solutions are worthy of examination; recent advances in Artificial Intelligence (AI) algorithms, specifically Deep Learning (DL) methods, demonstrate substantial performance in detecting and classifying medical images [2][3][4].
DL brought a revolution in feature-extraction from image data, enabling the computer-suggested capture of millions of potentially significant image features. DL algorithms can learn to detect and distinguish important features that characterize an image according to a predefined label. For example, such methods have achieved remarkable success in various cancer-detection studies utilizing various imaging modalities [3][4][5]. DL implementations are also found in video processing and biomedical signal processing.
Recent clinical studies report novel optical technologies which enhance the localization or assess the viability of Parathyroid Glands (PG). These technologies could become complementary to the surgeon's eyes and may improve surgical outcomes in thyroidectomy and parathyroidectomy [6]. More importantly, combining such technologies with state-of-the-art image and video processing computational models can multiply the capabilities of these systems and greatly increase their necessity and utility in hospitals.
Non-invasive medical imaging acquisition modalities, such as SPECT, aid in the preoperative identification of hyperparathyroidism and abnormal PG localization. Again, AI methods can substantially contribute to the detection task and assist medical staff.
The present review investigates the implementation of AI for identifying and localizing abnormal PGs and PHPT. The Literature Review identifies 13 related papers from the year 2000 to July of 2022 and discusses their findings and methods. Current limitations and future suggestions are provided in the Discussion section.
Literature Review
The relevant publications were identified through extensive searches in approved publication-indexing websites and repositories. PubMed, Scopus, and Google Scholar were the major sources of information. Multiple keyword combinations were used to discover research papers and constitute the initial library. The survey covered publications from January 2000 to July 2022. A total of thirty-three publications constituted the initial library. Each publication's abstract and title were used to exclude irrelevant entries. Subsequently, thirteen research studies qualified for the review. The complete process is presented in Figure 1.
Machine Learning and Deep Learning in a Nutshell
This section describes the AI methods and algorithms reported in the literature review.
Machine Learning
ML is a part of AI [7]. It uses structured or unstructured data to learn patterns, forecast future values, or discover underlying knowledge [8]. The general idea of a machine that learns from a set of past observations is not a new one [9]. The large amounts of data of any kind at the disposal of medical research centres and hospitals do not guarantee the successful development of an ML model; one of the most difficult challenges for engineers and programmers is labelling [10]. An ML model is commonly built upon a specific question or hypothesis to be investigated. For example, the malignancy suspiciousness rating of nodules inside specific organs and tissues could be the focal point of ML methods. In essence, the medical dilemma of whether an observed nodule is malignant or benign is a likely domain for applying an ML model. ML methods differ in how much supervision they receive with respect to this focal point. Hence, we distinguish supervised ML, unsupervised ML, and semi-supervised ML [10].
Supervised learning involves working with labelled datasets for training and testing. Every instance in the training data is accompanied by a specific and desired output/target, which is utilized by the algorithm in order to learn [11]. Examples of data where the desired output is known and their values are predefined are called labelled examples. In the case of parathyroid gland detection, the actual location of an image finding is considered to be the label of each instance. Based on this label, the ML model learns to identify patterns in the image related to this location. In a similar example, research may focus on distinguishing between normal and abnormal parathyroid images of various modalities (e.g., SPECT or histopathological images). In that case, the actual label of the image (normal or abnormal) is considered the ground truth.
Contrary to supervised learning, unsupervised learning utilizes unlabeled data, aiming to discover hidden patterns that group the data into clear and sufficiently demarcated sets [12]. Unsupervised ML can reveal new knowledge from data by analyzing the suggested patterns and performing cross-examination [13]. Unsupervised learning is often equated with data mining, a broader field aiming to discover patterns in data by deploying both ML and statistical or mathematical tools.
Dealing with labelled and unlabeled data is the objective of semi-supervised learning [14]. Not necessarily reliant upon discovering underlying patterns within the unlabeled data, this method instead focuses on discovering basic patterns from a set of labelled data and matching them with similar patterns of a set of unlabeled data [15]. Based on the confidence of the prediction, a certain amount of unlabeled data is incrementally incorporated into the labelled data to increase their size.
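As a toy illustration of these regimes, the Python sketch below contrasts a supervised classifier with an unsupervised clusterer on the same synthetic feature matrix. All data, feature meanings, and class sizes here are hypothetical stand-ins, not taken from any reviewed study.

```python
# Minimal sketch: supervised vs. unsupervised learning on hypothetical data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical stand-in features (e.g., intensity/texture descriptors per scan).
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),   # "normal PG" group
               rng.normal(2.0, 1.0, (100, 5))])  # "abnormal PG" group
y = np.array([0] * 100 + [1] * 100)              # ground-truth labels

# Supervised: the labels drive what is learned.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("supervised accuracy:", clf.score(X_te, y_te))

# Unsupervised: the same points, grouped without ever seeing y.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```

A semi-supervised variant would train on the labelled rows only and then incrementally add confidently predicted unlabelled rows to the training set, as described above.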
Deep Learning
DL refers to various ML approaches utilizing many nonlinear processing units grouped in layers, which process the input information by gradually applying specific transformations [22]. In the basic approach, the layers are sequentially connected; in essence, each layer processes the previous layer's output [23]. In this way, different levels of abstraction can acquire hierarchical representations of the input data. Special neural networks related to image feature extraction are utilized in DL applications.

Those networks are known as CNNs, and their name comes from the convolution operation, which is the cornerstone of such methods. CNNs were introduced by LeCun [24]. A CNN is a deep neural network that mainly uses convolution layers to extract useful information from the input data, usually feeding a final Fully Connected (FC) layer [25]. CNNs exhibit impressive performance in a variety of tasks. A detailed explanation of the convolution process follows. A convolution operation applies a filter, a table of weights, that slides across the input image. An output pixel produced at every position is a weighted sum of the input pixels covered by the filter. The weights of the filter, as well as the size of the table (kernel), are constant for the duration of the scan. Therefore, convolutional layers can capture the shift-invariance of visible patterns and extract robust features.
After several convolutional and pooling layers, one or more FC layers may aim to perform high-level reasoning. FC layers connect all previous layers' neurons with every neuron of the FC layer.
The last layer of a CNN is the output layer. The Softmax operator [26] is a common classifier for CNNs; alternatively, a Support Vector Machine (SVM) operating on CNN-extracted features is sometimes used. CNNs have been widely used for medical image classification [27][28][29][30][31][32].
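The pipeline just described (convolution, pooling, fully connected layer, softmax output) can be made concrete with a minimal sketch. The single-channel 64x64 input and the two-class output below are illustrative assumptions, not the architecture of any model reviewed here.

```python
# Minimal CNN sketch: convolution -> pooling -> fully connected -> softmax.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # filter slides over image
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # FC layer

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)  # logits; softmax turns them into probabilities

# One hypothetical grayscale 64x64 scan; shapes are illustrative only.
model = TinyCNN()
logits = model(torch.randn(1, 1, 64, 64))
probs = torch.softmax(logits, dim=1)  # e.g., normal vs. abnormal probabilities
print(probs)
```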
Results
The review identified two major categories, namely, thyroidectomy-assisting methods for localizing PGs and preoperative PG detection and abnormality identification. Tables 1 and 2 summarize the type and the results of the reported 13 studies, respectively.
Thyroidectomy Assisting Methods for Localizing Parathyroid Glands
Early and precise detection of PGs is a challenging problem in thyroidectomy due to their small size and an appearance similar to that of surrounding tissues. Several AI methods have been designed and proposed to assist surgeons in localizing and identifying PGs. The recent literature makes full use of emerging ML and DL algorithms to achieve high detection rates.
Kim et al. [33] introduced a prototype solution for the reduction of false-positive PGs localized using near-infrared autofluorescence (NIRAF) [34] methods. Their appliance is equipped with a coaxial excitation light (785 nm) and a dual sensor. Under this setup, the authors employed the YOLO v5 [35] network, a real-time object-detection DL model, to identify and localize PGs. The authors evaluated their solution's clinical feasibility in situ and ex vivo using sterile drapes on ten human subjects. Video data comprising 1287 images of well-visualized and localized PGs from six human subjects were utilized. This method yielded a mean average precision of 94.7% and a processing time of 19.5 milliseconds per detection. Whether the proposed method maintains this top performance once more human participants are included is a matter for future research.
Akbulut et al. [36] proposed a decision tree for intraoperative autofluorescence assessment of PGs in PHPT. The study involved 102 patients and 333 confirmed PGs. The authors extracted predictors from each PG, and the developed decision tree used normalized autofluorescence intensity, heterogeneity index, and gland volume to predict normal versus abnormal glands and subclasses of parathyroid pathologies. The algorithm achieved 95% accuracy in distinguishing between normal and abnormal PGs and 84% in predicting parathyroid pathologies' subclasses. However, the authors do not report the training and evaluation samples.
Wang et al. [37] benchmarked the YOLO V3, Faster R-CNN, and Cascade algorithms for identifying PGs during endoscopic approaches. The study involved 166 endoscopic thyroidectomy videos, of which 1700 images were employed (frames). The experiments revealed the superiority of Faster R-CNN in this task, which achieved precision, recall rate, and F1 scores of 88.7%, 92.3%, and 90.5%, respectively. The authors evaluated this network further using an independent external cohort of 20 videos. Senior and junior surgeons' visual estimation was used for comparisons. In this test set, the parathyroid identification rate of their method was 96.9%, while senior surgeons and junior surgeons achieved 87.5% and 71.9%, respectively.
Avci et al. [38] used the Google AutoML platform to identify an optimal DL model to localize parathyroid-specific autofluorescence on near-infrared imaging. The study involved 466 intraoperative near-infrared images of 197 participants undergoing thyroidectomy or parathyroidectomy procedures. The data were split into three sets: training, validation, and test. A total of 527 PG AF signals from the near-infrared images obtained intraoperatively during these procedures were used to develop the model's training set. The method yielded a recall of 90.5% and a precision of 95.7%, corresponding to an accuracy of 91.9% in detecting PGs.
Avci et al. [39] repeated the above study using a total of 906 intraoperative parathyroid autofluorescence images of 303 patients undergoing parathyroidectomy/thyroidectomy. The dataset was split, and 20% was kept for evaluation. The authors evaluated their models based on AUROC and AUPRC, which were found to be 0.9 and 0.93, respectively. Precision and recall were reported at 89% each.
Wang et al. [40] proposed an innovative method for identifying PGs based on laser-induced breakdown spectroscopy (LIBS). The study involved 1525 original spectra (773 PG spectra and 752 nPG spectra) from 20 smear samples of three rabbits. The authors extracted the emission lines related to K, Na, Ca, N, O, CN, and C2 and built several ML algorithms to distinguish between PGs and nPGs. The predictive attributes were ranked based on the importance weight calculated by Random Forest. The Artificial Neural Network model combined with the Random-Forest-based feature selection achieved a 92% accuracy.
Preoperative Parathyroid Gland Detection and Abnormality Identification
Sandqvist et al. [41] proposed an ensemble of decision trees with Bayesian hyperparameter optimization for predicting the presence of overlooked PTAs at a preoperative level using 99mTc-Sestamibi-SPECT/CT technology in Multiglandular Disease (MGD) patients. The authors used six predictors, namely, the preoperative plasma concentrations of parathyroid hormone, total calcium, and thyroid-stimulating hormone, the serum concentration of ionized calcium, the 24-h urine calcium, and the histopathological weight of the localized PTA at imaging. The retrospective study involved 349 patients, and the dataset was split into 70% for training and 30% for testing. The authors designed their framework using two response classes: patients with Single-Gland Disease (SGD) correctly localized at imaging, and MGD patients in whom only one PTA was localized on imaging. Their algorithm achieved a 72% true-positive prediction rate for MGD patients and a misclassification rate of 6% for SGD patients. This study confirmed that AI could aid in identifying patients with MGD for whom 99mTc-Sestamibi-SPECT/CT failed to visualize all PTAs.
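For readers who want a feel for this kind of tabular pipeline, the following sketch trains an ensemble of trees with a hyperparameter search on synthetic stand-in data. Randomized search is used here as a simple substitute for the Bayesian optimization of [41], and every column, label, and value is a placeholder.

```python
# Sketch of an ensemble-of-trees classifier with hyperparameter search.
# Randomized search stands in for the Bayesian optimization used in [41];
# the six predictor columns and all labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(349, 6))      # e.g., PTH, total Ca, TSH, ionized Ca, ...
y = rng.integers(0, 2, size=349)   # SGD vs. MGD (placeholder labels)

search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [2, 3, 4],          # documenting depth settings explicitly
        "learning_rate": [0.01, 0.05, 0.1],
    },
    n_iter=10, cv=5, random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```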
Stefaniak et al. [42] developed an ANN to detect and locate pathological parathyroid tissue in planar neck scintigrams. This study involved 35 participants. The detailed data consisted of sets of three single pixels, each belonging to one of the three consecutive neck scintigrams generated 20 min after (99m)TcO(4)- administration, 10 min after (99m)Tc-MIBI injection, and 120 min after (99m)Tc-MIBI injection, respectively. The results of the ANN were compared to the conventional assessment of two radionuclide parathyroid examinations, namely, the subtraction method and (99m)Tc-MIBI double-phase imaging. The ANN output showed a close relationship with the visual assessment of the original neck scintigrams, with a coefficient of determination R² of 0.717 and a standard error of 0.243 during training. Multidimensional regression analysis yielded a weaker relationship, with an R² of 0.543 and a standard error of 0.567.
Yoshida et al. [43] employed RetinaNet [44], a DL network, for the detection of PTA by parathyroid scintigraphy with 99m-technetium sestamibi (99mTc-MIBI) before surgery. The study enrolled 237 patients who underwent parathyroid scintigraphy using 99mTc-MIBI, each of whom was determined to be a positive or negative case. These patients' scans comprised 948 scintigrams with 660 annotations, which were used for training and validation purposes. The test set included 44 patients (176 scintigrams and 120 annotations). The model's lesion-based sensitivity and mean false-positive indications per image (mFPI) were assessed with the test dataset. The model yielded a sensitivity of 82% with an mFPI of 0.44 for the scintigrams of the early-phase model. For the delayed-phase model, the results reported 83% sensitivity and 0.31 mFPI.
Somnay et al. [45] employed several ML models for recognizing PHPT using clinical predictors, such as age, sex, and serum levels of preoperative calcium, phosphate, parathyroid hormone, vitamin D, and creatinine. The study enrolled 11,830 patients managed operatively at three high-volume endocrine surgery centers. Under a 10-fold cross-validation procedure, the Bayesian network was found to be superior to the rest of the ML models, achieving 95.2% accuracy and an AUC score of 0.989. This performance by the Bayesian network is interesting because, in general, such networks tend to overfit and their generalization capabilities are limited.
Imbus et al. [46] benchmarked ML classifiers for predicting MGD in PHPT patients. The study involved 2010 participants (1532 patients with SGD and 478 with MGD). The fourteen predictor variables included patient demographic, clinical, and laboratory attributes. The boosted tree classifier was found to be superior to the rest of the ML models, reaching an accuracy of 94.1%, a sensitivity of 94.1%, a specificity of 83.8%, a PPV of 94.1%, and an AUC score of 0.984.
Chen et al. [47] applied transfer learning for the automatic detection of PHPT from ultrasound images annotated by senior radiologists. The study involved 1000 ultrasound images containing PHPTs, of which 200 images were used to evaluate the developed model. For this purpose, they employed three well-established Convolutional Neural Networks to analyze the PHPT ultrasound and suggest potential features underlying the presence of PHPT. This study achieved the best recall, at 0.956.
In a recent work by Apostolopoulos et al. [48], the authors developed a three-path VGG19-based network to identify abnormal PGs in the early MIBI, late MIBI and TcO4 thyroid scan images. The study includes 632 parathyroid scans (414 PG, 168 nPG). The proposed model, which is called ParaNet, exhibits top performance, reaching an accuracy of 96.56% in distinguishing between abnormal PGs and normal PGs scans. Its sensitivity and specificity are 96.38% and 97.02%, respectively. PPV and NPV values are 98.76% and 91.57%, respectively.
Discussion
The research study identified and described 13 studies addressing the issue of PG identification and localization, PHPT, PTA, and MGD detection. Most studies focus on PG detection (42%), while PG localization is addressed in 33% of the total studies.
There has been a significant amount of research on preoperative detection using ultrasound and scintigraphy image sources. Preoperative detection of abnormalities is also addressed using ML approaches without deploying any imaging modality. Significant clinical and demographic predictors are revealed in the literature, contributing to the diagnosis of PHPT and MGD. Overall, preoperative methods are introduced in 54% of the reviewed publications (Figure 2). The studies report very promising results in preoperative classification tasks, such as normal-abnormal image discrimination or MGD prediction using clinical factors. The observed sensitivity varies between 82 and 96 per cent. The majority of studies report an accuracy that ranges between 91 and 96 per cent. However, preoperative PG localization is not yet explored. It is expected that localizing each abnormal PG in thyroid scans would yield a number of false-positive findings, thereby making this task very challenging.
The research community is also making efforts to provide novel appliances and topologies to improve the detection of findings during surgery. Most relevant publications accompany their technological solutions with traditional ML and DL approaches to enhance detection accuracy or to provide assisting computational tools. Studies presenting technological and AI solutions that operate during surgery report better results regarding PG localization. It is observed that none of the reviewed research works integrates clinical factors and imaging data. It is expected that combining any available demographic, clinical, and biological data, where existent, would improve the diagnostic accuracy of image-based approaches and reduce the many reported false-positive cases. Despite their promising results, most studies use very few participants to train and evaluate their models. Most studies address this issue by extracting many video frames and slices from each patient. The number of samples is therefore adequate for model training; however, the datasets remain biased because the utilized frames/slices share the same origin. As a result, the reported results might be misleading. Still, there are studies that use more participants and report acceptable results and meaningful conclusions [41,46,48]. Most studies are validated on cohorts that do not exceed 500 participants. As a result, the reported results, though undeniably encouraging, are not yet well-grounded. While the number of published studies peaks after 2021, research on PG identification and localization, and on PHPT, PTA, and MGD detection, is still constrained. The absence of publicly available data repositories covering relevant tasks impedes biomedical engineering experts from exploring the full potential of Artificial Intelligence in this domain. Nevertheless, the significant results reported in the literature undeniably open new horizons. Specifically, in PG detection and localization, the emergence of large-scale image datasets could accelerate the exploration of novel and state-of-the-art DL approaches and provide trustworthy solutions for medical assisting tools.
The emerging field of eXplainable Artificial Intelligence (XAI), a set of algorithms and methods providing explanations, can increase the medical importance and usefulness of AI methods in PHPT detection and PG abnormality discrimination. Most studies do not use explainable algorithms that expose the reasons for their decisions to the user. As an example of explainable AI, the study of Imbus et al. [46] uses a decision tree for discriminating MGD from SGD. Decision trees are inherently self-explanatory. However, in studies where an ensemble of decision trees is employed (e.g., [36]), it is difficult to provide explanations. In studies where DL is employed (e.g., [43,48]), post-hoc explainability methods, such as the Grad-CAM algorithm [49], are not considered. Future studies could consider adopting explainable strategies to enhance their results and provide frameworks that are meaningful in everyday practice. A minimal sketch of such a post-hoc method is given below.
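As a rough illustration of what a post-hoc method like Grad-CAM involves, the following sketch implements the basic recipe with forward and backward hooks on a generic torchvision backbone. The backbone choice, target layer, and random input are assumptions for demonstration only, not the pipeline of any reviewed study.

```python
# Minimal Grad-CAM sketch (post-hoc explanation) with forward/backward hooks.
# The backbone and input are generic placeholders, not a reviewed model.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
acts, grads = {}, {}

layer = model.layer4  # last convolutional block
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder image
score = model(x)[0].max()  # score of the top class
score.backward()

w = grads["g"].mean(dim=(2, 3), keepdim=True)         # channel importance
cam = F.relu((w * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224): a heatmap over the input image
```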
It was observed that many studies do not report their methodology extensively in terms of the employed ML and DL algorithms. Moreover, the majority of studies employ basic AI methods without mentioning any parameter tuning. For example, in studies where decision trees are designed, the maximum number of leaf nodes and the maximum depth are not documented.

It is concluded that more effort should be put into designing and furnishing problem-specific models with well-grounded parameter selection. As an example of such a methodology, the authors of [41] performed Bayesian hyperparameter tuning, which improved their results.
Finally, there is no established and documented method for validating the results. Some studies rely solely on a single train-test split, without any cross-validation; this is only suitable when large amounts of data are involved. In studies with few samples, partitioning the dataset at random may introduce biases. Other studies perform cross-validation (e.g., 3-fold or 10-fold) but do not consider control groups and external test sets. As a result, comparisons between studies are difficult. One way to make validation more robust is to keep all frames or slices from the same patient inside the same fold, as sketched below.
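A patient-grouped split of this kind might look as follows; the sketch uses synthetic data and a generic classifier, and the grouping variable stands in for patient identity.

```python
# Patient-grouped cross-validation: frames/slices from one patient never
# appear in both training and test folds. All data here are synthetic.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_frames = 300
X = rng.normal(size=(n_frames, 8))             # placeholder frame features
y = rng.integers(0, 2, size=n_frames)          # placeholder labels
patients = rng.integers(0, 30, size=n_frames)  # 30 patients, many frames each

scores = []
for tr, te in GroupKFold(n_splits=5).split(X, y, groups=patients):
    model = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    scores.append(model.score(X[te], y[te]))
    assert set(patients[tr]).isdisjoint(patients[te])  # no patient leakage
print("fold accuracies:", np.round(scores, 3))
```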
Moreover, the robustness of the proposed pipelines with respect to acquisition-device variation is not explored. Different devices typically yield different image characteristics, e.g., resolution, pixel intensities, and video frame rates. Some variation in the models' effectiveness is therefore expected and should be investigated.
Conclusions
This review study presented thirteen works addressing the issue of PG identification and localization, and PHPT, PTA, and MGD detection. The reviewed studies were focused on both preoperative and operative solutions. Significant clinical and demographic predictors are revealed in the literature, contributing to the effective diagnosis of PHPT and MGD. Most relevant publications accompany their technological solutions with traditional ML and DL approaches to enhance the detection accuracy or to provide assisting computational tools. In the task of PG detection and localization, the emergence of large-scale image datasets could accelerate the exploration of novel and state-of-the-art DL approaches and provide trustworthy solutions for medical assisting tools. Moreover, explainable algorithms must be introduced to enhance the results and increase the significance of the proposed methods.
An approximability-related parameter on graphs—properties and applications
We introduce a binary parameter on optimisation problems called separation. The parameter is used to relate the approximation ratios of different optimisation problems; in other words, we can convert approximability (and non-approximability) results for one problem into (non-)approximability results for other problems. Our main application is the problem (weighted) maximum H-colourable subgraph (MAX H-COL), which is a restriction of the general maximum constraint satisfaction problem (MAX CSP) to a single, binary, and symmetric relation. Using known approximation ratios for MAX k-CUT, we obtain general asymptotic approximability results for MAX H-COL for an arbitrary graph H. For several classes of graphs, we provide near-optimal results under the unique games conjecture. We also investigate separation as a graph parameter. In this vein, we study its properties on circular complete graphs. Furthermore, we establish a close connection to work by Šámal on cubical colourings of graphs. This connection shows that our parameter is closely related to a special type of chromatic number. We believe that this insight may turn out to be crucial for understanding the behaviour of the parameter, and in the longer term, for understanding the approximability of optimisation problems such as MAX H-COL.
Introduction
In this article we study an approximation-preserving reducibility called continuous reduction (Simon, 1989). A continuous reduction allows the transfer of constant-ratio approximation results from one optimisation problem to another. We introduce a binary parameter on optimisation problems which measures the degradation in the approximation guarantee of a given continuous reduction. We call this parameter the separation of the two problems.

The separation parameter is then used to study a concrete family of optimisation problems called the maximum H-colourable subgraph problems, or MAX H-COL for short. This family includes the problems MAX k-CUT, for which good approximation ratios are known (Frieze and Jerrum, 1997). Starting from these ratios, we use the notion of separation to obtain general approximation results for MAX H-COL. Our main contribution in relation to approximation is Theorem 3.10, which gives a constant approximation ratio of

$$1 - \frac{1}{r} + \frac{2 \ln k}{k^2}\left(1 - \frac{1}{r} + o(1)\right)$$

for MAX H-COL, when the graph H has clique number r and chromatic number k.

In the setting of MAX H-COL, we view separation as a binary graph parameter. While the initial motivation for introducing this parameter was to study the approximability of optimisation problems, it turns out that separation is a parameter of independent interest in graph theory. We investigate this aspect of separation in the second part of the article. Among the most striking results is the connection between separation and a (generalisation of) cubical colourings and fractional covering by cuts previously studied by Šámal (2005, 2006, 2012).
The Separation Parameter
Before we consider continuous reductions and the separation parameter, we formally define optimisation problems. An optimisation problem M is defined over a set of instances I_M; each instance I ∈ I_M has an associated finite set Sol_M(I) of feasible solutions. The objective is, given an instance I, to find a feasible solution of optimum value with respect to some measure (objective function) m_M(I, f), where f ∈ Sol_M(I). The optimum of I is denoted by Opt_M(I) and is defined as the largest measure of any solution to I. (We will only consider maximisation problems in this article.) We will make the assumption that every instance of every problem considered has some feasible solution and that all feasible solutions have positive rational measure. Then, the performance ratio

$$R_M(I, f) = \frac{m_M(I, f)}{\mathrm{Opt}_M(I)}$$

is always defined, where I ∈ I_M and f ∈ Sol_M(I).

A solution f ∈ Sol_M(I) to an instance I of a maximisation problem M is called r-approximate if it satisfies R_M(I, f) ≥ r.
An approximation algorithm for M has approximation ratio r(n) if, given any instance I of M, it outputs an r(|I|)-approximate solution. We say that M can be approximated within r(n) if there exists a polynomial-time algorithm for M with approximation ratio r(n).

All optimisation problems that we consider belong to the class NPO; this class contains the problems for which the instances and solutions can be recognised in polynomial time, the solutions are polynomially bounded in the input size, and the objective function can be computed in polynomial time. An NPO problem is in the class APX if it can be approximated within a constant factor. If, in addition, for any rational value 0 < r < 1, there exists an algorithm which, given an instance, returns an r-approximate solution in time polynomial in the size of the instance, then we say that the problem admits a polynomial-time approximation scheme (PTAS). Note that the dependence of the running time on r may be arbitrary.

A reduction from an NPO problem N to an NPO problem M is specified by two polynomial-time computable functions: ϕ, which maps instances of N to instances of M, and γ, which takes an instance I ∈ I_N and a solution f ∈ Sol_M(ϕ(I)) and returns a solution to I. Completeness in APX can be defined using appropriate reductions, and it is known that an APX-complete problem does not admit a PTAS, unless P = NP. For a more comprehensive introduction to these classes and their accompanying reductions, see Ausiello et al. (1999) and Crescenzi (1997). Our main focus will be on the following reducibility.
Definition 1.1 (Simon, 1989; Crescenzi, 1997) A reduction ⟨ϕ, γ⟩ from N to M is called a continuous reduction if a positive constant α exists such that, for every I ∈ I_N and f ∈ Sol_M(ϕ(I)), it holds that

$$R_N(I, \gamma(I, f)) \geq \alpha \cdot R_M(\varphi(I), f). \qquad (1)$$

Every continuous reduction preserves membership in APX. More specifically, we have the following.

Proposition 1.2 (Simon, 1989) Assume that there is a continuous reduction from N to M with a constant α. If M can be approximated within a constant ratio r, then N can be approximated within α · r.
Given a fixed continuous reduction ⟨ϕ, γ⟩, we ask the following question: which is the largest constant α that satisfies (1)?

To answer this question we take the supremum of all positive constants satisfying (1) over all I ∈ I_N and f ∈ Sol_M(ϕ(I)).

Definition 1.3 The separation of M and N, given a continuous reduction ⟨ϕ, γ⟩ from N to M, is defined as the following quantity:

$$s(M, N) := \inf_{I \in \mathcal{I}_N,\; f \in \mathrm{Sol}_M(\varphi(I))} \frac{R_N(I, \gamma(I, f))}{R_M(\varphi(I), f)}. \qquad (2)$$

Needless to say, the separation is difficult to compute in the general case. Thus, we henceforth concentrate on one particular optimisation problem that is parameterised by graphs. It is, however, important to note that the parameter s can be defined over many different types of optimisation problems and it is by no means restricted to problems parameterised by graphs.
The Maximum H-Colourable Subgraph Problem
Denote by G the set of all non-empty, simple, and undirected graphs. For a graph G ∈ G, let W(G) be the set of weight functions w : E(G) → Q_{≥0} such that w is not identically 0. For w ∈ W(G), we let ‖w‖₁ = Σ_{e ∈ E(G)} w(e) denote the total weight of G with respect to w.

Let G and H be graphs in G. A graph homomorphism from G to H is a vertex map which carries the edges in G to edges in H. The existence of such a map is denoted by G → H. If both G → H and H → G, then G and H are said to be homomorphically equivalent. This is denoted by G ≡ H.

We now consider the following collection of problems, parameterised by a graph H ∈ G.
Problem 1 The weighted maximum H-colourable subgraph problem, or MAX H-COL for short, is the maximisation problem with

Instance: An edge-weighted graph (G, w), where G ∈ G and w ∈ W(G).
Solution: A subgraph G′ ⊆ G such that G′ → H.
Measure: The weight of G′ with respect to w.
Example 1. Let G be a graph in G. Given a subset of vertices S ⊆ V(G), a cut in G with respect to S is the set of edges from a vertex in S to a vertex in V(G) \ S. The MAX CUT problem asks for the size of a largest cut in G.

More generally, for k ≥ 2, a k-cut in G is the set of edges between S_i and S_j, i ≠ j, where S_1, ..., S_k is a partition of V(G). The MAX k-CUT problem asks for the size of a largest k-cut. This problem is readily seen to be equivalent to finding a largest subgraph of G which has a homomorphism to the complete graph K_k, i.e. finding a largest k-colourable subgraph of (a uniformly edge-weighted graph) G. Hence, for each k ≥ 2, the problem MAX k-CUT is included in the collection of MAX H-COL problems. It is well known that MAX k-CUT is APX-complete, for k ≥ 2 (Ausiello et al., 1999).

Given an edge-weighted graph (G, w), denote by mc_H(G, w) the measure of an optimal solution to the problem MAX H-COL. Denote by mc_k(G, w) the (weighted) size of a largest k-cut in (G, w). This notation is justified by the equivalence of the problems MAX k-CUT and MAX K_k-COL. The decision version of MAX H-COL, the H-COLOURING problem, has been extensively studied (see the monograph by Hell and Nešetřil (2004) and its many references). Hell and Nešetřil (1990) were the first to show that this problem is in P if H contains a loop or is bipartite, and NP-complete otherwise.
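To make the quantity mc_H(G, w) concrete, the following brute-force Python sketch enumerates all vertex maps f : V(G) → V(H) and keeps the best total weight of preserved edges. It is exponential in n(G) and only meant for the small example graphs discussed in this article.

```python
# Brute-force mc_H(G, w): maximise, over all vertex maps f: V(G) -> V(H),
# the total weight of edges of G that f carries to edges of H.
from itertools import product

def mc(G_edges, H_edges, n_G, n_H, w=None):
    w = w or {e: 1.0 for e in G_edges}          # uniform weights by default
    H = {frozenset(e) for e in H_edges}
    best = 0.0
    for f in product(range(n_H), repeat=n_G):   # all vertex maps
        val = sum(w[(u, v)] for (u, v) in G_edges
                  if frozenset((f[u], f[v])) in H)  # is the edge preserved?
        best = max(best, val)
    return best

C5 = [(i, (i + 1) % 5) for i in range(5)]  # the 5-cycle
K2 = [(0, 1)]                              # a single edge
print(mc(C5, K2, 5, 2))  # largest cut of C5: 4.0 of its 5 edges
```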
Assuming that M → N, we consider the reduction ⟨ϕ₁, γ₁⟩ defined as follows. The function ϕ₁ maps an instance (G, w) ∈ I_N to (G, w) ∈ I_M and the function γ₁ maps a solution G′ ∈ Sol_M(G, w) to the solution G′ ∈ Sol_N(G, w). Here, m_M(ϕ₁(I), f) = m_N(I, γ₁(I, f)), so the separation defined in (2) takes the following simplified form.

Definition 1.4 Let M, N ∈ G. The separation of M and N is defined as the following quantity:

$$s(M, N) := \inf_{G \in \mathcal{G},\; w \in \mathcal{W}(G)} \frac{mc_M(G, w)}{mc_N(G, w)}. \qquad (3)$$

Remark 1.5 Note that we do not require M → N in Definition 1.4. The reason for this is that the definition makes sense independently of its connection to the reduction ⟨ϕ₁, γ₁⟩. We can then study the separation parameter independently as a graph parameter.
In Section 2 we show that the reduction ⟨ϕ₁, γ₁⟩ is indeed a continuous reduction. Proposition 1.2 therefore implies the following.

Lemma 1.6 Let M → N be two graphs in G. If mc_M can be approximated within α, then mc_N can be approximated within α · s(M, N). If it is NP-hard to approximate mc_N within β, then mc_M is not approximable within β/s(M, N), unless P = NP.

Example 2. Goemans and Williamson (1995) give an algorithm for MAX CUT which is a 0.87856-approximating algorithm for MAX K_2-COL. In Proposition 2.5 we will see that s(K_2, C_11) = 10/11.

We can now apply Lemma 1.6 with M = K_2 and N = C_11, and we find that this MAX CUT algorithm approximates MAX C_11-COL within 0.87856 · s(K_2, C_11) ≈ 0.79869.
Article Outline
The basic properties of separation for the MAX H-COL family are worked out in Section 2. The main result, Theorem 2.1, provides a simplification of (3) which we then use to obtain explicit values and bounds on separation. In particular, this shows that the reduction ⟨ϕ₁, γ₁⟩ defined above is continuous. A linear programming formulation of separation is presented in Section 2.3.

In Section 3, we use separation to study the approximability of MAX H-COL and investigate optimality issues for several classes of graphs. Comparisons are made to the bounds achieved by the general MAX 2-CSP algorithm by Håstad (2005). Our investigation covers a spectrum of graphs, ranging from graphs with few edges and/or containing long shortest cycles, to an asymptotic result, Theorem 3.10, for arbitrary graphs. We also look at graphs in the G(n, p) random graph model, pioneered by Erdős and Rényi (1960).

In order to apply our method successfully to the MAX H-COL problem, but also to get a better understanding of the separation parameter, we want to compute some explicit values of s(M, N) for various graphs M and N. To this end, we turn to the circular complete graphs in Section 4. We take a close look at 3-colourable circular complete graphs and, amongst other things, find that there are regions of such graphs on which separation is constant. The application of these results to MAX H-COL relies heavily on existing graph homomorphism results, and in this context we will see that a conjecture by Jaeger (1988) has precise and interesting implications (see Section 6.2).

Another way to study separation is to relate it to known graph parameters. In Section 5 we show that our parameter is closely related to a fractional edge-covering problem and an associated "chromatic number", and that we can pass effortlessly between the two views, gaining insights into both. This part is highly inspired by work of Šámal (2005, 2006, 2012) on cubical colourings and fractional covering by cuts. In particular, our connection to Šámal's work brings about a new family of chromatic numbers that provides us with an alternative way of computing our parameter. We also use our knowledge of the behaviour of separation to decide two conjectures concerning cubical colourings.

Finally, we summarise the future prospects and open problems of the method in Section 6.
Basic Properties of the Separation Parameter
In this section we establish a basic theorem on separation, Theorem 2.1, and derive a number of results from it. It follows that the reduction ⟨ϕ₁, γ₁⟩ is continuous. We also give a general bound on the separation parameter, exact values in some special cases, and a linear programming formulation.
Let N ∈ G be a graph. An edge automorphism of N is a permutation of the edges of N that sends edges with a common vertex to edges with a common vertex. The set of all edge automorphisms is called the edge automorphism group of N, and it is denoted by Aut*(N). The graph N is called edge-transitive if Aut*(N) acts transitively on E(N). Let Ŵ(N) be the set of weight functions w ∈ W(N) that satisfy ‖w‖₁ = 1, and for which w(e) = w(π · e) for all e ∈ E(N) and π ∈ Aut*(N). That is, w is constant on the orbits of Aut*(N).

Theorem 2.1 Let M, N ∈ G. Then

$$s(M, N) = \inf_{\hat{w} \in \hat{\mathcal{W}}(N)} mc_M(N, \hat{w}).$$

In particular, if N is edge-transitive, then s(M, N) = mc_M(N, 1/e(N)), where 1/e(N) denotes the weight function assigning weight 1/e(N) to every edge of N.

We postpone the proof of Theorem 2.1 to Section 2.2.
Corollary 2.2 Let M → N be graphs in G. The reduction ⟨ϕ₁, γ₁⟩ is continuous with constant s(M, N).

Proof: For every instance I = (G, w) of MAX N-COL and solution G′ → N, we have, by definition, m_N(I, G′) = m_M(ϕ₁(I), G′), so

$$\frac{R_N(I, G')}{R_M(\varphi_1(I), G')} = \frac{mc_M(G, w)}{mc_N(G, w)} \geq s(M, N). \qquad \Box$$
Exact Values and Bounds
Let n(G) and e(G) denote the number of vertices and edges in G, respectively. Let ω(G) denote the clique number of G: the greatest integer r such that K_r → G. Let χ(G) denote the chromatic number of G: the least integer n such that G → K_n. The Turán graph T(n, r) is a graph formed by partitioning a set of n vertices into r subsets, with sizes as equal as possible, and connecting two vertices whenever they belong to different subsets. The properties of Turán graphs are given by the following theorem.

Theorem 2.3 (Turán) Among all graphs on n vertices that contain no clique of size r + 1, the Turán graph T(n, r) has the maximum number of edges.

Using Turán's theorem, we can determine the separation exactly when the second parameter is a complete graph.

Proposition 2.4 Let H be a graph with ω(H) = r and let n be an integer such that χ(H) ≤ n. Then,

$$s(H, K_n) = \frac{e(T(n, r))}{e(K_n)}.$$

Proof: The graph K_n is edge-transitive. Therefore, by the second part of Theorem 2.1, it suffices to show that mc_H(K_n, 1/e(K_n)) = e(T(n, r))/e(K_n). By definition, T(n, r) is an r-partite subgraph of K_n, so T(n, r) → K_r → H, which gives mc_H(K_n, 1/e(K_n)) ≥ e(T(n, r))/e(K_n); since ω(H) = r, Turán's theorem implies that no subgraph of K_n with more edges maps homomorphically to H. □

Next we consider separation between cycles. The even cycles are all bipartite and therefore homomorphically equivalent to K_2. The odd cycles, on the other hand, form a chain between K_2 and C_3 = K_3 in the following manner:

$$K_2 \to \cdots \to C_{2k+1} \to C_{2k-1} \to \cdots \to C_5 \to C_3 = K_3.$$

Note that the chain is infinite on the K_2-side. The following proposition gives the separation between pairs of graphs in this chain. The value depends only on the target graph.
Proposition 2.5 Let m > k be positive integers. Then,

$$s(K_2, C_{2k+1}) = s(C_{2m+1}, C_{2k+1}) = \frac{2k}{2k+1}.$$

Proof: Since C_{2k+1} is edge-transitive, it suffices by Theorem 2.1 to show that mc_2(C_{2k+1}) = 2k = mc_{C_{2m+1}}(C_{2k+1}). Such cuts clearly exist since, after removing one edge from C_{2k+1}, the remaining subgraph is isomorphic to a path, and therefore maps homomorphically to K_2 (and to C_{2m+1}). Moreover, these cuts are the largest possible: C_{2k+1} is an odd cycle, so it maps homomorphically neither to K_2 nor to C_{2m+1} when m > k, and hence no solution can use all 2k + 1 edges. □

Given two graphs M, H ∈ G, it may be difficult to determine s(M, H) directly. However, if we know that H is "homomorphically sandwiched" between M and another graph N, so that M → H → N, then it turns out that we can use s(M, N) as a lower bound for s(M, H). More generally, we have the following lemma.
Lemma 2.6 The following implications hold:

(i) if M → K, then s(M, N) ≤ s(K, N);
(ii) if H → N, then s(M, N) ≤ s(M, H).

Proof: For the first part, if M → K, then every subgraph that maps homomorphically to M also maps to K, so mc_M(G, w) ≤ mc_K(G, w) for every (G, w), and the claim follows from (3). The second part follows similarly. □

We will see several applications of Lemma 2.6 in the following sections, but first we will use it to get a general lower bound on the separation parameter.
Proposition 2.7 Let M ∈ G be a fixed graph. Then, for any N ∈ G,

$$\frac{1}{2\,e(M)^2} \sum_{\{u,v\} \in E(M)} \deg(u)\deg(v) \;<\; s(M, N) \;\leq\; 1.$$

Proof: The upper bound follows from Theorem 2.1. For the lower bound, let n = χ(N) be the chromatic number of N, so that N → K_n. Then, s(M, N) ≥ s(M, K_n) = mc_M(K_n, 1/e(K_n)) by Lemma 2.6 and the second part of Theorem 2.1, respectively.

To give a lower bound on mc_M(K_n, 1/e(K_n)) we apply a standard probabilistic argument. Let f : V(K_n) → V(M) be a function, chosen randomly as follows: for every v ∈ V(K_n), independently let f(v) = u with probability deg(u)/(2e(M)). Every possible function f appears with nonzero probability and each function defines a subgraph of K_n by including those edges that are mapped by f to edges in M. We will show that there is at least one function that defines a subgraph K ⊆ K_n with the right number of edges.

For e ∈ E(K_n) and e′ = {u, v} ∈ E(M), let Y_{e,e′} = 1 if f maps e to e′, and Y_{e,e′} = 0 otherwise. Then, E(Y_{e,e′}) = 2 · deg(u)deg(v)/(2e(M))². Let X_e = 1 if f maps e to some edge in E(M), and X_e = 0 otherwise, so that X_e = Σ_{e′ ∈ E(M)} Y_{e,e′}. Then, the total number of edges in K is equal to Σ_{e ∈ E(K_n)} X_e, and by linearity of expectation,

$$E(e(K)) = e(K_n) \sum_{\{u,v\} \in E(M)} \frac{2 \deg(u)\deg(v)}{(2e(M))^2}.$$

Finally, we note that for an arbitrary fixed vertex v_m ∈ V(M), the function defined by f(v) = v_m for all v ∈ V(K_n) defines the empty subgraph, and has a non-zero probability. Since K_n and M are non-empty we have E(e(K)) > 0, so there must exist at least one f which defines a K with strictly more than the expected total number of edges. □

From Proposition 2.7, we have s(K_m, N) > (m − 1)/m. It may seem surprising that the value of s(K_m, N) for any graph N can be bounded this close to 1, in particular since we can choose m as large as we like. For large m, it follows from Lemma 1.6 that when K_m → N, an algorithm for MAX m-CUT will approximate MAX N-COL almost as well. This seems like a quite strong statement. However, there is a straightforward explanation: by Theorem 2.1, s(M, N) is an infimum of quantities of the form mc_M(N, w). This implies that a lower bound for mc_M (mc_m in our case) immediately yields a lower bound for s(M, N). Now, a probabilistic argument, analogous to the one in the proof of Proposition 2.7, shows that mc_m(G, w/‖w‖₁) > (m − 1)/m, for every G and every (non-zero) w. Hence, the fact that s(K_m, N) is close to 1 is merely a reflection of the fact that every (edge-weighted) graph has a large m-cut.
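The random-colouring bound underlying this discussion is easy to check empirically for the special case M = K_m: a uniformly random m-colouring of the vertices cuts about an (m − 1)/m fraction of the edges. The following sketch estimates this on a random graph; the graph, seed, and trial count are arbitrary choices for illustration.

```python
# Empirical check: a uniformly random m-colouring cuts ~ (m-1)/m of the edges.
import random

random.seed(0)
n, m = 200, 4
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < 0.1]  # a sparse random graph

trials = 200
total = 0.0
for _ in range(trials):
    colour = [random.randrange(m) for _ in range(n)]
    cut = sum(1 for (i, j) in edges if colour[i] != colour[j])
    total += cut / len(edges)
print(total / trials, "vs.", (m - 1) / m)  # both close to 0.75 for m = 4
```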
The bipartite density of a graph H is defined as b(H) := mc_2(H, 1/e(H)), and is a well-studied graph parameter (Alon, 1996; Berman and Zhang, 2003; Bondy and Locke, 1986; Hopkins and Staton, 1982; Locke, 1990). From Theorem 2.1, it follows that b(H) = s(K_2, H) whenever H is edge-transitive. The following proposition shows that separation is invariant under homomorphic equivalence. Note that this is not true for bipartite density: let T be a triangle and let H be the graph of two triangles sharing a common edge. In this case, T ≡ H, but b(T) = 2/3 ≠ 4/5 = b(H).
Proposition 2.8 Let M ≡ K and N ≡ H be two pairs of homomorphically equivalent graphs. Then, s(M, N) = s(K, H).

Proof: Using Lemma 2.6, H → N, and M → K, we get s(M, N) ≤ s(M, H) ≤ s(K, H). On the other hand, N → H and K → M, so s(K, H) ≤ s(K, N) ≤ s(M, N). □

Exploiting Symmetries (Proof of Theorem 2.1)

In this section we prove Theorem 2.1. First we need to establish a number of lemmas. The optimum mc_H(G, w) is sub-linear with respect to the weight function, as is shown by the following result.
Lemma 2.9 Let G, H ∈ G, α ∈ Q_{≥0}, and let w, w_1, ..., w_r ∈ W(G) be weight functions on G. Then,

$$mc_H(G, \alpha w) = \alpha \cdot mc_H(G, w) \quad \text{and} \quad mc_H\Big(G, \sum_{i=1}^r w_i\Big) \leq \sum_{i=1}^r mc_H(G, w_i).$$

Proof: The first part is trivial. For the second part, let G′ be an optimal solution to the instance (G, Σ_{i=1}^r w_i) of MAX H-COL. Then, the measure of this solution equals the sum of the measures of G′ as a (possibly suboptimal) solution to each of the instances (G, w_i). □

The solutions to a MAX H-COL instance have an alternative description, which is better suited for calculations: any vertex map f : V(G) → V(H) induces a partial edge map f_# that is defined on e = {u, v} ∈ E(G) precisely when {f(u), f(v)} ∈ E(H). The set of solutions to an instance (G, w) of MAX H-COL can then be taken to be the set of vertex maps f : V(G) → V(H), with the measure of f given by

$$m((G, w), f) = \sum_{e \in E(G) \,:\, f_\#(e) \in E(H)} w(e). \qquad (4)$$

We will predominantly use this description of solutions. Let f : V(G) → V(H) be a solution to the instance (G, w) of MAX H-COL, and define w_f ∈ W(H) as follows:

$$w_f(e') = \frac{1}{mc_H(G, w)} \sum_{e \in E(G) \,:\, f_\#(e) = e'} w(e), \quad e' \in E(H). \qquad (5)$$

Note that ‖w_f‖₁ = 1 iff f is optimal. The following lemma and its corollary establish half of Theorem 2.1.
Lemma 2.10 Let M, N ∈ G be two graphs. Then, for every G ∈ G, every w ∈ W(G), and any solution f to (G, w) as an instance of MAX N-COL,

$$\frac{mc_M(G, w)}{mc_N(G, w)} \geq mc_M(N, w_f).$$

Proof: Arbitrarily choose an optimal solution g : V(N) → V(M) to the instance (N, w_f) of MAX M-COL. Note that the composition g ∘ f is a solution to (G, w) as an instance of MAX M-COL, and that the total weight, with respect to w, of the edges of G carried by g ∘ f to edges of M is at least mc_M(N, w_f) · mc_N(G, w). Clearly, the measure of g ∘ f is a lower bound on mc_M(G, w). The result follows after division by mc_N(G, w). □

Fig. 1: The relation between the graphs and solutions in the proof of Lemma 2.10.

From Lemma 2.10, we have the following corollary, which shows that it is possible to eliminate G ∈ G from the infimum in the definition of s.
Corollary 2.11 Let M, N ∈ G be two graphs. Then,

$$s(M, N) = \inf_{w \in \mathcal{W}(N),\, \|w\|_1 = 1} mc_M(N, w).$$

Proof: First, we fix some optimal solution f = f(G, w) for each choice of (G, w). By taking infima over G and w on both sides of the inequality in Lemma 2.10, we then have

$$s(M, N) \geq \inf_{G, w} mc_M(N, w_f) \geq \inf_{w \in \mathcal{W}(N),\, \|w\|_1 = 1} mc_M(N, w),$$

where the second inequality holds since ‖w_f‖₁ = 1 for any optimal f. For the other direction, we specialise G to N, and restrict w to weight functions with ‖w‖₁ = 1, to obtain:

$$s(M, N) \leq \inf_{w \in \mathcal{W}(N),\, \|w\|_1 = 1} \frac{mc_M(N, w)}{mc_N(N, w)} = \inf_{w \in \mathcal{W}(N),\, \|w\|_1 = 1} mc_M(N, w),$$

where the equality holds since the identity map on N is an optimal solution, so mc_N(N, w) = ‖w‖₁ = 1. This concludes the proof. □

Proof of Theorem 2.1: From Corollary 2.11, we have that

$$s(M, N) \leq \inf_{\hat{w} \in \hat{\mathcal{W}}(N)} mc_M(N, \hat{w}).$$

To complete the first part of the theorem, it will be sufficient to prove that for any graph G ∈ G and w ∈ W(G), there is a ŵ ∈ Ŵ(N) such that the following inequality holds:

$$\frac{mc_M(G, w)}{mc_N(G, w)} \geq mc_M(N, \hat{w}). \qquad (6)$$

Taking infima on both sides of this inequality then shows that

$$s(M, N) \geq \inf_{\hat{w} \in \hat{\mathcal{W}}(N)} mc_M(N, \hat{w}).$$

Let A = Aut*(N) be the edge automorphism group of N and let π ∈ A be an arbitrary edge automorphism. If f is an optimal solution to (G, w) as an instance of MAX N-COL, then so is π ∘ f. By Lemma 2.10, inequality (6) is satisfied by w_{π∘f}. Summing π in this inequality over A gives

$$|A| \cdot \frac{mc_M(G, w)}{mc_N(G, w)} \geq \sum_{\pi \in A} mc_M(N, w_{\pi \circ f}) \geq mc_M\Big(N, \sum_{\pi \in A} w_{\pi \circ f}\Big),$$

where the last inequality follows from Lemma 2.9. The weight function Σ_{π∈A} w_{π∘f} is constant on every orbit Ae of an edge e ∈ E(N) under A. We have now shown that the inequality in (6) is satisfied by ŵ = Σ_{π∈A} w_{π∘f}/|A|, and that ŵ is in Ŵ(N). The first part of the theorem follows.

For the second part, note that when the edge automorphism group A acts transitively on E(N), there is only one orbit, Ae = E(N) for all e ∈ E(N). Then, the weight function ŵ is given by ŵ(e) = 1/e(N) for every e ∈ E(N), since f is optimal. □
A Linear Programming Formulation
In this section, we first derive a linear program for the separation parameter based on Corollary 2.11. Later we will see how to reduce the size of this program, but it serves as a good first exercise, and it will also be used for comparison with the linear program studied in Section 5. Each vertex map f : V(N) → V(M) induces an edge map f_#, which provides a lower bound on the separation:

$$\sum_{e \in E(N) \,:\, f_\#(e) \in E(M)} w(e) \leq s(M, N). \qquad (7)$$

By Corollary 2.11, we want to find the least s such that, for some weight function w ∈ W(N) with ‖w‖₁ = 1, the inequalities (7) hold. Let the variables of the linear program be {w_e}_{e∈E(N)} and s. We then have the following linear program for s(M, N):

Minimise s
subject to: Σ_{e : f_#(e) ∈ E(M)} w_e ≤ s for every f : V(N) → V(M),
Σ_{e ∈ E(N)} w_e = 1, and w_e ≥ 0 for every e ∈ E(N). (8)

Given an optimal solution {w_e}_{e∈E(N)}, s to (8), a weight function which minimises mc_M(N, w) is given by w(e) = w_e for e ∈ E(N). The measure of this solution is s = s(M, N). The program will clearly be very large, with |E(N)| + 1 variables and |V(M)|^{|V(N)|} inequalities. Fortunately it can be improved upon.
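For small graphs, the program (8) can be solved directly. The following Python sketch computes s(K_2, C_5) with scipy by enumerating all 2^5 vertex maps, and should return 4/5, in agreement with Proposition 2.5.

```python
# Solve LP (8) for s(K2, C5): variables are the five edge weights and s.
from itertools import product
import numpy as np
from scipy.optimize import linprog

edges = [(i, (i + 1) % 5) for i in range(5)]   # E(C5)

A_ub, b_ub = [], []
for f in product((0, 1), repeat=5):            # all maps V(C5) -> V(K2)
    row = [1.0 if f[u] != f[v] else 0.0 for (u, v) in edges]  # preserved edges
    A_ub.append(row + [-1.0])                  # sum of preserved weights <= s
    b_ub.append(0.0)

A_eq = [[1.0] * 5 + [0.0]]                     # total weight is 1
b_eq = [1.0]
c = [0.0] * 5 + [1.0]                          # minimise s

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
print(res.x[-1])  # s(K2, C5) = 0.8
```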
From Theorem 2.1 it follows that, in order to determine s(M, N), it is sufficient to minimise mc_M(N, w) over w ∈ Ŵ(N). We can use this to describe a smaller linear program for computing s(M, N). Let A_1, ..., A_r be the orbits of Aut*(N). The measure of a solution f when w ∈ Ŵ(N) is equal to Σ_{i=1}^r f_i · w_i, where w_i is the weight of an edge in A_i and f_i is the number of edges in A_i which are mapped to an edge in M by f. Note that, given a w, the measure of a solution f depends only on the vector (f_1, ..., f_r) ∈ N^r. Therefore, take the solution space F to be the set of such vectors.

Let the variables of the linear program be w_1, ..., w_r and s, where w_i represents the weight of each element in the orbit A_i and s is an upper bound on the solutions:

Minimise s
subject to: Σ_{i=1}^r f_i · w_i ≤ s for every (f_1, ..., f_r) ∈ F,
Σ_{i=1}^r |A_i| · w_i = 1, and w_i ≥ 0 for 1 ≤ i ≤ r.

Given an optimal solution w_i, s to this program, a weight function which minimises mc_M(N, w) is given by w(e) = w_i for e ∈ A_i. The measure of this solution is s = s(M, N).
Example 3. The wheel graph on k vertices, W_k, is a graph that contains a cycle of length k − 1 plus a vertex v, not in the cycle, such that v is connected to every other vertex. We call the edges of the (k − 1)-cycle outer edges and the remaining k − 1 edges spokes. It is easy to see that the clique number of W_k is equal to 4 when k = 4 (W_4 is isomorphic to K_4) and that it is equal to 3 in all other cases. Furthermore, W_k is 3-colourable if and only if k is odd, and 4-colourable otherwise. This implies that for odd k, the wheel graphs are homomorphically equivalent to K_3.

We will determine s(K_3, W_k) for even k ≥ 6 using the previously described construction of a linear program. The first three wheel graphs for even k are shown in Figure 2. Note that the group action of Aut*(W_k) on E(W_k) has two orbits, one which consists of all outer edges and one which consists of all the spokes. If we remove one outer edge or one spoke from W_k, then the resulting graph can be mapped homomorphically to K_3. Therefore, it suffices to choose F = {f, g} with f = (k − 1, k − 2) and g = (k − 2, k − 1), since all other solutions will have a smaller measure than at least one of these. The program for W_k looks as follows:

Minimise s
subject to: (k − 1)w_1 + (k − 2)w_2 ≤ s, (k − 2)w_1 + (k − 1)w_2 ≤ s,
(k − 1)w_1 + (k − 1)w_2 = 1, and w_1, w_2 ≥ 0.

The optimal solution to this program is given by w_1 = w_2 = 1/(2(k − 1)) and s(K_3, W_k) = s = (2k − 3)/(2k − 2).

Example 4. In the previous example, the two weights in the optimal solution were equal. Here, we provide another example, where the weights turn out to be different for different orbits. The circular complete graph K_{8/3} has vertex set {v_0, v_1, ..., v_7}, placed uniformly around a circle. There is an edge between any two vertices which are at distance at least 3 from each other. Figure 3 depicts this graph.
We will now calculate s(K_2, K_{8/3}). Each vertex is at distance 4 from exactly one other vertex, which means that there are 4 such edges. These edges, which are dashed in the figure, form one orbit under the action of Aut*(K_{8/3}) on E(K_{8/3}). The remaining 8 edges (solid) form a second orbit. Let V(K_2) = {u_0, u_1}. We can obtain a solution f by mapping f(v_i) = u_0 if i is even, and f(v_i) = u_1 if i is odd. This solution will map all solid edges to K_2, but none of the dashed, hence f = (0, 8). We obtain a second solution g by mapping g(v_i) = u_0 for 0 ≤ i < 4, and g(v_i) = u_1 for 4 ≤ i < 8. This solution will map all but two of the solid edges in K_{8/3} to K_2, hence g = (4, 6).

Let h be an arbitrary solution. If h_1 = 0, then h maps no dashed edges, and its measure is at most that of f. Otherwise, h_1 > 0, so h maps at least one dashed edge, say the edge between v_0 and v_4, to K_2. There are two edge-disjoint even-length paths using solid edges from v_0 to v_4, one via v_3 and one via v_5. The solution h sends at most 3 of these solid edges from each path to K_2. Hence, h_2 ≤ 6, and the measure of h is at most that of g. Therefore, the inequalities given by f and g imply the inequality given by any other solution, and we have the following program for s(K_2, K_{8/3}):

Minimise s
subject to: 8w_2 ≤ s, 4w_1 + 6w_2 ≤ s,
4w_1 + 8w_2 = 1, and w_1, w_2 ≥ 0.

The optimal solution to this program is given by w_1 = 1/20, w_2 = 1/10, and s(K_2, K_{8/3}) = s = 4/5.
Approximation Bounds for MAX H-COL
In this section we apply the reduction ⟨ϕ₁, γ₁⟩ and use some of the explicit values obtained for s in Section 2 to bound the approximation ratio of MAX H-COL for various families of graphs. First, we would like to remind the reader of some earlier results and also give a hint of what to expect when we start studying the approximability of MAX H-COL. The probabilistic argument in Proposition 2.7 shows that MAX H-COL is in APX. Furthermore, Jonsson et al. (2009) have shown that whenever H is loop-free, MAX H-COL does not admit a PTAS; otherwise the problem is trivial. Let us have a closer look at a concrete, well-known example: the MAX CUT problem. This problem was one of Karp's original 21 NP-complete problems (Karp, 1972) and has a trivial approximation ratio of 1/2, which is obtained by assigning each vertex randomly to either part of the partition. The trivial randomised algorithm is easy to derandomise; Sahni and Gonzalez (1976) gave the first such approximation algorithm. The 1/2 ratio was in fact essentially the best known ratio for MAX CUT until 1995, when Goemans and Williamson (1995), using semidefinite programming (SDP), achieved the ratio 0.87856 mentioned in Example 2. Frieze and Jerrum (1997) determined lower bounds on the approximation ratios for MAX k-CUT using similar SDP techniques. Sharpened results for small values of k have later been obtained by de Klerk et al. (2004). Håstad (2005) has shown that SDP is a universal tool for solving the general MAX 2-CSP problem (where every constraint involves only two variables) over any finite domain, in the sense that it establishes non-trivial approximation results for all of those problems. Until recently, no method other than SDP was known to yield a non-trivial approximation ratio for MAX CUT. Trevisan (2009) broke this barrier by using techniques from algebraic graph theory to reach an approximation guarantee of 0.531. Soto (2009) later improved this bound to 0.6142 by a more refined analysis. Khot (2002) suggested the unique games conjecture (UGC) as a possible direction for proving inapproximability properties of some important optimisation problems. The conjecture states the following (equivalent form from Khot et al. (2007)):

Conjecture 3.1 Given any δ > 0, there is a prime p such that, given a set of linear equations x_i − x_j = c_ij (mod p), it is NP-hard to decide which one of the following is true:

• There is an assignment to the x_i's which satisfies at least a 1 − δ fraction of the constraints.
• All assignments to the x_i's satisfy at most a δ fraction of the constraints.

Under the assumption that the UGC holds, Khot et al. (2007) proved the approximation ratio achieved by the Goemans and Williamson algorithm for MAX CUT to be essentially optimal, and also provided upper bounds on the approximation ratio for MAX k-CUT, k > 2. The proof for the MAX CUT case crucially relies on Gaussian analysis. In particular, it uses Borell's theorem to answer the question of partitioning R^n into two sets of equal Gaussian measure so as to minimise the Gaussian noise-sensitivity, thereby transferring a Fourier-analytic question to a geometric one.
Recently, Isaksson and Mossel (2012) showed that a similar geometric conjecture has further implications for the approximability of MAX k-CUT.
Conjecture 3.2 The standard k-simplex partition is the most noise-stable balanced partition of R^n with n ≥ k − 1.

A partition of R^n into k measurable sets A_1, ..., A_k is called balanced if each A_i has Gaussian measure 1/k. The ε-noise sensitivity is defined as the probability that two (1 − 2ε)-correlated n-dimensional standard Gaussian points x, y ∈ R^n belong to different sets in the partition. The standard k-simplex partition of R^n is obtained by letting R^n = R^{k−1} × R^{n−k+1} and then partitioning R^{k−1} into k regular simplicial cones. Assuming this standard simplex conjecture (SSC) and the UGC, Isaksson and Mossel show that the Frieze and Jerrum SDP relaxation obtains the optimal approximation ratio for MAX k-CUT.
Every MAX H-COL problem can be viewed as a MAX CSP(Γ) problem, where Γ, the so-called constraint language, is the set containing the single, binary, and symmetric relation given by the edge set of H. Raghavendra (2008) has presented approximation algorithms for every MAX CSP(Γ) problem based on semidefinite programming. Under the UGC, these algorithms optimally approximate MAX CSP(Γ) in polynomial time, i.e. no other polynomial-time algorithm can approximate the problem substantially better. However, it seems notoriously difficult to determine the approximation ratio implied by this result for a given constraint language: Raghavendra and Steurer (2009) show that this ratio can in principle be computed, but the algorithm is doubly exponential in the size of the domain. In combination with our results, such ratios could be used to confirm or disprove the UGC.
A General Reduction
Our main tool will be a generalisation of the reduction introduced in Section 1.2. Let M and N be (arbitrary) undirected graphs and consider the following reduction, ⟨ϕ_2, γ_2⟩, from MAX N-COL to MAX M-COL: the function ϕ_2 maps an instance (G, w) ∈ I_N to (G, w) ∈ I_M. Let f : V(G) → V(M) be a solution to (G, w). Let g : V(M) → V(N) be an optimal solution in Sol_N(M, w_f) (see (5)). The function γ_2 maps f to g ∘ f.

Proof: First, we argue that γ_2 can be computed in polynomial time. We must show that g can be found in polynomial time. An optimal solution to (M, w_f) can be obtained by brute force. This takes |V(N)|^|V(M)| times the time to evaluate a candidate solution g′. The measure of a solution depends on w_f, and thereby on f, but for a given candidate g′, it can clearly be obtained in polynomial time. Since M and N are fixed, the total time is polynomial in the size of (G, w).
Next, we show that ⟨ϕ_2, γ_2⟩ is continuous with constant s(M, N), where the final equality follows from Corollary 2.11. From (3) we have the inequality mc_N(G, w) ≤ mc_M(G, w)/s(M, N) for all G ∈ G and w ∈ W(G). Consequently, with I = (G, w), it remains to show that s(M, N) is positive. For every edge-weighting w of N, there is at least one edge e with w(e) ≥ 1/|E(N)|. Since M is non-empty, the subgraph consisting of only e maps homomorphically to M, so mc_M(N, w) ≥ 1/|E(N)|. By Theorem 2.1, it follows that s(M, N) ≥ 1/|E(N)| > 0. Since s(M, N), s(N, M) > 0, it follows that the reduction is continuous. □

As a direct consequence (using Proposition 1.2), we get the following generalisation of Lemma 1.6.

The symmetric nature of this result has some interesting consequences. For example, it is possible to show that 1 − s(M, N) · s(N, M) is a metric on the space of graphs taken modulo homomorphic equivalence; cf. Färnqvist et al. (2009).
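To make the brute-force step in the proof above concrete, the following Python sketch (our own illustration; the graph encoding and function names are not from the paper) exhaustively searches all vertex maps between two small fixed graphs and returns the best achievable measure.

```python
from itertools import product

def best_map_measure(src_edges, src_weights, tgt_adj, n_src, n_tgt):
    """Exhaustively search all maps g: V(src) -> V(tgt) and return the
    largest total weight of src edges whose endpoints land on a tgt edge."""
    best = 0.0
    for g in product(range(n_tgt), repeat=n_src):  # |V(tgt)|^|V(src)| candidates
        val = sum(w for (u, v), w in zip(src_edges, src_weights)
                  if tgt_adj[g[u]][g[v]])
        best = max(best, val)
    return best

# Example: mapping a uniformly weighted triangle onto K_2 recovers 2/3.
triangle = [(0, 1), (1, 2), (0, 2)]
k2_adj = [[False, True], [True, False]]
print(best_map_measure(triangle, [1/3] * 3, k2_adj, 3, 2))
```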
Our main algorithmic tools will be the following two theorems.
Theorem 3.5 (Goemans and Williamson, 1995) MAX CUT can be approximated within α_GW ≈ 0.87856.

A few logarithms will appear in the upcoming expressions. We fix the notation ln x for the natural logarithm of x, and log y for the base-2 logarithm of y.

Theorem 3.6 (Frieze and Jerrum, 1997) MAX k-CUT can be approximated within α_k, where α_k > 1 − 1/k + (2 ln k)/k².

We note that de Klerk et al. (2004) have presented the sharpest known bounds on α_k for small values of k. Table 1 lists α_GW together with the first of these lower bounds.

Theorem 3.7 (Håstad, 2005) There is an absolute constant c > 0 such that mc_H can be approximated within …, where n = n(H) and t(H) = n² − 2e(H).
We will compare the performance of Håstad's algorithm on MAX H-COL with the performance of the algorithms in Theorems 3.5 and 3.6 when analysed using the reduction ⟨ϕ_2, γ_2⟩ and estimates or exact values of the separation parameter. For this purpose, we introduce two functions, FJ_k and Hå, such that, if H is a graph, then FJ_k(H) denotes the best bound on the approximation guarantee when Frieze and Jerrum's algorithm for MAX k-CUT is applied to the problem mc_H, while Hå(H) is the guarantee when Håstad's algorithm is used to approximate mc_H. This comparison is not entirely fair, since Håstad's algorithm was not designed with the goal of providing optimal results; the goal was to beat random assignments. However, it is currently the best known algorithm for approximating MAX H-COL, for arbitrary H ∈ G, which also provides an easily computable bound on the guaranteed approximation ratio; this is in contrast with the conjectured optimal algorithms of Raghavendra (2008) (see the discussion in Section 6).

Near-optimality of our approximation method will be investigated using results depending on Khot's unique games conjecture (Conjecture 3.1). Hence, we will henceforth assume that the UGC is true, which implies the following inapproximability results.

Theorem 3.8 (Khot et al., 2007) Under the assumption that the UGC is true, the following holds:
• For every ε > 0, it is NP-hard to approximate mc_2 within α_GW + ε.

Furthermore, assuming the standard simplex conjecture (Conjecture 3.2), we have the following strengthening of Theorem 3.8.

Theorem 3.9 (Isaksson and Mossel, 2012) Under the assumption that the UGC and the SSC are true, the following holds:
• For every ε > 0, it is NP-hard to approximate mc_k within α_k + ε.
Asymptotic Performance
Next, we derive a general, asymptotic, result on the performance of our method.
Theorem 3.10 Let H ∈ G be a graph with ω(H) = r and χ(H) = k. Then, mc_H can be approximated within …, where o(1) is with respect to k.

Furthermore, mc_H cannot be approximated within …, where o(1) is with respect to r.

Proof: By Proposition 2.4, we have …. By Lemma 3.4 and Theorem 3.6, we then have …, and the first inequality follows, since …. For the second part, we have s(K_r, H) ≥ s(K_r, K_k) = s(H, K_k) by Lemma 2.6 and Proposition 2.4, so …. Note the similarity between this upper bound and the expression for s(H, K_k). In fact, without losing any precision in the following calculations, we could replace the O(k^(−2)) term by O(r^(−2)). By Lemma 3.4, mc_H cannot be approximated within α_r/s(K_r, H). An upper bound for α_r/s(K_r, H) can now be calculated as in the first part, with r and k swapped. In the final expression we drop −1/k from the last parenthesis, since this is absorbed by o(1). □

To give an upper bound on the performance of Håstad's algorithm, we can proceed as follows. Let n = n(H) and r = ω(H), k = χ(H) as in the proposition. By Theorem 2.3, e(H) …. We see that our algorithm performs asymptotically better.
Some Specific Graph Classes
Next, we investigate the performance of our method on sequences of graphs "tending to" K_2 (K_3) in the sense that the separation of K_2 (K_3) and a graph H_k from the sequence tends to 1 as k tends to infinity. In several cases, the girth of the graphs plays a central role. The girth of a graph G is the length of a shortest cycle in G. The odd girth of G is the length of a shortest odd cycle in G. Hence, if G has odd girth g, then ….

Proposition 3.11 We have the following bounds.

3. Let H be a planar graph with girth at least g = (20k − 2)/3. …

(2) Lai and Liu (2000) show that if H is a graph with the stated properties, then there exists a homomorphism from H to C_(2k+1). Thus, …. (3) Borodin et al. (2004) show that there exists a homomorphism from H to C_(2k+1). The result follows as for case (2).

(4) We know from Example 3 that …. The result again follows by Lemma 3.4. □

We can compare the results of Proposition 3.11 to the performance of Håstad's algorithm as follows. Let n = n(H). In (1), we have e(H) = n; for (2), Dutton and Brigham (1991) have given an upper bound on e(H) of asymptotic order n^(1+2/(g−1)); in (3), e(H) ≤ 3n − 6, since H is planar; and finally, in (4), we have e(H) = 2(n − 1). Now note that by ignoring lower-order terms in the expression for Hå(H) in Theorem 3.7, we get Hå(H) = (2e(H)/n²)(1 + o(1)). Hence, for each case (1)-(4), Hå(H) → 0 as n → ∞. Proposition 3.11(3) can be strengthened and extended in several ways. For K_4-minor-free graphs, Pan and Zhu (2002) have given odd girth restrictions for (2k + 1)-colourability which are better than the girth restriction in Proposition 3.11(3). Dvořák et al. (2008) have proved that every planar graph H of odd girth at least 9 is homomorphic to the Petersen graph, P. The Petersen graph is edge-transitive and its bipartite density is known to be 4/5 (cf. Berman and Zhang (2003)). In other words, s(K_2, P) = 4/5. Consequently, mc_H can be approximated within (4/5) · α_GW but not within (5/4) · α_GW + ε for any ε > 0. This is an improvement on the bounds in Proposition 3.11(3) for planar graphs with girth strictly less than 13. We can also consider graphs embeddable on higher-genus surfaces. For instance, Proposition 3.11(3) is true for graphs embeddable on the projective plane, and it is also true for graphs of girth strictly greater than (20k − 2)/3 whenever the graphs are embeddable on the torus or Klein bottle. These bounds are direct consequences of results in Borodin et al. (2004).
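The bipartite density of the Petersen graph quoted above is small enough to verify by exhaustive search. The following Python sketch (our own illustration) brute-forces the maximum cut over all 2^10 bipartitions and recovers the density 12/15 = 4/5.

```python
# Petersen graph: outer 5-cycle, inner pentagram, and spokes.
outer = [(i, (i + 1) % 5) for i in range(5)]
inner = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
spokes = [(i, 5 + i) for i in range(5)]
edges = outer + inner + spokes  # 15 edges

best = 0
for mask in range(2 ** 10):                     # all bipartitions of 10 vertices
    side = [(mask >> v) & 1 for v in range(10)]
    cut = sum(1 for u, v in edges if side[u] != side[v])
    best = max(best, cut)

print(best, best / len(edges))                  # 12 and 0.8 = 4/5
```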
Random Graphs
Finally, we look at random graphs. Let G(n, p) denote the random graph on n vertices in which every edge is chosen uniformly at random, and independently, with probability p = p(n). We say that G(n, p) has a property A asymptotically almost surely (a.a.s.) if the probability that it satisfies A tends to 1 as n tends to infinity. Here, we let 0 < p < 1 be a fixed constant.

Proposition 3.12 Let H ∈ G(n, p). Then, a.a.s., ….

Proof: For H ∈ G(n, p) it is well known that a.a.s. ω(H) assumes one of at most two values around 2 log_(1/p) n (Matula, 1972; Bollobás and Erdős, 1976). Let r = ω(H). By Theorem 3.10, …. □

For a comparison to Håstad's algorithm, note that e(H) = (n²p/2)(1 + o(1)) a.a.s. for H ∈ G(n, p), so Hå(H) = p(1 + o(1)). The slow logarithmic growth of the clique number of G(n, p) works against our method in this case. However, we still manage to achieve an approximation ratio tending to 1, unlike Håstad's algorithm, which ultimately is restricted by the density of the edges.
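As a quick empirical companion to the argument above (our own illustration using networkx, not part of the paper), one can sample a dense random graph and compare its clique number with the classical 2 log_(1/p) n estimate; exact clique search is exponential in the worst case, so n should be kept modest.

```python
import math
import networkx as nx

# Clique number of a dense random graph versus the 2*log_{1/p}(n) prediction.
n, p = 100, 0.5
G = nx.gnp_random_graph(n, p, seed=1)
omega = max(len(c) for c in nx.find_cliques(G))  # exact, worst-case exponential
print(omega, 2 * math.log(n) / math.log(1 / p))
```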
We conclude this section by looking at what happens for graphs H ∈ G(n, p) when p is no longer chosen to be a constant, but instead we let np tend to a constant ε < 1 as n → ∞. The following theorem allows us to do this.

Theorem 3.13 (Erdős and Rényi, 1960) Let c be a positive constant and p = c/n. If c < 1, then a.a.s. no component in G(n, p) contains more than one cycle, and no component has more than ln n/(c − 1 − ln c) vertices.

Now we see that if np → ε when n → ∞, then G(n, p) almost surely consists of components with at most one cycle. Thus, each component resembles a cycle where, possibly, trees are attached to certain vertices of the cycle, and each component is homomorphically equivalent to the cycle it contains. Proposition 2.5 is therefore applicable in this part of the G(n, p) spectrum.
Circular Complete Graphs
The successful application of our method relies on the ability to compute s(M, N) for various graphs M and N. In Section 2.3 we saw how this can be accomplished by means of linear programming. This insight is put to use here in the context of circular complete graphs. We have already come across examples of such graphs in the form of (ordinary) complete graphs, cycles, and the graph K_(8/3) in Figure 3. We will now take a closer look at them.

Definition 4.1 Let p and q be positive integers such that p ≥ 2q. The circular complete graph, K_(p/q), has vertex set {v_0, v_1, ..., v_(p−1)} and edge set {{v_i, v_j} | q ≤ |i − j| ≤ p − q}.

The image to keep in mind is that of the vertices placed uniformly around a circle, with an edge connecting two vertices if they are at a distance at least q from each other.

Example 5. Some well-known graphs are extreme cases of circular complete graphs:
• The complete graph K_n, n ≥ 2, is a circular complete graph with p = n and q = 1.
• The cycle graph C_(2k+1), k ≥ 1, is a circular complete graph with p = 2k + 1 and q = k.
These are the only examples of edge-transitive circular complete graphs.
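Definition 4.1 is easy to operationalise; the following Python sketch (our own illustration) builds the edge set of K_(p/q) and confirms the extreme cases in Example 5, together with the graph K_(8/3).

```python
def circular_complete(p, q):
    """Edges of K_{p/q}: vertices 0..p-1, {i, j} is an edge iff q <= j - i <= p - q."""
    assert p >= 2 * q
    return [(i, j) for i in range(p) for j in range(i + 1, p) if q <= j - i <= p - q]

print(len(circular_complete(5, 1)))  # K_5 has 10 edges
print(len(circular_complete(7, 3)))  # C_7 has 7 edges
print(len(circular_complete(8, 3)))  # K_{8/3} has 12 edges
```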
A fundamental property of the circular complete graphs is given by the following theorem.
Theorem 4.2 (see Hell and Nešetřil, 2004) For positive integers p, q, p′, and q′, K_(p/q) → K_(p′/q′) ⇔ p/q ≤ p′/q′.

Due to this theorem, we may assume that whenever we write K_(p/q), the positive integers p and q are relatively prime.

One of the main reasons for studying circular complete graphs is that they refine the notion of complete graphs; in particular, they refine the notion of the chromatic number χ(G). Note that an alternative definition of χ(G) is given by χ(G) = min{n | G → K_n}. With this in mind, the following is a natural extension of proper graph colouring and the chromatic number.

Definition 4.3 The circular chromatic number, χ_c(G), of a graph G is defined as inf{p/q | G → K_(p/q)}. A homomorphism from G to K_(p/q) is called a (circular) p/q-colouring of G.
For more on the circular complete graphs and the circular chromatic number, see the book by Hell and Nešetřil (2004) and the survey by Zhu (2001).

We will investigate the separation parameter s(K_r, K_t) for rational numbers 2 ≤ r < t ≤ 3. In Section 4.1, we fix r = 2 and choose t so that Aut*(K_t) has few orbits. We find some interesting properties of these numbers, which lead us to look at certain "constant regions" in Section 4.2, and at the case r = 2 + 1/k in Section 4.3. Our method is based on solving a relaxation of the linear program (9) which was presented in Section 2.3, combined with arguments that the chosen relaxation in fact finds the optimum in the original program. Most of the calculations, which involve some rather lengthy ad hoc constructions of solutions, are left out. The complete proofs can be found in the technical report Engström et al. (2009a).
Maps to an Edge
We consider s(K_2, K_t) for t = 2 + n/k with k > n ≥ 1, where n and k are integers. The number of orbits of Aut*(K_t) then equals ⌈(n + 1)/2⌉. We will denote these orbits by A_c for c = 1, ..., ⌈(n + 1)/2⌉. Since the number of orbits determines the number of variables of the linear program (9), we choose to begin our study of s(K_2, K_t) using small values of n. For n = 1 we have seen that the graph K_(2+1/k) is isomorphic to the cycle C_(2k+1). For n = 2 we can assume that k is odd in order to have 2k + n and k relatively prime. We will write this number as 2 + 2/(2k − 1). The map f sends v_i to w_0 if i is even and to w_1 if i is odd. Then, f maps all of A_1 to K_2 but none of the edges in A_2, so f = (4k, 0). The solution g sends a vertex v_i to w_0 if 0 ≤ i < 2k and to w_1 if 2k ≤ i < 4k. It is not hard to see that g = (4k − 2, 2k). It remains to argue that these two solutions suffice to determine s. But we see that any map h = (h_1, h_2) with h_2 > 0 must cut at least two edges in the even cycle A_1. Therefore, h_1 ≤ 4k − 2, so h ≤ g, componentwise. The proposition now follows by solving the relaxation of (9) using only the two inequalities obtained from f and g. □ Note that the graph K_(8/3) from Example 4 is covered by Proposition 4.4. The argument in that example is very similar to the proof of the general case.
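For the smallest instance K_(8/3) (k = 2 in the parameterisation above), the domination claim, that every solution profile h = (h_1, h_2) is dominated componentwise by f = (8, 0) or g = (6, 4), can be checked exhaustively. The Python sketch below (our own illustration) does exactly that.

```python
# K_{8/3} is K_{4k/(2k-1)} with k = 2: A1 = distance-3 edges (an 8-cycle),
# A2 = distance-4 edges (a perfect matching).
p = 8
A1 = [(i, (i + 3) % p) for i in range(p)]
A2 = [(i, i + 4) for i in range(p // 2)]

profiles = set()
for mask in range(2 ** p):                      # all maps V(K_{8/3}) -> V(K_2)
    side = [(mask >> v) & 1 for v in range(p)]
    h1 = sum(1 for u, v in A1 if side[u] != side[v])
    h2 = sum(1 for u, v in A2 if side[u] != side[v])
    profiles.add((h1, h2))

f, g = (8, 0), (6, 4)                           # f = (4k, 0), g = (4k-2, 2k)
print(all((h1 <= f[0] and h2 <= f[1]) or (h1 <= g[0] and h2 <= g[1])
          for h1, h2 in profiles))              # True: every profile is dominated
```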
Constant Regions
In the previous section we saw the value of s(K_2, K_(2+2/(2k−1))) and used it to prove that s(K_2, K_(2+1/k)) = s(K_2, K_(2+2/(2k−1))). This is a special case of a phenomenon described more generally in the following proposition.

Proposition 4.7 Let k ≥ 1, and let r and t be rational numbers such that ….

Proof: From Theorem 4.2, we have the following chain of homomorphisms: ….

By Lemma 2.6, this implies …. Two more applications of Lemma 2.6 show that …, which proves the proposition. □

We find that there are intervals I_k where the function s_r(t) = s(K_r, K_t) is constant for any 2 ≤ r < (2k + 1)/k (see Figure 4). It turns out that similar intervals appear throughout the space of circular complete graphs. Indeed, it follows from Proposition 2.4 that for a positive integer n and a rational number r such that 2 ≤ r ≤ n, we have …. From (10) we see that s(K_r, K_n) remains constant for rational numbers r in the interval k ≤ r < k + 1, where k is any fixed integer k < n. Furthermore, for positive integers k and m, we have …. When we combine this fact with (10) and Lemma 2.6, we find that s is constant on analogous regions.
Maps to Odd Cycles
It was seen in Proposition 4.7 that s(K_r, K_t) is constant on the region (r, t) ∈ [2, 2 + 1/k) × I_k. In this section, we will study what happens when t remains in I_k, but r assumes the value 2 + 1/k. A first observation is that the absolute jump of the function s(K_r, K_t) when r goes from being less than 2 + 1/k to r = 2 + 1/k must be largest for t = 2 + 2/(2k − 1). Let V(K_(2+2/(2k−1))) = {v_0, ..., v_(4k−1)} and V(K_(2+1/k)) = {w_0, ..., w_(2k)}, and let the function f map v_i to w_j, with j = ⌊(2k + 1)i/(4k)⌋. Then, f maps all edges except {v_0, v_(2k−1)} from the orbit A_1 to some edge in K_r. Since the subgraph A_1 is isomorphic to C_(4k), any map to an odd cycle must exclude at least one edge from A_1. It follows that f alone determines s, and we can solve the linear program (9) to obtain s(K_(2+1/k), K_(2+2/(2k−1))) = (4k − 1)/4k. Thus, for r < 2 + 1/k, we have ….

Smaller t ∈ I_k can be expressed as t = 2 + 1/(k − x), where 0 ≤ x < 1/2. We will write x = m/n for positive integers m and n, which implies the form t = 2 + n/(kn − m), with m < n/2. For m = 1, it turns out to be sufficient to keep two inequalities from (9) to get an optimal value of s. From this we get the following result.

Proposition 4.8 Let k, n ≥ 2 be integers. Then, ….

There is still a non-zero jump of s(K_r, K_t) when we move from r < 2 + 1/k to r = 2 + 1/k, but it is smaller, and tends to 0 as n increases. For m = 2, we have 2(kn − m) + n and kn − m relatively prime only when n is odd. In this case, it turns out that we need to include an increasing number of inequalities to obtain a good relaxation. Furthermore, we are not able to ensure that the obtained value is the optimum of the original (9). We will therefore have to settle for a lower bound on s. Brute-force calculations have shown that, for small values of k and n, equality holds in Proposition 4.9. We conjecture this to be true in general.

Proposition 4.9 Let k ≥ 2 be an integer and n ≥ 3 be an odd integer. Then, …, where A_k is a constant that does not depend on n.
Fractional Covering by H-cuts
In the following, we generalise the work of Šámal (2005, 2006, 2012) on fractional covering by cuts. We obtain a complete correspondence between a family of chromatic numbers, χ_H(G), and s(H, G). These chromatic numbers are generalisations of Šámal's cubical chromatic number χ_q(G); the latter corresponds to the case when H = K_2. Two more expressions for χ_H(G) are given in Section 5.2. We believe that these alternative views on the separation parameter can provide great benefits to the understanding of its properties. We transfer a result in the other direction, in Section 5.3, disproving a conjecture by Šámal on χ_q, and settle another conjecture of his in the positive, in Section 5.4.
Separation as a Chromatic Number
We start by recalling the notion of a fractional colouring of a hypergraph. Let G be a (hyper-)graph with vertex set V(G) and edge set E(G). A set of vertices is independent if no edge of G is a subset of it. Let J denote the set of all independent sets of G and, for a vertex v ∈ V(G), let J(v) denote all independent sets which contain v. Let J_1, ..., J_n ∈ J be a collection of independent sets.

Definition 5.1 An n/k independent set cover is a collection J_1, ..., J_n of independent sets in J such that every vertex of G is in at least k of them.

The fractional chromatic number χ_f(G) of G is given by the following expression:

χ_f(G) = inf{n/k | there exists an n/k independent set cover of G}.
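As a concrete illustration of such covering definitions and their linear programming formulations (a sketch of our own, using SciPy; the paper's program (11) is the cut analogue of this), the following computes χ_f(C_5) = 5/2 by minimising the total weight assigned to independent sets subject to covering every vertex.

```python
from itertools import combinations
from scipy.optimize import linprog

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]   # C_5
ind_sets = [s for r in range(1, n + 1) for s in combinations(range(n), r)
            if not any(u in s and v in s for u, v in edges)]

# minimise sum_J x_J subject to sum_{J containing v} x_J >= 1 for every v
c = [1.0] * len(ind_sets)
A_ub = [[-1.0 if v in J else 0.0 for J in ind_sets] for v in range(n)]
b_ub = [-1.0] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.fun)  # 2.5 = chi_f(C_5)
```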
The definition of fractional covering by cuts mimics that of fractional covering by independent sets, but replaces vertices with edges and independent sets with certain cut sets of the edges. Let G and H be undirected simple graphs and let f be an arbitrary vertex map from G to H. Recall that the map f induces a partial edge map f# : E(G) → E(H). We will call the preimage of E(H) under f# an H-cut in G. When H is a complete graph K_k, this is precisely the standard notion of a k-cut in G. Let C denote the set of H-cuts in G and, for an edge e ∈ E(G), let C(e) denote all H-cuts which contain e. The following definition is a generalisation of cut n/k-covers (Šámal, 2006) to arbitrary H-cuts.

Definition 5.2 The graph parameter χ_H is defined as

χ_H(G) = inf{n/k | there exists an H-cut n/k-cover of G}.

Šámal (2006) called the parameter χ_(K_2)(G) the cubical chromatic number of G. Both the fractional chromatic number and the cubical chromatic number also have linear programming formulations. This, in particular, shows that the value in the infimum of the corresponding definition is obtained exactly for some n and k. For our generalisation of the cubical chromatic number, the linear program is the following: ….

Proposition 5.3 The graph parameter χ_H(G) is given by the optimum of the linear program in (11).
Proof: The proof is completely analogous to those for the corresponding statements for the fractional chromatic number (cf. Godsil and Royle, 2001) and for the cubical chromatic number (Lemma 5.1.3 in Šámal, 2006). Let C_1, ..., C_n be an H-cut n/k-cover of G. The solution f(C) = 1/k if C ∈ {C_1, ..., C_n}, and f(C) = 0 otherwise, has a measure of n/k in (11). Thus, the optimum of the linear program is at most χ_H(G).

For the other direction, note that the coefficients of the program (11) are integral. Hence, there is a rational optimal solution f*. Let N be the least common multiple of the denominators of f*(C) for C ∈ C. Assume that the measure of f* is n/k. Construct a collection of H-cuts by including the cut C a total of N · f*(C) times. This collection covers each edge at least N times using Σ_(C∈C) N · f*(C) = N · n/k cuts, i.e. it is an H-cut n/k-cover, so χ_H(G) is at most equal to the optimum of (11). □

We are now ready to work out the correspondence to separation.

Proof: Consider the dual program of (11):

maximise Σ_(e∈E(G)) g(e) subject to Σ_(e∈X) g(e) ≤ 1 for all H-cuts X ∈ C. (12)

In (12), let 1/s = Σ_(e∈E(G)) g(e) and make the variable substitution w = g · s. This leaves the following program, in which Σ_(e∈E(G)) w(e) = 1: …. Since max s^(−1) = (min s)^(−1), a comparison with (8) establishes the proposition. □
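The dual program used in the proof can be solved directly for small cases. The following Python sketch (our own illustration, assuming SciPy is available) computes s(K_2, C_5) = 4/5 by enumerating all K_2-cuts of C_5 and maximising the total edge weight subject to every cut weighing at most 1.

```python
from scipy.optimize import linprog

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]   # C_5
# K_2-cuts of C_5: edge sets cut by some bipartition of the vertex set.
cuts = {frozenset(e for e in edges if ((mask >> e[0]) ^ (mask >> e[1])) & 1)
        for mask in range(2 ** n)}

c = [-1.0] * len(edges)                         # maximise sum_e g(e)
A_ub = [[1.0 if e in X else 0.0 for e in edges] for X in cuts]
b_ub = [1.0] * len(cuts)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(-1.0 / res.fun)                           # 1/(max sum) = s(K_2, C_5) = 4/5
```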
More Guises of Separation
For fractional colourings, it is well known that an equivalent definition is obtained by taking χ_f(G) = inf{n/k | G → K_(n,k)}, where K_(n,k) denotes the Kneser graph, the vertex set of which is the k-subsets of [n], with an edge between u and v if u ∩ v = ∅. For H = K_2, a corresponding definition of χ_H(G) = χ_q(G) was obtained in Šámal (2006) by taking the infimum over n/k for n and k such that G → Q_(n/k). Here, Q_(n/k) is the graph on vertex set {0, 1}^n with an edge between u and v if d_H(u, v) ≥ k, where d_H denotes the Hamming distance. A parameterised graph family which determines a particular chromatic number in this way is sometimes referred to as a scale. In addition to the previously mentioned fractional chromatic number χ_f, where the scale is the set of Kneser graphs, and the cubical chromatic number χ_q, where the scale is {Q_(n/k)}, another prominent example is the circular chromatic number (Section 4), for which the scale is given by the family of circular complete graphs K_(n/k).

We now generalise the family {Q_(n/k)} to produce one scale for each χ_H. To this end, let H^n_k be the graph on vertex set V(H)^n with an edge between (u_1, ..., u_n) and (v_1, ..., v_n) when |{i | {u_i, v_i} ∈ E(H)}| ≥ k. The proof of the following proposition is straightforward, but instructive.
In particular, s(K_2, G) ≤ min_(S⊆G) b(S), where b(S) denotes the bipartite density of S. Conjecture 5.5.3 in Šámal (2006) suggested that this inequality, expressed in the form χ_q(G) ≥ 1/(min_(S⊆G) b(S)), could be replaced by an equality. We answer this in the negative, using K_(11/4) as our counterexample. Lemma 4.5 with k = 1 gives s(K_2, K_(11/4)) = 17/22. If s(K_2, K_(11/4)) = b(S) for some S ⊆ K_(11/4), it means that S must have at least 22 edges. Since K_(11/4) has exactly 22 edges, it follows that S = K_(11/4). However, a cut in a cycle must contain an even number of edges. Since the edges of K_(11/4) can be partitioned into two cycles, the maximum cut in K_(11/4) must be of even size, whereas equality would force |E(K_(11/4))| · b(K_(11/4)) = 17. This is a contradiction.
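The parity argument above is easy to confirm computationally. The Python sketch below (our own illustration) brute-forces the maximum cut of K_(11/4) over all bipartitions; by the argument just given, the reported value is even and therefore cannot equal 17.

```python
p, q = 11, 4
edges = [(i, j) for i in range(p) for j in range(i + 1, p) if q <= j - i <= p - q]
assert len(edges) == 22

best = max(sum(1 for u, v in edges if ((mask >> u) ^ (mask >> v)) & 1)
           for mask in range(2 ** p))
print(best)  # even by the parity argument, so it cannot equal 17
```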
Confirmation of a Scale
As a part of his investigation of the cubical chromatic number, Šámal (2006) set out to determine the value of χ_q(Q_(n/k)) for general n and k. For the fractional chromatic number and the circular chromatic number, results for such measuring of the scale exist and provide very appealing formulae: …. If we hope for a formula of the same kind (such as χ_q(Q_(n/k)) = n/k), we are immediately out of luck, as 1/2 < s(K_2, G) ≤ 1, i.e. 1 ≤ χ_q(G) < 2 for all non-empty graphs. For 1 ≤ n/k < 2, however, Šámal gave a conjecture (Conjecture 5.4.2 in Šámal, 2006). We complete the proof of his conjecture to obtain the following result.

Proposition 5.7 Let k, n be integers such that k ≤ n < 2k. Then, ….

If we make sure that k is even, possibly by multiplying both k and n by a factor two, we get the following interesting corollary.

Corollary 5.9 For every rational number r, 1/2 < r ≤ 1, there is a graph G such that s(K_2, G) = r.

Šámal (2006) provides the upper bound for Proposition 5.7 and an approach to the lower bound using the largest eigenvalue of the Laplacian of a subgraph of Q_(n/k). The computation of this eigenvalue boils down to an inequality (Conjecture 5.4.6 in Šámal, 2006) involving some binomial coefficients. We first introduce the necessary notation and then prove the remaining inequality in Proposition 5.11, whose second part, for odd k, corresponds to one of the formulations of the conjecture. Proposition 5.7 then follows from Theorem 5.4.7 in Šámal (2006), conditioned on the result of this proposition.
Let k, n be positive integers such that k ≤ n, and let x be an integer such that 1 ≤ x ≤ n. …. When x is odd, the function f : S_o(2k, k, x) → S_e(2k, k, x), given by the complement f(σ) = [2k] \ σ, ….

Lemma 5.10 Assume that x is odd, with 1 ≤ x < n = 2k − 1. Then, N_e(n, k, x) = N_e(n, k, x + 1) and ….

To prove this, define the function f : …. That is, f acts on σ by ignoring the element n − x and renumbering subsequent elements so that the image is a subset of …. The first part of the lemma now follows from the injectivity of the restrictions f|_(A_2) and f|_(B_2). The second equality is proved similarly. □

Proposition 5.11 Choose k, n and x so that k ≤ n < 2k and 1 ≤ x ≤ n. Then, ….

Proof: We will proceed by induction over n and x. The base cases are given by x = 1, x = n, and n = k.

…, where the inequality holds for all n < 2k. For x = n and odd k, we have N_e(n, k, x) = 0, and for even k, we have …. The case for N_o(n, k, x) and even k is treated identically.

Let x > 1 and consider N_e(n, k, x) for odd k and k < n < 2k − 1. Partition the sets σ ∈ S_e(n, k, x) into those for which n ∈ σ on the one hand and those for which n ∉ σ on the other hand. These parts contain N_o(n − 1, k − 1, x − 1) and N_e(n − 1, k, x − 1) sets, respectively. Since k − 1 is even, and since k ….

Finally, let n = 2k − 1. If x is odd, then Lemma 5.10 is applicable, so we can assume that x is even. Now, as before, …, where the first term is evaluated using (16). The same inequality can be shown for N_o(2k − 1, k, x) and even k, which completes the proof. □

Discussion and Open Problems

What started out as a very simple idea has diverged in a number of directions, with plenty of room in each for further investigation and improvements. We single out two main topics, and discuss their respective future prospects and interesting open problems. These two topics relate to the application of our approach to the approximability of the problem MAX H-COL (and more generally to MAX CSP(Γ)), and to the computation and interpretation of the separation parameter.
Separation and Approximability
Our initial idea of separation applied to the MAX H-COL problem led us to a binary graph parameter that, in a sense, measures how close one graph is to being homomorphic to another. While not apparent from the original definition in (3), which involves taking an infimum over all possible instances, we have shown that the parameter can be computed effectively by means of linear programming. Given a graph H′ and known approximability properties for MAX H′-COL, this parameter allows us to deduce bounds on the corresponding properties for MAX H-COL for graphs H that are "close" to H′. Our approach can be characterised as local; the closer the separation of two graphs is to 1, the more precise are our bounds.
We have shown that, given a "base set" containing the complete graphs, our method can be used to derive good bounds on the approximability of MAX H-COL for any graph H.

For the applications in Section 3, we have used the complete graphs as our base set of known problems. We have shown that this set of graphs is sufficient for achieving new, non-trivial bounds on several different classes of graphs. That is, when we apply Frieze and Jerrum's algorithm (Frieze and Jerrum, 1997) to MAX H-COL, we obtain results comparable to, or better than, those guaranteed by Håstad's MAX 2-CSP algorithm (Håstad, 2005), for the classes of graphs we have considered. This comparison should, however, be taken with a grain of salt. The analysis of Håstad's MAX 2-CSP algorithm only aims to prove it better than a random assignment, and may leave room for strengthening of the approximation guarantee. At the same time, we are overestimating the distance for most of the graphs under consideration. It is likely that both results can be improved, within their respective frameworks. When considering inapproximability, we have relied on the unique games conjecture. Weaker inapproximability results, independent of the UGC, exist for both MAX CUT (Håstad, 2001) and MAX k-CUT (Kann et al., 1997), and they are applicable in our setting. We emphasise that our method is not per se dependent on the truth of the UGC.

For the purpose of extending the applicability of our method, a possible direction to take is to find a larger base set of MAX H-COL problems. We suggest two candidates for further investigation: the circular complete graphs, for which we have obtained partial results for the parameter s in Section 4, and the Kneser graphs, see for example Hell and Nešetřil (2004). Both of these classes generalise the complete graphs, and have been subject to substantial previous research. The Kneser graphs contain many examples of graphs with low clique number, but high chromatic number. They could thus prove to be an ideal starting point for studying this phenomenon in relation to our parameter.

We conclude this part of the discussion by considering some possible extensions of our approximability results. We have already noted that MAX H-COL is a special case of the MAX CSP(Γ) problem, parameterised by a finite constraint language Γ. It should be relatively clear that we can define a generalised separation parameter on a pair of general constraint languages. This would constitute a novel method for studying the approximability of MAX CSP, a method that may cast some new light on the performance of Raghavendra's algorithm. As a way of circumventing the hardness result by Khot et al. (2007), Kaporis et al. (2006) show that mc_2 is approximable within 0.952 for any given average degree d and asymptotically almost all random graphs G in G(n, m = dn/2). Here, G(n, m) is the probability space of random graphs on n vertices and m edges, selected uniformly at random. A different approach is taken by Coja-Oghlan et al. (2005), who give an algorithm that approximates mc_k within 1 − O(1/√(np)) in expected polynomial time, for graphs from G(n, p). Kim and Williams (2011) give an algorithm for finding a cut with value at least an additive constant k better than α_GW times the value of an optimal cut (provided such a cut exists) in a given graph, if one is willing to spend time exponential in k to do so. In a similar vein, Crowston et al. (2011) show how to use time exponential in k to find a cut better than the Edwards-Erdős bound, i.e., with value e(G)/2 + (n(G) − 1)/4 + k in a given graph G, or decide that no such cut exists. It would be interesting to study whether separation can be used to extend these results to improved approximability bounds on MAX H-COL.
Separation as a Graph Parameter
For a graph G with a circular chromatic number r close to 2, we can use Lemma 2.6 to bound s(K_2, G) ≥ s(K_2, K_r). Due to Proposition 4.4, we have also seen that, with this method, we are unable to distinguish between the class of graphs with circular chromatic number 2 + 1/k and the (larger) class of graphs with circular chromatic number 2 + 2/(2k − 1). Nevertheless, the method is quite effective when applied to sequences of graph classes for which the circular chromatic number tends to 2, as was the case in Proposition 3.11(1)-(3). Much of the extensive study conducted in this direction was instigated by the restriction of a conjecture by Jaeger (1988) to planar graphs. This conjecture is equivalent to the claim that every planar graph of girth at least 4k has a circular chromatic number at most 2 + 1/k, for k ≥ 1.

The case k = 1 is Grötzsch's theorem: every triangle-free planar graph is 3-colourable. Currently, the best lower bound on the girth of a planar graph which implies a circular chromatic number of at most 2 + 1/k is (20k − 2)/3, and is due to Borodin et al. (2004). We remark that Jaeger's conjecture implies a weaker statement in our setting. Namely, if G is a planar graph with girth greater than 4k, then G → C_(2k+1) implies s(K_2, G) ≥ s(K_2, C_(2k+1)) = 2k/(2k + 1). Deciding this to be true would certainly provide support for the original conjecture, and would be an interesting result in its own right. Our starting observation shows that the slightly weaker condition G → K_(2+2/(2k−1)) implies the same result.

For edge-transitive graphs G, it is not surprising that the expression s(K_r, G) assumes a finite number of values, as a function of r. Indeed, Theorem 2.1 states that s(K_r, G) = mc_(K_r)(G, 1/e(G)), which leaves at most e(G) possible values for s. This produces a number of constant intervals that are partly responsible for the constant regions of Proposition 4.7 and the discussion in Section 4.2. More surprising are the constant intervals that arise from s(K_r, K_(2+2/(2k−1))). They give some hope that the behaviour of the separation parameter can be characterised more generally. We propose investigating the existence of more constant regions, and possibly showing that they tile the entire space.

In Section 5 we generalised the notion of covering by cuts due to Šámal. In doing this, we found a different interpretation of the separation parameter as an entire family of chromatic numbers. It is our belief that these alternate viewpoints can benefit from each other. The refuted conjecture in Section 5.3 is an immediate example of this. It is tempting to look for a generalisation of Proposition 5.7 with K_2 replaced by an arbitrary graph H. A trivial upper bound of s(H, H^n_k) ≤ k/n is obtained from Proposition 5.5, but we have not identified anything corresponding to the parity criterion which appears in the case H = K_2. This leads us to believe that this bound can be improved upon. The approach of Šámal on the lower bound does not seem to generalise. The reason for this is that it uses bounds on maximal cuts obtained from the Laplacian of (a subgraph of) Q_(n/k). We know of no such results for maximal k-cuts, with k > 2, much less for general H-cuts.
In recent work, Šámal and coauthors have shown that the cubical chromatic number χ_q can be approximated within α_GW (Šámal, 2012). This suggests the interesting possibility of a close connection between the approximability of mc_H and that of s(H, G), with H fixed. Let M, N, and H be graphs. A function g : E(M) → E(N) is said to be H-cut continuous if, for any H-cut C ⊆ E(N) in N, we have that g^(−1)(C) ⊆ E(M) is an H-cut in M. For any homomorphism h, the edge map h# is H-cut continuous for every H. Šámal (2005) used cut continuous maps (H = K_2) to show that certain non-homomorphic graphs have the same cubical chromatic number. Here we show how general H-cut continuous maps can be used to generalise the implication in Lemma 2.6.

Lemma 6.1 Let M, N, and H be graphs in G. If there exists an M-cut continuous map from H to N, then s(M, H) ≥ s(M, N).

Proof: Let f : E(H) → E(N) be an M-cut continuous function. It suffices to show that for any graph H ∈ G and w ∈ W(H), we have mc_M(H, w) ≥ mc_M(N, w_N), where w_N(e) = Σ_(e′∈f^(−1)(e)) w(e′). Let g : V(N) → V(M) be an optimal solution to (N, w_N). Then, C = (g#)^(−1)(E(M)) is an M-cut in N, so f^(−1)(C) is an M-cut in H. Hence, there exists a solution to (H, w) which contains precisely the edges in f^(−1)(C). The measure of this solution is given by

Σ_(e∈(g#)^(−1)(E(M))) Σ_(e′∈f^(−1)(e)) w(e′) = m_M(g).

Since g is optimal, m_M(g) = mc_M(N, w_N), and inequality (17) holds. □

The possibility of efficiently computing (bounds on) s(M, N) has an immediate application: Lemma 2.6 and Lemma 6.1 give necessary conditions for the existence of a homomorphism N → M. As noted by Šámal (2012), this can be used as a no-homomorphism lemma, proving the absence of a homomorphism between two given graphs. Needless to say, establishing such properties is often a non-trivial task.
|
v3-fos-license
|
2019-04-24T13:11:58.610Z
|
2013-03-24T00:00:00.000
|
140633563
|
{
"extfieldsofstudy": [
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13202-013-0055-0.pdf",
"pdf_hash": "f22cab888d11612af62fb8b0bc92c2dbc4a0c795",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43486",
"s2fieldsofstudy": [
"Geology",
"Engineering"
],
"sha1": "f22cab888d11612af62fb8b0bc92c2dbc4a0c795",
"year": 2013
}
|
pes2o/s2orc
|
Pre-drill pore pressure prediction using seismic velocities data on flank and synclinal part of Atharamura anticline in the Eastern Tripura, India
The Tripura state went through extensive geological tectonics that resulted in the creation of complex structural styles with different fault types, lineaments, and plate boundaries, which in turn caused possible zones with over-pressured formations characterized by higher seismic amplitude signatures. Without accurate estimates of pore pressures, drilling through these hazardous zones is very troublesome and could jeopardize the whole drilling rig site. Pore pressures are easily predicted for sediments with a normal pressure gradient. The prediction of pore pressure for abnormally pressured (i.e., overpressured) sediments is more difficult and more important. Understanding of the pore pressure is a requirement of the drilling plan in order to design a proper casing program. With balanced drilling mud, overpressured formations and borehole instability will be effectively controlled while drilling and completing the well. Well control events such as formation fluid kicks, loss of mud circulation, surface blowouts, and subsurface kicks can be avoided with the use of accurate pore pressure and fracture gradient predictions in the design process. In this study, transform models using the modified Eaton's method were used to predict pore pressures from seismic interval velocities. Corrected two-way travel times and average velocity values for 28 sorted common depth points were input into the transform for pore pressure prediction. Predicted pore pressures show a reasonable match when plotted against formation pressure data from the offset wells, namely AD-4 trend, Agartala Dome-6, Ambasa trend, Kathalchari trend, Kubal, Masimpur-3, Rokhia structure-RO1, and Tichna structure-TI1. In this study, it is observed that overpressure starts at shallow depths (1,482-2,145 m) in the synclinal section, while in the flank section it starts deeper (2,653-5,919 m) in the Atharamura anticline. It is also observed that most of the wells showing a pressure match are located on the western side of the Atharamura. The maximum predicted pore pressure gradient observed in this study is 1.03 psi/feet in both the synclinal and flank sections of the Atharamura anticline. Based on our observations, it is interpreted that the Tripura region is characterized by a single pressure source and that the pressure is distributed evenly among all the anticlines in this region.
Introduction
A pre-drill prediction of pore pressure is an integral part of the well planning and formation evaluation process. An accurate estimation of formation pore pressure is a key requirement for safe and economic drilling in overpressured sediments. Pore pressure within formations determines the mud weight required to build a balancing fluid pressure in the downhole. An improper understanding of the subsurface geology and the formation pressures may result in fracturing the formation if the mud weight is too high. In contrast, if the mud weight is too low, then the formation fluids can flow into the well, potentially leading to well blowouts if not controlled. High pore pressures, or overpressures, have been observed at drilling sites all over the world, both on land and offshore. The frequently encountered overpressures in the Gulf of Mexico have been particularly well studied and observed, since it is an important area of hydrocarbon production. The phenomenon has been observed in many other places, including the North Sea, the Caspian Sea, Pakistan, the Middle East, and eastern parts of India. The nature and origin of pore pressures are manifold and complex. The demands for better understanding and pre-drill prediction of pore pressure are substantial. To estimate abnormal pressures, it is first important to understand the pore pressure concepts and under what conditions pressure becomes abnormal. In this paper, pore pressures were estimated from seismic velocities using an appropriate model for the velocity to pore pressure transform (Bowers 1995; Hottman and Johnson 1965; Dutta 1997). Pre-drill pore pressure has been obtained from a transform model using seismic interval velocity. The transformation model (seismic interval velocity to pore pressure transform) is shown in Fig. 1, but the accuracy of pore pressure prediction depends on the estimation accuracy of the seismic interval velocities. The seismic interval velocities were estimated from two-way seismic root mean square (RMS) velocities by the Dix equation (Dix 1955). Seismic velocities used during processing are designed so that the stack/migration is optimum, with local fluctuations smoothed out and a velocity pick interval that is usually too coarse for accurate pore pressure prediction.
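For illustration, the Dix (1955) conversion from stacking (RMS) velocities to interval velocities can be written as a short Python function; this is a minimal sketch with hypothetical velocity picks, not the study's actual processing workflow.

```python
import math

def dix_interval_velocity(v_rms, t):
    """Dix (1955): interval velocity of the layer between two-way times
    t[i-1] and t[i], given the RMS velocities v_rms at those times."""
    v_int = []
    for i in range(1, len(t)):
        num = v_rms[i] ** 2 * t[i] - v_rms[i - 1] ** 2 * t[i - 1]
        v_int.append(math.sqrt(num / (t[i] - t[i - 1])))
    return v_int

# Hypothetical picks: RMS velocity (m/s) at two-way time (s).
print(dix_interval_velocity([1800, 2100, 2400], [0.5, 1.2, 2.0]))
```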
Pore pressure, or formation pressure, is the pressure experienced by the pore fluids in the pore space of subsurface formations. Knowledge of the expected pore pressure and fracture gradients provides valuable information for efficiently drilling wells with optimum mud weights, for casing point selection, and for proper completions. Formation pressures influence the compressibility and failure of reservoir rocks. Furthermore, they allow the identification of the effectiveness of seals and of the hydraulic connectivity of the system. To understand the possible cause of abnormal formation pressures, it is essential to understand the importance of petrophysical and geochemical parameters and their relationship to the structural and tectonic history of a given area. Before drilling, pore pressure is estimated from pore pressure data from offset wells in the area. However, subsurface conditions can vary widely from a well-known area to an area of apparently similar conditions in a nearby basin. Pore pressure can also be estimated from logging-while-drilling data, such as resistivity measurements, using various methods; pore pressure data are needed to calibrate the results.
In normally pressured formations, porosity decreases with depth as the pore fluids are expelled from the pores due to the increasing overburden weight. This pore fluid expulsion maintains effective communication of the pore fluids with the surface, so at any depth the pore pressure is simply the hydrostatic pressure (1.03 g/cm³ or 0.433 psi/feet) of the water column. In other words, pore pressure in normally compacted sediments is entirely due to the density and height of the fluid column. In abnormally overpressured formations, the pore water expulsion is intercepted by rapid sedimentation and the absence of permeable pore networks. Thus, when the pore fluid experiences pressure above the normal hydrostatic pressure (1.03 g/cm³ or 0.433 psi/feet), overpressure or super pressure develops (Bourgoyne 1991). In other words, the moment the pore fluid starts bearing the weight of the overlying sediments, overpressure develops.
In the absence of well data, seismic velocities are the only available pre-drill tool to estimate formation pressures. Although pore pressure prediction has a history of five decades, wildcat pore pressure predictions still carry a wide range of uncertainty. Pore pressure prediction in geologically challenging areas, such as anticlines and fold-thrust faults, combined with the possibility of abnormal pressures, elevates this prediction to a high level of uncertainty (Swarbrick et al. 1999). This paper discusses pre-drill pore pressure prediction from seismic velocities for safe wildcat well planning.
As the thickness of the Atharamura stratigraphy is not well known, the bulk density of the formation was calculated using the values Δt_max = 62.5 μs/feet (for shale) and Δt_f = 200 μs/feet (for water) in Eq. 2.
In this study, repeat formation tester (RFT) data were collected from nearby offset wells, namely AD-4 trend, Agartala Dome-6, Ambasa trend, Kathalchari trend, Kubal, Masimpur-3, Rokhia structure-RO1, and Tichna structure-TI1, to estimate reliable subsurface fluid pressures as well as to gain a better understanding of the pore pressure succession in this region. The predicted pore pressure values at common depth points (CDPs) are compared with the pore pressures measured by RFT in these drilled offset wells.
Geologic and structural settings
The Tripura region is situated in the north-eastern sector of India and is surrounded by the territories of Bangladesh and Burma, except in the north-eastern part, which is bordered by the Indian states of Assam and Manipur (Fig. 2). Geographically, it is bounded by the latitudes 22°00′N and 24°30′N and the longitudes 91°10′E and 93°30′E. Geomorphologically, this region is characterized by an alternating succession of ridges and valleys with a roughly north-south trend. The general elevation of the region rises eastward from a few tens of meters in the area adjoining the Bangladesh plains in western Tripura to about 1,800 m in eastern Mizoram bordering the Chin Hills of Burma.

The sedimentation in this basin probably started with the breakup of Gondwanaland in the Jurassic and Cretaceous (Ganguly 1983) and has been almost continuous since then. The Palaeocene-Eocene Disang formation forms the base, overlain by rocks of the Barail group, which is divided into the Laisong followed by the Jenam and Renji formations. The overlying Miocene Surma group is made up of the lower, middle, and upper Bhuban formations, with the Bokabil formation occupying the topmost part. The Tipam, Dupitila, and Dihing formations constitute the upper Neogene units. An aerial gravity survey led to the delineation of 14 large closed anticlinal structures, comprising a thick deltaic sedimentary succession of Neogene age with favorable geological prospects, viz., the Rokhia, Tichna, Gojalia, Baramura, Tulamura, Atharamura, Batchia, Langtarai, Harargaj, Machlithum, Khubal, Skham, Langai, and Jampai anticlines (Fig. 3). In addition, a buried dome structure was suspected, on the basis of geomorphological features, in the wide synclinal trough between the Rokhia and Baramura anticlines; this was later confirmed by seismic survey and named the Agartala Dome. A series of long and narrow anticlines with north-south trending axial traces, separated by broad intervening synclines, is present in the Tripura fold-thrust belt (FBT). Some of these anticlines show en echelon offsets. In most of the anticlines, the middle Bhuban formation is capped by the upper Bhuban, Bokabil, and Tipam formations (Fig. 4). High abnormal to super pressures are observed in the middle-lower Bhuban in practically all the structures of the Cachar area, with pressure gradients reaching almost geostatic values or even exceeding them. Compaction disequilibrium, aided partly by clay diagenesis and tectonic activity, has been found responsible for the generation of overpressures in the Tripura area (Sahay et al. 1998).

Data from wells drilled in the past in the Tripura region confirm a super pressure regime below the middle Bhuban, but not a single well has been drilled to the deeper depths of the middle Bhuban due to well control problems. Unlike in the other anticlines, in the Atharamura anticline the middle Bhuban is exposed at the surface, which increases the possibility of overpressure at shallow depths.
Methodology
In the absence of pore pressure data in the study area, we utilize seismic velocities as a reasonable alternative for formation pore pressure prediction. Seismic velocity analysis in the reflection wave method has traditionally been applied for the pre-drill prediction of subsurface overpressured formations since the early 1960s. The location of the first successful implementations was the Gulf of Mexico, where the thick Pleistocene-Miocene sediments are represented by clastic rocks, which are poorly consolidated in the upper section and are often overpressured from the very top.
In order to obtain an estimation of the pore pressure from seismic velocities, one must know how the velocities are influenced by pore pressure. Pore pressure estimation from seismic velocity data is a multidisciplinary subject that requires thorough knowledge of seismic data processing as well as an understanding of rock physics. The key parameters for pore pressure prediction are the P-wave (V_p) and S-wave (V_s) velocities. All methods take advantage of the fact that sonic velocities depend on the effective pressure, and hence the pore pressure.

The relation between effective pressure and velocity depends heavily on the texture and mineral composition of the rock. For instance, for unconsolidated sandstones, the P-wave velocities vary significantly with effective pressure (Domenico 1977). The mechanism thought to be important here is the strengthening of grain contacts with increasing effective pressure. When external load is applied to unconsolidated sand, the contact between the individual grains becomes stronger, and the stiffness of the sand is increased. This leads to an increase in P-wave velocity. Overpressured zones are mostly caused by undercompaction, in which porosity remains abnormally high with depth. This leads to a reduction in seismic wave velocity and hence a decrease in acoustic impedance.

As per Hottman and Johnson (1965), an empirical correlation of velocity ratios versus expected pressure gradient or mud weight is essential for quantitative pore pressure evaluation. In all velocity-based methods, the development of normal compaction curves plays a critical role in overpressure estimation and limits the uncertainty of the prediction. The Hottman and Johnson method works well where offset well data are readily available, but in wildcat planning the development of the normal compaction trend will determine the uncertainty in the pore pressure estimation.
Hydrostatic pressure is defined as the pressure exerted by a column of water at any given point in that column, when the water is at rest. It is the pressure due to the density and vertical height of the column, which exerts force in all directions perpendicular to the contacting surface (Bourgoyne 1991). Mathematically, it can be expressed as the product of the fluid density, the height of the fluid column, and the acceleration due to gravity.
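A minimal sketch of this relation in Python (our own illustration; fresh-water density is assumed, which reproduces the 0.433 psi/feet normal gradient quoted above):

```python
G = 9.80665           # acceleration due to gravity, m/s^2
PA_PER_PSI = 6894.76  # pascals per psi

def hydrostatic_pressure_psi(depth_m, density_kg_m3=1000.0):
    """Hydrostatic pressure P = rho * g * h, returned in psi."""
    return density_kg_m3 * G * depth_m / PA_PER_PSI

# Normal gradient check: about 0.433 psi per foot (0.3048 m) of fresh water.
print(hydrostatic_pressure_psi(0.3048))
```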
Overpressure, on the other hand, is caused by undercompaction, fluid volume increase, fluid migration and buoyancy, and tectonics (Swarbrick et al. 1999). Compaction is a process associated with sedimentation. When deposition occurs, the grains support the weight of the overlying sediments due to the point-to-point contact between them. As deposition proceeds, fluid trapped in the pore spaces escapes and porosity reduces to balance the overburden weight, so as to maintain the normal hydrostatic pore pressure. When this equilibrium (pore water expulsion) is disturbed by rapid sedimentation (Rubey 1927) or by the absence of permeable pore networks, abnormal pressure or overpressure occurs. This change in the normal trend is called undercompaction or compaction disequilibrium, and it is the dominant overpressure-causing mechanism at shallow depths.

Formation pressure is the pressure acting upon the fluids in the pore space of the formation. In a normally pressured geologic setting, the formation pressure will equal the hydrostatic pressure. Whenever there is a deviation from the normal trend line, abnormal formation pressure occurs.

Among the available methods, the predictions of Bowers (1995) and Eaton (1975) are well known for their accuracy. However, the real constraint in the selection of a prediction method is the availability of data. As the Tripura sub-basin lacks extensive exploration work, the offset well data required for Bowers' method are not readily available. Therefore, the modified Eaton's method is used here to predict pore pressure, and the predictions are compared with pore pressures measured in offset wells.
By time-depth conversion (Pennebaker 1968), the depths of formations having different acoustic impedances can be found. From the interval travel time data, the formation interval density was found by the following formula (ENI 1999), expressed in terms of interval velocity: …; and in terms of interval transit time: …, where v_int, v_max = interval and matrix velocities of the formation (m/s), and Δt_max, Δt_f are the interval transit times in the rock matrix and fluid (μs/feet). The overburden pressure is then calculated simply by the following equation: ….

The development of normal compaction parameters plays a vital role in determining the reliability of the pore pressure prediction. Notable pore pressure prediction methods are the Hottman and Johnson, equivalent depth, Eaton's (1975), and Bowers (1995) methods. The first prediction approach, by Hottman and Johnson (1965), is still used in the industry due to its preciseness in pore pressure prediction. This method utilises calibrated sonic log velocities from offset well data and estimates the pore pressure for the proposed drilling location by linear regression. Eaton's method (Eaton 1975) approximates the effective vertical stress with a ratio of sonic log velocities or resistivity values (Fig. 5). The modified Eaton's equations for a variable overburden gradient are

P = S − (S − P_hyd) · (V_observed/V_normal)^3 and P = S − (S − P_hyd) · (R_observed/R_normal)^1.2,

where V_observed, R_observed = observed values of interval velocity and resistivity at the depth of interest; V_normal, R_normal = values of interval velocity and resistivity if the formation were compacted normally at the same depth; S = overburden pressure; and P_hyd = normal hydrostatic pressure. The method applied for pore pressure prediction from the seismic velocity data is explained systematically in Fig. 6. Two seismic sections, located in the synclinal and flank parts of the Atharamura anticline, are considered in the paper. A total of 6 CDPs are taken from the synclinal section and a total of 22 CDPs from the flank section. Both seismic sections are oriented in the west-east direction.
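A minimal Python sketch of the velocity form of Eaton's relation (our own illustration with hypothetical input values; the exponent 3 is the value commonly used for the velocity form, with 1.2 typical for the resistivity form):

```python
def eaton_pore_pressure(overburden_psi, hydrostatic_psi,
                        v_observed, v_normal, exponent=3.0):
    """Modified Eaton's relation, velocity form:
    P = S - (S - P_hyd) * (V_obs / V_norm)^E."""
    ratio = v_observed / v_normal
    return overburden_psi - (overburden_psi - hydrostatic_psi) * ratio ** exponent

# Hypothetical values at one depth point: a slowed-down (undercompacted)
# interval velocity yields a pore pressure above hydrostatic.
print(eaton_pore_pressure(9000.0, 4330.0, v_observed=2300.0, v_normal=2600.0))
```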
Results and discussion
In the Atharamura anticline, overpressure starts at shallow depths (1,482-2,145 m) in the synclinal section, while in the flank section it starts deeper (2,653-5,919 m), as shown in Table 1. The pore pressure gradient attains a maximum value of 1.03 psi/feet in both the synclinal and flank sections of the Atharamura anticline. Velocity reversals were frequently observed in both the flank and synclinal parts of the Atharamura anticline (Figs. 7, 8), but we were unable to confirm fluid expansion as the cause, as density log data are not available. Table 1 shows that the predicted overpressured formation continues up to a depth of 10,000 m, which is practically impossible, as no sedimentary rocks exist below a depth of 6,000 m. This is because the entire pore pressure prediction is based on the assumption that velocities have a linear relationship with depth. This assumption is not valid at greater depths (Aki and Richards 1980), as the linear relationship between propagation velocity and depth vanishes below 6,000 m.

The pore pressures predicted from the 6 CDPs in the synclinal section and the 22 CDPs in the flank section of the Atharamura anticline were compared with pore pressures measured in offset wells, and a good match was observed between them (Table 2). Of the 22 CDP predictions in the flank section, 3 show similarities with the offset-well measured pore pressures, and 5 of the 6 CDP predictions in the synclinal section do so. It should be stressed here that there is no exact match but only a resemblance between the predictions and the measured pore pressures. This can be observed with the help of the figures shown for the eight offset wells, namely AD-4 trend, Agartala Dome-6, Ambasa trend, Kathalchari trend, Kubal, Masimpur-3, Rokhia structure-RO1, and Tichna structure-TI1 (Figs. 9-16). This pressure match cannot, however, be taken as assurance of the accuracy of the predictions. Moreover, it is not necessary for the predictions to match the pore pressures of offset wells that are located far away from the Atharamura anticline and in different geological conditions. For example, one of the offset well locations, the Agartala Dome, is a surface structure, while the Kathalchari, Tichna, and Kubal wells are located in different anticlines exposed at the surface. Nevertheless, this match gave an opportunity to explore the most likely reason for the pore pressure succession in the Tripura region (Fig. 3).
Measured pore pressures of wells drilled on the tops of the other anticlines match the predicted pore pressures on the flank and synclinal parts of the Atharamura. This indicates the presence of a single pore pressure source in the sub-basin. As the anticlines in Tripura become steeper from west to east, the overpressures measured on top of the other structures located in the western part matched the predicted pressures of the Atharamura structure in the eastern part. The increasing steepness from west to east could be the main reason for the migration of overpressure to greater depths in the Atharamura. Unlike in the flank and synclinal parts, the hydrocarbon-bearing middle Bhuban formation is exposed at the surface on top of the Atharamura anticline. It thus offers a permeable flow path for pore fluids to the top of the Atharamura. If an impermeable seal were available on top of the anticline, the overpressures encountered at deeper depths in the flank part could be expected at shallow depths on the top of the anticline.
Conclusions
• Seismic interval velocities can be used to estimate a pore pressure section from surface seismic data, but accurate seismic velocities are required for reliable results, together with offset well data for comparison. Pore pressure prediction provides critical information for the design of future wells and for the understanding of fluid migrations.
• The pre-drill pore pressure prediction approach requires integration of surface and borehole measurements to minimize drilling risks and reduce the cost of drilling.
• Detection of the overpressured zone in the Atharamura anticline can be achieved by establishing an accurate seismic velocity−pore pressure transform. The predicted pore pressure, after calibration to formation pore pressure measurements, indicated different pore pressure regimes at different depths.
• Seismic velocity plateaus confirm the cause of overpressure; the main reason for this overpressure is undercompaction.
• From the comparisons with pore pressures measured in offset wells, it is found that the region is characterized by a single pressure source and that overpressures migrated to shallow depths from west to east in the Tripura sub-basin.
• As the hydrocarbon-bearing middle Bhuban formation is exposed on the top of the anticline, there is a strong possibility of overpressures at shallow depths, provided an impermeable seal is present on the top.
|
v3-fos-license
|
2020-02-13T09:23:05.424Z
|
2020-02-10T00:00:00.000
|
218862490
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://publichealth.jmir.org/2020/2/e17217/PDF",
"pdf_hash": "eaf7f103841f41f1a737f77917d998da0098b583",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43487",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "de4047ac0aa1a1803a1e0ee3c61e681c01f84428",
"year": 2020
}
|
pes2o/s2orc
|
Time From HIV Diagnosis to Viral Suppression: Survival Analysis of Statewide Surveillance Data in Alabama, 2012 to 2014
Background Evaluation of the time from HIV diagnosis to viral suppression (VS) captures the collective effectiveness of HIV prevention and treatment activities in a given locale and provides a more global estimate of how effectively the larger HIV care system is working in a given geographic area or jurisdiction. Objective This study aimed to evaluate temporal and geographic variability in VS among persons with newly diagnosed HIV infection in Alabama between 2012 and 2014. Methods With data from the National HIV Surveillance System, we evaluated median time from HIV diagnosis to VS (<200 c/mL) overall and stratified by Alabama public health area (PHA) among persons with HIV diagnosed during 2012 to 2014 using the Kaplan-Meier approach. Results Among 1979 newly diagnosed persons, 1181 (59.67%) achieved VS within 12 months of diagnosis; 52.6% (353/671) in 2012, 59.5% (377/634) in 2013, and 66.9% (451/674) in 2014. Median time from HIV diagnosis to VS was 8 months: 10 months in 2012, 8 months in 2013, and 6 months in 2014. Across 11 PHAs in Alabama, 12-month VS ranged from 45.8% (130/284) to 84% (26/31), and median time from diagnosis to VS ranged from 5 to 13 months. Conclusions Temporal improvement in persons achieving VS following HIV diagnosis statewide in Alabama is encouraging. However, considerable geographic variability warrants further evaluation to inform public health action. Time from HIV diagnosis to VS represents a meaningful indicator that can be incorporated into public health surveillance and programming.
Introduction
The HIV care continuum ("treatment cascade") is a unifying framework delineating the successive steps following acquisition of HIV infection needed to achieve optimal individual and population health outcomes [1]. The continuum, beginning with serostatus awareness via HIV testing and culminating in plasma HIV viral suppression (VS, <200 c/mL), has been widely adopted for clinical, public health, advocacy, and policy purposes. Indeed, six of the 10 targeted outcomes in the updated National HIV Prevention Indicators for the United States [2] represent discrete steps along the continuum. Individual-level goals focus on attaining higher levels of VS (80% among persons with diagnosed HIV) through increased diagnosis, linkage, and retention in HIV care. A population health-level goal is to reduce new HIV diagnoses by 25%. Similarly, the Joint United Nations Programme on HIV/AIDS has put forth global "90-90-90" targets for three distinct steps on the HIV care continuum: 90% serostatus awareness, 90% antiretroviral therapy (ART) receipt among those with diagnosed HIV, and 90% VS among those receiving ART [3].
Although the value of delineating performance at the successive steps on the continuum is clear, there is an opportunity to take a broader view by evaluating success in traversing the anchoring steps on the continuum: HIV diagnosis and VS. Indeed, as HIV surveillance data reported to public health departments and the US Centers for Disease Control and Prevention (CDC) now include individual-level plasma HIV viral load (VL) values in most jurisdictions, in addition to reporting of diagnoses, there is an opportunity to use surveillance data to evaluate VS among persons with newly diagnosed HIV. To this end, we previously published a novel HIV surveillance indicator, time from HIV diagnosis to the initial report of VS (<200 c/mL), using publicly reported HIV surveillance data from 19 jurisdictions with comprehensive plasma VL reporting in 2009 [4]. In that study, we observed a median time of 19 months from HIV diagnosis to VS among 17,028 diagnosed persons across jurisdictions. Notably, linkage to care within 3 months of diagnosis (hazard ratio, HR 4.84, 95% CI 4.27-5.48) and better retention in care, as indicated by a higher number of time-updated care visits (HR 1.51 per additional visit, 95% CI 1.48-1.52), were associated with more expeditious VS. From a clinical and public health perspective, a shorter time from HIV diagnosis to VS translates to a reduction in morbidity and mortality and to a reduction in the time during which an individual is viremic and likely to transmit HIV [5,6]. People living with HIV who take HIV medicine as prescribed and get and keep an undetectable VL have effectively no risk of transmitting HIV to their HIV-negative sexual partners [7,8]. Similarly, decreasing the time between HIV diagnosis and VS, together with support for the maintenance of VS, corresponds to a decrease of circulating virus in the population that can ultimately reduce HIV incidence [9].
Supportive services (eg, case management and transportation assistance), such as those provided through the Ryan White HIV/AIDS Program, are vital for helping shepherd people living with HIV (PLWH) through the HIV care continuum and attaining VS [10]. Similarly, enhanced personal contacts (eg, personalized reminder calls for upcoming appointments and check-ins after missed appointments) increase retention in care [11]. However, evaluation of the time from diagnosis to VS captures the collective effectiveness of HIV prevention and treatment activities in a given locale, including testing, clinical, ART, and supportive services provided by public health, community-based organizations (CBOs), and clinical entities to move persons across the steps of the HIV care continuum [10]. As such, it provides a more global estimate of how effectively the larger HIV care system is working in a given geographic area or jurisdiction and serves a role complementary to evaluating individual steps on the continuum. In particular, evaluation of temporal and geographic variability in median time from diagnosis to VS may serve as a powerful public health indicator to measure changes over time in response to HIV treatment and prevention initiatives and, more so, to identify areas in need of process improvements and/or additional resources. Here, we use data from the National HIV Surveillance System (NHSS) to evaluate temporal and geographic variability between 2012 and 2014 across the 11 public health areas (PHAs) in Alabama as a case study of the utility of this novel HIV surveillance indicator to inform public health practice and policy.
Methods
Historically, Alabama is divided into 11 PHAs (Figure 1), with statewide coordinated HIV prevention and treatment activities led by the Alabama Department of Public Health (ADPH) in conjunction with local health departments, CBOs, and clinical agencies, with HIV care largely supported by the Health Resources and Services Administration via Ryan White funding [12]. Our primary objective was to evaluate temporal and geographic variability in median time from HIV diagnosis to VS by PHA to inform public health action. The ADPH reports cases of HIV, including demographic, clinical, and risk characteristics, to CDC's NHSS. Reporting was expanded by law in 2011 to include HIV VL test results. All labs in Alabama are required by state law to report diagnostic tests confirming HIV diagnoses and all VL results, including undetectable VLs, to the ADPH. In addition, community and clinical agencies providing HIV testing services are required to submit case report forms, including sociodemographic data, to the ADPH for persons with newly diagnosed HIV to allow for monitoring of epidemiological trends over time. Trained ADPH staff are responsible for follow-up with community and clinical agencies when there is incomplete data reporting on new HIV cases, in many instances extracting the requisite data from agency medical records to ensure complete data capture. The ADPH transmits statewide HIV surveillance data to the CDC without personal identifiers. Analyses summarized sociodemographic, temporal (median times to VS), and geographic variables. The Kaplan-Meier approach was used to evaluate the proportion without VS and the time from HIV diagnosis to VS, defined as the first date with a VL value <200 c/mL. All analyses were conducted using SAS software, version 9.3 (SAS Institute) [14].
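To make the time-to-suppression calculation concrete, a minimal Kaplan-Meier sketch in Python is shown below (hypothetical code, not the study's SAS workflow; it assumes one record per person with months from diagnosis to first VL <200 c/mL, and a flag for persons never observed to suppress, who are censored at their last follow-up):

import numpy as np

def km_median_time_to_vs(months, suppressed):
    """Kaplan-Meier product-limit estimate of the median time from
    HIV diagnosis to viral suppression.

    months     : follow-up time in months (to first VL <200 c/mL,
                 or to end of observation if never suppressed)
    suppressed : 1 if VS was observed, 0 if censored
    """
    months = np.asarray(months, dtype=float)
    suppressed = np.asarray(suppressed, dtype=int)
    surv = 1.0  # probability of remaining unsuppressed
    for t in np.unique(months[suppressed == 1]):
        at_risk = np.sum(months >= t)                      # still in follow-up at t
        events = np.sum((months == t) & (suppressed == 1)) # suppressions at t
        surv *= 1.0 - events / at_risk
        if surv <= 0.5:       # first time the curve crosses 50%
            return t
    return None  # median not reached within follow-up

# Toy example: 6 persons, one censored at 12 months.
print(km_median_time_to_vs([3, 6, 8, 8, 12, 12], [1, 1, 1, 1, 0, 1]))  # -> 8.0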
As the data for this study were devoid of personal identifiers and the analysis was conducted by members of the study team at the CDC in a way that participants could not be identified, review by an institutional review board was not required.
Results
Among 1979 persons with HIV newly diagnosed during 2012 to 2014, the median time from HIV diagnosis to VS varied markedly across PHAs, with the longest median times observed in 2 PHAs (Figure 3). The median time of 13 months in these 2 PHAs is considerably higher than the 5 to 8 months in the other nine PHAs. PHA03 (including the Tuscaloosa metropolitan statistical area, MSA) and PHA11 (including the Mobile MSA) include more populous regions of the state. In contrast, PHA07, which includes a mostly rural, less-resourced area within Alabama's Black Belt, had the shortest median time from HIV diagnosis to VS, that is, five months. (Table footnote: the upper boundary of the 95% CI for MSM and IDU was missing because its value was beyond the 48 months of the study period from 2012 to 2015.)

Discussion

In the era of rapid ART start programs, time to VS has become a critical indicator of programmatic success. However, it is well noted that sustained VS is essential to maximize individual health outcomes and the population health benefits of U=U (Undetectable=Untransmittable) [16]. As such, cross-sectional 12-month VS gives some indication of sustained VS beyond the initial time to VS metric. However, other methods of measuring sustained VS clearly have value and are needed to best measure longitudinal VL trajectories and maintenance of VS beyond initial success. We suggest that readily available HIV surveillance data, including the novel time from diagnosis to VS indicator, can be used to inform public health action. This can extend to aid in the evaluation of new and ongoing HIV prevention and treatment initiatives in a community, as well as targeted allocation of limited resources to maximize HIV outcomes.
Temporal trends in Alabama from 2012 to 2014 are encouraging, with a four-month decrease (from 10 months to six months) in the median time to achieve VS. Notably, adoption of changes in HIV treatment guidelines recommending universal ART treatment for all persons living with HIV, as well as the increased uptake of integrase strand inhibitors during the observation period, which have more rapid decline in plasma viremia relative to other antiretrovirals, may contribute to the observed large improvement over a relatively short period of time. Our findings provide proof of concept that there is value in monitoring this surveillance indicator to evaluate temporal variability at a population level, as well as to provide a critical variable for modeling exercises evaluating how variability over time (eg, shorter median time to VS following HIV diagnosis) has population-level impact on new HIV infections. Simulation modeling exercises (eg, Markov modeling and agent-based simulations), as espoused by Skarbinski and colleagues [17], could evaluate the impact of varying median times from diagnosis to VS over time and across geographic areas, accounting for disease prevalence, to estimate how shortening the interval to VS would translate to anticipated new HIV cases. As time elapses and more data are available, time from diagnosis to VS could be used in population modeling approaches to evaluate the impact of this interval on observed new HIV cases longitudinally and across geographic areas.
Interestingly, we observed that larger municipalities with likely more resources for HIV prevention, treatment, and supportive services did not necessarily have shorter median times from diagnosis to VS. The broad range of five to 13 months to achieve VS among 1979 persons with diagnosed HIV across Alabama's 11 PHAs is a call to action. To understand this variability, further research is needed within each PHA into the services offered and lived experiences of PLWH, traversing the care continuum from initial diagnosis to VS. We posit that a range of factors at various levels grounded in a socioecological framework, from the individual, interpersonal, community, and health care system, will impact individuals' trajectories across the continuum, as measured by time from diagnosis to VS [18]. Potentially salient multilevel factors accounting for the variation found among Alabama PHAs may include those associated with suboptimal adherence to ART, such as poverty [19,20] and neighborhood disorder in the community (eg, crime and drug use) [21]. As adherence is an important step in the HIV care continuum and is necessary for achieving VS [22], it is likely that factors which affect adherence also influence time to VS.
Although not the focus of this study, it is also important to consider some of the racial, structural, and geographic factors in Alabama that affect HIV incidence in the state, as these help to contextualize our findings. Black/African Americans are disproportionately affected by HIV in Alabama: although our study found that 69.8% (1382/1979) of new HIV diagnoses in Alabama between 2012 and 2014 were among black/African American people, just over one-quarter (26.8%) of persons living in Alabama identify as black or African American, according to 2018 estimates [23]. Alabama is also one of 14 states to date that has not expanded Medicaid following implementation of the Affordable Care Act [24], thereby creating a coverage gap whereby people with the lowest incomes, below 138% of the Federal Poverty Level, are ineligible for subsidized health insurance through the Marketplace [25]. Lack of Medicaid expansion has negative implications for HIV health, as being uninsured (and without any other health care assistance, as in from the Ryan White HIV/AIDS Program) is associated with increased odds of viral nonsuppression [26]. In addition, as one of the seven states highlighted in the national "Ending the HIV Epidemic" initiative as having a disproportionate incidence of HIV in rural areas [27], Alabama experiences a high HIV burden in rural regions of the state. Although these contextual factors are important for assessing differences in VS across states, they may also help to illuminate some of the intrastate variation in VS found in our study. For example, a possible reason why the mostly black/African American, rural PHAs in Alabama performed better than some of the other, more metropolitan areas of the state may be because of racial segregation, which is common in most Alabama cities. As racial discrimination has been linked to suboptimal ART adherence [28], racial segregation and resultant racial discrimination may help to explain this finding.
As this study exemplifies, it is imperative to gain a better understanding of shared and unique factors across geographic areas to identify the most salient barriers, facilitators, and best practices for expediting the time to VS following HIV diagnosis for all persons, regardless of geography. Such monitoring would also be applicable and beneficial in other states and would inform the generalizability of our findings. Such analyses could provide additional insights on shared and discrepant performance of this HIV surveillance indicator, according to a range of factors grounded in a socioecological framework, which could further inform public health action and resource allocation.
In recent years, increased attention has focused on reducing the time from initial HIV diagnosis to linkage to medical care and ART initiation to achieve better early engagement in HIV medical care and more expeditious VS [29]. Notably, there are often numerous agencies that interact with an individual across the HIV prevention and treatment continua. CBOs and public health departments tend to offer extensive HIV testing as well as other prevention and supportive services. High-impact prevention activities, as defined by the CDC as evidence-based, have expanded in many instances to include linkage to care and ART adherence programs, affecting subsequent steps on the care continuum. Evidence informed activities, such as the Data to Care initiative to use surveillance data to identify out-of-care PLWH and link them to care, are also important for helping PLWH move through steps of the HIV care continuum [30]. On-going attendance and retention in medical care is also needed to optimize sustained ART receipt to achieve VS. Rather than evaluating individual steps along the HIV care continuum, time from diagnosis to VS is a surveillance indicator that captures the successful, expeditious traverse through the care continuum as a result of the collective efforts between numerous agencies. As such, the performance of this indicator may represent the effectiveness of the response and delivery of services within a community or geographic area. However, we suggest these data can provide an objective measure that can be tracked over time to assess, in part, the effectiveness of linkage to care and treatment services affecting the disease locally. The results of several recent trials in urban domestic and international settings have indicated that rapid ART initiation, including starting ART on the same day as HIV diagnosis, shows promise in improving patient and programmatic outcomes, including improved linkage to care, early retention in care, and, indeed, shorter time to VS [29,31,32]. As suggested earlier, a more detailed understanding of an individual's experience traversing the care continuum within a geographic area, such as within each PHA in Alabama, is essential to inform our interpretation of the widespread variability in VS by place and to guide a more efficient and effective statewide coordinated HIV plan.
Limitations of our study include the potential for underreporting of VL values, which could affect the estimated time from HIV diagnosis to VS. However, we note that widespread efforts by the ADPH to monitor laboratory reporting and to provide feedback and technical assistance would mitigate any impact on the study findings. Furthermore, we were only able to observe persons with HIV diagnosed over a three-year period from 2012 to 2014 because of the relatively recent implementation of HIV biomarker reporting in our state and the required lags in data reporting. However, temporal improvement was still observed, which supports this surveillance indicator as a useful tool to monitor the efficacy of community-level programs. In addition, its application in other states and jurisdictions will allow for more mature and robust reporting through the National HIV Surveillance System. It was beyond the scope of this study to further explore other factors that may have been associated with the observed geographic heterogeneity, including locally coordinated high-impact prevention efforts, barriers to and facilitators of primary medical care access, and the lived experiences of individuals with diagnosed HIV, especially as these are affected by HIV-related stigma. These will be critical areas for future research. As the focus of our study was on temporal and geographic variability, we did not control for sociodemographic differences in assessing VS within 12 months and median time to VS. Future research should account for individual-level variation. In addition, future research should assess whether these differences in VS across Alabama PHAs represent durable patterns or vary over time.
Public Health Implications
We describe the application of a novel HIV surveillance indicator, time from HIV diagnosis to VS, which is readily captured from data that are reported to state health departments and the CDC. The temporal and geographic variability in this HIV surveillance indicator among persons with HIV diagnosed in Alabama between 2012 and 2014 provides proof of concept of how incorporation of this metric could inform public health practice within jurisdictions, states, and geographic regions in the United States. This novel surveillance indicator, spanning the steps of the HIV care continuum from testing to VS, represents a composite measure of the effectiveness of HIV prevention, treatment, and supportive service provision within a locale and can be used to measure trends over time and across geographic territory. Further research, grounded in a socioecological framework, exploring individual and contextual factors that may contribute to heterogeneity seen in this study, is essential to inform and to guide a tailored public health plan to maximize population health impact.
|
v3-fos-license
|
2023-07-04T06:17:27.164Z
|
2023-07-03T00:00:00.000
|
259315272
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.jcim.3c00360",
"pdf_hash": "a8a4a8a8fc9db06f61e06d358df404040797348b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43488",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "fdb2d02d582809969ce581d957d3233c9507f2c7",
"year": 2023
}
|
pes2o/s2orc
|
Arginine Residues Modulate the Membrane Interactions of pHLIP Peptides
Processes at the water–membrane interface often involve protonation events in proteins or peptides that trigger important biological functions and events. This is the working principle behind the pHLIP peptide technology. A key titrating aspartate (Asp14 in wt) is required to protonate to induce the insertion process, increase its thermodynamic stability when membrane-embedded, and trigger the peptide's overall clinical functionality. At the core of pHLIP properties, the aspartate pKa and protonation are a consequence of the residue side chain sensing the changing surrounding environment. In this work, we characterized how the microenvironment of the key aspartate residue (Asp13 in the investigated pHLIP variants) can be modulated by a simple point mutation of a cationic residue (ArgX) at distinct sequence positions (R10, R14, R15, and R17). We carried out a multidisciplinary study using pHRE simulations and experimental measurements. Fluorescence and circular dichroism measurements were carried out to establish the stability of the pHLIP variants in state III and the kinetics of the insertion and exit of the peptide from the membrane. We estimated the contribution of the arginine to the local electrostatic microenvironment, which promotes or hinders other electrostatic players from coexisting in the Asp interaction shell. Our data indicate that the stability and kinetics of peptide insertion and exit from the membrane are altered when Arg is topologically available for direct salt-bridge formation with Asp13. Hence, the position of the arginine contributes to fine-tuning the pH responses of pHLIP peptides, which find wide application in the clinic.
■ INTRODUCTION
Peptide–membrane interactions are vital for several biological processes, such as molecular transport, signaling pathways, and cell membrane integrity. 1−3 While these processes are at the core of a wide array of research areas, the molecular interactions between proteins and lipids are still hard to fully characterize in complex systems. Simple peptide models are widely popular as they are customizable: they mimic defining traits of membrane proteins, and peptides can be combined with membrane models in virtually limitless ways to study different types of biological systems. The advent of peptide models such as GALA 4 and WALP 5 propelled the study of transmembrane peptide design, from which several residue sequence patterns that determine membrane folding and insertion were identified, such as hydrophobic residue stretches (alanine and leucine) and outward-facing hydrophilic residues (lysines, arginines, aspartates, glutamates). 6 Several computational studies have focused on simple models, using single-lipid membranes, to modulate and understand the physical chemistry of peptide−membrane interactions. These studies delved into modifying the peptide length, charged residues, and hydrophobic stretches to provide molecular insight into a wide range of biophysical phenomena, including peptide structural disposition in the membrane 7,8 and the formation of membrane pores. 9,10 Although most of these studies do not fully mimic the physiological environment, they provide important information to determine possible folding pathways 11,12 and identify key residues for peptide function. 13−15 Still, in other experimental studies, complex lipid compositions (i.e., cholesterol and anionic lipids) are often used in tandem with different ion concentrations to highlight how peptide kinetics changes as a result of the effect of more electrostatically charged environments on the protonation changes of the relevant residues. 16−20 One of the more clinically relevant peptide models is the pH-low insertion peptide (pHLIP), which can target imaging and therapeutic agents to tumors. The pHLIP family is characterized by long (28−40 amino acids) membrane-inserting peptides, whose distinguishing trait from other transmembrane model peptides is their acidity-dependent membrane insertion and folding. 21−26 wt-pHLIP possesses a kinked α-helical fold, 14 most commonly occurring below pH 6.0 (state III). Otherwise, the peptide adopts a random-coil conformation, either adsorbed to the membrane surface (pH 7.0 to 8.0; state II) or in solution (pH > 8.0; state I). 21 The pH dependency results from the titratable residues that populate the water−membrane interface. By fluctuating between the phosphate region and the deeper ester region, one of the key residues (Asp14) undergoes (de)protonation events, which either promote peptide insertion or membrane exiting. 14 The proton binding affinity is a measure of the energy needed to protonate a given residue, and various factors affect this property: peptide movement, (un)folding, and intermolecular (membrane lipids) and intramolecular (side chains) interactions. 8,15 While Asp14 mostly contributes to the stability of the inserted state, other titratable anionic residues play an important role in the kinetics of the transitions between states. 14
In our previous work, we characterized and identified the electrostatic interactions dictating the pKa of wt-pHLIP 8,14 and of the Var3 peptide, 24,27 which is in clinical trials with imaging and therapeutic agents, in a simple liposomal-like model and in a cell-like model that accounts for the existence of the pH gradient setup. 28 We highlighted the necessity of including the pH gradient to accurately assess the therapeutic potential of transmembrane peptide models, and we also described the intramolecular interactions that predetermine the peptide's thermodynamic membrane stability. One of these fundamental interactions occurs between the key aspartate and a neighboring arginine residue. This creates a distinct aspartate electrostatic microenvironment, which impacts the residue's proton binding affinity. The interactions of the arginine with both the aspartate and the lipid headgroups were previously discussed as well. 15 For many years, the role of cationic residues has been discussed in the context of studies of cell-penetrating peptides. A critical impact on inducing/hindering peptide insertion through morphological membrane alterations was established, 29,30 along with effects on the peptide structure and position within the membrane, 31 including the possibility of promoting pore formation and membrane permeability. 9,29 These effects hinge on the behavior of the cationic residues in the membrane, especially arginines and lysines, as they remain positively charged while moving along the membrane normal. 32,33 When an α-helical peptide is inserted into a membrane, the cationic residue is dragged from an energetically favorable solvent environment to an apolar lipid medium. Depending on its placement within the lipid bilayer, the residue may interact with anionic (e.g., phosphate) groups, effectively working as a peptide anchor, or it can snorkel to minimize the energy cost associated with membrane embedding, pulling the peptide with it, as seen in KALP peptides. 34 Several studies have strengthened the significance of cationic residues in transmembrane peptide models and their ability to modulate the peptide−membrane equilibrium. Furthermore, the presence of more charged groups near key peptide residues directly impacts the protonation and pKa of such residues in pHLIP. In this study, we introduced and investigated several pHLIP Arg variants, based on a known single-Trp template, 25 in which the Arg residue was systematically permuted at different distances and positions relative to the key Asp residue, and we compared the experimental data with modeling calculations. We aim to characterize and assess the impact of these mutations in the context of transmembrane peptide design, while also discussing their impact on the peptide−membrane equilibria, folding stability, key residue pKa values, and other relevant properties.
■ METHODS

Synthesis of Peptides. pHLIP peptides were synthesized and purified by CSBio. The lyophilized peptides were dissolved in a buffer containing 2.7 M urea and then centrifuged through a G-10 column to remove the urea. Concentrations of the peptides were calculated spectrophotometrically by measuring the absorbance at 280 nm and using an extinction coefficient of 12,660 M−1 cm−1. At the concentration range used in the experiments, wt-pHLIP peptides are predominantly monomeric. 35,36 The variants used here did not significantly change the overall number and type of amino acids, just their sequence. Therefore, oligomerization of the investigated pHLIP variants is unlikely at these concentrations.
Fluorescence and Circular Dichroism (CD) Measurements. Using an excitation wavelength of 295 nm and 1 mm excitation and emission slits, tryptophan fluorescence spectra were recorded from 310−400 nm on a PC1 spectrofluorometer (ISS, Inc). The excitation polarizer was set to the magic angle, 54.7°, while the emission polarizer was set to 0° to reduce Wood's anomalies. CD spectra were recorded from 190−260 nm with 1 nm steps on a MOS-450 spectrometer (Bio-logic, Inc). The concentrations of the peptide and POPC were 7 μM and 1.4 mM, respectively, in each experiment. The temperature control was set to 298 K for both fluorescence and CD measurements.
The pH-dependent insertion of the peptides into the lipid bilayer of the POPC liposomes was studied by monitoring either the shift in the spectral maximum of the tryptophan fluorescence spectra or the changes in the molar ellipticity at 222 nm as a function of pH. After the addition of aliquots of citric acid, the pH values of the solutions containing the peptide and POPC liposomes were measured using an Orion PerHecT ROSS Combination pH Micro Electrode and an Orion Dual Star pH and ISE Benchtop Meter. Fluorescence spectra were analyzed, and samples for oriented CD (OCD) measurements were prepared on quartz slides cleaned by a multistep protocol whose final steps included treatment with peroxide and 75% sulfuric acid and rinsing with Milli-Q purified water. A POPC lipid monolayer was deposited on a quartz substrate by the Langmuir−Blodgett (LB) method using a KSV minitrough. For the LB deposition, a small amount of POPC lipids in chloroform was spread on the surface of the subphase and the solvent was allowed to evaporate for about 10 min. Next, the monolayer was compressed to 32 mN/m. When the surface pressure was stabilized, the first slide was inserted into the trough and held there for 60 s so the surface pressure would stabilize again, and then it was pulled out from the subphase at a speed of 10 mm/min. The second layer was created by fusion with POPC vesicles. About 80 μL of a state III sample (7 μM pHLIP, 0.7 mM POPC, and 2 mM pH 3−4 citrate phosphate buffer) was spread onto the slide. The process was repeated for eight more slides. The slides were then stacked on top of each other, with spacers keeping them from sticking together, to obtain a complete set of 9 slides (16 bilayers). Immediately after stacking the slides, OCD spectra were measured (0 h). Afterward, the slides were kept at 100% humidity at 277 K for 6 h. At the end of the 6 h, the excess solution was shaken off each slide and replaced with 80 μL of buffer at the same pH. The slides were again stacked together while being filled with the buffer and stored at 100% humidity at 277 K for another 6 h. At the end of the second 6 h incubation period, the 12 h OCD spectra were measured for the Arg variants in state III (the peptide inserted into the lipid bilayer of the membrane at low pH). Then, after incubating for another 12 h, the 24 h OCD spectra were measured.
Kinetics of Insertion into and Exit from the Membrane. The tryptophan fluorescence kinetics were measured using an SFM-300 mixing system (Bio-Logic Science Instruments) in combination with a MOS-450 spectrometer, with the temperature control set to 298 K. All samples were degassed before the measurements to minimize air bubbles. The peptide and POPC samples were incubated overnight to reach equilibrium, ensuring that most of the peptide was associated with the liposome lipid bilayers. To measure the kinetics of pHLIP's exit from the membrane, the pH of the sample was lowered to 3.5−4.0 by adding citric acid approximately 30 min before each experiment. To follow peptide insertion or exit, equal volumes of the peptide/POPC solution and of either citric acid or sodium phosphate dibasic were mixed to lower the pH from 7.2−7.4 to 3.5−4.0 or to raise the pH from 3.5−4.0 to 7.2−7.4, respectively. To monitor the fluorescence intensity changes during the peptide insertion/exit induced by the pH drop or rise, the tryptophan emission signal was recorded through a 320 nm cutoff filter at an excitation of 295 nm.
Fitting. All data were fit to the appropriate equations by nonlinear least-squares curve fitting procedures employing the Levenberg−Marquardt algorithm using Origin 8.5. The pH-dependence data were normalized to a (0,1) scale and fitted with the Hill equation to determine the cooperativity (n) and the midpoint (pK) of the transition:

F(pH) = 1 / (1 + 10^(n(pH − pK)))

The kinetics data were normalized to the fluorescence intensity of state II and fitted with a multiexponential decay equation:

F(t) = A_0 + Σ_(i=1)^N A_i exp(−t/τ_i)

The value of N was determined by fitting with an increasing number of exponentials until the fit converged with a reduced chi-square of less than 3 × 10−5 or until the addition of another exponential term would lower the chi-square value by less than a factor of 10.
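As an illustration of the Hill fit described above, the following Python sketch (hypothetical code, not the authors' Origin workflow; the data points are synthetic) fits a normalized pH transition with scipy, whose default unconstrained solver is the same Levenberg−Marquardt algorithm:

import numpy as np
from scipy.optimize import curve_fit

def hill(ph, pk, n):
    """Normalized pH-dependence signal: 1 at low pH, 0 at high pH."""
    return 1.0 / (1.0 + 10.0 ** (n * (ph - pk)))

# Toy normalized transition data (for illustration only).
ph = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0])
signal = np.array([1.00, 0.98, 0.92, 0.75, 0.45, 0.18, 0.06, 0.02, 0.00])

# curve_fit defaults to Levenberg-Marquardt when no bounds are given.
(pk, n), cov = curve_fit(hill, ph, signal, p0=[6.0, 1.0])
pk_err, n_err = np.sqrt(np.diag(cov))
print(f"pK = {pk:.2f} +/- {pk_err:.2f}, n = {n:.2f} +/- {n_err:.2f}")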
System Setup and pHRE Simulations. Four pHLIP variant systems were prepared, each composed of 32 amino acid residues and derived from the wt sequence (Table 1). The pHLIP−membrane setups were built using the previous simulations of the wt system 8 as a template. In each setup, the peptide variant was generated as a fully α-helical structure inserted in a 256-lipid 2-oleoyl-1-palmitoyl-sn-glycero-3-phosphocholine (POPC) membrane bilayer. The initial structures were built with the key aspartate (Asp13) placed at the water−membrane interface. Although the peptides are wt-pHLIP variants, these initial structures aimed at a more unbiased approach to the system setup and equilibration protocol, since their equilibrium conformations in the inserted state may differ. After the setup, all systems were submitted to minimization and initialization procedures, followed by a two-step equilibration protocol. The first step consisted of molecular dynamics (MD) simulations (100 ns), with the protonation states of the titrating residues chosen as neutral (if membrane-inserted) or charged (if solvent-exposed). Additionally, distance restraints (1000 kJ/(mol·nm^2)) were applied to preserve the integrity of the α-helical hydrogen bonds between every nth and (n+4)th residue, from the 17th to the 28th residue of the C-terminal region, which corresponds to the region located in the membrane core. The initial protonation assignment and the imposed distance restraints on the helical hydrogen bonds improve the thermodynamic stability of the peptides in their relevant state III starting configuration while promoting a smoother accommodation of the surrounding lipids to the presence of the peptide, i.e., a decrease of nonphysical peptide−membrane configurations. The first step of the equilibration procedure using these restraints resulted in the N-terminal segments of most peptides converging to the kinked α-helical conformation, similar to what has been shown for the wt peptide. 14 This behavior has also been observed recently using MD simulations and bromolipid quenching experiments (Table S1 of the Supporting Information). 20 The second step of the protocol consisted of a 100 ns unrestrained constant-pH molecular dynamics (CpHMD) simulation at pH 6.0 to enable residue titration and remove all initial bias, equilibrating both the conformation and the protonation states of the titrating residues.
All systems were simulated using the pH replica exchange (pHRE) method, 8,38 an enhanced-sampling extension of the CpHMD-L methodology. 14,46,47 This scheme consists of a four-step cycle of n simultaneous CpHMD simulations (pH replicas), each assigned to a pH value within a given pH range: a Poisson−Boltzmann/Monte Carlo (PB/MC) calculation, followed by a molecular mechanics/molecular dynamics (MM/MD) solvent relaxation step and a final MM/MD simulation, with a pH exchange step within the framework of the latter. The MC calculations assign the new protonation states using the PB-derived free energies from the system conformation of the previous cycle. The relaxation step allows the solvent molecules to accommodate the new charged states, avoiding nonphysical spikes in the system's potential energy. The final MM/MD step samples new system conformations using the calculated protonation states. During the MM/MD step, the simulation is stopped and a pH exchange attempt between adjacent pH replicas occurs at a fixed frequency of 20 ps (τRE), lagging 10 ps behind τprot. If the replica exchange is accepted, according to the probability given by eq 3, the conformations and protonation states are swapped between the replicas' pH values, thus increasing the sampling variability at both low- and high-energy states in every replica.
p = min{1, exp[ln 10 (pH_m − pH_l)(x_i − x_j)]} (3)

where pH_m and pH_l are the exchanging pH values and x_i and x_j are the numbers of protonated groups. For all systems, five replicates of 100 ns were performed, each replicate consisting of four pH replicas. The assigned pH values were in the 5.00 to 7.25 range, with a 0.75 pH step. The chosen pH range differs from previous works, 8,14 since, according to the previous equation, a smaller pH gap between replicas improves the probability of a pH exchange. In these simulations, the average exchange efficiency was 40% across all systems. Each replica CpHMD cycle consisted of 20 ps (τprot) steps, whereas the relaxation step was 0.2 ps (τrlx). All systems titrated the N- and C-termini and the acidic residues highlighted in Table 1. In all systems, the starting conformation of each replicate was obtained from the final segments of the CpHMD equilibration protocol.
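A minimal Python sketch of this Metropolis-style exchange test (a hypothetical helper, not the authors' wrapper code) makes the acceptance rule concrete:

import math
import random

def accept_ph_exchange(ph_m, ph_l, x_i, x_j):
    """Metropolis acceptance test for swapping two pH replicas (eq 3).

    ph_m, ph_l : pH values of the two exchanging replicas
    x_i, x_j   : numbers of protonated groups in their current conformations
    """
    delta = math.log(10.0) * (ph_m - ph_l) * (x_i - x_j)
    if delta >= 0.0:
        return True  # favorable swaps are always accepted
    return random.random() < math.exp(delta)

# Example: with a 0.75 pH gap, a conformation carrying 3 extra protons
# moving toward the lower-pH replica is accepted deterministically.
print(accept_ph_exchange(6.50, 5.75, 12, 9))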
MM/MD and CpHMD Settings. All CpHMD and pHRE simulations used a modified version 40,48,49 of the GROMACS 5.1.5 package 50 and the GROMOS 54A7 force field, 51 while a Python-based wrapper was used to apply the pH replica exchange method. 8,28,38 The restrained MD equilibration simulations were performed using the GROMACS 2020.1 package with the GROMOS 54A7 force field. 51 A single-cutoff scheme was applied for the treatment of nonbonded interactions: the forces were updated at every step for all pairs within a 14 Å cutoff. 52 Regarding the long-range interactions, the van der Waals interactions were truncated at 14 Å, while a generalized reaction field (GRF) method, with a dielectric constant of 54 53 and an ionic strength of 0.1 M, was used to treat the Coulombic interactions. Both peptide and lipid bond lengths were constrained using the P-LINCS algorithm, 54 and the water model used was the simple point charge (SPC) model, 55 whose bonds were constrained with the SETTLE algorithm. 56 The integration time step for all MD simulations was 2 fs, and the conformations were sampled from an NPT ensemble. The temperature was kept at 310 K using the v-rescale thermostat 57 with a relaxation time of 0.1 ps, coupled separately to the solute (the peptide and membrane) and the solvent. The system pressure was kept constant with a Parrinello−Rahman barostat 58 at 1 bar, with a relaxation time of 5 ps and a compressibility of 4.5 × 10−5 bar−1.
Poisson−Boltzmann/Monte Carlo Simulations. The Delphi V5.1 program 59 was used to perform the Poisson−Boltzmann calculations. The atomic radii were obtained from the Lennard-Jones parameters of the GROMOS 54A7 force field using a 2 RT energy cutoff, 60 and the atomic partial charges were used directly from the same force field. The peptide−membrane molecular surface was defined by the following parameters: a 1.4 Å radius probe, an ion-exclusion layer of 2.0 Å, and an ionic strength of 0.1 M. The dielectric constants used were 2 and 80 for the solute and the solvent, respectively. To calculate the electrostatic potential, a two-step focusing procedure was conducted with two grids of 91 points each: the coarse grid had ∼1 Å spacing between grid points, while the finer grid had ∼0.25 Å. The defined relaxation parameters were 0.20 and 0.75 for the linear and nonlinear iterations, respectively. Periodic boundary conditions were applied in the x and y directions for the lipid bilayer systems. Background interaction calculations were truncated at 25 Å, and the electrostatic potential convergence threshold was 0.01 kT/e. 41,42,61 The PETIT program performed the MC calculations of the protonation states of the residues using the free energy terms obtained from the PB calculations. 62 Proton tautomerism was accounted for in all titrable groups. For each conformation, 10^5 MC cycles were performed, where each cycle corresponds to a trial change of each individual site and of pairs of sites with an interaction larger than 2 pK units.
Structural Characterization of the Arginine Variants. A proper configurational and local description of these peptide/membrane systems requires structural and electrostatic analytical tools. Therefore, all systems were evaluated for their electrostatic properties, such as the average protonation, the pKa of insertion (pKa^ins), 8,14,63 and the complete pKa profiles of each peptide's key Asp13. The most common structural characterizations consist of secondary structure analysis, membrane bilayer thickness, the membrane insertion of Asp13, and its intramolecular distances to the neighboring groups. This set of analyses clarifies both the configurational and the local changes between the conformational arrangements of each peptide variant.
In this work, the membrane insertion of the Asp13 residue was used as a reference to resolve the behavior of other properties along the membrane normal. The membrane insertion of a given residue is defined by the relative difference between the average Z coordinates of the membrane surface reference and of the residue of interest. 64 The membrane surface reference is defined by a minimum of 10 atoms of the neighboring lipid phosphate groups within a 6 Å radius of the residue of interest in the xy plane. These data can be paired with the insertion values by their time stamps, followed by a slicing procedure using 0.5 Å insertion bins, in which the pertinent data are assigned to the corresponding insertion level, hence yielding the insertion profile of a given property.
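The binning step can be illustrated with a small Python helper (hypothetical code, shown only to make the slicing procedure concrete):

import numpy as np

def insertion_profile(insertion, values, bin_width=0.5):
    """Average a time-paired property over membrane-insertion bins.

    insertion : per-frame insertion of the residue along the membrane
                normal (angstrom, negative below the phosphate reference)
    values    : per-frame values of the property of interest
    """
    insertion = np.asarray(insertion, dtype=float)
    values = np.asarray(values, dtype=float)
    edges = np.arange(insertion.min(), insertion.max() + bin_width, bin_width)
    idx = np.digitize(insertion, edges)  # bin index for every frame
    centers, means = [], []
    for b in np.unique(idx):
        mask = idx == b
        centers.append(edges[b - 1] + bin_width / 2.0)  # bin midpoint
        means.append(values[mask].mean())
    return np.array(centers), np.array(means)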
We used membrane thickness calculations to quantify the local membrane deformations. 8,14 The method performs half-thickness calculations for each monolayer within an annulus region. 64 This region was 0.5 Å wide, as it was defined by two radii centered on the peptide. Using this annulus, a scanning procedure is performed on the xy plane of the membrane monolayer, as both radii are simultaneously increased in 0.5 Å steps. With this approach, we describe both local deformations and membrane "bulk" (unaffected) regions (lipids usually at distances >15 Å). The presented local membrane deformations were calculated as the difference between the local half-thicknesses and the bulk-region half-thickness (beyond the 15 Å cutoff). These calculations were done on all equilibrated conformation snapshots, and, in the bulk regions, the thickness of both monolayers should converge to the same value, i.e., half the thickness value of a pure POPC membrane. The experimental POPC half-thickness range was calculated by interpolating from experimental thickness measurements in the fluid range at different temperatures. 65 To characterize the microenvironment of the key aspartate residue in distinct membrane media, we need to assess and identify the neighboring groups within the first interaction shell of Asp13. For each peptide variant, we calculated the number of interacting lipid phosphate and choline groups, the ArgX−Asp13 side-chain interactions, and the number of hydrogen bonds established with water molecules. All of these and other system properties were calculated as time series and as property insertion profiles. The first-interaction-shell cutoff value (0.52 nm) was defined from the RDF distributions for water, phosphate, and choline groups obtained in our previous work. 8

pKa Profile Calculations and Electrostatic Contributions. The pKa profiles are an important tool to assess and interpret the local electrostatics and how they affect the proton binding affinity of a pH-sensing residue. To that effect, the pKa calculations must fulfill the following criteria: (1) each insertion bin must possess a minimum of 10 data points of each protonation state at each pH value and for each replicate; (2) in the pKa fitting procedure, all conformational sampling data must originate from at least three replicates, and each replicate requires data from at least two replicas to avoid sampling bias; (3) in a titration curve, the average protonation at a given pH value should not be higher (by 0.05) than the average at the previous, lower pH, thus ensuring monotonicity. When these conditions are fulfilled, the average protonations of each pH replica are calculated and then fitted to the Hill equation to derive the pKa values.
A semiquantitative analysis was also performed to ascertain how each electrostatic partner contributes to the pKa of the key aspartate. This analysis required the following steps: (1) the data for each neighboring group were sliced into the insertion profiles at all pH values; (2) all pH-dependent data were then linearly interpolated with the interpolate tool of the scipy module, 66 and each property value was estimated at the corresponding pKa for all insertion levels; (3) the Random Forest Regressor algorithm of the scikit-learn module 67 was applied. A data array (90 × 5) was constructed from the property data of all peptide variants, including the wt data from a previous work. 8 The data consisted of all values present in each electrostatic-partner profile (4 independent features) and their corresponding pKa values (dependent variable). The estimator generated several decision-tree predictions from subsamples of the data set obtained by bootstrap resampling with replacement. It then averaged all generated outputs to improve the prediction and determined the relative importance ranking of each feature in the model. The hyperparameters used were 2500 trees (n_estimators) with a max_depth of 20.
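The regression step might look as follows in Python (a sketch with synthetic data; the feature names mirror the four electrostatic partners described above and are purely illustrative):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical feature matrix: one row per insertion level of each variant,
# columns = counts of electrostatic partners in the Asp first shell
# (phosphates, cholines, Arg contacts, water H-bonds); target = local pKa.
rng = np.random.default_rng(0)
X = rng.random((90, 4))
y = 4.0 + 2.0 * X[:, 2] - 1.0 * X[:, 3] + 0.1 * rng.standard_normal(90)

model = RandomForestRegressor(n_estimators=2500, max_depth=20, random_state=0)
model.fit(X, y)

for name, importance in zip(
    ["phosphate", "choline", "ArgX-Asp13", "water"], model.feature_importances_
):
    print(f"{name:12s} {importance:.3f}")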
Analyses and Error Calculations. All analyses pertaining to the secondary structure, distance measurements, the number of interactions between groups of interest, and property time series were performed using the GROMACS tool package. Further analysis was performed using in-house software (http://mms.rd.ciencias.ulisboa.pt/#software) and the specified Python modules.
All pK a error values were calculated using a Bayesian bootstrap approach. These estimations prevent fitting issues by executing 1000 bootstraps from our average protonation samples. In each bootstrap, random weights were assigned to each sample. This procedure also requires the same selection criteria (mentioned above) to obtain final pK a and error values. For all other properties, the error bars reflect the property standard error of the mean.
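The error estimation can be sketched as follows (a hypothetical helper; pka_fit stands in for the Hill-fitting routine described above, applied with per-sample weights):

import numpy as np

def bayesian_bootstrap_pka(pka_fit, protonation_samples, n_boot=1000, seed=0):
    """Bayesian-bootstrap spread of a pKa estimate.

    pka_fit             : callable mapping (samples, weights) to a pKa value
    protonation_samples : array of per-frame protonation observations
    """
    rng = np.random.default_rng(seed)
    n = len(protonation_samples)
    pkas = []
    for _ in range(n_boot):
        # Dirichlet(1,...,1) weights: the Bayesian analogue of resampling.
        w = rng.dirichlet(np.ones(n))
        pkas.append(pka_fit(protonation_samples, w))
    pkas = np.asarray(pkas)
    return pkas.mean(), pkas.std()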
Biophysical Characterization of the pHLIP Variants.
The targeting of pHLIP peptides can be modulated using small mutations to the sequence, especially to the residues in the transmembrane (TM) region (10th to 30th residue). 24,68−71 These residues dictate the thermodynamic stability of the inserted state; hence, any mutation can disturb the electrostatic balance of the key titrating residues, the lipophilicity of this region, and, ultimately, the thermodynamic equilibrium of the state III peptide−membrane configurations. Cationic residues can modify the electrostatic nature of the TM region and the insertion/exit pathways; 15 therefore, we designed four wt peptide variants, each with a distinct arginine position relative to the key Asp13 (Table 1).
Arg residues were placed in positions 10, 14, 15, and 17 of the pHLIP sequences, and they were expected to interact differently with the Asp13 residue. All pHLIP variants contained a Trp residue at the inserting end of the peptides. Changes in the fluorescence of the Trp residues and in the peptides' CD spectral signal were monitored during the peptides' interactions with the lipid bilayer of POPC liposomes. All peptides exhibited pH-dependent pHLIP-like behavior when investigated at high and low pH values in the absence and presence of POPC liposomes (Table 2). The OCD spectra were recorded to confirm that all peptides indeed adopt the transmembrane orientation at low pH (Figure S1 of the Supporting Information).

(Table 2 caption: Position of the maximum of the fluorescence spectra, λmax; helicity at 222 nm; midpoint (pK) and cooperativity (n) of the transition for the peptides' partitioning into the membrane as measured by fluorescence changes and for the coil−helix transitions as measured by CD changes; and characteristic times of insertion into, τinsertion, and exit from, τexit, the membrane. In solution, pHLIP forms an unstructured polymer at high pH (∼8), the so-called state I. The interaction of pHLIP with the lipid bilayer of the POPC membrane at high pH (∼8) corresponds to state II. The transmembrane helical orientation of pHLIP triggered by low pH (3−5) is often called state III.)

The R10 variant was investigated previously (it was called W30 in the published study). 25 The disappearance of the transition at 208−210 nm in the OCD spectra confirms the transmembrane orientation of the peptides in the lipid bilayer. We also noted that the highest helicity in state III was observed for the R14 variant and the lowest helical content for the R15 variant (Table 2 and Figure S1 of the Supporting Information).
Our experimental analysis of the pK of the transition from state II (the peptide in solution at high pH in the presence of POPC liposomes) to state III (the peptide inserted into the membrane at low pH) of the Arg variants revealed a trend of decreasing pK moving from R10 to R17. The changes in the fluorescence signal during the transition reflect the insertion of the tryptophan residue, or partitioning of the peptide into the bilayer of the membrane (Figure 1A), while the changes in the CD signal reflect the coil−helix transition (Figure 1B). A significantly lower pK (5.5−5.6) was established for the R17 variant compared to the other peptides.
We also investigated the kinetics of peptide insertion and exit from the membrane (Figure 2). The insertion times of the different variants varied in the range of 4−60 s. The obtained data reflect the insertion and equilibration processes in the membrane. The R10 and R17 variants exhibit the fastest insertion/equilibration. The R14 variant also has a fast initial phase with a slowed final process, which was completed with a characteristic time of 27 s. The R15 variant exhibits by far the slowest insertion/equilibration kinetics compared to all variants. The exit of both the R10 and R15 variants is completed within ∼200 ms. The exit of the R14 and R17 variants is slower and was completed within 9−14 s. Both peptides showed some interesting behavior within the first 100−200 ms (insets in Figure 2B). The fluorescence intensity increases before it starts to decay in the case of the R17 variant. The R14 variant shows even two oscillations in the signal prior to the main decay. These short-time scale phenomena are not taken into account by the main decay fits for the R14 and R17 variants.
Structural Characterization of the pHLIP Variants. The pHRE MD simulations show that all peptides slowly converged to a similar structure, not very different from the typical wt α-helical conformations displaying the characteristic kink near the water−membrane interface (Figures 3 and S2 of the Supporting Information). The most important peptide and membrane properties equilibrated relatively fast, with convergence obtained after the initial 30 ns, which were discarded (Figures S3−S14 of the Supporting Information).
The peptides' structural characterization highlights the unique effects of each arginine permutation on their structural stability (Figure 4A) and on the local Asp13 vicinity, in particular their specific interactions with the key Arg residues (Figure 4B). The peptide variants' distinct folding patterns suggest that arginine mutations placed lower in the sequence (R14, R15, and R17) progressively induce larger hydrophobic mismatches (Figure 4C,D) than the wt sequence (Asp14 membrane insertion is −2.0 ± 0.6 Å at pH 6.0), as they increasingly expose the C-terminus hydrophobic flanking regions to the water−membrane interface, leading to more thermodynamically unstable states. More pronounced peptide tilting (Figure 4E), compared to the wt at pH 6.0 (16.0 ± 3.7°), and helical unfolding (Figures 4A and S2 of the Supporting Information), also relative to the wt TM region helicity at pH 6.0 (91.5 ± 1.0%), promote internalization of the hydrophobic stretch (Pro18 to Leu24) to mitigate these mismatch effects. This is further evidenced by the progressively deeper positions (negative values) of the central Leu21 (Figure 4C), with the exception of the R17 variant, where the significant peptide tilting (≈30°) counteracts the TM region's vertical movement (positive values). Overall, the energy penalty associated with the internalization of the N- and C-termini charged polar residues outweighs the partial helical unfolding and structural tilting, favoring these conformational rearrangements.
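For readers wanting to reproduce a quantity like the tilt angle in Figure 4E, a common recipe is to take the dominant principal axis of the TM-helix backbone coordinates (e.g., via SVD) and measure its angle with the membrane normal (the z-axis). The following sketch illustrates this on idealized coordinates; it is a generic implementation, not the authors' exact procedure.

```python
import numpy as np

def helix_tilt(ca_coords):
    """Angle (degrees) between the principal axis of a set of C-alpha
    coordinates (N x 3 array) and the membrane normal (z-axis)."""
    centered = ca_coords - ca_coords.mean(axis=0)
    # The first right-singular vector is the direction of largest positional
    # variance, i.e., the axis of an extended helical segment.
    _, _, vt = np.linalg.svd(centered)
    axis = vt[0]
    cos_angle = abs(axis[2]) / np.linalg.norm(axis)
    return np.degrees(np.arccos(cos_angle))

# Illustrative input: points along an axis tilted ~20 degrees from z
z = np.linspace(0.0, 30.0, 20)
coords = np.column_stack([z * np.sin(np.radians(20.0)),
                          np.zeros_like(z),
                          z * np.cos(np.radians(20.0))])
print(f"tilt = {helix_tilt(coords):.1f} deg")
```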
Interestingly, the R10 folding pattern contrasts with the other peptides, as placing the positive guanidinium group higher in the sequence creates a TM hydrophobic mismatch in the opposite direction. The Arg10 position inverts the observed helical loss of the hydrophobic TM stretches (18th−21st and 22nd−24th) (Figure S2 of the Supporting Information) due to the closer proximity of the TM stretch to the polar environment of the outer water−membrane interface (Figure 4D). In the computational model, this proximity triggers helical loss of the hydrophobic stretch (18th−21st) to stabilize near the acyl chains (≈−1.4 at pH 5.75 in Figure 4C). However, this behavior is the opposite of what has been observed by the CD measurements (Table 2 and ref 25), indicating that the conformational ensemble of the R10 system may not be completely representative. Nevertheless, it is clear that the arginine residue functionally works as a positive anchor that, depending on its sequence position, either propels (pulls) the peptide to (from) the hydrophobic membrane core and inner water−membrane interface region. The TM regions' hydrophobic mismatches depend on the position permutation and strongly affect the stability of the peptide−membrane configuration, the peptide tilting, and the degree of α-helix folding in the flanking regions (Figures 4A,E and S2 of the Supporting Information).
Major and minor (local) peptide movements are intertwined to define transmembrane pHLIP configurations and the local electrostatic vicinity of the key Asp13. The structural disposition of the peptides imparts distinct Asp13 membrane behaviors, populating either shallow membrane regions (R10), similarly to the wt peptide (−2.0 ± 0.6 Å), or positions below the ester region (R14, R15, and R17) (Figures 4D and S16 of the Supporting Information). The internalization of a polar charged residue deeply perturbs the membrane bilayer, as water molecules and lipid headgroups typically form a stabilizing polar shell. Deeper Asp13 residues should induce larger deformations, yet our results show that the deeper R14 and R17 variants induce smaller membrane perturbations, while R15 and the more shallow R10 cause pronounced perturbations (Figures 4D and 5), correlated with a fast exit from the membrane (Figure 2 and Table 2). The decoupling between the major peptide structure and lipid bilayer deformations warrants a look at the local Asp13 environment.

Figure 3 (caption fragment): each peptide variant (Table 1) is shown in cartoon representation with its respective color (R10, orange; R14, olive green; R15, pink; R17, blue). The unbiased fully α-helical initial conformation is depicted on the left in light green. The key Asp and Arg residues are shown as sticks.
Although all peptides exhibit unfolding events, the Asp13 region remains remarkably conserved throughout the simulations (Figure S2 of the Supporting Information), as also observed in the wt peptide. 8,14 Therefore, this behavior either preserves the lack (R10, R15), as in wt-pHLIP (7.5 ± 0.2 Å at pH 6.0), or the presence (R14, R17) of tight aspartate−arginine interactions (Figure 4E), to minimize solvent exposure of the hydrophobic flanks, as previously noted, and to induce small membrane invaginations in the inner membrane monolayer (Figure 5B). The pronounced outer membrane monolayer perturbations induced by R10 and R15 (Figure 5A) result from the membrane internalization of a well-solvated aspartate. The residue sequence positions prevent a spatial arrangement of the α-helix that favors tight ArgX interactions, hindering the aspartate stabilization through a salt bridge (Figure 4B). Consequently, choline headgroups and water molecules become the stronger interaction partners, promoting the deformation of the local lipid monolayer.
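A standard way to quantify the presence or absence of tight aspartate−arginine interactions over a trajectory is to track the minimum carboxylate−guanidinium distance and report the fraction of frames below a cutoff. Below is a hedged sketch using MDAnalysis; the file names, atom selections, and the 3.5 Å cutoff are assumptions for illustration, not the exact analysis used here.

```python
import numpy as np
import MDAnalysis as mda
from MDAnalysis.analysis import distances

# Hypothetical file names; any topology/trajectory pair MDAnalysis can read works.
u = mda.Universe("peptide_popc.tpr", "traj.xtc")
asp_o = u.select_atoms("resid 13 and name OD1 OD2")         # Asp13 carboxylate oxygens
arg_n = u.select_atoms("resname ARG and name NH1 NH2 NE")   # Arg guanidinium nitrogens

min_dist = []
for ts in u.trajectory:
    d = distances.distance_array(asp_o.positions, arg_n.positions,
                                 box=ts.dimensions)
    min_dist.append(d.min())

min_dist = np.asarray(min_dist)
# A common operational definition: salt bridge "formed" when min distance < 3.5 A
print("salt-bridge occupancy: %.1f%%" % (100.0 * np.mean(min_dist < 3.5)))
```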
Overall, the destabilization of the water−membrane interfaces seems to mostly depend on the ability of the peptide to stabilize its Asp13 negative charge. When the Arg side chain is available to interact with Asp13 (R14 and R17), the peptides insert deeper into the membrane and minimize the water-induced deformations. Otherwise, structural constraints hamper the salt-bridge neutralization, inducing more pronounced deformations and less stable peptide−membrane configurations. Nevertheless, the arginine position is pivotal in stabilizing the peptide/membrane configuration, as deeper positions experience more snorkeling events that pull the hydrophobic TM segment upward toward the apolar membrane core. Altogether, the structural characterization of these peptides pinpoints an important role of the Arg position in modulating the Asp13 electrostatic environment.
Proton Binding Affinity and Electrostatic Shell of Asp13. The investigated peptides' thermodynamic stability strongly depends on the (de)protonation of Asp13 to promote/hinder the insertion and exit processes. The proton binding affinity of an amino acid residue in a membrane bilayer environment is defined by the strength of the surrounding electrostatic interactions and the level of access to the solvent. 8,14,43 Despite the complexity of depicting the different possible states of the diverse peptide−membrane configurational ecosystem, the insertion property of a residue is an indirect measurement of the peptide−membrane equilibrium, as each distinct insertion level represents a given medium (solvent, water−membrane interface, membrane core). 64 Therefore, it is possible to accurately predict the pK a behavior (as detailed in the pK a profile calculations section of the Methods) for a given residue along the membrane normal (Figure 6).
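In outline, such a pK a profile can be built by binning the constant-pH (pHRE) samples by the residue's insertion depth and fitting the average protonation in each bin across pH values to a Henderson−Hasselbalch (Hill) curve. The sketch below demonstrates the bookkeeping on synthetic samples; the bin width, Hill model, and data-generating assumptions are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(pH, pKa, n):
    # Average protonation of an acid as a function of pH
    return 1.0 / (1.0 + 10.0 ** (n * (pH - pKa)))

rng = np.random.default_rng(2)
pH_values = np.array([5.0, 5.5, 6.0, 6.5, 7.0])

# Synthetic per-frame samples (depth in Angstrom, pH, protonated 0/1) standing
# in for pHRE output; deeper insertion shifts the true pKa upward here.
depth = rng.uniform(-6.0, 4.0, 20000)
pH = rng.choice(pH_values, 20000)
true_pKa = 6.0 + 0.15 * np.clip(-depth, 0.0, None)   # desolvation-like shift
prot = (rng.random(20000) < hill(pH, true_pKa, 1.0)).astype(float)

edges = np.arange(-6.0, 5.0, 2.0)                    # 2 A depth bins
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (depth >= lo) & (depth < hi)
    avg = [prot[mask & (pH == p)].mean() for p in pH_values]
    (pKa, n), _ = curve_fit(hill, pH_values, avg, p0=[6.0, 1.0])
    print(f"depth [{lo:+.0f}, {hi:+.0f}) A: pKa = {pKa:.2f} (n = {n:.2f})")
```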
Overall, we observe that all peptides exhibit distinct pK a behaviors according to their own unique microenvironment. As expected, some of them exhibit a trend similar to the wt peptide, 8,14 where the pK a shifts toward higher values (Figure 6) induced by desolvation effects. 43 However, R10 and, notably, R17 show unusual pK a profiles, hinting at other structural and electrostatic effects being present. The R10 profile invariance along the membrane normal and the lack of sampling in the deep membrane region (−5 to −6 Å) confirm our initial assessment that our pHRE simulations are not capturing the correct structure and protonation ensembles for this sequence. This was also hinted at by the observed disagreement in helical content between simulations and experimental data (Figure 4A). This small loss of helicity in the TM region, coupled to a vertical peptide movement along the membrane normal, pulls Asp13 away from deep membrane regions, resulting in the observed prediction limitations. This could be confirmed and possibly circumvented with the use of multiple replicates in the peptide/lipid assembly/equilibration protocol, which could help identify outliers, although at a significant increase in computational cost. All other variant peptides show good sampling, and their distinct pK a values in the deep membrane region could be calculated (Table 3; note: the incomplete R10 pK a profile precluded a reliable pK a ins estimation).

Figure 6 (caption): pK a profiles of wt-pHLIP and its Arg variants. Each pK a trend shows the shift along the membrane normal. The white and gray-shaded regions correspond to the water phase and membrane interior, respectively. The light blue vertical stripe identifies the pH region ideal for TME selection. The wt-pHLIP data were adapted from ref 8. The R10 and R17 profiles showed significant sampling limitations at deeper membrane insertion regions (<−5 Å), which resulted in the absence of data points (R10) or error values of more than one pK unit, which were omitted for clarity (R17).
Although the pK a profiles diverge in the deep membrane regions, both the structural and electrostatic analyses (Figures 4B and 7A) hint at two behavioral modalities regarding either the presence or the lack of tight arginine interactions. The lack of tight arginine interactions would indicate a certain structural similarity of the R15 profile (and R10, in principle) with the wt. Indeed, our predicted R15 profile (pK a ins = 6.3 ± 0.1) exhibits remarkably similar behavior to the model wt profile (pK a ins = 6.4 ± 0.1) at deep membrane regions (−5 to −6 Å), despite its small deviation from the experimental insertion pK (pK ins = 5.9). The absence of tight arginine interactions (Figures 4B and 7A) and a constant balance of choline and phosphate groups within the interaction shell (Figure 7B,C) further hint at an electrostatic environment analogous to that of the wt peptide. 8,28 The previously discussed structural characteristics (Figure 4A) attenuate the impact of the distinct sequence positions, thus sampling equivalent peptide−membrane configurations in equilibrium.
Regarding the R14 and R17 peptides, the structural analysis highlighted tighter Asp13−ArgX interactions; hence, we expected some divergence in the pK a profiles relative to the R15 sequence. R14 shared some structural similarities with the R15 peptide, noted by only a small difference in helical content (<5%; Figure 4A) and analogous membrane monolayer perturbations (Figure 5). These resulted in similar pK a profiles, which deviate only in the deeper membrane regions (−5 to −6 Å). The resulting small difference in their proton binding affinities stems from a rearrangement of the interaction shell, triggered by the presence of a salt-bridge interaction along the residue internalization (Figure 7A). Although the small decrease of the R14 pK a ins (5.9 ± 0.1), when compared to R15, can be related to its electrostatic environment, the robust experimental pK ins (<6.0), almost unchanged relative to R15, indicates that the arginine electrostatic contribution is probably also not decisive in R14.
The R17 peptide is an evident outlier concerning peptide behavior (Figure 6), with a very low pK a ins (<5.0). The deeper regions of the pK a profile are probably influenced by a partial lack of sampling, hinted at by the large error bars (1−2 pK units). Nevertheless, the prominent shift to lower pK a values upon membrane insertion is very clear and indicates an overwhelmingly positive environment that overcomes the expected desolvation effect in the apolar membrane regions. Indeed, the interaction shell is characterized by progressively more frequent (Figure 7A) and tight (Figure 4B) arginine interactions, which energetically favor the stabilization of the anionic state of Asp13. This phenomenon results from the thermodynamically stable peptide conformation (Figure 4A) that promotes side-chain interactions, precluding large solvation shells and causing smaller membrane perturbations (Figure 5). Consequently, these arginine interactions far outweigh other electrostatic contributions, as noted by the pronounced decay of phosphate interactions and a lack of competing choline and water interactions (Figure 7A,C), especially at deep membrane regions. These structural features are probably slightly overestimated in our model, since the quite low proton binding affinity of this peptide has only semiquantitative agreement with the experimental pK ins (<5.6). Notwithstanding, it generated a structural model that helped to provide a convincing interpretation of the biophysical data.

This detailed topological discussion can also provide some insight into the membrane insertion kinetics of the different peptides (Figure 2 and Table 2). As previously established, the Arg and Asp residues can form tight intramolecular interactions in R14 and R17 (Figure 4B) since they are topologically close (Figure 3). These charge-stabilizing intramolecular interactions allow the peptides to be more amenable to membrane insertion, thus shedding the solvation shell and decreasing membrane disruption (Figure 5). In contrast, the distant Arg and Asp positions in the R10 and R15 peptides hinder these charge-stabilizing intramolecular interactions, which are replaced by anchoring intermolecular interactions with phosphate/choline groups at the membrane interface (Figure 4). The higher charge density surrounding these groups requires more water molecules (Figure 7), inflicting deeper membrane deformations (Figure 5). In sum, R14 and, especially, the R17 peptide exhibit faster membrane insertion kinetics than the R10 and R15 sequences and much slower exit kinetics compared with R10 and R15 (Table 2) as a result of their residues' topological positions.
Which Electrostatic Interactions Drive the Asp13 pK a Shift? A residue's pK a derives from the delicate trade-off between the electrostatic contributions of several interacting partners within the solvation shell. Accordingly, different permutations of these effects, due to changes in the peptide microenvironment, result in distinct pK a shifts. Nonetheless, the impact of these partner permutations is difficult to estimate, as certain neighboring interactions may have more prominent effects on the residue's proton binding affinity than others. Therefore, we used a Random Forest Regressor method (see more details in the Methods section) to quantify the contribution of each electrostatic feature to the overall Asp13 pK a values of these pHLIP variants (Table 4).
In our previous works, 8,14,28 we determined that the phosphate groups, along with the desolvation effect, were the major factors for the anionic residues' pK a shifts. These observations are in agreement with the semiquantitative feature estimation, as the model gives a larger weight to these features (0.57 and 0.19 for phosphate groups and water hydrogen bonds, respectively). Surprisingly, the arginine contribution (0.07) seems strikingly low for the model, even though our structural data highlight how the arginine sequence position heavily shapes the electrostatic microenvironment and overall peptide stability. Although unexpected, this model only assumes direct charge contributions; hence, their indirect impact in modulating the restrictive Asp13 interaction shell is not taken into account. As a result, some features may be exacerbated, such as the phosphate groups. Note that our model tries to estimate the contribution of each property within the residue interaction shell (0.52 nm), whose volume can only be occupied by a limited number of particles. When a phosphate group is tightly interacting with the aspartate, it is simultaneously shielding the aspartate from the nearby cholines, as exemplified in the R14 contribution profiles (Figure 7). The spatial composition of these groups is intricately correlated, so that the information of a property change is already encoded in the others, exacerbating the estimation. Nevertheless, this analysis was still very important in weighing the importance of the group moieties that modulate the Asp13 pK a , being in qualitative agreement with our previous discussions on the key role of the phosphate groups. Overall, these results show that a thorough structural and electrostatic analysis is pivotal to obtaining a detailed picture of the molecular intricacies at play.
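For orientation, this kind of feature weighting can be reproduced in outline with scikit-learn's RandomForestRegressor and its impurity-based feature_importances_. The sketch below runs on synthetic data with invented feature effects; as discussed above, strongly correlated features will share or shift importance, so such weights should be read qualitatively.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 5000

# Synthetic per-frame electrostatic features within the Asp13 interaction shell
phosphate = rng.poisson(2.0, n).astype(float)      # phosphate contacts
water_hb = rng.poisson(3.0, n).astype(float)       # water hydrogen bonds
arginine = rng.binomial(1, 0.4, n).astype(float)   # salt bridge present?
choline = rng.poisson(1.0, n).astype(float)        # choline contacts

# Illustrative target: a pKa shift dominated by phosphates and desolvation
pka_shift = (0.8 * phosphate - 0.4 * water_hb - 0.3 * arginine
             + 0.1 * choline + rng.normal(0.0, 0.2, n))

X = np.column_stack([phosphate, water_hb, arginine, choline])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, pka_shift)

for name, w in zip(["phosphate", "water_hb", "arginine", "choline"],
                   model.feature_importances_):
    print(f"{name:10s} {w:.2f}")
```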
CONCLUSIONS
The peptide−membrane configuration and the interactions between crucial residues modulate the delicate balance between structural and electrostatic effects. The ionic interactions between the key Asp and Arg residues define the favorable thermodynamic states, while the same configurations reorganize the local electrostatic environment sensed by the residue pair. This balance is fundamental to the acidity-dependent ability of the pHLIP peptides to interact with the membrane and their therapeutic applicability. In this work, we performed a multipronged structural characterization of pHLIP peptides with distinct arginine residue positions (R10, R14, R15, and R17), studied their impact on the proton binding affinity of key Asp13, and compared the calculations with experimental results. The pHRE simulations revealed both unique structural and electrostatic features in each arginine permutation. Overall, we observed that deeper arginine positions typically pull the aspartate away from the water−membrane interface undergoing a salt-bridge charge neutralization, although this depends on helical folding and the residues' side chains' topological proximity. Nevertheless, we showed that a more complex and intricate electrostatic interaction network seems to modulate the proton binding affinity across different membrane insertion environments.
In terms of the therapeutic potential of the peptide variants studied, only the R17 peptide exhibits a pH-dependence profile, confirmed by experiments (pK ins = 5.6) and computations (pK a ins < 5.0), that is markedly outside the therapeutic range (pH 6.0− 6.5 at the surface of metabolically active acidic cells) and quite different from the wt peptide behavior. In the remaining peptide sequences, the ArgX/Asp13 direct interactions are either hindered by the peptide helical topology or outweighed by solvation. Therefore, we found that the position of the arginine group is fundamental in defining the first interaction shell of titrating Asp13. Most of the proton binding affinity contributions result from the phosphate groups' configurational reorganization within the shell region, which is also complemented by the interactions with other electrostatic players. The arginine, when available for salt-bridge formation with key Asp, seems to act as a pH sensor inhibitor, significantly modulating the pH response of the peptide. Overall, cationic residues can be an important feature for peptide−membrane equilibria in transmembrane peptides, and, while the aspartate is the key residue that determines the therapeutic performance of each pHLIP variant, the arginine position can play a decisive supporting role in fine-tuning these clinically relevant peptides.
ASSOCIATED CONTENT
Data Availability Statement
The GROMACS package is freely available software used to perform MD simulations and can be downloaded at https://manual.gromacs.org/documentation/2020.1/download.html. PyMOL v2.5 is also free software for molecular visualization and generating high-quality images. It can be downloaded from https://pymol.org/2.
Comparison of membrane insertion of Trp-15 in wt-pHLIP in different published studies; experimental CD and OCD spectra for R14, R15, and R17 at low pH values; average helicity content per residue of peptide variants; time series data of all peptide variants at all pH values of arginine interactions with Asp13, the Leu21 distance to the membrane center, and the peptide tilt angle relative to the membrane normal; graphical representation of all peptide variants in a POPC membrane bilayer, highlighting the peptide tilt angle feature; and probability density function of Asp13 insertion for all variants at all pH values (PDF) Starting configurations, topologies, index, and parameter files for all simulations and final conformations of all replicates at pH 6.5 (6.0 in the wt) (
|
v3-fos-license
|
2018-12-11T03:27:16.733Z
|
2018-04-23T00:00:00.000
|
55975131
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.ccsenet.org/journal/index.php/gjhs/article/download/74132/41325",
"pdf_hash": "276c8f5e40125eefe629d8dabe2ad7ccf0292aa3",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43490",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "276c8f5e40125eefe629d8dabe2ad7ccf0292aa3",
"year": 2018
}
|
pes2o/s2orc
|
Final Year Dental Students’ Perception of Knowledge, Training and Competence in Medical Emergency Management
Objective: The potential for a medical emergency to occur during dental treatment must be met by dental practitioners who are competent to manage such situations. However the literature shows that not all dentists have received training in this area, and of those who have, many are deficient in knowledge, skills and confidence. The objective of this study was to examine the perceptions of final year Jordanian dental students regarding their education and preparedness to manage medical emergencies. Methods: This study was a cross-sectional, descriptive study which gathered questionnaire data from an undergraduate student cohort at two Jordanian universities. Descriptive analysis of the data was undertaken, and a Chi-squared test was used to explore the relationships between participants’ responses and the variables of gender and previous attendance at any ME workshop. Statistical significance was deemed at p<.05. Results: Three hundred and seventy dental students responded to the questionnaire with response rates of 76.2% and 81.8% from the two sites. The results indicate that not all of the students had received training in medical emergency management, and their self-reported proficiency and experience was sub-optimal. However, participating in a workshop on managing medical emergencies was associated with changes in some skills and experiences. Conclusion: The low levels of medical emergency management knowledge and skills in the final year dental students reflects the situation reported in existing literature. This study indicates the importance of effective medical emergency management training within the dental undergraduate program, and may be used to inform future curricula planning.
Introduction
As the global population continues to age, both the proportion and absolute number of older people are increasing (World Health Organization, 2015). To illustrate this, it has been estimated that the population over 65 years of age in the United States of America (US) will more than double in the years between 2005 and 2050 (Passel & Cohn, 2008). Likewise, the population of all countries of the Middle East is projected to increase dramatically over future decades, including Jordan, which has a projected increase of 70% between 2007 and 2050 (Roudi-Fahimi & Kent, 2007). With increasing age, there is an increasing incidence of chronic diseases such as cardiovascular, respiratory, musculoskeletal, mental and neurological disorders, as well as cancer, diabetes, and dementia (Prince et al., 2015). It has been estimated that 80% of elderly people have one chronic disease and 50% have two or more (National Centre for Chronic Disease Prevention and Health Promotion, 2011). This worldwide change in the demographic profile of the population increases the likelihood that dental patients will present with more complex medical histories, thereby increasing the potential for medical problems to occur during dental care (Little, Miller, & Rhodus, 2018). A study from the US reported that patients in their dental clinic had an average age of 52 years, and over half were taking medication or had at least one systemic illness (Radfar & Suresh, 2007). The combination of an aging patient profile, multi-morbidity and medications taken results in an increasingly medically complex patient cohort, thereby necessitating that dentists are prepared to identify and manage medical emergencies (MEs). Furthermore, dental procedures such as anaesthesia or surgery may increase the possibility of a ME occurring in predisposed patients (Laurent et al., 2009).

MEs may occur at any time, in any environment, including within a dental practice, and have been reported to be over 5 times more likely to occur in a dental office than a medical office (Feck, 2012). The incidence of MEs has been reported at 164 events per million dental visits (Anders, Comeau, Hatton, & Neiders, 2010). An earlier report concluded that a dentist who practices for 40 years will be exposed to between nine and eleven emergency events throughout their career (Atherton et al., 1999), and studies from various countries have reported that 32 to 69% of dentists have encountered a ME in their practice (Čuković-Bagić et al., 2017; Müller, Hänsel, Stehr, Weber, & Koch, 2008; Mwita, Machibya, & Nyerembe, 2017). Although it is difficult to compare frequency levels due to various timeframes being used for data collection, this does indicate that MEs do occur and are an important consideration in dental practice.

Vasovagal syncope is often cited as the most commonly occurring ME in dental practice (Alhamad et al., 2015; Müller et al., 2008). However, other MEs that may occur include: cardiac arrest, anaphylaxis, airway obstruction, stroke, as well as hypoglycaemic, asthmatic, and epileptic episodes (Leelavathi, Reddy, Elizabeth, & Priyadarshni, 2016; Müller et al., 2008). While the occurrence of a ME is relatively infrequent in dental practice, all dentists have a duty of care to provide effective and safe care to their patients (Jevon, 2012). This imperative includes the ability to identify and manage a ME should one arise. A slow or non-existent response to a ME such as a cardiac arrest may worsen a patient's prognosis considerably (Resuscitation Council (UK), 2015).

However, previous research indicates that dentists may lack the knowledge, confidence and preparedness to be able to manage MEs that may occur within their practice (Alhamad et al., 2015; Arsati et al., 2010; Broadbent & Thomson, 2001). The literature suggests that not all dentists are exposed to training about MEs, or basic life support (BLS) including cardiopulmonary resuscitation (CPR) (Leelavathi et al., 2016; Müller et al., 2008; Stafuzza, Carrara, Oliveira, Santos, & Oliveira, 2014). For example, a study carried out in New Zealand reported that 18% of dentists had no undergraduate ME training (Broadbent & Thomson, 2001), and approximately 25% of two different cohorts in India reported no ME training (Abraham & Afradh, 2016; Varma, Pratap, Padma, Kalyan, & Vineela, 2015). However, even for those dental practitioners who have received training about MEs or BLS, many do not feel confident or competent to deliver these skills effectively (Stafuzza et al., 2014; Varma et al., 2015). Moreover, basic and advanced life support skills have been shown to deteriorate after only 6 months post skill acquisition (Cooper, Johnston, & Priscott, 2007; Yang et al., 2012), highlighting the importance of ongoing, recurrent training or refresher courses, both at undergraduate and postgraduate levels.
The dental program at the Jordan University of Science and Technology (JUST) is a five-year undergraduate program, in which second-year students have a one-hour theoretical session covering basic emergency care taught by an emergency physician. This course was newly introduced two years earlier, at the commencement of this study. The dental program at the University of Jordan (JU) is also a five-year program, and includes a three-hour theoretical session covering the principles of first aid, also taught to second-year students. The aim of the study was to examine the perceptions of final year dental students at JUST and JU regarding their education and preparedness to manage MEs within a dental office.
Methods
This study was approved by the JUST Institutional Research Board (IRB protocol number 182/2015). Student responses were examined using a cross-sectional, descriptive methodology. A paper-based questionnaire developed by the authors at JUST was used to gather demographic information as well as information regarding participants' self-perceived skills and competencies in managing MEs, the types of MEs and emergency procedures they had encountered, and whether they had attended any ME training.

The questionnaire had not been previously utilised, and as such its reliability and validity were yet to be determined. In this study, it was used in an exploratory capacity to gather data on a topic in a previously unexamined cohort. Further studies will be required to determine the aforementioned questionnaire characteristics. The questions were selected via author consultation and consensus. Initially, the questionnaire was distributed to a group of ten JUST dental students as a pilot study to evaluate their understanding of the questions. After student feedback, amendments were made to the questionnaire. The ten students who participated in the pilot study were excluded from the subsequent study group. The finalised survey was distributed to students from both participating universities by research assistants and the principal investigator at the conclusion of lectures and was collected after survey completion. Participation was voluntary, and consent was implied by return of a completed survey.

Data analysis was completed with the Statistics Package for the Social Sciences (IBM SPSS v17). Descriptive analysis of the data was undertaken, and a Chi-squared test for independence (with Yates' Continuity Correction) was used to explore the relationships between participants' responses and the variables of gender and previous attendance at any ME workshop. Statistical significance was deemed at p<.05.
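For illustration, the same Chi-squared test for independence with Yates' continuity correction can be run in SciPy (the study itself used SPSS); the 2×2 counts below are invented for demonstration only.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: workshop attendance vs. self-reported CPR ability
#                 can perform CPR   cannot
table = np.array([[110,  48],      # attended ME workshop
                  [ 80, 132]])     # did not attend

# correction=True applies Yates' continuity correction for 2x2 tables
chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```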
Results
A total of 370 dental students responded to the questionnaire, with a response rate of 76.2% (253/332) from JUST and 81.8% (117/143) from JU. The majority (66.2%) were female, and the mean age was 22.9 (SD 1.3) years. Most (68.4%) were enrolled at the Jordan University of Science and Technology (JUST) and 31.6% at the University of Jordan (JU).

Approximately half of the respondents (51.6%) reported receiving CPR training, and 42.7% had attended a workshop on managing MEs. Although 91.9% of respondents reported that they had knowledge about CPR, only 51.4% reported that they could perform CPR. Self-perceived competency with regards to emergency treatment and drugs revealed that 75.7% had knowledge about oxygen and its route of administration, and 69.5% reported the same knowledge of adrenaline. Almost eight in ten students (79.5%) were aware of common drugs used in dentistry that may precipitate an allergic reaction.

The self-reported proficiency in various ME management skills was generally low. For example, less than 50% of respondents felt they possessed skills in administering artificial respiration or the Heimlich manoeuvre. Proficiency in giving injections by various routes was also low (23.2%−45.8%); however, more than eight in ten individuals felt they were able to check the carotid pulse (Table 1). The proportion of the students who had previously encountered various MEs ranged from 24.9%−37.3%, and the proportion that had encountered the specified ME procedures ranged from 10%−15.7% (Table 2). Approximately four in ten students (42.7%) had attended a workshop on managing MEs; however, overall 83.5% said that they were willing to attend any ME training in the future. For those that had not attended a ME workshop, their reasons for non-attendance were as follows: lack of time 24.3%, not interested 26.5%, didn't know where to go 36.2%, and felt it was unnecessary for a dentist 12.7%. For those that had attended ME training, all four included training modalities (simulation, seminars, videotapes, and slides/PowerPoint) were reported to have been used (Table 2).

Table 3 shows the Chi-squared associations between having attended a ME workshop and various ME skills and knowledge, as well as experience of MEs and their management. Attending a ME workshop had a statistically significant association with the self-reported ability to provide artificial respiration, CPR, and intramuscular, subcutaneous and intravenous injections, having received CPR training, as well as encountering MEs (foreign body inhalation, chest pain, shortness of breath) and procedures (CPR, automated external defibrillation, venepuncture, intubation, pulse oximetry) while in practice.

The associations of gender with ME skills, knowledge and experiences were then examined. Gender was significantly associated with encountering venepuncture, χ2(1, n=369) = 8.01, p=.005, intubation, χ2(1, n=369) = 5.21, p=.02, automated external defibrillation, χ2(1, n=370) = 4.83, p=.03, and cardiopulmonary resuscitation, χ2(1, n=370) = 6.02, p=.01, in practice. Gender was also associated with the ability to give a subcutaneous injection, χ2(1, n=370) = 4.43, p=.04, to perform the Heimlich manoeuvre, χ2(1, n=370) = 6.25, p=.01, having knowledge about CPR, χ2(1, n=370) = 4.67, p=.03, and attending a workshop on handling MEs, χ2(1, n=370) = 3.9, p=.05. There was no significant association of gender with the self-perceived ability to deliver artificial respiration, give IM and IV injections, check the carotid pulse, or perform CPR, nor with attendance at previous CPR training, willingness to participate in future ME training, knowledge of oxygen, adrenaline and drugs which may precipitate an allergic reaction, encountering any of the specified MEs, or encountering pulse oximetry during practice.
Discussion
This study explored the perceptions of final year dental students at JUST and JU regarding their education and preparedness to manage MEs within a dental office. Overall, the student cohort reported sub-optimal levels of ME knowledge and skill. Less than half of the respondents reported possessing emergency medical skills such as providing artificial respiration, administering injections by various routes, and performing the Heimlich manoeuvre. In comparison, over 80% reported knowing how to check the carotid pulse. Moreover, only 51.4% reported that they could perform CPR. This is comparable to a previous project conducted in Saudi Arabia, which reported that only 45% of dentists felt competent to perform CPR (Alhamad et al., 2015). Other studies have likewise reported that the majority of dentists felt they were unable or unprepared to provide CPR, BLS or first aid in an emergency (Arsati et al., 2010; Stafuzza et al., 2014; Varma et al., 2015).

Approximately half of the present cohort reported that they had done some CPR training. This compares to figures reported in other studies; 56% of dental interns in Southern India had BLS training (Elanchezhiyan et al., 2013), a report from Nigeria documented that 58% of final year students had received medical emergency training (Ehigiator, Ehizele, & Ugbodaga, 2014), an Indian study reported that three quarters of dental interns had received some ME training (Abraham & Afradh, 2016), and approximately six in 10 dentists in a Brazilian study had undergone CPR training (Arsati et al., 2010). It is noteworthy that the majority of dentists and dental students feel that they require more training in BLS and MEs and have a positive attitude to learning these skills (Abraham & Afradh, 2016; Mwita et al., 2017; Somaraj et al., 2017). This is reflected in the present cohort, where 83.5% of participants were willing to do further ME training.

At least a quarter of the cohort had been exposed to one of the MEs included in the questionnaire. The literature reports varying amounts of exposure to MEs by students and dentists. It has been reported that one to two thirds of dental interns have encountered a ME (Elanchezhiyan et al., 2013; Leelavathi et al., 2016), with other reports stating that over a third of dental practitioners have encountered at least 1 ME over the previous few years (Joshi & Acharya, 2016; Mwita et al., 2017). Additional studies have reported higher incidence levels (Broadbent & Thomson, 2001; Müller et al., 2008); however, it is difficult to compare studies as various reporting timeframes have been used.

The present study has demonstrated a statistically significant association between previous attendance at a ME workshop and the self-perceived ability to perform artificial respiration and CPR, as well as the various skills, such as giving IM injections, that may be required in an emergency medical situation. Previous studies have likewise shown that BLS training increases levels of BLS confidence, skills and knowledge (Ibnerasa & de Garve, 2016; Sharma & Attar, 2012).

However, self-perceived confidence in a skill and being able to perform the skill effectively may be two disparate entities, as demonstrated in a study of French dental students where more students felt competent to perform CPR than were able to when examined practically (Laurent et al., 2009). The over-estimation of CPR competencies has also been reported for other healthcare students (Grześkowiak, 2006), and may be influenced by social desirability (Van de Mortel, 2008). Thus, for future studies, it may be important to include practical examination of skills to allow a better understanding of the interaction of educational interventions on skill acquisition.

Although structured BLS and ME training should be an integral component of dental undergraduate curricula (Sharma & Attar, 2012), it is also important to consider the value of repeated BLS/ME training throughout undergraduate and postgraduate periods. Resuscitation skills have been shown to decline from six weeks post training, with the greatest decreases occurring between six and twelve months (Yang et al., 2012). A survey of general dentists and final year students demonstrated that students had a higher level of knowledge of the management of medically unwell patients than practitioners (Ghapanchi, Shahidi, Kamali, & Zamani, 2016), and Akbari et al. (2015) reported that more professional dental experience was associated with lower ME management awareness (Akbari, Raeesi, Ebrahimipour, & Ramezanzadeh, 2015). These studies suggest that it is not enough to have a single exposure to ME management education and training; rather, they highlight the importance of ongoing, repeated theory and practical sessions throughout the life of a dental student and practitioner.

A limitation of this study is the self-report nature of the questionnaire, due to the possibility of a social desirability response bias. That is, participants responding in a way that is perceived to be appropriate within their social and/or professional environment. One may propose that this could be particularly true of health professionals when answering items which reveal their knowledge or competencies. A further limitation may be a non-response bias, where those who didn't respond to the questionnaire may have answered items differently than those who did respond. Furthermore, the use of a novel, non-validated survey limits generalisability. To advance understanding in this area, it would be beneficial for the questionnaire used in this study to undergo examination for validity and reliability. It is also recommended that further studies in this area examine the effects of educational interventions upon questionnaire findings and skill acquisition.
Conclusion
Although the occurrence of MEs within dental practice is relatively infrequent, it is vital for dental practitioners to have the knowledge, confidence, and competence to be able to identify and manage a ME in practice. This study demonstrates that the participating final year dental students have less than optimal levels of self-perceived knowledge and competency with regards to the management of medical emergencies. However, the majority are willing to attend future training in this area. This indicates that the student cohort view training in medical emergency management as an important inclusion in dental education. Along with the importance of competence in this area, it is proposed that medical emergency management should be a mandatory component when planning dental undergraduate curricula.
Table 1. Self-perceived possession of ME management skills.

Table 2. Medical emergency training and experiences.

Table 3. Chi-squared associations between attendance at a ME workshop and ME skills, knowledge, and experiences.
|
v3-fos-license
|
2021-12-31T16:06:18.401Z
|
2021-12-29T00:00:00.000
|
245572627
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "928b746e7fef834af99b37409df17b578bf4f9a8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43491",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "e5d48b10946ffe7a7cd74c13cd4b6a1d8f223554",
"year": 2021
}
|
pes2o/s2orc
|
NLRP3 Inflammasome Activation Controls Vascular Smooth Muscle Cells Phenotypic Switch in Atherosclerosis
(1) Background: Monocytes and the nucleotide-binding oligomerization domain-like receptor protein 3 (NLRP3) inflammasome orchestrate lipid-driven amplification of vascular inflammation, promoting the disruption of the fibrous cap. The components of the NLRP3 inflammasome are expressed in macrophages and foam cells within human carotid atherosclerotic plaques and in VSMCs in hypertension. Whether monocytes and NLRP3 inflammasome activation are direct triggers of VSMC phenotypic switch and plaque disruption needs to be investigated. (2) Methods: The direct effect of oxLDL-activated monocytes on VSMCs in a co-culture system was demonstrated via flow cytometry, qPCR, ELISA, caspase 1, and pyroptosis assays. Aortic roots of VSMC lineage tracing mice fed a normal or high cholesterol diet and human atherosclerotic plaques were used for immunofluorescence quantification of NLRP3 inflammasome activation/VSMC phenotypic switch. (3) Results: OxLDL-activated monocytes reduced α-SMA, SM22α, and Oct-4 expression and upregulated KLF-4 and the macrophage markers MAC2, F4/80 and CD68 in VSMCs, as well as caspase 1 activation, IL-1β secretion, and pyroptosis. Increased caspase 1 and IL-1β in phenotypically modified VSMCs were detected in the aortic roots of VSMC lineage tracing mice fed a high cholesterol diet and in human atherosclerotic plaques from carotid artery disease patients who had experienced a stroke. (4) Conclusions: Taken together, these results provide evidence that monocytes promote VSMC phenotypic switch through VSMC NLRP3 inflammasome activation, with a likely detrimental role in atherosclerotic plaque stability in human atherosclerosis.
Introduction
Cardiovascular diseases (CVD) are still the predominant cause of death and morbidity, with atherosclerosis as the main underlying cause [1]. Atherosclerosis is a lipid-driven, chronic inflammatory disease characterized by the build-up of subendothelial deposits of cholesterol and the formation of leukocyte-rich plaques in the intimal layer of the arteries. Inflammation plays a major role in promoting the disruption of the fibrous cap that covers the atherosclerotic plaque, resulting in myocardial infarction and stroke [2]. The fibrous cap is composed mainly of VSMCs. Expansion of monocytes is an independent risk factor for CVD, causally linked to the enlargement of the atherosclerotic lesion [3]. Oxidized low-density lipoprotein (oxLDL)-activated monocytes enhance atherogenesis by triggering inflammatory cascades, overproduction of reactive oxygen species (ROS), and the accumulation of monocyte-derived macrophages [3]. The uptake of oxLDL by macrophages results in the formation of lipid-laden foam cells with impaired migratory ability, which die and form a necrotic core that further contributes to destabilizing the plaques [4][5][6]. Nucleotide-binding oligomerization domain-like receptor protein 3 (NLRP3) inflammasome activation has been shown to be a powerful mediator of the inflammatory response via the release of the pro-inflammatory mediators interleukin-1β (IL-1β) and IL-18, which boost lipid deposition, foam cell accumulation, and atherosclerosis progression [7]. Furthermore, the CANTOS trial confirmed the inflammatory hypothesis of atherosclerosis as well as the significant role of IL-1β in the pathogenesis of atherosclerosis, although this did not result in approval of the studied IL-1β inhibitor canakinumab due to higher rates of infection in the active treatment group [8]. Interestingly, 60% to 70% of foam cells in atherosclerotic lesions are of VSMC, not leukocyte, origin, but whether NLRP3 inflammasome activation plays a role in VSMC phenotypic switch is not known. IL-1β is a proinflammatory cytokine exerting its functions through autocrine, paracrine, or endocrine mechanisms [9]. Moreover, IL-1β has been shown to induce its own gene expression in various cell types in an amplification-loop manner called autoinduction [10,11]. IL-1β promotes endothelial dysfunction [12], leukocyte-endothelial cell adhesion, procoagulant activity, and the recruitment of leukocytes [12] and neutrophils, promoting atherogenesis and plaque rupture [13,14]. Interestingly, it has been shown that IL-1β triggers proliferation as well as IL-6 and platelet-derived growth factor production in VSMCs [10]. A recent publication by the group of Owens, using VSMC Il1r1 knockout mice, demonstrated that IL-1 signaling is required for the investment of VSMCs into the fibrous cap in a model of advanced atherosclerosis [15]. However, an IL-1β-neutralizing antibody that was deleterious to fibrous cap stability in mice [15] proved to be beneficial in reducing cardiovascular events in the CANTOS trial in humans [8]. Since NLRP3 inflammasome activation was shown to be an important mechanism driving atherogenesis, inflammation, and foam cell formation, it could also emerge as a crucial mechanism triggering VSMC phenotypic switch and subsequently plaque destabilization. Until now, this hypothesis has not been investigated, and it could open a door to the revelation of a new mechanism in vascular pathology.
OxLDL-Activated Monocytes Promote VSMC Phenotypic Switch
VSMCs were isolated from the aortic arch of 8- to 12-week-old C57BL/6 mice and, after VSMC expansion, the phenotype and purity were confirmed by staining with anti-mouse α-SMA, SM22α, CD31 (endothelial cell marker), and CD90 (fibroblast cell marker). Supplementary Figure S1a shows that the obtained VSMCs expressed the VSMC-specific markers α-SMA and SM22α but were negative for the endothelial cell marker (CD31) as well as the fibroblast marker (CD90). Mouse monocytes were isolated from the bone marrow of C57BL/6 mice and, after a purity check (Supplementary Figure S1b), were used in co-culture experiments with VSMCs. OxLDL-activated monocytes are known to trigger inflammatory cascades, promoting endothelial dysfunction and enhancing atherogenesis [4][5][6]. To demonstrate the effect of oxLDL on monocyte activation, we showed dose-dependent ROS production and IL-6 expression in oxLDL-activated monocytes, as indicated by the increase of the mean fluorescence intensity of carboxylated H2DCFDA and the upregulated expression of IL-6 (Supplementary Figure S1c,d). To study the direct effect of monocytes, and particularly the role of oxLDL-activated monocytes, on VSMC phenotypic modulation, we performed co-culture experiments in a trans-well system in which VSMCs were plated in the plate wells while monocytes or oxLDL-activated monocytes were added to the well cell culture inserts (Figure 1a). Monocyte oxLDL activation was induced by direct supplementation of oxLDL to the cell culture inserts, which are impermeable to oxLDL, to ensure restricted activation of the monocytes. Treatment of VSMCs with oxLDL resulted in a pronounced reduction in the expression of the VSMC-specific markers α-SMA and SM22α. Importantly, the supplementation of VSMCs with oxLDL-activated monocytes also resulted in a pronounced reduction in the expression of the VSMC-specific markers α-SMA and SM22α, expressed as mean fluorescence intensity (Figure 1b,c). Interestingly, the percentage of double-positive α-SMA+SM22α+ VSMCs was prominently downregulated only in VSMCs co-cultured with oxLDL-activated monocytes (Figure 1d). As expected, oxLDL treatment of VSMCs promoted increased expression of the macrophage markers MAC2 and F4/80 in VSMCs (Figure 1d,e). Furthermore, co-culture with oxLDL-activated monocytes elevated the expression of MAC2 and F4/80 in VSMCs (Figure 1d,e), as well as the expression of CD68, which was only significantly elevated after co-culture with oxLDL-activated monocytes (Figure 1f). In line with our hypothesis, monocytes and oxLDL-activated monocytes downregulated the expression of the transcription factor Oct-4 in VSMCs, known to be important in preserving the VSMC contractile phenotype [16], while KLF-4, shown to promote VSMC phenotypic modulation [17,18], was upregulated in VSMCs (Figure 2a,b). Taken together, these results demonstrate that oxLDL-activated monocytes are effective at promoting VSMC phenotypic switch and their transdifferentiation to macrophage-like cells.
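As a side note on the quantification, readouts such as mean fluorescence intensity (MFI) and the percentage of double-positive cells come from gating per-cell intensities in the flow cytometry data. The sketch below illustrates that bookkeeping on synthetic intensities; the gate thresholds and distributions are invented for illustration and are not the gates used in this study.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells = 10000

# Synthetic per-cell fluorescence intensities (arbitrary units, log-normal-like)
asma = rng.lognormal(mean=3.0, sigma=0.6, size=n_cells)
sm22a = rng.lognormal(mean=2.8, sigma=0.6, size=n_cells)

# Illustrative positivity gates, e.g., set from unstained/isotype controls
asma_gate, sm22a_gate = 15.0, 12.0

mfi_asma = asma.mean()  # mean fluorescence intensity of the alpha-SMA channel
double_pos = (asma > asma_gate) & (sm22a > sm22a_gate)
print(f"alpha-SMA MFI: {mfi_asma:.1f}")
print(f"alpha-SMA+ SM22a+ cells: {100.0 * double_pos.mean():.1f}%")
```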
Monocytes Promote VSMC NLRP3 Inflammasome Activation
Despite a great deal of evidence pointing out the critical role of monocytes/macrophages in atherosclerotic vascular diseases [19], previous studies have not clearly defined the inflammatory effect of monocytes on VSMCs in atherosclerosis. Furthermore, NLRP3 inflammasome activation was shown to be an important mechanism driving atherogenesis, inflammation, and foam cell formation; therefore, it could emerge as a crucial mechanism triggering VSMC phenotypic switch. However, until now this hypothesis has not been investigated, and it could open a door to the revelation of a new mechanism in vascular pathology. To demonstrate the effect of monocytes and oxLDL on VSMC NLRP3 inflammasome activation, we performed co-culture experiments in which the direct effect of monocytes or oxLDL-activated monocytes on VSMC NLRP3 inflammasome activation was investigated. We used a trans-well system in the co-culture experiments, in which VSMCs were plated in the plate wells while monocytes or oxLDL-activated monocytes were added to oxLDL-impermeable trans-well inserts, as previously described. To facilitate inflammasome assembly, NLRP3 interacts with the N-terminus of the adapter protein ASC via PYD-PYD interactions; the C-terminus of ASC has a caspase recruitment domain (CARD) that binds to procaspase-1 via CARD-CARD interactions, triggering caspase dimerization and subsequent activation. Interestingly, due to its prion-like properties, ASC forms large fibrillar aggregates known as "specks" [20]. Using the above-described co-culture system, we could demonstrate that monocytes, as well as oxLDL-activated monocytes, promote ASC speck formation, as visualized by confocal microscopic analysis of ASC speck formation in VSMCs (Supplementary Figure S2a). To further confirm that ASC speck formation results in the activation of caspase-1, which is involved in the maturation of IL-1β into a biologically active form and in the cleavage of gasdermin D (GSDMD) to promote pyroptotic cell death (pyroptosis) [21], we investigated caspase 1 activation and pyroptosis in VSMCs treated with oxLDL or co-cultured with monocytes or oxLDL-activated monocytes as described previously. Caspase 1 activity in VSMCs was raised particularly when VSMCs were exposed to paracrine mediators from monocytes as well as oxLDL-activated monocytes in the co-culture system (Figure 3a). To measure IL-1β secretion specifically in VSMCs, the trans-well inserts were removed and the VSMCs were supplemented with fresh medium, making it possible to evaluate IL-1β secretion specifically by VSMCs. In line with the induction of caspase 1 activation, the co-culture with monocytes or oxLDL-activated monocytes triggered IL-1β secretion by VSMCs (Figure 3b). Pyroptosis, a programmed cell death associated with NLRP3 inflammasome activation [21], was most pronouncedly induced in VSMCs after they were exposed to paracrine factors released by oxLDL-activated monocytes in the co-culture system (Figure 3c). In parallel, VSMCs treated with oxLDL or co-cultured with monocytes or oxLDL-activated monocytes showed a pronounced increase in cell death, as evaluated by positive staining of VSMCs with the red live/dead stain propidium iodide and with 7-AAD staining of apoptotic cells, quantified by flow cytometry analysis (Supplementary Figure S1e). Under hypercholesteremia, Apoe−/− mice exhibit an increased percentage of VSMCs undergoing phenotypic switch and expressing NLRP3, as indicated by co-expression of α-SMA, CD68, and NLRP3 (Supplementary Figure S2b). Moreover, the NLRP3 inhibitor MCC950 abrogated the oxLDL- or oxLDL-activated monocyte-induced VSMC phenotypic switch, as evident by the pronounced reduction in F4/80 expression in Myh11-positive VSMCs and in VSMC foam cells (F4/80+LipidTOX+) (Figure 4a,b). These results provide evidence for the involvement of NLRP3 in the VSMC phenotypic switch upon hypercholesteremia or in the presence of oxLDL-activated monocytes.
Under hypercholesteremia, Apoe −/− mice exhibited an increased percentage of VSMCs undergoing phenotypic switch and expressing NLRP3, as indicated by co-expression of α-SMA, CD68, and NLRP3 (Supplementary Figure S2b). Moreover, the NLRP3 inhibitor MCC950 abrogated the oxLDL- or oxLDL-activated monocyte-induced VSMC phenotypic switch, as evidenced by the pronounced reduction in F4/80 expression in Myh11-positive VSMCs and in VSMC foam cells (F4/80 + LipidTOX + ) (Figure 4a,b). These results provide evidence for the involvement of NLRP3 in the VSMC phenotypic switch upon hypercholesteremia or in the presence of oxLDL-activated monocytes.
Figure 3. Graph bars represent the mean ± SEM of (a) caspase 1 activation, (b) IL-1β secretion, and (c) pyroptosis in VSMCs upon oxLDL treatment or monocytes or oxLDL-activated monocytes supplementation as indicated, with n = 6/group and * p < 0.05, ** p < 0.01, and *** p < 0.001, one-way ANOVA.
Figure 4. Bar graphs represent the mean ± SEM of (a) Myh11 + F4/80 + -expressing VSMCs and (b) F4/80 + LipidTOX + (foam cell) formation as a percentage of alive cells, as indicated, upon oxLDL, monocytes, or oxLDL-activated monocytes supplementation in the presence or absence of MCC950, with n = 6/group and * p < 0.05, ** p < 0.01, *** p < 0.001, one-way ANOVA.
IL-1β Promotes VSMC Phenotypic Switch and Transdifferentiation to Macrophages-Like Cells
To investigate whether IL-1β has a direct effect on promoting the VSMC phenotypic switch, we supplemented VSMCs with IL-1β. Treatment of VSMCs with 10 ng/mL of IL-1β for 7 days promoted a pronounced reduction in the expression of α-SMA (Figure 5a). Furthermore, IL-1β treatment of VSMCs promoted a pronounced increase in the expression of the macrophage marker MAC2 (LGALS3), while the combination of IL-1β with oxLDL treatment further raised MAC2 expression in Myh11 + VSMCs (Figure 5b). Remarkably, IL-1β treatment resulted in a profound increase in lipid accumulation in VSMCs, as evidenced by the increase in Myh11 + cells expressing LipidTOX, an indicator of cellular lipid accumulation and subsequent foam cell formation (Figure 5c). These data not only support a critical role for IL-1β in the induction of the VSMC phenotypic switch to macrophages-like cells but also reveal the involvement of IL-1β in VSMC foam cell formation, with a critical role in atherosclerotic plaque stability. Interestingly, in the presence of ZVAD-FMK, a cell-permeable pan-caspase inhibitor, the oxLDL- and IL-1β-induced VSMC phenotypic switch was partly abrogated, as evidenced by a restoration of α-SMA expression in VSMCs as well as a reduction of MAC2 expression in Myh11 + VSMCs (Figure 5d,e,g). Furthermore, ZVAD-FMK significantly diminished the percentage of Myh11 + cells expressing LipidTOX (foam cell formation) in comparison to oxLDL- and IL-1β-treated VSMCs (Figure 5f). These findings clearly demonstrate that inhibition of IL-1β signal transduction might be a way to regulate the VSMC phenotypic switch and foam cell formation. The specific involvement of NLRP3 inflammasome activation in the VSMC phenotypic switch induced by oxLDL and IL-1β was demonstrated using MCC950, a potent, highly specific small-molecule inhibitor of both canonical and noncanonical NLRP3 inflammasome activation that reduces IL-1β production [22]. OxLDL and IL-1β promoted a reduction in the expression of the VSMC-specific contractile protein Myh11, while MCC950 supplementation completely restored Myh11 expression in VSMCs (Figure 5h). Moreover, MCC950 prominently reduced the expression of macrophage markers in VSMCs treated with oxLDL and IL-1β (Figure 5i,j). These findings clearly demonstrate the specific involvement of NLRP3 inflammasome activation in the VSMC phenotypic switch, since the small-molecule NLRP3 inflammasome inhibitor MCC950 abrogated it.
NLRP3 Inflammasome Inhibition Abrogates VSMCs Phenotypic Switch
The COLCOT (Colchicine Cardiovascular Outcomes Trial) and LoDoCo2 (Low Dose Colchicine2) trials demonstrated that low-dose colchicine is efficient in preventing major adverse cardiovascular events [23,24]. However, the precise mechanism of the colchicine-mediated effects has not been revealed. In this regard, we could demonstrate that oxLDL promoted the VSMC phenotypic switch, as indicated by the reduction in α-SMA expression and the increased expression of CD68 and MAC2 in VSMCs, while colchicine treatment abrogated the VSMC phenotypic switch (Figure 6a-c). The presented data strongly suggest that hypercholesteremia induces NLRP3 inflammasome activation in VSMCs and a subsequent VSMC phenotypic switch, and demonstrate the potential inhibitory effect of colchicine on the oxLDL-induced VSMC phenotypic switch, which could potentially result in the prevention of plaque progression and destabilization.
Figure 6. Graph bars represent the mean ± SEM of (a) α-SMA + , (b) α-SMA + CD68 + , and (c) α-SMA + MAC2 + expression in VSMCs upon colchicine and/or oxLDL treatment as indicated, with n = 5-6/group and ** p < 0.01, *** p < 0.001, **** p < 0.0001, one-way ANOVA.
Hypercholesteremia In Vivo Promotes NLRP3 Inflammasome Activation in VSMCs Associated with VSMCs Phenotypic Switch
To demonstrate that NLRP3 inflammasome activation in VSMCs is a relevant mechanism involved in the VSMC phenotypic switch in vivo, we used VSMC lineage-tracking mice. Apoe −/− Myh11ERT2-CreR26R-eYFP mice, with a tamoxifen-inducible recombinase driven by a VSMC-specific gene (Myh11) promoter in combination with a reporter protein to facilitate specific labeling of VSMCs in Apoe −/− mice [25], were fed NCD or HCD. HCD-fed Apoe −/− Myh11ERT2-CreR26R-eYFP mice showed significantly elevated levels of cholesterol and LDL-C, as well as larger atherosclerotic lesions in the aortic roots and abdominal aorta, in comparison to NCD-fed Apoe −/− Myh11ERT2-CreR26R-eYFP mice (data not shown). Apoe −/− Myh11ERT2-CreR26R-eYFP mice are an excellent model for this objective since they allow stable labeling of VSMCs at baseline, which facilitates precise tracing of VSMCs and, importantly, the tracking of VSMC-derived cells during atherogenesis, even when VSMC characteristics might otherwise have been lost. Importantly, Apoe −/− Myh11ERT2-CreR26R-eYFP mice exhibited pronounced NLRP3 inflammasome activation, as demonstrated by cleaved caspase 1 and IL-1β expression in VSMCs (Myh11eYFP + cells) undergoing phenotypic switch (co-expressing CD68) in the aortic roots (Figure 7a,b). Moreover, hypercholesteremia significantly increased the expression of cleaved caspase 1 and IL-1β in Myh11eYFP + cells co-expressing CD68 in comparison to mice fed NCD (Figure 7c,d). These findings clearly demonstrate that inflammasome activation is indeed involved in the VSMC phenotypic switch in response to hypercholesteremia in vivo.
NLRP3-Inflammasome Activation in VSMCs Is Associated with Plaque Rupture in Human Carotid Artery Disease
To gain insight into a possible role of NLRP3 inflammasome activation in the VSMC phenotypic switch and its relevance for the destabilization of human atherosclerotic plaques, we performed immunofluorescence staining of human carotid atherosclerotic plaques derived from carotid artery disease patients. We found that VSMCs (Myh11 + ) undergoing transdifferentiation to macrophages-like cells in human atherosclerotic plaques co-express cleaved caspase 1 as well as IL-1β, indicating the involvement of NLRP3 inflammasome activation in the VSMC phenotypic switch in human atherosclerosis (Figure 8a,b). Furthermore, symptomatic patients who had experienced an ipsilateral ischemic stroke had a significant increase in the number of Myh11 + cleaved caspase 1 + CD68 + cells, as a percentage of the total Myh11 + cells present in human carotid atherosclerotic plaques, in comparison to asymptomatic patients (no ischemic events) (Figure 8c). In line with the observed cleaved caspase 1 upregulation, we found a higher percentage of Myh11 + CD68 + IL-1β + cells in the human carotid atherosclerotic plaques of symptomatic versus asymptomatic CAD patients (Figure 8d). These findings imply that the increased number of VSMCs undergoing switch could have a causal role in human atherosclerotic plaque destabilization. Taken all together, our data imply the involvement of NLRP3 inflammasome activation in the VSMC phenotypic switch, with possible implications in human atherosclerotic plaque destabilization.
Figure 8. Representative immunofluorescence staining of Myh11 + cells expressing (a) cleaved caspase 1 and CD68 and (b) IL-1β and CD68 in human atherosclerotic plaques, associated with NLRP3 inflammasome activation in VSMCs and linked to plaque rupture in human carotid artery disease, shown by confocal microscopy. Graph bars show the mean ± SEM of (c) Myh11 + CD68 + cleaved caspase 1 + and (d) Myh11 + CD68 + IL-1β + co-expressing cells as a percentage of plaque Myh11 + cells, with n = 12/group and * p < 0.05, unpaired t-test.
Discussion
Our present study provides evidence for the involvement of monocytes in triggering NLRP3 inflammasome signaling, promoting the VSMC phenotypic switch and atherosclerosis progression. NLRP3 inflammasome activation and IL-1β signaling appeared to play a direct role in VSMC phenotypic modulation. Our results provide insight into the direct role of monocytes and hypercholesteremia in triggering the NLRP3 inflammasome activation involved in the VSMC phenotypic switch/foam cell formation, with possible implications for the destabilization of human atherosclerotic plaques.
Upon entrance into the intima, monocytes take up oxLDL and undergo macrophage foam cell formation via metabolization of oxLDL through membrane scavenger receptors [26]. The accumulated foam cells are commonly found in early atherosclerotic lesions and can impact the functionality of VSMCs. Indeed, the interaction of VSMCs with monocytes/macrophages has been shown to promote the production of matrix metalloproteinases involved in VSMC migration [27,28], affect the VSMC phenotype and proliferative capacity [29][30][31], and promote VSMC apoptosis via Fas receptor-ligand binding to macrophages [32,33]. The present study reveals a major novel mechanism controlling the initiation of the VSMC phenotypic switch/foam cell formation in atherosclerosis: monocytes promote the VSMC phenotypic switch to macrophages-like cells via VSMC NLRP3 inflammasome activation. We observed that monocytes and hyperlipidemia modulate VSMCs as follows: (1) they promote their phenotypic switch to macrophages-like cells; (2) they reduce the expression of the transcription factor Oct-4 in VSMCs, known to be important in preserving the VSMC contractile phenotype [16], and upregulate KLF-4 expression, shown to promote VSMC phenotypic modulation [17,18]; (3) they trigger NLRP3 inflammasome activation and IL-1β secretion by VSMCs; and (4) they induce pyroptosis, the programmed cell death associated with NLRP3 inflammasome activation [21] and with atherosclerotic plaque rupture [34]. Taken together, monocytes and hypercholesteremia trigger VSMC phenotypic modulation, cholesterol accumulation, inflammasome activation, secretion of the highly pro-inflammatory cytokine IL-1β, and cell death. Together, these effects could result in the induction of necrotic core formation, which in turn may lead to overwhelming plaque destabilization leading to plaque rupture. Indeed, we have observed that NLRP3 inflammasome activation in VSMCs is associated with plaque rupture in human carotid artery disease.
IL-1 isoforms can act extracellularly in an autocrine or paracrine manner [9], and IL-1β induces its own gene expression in an amplification loop, a process called autoinduction [10,11]. Secondary necrosis of apoptotic VSMCs promotes the release of both IL-1α and IL-1β, which induces the surrounding viable VSMCs to produce proinflammatory cytokines, thus causing the chronic inflammation associated with atherosclerosis [35]. In the present study, we could demonstrate that IL-1β triggers the VSMC phenotypic switch and transdifferentiation to macrophages-like cells, an effect that was amplified in the presence of oxLDL. Remarkably, IL-1β treatment profoundly increased lipid accumulation and VSMC foam cell formation, highlighting the critical role of IL-1β in atherosclerosis progression as well as in VSMC foam cell formation, with consequences for atherosclerotic plaque stability. Interestingly, ZVAD-FMK, a cell-permeable pan-caspase inhibitor, partly abrogated the VSMC phenotypic switch as well as foam cell formation. These findings clearly demonstrate that caspase inhibition might be a way to preserve the VSMC contractile phenotype. More interestingly, we could demonstrate the role of NLRP3 inflammasome activation in the VSMC phenotypic switch triggered by oxLDL, oxLDL-activated monocytes, or IL-1β using MCC950, a potent, highly specific small-molecule inhibitor of both canonical and noncanonical NLRP3 inflammasome activation that reduces IL-1β production [22].
Current widely used anti-atherosclerosis therapies modulate only the factors associated with the development of the disease, while growing evidence supports a role for inflammation in atherosclerosis. Colchicine is a small molecule, a natural product derived from the autumn crocus plant, which has been used to treat chronic auto-inflammatory conditions [36] as well as pericarditis, stable coronary artery disease, and postpericardiotomy syndrome [37]. Colchicine interferes with the assembly of microtubules and in this way impedes the assembly of the multiple components that comprise the inflammasome, thereby blocking inflammasome assembly and IL-1β production [38]. Colchicine is currently under extensive evaluation for safety and efficacy in large randomized controlled trials. The COLCOT (Colchicine Cardiovascular Outcomes Trial) and LoDoCo2 (Low Dose Colchicine2) trials both demonstrated that low-dose colchicine is efficient in preventing major adverse cardiovascular events [23]. Among the ongoing trials, it is worth mentioning the COLPOT trial in patients with recent acute coronary syndromes, the CLEAR-SYNERGY (OASIS-9) trial in patients with STEMI undergoing percutaneous coronary intervention (PCI), and the CONVINCE trial, which will determine the long-term tolerability and efficacy of low-dose colchicine for secondary prevention in patients with CAD [39]. However, there is a need for mechanistic studies explaining the athero-protective effects of colchicine, and particularly whether colchicine could affect the VSMC phenotypic switch and subsequently plaque destabilization. Our present finding shows a direct inhibitory effect of colchicine on the oxLDL-induced VSMC phenotypic switch, which could at least partly explain the atheroprotective effect of colchicine in preventing major adverse cardiovascular events [23].
The findings of this study go beyond a simple understanding of the pathogenesis of atherogenesis, since they provide new mechanistic insight for therapeutic strategies preventing plaque destabilization and major adverse cardiovascular events. Taken together, our data imply that NLRP3 inflammasome activation is a critical mechanism involved in the VSMC phenotypic switch, with possible implications in human atherosclerotic plaque destabilization.
Materials and Methods
Animals
Eight- to twelve-week-old C57BL/6 mice were used for VSMC or monocyte isolation. Eleven-week-old male Apoe −/− or Apoe −/− Myh11-CreERT2, ROSA26 STOP-flox eYFP +/+ mice were fed an NCD (4.6% fat, 21.1% protein, 4.5% fiber, 6.4% ash; Special Diets Services, UK) for 16 weeks (early atherogenesis) [40] or an HCD (20.1% fat, 1.25% cholesterol; Research Diets, Inc., New Brunswick, NJ, USA) for 11 weeks to promote advanced atherogenesis [41]. To facilitate VSMC lineage tracing, injection of tamoxifen was used to induce Cre recombinase activation in male Apoe −/− Myh11-CreERT2, ROSA26 STOP-flox eYFP +/+ mice [42]. A series of ten intraperitoneal injections of 1 mg tamoxifen (Sigma) was given from 9 to 11 weeks of age, for a total of 10 mg of tamoxifen per mouse (average body weight 25 g), over the 2 weeks running up to the start of the high-cholesterol diet [15]. Whole blood was collected, and serum triglycerides, total cholesterol, and low-density lipoprotein-cholesterol (LDL-C) were measured. Animals were sacrificed by exsanguination after anesthesia with 4% isoflurane. Experimental protocols and procedures were reviewed and approved by the Institutional Animal Care and Use Committee of the Geneva University School of Medicine, and animal care and experimental procedures were carried out in accordance with its guidelines. All procedures conform to the guidelines of Directive 2010/63/EU of the European Parliament on the protection of animals used for scientific purposes and the NIH Guide for the Care and Use of Laboratory Animals.
Human Samples
Specimens of internal carotid plaques from a previously published cohort study [43] were used for immunofluorescence analysis: plaques from symptomatic patients with CAD and a first episode of ipsilateral ischemic stroke (ipsilateral focal neurological deficit of acute onset lasting >24 h), as well as from asymptomatic patients (no history of ischemic symptoms) undergoing endarterectomy for severe carotid stenosis. Carotid endarterectomy (CEA) was performed due to extracranial high-grade internal carotid stenosis (>70% luminal narrowing) in both symptomatic and asymptomatic patients. Doppler ultrasound echography and angiographic confirmation using the criteria of the North American Symptomatic Carotid Endarterectomy Trial (NASCET) [44] were applied to determine the degree of luminal narrowing. The indication for CEA in asymptomatic patients was based on the recommendations of the Asymptomatic Carotid Surgery Trial (ACST) [45], while for symptomatic patients the CEA indication followed the recommendations of the European Carotid Surgery Trial (ECST) [46] and NASCET [44]. After surgical excision, the internal carotid plaque specimens were cut perpendicular to the long axis through the point of maximum stenosis to obtain the atherosclerotic plaque upstream of the blood flow. The upstream internal carotid plaque specimens from symptomatic and asymptomatic patients were embedded in optimal cutting temperature (OCT) compound. The study was approved by the Medical Ethics Committee of San Martino Hospital in Genoa (Italy) and conducted in compliance with the Declaration of Helsinki after participants provided written informed consent.
Cells Isolation
VSMC isolation from the aorta of 8-12-week-old C57BL/6 mice was successfully established in the laboratory. Briefly, after intracardial perfusion, the aorta was surgically excised and the aortic arch was separated from the thoracic part of the aorta. The aortic adventitia was carefully removed by sharp surgical dissection in a clearly defined plane, to leave a naked media over the length of the aortic segment, and the intima was scraped softly to eliminate endothelial cells. The obtained arch was digested for 40 min to 1 h at 37 °C in DMEM containing Collagenase P, Dispase, and DNase I. The VSMC phenotype was confirmed by flow cytometry analysis through positive expression of smooth muscle α-actin and Myh11 and negative expression of CD31 (endothelial cell marker) and CD90 (fibroblast cell marker). These cells were cultured at a density of 3 × 10⁴ cells/cm² using SmBM Basal Medium (CC-3181, Lonza) and the SmGM-2 SingleQuots supplements (CC-4149, Lonza) required for the growth of VSMCs for 3 weeks, and the medium was renewed every 3 days. For monocyte isolation, marrow cells from the ilia, tibiae, and femurs of 8-12-week-old C57BL/6 mice were obtained by flushing with cold PBS using a 22-gauge needle and passing the cell suspension through a 40-µm cell strainer (BD Biosciences, MD, USA). Mononuclear cells from blood were obtained after centrifugation by density gradient sedimentation using Histopaque (Sigma). Erythrocytes were lysed, and nucleated cells were washed twice, counted, and suspended in PBS. Monocytes were isolated using the mouse Monocyte Isolation Kit (BM) (Miltenyi Biotec, 130-100-629) according to the manufacturer's instructions under sterile conditions. Dead cells and doublets were excluded based on exclusion dye or forward scatter profiles, respectively. Monocyte purity (>95%) and phenotype were confirmed by flow cytometry using anti-Ly-6C-FITC, mouse (Miltenyi Biotec, 130-102-295) and CD11b-VioBlue (Miltenyi Biotec, 130-113-810).
Flow Cytometry Analysis of Vascular Smooth Muscle Cells Phenotypic Switch
Quantification of VSMC transdifferentiation was performed using VSMCs at passage 1. In vitro, VSMCs were stimulated with either 40 ng/mL oxLDL (Thermo Fisher), 10 µM Z-VAD-FMK (InvivoGen), 100 ng/mL colchicine (Sigma-Aldrich), or 10 ng/mL IL-1β for 7 days, or co-cultured for 7 days with monocytes derived from male C57BL/6 mice using transwell cell culture inserts. VSMCs were co-cultured with monocytes or with monocytes activated with oxLDL upon direct supplementation of 40 ng/mL oxLDL (Thermo Fisher) to the cell culture inserts (pore size 0.02 µm) in the transwell plates for 7 days. The direct supplementation of oxLDL, which has a known diameter of more than 20 nm [47], ensured monocyte-restricted oxLDL activation since the oxLDL was retained in the transwell inserts with a pore size of 0.02 µm. Quantification of VSMC transdifferentiation was performed via flow cytometry analysis of anti-mouse CD68 PerCP/Cy5.5 (BioLegend), anti-mouse MAC2 PE/Cy7 (BioLegend), anti-mouse F4/80 Brilliant Violet 650, α-SMA Alexa Fluor 488, and SM22α Alexa Fluor 700 after excluding dead cells via LIVE/DEAD Fixable Near-IR Dead Cell Dye staining (Thermo Fisher). Samples were acquired on a Gallios flow cytometer (Beckman Coulter) and analyzed using FlowJo software (TreeStar, Version 10.0.8r1, Ashland, OR, USA).
Quantitative Real-Time PCR
Total mRNA was prepared with TRIzol (Thermo Fisher) according to the provider's protocol. Reverse transcription was performed using the ImProm-II Reverse Transcription System (Promega, Madison, WI, USA) according to the manufacturer's instructions. Real-time PCR (StepOne Plus, Applied Biosystems, Waltham, MA, USA) was performed with SensiFast (LabGene), and real-time duplex qPCR analysis was conducted. The levels of mRNA expression were normalized against the expression of a housekeeping gene (hprt) and analyzed using the comparative ∆CT method. Probes were purchased from Applied Biosystems. All measurements were conducted in triplicate.
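For reference, the comparative CT quantification mentioned above follows the standard Livak (2^−∆∆CT) arithmetic; the sketch below uses hprt as the housekeeping gene named in the text, while the "treated" and "control" groupings are illustrative labels rather than terms from the original methods:

$$\Delta C_T = C_T^{\text{target}} - C_T^{\text{hprt}}, \qquad \Delta\Delta C_T = \Delta C_T^{\text{treated}} - \Delta C_T^{\text{control}}, \qquad \text{relative expression} = 2^{-\Delta\Delta C_T}$$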
Immunofluorescent Staining and Quantification
VSMCs stimulated with oxLDL or co-cultured with monocytes or oxLDL-activated monocytes were cultured in 6-well chamber slides for 24 h at 37 °C with 5% CO2. The cells were fixed with 4% paraformaldehyde for 30 min at room temperature (RT), permeabilized with PBS plus 0.01% Triton X-100 for 30 min, stained with Phalloidin (Abcam) for 1 h at RT, followed by three 10-min washing steps, and counterstained for ASC (Cell Signaling). Confocal microscopy was performed with a confocal LSM 800 Airyscan (Zeiss). Internal carotid plaque specimens from symptomatic and asymptomatic patients, and the aortic roots of male Apoe −/− Myh11-CreERT2, ROSA26 STOP-flox eYFP +/+ mice on NCD or HCD, were embedded in OCT and serially cut into 5-µm sections. Cryosections were fixed in 1% paraformaldehyde, washed with 1× PBS, incubated with blocking solution consisting of 5% BSA in PBS for 30 min, and then permeabilized with 0.1% Triton X-100. Endarterectomy specimens were stained with primary anti-Myh11 (Thermo Fisher) and CD68, cleaved caspase 1, or IL-1β antibody (Cell Signaling) in blocking solution. After washing, the samples were incubated with secondary antibody and mounted with ProLong Glass Antifade Mountant (Thermo Fisher). Immunofluorescence images were acquired with an Axioscan Z1 microscope and analyzed and quantified with the QuPath software platform for whole-slide image analysis. The extent of VSMC phenotypic switch/NLRP3 inflammasome activation was correlated with the risk of CAD events in human atherosclerosis using the two groups of CAD patients (symptomatic versus asymptomatic). Aortic root cryosections of Apoe −/− or Apoe −/− Myh11-CreERT2, ROSA26 STOP-flox eYFP +/+ mice fed NCD or HCD were stained with primary rabbit anti-CD68 (BioRad), cleaved caspase 1 (Cell Signaling), NLRP3 (Cell Signaling), α-SMA (Abcam), or IL-1β (Cell Signaling) antibody in blocking solution. After washing, the samples were incubated with the secondary antibodies Alexa 647 anti-rabbit (Thermo Fisher) and DyLight 405 and mounted with ProLong Glass Antifade Mountant (Thermo Fisher). Immunofluorescence images were acquired with an Axioscan Z1 microscope and analyzed and quantified with the QuPath software platform for whole-slide image analysis.
Caspase-1 Activity Assay and Pyroptosis/Caspase-1 Assay
Caspase 1 activity was measured with a caspase-1 colorimetric assay (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's protocol. In brief, 50 µL containing 100 µg of VSMC protein extract was mixed with 50 µL of 2X Reaction Buffer 1 and 5 µL of a caspase-1 colorimetric substrate and incubated for 2 h at 37 °C. The caspase 1 activity in the samples was quantified with a microplate reader at a wavelength of 405 nm; data represent the absorbance of the samples. For the pyroptosis/caspase-1 assay, caspase-1 activity was assessed in whole VSMCs treated in vitro with oxLDL or co-cultured with monocytes or oxLDL-activated monocytes as previously described, using the FAM-YVAD-FMK Pyroptosis/Caspase-1 Assay, Green (ImmunoChemistry Technologies), according to the manufacturer's protocol. The activity of the caspase-1 enzyme inside the cells was quantified using a cell-permeant FLICA probe that retains the green fluorescent signal within the cell, with no interference from pro-caspases or inactive forms of the enzymes. To assess pyroptosis, after labelling with FLICA, VSMCs were counter-stained with the red live/dead stains propidium iodide (ImmunoChemistry Technologies) and 7-AAD (ImmunoChemistry Technologies), and the fluorescence signal was quantified via flow cytometry. Samples were acquired on a Gallios flow cytometer (Beckman Coulter) and analyzed using FlowJo software (TreeStar, Version 10.0.8r1).
IL-1β ELISA
To measure IL-1β secretion specifically by VSMCs upon oxLDL treatment or co-culture with monocytes or oxLDL-activated monocytes as previously described, the transwell inserts were removed and the VSMCs in the well plates were supplemented with fresh medium, which was collected after 3 days; the IL-1β secreted by VSMCs was determined using a mouse IL-1β ELISA kit (Cloud Clone Corp., Houston, TX, USA) according to the manufacturer's protocol.
Statistical Analysis
Statistical analysis was performed using GraphPad software (GraphPad Software, Inc., La Jolla, CA, USA). All data sets were tested for normal distribution with normality tests before proceeding with parametric or non-parametric analysis. Grubbs' test was performed in order to exclude spurious outliers. Statistical significance was tested using the unpaired t-test, one-way analysis of variance (ANOVA) with Tukey post-test, and two-way ANOVA with Bonferroni post-test for data sets with normal distributions, and with the Mann-Whitney test and one-way ANOVA with Dunn's post-test for data sets without a normal distribution. Data are presented as mean ± SEM. Differences were considered significant when the two-sided p-value was lower than 0.05.
Flexible and Lightweight Carbon Nanotube Composite Filter for Particulate Matter Air Filtration
Particulate matter (PM) has become an important source of air pollution. We propose a flexible and lightweight carbon nanotube (CNT) composite air filter for PM removal. The CNT filtering layers were fabricated using a floating catalyst chemical vapor deposition (FC-CVD) synthesis process and then combined with conventional filter fabrics to make a composite air filter. The filtration performance of the CNT filtering layer, alone and composited with other conventional filter fabrics, was investigated for particle sizes from 0.3 μm to 2.5 μm. The CNT composite filter is highly hydrophobic, making it suitable for humid environments. The CNT composite filter with two layers of tissue CNT performed best and achieved a filtration efficiency over 90% with a modest pressure drop of ~290 Pa for a particle size of 2.5 μm. This CNT composite filter was tested over multiple cycles to ensure its reusability. The developed filter is very lightweight and flexible and can be incorporated into textiles for wearable applications or used as a room filter.
Introduction
Air pollution has become the biggest environmental health risk. According to the World Health Organization (WHO), air pollution kills an estimated seven million people every year, and 9 out of 10 people breathe air that exceeds the guideline limits for pollutants [1]. EPA research on the health effects of air pollution has shown that air pollutants have detrimental effects on the lungs and cause heart disease and other health problems [2]. There is a need to understand the adverse effects of air pollution and invent solutions to ensure healthy lives and a sustainable environment. According to Fortune Business Insights, the global market for air filters stood at USD 12.10 billion in 2019 and is estimated to reach USD 20.63 billion by 2027 [3]. The increased demand for and construction of green buildings is making air filtration an important part of a sustainable environment. In addition, the growth of the automobile industry will also increase the use of these filters in the upcoming years.
Due to the large amount of particulate matter (PM) emitted from the industrial sector, power plants, and other human activities, PM has become an important source of air pollution. PM consists of inorganic matter (sulfates, nitrates, etc.) and organic matter (elemental carbon, organic carbon, etc.) in various sizes [4][5][6]. Based on particle diameter, PM can be divided into PM 10 (aerodynamic diameter smaller than 10 µm) and PM 2.5 (aerodynamic diameter smaller than 2.5 µm) [7]. PM sized under 10 µm can enter the body and reach the lungs, but PM sized under 2.5 µm is more harmful and can penetrate the alveoli and blood vessels due to its small size [8]. Moreover, PM 2.5 can remain in the atmosphere for longer periods. To avoid the risk of exposure, capturing PM using a filtration membrane is the commonly used solution [9][10][11][12][13]. In recent years, various CNT-based particulate matter filters have been studied due to their low weight, high surface area, and small pore size. Single-walled carbon nanotubes (SWCNTs) of two different diameters were used to adsorb organic vapors [14]. It was observed that the amount adsorbed by tubes with a narrow diameter is larger than that adsorbed by CNTs with a wider diameter; the surface area of the narrow-diameter tubes was three times greater than that of the wider-diameter tubes, and the higher adsorption was due to the enhanced interaction with adsorbate molecules in narrow-diameter CNTs. The adsorption of volatile organic compounds (VOCs) on multi-walled carbon nanotubes (MWCNTs) was also studied [15]; the presence of an amorphous carbon layer on the MWCNTs increased the adsorption of the organic compounds on the MWCNTs. In passive capturing, filters block the pollutants with the help of a dense filter structure and filter toxic pollutants by a colliding, attaching, and capturing mechanism [16]. A particulate air filter was fabricated by depositing CNT membranes on Si/SiO2 chips [17]. The permeability of the membranes was adjusted by the growth time, and a filter with lower permeability showed 99% filtration efficiency for submicron-size particulates. In other research, aligned CNT sheets were combined with polypropylene nonwoven fabric in a 3-layer cross-ply filter structure [18]. The developed filter was tested for particle diameters ranging from 0.01-0.3 µm at a face velocity of 10 cm/s; it showed a very high quality factor and met the HEPA specifications. It was observed that increasing the number of CNT layers increased the filtration efficiency along with the pressure drop. A CNT-metal filter was also fabricated by growing CNTs onto a conventional metal filter [19]; the filter with a bush-like CNT nanostructure showed higher filtration efficiency.
In this paper, we demonstrate the use of a CNT network as a PM filter with very high efficiency. Different combinations of CNT layers and conventional filtering materials were tested, and the stability of the filter was investigated over multiple filtration cycles. This flexible, high-efficiency filter can be incorporated into textiles or used as a room filter.
Materials and Methods
Synthesis of Thin CNT Filtering Layer
The thin CNT sheet was synthesized using the floating catalyst chemical vapor deposition (FC-CVD) method, a one-step method that produces large-scale nanotube material. The fuel used in the synthesis process consists of a carbon precursor and a catalyst, with sulfur used as a promoter, and was injected into a tube in a high-temperature furnace (1400 °C). The fuel is a mixture of methanol (Thermo Fisher Scientific, MA, USA), thiophene (Sigma-Aldrich, Inc., St. Louis, MO, USA), and ferrocene (Sigma-Aldrich, Inc., St. Louis, MO, USA). The fuel injection rate varied from 30 mL/h to 60 mL/h. The fuel injection was carried out with the help of an atomizer, which injected a fine mist of fuel at the inlet of the reactor; the power of the atomizer is self-driven and varies according to the applied load. Hydrogen and argon were used as carrier gases; the flow of hydrogen was 100 sccm, and the flow of argon varied from 500-2000 sccm. The CNT sheet was collected at the other end of the tube on a rotating drum to form a thin sheet, and the thickness of the sheet can be controlled by varying the collection time. Other details about the synthesis process can be found in previous publications [20][21][22][23][24][25][26][27][28][29][30].
Filter Fabrication
The filter was prepared by collecting a very thin layer of CNT sock on a base carbon fiber tissue layer and on melt blown fabric. CNT filters are easily clogged by PM due to the very small spaces between the CNTs; using a porous material as a base layer provides a microporous structure with structural stability, while the CNT increases the surface area of the filter [11]. Single layers and various composites were tested for better filtration efficiency.
PM Generation and Measurement
In this study, we used incense smoke as the PM source, and the PM particles were captured using the CNT composite filter. The experiments were conducted for 20 min at a time over several days and months. After each filtration experiment, the tested layers were examined with SEM and EDX for a before-and-after comparative analysis. A particle counter was also used at the outlet of the filter to record the concentration of the PM particles.
Filter Efficiency Measurement
The performance of the air filter was evaluated using two key factors: filtration efficiency and pressure drop. To calculate the efficiency of the filter, the input and output concentrations of the PM were measured using the particle counter.
The filtration efficiency was calculated using the following equation:

E = (1 − C_out/C_in) × 100%

where C_in is the measured PM concentration without a filter and C_out is the maximum PM concentration measured after filtration. The quality factor was calculated using the following equation:

QF = −ln(1 − E)/∆P

where E is the filtration efficiency (expressed as a fraction) and ∆P is the pressure drop.
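As a quick numerical illustration of these two formulas (the concentrations below are hypothetical and are not measurements from this study; only the ~290 Pa pressure drop echoes a value reported later in the paper):

$$E = \left(1 - \frac{70}{1000}\right)\times 100\% = 93\%, \qquad QF = \frac{-\ln(1 - 0.93)}{290\ \text{Pa}} \approx 9.2\times 10^{-3}\ \text{Pa}^{-1}$$

A higher QF indicates a better trade-off between capture efficiency and the energy cost of forcing air through the filter.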
Results and Discussion
The experimental setup for PM filtration is shown in Figure 1A. PM smoke was introduced at the inlet of the tube, the CNT composite filter was placed at the center, and a particle counter to record the PM concentration was attached to the outlet of the tube. To reduce backflow at the inlet, suction was applied at the outlet side. Before starting the experiment, the inlet was filled with PM particles from incense smoke, and PM particles were then continuously introduced during the experiment. An inlet view with a CNT filter inside is shown in Figure 1B. The size of the holder is 2.5 × 2.5 inches.
The surface morphology and composition of the particles from incense smoke are shown in Figure 1C. An EDX analysis was performed on different spots of the sample and showed that the smoke mainly consisted of O, Si, Al, Na, Ca, K, Mg, and other elements. It is clear from Figure 1D that the majority of the PM particles from the incense smoke are small particles with diameters smaller than 2.5 µm. A particle counter was used to measure the concentration of the PM from incense smoke.
Single-Layer Performance Analysis
Three different single layers were tested: melt blown (a single layer of melt blown conventional filtering fabric), thin CNT on thick tissue (a thin layer of CNT collected on the base carbon fiber tissue layer), and thin CNT with granulated activated carbon (GAC) (a thin layer of CNT with GAC nanoparticles incorporated during the synthesis process). The filter captured the PM through a combination of different mechanisms such as Brownian diffusion, interception, inertial impact, and electrostatic deposition [11,16,31]. SEM and EDX analyses were performed on the tested layers to determine the morphology of the captured PM particles. The PM particles captured on the melt blown single layer are shown in Figure 2. The color of the melt blown layer changed after the filtration experiment. The SEM images of captured PM particles from incense smoke are shown in Figure 2A,B for the 10 µm and 50 µm size ranges. The EDX analysis (Figure 2C) showed peaks for Na, Mg, Ca, Al, Si, S, Cl, P, and other PM in our tested sample. The PM particles captured on the thin CNT on thick tissue sample are shown in Figure 3. An SEM image of captured PM particles from incense smoke of 1 µm size is shown in Figure 3A. From the EDX analysis (Figure 3B), it is clear that the sample trapped particulate matter from the incense smoke; peaks for Na, Al, Si, S, O, and K are clearly visible.
The PM particles captured on the thin CNT-GAC on thick tissue sample are shown in Figure 4. SEM images of captured PM particles from incense smoke on the top layer and the middle part are shown in Figure 4A,B. An EDX analysis was performed on different spots of the tested sample layers. From the EDX analysis (Figure 4C), we can see peaks for Na, Mg, Al, Si, S, Cl, P, Ca, Cr, and other particulate matter in our tested sample. The CNT-GAC sample trapped more particles than the other tested single layers.
The CNT-GAC filtering layer showed a very high pressure drop. The pressure drops of the filters were significantly increased with continuous loading of the PM during the experiment. The quality factor provides the overall performance of the filtering material by including both pressure drop and filtration efficiency parameters. The higher the quality factor, the better the performance of the filtering material [32]. In Figure 5D, melt blown material has a very high-quality factor compared to other filtering materials.
It is clear that single layers are not sufficient to filter all the particulate matter from smoke. The as-synthesized CNT sheet is not uniformly aligned, which creates gaps between the nanotube strands. The PM can easily pass through these gaps [33]. Therefore, multiple layers in different combinations (CNT with melt blown or other commercially available material) are required to fill these gaps and improve the filtration efficiency. 5B. A sample with thin CNT on thick tissue performed poorly and had less than 60% fil-tration efficiency for all particle sizes. For sizes of 1 μm and 2.5 μm, the filtration efficiency from this sample was lower than 60%. The single layer of melt blown performed at 80% filtration efficiency for a particle size of 0.3 μm. For sizes 1 μm and 2.5 μm, the filtration efficiency of melt blown was lower than 60%. The sample with CNT-GAC performed best among all the three samples with filtration efficiency greater than 80% for a particle size of 0.3 μm and 0.5 μm. Figure 5C shows the pressure drop of the filter samples. The CNT-GAC filtering layer showed a very high pressure drop. The pressure drops of the filters were significantly increased with continuous loading of the PM during the experiment. The quality factor provides the overall performance of the filtering material by including both pressure drop and filtration efficiency parameters. The higher the quality factor, the better the performance of the filtering material [32]. In Figure 5D, melt blown material has a very high-quality factor compared to other filtering materials.
It is clear that single layers are not sufficient to filter all the particulate matter from smoke. The as-synthesized CNT sheet is not uniformly aligned, which creates gaps between the nanotube strands. The PM can easily pass through these gaps [33]. Therefore,
Composite Filter Performance Analysis
For the composite filter, samples with two layers of thin CNT on carbon fiber tissue, undensified CNT on melt blown with a melt blown outer layer, and CNT with micro-granulated activated carbon (CNT-GAC) with an outer layer of melt blown were tested. Undensified CNT is a CNT sheet without any alcohol densification. During the CNT sheet synthesis process, we use acetone for densification while collecting the CNT; the densification process makes CNT collection easier but also reduces the surface area. We investigated the filtration performance of both types of CNT sheets: with densification (normal as-synthesized CNT sheet) and without densification (undensified CNT). The concentration of the PM is higher for the particle size range from 0.3 µm to 1 µm (Figure 6A). The sample with CNT-GAC showed very high filtration efficiency, but its pressure drop was also very high. Among all the samples (Figure 6B,C), the CNT-Tissue with two layers showed good performance with a moderate pressure drop of ~286 Pa, and its filtration efficiency for a particle size of 2.5 µm reached up to 93%. For this study, we concluded that CNT-Tissue with two layers is the best composite filter for PM air filtration for a particle size of 2.5 µm. The low pressure drop of this composite filter may be due to its two-layer structure, which provided enough separation between the CNT layers for efficient air flow. By adding the second CNT-Tissue layer, the thickness of the filter increased, which also improved the filtration efficiency [34]. Figure 6D shows the quality factor of the filters. There is a need to understand the filtration mechanism of our filter for particle sizes below 2.5 µm and to try to improve the overall performance of the filter.
To ensure the stability of the composite filter, the performance of the CNT-Tissue with two layers was investigated over multiple cycles (Figure 7). The filtration performance (Figure 7A) did not vary much, but the pressure drop (Figure 7B) of the filter increased continuously with continuous loading. The increase in the pressure drop of the filter due to clogging with continuous loading is also supported by Thomas et al.'s study [35]. This filter can be useful in different applications. Firefighters often work in harsh environments and encounter hazardous pollutants; to reduce the risk of exposure, this CNT filter can be incorporated into firefighter uniforms to prevent toxic particles from entering the fabric. Several methods for integrating the CNT sheet with available conventional textiles have been studied in our previous research [22,26]. The safety of the CNT sheet is discussed in our previous publication [24], in which it is advised that a veil or outer layer be used with CNT sheets for safety purposes.
Conclusions
In this paper, a composite filter was fabricated using the randomly distributed CNTs in a nonwoven sheet with other conventionally available filtering materials. To provide better air flow, a thin layer of CNT was collected on a very thin carbon fiber tissue layer, using it as a base layer. The filtration efficiency for different combinations of CNT layers alone and composited with commercially available filter fabric were investigated for a particle size varying from 0.3 μm to 2.5 μm. The developed CNT composite filter with two layers of Tissue CNT performed best for a particle size of 2.5 μm and achieved a filtration efficiency of over 90% with a moderate pressure drop of ~290 Pa. The filtration efficiency of the filter did not change much while using it for a longer period. Apart from air purification, it can possibly be integrated in textiles due to its light weight and flexibility for wearable applications.
Essential Oils from Mentha piperita, Cymbopogon citratus, Rosmarinus officinalis, Peumus boldus and Foeniculum vulgare: Inhibition of Phospholipase A2 and Cytotoxicity to Human Erythrocytes
The essential oils from Mentha piperita, Cymbopogon citratus, Rosmarinus officinalis, Peumus boldus and Foeniculum vulgare were extracted by hydrodistillation and characterized and quantified by GC-MS and GC-DIC. The oils induced hemolysis at all the doses evaluated (0.6 to 1.8 μL), and the diameters of the halos varied between 9 and 15 mm. Pre-incubation of P. boldus oil with Bothrops jararacussu venom resulted in potentiation of the venom-induced hemolysis (30%) (proteases and phospholipases A2). The essential oil from M. piperita (0.6 μL) inhibited venom-induced hemolysis by 45%, whereas 0.6 μL of R. officinalis oil increased the hemolysis by 20%. For the essential oil from F. vulgare, 100% inhibition of activity (0.6 and 1.2 μL) was observed. The application of C. citratus oil induced hemolysis at all the volumes evaluated. Phospholipase activity induced by the venom was inhibited (10%) only by the 0.6 μL volume of R. officinalis oil. The oils from M. piperita and F. vulgare (1.8 μL) and C. citratus oil (0.6 μL) potentiated the phospholipase activity. The results highlight the need for broad characterization and regulation of the use of natural products, because they can have therapeutic or toxic actions.
Introduction
Studies have shown that lipophilic substances can often induce hemolysis because they are capable of destabilizing the lipid bilayers present in cell membranes, causing lysis of erythrocytes and increasing plasma hemoglobin levels.
This effect can result in several complications: hemolytic anemia, multiple organ failure and even death [1]. However, many natural compounds have properties that cause the reduction in the membrane fluidity of erythrocytes, reducing hemolytic processes that lead to lower blood viscosity. This property has a potential for pharmaceutical use.
Hemolysis is a process of destruction of red blood cells (erythrocytes) in which the rupture of the plasma membrane occurs, resulting in the release of hemoglobin and causing serious damage to vital organs such as the liver, kidneys and heart. Hemolysis is caused not only by chemical compounds such as penicillin, methyldopa, some types of antibiotics and anti-inflammatory agents, but also by natural compounds such as animal venoms. Several plant extracts with hemolytic activity have been described, some of which are cytotoxic or genotoxic, making it necessary to perform pharmacological and toxicological analyses of essential oils and plant extracts [2].
The phospholipids constituting the membranes can be degraded by numerous substances, including natural compounds, and they can interact with different compounds, resulting in destabilization of the membranes and alteration in the flow of liquids and ions through the membranes. These substances might also act by inhibiting the action of phospholipases from various sources, both animal and human, thereby exerting anti-inflammatory activity and interfering in processes such as blood coagulation and platelet aggregation. These processes are closely related to the action of eicosanoids generated from arachidonic acid, one of the products of the breakdown of phospholipids [3].
Ophidic poisoning has been of great concern for public health, especially in tropical and neotropical countries, both because of its incidence and because of the action of venoms on living organisms [4]. Snake venoms are mixtures of substances, mainly proteins (for example, phospholipases A2 and proteases), that have various biological activities such as enzymatic, myotoxic, cardiotoxic and cytotoxic activities [5].
Phospholipases A2 (PLA2) and proteases (metalloproteases and serinoproteases), present in venoms of snakes of the Bothrops genus, can act directly on erythrocytes, myocytes, blood coagulation cascade factors, epithelial cells and vascular endothelium, causing severe physiological disorganization and resulting in coagulation or intravascular hemolysis or predisposing the organism to the development of diseases [6]. Venoms have been widely used as laboratory tools for physiological studies and for the characterization of several compounds, mainly of vegetal origin, as exemplified by plant extracts, plant drugs and essential oils. The aim of this study was to evaluate the enzymatic inhibition of phospholipases A2 and cytotoxicity to human erythrocytes using the essential oils from M. piperita, C. citratus, R. officinalis, P. boldus and F. vulgare.
Extraction, Identification and Quantification of Essential Oils
The essential oils from M. piperita, C. citratus, R. officinalis, P. boldus and F. vulgare were extracted in the Laboratory of Essential Oils of the Department of Chemistry of the Federal University of Lavras by hydrodistillation over a 2-h period using a modified Clevenger apparatus. They were identified by gas chromatography coupled to a mass spectrometer (GC-MS) and quantified by gas chromatography coupled to a flame ionization detector (FID), according to the procedure previously described by Rezende et al. (2017) [7].
Hemolytic Activity: Cytotoxicity to Human Erythrocytes
The analyses involving human blood were approved by the Human Research Ethics Committee (COEP) of the Federal University of Lavras, with registration number 48793115.0.0000.5148. The erythrocyte suspension was prepared using 10 mL of blood collected in tubes containing sodium citrate, which were centrifuged for 10 min at 4˚C and 2500 g. After centrifugation, the plasma was removed, and the erythrocytes were suspended in phosphate-buffered saline (PBS) (pH 7.2-7.4) and centrifuged again under the same conditions. This procedure was repeated three times to obtain a packed red blood cell pellet. The analysis of the hemolytic activity of the essential oils was accomplished using the methodology of Price, Wilkinson and Gentry (1982) [8], with modifications. The medium was prepared with 1% agar in PBS and 0.01 M calcium chloride, 0.1 g sodium azide, and 1% blood erythrocytes. Cavities 3 mm in diameter were made in the medium after solidification in Petri dishes for the application of 0.6, 1.2 and 1.8 μL aliquots of the essential oils, and the plates remained in a cell culture chamber at 37˚C for 16 hours.
The essential oils (0.6, 1.2 and 1.8 μL aliquots) were incubated with the Bothrops jararacussu venom in a water bath at 37˚C for 30 minutes to evaluate a possible inhibitory action of the oils on the hemolysis induced by the venom. The formation of a translucent halo around the cavity in the gel is indicative of activity, and this halo was measured in millimeters for the quantification of hemolytic activity.
Activity of Phospholipase A2
The phospholipase activity was determined in a solid medium in accordance with the method described by Gutiérrez et al. (1988) [9]. A gel similar to that described for the determination of hemolytic activity was prepared, except that 1% lecithin from egg yolk was substituted for the erythrocytes. After solidification of the medium, cavities 3 mm in diameter were prepared for application of the samples.
The essential oils (0.6, 1.2 and 1.8 μL) were incubated with B. jararacussu venom in a water bath at 37˚C for 30 minutes and then applied to the plates, which were maintained for 16 hours at 37˚C in a cell culture chamber. The formation of a translucent halo around the hole in the gel was indicative of activity, and this halo was measured in millimeters for the quantification of phospholipase activity.
Statistical Analysis
For the cytotoxic and phospholipase activities, the test was performed by comparing averages one at a time (each volume was separately compared to the control). The data were submitted to analysis of variance, and the means were compared by the Scott-Knott test at the 5% probability level. The statistical program used was SISVAR [10].
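As a concrete illustration of this analysis step, the sketch below runs the one-way ANOVA stage on hypothetical halo-diameter replicates using SciPy. The Scott-Knott clustering of means (performed in SISVAR in this study) is not reproduced here, and all data values are invented for illustration only.

```python
# Minimal sketch of the variance-analysis step, assuming three hypothetical
# treatments with triplicate halo-diameter measurements (mm). Only the ANOVA
# stage is shown; the Scott-Knott grouping of means is done separately.
from scipy import stats

halos = {
    "venom_control": [14.8, 15.1, 15.0],    # venom-only control
    "M_piperita_0.6uL": [8.2, 8.5, 8.0],    # hypothetical inhibition
    "P_boldus_0.6uL": [19.3, 19.8, 19.6],   # hypothetical potentiation
}

f_stat, p_value = stats.f_oneway(*halos.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # the 5% probability level used in the study
    print("At least one treatment mean differs; proceed to mean grouping.")
```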
In Figure 1, the classes of constituents present in the essential oil from M. piperita can be observed. The oxygenated monoterpene class is predominant (86%). The essential oil from M. piperita (mint) contained carvone and limonene as the main components.
The essential oil from C. citratus contained approximately 91% monoterpenes, as can be observed in Figure 2. The essential oil from C. citratus (lemongrass) contained geranial, neral and myrcene as the main constituents.
The constituent classes of the essential oil of R. officinalis, 87% of which are oxygenated monoterpenes, are presented in Figure 3. The classes of constituents found in the essential oil of P. boldus are shown in Figure 4. The essential oil of F. vulgare was the only one containing a component belonging to the class of phenylpropanoids (Figure 5); this component represented 90% of the oil.
Hemolytic Activity: Cytotoxicity to Human Erythrocytes
With the exception of the essential oil from R. officinalis, all the oils evaluated induced hemolysis (Figure 6); halos between 9 and 15 mm in diameter were observed for the oils from C. citratus and P. boldus. The 15 mm halo, referring to the 1.8 μL volume of oil from P. boldus, does not differ significantly from the control containing only venom.
Pre-incubation of the essential oil from P. boldus with B. jararacussu venom resulted in potentiation of hemolytic activity by approximately 30% for all the volumes of oil evaluated (Figure 7). An inhibition of approximately 45% occurred with the 0.6-μL aliquot of the essential oil from M. piperita, whereas the 1.2- and 1.8-μL volumes potentiated the action of the hemolytic enzymes present in the venom, represented mainly by proteases and phospholipases A2.
For the essential oil from F. vulgare, 100% inhibition was observed with the 0.6- and 1.2-μL volumes. However, the essential oil from C. citratus did not significantly alter the hemolytic activity induced by the venom (Table 1). These differentiated activities for each essential oil can be explained by the difference in their compositions.
The different performances of the essential oils evaluated for the hemolytic activity induced by B. jararacussu venom are presented in Table 1. The data suggest the presence of specific interactions between the constituents of some oils and the hemolytic toxins present in the venom, because potentiation was not observed for the oils from C. citratus (hemolytic at volumes of 0.6, 1.2 and 1.8 μL) and F. vulgare (hemolytic at the 1.8-μL volume); potentiation would be expected if the effects of the oils and the venom were simply additive. In addition, significant potentiation was observed with the 0.6-μL volume of R. officinalis oil and significant inhibition with the 1.2- and 1.8-μL aliquots. These results differ from those observed with the oil from M. piperita, which caused inhibition at the lowest volume evaluated and potentiation at higher volumes. The total or partial inhibition of hemolytic activity may be related mainly to the action of phospholipases A2 and may reflect the presence or absence of interactions between the molecules present in the toxins and the essential oil constituents; the inhibitory action might also result from antioxidant mechanisms [11]. These mechanisms are closely related to the number of molecules (enzymes and active plant compounds) present in the reaction environment and justify the different actions (inhibitory, potentiating or no effect) observed for the various oil volumes analyzed.
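To make the percentages discussed above concrete, the helper below computes the relative change of an oil + venom treatment against the venom-only control (taken as 100% activity). Whether the study derived percentages from halo diameters or halo areas is not stated, so this diameter-based function and its input values are illustrative assumptions only.

```python
def percent_effect(halo_oil_venom_mm: float, halo_venom_mm: float) -> float:
    """Relative change vs. the venom-only control (100% activity).
    Negative values indicate inhibition, positive values potentiation."""
    return (halo_oil_venom_mm - halo_venom_mm) / halo_venom_mm * 100.0

# Hypothetical halo diameters (mm):
print(percent_effect(8.25, 15.0))  # -45.0 -> ~45% inhibition
print(percent_effect(19.5, 15.0))  # +30.0 -> ~30% potentiation
```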
Phospholipase A2 Activity
According to Figure 8, the action of the phospholipases A2 present in the venom was potentiated only by the 1.8-μL aliquots of the essential oils from M. piperita and F. vulgare and the 0.6-μL volume of the C. citratus oil. The smallest volume evaluated (0.6 μL) of the essential oil from R. officinalis caused a 10% inhibition of the phospholipase activity, whereas the essential oil from P. boldus did not alter the activity induced by the venom. Some studies have described the action of plant compounds on the different classes of enzymes present in snake venoms. Silva et al. (2017) [12] evaluated the inhibitory potential of essential oils from Mentha viridis (L.) L. and Mentha pulegium L. on phospholipase A2 present in snake venoms and observed that both essential oils were able to inhibit the degradation of phospholipids induced by Bothrops venoms. The essential oils also presented hemolytic activity, that of the oil from Mentha viridis (L.) L. being observed only at the highest concentrations (14.6 and 29 μL·mL−1). Another study [13] investigated the inhibitory properties of the essential oils from Baccharis dracunculifolia, Conyza bonariensis, Tithonia diversifolia and Ambrosia polystachya on the coagulant and fibrinogenolytic activities induced by Bothrops and Lachesis snake venoms. The authors observed that the essential oils exhibited some therapeutic properties, because they inhibited the coagulation and fibrinogenolysis induced by the venoms, and suggested that the topical use of the oils, in general, does not require specific pharmaceutical preparations and that they can be applied directly after extraction. Many oils with antimicrobial, anti-inflammatory and curative properties are described in the literature, and these actions are of great value in the treatment of snake poisonings. Miranda et al. (2014) [14] studied the effect of the essential oil from Hedychium coronarium on the fibrinogenolytic and coagulant activities induced by Bothrops and Lachesis venoms. They observed significant inhibition of the coagulation induced by both venoms, suggesting their possible use as a complementary alternative to serum therapy, because the essential oils do not require specific formulations and their topical use can be performed immediately upon extraction.
Yamaguchi and Veiga-Junior (2013) [15] evaluated the hemolytic capacity of essential oils obtained from Endlicheria citriodora branches and leaves and observed no damage to the membrane. They also reported that both oils were composed basically of methyl geranate (a monoterpenoid ester), which corresponds to 95.15% and 93.75% of the oils from the branches and leaves, respectively. Phospholipases A2 in Bothrops snake venom induce the hydrolysis of membrane phospholipids and can generate arachidonic acid, which is a precursor of prostaglandins, thromboxanes, leukotrienes and other bioactive lipids that act mainly in inflammatory processes and in the blood coagulation cascade, thereby altering hemostasis [16]. The different venom toxins can be inhibited by several molecules, including chelating agents such as heparin, plasma factors of animal origin and plant extracts [17].
Studies by Silva et al. (2017) [12] using Bothrops venom showed that both the essential oil from Mentha pulegium and that from Mentha viridis had an inhibitory effect on phospholipases A2 on the order of 4.1% at the concentration of 14.6 μL·mL−1.
In 2016, Oliveira et al. [11] evaluated possible interactions between vitamins and enzymes present in Bothrops atrox and Crotalus durissus terrificus venoms in vitro. Inhibition assays for proteolysis, hemolysis, coagulation and hemagglutination were performed using different proportions of vitamins to inhibit the minimum effective dose of each venom. The authors observed that the vitamins were responsible for a 100% reduction in the cleavage of azocasein by the C. d. terrificus venom and in the thrombolysis induced by the B. atrox venom, and they also observed the induction of fibrinogenolysis by both venoms. Oliveira et al. (2016) [11] suggested possible interactions between the vitamins and the active site of the enzymes. These interactions may occur in the hydrophobic regions present in the enzymes and vitamins, as well as through inhibition exerted by antioxidant mechanisms.
According to reports by Borges et al. (2000) [18], the aqueous extract of Casearia sylvestris inhibited the hemorrhagic activity caused by the venom of several snakes of the Bothrops genus. An aqueous extract of Mandevilla velutina was an effective inhibitor of phospholipase A2 and inhibited some of its toxic effects, such as hemorrhage [19]. Carvalho et al. (2013) [3] reported the importance of plant species in treating snakebites, especially in places that do not have access to serotherapic treatment.
Phospholipases A2, being among the main constituents of Bothrops snake venoms, can be inhibited by components of these plants, such as phenolic compounds, flavonoids, alkaloids, steroids, terpenoids (mono-, di- and triterpenes), and polyphenols (vegetable tannins).
In the present work, approximately 10% inhibition of phospholipases occurred in the presence of the oil from R. officinalis (rosemary). This oil is composed of terpenes, alcohols and ethers. This result agrees with the work reported by Mors, Nascimento and Pereira (2000) [20], which demonstrated the inhibitory action of several pentacyclic triterpenes, such as oleanolic acid, lupeol, ursolic acid, taraxerol, taraxasterol, α- and β-amyrin and friedelin, on snake venoms. Considering that phospholipase activity is exerted only by PLA2s, whereas hemolytic activity is exerted by both PLA2s and proteases, the observed results point to an action of the oil constituents on the proteases present in the venom as well.
Figure 1. Classification of the constituents of the essential oil from Mentha piperita.
Figure 2. Classification of the constituents of the essential oil from Cymbopogon citratus.
Figure 3. Classification of the constituents of the essential oil from Rosmarinus officinalis.
Figure 4. Classification of the constituents of the essential oil from Peumus boldus.
Figure 5. Classification of the constituents of the essential oil from Foeniculum vulgare.
Figure 6. Evaluation of hemolytic activity against human erythrocytes induced by essential oils from Mentha piperita, Cymbopogon citratus, Rosmarinus officinalis, Peumus boldus and Foeniculum vulgare alone and by the Bothrops jararacussu venom. *Differs from the control containing only venom by the Scott-Knott test at 5% significance.
Figure 7. Effect of the essential oils from Mentha piperita, Cymbopogon citratus, Rosmarinus officinalis, Peumus boldus and Foeniculum vulgare on the hemolytic activity induced by Bothrops jararacussu venom (10 μg) in human erythrocytes after incubation of the oils with the venom at 37˚C for 30 minutes. The values obtained for the pure venom were considered to represent 100% of activity. *Differs from the positive control by the Scott-Knott test at 5% significance.
Figure 8. Effect of the essential oils from Mentha piperita, Cymbopogon citratus, Rosmarinus officinalis, Peumus boldus and Foeniculum vulgare on the phospholipase activity induced by Bothrops jararacussu venom (10 μg) after incubation of the venom with the oils at 37˚C for 30 minutes. The values obtained for the pure venom were considered to represent 100% of activity. *Differs from the positive control by the Scott-Knott test at 5% significance.
Table 1. Quantitative data of the effect of the essential oils of Mentha piperita, Cymbopogon citratus, Rosmarinus officinalis, Peumus boldus and Foeniculum vulgare on the hemolytic activity induced by Bothrops jararacussu venom on human erythrocytes. *A 0% value represents the absence of an effect in tests in which the oils did not inhibit or potentiate the action of the venom. **Differs from the positive control (activity of the snake venom considered as 100%) at 5% significance.
Outlook and Challenges of Nanotechnologies for Food Packaging
Nanotechnology has been considered to have high potential for food packaging applications very early on. The ability to provide additional consumer benefits through the improvement of key properties of packaging materials and the creation of new functionalities means that the increased use of nanomaterials and nanotechnologies is highly likely. However, it has so far failed to reach the widespread use that was initially expected, mainly because of remaining uncertainties about the safety of these materials at the various stages of their life-cycle, which limit legal and consumer acceptance.
INTRODUCTION
The concept of nanotechnology refers to the manipulation of materials at a nanometric scale to benefit from the specific physico-chemical properties occurring in this size range. The concept was first mentioned in a speech by Richard Feynman given in December 1959 at the annual meeting of the American Physical Society. 1 Theoretical knowledge and analytical tools were developed over the next two decades, leading to the discovery of fullerenes in 1985 (resulting in a Nobel prize in 1996) 2 and carbon nanotubes a few years later. 3 From the early days, nanotechnology was identified by the packaging industry as a potential enabler of increased functionality in packaging materials, initially in the domain of barrier and mechanical property improvement 4 and later in the broader context of active and intelligent packaging. 5 Nanotechnology-enabled packaging materials have since grown to become a major area for innovation within the food sector. 6,7 Innovations in packaging are being driven by notable changes in consumer demand and behaviour, which are expected to influence the way we use, and what we expect from, packaging in the future. For instance: (a) observable shifts towards smaller households, (b) an increase in out-of-home food consumption, (c) a greater awareness of, and increasing expectations from, the nutrition, health and wellness aspects of food, (d) an increased desire for freshness and naturalness and (e) more environmentally aware consumers. These trends are placing clear new demands on both food and the packaging which accompanies it, some of which are listed hereafter:
1. Changes in household size will inevitably lead to the adaptation of pack sizes, which will affect packaging surface-to-volume ratios. Consequently, the protection requirements will need to be adapted.
2. Increased consumption of products outside of the home may require additional functionalities to facilitate convenient access and provide means to reclose.
3. Expectations from consumers regarding the health, wellness and nutritional aspects need to be met but also communicated. There are consequently greater demands on packaging to provide the consumer with more, and increasingly detailed, information.
4. Fresher and more natural products are more sensitive to degradation (e.g. reduction of preservatives and use of unsaturated fats); consequently, there will be higher performance expectations from the packaging to protect the contained food product.
These shifting requirements have led to the development of new packaging technologies. Materials have been developed with improved barrier and mechanical properties, and there has been a steady increase in work to develop materials from renewable materials to address protection and sustainability requirements. Active and intelligent packaging technologies have also continued to emerge to address food waste concerns in the form of oxygen scavengers, antimicrobial packaging and freshness indicators, while interactive packaging offers the potential to enable greater consumer engagement and communication through augmented reality, digital packaging and QR codes.
The market pull on packaging development is therefore very strong, and many of the newer requirements can be addressed through the use of technologies available or in development today that are either based on, or make use of, nanoscience or nanotechnology in one way or another.
Despite the interest and also the potential benefits of using nanotechnologies, in recent years attitudes towards them have changed considerably. Initially, this technology was seen as a 'must have'; however, concerns were later raised over potential health risks in certain applications. This led to consumer acceptance issues, especially in application areas related to food. [7][8][9][10][11][12][13][14][15] Although the understanding of nanotechnology is highly advanced in some domains, there are still safety, toxicological and eco-toxicological questions, which have to be fully addressed. These open questions, together with a legislative framework, which is still in formation, are currently considered as barriers towards a more broadly accepted introduction and use of nanotechnology in packaging applications.
A principle that is often applied when a new technology is used in a pioneering application is that of balanced risk and benefit. This principle is valid when the risks and the benefits can be clearly identified and related, such as in medical applications. When considering food applications, however, the same approach would mean that any risk associated with food would have to be balanced against potential consumer benefits, which would most likely be quality related. However, jeopardizing safety to achieve quality related benefits is an approach that is unacceptable. It is therefore necessary to prove the safety of a specific technology prior to any application aimed at improving product quality. This should be done either by proving the innocuousness of a potential exposure or by preventing the possibility of exposure to a potentially harmful substance during the entire life-cycle of the packaging material (i.e. from production to final disposal).
If the use of nanotechnology in packaging applications is to become relevant in terms of volume and scope, the safety and inertness of these new materials with respect to health (human and animal) and ecosystems must be proven, and public acceptance improved by demonstrating consumer benefits, which are linked to actual needs. 10,11,16 Furthermore, given the numerous technologies and potential applications, nanotechnology cannot be assessed as a single technology. Instead each of the main categories will have to be treated independently with respect to their specific benefits and safety considerations.
A key message from these earlier experiences is that the driver for using nanotechnology should not be that it is seen as the latest 'must have' technology. Nanotechnology must rather be seen as an enabler, which can be used to achieve specific and desired material properties that deliver measurable benefits. These material properties and associated benefits must be achieved without jeopardizing product safety or quality.
This paper provides a state-of-the-art review of work relevant to the application of nanotechnologies in food packaging applications. The purpose is to provide a consolidated review of current state-of-the-art research on the topic in order to inform a current broader public debate on the use of nanomaterials in food packaging.
Definitions are first given and new nanoscalar materials and processes for use in food packaging applications are reviewed along with the current applicable regulatory framework. The important aspect of safety is then addressed by first examining the exposure aspect of risk assessments and then reviewing recent work on the migration of nanomaterials from packaging materials to foods and the technologies allowing characterization of nanostructures and migration testing. Considerations are also given towards potential environmental implications, which could arise from the use of such materials over the complete life cycle of the products they are containing. Finally, there is a discussion of the findings and their implications on risk assessment.
DEFINITION
Definitions relating to nanotechnologies can be found in a series of technical specifications edited by the International Organization for Standardization (ISO), namely, the ISO TS 80004 series, comprising 8 parts edited between 2010 and 2015. [17][18][19][20][21][22][23][24] Nanotechnology is defined in part 1 17 as being: the application of scientific knowledge to manipulate and control matter in the nanoscale in order to make use of size- and structure-dependent properties and phenomena, as distinct from those associated with individual atoms or molecules or with bulk materials; the nanoscale is the size range from approximately 1 nm to 100 nm.
Nanomaterials are defined as materials with any external dimension in the nanoscale (nano-objects) or having internal structure or surface structure in the nanoscale (nanostructured materials).
In part 2, 24 nano-objects are further clustered into three classes:
1. Nanoparticles: nano-objects with all external dimensions in the nanoscale, where the lengths of the longest and the shortest axes of the nano-object do not differ significantly.
2. Nanofibers: nano-objects with two external dimensions in the nanoscale and the third dimension significantly larger.
3. Nanoplates: nano-objects with one external dimension in the nanoscale and the other two external dimensions significantly larger.
Other nano-objects are nanorods, nanotubes and nanowires, which are specific examples of nanofibers that are solid, hollow and (semi-)conducting, respectively. Nanoribbons are a special case of nanoplates where one planar dimension is significantly larger than the other. Figure 1 shows selected nano-objects.
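The dimensional logic of this classification is simple enough to express directly. The sketch below is an illustrative reading of the ISO TS 80004-2 scheme, not code from the standard: it merely counts how many external dimensions fall within the nanoscale, omits the additional condition that a nanoparticle's longest and shortest axes must not differ significantly, and uses invented example dimensions.

```python
NANOSCALE = (1.0, 100.0)  # nm, per ISO TS 80004-1

def classify_nano_object(dims_nm: tuple[float, float, float]) -> str:
    """Classify a nano-object by how many external dimensions are nanoscale.
    Simplification: ignores the 'axes do not differ significantly' caveat
    that ISO adds for nanoparticles."""
    in_scale = sum(NANOSCALE[0] <= d <= NANOSCALE[1] for d in dims_nm)
    return {3: "nanoparticle", 2: "nanofiber", 1: "nanoplate"}.get(
        in_scale, "not a nano-object")

print(classify_nano_object((30, 40, 35)))      # nanoparticle
print(classify_nano_object((20, 25, 5000)))    # nanofiber (rod/tube/wire)
print(classify_nano_object((10, 2000, 3000)))  # nanoplate
```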
Terms related to nanostructured materials are defined in part 4. 19 A nanophase is a physically or chemically distinct region, or a collective term for physically distinct regions of the same kind in a material, with the discrete regions having one, two or three dimensions in the nanoscale. A nanocomposite is defined as a solid comprising a mixture of two or more phase-separated materials, one or more being a nanophase.

REGULATORY FRAMEWORK IN THE EUROPEAN UNION

In the EU, several categories of food contact materials are covered by harmonized specific measures, including Regulation (EC) No. 450/2009 on active and intelligent materials and articles. 36 Materials not covered by the European Union (EU) harmonized specific measures are subject to national legislations.
Certain EU regulations include requirements for nanomaterials; however, no definition is given. For this reason, it is usually accepted that the 696/2011 Recommendation for a definition of nanomaterial adopted by the European Commission in 2011 applies, 25 which states: 'Nanomaterial' means a natural, incidental or manufactured material containing particles, in an unbound state or as an aggregate or as an agglomerate and where, for 50% or more of the particles in the number size distribution, one or more external dimensions is in the size range 1 nm-100 nm. In specific cases and where warranted by concerns for the environment, health, safety or competitiveness, the number size distribution threshold of 50% may be replaced by a threshold between 1% and 50%.
The definition also considers fullerenes, graphene flakes and single wall carbon nanotubes with one or more external dimensions below 1 nm as nanomaterials.
It differs from the ISO definitions presented earlier by excluding nanostructured materials, except aggregates and agglomerates of primary nanoparticles, and by defining a threshold of 50% on the number-based size distribution.
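For illustration, the number-based threshold test of the Recommendation can be sketched as follows. This is a simplified reading of the definition, assuming one equivalent diameter per particle (as from particle-counting data); the function name and the example log-normal distribution are assumptions, not part of the Recommendation.

```python
import numpy as np

def is_nanomaterial_eu(diameters_nm: np.ndarray, threshold: float = 0.5) -> bool:
    """Recommendation 696/2011 test: >= 50% of particles (by number) with an
    external dimension in 1-100 nm; the threshold may be lowered to between
    0.01 and 0.5 in specific cases."""
    in_range = (diameters_nm >= 1.0) & (diameters_nm <= 100.0)
    return bool(in_range.mean() >= threshold)

# Hypothetical log-normal size distribution with an 80 nm median:
rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=np.log(80), sigma=0.6, size=10_000)
print(is_nanomaterial_eu(sizes))  # True: ~64% of particles fall in 1-100 nm
```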
The Recommendation also contained a deadline for its revision by December 2014, which was not met. Nevertheless, the Joint Research Centre of the European Commission has recently completed a series of three reports that evaluate the current definition and present options in view of its revision. [37][38][39] It can therefore be reasonably expected that the definition contained in Recommendation 696/2011 might evolve in the coming months.
Plastics
Plastic materials and articles are covered by Regulation 10/2011 32 and its amendments. Naturally occurring macromolecules that are not chemically modified are not within the scope of the Plastics Regulation; e.g. starch-based polymers are not in scope, while polymers based on chemically modified starch are covered. The use of additives to modify the macromolecule as such is not considered a chemical modification. For example, plasticized starch-based polymers would not be covered by the Plastics Regulation.
Only nanoparticles authorized and specifically mentioned in the specifications of Annex I of the Regulation can be used in plastic packaging. This applies also to nanoparticles intended to be used behind a functional barrier. 1 The nanoparticles initially listed in the specifications, and thus authorized, are silicon dioxide and carbon black. 2 Titanium nitride (in nanoform) is also authorized in Annex I, but under specific use conditions. The Annex has since been amended, based on several opinions published by the European Food Safety Authority (EFSA) on nanomaterial usage in FCM, and other materials have been added, however with specific conditions of use; these are mentioned later. [40][41][42][43][44] Given the few materials currently authorized for use, most of the nanoparticles with potential for improving functionality in plastic packaging materials mentioned in the next chapters are not authorized at EU level, even if their bulk form is authorized. Before these substances can be authorized and used in plastic FCMs, an application for their authorization will need to be submitted containing specific information regarding toxicology and possible exposure.
Nanoparticles used as antimicrobials to keep the surface of FCMs free from microbial contamination (surface biocides), and which do not exhibit an antimicrobial function on the food, are not currently included in the list of authorized substances in the EU. However, 10 silver-containing antimicrobials have been evaluated, to which the EFSA has responded favourably.

1 A functional barrier is a layer that reduces the migration of a substance from behind the layer into food to a non-detectable level, with a detection limit of 10 ppb.
Active and intelligent materials and articles
Active and intelligent materials are covered by Regulation 450/2009. 36 Active materials are materials and articles that are intended to extend shelf-life or to maintain or improve the condition of packaged food; they are designed to deliberately incorporate components that would release or absorb substances into or from the packaged food or the environment surrounding the food. Intelligent materials are materials and articles, which monitor the condition of packaged food or the environment surrounding the food.
Substances forming the passive part of the active or intelligent material are regulated by their corresponding specific measure, e.g. plastics by the Plastics Regulation and rubber by the framework regulation and national legislation.
Substances released into food to become a component of the food must comply with food legislation. For example, a released antioxidant must be authorized in food as antioxidant in the food additives legislation. 45 Substances released or grafted that are intended to exhibit an antimicrobial function on the food need to be authorized as preservatives in the food additives legislation. Food additives are authorized for particular food types only. Currently no nanoparticles have been authorized as food preservatives or antioxidants in the EU. If such a nanoparticle were to be used in active materials then an application for authorization would need to be submitted under the authorization scheme for food additives. For substances forming part of the active or intelligent component, and which are not intentionally released, the Regulation (EU) No. 450/2009 36 provides the authorization scheme.
Currently, a list of authorized substances for use in active and intelligent packaging applications for the EU is under development. Once complete, only those substances listed will be authorized for use in active and intelligent packaging components. This will also apply to nanomaterials such as those in intelligent indicators, e.g. nanopigments or nano-based colour systems even if used behind a functional barrier. Until the EU list is established, the general rules of the Framework Regulation and national legislation apply.
Regenerated cellulose film and ceramics
In the following sections, no applications of nanomaterials are described for use in regenerated cellulose film (RCF) or ceramics. If uses in RCF were envisaged in the future, they would be covered by the authorization scheme foreseen for RCF. 34 The current list of substances that can be used in RCF does not cover nanomaterials. Regulation (EC) No. 1935/2004 31 requires any user of an authorized substance to inform the Commission of any new scientific information which might affect the safety assessment of the authorized substance. Therefore, any authorized substance that was envisaged to be used as a nanoparticle would need a new application for authorization.
Legislation on ceramics 33 is only harmonized for the migration of lead and cadmium. Any use of nanoparticles would be covered by national legislation.
MATERIALS NOT COVERED BY EU HARMONIZED SPECIFIC MEASURES
All other materials have to comply with the general safety requirements of the Framework Regulation and specific national legislation. The Framework Regulation requires, in particular, that materials and articles should not release their constituents in concentrations that could:
• endanger human health,
• bring about an unacceptable change in the composition of the food, or
• bring about a deterioration in the organoleptic characteristics thereof.
SPECIFIC CASES
Nanocoatings of, e.g., silicon dioxide that are applied on a plastic layer are covered by the rules of the Framework Regulation. Inorganic coatings such as silicon dioxide coatings are not covered by Regulation (EU) No. 10/2011. 32 EFSA has given a positive opinion on one silicon dioxide nanocoating. 46 Pigments used in printing inks or in plastics are not covered by an EU specific measure. They have to comply with the general safety requirements of the Framework Regulation and specific national legislation. If they were to be used in intelligent components of intelligent materials, they would fall under Regulation (EC) No. 450/2009. 36 Surface biocides used in FCMs other than plastic FCMs are not covered by an EU specific measure. However, they are covered by the biocidal products regulation. 47
REGULATORY FRAMEWORKS IN THE UNITED STATES
The responsibility of the US Food and Drug Administration

The US Food and Drug Administration (FDA), an Agency within the US Department of Health and Human Services, has the responsibility for the safety and efficacy of drugs and devices for humans and animals, products that emit radiation, biological products for humans, foods (including direct and indirect food additives, food contact substances and dietary supplements), colour additives and cosmetics. The FDA is also responsible for advancing the public health by helping to speed innovations that make foods and medicines more effective, safer and more affordable, and helping the public get the accurate, science-based information they need to use medicines and foods to improve their health.
Nanotechnology and food contact substances
A new food contact substance (FCS) must be the subject of an effective food contact notification (FCN) to be lawfully used in the United States. For new authorization of an FCS, FDA focuses on particle size when it is important for the identity of the food contact substance, when it impacts the functionality of the food substance or when it impacts the intended technical effect. The FDA does not have a bias either for or against nanotechnology as it applies to food additives or food packaging. If the nanomaterial in question can be shown to be safe under the intended conditions of use, it can be used in contact with food.
Guidance for approval of new food contact substances
The FDA offers guidance to the regulated industry on the submission of FCNs, for new FCMs. While the current guidance documents for FCNs 48 do not presently make specific recommendations regarding nanomaterials, the chemistry guidance does address the issue of properties that are specific to particle size. A document entitled Guidance for Industry: Assessing the Effects of Significant Manufacturing Process Changes, Including Emerging Technologies, on the Safety and Regulatory Status of Food Ingredients and Food Contact Substances, Including Food Ingredients that are Color Additives was issued in June, 2014, and provides some of the Agency's current thinking on FCS manufacturing relating to nanotechnology. 49 More specific requirements will be addressed in future revisions of the guidance documents. Until that time, the FDA offers informal advice on the issues of interest when evaluating these materials. Its general recommendations are presented hereafter.
Chemistry considerations
Identity. If the particle size is important for the FCS to achieve its intended technical effect, such that the additive is produced or processed using techniques or tools that manipulate the particle size and may contain altered particles that are formed as manufacturing by-products, data on the size (average and distribution), shape, surface area (average and distribution), surface charge (zeta potential) and morphology of the particles, as well as any other size-dependent properties (e.g. agglomeration, aggregation and dispersion) should be included as appropriate.
Any such size-dependent characteristics of the FCS would need to be described. Replacing an existing FCS with a nanoscalar version might have significant safety implications.
Intended technical effect and use. A clear statement of the intended technical effect(s) of the FCS in food is a necessary component of an FCN. If the technical effect of the FCS is related to particle size, the statement should explain how size-dependent properties of the FCS affect functionality (e.g. solubility, viscosity, stability, antibacterial properties and antioxidant properties).
Impact on safety of the food contact substance
The replacement of an existing FCS with a nanoscalar version, or the introduction of a new nanoscalar additive, might require safety considerations in addition to those in use for traditional (non-nano) additives. For example, are the uptake, absorption and bioavailability of the modified product different from those of the conventional product? Are new impurities detected at concentrations that are of concern? Are there new toxicology issues that were not previously addressed?
Considering the basic feature of nanomaterials, the very small particle size, it is understandable that introduction of a new nanomaterial as an FCS might introduce new issues that warrant additional or different evaluation during a safety assessment of a food substance or might raise new safety issues that have not been seen in their traditionally sized counterparts.
There is significant debate in the toxicology community regarding the correct and appropriate testing to judge the safety of nanomaterials. Extrapolation from data on traditionally manufactured food substances can generally be conducted only on a case-by-case basis. Safety assessments should be based on data relevant to the nanoscalar version of the FCS.
Presently, not all nanoscale FCSs may fit the general guidance; hence, the tiered toxicity testing recommendations outlined in the guidance to industry may not apply to all nanoparticles. Moreover, little is currently known concerning the in vivo toxicity of nanoparticles via the oral route of administration, and none of the in vitro assays used to evaluate genotoxicity have been validated for use with nanoparticles. As nanoparticles may present unique challenges in the assessment of their genotoxicity in vitro and their toxicity in vivo, the FDA suggests that any submission concerning these nanoparticles consider the validity of the test protocol and the applicability of that protocol to the test substance, to assure safety with regard to its relationship to the substance notified for food contact uses.
In addition to addressing the safety of the nanoparticles based on the dietary exposure as outlined in FDA's toxicology guidance, the following should be considered:
• The form of the FCS migrating to food is important. As an example, if only the dissociated ionic form of a nanoparticle metal FCS migrates to food, it may be appropriately conservative to use available toxicity data for the appropriate metal salt to support the safety of the FCS. Should the nanoparticle itself migrate to food, it would be necessary to address the safety of the nanoparticle form of the FCS.
• As stated earlier, the applicability and suitability of the use of a given in vitro genotoxicity assay in the safety assessment of a given nanoparticle should be carefully considered, with particular attention paid to the issues of excess cytotoxicity or precipitation. Assessment of the agglomeration/aggregation characteristics and other relevant physico-chemical characteristics of the nanoparticles in the media used in the in vitro test system should be performed.
• For in vivo studies via the oral route of administration, the test substance would be given either in the drinking water or in the diet. Given the fact that agglomeration of nanomaterials becomes more of an issue at high concentrations, gavage with a concentrated solution of a nanomaterial may induce high levels of agglomeration in vivo, decreasing the bioavailability of the nanomaterial. Assessment of agglomeration/aggregation characteristics in the drinking water or feed matrix would be important, but in vitro pH studies of particle agglomeration at pH 1 and 9 in the absence of feed matrix will not be very relevant to the agglomeration state of the particle in the gut.
• Should the in vivo micronucleus assay be the only genotoxicity test able to be used with the test substance, the toxicity of the test substance to the bone marrow should be evaluated to ensure that the test substance reached the target site. Routes of administration other than oral (inhalation and intraperitoneal) may be used for this assay, should oral administration not deliver the test substance to the target site.
• Should the notifier wish to use alternative methods for the assessment of the genotoxicity of the nanoscalar FCS that are not currently recommended in the FDA Redbook 2000, consultation with the FDA via a pre-notification consultation (PNC) is strongly recommended to discuss the feasibility of this approach.
• In conducting in vivo toxicity studies, careful attention should be paid to the issue of dosimetrics. Consideration of surface area and particle number, as well as mass concentration, in the study design is appropriate; the sketch after this list illustrates why.
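To illustrate the dosimetric point in the last bullet, the sketch below converts a mass dose into particle number and total surface area, assuming monodisperse spheres. The helper and the example values are hypothetical and purely illustrative; real nanoparticle populations are polydisperse and often agglomerated.

```python
import math

def particle_metrics(mass_ug: float, diameter_nm: float, density_g_cm3: float):
    """Particle number and total surface area (cm^2) for a given mass dose,
    assuming monodisperse spheres -- illustrating why mass alone is an
    incomplete dose metric."""
    d_cm = diameter_nm * 1e-7
    volume_cm3 = math.pi / 6 * d_cm ** 3
    n_particles = (mass_ug * 1e-6) / (density_g_cm3 * volume_cm3)
    surface_cm2 = n_particles * math.pi * d_cm ** 2
    return n_particles, surface_cm2

# The same 1 ug dose (density of TiO2 used as an example): the 20 nm version
# carries 125x more particles and 5x more surface area than the 100 nm one.
print(particle_metrics(1.0, 20, 4.23))
print(particle_metrics(1.0, 100, 4.23))
```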
IMPACT OF NANOTECHNOLOGY ON REGULATORY STATUS
The regulatory status of nanoscale versions of FCSs is frequently questioned. In the absence of formal guidance, FDA has been addressing these questions on a case-by-case basis. For currently approved direct and indirect food additives, and food contact substances, the use of the additive is no longer in compliance with an existing regulation if the change in manufacturing practice alters the chemical such that the chemical identity and composition are no longer the same as the approved compound, the use or intended use is no longer in conformity with the regulation in Title 21 of the US Code of Federal Regulations, or the quantity of the additive in food renders it injurious to health.
For FCSs, as defined under Section 409 (h)(2)(C) of the Federal Food, Drug and Cosmetic Act, a FCS approval does not apply to a similar or identical substance manufactured or prepared by a person other than the manufacturer identified in the notification. FDA's long-standing guidance to industry on administrative aspects of a food contact notification advises that a new notification should be submitted if substantive changes are made in the specifications for the FCS or if significant changes are made in the manufacturing method that result in substantive changes in the identity of the product or its impurities, and/or levels of impurities.
For substances Generally Recognized As Safe (i.e. having GRAS status) under the provisions of Title 21 of the US Code of Federal Regulations, it is the obligation of the manufacturer to demonstrate whether the ingredient has been affirmed as, or is otherwise, GRAS. 49 Relevant to such a determination are the identity of the food substance and its conditions of use as described in the administrative record for a substance affirmed or identified as GRAS.
Responsibility of the manufacturer or user
Ultimately, it is the manufacturer's responsibility to ensure that the foods and food components that they bring to market are safe and lawful. Thus, the manufacturer or user has an obligation to take all appropriate steps to ensure that the substance as manufactured is safe and lawful under the conditions of its intended use. FDA encourages food manufacturers to conduct a thorough safety assessment of all manufacturing changes.
Guidance for currently authorized food products
As mentioned earlier, a guidance document has been published in June 2014 to address the impact of manufacturing changes, including nanotechnology, on the regulatory status of FCSs. 49 The FDA encourages manufacturers to consult this guidance document, as well as other relevant guidance documents (available at www.fda.gov/food/IngredientsPackagingLabeling/default.htm) and to consult with the FDA before undertaking any experimental activities in support of submission of a future FCN so that pertinent issues can be discussed and mutual understanding on issues be achieved.
ENHANCEMENT OF FOOD PACKAGING PROPERTIES THROUGH NANOTECHNOLOGY
The primary purpose of food packaging is to keep the packaged product fresh, safe and secure and to prevent damage during transportation and storage. Packaging can also extend the shelf-life of food products by controlling the transfer of moisture, gases, flavours and taints, and thus plays a significant role in reducing food waste worldwide. The food packaging in use today is the result of decades of evolution, and during this period a wide range of materials has been utilized, from wooden crates, cardboard boxes, paper bags and glass bottles to modern polymers that offer stronger, lightweight, recyclable and in some cases functional materials.
However, all packaging materials have some drawbacks and limitations. Glass and tinplate provide a perfect barrier to gases and vapour and are recyclable, provided the right infrastructure is in place. However, they have a high specific weight and their production is considered energy intensive in comparison with other materials.
Paper-based materials, on the other hand, are often considered as materials with low environmental impact, as they are produced from renewable resources and are in most cases recyclable. However, they lack mechanical properties (e.g. tear resistance) and product protection properties because they have very low barrier performance to gases and vapour.
Plastic materials provide a barrier to either oxygen (or other gases) or water vapour, but rarely to both. In addition, they are in many cases transparent, which can be seen as a benefit for some applications but can lead to photo-induced degradation in some products.
In the case of flexible packaging applications, these limitations are often overcome through combining different layers of materials with different properties and functions. Multi-layering approaches are also used for rigid packages but are technically more difficult to achieve.
Blending of polymers with different properties is another common way to improve the properties of plastic packaging materials.
The ability to combine different materials allows packaging to be developed with all the desired properties for a specific application. However, it significantly increases complexity and cost and can limit the recovery and valorization potential of the materials.
Nanotechnology has the potential to improve the properties of different packaging materials and is thus seen as a technology that can provide enhanced protection without some of the previously mentioned drawbacks and complexity of currently used approaches, potentially providing benefits in terms of reduced material use and cost and improved environmental performance.
The application of nanotechnology in packaging polymers has so far taken two major routes: (a) continuous nanocoatings applied on the polymer surface 3 or (b) dispersion of nano-objects or nanophases within a polymer matrix. These approaches are aimed at enhancing barrier, mechanical and/or other functional properties of the packaging materials.
Nanomaterial-polymer composites for improved barrier properties
The incorporation of nanoparticles into polymer materials is reported to significantly enhance barrier properties. 4,12,50,51 Nanoclay-polymer composites, for instance, have exhibited excellent gas barrier properties. The nanoclay mineral most often used to achieve these improved gas barrier properties has been montmorillonite (also known as bentonite), which has a natural nanoscalar layered structure that can restrict the permeation of gases when incorporated into a polymer. It has the added advantage that it is relatively inexpensive and also widely available.
Nanoclays have been added to a range of polymers, with a variety of suggested food packaging applications, such as for processed meats, cheese, confectionery, cereals and boil-in-the-bag foods. Their application in extrusion-coated packaging materials for fruit juices, dairy products and co-injection processes for the manufacture of beer and soft drink bottles has also been suggested. Polymer materials modified with nanoclay include multi-layered film packaging, polyethylene terephthalate (PET) bottles for carbonated drinks and thermoformed containers. A few of these materials are available commercially, and their use has been reported in some countries for bottling beer and other beverages. 7,8,52 Moisture and gas barrier properties of bio-based polymer packaging materials such as starch and polylactide have also been improved with the use of nanomaterials. 51,[53][54][55][56] A nanoplate material rapidly gaining attention is graphene. Graphene is a single layer of carbon atoms with each atom bound to three neighbours in a honeycomb structure. 18 Several authors have reported barrier improvements by using graphene sheets or graphene oxide in polymer composites. [57][58][59][60] The mechanism underlying the barrier improvement of nanoplate-based composites is the well-known 'tortuous path' principle, schematized in Figure 2.
High improvement factors can only be reached when the plates are dispersed in the matrix and oriented perpendicularly to the migration direction. 61 A perfect orientation of the particles is difficult to achieve in bulk materials or even in films. New technologies have been developed to address this, such as polymer-based coatings highly loaded with nanoplates or, more recently, layer-by-layer deposition of nanocomposites alternating organic and inorganic layers.
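As a rough quantitative illustration of the tortuous-path effect, Nielsen's classic model estimates the relative permeability of a matrix filled with impermeable plates oriented perpendicular to the diffusion direction. The formula is a standard textbook approximation, not one taken from this paper or its references, and the example values are invented.

```python
def nielsen_relative_permeability(phi: float, aspect_ratio: float) -> float:
    """Nielsen model: P_composite / P_matrix = (1 - phi) / (1 + (alpha/2) * phi),
    with phi the plate volume fraction and alpha = length/width the aspect
    ratio, for plates perpendicular to the diffusion direction."""
    return (1.0 - phi) / (1.0 + (aspect_ratio / 2.0) * phi)

# 5 vol% of well-exfoliated plates with aspect ratio 100 cuts permeability
# to roughly 27% of the neat polymer's value.
print(nielsen_relative_permeability(0.05, 100))  # ~0.27
```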
Layer-by-layer is a method by which a multilayer coating/film of nanometre-thick layers is produced by sequential adsorption of oppositely charged polyelectrolyte on a solid support.
Several authors reported the fabrication of multilayered films by depositing intercalated layers of anionic sodium montmorillonite clay and cationic polymer on a polymer substrate yielding materials with extremely good barrier properties. 53,59,[62][63][64][65] Another means of improving the barrier properties of a polymer film is the deposition of a thin, metal or metal-oxide or organic-based coating. Nanocoatings have a thickness in the nanometre range and comparatively infinite size in the other dimensions that are deposited on the surface of a substrate (e.g. metallization or SiOx coating). Nanocoatings have been available for decades, and some are extensively used today (e.g. metallization). The principle is to coat a substrate with a thin layer of organic or inorganic material that can act as a barrier to permeation and migration, as chemical protection for the substrate, or as surface property modifiers. 4
Nanomaterial-polymer composites for improved mechanical properties
The incorporation of nanomaterials has been shown to improve certain mechanical properties of polymers, such as flexibility, durability, temperature/moisture stability, etc. Nanoclays have been shown to improve the mechanical properties of a range of thermoplastic, thermoset and elastomer polymers, such as polyamides, polyolefins, polystyrene, ethylene-vinylacetate copolymer, epoxy resins, polyurethane, polyimides and polyethylene terephthalate. 50,54,55,57,[66][67][68][69][70][71][72] Some nanomaterials have been used as UV absorbers (e.g. titanium dioxide) to prevent photo-degradation of plastics such as polystyrene, PE and PVC. Nanozinc sulfide, in combination with organic stabilizers, has been reported to improve the durability of plastic polymers. 50
Special case: bio-based materials
Biopolymers have attracted considerable attention as possible replacements for conventional oil-based plastic packaging materials. The term bioplastic encompasses a whole family of materials, which differ from conventional plastics insofar as they are bio-based, biodegradable or both. 73 Bio-based polymers may be derived from plant materials (starch, cellulose, other polysaccharides and proteins), animal products (proteins and polysaccharides), microbial products (e.g. polyhydroxybutyrate, PHB) or polymers synthesized chemically from naturally derived monomers (e.g. polylactide and bio-polyethylene).
For food packaging applications, biopolymers typically have mechanical and barrier properties lower than those of conventional plastics, which limits their industrial use. Especially challenging is the development of a moisture barrier, owing to the hydrophilic nature of most biopolymers.
There are also bio-based nanomaterials, which are receiving increasing attention. These are nano- or microfibrillated cellulose as well as cellulose nanocrystals. These materials, directly extracted from natural fibres, can improve strength and gas barrier properties but are still very sensitive to moisture. Their application in stand-alone films, coatings and fillers in nanocomposite systems has been explored. Although these materials are still in their infancy, the amount of research performed on them is significant, as can be seen from the review articles by Siqueira et al. 74 and, more recently, by Li et al. 75 Research is ongoing to overcome these inherent shortcomings of biopolymer-based packaging materials, with nanotechnology being one avenue of exploration. 51,53-56,59,65,66,68 Nanomaterial-biopolymer materials have the potential to match many of the properties required for packaging applications; hence, their use could increase the potential for biopolymer use.
Active packaging
Recent developments in blends have allowed the production of monolayer bottles with oxygen scavenging properties. These consist of blends of PET and an oxygen absorbing polymer (typically polyamides or co-polyesters), where the latter is separated, through processing and compatibilization of the two polymers, into nanoscalar inclusions in the matrix. In most cases, the reaction is catalysed by the presence of cobalt dissolved in the polymer matrix. A strong reduction of the size of the included polymer domains is desirable, both for improving the properties of the blend and for retaining good transparency of the resulting material. 4 To achieve the latter, the domain size must clearly be below the wavelength of visible light, hence in the nanometre range (Figure 3). Antimicrobial packaging has received extensive attention in research, especially through the application of nanotechnology.
As mentioned earlier for oxygen scavenging applications, blending can be used as a means of providing antimicrobial properties to a material, e.g. through controlled release of natural or synthetic antimicrobial agents. 76 Certain metal and metal oxide nanomaterials are known to have strong antimicrobial properties. Their incorporation into polymer materials has led to the development of FCMs with antimicrobial surface properties. These nanomaterial-polymers are claimed to preserve packaged foodstuffs for longer by inhibiting the growth of micro-organisms at the food contact surfaces. Examples include nanosilver, which is claimed to add antimicrobial and anti-odorant properties to plastic food storage containers and bags. 12,67,77-80 The discovery of antimicrobial properties of nanozinc oxide and nanomagnesium oxide may provide further opportunities for the use of less expensive nanomaterials for antimicrobial food packaging materials. 81 A plastic wrap containing nano-zinc oxide is available in some countries, which is claimed to keep the packaging surfaces hygienic under indoor lighting conditions. 27 Another way to include active functionality through nanotechnology is by grafting active components (anti-microbials, scavengers and anti-oxidants) onto clay nanoplates. This approach allows the active compound to be efficiently dispersed in the matrix, while improving other properties linked to the presence of clay, as detailed earlier. 82-84

'Intelligent' (or 'Smart') packaging concepts

Nanomaterials can also be used to monitor the condition of packed foods and provide a visual indicator of condition. This functionality is enabling the development of novel intelligent packaging concepts, which can be in the form of sensors (bio or temperature) or nanomaterial-based 'intelligent' inks, which can be printed on labels or incorporated in coatings for food packaging. 85-87 Such labels could, for example, show the consumer that the product is safe to consume or that the package integrity has not been compromised in the supply chain. Other labels can determine the level of microbial activity in a package by indicating when the food starts deteriorating, or can provide an indication of time-temperature exposure to show whether the cold chain has been breached. 88-90 Nanomaterials are also increasingly found in authentication technologies for food products, for instance in nanobarcodes or taggants incorporated into printing inks or coatings. Another area of foreseen application for these materials is in the development of printed electronics for packaging applications, either as pigment nanoparticles for the inks or via other deposition technologies. Such technologies are expected to appear in anti-counterfeiting and traceability applications and/or to provide other intelligent features. An extreme example of this is given by Vicente et al., presenting printable solar cells on liquid carton packaging. 91
Nanostructured surfaces
The surface of packaging is the part that will interact most with either the consumer or the product. In this sense, modifying its properties can add new functionalities to packaging materials.
Nanostructures at the surface of a material can affect its compatibility with other materials and products as well as induce light diffraction in a controlled way to generate special optical effects.
Structured surfaces can be obtained through material addition (e.g. deposition, coating and printing), material removal (e.g. etching) or by actual structuring of the surface itself (e.g. through nanoimprinting).
Nanostructuring of a surface can drastically change its adhesive properties in one way or another, as can be observed in natural instances, such as the lotus or gecko effects. Such structures have, for example, shown a drastic reduction of biofilm formation by reducing the adhesion of bacterial cells to solid surfaces. 92,93 Although not yet broadly applied, the authors believe that nanostructured surfaces will play an important role in providing new functionalities to packaging in the future.
MIGRATION, OTHER EXPOSURE ROUTES AND ENVIRONMENTAL CONSIDERATIONS
Despite the anticipated benefits of using nanomaterials in food packaging, their larger scale adoption would require that open questions regarding effects on consumer health and the environment are addressed. These questions, and consequently the challenges to broader use, have been highlighted in recent publications, 7,8,52 and discussed at several meetings, workshops and conferences. 94-96 The main issues relate to uncertainties and knowledge gaps regarding the possible health effects of nanomaterials when used in food packaging and the lack of analytical tools and methods to assess the migration of nanomaterials in particulate form to food products, which are necessary to assess potential exposure.
To address the open questions, it is imperative that the effects and impacts of nanomaterial containing packaging on food quality, consumer health, and the environment are investigated and understood to facilitate further developments in this area.
This chapter provides an overview of the current state-of-knowledge in respect to these issues.
Migration testing
Nanomaterials incorporated in plastic polymers are present in embedded, fixed or bound forms. Like other chemical constituents in FCMs, nanomaterials could have the potential for transfer to foods with which they are in contact. Thus, a fundamental question in relation to compliance and safety is whether or not any nanoparticles can migrate from such materials to packaged foodstuffs in significant quantities. Nanomaterial migration can occur through mechanisms such as dissolution of the compound in a simulant or food, actual diffusion of particles, or transfer through abrasive action on the surface of the FCMs.
Today, the migration of solubilized nanoparticles can be readily evaluated, as well-established detection methods already exist.
On the other hand, assessing the migration of nano-objects in particulate form is more complex but remains of prime importance as their possible occurrence in food represents the main source of uncertainty regarding safety.
Currently, the level of nanoparticle migration expected is very low (as shown in subsequent sections). Consequently, the methods used need to allow separation, concentration and determination of the chemical identity as well as the particulate nature of nanomaterials. The possible creation of new nano-objects, for instance during the concentration phase (e.g. through precipitation of solubilized compounds), must also be avoided.
The behaviour of nanomaterials is governed not only by the chemical composition and the size of the particles but also by other physico-chemical properties such as surface composition, surface morphology, surface charge and distribution of charges, and also the thickness of interfacial membranes. 97 Multiple techniques are therefore needed for characterization. Generally, methods can be classified as those yielding information on (a) composition, (b) morphological structure and (c) physico-chemical properties.
The description of these methods is out of the scope of this paper but can be found in the following references: 37,97-99. This paper will instead focus on the methods used for the assessment of migration from FCMs containing nanomaterials.
Migration tests
Regulations require that packaging materials be tested for their suitability for use in food contact applications. These tests are generally carried out using food simulants that mimic the migration effects in different types of foods. In general, it is accepted that this same approach is also appropriate for assessing the migration of nanoparticles. 40-43,100,101 In Europe, migration testing for nanomaterials will need to be carried out in accordance with the existing legislation, 32 which defines the conditions and the simulants for plastic FCMs. Migration testing is carried out under conditions that are equivalent to the worst foreseeable conditions of use. The exposed simulant samples and controls (blanks and overspikes) are subjected to analysis for the detection and quantification of migrating substances using a range of analytical methods.
Chemical characterization of migrants
Any compounds migrating from the packaging to the food or simulant can be assessed by different means, depending on the chemical nature of the expected migrants.
In some cases, the migrants can be directly analysed, while other applications require extensive sample preparation, especially when migration is measured directly into food rather than food simulants.
The most commonly used sample preparation method is matrix degradation. Here, either enzymatic or chemical (alkaline or acidic) digestion or thermal degradation of the matrix is carried out, followed by dissolution of the obtained ashes.
Once sample preparation is completed, the resulting solution or suspension can be analysed to detect compounds or elements contained in the nanomaterial of interest.
In the case of inorganic nanoparticles, the most commonly used analytical method is inductively coupled plasma (ICP) coupled with either mass spectroscopy (ICP-MS) or atomic emission spectrometry (ICP-AES), also referred to as optical emission spectroscopy (ICP-OES).
In these methods, the sample is directly vaporized in plasma where the elements composing the substance studied are liberated as atoms or ions.
In the case of ICP-MS, the formed ions are analysed by mass spectroscopy, whereas for ICP-AES, the radiation generated by the transition between excited and relaxed states of the atoms or ions is detected. The energy of these transitions is element dependent and can therefore be used for identification. 102 Another commonly used analytical technique is atomic absorption spectroscopy, which determines the concentration of a specific element in an atomized gas using the element-specific absorption of light. Different atomization technologies can be used, the most common being flame and graphite tube atomizers.
Being the most sensitive, ICP-MS is the most widely used technique in migration studies.
Physical characterization of migrants
The chemical identification methods presented earlier allow migrants to be identified along with their concentrations. They do not, however, identify the nature of the migration, i.e. whether it happens through solubilization of components or through actual particle transfer. In order to assess the risk of migration from FCMs, it is essential to understand the mechanism behind the migration, as the toxicological properties of a substance in particulate form could differ from those in solution.
In order for particles to be identified, sample preparation must be performed in such a way as not to alter potentially present particles or to create new ones.
As in the case of chemical analysis, matrix digestion can be applied. Care must be taken to ensure that target nanostructures are not broken down or digested. For example, if the nanostructure is composed of organic compounds, it may be subject to hydrolysis or oxidation reactions that may alter or destroy it. Where nanostructures are initially present in a packaging material, this scenario is relatively easy to test, as the nanomaterial is known and could be obtained and tested separately.
In some cases, extracts obtained using food simulants can be further processed without digestion.
Once the matrix has been digested, nanostructures present can be concentrated or extracted by filtration, centrifugation or solvent extraction. It should be noted that separating nanostructures using these traditional separation techniques is not trivial, and other separation techniques have been developed recently, some of which are described in the following paragraphs.
A very powerful separation technique that promises to be of particular importance for the determination of particulate migration is field flow fractionation (FFF). FFF allows the separation of mixtures that contain compounds that vary substantially in their molar mass. 103 Fractionation of macromolecules from nanoparticles or microparticles is therefore made possible. 104 In principle, the technique is a liquid chromatography-type elution method where the separation is accomplished by applying a fractionation flow field perpendicular to the flow field that is established to transport the sample through the separation chamber. During the application of the fractionation flow field, the sample is displaced to the outer walls of the separation chamber. After a brief relaxation period, small molecules or structures rapidly diffuse back into the centre of the separation chamber. The subsequently applied transport flow field, which typically has a parabolic flow profile, rapidly transports the small structures in the centre of the chamber, where flow speeds are highest, to the outlet, while larger particles exit the chamber later.
An evolution of FFF, asymmetric flow FFF (AF4), has recently gained much attention as a separation technique, especially for the determination of nanomaterials in food, but also for testing migration from FCMs. 105-109 In contrast to conventional FFF, the cross flow is generated by a single porous membrane at the bottom of the system, enabling a more efficient size separation.
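Because normal-mode FFF and AF4 separate on the basis of diffusion, the elution order can be rationalized with the textbook Stokes-Einstein relation. The following is a minimal sketch using that standard physics with illustrative values (water at room temperature); none of the numbers are taken from the cited studies:

```python
# Minimal sketch: Stokes-Einstein diffusion coefficients, which govern how
# quickly particles relax back towards the channel centre in (A)FFF and hence
# their elution order. All values are illustrative.

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 298.15          # temperature, K
ETA = 8.9e-4        # dynamic viscosity of water at 25 C, Pa*s

def stokes_einstein_d(radius_m: float) -> float:
    """Translational diffusion coefficient of a sphere in a liquid (m^2/s)."""
    return K_B * T / (6.0 * math.pi * ETA * radius_m)

for d_nm in (5, 50, 500):
    r = d_nm * 1e-9 / 2.0
    print(f"{d_nm:4d} nm particle: D = {stokes_einstein_d(r):.2e} m^2/s")
# Smaller particles diffuse faster, sit nearer the fast channel centre,
# and therefore elute earlier in normal-mode FFF.
```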
Other separation techniques that are under investigation to detect nanostructures in food include hydrodynamic chromatography, where a size-dependent exclusion from the wall of a microchannel allows the separation of larger particles from smaller ones. The laminar flow profile that is established in the small microchannel prevents larger structures from entering the slow-flowing regimes near the wall. As such, larger structures are ejected faster from the microchannel, while smaller ones remain in the separation chamber longer.
These separation techniques can then be coupled with various detection techniques such as multi-angle laser light scattering (MALLS), ICP-MS, UV-Vis spectroscopy and dynamic light scattering, among others, or combinations thereof. 37,98 Another particle detection method that is gaining interest is single particle ICP-MS. In this method, a highly diluted suspension of nanoparticles is injected into the ICP-MS. The dilution ensures that any particles present in suspension enter the plasma one by one, each creating a high-intensity peak in the signal thanks to the momentarily high concentration of the analysed element. The presence and intensity of these peaks allow both the identification of particles and the determination of their size. A constant background signal indicates the presence and concentration of the substance in a dissolved state. 109,110 Transmission and scanning electron microscopy or other imaging techniques are also regularly used to identify the presence of particles, sometimes coupled with elemental analysis. These techniques, however, imply the extraction of the particles from the fluid they are suspended in, e.g. through evaporation. This sample preparation step could, if the substance is solubilized, generate particles through precipitation, hence potentially leading to erroneous conclusions.
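In single particle ICP-MS, the conversion from the analyte mass detected in one pulse to a particle diameter assumes a solid sphere of known density. Below is a minimal sketch of that standard geometric conversion; the femtogram event mass and the silver density are illustrative, not values from the cited studies:

```python
# Minimal sketch of the size calculation used in single particle ICP-MS:
# each pulse corresponds to one particle, and the element mass detected in
# the pulse is converted to an equivalent spherical diameter.

import math

def sp_icpms_diameter_nm(mass_kg: float, density_kg_m3: float,
                         mass_fraction: float = 1.0) -> float:
    """Equivalent spherical diameter (nm) of a particle from the analyte mass
    detected in a single pulse.

    mass_fraction -- analyte mass fraction in the particle (1.0 for pure metal)
    """
    particle_mass = mass_kg / mass_fraction
    volume = particle_mass / density_kg_m3                  # m^3
    return (6.0 * volume / math.pi) ** (1.0 / 3.0) * 1e9    # m -> nm

# A 1 femtogram silver event (density 10490 kg/m^3) corresponds to ~57 nm.
print(sp_icpms_diameter_nm(1e-18, 10490))
```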
Potential migration of nanomaterials
Despite the need for understanding the migration behaviour of nanoparticles from nanomaterial-polymer based food packaging, the number of studies available in this area remains fairly low and mainly focused on nanosilver and nanoclay materials. It emerges from these few studies that nanoparticles are considered unlikely to migrate to a large extent under normal conditions of use for packaging materials.
Avella et al. 51 determined the migration of Fe, Mg and Si from a biodegradable starch/nanoclay composite film. The study used vegetable samples (lettuce and spinach) placed in bags made of either potato starch, potato starch-polyester blend or their respective composites with nanoclay. After storage for 10 days at 40°C, the vegetables were acid-digested, and the migration of minerals was determined by atomic absorption spectrometry. The results showed no significant increase in Fe and Mg in the vegetables compared with controls, whereas an increase in Si content was recorded. The concentrations of Si detected in the vegetables were 16-19 mg/kg in the case of nanoclay composites of potato starch and potato starch-polyester blend, compared with 13 mg/kg for the same polymers without nanoclay and around 3 mg/kg in unpackaged vegetables.
Xia et al. 111 studied the migration of nanoclay from polypropylene (PP) and PA nanocomposites, showing low levels of migration in the range of 80 to 150 μg/L, depending on the matrix material and the simulant. The characterization of actual particle release was carried out by evaporating the extract on a copper grid and observing it by TEM. A known drawback of this method is that the substance may precipitate during the evaporation. The results showing actual particle migration were, therefore, not conclusive. The authors also studied the evolution of the concentration over time, showing a fairly rapid increase during the first hours or days of the experiment followed by a plateau where the concentration remained constant. This suggests that only particles at the surface of the polymer film were extracted or dissolved.
Farhoodi et al. 112 studied the migration of clay nanoparticles from PET bottles into an acidic food simulant. In this case, a steady increase in the concentration of both aluminium and silicon was observed over a period of 90 days by ICP-OES. The study concluded that nanoclay particles could migrate from the PET nanocomposite, although no data were provided to support this hypothesis. To the contrary, the concentration ratio between aluminium and silicon in the extract was approximately 10 times lower than what is theoretically present in clays; actual particle migration would be expected to display a ratio much closer to the natural one.
Chaudhry et al. 113 studied migration from multilayered PET bottles with nanoclay composites embedded between PET layers. The migration testing was performed using different foods and simulants. ICP-MS analysis showed no detectable migration of clay minerals from PET bottles.
The migration of clays from low density polyethylene bags was studied by Echegoyen et al. 114 Here some aluminium was detected, indicating that migration did occur. This paper also showed that, based on single particle ICP-MS results, a certain fraction of the clay migrated as particles.
Huang et al. 115 studied the migration of nanosilver from commercially available polyethylene plastic bags containing 100 μg silver per gram of plastic material into food simulants. The silver nanoparticles ranged from 100 to 300 nm in diameter. Migration testing was carried out over a range of temperatures between 25°C and 50°C and time intervals between 3 and 15 days. The food simulants used in the study included water, acetic acid (4%), ethanol (95%) and hexane. The study reported migration of nanosilver from the polyethylene bags to the food simulants, which seemed to increase with storage time and temperature. The lowest migration was observed in 95% ethanol, although there was no significant difference in the amounts recorded across the simulants, i.e. they ranged between 0.5 and 1.0 μg/dm2 of the polyethylene material at 25°C, and between 3.0 and 4.0 μg/dm2 at 50°C, after 15 days. Scanning electron microscopy and energy-dispersive X-ray analysis (EDX) were used to confirm the presence and morphology of the migrating silver, and quantification was carried out by atomic absorption spectroscopy. The sample preparation method used for the microscopy experiments involved solvent evaporation; hence, there was potential for creating particles through precipitation. The likelihood of this occurring is considered high because the particles observed were much larger than what could have been expected (300 nm or larger).
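To relate such areal migration values to a concentration in food, the conventional EU assumption that 1 kg of food is in contact with 6 dm2 of packaging can be applied. The sketch below uses the figures quoted above; the conversion is a regulatory convention, not a result of the cited study:

```python
# Minimal sketch converting an area-based migration result (ug/dm^2) into an
# estimated food concentration using the EU convention that 1 kg of food
# contacts 6 dm^2 of packaging.

EU_AREA_TO_FOOD_RATIO = 6.0  # dm^2 of contact surface per kg of food

def migration_to_food_conc(migration_ug_dm2: float) -> float:
    """Estimated concentration in food (ug/kg) from areal migration."""
    return migration_ug_dm2 * EU_AREA_TO_FOOD_RATIO

# The 25 C and 50 C results reported above (0.5-1.0 and 3.0-4.0 ug/dm^2):
for m in (0.5, 1.0, 3.0, 4.0):
    print(f"{m} ug/dm^2 -> {migration_to_food_conc(m)} ug/kg food")
```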
Metak et al. 116 studied the migration of silver nanoparticles from polyethylene containers and films into various foods. A combination of ICP-MS and electron microscopy was used to quantify the amount of silver migrating from the packaging material and to characterize the form in which the migration occurred. In the case of milk powder, electron microscopy coupled with energy-dispersive X-ray analysis was used directly on the product stored in the packaging during the migration test and showed that some silver nanoparticles were released to the product. Liquid products first underwent drying, ashing and finally digestion prior to scanning electron microscopy analysis. It can therefore not be concluded whether the observed particles migrated or whether they were generated during the evaporation process. The conclusions of the paper are, therefore, considered only partly valid.
Chaudhry et al. 113 studied migration from polypropylene food containers, which were claimed to have dispersed nanoparticles of silver within the polymer matrix. Extensive migration testing was performed using different foods and simulant systems. ICP-MS analysis showed the migration of silver from polypropylene to be lower than the limit of quantification.
Cushen et al. also reported the migration of silver nanoparticles from various matrix materials by ICP-MS in a series of publications. 117,118 No information was gained on the nature of the migration, and the assumption of worst case, i.e. particulate migration, was made for the subsequent exposure assessments. A more recent paper, 119 by the same authors, compared TEM images of the silver particles prior to insertion in the matrix and after migration, which showed a significant reduction in size of the particles. These results led them to conclude that the majority of the silver migrated in the ionic form.
Greiner and Hetzer 120 studied the migration of silver from polyethylene films. The conclusion of this work was that the total amount of silver migrating out of the material, as determined by ICP-MS, was low. The authors also used single particle ICP-MS to determine the proportion of particulate migration and found this to be less than 1% of the total silver migration.
The same testing methods were applied by Echegoyen and Nerín; 121 the proportion of particulate migration was much larger, i.e. ranging from 1 to 20% of the total amount of silver migration. It is to be noted that in both cases, the level of migration was below the maximum migration limit for silver found in legislation (valid for solubilized silver).
The difference observed in the fraction of particle migration in the two studies mentioned earlier 120,121 can be due to factors such as the initial concentration or the interaction between the particles and the matrix. It could also be explained by the findings published by Bott et al. in their study on the migration of nanosilver from LDPE polymer. 122 In this study, the migration of silver was tested in various simulants and under different conditions, using ICP-MS and AF4/MALLS to assess particulate migration, yielding similar conclusions to those drawn by Greiner and Hetzer 120 and Echegoyen and Nerín. 121 The authors however further measured the stability of the silver nanoparticles in the food simulants by injecting the same samples hourly over a period of 5 h, which showed a rapid dissolution of the particles into silver ions in acidic food simulants. This could further explain part of the discrepancies observed between different studies. By comparison, silver nanoparticles added to ultrapure water under the same conditions were more stable, with 80% of particles retained after 24 h, demonstrating the validity of the approach.
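If particle dissolution is approximated as a first-order process (an assumption made here purely for illustration, not a claim from the cited study), the stability figure quoted above translates into an apparent rate constant and half-life:

```python
# Minimal sketch: treating nanoparticle dissolution as first-order kinetics
# to turn a "fraction retained after t hours" observation into an apparent
# rate constant. The first-order assumption is ours, not the cited authors'.

import math

def first_order_rate(retained_fraction: float, hours: float) -> float:
    """Apparent first-order dissolution rate constant (1/h)."""
    return -math.log(retained_fraction) / hours

# 80% of particles retained in ultrapure water after 24 h:
k = first_order_rate(0.80, 24.0)
print(f"k = {k:.4f} 1/h, half-life = {math.log(2) / k:.0f} h")  # ~75 h
```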
Bott et al. 123 also demonstrated that carbon black nanoparticles embedded in polyethylene or polystyrene did not migrate, whatever the conditions or simulant used. This study also used a combination of AF4 and MALLS for the detection of particulate carbon black in the extracts. This allows an unambiguous identification of carbon black particles, considered difficult with other techniques due to the chemical nature of the compound, i.e. practically pure carbon. The results were further validated using a simulation approach. The combination of AF4 and MALLS for the determination of migration from nanocomposites was first described by Schmidt et al. 124 in a study aiming to determine whether nanoclay particles migrated from a poly(lactic acid) (PLA) matrix.
In addition to the previously mentioned studies, the EFSA Panel on FCMs, enzymes, flavourings and processing aids (CEF) has issued several scientific opinions on the use of nanomaterials in FCMs. These opinions concern titanium nitride in PET bottles and films, 41,101 silicon dioxide, silanated, 100 kaolin 42 and montmorillonite clay. 44 All these opinions concluded that the use of these materials does not represent a health risk because no exposure would occur, as demonstrated by the absence of migration. The techniques used for the determination of migration were ICP-MS and AF4-MALLS.
A more recent scientific opinion was given on the use of zinc oxide in FCMs, based on migration results obtained by ICP-MS and ICP-AES. 43 The results indicate that the migration of zinc occurs to a significant level, while still remaining compliant with the current specific migration limit (SML) for zinc. Although no proof of the absence of migration in particulate form was given, the panel concluded that the substance does not migrate in nanoform and that the safety evaluation should focus on the migration of soluble ionic zinc. The conclusion is based on the understanding that even if particulate zinc oxide were to migrate, it would immediately dissolve in acidic foods or stomach acid. The panel however recommends that the current SML for zinc should be reduced to take into account zinc exposure from other sources.
There is an additional concern that nanomaterials added to polymer matrices could also influence the migration of other compounds present in the polymer. Chaudhry et al. 113 showed that this was not the case in either of the two nanomaterial-polymer materials tested in their study.
Work by de Abreu et al. 125 investigated the effect of nanomaterials on the migration of caprolactam, triclosan, and trans, trans-1,4-diphenyl-1,3-butadiene (DPBD) from polyamide and polyamide/ nanoclay composites to different food simulants. The presence of the nanoparticles was reported to slow down the rate of migration of (non-nano) substances from the matrix polymer into the food by up to six times. This reduction was not only related to the tortuous path created by the presence of the nanoplates, which is the basic principle of barrier property improvement, but also to a potential interaction between the non-nano substances and the clay particles.
Modelling as assessment tool for migration
In addition to experimental protocols, modelling is a useful approach that can provide information about the potential risk arising from nanoparticles migrating into food from packaging materials. 126-131 A report by the Joint Research Centre of the European Commission 132 provides an excellent overview of available mathematical models that can be used to describe the migration of compounds incorporated in a plastic packaging material with respect to environmental conditions, the composition and structure of the FCM, and the nature of the migrant. Available mathematical equations are largely based on a diffusion-driven mass transport process, which is governed by the diffusion coefficient of the migrant and the partition coefficient for the migrant between the plastic and the food simulant. 133 Experimental studies have shown that available models are capable of predicting the migration of compounds from a packaging matrix into a food, as long as the process is indeed diffusion driven and not due to other mechanisms such as erosion, swelling or abrasion of the packaging materials. For nanomaterials, the modelling may also be used to better understand potential exposure scenarios.
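As a minimal sketch of this diffusion-driven picture, the well-known short-time Fickian solution for migration from a semi-infinite polymer slab, M/A = 2·c0·sqrt(D·t/π), can be coded directly. The input values below are illustrative and are not taken from the cited report:

```python
# Minimal sketch of the diffusion-driven migration estimate used in
# conventional migration modelling (semi-infinite slab, short-time Fickian
# solution, no partitioning limit). All input values are illustrative.

import math

def migrated_mass_per_area(c_p0_mg_cm3: float, d_cm2_s: float,
                           t_s: float) -> float:
    """Short-time Fickian migration M/A in mg/cm^2.

    c_p0_mg_cm3 -- initial migrant concentration in the polymer (mg/cm^3)
    d_cm2_s     -- diffusion coefficient of the migrant in the polymer (cm^2/s)
    t_s         -- contact time in seconds
    """
    return 2.0 * c_p0_mg_cm3 * math.sqrt(d_cm2_s * t_s / math.pi)

# 10 days of contact for a migrant at 1 mg/cm^3 with D = 1e-12 cm^2/s:
t = 10 * 24 * 3600
print(migrated_mass_per_area(1.0, 1e-12, t))  # ~1.0e-3 mg/cm^2
```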
Šimon et al. 134 used a physico-chemical approach to model the factors that control migration of nanoparticles from nanomaterial-polymer composite materials to food. The modelling performed suggests that migration of nanoparticles is only likely to take place when an equilibrium distribution of nanoparticles is established between the packaging and the packaged food. Because the concentration of nanomaterials in both the polymer composite and the food is likely to be small, nanoparticle interaction is predominantly expected to be with the surrounding matrix. In such a situation, standard chemical potentials should reflect the strength of the interactions between the nanoparticle and the polymer or the packaged food. Because the equilibrium distribution of nanoparticles between the food and the polymer is likely to establish over a period of time, the resulting migration would also be time dependent. The approach considered that a few important variables influence the migration of nanoparticles, such as (the square root of) time, the temperature and the radius of a nanoparticle. Movement of a nanoparticle would also be affected by interactions with the polymer matrix; hence, the dynamic viscosity of the polymer is likely to have a major influence on the overall migration. 134 The model was later compared with experimental data, showing favourable correlation between experimental and predicted migration values for silver, although the model was much less precise for the migration of copper. The authors attributed the lack of precision to the high variability of copper content in the chicken used for the migration tests. The use of food simulant rather than actual food was consequently recommended for further validation of numerical modelling of migration.
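To illustrate why the dynamic viscosity of the matrix dominates, a Stokes-Einstein-type estimate (in the spirit of this physico-chemical approach, though with invented viscosity values rather than parameters from the cited study) shows how small the diffusion coefficient of a nanoparticle in a polymer becomes:

```python
# Minimal sketch illustrating why nanoparticle diffusion in a polymer matrix
# is expected to be extremely slow: a Stokes-Einstein-type estimate with
# illustrative viscosity values (not taken from the cited study).

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def diffusion_coeff(radius_m: float, viscosity_pa_s: float,
                    temp_k: float = 298.15) -> float:
    """Stokes-Einstein diffusion coefficient (m^2/s)."""
    return K_B * temp_k / (6.0 * math.pi * viscosity_pa_s * radius_m)

# The same 10 nm radius particle in water vs in a soft polymer matrix
# (illustrative viscosity of 1e6 Pa*s, ~9 orders of magnitude above water):
for medium, eta in (("water", 8.9e-4), ("polymer", 1.0e6)):
    print(f"{medium}: D = {diffusion_coeff(10e-9, eta):.1e} m^2/s")
# In the polymer case, D is so small that the time to diffuse across a
# typical 100 micrometre film (t ~ L^2/D) is of the order of 10^4 years.
```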
Building on previous work, Bott et al. presented a model estimating the migration of chemical substances 132,135 in which the molar cross-sectional area 136 is replaced by the volume or size of the nanoparticles. This model suggests that particles larger than 3.5 nm in diameter will not migrate from polymer matrices when diffusion is considered the sole mechanism. The authors concluded that, due to the usual size, shape and aggregation or agglomeration of the particles used in FCMs, they would in practice be immobilized in the matrix, thus preventing possible consumer exposure.
DISCUSSION
The studies cited earlier show that in the last few years, several analytical techniques have been developed and applied to the assessment of the migration from nano-based packaging materials into food or food simulants. This topic has also been reviewed in a recent paper by Duncan and Pillai, 137 reaching similar conclusions. The knowledge generated by these studies will help to better assess the risk of using nanomaterial in contact with food.
The general conclusions that can be drawn are the following:
• Nanomaterials contained in FCMs are shown to migrate in some applications to a quantifiable level.
• The materials that showed the highest migration are those intentionally designed to fulfil a function that requires a certain migration level, i.e. active packaging materials. All studies nevertheless show that the levels of migration remain within the limits set by regulation for dissolved species.
• The migration of actual particles cannot be excluded, but appears to be limited to particles on the surface of the material directly in contact with the food or the simulant, even in the case of matrices of low dynamic viscosity such as polyolefins. This would mean that, in cases where direct food contact is not required to fulfil the function, any direct food contact layer not containing nanomaterials would be sufficient to prevent any particulate migration and act as a functional barrier. This is currently not well taken into consideration by regulations on FCMs containing nanomaterials.
These conclusions would suggest that, in most applications, classical risk assessment could be applied to FCMs containing nanomaterials. In the cases where particle migration can be proven, the risk assessment would have to be based on the potential toxicity of the substance in nanoparticulate form, which, as will be shown later in the paper, is not straightforward.
Migration, if it occurs at all, is not the only route through which human exposure to nanomaterials could arise. Other exposure routes, as well as some environmental considerations, are discussed in the next sections.
Possible release and exposure scenarios for nanomaterial-polymer packaging materials
Manufacturing. The handling processes involved at the manufacturing stage (e.g. powder handling, blending and the disposal of waste materials involving nanomaterials during dispersion in solvents or polymer melts) may generate particulate emissions and hence provide the possibility for worker exposure. Factors that may influence the potential for release of nanoparticles include the physical state of the supplied materials (e.g. powder, dispersion and pre-mixed batches), the quantities to be handled, the dustiness of the blending process, and the containment and particulate control measures in place (e.g. ventilated enclosures or LEV). Mixing tasks have demonstrated release potential of nanomaterials (e.g. Han et al. 138 ).
Similar programmes with other nanomaterials (e.g. TiO2, SiO2 and nanoclays) have included monitoring of the workplace air during continuous melt compounding and post-processing operations (grinding and cutting). The grinding and cutting tests of such composites have not thus far indicated any detectable generation of nanoparticles. 139 The few studies carried out so far have indicated that nanoparticle emissions during the manufacturing stage can be effectively controlled through appropriate engineering measures. The management of these risks is described in detail in the ISO/TS 12901 series. 140,141

Transportation. Post-production transportation of nanomaterials and nanomaterial-polymer resins is likely to be carried out in sealed containers. Hence, a risk of exposure or release would only be expected in the case of an accident, for instance where a sealed container is breached or nanomaterial-containing polymers are combusted in an uncontrolled environment.
Use. Releases of any significant quantities of nanomaterials are not anticipated under normal use of nanomaterial-polymer materials. As described in the previous sections, the use of such materials in food packaging may lead to migration of nanomaterials into food; however, the available studies discussed earlier and the modelling estimates indicate that the levels of nanomaterial migrating from plastic polymer-nanomaterial based packaging materials to food are likely to be either nil or very low. However, depending on the nature of the nanomaterials and polymers, there may be some exceptions. For example, more studies are needed to establish whether migration patterns for nanoparticles in different polymer types (especially biopolymers) differ from those observed and estimated in the few plastic polymer systems tested so far. The likely contribution of surface abrasion of packaging to the transfer of nanoparticles to food (especially during re-use of FCMs) is currently not known. This is further emphasized in a recent review by Duncan 142 on the potential release of nanoparticles through matrix degradation, which clearly shows that this aspect has hardly been studied for packaging materials. It should nonetheless be noted that packaging materials, during their use phase, are rarely exposed to the weathering conditions that are reported to have significant effects on particle release. Abrasion of nanocoatings by mechanically aggressive food products has however been observed; 143 in that case, the abrasion degraded the properties of the packaging material to a point where it was no longer fit for the protection of the product.
End of life. Even with the increasing emphasis on recycling, a significant proportion of nanomaterial-polymer packaging is likely to be disposed of in landfills or littered into the environment. These packaging materials will eventually degrade in the environment because of physical and biological factors, resulting in the possible release of nanoparticles into the environment. Like other environmental contaminants, any persistent nanomaterials released from packaging materials are likely to end up in soil and aquatic environments. A few modelling studies (such as Boxall et al. 139 ) and reviews 142 have estimated the likely concentrations of nanomaterials in the environment from the current use and disposal of nanomaterial-containing consumer products (including packaging). Boxall et al. 139 estimated the expected environmental levels from such routes to be very low, in the order of low parts per billion for most nanomaterials. However, these estimates were based on simple modelling parameters and did not take into account the persistence, concentration or accumulation of some nanomaterials in the environment. The accumulation or concentration of nanomaterials in any of the environmental compartments will depend on their chemical reactivity (or inertness) and their persistence to physical, chemical or biological degradation.
It is currently not clear how industrial waste from nanomaterial-polymer composite manufacturing facilities should be dealt with, i.e. whether it should be recycled, incinerated or land-filled. Some materials, such as CNTs, are known to be completely degraded when heated at 740°C under oxidative conditions. Most organic nanomaterials are also likely to be degraded during incineration. It is, nevertheless, imperative that if final disposal of nanomaterial-polymer based packaging is through incineration, it is carried out under appropriately controlled conditions. Some nanomaterials, such as metals and metal oxides, may survive the incineration process and may need subsequent chemical treatment. Nanomaterial-polymer composites and resins are also likely to be disposed of to landfills. It is, however, not known whether nanomaterials may be released from the packaging materials and migrate to soil/water environments.
Bio-based polymers can, in some cases, be more sensitive to matrix degradation, which could increase the release rate of the nanomaterials into the environment. This would especially hold true for biodegradable or compostable polymers. Such considerations are rarely taken into account in studies claiming environmental benefits for the use of bio-based matrix nanocomposites.
Recycling of the used nanomaterial-polymer materials may involve separating, chemically cleaning, grinding, chopping, milling, melting, mixing, pelletizing, and compounding of the disposed packaging materials. It is currently not clear whether and how separate collection/separation streams for nanomaterial-polymer material will be set up and work. There is a strong likelihood of nanomaterial contamination of other recycled polymer materials if the nanomaterial-containing packaging is not separated prior to recycling. Whether this will be detrimental to the quality and safety of recycled materials still has to be evaluated.
There is a potential for exposure of workers to nanomaterials during recycling, but it will depend on the processes involved, the degree of manual handling, and the safety measures in place. Some of these processes may lead to worker exposure if they are not carried out as closed processes or under appropriate engineering controls. 140,141 The main emphasis from the exposure point of view may need to be on stages in the lifecycle of products beyond that of manufacture, where emission control measures are unlikely to exist or where the same level of process control may not be achieved.
Environmental considerations
Life cycle assessment (LCA) is a methodology that enables one to quantify the environmental impacts associated with a specific service, manufacturing process or product. It takes a holistic approach, considering the complete product life cycle from the extraction of the required raw materials to the point where all residuals are returned to the earth. Typically, the life cycle is partitioned into the main phases: (a) raw materials, (b) manufacturing, (c) use and (d) end of life. As well as considering the complete life cycle, several environmental impact categories are considered to ensure that a holistic view is obtained and that burden shifting is avoided (where impacts of one sort are reduced at the cost of another). Climate change, ozone depletion, tropospheric ozone creation (smog), eutrophication, acidification, toxicological stress on human health and ecosystems, depletion of resources and land use are all impact categories that can be considered. LCA can assist with optimizing the environmental performance of a product or with comparative assessments between products to determine the most environmentally favourable option. LCA is considered the most widely accepted approach for assessing environmental performance and is supported by a set of standards from the International Organization for Standardization, namely ISO 14040 144 and 14044, 145 which provide guidelines on how to define and what to include in an LCA.
A workshop organized by the Woodrow Wilson Institute for International Scholars and the EU Commission in 2006 concluded that the ISO framework for LCA is also applicable to LCA of nanomaterials and nanoproducts. 146 A few limitations were identified: nanomaterial-containing products may have new functions for which it may be difficult to find a conventional (benchmark) functional alternative for comparison; the inventory may be difficult to develop because of rapidly evolving production technologies; and the impact assessment of risks of nanomaterials may be difficult because of the lack of data on release, exposure and effects.
While the approach is already established and successfully used for many products, there are some limitations in relation to its use for nanotechnology-derived products. The current scarcity of data on the characterization of both hazard and exposure of nanoparticles is one such gap, confirmed and even emphasized in recent review papers on the topic. 147-150 It is worth noting, for the non-specialist, that LCA is not a tool for exposure assessment, but information on exposure assessment is essential in LCA for the impact assessment of nanomaterial releases. Despite this, and until these data gaps are addressed, LCA remains a useful tool for the assessment of nanomaterial-containing products. For example, LCA can provide important information to support decisions on, and during, the development of new nanomaterials or nanomaterial-products, 150,151 for instance by identifying opportunities for process improvement along the production cycle or by quantifying potential environmental benefits of use associated with improved functionality or material performance.
Despite the current lack of data on the effects of exposure and release to the environment, there are examples where LCA has been used for nanomaterial-containing polymer composites. One such study is by Roes et al., where a PP/layered silicate nanocomposite is assessed as packaging film. 152 The purpose of the LCA was to investigate whether the use of a PP nanocomposite has environmental advantages over the use of conventional polyolefins. The study took into account several impact categories but, again highlighting the knowledge gaps discussed, did not take into account potential toxicity or ecotoxicity associated with the nanoparticles. The findings of the LCA showed benefits in terms of a reduction of the materials used to achieve the same level of performance and functionality (e.g. barrier properties) as PP. The material reduction amounted to 9% for packaging film and 36.5% for agricultural film. This reduction in material usage will undoubtedly have a positive effect on environmental performance and highlights some of the advantages that nanotechnologies can bring; however, these benefits will of course have to be weighed against potential negative impacts, which currently cannot be modelled.
Actual occurrence in the environment. A recent report 153 published by the Danish Ministry of Environment and Food presents the results of an environmental assessment of nanomaterial use in Denmark. This study evaluated the risk of the current presence of nanomaterials in fresh water and effluents from sewage treatment plants by calculating the ratio between the predicted environmental concentration and the predicted no-effect concentration. For ratios smaller than one, the biological effects at the predicted concentration were considered acceptable. The study concludes that the 10 nanomaterials evaluated do not present environmental concern at their current usage level. It however suggests that some of these deserve continued observation, due either to their already high occurrence in the environment (e.g. TiO2 and carbon black) or to their potentially high eco-toxicity (silver, copper oxides and carbon nanotubes). The source of the materials is clearly not uniquely related to packaging materials, but all of the materials evaluated in the study have been evaluated or used in food contact applications.
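The risk characterization described above reduces to a simple ratio. The sketch below illustrates the calculation; the PEC and PNEC values are invented for illustration and are not taken from the Danish report:

```python
# Minimal sketch of the risk characterization ratio: predicted environmental
# concentration (PEC) divided by predicted no-effect concentration (PNEC).
# All numbers below are illustrative, not values from the cited report.

def risk_characterization_ratio(pec_ug_l: float, pnec_ug_l: float) -> float:
    """PEC/PNEC ratio; values below 1 are conventionally considered acceptable."""
    return pec_ug_l / pnec_ug_l

materials = {
    # material: (PEC in ug/L, PNEC in ug/L) -- hypothetical example values
    "nano-TiO2": (0.02, 20.0),
    "nanosilver": (0.01, 0.1),
}
for name, (pec, pnec) in materials.items():
    rcr = risk_characterization_ratio(pec, pnec)
    flag = "acceptable" if rcr < 1 else "potential concern"
    print(f"{name}: RCR = {rcr:.3f} ({flag})")
```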
Introduction
As a general principle, an FCM must not pose a risk to consumers upon use. Therefore, a risk assessment is performed to determine if an FCM is safe for its intended use. Data concerning the migration of components from the FCM to the food matrix provide an indication of the potential exposure of consumers to migrants. The 'Food Contact Materials Note for Guidance', published by EFSA, 154 provides guidance on the interpretation with respect to FCM regulation. This also applies to nanomaterials that are intended for use in FCMs. However, this guidance does not cover nanomaterials that have different characteristics and physico-chemical properties from their non-nano equivalents. Recently, EFSA has published a guidance note on the risk assessment of the application of nanosciences and nanotechnologies in the food and feed chain. 97 This guidance provides the latest views on how to deal with nano-related aspects for hazard, exposure and risk characterization within the food chain. In the USA, no guidance is available for the safety evaluation of nanomaterials; the US-FDA performs safety evaluations of nanomaterials on a case-by-case basis, as detailed in a previous section of this paper.
Detailed procedures for risk assessment of nanomaterials are still under development, even though some approaches for safety assessment in food have been published. 155,156 The Scientific Committee on Emerging and Newly Identified Health Risks 157 reported that these procedures can be expected to remain under development until sufficient scientific information is available to characterize the possible harmful effects on humans and the environment. Therefore, within this chapter, implications for hazard and risk assessment of nanomaterials are provided as a general guide.
Exposure assessment
The starting point in the evaluation of the safety of a nanomaterial is to determine the exposure, which depends on the potential of the nanomaterial to migrate from the FCM to the packaged food, irrespective of its function in the FCM. If adequate migration testing shows no measurable migration of nanomaterials from the FCM to the simulant, no human exposure via food consumption will likely occur. Consequently, there will be no overall risk to the consumer and further toxicity testing will not be needed.
In the case where it is demonstrated that nanomaterials are capable of entering the food matrix, the solubility of the nanomaterial in the food matrix and/or upon gastrointestinal passage is of relevance. For nanomaterials that completely dissolve in the food matrix, the hazard and risk upon exposure will be similar to those of the non-nanoform. Where it is demonstrated that the nanomaterial originating from an FCM is present in the food matrix but is dissolved completely before intestinal absorption can take place, the safety upon exposure should be evaluated in relation to the relevant guidance for the non-nanoform. Hazard and risk assessment can be performed in this case using toxicity data of the non-nanoform. In case no conventional counterpart is known or no toxicity data are available, the normal testing strategy for FCMs applies.
For further details on exposure assessment for FCMs, reference is made to the 7th Framework EU-funded project 'Flavourings, Additives and Food Contact Materials Exposure Task', or FACET for short (http://www.ucd.ie/facet/links/).
Determining solubility in the food matrix is relatively straightforward, and tests can be integrated during migration testing.
Solubility in gastro-intestinal fluids can, however, be more challenging to determine, depending on the nanomaterial in question. In vitro gastro-intestinal absorption systems or stability tests in gastric fluids will likely provide the most accurate information. For nanomaterials that are assumed to dissolve completely in gastrointestinal fluids, the dissolution process is important to consider. If the dissolution is relatively slow, it cannot be excluded that nanomaterials may become systemically available and/or that some local 'site of contact' effects may occur. Therefore, the location and timing of the dissolution process should be evaluated and discussed to ensure that the nanomaterial is dissolved completely before intestinal absorption takes place.
Although in vivo studies seem the most appropriate, it should be noted that evaluation of absorption in vivo may overestimate the dissolving potency of a nanomaterial, because of the time needed to sample and evaluate the intestinal content for the presence of nanomaterials.
Characteristics of nanomaterials from FCM
Where it is demonstrated that a nanomaterial will migrate to food and will not (completely) dissolve in the food matrix and/or during gastrointestinal passage, the hazard and risk upon exposure of the nanomaterial must be evaluated. The hazard upon exposure will be dependent on the characteristics of the nanomaterial in question.
A nanomaterial migrating from an FCM may have different characteristics to the one added during processing. Possible modifications include coating of the nanomaterials by components of the matrix material or the inclusion of the nanomaterial in migrating components of the matrix. Consequently the particle size, shape, available surface area, surface chemistry, etc. may be altered, which may result in a modification of its toxicity. 158 Biopersistence and biodurability of particles are also important for hazard and risk assessment as they may influence long-term toxicity. 159 Therefore, they should be taken into account and the need for specific toxicity testing may arise, although preparing representative test material for a conclusive hazard/risk assessment may be challenging today. The assessment may be simplified, however, if it can be demonstrated that the particles migrating from the FCM have lost their characteristics specifically related to the nanoscale. Nevertheless, it can be considered that, from a worst case point of view, the nanomaterial as used in the FCM is in fact the most critical chemical form for use in hazard and risk assessment.
In general, it could be expected that upon clustering or attaching to FCM particles, a reduction of toxicity could occur (e.g. a nanomaterial coated with polyethylene will have a reduced surface charge and/or chemistry and is therefore likely to have a lower toxicity). There may be an exception where nanomaterials of different types are combined or when the material's structure is strongly affected by processing, such as through the exfoliation of clay particles. Although data would be needed to support this, it is proposed to pragmatically start the hazard assessment with the pristine ('naked') material. A reduction of the likely adverse effects of a nanomaterial, e.g. when evidence shows a reduction in the surface charge or chemistry due to an interaction with the FCM, can then be discussed in relation to its potential absorption and in the context of the overall risk assessment.
Factors to consider for hazard characterization
As for FCMs in general, the combined hazard characterization and potential exposure of the nanomaterial will determine the overall risk to the consumer. Until now, no generic threshold for nanomaterials [like the most critical threshold of toxicological concern class of 0.15 μg/person/day for bulk chemicals with (potential) genotoxic properties] has been defined. Consequently, any exposure to nanomaterials should be considered for risk assessment. In addition to normal considerations in toxicology, characteristics like chemical reactivity and morphology of a nanomaterial should be considered in relation to the potential toxicity. Some of these characteristics are discussed later.
Chemical composition
The chemical composition, specifically the presence of impurities and/or contamination with other nanomaterials, may influence the characteristics and toxicity of nanomaterials. Therefore, the chemical composition and purity criteria for the nanomaterial to be used in FCMs should be described in detail and be covered by toxicity testing. Lower purity may also be related to a higher or less consistent toxicity profile of the nanomaterial under evaluation, e.g. due to specific catalytic activity of the impurities. Therefore, a representative batch, which falls within the specifications for the identity of the nanomaterial to be used in the FCM, has to be considered for hazard characterization. Furthermore, potential batch-to-batch variations, including ageing effects, should also be taken into account.
Physico-chemical characteristics
Particle size, morphological form, surface area, surface charge, etc. of a nanomaterial might be of relevance to chemical reactivity, migration characteristics, and absorption and distribution in the human body. Therefore, hazard characterization has to be related to the specific characteristics of the nanomaterial, and this should be taken into consideration in toxicity testing. 160 Premature choices of nanomaterials for toxicity testing may lead to unusable toxicity data in cases where the characteristics of the nanomaterial change during FCM development or processing.
Dose metrics
For non-nanomaterials, presence in food on a mass basis is the current standard dose metric. However, other characteristics may be relevant discriminators to consider for dose metrics. Currently, particle size, shape and surface area are considered the most relevant parameters, as they have been shown to correlate with certain adverse effects. 161 It should be noted that, depending on the type of nanomaterial, different dose metrics may be relevant. As a consequence, for such nanomaterials, limit values should also be set in the relevant metric. This may have consequences for monitoring procedures.
Furthermore, the toxic effects of a nanomaterial may depend not only on structural features of the particle itself but also on the nano-bio-interface, i.e. its interaction with (sub)cellular structures and biomolecules. For toxicity testing and use in FCMs, well-defined nanomaterials should therefore be used, because a change in, e.g., particle size or surface area may result in significant differences in hazard characteristics. In addition, the validity of read-across comparisons with other nanomaterials of the same chemical composition but slightly altered characteristics will depend strongly on the physico-chemical characteristics of the nanomaterials.
The mass dose in parts per million or milligrams per kilogram can be applied as the standard dose metric when the exposure to a well-defined nanomaterial can be related (1:1) to its hazard (i.e. all toxicity tests are performed with an equivalent well-defined nanomaterial). In cases where exposure estimates are above the safe human intake values on a mass-dose basis (taking into account conventional uncertainty factors 97), the potential health risk cannot be quantified. Furthermore, when an unforeseen change in the characteristics of a nanomaterial occurs (e.g. outside batch variations), the toxicity of two nanomaterials differing only in shape and/or surface area can be interpolated by introducing an intermediate dose metric to determine the dependency of the hazard on a specific parameter, e.g. the surface area. It should be noted that this is only valid for well-defined nanomaterials. Depending on the nanomaterial and its characteristics, the parameters available will determine the possibilities for dose-metric interpolation or comparison.
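As an illustration of the interpolation idea, the sketch below converts a mass dose into a surface-area dose for idealized, smooth, monodisperse spheres using the geometric relation SSA = 6/(ρ·d). The density and diameters are illustrative assumptions, not values from this review; real nanomaterials would require measured size distributions and specific surface areas (e.g. from BET analysis).

```python
# Minimal sketch of a mass-dose -> surface-area-dose conversion for
# idealized spherical, monodisperse particles. All numbers below are
# illustrative assumptions, not data from the text.

def specific_surface_area_cm2_per_g(diameter_um: float,
                                    density_g_cm3: float) -> float:
    """Geometric SSA of a smooth sphere: 6 / (rho * d)."""
    diameter_cm = diameter_um * 1e-4
    return 6.0 / (density_g_cm3 * diameter_cm)

def surface_area_dose(mass_dose_mg_per_kg: float, diameter_um: float,
                      density_g_cm3: float) -> float:
    """Surface-area dose (cm2 per kg body weight) for a given mass dose."""
    ssa = specific_surface_area_cm2_per_g(diameter_um, density_g_cm3)
    return mass_dose_mg_per_kg * 1e-3 * ssa  # mg -> g, then g * (cm2/g)

# Same mass dose, two hypothetical particle sizes of the same material:
for d_nm in (25, 100):
    dose = surface_area_dose(5.0, d_nm * 1e-3, density_g_cm3=4.2)
    print(f"5 mg/kg bw of {d_nm} nm particles ~ {dose:,.0f} cm2/kg bw")
```

The same mass dose carries roughly a four-fold higher surface-area dose for the 25 nm particles than for the 100 nm ones, which is exactly the kind of dependency an intermediate dose metric is meant to capture.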
Prior to toxicity testing
Before toxicity testing is considered, there needs to be evidence or reason to assume that human exposure to the nanomaterial might occur (see also the paragraph on exposure assessment). When starting toxicity testing with nanomaterials, the following aspects should be taken into account.
Route of administration
In in vivo testing, the nanomaterial may be administered via feed, water or by gavage. Although bolus gavage administration may be preferred because it produces a high peak exposure, the presence of a (protein) corona when the nanomaterial is administered via feed may result in a higher systemic exposure. The administration method therefore depends on the characteristics of the nanomaterial and on the route of exposure. The compatibility of the nanomaterial with the administration vehicle, and possible interactions between them, should be investigated prior to toxicity testing.
Kinetics
Toxicity testing of nanomaterials usually starts with determining the absorption, distribution, metabolism and excretion (ADME) to identify the kinetics and possible accumulation of nanomaterials in the body (e.g. in Peyer's patches), and/or to identify possible affinity for specific target organs and metabolic transformation. ADME information is a prerequisite to determine the toxicokinetic behaviour of the nanomaterial, which may be heavily influenced by physico-chemical properties (e.g. size, surface charge and functionalizing groups). Slight changes in these parameters may therefore have a significant influence on the gastrointestinal absorption and uptake of nanomaterials in the body. Furthermore, partial solubilization, e.g. of some nanometals and/or oxides, could alter the results of ADME studies if absorption was determined by chemical analysis without ascertaining whether the materials were absorbed in nanoparticulate form.
Apart from in vivo ADME testing, in vitro models can be used to assess permeability and/or barrier integrity of cell layers. It is essential that, before ADME testing starts, analytical methods are available for the detection of nanomaterials and/or their non-nano counterparts in the body. In case labelling is used, one should be able to discriminate between the nanomaterials and their possible non-nano counterparts.
In case the ADME data convincingly demonstrate that no nanomaterial is absorbed in the gastrointestinal tract, i.e. the nanomaterial will not become systemically available, only a limited set of toxicity tests is required. It is therefore of great importance to reach consensus on the limit of detection to be used in these studies. This is especially important when interpreting test results as sufficient evidence that no absorption of the nanomaterial occurs.
Toxicity testing
In case systemic exposure can be excluded, EFSA requires that at least in vitro genotoxicity testing and in vivo testing for local effects be carried out.
It should be noted that when the nanomaterial is not absorbed it will not be able to reach the cell in genotoxicity testing. In that case the in vitro genotoxicity test will only provide information on local effects.
Where absorption of nanomaterials upon oral exposure has been demonstrated, hazard identification of the nanomaterial by appropriate in vitro and/or in vivo studies for mutagenicity and repeated-dose toxicity (90 day) is required.
In vivo genotoxicity testing is required when initial in vitro tests are positive or inconclusive. This may be the case with the Ames test, one of the basic tests for non-nanoform substances: because bacteria cannot take up nanomaterials through endocytosis, the relevance of this test for nanomaterials is limited. For in vivo genotoxicity testing, one should ensure that the nanomaterial is capable of reaching the target cells, which may require selection of a different dosing route; if the target cells are not reached, the results of the test are considered invalid. Furthermore, the choice of further in vivo testing depends on the endpoint for which a positive effect was found. In the case of a positive in vitro gene mutation test, a comet assay or transgenic rodent gene mutation assay may be considered. A positive in vitro micronucleus test is followed by an in vivo micronucleus test. In case the nanomaterial cannot be tested in vitro, an in vivo comet assay included in the ADME study or in a repeated-dose test may be considered as an alternative.
The main mechanism of nanomaterial toxicity is considered to be oxidative stress, which triggers inflammation via the activation of oxidative stress-responsive transcription factors. It may therefore be relevant to determine the mechanism of genotoxicity, i.e. whether effects are primary, such as those resulting from DNA binding, or secondary, such as those resulting from oxidative stress/ROS formation.
In vivo sub-chronic repeated-dose testing (a 90-day study) should include endocrine-related endpoints, cardiovascular and inflammatory parameters, as well as effects on the mononuclear phagocyte system (due to clearance by phagocytosis). 162,163 The need for additional testing for reproduction toxicity, neurotoxicity, allergenicity and/or other endpoints will be determined by the toxic effects observed in the initial tests, and also arises where an increased hazard is identified by comparing the results for the nanomaterial with those for its non-nanoform and/or where there is potential for accumulation in the body. Additional in vitro testing can be performed to generate mechanistic data on, e.g., epithelial permeability, release of inflammatory mediators or other parameters. Triggers for developmental effects cannot be derived from initial tests with the nanomaterial; for this endpoint, information from the non-nanoform may be helpful. If no non-nanoform data are available, potential developmental effects will remain an uncertainty.
Because of the limited knowledge on nanomaterials, in vitro genotoxicity testing, ADME and a repeated-dose (90-day) study in rodents will be required independently of the amount of migration. It should be noted that unrealistically high dosing can lead to effects caused by overload rather than by the toxicity of the nanomaterial under evaluation. The dose levels to be tested should therefore be chosen with care, taking into account the expected exposure based on migration testing, including safety factors to convert from animal testing to human exposure. 164 If available, public human data may also be taken into account to determine the relevance of animal studies for humans, e.g. when studies were performed with an intention for pharmaceutical use. 165 Unless the data indicate otherwise, the conventional default uncertainty factors of 10 for inter-species and 10 for intra-species differences should be applied. There are currently no indications of a need to modify these factors. 97

CONCLUSIONS

Nanomaterials and nanotechnology are considered to have strong potential to bring a variety of new or improved properties and/or functionalities to food packaging. The food packaging industry sees potential in using these materials, because the properties and functionalities that accompany their use can be linked to a number of benefits related to higher product quality, shelf-life extension, better environmental performance, improved consumer experience and security.
Despite the interest and perceived advantages of these materials and technologies in the food packaging sector, so far their use has not been widespread. There have been various barriers to a more extensive implementation, in the form of gaps in knowledge related largely to (a) benefits of use, (b) unclear legislative requirements, and (c) safety. All these contribute to limited consumer acceptance.
The aim of this article was to review the work in the field of nanomaterials and nanotechnology with a specific focus on food packaging applications and thus to provide clarification regarding the existing identified barriers, which have hindered their wider use until now. More specifically, this paper has explored current technological developments along with potential benefits in terms of property enhancement. The legislative framework for these materials has also been investigated, along with potential safety risks, related to both human and environmental exposure.
The range of available nanomaterials and nanotechnologies is extremely broad, and there is continuous research in the field to develop these materials and technologies further. The broad nature of this domain means that these materials can be applied to a plethora of polymer base materials to enhance packaging properties. The processes and materials developed so far have been shown to provide novel properties that can answer current and emerging industry needs and offer new functionalities, bringing additional benefits to the consumer and other stakeholders along the supply chain. Benefits could be gained through enhanced mechanical, functional and barrier properties, which can be translated into direct benefits for consumers, retailers and food producers.
These enhanced properties could improve the protection capability of the packaging materials currently in use, which could lead to several benefits. For instance, these properties have the potential to protect food products for longer, as the higher barrier properties provide increased protection from various degradation factors such as oxygen and moisture. Food could therefore be kept fresh for an extended amount of time.
The enhanced barrier and mechanical properties could enable lower-gauge packaging films to be used, which could potentially bring environmental benefits and cost reduction through light-weighting. This benefit would, of course, have to be weighed against the impact on manufacture. Property enhancement of bio-based materials is a clear potential application for nanotechnology, because there is currently significant interest in this domain and any improvement in properties would help to broaden their scope of use. Finally, although less well defined, there is also potential for the development of 'intelligent' and 'smart' packaging concepts to ensure the safety, security and authenticity of packaged food products.
The legislative framework currently enforced in Europe and the United States was described. In Europe, specific legislation on plastic-based food-contact materials limits the use of nanomaterials to those explicitly mentioned in the positive list. Currently, the number of authorized nanomaterials is limited to a handful of compounds. The fact that all applications have to undergo a specific risk assessment, even those where the nanomaterial is used behind a functional barrier, might partly explain the slow authorization rate for new nanomaterials. A similar approach is defined in the harmonized measure on active and intelligent packaging, while those for ceramics and regenerated cellulose do not mention nanomaterials. FCMs not covered by these harmonized measures have to comply with the general safety requirements of the Framework Regulation and with specific national legislations. In the United States, there is no specific legislation for the use of nanomaterials in food contact applications. However, the nature of any components likely to be added is evaluated. Food contact notification is therefore based on a risk assessment, which takes nano-aspects into consideration.
Perceived safety concerns over the use of nanomaterials and technologies have meant that consumer acceptance has not yet reached a level where high volume industrial-scale applications could commence. Any possible safety risk would be dependent on the migration of nanoparticles into packaged foodstuffs. Consequently, specific consideration was given to the likelihood of migration of nanoparticles from packaging materials.
Methods have now been developed that enable the migration of nanoparticles from packaging materials into foodstuffs to be determined. Such methods are available for both particulate and non-particulate migration. This is a key message, because these methods can now address one of the main questions regarding the use of nanomaterials in packaging and enable consumer exposure to be estimated. The studies available on this topic so far indicate that any significant migration of nanoparticles from polymer packaging materials into packaged foodstuffs is unlikely. However, more studies are required in this area to ascertain whether the same trend can be observed in other packaging types (biopolymers, for instance) that are more prone to matrix degradation. Another area where further work is required is where nanotechnology is used to produce active packaging. Here, a certain level of (non-particulate) migration is desired, such as in the case of anti-microbial packaging materials. Ultimately, the available information on the topic of migration does provide some assurance that the use of nanomaterial-polymer composites for food packaging applications is unlikely to create a consumer exposure risk during the use phase.
The implications of potential migratory behaviour of nanoparticles have also been explored. Should any migration of nanoparticles be evident, a potential exposure would exist and, consequently, a safety risk assessment would be required. In such cases, non-particulate migration could be addressed with knowledge available from the non-nanoform of the substance in question. However, migration of actual nanoparticles poses a greater challenge in terms of risk assessment. This would require an initial toxicological evaluation using in vitro and in vivo tests. A positive indication of toxic effects from these initial tests would trigger further more detailed toxicological investigations. In view of this, it is highly recommended that new packaging materials based on nanotechnologies be developed in a manner that reduces risks by minimizing the chances of consumer exposure to nanomaterials during their use and potential exposure in the environment upon disposal. This can be achieved by ensuring that packaging materials are purposely designed to minimize nanoparticle migration to packaged foodstuffs and that the packaging materials are appropriately handled and treated at their end of life.
Ultimately, the findings from this work show that momentum surrounding nanomaterials and nanotechnologies, in terms of interest and research, is growing with respect to their use in packaging applications. There are clear benefits linked to their use in terms of performance enhancements and functionalities that can improve material properties in existing and emerging materials. There is an existing legislative framework applicable to these materials, which continues to evolve to meet the requirements of this rapidly evolving field. The various questions regarding the risks of these materials are also in the process of being addressed with the emergence of relevant methods and approaches.
It can therefore be expected that packaging applications using nanomaterials and nanotechnology will continue to grow in the coming years as the previous barriers are increasingly removed. The benefits of using such materials, and their potential to improve environmental performance, reduce food waste and provide improved product quality and security, along with reassurances regarding safety, could also improve consumer acceptance.
|
v3-fos-license
|
2022-06-30T15:27:05.219Z
|
2022-06-27T00:00:00.000
|
250122693
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://koedoe.co.za/index.php/koedoe/article/download/1702/2975",
"pdf_hash": "a8e1e87b8e841fc4254d98875eb6af548e45ecea",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43497",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "74a6b13b57e4abe1cb63c5cbf9e58d907911291f",
"year": 2022
}
|
pes2o/s2orc
|
Diversity and distribution of benthic invertebrates dwelling rivers of the Kruger National Park, South Africa
Meiobenthos (or meiofauna) are microscopic invertebrates that inhabit biofilms and interstitial spaces in rivers. They are diverse and extremely abundant, and they perform essential ecological functions by linking microbial production to higher trophic levels (e.g. macrobenthic invertebrates and fishes). However, meiobenthic communities remain poorly studied in Africa. Here, we sampled meio- and macrobenthic invertebrate communities associated with biofilms and sediments across an upstream-downstream gradient along the Olifants, Sabie and Crocodile rivers flowing through the Kruger National Park (KNP). We expected to link differences in community structure to environmental gradients, as those rivers show different degrees of anthropogenic stress as they enter the park. Both meio- and macrobenthic communities differed across rivers and were also structured along an upstream-downstream gradient. The upstream sites, which were the closest to the park borders, consistently showed a lower diversity in all three rivers. There, the invasive snail Tarebia granifera strongly dominated (making up 73% - 87% of the macrobenthos), crowding hard substrates, while concomitantly the abundances of biofilm-dwelling meiobenthos like nematodes and rotifers were substantially reduced. Nevertheless, the diversity and evenness of communities tended to increase as water flowed downstream through the park, suggesting a beneficial effect of protected river reaches on benthic invertebrate diversity. However, for the Crocodile River, which makes up the southern border of the park, this trend was less conspicuous, suggesting that this river may experience the greatest threats. More generally, benthic invertebrate communities were driven by the concentrations of phosphates, sulphates, ammonium and organic matter and by substrate characteristics. Conservation implications: Meiobenthic organisms are very abundant in KNP rivers and react to environmental gradients; thus, they should be given more consideration in bio-monitoring and in the conservation of comprehensive assemblages of animals. Interestingly, protected reaches tended to show a reduced dominance of the invasive T. granifera and a higher diversity of benthic invertebrates.
Introduction
Rivers represent only a minor part of Earth's water resources, but they play a key role for the connectivity of ecosystems and the maintenance of biological diversity at a global scale (Dudgeon et al. 2006). However, most river ecosystems have been degraded by human activities throughout history (e.g. fisheries, commercial routes, irrigation, production of energy, receptacles for wastewater and introduction of invasive species), resulting in a current situation where rivers are one of the most threatened ecosystems worldwide (Malmqvist & Rundle 2002). As an example, South African rivers are intensively exploited to ensure water security for urban areas, agriculture and industry, where water use often exceeds natural water availability (O'Keeffe 1989). Those rivers further serve as outlets for urban wastewater, excess nutrients and xenobiotics from agriculture, as well as metals draining from mine exploitation or industrial activities (e.g. Jackson et al. 2013; Rimayi et al. 2018; Riddell et al. 2019). Nel et al. (2007) analysed the health status of South African rivers and found that 84% of river reaches (and especially those located in the largest, permanently flowing river systems) were degraded to the point they could be considered as highly threatened. Nature conservation regions including relatively large portions of river catchments can help mitigate pollution inputs and restore water quality (Nel et al. 2007). Although some xenobiotics like organochlorine pesticides can still drift over long distances, their bio-accumulation potential in aquatic organisms of conservation areas is correlated with their proximity to pollution sources (Wolmarans et al. 2021). This means that river reaches further downstream from pollution sources should show a better ecological status.
To reduce threats and develop proactive management practices, one requires an in-depth understanding of the response of natural communities and ecosystem processes to water quality. However, in the case of South African rivers, management of water resources is often decided in a context of incomplete knowledge of the responses of the different biological communities present in the field, leading to uncertainty in decision-making (see, e.g., Roux, Kleynhans & Thirion 1999a; Roux et al. 1999b). For example, management decisions concerning the conservation actions applicable in the large protected area of the Kruger National Park (KNP) have traditionally been based on the assessment of abiotic rather than biotic factors (Solomon et al. 1999), on the assumption that the resultant biotic patterns are likely to be correlated with abiotic components. However, to measure, protect and restore natural river integrity in the KNP, it would be more relevant to include multiple biological indicators of water quality (Rogers & Biggs 1999). Pollutants not only alter the chemistry of the water column but also persist in sediments (Gerber et al. 2015) and within the tissues of organisms (Gerber et al. 2016; Seymore 2014; Wolmarans et al. 2021). The consequences of altered environmental conditions have been found to translate into conspicuous modifications of the structure of bacteria, diatom, fish and insect communities in KNP waters (Farrell et al. 2019; Rasifudi et al. 2018; Riddell et al. 2019; Shikwambana et al. 2021). Another cause of stress to the native aquatic communities of the KNP is accidental introductions, followed by problematic blooms, of invasive pathogenic micro-organisms, fishes, crayfishes and mollusc species (e.g. Jones et al. 2017; Macdonald 1988; Petersen et al. 2017).
While the response of vertebrates to abiotic and biotic threats is conspicuous, the response of benthic invertebrates (living hidden in biofilm matrices and aquatic sediments) is mostly overlooked when it comes to assessing ecosystem health status. Among those benthic invertebrates, one may further distinguish between the macrobenthos, such as aquatic insects, snails and leeches, which are large enough to be visible and are usually retained on 500 μm meshes, and the meiobenthos, such as nematodes, rotifers, copepods and tardigrades, which are so tiny that they are invisible and usually pass through 500 μm meshes but are retained on 20 μm meshes (Ptatscheck, Gehner & Traunspurger 2020a). Meiobenthic invertebrates are little studied although they show complex behaviours and extraordinary physiologies that allow them to colonise most, if not all, benthic habitats (Brüchner-Hüttemann, Ptatscheck & Traunspurger 2020; Rebecchi, Boschetti & Nelson 2020), where they can reach remarkable abundances (between 10⁵ and 10⁶ ind. m⁻² on any submerged substrate) (Majdi, Schmid-Araya & Traunspurger 2020; Traunspurger, Wilden & Majdi 2020). Meiobenthos play an important role in riverine food webs as intermediaries: grazing on microbes (e.g. bacteria, protozoans and micro-algae) and serving as prey for a variety of macro-invertebrates and fish juveniles (Majdi & Traunspurger 2015; Ptatscheck et al. 2020b; Schmid-Araya et al. 2002). Some studies have highlighted the potential of meiobenthic invertebrates to reflect aquatic ecosystem health in South Africa: for instance, Gyedu-Ababio et al. (1999) observed that the density and diversity of nematode assemblages were affected by the concentration of metals in a subtidal portion of the Swartkops estuary. Further experimentation in the laboratory showed that nematode genera like Axonolaimus, Theristus and Paramonhystera were tolerant to metal pollution (Gyedu-Ababio & Baird 2006). In Europe, a nematode 'species at risk' index has been developed to monitor river sediment pollution (Höss et al. 2011). This index adds value to routinely used macrobenthic-based indices because nematodes are ubiquitous and experience the effects of pollutants during their whole life cycle within the sediment (Brüchner-Hüttemann et al. 2021). Whether such meiobenthos-based indices could be tested, developed and added to the bio-indication toolbox outside Europe is a question of general interest for ecologists and stakeholders, but so far it lacks support because of an insufficient collection of field data.
The specific objective of this study is to provide fresh insights into the diversity and distribution of benthic invertebrates (meio- and macrobenthos) in three large rivers (Olifants, Sabie and Crocodile) flowing through an emblematic protected area of austral Africa, the KNP. We sampled the main benthic habitats in those rivers (i.e. biofilms associated with hard substrates, and surface sediment) to examine the structure of resident communities. The three rivers were expected to show a gradient of environmental stress as they enter the park: the Olifants and Crocodile rivers experience different types of pressures (mostly related to mining practices for the Olifants and to agricultural practices for the Crocodile), while the Sabie is quite pristine in comparison. Thus, we expected the structure of benthic invertebrate communities to reflect the gradients of pollution experienced by the three rivers. Finally, an upstream-downstream gradient was also expected to manifest through a beneficial effect of preserved areas of the park on the taxonomic richness of benthic assemblages.
Study sites
The landscape in the southern part of the KNP consists of plains showing a gentle eastward slope, drained by three major river systems (Crocodile, Sabie and Olifants rivers; Figure 1). The Crocodile (catchment area: 10 455 km²) and Sabie (6252 km²) rivers start off in the Drakensberg region. The Olifants River has the largest catchment area (54 434 km²) and starts in the Highveld region (Muller & Villet 2004; Venter & Bristow 1986). It generally shows the highest annual run-off, followed by the Crocodile and Sabie rivers (Muller & Villet 2004), but at the time of sampling all rivers had similar, relatively low discharges (3.82 m³ s⁻¹, 3.87 m³ s⁻¹ and 4.11 m³ s⁻¹ in the Olifants, Crocodile and Sabie rivers, respectively). This was the result of a steady decrease in flow over the month prior to sampling (Figure 1-A1).
The three rivers drain residential and agricultural areas and then flow eastwards through well-preserved ecozones in the park, except for the Crocodile River, which delineates the southern border of the park, one of its banks draining the large agricultural zone of the Lowveld (Figure 1). The Crocodile River is considered highly threatened: it drains untreated industrial and household wastewater from large cities such as Johannesburg and Tshwane, and its water is increasingly used for irrigation while receiving agricultural effluents. The Olifants River drains the industrial fertiliser complex of Phalaborwa as well as phosphate rock mining facilities (foskorite and pyroxenite) before entering the park and is considered one of the most polluted rivers in South Africa (Gerber et al. 2015 and references cited therein). In contrast, the Sabie River is recognised as a remarkable hotspot of aquatic biodiversity and one of the most pristine river systems of South Africa (Riddell et al. 2019).
Three sites were selected in each river system, with the help of South African National Parks officials and KNP game rangers. Each site was chosen according to its accessibility, the availability of riverine microhabitats (hard and soft substrates) and its coherence with our aim of examining the effects of an environmental gradient through the park. As far as possible, we selected river reaches that showed the most similar hydro-morphologies and no riparian canopy cover (e.g. Figure 2-A1). Upstream sites were located close (< 500 m) to the park's border, and to ensure substantial spatial coverage, sampling sites were located 10 km - 60 km apart from each other (Figure 1).
Sampling
Samples were collected over a 4-day sampling campaign in May 2019, during the austral autumn, a period of low-flow conditions (Figure 1-A1). To homogenise sampling, at each site a 25-m reach was selected, along which five 1 m² shallow (< 30 cm water depth) plots located approximately 5 m apart were set (e.g. Figure 3-A1). Each plot comprised both hard substrates (in the form of large bedrock or cobble stones) and soft substrates (mostly fine, sandy sediment). To avoid trampling disturbance, plots were always sampled from downstream to upstream (Figure 3-A1). Samples were placed at 4°C in the dark upon collection and further preserved with fixatives or frozen until laboratory analyses could take place.
At each plot, 500 mL of water was collected for nutrient analyses, and water temperature, pH, conductivity, dissolved inorganic salts, total dissolved solids and turbidity were measured in situ with a portable probe (HI9813-5, Hanna Instruments Inc., Bedfordview, South Africa). Sediments were sampled to measure total organic carbon by pushing one 2-cm-diameter Perspex corer (Figure 1-A1) through 5 cm of the upper sediment layer at one location chosen randomly within each plot. Sediments were sampled for meiobenthos by pushing two 2-cm-diameter Perspex corers through 5 cm of the upper sediment layer at two different locations within each plot. The two sediment cores were merged in the same tube and the resulting sample was fixed in a final concentration of 4% buffered formaldehyde. A calibrated brush-sampler (Figure 3-A1; Peters et al. 2005) was used for underwater sampling of 3.14 cm² areas of biofilm (and its associated meiobenthos) growing on hard substrates such as rocks and cobbles. This process was repeated three times at different random locations within each plot, using a 20-μm-aperture nylon mesh sieve each time to concentrate the collected subsamples. The contents of the sieve were then poured into a tube and the resulting total sample (representing an area of 9.42 cm²) was fixed in 4% buffered formaldehyde. After meiobenthos sampling, a 'kick-sampling' procedure was used to obtain a semi-quantitative sample representative of the macrobenthic community dwelling in each plot.
Stones, macrophytes and superficial sediments were thoroughly agitated by hand and foot all over the plot for 3 min. The resulting suspended organisms were collected in a sturdy hand net (500 μm meshes) held downstream of the plot. The content of the net was poured in a jar and preserved in 70% ethanol.
Sample processing
Water chemistry variables were measured in the laboratory using a Merck Spectroquant Pharo 300 spectrophotometer and appropriate test kits following standard protocols (Merck KGaA 2014). Variables were measured on defrosted, unfiltered water samples and included Cl⁻ (further referred to as chloride; Merck protocol #14897) and P-PO₄³⁻. Subsamples of frozen sediments were defrosted, dried in an oven at 60°C for 96 h, weighed, burned to ash at 600°C for 6 h, and then weighed once again to measure the percentage of total organic content (TOC) as the ratio between ash-free dry weight and dry weight of the sediment.
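A minimal sketch of this loss-on-ignition arithmetic (the weights below are illustrative, not measured values):

```python
# Loss-on-ignition TOC: the organic fraction is the mass lost during
# combustion at 600 degC, i.e. the ash-free dry weight (AFDW).

def toc_percent(dry_weight_g: float, ash_weight_g: float) -> float:
    """Total organic content (%) from dry weight and post-combustion ash weight."""
    if dry_weight_g <= 0 or ash_weight_g > dry_weight_g:
        raise ValueError("weights must satisfy 0 < ash <= dry")
    afdw = dry_weight_g - ash_weight_g  # ash-free dry weight = organic matter
    return 100.0 * afdw / dry_weight_g

print(toc_percent(dry_weight_g=10.00, ash_weight_g=9.72))  # -> 2.8 (%)
```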
To extract meiobenthos from the sediment samples, a density-centrifugation procedure was used in the laboratory, involving the flotation of organic particles on Ludox HS-40 (specific gravity set at 1.14), following the method of Pfannkuche and Thiel (1988). The supernatant, containing the meiobenthic invertebrates, was poured through 20-μm meshes, preserved in 4% formaldehyde and stained with a few drops of Rose Bengal. The pellet (i.e. inorganic sediment particles from which the meiobenthos had been extracted) was further passed through a series of stacked sieves (1000 μm, 500 μm, 250 μm, 125 μm, 63 μm and 32-μm-aperture meshes). Each sediment size fraction retained on the respective sieve (further coined F1000, F500, F250, F125, F63 and F32) was dried in an oven at 60°C for 96 h and weighed to estimate the sediment particle-size distribution of each sample.
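The particle-size distribution then follows directly from the sieve weights; a small sketch with hypothetical fraction masses:

```python
# Grain-size distribution from stacked-sieve weights (values are toy data).

fractions_g = {  # sediment mass (g) retained on each sieve
    "F1000": 4.1, "F500": 2.3, "F250": 1.6,
    "F125": 0.9, "F63": 0.4, "F32": 0.2,
}

total = sum(fractions_g.values())
percent = {name: 100.0 * mass / total for name, mass in fractions_g.items()}

for name, pct in percent.items():
    print(f"{name}: {pct:.1f}%")

# Share of coarse sediment (> 500 um), as reported in the Results:
print("coarse (>500 um):", round(percent["F1000"] + percent["F500"], 1), "%")
```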
Meiobenthic invertebrates were counted and identified under a 40-80× magnification stereomicroscope (Nikon SMZ1500). Morpho-taxonomic features were used to classify the meiobenthic individuals into coarse taxonomic groups: nematodes, rotifers, gastrotrichs, copepods and their nauplii, ostracods, oligochaetes, tardigrades, mites, molluscs (mostly veliger and pediveliger larval stages) and larval stages of plecopterans, ephemeropterans and dipterans (detailing the Chironomidae and Ceratopogonidae families). The weight of sediment from which the meiobenthos had been extracted was used to express abundances per gram of sediment dry weight. Meiobenthic organisms dwelling on hard substrates were not extracted using the Ludox procedure; they were directly counted and identified as described above, except that abundances were expressed per area of biofilm.
Macrobenthic invertebrates were counted in each sample under a 1-5× magnification dissection microscope (Nikon C-LEDS) and identified to the lowest possible taxonomic level based on morphological characteristics and regional identification keys for aquatic macro-invertebrates.
Total abundances were expressed as numbers of individuals per 1 m² plot. However, relative abundances were used for community structure analyses to avoid potential misinterpretations due to the semi-quantitative nature of the 'kick-sampling' procedure (Table 1-A1).
Effects of river identity (Olifants, Crocodile or Sabie) and reach location (upstream, midstream or downstream) on abiotic parameters, faunal abundances and macrobenthic diversity indices were tested using two-way analysis of variance (ANOVA) followed by post-hoc Tukey's HSD tests for pairwise comparisons. The univariate data were checked for normality and homoscedasticity using the Shapiro-Wilk test and Levene's test, respectively. If the data did not meet these assumptions, the non-parametric Kruskal-Wallis rank sum test followed by a multiple comparison test was used instead. For macrobenthos data, effects of river identity and reach were tested on Shannon's diversity index (H') and Pielou's evenness (J').
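The sketch below illustrates this univariate decision tree in Python (the original analyses were run in R); the data layout, one row per plot with 'river', 'reach' and a response column, and the use of SciPy/statsmodels equivalents of the tests named above are assumptions of the example, not the authors' code.

```python
# Hedged sketch of the univariate workflow described above, including the
# diversity indices used as responses: H' = -sum(p_i ln p_i), J' = H'/ln(S).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

def shannon(counts):
    """Shannon's H' over taxa with non-zero counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def pielou(counts):
    """Pielou's evenness J' = H' / ln(S), with S the number of taxa present."""
    s = int(np.count_nonzero(counts))
    return shannon(counts) / np.log(s) if s > 1 else 0.0

def test_response(df: pd.DataFrame, response: str):
    """Two-way ANOVA if assumptions hold, otherwise Kruskal-Wallis."""
    groups = [g[response].to_numpy() for _, g in df.groupby(["river", "reach"])]
    normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups if len(g) >= 3)
    equal_var = stats.levene(*groups).pvalue > 0.05
    if normal and equal_var:
        model = ols(f"{response} ~ C(river) * C(reach)", data=df).fit()
        # Pairwise Tukey's HSD comparisons could follow via
        # statsmodels.stats.multicomp.pairwise_tukeyhsd.
        return sm.stats.anova_lm(model, typ=2)
    by_river = [g[response].to_numpy() for _, g in df.groupby("river")]
    return stats.kruskal(*by_river)
```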
Multivariate effects of river and reach on the benthic community structure were tested using permutational multivariate analysis of variance on distance matrices (PERMANOVA; adonis function with 9999 permutations).
The assumption of multivariate homogeneity of variances was first checked using the PERMDISP2 procedure (betadisper function), which is a multivariate analogue of Levene's test. The rankindex function was also used before testing, to rank the performance of different dissimilarity indices; in our case, the Bray-Curtis distance was the most relevant dissimilarity index.
Non-metric multidimensional scaling (NMDS) based on Bray-Curtis distances (metaMDS function) was used to ordinate samples and species scores. To highlight the most important environmental factors describing the variability in community structure of the macrobenthos and meiobenthos, the envfit function (which fits vectors of continuous environmental variables) was used. The significance of environmental variables (vectors) fitted onto the ordination was assessed using a goodness-of-fit statistic (R²) and empirical p-values based on a permutation test (N = 999 permutations). The direction of each vector indicates the direction of the gradient, and the length of the arrow is proportional to the correlation between the variable score and the ordination space. Only significant vectors were displayed on the NMDS plot, using 'p.max = 0.05' as a filter argument when plotting vectors onto the ordination space.
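For readers working outside R, the following sketch reproduces the core of this multivariate workflow with scikit-bio and scikit-learn. The toy abundance matrix, group labels and environmental variables are placeholders, and the final correlation loop is only a crude stand-in for envfit (no permutation test is shown).

```python
import numpy as np
from skbio.diversity import beta_diversity
from skbio.stats.distance import permanova, permdisp
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(15, 8))          # 15 plots x 8 taxa (toy data)
river = ["Olifants"] * 5 + ["Sabie"] * 5 + ["Crocodile"] * 5
env = {"ammonium": rng.random(15), "conductivity": rng.random(15)}

ids = [f"plot{i}" for i in range(counts.shape[0])]
bc = beta_diversity("braycurtis", counts, ids)  # Bray-Curtis distance matrix

print(permdisp(bc, river, permutations=999))    # betadisper analogue: check dispersion first
print(permanova(bc, river, permutations=9999))  # adonis analogue

# metaMDS analogue: non-metric MDS on the precomputed distances
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0)
scores = nmds.fit_transform(bc.data)

# Crude envfit analogue: correlation of each variable with the NMDS axes
for name, values in env.items():
    r = [float(np.corrcoef(values, scores[:, ax])[0, 1]) for ax in (0, 1)]
    print(name, [round(x, 2) for x in r])
```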
The response of individual taxon abundances to river identity and reach location was examined using multi-level pattern analysis after De Cáceres and Legendre (2009), an extended method of indicator species analysis. Multi-level pattern analysis (multipatt function) is a permutation test examining which taxa are significantly responsible for differences among groups of samples. It was chosen over other indicator species methods because it is less sensitive to the weight of over-dominant species and is not based on the relative contribution to differences (De Cáceres & Legendre 2009).
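The indicator-value statistic at the heart of this analysis is easy to sketch: the classic Dufrêne-Legendre IndVal multiplies a taxon's specificity to a group by its fidelity within that group, with significance assessed by label permutations. The implementation below is a simplified illustration of that idea (multipatt additionally evaluates combinations of groups), and all inputs are hypothetical.

```python
import numpy as np

def indval(abund: np.ndarray, groups: np.ndarray, n_perm: int = 999,
           seed: int = 0):
    """abund: (n_sites x n_taxa) abundances; groups: (n_sites,) labels."""
    rng = np.random.default_rng(seed)
    labels = np.unique(groups)

    def stat(g):
        # Mean abundance and occurrence frequency per group: (n_groups x n_taxa)
        means = np.array([abund[g == lab].mean(axis=0) for lab in labels])
        occ = np.array([(abund[g == lab] > 0).mean(axis=0) for lab in labels])
        denom = np.where(means.sum(axis=0) == 0, 1, means.sum(axis=0))
        a = means / denom            # specificity (relative mean abundance)
        return (a * occ).max(axis=0)  # best group's IndVal per taxon

    observed = stat(groups)
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):          # permute labels to build a null
        exceed += stat(rng.permutation(groups)) >= observed
    pvals = (exceed + 1) / (n_perm + 1)
    return observed, pvals

rng = np.random.default_rng(1)
abund = rng.poisson(3, size=(15, 6))  # toy data: 15 plots x 6 taxa
groups = np.array(["Olifants"] * 5 + ["Sabie"] * 5 + ["Crocodile"] * 5)
iv, p = indval(abund, groups)
print(np.round(iv, 2), np.round(p, 3))
```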
Ethical considerations
This article followed all ethical standards for research without direct contact with human or animal subjects.
Water and sediment environment
Overall, water pH was slightly alkaline but did not differ significantly across rivers or reaches (Table 1). However, conductivity was significantly higher in the Olifants River (on average 663 μS cm⁻¹) in comparison to the Crocodile (450 μS cm⁻¹) and Sabie (104 μS cm⁻¹) rivers (Kruskal-Wallis test: χ² = 38.7, p < 0.001). The concentrations of sulphate and chloride were likely the dominant contributors to variance in conductivity values, as strong positive correlations were observed between conductivity and sulphate and chloride concentrations (Figure 4-A1). Turbidity was significantly lower in the Sabie in comparison to the Crocodile and Olifants rivers (ANOVA, F(2,36) = 31, p < 0.001), and in the latter two more turbid rivers, turbidity was found to be significantly higher at upstream sites (ANOVA, F(2,36) = 9.8, p < 0.001). Nitrate and ammonium concentrations did not show significant associations with river or with reach, although they were found to be positively correlated with phosphate concentration (Figure 4-A1). Phosphate was significantly more concentrated in the Crocodile River (ANOVA, F(2,36) = 37.1, p < 0.001), and interestingly its concentration tended to increase in downstream areas in all rivers (ANOVA, F(2,36) = 187.7, p < 0.001).
The sediment of the Olifants River showed a significantly greater proportion of organic matter in comparison to the other rivers (Kruskal-Wallis test: χ² = 8.2, p = 0.015). In terms of granulometry, relatively coarse sediment dominated (fractions > 500 μm represented on average 63.5% of the sediment grain-size distribution). However, it is worth noting that the downstream site on the Olifants River showed unusually low proportions of the largest grain-size categories (Table 1). The proportion of coarser sediment was negatively correlated with the proportion of finer grain-size categories (Figure 4-A1). The coarsest sediment fraction (> 1000 μm) showed a significant river × reach interaction (ANOVA, F(4,36) = 6.2, p < 0.001), essentially confirming that the Sabie's riverbed showed coarser sediment as we moved downstream.
Abundance and diversity of benthic invertebrates
Macrobenthos was at least three orders of magnitude less abundant than the meiobenthos: on average, 130 macrobenthic individuals per m² against 332 000 ind. m⁻² for biofilm-dwelling meiobenthos (Table 2). Macrobenthos abundance appeared significantly lower in the Sabie River (ANOVA, F(2,36) = 8.4, p < 0.001). The macrobenthic community consisted of 82 taxa from 53 families or coarser taxonomic groups; 57 taxa could be further resolved to genus or species level (Table 1-A1). The taxon dominating the macrobenthic community was Tarebia granifera, making up 36.95% of the macrobenthic community on average. However, T. granifera made up 73% - 87% of individuals in upstream reaches (Figure 2a). As a result, reach position had a significant effect on the relative abundance of T. granifera (ANOVA, F(2,36) = 101, p < 0.001). The larval stage of the ephemeropteran genus Caenis sp. was the second most common macrobenthic taxon, representing 16.8% of the assemblage on average and being especially abundant in mid- and downstream reaches of the Sabie and Crocodile (Table 2); however, taxon richness was significantly lower in upstream reaches in comparison to mid- or downstream reaches (Table 2; ANOVA, F(2,36) = 12.7, p < 0.001). Shannon's diversity index (H') was also lower in upstream reaches (Table 2) and was further found to be significantly lower in the Crocodile River (ANOVA, F(2,36) = 11, p < 0.001). Pielou's equitability index followed a similar pattern to H', with communities generally more equitable in mid- and downstream reaches (ANOVA, F(2,36) > 11.2, p < 0.001), except for the downstream reach of the Crocodile River, which was uneven and strongly dominated (87%) by Caenis sp. (Table 2, Figure 2a).
Meiobenthos abundance in sediment did not show any significant trend (Table 2). However, biofilm-dwelling meiobenthic organisms were significantly less abundant in upstream reaches (ANOVA, F(2,36) = 9.3, p < 0.001). Nematodes and rotifers were the numerically dominant representatives of the meiobenthos in all rivers and reaches (Figure 2b and Figure 2c), but while nematodes made up on average 21% of the meiobenthic assemblage both in the sediment and in biofilms, rotifers were relatively more abundant in biofilms (45.7% of individuals) than in sediments (31.9%; Table 2).
Structural responses of the macrobenthic community to environmental gradients
The structure of the macrobenthic community was significantly affected by river identity (PERMANOVA, F(2,42) = 29, R² = 0.12, p = 0.002). Multivariate homoscedasticity was met without data transformation (PERMDISP2, p = 0.72); however, the effects of reach location on macrobenthic community structure could not be tested, as multivariate homoscedasticity of variances was not met even after reasonable data transformations. This was caused by a largely heterogeneous distribution of distances to centroids among reaches: samples from upstream reaches had a very skewed community structure because of the large dominance of T. granifera (Figure 2a), resulting in a very low dispersion of sample scores around group centroids (N = 15, average distance to centroid: 0.15 for upstream samples), while for mid- and downstream reaches, average distances of samples to centroids were 0.51 and 0.54, respectively.
The multi-level pattern analysis identified 14 taxa significantly associated with a single river. Those associations are highlighted with stars on the NMDS biplot (Figure 3). Namely, only macrobenthic nematodes were found to be significantly associated with the Crocodile River. The ephemeropteran larvae Tricorythus sp. and Elassoneuria sp., the trichopteran larvae Cheumatopsyche sp. and Macrostemum sp., the dipteran larvae of Simulium sp., the dragonfly larvae of Pseudagrion sp. and coleopteran adults of Berosus sp. were significantly associated with the Olifants River. The ephemeropteran larvae of Machadorythus maculatus, the hemipteran Laccocoris sp., the dipteran larvae of subfamily Chironominae, the dragonfly larvae of Lestinogomphus sp. and Notogomphus sp., and the coleopteran larvae of subfamily Larainae were significantly associated with the Sabie River. The clam Corbicula fluminalis was associated with both the Crocodile and Olifants rivers. Dragonfly larvae of Coenagrionidae and Paragomphus sp., and ephemeropteran larvae of the Leptophlebiidae family, were associated with both the Crocodile and Sabie rivers (Figure 3).
When performed to examine the associations with a specific river reach, the multi-level pattern analysis identified two taxa, namely the larvae of Ceratopogonidae and copepods, as being associated with both mid- and downstream reaches (Table 2).
The structure of the meiobenthic community dwelling in biofilms growing on hard substrates was significantly affected by river identity (PERMANOVA, F(2,36) = 4.9, R² = 0.15, p < 0.001), river reach (PERMANOVA, F(2,36) = 5.1, R² = 0.16, p < 0.001) and the river × reach interaction (PERMANOVA, F(4,36) = 1.8, R² = 0.11, p = 0.023). Multivariate homoscedasticity was achieved without data transformation (PERMDISP2 river: p = 0.79; PERMDISP2 reach: p = 0.65). This may have been caused by the relatively coherent pattern of lower abundances in upstream reaches coupled with relatively higher abundances of chironomids in the Sabie River (Figure 2b and Table 2). The concentration of ammonium in the water was found to be the strongest driver of biofilm-dwelling meiobenthic community structure (R² = 0.17, p = 0.02; Figure 5), followed by TDS and conductivity (R² = 0.145, p = 0.035 and R² = 0.146, p = 0.032, respectively). The multi-level pattern analysis identified three taxa significantly associated with the Sabie River, namely oligochaetes, tardigrades and gastrotrichs, while one taxon (chironomids) was significantly associated with both the Olifants and Sabie rivers (Figure 5). When performed to examine associations with a specific river reach, the multi-level pattern analysis identified four taxa as being associated with both mid- and downstream reaches, namely ostracods and larval stages of Ceratopogonidae, Chironomidae and ephemeropterans.
Discussion
Here we studied for the first time the entire assemblage of benthic invertebrates dwelling on hard and soft substrates of three major rivers flowing through the KNP, South Africa. Overall, we observed a great diversity of benthic invertebrates of various sizes and morphologies belonging to a total of seven major animal groups: Arthropoda, Mollusca, Nematoda, Rotifera, Gastrotricha, Tardigrada and Annelida. We found the highest diversity of families in the Sabie River (42 families), followed by the Olifants (36 families) and Crocodile River (25 families). These results generally confirmed a better biodiversity status for the Sabie River, as already observed in previous studies focusing on macrobenthic invertebrates in rivers of the KNP. For example, Muller and Villet (2004) identified 58, 51 and 46 families in the Sabie, Crocodile and Olifants rivers, respectively. The Crocodile and Olifants rivers are known to experience substantial anthropogenic pressures in comparison to the Sabie River (Riddell et al. 2019). We found different structures for both meio- and macrobenthic communities across the rivers, confirming our hypothesis that benthic communities would react to environmental gradients across rivers. The Crocodile River's water had more phosphates and showed the largest alterations of benthic communities, suggesting that this system faced acute stress. The Olifants River's water showed more sulphates and a higher conductivity and turbidity, and its sediment texture was finer, housing higher amounts of organic material; but as the Olifants ran through the KNP, diversity and evenness tended to improve. The Sabie showed lower concentrations of nutrients and the most diverse assemblages.
However, we found that upstream sites located at the park border were consistently less diverse, supporting the idea that the richness of benthic assemblages benefits from the course of the river through protected areas of the park. The Crocodile River, which forms the southern border of the park, did not show such improvement, presumably because it receives effluents from intensive agriculture all along its southern catchment outside the KNP (Van der Laan, Van Antwerpen & Bristow 2012). One of the most conspicuous effects of the upstream-downstream gradient was the alteration of community structure caused by the strong dominance of T. granifera at all upstream sites. This snail species originates from Asia and was first reported from South Africa in 1999 in northern KwaZulu-Natal (Appleton & Nadasan 2002). It has since rapidly spread into the Mpumalanga province, KNP and Swaziland and will probably continue its expansion into the northern sub-tropical lowlands of Zimbabwe and Mozambique (Appleton, Forbes & Demetriades 2009). Fifteen years ago, Wolmarans and De Kock (2006) sampled T. granifera from the Sabie and Crocodile rivers but did not find it in the Olifants. In our study we found substantial numbers of T. granifera at all river sites, including all sites along the Olifants. T. granifera can attain high densities in invaded areas, thus impacting other benthic fauna and flora (Appleton et al. 2009; Jones et al. 2017). In invaded estuarine reaches, it has been shown that T. granifera mostly exploits algal biofilms and detritus (Miranda, Perissinotto & Appleton 2011). Thus, its grazing activity on hard substrates might particularly affect resource availability for biofilm-dwelling meiobenthos, as has been evidenced empirically for other aquatic snails dampening biofilm-associated assemblages (Peters, Hillebrand & Traunspurger 2007).
Here, we found a significant reduction of biofilm-dwelling meiobenthos in upstream reaches, a reduction that could have been caused by more severe biofilm grazing by T. granifera (we observed that ostracods and small instars of insect larvae were rare in the locations strongly dominated by T. granifera). However, stronger evidence is needed to show direct causality, so we recommend experimental evaluation of the direct top-down or indirect bottom-up effects of the invasive T. granifera on local biofilm-dwelling meiobenthos. Interestingly, our results showed that T. granifera did not over-dominate the downstream reaches in the KNP. A possible rationale is that the more diverse communities in mid- and downstream reaches could have reduced the invasion success of T. granifera through increased competition, predation or parasitism (Miranda et al. 2016). This would also need further experimental evaluation. Nevertheless, our results support the view that invasive species are all the more successful when the indigenous fauna is impoverished by stress such as habitat deterioration (Van der Velde et al. 2002).
The positive association of sulphates with the Olifants River is not surprising, as this river has historically suffered from salt enrichment: sulphates entering the river from effluent discharges are one of the main contributors to the high salinity measured in this river (Goetsch & Palmer 1997; Riddell et al. 2019). Furthermore, we found a positive association of phosphates with the Crocodile River, as this system receives substantial organic and fertiliser effluents from agricultural activities taking place along its catchment outside the KNP (Van der Laan et al. 2012). Similar results were obtained by Riddell et al. (2019), who attributed impacts of orthophosphates on rivers in the KNP to agricultural activities taking place immediately adjacent to the park. In our study, increased sulphate and phosphate levels, along with other organic and inorganic variables, emerged as significant drivers of the meio- and macrobenthic community structure. Biota such as nematodes, the water scavenger beetles Berosus spp., the hydropsychid caddisfly Cheumatopsyche spp., Caenis spp. mayflies, and dipteran Simuliidae and Chironominae are adapted to tolerate a wide range of water quality conditions, such as organic enrichment and high salinity (Erasmus et al. 2021; Malherbe, Wepener & Van Vuren 2010), which is likely why they were associated with the Olifants and Crocodile rivers.
Habitat likely also played a role in the distribution of macrobenthos in the rivers, as most of the biota specifically associated with either the Olifants or the Sabie River also have particular habitat and water-flow preferences. For instance, the mayflies Elassoneuria sp. and Tricorythus sp. and the caddisfly Macrostemum sp. were significantly associated with the Olifants River, presumably because of their habitat preference for rocky surfaces in fast-flowing water (Gerber & Gabriel 2002), a habitat that was more represented at sampling sites in this river. Similarly, the Sabie River habitats consisted of dense reed vegetation (Figure 2-A1), and most of the macrobenthos that were significantly associated with the Sabie River have affinities with such dense vegetation patches, particularly Laccocoris, or show affinities with the sandy to muddy substrate in slower-flowing water at the edges of streams, particularly the gomphids Lestinogomphus sp. and Notogomphus sp. (Gerber & Gabriel 2002).
Although they are omnipresent in limnetic habitats, meiobenthic communities are poorly considered in ecology and conservation studies. This knowledge gap is even more preoccupying for tropical and subtropical regions, because studies of freshwater meiobenthos ecology and taxonomy have traditionally focused on temperate regions of the northern hemisphere (e.g. Fontaneto et al. 2012; Traunspurger et al. 2020; Zullini 2014). Here we found an abundant meiobenthic community dominated by nematodes and rotifers, and the taxonomic composition and abundance values reported in the present study are quite comparable to meiobenthic communities found in streams and rivers in Europe (e.g. Brüchner-Hüttemann et al. 2020; Majdi et al. 2012; Majdi, Threis & Traunspurger 2017). Some meiobenthic groups have been previously investigated in the KNP: for example, 30 years ago, Botha and Heyns (1991, 1992, 1993) identified a total of 33 nematode species (including species new to science) in sediment samples collected from the Crocodile and Olifants rivers. Here we also observed an abundant nematode community, particularly in the fine sediments of the Olifants River. In South African estuaries, Pillay and Perissinotto (2009) and Nozais, Perissinotto and Tita (2005) observed that the abundance and community structure of the meiobenthos respond to river inflow and drought conditions (mostly through a decrease in physiologically sensitive taxa under harsher conditions). Here we found that biofilm- and sediment-dwelling meiobenthos exhibited different responses: nutrient and water quality proxies affected the biofilm-dwelling organisms, while sediment-dwelling meiofauna was also shaped by sediment texture and the availability of benthic organic material. Overall, the observed patterns are in agreement with previous studies showing that freshwater meiobenthic communities can become relevant bio-indicators of sediment pollution, water eutrophication or sediment texture (e.g. Haegerbaeumer et al. 2017; Schenk et al. 2020; Traunspurger et al. 2020). However, further studies should resolve the dominant meiobenthic phyla (e.g. nematodes or rotifers) to the genus or species level in order to identify sensitive and tolerant species and develop subtler bio-indication tools.
Conclusion
The KNP is known worldwide for its unique and emblematic terrestrial megafauna but less so for its remarkable freshwater fauna of benthic invertebrates. In this study we observed that the diversity of those invertebrates increased as we moved away from park borders, highlighting a potential beneficial effect of protected river reaches. Furthermore, in protected reaches, the assemblage was more even and less affected by the dominance of the invasive snail T. granifera. The effects of this species on native assemblages should be further assessed, as our results suggest that T. granifera could affect the structure of biofilm-dwelling communities. Other drivers (like the concentration of nutrients in water and sediment granulometry) also showed significant effects on the distribution of benthic invertebrates. We recommend that the interplay between biotic and abiotic drivers be further studied for a more comprehensive management and conservation of aquatic resources. Finally, our results stress that the Crocodile River showed the poorest and most unbalanced communities throughout. This suggests that further environment-damaging projects on the Crocodile River (such as current applications for large-scale coal mining on the southern boundary of the park) could have catastrophic impacts on an already stressed river system.
|
v3-fos-license
|
2023-05-05T13:10:27.834Z
|
2023-05-02T00:00:00.000
|
258487400
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1128/mbio.02174-23",
"pdf_hash": "b023203a0d827ddc85201897d4061811339b1579",
"pdf_src": "ASMUSA",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43500",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "1e3986cfb64fe683e94ff0dcf9e157e926a7a7db",
"year": 2023
}
|
pes2o/s2orc
|
Uncovering the temporal dynamics and regulatory networks of thermal stress response in a hyperthermophile using transcriptomics and proteomics
ABSTRACT Facing rapid fluctuations in their natural environment, extremophiles, like the hyperthermophilic archaeon Pyrococcus furiosus, exhibit remarkable adaptability to extreme conditions. However, our understanding of their dynamic cellular responses remains limited. This study integrates RNA-sequencing and mass spectrometry data, thereby elucidating transcriptomic and proteomic responses to heat and cold shock stress in P. furiosus. Our results reveal rapid and dynamic changes in gene and protein expression following these stress responses. Heat shock triggers extensive transcriptome reprogramming, orchestrated by the transcriptional regulator Phr, targeting a broader gene repertoire than previously demonstrated. For heat shock signature genes, RNA levels swiftly return to baseline upon recovery, while protein levels remain persistently upregulated, reflecting a rapid but sustained response. Intriguingly, cold shock at 4°C elicits distinct short- and long-term responses at both RNA and protein levels. Cluster analysis identified gene sets with either congruent or contrasting trends in RNA and protein changes, representing well-separated arCOG groups tailored to their individual cellular responses. In particular, the upregulation of ribosomal proteins and the significant enrichment of 5′-leadered sequences in cold-shock-responsive genes suggest that translation regulation is important during cold shock adaptation. Further investigating transcriptomic features, we reveal that thermal stress genes are equipped with basal sequence elements, such as strong promoters and poly(U)-terminators, facilitating a regulated response of the respective transcription units. Our study provides a comprehensive overview of the cellular response to temperature stress, advancing our understanding of stress response mechanisms in hyperthermophilic archaea and providing valuable insights into the molecular adaptations that facilitate life in extreme environments. IMPORTANCE Extreme environments provide unique challenges for life, and the study of extremophiles can shed light on the mechanisms of adaptation to such conditions. Pyrococcus furiosus, a hyperthermophilic archaeon, is a model organism for studying thermal stress response mechanisms. In this study, we used an integrated analysis of RNA-sequencing and mass spectrometry data to investigate the transcriptomic and proteomic responses of P. furiosus to heat and cold shock stress and recovery. Our results reveal the rapid and dynamic changes in gene and protein expression patterns associated with these stress responses, as well as the coordinated regulation of different gene sets in response to different stressors. These findings provide valuable insights into the molecular adaptations that facilitate life in extreme environments and advance our understanding of stress response mechanisms in hyperthermophilic archaea.
The ability of extremophilic microorganisms to adapt and thrive in extreme environments has captivated the scientific community for decades (1)(2)(3). These organisms provide unique opportunities to investigate the molecular basis of stress response, adaptation, and recovery of cellular activities, offering valuable insights into fundamental biological processes and enabling novel biotechnological applications (4). Among extremophiles, the hyperthermophilic archaeon Pyrococcus furiosus has become a model organism for studying the molecular strategies employed by thermophiles to withstand high temperatures (5)(6)(7). Pyrococcus species are marine-living anaerobic organisms that grow over a broad temperature range from 70°C to 104°C and can be found in marine sediments and in black smokers within hydrothermal vent systems that are characterized by significant temperature gradients ranging from 2°C to 400°C (Fig. S1A at https://doi.org/10.6084/m9.figshare.24006960.v1) (8)(9)(10)(11). Since hydrothermal vents are sterile during their initial formation, it was first hypothesized and later experimentally shown that they are colonized by hyperthermophilic archaea from the surrounding 4°C-cold seawater (12, 13). Moreover, evidence suggests that Pyrococcus must possess mechanisms to withstand extended cold-shock periods, as it has been successfully recultivated from cooled-down submarine plumes and floating volcanic slick taken over 1 km away from the active zone of an erupted submarine volcano (14). This observation has also been replicated in the laboratory, where it was demonstrated that hyperthermophiles could survive for at least 9 months in cold environments and react within seconds upon returning to their optimal growth temperature by initiating motility (15).
In general, hyperthermophilic archaea have adapted to thrive at temperatures exceeding 80°C through various molecular mechanisms, including stabilization of tRNAs and rRNA by higher GC-content, enrichment in hydrophobic and charged amino acids, alterations in protein structure, unique membrane composition, increased investment in nucleoid-associated proteins, and positive DNA supercoiling by reverse gyrase (16)(17)(18)(19)(20). However, the biomolecular and biophysical challenges associated with heat shock above the individual temperature limit of each organism are, at least to some extent, shared across the domains of life (21, 22). In contrast, less is known about shared molecular principles dealing with cold shock response, especially in archaea (23, 24).
Extreme temperature fluctuations pose significant challenges to cellular macromolecules, such as proteins, nucleic acids, and lipids. For instance, elevated temperatures can lead to protein denaturation, aggregation, and loss of function during heat shock. Concurrently, DNA and RNA can undergo structural alterations, impairing replication, transcription, and translation processes (25). On the other hand, cold shock induces the stabilization of secondary structures in nucleic acids, impeding their proper function and reducing transcription and translation efficiency. Furthermore, low temperatures affect membrane fluidity, potentially impairing membrane-associated processes and transport (23, 24, 26).
In bacteria, the heat shock response is primarily regulated by the conserved sigma factor σ32, which promotes transcription of heat shock genes encoding chaperones, such as DnaK and GroEL, and proteases responsible for protein refolding and degradation (27). In addition to the positive regulation by sigma factors, it is well known that transcriptional repressors, like HrcA or HspR, can also induce expression of heat shock genes by dissociating from the promoter (28)(29)(30). Cold shock responses involve the synthesis of cold-induced proteins, such as CspA in Escherichia coli, which counteract the effects of low temperatures on nucleic acids and cellular processes. These Csps are small, nucleic acid-binding proteins that are widely distributed in bacteria and structurally highly conserved, containing a cold shock domain that enables binding to target RNA and DNA. Also, they function as RNA chaperones, destabilizing secondary structures at low temperatures and facilitating transcription and translation (24, 31, 32).
In eukaryotes, the heat shock response is orchestrated by heat shock transcription factors, which modulate the expression of heat shock proteins like Hsp70 and Hsp90. On the other hand, the cold shock response involves diverse mechanisms, such as changes in membrane lipid composition and synthesis of cold-inducible RNA-binding proteins (33)(34)(35)(36).
Organisms across the tree of life are challenged by stressors such as heat and cold shock. The specific molecular players and regulatory networks involved in these processes, however, can differ significantly. For example, cold shock domain proteins are absent in all thermophilic and hyperthermophilic archaea, and the regulation of the cold shock response in archaea is less well understood compared to bacteria and eukaryotes (37). However, for the heat shock response, some regulatory mechanisms have been identified, such as the transcription factor (TF) Phr in P. furiosus, which recognizes a palindromic DNA sequence and acts as a negative regulator of many heat-inducible genes (38)(39)(40). In contrast, no thermal-responsive regulator has been identified in Crenarchaeota, and thus none is present in the well-described thermophile Sulfolobus acidocaldarius (21, 41). Nevertheless, S. acidocaldarius exhibits a classic heat shock response characterized by the induction of small chaperones and the thermosome. Notably, its genome encodes two to three different thermosome subunits, and their assembly is modulated based on the prevailing environmental conditions (21, 42).
In addition to transcription factor-based regulation, archaeal-specific small RNAs (asRNAs) and RNA-binding proteins, like proteins of the Sm-like protein family, have been implicated in fine-tuning the regulation of stress responses at the post-transcriptional level (43)(44)(45)(46). Despite significant advancements in the field, our understanding of the complex regulatory networks and adaptive mechanisms employed by P. furiosus in response to extreme temperature variations remains limited, especially when considering the time-related aspects of these processes at the RNA and protein level.
This study aims to unravel the temporal dynamics of the thermal stress response in the hyperthermophilic archaeon P. furiosus using an integrative omics approach that combines transcriptomic and proteomic analyses at multiple time points. Our objective is to analyze shifts in gene expression and protein abundance that occur as the organism experiences conditions mimicking the thermal environment of hydrothermal vents. To achieve this, we first compare our findings with the established knowledge of responses on the transcriptional level and mechanisms of the heat and cold shock response. We, furthermore, integrate the transcriptomic and proteomic data to generate a more comprehensive understanding. Moreover, we consider the temporal resolution of our experimental setup to identify gene clusters and key cellular processes that display similar regulatory patterns. Furthermore, we explore putative regulatory elements, such as promoters, terminators, and operons, that may influence the transcriptomic landscape under stress conditions. In particular, we seek to enhance our understanding of the temporal resolution of the Phr-regulated heat shock response. Ultimately, this in-depth analysis aims to expand our knowledge of the molecular adaptations that enable long-term survival in cold seawater and recolonization of black smokers, contributing to the broader understanding of extremophile biology.
Growth conditions and sample preparations
Pyrococcus furiosus DSM 3638 was cultured in 120 mL serum bottles following the protocol previously described, under anaerobic conditions at one bar excess of nitrogen (47). Cells were grown in 40 mL SME medium supplemented with 40 mM pyruvate, 0.1% peptone, and 0.1% yeast extract for 15 h at 95°C until they reached the mid-exponential growth phase with a cell density of 5 × 10⁷ cells/mL (47).
For cold shock treatment, cells in the mid-exponential growth phase were subjected to cold shock by rapidly cooling the medium through a 2.5-m-long, 2-mm-diameter Viton hose flushed with anaerobic NaCl solution into a new bottle containing 1 bar excess of nitrogen (Fig. S1B and C at https://doi.org/10.6084/m9.figshare.24006960.v1). The transfer took 90 s, and the time to reach 4°C in an ice water bath was 160 s in total, which marked the beginning of the cold shock treatment. For recovery, the ice bath was replaced with a 95°C hot water bath, and a recovery sample was collected 5 min after the target temperature of 95°C was reached.
For heat shock treatment, serum bottles with cells in the mid-exponential growth phase were placed in a 105°C incubator. It took 26 min for the cultures to reach the target temperature of 105°C, which was considered the starting point for the heat shock treatment. Samples were collected after 5 and 15 min of incubation at this temperature. For recovery, cultures were transferred to another incubator and allowed to cool down to 95°C, which took 10.5 min (Fig. S1D at https://doi.org/10.6084/m9.figshare.24006960.v1).
Cells from all conditions were harvested by centrifugation at 14,000 × g for 10 min at 4°C and resuspended in 2 mL buffer (25 mM Tris/HCl pH 7.6, 100 mM NaCl). Samples were then divided for RNA (1.5 mL) and protein analysis (0.5 mL), followed by centrifugation at 17,700 × g for 5 min at 4°C. The resulting pellets were stored at −80°C until further analysis.
RNA extraction
Total RNA isolation was performed using cell pellets, which were initially lysed by adding 1 mL of TRI Reagent (Zymo Research) and subsequently processed with the Direct-zol RNA Miniprep Plus kit (Zymo Research) following the manufacturer's instructions. This protocol included DNase I digestion to eliminate genomic DNA contamination.
To evaluate the purity, quality, and quantity of the extracted RNA, several analytical methods were employed. Standard spectroscopic measurements were performed using a NanoDrop One spectrophotometer (Thermo Fisher Scientific). Fluorometric quantification was carried out with the Qubit RNA assay (Thermo Fisher Scientific). Lastly, RNA integrity values were assessed using a Bioanalyzer (Agilent Technologies) to ensure the suitability of the RNA samples for downstream applications.
Library preparation and sequencing
Four independent biological replicates were prepared for each condition and subjected to differential gene expression analysis. Prior to library preparation, ribosomal RNAs were depleted from 1 µg input RNA using the Pan-Archaea riboPOOL (siTOOLs) according to the manufacturer's instructions. Library preparation and RNA-seq were carried out as described in the Illumina "Stranded mRNA Prep Ligation" Reference Guide, the Illumina NextSeq 2000 Sequencing System Guide (Illumina, Inc., San Diego, CA, USA), and the KAPA Library Quantification Kit-Illumina/ABI Prism (Roche Sequencing Solutions, Inc., Pleasanton, CA, USA). In brief, omitting the initial mRNA purification step with oligo(dT) magnetic beads, around 5 ng of rRNA-depleted archaeal RNA was fragmented to an average insert size of 200-400 bases using divalent cations under elevated temperature (94°C for 8 min). Next, the cleaved RNA fragments were reverse transcribed into first-strand complementary DNA (cDNA) using reverse transcriptase and random hexamer primers. Actinomycin D was added to allow RNA-dependent synthesis and to improve strand specificity by preventing spurious DNA-dependent synthesis. Blunt-ended second-strand cDNA was synthesized using DNA Polymerase I, RNase H, and dUTP nucleotides. The incorporation of dUTP, in place of dTTP, quenches the second strand during the later PCR amplification because the polymerase does not incorporate past this nucleotide. The resulting cDNA fragments were adenylated at the 3′ ends, and the pre-index anchors were ligated. Finally, DNA libraries were created using a 15-cycle PCR to selectively amplify the anchor-ligated DNA fragments and to add the unique dual indexing (i7 and i5) adapters. The libraries were bead-purified twice and quantified using the KAPA Library Quantification Kit. Equimolar amounts of each library were sequenced on an Illumina NextSeq 2000 instrument controlled by the NextSeq 2000 Control Software (NCS, v. 1.4.1.39716), using one 50-cycle P3 Flow Cell with the dual-index, single-read (SR) run parameters. Image analysis and base calling were done by the Real-Time Analysis Software (RTA, v. 3.9.25). The resulting .cbcl files were converted into .fastq files with the bcl2fastq software (v. 2.20).
Library preparation and sequencing were performed at the Genomics Core Facility "KFB -Center of Excellence for Fluorescent Bioanalytics" (University of Regensburg, Regensburg, Germany; www.kfb-regensburg.de).
Differentially expressed genes from RNA-seq count data were identified following the recommendations in the Bioconductor vignette of the DESeq2 package (v. 1.36) (51). Briefly, featureCounts (Rsubread package v. 2.10.5) was used to calculate the count matrix based on a custom GTF file generated by filtering the P. furiosus DSM 3638 GFF annotation file downloaded from the NCBI for protein-coding genes (column biotype) (52). Principal component analysis (PCA) was performed on variance-stabilizing transformed data, and outlier replicates (HS 1 replicate 1, HS 2 replicate 4, HS R replicate 1, CS R replicate 2) were removed from the data set after visual inspection. Differential expression analysis was conducted by comparing each of the cold or heat shock conditions with the control condition in a pairwise manner.
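For orientation, the outlier screen in this step can be approximated in a few lines. The sketch below (Python, with illustrative names only) projects log-transformed counts onto principal components so that stray replicates stand out by eye; the log2(x + 1) transform merely stands in for DESeq2's variance-stabilizing transform, which has no direct scikit-learn equivalent, so this is an analogue of the workflow rather than the authors' R pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_outlier_screen(counts, sample_names, n_components=2):
    """Project samples onto principal components for visual outlier checks.

    counts: genes x samples array of raw read counts (hypothetical input).
    """
    log_counts = np.log2(counts + 1.0)  # stand-in for DESeq2's VST
    pca = PCA(n_components=n_components)
    coords = pca.fit_transform(log_counts.T)  # PCA expects samples as rows
    for name, (pc1, pc2) in zip(sample_names, coords[:, :2]):
        print(f"{name}: PC1={pc1:.2f}, PC2={pc2:.2f}")
    return coords, pca.explained_variance_ratio_
```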
Sequencing was performed at the Genomics Core Facility "KFB - Center of Excellence for Fluorescent Bioanalytics" (University of Regensburg, Regensburg, Germany; www.kfb-regensburg.de) and carried out as described in the Illumina NextSeq 500 System Guide (Illumina, Inc., San Diego, CA, USA) and the KAPA Library Quantification Kit-Illumina/ABI Prism (Roche Sequencing Solutions, Inc., Pleasanton, CA, USA). In brief, the libraries were quantified using the KAPA Library Quantification Kit. Equimolar amounts of each library were sequenced on a NextSeq 500 instrument controlled by the NextSeq Control Software (NCS, v. 2.2.0), using one 75-cycle High Output Kit with the single-index, paired-end (PE) run parameters. Image analysis and base calling were done by the Real-Time Analysis Software (RTA, v. 2.4.11). The resulting .bcl files were demultiplexed and converted into .fastq files with the bcl2fastq software (v. 2.18).
The Term-seq protocol was applied to four biological replicates of cells grown to mid-exponential growth phase at 95°C.
Identification of enriched 3′ ends
Fastq files were quality- and adapter-trimmed using trimmomatic (v. 0.39) in paired-end mode (ILLUMINACLIP:TruSeq3-PE.fa:2:30:10:8:true, AVGQUAL:25, MINLEN:20) (53). Unique molecular identifiers (UMIs) were extracted from paired-end reads by UMI-tools (v. 1.0.1) using the umi_tools command (--bc-pattern=NNNN, --bc-pattern2=NNNN). Mapping was performed by bowtie2 (v. 2.2.5) in --sensitive-local mode with the last four nucleotides of each read being trimmed using --trim3 4 (54). SAM files were converted to BAMs and sorted and indexed using samtools (v. 1.9) (50). Subsequently, mapped reads were deduplicated using the extracted UMIs by umi_tools dedup (v. 1.0.1) with default settings for paired-end data. Detection of enriched 3′ ends by peak calling and downstream analysis was performed as described in reference (55). Briefly, strand-specific bedgraph files were first generated and CPM (counts per million, normalization for sequencing depth) normalized using deepTools bamCoverage (v. 3.5.0), with the SAM flags 83 and 99 in combination with --Offset 1 and --binSize 1 --minMappingQuality 20. Next, the termseq_peaks script was used with default parameters to call peaks using four replicates as input (https://github.com/nichd-bspc/termseq-peaks) (56). For end detection, peak files were merged with the position-specific count files using bedtools intersect (option -wao, v. 2.31.0). Enriched positions were finally filtered and annotated based on the following criteria: for each peak, the position with the highest number of reads was selected per replicate, and only maximum peak positions present in at least three of the four replicates were retained. Positions with fewer than five CPM counts were excluded from the analysis. Positions were assigned based on their relative orientation to a gene and their respective peak height as primary (highest peak within 300 bases downstream from a gene), secondary (each additional peak 300 bases downstream from a gene), and internal (each peak in the coding range).
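The filtering criteria above lend themselves to a compact re-implementation. The following Python sketch mirrors the described rules (per-replicate maximum per peak, support in at least three of four replicates, ≥5 CPM) on a hypothetical long-format table; it illustrates the logic only and is not the termseq_peaks pipeline itself, and the column names are invented.

```python
import pandas as pd

def filter_termseq_ends(peaks: pd.DataFrame, min_replicates: int = 3,
                        min_cpm: float = 5.0) -> pd.DataFrame:
    """Filter candidate 3' ends.

    `peaks` is assumed to hold one row per peak/position/replicate with
    columns: peak_id, position, replicate, cpm (column names are invented).
    """
    # Per peak and replicate, keep only the position with the highest signal.
    maxima = peaks.loc[peaks.groupby(["peak_id", "replicate"])["cpm"].idxmax()]
    # Require the same maximum position in at least `min_replicates` replicates.
    support = (maxima.groupby(["peak_id", "position"])["replicate"]
                     .nunique().rename("n_reps").reset_index())
    kept = support[support["n_reps"] >= min_replicates]
    # Attach the mean CPM and drop weakly covered positions.
    mean_cpm = (maxima.groupby(["peak_id", "position"])["cpm"]
                      .mean().rename("mean_cpm").reset_index())
    kept = kept.merge(mean_cpm, on=["peak_id", "position"])
    return kept[kept["mean_cpm"] >= min_cpm]
```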
PCR-cDNA sequencing libraries were prepared following the Oxford Nanopore Technologies (ONT) PCR-cDNA barcoding kit protocol (SQK-PCB109) with minor modifications, such as using a custom 3′ cDNA RT primer (5′-ACTTGCCTGTCGCTCTATCTTCATTGATGGTGCCTACAG-3′, 2 µM), which replaces the VN primer. RNA and cDNA sizes were assessed using a Bioanalyzer (Agilent). After reverse transcription and template-switching as described in the ONT protocol, samples were PCR-amplified for 12 cycles with a 500-s elongation time. During library preparation, samples were quantified and quality-checked using standard spectroscopic measurements (NanoDrop One, Thermo Fisher Scientific) and Qubit assays (Thermo Fisher Scientific). Equimolar amounts of samples were pooled, adapter-ligated, and sequenced on an R9.4 flow cell (Oxford Nanopore Technologies) using an MK1C device for 72 h. PCR-cDNA sequencing was performed for two biological replicates of cells grown to mid-exponential growth phase at 95°C.
Lastly, strand-specific coverage files were created using samtools depth (-a, -J options enabled) with a bin size of 1, including reads with deletions in the coverage computation. Downstream analysis, including CPM normalization, calculation of the mean coverage for each position across the two replicates, and plotting, was performed using the Tidyverse (v. 2.0.0) in R (61).
Sample preparation, analysis, and data processing
Mass spectrometry analysis was performed for four biological replicates according to the following protocol. Protein samples were first purified by running them a short distance into a 4%-12% NuPAGE Novex Bis-Tris Minigel (Invitrogen), followed by Coomassie staining and in-gel digestion with trypsin (62).
For generation of a peptide library, equal-amount aliquots from each sample were pooled to a total amount of 200 µg and separated into 12 fractions using a basic-pH reversed-phase C18 separation on an FPLC system (Äkta pure, Cytiva) and a staggered pooling scheme. All samples were spiked with a synthetic peptide standard used for retention time alignment (iRT Standard, Schlieren, Switzerland).
DDA analysis was performed in PASEF mode (63) with 10 PASEF scans per topN acquisition cycle. Multiply charged precursors were selected based on their position in the m/z-ion mobility plane and isolated at a resolution of 2 Th for m/z ≤ 700 and 3 Th for m/z > 700 for MS/MS to a target value of 20,000 arbitrary units. Dynamic exclusion was set to 4 min. Two technical replicates per C18 fraction were acquired.
DIA analysis was performed in diaPASEF mode (64) using 32 × 25 Th isolation windows from m/z 400 to 1,200 to include the 2+/3+/4+ population in the m/z-ion mobility plane. The collision energy was ramped linearly as a function of the mobility from 59 eV at 1/K0 = 1.6 Vs cm⁻² to 20 eV at 1/K0 = 0.6 Vs cm⁻². Two technical replicates per biological replicate were acquired.
Protein identification and quantification were performed in Spectronaut Software 15.6 (Biognosys). Protein identification was achieved using the software's Pulsar algorithm at default settings against the UniProtKB Pyrococcus furiosus reference proteome (revision 09-2021) augmented with a set of 51 known common laboratory contaminants. For quantitation, up to the 6 most abundant fragment ion traces per peptide and up to the 10 most abundant peptides per protein were integrated and summed up to provide protein area values. Mass and retention time calibration, as well as the corresponding extraction tolerances, were determined dynamically. Both identification and quantification results were trimmed to a false discovery rate of 1% using a forward-and-reverse decoy database strategy. Protein quantity distributions were normalized by quartile normalization, and intensity-based absolute quantification (iBAQ) values were used as proxies for protein levels (65).
Differential enrichment analysis
Quantitative values were processed using log2 transformation, and PCA was used for quality control and detection of outlier replicates (CS 2 replicate 1, CS 3 replicate 4, CS R replicate 2, HS 2 replicate 3, HS R replicate 1) after visual inspection. Normalized protein expression data were analyzed using the R limma package (v. 3.52.4) to identify differentially expressed proteins between control and cold- or heat-stressed samples (66).
Comparison of transcriptomic and proteomic data
For comparison of transcriptomic and proteomic results, either log2-fold changes or count values were compared. While iBAQ values were used as proxies for protein abundance, total transcripts per million (TPM) was used as a normalization method for RNA-seq data to account for differences in sequencing depth and transcript length. TPM values were calculated by dividing the number of reads mapping to each gene by the gene length in kilobases, then dividing the resulting reads per kilobase (RPK) values by the sum of all RPK values in the sample, and, finally, multiplying the quotient by one million. This normalization method ensures that the sum of all TPM values in each sample is the same, allowing for better comparison of gene expression levels between samples.
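As a minimal illustration, the calculation described above can be written as a single function (Python; the variable names are ours):

```python
import numpy as np

def tpm(counts: np.ndarray, gene_lengths_kb: np.ndarray) -> np.ndarray:
    """TPM-normalize a genes x samples count matrix, as described above."""
    rpk = counts / gene_lengths_kb[:, np.newaxis]  # reads per kilobase
    return rpk / rpk.sum(axis=0) * 1e6             # each sample sums to 1e6

# Toy example: two genes (1 kb and 2 kb) in one sample.
print(tpm(np.array([[100.0], [400.0]]), np.array([1.0, 2.0])))
# -> [[333333.33...], [666666.66...]]
```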
Additionally, TPM counts were used for visualization purposes throughout the manuscript for the transcriptomics data.
Functional enrichment analysis based on arCOG classification
To gain insights into the functional characteristics of differentially expressed genes, functional enrichment analysis based on the Archaeal Clusters of Orthologous Genes (arCOG) classification was performed as described previously (67, 68). Briefly, arCOGs for P. furiosus were retrieved from (69), and gene set enrichment analysis was performed with the goseq package (v. 1.48.0) in R, which accounts for gene length bias (70). For each comparison, a condition- and method-specific background file was generated from all genes that could be detected. Next, P-values for overrepresentation of arCOG terms in the differentially expressed genes were calculated separately for up- and downregulated genes based on RNA-seq and MS data, respectively. Significant enrichments were identified with a threshold set at 0.05.
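Conceptually, such an overrepresentation test reduces to a hypergeometric draw. The sketch below illustrates it for a single arCOG category; unlike goseq, this simplified analogue does not correct for gene-length bias, and the example numbers are invented.

```python
from scipy.stats import hypergeom

def arcog_overrepresentation_p(n_background: int, n_in_category: int,
                               n_de: int, n_de_in_category: int) -> float:
    """One-sided P-value that >= n_de_in_category differentially expressed
    genes fall into the category when n_de genes are drawn at random from
    the background."""
    return hypergeom.sf(n_de_in_category - 1, n_background,
                        n_in_category, n_de)

# Invented example: 2,000 detected genes, 80 in category L,
# 300 upregulated genes, 25 of which are in category L.
print(arcog_overrepresentation_p(2000, 80, 300, 25))  # small P -> enriched
```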
Promoter and terminator analysis
Primary transcription start sites (TSS) and corresponding 5′ UTR lengths were extracted from (47). Genes were categorized by the existence of an archaea-typical promoter motif containing a TFB-recognition element [BRE, 5′-G/C(A/T)AAA-3′] and a TATA element [5′-TTT(A/T)(A/T)(A/T)-3′] (47, 71). Therefore, the sequences from −50 to +10 relative to the TSS were analyzed via MEME (v. 5.4.1, -mod zoops -minw 8 -maxw 20), resulting in a classification of genes contributing (+promoter) and not contributing (−promoter) to the motif (72). Position-specific motifs were plotted in R using the ggseqlogo package (v. 0.1) (73). Promoter strength was estimated according to the method used in reference (74). Briefly, sequences from −42 to −19 were extracted and analyzed using MEME in oops mode. The P-value from the motif search was used as an estimator of promoter strength: a lower P-value is indicative of a stronger promoter, with a low probability of the archaeal promoter motif being found randomly.
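The only programmatic step here that MEME does not cover is cutting out the analyzed windows. A minimal, strand-aware sketch (assuming a 0-based TSS coordinate on an in-memory genome string; this is not taken from the authors' code) could look like this:

```python
def promoter_window(genome: str, tss: int, strand: str,
                    up: int = 50, down: int = 10) -> str:
    """Return the -50..+10 window around a TSS (TSS itself = position +1)."""
    if strand == "+":
        return genome[max(0, tss - up):tss + down]
    # Minus strand: take the corresponding genomic span, then
    # reverse-complement so the window reads 5' -> 3'.
    complement = str.maketrans("ACGTacgt", "TGCAtgca")
    window = genome[max(0, tss - down + 1):tss + up + 1]
    return window.translate(complement)[::-1]
```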
3′ ends were classified similarly based on the presence of a poly(U)-terminator motif. To this end, sequences from −35 to +2 relative to the primary 3′ ends were extracted and analyzed via MEME (-mod zoops -minw 4 -maxw 20), and genes contributing to the motif were classified as +poly(U) motif. Nucleotide enrichment analysis was performed as described in reference (75). Briefly, the frequency of each base was calculated using the extracted sequences and compared to the same calculation based on randomly sampling 100,000 positions from intergenic regions. Values from this comparison were log2-transformed, and only the enrichment of nucleotide U was plotted.
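The position-wise comparison itself is straightforward. The following Python sketch computes the log2 enrichment of U (stored as T in genomic sequence) per alignment position against a background set; the small pseudocount guarding against division by zero is an implementation detail we assume rather than quote.

```python
import numpy as np

def u_enrichment(seqs, background_seqs, pseudo=1e-3):
    """Per-position log2 enrichment of U/T in `seqs` over a background set
    (e.g. windows sampled from intergenic regions). All sequences must
    share the same length/alignment."""
    def t_freq(seq_list):
        arr = np.array([list(s.upper()) for s in seq_list])
        return (arr == "T").mean(axis=0)  # fraction of T at each position
    return np.log2((t_freq(seqs) + pseudo) /
                   (t_freq(background_seqs) + pseudo))
```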
Additionally, the structural stability of the RNA was predicted by folding the 45-nt-long RNA upstream of the terminators using the LncFinder package (v. 1.1.5) in R. This package utilizes the RNAfold software (v. 2.4.18) with standard parameters to predict the minimum free energy of the RNA structure. We compared these values to the calculated values from 1,000 randomly subsampled intergenic regions (76).
Operon analysis
Operon analysis of selected heat-shock genes was performed by comparing the annotation from the DOOR2 operon database, the operon annotation based on ANNOgesic prediction using short reads from mixed conditions from P. furiosus, and the single-read analysis of PCR-cDNA Nanopore reads generated for this study (47, 77). Selected genes were predicted to be a single-gene unit or part of a multi-unit operon by visual inspection based on the gene overlap of single transcripts.
Correlation analysis
The Pearson correlation coefficient was employed as a general measure of similarity between count values or fold changes in a pairwise-complete manner and was calculated using the ggpubr package (v. 0.6.0) in R.
For evaluating the similarity of the heat-shock and cold-shock results with a previously published heat-shock study, we compared the log2-fold changes from each condition (relative to the control condition) with the expression changes extracted from Supplementary Tables S2 and S3 in reference (78). This allowed us to quantify the extent of similarity between the heat-shock, cold-shock, and recovery conditions and the temperature shift from 90°C to 97°C as described in reference (78).
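Since pandas computes pairwise-complete Pearson correlations by default, the comparison can be sketched in a couple of lines; the column names and values below are invented for illustration.

```python
import pandas as pd

# Hypothetical log2 fold changes: one column per condition, one row per gene;
# missing values (NaN/None) are dropped pairwise by DataFrame.corr.
fc = pd.DataFrame({
    "HS_1":      [2.1, -0.5, 3.0, None],
    "published": [1.8, -0.2, 2.4, 0.9],
})
print(fc.corr(method="pearson").loc["HS_1", "published"])
```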
Detection and analysis of signature clusters
The z-score normalized log2-fold changes of RNA and protein values were subjected to PCA using the prcomp function in R. Next, the PCA results were used to calculate the pairwise Euclidean distances between samples. This distance matrix serves as a measure of similarity, enabling us to group genes that share similar profiles. To gain further insights into shared patterns, we performed hierarchical clustering with the ward.D2 linkage method.
The number of clusters was determined using the elbow method and after inspection of the enrichment analysis of arCOG categories. This approach allowed for the identification of clusters of genes with similar expression patterns across different conditions and data sets, which could provide insights into the regulation of cellular processes in the hyperthermophilic archaeon Pyrococcus furiosus.
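A compact Python analogue of this workflow is sketched below (z-scoring, PCA, Euclidean distances, Ward linkage). scipy's 'ward' method on Euclidean distances corresponds to R's ward.D2, while the number of retained components and the small epsilon guarding flat profiles are our assumptions, as the text does not specify them.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

def cluster_profiles(log2fc: np.ndarray, n_clusters: int = 5) -> np.ndarray:
    """Assign genes (rows of log2fc) to clusters by profile similarity."""
    mu = log2fc.mean(axis=1, keepdims=True)
    sd = log2fc.std(axis=1, keepdims=True) + 1e-9  # guard flat profiles
    z = (log2fc - mu) / sd
    coords = PCA(n_components=min(5, z.shape[1])).fit_transform(z)
    tree = linkage(pdist(coords, metric="euclidean"), method="ward")
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```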
Functional analysis was performed using the STRING database to identify known and predicted protein-protein interactions among the differentially expressed genes (79).
The codon adaptation index (CAI), a common index that measures how well a gene's codons are adapted to the organism's preferred codon usage, was computed using the CAI function from the coRdon (v. 1.14.0) package, with the 5% most abundant proteins according to our control sample serving as the reference set (80).
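For readers unfamiliar with the index: CAI is the geometric mean of per-codon weights, where each weight is the codon's frequency in a reference set of highly expressed genes relative to its most-used synonymous codon. The sketch below illustrates the computation with a deliberately truncated codon table (two amino acid families only) and is not a substitute for the coRdon implementation.

```python
import math
from collections import Counter

# Truncated synonym table for illustration only; a real implementation
# covers all 61 sense codons of the standard genetic code.
SYNONYMS = {"GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
            "GAA": "Glu", "GAG": "Glu"}

def codons(seq):
    return (seq[i:i + 3] for i in range(0, len(seq) - 2, 3))

def codon_weights(reference_genes):
    """w(codon) = count / max count among its synonymous codons,
    estimated from a reference set of highly expressed genes."""
    counts = Counter(c for g in reference_genes for c in codons(g))
    best = {}
    for codon, aa in SYNONYMS.items():
        best[aa] = max(best.get(aa, 0), counts[codon])
    return {c: counts[c] / best[aa] for c, aa in SYNONYMS.items() if best[aa]}

def cai(gene, w):
    """Geometric mean of the weights of the gene's (covered) codons."""
    logs = [math.log(w[c]) for c in codons(gene) if w.get(c, 0) > 0]
    return math.exp(sum(logs) / len(logs)) if logs else float("nan")
```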
Experimental setup and analysis strategy for studying the heat and cold shock response in Pyrococcus furiosus
To investigate the temporal dynamics of transcriptomic and proteomic responses of P. furiosus to heat and cold shock, we designed a custom temperature-controlled system for rapid cooling and reheating of samples (Fig. S1B at https://doi.org/10.6084/m9.figshare.24006960.v1). Cells were initially cultivated at an optimal temperature of 95°C to mid-exponential growth phase, serving as the control condition (Ctrl). Cold shock (CS) was simulated by shifting the temperature to 4°C, with samples taken at 20 min (CS 1), 2 h (CS 2), and 24 h after transfer of the samples to 4°C (CS 3) (Fig. 1A). This choice of 4°C as the cold shock temperature was carefully considered to replicate a biologically relevant yet standardized thermal stress condition, which simulates the extreme temperature fluctuations encountered by marine extremophiles, even for organisms isolated from geothermally heated sediments like P. furiosus that face unknown temperature gradients. Heat shock (HS) samples were collected 5 min (HS 1) and 15 min (HS 2) after shifting the temperature to 105°C, a temperature that inhibits cell growth (8). In both setups, recovery samples (CS R, HS R) were obtained by returning cells to their optimal growth temperature following the final shock time point.
Principal component analysis (PCA) revealed distinct temporal and temperature-dependent responses at the RNA level, with the most significant variance observed after HS 1, HS 2, and CS 3 (Fig. 1E). To determine whether differences in the PCA can be explained by stress islands in the genome, we visualized z-score normalized TPM counts globally, finding that transcript changes were dispersed throughout the transcriptome (Fig. 1F). Variance at the RNA level was corroborated at the protein level, with two exceptions (Fig. 1G). First, the HS recovery sample did not cluster with the Ctrl condition but was positioned between HS 1 and the control, suggesting a prolonged protein response. Second, CS 1 formed a separate group at the protein level, reflecting substantial variation in protein expression patterns. Similar to the RNA level, proteome changes during CS and HS were not restricted to specific islands (Fig. 1H).
Transcriptome analysis reveals extensive reprogramming during heat shock and a moderate overlap with the cold shock response in P. furiosus
Having verified the robustness of our experimental design for analyzing temporal aspects of thermal stress and recovery responses in P. furiosus, we proceeded with our investigation in two stages. Therefore, we first conducted a comprehensive transcriptome analysis to validate the selected time points and temperatures based on known stress responses and to identify potential regulatory aspects before integrating the results with the proteomics data. A 5-min HS at 105°C (HS 1) induced significant changes in 68% of the transcriptome, with 330 genes upregulated over 2-fold and 411 genes downregulated over 2-fold (Fig. 2A). Notably, several upregulated genes included well-known HS proteins, confirming the effectiveness of our experimental setup. Specifically, HS in P. furiosus has been shown to induce the expression of the thermosome (log2 FC: 2.7), small chaperones, such as HSP20 (log2 FC: 5.4), VAT1 (log2 FC: 6.7), and VAT2 (log2 FC: 0.2), and proteases that bind, refold, or degrade misfolded proteins accumulating in cells (7) (Fig. 2B). Moreover, we could confirm previous data showing that the synthase (Myo-inositol-1-phosphate synthase: Myo-Synthase, PF1616; log2 FC: 4.0) catalyzing a precursor of the compatible solute Di-myo-inositol-phosphate (DIP) accumulates during HS. DIP is suggested to have a protein-stabilizing role in hyperthermophiles, while proteins typically involved in controlling the thermal damage of the proteome, like the ATP-independent chaperone prefoldin (log2 FC: −1.1), are not upregulated during HS (78, 81). Some of the aforementioned proteins are under the control of the negative transcriptional regulator Phr. We confirmed that all currently known target genes, except PF1292, are substantially upregulated under our selected conditions (40). While the expression of the only ATP-dependent protease in P. furiosus, the proteasome (alpha-subunit log2 FC: −0.4; beta-subunit log2 FC: 0.2), is not increased during HS, the proteasome-assembling chaperone homolog PbaA (log2 FC: 3.9) that forms a complex with PF0014 (log2 FC: 3.9) is highly upregulated (7, 82) (Fig. 2B).
For systematic HS response analysis, we conducted functional enrichment analysis based on archaeal clusters of orthologous groups (arCOGs) (Fig. 2C) (69). Genes related to replication, recombination, and repair (category L) are overrepresented among highly upregulated genes. In contrast, genes associated with energy production (category C), motility (category N), cell defense (category V), inorganic ion transport (category P), amino acid transport (category E), and unknown groups are overrepresented among highly downregulated genes. The remaining significantly regulated genes with minor fold changes smaller than two do not exhibit clear categories, except for cell cycle (category D) overrepresentation in the upregulated group.
In comparison to HS, much less is known about the CS reaction in Pyrococcus. Our experimental setup significantly differs from previous analyses, which employed a temperature at which Pyrococcus can still grow (70°C), thus limiting direct comparisons (83). Upon immediate CS, 47% of the transcriptome underwent significant alterations (Fig. 2D). However, fewer protein-coding transcripts were strongly affected during CS 1 compared to HS 1, with only 78 strongly upregulated and 137 strongly downregulated genes observed. The most substantial upregulation occurred for a riboflavin-synthesis operon (PF0061-PF0064, log2 FC: 4.5), potentially under the control of the transcriptional regulator RbkR (PF0988) (84). Among the cold-responsive solute-binding proteins CipA (log2 FC: 1.5) and CipB (log2 FC: 0.2), only the former was induced significantly during CS (83). Functional enrichment analysis revealed that genes related to coenzyme transport and metabolism (category H) were overrepresented in highly upregulated genes during CS (Fig. 2E). Additionally, energy production (category C), carbohydrate transport (category G), and lipid transport (category I) were overrepresented in upregulated genes. In contrast, genes associated with amino acid transport and metabolism (category E) were downregulated at the RNA level during CS. Interestingly, genes involved in translation and ribosome function (category J), as well as inorganic ion transport (category P), remained unaffected at the RNA level. The functional overlap between groups of regulated genes of the HS and CS response was limited, except for the shared downregulation of amino acid metabolism (category E).
To further investigate this, we compared the gene-specific regulation of groups that are highly up- or downregulated under both conditions (Fig. 2F). A considerable proportion of genes downregulated during CS (41%) were also downregulated under HS. Similarly, this pattern was consistent for upregulated genes, with 33% of the genes being upregulated during HS (Fig. S2 at https://doi.org/10.6084/m9.figshare.24006960.v1). Two currently uncharacterized transcription factors were upregulated under both CS and HS, suggesting either a potential general role in stress regulation or temperature-induced secondary effects that lead to the observed overlap between conditions (Fig. 2G). While the riboflavin operon remained unaffected during HS, the tungsten transport protein WtpA was substantially downregulated under both HS and CS conditions. Comparing the expression levels of the control condition for each regulation group, we observed that HS triggered extensive reprogramming of the transcriptome (Fig. 2H). Initially low-transcribed genes are highly upregulated, while genes highly transcribed during exponential growth are massively downregulated (Fig. S3 at https://doi.org/10.6084/m9.figshare.24006960.v1). This pattern was not observed in the CS response, where only a smaller proportion of highly upregulated genes had lower expression values in the starting condition.
The results validate functional aspects of HS regulation and reveal some overlap with the CS response, warranting further investigation into the temporal dynamics of the thermal stress responses and the crosstalk of stress response pathways in general.
Temporal dynamics of the thermal stress response at the transcriptome level
We examined the number of up- and downregulated genes across all conditions, including HS 1, HS 2, and the recovery sample, using the previously established significance and fold change cutoffs (Fig. 3A). Approximately half of the strongly induced genes remained highly upregulated, while the other half returned to unchanged or mildly induced levels compared to the control sample. Minimal to no overlap with downregulated transcripts was observed, confirming the specificity of the response. This pattern also applied to normally upregulated genes, which returned to initial levels. A similar pattern emerged for downregulated genes, although a higher proportion of initially downregulated genes became upregulated later. Samples taken 5 min after switching from HS or CS to recovery exhibited the lowest proportion of gene expression change, with only 4% being strongly up- or downregulated.
Gene enrichment analysis revealed a counter-regulation indicating a rapid recovery response for some cellular processes. Genes with transcription (category K) and replication (category L) annotations were overrepresented in the upregulated groups, but this pattern was not observed in the prolonged HS 2 condition, suggesting that extended heat exposure elicits a distinct response (Fig. 3B). Instead, translation-related genes (category J) are overrepresented after extended heat exposure, indicating a switch from protection to maintaining translation. Additionally, cell motility (category N) was induced and remained enriched even after recovery, alongside energy production (category C), which is overrepresented in downregulated genes at HS 1 and HS 2. Defense mechanisms (category V) were shut down during the immediate HS response. Notably, many currently uncharacterized genes (categories R and S) were either shut down or upregulated during HS 1 and HS 2.
In the CS response, the number of highly up- or downregulated genes remained consistently low across all CS conditions (Fig. 3C). CS 1 and CS 2 exhibited a similar time-dynamic pattern as HS, with minimal to no overlap between up- and downregulated genes. In contrast, the 24-h response (CS 3) showed slightly more overlap between groups, with some genes not differentially expressed after 2 h (CS 2) but altered after 24 h. However, a set of highly upregulated genes remained upregulated. Interestingly, the recovery condition displayed the most significant changes. As we hypothesized that this condition could induce a heat-shock-like reaction, we compared the results to a study using an HS setup from 90°C to 97°C, finding the best correlation with the CS recovery among all of the conditions (Fig. S4 at https://doi.org/10.6084/m9.figshare.24006960.v1) (78).
To further investigate this, we conducted functional enrichment analysis. Energy-related categories (categories C and G) were overrepresented in almost all upregulated CS conditions, including recovery. Translation-related genes (category J) exhibited a delayed response in both HS and CS but were immediately counter-regulated in the CS recovery. Transcription (category K) was overrepresented in upregulated genes earlier than translation. In contrast, most downregulated genes in the CS response have not yet been described (category S). Only amino acid transport (category E) appears to be silenced, while the rest of the response is nonspecific.
In summary, differential gene expression and functional enrichment analysis of the RNA-seq data indicate that HS triggers substantial transcriptome changes, which are balanced remarkably quickly. In contrast, CS induces multiple responses, especially after a prolonged incubation time.
Investigating basal features of the archaeal transcriptome landscape in heat shock and cold shock regulation
[Figure 3 caption (residual): (A) Flow diagram for the HS experiment; the diagram is plotted to scale, and percentages may not sum to 100% due to rounding. (B) Gene set enrichment analysis of archaeal clusters of orthologous genes (arCOGs) for all three HS conditions; significance levels indicated by a continuous color bar from white to dark purple; genes with a P-value <0.05 are considered significantly overrepresented and highlighted by a circle, with circle size reflecting the number of genes in the category; up and down categories include all upregulated and downregulated genes, respectively, regardless of fold changes. (C) Flow diagram for the CS experiment displaying the relative number of genes and their interconnections between conditions (x-axis) for each regulatory group, as described in panel A; plotted to scale, with percentages possibly not summing to 100% due to rounding. (D) Gene set enrichment analysis of arCOGs for all four CS conditions, displayed as in panel B.]

Next, we investigated whether stress-induced genes share similar regulatory sequence features with genes under normal expression, highlighting their potential role in essential cellular functions. We first examined promoter elements, which have been shown to contribute to gene expression, although escape of the transcription elongation complex has been found to be the rate-limiting step during exponential growth (74). In P. furiosus, genes with archaea-typical BRE-TATA promoters display higher expression levels during exponential growth (Fig. 4A and B) (47). To determine whether stress-regulated genes are equipped with strong promoters, we analyzed the promoter strength of HS 1- and CS 1-induced genes, finding no difference compared to non-differentially expressed genes (Fig. 4C). Notably, highly upregulated genes under CS conditions are all leadered, with a 5′ UTR of at least nine nucleotides, possibly allowing efficient loading of the ribosome (Fig. 4D). Accordingly, more genes that are downregulated during CS are leaderless, which is significant in CS 1, CS 2, and the recovery condition (Fig. 4E).
Next, we employed Term-seq to survey the termination landscape of the P. furiosus transcriptome, identifying 897 enriched 3′ ends downstream of protein-coding transcripts under exponential growth conditions (Table S2 at https://doi.org/10.6084/m9.figshare.24007026.v1). In agreement with previous findings for archaea, we discovered a long poly(U)-stretch of approximately 16 bases enriched at the 3′ end (Fig. 4F) (75, 85). Although many genes (538 of 897) lack a robust poly(U)-signal, termination generally does not appear to be triggered by secondary structures (Fig. S5 at https://doi.org/10.6084/m9.figshare.24006960.v1). Genes exhibiting a strong poly(U)-motif at the 3′ end are highly expressed under optimal exponential growth conditions (Fig. 4G). Assessment of poly(U) enrichment in highly up- and downregulated genes under HS and CS conditions revealed no difference compared to the control sample, suggesting that numerous stress genes are terminated by poly(U)-signals and, therefore, equipped for efficient termination (Fig. 4H and I). Additionally, no difference in 3′ UTRs was observed between up- or downregulated genes in any condition except HS 1, where downregulated genes exhibit significantly shorter 3′ UTRs (Fig. 4J). Many genes associated with HS and CS in P. furiosus are predicted or validated to be transcribed within an operon, impacting transcriptional and translational regulation (47). Upon investigation of the operon organization of certain signature HS response genes, we found that the differential gene expression data did not correspond with operonization (Fig. 5A and B). To better understand the transcription of these genes, we collected long-read RNA-seq data using Nanopore sequencing of PCR-amplified cDNAs from the control condition. We specifically investigated an operon containing HSP20, VAT1, and the highly abundant DNA-binding protein AlbA. Single-read analysis confirmed 3′ end data gathered from Term-seq and primary 5′ end positions, suggesting a primary stop immediately after HSP20 and separate transcription start sites for each gene (Fig. 5A). This is consistent with Phr regulation of HSP20 and VAT1, as binding sites precede each gene. Interestingly, the stress condition HS 1 explains why the algorithm used in the 2019 paper, based on short-read data analysis of mixed RNA conditions, identified both of these genes in an operon (47). This observation is further supported when examining the thermosome operon, which, according to the long-read data, is distinctly transcribed as a single gene with very minimal readthrough rather than alongside other genes as annotated in the DOOR2 database (Fig. 5B).

[Figure 4 caption (partial): (E) Chi-square test comparing each stress condition to the Ctrl sample distribution; relative proportions color-coded from white (0) to dark purple (100%); significance (P < 0.05) indicated by an asterisk. (F) Position-specific terminator motif based on primary 3′ ends derived from Term-seq experiments, grouped by MEME search into genes with a poly(U) signal (blue) or without (white). (G) Transcript abundance (TPM-normalized counts under control conditions) for genes with (blue) and without (gray) a terminator poly(U) motif; box plot parameters as in B; **** indicates P-values <0.0001. (H) Nucleotide enrichment meta-analysis comparing the position-specific nucleotide content of each group to randomly selected intergenic positions (n = 100,000); log2 enrichment shown for all primary 3′ ends with (blue) or without (gray) a poly(U) motif, and for highly upregulated and downregulated genes in HS 1 and CS 1. (I) Proportion of genes with or without a poly(U) signal in each regulatory group, for all conditions; note that too few genes were detected in the CS 2 sample for robust statistical testing; a chi-square test of whether the proportion of genes with a poly(U) terminator differs from the Ctrl condition found no significant difference at P < 0.05.]
Integrated transcriptomics and proteomics unveil shared functional responses and moderate correlation between total RNA and protein expression values
To further elucidate the molecular mechanisms underlying the thermal stress response, we next analyzed the proteomics data (quantities and iBAQ values listed in Table S3 at https://doi.org/10.6084/m9.figshare.24007029.v1). Focusing on the HS response, where we observed a distinct response in the PCA (Fig. 1G), we identified 23% of the genes as upregulated and 26% as downregulated (Fig. 6A). Notably, log2-fold changes at the protein level were smaller than at the RNA level, prompting us to forgo applying an additional threshold for group description to ensure robust analysis. Key HS response proteins, such as the Myo-Synthase and Hsp20, displayed significant upregulation. Time-dependent analysis revealed no overlap between up- and downregulated groups during HS, a trend more pronounced than in the transcriptomics data. Moreover, a larger number of proteins remained affected during the recovery condition compared to the RNA-seq data, indicating that protein-level reactions occur at a slower pace (Fig. 6B). For CS, we observed that 16% of proteins were upregulated after 20 min and 18% were downregulated, with virtually no change between the control condition and the 2-h protein sample (Fig. 6C). After 24 h, substantial alterations in the proteome were evident, with regulation rapidly reverting in the recovery sample. Functional enrichment analysis demonstrated that overrepresented groups exhibited greater consistency compared to the RNA sequencing data (Fig. 6D). Energy-related genes (category C) were overrepresented across all upregulated HS categories, as well as in the CS 1 and recovery conditions, suggesting potential similarities at the protein level. Additionally, after 24 h of CS, translation-related genes (category J) were overrepresented, which had already been observed at the RNA level. Downregulated genes under HS included defense mechanisms (category V), ion transport (category P), and, in part, post-transcriptional modifications and chaperones (category O). Numerous currently uncharacterized proteins (category S) were also downregulated under HS (Fig. 6E).
Integrating our findings with the transcriptomics data, we identified a low correlation between fold changes; however, the same trend was evident for previously known HS-responsive genes (Fig. 6F). The correlation improved but remained moderate when comparing total expression values, normalized by either TPM for transcripts or iBAQ values for proteins (Fig. 6G). Upon comparing various conditions, we determined that the correlation between CS conditions was generally higher, particularly at the count level, where a notable correlation of 0.69 emerged between RNA and protein iBAQ values in CS 1 (Fig. 6H).
Identifying signature genes and common principles for heat shock and cold shock response
Analyzing the response regulated by the TF Phr in P. furiosus, we observed a time-dependent trend in RNA and protein values relative to the control sample, indicating a rapid transcriptome-level response followed by a more prolonged, less dynamic response at the protein level (Fig. 7A and B). To further examine regulatory aspects, we conducted a cluster analysis to identify groups sharing temporal trends in transcript and protein levels (summary of all data listed in Table S4 at https://doi.org/10.6084/m9.figshare.24007035.v1). Therefore, we performed hierarchical clustering of z-score normalized log2-fold changes, accounting for the varying sensitivities of RNA-seq and MS (Fig. 7C). In the HS response, we identified distinct clusters displaying either similar (clusters 3 and 4) or contrasting trends in RNA and protein levels, particularly during HS 1 (clusters 1 and 2) and HS 2 (cluster 5). Log2-fold change analysis and arCOG enrichment of these clusters facilitated the integrative analysis in a functional context (Fig. 7D through F).
HS cluster 1 featured DNA maintenance and repair genes upregulated at the RNA level but downregulated at the protein level during HS 1, such as the cell cycle regulator cdc6, the DNA polymerase subunit B, the histone A2 (PF1722), and the Hef-associated nucleases PF0399 and PF2015. Conversely, HS cluster 2 displayed counter-regulation, with upregulation of protein levels and downregulation of RNA levels. This cluster was significantly enriched in genes associated with central metabolism and translation-related processes. HS cluster 3 exhibited striking consistency in regulation, with genes related to post-translational modification, protein turnover, and chaperones initially downregulated during the immediate shock and subsequently recovering. This pattern was particularly evident for redox enzymes like SufB, SufC, and SufD. HS cluster 4 displayed the regulation expected for an HS protective response and is characterized by genes rapidly upregulated under HS 1 at the RNA level and a prolonged upregulating effect at the protein level. In this case, the signature genes were not selected based on arCOGs but on experimentally identified Phr targets, all of which were present in this cluster. However, while some transcription-related genes are overrepresented in cluster 4, we did not find some of the genes initially thought to contribute to thermal fitness in the cluster. One prominent example is the nucleoid-associated proteins (NAPs), like AlbA (PF1881), TrmBL2 (PF0496), and histones A1 (PF1831) and A2 (PF1722), which are not uniformly regulated (Fig. S6A at https://doi.org/10.6084/m9.figshare.24006960.v1). Nevertheless, we could confirm that P. furiosus devotes a high share of its total protein to the NAPs, especially histone A2 and AlbA, with protein levels increasing to >5% under HS conditions (Fig. S6B at https://doi.org/10.6084/m9.figshare.24006960.v1) (17). Lastly, cluster 5 encompassed defense systems downregulated at the RNA level during HS 1, with protein levels remaining down for an extended period, displaying an opposite trend to cluster 4. This cluster contained genes related to defense systems transcribed from the Cas locus 1, including Cmr (Type III-B) and Cst (Type I-G), as well as transporter-related genes.
Re-evaluating the genes in HS cluster 4, we hypothesized that the Phr regulon may encompass additional targets not yet identified experimentally. Indeed, through motif analysis and comparison to the RegPrecise database, which houses transcriptional regulons based on comparative genomics, we discovered additional targets with palindromic Phr binding motifs, expanding the initial 10-gene group (Fig. S7 at https://doi.org/10.6084/m9.figshare.24006960.v1) (86). We identified the proteasome-assembling chaperone PbaA and its complex partner PF0014, the predicted transcriptional regulator PF1932, and the KaiC domain-containing protein PF1931 as additional targets strongly upregulated under HS (Fig. S7A and B at https://doi.org/10.6084/m9.figshare.24006960.v1). By comparing the Phr motif with the upstream sequences of predicted targets, we noted that genes predicted to have a motif but not exhibiting the typical Phr-mediated response at the RNA and protein levels, such as PF0239 and PF1117, deviate from the consensus sequence (Fig. S7C at https://doi.org/10.6084/m9.figshare.24006960.v1). Additionally, we confirmed previous analyses showing that PF0321 has two start sites, with only the more distant site being under the control of the Phr motif, resulting in a weaker response.
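To illustrate the logic of such a motif scan, the following R sketch matches a toy palindromic 20-mer (its reverse complement equals itself) against invented upstream sequences; an actual analysis would use a position weight matrix scan (e.g., MEME/FIMO) against the real Phr consensus:

# Toy example: exact-match scan of upstream regions for a palindromic
# consensus. Sequences, gene names, and the motif are placeholders.
upstream <- c(
  geneX = "GATTACAAATTTAGAATTCTAAATTTGGC",  # carries the toy motif
  geneY = "GATTACAAATTTAGAACTCTAAATTTGGC"   # one mismatch: no exact hit
)
motif <- "AAATTTAGAATTCTAAATTT"
pos <- vapply(upstream,
              function(s) as.integer(regexpr(motif, s, fixed = TRUE)),
              integer(1))                   # -1 means no match
data.frame(gene = names(upstream), hit = pos > 0, position = pos)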
In analyzing the regulatory dynamics during the CS response, we observed distinct short- and long-term responses, which were further investigated. Interestingly, the riboflavin operon and several genes highly upregulated at the RNA level during CS 1 were also affected at the protein level: whereas their RNA levels were subsequently strongly downregulated, protein levels remained unchanged or even increased (Fig. 8A and B). Performing a cluster analysis similar to that for the HS samples, we identified distinct clusters that are discussed in the following (Fig. 8C and D): CS cluster 1 featured proteins especially upregulated at CS 3, with increasing upregulation at the RNA level over time, encompassing translation-related and membrane proteins (Fig. 8D through F). Notably, ribosomal proteins were upregulated at the protein level after 24 h and counter-regulated after recovery at 95°C. Interestingly, PF1265, a tRNA/rRNA cytosine-C5 methylase, was identified in this cluster. Further analysis of potential tRNA/rRNA-modifying genes revealed that PF1265 was, indeed, the only gene upregulated under CS 3 at both RNA and protein levels (Fig. S8 at https://doi.org/10.6084/m9.figshare.24006960.v1). Conversely, KsgA expression was downregulated at the protein level after HS, and only the Fmu homolog PF0666 and the tRNA methyltransferase PF1871 were upregulated at both levels during HS. CS cluster 2 is characterized by coordinated downregulation at both RNA and protein levels, particularly after prolonged CS. This cluster contains several unknown genes, as well as transporters related to multidrug resistance. CS cluster 3 exhibits upregulation at the CS 1 and CS 3 time points at the protein level and is enriched in metabolism-related genes, including electron transport chain enzymes. In contrast, CS cluster 4 displays downregulation at the protein level immediately following CS. Interestingly, this cluster contains proteins involved in protein quality control, such as the thermosome and prefoldin alpha, and the Fe-S cluster assembly proteins SufB, SufC, and SufD, which are implicated in sulfate metabolism.
Considering that some of these genes are regulated by the TF SurR (PF0095), we investigated the overlap between the thermal stress response and other known TF regulons in P. furiosus. Therefore, we examined experimentally verified regulons, such as SurR, CopR (PF0739), TrmBL1 (PF0124), and TFB-RF1 (PF1088), as well as predicted regulons like cobalamin biosynthesis regulation by CblrR (unknown), riboflavin biosynthesis operon regulation by RbkR (PF0988), and thiamine transport regulation by ThiR (PF0601) (Fig. S9 at https://doi.org/10.6084/m9.figshare.24006960.v1) (40, 86-90). Many genes controlled by validated or yet unknown transcription factors exhibit significant regulation at both the RNA and protein levels, suggesting either TF regulatory networks that affect one another or secondary effects. While genes upregulated by SurR during the primary S0 response also exhibit upregulation under CS and HS, genes downregulated by SurR display consistent downregulation under the tested conditions at the RNA level. Intriguingly, targets of the copper regulator CopR are substantially downregulated, especially genes involved in binding and transporting metal ions, such as PF0723.
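A minimal sketch of such a regulon-overlap check (all gene identifiers below are placeholders; real regulon membership would come from the cited experimental studies and databases):

# Toy example: intersect a TF regulon with sets of strongly regulated genes.
surr_up <- c("PF0891", "PF0892", "PF1186")             # assumed SurR-activated set
hs1_up  <- c("PF1882", "PF0891", "PF1186", "PF0095")   # strongly up at HS 1 (toy)
cs1_up  <- c("PF0891", "PF1234")                       # strongly up at CS 1 (toy)
list(HS1 = intersect(surr_up, hs1_up),
     CS1 = intersect(surr_up, cs1_up))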
In summary, while the response to HS is regulated by Phr and possibly encoded at the sequence level, other effects are likely controlled by currently undetermined features, as well as post-transcriptional and post-translational regulatory mechanisms.
DISCUSSION
In this study, we investigated the response of the hyperthermophilic archaeon P. furiosus to HS and CS, unveiling complex patterns of gene expression and protein regulation. Our results indicate a coordinated, temporally organized response, utilizing various adaptive mechanisms for distinct cellular processes and pathways. Refined clustering and functional analysis provided a clear depiction of the various regulatory patterns observed in the HS and CS response of P. furiosus, highlighting the complexity of the molecular mechanisms underlying this process. However, it is crucial to acknowledge that besides the direct response to temperature stress, the results may also include secondary effects induced by the temperature change itself. These could be influenced by various cellular processes, such as growth rate alterations or changes in energy utilization, that contribute to the overall transcriptional and proteomic landscape.
HS triggers extensive reprogramming of the transcriptome, upregulating numerous heat shock proteins that serve to prevent cellular damage. We confirmed HS-induced upregulation of predicted targets of the transcriptional regulator Phr on both RNA and protein levels in a time-resolved manner, thus broadening its known regulon (40). Moreover, we demonstrated that stress-induced genes generally possess archaea-typical promoter and transcription terminator sequences. This suggests that these genes are silenced under normal growth conditions but can be rapidly activated, as observed for Phr-regulated genes. While HS primarily leads to the upregulation of proteins related to energy production and transcription-related processes, we observed the downregulation of defense mechanisms, including the CRISPR-Cas system. Silencing of these systems during stress might be attributed to energy conservation or to preventing inadvertent activation of the immune response, which could adversely affect genome stability, especially since RNA- and DNA-targeting CRISPR-Cas systems coexist in Pyrococcus (91, 92).
While we and others found a number of proteins that confer a specific heat shock response, we did not identify proteins that confer protection against CS. The CS response appears to be characterized by two distinct phases: an initial phase focused on energy provision, followed by a phase aimed at sustaining translation. This coordinated response is evident at both RNA and protein levels and rapidly reverts during recovery. Regulation during CS is more subtle compared to HS, with multiple responses observed at both short- and longer-term CS on RNA and protein levels. While previous studies investigating the CS response in (hyper-)thermophilic archaea have identified some cold-induced genes, only a subset of these are reflected in our data. This discrepancy might be explained by the experimental conditions used in earlier studies, which primarily involved temperatures at the lower growth limit as the CS temperature (83, 93, 94). Nevertheless, a 4°C shock is a plausible scenario for these organisms, considering their exposure to surrounding cold seawater that may also penetrate the porous material of black smokers (95).
Interestingly, it has been reported that general adaptations of psychrophilic archaea share some characteristics with the CS response in hyperthermophiles (94). One of these signature domains found in psychrophiles is TRAM, which is universally distributed and functions via RNA chaperone activity (96-98). The protein containing this domain, possibly mimicking the bacterial cold shock protein A function in archaea, is also present in P. furiosus (PF1062). While it is downregulated under early CS conditions, it is significantly upregulated after 24 h at the protein level. In addition to the investigation of single cold-induced genes, a global quantitative proteomics study in the cold-adapted Methanococcoides burtonii demonstrated that the abundance of ribosomal subunits peaked at 4°C (99). Our analysis supports this observation, as we detected a time-dependent upregulation of ribosomal proteins. Interestingly, this may be connected to the finding that genes upregulated during early CS in our study consistently possess a 5′ leader sequence, compared to approximately 15% leaderless genes observed under normal conditions. The presence of a 5′ leader sequence, including a Shine-Dalgarno site, could potentially facilitate ribosome recruitment and translation initiation factor binding, thereby ensuring efficient translation of cold-responsive genes under these challenging conditions (100). Moreover, this arrangement may allow for more intricate post-transcriptional regulatory mechanisms, such as RNA-binding protein interactions or secondary structure formation, which could enable fine-tuning of gene expression. However, the mechanism of leaderless translation remains unclear, complicating functional comparisons (101-103). Although it has been shown that this can be facilitated by 30S ribosomal subunits pre-loaded with initiator tRNA, it remains an open question whether this process is also influenced by additional ribosomal proteins (as observed in Bacteria) or if ribosome heterogeneity, in general, plays a role (104-106). Interestingly, bacterial ribosome biogenesis has been shown to be coupled to post-translational protein quality control through the HSP70 machinery under stress conditions (107, 108). Furthermore, ribosome biogenesis, being an energy-demanding process sensitive to temperature fluctuations, may lead to an enrichment of distinct rRNA precursors under thermal stress conditions, affecting ribosome composition (108-110).
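The leaderless-transcript comparison reduces to a simple proportion over annotated 5′ UTR lengths; the following toy R sketch (invented gene names and lengths) illustrates the computation:

# Toy example: fraction of leaderless transcripts (5' UTR length == 0)
# among upregulated genes vs. the full annotated set.
utr5 <- c(geneA = 0, geneB = 12, geneC = 0, geneD = 35, geneE = 7)
up_genes <- c("geneB", "geneD", "geneE")   # e.g., strongly upregulated at CS 1

leaderless_all <- mean(utr5 == 0)            # 0.4 in this toy; ~15% in the real annotation
leaderless_up  <- mean(utr5[up_genes] == 0)  # 0 here: all upregulated genes carry a leader
c(all = leaderless_all, upregulated = leaderless_up)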
Although previous research has indicated that ribosomal RNA transcription in P. furiosus is influenced by growth rate, temperature-dependent rRNA heterogeneity in Thermococcales has not been studied (111). The use of a recently established Nanopore protocol could enable the analysis of temperature-dependent rRNA processing while simultaneously identifying potential rRNA modifications that contribute to ribosome function (112). Some modifications, such as KsgA-dependent dimethylation and Nat10-dependent acetylation, have already been shown to contribute to fitness and adaptation (68, 113). Notably, the extent of ac4C acetylation is significantly increased in response to temperature increases in Thermococcus and Pyrococcus, with a stabilizing effect on the RNA (113). Understanding the functional implications of these modifications and their significance in ribosome dynamics under thermal stress warrants further exploration (114, 115).
Combining Nanopore approaches, which make it possible to tackle ribosome heterogeneity, with ribo-seq analysis at different temperatures would be a highly promising approach. Integrating these complementary techniques could offer valuable insights into ribosomal subpopulations and potential temperature-specific translation regulation, providing a more comprehensive understanding of the adaptive stress response in hyperthermophilic archaea.
The overlap of transcription regulons examined in this study underscores the complex cellular responses to environmental stressors, which may be coordinated across a broad range of conditions. Notable examples include the RbkR-regulated riboflavin (vitamin B2) operon, producing the essential cofactor precursors flavin mononucleotide (FMN) and flavin adenine dinucleotide (FAD), and the ThiR-regulated thiamine (vitamin B1) operon, precursor of thiamine pyrophosphate (TPP) (84, 116, 117). Given the substantial upregulation of energy-related enzymes during CS, there may be an increased demand for maintaining the proper functioning of metabolic pathways, which could explain the upregulation of these operons (118). Alternatively, secondary effects may also contribute to the observed transcriptional regulation. For example, disrupted redox homeostasis might explain the regulation of SurR targets, as SurR is a redox-active transcriptional regulator (87, 119, 120).
Although our study has yielded significant findings, we must acknowledge certain limitations. Our experimental design focused on specific time points, potentially missing the full dynamics of gene expression changes during the stress response. Time-course experiments for certain targets would offer a more comprehensive view of temporal changes in gene expression and could reveal additional regulatory mechanisms. Additionally, our experimental setup allows only for relative quantitative comparisons of the final amounts of transcripts and proteins in the cell. As such, we cannot make definitive statements regarding the neo-synthesis of RNA and proteins, which should be considered for interpretation. Furthermore, we cannot address the heterogeneity of responses at the genome and transcriptome levels. While single-cell analysis has provided deeper insights into individual cellular mechanisms during thermal stress and various growth conditions in bacteria, its application to archaea remains unexplored (121, 122). Future utilization of single-cell technologies will help uncover rare cellular states and the genome plasticity of individual Pyrococcus furiosus cells, particularly during HS, shedding light on the diverse strategies these organisms employ to cope with extreme environments.
In conclusion, our study comprehensively analyses the transcriptomic and proteomic responses to thermal stress in the hyperthermophilic archaeon Pyrococcus furiosus. We identified distinct expression patterns and regulatory mechanisms, providing valuable insights into the dynamic response mechanisms to environmental fluctuations and their control by transcription factor networks in archaea. Our findings enhance our understanding of P. furiosus' remarkable adaptability and expand our knowledge of life in extreme environments.
The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD041262 (124). Documentation and code of all essential analysis steps (tools and custom R scripts) are available from https://github.com/felixgrunberger/HSCS_Pfu.
FIG 1
FIG 1 Validation of the experimental setup by temporal analysis of the heat and cold shock response in Pyrococcus furiosus via transcriptomics and proteomics. (A) Experimental design illustrating the seven stress-related conditions, including cold shock (CS) at 4°C and heat shock (HS) at 105°C, along with recovery samples, analyzed using RNA-seq and proteomics, compared to the control condition (Ctrl) at the optimal growth temperature of 95°C. (B) Venn diagram displaying the overlap between identified protein-coding transcripts (gray) and total identified proteins (yellow) under stress-related and control conditions. Genes only identified using RNA-seq or proteomics are shown at the left and right side, respectively. (C) Distribution of transcripts per million (TPM) normalized transcript counts, color-coded based on the legend in panel A. (D) Distribution of intensity-based absolute quantification (iBAQ) values for proteomics samples. (E) Principal component analysis of RNA-sequencing samples based on total counts; replicates are indicated using different symbols, with outliers removed. (F) Circular genome-wide plot of the P. furiosus genome (position 0 at top) with protein-coding genes on the two outer rings; Z-score normalized TPM counts color-coded from dark brown (negative) to dark green (positive) for each condition in the following order (outer to inner): Ctrl, CS 1, CS 2, CS 3, CS R, HS 1, HS 2, HS R. (G) PCA and (H) genome-wide circular plot for mass spectrometry samples, analogous to panels E and F.
FIG 2
FIG 2 Immediate heat shock leads to extensive reprogramming on the transcriptome level with moderate overlap to the cold shock response. (A) Volcano plot displaying log2-fold changes (x-axis) and significance (−log10 adjusted P-value, y-axis) comparing transcriptome changes at HS 1 (5 min) to the control condition. Protein-coding transcripts are categorized by significance and fold changes, color-coded as strongly upregulated (padj < 0.05 and log2FC ≥ 1, dark green, bold font), upregulated (padj < 0.05 and 0 < log2FC < 1, light green, normal font), non-regulated (NS, padj ≥ 0.05, white), strongly downregulated (padj < 0.05 and log2FC ≤ −1, dark brown, bold font), and downregulated (padj < 0.05 and −1 < log2FC < 0, light brown, normal font). HS genes are highlighted. The significance level of 0.05 is shown as a dashed line. (B) Schematic representation of the HS response in P. furiosus, with genes color-coded according to transcriptome fold changes. (C) Gene set enrichment analysis of archaeal clusters of orthologous genes (arCOGs), with significance levels indicated by a color bar ranging from white to dark purple. Genes with a P-value <0.05 are considered significantly overrepresented and highlighted by a circle, with circle size reflecting the number of genes in the category. (D) Volcano plot showing log2-fold changes for the CS 1 condition compared to the control condition, with the y-axis displaying significance. A cutoff value of an adjusted P-value of 0.05 is indicated by a dashed line, and relevant genes are highlighted. (E) Functional enrichment analysis using arCOG descriptions; significance levels and the number of regulated genes are shown in the upper legend. (F) Overlap between strongly upregulated (dark green) and downregulated (dark brown) genes in HS 1 and CS 1 conditions; total gene numbers are shown in horizontal bar graphs, and comparison set numbers are displayed in vertical bar graphs. (G) Scatter plot comparing the log2-fold changes for HS 1 (x-axis) and CS 1 (y-axis), color-coded according to plotting density; fold change densities for each condition are displayed in the side density plots. (H) TPM normalized expression values from the control condition for the respective regulatory groups from panels A and D for CS 1 and HS 1 conditions; box edges delineate the first and third quartiles, the center line represents the median, and whiskers denote points within 1.5× of the interquartile range.
FIG 3
FIG 3 Temporal analysis shows rapid rebalancing of transcriptome changes following heat shock, while cold shock elicits multiple responses. (A) Flow diagram visualizing the relative number and interconnection of genes between conditions (x-axis) in the HS experiment for each regulatory group: strongly upregulated (dark green, bold font), upregulated (light green), non-regulated (white), strongly downregulated (dark brown, bold font), and downregulated (light brown).
FIG 4
FIG 4 Stress-induced genes are equipped with archaea-typical regulatory sequence features. (A) Position-specific promoter motifs of primary transcripts from (47), grouped by MEME motif search into having a classical archaeal promoter motif or not, based on the presence of BRE and TATA elements. (B) Transcript abundance (TPM-normalized counts under control conditions) for genes with a promoter motif (blue) and without (gray). Box edges delineate the first and third quartiles, the center line represents the median, and whiskers indicate points within 1.5× of the interquartile range. Welch's t-test was used to assess differences between groups; the significance level **** indicates P-values <0.0001. (C) Promoter strength comparison, estimated by P-values of MEME-detected promoter motifs (mode: one occurrence per site), for highly upregulated (dark green) and unchanged (white) genes under CS 1 and HS 1. Box plot parameters as in B; ns signifies P-values ≥0.05. (D) 5′ UTR length comparison for highly upregulated or downregulated genes against the total distribution (Ctrl, 779). Density plots are shown with bars in a window size of 1 and overlaid densities. (E) Proportion of leaderless (5′ UTR = 0) and leadered (5′ UTR ≥ 1) transcripts in the highly upregulated and downregulated groups across all conditions.
(J) 3′ UTR comparison of regulatory groups to the Ctrl condition. Box plot parameters as in B; significance levels of * and ns indicate P-values <0.05 and ≥0.05, respectively.
FIG 5
FIG 5 Long-read sequencing reveals the operon organization of heat shock genes. (A) Operon analysis using long-read PCR-cDNA Nanopore sequencing for the HSP20 operon and (B) the thermosome operon. Top panel: annotated operons from the DOOR2 database, previous short-read RNA-sequencing, and long-read Ctrl condition sequencing. Genes are visualized as rectangles, strand is indicated by arrow direction, and operons are connected by blue background lines. Next panel: current gene annotation and log2-fold changes on the RNA level for the three HS conditions (1, 2, R) from left to right. Significance is indicated by up or down arrows. Profiles of mean normalized coverage values from Nanopore and short-read RNA-sequencing are color-coded as Ctrl (gray) and HS 1 (orange). The last panel presents a single-read analysis of Nanopore reads. Each line represents one full-length sequenced read, sorted by read start position. Vertical lines indicate primary start sites identified in reference (47), while dashed lines represent primary 3′ ends from short-read Term-sequencing.
FIG 6
FIG 6 Proteomics count values, but not fold changes, are moderately correlated with RNA levels. (A) Volcano plot displaying log2-fold changes in protein levels (x-axis) and −log10 (adjusted P-value, padj) on the y-axis comparing HS 1 (5 min) with the control condition. Protein-coding genes are categorized based on significance and fold changes, color-coded as upregulated (padj < 0.05, log2FC > 0, green), non-regulated (NS, padj ≥ 0.05, white), and downregulated (padj < 0.05 and log2FC < 0, brown). HS genes are highlighted. Genes were assessed at a significance level of 0.05, indicated by a dashed line in the plot. (B) HS and (C) CS flow diagrams visualizing the relative number and interconnection of genes between conditions on the x-axis for each regulatory group, as described in panel A. Note that due to rounding of the percentages to the nearest whole number, some values might not add up to 100%. (D) Gene set enrichment analysis of archaeal clusters of orthologous genes (arCOGs) for all three HS and (E) all four CS conditions. The significance level is indicated by a continuous color bar from white to dark purple. Genes with a P-value <0.05 are considered significantly overrepresented in the category and are highlighted by a circle. The circle size reflects the number of genes in the category. (F) Comparison of log2-fold changes and (G) normalized TPM and iBAQ values of HS 1 measured at the RNA (x-axis) and protein (y-axis) levels. HS genes are highlighted. Pearson's correlation coefficient is shown in the top left. Plotting density is color-coded. (H) Correlation matrix (Pearson's correlation coefficient) of pairwise comparisons based on fold changes (upper left corner) and normalized count values (bottom right).
FIG 7
FIG 7 Cluster analysis identifies signature genes involved in the heat shock response. (A) Pathway plot illustrating log2-fold changes at the RNA level (x-axis) and protein level (y-axis) in a line plot from condition HS 1 via HS 2 to the recovery condition. Genes of the Phr regulon are highlighted with colors and points. (B) Fold changes for the genes presented in panel A. Significant regulation in the respective conditions is indicated by a circle (padj < 0.05) or a rectangle. (C) Comparison of median z-score values shown as a point for each cluster (row) and condition (column) for protein (yellow) and RNA (gray) values. The shaded area represents the interquartile range. (D) Log2-fold changes of genes sorted by clusters. Points indicate the median, while the shaded area shows the interquartile range. (E) Gene set enrichment analysis of archaeal clusters of orthologous genes (arCOGs) for five selected clusters. The significance level is represented by a continuous color bar from white to dark purple. Genes with a P-value <0.05 are considered significantly overrepresented in the category and are highlighted by a circle. Circle size reflects the number of genes in the category. (F) Functional enrichment analysis of genes from each cluster was performed by selecting all significantly regulated genes in the most significantly overrepresented arCOG category. Confidence in predicted interactions, according to the STRING database, is indicated by line thickness. Only genes with at least one connection are shown. Genes are depicted as points and colored based on the fold change at the RNA level in condition HS 1.
FIG 8
FIG 8 Cluster analysis identifies signature genes involved in the cold shock response. (A) Pathway plot illustrating log2-fold changes at the RNA level (x-axis) and protein level (y-axis) in a line plot from condition CS 1, through CS 2 and CS 3, to the recovery condition. The nine genes with the highest upregulation at the RNA level (CS 1) are highlighted with colors and points. (B) Fold changes for the genes presented in panel A. Significant regulation in the respective conditions is indicated by a circle (padj < 0.05) or a rectangle. (C) Comparison of median z-score values shown as a point for each cluster (row) and condition (column) for protein (yellow) and RNA (gray) values. The shaded area represents the interquartile range. (D) Log2-fold changes of genes sorted by clusters. Points indicate the median, while the shaded area shows the interquartile range. (E) Gene set enrichment analysis of archaeal clusters of orthologous genes (arCOGs) for four selected clusters. The significance level is represented by a continuous color bar from white to dark purple. Genes with a P-value <0.05 are considered significantly overrepresented in the category and are highlighted by a circle. Circle size reflects the number of genes in the category. (F) Functional enrichment analysis of genes from each cluster was performed by selecting all significantly regulated genes in the most significantly overrepresented arCOG category. Confidence in predicted interactions, according to the STRING database, is indicated by line thickness. Only genes with at least one connection are shown. Genes are depicted as points and colored based on the fold change at the protein level in condition CS 3.
Journal of English Language Studies
The creation of a well-developed ecological environment for English language teachers on digital platforms is a relevant and humane practice amidst the pandemic. During these times of crisis, Higher Education Institutions (HEIs) in the Philippines need to adopt new learning modalities. However, some students find it difficult to attend synchronous classes; as an alternative, teachers need to employ asynchronous activities such as online discussion forums (ODFs), where students can freely interact and share their insights on a topic within a given timeframe. Studies on ODFs in English language classrooms remain scant, especially in the Philippines. Thus, through a qualitative approach, semi-structured interviews were conducted with 21 freshmen English language students in a Philippine state university, and the data were analyzed using the conventional content analysis method (Hsieh and Shannon, 2005). The findings indicated the students' insights on the use of ODFs: their risk for cheating during the asynchronous forums, their accessibility through Facebook as a platform for asynchronous engagement, their encouragement of the exchange of ideas for new learning opportunities, and their usefulness for reviewing previously discussed lessons. Notable challenges encountered by the students, as well as the specific strategies they employed, were also disclosed. Moreover, when ODFs are managed well by classroom teachers, students become more engaged in asynchronous discussions. © 2023 JELS and the Authors - Published by JELS.
INTRODUCTION
The creation of a well-developed virtual ecological environment for the second language classroom is a relevant practice and input for L2 teachers in the time of the pandemic. The term ecology is an adopted concept that gives a clear picture of what constitutes interactions in digital spaces, such as students' talk-in-interaction in Facebook comment threads and their engagement with online forums. Just like in the field of science, the ecology of interactional practices in digital networks comprises the participants (teacher and students), the environment itself (Facebook), and the text used for interaction (comment threads) (DeVoss and Eidman-Aadahl, 2010). Facebook has been very useful in the conduct of flexible teaching modalities during the pandemic, as is evident in a recent local study in the Philippines conducted by Santiago et al. (2021), which revealed that Facebook was considered the most convenient platform used by students because of its accessibility compared to Zoom and Google Meet. This notion was also supported in a study with the same focus conducted by Arciosa (2022), wherein his respondents preferred the use of Facebook for online interaction during the conduct of the flexible modality amidst the pandemic.
As a common tool for asynchronous class engagement, Facebook became a popularly used platform for flexible learning in the Philippines. This platform effectively caters to both teachers and students in the delivery of course content and at the same time fosters interaction among them (Meishar-Tal, Kurtz, and Pieterse, 2012).
The engagement of Filipino learners with this kind of virtual platform was further enhanced by the milestones of ICT in the Philippine education context (Lazaro, 2016). A study conducted by the University of the Philippines (UP) Open University noted the engaging character of Facebook as a Web 2.0 tool in the educational landscape (Esteves, 2012). It can be deduced from these insights that the said platform was beneficial for young learners in the Philippines since most of them are already familiar with its navigation features. Hence, Filipino students did not find it difficult to adapt to the transition from traditional learning to the use of Facebook.
In the 21st century educational landscape, the online class discussion forum (ODF) is considered a new tool in educational platforms and a creative means of learning, although it was uncommon before because of the tradition of conducting face-to-face classes. The employment of ODFs has become a common way to keep students fully engaged in the virtual classroom, as it allows them to post messages or comments in their teacher's discussion thread, which makes the class more interactive (Balaji and Chakrabarti, 2010). This kind of activity fosters deeper understanding among students discussing a specific topic, and they can review their exchanges of thoughts at any time. During these times of crisis, institutions need to adopt new learning modalities through online classes, such as the use of social media sites where anyone can create their own learning community and gain new ideas that may be helpful to learners' academic situations. In this mode, learners can assess themselves and improve their facility in using electronic devices such as cellular phones and computers; in addition, both teachers and students can become responsible users of social media sites and receive the same benefits from them.
The collaborative nature of ODFs (Zhong and Norton, 2018) makes it interesting for students to get engaged in the comment threads, and the opportunity for feedback helps learners further improve their language ability (Mohammadi, Jabbari, and Fazilatfar, 2018). Also, the integration of language play such as GIFs and emoticons makes it interesting for learners to participate in an interactive discussion (Bailey and Almusharraf, 2021). The researcher argues that ODFs provide more meaningful language practice for learners of English given the multimodal features they cover. Despite their drawbacks when integrated in flexible learning, ODFs can generally assist both teachers and students in achieving a meaningful discussion when properly managed. Hafner and Ho (2020) elaborate the fundamental roles and uses of ODFs: facilitating the techniques, strategies, and approaches used in the online class community; helping learners join and interact within virtual online classes; motivating and uplifting both educators and students to become literate in using technological devices; and helping them understand the purpose of online class discussion forums. These roles are helpful at this time, especially since face-to-face classes are prohibited by the government, which is why conducting online class discussion forums is very essential to both students and teachers in providing for the educational needs of students, particularly English language learners, as the common language used on online websites is English.
Meanwhile, several scholars have contended for the beneficial use of these online forums in developing the communicative competence of learners. For instance, Balaji and Chakrabarti (2010) quantitatively examined the use of ODFs in traditional classes.
Their study integrated the "theory of online learning" and "media richness theory" as applied to ODFs. Their analyses revealed that the perceived richness of ODFs has a significant positive impact on the participation of the students and the quality of their learning. Further, instructors play a very significant role in ODFs, as revealed in their study, since they are the ones with the authority to lead the class interaction.
The same focus of study was explored by Krish (2011) and de Villiers and Pretorius (2013), who studied the discourse of students engaged in ODFs. Using mixed-methods research, their studies revealed that ODFs established a conducive and comfortable environment for social responsibility. It was evident in the results that interpersonal relations were highly developed, and this became very effective in fostering the students' collaborative learning. Also, Kent (2013) explored the use of Facebook as a platform for the students' online learning activity in an Internet Communications course. The quantitative study disclosed that ODFs changed the interaction of the students in class, where they became more involved and expressive in the discussions. In the same vein, the employment of ODFs during the pandemic was very useful, as noted in the studies of Rinekso and Muslim (2020) and Al-Jarf (2021), wherein the students expressed positive responses towards the use of this platform for online discussions. The development of essential skills, such as the students' critical thinking skills together with their writing skills, was also evident in their findings.
In view of the significant findings disclosed in previous studies, the present investigation focused on exploring the use of ODFs in asynchronous settings among English major students in the Philippines. It is crucial to understand the experiences of the students from a qualitative perspective, in contrast to the cited studies that employed quantitative methods; thus, this study was conducted. Moreover, the researcher observed how the COVID-19 pandemic changed the pedagogical approach of language teachers in teaching English through online platforms. Specifically, some language teachers are using online discussion forums to elicit their students' insights about the topic at hand in an asynchronous manner. However, to date, only very few studies have examined the experiences of learners engaging with ODFs in the Philippine context. Hence, it is high time to investigate the students' perceptions, challenges, and the strategies they employ to lessen these issues as they participate in ODFs as part of the flexible learning modality in Philippine higher education. The findings of this study might provide relevant implications as to how language teachers can effectively manage this kind of asynchronous activity to promote students' interactional competence during online classes, which is considered an important skill in second language learning, especially in the Philippine context.
Research Design
This study employed a qualitative approach to investigate the participants' lived experiences and concerns, with conventional content analysis (Hsieh and Shannon, 2005) as the specific method for analyzing the interview data, which does not impose predetermined categories and perspectives. The researcher aimed to describe the perspectives, experiences, and observations of the participants regarding their engagement in ODFs and to identify the strategies the participants employed to cope with their experiences. The study attempts to capture the subject matter in a detailed manner, devoid of restrictions, so that more can be drawn from analyzing it (Flick, 2010), with the use of semi-structured interviews. The interview data obtained focused on the perspectives, challenges met, and strategies employed in online discussion forums in a closed Facebook group.
Setting and Participants
The participants of this study were twenty-one (21) first-year Bachelor of Arts in English Language (BAEL) students at a state university in Tacloban City, the Philippines. These students were enrolled in the Introduction to the English Language System (ELS 123) class in the second semester of school year 2020-2021. The said subject used online discussion forums (ODFs) as asynchronous activities in class. The students commented in the ODFs via their Facebook closed groups as part of their graded interaction in class. Moreover, purposive-convenience sampling was used to determine the participants, since only BAEL students in the first year level of the program at the said school were asked to voluntarily join the study following ethical agreements. Also, as the study was conducted between February and May 2021, when COVID-19 restrictions were strictly implemented, only those students who were available and had access to the internet for the conduct of virtual interviews were accommodated.
Data Collection and Analysis
The data gathering process was done during the COVID-19 pandemic in the Philippines through virtual mode. Semi-structured interviews and follow-up interviews were conducted via Google Meet, which took about 20-30 minutes for each participant, including some probing questions for further clarification. Prior to the gathering of the needed data, consent forms were secured from the target participants, and codes were used to anonymize their identities. Specifically, the interviews were composed of open-ended questions that elicit detailed answers on these aspects: the perspectives, challenges, and strategies used in engaging with ODFs. The collected interview data were sent back to the participants to give them time to recheck and evaluate the accuracy of their responses for validity purposes. The said data were carefully "transcribed, read and reread" (Widodo, 2014), and the content analysis followed, extracting the themes and subthemes from the interview data. Further, the data in this study were analyzed using conventional content analysis (Hsieh and Shannon, 2005), wherein the researcher allowed the themes and subthemes to flow naturally and directly from the data obtained.
In addition, the coding of themes and subthemes using the conventional content analysis of Hsieh and Shannon (2005) followed these major processes.
First, the data were read several times by the researcher to obtain immersion in the text. Second, specific codes were derived by highlighting the exact phrases or words from the text that reveal the key thoughts. This included writing notes on the researcher's first impressions and initial analysis of the interview data. Third, the specific codes were sorted into various emergent categories and grouped into meaningful clusters, which were then organized into a smaller number of categories.
Lastly, sample excerpts from the interview data which reflect the meaning conveyed by each code and category were determined. This approach to analysis incorporated relevant theories and significant research findings in the discussion part of this study.
RESULTS AND DISCUSSION
The study aimed to provide an empirical account that could explain the lived experiences of English major students when interacting with ODFs in a Facebook group, especially in the Philippine context. Specifically, this section discusses the findings framed through conventional content analysis (Hsieh and Shannon, 2005) within the scope of the research questions, namely the participants' perspectives, challenges, and the strategies used when they encounter these challenges while engaging with ODFs.
Perspectives of the Freshmen English Major Students on the Use of ODFs
This study explored the perspectives of the freshmen English major students on the employment of online discussion forums. After analyzing the significant statements from the respondents, the researcher arrived at the following emergent themes: (1) ODFs pose a risk of cheating during the asynchronous forums, (2) ODFs ensure accessibility through Facebook as a platform for asynchronous engagement, (3) ODFs encourage the exchange of ideas for new learning opportunities, and (4) ODFs help review the previous lessons discussed.
ODFs pose a risk of cheating during the asynchronous forums
The emergence of flexible learning during the pandemic has exposed some students to different forms of online cheating. The primary perspective mentioned by the students was the risk of cheating in providing answers during the asynchronous forums. The academic integrity of students in higher education has been tainted due to the absence of face-to-face classes, and some of them tend to cheat many times (San Jose, 2022). One of the students expressed during the interview that their classmates tend to commit academic dishonesty when providing answers in the discussion forums.
Since the classes are conducted online, some students depend on different online platforms to get their answers, which results in cheating. These insights can be inferred from the following responses: I think it's just right, but there's always risk for cheating since our lessons are only online. While we are engaged in the forums, we could just easily open Google website and browse the answers we could provide in the comment section… Plagiarism is not anymore checked in the forum, so it is okay to just copy and paste our answers, especially when we have a lot of tasks to do.
[P1] Most of us could maximize the use of our data. We sometimes use Google chrome to search for answers that we could copy and paste in the comment thread of the online forums. [P14] This has become a common practice among students, especially during the virtual conduct of classes. Some of them tend to use internet tools such as Chrome and other search engines as sources of their answers without making the necessary revision or paraphrasing. However, despite the students' awareness of academic integrity, they still resort to cheating when accomplishing academic tasks. Consistent with this insight, online cheating is rampant during the conduct of asynchronous examinations (Karaman, 2011). It can be deduced from this finding that the nature of Facebook as an online platform is being abused by some students, resulting in academic dishonesty. Since the use of ODFs is done asynchronously, some learners resort to online cheating.
ODFs ensure accessibility through Facebook as a platform for asynchronous engagement
Facebook has become an accessible tool for both students and teachers in the conduct of classes. It was considered an efficient tool especially when students are expected to interact with one another asynchronously. One of the participants mentioned that the accessibility of Facebook as a platform for asynchronous engagement is very efficient, specifically when engaging in the online discussion forums. The students find Facebook a very accessible platform when interacting with the online forums, as expressed by one of the participants: It's helpful for a lot of people since most have FB accounts they can use. And only with mobile data, we could already use it in class for few days.
[P4] Moreover, most of the Filipino youth are equipped with mobile phones, and part of their daily routine is the use of Facebook. During the pandemic, face-to-face classes in higher education institutions were disrupted, and most teachers integrated Facebook as their platform for engaging with their students (Avila and Cabrera, 2020). These insights are reflected in the following responses: Like I said earlier, it's helpful for those with FB accounts, it's easy to access for a lot of people, and can be used if someone doesn't have internet load.
[P1] The use of fb group as a platform in this online learning is commendable. It was easily accessed by the students and there will be no difficulty to participate in the class… integrating Facebook in our classes is such a good idea for its easy access. [P8] It was also highlighted by Santiago et al. (2021) that Facebook was considered among the most convenient educational platforms used in the new normal setting. As expressed by the participants, even if they do not have internet load, they could still view the updates in their FB private groups through free data. Hence, social media tools such as Facebook support student engagement in the teaching and learning process even in the midst of an abrupt crisis.
ODFs encourage the exchange of ideas for new learning opportunities
Given the interactive nature of ODFs, this tool encourages the students to share their insights by simply typing their answers in the comment section. This is one of the convenient features of ODFs when it comes to letting students participate in their classes (Hurt et al., 2012), especially when integrated in Facebook. Based on the responses of the participants, encouraging the students to exchange ideas for new learning opportunities is another perspective that emerged from the in-depth interviews. They believe that sharing their answers in the comment threads would help other students gain new knowledge, as reflected in the responses below: A lot of people can see what your insights are and you can also learn the insights of others since the post is made public. [P2] The said forums also encourage the students to discuss issues in a particular topic and rebut the insights shared by other students. This allows us to share our thoughts and opinions about the given questions and come to disagree and agree with our classmate's perspective unto it.
[P12] Also, ODFs help the students develop their critical thinking skills as they counter-argue with their classmates' insights in the comment threads, and this supports the idea of Jamali and Krish (2021) that ODFs foster the students' reasoning skills, which are essential in developing their linguistic competence.
ODFs help review the previous lessons discussed
That ODFs are a good platform for reviewing the lessons discussed in class is another perspective shared by the respondents. Through reading the comments of other students, they are able to revisit the points raised in the discussion, as reflected in the following responses: There are times that a notification does not pop up in my notification bar so I have to often check on it so that I will be able to read the history of the exchanges of comments. This will give me the idea what to share. [P2] Sometimes when I come back in our discussion forums, many students have already given their responses, and what I do is I need to go back to the first comments then read until the last comment so that I will be able to catch up with their discussion. [P11] Moreover, these insights relate to the findings of Jamali and Krish (2021), where students are guided in learning the subject when reviewing the exchange of comments in ODFs. Some of the participants are preoccupied with academic tasks in other subjects; hence, this lessens their time for engaging in the ODFs, and some of them could not promptly interact in the forum. Thus, revisiting the comment threads in ODFs helps the students connect their insights to the discussion.
Challenges Encountered by the Freshmen English Major Students on the Use of ODFs

Poor internet connectivity in accessing ODFs
The majority of the respondents shared the same problem of experiencing poor internet connectivity in accessing the discussion forums. Most of them are financially unstable, which hinders them from availing of a stable internet connection. Given that the participants are located in remote areas, most of them suffer from intermittent internet connectivity. This experience interrupts their engagement in the ODFs, as reflected in the following responses: Lack of load and poor internet connection hinders my early engagement in the forums. Most of us only use mobile data and it is commonly not stable in our area. [P7] Unstable internet connection is one of the major interruption since we are living in the province and we are only using prepaid loads for our devices.
[P12] It can be inferred from the responses above that P7 and P12 are highly affected by the slow internet connectivity in their area, and this affects their engagement in ODFs. Eliveria et al. (2019) argued that one of the major factors hindering students from becoming more productive in online classes is the country's unstable internet connection. At the time of the pandemic, all classes in higher education institutions were conducted on virtual platforms.
The participants expressed their dismay over the status of the internet connection in the country, especially in remote areas. Some of them shared that they need to wake up at midnight just to experience a good connection. These findings support the views in the studies of Joaquin et al. (2020) and Alvarez (2020) that this internet issue is a major concern, especially during the conduct of flexible learning in the pandemic. Further, it can be deduced from the above responses that this issue is happening in reality, especially for students situated in the provinces.
Lack of active engagement in the forums' comment threads
Another challenge that emerged from the in-depth interviews is other students' low interest in engagement. ODFs are created for student and teacher interaction, and if there is low interaction, the exchange does not become meaningful. As expressed by the participants, other students do not engage actively in the comment threads of the forums, as can be seen in the following responses: Based from my experience in participation within the said forum, it was notable that some of the students dislikes the idea of rebuts or feedback which becomes a barrier to idea networking. [P10] Most of the students do not have a prompt engagement in the online discussion forums. They usually interact when the deadline is already fast approaching, which is commonly in the near end of semester. [P4] This is one of the teachers' challenges when aiming to engage their students in virtual classrooms such as Facebook closed groups. ODFs play a great role in enabling students to participate in class asynchronously. However, due to several academic factors, some of them feel hesitant to participate.
According to Bailey and Almusharraf (2021), the engagement of learners in virtual forums needs to be carefully guided by teachers to make it more interactive and to accomplish its purpose in the teaching and learning process. In the above extracts, the students were not comfortable arguing with their classmates, and this limits the interactive nature of ODFs.
Strategies Employed by Freshmen English Major Students on the Use of ODFs
This study also explored the strategies employed by the freshmen English major students in the use of online discussion forums. After analyzing the significant statements from the respondents, the researcher arrived at the following emergent themes: (1) prompt reading of new comment notifications in ODFs, (2) creating drafts of responses until the internet connection resumes, and (3) name tagging in the ODFs for the continuity of exchange.
Prompt reading of new comment notifications in ODFs
With the interactive feature of ODFs in Facebook groups, there is an influx of comments that eventually forms into exchanges of threads. Due to the several tasks assigned to students in the university as part of their academic requirements, it usually takes time for them to interact in the discussion forums. Hence, they make an effort to backread the previous exchanges of comments to catch up with the discussion. In this sense, they make sure to read new comments in the ODFs promptly, as reflected in the following responses: I usually read the new messages as early as possible for me to catch up with what is being discussed in the comment thread. [P8] Every time there is a new notification of comments in the online discussion forum in our group, I make sure that I will be able to read the message as early as possible for me not be left behind the current discussion… I need to be quick in reading those comments to grasps the updated discussion. [P12] It can be deduced from the above responses that the participants are aware that they need to be prompt when reading the comments in ODFs. In this way, it is not difficult for them to catch up with the discussion. Balaji and Chakrabarti (2010) argued that students nowadays, as digital learners, are very quick to grasp information on virtual platforms. Just like in ODFs, students could still manage to accomplish their tasks in a limited time. The advantage of ODFs in Facebook is its easy accessibility (Kent, 2013), such that anyone could interact at their own comfort.
Creating drafts of responses until the internet connection resumes
Because ODFs can be done asynchronously, interruption in engaging with this task due to an intermittent internet connection is not really a big problem. Students can still go back to the ODFs and type their answers in the comment threads. In the Philippines, students situated in remote areas normally experience unstable internet connection, and this interrupts their engagement in the ODFs. However, these students make use of their time by writing drafts of their answers on a piece of paper or in their cell phones' notepads while waiting for the internet to resume. These insights are evident in the responses provided: I usually wait for the internet connection to be better… and while waiting, I do some notes of my answers so that I could polish first my thoughts before I eventually send that in the comment section. [P4] Most of the time, when I am currently experiencing an internet problem, I do write first the draft of my answers in reply to the insights of my classmates in the forums. This helps me develop my thoughts in response to my other classmates' answers while waiting for my internet data to resume. [P11] In most cases, the majority of the participants disclosed in their responses that they are fond of making drafts of their answers before sharing them in ODFs. It can be inferred from the above excerpts that P4 and P11 use the time to craft the content of their answers while waiting for the internet connection to resume. Since ODFs are done asynchronously, they are given enough time to engage in the interactive discussion. This argument is supported by the study of Rinekso and Muslim (2021), which found that ODFs give students ample time to construct their ideas when technical issues occur, such as the unstable internet connection that is a very common problem in the Philippines (Joaquin et al., 2020; Alvarez, 2020). Hence, in most cases, students initiate alternative tasks to fully utilize their time while offline.
Name tagging in the ODFs for the continuity of exchange
Sometimes, these students are preoccupied with several academic activities in school and lag behind in commenting in the ODFs. One strategy that the other students use is mentioning their classmates' names in the comment threads. This way, they will be prompted to respond to the exchange. This feature of Facebook private groups encourages the students to share their thoughts.
Tagging their names instantly notifies them in their Facebook accounts, and this prompts them to get engaged, as revealed in the responses below:

Our other classmates do not interact very well and actively in the forums, no matter how much pressure our teacher would put on us in our GC. What we sometimes do is tag the names of our classmates in the comment section of the forum and solicit their insights about the topic. [P5]

We usually mention our classmates in the comment section, especially when we see that they are online. They are really forced to share their insights and ideas since our teacher is also there in our private Facebook group. [P9]

If there is low interaction in ODFs, students who have already shared their insights on the platform would name tag their classmates who are not participating even though they are online. As expressed in P5's and P9's responses, the discussion is limited when only a few of them are interacting in the forums. As contended in the study of Avila and Cabrera (2020), the name tagging or mentioning feature in a Facebook closed group's comment section strategically helps teachers get the attention of students who are not participative, and the students also use this feature to encourage their classmates to interact.
CONCLUSION AND RECOMMENDATION
Although the findings were drawn from insights shared by a limited number of English major students in the Philippine context, it may be agreed that ODFs have the potential to transform the language classroom into a more meaningful and interactive one. However, teachers should manage well the ODFs employed in their virtual classes, such as Facebook in this study, to encourage the students to become more participative. The findings indicate that there are pros and cons to the use of ODFs as perceived by students that might affect their learning engagement in the flexible modality. First, ODFs could manifest an interactive virtual class atmosphere, and the students will be able to recapitulate the lessons discussed in the ODFs by simply browsing the comments provided by the participants. Second, ODFs may encourage students' online cheating since this task is accomplished asynchronously. In terms of the challenges, it can be noted that the lack of interaction in the forums and the intermittent internet connectivity were disclosed as intervening factors that limit their participation in ODFs. However, despite these challenges, the students are aware of the strategies they could apply should these issues occur again in the future. Promptness in interacting with ODFs, composing drafts of their answers while waiting for the internet connection to resume, and name tagging their classmates in the forum to foster interaction were the essential strategies they may apply to lessen the challenges they commonly encounter in ODFs. Further, strict implementation of specific guidelines for participating in ODFs should be established for their smooth conduct.
Future research may examine teachers' experiences of employing ODFs in their language classes, aside from getting the views of the students. Mixed-method techniques may be used to understand more clearly the nature and use of ODFs in English language classes. If explored through a fully qualitative approach, ethnography and narrative inquiry would be more interesting to use for a more critical understanding of students' discourses in ODFs. Also, studies on ODFs are deemed necessary to provide significant inputs that might be used by teachers when using this kind of activity in virtual modalities. This research direction is also important for other researchers in various contexts to consider, so as to derive unique findings that could contribute to recreating a linguistically and culturally sensitive ODF. Other than the use of ODFs in Facebook closed groups, other online platforms such as Google Classroom may be studied. Further studies may likewise investigate how the interaction of teachers and students in ODFs creates a space for a collaborative teaching and learning process. Moreover, it may be significant to consider the linguistic aspects of the participants' responses in the comment threads, including the investigation of gender-related issues encountered in ODFs.
With the abovementioned gaps and observations, this research was conducted. This study primarily aims to examine the case of freshmen English major students in a state university in the Philippines in terms of their asynchronous engagement in online discussion forums via a Facebook closed group in their major subject for S.Y. 2020-2021. Specifically, the study aimed to answer the following questions: (1) What are the perspectives of the freshmen English major students on the use of online discussion forums? (2) What are the challenges met by the freshmen English major students in the use of online discussion forums? And (3) what strategies are employed by the freshmen English major students in the use of online discussion forums?
|
v3-fos-license
|
2018-12-07T01:32:51.944Z
|
2017-06-14T00:00:00.000
|
54856989
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nat-hazards-earth-syst-sci.net/17/881/2017/nhess-17-881-2017.pdf",
"pdf_hash": "b7a349b674e31bfe1770af348dbda3a40656d5cb",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43502",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "b7a349b674e31bfe1770af348dbda3a40656d5cb",
"year": 2017
}
|
pes2o/s2orc
|
Brief Communication: A low-cost Arduino®-based wire extensometer for earth flow monitoring
Continuous monitoring of earth flow displacement is essential for understanding the dynamics of the process, its ongoing evolution, and for designing mitigation measures. Despite its importance, it is not always applied due to its expense and the need for integration with additional sensors to monitor the factors controlling movement. To overcome these problems, we developed and tested a low-cost Arduino-based wire-rail extensometer integrating a data logger, a power system, and multiple digital and analog inputs. The system is equipped with a high-precision position transducer that in the test configuration offers a measuring range of 1023 mm and an associated accuracy of ±1 mm, and it integrates an operating temperature sensor that should allow the thermal drift that typically affects this kind of system to be identified and corrected. A field test, conducted at the Pietrafitta earth flow where additional monitoring systems had been installed, indicates high reliability of the measurement and high monitoring stability without visible thermal drift.
Introduction
Earth flow activity alternates between long periods of slow and/or localized movements and surging events (e.g. Guerriero et al., 2015). Slow movement is normally concentrated along lateral-slip surfaces (Fleming and Johnson, 1989; Gomberg et al., 1995; Coe et al., 2003; Guerriero et al., 2016) which consist of fault-like segments (e.g. Segall and Pollard, 1980) locally associated with cracks arranged in en echelon sets (Fleming and Johnson, 1989). Movement velocity is controlled by hydrologic forcing, and seasonal acceleration and deceleration are induced by variation of the pore-water pressure (e.g. Iverson, 2005; Grelle et al., 2014). Thus, most earth flows move faster during periods of high precipitation or snowmelt than during drier periods, and the correlation between precipitation and velocity is normally complex (Coe et al., 2003; Schulz et al., 2009). Earth flow surges can occur when prolonged rainfalls are associated with the loss of efficient drainage pathways (Handwerger et al., 2013) and new sediment becomes available in the source zone through retrogression of the upper boundary (e.g. Guerriero et al., 2014). In these conditions, the earth flow material can fluidize and fail catastrophically.
Each kinematic behavior embodies a specific hazard level that needs to be quantified on the basis of monitoring data. Accurate identification of hazard also includes understanding the factors controlling earth flow movement (Schulz et al., 2009). In this way, a continuous record of earth flow displacement and its environmental drivers is essential in defining the dynamics of the process (e.g. Corominas et al., 2000). Additionally, for earth flows involving human infrastructure (e.g. roads and railroads), displacement monitoring is crucial for understanding the ongoing evolution and designing mitigation measures.
Displacement monitoring can be completed with a variety of instrumentation (e.g. rTPS, GPS), but most of it (i) does not allow nearly continuous monitoring, (ii) implies time-consuming and expensive monitoring campaigns, and/or (iii) cannot be integrated with additional sensors (Corominas et al., 2000). Wire extensometers are particularly suitable for continuous monitoring, especially when movement is concentrated along well-defined shear surfaces, and can be easily integrated in multisensor monitoring systems. Major disadvantages of extensometers are their cost (a single-point high-performance sensor sells for ∼ EUR 1000) and a sensitivity to temperature that is also a function of the characteristics of the cabling system. In this paper, we present a new Arduino®-based wire-rail extensometer specifically developed for monitoring earth flow movement. We chose the Arduino board because it has been successfully used for the development of monitoring systems for different applications (e.g. Bitella et al., 2014; Di Gennaro et al., 2014; Lockridge et al., 2016). The system integrates a power unit, a data logger and an operating temperature sensor, has a very low cost (∼ EUR 200), is configurable with different measurement ranges and accuracies, and has the potential to work with additional sensors. We tested extensometer performance at the Pietrafitta earth flow in southern Italy and compared its measurements with those derived from successive GPS surveys and discrete rTPS measurements.
Extensometer components, structure, and code logic
The extensometer is composed of (i) processing and storage modules, (ii) an on-board operating temperature sensor, (iii) a linear position transducer, and (iv) a power unit (Fig. 1; a simplified assemblage guide is reported in the Supplement). The processing and storage modules are an Arduino Uno board and an XD-05 data logging shield, respectively. The Arduino Uno board is a user-friendly version of an integrated microcontroller circuit that uses an ATmega328P low-power CMOS 8-bit microcontroller. It has 14 digital input/output pins, 6 analog inputs, a 16 MHz quartz crystal, a USB connection, a power jack, an ICSP header and a reset button (https://www.arduino.cc). The presence of multiple digital and analog inputs makes this platform ready for reading multiple sensors. The data logging shield integrates an SD card read/write slot, a Real Time Clock (RTC) module with coin cell battery backup, and a prototyping area. In this way, monitoring data are logged on an SD card at a predefined interval with an associated date/time. To choose this shield, we considered the presence of the RTC module and its cost; we chose the cheapest. The operating temperature sensor consists of a 10 kΩ thermistor characterized by a tolerance of ±5 %. It is installed with a 10 kΩ reference resistor in the prototyping area of the logging shield (see the wiring schematic in the code attached as supporting material). For our test, the thermistor was calibrated between −10 and 40 °C and the obtained Steinhart-Hart coefficients were used for temperature estimation. Steinhart-Hart coefficients were calculated using the SRS Thermistor Calculator (http://www.thinksrs.com/downloads/programs/ThermCalc/NTCCalibrator/NTCcalculator.htm). We estimated the temperature error by comparing thermistor measurements with a precision thermometer (accuracy ±0.05 °C) under laboratory-controlled conditions. The RMSE calculated on the basis of 40 observations (between −5 and 35 °C) was ∼1 °C. The linear position transducer is responsible for measuring cumulated displacement and consists of a 1 kΩ Bourns 3540S-1-102L precision potentiometer equipped with a 3-D printed pulley. For our development and test we used a pulley 33 mm in diameter made of ABS (acrylonitrile butadiene styrene) plastic, the design files of which (.obj and .skp) are reported in the Supplement. This allows a measurement range of 1023 mm and an associated accuracy of ±1 mm (nominal accuracy of 0.1 %). Such a range can be varied using a pulley with a different diameter; a change in range results in a change in accuracy and resolution. The pulley was printed using a 3-D PRN LAB54 printer and an ABS filament 1.75 mm in diameter (specific density 1488 kg m−3). The filament was extruded at 210 °C with a velocity of 30 mm s−1.
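The two analog conversions just described can be made concrete with a short sketch. The fragment below is a minimal illustration, not the authors' published firmware: the pin assignments, the divider wiring and the Steinhart-Hart coefficients shown are placeholder assumptions to be replaced by calibrated values. Note how the 33 mm pulley on the 10-turn potentiometer, read by the Uno's 10-bit ADC (0-1023 counts), yields approximately 1 mm per count over the 1023 mm range.

```cpp
// Illustrative fragment for the two analog conversions described above.
// Pin assignments, divider wiring and the Steinhart-Hart coefficients are
// placeholders: in a real build they come from the calibration step.
const int   WIRE_POT_PIN   = A0;       // Bourns 3540S wiper
const int   THERMISTOR_PIN = A1;       // 10 kohm thermistor / 10 kohm reference divider
const float RANGE_MM       = 1023.0;   // full-scale travel with the 33 mm pulley
const float R_REF          = 10000.0;  // reference resistor [ohm]
// Example Steinhart-Hart coefficients, to be replaced by calibrated values
const float SH_A = 1.009249522e-3, SH_B = 2.378405444e-4, SH_C = 2.019202697e-7;

float readDisplacementMm() {
  int adc = analogRead(WIRE_POT_PIN);  // 0..1023 on the Uno's 10-bit ADC
  return adc * (RANGE_MM / 1023.0);    // about 1 mm per ADC count
}

float readTemperatureC() {
  int adc = analogRead(THERMISTOR_PIN);
  if (adc <= 0) return NAN;                      // guard against a shorted divider
  float rTherm = R_REF * (1023.0 / adc - 1.0);   // thermistor assumed on the high side
  float lnR = log(rTherm);
  float tK = 1.0 / (SH_A + SH_B * lnR + SH_C * lnR * lnR * lnR);
  return tK - 273.15;                            // Kelvin to Celsius
}
```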
The system is powered using a 50 W solar panel and a 12 V, 12 Ah battery. Since the Arduino Uno has a broad power input range (recommended 7-12 V), and in order to avoid the overheating of the board connected with the use of a 12 V input, a DC-DC converter is used to stabilize the supply voltage at 7.2 V. The processing and storage module, the on-board operating temperature sensor and the linear position transducer are housed in a 15 × 10 × 7 cm waterproof box that, together with the power system, is housed in a second, larger 40 × 30 × 15 cm waterproof box. Such a modular structure protects both the electronic and mechanical components from environmental conditions and allows very short cables to be used, and both features help prevent thermal drift.
The code for the extensometer has been developed and compiled using the Arduino IDE environment and open-source code strings available online. The logic of the code is reported in Fig. 2; the code itself and the sensor wiring schematics are reported in the Supplement. The final cost of the extensometer was around EUR 200 including additional installation equipment (see Table 1; e.g. rebars, wire, screws, etc.). Additionally, even though it has been developed specifically for earth flows, it can be used for all types of landslides that move along well-defined lateral-slip surfaces, and with specific improvements it can also be installed in different positions, such as at the head of a landslide.
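As a companion to the flow chart of Fig. 2, the following fragment sketches one plausible acquisition-and-storage loop. It is an assumption-laden illustration rather than the published code: the RTC chip (a DS1307 is common on XD-05-type shields), the chip-select pin, the file name and the logging interval are all placeholders.

```cpp
// Companion sketch to the flow chart of Fig. 2: read the two sensors and
// append a timestamped record to the SD card at a fixed interval.
#include <SPI.h>
#include <SD.h>
#include <RTClib.h>

RTC_DS1307 rtc;                        // RTC on the data logging shield
const int      SD_CS_PIN    = 10;      // chip-select of the SD slot
const uint32_t LOG_EVERY_MS = 30000;   // one record every 30 s, as in the field test

void setup() {
  rtc.begin();
  SD.begin(SD_CS_PIN);
}

void loop() {
  DateTime now = rtc.now();
  float mm   = readDisplacementMm();   // defined in the previous fragment
  float degC = readTemperatureC();

  File logFile = SD.open("EXTENSO.CSV", FILE_WRITE);  // opens in append mode
  if (logFile) {
    logFile.print(now.unixtime()); logFile.print(',');
    logFile.print(mm, 1);          logFile.print(',');
    logFile.println(degC, 1);
    logFile.close();
  }
  delay(LOG_EVERY_MS);
}
```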
Installation and testing at the Pietrafitta earth flow
We tested the extensometer performance at the Pietrafitta earth flow (Fig. 3a) in the Apennine mountains of southern Italy (Campania region, Province of Benevento). Since 2006, this earth flow has been periodically active, exhibiting an alternation of slow persistent movements and rapid movement especially localized at the toe of the flow. We chose this earth flow because it is actively moving, its movement occurs largely along a lateral, well-defined shear surface (e.g. Gomberg et al., 1995), and it is monitored using both discrete GPS surveys and nearly continuous rTPS measurements. We installed our low-cost Arduino-based extensometer along the left flank of the earth flow toe (Fig. 3b). The installation was completed using a 2.5 m long wire supported by several rebars, which forms a rail parallel to the strike-slip fault materializing the left flank of the flow. In this way, the extensometer is dragged/moves along the flank, registering the cumulative displacement (scheme shown in Fig. 3c) every 30 s. We used available displacement data to make a comparative analysis of the monitored displacement. To compare displacement measured with the different systems, we installed a GPS antenna screw mounting and an rTPS target on the wire extensometer (Fig. 3b). Raw data measured with our monitoring systems are shown in the graph of Fig. 3d. In particular, the earth flow toe moved approximately 1 m in 6 days and 6 h. The average velocity calculated on the basis of these data was ∼6.6 mm h−1. In this part of the flow, the movement was largely dominated by the horizontal component. This makes it possible to compare the displacement measured by the extensometer and the horizontal component of the displacement vectors reconstructed with both the GPS surveys and the rTPS. The comparison of the results indicates that the total displacement measured by our extensometer was approximately the same as that measured by combining GPS and rTPS surveys. The difference between the total displacement measured by the combination of GPS surveys and rTPS and that measured by the extensometer was around 1.5 cm (1.5 %). Additionally, the displacement time series reconstructed using rTPS data fits the extensometer time series almost perfectly for the first 4 days of rTPS monitoring, despite a slight thermal drift of the rTPS (see the red curve of Fig. 3d). In the successive 2 days the degree of fit seems to decrease, and this affects the measured total displacement. This was probably also caused by the deformation of the rail induced by the tilting of the ground surface around the extensometer. Despite this drawback, the system exhibits a very high monitoring stability without visible thermal drift, even at operating temperatures higher than 35 °C.
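The average velocity quoted above can be verified with a one-line computation: 1 m over 6 days and 6 h (150 h) gives roughly 6.7 mm h−1, consistent with the reported ∼6.6 mm h−1 once the exact raw totals are used. A trivial check:

```cpp
// One-line check of the reported average velocity.
#include <cstdio>

int main() {
  const double displacement_mm = 1000.0;      // ~1 m of total displacement
  const double elapsed_h = 6.0 * 24.0 + 6.0;  // 6 days and 6 h = 150 h
  std::printf("average velocity: %.1f mm/h\n", displacement_mm / elapsed_h);
  return 0;                                   // prints ~6.7 mm/h
}
```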
Concluding remarks and possible future improvements
The Arduino-based extensometer was developed to provide a low-cost, improved platform for continuous earth flow/landslide monitoring. The prototype was developed on the basis of the Arduino Uno board and integrated a data logging RTC shield and an operating temperature sensor. It was equipped with a high-precision position transducer that in the test configuration offers a measuring range of 1023 mm and an associated accuracy of ±1 mm. The field test indicates a high reliability of the measurement and the importance of the rail setup. In particular, for horizontal displacement monitoring it is important to consider the topography of the surface of the flow and possible surface deformation caused by movement. In this way, periodic inspection of the system needs to be planned. Major advantages of the system are (i) its very low cost, (ii) the presence of an integrated data logger, (iii) the potential to integrate it with additional sensors, and (iv) the possibility of use with different types of landslides. Even though our test indicates the ability of the system to work in real field conditions by providing reliable data, we have to consider that our test was very short, and in only 6 days our extensometer reached the maximum measurable displacement (average velocity of 6.6 mm h−1). Thus, for very low velocities and very long monitoring periods it might be useful to use the 12-bit Arduino Due board, which permits a fourfold increase in resolution and/or range. The system can be integrated with a data transmission shield that allows near-real-time data transmission, and it has the potential to be used in landslide emergency scenarios. To further reduce the cost of the device it would be possible to use the Arduino Pro Mini board, which is cheaper than the Arduino Uno and has a lower power consumption that also allows a cheaper power system and smaller housing boxes to be chosen. Additionally, we have planned to replace the ABS plastic pulley with an aluminium pulley that should ensure higher durability. This change might increase the cost of the system.
Figure 1. The Arduino-based extensometer after the assemblage. Major electronic components are labeled.
Figure 2. Flow chart showing acquisition and storage logic.
Figure 3. (a) The Pietrafitta earth flow in the Apennine mountains of southern Italy, Campania region. The black star indicates the position of the extensometer during the field test. (b) Installation configuration and monitoring equipment for the comparative analysis. (c) Installation scheme. (d) Results of displacement monitoring with the extensometer and comparison with GPS-derived and rTPS results. Black circles indicate the error associated with GPS surveys. The operating temperature measured by the extensometer is also shown.
Table 1. Individual cost and seller type of each component of the monitoring system.
* Minor components like board supports are not considered in the list because they were already available. ** Online cost does not include shipping fees. Total shipping fees are around EUR 35.
|
v3-fos-license
|
2021-04-02T05:30:24.287Z
|
2021-03-04T00:00:00.000
|
232480428
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23802359.2021.1886008?needAccess=true",
"pdf_hash": "d962a7e9682a1402c0133e7874165646bc3815bc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43503",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "d962a7e9682a1402c0133e7874165646bc3815bc",
"year": 2021
}
|
pes2o/s2orc
|
The mitochondrial genome of a parasitic wasp, Chouioia cunea Yang (Hymenoptera: Chalcidoidea: Eulophidae) and phylogenetic analysis
Abstract Chouioia cunea Yang 1989 is a parasitic wasp and a natural enemy of several lepidopteran pests during their pupal stage. In this study, we sequenced and analyzed the mitochondrial genome of C. cunea and obtained a complete DNA molecule 14,930 bp in size with 13 protein-coding genes (PCGs), two ribosomal RNA genes (rRNAs), and 22 transfer RNA genes (tRNAs) (GenBank accession number MW192646). All 13 PCGs start with a typical ATN codon (ATA, ATG, or ATT) and terminate with the stop codon TAA or TAG. Phylogenetic analysis showed that C. cunea forms a sister group with Tamarixia radiata, which belongs to the same family.
Chouioia cunea Yang (Eulophidae: Chalcidoidea) was first collected from pupae of the fall webworm (Hyphantria cunea Drury) and can also be used for the biological control of a variety of lepidopteran pests such as c Fabricius, Clania variegata Snellen, Stilpnotia candida Staudinger, S. salicis (L.), Micromilalopha troglodyta (Graeser), and Ivela ochropoda Fabricius (Zhao et al. 2016; Xin et al. 2017). The insect mitochondrial genome has many characteristics that allow it to play an important role in molecular evolution, phylogenetics, and population genetics. However, within Eulophidae only T. radiata has been sequenced so far (Du et al. 2019).
In this study, individuals of C. cunea were collected from a field in Hainan province and reared in the Environment and Plant Protection Institute, China Academy of Tropical Agriculture Sciences, Hainan, China (110°20′9″ E, 19°59′21″ N). The samples were preserved in 95% ethanol at −20 °C in the herbarium of the Post-Entry Quarantine Station for Tropical Plant, Haikou Customs District, PR China, with accession number IN07040201-0001-0020. A single sample was used for genomic DNA extraction. The mitogenome sequence of C. cunea was generated using the Illumina HiSeq X Ten sequencing system and assembled with the MitoZ software using default parameters (Meng et al. 2019).
The mitochondrial genome of C. cunea is 14,930 bp long, including 13 protein-coding genes (PCGs), 22 transfer RNA genes (tRNAs), two ribosomal RNA genes (rRNAs), and one partial non-coding AT-rich region with a length of 220 bp. All the genes are distributed on the two coding strands, of which 27 genes are encoded on the majority strand (J-strand) and the rest are transcribed from the minority strand (N-strand). The overall base composition of the mitogenome sequence is 44.8% A, 40.3% T, 8.2% C, and 6.7% G, with a high AT bias of 85.1%. Compared with the ancestral insect mitochondrial genome, the mitogenome of C. cunea exhibits a dramatic mitochondrial gene rearrangement, which is consistent with previous studies (Wu et al. 2020). A total of 27 genes have been rearranged in C. cunea, including seven PCGs (trnC, trnK, trnD, ATP8, ATP6, NADH3, trnR) and 20 tRNAs. The gene block (NADH3-trnG-COX3-ATP6-ATP8-trnD-trnQ-trnK-COX2-trnL2-COX1) is inverted in the mitochondrial genome of C. cunea.
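Base-composition figures such as those above are obtained by simple counting over the assembled sequence. The sketch below shows the calculation in C++; the input string is a short placeholder, not the actual C. cunea genome.

```cpp
// Minimal sketch of the base-composition / AT-bias calculation.
// The input string is a short placeholder, not real genome data.
#include <cstdio>
#include <string>

int main() {
  std::string seq = "ATTAAATAATTAGCGTATTAAT";  // placeholder sequence
  long a = 0, t = 0, c = 0, g = 0;
  for (char b : seq) {
    if      (b == 'A') ++a;
    else if (b == 'T') ++t;
    else if (b == 'C') ++c;
    else if (b == 'G') ++g;
  }
  double n = static_cast<double>(seq.size());
  std::printf("A %.1f%%  T %.1f%%  C %.1f%%  G %.1f%%  AT bias %.1f%%\n",
              100 * a / n, 100 * t / n, 100 * c / n, 100 * g / n,
              100 * (a + t) / n);
  return 0;
}
```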
All 13 PCGs of C. cunea start with conventional ATN codons: three with ATA (ND1, ND3, and ND4L), five with ATT (ND2, ND5, ND6, COX2, and ATP8), and five with ATG (COX1, ND4, CYTB, COX3, and ATP6). Twelve PCGs terminate with the stop codon TAA, whereas ND1 ends with TAG. The lengths of the 22 tRNA genes range between 58 and 69 bp, and all of them have the typical clover-leaf structure, except for trnS1 and trnR, which lack the dihydrouridine (DHU) arm. The lack of the DHU arm in trnS1 is a common phenomenon in the mitochondrial genomes of many insects (Yuan et al. 2015; Xiong et al. 2019). The two rRNA genes (s-rRNA and l-rRNA) are located in the trnV/trnA and trnA/trnL1 regions, with lengths of 754 and 1331 bp, respectively.
To validate the phylogenetic status of C. cunea, we selected the mitochondrial DNA sequences of 18 closely related taxa of Chalcidoidea from NCBI and extracted the sequences with the Phylosuite software. Pelecinus polyturator and Ibalia leucospoides were used as outgroups. The analyses were performed with Bayesian inference and maximum likelihood in Phylosuite (Nguyen et al. 2015; Zhang et al. 2020). The results showed that C. cunea clusters with T. radiata (Figure 1), which belongs to the same family. In summary, the results of this study provide essential and important DNA molecular data for further phylogenetic and evolutionary analyses of Chalcidoidea.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability statement
The genome sequence data that support the findings of this study are openly available in GenBank of NCBI at https://www.ncbi.nlm.nih.gov/ under the accession no. MW192646. The associated BioProject, SRA, and Bio-Sample numbers are PRJNA691215, SRP301245, and SRS8004285, respectively.
|
v3-fos-license
|
2020-10-19T18:08:35.560Z
|
2020-09-30T00:00:00.000
|
224905359
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://microbiologyjournal.org/download/40734/",
"pdf_hash": "0ef0ff99343813df7fff10477c61f6356f8542fb",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43505",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "a38ae15e07b4c28cc9b1ae3d13cdd54f262b6327",
"year": 2020
}
|
pes2o/s2orc
|
Seasonal Variation of Culturable Benthic Soil Prokaryotic Microbiota as Potential Fish Pathogens and Probiotics from an Aquaculture Farm in East Kolkata Wetlands, India
Rising demand in the aquaculture sector points towards finding innovative ways to promote better yield and profitability. The benthic soil microbiota can provide insight into the potential opportunistic fish pathogens as well as the probiotics present in an aquaculture system. This study reports the seasonal diversity and abundance of fifteen culturable pathogenic bacterial strains belonging to the genera Comamonas, Aeromonas, Providencia, Klebsiella, Escherichia, Acinetobacter, Serratia, Stenotrophomonas, Staphylococcus, and Enterobacter, along with nine probiotic strains native to the genera Bacillus and Pseudomonas, isolated from the benthic soil of an aquaculture farm located in East Kolkata Wetlands, West Bengal, India. Strains were isolated using traditional microbial culture methods and tested for their antimicrobial susceptibility against commonly available antibiotics. 16S rDNA analysis was done for the identification of the strains and the establishment of their phylogenetic relationships. Among the isolates, B. pumilus strain S8 in the pre-monsoon sample, E. coli strain M2aR1 in the monsoon sample, and A. hydrophila strain P6dF1 in the post-monsoon sample were the most abundant, having MPN counts of 275±21 × 10^6 CFU/gram dry soil, 278±18 × 10^6 CFU/gram dry soil, and 321±28 × 10^6 CFU/gram dry soil, respectively. Data on the temporal diversity, abundance, and drug susceptibility of prokaryotic fish pathogens and probiotics can be used to formulate measures for sustainable aquaculture practices with reduced maintenance costs.
as natural sinks for waste recycling and attenuation of floods. Secondly, these water bodies are utilised for a thriving culture of several fish species 20. The sewage-fed aquaculture ponds, locally known as bheris, are unique in their operating procedure as they utilize municipal waste products as fish feed with the occasional addition of inert feeds 21. However, the seasonal dynamics of the physicochemical properties of water and benthic soil play crucial roles in the microbiome of the benthic soil of aquaculture farms 22,23. Though several references are available on the description of the microbial communities of East Kolkata Wetlands, reports on the seasonal dynamics of the benthic soil prokaryotic microbiota emphasizing the variation of probable fish pathogens and probiotics are scarce.
MATERIALS AND METHODS
Media and Chemicals
All chemicals and media were procured from Himedia (India) and Sigma Aldrich Chemical (USA). Tryptone Soya Agar (TSA) was used as the enrichment medium for the isolation of strains. Routine subcultures and permanent stocks were made on Nutrient Agar (NA). The antibiotic susceptibility test was done on Mueller Hinton Agar (MHA). Culture media were sterilized at 120°C and 20 psi for 15 minutes before inoculation.
Sample collection
Benthic soil samples were acquired aseptically from an aquaculture farm in East Kolkata Wetlands (Lat. 22.5699° N, Long. 88.4394° E) from the top surface of the soil below 60 cm of the water column. Samples from three seasons in the year 2019, viz. the pre-monsoon sample (April), the monsoon sample (August), and the post-monsoon sample (December), were chosen for the prokaryotic analyses in this study.
Isolation of bacterial strains
Isolation of culturable bacterial strains was done according to the method given by Vieira and Nahas 2005 24, with some minor alterations. Five grams (wet weight) of each soil sample was first diluted and homogenized in sterile water (50 ml) with intermittent cooling in an ice bath for 30 minutes. The homogenate was then passed through a sterile 2 mm mesh. Filtrates were serially diluted up to 10^8-fold, and 100 µl from the last four dilutions (10^5 to 10^8) of each filtrate was spread on TSA plates (in triplicate), which were then incubated at 37°C for 48 hours. The dry weight of the soil was determined by incubating 50 grams of benthic soil at 105°C for 8 hours, and the final weight was used to calculate the wet-weight-to-dry-weight conversion factor 25.
Most Probable Number Count
Colonies that appeared on each TSA plate after 48 hours of incubation were marked based on colony morphology (shape, size, colour, transparency, margin contour, and surface topology) and were analysed for the Most Probable Number (MPN) count, which was done according to the method given by Janssen et al. 2002 26 with some minor alterations. The wet-weight-to-dry-weight conversion factor of each soil sample and the dilution factor of the inoculum were considered in the MPN count. The count was taken as the mean along with the standard deviation for each dilution (from the triplicate count), and results were expressed in CFU × 10^6/gram dry soil. For purification, colonies were transferred to NA plates and repeated subcultures were done until the strains were free from conglomeration. Each purified strain was stored permanently in NA stab cultures at 4°C and in 80% glycerol stock at −20°C.
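The conversion from a raw plate count to CFU per gram of dry soil combines the dilution factor, the plated volume, the homogenate volume, and the wet-to-dry conversion factor described above. The fragment below sketches that arithmetic; every numeric input is a placeholder chosen for illustration, not a measured value from this study.

```cpp
// Sketch of the CFU-per-gram-dry-soil arithmetic described above.
// Every numeric input is a placeholder chosen for illustration only.
#include <cstdio>

int main() {
  const double colonies      = 15.0;   // colonies counted on one plate
  const double dilution      = 1e5;    // plate spread from the 10^5 dilution
  const double plated_ml     = 0.1;    // 100 ul spread per plate
  const double suspension_ml = 50.0;   // 5 g wet soil homogenized in 50 ml
  const double wet_soil_g    = 5.0;
  const double dry_over_wet  = 0.55;   // assumed wet-to-dry conversion factor

  double cfu_per_g_wet = colonies * dilution / plated_ml * suspension_ml / wet_soil_g;
  double cfu_per_g_dry = cfu_per_g_wet / dry_over_wet;  // report on a dry-soil basis
  std::printf("%.2e CFU per gram dry soil\n", cfu_per_g_dry);
  return 0;
}
```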
Biochemical characterization
A total of fourteen biochemical tests, including determination of Gram character, methyl red (MR), tests for gelatinase, triple sugar iron (TSI), starch degradation, Voges-Proskauer (VP), indole production, citrate, oxidase, motility, etc., were performed on each purified strain, and the strains were characterized following the methods of Bergey's Manual of Systematic Bacteriology 27.
Molecular Characterization
Isolation of Genomic DNA and 16S rRNA gene amplification
Genomic DNA was isolated from the cultured bacterial strains using a bacterial genomic DNA isolation kit (GCC Biotech, India). The genomic DNA was electrophoresed through 0.8% agarose gel and measured spectrophotometrically at a 260 nm wavelength to estimate purity. The samples were subjected to 16S rRNA gene amplification using a gradient thermal cycler (Biorad Laboratories, USA). The universal primers 27f (5'-AGAGTTTGATCMTGGCTCAG-3') and 1492r (5'-GGTTACCTTGTTACGACTT-3'), encompassing the V1-V9 hypervariable regions of 16S rDNA, were used for PCR amplification in conjunction with 2X PCR MasterMix (Thermo Fisher Scientific, India). The PCR amplification was carried out for 30 cycles, and the amplicons (approx. 1.4 kbp) were checked on 1.5% agarose gel using ethidium bromide stain. The amplicons were purified using an agarose gel purification kit (NEB, USA), and sequencing was done by Xcelris Labs Ltd. (Ahmedabad, India).
In-silico analysis
Sequences were analysed in silico using BLASTn (NCBI database), and the hits were recorded to find the nearest neighbour with the highest max score. The aligned sequences were obtained in FASTA format for downstream analyses in MEGA 7 31. The CLUSTALW algorithm was used for the alignment analyses, and the data thus obtained were used for phylogenetic analyses with the Neighbor-Joining method 32. The evolutionary distance was calculated by the Maximum Composite Likelihood method 33, and the branch lengths were calculated based on base substitutions per site. Bootstrap tests with 500 replicates were done to construct the consensus tree representing the evolutionary history of the taxa analysed 34. The nearest four neighbours of each isolate from the BLASTn hits were used to create the phylogenetic tree. The partial 16S rDNA sequences of all strains were deposited in the GenBank database.
Statistical analysis
Experiments on the enumeration of bacterial MPN were performed in triplicate and mean values were indicated along with standard deviation (SD). SPSS 17.0 (SPSS Inc., Chicago, USA) was used for the Statistical analyses.
Isolation of bacterial strains
In total, two hundred and forty-eight culturable bacterial strains were isolated and purified from the aquaculture benthic soil samples collected across the three seasons. Molecular phylogenetic analyses revealed that several bacterial strains were common to samples from more than one season. Amongst the isolates, only twenty-four strains were revealed to have either presumptive pathogenic or probiotic value to fish, and these are reported in this communication. In the pre-monsoon season sample, a total of six strains (Bacillus flexus strain S1a, Aeromonas punctata strain S2, Bacillus pumilus strain S8, Bacillus subtilis strain S8a1, Comamonas aquatica strain SAC1, and Bacillus cereus strain SWA6a) were isolated. Cumulatively, eight strains were found in the monsoon season sample (Bacillus thuringiensis strain M10, Aeromonas enteropelogenes strain M11, Escherichia coli strain M2aR1, Pseudomonas aeruginosa strain M2F1, Bacillus flexus strain M3, Acinetobacter junii strain M5fR1, Serratia marcescens strain M5hR1, and Stenotrophomonas maltophilia strain M6aR1). Finally, from the post-monsoon season sample, another ten strains were isolated (Escherichia coli strain P1bR1, Providencia vermicola strain P2fR1, Enterobacter cloacae strain P4cR1, Bacillus flexus strain P5a, Klebsiella pneumoniae strain P5aR1, Serratia marcescens strain P5bR1, Bacillus cereus strain P6b1, Aeromonas hydrophila strain P6dF1, Comamonas aquatica strain P7, and Staphylococcus aureus strain P8d3a1). Isolates B. flexus strain S1a, A. punctata strain S2, B. pumilus strain S8, and B. subtilis strain S8a1 were found in the samples from both the pre-monsoon and monsoon seasons, whereas E. coli strain M2aR1, B. flexus strain M3, and S. marcescens strain M5hR1 were found to occur in both the monsoon and post-monsoon samples.
MPN count
The MPN counts of bacterial strains isolated from all three seasons are depicted in Figure 1. Isolate B. pumilus strain S8 in the pre-monsoon sample, isolate E. coli strain M2aR1 in the monsoon sample, and A. hydrophila strain P6dF1 in the post-monsoon sample were the most abundant, having MPN counts of 275±21 × 10^6 CFU/gram dry soil, 278±18 × 10^6 CFU/gram dry soil, and 321±28 × 10^6 CFU/gram dry soil, respectively. Strains were found to vary drastically in abundance across seasons, as revealed by the difference in seasonal abundances of B. flexus strain S1a,
Biochemical Characterization
The biochemical characteristics of all the isolates are compiled in Table 1. Among the isolates, nine were Gram-positive and the rest were Gram-negative bacteria. Fermentation of sugars, as analysed by the TSI tests, revealed a varying degree of fermentation potential amongst the isolates, in both aerobic and anaerobic conditions. All the strains tested were DNase negative and were unable to produce H2S in TSI media.
Antibiogram
The results of the antibiogram of the bacterial strains are given in Table 2.
Molecular Characterization and phylogenetic analyses
BLASTn results revealed the identity of the isolates, and taxonomic names were assigned based on the nearest neighbour in the NCBI database. Table 3 depicts the BLASTn analysis of the bacterial strains. The phylogenetic analysis of the fifteen pathogenic strains is given in Figure 2, and the optimal sum of branch lengths was found to be 3.522. The phylogenetic relationship of the nine probiotic isolates is shown in Figure 3, with an optimal sum of branch lengths of 1.478. Both trees are drawn to scale, as shown by the legends in the figures.
With the help of online literature databases, a review was done highlighting the pathogenic effects of the isolates on the fish species cultivated in the farm and ascertaining the efficacy of the isolated species as probiotics. Data obtained from the literature survey are summarised for the pathogens and the probiotics in Table 4 and Table 5, respectively. Notably, among the isolates, three species belonging to the genus Aeromonas were found; these cause Aeromoniasis, which is by far the worst-affecting disease, causing high fish morbidity in aquaculture fish cultivation, as reported in several citations [35][36][37][38][39]. However, it is also appealing that, among the probiotics isolated, several are reported to impart resistance against the pathogens isolated from the same source as well as to boost the immunity of the cultured fish specimens [40][41][42][43].

Table 2. Antibiogram results of the twenty-four isolates from the soil samples against thirteen different antibiotics. The table is colour-coded according to the legend given below.
DISCUSSION
This study reveals several pathogenic and probiotic species present in the benthic soil of the aquaculture farm under study. The farm cultivates Indian and exotic major carps along with some catfishes and cichlid fishes, and there is ample chance that any of these fish species may be affected by the pathogens revealed in this study. The presence of pathogens in a fish farm is never evaluated unless a disease outbreak occurs with high fish morbidity 44. Mass fish morbidity is reported to be caused by pathogens like A. hydrophila 10,35,45, A. punctata 36, and K. pneumoniae 46,47, causing huge losses in revenue. Bacterial isolates identified in this study revealed that the benthic soil sample collected in the post-monsoon season harbours a huge number of opportunistic pathogens, with A. hydrophila being the most abundant. It is also noteworthy that, in comparison with the other two seasons, the diversity and abundance of opportunistic pathogens are much higher in the post-monsoon season sample, as the occurrence of new genera of pathogens, viz. Klebsiella, Enterobacter, and Providencia, was noticed along with strains of Escherichia and Serratia. This high abundance of diverse opportunistic pathogens could be attributed to the mixture of urban sewage and rainwater runoff from adjacent areas during the monsoon and post-monsoon seasons.
Several antibiotic-resistant strains have been reported by scientists due to the indiscriminate use of antibiotics in aquaculture farms 48,49, and pathogens like E. coli can transmit resistance to other pathogens, possibly through horizontal gene transfer 44. Since this study reveals the antibiotic susceptibility of the strains, indiscriminate use of antibiotics can be avoided and specific treatment measures can be implemented.
The prokaryotic probiotics present in soil and water also boost the immunity of culturable fishes 8,50-53 against several pathogens, and thus steps could be formulated to aid the probiotics. These probiotic bacteria can also act as a dietary supplement for fish specimens, mitigating the problem raised by inert fish feeds 43,54-56. Probiotic species also engage in the continuance of biogeochemical cycles through several enzymes that help in the biodegradation of detritus materials in the benthos 18. Probable probiotic species isolated in this study produce enzymes like cellulase 57 and lipase (unpublished data) and hence could be forerunners in the maintenance of the aquatic ecosystem. These probiotics are often consumed from the soil by bottom feeders, thus circulating along the food chain, and could beneficially alter the gut microcosm of fish 58. Further, it was revealed that the presence of probiotics is maximal in the pre-monsoon season and declines rapidly from the monsoon season onward. This is because the farm practices liming and the addition of organic fertilizers once annually in the pre-monsoon season, leading to the amplification of probiotics. The monsoon rains bring a lot of agricultural and domestic runoff as well as sewage overloads, reducing the quality of water and soil, thus diminishing the number of probiotics and leading to the growth of pathogens. Coliforms were also found to be abundant in the soil during the monsoon and post-monsoon seasons. This is probably the first report that provides an in-depth insight into the culturable microcosm of an aquaculture farm in East Kolkata Wetlands, emphasizing the roles of its members as fish pathogens and probiotics. This project could lead to early detection of pathogens and the formulation of remedial measures even before the onset of fish pathogenesis. This could be done by formulating a suitable bioaugmentation program to reinforce the growth of probiotics and eradicate the pathogens, ensuring sustainable and profitable aquaculture.
|
v3-fos-license
|
2019-12-15T14:49:44.847Z
|
2019-12-01T00:00:00.000
|
209409949
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/19/24/5517/pdf",
"pdf_hash": "2fde9ce844760db2996ba6b1212bb55e7cf3ccc0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43506",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"sha1": "d3f7912658241079ae573aaf8499236995480ddd",
"year": 2019
}
|
pes2o/s2orc
|
Hand Rehabilitation and Telemonitoring through Smart Toys
We describe here a platform for autonomous hand rehabilitation and telemonitoring of young patients. A toy embedding the electronics required to sense finger pressure in different grasping modalities is the core element of this platform. The system has been realized following the user-centered design methodology, taking into account stakeholder needs from the start: clinicians require reliable measurements and the ability to remotely get a picture of rehabilitation progression; children have asked to interact with a pleasant and comfortable object that is easy to use, safe, and rewarding. These requirements are not antithetic, and considering both since the design phase has allowed the realization of a platform that is reliable for clinicians and appealing to young children.
Introduction
The hand, both morphologically and functionally, represents one of the main elements that characterize humankind and serves as the main tool by which to interact with the environment in daily life. High finger mobility and opposable thumbs allow us to pinch, grab, manipulate, and interact with objects, enabling us to effectively explore the world, proving to be an essential element of the learning process throughout our life cycle, from childhood to old age [1].
During pregnancy, embryogenesis of the upper extremities takes place between the fourth and eighth week after fertilization, and the majority of congenital anomalies of the upper extremity occur during this period. These anomalies occur in nearly two of every 1000 children born [2], and they manifest in several typologies with varying degrees of seriousness. In the most severe cases, when few or no fingers are present (oligodactyly), current clinical practice is based on surgically transferring one or more toes to the hand to enable it to grasp objects for interaction. However, to achieve this goal, a long rehabilitation period is required. Rehabilitation is also required, although for a shorter time, for those children who, after domestic or outdoor accidents, suffer from injuries to their hands (from fractures to finger amputation) requiring clinical intervention.
Classical rehabilitation consists of a set of exercises aimed at increasing finger strength, coordination, speed, and accuracy of movement. Moreover, after transfer surgery, such exercises also aim to integrate the implanted fingers into a new body scheme [3]. To engage children, exercises are usually carried out through toys (e.g., building blocks) that are manipulated by the children according to the exercise goals and under the instructions and supervision of a professional therapist in one-to-one
Related Research
The first attempts to use technology to support hand rehabilitation come from the robotics field, where instrumented gloves [18][19][20][21][22] and exoskeletons [23][24][25][26] have been proposed, also in combination with exergames. These systems have proven to be very useful when dealing with patients who do not have enough muscle power to move their hands on their own. However, they limit the freedom of patients' movement and alter proprioception and the natural behavior of the hand [27]. Moreover, they are bulky, not easy to don, costly, and therefore restricted primarily to clinical use with post-stroke patients. Lastly, their dimensions make them difficult to use in pediatric hand rehabilitation.
For these reasons, sensing devices that do not have to be attached to the hands are preferred. A possible approach is based on using cameras to survey movement. Motion capture systems, based on passive markers to robustly identify the finger joints in the camera images, were proposed first [28]. Although this approach is potentially accurate, it is limited to the laboratory domain, as the set-up time required to attach the markers to the hand makes it unfeasible at home. Of more interest are markerless approaches to motion capture. Recently, the Leap Motion TM [29,30] and RGB-D cameras (e.g., Kinect TM [31]) have been introduced to track the human body. However, such systems require that relatively simple movements, well visible to the camera, be performed. Moreover, their accuracy is not adequate for capturing free grasping movements [32,33]. Lastly, they require that the user stay inside the working volume of the device, while children tend to move a lot while playing. For these reasons, such devices are not a good match for hand rehabilitation, especially at a pediatric age.
Solutions that do not have such problems are based on touch-screen devices in general and on tablets in particular. Tablets can detect the movement of one or more fingers on their surface with high accuracy [34][35][36][37][38][39]. Combined with exergames, they allow guiding the users through exercises aimed at recovering hand dexterity. Most of these solutions allow setting an adequate degree of difficulty and, in some cases, provide the clinicians with a log of the children's activity. However, a clear definition of an adequate set of rehabilitation exercises and their quantitative evaluation is still largely missing. Moreover, these approaches can provide neither force exercises nor the grasping of real objects.
More recently, to enable force exercises, a few attempts to embed sensors inside objects have been described. These objects are inspired by the Myogrip [40], Takei [41], Jamar [42], or Vigorimeter [43] handheld dynamometers, which are used to measure the maximum grasping force exerted when squeezing their handle. All of these devices are meant for clinical use. A few attempts to provide equivalent measurements at home have been proposed. The Grip-ball system consists of an inflated ball containing a pressure sensor and the associated electronics [44]. A similar approach has been adopted in the Domo-grip system [45], in the SqueezeOrb device [46] (the latter developed for human-machine interfacing), and in the Lokee smart ball [47] or the Gripable device [48]. However, such devices allow only the palm power grasp and not all the other finer prehension modalities typical of the human grasping repertoire [49].
Moreover, all of these sensing devices maintain a clinical-like structure, in which the tracking object is clearly distinguishable and associated with measurement. This has been recognized to be an obstacle to adoption, especially for children, and the idea of instrumenting objects of everyday life has recently been pursued. In the CogWatch project [50], objects of everyday life (e.g., coffee machines, mugs) have been instrumented with inertial and contact wireless sensors to gather data that can be used to assess the correct sequence of gestures (e.g., preparing a coffee) to detect cognitive decline early. Sen.se [51] has pushed this approach further, providing motion sensors that can be attached to any daily life object. However, such approaches do not allow recovering detailed information on hand interaction. A more suitable approach to hand rehabilitation has been pursued in the Caretoy project [52], which has developed wireless cylindrical toys made of elastomer with embedded sensors. The toys are composed of two soft air-filled chambers connected to a pressure sensor; a rigid case inside the toy contains the control unit. We leverage this approach to realize a flexible pressure-sensing device that can be used to measure finger pressure in different grasping modalities, giving it the aspect of a toy. This device, combined with a tablet and suitable exergames, supports the large variety of exercises required for hand rehabilitation.
Platform Specifications
To provide a valid solution, the starting point is eliciting the functional specifications from all stakeholders [53], who, in our case, are: Clinicians (physiotherapists, neurodevelopmental disorder therapists, and surgeons): they require a tool that can guide the child through adequate exercises, which can be adapted to the child's current motor abilities. Moreover, they require remote monitoring of the rehabilitation progression from the hospital.
Patients (children): they have to use the platform to exercise autonomously. The platform should be easy to use and fun at the same time. We target here children between 2 and 7 years old, for whom a maximum force range of 5 kg has been defined.
Caregivers: These are, preferentially, family members. They need to minimize travel to the hospital with their child and maintain a tie with the clinicians at the hospital to tune exercise and optimize recall visits.
We therefore co-designed the platform with all of these stakeholders. To realize a system that can be useful and functional for the end users, we adopted two parallel processes throughout the development phase: the Design Thinking Process [54] and Human-Centered Design [55,56]. The first is based on searching for possible solutions to the problem while keeping the needs of the users at the center of the design process, while the goal of Human-Centered Design is to maximize the usability and user experience of the system. Accordingly, we involved all the users early in the design loop to take care effectively of several key interaction aspects: ergonomics, ease of use and maintenance of the system, as well as effectiveness.
In the first phase, we identified a core set of required exercises by observing live sessions and through focus groups with clinicians. These belong to two categories: Strength exercises: these aim at strengthening finger muscles to regain the power required to grasp objects. To achieve this, patients are required to grasp objects using different prehension modalities: pinch, palm, or side grasp [49]. Typically, therapists set a specific force, duration, and intensity for the exercises and a repetition frequency.
Mobility exercises: Aim at regaining mobility, increasing sensitivity and range of fingers motion. To this aim, patients are required to tap, slide or pinch (e.g., like in the zoom-in/out functionality of many software applications) on a surface using one or multiple fingers. Here, therapists typically set the range of motion, its speed and accuracy [57].
We explicitly remark that the degree of difficulty of each exercise is regulated according to the child's current motor ability.
The two sets of exercises call for different hardware requirements. The first set requires hardware capable of measuring finger strength when the user interacts with objects using different finger arrangements according to the different grasping modalities. The second set requires hardware capable of precisely tracking finger positions.
A single instrument cannot satisfy the two requirements in an unobtrusive and ecological way. While mobility exercises can be performed on the tablet screen as shown in [34][35][36][37][38][39], a sensing object that could be used for strength exercises was not available and has been developed as described here.
Force Sensing Sub-System
The core element of the platform is a specifically designed toy that embeds the dedicated hardware capable of sampling, digitizing, and wirelessly transmitting pressure measurements in real time to the mobile device (typically a tablet or a smartphone) on which the exergames are played (Figure 1). The toy realized here is based on Lego TM, a typical construction familiar to children of young age, and has the shape of a cottage (Figure 2a) that hides the sensing architecture.
The pressure sensor adopted should be robust enough to resist possible damage when handled by children. To this aim, we adopted a load cell (13 mm × 13 mm × 50 mm) obtained by disassembling a consumer kitchen electronic scale (Zheben, SF-400) with a pressure range of 0-7 kg. The load cell has been encapsulated inside a Lego TM structure, and the sensitive part of the cell has been covered by a flat Lego TM red tile (Figure 2c) to indicate clearly where the child has to press. The encapsulated load cell has then been positioned inside the Lego cottage in an area that the children can comfortably interact with.
The processing electronics are housed inside the cottage and connected to the load cell. The force signal is amplified by a differential operational amplifier with adjustable gain (TI INA125P); a particular configuration has been adopted to maximize stability, as shown in Appendix A. The amplifier is connected to the microcontroller of an Arduino board that samples the signal, filters it, and sends it wirelessly through an HC-06 Bluetooth transceiver [58] at a frequency of 250 Hz. Bluetooth 2.0 + EDR (Enhanced Data Rate) has been adopted to limit current drain and therefore power consumption. To maximize attractiveness, children can personalize the cottage with Lego TM characters or other constructions, provided that these do not interfere with the sensing area.
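To make the acquisition chain concrete, the following is a minimal sketch of the firmware loop just described (sample, filter, transmit at 250 Hz). The pin assignment, baud rate, and smoothing constant are illustrative assumptions, not the actual firmware values.

```cpp
// Minimal acquisition loop (sketch): sample the amplified load-cell signal,
// smooth it with a first-order IIR filter, and stream it over the HC-06,
// assumed here to be wired to the hardware serial port.
const int SENSOR_PIN = A0;             // ADC input from the INA125P output (assumed pin)
const unsigned long PERIOD_US = 4000;  // 4 ms period = 250 Hz sampling rate
const float ALPHA = 0.2f;              // IIR smoothing factor (assumed value)

float filtered = 0.0f;
unsigned long nextSample = 0;

void setup() {
  Serial.begin(115200);                // HC-06 baud rate is configurable; assumed here
  nextSample = micros();
}

void loop() {
  if (micros() - nextSample < PERIOD_US) return;  // wait for the next sampling slot
  nextSample += PERIOD_US;
  int raw = analogRead(SENSOR_PIN);               // 10-bit sample, 0..1023
  filtered += ALPHA * (raw - filtered);           // simple digital low-pass filter
  Serial.println((int)filtered);                  // one reading per line to the radio
}
```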
In the configuration shown in Figure 2a,b, the child can activate the sensor by pressing with one finger. To accommodate pinch grasp, a rotating mechanism, made of a Lego TM shaft inserted into two parallel bricks, has been designed and realized (Figure 2c). In this way, the Lego TM case of the load cell is free to rotate and emerges from its housing; the child can thus squeeze the case with two opposite fingers as required by pinching. Different pinch amplitudes can be accommodated by simply making the load cell case thicker by adding flat bricks on the top of it (Figure 2d). Finally, for power grasp (palm grasp), the cell case can be detached from the cottage and used as a free-tracker (Figure 2e). In this way, the entire repertoire of basic prehensions of the human hand is covered.
Two ranges of force have been defined: from 0 to 3 kg and from 0 to 5 kg, according to age and rehabilitation state. The prototype is built on a 30 cm × 30 cm Lego TM base and the weight of the construction is about 0.3 kg, which makes it easily portable. The accuracy and reliability have been fully tested as reported in Appendix B. Linearity and repeatability are in the order of 0.1%, which is 3 g for the 3 kg range and 5 g for the 5 kg range.
Such a high resolution is not required when the child exercises at home, where the capability of applying a force above a given threshold is sufficient for training purposes. Under this hypothesis, the tracker can be simplified: a sensing mechanism based on the same principle on which a clothespin works has been devised.
The sensing object is a pet toy realized through 3D printing (FABTotum Personal Fabricator printer). It has been designed to be attractive to children: a crocodile made of two parts, the upper and lower jaw, pivoting around the end of the mouth (Figure 3). The child has to press the tail of the crocodile to make it open its mouth; this movement is resisted by a set of rubber bands applied to the mouth, whose strength depends on their number and regulates the force that has to be exerted by the child. A smart button, constituted of a digital contact, a microcontroller, and a radio transmitter, is inserted through a frontal slot into a lodging created in the bottom jaw of the crocodile, to make battery substitution easy. The button is used to sense when the user has exerted enough pressure to open the mouth. The smart button chosen here is a Camkix Bluetooth Remote Shutter; typically used as a remote shutter for digital cameras, it has the robustness required for use by children. Such a device transmits a digital pulse whenever the contact is open and is compatible with the Bluetooth HID (Human Interface Device) profile; it can be easily interfaced with smartphone and tablet operating systems, as it is recognized automatically, like regular keyboards or mice, and can therefore be used easily as an input device.
Motion Sensing Sub-System
For the mobility exercises, we leverage the multi-touch display of mobile terminals such as smartphones or tablets. Such devices accurately detect the position of one or more fingers at 30 Hz and can therefore easily be used to measure finger tapping, sliding, or pinching over the device surface (see videos in the Supplementary Materials). This tablet or smartphone is the same device that works as the host for the pressure-sensing device.
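As an illustration of how such touch data can be turned into the quantities used by the exercises, the following sketch computes the pinch aperture, in millimeters, from two touch points. The touch structure and the PPI value are generic assumptions for illustration; the actual touch API differs between operating systems.

```cpp
#include <cmath>

// Generic sketch: derive the pinch aperture from two touch points reported
// by a multi-touch screen. Field names are illustrative assumptions.
struct Touch { float x_px; float y_px; };   // finger position in pixels

// Convert the distance between two fingers from pixels to millimeters,
// using the device-specific screen resolution (PPI: pixels per inch).
float pinchApertureMm(const Touch& a, const Touch& b, float ppi) {
  float dx = a.x_px - b.x_px;
  float dy = a.y_px - b.y_px;
  float dist_px = std::sqrt(dx * dx + dy * dy);
  return dist_px / ppi * 25.4f;             // 25.4 mm per inch
}
```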
Platform Architecture
The sensing devices and the host constitute the platform that is given to the families to support the autonomous rehabilitation of their child. It can be regarded as the client component of the typical client-server architecture used in the telerehabilitation domain [6]: the client runs the exergames, acquires the pressure data from the sensing devices, logs them and attaches a time stamp. At the end of the exercise, it transmits these data to a server that, in turn, processes the data to compute the results and show them to the clinicians. Moreover, the server sends to the client the list of the exergames chosen by the clinicians, along with their difficulty degree (Figure 4).
The server itself is composed of two main components: the data storage and a graphical application that allows clinicians to analyze rehabilitation progression and to configure the rehabilitation sessions. Such a graphical application is implemented as a web application to allow clinicians to access the data from anywhere, even outside the hospital.
When the application is started on the mobile terminal, the user sees the exercises grouped into a suite: the user can choose among the different exergames assigned by the therapist. The exercises are already configured at the right difficulty level and set to be played either with the tablet or with the pressure sensor as a tracker (Figure 5).
Therapeutic Exergames
A set of exergames was developed, following a methodology specifically designed for therapeutic exergames [53], that consists of three main steps:
1. Identification of the exercises, all their requirements, constraints and all the parameters that determine their challenge level;
2. Transformation of the exercises into Virtual Exercises, to discuss game mechanics and actions with clinicians;
3. Introduction of all the contour elements that characterize a video game (characters, graphical elements, color, music, sounds and rewards) as a layer on top of the exercises themselves, to maximize entertainment.
The exergames were designed to implement different exercises such that the same game can be played to increase either force or mobility, thus creating a many-to-many mapping between exercises and exergames. We briefly describe here some of the developed exergames. As required, they cover the age range of 2-7 years.
Breaking Eggs
A closed egg is placed in the middle of the screen. The player has to scratch the egg surface to break it. As a reward, a pet animal, different from trial to trial, exits from the egg and appears on the screen; the child can caress the pet, receiving an additional reward from the game: cute little stars or hearts, and pleasant sounds (Figure 6a). This game supports both mobility and strength exercises. The child can break the egg by tapping, sliding, or pinching on the display surface (mobility exercises), or by pressing, pinching, or grasping the sensor (strength exercises). The simple rewarding mechanism of the game has been designed for children 2-3 years old.
Jigsaw
A nine-piece jigsaw is proposed to the child, who can solve it by dragging each piece into the right place with his/her hand. This game supports all the mobility exercises and can be played only through the multi-touch screen (Figure 6b). Different jigsaws are provided, and additional ones can be obtained from images provided by the child him/herself. It has been designed for children 3-7 years old.
Mouse & Cheese Maze
A mouse is trapped inside a maze and the child has to guide it with his/her hand all the way to a target where it finds a piece of cheese. The maze shape is created procedurally and will be different from trial to trial to increase variability. This game supports all the mobility exercises and can be played only through the multi-touch screen (Figure 6c). It has been designed for children 3-7 years old.
Hot Air Balloon (and Similar Themes)
This is a runner game. The player is in control of a hot air balloon that flies in the sky. The goal is to catch targets (e.g., coins) and avoid distractors (e.g., birds) by moving the balloon up and down with his/her fingers, according to the specified exercise (Figure 6d). This game supports all the exercises, using either the multi-touch screen or the pressure-sensing device. Some variants of this exergame are available. In a marine theme, the graphical environment is the sea, the controlled element is a submarine, the targets are coins, and the distractors are fish. In an outer-space theme, the graphical elements represent the stars, the mobile element is an astronaut, and the distractors are aliens. The different themes allow showing to the child games that appear very different from one another, while they share the same mechanics and guide the child through the same exercises. These games have been designed for children 4-7 years old.
Discussion
To guarantee a good, fast, and complete recovery, rehabilitation after surgery should be continuous and intensive [59], and the system described here goes in this direction: it is meant to be used for a prolonged time and at home, autonomously. For this reason, a careful design phase is required. We have started from the clinical needs and developed the sensing part first. The touch screen of mobile devices has been identified as a suitable means to track the movements required to increase finger mobility. Pressure detection instead has required the design and development of a novel flexible pressure-sensing device.
We started by designing a first prototype to illustrate the sensing mechanism to clinicians [53]. This was constituted of the bare load cell attached to a rigid metal frame (Figure 7a) and of a simple clothespin with a button that detected when the clothespin was closed (Figure 7b). The load cell was considered adequate, as it showed enough robustness for use by children; the clothespin mechanism was considered suited to elicit an adequate force from the children, but the grasping region on top was considered somewhat slippery, and a better shaping was required for grasping.
The second step was to provide a case that can be attractive to children. For this reason, we have embedded the load cell into a Lego TM cottage structure (Figure 7c). This has allowed, on one side, hiding the sensor inside the cottage and, on the other side, offering the child an object with which he/she can be familiar. Following a similar line of thought, the clothespin was transformed into a 3D pet animal. A crocodile was chosen as it has two desirable characteristics that are a good match for the sensor requirements: it has a large head, inside which the trigger sensor can be hidden, and it has a long tail that can be used to induce the grasping movements required by clinicians in an ecological way (Figure 7d). Additionally, the tail has been designed with scales that facilitate a firm grasp. Moreover, this choice has allowed us to greatly simplify the case, reducing cost and size.
The load-cell based sensing device has first been tested for accuracy, repeatability, and linearity (Appendix B) to assess measurement validity, following a procedure similar to that proposed in [45]. The resolution and accuracy are on average an order of magnitude better than those of the dynamometers typically used in clinics [40][41][42], which have an accuracy of 50 g with a sensitivity of 10 g [40]. This is because such dynamometers have been developed to assess grip force mainly in adults and elders, and have a range that is almost twenty times larger, reaching 90 kg. The device itself is also of large dimensions. Focusing on the pediatric hand has allowed miniaturizing the device on one side, and increasing its accuracy and resolution on the other, to values beyond the target set by clinicians.
A single non-rechargeable button cell (CR2032 3 V) is used in the crocodile sensor. This is dimensioned to operate the device for one month, half an hour a day before the cell discharges and has to be replaced. The opening in the lower jaw (Figure 7d) makes this operation easy. The Lego TM cottage accommodates a larger source of energy inside the case. We use here a pair of rechargeable stylus batteries (1.5 V AA) that are recharged typically at the end of the week. Indeed, recharging modalities that do not require extracting the batteries from their housing (USB, induction) would be preferable, but they require additional room for the charging components and will be part of a subsequent refinement stage.
The two sensing systems were then tested with stakeholders. Clinicians have validated the adequacy of the interaction modalities offered and of the measurements collected. Children, on their side, generally liked the objects; however, their feeling and that of their parents was that the sensing systems were somehow "cold" objects, not really interesting for children per se.
Eliciting some emotional reaction is fundamental to increase the attractiveness and therefore the intrinsic motivation to use the system. To this aim, we have added some emotional content to the objects [60]. We have added Lego TM characters and additional aesthetic Lego TM structures on top of the cottage (Figure 2). Moreover, we have also granted children the possibility to further personalize the cottage. The crocodile's surface was painted with non-toxic acrylic paint to provide a colorful and playful final appearance (Figure 3). The degree of pleasantness was largely increased in this last step, and children were keen to use the crocodile repeatedly [61]. Although the overall system is quite simple, it satisfies one of the main requisites of telerehabilitation and telemonitoring systems in general: it can be adjusted to the patient's current capabilities [6,7]. The combination of the exergames with the sensing devices enables a difficulty progression in the exercises along the three dimensions typically required by clinicians: range, speed, and accuracy. In all exergames, the game pace sets the speed, that is, the frequency at which targets and distractors spawn. The therapist can then program the range of motion required to consider a movement valid: for instance, for a pinch movement, the child has to open the fingers by more than 40 mm, or the pinch force has to be larger than 1 kg. Only valid movements allow moving the avatar inside the game. Finally, the degree of accuracy can be increased by reducing the dimension of the targets to be hit by the game avatar. Similar reasoning applies to the other grasping modalities. Such flexibility in the platform is made possible by the combination of an adequate range and resolution of the sensors with the definition of specific and intuitive game parameters that regulate the exercise difficulty, according to the good practice of serious game design [12].
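As an illustration of how these three difficulty dimensions can be encoded, the following sketch shows a hypothetical parameter structure and the pinch validity test quoted above (aperture above 40 mm or force above 1 kg). Names and types are our own illustration, not the platform's actual code.

```cpp
// Hypothetical difficulty parameters set by the therapist for a pinch exercise.
struct PinchDifficulty {
  float minApertureMm;   // required range of motion, e.g., 40 mm
  float minForceG;       // required pinch force, e.g., 1000 g (1 kg)
  float gamePace;        // spawning frequency of targets/distractors (speed)
  float targetSizePx;    // smaller targets demand higher accuracy
};

// Only valid movements animate the avatar inside the game: mobility
// exercises validate on the aperture, strength exercises on the force,
// matching the "more than 40 mm, or larger than 1 kg" rule above.
bool isValidPinch(float apertureMm, float forceG, const PinchDifficulty& d) {
  return apertureMm > d.minApertureMm || forceG > d.minForceG;
}
```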
A single suite of exergames is provided to guide both the mobility and the strength exercises. This speeds up game learning and increases the attachment to the graphical characters that are used again and again. The exergames are designed to automatically collect data from the input devices (position or force), as well as game interaction outcomes (targets hit/missed, distractors hit/missed), along with their timestamp. Force data depend on the sensor used: they are values expressed in grams for the cottage sensor device, and binary (trigger) values for the crocodile toy; the latter notifies the game that the user applied enough force on the sensor. Position data are obtained by converting finger positions expressed in pixels into millimeters, considering the device-specific screen resolution (PPI: pixels per inch). These data are used by the game itself to evaluate user movement, interact with game elements, and provide feedback in real time. When a game ends, the acquired data are sent to a remote server in the cloud for storage and analysis by the therapists, who can access them through a web application. This modality allows them to review the rehabilitation progression of their patients at the most suitable time of the day. Moreover, this creates a loop that starts from the therapist, who assigns the exercises to the patients with a predefined level of difficulty, goes through the exercises that are carried out at home, and goes back to the clinicians with the results that can be used to tune the therapy. Such a continuous telemonitoring capability is at the basis of any autonomous rehabilitation platform.
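A minimal sketch of what a logged sample could look like, based on the data enumeration above (timestamped force, trigger, or position samples plus game outcomes). Field names are assumptions for illustration, not the platform's actual schema.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative shape of the per-sample log record described above: each
// game event carries a timestamp and either a force value in grams
// (cottage sensor), a binary trigger (crocodile), or a position in mm.
struct LogSample {
  std::uint64_t timestamp_ms;   // time stamp attached by the client
  std::string   source;         // "cottage", "crocodile" or "touch" (assumed tags)
  float         force_g;        // force in grams (cottage sensor only)
  bool          trigger;        // mouth-opened event (crocodile only)
  float         x_mm, y_mm;     // finger position in mm (touch only)
  std::string   outcome;        // e.g., "target_hit", "distractor_miss"
};

// At the end of an exercise, the accumulated samples are serialized and
// sent to the server for storage and analysis by the therapists.
using SessionLog = std::vector<LogSample>;
```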
To make therapist evaluation most effective, views of the data with a different level of detail are provided. To get a quick and clear picture of patient progression, data are aggregated on a per-exercise basis according to the methodology described in [62] and average values on speed, accuracy, and range of motion are plotted over time. For instance, a typical plot of pinch progression is shown in Figure 8.
The clinicians can also analyze in-depth a single exercise of a single rehabilitation session through a detailed report like that shown in Figure 9.
This in-depth analysis of the time course of movement and force development along with game interaction outcome allows having a clear picture of how the exergame has been played. All these data, collected over the cohort of patients served by the same hospital, can be processed through adequate machine learning techniques to cluster patients, identify suitable models of rehabilitation progression for each cluster, and optimize the scheduling of hospital recall visits and the therapy program [6].
Game results, such as the score and the number of targets or distractors hit and missed, are also displayed in this report; these values are not used by therapists to assess exercise performance quality, but to establish whether the game is at the right difficulty level. If the game becomes too simple or too difficult, the patient tends to be distressed by the game and quit [12,53]. This information is used to set a proper challenge level for the user, with the goal of keeping him/her interested and engaged in the game itself (flow state).
Figure 8. Pinch report on all pinching exercises performed in a 10-week period. The amplitude of pinching increases largely in the first month of training and then increases even more, as shown by the green line. The same is true for pinch speed. Notice that a clearer picture can be obtained from the trend of the performance over time. Dots represent rehabilitation sessions carried out with the exergames.
Indeed, the score achieved in a game depends not only on the use of the hand/fingers, but also on the gamer's skill, which theoretically can improve simply by continuing to play. However, this possibility is very remote here, as the ability in the game is strictly dependent on the force or motion skills acquired. Therefore, this information is used to set the proper level of difficulty of the exergame, by increasing the range, force, or accuracy required to animate the avatar.
Overall, the platform presented here is a general approach in which sensors disappear inside objects and make the objects smart, as they can measure some relevant aspect of everyday life in a real setting. Such smart objects are the basis of what can be classified as a telehealth rehabilitation service, in which the toys developed are integrated inside a cloud-based system that provides the analysis of rehabilitation data and relevant information to clinicians [62].
The platform can acquire a huge amount of data from a large population of patients. This enables machine learning algorithms to browse the data and extract meaningful information at the population level. One of the most promising directions is the identification of homogeneous clusters of patients for whom a tailored rehabilitation program can be defined and the frequency of recall visits can be optimized. The same data can allow the comparison of different rehabilitation programs to identify their strong and weak elements.
The large population could also be the basis for creating a virtual community of patients and their parents. This can, on one side, support gamification to boost child motivation in performing the exercises (e.g., virtual medals, points, leader boards, rankings, and so forth, cf. [63]); on the other side, it connects all the stakeholders and provides them with a place to exchange information, data, and suggestions, and where they can find continuous support from peers.
Additional work can be spent in trying to identify automatically when the child has made the right movement. In fact, children tend to use either their intact fingers or their intact hand to perform the movements required by the exergames; presently, their parents correct them so that they perform the right movement. Automatic detection could be achieved, for instance, by analyzing the pressure or movement time course with adequate classification algorithms. Finally, although the hardware is relatively simple and cheap (at present less than 100 Euros), its cost should be further reduced to enable massive diffusion.
Conclusions
We show here how, by combining electronics with functional and emotional design, a sensor that can be used for clinical purposes can be realized such that it appears as an attractive toy to its young users. This allows the prolonged, natural use that is the goal of such smart objects, and opens the way to increased cross-fertilization between mechanics, computer science, electronics, and design to produce objects of everyday use with transparent sensing capabilities. Such an approach also allows combining telerehabilitation and telemonitoring into a single platform, thus making the system particularly suitable for autonomous use at home.
Acknowledgments:
The authors wish to thank the anonymous reviewers for their comments and criticisms, which have allowed us to greatly improve the paper.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Amplification Design
Force is sensed through a load cell here. This is typically constituted of a block of steel or aluminum inside which a section less resistant to deformation is machined, for instance by creating holes inside the block, as can be clearly seen in Figure 2d. An applied force induces a strain in the block that deforms mainly the less resistant section. At the top and bottom of this section, strain gauges in the form of foil resistors are applied to sense the strain (Figure A1). To minimize resistance variations due to temperature, the resistors are arranged as a full Wheatstone bridge [64]. This produces a differential voltage proportional to the overall strain [65].
The output voltage of the load cell is in the order of a few millivolts, and amplification is required to produce a voltage inside the range that can be sampled reliably by a microcontroller. This is usually accomplished by an instrumentation amplifier [66] that combines high input impedance, high differential gain, high common-mode and power-supply rejection ratio thanks to matched transistors, and low drift and offset [67]. We have adopted here the widely used INA125P amplifier (Texas Instruments) [68] because of its low power consumption and its high linearity over the full measurement range. Therefore, a linear model can be used to describe the sensor/amplifier behavior, as classically adopted in the sensors domain [69]:

V_out = G · x + H,   (A1)

where V_out is the amplified output voltage, proportional to the weight applied on the load cell, x is the load cell output voltage, G is the amplifier gain and H its offset. G can be set in the range of 4 to 10,000 through an external resistor, R_G (Figure A1) [66,68]:

G = 4 + 60 kΩ / R_G.   (A2)

To determine G and H in Equation (A1), an iterative procedure inspired by [70] has been adopted here. When the differential amplifier input voltage is bounded to be only positive (V_in+ ≥ V_in−), like in this case, particular attention has to be paid to the input common-mode range [67]. One possibility is to polarize V_in+ and V_in− such that the minimum input voltage stays above 1 V. However, this might introduce noise on the measured voltage unless an extremely precise and stable reference, equal at both terminals, is applied. For this reason, we have preferred to bias IA_REF to provide a minimum voltage to the output, V_min, equal to 0.3 V, such that the amplifier's transistors are on and operate in their linear region [70]. To achieve this, we have introduced an external variable resistor, R_T, trimmed as follows.
We first set R_G (a 500 Ω trimmed resistor) to its maximum value to obtain the minimum possible amplification. Then, starting from the nominal value suggested by Equation (A2), we set R_T such that V_a ≈ 0.5 V and V_out ≈ V_min when no load is applied on the load cell. We then adjust R_G (decreasing its value) to obtain a V_out equal to the full measuring range of the Analog-to-Digital Converter (ADC) of the microcontroller at full load, V_ref, equal to 2.5 V in our case. To this aim, the load cell is loaded with the desired maximum weight (F_Max), and R_G is adjusted until V_out reaches V_ref. We then measure V_out again with the unloaded cell; it will generally differ from the V_min measured before because of the larger IA_REF introduced by R_G. We tune R_T again to bring V_out back to V_min, and R_G such that the output reaches V_ref at F_Max. This procedure converges in a few iterations. The final value of V_out, with the sensor unloaded, represents the H parameter in Equation (A1). This procedure has been carried out with the V_out pin connected to the microcontroller ADC pin to provide the true load impedance, although, due to the very high impedance of the ADC input, no appreciable change on R_T is seen with or without the connection.
Figure A1. The circuit arrangement of the instrumentation amplifier INA125: it is connected to the load cell full bridge on the left end, and to the microcontroller analog input on the right end. The trimmed resistor R_G determines the gain; it has been set here to 30 Ω for the 3 kg range. The trimmed resistor R_T provides a polarization of the output to keep the amplifier inside its linear region.
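As a numeric check of Equation (A2), using the value R_G = 30 Ω quoted in the caption of Figure A1 for the 3 kg range:

$$G = 4 + \frac{60\ \mathrm{k\Omega}}{R_G} = 4 + \frac{60\,000\ \Omega}{30\ \Omega} = 2004,$$

so a full-bridge output of roughly 1.25 mV (2.5 V / 2004) is amplified to about 2.5 V, i.e., the full ADC range V_ref.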
Appendix B. Characterization of Force Measurements
We have tested experimentally the linearity of the output signal by measuring the output voltage of the amplifier against a set of known weights placed on the sensor. The upper limit of the measurement range, F_Max, was enlarged by about 50 g above the nominal range to avoid ceiling effects (F_Max = 3050 g and F_Max = 5050 g, respectively). The ADC resolution is:

res = F_Max / (1023 − H),   (A3)

where 1023 is the full scale of the microcontroller's 10-bit ADC and H represents the offset in Equation (A1), determined experimentally for the two range levels as H = 102 (for the 3 kg range) and H = 113.32 (for the 5 kg range). From this, it follows that the resolution is 3.31 g for the 3 kg range and 5.55 g for the 5 kg range. These values are well beyond the minimum resolution required for rehabilitation. To convert the ADC output into pressure values, we need to determine the offset and gain of the ADC [68]. To this aim, we have measured a set of known weights distributed over the whole measurement range, with one weight being just below the upper end of the range to avoid saturation (Figure A2). We have taken ten measurements for each weight, and we report their average in Figure A2. The least-squares linear regression is computed and plotted. As expected, the line slope is larger for the wider range (5 kg). The plots show a high degree of linearity over the whole range, with an RMS error of 3.25 g and 1.80 g for the 3 kg and 5 kg ranges, respectively (Tables A1 and A2). These values are below the ADC resolution. For the sake of clarity, we have excluded from the regression the last value of the two tables because of possible saturation; in fact, the maximum weight (3 kg and 5 kg, respectively) was slightly underestimated, which might be attributed to saturation of the differential amplifier. However, this error being well below the resolution required by the therapists, we have not investigated this issue further.
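The conversion implied by Equation (A3) can be sketched as follows; in practice, the slope and offset come from the least-squares regression of Figure A2, and the function name here is our own illustration.

```cpp
// Sketch of the count-to-grams conversion implied by Equation (A3):
// the ADC reads H counts at zero load and full scale (1023) at F_Max,
// so each count is worth F_Max / (1023 - H) grams.
float countsToGrams(int counts, float H, float fMaxG) {
  float resolution = fMaxG / (1023.0f - H);   // grams per ADC count
  return (counts - H) * resolution;
}

// Example: for the 3 kg range (F_Max = 3050 g, H = 102), the resolution
// is 3050 / 921 = 3.31 g, matching the value reported in the text.
```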
Figure A2. Linearity test: objects of known weight (inside the 3 kg range, left; inside the 5 kg range, right) were placed statically over the sensor. The read-out of the sensor was collected through the micro-controller and sent to the host for analysis. Each value corresponds to a measurement averaged over 10 samples. The linear regressions obtained (R² = 0.99998231 for 3 kg and R² = 0.99999035 for 5 kg) represent a highly linear response.
The absence of hysteresis has been assessed through test-retest [71]. To this aim, we have measured the same set of known weights used before, going from the smallest to the largest and then from the largest to the smallest, for both the 3 kg and 5 kg ranges. Each time, the current weight was removed from above the load cell, then the next weight was placed over the cell and its weight acquired. No parameter was tuned between each pair of samplings. The average absolute difference between the two measurements was 3.16 g, and the RMSE of the second acquisition (3.11 g) was similar to the RMSE of the first acquisition (3.25 g). All these values are below the measurement resolution of 3.31 g and 5.55 g, respectively (Table A3).
|
v3-fos-license
|
2018-12-31T21:19:33.622Z
|
2018-03-19T00:00:00.000
|
59319480
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.br/pdf/jmoea/v17n1/2179-1074-jmoea-17-01-0134.pdf",
"pdf_hash": "10b7bbe9f4451a12d2a85af260357578125d33c1",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43507",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "10b7bbe9f4451a12d2a85af260357578125d33c1",
"year": 2018
}
|
pes2o/s2orc
|
Sharp Rejection and Wide Stopband Microstrip Lowpass Filters using Complementary Split Ring Resonators
Chebyshev microstrip lowpass filters with improved performance, achieved by means of circular complementary split ring resonators (CSRR), are presented. CSRR particles exhibit frequency rejection bandwidths in the vicinity of their resonant frequencies that can be used to improve both selectivity and stopband performance in microstrip lowpass filters. Two configurations have been used: a stepped-impedance model and a configuration using open-circuited stubs. Microstrip filters having 5, 7 and 9 poles were designed and fabricated. Selectivity values up to 86 dB/GHz and suppression levels reaching 60 dB in the stopband were obtained. The filters have a cutoff frequency around 2 GHz and a rejection band extending up to 10 GHz. The insertion of the designed CSRR resonators in the ground plane of the filters removes all the spurious transmission bands observed in the analyzed frequency range. No considerable variation in the passband group delay is observed in the CSRR-based lowpass filters. A comparison is made among the filters designed in this and other referenced works, considering their number of poles and size. The measured results are in good agreement with the simulated ones. The presented filters can be candidates for applications using the VHF, UHF and L bands, being also very effective in the rejection of the S and C bands. Applications that demand the rejection of the first half of the X band can also use the filters detailed in this paper.
I. INTRODUCTION
Filters are important elements in many radiofrequency/microwave applications. Wireless communications, for instance, continue to impose strict filter design requirements, demanding lightweight structures having high performance and reduced dimensions and cost [1]. An alternative for planar structures is to fabricate them using microstrip technology. Improvements in performance for microstrip lowpass filters can be obtained using higher-degree structures [2].
The use of complementary split ring resonators (CSRR), first presented by Falcone et al. [3], is an alternative that can be applied to both configurations.
II. MICROSTRIP LOWPASS FILTERS: DESIGN AND ANALYSIS
In Fig. 1, the lowpass filter configurations used in this paper are shown. Two configurations have been chosen. The first one, known as stepped-impedance, uses high-impedance microstrip lines to approximate the inductors (L), whereas the capacitors (C) are built using low-impedance microstrip lines. The second configuration uses open-circuited stubs to fabricate the capacitors, maintaining the high-impedance lines to approximate the inductors [1], [2].
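For orientation, the stepped-impedance configuration determines the physical line lengths from the lumped prototype values through the well-known relations; the expressions below are our transcription of the standard textbook formulas, to be checked against [1]:

$$\beta l_L = \sin^{-1}\!\left(\frac{\omega_c L}{Z_{0L}}\right), \qquad \beta l_C = \sin^{-1}\!\left(\omega_c\, C\, Z_{0C}\right),$$

where Z_{0L} and Z_{0C} are the characteristic impedances of the high- and low-impedance lines, and L and C are the lowpass prototype element values scaled to f_c and Z_0.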
The formulations in [1] have been used to design the dimensions of the microstrip sections for each filter. The specifications for the structures are: cutoff frequency (f_c = 2 GHz), filter order n (n = 5, 7 and 9), frequency response (Chebyshev, with passband ripple = 0.01 dB), and source/load impedance Z_0 (= 50 Ω). The relevant dimensions of the CSRRs used in this paper are organized in Table II, as well as their resonant frequencies. These values have been calculated using the model in [7]. The fabricated modified ground planes are shown in Fig. 6. The CSRR resonators on the ground planes are aligned with the centers of their respective microstrip lines, except for those at the extremities, which are 7 mm away from the input and output of the structures. Table III contains the values of the 3 dB cutoff frequency (f_c), stopband frequency (f_s) and selectivity (ξ) of the filters. The calculation of ξ follows expression (1), defined in [8]:

ξ = (α_2 − α_1) / (f_s − f_c),   (1)

where α_2 and α_1 represent, respectively, the 20 dB and 3 dB attenuation points. The responses of the 7-pole CSRR-based filters are shown in Fig. 9. Filters F9 and F10 have cutoff frequencies of 1.85 and 1.63 GHz, respectively. The rejection for these filters starts at 2.12 and 2.05 GHz, respectively. The values of insertion loss in the passband for F9 and F10 are around 1.2 dB. The selectivity values of these filters are presented in Table III. The responses of the 9-pole CSRR-based filters are depicted in Fig. 10. F11 and F12 have 3 dB cutoff frequencies equal to 1.92 and 1.58 GHz, respectively. The corresponding frequency values for the 20 dB attenuation points are 2.15 and 1.87 GHz. It is worth noticing that filter F11 presents an excellent rejection band between 2.42 and 5.23 GHz, with a very high rejection level (> 50 dB). It can be observed that the insertion loss in F11 and F12 is around 1.3 dB, considering the passband.
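As a worked example of expression (1): for filter F8, whose 3 dB and 20 dB attenuation points are 1.97 GHz and 2.17 GHz (Section IV),

$$\xi = \frac{\alpha_2 - \alpha_1}{f_s - f_c} = \frac{(20 - 3)\ \mathrm{dB}}{(2.17 - 1.97)\ \mathrm{GHz}} = 85\ \mathrm{dB/GHz},$$

consistent with the selectivity of up to 86 dB/GHz quoted in the abstract (the small difference comes from rounding of the measured frequencies).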
Different from the previous filters, these 9-pole CSRR-based filters present higher levels of ripple in the passband, which can be a drawback. Table IV presents a performance comparison among some of the designed lowpass filters and other referenced works. As can be noticed, F8 has an excellent transition bandwidth of 200 MHz, and a stopband bandwidth of 4f_c. This result is quite satisfactory, considering that the procedure to design this filter is considerably easier than the approaches described, for instance, to design the filters presented in [5] and [9]-[13]. A comparison among the sizes of the designed filters and some references is also shown. The parameter λ_g, in Table IV, is the guided wavelength of a 50 Ω microstrip line at the cutoff frequency of the filter, as stated in [5].
Finally, simulated and measured group delay results are shown in Fig. 11. Considering the filters F11 and F8, which have the best performances (see Table IV), it can be seen that the experimental group delay in the passband (< 2 GHz) for both filters has no considerable variation, maintaining values up to 5 ns. The differences observed between the simulated and measured group delay curves can be interpreted, firstly, as an outcome of the soldering losses, present only in the fabricated filters.
Secondly, other losses, such as those introduced by the connectors and cables, are also present only in the experimental results. Some disturbances in the group delay curves are seen around the resonance frequencies of the added CSRR resonators.
Fig. 4. Configuration and relevant dimensions of a circular CSRR. The metal parts are depicted in grey.

Fig. 5. CSRR-based microstrip lowpass filters. The microstrip sections of the filters are depicted in black and the metallization of the ground plane is depicted in grey.

Table II. CSRR dimensions (radius, c and d, in mm) and resonant frequencies f_0 (GHz) for the filters depicted in Fig. 5.

Fig. 7. Design procedure used to obtain the CSRR-based microstrip lowpass filters.

IV. RESULTS

Simulated and measured S11 and S21 results (in dB) of the designed CSRR-based lowpass filters are shown in Fig. 8, Fig. 9 and Fig. 10. A good agreement is observed between the simulated and experimental responses. The results were analyzed considering the experimental values. A comparison of the CSRR-based 5-pole microstrip lowpass filter results is shown in Fig. 8. According to Fig. 8(a), filter F7 has a 3 dB cutoff frequency of 1.95 GHz. The attenuation of this filter drops to 20 dB at 2.44 GHz. The 3 dB cutoff frequency of filter F8, whose response is shown in Fig. 8(b), is 1.97 GHz. The rejection of this filter occurs at 2.17 GHz. Consequently, F8 is a more selective filter than F7. Both filters have insertion loss values in the passband around 1.1 dB.

Fig. 10. CSRR-based 9-pole microstrip filter S parameters, simulated and measured: (a) F11: stepped-impedance configuration and (b) F12: open-circuited stubs configuration.

Comparing the results of the CSRR-based microstrip lowpass filters with the traditional ones (see Table III and Figs. 8-10), it can be affirmed that the 5-pole CSRR-based filters present a performance comparable (F7) or much superior (F8) to the best results obtained by the traditional filters (F5 and F6), which are 9-pole structures. This leads to a filter with an area reduction of approximately 27%. The spurious transmission bands of the conventional microstrip lowpass filters (see Fig. 3) were all removed due to the insertion of the designed CSRR resonators.

Table I. Dimensions (in mm) of the microstrip sections of the filters.

Table III. Cutoff frequency, stopband frequency and selectivity of the lowpass filters.
|
v3-fos-license
|
2020-11-11T14:08:25.764Z
|
2020-11-01T00:00:00.000
|
226296664
|
{
"extfieldsofstudy": [
"Medicine",
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jamanetwork.com/journals/jamanetworkopen/articlepdf/2772733/yasenov_2020_ld_200171_1604346447.05466.pdf",
"pdf_hash": "a40ef9e76ae520b612cee922e92ed9678a40475a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43508",
"s2fieldsofstudy": [
"Political Science",
"Medicine"
],
"sha1": "9cf5198d8a7b4d63372fb2c0550a3126b7a7f57d",
"year": 2020
}
|
pes2o/s2orc
|
Association Between Health Care Utilization and Immigration Enforcement Events in San Francisco
This cohort study explores whether an inclusive, local health care system could serve as a buffer against adverse effects on access to health care due to actions related to immigration status in patients who are likely undocumented.
Introduction
Researchers have documented a negative association between immigration enforcement and health care utilization among immigrants 1 and expressed concern about decreased utilization after the 2016 US presidential election. 2, 3 We explored whether an inclusive, local health care system in San Francisco acts as a buffer against adverse utilization effects of enforcement and related political events among patients who likely have undocumented immigration status.
Methods
Data for this cohort study came from a single large, integrated health system that provides services to patient members of Healthy San Francisco (HSF), a health care program that provides access to a broad array of health care services to adults unable to access other public insurance options. 4 San Francisco Health Network includes primary and specialty clinics and a hospital and trauma center and serves as the medical home for most patient members of HSF. 4 After California's Medi-Cal expansion took effect, immigration status was the primary reason HSF members were ineligible for other types of insurance. 4 Individuals with undocumented immigration status are generally excluded from public health insurance programs such as Medi-Cal.
We used participation in HSF as a proxy for adults' immigration status. 5 For analyses of adults, the 2 groups we expected would be most affected were (1) all patients who had all encounters billed to HSF (HSF always) and (2) Hispanic patients who had at least 1 encounter billed to HSF between November 1, 2015, and March 1, 2018 (HSF ever, Hispanic). Groups we expected would be less affected or not affected were Hispanic patients and non-Hispanic patients who had encounters billed to Medi-Cal only (Medi-Cal always, Hispanic and Medi-Cal always, non-Hispanic). For analyses of pediatric patients, the group we expected to be more affected was Hispanic children and the group we expected to be less affected was non-Hispanic children.
We identified 6 periods in which actual or anticipated adverse immigration policy or enforcement events (eg, local Immigration and Customs Enforcement raids, immigration enforcement executive orders, the 2016 US presidential election) occurred at the federal or local level (Figure 1). The 3 primary outcomes were the log number of encounters in primary care clinics, urgent care, and emergency departments. We also examined preventive care visits in primary care clinics, emergency department encounters specific to ambulatory care-sensitive conditions, and pediatric patient visits across all health care settings.
Our analysis was at the week-group level covering 5 weeks before and after each event.
We also pooled all 6 events and groups to achieve statistical precision. We used a difference-in-differences design controlling for week and group fixed effects. Stata Statistical Software (release 15.1) was used for analysis. Significance was set at P < .05, and tests were 2-sided. See the eAppendix in the Supplement for additional methodological details.
Figure 1. A, Timeline of immigration-related enforcement and policy events analyzed in this article. B, Difference-in-differences estimates of the association between these events and health care utilization among patients hypothesized to be most likely affected by these events. ICE indicates Immigration and Customs Enforcement.
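For concreteness, a standard two-way fixed-effects difference-in-differences specification consistent with this description (our notation; the exact model is detailed in the eAppendix) is

$$\log(y_{gt}) = \beta\,(\mathrm{Affected}_g \times \mathrm{Post}_t) + \gamma_g + \delta_t + \varepsilon_{gt},$$

where y_{gt} is the number of encounters for group g in week t, γ_g and δ_t are group and week fixed effects, and β is the difference-in-differences estimate reported in the Results.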
Results
Among the 168 975 encounters involving 22 525 patients, 2815 patients (12.5%) were included in the HSF always group; 4627 (20.5%) in the HSF ever, Hispanic group; 5001 (22.2%) in the Medi-Cal always, Hispanic group; and 10 082 (44.8%) in the Medi-Cal always, non-Hispanic group. Plots of pre-event health care utilization suggested parallel trends before each event across groups and settings (Figure 2). In pooled estimates that compared outcomes for the groups likely to be most affected with outcomes for the less affected groups across all events, there were no significant associations between immigration events and utilization of primary care (difference-in-differences estimate, −0.008; 95% CI, −0.07 to 0.05), urgent care (difference-in-differences estimate, −0.024; 95% CI, −0.17 to 0.12), or the emergency department (difference-in-differences estimate, 0.11; 95% CI, −0.08 to 0.30) (Figure 1).
Discussion
Prior research has documented an association between decreased health care utilization and immigration enforcement in Alabama and Arizona. 1,2 We did not find systematic evidence of an association between enforcement events and changes in utilization among patients with potentially
|
v3-fos-license
|
2016-05-18T17:57:38.602Z
|
2013-09-27T00:00:00.000
|
16324895
|
{
"extfieldsofstudy": [
"Medicine",
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/srep02785.pdf",
"pdf_hash": "84d22154efb80a80ce0580c373cb8474529e8a02",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43512",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"sha1": "84d22154efb80a80ce0580c373cb8474529e8a02",
"year": 2013
}
|
pes2o/s2orc
|
High-resolution summer precipitation variations in the western Chinese Loess Plateau during the last glacial
We present a summer precipitation reconstruction for the last glacial (LG) on the western edge of the Chinese Loess Plateau (CLP) using a well-dated organic carbon isotopic dataset together with an independent modern process study results. Our results demonstrate that summer precipitation variations in the CLP during the LG were broadly correlated to the intensity of the Asian summer monsoon (ASM) as recorded by stalagmite oxygen isotopes from southern China. During the last deglaciation, the onset of the increase in temperatures at high latitudes in the Northern Hemisphere and decline in the intensity of the East Asia winter monsoon in mid latitudes was earlier than the increase in ASM intensity and our reconstructed summer precipitation in the western CLP. Quantitative reconstruction of a single paleoclimatic factor provides new insights and opportunities for further understanding of the paleoclimatic variations in monsoonal East Asia and their relation to the global climatic system.
Figure S1. The negative correlation between MAP and the δ13C of modern C3 plants. (a) Averaged values for 15 sites in the loess region of north China (Wang et al., 2003), the total number of data points being 367; (b) averaged values for 7 sites from the central CLP (Zheng and Shangguan, 2007), the total number of data points being 121; the number of data points for each site is shown by Arabic numerals.
Please note the different correlation coefficients.
Figure S2
The detailed results of the negative correlation between MAP and the δ13C of different modern C3 species. (a) 3 modern C3 species from northwest China (Liu et al., 2005a); (b) 4 modern C3 species from northwest China (Wang and Han, 2001a). Please note the differences in the linear relations for different species indicated by different colours.
Each of the 3 modern C3 species (Stipa bungeana, Lespedeza sp. and Heteropappus less) in northwest China displays a completely different linear negative correlation between δ13C and local MAP, the only significant correlation being that of Stipa bungeana (Liu et al., 2005a; a in Fig. S2). Similar results come from 4 modern C3 species (Plantago depressa, Lepidium apetalum, Chenopodium album and Cirsium leo), also in northwest China (Wang and Han, 2001a; b in Fig. S2), only the correlation with Plantago depressa being insignificant. In a specific location near Baiyin City (also close to the Jingyuan loess profile mentioned in this paper) in northwest China, 7 modern C3 species were sampled during late June and middle July, 1999. The corresponding weather data are shown in Table S1, and the δ13C results are shown in Fig. S3. The results clearly indicate the negative responses of δ13C in modern C3 plants to increasing precipitation and the different sensitivities of the δ13C in different C3 species to identical variation in precipitation (Wang and Han, 2001b).
Table S1
Summer weather data for Baiyin city in 1999 (cited from Wang and Han, 2001b; please note the much higher rainfall in July, and the similar temperature and solar radiation values in June, July and August).
                     June    July   August
Rainfall (mm)        32.6    154    13.8
Temperature (°C)     20.2    21.2   21.7
Solar radiation (h)  234.6   255    251

Figure S3
Comparative results of the δ13C of 7 modern C3 species sampled in late June and middle July in 1999 near Baiyin City (modified from Wang and Han, 2001b).
Figure S4 Variations in δ13C TOC in the LC, LA, BJ, WN, JX, YS (Gu et al., 2003), HX, XF, LT (Liu et al., 2005b), and JD (Vidic and Montañez, 2004) loess profiles since the last glacial. All the results indicate an increase in C4 relative abundance from the last glacial to the Holocene, with more positive δ13C TOC values occurring in Holocene paleosol (S0) layers. (The locations of these profiles are shown in Fig. 1, and the codes for the loess profiles are identical to those in Fig. 1.)
Towards the northwest CLP, a decrease in the relative abundance of C4 plants would be expected. Comparison of the loess δ13C TOC data in the 3 profiles (Weinan, Lingtai and Huanxian, located in the CLP along a transect from southeast to northwest, respectively) clearly demonstrates the decrease of C4 relative abundance northwestward during both the last glacial and the Holocene, with the δ13C TOC data of both the last glacial loess layer and the Holocene paleosol layer becoming increasingly negative towards the northwest (Fig. S5). Consistent with the decreasing trend of C4 relative abundance in the CLP towards the northwest during both the last glacial and the Holocene, the loess δ13C TOC data from the Yuanbao (YB) profile located in the westernmost CLP indicate that the local terrestrial vegetation during the last glacial was dominated by C3 plants with only a negligible C4 contribution (Rao et al., 2005; Chen et al., 2006). Similarly, the loess δ13C TOC data from the Jingyuan (JY) profile located in the northwesternmost CLP indicate that the local terrestrial vegetation since the last glacial was dominated by C3 plants with only a negligible C4 contribution (Liu et al., 2011; Fig. S6).
Figure S5
Variations in loess δ13C TOC in the WN, LA (Gu et al., 2003) and HX (Liu et al., 2005b) profiles along a spatial gradient. The results indicate that the relative abundance of C4 plants decreased from southeast to northwest in the CLP during both the Holocene and the last glacial, with δ13C TOC data in both the Holocene paleosol (S0) and last glacial loess (L1) layers decreasing gradually towards the northwest.
Figure S6 Comparison of δ13C TOC records from the JY profile (a) located in the northwesternmost CLP (Liu et al., 2011), the YB profile (b) located in the westernmost CLP (this study), and the LT profile (c) located in the southernmost CLP (Liu et al., 2005b). See Figure 1 for site locations. The age series of the JY profile is a linear interpolation of OSL data from Sun et al., 2010, 2012; the YB profile series is a linear interpolation of OSL data from Lai and Wintle, 2006 and Lai et al., 2007; and the LT profile series is taken directly from Liu et al., 2005b. On a glacial/interglacial timescale, the δ13C TOC data for the Holocene paleosol at the JY site were more negative than for the last glacial loess, which is the converse of the results from the other profiles (see Figures S4 and S5). This indicates that the δ13C TOC data for JY do not record variations in C3/C4 relative abundance but, rather, record δ13C variations of C3 plants since the last glacial. During the last glacial, δ13C TOC data for JY and YB were more negative in the weakly developed paleosol layer formed during MIS3 than in the loess layers accumulated during MIS2 and MIS4, which is the converse of the results from the LT profile. This indicates that the terrestrial vegetation at the JY and YB sites during the last glacial was dominated by, or composed entirely of, C3 plants. The overall trend of the loess δ13C TOC data is shown by bold light blue arrows for clear comparison.
The loess δ13C TOC data from the YB profile during the early Holocene are emphasized by the comparative use of red dots indicating the LT profile data and the apparent C4 contribution.
Clearly, the results shown above demonstrate that the local terrestrial vegetation during the last glacial at the YB site, and since the last glacial at the JY site, was dominated by C3 plants with only a negligible C4 contribution. In other words, the loess δ13C TOC data of the last glacial at the YB site and since the last glacial at the JY site can be used for paleoprecipitation reconstruction by way of the modern relation between the δ13C data of C3 plants and precipitation.
Part 3, modern summer monsoon limit in East Asia
As shown in Figure S7, the YB and JY profiles are located in the frontier area of the modern Asian summer monsoon.
Figure S7
Locations of the YB and JY loess/paleosol profiles and the Hulu cave in southern China.
The yellow arrow shows the approximate winter monsoon path; the brown arrow shows the westerlies and the blue arrows show the East Asian and Indian summer monsoon paths. The dashed red line indicates the modern Asian summer monsoon limit (Chen et al., 2008). Clearly, the YB and JY profiles are located in the frontier region of the modern Asian summer monsoon. The original map was generated by ESRI ArcGIS (v9.1); for the source of the original data for this map please refer to Amante and Eakins, 2009.

Considering that the intensity of the summer monsoon gradually decreased during the late Holocene, as shown by a recently reported stalagmite oxygen isotopic record from Sanbao Cave (Fig. S8; Dong et al., 2010), it seems that the intensity of the most recent summer monsoon is very close to that during the Younger Dryas event (ca. 12 ka B.P.). However, there remains a significant distance between the modern summer monsoon limit and the locations of the JY and YB sites, especially the latter (more than 250 km), so it seems likely that the summer monsoon was also the major source of summer precipitation at the JY and YB sites during the last glacial.
Figure S8
Stalagmite oxygen isotopic record for the last ca. 14,000 years from the Sanbao Cave located in central China (Dong et al., 2010). Different colors represent data from different stalagmite samples. Note the horizontal grey bar, which provides a comparison of the most recent data with that of the YD event.
Part 4, surface soil δ13C results from arid central Asia

As mentioned above in Supplementary Part 1, the relation between the δ13C values of modern C3 plants and precipitation cannot be used directly for paleoprecipitation reconstruction. Therefore, the relation between surface soil δ13C values and precipitation from an area with a full vegetation cover dominated by C3 plants is a better choice as a modern reference for paleoprecipitation reconstruction.
Given that such surface soil δ13C values can represent the carbon isotopic signal of the overlying vegetation at an ecosystem level, the influence of the different sensitivities of different C3 plants to variations in precipitation can be largely avoided.
In arid central Asia, the δ13C values of 196 surface soil samples collected along a south-to-north transect were measured and reported (Lee et al., 2005; Feng et al., 2008; Fig. S9).
Owing to the proximity of our profiles (YB and JY) to the surface soil transect (Fig. 1) and the considerable spatial gradient of the surface soil transect (making it highly representative), we chose the relation between these surface soil δ13C values and precipitation as the modern reference for paleoprecipitation reconstruction.
It has been widely recognized that temperature is the most important climatic factor controlling the growth of C4 plants (Long, 1983; Rao et al., 2012). An investigation of modern plants on Gengga Mountain in southwestern China indicated that almost no C4 plants have been observed above an altitude of ca. 2100 m, with a MAT of 9.4°C and a summer temperature of 15.3°C (Li et al., 2009). A similar investigation on Lingshan Mountain near Beijing city in north China demonstrated that almost no C4 plants have been observed above an altitude of ca. 1800 m (Wang et al., 2010). According to the results of Long, C4 plants are extremely rare in areas with summer temperatures lower than 16°C (Long, 1983). In comparative analyses of surface soil δ13C values at a continental scale, from eastern China to Australia and the Great Plains of North America, a MAT of ca. 12°C has been found to be the "threshold temperature" for the growth of C4 plants (Rao et al., 2010).

Figure S9
Relations between surface soil δ13C TOC data from arid central Asia and precipitation and temperature. (a) δ13C TOC in 196 modern surface soils plotted against MAP (Feng et al., 2008); (b) δ13C TOC in 196 modern surface soils plotted against MAT (Feng et al., 2008); a line is not shown because a linear relation is not apparent; (c) averaged δ13C TOC values of 19 sites (solid blue dots) plotted against summer precipitation (Lee et al., 2005); the linear negative correlation is represented by the purple line, and the black lines represent the 95% confidence interval of the linear correlation; (d) averaged δ13C TOC values of 19 sites plotted against summer temperature (Lee et al., 2005). Horizontal and vertical light blue bars in (c) and (d) represent the 1σ standard deviations of the averaged surface soil δ13C TOC values and the corresponding averaged climatic data (summer precipitation and temperature), respectively. See Fig. 1 for the distribution of the surface soils. The quantitative relation between summer precipitation and averaged surface soil δ13C TOC data in (c) has been used as the modern reference for summer precipitation reconstruction in this work, with the assumption that the loess δ13C TOC data were systematically 1‰ more positive after long-term decomposition, and the 95% confidence interval of the linear correlation in (c) has been used to estimate the uncertainties of the summer precipitation reconstruction.

Although we cannot completely preclude a contribution from C4 plants in the δ13C dataset from arid central Asia, considering the significant influence of temperature on the growth of C4 plants and the distribution of the δ13C dataset along MAT in arid central Asia (b in Fig. S9), we firmly believe that the relative abundance of C4 plants in the local biomass, or the contribution of C4 plants to the surface soil δ13C data, is extremely limited, and therefore negligible.
It should also be noted that our survey in the Linxia Basin, in which the YB site is located (MAT of 6.8°C), demonstrates that there are only a few C4 species, mainly around croplands and residential areas. Therefore, considering the lower temperature during the last glacial, it is reasonable to conclude that the contribution by C4 plants to the YB loess δ13C during the last glacial was negligible.
There seems to be an exponential relation between surface soil δ13C values and MAT in arid central Asia (b in Fig. S9). However, the relatively positive surface soil δ13C values around a MAT of ca. 2.5°C mainly range from -20‰ to -24‰. The corresponding MAP for these values mainly ranges from 100 mm to 300 mm (a in Fig. S9), falling into the climatic range of desert and Gobi in arid central Asia and located in the middle of the studied transect (Fig. 1; Lee et al., 2005; Feng et al., 2008). Therefore, the exponential relation may just reflect the significant effect of precipitation on plant δ13C values from another aspect, rather than a significant effect of temperature. More importantly, the influence of temperature on plant δ13C values is very complicated. Generally speaking, temperature can affect plant δ13C via its effect on the stomatal conductance of the leaves and the bioactivity of the photosynthetic enzymes. Temperatures that are either too low or too high will restrain the stomatal conductance of the leaves and the bioactivity of the photosynthetic enzymes. Normally, the "transform temperature" lies between 20°C and 30°C or higher. Apparently, even considering that summer temperature is higher than MAT, the observed MAT of ca. 2.5°C (the corresponding summer temperature is 8°C to 10°C; Lee et al., 2005) is too low to be treated as the "transform temperature". Therefore, the exponential relation between MAT and the surface soil δ13C values from arid central Asia (b in Fig. S9) is just presentational, not logical (Lee et al., 2005).

Therefore, the relation between the averaged surface soil δ13C values close to the 19 weather stations and the corresponding summer (May to September) precipitation recorded by the weather stations is much more stable, with constrained uncertainties (Lee et al., 2005; c in Fig. S9). In this paper, we select the quantitative relation between the averaged surface soil δ13C values and summer precipitation (c in Fig. S9) as the modern reference for summer precipitation reconstruction, with the assumption that loess δ13C TOC data were systematically 1‰ more positive after long-term decomposition. Based on the original data, including the averaged δ13C values and summer precipitation amounts from the 19 weather stations, we calculated the 95% confidence interval (CI) of the linear relation (c in Fig. S9). Also, the quantitative estimation of the summer precipitation with a prediction interval (PI) at the 95% level was calculated using this relation and our δ13C data during the last glacial derived from the YB site and since the last glacial derived from the JY site. The linear fitting, CI calculation, summer precipitation estimation, and PI calculation were all performed using the statistical package R. High variability also existed in the loess δ13C TOC data from the YB and JY profiles and the corresponding reconstructed summer precipitation, especially in the high-resolution data of the YB profile during the last glacial (Fig. S10), indicating the high variability of summer precipitation in this region. The most recent reconstructed summer precipitation for the JY site is ca. 150 mm with an uncertainty of ca. 70 mm, close to the modern averaged summer precipitation at the Jingyuan station of ca. 190 mm (1961~1990). The Holocene loess δ13C TOC data in the YB profile apparently contain a C4 signal, as shown in Fig. S6, especially during the early Holocene, which is why we abandoned the summer precipitation reconstruction of the entire Holocene at the YB site.
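A minimal sketch of this fitting-and-interval workflow (shown here in Python rather than the R used by the authors), with placeholder numbers standing in for the published 19-station dataset:

```python
# Minimal sketch of the linear fit, 95% CI, and 95% PI computation described
# above. The reference data below are synthetic placeholders, not the
# published surface-soil dataset; only the workflow mirrors the text.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 19-station modern reference: averaged
# surface-soil d13C (permil, VPDB) vs summer (May-Sep) precipitation (mm).
d13c = np.linspace(-26.5, -21.0, 19)
modern = pd.DataFrame({
    "d13c": d13c,
    "summer_precip": 420.0 - 60.0 * (d13c - d13c.min()) + rng.normal(0.0, 12.0, 19),
})

fit = smf.ols("summer_precip ~ d13c", data=modern).fit()

# Loess d13C is assumed to be systematically 1 permil more positive after
# long-term decomposition, so 1 permil is subtracted before prediction.
loess = pd.DataFrame({"d13c": np.array([-26.0, -24.3, -23.1]) - 1.0})

pred = fit.get_prediction(loess).summary_frame(alpha=0.05)
# mean_ci_lower/upper: 95% CI of the fitted relation;
# obs_ci_lower/upper: 95% prediction interval of each reconstructed value.
print(pred[["mean", "mean_ci_lower", "mean_ci_upper",
            "obs_ci_lower", "obs_ci_upper"]])
```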
The loess δ13C TOC data for the topmost 2 samples are -28‰ and -27.5‰, respectively (Table S2). If these two values are used to calculate the summer precipitation, results of ca. 415 mm and 386 mm, respectively, are obtained, also with uncertainties of ca. 70 mm, which is very close to the modern averaged summer precipitation at the Linxia station of ca. 400 mm (1961~1990; Table S3).
When the most recent calculated summer precipitation is compared with the corresponding modern averaged value, the relatively greater difference for the JY profile apparently results from its relatively positive loess δ13C TOC data, which fall at the most positive end of the surface soil δ13C TOC values in arid central Asia (Fig. S9), thus raising the uncertainty. All this evidence validates our summer precipitation reconstruction method.
A detailed comparison of the reconstructed summer precipitation of the YB and JY profiles along their age sequences is impossible because the data resolution of the JY profile is too low. However, both datasets show higher summer precipitation from 30 ka to 60 ka (marine isotope stage 3, MIS3), followed by a decrease towards MIS2 (Fig. S11).
Part 6, refined age-model of the YB profile
The relatively large errors of the OSL data from the YB profile, especially those during the last glacial (as shown in Fig. 2 in the main text), preclude the comparison of high-resolution YB records with other records. For a long time (e.g. Porter and An, 1995), grain size data in the Chinese loess have been related to temperature variations at high latitudes in the northern hemisphere by way of variations in the intensity of the East Asian winter monsoon and the vigor of the westerly winds. During cold phases such as the Heinrich events, the enhanced winter monsoon and the northern hemisphere westerlies transported more coarse dust grains to the CLP, especially to its western sector because of its proximity to the deserts (Chen et al., 1997; Sun et al., 2010, 2012). This relation allows us to transfer the ice core ages from high latitudes in the northern hemisphere to the loess profile in the western CLP. The grain size data in the YB profile (> 40 μm, %, mainly reflecting the intensity of the winter monsoon) have been compared with the oxygen isotopic record of NGRIP (NGRIP, 2004; indicating the temperature variations at high latitudes in the northern hemisphere) as a means of selecting appropriate age control points, mainly dependent on the cold events (Fig. S12). After that, the transferred age series in the YB profile was obtained by linear interpolation of the selected age control points. The results indicate that, between ca. 10 and 20 ka, the OSL data are generally consistent with the transferred NGRIP ages; between ca. 20 and 60 ka, it seems that the OSL data are systematically younger than the transferred NGRIP ages, with an average offset of ca. 4~5 ka (Fig. S13). This comparison further confirms the validity of transferring the NGRIP ice core ages to the YB loess profile based on comparison of the loess grain size and the NGRIP oxygen isotopic data (NGRIP, 2004). Although the transferred age series is not exact, considering the widely accepted control mechanism of Chinese loess grain size, we chose the transferred age series from the NGRIP ice core as the final age model of the YB profile for comparing the YB records with others (Fig. 3 in main text).

Figure S12
Comparison of high-resolution grain size data (blue lines) from the YB profile plotted against depth and oxygen isotopic data (red lines) from the NGRIP ice core (NGRIP, 2004) plotted against the age series of GICC05 (Svensson et al., 2008). Based on the comparison, 9 age control points (represented by the vertical grey bars) were selected in order to match the NGRIP age series to the YB profile by interpolation. Arabic numerals indicate interstadial events. YD is the Younger Dryas event and H1-H6 marks the Heinrich events. Please note the reversed grain size scale.
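A minimal sketch of the age-transfer step, linear interpolation between matched depth-age control points; the five tie points below are illustrative placeholders, not the nine control points actually selected:

```python
# Linear interpolation of transferred NGRIP ages between depth control points.
import numpy as np

control_depth_m = np.array([1.0, 4.5, 8.0, 12.5, 17.0])    # YB profile depths (hypothetical)
control_age_ka  = np.array([10.5, 16.0, 24.0, 39.0, 60.0])  # matched GICC05 ages (hypothetical)

sample_depth_m = np.linspace(1.0, 17.0, 100)  # depths of the measured samples
sample_age_ka = np.interp(sample_depth_m, control_depth_m, control_age_ka)
print(sample_age_ka[:5])
```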
Figure S13 Comparison of the OSL data (solid red and purple dots with error bars; Lai and Wintle, 2006; Lai et al., 2007) and the selected 9 age control points (solid blue triangles) transferred from NGRIP, plotted against depth in the YB profile. The purple line is based on the part of the OSL data represented by solid purple dots, to show the approximately systematic difference between the OSL data and the NGRIP data between 20 ka and 60 ka.
Part 7, original data of the YB profile
For an intuitive presentation, the original data and reconstructed summer precipitation from the YB profile are plotted against depth and against the age series, as shown in Figs. S14 and S15, respectively. Correspondingly, all relevant data from the YB profile are shown in Table S2.
Figure S14 The original data from the YB profile plotted against depth. Red series are grain size data (>40 μm, %); blue series are magnetic susceptibility (SI units); purple series are loess δ13C TOC data (‰, VPDB); black dots with error bars are the OSL dating results (same as in Fig. S13); calculated summer precipitation is shown by solid red dots, with uncertainties represented by light blue vertical bars.
Figure S15
The original data from the YB profile plotted against the age series transferred from the NGRIP ice core. Red series are grain size data (>40 μm, %); blue series are magnetic susceptibility (SI units); purple series are loess δ13C TOC data (‰, VPDB); calculated summer precipitation is shown by solid red dots, with uncertainties represented by light blue vertical bars.
Part 8, climate data from Jingyuan and Linxia (1961~1990)
The climate data from the Jingyuan and Linxia (close to the YB profile) stations, including the monthly precipitation and temperature data from 1961 to 1990, are shown in Table S3.
|
v3-fos-license
|
2016-01-09T01:06:55.066Z
|
2010-07-29T00:00:00.000
|
11272028
|
{
"extfieldsofstudy": [
"Computer Science",
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=2294",
"pdf_hash": "e7a900da1f611771924a59b6445458a45ac16336",
"pdf_src": "Crawler",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43513",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"sha1": "e7a900da1f611771924a59b6445458a45ac16336",
"year": 2010
}
|
pes2o/s2orc
|
Neotectonic Evidences of Rejuvenation in Kaurik-Chango Fault Zone, Northwestern Himalaya
Neotectonic investigations using morphotectonic parameters such as basin asymmetry, drainage anomalies, digital data interpretation and geomorphic field evidence were carried out in the Satluj river valley downstream of Khab in the Kaurik-Chango Fault (KCF) zone. The study reveals the presence of a north-south trending fault which is similar to the KCF. Unpaired and tilted terraces, V-shaped valleys, deep gorges and lakes are manifestations of fault movement in the area. The presence of deformation structures preserved in the palaeolake profile at Morang indicates that the area has also been seismically active in the past. In this paper we present a conceptual model of the formation of lakes in the KCF zone. Morphometric analysis was carried out with the help of Digital Elevation Models (DEMs) and field investigations.
Introduction
The Himalayan mountain range was created as a result of the collision of the Indian and the Asian plates. The ensuing tectonic turmoil is witnessed in the form of intra-continental deformation along major faults and thrusts [1-4]. The Himalayan region is dissected by several NW-SE trending regional thrusts, namely the Indo-Tsangpo Suture Zone, the Main Central Thrust and the Main Boundary Thrust. These thrust planes and their subsidiary fault systems are the foci of several devastating earthquakes. In the Satluj-Spiti river valleys, a number of N-S trending faults have disturbed the Precambrian-Palaeozoic succession of the Tethys Himalaya [5-8]. Kinnaur lies in the Higher Himalayan region between the Main Central Thrust (MCT) and the Indo-Tsangpo Suture Zone (ITSZ).
The expression of active tectonism in Kinnaur is reflected in tilted terraces, V-shaped valleys, convex slopes, rampant landslides and gorges. Kinnaur is a seismically active segment of the Himalaya. A major earthquake of magnitude > 6.8 occurred in this region in 1975 [9]. The region exhibits diverse deformation including strike-slip, normal and thrust faulting [10]. The Kaurik-Chango fault has been studied in detail by several workers [9-13]. Several palaeolake profiles have been reported along the Kaurik-Chango fault in the upper Spiti valley [10,12-15]. However, research has not been done in the area between Khab and Akpa. This paper is an attempt to study the neotectonic activity in the area using morphotectonic parameters. The study area lies between 78˚00΄ and 79˚00΄ E and 31˚25΄ to 32˚ N in rugged Higher Himalayan terrain characterized by barren slopes and steep gradients.
Geology of the Area
The Satluj River has been studied in detail by a number of workers [7,16-19]. The study area comprises a thick succession of medium- to high-grade metamorphic rocks and their sedimentary cover. The succession is emplaced by granite intrusions of varying ages. Rocks of the Vaikrita and Haimanta Groups are exposed in the region (Figure 1(a)). The Vaikrita Group comprises psammitic gneiss with quartzite bands, banded gneiss, granite gneiss and quartz mica gneiss. The Haimanta Group comprises grey-purple quartzites, black carbonaceous phyllites and quartz mica schist interbedded with amphibolites and calc schists [19].
Figure 1(b) shows the SRTM image of the study area. The River Spiti takes an abrupt southerly turn and flows parallel to the Kaurik-Chango fault near Sumdo. Seven terraces have been observed on the eastern bank [19]. There is no evidence of terraces on the western bank. This may be due to the shifting of the river westward owing to the uplifted eastern block of the Kaurik-Chango fault [20]. Quaternary fluvio-lacustrine deposits occur all along the Satluj valley downstream of Khab. These deposits are well preserved on the eastern bank. Several lines of evidence gathered during the present work between Khab and Akpa indicate that the Kaurik-Chango fault extends downstream to Akpa with an uplifted eastern block. There are indications of tectonic and seismic activity similar to those in the Spiti valley.
In this paper, several morphotectonic indices were used to analyze the tectonic deformation in the area.Digital Elevation Model and satellite data were used to study the landscape evolution of the region.Field investigations were carried out to verify the data generated in the lab.
Methodology
Digital data interpretation was carried out with the help of a 3D Digital Elevation Model (DEM) and a 2D topographic map at the scale 1:50,000. The software used to process the digital data included Surfer 9. The Asymmetry Factor (AF) is defined as AF = 100 (Ar/At), where Ar is the area of the basin to the right (facing downstream) of the trunk stream and At is the total area of the drainage basin. In case of tectonic tilting, the value of AF becomes greater than 50 and the tributaries on the tilted side of the main stream grow longer than those on the other side [21]. The transverse topographic symmetry factor (T) is defined as T = Da/Dd, where Da is the distance from the midline of the drainage basin to the midline of the active meander belt and Dd is the distance from the basin midline to the basin divide. The value of T ranges between 0 and 1: T = 0 implies a perfectly symmetrical basin and T = 1 a perfectly asymmetrical one [20,21].
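A minimal sketch of how these two indices are computed; the right-bank area below is back-calculated from the reported AF of 55.84 and basin area of 1839 sq km, and the Da/Dd values are hypothetical:

```python
# The two morphometric indices defined above, as simple helper functions.
def asymmetry_factor(area_right_km2: float, area_total_km2: float) -> float:
    """AF = 100 * (Ar / At); values departing from 50 indicate tilting."""
    return 100.0 * area_right_km2 / area_total_km2

def transverse_symmetry(da_km: float, dd_km: float) -> float:
    """T = Da / Dd; 0 is a perfectly symmetrical basin, 1 a perfectly asymmetrical one."""
    return da_km / dd_km

print(asymmetry_factor(1027.0, 1839.0))  # ~55.85, matching the reported AF
print(transverse_symmetry(2.1, 5.6))     # ~0.375 for hypothetical Da, Dd
```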
The landscape morphology of an area is governed by the drainage of the rivers and their tributaries. Tectonic deformation has a direct impact on the drainage of that area. Tectonic deformation changes the channel slope, which is responsible for variation in the channel morphology [22-27].
Drainage maps and stream profiles were prepared with the help of the SRTM data. Valley incision is used to define relative uplift [24,28]. A cross-valley profile for the Satluj basin was prepared with the help of SRTM data. A high valley-floor-width to valley-height ratio (seen in a broad valley) indicates tectonic stability. On the other hand, a low valley-floor-width to valley-height ratio (seen in a narrow valley) is associated with recent tectonic movement [29]. The Asymmetry Factor calculated for the Satluj river basin is 55.84, indicating that the basin is asymmetrical. The transverse topographic symmetry factor (T) calculated for the basin is given in Table 1. The data clearly show that the basin is tilted towards the northwest.
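For illustration, a small sketch of one common valley-floor-width to valley-height formulation (Vf); the specific formula is an assumption, since the text describes the ratio only qualitatively, and the elevations are hypothetical values of the kind read from an SRTM cross-valley profile:

```python
# One common Vf formulation: Vf = 2*Vfw / ((Eld - Esc) + (Erd - Esc)),
# where Vfw is valley-floor width, Eld/Erd are left/right divide elevations,
# and Esc is the valley-floor elevation. Low Vf (narrow valley) suggests
# recent uplift; high Vf (broad valley) suggests relative stability.
def vf_ratio(floor_width_m: float, left_divide_m: float,
             right_divide_m: float, floor_elev_m: float) -> float:
    return (2.0 * floor_width_m /
            ((left_divide_m - floor_elev_m) + (right_divide_m - floor_elev_m)))

print(vf_ratio(120.0, 3950.0, 4100.0, 2600.0))  # ~0.08, a gorge-like low value
```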
Drainage and Stream Profiles
The drainage map of the area shows lower-order streams joining the trunk stream at 90˚. The streams on the eastern block are longer and more numerous compared to those on the western block. The River Satluj flows through a crystalline basement belonging to the Vaikrita Group. The area lies in the Kaurik-Chango fault zone. Quaternary reactivation of these faults has led to bedrock incision by the Satluj, which flows in a gorge for most of its course in the study area (Figure 3(a)). The longitudinal profile of the Satluj shows a change in elevation near Spilu. Between Spilu and Akpa, the river has carved a deep gorge. This abrupt change in river morphology indicates that the river course in this region is tectonically controlled (Figures 3(b)-(c)). The low valley-floor-width to height ratio suggests that the river is cutting downwards due to the tectonic activity in the region.
Geomorphology
The Satluj river basin under investigation is a rectangular basin with an area of 1839 sq km. The mean height of the basin is 3118.3 m. The River Satluj is a 4th-order stream as per the Horton-Strahler method of stream ordering. The highest point in the basin is about 4400 m. The total basin relief is 2400 m. The streams on the eastern block flow in escarpments along most of their course. Landslide cones and springs are rampant on the eastern block (Figure 3(e)). The river flows in a narrow valley for most of its stretch from Spilu to Morang (Figure 4(a)).
The River Satluj has carved three levels of unpaired terraces at Akpa (Figure 4(b)). The river in this region deflects abruptly towards the west. Tectonic rejuvenation of the N-S fault has also caused the tilting of the terraces (Figure 4(c)). Landslide cones and springs are rampant along the N-S lineament (Figure 4(d)).
In Morang, fluvio-lacustrine deposits are exposed for about 1 km (Figure 4(e)). These deposits are 60-70 m thick. The sedimentary succession is represented by laminated clay-silty clay and horizontally bedded sands. Laminated sediments dominate the lake section. The presence of lacustrine deposits also suggests that neotectonic movements along the N-S fault were responsible for blocking the river and forming a lake. The uplifted eastern block led to the damming of the rivulet, Khokpa nala. Ensuing landslides facilitated the formation of a lake on the footwall.
Discussion
The Kinnaur and Lahaul-Spiti districts of Himachal Pradesh were severely rocked by a major earthquake of magnitude > 6.8 in 1975. The earthquake was associated with movements along a subvertical N-S trending normal fault named the Kaurik-Chango fault. Luminescence chronology of seismites in Sumdo suggests that the activation of the Kaurik-Chango fault and seismic activity date back to the Late Pleistocene [15].
In the present area of investigation, a N-S lineament was observed along which the River Satluj flows for a considerable distance before getting deflected near Akpa. A number of neotectonic evidences gathered during the study testify to its active nature. Morphotectonic parameters such as the Asymmetry Factor (AF) and the transverse topographic symmetry factor (T), as well as the field evidence, suggest that the Satluj river basin is tilted. The DEM of the area also shows the difference in elevation of the two blocks. Drainage analysis clearly displays the effect of tectonic rejuvenation. The blocking of Khokpa nala, a small rivulet, took place due to fault movement, leading to the formation of the Morang palaeolake on the eastern block. Several levels of deformation structures have been observed in the fluvio-lacustrine profile at Morang. Similar structures are seen near the Kaurik-Chango fault in Sumdo and Leo. The presence of deformation structures in the Morang fluvio-lacustrine profile indicates that the area lies in a tectonically and seismically active zone and has experienced several pulses of intense seismic activity. Neotectonic indicators such as the uplifted terrain, unpaired terraces, fluvio-lacustrine deposits, deformation structures, and the alignment of springs and landslides in the fault zone are strong evidence of the active nature of the N-S fault in the study area.
Figure 1. (a) Geological map of the study area (after Bhargava and Bassi, 1998); (b) SRTM image of Himachal Himalaya showing the location of the study areas. Also marked are the major drainage, thrusts and faults in the study area (image taken from NASA-SRTM program).
Figure 2. (a) Digital elevation model of the study area; (b) SRTM image of the study area showing basin asymmetry.
Figure 3. (a) Drainage map of the study area; (b) longitudinal profile of river Satluj in the study area; (c) cross valley profile of river Satluj in the study area; (d) hypothetical model showing lake formation in KCF zone; (e) geomorphological map of the study area.
Figure 4. (a) River Satluj flowing in a narrow valley from Spilu to Morang; (b)-(c) three levels of unpaired terraces formed by the river at Akpa, showing tilted surfaces; (d) recent landslide material; (e) fluvio-lacustrine deposits at Morang; (f) sketch of deformation structure in the lacustrine profile.
|
v3-fos-license
|
2019-05-17T13:33:48.412Z
|
2019-04-30T00:00:00.000
|
155507808
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.intechopen.com/citation-pdf-url/64998",
"pdf_hash": "0fe6029208d7c11583296e31ebdd0434c0ba6a67",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43514",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b34086d6409bccea863ee9b5a7a0a5ece76f64e5",
"year": 2019
}
|
pes2o/s2orc
|
Tuberculosis and Immunosuppressive Treatment in Uveitis Patients
Uveitis is reported to be related to tuberculosis in 0.2-20% of cases. This large range reflects prevalence variations of tuberculosis around the globe as well as differences in diagnostic criteria. In addition, patients with noninfectious uveitis are frequently treated with immunomodulatory drugs and are thus at risk of TB reactivation. The search for tuberculosis infection is thus an important aspect of the work-up of patients with uveitis, even in low-prevalence areas. In the work-up of such patients, the first question to ask is whether the patient has been infected by Mycobacterium tuberculosis or not. The second question is to determine whether the uveitis is due or linked to this mycobacterial infection or not. Classical tuberculosis screening tools are used to answer the first question (TST, IGRA and chest X-ray). The answer to the second question is much more challenging and requires the exclusion of other causes, the consideration of epidemiological data and clinical signs, polymerase chain reaction (PCR) on ocular fluids, and a therapeutic treatment trial. Disease prevalence greatly influences all proposed tests and the final diagnosis. Tuberculosis prevalence in Western countries progressively decreased during the twentieth century but remains elevated in cities with large migrant populations and drug addicts, with an increase of ultra-resistant cases. All these data must be carefully analyzed in order to collect enough evidence supporting tuberculous uveitis before the initiation of a treatment with potentially serious side effects, and to adapt the treatment to increasing resistance.
Introduction
Tuberculosis (TB) is a worldwide problem and a main concern for the World Health Organization. Nowadays, 30% of the human population is infected with the Koch bacillus, and tuberculosis remains one of the major health problems on earth [1-4]. In 2014 alone, 9.6 million people were thought to be infected with Mycobacterium tuberculosis (Mtb) globally. In the vast majority of cases, infection leads to a latent form of tuberculosis, active disease being found in only 10% [1]. Latent tuberculosis infection (LTBI) occurs when individuals have been exposed to TB but remain systemically healthy. This latency relies on the presence of an active immune response against Mtb. All these individuals are thus at risk of TB reactivation in case of immunodepression. In the era of globalization, all countries are affected, with varying rates of infection, including through migration from high-endemic countries.
Uveitis is reported to be related to tuberculosis in 0.2-20% of cases [5]. This large range reflects prevalence variations of tuberculosis around the globe as well as differences in diagnostic criteria. The etiological relationship between tuberculosis and ocular inflammation is complex. Hence, direct demonstration of the presence of Mtb inside the eye is fairly rare because of the pauci-bacillary nature of the infection. If the patient has evidence of active systemic TB infection, the uveitis may indicate direct ocular involvement by Mtb. However, in most cases, a diagnosis of presumed ocular tuberculosis will be made on the basis of compatible ophthalmological signs in the setting of a systemic (usually latent) infection [6-8]. In this context, recent studies suggest that in patients with vision-threatening uveitis with no identifiable cause who have LTBI, the recurrence rate of uveitis is greatly reduced with concomitant anti-tubercular therapy (ATT) and immunosuppressive treatment [9-11]. Another important issue, reopened with the introduction of biologics, is obviously the risk of inducing tuberculosis reactivation in patients with severe vision-threatening non-infectious uveitis where systemic corticosteroids and steroid-sparing agents are required. The search for tuberculosis infection is thus an important aspect of the work-up of patients with uveitis, even in low-prevalence areas, in order to prevent reactivation of LTBI [10]. In this chapter, we review these important aspects of the relation between TB and immunosuppressive (IS) drugs/immunomodulatory treatment (IMT) in uveitis patients.
Screening in non-infectious uveitis patients for LTBI infection before starting IS or IMT
The mainstay therapy of sight-threatening noninfectious uveitis is based on the administration of corticosteroids and immunosuppressive drugs. IS drugs are usually restricted to refractory cases and to patients requiring high doses of steroids, in whom the visual prognosis depends on more aggressive therapeutic approaches. Their long-term use is limited by ocular and systemic side effects.
The introduction of biological agents such as anti-tumor necrosis factor (anti-TNF-α) agents, TNF-α being a key cytokine in host defense against intracellular infections such as Mtb through regulating the integrity of the granuloma where TB is contained, led to an upsurge of TB reactivation [12]. In contrast, non-anti-TNF-α targeted biologics, like the IL-6 inhibitor tocilizumab (TCZ) and the anti-CD20 agent rituximab (RTX), are not likely associated with any increased risk [13]. The TNF-α blocking agents available to date are: infliximab (IFX), adalimumab (ADA), golimumab (GOL) and certolizumab pegol (CZP), which are monoclonal antibodies directed against TNF-α, and etanercept (ETN), which is a soluble receptor blocking agent. Several publications have reported the effectiveness of anti-TNF drugs in the treatment of uveitis [14,15]. Anti-TNF treatment has had a profound effect on the management of autoimmune vision-threatening uveitis with known etiology. ADA is the first licensed anti-TNF treatment for uveitis patients. It is important to emphasize that anti-TNF-α agents (infliximab, adalimumab, golimumab) may be more efficient than soluble receptors of TNF-α (etanercept) in decreasing the risk of uveitis [16]. Paradoxical reactions during treatment with a biologic agent, like palmoplantar pustular and psoriasiform reactions, psoriatic arthritis, hidradenitis, inflammatory bowel disease, pyoderma gangrenosum, granulomatous reactions, and vasculitis, have also been reported through anecdotal cases, cohort studies, and analysis of drug event databases, showing also that uveitis can flare during anti-TNF-α therapy, especially with etanercept [17].
Because of the risk of developing active systemic TB, screening strategies for LTBI detection and preventive therapy for patients undergoing therapy with biological agents have been developed. LTBI is detected either by the tuberculin skin test (TST), also named the Mantoux test, or by a blood-based interferon-gamma release assay (IGRA) such as QuantiFERON TB Gold in Tube (QFT). Based on the WHO recommendations, either TST or IGRA is acceptable for LTBI screening [18]. Clinicians may consider, before starting IS, using IGRA in persons with a history of BCG vaccination, but if the index of suspicion of LTBI is high, independently of BCG vaccination, both IGRA and TST may be done, especially prior to initiating anti-TNF-α therapy [19]. Recent studies have evaluated the effectiveness of QFT and TST in the screening of arthritis patients and patients with inflammatory bowel disease [20,21]. Concordance between the two tests was moderate, and it appears lower with immunosuppression. QFT alone may be appropriate in immunosuppressant-naïve patients, but both tests should be considered in immunosuppressed patients. In guidelines pertaining to medical immunosuppression, the recommendations for screening varied considerably between the use of TST and IGRA. Concurrent testing with both TST and IGRA was supported by many guidelines [19-23]. Lu et al. conducted a systematic review and meta-analysis to compare the accuracy of IGRAs and TST for the diagnosis of Mtb infection [24]. IGRAs showed better performance than TST for the diagnosis of tuberculosis. Data on comparative and cumulative sensitivity and specificity indexes for both tests are detailed in Table 1. Cotter and Rosa et al. reported an interesting approach to choosing eligibility for treatment of LTBI after screening with TST and IGRA in immunosuppressed and immunocompetent patients suffering from inflammatory bowel disease, based on a very practical algorithm adapted from Duarte et al. to trace the routes to be followed in deciding which patients have LTBI and need tuberculosis treatment according to IGRA and TST [25]. We think that this algorithm can be extrapolated to all patients with inflammatory diseases like uveitis (Table 2). Patients with inflammatory diseases who require long-term maintenance medical immunosuppression with a negative screening TST or IGRA may not need further evaluation in the absence of risk factors and/or clinical suspicion for TB in low-TB-risk countries [19,21]. Annual evaluation is highly recommended if they live, travel, or work in situations where TB exposure is likely while they continue treatment with biologic agents [23]. It is important to decrease false-positive LTBI testing, which may lead to potentially toxic antibiotic treatment and result in the unnecessary interruption of biologic therapy. After screening, if either test is positive (TST or IGRA), a chest CT scan is mandatory to exclude active pulmonary TB.
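The screening flow described above can be summarized, in a deliberately simplified and hypothetical form that is no substitute for clinical judgment or local guidelines, as:

```python
# Simplified, hypothetical encoding of the LTBI screening flow described above.
def ltbi_next_step(tst_positive: bool, igra_positive: bool,
                   immunosuppressed: bool, high_suspicion: bool) -> str:
    if tst_positive or igra_positive:
        # Either positive screen: exclude active pulmonary TB before
        # considering preventive (LTBI) therapy.
        return "chest imaging to exclude active TB, then LTBI therapy if excluded"
    if immunosuppressed or high_suspicion:
        # Test concordance is lower under immunosuppression, so a single
        # negative result is not fully reassuring: perform both TST and IGRA.
        return "perform the second test before clearing the patient"
    return "no further evaluation; re-screen annually if TB exposure is likely"
```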
LTBI can progress to active TB in 5-10% of subjects who are at higher risk, such as recent contacts, people living with HIV, children below 5 years, people aged over 65, immigrants from high-TB-prevalence countries and candidates for biological treatment [18]. When the patient is evaluated, clinicians should also take into account other variables, including the host-related TB risk based on age, socioeconomic status, lifestyle, malnutrition, immunosuppressive conditions and co-morbidities. The underlying disease itself is also associated with a higher TB risk, with risk estimates ranging from 2.0 to 8.9 in rheumatoid arthritis (RA) patients not receiving biologic therapies, and a lower risk in those with ankylosing spondylitis (AS), psoriatic arthritis (PsA), and psoriasis (Pso) [21-23,26]. Systemic TB reactivation has rarely been reported as a side effect related to anti-TNF-α therapy in patients with refractory relapsing chronic posterior uveitis [14,15]. A review of the US Food and Drug Administration (FDA) Adverse Event Reporting System data revealed 70 cases of active TB in 147,000 patients receiving IFX worldwide [22]. Of these, 47 occurred in patients with RA, 18 in those with Crohn's disease, and 5 in people with other types of arthritis, with a median interval of 12 weeks from starting the biologic therapy. The incidence rate of TB was 4 times higher in IFX-treated patients with RA than the estimated incidence in people with RA not receiving biologic therapy. As mentioned, there is evidence of a single-biologic-related risk, as reported by Cantini et al. [10]. The risk is at least 3-4 times higher in patients exposed to the monoclonal antibodies IFX and ADA than in those receiving the soluble receptor ETN. Subsequent studies aimed to establish the relative risk (RR) of TB in patients using TNF-α inhibitors (and other biologics) compared to that in the general population. Registries for patients on biologics have provided a valuable resource for studies that aimed to determine the risk of TB associated with these therapies. A French study using the RATIO registry found age- and sex-standardized incidence ratios (SIR) for infliximab, adalimumab, and etanercept of 18.6 (95% CI, 13.4-25.8), 29.3 (95% CI, 20.3-42.4), and 1.8 (95% CI, 0.7-4.3), respectively, compared to the general population [27].
Of note, the combined use of anti-TNF agents and traditional DMARDs exposes subjects with LTBI to a higher risk of TB reactivation compared to patients treated with anti-TNF-α monotherapy. Practitioners also need to be aware that patients with inflammatory diseases, for which biologics are prescribed, already have an increased risk of TB associated with their immunosuppressed disease state, and often also have co-morbidities and additional medications that themselves carry an increased risk of TB compared to that of the general population [28]. Treatment with non-anti-TNF-α targeted biologics, like the IL-6 inhibitor tocilizumab (TCZ), the anti-CD20 agent rituximab (RTX) and the IL-1 inhibitor anakinra (ANK), is not likely associated with any increased risk of TB reactivation in inflammatory patients [13,29,30].
Recommendations state that in the case of a diagnosis of LTBI (a positive result on an immunodiagnostic test (TST or IGRA) and a chest radiograph negative for active TB lesions), active TB prevention with a 6-9-month course of isoniazid is recommended, in association with pyridoxine (vitamin B6) supplementation, with an average protective effect against TB of 60% during the observation period [31]. There is no clear evidence in the literature concerning the optimal interval between the beginning of preventive therapy for TB reactivation and biologic therapy [23]. It is suggested that biologic therapy be postponed for at least 1 month after starting preventive therapy. The decision to treat an individual must therefore balance the potential personal benefits against the risk of drug hepatotoxicity and neurotoxicity, which is higher in chronic alcoholics, malnourished persons, and pregnant women than in healthy individuals (0.2%), due to the inhibitory effect of isoniazid on the function of pyridoxine metabolites. Daily rifampicin alone for 3-4 months, compared to placebo, has shown a 59% reduction in incident TB [32]. A multi-center clinical trial comparing 4 months of self-administered rifampicin to 9 months of daily isoniazid therapy was completed in 2017. Daily therapy with isoniazid plus rifampicin for 3 months and standard therapy with isoniazid for 6-12 months were equivalent in terms of efficacy and, as expected given the shorter regimen and direct observation, treatment completion was significantly higher in the combination therapy group (82.1% vs. 69.0%). Less toxicity was also reported with the shorter regimen, with fewer individuals taking rifampicin/isoniazid developing drug-related hepatotoxicity [33].
Considering the most frequently used IS and IMT drugs for the treatment of non-infectious uveitis, a few specific ophthalmologic reports aim to provide an overview of their use in patients with a recent or past history of serious systemic infection presumably unrelated to their inflammatory eye disease (IED) [34]. Recently, an expert committee considered the assessment and investigation of patients with severe IED initiating immunosuppressive and/or biologic therapy [35]. Infections that may be exacerbated or reactivated as a result of systemic immunosuppressive or biological therapy include tuberculosis, hepatitis B virus, hepatitis C virus, HIV and toxoplasmosis. These infection risks should be assessed or excluded before the initiation of such therapy. We keep our focus on the risk of TB reactivation in IED patients. Studies regarding this issue are mainly focused on biological therapy, although some studies have indicated a potential risk of developing TB when using traditional IS agents, particularly MTX [36]. A significant relationship between the use of MTX and an increased incidence of active TB was not established but should still be considered.
It has been described that if TB develops during anti-TNF-α treatment, it is more likely to be disseminated and extra-pulmonary than other TB cases. Few reports have addressed the occurrence of tubercular uveitis during anti-TNF treatment. A French group reported the uveitis cases occurring in patients with chronic rheumatic diseases, chronic inflammatory intestinal diseases or connective tissue diseases while treated with disease-modifying anti-rheumatic drugs (DMARDs) and/or biologic therapies. A total of 32 cases of uveitis were reported, and 5 were of infectious origin: 2 toxoplasmosis, 2 herpes virus and 1 tuberculosis [37]. We faced one case of a patient with AS and anterior uveitis treated with ADA for years, who developed a panuveitis with choroidal granulomas (Figure 1), associated with progressive cough, dyspnea, and pyrexia. A computed tomographic scan revealed extensive thoracic lymphadenopathy and interstitial shadowing of the lungs. Culture and polymerase chain reaction (PCR) of a mediastinal lymph node biopsy specimen showed acid-fast bacilli.
Ocular tuberculosis and IMT
There is a great deal of ambiguity in establishing a firm relationship between tuberculosis and ocular inflammation. It is not uncommon, when investigating patients with uveitis, that there is no identifiable systemic or ocular disease and that the only positive test is a Mantoux test or QFT, associated or not with abnormalities on the chest X-ray. In those patients, classically classified as idiopathic uveitis and treated by immunomodulation, the role of Mtb in disease development has been questioned. On the other hand, the role of immunomodulation in the treatment of well-established tubercular uveitis is also debated.
Several studies have tried to establish a cause/effect relationship between TB and uveitis using criteria for the presumption of a tubercular etiology, including a positive Mantoux test/QFT, healed lesions on the chest X-ray, the absence of another etiology, and a suggestive clinical presentation of uveitis [5,6]. In such patients, the question arises as to whether the uveitis is related to TB or not, leading to the further question of whether or not to establish ATT.
Intra-ocular TB accounts for 6.9-10.5% of uveitis cases without a known active systemic disease, and 1.4-6.8% of patients with active pulmonary disease have concurrent ocular TB [38,39]. In some patients there is direct invasion by the TB mycobacterium into local ocular tissues, such as in choroidal granuloma, as evidenced by histopathological examination of the biopsied involved ocular tissue, smears and cultures of the tissue fluid, and the polymerase chain reaction (PCR). In other patients, there is no clinical evidence to suggest active ocular TB infection. The pathogenesis of uveitis in these patients remains unclear. It is uncertain whether the uveitis is the result of reactivation of LTBI or a hypersensitivity response to Mtb [38,40]. Bansal proposed guidelines for the diagnosis of intra-ocular TB including a combination of clinical ocular findings, ocular and systemic investigations, exclusion of other etiologies and response to ATT [41]. Based on these and their own results, Gupta et al. proposed to classify intra-ocular TB into confirmed, probable, and possible intra-ocular TB [11]. Recently, The Collaborative Ocular Tuberculosis Study (COTS)-1 tried to clarify, through a multinational retrospective review, the suggestive clinical features of and approach to diagnosis in patients with tubercular uveitis. The diagnostic criteria for tubercular uveitis used in COTS-1 are developed in Table 3 [42]. Based on those criteria, we propose a diagram explaining the diagnostic pathways for patients suspected of having TB (Table 4). In 2018, they provided in more detail the different phenotypes of choroidal involvement in tubercular uveitis, as well as geographical variations in phenotypic expression and treatment outcomes. The phenotypic variants reported were serpiginous-like choroiditis (SLC) in 46.1%, choroidal tuberculomas (CTC) in 13.5%, and multifocal choroiditis (MFC) in 9.4%. Other rare phenotypic variants of choroiditis were observed, including ampiginous choroiditis (APC) in 9.0%, acute posterior multifocal placoid pigment epitheliopathy (APMPPE) in 3.3%, and other indeterminate types of choroiditis in 18.8%. These varied clinical phenotypes are probably based on the interaction between the activity of mycobacterium bacilli and the host immune system. While SLC was clearly the most prevalent phenotype in the Asia-Pacific region, it was less prevalent in the West. Furthermore, APC is a phenotype of choroiditis that is infrequently reported in association with tubercular uveitis [43].
Because TB can sometimes be confined purely to the eye, and because it is a pauci-bacillary infection, there is a lack of agreed management guidelines among ophthalmologists for establishing the diagnosis of intra-ocular TB. Similarly, there is no agreed consensus between ophthalmologists and other physicians with regard to the role of ATT and the duration of treatment in cases of isolated intra-ocular TB. Bansal et al. assessed the long-term impact of adding anti-tubercular treatment to the standard anti-inflammatory therapy, consisting mostly of corticosteroids, in patients with uveitis and evidence of latent or manifest TB. The group speculated that if the uveitis was related to a hypersensitivity reaction to tubercular antigens attributable to latent TB, the elimination of LTBI would lead to the elimination of future recurrences of uveitis in these patients. The administration of anti-tubercular therapy in these patients substantially reduces recurrences when given along with standard corticosteroid therapy. Corticosteroids may limit damage to ocular tissues caused by delayed-type hypersensitivity [41]. The use of ATT to manage presumed ocular tuberculosis is regarded as an effective tool for tubercular uveitis, and the response to therapy can be a good surrogate for the diagnosis of presumed ocular tuberculosis.
A case-control study conducted by Chee et al. on patients with uveitis with evidence of latent TB and no other underlying disease found that those treated with ATT for more than 9 months were approximately 11 times less likely to develop recurrence of inflammation compared with patients who had not received ATT. This association was independent of potential confounders such as demographics, classification of uveitis and corticosteroid therapy. On the other hand, patients who were treated with ATT for <6 months or 6-9 months did have a reduction in recurrence, but this was not statistically significant [39]. The Collaborative Ocular Tuberculosis Study (COTS)-1 group also reported the role of ATT in the management of patients with TB uveitis from a multinational cohort and explored potential correlations of clinical features with treatment response. A low treatment failure rate was reported in patients with TB uveitis treated with ATT. On multivariate regression analysis, they showed that the presence of choroidal involvement with vitreous haze and snowballs in patients with panuveitis was associated with a higher risk of recurrence. Concerning the addition of corticosteroids to ATT, their results suggest that patients treated with corticosteroids may have had poorer outcomes than those who were not [42]. Indeed, the possible beneficial effect of immunomodulation in association with ATT in the management of tubercular uveitis is still debated. A recent meta-analysis was conducted on 37 articles to assess the effect of ATT, associated or not with IMT, on the ocular outcome of patients with presumed ocular TB. The meta-analysis revealed that 84% of the patients receiving ATT showed non-recurrence of inflammation during the follow-up period. A successful outcome was observed in 85% of patients treated with ATT alone, in 82% of patients treated with ATT and systemic steroids, and in 85% of patients treated with ATT, systemic steroids and immunomodulators. It was not possible to conclude which regimen was the best to control ocular inflammation [44-46].
Conclusion
The links between tuberculosis, uveitis and immunosuppression are important and complex. First, patients with inflammatory diseases treated with IMT agents, including patients with noninfectious uveitis, are at risk of developing active tuberculosis, including ocular tuberculosis. Secondly, many data suggest that Mtb might play a role in the development of idiopathic uveitis in LTBI patients and that ATT must be considered in such cases. Finally, inflammatory and immune reactions are likely to play a role during ocular tuberculosis, and immunomodulation has a beneficial effect.
In summary, we have to keep in mind that the main concern of TB screening for the ophthalmologist is to avoid systemic TB reactivation when a sight-threatening uveitis of known etiology is destined for IS/IMT. But when facing an idiopathic uveitis under IS/IMT, another risk has to be considered: the risk of misdiagnosing ocular TB, with a non- or partial response to immunosuppressive treatment. Introduction of ATT in those cases will control inflammation, help to discontinue most IMT and prevent recurrences.
Synthesis of Carbon Nanocapsules and Nanotubes Using Fe-Doped Fullerene
We synthesized iron-(Fe-)doped C60 nanowhiskers (NWs) by applying the liquid-liquid interfacial precipitation method that employs a C60-saturated toluene solution and a solution of 2-propanol containing ferric nitrate nonahydrate (Fe(NO3)3·9H2O). Fe particles of 3–7 nm in diameter were precipitated in the NWs. By heating at 1173 K, the NWs were transformed into hollow and Fe3C-encapsulated carbon nanocapsules and carbon nanotubes.
Method
C60 powders were dissolved in toluene to prepare a C60-saturated solution with a solubility of 2.8 g/L. In addition, ferric nitrate nonahydrate (Fe(NO3)3·9H2O) was dissolved in 2-propanol to give a concentration of 0.1 M. Next, the C60 toluene solution was transferred to a glass vial, and the solution of 2-propanol containing Fe(NO3)3·9H2O was added to form a liquid-liquid interface. The vial was maintained at 278 K for one week, and the mixed solution was then filtered to extract precipitates. The precipitates were dried and heated under high vacuum at 1173 K for 1 h. The as-precipitated and heat-treated specimens were dispersed on microgrids and observed by scanning electron microscopy (SEM) and transmission electron microscopy (TEM).
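The proportions implied by these two solutions can be made concrete with a small back-of-the-envelope calculation. The Python sketch below is illustrative only: the 10 ml volumes are assumed (the paper does not state them), while the 2.8 g/L solubility and the 0.1 M concentration come from the text, and the molar mass of C60 is standard:

# Rough molar balance of the LLIP mixture (volumes are assumed, not from the paper).
M_C60 = 720.66        # g/mol, molar mass of C60
sol_C60 = 2.8         # g/L, saturation solubility of C60 in toluene (from the text)
c_Fe = 0.1            # mol/L, Fe(NO3)3.9H2O in 2-propanol (from the text)
v_toluene = 0.010     # L, assumed volume of the C60/toluene solution
v_propanol = 0.010    # L, assumed volume of the 2-propanol solution

n_C60 = sol_C60 * v_toluene / M_C60   # mol of dissolved C60
n_Fe = c_Fe * v_propanol              # mol of Fe(3+)
print(f"C60: {n_C60*1e6:.1f} umol, Fe: {n_Fe*1e6:.1f} umol, Fe:C60 ~ {n_Fe/n_C60:.0f}:1")

Under these assumed volumes, Fe3+ is present in a roughly 26-fold molar excess over dissolved C60, consistent with Fe particles precipitating readily throughout the growing nanowhiskers.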
Results
Figure 1 shows an SEM image of as-precipitated C60 NWs. Figures 2(a) and 2(b) show a bright-field image and a high-resolution image of an as-precipitated C60 NW, respectively. The diameters of the as-precipitated NWs ranged from 0.5 to 7.5 μm, as shown in Figure 3. Lattice fringes with a spacing of 0.52 nm were observed in the NWs, as shown in Figure 2(b). Figure 2(c) shows a selected-area electron diffraction pattern of the NW depicted in Figure 2(a). The high-resolution images and diffraction patterns reveal that the NWs have a tetragonal lattice with lattice constants of a = 0.99 nm and c = 2.1 nm. The lattice fringes with a spacing of 0.52 nm depicted in Figure 2(b) correspond to the (004) plane. The long axis of the NW is parallel to the (110) direction. Figure 2(d) shows a high-resolution image of an NW surface, where Fe particles with diameters in the range of 3-7 nm were observed. Thus, the LLIP method using a solution of Fe(NO3)3·9H2O in 2-propanol resulted in the precipitation of Fe particles in the C60 NWs. Owing to the precipitation of Fe particles, the crystal growth of the NWs was inhibited; as a result, the surfaces of the NWs had a rough topography. Therefore, the Fe-doped NWs presented here differ from pure C60 NWs, which are surrounded by plane surfaces [19-23].
Figure 4 shows a bright-field image of the heat-treated specimen. Hollow and encapsulating CNCs and CNTs were observed in the specimen, as were chains of CNCs.
Figure 5 shows a bright-field image and a selected-area diffraction pattern of a CNC encapsulating a particle. The 220, 230, and 050 spots of Fe3C (cementite) are observed; the particle was identified as Fe3C. The formation of CNCs and CNTs was not confirmed when the heating temperature was changed to 873 K, 973 K, 1073 K, or 1123 K. When the heating time was shortened to 0.5 h at 1173 K, the size distribution of CNCs and CNTs was similar.
Discussion
4.1. Formation of Fe3C Particles. In the as-precipitated NWs, Fe particles 3-7 nm in diameter were observed. On the other hand, after heating at 1173 K, the diameter of the Fe3C particles in the CNCs and CNTs was found to increase to 5-100 nm. This implies that the Fe particles had aggregated and that carbon had dissolved in them during heating. According to Ding et al., pure Fe particles that are several nanometers in size melt at 1000 K [27]. The molten Fe particles are mobile and fuse together. Fe carbides with a carbon concentration of more than 50 at% are formed when the molten Fe particles contact carbon [28]. During cooling, the solubility limit of carbon in the particles decreases, followed by the precipitation of graphene layers on the particle surfaces. In this experiment, the particles observed in the CNCs and CNTs were Fe3C. When the carbon concentration of the particles at coagulation exceeds 25 at%, Fe3C particles can be formed in the CNCs and CNTs. Schaper et al. showed by in situ TEM that graphene layers precipitate from Fe3C particles at 1143 K [28]. This temperature is 30 K lower than the heating temperature used in this study. The Fe3C particles shrink owing to the precipitation, resulting in the formation of an empty space at the core of the CNCs [28]. Such empty spaces were observed in this study, as shown in Figure 6(a). Fe3C is a quasi-stable phase and does not transform to other phases at room temperature.
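The 25 at% threshold quoted above is simply the stoichiometry of cementite: Fe3C contains one carbon atom per four atoms in total. A one-line check (not from the paper, added for clarity):

# Atomic percent of carbon in cementite, Fe3C: 1 C atom per (3 Fe + 1 C) atoms.
n_Fe, n_C = 3, 1
print(f"Carbon in Fe3C: {100 * n_C / (n_Fe + n_C):.0f} at%")  # -> 25 at%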
4.2. Formation of CNCs and CNTs.
According to Saito, CNCs are produced by the precipitation of carbon from catalysts, that is, metallic or carbide particles [10]. Thus, a CNC encapsulating a particle is initially formed. Jiao et al. showed by in situ TEM that, in CNCs encapsulating Fe and Fe-carbide particles, the particle is removed from the CNC at 1173-1373 K [29]. As Ding et al. discussed, the driving force of the particle removal relates to the temperature and concentration gradients in the particle, which are caused by the precipitation of graphene layers [27]. By removal of the particle, the CNC is broken and a hole is formed. This hole is closed by subsequent growth of the layers [30], and hence a hollow CNC is formed. Subsequently, the next precipitation starts. Thus, the precipitation of graphene layers occurs intermittently. By repeating the processes of precipitation and removal of the encapsulated particle, hollow CNCs proliferate and one CNC encapsulating the particle remains. The CNCs aggregate, resulting in the formation of a chain, as shown in Figure 4. In the case of the formation of CNTs, the encapsulated Fe3C particles showed rod shapes. Tubular graphene layers, that is, CNTs, precipitate around such rod-shaped particles. Once a CNT is formed, the encapsulated rod-shaped particle shifts along the symmetric axis of the CNT. Carbon precipitates continuously on the same region of the particle, along one direction, leading to growth of the CNT [10]. According to Jourdain et al., a particle is encapsulated in a CNT owing to the strong interfacial tension between the inner wall of the CNT and the surface of the molten particle [31]. It is suggested that the interfacial tension of the rod-shaped particles encapsulated by CNTs is higher than that of the spherical particles in the CNCs. In this study, both CNCs and CNTs were formed in the same specimen. Whether CNCs or CNTs are formed depends on the shape of each encapsulated particle.

4.3. Decrease in the Spacing of Graphene Layers of CNCs. The spacing of the graphene layers decreases to 0.31 nm around Fe3C particles encapsulated by carbon nanocapsules, at the graphene/Fe3C interfaces. This spacing is 10% smaller than that of graphite. Banhart et al. observed a similar reduction in the spacing of graphene layers in carbon onions and interpreted the reduction on the basis of compression and the transition of orbitals from sp2 to sp3 [12,13]. In the Fe3C-encapsulated CNCs produced in this study, the smaller spacing of the graphene layers is related to the Fe3C particle. The bonding between the graphene layers and the Fe3C particle may contribute to the transition of orbitals from sp2 to sp3.
4.4. Characteristics of the Present Method.
In this production method of Fe3C-encapsulated CNCs and CNTs, Fe-doped fullerene nanowhiskers were synthesized by the LLIP method and then heat-treated. The LLIP method is performed by a simple process: the mixing of two kinds of liquid at around room temperature. Thus, the production is easier to control than in other methods for producing CNCs and CNTs encapsulating particles, for example, arc discharge [4,7-9]. In particular, when the production quantity becomes larger, this simple control is advantageous, although the LLIP process needs a longer time, for example, one week.
Conclusion
Fe-doped C60 NWs were synthesized by the LLIP method using a C60-saturated toluene solution and 2-propanol containing Fe(NO3)3·9H2O. The addition of Fe(NO3)3·9H2O resulted in the precipitation of Fe particles in the C60 NWs.
Heat treatment of the NWs at 1173 K produced both hollow CNCs and CNTs. Fe3C-encapsulated CNCs and CNTs were also produced. The present method can be applied to synthesize NWs including other metals, for example, cobalt and nickel. Thus, the present method is suitable for the production of CNCs and CNTs encapsulating various foreign nanomaterials.
Figures 6(a) and 6(b) show high-resolution images of an Fe3C-encapsulated CNC. The diameters of the CNCs and the Fe3C particles ranged from 25-175 nm and 5-100 nm, respectively, as shown in Figure 7. The Fe3C particle does not completely fill the empty space at the core of the CNC.
Figure 6(c) shows a high-resolution image of graphene layers in an Fe3C-encapsulated CNC. The spacing of the graphene layers around the surface is 0.34 nm, whereas the spacing decreases to 0.31 nm around the graphene/Fe3C interface. Figure 8(a) shows a high-resolution image of a CNT encapsulating Fe3C particles (Figures 8(b) and 8(c)), similar to the case of the CNCs. The Fe3C particles encapsulated by the CNTs showed rod shapes, as shown in Figure 8(a). This is different from the spherical Fe3C particles observed in CNCs. The diameters of the CNTs and the Fe3C particles ranged from 10-70 nm and 5-50 nm, respectively, as shown in Figure 9.
Figure 2: (a) Bright-field image, (b) high-resolution image, and (c) selected-area electron diffraction pattern of an as-precipitated Fe-doped C60 nanowhisker. The diameter of the nanowhisker is 1.2 μm. (d) High-resolution image of Fe particles in the nanowhisker. The lattice fringes of (110) Fe with a spacing of 0.20 nm are observed.
Figure 5: (a) Bright-field image and (b) selected-area diffraction pattern of an Fe3C-encapsulated carbon nanocapsule. The 220, 230, and 050 spots of Fe3C are observed.
Figure 6: (a) High-resolution images of an Fe3C-encapsulated carbon nanocapsule and (b) enlargement of the (200) and (020) lattice fringes of the Fe3C particle in (a). (c) Graphene layers of the carbon nanocapsule in (a).
Figure 7: Histograms of (a) outer diameter of carbon nanocapsules and (b) Fe3C particles encapsulated by carbon nanocapsules.
Figure 8: Fe3C particles encapsulated by carbon nanotubes.
Figure 9: Histograms of outer diameter of carbon nanotubes (a) and Fe3C particles encapsulated by carbon nanotubes (b).
Emergent severe acute respiratory distress syndrome caused by adenovirus type 55 in immunocompetent adults in 2013: a prospective observational study.
Introduction Since 2008, severe cases of emerging human adenovirus type 55 (HAdV-55) in immunocompetent adults have been reported sporadically in China. The clinical features and outcomes of the most critically ill patients with severe acute respiratory distress syndrome (ARDS) caused by HAdV-55 requiring invasive mechanical ventilation (IMV) and/or extracorporeal membrane oxygenation (ECMO) are lacking. Methods We conducted a prospective, single-center observational study of pneumonia with ARDS in immunocompetent adults admitted to our respiratory ICU. We prospectively collected and analyzed clinical, laboratory and radiological characteristics, sequential tests of viral load in the respiratory tract and blood, treatments and outcomes. Results A total of five consecutive patients with severe ARDS and confirmed HAdV-55 infection were included. All five patients were immunocompetent young men with a median age of 32 years. The mean time from onset to dyspnea was 5 days. Arterial blood gas analysis at ICU admission revealed profound hypoxia; the mean partial oxygen pressure/fraction of inspired oxygen was 58.1. The mean durations from onset to a single-lobe consolidation shown on chest X-rays (CXRs) and from the first positive CXR to bilateral multilobar lung infiltrates were 2 days and 4.8 days, respectively. The viral load was higher than 1 × 10^8 copies in three patients and 1 × 10^4 in one patient. It became negative in the only patient who survived. The mean durations until noninvasive positive pressure ventilation (NPPV) failure and IMV failure were 30.8 hours and 6.2 days, respectively. Four patients received venovenous ECMO. Four (80%) of the five patients died despite receiving appropriate respiratory support. Conclusions HAdV-55 may cause severe ARDS in immunocompetent young men. Persistent high fever, dyspnea and rapid progression to respiratory failure within 2 weeks, together with bilateral consolidations and infiltrates, are the most frequent clinical manifestations of HAdV-55-induced severe ARDS. Viral load monitoring may help predict disease severity and outcome. The NPPV and IMV failure rates were very high, but ECMO may still be the respiratory support therapy of choice. Trial registration Clinicaltrials.gov NCT01585922. Registered 20 April 2012
Introduction
Human adenoviruses (HAdVs) are notorious pathogens in people with compromised immune function and a frequent cause of outbreaks of acute respiratory disease among young children. Life-threatening adenoviral pneumonia has previously been documented among military trainees, patients with AIDS and transplant recipients [1-5]. Human adenovirus type 55 (HAdV-55), which is emerging as a highly virulent pathogen causing acute fatal adenoviral pneumonia among immunocompetent adults in China, has gained increasing attention [6]. HAdV-55 is a newly identified, emergent acute respiratory disease pathogen that caused two recent outbreaks, in China in 2006 [7] and in Singapore in 2005 [8]. In 2011, this pathogen apparently re-emerged in Beijing, China, causing several cases of severe community-acquired pneumonia [9]. This pathogen was fully characterized by whole-genome sequencing [10]. Comparative studies showed that the ability of HAdV to cause severe disease may relate to the serotype: pneumonia induced by HAdV-55 has been reported to be more frequently severe than that caused by other serotypes (HAdV-3, HAdV-7 and HAdV-14) [6].
Current knowledge of HAdV-55-induced severe acute respiratory distress syndrome (ARDS) requiring invasive mechanical ventilation and/or extracorporeal membrane oxygenation (ECMO) support in immunocompetent adults is derived from single case reports or relatively small, single-center series. As a result, little information is available on HAdV-55 pneumonia complicated with severe ARDS, the frequency of which is expected to increase in the coming years. Here we describe the clinical features and outcomes of five prospective cases of HAdV-55 pneumonia complicated with severe ARDS in immunocompetent adults in our ICU.
Study population
Beginning in May 2012, a randomized trial of noninvasive positive pressure ventilation (NPPV) in ARDS patients was carried out in our center (ClinicalTrials.gov ID: NCT01585922). From May 2012 to April 2014, all adult patients with ARDS caused by pneumonia who were admitted to the respiratory ICU of Beijing Chao-Yang Hospital were prospectively enrolled. Severe ARDS was diagnosed according to the Berlin definition: (1) developing within 1 week of a known clinical insult or new or worsening respiratory symptoms; (2) bilateral opacities not fully explained by effusions, lobar and/or lung collapse, or nodules; (3) respiratory failure not fully explained by cardiac failure or fluid overload; (4) partial oxygen pressure/fraction of inspired oxygen (PaO2/FiO2) ≤100 mmHg with positive end-expiratory pressure (PEEP) ≥5 cmH2O; and (5) a chest radiograph with three or four quadrants with opacities. Patients with HAdV-55 infection and severe ARDS who failed conventional NPPV and invasive mechanical ventilation (IMV) were included in the analysis. This study was approved by the Institutional Review Board of Beijing Chao-Yang Hospital (LLKYPJ2012031). Data were analyzed anonymously. Each patient gave written informed consent for their data to be used for research and publication.
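For illustration, the enrollment logic described above can be written as a simple predicate. This sketch is not part of the study protocol; the field names and the reduction of the radiological and clinical criteria to boolean flags are assumptions made for clarity.

from dataclasses import dataclass

@dataclass
class Candidate:
    onset_within_1_week: bool       # criterion 1
    bilateral_opacities: bool       # criterion 2 (not fully explained by effusions etc.)
    noncardiac_resp_failure: bool   # criterion 3
    pao2_fio2: float                # mmHg, criterion 4 (measured with PEEP >= 5 cmH2O)
    peep: float                     # cmH2O
    opacity_quadrants: int          # criterion 5

def severe_ards(c: Candidate) -> bool:
    # Severe ARDS per the Berlin-definition criteria listed in the text.
    return (c.onset_within_1_week and c.bilateral_opacities
            and c.noncardiac_resp_failure
            and c.pao2_fio2 <= 100 and c.peep >= 5
            and c.opacity_quadrants >= 3)

# Example resembling the cohort mean (PaO2/FiO2 of 58.1):
print(severe_ards(Candidate(True, True, True, 58.1, 8.0, 4)))  # True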
Clinical data collection
Clinical information collected by investigators with a standardized data form included the following: demographic characteristics (age and sex), comorbidities, clinical symptoms (fever, cough, sputum, dyspnea, chest pain, rash, nausea, vomiting, abdominal pain, diarrhea and headache), signs (body temperature, heart rate, respiratory frequency, blood pressure and crackles in the lungs), laboratory tests (whole-blood cell count and blood chemistry) and microbiological findings and images of the lung (chest X-ray (CXR) and computed tomography). Concomitant medications, respiratory support, complications and outcomes were also recorded.
Microbiological tests
Patients' specimens, including sputum, whole blood and serum samples, were collected upon admission and during hospitalization. Microbiological tests were performed at the Department of Infectious Disease and Clinical Microbiology in our center, and the detection methods used were described in our previous report [6]. Common viruses causing respiratory illness were screened using a kit with 15 different viral assays. Serum samples were tested for Mycoplasma pneumoniae, Chlamydia pneumoniae and Legionella pneumophila antibodies. All patients had their HAdV-55 infection confirmed by RT-PCR assay. Partial sequences of the hexon gene were analyzed to type the phylogeny of HAdV-55 strains. Adenoviral load testing was also performed on both respiratory specimens and blood by multiplex RT-PCR assay.
Criteria for human adenoviral pneumonia
Viral pneumonia was diagnosed based on the presence of HAdV detected in sputum or throat swab samples by molecular methods.
Statistical analysis
Continuous variables were summarized as mean ± standard deviation (SD) or median (interquartile range).
Results
During the study period, a total of eight patients diagnosed with HAdV infection and respiratory failure were admitted to our ICU, and seven of them received a diagnosis of ARDS. Five consecutive patients with severe ARDS with confirmed HAdV-55 infection were admitted to our ICU between April and July 2013. They were included in the analysis. The other two patients had mild ARDS and were infected with other types of HAdVs.
Demographics
All five patients were immunocompetent young men with a median age of 32 years (range, 28 to 40 years). All of the patients shared a B blood type and came from the same city: Baoding city, Hebei province, northern China. All patients had no exposure to farm animals, corn or hay. Patient 3 had tuberculosis pleuritis and received antituberculosis therapy at ICU admission. His blood tests, including the T-SPOT tuberculosis assay (Oxford Immunotec, Marlborough, MA, USA) and antibody of Mycobacterium tuberculosis, were negative.
Clinical characteristics
Flulike symptoms, such as fever, cough and little sputum, were commonly observed at the onset of illness. All patients presented with a high fever, with a mean body temperature of 39.5°C (range, 39.0°C to 40.0°C), which persisted for 8 days (range, 6 to 11 days). Productive cough was observed in two patients. Dull substernal chest pain and rash were also observed in two patients. All patients had dyspnea. The mean time from onset to dyspnea was 5 days (range, 1 to 10 days). After the onset of dyspnea, patients usually progressed to respiratory failure or hypoxemia. The mean time from onset to ICU admission was 9.6 days (range, 8 to 11 days) (Table 1). All patients had tachypnea when admitted to the ICU, with a mean rate of 43 breaths per minute (range = 38 to 52). Arterial blood gas analysis at ICU admission revealed profound hypoxia, with a mean PaO2/FiO2 of 58.1 (range = 49 to 62.5). White blood cell counts were low or in the normal range. All patients had elevated serum aspartate aminotransferase (AST), lactate dehydrogenase (LDH) and hydroxybutyrate dehydrogenase (HBDH) (Table 1). At admission, all patients' levels of immunoglobulin (serum immunoglobulins G and M) and components C3 and C4 were in the normal range.
Four patients had lower than normal T-cell subset counts (Table 2).
Radiographic features
CXRs revealed multiple bilateral lobar or segmental consolidation in the lungs of all five patients, and radiographic lesions progressed rapidly after ICU admission (Figure 1). Three patients were examined by high-resolution computed tomography (HRCT). Unilateral or bilateral consolidations and infiltrates were found on HRCT scans of all three of these patients. Consolidations within a single lobe or several lobes with a clear border and air bronchogram were the most common findings on HRCT scans. Nodules, patches, pleural effusion, abscess and a cavity were also visualized by HRCT (Figure 2). The mean duration from onset to a single-lobe consolidation on CXRs was 2 days (range = 1 to 5 days). The mean duration from the first positive CXR to bilateral multilobar lung infiltrates was 4.8 days (range = 4 to 7 days).
Detection of adenoviruses by RT-PCR
All patients had HAdV-55 viremia. In four of the five patients, the virus was first detected in endotracheal aspirate (ETA) samples. The time between initial detection of adenoviruses in ETA samples and positive results for HAdV-55 nucleic acid in the blood was 1 to 10 days (Table 3). Virus DNA copies in ETAs were determined for all patients during their ICU stay. The viral load was higher than 1 × 10^8 copies in three patients and 1 × 10^4 in one patient. The viral load became negative in the only patient who survived. In the four patients who did not survive, DNA copies did not decrease, even with antiviral therapy (Figure 3).
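The prognostic reading of these serial measurements amounts to asking whether log10 DNA copy numbers fall under therapy. A minimal sketch, using invented placeholder trajectories rather than patient data:

import math

def log10_trend(copies):
    # Per-sample log10 viral loads and whether the trajectory is strictly declining.
    logs = [math.log10(c) for c in copies]
    declining = all(later < earlier for earlier, later in zip(logs, logs[1:]))
    return logs, declining

# Hypothetical series: a falling load (as in the survivor) vs. a persistent one.
for label, series in [("falling", [1e8, 1e6, 1e3]), ("persistent", [1e8, 2e8, 1e8])]:
    logs, declining = log10_trend(series)
    print(label, [round(v, 1) for v in logs], "declining:", declining)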
Respiratory support
Oxygenation was not maintained with conventional NPPV or IMV support in any of the patients. The mean duration until NPPV failure was 30.8 hours (range = 22 to 48 hours), and the mean time until IMV failure was 6.2 days (range = 2 to 13 days) (Table 1). Four patients received venovenous ECMO to maintain oxygen saturation, and one patient refused ECMO support and received high-frequency oscillatory ventilation instead. Table 4 gives the oxygenation data of patients before and after venovenous ECMO support.
Antimicrobiological therapy and outcome
All patients received antiviral therapy, including acyclovir (10 mg/kg, every 8 hours, intravenous drip), ganciclovir (5 mg/kg, every 12 hours, intravenous drip) and ribavirin (250 mg, twice daily, intravenous drip). Considering that bacterial coinfection may combine with a severe viral infection, broad-spectrum intravenous antibiotics were given to all patients. Tests for bacterial pathogens were negative for only one patient (Table 3). Four (80%) of the five patients died. Among the four patients receiving venovenous ECMO, only one patient survived. The other four patients died due to ARDS, Aspergillus fumigatus coinfection, septic shock and catheter-related bloodstream infection due to Acinetobacter baumannii, respectively.
Discussion
To the best of our knowledge, this is the first cohort observational study on the clinical characteristics of patients with severe ARDS caused by emergent HAdV-55 infection, and also the first to evaluate a viral load test for monitoring the response to therapy and for prediction of patient outcome. The following are the main findings of this study. (1) HAdV-55 may cause severe ARDS in immunocompetent young men with blood type B. All of our patients were from the same city of Hebei province, northern China. (2) Persistent high fever, dyspnea and rapid progression to respiratory failure within 2 weeks, together with bilateral consolidations and infiltrates at the same time, are the most frequent clinical manifestations of severe HAdV-55-induced ARDS. (3) Viral load monitoring may help predict disease severity and patient outcome. (4) The NPPV and IMV failure rates were very high, and ECMO may be the last support method for this group of patients. (5) HAdV-55-induced severe ARDS has a very high mortality rate (80%) despite appropriate respiratory support.
Sporadic severe adenoviral infection in healthy adults has historically been described for serotype 4 [11], serotype 7 [4,12] and, more recently, serotype 14 in the general population and in military trainees [13,14]. HAdV-55 was first completely characterized in Shaanxi, China [7] and then reemerged in Hebei, a province close to Beijing, where it caused several cases of acute respiratory disease [9]. It was presumed that HAdV-55 was a recombinant form of the B2 species of HAdV-14 and HAdV-11 [7,15] due to its sharing a hexon gene with the HAdV-11 and HAdV-14 chassis [16]. The results of our study show that HAdV-55, as an emerging pathogen among immunocompetent adults, may cause severe ARDS.
The prevalence of severe fatal adenoviral pneumonia induced by HAdV-55 in our study is somewhat similar to that described by Cao and colleagues [6]. All cases of reported HAdV-55 in our study were from the same city: Baoding, Hebei province, northern China. They occurred between April and July 2013, just partly overlapping or following the influenza epidemic. The patients with severe disease also came from the same region and were treated during a similar time period, which suggests that HAdV-55 may be an important viral pathogen derived from this region.
Our study results suggest that the following may be clinical features of ARDS caused by HAdV-55: persistent high fever, rapid progression of dyspnea, need for mechanical ventilation support, elevated AST level and rapid progression from unilateral infiltrates to bilateral consolidations. These clinical features are highly similar to those of ARDS caused by other types of HAdV described in previous reports [6,9].
Recent studies have shown that the immune system plays a crucial role in the clearance of HAdV viremia and survival of the host [17]. Chen et al. reported that, in the acute phase of HAdV-55 infection, patients with severe disease may have high levels of dendritic cells and Th17 cells [18]. In our study, the only patient who recovered from severe infection had higher T-cell counts. Three of the five patients had relatively low T-cell counts when admitted. Our results suggest that these three patients may have been relatively immunocompromised and that a lower T-cell count may be a risk factor for HAdV-55 infection in young adults. HAdV-55 DNA was previously reported in 41.2% of patients with severe infection [18]. In our study, HAdV-55 DNA was detected and monitored in all patients with severe ARDS. The initial viral load, and its trend over time, measured as HAdV-55 DNA copies in respiratory tract samples and blood, may indicate the severity of infection and may predict both the response to therapy and the patient outcome.
The use of mechanical ventilation and ECMO in patients with ARDS caused by HAdV-55 has not been detailed in previous studies. In our cohort, we found that severe HAdV-55 infection could cause a rapid progression of respiratory failure, with a very high failure rate for NPPV and IMV. This failure rate may be a result of the large area of consolidation that induced a severe shunt in the lung, which may lead to lack of response to positive pressure ventilation. For patients with severe ARDS, ECMO should be considered a better choice for oxygenation.
Our study has limitations. It is an observational study with no comparison group, so the difference between the severe and modest infections could not be clarified in terms of immune status, clinical features, radiological findings, viral load and treatment effects on respiratory support and antiviral therapy. Sequential dynamic analysis is needed to determine the relationship between HAdV-55 viremia and treatment response.
Hypoxia actively represses transcription by inducing negative cofactor 2 (Dr1/DrAP1) and blocking preinitiation complex assembly.
Hypoxia is a growth inhibitory stress associated with multiple disease states. We find that hypoxic stress actively regulates transcription not only by activation of specific genes but also by selective repression. We reconstituted this bimodal response to hypoxia in vitro and determined a mechanism for hypoxia-mediated repression of transcription. Hypoxic cell extracts are competent for transcript elongation, but cannot assemble a functional preinitiation complex (PIC) at a subset of promoters. PIC assembly and RNA polymerase II C-terminal domain (CTD) phosphorylation were blocked by hypoxic induction and core promoter binding of negative cofactor 2 protein (NC2 alpha/beta, Dr1/DrAP1). Immunodepletion of NC2 beta/Dr1 protein complexes rescued hypoxic-repressed transcription without alteration of normoxic transcription. Physiological regulation of NC2 activity may represent an active means of conserving energy in response to hypoxic stress.
Solid tumors generally have a poor vascular supply, which results in areas of decreased perfusion and hypoxia (1). The hypoxic microenvironment may increase tumor aggressiveness (2), presumably through specific induction of transcription factors such as hypoxia-inducible factor 1 (HIF1) (4) and alterations in gene expression (3). Under normoxic conditions the HIF1α subunit is hydroxylated at proline 564, promoting high-affinity interaction with the von Hippel-Lindau (VHL) tumor suppressor protein and ubiquitin-mediated degradation (5,6). Upon exposure to low oxygen tension, HIF1α is not hydroxylated, is stabilized, and interacts with HIF1β. The HIF1 heterodimer binds to hypoxia-responsive elements and activates the promoters of HIF-responsive genes (7).
In addition to HIF-mediated gene activation, cells under reduced oxygen also limit non-essential cellular processes in order to survive (2). Thus, cells under stress may channel their energy into productive gene expression, whereas non-productive genes and processes are turned off. Gene-specific transcription inhibition can be induced by regulated repressors of transcription (reviewed in Refs. 8 and 9). Certain types of these repressors modify chromatin structure (10), whereas others inhibit transcription through core promoter and general transcription factor interactions (11) (reviewed in Ref. 12). Among this latter group of transcriptional repressors is negative cofactor 2 (NC2α/β, Dr1/DrAP1), which blocks transcription by association with DNA-bound TFIID and inhibits PIC assembly in vitro. NC2α and NC2β are essential genes, and their gene products regulate ~17% of all Saccharomyces cerevisiae genes either positively or negatively (13). In S. cerevisiae, NC2 is required for specific TATA-containing gene repression under times of reduced nitrogen availability (14). Recent work surveying Drosophila promoter elements and studies with purified proteins reveal that NC2 represses transcription from TATA element-containing promoters but activates promoters that rely on downstream promoter elements (DPEs) for regulation (15-17).
Our findings reveal an active mechanism by which mammalian cells can establish inactive promoters in response to hypoxia. We show that the physiological stimulus of hypoxia increases NC2 activity in mammalian cells, concomitant with selective transcription repression. A limited survey of cellular stress conditions reveals that hypoxia may be unique in exploiting this fundamental mechanism of gene regulation.
EXPERIMENTAL PROCEDURES
Cells and Treatments-Hepatoma cells (Hepa 1-6, American Type Culture Collection) and primary, normal human fibroblasts were plated overnight in glass dishes in Dulbecco's modified Eagle's medium with 10% fetal bovine serum at 10,000 cells/cm2, the media was changed, and the cells were transferred into a hypoxic chamber (Bactron 2, Sheldon Labs) for the indicated times. These conditions have been established in order to maintain stable pH and glucose over the course of the experiment. Cell viability, at the end of all treatments, was judged by trypan blue exclusion and found to be >70% even under severe, long-term hypoxic conditions. Cell extracts were prepared as described (18). The drug treatments 150 μM CoCl2 (in H2O, Sigma), 0.5 μg ml−1 doxorubicin (in H2O, Sigma), and 50 μM ALLN (in Me2SO, Calbiochem) were added to fresh media at the indicated times. Ionizing radiation was delivered from a 137Cs source at 3.5 gray/min. RNA Analysis and in Vitro Transcription-Northern blot probes were obtained from the Image Consortium (Incyte) and sequenced prior to use to confirm inserts. Primer extension analyses with a lacZ-specific primer (AFP), luciferase-specific primers (3× HRE and VEGF), a chick β-globin-specific primer, and a chloramphenicol acetyl transferase (CAT)-specific primer (p21) were performed under standard conditions (19). Single-round transcription conditions were performed as published (20), except that a 10-min preincubation period was used as established for supercoiled templates.
Immunodepletion and Protein Add-back-Protein A beads were swollen in RNase-free H2O (250 mg/2 ml H2O) and transferred into 1× phosphate-buffered saline. Antibody (anti-NC2, gift of T. Oegelschlager; anti-TBP, Santa Cruz Biotechnology; anti-CCAAT/enhancer-binding protein, Santa Cruz Biotechnology) was added to the beads at a ratio of 200 ng of antibody/1 ml of 50% bead slurry. The beads were washed twice in nuclear dialysis buffer (NDB; 20 mM Hepes, pH 7.9, 50 mM KCl, 0.2 mM EDTA, and 20% glycerol). Nuclear extract was added to the beads at 1.5× the volume of washed, antibody-bound beads and incubated for 1 h at 4°C with rocking. Beads were removed by centrifugation, and the supernatant was used as an immunodepleted extract.
TFIID, PC4, and FLAG-RNA polymerase II were purified and added to transcription reactions as described previously (21). Recombinant DR1/NC2 protein was purchased from ProteinOne, College Park, MD.
RESULTS AND DISCUSSION
Recapitulation of Hypoxia-mediated Transcription Regulation in Vitro-We investigated mechanisms of hypoxia-regulated gene expression using in vitro transcription. Whole-cell transcription extracts were prepared from murine hepatoma (Hepa 1-6) cells that are p53 wild-type and AFP-positive. Activation of AFP is associated with hepatocellular carcinoma and various malignant tumors. Extracts were prepared after cell exposure to ionizing radiation, 0.01% oxygen, the hypoxia-mimetic CoCl2 (150 μM), or the proteasome inhibitor ALLN (Fig. 1A). In vitro transcription of AFP in each extract, including control normoxia and HeLa extracts, revealed complete inhibition of transcription only in the hypoxic cell extract.
The transcription inhibitory response to hypoxia was neither tissue-specific nor restricted to transformed cells. We used extracts of normal human fibroblast cells grown as primary cultures (Fig. 1B) in normoxia, hypoxia, or the hypoxia-mimetic cobalt chloride (150 μM) and found an equally dramatic inhibition of transcription, specifically in the hypoxic extracts. This repression in hypoxic extracts is dominant in mixing experiments, suggesting the function of an inhibitor rather than the loss of an essential protein. Control and hypoxic hepatoma extracts were mixed together at the indicated ratios and tested for transcriptional activity on the AFP template (Fig. 1C). Note that even at 20% input (lane 3) the hypoxic extract inhibited transcription activity. The dominant function of the hypoxic extract supports a model of active repression versus simple loss of energy under hypoxic conditions.
Hypoxic Repression Is Widespread but Selective-
The general nature of hypoxia-mediated repression was revealed by transcription of three gene templates that share few if any trans-acting regulatory elements upstream of the proximal promoter, i.e. the p21/WAF1 gene (which is p53-activated) (22), AFP (which is repressed by p53) (19), and chick β-globin (which is not regulated by p53) (23) (Fig. 2A). Transcription of each gene was strongly inhibited in hypoxic cell extracts. By contrast, treatment of cells with doxorubicin (0.5 μg ml−1 for 24 h) robustly induced p53 in these cells (data not shown) but had varied effects, including none (AFP), 2-fold inhibition (β-globin), and induction of several transcripts from aberrant start sites (p21). These data confirmed hypoxia-dependent repression of transcription conferred through a widely conserved regulatory element independently of p53 activation.
Though multiple genes were repressed in hypoxic extracts, this repressive effect is selective. As observed in vivo, the transcription of gene templates containing hypoxia-inducible HIF1 regulatory elements is activated in vitro rather than repressed (Fig. 2B). One of these constructs contains three copies of an HRE from the 3′-untranslated region of the erythropoietin (EPO) gene (24) fused to a heterologous promoter/luciferase reporter (3× HRE). The other hypoxia-activated template is a natural VEGF promoter and upstream regulatory region (from −2275 to +51) plasmid containing a single HRE and a luciferase reporter (VEGF). This construct lacks the identified VEGF RNA stability element (25). The 3× HRE and VEGF constructs are transcriptionally induced when transfected into cells and incubated under hypoxic conditions (data not shown). Each of these DNA templates was transcribed in the same extracts under identical conditions and exhibited a different profile of time- and "dose"-dependent responses to hypoxia. AFP showed hypoxic repression, with moderate hypoxia (2% oxygen) yielding moderate repression, whereas severe hypoxia (0.01% oxygen) showed increasing repression with time, and CoCl2 showed no effect. The 3× HRE template showed hypoxic activation: moderate hypoxia showed induction, severe hypoxia clearly induced expression at 12 h, by 24 h expression was reduced to control levels, and CoCl2 treatment also showed significant induction. In contrast, the VEGF promoter showed induction that required more extreme hypoxic conditions. The natural VEGF HRE-containing, TATA-less promoter transcribed poorly and was slightly activated at moderate hypoxia but was activated in severe hypoxia slightly at 12 h, increasing at 24 h, and was not activated by CoCl2. This pattern of hypoxia-induced VEGF expression, including the limited response to CoCl2, matched that observed for endogenous VEGF activation (Fig. 2C and Ref. 26).

FIG. 1. B, extracts from normal human fibroblasts also show hypoxia-dependent transcriptional repression. Control, hypoxic (0.01% for 24 h), and CoCl2-treated (50 μM for 24 h) cells were used to make extracts for in vitro transcription of AFP as indicated. C, hypoxic repression is dominant in mixed extracts. Control and hypoxic Hepa 1-6 extracts were mixed as indicated (micrograms of extract protein of each, total micrograms of protein held constant) and tested for transcription of AFP. All newly synthesized transcripts were quantitated by primer extension. Molecular weight standards (MW) are radiolabeled ΦX174 DNA digested with HaeIII (Invitrogen).
We determined by Northern blot analysis that hypoxia can result in repression of specific mRNAs in vivo. Fig. 2C shows that, in primary human fibroblasts exposed to severe hypoxia, specific genes show decreased mRNA levels, whereas others are unchanged or activated by hypoxia. The selective repression of certain genes with the activation of others suggests an active process rather than a passive loss of macromolecular synthesis under conditions of reduced energy. This decline of steady-state RNA levels for a variety of genes, combined with in vitro transcription results, supports a role for hypoxia-regulated transcription repression through widely conserved regulatory elements.
Hypoxia-induced Repression Acts at the Core Promoter-Because a diverse set of templates was repressed (Fig. 2A), we focused on the promoter region and examined PIC assembly in control and hypoxic cell extracts (Fig. 3A). Under established conditions for in vitro single-round transcription (20), transcription is dependent solely on protein-DNA interactions (PIC assembly) established (in the absence of NTPs) before washing the protein-bound DNA template and adding Sarkosyl detergent, which prevents subsequent protein binding (see Fig. 3A, diagram). Addition of Sarkosyl after PIC assembly (Fig. 3A, lanes 4-8) revealed that hypoxic cell extract is incapable of functional PIC assembly (Fig. 3A, lanes 6 and 8). PICs assembled in hypoxic extract could not elongate (Fig. 3A, lane 8), nor could transcription be rescued by control extract when DNA binding of control extract proteins is precluded (Fig. 3A, lane 6). Control extract rescue of hypoxic transcription does occur in the absence of Sarkosyl (Fig. 3A, lane 3). Importantly, we found that hypoxic extract did not inhibit transcription elongation by PICs preassembled in control extract (Fig. 3A, lane 5). Thus, any inhibitory factors present in the hypoxic cell extract must act during PIC assembly rather than by altering preassembled transcription initiation complexes or inhibiting transcription elongation.
One direct consequence of complete PIC assembly is phosphorylation of the RNA polymerase II C-terminal domain (CTD) before initiation (27,28). Western blot analyses with specific antibodies raised against the RNA polymerase II amino terminus (N-20), CTD phosphoserine 5 (H14), CTD phosphoserine 2 (H5), and unmodified CTD (C-19) (29) revealed marked differences between the hypoxic cell extract and other control or treated cell extracts (Fig. 3B). All of the cell extract preparations contained similar levels of unphosphorylated RNA polymerase II (IIA form) as revealed by the N-20 and C-19 antibodies. However, there was a dramatic absence of RNA polymerase II with its CTD phosphorylated (IIO form) at serine 5, and very low amounts of IIO phosphorylated at serine 2, in the hypoxic cell extract. Comparison of blots probed with phospho-specific antibodies and functional analysis of transcription in vitro (Fig. 1) revealed a correlation between loss of RNA polymerase II CTD phosphorylation and repression of transcription. RNA polymerase II CTD hypophosphorylation at both serines 5 and 2 in hypoxic extracts is most consistent with inhibition of initiation (30). These findings support a model in which PIC assembly is affected by hypoxia, a dysfunction marked by the lack of RNA polymerase II CTD phosphorylation.

FIG. 2. Hypoxia effects selective transcription repression or activation. A, several tested templates showed hypoxic repression (+); p21, AFP, and β-globin genes were transcribed in Hepa 1-6 extract from control (lanes 2, 5, and 9), hypoxic (lanes 3, 6, and 10), or doxorubicin-treated cells (lanes 4, 7, and 11). Transcription reactions without extract (lane 1) and with HeLa extract (lane 8) were used as controls. B, hypoxia-induced genes showed transcriptional activity in vitro. AFP/lacZ, 3× HRE, and VEGF/luc were transcribed under identical conditions in extracts from cells under control conditions, CoCl2, 24 h under 2% oxygen, and 12 and 24 h under 0.01% oxygen treatment. C, Northern blot analysis of total RNA from normal human fibroblasts exposed to control, hypoxia, or cobalt chloride. Probes are indicated (p53R2, ribonucleotide reductase isoform 2; Hif1, hypoxia-inducible factor 1α), and methylene blue-stained 18S RNA was used as a loading control.
Assembly of a PIC is regulated at numerous levels and involves the interactions of many proteins (recently reviewed in Refs. 8 and 31). We surveyed a number of purified proteins for their potential ability to restore transcription function to the hypoxic extract. These proteins included TFIID, TFIIH, mediator complex, RNA polymerase IIA, RNA polymerase IIO, immunopurified RNA polymerase II (21), and recombinant TBP (Fig. 3C, and data not shown). Among these factors, only relatively high concentrations of recombinant TBP could overcome hypoxia-mediated transcription repression to any degree (to 15% of the control level; Fig. 3D) when added alone. FLAG-tagged RNA polymerase II (Fl-RNA Pol II; Ref. 21) alone was also unable to enhance hypoxic transcription (Fig. 3C, lanes 4-6). However, the combination of both TBP and purified RNA polymerase II (Fig. 3C, lanes 7-9) reversed hypoxia-mediated repression and increased transcription of hypoxic extract to control activated transcription levels (Fig. 3C, lane 1). From these data, we hypothesized that high levels of recombinant TBP partially squelched the repressive effect of hypoxia to regain a basal level of transcription (compare with transcription driven by purified RNA polymerase II plus PC4 and TFIID; Fig. 3D, lane 7). The addition of both FLAG-RNA polymerase II, which is not hyperphosphorylated (Fig. 3B), and TBP could effectively rescue hypoxic transcription to control activated levels. These data suggest that a hypoxia-induced repressor interacted with both transcription factors and/or that the RNA polymerase II preparation contained proteins that augmented the ability of TBP to squelch inhibition and promote activated transcription.
Hypoxia Induces Accumulation of a Negative Regulator of PIC Assembly-One of several negative regulators of PIC assembly is NC2α/β (or Dr1/DrAP1 protein) (12,32,33). Additionally, interactions between NC2 and RNA polymerase II, which affect RNA polymerase II CTD phosphorylation, have been reported previously (34). In vitro experimentation supports a model wherein the NC2 protein associates with TBP bound at TATA boxes, which inhibits further assembly of the PIC. We examined this candidate repressor of hypoxic transcription by Western blot analysis with antibodies specific for the NC2 subunits Dr1/NC2β and DrAP1/NC2α. We found that both Dr1/NC2β and DrAP1/NC2α protein levels were elevated in extracts of hypoxia-treated hepatoma cells (Fig. 4A). NC2 is likely regulated post-transcriptionally, as levels of both NC2α and NC2β mRNA remain unchanged with hypoxic treatment (data not shown). In these same extracts, TBP levels are unchanged, and AFP protein levels are reduced, reflecting the transcription response of AFP under hypoxic conditions. The addition of recombinant Dr1/NC2β to control transcription extracts effected concentration-dependent repression of the control extract (Fig. 4B, lanes 1-7) to levels observed in hypoxia-incubated cell extracts (lanes 6-10). The ability of the single subunit to repress transcription in vitro has been previously shown with multiple gene templates (reviewed in Ref. 35).

FIG. 3. Hypoxic repression occurs at the core promoter. A, hypoxic extracts are dysfunctional for PIC formation. Immobilized AFP templates were preincubated with control (C; lanes 1, 2, 4, 5, and 7) or hypoxic extract (H; lanes 3, 6, and 8) without NTPs to allow PIC assembly. Protein bead-DNA complexes were washed, and transcript elongation was initiated with an NTP addition to control (lanes 1, 3, 4, and 6), hypoxic (lanes 2 and 5), or no extract (lanes 7 and 8) and plus (+; lanes 4-8) or minus (−; lanes 1-3) 0.025% Sarkosyl (Sigma). B, RNA polymerase II CTD is hypophosphorylated under hypoxia. Fifty micrograms of total extract protein from control, hypoxia, doxorubicin, CoCl2, ALLN, and ionizing radiation exposed cells alongside immunopurified epitope-tagged RNA polymerase II (Fl-RNA Pol II, lane 1) were immunoblotted sequentially with RNA polymerase II N-terminal antibody N20 and CTD-specific antibodies C19 and phosphoserine H14 and H5 antibodies (Research Diagnostics). C, hypoxic repression overcome by recombinant TBP and immunopurified RNA polymerase II. TBP protein alone (lane 3; 25 ng), increasing amounts of purified FLAG-RNA polymerase II (lanes 4-6; 75, 150, and 300 ng, respectively) alone, or a combination of both TBP (25 ng) and FLAG-RNA polymerase II (lanes 7-9; 75, 150, and 300 ng, respectively) were added to hypoxic extract (lanes 2-9). Lane 1 shows equal total protein levels of control normoxic extract transcription. D, addition of high levels of TBP squelches hypoxia-mediated repression to basal transcription levels. TBP (lanes 3-5; 25, 50, and 100 ng) was added to hypoxic extract (lanes 2-5). Transcription levels in the presence of TBP are comparable with basal levels generated by purified FLAG-RNA polymerase II (lanes 6 and 7; ~135 ng) and TFIID (lanes 6 and 7; 0.75 ng) in the presence of factor PC4 (lane 7; 100 ng).
NC2 (Dr1/DrAP1) Induction Blocks PIC Assembly-We assayed for endogenous NC2 activity by comparing the PIC components (TBP, TFIIB, and Dr1/NC2β) present in control and hypoxic cell extracts (Fig. 4C) to those bound to promoter DNA, as described previously (Fig. 4C, diagram, and Ref. 36), under conditions for single-round transcription (Fig. 2A). Similar levels of total TBP (or TFIID) and TFIIB proteins were present in hypoxic and control extracts, but NC2β/Dr1 was increased by hypoxia (Fig. 4C, lanes 4-6). Comparison of soluble extract to DNA-bound PIC proteins (lanes 1-3, long exposure) revealed an inverse relationship between NC2 and TFIIB in the PICs (lanes 2 and 3). Analyses of mixed hypoxic/control (1:1) extracts support a concentration-dependent profile of proteins specifically bound at the PIC (TBP and NC2) versus those excluded from the PIC (TFIIB) rather than effects on protein degradation or modification.
We extended the PIC analysis to RNA polymerase II and its phosphorylated forms bound to the DNA versus total protein (Fig. 4D). Again, there were similar levels of unphosphorylated RNA polymerase IIA in both hypoxic and control extracts (N-20 antibody) and sharply reduced hypoxic levels of CTD-phosphorylated RNA polymerase II (H14 antibodies). Parallel analysis of PICs revealed that no RNA polymerase II was bound in hypoxic-assembled PICs. Therefore, in the presence of NC2 (Dr1/DrAP1), TFIIB and RNA polymerase II cannot assemble as part of a functional PIC. The reported role of the NC2 (Dr1/DrAP1) complex (reviewed in Refs. 12, 35, and 37) in repressing transcription by blocking entry of TFIIB during PIC assembly is consistent with our results of PIC analysis in hypoxic extracts.
To determine whether NC2 protein complexes were primarily responsible for transcription repression induced by hypoxia, we immunodepleted NC2 protein complexes from hypoxic and control cellular extracts. Hypoxic and control extracts were incubated with antibody-coated beads, the beads were removed, and the depleted extracts were assayed for transcription function (Fig. 4E). Incubation with nonspecific antibody did not alter the transcription properties established for hypoxic and control extracts. Depletion of TBP protein from control extract obliterated transcription and demonstrated the effectiveness of immunodepletion. Incubation with NC2 antibody-coated beads restored transcription capability to hypoxic cell extract and did not alter the ability of control extract to transcribe in vitro (lanes 4 and 5). These data show that either NC2 or an NC2-associated protein complex is an essential component of hypoxia-mediated transcription repression, the removal of which rescues transcription function.
The NC2 (Dr1/DrAP1) complex could assume multiple roles in both transcription repression and activation due to either post-translational modifications or association with specific protein binding partners. Interpretation of the NC2/TBP/TATAA DNA ternary complex crystal structure suggests that transcriptional activators or co-activators could overcome NC2-mediated inhibition of functional PIC assembly (38). Post-translational modification may be "stressor-" or target site-specific, as phosphorylation of NC2 by casein kinase II inhibits binding to DNA in general and increases the specificity of TBP interaction (39). Genome-wide expression analyses and chromatin immunoprecipitation of temperature shift-induced NC2 protein in S. cerevisiae revealed NC2 association with both positively and negatively regulated promoters in response to the stress of heat shock (13). More recently, a role for NC2 in stabilizing TBP-DNA binding to promote basal transcription, and the displacement of NC2 to effect activated transcription, has been demonstrated both in vivo and in vitro for S. cerevisiae (40).

FIG. 4. Hypoxia-induced NC2 inhibits PIC assembly. A, NC2 is induced in hypoxic extracts. Fifty micrograms of control and hypoxic Hepa 1-6 extracts were immunoblotted with both anti-AFP and anti-TBP antibodies (top panel) and sequentially with anti-Dr1 and anti-DrAP1 monoclonal antibodies (bottom two panels). B, control (C, lanes 1-5) and hypoxic (H, lanes 6-9) extracts were supplemented with recombinant NC2 (Dr1) protein (lanes 2-5 and 7-9; 2, 20, 40, and 100 ng, respectively), and transcription was performed. C, PIC assembly is incomplete in hypoxic extracts. Immobilized AFP templates were incubated with control, hypoxia, or a 1:1 control/hypoxia mixture without NTPs for 10 min and processed for immunoblotting of PIC-assembled proteins with anti-TBP, TFIIB (both from Santa Cruz Biotechnology), and NC2 polyclonal antibodies. Total extracts were immunoblotted as a control. D, PIC assembly and immunoblotting for the bound RNA Pol IIA and Pol IIO forms were similarly performed for control, hypoxic, or mixed extracts. E, transcription levels in control extracts were unaffected by immunodepletion using nonspecific (NS; CCAAT/enhancer-binding protein) or NC2 antibodies but were severely inhibited by anti-TBP depletion. Hypoxic extracts (H) were reactivated for transcription by depletion with an anti-NC2 antibody (Ab) but were not reactivated by immunodepletion with nonspecific or TBP antibodies.
A mechanism for a dual role in positive and negative regulation of transcription has been proposed for NC2 (15,16) (reviewed in Ref. 31). NC2, which represses PIC assembly at TATA-containing core promoters, activates distal promoter element-regulated promoters in Drosophila (15). This model presents a potential paradigm for mammalian cells in that one induced protein such as NC2 could act as a repressor at many TATA-containing promoters but as an activator at different subsets of genes, e.g. those regulated by HIF1, evoking a timely and energy-efficient response to stress. Potential interaction(s) between HIF-responsive gene promoters and NC2 protein therefore becomes an important question for future investigations.
|
v3-fos-license
|
2023-03-16T15:37:35.282Z
|
2023-03-02T00:00:00.000
|
257555815
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://brill.com/downloadpdf/journals/orie/aop/article-10.1163-18778372-12340019/article-10.1163-18778372-12340019.pdf",
"pdf_hash": "a4c37f96fa5706af883e0dd34fe9893fba810ddf",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43519",
"s2fieldsofstudy": [
"History"
],
"sha1": "a9f7c292226f65cd08844829179eb3dee670f748",
"year": 2023
}
|
pes2o/s2orc
|
ʿŪd, Lute: An Ancient Greek Perspective on Their Prehistory
Starting from early Arabic sources, the absolute pitch of the Early Abbasid ʿūd is considered and related to evidence on pitch usage in Roman-period sources. Similar instruments, it is argued, must have existed already in late antiquity. Iconographic evidence takes us back to late Classical Greece, whose music would have provided especially fertile ground for designing such a lute. In contrast to the traditional tuning in fifths and fourths throughout, lutes with equidistant design had also existed for a long time, likely also on precursors of the ʿūd. The association of this style with the name of Manṣūr Zalzal must therefore be reassessed.
First we need to agree on a working definition of what the title is meant to refer to. Al-ʿūd is the well-known Arabic name for the short-necked lute that has formed the backbone of Islamicate music theory from its earliest extant writings on; a name that lives on even though the instrument underwent a major refinement when it lost its frets during the Middle Ages. In the early period, others would have called it barbaṭ. The question of its prehistory has customarily been phrased as that of the origins of the short-necked lute, or especially that with a roughly pear-shaped body.1 It has been traced back to Central Asia in the first centuries CE, with early depictions in Gandhāran art.2 A few centuries later it had spread to China and Japan in the East as well as Europe in the West, where it subsequently received various modifications according to the diverse musical needs of its new hosts.
However, while the shape of a lute may be of some consequence regarding its sound and possibly available playing techniques, it has little bearing on the accessible pitches. The boundaries of the possible gamut are largely determined by the vibrating lengths of the strings, their material, and the length of the fretted (or fingered) region on the neck. The characteristics of the available scales, on the other hand, depend on the distribution of the frets (if there are frets), and to a lesser degree on the intervals between the open strings.
As a consequence, a music-historical viewpoint that is interested in the development of tonalities (in the widest sense of the word) cannot achieve its goal by looking for pear shapes. Modern guitars, for instance, especially within popular music from the late twentieth century CE on, come in an astounding variety of shapes, which would hardly serve as meaningful criteria of classification for the said musical purposes. Less extravagantly, grand and upright pianos are much the same instrument regarding what music can be played on them. There is no doubt that similarities of instrument shapes can tell us a lot about dissemination. One must only avoid the pitfall of a reverse conclusion: dissimilar shapes need not necessarily indicate dissimilar musics. A history of ancient lutes that wants to access overarching questions beyond outward appearances must therefore acknowledge the musical primacy of fretting, vibrating length, number of strings and their material.
So we may need to define a more tonality-centered concept of 'ʿūd' for our present purpose: what kind of instrument would an Early Abbasid ʿūd player be able to take up and play without too much adjustment? As is well known, human fingers are very evenly spaced and do not encourage a system of frets that would combine tone and semitone steps. They might therefore play a sequence of either three tones or three semitones. The former option, however, would require short lutes of comparatively high pitch. Otherwise, we are left with precisely the sequence of three semitones that we find attested, which is comfortably available at the typical string lengths of short-necked lutes. Three subsequent semitones, of course, can never all belong to the same heptatonic (or, a fortiori, pentatonic) scale, not even in the ancient Greek 'chromatic genus', which incorporated a sequence of merely two consecutive semitones. As a consequence, modulating capabilities will come built into the instrument, as an almost inevitable result of human physiology.
The space between nut and first fret, in contrast, is not subject to physiological restraints. Another semitone at this position would appear a waste of pitch range; on the other hand, an interval larger than a tone would lead to gapped scales, at least as far as the diatonic is concerned: the genus that formed the implicit and explicit basis of ancient Greek music, that is exclusively attested in the cuneiform system, that we find once more dominant in the Roman period, and that is presupposed in the early Arabic sources, starting from the ninth century. An interval of a tone between nut and first fret is therefore again the most natural choice.
As a result, the interval between the open string and the small-finger fret, consisting of a tone and three semitones (TSSS), forms a fourth, precisely the interval that bounds the tetrachord, the fundamental building block of ancient Greek harmonic theory, its reflexes in early Arabic musical writings, and ultimately in maqām music. A lucky coincidence?
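The arithmetic behind this coincidence is quickly verified. The following minimal sketch (illustrative Python, not part of the source) multiplies out the TSSS steps using the interval sizes of a tuning generated purely by fifths and fourths: the 9:8 whole tone and the two semitone sizes such a tuning produces, the leîmma (256:243) and the apotomḗ (2187:2048), both discussed further below.

```python
from fractions import Fraction

# Interval sizes in a tuning generated purely by fifths and fourths
# (the article's 'pantonic' tuning): one tone, two semitone sizes.
TONE    = Fraction(9, 8)        # whole tone
LEIMMA  = Fraction(256, 243)    # smaller semitone
APOTOME = Fraction(2187, 2048)  # larger semitone

# TSSS: a tone followed by three semitones (two smaller and one larger,
# in whichever order they occur between the frets).
fourth = TONE * LEIMMA * APOTOME * LEIMMA
assert fourth == Fraction(4, 3)   # exactly a perfect fourth
print(fourth)                     # -> 4/3
```

Whatever the order of the two semitone sizes, the product is the pure fourth 4:3.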
Whenever frets are made the simple way, either by binding string material or gluing a piece of wood perpendicularly across the neck, each fret will affect all strings in (nearly) the same way. The scale fragment defined by the frets thus replicates at intervals that correspond to the relative tuning of the strings. What these intervals may most reasonably be also depends on their number - and vice versa. Seeing that the Ancient Near East, the ancient Greeks and Romans and the Arabic writers all considered notes an octave apart more or less functionally identical, one might primarily consider configurations that warrant replication at the octave. Other factors that may govern the tuning of the open strings are: an inclination to exhaust the possible pitch range, on the one hand, and the maximization of consonant intervals between simultaneously available notes, on the other.7 Given the identical microscales on all strings, the latter was most straightforwardly assured by tuning these in consonances, which in the ancient understanding primarily included octaves, fifths and fourths. A combination of a fifth and a fourth additionally ensures a universal duplication of pitches at the octave.
Alternating fourths and fifths thus might extend the scale to infinity, were it not for the physical limits of the strings, which break when stretched beyond a certain pitch, and cease to give a musically useful sound when slackened too much. Open strings of identical material and the length that we find on the ʿūd may support a range of hardly more than an octave plus a fifth; a larger gamut may be realized by combining different materials, notably gut for the bass (bamm) and silk for the treble (zīr) string8 - an option that may have become available in late antiquity, though it is, as far as I know, only attested as late as the Arabic sources. At any rate, with strings of uniform material, a tuning of the considered pattern could hardly exceed four strings, spanning either an octave plus a fourth (two fourths separated by a fifth) or an octave plus a fifth (two fifths separated by a fourth).9

As we have seen, both the general distribution of the frets and the number of strings of the early ʿūd can be derived from a small set of partially anatomical, partially musical axioms which we know to have applied. However, the same is not true for the tuning of the open strings. Instead of an 'expected' alternation of fifths and fourths, the sources agree that the historical ʿūd was most typically tuned in fourths throughout. As a result of the missing 'disjunctive' tone that would complete the octave after two fourths, the pair of treble strings is out of phase with the two bass strings. This shift of a tone in fact separates the two halves of the instrument by two steps in the circle of fifths: in comparison with the low strings, the higher appear shifted towards the flat keys. This is certainly remarkable: even though each string spanned a fourth, much as a Greek tetrachord did, the four similarly divided fourths of the four strings cannot play the role of corresponding structural units within the same non-modulating scale.

7 … and Poikilia: Accompaniments to Greek Melody," in Mousike: Metrica, ritmica e musica greca in memoria di Giovanni Comotti, ed. by Bruno Gentili and Franca Perusino (Pisa/Roma: Istituti Editoriali e Poligrafici Internazionali, 1995), 41-60) and indirectly in all the cultures that used doublepipes or strumming techniques on the lyre - i.e. everywhere between the Near East and Spain. 8 Using the formulas in Djilda Abbott and Ephraim Segerman, "Strings in the 16th and 17th Centuries," The Galpin Society Journal 27 (1974): 48-73, the available range for gut strings of 60cm length is calculated to 2020 cents, and for silk strings, 1630 cents. The combination of both materials increases the range to 2610 cents, more than two octaves, paving the way for the addition of a fifth string. (The preceding calculations assume a plucking position about 7cm from the bridge, a plucking string displacement of no more than 4mm, and an acceptable pitch shift within the attack period of 20 cents.) 9 The former amounts to 1698 cents, the latter to 1902 cents.
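The cents figures given in notes 8 and 9 are easy to reproduce; the helper below (illustrative Python, assuming the pure ratios 4:3, 3:2 and 2:1) is mine, not the source's.

```python
import math

def cents(ratio: float) -> float:
    """Convert a frequency ratio to cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

FOURTH, FIFTH, OCTAVE = 4 / 3, 3 / 2, 2.0

print(round(cents(OCTAVE * FOURTH)))  # octave plus a fourth: 1698 cents
print(round(cents(OCTAVE * FIFTH)))   # octave plus a fifth:  1902 cents
```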
Nonetheless the partially chromatic layout of the frets mitigates the phase shift, because it still allows projecting a single non-modulating diatonic scale across the instrument's entire range. Insofar as the sources imply that ʿūd music was diatonic music - a scale would generally omit either the middle-finger or the ring-finger fret10 - there is good reason to regard this single scale as fundamental for the instrument, at least in a historical sense. It is therefore also the only one in which every note in the high range has a counterpart in the low range and vice versa, since the low middle fingers here correspond to the high ring fingers.11 Expressed in relative modern note names without accidentals, this unique scale runs as shown in Figure 1,12 starting just with the first letter of our alphabet. A lucky coincidence?

10 Ibn al-Munaǧǧim (Kitāb an-Naġam, transl. Neubauer, "Al-Ḫalīl ibn Aḥmad," 301; 310-11) relates the teachings of Isḥāq b. Ibrāhīm al-Mawṣilī, whose quasi-modal nomenclature refers to that dichotomy on the maṯnā string. 11 For the importance of octave counterparts on the ʿūd in the period in question, cf. al-Kindī, Risāla fī Ḫubr ta ʾlīf al-alḥān 1.1-3, ed. by …, 21-22; transl. Neubauer, … 12 This 'natural' transcription is used by Neubauer ("Die acht 'Wege' in der arabischen Musiktheorie und der Oktoechos," Zeitschrift für Geschichte der arabisch-islamischen Wissenschaften 9 (1994): 387; "Al-Ḫalīl ibn Aḥmad"), in contrast for instance to a rendition based on G (e.g. Owen Wright, "Ibn Al-Munajjim and the Early Arabian Modes," The …).

Figure 1: Matching a diatonic scale to the typical ʿūd tuning (pantonic variant). Note: I introduce the term 'pantonic' for a tuning generated exclusively by alternating fifths and fourths, which leads to segmenting the octave in terms of whole tones, first to an anhemitonic pentatonic, then to a heptatonic scale, and finally, when the whole tones overlap, to a full chromatic scale in the modern sense, albeit with semitones of two different sizes. The traditional term 'Pythagorean' should be abandoned as historically misleading and wrongly Eurocentric, apart from being loaded with esoteric associations.

Apart from that bass note A, we learn that g, sounded from the open second string (maṯnā), was pivotal as the reference note (ʿimād) for the tuning of the instrument (and others).13 However, we must notice that the note from the open maṯnā is both the lowest of the higher and the highest of the lower range. Consequently, it lacks an octave counterpart, unless the bass string is relaxed by one tone. In standard tuning, therefore, we would expect that in spite of its role as a reference note, g is less useful as a tonal center than are the other notes from the same string.
Can we know the typical absolute pitch to which the early ʿūd was tuned, or at least a range within which it was typically tuned? Two potential clues may guide us: on the one hand, the relation to the human voice; on the other, the physics of strings. The latter defines absolute boundaries: as discussed above, four strings of a single material tuned in fourths basically exhaust their potential. The early medieval ʿūd, however, combined gut and silk strings. Its four strings might thus have been tuned to any pitch between the low boundary for gut and the high for silk.
The Iḫwān aṣ-Ṣafāʾ, on the other hand, describe all strings in terms of silk strands and state that the treble string was stretched close to the point of breaking.14 This cannot be true, because it would preclude adding a fifth string, tuned a fourth above the original zīr, as had occasionally been done by the time of Ibn al-Munaǧǧim and became customary later.15

13 Ibn al-Munaǧǧim, Kitāb an-Naġam, transl. Neubauer, "Al-Ḫalīl ibn Aḥmad," 302; cf. Neubauer, "Bau der Laute," 321-22. 14 Iḫwān aṣ-Ṣafāʾ, Ep. 5.8, ed. Wright, ٦٣-٧٦, 111-18. Much later, Aḥmad b. Muḥammad al-Maqqarī considered silk strings the historically older material, apparently reversing the historical development; cf. Henry George Farmer, "The Origin of the Arabian Lute and Rebec," The Journal of the Royal Asiatic Society of Great Britain and Ireland, no. 4 (1930): 773. 15 Ibn al-Munaǧǧim, Kitāb an-Naġam, transl. Neubauer, "Al-Ḫalīl ibn Aḥmad," 303; cf. Neubauer, "Bau der Laute," 306. By the way, without any historical context, the description of the Iḫwān would make good internal sense. A silk string of about 60cm length tuned a minor third below breaking would sound about an F4 (c. 345Hz), resulting in a melodic gamut within the two higher strings of C4-B♭4. This is a rather convenient range for a male voice singing an octave lower, a bit low perhaps. But the fifth string as well as the following considerations contradict such a reconstruction. It is difficult to see what went wrong with the Iḫwān's account. Did they erroneously transfer the close-to-breaking rule from the five-stringed lute to the older form, or perhaps transfer to silk a rule that had applied to gut strings? Note that few musicians would actually experiment with the breaking pitch, and that the rule to tune something close to breaking is entirely useless as practical advice.
The original treble string must therefore have been comparatively slack, at least a fourth further removed from the breaking point of silk than would have been necessary. Coincidentally, the difference in breaking pitch between silk and gut of reasonable quality is also about a fourth.16 Apparently the partially silk-strung ʿūd, as long as it had only four strings, therefore played at a pitch that was also accessible to an instrument with gut strings throughout. Quite possibly the relation between voice and instrument had remained stable even after the adoption of silk, whose full potential was exploited only later, when the range was augmented by a new treble string. In East Asia, in contrast, where silk strings were customary, the vibrating string length was increased by a fourth (from about 60cm to about 80cm17), so that the instruments would optimally play in the same range as the smaller Western variant did with gut strings.
A high-quality gut string of 60cm length is expected to break at about 340Hz. The highest advisable tuning, allowing for plucking and strumming the string without breaking it, is about a minor third lower. A gut zīr might therefore have been tightened up to around 290Hz or a tiny bit higher, which would place the open maṯnā at approximately 215Hz, no more than a quartertone below modern A3. Without compromising the sound of the bass string too much, the instrument might have been tuned up to about a major third or even a fourth lower. However, for certain modes ʿūd players used to lower the bass string by a tone, down to the octave below open maṯnā.18 The 'normal' bamm tuning must therefore be sought at least a tone above the lowest possible pitch.
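The chain of estimates in this paragraph can be made explicit. In the sketch below (illustrative Python), the 340Hz breaking pitch and the minor-third safety margin are taken from the text; rendering the minor third as equal-tempered and the fourths as pure 4:3 is a simplifying assumption of mine. The results are upper bounds; as the text notes, the instrument may well have been tuned lower.

```python
BREAKING_HZ = 340.0                  # est. breaking pitch of a 60cm gut string
MINOR_THIRD = 2 ** (300 / 1200)      # safety margin below breaking (~1.189)
FOURTH = 4 / 3                       # open strings tuned in fourths throughout

zir    = BREAKING_HZ / MINOR_THIRD   # highest advisable zīr: ~286 Hz
matna  = zir / FOURTH                # ~214 Hz, just below modern A3 (220 Hz)
matlat = matna / FOURTH              # ~161 Hz
bamm   = matlat / FOURTH             # ~121 Hz (upper bound only)

for name, hz in [("zīr", zir), ("maṯnā", matna), ("maṯlaṯ", matlat), ("bamm", bamm)]:
    print(f"{name}: {hz:.0f} Hz")
```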
Considerations regarding the range of vocal performances are necessarily less precise. The best starting point is the reported feat of prince and short-term caliph Ibrāhīm Ibn al-Mahdī to sing the same melody in four different octave registers.19 This is unlikely if that melody unfolded over an octave or more; on the other hand, it might have been less impressive with a melody consisting of only three or four different notes. If the melody stayed within a fifth, the total compass of the reported performance would have amounted to three octaves and a fifth and might thus include all usual male ranges from deep bass to countertenor. Exceptional male voices with an ambitus of four octaves and more are documented and may extend over quite different ranges (cf. Figure 2).20 But the story contains details that may permit narrowing down Ibrāhīm Ibn al-Mahdī's ambitus. Firstly, he dropped two octaves below the normal range, but ventured only a single octave higher: obviously, his bass range was even more exceptional than his treble. Secondly, the source describes the 'normal range' as the "range of the ʿūd," which refers to the instrument's two higher strings, in contrast to the range of the lower strings about an octave below (isǧāḥ), which is mentioned afterwards. The higher strings, within whose ambitus the melodies of Arabic song mostly unfolded, must therefore have formed part of the common male register. Since even deep basses cannot access a range two octaves below the lower end of this common register, we must conclude that Ibrāhīm's melody avoided such low notes. Judging from modern exceptional basses, his melody's lowest pitch (in the ordinary range) can hardly have been lower than about F♯3. On the other hand, being an ordinary melody that remained within the typical gamut of the instrument, its highest note cannot have exceeded the common range and therefore cannot have been much higher than E4. Assuming a minimal melodic range of a fifth, Ibrāhīm's voice would therefore have extended at least down to A1 but perhaps to F♯1, and up to at least C♯5 but perhaps E5. Evidently he was a kind of Barry White or David Bowie; a Prince might have sung two octaves higher but only one lower, while an Axl Rose might have added even a fifth register. The account of Ibrāhīm's vocal ambitus thus betrays that the overall range of the higher two strings on the ʿūd covered rather the higher part of the common vocal range. This accords with string physics, which had suggested tuning the zīr not much lower than B3. The upper limit of the common range, in turn, cautions against tuning it higher, as this would push its small-finger fret beyond that range - especially keeping in mind that even higher notes were sometimes played on this string. This places maṯnā around F♯3, maṯlaṯ at about C♯3, and bamm around G♯2.

16 Cf. Abbott …
How precise are these results? On the one hand, they depend on the vibrating length of the open strings, for which 60cm is only an approximate value, depending on the exact extent of the 'finger' by which the early sources describe the construction of the ʿūd; however, we can be reasonably confident that the actual length differed by no more than about a quartertone, more probably in the direction of lower pitch. On the other hand, the common vocal range is a crucial factor. We have started from modern values, but these are nicely corroborated by data from Greco-Roman antiquity. If all remains of ancient notated vocal music are evaluated, the octave within which the largest number of notes falls is precisely the one that starts from our presumed upper limit downwards (in Figure 2, 'DAGM best-fit vocal octave').21 Expressed in ancient notation, this octave ranges from hyperypátē FF up to paranḗtē ÖÖ; the absolute pitch of these notes is known from mutually corroborative evidence.22 Our tentative ascription of the open bamm to G♯2 also agrees perfectly with the interpretation of Eckhard Neubauer, who started from different sources and inferred A and G as possible values on the basis of modern vocal ranges.23

As is also shown in Figure 2, the dominant string instrument of antiquity, the cithara, not only also played in the higher region of the common vocal range, but exceeded it by a whole tone. Though we do find its highest note, nḗtē ÅÅ, in the vocal scores, it is far less frequent than its lower neighbor, paranḗtē ÖÖ (16 vs 162 instances). According to Ibn al-Munaǧǧim's report of the teachings of the Mawṣilī school, the range of the regular ʿūd frets was extended by changing the hand position on the treble string: by shifting the fingers away from the nut by two fret positions, the ring finger became able to play the note a semitone above the highest fret.24 In the respective passage, Ibn al-Munaǧǧim is only interested in enumerating the set of functional notes within the octave, so he does not discuss the role of the small finger in this playing position, which would just reduplicate the note obtained from the open maṯnā an octave below. In fact, this note must have been used as well. Otherwise there would have been no point in moving the hand forth and back over two frets; a shift of one fret would have sufficed, accessing the additional semitone with the small finger instead of the ring finger. By the attested shift of two frets, however, the upper limit of the ʿūd becomes virtually identical with that of the ancient cithara, mirroring, in the requirement of changing the playing position in order to access the same additional tone, the cithara's divergence between its strings and the typical vocal range.

21 According to our latest data, 2474 extant vocal notes fall within this octave, as compared with 2358 for the octave a semitone lower, and 2315 for that a semitone higher; data pool: notes clearly read according to Egert Pöhlmann …
If our reconstruction of early ʿūd pitch is correct, the analogies with structures found in ancient music do not stop here. The pitch of the open zīr echoes that of ancient ('Lydian') mésē II, the pivotal note of lyre tuning and the system of ancient notation alike. The open maṯnā, in turn, the primary reference pitch of the Arabic sources and as such the lowest note and starting point of Ibn al-Munaǧǧim's enumeration, reflects ancient hypátē SS, originally the lowest string on the lyre before hyperypátē FF was added. A fourth lower, the higher of the two bass strings, maṯlaṯ, appears to have sounded the ancient note gg, which forms a frequent bass note in the scores and seems to have featured prominently in doublepipe design at least from Hellenistic times on.25 The pitch of the open bamm, finally, is deeper than ancient melodies normally reached. Only a single fragment plunges down to that region, doing so mainly for a single line of text, duplicating its normal range an octave below for special effect, reminiscent of the register changes reported for Ibrāhīm Ibn al-Mahdī.26 A more detailed relation between the suggested approximation of early ʿūd pitches and the notes of Roman-period song is displayed in Figure 3. Apart from the relative prominence of the notes associated with the open strings, especially when compared with their immediate neighbors, the most striking coincidence is the absence of Q, which falls right within the undivided whole tone on the lute, above the index-finger fret. This is all the more noteworthy because in ancient theory Q forms part of the Unmodulating System of the natural key, so that its omission from Roman-period singing comes as a surprise.27 On the other hand, the lute design does not provide for R, which would fall on the corresponding position on maṯnā. Among the keys (tónoi or trópoi) that are found in use in the Roman era, R is particular to the Lydian, which corresponds to a 'Dorian' lyre tuning in the citharodic terms that Ptolemy uses. Consequently an ʿūd, when transplanted into late antiquity, would not be able to play in this key, but only in the 'sharper' keys of Hypolydian, Hyperiastian and Iastian, which respectively represent 'Hypodorian' , 'Phrygian' , and 'Hypophrygian' cithara tunings.
The musical fragments from the Roman period span more than two centuries. Things become even more interesting when we stop treating the corpus as monolithic. In Figure 4, the note statistics are restricted to those sources which Pöhlmann and West unequivocally date after 200 CE. Astoundingly, the note R appears unattested in this period. Instead, the diagram now looks as if it was meant to illustrate ʿūd music: maṯnā is the centre of melodic movement, which often also ventures up to the highest fret of zīr and also one note beneath the open maṯnā, to the middle-finger fret on maṯlaṯ. The most typical key is the Hypolydian, which would combine the open strings and index-finger frets with the middle-finger frets, omitting the ring fingers. A similar sidelining of R appears in the design of the Louvre aulos, also shown in Figure 4, which could play along a cithara tuned to Hypolydian and might just as well accompany an early ʿūd. The common bass note of its two pipes might even exemplify Ibn al-Munaǧǧim's statement that the open maṯnā provided the tuning standard for wind instruments.28 We find only one seeming discrepancy between late-Roman-period note statistics and the earliest information about ʿūd music, according to which the melodies were generally realized on the two higher strings. The important note F, which formed the bass string of the post-Classical kithara, called hyperypátē or diápemptos, can only be played on maṯlaṯ, the third string.

26 Pöhlmann and West, …
However, it appears that precisely this note also featured so prominently in Early Abbasid music that it triggered a blatant inconsistency in Ibn al-Munaǧǧim's account. On the one hand, he insists that notes an octave apart are equivalent to a degree that does not warrant attributing different functions to them, and consequently catalogs the notes from the bass strings only as duplicates.29 Accordingly, when discussing the notes that are mutually compatible or incompatible within a single (simpler) composition, Ibn al-Munaǧǧim generally refers only to the treble strings. On the other hand, he departs from this convention in the case of the two highest notes from the maṯlaṯ, to which he refers notwithstanding the fact that their octave counterparts, sounded from zīr, are also enumerated.30 Where the higher instance on the zīr is inconspicuously realized from the normal small-finger fret, it is impossible to see what would have prompted the duplication of this functional note in Ibn al-Munaǧǧim's list other than the simple fact that it was indeed accessed in regular melodies, for instance as a lower 'leading note' toward the open maṯnā.31 In this way, the concordance between the notes of Roman-period melodies and those provided for by the earliest Arabic source of which we know in sufficient detail becomes almost perfect. All this suggests considering a much closer connection between the music of Roman-period Egypt and Early Abbasid Baghdad than one might have dared to assume. In turn, we must wonder whether such a connection might also have included a continuation in lute culture. From the regions that formed the Roman empire, no lute imagery seems to survive from before the Arabic conquest that resembles the characteristic pear shape of the ʿūd; the surviving late antique and early Byzantine lutes show a very different form and typically have only three strings arranged in two courses.32 Some Roman-period iconography, in contrast, distinctly shows four strings.33 At the same time, pear-shaped lutes turn up in Gandhāran art, with four strings, wherever the nature of the source suggests counting these.34 When such instruments arrived in China a few centuries later, to be transformed into the pípá 琵琶,35 they seem to have featured the same four strings and four frets in TSSS spacing that the Arabic sources describe; this is most obvious from the famous eighth-century specimens that are preserved intact in the Shōsōin at Nara, Japan. Only later does the pípá acquire its modern abundance of frets, apparently inspired by the long-necked round-bodied ruăn 阮.36 The common fundamental design of pípá and ʿūd strongly suggests that it had already defined their common ancestor, which may take us back to the fourth century CE, before it reached China, or even the first century CE, if we assume that, along with the shape and the number of strings, the arrangement of frets had also remained constant during that period. At any rate, before the East-Asian and the Western branches of the instrument split, it appears more than likely that an ʿūd-like design existed in Central Asia contemporaneously with the aforementioned Roman-Imperial musical evidence, which fits so well with Ibn al-Munaǧǧim's account.

28 See above, note 14. 29 Ibn al-Munaǧǧim, Kitāb an-Naġam, transl. Neubauer, "Al-Ḫalīl ibn Aḥmad," 304-5.
This, in turn, may suggest that the four-stringed lutes found in the iconography from the Roman Empire, albeit differently shaped, shared the typical TSSS fret design and/or playing position that, as we have seen, is most natural for lutes from a diatonic music culture whose strings exceed the length within which the human fingers can access a series of notes belonging to a single heptatonic scale.
These considerations may find circumstantial support from more distantly related instruments. In the repertory of seven extant late antique or early Byzantine lutes, commonly but misleadingly termed 'Coptic', a particular variant stands out: firstly for its comparatively small size; secondly because two items of virtually the same musical design have been preserved; and thirdly because their original fretting is known and they are therefore uniquely well understood.37 In contrast to the ʿūd, or any instrument with tied frets, these came with different sets of wooden frets for different strings: merely three for the bass string, but six for the treble course, which consisted of two strings that were almost certainly tuned to the same pitch. With only two courses, we cannot easily pinpoint the absolute pitch of these instruments; but it is possible to determine their relative scales. In a recent publication, guided only by the evidence from Roman-period music and still unprejudiced by the considerations the present contribution is putting forward, I have tentatively suggested tuning their treble strings an octave above the lyre hypátē S, and their bass string a fourth lower, an octave above g. This would put their treble course precisely an octave above the pitch we have above derived for the ʿūd (cf. Figure 3). We can now evaluate this suggestion by comparing the sizes of the instruments, on the working hypothesis that their respective treble string tensions were comparable, i.e. removed from breaking tension by about the same amount. Assuming that the maṯnā of the ʿūd was tuned an octave below the treble course of the smaller lutes, this would put the zīr of the ʿūd, which was tuned a fourth above maṯnā, a fifth below that treble course. The ratio between the approximately 60cm vibrating string length of the ʿūd and the roughly 39cm of the smaller lutes in turn corresponds to an interval of about 745 cents. This differs from the hypothetical fifth by less than a quartertone - an astounding coincidence given the overall flexibility of string tunings. The two lines of reasoning thus gain additional support from each other. Most importantly, the apparent octave relationship between the two types of lute exists irrespective of their potential links to the musical pitches of the ancient musical documents. After all, four-stringed lutes resembling the ʿūd had existed centuries before the extant late antique small lutes were built, at least in post-Hellenistic Central Asia, alongside differently-shaped four-stringed cousins in the Mediterranean sphere. A close musical connection between the two types, making the apparently younger sort the treble version of the older, should therefore come as little surprise. How would that relation bear out in detail? Above all, the idea of a 'treble version' is not at all reflected in the basic design or playing technique, just as the idea of 'octavating' is not implemented simply by producing a half-size instrument. Being pitched a fifth higher, as far as string length is concerned, the late-antique lutes allowed for heptatonic fretting. Operating on shorter strings, where similar distances create larger intervals, the left hand was here able to access five successive notes of a heptatonic scale on a single course without changing position. On the downside, the absence of additional semitone steps precluded modulation to other tonalities: melodic modal variety was only attainable by the selection of different focal notes.
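The length-ratio computation in the size comparison above is easily reproduced. Under the stated working hypothesis (identical string material, comparable tension), pitch varies inversely with vibrating length; the sketch below (illustrative Python) is mine, not the source's.

```python
import math

UD_LENGTH, SMALL_LUTE_LENGTH = 60.0, 39.0   # vibrating lengths in cm (from the text)

# With equal material and tension, frequency is inversely proportional to length.
interval = 1200 * math.log2(UD_LENGTH / SMALL_LUTE_LENGTH)
print(f"{interval:.0f} cents")   # ~746 cents (the text rounds to about 745);
                                 # a pure fifth is ~702 cents, so the difference
                                 # stays below a quartertone (50 cents)
```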
Since the melodic course was already comparatively close to breaking tension, no further treble string could be added; instead, in contrast to the ʿūd, any melody that exceeded five notes required changing the position of the left hand on the neck, as is customary with long-necked lutes.
Nevertheless, the total melodic gamut of both instrument types is identical. The pair of fourths that the treble strings of the ʿūd form is implemented as a single series of frets on the melodic course of the smaller lutes. The spacing of those frets furthermore reflects the physical requirements of the ʿūd by starting both fourths with a whole-tone step. Instead of implementing a combination of tone and semitone, the remaining interval is split into two physically equal halves, and consequently into roughly two three-quartertone steps, resulting in a much more physiological fret spacing. The ensuing kind of tuning with its neutral thirds is incompatible with the tenets of ancient Greek music theory, but was, in mathematically idealized form, described by Claudius Ptolemy, who famously called it the 'even diatonic' (diatonikón homalón). As a part of actual music culture, it is not described until al-Fārābī's Kitāb al-Mūsīqī al-kabīr. Here it surfaces as an alternative ʿūd fretting associated with the name of the famous lutenist Manṣūr Zalzal aḍ-Ḍārib from the first half of the ninth century. This fretting contrasts with both the diatonic-chromatic scales of the earliest Arabic sources and the slight deviation from these whose distinctive fret al-Fārābī knows as the 'Persian middle finger' (al-wusṭā al-furs).38 Importantly, although the musical result is very similar to Ptolemy's account, al-Fārābī's math works out differently by the twentieth part of a tone,39 so that we can be certain he is not following ancient tradition but tries to describe the musical practice of his own time, even though this did not result in a structure that lends itself to a description in terms of small integers, as would satisfy the philosophical aesthetics of a Pythagoreanising tradition.
Otherwise the small late-antique lutes implement a scale whose intervallic structure appears tailored to the melodic requirements that surface from the early Arabic sources.

38 Al-Fārābī, Kitāb al-mūsīqī al-kabīr, ed. by al-Ḫašaba, 510-11. 39 While one would need to bisect the distance between index and small-finger fret in order to imitate Ptolemy's even division, al-Fārābī splits that between the ring finger and the Persian middle finger, which in turn sits halfway between index and ring finger.
If their open treble course was indeed tuned an octave above the open maṯnā of the ʿūd, the frets of this course lay out a heptatonic scale precisely up to an octave above the highest fretted note of the ʿūd, played by the small finger on zīr. Whole tones reflect those between nut and index fret on the ʿūd, while the equidistant nature of the instrument realized the intervening notes right between the standard diatonic/chromatic middle and ring fingers of the ʿūd, so that they created the notes an octave above the 'Zalzalian' middle fingers. Perhaps most strikingly, the note a whole tone below the open maṯnā, which, as we have argued above, was included in the typical melodic repertory in spite of being realized on maṯlaṯ, was played in the same manner on the lute from Antinoë: a fret for it is provided on the bass string (the bass fret positions are lost on the similar instrument from Saqqāra).40 The tonal correspondences between the two instruments, which are so unlike each other at first glance, are astonishing, and it is difficult not to regard the late antique lutes as a sort of missing link between Roman-Imperial and Early Abbasid music. However, if the similarities are not dismissed as coincidental, they also do not suggest a development from the earlier small lutes to the later large ones. This is because the precise arrangement of frets and intervals is all but natural on the ʿūd - above we have discussed how its design responds to the physical requirements of string material and finger spans. On the smaller instruments, the same tonal structure appears as a random choice out of various possibilities. If anything, the late antique lutes must have been designed to reflect the tonality of larger instruments, not the other way round. This accords with our former conclusion that four-stringed lutes very similar to the ʿūd had been around centuries before the extant small lutes were manufactured - almost certainly in Central Asia, but quite probably also in the Mediterranean, where they lacked the pear-shaped design that was to become the hallmark of the instrument.
Limited by their own physical constraints, the extant small lutes however did not implement a design corresponding to the TSSS frets we need to expect for that pre-ʿūd on the basis of the East-Asian tradition and the early Arabic writings, but the three-quartertone steps and neutral thirds that we find later associated with the name of Zalzal. This is perhaps the most puzzling aspect of our story: how would a musical structure that had existed for centuries become associated with a specific musician of the ninth century? Did he really introduce it to the ʿūd? If so, what about the otherwise so close structural match between a 'Zalzalian' ʿūd and the small late-antique lutes?

40 Apart from this fret, the bass string has one for the fourth (from which it could easily be tuned to the treble course) and one a semitone higher, corresponding to the 'neighbor of the index' fret described by al-Fārābī, which would add the missing parypátē RR.
Moreover, if Isḥāq al-Mawṣilī was his student who held him in the highest esteem,41 why do we fail to find the slightest reflex of Zalzal's near-equidistant fretting in his teachings as relayed by Ibn al-Munaǧǧim? Or is the connection between Zalzal's name and the near-equidistant tetrachord only circumstantial? Notably, though al-Fārābī first introduces the Zalzalian middle finger as the roughly equidistant one, he later makes it clear that this is not the universal practice: others place Zalzal's fret quite differently, at a distance of a leîmma (baqiyya) from the ring-finger fret, i.e. a small 'Pythagorean' semitone instead of the larger one, known as the apotomḗ, that the old standard tuning employed there. Al-Fārābī describes how expert musicians establish the required fret position with precision by temporarily retuning the lowest string.43 It lies close to the 'Persian' middle finger, less than a twelfth of a tone above it. Instead of a neutral third, this version of a 'Zalzalian' middle finger creates a perfectly pure minor third with the open string as well as a pure major third with the index fret of the next higher string,44 at the expense of precise octaves and fifths wherever a middle-finger fret is involved. This much smaller modification - shifting the fret by an eighth of a tone instead of more than a quartertone - therefore makes eminent musical sense. In contrast to the neutral-third 'Zalzalian', the frets of this alternative pure-third 'Zalzalian' retain an obvious sequence of a tone and three semitones (cf. Figure 5) by merely switching the positions of the larger and one of the smaller semitones. In this way, all the notes preserve their basic identity, much as in modern Western music notes are perceived as identical whether they are played on a tempered instrument or in pure thirds, for instance by a string ensemble. Al-Fārābī's description in terms of a fourth minus two adjacent leímmata places the fret at a theoretical 317.6 cents from the nut, which is indistinguishable from the 315.6 cents of a perfect minor third.45 It is striking that the two versions of 'Zalzalian' middle-finger positions differ more from each other than does one of them from the other recorded positions of the same fret. Apparently, any upward displacement of the fret from its pantonic or 'Persian' standard had somehow come to be called by Zalzal's name. But it is hardly conceivable that the historical Zalzal introduced more than one option without further distinction. Most probably, his innovation concerned one particular tuning, whose designation was later generalized when the two different traditions had met on an equal footing: both could easily be understood as analogous, if variously extreme, deviations from the old pantonic standard. Which one would originally have been Zalzal's?

45 It may be significant in this context that the same indifference is found within al-Kindī's work, where the numerical fret positions reported in his Risāla fī l-Luḥūn wa-n-naġam (cf. Neubauer, "Bau der Laute," 327) do not reflect the tuning in fifths and fourths he describes in his Risāla fī Ḫubr ta ʾlīf al-alḥān. Instead, the middle-finger fret posited in the former work numerically embodies the pure thirds of the alternative 'Zalzalian' (30:25 = 6:5). Accordingly, the position of this fret agrees with al-Fārābī's account of the latter within 1mm; this is not the place to discuss the implications in depth.
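Al-Fārābī's figure for the pure-third 'Zalzalian' fret, and its near-coincidence with the just minor third, can be verified directly. In the sketch below (illustrative Python, not from the source), the 4:3 fourth and the 256:243 leîmma are the values named in the text:

```python
import math
from fractions import Fraction

def cents(ratio: Fraction) -> float:
    """Interval size of a frequency ratio in cents."""
    return 1200 * math.log2(float(ratio))

LEIMMA = Fraction(256, 243)                  # the small 'Pythagorean' semitone
fret = Fraction(4, 3) / (LEIMMA * LEIMMA)    # a fourth minus two adjacent leímmata

print(f"{cents(fret):.1f}")                  # 317.6 cents above the open string
print(f"{cents(Fraction(6, 5)):.1f}")        # pure minor third: 315.6 cents
```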
Within the context of the transmitted musical concepts from his environment, a slight adjustment towards resonant thirds appears a priori far more credible than the creation (or adoption) of a radically different tuning that stands in unmitigated opposition to contemporary as well as somewhat later outlines. The documented age of the equidistant variant points in the same direction: it would be much more difficult to explain how it might have come to be perceived as an 'invention' of the Abbasid era. Finally, even though al-Fārābī introduces the equidistant 'Zalzalian' first, when he later uses the term in the context of quantifiable interval sizes, he often takes it for granted that the reader understands that the less extreme variant is meant - apparently confirming that this should somehow be regarded as the 'truer' variant.46 On balance, it appears reasonable to attribute only the smaller shift of the middle-finger fret to Manṣūr Zalzal, restoring much greater consistency to the music of the Abbasid court around 800 CE. Its lute frets, as presupposed or described in the early Arabic sources, notably Ibn al-Munaǧǧim, al-Kindī in his Risāla fī Ḫubr ta ʾlīf al-alḥān and the Iḫwān aṣ-Ṣafāʾ, would generally have followed the tuning in fifths and fourths throughout, resulting in the TSSS scheme with one larger semitone between two smaller, a scheme that was shared by contemporary instruments in the Far East. Starting from this traditional layout, Manṣūr Zalzal pushed the middle-finger fret slightly towards the ring-finger fret, so that the instrument would play pure thirds. The 'Persian' middle finger, in turn, looks like a 'tempered' version, a compromise between pure thirds, on the one hand, and pure fifths and octaves, on the other. Within al-Fārābī's musical horizon, in contrast, an equidistant middle-finger fret was also in widespread use, which gave rise to a wholly different sort of scale. Its position was much further removed from the TSSS standard than that of Zalzal's fret. Nevertheless it became associated with his name as well, probably in an environment where the traditions met: the notion of 'Zalzal's middle finger', being associated with the idea of shifting the middle-finger fret to a higher pitch, was thus transferred to any such shift.
This model remains necessarily speculative, but it incorporates all the data much more smoothly than conceivable alternatives would. Not only does it attribute a plausible role to Zalzal himself, it also allows for a tradition of lutes with neutral thirds long before his time, and specifically for the probably four-stringed version, playing in the same register as the ʿūd, that appears to have influenced the design of the smaller late-antique lute.
At any rate, there is excellent reason to date at least the TSSS variant of a musically ʿūd-like lute before the Sassanid period, and perhaps cause to posit a neutral-third variant of it as early as the fifth century CE. Where and when was the general design invented? The Gandhāran representations, in combination with four-stringed lutes in Roman-Imperial iconography, caution us against generally asserting a 'Central Asian' origin (even though this may well be true for the variant with a pear-shaped body in particular).
46 … bisection recommends itself as a simple extension of the process by which the 'Persian' middle fingers are obtained; no sufficiently accessible procedure would have given the other Zalzalian variant.
More than half a century ago, R.A. Higgins and R.P. Winnington-Ingram collected pictorial and textual evidence for lutes in the ancient Greek cultural sphere.47 In both types of sources they were able to trace this rarely mentioned and scarcely represented instrument back to the fourth century BCE. When scrutinizing possible instrument names, they mostly settled on variants of the stem pandour-, on the one hand, and skindapsós, on the other. With fewer strings and an especially exotic connotation, the former might rather have represented long-necked lutes, perhaps spike lutes, than anything like the instrument we are looking for. In contrast, the skindapsós is associated with four strings (τετράχορδος: Matron, cited in Athenaeus, Deipnosophists 183a) and a large size and in some way likened to the lyre (μέγας, λυρόεις: Theopompus of Colophon, cited in Athenaeus, Deipnosophists 183a-b). Regarding lutes of comparatively large size in the iconography, Higgins and Winnington-Ingram point to the one in the hand of a Muse on the Mantinea base (Athens, National Museum inv. 216). This is the best representation of a 'European' lute before the Roman period; unlike the lyres on the same monument, it is evidently shown being played, the left hand stopping the strings at the uppermost playing position, the right hand with plectrum in the typical position for strumming or plucking. Allegedly stemming from the workshop of Praxiteles, the image inspires some confidence in its realism.
Of course we need to remain aware that musical scales ought (almost) never to be inferred from iconography; even though visual artists may strive for a 'naturalistic' representation of important proportions and other details, such as string numbers, this cannot a priori be taken for granted. In contrast, it is perfectly admissible to compare representations with external evidence: where the two concur, it is likely that the former intentionally portrays certain organological aspects with reasonable accuracy.48 Our present case falls within this category: we only need to establish whether the Mantinea lutist is compatible with what we know about possible related instruments from reliable sources.
The important distances are shown in Figure 6. The length of the open strings (d) is determined by the position of the bridge (f) and the nut (g).

48 Apart from the well-known fact that the bulk of Archaic and Classical Greek representations of lyres show the canonical seven strings known from literary testimonies, one finds also that represented proportions between players and instruments of various types are consistent with the latter's inferred pitch within a quite small margin (Hagel, Ancient Greek Music, 88-92).
While the latter is represented as a band within which the precise position from where the string would vibrate cannot be guessed (is it at its lower end or rather in the centre?), the location of the former is concealed beneath the player's right hand. For both, we need to work from plausible ranges. The fret positions (h-k) are better determined, since they can be judged from the placement of the finger tips. In combination with the ranges for the endpoints of the strings, each fret position translates to an interval range, which is most conveniently expressed relative to the open string. In Figure 7, these ranges are compared with the traditional tuning of the Early Abbasid ʿūd. It emerges that, although that tuning mostly exploits the upper limits of the ranges, the displayed Greek lute is apparently as compatible with the later instrument as one might wish when dealing with iconography - at least as far as the relative scale is concerned. Approximating absolute measurements is less straightforward. Here we need to judge the size of the instrument against its player, assuming a model of average ancient size. The seated goddess offers mainly three distances of reference: her upper right arm from shoulder to elbow (Figure 6, a); the cubit of her left arm (b); and her left leg from the upper end of the bent knee down to the sole (c). The last, being positioned in the foreground, may be larger in scale in comparison with the instrument, while the cubit is somewhat shortened by perspective; we would therefore expect that the upper arm, being placed only slightly behind the instrument, gives the best estimate. Since body proportions vary between individuals, I have based the following assessment on data averaged from three available females from different regions; at any rate, the variance in the relevant values remained in the range of 1% of the total body height. As expected, when comparing the ratios between the lengths, the knee-sole distance of the relief appears exaggerated, and the cubit, shortened, while the upper-arm length ratio is almost identical with the average, which can therefore be safely used as a starting point. In order to translate measurements on the instrument to approximate real-life equivalents, we finally need to settle on a plausible body height for the Muse; a conventional average of 168cm has proven useful also in other musical contexts.49 On this basis, the vibrating string length would range between 56.5cm and 61.5cm; the best fit for a TSSS tuning occurs at 58cm. The uncertainties that enter the equations should not affect the outcome by more than a few percent. Without pressing iconographic evidence too much, we can therefore state with confidence that no significant difference can be established between the Mantinea lute and the early ʿūd regarding either absolute size or the finger positions. The same goes for the number of strings. The width of the neck below the left hand (Figure 6, e) extrapolates to about 3.4cm, above it (l), to 3.9cm, a distance that nicely accommodates four strings, as it does for instance on a modern guitar. With more than four strings, the distances would become too small for stopping individual strings; with fewer, the neck would appear unreasonably wide, needlessly encumbering access to the strings. All this appears to shift the burden of the argument to those who would deny that the Mantinea lute and the early ʿūd are functionally similar instruments. A competing musical interpretation of the former, if it can be found at all, would have to change certain parameters by a significant amount.
It is hard to see how such a modification might remain compatible with the representation.
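To illustrate the kind of fit at issue, one may compute where TSSS frets would sit on a 58cm string. The sketch below (illustrative Python) assumes the pantonic interval values used earlier; actual fret placement would additionally vary with string height and thickness.

```python
from fractions import Fraction

L = 58.0  # vibrating string length in cm (the text's best-fit value)
TONE, LEIMMA, APOTOME = Fraction(9, 8), Fraction(256, 243), Fraction(2187, 2048)

# Cumulative ratios above the open string for the four TSSS frets
# (index, middle, ring, small finger), semitones in pantonic order.
ratios, r = [], Fraction(1)
for step in (TONE, LEIMMA, APOTOME, LEIMMA):
    r *= step
    ratios.append(r)

for finger, r in zip(("index", "middle", "ring", "small"), ratios):
    d = L * (1 - 1 / float(r))   # fret distance from the nut
    print(f"{finger:>6} finger fret: {d:5.2f} cm from the nut (ratio {r})")
```

The resulting distances (roughly 6.4, 9.1, 12.2 and 14.5 cm from the nut) are the kind of values that can be checked against the finger positions shown on the relief.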
Conversely, if four is acknowledged as the only plausible number of strings for the Mantinea lute, then for any lute of such a large size a general design of the TSSS kind would appear most natural, as we have discussed above, which would again render it functionally identical with the early ʿūd. Would such a design fit in the context of late fourth-century BCE Greek music? First of all, if the early ʿūd basically played the same melodic notes as did the ancient kithara, this would also be true for the Mantinea lute. If Higgins and Winnington-Ingram have correctly identified the latter as a skindapsós, this would lend a very specific significance to the epithet "lyre-like" (λυρόεις) found in Theopompus. Secondly, some of the coincidences we have observed would also gain a deeper meaning. We have already observed that the arrangement of the pitches in similarly structured fourths recalls the concept of the tetrachord. With the whole-tone interval at the bottom, a single string cannot embody a structural tetrachord in the precise Aristoxenian sense, where the lowest position is always occupied by the smallest intervals, but it transpires from Aristoxenus' own writings, which are roughly contemporary with the Mantinea base, that the term was generally employed in a much looser way.50 Even more telling is perhaps the partially chromatic nature of the division. While the cuneiform sources transmit a system that was tailored to pure diatonic, and ancient Chinese theory contemplated the full set of twelve semitones in the octave, arrangements of merely two or three sequential semitones appear typically Greek. Two semitones plus a minor third make up a 'chromatic tetrachord' (e f f♯ a). In combination with a diatonic tetrachord (e f g a), one obtains three semitones plus a tone (e f f♯ g a). In mathematical terms, such sequences are first found in the pseudepigraphic Timaeus Locrus from the late Hellenistic period,51 but a close association between diatonic and chromatic is already established in a famous passage that almost certainly predates the Mantinea monument.52 Their mixture, and thus the creation of pitch configurations that match the TSSS frets of the lute, must have been a hallmark of the citharodic art, where a typical modulating tuning would have started, from low pitch, hyperypátē (d), hypátē (e), parypátē (f), khrōmatikḗ (f♯), diátonos (g).53 A lute with a similar intervallic pattern would therefore fit perfectly within the musical environment of Late Classical Greece.
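The interval arithmetic behind this mixture can be spelled out; the sketch below (illustrative Python) uses equal-tempered semitone counting merely as a shorthand for the step pattern.

```python
# Semitone positions above e, in equal-tempered shorthand.
chromatic = {"e": 0, "f": 1, "f♯": 2, "a": 5}   # chromatic tetrachord: e f f♯ a
diatonic  = {"e": 0, "f": 1, "g": 3, "a": 5}    # diatonic tetrachord:  e f g a

# Their union yields e f f♯ g a: three semitones plus a tone.
mixture = dict(sorted({**chromatic, **diatonic}.items(), key=lambda kv: kv[1]))
positions = list(mixture.values())
steps = [b - a for a, b in zip(positions, positions[1:])]

print(list(mixture))   # ['e', 'f', 'f♯', 'g', 'a']
print(steps)           # [1, 1, 1, 2]
```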
Furthermore, since the Greek model scale, the so-called Perfect System, starts with a step of a tone as well, its lower octave, expressed in diatonic-chromatic terms, matches that of a TSSS lute (Figure 8); since the Western note names derive from the Perfect System, this is also the reason why the bass note of the ʿūd is most conveniently transcribed as (relative) A. In the upper octave, the presence of a 'modulating' tetrachord (synēmménon) alongside the 'regular' one (diezeugménon) introduces a complication. Obviously the creators of the system deemed it important enough to bemuddle their neat scheme. On the lute, it is matched by tuning the second highest string a fourth above the third, even though perfect octave relations would have been assured by tuning it a fifth above. As a result, in the higher octave the chromatic c♯ is traded off against the diatonic and chromatic b♭. The highest string, finally, stops short of completing the double octave, so that the greater part of the highest tetrachord, whose notes the Greeks termed the 'excess notes' (hyperbolaîai), cannot be played. Once more, it is hard to imagine a musical context within which the traditional ʿūd design would make more sense than in the Greek world of the late Classical period, when the chromatic and the diatonic genera were equally valued, and when the Perfect System had just been established.54 As Higgins and Winnington-Ingram have shown, the lute does not seem to appear in Greek sources of any kind before the fourth century BCE, precisely the environment in which the ʿūd-like design makes most sense. Notably this is also the period in which the monochord was introduced to harmonic science,55 a device so similar to the lute that it would be incomprehensible why earlier theorists would have used the imprecise notion of air columns instead of dividing strings, had the lute played any notable role. If the instrument family was indeed introduced or popularized around the middle of the century, not long before the Mantinea slabs were devised, this would coincide perfectly with the new-fangled method we find in the Division of the Canon.
The Perfect System served as an abstract grid onto which all the various scales could be matched; it prescribes no absolute pitch. Above we have seen that what we know about Early Abbasid ʿūd music echoes the melodies of the Roman period if the open maṯnā is equated with the lyre hypátē SS. In this way, the open bamm would play mm, placing the underlying Perfect System in the Iastian key. This is all well for the Roman period, when the Iastian is frequent: for instance, it is the key of the most famous musical document, the Seikilos song. If played on the ʿūd, its melody would start on the open maṯnā. However, the musical documents reveal a deep rift separating Roman-period music culture from its Hellenistic precursor, which manifests itself above all in the use of very different keys. Instead of the 'sharp' keys such as Iastian, earlier scores are typically notated in 'flat' keys such as Phrygian. What late antiquity called Iastian, for all we know, appears to have originated as a purely theoretical concept, probably invented only by Aristoxenus, who baptized it 'Low Phrygian' because it was a semitone lower than the traditional Phrygian. If there had been a Greek TSSS lute before the transformation period (whose details we struggle to understand), its musical conceptualization would have had to change as well.
54 The system is presupposed both by Aristoxenus and by the
At any rate, the 'Iastian' analysis that has worked so well for late antiquity is out of phase with Hellenistic musical conventions, as far as we can tell from the limited evidence (Figure 9). Instead, tuning the lute a semitone higher with respect to ancient notation appears to work reasonably well (Figure 10).56 This would take the lute's Perfect System to the Phrygian key, and the open maṯnā would represent the Dorian mésē PP57, two keys that all but disappear from the record in the Roman period.
Is it significant that the plausible interpretation of an ʿūd in Hellenistic times would point to Phrygian, while the Roman-period equivalent would be Iastian, and that the latter is but a late designation for Aristoxenus' "Low Phrygian"? Does this apparent downward shift by a semitone capture some music-historical reality within the enigmatic transition between the periods? Might one even relate it to the proportions we have observed on the Mantinea monument, which might suggest a slightly smaller size compared to the ʿūd, within a scope of about a third of a tone? The shaky evidence cannot currently provide answers to such questions.
56 The troubling pitch that would fall between the open string and the index fret on maṯnā is mainly represented by the 'irregular' note O in the second section of the First Delphic Paean (Pöhlmann and West, Documents of Ancient Greek Music no. 20).
57 Note that this is the functional ('dynamic') mésē of the particular key (tónos). Above, we have considered the absolute ('thetic') mésē of lyre tuning, which coincides with the functional Lydian mésē II of ancient musical notation.
Conclusion
What insights have we gained that rest on firmer ground? There seems to be good reason to trace the precursors of the Abbasid ʿūd back to late antiquity, at least. It may have been in Central Asia that the instrument acquired its typical shape, which subsequently spread both eastwards and westwards to the ends of the known world. A similar design of four strings structured by frets in a sequence of a tone and three semitones may however have been older, perhaps significantly older. The first sufficiently realistic depiction that matches the required parameters as well as the approximate string length appears in late Classical Greece, in a musical environment that related exceptionally well to characteristic features of the ʿūd, such as limited chromaticism and the realization of the theoretical model scale, and which produced a literary reference to a four-stringed instrument. It is obvious that the Greeks imported the basic concept of the lute from other cultures, where long-necked types had been in use for millennia.58 However, a direct precursor of an instrument such as that on the Mantinea monument has not been identified. Is it plausible that the particular design was a Greek invention, after all? In fact, al-Kindī mentions sources that entertained precisely that possibility, rivaling a Babylonian claim.59 Modern scholarship might be well advised not to preclude it without further argument.
58 For instance, the term pandour- is associated with an old (Euphorion quoted by Athenaeus, Deipnosophists 4.182e) three-stringed (Pollux 4.60) instrument of more or less exotic origin (ibid.: Assyrian; Athenaeus, Deipnosophists 4.183f-4a: a Troglodytan variant).
|
v3-fos-license
|
2019-04-22T13:12:50.569Z
|
2018-10-29T00:00:00.000
|
125684752
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ccsenet.org/journal/index.php/mas/article/download/0/0/37276/37602",
"pdf_hash": "12a4a7e68458cb44c0596206cbba729101f0391f",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43521",
"s2fieldsofstudy": [
"Political Science"
],
"sha1": "12a4a7e68458cb44c0596206cbba729101f0391f",
"year": 2018
}
|
pes2o/s2orc
|
Increasing Citizen Engagement and Participation through eGovernment in Jordan
Supporters of e-Government believe that this technology will be a panacea for enhancing the engagement and participation of citizens in politics and government. However, there is little empirical support for this assertion. Due to the rapid proliferation of e-Government in Jordan, there is an impetus to determine how e-Government impacts citizen participation and engagement in politics and government within the country. Using qualitative phenomenological focus group interviews with 40 citizens who utilize e-Government, an effort was made to understand how this technology influences outcomes with regard to participation and engagement with government. The results indicate that those using e-Government were politically active before using the technology and have extensive experience with technology use. For the politically active, e-Government serves to extend participation in the process. For individuals who lack technological savvy and/or are not politically active, e-Government alone may not be enough to increase citizen engagement and participation in politics and government.
Introduction
Modern governments face a wide range of challenges when it comes to effectively meeting the diverse needs of citizens. The inability of governments to meet these needs has, in some instances, led to the disenfranchisement of many, prompting concern regarding the willingness and ability of citizens to take an active role in political discourse (Wirtz et al., 2014). Initially, scholars called for research concerning exploiting the benefits of crucial electronic systems for job scheduling services (e.g., Karajeh & Maqableh, 2014; Maqableh & Karajeh, 2014), among other fields such as cloud computing (e.g., Tarhini et al., 2017a), e-learning (e.g., Almajali & Al-Dmour, 2016; Tarhini et al., 2017b), e-services (e.g., Almajali & Maqableh, 2015; Khwaldeh et al., 2017; Obeidat et al., 2017), online banking (Tarhini et al., 2015), e-learning systems (Almajali et al., 2016), and e-government (e.g., Alenezi et al., 2017).
In an effort to address some of these challenges, e-Government initiatives have been increasingly adopted. As reported by Ifinedo & Singh (2011), e-Government involves "the utilization of the Internet and World Wide Web for delivering government information and services to citizens and other stakeholders in a country" (166). These authors go on to argue that the purpose of e-Government is to make resources and information more accessible to individuals, increasing citizen engagement in government. Smith (2016) further argues that e-Government has the potential to improve the image of the public sector while increasing citizen trust in government as well as citizen participation in government. Clearly, the use of e-Government holds the potential for substantially altering citizen engagement, participation, and interaction with government at all levels.
e-Government Adoption
The adoption of e-Government by various countries across the globe appears to represent the intersection of a broad range of social, economic, and political variables. On one hand, the transactional nature of government has become laborious due to the number of services provided, requiring governments to utilize e-Government as a means for interfacing with citizens seeking to complete basic transactions (Nguyen, 2016). This motivation for the development of e-Government has been fueled by a need to increase efficiency in government services while cutting costs (Ziagham, 2011). On the other hand, the adoption of e-Government has been supported as a foundation for increasing citizen engagement and participation in politics and policy creation; a process that is viewed as essential for building a foundation for sustainable social and cultural discourse (Ifinedo & Singh, 2011).
Even though e-Government has been widely supported across various countries and regions, data indicate that the adoption of this technology platform has not been uniform (Nguyen, 2016). While Bussell (2011) argues that the uneven development of e-Government can be traced to a wide range of social, political, cultural, and economic factors, there is a dearth of empirical research definitively demonstrating which variables universally impact outcomes for e-Government adoption. Nguyen (2016) further examines the adoption of e-Government, noting that citizen perceptions of this technology can impact use and continued interest in developing this technology. Specifically, Nguyen (2016) argues that e-Government has, in many instances, failed to deliver promised results, impacting citizen trust in government and the willingness of citizens to access and use these technology platforms. This raises the question of whether the use of this technology platform has increased or decreased citizen engagement and participation in government.
e-Government Adoption in Jordan
e-Government adoption in Jordan has accelerated in recent years. Al-Hujran et al. (2011) note that in Jordan there is a strong desire to build a knowledge-based economy. As a result of this focus, considerable effort and emphasis have been placed on the development of e-Government services (Al-Hujran et al., 2011; Alenezi et al., 2015, 2017). Al-Hujran & Shahateet (2010) go on to argue that government officials in Jordan have linked the achievement of "good governance" with information and communication technology (ICT). What this suggests is that government officials view technology as an integral component of building effective government services that benefit citizens. Although leaders in Jordan continue to work toward integrating technology as part of a larger plan to improve government, results regarding outcomes, including citizen engagement in e-Government and participation in government, are limited. However, existing research on the adoption of e-Government by citizens does provide important insight into the factors motivating the decision of citizens to utilize these services.
Data provided by Alomari et al. (2012) indicate that citizen intention to use e-Government in Jordan is shaped by user perceptions regarding trust in the internet, potential advantages of using e-Government, and ease of website use. Similarly, Abu-Shanab (2014) found that citizen trust in the internet in general impacted the decisions of citizens to access and use e-Government services. Privacy and security issues were noted to be of particular concern for citizen adoption and use of e-Government (Abu-Shanab, 2014). Further, Majdalawi et al. (2015) note that citizen adoption and use of e-Government in Jordan has been hampered by a lack of awareness among the population regarding services available through the internet. Citizen awareness, trust, and a lack of knowledge regarding internet use were further noted by Al-Hujran & Shahateet (2010) as impacting citizen adoption of e-Government in Jordan.
Additional insight provided in the empirical research indicates that similar issues regarding e-Government adoption in Jordan have been reported. In particular, Nusir et al. (2012) report that ICT skills represent one of the most pertinent factors influencing e-Government adoption. Perceived ease of technology use, attitudes of citizens, and intentions to use government services have also been noted by Al-Hujran & Al-dalahmeh (2011) as having a direct influence on citizen use of e-Government in Jordan. Al-Hujran & Al-dalahmeh (2011) further note the cultural implications shaping e-Government use in Jordan. Fidler et al. (2011) also highlight cultural barriers to the adoption of e-Government, noting the custom of Wasta and its implications for citizen decisions to use this technology.
Although investigations of e-Government in Jordan clearly indicate that there have been numerous studies examining the factors that facilitate and impede citizen use and adoption of this technology, there is a paucity of salient data evaluating the impact of e-Government on those who use it. In particular, it is pertinent to consider whether e-Government services provide a foundation for engaging citizens to facilitate further interest and participation in government and politics. Understanding the reach of e-Government and its implications for citizen involvement in government is important, as supporters of this technology argue that this outcome can be achieved (Ifinedo & Singh, 2011). Given the current gap in understanding of the topic, it is imperative to assess how those who regularly utilize e-Government services view their level of participation and interaction with government as a result of utilizing this technology.
Research Purpose
Research regarding the implications of e-Government for citizen engagement and participation in politics and government in the country of Jordan is still in its infancy. While it is possible to identify a broad range of factors impacting e-Government adoption among citizens, what is not as clear is what occurs when these obstacles are overcome and citizens access and use e-Government services. Do these interactions lead to a desire among citizens to understand government and to take a more active role in politics? Or do e-Government services serve a perfunctory role without actually facilitating user engagement in government activities? The purpose of this qualitative study was to assess the scope and level of engagement and participation of Jordanian citizens in politics and government as a result of using e-Government tools and services.
Research Methodology
The methodology selected for use in this investigation was qualitative phenomenology. A qualitative approach was selected due to the exploratory nature of the topic, the potential for participants to be influenced by a broad range of variables, and the dearth of empirical research on the topic to guide quantitative inquiry into the subject. Qualitative methods are often employed when succinct variables cannot be quantified for investigation (Creswell, 2014). This approach to inquiry offers a foundation for understanding the conditions and context of a specific topic (Creswell, 2014). As noted at the outset of this investigation, there is currently a dearth of research examining the impact of e-Government on citizens who utilize this technology in Jordan. The need to understand the conditions and context of this topic thus justifies the choice of a qualitative approach in the current research.
A phenomenological framework for investigation was selected due to the emphasis of this methodology on understanding a phenomenon of interest. The current research aimed to understand the impact of e-Government on engagement and participation for citizens living in Jordan. The influence of e-Government on the behavior of citizens was the primary phenomenon investigated. Phenomenological frameworks typically involve a small number of subjects who are examined to understand patterns and relationships of meaning related to the phenomenon under investigation (Creswell, 2014). Data collection for phenomenological investigations typically includes personal or focus group interviews in which an effort is made to collect data based on participant experiences with the phenomenon (Creswell, 2014). Aligning the current research with a phenomenological approach, focus groups were selected for data collection.
Data Collection
A general survey of internet use was sent to all students, faculty, and staff working at a large metropolitan university and its adjoining healthcare center to identify individuals with extensive e-Government and internet experience and use. The survey instrument was acquired from researchers at the University of Washington and mailed to all known addresses for students, faculty, and staff. A total of 49 individuals reported considerable internet and e-Government service use in the last six months. Each respondent was contacted by phone or email, and 40 agreed to participate in the study. Data collection for this investigation employed four focus group interviews of 10 participants each, scheduled at a time and place convenient for all focus group members and lasting between 45 and 75 minutes. All interviews were videotaped and transcribed within 48 hours of their completion.
Data Analysis
Data analysis for this investigation was undertaken through the use of qualitative data analysis software, Atlas.ti 7. This software provides the ability to identify codes within text data and to uncover relationships between pertinent themes. Information from each of the focus group transcripts was first analyzed to ascertain similarities and differences between the groups with regard to their engagement and participation in government via e-Government services. Afterwards, data from all four transcripts were analyzed simultaneously to acquire a representative picture of how e-Government shapes citizen engagement and participation in government and politics.
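To make the cross-transcript comparison concrete, here is a minimal sketch under stated assumptions: the theme labels and per-group code lists below are hypothetical placeholders, since the actual coding and counts live in the Atlas.ti project.

```python
# A minimal sketch of per-group and pooled theme tallying; the theme labels
# and counts are illustrative placeholders, not the study's actual codes.
from collections import Counter

coded_segments = {
    "group_1": ["prior_political_activity", "tech_comfort", "tech_comfort"],
    "group_2": ["prior_political_activity", "tech_comfort", "family_tech_use"],
    "group_3": ["tech_comfort", "prior_political_activity"],
    "group_4": ["family_tech_use", "tech_comfort"],
}

# Per-group tallies capture similarities and differences between groups...
per_group = {g: Counter(codes) for g, codes in coded_segments.items()}
# ...while the pooled tally gives the representative overall picture.
overall = sum(per_group.values(), Counter())

# Themes appearing in every focus group are candidates for "common themes".
common = [t for t in overall if all(t in c for c in per_group.values())]
print(overall.most_common())
print(common)  # e.g., ['tech_comfort']
```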
Research Results
The results for this investigation are presented below and include an overview of the pertinent themes noted in each of the focus group interviews, followed by a review of connected themes identified across all four of the interviews. The data demonstrate that there are some pertinent factors influencing citizen engagement and participation in government that may not be directly related to e-Government. Understanding these issues may be important for building a better foundation for increasing citizen engagement and participation in government.
Conclusion
One of the key issues highlighted through the data analysis was that citizen engagement and participation in e-Government was supported through antecedents leading to the willingness and ability of the individual to participate in this process. The data indicate that comfort with technology, being politically active before e-Government, technology use by family and friends, and willingness to use e-Government services all played a role in increasing citizen engagement and participation in government and e-Government. The outcomes that resulted from this process included the ability to use e-Government to extend political activity and an increased use of technology in general. Based on this assessment, it is evident that increasing citizen participation and engagement through the use of e-Government can be facilitated through efforts to build citizen involvement with technology and politics simultaneously.
The results bring to light the importance of addressing political participation in Jordan as a factor influencing citizen use of e-Government. Al-Sabeelah et al. (2015) acknowledge that the changing social, economic, and political climate in Jordan may have implications for the participation of citizens in government. In particular, these authors argue that shifts in traditional values and beliefs may serve as the basis for disenfranchising many citizens, disrupting and discouraging participation in the political process. Conceptualizing the intersection of these variables with those that impact individual decision making with regard to using e-Government services is clearly a topic for further consideration. In addition, researchers have stressed the importance of considering the antecedents and enabling factors of applying electronic services (e.g., Masa'deh et al., 2008; AlHarrasi & AL-Lozi, 2015; Kateb et al., 2015; AlHrassi et al., 2016; Mikkawi & Al-Lozi, 2017; Abualoush et al., 2018; Masa'deh et al., 2018); hence, future research is vital to examine these enablers so as to assist stakeholders in their decisions on reaching high levels of e-Government services.
Although proponents of e-Government believe that access to government information via the Internet will facilitate citizen participation in politics and government, the data provided in this investigation suggest that, in order to build citizen participation through e-Government, simultaneous efforts are needed to build technological competencies and political activism for individuals. Current research regarding political participation by citizens in Jordan does indicate that social and economic changes may be fueling the detachment of citizens from the political sphere, further impacting the willingness and desire to engage with government through the use of technology. By building a broader conceptual understanding of the confluence of these variables, it will be possible to acquire a deeper understanding of the topic. This information could also be utilized as a foundation for increasing citizen willingness and desire to use e-Government as a platform for extending political action and for becoming more involved in government activities. Clearly, technology alone cannot bridge the divide between citizen engagement and participation in politics and government.
Figure 1. Integration of themes: how e-Government impacts citizens.
Table 1. Common themes by focus groups.
|
v3-fos-license
|
2022-01-11T16:03:59.486Z
|
2021-10-01T00:00:00.000
|
245849092
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://ijma.journals.ekb.eg/article_198444_c88dcd2e9d8002804a7ea786724cb2ba.pdf",
"pdf_hash": "72edf35e051c0648f059654d2ece78ec63e771c6",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43524",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "8e93a952607dc421c05d41f822344e840b47f145",
"year": 2021
}
|
pes2o/s2orc
|
Prediction of Fetal Lung Maturity by Fetal Pulmonary Artery Doppler in Cases of Severe Pre-Eclampsia
Background: Severe pre-eclampsia occurring remote from term presents a decision-making dilemma for the obstetrician. The general recommendation is that women with severe pre-eclampsia should be delivered to avoid maternal complications; others recommend prolonging pregnancy in most cases of severely premature pre-eclamptic gestation until the development of fetal lung maturity, the development of fetal or maternal distress, or the achievement of a gestational age of 34 to 36 weeks. The cut-off level of the pulmonary artery acceleration time to ejection time ratio [PAT/ET] that determines fetal lung maturity in cases of severe pre-eclampsia [PE] has not yet been determined. Aim of the work: To study the Doppler indices of the main fetal pulmonary artery and their role in predicting respiratory distress syndrome [RDS] in severe pre-eclampsia. Patients and Methods: A prospective longitudinal cohort study was designed in which 102 pregnant women with severe PE were enrolled; fetal pulmonary artery flow velocity data were acquired by Doppler ultrasound and linked with the development of neonatal RDS. Results: The AT/ET ratio in the fetal pulmonary artery velocity waveform was found to be directly related to the development of newborn RDS. A cut-off value of 0.3 resulted in a sensitivity of 71.4%, a specificity of 79.7%, and a total accuracy of 77.5%. Conclusion: A high AT/ET ratio in the fetal pulmonary artery is related to the future development of RDS in neonates of mothers with severe pre-eclampsia, implying that fetal pulmonary artery Doppler ultrasound may be a valuable tool in the identification of fetal lung maturity in situations of severe pre-eclampsia.
INTRODUCTION
Since the advent of ultrasound and antenatal fetal testing, there has been a growing emphasis on optimizing fetal outcomes in difficult, high-risk pregnancies [1,2]. Pre-eclampsia affects about 5% of pregnancies, and hypertensive disorders of pregnancy cause over 60,000 maternal fatalities each year around the world [3]. The decision to deliver a woman with severe pre-eclampsia far from term [mid-trimester] is difficult for the obstetrician. The usual suggestion is that women with severe pre-eclampsia be delivered after stabilization to avoid maternal problems [4].
In most cases of highly premature pre-eclamptic gestation, certain institutions propose continuing pregnancy until one of the following occurs: development of fetal lung maturity, development of fetal or maternal distress, or achievement of a gestational age of 34 to 36 weeks [5,6]. Preterm labor [before 37 weeks] is the most common complication, occurring in seven to ten percent of deliveries in the second half of pregnancy. One of the primary causes of morbidity and mortality in premature neonates is neonatal respiratory distress syndrome [RDS] [7].
Numerous tests have been developed in an attempt to identify whether fetal lung maturation has reached a level sufficient to prevent the development of fetal RDS. These tests are based on four core themes: biochemical testing for active surfactant components, biophysical testing of surfactant functionality, physical testing of amniotic fluid opacity, and ultrasound examination of the fetus and its tissues [8,9].
Neonatal RDS has been predicted using fetal pulmonary artery Doppler. Kim and colleagues discovered that a high PAT/ET was linked to newborn RDS [10]. The development of RDS in preterm newborns is linked to an increased acceleration time to ejection time [AT/ET] ratio in the fetal pulmonary artery. These findings show that fetal pulmonary artery Doppler velocimetry could be a reliable noninvasive tool for assessing fetal lung maturity, similar to how middle cerebral artery Doppler has replaced amniocentesis for fetal anemia testing [10].
AIM OF THE WORK
The cut-off level of PAT/ET that determines fetal lung maturity in cases of severe pre-eclampsia [PE] has not yet been determined. Thus, this work aims to predict fetal lung maturity using the fetal main pulmonary artery Doppler waveform in cases of severe pre-eclampsia and to compare these results with the neonatal outcome in order to detect the cut-off level of the fetal pulmonary artery acceleration time to ejection time ratio [PAT/ET] that determines fetal lung maturity in such cases.
PATIENTS AND METHODS
From October 2016 to September 2019, a prospective longitudinal cohort study was undertaken at the obstetrics and gynecology department of the New Damietta Al-Azhar University Hospital in Egypt. One hundred and two pregnant women with severe pre-eclampsia who received prenatal care in the obstetrics outpatient clinic and gave birth within 24 h of admission were included in the research.
Inclusion criteria: All pregnant women with severe pre-eclampsia between 32 and 37 weeks' gestation.
Exclusion criteria: Multiple pregnancies, unclear gestational age, identified fetal congenital defect, preterm labor at a gestational age of less than 32 weeks, gestational age of more than 37 weeks, and pregnant women who did not meet the criteria for severe pre-eclampsia.
Intervention: According to the definition of the American College of Obstetricians and Gynecologists 2020, there were 110 patients with severe pre-eclampsia with the following characteristics: thrombocytes <100,000/µL, microangiopathic hemolysis [elevated LDH], raised serum transaminase levels [ALT or AST], continuous headache or other cerebral or visual abnormalities, persistent abdominal discomfort, blood pressure ≥160/110 mm Hg, albuminuria ≥2.0 g/24 h or ≥2+ on dipstick, and/or serum creatinine >1.2 mg/dl unless previously raised.
Methods: For each patient, the following data were obtained: complete medical and surgical history, including obstetric history [preterm labor, stillbirth, IUFD, abortion], current history [to detect symptoms of severe pre-eclampsia such as headache, epigastric discomfort, blurring of vision, and vomiting], and family history. Clinical examination: blood pressure, pulse, and temperature readings. Basic laboratory investigations for pregnant women: blood group, fasting and postprandial blood sugar, thyroid-stimulating hormone, complete blood picture to identify platelet count, and SGOT, SGPT, LDH, and serum creatinine to diagnose organ affection.
Basic ultrasound assessment
A single examiner [from the authors' team] conducted all ultrasound examinations using a Voluson S8 ultrasound machine [GE Healthcare Austria GmbH, Seoul, South Korea] equipped with a 3 to 5 MHz convex array sector transducer, after a routine ultrasound examination that included fetal biometry, estimated fetal weight, and amniotic fluid index. For S/D ratio determination, the investigator conducted a pulsed wave Doppler assessment of the fetal umbilical artery and middle cerebral artery. In the axial view of the thorax, with the fetus at rest and no fetal breathing motions, the examiner assessed the fetal heart systematically [the four-chamber view, the outflow tracts, and the three-vessel view], then traced the main pulmonary artery [MPA] until midway between the pulmonary valve and the bifurcation of the right and left branches.
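For orientation, the Doppler indices reported in the Results reduce to simple ratios of measured waveform values; the sketch below uses their standard definitions with hypothetical timing and velocity inputs, not values from the study.

```python
# A minimal sketch of the Doppler indices used in this study, from their
# standard definitions; all input values here are hypothetical.

def pat_et(acceleration_time_ms: float, ejection_time_ms: float) -> float:
    """Pulmonary artery acceleration time / ejection time ratio (PAT/ET)."""
    return acceleration_time_ms / ejection_time_ms

def s_d(peak_systolic: float, end_diastolic: float) -> float:
    """Peak systolic / end-diastolic velocity ratio (S/D)."""
    return peak_systolic / end_diastolic

def resistance_index(peak_systolic: float, end_diastolic: float) -> float:
    """RI = (S - D) / S."""
    return (peak_systolic - end_diastolic) / peak_systolic

print(pat_et(45.0, 170.0))           # ~0.26, below the 0.3 cut-off reported below
print(s_d(60.0, 20.0))               # 3.0
print(resistance_index(60.0, 20.0))  # ~0.67
```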
Neonatal assessment and outcomes
Neonatal resuscitation and evaluation were performed by a senior neonatologist based on Apgar scores at 5 and 10 min, neonatal ICU admission, and RDS development, diagnosed as follows [European Consensus Guidelines]: evidence of respiratory impairment [tachypnea, retractions, and/or nasal flaring] early after delivery, sustained oxygen demand for more than 24 h, exogenous pulmonary surfactant infusion, and hyaline membrane disease seen on radiographs. Neonates who developed RDS were compared with the normal neonates in the study group, and a correlative analysis of fetal main pulmonary artery Doppler findings in the two groups was performed to detect the cut-off level of PAT/ET that determines fetal lung maturity in cases of severe pre-eclampsia.
Study flow
More than 1500 pregnant women attending the obstetrics outpatient clinic were assessed to identify patients at high risk of developing pre-eclampsia in pregnancy. Of these, 277 cases at high risk of developing hypertensive disorders of pregnancy were followed up in the obstetrics outpatient clinic; 110 cases developed severe pre-eclampsia; 108 cases agreed to be part of this study, were counseled about its steps, and signed consent; and 6 cases were excluded: 2 cases delivered earlier than 32 weeks' gestation and 4 cases had stillbirths.
The study included one hundred and two pregnant women with severe pre-eclampsia between 32 and 37 weeks' gestation who were followed up until delivery, with demographic, medical, and obstetric history, laboratory findings, symptoms of severity, 2D ultrasound and Doppler findings including fetal pulmonary artery Doppler, dexamethasone doses, mode of delivery, gestational age at termination, and neonatal outcome recorded; pulmonary artery Doppler findings were then correlated with the development of neonatal RDS [Figure 3].
Study groups: neonates without RDS [n = 74] and neonates with RDS [n = 28].
Demographic, medical, and obstetric characteristics, laboratory findings, symptoms of severity, and 2D ultrasound and Doppler findings were compared between neonates who developed RDS and the normal neonates in the study group. Ethical considerations: The purpose of this study and the role of pulmonary artery Doppler in predicting fetal lung maturity and preterm complications were explained to the patients. One hundred and eight cases agreed to participate in this study, were counseled about the procedures, and signed informed consent, with 6 cases excluded for failing to meet the inclusion criteria. After discussing the research's goal with each patient, informed written consent, approved by the department's ethical committee [IRB00012367-16-01-006], was obtained.
Statistical analysis: Data were fed into the computer and analyzed using the IBM SPSS software package, version 22.0. Qualitative data were described using numbers and percentages. Quantitative data were described using the median [minimum and maximum] for non-parametric data and the mean and standard deviation for parametric data, after testing normality using the Kolmogorov-Smirnov test. The significance of the obtained results was judged at the 0.05 level. The chi-square test was used for the comparison of two or more groups; the Fisher exact test was used as a correction for the chi-square test when more than 25% of cells had a count of less than 5 in 2×2 tables. For quantitative data from two groups, the Student t-test, a parametric test that assumes the population data are normally distributed, was used; non-parametric tests such as the Mann-Whitney U test were used to compare two independent groups when this assumption did not hold. Binary stepwise logistic regression analysis was used for the prediction of independent variables of a binary outcome. Significant predictors in the univariate analysis were entered into the regression model using the forward Wald/Enter method. Adjusted odds ratios and their 95% confidence intervals were calculated.
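The study's analysis was run in SPSS; as a rough illustration of the same battery of tests in code, the following sketch uses SciPy and statsmodels with hypothetical stand-in values rather than the study data.

```python
# A minimal sketch of the comparisons described above; all arrays and the
# 2x2 table are hypothetical stand-ins, not the study dataset.
import numpy as np
from scipy import stats
import statsmodels.api as sm

ga_rds = np.array([35.2, 36.4, 35.8, 36.0, 35.5])  # GA (weeks), RDS group
ga_no = np.array([36.3, 35.9, 36.9, 36.1, 36.5])   # GA (weeks), no-RDS group

t_stat, p_t = stats.ttest_ind(ga_rds, ga_no)       # Student t-test (parametric)
u_stat, p_u = stats.mannwhitneyu(ga_rds, ga_no)    # Mann-Whitney U (non-parametric)

table = np.array([[24, 4], [22, 52]])              # hypothetical 2x2 counts
chi2, p_chi, dof, _ = stats.chi2_contingency(table)  # chi-square test
odds, p_fisher = stats.fisher_exact(table)           # Fisher exact test

# Binary logistic regression: RDS (1/0) on gestational age; exponentiated
# coefficients give odds ratios with their 95% confidence intervals.
x = sm.add_constant(np.r_[ga_rds, ga_no])
y = np.r_[np.ones(len(ga_rds)), np.zeros(len(ga_no))]
fit = sm.Logit(y, x).fit(disp=0)
print(np.exp(fit.params[1]), np.exp(fit.conf_int()[1]))
```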
Diagnostic accuracy: Receiver Operating Characteristic [ROC] curve analysis: The diagnostic performance of a test, that is, its accuracy in discriminating diseased from non-diseased cases, was evaluated using ROC curve analysis. Sensitivity and specificity were determined from the curve, and PPV, NPV, and accuracy were calculated through cross-tabulation.
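A minimal sketch of this procedure follows, again on hypothetical PAT/ET values rather than the study data: sensitivity and specificity are read off the curve at the chosen cut-off, while PPV, NPV, and accuracy come from the 2×2 cross-tabulation.

```python
# A minimal sketch of the ROC-based cut-off evaluation; labels and PAT/ET
# values below are hypothetical, and the study's actual figures are from SPSS.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])  # 1 = RDS (hypothetical labels)
pat_et = np.array([0.34, 0.31, 0.28, 0.29, 0.27, 0.25, 0.31, 0.22])

fpr, tpr, thresholds = roc_curve(y_true, pat_et)
print("AUC:", roc_auc_score(y_true, pat_et))

cutoff = 0.3
pred = pat_et >= cutoff                      # a high PAT/ET predicts RDS
tp = int(np.sum(pred & (y_true == 1)))
fn = int(np.sum(~pred & (y_true == 1)))
fp = int(np.sum(pred & (y_true == 0)))
tn = int(np.sum(~pred & (y_true == 0)))

sensitivity = tp / (tp + fn)                 # read off the curve at the cutoff
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                         # from the 2x2 cross-tabulation
npv = tn / (tn + fn)
accuracy = (tp + tn) / len(y_true)
print(sensitivity, specificity, ppv, npv, accuracy)
```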
I. Descriptive part
Demographic data of the studied females: The mean age was 28.57 years, with a median gravidity and parity of 3 and 2, respectively. The mean gestational age was 36.09 weeks.
II. Analytic part
Comparing the demographic characteristics and obstetric history of the studied females between those whose neonates developed RDS and those with normal neonates, there was no statistically significant association between the incidence of RDS among neonates and the age, gravidity, or parity of their mothers. Mean gestational age was significantly lower among neonates with respiratory distress syndrome than among those without RDS [35.61 versus 36.28 weeks] [Table 1].
Comparing the medical history of the studied females between those who developed RDS and the normal neonates in the study group, there was no statistically significant association between RDS incidence among neonates and the presence of hypertension, diabetes, heart disease, or other medical conditions among their mothers. Comparing the laboratory findings and symptoms of the studied females in the two groups, there was no statistically significant association between neonates with and without RDS and the laboratory results of their mothers, including SGOT, SGPT, presence of proteinuria, serum creatinine, and platelet count. There was also no statistically significant association between the presence of new severe persistent headache, projectile resistant vomiting, or persistent blurring of vision among the studied mothers and the presence of respiratory distress syndrome among their neonates. However, a higher frequency of epigastric pain was associated with a higher prevalence of RDS [57.1% versus 29.7% of neonates without RDS]. Comparing the number of dexamethasone courses in those who developed RDS with the normal neonates in the study group, there was a statistically significant association between dexamethasone administration to pregnant mothers and the incidence of respiratory distress syndrome among the studied neonates, with 85.7% of cases with respiratory distress having received one dose of dexamethasone and only 14.3% two doses [a higher dose was associated with a lower incidence of RDS]. Comparing 2D ultrasound findings in those who developed RDS with the normal neonates in the study group, we found a statistically significantly higher mean BPD among neonates without RDS. Comparing Doppler findings between the two groups, we found that P.ET, P.AT/ET, and P.S/D ratio were significantly associated with respiratory distress syndrome among the studied neonates, with P.ET and P.S/D ratio being higher among neonates with respiratory distress syndrome than among neonates without RDS; P.AT/ET was likewise higher among neonates with respiratory distress syndrome. Median P.AT, P.RI, P.PI, umbilical artery S/D ratio, and MCA S/D ratio were not associated with RDS [p>0.05] [Table 3].
Comparing those who developed RDS with the normal neonates in the study group, we found that IUGR, Doppler P.ET, Doppler P.S/D ratio, and one cycle of dexamethasone given to mothers were significant predictors of respiratory distress syndrome among the study neonates, with 79.5% of respiratory distress cases predictable by these four factors [Table 4]. In the ROC curve for P.AT/ET in differentiating respiratory distress syndrome among the study neonates, the area under the curve was excellent, and the best-detected cut-off point was 0.3, yielding a sensitivity of 71.4% and a specificity of 79.7%, with a total accuracy of 77.5% [Table 5].
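As a quick arithmetic consistency check, the reported 77.5% accuracy follows from the reported sensitivity and specificity if one assumes that the 28 RDS / 74 non-RDS split shown in the study flow also applies to the ROC table.

```python
# A back-of-the-envelope check of the reported figures, assuming the
# 28 RDS / 74 non-RDS split from the study flow applies to the ROC analysis.
n_rds, n_no_rds = 28, 74
tp = round(0.714 * n_rds)       # sensitivity 71.4% -> 20 true positives
tn = round(0.797 * n_no_rds)    # specificity 79.7% -> 59 true negatives
accuracy = (tp + tn) / (n_rds + n_no_rds)
print(tp, tn, accuracy)         # 20 59 0.7745..., i.e., the reported 77.5%
```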
DISCUSSION
Results of the current work revealed that there was no statistically significant association between the collected demographic data [maternal age, gravidity, and parity] and RDS in cases of severe pre-eclampsia, except for gestational age at the time of termination, which showed a statistically significant association with the development of fetal RDS. These findings agree with the results of a large study of 13,490 VLBW infants in Taiwan, including 2,200 [16.3%] cases born to mothers with pre-eclampsia, which concluded that GA, but not birth weight, was associated negatively with RDS; GA and birth weight were associated inversely with severe RDS, with ORs of 0.68 [95% CI, 0.65-0.7] and 0.94 [95% CI, 0.91-0.97], respectively [11].
In this study, there was no statistically significant association between the medical history of the studied cases and the development of RDS in the delivered infants; these results agree with those cited by Hochberg et al. [12], which support the link between pre-eclampsia in mothers and broncho-pulmonary dysplasia in preterm newborns, suggesting that chronic hypertension is protective for preterm neonates [13]. In contrast to our study, Li et al. [14] concluded that maternal diabetes, including gestational diabetes mellitus and pregestational diabetes mellitus, is connected to a higher incidence of neonatal RDS. Another study found that the risk of pre-eclampsia increased with the severity of diabetes in women with pre-gestational diabetes mellitus, and that proteinuria early in pregnancy was linked to a significant increase in negative neonatal outcomes, even when pre-eclampsia was not present [15]. These difficulties are thought to emerge in DM because surfactant synthesis is delayed as a result of maternal hyperglycemia [16] and fluid clearance in the fetal lungs is reduced [17]. The discrepancy between these results could be explained by the fact that, in our study, severe pre-eclampsia stimulated early surfactant synthesis and the patients had good glycemic control.
In this study, there was no statistically significant association between laboratory findings and RDS among the cases studied; this finding agrees with a large systematic quantitative review [18] regarding proteinuria and coincides with many other studies regarding liver function tests [19-21].
In this study, we found no statistically significant relationship between the symptoms of severe pre-eclampsia and RDS among the study cases, except for epigastric pain; these findings correlate with the findings of a large study of 13,490 patients by Yu-Hua Wen [11].
In this study, there was a statistically significant inverse relationship between 2D ultrasound findings, as regards BPD, FL, AFI, and IUGR, and RDS among the cases studied. These results agree with Gilbert and Danielsen [22], who reported that prematurity linked to poor neonatal outcomes [RDS, IVH, NEC, and CHA] was significantly influenced by IUGR in the third trimester. They also agree with the results of Rabinovich et al. [23], who reported that, among women with pre-eclampsia who gave birth prematurely, oligohydramnios is an independent risk factor for perinatal morbidity, and with another study which found that gestational age at the time of termination is a good predictor of neonatal outcome in cases of severe pre-eclampsia [11]. A relatively recent study concluded that, even in the context of fetal growth restriction, the obstetrician should try to extend pregnancies complicated by early-onset severe pre-eclampsia up to 32 gestational weeks as far as maternal conditions allow; this type of management policy could help to enhance newborn outcomes [24].
Regarding fetal pulmonary artery ejection time and the acceleration time to ejection time ratio, there was a statistically significant relationship between these Doppler indices and neonatal RDS among the cases examined in this investigation. These findings correspond with those of a recent study by Büke et al. [25], who stated that the fetal PAT/ET ratio is a promising noninvasive technique for predicting RDS in preterm births, and with the results of another study, which discovered that measuring FLV or PA-RI can predict RDS in preterm fetuses and that, when these metrics were used together, their predictive power increased [26].
In this research, there was a statistically significant inverse relationship between giving one corticosteroid course and neonatal RDS; this finding agrees with the results of another study which concluded that steroids reduce respiratory distress syndrome [27]. In contrast to our result, Witlin et al. [28] found no relationship between corticosteroids and neonatal outcome; this difference could be explained by the fact that their study examined patients delivered from 24 to 33 weeks' gestation, whereas we studied cases at 32 to 37 weeks' gestation. Furthermore, Crowther et al. [29] found that mothers at continued risk of preterm birth who received repeat doses after the first course of prenatal corticosteroids had a reduced likelihood of their infant requiring respiratory support after birth, a neonatal benefit. The difference between the results could be explained as follows: in our study, 6 patients did not receive any corticosteroid course and had no neonatal RDS, as they developed late-onset pre-eclampsia after lung maturity; 91 patients received one course of corticosteroids, with a statistically significant reduction in the development of neonatal RDS; and 5 patients received two corticosteroid courses, with 4 of them developing neonatal RDS, as they developed early-onset pre-eclampsia with its adverse effect on neonatal outcome [30].
The development of respiratory distress syndrome [RDS] in the neonates of mothers with severe pre-eclampsia during pregnancy is linked to an elevated acceleration time/ejection time ratio [AT/ET] in the fetal pulmonary artery, with a cut-off level of 0.30, a sensitivity of 71.4%, and a specificity of 79.7%.
These results are compatible with the results of recent research examining 105 patients, which found that, even after accounting for gestational age, estimated fetal weight, and fetal gender, there was a significant association between the diagnosis of RDS in neonates and PAT/ET levels [r = 0.52 and p = .0017]. A cut-off value of 0.327 yielded 77.1% specificity, 90.9% sensitivity, a 95.4% negative predictive value, and a 52.7% positive predictive value [25]. Another study found that, for the subsequent diagnosis of TTN in small for gestational age [SGA] neonates, a cut-off value of 0.298 offered an ideal specificity of 93.0% and sensitivity of 81.0% [31].
This study has many strengths. It is the first to discuss the cut-off level of PAT/ET in the prediction of neonatal RDS in cases of severe pre-eclampsia. Pre-eclampsia is a dynamic process, so the pregnancy can be terminated once fetal lung maturity is detected and before deterioration of the maternal condition. This parameter could also be used in the prediction of neonatal RDS in other high-risk pregnancies, as many medical disorders accompanying pre-eclampsia, such as diabetes mellitus and heart disease, and a wide range of maternal ages were included in this study. PAT/ET can be measured with 2D ultrasound with Doppler, with no need for a high-end ultrasound machine or 4D ultrasound, so fetal lung maturity can be predicted with a non-invasive technique instead of other, invasive techniques with their drawbacks.
There are some limitations of the study. Many cases included at the beginning of this research were delivered either before 32 weeks' gestation or after 37 weeks' gestation, so they were excluded from the study. Fetal echocardiography is difficult at a gestational age between 32 and 37 weeks because of the high echogenicity of the fetal ribs, so an anterior position of the fetal chest is preferred to obtain an optimal view. Furthermore, severe pre-eclampsia is associated with oligohydramnios, which adds more difficulty to the performance of fetal echocardiography and the detection of fetal pulmonary artery Doppler indices.
CONCLUSION
The fetal P.ET, P.AT/ET, and P.S/D ratios detected through Doppler were significantly associated with respiratory distress syndrome among neonates, with the P.ET, P.S/D, and P.AT/ET ratios being higher among neonates with respiratory distress syndrome than among neonates without RDS. IUGR, Doppler P.ET, Doppler P.S/D ratio, and one cycle of dexamethasone given to mothers were significant predictors of respiratory distress syndrome among the neonates under study. The best-detected cut-off point of the P.AT/ET ratio for the detection of neonatal RDS in cases of severe pre-eclampsia between 32 and 37 weeks' gestation was 0.3.
Financial and Non-financial Relationships and Activities of Interest
None
|
v3-fos-license
|
2020-10-18T13:05:39.532Z
|
2020-10-01T00:00:00.000
|
223559496
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/21/20/7589/pdf",
"pdf_hash": "38216855f0affec43a4b4b005e1fc23f9f7da43f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43525",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "6819ebbd8aeb1a672ee0e3297cbd7c4334858ac1",
"year": 2020
}
|
pes2o/s2orc
|
Bovine Oviduct Epithelial Cell-Derived Culture Media and Exosomes Improve Mitochondrial Health by Restoring Metabolic Flux during Pre-Implantation Development
Oviduct flushing is enriched by a wide variety of nutrients that guide the 3–4 day journey of the pre-implantation embryo through the oviduct as it develops into a competent blastocyst (BL). However, little is known about the specific requirements and roles of these nutrients that orchestrate the early stages of embryonic development. In this study, we aimed to characterize the effect of in vitro-derived bovine oviduct epithelial cell (BOECs) secretions that mimic the in vivo oviduct micro-fluid-like environment, which allows successful embryonic development. The addition of in vitro-derived BOECs-conditioned medium (CM) and its isolated exosomes (Exo) significantly enhanced the quality and development of BLs, while the hatching ability of BLs was found to be high (48.8%) in the BOECs-Exo-supplemented group. Surprisingly, BOECs-Exo had a dynamic effect on modulating embryonic metabolism by restoring the pyruvate flux into the TCA cycle. Our analysis reveals that Exo treatment significantly upregulates pyruvate dehydrogenase (PDH) and glutamate dehydrogenase (GLUD1) expression, required for metabolic fine-tuning of the TCA cycle in the developing embryos. Exo treatment increases the influx into the TCA cycle by strongly suppressing the upstream inhibitors of PDH and GLUD1, i.e., PDK4 and SIRT4. Improvement of TCA-cycle function was further accompanied by higher metabolic activity of mitochondria in BOECs-CM and Exo in vitro embryos. Our study uncovered, for the first time, a possible mechanism by which BOECs-derived secretions re-establish TCA-cycle flux through the utilization of available nutrients, and highlighted the importance of pyruvate in supporting bovine in vitro embryonic development.
Introduction
The oviduct fluid microenvironment is composed of a plethora of growth stimulatory factors, immunomodulatory components, and extracellular vesicles/exosomes (Exo), all of which are known to play an important role during a series of crucial reproductive events [1-3]. The oviduct epithelium contains populations of ciliated and secretory cells that produce the oviduct fluid, which influences the whole journey from embryonic development into a successful adult [1]. Co-culture of bovine oviduct epithelial cells (BOECs) with embryos, and the use of their extracted secretome in an in vitro system, creates an environment conducive to fertilization and supports the early embryonic stages [4-7]. A co-culture system of BOECs and its derivatives has been employed to overcome the developmental cessation of in vitro-produced mammalian embryos, which ultimately improves the pregnancy rate and outcome of embryo transfer [4,8]. Loss of normal metabolic function is one of the causes of developmental cessation in in vitro-produced (IVP) mammalian embryos [9,10]. The absence of maternal signaling cues in the in vitro medium renders the embryo unable to absorb the available nutrients from the external environment to meet the increasing demand for ATP required for expansion and proliferation [11,12]. Thus, studies regarding the influence of oviduct secretions on mammalian embryo development to increase quality and survivability must continue to bring about an improvement in IVP, as well as refinement of ART procedures.
The broad applications of the BOECs culture system and its derivatives are considered an important step towards the advancement of assisted reproductive technologies (ART). To date, efforts have been made in the formulation of culture media for the successful in vitro development of pre-implantation embryos [13]. Although the conventional in vitro microenvironment is formulated to closely mimic the in vivo oviductal fluid composition, the currently available media still lack various unknown oviductal fluid proteins and exosomes (Exo) [14]. Exo are very important bioactive particles present in the oviduct fluid, generated from the secretion of epithelial cells. These nano-sized shuttles are able to cross the embryonic membrane and play an important role in modulating pre-implantation embryonic development, as well as in establishing cross-talk in embryo-maternal interactions [11,14-16]. Exo are encapsulated by a phospholipid bilayer membrane and are loaded with proteins, lipids, a pool of RNA species (mRNA, miRNA, long noncoding RNA), and DNA fragments. These regulatory bioactive molecules enable the exosomes to mediate the responses of several signaling pathways associated with physiological and pathophysiological functions among neighboring and distant cells via the extracellular environment [5,11,12]. Absorption of Exo by embryos from the extracellular environment provides new insight into the intercellular communication between the oviduct epithelium and the developing embryo through an exogenous biomolecule-delivering tool.
The developing embryo undergoes several important morphological and transcriptional changes during its journey through the oviduct [1,17,18]. Several studies have shown the positive influence of using BOECs and their secretome on early embryonic development; this can be observed in terms of developmental competence, cryotolerance, timing of embryonic genome activation, acquisition of epigenetic modifications, and cellular proliferation [12,16]. It has been demonstrated that during in vitro culture of mammalian embryos, especially in ruminants, development arrests at the 8-16 cell compaction stage of the morula [19], whereas the use of a co-culture system of oviduct epithelial cells or their secreted conditioned medium overcomes this developmental block and increases the likelihood of survival of IVP embryos [18]. This evidence shows that BOECs-secreted factors have a remarkable effect, changing the nutrients and metabolites of the culture media through an exchange of signals between the developing embryo and the external environment [18]. A number of compelling pieces of evidence have shown the advantageous effects of BOECs-derived conditioned medium (CM) and BOECs-isolated Exo on the qualitative yield of IVP embryos [5,12,14], but the molecular mechanism involved in the oviduct-embryo dialogue that impacts the metabolic responses of the developing embryos remains unknown.
Considering the metabolic response of the developing embryo, a study on the in vitro development of mouse embryos showed a preference for pyruvate uptake during the initial pre-implantation stages, although glucose is also essential during the compacted morula to BL transition [20]. After compaction, glucose uptake increases, whereas the requirement for pyruvate uptake also remains high to support development beyond the morula stage [10,20]. Studies on human in vitro embryonic development have also suggested the need for pyruvate throughout development to the BL stage [21]. These findings highlight that the growth medium has a significant influence on energy metabolism during pre-implantation development, which improves mitochondrial health and the developmental competency of in vitro-produced embryos [10,22]. Pyruvate is an essential substrate of the mitochondrial TCA (tricarboxylic acid) cycle for the generation of ATP [23]. Blocking the entry of pyruvate into the TCA cycle disturbs the metabolic flux and causes mitochondrial dysfunction [9,22]. Developmental arrest at the compaction stage in vitro might be the result of insufficient pyruvate for the TCA cycle.
The in vivo microenvironment of oviduct fluid provides several important biological molecules, including lipids, proteins, and essential amino acids, to support normal metabolic flux throughout embryonic development [3,4]. However, the current evidence on oviduct secretions and intercellular communication between embryo and oviduct still requires conclusive studies. The current study focuses on the use of BOECs-monolayer-derived CM and Exo, with special emphasis on their role in improving metabolic flux during in vitro embryonic development. Our study shows that the addition of BOECs-CM and BOECs-Exo to the in vitro culture medium restores the pyruvate flux by blocking its upstream inhibitor and upregulating pyruvate dehydrogenase (PDH) and glutamate dehydrogenase 1 (GLUD1) expression. Our study, for the first time, provides an explanation for the molecular mechanism by which IVP embryos cultured in the presence of BOECs-CM and Exo show an altered metabolic response, re-establishing the pyruvate flux and ultimately improving mitochondrial functioning. Our study establishes an important link between energy metabolism and morula arrest by using BOECs-monolayer-derived CM and Exo to dislodge the developmental block at the compacted morula stage.
Maintenance of BOEC Monolayer with Typical Epithelial Cell Morphology and Characteristics
To mimic the in vivo-like microfluidic environment during in vitro production of embryos, we first isolated and stably cultured BOECs in a defined conditioned medium for 7 days until they formed a tightly confluent monolayer (Figure 1a). The oviduct epithelium consists of an abundant population of ciliated and secretory cells. In resemblance to these natural characteristic features, light microscopy showed cell aggregates during the first 24 h of BOECs culture, with the typical morphology of secretory and ciliated cell populations detected by the presence of vigorously beating cilia on the apical surface of all cell aggregates (Figure 1b). Epithelial polarization was further characterized over the 6-7 day culture of BOECs, which allowed the cells to grow coherently and finally settle as a tight, confluent monolayer sheet (Figure 1c). The cultured BOECs were further characterized by the presence of several markers for the determination of epithelial lineage, cell culture purity, and proliferation potential. The strong expression of CD44 and EP-CAM indicated that the cells are of epithelial origin, whereas the non-detectable expression of CD34 and CD14 showed the absence of hematopoietic progenitor cells in the isolated BOECs culture population. Also, the presence of c-MYC and OCT4 showed that the cells maintain their proliferation potential throughout the 6-7 day culture period (Figure 1d). Moreover, the functional morphology of the BOECs monolayer was further validated by the expression of oviduct-specific gene markers. The results show that the BOECs monolayer maintained the characteristic oviduct gene markers throughout the culture period; although expression was slightly downregulated, there was no statistical difference during long-term culture (Figure 1e). All these results provide evidence of the typical physiological features of the BOEC monolayer and suggest appropriate maintenance of the oviduct cell culture system during in vitro processes.
Oviductal Secretions are Enriched in Oviductosomes/Exosomes
Next, we wanted to determine the effect of oviduct epithelial cell secretions on early embryonic development. In vivo, the oviduct luminal fluid is enriched in a wide variety of secretory factors that influence early embryonic development. To exclusively determine the effect of Exo during pre-implantation development, we performed a biophysical and molecular characterization of Exo obtained from the confluent in vitro cultured BOECs monolayer (Figure 2). Nanoparticle tracking analysis (NTA) was performed to determine the size and concentration of BOECs-monolayer-derived Exo. The analysis revealed that the Exo ranged from 80 to 150 nm in size, with an average concentration of 3 × 10^8 particles per mL (Figure 2a). The size and distribution of the particles were also presented in terms of intensity and visualized in the screenshot captured by NanoSight LM10 (Figure 2b). In addition, the BOECs monolayer-derived Exo enrichment was further verified and quantified with a CD9 Exo-specific antibody through an Exo-ELISA colorimetric assay (Figure 2c,d). Altogether, these results suggest that an in vitro cultured BOECs monolayer can be maintained in a functional state and that its secretion is capable of generating abundant Exo.
BOECs-Derived CM and Exosomes Reduce the Embryonic-Development Block and Enhance the Quality and Yield of In Vitro Produced BLs
To better understand how oviduct secretions influence early embryonic development, we evaluated the practical application of conditioned medium as well as the oviductosomes/exosomes generated from the in vitro-cultured BOECs monolayer during bovine pre-implantation development. Supplementation with BOECs-CM resulted in a noticeable improvement in BLs yield and hatching ability (43.6 ± 0.86 and 37.5 ± 0.85, respectively), as shown in Table 1 and Figure 3a. The supplementation of BOECs-derived Exo showed a dose-dependent impact on embryonic development; for instance, the addition of 3% Exo during maturation and embryo culture significantly improved embryo quality in terms of BLs formation rate and hatching ability (45.4 ± 0.68 and 48.9 ± 0.97, respectively), as shown in Table 2 and Figure 3b,c. The development of 8-16 cell stage embryos was also markedly enhanced by supplementation with either BOECs-CM or Exo, as shown in Tables S1 and S2. To determine the effect of in vitro derived BOECs secretions on the expression of development- and implantation-regulated genes, we performed qRT-PCR analysis of BLs cultured in the presence of BOECs-CM and Exo. The results showed that the BOECs secretory milieu markedly enhanced the expression of various development- and implantation potential-related genes in embryos produced in BOECs-CM and Exo supplemented medium compared to control embryos (Figure 3d). To further evaluate embryo quality, we determined the total cell abundance. Our analysis indicated that both BOECs-CM and Exo-derived embryos have significantly higher cell proliferation ratios than embryos developed in control culture medium (Figure 3e,f). All these results indicate that CM and Exo from in vitro cultured BOECs mimic the in vivo oviduct fluid and beneficially influence the in vitro culture conditions for the production of bovine embryos.
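The yield figures above are per-group percentages of cultured embryos reaching each stage, averaged over replicates and reported as mean ± SEM. A minimal sketch of that bookkeeping, using invented counts (the paper does not report raw embryo numbers), where hatching is computed among formed BLs as one plausible convention:

```python
import statistics

# Hypothetical per-replicate counts: (embryos cultured, BLs formed, BLs hatched)
replicates = [(50, 22, 8), (48, 21, 7), (52, 23, 9)]

bl_rates = [100 * bl / n for n, bl, _ in replicates]        # % BL formation
hatch_rates = [100 * h / bl for _, bl, h in replicates]     # % hatching among BLs

def mean_sem(values):
    """Mean and standard error of the mean across replicates."""
    return statistics.mean(values), statistics.stdev(values) / len(values) ** 0.5

for name, rates in (("BL formation", bl_rates), ("Hatching", hatch_rates)):
    m, sem = mean_sem(rates)
    print(f"{name}: {m:.1f} +/- {sem:.1f} % (mean +/- SEM)")
```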
Co-Culture of Embryos with BOECs-CM and Exosomes Reduces the Cellular Stress and Improves the Cell Survival Ratio
To determine the impact of in vitro-derived BOECs-CM and Exo on cellular oxidative stress and damage during the pre-implantation development of bovine embryos, we measured the production of reactive oxygen species (ROS). Bovine embryos cultured with BOECs-CM and Exo markedly counteracted the rise in ROS levels during the in vitro culture period compared to embryos in the control culture medium (Figure 4a). Quantification of the ROS fluorescence intensity is shown in Figure 4b. Concomitantly, we examined the mRNA abundance of well-known antioxidant enzymes. The analysis indicated that embryos cultured in the presence of BOECs-CM and Exo have significantly higher expression of antioxidant transcripts relative to embryos cultured without BOECs-CM and Exo supplementation (Figure 4c). Next, we performed an apoptotic assay to test the level of DNA damage. The results indicated more TUNEL-positive nuclei in control embryos developed in conventional culture media, while BOECs-CM and Exo supplementation led to significantly fewer apoptotic cells observed during fluorescence imaging (Figure 4d). Quantification of the average number of TUNEL-positive nuclei versus the total cell number, with significant differences among the groups, is presented in Figure 4e,f. Moreover, the effect on cell death and damage was further confirmed by the expression of apoptosis marker genes. These results suggested that the higher mRNA expression of BCL2 and BAX in the BOECs-CM and Exo supplemented groups inhibited the induction of apoptosis in embryos compared to the control group (Figure 4g). Altogether, these results provide evidence that BOEC secretions contain additional factors that play a protective role and enhance the cell survival ratio in early developing bovine embryos during extended periods of in vitro culture.
BOECs-Derived CM and Exosomes Re-Establish the Pyruvate Flux and Improve the Metabolism of In Vitro-Produced Embryos
To analyze the effect of BOECs-CM and Exo on the utilization of fatty acid oxidation (FAO) metabolism, we first determined the lipid content of in vitro-cultured embryos from the control as well as the BOECs-CM and Exo supplemented groups. Confocal imaging with a lipid-specific fluorescent probe showed that BOECs-CM supplemented BLs had a lipid content similar to that of control BLs, whereas Exo supplementation significantly promoted the oxidation of lipid droplets in developing embryos (Figure 5a). The fluorescence intensities are shown in Figure 5b. To confirm this analysis, we assessed the expression of several lipid-metabolizing genes. The qRT-PCR analysis revealed that the expression of PPARα, CPT1, and PDK4 in control and BOECs-CM cultured embryos was not statistically different. Interestingly, Exo supplementation led to a significant increase in CPT1 expression and an inhibitory effect on PDK4 expression (Figure 5c). PDK4 is the key regulatory enzyme of glucose and fatty acid metabolism that regulates the entry of pyruvate into the tricarboxylic acid (TCA) cycle [24]. Furthermore, we analyzed changes in the expression of TCA-cycle-metabolizing genes. The analysis revealed that BOECs-CM and Exo supplementation significantly enhanced pyruvate dehydrogenase (PDH) and glutamate dehydrogenase 1 (GLUD1) expression while downregulating the expression of the Sirt4 gene, with the most pronounced effect observed in the Exo-exclusive supplemented group (Figure 5d). In a schematic model, we suggest that the suppression of Sirt4 and PDK4 expression in pre-implantation embryos cultured with BOECs-CM and Exo might improve the entry of pyruvate into the TCA cycle by upregulating the expression of TCA-metabolizing enzymes. Both PDH and GLUD1 fuel the TCA cycle, which ultimately provides enough energy to support the pre-implantation stages of the developing embryo (Figure 5e).
BOECs-CM and Exosome-Mediated Metabolic Flux Improves Mitochondrial Functioning during Embryonic Development
Based on our results suggesting improved TCA cycle metabolism in developing embryos cultured with BOECs-CM and Exo supplementation, we next assessed mitochondrial function by analyzing the mitochondrial membrane potential (∆Ψm). The results indicated a marked difference in mitochondrial activity between the control embryo group and the BOECs-CM and Exo supplemented groups, with the latter showing a higher J-aggregate ratio (Figure 6a). The quantitative differences in the fluorescence intensities of mitochondrial ∆Ψm are shown in Figure 6b. Moreover, we assessed mitochondrial metabolic activity by analyzing the expression of several mitochondrial oxidative phosphorylation (OXPHOS) subunit genes and an ATP-synthesizing marker. Consistent with the mitochondrial ∆Ψm results, embryos cultured in BOECs-CM and Exo-supplemented media showed significantly higher expression of several mito-OXPHOS subunit genes and an increased mRNA transcript level of the ATP-synthesizing enzyme relative to embryos in the control culture medium (Figure 6c). All these results suggest that BOECs-CM and Exo greatly improve the in vitro culture conditions and significantly enhance the quality of the developing embryo by helping it to establish its own metabolism.
Discussion
In this study, we demonstrated the application of a co-culture system using BOECs-monolayer-generated CM and BOECs-derived Exo to support in vitro embryonic development. Our study presents a systematic approach to the in vitro culture of BOECs and shows that the use of its derivative secretions, specifically BOECs-derived Exo, not only improves standard IVP conditions but also helps to improve our understanding of how, in vivo, the concentration of soluble factors influences early embryonic developmental processes.
Previously, it has been shown that BOECs can be maintained in a functional state in the absence of serum; however, they showed some morphological degenerative signatures, such as loss of secretory granules, in FCS-free medium [4]. To avoid a significant loss of characteristic secretory features, we cultured the BOECs in a complete growth standard culture medium according to the composition described previously [17]. The BOECs sheets were well maintained for a culture period of up to seven days, and analysis showed full functional biological activity with typical epithelial morphology (Figure 1). The detection of ciliated cells shows the maintenance of cell polarity, which is fundamental for the architecture and function of epithelial cells [4]. It has been documented that neutral lipids are naturally present in the bovine oviduct epithelium and are used as an energy source for its growth and proper functioning [13]. Thus, the addition of serum not only allows better attachment of the proliferating cells but has also been proven to be required during the in vitro culture of BOECs [25]. On the other hand, a high concentration of serum has been associated with low-quality embryo production and a higher incidence of apoptosis, leading to early embryonic death or to surviving fetuses with several developmental anomalies [18,26]. Thus, to minimize the effect of serum on embryo quality, we reduced the FCS concentration to 2% for the production of CM, which was sufficient to support the growth of a well-settled, 100% confluent BOEC monolayer for 48 h. The results showed that BOECs-CM not only markedly improved the BL formation rate but also did not impair BL quality, such as hatching ability, relative to the control (Table 1 and Figure 3a). These results agree with a previous investigation reporting that a low concentration of FCS (2.5%) did not interfere with the survival and conception rates of BLs after cryopreservation and embryo transfer, respectively [27].
It is well known that, from compaction to the BL stage, nutrients in the oviduct fluid contribute to the transition of embryonic metabolism by maintaining the oviduct-embryo dialogue [1]. Oviductosomes/exosomes are the main cargo in the oviduct fluid that establishes embryo-maternal crosstalk, resulting in better utilization of nutrients from the luminal fluid and helping the embryos to establish their own metabolism [11,14]. Our results from the BOECs-Exo exclusive supplementation group showed a significant, dose-dependent beneficial effect on early embryonic development, whereas Exo concentrations above 3% impaired the development and hatching rates of BLs (Table 2 and Figure 3b,c). The presence of an excess Exo concentration in the culture media might perturb metabolic pathways during the pre-implantation period of embryonic development. The regulation of metabolism plays a central role in the interaction of the embryo with its environment [10]. Interestingly, the supplementation of in vitro derived BOECs-Exo had a remarkable effect on the expression of metabolism-related genes in in vitro cultured bovine embryos. During embryonic development within the oviduct, the nutrient composition of the oviduct fluid protects the developing embryo from oxidative stress, either by secreting free radical scavenging proteins or by modulating antioxidant enzyme levels to provide protection against oxidative damage [28,29]. A perturbed redox balance and insufficient antioxidant levels are major causes of DNA fragmentation, which ultimately accelerates cell death and results in poor-quality embryos [29]. ROS production, the expression of several oxidative metabolism genes, and the apoptotic cell ratio were markedly reduced with the addition of BOECs-CM and Exo (Figure 4a-c), suggesting that the developing embryos might have shifted energy utilization from oxidative to glycolytic metabolism owing to the availability of nutrients in the culture media.
During oocyte maturation and mammalian embryo development, the oxidation of fatty acids provides a substantial amount of energy in the form of ATP via conversion to acetyl-CoA [30,31]. In vitro cultured BLs supplemented with either BOECs-CM or Exo were analyzed for lipid droplet accumulation and the expression of FAO-metabolizing genes; both groups showed similar expression patterns, but the effect was significantly more pronounced in the BOECs-Exo supplemented group (Figure 5). The addition of serum during the preparation of BOECs-CM may explain the presence of accumulated lipid droplets in BOECs-CM derived embryos. The major FAO metabolism-regulating genes, such as PPARα and CPT1, were highly upregulated in Exo-treated BLs. The statistically insignificant difference between BOECs-CM and the control may reflect the effect of serum in the culture media. This also explains how BOECs-derived secretions regulate FAO metabolism in the developing embryo by promoting PPARα and CPT1 expression. PPARα is an upstream regulator of CPT1 and stimulates its transcription by binding to the PPAR responsive element in the promoter region of the CPT1 gene [32]; thus, the analysis showed a concomitant increase in the mRNA expression of both genes. The key regulatory enzyme of this metabolic pathway is PDK4, which plays an important role in maintaining metabolic flux by regulating the amount of pyruvate entering the TCA cycle through the inhibition of pyruvate dehydrogenase (PDH) [24]. PDH is the main enzyme that regulates the conversion of pyruvate into acetyl-CoA, leading to the synthesis of a higher amount of ATP to meet the metabolic demands of the developing mammalian embryo [9,23]. Surprisingly, supplementation with BOECs-derived Exo significantly enhanced the expression of PDH by blocking its upstream inhibitors PDK4 and SIRT4 (Figure 5c,d). SIRT4 serves as a key metabolic sensor and upstream inhibitor of PDH and GLUD1 activity [10,33,34]. The enhanced expression of PDH and GLUD1 suggests the removal of the PDK4 and SIRT4 inhibitory block, which ultimately restores the pyruvate flux fueling the TCA cycle during embryonic pre-implantation development.
It has frequently been reported that the development and quality of an embryo are directly correlated with its metabolism [10]. Our observations showed that the effect on the glycolytic metabolic pathway was more pronounced in the Exo exclusive treatment group than in the BOECs-CM supplementation group. These effects of Exo treatment echo a similar finding in which Exo derived from mesenchymal stromal cells improved TCA cycle activity and rescued mitochondrial function deficiencies in an in vitro culture model of human pulmonary artery smooth muscle cells [35]. The upregulation of TCA cycle-metabolizing genes is a signature of improved mitochondrial health [9,10]. It is also well known that, in developing embryos, the transition from oxidative to glycolytic metabolism is accompanied by the maturation of mitochondrial function [22]. In our results, mitochondrial metabolic activity was significantly improved in embryos cultured in a BOECs-CM and Exo supplemented medium, and these embryos had markedly higher expression of OXPHOS and ATP-synthesizing genes (Figure 6). These observations suggest that BOECs-derived CM and Exo significantly improve the mitochondrial health of in vitro-produced bovine embryos.
In conclusion, our results suggest a possible mechanism by which BOECs-derived CM and Exo enhance metabolic flux in in vitro produced embryos: the altered expression of several TCA-cycle-metabolizing genes changes the flux of molecules through the metabolic pathways, improving metabolism and leading to better in vitro embryonic development. These findings highlight that an understanding of embryonic nutrient requirements is necessary when preparing commercially synthesized media that mimic the in vivo oviductal fluid composition and fully support the early embryonic stages of development. Regarding the role of Exo, there are several possible mechanisms by which BOECs-Exo exert a positive effect on embryonic development. Exo effects on embryonic development vary according to the composition of the medium used for the primary culture of BOECs [14]. These observations suggest that a detailed proteomic analysis of in vitro as well as in vivo derived BOECs-Exo is needed to further decipher the role of Exo and its application during in vitro culture.
Materials and Methods
All chemicals/reagents used in assays were purchased from Sigma Aldrich (St. Louis, MO, USA) unless otherwise specified.
Isolation and Culture of Bovine Oviduct Epithelial Cells (BOECs)
Cow oviducts were collected from a local abattoir under the legislation of the Institutional Animal Care and Use Committee of Gyeongsang National University (Approval ID: GAR-110502-X0017; date: 02-05-2011). Excised oviducts from the slaughterhouse were immediately transported to the laboratory within two hours in ice-cold DPBS (Dulbecco's phosphate buffered saline). Oviducts were washed thrice in cold DPBS supplemented with 100 U/mL penicillin plus 100 µg/mL streptomycin and dissected free of surrounding tissues. BOECs were collected and cultured as described in [17]. In brief, BOECs were isolated in HEPES-buffered Medium 199 supplemented with 100 U/mL penicillin and 100 µg/mL streptomycin by scraping and squeezing the oviductal contents out of the ampullary end of the oviducts. The retrieved cells were washed twice by centrifugation at 550× g at 25 °C for 5 min in HEPES-buffered Medium 199 containing 100 IU/mL penicillin and 100 µg/mL streptomycin. The clean cells with minimal blood contamination were cultured for 24 h in HEPES-buffered Medium 199 supplemented with 100 U/mL penicillin, 100 µg/mL streptomycin, and 10% fetal calf serum (FCS; Bovogen Biologicals, Melbourne, Australia; the FCS was centrifuged at 100,000× g at 4 °C for 60 min to deplete its extracellular vesicle content, aliquoted, and stored at −20 °C for further use). Within these 24 h, the cells formed floating vesicles with actively beating cilia, which were collected by centrifugation at 550× g at 25 °C for 5 min. The cells were suspended in DMEM/Ham's F12 medium (DMEM/F12 Glutamax I, Gibco BRL, Paisley, UK) supplemented with 5 µg/mL insulin, 5 µg/mL transferrin, 10 ng/mL epidermal growth factor, 50 nM trans-retinoic acid, 10 mM glutathione, 100 µg/mL gentamycin, 5% FCS, and 2.5 mg/mL amphotericin B. The cells were mechanically separated with the aid of a pipette and then cultured at a final concentration of 3 × 10^6 cells/mL in 6-well plates at 38.5 °C in an atmosphere of 5% CO2 with saturated humidity until they reached confluence within 6-7 days. The 100% confluent monolayer was washed with PBS and used for the generation of conditioned medium.
Isolation of Oviduct Extracellular Vesicles (Oviductosomes/Exosomes)
Conditioned media obtained from a 100% confluent BOEC monolayer were pooled into a 50 mL conical tube. Exo were purified by sequential centrifugation of the conditioned media as described in [36], with minor modifications to centrifugation time and speed. In brief, the filtered conditioned media were first centrifuged at 300× g for 10 min, followed by 10,000× g for 30 min, to remove dead cells, debris, and contaminating proteins. The pellet was discarded, and the cleared supernatant was ultracentrifuged at 100,000× g at 4 °C for 60 min (Beckman L8-M, SW41Ti rotor, USA) to pellet the Exo. The pellets were suspended in ice-cold PBS, aliquoted, and stored at −20 °C for further analysis and supplementation into in vitro culture media.
Nanoparticle Tracking Analysis (NTA)
The software used for data capture and integration was NTA version 3.4, Build 3.4.003 (Malvern, UK). For the analysis of BOECs-monolayer-derived Exo, a 100× dilution of the sample in PBS was used. Each sample was loaded into the cell chamber of the instrument and measured at different random positions throughout the cell, with measurements recorded in triplicate cycles at each position. The pre-assessment parameters were adjusted as follows: sensitivity 85; viscosity (water) 0.9 cP; frame rate 25 frames per second (FPS), with a total of 1498 frames examined per sample; camera type sCMOS with manual shutter speed and gain adjustment; green laser with 100 laser pulse duration; temperature 25 °C; and pH 7.0. The post-acquisition parameters were specified as follows: minimum brightness 22, maximum pixel area 1000, and minimum area 10 pixels. All quality control parameters, such as temperature, conductivity, electrical field, and drift measurements, were adjusted by applying the instrument-optimized settings. The mean, median, and mode, which indicate the diameter and size of the particles, as well as their distribution/concentration in terms of particles per mL of sample, were calculated after excluding the less reliable readings from the data. The curve in the graph, indicating the number of particles per particle size, was obtained by quadratic interpolation.
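Because the NTA instrument reports the concentration of the diluted sample, the stock concentration must be back-calculated from the 100× dilution factor. A minimal sketch of that correction; the measured values below are illustrative placeholders, not data from this study:

```python
import numpy as np

# Illustrative NTA output for the 100x-diluted Exo sample (not measured data)
measured_conc = 3.0e6           # particles/mL reported for the diluted sample
dilution_factor = 100           # sample was diluted 100x in PBS before loading
sizes_nm = np.array([82, 95, 101, 110, 118, 126, 134, 150])  # hypothetical size readings

# Back-calculate the stock concentration of the undiluted Exo preparation
stock_conc = measured_conc * dilution_factor
print(f"Stock concentration: {stock_conc:.1e} particles/mL")  # 3.0e8, matching Fig. 2a

# Summary statistics of the size distribution, as reported by the NTA software
print(f"mean {sizes_nm.mean():.1f} nm, median {np.median(sizes_nm):.1f} nm")
```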
Exo-ELISA Colorimetric Assay for Exo-Protein Quantification
For quantitative and qualitative analysis of the Exo protein concentration, a double-sandwich enzyme-linked immunosorbent assay (ELISA) ExoQuant™ colorimetric kit (cat. no. K1205-100; BioVision, Milpitas, CA, USA) was used according to the manufacturer's instructions. The standard Exo and BOEC-monolayer-derived Exo samples were serially diluted, added to the wells of the ELISA strips, and incubated overnight at 37 °C under humid atmospheric conditions. After washing, the wells were incubated with a primary anti-CD9α antibody (diluted in 1× sample buffer) at 37 °C for 2 h. After washing again, the wells were incubated with a streptavidin-HRP-conjugated secondary antibody at 37 °C for 1 h. The contents were washed off, and the wells were incubated with a chromogenic substrate solution for 10 min at room temperature in the dark. The reaction was stopped by adding stop solution, and the OD at 450 nm was measured with an ELISA reader. The standard curve was obtained by plotting the mean OD values of the different standard concentrations against the corresponding amounts of Exo. The mean absorbance values for the standard Exo and the sample Exo were calculated from duplicate sets. The OD values were interpreted by linear regression analysis.
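The linear-regression interpolation step can be reproduced with an ordinary least-squares fit of OD against the standard amounts. A hedged sketch; the standard amounts and OD values below are placeholders, not the kit's actual standards:

```python
import numpy as np

# Hypothetical standard curve: Exo amount (ng) vs. mean OD at 450 nm
std_amount = np.array([0.0, 12.5, 25.0, 50.0, 100.0, 200.0])
std_od     = np.array([0.05, 0.12, 0.21, 0.40, 0.78, 1.52])

# Linear fit: OD = slope * amount + intercept
slope, intercept = np.polyfit(std_amount, std_od, 1)
r = np.corrcoef(std_amount, std_od)[0, 1]
print(f"slope={slope:.4f}, intercept={intercept:.4f}, R^2={r**2:.3f}")

# Interpolate an unknown sample from its mean OD (duplicate wells averaged first)
sample_od = np.mean([0.55, 0.57])
sample_amount = (sample_od - intercept) / slope
print(f"Estimated Exo amount: {sample_amount:.1f} ng")
```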
In Vitro Embryo Production
Immature cumulus oocyte complexes (COCs) were retrieved from follicles (2-8 mm diameter) of ovaries from Korean native Hanwoo cows, obtained from the local slaughterhouse and placed in physiological saline (0.9% NaCl) at 37.5 °C. TL-HEPES medium was used for the collection of oocytes, which were afterwards submitted to in vitro maturation (IVM) and in vitro fertilization (IVF) as described previously [37]. In brief, the collected oocytes were cultured for 22-24 h in NUNC 4-well plates (Nunc, Roskilde, Denmark) containing IVM medium, composed of tissue culture medium-199 (TCM-199) supplemented with 10% (v/v) fetal bovine serum (FBS), 1 µg/mL estradiol-17β, 10 µg/mL follicle-stimulating hormone, 0.6 mM cysteine, and 0.2 mM sodium pyruvate. Following IVM, the matured COCs were inseminated with 1 × 10^6 spermatozoa/mL from frozen-thawed semen straws from Hanwoo bulls (KPN-1175, NongHyup Agribusiness Group Inc., Republic of Korea). Approximately 18-20 h post-insemination, the presumptive zygotes were cleared of cumulus cells by repeated pipetting, and the completely denuded presumptive zygotes were cultured for up to 8 days in synthetic oviduct fluid (SOF) medium. To support the in vitro culture (IVC), SOF medium supplemented with 44 µg/mL sodium pyruvate (C3H3NaO3), 14.6 µg/mL glutamine, 10 IU/mL penicillin, 0.1 mg/mL streptomycin, 3 mg/mL FBS, and 310 µg/mL glutathione was used for culturing the embryos [38]. Embryos that developed to the BL stage under a humidified atmosphere of 5% CO2 at 38.5 °C were recorded and used for comparative analysis.
Supplementation of BOEC-CM and Exo during Embryo Culture
Forty-eight hours before starting the embryo culture, the cell culture medium was replaced with in vitro maturation (IVM) medium and synthetic oviduct fluid (SOF) medium supplemented with essential amino acids and 2% FCS. A minimal dose of serum was used to limit the effect of serum concentration on embryonic development. After 48 h, the conditioned media (CM) from three different monolayers were harvested, combined in a conical tube, and centrifuged at 300× g for 10 min to remove all cell debris. The CM was then filtered through a 0.22 µm nitrocellulose membrane, aliquoted, and stored at 4 °C. The BOECs-derived CM was pre-warmed in a 38.5 °C incubator before oocyte maturation and embryo culture. For the supplementation of Exo during in vitro embryo development, a 100× dilution of Exo was tested over a range of 1 to 10%. Supplementation with 3% Exo was observed to support maximal embryonic development, and this concentration was used for all further experimental analyses. On day 8, the number of embryos developed to the BL stage in BOECs-derived CM and BOECs-derived Exo media was recorded. Afterwards, BLs were washed three times in 1× PBS and either used live or fixed in 4% paraformaldehyde for further experimental analysis. For gene expression analysis, BLs washed in nuclease-free water were immediately snap-frozen in liquid nitrogen and stored at −80 °C in Eppendorf tubes.
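As a back-of-the-envelope check of the particle dose a given percent (v/v) supplement delivers, the sketch below assumes the undiluted Exo preparation at the NTA estimate of 3 × 10^8 particles/mL and a 500 µL culture volume; both values are assumptions for illustration, and the actual working dilution may differ:

```python
# Hypothetical dose calculation for Exo supplementation (all values assumed)
stock_conc = 3.0e8        # particles/mL, NTA estimate of the Exo preparation
culture_volume_ul = 500   # assumed volume of one embryo culture well, in uL

for percent in (1, 3, 5, 10):                    # tested supplementation range
    added_ul = culture_volume_ul * percent / 100
    particles = stock_conc * added_ul / 1000     # particles added (uL -> mL)
    # Final concentration, neglecting the small volume of the added Exo
    final_conc = particles / (culture_volume_ul / 1000)
    print(f"{percent:>2d}% v/v: {particles:.2e} particles added, "
          f"~{final_conc:.2e} particles/mL in culture")
```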
RNA Extraction and Real-Time PCR
Total RNA from BLs (n = 5 per group) was isolated with an RNA isolation kit (PicoPure, Arcturus, ThermoFisher, Foster City, CA, USA) and used to synthesize cDNA with iScript reverse transcriptase (BioRad). The qRT-PCR analysis was performed as described in [38]. In brief, the relative mRNA abundance of all genes was analyzed by real-time quantitative (q)RT-PCR with SYBR Green master mix using a BioRad cycler system. Threshold cycle (Ct) values of all tested genes were normalized to the Ct values of GAPDH. The PCR amplification conditions were: initial denaturation at 94 °C for 5 min, followed by 40 cycles of 94 °C for 30 s, 58 °C for 30 s, and 72 °C for 30 s. For mRNA expression pattern analysis, three independent experiments were performed with four replicates. Primers used for RT-PCR and qRT-PCR are listed in Table S3.
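Normalization to GAPDH is compatible with the standard 2^-ddCt method of relative quantification; the paper does not state which formula it used, so the following is a hedged sketch with invented Ct values:

```python
import numpy as np

def relative_expression(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    """Relative mRNA abundance by the 2^-ddCt method, normalized to GAPDH
    and expressed relative to the control group."""
    d_ct_sample = ct_gene - ct_gapdh
    d_ct_control = ct_gene_ctrl - ct_gapdh_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical triplicate Ct values for one target gene (not data from this study)
ct_target_exo  = np.array([24.1, 24.3, 24.0])   # Exo-supplemented BLs
ct_gapdh_exo   = np.array([18.2, 18.1, 18.3])
ct_target_ctrl = np.array([25.6, 25.8, 25.5])   # control BLs
ct_gapdh_ctrl  = np.array([18.2, 18.3, 18.1])

fold = relative_expression(ct_target_exo.mean(), ct_gapdh_exo.mean(),
                           ct_target_ctrl.mean(), ct_gapdh_ctrl.mean())
print(f"Fold change vs. control: {fold:.2f}")   # ~2.8 with these values
```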
ROS Assay
The fluorescent probe 2′,7′-dichlorodihydrofluorescein diacetate (DCHDFA; D-6883, Sigma-Aldrich) was used for the analysis of reactive oxygen species (ROS) production. Stock solution preparation and the assay protocol were performed as previously described [38]. In brief, day-8 BLs were collected, washed in PBS/polyvinylpyrrolidone (PVP) solution, and incubated in 10 mM DCHDFA solution at 37 °C for 30 min. After the 30 min incubation, BLs were washed and visualized under a confocal laser microscope (Olympus, FV1000, Tokyo, Japan) to record the fluorescence emission reflecting the ROS level.
Cell Proliferation and Apoptotic Assay
The effect of the in vitro culture conditions on cellular proliferation and DNA fragmentation was determined with anti-BrdU labelling and the terminal deoxynucleotidyl transferase (TdT) 2′-deoxyuridine, 5′-triphosphate (dUTP) nick-end labeling (TUNEL) assay using an In Situ Cell Death Detection Kit (Roche Diagnostics Corp., Indianapolis, IN, USA), as described previously [37,38]. Briefly, for the cell proliferation assay, day-8 BLs were washed and incubated for 6 h with 100 mM BrdU at 37 °C. BLs were fixed, permeabilized, and incubated with 1 N HCl solution at room temperature (RT) for 30 min. After 1 h of blocking with 3.0% BSA (bovine serum albumin), incubation with an anti-BrdU primary antibody (B8434-100 µL, Sigma) followed by a TRITC-conjugated secondary antibody was used to detect the BrdU-labelled cells. Antibody specifications are listed in Table S4.
For the determination of the apoptotic index, day-8 BLs were fixed in 4% PFA (paraformaldehyde) for 15 min, washed, and permeabilized for 30 min at room temperature in 0.5% (v/v) Triton X-100 and 0.1% (w/v) sodium citrate. Thereafter, they were incubated with the TUNEL assay kit reagents for 1 h at 37 °C protected from light, washed, and incubated with DAPI for 5 min for nuclear staining. After washing, BLs were mounted on glass slides, and images were captured with an epifluorescence microscope (Olympus IX71, Tokyo, Japan). ImageJ software (National Institutes of Health, Bethesda, MD, USA) was used to count the BrdU- and TUNEL-labelled positive nuclei in each individual BL. The average percentage was determined by dividing the number of labelled nuclei by the total cell number (stained with DAPI).
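The per-blastocyst labelling index reduces to a simple ratio of labelled to DAPI-stained nuclei. A minimal sketch over hypothetical ImageJ counts (the counts are invented for illustration):

```python
# Hypothetical ImageJ counts per blastocyst: (TUNEL-positive nuclei, total DAPI nuclei)
counts = [(6, 118), (4, 132), (9, 105), (3, 140)]

# Labelling index per embryo, then the group average
indices = [100 * pos / total for pos, total in counts]
mean_index = sum(indices) / len(indices)
print([f"{i:.1f}%" for i in indices])
print(f"Mean apoptotic index: {mean_index:.1f}%")
```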
Nile Red Staining for Quantification of Lipid Content
A fluorescent probe (Nile red, NR) specific for the detection of intracellular lipids was used to evaluate the accumulation of lipid content in day-8 BLs. The protocol used was the same as described in [36].
Briefly, fixed BLs were washed with PBS/PVP solution and incubated with 10 mg/mL NR solution for 3 h at room temperature in the dark. Thereafter, NR-stained BLs were washed with PBS/PVP solution and incubated with DAPI for 5 min for nuclear staining. A confocal laser-scanning Olympus FluoView FV1000 microscope was used to capture images of the glass slide-mounted BLs. Red fluorescence intensities of the lipophilic NR probe were measured using ImageJ software.
JC-1 Staining
The fluorochrome dye JC-1 (Molecular Probes, Invitrogen, Carlsbad, CA, USA) was used to determine the mitochondrial membrane potential (∆Ψm) in in vitro cultured BLs. Day-8 BLs were collected, fixed in 4% PFA, washed, and incubated with 10 mg/mL JC-1 dye prepared in PBS/PVP solution at 37 °C for 1 h in the dark. In principle, the dye incorporates into mitochondria and generates either a green fluorescence signal by forming monomers (J-monomers), indicating low membrane potential, or a red fluorescence signal by forming aggregates (J-aggregates), indicating high membrane potential. Thereafter, BLs were washed with PBS/PVP solution and stained with DAPI for 5 min. After washing, BLs were mounted on glass slides with cover slips, and images were viewed under a confocal laser scanning microscope (Olympus, FV1000, Tokyo, Japan).
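Membrane potential is conventionally summarized as the red (J-aggregate) to green (J-monomer) fluorescence ratio per embryo; the paper reports a "J-aggregate ratio" without the exact formula, so the sketch below assumes this standard per-embryo ratio with invented intensity values:

```python
import numpy as np

# Hypothetical per-BL mean fluorescence intensities from ImageJ (not real data)
red_jaggregate = np.array([152.0, 148.5, 161.2])  # high-potential signal
green_jmonomer = np.array([60.3, 66.1, 58.7])     # low-potential signal

# Per-embryo ratio; higher values indicate higher mitochondrial membrane potential
ratios = red_jaggregate / green_jmonomer
print(f"J-aggregate/J-monomer ratio: {ratios.mean():.2f} +/- {ratios.std(ddof=1):.2f}")
```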
Statistical Analysis
Statistical differences in the embryonic development data were analyzed using SPSS software version 18.0 (IBM Corp., Armonk, NY, USA). All embryonic percentage data are presented as the mean ± standard error of the mean (SEM). For the imaging data, experiments were performed in triplicate, and a single BL image is shown as a representative image for each group. All graphical data are presented as the mean ± SEM from triplicate sets of experiments. For all imaging data, mean fluorescence intensities were quantified per BL (n = 20) from each group, and histogram values were measured using ImageJ software (National Institutes of Health, Bethesda, MD, USA). Differences in the expression levels of the various genes among the groups, as well as differences in the detected fluorescence intensities, were analyzed by one-way analysis of variance followed by Sidak's multiple comparison test, using the GraphPad Prism 6.0 software package (USA). * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001 were considered significant differences.
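The described workflow (one-way ANOVA followed by Sidak-adjusted pairwise comparisons) can be approximated in Python; note that this sketch uses simple pairwise t-tests with the Sidak adjustment, whereas GraphPad's implementation pools variances from the ANOVA, and the data below are invented for illustration:

```python
import numpy as np
from scipy import stats
from itertools import combinations

# Hypothetical BL formation rates (%) per replicate for three groups
groups = {
    "control":   np.array([30.1, 32.4, 28.9]),
    "BOECs-CM":  np.array([42.5, 44.1, 43.8]),
    "BOECs-Exo": np.array([45.0, 46.2, 44.9]),
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"One-way ANOVA: F={f_stat:.2f}, p={p_anova:.4g}")

# Pairwise t-tests with the Sidak correction: p_adj = 1 - (1 - p)^m
pairs = list(combinations(groups, 2))
m = len(pairs)
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    p_sidak = 1 - (1 - p) ** m
    print(f"{a} vs {b}: raw p={p:.4g}, Sidak-adjusted p={p_sidak:.4g}")
```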
Targeting IL-17A enhances imatinib efficacy in Philadelphia chromosome-positive B-cell acute lymphoblastic leukemia
Dysregulated hematopoietic niches remodeled by leukemia cells lead to imbalances in immunological mediators that support leukemogenesis and drug resistance. Targeting immune niches may ameliorate disease progression and tyrosine kinase inhibitor (TKI) resistance in Philadelphia chromosome-positive B-ALL (Ph+ B-ALL). Here, we show that T helper type 17 (Th17) cells and IL-17A expression are distinctively elevated in Ph+ B-ALL patients. IL-17A promotes the progression of Ph+ B-ALL. Mechanistically, IL-17A activates BCR-ABL, IL6/JAK/STAT3, and NF-kB signalling pathways in Ph+ B-ALL cells, resulting in robust cell proliferation and survival. In addition, IL-17A-activated Ph+ B-ALL cells secrete the chemokine CXCL16, which in turn promotes Th17 differentiation, attracts Th17 cells and forms a positive feedback loop supporting leukemia progression. These data demonstrate an involvement of Th17 cells in Ph+ B-ALL progression and suggest potential therapeutic options for Ph+ B-ALL with Th17-enriched niches.
activate survival pathways, protect against excessive ROS, induce metabolic reprogramming, and promote immunosuppression, consequently supporting leukemogenesis, cell proliferation and therapeutic resistance 4. Thus, these leukemic niches provide an opportunity to identify therapeutic targets to improve the efficacy of TKI treatment in Ph + B-ALL 5,6.
T helper type 17 (Th17) lymphocytes secrete Th17-associated cytokines such as IL-17, IL-21, IL-22, and TNF and play crucial roles in autoimmune diseases, chronic inflammatory disorders, and cancer [7][8][9][10][11]. Experimental and clinical evidence suggests that IL-17A is a promising therapeutic target in many autoimmune and chronic inflammatory diseases 11,12. Secukinumab and ixekizumab, two anti-IL-17A monoclonal antibodies, have been approved for the clinical treatment of psoriasis, psoriatic arthritis, and ankylosing spondylitis 13,14. Recent research has indicated the roles of IL-17 in maintaining barrier integrity, establishing host defense under physiological conditions, and driving cancer progression under pathological conditions 15. Previous studies have indicated that the increases in the Th17 cell population in B-cell ALL (B-ALL), acute myeloid leukemia (AML), and multiple myeloma (MM) are positively correlated with cancer progression and drug resistance [16][17][18]. Therefore, targeting Th17 cells may improve the outcomes of Ph + B-ALL treatment and reduce drug resistance. However, the exact roles of Th17 cells and Th17-associated cytokines in the Ph + B-ALL niche microenvironment and disease progression are still undefined.
The pathophysiological functions of Th17 cells and Th17-associated cytokines depend on the ability of IL-17 to induce proinflammatory mediators, the mitogenic effects in tissue progenitor cells and the ability to reprogram cellular metabolism 19. In tumor microenvironments, the expression of immunological mediators is elevated by the inflammatory response in cancer cells, which plays vital roles in tumor metastasis, migration, proliferation and cancer stemness 5,[20][21][22][23][24][25]. Chemokine (C-X-C motif) ligand 16 (CXCL16), a membrane-bound chemokine, acts as a ligand for C-X-C chemokine receptor type 6 (CXCR6) and contributes to the progression of many chronic inflammatory diseases, including fibrosis, nonalcoholic fatty liver disease, atherosclerosis, and cancer [26][27][28][29][30][31]. The expression of CXCR6 has been observed in Th17 cells, and the effector CD4+ cells with a CCR6+CXCR6+ phenotype predominantly expressed classical Th17 markers such as IL-23R, IL-17A, and RORC. CCR6 and CXCR6 can be used to identify cytotoxic Th17 cells in experimental autoimmune encephalomyelitis 32. CXCL16 can also act as a chemotactic agent for tumor-infiltrating T lymphocytes to create a microenvironment enhancing cancer progression 33,34. However, the exact role of CXCL16 in Th17 cell differentiation and function, especially in the leukemia niche microenvironment, remains uncharacterized. Given that the Th17-associated cytokine IL-17A induces the production of various proinflammatory mediators, which can remodel the local microenvironment, we postulated that Th17 cells in the BM niche contribute to Ph + B-ALL development. We studied the functions and mechanisms of the niche-located inflammatory loop formed by Th17 cells, IL-17A, and CXCL16 in supporting Ph + ALL progression and illuminated potential therapeutic strategies.
Here, we show that Th17 cells and IL-17A are highly elevated in Ph + B-ALL patients, which in turn promotes the progression of Ph + B-ALL by activating the BCR-ABL, IL6/JAK/STAT3 and NF-kB signalling pathways and increasing the secretion of the chemokine CXCL16 by leukemia cells. CXCL16 further promotes Th17 cell differentiation and recruitment and forms a positive feedback loop in the niche microenvironment. As such, this study provides additional rationale for IL-17A- or CXCL16-directed therapy for patients with Ph + B-ALL.
Results
High IL-17A expression is associated with poor prognosis in patients with Ph + B-ALL
To investigate whether Th17 cells are enriched in Ph + B-ALL, we detected the frequency of Th17 cells in freshly isolated bone marrow mononuclear cells (BMMCs) from Ph + B-ALL patients and healthy donors (HDs). The frequency of Th17 cells was significantly higher in BMMCs from patients with Ph + B-ALL than in those from HDs (Fig. 1a and Supplementary Fig. 1a). However, the frequency of CD4 + T cells among BMMCs from Ph + B-ALL patients was not different from that among BMMCs from HDs (Supplementary Fig. 1b). We then used a p210 BCR/ABL-inducible transgenic expression system 35. Withdrawal of tetracycline administration in double transgenic mice (BCR-ABL tTA; C57BL/6 background) induced the expression of BCR-ABL1 and resulted in the development of B-cell leukemia in 100% of the mice (Supplementary Fig. 1c). The survival time of BCR-ABL tTA mice after tetracycline withdrawal was as long as 15 weeks (Supplementary Fig. 1d). Necropsy demonstrated massive splenomegaly and enlargement of the majority of lymph nodes (LNs) in these mice (Supplementary Fig. 1e, f). Flow cytometric analysis demonstrated that the splenic lymphoblasts from the diseased mice were B220 dim (Supplementary Fig. 1g, left) and expressed CD19, CD43, and BP-1 (Supplementary Fig. 1g, right). This pattern suggested that the transformed cells underwent arrest at the pre-B-cell stage of development and was similar to the cell surface marker expression pattern identified in Ph + B-ALL patient samples and MMTV-BCR-ABL tTA transgenic mice (FVB background) 36,37. Additionally, the frequency of immature blasts was found to be increased in Wright-Giemsa-stained peripheral blood (PB) and HE-stained BM and spleen sections of BCR-ABL tTA mice compared to wild-type (WT) mice (Supplementary Fig. 1h). We then examined the frequency of Th17 cells in mice with BCR-ABL-driven B-ALL (BCR-ABL tTA mice). The results indicated that the frequency of Th17 cells among peripheral blood mononuclear cells (PBMCs) increased progressively during BCR-ABL-induced B-ALL development (Fig. 1b and Supplementary Fig. 1i), but the proportion of CD4 + T cells in the PB of mice with BCR-ABL-driven B-ALL was similar to that in the PB of WT mice during B-ALL progression (Supplementary Fig. 1j). Moreover, the proportions of Th17 cells in the BM, spleen and LNs were elevated in BCR-ABL tTA mice compared to those in WT mice at the same age (Fig. 1c). Moreover, single-cell datasets from HDs and Ph + B-ALL patients (GSE134759) indicated that the proportion of Th17 cells among the BMMCs from patients with Ph + B-ALL was significantly higher than that among BMMCs from HDs (Fig. 1d, e). These data indicate that the frequency of Th17 cells is increased in Ph + B-ALL patients and the Ph + B-ALL-like mouse model.
Th17 cells can secrete various cytokines, including IL-17A, IL-17F, IL-21, and IL-22, which contribute to Th17-mediated diseases 9,10. The mouse Th1/Th2/Th17 cytokine and chemokine assays showed increased concentrations of cytokines such as IL-17A, IL-5, IL-28, and IL-17F in the plasma of BCR-ABL tTA mice, and IL-17A was among the top 5 of the 42 cytokines examined (Fig. 1f). The concentration of plasma IL-17A ranged from 100 pg/ml to 500 pg/ml in BCR-ABL tTA mice (Supplementary Fig. 1k). Th17 cells can also produce IL-21 and IL-22, and we therefore measured the concentrations of IL-21 and IL-22 in the plasma of BCR-ABL tTA mice and WT mice. Although a slight increasing trend in plasma IL-21 and IL-22 concentrations was observed in BCR-ABL tTA mice compared to WT mice, there was no statistically significant difference between these two groups (Supplementary Fig. 1l). It has been reported that IL-17A mediates signal transduction via the IL-17 receptor complex, comprising the IL-17RA and IL-17RC subunits 38. Therefore, we queried IL-17RA and IL-17RC expression in Ph + B-ALL patients from the GEO database (GSE13204). Both IL-17RA and IL-17RC were expressed in Ph + B-ALL patients. However, the mRNA expression of IL-17RA in Ph + B-ALL patients was lower than that in HDs (Fig. 1g). Consistently, protein expression results showed that IL-17A was significantly increased in BM from newly diagnosed Ph + B-ALL patients compared with that from HDs or Ph − B-ALL patients (Fig. 1h). Furthermore, we identified positive cell surface expression of IL-17RA and IL-17RC in primary Ph + B-ALL cells using flow cytometry analysis (Fig. 1i). Interestingly, the expression of IL-17RC was significantly higher in Ph + B-ALL cells than in normal B cells, whereas the expression of IL-17RA remained unchanged (Fig. 1i). In addition, patients with B-ALL with high IL-17A expression had significantly shorter overall survival (OS) and disease-free survival (DFS) times than patients with B-ALL with low IL-17A expression (Fig. 1j). These data indicate that IL-17A secreted by Th17 cells positively correlates with Ph + B-ALL progression.
IL-17A promotes the proliferation, survival, and homing of Ph + B-ALL cells
To determine whether IL-17A secreted by Th17 cells promotes leukemia development, we separately isolated Th17 cells and leukemia cells from Ph + B-ALL patients and then cocultured the leukemia cells with the Th17 cells. The cocultured cells were treated with anti-human IL-17A neutralizing antibodies or human IgG1 (IgG1) for 24 h, and the proliferation and apoptosis activity of the leukemia cells were then evaluated (Fig. 2a). Coculture with Th17 cells increased the percentage of the Ki67 + leukemia subpopulation and the survival of Ph + B-ALL cells, and these changes were abolished by treatment with the anti-IL-17A neutralizing antibody (anti-IL-17A) (Fig. 2b, c). Furthermore, we evaluated the direct effects of IL-17A on Ph + B-ALL cell growth and survival in vitro. IL-17A alone significantly induced both dose- and time-dependent proliferation of SupB15 cells and BV173 cells (Fig. 2d, e). In addition, IL-17A maintained the survival (Fig. 2f) and proliferation activity (Supplementary Fig. 2a) of primary Ph + B-ALL cells. Then, we performed homing assays (prior to 18 h) to investigate the effects of IL-17A treatment on the homing of transplanted leukemia cells (GFP-luc-tagged SupB15 cells). Flow cytometric analysis showed that IL-17A treatment enhanced the homing of leukemia cells to the BM and spleen in recipient mice (Supplementary Fig. 2b, c). To further explore the effect of rhIL-17A treatment on the engraftment of B-ALL cells, we performed rhIL-17A treatment 4 days after SupB15 cell transplantation (Fig. 2g). Biophotonic imaging on day 1 confirmed the successful transplantation of leukemia cells in all mice, with comparable leukemia burdens observed (Fig. 2h). By day 15, all mice transplanted with leukemia cells developed leukemia, indicating successful engraftment (Fig. 2h). The engraftment results showed that IL-17A treatment promoted Ph + B-ALL progression (Fig. 2h and Supplementary Fig. 2d), as indicated by the increased infiltration of leukemia cells in the PB, BM, and spleen (Fig. 2i, j and Supplementary Fig. 2e), ultimately leading to a decreased overall survival rate of leukemia-engrafted mice (Fig. 2k). These data suggest that IL-17A plays a critical role in Ph + B-ALL pathogenesis.
IL-17A deficiency or neutralization attenuates the progression of Ph + B-ALL
To further examine the in vivo role of IL-17A in leukemogenesis, we transplanted B-ALL cells isolated from BCR-ABL tTA mice into WT littermate mice and IL-17A-knockout (IL-17A -/-) mice. Over time, we monitored leukemia progression by detecting B220 dim CD19 + cells in PB from engrafted mice (Fig. 3a and Supplementary Fig. 3a). IL-17A deficiency significantly reduced the percentage of B220 dim CD19 + cells in the PB over time (Fig. 3b). Three weeks after transplantation, a substantial reduction in the immature blast cell population was observed in the PB and spleens of engrafted mice on the IL-17A -/- background (Fig. 3c). The spleen weights (Fig. 3d, e) and leukemia cell infiltration in the BM, spleen, and LNs (Fig. 3f) were decreased in IL-17A -/- recipients.
In addition, IL-17A deficiency reduced the proportion of B220 dim CD19 + Ki-67 + cells in multiple organs and among PBMCs of engrafted mice (Fig. 3g and Supplementary Fig. 3b). Moreover, we performed homing assays in IL-17A -/- mice to detect the effect of IL-17A knockout on the homing of leukemia cells by using CFSE-labeled leukemia cells isolated from BCR-ABL tTA mice. We examined the distribution of leukemia cells in the BM and spleen of both WT and IL-17A -/- mice 16 h after transplantation. Flow cytometric analysis showed that IL-17A knockout reduced the homing of leukemia cells to the BM and spleen in recipient mice (Supplementary Fig. 3c).
Considering the blocking effects of anti-IL-17A neutralizing antibodies, we first performed a homing assay to determine the impact of anti-IL-17A treatment on the homing of leukemia cells. We found that anti-IL-17A treatment reduced the homing of leukemia cells to the BM and spleen of recipient mice (Supplementary Fig. 3d). Then, we evaluated the therapeutic effect of anti-IL-17A or mouse IgG1 (IgG1) in B-ALL cell-engrafted mice (Fig. 3h). Twenty-one days after treatment, the mice in the anti-IL-17A-treated group exhibited dramatic reductions in spleen weight (Fig. 3i, j) and immature blast cell populations in the spleen and PBMCs (Fig. 3k). Similar to IL-17A deficiency, anti-IL-17A treatment reduced the number of B220 dim CD19 + cells (Fig. 3l) and Ki-67 + leukemia cells (Fig. 3m) in the BM, spleens, LNs, and PB of recipients, leading to a decrease in the death of B-ALL mice (Fig. 3n). More strikingly, the combination of anti-IL-17A antibodies and imatinib synergistically decreased the leukemia cell infiltration and population in the spleens, PB, BM, and LNs of recipients and significantly enhanced the survival of B-ALL cell-engrafted mice (Fig. 3l-n), suggesting that combination treatment with anti-IL-17A and imatinib increased the therapeutic efficacy of imatinib. Overall, these data indicate that IL-17A depletion attenuates the progression of Ph + B-ALL and exerts synergistic effects with imatinib to inhibit Ph + B-ALL development.
IL-17A activates the BCR-ABL signaling pathway to promote the proliferation of Ph + B-ALL cells
IL-17A has pleiotropic effects on multiple target cells 39,40. To understand the roles and potential molecular mechanisms of IL-17A in Ph + B-ALL cells, we performed RNA sequencing (RNA-seq) and analyzed the gene expression profiles of primary mouse Ph + B-ALL cells with or without recombinant mouse IL-17A (rmIL-17A) treatment (Fig. 4a). Gene set enrichment analysis (GSEA) showed that genes involved in the hallmark IL6/JAK/STAT3 signaling, hallmark TNF signaling via NF-kB, and hallmark inflammatory response pathways were highly enriched in primary mouse Ph + B-ALL cells treated with IL-17A (Fig. 4b and Supplementary Fig. 4a, b). Moreover, we queried the PubMed GEO database from the MILE study (GSE13204), incorporating 575 B-ALL patients and 122 Ph + B-ALL patients, as previously described 41. Patients with Ph + B-ALL with IL-17A mRNA expression (208402_at) above the median level (IL-17A high) exhibited positive NES but no enrichment of the inflammatory response, IL6/JAK/STAT3 signaling and hallmark TNF signaling via NF-kB pathways compared with patients with leukemia with IL-17A mRNA expression below the median level (IL-17A low) (Supplementary Fig. 4c, d). Considering that leukemia cells acted mainly as responders but did not release IL-17A, IL-17RA, the receptor for IL-17A, should be a better marker of leukemia cells responding to IL-17A stimulation. We then conducted GSEA to compare patients with leukemia with IL-17RA mRNA expression (205707_at) above the median level (IL-17RA high) with patients with leukemia with IL-17RA mRNA expression below the median level (IL-17RA low). Similar to the findings in mouse primary B-ALL cells treated with IL-17A, the hallmark inflammatory response, hallmark IL6/JAK/STAT3 signaling and hallmark TNF signaling via NF-kB pathways were significantly enriched (FDR q < 0.25) in the IL-17RA high subgroup (Fig. 4c and Supplementary Fig. 4e). Additionally, we analyzed the published single-cell datasets from HDs and Ph + B-ALL patients (GSE134759) and found that the hallmark TNF signaling via NF-kB pathway was enriched in B cells from Ph + B-ALL patients compared to B cells from HDs (Fig. 4d). Furthermore, the BCR-ABL signaling pathway was indeed activated in the IL-17A high and IL-17RA high subgroups of Ph + B-ALL patients (Supplementary Fig. 4f, g). We then conducted real-time PCR to monitor the major regulated genes involved in the above-mentioned signaling pathways. IL-17A treatment increased the transcription of IL-6 and Jak2 in primary mouse B-ALL cells (Fig. 4e). Moreover, IL-17A treatment increased IL-6 production in SupB15 B-ALL cells (Fig. 4f) and primary Ph + B-ALL cells (Fig. 4g). In fact, the concentration of plasma IL-17A ranged from 100 pg/ml to 500 pg/ml in BCR-ABL tTA mice and from 10 pg/ml to 50 pg/ml in patients with B-ALL (Supplementary Fig. 1j, g). We then investigated the effect of pathological concentrations of IL-17A on the transcription of IL-6 and JAK2 in B-ALL cells. Treatment with 200 pg/ml IL-17A resulted in increases in the mRNA levels of Il-6 and Jak2, whereas treatment with 20-100 pg/ml IL-17A did not (Fig. 4h). Additionally, we observed that treatment with pathological concentrations of IL-17A (200-500 pg/ml) significantly increased the mRNA levels of Il-6 and Jak2 (Fig. 4h). Consistently, treatment with pathological concentrations of IL-17A (200-500 pg/ml) increased the phosphorylation of BCR-ABL, STAT5, AKT, STAT3, p65 and MEK1/2 (Fig. 4i). These results suggest that IL-17A can activate the BCR-ABL, IL6/JAK/STAT3, and NF-kB signaling pathways in B-ALL cells.
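The high/low stratification used for the patient GSEA above is a median split on a single probe's expression. A minimal sketch of that grouping step; the probe ID comes from the text, but the patient IDs and expression values are invented for illustration:

```python
import numpy as np

# Hypothetical expression values of probe 205707_at (IL-17RA) across patients
patients = [f"P{i}" for i in range(1, 11)]
il17ra_expr = np.array([5.1, 6.8, 4.9, 7.2, 5.5, 6.1, 4.7, 7.9, 5.8, 6.4])

# Split the cohort at the median into IL-17RA high and IL-17RA low subgroups
cutoff = np.median(il17ra_expr)
labels = np.where(il17ra_expr > cutoff, "IL-17RA high", "IL-17RA low")

for p, v, lab in zip(patients, il17ra_expr, labels):
    print(f"{p}: {v:.1f} -> {lab}")
# The resulting label vector would then serve as the phenotype classes for GSEA.
```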
CXCL16 secreted from leukemia cells stimulated with IL-17A promotes the differentiation and migration of Th17 cells
The crosstalk between B-ALL cells and the surrounding microenvironment in BM creates a BM malignant niche and accelerates disease progression 42,43. Indeed, we found that Th17 cells accumulated within the B-ALL BM microenvironment niche. Moreover, these Th17 cells are located in the surroundings of Ph + B-ALL cells (Fig. 5a). Therefore, we then investigated whether Th17 cells can be recruited by Ph + B-ALL cells by detecting the recruitment of Th17 cells in a noncontact coculture system with Ph + B-ALL cells. Leukemia cells triggered the migration of Th17 cells, which was further promoted by rhIL-17A stimulation (Fig. 5b).
Chemokines are essential factors in attracting specific cell types to the tumor microenvironment 44. To investigate which chemokine participates in the migration of Th17 cells to leukemia cells, we detected the effect of IL-17A on the chemokine secretion of leukemia cells. The expression of CXCL5, IL-5, CXCL16, and CCL5 was significantly increased in both SupB15 and BV173 cells after rhIL-17A treatment (Fig. 5c). We also detected the effect of IL-17A on chemokine expression in a Ph − leukemia cell line (NALM-6). The expression levels of CXCL16, CCL5, and IL-5 increased after rhIL-17A treatment (Supplementary Fig. 5a). These results align with our observations in the Ph + leukemia cell lines SupB15 and BV173 (Fig. 5c). However, the fold change in CXCL16 expression in NALM-6 cells treated with IL-17A (1-3-fold) was comparatively lower than those observed in SupB15 (9-14-fold) and BV173 (4-5-fold) cells.
We then assessed the impact of IL-17A on the secretion of CXCL5, IL-5, CXCL16, and CCL5. Our findings revealed significant increases in the secretion of IL-5, CXCL5, CCL5, and CXCL16 in SupB15 cells following rhIL-17A treatment, and among these four cytokines, CXCL16 exhibited the most robust secretion from Ph + B-ALL cells (Fig. 5d). In addition, the plasma concentration of CXCL16 in BCR-ABL tTA mice (100-300 pg/ml) was increased at least 8-fold compared to that in WT mice (10-50 pg/ml) (Fig. 5e). Moreover, the serum CXCL16 concentration was significantly increased in newly diagnosed Ph + B-ALL patients compared with HDs (Fig. 5f). Furthermore, we quantified the systemic CXCL16 level in BCR-ABL tTA mice treated with IgG or anti-IL-17A. Compared to IgG treatment, anti-IL-17A treatment significantly reduced the serum CXCL16 concentration (Supplementary Fig. 5b). Regarding Th17 cell functions, we then investigated the effects of CXCL16 secreted from Ph + B-ALL cells on Th17 cells in vitro. Naïve CD4 + T cells were isolated and induced to differentiate into Th17 cells in the presence or absence of CXCL16 (Fig. 5g). CXCL16 significantly triggered the differentiation of Th17 cells in a dose-dependent manner (Fig. 5h) but had no effect on the proliferation activity of Th17 cells (Supplementary Fig. 5c). Because the level of CXCL16 in the plasma of BCR-ABL tTA mice was 100-300 pg/ml, we then performed a Th17 cell in vitro differentiation assay to investigate whether treatment with 200 pg/ml CXCL16 could affect the differentiation of naïve CD4 + T cells into Th17 cells. We found that treatment with 200 pg/ml CXCL16 induced the differentiation of naïve CD4 + T cells into Th17 cells (Supplementary Fig. 5d). To determine whether CXCL16 secreted by leukemia cells promotes the migration of Th17 cells, which in turn maintains the survival of leukemia cells, we separately isolated Th17 cells and leukemia cells from Ph + B-ALL patients and then cocultured the leukemia cells with the Th17 cells in a noncontact coculture system (Fig. 5i). Treatment with an anti-CXCL16 neutralizing antibody (anti-CXCL16) significantly inhibited the migration of Th17 cells (Fig. 5j) and the proliferation activity of leukemia cells (Fig. 5k). These data indicate that CXCL16 secreted from leukemia cells promotes the differentiation and migration of Th17 cells, which support leukemia cell proliferation.
We then investigated how IL-17A stimulates the secretion of CXCL16 from leukemia cells. IL-17A increased the expression levels of both CXCL16 mRNA and protein in Ph + B-ALL cells (Fig. 6a, b). In the primary B-ALL mouse model, we also found that CXCL16 accumulated more in the spleen niche of BCR-ABL tTA mice than in that of WT mice (Fig. 6c). Moreover, rmIL-17A treatment further increased CXCL16 + cell infiltration in the spleens of BCR-ABL tTA mice compared to WT mice (Fig. 6c). To exclude the possibility that the increased CXCL16 accumulation in the spleens of B-ALL mice was due to the high leukemia burden, we measured CXCL16 expression on a per-cell basis in primary B-ALL cells after rmIL-17A treatment by FACS. Flow cytometric analysis revealed an increase in the mean fluorescence intensity of CXCL16 in Ph + B-ALL cells after IL-17A treatment, suggesting that IL-17A increases CXCL16 expression in Ph + B-ALL cells on a per-cell basis (Fig. 6d). As indicated in primary mouse and human B-ALL cells (Fig. 4b and Supplementary Fig. 4d), IL-17A activated the NF-κB signaling pathway, which plays a critical role in inducing the expression and secretion of chemokines and cytokines 45. Our results indicated that rmIL-17A significantly increased p65 phosphorylation in the nucleus in a dose-dependent manner in primary mouse B-ALL cells (Fig. 6e). Immunofluorescence staining also indicated elevated phosphorylation of p65 upon rhIL-17A treatment (Fig. 6f). Furthermore, NF-κB reporter luciferase activity was increased in a concentration-dependent manner at 24 h following rhIL-17A treatment (Fig. 6g). These results indicate that IL-17A increases NF-κB transcriptional activity in primary B-ALL cells.
To further investigate whether CXCL16 is a target gene of NF-κB, we performed chromatin immunoprecipitation (ChIP)-qPCR analyses with truncated regions of the CXCL16 promoter (Fig. 6h, top). We found that NF-κB could bind to sequences from -2000 bp to -1314 bp and from -424 bp to -178 bp within the putative CXCL16 promoter region and that rmIL-17A increased the binding activity of NF-κB at the CXCL16 promoter (Fig. 6h, bottom). To further confirm whether IL-17A stimulates the secretion of CXCL16 through NF-κB activation, we applied the IKK inhibitor BAY 11-7082 to inhibit NF-κB activation (Fig. 6i) and measured CXCL16 expression after treatment. Notably, BAY 11-7082 inhibited CXCL16 mRNA and protein expression, and rmIL-17A treatment only weakly restored it (Fig. 6j, k), suggesting that NF-κB activation is required for IL-17A-induced CXCL16 expression. These results indicate that rmIL-17A increases CXCL16 secretion in a manner dependent on NF-κB activation in primary B-ALL cells.
CXCL16 depletion attenuates the progression of Ph + B-ALL
We then investigated the leukemogenic role of CXCL16 in vivo. After secondary transplantation of BCR-ABL tTA B-ALL cells, the transplanted mice were treated with rmCXCL16 for 2 weeks. The percentages of B220 dim CD19 + cells in the BM, spleens, LNs and PB were significantly higher in rmCXCL16-treated Ph + B-ALL syngeneic transplant mice than in PBS-treated mice (Fig. 7a). Indeed, the percentage of immature blast cells in the PB (Fig. 7b), leukemia cell infiltration in the spleen (Fig. 7c), the spleen weight (Fig. 7d) and the percentage of Ki-67 + cells in the spleen (Fig. 7e) were substantially increased after rmCXCL16 treatment in mice with secondary BCR-ABL tTA B-ALL cell transplantation. Furthermore, we determined the percentage of Th17 cells in the leukemia niche in mice with secondary BCR-ABL tTA B-ALL cell transplantation treated with or without CXCL16. CXCL16 dramatically increased the percentage of Th17 cells in the BM, spleen, LNs and PB (Fig. 7f, g and Supplementary Fig. 6b), indicating that CXCL16 promotes the migration of Th17 cells to the leukemia niche in vivo.
CXCL16 treatment accelerated the death of mice with secondary transplantation of BCR-ABL tTA cells, indicating that CXCL16 promotes the progression of Ph + B-ALL (Fig. 7h). We then examined the therapeutic effect of anti-CXCL16 or a goat IgG isotype control (IgG) in the BCR-ABL-induced B-ALL secondary transplantation mouse model (Fig. 7i). On day 21 after treatment, anti-CXCL16 treatment reduced leukemia cell infiltration in the spleen (Fig. 7j); the spleen weight (Fig. 7k); the populations of immature blast cells in the PB and spleen (Fig. 7l); the percentages of B220 dim CD19 + cells in the PB, BM, spleen and LNs (Fig. 7m); and the percentage of Ki-67 + cells in the spleen (Fig. 7n). Moreover, anti-CXCL16 treatment significantly reduced the percentages of Th17 cells in the BM, spleen, LNs and PB of B-ALL cell-transplanted mice, indicating that anti-CXCL16 treatment impeded the migration of Th17 cells to the leukemia niche in vivo (Fig. 7o). Additionally, combination treatment with anti-CXCL16 and imatinib further reduced leukemia progression and the percentage of Th17 cells in the leukemia niche in mice with secondary transplantation (Fig. 7j-o), suggesting that the combination increased the therapeutic efficacy of imatinib. These data indicate that anti-CXCL16 treatment, by blocking Th17 cell activities in the BM niche, attenuates the progression of Ph + B-ALL (Fig. 8).
Discussion
Th17 cells accumulate specifically in various tumors, indicating targeted recruitment of these cells by the tumor microenvironment. Th17 cells play a crucial role in inflammation and tumor immunity through the secretion of Th17-associated cytokines such as IL-17A, IL-21, IL-23, and TNF 46. Studies have demonstrated that hematological malignancies, including multiple myeloma, B-cell lymphoma, AML, and ALL, exhibit elevated frequencies of Th17 cells 16,17,47-49. These increased numbers of Th17 cells, along with increased levels of IL-17 and other proinflammatory cytokines, contribute to growth, drug resistance, and apoptosis inhibition in B-cell lymphoma cells, as well as suppression of immune responses. These studies highlight the important roles of Th17 cells and IL-17A in leukemia. However, determining whether Th17 cells could serve as a novel immunotherapy target to improve the outcomes of leukemia treatment remains challenging. Furthermore, the increased presence of Th17 cells in the leukemia niche and the relationship between IL-17A and leukemia progression in vivo require further investigation.
Here, we found that in the B-ALL microenvironment, IL-17A secreted by Th17 cells promoted the proliferation and survival of B-ALL cells and increased the secretion of the chemokine CXCL16 by leukemia cells, which in turn further promoted the recruitment of Th17 cells to the leukemia microenvironment. Importantly, however, Th17 cells have a high degree of plasticity and can transdifferentiate into other lineage subsets depending on the microenvironment 50. It remains unclear whether Th17 cells are capable of transdifferentiating into other lineage subsets, such as Th1 or Treg cells, in the B-ALL microenvironment.
We found that IL-17A was highly expressed in Ph + B-ALL patients and that high expression of IL-17A was associated with poor clinical outcomes. The established BCR-ABL-induced B-ALL mouse model provides an excellent tool for investigating the related molecular mechanisms and validating therapeutic efficacy. IL-17A deficiency or anti-IL-17A treatment significantly inhibited leukemia cell proliferation activity and increased the survival rates of B-ALL mice. More strikingly, B-ALL mice treated with both anti-IL-17A and the TKI imatinib achieved complete remission. Monoclonal antibodies targeting IL-17A, including secukinumab and ixekizumab, have been approved to treat moderate to severe plaque psoriasis 13,14. Although some researchers are cautious about the dual role of Th17/IL-17A signaling in tumor progression 11, numerous studies have demonstrated that IL-17A can enhance angiogenesis, proliferation, metastasis or drug resistance in multiple cancers, which supports the potential value of blocking IL-17A as an antitumor therapy 15. Our study reveals that anti-IL-17A antibodies may achieve significant therapeutic efficacy in Ph + B-ALL patients under appropriate conditions, especially in combination with TKI therapy. Moreover, Salvestrini et al. reported that leukemia cells homing to the bone marrow niche resist chemotherapeutic drugs 51. In this study, we found that IL-17A promoted, whereas anti-IL-17A monoclonal antibody (mAb) treatment reduced, leukemia cell homing, suggesting that anti-IL-17A mAb treatment may render leukemia cells more sensitive to standard chemotherapeutic drugs.
It has been reported that IL-17A treatment induces activation of the MAPK/ERK pathway through the phosphorylation of c-RAF, p42/p44 MAPK, MEK and ERK in human nasal endothelial cells 52, HUVECs 53, macrophages 54, and breast cancer cells 55. Additionally, Mazzera et al. reported that activated MEK1/2 can assemble into a pentameric complex with BCR-ABL1, BCR, and ABL1, leading to the phosphorylation of BCR and BCR-ABL1 at Tyr360 and Tyr177 56. We also determined the effect of IL-17A treatment on the phosphorylation of MEK: MEK phosphorylation was increased in B-ALL cells following IL-17A treatment. Thus, IL-17A may increase the phosphorylation of BCR-ABL1 through MEK activation. Bi et al. also reported that IL-17A promoted the proliferation of Ph - B-ALL cells through activation of the PI3K/AKT and JAK2/STAT3 signaling pathways 16. Here, we found that IL-17A treatment promoted the proliferation of Ph + B-ALL cells by activating the IL6/JAK/STAT3 signaling pathway, which is consistent with that previous study. Additionally, we found that IL-17A treatment increased the activation of the BCR-ABL signaling pathway. The BCR-ABL oncoprotein is a key molecular basis of leukemia pathogenesis, playing important roles in cell proliferation, survival, and immunosuppression through the activation of several downstream signaling pathways, such as the JAK/STAT, PI3K/AKT, and Raf/MEK/ERK pathways 57. Thus, targeting the IL-17A signaling pathway could be a valuable therapeutic approach for B-ALL.
NF-κB is an important regulator of chemokine gene transcription 58,59. In this study, we showed that IL-17A induces secretion of the chemokine CXCL16 in the leukemia niche by promoting the nuclear translocation and subsequent activity of NF-κB. In the nucleus, NF-κB binds to the CXCL16 promoter region and increases CXCL16 transcription. The mechanism by which IL-17A induces CXCL16 secretion in B-ALL cells in this study is similar to previous findings indicating that LPS or oxidative stress generated by H2O2 induces CXCL16 expression via the NF-κB signaling pathway 60,61. CXCL16 is characterized as a proangiogenic cytokine and acts as an important angiogenic factor in the tumor microenvironment 62. Suppression of CXCR6, the receptor for CXCL16, has been found to reduce tumor angiogenesis in a hepatocellular carcinoma xenograft mouse model 63. Moreover, targeting CXCL16 either in cancer cells using shRNA or in the microenvironment using a neutralizing antibody efficiently blocks tumor growth and angiogenesis in thyroid cancers 64. In our study, we verified the roles of CXCL16 in Ph + B-ALL in vivo. We found that depletion of CXCL16 with an anti-CXCL16 neutralizing antibody attenuated leukemia progression and decreased the frequency of Th17 cells in the leukemia niche in vivo, suggesting that IL-17A mediates leukemia progression by promoting CXCL16 secretion. Therefore, CXCL16 might be another therapeutic target suitable for a precision medicine approach to Ph + B-ALL treatment, especially in patients with elevated CXCL16 expression.
Here, we found that Th17 cells are enriched in the niche microenvironment. The cytokine IL-17A, derived from Th17 cells, promotes Ph + B-ALL progression and increases CXCL16 expression in leukemia cells. The increased amount of CXCL16 further enhances Th17 cell differentiation and recruitment, creating a positive feedback loop that drives Ph + B-ALL progression in the BM niche. Importantly, we further confirmed that targeting the Th17 niche or depleting CXCL16 inhibits Ph + B-ALL progression and exhibits a synergistic therapeutic effect in combination with imatinib in vivo. Our study provides valuable insights for the development of a therapeutic approach combining anti-IL-17A or anti-CXCL16 neutralizing antibodies with the TKI imatinib to achieve complete remission of Ph + B-ALL.
Fig. 6 | IL-17A activates NF-κB to induce CXCL16 transcription and CXCL16 secretion in leukemia cells. Primary Ph + B-ALL cells were treated with or without rhIL-17A for 24 h, and the relative CXCL16 mRNA level (a) and CXCL16 secretion (b) were evaluated by real-time PCR and ELISA, respectively. c Representative immunofluorescence images of B220 (red) and CXCL16 (green) staining in spleen tissue sections of WT mice and BCR-ABL tTA mice treated with or without 50 µg/kg rmIL-17A for a duration of 3 weeks (twice a week) (n = 3 mice per group). d Flow cytometric analysis and quantification of CXCL16 expression in primary mouse B-ALL cells treated with or without rmIL-17A. e The protein levels of p65 and p-p65 in the cytoplasm and p-p65 in the nucleus of primary mouse leukemia cells treated with or without rmIL-17A were measured by Western blotting. f Immunofluorescence of p-p65 in primary B-ALL cells treated with or without rhIL-17A. g The effect of rhIL-17A treatment on NF-κB transcriptional activity. HEK 293T cells were transfected with a synthetic NF-κB luciferase reporter construct (pNF-κB-Luc) for 12 h and then treated with different concentrations of rhIL-17A for 24 h. NF-κB transcriptional activity was detected by a luciferase assay. h ChIP-qPCR analyses of the binding of NF-κB to the CXCL16 promoter region in SupB15 cells treated with or without rhIL-17A. i-k The effect of BAY 11-7082, rIL-17A or BAY 11-7082 in combination with rIL-17A on the cytoplasmic and nuclear protein levels of p65, p-p65, and CXCL16 in primary Ph + B-ALL cells. i The indicated protein band intensities were quantified using ImageJ software. The relative CXCL16 mRNA level (j) in primary Ph + B-ALL cells and the CXCL16 protein level (k) in the supernatant of primary Ph + B-ALL cells were measured by RT-PCR and ELISA, respectively. (a, b, d-k) n = 3 independent experiments. Statistical significance was calculated by (a, b, d) two-tailed Student's t-test; (e, g-k) one-way ANOVA with Tukey's multiple comparison tests. Data are presented as means ± S.E.M. Source data are provided as a Source Data file.
Study approval
Human B-ALL patient specimens were obtained from the Institute of Hematology and Blood Diseases Hospital of Peking Union Medical College (PUMC). Informed consent was obtained from all participants in accordance with the Declaration of Helsinki, and the informed consent of participants under the age of 18 was obtained from their parents or guardians. The procedure was approved by the institutional review board at the Ethics Committee of the Institute of Hematology and Blood Diseases Hospital of PUMC (IIT2021011-EC-1). Our study is compliant with the 'Guidance of the Ministry of Science and Technology (MOST) for the Review and Approval of Human Genetic Resources', which requires formal approval for the export of human genetic material or data from China. The patient-related information is available in Supplementary Data 1.
Animal studies
NOD-SCID IL2Rg-null (NSG) mice (6-8 weeks old, male) were purchased from the Nanjing Biomedical Research Institute of Nanjing University (Nanjing, China). The tetO-BCR/ABL1 (B6.FVB/N-Tg(tetO-BCR/ABL1)2Dgt/Nju) mice (5-6 weeks old, 1 male and 2 females, strain #N00005) were purchased from the Nanjing Biomedical Research Institute of Nanjing University (Nanjing, China). These mice had been backcrossed over 10 generations onto the C57BL/6 background. MMTV-tTA (B6.Cg-Tg(MMTVtTA)1Mam/J) mice (6-8 weeks old, 2 males and 2 females, strain #002618) were purchased from The Jackson Laboratory (CA, USA). C57BL/6J mice (female, 6-8 weeks old) were purchased from Hua Fu Kang Technology Co., Ltd. (Beijing, China). BCR-ABL tTA mice were generated by crossing female tetO-BCR/ABL1 (B6.FVB/N-Tg(tetO-BCR/ABL1)2Dgt/Nju) mice with male MMTV-tTA (B6.Cg-Tg(MMTVtTA)1Mam/J) mice under continuous administration of tetracycline (0.5 g/L) in the drinking water. Withdrawal of tetracycline in BCR-ABL tTA mice resulted in the development of Ph + B-ALL within 1.5-2 months 35. IL-17A -/- (C57BL/6Smoc-IL17a em1Smoc) mice (5-6 weeks old, 1 male and 2 females, Cat. No. NM-KO-00131) were obtained from Shanghai Model Organisms Center, Inc. To determine the role of IL-17A in leukemogenesis, founder lines of BCR-ABL tTA mice that had features of human B-ALL and scored positive for Ph + B-ALL were used. A total of 1 × 10 5 mouse B-ALL-like spleen cells from BCR-ABL tTA mice were intravenously injected into nonirradiated IL-17A -/- mice or WT mice at 8-10 weeks of age. All mice were maintained in the animal facility at the Institute of Materia Medica under specific-pathogen-free (SPF) conditions. Animals were monitored daily. If mice manifested symptoms such as failure to thrive, weight loss > 10% of total body weight, hunching, decreased activity (stationary unless stimulated, or hind limb paralysis), bleeding, infection, and/or fatigue, they were euthanized immediately, as per the protocol approved by the Animal Experimentation Ethics Committee of the Chinese Academy of Medical Sciences. For animal studies, the mice were earmarked before grouping and then randomly separated into groups by an independent person; however, no particular randomization method was used. The sample size was predetermined empirically according to previous experience with the same strains and treatments. Generally, we used N ≥ 3 mice per genotype and condition. We ensured that the experimental groups were balanced in terms of animal age and weight. All animal procedures were conducted according to the guidelines of the Institutional Committee for the Ethics of Animal Care and Treatment of the Chinese Academy of Medical Sciences (CAMS) and Peking Union Medical College (PUMC) and were consistent with the ARRIVE guidelines.

Primary leukemia cell purification/isolation

B cells from mouse spleens, lymph nodes (LNs) or bone marrow (BM) were isolated with Dynabeads Mouse Pan B (B220) (Invitrogen, 11441D). Human primary B-ALL cells were purified using MACS CD19 MicroBeads (Miltenyi Biotec, 130-050-301), and the percentage of B-ALL cells (CD19 +) was determined to be > 90% by flow cytometry.
Mouse model of secondary transplantation of BCR-ABL tTA B-ALL cells
All recipient mice used for transplantation, including WT and IL-17A KO mice, were on a C57BL/6 background. The original tetO-BCR/ABL1 mice were generated on the FVB background 35. We obtained tetO-BCR/ABL1 mice from the Nanjing Biomedical Research Institute of Nanjing University; these mice had been backcrossed over 10 generations onto the C57BL/6 background. These tetO-BCR/ABL1 mice were then crossed with MMTV-tTA transactivator (B6.Cg-Tg(MMTVtTA)1Mam/J) mice on the C57BL/6 background to generate double-transgenic (BCR-ABL tTA) mice. Since the donor BCR-ABL tTA mice and recipient mice were congenic on the same background, no additional backcrossing was performed before transplantation. At 1.5-2 months after tetracycline withdrawal, the first-generation BCR-ABL tTA mice (G1) were sacrificed. For primary transplantations, 5 × 10 6 BM or spleen cells were intravenously injected into nonirradiated female C57BL/6 mice between 6 and 8 weeks old. For secondary transplantations, mouse B-ALL-like spleen cells from primary transplanted mice were intravenously injected into nonirradiated female C57BL/6 mice (1 × 10 5 cells per mouse) between 6 and 8 weeks old. Mice transplanted with leukemia cells were randomly assigned to treatment groups. Three days after transplantation, 70 mg/kg imatinib (Sigma-Aldrich, SML1027, p.o., once a day), 5 mg/kg anti-IL-17A neutralizing antibody (Bio X Cell, BP0173, i.v., twice a week), or 0.5 mg/kg anti-mouse CXCL16 neutralizing antibody (R&D, MAB503, i.v., twice a week) was administered for 3 consecutive weeks. The mice in the control group were treated with IgG. For CXCL16 administration, rmCXCL16 protein (PeproTech, 250-28) in 100 μL of PBS (50 μg/kg) was administered by intraperitoneal injection every 3 days for 2 weeks; the mice in the control group were treated with an identical volume of PBS, and the mice were sacrificed by anesthetic overdose 14 days after cytokine administration. To evaluate survival, mice were monitored for 50 days.
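For orientation, the weight-based dosing regimens above translate into absolute per-animal doses; a minimal sketch, assuming a typical 20 g mouse body weight (not stated in the source):

```python
def per_mouse_dose_ug(dose_mg_per_kg: float, body_weight_g: float) -> float:
    """Convert a mg/kg dosing rate into the absolute dose (in µg) for one mouse."""
    body_weight_kg = body_weight_g / 1000.0
    return dose_mg_per_kg * body_weight_kg * 1000.0  # mg -> µg

# Illustrative doses for an assumed 20 g mouse:
print(per_mouse_dose_ug(5.0, 20.0))    # anti-IL-17A, 5 mg/kg   -> 100 µg/injection
print(per_mouse_dose_ug(0.5, 20.0))    # anti-CXCL16, 0.5 mg/kg -> 10 µg/injection
print(per_mouse_dose_ug(0.05, 20.0))   # rmCXCL16, 50 µg/kg     -> 1 µg/injection
```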
Murine Xenograft Model of Leukemia
To evaluate the effect of IL-17A on the progression of Ph + B-ALL in vivo, NOD-SCID IL2Rg-null (NSG) mice (6-8 weeks old, male) were intravenously injected with 2 × 10 6 GFP-Luc-tagged SupB15 cells. Half an hour later, in vivo animal imaging was performed to monitor the development of B-ALL using the IVIS Spectrum optical imaging system (PerkinElmer Inc., OH, USA) at different time points (day 1, day 8, and day 15). Then, beginning 4 days after SupB15 cell transplantation, 50 μg/kg rhIL-17A was administered for 3 weeks (twice a week), and the survival of these mice was monitored for 50 days.
Bioluminescence imaging and analysis
A total of 2 × 10 6 GFP-Luc-tagged SupB15 cells were intravenously injected into NSG mice (6-8 weeks old, male). The infiltration of B-ALL cells was monitored on days 1, 8, and 15 after transplantation with a bioluminescence imaging system. For bioluminescence imaging, mice were anesthetized and injected with 1.5 mg of D-luciferin (i.p., 15 mg/ml in PBS). Imaging was completed between 2 and 5 min after injection and was performed with an IVIS SpectrumCT In Vivo Imaging System coupled to live image acquisition and analyzed with Living Image software (version 3.2) (PerkinElmer, OH, USA). For plotting bioluminescence imaging data, the photon flux was calculated for each mouse using a rectangular region of interest encompassing the thorax of the mouse in a prone position.
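The ROI-based photon-flux readout can be reproduced on an exported radiance image; a minimal sketch, assuming the image is available as a 2-D array of per-pixel flux values (the array contents and ROI bounds below are illustrative, not from the study):

```python
import numpy as np

def roi_total_flux(radiance: np.ndarray, row0: int, row1: int,
                   col0: int, col1: int) -> float:
    """Sum the photon flux inside a rectangular ROI of a radiance image.

    radiance: 2-D array of per-pixel photon flux values exported from the imager.
    ROI bounds follow numpy slicing conventions (row1/col1 exclusive).
    """
    roi = radiance[row0:row1, col0:col1]
    return float(roi.sum())

# Toy example: a 100x100 image with a bright 10x10 "thorax" region
img = np.full((100, 100), 1e3)
img[40:50, 45:55] = 1e6
print(f"total flux in ROI: {roi_total_flux(img, 35, 55, 40, 60):.3e}")
```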
Flow cytometry
Surface antigens of primary cell suspensions from the BM, PBMCs, spleen, or LNs were stained with fluorophore-conjugated antibodies in FACS buffer for 30 min at room temperature (RT) in the dark. To evaluate Th17 cell frequencies by intracellular staining, bone marrow mononuclear cells (BMMCs) isolated from Ph + B-ALL patients and HDs were stimulated with 30 nM PMA (Sigma-Aldrich, P1585) and 1 μM ionomycin (Sigma-Aldrich, I3909) in the presence of 2 μg/ml brefeldin A (Selleck, S7046) for 4-6 h at 37 °C. After incubation, cell surface staining with an APC-conjugated anti-human CD4 antibody (BioLegend, 317416, 1:100) was performed at room temperature in the dark for 30 min. The cells were subsequently fixed, permeabilized and stained with a PE/Cyanine7-conjugated anti-human IL-17A antibody (BioLegend, 512315, 1:100). The gating strategies for human CD4 + T cells in BM and human Th17 cells within the CD4 + T population are shown in Supplementary Fig. 1a. For analysis of the proliferation of human leukemia cells, CD19 + leukemia cells were loaded with 2.5 μM CFSE (eBioscience, 65-0850-84, 1:1000) according to the manufacturer's instructions. FCS Express or FlowJo 10.8.1 software was used for data analysis.
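The percentage-of-parent statistics reported throughout (e.g., Th17 cells within the CD4 + population) reduce to boolean gating on compensated intensities; a minimal sketch on synthetic events, where the channel layout and thresholds are illustrative assumptions (in practice gates are set against isotype/FMO controls):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic compensated intensities for 10,000 events: columns = [CD4, IL-17A]
events = rng.lognormal(mean=2.0, sigma=1.0, size=(10_000, 2))

cd4_gate, il17a_gate = 15.0, 40.0               # illustrative thresholds
cd4_pos = events[:, 0] > cd4_gate               # parent gate: CD4+ T cells
th17 = cd4_pos & (events[:, 1] > il17a_gate)    # IL-17A+ events within CD4+

pct_th17_of_cd4 = 100.0 * th17.sum() / cd4_pos.sum()
print(f"Th17 cells: {pct_th17_of_cd4:.2f}% of CD4+ events")
```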
Cell culture
The acute B-cell leukemia cell lines SupB15, BV173, and NALM-6 were purchased from the Cell Resource Center, Peking Union Medical College, where they were recently authenticated by short tandem repeat (STR) profiling and characterized by mycoplasma detection. The cells were cultured and maintained in RPMI 1640 medium supplemented with 10% fetal bovine serum (Invitrogen, CA, USA) under 5% carbon dioxide. SupB15 cells stably expressing GFP-Luc were generated by infecting cells with GFP-Luc lentivirus particles, and stable GFP-Luc-overexpressing SupB15 cells were selected by sorting for GFP + cells by flow cytometry. Stable SupB15-GFP-Luc cell lines were cultured in RPMI 1640 medium containing 5 μg/ml puromycin (Gibco, CA, USA). All cell lines were verified negative for mycoplasma contamination with the MycoAlert Mycoplasma Detection Kit (Lonza, LT07-318).
Luciferase reporter assay
To test whether IL-17A regulates the transcriptional activity of NF-κB, the NF-κB reporter gene plasmid and luciferase reporter constructs were stably transfected into HEK-293 cells, which were then treated with different concentrations of recombinant human IL-17A (PeproTech, 200-17). After 24 h, luciferase activity was detected with a luminometer using a Luciferase Reporter Assay System (Promega, Madison, WI, E1910) in accordance with the manufacturer's instructions.
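Since the E1910 kit is a dual-reporter system, readouts of this kind are conventionally normalized to a co-transfected Renilla control before computing fold activation; a minimal sketch, assuming such a control was included (the luminescence values below are illustrative):

```python
import numpy as np

def fold_activation(firefly, renilla, vehicle_idx: int = 0) -> np.ndarray:
    """Normalize firefly to Renilla per well, then express as fold over vehicle.

    firefly, renilla: 1-D arrays of raw luminescence, one entry per condition.
    """
    firefly = np.asarray(firefly, dtype=float)
    renilla = np.asarray(renilla, dtype=float)
    ratio = firefly / renilla              # transfection-normalized activity
    return ratio / ratio[vehicle_idx]      # fold change vs. vehicle condition

# Illustrative values for vehicle and three increasing rhIL-17A doses
print(fold_activation([1200, 2500, 4100, 6300], [950, 1000, 980, 990]))
```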
RNA-seq library preparation and sequencing
RNA purity was assessed using a Kaiao K5500® spectrophotometer (Kaiao, Beijing, China). RNA integrity and concentration were assessed using the RNA Nano 6000 Assay Kit on the Bioanalyzer 2100 system (Agilent Technologies, CA, USA). The RNA integrity number (RIN) was > 7.5 for all samples. Sequencing libraries were generated using the NEBNext® Ultra™ RNA Library Prep Kit for Illumina® (#E7530L, NEB, USA) following the manufacturer's recommendations, and index codes were added to attribute sequences to each sample. In brief, mRNA was purified from 2 µg of each total RNA sample using poly-T oligo-attached magnetic beads. Fragmentation was performed using divalent cations under elevated temperature in NEBNext First Strand Synthesis Reaction Buffer (5X). First-strand cDNA was synthesized using random hexamer primers and RNase H. Second-strand cDNA synthesis was subsequently performed using buffer, dNTPs, DNA polymerase I and RNase H. The library fragments were purified with QIAquick PCR kits and eluted with EB buffer, and A-tailing and adapter addition were then performed for terminal repair. The intended products were retrieved, and PCR was performed to complete the library preparation.
Gene Set Enrichment Analysis (GSEA)
Publicly available microarray data of ALL patients were retrieved from the Gene Expression Omnibus (GEO) (GSE13204). Of these cases, the upper tenth (58 patients, IL-17A positively correlated cases) had the highest levels of IL-17A expression, whereas the lower tenth (58 patients, IL-17A negatively correlated cases) had the lowest levels of IL-17A expression. Preranked GSEA was performed using a gene set of BCR-ABL signaling-associated genes. For RNA-seq data, GSEA was performed using the clusterProfiler package in R 4.0.3.
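The core of preranked GSEA is a weighted Kolmogorov-Smirnov-style running sum over the ranked gene list; a minimal sketch of the enrichment-score calculation with toy genes and scores, using the standard p = 1 weighting (this illustrates the statistic itself, not the clusterProfiler implementation details):

```python
import numpy as np

def enrichment_score(ranked_genes, ranked_scores, gene_set, p: float = 1.0) -> float:
    """Weighted KS-style enrichment score (Subramanian et al., 2005).

    ranked_genes: genes sorted by a ranking metric (descending).
    ranked_scores: the corresponding metric values.
    Returns the signed maximum deviation of the running sum.
    """
    in_set = np.isin(ranked_genes, list(gene_set))
    weights = np.abs(np.asarray(ranked_scores, dtype=float)) ** p
    hit = np.where(in_set, weights, 0.0)
    p_hit = np.cumsum(hit) / hit.sum()                  # weighted hit CDF
    n_miss = len(ranked_genes) - in_set.sum()
    p_miss = np.cumsum(~in_set) / n_miss                # uniform miss CDF
    running = p_hit - p_miss
    return float(running[np.argmax(np.abs(running))])

# Toy ranked list; the "set" genes cluster at the top, so ES is strongly positive
genes = np.array(["JAK2", "STAT3", "IL6", "ABL1", "GAPDH", "ACTB", "TP53", "MYC"])
scores = np.array([3.2, 2.9, 2.5, 2.1, 0.4, 0.2, -1.0, -2.0])
print(enrichment_score(genes, scores, {"JAK2", "STAT3", "IL6"}))
```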
Real-time PCR
Total RNA was extracted from cell samples using TRIzol (Invitrogen, CA, USA) following the manufacturer's instructions. Reverse transcription of the total cellular RNA was performed using a TransScript One-Step gDNA Removal Kit (TransGen Biotech, AE311) and a cDNA synthesis SuperMix kit (TransGen Biotech, AE341) according to the manufacturer's instructions. PCR amplification was performed in triplicate. Each reaction contained 1X SYBR FAST qPCR Master Mix (KAPA Biosystems, KK4602), 1 μL of mixed primers and 1 μL of template cDNA. PCR was conducted with a MyCycler thermal cycling instrument (qTOWER, Analytik Jena, Germany). The PCR primer sequences are listed as follows: human IL-13 forward: 5′-ACGGTCATTGCTCTCAC

Data availability

The RNA-seq data generated in this study are available under accession code GSE210091. The single-cell datasets from HDs and Ph + B-ALL patients were analyzed using the accession code GSE134759.
Fig. 1 |
Fig. 1 | IL-17A, the signature cytokine of Th17 cells, is positively associated with disease progression in Ph + B-ALL. a Flow cytometric analysis of the frequencies of Th17 cells within the CD4 + T population in BMMCs from Ph + B-ALL patients (n = 8 samples) or healthy donors (n = 6 samples). b Flow cytometric analysis of the frequencies of Th17 cells within the CD4 + T population in PBMCs of WT mice (n = 8 mice) or BCR-ABL tTA -transplanted mice (n = 10 mice) at the indicated times after transplantation. c Immunofluorescence analysis of Th17 cells in the BM, spleen and LNs of WT mice and BCR-ABL tTA mice. n = 5 fields for BM, 8 fields for spleen, and 6 fields for LN from three different mice per group. d, e Uniform manifold approximation and projection (UMAP) visualization of different cell clusters (d) and stacked bar plot showing the percentages of the indicated cells (e) in an HD (n = 1 sample, GSM3732352) and a Ph + B-ALL patient (n = 1 sample, GSM3732354) from the GSE134759 dataset. f Serum cytokine concentrations in WT mice and BCR-ABL tTA mice were measured by cytokine array (n = 5 mice per group). Heatmaps with Z scores of serum cytokines are presented. g Log2 expression values for IL-17RC and IL-17RA in Ph + B-ALL patients (n = 121 samples), Ph - B-ALL patients (n = 215 samples), and HDs (n = 56 samples) were extracted from the GEO database (GSE13204). h The IL-17A concentrations in the serum of Ph + B-ALL patients (n = 10 samples), Ph - B-ALL patients (n = 8 samples) and HDs (n = 7 samples) were measured by ELISA. i The expression of IL-17RA and IL-17RC on CD19 + cells from Ph + B-ALL patients (n = 16 samples) and HDs (n = 10 samples) was detected by FACS. j Kaplan-Meier survival curves of B-ALL patients stratified by IL-17A expression from the GSE11877 dataset. Statistical significance was calculated by (a, b, c, i) two-tailed Student's t-test; (g, h) one-way ANOVA with Tukey's multiple comparison tests; (j) two-sided log-rank test. Data are presented as means ± S.E.M. Source data are provided as a Source Data file.
Fig. 2 |
Fig. 2 | IL-17A promotes the proliferation and survival of Ph + B-ALL cells. a Schematic experimental design showing Th17 cells cocultured with leukemia cells with or without anti-IL-17A mAb treatment. b, c Representative FACS plots and flow cytometric analysis of leukemia cell apoptosis as determined by Annexin-V and PI staining (b) and of Ki-67 + leukemia cells (c) in the coculture system with or without anti-IL-17A mAb treatment. b, c n = 3 independent experiments. The strategy for isolating Th17 cells is shown in Supplementary Fig. 6a. d SupB15 (n = 3 independent experiments) and BV173 cells (n = 5 independent experiments) incubated with different concentrations of rhIL-17A were assessed by CCK-8 assay. e Relative proliferation viabilities of SupB15 and BV173 cells incubated with rhIL-17A (100 ng/ml) or vehicle were detected by CCK-8 assay at the indicated times. The colors represent different time points, and the diameter indicates the relative cell proliferation (normalized to the baseline on day 1) (n = 4 independent experiments). f The viability of primary Ph + B-ALL cells incubated with or without rhIL-17A (100 ng/ml) was determined by a cell counter (n = 3 independent experiments).
g Schematic representation of the SupB15-GFP-Luc cell xenograft experiment. h The infiltration of B-ALL cells in vivo was quantitatively monitored by a biophotonic imaging system in mice treated with rhIL-17A or vehicle. The infiltration of B-ALL cells was monitored on days 1, 8, and 15 after transplantation, and representative images of the indicated groups are shown (n = 6 mice per group). i Representative images of Wright-Giemsa-stained PB smears and H&E staining of spleens from the indicated groups (n = 3 mice per group). j Flow cytometric analysis of the percentage of hCD45 + cells in the PB, spleen and BM of the indicated groups (n = 5 mice per group). The gating strategy for hCD45 + cells is shown in Supplementary Fig. 2e. k Kaplan-Meier survival curves of SupB15-GFP-LUC cell-engrafted NSG mice treated with rhIL-17 or vehicle (n = 8 mice per group). Statistical significance was calculated by (b, c, d) one-way ANOVA with Tukey's multiple comparison tests; (f, j) two-tailed Student's t-test; (k) two-sided log-rank test. Data are presented as means ± S.E.M. Source data are provided as a Source Data file.
Fig. 3 |
Fig. 3 | IL-17A deficiency or neutralization attenuates the progression of Ph + B-ALL. a Schematic strategy for investigating the role of IL-17A knockout in leukemogenesis. b Flow cytometric analysis of the percentage of B220 dim CD19 + cells in the PB of WT mice and IL-17A -/- mice with secondary transplantation at the indicated times (n = 5 mice per group). c-e Representative images of Wright-Giemsa-stained PB smears (c, left), H&E staining of the spleen (c, right), spleens (d) and spleen weights (e) in WT or IL-17A -/- mice with secondary transplantation (n = 5 mice per group). f, g Flow cytometric analysis of the percentages of B220 dim CD19 + cells (f) and B220 dim CD19 + Ki-67 + cells (g) in the BM, spleen, LNs and PBMCs from WT mice and IL-17A -/- mice with secondary transplantation (n = 5 mice per group). h Strategy for investigating the effects of anti-IL-17A mAb alone or combined with imatinib on Ph + B-ALL progression in vivo. i-k Representative images of spleens (i), statistical analysis of spleen weights (j), representative images of Wright-Giemsa-stained PB smears (k, top) and H&E staining of spleens (k, bottom) from mice with secondary transplantation treated with the indicated agents (n = 5 mice per group). l, m Flow cytometric analysis of the percentages of B220 dim CD19 + cells (l) and B220 dim CD19 + Ki-67 + cells (m) in the PB, BM, LNs and spleens of mice with secondary transplantation treated with the indicated agents (n = 5 mice per group). n Kaplan-Meier survival curves of mice with secondary transplantation treated with the indicated agents (n = 8 mice per group). b, f, l The gating strategy for B220 dim CD19 + cells is shown in Supplementary Fig. 3a. g, m The gating strategy for B220 dim CD19 + Ki67 + cells is shown in Supplementary Fig. 3b. Statistical significance was calculated by (b, e, f, g) two-tailed Student's t-test; (j, l, m) one-way ANOVA with Tukey's multiple comparison tests; (n) two-sided log-rank test. Data are presented as means ± S.E.M. Source data are provided as a Source Data file.
Fig. 4 |
Fig. 4 | IL-17A activates the BCR-ABL and JAK/STAT3 signaling pathways to promote the proliferation of Ph + B-ALL cells. a Volcano plot of differentially expressed genes in primary mouse B-ALL cells with or without rmIL-17A treatment. According to the criteria of Log2(fold-change) > 1.5 or < -1.5 and P value < 0.05, selected upregulated and downregulated genes are highlighted in the volcano plot. b Molecular signature classification screening by GSEA with the top enriched signaling pathways in primary mouse leukemia cells treated with or without rmIL-17A. a, b n = 3 independent experiments. c GSEA demonstrating the enrichment of gene sets in Ph + B-ALL patients with IL-17RA mRNA (205707_at) expression above the median level (IL-17RA high) (n = 61) and below the median level (IL-17RA low) (n = 61). d GSEA demonstrating the enrichment of the indicated gene set in B cells from Ph + B-ALL patients and HDs in the GSE134759 dataset. e Real-time PCR analysis of the relative mRNA levels of Jak2 and Il-6 in primary mouse leukemia cells treated with or without rmIL-17A (n = 3 independent experiments). f, g The human IL-6 concentrations in the culture supernatant of SupB15 cells (n = 3 independent experiments) (f) and primary Ph + B-ALL cells (g) treated with different concentrations of rhIL-17A were measured by ELISA (n = 4 independent experiments). h Real-time PCR analysis of the relative mRNA levels of Jak2 and Il-6 in primary mouse leukemia cells treated with pathological concentrations of rmIL-17A (n = 3 independent experiments). i The levels of the indicated proteins in primary mouse leukemia cells treated with or without rmIL-17A were measured by Western blotting. n = 3 independent experiments. The samples were derived from the same experiment, and the gels/blots were processed in parallel. a-d Statistical significance was determined by a one-sided permutation test, and statistical adjustments were made for multiple comparisons. Statistical significance was calculated by (e, g) two-tailed Student's t-test; (f, h, i) one-way ANOVA with Tukey's multiple comparison tests. Data are presented as means ± S.E.M. Source data are provided as a Source Data file.
Fig. 5 |
Fig. 5 | IL-17A stimulates the secretion of CXCL16 from leukemia cells, promoting the differentiation and migration of Th17 cells. a B220 + cells and Th17 cells in the BM niche of WT mice and BCR-ABL tTA mice (n = 3 mice per group) were subjected to immunofluorescence staining; representative images are shown. b The numbers of Th17 cells migrating toward culture medium or toward leukemia cells treated with or without 50 ng/ml rhIL-17A were determined by cell counting and FCS analysis. c Real-time PCR analysis of the relative mRNA levels of chemokine genes in SupB15 (left) and BV173 cells (right) treated with rhIL-17A (100 ng/ml) or PBS control. d The human IL-5, CXCL5, CCL5 and CXCL16 levels in the culture supernatant of SupB15 cells treated with or without rhIL-17A (100 ng/ml) were measured by ELISA. e The CXCL16 concentrations in the serum of BCR-ABL tTA mice and WT mice (n = 5 mice per group) were quantified with a Mouse Chemokine Assay Q1 (RayBiotech). f The CXCL16 concentrations in the serum of HDs (n = 7 samples) and primary Ph + B-ALL patients (n = 14 samples) were measured by ELISA. g Schematic strategy for studying the differentiation of Th17 cells with or without rmCXCL16 stimulation in vitro. h Representative FACS plots and quantification of the percentage of Th17 cells among naïve CD4 + T cells with or without rmCXCL16 treatment. i Schematic strategy for studying the migration of Th17 cells cocultured with leukemia cells with or without anti-CXCL16 mAb treatment in vitro. j, k Flow cytometric analysis of the percentage of Th17 cells in the lower chambers of the inserts (j) and the proliferation activity of leukemia cells (k) in a coculture system with or without anti-CXCL16 treatment. The gating strategy for isolating Th17 cells is shown in Supplementary Fig. 6a. (b-d, h, j, k) n = 3 independent experiments. Statistical significance was calculated by (c-f, j, k) two-tailed Student's t-test; (b, h) one-way ANOVA with Tukey's multiple comparison tests. Data are presented as means ± S.E.M. Source data are provided as a Source Data file.
Fig. 7 |
Fig. 7 | The anti-CXCL16 mAb synergizes with imatinib to attenuate the progression of Ph + B-ALL. a Flow cytometric analysis of the percentages of B220 dim CD19 + cells in the BM, spleens, LNs and PB of mice with secondary transplantation treated with or without rmCXCL16 (n = 6 mice per group). b-d Representative images of Wright-Giemsa-stained PB smears (b, top), H&E staining of spleens (b, bottom), spleens (c) and spleen weights (d) from the indicated mice (n = 6 mice per group). e Representative images of Ki67 staining in spleen tissues from the indicated mice. n = 4 fields, two different mice per group. f Flow cytometric analysis of the percentages of Th17 cells in the PB, LNs, spleens and BM from the indicated mice (n = 6 mice per group). g Spleen tissues from the indicated mice were subjected to immunofluorescence staining for IL-17A (red), CD4 (green), and B220 (rose red); representative images of Th17 cells are shown. n = 4 fields, two different mice per group. h Kaplan-Meier survival curves for the indicated mice (n = 8 mice per group). i Schematic strategy for investigating the effects of anti-CXCL16 mAb alone or combined with imatinib on Ph + B-ALL progression. j-l Representative images of spleens (j), spleen weights (k), Wright-Giemsa-stained PB smears (l, top), and H&E staining of the spleen (l, bottom) from leukemia mice treated with the indicated agents (n = 3 mice per group). m Flow cytometric analysis of the percentages of B220 dim CD19 + cells in the PB, BM, LNs and spleens from leukemia mice treated with the indicated agents (n = 3 mice per group). n The percentage of Ki-67 + cells in the spleen was detected by immunofluorescence staining in the indicated mice. Data are presented as means ± S.E.M. of eight random fields of view from three different mice per group. o Flow cytometric analysis of the percentages of Th17 cells in the PBMCs, LNs, spleens and BM of leukemia mice treated with the indicated agents (n = 3 mice per group). (a, m) The gating strategy for B220 dim CD19 + cells is shown in Supplementary Fig. 3a. (f, o) The gating strategy for Th17 cells within the CD4 + T population is shown in Supplementary Fig. 6b. Statistical significance was calculated by (a, d-g) two-tailed Student's t-test; (k, m-o) one-way ANOVA with Tukey's multiple comparison tests. Data are presented as means ± S.E.M. Source data are provided as a Source Data file.
Fig. 8 |
Fig. 8 | Schematic diagram illustrating the feedback loop formed by Th17 cells, IL-17A, and CXCL16 that promotes Ph + B-ALL progression. The Th17 cell population and IL-17A expression are distinctly increased in Ph + B-ALL patients, and high expression of IL-17A promotes the progression of Ph + B-ALL. IL-17A promotes the proliferation and survival of Ph + B-ALL cells by activating the BCR-ABL and IL6/JAK/STAT3 signaling pathways. Moreover, IL-17A can increase the secretion of the chemokine CXCL16 from leukemia cells by activating NF-κB, which in turn mediates the differentiation and recruitment of Th17 cells to the leukemia niche microenvironment. Targeting IL-17A or CXCL16 in the leukemia niche microenvironment attenuates the progression of Ph + B-ALL.
The overall survival (OS) and disease-free survival (DFS) of patients with high and low IL-17A expression were analyzed in dataset GSE11877. IL-17RA and IL-17RC expression in Ph + B-ALL patients, Ph - B-ALL patients and healthy donors, and the enrichment of gene sets in Ph + B-ALL patients with IL-17RA high vs IL-17RA low or IL-17A high vs IL-17A low expression, were analyzed using the accession code GSE13204. The remaining data are available within the Article, Supplementary Information or Source Data file. Source data are provided with this paper. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Formation of the Quality Indicators of Hemp (Cannabis sativa L.) Seeds Sown under Organic Growing Technology
The oil content of hemp seeds is controlled by the genotype and, in the conducted studies, did not depend on the growing technology; however, this factor had a synergistic effect with others. The protein content of hemp seeds did not depend on weather conditions during the years of research. Like other quality indicators, it varied only slightly, which indicates the significant role of the genetic characteristics of the varieties. On average, over the years of research, the protein content of the variants grown under conventional technology was 25.2%, and under transitional technology it was 0.03% higher, which was within the limits of statistical error. The organic technology ensured a protein content of 25.3%, and the use of the microbial biodestructor BioStymix-Niva contributed to a further increase of the indicator to 25.4%. The oil content of hemp seeds is not limited by other important characteristics, such as the yield of hemp stems or the fiber content. Only the Glyana variety showed inverse correlations with plant height, stem yield and seed productivity; they were of medium strength (r = -0.60 to -0.43). In the Zolotoniski 15 variety, only one inverse relationship was recorded, with plant height (r = -0.57). No correlation was established between protein content and oil content in seeds. Correlations may change depending on other cultivation factors, including weather conditions, elements of technology, etc., but evaluating varieties for cultivation according to these characteristics can significantly increase the efficiency of hemp production.
INTRODUCTION
After many years of unprecedented restrictions, this agricultural crop of ancient civilization is gradually returning to the fields and attracting deserved attention from science and industry. What is gaining special relevance is not only a narrow approach to the revival of hemp growing, but a much deeper agenda: greening of crop production and of the food and technical industries, improvement of the environment, renewal of raw resources, and many other aspects of human activity. In this context, the basis is a scientifically grounded, rational system of hemp cultivation, built on the selection of varieties for targeted cultivation and on the development of organic cultivation technologies that take agrobiological and agroecological features into account. Hemp has the potential for environmentally friendly, sustainable cultivation (Small and Marcus, 2002), and the majority of US farmers surveyed, namely 75%, expressed interest in certified production (Dingha, 2019). According to Lithuanian scientists, this crop meets the principles of ecological production as fully as possible: it does not deplete the soil and reduces the number of weeds in the field; in addition, hemp products are ecologically clean and harmless to the environment. Moreover, the revival of this industry, which began in Lithuania in 2014, has good economic and social prospects, creating new jobs (Jonaitienė et al., 2016). The growth and development of hemp and its productivity largely depend on the supply of plants during the growing season with the necessary productivity factors: heat, moisture, nutrients, etc.
Excessive temperature (a daily maximum above 30°C) during the grain-filling phase is one of the main factors affecting seed quality, limiting seed oil accumulation (Baldini et al., 2020). Hemp requires a sufficient amount of moisture during the entire growing season; severe drought can significantly accelerate ripening and leave plants short, which negatively affects the yield and quality of fiber as well as seed yield (Amaducci et al., 2002).
An important aspect of obtaining the optimal combination of quality and yield is also the plant density per unit area. In overly dense crops, up to 50-60% of plants were lost to intraspecific competition (Amaducci et al., 2002). There is a debate about the rates and timing of nitrogen fertilizer application: some researchers report no response to the timing of fertilizer use (Finnan and Burke, 2013), while others report a significant effect of post-sowing application (Legros et al., 2013).
There are conflicting data regarding the rates and timing of nitrogen application, as well as possible undesirable consequences of excessive nitrogen fertilization; it is therefore recommended to study the response of varieties to fertilizers under the specific environmental conditions where hemp is to be grown (Papastylianou et al., 2018). High application rates of nitrogen fertilizers increase the protein content of seeds (Adamovics et al., 2016). Depending on the growing conditions, nitrogen application rates from 60 to 200 kg/ha were found to be the most effective, increasing plant height, stem diameter, seed yield, and biomass production in dual-purpose varieties (Tang et al., 2017; Aubin et al., 2016; Vera et al., 2004). Food and medicinal products require organic production (Caplan et al., 2013). The organo-mineral fertilization system was most effective when 100 kg/ha of nitrogen and 20 t/ha of cow manure were used. The highest oil content was obtained when applying the maximum N rate of 50 kg/ha without phosphorus. Manure at 30 t/ha with 100 kg/ha of nitrogen increased the leaf yield index and decreased the seed yield index. Thus, hemp responds well to the combined application of nitrogen fertilizers and animal manure, while its response to P application was limited (Laleh et al., 2021). When urea is used for fertilization, it is advisable to combine its application with nitrogen inhibitors, which ensure a higher yield of straw and seeds and significantly reduce the required rate of nitrogen fertilizer (Kakabouki et al., 2021).
A comprehensive research program in the Netherlands concluded that hemp is a potentially profitable crop with unique traits and characteristics that fit harmoniously into a sustainable farming system (Werf et al., 1996). Organic strategies require special adaptation to the environment, whereas conventional strategies do not meet the requirements or principles of organic farming at all. The norm in many European countries is the cultivation of varieties suitable for multipurpose use, in particular for fiber and seeds (Carus et al., 2013; Tang et al., 2016). Global food security and sustainability can be improved by increasing the production and use of hemp seeds, which not only provide healthy sources of protein and oil; the whole plant can also be used for fiber (Schultz et al., 2020; Trovato et al., 2023). Hemp oil is rich in nutrients that have nutritional and functional benefits for the human body. The content and quality of the oil are influenced by both varietal properties and agrotechnical methods (Marzocchi and Caboni, 2020).
The most interesting property of hemp oil is its content of polyunsaturated fatty acids, i.e. linoleic (C18:2, ω-6) and α-linolenic (C18:3, ω-3) acids. In addition, the unsaponifiable fraction has been noted to have anti-inflammatory and antimicrobial effects and to lower cholesterol. It is assumed that these oils can be used for the production of dietary supplements with a high content of ω-6 and ω-3 of plant origin (Da Porto et al., 2015). Similar data were also obtained by Ukrainian scientists (Sova et al., 2018).
Oil content and quality are largely determined by varietal characteristics (Blasi et al., 2022; Frassinetti et al., 2018). The differences between varieties can be considerable. In the studies of Golimowski et al., the oil yield from seeds, depending on varietal properties, was 30.45 ± 0.85, 29.47 ± 0.84, and 31.03 ± 1.03% for the Finola, Earlina 8FC, and S. Jubileu varieties, respectively, while the oils also differed in quality (Golimowski et al., 2022). Thus, the selection of varieties for cultivation deserves due attention depending on the conditions (Pavlovic et al., 2019). Hemp seeds are a good source of plant-based protein; for example, 2-3 tablespoons of hemp seeds provide almost 11 grams of protein, containing methionine, lysine and cysteine.

The aim of the work was to determine the influence of varietal properties and growing technologies on seed quality indicators, namely oil and protein content. In addition, an attempt was made to determine the dependence between stem yield indicators and hemp seed quality, and to establish which indicators limit each other.
MATERIALS AND METHODS
The experiments were conducted in the Poltava region, which belongs to the zone of unstable moisture. The soil is leached chernozem; the 0-20 cm layer has the following parameters: pH 6.6, P2O5 content 140.3 mg/kg, K2O 87.7 mg/kg. The depth of the humus horizon is 53-100 cm, and the humus content is 4.16%. After the main tillage of the soil in the fall (ploughing), spring moisture-closing harrowing and pre-sowing cultivation were carried out. Sowing was performed with a Monosem seeder in four replications, with a seeding rate of 1.2 million germinable seeds/ha for dual-purpose use and 4.0 million seeds/ha for green mass. The recorded plot area for green-mass and dual-purpose plots was 25 m²; the total area of the experiment was 0.68 ha. Before spring tillage, the BioStymix-Niva destructor was applied with a ground sprayer at a rate of 1 l/ha. In the experiment, the seed hemp varieties Glyana, Zolotoniski 15, Lara, Globa and Sula were grown under the following cultivation technologies: conventional, with broadcast application of N30P30K30 before sowing; transitional; organic; and organic with the use of the microbial biodestructor BioStymix-Niva (1 l/ha). The oil content was determined from the defatted residue using a Soxhlet apparatus. Diethyl ether was used as the solvent; 1 g of ground seeds was extracted until the ether was completely decolorized. The protein content of seeds was determined by the Kjeldahl method (House et al., 2010).
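Both seed-quality indicators reduce to simple arithmetic on the laboratory measurements; a minimal sketch of the underlying calculations (the sample masses and nitrogen value are illustrative, and 6.25 is the conventional Kjeldahl nitrogen-to-protein conversion factor):

```python
def oil_content_pct(extracted_oil_g: float, sample_g: float) -> float:
    """Soxhlet oil content as a percentage of seed mass."""
    return 100.0 * extracted_oil_g / sample_g

def kjeldahl_protein_pct(nitrogen_pct: float, factor: float = 6.25) -> float:
    """Crude protein from Kjeldahl nitrogen using a nitrogen-to-protein factor."""
    return nitrogen_pct * factor

print(oil_content_pct(0.295, 1.0))   # 29.5% oil from a 1 g ground-seed sample
print(kjeldahl_protein_pct(4.05))    # ~25.3% protein at 4.05% total nitrogen
```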
RESULTS AND DISCUSSION
The oil content of hemp seeds is controlled by the genotype and, in the studies conducted, did not depend on the growing technology, but this factor had a synergistic effect with others. The oil content indicator itself also showed weak variation, as did the fiber content. On average, the lowest oil content was found in the seeds of the 2019 harvest (29.41%), whereas in the following two years it was slightly higher (29.45%). The highest oil content, averaged over three years of research, was obtained in the variants of the Globa variety: 29.73-29.76% (Table 1).
The Lara variety was characterized by a slightly lower oil content, within 29.72%; the remaining varieties had an even lower indicator, and the lowest amount of oil was noted in the seeds of the Glyana variety (29.1-29.16%). The use of the destructor in the cultivation technologies did not lead to any significant change in oil content. The protein content of hemp seeds did not depend on weather conditions during the years of research. Like other quality indicators, it showed very small variation, which indicates the significant role of the genetic characteristics of the varieties. The protein content of the seeds depended on varietal properties and on the interaction of this factor with the growing technology; a combined effect of weather conditions and varietal properties was also observed. Indeed, over the years of research, the protein content of the seeds was virtually the same, at 25.3%. Therefore, the differences between the variants should be sought within the multifactorial complex.
Over three years of research, the best results in terms of protein content were recorded in the Sula variety; the difference between the conventional (mineral) technology and organic technology variants exceeded the least significant difference (NIR) value (Table 2). On average, over the years of research, the difference was 0.16%, reaching 0.2% in 2019. The average protein content in hemp seeds of the Sula variety was 25.9% across all cultivation variants. The other varieties contained comparatively smaller amounts.
The Globa variety was characterized by an average protein content of 25.5%; in the Lara variety, the amount of protein was in the range of 24.9-25.3%, giving an average of 25.2%. The Zolotoniski 15 and Glyana varieties had the same content, which did not exceed 24.9%. Thus, the cultivar factor is the most influential and should be taken into account when selecting cultivars for cultivation. For comparison, the Finola variety contains 24.8% protein (Callaway, 2004). On average over the years of research, the protein content of variants grown under conventional technology was 25.2%, and under transitional technology it was 0.03% higher, which was within the limits of statistical error.
Organic technology provided a protein content of 25.3%, and the use of the destructor contributed to a further increase of the indicator to 25.4% (Figure 1). From an economic point of view, the differences between varieties and cultivation-technology variants had little influence on the formation of protein content, so all varieties are suitable for cultivation under organic technologies. The observed regularity should nevertheless be taken into account when selecting a variety, evaluating it also from the point of view of other economically valuable traits. To resolve this question, a correlation analysis of the protein content with the other economically valuable indicators was carried out.

Varietal properties significantly influenced the formation of dependencies. In the Glyana variety, no correlations that would limit the accumulation of protein in the seeds were observed; all coefficients remained weak. In the Zolotoniski 15 variety an inverse correlation was observed with plant height, and in the Lara variety with fiber content; the correlation coefficients were −0.32 to −0.30. These varieties can therefore serve a dual purpose, although if some factor changes, including the weather conditions, the problem of quality loss may arise in one of the areas of use. The Sula variety, which had the best yield and seed-quality indicators, was also characterized by inverse correlations with plant height, oil content, and fiber yield (r = −0.29 to −0.66). In the overall data array of the experiment, however, these inverse correlations dissolved and were effectively absent. The correlation of the protein content with the yield of hemp stems proved significant (r = 0.70), which, in the authors' opinion, indicates that up to a certain value of yield or protein content there may be a linear relationship between them, which can then transform into some other form.

The obtained results allow stating that, regardless of the biometric aspects of cultivation and the interdependencies among them, organic cultivation technologies do not limit the protein content but, on the contrary, contribute to its increase. Therefore, in organic production the food direction of growing seed hemp also has significant, if not primary, prospects. As can be seen from the above results, the oil content of hemp seeds is not limited by other important characteristics, such as the hemp-stem yield or the fiber content. Only the Glyana variety showed inverse correlations with plant height and with hemp-stem and seed productivity, and these were of medium strength (r = −0.60 to −0.43). In the Zolotoniski 15 variety only one inverse relationship was recorded, with plant height (r = −0.57). In the Globa variety such a correlation was observed with seed yield. It is quite possible that the large number of correlations in the Glyana variety is the reason for its lower yield and somewhat lower quality, owing to the limitation of some factors by others.
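As an illustration of how such a screening can be carried out, the sketch below computes a Pearson correlation matrix across traits. The trait values are illustrative placeholders, not the raw data of this experiment, and the column names are chosen here for convenience.

```python
# Pairwise Pearson correlations between agronomic traits, as used to screen
# varieties for mutually limiting relationships. The values are placeholders.
import pandas as pd

traits = pd.DataFrame({
    "plant_height_m": [1.92, 1.85, 1.78, 1.88, 1.80],
    "stem_yield":     [6.1, 5.8, 5.5, 6.0, 5.7],      # t/ha
    "seed_yield":     [0.52, 0.55, 0.58, 0.54, 0.60],  # t/ha
    "fiber_content":  [27.1, 26.5, 25.9, 26.8, 25.4],  # %
    "oil_content":    [29.1, 29.4, 29.7, 29.5, 29.8],  # %
    "protein":        [25.1, 25.3, 25.5, 25.2, 25.9],  # %
})

# Coefficients around |r| >= 0.3 were treated above as medium-strength;
# inverse signs flag potentially limiting relationships between traits.
print(traits.corr(method="pearson").round(2))
```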
In general, according to the results of multiple regression analysis, the equation for the formation of oil content can be written in the form presented in Figure 2, which shows the actual and predicted dependence of oil content on protein content and seed yield.
According to the obtained equations, the oil content was inversely correlated with seed yield and directly correlated with protein content. Modeling by the method of nonlinear multiple estimation showed that the highest protein content in hemp seeds can be formed at a seed yield of 0.54-0.60 t/ha. In this case it is also possible to achieve the highest protein content, at the level of 26.5-27.0%, under organic cultivation in conditions of unstable moisture.
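A response-surface fit of this kind can be reproduced with ordinary least squares. The sketch below fits a quadratic dependence of protein content on seed yield and locates its maximum; the data are synthetic stand-ins, since the fitted equations themselves appear only in Figure 2.

```python
# Fitting protein = b0 + b1*y + b2*y^2 on seed yield y, then locating the
# vertex of the parabola. Synthetic data stand in for the values of Figure 2.
import numpy as np

rng = np.random.default_rng(0)
seed_yield = rng.uniform(0.40, 0.70, 60)                  # t/ha
protein = 26.8 - 40.0 * (seed_yield - 0.57) ** 2 \
          + rng.normal(0.0, 0.1, 60)                      # %, synthetic

X = np.column_stack([np.ones_like(seed_yield), seed_yield, seed_yield ** 2])
b0, b1, b2 = np.linalg.lstsq(X, protein, rcond=None)[0]

y_opt = -b1 / (2.0 * b2)                                  # vertex of the fit
p_opt = b0 + b1 * y_opt + b2 * y_opt ** 2
print(f"Protein is maximised near {y_opt:.2f} t/ha at ~{p_opt:.1f} %")
```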
The protein content also depended on the seed yield; in addition, this indicator was influenced by the yield of the crop (Figure 3). A direct relationship was recorded between these traits, but the regression model with nonlinear estimation of the dependence allowed establishing the possibility of a feedback relationship with the protein content within this system. The given dependencies indicate the absence of mutually limiting connections between the quality indicators of hemp within the range of values obtained over the years of the experiment. As with the yield properties, the qualitative indicators do not deteriorate when hemp is grown under organic technologies. The obtained results indicate the prospect of managing hemp production processes to achieve maximum performance.
In order to effectively manage the economically valuable traits of hemp agrocenoses, it is necessary to carefully analyze the system of mutually limiting relationships, which may carry the risk of yield failure or deterioration of product quality. Such relationships can be detected using correlation analysis. In production, the use of this method for evaluating varieties will obviously encounter certain problems, because objective evaluation requires accurate observations and records, appropriate laboratory and software equipment, and trained, highly qualified personnel. Therefore, such a task should be set before the variety-originating (breeding) institutions.
No correlation was established between the protein content and the oil content in the seeds. The studied varieties were characterized by different numbers of correlations (Table 3). The Glyana variety had inverse relationships of the fiber content in stalks and the oil content in seeds with plant height, hemp-stem yield, and seed yield, with correlation coefficients close to the values defining these inverse relationships as strong. The protein content in the seeds, on the contrary, had only a weak or medium relationship with the indicated indicators. Thus, this variety is better suited for cultivation aimed at the use of specific compounds, i.e., proteins.
Similar features in the manifestation of correlations were also observed in the Globa variety; the correlation coefficients showed a medium inverse relationship of the oil content with plant height and with the yields of hemp stems, fiber, and seeds. In the Zolotoniski 15 variety, plant height at harvest proved to be a limiting factor for the oil and protein content; the correlation coefficients were −0.57 and −0.30, respectively. Thus, this variety may be better used for cultivation for technical purposes, to obtain fiber.
The Lara variety was characterized by the smallest number of correlations between economically valuable traits; only the oil content had a medium inverse correlation with the yield of hemp stems (r = −0.32). This property indicates the variety's prospects for dual-purpose cultivation. The system of correlations in the Sula variety was somewhat different: the fiber content in the stems was inversely correlated with the length of the period to biological maturity and with the fiber yield. This variety was also characterized by inverse correlations of the protein content with the remaining economically valuable traits, while the oil content was directly correlated with them.
It should be noted that the correlations may change depending on other cultivation factors, particular weather conditions, elements of technology, and so on, but evaluating varieties by these characteristics can significantly increase the efficiency of hemp production. Modeling of the processes that form product-quality and yield indicators therefore remains relevant: the first models of hemp crops were implemented more than 20 years ago, and later studies considered only the phenology of the crop. The conducted research largely compensates for the lack, or low availability, of such information and is aimed at the multi-purpose use of hemp, which is now gaining special relevance (Baldini et al., 2020). The established features of the correlation relationships to a large extent guide the selection of varieties for growing under organic production conditions. As noted previously (Carus et al., 2013; Tang et al., 2016), dual use of hemp for fiber and seeds, as a rule, leads to the formation of uneven short stems; thus, inverse correlations may exist. In the conducted research, such correlations characterized the varieties Zolotoniski 15, Glyana, and Globa; the coefficients of oil content with plant height were in the range of −0.61 to −0.38. The Lara and Sula varieties did not have such connections, which indicates their better suitability for dual use and for growing under organic technologies.
CONCLUSIONS
The oil content in the seeds is mainly influenced by varietal properties and by the conditions of the year of cultivation. The choice of cultivation technology did not affect the formation of this indicator; however, an additive effect was observed with the other factors, variety and year conditions. Growing hemp using organic technologies contributed to a significant increase in the protein content of the seeds: compared with the conventional cultivation variants, organic technology provided a protein content of 25.3%, and the use of the destructor contributed to a further increase of this indicator to 25.4%. Each variety had its own pattern of correlations of this indicator with the others, which proves the need for careful selection of varieties for cultivation. To select varieties for growing for different purposes, it is advisable to use correlation analysis, since varieties can be characterized by different numbers, strengths, and directions of connections. Defining such a system can make it possible to use the potential of hemp agrocenoses more effectively.
|
v3-fos-license
|
2017-09-29T03:37:04.000Z
|
2017-07-19T00:00:00.000
|
54506617
|
{
"extfieldsofstudy": [
"Physics",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-017-06198-9.pdf",
"pdf_hash": "908cc628706f097aa341237b5a67199ed9049aa2",
"pdf_src": "ArXiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43536",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "908cc628706f097aa341237b5a67199ed9049aa2",
"year": 2017
}
|
pes2o/s2orc
|
Su-Schrieffer-Heeger chain with one pair of 𝒫𝒯-symmetric defects

The topologically nontrivial edge states induce a 𝒫𝒯 transition in the Su-Schrieffer-Heeger (SSH) chain with one pair of gain and loss defects at the boundaries. In this study, we investigated a pair of 𝒫𝒯-symmetric defects located inside the SSH chain, in particular with the defects at the chain centre. The 𝒫𝒯-symmetry breaking of the bound states leads to the 𝒫𝒯 transition; the 𝒫𝒯-symmetric phases and the localized states were studied. In the broken 𝒫𝒯-symmetric phase, all energy levels break simultaneously in the topologically trivial phase; however, the two edge states in the topologically nontrivial phase are free from the influence of the 𝒫𝒯-symmetric defects. We discovered 𝒫𝒯-symmetric bound states induced by the 𝒫𝒯-symmetric local defects at the SSH chain centre. The 𝒫𝒯-symmetric bound states significantly increase the 𝒫𝒯 transition threshold and coalesce to the topologically protected zero mode, with vanishing probabilities on every other site of the left-half chain and the right-half chain, respectively.
Introduction. Parity-time (PT)-symmetric non-Hermitian Hamiltonians can possess real spectra [1-8] but nonunitary dynamics, such as faster evolution [9, 10] and power oscillation [11]. A PT system experiences a phase transition when its spectrum changes between real and complex. The PT transition point is the exceptional point, associated with eigenstate coalescence [8]. In a one-dimensional system, the exceptional point varies with system size and structure [12, 13]. The critical gain/loss rate approximately equals the coupling strength of a uniform chain for a pair of PT-symmetric gain and loss defects at the boundary [14]. For defect locations at the center, the critical gain/loss rate is the coupling strength between the two defects [15]. Topological properties have been extensively investigated in condensed matter physics [16-27] and in photonic systems [28-30]; furthermore, a PT-symmetric topological insulator was proposed in two-dimensional (2D) coupled optical resonators. Unlike the traditional Hermitian topological insulator, its edge states are unidirectionally amplified and damped [31]. Topological insulator states are PT-symmetry breaking in a PT-symmetric non-Hermitian extension of Dirac Hamiltonians, because the PT operator switches the edge-state locations at the boundary [32]. The Chern number was generalized for non-Hermitian systems; tight-binding models on the honeycomb and square lattices under different symmetry classes were examined, and broken PT-symmetric edge states with zero real parts of the eigenenergies were found [33]. The topologically chiral edge modes found in a 2D non-Hermitian system were related to the exceptional point of the bulk Hamiltonian, characterized by two half-integer charges of the exceptional point [34].
The Su-Schrieffer-Heeger (SSH) chain [35] with a pair of PT-symmetric defects at the boundary has been studied: the edge states found in the topologically nontrivial phase are sensitive to the non-Hermiticity [36], and the critical non-Hermitian gain and loss approach zero [37]. The non-Hermitian Kitaev and extended Kitaev models were investigated in a similar fashion to the SSH model [38, 39]. Optical systems are fruitful platforms for the investigation of PT symmetry [40-46]. Robust light interface states were discovered at the interface of two combined SSH chains with different quantum phases of PT symmetry [47]. Recently, non-Hermitian SSH chains were experimentally realized in coupled dielectric microwave resonators [48, 49] and photonic lattices [50, 51]. In passive SSH chains with periodic losses, a single coupling disorder induces an asymmetric topological zero mode [49] and PT-symmetric topological zero-mode interface states, respectively [51]. PT-symmetry breaking and topological properties were theoretically investigated in other PT-symmetric systems [52-54]; the competition between two lattice defects can induce PT-symmetry breaking and restoration with increasing non-Hermiticity in the Aubry-André-Harper model [55].
In this work, we study an open SSH chain with one pair of PT-symmetric gain and loss defects. The PT-symmetric thresholds, the topologically nontrivial edge states, and the PT-symmetric bound states induced by the local defects are investigated. The PT-symmetry breaking is closely related to the appearance of localized states. For defects located near the chain boundary, the edge states in the topologically nontrivial region break the PT symmetry if the defects sit on sites with nonzero state distribution probabilities; otherwise, the edge states are free from the influence of the on-site defects and the PT-symmetry phase transition is induced by the bulk states: an extended (bound) state induces the PT-symmetry phase transition at weak (strong) non-Hermiticity for defects near the chain boundary (at the chain center).
The PT transition threshold is largest when the defects are located at the chain center, where it equals the weaker of the two inhomogeneous couplings of the SSH chain. Two edge states and four bound states exist at large non-Hermiticity, and the number of broken energy levels increases as the defects move from the chain boundary to the center. For defects near the chain center, when the PT transition happens, all energy levels break simultaneously in the topologically trivial phase; by contrast, the two topologically nontrivial edge states do not break the PT symmetry even in the broken PT-symmetry phase. The PT transition is associated with the PT-symmetry breaking of the eigenstates. The edge-state and bound-state probabilities localize around the PT-symmetric defects; therefore, they are PT-symmetry-breaking states except when the defects are nearest neighbours. We discovered a pair of PT-symmetric bound states for defects at the chain centre; these bound states significantly increase the PT transition threshold, at which they coalesce to the topologically protected zero mode, but their probabilities are not confined to either the loss or the gain sublattice: the probabilities vanish on every other site of the left-half chain and the right-half chain, respectively.
PT-symmetric non-Hermitian SSH chain. In this section, we introduce a one-dimensional N-site SSH chain with one pair of PT-symmetric imaginary defects; the system is schematically illustrated in Fig. 1. The couplings between neighbouring sites are staggered, 1 ± ∆ cos θ, modulated by the parameter ∆. The coupled chain can be realized by optical resonators [56-58]. The defect pair includes a loss (in red) and a balanced gain (in green) [5, 6, 11, 40, 42, 45]. We define P as the parity operator, which corresponds to a reflection symmetry with respect to the chain center, satisfying PjP⁻¹ = N + 1 − j. The time-reversal operator satisfies T iT⁻¹ = −i. Under these definitions, the balanced gain and loss as an on-site defect pair satisfies the PT symmetry. The primary SSH Hamiltonian reads H₀ = Σ_{j=1}^{N−1} [1 + (−1)^j ∆ cos θ] (a†_j a_{j+1} + H.c.), where a†_j (a_j) is the creation (annihilation) operator on site j for fermionic particles. The chiral symmetry protects the topological properties of H₀ in the topologically nontrivial region. In this work, we confine our discussion to systems with even N; the non-Hermitian gain and loss defects are located at the reflection-symmetric positions m and N + 1 − m, and the non-Hermitian extended SSH chain is H = H₀ + iγ a†_m a_m − iγ a†_{N+1−m} a_{N+1−m}. Note that H satisfies the PT symmetry and is expected to have a purely real spectrum in the exact phase. The analysis and conclusions are applicable to the corresponding bosonic particles [59]. Topologically nontrivial edge states disappear in a system with universal non-Hermiticity [32, 33], but remain in a system with localized non-Hermiticity [36, 53]. The topology is changed in the presence of universal non-Hermiticity, but is robust to several impurities even when the impurities are non-Hermitian. The traditional Hermitian SSH model has inhomogeneous staggered hoppings between neighbouring sites; the SSH Hamiltonian is a two-band model. Under periodic boundary conditions, the Berry phases of the two bands can be calculated, both being π in the topologically nontrivial phase when −π/2 < θ < π/2 and both being 0 in the topologically trivial phase when −π ≤ θ ≤ −π/2 and π/2 ≤ θ ≤ π.
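The following sketch constructs this Hamiltonian numerically; it assumes the coupling convention reconstructed above (weaker coupling 1 − ∆cosθ at the boundary for cosθ > 0, gain +iγ at site m, loss −iγ at site N + 1 − m), so it is an illustration rather than the authors' code.

```python
# Minimal numerical sketch of the non-Hermitian SSH Hamiltonian H:
# staggered couplings 1 +/- Delta*cos(theta) with the weaker coupling at the
# boundary, and balanced gain/loss +i*gamma / -i*gamma at sites m, N+1-m.
import numpy as np

def ssh_pt_hamiltonian(N, delta, theta, gamma, m):
    H = np.zeros((N, N), dtype=complex)
    for j in range(1, N):                       # bond between sites j, j+1
        t = 1.0 + (-1) ** j * delta * np.cos(theta)
        H[j - 1, j] = H[j, j - 1] = t
    H[m - 1, m - 1] = 1j * gamma                # gain at site m
    H[N - m, N - m] = -1j * gamma               # loss at site N + 1 - m
    return H

# In the exact PT-symmetric phase the spectrum is purely real:
E = np.linalg.eigvals(ssh_pt_hamiltonian(100, 0.5, 0.0, 0.25, 50))
print("max |Im E| =", np.abs(E.imag).max())     # ~0 for gamma < 1 - delta
```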
Under open boundary conditions, the bulk-edge correspondence indicates the existence of edge states. For even N, two zero edge states appear in the region −π/2 < θ < π/2; by contrast, for odd N, a single edge state exists when θ = ±π/2 [59]. The edge-state probabilities are localized at the chain boundary; thus the edge states are immune to non-Hermitian on-site defects at the SSH chain center.
PT-symmetric phases. We first consider the PT-symmetric SSH chain with even n (Fig. 1a), with the defects at the chain center (m = n). The SSH chain with odd n gives similar results (Fig. 1b). The Hamiltonian H is always in the exact PT-symmetric phase as θ varies when γ < 1 − ∆; the exact PT-symmetric region shrinks for 1 − ∆ < γ < 1 + ∆. At γ > 1 + ∆, H is in the broken PT-symmetric phase for arbitrary θ. In the PT-symmetry-broken phase, all energy levels are PT-symmetry breaking in the topologically trivial region; however, the two edge states are robust to the gain and loss defects in the topologically nontrivial region and can be composed into a pair of PT-symmetric states. As an illustration, we numerically calculate the SSH chain spectra and depict them as functions of θ for N = 100 and ∆ = 1/2, with the gain and loss at sites 50 and 51, in Fig. 2. The real and imaginary parts of the spectra are plotted in blue and red lines, respectively. Two topologically nontrivial edge states appear in the region −π/2 < θ < π/2. Figures 2a and 2b show an entirely real spectrum as a function of θ at γ = 1/4 and 1/2; however, the PT-symmetric SSH chain is nondiagonalizable for γ = 1/2 at θ = ±π, where the coupling inhomogeneity is at its maximum. There the Hamiltonian H decomposes into n 2 × 2 Jordan blocks, which indicates n pairwise coalescences of states. For γ > 1/2, PT-symmetry breaking appears at θ = ±π, where the PT symmetry of all eigenstates breaks simultaneously. In Fig. 2c, we depict the SSH chain spectrum for γ = 3/4. The exact PT-symmetric region is determined by γ = 1 + ∆ cos θ; thus the system is in the exact PT-symmetric phase in the region −2π/3 ≤ θ ≤ 2π/3, while in the other regions all N eigenvalues form n conjugate pairs. The eigenstates with the largest imaginary parts (we refer to absolute values in the comparison) have their highest probabilities localized near the chain center. The inverse participation ratio (IPR, the sum of the fourth power of the wave-function amplitudes, Σ_j |ψ_j|⁴) for an extended state scales with system size as N⁻¹, whereas the IPR of a localized state approaches a constant at large system size. As γ increases, the probabilities become more localized and form bound states at approximately γ ≳ 1. The bound states are attributed to the PT-symmetric non-Hermitian impurities, and their probabilities are localized near the chain center.
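The IPR diagnostic mentioned above is straightforward to evaluate; the sketch below reuses the ssh_pt_hamiltonian helper from the previous snippet and is again only an illustration.

```python
# Inverse participation ratio (IPR) of each right eigenstate. Extended states
# give IPR ~ 1/N; bound states approach an N-independent constant.
import numpy as np

def ipr(H):
    _, V = np.linalg.eig(H)
    V = V / np.linalg.norm(V, axis=0)           # normalise each eigenvector
    return np.sum(np.abs(V) ** 4, axis=0)       # sum_j |psi_j|^4 per state

H = ssh_pt_hamiltonian(100, 0.5, 0.8 * np.pi, 1.0, 50)
print("largest IPR:", ipr(H).max())             # flags the localized states
```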
In Fig. 2d, γ = 1 and the exact PT-symmetric phase shrinks to −π/2 ≤ θ ≤ π/2. In the broken PT-symmetric phase, the two red lines (each twofold degenerate) with the maximum imaginary parts in the regions −π ≤ θ ≤ −0.7π and 0.7π ≤ θ ≤ π correspond to four bound states. Notably, the bound-state eigenvalues change more rapidly than those of the extended states and enter the extended-state band around θ ≈ ±0.7π. As γ increases to γ = 5/4, as shown in Fig. 2e, the exact PT-symmetric phase shrinks to −π/3 ≤ θ ≤ π/3 (indicated by the yellow line in the inset), where the SSH chain has two degenerate zero edge states and two bound states with real eigenvalues. The PT symmetry breaks outside the region −π/3 ≤ θ ≤ π/3. In the regions −π/2 ≤ θ < −π/3 and π/3 < θ ≤ π/2, the SSH chain is in the broken PT-symmetric phase with two topologically nontrivial edge states. There exist two bound states and two edge states; the two bound states have purely imaginary eigenvalues with the largest imaginary parts, exiting the extended-state band at θ ≈ ±π/2 and becoming real for |θ| ≤ π/3. Notably, the two edge states are also purely imaginary but with the smallest imaginary parts, approaching zero as |θ| approaches 0. This is because the weak inhomogeneity for |θ| close to π/2 induces more spreading of the edge states, whereas the strong inhomogeneity for |θ| close to 0 induces more localization of the edge states. In the numerical results, the imaginary parts of the edge states are negligible in the region −0.4π ≤ θ ≤ 0.4π (indicated by the green line in the inset), determined by the coupling inhomogeneity ∆. The two zero edge states are free from the influence of the non-Hermitian defects at (or close to) the chain center. In the regions −π ≤ θ ≤ −π/2 and π/2 ≤ θ ≤ π, there are four bound states with the largest imaginary eigenvalues. In Fig. 2f, the spectrum for γ = 3/2 is plotted. The bound states always have the largest imaginary eigenvalues; the bound-state levels are twofold degenerate outside the region −π/2 < θ < π/2 and nondegenerate within it. For γ > 3/2, the SSH chain spectrum is in the broken PT-symmetric phase at arbitrary θ. The only real-energy states are the two topologically nontrivial edge states in the region −0.4π ≤ θ ≤ 0.4π (indicated by the green line in the inset).
In Figs. 2g and 2h, 3/2 < γ < 2.96, the imaginary parts of the bound states undergo a bifurcation in the topologically trivial regions −π ≤ θ ≤ −π/2 and π/2 ≤ θ ≤ π. After the bifurcation, the eigenvalues of the bound states become purely imaginary, reflected in their real parts being zero. In the real part of the energy spectrum, the zero is fourfold degenerate; it corresponds to the four bound states in the topologically trivial phase, but to two bound states and two edge states in the topologically nontrivial phase. Notably, the fourfold-degenerate zero always exists in the real part of the energy spectrum for γ > 2.96, where the bound states have purely imaginary eigenvalues. The bifurcation behaviour disappears in the imaginary parts of the bound states; in this situation, one pair of bound states has larger imaginary eigenvalues than the other pair at arbitrary θ.
In the situation of odd n (Fig. 1b), the coupling between the gain and loss is 1 − ∆ cos θ; by contrast, this coupling is 1 + ∆ cos θ in the even n case. The spectrum structures are approximately the same as in the even n case shown in Fig. 2, but shifted by π in the parameter θ; two zero edge states still exist in the topologically nontrivial phase −π/2 < θ < π/2. In our discussion of the odd n case, the couplings at the boundaries are unchanged, 1 − ∆ cos θ.
In the following, we discuss the general situation in which the PT-symmetric defects are inside the SSH chain (0 < m < n) rather than at the SSH chain center. The configuration is illustrated in Figs. 1c and 1d. The topological properties are robust to the single pair of gain and loss defects; however, the PT-symmetric properties change significantly. The edge states, with probabilities localized at the chain boundary, are free from the influence of the non-Hermitian defects when the defects are close to the chain center (m ∼ n) at strong coupling inhomogeneity. By contrast, for gain and loss defects at the chain boundary (m = 1), the PT symmetry of the SSH chain is fragile to the non-Hermitian defects in the presence of topologically nontrivial edge states. This is because the edge-state probabilities are highest at the chain boundary and decay exponentially, so the influence of the defects is greatest on the edge states; any small gain and loss rate breaks the PT symmetry of the SSH chain in the topologically nontrivial region −π/2 < θ < π/2, where the two edge states, forming a conjugate pair with purely imaginary eigenvalues, are the only PT-symmetry-breaking states [36].
As the parameter θ varies, the maximum number of broken energy levels occurs in the topologically trivial phase. This maximum is larger for larger m: it equals 2m + 2 for odd m ≤ n − 2 in the topologically trivial phase but 2m in the topologically nontrivial phase. By contrast, the maximum number of broken energy levels is 2m for m = n − 1 and m = n in both the topologically trivial and nontrivial phases. When m = n, all energy levels break simultaneously, as shown in Figs. 2c-e. For odd m ≤ n − 2, the maximum number of broken levels in the topologically nontrivial region is two less than that in the topologically trivial region.
For N = 100, in the topologically nontrivial phase at θ = 0, the coupling inhomogeneity is strongest. The maximum number of PT-symmetry-breaking energy levels is 2m for even m < n − 2; for odd m it is 2m − 2 when 25 ≤ m < n − 2 and 2m when m < 25. The two edge states have purely imaginary eigenvalues with |E_ES| < 10⁻¹⁰ for m ≥ 25; as the defects move from the chain boundary toward the center, the edge states can be considered unaffected and their eigenvalues taken as zero for odd m ≥ 25. The critical gain/loss rate is γ_c ≈ 0.07 at m = 25. At weak non-Hermiticity (γ ≪ 1), bound states are absent and two pairs (four) of extended states break first, with equal magnitudes of the imaginary parts of their energies. For n − 2 ≤ m ≤ n, the maximum number of PT-symmetry-breaking energy levels is 2m − 2, and the topologically nontrivial zero edge states remain real valued.
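A count of this kind amounts to tallying eigenvalues with nonzero imaginary parts; a minimal sketch, again reusing the ssh_pt_hamiltonian helper, is given below (the printed counts depend on the chosen γ and θ).

```python
# Counting PT-broken levels: eigenvalues whose imaginary part exceeds a
# numerical tolerance, as the defect location m moves toward the centre.
import numpy as np

def broken_levels(N, delta, theta, gamma, m, tol=1e-8):
    E = np.linalg.eigvals(ssh_pt_hamiltonian(N, delta, theta, gamma, m))
    return int(np.sum(np.abs(E.imag) > tol))

for m in (1, 10, 25, 40, 50):                   # boundary -> centre
    print(m, broken_levels(100, 0.5, 0.0, 0.3, m))
```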
Edge states and bound states. The defects support localized modes, which can induce PT-symmetry breaking [60]. In the SSH chain, the edge states break the PT symmetry if the gain and loss sit on sites with nonzero distribution probabilities in the topologically nontrivial phase; otherwise, the edge states are free from the influence of the gain and loss, and the PT-symmetry phase transition is induced by the bulk states: extended states induce the transition at weak non-Hermiticity for defects near the chain boundary, and bound states induce it at strong non-Hermiticity for defects at the chain center.
The probabilities of the two edge states at γ = 0 decrease in a staggered fashion from the chain boundary [59]; the probability approaches zero on every other site. The probabilities of the two edge states at site m and its P-symmetric position N + 1 − m are both zero for even m; thus, the edge states are unaffected (Figs. 3a and 3c). By contrast, the influence of the defect pair on the edge states is remarkable for odd m, in particular for defects close to the chain boundary (Fig. 3e). The topologically nontrivial edge states are robust to the non-Hermitian defects, but the nonvanishing distribution probabilities at the chain boundary lead to PT-symmetry breaking, and the energies of the two edge states become a conjugate imaginary pair for small gain and loss [37].
At large non-Hermiticity, at most six localized states are found, comprising four bound states and two edge states, as depicted in Fig. 3c. The number of localized states depends on the non-Hermiticity and the defect locations. For large enough gain and loss, all localized-state eigenvalues have zero real parts. In the topologically trivial phase, the localized states are four bound states; the number differs in the topologically nontrivial phase. Figure 3 depicts the edge-state and bound-state probabilities for different defect locations. For m = 1, with the defects at the chain boundary, the two edge states are the only localized states; for 2 ≤ m ≤ n − 1, two edge states and four bound states are found; for m = n, there are four localized states, including two edge states and two bound states.
In Fig. 3a, the localized states in the topologically nontrivial phase with the gain and loss at the center are depicted. They comprise one conjugate pair of bound states and two edge states; the eigenvalues are +3.6494i, −3.6494i, and two degenerate zeros, from top to bottom. In Fig. 3b, the SSH chain is in the topologically trivial phase; the eigenvalues of the two pairs of bound states are +3.2995i, −3.2995i, +0.6062i, and −0.6062i, from top to bottom.
In Fig. 3c, the eigenvalues of the four bound states are +3.2594i, −3.2594i, +0.6136i, and −0.6136i, from top to middle; the other two states at the bottom are the zero edge states. In Fig. 3d, the eigenvalues of the four bound states are the same as in Fig. 3c, but the probability distributions differ slightly. This is because θ = 0 in Fig. 3c but θ = π in Fig. 3d, so the two couplings 1 ± ∆ between each site and its nearest neighbours are interchanged in the two structures. The bound-state probabilities are localized near the defects and decay to zero at the chain boundary; the SSH chain structure at the boundary is therefore unimportant for the four bound states, which are approximately identical in the two cases θ = 0 and θ = π. This conclusion is invalid when the gain and loss defects are close together, where the bound-state probabilities decaying from the defects iγ and −iγ affect each other.
In the case of defects at the chain boundary, the bound states disappear in the topologically nontrivial phase. In Fig. 3e, the two edge-state eigenvalues are +3.9445i and −3.9445i, approaching ±iγ as n → ∞; both edge states are fragile to impurities at the chain boundary. The staggered decay of the edge states disappears when γ is large. In Fig. 3f, the eigenvalues are +3.3384i, −3.3384i, +0.5991i, and −0.5991i. The staggered decay of the state probabilities is clearly seen for the states with small imaginary eigenvalues (in absolute value).
The bound states with positive (negative) imaginary eigenvalues are centered at the gain (loss) site. Of the two pairs of bound states, the probabilities decay faster for the pair with the larger imaginary eigenvalues, whose probability maxima are at the gain and loss sites. The other pair of bound states has smaller imaginary eigenvalues and decays in a staggered way rather than monotonically, and more slowly. The probability maxima of this pair lie at the nearest-neighbour site of the impurity, on the side with the stronger coupling between the impurity and its neighbours. As shown in Figs. 3c and 3d, the stronger couplings 1 + ∆ are between sites 20 (81) and 21 (80) in Fig. 3c and between sites 19 (82) and 20 (81) in Fig. 3d; the pair of bound states with smaller positive (negative) imaginary parts is localized at sites 21 (80) and 19 (82), respectively. By contrast, for the gain and loss at the chain center, the bound states with smaller imaginary eigenvalues vanish (Fig. 3a). The situations differ for defects at the boundary (m = 1) and at the center (m = 50): the dimerized unit with the stronger coupling is incomplete at the boundary and at the center in comparison with the other cases, and the localized states partially vanish accordingly.

FIG. 4. The numerically calculated γ_c as a function of location m. (a) The minimum of γ_c over θ ∈ [−π, π] is depicted, indicating that the PT-symmetric SSH chain spectrum is entirely real for γ < γ_c at arbitrary θ. The blue bars are for odd m, approaching zero for m < 30; the green bars are for even m. (b) γ_c depicted over the full range θ ∈ [−π, π]; the colours from dark blue on the left to dark red on the right represent θ from −π to π, and the green area in the centre corresponds to θ = 0, as indicated by arrows in the upper right corner. Other parameters are N = 100, ∆ = 1/2.
In Fig. 4a, we depict γ_c as a function of the location m. γ_c is maximal at m = n, being 1 − ∆; the minimum γ_c approaches zero for small odd m (gain and loss defects close to the chain boundary). The localized states are non-PT-symmetric, except for the two degenerate zero edge states, which can be composed into PT-symmetric form; real-valued bound states appear when the defects are at the chain center. For even m, the edge states are unaffected; for odd m, the edge states break the PT symmetry when m is small (defects near the chain boundary), where the PT symmetry is fragile to any nonzero non-Hermiticity. When m is large (defects near the chain center), the edge states are still unaffected because their probabilities have decayed to zero at the locations of the defect pair. In Fig. 4a, γ_c no longer approaches 0 for odd m > 30 and increases monotonically with m for m > 40. All this reflects that the influence of the defect pair on the edge states is negligible and that the two edge-state energies remain real and zero. The bound states appear in conjugate pairs and are nondegenerate; their probabilities localize around each impurity, so the PT symmetry is fragile to the bound states. An exception occurs when the defects are at the chain center (m = 50): PT-symmetric bound states can appear in the topologically nontrivial phase for γ < γ_c (in the exact PT-symmetric phase). Figure 4b depicts the contours of γ_c at different locations m as a function of θ over the full range θ ∈ [−π, π]. At m = 1, the maximum of γ_c equals 1 around |θ| = π/2 and changes sharply to zero for |θ| < π/2, where the topologically nontrivial edge states appear, because the edge states are fragile to the on-site non-Hermitian gain and loss. Affected by the edge states, the sharp change of γ_c occurs near |θ| = π/2 at odd m for defects near the chain boundary. The influence of the edge states vanishes for defects near the chain center, where the bulk states induce the PT-symmetry phase transition. Notably, γ_c increases dramatically at m = 50 in comparison with the other cases: it grows from 1 − ∆ = 1/2 at |θ| = π to 1 + ∆ = 3/2 at |θ| = 0. This large PT transition threshold implies that PT-symmetric bound states may appear.
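The threshold γ_c itself can be estimated by bisection on the smallest γ that produces a complex eigenvalue; the sketch below (reusing the ssh_pt_hamiltonian helper) illustrates such a scan and is not the authors' implementation.

```python
# Bisection estimate of the PT transition threshold gamma_c at fixed theta.
import numpy as np

def gamma_c(N, delta, theta, m, hi=3.0, tol=1e-4):
    def broken(g):
        E = np.linalg.eigvals(ssh_pt_hamiltonian(N, delta, theta, g, m))
        return np.abs(E.imag).max() > 1e-8
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if broken(mid):
            hi = mid                            # threshold lies below mid
        else:
            lo = mid                            # still in the exact phase
    return 0.5 * (lo + hi)

# Defects at the chain centre, strongest inhomogeneity (theta = 0):
print(gamma_c(100, 0.5, 0.0, 50))               # expected near 1 + delta
```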
The PT-symmetric bound states. In a large-N system, the bound states located at the chain center have amplitudes that decay to zero at the chain boundaries. At m = n, the bound states can be calculated analytically. The eigenenergy is E = (t₁χ + t₂) cos φ, where sin φ = −γ/t₂ and χ is the decay factor, which depends on the chain configuration and the inhomogeneous couplings. The eigenenergy is real for |γ/t₂| ≤ 1. For even n, t₁ = 1 − ∆ cos θ and t₂ = 1 + ∆ cos θ; for odd n, t₁ = 1 + ∆ cos θ and t₂ = 1 − ∆ cos θ.
Previously, topologically protected PT-symmetric zero modes were demonstrated in non-Hermitian SSH chains that globally possess balanced gain and loss in each dimerized unit cell; there, the topologically protected zero modes are interface states induced by coupling disorder at the SSH chain centre and by the global loss [48, 49, 51]. Those interface states are confined to the passive sites, with vanishing probability distributions on the lossy sites [49], or spread over both sublattices [51]. In our SSH chain with one pair of balanced gain and loss at the centre, the PT-symmetric bound states are different: they are induced by the non-Hermitian local defects and significantly enhance the PT transition threshold. The two bound-state energies are symmetric about zero energy and approach zero as the non-Hermiticity increases. At the PT transition threshold, the bound states coalesce to the PT-symmetric zero mode, which is defective and topologically protected by the band gap [34]. This PT-symmetric zero mode still differs from that found at the interface between two topologically distinct PT-symmetric lattices induced by coupling disorder [48, 49]: the coalesced zero-mode probability vanishes on every other site of the left-half chain and the right-half chain, respectively. The two PT-symmetric bound states are composed of the edge state localized on the right edge n of the left half-chain and the edge state localized on the left edge n + 1 of the right half-chain. When the coupling between the neighbours at the chain center (sites n and n + 1) is the stronger of the inhomogeneous couplings, the topologically protected zero mode appears at γ = max(t₁, t₂). The wave-function contributions of the on-site defects ±iγ and of the coupling between the neighbouring sites n and n + 1 cancel each other, mimicking a free-like boundary except for ψ_n = iψ_{n+1} (ψ_j denotes the wave-function amplitude at site j). The wave-function amplitude decays stepwise as χ^l, where l is the dimerized-unit-cell index; the decay factor is χ = −t₁/t₂, with |χ| < 1, for even n (χ = −t₂/t₁ for odd n). In Fig. 5, the PT-symmetric bound states are depicted in the topologically nontrivial phase at θ = 0. The real parts (upper panels) of the bound states are even functions of position, while the imaginary parts (middle panels) are odd functions of position. In this case (t₁ = 1/2, t₂ = 3/2), the PT transition threshold is at γ = 3/2 = t₂, where the PT-symmetric bound states coalesce and turn into the topologically protected zero mode depicted in Fig. 5a; its probability distribution vanishes on every other site of the left-half chain and the right-half chain, respectively. This topological zero mode differs from that found in the SSH chain with loss in each unit cell, owing to the distinct interface at the chain centre [48, 49].
For |γ/t₂| = 1, the decay factor χ is given by equation (2). For convenience, we set the amplitude at the chain centre to |ψ_{n+1}| = 1 instead of normalizing the wave function.
The wave function of the PT-symmetric bound states at even n is given by equation (3), where n_c = (N + 1)/2; the power exponent [σ_j(j − n_c)] of (−1) in equation (3) denotes the integer part of σ_j(j − n_c), and σ_j is the sign function σ_j = sgn(j − n_c). For odd n, the expression for ψ_j is still valid; however, the bound states are not real valued and the PT-symmetric bound states vanish. The values of φ for the bound states with complex eigenvalues are φ₁ = sin⁻¹(−γ/t₂), φ₂ = −sin⁻¹(−γ/t₂), φ₃ = π − sin⁻¹(−γ/t₂), and φ₄ = −π + sin⁻¹(−γ/t₂); the corresponding decay factors χ and bound-state wave functions can be obtained from equations (2) and (3).
Conclusion.
We have studied a pair of balanced gain and loss defects in a non-Hermitian SSH chain; their influence differs significantly with the defect locations. The PT transition threshold has been investigated: the maximum number of broken energy levels increases as the defects approach the chain center in the broken PT-symmetric phase. For defects at the chain center, all energy levels break the PT symmetry simultaneously in the topologically trivial phase, but the two edge states are free from PT-symmetry breaking in the topologically nontrivial phase. When the defects are near the chain boundaries, the edge states in the topologically nontrivial phase break the PT symmetry if the defects sit on sites with nonzero edge-state distribution probabilities; otherwise the PT-symmetry breaking is caused by the extended states at weak non-Hermiticity or by the bound states at strong non-Hermiticity. The bound-state probabilities are localized at the defects and decay exponentially, and the bound states are thus PT-symmetry breaking; however, PT-symmetric bound states can form when the defects are at the SSH chain center, where the gain and loss are nearest neighbours. The PT transition threshold in this situation therefore increases significantly; it is the largest and equals the weaker inhomogeneous coupling. At the PT transition threshold, the PT-symmetric bound states coalesce into the topologically protected zero mode.
We acknowledge the support of National Natural Science Foundation of China (Grant Nos. 11605094 and 11374163) and the Tianjin Natural Science Foundation (Grant No. 16JCYBJC40800).
|
v3-fos-license
|
2019-05-10T13:05:43.449Z
|
2019-05-08T00:00:00.000
|
148569363
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41467-019-10033-2.pdf",
"pdf_hash": "63bec4a650af6d96d03d57b0b374f8a80f9d478f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43537",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "63bec4a650af6d96d03d57b0b374f8a80f9d478f",
"year": 2019
}
|
pes2o/s2orc
|
Enhancing the performance of pure organic room-temperature phosphorescent luminophores
Once considered the exclusive property of metal complexes, the phenomenon of room-temperature phosphorescence (RTP) has been increasingly realized in pure organic luminophores recently. Using precise molecular design and synthetic approaches to modulate their weak spin–orbit coupling, highly active triplet excitons, and ultrafast deactivation, organic luminophores can be endowed with long-lived and bright RTP characteristics. This has sparked intense explorations into organic luminophores with enhanced RTP features for different applications. This Review discusses the fundamental mechanism of RTP in pure organic luminophores, followed by design principles, enhancement strategies, and formulation methods to achieve highly phosphorescent and long-lived organic RTP luminophores even in aqueous media. The current challenges and future directions of this field are also discussed in the summary and outlook.
derivatives 23,24 , and polyacid derivatives 25,26 , etc., are desirable due to the versatility in their molecular design and engineering.
While organic RTP, which probably originated from small amounts of impurities, was noted as early as the 1930s 27 , observations of RTP from pure organic luminophores were only reported another 30-40 years later [28][29][30][31] . After that, the development of pure organic RTP luminophores was plagued by uncertainty and a long period of inactivity until recently, when a boron difluoride dibenzoylmethane moiety incorporated into poly(lactic acid) (PLA) was observed to display unusual RTP 32 . Since then, a plethora of organic luminophores with long-lived phosphorescence under ambient conditions has been increasingly discovered and realized [6][7][8][9][10] . These notable breakthroughs have sparked active investigations into the rational design and formulation of pure organic luminophores with enhanced RTP performance. Design principles based on halogen bonding [33][34][35] , H-aggregation [36][37][38] , and n-π transitions 39,40 , together with RTP enhancement strategies based on co-crystallization 34,[41][42][43] , rigid-matrix host-guest systems [44][45][46] , structurally modified host-guest systems 47,48 , and dopant-based systems 49 , have brought about the effective realization of enhanced organic RTP. With their special features and advantages, this class of organic luminophores has been actively demonstrated to have immense potential for specific applications, including organic optoelectronics [50][51][52][53][54] , anti-counterfeiting labeling for advanced data security 36,37,46 , highly sensitive sensing 44,45,55 , and high-contrast time-gated in vitro and in vivo phosphorescence biological imaging [56][57][58][59][60] . Intriguingly, for applications such as optoelectronics and data security, pure organic RTP luminophores can be deposited directly on a wide range of substrates, including semiconducting matrices, plastics, glasses, and papers, and utilized without complicated treatments. For biological applications, however, organic RTP luminophores need to be equipped with sufficient aqueous dispersibility and appropriately processed, because the triplet excited state is highly sensitive to oxygen 48 , a molecule that is crucial for physiological functions. As a result, specialized processing strategies are necessary to generate pure organic RTP luminophores with long luminescence lifetimes, good water dispersibility, and excellent biocompatibility for biological applications [57][58][59] . Collectively, this indicates that the effective realization and application of organic RTP luminophores is an integrated process, covering optical characterization, molecular design, organic synthesis, mechanistic study, physical processing, surface modification, and functional evaluation.
Herein, we provide a general overview of the emergence of pure organic RTP luminophores and the rational enhancement of their performance based on structure-property relationships. Specifically, we focus on organic small-molecule-based RTP luminophores because of the more established understanding of their RTP mechanisms, as well as the wider explorations into their molecular design, performance enhancement, and potential applications. In contrast, the more complex organic RTP luminophores, including carbon-dot- and macromolecule-based (i.e., proteins, starch, carbohydrates, and cellulose) RTP luminophores, will not be discussed in detail, as their RTP mechanisms have been less well characterized and understood [61][62][63][64] . In this Review, the fundamental mechanism of phosphorescence, which forms the basis of the molecular design of organic luminophores with persistent RTP, is first introduced. Specifically, the utilization and manipulation of ISC between singlet and triplet states for realizing stable organic molecular RTP are discussed. Keeping these fundamental principles in mind, we further describe the design principles of pure organic RTP luminophores and the various ingenious strategies used to strengthen and maximize their phosphorescence. In particular, we focus on approaches to enhance the ISC rate and lifetime as well as to suppress the non-radiative deactivation pathways of triplet excitons. We next highlight the techniques used to formulate organic RTP luminophores for specific applications, with particular emphasis on the nanocrystallization and nanoencapsulation methods used to process organic RTP luminophores for phosphorescence applications in aqueous media. This Review eventually summarizes and presents our perspectives on the current challenges and potential opportunities in the field.
Mechanism of organic room-temperature phosphorescence

Organic photoluminescent materials have generated considerable interest in recent years due to their tunable luminescent properties, which can be obtained through precise design of molecular structures and control of solid-state intermolecular interactions. When an organic molecule absorbs incident photons, electrons are excited from the ground state to an excited state and holes are simultaneously generated. The resulting electron-hole pairs, bound to each other through electrostatic attraction, are commonly known as excitons.
In general, the emission characteristics of organic photoluminescent molecules are significantly dependent on the electronic transitions between the different states of excitons. By controlling the properties of excitons, such as their configuration, energy level, and lifetime, the emission properties of organic luminescent molecules can be fine-tuned. The numerous possible photophysical pathways taking place between the singlet and triplet excited states in organic molecules are summarized in the Jablonski diagram below (Fig. 1a).
As one of the two radiative photophysical relaxation processes, phosphorescence occurs when there is an ISC from an excited energy state (usually a singlet state) to another state with higher spin multiplicity (usually a triplet state), followed by radiative deactivation to the ground state. In contrast to fluorescence, phosphorescence is inherently slower and possesses a longer lifetime. To yield phosphorescence, it is imperative to have an effective non-radiative ISC between the isoenergetic singlet excited state (S1) and a triplet excited state (Tn). The rate of ISC is an intrinsic feature of an organic molecule and is therefore dependent on its electronic configuration and energy levels. To populate Tn from S1, efficient spin-orbit coupling is essential.
In accordance with El-Sayed's rule, effective spin-orbit coupling and a high ISC rate (kISC) can be achieved in pure organic luminophores if the non-radiative transition from singlet to triplet excited states involves different molecular-orbital configurations, because of the effective overlap of orbitals (Fig. 1b) 39,65 . For instance, ISC can occur from the 1(n,π*) singlet to the 3(π,π*) triplet excited state or from the 1(π,π*) singlet to the 3(n,π*) triplet excited state. However, ISC between singlet and triplet excited states with similar electronic configurations, such as from 1(n,π*) to 3(n,π*) or from 1(π,π*) to 3(π,π*), is not favorable due to minimal orbital overlap, which results in inefficient spin-orbit coupling. Consequently, the presence of n orbitals perpendicular to π orbitals is beneficial for facilitating strong spin-orbit coupling and promoting ISC from singlet to triplet excited states. Furthermore, the electronic configuration of the triplet excited state is a combination of two configurations with varying percentages, αn·3(n,π*) + βπ·3(π,π*), in which αn + βπ = 1; because of this, the rate of phosphorescence relaxation becomes tunable. As a result, molecular design and precise tuning of the triplet excited state with appropriate proportions of hybrid (n,π*) and (π,π*) configurations are important for achieving effective long-lived RTP luminophores 39 .
In addition to mixing singlet and triplet states with different electronic configurations to enhance spin-orbit coupling, the rate of the ISC process from S1 to Tn can be considerably strengthened by introducing a small singlet-triplet splitting energy gap (ΔEST) between these excited states 39 . Separately, the inclusion of heavy atoms, such as platinum, iridium, and halogens (e.g., bromine and iodine), is also beneficial in triggering spin-orbit coupling to strengthen the rate of ISC from S1 to Tn, as well as the rate of radiative relaxation from T1 to S0 33,34,56 . This is necessary for phosphorescence to compete with other photophysical processes, such as fluorescence, internal conversion, non-radiative deactivation from T1 to S0, and oxygen quenching under ambient conditions. In fact, luminophores with high ISC quantum efficiency typically display a fast kISC, which can compete with fluorescence decay and internal conversion in the depopulation of S1. Because of this, another crucial requirement for RTP in pure organic luminophores is a rate of radiative transition (kP) that exceeds the rate of non-radiative transition from the triplet excited state to the ground state (knr). Under ambient conditions, the non-radiative loss typically surpasses the radiative decay 66,67 . At the same time, since ISC is a forbidden process according to the principle of angular momentum conservation, it takes place on a longer time scale and triplet-exciton luminescence has a longer lifetime. With the increase in the luminescence lifetime of triplet excitons, they become highly susceptible to external environmental factors, including heat and oxygen in the air 68 . Consequently, it is extremely challenging to realize bright and highly efficient luminescence from long-lived triplet excitons under ambient conditions, especially in pure organic luminophores. The suppression of non-radiative relaxations and external quenching thus becomes a crucial factor in realizing highly effective and bright RTP in pure organic luminophores 48,67 .
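The competition between these rate constants can be made concrete with a back-of-the-envelope calculation. The sketch below evaluates the standard expressions Φ_ISC = kISC/(kF + kIC + kISC) and Φ_P = Φ_ISC · kP/(kP + knr) with illustrative, not measured, rate constants.

```python
# Phosphorescence quantum yield and lifetime from competing rate constants
# (illustrative values in s^-1, not measurements from any cited compound).
k_F, k_IC, k_ISC = 1e8, 1e7, 5e8    # S1 channels: fluorescence, IC, ISC
k_P, k_nr = 1e1, 1e2                # T1 channels: radiative, non-radiative

phi_isc = k_ISC / (k_F + k_IC + k_ISC)
phi_p = phi_isc * k_P / (k_P + k_nr)
tau_p = 1.0 / (k_P + k_nr)          # observed phosphorescence lifetime

print(f"Phi_ISC = {phi_isc:.2f}, Phi_P = {phi_p:.3f}, "
      f"tau_P = {tau_p * 1e3:.0f} ms")
```

Suppressing knr (for example, through crystallization or a rigid matrix) raises both Φ_P and τ_P, which is exactly the design lever discussed throughout this Review.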
To display persistent RTP, pure organic luminophores also need a slow T1 decay rate. One of the most promising ways to achieve this is to stabilize the lowest triplet excited state T1 by forming numerous lower-lying energy states with suppressed non-radiative decay in H-aggregates 36. As the number of such lower-energy triplet excited states (T1*) increases, the decay rate from T1 to S0 decreases considerably. This is because the newly formed triplet excited states can function as emission-forbidden energy-trapping states, which ultimately leads to the generation of long-lived persistent organic RTP.
Altogether, to achieve pure organic RTP luminophores with both high phosphorescence quantum yield and persistent phosphorescence lifetime, several factors need to be fulfilled. These include: (1) efficient spin flipping through the ISC process from S1 to Tn, (2) a low non-radiative relaxation rate to promote phosphorescence from T1 to S0, and (3) a slow decay of T1 to realize long-lived phosphorescence. Various rational design principles and enhancement strategies have recently been geared towards meeting these requirements; they are discussed in the following sections.
Design of pure organic room-temperature phosphorescent luminophores
Tremendous efforts have been channeled towards formulating rational rules for designing bright and long-lived pure organic RTP luminophores in recent years. In general, this unique class of organic luminophores can be designed systematically through major strategies based on halogen bonding [33][34][35], H-aggregation [36][37][38], and n-π transition 39,40 (Fig. 2). Collectively, these strategies aim to strengthen the ISC transitions from the singlet to triplet excited states while simultaneously mitigating the prominent non-radiative relaxations of the triplet excited states.
One of the earliest attempts at realizing persistent organic RTP luminophores relied on the crystallization-induced restriction of intramolecular motions (Fig. 2a-c) 33. In this study, crystals of a series of organic luminophores based on benzophenone and its halogenated derivatives, as well as methyl 4-bromobenzoate (MBB), were prepared (Fig. 2a). All luminogens were generally non-emissive in solution due to annihilation of triplet excitons by active intramolecular rotations and vibrations. However, in the crystalline state, both crystal lattices and intermolecular interactions confined the intramolecular motions, resulting in bright RTP under ultraviolet (UV) irradiation at room temperature (Fig. 2b). While this study largely focused on the effect of crystallization, considerable attention has also been paid to the role of halogen bonding, one of the intermolecular interactions central to the RTP emission of organic crystals. Halogen bonding is generally a weak non-covalent interaction between a nucleophilic atom or a negatively charged ion and a positively polarized halogen atom. Owing to its directionality, this type of bonding has become increasingly important in crystal engineering. Heavy halogen atoms, such as bromine and iodine, are able to introduce a heavy atom effect when bonded with luminophores to strengthen the efficiency of the ISC process between singlet and triplet states. A closer look at the molecular packing of the MBB crystal structure revealed the locking and stabilization of its conformation by numerous C-H···O and C=O···Br-C intermolecular interactions and halogen bonds, with distances of 2.55 and 3.05 Å, respectively (Fig. 2c). This illustrates the crucial roles of the carbonyl group (C=O) and the heavy atom of organic luminophores in rigidifying their structure to achieve more efficient spin-orbit coupling and enable bright phosphorescence.

[Fig. 1 caption: Fundamental mechanism of the organic room-temperature phosphorescence (RTP) phenomenon. a Jablonski diagram illustrating the different photophysical relaxation processes, particularly the intersystem crossing (ISC) between singlet and triplet states, which forms the basis for phosphorescence of organic luminophores. b Schematic illustration of El-Sayed's rule for ISC and its utilization for controlling the phosphorescence decay rate based on the molecular-orbital hybridization of the lowest triplet states; b adapted with permission from ref. 39.]

[Fig. 2 panel credits: adapted/reproduced with permission from refs. 36 (Copyright (2015) Springer Nature), 37 (Copyright (2017) Wiley-VCH Verlag GmbH & Co.), 39 (Copyright (2016) Cell Press), and 40 (Copyright (2017) Springer Nature).]

The more pronounced effect of halogen bonding in facilitating a directed heavy atom effect to generate organic RTP was reported around the same time (Fig. 2d-f) 34. Here, a small molecule, 2,5-dihexyloxy-4-bromobenzaldehyde (Br6A), was first synthesized (Fig. 2d). When Br6A luminophores were in solution, they did not display any halogen bonding. Under this condition, triplet exciton generation was inefficient, resulting in ineffective RTP. This was evident from the weak fluorescence of Br6A in chloroform, with a lifetime of 0.5 ns and a quantum yield of 0.5%, when excited at 360 nm.
However, Br6A crystals displayed distinct green emission with a phosphorescence lifetime of 5.4 ms and a quantum yield of 2.9% (Fig. 2e). Single-crystal X-ray diffraction (XRD) revealed the close proximity of the carbonyl oxygen and bromine within the molecular packing of the Br6A crystal. In fact, the C=O···Br angle of 126° strongly indicates the presence of a halogen bond. Furthermore, the distance of 2.86 Å between the carbonyl oxygen and bromine atoms is among the shortest reported for halogen bonds. This crystallization-induced halogen bond was most likely the reason for the strong phosphorescence detected from crystalline Br6A, although the heavy atom effect would also play a role (Fig. 2f). More specifically, the strong halogen bonding of the Br6A crystal induced a unique directed heavy atom effect at the triplet generation sites, which could significantly boost triplet generation, activate triplet emission, and suppress fluorescence and vibrational loss to enhance phosphorescence. It was found that Br6A suffered from self-quenching, and therefore its RTP was extremely weak and short-lived. Encouragingly, numerous studies have demonstrated that the phosphorescence quantum efficiency and lifetime of Br6A could be significantly enhanced through various ingenious RTP enhancement strategies 34,41,44.
In addition to halogen bonding, another increasingly explored strategy to design organic RTP luminophores with long lifetimes is the stabilization of triplet excitons through molecular H-aggregation (Fig. 2g-n) 36,37. In one of the more recent studies, a series of carbazole-based luminophores containing N, O, and P atoms was prepared (Fig. 2g). These atoms are able to expedite the spin-forbidden ISC from singlet to triplet excited states 69. Furthermore, planar configurations with well-defined aromatic substituents were incorporated in the molecular design to stimulate parallel alignment of the molecules and facilitate the generation of stable H-aggregates, which could possibly enhance the singlet excited state lifetimes 70. The unique design features demonstrated in this work were largely inspired by the generation of persistent RTP in inorganic luminophores based on charge carrier trapping 71. It was hypothesized that the existence of an energy-trapping state T1* in these organic luminophores could stabilize the triplet excited states for the generation of long-lived organic RTP (Fig. 2h). To test this idea, 4,6-diphenyl-2-carbazolyl-1,3,5-triazine (DPhCzT) was synthesized (Fig. 2i); it showed green RTP with a considerably long phosphorescence lifetime of 1.06 s and a quantum efficiency of 1.25%. First-principles time-dependent density functional theory (TD-DFT) revealed a small energy gap of <0.3 eV between the lowest singlet and triplet excited states in the DPhCzT monomer, which favored the singlet-to-triplet ISC process 72. Moreover, XRD analysis of the single-crystal structure of DPhCzT uncovered the presence of H-aggregates, as the measured angle between the transition dipoles and the interconnecting axis was 80.9°, larger than the critical value of 54.7° differentiating J-aggregation from H-aggregation 36 (Fig. 2j). To demonstrate the general applicability of triplet exciton stabilization through molecular H-aggregation in inducing persistent organic RTP, other pure organic luminophores with N, O, and P atoms, including di(9H-carbazolyl)phenylphosphine (DCzPhP), were synthesized (Fig. 2k). By tailoring its molecular structure, DCzPhP was able to emit bright red RTP (Fig. 2l).
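For context, the 54.7° threshold quoted above is the standard result of the molecular exciton (Kasha) model: for two parallel transition dipoles making an angle θ with the axis connecting the molecular centers, the sign of the dipole-dipole coupling follows

J_{coupling} \propto 1 - 3\cos^2\theta

which changes sign at \cos^2\theta = 1/3, i.e., θ ≈ 54.7°. For θ > 54.7°, as in DPhCzT with θ = 80.9°, the aggregate is H-type and its lower exciton state is optically forbidden, consistent with its proposed role as an emission-forbidden trapping state for long-lived phosphorescence.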
Extending the design concept of triplet exciton stabilization through H-aggregation, a range of organic RTP luminophores with halogen substituents, including (9H-carbazol-9-yl)(4-chlorophenyl)methanone (CPhCz), was synthesized in a more recent study (Fig. 2m) 37. Owing to the synergistic integration of two functional strategies, these aromatic amide derivatives could be excited by visible light to emit organic RTP. More specifically, strong intermolecular interactions in the xy plane redshifted the absorption of the luminophores, while along the z-axis, H-aggregation stabilized the triplet excitons responsible for the long-lived organic RTP. CPhCz had a high phosphorescence efficiency of 8.3% and displayed a long phosphorescence duration of 104 s at a brightness identifiable by the naked eye (i.e., 0.32 mcd m⁻²) (Fig. 2n).
Stabilization of triplet excitons for long-lived organic RTP can also be achieved by controlling the intermolecular interactions in the crystalline state using external stimuli 73. In fact, several recent studies have demonstrated the realization of ultralong organic RTP through UV photoactivation-enabled manipulation of intermolecular interactions 60,74. For example, in one of these studies, photoinduced RTP was realized in 10-phenyl-10H-phenothiazine-5,5-dioxide derivatives through UV light-enhanced π-π interactions of aromatic rings 60,74. In another work, photoactivated dynamic RTP was achieved in a series of organic luminophores with a triazine core, a carbazole unit, and alkoxy chains of different lengths 73. Upon irradiation with UV light for 8 min, the organic luminophores could be activated to exhibit RTP with emission lifetimes increasing from 1.8 to 1330 ms. These luminophores could be deactivated to emit short-lived phosphorescence by leaving them under ambient conditions for about 3 h or by thermal treatment. Separately, instead of attaching alkyl chains onto the triazine core to fine-tune the molecular configuration, flexible alkyl chains can be introduced to link carbazole moieties with heavy atoms to modulate spin-orbit coupling and strengthen the intermolecular heavy atom effect for persistent organic RTP 75. Carbazole-based organic RTP luminophores could thus be designed to exhibit persistent yellow RTP emission with a high quantum yield of 39.5% and a long lifetime of about 200 ms 75. Interestingly, although the strong persistent RTP emission was initially attributed to the carbazole derivatives, additional purification of the same luminophores led to the disappearance of the yellow RTP 76. As such, the small traces of impurities present in the organic compounds were most likely responsible for the observed persistent RTP.
Another major strategy to promote singlet-triplet ISC in order to populate triplet states and achieve long-lived organic RTP is the enhancement of spin-orbit coupling based on the n-π transition (Fig. 2o-u) 39,40. This design concept was well documented in a recent structure-property relationship study, which investigated the effect of molecular orbitals on the phosphorescence wavelength, lifetime, and efficiency 39. Here, a series of benzophenone-based luminophores integrating carbonyl groups bearing nonbonding electrons with different π-conjugated groups was prepared (Fig. 2o). Taking 1-(dibenzo[b,d]furan-2-yl)phenylmethanone (BDBF) as an example, carbonyl groups were incorporated to provide n orbitals, which could enhance spin-orbit coupling to trigger ISC from the excited singlet state S1 to different triplet states Tn. Meanwhile, π-conjugated units were introduced to provide π orbitals and endow the T1 state with 3(π,π*) configuration, which could induce a slow phosphorescence rate and a persistent phosphorescence lifetime. By varying the π-conjugated units, the (n,π*) and (π,π*) molecular orbitals could be mixed to generate a tunable T1 state with distinct 3(π,π*) configuration and energy levels, which in turn enabled the tuning of phosphorescence color, lifetime, and quantum yield. The luminophores prepared in this work were generally not luminescent in either solution or the amorphous state, but became highly emissive when transformed into crystals at room temperature (Fig. 2p). For example, BDBF crystals displayed RTP with a phosphorescence lifetime of 232 ms and an efficiency of 34.5%. First-principles TD-DFT calculations were performed on BDBF to unravel the mechanism of its RTP (Fig. 2q). BDBF possessed a long phosphorescence lifetime because of a high Δαn (defined as αn,S1 − αn,Tn) of 46.3% between S1 and T2 (where αn,S1 and αn,T2 were 79.1% and 32.8%, respectively), indicating an enhanced n-π transition and stronger spin-orbit coupling, together with a high βπ,T1 of 99.7%, indicating the prominent 3(π,π*) configuration of T1.
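As a quick check of the numbers reported for BDBF:

\Delta\alpha_n = \alpha_{n,S_1} - \alpha_{n,T_2} = 79.1\% - 32.8\% = 46.3\%

so nearly half of the n-orbital character is lost in crossing from S1 to T2, consistent with a strongly El-Sayed-allowed ISC channel, while βπ,T1 = 99.7% leaves T1 almost purely 3(π,π*) and hence slowly emitting and long-lived.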
Interestingly, the n-π transition design concept can also be extended to enable white phosphorescence emission from a single pure organic luminophore at room temperature. Pure organic white RTP was realized through intrasystem mixing of dual-emission bands originating from high- and low-lying triplet excitons (Fig. 2r-u) 40. Fundamentally, if there are two radiative relaxations, from both the higher and the lower triplet excited states, dual phosphorescence can be observed 77,78. In this study, four arylphenone derivatives consisting of a heavy halogen atom (i.e., Br, Cl, or F), a carbonyl group, and a π-extended dibenzothiophene subunit, particularly 4-chlorobenzoyldibenzothiophene (ClBDBT), were prepared (Fig. 2r). While these organic molecules were not luminescent in solution, they were capable of displaying dual RTP emission in the crystalline state. The mechanism that enables this unique phenomenon is the combined effect of: (1) enhanced ISC due to the presence of lone pair electrons in the carbonyl group and the heavy halogen atom, with (2) multiple triplet excited states having distinct energy levels and orbital configurations due to the π-extended dibenzothiophene subunit. Intriguingly, upon 365 nm UV irradiation, ClBDBT could emit white phosphorescence, which turned yellow after the removal of the excitation source (Fig. 2t). The crystal structure of ClBDBT showed that the luminophore existed in dimeric units linked by π-π interaction (Fig. 2s). The small dihedral angle of 21.8° between dibenzothiophene and the carbonyl group indicates their strong conjugation within the luminophore. This raises the possibility that (n,π*) from the carbonyl group and (π,π*) could synergistically form hybrid triplet excited states. In fact, theoretical calculations showed that there were two low-lying triplet excited states (T1 and T2) below the lowest excited singlet state (S1), where the lower-energy T1 and the higher-energy T2 are mixed states dominated by (π,π*) and (n,π*) transitions, respectively (Fig. 2u). The (π,π*) transition proceeded from the π-conjugated dibenzothiophene (e.g., the highest occupied molecular orbital, H) to the benzophenone unit (i.e., the lowest unoccupied molecular orbital, L), while the (n,π*) transition exhibited charge transfer from deeper orbitals (e.g., H-2) to L. All this suggests the importance of the coexistence of low- and high-lying triplet excited states with (π,π*) and (n,π*) transitions in the design of organic persistent dual RTP luminophores.
Enhancement of organic room-temperature phosphorescence
For practical applications, it is essential for pure organic RTP luminophores to be bright and long-lived. While a majority of pure organic luminophores display weak and short-lived RTP on the millisecond scale, encouragingly, the RTP performance of organic luminophores can be greatly strengthened. To this end, various approaches capable of suppressing the non-radiative deactivation pathways and reducing the quenching of triplet excitons, as well as enhancing their ISC rate and lifetime, have been increasingly explored. These mainly include co-crystal assembly 34,[41][42][43], rigid matrix host-guest systems [44][45][46], structurally modified host-guest systems 47,48, and dopant-based systems 49.
Co-crystallization. Co-crystallization relies on the synergistic co-assembly of luminophores with host crystals to achieve a directed heavy atom effect, triggering a more effective ISC process to improve RTP (Fig. 3a-g) 34. In one of the pioneering reports demonstrating this enhancement concept, the pure organic luminophore Br6A was synthesized and co-crystallized with its bihalogenated analog host, 2,5-dihexyloxy-1,4-dibromobenzene (Br6), to generate a Br6A/Br6 co-crystal (Fig. 3a) 34. Br6 has a crystal structure and size similar to those of Br6A. Therefore, the presence of Br6 in the co-crystals could minimize the formation of excimers and suppress the self-quenching of Br6A to yield more effective RTP, with a high quantum yield of 55%, almost 20-fold brighter than that of Br6A alone (Fig. 3b). It is interesting to note that the emission of the Br6A/Br6 co-crystals still originated from the pure Br6A crystals, as evidenced by their narrow excitation band, which falls within the absorption spectrum of Br6A but not Br6. XRD analysis of the crystal structures of both Br6A and Br6 revealed similar packing motifs and aromatic ring distances. This suggests that the Br6A luminophore was incorporated into the Br6 host through substitution.
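As a quick arithmetic check on the enhancement reported above, using the 2.9% quantum yield of pure Br6A crystals from the preceding section: 55% / 2.9% ≈ 19, consistent with the stated almost 20-fold increase in brightness for the co-crystal.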
In addition to the co-crystallization of luminophores with their analogous host crystals, organic emitters can be paired with non-analogous host crystals (Fig. 3f-h). In one study reporting such pairing, 1,4-diiodotetrafluorobenzene (1,4-DITFB) and the polyaromatic hydrocarbon naphthalene (Nap) were prepared and co-crystallized to generate Nap-DITFB (Fig. 3f) 42. Here, 1,4-DITFB and Nap served as the halogen bonding donor and the π-type halogen bonding acceptor, respectively. While polyaromatic hydrocarbons are typically good luminescent materials, it is challenging to observe their phosphorescence. The incorporation of 1,4-DITFB as a halogen bonding donor was therefore anticipated to improve the phosphorescence of the co-crystals. Analysis of the crystal structure of Nap-DITFB revealed the presence of C-I···π halogen bonding in the co-crystals (Fig. 3g). The stability of the co-crystal structure was also maintained by additional weak intermolecular interactions, notably C-H···I hydrogen bonding, C-F···F-C contacts, and edge-to-edge π-π stacking. With maximum excitation at 323 nm, Nap-DITFB could emit strong green phosphorescence with a phosphorescence lifetime of 0.067 ms (Fig. 3h). Altogether, the introduction of the heavy-atom donor 1,4-DITFB diluted the solid-state naphthalene concentration to suppress its self-quenching, while the C-I···π halogen and C-H···I hydrogen interactions rigidified the molecular environment, resulting in an improved ISC process and considerably enhanced organic RTP.

[Fig. 3 caption: l Schematic illustration of the mechanism of RTP enhancement based on the rigid matrix host-guest system, where G1 is embedded within PVA and the PVA-PVA hydrogen bonds, the G1-PVA hydrogen bonds, and the G1-G1 halogen bonds collectively contribute to organic RTP enhancement. Panel credits: a and c adapted/reproduced with permission from ref. 41, Copyright (2014) American Chemical Society; b, d, and e adapted with permission from ref. 34, Copyright (2011) Springer Nature; f and h adapted with permission from ref. 42, Copyright (2012) The Royal Society of Chemistry; g reproduced with permission from ref. 6, Copyright (2015) The Royal Society of Chemistry; i and j reproduced with permission from ref. 44, Copyright (2013) American Chemical Society; k and l adapted/reproduced with permission from ref. 45.]
Rigid matrix host-guest system. Although bright organic RTP can be achieved through co-crystallization based on a directed heavy atom effect, the practical applications of the resultant organic crystalline luminophores are still largely hampered by their crystal quality 79. For device fabrication and processing applications, including sensors, solid-state lighting, and organic light-emitting diodes, amorphous solids and polymers are much easier to process than crystalline materials. However, amorphous organic systems with bright RTP are challenging to realize. Rigorous vibrational and diffusional motions, such as α and β transitions, are typically rampant in the amorphous phase under ambient conditions 45. These unavoidable phenomena in amorphous matrices favor the vibrational loss of triplet excitons of organic luminophores, causing significant quenching of their phosphorescence 80,81. As such, suppressing the vibrational and diffusional motions in an amorphous environment is extremely important, as these motions compete with phosphorescence decay.
To address this issue, increasing activity has been geared towards confining pure organic RTP luminophores within carefully chosen rigid amorphous polymer matrices with minimal amorphous phase relaxations, such as poly(methyl methacrylate) (PMMA) and polyvinyl alcohol (PVA) (Fig. 3i-l) 44,45. With this notable feature, these polymer matrices were anticipated to mitigate vibrational triplet decay, enabling emissive triplet relaxation and boosting the overall phosphorescence efficiency. Taking the luminophore Br6A as an example, it could be embedded within PMMA to improve its phosphorescence brightness and efficiency (Fig. 3i-j) 44. Specifically, Br6A was embedded within a series of PMMA matrices with different tacticity, i.e., atactic PMMA (aPMMA), isotactic PMMA (iPMMA), and syndiotactic PMMA (sPMMA) (Fig. 3i), and their RTP emission intensities were evaluated. Among all samples, the iPMMA-embedded Br6A showed the brightest phosphorescence, with a phosphorescence quantum efficiency of up to 7.5% (Fig. 3j). This was attributed to the high isotacticity and low β-relaxation characteristic of iPMMA. Generally, amorphous PMMA with different tacticity arrangements displays varying degrees of β-relaxation, which influences the vibrational loss of triplet excited states and the phosphorescence efficiency. In fact, the degree of β-relaxation of amorphous PMMA decreases with decreasing syndiotacticity but increases with decreasing isotacticity. Therefore, with increasing isotacticity in iPMMA, its β-relaxation decreased correspondingly, and the phosphorescence brightness and quantum efficiency of the embedded Br6A were significantly improved.
The suppression of both vibrational and diffusional motions in an amorphous environment under ambient conditions can also be rationally achieved by introducing active non-covalent intermolecular interactions in the matrices 45,46. For instance, a recent study demonstrated this concept by relying on strong halogen and hydrogen bonding between an amorphous polymer matrix and the embedded organic luminophore to effectively minimize vibrational dissipation and enable bright phosphorescence (Fig. 3k-l) 45. In this work, an organic luminophore, 2,2'-(2-bromo-5-formyl-1,4-phenylene)bis(oxy)diacetic acid (G1), with a bromoaldehyde core and carboxylic acid side chains, was rationally embedded within amorphous PVA via hydrogen bonds (Fig. 3k). In principle, the bromoaldehyde core of G1 was expected to induce strong intermolecular halogen bonds between luminophores to suppress their vibrational dissipation and improve ISC (Fig. 3l). In addition, the carboxylic acid side chains of G1 would facilitate the formation of strong intermolecular hydrogen bonds between G1 and PVA to further restrict luminophore vibrational loss. Additionally, the PVA-PVA hydrogen bonding would minimize diffusional motions of the polymer matrix. These intermolecular interactions synergistically reinforced the RTP of G1 via the strengthened restriction of vibrational and diffusional motions. This was clear from the strong RTP exhibited by the G1-PVA hybrid system, whose high phosphorescence quantum efficiency of 24% was approximately three times that of Br6A-iPMMA.
Structurally modified host-guest system. Besides rigid matrix host-guest systems, the RTP of amorphous organic luminophores has been shown to be significantly enhanced via structurally modified host-guest systems (Fig. 4a-e) 47,48. To illustrate this enhancement principle, the rigid macrocyclic compound β-cyclodextrin (β-CD) was recently modified with various phosphorescent moieties to prepare a series of non-crystalline metal-free single-component RTP luminophores, including 6-bromo-2-naphthol-modified β-cyclodextrin (BrNp-β-CD) (Fig. 4a) 47. These amorphous small-molecule CD derivatives exhibited intense RTP in the solid state, with phosphorescence lifetimes between 0.96 and 2.67 ms. This phenomenon was ascribed to the strong intermolecular hydrogen bonding between adjacent CDs, which mitigated the non-radiative vibrational deactivation of the luminophores and protected them from quenchers, leading to efficient RTP emission. β-CD generally has a cavity that is known to host a plethora of guest molecules for the generation of host-guest complexes. As a proof of concept, a water-soluble fluorescent coumarin derivative, (1s,3s)-N-(4-((2-oxo-2H-chromen-7-yl)oxy)butyl)adamantan-1-aminium chloride (AC), was introduced as a guest molecule into the cavity of the BrNp-β-CD host to generate a structurally modified host-guest complex, AC@BrNp-β-CD (Fig. 4b). The supramolecular system possessed dual-emission properties originating from the individual components AC and BrNp-β-CD and was capable of emitting multicolor luminescence, including white-light emission when excited at 295 nm. Intriguingly, this emission color could be manipulated by tuning the molar ratio of AC to BrNp-β-CD, or the excitation wavelength.
In addition to host molecules, modification can also be applied to guest molecules to reinforce the overall RTP performance. One of the earliest studies in this area formulated a structurally modified host-guest system based on deuterated guest molecules (Fig. 4c-e) 48. Specifically, unique metal-free organic host-guest RTP systems were achieved through the synergistic integration of a secondary amino-substituted, deuterated aromatic hydrocarbon as the phosphorescent guest with a hydroxyl steroidal compound as the host matrix (Fig. 4c). The two individual components of this amorphous host-guest system played crucial roles in affording long-lived RTP emission. In particular, the amorphous hydroxyl steroidal compounds used as hosts, such as β-estradiol, possessed high rigidity and oxygen-barrier characteristics, which could heavily suppress the quenching of long-lived triplet excitons. On the other hand, the aromatic hydrocarbons employed as guests were highly deuterated and secondary amino-substituted to minimize their non-radiative relaxations and enhance ISC. Interestingly, the advantage of highly deuterated guests was particularly pronounced in one of the characterization studies with β-estradiol as the host matrix (Fig. 4d). Various deuterated secondary amino-substituted aromatic hydrocarbon guests and their undeuterated counterparts were synthesized and combined with β-estradiol to generate a series of structurally modified host-guest systems. Over a range of temperatures from 111 to 500 K, deuteration of the aromatic hydrocarbons was noted to reduce both the non-radiative triplet deactivation rate (knr) and the triplet quenching rate (kq), which is crucial for achieving long-lived triplet excitons and effective phosphorescence emission. Crucially, by leveraging this RTP enhancement concept, persistent phosphorescence with a lifetime of >1 s and a quantum yield of >10% could be realized for a pure organic amorphous host-guest complex under ambient conditions (Fig. 4e).
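The effect of deuteration can be summarized with the usual expression for the observed triplet lifetime (a general kinetic relation, not a formula taken from ref. 48):

\tau_P = \frac{1}{k_P + k_{nr} + k_q}

so lowering knr (by replacing high-frequency C-H oscillators with lower-frequency C-D ones) and kq directly lengthens τP and, for a fixed kP, raises the fraction of triplets that decay radiatively.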
One of the most exciting breakthroughs in the enhancement of RTP of pure organic luminophores was reported most recently, in which organic long persistent luminescence was realized based on a dopant-based host-guest system (Fig. 4f, g) 49. This work was largely inspired by the common use of mixtures of electron-donating and electron-accepting molecules to generate charge-separated states in organic photovoltaics 82,83. Leveraging a simple mixture of two organic molecules, N,N,N',N'-tetramethylbenzidine (TMB) and 2,8-bis(diphenylphosphoryl)dibenzo[b,d]thiophene (PPT), serving as the electron-donating and electron-accepting molecules, respectively, a TMB/PPT amorphous film with a remarkable phosphorescence lifetime of >1 h was demonstrated (Fig. 4f). The unique features of this long-lifetime luminescence enhancement principle lie in the generation of long-lived intermediate charge-separated states and their gradual recombination, which proceeded in sequential steps (Fig. 4g (i-v)). Firstly, upon photoexcitation, electrons transferred from the highest occupied molecular orbital of TMB to that of PPT, resulting in the creation of intermediate charge-transfer states between PPT and TMB (Fig. 4g (i) and Fig. 4g (ii)). Next, through charge hopping among the PPT molecules, the PPT radical anions would diffuse away from the TMB radical cations, leading to the formation of stable charge-separated states (Fig. 4g (iii)). The generation of these intermediate persistent charge-separated states was essential to retard charge recombination in order to realize long-lived emission. The subsequent charge recombination of the TMB radical cations and PPT radical anions (Fig. 4g (iv)) would then produce exciplexes (i.e., 25% singlet exciplexes and 75% triplet exciplexes) (Fig. 4g (v)). As the photogenerated radical anions and cations could maintain their energy and accumulate in the TMB/PPT blend, the exciplex emission could last for an extended period after the removal of photoexcitation. Interestingly, reverse ISC through thermal activation could occur in this exciplex system due to the existence of a small energy gap between S1 and T1. It is important to highlight that the duration of the persistent phosphorescence emission was strongly dependent on several factors, such as the excitation duration and power, the sample temperature, and the concentration of the TMB dopant. For instance, to obtain a long phosphorescence lifetime, a low TMB concentration was needed, as this ensured a large distance between TMB and PPT, which in turn maintained a low recombination probability of PPT radical anions with TMB radical cations. Overall, the keys to achieving long-lived and bright RTP emission lie in the enhancement of the ISC process coupled with the heavy suppression of non-radiative deactivation pathways of triplets.

[Fig. 4 caption: g Schematic illustration of the mechanism of RTP enhancement based on the dopant system, where charge-transfer states are formed during photoexcitation (i and ii), followed by the formation of charge-separated states (iii) and the eventual exciplex emission (iv and v). Panel credits: a and b adapted with permission from ref. 47, Copyright (2018) American Chemical Society; c, d, and e adapted with permission from ref. 48, Copyright (2013) Wiley-VCH Verlag GmbH & Co.; f and g reproduced with permission from ref. 49, Copyright (2017) Springer Nature.]
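The 25%/75% singlet/triplet split noted above follows from simple spin statistics: recombination of a spin-uncorrelated electron-hole pair samples four spin microstates, one singlet and three triplet, so

\text{singlet} : \text{triplet} = 1 : 3 = 25\% : 75\%.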
Therefore, by satisfying these crucial requirements, the functions and performance of pure organic RTP luminophores can be maximized for different applications through rational design and enhancement strategies.
Advanced processing of pure organic RTP luminophores for biological applications
Although organic luminophores with enhanced RTP have only begun to be rationally designed in the last few years, active explorations have been channeled towards identifying their potential applications. These primarily include information encryption for anti-counterfeiting labeling, organic optoelectronics, ultrahigh-density optical recording, and sensing 7,8. Several key features render organic RTP luminophores attractive for these applications as compared to conventional short-lived organic fluorophores and thermally activated delayed fluorescence molecules: their external stimuli-driven dynamic and reversible luminescence, and their persistent room-temperature luminescence lifetimes extending from milliseconds to a few seconds. Moreover, it is possible to use pure organic RTP luminophores directly for these applications without the need for complex processing. As in-depth discussions on the potential applications of organic RTP luminophores in these areas have been covered in several previous reviews 7,8, we focus here on the emerging biological applications of pure organic RTP luminophores.
Organic RTP luminophores typically need to be specifically formulated for biological applications. This is because the effective utilization of organic RTP luminophores for biological sensing and imaging hinges on stringent requirements, such as long-wavelength and bright RTP emission, suitable size, good water dispersibility, and excellent biocompatibility. As a promising application of organic RTP, biological imaging benefits from the utilization of organic luminophores with long-wavelength luminescence. Because of this, various organic RTP luminophores with long-wavelength (e.g., red) emission have been increasingly developed through ingenious approaches. These include utilizing heteroaromatic molecules with red emission (e.g., benzo[2,1,3]thiadiazoles) 21, expanding π-conjugated structures 34, and introducing appropriate red-emitter dopants to facilitate exciplex-dopant energy transfer and hence redshift the emission wavelength 84.
Apart from specific strategies for designing pure organic RTP luminophores with long-wavelength emission, there is a crucial need to process these luminophores so that they remain sufficiently emissive in biological environments. This is because, when organic RTP luminophores are in contact with oxygen in aqueous media, their triplet excited states can be easily quenched 48. Encouragingly, recent years have seen the emergence of various strategies to address this issue. Of particular interest are the nanocrystallization of amorphous phosphorescent aggregates 57 and the polymer-assisted nanoencapsulation of phosphorescent luminophores 37,[58][59][60].
Nanocrystallization. Organic RTP luminophores are highly attractive for biological applications, such as phosphorescence and lifetime imaging. This is because, with long lifetimes extending from milliseconds to a few seconds, organic RTP luminophores can minimize interference from biological tissue autofluorescence and background signals, which typically last only up to several nanoseconds. At the same time, pure organic RTP luminophores reduce the need for prolonged external illumination and possess excellent biocompatibility. Although organic RTP crystals possess such attractive features, there have been few successful attempts to use them for biological applications. This has been attributed to several factors. To date, a majority of the demonstrated bright pure organic RTP luminophores possess short-wavelength emission. Many current strategies for yielding bright organic RTP are not compatible with biological applications because additional molecules or matrices are deliberately incorporated during the synthesis of these compounds. Unfortunately, the presence of these additives may interfere with biological systems and compromise the biocompatibility of the eventual organic RTP luminophores. Meanwhile, for those organic compounds exhibiting crystallization-induced phosphorescence, their sizes are typically too large for biological applications.
To solve these problems, our group has recently developed a unique nanocrystallization strategy (Fig. 5) 57. As shown in Fig. 5a, the nanocrystallization strategy relies on three primary processes, i.e., precipitation, seeding, and ultrasonication, to maximize the nucleation rate of crystals while simultaneously minimizing their growth rate 85. Amorphous nanoaggregates are initially formed through injection of an organic compound dissolved in a good solvent (e.g., tetrahydrofuran) into its antisolvent (e.g., water). Once crystal seeds of the organic compound are introduced to direct and induce crystallization, the suspension is subjected to ultrasonication to promote detachment of the crystals from their seeds; the detached crystals serve as new seeds that accelerate the nucleation rate and the formation of nanocrystals. Interestingly, the crystal size can be fine-tuned simply by varying the ratio of the antisolvent to the good solvent. The conversion of amorphous nanoaggregates into nanocrystals is typically accompanied by changes in optical properties. The nanocrystallization method is widely applicable to a range of organic molecules with various chemical structures and sizes.
To realize nanocrystals with bright long-wavelength phosphorescence for biological applications, we designed a new RTP molecule, (4-(4-(9H-carbazol-9-yl)butoxy)phenyl)(4-bromophenyl)methanone (C-C4-Br), by adding a butoxy spacer between the carbazole and 4-bromobenzophenone groups to spatially separate them (Fig. 5b). This soft spacer was incorporated to enhance the overall heavy halogen atom effect by strengthening the interaction between the carbazolyl plane of one molecule and the bromine atom of the neighboring molecule. Following the steps shown in Fig. 5a, C-C4-Br nanocrystals with rod-like morphology and a size of 180 nm were synthesized (Fig. 5c). Along with the generation of the C-C4-Br nanocrystals, their RTP emission was assessed (Fig. 5d). Of the three specimens under examination, i.e., C-C4-Br nanoaggregates, C-C4-Br nanocrystals, and fluorescein, only the nanocrystals were emissive and observable under all illumination conditions, in the presence and absence of visible light and UV irradiation. Both the nanocrystals and the nanoaggregates of C-C4-Br were then used for in vitro phosphorescence imaging of breast cancer cells (Fig. 5e). No photoluminescence could be detected from the cancer cells treated with the nanoaggregates. In contrast, the nanocrystal-treated cells displayed bright red phosphorescence emission. Furthermore, the phosphorescence lifetime of the C-C4-Br nanocrystals was 0.14 s and could be utilized to clearly differentiate the nanocrystal-labeled cells from fluorescein dye interference and background autofluorescence. The nanocrystals also displayed excellent biocompatibility. All this clearly illustrates the effective cellular internalization and highly emissive phosphorescence of the nanocrystals and, most importantly, the superiority of nanocrystalline RTP luminophores over their amorphous counterparts for biological applications.
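As a minimal numerical sketch of why such a lifetime suppresses background so effectively, the snippet below compares the fraction of signal surviving a short detection gate for the 0.14 s nanocrystal phosphorescence reported above and for nanosecond-scale autofluorescence; the ~5 ns autofluorescence lifetime and the 1 ms gate delay are illustrative assumptions, not values from the study:

import numpy as np

# Illustrative mono-exponential decays for time-gated detection.
tau_phos = 0.14   # s, C-C4-Br nanocrystal phosphorescence lifetime (from the text)
tau_auto = 5e-9   # s, assumed typical tissue autofluorescence lifetime

def surviving_fraction(tau, t_delay):
    # Fraction of an exponentially decaying population remaining after t_delay.
    return np.exp(-t_delay / tau)

t_gate = 1e-3     # s, assumed 1 ms delay between excitation cut-off and readout
print(f"Phosphorescence remaining: {surviving_fraction(tau_phos, t_gate):.3f}")
print(f"Autofluorescence remaining: {surviving_fraction(tau_auto, t_gate):.1e}")
# ~99% of the phosphorescence survives the gate, while the nanosecond
# background has decayed through ~200,000 lifetimes and is effectively zero.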
Nanoencapsulation. Further to the top-down nanocrystallization approach, bottom-up approaches such as nanoencapsulation have also been used to process organic RTP luminophores for biological applications 37,[58][59][60] (Fig. 6). In this approach, pure organic RTP luminophores are generally wrapped within an amphiphilic shell to endow the eventual structures with excellent water dispersibility and good cellular uptake. For example, pure organic RTP nanocrystals of BDBF (Fig. 2o) have recently been encapsulated within an amphiphilic saponin with a unique permeabilization property for improved intracellular delivery and imaging of live cancer cells (Fig. 6a, b) 58. The saponin-encapsulated BDBF nanocrystals were prepared by introducing saponin into BDBF nanocrystals dispersed in phosphate-buffered saline (Fig. 6a). These processed nanocrystals had a uniform size of about 460 nm. To demonstrate the advantage of the saponin-based encapsulation process, the saponin-encapsulated BDBF nanocrystals were used for in vitro phosphorescence imaging of HeLa cells. It was noted that these nanocrystals were able to diffuse through the permeabilized membrane of HeLa cells, stain the cells with high photostability, and maintain persistent lifetimes of up to 100 ms, significantly longer than background autofluorescence (Fig. 6b).
Recent studies have also shown that the nanoencapsulation technique can be used to process pure organic RTP luminophores prepared from both top-down and bottom-up synthetic strategies (Fig. 6c, d) 59,60. For instance, a series of organic RTP luminophores, including DPhCzT (OS1) (Fig. 2i), was prepared through both the top-down transformation of solid crystals into nanoparticles (OSNs-T) and the bottom-up nanoprecipitation into nanoparticles (OSNs-B) 59. An amphiphilic triblock copolymer, PEG-b-PPG-b-PEG (F127), was subsequently utilized to encapsulate OSNs-T and OSNs-B in order to stabilize them and endow them with good aqueous dispersibility (Fig. 6c). Although each pair of nanoparticles contained the same phosphorescent dye and had spherical morphology, the hydrodynamic diameter of all OSNs-B (~20 nm) was notably smaller than that of OSNs-T (~70 to 80 nm). Neither set of organic nanoparticles induced any noticeable cytotoxicity. Interestingly, all three OSNs-T exhibited enhanced phosphorescence compared to their OSNs-B counterparts, possibly due to stronger molecular packing and enhanced stabilization of triplet excitons. In addition, of all the OSNs-T, OSN1-T, which was prepared from DPhCzT, exhibited the strongest and longest phosphorescence, which could be detected using a whole-animal imaging set-up after switching off the external illumination. OSN1-T was then selected for proof-of-concept in vivo long-term phosphorescence imaging (Fig. 6d). Even at a low nanoparticle concentration of 7.5 nM, the phosphorescence of OSN1-T could be clearly observed in the subcutaneous tissue of living mice. Importantly, owing to the mitigation of tissue autofluorescence, OSN1-T enabled highly sensitive imaging of the axillary lymph node in living mice with a signal-to-noise ratio of 40, while fluorescence imaging could not distinguish the lymph node from normal tissue.
Separately, the F127-mediated nanoencapsulation strategy was also used to wrap the organic RTP aggregates of CPhCz (Fig. 2m) to generate water-dispersible RTP nanoparticles for in vitro labeling and phosphorescence imaging of hepatocellular cells 37. A similar approach of encapsulating organic long-lived RTP luminophores, i.e., a specific derivative of 10-phenyl-10H-phenothiazine-5,5-dioxide, with F127 to generate water-dispersible nanoparticles for in vivo phosphorescence imaging has also been demonstrated in a recent study 60. The F127-encapsulated nanoparticles were intradermally administered into living mice, and their ultralong RTP was clearly observed. In short, these studies have demonstrated the general applicability of the nanoencapsulation method in generating dispersible RTP nanoparticles with retained phosphorescence brightness and lifetimes for real-time phosphorescence imaging.
Summary and outlook
Motivated by the rapid advancements in molecular design, formulation, and processing strategies, pure organic long-lived RTP luminophores have been increasingly demonstrated in recent years. Efficient singlet-triplet ISC, a slow triplet excited state decay rate, and suppressed non-radiative relaxation rates, which are essential factors for the realization of pure organic RTP, have been enabled through various molecular design strategies, such as halogen bonding [33][34][35], H-aggregation [36][37][38], and n-π transition 39,40. Concurrently, the emergence and development of various organic RTP enhancement approaches, notably co-crystallization 34,[41][42][43], rigid matrix host-guest systems [44][45][46], structurally modified host-guest systems 47,48, and dopant-based systems 49, have significantly enhanced the performance of pure organic RTP luminophores in terms of phosphorescence quantum efficiency and lifetime.
As an emerging class of luminescent materials, pure organic luminophores with persistent and bright RTP have remarkable properties, such as configurable molecular structures, tunable photoluminescence, long phosphorescence emission lifetimes, and stimuli-responsiveness. Because of these attractive properties, they hold tremendous potential for applications. For instance, the long-lived phosphorescence of organic RTP luminophores, coupled with their stimuli-responsiveness, has been actively explored for applications in sensing and anti-counterfeiting labeling for data security protection. Additionally, the extended luminescence lifetimes of organic RTP luminophores are beneficial to the mitigation of background autofluorescence, which enables highly specific and sensitive biological imaging. With such unique features and potential applications, the value and importance of pure organic RTP luminophores have grown rapidly in the last several years, spurring increasing explorations to uncover all aspects of this unique group of organic materials. Some of these important aspects include: (1) re-investigation and deeper elucidation of the fundamental mechanisms of a range of pure organic RTP luminophores (i.e., organic small molecules, macromolecules, and carbon dots), (2) more facile and efficient design of novel and robust pure organic RTP luminophores with high brightness and long lifetime, (3) further enhancement of the phosphorescence features and performance of these organic RTP luminophores (e.g., RTP lifetime, long-wavelength emission, full-color emission, etc.), (4) comprehensive evaluation of the biological effects of pure organic RTP luminophores, and (5) further exploitation of the potential applications of pure organic RTP luminophores.
Firstly, while increasing numbers of studies have highlighted the uniqueness and advantages of pure organic RTP luminophores, it is important to note that the development of this field is still in its infancy. As a result, certain proposed mechanisms underlying the occurrence of organic RTP may warrant deeper investigation or revisiting. For instance, although a large body of literature has demonstrated strong persistent RTP emission from carbazole-based organic compounds in the crystalline state, a correction report has pointed out that this RTP most likely arose from trace impurities rather than from the carbazole compounds themselves 76. This interesting observation has underlined the importance of sample purity before any phosphorescence characterization. As such, some currently well-received principles may require further verification. More efforts are definitely needed to elucidate these RTP enhancement mechanisms thoroughly before the rational design strategies can be tested and established.
Despite the fact that a large number of investigations to date have focused primarily on the RTP mechanisms of organic small molecules, those of other structures, such as carbon dots 61,62 and macromolecules (e.g., proteins, starch, carbohydrates, cellulose, and conjugated polymers) 63,64, have not been widely explored. For example, macromolecules from natural products have been reported to exhibit pronounced long-lived phosphorescence under ambient conditions 63, although the driving mechanisms are still not well understood and await further exploration. Similarly, preliminary studies on carbon dots have demonstrated that these nanomaterials are capable of emitting long-lived phosphorescence under ambient conditions when they are embedded in rigid matrices, such as PVA 86 and silica gel 87. More efforts are consequently essential to unravel the mechanisms, rational design, and enhancement strategies of the long-lived RTP phenomenon in more complex organic materials.
Secondly, leveraging on more comprehensive insights into the mechanisms of a wide range of pure organic RTP luminophores, increasing focus should be placed on developing more facile and efficient design strategies to realize highly robust organic RTP luminophores. Currently, there are still limited types of pure organic RTP luminophores with high brightness and persistent RTP emission. Although this may be partly attributed to the infancy of the field, it is worth mentioning that most of the organic RTP luminophores have been designed through complex approaches. Moreover, these luminophores can only manifest their RTP emission under stringent conditions. All these have inevitably limited the types of available high-performance organic RTP luminophores for practical applications. An increased understanding of pure organic RTP is, therefore, crucial to overcome this limitation as the newly gained insights will be beneficial to the development of a more rational and facile design framework to expand the library of pure organic RTP luminophores.
Thirdly, in addition to a deeper understanding of the fundamental mechanisms and improved designs of pure organic RTP luminophores, significant enhancements to their phosphorescence performance should be pursued in tandem. It is noteworthy that the phosphorescence features of organic RTP luminophores are still not comparable to those of their inorganic counterparts, especially in terms of phosphorescence quantum efficiency and lifetime. Encouragingly, there are clear signs that recent activities have been geared towards bridging the performance gaps between organic and inorganic RTP luminophores. This is evident from one of the most recent demonstrations of a persistent organic dopant-based RTP system with a remarkably long phosphorescence lifetime of ~1 h under ambient conditions 49. With this breakthrough, there is growing confidence that the phosphorescence performance of pure organic RTP luminophores could be pushed to rival that of inorganic luminophores in the near future. Systematic investigations are thus needed to push this performance limit of organic RTP luminophores.
Fourthly, another important aspect of pure organic RTP that requires deeper examinations is the biological effects of organic RTP luminophores. While there has been great excitement for the imminent utilization of organic RTP luminophores for a wide range of applications, further advancement needs to proceed with care. In fact, to fully realize the practical applications of pure organic RTP luminophores, specifically their biological applications, it is imperative to understand their biological characteristics so that these functional nanomaterials can be rationally engineered and used safely. Unfortunately, to date, the information on the toxicological profiles and biocompatibility of pure organic RTP luminophores is still relatively scarce and largely unknown. With the increasing explorations into potential applications of pure organic RTP luminophores, their biological characteristics definitely deserve more attention and investigation.
Lastly, more potential applications of organic RTP luminophores need to be exploited and developed. In fact, it is essential to identify the "killer" applications of pure organic RTP luminophores: those they are capable of revolutionizing and in which they hold competitive advantages over conventional and other similar nanomaterials. Additionally, while many studies have reported promising utilizations of pure organic RTP luminophores, most of these demonstrations have largely remained at an early proof-of-concept stage. More work is required in tandem to move these early-stage demonstrations towards maturation and possible commercialization.
In summary, it remains to be seen if organic RTP luminophores can rival or even surpass their inorganic counterparts and live up to the expectations. However, with a more in-depth understanding of the fundamental mechanisms of the phenomenon of organic RTP coupled with the further development in the rational design and RTP enhancement strategies, we envision that pure organic luminophores with long-lived and bright RTP emission may find potential widespread applications in the near future.
Chronic Infectious Complications of Recreational Urethral Sounding With Retained Foreign Body
The practice of recreational urethral sounding involves the insertion of a foreign body into the urethra, usually for sexual gratification. We present the case of a 62-year-old male with longstanding recurrent urinary tract infections complicated by Staphylococcus epidermidis bacteremia, discitis, and osteomyelitis at the T12-L1 vertebral level associated with a left psoas abscess, secondary to a retained foreign body inserted into his urethra and urinary bladder. He underwent extraction of the foreign body, cystoscopy, and open cystolithotripsy. He received long-term antibiotics and back surgery, resulting in residual chronic back pain. This case illustrates important chronic infectious complications associated with the high-risk sexual practice of urethral sounding.
Introduction
From a medical perspective, a sound is an instrument inserted into bodily passages, most commonly the urethra or uterus, to gently probe, dilate, or relieve strictures [1][2]. Beyond healthcare practice, urethral sounding or urethral "play" refers to the insertion of a foreign body into the urethra, often for sexual or erotic purposes [3][4][5]. Occasionally, these foreign objects may end up within the urinary bladder. They can predispose to infection, injury, or trauma and require prompt treatment. Patients commonly present with acute symptoms including obstruction, hematuria, urinary frequency, dysuria, or pelvic pain [6]. Here, we report the consequences of a chronic intravesical foreign body that led to persistent bacteremia, psoas abscess, and deep spinal infectious complications in a 62-year-old man.
Case Presentation
A 62-year-old male was admitted for evaluation and management of three weeks of progressive back pain radiating to his legs. He had a history of nicotine and amphetamine abuse, hypertension, mood disorder, and neuropathic pain. He denied any lower extremity weakness or bowel or urinary incontinence. Physical examination, including neurologic assessment, was unremarkable. The white blood cell count was normal at 8.6 K/uL (reference range, 4-11), but the inflammatory markers erythrocyte sedimentation rate and C-reactive protein were significantly elevated at 94 mm/h (reference range, 0-15) and 110 mg/L (reference range, 0-8), respectively. MRI of the back revealed abnormal marrow signal and enhancement in the T12-L1 vertebral bodies centered at the T12-L1 disc space, likely secondary to discitis (Figure 1). There was corresponding abnormal paraspinal edema and enhancement with a probable 1.7 cm intramuscular abscess in the left psoas muscle. The abscess was aspirated and grew Staphylococcus epidermidis. The same organism grew in multiple blood cultures. He was placed on IV cefazolin. He also complained of dysuria and hematuria. Further history revealed frequent urinary tract infections over the preceding six months, also with S. epidermidis, for which he had received different courses of oral antibiotics without relief. A CT urogram was performed for the persistent urinary symptoms and identified a tubular, peripherally calcified structure, 1.5 cm in diameter and 10-12 cm long, with tapered distal ends and intermediate internal attenuation, coiled in the urinary bladder (Figure 2). On careful history, he admitted that his girlfriend had inserted a sex toy shaped like a fishing worm into his urethra a few months earlier, but he did not remember whether it had been removed. He underwent cystoscopy and open cystolithotripsy, by which the foreign body was extracted (Figure 3). The patient was discharged to a skilled nursing facility to complete six weeks of IV antibiotics followed by an additional six weeks of oral cephalexin. Repeat MRI showed destruction of the intervertebral disc between T12 and L1 and paraspinal soft tissue enhancement. He eventually required a T12-L1 corpectomy and posterior instrumented fusion of T9-L3. The patient continued to have significant back pain even after a year of follow-up.

Discussion

One of the most common reasons for the insertion of foreign bodies into the urethra is sexual or erotic in nature [5]. From a sexual context, this is referred to as urethral sounding. Our patient recalled his girlfriend inserting a sex toy into his urethra during sexual intercourse, and it ended up in the urinary bladder. He did not seek consultation for the retained device. However, he was repeatedly treated for urinary tract infections for months afterwards. It was only when he presented with sepsis that the device was identified on imaging and retrieved. Unfortunately, the prolonged bacteremia resulted in discitis, osteomyelitis, psoas abscess, and prolonged debilitation from back pain.
Like our patient, most are hesitant to admit foreign body insertion and may present to a healthcare provider only when symptoms develop or if there are complications [7]. Acutely retained foreign body in the urinary bladder can cause irritation and patients may complain of frequency, dysuria, or pelvic pain. Hematuria is not uncommon especially if there is injury to urethral mucosal membrane or injury to the wall of the bladder. Obstruction and inability to void may occur especially if the urethra is involved [6][7][8][9]. Subacute and long-term intravesical or urethral foreign body may then eventually lead to frequent urinary tract infections, abscesses, calculus formation, urethral diverticuli, or stricture, or fistula formation [5][6][10][11].
Diagnosis of a retained foreign body in the bladder is challenging, especially if patients do not volunteer the history that a foreign body was introduced. Patients may refuse to answer questions on sexual history or avoid a genital examination. Radiologic imaging such as plain X-ray, ultrasound, or CT scan is warranted, especially if the index of suspicion is high. It provides information on the extent, size, and location of the foreign body [5].
For definitive management, endoscopic and minimally invasive techniques should be encouraged. Most intravesical foreign bodies may be removed by this approach. Larger objects, calcified objects, or those irretrievable by the endoscopic approach may require open surgery such as cystotomy [7].
Several previous cases of intravesical foreign objects reported in the literature usually presented with acute to subacute symptoms of bladder pain, irritation, urinary retention, or dysuria [12][13][14][15][16]. Our patient was also treated for recurrent urinary tract infections for over six months before he eventually developed bacteremia and the deep-seated infectious complications of discitis and psoas abscess, which were managed both medically and surgically. This case is unique as it highlights the potential for chronic infections and complications associated with a longstanding intravesical foreign body. Deep infections such as discitis and psoas abscess suggest a prolonged period of bacteremia. The calcifications in the tubular structure found in the bladder also point towards its chronicity. Unfortunately, our patient ended up with chronic back pain as a sequela of foreign body retention and subsequent infection.
Conclusions
A retained foreign body in the urinary bladder as a consequence of urethral sounding may be a rare occurrence. A number of these patients may not necessarily present acutely to the urology office or the emergency room, but rather to primary healthcare providers complaining of recurrent urinary tract infections. A high index of suspicion is warranted to facilitate diagnosis. Long-term complications may include bacteremia resulting in infectious complications such as psoas abscess and discitis.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study. Sanford Health IRB issued approval n/a. No approval necessary for case report. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Packet Classification by Multilevel Cutting of the Classification Space: An Algorithmic-Architectural Solution for IP Packet Classification in Next Generation Networks
Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services for different users based on their quality requirements is increasingly becoming a demanding issue. For this, routers need to have the capability to distinguish and isolate traffic belonging to different flows. This ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification due to their nondeterministic performance. Although content addressable memories (CAMs) are favoured by technology vendors due to their deterministic high lookup rates, they suffer from the problems of high power consumption and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that mixes CAMs with algorithms based on multilevel cutting of the classification space into smaller spaces. The provided solution utilizes the geometrical distribution of rules in the classification space. It provides the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
INTRODUCTION
Traditionally, the Internet provides only a "best-effort" service, treating all packets going to the same destination similarly. However, providing differentiated services for different users based on their quality requirements is increasingly becoming a demanding issue. For this, routers need to have the capability to distinguish and isolate traffic belonging to different flows. This ability to determine the flow each packet belongs to is called packet classification.
Originally, classification came from routing, where the route lookup problem could be seen as a special case of the more general packet classification problem. The primary task of routers is to forward packets from input links to the appropriate output links. In order to do this, Internet routers must consult a route table containing a set of addresses and the output link or next hop for packets destined for each address. The task of resolving the next hop from the destination IP address is commonly referred to as route lookup or IP lookup. The adoption of classless interdomain routing (CIDR) allows the network addresses in route tables to be of any size. This makes entries in the lookup table of variable length, and subsequently, a route lookup requires finding the longest matching prefix in the table for the given destination address. As packet classification emerged from the router's need to classify traffic into different flows, the longest prefix matching techniques used in routing lookup tables formed the origin of packet classification techniques.
If an Internet router is to provide more advanced services than packet forwarding, it must perform flow identification. The process of identifying the packets belonging to a specific flow or group of flows between a source and destination is the task of packet classifiers. Packet classification is searching a table of rules for the highest priority rule or set of rules which match the packet. An example of rules in a 5-tuple classifier is shown in Table 1. The route lookup problem may be seen as a special case of the more general packet classification problem. Applications for quality of service (QoS), security, monitoring, and multimedia communication typically operate on flows, so each packet must be classified in order to assign a flow identifier. Packet classification is needed to facilitate the following: filtering packets for security reasons (Firewalls), delivering packets within certain delay bounds (QoS), enabling premium services, policy-based routing, traffic rate-limiting and policing, traffic shaping, and billing.
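To make the problem concrete, the following is a minimal C++ sketch of a 5-tuple rule table and the naive linear-scan classifier that the solutions discussed below are designed to outperform. All type and field names, and the lower-value-wins priority convention, are illustrative assumptions of this sketch, not the paper's notation:

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// One field of a rule, stored as an inclusive range; exact values and
// prefixes are special cases of ranges (see Section 3).
struct FieldRange {
    uint32_t lo, hi;
    bool matches(uint32_t v) const { return lo <= v && v <= hi; }
};

// A 5-tuple rule: source/destination address, source/destination port, protocol.
struct Rule {
    FieldRange srcAddr, dstAddr, srcPort, dstPort, proto;
    int priority;  // lower value = higher priority (an assumption of this sketch)
};

struct PacketHeader {
    uint32_t srcAddr, dstAddr, srcPort, dstPort, proto;
};

// Baseline classifier: linear scan for the highest-priority matching rule.
// O(N) per packet, which is exactly the cost the tree/CAM approaches avoid.
std::optional<int> classify(const std::vector<Rule>& rules, const PacketHeader& p) {
    std::optional<int> best;
    for (const Rule& r : rules) {
        if (r.srcAddr.matches(p.srcAddr) && r.dstAddr.matches(p.dstAddr) &&
            r.srcPort.matches(p.srcPort) && r.dstPort.matches(p.dstPort) &&
            r.proto.matches(p.proto)) {
            int idx = static_cast<int>(&r - rules.data());
            if (!best || rules[*best].priority > r.priority) best = idx;
        }
    }
    return best;  // index of the matching rule, or empty if none
}
```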
In the near future, 40 gigabit per second (OC768) line speeds are expected to be achieved. Given the smallest packet size of 40 bytes, in the worst case the router needs to look up packets at a speed of 125 million packets per second. Thus, hardware accelerated architectures are inevitable for next generation network classifiers. Technology vendors are reluctant to support algorithmic solutions that take advantage of the availability of cheap commodity RAM due to their nondeterministic performance [1]. Despite the disadvantages of CAMs, like silicon cost and high power consumption, they are favoured by technology vendors for their deterministic performance. CAMs guarantee that packets will be classified at a fixed rate, which is necessary to provide services at line speed. Algorithmic-architectural solutions that combine both CAMs and algorithms can improve the efficiency and scalability of CAMs without sacrificing deterministic high performance.
In this paper, a new architecture that takes advantage of the geometrical characteristics of rule sets, the HyperCuts packet classification algorithm, and CAMs is proposed. The proposed architecture works by multilevel cutting of the classification space, and it has the deterministic performance of CAMs. As technology providers are reluctant to support algorithmic solutions due to their nondeterministic performance, the proposed architecture is expected to take algorithmic solutions of packet classification based on cutting the classification space from theory to implementation on emerging technologies such as ASICs and FPGAs.
Section 2 of this paper presents the related research and motivation. In Section 3, the packet classification problem is presented from a geometrical point of view. Section 4 presents the proposed architecture. Section 5 provides a detailed analysis of the proposed architecture. Section 6 discusses issues related to the dynamic updates of the architecture, and finally, Section 7 concludes this paper.
RELATED RESEARCH AND MOTIVATION
Packet classification solutions are divided mainly into three categories: algorithmic solutions, content addressable memories (CAMs), and algorithmic-architectural solutions.
Algorithmic solutions
Algorithmic solutions for packet classification take advantage of the availability of cheap commodity memories and they fall into one of the following categories.
Cutting and tries
This is where the classification space is divided into subspaces by forming a decision tree in order to eliminate irrelevant rules until a leaf node is reached which contains the matching rule(s). A description of the algorithms that fall into this category follows.
HiCuts [2] divides the search space into subspaces by cutting across a selected dimension (i.e., source address, destination address, etc.) at each node until the number of rules in the node falls below a predetermined threshold. The selection of the dimension to cut across and the number of cuts is determined by the heuristics of the algorithm.
HyperCuts [3] exploits the idea of HiCuts in dividing the classification space; however, each node in HiCuts represents a hyperplane, while it represents a k-dimensional hypercube in HyperCuts. The aim of that is to reduce the depth of the tree; however, this will make searching at nodes in HyperCuts more complicated.
Modular packet classification [4] organizes the classification space into 3 layers: index jump table, search tree, and rules buckets. As shown in Figure 4, the index jump table divides rules into different groups using some initial prefixes of selected dimensions. The search tree is built by examining a certain number of bits of the rules at a time. The particular bits chosen are any arbitrary unexamined number of bits selected in order to make the tree as balanced as possible and to decrease the replication of rules in buckets.
Set pruning tries [5] are a 2-dimensional classification algorithm that builds a trie for destination address prefixes. For each prefix in the destination address trie, there is an associated trie for the relevant source address prefixes. This algorithm works by finding the longest match in the destination trie for the incoming packet header's destination address field, then searching the associated source trie to find the matching rule.
An FIS tree [6] is a tree that stores a set of segments or ranges. The leaf nodes of the tree correspond to the elementary intervals on the axis. Projections of the d-dimensional rectangles specified by the rules define elementary intervals on the axes; in this case, elementary intervals are formed on the port axis. N rules will define a maximum of I = (2N + 1) elementary intervals on each axis.
Disintegration
This is where the multiple field search problem is disintegrated (decomposed or disassembled) into instances of the single field search problem. Results are then aggregated either in one stage or several stages. A description of the algorithms that fall into this category follows.
Crossproducting [7] divides the search process into separate search processes, one for each field. For example, if we have 5-tuple rules, then the unique values of each field will be combined in a separate list. For an incoming packet, a search process will be carried out for each field in the corresponding list, then the results will be combined in one step to find a match. Crossproducting can provide high search speed with parallel implementation. However, it suffers from exponential memory requirements.
In the recursive flow classification (RFC) algorithm [8], the problem of packet classification can be considered as a mapping process, mapping S packet header bits onto T flow ID bits. The mapping process is determined by R classifier rules represented by T = log2(R) flow ID bits. The lookup operation is performed over several stages by decomposing the lookup field into smaller segments. Each segment is used in parallel to look up an index that is significantly smaller than the initial segment, compressing the TCP/IP field into a smaller field. Over multiple stages, the smaller segment lookups are then combined, yielding the final lookup value.
DCFL [9] was motivated by two observations on real rule sets.
(i) Match Condition Redundancy: for each rule field, the number of unique match conditions specified by rules in the rule set is much less than the number of rules in the rule set.
(ii) Match Set Confinement: for each rule field, the number of unique match conditions that can be matched by a single packet header field is small, even for larger rule sets.
DCFL uses independent search engines for each rule field and then aggregates the results of each field search using a high level of parallelism. DCFL works by labelling unique field values with unique labels and assigning a count value for the number of rules specifying the field value. The count values are updated as rules specifying the corresponding values are added or removed. The data structure needs to be updated when the count value changes from 0 to 1 or from 1 to 0. By concatenating the labels of each field value, a unique value for each rule can be obtained.

Packet classification by bit vectors (BVs) [10] is a classification solution that is based on defining elementary intervals on the axes by projecting from the edges of the d-dimensional rectangles specified by the rules. N rules will create at most 2N + 1 elementary intervals on each axis. For each elementary interval on each axis, there is a corresponding N-bit vector. Each bit position corresponds to a rule in the set of rules. The bit positions corresponding to the rules that overlap the associated elementary interval are set to 1, otherwise to 0. For each dimension d, an independent data structure is built to find the elementary interval corresponding to a given point. Baboescu and Varghese [11] utilized the fact that the maximum number of rules matching a packet is limited in real rule sets, so the bit vectors are sparse. They introduced the aggregated bit vector (ABV) algorithm as an enhancement of BV. ABV divides the N-bit vectors into A chunks. Each chunk is N/A bits in size. Each chunk has an associated bit in an A-bit aggregate bit vector.
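A minimal sketch of the BV lookup step just described, assuming at most 64 rules and two dimensions. The `Axis` structure and the linear interval search are simplifications of this sketch; real implementations locate the elementary interval with binary search or per-dimension tries:

```cpp
#include <bitset>
#include <cstdint>
#include <vector>

// Each elementary interval on an axis carries an N-bit vector (N <= 64 here)
// marking the rules that overlap it; a lookup ANDs one vector per dimension.
struct Axis {
    std::vector<uint32_t> bounds;          // sorted interval upper bounds (non-empty)
    std::vector<std::bitset<64>> vec;      // one bit vector per interval
    const std::bitset<64>& lookup(uint32_t v) const {
        size_t i = 0;
        while (i + 1 < bounds.size() && v > bounds[i]) ++i;  // linear for brevity
        return vec[i];
    }
};

// Set bits of the result identify every rule matching the (px, py) point.
std::bitset<64> bvClassify(const Axis& x, const Axis& y, uint32_t px, uint32_t py) {
    return x.lookup(px) & y.lookup(py);
}
```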
Arranging into tuples
This is where a tuple defines the number of bits in each field of the rules. Hash functions are used to find the matching rules for each set of rules arranged in a tuple. Classification by arranging into tuples [12,13] came from the observation that the number of tuples is much less than the number of rules. A tuple defines the number of bits in each field of the rules. As all rules that map to a particular tuple have the same mask, the required number of bits can be concatenated to construct a hash key from an incoming packet to probe a certain tuple. An exact match technique, such as a hash table, could be used to find the matching rule.
Content addressable memory
CAM [14][15][16][17][18] is a storage array designed to find the location of a particular stored value by comparing the input against all of the values in the storage array simultaneously in 1 clock cycle. In binary CAMs, data comprises only "1" or "0" bits, while in ternary CAMs (TCAMs), data comprises "1", "0", or "*" (do not care) bits. In RAM, each bit is stored in a cell, while in CAM, each bit requires comparison circuitry in addition to the storage cell. This makes CAM very expensive in terms of silicon cost and power dissipation in comparison to RAM.
Similar to RAM, CAM writes data in words in the storage array, but it reads in a different way. In RAM, the data in a specific location (i.e., the content) is read by inputting the address of that location. In CAM, the input to the array is the data to be searched for (i.e., the content), and the output is the location of the match.
In a binary CAM, the content can be made up of bits comprising two states, "0" and "1". In a ternary CAM, a third "do not care" state can also be included. A TCAM stores content as a (value, mask) pair, where value and mask are each W-bit numbers, and W is the width of the value. In addition to the storage cells that are required for the value, a similar number of storage cells are required for the mask. Moreover, the matching circuitry is more complicated than for a binary CAM.
TCAM has the advantage of finding a match in 1 clock cycle; however, it suffers the disadvantages of high silicon cost and high power consumption. Typically, a TCAM cell requires six transistors to store one bit, the same number of transistors to store the mask bit, and four transistors for the match logic. Thus, each TCAM cell requires 16 transistors, which makes the cell 2.7 times larger than the standard SRAM cell [19]. In some architectures, a TCAM cell requires 14 transistors [20,21]. The power consumption for a single TCAM bit is around 3 microwatts per bit [22], compared with 20-30 nanowatts per bit in SRAM (i.e., a factor of ≈100) [23].
In addition to the above mentioned disadvantages, TCAM suffers storage inefficiency when dealing with ranges. Range comparison is required for port numbers. As TCAM does not store ranges, ranges must be converted to prefixes. For example, the range 1-13 (4 bits) will be represented by the following prefixes: 0001, 001*, 01**, 10**, and 110*. The range-to-prefix expansion might result in an expansion factor of 2(W − 1), where W is the field width in bits, that is, 30 for each port field. For the source and destination ports in IPv4, the expansion factor might reach 900 in the worst case.
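The range-to-prefix expansion can be reproduced with a standard recursive split. The sketch below is an illustrative implementation, not taken from the paper, and it yields exactly the five prefixes of the 1-13 example:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Expand an inclusive range [lo, hi] over a w-bit field into ternary
// prefixes, e.g. for w = 4: [1, 13] -> 0001, 001*, 01**, 10**, 110*.
// Worst case 2(w - 1) prefixes per range, as noted above.
void rangeToPrefixes(uint32_t lo, uint32_t hi, int w,
                     std::string acc, std::vector<std::string>& out) {
    if (lo > hi) return;
    uint64_t full = (w == 0) ? 0 : ((1ull << w) - 1);
    if (lo == 0 && hi == full) {           // range covers the whole subspace
        out.push_back(acc + std::string(w, '*'));
        return;
    }
    uint32_t mid = 1u << (w - 1);          // first value whose top bit is 1
    if (hi < mid) {                        // entirely in the lower half
        rangeToPrefixes(lo, hi, w - 1, acc + "0", out);
    } else if (lo >= mid) {                // entirely in the upper half
        rangeToPrefixes(lo - mid, hi - mid, w - 1, acc + "1", out);
    } else {                               // straddles the midpoint: split
        rangeToPrefixes(lo, mid - 1, w, acc, out);
        rangeToPrefixes(mid, hi, w, acc, out);
    }
}

// Usage: std::vector<std::string> p; rangeToPrefixes(1, 13, 4, "", p);
```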
Multimatch classification is required for new network applications such as intrusion detection systems. In [24], the authors presented a solution that produces multimatch classification results with only one TCAM lookup and one SRAM lookup per packet.
In [25], a distributed TCAM scheme that exploits chip-level parallelism is proposed to improve the packet classification throughput. This scheme seamlessly integrates with a range encoding scheme, which solves the range matching problem and ensures a balanced high-throughput performance.
Algorithmic-architectural solutions
These solutions mix algorithms with CAMs in order to achieve a tradeoff between the expensive deterministic performance of CAMs and the nondeterministic performance of algorithms based on cheap commodity RAMs. Under this category, there are two solutions: parallel packet classification (P2C) and label encoded CAM (LECAM).
Parallel packet classification (P2C) [26] exploits parallelism between independent field searches and then encodes the intermediate search results. The authors presented a P2C configuration in which the fields are searched using the balanced routing table search (BARTS) scheme [27] in SRAM, and the multidimensional search is implemented using a TCAM. After constructing the table of ternary match strings for each field, the ternary strings associated with each rule are concatenated and stored in TCAM in order of rule priority. A data structure is constructed for each rule field to return the intermediate bit vector for the elementary interval covering the given packet field. These data structures operate in parallel to generate the search key which will be used to query the TCAM.
LECAM [1] is an algorithmic-architectural solution that aims to have the deterministic performance of CAM by blending DCFL with a modified CAM architecture. Similar to DCFL, LECAM uses independent search engines for each rule field, where the search engines are optimized for the type of match condition. LECAM uses the label encoding technique used in DCFL to exploit the redundancy observed in real rule sets to more efficiently represent the set of stored rules. In DCFL, the results from the search engines are aggregated in a distributed fashion. In LECAM, the search engine results are aggregated using a modified CAM architecture.
At the time of writing this paper, the state-of-the-art in packet classification is LECAM, which mixes the DCFL packet classification algorithm with a modified CAM. LECAM is an algorithmic-architectural solution that utilizes disintegration and exploits the observation about real rule sets that the number of unique values in each rule field is much less than the number of rules. LECAM's performance depends on the high redundancy of unique values in rule fields, which results in poor performance for LECAM if that property does not hold.
However, the number of rules remains much smaller than the classification space (even if it reaches millions of rules), raising the need for packet classification solutions that utilize the scarcity of the number of rules compared to the classification space. Thus, algorithmic-architectural solutions that are based on cutting the classification space into smaller spaces are expected to be suitable for next generation networks.
PACKET CLASSIFICATION FROM A GEOMETRICAL POINT OF VIEW
The packet fields most commonly used in IPv4 packet classification are the 8-bit protocol, 32-bit source address, 32-bit destination address, 16-bit source port, and 16-bit destination port. The "classifier" or "rule database" in a router consists of a finite set of rules, R1, R2, . . ., Rn. Each rule is a combination of k values, one for each header field in the packet. A packet P matches rule Ri if all the packet header fields Pj, j = 1, . . ., k, match the corresponding fields in Ri.
There are three kinds of match: exact match, prefix match, or range match. In an exact match, the header field of the packet must exactly match the rule field. In a prefix match, the rule field is a prefix of the header field, and only that prefix of the header field needs to exactly match the rule field. In a range match, the header values must lie in the range specified by the rule. Range matching is used for specifying port number ranges. Exact match and prefix match can be viewed as special cases of range match. For example, the prefix 11** could be represented as the range 12-15, and the exact value 1100 could be represented as the range 12-12.
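For illustration, a small helper (an assumption of this sketch, not the paper's code) that converts a prefix or exact value into the equivalent inclusive range:

```cpp
#include <cstdint>
#include <utility>

// Convert a prefix of `prefixLen` bits over a `width`-bit field into the
// equivalent inclusive range, e.g. prefix 11** (value 0b1100, prefixLen 2,
// width 4) -> {12, 15}; the exact value 1100 (prefixLen 4) -> {12, 12}.
std::pair<uint32_t, uint32_t> prefixToRange(uint32_t value, int prefixLen, int width) {
    int freeBits = width - prefixLen;                        // wildcarded low bits
    uint32_t mask = (freeBits >= 32) ? ~0u : ((1u << freeBits) - 1);
    return { value & ~mask, (value & ~mask) | mask };        // {lo, hi}
}
```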
According to the above definition, a rule can be considered as a hyperplane in k-dimensional space. Rules in the classifier may overlap; consequently, the classifier is a set of overlapping hyperplanes. A packet is a point in the k-dimensional space. Thus, the packet classification problem is equivalent to finding the set of overlapping hyperplanes that contain the point to be located. This is similar to the point locating problem in computational geometry [28]. The difference is that in the point locating problem, hyperplanes do not overlap, while in packet classification, hyperplanes may overlap. This makes the packet classification problem more complex than the point locating problem. Nonetheless, structures and characteristics of rules in the classifier may be exploited to reduce the complexity of the packet classification problem in order to obtain high-performance packet classification algorithms.
The concept of cutting [29,30] came from computational geometry, where the "divide and conquer" principle is applied to reduce the complexity of searching the classification space to searching smaller subspaces. The fact that the number of rules in a classifier is much smaller than the classification space means that most of the classification space is unoccupied; thus, cutting is a suitable technique to exploit the scarcity of occupied regions in the classification space.
Figures 1 and 2 depict the analogy between the packet classification problem and the point locating problem in computational geometry. The example shows that 2-dimensional classification is similar to finding a point in a two-dimensional space. However, in packet classification, a point could be covered by more than one hyperplane. The example of a two-dimensional classification space could be generalized to 5-tuple packet classification. Figure 3 shows a two-dimensional classifier that has 8 rules. The solid black boxes in the figure represent rules. The classification space is cut into subspaces, where each subspace contains only 1 rule or nothing, that is, a bucket size of 1. The bucket size is the maximum number of rules a subspace can contain with no need to divide it further.
At the first level, the classification space is divided into 4 cuts along the source address dimension and 4 cuts along the destination address dimension. That will result in 16 subspaces (cuts). As shown in Figure 3, all of the rectangles have 1 rule or less, except the rectangle at the intersection of row 3 and column 3, denoted (3,3), and the rectangle at the intersection of row 4 and column 4, denoted (4,4). Thus, these two rectangles require further dividing. (3,3) is divided into two smaller rectangles along the destination address dimension. Those two rectangles contain 1 rule each, so no further dividing is required. (4,4) is divided into two subrectangles along the source address dimension. As the left hand one contains two rules, it is divided further into two subrectangles in which each subrectangle contains only 1 rule.
The classification space depicted in Figure 3 represents a multilevel tree, as shown in Figure 4. In Figure 4, the number in each box represents the number of rules that lie in that subspace (cut). The boxes that have bold frames are nodes to be divided further. The rest of the boxes are leaf nodes. Leaf nodes contain a number of rules less than or equal to a predefined threshold (bucket size), which is 1 in this example.
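A compact sketch of the recursive cutting that produces such a tree, restricted to the two dimensions of Figure 3. The cut-selection heuristics are deliberately simplified here (always 2 cuts per dimension), and a depth cap stands in for the real algorithm's termination safeguards; both are assumptions of this sketch:

```cpp
#include <array>
#include <cstdint>
#include <memory>
#include <vector>

// A rule is a rectangle over (source address, destination address),
// with inclusive bounds per dimension.
struct Rect { uint32_t lo[2], hi[2]; };

struct Node {
    std::vector<std::unique_ptr<Node>> child;  // empty at a leaf
    std::vector<Rect> bucket;                  // rules stored at a leaf
};

// Recursively cut [lo, hi) into 2 x 2 children until at most bucketSize
// rules remain, or a depth cap is reached.
std::unique_ptr<Node> build(std::vector<Rect> rules,
                            std::array<uint32_t, 2> lo, std::array<uint32_t, 2> hi,
                            size_t bucketSize, int depth = 0) {
    auto n = std::make_unique<Node>();
    if (rules.size() <= bucketSize || depth >= 8) {   // leaf node
        n->bucket = std::move(rules);
        return n;
    }
    uint32_t mid0 = lo[0] + (hi[0] - lo[0]) / 2;
    uint32_t mid1 = lo[1] + (hi[1] - lo[1]) / 2;
    for (int cy = 0; cy < 2; ++cy)
        for (int cx = 0; cx < 2; ++cx) {
            std::array<uint32_t, 2> clo = {cx ? mid0 : lo[0], cy ? mid1 : lo[1]};
            std::array<uint32_t, 2> chi = {cx ? hi[0] : mid0, cy ? hi[1] : mid1};
            std::vector<Rect> sub;                    // rules overlapping this child
            for (const Rect& r : rules)
                if (r.lo[0] < chi[0] && r.hi[0] >= clo[0] &&
                    r.lo[1] < chi[1] && r.hi[1] >= clo[1])
                    sub.push_back(r);                 // overlapping rules replicate
            n->child.push_back(build(std::move(sub), clo, chi, bucketSize, depth + 1));
        }
    return n;
}
```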
AN ARCHITECTURE FOR CLASSIFICATION BY MULTILEVEL CUTTING
In this section, an architecture that fulfils the following requirements is proposed.
(i) The number of rules in the classification space is much smaller than the classification space, so packet classification by multilevel cutting of the classification space is an appropriate solution. (ii) The flexibility and reconfigurability of emerging implementation technologies, such as FPGAs, makes them a desirable solution for system designers for network processing.
The proposed architecture
For all the above reasons, the architecture shown in Figure 5 is proposed. This architecture gives the required flexibility in trading off the different components of the architecture; it utilizes multidimensional cutting of the classification space, and it gives the deterministic performance of CAMs. The proposed architecture is controlled by a microprocessor. The flow chart shown in Figure 6 describes how an incoming packet is classified.
An incoming packet will traverse a decision tree until a matching rule is found. The decision tree is composed of nodes, where branching decisions are made. If the number of rules in a node is less than a predefined threshold (bucket size), then this node is called a leaf node, and it will not be divided further. The incoming packet will be compared with the rules stored in the leaf node to find the matching rule. The number of divisions (cuts) and the dimensions to cut along for each node are decided according to the heuristics used for cutting.
The architecture proposed in Figure 5 is composed mainly of 3 sets of tables: the nodes table, the leaf nodes table(s), and the rules table(s). In addition to the tables, the architecture contains control logic, an initial shifts register, comparator(s), CAM, and a priority encoder.
The nodes table consists of a number of entries equal to the number of nodes (including the leaf nodes) in the tree. Each entry is composed of 3 fields: address, type, and shifts. The address field is a pointer to the location of the first node in the subnodes of the current node, if the node is not a leaf node. If it is a leaf node, then the address field is a pointer to the location of the corresponding leaf node in the leaf nodes table. The type field determines whether the node pointed to is a leaf node or not. Its value is either 1 or 0 (1 bit). The shifts field is used when the node is not a leaf node. The width of this field is 18 bits. The shifts field is actually composed of 4 subfields: shifts for the source address (5 bits), shifts for the destination address (5 bits), shifts for the source port (4 bits), and shifts for the destination port (4 bits). The value of the shifts field determines the number of cuts, that is, a value of 3 for the source address shifts means 8 cuts along the source address dimension.
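A possible packing of such an entry is sketched below. The subfield widths are the ones given above, but the bit order inside the word and the address width are assumptions of this sketch:

```cpp
#include <cstdint>

// One nodes-table entry: the 18 shift bits plus the 1-bit type flag are
// packed into the low 19 bits of a word; the address is kept separate here.
struct NodeEntry {
    uint32_t address;   // child-block base (non-leaf) or leaf-node index (leaf)
    uint32_t packed;    // [srcA:5 | dstA:5 | srcP:4 | dstP:4 | type:1]
};

inline uint32_t packShifts(uint32_t srcA, uint32_t dstA,
                           uint32_t srcP, uint32_t dstP, bool leaf) {
    return (srcA & 0x1F) << 14 | (dstA & 0x1F) << 9 |
           (srcP & 0x0F) << 5  | (dstP & 0x0F) << 1 | (leaf ? 1u : 0u);
}

// A shift value s means 2^s cuts along that dimension, e.g. s = 3 -> 8 cuts.
inline uint32_t srcAddrShift(uint32_t packed) { return (packed >> 14) & 0x1F; }
inline bool     isLeaf(uint32_t packed)       { return (packed & 1u) != 0; }
```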
The leaf nodes table(s) contains pointers to the locations of rules stored in the rules tables. The leaf nodes table(s) is/are triggered by the control logic with a signal of width w7, which carries the address of the rule in the leaf node. The width of the entries in the leaf nodes table(s) is log2(number of rules). The width w7 is log2(number of leaf nodes * bucket size/number of leaf nodes table(s)). The purpose of having multiple leaf nodes tables is to speed up comparing rules with incoming packets.
For example, instead of having a leaf nodes table that contains 10 leaf nodes with a bucket size of 8, the leaf nodes table could be split into 2 leaf nodes tables each containing 10 leaf nodes with a bucket size of 4, or 4 leaf nodes tables each containing 10 leaf nodes with a bucket size of 2. That will speed up the comparison process to two rules at a time or four rules at a time, respectively. The cost for that will be more comparators and rules tables, as each leaf nodes table has a corresponding rules table and comparator, provided that the technology used supports such parallelism.
The rules are stored in the rules table(s). The number of entries in this table is the number of rules. Each rule consists of 32 bits each for the source address, destination address, source address mask, and destination address mask. In addition to that, it consists of 16 bits for the source port lower value, 16 bits for the source port upper value, 16 bits for the destination port lower value, and 16 bits for the destination port upper value. Thus, each rule requires 192 bits. The number of rules tables is equal to the number of leaf nodes tables. The output of the rules table is the rule and the rule location, which is the ID of the rule, so the output is of width w9, which is 192 + log2(number of rules).
The function of the logic unit is to generate a pointer of width w2 to the matching node in the nodes table or to generate a pointer of width w7 to the leaf nodes table. w2 is equal to log2(number of nodes including leaf nodes). w7 is equal to log2((number of leaf nodes * bucket size)/number of leaf nodes tables).
The input to the control logic is composed of the incoming packet headers, the initial shifts register, and a signal from the nodes table. The packet header consists of a 32-bit source address, a 32-bit destination address, a 16-bit source port, a 16-bit destination port, and an 8-bit protocol. Thus, the width w1 of the packet header fields is 104 bits for the 5-tuple packet classification. In implementations, usually, the protocol field is ignored as it has very few values, like TCP, UDP, and ICMP. The control logic block gets a signal of width w3 from the nodes table composed of address, type, and shifts. The address field is a pointer to the location of the first node of the subnodes of the current node. The initial shifts for the highest level are stored in the initial shifts register (the location of the first node of the subnodes of the highest level is zero). The width w3 is equal to 18 + 1 + log2(number of nodes including leaf nodes). The control logic uses the incoming packet header with the address and shifts values to point to the current node location. If the current node is a leaf node, then the control logic triggers searching the leaf nodes table; otherwise, the first node address will be set equal to the value read. The control logic keeps iterating until the node pointed to is a leaf node. Each iteration is equivalent to going one level down the tree.
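The iteration can be sketched as follows. The bit-consumption convention (peeling header bits off most-significant first, and assuming the accumulated shifts never exceed a field's width) is an assumption of this sketch, since the paper does not spell it out:

```cpp
#include <cstdint>
#include <vector>

// A decoded nodes-table entry: base address, leaf flag, shifts per dimension.
struct Decoded { uint32_t addr; bool leaf; uint32_t s[4]; };

// Walk the nodes table: at each level, concatenate the next s[d] bits of
// each header field into a child index, then jump to firstChild + index.
uint32_t walk(const std::vector<Decoded>& nodes,
              const uint32_t hdr[4] /* srcAddr, dstAddr, srcPort, dstPort */) {
    const int width[4] = {32, 32, 16, 16};  // field widths in bits
    int used[4] = {0, 0, 0, 0};             // header bits consumed so far
    uint32_t cur = 0;                       // root is entry 0
    while (!nodes[cur].leaf) {
        uint32_t childIdx = 0;
        for (int d = 0; d < 4; ++d) {
            uint32_t s = nodes[cur].s[d];   // s shifts => 2^s cuts this level
            if (s == 0) continue;
            uint32_t bits = (hdr[d] >> (width[d] - used[d] - s)) & ((1u << s) - 1);
            childIdx = (childIdx << s) | bits;
            used[d] += s;
        }
        cur = nodes[cur].addr + childIdx;   // first-child address + offset
    }
    return nodes[cur].addr;                 // pointer into the leaf nodes table
}
```

Each loop iteration corresponds to one access to the nodes table, i.e., one level of the tree, which is why the maximum number of levels bounds the lookup latency.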
CAM could be used as a classifier on its own; however, the silicon cost and high power dissipation make it an undesirable solution. In this architecture, CAM provides a mechanism to guarantee the deterministic performance by controlling the depth of the decision tree (number of levels). By controlling the number of levels, the maximum number of accesses required to reach the leaf node is controlled. If the architecture is pipelined, then the maximum number of levels determines the number of pipeline stages. Moreover, CAM increases the performance of cutting by storing rules that cause high redundancy. Another advantage of CAM is supporting dynamic updates, which is very important for the next generation networks, as rules will be set dynamically by the traffic itself. The output of the CAM or the comparators has width w10 = log2(number of rules), which is the ID of the rule that will be fed to the priority encoder to resolve any conflict if there is more than one matching rule for a given packet header.
In the following sections, the tradeoffs between the components of the architecture will be analyzed. The parameters are as follows: bucket size, leaf nodes table size, nodes table size, and CAM size.
PERFORMANCE ANALYSIS FOR THE PROPOSED ARCHITECTURE
In this analysis, the tradeoffs between the parameters of the architecture are discussed. Testing packet classification solutions remains a difficult task due to the obstacles in obtaining real rule sets and the lack of standard metrics for evaluating packet classification solutions. In order to facilitate packet classification solution testing, the packet classification benchmarking tool presented in [31] was proposed. This packet classification benchmarking tool was based on 12 real rule sets that the author obtained from communications companies and other researchers. The packet classification benchmarking tool gives flexibility in generating any number of rules based on these rule sets. For confidentiality purposes, the real rule sets were hidden, and seed files that represent the real rule sets were provided.
The number of rules in the real rule sets used to build the packet classification benchmarking tool ranges from 68 to 4557.
The flexibility given by the packet classification benchmarking tool in generating an infinite number of rule sets based on the 12 seed files created difficulty in deciding which seed file should be used and how many rules to generate. According to [3,9], real rule sets contain less than 5000 rules. Based on that, the number of rules targeted in our analysis is around 5000.

    For each dimension
        Get the number of unique values
    Get the average of the number of unique values over all dimensions
    Select the dimensions where the number of unique values ≥ average
    Get the number of cuts for each selected dimension

Algorithm 1: Heuristic to select the dimensions to cut along.

In order to generate a synthetic rule set that is close to the real rule sets, the seed file chosen is the one that was based on the real rule set that contains 4557 rules, that is, ACL5 in [31]. The parameters for controlling the smoothness and scope of the benchmarking tool were set to zero to keep the synthetic rule set as close as possible to the real one. The benchmarking tool usually generates fewer rules than the number specified in the command line, due to the replication of some generated rules. The number specified in the command line was, therefore, set to 10000, which resulted in the generation of 5115 rules. It is believed that the number of rules will remain less than 5000 because, in practice, rules are created manually by a system administrator using a standard management tool such as the CiscoWorks VPN/security management solution (VMS) [32] or the Lucent security management server (LSMS) [33].
The flexibility in the parameters of the heuristics and the components of the architecture gives enough flexibility in choosing the most suitable values for a given rule set. There are two heuristics used in building the multilevel decision tree. The first one is to select the dimensions to cut along, and the second one is to select the number of cuts per dimension. These two heuristics were based on what is published in [3]. In our implementation, the heuristic shown in Algorithm 1 was used to select the dimensions to cut along. The heuristic selects the dimensions where the number of unique values is larger than the average number of unique values over all dimensions.
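A direct rendering of Algorithm 1 might look like this; the input layout (one value list per dimension) is an assumption of this sketch:

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Select every dimension whose number of unique rule values is at least
// the average over all dimensions. fieldValues[d] lists the value of
// field d for each rule in the node.
std::vector<int> selectDimensions(const std::vector<std::vector<uint32_t>>& fieldValues) {
    size_t dims = fieldValues.size();
    if (dims == 0) return {};
    std::vector<size_t> unique(dims);
    double avg = 0.0;
    for (size_t d = 0; d < dims; ++d) {
        unique[d] = std::set<uint32_t>(fieldValues[d].begin(),
                                       fieldValues[d].end()).size();
        avg += static_cast<double>(unique[d]);
    }
    avg /= static_cast<double>(dims);
    std::vector<int> selected;
    for (size_t d = 0; d < dims; ++d)
        if (static_cast<double>(unique[d]) >= avg)
            selected.push_back(static_cast<int>(d));
    return selected;
}
```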
For each selected dimension, the heuristic shown in Algorithm 2 is used to select the number of cuts.
This heuristic keeps doubling the number of cuts until the percentage of empty nodes exceeds a predefined percentage. The number of total cuts at each node is limited to a predefined maximum. This number is limited in order to avoid an explosive growth in the number of nodes.
The above two heuristics will be used at each node until the number of rules falls below the predefined bucket size. The focus of this analysis is the tradeoffs between the components of the architecture rather than the parameters of the algorithms themselves. Thus, the maximum number of cuts is kept fixed at 128, and the maximum percentage of empty nodes is kept fixed at 0.9 throughout the analysis. It is believed that 128 and 0.9 will give enough flexibility for the heuristics to keep cutting along a selected dimension, so that most of the resulting subrule sets will have a number of rules less than the bucket size.
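The cut-count heuristic (Algorithm 2) as described can be sketched as follows, with the 0.9 and 128 values above as defaults. `emptySlices` is a hypothetical callback, assumed to partition the node's rules into the given number of slices and return how many slices hold no rule:

```cpp
#include <functional>

// Keep doubling the number of cuts along the chosen dimension until the
// fraction of empty child slices exceeds maxEmpty, or the total number of
// cuts would exceed maxCuts.
int chooseCuts(const std::function<int(int)>& emptySlices,
               double maxEmpty = 0.9, int maxCuts = 128) {
    int cuts = 2;
    while (cuts * 2 <= maxCuts) {
        int next = cuts * 2;
        if (static_cast<double>(emptySlices(next)) / next > maxEmpty) break;
        cuts = next;
    }
    return cuts;
}
```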
The analysis was carried out for bucket sizes 8, 16, 32, 64, 128, and 256. The results targeted are the number of nodes, the number of leaf nodes, and the maximum number of levels. As this architecture must achieve the deterministic performance of CAMs, the number of levels measured is the maximum value (worst case) rather than the average value.
Figures 7, 8, and 9 depict the effect of increasing bucket size on the number of levels, the number of nodes, and the number of leaf nodes. The number of levels represents the number of times the control logic has to access the nodes table to get the location of the leaf node, or the number of pipeline stages if the architecture is pipelined. The number of nodes represents the size of the nodes table. The number of leaf nodes multiplied by the bucket size represents the size of the leaf nodes table.
These graphs are useful because, for a given rule set, the system designer is able to decide what the best bucket size is for the technology that will be used in implementation, according to the limitations of the technology and the parameters of the architecture such as speed and power consumption. In order to illustrate this, two examples are given below.
If the bucket size is 8, then the number of levels is 14, the number of nodes is 5.5 million, and the number of leaf nodes is 2500. To store 5.5 million nodes, off-chip memory is required. The control logic requires accessing the memory 14 times to reach the leaf node (assuming that 1 access to the memory is enough to read the required data about the node). Off-chip QDR SRAM works at 300 MHz (word length of 32 bits) [34], which means that 1 access to memory requires 3.3 nanoseconds. This implies that, by using off-chip SRAM working at a speed of 300 MHz, the control logic requires 46.2 nanoseconds to reach the leaf node, so the classifier cannot classify more than 21.6 million packets per second (calculations are based on the worst case scenario in order to guarantee the deterministic performance).
The other extreme of the bucket size is 256, where the number of levels is 5, the number of nodes is 7625, and the number of leaf nodes is 180. Assume on-chip memory operating at 550 MHz [35], and an implementation technology that supports enough parallelism in searching a leaf node of size 256 for 180 leaf nodes to find the matching rule within the 5 accesses required for the nodes table. Under that assumption, 5 accesses to memory require 10 nanoseconds, which means that the classifier speed cannot exceed 100 million packets per second. For a pipelined implementation, the classifier speed will rise to 500 million packets per second. This number is very promising, provided that finding the matching rule in the leaf node does not require more than 1 access to the memory. Due to technology limitations, in a real implementation, finding the matching rule in the leaf node requires multiple accesses to the memory. This means that the speed of the classifier will drop by a factor equal to the required number of accesses to the memory. Increasing the bucket size comes at the cost of larger memory requirements. As shown in Figure 9, increasing the bucket size from 8 to 256 resulted in decreasing the number of leaf nodes from 2501 to 180, but the overall number of entries in memory = the bucket size * number of leaf nodes. This implies that by increasing the bucket size from 8 to 256, the memory requirements increased from 20008 to 46080 entries. Each entry is composed of 104 bits for the 5 fields and 104 bits for the mask (192 bits if the protocol field is ignored).

Figure 8: The effect on the number of nodes of restricting the maximum depth of the tree (number of levels) to 5 using CAM.

Moreover, increasing the bucket size could result in decreasing the speed of the classifier. This is due to the fact that increasing the bucket size increases the number of accesses to the memory needed to find the matching rule.
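The throughput arithmetic above can be reproduced directly. Note that the paper rounds 3.33 ns to 3.3 ns and 9.1 ns to 10 ns, so the printed figures differ slightly from the quoted 21.6, 100, and 500 million packets per second:

```cpp
#include <cstdio>

// Worst-case lookup rate = 1 / (levels x memory access time); pipelining
// restores one packet per access time. Frequencies are the ones quoted for
// off-chip QDR SRAM (300 MHz) and on-chip RAM (550 MHz).
int main() {
    const double offChipNs = 1e3 / 300.0;   // 3.33 ns per access
    const double onChipNs  = 1e3 / 550.0;   // 1.82 ns per access
    printf("bucket 8,   14 levels: %.1f Mpps\n", 1e3 / (14 * offChipNs)); // ~21.4
    printf("bucket 256,  5 levels: %.1f Mpps\n", 1e3 / (5 * onChipNs));   // ~110
    printf("pipelined (5 stages):  %.1f Mpps\n", 1e3 / onChipNs);         // ~550
    return 0;
}
```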
The system designer should optimize the architecture according to the limitations of the technology used in the implementation, and the preferences of the design.
Controlling the maximum number of levels using CAM
The previous results show that the maximum number of levels decreases as the bucket size increases. However, the bucket size cannot guarantee that the maximum number of levels will not exceed a certain limit. This could be guaranteed by using CAM, as the tree will be trimmed at a certain depth, and rules in the trimmed nodes will be moved to the CAM.
In the following figures, the bucket size will be variable, and the number of levels will be kept constant at 5. Figures 7, 8, and 9 show the effect of using CAM to force the number of levels to be 5. Any number of levels other than 5 could be used, as the results should follow the same trends.
In general, the number of nodes and the number of leaf nodes decrease by using CAM. The effect of using CAM is more significant when the bucket size is small. However, this comes at the cost of using a larger CAM, as shown in Figures 10 and 11. Controlling the maximum number of levels guarantees controlling the maximum number of pipelining stages, or controlling the worst case delay that is caused by accessing the nodes table. The effect of using CAM to control the maximum number of levels on the number of nodes and the number of leaf nodes is more significant for the small bucket sizes like 8 and 16. This is due to the fact that the tree will be trimmed from 14 levels to 5, and rules will be stored in CAM instead of the leaf nodes. The corresponding CAM sizes for bucket sizes 8 and 16 are 75% and 60%, respectively, which means that the architecture could be looked at as a CAM with increased capacity or reduced power consumption. The CAM size approaches zero as the bucket size increases, due to the fact that the number of levels decreases as the bucket size increases. This implies that the silicon cost of the architecture is mainly in the RAMs, as the CAM size approaches zero.
Tackling redundant rules using CAM
Some of the rules in the classifier might be redundant in large numbers of cuts, as shown in Figure 12. The redundant rules could be tackled by storing them in the CAM before building the tree. The heuristic shown in Algorithm 3 is used to select the largest rules (hyperplanes) that cover the largest areas of the classification space, and then move them to the CAM. Figures 13, 14, and 15 show the effect of removing the largest rules (hyperplanes) on the number of nodes, the number of leaf nodes, and the size of the CAM used to control the maximum number of levels (depth of the decision tree) to 5. In this example, the top 10% of the rules (sorted by the largest) will be moved to CAM before building the tree. As shown in Figures 13 and 14, there is a reduction in the number of nodes and in the number of leaf nodes. The percentage of reduction goes down as the bucket size goes up.
Moving the largest 10% of the rules to CAM reduces the rule set size from 5115 to (5115 − 510), but it adds 510 rules to the CAM. Figure 15 shows that there is a reduction in the CAM size used to limit the number of levels; however, moving the 10% of rules to CAM increases the overall CAM size. The effect of that on the number of nodes and the number of leaf nodes is a significant reduction, especially for the small bucket sizes. For example, for a bucket size of 16, the number of nodes went down from 130000 to 50000, a reduction of 60%. That results in a saving of memory, especially if on-chip memory is used. The same applies for leaf nodes, where there is a reduction from 550 to 450 leaf nodes, which results in a saving in the resources used. The saving in resources comes at the cost of an increase in the CAM size. The CAM size went down by 350, from 3100 to 2750. There is not actually a reduction, as 510 rules should be added to the CAM; thus, the CAM size effectively increases by 160. So, there is a tradeoff between the increase in the CAM size and the decrease in the number of nodes and the number of leaf nodes. The effect of moving rules that cover large areas keeps decreasing as the bucket size increases. The reason is that rules that cover large areas cause the cutting heuristic to increase the number of cuts, which results in a large number of nodes.
DYNAMIC UPDATES
The classifier supports dynamic updates if rules are inserted and deleted without the need to rebuild the decision tree.
To explain dynamic updates for packet classification by multilevel cutting of the classification space, consider the decision tree depicted in Figure 16. This classifier has a bucket size of 4, with rules distributed over the buckets as shown in Figure 16.
The classifier shown in Figure 16 has 6 empty locations in the buckets, so the maximum number of rules that can be inserted in the classifier without rebuilding the decision tree is 6. However, if a rule is going to be inserted in Bucket 4, then the tree requires rebuilding, or that rule cannot be inserted in the classifier. This means that the classifier can support inserting any number of rules between 0 and 6, depending on where the rules are going to be inserted. So, without CAM, the classifier can support a low to average rate of dynamic updates. The classifier can support dynamic updates until a condition happens where the number of rules to be inserted in a bucket is more than the number of empty locations in that bucket, such as the following.
(i) A rule is required to be inserted in Bucket 4.
(ii) Four rules are required to be inserted in Bucket 1, or any other conditions that result in overflowing the buckets.
Thus, the probability that the tree requires rebuilding is the probability that an insertion of a rule results in an overflow in one of the buckets. The highest probability is the probability that a rule is inserted in Bucket 4.
Assuming a model in which the inserted rules are randomly distributed over the buckets, the probability that a rule is inserted in Bucket 4 is 1/4. The probability that a second rule is inserted in the same bucket is (1/4) * (1/4), which is 1/16, and so on. If N overflow rules are supported by the CAM, then the probability is 1/(4^N), or generally 1/((number of buckets)^(CAM size)). Obviously, this number approaches zero exponentially as the CAM size increases linearly.
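A worked check of this model; the uniform-insertion assumption is the paper's, while the loop bounds are arbitrary:

```cpp
#include <cmath>
#include <cstdio>

// P(rebuild) = buckets^(-n) for a CAM that can absorb n overflow rules,
// vanishing exponentially as n grows linearly.
int main() {
    const double buckets = 4.0;             // the 4-bucket example of Figure 16
    for (int n : {1, 2, 4, 8})
        printf("CAM size %d: P(rebuild) = %g\n", n, std::pow(buckets, -n));
    // e.g. CAM size 8 -> P = 1/65536, about 1.5e-05
    return 0;
}
```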
Moreover, dynamic updates are not only rule insertions, as there are also rules to be deleted. For a classifier that supports a certain number of rules, it is expected on average that for each rule inserted, there will be a rule deleted. This makes the need for rebuilding the tree even less probable. If there is any transient case in which the number of rules inserted is higher than the number of rules deleted, then the CAM will accommodate this transient condition.
The proposed architecture has the capability to support a high rate of dynamic updates. However, the decision tree could be rebuilt optionally to optimize the distribution of rules over cuts.
CONCLUSION
A new architecture for packet classification is proposed based on multilevel cutting of the classification space into subspaces. The proposed architecture takes advantage of the HyperCuts packet classification algorithm, the geometrical distribution of rules in the classification space, and CAMs. The proposed architecture gives the deterministic performance of CAMs with flexibility for system designers to trade off the components of the architecture according to the technology limitations and to the preferences of the system (speed, power dissipation). The proposed architecture was analyzed for the different tradeoffs between the bucket size, the number of nodes, the number of leaf nodes, and the number of levels. CAM was used in the architecture to limit the number of levels, to remove redundant rules, and to support dynamic updates.
Washington University in St. Louis proposed DCFL, the best algorithmic solution in the "disintegration" category, and based on DCFL, they proposed LECAM, which is the state-of-the-art in algorithmic-architectural solutions. DCFL and LECAM are based on the observation that the number of unique values in each rule field is very small compared to the number of rules. The number of unique values in the rule fields has a significant impact on the performance of DCFL and LECAM. If the number of unique values in rule fields increases, then the performance of DCFL and LECAM declines. Dynamic updates support for DCFL and LECAM is dependent on the number of unique values in rule fields. For the next generation networks, where the number of rules reaches millions and rules are dynamic and set by the traffic itself, it is not expected that the number of unique values in each rule field will remain very small compared to the number of rules.
The architecture proposed in this paper is able to support next generation networks, as it is based on the fact that the number of rules is small compared to the classification space. The characteristics of the rules could be exploited to make the cutting more optimal. Moreover, LECAM does not provide the flexibility provided in this architecture for the system designer to trade off the components of the architecture according to the limitations of the technology used and the preferences of the system designer. The existence of the aggregation CAM in LECAM, which has a size dependent on the characteristics of the rules, restricts the ability of the system designer to trade off the components of the architecture.
The proposed architecture takes HyperCuts, the state-of-the-art packet classification algorithm in the "cutting and tries" category, from theory to implementation on emerging technologies. Technology providers are reluctant to support algorithmic solutions, including HyperCuts, due to their nondeterministic performance. The proposed architecture supports dynamic updates at high rates, which were not supported in HyperCuts, and it provides the deterministic performance of CAMs.
Figure 4: Hierarchy of cutting that represents the classification space in Figure 3.
Figure 5: An architecture for multilevel cutting.
Figure 7: Restricting the maximum depth of the tree (number of levels) to 5 using CAM.
Figure 9: The effect on the number of leaf nodes of restricting the maximum depth of the tree (number of levels) to 5 using CAM.
Figure 12: Some rules are redundant in many cuts.
Figure 13: The effect on the number of nodes of moving the largest 10% of the rules.
Figure 14: The effect on the number of leaf nodes of moving the largest 10% of the rules.
Table 1: Rules table of a 5-tuple classifier.

Figure 11: CAM size required to accommodate rules moved to CAM as a percentage of the total number of rules.
Light attenuates lipid accumulation while enhancing cell proliferation and starch synthesis in the glucose-fed oleaginous microalga Chlorella zofingiensis
The objective of this study was to investigate the effect of light on lipid and starch accumulation in the oleaginous green alga Chlorella zofingiensis supplemented with glucose. C. zofingiensis, when fed with 30 g/L glucose, synthesized lipids up to 0.531 g/g dry weight, while in the presence of light the lipid content dropped to 0.352 g/g dry weight. Lipid yield on glucose was 0.184 g/g glucose, 14% higher than that of the culture with light. The light-mediated lipid reduction was accompanied by the down-regulation of fatty acid biosynthetic genes at the transcriptional level. Furthermore, light promoted cell proliferation, starch accumulation, and the starch yield based on glucose. Taken together, light may attenuate lipid accumulation, possibly through inhibition of the lipid biosynthetic pathway, leading to more carbon flux from glucose to starch. This study reveals the dual effects of light on sugar-fed C. zofingiensis and provides valuable insights into the possible optimization of algal biomass and lipid production by manipulation of culture conditions.
Results
Growth characteristics of glucose-fed C. zofingiensis with or without light. C. zofingiensis was cultured in the medium containing 30 g/L glucose, either with or without illumination. As indicated by Fig. 1, C. zofingiensis reached stationary growth phase after six days of cultivation under both culture conditions. Notably, in the presence of light, C. zofingiensis grew faster, with a specific growth rate of 0.502 h−1, slightly higher than that without light (0.486 h−1) (Fig. 1A). Accordingly, C. zofingiensis with light gave a higher maximum dry cell weight (Fig. 1A). C. zofingiensis tended to turn yellow-orange-red during the cultivation period (Fig. 1B), attributed to the synthesis and accumulation of secondary carotenoids, astaxanthin in particular [19,24]. It is worth noting that C. zofingiensis also exhibited a difference in culture color between the two conditions (Fig. 1B). The more intense color of the light cultures resulted from the biosynthesis of light-induced chlorophylls.
Light attenuates lipid accumulation while stimulating starch biosynthesis. In green microalgae, lipid and starch biosyntheses share common carbon precursors, though the regulation of carbon partitioning between these two biosynthetic pathways is not well understood 25,26. Glucose-fed C. zofingiensis cultures maintained a basal lipid level during the first 3 days of cultivation; thereafter, lipid built up rapidly and reached a maximal content of 53% of dry weight on day 7 (Fig. 2A). The algal cultures provided with light followed a similar pattern of lipid accumulation, but the lipid content was greatly attenuated compared to the light-free cultures, e.g., 28% of dry weight on day 7, 48% less than in the cultures without light. In contrast, there was greater starch accumulation in the cultures with light than in the cultures without light, and the starch content of the former remained higher than that of the latter throughout the cultivation period (Fig. 2B). Interestingly, starch began to accumulate before lipid did (Fig. 2A,B). Yield on glucose reflects carbon flux allocation. Notably, opposite trends of flux to lipid and starch were observed in cells with and without light (Fig. 2C,D). Compared to the cultures with light, C. zofingiensis without light had a higher lipid yield on glucose (Fig. 2C), though both dropped after cells started to divide. Conversely, the starch yield on glucose with light was higher than that without light throughout the culture period (Fig. 2D). Unlike the lipid yield on glucose, the starch yield remained relatively stable.
Light-induced lipid reduction is accompanied by accelerated cell proliferation. Figure 3A shows the time course of algal cell density grown with or without light. Dark-grown C. zofingiensis exhibited almost no change in cell number until day 6; in contrast, in the presence of light the alga started to divide on day 2 and reached a cell density up to 15-fold higher, indicating that cell proliferation was greatly enhanced by light. Consistently, our cell cycle analysis by flow cytometry demonstrated that light promoted algal cell mitosis (Fig. 3B). The G1/G0 phase represents cells in diploid or stationary form, i.e., newborn cells after division. Light cultures showed a much earlier peak in the G1/G0 phase than dark cultures (labeled M1 in Fig. 3B). There was an increase in per cell weight during the early culture period, followed by a significant drop close to the initial value (Fig. 3C). Overall, C. zofingiensis without light maintained up to 5 times greater per cell weight than that with light. Similar to the pattern in per cell weight, the per cell lipid content of C. zofingiensis without light increased drastically and reached a maximal value of 250 pg/cell after 5 days of cultivation, 5-fold higher than in the light-illuminated cultures (Fig. 3D). In addition, BODIPY 505/515, a fluorescent lipophilic dye for staining neutral lipids 27, was employed to monitor the in vivo dynamics of lipids in C. zofingiensis cells (Fig. 3E). The green signals represent stained neutral lipids, predominantly in the form of TAGs. In accordance with the per cell weight (Fig. 3C) and lipid content (Fig. 3D), dark-grown C. zofingiensis exhibited a drastic increase in both cell size and fluorescent staining, with a peak value on day 4, while the light-grown culture reached its maximum on day 2, followed by a gradual decline in cell size and staining, likely due to accelerated cell division (Fig. 3E).
Light alters the transcriptional expression of fatty acid biosynthetic genes. It is commonly agreed that green algae follow a lipid biosynthetic pathway similar to that of higher plants. Among the enzymes involved in lipid biosynthesis, acetyl-CoA carboxylase (ACCase) is a rate-limiting enzyme catalyzing the first committed step of de novo fatty acid synthesis in the chloroplast 28. Chloroplastic ACCase is composed of four subunits, and the genes encoding these subunits are mutually co-regulated. Thus, characterization of one subunit, such as biotin carboxylase (BC), can be representative of ACCase. Stearoyl-ACP desaturase (SAD) introduces the first double bond into the acyl chain and plays an important role in determining the degree of saturation of fatty acids 28. To investigate the effect of light on fatty acid biosynthesis, the transcript levels of SAD and BC in C. zofingiensis were determined using a real-time PCR approach. In dark-grown C. zofingiensis cells, an increase in the steady-state mRNA level of both SAD and BC was observed, and the mRNA levels reached their maximum on day 4, much higher than the maximum values in light-grown cells on day 3 (Fig. 4A,B). This is consistent with the data showing that dark cells exhibited a sharp increase in per cell lipid content on day 4 (Fig. 3C). The introduction of light to the dark culture after 3 days of cultivation exerted a negative effect and dramatically attenuated the SAD and BC transcripts compared to dark-grown cells on day 4. Notably, however, the expression of both genes increased sharply on day 5 and then decreased. When light-illuminated cells were transferred to the dark, the SAD and BC transcript levels were significantly higher than those in light-grown cells from day 4 to day 6. Overall, C. zofingiensis accumulated more SAD and BC transcripts in the dark than under light, which may explain why dark cells accumulated more lipids than light cells.
Culture conditions have little effect on fatty acid profiles. The quality of biodiesel is largely determined by its fatty acid composition 29 . GC-MS was employed to analyze the fatty acid profiles in C. zofingiensis under different culture conditions. The algal cells produced fatty acids mainly in the form of C18:1 (32.2%-35.8%), C18:2 (18.2%-20.1%), and C16:0 (16.1%-18.5%), which together account for more than 66% of total fatty acids, regardless of the culture conditions (Fig. 5). Although the total lipid contents varied greatly, no significant difference was observed in the fatty acid composition under the tested conditions (Fig. 5).
Discussion
C. zofingiensis can grow well photoautotrophically, mixotrophically, and heterotrophically. Under heterotrophic growth conditions, organic carbon sources, glucose in particular, serve as the sole carbon and energy sources 8,19,24. Chlorella possesses an inducible hexose/H+ active symport system that is responsible for the uptake of glucose from the medium 30,31. In the presence of glucose, the hexose/H+ symport system protein in Chlorella cells can be activated within just a few minutes 32. Usually, the specific growth rate of glucose-fed microalgae growing with light approximates the sum of the growth rates under photoautotrophic conditions and under glucose-fed conditions in the dark 33. This might explain our finding that C. zofingiensis grew faster on glucose in the presence of light (Fig. 1A). In green microalgae, lipid and starch are the two dominant energy storage forms and share common carbon precursors for biosynthesis. It has been reported that photoautotrophic Chlorella can accumulate lipid and starch up to 60% and 45% of dry weight, respectively, depending on the algal strain and culture conditions 34-38. Little attention, however, has been paid to the effect of light on carbon flux to lipid and starch in algal cells feeding on organic carbon sources such as glucose. In the present study, for the first time, we investigated the accumulation of lipid and starch in glucose-fed C. zofingiensis with or without light. Both lipid and starch contents increased, but starch accumulation preceded lipid synthesis (Fig. 2A,B), consistent with previous studies in photoautotrophically cultured C. zofingiensis 39 and Pseudochlorococcum sp. 40. Compared to the light cultures, the dark cultures tended to accumulate more lipid and less starch (Fig. 2). One possible explanation is that lipids require more reducing power than starch to produce, while glucose-fed cultures in the dark can generate more reducing power 17. For example, synthesis of a C18 fatty acid requires 16 NADPH molecules 33, which is less energetically economical than starch synthesis, as the latter requires 6 NADPH and 9 ATP molecules to form an 18-carbon unit 40. On the other hand, light can stimulate algal proliferation (Fig. 3). Cell division is an energy-consuming process, and microalgae tend to accumulate sufficient energy before proliferating. The energy storage materials, lipids and starch in particular, tend to accumulate in algal cells under stress conditions when cell growth halts. Lenneke 41 found that starch and lipid accumulation in Neochloris oleoabundans occurred before mitosis. Stress relief facilitates the degradation of lipids or starch, which can provide energy to boost cell growth. In the present study, we noticed a sharp decrease in lipid yield on glucose, but not in starch yield on glucose, after cell proliferation began (Fig. 2C,D), suggesting that glucose-fed C. zofingiensis cells tended to utilize lipids rather than starch to provide energy for cell division.
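To make the energetic argument concrete, the stoichiometries cited above can be reduced to a reducing-power cost per carbon stored. The short LaTeX block below merely restates the numbers already given (16 NADPH per C18 fatty acid; 6 NADPH plus 9 ATP per 18-carbon starch unit) as a back-of-the-envelope comparison; it is not a calculation from the original paper.

% Reducing-power cost per carbon stored, using only the stoichiometries
% cited above (16 NADPH per C18 fatty acid; 6 NADPH + 9 ATP per
% 18-carbon starch unit).
\[
\text{lipid: } \frac{16\,\mathrm{NADPH}}{18\,\mathrm{C}} \approx 0.89\ \mathrm{NADPH/C}
\qquad
\text{starch: } \frac{6\,\mathrm{NADPH}}{18\,\mathrm{C}} \approx 0.33\ \mathrm{NADPH/C}
\]
% Lipid therefore costs roughly 16/6 (about 2.7) times the NADPH of starch
% per carbon, consistent with reducing-power-rich dark cultures favouring lipid.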
SAD and BC are two genes encoding key enzymes involved in fatty acid biosynthesis. It has been suggested that the control these two genes exert on fatty acid biosynthesis may occur at the transcriptional level in C. zofingiensis 25. Consistent with the attenuated lipid yield on glucose after 3 days (Fig. 2C), a depressed expression level of these two genes was found in the cells provided with light, as compared to the dark-grown cells on day 4 (Fig. 4A,B). In this context, light might down-regulate the expression of fatty acid biosynthetic genes, leading to a decrease in lipid content to provide energy for cell proliferation, which would direct more carbon flux from glucose to the starch biosynthetic pathway, resulting in enhanced starch accumulation.
Fatty acid composition determines the quality of biodiesel and is subject to change under different culture conditions. The fatty acids in C. zofingiensis cells consisted predominantly of 16-18 carbons and the maximum degree of unsaturation was 3, similar to the plant oils currently used for biodiesel production 42. It was found that oil from dark-grown C. zofingiensis was more suitable for biodiesel production than that from photoautotrophic cells, as the former contained high contents of oleic acid (C18:1), linoleic acid (C18:2), and palmitic acid (C16:0), which balance oxidative stability and low-temperature properties and improve the quality of biodiesel 8. Notably, our results suggest that light had no significant effect on the fatty acid composition of glucose-fed C. zofingiensis.
C. zofingiensis accumulates lipid efficiently in the dark when supplemented with glucose. Light has a dual effect on C. zofingiensis: it promotes cell proliferation and biomass yield, while enhancing starch accumulation at the cost of lipid, possibly through inhibition of the lipid biosynthetic pathway. Our results provide insights into utilizing different culture conditions to boost biomass, lipid content, and lipid yield on glucose.
Methods
Thirty grams of glucose was added to 1 liter of medium. The pH of the medium was adjusted to 6.1 prior to autoclaving. Briefly, 10 mL of liquid Kuhl medium was inoculated with cells from slants, and the alga was grown aerobically in flasks at 25 °C for 4 days with orbital shaking at 150 rpm and continuous illumination at 50 μmol photon m−2·s−1. The cells were then inoculated at 10% (v/v) into a 250-mL Erlenmeyer flask containing 50 mL of growth medium. Algal cells in exponential growth phase were used as seed cells for the subsequent batch cultures.
For dark cultivation, seed cells were inoculated into 100 mL of fresh medium in 500-mL flasks at a starting cell density of 0.5 g/L and were grown in the dark at 25 °C with orbital shaking at 150 rpm. Light cultivation was conducted under continuous illumination at 50 μmol photon m−2·s−1; the other parameters were the same as for the dark cultures.
A total of 48 samples were divided into four groups, cultivated in the dark, in the light, in dark-to-light, and in light-to-dark conditions. Samples in the latter two groups were transferred to the light or the dark after three days of incubation. Samples from all four groups were collected daily for analysis.
Analysis of lipid content and starch content.
Total lipids were extracted from lyophilized cell powder according to Converti 43 with some modifications. A 100-mg sample was ground before two extractions with petroleum ether; the supernatants were then combined and evaporated under N2. The crude oil was then weighed. Lipid content was expressed as lipid weight per unit biomass.
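Because the content and yield quantities reported throughout the Results are simple ratios, a minimal sketch may help; the function names and example numbers below are hypothetical (the formulas are just the generic content and yield definitions implied by the text, not code from the study).

def lipid_content(crude_oil_mg, biomass_mg):
    """Lipid weight per unit dry biomass (g/g dry weight)."""
    return crude_oil_mg / biomass_mg

def yield_on_glucose(product_g_per_L, glucose_consumed_g_per_L):
    """Product formed per unit of glucose consumed (g/g glucose)."""
    return product_g_per_L / glucose_consumed_g_per_L

# Illustrative numbers only: 53.1 mg crude oil from 100 mg lyophilized
# powder gives 0.531 g/g dry weight, matching the maximal dark-culture
# content reported above; the yield example is likewise invented.
print(lipid_content(53.1, 100.0))       # 0.531
print(yield_on_glucose(5.5, 30.0))      # ~0.183 g/g glucose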
The starch content was analyzed using the defatted sediment described above, according to a modified method of Brányiková 34. A 30% perchloric acid solution was added to 5 mg of sediment, stirred for 15 min at 25 °C, and centrifuged. This procedure was repeated three times. The extracts were combined, and the volume was adjusted to 10 mL. Next, 2-mL aliquots of the solubilized starch solution were reacted with 5 mL of concentrated sulfuric acid (98% by weight) and 1 mL of phenol (6%, w/v) at room temperature for 10 min. The absorbance was read in a spectrophotometer at 490 nm. Samples were quantified by comparison to a calibration curve using glucose as the standard. Starch content was expressed as starch weight per unit biomass.
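The glucose calibration step above is a routine linear fit of absorbance against standard concentrations; the following sketch illustrates it with made-up standards and readings (all values and names are illustrative assumptions, not data from the study).

import numpy as np

# Hypothetical glucose standards (mg/mL) and their A490 readings;
# a real curve would use the laboratory's own standard series.
standards_mg_ml = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
a490_standards = np.array([0.01, 0.12, 0.23, 0.35, 0.46, 0.57])

# Linear calibration: A490 = slope * concentration + intercept.
slope, intercept = np.polyfit(standards_mg_ml, a490_standards, 1)

def glucose_equivalents(a490_sample):
    """Map a sample absorbance back to glucose equivalents (mg/mL)."""
    return (a490_sample - intercept) / slope

print(round(glucose_equivalents(0.30), 4))  # ~0.0518 mg/mL (illustrative)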
Determination of cell density, biomass, specific growth rate, per cell weight, per cell lipid content, and lipid and starch yield on glucose. Cell density was counted using a hemocytometer. Microalgal cells were centrifuged and filtered through a pre-dried Whatman GF/C filter (Cat No 1822-047) after two washes with distilled water. Next, the filter paper was dried at 80 °C in a vacuum oven for 12 h and subsequently cooled to room temperature before weighing. Biomass was expressed as cell dry weight. The specific growth rate (μmax) was calculated according to μmax = (ln X2 − ln X1)/(t2 − t1), where X1 and X2 are the biomass concentrations at times t1 and t2 during exponential growth.
RNA isolation and real-time RT-PCR assay. The expression levels of two genes involved in fatty acid synthesis were determined using real-time PCR according to Liu 46. Briefly, RNA was isolated from aliquots of approximately 10^8 cells using TRIZOL reagent (Molecular Research Center, Cincinnati, OH, USA) according to the manufacturer's instructions. The concentration of total RNA was determined spectrophotometrically at 260 nm. Total RNA (1 μg) extracted from different samples was reverse transcribed to cDNA using the SuperScript III First-Strand Synthesis System (Invitrogen, Carlsbad, CA, USA), primed with oligo(dT) according to the manufacturer's instructions. Real-time RT-PCR analysis was performed using 1 μL of the RT reaction mixture in a total volume of 20 μL with specific primers and Platinum SYBR Green qPCR SuperMix-UDG (Invitrogen). PCR amplification was conducted using specific primers targeting BC (forward, 5′-GTGCGATTGGGTATGTGGGGGTG-3′; reverse, 5′-CGACCAGGACCAGGGCGGAAAT-3′), SAD (forward, 5′-TCCAGGAACGTGCCACCAAG-3′; reverse, 5′-GCGCCCTGTCTTGCCCTCATG-3′), and the internal control actin (ACT) gene (forward, 5′-TGCCGAGCGTGAAATTGTGAG-3′; reverse, 5′-CGTGAATGCCAGCAGCCTCCA-3′). PCR was performed in a Bio-Rad iCycler IQ Multi-Color Real-Time PCR Detection System (Bio-Rad, Hercules, CA). The relative levels of the amplified mRNAs were evaluated using the 2^−ΔΔCt method 47, with the actin gene used for normalization.
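The 2^−ΔΔCt (Livak) method cited above is a small piece of arithmetic worth spelling out; the sketch below is a generic implementation with illustrative Ct values, not data or code from the study.

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Relative mRNA level by the 2^-ddCt (Livak) method: the target
    gene (e.g., SAD or BC) is normalized to the reference gene (actin)
    within each sample, then treated is compared against control."""
    dd_ct = ((ct_target_treated - ct_ref_treated)
             - (ct_target_control - ct_ref_control))
    return 2.0 ** (-dd_ct)

# Illustrative Ct values: the target amplifies two cycles earlier in
# the treated sample with identical actin Cts, i.e. ~4-fold expression.
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # 4.0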
Cell cycle analysis. The cell cycle was determined by flow cytometry (FCM) with propidium iodide (PI) staining, following Gerashchenko 48 with modifications. Briefly, 10^6 cells were collected and washed twice with PBS. Methanol was then added before the PBS was removed. Cells in methanol were dispersed and stored at 4 °C until analysis. Upon analysis, the samples were washed with PBS and then stained with 50 μg/ml PI in the presence of 25 μg/ml RNase A in a 37 °C bath for 30 min. The cell cycle distribution of 10,000 cells was recorded by a flow cytometer (BD FACSCalibur), and the results were analyzed with ModFit software.
|
v3-fos-license
|
2017-11-08T18:06:18.141Z
|
1994-11-01T00:00:00.000
|
519166
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://europepmc.org/articles/pmc2033554?pdf=render",
"pdf_hash": "91ba57c967a7a485ef92b21e9094da32c7058600",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43544",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "91ba57c967a7a485ef92b21e9094da32c7058600",
"year": 1994
}
|
pes2o/s2orc
|
Loss of heterozygosity on chromosome 22 in ovarian carcinoma is distal to and is not accompanied by mutations in NF2 at 22q12.
Frequent loss of heterozygosity (LOH) has been reported on 22q in ovarian carcinoma, implying the presence of a tumour-suppressor gene. The neurofibromatosis type 2 gene (NF2) at 22q12 is a plausible candidate. Analysis of 9 of the 17 exons of NF2 by single-strand conformational polymorphism (SSCP) in 67 ovarian carcinomas did not detect any somatic mutations, suggesting that NF2 is not involved in the pathogenesis of ovarian carcinoma. LOH data support this conclusion and indicate that the putative tumour-suppressor gene lies distal to NF2, beyond D22S283.
Cytogenetic and allele loss studies point to the involvement of a chromosome 22 tumour-suppressor gene in a variety of malignancies. Loss of 22q markers has been observed in 11-38% of breast cancers (Sato et al., 1990; Chen et al., 1992; Lindblom et al., 1993), 21-50% of colon carcinomas (Miyaki et al., 1990; Okamoto et al., 1988) and 33% of hepatocellular carcinomas (Takahashi et al., 1993). In ovarian carcinoma, loss of 22q markers has been reported at frequencies of up to 71% (Cliby et al., 1993; Dodson et al., 1993), although others have reported lower frequencies in the range of 25% (Sato et al., 1991; Yang-Feng et al., 1993; Osborne & Leech, 1994). These studies have not defined a minimum region of LOH. However, with the exception of hepatocellular carcinomas, the 22q losses are consistent with deletion of the region containing NF2 on 22q12 (Rouleau et al., 1993; Trofatter et al., 1993), and this region therefore may be the target of the LOH.
Neurofibromatosis type 2 (NF2) is an autosomal dominant syndrome which predisposes to schwannomas, meningiomas, ependymomas and juvenile cataracts. LOH for DNA markers on chromosome 22 has been found in both hereditary and sporadic tumours associated with the NF2 phenotype (Sanson et al., 1993), suggesting that NF2 acts as a tumour-suppressor gene. NF2 is expressed in a variety of non-central nervous system tissues and therefore could be implicated in tumours arising from these tissues. Although ovarian carcinoma is not part of the NF2 syndrome, somatic mutations in the tumour-suppressor gene NF1, for example, have been reported in tumour types not usually associated with the hereditary syndrome (Li et al., 1992). The strong genetic evidence for a tumour-suppressor gene on 22q prompted us to undertake a mutation analysis of NF2 in ovarian carcinomas and, if appropriate, a deletion analysis to determine the smallest region of loss.
Materials and methods
Tumour specimens and DNA extraction
Tumour and blood samples were obtained from 67 patients undergoing primary surgery for ovarian carcinoma. Thirty-eight of the samples have been described previously and have been the subject of extensive molecular studies (Foulkes et al., 1993a-c, 1994; Allan et al., 1994). The remaining 29 samples were collected from hospitals in and around Southampton. The 67 tumours studied included 46 serous tumours, 11 mucinous, two mixed Müllerian, seven endometrioid and one granulosa cell tumour. DNA was extracted from the tumours and blood as described by Foulkes et al. (1993a).
Polymerase chain reactions (PCRs)
PCRs were performed on 10-200 ng of genomic DNA in a reaction volume of 20 μL with the inclusion of 1 μCi of [α-32P]dCTP (Foulkes et al., 1993c), using the published PCR cycle conditions.
SSCP analysis
NF2 exons were amplified with primers located in the surrounding intronic sequence. Details of these primers, including the regions amplified by each set and the PCR conditions used, have been reported by Twist et al. (1994). Samples were prepared for single-strand conformational polymorphism analysis as described previously (Campbell et al., 1994), and electrophoresis was carried out at room temperature in 6% non-denaturing acrylamide gels in the presence of 5% and 10% glycerol.
Results
Nine of the 17 coding exons of NF2 (exons 2, 5, 7-12 and 15) were examined for the presence of somatic mutations. The SSCP analysis was biased toward the more 5' exons, since the N-terminal region of the merlin protein is more frequently associated with mutations (Deprez et al., 1994); however, no mutations were detected in any of the 67 ovarian carcinomas. The sensitivity of SSCP analysis has been shown to be related to the size of the PCR product (Sheffield et al., 1993). In our study this was 188 bp, and we therefore estimate the sensitivity of the SSCP analysis to be 70-75%.
In view of the absence of detectable NF2 mutations, we undertook a preliminary allelotype analysis of 37 ovarian carcinomas (from the collection described by Foulkes et al., 1993a-c) in order to identify tumours with partial loss of 22q, since previous studies have not enabled assignment of the gene to any particular region of chromosome 22. All four 22q markers used map distal to NF2, with the most proximal marker, D22S430, located in a region approximately 300 kb distal to NF2. The relative genetic and cytogenetic positions of these markers are shown in Figure 1.
Figure 1: Relative genetic and cytogenetic positions of the chromosome 22 markers (Matise et al., 1994; Buetow et al., 1994); inter-locus distances are in centimorgans, and NF2 is indicated just proximal to D22S430.
Twenty-three of 32 informative tumours (72%) showed loss of one or more 22q markers. In three of these tumours, partial loss of 22q was observed (Figure 1). In tumour 28, complete LOH was observed with D22S274 and D22S282, while only partial LOH was seen with D22S283. This finding was reproducible on repeated testing and might be accounted for by clonal variation. In tumour 72, heterozygosity was clearly maintained with D22S283 and completely lost with the more distal markers. In tumour 91, heterozygosity was maintained with the most proximal marker, D22S430, but clearly lost with the distal markers D22S283 and D22S274.
LOH data suggest that a tumour-suppressor gene relevant to ovarian, breast and colon carcinomas resides on chromosome 22, but it is unclear at this stage whether the same gene is involved in these and other malignancies. NF2, located at 22q12, is a plausible candidate for this gene, but to date somatic mutation analysis has been restricted largely to tumours of the central nervous system (CNS). However, in two reports, mutations were detected in the coding region of NF2 in a total of 1 of 69 breast carcinomas, 6 of 17 melanomas and 2 of 64 colon carcinomas (Arakawa et al., 1994; Bianchi et al., 1994). The significance of the low frequencies of NF2 mutation in the breast and colon carcinomas is unclear. However, in this study of 67 ovarian carcinomas, the absence of NF2 mutations supports the conclusion that NF2 is not involved in the pathogenesis of ovarian carcinoma. Although we have not examined all the NF2 coding exons, studies in NF2-related tumours (Rouleau et al., 1993; Deprez et al., 1994; Rubio et al., 1994; Ruttledge et al., 1994; Twist et al., 1994) and non-CNS tumours (Arakawa et al., 1994; Bianchi et al., 1994) have shown that mutations are distributed throughout the gene with a bias towards the more 5' exons. The distribution of NF2 mutations is unlikely to differ significantly in ovarian carcinomas, and given our large sample size we consider it improbable that NF2 is the 22q ovarian carcinoma tumour-suppressor gene.
In our preliminary allelotype study we demonstrated 72% LOH of 22q markers, consistent with the frequencies reported by Cliby et al. (1993) and Dodson et al. (1993). Three tumours were identified which had retained heterozygosity for markers proximal to NF2 but showed LOH for more distal markers. Although some somatic deletions might result from generalised genomic instability unrelated to tumour pathogenesis, the finding of three tumours with consistent losses distal to NF2, coupled with the absence of mutations in the NF2 gene, supports the conclusion that the 22q ovarian carcinoma tumour-suppressor gene lies distal to NF2, beyond D22S283. Detailed deletion mapping of 22q in an expanded set of ovarian carcinomas is currently under way to refine the location of this gene. Abbreviations: LOH, loss of heterozygosity; NF2, neurofibromatosis type 2; PCR, polymerase chain reaction; CNS, central nervous system. This work was funded by a grant from the Medical Research Council. Additional support was provided by the Wessex Medical Trust. We thank Mrs M. Baron and Mrs N. Thomas and the clinicians from hospitals in the Wessex region who provided tumour samples, and Dr G. Rouleau for providing the NF2 primer sequences.
|
v3-fos-license
|