Ecological Risk Assessment of Land Use Change in the Tarim River Basin, Xinjiang, China: In recent years, global climate change and human alterations to land use have led to a decrease in ecosystem services, making ecosystems more vulnerable. However, unlike the well-established risk assessment frameworks used in natural disaster research, the concept of ecological risks arising from changes in land use is still in its early stages, with its nuances and assessment methodologies yet to be clearly defined. This study proposes a new framework for assessing ecological risks resulting from changes in land use in the Tarim River Basin. The framework employs a coupled PLUS and InVEST model to evaluate the ecological risks of land use change under three development scenarios projected for the Tarim River Basin in Xinjiang by 2035. The findings indicate that: (1) Between 2000 and 2020, the predominant land use types in the Tarim River Basin in Xinjiang were primarily unused land, followed by grassland and cropland. Conversely, forestland, water, and construction land were relatively less prevalent. During this period, the area of unused land and cultivated land increased, while grassland, forest land, and water exhibited a declining trend. Moving forward, under the three scenarios from 2020 to 2035, land use changes in the study area are characterized by the expansion of cropland and unused land, coupled with a significant decrease in grassland area, while other land categories demonstrate minor fluctuations. (2) From 2020 to 2035, across various scenarios, the total ecosystem service within the study area demonstrates an overall increasing trend in both the northern and southern marginal zones. Specifically, under the baseline scenario, the total amount of ecosystem services in the study area decreased by 15.247% compared to 2020. Similarly, under the economic development scenario, this decrease amounted to 13.358% compared to 2020. Conversely, under the ecological protection scenario, the decrease reached 19.852% compared to 2020. (3) The structure of ecological risk levels from 2020 to 2035, across multiple scenarios, demonstrates a consistent pattern, characterized by a predominant proportion of moderate risk. Conversely, other risk levels occupy relatively smaller proportions of the area.

Introduction

Since the beginning of the 21st century, the combination of human activities and natural processes has caused significant changes in land use and global ecosystems. This has brought attention to the ecological risks associated with changes in land use [1,2], which are a major concern for both developed and developing nations [3]. Human alterations to land use patterns in recent decades have significantly disrupted ecosystems, resulting in ecological functional degradation, soil erosion, land desertification, environmental pollution, and diminishing biodiversity [4,5]. These alterations have markedly increased ecosystem risks [6] and pose severe threats to human well-being [7-10]. Assessing ecological risks associated with changes in land use and identifying their causes is essential for establishing an ecological risk warning system. This system can help accurately and effectively control ecological risks, guide human behavior, and provide a scientific basis for ecological construction.
The concept of ecological risk assessment originated in the United States Environmental Protection Agency (EPA), which defines ecological risk as the probability of adverse ecological impacts resulting from exposure to one or more stressors [10]. Since the 1990s, ecological issues have become increasingly prominent, leading to a shift in the focus of risk assessment from human health to ecological risk assessment [11]. This shift has extended to populations, communities, and entire ecosystems. Ecological risk is typically defined as the probability and magnitude of adverse effects on ecosystem structure, function, stability, and sustainability caused by external stressors [12]. As research scales expand from local to regional, a significant branch of ecological risk research has emerged, known as regional ecological risk assessment. This primarily evaluates the probability and extent of adverse effects of environmental pollution, human activities, or natural disasters on multiple risk receptors at the regional scale [13]. Currently, ecological risk sources include natural variability and human activities. Within the domain of natural disasters, a theoretical and methodological framework for ecological risk assessment has been established [14,15]. Changes in land use can reflect the impact of human activities on natural ecosystems [16,17]. Such changes can have significant ecological impacts on atmospheric, soil, aquatic, and biological systems [18]. These impacts are cumulative and can have regional effects.

The assessment of ecological risks associated with changes in land use primarily focuses on urban areas, watersheds, coastal regions, administrative districts, and nature reserves. In urban areas, frequent conversions among different land use types due to excessive population density and rapid spatial expansion have resulted in increasingly prominent ecological issues [19,20]. Watershed areas are ecologically sensitive due to their poor natural endowments, particularly water resources. The landscape in these areas experiences both ecological improvement and deterioration. However, the deterioration trend outweighs the improvement trend, leading to an intensification of landscape ecological deterioration [21,22]. During the development of coastal areas, uncontrolled urban expansion in the early stages of economic development can lead to landscape fragmentation, which increases overall ecological risk and tends to concentrate spatially. However, as expansion is restrained, overall ecological risk decreases [23,24]. Establishing conceptual models for ecological risk assessment and determining quantitative assessments of ecological risk present significant challenges due to the complexity of ecosystems and the uncertainty of risk occurrence.

Two mature theories and methods currently exist in the field of ecological risk assessment. The first is the traditional assessment approach based on the source-pathway-receptor theory, known as "source analysis-receptor assessment-exposure and hazard assessment-risk characterization" [25]. The Relative Risk Model (RRM) is widely employed for conceptual model construction within this framework. For example, Muditha K. Heenkenda used this assessment system to rank and classify pressure sources and habitats in a specific area, modeling the interactions between them using exposure and effect filters and revealing the spatio-temporal distribution of ecological risks in a port setting [26,27]. Yu et al.
[28] constructed a "source-pressure source-ecosystem-ecological adverse endpoint" assessment framework based on the principles of RRM to predict and rank the potential ecological risks of various sub-regions in Xiamen Bay. In comparison to RRM, the Landscape Pattern Risk Assessment and Evaluation (LPRAE) method is more commonly used to assess single risk sources. LPRAE quantitatively evaluates ecological risk by assessing the likelihood of ecological risk occurrence within a region and the losses caused by risk sources to risk receptors [29]. Another method involves ecological risk assessment based on landscape ecology theory. This method emphasizes the impact of landscape patterns on specific ecological functions or processes and focuses on the overall loss of landscape in providing ecosystem services and ecological functions [30,31]. Evaluation indicators include landscape fragility, resilience, and stability. Research outcomes emphasize the comprehensive characterization and spatial visualization of multiple risk sources, supporting sustainable landscape planning, design, and ecological management. Landscape ecological risk assessments often overlook the structure and functionality of the ecosystem, focusing solely on the landscape perspective, even though the landscape represents only a small portion of the entire ecosystem. Such assessments are also typically static, failing to account for the dynamic nature of land use changes. As a result, landscape ecological risk assessments do not adequately capture the fluctuations in ecological risks associated with land use changes. Current risk assessment methodologies only consider the impact of baseline land use changes, without exploring the potential transitions between different land use types, and thus ignore future risks. Therefore, a new approach is needed to evaluate the ecological risks associated with land use changes, one capable of addressing the dynamic alterations in land use and the complex nature of ecosystems.

The Tarim River Basin (TRB) is located in an extremely arid region of China and is heavily influenced by dry climates and intense human activities. As a result, significant alterations in ecological processes have occurred, leading to heightened ecological vulnerability (EV). Furthermore, rapid urbanization and extensive cropland development in the region have resulted in dramatic changes to the ecosystem. The study's logical framework focuses on analyzing and simulating land use changes, assessing ecosystem service functions, and spatially identifying and evaluating ecological risks. To achieve this, we coupled the PLUS and InVEST models [32] to simulate potential future scenarios of land use changes in the TRB under three development scenarios for the year 2035: baseline development, economic development, and ecological conservation. The economic Sharpe ratio [33] is subsequently introduced to integrate land use simulation results and ecosystem services into ecological risk assessment. This study provides empirical evidence for ecological risk assessment in the TRB by analyzing the spatiotemporal differentiation of ecological risks associated with land use changes under the three development scenarios for 2035 and exploring their attribution. It also offers technical support and a multi-level research approach for similar study areas.
Overview of the Study Area

The Tarim River is the largest inland river in China, stretching 2179 km in length and ultimately flowing into Taitema Lake. Its basin covers an area of 1.02 million square kilometers, accounting for approximately one-sixth of China's territory [34]. The geographical coordinates range from longitude 71°39′ E to 93°45′ E and latitude 34°20′ N to 43°39′ N. The TRB has a total of 42.9 billion cubic meters of water resources, with 39.83 billion cubic meters being surface water resources and 3.07 billion cubic meters being groundwater resources. Its climate is a typical temperate arid continental climate, characterized by abundant sunshine, dryness, strong winds, large diurnal temperature variations, sparse precipitation, and intense evaporation. This region contains 54% of the world's natural poplar forests and 90% of China's natural poplar forests. It serves as a gene bank for poplar forest resources and is a crucial component of China's "Two Screens and Three Belts" ecological security strategy. Its ecological significance is irreplaceable [35] (Figure 1).

Data Sources

The study utilized various types of data, including land use, topographic, meteorological, socioeconomic, and other sources (Table 1). All data were resampled to a spatial resolution of 250 m and projected onto the WGS_1984_World_Mercator coordinate system.

Research Framework

This study analyzes the ecological risks associated with land use changes in the TRB of China. The PLUS model was used to simulate three scenarios for the year 2035: baseline development, economic development, and ecological conservation. Four ecosystem service functions were quantified: water yield, carbon storage, soil retention, and habitat quality. The study quantified the ecological risk of land use changes using the Sharpe ratio and simulated the spatial distribution of ecological risks under the three scenarios. Figure 2 illustrates the specific framework.
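As a rough illustration of the preprocessing described under Data Sources, the sketch below reprojects a raster onto a common World Mercator grid and resamples it to 250 m. It is a minimal example assuming the rasterio library and EPSG:3395 as the World Mercator definition; the file names and resampling choice are placeholders rather than the authors' actual workflow.

```python
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

DST_CRS = "EPSG:3395"   # WGS 84 / World Mercator (assumed equivalent of WGS_1984_World_Mercator)
RESOLUTION = 250.0      # target cell size in metres

def harmonize(src_path: str, dst_path: str, resampling=Resampling.nearest) -> None:
    """Reproject a raster to World Mercator and resample it to 250 m."""
    with rasterio.open(src_path) as src:
        transform, width, height = calculate_default_transform(
            src.crs, DST_CRS, src.width, src.height, *src.bounds, resolution=RESOLUTION
        )
        profile = src.profile.copy()
        profile.update(crs=DST_CRS, transform=transform, width=width, height=height)
        with rasterio.open(dst_path, "w", **profile) as dst:
            for band in range(1, src.count + 1):
                reproject(
                    source=rasterio.band(src, band),
                    destination=rasterio.band(dst, band),
                    src_transform=src.transform,
                    src_crs=src.crs,
                    dst_transform=transform,
                    dst_crs=DST_CRS,
                    resampling=resampling,
                )

# Categorical layers (e.g., land use) keep nearest-neighbour resampling;
# continuous layers (precipitation, DEM) could instead use Resampling.bilinear.
harmonize("landuse_2020.tif", "landuse_2020_250m.tif")
```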
Carbon Stocks

The Carbon module in the InVEST model version 3.9.2 was used to evaluate the carbon stock associated with each land use cover type in the study area. The overall carbon stock was calculated by determining the above-ground carbon stock of vegetation, the below-ground carbon stock of vegetation, the soil carbon stock, and the dead organic matter carbon stock using a specific carbon density reference [36]. The calculation formula is as follows:

C_total = C_above + C_below + C_soil + C_dead

where C_total is the total carbon stock, C_above is the above-ground part of the carbon stock, C_below is the below-ground part of the carbon stock, C_soil is the soil carbon stock, and C_dead is the dead organic carbon stock.

Water Production

The water production module in the InVEST model uses the water balance principle based on Budyko's [37] coupled hydrothermal equilibrium assumptions and average annual precipitation data. The annual water production, Y(x), for each raster cell, x, in the study area is calculated as the difference between precipitation and actual evapotranspiration:

Y_x = P_x − AET_x

where Y_x is the average annual water production of grid x, and P_x is the annual rainfall of grid x. Since the actual annual evapotranspiration AET_x cannot be obtained by direct measurement, it is approximated by a Budyko-type curve for the ratio AET_x/P_x. The R_x value is dimensionless and is an index of dryness of grid x, which can be calculated from the potential evapotranspiration and rainfall. w_x is an empirical parameter that can be calculated. AWC_x is the vegetation available water content, which is determined by soil texture and effective soil depth, and is used to determine the total amount of water stored and provided by the soil for plant growth. Z is known as the Zhang coefficient [38]; the final Z coefficient was determined to be 3.6 in this study.

Soil Conservation

Soil conservation aims to reduce soil erosion by improving the structure of vegetation. This is achieved by calculating the potential soil erosion and sand production, as well as the actual erosion and sand production, based on the topography of the study area, precipitation, and other factors. The difference between these two measurements is used as the quantitative value of soil conservation. The specific formulas are shown below:

Q_sr_x = Q_se_px − Q_se_ax
Q_se_px = R_x · K_x · L_x · S_x
Q_se_ax = R_x · K_x · L_x · S_x · C_x · P_x

where Q_sr_x is the soil retention, Q_se_px is the potential soil erosion, Q_se_ax is the actual soil erosion, R_x is the rainfall erosivity factor, K_x is the soil erodibility factor, L_x is the slope length factor, S_x is the slope gradient factor, C_x is the vegetation cover factor, and P_x is the factor that indicates soil and water conservation measures.
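The three per-cell calculations above can be sketched numerically as follows. This is a simplified illustration, not the InVEST implementation itself: the carbon densities are placeholders, and the Budyko-type curve is assumed to be the Zhang formulation with dryness index R_x = PET_x/P_x and w_x = Z·AWC_x/P_x.

```python
import numpy as np

# Illustrative carbon densities (t/ha) per land-use code: (above, below, soil, dead).
CARBON_DENSITY = {
    1: (12.0, 4.0, 60.0, 1.0),   # cropland (placeholder values)
    2: (35.0, 9.0, 90.0, 2.5),   # forestland (placeholder values)
    3: (3.0, 6.0, 70.0, 0.5),    # grassland (placeholder values)
}

def total_carbon(lulc: np.ndarray) -> np.ndarray:
    """C_total = C_above + C_below + C_soil + C_dead for every cell of a land-use raster."""
    out = np.zeros(lulc.shape, dtype=float)
    for code, dens in CARBON_DENSITY.items():
        out[lulc == code] = sum(dens)
    return out

def water_yield(precip, pet, awc, z=3.6):
    """Y_x = P_x - AET_x, with AET/P approximated by a Zhang-type Budyko curve (assumed form)."""
    r = pet / precip                     # dryness index R_x
    w = z * awc / precip                 # empirical parameter w_x
    aet_ratio = (1.0 + w * r) / (1.0 + w * r + 1.0 / r)
    return precip * (1.0 - aet_ratio)

def soil_retention(r_fac, k_fac, l_fac, s_fac, c_fac, p_fac):
    """Q_sr = potential erosion (R*K*L*S) minus actual erosion (R*K*L*S*C*P)."""
    rkls = r_fac * k_fac * l_fac * s_fac
    return rkls - rkls * c_fac * p_fac
```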
Habitat Quality

The InVEST model's habitat quality module quantifies regional habitat quality by considering the range of vegetation types in a given area and the degree of degradation of each type. The model assumes that areas with good habitat quality also have high biodiversity. The specific calculation is as follows:

D_xj = Σ_(r=1..R) Σ_(y=1..Y_r) (W_r / Σ_r W_r) · r_y · i_rxy · β_x

where D_xj denotes the degree of habitat degradation of raster x in habitat type j; R is the number of threat factors; W_r is the weight of the threat source r; Y_r is the number of rasters of the threat source; r_y is the coercion value of raster y; i_rxy is the accessibility of the threat source to raster x; β_x is the sensitivity of the habitat type j to the threat source r; and d_xy denotes the level of stress exerted by grid cell y on grid x, with two types of decay effects, linear and exponential:

i_rxy = 1 − d_xy / d_r,max (linear)
i_rxy = exp(−2.99 d_xy / d_r,max) (exponential)

where d_xy is the straight-line distance between grid x and grid y, and d_r,max is the maximum coercive distance of the threat source r. The habitat quality formula is:

Q_xj = H_j [1 − D_xj^z / (D_xj^z + k^z)]

In the equation, Q_xj represents the habitat quality index of grid x in habitat type j; H_j denotes the habitat suitability of habitat type j, ranging from 0 to 1; k is the half-saturation constant, set to half of the maximum habitat degradation degree, designated as 0.5; z is the normalization constant, typically set to 2.5. Threat sources are extracted from cropland, construction land, and unused land. The maximum threat distance, weight, attenuation type, and sensitivity of different habitats for each threat source are specified in Table 2. This study selected cropland, construction land, and unused land directly affected by anthropogenic factors as threat factors, based on the geographical environment and land use patterns in the TRB, as well as the model user manual [37] and existing relevant literature [39]. The maximum threat distance, weight, and decay characteristics of the threat factors were determined through comparison and calibration, as specified in Table 3.

Quantification of Total Ecosystem Services

To emphasize the significance of ecosystems, this study selected four key indicators for ecosystem service assessment in the TRB based on the principles of data accessibility, necessity, and priority, and considering the current situation and ecological service importance. These indicators are water yield, habitat quality, soil retention, and carbon storage. According to this research, the total quantity of ecosystem services in the TRB is the sum of soil retention, water yield, habitat quality, and carbon storage, which represent four critical ecosystem service functions:

ESI_j = Σ_(i=1..4) ESN_ij

where ESI_j represents the sum of the standardized values of the four ecosystem services in grid j. It signifies the total ecosystem service in grid j. ESN_ij denotes the standardized ith ecosystem service in grid j. To standardize the four dimensions of ecosystem services, the values are normalized to ensure that the value of each ecosystem service falls between 0 and 1.
ESN_ij = (ES_ij − ES_min) / (ES_max − ES_min)

where ESN_ij is the standardized value of the ith ecosystem service in grid j; ES_ij represents the value of the ith ecosystem service in grid j; ES_max is the maximum value of the ith ecosystem service; and ES_min is the minimum value of the ith ecosystem service. Thus, the value of ESI_j ranges from 0 to 4; higher values indicate a higher capacity of the ecosystem to provide ecosystem services. To facilitate subsequent calculations, ESI_j was normalized using the same formula so that it again takes a value between 0 and 1; values close to 1 indicate a higher capacity of the ecosystem to provide ecosystem services.

The PLUS Model

The PLUS model is a cellular automaton (CA) model that simulates land use/land cover (LULC) changes at the patch scale using raster data. It integrates rule mining methods based on land expansion analysis and a CA model based on multi-type random seed mechanisms. This allows for the identification of driving factors of land expansion and the prediction of the patch-level evolution of land use landscapes. The PLUS model enhances existing CA models by utilizing the Transition Analysis Strategy (TAS) and Pattern Analysis Strategy (PAS) to better represent patch-level changes in land use processes. It also employs landscape dynamic change simulation strategies and targeted transformation rule mining strategies. Furthermore, the Land Expansion Analysis Strategy (LEAS) and the CA model based on multi-type random seeds (CARS) offer advantages in land expansion analysis. To assess the accuracy of simulated land use data, we generated simulation data for the year 2020 based on baseline land use data from 2000 and 2010. We validated the agreement between the real and simulated land use data using the Figure of Merit (FOM) coefficient and the overall accuracy coefficient. The FOM for the year 2020 was 0.135, and the overall accuracy was 0.813. These results demonstrate a high level of spatial consistency between the two layers, satisfying the requirements for future scenario simulations. Please refer to reference [40] for the specific formulas.

Multi-Scenario Model

Land use change predictions involve planning interventions, incentives, and restrictive measures. Changes in natural reserves and infrastructure may alter the land development process when modeling future scenarios. The PLUS model, which utilizes historical land use changes and suitability maps, can forecast land use scenarios for specific future dates. In this study, three future scenarios were set: the baseline development scenario, the ecological protection scenario, and the economic development scenario.

(1) Baseline Development Scenario (JZ): The LULC changes in this scenario follow the current development trends from 2000 to 2020. The areas of LULC types in 2035 were obtained through Markov chain analysis.

(2) Ecological Protection Scenario (ST): This scenario primarily focuses on a development pattern centered around protecting forest ecosystems within the basin. It utilizes the most recent data on natural reserves in China to establish ecological limitations and incentives for land changes within the reserves. Transfer probabilities from cropland, grassland, unused land, and construction land to forest land are increased by 50%, while probabilities of transfer from forest land to grassland, construction land, cropland, and unused land are decreased by 40%.
(3) Economic Development Scenario (JJ): This scenario assumes a 50% acceleration in the rate of conversion from grassland, unused land, and construction land to cropland. Similarly, the rate of conversion from grassland, cropland, and unused land to construction land is also accelerated by 50%, based on thresholds set by previous studies and expert opinions.

Quantification of Ecological Risk Indicators

The purpose of this paper is to evaluate the Regional Ecological Risk Index (ERI) using the Sharpe ratio. The formula for the Sharpe ratio is provided below:

S_p = (E(R_p) − R_f) / σ_p

where E(R_p) is the expected return of the portfolio and R_f is the risk-free rate; E(R_p) − R_f is the excess return of the portfolio, and σ_p is the standard deviation of the portfolio, which is used to measure the level of risk. The ratio is an indicator of risk-adjusted return. The higher the ratio, the higher the return for a given level of risk, and vice versa. Drawing on this concept, the future ESV at a given spatial unit can be considered as the expected ecological return, and uncertainty about future land use change can be considered as risk. Combined with the scenario approach, the likelihood of future land use patterns and the corresponding ESV can be obtained. However, the Sharpe ratio is anomalous when negative excess returns are encountered. Therefore, the formula was improved. The evaluation model for the ERI is shown below:

ERI_j = EER_j / σ_j

where ERI_j is the ecological risk index of region j, and EER_j is the excess ecological return of region j, which is calculated by subtracting the risk-free ecological return from the expected ecological return. In this study, ESV_kj, the ESV of region j in 2020, is taken as the risk-free ecological return; ESV_ij, the total ecosystem services of region j under scenario i in 2035, is taken as the expected ecological return; and σ_j is the standard deviation of EER.
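The chain from standardized services to the ecological risk index can be sketched as follows. The min-max standardization and the summation of the four services follow the definitions above, while the risk index is computed here simply as the excess ecological return divided by its standard deviation across scenarios; the adjustment for negative excess returns is omitted, and all array names are illustrative.

```python
import numpy as np

def normalize(es: np.ndarray) -> np.ndarray:
    """Min-max standardization so each service lies in [0, 1] (ESN_ij)."""
    es_min, es_max = np.nanmin(es), np.nanmax(es)
    return (es - es_min) / (es_max - es_min)

def total_es(water_yield, carbon, soil_retention, habitat_quality):
    """ESI_j: sum of the four standardized services, rescaled back to [0, 1]."""
    esi = sum(normalize(s) for s in (water_yield, carbon, soil_retention, habitat_quality))
    return normalize(esi)   # second normalization keeps ESI between 0 and 1

def ecological_risk_index(esv_scenarios, esv_baseline):
    """Sharpe-ratio-style risk index per cell.

    esv_scenarios: list of 2035 ESI grids (expected ecological return, one per scenario)
    esv_baseline:  2020 ESI grid (risk-free ecological return)
    """
    stack = np.stack(esv_scenarios)      # shape (n_scenarios, rows, cols)
    eer = stack - esv_baseline           # excess ecological return per scenario
    sigma = eer.std(axis=0)              # standard deviation of EER across scenarios
    sigma = np.where(sigma == 0, np.nan, sigma)
    return eer / sigma                   # one ERI layer per scenario
```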
Analysis of the Evolution of Spatial and Temporal Patterns of Land Use

From 2000 to 2020, the land use types in the study area were predominantly unused land, followed by grassland and cropland, with comparatively fewer areas covered by forestland, water, and construction land (refer to Figure 3). Unused land is concentrated mainly in the central part of the basin, belonging to the Taklamakan Desert region. Cropland is primarily distributed in the central-western and southwestern central zones, while forestland is concentrated in the Awat area in a strip-like pattern. Grassland predominantly occupies the northern and southern edge areas, with substantial area coverage, while water and construction land are relatively dispersed. Table 4 shows that from 2000 to 2020, cropland, construction land, and unused land increased, while forestland, grassland, and water decreased. The proportion of unused land increased significantly, by 1.72%, and cropland increased by 1.54%. In contrast, the grassland area experienced the most significant decrease, with its proportion declining by 1.87%. This was followed by a decrease in water proportion of 1.45% and in forestland area proportion of 0.10%. The area of construction land saw only a marginal increase of 0.17%. Therefore, from 2000 to 2020, there were minimal changes in the areas of construction land and forestland.

The study analyzes the spatial dynamics of different land use types during historical periods, showing the areas of increase and decrease for each category from 2000 to 2020. Figure 4 illustrates that cropland has increased mainly in the northwest of Kashgar and the central region of Awat, with a relatively concentrated distribution. Conversely, the areas experiencing loss of cropland are sparse and scattered. Regions with increased forestland exhibit a sporadic distribution pattern, whereas areas with decreased forestland are primarily concentrated in the central region of Awat. Increased grassland areas are sporadically distributed along the southern fringe, whereas areas of decrease are mainly concentrated in the northwest of Kashgar, the central region of Awat, and the northern part of Manas, displaying a relatively concentrated distribution. There are no substantial changes observed in water. Areas of increased construction land are predominantly distributed in the eastern part of Korla, while areas of decrease are primarily concentrated in the northern region of Korla, exhibiting a relatively concentrated distribution. Regions of increased unused land are mainly situated in the northern part of Korla, with a relatively concentrated distribution, whereas areas of decrease in unused land are predominantly characterized by sparse and scattered distribution patterns.

Multi-Scenario Model of Land-Use Change

Using the 2020 baseline image, and based on the expansion changes in land use between 2010 and 2020 in the study area, three scenarios of land use in the TRB for 2035 were simulated (refer to Figure 5). Compared to 2020, according to the statistical data under the baseline development scenario, the area of cropland increased by 15,976.63 km², with a growth rate of 2.46%, and the area of water increased by 945.19 km², with a growth rate of 0.13%. Meanwhile, the areas of forestland, grassland, and unused land decreased by 480.31 km², 13,352 km², and 2461.81 km², respectively, with reduction rates of 0.12%, 2.64%, and 1.40%. Under the economic development scenario, the areas of cropland, water, and construction land increased by 24,545.79 km², 996.13 km², and 702.31 km², respectively, with growth rates of 2.95%, 1.40%, and 0.21%; whereas the areas of forestland, grassland, and unused land decreased by 499.90 km², 19,622.50 km², and 6121.84 km², respectively, with reduction rates of 0.13%, 3.00%, and 1.37%.

Figure 5 depicts the expansion of cropland towards the northwest in the Kashgar area under the baseline development scenario (Figure 5b). The encroachment of cropland and unused land on grasslands is significant, particularly in the northwest region. Unused land primarily expands towards the marginal zones, while changes in construction land, forestland, and water are less pronounced. Under the economic development scenario (Figure 5d), cropland expands more actively in the northwest of Kashgar, the central region of Awat, and the northern region of Korla. In particular, there is extensive encroachment of grassland by cropland in the northwest of Kashgar and the central region of Awat. Unused land mainly encroaches upon grassland at the margins and continues to expand towards the periphery, while construction land predominantly exhibits scattered patchy distributions in the northern region. Under the ecological protection scenario (Figure 5c), the expansion of cropland is notably active in the northwest of Kashgar, the central region of Awat, and the northern region of Korla. Extensive encroachment of cropland into grassland is observed in the northwest of Kashgar and the central region of Awat. Forestland expansion is concentrated in the eastern region of Awat and the western region of Korla, displaying patchy distributions. Water exhibits more active expansion in the southwest of Hotan and the northern region of Yining.

Characterizations of Ecosystem Services under Different Model Scenarios

The total ecosystem service quantity in the study area exhibits significant spatial heterogeneity, showing a general increasing trend in the northern and southern marginal zones. Specifically, under the baseline development scenario, the total ecosystem service quantity decreased by 15.247% compared to 2020, with a decrease of 3.818% in water yield, 0.02% in habitat quality, and 0.061% in carbon storage, and an increase of 0.036% in soil retention. Under the economic development scenario, the total ecosystem service quantity decreased by 13.358% compared to 2020, with a decrease of 4.81% in water yield, 0.253% in habitat quality, and 0.005% in carbon storage, and an increase of 0.451% in soil retention. Under the ecological protection scenario, the total ecosystem service quantity decreased by 19.852% compared to 2020, with a decrease of 3.628% in water yield and 0.516% in habitat quality, an increase of 0.011% in carbon storage, and a decrease of 7.113% in soil retention.

There is significant regional variation in ecosystem services in the study area, with ecosystem service conditions in the marginal areas significantly better than those in the plains (see Figure 6). Specifically, significant changes are observed in the northwestern, southern, and western parts of the study area, while minimal changes are noted in the central and eastern regions, which have the lowest values of ecosystem service functions. In particular, the spatial distribution of the soil retention function predominantly exhibits low values, with relatively higher values observed in the northern and western marginal areas. Water yield is clustered in large high-value regions in the northern area, while the spatial distribution characteristics of habitat quality and carbon storage remain consistent with the overall ecosystem services, showing a pattern of high values in the periphery and low values in the central region.
Characteristics of Changes in the Distribution of Ecological Risks under Different Scenarios

Based on the statistics of changes in risk zones at various levels, it is clear that in 2035 the ecological risk level structure in the study area will be dominated by medium risk, with other risk levels accounting for a small proportion of the total area. This assessment is based on the objective statistics and does not involve subjective evaluations. The medium-risk zones cover the largest and most widespread area, accounting for over 97% of the total ecological risk level area. In contrast, the low-risk zones are the smallest, each accounting for less than 0.2%. Under the baseline development scenario, the area distribution of high-risk levels is the widest, accounting for 1.14%. Meanwhile, the area of medium-risk levels reaches its maximum under the economic development scenario, accounting for 98.17%, followed by the baseline development scenario, accounting for 98.03%. In the medium-high-risk levels, the economic development scenario increases by 0.36% compared to the ecological protection scenario. Meanwhile, the ecological protection scenario has the largest proportions of low-risk and medium-low-risk levels, accounting for 0.15% and 1.32%, respectively, with increases of 0.12% and 1.05%.

The spatial distribution of risk zones remains consistent across the three scenarios, mainly dominated by medium risk (see Figure 7). However, there are significant changes in the medium-low-risk and high-risk levels. Under the baseline development scenario (see Figure 7a), high-risk areas are primarily located in the northwest of Kashgar, the central part of Awat, the central part of Hotan, and the northwest direction of Korla. The widest distribution of high-risk areas is in the northwest of Kashgar, while other regions exhibit scattered and patchy distributions. In the economic development scenario (see Figure 7b), the main changes occur in the medium-high-risk and medium-low-risk levels. Medium-low-risk areas are concentrated mainly in the northwest of Kashgar and the central part of Awat. Medium-high-risk areas are mainly distributed in the central part of Kashgar, the north-central part of Awat, the northwest of Korla, and the central part of Hotan. Other areas exhibit scattered patchy distributions. In the ecological protection scenario (see Figure 7c), significant changes occur mainly in the medium-low-risk level. These changes are mainly distributed in Atushi, the central and northern parts of Kashgar, the northern part of Awat, the central and southern parts of Hotan, and the central part of Korla. The distribution in Kashgar is the most concentrated, while the high-risk level is mainly distributed in scattered patches, concentrated in the Kashgar and Awat areas.
Spatial Heterogeneity of Ecological Risk Indices and Their Formation Mechanisms

Land is a crucial element for socio-economic activities and serves as a tangible representation of human development and utilization of the natural environment. Changes in land structure and patterns are closely related to the spatiotemporal distribution of ecological risk. By assessing the spatial patterns of ecological risk, we can reveal the impacts of land use changes on the structure and function of ecosystems. In the 2035 simulations for the TRB, the spatial distribution of ecological risk remained largely unchanged across all three scenarios. The risk tends to increase from the central part of the basin towards the periphery, with medium risk being the most common. The study area is characterized by peripheral regions with high- and medium-high-risk areas, while low- and relatively low-risk areas are mainly distributed around the basin. The most extensive distribution is that of areas with medium risk. Unused land is the predominant land use type in the study area, especially in the basin's central region, which includes the Taklamakan Desert, China's largest and the world's tenth-largest desert, as well as the second-largest mobile desert globally. As a result of the land use types, the overall ecosystem services in the basin's central region are relatively low [41]. The transitional zones adjacent to the desert have extensive grasslands and forests. However, these areas are experiencing a decrease in forest, grassland, and water due to the encroachment and expansion of unused land [42]. As a result, ecosystem services are declining. In this environmentally harsh region, the types of land available for human development and utilization are relatively limited. Therefore, the expansion of construction land will not be significant over the next 20 years, although it will be relatively concentrated in distribution.
Under the baseline development scenario, land use changes mainly follow historical developmental trends. The expansion of cropland and unused land predominates, while ecological land areas such as forests, grasslands, and water show a decreasing trend. Construction land experiences concentrated expansion, with relatively insignificant land encroachment. However, the proportion of cropland and unused land encroaching on other ecological areas increases, leading to significantly reduced vegetation cover, sparse vegetation, severe soil erosion, and intense desertification. Consequently, ecosystem service indices such as water yield, habitat quality, and carbon storage are relatively low, while the soil conservation service index increases. Under the baseline development scenario, the transitional zones at the periphery of the basin present higher ecological risks. Therefore, it is necessary to control agricultural and ecological spaces. Protection measures should be improved in the agroforestry transition zones, as well as in the grassland and unused land transition zones, to enhance forest and grass cover and strengthen ecosystem stability. In the economic development scenario, rapid urbanization and agricultural expansion intensify human disturbance, which significantly contributes to the decline in regional ecological environment quality [43]. The expansion of construction land and cropland increases the size of areas classified as medium to high risk. Urbanization accelerates soil erosion and water loss, while cropland expansion encroaches upon other ecological land, resulting in decreased grassland and forest areas. This threatens the ecological balance [44]. Therefore, ecological risks in transitional areas between cropland and grassland or forest are worsened. In the ecological protection scenario, the proportion of cropland, forest land, and water increases, resulting in significant changes in areas classified as low and medium-low ecological risk. These changes are mainly concentrated in the transitional zones between grassland and cropland, which are characterized by high vegetation cover, high organic matter content in the soil, and fertile soil, and thus exhibit strong ecosystem service functions. However, the study area's unique location and fragile ecological environment exacerbate the pressures on cropland, forest land, and unused land due to intensified human disturbances. Therefore, it is crucial to nurture mountain vegetation and water resources in this region for sustainable development.
Simulations of multiple future scenarios indicate an urgent need to optimize land use structures to provide decision support for sustainable development and high-quality ecological environments. Considering the extensive desertification in the hinterland of this basin, which may exacerbate desertification in peripheral areas, it is imperative to rationally plan urban development boundaries, limit uncontrolled expansion, and improve land use efficiency. Urban development boundary planning is significant in mitigating the exacerbation of desertification and improving land use efficiency [45]. To enhance the ecological benefits of forest and grassland areas with low to medium-low ecological risks, it is essential to promote the positive feedback evolution of forest-grassland ecosystems and to urgently strengthen land remediation efforts. The goal is to intensify cropland and grassland use, limit the disorderly expansion of unused and construction land, and harmonize relationships among various ecological landscapes. This will reduce ecological risk levels and enhance the stability of the land ecosystem. Additionally, measures can be taken to increase soil carbon sequestration and fertility potential. This can be achieved by actively implementing ecological protection projects [46], optimizing species richness, enhancing ecosystem stability and resilience, and promoting synergies among ecosystem services. These actions can enhance ecosystem productivity and sustainability [47,48].

The TRB is a crucial ecological barrier in western China. The degradation of its ecological environment poses significant challenges to the high-quality development of ecological civilization. The rapid development and transformation of urban construction have inevitably altered land use patterns and functions, leading to ecological risks. In the core area of the study region, the activation speed of marginal sand dunes has accelerated, posing a severe threat to cropland. This is particularly concerning as artificial oases continue to expand, replacing natural ones. As a result, the buffer zone between oases and deserts is continuously shrinking, which has a negative impact on desert-edge vegetation. Therefore, to implement the concept of guided restoration, it is essential to create artificial ecosystems in the Taklamakan Desert by constructing artificial oases. Additionally, adjusting the industrial structure and optimizing the allocation of agriculture, forestry, animal husbandry, and subsidiary industries can help mitigate the risks of vegetation destruction caused by human activities. Under the conditions of economic and technological priority, broad-scale afforestation is an effective method for protecting vegetation on the edges of deserts. Therefore, it is necessary to strictly adhere to policies and regulations that protect the ecological environment, mitigate the negative impacts of human activities on the environment, and promptly initiate ecological restoration efforts.

Comparison with Previous Research

Currently, the ecological risks of land use changes in the TRB have not been assessed. This study integrates simulated land use results and ecosystem services into ecological risk assessment by introducing the economic Sharpe ratio [33].
Extensive research has been conducted on the ecosystem services of the TRB. Ecological security assessment based on the ecological footprint indicates that water scarcity is a significant constraint on the socio-economic development of the TRB. This is exacerbated by climate change and rapid cropland expansion. Furthermore, the significant growth of artificial ecosystems, referred to as artificial oases, resulting from the transformation of natural oases or deserts, worsens landscape fragmentation and the ongoing degradation of ecological security levels in the basin [49]. The increase in ecological risks due to water scarcity is consistent with the findings of this study, primarily reflected in a gradual rise in ecological risks in the transitional zones between cropland and forests. Research on the TRB also indicates an overall increase in ecosystem vulnerability levels, with some areas experiencing extremely severe vulnerability [50]. This finding is consistent with the gradual rise in ecological risk levels discussed in this study. The evaluation of ecological risks resulting from ecosystem degradation shows a significant correlation between these risks and the swift urbanization and expansion of cropland, which are the primary drivers of ecological risks in the TRB [13].

The study shows that the spatial distribution of ecological risk resulting from land use change in the TRB is closely aligned with the spatial distribution pattern of ecosystem service functions in the TRB after the implementation of ecological restoration projects, as described in other studies [51]. This suggests that human activities significantly influence land use change-induced ecological risks.

Shortcomings and Prospects

The PLUS model's Markov module relies exclusively on past land use changes for quantitative prediction. However, accurately predicting the transfer of land use types in the future is challenging due to the influence of multiple factors, limiting the model's accuracy. To enhance accuracy, it is advisable to comprehensively consider policy and natural economic factors for precise quantitative analysis. Furthermore, the spatiotemporal evolution of ecological risk is influenced by multiple factors, which requires additional research. To clarify the impact of land use change on ecological risk, longer time series studies are necessary, along with a thorough investigation of the relationship between ecosystem services and ecological risk. In summary, future research on the ecological risk of land use change should consider multiple variables, including social, economic, and environmental factors, and conduct more empirical studies. Furthermore, ecological protection should be a dynamic adaptive response, focusing on the trends of land use change in high-risk areas and intensifying land restoration efforts to enhance resilience against ecological risks.
Conclusions

(1) Between 2000 and 2020, the primary land use types in the study area were unused land, followed by grassland and arable land. Forestland, water, and construction land had relatively less coverage. There was a significant increase in unused and arable land, while grassland, forest land, and water exhibited a declining trend. From 2020 to 2035, the main trend in the three development scenarios was the expansion of arable and unused land, while the grassland area decreased significantly. There were minimal changes in other land types. Unused land was mainly clustered in the central part of the basin, while arable land was concentrated in the central areas of Kashgar and Awat. The Awat region had a strip-like pattern of forest land, while the northern and southern marginal areas had mainly grassland. Water and construction land areas were relatively dispersed in comparison.

(2) Between 2020 and 2035, the total quantity of ecosystem services in the study area showed significant spatial differentiation under the various scenarios. There was an overall increasing trend in the total quantity of ecosystem services in the northern and southern marginal areas. Under the baseline development scenario, the total quantity of ecosystem services decreased by 15.247% compared to 2020. Under the economic development scenario, it decreased by 13.358% compared to 2020. Under the ecological protection scenario, it decreased by 19.852% compared to 2020.

(3) Between 2020 and 2035, the various scenarios showed that ecological risk levels had similar characteristics. The majority of the area fell under the moderate-risk category, with other risk levels accounting for a smaller proportion of the total area. Moderate-risk areas had the largest and most widespread distribution, covering over 97% of the total ecological risk level area. Conversely, low-risk areas had the smallest area, with proportions all below 0.2%. The distribution of high-risk areas varied across scenarios. In the baseline development scenario, high-risk areas were mainly located in the northwest of Kashgar, the central part of Awat, the central part of Hotan, and the northwest part of Korla. Under the economic development scenario, there were significant changes in the distribution of moderate-high-risk and moderate-low-risk areas. Under the ecological protection scenario, there were noticeable changes in the distribution of moderate- to low-risk areas. These changes were mainly concentrated in specific regions, with Kashgar having the most concentrated distribution.

Figure 1. Overview of the study area. (a) Location of the Tarim River Basin in China, (b) Tarim River Basin counties, (c) Tarim River Basin DEM.

Figure 3. Spatial distribution of land use from 2000 to 2020.

Figure 4. Distribution of increases and decreases in different land use types.

Figure 5. Distribution of multi-scenario land use projections for 2035.
Figure 6. Spatial distribution of multi-scenario projections of ecosystem services from 2020 to 2035.

Figure 7. Spatial distribution of ecological risks under multiple scenarios in 2035.

Author Contributions: Y.C. and X.Z.: methodology, software, data curation, validation, and writing-original draft; Y.C. and X.Z.: project administration and writing-review and editing; W.S.: conceptualization, funding acquisition, writing-review and editing. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by The Third Comprehensive Scientific Investigation in Xinjiang (Grant No. 2022xjkk0905).

Table 1. Land use change ecological risk data characterization.

Table 2. The habitat sensitivity data.

Table 3. The threats data.

Table 4. The areas of land use types for the years 2000, 2010, and 2020.
Utilizing business intelligence and digital transformation and leadership to enhance employee job satisfaction and business added value in Greater Amman Municipality

The goal of this study was to find out how business intelligence systems, AI, and digital leadership affect how satisfied employees are with their jobs and how much value they add to companies in the Greater Amman Municipality. After the study samples were collected and screened, a total of 246 samples were approved for use in the PLS software-based analysis. The results of this study showed that putting in place business intelligence tools, artificial intelligence, and digital leadership all made employees happier with their jobs and gave businesses more value. The research showed that there are four key parts to digital leadership: commander, communicator, collaborator, and co-creator. The main parts of business intelligence are Data Warehouse, Data Mining, Business Process Management, and Competitive Intelligence. Findings show that digital transformation is made up of three key parts: changing processes, developing business models, and changing domains. The results also show that an employee's level of job satisfaction, which includes things like business success, work commitment, and job thinking, is linked to how much value they add to the company. Intriguingly, the current results go against those of earlier studies, which said that the variables of interest have no effect on how happy employees are with their jobs or how much value companies add for their customers. Taken as a whole, the results of this study suggest that businesses should start doing things that make employees happier at work and increase the value of the business. The current study is innovative because it focuses on the most important parts of business intelligence, artificial intelligence, and digital leadership in order to improve employee job satisfaction and business added value in the Greater Amman Municipality.

Introduction

Recent trends in business performance enhancement are founded on the application of information technology systems (Holmström, 2022). Manita et al.
(2022) suggest that the far-reaching impacts of digital technology on society and industry can be categorized as business performance.Improving overall performance through adding values for end products and services and increasing employee job satisfaction are challenges most firms are meeting today, and many of those companies are looking to digital technology for finding, and transforming new solutions (Vaska et al., 2021).It is necessary to conduct new research to define the capabilities and characteristics of business intelligence, artificial intelligence, and digital leadership on business performance, as well as their capacity to assist employees in improving their work performance and achieving activities and goals (Judeh et al., 2022a).This is necessary to generate new products and services that have a greater potential for adding value (Basile et al., 2023).The process of change, improvement, and development that occurs in the characteristics of a product as a result of the application of systems, tools, and technological methods of communication that lead to the discovery of new ideas and products, the development of novel solutions, the management of operations through technological means, and the overall improvement of business performance through the addition of valuable and new values is what is meant by the concept of business added-value (Kulinich et al., 2022). Our inductive framework is based on an extensive literature analysis, and it demonstrates how improvements in business intelligence, artificial intelligence, and digital leadership have contributed to improvements in employee job performance and in the added values that businesses receive.The purpose of this study is to assist businesses in evaluating the influence that BI, AI, and DL have on improving employee work performance and business added values.The study investigates the effects of adopting business intelligence, digital transformation, and digital leadership concepts and measures whether or not it could have an effect on enhancing employee job performance and business added value (Buck et al., 2023).Every business can evaluate its added value using a variety of metrics, including the percentage of satisfied customers, the rate at which new customers are acquired, and the number of repeat transactions.Following the discussion on how to perform a review, we will investigate the findings of previous studies and offer some suggestions for the direction of future research. The remaining portion of this investigation is composed of four separate sections, the first of which is the introduction to each section.In Section 2, we will talk about the research that came before.In the third segment, we go over the steps involved in conducting research and collecting data.In Section 4, the findings are discussed, and then in Section 5, the overall findings and interpretations of the research are presented. 
Business Intelligence Businesses faced challenges related to the quantity, quality, precision, and validity of the data when attempting to acquire and manage massive quantities of data (Wang et al., 2018).The result is that business intelligence as a method has matured into a modern, cutting-edge approach to gaining a lead in the marketplace through the identification of previously untapped value (Carbajal et al., 2023;Judeh et al., 2022b).The ability of a company to store, organize, analyze, and combine the various types of data it collects to obtain insights and create new products is greatly enhanced by business intelligence (BI) (Ahmad et al., 2023).Organizations have been compelled to use analytical business intelligence tools due to the difficulty of work without extensive use of technological systems, their ability to deal with and analyze big data and attempt to extract new values, and the complexity of the process required to achieve business performance (Ahmad & Mustafa, 2022;Younus, 2022).This is because modern businesses require a wide array of technical infrastructure to function (Mbima & Tetteh et al., 2023).Businesses can gain new insights, streamline their decision-making processes, address previously intractable issues, and eventually offer improved services and goods to customers by storing data in data warehouses, classifying it, verifying its accuracy, and searching for new data relationships (Schmitt et al., 2023;Bygstad et al., 2022). Digital Transformation Artificial intelligence is often compared to the study of human intellect by computer scientists (Holmström, 2022).Computer scientists frequently make comparisons between artificial intelligence and the area of computational methods used to aid businesses in running their operations (Manita et al., 2020).Research shows that involving the target audience in the product's conception and design phase yields excellent results (Vaska et al., 2021).Because of its potent medium for bridging the distance between customers and businesses, digital technology is a crucial part of the creative and innovative process (Lara & Florez, 2022).To most academics, the incorporation of technological tools and systems into administrative, operational, and industrial contexts was a must for the growth of their respective fields (Avgerou & Walsham, 2017).To do this, we formulated longterm strategies that can be used as a springboard for developing detailed plans for future product iterations (Hai et al., 2021).A plan of action was developed as part of the procedure (Ulas, 2019).Artificial intelligence is one example of a technological instrument that can help managers and decision-makers make sense of the vast amounts of information available in online repositories and databases (Frank et al., 2019).It allows companies to adopt and implement new operational models, which can enhance their current situations in a variety of ways (Leone et al., 2021).These include the development of novel products with the potential to increase customer loyalty, the fortification of the company's capacity for transformation and development, and the gain of market share and advantages over competitors (Ahmad et al., 2021). 
Digital Leadership Leadership in the digital age necessitates guiding followers to make the most of the company's online tools for the benefit of all (Tigre et al., 2023).Many businesses are experiencing significant changes in their organizational structures and the roles that employees perform because of the rapid development of digital technology in recent years (Olson et al., 2005).Numerous aspects of the company will need to undergo change to accommodate the new circumstances.These include the types of jobs accessible, the company culture, and the technology used in the workplace (Abidin et al., 2023;Dwivedi et al., 2020).Transformational efforts drive shifts to better meet immediate needs while also laying the groundwork for an uncertain future (Shin et al., 2023).To effectively mitigate these problems and aid in the transformation, digital leaders need a unique collection of skills (El Akid et al., 2023).Leaders exert considerable sway because they shape their organizations to face an increasingly unclear and unstable future (Petry, 2018).For instance, it is difficult for digital leaders to inspire their teams to work with the new set of technologies that may or may not be adopted in the future because of the inherent uncertainty of the future of digital technology (Sheninger, 2019).This is a common issue for digital leaders, and it's exacerbated by the fact that many leaders lack the skills required to be effective digital leaders (Shin et al., 2023).Good news is they seem determined to finally acquire these skills (Ahmad et al., 2022).Organizations struggled in the digital economy because they lacked the tools that would allow them to reach customers, provide distinctive and innovative products ahead of competitors at competitive prices, and maintain a stable position in relation to competitors (Hanandeh & Mustafa, 2022;Hammouri & Abu-Shanab, 2017).As a result of the rising costs of commercial, operational, and transportation expenses, the increasing reliance on technological systems for the management of large amounts of data (Gretzel et al., 2015), and the rising expectations of customers, the majority of today's businesses are investing in technological advancements to remain competitive (Tigre et al., 2023). 
Employee Job Performance and Business Added Values

Workplace effectiveness has been the subject of countless studies in the fields of industrial management and corporate behavior (Chen et al., 2023). It can be defined as an individual's observable action or behavior that creates value for the company and helps it achieve its goals (Ghorbanzadeh et al., 2023; Hammouri et al., 2022). Employee job performance refers to the extent to which an employee meets the broad performance expectations of the company (Mishra & Kasim, 2023). Over the past few decades, there has been a profound shift in how job performance is conceptualized, from a narrow focus on fixed positions and duties to a broader grasp of roles within dynamic organizational contexts (Anasori et al., 2023). Because of the increasingly competitive and global nature of the modern workplace, it has become increasingly important for businesses to be flexible enough to adapt to new situations quickly (Alkharabsheh et al., 2023). A broader conception of what constitutes good work in the modern workplace is required, one that includes all efforts that add to the success of the business (Al-Zagheer et al., 2022). Role performance, adaptive performance, proactive performance, and citizenship actions are represented in the definition of individual performance (Al-Zagheer et al., 2022). According to this revised framework for measuring employee productivity, role performance can make a difference at three distinct tiers: the individual, the team, and the company. Competence, flexibility, and initiative are the three main types of behavior that can be broken down into subdimensions of job position performance (Hanandeh et al., 2023).

Research Methodology

The primary objective of this research is to understand how business intelligence, AI, and digital leadership enhance employees' job satisfaction. A quantitative cross-sectional design was utilized to test the research model. The population of this study was employees working in the Greater Amman Municipality. A five-point Likert scale (1 = strongly disagree; 2 = disagree; 3 = neutral; 4 = agree; 5 = strongly agree) was used to evaluate the study's key constructs, with the questionnaire administered through Google Drive. PLS was used to test the study hypotheses. After data cleaning, 246 respondent responses were approved for analysis and discussion of the study's hypotheses. The number of observations exceeded the number of predictors by a ratio of at least 10 to 1.

Research Results

The measurement model underwent tests to evaluate its validity and reliability. Regarding reliability, one method used to assess both reliability and internal consistency is Cronbach's alpha. Hair et al. (2006) emphasized that Cronbach's alpha should exceed the threshold of 0.70. In Table 1, the results showed a high level of internal consistency for the scale, as Cronbach's alpha values for each construct surpassed the recommended threshold (0.70).
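The reliability check described above can be illustrated with a short script. The sketch below is illustrative only and assumes a hypothetical item-level response matrix (the construct name and the scores are placeholders, not the study's dataset); it computes Cronbach's alpha for one construct and flags whether it clears the 0.70 threshold recommended by Hair et al. (2006).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items in the construct
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 5-point Likert responses (rows = respondents, columns = items)
# for a single construct, e.g., "digital leadership"; the values are invented.
dl_items = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

alpha = cronbach_alpha(dl_items)
print(f"Cronbach's alpha = {alpha:.3f}; acceptable (> 0.70): {alpha > 0.70}")
```

Applied per construct, this kind of matrix-level computation underlies reliability figures such as those reported in Table 1; the composite reliability and AVE checks discussed next are instead derived from the standardized factor loadings.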
To measure convergent validity, the composite reliability (CR) and average variance extracted (AVE) tests were utilized. Fornell and Larcker suggested that CR and AVE should meet the recommended values of greater than 0.70 and 0.50, respectively. The findings in Table 1 indicated that the values of CR and AVE for all constructs exceeded the threshold values. Additionally, the analysis revealed that all indicators for each factor were significant, with standardized path loadings surpassing the acceptable value of 0.50. Furthermore, the evaluation of discriminant validity was conducted using the Fornell-Larcker criterion. This criterion examines whether the square root of the average variance extracted (AVE) for each construct exceeds the inter-factor correlations between constructs. Table 2 presents the results, indicating that the square root values of all AVEs (shown as diagonal bold values) were higher than the correlations between the constructs. This outcome confirms the presence of discriminant validity. After assessing the validity of the measurement model, the structural model was examined. The results indicated that the R-squared (R²) value was 55.4%, exceeding the acceptable threshold of 25% stated by Hair et al. (2016). The research findings supported all the proposed hypotheses, as evidenced by the statistically significant p-values presented in Table 3. The results revealed that digital leadership (DL) had a direct and significant influence on employees' job satisfaction (β = 0.215, p < 0.05) and business added value (β = 0.419, p < 0.05), supporting H1 and H2. Additionally, the findings demonstrated that business intelligence (BI) significantly predicted both employees' job satisfaction (β = 0.314, p < 0.05) and business added value (β = 0.197, p < 0.05), thereby supporting H3 and H4, respectively. Moreover, digital transformation (DT) was statistically significant in explaining employees' satisfaction with their jobs (β = 0.297, p < 0.05) and positively influenced business added value (β = 0.319, p < 0.05), confirming H5 and H6, respectively. Finally, the study showed that employees' job satisfaction (EJS) had a positive and significant impact on business added value (β = 0.307, p < 0.05), thus confirming H7.

Research Conclusion and Implication

The aim of this study was to utilize business intelligence, digital transformation, and digital leadership to enhance employee job satisfaction and business added value in the Greater Amman Municipality. The study also aims to provide a full account of the capabilities of applying concepts such as business intelligence, digital transformation, and digital leadership and their effects on job satisfaction and business added value. The findings reveal that business intelligence, digital transformation, and digital leadership have significant impacts on employee job satisfaction (H1, H3, and H5) and business added value (H2, H4, and H6).
The study showed the importance of changing the role of managers from traditional managers to organizational leaders by adding to the classic theory of directing, controlling, and decision-making a behavioral approach of direct interaction and of providing employees with the information required to complete their work. The research highlighted the importance of digital leadership, represented by increased flexibility, the transfer of valuable information to employees, and support for employees' entrepreneurial and creative ideas, which can add new value to products and create a creative environment capable of increasing employee satisfaction (Holmström, 2022).

The research also provides more information about business intelligence systems and their ability to assist managers and employees in improving business performance. The statistical analysis showed that applying business intelligence can enhance employees' capabilities in creating new value for end products able to compete in a competitive environment (Costa Melo et al., 2023). As companies confronted the COVID-19 pandemic and most businesses shifted to performing their work through information technology systems, companies observed that employees could perform their work while significantly reducing transaction costs. The study has shown that applying the concept of digital transformation can reduce the proportion of direct interaction between employees and customers and reduce the time lost in completing work, thereby increasing the time employees can devote to raising their productivity (Abidin et al., 2023).

The results show that DL, BI, and DT all have positive effects on employee job satisfaction and on the value employees add to the business. This finding is consistent with earlier studies (Holmström, 2022; Costa Melo et al., 2023; Basile et al., 2023; Abidin et al., 2023; Ghorbanzadeh et al., 2023). This study also shows how digital leadership improves employee job performance and business added value: it changes the roles of managers in ways that improve their leadership skills, their communication and cooperation with employees, their cooperation with partners and customers, and their creative decision-making, in line with prior findings. Similarly, business intelligence improves employee job performance and business added value by managing data in the data warehouse efficiently and effectively to reduce data conflicts, using web-based analytical tools to improve organizational performance, and translating knowledge capabilities into new products and services with competitive advantages, again consistent with previous studies. Lastly, the study shows how digital transformation improves employee job performance and business added value through reliance on digital transformation, the use of new business models, and a complete change in the scope of work to become fully dependent on technological development.
Future Research While the current study focuses on the Greater Amman Municipality, future research could examine the impact of BI and digital transformation on employee job satisfaction and business added value in different organizational settings.Investigating how these concepts operate in various industries and sectors could provide a broader understanding of their applicability and effectiveness.Moreover, conducting longitudinal studies would be valuable to examine the long-term effects of BI and digital transformation initiatives on employee job satisfaction and business added value.By tracking these variables over an extended period, researchers can assess the sustainability and durability of the observed effects, as well as identify any potential changes or fluctuations.On the other hand, investigating potential mediating and moderating factors could enhance our understanding of the mechanisms through which BI and digital transformation influence employee job satisfaction and business added value.For example, exploring the role of organizational culture, leadership styles, or employee engagement as mediators or moderators could provide deeper insights into the complex relationships between these variables. Table 1 The results of the reliability and validity test
v3-fos-license
2021-10-17T05:15:15.014Z
2021-10-15T00:00:00.000
239002408
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0258655&type=printable", "pdf_hash": "dd91715cabef9ad46d2d345e236dbdb38552654b", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44758", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "sha1": "03cc03b6c773c2f46f84906b95c07b5188ff57f6", "year": 2021 }
pes2o/s2orc
Differential STAT gene expressions of Penaeus monodon and Macrobrachium rosenbergii in response to white spot syndrome virus (WSSV) and bacterial infections: Additional insight into genetic variations and transcriptomic highlights Diseases have remained the major issue for shrimp aquaculture industry for decades by which different shrimp species demonstrated alternative disease resistance or tolerance. However, there had been insufficient studies on the underlying host mechanisms of such phenomenon. Hence, in this study, the main objective involves gaining a deeper understanding into the functional importance of shrimp STAT gene from the aspects of expression, sequence, structure, and associated genes. STAT gene was selected primarily because of its vital signalling roles in stress, endocrine, and immune response. The differential gene expressions of Macrobrachium rosenbergii STAT (MrST) and Penaeus monodon STAT (PmST) under White Spot Syndrome Virus (WSSV) and Vibrio parahaemolyticus/VpAHPND infections were identified through qPCR analysis. Notably, during both pathogenic infections, MrST demonstrated significant gene expression down-regulations (during either early or later post-infection time points) whereas PmST showed only significant gene expression up-regulations. Important sequence conservation or divergence was highlighted through STAT sequence comparison especially amino acid alterations at 614 aa [K (Lysine) to E (Glutamic Acid)] and 629 aa [F (Phenylalanine) to V (Valine)] from PmST (AY327491.1) to PmST (disease tolerant strain). There were significant differences observed between in silico characterized structures of MrST and PmST proteins. Important functional differentially expressed genes (DEGs) in the aspects of stress, endocrine, immune, signalling, and structural were uncovered through comparative transcriptomic analysis. The DEGs associated with STAT functioning were identified including inositol 1,4,5-trisphosphate receptor, hsp90, caspase, ATP binding cassette transmembrane transporter, C-type Lectin, HMGB, ALF1, ALF3, superoxide dismutase, glutathione peroxidase, catalase, and TBK1. The main findings of this study are STAT differential gene expression patterns, sequence divergence, structural differences, and associated functional DEGs. These findings can be further utilized for shrimp health or host response diagnostic studies. STAT gene can also be proposed as a suitable candidate for future studies of shrimp innate immune enhancement. STAT, Suppressors of Cytokine Signalling (SOCS), and Protein Inhibitor of Activated STAT have also been identified through various research efforts in the past years [27][28][29]. Furthermore, there had been some studies on the STAT gene expression changes in different shrimp species after pathogenic infections. STAT gene expression was up-regulated in Fenneropenaeus chinensis challenged with WSSV and Vibrio anguillarum [30]. The significant upregulation of Marsupenaeus japonicus [31] and L. vannamei [32] STAT gene expressions were identified under WSSV infection. Nevertheless, under some diseased conditions, Macrobrachium spp. also demonstrated non-differential STAT gene expression. For example, M. nipponense had no significant STAT gene expression changes after Aeromonas hydrophila [33] and non-O1 Vibrio cholerae [34] bacterial infections. 
Interestingly, due to the immune signalling importance of STAT gene, there exists risk of shrimp STAT manipulation by the invading pathogens as demonstrated by the shrimp STAT hijacking by WSSV virus [35]. The study of gene expression is an efficient strategy for the fast and accurate determination of gene activation or repression during pathogenic infections. Real time quantitative PCR (qPCR) and RNA-Seq analyses are more commonly utilized for gene expression studies in recent decades [36,37]. qPCR involves the usage of intercalating dyes or probes and qPCR machine for the determination of gene expression fold change between different treatment groups [38]. RNA-Seq utilizes high throughput next-generation sequencing (NGS) technology and has advantages in terms of price, efficiency, difficulty, and application range compared to more traditional methods such as microarray [39]. Despite the increasing numbers of gene expression studies involving pathogen-challenged shrimps, there had been a lack of research focusing on the comparison of immune gene expressions across different pathogenic conditions especially STAT gene. Therefore, this study involved identification and comparison of differential gene expressions of M. rosenbergii STAT (MrST) and P. monodon STAT (PmST) upon WSSV and V. parahaemolyticus/Vp AHPND infections. STAT gene was selected mainly because of its diverse functional importance through JAK-STAT pathway. This was followed by sequence and structure divergence identification between MrST and PmST. This is because genetic variations can lead to significant gene expression changes and functional alterations during pathogenic infections. A comparative transcriptomic analysis was also conducted to elucidate the underlying stress, endocrine, immune, signalling, and structural DEGs associated with STAT gene functioning during pathogenic infections. Overall, this study had the aim of obtaining more information on the functional importance of shrimp STAT genes involved in the aspects of expression, sequence, structure, and associated genes. The aim was successfully achieved. Pathogen preparations For WSSV virus propagation, the feeding of local P. monodon shrimps (15-20 g body weight) with WSSV-infected shrimp muscle tissues was conducted. The moribund shrimps were confirmed to be WSSV positive through PCR [40] and stored at -80˚C. WSSV virus stock solution was then prepared [41] which involved the homogenization and lysis of the WSSV-infected shrimp muscle tissues in TN Buffer followed by centrifugation, filtration, and storage at -80˚C. The WSSV stock solution viral copy number was quantified using primer pairs VP28-140Fw and VP28-140Rv [42]. On the other hand, P. monodon suspected with AHPND outbreak were collected and validated through both clinical sign observation and AP3 PCR detection method [43]. The Vp AHPND bacteria [44] were selectively propagated by incubating the digestive organs of Vp AHPND -infected shrimps in the order of tryptic soy broth (TSB+), thiosulfate citrate bile salt (TCBS) agar, and tryptic soy agar (TSA+). The bacteria preservation was done through cryovials (CRYOBANK™) at -80˚C and utilized for downstream experiments. Pre-challenge works For the WSSV and V. parahaemolyticus challenge with Macrobrachium rosenbergii, M. rosenbergii juvenile prawns (5-8 g body weight) were purchased from a hatchery at Kuala Kangsar, Perak, Malaysia. The acclimatization of the prawns was conducted for seven days under aseptic experimental setup. 
Each tank contained 10 prawns with aerated freshwater at 28 ± 1.0˚C. Whereas for WSSV challenge with P. monodon, locally obtained juvenile 4 th generation P. monodon shrimps (15-20 g body weight) of Mozambique, Africa strain (10 shrimps per tank) were acclimatized for seven days under aseptic experimental setup with aerated artificial seawater (30 ppt) at 28 ± 1.0˚C. For the Vp AHPND experimental challenge, disease tolerant crossbred (13 th generation Madagascar strain with 5 th generation local strain) juvenile P. monodon shrimps (15-20 cm body length) were involved. The acclimatization of the shrimps (27 shrimps each tank) was done for seven days under aseptic experimental setup with aerated artificial seawater (30 ppt) at 28 ± 1.0˚C. The negative screening of the prepared M. rosenbergii and P. monodon shrimps was conducted before experimental challenge using PCR methods and confirmed to be WSSV-free [40] and V. parahaemolyticus/Vp AHPND -free [43] respectively. Besides that, for the WSSV experimental challenge, P. monodon shrimps were injected with 100 μl filtered WSSV stock solution (4.11 x 10 5 copies/μl). Sterile PBS was injected for the negative control group shrimps. The shrimp hepatopancreas collection was done at 0, 3, 6, 12, 24, and 48 hpi and also 12 days post-infection (dpi) (survivors) and stored at -80˚C. The challenge details were described in previous publication [47]. All tanks were equipped with aerators and water filters. The experimental challenges were conducted with three biological replicates for each treatment and control groups. The positive screening of the challenged shrimps was done through PCR methods for WSSV [ Total RNA extraction and first strand cDNA synthesis Total RNA samples were extracted from shrimp hepatopancreas at each post-infection time interval of both treatment and control groups using NucleoSpin RNA II Extraction Kit (Macherey's-Nagel, Germany), RNA Isolation Kit (Macherey's-Nagel, Germany), and TransZol Up Plus RNA Kit (TransGen Biotech, Beijing, China) respectively. The extracted RNA samples were also treated with TransScript 1 One-Step gDNA Removal and cDNA Synthesis SuperMix (TransGen Biotech, Beijing, China) to achieve DNA contaminant removal and first strand cDNA synthesis for subsequent downstream applications. The manufacturer's protocols were followed for all kits utilized. Expression profile comparison through qPCR analysis The STAT gene expression profiles of M. rosenbergii (MrST) and P. monodon (PmST) during WSSV and V. parahaemolyticus/Vp AHPND infections were determined and compared through quantitative real-time PCR (qPCR) analysis. Three biological replicates with three technical replicates each were applied for every treatment group. The qPCR primers were designed through PrimerQuest Tool software (https://sg.idtdna.com/Primerquest/home/Index) and listed in S1 Table. The MrST qPCR experiments were conducted using TaqMan 1 Universal PCR Master Mix kit and Step One Plus Real-Time PCR System 1 instrument (Applied Biosystems, Foster City, CA, USA). The qPCR reaction (20 μl) consisted of 10 μl TaqMan Universal RT-PCR Master Mix, 1 μl primers/probe set containing 900 nM of forward reverse primers, 300 nM probe, 2 μl template cDNA, and nuclease-free water. The qPCR cycling program involved 50˚C for 2 mins, 40 cycles of 95˚C for 10 mins, 95˚C for 15 secs, and 60˚C for 1 min. Elongation factor 1-alpha (EF1a) gene was chosen as the internal control reference gene [49]. The experimental protocol details were mentioned previously [50]. 
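Both the MrST and the PmST qPCR runs are ultimately converted from Ct values into fold changes using Livak's 2^(-ΔΔCt) relative quantification method, described later in this section. The snippet below is a minimal, illustrative sketch of that calculation with invented triplicate Ct values (the numbers are placeholders, not data from this study), normalizing the target STAT gene to the EF1a reference gene and then to the uninfected control group.

```python
import numpy as np

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by Livak's 2^(-ΔΔCt) method.

    ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(treated) - ΔCt(control).
    """
    d_ct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values for STAT (target) and EF1a (reference)
# in a pathogen-challenged group at one time point versus the PBS control group.
fc = fold_change_ddct(
    ct_target_treated=[24.1, 24.3, 24.0],
    ct_ref_treated=[18.2, 18.1, 18.3],
    ct_target_control=[26.5, 26.4, 26.6],
    ct_ref_control=[18.3, 18.2, 18.1],
)
print(f"Relative STAT expression (fold change vs control): {fc:.2f}")
```

A fold change above 1 indicates up-regulation relative to the control group and below 1 indicates down-regulation, which is how the MrST and PmST expression patterns reported in this study are read.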
The PmST qPCR experiments were carried out using the GoTaq® qPCR Master Mix kit (Promega, Madison, Wisconsin, USA) and an Agilent Technologies Stratagene Mx3005P instrument. The qPCR reaction (20 μl) included 10 μl GoTaq® qPCR 2X Mix, 500 nM forward primer, 500 nM reverse primer, 2 μl template cDNA, and nuclease-free water. The qPCR cycling program of 95˚C for 2 mins, followed by 40 cycles of 95˚C for 15 secs and 56˚C for 35 secs, was utilized. The EF1a gene was selected as the internal control reference gene as well. The analysis of the Ct values obtained was conducted through Livak's 2^(-ΔΔCt) relative quantification method [51]. The differential gene expression values determined were then statistically validated through One-Way ANOVA with post hoc Duncan test using SPSS software Version 22 (significance value: P<0.05). The post hoc Duncan test was carried out to identify the exact differences among treatment groups (classified under alphabetical subsets) when the One-Way ANOVA was significant. The raw data for the qPCR experiments are shown in S1 Data.

Sequence and structural comparison of STAT genes

The MrST, PmST, and LvST sequences were retrieved from the NCBI nucleotide database (NCBI Accession Numbers: KT380661.1; AY327491.1; HQ228176.1) [46]. The MrST sequence validation was done through PCR and subsequent Sanger sequencing analysis. In addition, the PmST sequence of the disease tolerant P. monodon shrimps used in this study was determined through a PCR technique using a conserved site targeting strategy. The PCR primers involved were designed using PrimerQuest Tool software (https://sg.idtdna.com/Primerquest/home/Index) and listed in S2 Table.

Identification and comparison of annotated differentially expressed genes (DEGs)

For additional identification and validation of STAT-related functional genes, the determination and comparison of Differentially Expressed Genes (DEGs) from M. rosenbergii and P. monodon under WSSV and V. parahaemolyticus/Vp AHPND infection conditions were conducted. The DEGs involved were identified and retrieved from the RNA-Seq results of associated previous publications [44][45][46][47]. The data are available at the NCBI SRA database: SRR1424572, SRR1424574, SRR1424575, and SRP153251. Generally, the extracted M. rosenbergii and P. monodon hepatopancreas RNA samples were treated with DNase and subsequently sent for cDNA library preparation and NGS sequencing using the Illumina HiSeq 2000/BGI-SEQ 500 Sequencer platform by the Beijing Genome Institute (Hong Kong). The raw sequencing reads were filtered, and the clean reads were used for DEG determination. This was followed by the functional annotation of the identified DEGs through mapping to different databases. The details of the RNA-Seq data analysis were described in previous publications [44][45][46][47]. The stress, immune, and endocrine DEGs were mainly identified and compared between different treated samples. The patterns of interaction, including co-activation or co-repression of DEGs under different pathogenic conditions, especially those involved in STAT functioning, were elucidated. The qPCR validation details of these RNA-Seq results were also described in previous publications [44-47].
Sequence comparison between MrST, PmST, and L. vannamei STAT (LvST)

Several selected shrimp STAT complete cds sequences, MrST (Accession Number: KT380661), PmST (disease tolerant strain), PmST (Accession Number: AY327491.1), and LvST (Accession Number: HQ228176.1), were obtained and compared using Clustal Omega software at the translated amino acid (Fig 2) and nucleotide (S5 Fig) levels. The important conserved and diverged sites between the aligned sequences were marked; a significant number of diverged sites were located at the 5' UTR and 3' UTR regions. At the nucleotide level, the major conserved area was found in the middle region, whereas long conserved overlaps were more frequently identified at the start and end regions. Important diverged sites were determined between PmST (disease tolerant strain) and PmST (AY327491.1). Besides that, at the translated amino acid level, both the major conserved area and long conserved overlaps between the compared STAT sequences were located in the middle region. Despite the multiple important diverged sites identified between the two compared PmST nucleotide sequences, at the amino acid level only two amino acid alterations were discovered, at positions 614 aa and 629 aa.

Both MrST and PmST protein sequences were predicted to be intracellular and non-transmembrane through Protter analysis. MrST and PmST proteins had high probabilities of being located within the cytoplasmic region (estimated probability of 0.61) and nuclear region (estimated probability of 0.94), respectively, based on the MultiLoc 2 prediction analysis. The secondary structures of the MrST and PmST protein sequences were predicted as shown in S10A and S10B Fig. For the MrST protein sequence, 23 high probability common motifs and one Src homology 2 (SH2) domain profile (probability score, 13.148) were found (S11A Fig), whereas 25 high probability common motifs and one SH2 domain profile (probability score, 13.519) were matched to the PmST protein sequence (S11B Fig). 3D protein structures of the MrST and PmST protein sequences were predicted, which contained α-helix, β-sheet, and coil structures (S12A and S12B Fig).

Comparative transcriptomic differentially expressed genes (DEGs) analysis

The important stress, endocrine, immune, signalling, and structural DEGs of M. rosenbergii and P. monodon during WSSV and V. parahaemolyticus/Vp AHPND infections were identified and compared in Fig 3 below. More details, including gene identities, differential expression values, and annotation sources, are given in the S7 Table. Based on Fig 3 and the S7 Table, M. rosenbergii treatment groups possessed a higher number of up-regulated DEGs compared to P. monodon treatment groups. The WP group possessed a higher number of down-regulated DEGs compared to other treatment groups. Some DEGs were only down-regulated in the WP group, involving inositol 1,4,5-trisphosphate receptor, apoptosis-stimulating of p53 protein 1, mitochondrial coenzyme A transporter, polysaccharide lyase, trypsin, C-type Lectin, proPO, ceramide synthase, STAT, and TBK1. Intriguingly, Hsp90, transglutaminase, ALF1, ALF3, and ankyrin demonstrated a sole up-regulation pattern across all treatment groups. Peroxisomal acyl-coenzyme A oxidase and catalase were up-regulated in M. rosenbergii treatment groups while down-regulated in P. monodon treatment groups. On the other hand, trehalose transporter was down-regulated in M. rosenbergii treatment groups while up-regulated in P. monodon treatment groups. Moreover, dopamine N-acetyltransferase and HMGB only showed an up-regulation pattern in P. monodon treatment groups. Apoptosis-inducing factor (AIF) and IMD were only differentially expressed in the Vp AHPND-infected treatment group.
Caspase was down-regulated in all treatment groups except being up-regulated in VM group. Differential gene expression pattern of MrST and PmST during WSSV and V. parahaemolyticus/Vp AHPND infections STAT gene was chosen for gene expression analyses involving different pathogenic-challenged treatment groups because of its highly diverse gene functioning in the JAK-STAT signalling pathway which possesses stress, endocrine, and immune importance [20,21,23]. Unlike the potential up-regulation of STAT gene expressions in the early WSSV post-infection time points associated with previously described hijacking mechanism adapted by WSSV for viral replication [35], the MrST gene expressions were down-regulated in the early WSSV post-infection time points (Fig 1) Intriguingly, elevated STAT gene expressions caused by VP28 vaccination also aided in lowering viral gene expression and thus slowed down WSSV establishment in WSSV-infected P. monodon juvenile [63]. This infers a competitive relationship between host STAT gene expression and WSSV viral gene expression. Hence, the up-regulation of PmST gene expressions from 3 hpi to 48 hpi in response to WSSV infection (Fig 1) in this study is suggested to be the collective effect of stronger immune response from disease-resistant P. monodon and WSSV viral hijacking. On the other hand, for V. parahaemolyticus infection, the down-regulation pattern of MrST gene expressions observed (Fig 1) is supported by a similar scenario of down-regulated STAT gene expressions at later hpi of V. parahaemolyticus-infected S. paramamosain [64]. The upregulated PmST gene expressions identified in response to AHPND infection (Fig 1) Important conservation and divergence between STAT sequences The vital conserved areas determined between MrST, PmST (disease tolerant strain), PmST (AY327491.1), and LvST sequences (nucleotide and amino acid) can be applied in cross-species conserved primer development. This is exemplified by the development of conserved primers for decapod crustaceans (including shrimps) [66] and different bear species [67] for mitochondrial genome sequencing purpose. A probable common ancestry is inferred between MrST and LvST based on their overlapping stop codon positions at nucleotide level. This is supported by another inference of a single common ancestor origination for ALF genes of all crustaceans [68]. MrST sequence was most diverged from other compared STAT sequences at nucleotide (S5 Fig) and amino acid (Fig 2) levels. The differential gene expressions between MrST and PmST (Fig 1) might be significantly influenced by these genetic sequence (nucleotide and amino acid) variations involving divergence, additions or deletions. The important effect of genetic variations on the gene expressions had been highlighted by previous works [69,70]. Moreover, the two amino acid changes (614 aa and 629 aa) detected between PmST sequences compared could be essential in the enhanced functioning of PmST in disease tolerant P. monodon. These amino acid changes are caused by nonsynonymous mutations. The key outcome of such amino acid changes can be inferred to be the improvement or alteration of PmST protein recognition ability or binding affinity which can be related to some previous research findings [71][72][73]. 
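The site-by-site comparison behind these observations can be illustrated with a short script. The sketch below assumes two already-aligned, equal-length amino acid fragments (the sequences and the position offset of 610 are invented placeholders, not the actual PmST sequences) and reports the positions where the residues differ, in the spirit of the K614E and F629V alterations described above.

```python
def diverged_sites(seq_a: str, seq_b: str, offset: int = 1):
    """List (position, residue_a, residue_b) for mismatches between two
    aligned, equal-length amino acid sequences; positions are numbered from
    `offset` so a fragment can keep the full-length protein coordinates."""
    if len(seq_a) != len(seq_b):
        raise ValueError("Sequences must be aligned to the same length")
    return [(i + offset, a, b)
            for i, (a, b) in enumerate(zip(seq_a, seq_b))
            if a != b and a != '-' and b != '-']   # skip alignment gaps

# Invented aligned fragments standing in for the two PmST variants; the offset
# places the first residue of the fragment at position 610 of the protein.
frag_reference = "MRLQKVAGTNSDLSSQWNDF"   # stands in for PmST (AY327491.1)
frag_tolerant  = "MRLQEVAGTNSDLSSQWNDV"   # stands in for the disease tolerant strain
for pos, a, b in diverged_sites(frag_reference, frag_tolerant, offset=610):
    print(f"Position {pos}: {a} -> {b}")
```

On these placeholder fragments the script reports substitutions at positions 614 (K to E) and 629 (F to V), mirroring how nonsynonymous differences between aligned STAT variants can be located once a multiple sequence alignment is available.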
The divergence at the 5' UTR and 3' UTR regions of the aligned STAT sequences suggests the potential involvement of pre-or post-transcriptional regulatory elements found in these regions in causing differential gene expressions and alternative disease tolerance. This is validated by previously determined significant correlation between increased gene expression and adaptive evolution in the 3' UTR and amino acid sequence [74]. By referring to the NCBI BLAST search results, although being closely related to shrimp species (F. chinensis) (89%), MrST amino acid sequence was also strongly conserved with crab species (S. paramamosain and E. sinensis) (88%; 87%). This is supported by the close evolutionary relationship between MrST and crab species (S. paramamosain and E. sinensis) in the phylogenetic analyses conducted (S6A and S6B Fig). Intriguingly, PmST amino acid sequence had high homology to P. trituberculatus (87%) in the NCBI BLAST search. This is supported by a previously determined mitochondrial DNA similarity between P. monodon and P. trituberculatus [75]. Furthermore, there was a close clustering between Macrobrachium prawns and Penaeidae shrimps (P. monodon, F. chinensis, and L. vananmei) in the phylogenetic analyses (S6A and S6B Fig), which suggests a similar ancestry between them. In silico predicted MrST and PmST structural variations The MrST protein sequence possessed slightly lower theoretical isoelectric point (pI) (6.04) compared to PmST protein sequence (6.11). The subcellular localization prediction of MrST and PmST proteins with intracellular and non-transmembrane properties was determined to be within the cytoplasmic (P = 0.61) and nuclear (P = 0.94) regions respectively. This is validated by a previously identified correlation of protein pI with subcellular localization by which cytoplasmic region contains higher number of acidic proteins compared to nuclear region [76]. Besides that, the four functional conserved domains (STAT_int, STAT5_CCD, STAT_ bind, and SH2) identified for both MrST and PmST protein sequences successfully contributed to the STAT gene identity validation of these sequences. The two amino acid changes described in Section 3.2 were found within the SH2 conserved domain which has functional importance in STAT dimerization and signalling specificity [77]. Additional transcriptomic insight into functional DEGs Based on the transcriptomic DEGs displayed in Fig 3 and S7 Table, further understanding was obtained for the DEGs' functionalities particularly those related to STAT. This is important due to the insufficient number of previous studies on the direct comparison between the transcriptomic DEGs of M. rosenbergii and P. monodon under viral and bacterial infection conditions. Overall, the significantly higher number of up-regulated DEGs compared to downregulated DEGs is postulated to be the effect of host immune response activation during pathogenic infection. In addition, the higher number of up-regulated DEGs identified in M. rosenbergii compared to P. monodon can be correlated to its stronger immune response. Such strong immune response was previously demonstrated by the ability of adult M. rosenbergii to achieve clearance of WSSV virus compared to susceptible P. monodon [78]. Intriguingly, some down-regulated DEGs were uniquely found in the WP group which resulted in its relatively higher number of down-regulated DEGs among the compared treatment groups. This is postulated to be caused by the decreased host response of survived P. 
monodon after successful WSSV clearance. The functioning of stress DEGs in the early stress-induced immunoendocrine response is inferred based on both up-regulation of these DEGs in this study and previous publications [79][80][81][82]. The up-regulation of endocrine DEGs across different treatment groups is postulated to be the effect of higher energy needs for host immune response activation and post-infection cell repair. This is validated by the significance of energy balance for stress adaptation and aquatic animal tolerance highlighted in previous publications [83,84]. The activated shrimp immune response led to the up-regulation of immune and signalling DEGs. The up-regulation of structural DEGs suggests the high probability of cell and tissue structural repair across different post-infection time points. This is supported by the identification of structural DEGs (including actin-associated genes) with cytoskeleton functions in V. parahaemolyticus-infected L. vannamei [85]. All these DEGs with stress, endocrine, immune, signalling, and structural functionalities are vital in the overall host response against invading pathogens. Interestingly, the uniquely up-regulated DEGs, including peroxisomal acyl-coenzyme A oxidase, catalase, and TBK1 in M. rosenbergii infers their greater importance in M. rosenbergii host response. On the other hand, dopamine N-acetyltransferase, trehalose transporter, and HMGB showed unique up-regulation in P. monodon during pathogenic infections, which suggests their stronger importance in P. monodon host response as well. The special functional importance of AIF and IMD signalling pathway in AP group may be further investigated to gain deeper understanding into their sole differential expressions in AP group. Overall, a synergistic functioning of shrimp stress, endocrine, immune, signalling, and structural genes during pathogenic infections can be postulated which is vital for host survival and elimination of invading pathogens. Moreover, these DEGs can be jointly proposed as Survival Adaptation Molecular Patterns (SAMPs) with STAT as one of the crucial signalling components. Conclusions In conclusion, during WSSV and V. parahaemolyticus/Vp AHPND infections, MrST gene expressions were significantly down-regulated (during either early or later post-infection time points) whereas PmST gene expressions were only significantly up-regulated. In addition, the sequence and structural comparison of MrST and PmST provided significant insight into the important similarities or differences between the compared shrimp STAT sequences. These differences were inferred to be one of the deciding factors resulting in the differential gene expression patterns observed. STAT gene plays vital diverse roles in JAK-STAT signalling pathway especially during pathogenic infections. Hence, the systematic comparison of selected omics data was done to identify the important DEGs (stress, endocrine, immune, signalling, and structural) in M. rosenbergii and P. monodon when exposed to WSSV and V. parahaemolyticus/Vp AHPND infections focusing on those involved in STAT functioning or potentially associated with JAK-STAT signalling. The functional grouping of these DEGs validated the diverse signalling roles of STAT. Overall, the findings of this study will be able to provide valuable insight for future research towards better understanding of the shrimp immune response especially STAT gene functioning. 
This study is also novel in its emphasis on the stress and endocrine DEGs, which are often neglected because most research focuses on immune DEGs. These DEGs are nevertheless important: stress DEGs function as alert and trigger factors, whereas endocrine DEGs function as regulatory and survival factors. The qPCR primers designed, the sequence and structural divergence identified, and the important DEGs obtained can be applied for shrimp health or immune response activation diagnostic purposes. The STAT gene can also be proposed as a suitable candidate for the study of immune response enhancement or regulation due to its diverse signalling importance in shrimp immunity and survival.
v3-fos-license
2022-07-16T15:20:02.122Z
2022-07-13T00:00:00.000
250576756
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://l1research.org/article/download/367/389", "pdf_hash": "f418a05dc888415babce2da2e5b48c6384f68457", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44761", "s2fieldsofstudy": [ "Sociology" ], "sha1": "3d7fde7ad1ec4ec931d0dae4423a1f9533da1ed4", "year": 2022 }
pes2o/s2orc
LAYERING LITERACIES AND METAGAMING IN COUNTER STRIKE: GLOBAL OFFENSIVE The primary purpose of this ethnographic research is to explore what literacy practices unfold through and beyond gaming, how metagaming is conceptualized and how metagaming shapes the players' view and relation to their literacy practices with a particular focus on the first-person shooter Counter-Strike: Global Offensive (CS:GO) . Data from this study were drawn from ethnographic research of four young males within and around CS:GO in the context of Cyprus. Findings indicate that players go through a cycle of layering literacies in order to evolve their metagaming. Metagaming is about creating fluid forms of optimal or unexpected tactics and strategies during game play that go beyond the rules of the game to counter the opponent(s) by using pre-existing, current, and new knowledge from past game plays, as well knowledge and information from online and offline literacy practices. These layered literacies are multidirectional, interest-based and are part of learning related to high-level making decisions. The results con- tribute to the body of literature suggesting ways videogames and more specifically metagaming, could support literacy in L1 classrooms. Literacy and videogames Answering what literacy comprises has been-and continues to be-a focus of scientific discussions worldwide (Bartlett, 2008;Gee, 2003;Horton, 2007;Sang, 2017). Up until the late 1980s, the dominant literacy pedagogy relied almost exclusively on traditional definitions of literacy as a reified set of basic skills, such as reading and writing, and was restricted to paper-based, formalized, and standardized forms of language within the classroom context (Applebee, 1984;Green & Dixon, 1996). During the last three decades, however, there has been a shift towards more social and ideological models of literacy (Street, 1995(Street, , 2001. The social approach to literacy emphasizes literacy as a socially situated practice (Jones & Hafner, 2021) in which people address reading and writing rooted in conceptions of knowledge, identity, and existence with other social, political, economic parameters and local ideologies (Street, 1995). Hence, it extends beyond the conventional view of literacy as printed and written texts and includes meaning-making practices using digital technologies (e.g., videogames, weblogs, mobile texts, etc.; Gee, 2003;Gerber & Price, 2011), exploring the changes of beliefs toward literacy in the process of practices (Cope & Kalantzis, 2000). Here, literacy is considered situated because literacy practices may vary in different contexts. This expanded concept of literacy that emphasizes the diversity of social-cultural practices, and the diversity of the context of communication practice in the modern world-along with the fact that communication media have become multimodal-brought about the concept of multiliteracies (Kress 2003;Lankshear & Knobel, 2003;New London Group, 1996). Multiliteracies entail aspects of searching, sifting, evaluating information, understanding, and creating multimodal texts that involve multiple modes of representation, such as gestures, sound, and language (Kress, 2010;Lankshear & Knobel, 2011;Perry, 2012), according to the learning environment, the social space, and interests and expectations of the individuals involved in the learning process (Cope & Kalantzis, 2000). 
However, even though gaming literacy (Bourgonjon, 2014;Zimmerman, 2009) embodies multiliteracies, it also differentiates, because videogames are not just multimodal texts that need to be read or written. They are digital environments that require action in terms of solving the problems within of the game (Apperley & Beavis, 2011). In this sense, gaming literacy requires practical and interpretive knowledge of visuals, sounds, writing, and other forms of expression that are integral to the gaming experience (Buckingham & Burn, 2007). In other words, players, interact in the game world in terms of texts, spaces, objects, actions, and the ways they can use these aspects to accomplish their goal of winning the game and solving its problems. To solve the problems of the game though, players need to understand how the game mechanics work (Gee, 2014). Here, game mechanics refer to the particular components of the game at the level of data, algorithms, various action, and control mechanisms afforded to the players within the game context (Rouse III, 2005). Game mechanics are what the game allows the player to do, how to do it, and how this leads to a compelling game experience (Donaldson, 2017;Gee, 2014). For example, in Counter-Strike: Global Offensive (CS:GO), grenades can be effective tools for eliminating opponents, and walls are good places for hiding; there is a time limitation of 1 minute and 55 seconds for each game round, and interactive texts, such as maps, inform players about the actions of co-players and opponents. In another videogame, the Shadow of the Colossus, the game mechanics include elements that can make the player act in different ways, such as climbing, riding a horse, whistling, swimming, or diving (Sicart, 2008). In this regard, players interact with the game rules-the mechanics (Gee, 2014)-while simultaneously layering their literacies because they are engaged in combinations of independent and collaborative, digital and nondigital practices and spaces as they make meaning within and across texts and modalities (Abrams, 2017). Players are engaged in layering literacies (Abrams, 2015) and they produce paratexts, such as game reviews, YouTube videos, and fan discussions of games; hence, they tend to become engaged in relevant print-based and multimodal literacy practices, making these activities a fluid example of situated learning (Apperley & Beavis, 2011;Nebel et al., 2016).From this point of view, the concept of layering literacies can be understood as a multidirectional and fluid process in which the players are involved in collaborative, self-directed, and interest-based experiences by going back and forth and around game play and other peripheral activities; here, players have their own rhythm when it comes to learning the process (Abrams, 2015(Abrams, , 2017. Thus, acquiring gaming literacy does not merely involve learning how to play videogames or how to read multimodal texts during game play on a superficial level. Players must also have knowledge of the intertextual navigation, as well as the requisite reading skills of the official and unofficial paratexts, that is, the system of game-related media products, communications, and artifacts (Consalvo, 2007). Gaming literacy requires skills to contextualize the information contained in light of the credibility of the particular sources (Apperley & Walsh, 2012) in terms of how the images and possible actions can be used to solve problems (Beavis, 2013;Gee, 2014) and the ways of manipulating the game story to win. 
The current research draws on the concept of gaming literacy (Bourgonjon, 2014) and layering literacies (Abrams, 2015(Abrams, , 2017 to account for the various practices in and around the videogame CS:GO with respect to the semiotic domain of games, the ability to produce meanings to solve problems, and the ways these layering practices are helping players evolve their metagaming. From Gaming Literacy to Metagaming Considering that the rapidly advancing technological landscape is challenging individuals' skills to solve problems flexibly and think critically (Greene, 2021), the idea of how and who is considered literate in contemporary tech-oriented societies has been changing (Beavis et al., 2009;Cope & Kalantzis, 2015). Thus, metagaming stands as a core notion in the current paper, providing answers to how videogames can function not only as dynamic literacy environments that can engage players in layering literacies, but also in producing high-level decision making for solving problems. An example of high-level decision-making can be found in the videogame, Oscar Night, in which two teams wanted to lose the game in order to dodge a stronger opponent. To make their loss more believable, they played an unconventional, or "off-meta," strategy (see Kokkinakis et al., 2021, p. 2). The meaning and value of metagaming, though, is not actually heavily debated in game studies as it is in the fields of mathematics and economics (see Howard, 1972;Nash, 1997); nonetheless, the term has various definitions. The word "meta" is of Greek origin, entailing multiple meanings, with the most relevant one for the current paper referring to "higher or beyond" (Merriam-Webster, n.d.) According to Garfield (2000)-a mathematician and creator of the card game, Magic: The Gathering (Edwards, 2020)-metagaming is what a player brings to the game (e.g., the equipment), what a player takes away from a game (e.g., more experience), what happens between games (e.g., preparation), and what happens during the game other than the game itself (e.g., linguistic utterances). For Salen and Zimmerman (2003), metagaming is considered "the relationship between the game and outside elements, including everything from player attitudes and play styles to social reputations and social contexts in which the game is played" (p. 481). Steinkuehler (2007) suggested metagaming as a literacy practice in which players theorize about their own game, both within the digital environment of the game world and beyond it in the online fandom space (e.g., websites, discussion forums, chat rooms, blogs, wikis). In the same spectrum, a more recent study (Kahila, Tedre, Kahila, Vartiainen, Valtonen and Mäkitalo, 2021) suggests metagaming can occur within and outside of game play. Within game play, players devise, test, analyze and improve strategies to master the game, but they also can engage in other metagame activities such as watching game videos, discussing, searching for information, creating, and sharing activities and consuming activities. Carter et al. (2012) stated that metagaming includes "the goals and symbols of advancement implicit in the game architecture" (p. 15) and the pregame meaning that metagaming is optional content within the official game and excludes activities that do not contribute to success in the game. 
Offering a broader conception, Boluk and LeMieux (2017) defined metagaming as "a critical practice that encompasses everything occurring before, after, between, and during games as well as everything located in, on, around, and beyond games" (p. 315). Players perform metagame routines using real-life information that typically would not be accessible within the bounds of the game, here with an aim to gain advantage over other players during game play (Boluk & LeMieux, 2017). An example of this conception is role-playing games (RPG). In RPGs, metagaming is the information the player has, but the character does not have. In this sense, metagaming is when players use knowledge that goes beyond or exists outside the game to change the way they play their game avatar. First, although there is an understanding that the strength of metagaming in games lies in its ability to hook the interest of players, the variety of notions surrounding metagaming suggest that there is no unified term for metagaming. Second, metagaming has not yet been connected as a vital notion embedded in the concept of gaming literacy. For these reasons, the present paper seeks to present the ways in which metagaming and layering literacies relate to acquiring gaming literacy. Understanding games and practices through frame analysis To understand the practices of players during game play and the symbolic actions within it, I revisit Goffman's (1974) frame analysis. The idea of frame analysis as a theoretical tool was proposed by Pargman and Jakobsson (2008) as an alternative theory of the concept of the "magic circle" (Huizinga, 1970;Salen & Zimmerman, 2004). The notion of the "magic circle," which has been used by many scholars (e.g., Juul, 2008;Castronova, 2005), describes game play as a meaningful activity, unconcerned with materiality that is separate from the ordinary demands of everyday life (Juul, 2008). Scholars have criticized how the magic circle has been used to depict videogames as spaces in which players get into a "magic circle" totally separate from the outside world (Pargman & Jakobsson, 2008). Thus, frame analysis (Goffman, 1974) offers a lens for understanding game play without dichotomizing gamers into their online and offline lives (Kiourti, 2019). According to Goffman (1974), a frame denotes a set of conventions for a type of situation that organizes subjective experience, meaning, material doings, utterances, and events. In other words, a frame is what the participants are allowed to do or say in a specific situational context, and the frame depends on the rules, norms, expectations, and possible roles available to social actors to make sense of any given situation or encounter. For example, killing an opponent during game play would be perceived as pressing a button on the keyboard and simultaneously moving the mouse while having all of one's attention on a computer screen. In the frame of playing, the individual is a player, and in the frame of the game, the individual is an avatar. In any situation, multiple framings can occur simultaneously, and individuals can partake in multiple frames that can be switched among quite rapidly. Within frame analysis, there are norms that allow or prohibit actions. For instance, in the frame of social society, killing someone is considered a public wrong, and the individual will be punished. 
In the frame of playing, the individual is not only allowed to kill as many characters as desired, but specific types of killings, such as headshots, may be rewarded because they are considered to be skilled player actions. Within the gaming frame, it is also crucial for players to know what to say and how to say it and to be aware of the social and cultural settings in which each communicative act is embedded. Thus, game play conversations are enriched with special words, phrasings, and grammatical patterns (Gee, 2014) that exhibit a high frequency of short and long pauses "for the sake of focused game play" (Ensslin, 2012, p. 99). These multilayered social frames are exactly what help us locate and understand videogames as environments that empower individual creativity, experimentation, investment in learning, critical thinking, and agency for change within a wider social context in which actions can take place in a symbolic way.

METHODOLOGY

Although there is a rich line of research on literacy in videogames, there is a dearth of research on the importance of connecting metagaming, as an aspect of a player's engagement in layered literacy practices, with gaming literacy more broadly. As such, I address the following research questions:

• What kind of literacy practices unfold through and beyond gaming, particularly in the first-person shooter CS:GO?
• How is metagaming conceptualized in relation to CS:GO?
• How is metagaming shaping the players' view of and relation to their literacy practices?

The complex nature of literacy practices in gaming environments required the exploration of rich data; thus, the current study embraced the methodological approaches of ethnography (Hammersley & Atkinson, 2007) and virtual ethnography (Hine, 2000). Ethnography helps the researcher participate "overtly or covertly in people's daily lives for an extended period of time, watching what happens, listening to what is said, asking questions; in fact, collecting whatever data are available to throw light on the issues with which he or she is concerned" (Hammersley & Atkinson, 2007, p. 4). During the research, I drew data from a conventional face-to-face ethnographic study of gamers, but also from online gaming and other digital environments (e.g., game plays, participants' Facebook activity, Google searches, YouTube). For this reason, I chose to use virtual ethnography (Hine, 2000), a methodology that attends to the specific features of the technologies being analyzed and supports observation of heterogeneous data (texts, audiovisual data, etc.). The research procedure, participants, data collection, and data analysis are discussed below.

Participants and data collection

The research data were collected in Cyprus, a geographical context with a diglossic Cypriot Greek speech community (Karyolemou & Pavlou, 2001). Thus, the participants' conversations were in the Greek Cypriot dialect and were translated into English. The participants were a group of four young Cypriot gamers (aged 16-17) named Demetris, Nestoras, Panos, and Philippos, whom I systematically observed (46 observations and 195 hours in total) through face-to-face video recordings and screen recordings in online environments over a period of nine months (May 2015-January 2016).
The data in the next sections (see Figure 8) also include excerpts with some of the participants' friends from the local gaming community (e.g., Alex, Nikos, Gregory). To address ethical considerations, all of the participants, their parents, and the friends included in the paper were provided with relevant information about the research and what participation would involve. It was made clear that participation was voluntary, and informed written parental consent was obtained, along with written consent from the participants themselves. Considering that participant observation is a core activity in ethnographic research, gaining access to the social world of the participants and establishing a trusting relationship was a crucial aim. While entering the research field, I tried to situate myself in the space, develop a rapport with the participants, and make myself available to them upon request. For example, the participants would sometimes gather at Kinx during late night hours (e.g., from 2:00 a.m. until 7:00 a.m.) and/or over their summer holidays during late mornings, afternoons, or even for a whole day. Data collection was a dynamic procedure that required me to participate in the participants' daily lives, observe their practices, listen to what was said, and ask questions. I was personally immersed in the ongoing gaming and other daily activities of the participants, which took place across a number of spaces: in a gaming center named Kinx (where the participants could use computers, primarily for the purpose of playing multiplayer computer games), at their houses, in the places where they ate (e.g., cafeterias), and even in nightclubs. Research data were collected through video recordings of the participants as they played CS:GO (using GoPro HERO3+ action cameras), video-screen recordings of their game play captured via Open Broadcaster Software, rich field notes of their overall literacy practices (e.g., posts on social media, Google searches), field interviews, post-field diary notes, and a semi-structured interview with each participant after the completion of the research. To protect the research participants' identities, their names and any identifiers have been replaced by pseudonyms.

Data analysis

For the analysis of CS:GO game play, I used unified discourse analysis (Gee, 2014), which offers tools to analyze game play as conversations between players and the game world. Here, videogames are composed of combinations of units (e.g., boxes, texts, equipment, maps) that make up patterns that are meaningful to players, and videogames share the syntax and semantics of the human visual world. The syntax of games is composed of the objects, spaces, and tools in the game that players can combine to make actions happen and accomplish their goals during game play: "The semantics is a conceptual labeling of these spaces and things not just in terms of their real-world identity (e.g., a crate), but in terms of what they are functionally good for in the game (e.g., breakable to get a power up)" (Gee, 2014, p. 43). Thus, unified discourse analysis helped me understand and analyze the ways the participants were making meaning during game play in terms of their actions. For the rest of the research data, I followed multimodal discourse analysis (Kress & Van Leeuwen, 2001).
Multimodal discourse analysis considers how a text draws on various modes of communication, such as pictures, film, video, images, and sound, in combination with words to make meaning. Within this frame, multimodal discourse analysis was the scientific tool that helped me analyze the various semiotic modes and signs (e.g., layout, colors) in the data. I coded the data by relying on inductive coding (Saldaña, 2015) of the various literacy practices in which the participants were engaging within and around game play. I identified the main themes and bottom-up categories and organized them using the software MAXQDA (Kuckartz & Rädiker, 2019). More specifically, I analyzed each participant's screen recordings of their game play frame by frame, because it was important first to have a detailed view of each participant's actions. Additionally, to re-examine and reconfirm specific categories, I merged and analyzed all participants' screen recordings of the same game sessions, which allowed me to analyze their game play segment by segment collectively. From this analysis, a number of categories arose, such as literacy practices in social media, long strategies, and interconnection of texts in game. I then re-examined the categories, focusing on critical incidents across all participants' screen recordings during game play. This examination produced more detailed categories, such as strategies for deceiving opponents, Facebook activity, and Twitch professional matches, which enhanced access to the sizable dataset and allowed me to organize the observations into thematic categories linked to (a) the practices of the participants during game play, (b) the layering of literacies around game play, and (c) metagaming. In the final stage, after the overall data analysis, I asked the participants to read extracts of the analysis to reconfirm and discuss whether it represented them accurately. Hence, data triangulation (Denzin, 2015) helped to strengthen the credibility and validity of this research.

CS:GO game mechanics and maps

CS:GO is a first-person shooter game played between two teams competing against each other in a 30-round game. At the start of the game, the players join either the Counter-Terrorist or Terrorist side and play on that team for the first half of the game; once the game reaches the halfway point, the sides swap. Each round lasts 1 minute and 55 seconds, counting down to 0 seconds. During this time, the Terrorists must plant a bomb, while the Counter-Terrorists need to defuse it. Once the bomb is planted, it takes 40 seconds to explode. Each round is completed when a team wins the round by achieving the aims of the game (planting or defusing the bomb, or killing the opposing team) or when the round's time limit has been reached. CS:GO tracks and evaluates how many times each team has won, how many players an individual player has killed, and how many times the players have died; the game rewards players with in-game money for killing enemies or completing team objectives. Both teams receive additional money at the beginning of a new round, with the winners of the last round receiving more money than the losing team. If players are killed before the completion of the round, they become spectators, but they can still communicate with each other after they die (Counterstrike Fandom, n.d.).
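As a concrete illustration of the round structure and economy just described, the following minimal Python sketch models the win conditions and per-round money awards. It is a reconstruction based only on the rules summarized here; the specific bonus and reward amounts are assumed placeholder values rather than the game's real payout table, and the halftime side swap and full 30-round match loop are omitted.

```python
# Minimal sketch of the round logic summarized above.
# Monetary values are placeholders, not CS:GO's actual payout table.

ROUND_SECONDS = 115        # each round lasts 1 minute and 55 seconds
BOMB_FUSE_SECONDS = 40     # a planted bomb explodes after 40 seconds

def resolve_round(terrorists_alive, cts_alive, bomb_planted,
                  bomb_defused, seconds_since_plant, seconds_elapsed):
    """Return the round winner according to the win conditions described
    above (bomb outcome, elimination, or timeout), or None if still running."""
    if bomb_defused:
        return "Counter-Terrorists"
    if bomb_planted and seconds_since_plant >= BOMB_FUSE_SECONDS:
        return "Terrorists"
    if cts_alive == 0:
        return "Terrorists"
    if terrorists_alive == 0 and not bomb_planted:
        return "Counter-Terrorists"
    if seconds_elapsed >= ROUND_SECONDS and not bomb_planted:
        return "Counter-Terrorists"   # time ran out without a plant
    return None

def award_money(current_money, won_round, kills,
                win_bonus=3250, loss_bonus=1400, kill_reward=300):
    """Both teams receive money each round; winners and kills earn more."""
    return current_money + (win_bonus if won_round else loss_bonus) + kill_reward * kills

# Example: the bomb was planted and the fuse ran out before a defuse.
print(resolve_round(terrorists_alive=1, cts_alive=2, bomb_planted=True,
                    bomb_defused=False, seconds_since_plant=40, seconds_elapsed=110))
```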
CS:GO has a variety of maps in which players can interact. In this research, the participants preferred the Dust II map because they considered it to be an equal map for both sides: both spawn sites (i.e., where the avatars first appear in the game) are in the middle line of the map (see Figure 1). To understand the descriptions of game play in the forthcoming sections, I briefly explain the structure of Dust II, which has a four-square map layout (see Figure 1).

FINDINGS

The findings suggest that the participants were interactively engaged with multiple problems of subtle complexity during game play. This drove the players to go through a cycle of layering literacies to improve their metagaming in their next game play. Metagaming in CS:GO is a critical practice of collective, individual, long-term, short-term, fluid, optimal, and unexpected strategies and tactics that occurs within the act of game play and is the result of a cycle of online and offline layering literacies within and beyond game play. More specifically, metagaming involves a player selecting and creating optimal and/or unexpected tactics and strategies during game play, using pre-existing, current, and new knowledge from past game plays, and layering literacies through a variety of online and offline practices: (a) solving problems in a multimodal literacy space, (b) using situated communicative patterns to implement metagaming strategies effectively within the limited time, (c) watching live tournaments to learn better tactics and strategies for metagaming, (d) exploring gaming sites and forums, (e) watching and discussing tutorials and co-players' game plays, (f) sharing articles, texts, and game play highlights on social media, and (g) speaking with local gamers to further develop their gaming literacy. In this way, metagaming can be viewed as a critical game-related practice in which players demonstrate their knowledge and skills in CS:GO in order to be recognized as literate gamers. Within this context, the coding of the data revealed the following themes.

Metagaming: Countering the opponents from a complex theoretical and action standpoint

Here, I provide and discuss two critical episodes of how metagaming is conceptualized in CS:GO during game play. The act of metagaming was about selecting and creating optimal and/or unexpected tactics and strategies in the game and using pre-existing, current, and new knowledge, from both online and offline sources (e.g., discussions, forums, articles), to maximize the chances of winning. This required strategizing, which was directly linked to higher-level decision making for the players and was affected by how the opponents' tactics and strategies were played out. The first excerpt is from an episode at Nestoras's house, where, while eating burgers, I observed the participants discussing CS:GO with Nestoras's father (who was also a gamer). Throughout the conversation, the participants described instances of metagaming in CS:GO when they addressed ways to counter their opponents (see Excerpt 1). The second episode (see Excerpt 2, Figure 2, and Figure 3) focuses on an example of game play illustrating metagaming as fluid individual and collective strategies and tactics during game play.

Excerpt 1. Strategies to deceive opponents

Philippos: If you are not a good player in killing opponents, your co-player will choose to hide, and you will choose to sacrifice yourself. You will let the opponent kill you. Then, your co-player will go behind the opponent, and he will kill him.
Everything about gaming is to deceive opponents. The main aim is to make your opponent think you are in a specific place, but in reality, you are somewhere else. Another example is when the players of one's team decide to go to Site B instead of Site A. So, in this situation, one player of this team will step back and start throwing flashbangs in Site A, and the opponents will assume, "Guys, they are in site A. Let's go there." So, they will all go to Site A, but the other team will actually be at the opposite site. Do you understand? This is the strategy I am talking about. And these are just the simplest examples. Nestoras: It's a mind game. Philippos: That's right. I want to make my opponent say, "Okay man, I thought about doing this, but my opponent challenged me. He hypothesized what I was planning to do and he did something else to avoid me." Panos: Best feeling to be called a hacker. Playing CS:GO includes a complex procedure in which players have to constantly analyze the problems of the game during game play, with the aim of applying those analyses into actions to maximize the chances of winning the game. In Excerpt 1, Philippos, described a strategy of deceiving opponents by letting the opponents assume that they had gained an advantage when they killed Philippos's co-player. This strategy can be used in cases where the team has players who are not "good players in killing opponents." This means that those players who are less experienced than others in CS:GO can fail at killing an opponent that may appear in front of them. This can result in the decrease of possibilities for the team to win. With these conditions, the player chooses to implement a rather unexpected tactic of sacrificing himself and be killed by the opponent to help the team deceive the opponents into thinking the unskilled player is the only one in that area. This strategy creates an ambush to the opponents because, as Philippos said, "Your co-player will go behind the opponent, and he will kill him." This means that another player of the team will appear unexpectedly and eliminate the opponent(s). For players, it is crucial to predict or understand the strategies of the opposing team and challenge, deceive, and defeat them because as Philippos mentioned "everything about gaming is to deceive the opponents." Another way to deceive opponents is when players orchestrate long strategies including tactics (throwing a flashbang), by letting opponents assume that the whole team is navigating in a specific location in the map, while they are actually navigating to a different location. More specifically, Philippos said that an example of this kind of strategy can be implemented specifically in CS:GO in the following way: "When the players of one's team decide to go to Site B instead of Site A." Site B is where the players can activate the explosive device, which is one of the main objectives of the game for winning. The tactic for implementing the strategy is "one player of this team will step back and start throwing grenades in Site A." The explosion of a grenade indicates the existence of a player or players of a team in the location which is exploded. In this example, players threw a grenade in Site A, while in reality they were navigating in Site B. This tactic, as they explained to me when we were discussing CS:GO, was implemented in order to deceive the opponent team in terms of location. 
Thus, the opponents are expected to navigate to the area of the explosion to find and kill the players of the other team ("Guys, they are in Site A. Let's go there"). On the one hand, this strategy gives the team an advantage in that they gain time to complete the objectives of the game (planting the bomb). On the other hand, successful deception results in the opponent team losing time in the game, because they are searching for Philippos's team in the location where the grenade exploded (Site A). Broadly, during game play, the participants agreed that they constantly challenged their opponents with unexpected tactics and strategies and vice versa. As Philippos stated, "I want to make my opponent say: ok man, I thought about doing this, but my opponent challenged me. He hypothesized what I was planning to do and he did something else to avoid me." For players, successfully challenging opponents with high-level and unpredictable strategies results in recognition of their gaming literacy, best summarized by Panos in the phrase, "Best feeling, to be called a hacker." This phrase is very common among the gaming community that plays CS:GO. All the participants explained to me that "a hacker" is any player who builds upon experience and knowledge (e.g., skills in CS:GO, information gained from the opponents' metagaming during game rounds), can predict the opponents' next strategies and, as a result, challenges them with unexpected actions, thus gaining an advantage in the game. In other words, players metagame to show they are better players than others, meaning that their gaming literacy level is higher. In Excerpt 2 (Figures 2 and 3), a complete round of game play is described, focusing on metagaming as the implementation of individual and collective fluid strategies and tactics that can change because of the strategies, movements, and actions of the opposing team. In Excerpt 2, Demetris-the most experienced player among the participants-organized the initial long strategy that the team would implement. Within this context, he assigned roles to his co-players and suggested tactics they should implement to solve the main problem of the game: planting the bomb. To illustrate the participants' strategy, Figure 2 includes red arrows that show the direction of the team from T-Spawn to Site B. Specifically, starting at T-Spawn, Demetris asked Panos and Philippos to navigate straight toward the Middle Site, with the final aim of getting into Site B ("Mid to B") to plant the bomb (see Figure 2, upper left, "B"). During their pass from the Middle to Site B, Demetris asked Panos to implement a tactic by throwing a smoke grenade ("Smoke"). In CS:GO, smoke grenades blanket an area with a thick cloud of smoke for 15 seconds. This tactic can effectively hide the team from snipers of the opposing team and create a useful distraction that discourages opponents from attacking; in addition, when a smoke grenade is active, the opponents cannot hear the steps of players. Taking into consideration that the overall time of each round is 1 minute and 55 seconds, Demetris asked Gregory (a friend of the participants who was co-playing during that specific game play) to navigate as fast as possible across the map ("Gregory, go quickly") and get into Site B while holding a Tec-9 pistol (tactic).
The specific gun is an ideal pistol for the terrorists when on the move and is lethal in close quarters because of its faster rate of fire compared with other pistols in the game. All these tactics (smoke grenade, Tec-9 pistol, team division in different areas) aimed to employ the initial core strategy of the team: planting the bomb in Site B with an aim to win. Once the game play started, however, the opponents' metagaming altered the initial strategy of the team. For example, 12 seconds (= 1:43) after the start of the game play, Panos was killed by an opponent (see Figure 3). This non expected action from the opponents affected the teams' strategy. With these new conditions, Demetris tried to continue implementing the initial strategy plan. Thus, Demetris killed the opponent and continued navigating to Site B. However, after 5 seconds (=1:38), he heard gunshots from the stairs behind him. This was an indicator that the opponents were near them, making him re-evaluate his individual strategies and the overall team strategy. Demetris decided to change direction and go to Upper Tunnel, because he hypothesized that the opponents would pass from there and he wanted to wait for them there to kill them. Once all the remaining players passed T-Spawn, as was the initial plan, Nestoras tried to navigate to Site B, but he was immediately killed by an opponent. When he was killed, he informed his co-players about the existence of opponents in the area ("Watch out! Two here"). By listening to this information and knowing from the map that he was near the area where Nestoras was killed, Demetris decided to alter his strategy once again. Instead of navigating to Site B, he decided to hide behind a wooden box, waiting for the opponents to arrive. Indeed, 20 seconds later, an opponent approached the area, and Demetris immediately killed him. The rest of the team, though, was killed by the opponents, and Demetris was the only survivor in the game, successfully going to Site B to plant the bomb. Data revealed players constantly organized and reorganized their strategies and tactics both collectively and individually based on the ongoing metagaming of their team, but also the opponents' team. Metagaming is about critically gathering and analyzing information to plan tactics and strategies and to challenge opponents with unexpected actions to take advantage of the game and increase the chances of winning. During metagaming, each player, as a member of a team, continuously tries to predict what the opponent/s assume/s about their own thoughts and actions during game play while simultaneously trying to predict the opponent's ongoing thoughts and next actions. Therefore, the goal is to "read" opponents' thoughts about their next strategies and anticipate future actions to outperform, outwit, and/or overcome them. In that way, players are engaging in a critical and predicting procedure with the aim of winning. Metagaming is a critical practice fueled by the pain of defeat and hours spent ruminating, looking for solutions, and competing through continuously acquired knowledge and meta-critical thinking. It is the in-game action outcome of a complex cycle learning process that encompasses the collection and critical analysis of layering literacies-as will be described in the following section-such as game play expertise, online, and offline literacy practices (e.g., watching troll videos, participating in forum discussions, sharing gaming information, mods, watching game plays, searching for new knowledge). 
In this sense, metagaming functions as the result of a cycle of layering literacies that has a positive impact on stimulating individuals as literate gamers. Players want to engage in a variety of online and offline literacy practices to gain the information and knowledge they lack in order to become more literate in the ways they metagame. This complex procedure increases their chances of winning. Winning the game means beating the other players' gaming literacy level in respect to their tactics, strategies, and overall game play. The second section of the findings addresses specific literacy practices in which players self-engaged. As such, players' practices were fluid as they engaged in a cycle of layering literacies, with the core aim of enriching the knowledge and expertise required to employ higher-level metagaming in subsequent game play.

Learning to Solve Problems in a Multimodal Literacy Environment

As mentioned in the introduction, videogames are highly multimodal environments in which players interact while trying to solve the problems of the game. With that in mind, one of the most important multimodal texts of CS:GO is the map. Considering that the game mechanics (e.g., between rounds players can purchase weapons, items, utility, and armor using in-game cash; radio commands) are the same for every player in the game, the variety of CS:GO emerges from the ways in which players and teams approach each map and how they implement their strategies and tactics against their opponents. Figure 4 shows a segment of game play that illustrates the players' situated multimodal literacy practices in Dust II as they employed actions at an exact time to solve the problems they faced during game play. In this game round (see Figure 4), Demetris was reading and interacting with a variety of multimodal texts, such as the map, the radio commands, the sounds of the shootings, and the opponents' steps, to make decisions about his next actions. The circle on the upper left-hand side of Figure 4 features a mini map that shows the overall map layout. The map in CS:GO is dynamic: it constantly changes based on the live movements of co-players and indicates in which areas of the map the players are located (each player is represented by a different colored dot) and when gunshots are exchanged (represented by a red dot). The player colors are interlinked with the profile pictures of each player (see the upper-middle section of Figure 4). Underneath each profile, there is a line with a color for each player's profile in the game (which corresponds to the colored dot representing each player), and information about players' actions (e.g., killing) is displayed. On the right-hand side of Figure 4, Demetris's gear is displayed, showing which gun he chose to hold at that time. In the lower left-hand side of Figure 4, there is information related to player positioning and action (e.g., Panos threw smoke, and Nestoras, who was nearby, verbally informed Demetris that an opponent was close). All this information is crucial for the players; they also need the skills to link this information quickly, understand the ongoing situation in the game (e.g., the location of an opponent, which team player has been killed), and make decisions about their next actions without losing any time. Any delay in interconnecting the information would give an advantage to the opposing team.
Specifically in this example, Demetris was informed by his co-players about the existence of an opponent in the area. He double checked this information along with the red dot provided in the live map. The shootings stopped though, but after three seconds, Demetris quickly checked near the wall to see if there was any opponent in the area. He did not see any opponent, so he decided to leave the area. Before walking away, he heard another gunshot. In less than a second and with his weapon high (preparing to shoot), Demetris went to a corner behind a wall, and this time, he heard (in his headset) and saw a second gunshot (which was visible on screen as a white line) for less than a second (see the arrow pointing to the line on the lower left-hand side in Figure 3). The white line functioned as a text for Demetris because it provided him with information about the direction and height of the opponent's gunshot in relation to the ground. Upon reading the text, Demetris informed his coplayers of their opponents' presence: "They are here!" This intensive situational awareness, text management, and quick decision making reinforced the players' ability to take functional actions, solve problems, and use their audiovisual and kinesthetic skills (clicking on the mouse and keyboard) while blocking out distractions. As illustrated in the example, the gamers interacted with a variety of modalities (audio, visual, linguistics) and text genres (maps, chat, hybrid texts 1 ) to solve problems. The players constantly needed to read, analyze, compose, and combine not only the text genres, but also the information from their co-players, and they needed to interact with different semiotic modalities to make decisions about future game play. In this sense, players learn to simultaneously read different text genres (e.g., maps, chat, labels) and text types (e.g., dialogues, narration) and combine this information to make their decisions regarding their next actions during game play. Thus, CS:GO functions as a demanding multimodal literacy problem-solving environment in which players constantly need to negotiate and interact with different texts, modalities (e.g., sound, image, symbols, gestures), and actions to solve the problems they face during game play and to metagame. Watching live tournaments to evolve expertise for metagaming When the players failed to solve the game's problems during game play, they were highly enthralled in finding ways to solve them in other ways because one of the main elements that highly engaged the players into playing games was metagaming (see Excerpt 1. "Best feeling, to be called a hacker"). Thus, the end of each game was the beginning of a cycle of layering literacies not only within but also around, and back and forth through game play in both online and offline spaces. Fan fiction, forum discussions, game play highlights, community feedback, live tournaments, Googling, sharing texts and extracts of their game play on social media, and gaming experiences within the local gaming community were some examples of the gaming literacy practices the players constantly engaged in. These activities were instances of situated learning that enhanced combinations of independent and collaborative-as well as iterative and generative-practices and (re)interpretations of meaning with no particular pattern but based on their learning preferences. 
Excerpt 4 (Figure 5) focuses on the participants' preference for watching live tournaments and having discussions on Twitch. These live streams became vital resources for the participants, not only for entertainment but also for gathering new knowledge and information about tactics and strategies that could offer them new ideas on how to metagame.

Excerpt 4. Watching live tournaments on Twitch

Demetris: Two days ago, I watched a live tournament on Twitch because the teams are extremely good, and this is awesome. They play nice matches.
Elisavet: And why is this nice? Because you see their strategies?
Demetris: Yes. They are really good teams, and their game playing style is very good, and you see what they are doing, how they react during game play; if there is something good, you can learn a lot of things by watching them. You learn a lot. Tactics, not the way they shoot. Shooting is a different thing, and the tactics is another.

Figure 5. Philippos watching CS:GO live tournaments at a Cypriot traditional restaurant.

Excerpt 4 features a segment from a conversation I had with Demetris at Kinx. He explained to me that, on the one hand, watching live tournaments was an entertaining practice for him. On the other hand, live tournaments were also a vital source of new knowledge and information about tactics and strategies that he could embed in his own game plays. The information received from watching the matches was not acquired through memorization, but rather through understanding and analyzing the strategies, such as knowing that holding a knife allows the avatar to run faster. This literacy practice engaged the participants in interpreting and analyzing those live tournaments to collect information on the tactics and strategies used by professional players and to improve their knowledge of how to play and metagame in CS:GO.

Googling and reading gaming sites and forums

Moreover, Nestoras and Panos were Googling, reading articles, and exploring national and international gaming websites (see Figure 6) to learn more about CS:GO. Some examples of the sites were www.unboxholics.com, www.gameover.gr, www.gr.ign.com, www.gosugamers.net, www.gamespot.com, www.valvesoftware.com, and www.blog.counter-strike.net.

Philippos: No. Nestoras, though, is searching all the time on the internet. This is something that he is famous about, I think. Only for this, though. He might not be smart at CS:GO, but the game he will play, he will search for it.

Nestoras, for instance, mentioned throughout the research that "even though I am a good player in League of Legends, I do know that I am still a noob in CS:GO. I can't have the guys explaining me their tactics all the time, so I search a lot on the internet. I want to learn about CS:GO as much as possible because I want to be on their level." In Figure 6, Nestoras was navigating different gaming sites and forums, gathering and composing information from different digital sources. On that day, he was Googling "How not to be a noob!" and trying to find valid sources that would provide him with information on performing better in CS:GO. During an interview, Philippos also spoke about Nestoras's literacy practice of Googling ("Nestoras, though, is searching all the time on the internet. This is something that he is famous about, I think"). This literacy practice helped Nestoras reconfirm his existing knowledge and/or gain new knowledge of the ways CS:GO is played so he could embed it in his game plays and metagaming.
Spectating and discussing (co)players' game plays and exchanging feedback

Another literacy practice in which all the participants engaged was watching the game plays of other players in the gaming community, as well as co-players' game play highlights, with the more experienced co-players providing feedback. Considering that CS:GO is a videogame in which two opposing teams of five players each compete and that the players may vary, players bring in their own knowledge, experience, tactics, and strategies; thus, every time the game is played, it feels different for the players. This type of active spectatorship (Abrams, 2015) enriched the players' understanding of the possible ways CS:GO can be played, employing both known and innovative tactics and strategies. In the next section, I analyze this active spectatorship (see Figure 7 and Excerpt 5).

Nestoras: Most things I have learned in CS:GO was because I was watching other players' game play. The other time I was watching Philippos game play, for example, and he was holding his gun always here (he shows high up), and when he was opposite an opponent, he was faster than him and managed to kill him first because the gun was in position. This way, I save time.

As shown in Figure 7, Nestoras, on the left, was playing CS:GO with Philippos, in the middle, and Panos, on the right. Behind them was Demetris, watching the game. In this example, Demetris commented whenever he observed tactics or strategies he believed were wrong and that could make the team lose. In Excerpt 5, Nestoras described how the literacy practice of watching other players' game play, or highlights of it, functioned as a learning environment for them. As previously noted, Nestoras was called a "noob" (i.e., a newbie or less experienced player) by his co-players and mentioned several times throughout the research that he wanted to catch up to his co-players' level of expertise. By watching other players' game plays, he was learning ways of employing more effective tactics (such as holding the weapon up high when navigating the map, which saves time and helps the player shoot faster) and strategies, and thus evolving his metagaming.

Sharing articles, texts and game play on social media

Social networking sites, such as Facebook, served as useful spaces for the participants to share articles, extracts from their game play, thoughts about gaming, and videos (see Figure 8). These practices created a space for the participants to get involved, share their own views and opinions about CS:GO, and also get new information from friends who were also gamers. The article referred to an update of CS:GO features, such as the introduction of brand-new player body and world model weapon animations and solutions to the enduring problem of some weapon models sticking through walls, doors, and other surfaces. This update was carried out in response to complaints from the community of gamers. Panos shared the article on his personal Facebook account, tagging two of his gamer friends, Nikos and Alex. Here, tagging was a practice of asking the opinion of friends considered extremely good players in CS:GO; this was also confirmed in a discussion we had afterwards. Nikos and Alex were very popular gamers in the local gaming community because of their expertise in CS:GO, and Panos sometimes played CS:GO at Kinx with them. Nikos commented below the article, "They fucked everything up. I have already played the game with this patch."
Panos replied, "Omg cs go rip" (i.e., "Oh my god. CS:GO, rest in peace"). With this phrase, Panos expressed his worry that players would stop playing CS:GO because the upgrades were not good, which would reduce gamers' interest. On the other hand, Alex had a different opinion of the new alteration to CS:GO: "What did they fuck up dude? Finally, these patches will change the game." When I met Panos's friend Alex, I asked him what he meant by that, and he explained to me that the new patches helped to improve game play in CS:GO because, with those changes, gun fire and bullets became more realistic in the sense that the energy of each bullet showed whether it went through an avatar's body. Such posting and commenting functioned as a literacy practice of sharing information and exchanging opinions about CS:GO.

Oral discussion for upgrading gaming literacy from the local gaming community

Finally, the data showed that all the participants shared information and past game play experiences in oral discussions within the local community of gamers in various settings, including, but not limited to, homes and gaming centers. These discussions were vital for the participants because they incorporated new knowledge about how to learn from mistakes, how to learn new strategies and tactics, and, generally, how to acquire knowledge for better performance during game play (see Excerpt 6). Excerpt 6, which is from Panos's interview, indicates how interactions about gaming with other players were a source of important knowledge that helped improve his game play.

Excerpt 6. Learning from more experienced gamers

Elisavet: So you learn stuff only during game play?
Panos: No, also when you are out of the game. Especially us. The gamers, we are going to Kinx, and we find more experienced gamers than us. From those players, you learn. And they will tell you what tactics or strategies to follow. We always talk about gaming. I believe that you learn from discussions.

Panos argued that players upgraded their existing knowledge not only by playing the game, but also by exchanging information in discussions within the local community of gamers who "are more experienced gamers" than them. The experienced players were considered resources of valuable information about new tactics and strategies for CS:GO that could be employed to improve their own performance.

DISCUSSION AND IMPLICATIONS

This paper presented the findings of an ethnographic study examining the literacy practices and metagaming within and around CS:GO. From the analysis of the research data, it became evident that players self-engage in a cycle of layering literacies within and around game play with an aim to return to the game and perform better metagaming. Metagaming in CS:GO is a critical practice of tactics and strategies that are both collective and individual, long-term and short-term, fluid and bounded, as well as anticipated and unexpected. These practices occur during and beyond game play through a cycle of online and offline layering literacies. The aim of metagaming for the players is to overcome the opponents' tactics and strategies in order to maximize the chances of winning. This requisite strategizing is directly linked to high-level decision making.
The findings align with Boluk and LeMieux (2017) in that players metagame using real-life information that typically would not be accessible within the bounds of the game, with an aim to gain advantage over other players during game play. Implementing tactics and strategies during game play means that players are empowered to engage in constant meta-critical self-learning by interpreting and responding to layering online and offline literacy practices around and through game play. Players layer their literacies as they solve problems in the game, watch live tournaments, explore gaming sites and forums, observe co-players' game plays, discuss tutorials, speak with local gamers, and share articles, texts, and game play highlights on social media. These layering literacies are multidirectional (Abrams, 2015, 2017), based on participants' interests, and are "fluid, porous, and flexible in the same way that, ideally, learning should be" (Abrams, 2015, p. 15). The study suggests that bringing layering literacy practices and metagaming together offers a new perspective on what gaming literacy can offer in education. Winning the game is not just about entertainment, but also about beating the other players' gaming literacy level in respect to their tactics, strategies, and overall game play. Thus, this study suggests videogames can offer educational opportunities in L1 classrooms that extend beyond the conventional view of literacy as a reified set of basic, paper-based skills such as reading and writing (Applebee, 1984; Green & Dixon, 1996). Examples of L1 metagaming practices for students could include the co-organization of tactics and strategies for solving game-related problems and challenges, prediction of the future actions of the opposing team during game play, and collaboration on high-level decisions. Literacy practices similar to metagaming practices might include, but not be limited to, students commenting on matches in CS:GO, students writing a guide for players identifying key goals for communication, or students developing an archive of moves and related consequences. Videogames with an emphasis on metagaming can provide rich and situated problem-solving learning environments for students to create and share artefacts, seek and create information, and engage in meta-critical thinking with respect to game-related problems and challenges, alone and together with their peers: planning strategies, reflecting on the problems of the game, experimenting through trial and error, comparing, analyzing, evaluating, deconstructing, and reconstructing meanings and actions. Overall, integrating gaming literacy in L1 classrooms offers opportunities for a more student-oriented, socially situated, and dynamic learning environment that can prepare learners to be critical thinkers and effective solvers of real-life problems.

FINAL THOUGHTS

If students are to be self-directed learners who think critically (achieved through a variety of strategies, such as comparing, contrasting, analyzing, evaluating, deconstructing, and reconstructing knowledge), then the integration of videogames into L1 classrooms can support such efforts. Students need to become not only agents of the learning procedure, but also designers of creative solutions to real, situated problems, in the best way possible.
A gaming literacy approach that includes metagaming can encourage students' engagement, collaboration, competition with peers, critical thinking, problem-solving, experimentation, and higher-level decision making. Teachers can engage students in projects that focus on real local and global problems that need to be solved. Examples of videogames that focus on solving situated real-life problems can be found, for instance, on the Games for Change website (https://www.gamesforchange.org/). Team collaboration and competition should also be at the center of the learning procedure. Students can interact with each other by exchanging ideas, competing to find the best solutions to problems, and sharing knowledge and expertise. Educators can achieve this by forming synergies with other educators and bringing their classes together in online and offline environments. Just as gamers learn from observing and discussing videogame play, so, too, students can watch their classmates solving problems (e.g., science problems, environmental problems, literature problems), asking questions and witnessing how such operations are performed. In these ways, the learning procedure could function as an open space in which students sit in groups, walk around the classroom watching how their peers work, and communicate and collaborate online. A key point is understanding that knowledge should be shared with others in order to understand mistakes, improve knowledge, and act in the best possible way to solve a problem. Considering also that videogames are designed with game mechanics (e.g., time limitation, rewards), educators could redesign the mechanics of learning activities in class, such as time limitation, coopetition, and space for failing and retrying. With this, students can be empowered to explore, experiment, fail, and retry. Students could also engage with different text genres and online multimodal environments in order to gain information and/or understand the ways each mode interacts with the others in order to create texts (e.g., posters to be shared in the community) that could be used as strategies to solve a problem. Therefore, educators can promote layered literacies by using existing online and offline spaces or by creating new ones. Concluding, a gaming literacy approach is about helping students to be critically literate problem-solvers. In this way, through gaming, learning is situated and tangible, and students can be problem-solvers prepared to enact change in the world around them.
Near-field observation of spatial phase shifts associated with Goos-Hänschen and Surface Plasmon Resonance effects

We report the near-field observation of the phase shifts associated with total internal reflection on a glass-air interface and surface plasmon resonance on a glass-gold-air system. The phase of the evanescent waves on glass and gold surfaces, as a function of incident angle, is measured using a phase-sensitive Photon Scanning Tunneling Microscope (PSTM) and shows a good agreement with theory. © 2008 Optical Society of America

OCIS codes: (240.0240) Optics at surfaces; (120.5050) Phase measurement; (180.4243) Near-field microscopy; (240.6680) Surface plasmons

References and links
1. I. Newton, Opticks (William Innys, London, 1704).
2. N. J. Harrick, Internal Reflection Spectroscopy (Interscience Publishers, New York (0-470-35250-7) 1967).
3. R. C. Reddick, R. J. Warmack, and T. L. Ferrell, "New form of scanning optical microscopy," Phys. Rev. B 39 (1989).
4. A. Lewis, H. Taha, A. Strinkovski, A. Manevitch, A. Khatchatouriants, R. Dekhter, and E. Ammann, "Near-field optics: from subwavelength illumination to nanometric shadowing," Nature Biotechnol. 21 (2003).
5. R. Quidant, J. C. Webber, A. Dereux, D. Payrade, Y. Chen, and G. Girard, "Near-field observation of evanescent light wave coupling in subwavelength optical waveguides," Europhys. Lett. 57 (2002).
6. F. Goos and H. Hänschen, "Ein neuer und fundamentaler Versuch zur Totalreflexion," Ann. Phys. 1 (1947).
7. R. H. Ritchie, "Plasma losses by fast electrons in thin films," Phys. Rev. 1 (1957).
8. S. G. Nelson, K. S. Johnston, and S. S. Yee, "High sensitivity surface plasmon resonance sensor based on phase detection," Sens. Actuators B 35 (1996).
9. S. A. Shen, T. Liu, and J. H. Guo, "Optical phase-shift detection of surface plasmon resonance," Appl. Opt. 37 (1998).
10. V. E. Kochergin, A. A. Beloglazov, M. V. Valeiko, and P. I. Nikitin, "Phase properties of a surface-plasmon resonance from the viewpoint of sensor applications," Quantum Electron. 28 (1998).
11. F. Pillon, H. Gilles, S. Girard, M. Laroche, and O. Emile, "Transverse displacement at total reflection near the grazing angle: a way to discriminate between theories," Appl. Phys. B 80 (2005).
12. H. P. Ho, W. W. Lam, and S. Y. Wu, "Surface plasmon resonance sensor based on the measurement of differential phase," Rev. Sci. Instrum. 73 (2002).
13. X. B. Yin and L. Hesselink, "Goos-Hanchen shift surface plasmon resonance sensor," Appl. Phys. Lett. 89 (2006).
14. C. F. Li, T. Duan, and X. Y. Yang, "Giant Goos-Hanchen displacement enhanced by dielectric film in frustrated total internal reflection configuration," J. Appl. Phys. 101 (2007).
15. P. I. Nikitin, A. A. Beloglazov, V. E. Kochergin, M. V. Valeiko, and T. I. Ksenevich, "Surface plasmon resonance interferometry for biological and chemical sensing," Sens. Actuators B 54 (1999).
16. H. P. Chiang, J. L. Lin, and Z. W. Chen, "High sensitivity surface plasmon resonance sensor based on phase interrogation at optimal incident wavelengths," Appl. Phys. Lett. 88 (2006).
17. K. M. Medicus, M. Chaney, J. E. Brodziak, and A. Davies, "Interferometric measurement of phase change on reflection," Appl. Opt. 46 (2007).
18. S. Kaiser, T. Maier, A. Grossmann, and C. Zimmermann, "Fizeau interferometer for phase shifting interferometry in ultrahigh vacuum," Rev. Sci. Instrum. 72 (2001).
19. E. Kretschmann and H. Raether, "Radiative decay of nonradiative surface plasmons excited by light," Z. Naturforsch. A 23 (1968).
20. E. Hecht, Optics (Addison Wesley (0-201-83887-7) 1998).
21. C. K. Carniglia and L. Mandel, "Phase-shift measurement of evanescent electromagnetic waves," J. Opt. Soc. Am. 61 (1971).
22. K. Kiersnowski, L. Jozefowski, and T. Dohnalik, "Effective optical anisotropy in evanescent wave propagation in atomic vapor," Phys. Rev. A 57 (1998).
23. H. Raether, Surface Plasmons on Smooth and Rough Surfaces and on Gratings (Springer, Berlin (3-540-17363-3) 1988).
24. F. de Fornel, P. M. Adam, L. Salomon, J. P. Goudonnet, A. Sentenac, R. Carminati, and J. J. Greffet, "Analysis of image formation with a photon scanning tunneling microscope," J. Opt. Soc. Am. A 13 (1996).
25. M. L. M. Balistreri, J. P. Korterik, L. Kuipers, and N. F. van Hulst, "Local observations of phase singularities in optical fields in waveguide structures," Phys. Rev. Lett. 85 (2000).
26. P. Mazur and B. Djafari-Rouhani, "Effect of surface-polaritons on the lateral displacement of a light-beam at a dielectric interface," Phys. Rev. B 30 (1984).

Introduction

Newton demonstrated, with the help of a glass of water, that total internal reflection (TIR) at an interface separating two media can be inhibited, and predicted the existence of evanescent waves in the optically rarer medium [1,2]. This demonstration paved the way for the development of the entire field of internal reflection microscopy. The evanescent waves are non-radiative and bound close to the surface where they are generated. Probing these evanescent waves reveals the behavior of light at the interface in the near field [3,4,5]. In 1947, Goos and Hänschen performed a more detailed analysis of Newton's prediction and experimentally demonstrated a lateral displacement of the totally internally reflected beam, thereafter known as the "Goos-Hänschen effect" (GHE) [6]. A similar phenomenon occurs on a metal-dielectric interface due to the creation of "Surface Plasmon Polaritons" (SPPs) [7]. SPPs are charge density waves that can be excited on a metal-dielectric interface when the resonance condition is met: the in-plane wave vector component (k_x) of the incident light matches the SPP wave vector (k_sp) [8]. The distinct absorption dip and the sharp phase variation at a Surface Plasmon Resonance (SPR) have been extensively studied [8,9,10]. The phase shifts associated with both the GHE and the SPR have been independently studied in the past [11,12]. It has already been shown that SPPs can be used to enhance the GHE [13,14]. Such an enhancement of the GHE due to material resonances has applications in the field of SPR sensors [8,15,16]. A recent interferometric study reported a measurement of the difference between the phase shift upon reflection off a glass-gold interface and that off a glass-air interface at normal incidence in the far field using a back-reflection geometry [17]. In order to observe this phase shift locally, here we probe the evanescent waves generated at the interface. We show near-field measurements of the spatial phase shift of optical fields across the glass-gold transition region as a function of incident angle using a phase-sensitive Photon Scanning Tunneling Microscope (PSTM).
Unlike time-varying phase-shifting interferometry [18], we use a position-varying technique where the spatial phase evolution of evanescent waves on surfaces with different optical constants is observed simultaneously. Evanescent waves were generated by TIR on a glass-air interface and SPPs by the Kretschmann-Raether configuration [19] in a glass-gold-air stack as illustrated in Fig. 1(a). We exploit the unique property of evanescent waves that the surfaces of constant amplitude (parallel to the plane of the interface) are perpendicular to the surfaces of constant phase (normal to the plane of the interface). Since they do not coincide, the propagating surface wave is inhomogeneous [20]. In other words there is no propagation in a direction perpendicular to the interface. Instead, the wave propagates parallel to the interface with a definite wavelength component λ x = 2π/k x, where k x is the in-plane wave vector component given by k x = k i sin θ i, with θ i the angle of incidence and k i the wave number of the incident beam.

Fig. 1. (a) Schematic illustration of our approach. (b) Calculated phase change with respect to the incoming beam obtained from the Fresnel coefficients as a function of incident angle for both a glass-air interface (squares) and a glass-gold-air system (circles). The abrupt variation in phase change for the p-polarized incident beam in the glass-gold-air system (red) is due to the excitation of surface plasmons. θ c is the critical angle of incidence for TIR on the glass-air interface.

Upon introducing a sharp tapered optical fiber tip into the evanescent wave region, frustrated TIR occurs and a small part of the evanescent waves propagates into the fiber [5]. Owing to the inhomogeneous nature of evanescent waves, the phase change upon evanescent wave coupling into the fiber, across an air gap, is independent of the width of the gap [21]. By raster scanning the sample surface using the fiber tip, the spatial evolution of the phase of evanescent waves on the surface of the sample is observed. The different optical constants for glass and gold as well as the thin layer of gold on the glass surface induce a stationary difference between the phases of the evanescent waves on the glass-air interface and on the glass-gold-air stack. In addition to this constant phase difference, there are two other significant phase changes that vary with θ i: one associated with the GHE on the glass-air interface and another with the SPR on the glass-gold-air system. The phase change at the SPR occurs only when the incident beam is polarized parallel (p) to the plane of incidence; not for polarization perpendicular (s) to the plane of incidence. In contrast, the phase change associated with the GHE occurs for both polarizations, but is different for p and s polarization [22]. Theoretical plots of phase change for p- and s-polarized incident beams as a function of incident angle obtained from the Fresnel reflection and transmission coefficients [23] for glass/air and glass/gold/air systems are shown in Fig. 1(b).
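The theoretical curves in Fig. 1(b) follow from the Fresnel coefficients of the bare glass-air interface and from the standard single-film (Airy) reflection formula for the glass-gold-air stack. The sketch below shows one way to generate such curves; it is an illustration rather than the authors' calculation, and the gold optical constant, film thickness and glass index are assumed values typical for 632.8 nm.

```python
import numpy as np

def r_p(n1, n2, cos1, cos2):
    """Fresnel reflection coefficient for p-polarized light at a single interface (1 -> 2)."""
    return (n2 * cos1 - n1 * cos2) / (n2 * cos1 + n1 * cos2)

def reflection_phase_deg(theta_deg, n_glass=1.515, n_gold=0.18 + 3.0j,
                         n_air=1.0, d_gold=50e-9, wavelength=632.8e-9):
    """Phase (degrees) of the reflected p-polarized field for a glass-air interface
    (set d_gold=0) or a glass-gold-air stack in the Kretschmann geometry."""
    k0 = 2.0 * np.pi / wavelength
    theta = np.deg2rad(theta_deg)
    nsin = n_glass * np.sin(theta)                        # conserved in-plane component n*sin(theta)
    cos_glass = np.cos(theta)
    cos_gold = np.sqrt(1.0 - (nsin / n_gold) ** 2 + 0j)   # complex angles; sqrt branch choice matters
    cos_air = np.sqrt(1.0 - (nsin / n_air) ** 2 + 0j)     # imaginary above the critical angle
    if d_gold == 0:
        r = r_p(n_glass, n_air, cos_glass, cos_air)       # bare TIR: Goos-Hänschen phase
    else:
        r12 = r_p(n_glass, n_gold, cos_glass, cos_gold)
        r23 = r_p(n_gold, n_air, cos_gold, cos_air)
        beta = k0 * n_gold * cos_gold * d_gold            # phase thickness of the gold film
        r = (r12 + r23 * np.exp(2j * beta)) / (1.0 + r12 * r23 * np.exp(2j * beta))
    return np.angle(r, deg=True)

angles = np.linspace(40.0, 50.0, 201)
phase_tir = np.array([reflection_phase_deg(a, d_gold=0) for a in angles])
phase_spr = np.array([reflection_phase_deg(a) for a in angles])
# phase_spr - phase_tir plotted against angle reproduces the shape of the curves in Fig. 1(b).
```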
Experimental Our samples are commercially available gold SPR sensors (Ssens) with a titanium adhesive layer on a glass substrate of thickness 0.3 mm.The thickness of the gold layer is approximately 50 nm.Using Focused Ion Beam (FIB) milling, a glass 'window' on the sample is made by removing 50 × 500 μm 2 strip of gold.Technical constraints on the focused ion beam milling of the gold made it impossible to remove the gold completely from the glass surface and hence traces of gold remain.The region of our interest is the transition from gold to glass.The structure is placed on top of a glass (BK7) hemispherical prism with index matching oil in between.A fiber collimator focuses the laser light to the glass-gold transition region of the sample and is mounted on a goniometric stage for angles ranging from 40 • to 50 • .The divergence of the beam introduces a convolution error of less than 1 • in the angle of incidence.We selected sample configuration where the sample is oriented in such a way that the gold 'step' is parallel to the plane of incidence so that reflection effects due to the gold step are eliminated [24]. Schematic diagram of the phase-sensitive PSTM with the sample on top of a glass hemispherical prism (side view). SB -signal branch, RB -reference branch The operating principle of a phase-sensitive PSTM has already been reported in detail before [25]; but we will summarize its method of operation.A schematic is shown in Fig. 2. The incoming laser beam (He-Ne; λ = 632.8nm) is split into a signal branch that illuminates the sample, and a reference branch which is shifted in frequency by 40 kHz using two acousto-optic modulators.The reference beam interferes with the small signal picked up by the probe and is measured by a detector, generating a signal at 40 kHz.This signal is proportional to the optical amplitude picked up by the fiber tip (E s ) as well as the optical amplitude in the reference branch (E r ), and is measured with a dual-phase Lock-in-Amplifier (LIA) locked to 40 kHz.The phase of the detected signal depends on the phase of the local field on the sample surface compared to the phase of the reference branch.The LIA provides optical amplitude (E s E r ), optical amplitude times cosine of the relative phase of local field (E s E r cos(φ )) and optical amplitude times sine of the relative phase of local field (E s E r sin(φ )).The distance between the fiber tip and the sample surface is kept constant throughout the scanning using shear-force feedback.Thus local optical phase on the sample surface as a function of position and topographical information are simultaneously retrieved. Results and discussion A typical PSTM measurement for an s-polarized incident beam is presented in Fig. 3.The parallel and equidistant phase fronts (Fig. 3(b)) support the stability of our interferometric setup.Figure 3(c) shows the optical amplitude (E s E r ) obtained from the output of the LIA.Since the tip to sample distance is kept constant, the optical amplitude is uniform above each interface.We see a lower optical amplitude on the gold region which is due to lower transmis-sion in the absence of surface plasmons.Figure 3(d) shows a line profile taken across the phase fronts shown as a black dotted line on the glass part in Fig. 3(c). PSTM images depicting the evolution of the shift in spatial phase across the glass-gold transition region of the sample for the p-polarized incident beam are shown in Fig. 4. As the incident angle varies from 42.2 • to 43.6 • (Fig. 
4(b)-(e)), the shift in the spatial phase on the boundary separating the glass and gold regions changes. The rate of change has a maximum in the SPR region. Using a two-dimensional Fast Fourier Transform, the phases on the gold and glass regions of the sample were extracted for incident angles ranging from 40° to 50°. The difference between these phases plotted against incident angle is shown in Fig. 5(a), together with the difference between the theoretical phase changes which are shown in Fig. 1(b). The change in measured phase difference is 130°, which agrees well with the theory and is caused mainly by the enhanced spatial phase shift on the glass-gold-air system due to the generation of SPP. This enhanced spatial phase shift corresponds to an increased GHE [26] of 3.5 μm as defined in [14]. Interestingly, we see an offset in the phase difference measured using a coated fiber tip compared to that measured using an uncoated fiber tip for both p- and s-polarized incident beams. The interaction between the metal-coated probe and the surface might introduce an additional phase shift for the entire range of incident angles. This may explain the offset observed, but it is not a relevant issue of concern in this study. We performed simultaneous far-field reflectivity studies to cross-check the generation of plasmons using a photodiode (as shown in Fig. 2) that detects the light reflected off the glass-gold-air system. The absorption dip in the reflectivity measurements for the p-polarized light clearly indicates the SPR angle as 43.3°. Figure 5(b) shows a comparison between reflectivity and PSTM measurements. The distinct change in the phase difference of 130° is measured over the SPR range for the p-polarized light, whereas for the s-polarized light a negligible change is measured over the SPR region, as expected. We do see a minor fluctuation in the reflected signal voltage which we attribute to polarization impurity. This impurity leads to a similar fluctuation in the corresponding phase difference plot. It has been reported that a change in the ambient refractive index influences the SPR so that the angular position of the phase difference for a p-polarized incident beam would move along the angle-of-incidence axis [15]. Note that the angular position of the measured change in phase difference for the p-polarized incident beam coincides with the absorption dip in the plot obtained from the reflectivity measurement. This observation underlines the fact that the measured change in phase difference is not influenced by the presence of a near-field optical probe.

Summary In conclusion, we have observed the shift in the spatial phase of evanescent waves on glass-air and glass-gold-air systems as a function of incident angle locally using a phase-sensitive PSTM. Our observations are the first of this kind in the near field, measuring the spatial phase shift associated with the GHE and the SPR on the surface. The change in the phase difference of the evanescent waves across the glass-gold transition region of the sample as a function of incident angle shows the combined effect of the GHE and the generation of surface plasmons and agrees well with the theoretical predictions. Our experiments on directly measuring the local optical phase of the SPP provide a new approach for near-field SPR sensors. It is a pleasure to thank Robert Moerland for fruitful discussions and Prof. Jennifer Herek for reviewing the manuscript. This research is supported by NanoNed, a nanotechnology program of the Dutch Ministry of Economic Affairs.
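As a schematic reconstruction of the analysis chain described above, combining the two lock-in quadratures into a complex local field and reading the fringe phase of each region off its Fourier spectrum, one might write the following. A 1D profile is used in place of the two-dimensional transform mentioned in the text, and all names are invented for the example.

```python
import numpy as np

def local_field(x_quad, y_quad):
    """Combine the LIA outputs EsEr*cos(phi) and EsEr*sin(phi) into a complex field map."""
    return np.asarray(x_quad, float) + 1j * np.asarray(y_quad, float)

def fringe_phase(field_line, dx):
    """Phase (radians) of the dominant positive spatial frequency of a 1D complex field profile."""
    spectrum = np.fft.fft(field_line)
    freqs = np.fft.fftfreq(field_line.size, d=dx)
    pos = freqs > 0
    peak = np.argmax(np.abs(spectrum[pos]))       # fringe peak near k_x / 2*pi
    return np.angle(spectrum[pos][peak])

def gold_glass_phase_difference_deg(field_gold, field_glass, dx):
    """Difference between the fringe phases on the gold and glass regions, wrapped to [-180, 180)."""
    d = fringe_phase(field_gold, dx) - fringe_phase(field_glass, dx)
    return np.degrees((d + np.pi) % (2.0 * np.pi) - np.pi)
```

Because both regions are sampled on the same scan grid, any phase offset tied to the choice of origin cancels in the difference, which is the quantity plotted as a function of incident angle.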
Figure 3(a) shows the topography of the sample. Figure 3(b) shows the optical amplitude times sine of the relative phase of the local field. The spacing between adjacent dark or bright lines is a direct measurement of the wavelength component λ x on the sample surface.

Fig. 3. A PSTM measurement of the glass-gold transition region of the sample for a scan area of 24.1 × 14.8 μm 2 for an s-polarized incident beam. (a) Topography, (b) phase of the local field expressed as optical amplitude times sine of the phase, (c) the measured amplitude of the optical field, (d) a line trace on the sample along the black dashed line in image (c). The images were obtained using a coated fiber tip.

Fig. 5. (a) Comparison between the theoretical and experimental phase difference as a function of incident angle for p- and s-polarized light. (b) Comparison between far-field reflectivity and near-field PSTM measurements for p- and s-polarized incident beams.
v3-fos-license
2020-05-06T13:05:30.135Z
2020-05-06T00:00:00.000
218503816
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2020.00178/pdf", "pdf_hash": "e94113129445e1931267b9aea322317f51e7a568", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44764", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "e94113129445e1931267b9aea322317f51e7a568", "year": 2020 }
pes2o/s2orc
Transcriptome Profiling Reveals Indoxyl Sulfate Should Be Culpable of Impaired T Cell Function in Chronic Kidney Disease Introduction: Chronic inflammation and immune system dysfunction have been evaluated as major factors in the pathogenesis of chronic kidney disease (CKD), contributing to the high mortality rates observed in these populations. Uremic toxins seem to be the potential “missing link.” Indoxyl sulfate (IS) is one of the protein-bound renal toxins. It participates in multiple pathologies of CKD complications, yet its effect on immune cell has not been studied. This study aimed to explore the genome-wide expression profile in human peripheral blood T cells under stimulation by IS. Methods: In this study, we employed RNA-sequencing transcriptome profiling to identify differentially expressed genes (DEGs) responding to IS stimulation in human peripheral T cells in vitro. Flow cytometry and western blot were used to verify the discovery in RNA-sequencing analysis. Results: Our results yielded a total of 5129 DEGs that were at least twofold up-regulated or down-regulated significantly by IS stimulation and half of them were concentration-specific. Analysis of T cell functional markers revealed a quite different transcription profile under various IS concentration. Transcription factors analysis showed the similar pattern. Aryl hydrocarbon receptor (AhR) target genes CYP1A1, CYP1B1, NQO1, and AhRR were up-regulated by IS stimulation. Pro-inflammatory genes TNF-α and IFN-γ were up-regulated as verified by flow cytometry analysis. DNA damage was induced by IS stimulation as confirmed by elevated protein level of p-ATM, p-ATR, p-BRCA1, and p-p53 in T cells. Conclusion: The toxicity of IS to T cells could be an important source of chronic inflammation in CKD patients. As an endogenous ligand of AhR, IS may influence multiple biological functions of T cells including inflammatory response and cell cycle regulation. Further researches are required to promulgate the underling mechanism and explore effective method of reserving T cell function in CKD. INTRODUCTION Chronic inflammation and immune system dysfunction have been evaluated as major factors in the pathogenesis of chronic kidney disease (CKD), contributing to the high mortality rates observed in these populations. As a main component of cellular immunity, T cells play a leading role in defense of pathogens, immune homeostasis and immune surveillance. The impact of uremia on the immune system has been previously studied in end-stage renal disease (ESRD) patients. Betjes et al. (1,2) described a decline of T cell numbers especially naive T cells with an increased susceptibility to activation-induced apoptosis, expansion of terminally differentiated T cells with highly secretion of proinflammatory cytokines and lack of adequate antigen-specific T cell differentiation which may be the probable cause of high risk of infection in these patients. Uremic toxins have been suggested as a potential "missing link" between CKD and cardiovascular disease (CVD) since higher CVD risk in these patients cannot be sufficiently explained by classic factors. However, there are few studies on the mechanism of T cell dysfunction caused by uremic toxins. Indoxyl sulfate (IS) is a renal toxin that accumulates in blood of uremic patients with 97.4% bound to serum albumin (3). The serum level of IS in healthy humans is almost undetectable, whereas in uremic patients, it escalates to 236 µg/ml (4). 
IS plays an important role in the progression of renal disease, CVD, bone metabolism disorders and other complications by promoting oxidative stress and inflammatory response (5). IS also affects T cell differentiation. It has been reported that IS aggravates experimental autoimmune encephalomyelitis by stimulating Th17 differentiation (6) and lessens allergic asthma by regulating Th2 differentiation (7). As one of the endogenous ligands of aryl hydrocarbon receptor (AhR), IS probably make effect through AhR. Emerging evidence suggests that AhR is a key sensor allowing immune cells to adapt to environmental conditions and changes in AhR activity have been associated with autoimmune disorders and cancer (8). However, it remains largely unknown if whether IS or AhR are responsible for the T cell disfunction in uremic patients. In the past decade, next-generation sequencing technology has emerged as an effective tool to investigate the gene expression profiling of a species under specific conditions. The advantages of speed, precision and high-efficiency performance of RNA sequencing (RNA-seq) encouraged us to explore the genomewide expression profile in human peripheral blood T cells under stimulation by IS. Cell Isolation and Activation Buffy coats from healthy donors were obtained from Zhongshan Hospital, Fudan University. This study has been approved by the Medical Ethics committee of Zhongshan Hospital, Fudan University. Peripheral blood mononuclear cells were separated by density-gradient centrifugation using Ficoll-Paque Plus (GE healthcare Bio-Science, Uppsala, Sweden) and were further processed for separation of T cells using CD3 MicroBeads (MiltenyiBiotec, Auburn, USA). T cells were cultured in RPMI medium (Eurobio, Les Ulis, France) supplemented with 20 IU/mL penicillin, 20 µg/mL streptomycin, and 10% decomplemented FBS (Life Technologies), and stimulated with Dynabeads R T-Expander beads coated with anti-CD3 and anti-CD28 Abs (Life Technologies) at a 1:1 cell/bead ratio in the absence of IL-2 (30 U/ml). Then IS stimulation experiments were conducted by treating T cells with different IS concentration for 96 h. RNA Sequencing Total RNA was extracted from each sample using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) following the manufacturer's protocol. The RNA concentration and purity were checked by OD A260/A280 (>1.8) and A260/A230 (>1.6). The quality and quantity of RNA obtained from each sample was checked using the NanoPhotometer R spectrophotometer (IMPLEN, CA, USA). RNA concentration was measured using Qubit R RNA Assay Kit in Qubit R 2.0 Flurometer (Life Technologies, CA, USA). RNA integrity was assessed using the RNA Nano 6000 Assay Kit of the Bioanalyzer 2100 system (Agilent Technologies, CA, USA). A total amount of 3 µg RNA per sample was used as input material for the RNA sample preparations. Sequencing libraries were generated using NEBNext R Ultra TM RNA Library Prep Kit for Illumina R (NEB, USA) following manufacturer's recommendations and index codes were added to attribute sequences to each sample. The clustering of the index-coded samples was performed on a cBot Cluster Generation System using TruSeq PE Cluster Kit v3-cBot-HS (Illumia) according to the manufacturer's instructions. After cluster generation, the library preparations were sequenced on an Illumina Hiseq platform and 125 bp/150 bp paired-end reads were generated. 
RNA-Seq Data Processing Clean reads were obtained by removing reads containing adapter, reads containing ploy-N and low-quality reads from raw data. Reference genome and gene model annotation files were downloaded from genome website directly. Index of the reference genome was built using Hisat2 v2.0.5 and paired-end clean reads were aligned to the reference genome using Hisat2v2.0.5. The read counts of each transcript were normalized to the length of the individual transcript and to the total mapped read counts in each sample and were expressed as FPKM. Differential expression analysis of two groups was performed using the DESeq2 R package (1.16.1). P-values were adjusted using the Benjamini and Hochberg's approach for controlling the false discovery rate. In the analysis, a criterion of |log 2 (fold-change)| > 0 and an adjusted P < 0.05 were assigned as differentially expressed. Hierarchical clustering was utilized to present the selected significant down-regulated and up-regulated genes. The cluster Profiler R package was used to perform the gene ontology (GO) enrichment analysis (http://www.geneontology. org). Kyoto Encyclopedia of Genes and Genomes (KEGG, https://www.genome.jp/kegg) and Reactome (https://reactome. org) pathway analysis were performed to understand the function and interactions among differentially expressed genes. Western Blot Analysis The cells were washed twice with cold PBS and then lysed in RIPA buffer supplemented with complete EDTA-free Protease Inhibitor Cocktail (Roche Applied Science, Mannheim, Germany) and PhosStop Phosphatase Inhibitor Cocktail (Roche Applied Science) on ice for 30 min. The cell lysates were sonicated five times for 10 s each and centrifuged at 11,000 g for 30 min at 4 • C. The supernatants were subsequently collected. Protein concentrations were measured using a BCA protein assay kit (Pierce, Inc., Rockford, IL). Statistical Analysis Data were reported as mean ± SD. Statistical analysis was performed using the GraphPad Prism5 software. The one-way ANOVA was used for multiple group comparisons. The paired Student's t-test was used for a single comparison between two groups, and the non-parametric t-test was also chosen if the sample size was too small and not fit Gaussian distribution. Purity of T Cells and RNA-Seq Profiling Analysis The purity of T cells was >97%, as confirmed by flow cytometry (Figure 1A). In this study, after filtered adapter and lowquality reads, about 44.25-62.64 million clean reads were obtained for all samples (Table S1). Hierarchical clustering based on Pearson correlation coefficients showed high correlation (0.969-1.00) among samples in each group and IS stimulation groups were distinctly separated from control group ( Figure 1B). There were 5129 DEGs that were at least twofold up-regulated or down-regulated significantly by IS stimulation and half of them were concentration-specific. Compared with the control group, there were 2535 DEGs in the group treated by 200 µM IS, of which 1101 DEGs were up-regulated and 1434 DEGs were down-regulated. Group treated by 500 µM IS had 2382 DEGs, of which 1131 DEGs were up-regulated and 1251 DEGs were down-regulated. Group treated by 1,000 µM IS had 4090 DEGs, of which 2178 DEGs were up-regulated and 1912 DEGs were down-regulated ( Figure 1C). 1332 were common DEGs in all groups treated by various concentrations of IS compared with the control groups ( Figure 1D). 
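The FPKM normalization and the fold-change plus FDR filtering described above were run with DESeq2 and the Benjamini and Hochberg procedure in R. Purely as a language-consistent sketch of those two steps (column names, the twofold cut-off and the 0.05 level are assumptions for the example, not a reproduction of the actual pipeline), one could write:

```python
import numpy as np
import pandas as pd

def fpkm(counts, lengths_bp):
    """Fragments per kilobase of transcript per million mapped reads.
    counts: samples x genes DataFrame of raw counts; lengths_bp: Series of gene lengths (bp)
    indexed by the same gene names as the count columns."""
    per_million = counts.sum(axis=1) / 1e6
    rpm = counts.div(per_million, axis=0)          # reads per million mapped reads
    return rpm.div(lengths_bp / 1e3, axis=1)       # then per kilobase of transcript

def benjamini_hochberg(pvals):
    """BH-adjusted p-values for a 1D array of raw p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]    # enforce monotonicity in rank
    q = np.empty(n)
    q[order] = np.clip(ranked, 0.0, 1.0)
    return q

def call_degs(results, lfc_col="log2FoldChange", p_col="pvalue",
              min_abs_lfc=1.0, alpha=0.05):
    """Flag genes with |log2 fold change| above a cut-off and BH-adjusted p below alpha."""
    res = results.copy()
    res["padj"] = benjamini_hochberg(res[p_col].values)
    return res[(res[lfc_col].abs() >= min_abs_lfc) & (res["padj"] < alpha)]
```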
Of all common 1332 DEGs, 29 DEGs were up-regulated and 15 DEGs were down-regulated in a concentration dependent manner. 21 DEGs were upregulated or down-regulated at 200µM but reversely regulated when IS concentration was higher. These DEGs were listed in Figure 2. Functional Categories and Enriched Pathways by RNA-Seq Analysis We firstly focused on the T cell functional markers including cluster of differentiation, cytokine and cytokine receptor. Two hundred and seventy two genes were screened (Table S2) and 87 differently expressed genes were found under filtering conditions (corrected P < 0.0001 and FPKM>1), which were listed in Table 1. These genes were involved in T cell activation, adhesion and signal transduction. 19 genes including LTA, LTB, CXCL8, and CCR7 were up-regulated at IS concentration of 200 µM. Twenty genes including TNF-α, IFN-γ and CD40L were up-regulated when IS concentration were higher. 41 genes including IL2, CD28, PD1, and CTLA4 were down-regulated at IS concentration of 200 µM; some of them returned to normal or even up-regulated when IS concentration were higher. The effects of IS stimulation on AhR activation were shown in Figure 2E. mRNA levels of AhR target genes, CYP1A1, CYP1B1, NQO1, and AhRR were increased by IS stimulation, indicating a transcriptionally active form of AhR. Next, we focused on transcription factor (TF) responsive to IS stimulation. TF analysis of differentially expressed genes were extracted directly from the AnimalTFDB database. 114 TFs differentially expressed when compared to the control group under filtering conditions (corrected P < 0.0001). The three mostly altered TF families were zf-C2H2, bHLH, and TF-bZIP separately (Figure 3). Most TFs were differentially expressed at IS concentration of 200 µM, of which 24 TFs were up-regulated and 64 TFs were down-regulated. With a similar pattern of T cell functional markers, many of these TFs were diversely regulated when IS concentration was higher. Eighteen more TFs were differentially expressed when IS concertation raised to 500 µM and 9 TFs were only differentially expressed at IS concentration of 1,000 µM. Myc, BHLHE40, SOX4, CREM, and HIC1 were up-regulated in a concentration dependent manner. Several major regulators of T cell differentiation were also affected by IS stimulation. STAT1, STAT4, NF κ B1 (P50/P105), GATA3, Foxp3, Smad3, and IRF2 were significantly up-regulated. STAT3, STAT5B, STAT6, T-bet, MAF, Runx2, Runx1, BCL6, and NFATC1 were significantly down-regulated at IS concentration of 200 µM. Except for STAT5B and STAT6, these TFs were relatively upregulated at 500 µM or 1,000 µM. STAT2 was significantly down-regulated when IS concentration was higher than 500 µM (Figure 4). GO term enrichment analysis of the 2040 DEGs in IS 500 µM group revealed 182 significantly enriched GO terms under filtering conditions (corrected P < 0.001). The top 10 GO terms of the three aspects [Biological Process (BP), Molecular Function (MF) and Cellular Component (CC)] were shown in Figure 5A. The enriched GO terms on BP and MF were mainly related to immune function (e.g., "T cell activation, " "antigen receptor-mediated signaling pathway, " "leukocyte cellcell adhesion, " "leukocyte differentiation"), gene expression (e.g., "regulation of transcription from RNA polymerase II promoter in response to stress"). The top GO terms on CC were proteasome complex and focal adhesion. 
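The GO term enrichment reported here was computed with clusterProfiler; the test underlying such an analysis for each term is an over-representation (hypergeometric) test of the DEG list against the term's gene set, which can be sketched as follows (the gene lists are placeholders):

```python
from scipy.stats import hypergeom

def enrichment_p(term_genes, deg_genes, universe_genes):
    """One-sided hypergeometric p-value that the DEG list is enriched for a given term."""
    universe = set(universe_genes)
    term = set(term_genes) & universe
    degs = set(deg_genes) & universe
    overlap = len(term & degs)
    # P(X >= overlap) with population |universe|, |term| successes, |degs| draws
    return hypergeom.sf(overlap - 1, len(universe), len(term), len(degs))
```

Adjusting the resulting p-values across all tested terms would again use the Benjamini-Hochberg procedure shown earlier.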
Reactome pathway analysis revealed lots of enriched terms concerning cell cycle especially DNA damage in the top 20 (e.g., "Autodegradation of the E3 ubiquitin ligase COP1, " "p53-Independent DNA Damage Response"). The top 20 Reactome pathway terms were shown in Figure 5B. Effects of IS on Inflammation and DNA Damage in Human Peripheral T Cells To validate RNA-seq results, flow cytometry was performed on two most common pro-inflammatory factors TNFα and IFN-γ. After 4 days of stimulation, secretion of TNF-α and IFN-γ were significantly elevated in T cells (Figure 6). To verify the pathway enrichment results that IS induced DNA damage in T cells, proteins related to DNA damage response (DDR) were tested by western blot. Protein level of p-ATM, p-ATR, p-p53, and p-BRCA1 were significantly higher in IS intervention group and so was AhR (Figure 7). By screening transcription of T cell functional markers including cluster of differentiation, cytokine and cytokine receptor, we found lots of pro-inflammatory genes were upregulated such as TNF-α, IFN-γ, CD40L, and CXCL8. Flow cytometry further proved that IS elevated the secretion of TNF -α and IFN-γ in T cells. In addition, IS down regulated CD28 expression, which could be vital since CD4 + CD28 − T cell has been well-proved to be closely related to chronic inflammation and clinical CVD events in CKD patients (10). CD7 and CD26, the two factors reported been down-regulated in chronic inflammatory diseases (11,12), were also downregulated by IS intervention. Many molecules known as immune checkpoint inhibitor, which activating in evolving immune activation cascade and contributing inhibitory signals to dampen an overexuberant response, were down-regulated, including PD1, CTLA4, LAG3, and CD200 (13)(14)(15). Down-regulation of these genes could aggravate inflammation. These results highly suggested that the toxicity of IS to T cells could be an important source of chronic inflammation in CKD or uremic patients. Many factors would contribute to chronic inflammatory status in CKD, including increased production of proinflammatory cytokines, oxidative stress, acidosis, altered metabolism of adipose tissue and even some treatment per se such as hemodialysis (16). Immune cells activated by uremic milieu produce more proinflammatory cytokines and aggravate the inflammation condition as a vicious circle. The most welldocumented studies were that patients with ESRD typically had an expansion of proinflammatory CD4 + CD28 − T cell and CD14 + CD16 ++ monocyte populations, which were considered to be novel, non-traditional cardiovascular risk factors (2). Besides cytokines, retention of uremic toxins should be the key mechanism that underlie the generation of oxidative stress and inflammation. As matter of fact, the crosstalk between gut microbiota and CKD has become a new focus for studying the mechanism of inflammation in these patients. IS and pcresyl sulfate, generated by protein fermentation in intestine, are potential candidates since they were not only associated with CKD progression but also related to poor prognosis in ESRD patients (17,18). Our study shed a light into the mechanism of immune disturbance in CKD patients. As an endogenous ligand of AhR, IS is functioning through AhR along with many other uremic toxins, such as indole-3acetic acid and indoxyl-β-D glucuronide (19,20). 
AhR is a ligandactivated transcription factor and is involved in the regulation of multiple cellular pathways such as inflammatory responses, cell cycle regulation and hormone signaling (21). Physiological functions of AhR may require tightly controlled and transient signaling, and sustained AhR signaling may underlie pathological responses (22). Accumulation of IS in CKD patients could cause prolonged AhR activation; it further leading to a pathological change. Recently, a clinical study confirmed that CKD patients displayed a strong AhR-activating potential, which is not only strongly correlated with serum IS level but also correlated with CVD risk (23). In the current study, the expression of AhR protein and known AhR-regulated genes such as CYP1A1, CYP1B1, NQO1, and AhRR were up-regulated, indicating a transcriptionally active form of AhR, which was consistent with the previous study (7). When analyzing transcription factors, we found some critical TFs such as STAT1, STAT3, IRF4 were differentially expressed at IS concentration of 200 µM, but were diversely regulated when IS concentration was higher. In addition, the plasma IS concentration of CKD patients is far beyond the scope of this experimental design, and it remains a question how TFs react at lower IS concentration. It seems that the effect of IS on T cells are quite different depending on various IS concentration. At present, we cannot fully understand the mechanism of TFs rebound. We speculate this may be related to the characteristics of AhR function, since lots of previous studies have shown that activated AhR could function oppositely by different kinds of ligand or even one ligand in different concentrations (24,25). It is worth noting that in GO and Reactome analysis, we found a lot of enriched items were concerning cell cycle especially DNA damage. Western blot also FIGURE 3 | The distribution of TF families. The x-axis represents different TF families (gene names were presented directly if there was only one TF in this TF family), the y-axis represents the percentage of corresponding TF family in total differentially expressed TFs. FIGURE 4 | FPKM of TFs involved in T cell differentiation in each IS group. The x-axis represents FPKM, the y-axis represents gene name and IS concentration. *corrected P < 0.05 compared with control group; + corrected P < 0.05 compared with IS 200 µM group; # corrected P < 0.05 compared with IS 1,000 µM group. confirmed that the expression of DDR proteins including p-ATM, p-ATR, p-p53, and p-BRCA1 were up-regulated under IS stimulation. Therefore, we suggest that IS may cause DNA damage, which could further lead to T cell senescence. Notably, T cell senescence has been considered a major contributor of inflammation and crucial mechanism of complications in CKD (11,26,27), thus it should be paid enough attention that IS may lead to DNA damage. We can't get answer here whether IS directly leads to DNA damage through AhR, or indirectly through inflammation or oxidative stress. But the first case is reasonable since it has been well-proved that activation of AhR could directly cause DNA damage in other settings (28,29). Our study had several limitations. First, this study was based on IS effect on T cells from healthy donors. 
Since inflammatory response in T cells of CKD patients could be different compared to the response in healthy T cells treated with IS, further research focused on patients' T cells should be conducted to providing better understanding of effect of IS on T cell function in CKD. Secondly, this research only presented an image of influence by IS to T cells, studies with the aim of understanding molecular mechanism are needed. In conclusion, our study shows that the toxicity of IS to T cells could be an important source of chronic inflammation in CKD patients. As an endogenous ligand of AhR, IS may participate in multiple cellular pathways such as inflammatory response and cell cycle regulation, which are closely related to impaired T cell function in CKD patients. We hope that this study will encourage other laboratories around the world to get FIGURE 7 | Expression of DDR protein in T cells by IS intervention. Protein level of p-ATM, p-ATR, p-p53 and p-BRCA1 were significantly higher in IS 500 µM group compared with the control group. Protein level of p-BRCA1 were also increased in IS 200 µM group. AhR were also up-regulated in IS 200 µM and IS 500 µM group. *P < 0.05 compared with the control group. more in-depth knowledge of the molecular mechanism of uremia associated immune dysfunction and make efforts to improve the clinical prognosis of CKD patients. DATA AVAILABILITY STATEMENT The datasets generated for this study can be found in the SRA, PRJNA599948. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Ethical Committee, Zhongshan Hospital, Fudan University. The patients/participants provided their written informed consent to participate in this study. SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmed. 2020.00178/full#supplementary-material Table S1 | Clean reads for all samples.
v3-fos-license
2018-04-03T05:20:04.251Z
2017-03-15T00:00:00.000
25698196
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/chem.201604705", "pdf_hash": "3fd566e400ea83b64846fdbb42a4d9fbda0e2846", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44765", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "3fd566e400ea83b64846fdbb42a4d9fbda0e2846", "year": 2017 }
pes2o/s2orc
Accurate Bond Lengths to Hydrogen Atoms from Single‐Crystal X‐ray Diffraction by Including Estimated Hydrogen ADPs and Comparison to Neutron and QM/MM Benchmarks Abstract Amino acid structures are an ideal test set for method‐development studies in crystallography. High‐resolution X‐ray diffraction data for eight previously studied genetically encoding amino acids are provided, complemented by a non‐standard amino acid. Structures were re‐investigated to study a widely applicable treatment that permits accurate X−H bond lengths to hydrogen atoms to be obtained: this treatment combines refinement of positional hydrogen‐atom parameters with aspherical scattering factors with constrained “TLS+INV” estimated hydrogen anisotropic displacement parameters (H‐ADPs). Tabulated invariom scattering factors allow rapid modeling without further computations, and unconstrained Hirshfeld atom refinement provides a computationally demanding alternative when database entries are missing. Both should incorporate estimated H‐ADPs, as free refinement frequently leads to over‐parameterization and non‐positive definite H‐ADPs irrespective of the aspherical scattering model used. Using estimated H‐ADPs, both methods yield accurate and precise X−H distances in best quantitative agreement with neutron diffraction data (available for five of the test‐set molecules). This work thus solves the last remaining problem to obtain such results more frequently. Density functional theoretical QM/MM computations are able to play the role of an alternative benchmark to neutron diffraction. Supporting information for the paper: "Accurate bond distances to hydrogen atoms from single-crystal X-ray diffraction by including estimated hydrogen ADPs and comparison to neutron and QM/MM benchmarks" Structures and CCDC Refcodes Merged diffraction data of the investigated structures are deposited alongside this publication. The respective CCDC refcodes [1] of the earlier CIF depositions that contain the relevant structural models for refinement are provided in the following Table 1. Table 1: CCDC Refcodes for the structures investigated. Crystallographic information files for these can be downloaded to initiate refinement. The refcode for the neutron data of N-acetyll-4-Hydroxyproline·H 2 O is POKKAD02. For l-Threonine the refinement results of the 19K data were not deposited in the CCDC. Here earlier structure of a 12K dataset [2] should provide input coordinates. Depictions of non-positive definite H-ADPs from Hirshfeld atom refinment for four amino acids Statistical methods (also contained in the main article) Given a set of N values V = {V i } the mean value and its population standard deviation are defined by: The population standard deviation σ pop or root mean-square deviation (RMSD) gives an indication of the spread of the values around the mean. The error in the mean is given by: In this supplement several pairs of bond distances are compared, derived from neutron and X-ray measurements as well as ONIOM computations, denoted We follow earlier work [4] and use the statistical measures to describe similarities and differences. In the following comparisons the X-ray or ONIOM value to be compared {C i } is subtracted from the neutron value when this is available, so that a positive value indicates that the X-ray or ONIOM result is too short. When neutron values are not available, the quantum chemical ONIOM result is chosen as benchmark {B i } for the X-ray results. 
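Written out explicitly, the quantities defined above take their standard forms; since the displayed equations themselves are not reproduced here, the following is a reconstruction consistent with the surrounding text rather than a verbatim copy:

```latex
\langle V \rangle = \frac{1}{N}\sum_{i=1}^{N} V_i , \qquad
\sigma_\mathrm{pop} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(V_i - \langle V \rangle\right)^2} , \qquad
\sigma_{\langle V \rangle} = \frac{\sigma_\mathrm{pop}}{\sqrt{N}} .
```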
Following values for the combined set V are reported with the following nomenclature: This quantity is also known as the signed difference. The MD can be positive or negative, meaning that on average the parameters derived from the X-ray measurements or ONIOM computations are smaller or larger, respectively, than those derived from the neutron measurements. (ii) The mean of the square of the weighted difference -weighted by the combined standard uncertainties from both measurements -is denoted The combined standard uncertainty (csu), which appears in this expression, is given by [5] csu . Combining these equations, the mean of the square of the weighted difference is For reasons of convention, we report the square root of this property and refer to it as the csu-weighted root mean-square difference (wRMSD). For ONIOM results the standard deviation was used as zero. Detailed bond distances and discussion In this supplement all bond distances in eight standard and one non-standard amino acids are listed in several tables. These are further analyzed by the MD using all atoms, whereas in the main paper these values were only reported for the bond distances involving hydrogen atoms. Subsequently bond distances of all molecules are discussed in detail case by case. In the main paper only the X-H distances are discussed and analysed statistically. Due to the small sizes of the molecules studied bond distances will be given in all cases. We start the comparison using N-acetyl-l-4-hydroxyproline monohydrate. For this molecule neutron data at 150 K were available, which are used for comparison with highresolution 100 K X-ray data. Three X-ray models were evaluated: INV refinement that relies on the Hansen/Coppens multipole model, HAR (free refinement of positions and H-ADPS) and HAR with refined hydrogen positions but fixed estimated TLS+INV H-ADPs (Table 2). Bond distances from all five sources and approaches, invariom (with TLS+INV H-ADPs), HAR and HAR with TLS+INV H-ADPs (when necessary), neutron diffraction and ONIOM computations reasonably agree for N-acetyl-l-4-hydroxyproline monohydrate (Table 2), with the exception of a huge outlier for O(2)-C(1), where ONIOM overestimates the result. The MD calculated for all pairs of bond distances using neutrons as reference shows that ONIOM results agree best with the neutron result. Concerning the X-ray results invariom refinement with TLS+INV H-ADPs shows a higher MD than HAR. Here estimated TLS+INV H-ADPs give the lowest MD, better than HAR with freely refined H-ADPs. We next focus on the ONIOM results. For d,l-asparagine monohydrate, where neutron data collected at room temperature [6] are also available, the comparison likewise shows that neutron and ONIOM results are in best agreement for both molecules for all distances (including the X-H distances). The agreement can be less good for selected bond distances between heavier nuclei, and this will be discussed below for d,l-glu·H 2 O. More remarkable are trends in the individual bond lengths involved in hydrogen bonding, which are well reproduced by the two-layer ONIOM computation despite the approximation of using electrostatic interactions between high and low layer rather than a whole wave function for all molecules in the cluster only. We conclude that ONIOM results can be used as an alternative to neutron diffraction in general, as shown using the examples of the genetically encoded amino acids in this work. 
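In the same notation, the signed mean difference, the combined standard uncertainty entering the weighting, and the csu-weighted root mean-square difference described in the statistical-methods subsection can be written as follows; this is again a reconstruction consistent with the definitions in the text, not the original typography:

```latex
\mathrm{MD} = \frac{1}{N}\sum_{i=1}^{N}\left(B_i - C_i\right), \qquad
\mathrm{csu}_i = \sqrt{\sigma^2(B_i) + \sigma^2(C_i)}, \qquad
\mathrm{wRMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{B_i - C_i}{\mathrm{csu}_i}\right)^2} .
```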
Concerning the X-ray results for d,l-asparagine monohydrate (Table 3) values are only given for INV and HAR with TLS+INV estimated H-ADPs to avoid results based on nonpositive definite H-ADPs (depicted in Figure 1). Therefore four rather than five sets of values are provided. HAR results perform better than the ONIOM results in this molecule. The situation that non positive definite H-ADPs are obtained after HAR is similar for l-phenylalanine l-phenylalaninium formic acid, d,l-proline monohydrate and l-threonine, where ortep plots [7] plots are provided in Figure 1 1 . For d,l-serine (Table 4, X-ray data from [8]) room temperature neutron data were taken from [9]. The invariom with TLS+INV bond distances improve compared to 2005 as listed in [8], where for hydrogen at the time only nearest neighbor atoms were considered in invariom model compounds, and where only isotropic displacement parameters were refined for them. In this study next-nearest neighbor model compounds [10] and the TLS+INV H-ADPs were used for hydrogen; including estimated H-ADPs can be considered an important improvement since X-H bond distances get closer to neutron diffraction and ONIOM results. When we use the 100 K synchrotron data to unusually high resolution from [11] rather than the 100 K dataset from the multi-temperature laboratory data from 2005 (Table 5), very similar results are obtained (only shown for invariom refinement since HAR failed). The higher resolution synchrotron data giving better agreement with neutron diffraction than the MoKα data for INV refinement. The MD value findings indicate that agreement with neutron diffraction is again best for ONIOM, followed by Invariom refinement (Flaig's as well as Dittrich's X-ray dataset) with TLS+INV H-ADPs. For HAR free refinement of H-ADP (Dittrich's data) and positional parameters gives better results than from HAR refinement with H-ADPs estimated by the TLS+INV approach when all bond distances are evaluated rather than just the X-H bond distances like in the main paper. Room temperature neutron results [12] are again available for l-glutamine (Table 6), and here free refinement of H-ADPs was also possible in HAR, providing five sets of values for comparison. Here the MD for all X-X bond distances agrees most favorably with those by HAR with TLS+INV H-ADPs and free HA refinement, followed by those computed by ONIOM. It can be noted that for l-glutamine both X-H (main paper) as well as X-X bond distances (the value given in Table 6 from all sources agree very favorably overall. For hydrogen-bonded d,l-glutamic acid monohydrate (Table 7) the trend in agreement in the absence of neutron data is the same than for most of the preceeding cases: using ONIOM results as reference for computing the MD, INV refinement agrees less well than HAR (free refinement), which is again less good than HAR refinement with estimated H-ADPs; neutron data really do not seem to be required to validate X-H bond as well as X-X distances. However, unlike in neutral N-acetyl-l-4-hydroxyproline the C α -N bond distance is an outlier in the theoretical ONIOM computations. It disagrees considerably with the X-ray bond distances in this zwitterionic structure. Table 8: Bond length (inÅ) for zwitterionic l-phenylalanine in the solvate l-phenylalanine l-phenylalaninium formic acid from quantum chemistry (ONIOM B3LYP/cc-pVTZ:UFF) and X-ray diffraction. 
Like for d,l-glutamic acid monohydrate neutron data for the structure of l-phenylalanine l-phenylalaninium formic acid are unavailable and the C α -N bond distance is an outlier in the theoretical ONIOM computation ( Table 8). The explanation that can be provided is the influence of the crystal field [13] that is only partly taken into account by point charges. The crystal field (including hydrogen bonding) causes oxygen atoms to polarize towards the positive carbon atom, while the H atoms polarize away from the negative N atom; polarization of the H-atoms thus leads to a weakening of the C α -N bond and its elongated bond distance in the ONIOM computation. Similar polarizations have been visualized for l-homoserine, using different levels of model sophistication, starting from point charges, then improving the description with point charges and dipoles surrounding a molecule, and finally from full periodic DFT calculations [14]. Therefore only a MO description of the low-layer atoms in ONIOM or full periodic computations can give the correct bond distances from theory, at considerably higher computational effort. Because we are mainly interested in the X-H bond distances we consider the ONIOM B3LYP/cc-pVTZ:UFF levels of theory entirely appropriate here. HAR failed for l-phenylalanine l-phenylalaninium formic acid both for free refinement giving non-positive definite H-ADPs as well as for H-ADP-constrained refinement, where no minimum was found; X-H distances from INV refinement agree reasonably well with ONIOM results. Since there are no neutron results for d,l-proline monohydrate ( Table 9) the reference to compare the X-ray data with has to be the result of the ONIOM computation. Invariom refinement shows a slightly less good agreement than HAR in terms of the MD when using all X-H and X-X bond distances. Optimized ONIOM bond distances come from a triple zeta basis set, as does HAR for the crystallographic refinement. I this regard the multipole model performs well despite the single Slater function per multipole -results are alost as good. For l-threonine a room temperature neutron structure [15] provides reference bond distances (Table 10). Again the agreement between neutron diffraction and ONIOM computations is clearly most favorable (apart from the C α -N bond), supporting the conclusion of the main paper that the latter can be used to provide comparative results for the other structures. The next-best agreement is HAR and then invariom refinement, both with with estimated TLS+INV H-ADPs. Table 11: Bond length (inÅ) for d,l-valine involving hydrogen atoms from quantum chemistry (ONIOM B3LYP/cc-pVTZ:UFF) and for X-ray diffraction.
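As a worked illustration of how the MD and wRMSD statistics used throughout these comparisons are evaluated for a pair of distance sets, a short Python sketch follows; the numbers are placeholders rather than values from the tables above.

```python
import numpy as np

def compare_bond_lengths(benchmark, model, sigma_benchmark, sigma_model):
    """Signed mean difference (MD) and csu-weighted RMSD between two sets of distances."""
    b, c = np.asarray(benchmark, float), np.asarray(model, float)
    sb, sc = np.asarray(sigma_benchmark, float), np.asarray(sigma_model, float)
    diff = b - c                                   # positive => model distance shorter than benchmark
    csu = np.sqrt(sb ** 2 + sc ** 2)               # combined standard uncertainty per bond
    return diff.mean(), np.sqrt(np.mean((diff / csu) ** 2))

# Made-up example (in Å): neutron benchmark vs. refined X-H distances
md, wrmsd = compare_bond_lengths(
    benchmark=[1.012, 0.985, 1.090], model=[1.005, 0.978, 1.083],
    sigma_benchmark=[0.002, 0.002, 0.003], sigma_model=[0.004, 0.004, 0.005])
# For an ONIOM benchmark, sigma_benchmark would be set to zeros, as stated in the text.
```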
v3-fos-license
2019-03-24T05:33:54.629Z
2019-01-01T00:00:00.000
85511043
{ "extfieldsofstudy": [ "Medicine", "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2073-4360/11/1/152/pdf", "pdf_hash": "f7da12e5199502071b6ece330621af233371eb48", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44766", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "sha1": "f7da12e5199502071b6ece330621af233371eb48", "year": 2019 }
pes2o/s2orc
Resistance to Cleavage of Core–Shell Rubber/Epoxy Composite Foam Adhesive under Impact Wedge–Peel Condition for Automobile Structural Adhesive Epoxy foam adhesives are widely used for weight reduction, watertight property, and mechanical reinforcement effects. However, epoxy foam adhesives have poor impact resistance at higher expansion ratios. Hence, we prepared an epoxy composite foam adhesive with core–shell rubber (CSR) particles to improve the impact resistance and applied it to automotive structural adhesives. The curing behavior and pore structure were characterized by differential scanning calorimetry (DSC) and X-ray computed tomography (CT), respectively, and impact wedge–peel tests were conducted to quantitatively evaluate the resistance to cleavage of the CSR/epoxy composite foam adhesives under impact. At 5 and 10 phr CSR contents, the pore size and expansion ratio increased sufficiently due to the decrease in curing rate. However, at 20 phr CSR content, the pore size decreased, which might be due to the steric hindrance effect of the CSR particles. Notably, at 0 and 0.1 phr foaming agent contents, the resistance to cleavage of the adhesives under the impact wedge–peel condition significantly improved with increasing CSR content. Thus, the CSR/epoxy composite foam adhesive containing 0.1 phr foaming agent and 20 phr CSR particles showed high impact resistance (EC = 34,000 mJ/cm2) and sufficient expansion ratio (~148%). Introduction An epoxy foam adhesive is an epoxy containing a foaming agent that generates a gas inside the epoxy resin or expands upon heat treatment. After heat treatment, the epoxy foam adhesive is cured and foamed, simultaneously filling the gap between two substrates and binding them [1,2]. Through this process, the epoxy foam adhesive provides weight reduction, watertight property, and reinforcement effects, and can thus be applied to structural adhesives in automobiles [1]. As the mechanical strength of an epoxy foam adhesive weakens through an increase in the expansion ratio [3], it is necessary to improve the mechanical strength while preserving the expansion ratio. Particularly, since the impact resistance of an epoxy foam adhesive is important for its application to structural adhesives in automobiles, epoxy composite foam adhesives containing an additive that improves impact resistance are required [4]. Rubber particles have been widely used to enhance the impact resistance of epoxy composites [5][6][7][8]. However, the poor dispersion and aggregation of rubber particles in a composite decreases the impact resistance at a high content of rubber particles. Therefore, core-shell rubber (CSR) particles, in which rubber particles form the core structure and a polymer forms the shell, have been developed [9]. Using CSR particles in a composite rather than rubber particles, the dispersion of CSR particles in the composite can be improved and impact resistance can be enhanced [9][10][11]. The pore structure of epoxy composite foams is typically characterized by two-dimensional analysis, such as scanning electron microscopy (SEM) [3,[12][13][14][15]. Two-dimensional analysis can be used to observe only the sample surface, and the investigation of the internal pore structure necessitates a destructive evaluation of the epoxy composite foam adhesive. 
However, X-ray computed tomography (CT) can nondestructively characterize the internal pore structure of polymeric foams [1,16,17] and can quantitatively evaluate the average pore size, standard deviation of pore size, porosity, and expansion ratio. Further, the impact resistance of an epoxy composite containing CSR particles has been conventionally evaluated by a ballistic impact test [10], Izod impact test [11], etc. By contrast, to evaluate the impact resistance of structural adhesives, a test specimen is adhesively bonded and impact is applied to the specimen using instruments such as an Izod impact tester [4,11,18], impact wedge-peel tester [19,20], and servohydraulic tester [21]. In this study, epoxy composite foam adhesives containing epoxy resin, a foaming agent, and CSR particles were prepared, and their pore structure was characterized by X-ray CT. We used an impact wedge-peel tester to quantitatively evaluate their resistance to cleavage under impact conditions. We investigated the effect of CSR particles on the pore structure and impact resistance of the epoxy composite foam adhesive containing different amounts of foaming agent and suggested an optimal content of CSR particles to achieve a high expansion ratio and impact resistance. Curing and Foaming of CSR/Epoxy Composite Foam Adhesive Materials were blended, maintaining the ratio of total equivalent weight of epoxy to curing agents as 1.00 (Table 2). Different amounts of the CSR mixture were blended so that the CSR content in the composites varied as 0, 5, 10, and 20 phr. Further, CaCO3 was added so that the total weight of CSR particles and CaCO3 was 20 phr. Moreover, the foaming agent content in the samples was varied as 0, 0.1, and 1 phr. All the samples were cured and foamed at 170 °C for 40 min. Differential Scanning Calorimetry (DSC) DSC (DSC Q200, TA Instruments-Waters Korea Ltd., Seoul, Korea) was performed to compare the curing behaviors of the epoxy composite foams. The heat flow of the exothermic curing reaction was measured during the DSC run in the temperature range of 60-240 °C at a constant heating rate of 5 °C/min. X-ray Computed Tomography The pore structure was characterized by X-ray CT (Skyscan 1272, Bruker Korea Co., Ltd., Gyeonggi-do, Belgium). The X-ray head (50 kV) was rotated around the epoxy composite foam and tomographic images were captured every 0.6°. These tomographic images were collected and converted into 3D images. The average pore size, standard deviation of pore size, and porosity were evaluated by the software (CT Analyzer, Bruker Korea Co., Ltd., Gyeonggi-do, Belgium), and the expansion ratio was calculated by Equation (1), where V pore and V total represent the total volume of pores and the measured region, respectively.
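Equation (1) is not reproduced above; given the definitions of V pore and V total and the expansion ratios of roughly 148% quoted in the abstract, it is presumably the analyzed volume expressed relative to its pore-free (solid) fraction. A voxel-counting sketch along those lines, offered as an assumption rather than a transcription of the paper's equation, is:

```python
import numpy as np

def pore_statistics(segmented, voxel_volume_um3=1.0):
    """Porosity and expansion ratio from a binary CT segmentation.
    segmented: 3D boolean array, True where a voxel belongs to a pore."""
    v_total = segmented.size * voxel_volume_um3           # volume of the measured region
    v_pore = np.count_nonzero(segmented) * voxel_volume_um3
    porosity = v_pore / v_total                           # pore volume fraction
    # Presumed form of Equation (1): measured volume relative to its solid part, in percent
    expansion_ratio = 100.0 * v_total / (v_total - v_pore)
    return porosity, expansion_ratio

# A porosity of about 0.32 gives an expansion ratio of roughly 148%,
# consistent with the value quoted in the abstract.
```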
Impact Wedge-Peel Test An impact wedge-peel test was performed according to ISO 11343 standard to compare the resistance to cleavage of the CSR/epoxy composite foam adhesives under impact using a drop weight tester (Dyntaup® Model 9250HV, Instron, Norwood, MA, USA). Specimens for the impact wedge-peel test were prepared as shown in Figure 2. Two bent steel plates (length: 90 mm, width: 20 mm, thickness: 1.6 mm, material: CR340) were bonded using the CSR/epoxy foam composite adhesive (area: 20 × 20 mm 2 , thickness: 0.2 mm), and the force was measured when the adhesive layer was cleaved by the wedge at a velocity (v) of 2.0 m/s. * Contents of urethane-modified epoxy resin (UME), CA-1, and CA-2 were set as 15.20, 1.51, and 0.12 g, respectively. As shown in Figure 3, a force-time curve can be obtained by the impact wedge-peel test. According to the shape of the curve, crack growth can be classified into two types: Stable and unstable crack growth. While stable crack growth has a constant region of cleavage force, for unstable crack growth, cleavage occurred in an instant without a constant region of force (Figure 3a,b, respectively).
Curing Behavior of CSR/Epoxy Composite

The curing behavior of the CSR/epoxy composite was studied by DSC (Figure 4). As the curing of epoxy is an exothermic reaction, all the samples exhibited an exothermic peak, and the maximum temperature of heat flow (T_max(heat flow)) was plotted as a function of CSR content. Notably, as the CSR content increased, T_max(heat flow) became higher. This indicates that the addition of CSR particles retarded the curing reaction of epoxy due to the steric hindrance effect of the CSR particles in the CSR/epoxy composite.
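As a small illustration of the T_max(heat flow) metric used above, the sketch below simply picks the temperature at which the exothermic heat-flow signal peaks in a DSC scan. The sign convention (exotherm plotted as positive) and the array names are assumptions; real thermograms usually also require baseline correction, which is omitted here.

```python
import numpy as np

def t_max_heat_flow(temperature_c, heat_flow_mw, exotherm_positive=True):
    """Return the temperature at the maximum of the exothermic curing peak."""
    hf = np.asarray(heat_flow_mw)
    if not exotherm_positive:          # flip if exotherms are plotted downward
        hf = -hf
    return np.asarray(temperature_c)[np.argmax(hf)]

# Toy scan from 60 to 240 degC with a curing exotherm centred near 165 degC.
temp = np.linspace(60.0, 240.0, 1801)
heat_flow = 5.0 * np.exp(-((temp - 165.0) / 12.0) ** 2)
print(f"T_max(heat flow) ~ {t_max_heat_flow(temp, heat_flow):.1f} degC")
```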
Pore Structure of CSR/Epoxy Composite Foam Adhesive

The pore structures of the CSR/epoxy composite foam adhesives were analyzed by X-ray CT. As shown in Figure 5, the pore structure could be investigated from the 3D images of the CSR/epoxy composite foam adhesives, where the pore sizes are assigned a color gradation. Notably, the pore size changed with the CSR content, which indicated that the addition of CSR particles affected the expansion of the foaming agent. To quantitatively compare the pore structures of the CSR/epoxy composite foam adhesives, several parameters, including the average pore size, standard deviation of pore size, porosity, and expansion ratio, were evaluated (Figure 6). The pore size and expansion ratio for 1 phr foaming agent are higher than those for 0.1 phr foaming agent. Compared to 0 phr CSR content, at 5 and 10 phr CSR contents, the pore size and expansion ratio increased sufficiently due to the decrease in curing rate. It has been reported that the curing behavior affects the pore growth and that the expansion ratio increases at a slow curing speed [1,14]. However, although curing was retarded at 20 phr CSR content, the pore size and expansion ratio decreased. This might have resulted from the steric hindrance effect of the CSR particles, which spatially prevented the expansion of the foaming agent [1].

Resistance to Cleavage of Adhesive under Impact Wedge-Peel Condition

Under the impact wedge-peel condition, the impact force was measured for 20 ms (Figure 7). With an increase in CSR content, the force increased and was sustained for a longer period, indicating that the CSR particles significantly improved the impact resistance of the epoxy composite foam adhesives. On the other hand, as the foaming agent content increased, the cleavage time decreased drastically, suggesting that the CSR/epoxy composite foam adhesive became fragile.

As shown in Table 3, the type of crack growth and the displacement for cleavage were investigated to compare the impact resistance. With an increase in the foaming agent content, the CSR/epoxy composite foam adhesives exhibited unstable crack growth and a short displacement for cleavage (D_C), indicating that the impact resistance deteriorated with increasing foaming agent content. As the CSR content increased, the impact resistance of the CSR/epoxy composite foam adhesive improved dramatically, resulting in an increase in D_C and changing the type of crack growth from unstable to stable.

Additionally, by comparing the energy for crack growth (E_C), we quantitatively evaluated the resistance to cleavage of the CSR/epoxy composite foam adhesives under the impact wedge-peel condition. As shown in Figure 8, at foaming agent contents of 0 and 0.1 phr, E_C was significantly enhanced by the addition of CSR particles, indicating an improvement in impact resistance. However, as the foaming agent content increased to 1 phr, E_C hardly increased, which suggests that the impact resistance effectively improved only at low foaming agent contents (0 and 0.1 phr).
Notably, for the CSR/epoxy composite foam adhesive containing 0.1 phr foaming agent and 20 phr CSR particles, E_C (34,000 J/m²) was more than twice that of the adhesive containing no CSR particles (12,000 J/m²). In addition, the type of crack growth changed from unstable to stable crack propagation upon the addition of 20 phr CSR particles to the sample containing 0.1 phr foaming agent. Moreover, the expansion ratio of the adhesive containing 0.1 phr foaming agent and 20 phr CSR particles increased compared to the adhesive containing no CSR particles; this indicated that a simultaneous increase in both the expansion ratio (~148%) and impact resistance was achieved.

Conclusions

CSR/epoxy composite foam adhesives were prepared with different amounts of foaming agent and CSR particles. With increasing CSR content, the curing reaction was retarded, which affected the growth of the pores. The pore structure, pore size, porosity, and expansion ratio were determined by X-ray CT. The expansion ratio for 1 phr foaming agent was higher than that for 0.1 phr foaming agent. At 5 and 10 phr CSR content, the pore size and expansion ratio increased owing to the decrease in curing rate, but at 20 phr CSR content, the pore size and expansion ratio decreased due to the steric hindrance effect of the CSR particles. The impact resistance of the CSR/epoxy composite foam adhesives was compared in terms of E_C. It was significantly enhanced by the addition of CSR particles at 0 and 0.1 phr foaming agent. However, at 1 phr foaming agent, E_C was hardly improved by the addition of CSR particles, indicating that the improvement in impact resistance is effective only at low foaming agent contents (0 and 0.1 phr). For the CSR/epoxy composite foam adhesive containing 0.1 phr foaming agent and 20 phr CSR particles, a simultaneous increase in both the expansion ratio (~148%) and impact resistance (E_C = 34,000 J/m²) was achieved. A limitation of this study is that we only focused on the impact resistance of the CSR/epoxy composite foam adhesives at room temperature. Since CSR particles can improve impact resistance at low temperatures, it is necessary to investigate the impact resistance of CSR/epoxy composite foam adhesives at low temperatures in future research.
K+-Dependent Selectivity and External Ca2+ Block of Shab K+ Channels Potassium channels allow the selective flux of K+ excluding the smaller, and more abundant in the extracellular solution, Na+ ions. Here we show that Shab is a typical K+ channel that excludes Na+ under bi-ionic, Nao/Ki or Nao/Rbi, conditions. However, when internal K+ is replaced by Cs+ (Nao/Csi), stable inward Na+ and outward Cs+ currents are observed. These currents show that Shab selectivity is not accounted for by protein structural elements alone, as implicit in the snug-fit model of selectivity. Additionally, here we report the block of Shab channels by external Ca2+ ions, and compare the effect that internal K+ replacement exerts on both Ca2+ and TEA block. Our observations indicate that Ca2+ blocks the channels at a site located near the external TEA binding site, and that this pore region changes conformation under conditions that allow Na+ permeation. In contrast, the latter ion conditions do not significantly affect the binding of quinidine to the pore central cavity. Based on our observations and the structural information derived from the NaK bacterial channel, we hypothesize that Ca2+ is probably coordinated by main chain carbonyls of the pore´s first K+-binding site. Introduction Potassium channels are proteins that allow the passive and selective flux of K + , excluding the smaller, and more abundant in the extracellular solution Na + ions. The structural framework of this selectivity resides in a conserved amino acid signature sequence (TVGYG) [1], which forms the selectivity filter (SF) of the pore [2][3][4]. Backbone carbonyl oxygen atoms from signature sequence residues point towards the pore lumen, simultaneously coordinating up to two dehydrated K + ions at alternate positions, or binding sites (s1/s3 or s2/s4) [3]. Based on crystal structures, it was proposed that K + is selected over Na + because SF oxygen atoms are positioned at the precise distance and geometry that permits the favorable replacement of the hydration shell of K + (atomic radius = 1.33Ǻ), but not of Na + ions, which have an atomic radius only 0.38 Ǻ smaller than that of K + [2]. The above proposal corresponds to the "snug-fit" model of selectivity [5]. This model does not assign any role to K + ions themselves in the determination of selectivity, and according to it permeation of large ions, such as Cs + (atomic radius = 1.69Ǻ), should also be halted. Several functional observations do not agree with the snug-fit model. Thus, the proposed SF rigidity stands in contrast with the flexibility of proteins [6,7], and indeed functional evidence indicates that SF is able to undergo sub-Angstrom fluctuations, such as that which accounts for the difference between K + and Na + radius. Some examples comprise experimental observations of the role of K + in the stability of K + conductance [8][9][10][11][12][13], in particular of the Shaker K + conductance which in the absence of K + collapses in a fully reversible manner [13], demonstrating that the Shaker pore can fluctuate between conducting and, non-inactivated, non-conducting configurations [13,14]. Additionally, other observations demonstrate that in some K + channels, replacement of K + by Na + ions allows the flux of Na + , at the moderate membrane potentials at which K + normally flows [15][16][17][18][19]. Moreover, a change in Na + vs. K + selectivity has been proposed as part of the mechanism of the slow, C-type, inactivation of Shaker [20,21]. 
In summary, extensive experimental observations regarding stability, gating and selectivity indicate that K + -selective pores are flexible structures, although the role of K + ions in these processes continues to be not well understood. A parallel, and to date incompletely characterized phenomenon, is the change in the pharmacological properties of the pore that should accompany K + -dependent changes in selectivity, as the latter likely arise from significant changes of pore geometry (e.g., Figure 1 from Hoshi and Armstrong, 2013, [21]). Herein we report that when internal K + ions are replaced by Cs + , a manipulation frequently carried out to eliminate currents through K + channels in cells expressing multiple types of ion channels, stable outward Cs + and inward Na + currents are observed, under bi-ionic Na o / Cs i conditions. The latter shows that selectivity is not accounted for by protein structural elements only, as implicit in the snug-fit model. Additionally, we report the block of Shab channels by external Ca 2+ ions, and show that ion conditions that undermine selectivity also impair both Ca 2+ and external TEA block of the pore. Our observations are interpreted within the context of recent structural information acquired with Na + -and-K + conducting bacterial channels [22]. Cell culture and channel expression Sf9 cells grown at 27°C in Grace's medium (Gibco) were infected, with a multiplicity of infection of~10, with a baculovirus containing Shab (dShab 11) K + -channel cDNA, as reported [11]. Experiments were conducted 48 h after infection of the cells. Electrophysiological recordings Macroscopic currents were recorded under whole-cell patch-clamp with an Axopatch 1D amplifier (Axon Instruments). Currents were filtered on-line, and sampled at 50 or 100 μsec/ point, depending on the experiment, with a Digidata 1322A interface (Axon Instruments). Electrodes were made of borosilicate glass (KIMAX 51) having resistances in the range of 1-1.5 MO resistance. The ground electrode was connected to the bath solution through an agar-salt (1M NaCl) bridge. 80% series-resistance compensation was always applied. The holding potential (HP) was -80 mV. Junction potentials were estimated following standard procedures [23], accordingly voltage values were corrected off-line as follows: 10K o -NMG o /Cs i : Vm-shift = 6 mV; 10K o -Na o /Cs i : Vm-shift = 8 mV; Na o /Cs i : Vm-shift = 8 mV. There was less than 1 mV difference between junction potentials of Na o /K i and Na o /Cs i solutions. Results K + channels are defined by their common characteristic of being highly selective for K + over Na + ions. Fig. 1A demonstrates that Drosophila Shab channels are typical K + channels. The figure presents a representative K + current (I K ) evoked by a +50mV/30ms pulse, followed by a strong hyperpolarization to -170 mV, with the cell bathed in bi-ionic Na o /K i solutions (see Methods). Note that, despite the huge driving force for Na + ions, there is no appreciably inward current at pulse end (indicated by the arrow). This indicates that P Na /P K <0.001 and demonstrates that Shab is a typical K + channel. To further address Shab selectivity, internal K + (atomic radius = 1.33Ǻ) was replaced by Rb + ions (1.48Ǻ) and channel selectivity (in Na o /Rb i ) was tested as in A. The lack of inward current in Fig. 1B (arrow) shows that the channels exclude Na + as well as they do with K + in the internal solution (P Na /P Rb <0.001). 
This observation agrees with the generalized use of Rb + as a K + substitute, and further endorses Shab as a typical K + channel. In contrast to the previous observations, Fig. 1C shows that upon replacement of internal K + by the larger Cs + ions (1.69Ǻ) a significant increase in Na + permeability is observed. Thus, with the cell bathed in Na o /Cs i solutions, the depolarizing pulse elicits macroscopic outward I Cs which is followed by inward I Na , upon membrane hyperpolarization. That is, internal K + replacement by Cs + ions undermines pore selectivity, as assessed by the ability of the channel to conduct Na + . Bathing Shab in Na + -containing solutions that lack K + ions irreversibly eliminates the ability of the channels to conduct ions [10][11][12]). Therefore, we tested the stability of the currents recorded in Na o /Cs i solutions. Fig. 1D presents the plot of the relative amplitude of the current as a function of the time of recording (closed circles, Na o /Cs i ). For a reference, the figure also illustrates the stability of I K with standard Na o /K i solutions (open circles, data from Ambriz-Rivas et al, 2005 [10]). Note that in Na o /Cs i solutions, the stability of the ion conductance is comparable with that observed with physiological [K i + ]. Furthermore, the inset demonstrates that the normalized outward currents, recorded in either of these solutions activate within the same range of potentials. In contrast, we could not obtain stable recordings when K i + was replaced by NH 4 + (Na o /NH 4i , not shown). To further characterize Cs + and Na + permeation we apply an instantaneous I-V protocol (II-V), activating the channels with a +50mV/30ms pulse, and thereafter stepping the voltage from -160 to +40 mV, in 10-mV increments. Fig. 2A left panel illustrates currents recorded with this protocol in a cell bathed in Na o /Cs i solutions. Note the inward I Na at negative potentials. The right panel presents the currents obtained in the same cell after the external Na + was replaced by NMG + (indicated by the arrow, NMG o /Cs i ). Note that only the outward I Cs are left, as shown by the average IIV in Fig. 2C. The latter confirms that the inward current observed in Na o /Cs i is carried by Na + . For the sake of completeness, it is pertinent to mention that we have never observed Li + currents (not shown). The previous observations are quantified in Fig. 2B and 2C, which present the average normalized II-V obtained with either Na o or NMG o solutions respectively. Regarding the former (Fig. 2B), note that although the overall distribution of the points is not linear, there are three regions of noticeably different constant slopes, as shown by the fitted least squares lines (L1-L3, correlation coefficients 0.99) (see Figure legend). The average of the points of intersection of the voltage axis with L2 of individual experiments yields V rev = -108±3 mV (n = 8), from which a permeability ratio P Na /P Cs = 0.01 is obtained. Concerning the outward I Cs , note that in the interval from -120 to -25 mV, I Cs has a relatively small, constant, conductance (L2 slope), whereas positive to -25 mV I Cs presents a~3.5-fold larger conductance (L1 slope). A small conductance near V rev , where the driving force is small, has been explained as possibly arising from single-ion occupancy of the pore [24]. However, in this case with the small conductance interval extending as far as~80 mV above V rev , the former explanation appears improbable. 
Therefore, it is most likely that the small conductance indicates that in this region Cs + flux presents a rate-limiting energy barrier. Finally, note that I Na presents a linear variation (L3), within the range of voltages tested, with G Na ~ 4 G Cs near V rev (L3 slope). Thus, although P Cs > P Na , the channel conducts Na + better than Cs + near V rev . Finally, note that, at ~ -160 mV, I Na departs from L3, suggesting that a region with negative slope would have appeared at more negative voltages, similar to that observed with K + present in the external solution (see below and Gomez-Lagunas et al, 2003 [25]). On the other hand, note that as expected, at pulse end, the channels clearly conduct K + better than Na + . Fig. 3B presents the instantaneous normalized I-V relationship obtained in 10K o -NMG o /Cs i solutions. Note that in contrast to the two slopes observed in Fig. 2B, in this case, with 10 mM K o + , I Cs is fitted by a single slope (L1 slope, see Figure legend). This difference is probably accounted for by the corresponding reversal potentials, which make outward I Cs with 10 mM K o + start at a voltage near to that at which the I Cs slope changes in Fig. 2B. A question of interest was whether this [K o + ] could eliminate Na + permeation. Fig. 3C shows that the reversal potential obtained with NMG + ions (10K o -NMG o /Cs i ) is basically the same (P = 0.617) as the one obtained with Na + ions in the external solution (10K o -Na o /Cs i ). The latter demonstrates that 10 mM K o + eliminates Na + permeation through the channels, and yields the permeability ratio P Cs /P K = 0.17. As a reference, this ratio is about twice that reported in Shaker channels [26]. Regarding Fig. 3B, note that I K varies linearly from -35 to -75 mV (L2), and as expected G outward,Cs < G inward,K (for the sake of comparison: G outward,Cs /G inward,K = 0.25, although [Cs i + ]/[K o + ] = 12). Similarly, taking the ratio of the least-squares slopes that fit I Na (Fig. 2B) and I K (Fig. 3B), both obtained with internal Cs + , as an indication of the relative conductance of the inward current carried by these ions, we obtain G Na /G K = 0.21 (Fig. 3D). Finally, note that at Vm ≤ -75 mV the inward current presents a region with a marked negative slope. The latter is the result of a voltage-dependent external Ca 2+ block of the channels, as shown below. This indicates that the negative slope region of the I-Vs is the result of external Ca 2+ block of the channels, in agreement with previous observations performed in Shaker and plant K + channels [25,27,28]. Ca o 2+ block of Shab is quantified in Fig. 4B, which presents the fractional channel block (fb) as a function of voltage. Block was measured as fb = 1 - (I K /I K,expected ), where I K is the average I K in Fig. 4A, and I K,expected is the corresponding I K that would have been obtained in the absence of Ca 2+ block, as evaluated from the least-squares line that fits the points at depolarized voltages (Fig. 4A). Interestingly and as previously noted, a comparison of Fig. 2B and Fig. 4 indicates that Ca 2+ block is basically eliminated in the ionic conditions here reported that undermine pore selectivity, allowing the stable passage of Na + . The latter is quantified in Fig.
4C Block by TEA and Quinidine Finally, considering that the site at which external TEA blocks K + channels has been determined [29], we studied whether the conditions that allow Na + permeation, and eliminate Ca 2+ block, may also undermine TEA block, as this could suggest the site of Ca 2+ interaction with the channels. The representative traces in Fig. 5A, and the histogram in Fig. 5C, show that whereas in standard Na o /K i solutions 25 mM TEA blocks 73±2% of the channels, in Na o /Cs i solutions block is significantly decreased to 46±4% (P = 0.009). Additionally, note that addition of 10 mM K o + (10K o /Cs i ) restores TEA potency (62±12%, P = 0.36). It is worthy to emphasize that currents in Fig. 5A were elicited by a 0 mV depolarization, followed by repolarization to the HP of -80mV (see left panel). The latter explains the lack of inward Na + current upon repolarization in Na o /Cs i solutions. The parallel drop of TEA and Ca 2+ block, observed under a condition that allows the flux of Na + through the pore, suggests that external Ca ++ binds in a place located near the TEA o binding site, and that the configuration of this region is changed by the conditions that allow Na + permeation. Finally, in order to test for possible changes towards the internal side of the pore, we compared the effect of Quinidine on currents recorded in either Na o /Cs i or physiological 5K o /K i solutions. Quinidine is a compound, that regarding Drosophila K + channels is known to specifically block Shab channels, upon binding to the pore central cavity [12]. The traces in Fig. 5B illustrate that addition of 100 μM Quinidine to the external solution blocks~80% of Shab channels with the cell bathed in Na o /Cs i solutions. The inset in the right panel compares the control Na + -tail current at -140 mV (gray trace) against the Na + -tail current recorded in the presence of Quinidine (black trace). Note that the latter presents a slower time course and an initial hook, as expected from an internal pore blocker that hinders the closing of the activation gate [12]. Finally, Fig. 5D shows that, although slightly bigger, the extent of Quinidine block in physiological 5K o /K i solutions is similar to the block exerted in Na o /Cs i (P = 0.0715) (5K o /K i data in Fig. 5D are from Gomez-Lagunas, 2010 [12]). This suggests that pore change(s) that decrease (s) selectivity and external TEA and Ca 2+ block do not reach the central cavity. Although, further work is needed to understand the parallel ion dependence of TEA and Ca 2+ block reported in this work. For example, it would be important to determine whether TEA can compete with Ca 2+ for binding to the pore, and whether mutations known to affect TEA binding also exert en effect on Ca 2+ block. Discussion Herein we demonstrate that the iso-osmollar replacement of intracellular K + by Cs + ions allows Shab channels to stably conduct both Cs + and Na + ions. This demonstrates that care must be taken in experiments where K + ions are replaced by Cs + ions with the aim of preventing currents trough K + channels. More importantly, our observations show that the presence of K + plays an important role in impeding the flow of Na + under bi-ionic conditions (Na o /K i ), and therefore that K + ions are a cofactor required for maintaining the selectivity, as well as the stability [8][9][10][11][12][13], of K + pores. The above indicates that pore selectivity is not fully accounted for by protein structural elements only, as stated in the snug-fit model. 
Instead, our observations support the alternative Koshland´s induced-fit model [6], according to which ion binding sites are not rigidly positioned, and selectivity depends on the balance between the ions hydration energy and the strain energy required by the protein to properly coordinate the ions [6,30]. As a result, ions compete for available binding sites [16], and are selected according to the balance of their corresponding energies. Our observations agree with former observations [15][16][17][18][19], in particular with experiments showing that, upon K + replacement by Na + ions, Kv2.1 channels allow the passage of Na + [16]. On the other hand, in Shab, the Drosophila homolog of Kv2.1, the same ion substitution deranges the pore in a manner that a fast and irreversible collapse of the ion conductance takes place [10][11][12]. For the sake of completeness, it should be noted that despite the wealth of work that has been devoted to determine the mechanism of selectivity controversy still exist regarding the mechanism by which ions are selected, with the snug fit model still being favored by some authors (e.g., see Noskov & Roux (2006) [6], and Derebe et al (2011) [31]). Although under standard bi-ionic conditions (Na o /K i ) Shab does not conduct Na + , it is nonetheless pertinent to discuss our results regarding Na + conduction within the framework of observations recently obtained in a Na + and K + conducting bacterial channel (NaK), which presents a pore architecture similar to that of K + channels [2,22]. The NaK pore presents only two ion binding sites, which otherwise are chemically identical to the innermost K + sites of KcsA (s3, s4), and yet NaK lacks K + -selectivity [22]. Amino-acid substitutions that produced functional NaK pores endowed with a variable number of binding sites, led to the interesting proposal that K + selectivity requires the presence of 4 in-line K + binding sites, as observed in KcsA [3,31]. Our observations show that in the case of Shab the iso-osmollar substitution of K i + by Cs + changes the pore geometry, in such a way that it becomes able to conduct Na + . Based on the results obtained with the NaK channel [31], we hypothesize that pore occupancy by Cs + somehow reduces the effective number of ion binding sites, probably by inducing a small change in the geometry of coordinating carbonyls that point to the pore lumen, probably similar to the one thought to occur in the outermost site (s1) of Shaker channels during C-type inactivation [20,21]. In support of the latter possibility, we observed that bathing the channels in Na o /Cs i solutions brings a change in the conduction pathway that affects the extracellular side of the pore, as deduced by the decreased potency of block by external TEA and Ca 2+ ions, but not of quinidine. In Kv2.1 channels substitution of K + by Na + renders the channel resistant to TEA o . The latter involves the displacement of lysine at position 382, in the external vestibule of the pore, which hinders TEA binding [32,33]. Shab presents a threonine at the equivalent position, and further work is needed to determine whether a phenomenon similar to the one in Kv2.1 underlines the reduced potency of TEA block observed in Na o /Cs i . 
More important for the present discussion, the parallel elimination of Ca 2+ block observed under the previously mentioned conditions suggests that Ca o 2+ binds to the channels near the TEA o binding site, at the pore entry [34], notwithstanding the obtained values of δ, because it is known that δ does not indicate of a physical distance, and that instead its value is the result of the interaction between blocking and permeant ions [35,36]. The latter underlies the different values of δ obtained using either 30K o /K i (δ = -0.32) or K o /Cs i solutions (δ = -0.65). Several proteins bind Ca 2+ with oxygen atoms provided by serine, threonine, or carboxyl groups of either aspartate or glutamate side chains, as observed for example in calmodulin [6]. On the other hand, recent crystallographic images of cation channels show that Ca 2+ can also bind to main chain carbonyl groups that point to the pore lumen [37]. Therefore, based on the one hand on the observed parallel decrease of TEA o and Ca o 2+ block of Shab, under conditions that allow the passage of Na + , and on the other hand based on crystal structures exhibiting Ca 2+ ions bound to selectivity filter carbonyls [37], we hypothesize that Ca 2+ blocks the Drosophila Shab, and other K + channels like Shaker [25], by binding at the pore entry, probably above the first K + binding site (s1). In this scenario, the loss of selectivity and Ca 2+ block, observed in Na o /Cs i solutions, could probably arise from a change in the geometry of s1. The presence of 10 mM K o + inhibits Na + permeation and restores Ca o 2+ block probably by impeding the change of geometry of this site. In agreement with this hypothesis, the observations regarding Quinidine block suggest that the central cavity remains unchanged by conditions that allow Na + permeation. Finally, for the sake of completeness, it must be mentioned that the absence of Ca 2+ ions at the pore entry noticed in crystal structures of K + channels [37], might be the result of the voltage dependence of Ca o 2+ block which requires negative voltages (-70 mV), absent in crystal structures, to develop.
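To make the quantitative steps in the Results above easier to follow, here is a small sketch of two of the calculations: the bi-ionic permeability ratio obtained from a reversal potential (via the Goldman-Hodgkin-Katz relation for two monovalent cations) and a Woodhull-type fit of the voltage dependence of fractional Ca2+ block, fb = 1 - I_K/I_K,expected. The ion concentrations, temperature, external Ca2+ value, and the sign convention used for the electrical distance δ are placeholder assumptions chosen only for illustration; they are not the solutions or the fitting procedure used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol*K)

def permeability_ratio_biionic(vrev_mV, x_out_mM, y_in_mM, temp_C=22.0):
    """P_X/P_Y from a bi-ionic reversal potential with cation X+ outside and Y+ inside:
    Vrev = (RT/F) * ln(P_X * [X]_o / (P_Y * [Y]_i))."""
    rtf = R * (273.15 + temp_C) / F                       # in volts
    return (y_in_mM / x_out_mM) * np.exp((vrev_mV / 1000.0) / rtf)

def woodhull_fb(v_mV, kd0_mM, delta, ca_mM=2.0, z=2, temp_C=22.0):
    """Fractional block fb = [Ca]/([Ca] + Kd(V)) with a voltage-dependent Kd,
    Kd(V) = Kd(0) * exp(z * delta * F * V / (R * T)). Sign conventions for delta
    differ between papers (the authors report negative delta values)."""
    rtf = R * (273.15 + temp_C) / F
    kd_v = kd0_mM * np.exp(z * delta * (v_mV / 1000.0) / rtf)
    return ca_mM / (ca_mM + kd_v)

# Reversal potential of -108 mV with assumed 145 mM Na+ outside / 120 mM Cs+ inside.
print("P_Na/P_Cs ~", round(permeability_ratio_biionic(-108.0, 145.0, 120.0), 3))

# Fit Kd(0) and delta to a toy fractional-block vs. voltage data set,
# where fb would be computed from currents as fb = 1 - I_K/I_K,expected.
v = np.arange(-160.0, -60.0, 10.0)
fb_obs = woodhull_fb(v, kd0_mM=10.0, delta=0.4) \
         + 0.01 * np.random.default_rng(1).standard_normal(v.size)
(kd0_fit, delta_fit), _ = curve_fit(woodhull_fb, v, fb_obs, p0=[5.0, 0.3])
print(f"fitted Kd(0) = {kd0_fit:.1f} mM, delta = {delta_fit:.2f}")
```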
Reappraising the psychosomatic approach in the study of “chronic orofacial pain”: looking for the essential nature of these intractable conditions As burning mouth syndrome (BMS) and atypical odontalgia (AO) continue to remain complex in terms of pathophysiology and lack explicit treatment protocol, clinicians are left searching for appropriate solutions. Oversimplification solves nothing about what bothers us in clinical situations with BMS or AO. It is important to treat a complicated phenomenon as complex. We should keep careful observations and fact-finding based on a pragmatic approach toward drug selection and prescription with regular follow-up. We also need to assess the long-term prognosis of treatment with a meticulous selection of sample size and characteristics. Further investigation of BMS and AO from a psychosomatic perspective has the potential to provide new insight into the interface between brain function and “chronic orofacial pain.” Introduction "Chronic orofacial pain (COFP)" is an umbrella term used to describe painful regional syndromes with a chronic, unremitting pattern (1).This term is very convenient; however, the author does not prefer the term "chronic orofacial pain" due to its lack of therapeutic indications and potential for confusion (2).In fact, the study of COFP now seems to be losing focus because of its ambiguity. For example, studies on burning mouth syndrome (BMS) have seen remarkable growth in the last two decades.These study data have many limitations and do not apply to many clinical cases.Management of BMS has been seen as a "jumble of wheat and tares," with little evidence to support or refute interventions.The existence of "too many reviews and too few trials" leads to difficulty in choosing an appropriate therapy for each patient with BMS (3). Consequently, BMS often persists for many years, and patients may undergo several unproductive tests without any improvement in oral symptoms despite many treatment attempts (4).Dentists obviously feel the urgent need to offer some treatments for these BMS patients, developing a feeling of helplessness and frustration. Atypical odontalgia (AO) is included as another COFP condition that presents challenges for many dentists (5).This pain condition has been given more attention by many dentists because conventional dental procedures seldom provide relief for these patients; on the contrary, there is a risk of legal troubles.AO "pain" differs significantly from ordinary dental conditions like caries or pulpitis; however, patients' complaints are sometimes very confusing to distinguish from such ordinary dental pain that can be treated successfully.Dentists have become more cautious and nervous in diagnosing "toothache" and more careful when performing invasive dental procedures these days. Moreover, confusing terminology is impeding progress in the research for the treatment and pathophysiology of both conditions.It might be accurate to say that there is no perfect treatment that can be effective for all BMS or AO patients with various underlying backgrounds.In my opinion, the lack of "psychosocial interventions" is probably the most critical factor contributing to this confusing situation. In this brief opinion article, BMS and AO are mainly argued as "psychosomatic oral pain"; on the other hand, temporomandibular disorders (TMDs) and trigeminal neuralgia (TN) are distinguished from them. 2 What type of "pain" are patients complaining about in cases of BMS or AO? 
Many studies have indicated the important role of psychological factors such as depression and anxiety in BMS and AO.Nonetheless, most of them have remained superficial, failing to suggest any hopeful solutions for these chronic oral pain conditions.It seems nonsensical to argue the efficacies of antidepressants or other neuromodulations for BMS or AO without accompanying "psychosocial interventions."Like other chronic pain, treatment outcomes of BMS were affected easily by placebo and nocebo effects (6).Therefore, every treatment outcome of BMS and AO is probably affected by the patient-physician relationship.Moreover, the patient-physician relationship is crucial for patient's adherence to any pain medications. Mere administration without a convincing reason and a full understanding of patients would easily result in their nonadherence.The patient-physician relationship is one of the biggest watersheds between adherence and non-adherence.It should be prioritized to be aware of this psychological background underlying every prescription.Pain medication for BMS and AO requires this psychosomatic perspective."Psychoeducational treatment" (Table 1) would be necessary for successful pain medication.This is one of the very basic cognitive behavioral therapies (7). BMS and AO share common trigeminal nerve input, yet they are highly distinct disorders (8).Somatotopic segregation may occur at the level of the trigeminal nucleus, thalamus, and somatosensory cortex, and distinct ionic or neurochemical signaling pathways may be involved (9).This structural basis probably has a strong connection with instinctual emotional function, easily affected by various psychosocial factors. BMS and AO might be seen as models of a psychosomatic disorder, in which the biological environment interacts with psychosocial factors.This approach does not mean that the mechanisms underlying BMS and AO are purely psychological, but that the role of psychological (or psychopathological) factors is more substantial than in most diseases (4). In Japanese dentistry, BMS and AO have been regarded as oral psychosomatic disorders for more than half a century, requiring a multidisciplinary (medical and psychosocial) approach.Amitriptyline, a classic tricyclic antidepressant (TCA), has been used for both BMS and AO, with the need for accompanying psychotherapies since then.Nevertheless, difficulties in timeconsuming psychosomatic treatments and poor reimbursement (healthcare fee) have prevented many dentists from diligent practice for such patients.However, we have kept searching for BMS and AO as "psychosomatic oral pain" in the hope of finding treatments for them. 3 Problems pile up in researching BMS and AO Heterogeneity The heterogeneity of BMS or AO is the biggest barrier preventing us from reaching the best treatment (3).Moreover, BMS symptoms may change fluidly over time.Sometimes, burning pain goes successfully; however, relapse of oral discomforts such as xerostomia or taste disturbances might quickly become a new problem instead of pain. The nature of BMS is precisely that of a syndrome, which has several causative factors, including the psychosomatic nature of chronic pain.Hence, treatment response might differ depending on the predominance of individual confounding pathological factors such as neuropathic component, central sensitization, or psychiatric comorbidities.The problems are intertwined in so complex a way that they cannot be solved completely by a single therapy (3). 
In particular, psychiatric comorbidities might be significant for any treatments of both BMS and AO.Specifically, when planning pharmacotherapy, one should always consider the psychiatric condition and involve a complete psychologic/ psychiatric assessment (10). Recently, we have had to pay more attention to neurodevelopmental disorders hidden behind intractable AO or BMS (11,12).Their hypersensitivity might make the pain treatments more difficult; however, treatment response for a dopaminergic medication suggests some common pathophysiology underlying both conditions (13).Regarding these clinical phenomena, confirming pharmacological response (e.g., TCA-responsive BMS/AO vs. non-responders) is one of the challenging issues in understanding the pathophysiology of this pain (14). On the other hand, neurovascular compression of the trigeminal nerve might also be valuable to distinguish possible peripheral pathophysiology of AO (15). Oral cenesthopathy superimposed on BMS or AO The complaint of "burning" is often regarded as neuropathic pain; however, it also has a very similar nature to oral cenesthopathy (16).Oral cenesthopathy is characterized by bizarre and abnormal oral sensations without medical and dental evidence.In fact, oral cenesthopathy is sometimes comorbid with BMS (26.24%) or AO (5.78%) (17). The diagnosis of oral cenesthopathy is still controversial, and contemporary psychiatry does not provide independently defined diagnostic criteria (18).Oral sensory disturbances fall within a continuum in patients with or without diagnosed somatoform disorders.Careful consideration of the patient's dopaminergic state and the possible contribution of psychiatric comorbidities can help guide therapeutic choices, but the management may still involve some trial and error since symptoms evolve and overlap (19). Assessment of improvement In chronic pain research like BMS or AO, the biggest problem remains in how to assess subjective oral symptoms that cannot be quantitated.Next, what should be set as the treatment goal or target?How can we say a patient with BMS or AO has been saved? A satisfactory assessment tool for BMS remission is not yet available.The suffering of BMS or AO could hardly be assessed in visual analog scale (VAS) scores only.BMS involves not only a burning sensation but also discomfort such as dryness or dysgeusia (20), as mentioned above.Therefore, the clinicians should reconsider what a patient claims as "pain."We need more effective qualitative assessment tools for insight into the patient's experience of "pain" instead of using VAS only. A standardized symptom assessment tool is necessary to facilitate scientific discussion among researchers for improving diagnosis and treatment modalities.We developed the Oral Dysesthesia Rating Scale (Oral DRS) and evaluated its validity as an assessment tool (18).Since patients often develop impairments in oral functions such as eating and speaking and in the performance of daily activities, this new tool is designed to also assess these dysfunctions. We believe that the treatment goal or target for BMS or AO should not be set in "complete remission" nor "symptom-free" but good enough satisfaction for both patients and physicians.It must be hastened to develop better "clinically meaningful outcomes." 
Safety of pharmacotherapy Despite no strong evidence of the efficacy of specific medications or agreement between the authors, it is worth noting that the absence of evidence is not evidence of absence.Neuromodulators such as benzodiazepines (e.g., clonazepam) or antidepressants (e.g., amitriptyline) have been used for the treatment of BMS (21) or AO (14).We have anecdotal evidence in many patients that these drugs work well. These medication therapies can be continued as long as the patient's benefits outweigh the harm.Tricyclic antidepressants are not always safe (22), and there is the risk of abuse with benzodiazepines (23).However, in Japan, we seldom experience big problems such as dependence or misuse in prescribing benzodiazepines for BMS patients (24).It might be due to the different prescription "refill" service systems in each country.However, benzodiazepine therapy should only ever be initiated when the patient is aware of the risks and benefits of these drugs, understands what physiologic dependence is, and has a clear understanding that the drug will be discontinued after a short time (25).Physicians should weigh the risks versus benefits when prescribing benzodiazepines to patients with BMS.A lowdose strategy in these medications is probably appropriate in most cases. Lack of long-term prognosis Then, another important problem arises in the assessment of duration and follow-up of medications.BMS and AO have continuous, long-lasting symptoms, often with fluctuations.Despite the importance of studies evaluating the long-term prognosis, there is little data on longitudinal outcomes or recurrence in treating BMS or AO. We cannot ignore the systemic problem in university hospitals, where many staff members transfer their positions frequently.It becomes challenging for a patient to be followed up by one physician.This unstable treatment situation must be affected by the treatment effect and the dropout ratio. Patients with BMS or AO tend to easily drop out from any treatment.We believe evaluating the differences between dropout cases and the cases in good clinical courses would help resolve this (26).We suggest that real-world data may be more essential than short-term RCTs to know the best benefits and limitations of the treatment. Retrospective long-term treatment outcomes may be a more critical option (27,28).Complete remission of BMS or AO is not so frequent in these medication therapies; however, it is not always impossible if adequate psychosocial intervention is available.It might also be helpful to clarify the factors contributing to patient satisfaction with long-term observations.Goal attainment scaling (GAS) (29), a flexible and responsive technique for assessing outcomes in complex interventions, assimilates the achievement of individual goals into a single standardized "goal attainment scale."GAS has been proposed as a patient-centered, semi-quantitative measure.Each patient's problems are identified through agreement between the physician and the patient.Treatment goals are set for each problem using the specific, measurable, attainable, realistic, and timed (SMART) methodology.Such an assessment method could shed light on a new treatment strategy that reinforces the previous treatments for BMS and AO. Summary As BMS and AO continue to remain complex in terms of pathophysiology and lack explicit treatment protocols, clinicians are left searching for appropriate solutions. 
Oversimplification solves nothing about what bothers us in clinical situations with BMS or AO. It is important to treat a complicated phenomenon as complex. We should keep careful observations and fact-finding based on a pragmatic approach to drug selection and prescription with regular follow-up. We also need to assess the long-term prognosis of treatment with a meticulous selection of sample size and characteristics (Table 2). Further investigation of BMS and AO from a psychosomatic perspective can provide new insight into the interface between brain function and "COFP."

TABLE 1 Psychoeducation before pain medication.
1. Careful explanations of the pathophysiological model of pain: the relationship between central sensitivity and chronic oral pain with unknown origin; not merely "psychogenic" but hypersensitivity of the brain.
2. Getting to understand and agree on the treatment goal: confirm the target of medication, data on the efficacy of antidepressants, possible side effects, and the need for continuation for at least 6 months.
3. Behavioral activation: regularly rhythmical daily life (enough sleep, a healthy diet, and light exercise); balance between rest and action; monitoring (e.g., pain diary); pacing (time-contingent approach).
Comparative lung toxicity of engineered nanomaterials utilizing in vitro, ex vivo and in vivo approaches Background Although engineered nanomaterials (ENM) are currently regulated either in the context of a new chemical, or as a new use of an existing chemical, hazard assessment is still to a large extent reliant on information from historical toxicity studies of the parent compound, and may not take into account special properties related to the small size and high surface area of ENM. While it is important to properly screen and predict the potential toxicity of ENM, there is also concern that current toxicity tests will require even heavier use of experimental animals, and reliable alternatives should be developed and validated. Here we assessed the comparative respiratory toxicity of ENM in three different methods which employed in vivo, in vitro and ex vivo toxicity testing approaches. Methods Toxicity of five ENM (SiO2 (10), CeO2 (23), CeO2 (88), TiO2 (10), and TiO2 (200); parentheses indicate average ENM diameter in nm) were tested in this study. CD-1 mice were exposed to the ENM by oropharyngeal aspiration at a dose of 100 μg. Mouse lung tissue slices and alveolar macrophages were also exposed to the ENM at concentrations of 22–132 and 3.1-100 μg/mL, respectively. Biomarkers of lung injury and inflammation were assessed at 4 and/or 24 hr post-exposure. Results Small-sized ENM (SiO2 (10), CeO2 (23), but not TiO2 (10)) significantly elicited pro-inflammatory responses in mice (in vivo), suggesting that the observed toxicity in the lungs was dependent on size and chemical composition. Similarly, SiO2 (10) and/or CeO2 (23) were also more toxic in the lung tissue slices (ex vivo) and alveolar macrophages (in vitro) compared to other ENM. A similar pattern of inflammatory response (e.g., interleukin-6) was observed in both ex vivo and in vitro when a dose metric based on cell surface area (μg/cm2), but not culture medium volume (μg/mL) was employed. Conclusion Exposure to ENM induced acute lung inflammatory effects in a size- and chemical composition-dependent manner. The cell culture and lung slice techniques provided similar profiles of effect and help bridge the gap in our understanding of in vivo, ex vivo, and in vitro toxicity outcomes. Electronic supplementary material The online version of this article (doi:10.1186/s12951-014-0047-3) contains supplementary material, which is available to authorized users. Background It is well recognized that nanotechnology has been rapidly growing and advancing over the past 10 years, and will continue to expand in numerous market sectors [1,2]. The advances in nanotechnology, however are accompanied by a need for better understanding of the exposure and toxicity of engineered nanomaterials (ENM) across their life-cycle. Moreover, the enormously diverse and applications of ENM (e.g., shapes, sizes, chemical and surface characteristics) are likely to result in a broad array of exposures and potentially adverse health outcomes. Thus, methods to evaluate and predict the toxicity of ENM are of considerable importance [3]. In particular, more information is needed on the interactions of ENM with lung tissue, since inhalation is a common exposure route and can also lead to potential systemic toxicity [1]. 
There is already substantial epidemiologic and toxicological evidence that inhaled ENM cause pulmonary effects (e.g., inflammation and/or edema) and/or extrapulmonary or systemic effects (e.g., thrombosis, dysrhythmias, and myocardial infarction) [4][5][6][7]. In general, nanotoxicology studies of the respiratory tract are performed with in vivo (e.g., mice and rats) or in vitro (e.g., airway/alveolar epithelial cells, macrophages, and dendritic cells) models. Because of the inherent anatomical complexity of the intact lung which is comprised of about 40 different cell types interpretation of toxicity of ENM in in vitro cell culture models is limited as they do not reflect the complex cell-cell contacts and cell-matrix interactions in the tissue. Moreover, despite the need for studying the toxicity of ENM in vivo, there is a growing concern that broad toxicity testing will increase the number of animals required. Therefore, developing credible alternative testing methods predictive of in vivo ENM toxicity are essential to screen potential hazards and health risks associated with inhalation exposures to these novel materials [2]. Here, we investigated pulmonary toxicity of five ENM: one silicon dioxide (SiO 2 ), two cerium oxide (CeO 2 ), and two titanium dioxide (TiO 2 ) nanomaterials with different primary diameters. SiO 2 , CeO 2 , and TiO 2 nanomaterials are already widely used in industrial processes and consumer products. CeO 2 and TiO 2 nanomaterials are the most abundantly produced metal oxide nanomaterials in the U.S. [8] and have been independently tested for adverse health effects in vitro and in vivo, but not in the same study design [9,10]. CeO 2 nanomaterials are of interest because despite having the same crystalline form as the parent compound, the nano-sized material causes more oxidative stress as a result of subtle changes in their surface chemistry [11,12]. SiO 2 nanomaterials (particularly the amorphous form), have also recently received attention in biomedical applications, yet their toxicity is not fully understood [13]. In the present study, we conducted acute toxicity tests in mice (in vivo), mouse lung tissue slices (ex vivo), and mouse alveolar macrophages (in vitro) to extrapolate, and compare the results between ex vivo or in vitro to in vivo toxicity testing approaches. Lung tissue slices have shown to preserve almost all cell types and interactions with the microenvironment (i.e., cell-cell or cell-matrix interactions), thus providing the most in vivolike physiologically relevant response. Of all the different types of lung cells, alveolar macrophages are considered to be one of the first lines of a defense against inhaled particles and are primarily responsible for producing proinflammatory mediators [14]. The specific aims of this study were to determine the pulmonary toxicity and proinflammatory potential of ENM in mice, and compare these effects with the use of ex vivo lung slice and in vitro cell-based toxicity testing systems. Particle size distributions of ENM Hydrodynamic diameters of ENM in the various solutions used in this study were determined by dynamic light scattering (Table 1). Diameters of all ENM suspended in water were greater than the specifications provided by the manufacturer, and were even larger when the materials were suspended in culture media. Of all the ENM studied, TiO 2 (10) and SiO 2 (10) were the most highly agglomerated. 
Since this clumping behavior controls the density of the ENM agglomerates in suspensions, we estimated the agglomerate density and presented the results in Table 1. SiO 2 (10) had the lowest agglomerate density in any solution, indicating that this material was most likely to remain suspended in the solutions and less likely to interact with the cells. Agglomerated TiO 2 (200), on the other hand, had the highest density, which would promote settling and a greater potential to come in contact with the cells on the plate bottom. Pulmonary inflammation responses in vivo We monitored concentrations of lactate dehydrogenase (LDH) released into bronchoalveolar lavage fluid (BALF) at 4 hr and 24 hr post-exposure as a biomarker for lung cell injury. None of the ENM, except for CeO 2 (88) (at 24 hr post-exposure), significantly increased the concentrations of LDH at any time point compared with saline control groups ( Figure 1A). N-acetyl-β-D-glucosaminidase (NAG) and γ-glutamyl transferase (GGT), as biomarkers for lysosomal enzyme release and oxidative stress, respectively, were also assessed and were unchanged for any of the ENM (data not shown). Concentrations of albumin and total protein in BALF from the CeO 2 (23)-exposed groups were significantly increased at 4 hr and 24 hr post-exposure compared with saline-exposed groups, indicating that this material caused lung edema ( Figure 1B and C). As a positive control, LPS increased LDH, albumin, and protein as expected, but did not affect NAG or GGT. The size- and composition-dependent toxicity of ENM was also seen in pulmonary inflammatory cells at 4 hr and 24 hr post-exposure ( Figure 2). Exposure to CeO 2 (23) significantly increased the number of neutrophils (18% and 34% at 4 hr and 24 hr, respectively) compared with saline controls. While LPS exposure induced an even stronger neutrophil influx, no other ENM caused significant changes in the neutrophil number. The number of macrophages in BALF was unchanged by any treatment. Overall, the smaller-sized ENM induced more acute lung inflammation than their larger counterparts, and the chemical composition of ENM was a more important determinant than their size. Based on the cytokine response results, toxicity ranking of ENM approximated CeO 2 (23) ≈ SiO 2 (10) > TiO 2 (10) > CeO 2 (88) > TiO 2 (200). At 24 hr post-exposure, the cytokine concentrations decreased to saline control values except for CeO 2 (23), which maintained elevated levels of IL-6 and TNF-α. Interestingly, the inflammation was not related to uptake of ENM in lung macrophages. The less active TiO 2 (10) and TiO 2 (200) were avidly taken up by lung macrophages at both time points compared with other ENM (Additional file 1: Figure S1). Finally, there were no significant changes in circulating white blood cells, red blood cells (RBCs) or RBC indices between the ENM-exposed mice and saline controls (data not shown). Pulmonary inflammation responses ex vivo and in vitro LDH, GGT, and NAG concentrations in the supernatants from the lung tissue slices at 24 hr post-exposure were unchanged at any of the concentrations tested (data not shown). Only SiO 2 (10) at the highest concentration (132 μg/mL) significantly increased the concentrations of IL-6 and MIP-2 compared with negative controls (Figure 4). 
CeO 2 (23) also showed an increased IL-6 concentration, but this was not statistically significant. Assessment of the cell culture supernatant from ENM-exposed MH-S cells at 24 hr post-exposure revealed that all ENM increased LDH release in a dose-dependent manner ( Figure 5A). SiO 2 (10) and TiO 2 (10 and 200) appeared to be the most and least cytotoxic, respectively; however, no apparent size-dependent effects (on cell membrane integrity) were observed. Half-maximal effective concentrations (EC 50 ) for the cell membrane integrity of SiO 2 (10), CeO 2 (23), CeO 2 (88), TiO 2 (10), and TiO 2 (200) were approximately 100, 295, 141, 330, and 384 μg/mL, respectively. Cell viability based on the metabolic activity of mitochondria was assessed at 24 hr post-exposure ( Figure 5B). Similar to the LDH analysis data, we also observed dose-dependent effects of ENM. EC 50 values for the cell viability of SiO 2 (10), CeO 2 (23), CeO 2 (88), TiO 2 (10), and TiO 2 (200) were approximately 13, 18, 55, 30, and 77 μg/mL, respectively (Additional file 2: Figure S2). Thus, toxicity ranking of ENM based on the EC 50 for viability was in the order of SiO 2 (10) > CeO 2 (23) > TiO 2 (10) > CeO 2 (88) > TiO 2 (200). Because the EC 50 was much higher for LDH, this would indicate that mitochondrial function was more sensitive to ENM exposure than cell membrane integrity. We also measured cell proliferation based on DNA content at 24 hr post-exposure and found that cell numbers did not significantly change in any of the ENM-exposed groups except at the highest concentration ( Figure 5C). At the 100 μg/mL concentration, SiO 2 (10) significantly decreased MH-S cell numbers, while TiO 2 (10) and TiO 2 (200) significantly increased the cell numbers. Concentrations of the pro-inflammatory cytokine IL-6 in MH-S cells were measured at 24 hr post-exposure ( Figure 6). SiO 2 (10) induced more IL-6 production than the other ENM, which was in line with the IL-6 lung tissue slice response. To provide a more realistic comparison, we converted the nominal mass media concentration (i.e., μg/mL) to mass per unit cell (or tissue) surface area (i.e., μg/cm 2 ) because lung tissue slices have a larger 3D surface area than the MH-S cells. Taking this into account, the exposure dose of 132 μg/mL to the lung slice resulted in a dose of 4.7 μg/cm 2 . Therefore, the IL-6 responses in MH-S cells exposed to the 12.5 μg/mL concentration (equivalent to 4.2 μg/cm 2 ) were comparable to those in the lung tissue slices exposed to the 132 μg/mL concentration (see the Materials and Methods section for a more detailed calculation). 
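The EC 50 values quoted above are the kind of estimate normally obtained by fitting a sigmoidal dose-response model to the raw assay readouts. The study does not state which curve-fitting software was used, so the sketch below is only a hedged illustration of one common approach, a four-parameter logistic (Hill) fit with SciPy; the concentration and viability numbers in it are hypothetical placeholders rather than data from this study.

```python
# Illustrative four-parameter logistic (Hill) fit for estimating an EC50 from
# viability dose-response data. All numbers below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ec50, slope):
    """Four-parameter logistic model: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** slope)

# Hypothetical WST-1 viability data (% of untreated control) over the
# 3.125-100 ug/mL range used for the MH-S cells.
conc = np.array([3.125, 6.25, 12.5, 25.0, 50.0, 100.0])      # ug/mL
viability = np.array([95.0, 80.0, 55.0, 35.0, 22.0, 15.0])   # % of control

# Initial guesses: full response at low dose, plateau at high dose,
# EC50 near the middle of the tested range, Hill slope of 1.
p0 = [100.0, 10.0, 20.0, 1.0]
params, _ = curve_fit(hill, conc, viability, p0=p0, maxfev=10000)
top, bottom, ec50, slope = params
print(f"Estimated EC50: {ec50:.1f} ug/mL (Hill slope {slope:.2f})")
```

The same routine could in principle be applied to the LDH release readouts to obtain the membrane-integrity EC 50 values that are compared with the viability-based ranking above.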
Discussion While much work is being done to better understand the potential toxic effects of ENM on human health, it is still not clear which physico-chemical parameters of ENM are most important. Moreover, assessing (or screening) the toxic potential of emerging ENM is likely to increase the numbers of animals required, unless alternative methods are available that consistently reflect the in vivo biological effects. Here we utilized three different toxicity testing methods (mice, mouse lung tissue slices, and alveolar macrophages) to investigate the comparative toxicity of five ENM (SiO 2 (10), CeO 2 (23), CeO 2 (88), TiO 2 (10), and TiO 2 (200)) and determine if the latter two techniques could predict effects seen in animals. We found, in all three different toxicity testing methods, that SiO 2 (10) and/or CeO 2 (23) had the highest activity on the basis of pro-inflammatory cytokine production. Importantly, the mouse lung tissue slices and alveolar macrophages exhibited similar cytokine responses to the distinct ENM when the exposure dose metric was based on cell surface area. Size- and chemical composition-dependent lung toxicity of ENM in mice Numerous studies of nanotoxicology have shown that toxicity of ENM is strongly influenced by two factors: 1) chemical toxicity based on the chemical composition of ENM, and 2) cellular stress caused by the physical properties of ENM [9]. In line with published reports, it was evident that only the smaller-sized ENM caused significant inflammatory effects on mouse lungs, and that the chemical composition was important since stronger effects were noted for SiO 2 (10) and CeO 2 (23) but not TiO 2 (10). Interestingly, TiO 2 uptake by lung macrophages was higher than that of the other ENM, despite its lower toxicity, suggesting that the observed inflammatory responses were not dependent on phagocytosis. In support of this, a similar study demonstrated that nanomaterial toxicity was not correlated with particle uptake in the cells [21]. Although further studies are needed to understand the mechanism underlying lung toxicity of ENM, the data also suggest that there was no clear relationship between lung toxicity and degree of ENM agglomeration (i.e., hydrodynamic diameters). Agglomerates of ENM form in biological fluids through loose binding (e.g., van der Waals forces), and it is the primary diameter, rather than the hydrodynamic diameter of the agglomerate, that appears to influence toxicity. In support of this, other researchers have reported that nanoparticle trafficking across lung epithelial cells was correlated with primary diameters and not the hydrodynamic diameters of the agglomerated nanoparticles [22]. Numerous studies have reported that ENM of various crystalline forms and solubility cause varying degrees of lung injury and inflammation. It is generally accepted that insoluble ENM are far less active in producing cellular damage or injury as compared to (partially) soluble ENM of similar size [23][24][25], although insoluble ENM have the potential to remain in the lungs and other organs for a long time. It also should be noted that while insoluble ENM may not be potent enough to cause cell damage, crystallinity of the ENM (e.g., amorphous or crystalline) might contribute to other toxicological properties [10,26]. In addition, insoluble ENM may cause oxidative stress and lung inflammation depending on their conduction band energy levels [27]. The ENM used in this study (SiO 2 , CeO 2 , and TiO 2 ) were insoluble (or poorly soluble) in biological fluids and were considered not to release free ions from the nanomaterials into the tissue or cells. 
Here, the cytokine responses induced by SiO 2 (10) and CeO 2 (23) were evident in mice at 4 hr post-exposure but receded to control levels at 24 hrs, indicating that the inflammatory response was transient. Others have reported sustained pro-inflammatory cytokine levels at 24 hrs after exposure to SiO 2 (amorphous and 14 nm) albeit with 50 mg/kg which is~15 times higher than the concentration used here [28]. While lung toxicity of SiO 2 nanomaterials has been extensively studied [26], there are only a few reports of lung toxicity of CeO 2 nanomaterials [9,[29][30][31][32]. Moreover, these studies have mainly focused on long-term toxicity in mice or rats, demonstrating that intratracheal instillation or inhalation of CeO 2 nanomaterials led to severe chronic lung inflammation for up to 28 day post-exposure. Although our findings were limited to the 24 hr time-course, we cannot rule out the possibility of further chronic inflammatory responses, particularly in light of human case studies which report development of lung disease in workers after repeated long-term exposure of CeO 2 [33,34]. Similarly, our results showed that TiO 2 nanomaterials did not cause significant lung inflammation in mice, consistent with recently published TiO 2 toxicity findings performed through multiple interlaboratory comparisons [35]. Comparing lung toxicity testing in mice to its alternatives Efforts to reduce the number of animals in toxicity testing have resulted in the development of numerous ex vivo and in vitro toxicity test methods but the results are still conflicting. This inconsistency could be due to the fact that there are 1) a lack of overall consensus on the relevant dose metric for in vivo and ex vivo/in vitro studies and 2) inherent limitations to most in vitro models such as a lack of complex cell-cell interactions [36]. Here, the mouse lung tissue slices (ex vivo) and MH-S cells (in vitro) displayed a similar pattern of cytokine response on the basis of the mass per unit surface area of cell or tissue (μg/cm 2 ) but not per unit volume of culture medium (μg/mL), suggesting that cell surface area should be considered in in vitro dosimetry when comparing toxicity endpoints from different systems. It is well documented that nanomaterials form agglomerates in suspension and their fate (or behavior) is governed by different mass transport properties (sedimentation and/ or diffusion), leading to differential exposures of nanomaterials to cells [17,18,20]. The nominal mass media concentration (μg/mL) in submerged cell-culture conditions assumes that the suspended nanomaterials are completely deposited on the cell surface which may not be always true for all nanomaterials in suspension and may result in misinterpretation of biological response data [37]. In the present study the density of agglomerated ENM in suspension (which influences delivered dose) was associated with the resultant cellular responses ex vivo and in vitro [17,18,20]. Notably, if the agglomerate density approached the density of the culture medium, the nanomaterials were more likely to remain suspended in the medium (i.e., low delivered dose), leading to a reduced exposure and diminution of biological responses to the nanomaterials [17,18]. In this regard, since the agglomerate densities of SiO 2 (10), CeO 2 (23) and TiO 2 (10) ex vivo and in vitro were closer to the culture media compared to other ENM, it is likely that the toxic effects were underestimated. 
In other words, the cytokine responses ex vivo and in vitro would be expected to increase even more if the cells were exposed to the same delivered dose. Therefore, considering the behavior of ENM agglomerates in submerged cell culture systems (ex vivo or in vitro) may reduce the disparity between in vitro and in vivo nanotoxicology outcomes. However, there are limitations to be considered when interpreting in vitro cellular responses based on agglomerate density. If ENM are soluble in culture media, their agglomerate density will change over time. Moreover, as described above, in the case of in vitro ENM toxicity tests, agglomeration may result in an underestimation of toxicity outcomes (or ranking), while in the case of in vivo ENM toxicity tests (via the intratracheal instillation or oropharyngeal aspiration technique), agglomeration may cause an overestimation of toxicity outcomes (or ranking) [38]. It is also worth noting that agglomeration, together with the ability of ENM to adsorb biological components (e.g., ions, salts, and proteins) in the in vitro and in vivo systems, may mask intrinsic ENM properties (e.g., chemistry and surface charge) to different extents, leading to inconsistent results (in vitro versus in vivo) [39]. As aforementioned, one of the major challenges faced in cell-based in vitro models is that intact lungs are comprised of about 40 different cell types, and in vitro models cannot wholly reflect the microenvironment of cell-cell and cell-matrix interactions. Here we utilized the lung tissue slice model, which preserves the lung architecture with nearly all cell types. We have previously reported that mouse lung tissue slices incubated with size-fractionated particulate matter from a wildfire event displayed similar cytokine responses to those observed in mice [40]. In line with this finding, the lung tissue slice system also showed similar pro-inflammatory responses to ENM as those seen in mice (i.e., pro-inflammatory effects of SiO 2 (10) and CeO 2 (23) but not TiO 2 (10)). Taken together, the results provide further evidence for particle-mediated biological responses in lung tissue slices and the feasibility of this application to lung toxicity testing. Although several studies have demonstrated toxicity of ENM in lung tissue slices [41,42], this is the first report to our knowledge to compare responses to different sizes and types of ENM in both mice and mouse lung tissue slices. In addition, the rank order of ENM IL-6 production from the MH-S cells was the same as that observed in both the ex vivo and in vivo comparisons, suggesting that lung macrophages play an important role in this response. In contrast, the response ranking for TNF-α (which is expressed at lower levels in lung macrophages compared to IL-6 [43]) was not the same, suggesting that this biomarker would not be a good readout across the three systems. It should be noted that lung epithelial cells and macrophages differ in pro-inflammatory responses following exposure to ENM [44,45] and that toxicity differs depending on the cell of origin [36], as demonstrated by observations that ENM toxicity can differ between cancerous cells and their normal precursors. Conclusions We conclude that small-sized ENM, SiO 2 (10) and CeO 2 (23) but not TiO 2 (10), caused acute lung toxicity in mice (in vivo). 
CeO 2 (23) had the strongest effect on cytokine (IL-6, TNF-α, and MIP-2) release, neutrophil recruitment, and increased protein into the mouse lungs, while the larger CeO 2 (88) and TiO 2 (200) were less potent, indicating that the effect was dependent on both size and chemical composition of ENM. The rank order of ENM toxicity from both lung tissue slices (ex vivo) and alveolar macrophages (in vitro) corresponded well to the ranking results from the mice (in vivo), suggesting that lung macrophages could replicate this effect. The similar profile of inflammatory response ex vivo and in vitro was most apparent when the exposure was based on mass per cell surface area. Although we demonstrated a relatively good correlation among the acute lung toxicity endpoints from three different testing methods, further studies are still needed that measure reversibility of effects or progression to long term toxicity. Nevertheless the results provide further evidence for the feasibility of replacing animal lung toxicity testing with cells or lung tissue slices, and provide information about the important parameters (e.g., agglomeration state and exposure dose metric) that will improve interpretation of ENM toxicity in biological systems. Experimental animals Adult pathogen-free female CD-1 mice (~20-25 g and~30-45 g body weights for pulmonary toxicity and lung tissue slice studies, respectively) purchased from Charles River Breeding Laboratories (Raleigh, NC). Mice were housed in groups of five in polycarbonate cages with hardwood chip bedding at the U.S. Environmental Protection Agency (EPA) Animal Care Facility accredited by the Association for Assessment and Accreditation of Laboratory Animal Care and were maintained on a 12-hour light to dark cycle at 22.3 ± 1.1°C temperature and 50 ± 10% humidity. Mice were given access to rodent chow and water ad libitum and were acclimated for at least 10 days before the study began. The studies were conducted after approval by the EPA Institutional Animal Care and Welfare Committee. Engineered nanomaterials (ENM) Five ENM were used in this study and designated by their mean primary diameter provided by the manufacturer: SiO 2 (10) (silicon dioxide with a primary diameter of 5-15 nm; amorphous; Sigma Aldrich (St. Louis, MO)), CeO 2 (23) (cerium oxide with a primary diameter of 15-30 nm; cerianite; NanoAmor (Houston, TX)), CeO 2 (88) (cerium oxide with a primary diameter of 70-105 nm; cerianite; Alfa Aesar (Ward Hill, MA)), TiO 2 (10) (titanium dioxide with a primary diameter of 10 nm; anatase; Alfa Aesar), and TiO 2 (200) (titanium dioxide with a primary diameter of 200 nm; anatase; Acros Organics (Fair Lawn, NJ)). The ENM were suspended in saline for in vivo and culture media (see below for further details) for ex vivo and in vitro, followed by sonication (Sonicator 4000; Misonix Sonicators, Newtown, CT) at 70-80 watts for 10 min and vortex mixing for 1 min to yield a stock solution at a concentration of 2 mg/mL. The ENM suspensions were stored at −80°C until toxicity testing. To explore the effect of solution chemistry on hydrodynamic diameters of ENM, dynamic light scattering (Zetasizer Nano ZS; Malvern Instruments, Malvern, UK) was used at 100 μg/mL ENM concentration in various solutions, such as distilled water, saline, and culture media. Further detailed physicochemical characteristics of ENM are presented in Table 1. 
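Purely as a bookkeeping aid, the panel of materials just described can be kept together in a small data structure so that later dose calculations or summaries can be driven from one place. The sketch below simply restates the manufacturer-reported properties listed above; the field names and layout are an illustrative convention, not something prescribed by the study.

```python
# The five ENM used in the study, encoded from the descriptions above.
# Primary diameters are the manufacturer-reported values (nm); the number in
# parentheses in each label is the nominal mean diameter used throughout the paper.
enm_panel = [
    {"label": "SiO2 (10)",  "composition": "silicon dioxide",  "primary_diameter_nm": "5-15",
     "form": "amorphous", "supplier": "Sigma Aldrich"},
    {"label": "CeO2 (23)",  "composition": "cerium oxide",     "primary_diameter_nm": "15-30",
     "form": "cerianite", "supplier": "NanoAmor"},
    {"label": "CeO2 (88)",  "composition": "cerium oxide",     "primary_diameter_nm": "70-105",
     "form": "cerianite", "supplier": "Alfa Aesar"},
    {"label": "TiO2 (10)",  "composition": "titanium dioxide", "primary_diameter_nm": "10",
     "form": "anatase",   "supplier": "Alfa Aesar"},
    {"label": "TiO2 (200)", "composition": "titanium dioxide", "primary_diameter_nm": "200",
     "form": "anatase",   "supplier": "Acros Organics"},
]

STOCK_MG_PER_ML = 2  # all suspensions were sonicated and stored as 2 mg/mL stocks

for enm in enm_panel:
    print(f'{enm["label"]}: {enm["composition"]}, {enm["primary_diameter_nm"]} nm, '
          f'{enm["form"]}, {enm["supplier"]}')
```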
In vivo toxicity of ENM Mouse exposure to ENM Oropharyngeal aspiration was performed on mice anesthetized in a small plexiglass box using vaporized anesthetic isofluorane, following a technique described previously [46]. Briefly, the tongue of the mouse was extended with forceps and 100 μg of ENM in 50 μL saline was pipetted into the oropharynx. Immediately, the nose of the mouse was then covered causing the liquid to be aspirated into the lungs. Similarly, a separate group of mice was instilled with 2 μg of lipopolysaccharide (LPS; Escherichia coli endotoxin; 011:B4 containing 10 6 unit/mg material; Sigma) as a positive control to demonstrate maximal responsiveness to this well characterized inflammatory agent while additional mice were instilled with 50 μL saline alone as a negative control. Bronchoalveolar lavage and hematology At 4 hr and 24 hr post-exposure, six mice from each treatment group were euthanized with 0.1 mL intraperitoneal injection of Euthasol (diluted 1:10 in saline; 390 mg pentobarbital sodium and 50 mg phenytoin/mL; Virbac AH, Inc., Fort Worth, TX), and blood was collected by cardiac puncture using a 1-mL syringe containing 17 μL sodium citrate to prevent coagulation. The trachea was then exposed, cannulated and secured with suture thread. The thorax was opened and the left mainstem bronchus was isolated and clamped with a microhemostat. The right lung lobes were lavaged three times with a single volume of warmed Hanks balanced salt solution (HBSS; 35 mL/kg mouse). The recovered bronchoalveolar lavage fluid (BALF) was centrifuged at 800xg for 10 min at 4°C and the supernatant was stored at both 4°C (for biochemical analysis) and −80°C (for cytokine analysis). The pelleted cells were resuspended in 1 mL HBSS (Sigma). Total BALF cell count of each mouse was obtained by a Coulter counter (Coulter Co., Miami, FL). Additionally, 200 μL resuspended cells were centrifuged in duplicate onto slides using a Cytospin (Shandon, Pittsburgh, PA) and subsequently stained with Diff-Quik solution (American Scientific Products, McGraw Park, PA) for enumeration of macrophages and neutrophils with at least 200 cells counted from each slide. Hematology values including total white blood cells, total red blood cells, hemoglobin, hematocrit, mean corpuscular volume, mean corpuscular hemoglobin concentration, and platelets were measured using a Coulter AcT 10 Hematology Analyzer (Beckman Coulter Inc., Miami, FL). Biochemical and cytokine analyses Concentrations of lactate dehydrogenase (LDH) and γglutamyl transferase (GGT) were determined using commercially available kits (Thermo Scientific, Middletown, VA). Albumin and total protein concentrations were measured by the SPQ test system (DiaSorin, Stillwater, MN) and the Coomassie plus protein assay (Pierce Chemical, Rockford, IL) with a standard curve prepared with bovine serum albumin (Sigma), respectively. Activity of N-acetyl-β-D-glucoaminidase (NAG) was determined using a NAG assay kit (Roche Applied Science, Indianapolis, IN). All biochemical assays were modified for use on the KONELAB 30 clinical chemistry spectrophotometer analyzer (Thermo Clinical Lab Systems, Espoo, Finland) as described previously [46]. 
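The BALF differential described above translates into absolute cell numbers by simple proportion: the total Coulter count is multiplied by each cell type's fraction of the (at least) 200 cells scored on the cytospin slide. The short sketch below shows that arithmetic; the example values are hypothetical and are not measurements from this study.

```python
# Deriving absolute BALF cell counts from a total count and a 200-cell
# cytospin differential, as described above. Example values are hypothetical.

def absolute_counts(total_cells, differential_counts, cells_scored=200):
    """Convert a cytospin differential into absolute cell numbers.

    total_cells: total BALF cells recovered (from the Coulter counter)
    differential_counts: dict of cell type -> number scored on the slide
    cells_scored: total cells scored for the differential (at least 200 here)
    """
    return {cell_type: total_cells * n / cells_scored
            for cell_type, n in differential_counts.items()}

# Hypothetical example: 3.0e5 total BALF cells; 68 neutrophils and 132
# macrophages scored out of 200 cells on the cytospin slide.
counts = absolute_counts(3.0e5, {"neutrophils": 68, "macrophages": 132})
print(counts)  # {'neutrophils': 102000.0, 'macrophages': 198000.0}
```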
Concentrations of tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6) and macrophage inflammatory protein-2 (MIP-2) in BALF were determined using commercial multiplexed fluorescent bead-based immunoassays (Milliplex Map Kit, Millipore Co., Billerica, MA) measured by a Luminex 100 (Luminex Co., Austin, TX) following the manufacturer's protocol. The limits of detection (LOD) of each cytokine were 6.27, 3.28 and 29.14 pg/mL for TNF-α, IL-6 and MIP-2, respectively, and all values below these limits were replaced with a fixed value of one-half of the LOD. Ex vivo toxicity of ENM Mouse lung tissue slice preparation and incubation Lung tissue slices were prepared as previously described [40]. Briefly, mice were euthanized with 0.1 mL intraperitoneal injection of Euthasol (diluted 1:10 in saline; Virbac AH, Inc.). The trachea was exposed and cannulated using a 20G luer stub adapter (Instech Solomon, Plymouth Meeting, PA). The lungs were filled with 1.5% (w/v) low-melting agarose (Sigma) in minimum essential medium (MEM; Sigma) at 37°C. The lungs were rinsed with the ice-cold slicing buffer solution (Earle's balanced salt solution (Sigma) supplemented with 15 mM N-(2-hydroxyethyl)piperazine-N'-(2-ethanesulfonic acid) hemisodium salt (HEPES; Sigma)) and removed from the mouse. The lungs were transferred into the ice-cold slicing buffer solution to further solidify the agarose; the lung lobes were then separated using a surgical blade, and lung tissue cores (8 mm diameter) were prepared using a tissue coring tool (Alabama Research and Development, Munford, AL). Tissue cores were cut into 350 μm thick slices in the ice-cold slicing buffer solution using a specialized vibratome (OTS 5000, FHC Inc., Bowdoinham, ME). The lung tissue slices were then incubated in the wash buffer solution (Dulbecco's modified eagle's medium/nutrient mixture F-12 Ham (Sigma) supplemented with 100 units/mL penicillin (Sigma) and 100 μg/mL streptomycin (Sigma)) under cell culture conditions for 4 hrs. The lung tissue slices were then transferred into a tissue culture treated polystyrene 48-well plate (Corning Inc., Corning, NY) and cultured in the slice incubation medium (the wash buffer solution supplemented with 200 mM L-glutamine (Sigma), 0.1 mM MEM non-essential amino acids (Sigma) and 15 mM HEPES) for up to 6 days at 37°C in a humidified atmosphere of 5% CO 2 and 95% air. The lung tissue slices received fresh media every day. Mouse lung tissue slice exposure to ENM Reconstituted ENM suspensions were sonicated for 2 min, vortexed for 1 min and further diluted with the slice incubation medium to achieve final concentrations of 22, 44, 66, and 132 μg/mL. On day 2 of culture, lung tissue slices were exposed to the ENM for 24 hrs. The initial concentration of 22 μg/mL (total volume of 0.5 mL, therefore 11 μg of ENM per lung slice) was estimated to be five times higher than the in vivo exposure dose used in this study. If it is assumed that the lung surface area of a 20 g mouse is ~650 cm 2 , that 1 cm 3 of mouse lung tissue has ~800 cm 2 of lung surface area, and that 100% of the oropharyngeally instilled ENM is delivered to the lungs, a 100 μg ENM dose in a mouse (~650 cm 2 lung surface area) is equivalent to a 2.2 μg ENM dose in a mouse lung slice (~14 cm 2 lung slice surface area) [47]. Moreover, if it is assumed that the lung slice surface area is ~14 cm 2 , the exposure doses of 22, 44, 66, and 132 μg/mL are equivalent to doses of 0.79, 1.6, 2.3, and 4.7 μg/cm 2 , respectively. 
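The surface-area dose conversion walked through above is straightforward arithmetic, and the sketch below reproduces it. The 0.5 mL exposure volume, ~14 cm 2 slice surface area and ~650 cm 2 mouse lung surface area are the assumptions stated in the text; no additional parameters are introduced.

```python
# Reproducing the dose-metric conversions described above.
# Assumptions taken from the text: 0.5 mL exposure volume per lung slice,
# ~14 cm^2 slice surface area, ~650 cm^2 total lung surface area for a 20 g mouse.

EXPOSURE_VOLUME_ML = 0.5
SLICE_SURFACE_CM2 = 14.0
MOUSE_LUNG_SURFACE_CM2 = 650.0

def media_conc_to_surface_dose(conc_ug_per_ml,
                               volume_ml=EXPOSURE_VOLUME_ML,
                               surface_cm2=SLICE_SURFACE_CM2):
    """Convert a nominal media concentration (ug/mL) to mass per surface area (ug/cm^2)."""
    return conc_ug_per_ml * volume_ml / surface_cm2

# Slice exposure concentrations used in this study (ug/mL).
for conc in (22, 44, 66, 132):
    print(f"{conc} ug/mL -> {media_conc_to_surface_dose(conc):.2f} ug/cm^2")
# 0.79, 1.57, 2.36 and 4.71 ug/cm^2: consistent, allowing for rounding, with the
# 0.79-4.7 ug/cm^2 values quoted above.

# Equivalent slice dose for the 100 ug in vivo dose, scaled by surface area.
in_vivo_dose_ug = 100.0
slice_equivalent_ug = in_vivo_dose_ug * SLICE_SURFACE_CM2 / MOUSE_LUNG_SURFACE_CM2
print(f"100 ug in vivo ~ {slice_equivalent_ug:.1f} ug per slice")  # ~2.2 ug
```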
Mouse lung tissue slices were exposed to 87 ng/mL LPS, which was equivalent to the in vivo exposure concentration and served as a positive control. Mouse lung tissue slices exposed to the culture medium alone served as a negative control. At 24 hr post-exposure, lung slice culture fluids were collected, centrifuged at 10,000xg for 5 min, and culture supernatants were stored at both 4°C (for extracellular biochemical analysis) and −80°C (for cytokine analysis). Subsequently, mouse lung tissue slices were homogenized using a tissue homogenizer in a lysis buffer solution containing 0.5% Triton X-100, 150 mM NaCl, 15 mM Tris-HCl (pH 7.4), 1 mM CaCl 2 and 1 mM MgCl 2 [48]. Homogenates were then centrifuged at 10,000xg for 10 min and supernatants were stored at −80°C (for intracellular biochemical analysis). Biochemical and cytokine analyses Similar to the in vivo lung inflammation analyses described above, the supernatants of tissue culture fluids and tissue homogenates after exposure to ENM were used for the extracellular (LDH and NAG) and intracellular (GGT) biochemical analyses as well as cytokine analysis (IL-6, MIP-2, and TNF-α). Biochemical and proinflammatory cytokine analyses were performed using a KONELAB 30 clinical chemistry spectrophotometer analyzer (Thermo Clinical Lab Systems) and multiplexed fluorescent bead-based immunoassays (Milliplex Map Kit) measured by the Luminex 100 (Luminex Co). In vitro toxicity of ENM Alveolar macrophage cell culture The murine alveolar macrophage (MH-S) cell line was purchased from ATCC (CRL2019, Manassas, VA) and grown in the following culture medium: RPMI 1640 (Sigma) supplemented with 5% fetal bovine serum (FBS; Sigma), 100 units/mL penicillin (Sigma) and 100 μg/mL streptomycin (Sigma) at 37°C in a humidified atmosphere of 5% CO 2 and 95% air. MH-S cells at passage 11 yielded 2.4-2.9 × 10 6 cells/mL and were seeded at 3,000 cells per well of a 96-well culture plate. Alveolar macrophage cell exposure to ENM After 3 days in culture, MH-S cells were exposed to ENM at final concentrations of 3.125, 6.25, 12.5, 25, 50, and 100 μg/mL in the culture medium for 24 hrs. This exposure dose can be converted to a dose based on cell surface area (assuming the MH-S cell culture surface area is 0.3 cm 2 ). Thus, the exposure doses of 3.125, 6.25, 12.5, 25, 50, and 100 μg/mL are equivalent to doses of 1.0, 2.1, 4.2, 8.3, 16.7, and 33.3 μg/cm 2 , respectively. MH-S cells exposed to the culture medium alone served as a negative control, and 1% Triton X-100 at 37°C served as a positive control. Biochemical and cytokine analyses After the cells were exposed to ENM, the plate was centrifuged at 400xg for 5 min, followed by collection of supernatants to analyze LDH concentrations. The supernatants were also used to determine cytokine production (IL-6). The MH-S cells after centrifugation were then used to evaluate cell proliferation (CyQuant assay; Invitrogen, Eugene, OR). Viability of the MH-S cells exposed to ENM was tested by measuring enzymatic activity based on the cellular cleavage of water-soluble tetrazolium salt (WST-1) to formazan in the cells using a WST-1 assay kit (Roche Applied Science). Biochemical and pro-inflammatory cytokine analyses in this study were also performed using a KONELAB 30 clinical chemistry spectrophotometer analyzer (Thermo Clinical Lab Systems) and multiplexed fluorescent bead-based immunoassays (Milliplex Map Kit) measured by the Luminex 100 (Luminex Co). Statistical analysis Data were expressed as means ± the standard error of the mean (SEM). 
The results of the ENM-exposed groups were compared to those of the negative control group. Statistical comparison was performed by one-way analysis of variance (ANOVA) with the Newman-Keuls post-hoc test. Statistical analyses were performed using commercial software (GraphPad Prism 6.04, GraphPad Software, Inc., San Diego, CA). If the data did not meet the ANOVA assumptions of either normality or equal variances (Levene's test; p >0.05), the data were transformed. Subsequent to the transformation, the data were checked for requirement compliance and if acceptable, ANOVA proceeded. The statistical significance level was assigned at a probability value of p <0.05.
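The analysis itself was run in GraphPad Prism, but the workflow just described (assumption checks, an optional transformation, one-way ANOVA and a pairwise post hoc comparison) can be outlined in Python as below. This is only an illustrative sketch under stated assumptions: the Newman-Keuls procedure is not available in SciPy or statsmodels, so Tukey's HSD is used here as a stand-in post hoc test, and the group values are placeholders rather than measured endpoints.

```python
# Illustrative outline of the statistical workflow described above (assumption
# checks, optional log transform, one-way ANOVA, pairwise post hoc test).
# Tukey's HSD is a stand-in for Newman-Keuls; group values are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "saline":  np.array([10.0, 12.0, 9.0, 11.0, 10.5, 11.5]),
    "SiO2_10": np.array([25.0, 30.0, 28.0, 22.0, 27.0, 26.0]),
    "CeO2_23": np.array([32.0, 35.0, 30.0, 38.0, 33.0, 31.0]),
}

def check_assumptions(data):
    """Return True if every group looks normal and variances are homogeneous."""
    normal = all(stats.shapiro(g).pvalue > 0.05 for g in data)
    equal_var = stats.levene(*data).pvalue > 0.05
    return normal and equal_var

values = list(groups.values())
if not check_assumptions(values):
    # Transform and re-check, as described in the text.
    values = [np.log10(g) for g in values]
    if not check_assumptions(values):
        print("warning: assumptions still violated after transformation")

f_stat, p_anova = stats.f_oneway(*values)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

if p_anova < 0.05:
    flat = np.concatenate(values)
    labels = np.repeat(list(groups.keys()), [len(g) for g in values])
    print(pairwise_tukeyhsd(flat, labels, alpha=0.05))
```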
v3-fos-license
2023-08-03T15:42:21.965Z
2023-01-01T00:00:00.000
260408116
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ijels.com/upload_document/issue_files/16IJELS-107202340-Reading.pdf", "pdf_hash": "4f9fadd8c72c89bbbb4b5fac1488b8cab8af00b5", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44775", "s2fieldsofstudy": [ "Education" ], "sha1": "b996b00c4a9286732a001ad60efc68367812c6b9", "year": 2023 }
pes2o/s2orc
Reading Dysfluency in Indian Classrooms: An Insight — Reading text in English is an important skill for students of higher education in India, as their understanding of their core subjects in specific and the world of information in general is based upon this skill. The ability to read English fluently, increases their job prospects also, as real time job environments use English as the official medium of communication and require that students read English with ease. Technical students who engage primarily with numerical data, diagrams and other non-textual content in their core subjects, have an especially hard time coping with tasks which involve reading textual English. There are several problems that lead to lack of fluency and thus lack of understanding of the content. This paper will study reading “ dysfluency “ problems technical students face , when reading English like word recognition difficulties, inability to read in sense groups, problems to do with accuracy, automaticity and expression. This paper will attempt to offer strategies to overcome this problem like Loud reading, Echo reading and Choral reading and some unconventional reading practices that can help solve this very common but important skill gap. English reading fluency has suddenly become a very important requirement in most recruitment drives across all colleges in India, after having been neglected for many decades in our education system. Up until the 1980's , most school education had some component of marks in an examination, allocated to reading and recitation, in most Central and State Boards of education. However, in the last two decades with the focus of education in India shifting to pure or applied sciences & technology, leading to development of skills to deal with numbers and numerical data, reading in any language, more so English has become a rare skill to find indeed. Even students who have had their education in English medium institutions throughout their lives, struggle to read in English, right up to their undergraduate courses and beyond. Cambridge Dictionary defines fluency as " the ability to speak or write a language easily, well, and quickly". In other words, fluency means to read at an appropriate pace, with precision, with the correct aspect and air. To realize and know what they are reading, learners must be able to read fluently, both when they are reading audibly or soundlessly. Why is fluency in English so poor among students in our classrooms? Several reasons can be ascribed for this. The first among them is the way "fluency" has been defined to us, and secondly the methods we use to build fluency among our students. Many a times, reading fast has been understood to be "fluency', not taking into account the accuracy of the reading at all. Accuracy is a very important part of fluency and cannot be separated from it. So, let us first understand fluency and what it comprises of. Martin Galway in his article "A field Guide to Reading Fluency: A Reader's Digest of Our Work to Date", identifies three specific criteria for reading to be considered fluent. Automaticity Automaticity means the ability to read without having to decode a sentence word by word. This is an important skill because if the student's energies are simply engaged in decoding a word by its spelling, he loses focus on other aspects of reading such as comprehension, analysis, elaboration, and deeper understanding. 
As students grow older and start silent reading, teachers completely lose their influence over the student's English reading skills and this skill gap just remains with them all their lives. Accuracy Accuracy is the ability to recognise a word and read the text with the correct comprehension of its meaning. Accuracy should not be misconceived as merely 'speed'. Adequate pace without accuracy in deciphering meaning cannot be recognised as 'fluency". Decoding errors, omissions of words and replacing words in the text with other words while reading, impact comprehension. Fluency includes comprehension and therefore accuracy goes hand in hand with automaticity. Intonation Cambridge dictionary defines ' prosody' as ' the rhythm and intonation ( the way a speaker's voice rises and falls) of language'. Intonation has an important role to play in the comprehension of a spoken word. Meaning is derived not only from the words chosen but also from the tone a speaker uses, while he reads words aloud. Intonation not only improves peripheral or textual understanding of the intent of the text, but also contributes to a deeper and more holistic understanding of the authorial intent. Tone of voice carries information about emotion, intention, emphasis and beyond. So, intonation bridges the gap between word recognition and its meaning. If this is what fluency essentially is, let us look at what causes English dysfluency among students in our classrooms in India and then look at what practises and remedies can be followed to correct this problem. The relevant problems leading to dysfluency in reading English in our classrooms are: • For most students, English is not their mother tongue. • They do not have sufficient exposure to the English language, for them to gain an instinctive or intuitive rapport with it. • Most students study the language to clear examinations and not to learn the language. • Most of the earlier institutions they studied in, did not offer them conducive or encouraging environments to learn the language. • Most students do not realize the importance of English until they appear for interviews or plan to go abroad for higher studies. • Many of them do not have confidence, have learning disabilities or are just slow learners. • Since most of the English that is taught in our classrooms is through the ' Grammar Translation Method', most students do not develop mastery of the language's idiom and phraseology and lack a flair for it. • Most of the students have such a low exposure to the English language, socially, outside of the classroom that they have poor vocabulary. Due to a combination of this lackadaisical approach of the students and the erroneous teaching methodology adopted by the academia, students find it challenging to read or express themselves in English. They are not sufficiently conversant with proper pronunciation or grammar rules. Problems with quick recognition and accuracy often reveal themselves as dysfluent word reading or as reading without understanding it. Beth Villani, Reading Specialist, describes some of the behavioural manifestations of dysfluency as: • slow and laboured reading • frequently hesitating at new words • lacking appropriate expression which conveys the correct emotion/feeling. • inaccurate decoding of unfamiliar words • replacing words in the text with those of their own • inability to memorize words that have been cognised and practiced earlier • quick recognition of very small number of words • poor comprehension even at a superficial level. 
Apart from these, problems with phonological skills/phonics lead to inefficient and tedious decoding and this in turn leads to difficulty in the development of spontaneous recognition of words. Inadequate time to practice, reading connected text with specificity is also another major reason for dysfluency. Moats and Tolman call it 'A core problem with processing speed/orthographic processing which affects speed and accuracy of printed word recognition'. Remedies. Reading dysfluency in Indian classrooms appears to be a ubiquitous problem. However, the impact of this problem is enormous. As the famous American linguist, Benjamin Lee Whorf puts it "Language shapes the way we think, and determines what we can think about". Reading dysfluency not only negatively impacts development of other skills in the LSRW spectrum of language learning, it also reduces a student's ideating process and limits his vision and world significantly. Therefore, it becomes quite clear that the problem of dysfluency needs to paid heed to, and remedial action should be taken as early as is possible, during the language learning process. Certain basic remedials that can be undertaken early in the process of language acquisition are: 1. Tracking the words with a finger as the teacher reads in the classroom. Then the student reads it. 2. Having the teacher read aloud. Then, the student matches voice with the teacher. 3. Have the student read his favourite books multiple times, till his reading gathers automaticity, accuracy, and expression. 4. Evaluate the student to check if decoding or word recognition is at the root of the difficulty. If it is, then decoding will need to be addressed as an independent problem, independent of speed or expression. 5. Give the student an age-appropriate text that he can practice repeatedly. Get the student to read aloud and time him. Calculate words-correct-per-minute regularly. Discuss this analysis/data with the student, so that he can evolve his own improvement strategy. 6. Ask the student to record his reading. Ask him to play it back and identify his automaticity and specificity errors. Ask him to work on his errors. 7. Have the teacher read aloud in class and ask the student to read it back to him. 8. Instruct the student to read a passage with a definite emotion, such as sadness or excitement, to drive home the importance of intonation and expression. 9. The teacher needs to include timed practice reading sessions into his instructional repertoire. While these practises can and do help learners to tide over English Reading Dysfluency in the initial stages, more organized and formal strategies need to be employed to help older students struggling with reading dysfluency. Some of them strategies discussed by Martin Galway in his article "A field guide to reading fluency: a Reader's Digest of Our Work to Date", are: Loud Reading Loud reading and silent reading have advantages of their own. While loud reading in the classroom helps the learner to maintain his focus on the text and enhances his rapid reading and pronunciation skills, silent reading, on the other hand, is the most suitable method for reading in crowded places and helps improve comprehension skills. Loud reading is generally slower than silent reading, however it is a better way to focus attention on the students' automaticity, accuracy and pronunciation. Choral Reading Choral reading is reading aloud in unison with a whole class or group of students. 
Choral reading helps build students' fluency, self-confidence, and motivation. Because students read aloud together, those who may ordinarily feel self-conscious or nervous about reading aloud have built-in support. There are various types of choral reading. Some of them are: • Antiphonal: Antiphonal reading involves dividing the class into smaller groups. Each group is given a different part of the text to read. Students are given time to practise reading before all the teams are brought in to read the text, one after the other. • Role Play: In role plays, each group is given different speaking parts that together make up one role play. One team can play the narrator while the other groups play different characters. • Cumulative Choral Reading: In this method, the number of students reading keeps increasing as the reading progresses. One group or one student can begin reading and another group joins in with him/them. The number of students who are reading keeps increasing up until the end, when the entire class is reading together. • Extempore Choral Reading: One student begins reading the text and other students join in or fade out whenever they choose. Students can choose/plan their reading parts before the actual reading begins. Beginning with Smaller Texts Students can begin dealing with their dysfluency by reading small texts, instead of lengthy ones. The smaller decoding, accuracy and intonation demands will result in fewer errors. This will lead to an increase in confidence and motivate and prepare them for longer text reading exercises. 4. Repeated Reading Repeated reading is frequently used to improve vocal reading fluency. Repeated reading can be used by students who have started on some amount of initial word reading skills but display insufficient reading fluency for their grade or age level. The idea of repeated reading emerged in the late 1970's as a result of the writings of Jay Samuels, Director of the Minnesota Reading and Research Project (1979) and Carol Chomsky, Harvard University (1978). They found, in two independent studies, that engaging children in repeated readings of a text improved their reading fluency. Scientific studies have shown the importance of "automaticity" to reading. Being able to decode without thinking about it consciously is essential to fluent reading. There is only a limited amount of brain space to think. The more a student uses this space for cognition or figuring out words, the less this space is available to comprehend the text's meaning. Jay Samuels believes that repeated reading could help readers acquire an instinct for words. He believes that it helps readers become proficient in the art of reading words exactly and with sufficient speed. Poetry Recitation and Performing Scripted Skits Poetry and performing in skits and plays can also be used to improve dysfluency among students. Poetry has an inherent melody, rhythm, pace, expressions and ideas that help students retain words in their memory, expanding their base vocabulary, which in turn supports building fluency. Memorising dialogues for a skit does the same. The context, ideas, dialogue delivery with a certain emotion and coordinating with other actors help in recognising words, comprehending them and building up a felicity in using them. Text Marking for Phrasing 'Marking the Text' is a reading plan that requires students to critique their own reading. 
While reading the text, the student analyzes ideas, evaluates ideas, and circles and underlines essential information to own and personalize his own reading. There are three different types of marking in this strategy: numbering paragraphs, circling, and sense grouping (putting words together as in normal speech, and pausing properly between phrases, clauses and sentences). Echo Reading In this strategy, the facilitator usually reads a text line by line or sentence by sentence, demonstrating appropriate fluency. After reading each line, the students echo the reading of the line with the same rate and intonation. Echo reading is an easy-to-use reading tool for helping struggling readers develop fluency, expression, and reading at an appropriate pace. This strategy can also help them learn about using punctuation marks while reading. This strategy is often called re-reading, but technically these are two different things. Teachers can train students to use this methodology at home too. It can help struggling students to improve their confidence, comprehension, ability to identify unknown words, listening skills, phrasing and vocabulary. 7. Paired Reading Paired reading is a research-based fluency building tool. In this approach, students read aloud to each other. When pairing students, fluent readers can be paired with less fluent readers, or two students who are at the same level can be paired to re-read a story they have already read and practiced. Paired reading helps students to work together, encourages collaboration among them and provides a platform for peer-assisted learning. It allows them to take turns at reading and provide feedback to each other, as a way to gauge comprehension. By reading together with a reading helper, a student's reading experience is modelled and supported, without their errors being held up for scrutiny and making them nervous. 8. Supported Reading Audio-assisted reading is an individual or group reading activity where students read along in their books as they follow a fluent reader reading the book on an audio recording (audiotape, audio book, or iPod). CONCLUSION Dysfluency in the perusal of English texts in Indian classrooms continues to be a challenge for most teachers of English in India. While social and academic issues contribute to its existence, teachers need to meet this challenge by thinking of different ways of making the text accessible to a struggling student and creating out-of-the-box solutions. Ultimately, the teacher has to ensure that the students lose their dependency on teachers and their peers and become independent readers, who can read fluently and comprehend their own reading as well. The teacher needs to adopt a 'problem-solving approach' while dealing with this problem. The teachers can also 'think aloud' about what to do when they encounter this problem and encourage students to come up with their own strategies. This not only helps the students handle the text, but also helps them think about creative classroom strategies that can help mitigate the problem of dysfluency. To conclude, it is evident that, in spite of concerns around reading English in classrooms, it continues to challenge the teacher and his pedagogy. It is important to be attentive to it from the early years, or whenever it is encountered, and to use tried and tested, and sometimes innovative, techniques to help students gain fluency in reading. 
This is necessary to make sure that their other learning skills are not adversely impacted and that their learning abilities remain independent and strong throughout their learning years and across their long-term learning curve.
v3-fos-license
2021-05-05T13:13:56.463Z
2021-05-01T00:00:00.000
233731742
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.202219", "pdf_hash": "47e36c3f3b06658e6f392cda0c520587dc1fa01e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44779", "s2fieldsofstudy": [ "Biology" ], "sha1": "b4629599461710aaa750bad807150f0a6884936d", "year": 2021 }
pes2o/s2orc
Genetic diversity through social heterosis can increase virulence in RNA viral infections and cancer progression In viral infections and cancer tumours, negative health outcomes often correlate with increasing genetic diversity. Possible evolutionary processes for such relationships include mutant lineages escaping host control or diversity, per se, creating too many immune system targets. Another possibility is social heterosis where mutations and replicative errors create clonal lineages varying in intrinsic capability for successful dispersal; improved environmental buffering; resource extraction or effective defence against immune systems. Rather than these capabilities existing in one genome, social heterosis proposes complementary synergies occur across lineages in close proximity. Diverse groups overcome host defences as interacting 'social genomes' with group genetic tool kits exceeding limited individual plasticity. To assess the possibility of social heterosis in viral infections and cancer progression, we conducted extensive literature searches for examples consistent with general and specific predictions from the social heterosis hypothesis. Numerous studies found supportive patterns in cancers across multiple tissues and in several families of RNA viruses. In viruses, social heterosis mechanisms probably result from long coevolutionary histories of competition between pathogen and host. Conversely, in cancers, social heterosis is a by-product of recent mutations. Investigating how social genomes arise and function in viral quasi-species swarms and cancer tumours may lead to new therapeutic approaches. Comments to the Author(s) Absolutely great paper overall. Novel, fascinating, well researched and well written. I offer some suggestions for improvement. -listing predictions of the model more explicitly would be useful -listing applied implications more explicitly would be useful -the model should apply to some forms of cancer (solid) but not so much to others (blood); please specify -why is the model not discussed in the context of pathogenic bacteria (eg with cross-feeding) -explain quasispecies briefly; most readers will not know about them -what sort of ENVIRONMENTS (ecological conditions) select for social heterosis most, least, under the model you describe. After all, in social insects, environments are of central importance -how is genetic relatedness involved in social heterosis in viruses and cancers, exactly? -viruses may provide good models for group selection effects because the infected cells impose a spatial structure that facilitates group selection. do you agree? if so add to ms. -the immune system is expected, as in HIV, to differentially attack the most common variants of a virus, leading to increased and maintained diversity. do you agree? if so add to ms. The authors stated: 'Furthermore, within individuals, more aggressive tumors with higher variability tend to arise in organs with apparently stronger anti-cancer defenses in comparison to other tissues [138]. Such patterns are counter intuitive. Stronger controls that limit what function a cell can express ought to make cancers less likely rather than more so. Resolving why this apparent evolutionary paradox exists could lead to future therapeutic options' -I do not see why this is counter-intuitive. Only strong tumors could arise and survive in well-defended organs.
After publication, some additional ways to effectively promote your article can also be found here https://royalsociety.org/blog/2020/07/promoting-your-latest-paper-and-tracking-yourresults/. On behalf of the Editors of Royal Society Open Science, thank you for your support of the journal and we look forward to your continued contributions to Royal Society Open Science. The authors would like to thank you and the reviewers for your considered and constructive comments. They have brought about a marked improvement in the manuscript. Our pointby-point response is below (with our responses in blue bold): Associate Editor Comments to Author (Professor Matthew Collins): Comments to the Author: Thank your for your MS. All three reviewers agree that this is a fascinating paper which should be published and all suggesting only minor changes. I how that you would agree that the full data associated with the 87Sr/86Sr analyses should be included (e.g. in the supplementary section. These data should have 88Sr (V), 85Rb (V), 87Sr/86Sr, 87Sr/86Sr (1 standard error). Given the methods described in the manuscript for the determination of the 87Sr/86Sr data. It is best practice to include all the QC data generated with these analyses. Apologies that this was omitted from our original submission. We have added the extended 87 Sr/ 86 Sr data to the supplementary material (as well the extended δ 18 O data). Reviewer #1 Overall this is an excellent manuscript based on a very sound study. The authors applied multiple isotope analysis (strontium and oxygen of dental enamel; sulfur, carbon and nitrogen of dentin collagen/+1 rib sample) to construct fairly detailed, yet admittedly incomplete, osteobiographies of eight individuals from the famous shipwreck of the Mary Rose. In its current form the manuscript fulfills the main criteria for publication in this journal (Articles should report work that is scientifically sound, in which the methodology is rigorous and the conclusions are fully supported by the data.) and as such is certainly suitable for publication. I was also very impressed by the clear structure and writing, and particularly how the authors were able to carefully yet successfully incorporate the ancestry estimation results into the broader study. On this point, the way that this potentially problematic data type is used, and the further clarification provided in the supplementary document, provides a useful template for future studies attempting to integrate isotopic data and osteological data sets with ancestry estimations. The authors generally struck a good balance between over-and under-interpretation. The study is also especially important for highlighting the presence of presence of non-Europeans in the British/European archaeological record of the late mediaeval/early modern period. There are two main critiques of the ms, in its current, that if properly addressed would greatly improve this manuscript. 1) While the authors briefly mention the previous study of Bell et al. (2009) and the subsequent archaeological debate that this initiated (Millard & Schroeder 2010;Bell et al. 2010) there is no explicit effort made to engage in this debate nor even to compare the data in this study with that originally produced by Bell and colleagues. It is unclear why this is the case, and understandable that the authors clearly wish to emphasize their own new data. Nonetheless, since the original study by Bell et al. 
included some of the same isotope systems (namely, oxygen, carbon and nitrogen) applied to the same exact skeleton collection, the complete lack of engagement with, or direct comparison to this earlier study is problematic. If the authors have reason to doubt the reliability of the data reported by Bell and colleagues, then they are obligated to be explicit about this. If not, then it really is necessary to make a clear comparison between the isotope data from Bell et al. (2009) and the isotope data reported herein (e.g. how do they compare? do the ranges overlap? are the results consistent with the previous study or differ significantly?, etc...). The data from both studies should also be plotted together in one or more figures to allow the readers to directly compare and interpret the data themselves. We are grateful for these constructive comments and we agree that we should provide a clearer comparison with Bell et al.'s (2009) data. There are reasons for our not having done this in the original submission. We have some concerns about the comparability of the oxygen isotope data and therefore did not undertake detailed comparison initially. We appreciate that this is an odd omission so have added a caveated comparison and explained our concerns. We have added the following sentences to our discussion section to address this: 'Comparison with existing data from the Mary Rose must also be made with caution. Bell et al.'s [9] data are not directly comparable due to a different sampling methodology and the use of a NaClO pre-treatment, which has been demonstrated to make δ 18 O values lower [114]. Once converted to δ 18 Op [83], the mean for the previous dataset (17.6 ±0.87‰, 1σ) is markedly lower than in this study (19.2 ±0.95‰, 1σ), although there is considerable overlap in the datasets. This difference may derive from chance sampling, as both studies relate to only a small number of individuals, or may relate to varied sampling and pretreatment methods.' For the carbon and nitrogen data, comparisons have been made via the biplot (Figure 4). We appreciate the reviewer's suggestion but do not feel that we need to refer to examples of individuals of African ancestry found in Roman contexts as this is a postmedieval study period; we believe that the reference to historical evidence for people of African ancestry in the Tudor period is sufficient. 2) As is the case with most isotopic provenance studies, the approach used is (by necessity) exclusionary. However, the manner of presenting these types of results (rather long descriptions listing all of the places where each individual could NOT have originated from for each isotope proxy) is less than ideal. In most geographic contexts there is no other option than to present the results this way. However, Britain is the most intensively and extensively mapped areas of the world for most isotope systems, and one of the few regions where high quality isoscapes exist for both strontium and oxygen isotopes (e.g. Evans et al .2012;Pelligrini et a. 2016; see also the British isotope domains dataset and online tool at https://www.bgs.ac.uk/datasets/biosphere-isotope-domains-gb/). As such, it is somewhat striking that this study makes no attempt at a more systematic, quantitative, spatial approach to interpreting and presenting the isotopic provenance data. 
Such an approach would greatly improve the visualization, interpretation, and presentation of the isotope results by simply and effectively illustrating the areas of potential origin for each individual (or at least the British ones). Such an approach is not very complex and can be accomplished with a fairly simple application in ArcGIS, as demonstrated recently by a similar study combining skeletal isotope data and isoscapes (in a British context!) to trace the origins of individuals buried at Stonehenge (Snoeck et al. 2018). We appreciate comments from all three reviewers about how best to present and interpret the provenancing data. The reviewers express very different opinions on this issue. We were torn on how best to approach the presentation of the data and how confidently to interpret. This remains a delicate balance to navigate as is demonstrated by the fact that some review comments stated that we need to be more ambitious and others that we need to be more cautious. Britain is indeed the most comprehensively mapped and we used the BGS multi-isotope querying tool to explore origins. On the basis of the three proxies this indicated that only one individual (FCS-09, with African ancestry) was consistent with British origins. This is, in practice, inconsistent with the data and demonstrates that we are not yet (generally, at least) in a position where we can query data to plot origins on a map. The manifold variables that affect these isotope proxies means we are in a situation of providing the most parsimonious explanations of origins and using ArcGIS approaches to plotting origins can over-simplify the complex process of exploring origins. We would like to show greater ambition in refining provenance and take a more solid, quantitative approach, but we are not sure the data can sustain it and it would go against comments of another reviewer. In addition, three of the authors have been involved in a study that has been criticised for overambitious refinement of origins (see Barclay and Brophy 2020, Archaeological Journal), so would rather err on the side of caution. Based on the concerns detailed above, I recommend that the manuscript be accepted with minor revisions. I hope that the authors seriously consider the proposed suggestions for revision, and look forward to reading the revised and published version of this paper in Royal Society Open Science. Reviewer #2 The manuscript by Scorrer and colleagues presents a study on the medieval warship Mary Rose. The manuscript attempts an isotope study in order to understand the origins of the crew to the ship. The article is a well written summary and the study scientifically sound, so that the whole manuscript gives a detailed insight into the crews diet and possible origins. However, there are some flaws in the study's design, since the team analysed carbon, nitrogen, sulphur, oxygen and strontium isotopes of eight individuals the sample numbers are very low and discrepancies in the data result in highly biased data. Therefore, there are no strong conclusions and the article is adding another layer of information without revealing origins or deeper understanding of the Tudor's warship. In addition the authors decided to include a fully unrelated craniometric ancestry estimation into the article, which fails to connect to the other results and is only another mosaic puzzle piece in the story, which does not fit with other results. 
General remarks: The data table 1 should be merged with table S1 since they both contain valuable information and only together these information are intuitive assessable. Additionally the table description needs to be more substantial. We believe that combining table 1 and S1 will create too large a table so would like to keep these separate but agree with the reviewer that the data from both these tables should be better linked. We have signposted this in the table 1 caption (and have added more detail to the caption): Table 1. Multi-isotope analysis results from dental samples of eight individuals from the Mary Rose (see electronic supplementary material, table S1 for contextual information on these individuals). Oxygen values were converted from carbonate (δ 18 Oc) to phosphate (δ 18 Op) using the conversion equation set out in Chenery et al. [90]. The Figures need a higher resolution, Figures 1,3, and 4 need a higher resolution and better quality, my printouts were bad and even on screen were no distinctions between symbols. For Figure 1 of the Mary Rose, unfortunately we only have permission from the Pepys Library to have the resolution of the image as 70 dpi. The quality of figures 3 and 4 have been improved for final submission. The introduction to the Tudor kingdom and the mobility of the navy is excellent. It is well written, substantial yet not lengthy. The isotope background is alright, but lacks some introduction to isotope data from the studied period. There is a multitude of data published and a overview of available data would have been a good start for understanding the setting. We have added the following to the end of section 1.4 to address this: There has been little isotope work on post-medieval human remains in Britain (see [9, 53, 70, 71]). There is, however, a wealth of data for late medieval Britain, especially in terms of δ 13 C and δ 15 N [28, 29, 72-74], but also for 87 Sr/ 86 Sr and δ 18 O [66, 75]. We have also clarified we are referring to the Tudor rather than medieval period when we say that direct evidence of human remains from this period is limited (at the end of section 1.3), but that we are happy to add any studies if we have missed some that relate to this period. The study design is alright, given the importance and value of the samples, however, seven individuals are very few and therefore the robustness of such data will be limited. This is the major concern of the study, because the results will be only trustworthy, if there are no or limited numbers of outliers, but due to the nature of the historic background we would expect quite a number of outliers. This means there will be no background data for proper interpretation. Therefore, the general literature review of isotopic mobility studies is most important. The sample treatment and analytical details are highly detailed and should be cut to a minimum, additionally many of these methods have been published before and these should be acknowledged. We appreciate that we have a long methods section as multiple methods were used. Other reviewers and the editor have asked for more methodological detail, so we are obliged to keep them in, though we agree that they disrupt the flow of the text. The ancestry estimation by craniometric and morphoscopic methods should be excluded from the manuscript. Though valid to a certain point the sample number and variability in the skeletal remains are quite big and the interpretation results more in speculations than scientific conclusions. 
In my opinion the interpretation could be added in a side note, but not a full chapter. In the absence of aDNA data (though ongoing research will add this in time), we deemed it important to explore ancestry to some degree though we accept that the interpretations are far from certain. However, the results for FCS-09 are convincing and this does add another biographical element to the crew. Both other reviewers praised the ancestry estimation and the value it added. As a result we would like to retain this element. Furthermore, the results of this analysis were included in the museum interpretation and a Channel 4 documentary on the Mary Rose, and so we feel that the methods used should be peer-reviewed and published so others can evaluate interpretations across the data. The conclusion are just a summary and are relativizing the own data. In my opinion the results are only limited and this needs to be addressed. Additionally the authors have started a data set for the Mary Rose material which needs to be expanded in put into the larger context. We are grateful for this observation and have made major changes to the conclusion. It is now less of a summary and better emphasises that our sample is small and that more work on the Mary Rose collection needs to be done. We hope that this addresses the reviewer's comment. Minor comments: The collagen was not ultrafiltered, this is usually sufficient for carbon and nitrogen isotope results, however, for sulphur isotopic results this could be problematic. Additionally the salt water could have compromised the materials and therefore an additional ultrafiltration step seems wise. My concern is related to the correlation of sulphur isotope values with sulphur content in collagen. The highest sulphur isotope values also revealed the highest sulphur contents. This could be indicative for seawater sulphate intrusion, therefore these data should be questioned and double checked. This is an interesting point. We are unaware of evidence that ultrafiltration is necessary for sulphur isotope analysis of teeth deposited in marine environments. All quality control criteria were met, as demonstrated in the paper. We are very confident that our results are valid. The range of sulphur values would be difficult to explain if diagenesis had occurred and therefore we are confident that they are biogenic. In addition, diagenetic alteration would skew C:S and N:S ratios. The strontium results are nice, but the interpretation and presentation lacks ambition. I would recommend to use literature data for comparison and additional arguments. In my opinion these data have been neglected in the interpretation. Similarly the oxygen isotope data, which are more problematic, but in itself have some value, which needs to be addressed. We appreciate comments from all three reviewers about how best to present and interpret the provenancing data. The reviewers express very different opinions on this issue. We were torn on how best to approach the presentation of the data and how confidently to interpret. This remains a delicate balance to navigate as is demonstrated by the fact that some review comments stated that we need to be more ambitious and others that we need to be more cautious. Britain is indeed the most comprehensively mapped and we used the BGS multi-isotope querying tool to explore origins. On the basis of the three proxies this indicated that only one individual (FCS-09, with African ancestry) was consistent with British origins. 
This is, in practice, inconsistent with the data and demonstrates that we are not yet (generally, at least) in a position where we can query data to plot origins on a map. The manifold variables that affect these isotope proxies means we are in a situation of providing the most parsimonious explanations of origins and using ArcGIS approaches to plotting origins can over-simplify the complex process of exploring origins. We would like to show greater ambition in refining provenance and take a more solid, quantitative approach, but we are not sure the data can sustain it and it would go against comments of another reviewer. In addition, three of the authors have been involved in a study that has been criticised for overambitious refinement of origins (see Barclay and Brophy 2020, Archaeological Journal), so would rather err on the side of caution. The suggested changes are not to major in my opinion and therefore should be addressed and the manuscript altered accordingly. After doing so, in my regards to the high quality of the manuscript's style there is no issue with publication. Reviewer #3 Scorrer et al. have prepared a well-written manuscript that succinctly presents and very adequately interprets multi-isotope and morphometric skeletal data from a relatively small (but well selected) sample of humans from the Mary Rose wreck. This paper is an important contribution to the growing literature involving these datasets, and more specifically helps to add additional scientific value to our understanding about the life-ways (i.e. origins) of individuals from the Tudor time period. I have added specific comments/corrections to the attached .pdf of the manuscript, and mention a few of these items here. There are no major concerns with the publication of this manuscript providing these minor corrections made and/or considered to help substantiate the interpretation of the isotope data. The authors have been particularly thorough in their realistic evaluation of the morphometric determination using the existing software and databases available for this purpose (e.g. Fordisc), which are not specific to archaeological populations. The additional information on this part of the study contained in the supplemental was much appreciated and necessary. Finally, the balance between offering specific (likely) geographic origins for each of the individuals sampled with a recognition of equifinality inherent in individual, or combined, isotope systems (δ13C, δ15N, δ18O, δ34S, and 87Sr/86Sr) is reasonably done. I have suggested statistical treatment of the isotope data should be attempted/worked through via appropriate non-parametric methods to help further support the author's suggestion of origins within or outside 'Britain'. Given the additional information available on each of the individuals that are part of this study, one could consider a possible bayesian approach to determining 'local' versus 'non-local' in this context.
v3-fos-license
2021-12-01T16:07:25.847Z
2021-11-26T00:00:00.000
244735373
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2225-1154/9/12/168/pdf", "pdf_hash": "72f78ecbf37e4ce13b77306d5b0a3890ee6b0dd3", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44781", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "cedb5b0acd1a10a36f2c7d02f19f976fc429c8d6", "year": 2021 }
pes2o/s2orc
Coastal Wave Extremes around the Pacific and Their Remote Seasonal Connection to Climate Modes
At first order, wind-generated ocean surface waves represent the dominant forcing of open-coast morpho-dynamics and associated vulnerability over a wide range of time scales. It is therefore paramount to improve our understanding of the regional coastal wave variability, particularly the occurrence of extremes, and to evaluate how they are connected to large-scale atmospheric regimes. Here, we propose a new "2-ways wave tracking algorithm" to evaluate and quantify the open-ocean origins and associated atmospheric forcing patterns of coastal wave extremes all around the Pacific basin for the 1979–2020 period. Interestingly, the results showed that while extreme coastal events tend to originate mostly from their closest wind-forcing regime, the combined influence from all other remote atmospheric drivers is similar (55% local vs. 45% remote) with, in particular, ~22% coming from waves generated remotely in the opposite hemisphere. We found a strong interconnection between the tropical and extratropical regions, with around 30% of coastal extremes in the tropics originating at higher latitudes and vice-versa. This occurs mostly in the boreal summer through the increased seasonal activity of the southern jet-stream and the northern tropical cyclone basins. At interannual timescales, we evidenced alternatingly increased coastal wave extremes between the western and eastern Pacific that emerge from the distinct seasonal influence of ENSO in the Northern and SAM in the Southern Hemisphere on their respective paired wind-wave regimes. Together these results pave the way for a better understanding of the climate connection to wave extremes, which represents the preliminary step toward better regional projections and forecasts of coastal waves.
Coastal breaking waves can originate from long-wavelength wind-waves (swell) able to propagate across entire ocean basins [15][16][17] as well as from short-period wind seas. Therefore, they can be spawned from both local and remote storms that are produced by a large variety of large-scale atmospheric regimes. A recent study [18] used a statistical clustering method on historical simulations of ocean surface waves to classify mean global wave climates. To identify the main seasonal atmospheric patterns responsible for regional coastal wave activity, Almar et al. [19] used lag-correlations between significant daily wave energy in the Atlantic (Dakar, Senegal) and large-scale surface wind amplitude anomalies for different seasons. Their results highlight in particular that remote extra-tropical winter waves from both the Northern and Southern Hemispheres tend to reach tropical coastlines with an oblique incidence. This promotes larger destabilizing effects and increased coastal erosion due to a stronger longshore sediment transport compared to the more shore-normal incidence of waves originating from tropical regimes. This stresses the importance of evaluating the influence of different local vs. remote large-scale atmospheric regimes on coastal wave variability for a better assessment of regional coastal vulnerability and thus better predictions for more adapted local resiliency procedures. The relationships between global ocean surface waves and large-scale atmospheric climate variability have been extensively investigated [20][21][22][23][24][25][26][27][28]. 
To summarize, changes in wave activity induced by large-scale wind regime activity arise from the natural ocean-atmosphere-coupled variability that operates at timescales ranging from intra-seasonal to multi-decadal. However, the Pacific basin is predominantly under the influence of the El Niño Southern Oscillation (ENSO), the dominant mode of global variability at interannual timescales [29]. Building upon the recent theoretical progress made in ENSO research [30][31][32], it was shown that accounting for the full variety of ENSO teleconnection pathways to tropical and extra-tropical storm activity allowed explaining, on average, ~35% of Pacific coastal wave interannual variability (~20% in the Southern Hemisphere and up to 55% in the Northern Hemisphere). Hemer et al. [23] identified that the principal mode of interannual variability of Southern Hemisphere waves was significantly related to the Southern Annular Mode (SAM). A follow-up study by [33] showed that the local wind-generated forcing in the Southern Hemisphere extra-tropics can produce waves that significantly propagate equatorward, far from their region of generation, and that they potentially affect coastal regions even all the way to the Northern Hemisphere. This is a strong incentive to quantify the respective origins of coastal wave extremes, i.e., locally vs. remotely forced, in the context of the dominant control of Pacific climate modes on storm activity and associated wave generation, with a specific focus on ENSO and SAM. In particular, a new "2-ways wave tracking" algorithm was introduced to evaluate the open-ocean origins and associated atmospheric forcing of local coastal wave extremes all around the Pacific Rim. Once the mean connection between coastal wave extremes and basin-scale atmospheric regimes is comprehensively quantified, the seasonal and interannual modulation of these paired atmospheric patterns and regional wave climates is examined. The remainder of the article is structured as follows: in Section 2, we present the data along with the main steps of the methodological framework employed to track open-ocean wave extremes down to the Pacific basin shorelines. Section 3 describes quantitatively the mean, seasonal, and interannual connections between the Pacific regional coastal extremes and the basin-scale wave regimes. In particular, we evaluated their variability in the context of ENSO and SAM, the dominant seasonally modulated climate modes in the Pacific. Finally, Section 4 provides a summary and a discussion on the relevance of our approach in view of more accurate regional projections and forecasts of coastal wave extremes.
Wave and Wind Data
Surface wave data (significant height Hs, peak period Tp, and direction Dp) and 10-m winds were extracted from the fifth generation of the European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA5) at a 1-day temporal resolution between 1979 and 2020 and a 0.5° × 0.5° horizontal resolution (30 km × 30 km for the atmospheric variables, i.e., wind speed and direction) over the Pacific basin (100°E–300°E; 90°S–90°N). ERA5 combines vast amounts of historical observations into global estimates using advanced modelling and data assimilation systems [34]. 
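The data handling described above can be illustrated with a short, hedged sketch. The snippet below assumes the relevant ERA5 fields have already been downloaded to a local NetCDF file; the file name and the ERA5 short variable names used here (swh, pp1d, mwd, u10, v10) are assumptions for illustration rather than part of the study's code.

```python
# Minimal, illustrative sketch (not the study's code): load previously
# downloaded ERA5 fields and subset the Pacific domain (100E-300E, 90S-90N)
# at daily resolution. The file name and the ERA5 short variable names
# ("swh", "pp1d", "mwd", "u10", "v10") are assumptions about a local copy.
import numpy as np
import xarray as xr

ds = xr.open_dataset("era5_waves_winds_1979_2020.nc")  # hypothetical local file

# ERA5 longitudes are typically stored as 0-360 degrees East and latitudes
# from north to south, so the Pacific box can be selected with two slices.
pacific = ds.sel(longitude=slice(100, 300), latitude=slice(90, -90))

# Reduce sub-daily fields to the 1-day resolution used in the study.
daily = pacific[["swh", "pp1d", "mwd", "u10", "v10"]].resample(time="1D").mean()
# Note: a plain mean of wave direction ("mwd") is a simplification; a vector
# (circular) mean would be more rigorous.

# 10-m wind speed from its two components.
daily["wind_speed"] = np.sqrt(daily["u10"] ** 2 + daily["v10"] ** 2)
```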
To characterize coastal wave activity specifically, ERA5 wave data were re-sampled equidistantly, via a nearest-neighbor interpolation, along the Pacific coast at a spatial resolution of 0.5° using the Global Self-consistent Hierarchical High-resolution Geography dataset (GSHHG version 2.3.6) [35]. To illustrate how large-scale atmospheric regimes are seasonally paired with open-ocean wave patterns, Figure 1 presents climatology maps of 10-m wind speeds and significant wave heights (shading) and their direction (arrows) averaged in the boreal winter (December-February, DJF; Figures 1a and 1c, respectively) and summer (June-August, JJA; Figures 1b and 1d, respectively). We can observe the strong westerlies between 30 and 50°N, a signature of the strengthened northern hemisphere polar front, and the easterlies or trade winds (10-30°N) associated with the intensified Aleutian low and subtropical high-pressure system, respectively, in the boreal winter (Figure 1a). Both regimes coincide with stronger ocean surface wave patterns located directly underneath (Figure 1c). There is a seasonal reversal with increased westerlies and trade winds in the southern hemisphere in the boreal summer (Figure 1b). It is noteworthy that, although subject to a seasonal modulation, the polar front and associated extratropical open-ocean wave activity (Figure 1d) in the Southern Ocean remain strong all year round.
Open Ocean Origins of Coastal Waves
To evaluate the different origins of coastal wave extremes around the Pacific, the basin was divided into eight regions to isolate the essential paired large-scale wind and wave patterns (cf. black boxes on Figure 1), in particular the Pacific:
• North-West (NW), that includes the western part of the northern hemisphere polar and sub-tropical fronts as well as the west Asian monsoonal system,
• North-East (NE), that includes the eastern part of the northern hemisphere polar and sub-tropical fronts,
• Tropical North-West (TNW), that includes the western part of the northern hemisphere trade wind system and the Pacific North-West tropical cyclone basin,
• Tropical North-East (TNE), that includes the eastern part of the northern hemisphere trade wind system (i.e., the Inter-Tropical Convergence Zone) and the Pacific North-East tropical cyclone basin,
• Tropical South-West (TSW), that includes the western part of the southern hemisphere trade wind system (i.e., the South Pacific Convergence Zone) and the South Pacific tropical cyclone basin,
• Tropical South-East (TSE), that includes the eastern part of the southern hemisphere trade wind system,
• South-West (SW), that includes the western part of the southern hemisphere polar and sub-tropical fronts,
• South-East (SE), that includes the eastern part of the southern hemisphere polar and sub-tropical fronts.
Detection of Open-Ocean Extreme Waves
The deep-water wave energy flux, proportional to and here simply defined as Hs²·Tp [36], was then spatially averaged in each of these regions. 
As an example, Figure 2 presents the resultant wave energy time series averaged in the Pacific North-East region, zoomed in on the period of 2015-2019, which showcases in particular the typical strong seasonal variability in the northern hemisphere characterized by increased (decreased) wave energy during the boreal winter (summer). Note that we assumed that the centroid of each box, evaluated as the spatial barycenter of the box considering only oceanic points and represented by the black triangles in Figure 1b, displays the spatial origin of all extreme events occurring in this region. A wave energy event was then considered extreme if the daily anomaly, calculated as the departure from the wave energy 1-month running mean, was above 120% of its 1-month running standard deviation. As compared to using a fixed criterion for the entire period, this "sliding threshold" allowed accounting for the strong seasonal variability and detecting extreme events even during the boreal summers characterized by a smaller wave energy. In addition, since waves are generated by storms that usually extend over the course of several days (e.g., the typical return period of extratropical storms in the boreal winter is around 7-10 days), and to avoid counting the same event twice, a minimum of 7 days was set between the detection of two extremes. 
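As a rough illustration of the detection procedure just described, the sketch below computes the regional wave-energy proxy Hs²·Tp and applies the sliding threshold (120% of a 1-month running standard deviation, with a 7-day separation between events). It assumes daily Hs and Tp fields are available as xarray DataArrays and that the open-ocean boxes are simple longitude/latitude bounds; the function and variable names are illustrative only.

```python
# Minimal, illustrative sketch of the extreme-event detection described above.
# Assumes daily significant wave height (hs) and peak period (tp) are xarray
# DataArrays with dimensions (time, latitude, longitude); box bounds are
# given in the order the coordinates are stored.
import pandas as pd

def regional_energy(hs, tp, lon_bounds, lat_bounds):
    """Spatial mean of the wave-energy proxy Hs^2 * Tp over one open-ocean box."""
    box = dict(longitude=slice(*lon_bounds), latitude=slice(*lat_bounds))
    energy = hs.sel(**box) ** 2 * tp.sel(**box)
    return energy.mean(dim=["latitude", "longitude"]).to_series()

def detect_extremes(energy, window_days=30, factor=1.2, min_separation_days=7):
    """Flag days whose anomaly from a ~1-month running mean exceeds `factor`
    times the running standard deviation, keeping detected events at least
    `min_separation_days` apart so the same storm is not counted twice."""
    running_mean = energy.rolling(window_days, center=True, min_periods=1).mean()
    running_std = energy.rolling(window_days, center=True, min_periods=1).std()
    anomaly = energy - running_mean
    mask = (anomaly > factor * running_std).to_numpy()
    candidates = energy.index[mask]

    events, last = [], None
    for day in candidates:
        if last is None or (day - last) >= pd.Timedelta(days=min_separation_days):
            events.append(day)
            last = day
    return pd.DatetimeIndex(events)
```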
Tracking Down the Extreme Events from the Open Ocean to the Pacific Coastlines
In the next step, we evaluated how different coastal regions of the Pacific basin (defined by the colored dots along the Pacific coastline in Figure 2b) are affected by extreme wave events generated in a given oceanic region or, in other words, towards which coastlines and in which proportion these large ocean waves propagate. To do so, a "2-ways wave tracking" algorithm was introduced. The main steps of this method are summarized in the following.
1. Space-time criterion: Based on the distance between the centroid of an open-ocean region (i.e., the assumed origin of waves) and each coastal point around the Pacific basin, and using the equation of group velocity under the deep-water approximation to infer the wave speed, i.e., c = 1.56·Tp/2 (in m/s, with the wave peak period Tp in seconds), we can approximate a temporal window during which the waves generated off-shore are likely to reach the coast. This interval was estimated using a minimum (maximum) period Tp of 8 s (20 s).
2. Wave incidence direction criterion: Since waves can be generated by a multitude of local and remote storms, it remains uncertain whether the wave events identified at the coast using only the previous space-time criterion actually originate from the considered oceanic region and not from another area (i.e., from a different atmospheric forcing; e.g., a local storm or another remote swell). To distinguish events with potentially different off-shore origins, we filtered out coastal waves characterized by a direction of incidence not comprised within a certain directional interval/cone defined by the lines linking each coastal point in the Pacific to the two opposite corners of the oceanic box considered.
3. Lag-correlation criterion: Once the coastal events were narrowed down to only those with reasonable directions of incidence with regard to their potential off-shore origin, a lag-correlation criterion was applied to (i) evaluate/quantify the coherency between the open-ocean and coastal wave events and (ii) if appropriate, estimate the lag (i.e., the time of propagation between the open-ocean wave origin and the coastline). In particular, we calculated the different lag-correlations between the off-shore wave extremes and the coastal event within the temporal window and considered the waves at the coast as actually generated by the open-ocean extreme event only if the p-value of the maximum lag-correlation was below 0.05 (i.e., correlation significant at the 95% confidence level according to a Student t-test).
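The first two tracking criteria can likewise be sketched in a few lines. The snippet below is a minimal, illustrative implementation of the travel-time window (using the deep-water group speed c = 1.56·Tp/2) and of the incidence-cone test; the coordinates, cone bounds, and function names are placeholders rather than the study's actual code.

```python
# Minimal, illustrative sketch of the first two tracking criteria. Distances,
# cone bounds, and function names are placeholders; the deep-water group
# speed is approximated as c = 1.56 * Tp / 2 (m/s), with Tp between 8 and 20 s.
import math

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lon1, lat1, lon2, lat2):
    """Haversine distance (km) between an open-ocean centroid and a coastal point."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def arrival_window_days(distance_km, tp_min=8.0, tp_max=20.0):
    """Earliest and latest arrival times (days) of swell radiated from the box
    centroid: long-period (fast) waves arrive first, short-period waves last."""
    c_fast = 1.56 * tp_max / 2.0  # group speed in m/s for Tp = 20 s
    c_slow = 1.56 * tp_min / 2.0  # group speed in m/s for Tp = 8 s
    metres = distance_km * 1000.0
    return metres / c_fast / 86400.0, metres / c_slow / 86400.0

def within_incidence_cone(coastal_direction_deg, cone_min_deg, cone_max_deg):
    """True if the coastal wave direction falls inside the cone defined by the
    lines joining the coastal point to the two opposite corners of the box."""
    d = coastal_direction_deg % 360.0
    lo, hi = cone_min_deg % 360.0, cone_max_deg % 360.0
    return lo <= d <= hi if lo <= hi else (d >= lo or d <= hi)

# Example: a coastal point ~8300 km from the centroid is reached about 6 to
# 15 days after the open-ocean extreme, depending on the peak period.
t_min, t_max = arrival_window_days(great_circle_km(210.0, -50.0, 240.0, 20.0))
```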
Climate Mode Indices and Related Storm Activity
As mentioned earlier, the modulation of surface wave activity in the Pacific basin at interannual time scales is dominated by ENSO in the Northern Hemisphere and tropical regions and by SAM in the Southern Hemisphere. To characterize ENSO variations, we used the classic monthly Niño3.4 index, calculated as the interannual anomalies, relative to the monthly mean climatology, of sea surface temperatures (SST) averaged in the region (170°–120°W; 5°N–5°S). The Southern Annular Mode variability is described by the monthly SAM index, calculated as the zonal pressure difference between the latitudes 40°S and 65°S.
Mean Coastal Extremes Climate
This tracking algorithm is now applied to evaluate towards which coastlines the extreme events identified in each open-ocean region significantly propagate. Figure 2b shows the total number of extreme events detected in December-February (DJF, blue bars) and June-August (JJA, orange bars) for each oceanic region. Unsurprisingly, more extreme waves were seen to be generated in winter, but the methodology described above also successfully detects anomalously strong events in the summer. Figure 3 presents the repartition around the Pacific Rim of the number of extreme coastal events (i.e., days) that were generated in each oceanic area over the total period of 1979-2020. These spatial patterns reveal that coastlines are mostly under the influence of wave extremes generated in the neighboring open-ocean region by their associated paired large-scale atmospheric forcing. Yet, waves generated in the strong polar jet-streams at the higher latitudes (Figure 3a,b,g,h) seem to also propagate to a large extent towards the tropical regions. In particular, large waves formed in the Southern Ocean are seen to propagate significantly even towards the remote tropical regions of the Northern Hemisphere (Figure 3g,h).
While the propagation of extreme waves generated in the tropical band tends to stay confined within this region, waves can however widely propagate away from their individual area of generation towards other remote tropical locations. They can even, to some extent, reach extra-tropical coastlines (Figure 3c-f), in particular in the North-West and North-East tropical Pacific, potentially through large waves formed by tropical storms in these extremely active hurricane basins. To quantify their local versus remote origins, Table 1 distinguishes the repartition (in %) of all coastal extremes accumulated along nine different coastal regions, delineated by the colored dots on Figure 1, according to their oceanic origin. Note that we split, in Table 1, both the tropical Pacific West and East coasts into a northern and a southern region to distinguish their respective hemispheric contributions. These statistics confirm that the largest individual number of coastal wave extremes originates from the closest open-ocean generating region (i.e., an average of 54% local origin). However, the summed-up influence of all other oceanic regions yields a rather similar contribution from remote locations (i.e., 46%), highlighting the wide diversity in origins (and thus characteristics) of extreme coastal waves around the Pacific basin. Overall, 22% of extreme events in the Pacific coastal regions are generated remotely in the opposite hemisphere. Around 35% of coastal extremes in the tropics are generated at higher latitudes, with the breakdown per hemisphere revealing a significant asymmetry: an average of 14% (10%) of large waves in the tropics originate from the Southern (Northern) Hemisphere jet-stream, most likely a consequence of the year-round storm activity in the Southern Ocean. In particular, the strongest extratropical contribution to coastal extremes in the tropics (16%) comes from the South-Eastern Pacific. Similarly, ~30% of coastal extremes impacting the higher latitudes originate from the tropical band. Interestingly, we also observed a significant hemispheric asymmetry, with 13% of coastal extremes in the southern extratropical regions coming from the northern tropical band as compared to 7% of large waves in the northern higher latitudes originating from the southern tropical regions. 
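The percentages reported in Table 1 amount to a row-normalised cross-tabulation of tracked events by coastal region and open-ocean origin. A minimal sketch of such a tally is given below; the event list and its field names are hypothetical placeholders, not the study's data structure.

```python
# Minimal, illustrative sketch of a Table-1 style summary: percentage of
# coastal extreme events per coastal region, broken down by open-ocean origin.
# The event list and its field names ("coastal_region", "origin_region") are
# hypothetical.
import pandas as pd

events = pd.DataFrame(
    [
        {"coastal_region": "Tropical West (North)", "origin_region": "TNW"},
        {"coastal_region": "Tropical West (North)", "origin_region": "SE"},
        {"coastal_region": "North-East", "origin_region": "NE"},
        # ... one row per tracked coastal extreme event
    ]
)

# Row-normalised cross-tabulation: each coastal region's row sums to 100%.
origin_share = pd.crosstab(
    events["coastal_region"], events["origin_region"], normalize="index"
) * 100
print(origin_share.round(1))
```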
For a more in-depth statistical description of coastal wave extremes' characteristics, Figure 4 displays bulk parameters of wave extremes, namely, their mean energy and direction of incidence, averaged over all events detected within each of the coastal regions and classified according to their oceanic origin. A preliminary visual inspection reveals that extratropical coastlines face more energetic swells, particularly in the Southern Pacific, most likely because the open Southern Ocean produces larger waves year-round. In addition, coastal regions along the eastern Pacific seaboard, particularly in the tropics, are under the influence of more energetic waves originating from the dominant extratropical westerlies (Figure 4b,e,g) as compared to east-facing shores (Figure 4a,b), with a notable exception in the Pacific South-West, whose south-west-facing shores are open to the strong extratropical storms generated year-round in the Southern Ocean. While extratropical coastlines are unsurprisingly impacted by extreme energetic wave events originating mostly from the northern and southern polar jet-streams, their incidence tends to be restricted to a smaller cone of influence (Figure 4a,b,g), as compared to the wide directional diversity found along tropical coastlines. This is particularly striking for the Pacific tropical islands, which are, even more so than the tropical continental coastlines, under the significant influence of both tropical and extratropical wave regimes from both hemispheres.
Seasonal Variability of Coastal Extremes' Origins
To explore the seasonality of these across-hemispheric and tropical-extratropical waves' propagation, Figure 5 shows annual cycles of the total number of extreme coastal events per hemisphere according to their Northern vs. Southern and tropical vs. extratropical origins. Note that the eastern and western parts of each tropical and extratropical basin were combined to facilitate a comprehensive evaluation of the paired atmospheric-wave regimes in both hemispheres. Coastal wave extremes impacting the Northern Hemisphere mostly originate from the northern tropical and extratropical regions in the boreal winter, consistently with the increase in trade and westerly wind activity during this season (Figures 1a and 5a). Conversely, coastal waves in the boreal summer tend to originate rather equally from the tropical and extratropical regions of both the Southern and Northern Hemispheres, which is consistent with the increase in the southern trade and westerly wind activity and potentially in the northern tropical cyclone activity during this time of year. In the Southern Hemisphere, coastal waves are seen to mostly originate year-round from their tropical and extratropical regions (Figure 5b).
However, we observed a significant contribution of the northern tropical region to wave extremes in this hemisphere, in particular in the boreal winter and to some extent in the summer. This could reflect the strong ENSO control on the Northern Hemisphere extratropical storm activity and the seasonal increase in Northern-Hemisphere tropical cyclone activity, respectively. This interannual modulation of paired wind regimes and wave extremes is tackled in the next section.
Climate Modes-Driven Interannual Variability of Pacific Wind-Wave Regimes
Now, the respective control of the two dominant Pacific climate modes, namely, ENSO and SAM, on the interannual variability of atmospheric regimes and their induced extreme wave activity is examined. First, Table 2 shows correlations between seasonally averaged interannual anomalies of extreme wave occurrence averaged in different open-ocean regions and the typical indices characterizing ENSO and SAM variability, Niño3.4 and SAM.
• ENSO: At the peak of El Niño events in the boreal winter, warm sea surface temperature anomalies develop in the central and eastern equatorial Pacific [37]. This in turn promotes a strengthening of the Aleutian low-pressure system associated with an equatorward shift of the westerlies into the eastern subtropical region, as indicated by the regression pattern of surface winds onto Niño3.4 in DJF on Figure 6a. This will induce an increased activity in the tropical northeastern Pacific wave regime, as evidenced by the strong relationship (cf. Figure 6c) between the anomalous occurrence of extreme wave events in this region and an index characterizing the meridional displacement of the northern hemisphere jet-stream (calculated as the difference in surface wind speed between the subtropical and the high-latitude regions delineated by the blue boxes on Figure 6a). In the boreal summer, we observed a strong influence of ENSO on the western Pacific wave regimes and, in particular, in the Tropical North-West region (Table 2, r = 0.60). However, the regression pattern of surface winds onto Niño3.4 in JJAS (Figure 6b) displayed a reduction in trade wind activity related to the onset of El Niño events, which suggests that the increase in extreme waves in this region is not related to this typical climatological wind pattern. 
Instead, the regression showcases an increase in cyclonic surface winds (and associated vorticity) and a decrease in vertical wind shear (not shown), implying a potential strong control of ENSO on wave extremes in the western Pacific through its modulation of tropical cyclone activity [38]. To quantify the tropical cyclone activity in the northwestern Pacific, we calculated the monthly interannual anomalies of the accumulated cyclone energy (ACE), which represents an integrated measure of tropical storm intensity [39], averaged over the entire cyclonic basin (the red box on Figure 6b) following the method described in [40]. These ACE index anomalies are strongly correlated with the anomalous occurrence of wave extremes in this region (cf. Figure 6d), confirming the ENSO control on wave activity through the seasonally modulated increase in tropical cyclone activity, which can even affect the Pacific North-West extratropical coastlines (Table 2).
• SAM: The occurrence of extreme waves in the extratropical south Pacific is significantly linked to SAM interannual variability year-round, with the exception of the Pacific South-West in September-November and December-February (Table 2). The regressions of surface wind speed in austral spring and summer (Figure 7a,c) are indeed characterized by a strong meridional dipole of the jet-related zonal wind structure, marked in particular by a decrease and reversal in the westerlies south of Australia, preventing the generation of large waves in this area (i.e., the Pacific South-West). This pattern is concurrent with an increased intensity, a poleward confinement, and a zonal extension of the polar jet into the Pacific South-East, which promotes an increased generation of large waves affecting this region. This is shown by the significant correlation between an index characterizing the strength of the zonal surface wind dipole (the difference in wind speed between the two blue boxes on Figure 7e) and the anomalous occurrence of extreme waves in the southeastern Pacific region (Figure 7e). Conversely, significant correlations were observed between the anomalous occurrence of extreme waves in the Pacific South-West and SAM in March-May and June-August (Table 2). The regressions of surface winds onto SAM during these seasons displayed a more meandering polar jet-stream, which, in particular, moved closer to Australia and can therefore generate large waves more likely to affect the Pacific South-West (Figure 7b,d). Interestingly, this meandering structure is also associated with a more northwesterly wind direction in the Pacific South-East, which is less likely to affect coastal regions in this area, and with an intensification of the high-pressure system in the central south Pacific. This anomalous anticyclonic circulation promotes more southwesterly winds and associated waves that can reach the tropical Pacific South-East. This is confirmed by the strong relationship between an index characterizing the strength of this high-pressure system, calculated as the sum of the surface wind zonal velocity averaged in the red and blue boxes on Figure 7b, and the anomalous occurrence of large waves in the tropical Pacific South-East.
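The seasonal correlations reported in Table 2 can, in principle, be reproduced by averaging the monthly series over a season and correlating the resulting interannual values with the climate index. The sketch below does this for DJF and the Niño3.4 index; the input series, season handling, and names are illustrative assumptions rather than the study's code.

```python
# Minimal, illustrative sketch of the Table-2 style seasonal correlations:
# DJF means of monthly extreme-event counts in one open-ocean region are
# correlated with DJF means of the Nino3.4 index. Removing the seasonal
# climatology beforehand would only shift each series by a constant and
# would not change the correlation coefficient.
from scipy.stats import pearsonr

def seasonal_mean(monthly, months=(12, 1, 2)):
    """Average the selected months, labelling DJF winters by the January year."""
    sel = monthly[monthly.index.month.isin(months)]
    season_year = sel.index.year.where(sel.index.month != 12, sel.index.year + 1)
    return sel.groupby(season_year).mean()

def seasonal_correlation(extreme_counts, nino34, months=(12, 1, 2)):
    """Pearson correlation (and p-value) between two seasonal-mean series."""
    counts = seasonal_mean(extreme_counts, months)
    climate = seasonal_mean(nino34, months)
    common = counts.index.intersection(climate.index)
    return pearsonr(counts.loc[common], climate.loc[common])
```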
Summary and Discussion

In this study, a new method was introduced to quantify the connections between the occurrence of large waves in the open ocean and coastal wave events all around the Pacific rim over the period 1979-2020. This allowed the evaluation of the oceanic origins and associated large-scale wind regimes of coastal waves in the Pacific basin and therefore a comprehensive assessment of the mean climate and of the seasonal and interannual variability of coastal extreme episodes with regard to their local vs. remote generation.
This statistical analysis revealed a relatively even distribution of large coastal wave event origins, with 54% generated locally and 46% coming from a distant source. In particular, we evidenced a significant proportion of extreme waves propagating across the equator, with 22% of large coastal waves originating from the opposite hemisphere. In addition, while large waves generated at higher latitudes were known to affect tropical coastlines in winter, this analysis uncovered a similar amount of extratropical coastal extremes originating from paired wind-wave tropical regimes (~30%). This extratropical↔tropical propagation of large waves occurs mostly in the austral winter/boreal summer and is related to the seasonally increased activity of both the southern jet-stream, which can spawn extreme waves able to spread towards the eastern tropical Pacific shorelines, and the Northern-Hemisphere cyclonic basins, where strong northward-recurving tropical storms can produce waves energetic enough to reach the northern extratropical regions. Regardless, we found that, compared to the extratropical shorelines, tropical coastlines tend to be under the influence of a wider variety of waves in terms of direction and energy, in particular related to the seasonally alternated activation of waves propagating from the opposite hemisphere and/or extratropical regions. This suggests potentially more important destabilizing morphodynamical effects through more drastic changes in longshore drift in tropical areas as compared to the extratropical regions, where waves have a narrower window of incidence year-round. At interannual timescales, an alternating increase in coastal wave extremes between the western and eastern Pacific was seen to emerge from the distinct seasonal influence on wave regimes of ENSO in the Northern and SAM in the Southern Hemisphere. In particular, ENSO was shown to promote, through atmospheric bridges, an intensification and equatorward shift of the northern jet-stream into the tropical band, which induces a significant increase in large coastal waves along the eastern seaboard in the boreal winter at the peak of El Niño events. Conversely, the El Niño summer onset is characterized by a more intense cyclonic circulation in the tropical western Pacific, driving a more active hurricane season and thus larger tropical-storm waves affecting the western Pacific facade. In the Southern Ocean, we similarly observed a seasonal east-west seesaw in the occurrence of large waves associated with the SAM influence on the polar jet-stream and the extratropical wave regime. In particular, SAM drives a poleward shift and a meridionally homogeneous zonal extension of the jet stream into the Pacific South-East, associated with more extreme waves in this region in the austral spring and summer. On the contrary, SAM's influence in the austral fall and winter is characterized by a meandering jet stream moving closer to the Australian region and favoring occurrences of large waves in the Pacific South-West. Interestingly, this snaking circulation also induces a strengthening of the south-central Pacific subtropical high-pressure system, associated with an increase in southwesterly waves that can spread towards the eastern Pacific tropical regions.
Due to the strong societal impacts of coastal flooding and erosion on a worldwide increasing littoral population, there is an urgency to better understand the complex connections between large-scale climate variability and regional coastal dynamics for more adapted sustainable littoral management procedures [41][42][43]. This study provides a simple framework and a methodology easily implementable to achieve such a goal. In particular, it allows a rigorous and exhaustive quantification of the statistical relationships between the occurrence of local coastal wave extremes and large-scale wind/wave regimes not only restricted to the extratropical forcing patterns, which have long been the focus, but also encompassing the atmospheric forcing related to tropical cyclones' basin-scale activity considered as an integrated wave regime. This comprehensive examination of coastal wave origins emphasized the significant and widely underrated effect of remote wave regimes, which are known for their large destabilizing influence on beach morphology [19,44]. For instance, Ranasinghe et al. [45] and Anderson et al. [46] observed shoreline rotation at embayed beaches, and Trombetta et al. [47] observed an alongshore sediment drift reversal with large consequences for coastal zone management and infrastructures. These remote swells can also drive dramatic overtopping [6] even at storm-free areas, such as in the Gulf of Guinea [1,48], facing the South Atlantic storm track, and in the Pacific due to distant tropical cyclones [17,[49][50][51]. Once the connections between open-ocean wave regimes and regional coastal extremes established, it becomes possible to apprehend their modulation in the context of global climate variability, in particular, with respect to the dominant climate modes. Interestingly, these results highlight the strong difference in the wave-extremes-climate-modes relationships between summer and winter, through the seasonally modulated effect on the interannual variability of tropical and extratropical wave regimes associated with ENSO and SAM. This not only confirms a potentially strong deterministic variability in the Northern-Hemisphere wave activity operating from the nonlinear resonance between El Niño frequencies and the annual cycle as discovered recently (31) but also suggests that a similar mechanism might potentially play out in the Southern Ocean through a SAM-annual-cycle combination mode. Exploring such phase-locked relationships in SAM behavior may shed light on a new range of deterministic variability in the Southern Hemisphere, which might help constrain wave variability, seasonal forecasts, and future climate projection in a region already marked by a strong upward trend in wave extremes ( Figure 8). Because climate modes are usually predictable at seasonal timescales with relatively good confidence, these results pave the way for reliable forecasts of coastal waves and related hazards. For instance, the lead-time of state-of-the-art ENSO forecasts is~3-6 months [52], which therefore offers valuable anticipation for littoral and islands' communities particularly vulnerable in the Pacific basin. In addition, since ENSO's influence on both extratropical and tropical wave regimes extends beyond the Pacific through its control on the planetary jet-streams and tropical cyclone activity respectively [38,53], the framework described here can be replicated in other oceanic basins to evaluate the pantropical ENSO influence on global wave regimes. 
v3-fos-license
2021-12-21T06:22:51.562Z
2021-12-01T00:00:00.000
245338487
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21505594.2021.2010398?needAccess=true", "pdf_hash": "9de356ce25e9bbabe5bdbb94945f638c70472587", "pdf_src": "TaylorAndFrancis", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44782", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "sha1": "9503aabdd87dbeb1ef5226f156f8cc2f71310d5c", "year": 2021 }
pes2o/s2orc
Paeoniflorin reduce luxS/AI-2 system-controlled biofilm formation and virulence in Streptococcus suis ABSTRACT Streptococcus suis (S. suis), more specifically serotype 2, is a bacterial pathogen that threatens the lives of pigs and humans. Like many other pathogens, S. suis exhibits quorum sensing (QS) system-controlled virulence factors, such as biofilm formation that complicates treatment. Therefore, impairing the QS involving LuxS/AI-2 cycle in S. suis, may be a promising alternative strategy for overcoming S. suis infections. In this study, we investigated paeoniflorin (PF), a monoterpenoid glycoside compound extracted from peony, as an inhibitor of S. suis LuxS/AI-2 system. At a sub-minimal inhibitory concentration (MIC) (1/16 MIC; 25 μg/ml), PF significantly reduced biofilm formation by S. suis through inhibition of extracellular polysaccharide (EPS) production, without affecting bacterial growth. Moreover, evidence was brought that PF reduces AI-2 activity in S. suis biofilm. Molecular docking indicated that LuxS may be the target of PF. Monitoring LuxS enzymatic activity confirmed that PF had a partial inhibitory effect. Finally, we showed that the use of PF in a mouse model can relieve S. suis infections. This study highlighted the anti-biofilm potential of PF against S. suis, and brought evidence that it may as an inhibitor of the LuxS/AI-2 system to prevent S. suis biofilm-related infections. PF can thus be used as a new type of natural biofilm inhibitor for clinical application. Introduction Streptococcus suis (S. suis) is one of the most important zoonotic pathogens that mainly colonizes the upper respiratory of pigs, leading to sepsis, meningitis, pneumonia, and toxic shock-like syndrome [1,2]. S. suis not only causes huge economic losses to the pig industry, but as a zoonotic pathogen also poses a threat to human public health [3]. In China, Vietnam, and Thailand, S. suis has caused thousands of human diseases and has been identified as one of the culprits of human bacterial meningitis [4]. Studies have found that S. suis small RNA rss04 can help induce meningitis by regulating the synthesis of capsular and inducing biofilm formation in mouse infection model [5]. The route of intracranial subarachnoid infection in mouse infection model also further confirms the important role of S. suis biofilm in meningitis [6]. The abovementioned research clearly shows that the biofilm of plays a key role in S. suis meningitis. In addition, the ability of S. suis to form biofilms in the host causes persistent infections difficult to eradicate with antibiotics [7], and inhibit the formation of extracellular traps of neutrophils [8]. In fact, it is estimated that approximately 65% of hospital infections and up to 75% of bacterial infections that occur in the human body are related to biofilms. Biofilm formation in bacteria relies on the QS system, a "language" for bacterial communication, which can regulate the activities of bacteria to adapt to the harsh external environment [9]. LuxS/AI-2 is a QS system discovered in S. suis serotype 2 by our group [10]. The core of LuxS/AI-2 system is the protease LuxS involved in the formation of the signal molecule AI-2, which is a by-product of methyl metabolism. Research in Vibrio harveyi (V. harveyi) indicated that AI-2 can affect the ability of bacterial biofilm formation by inhibiting the expression of type III secretion system genes [11]. Previous work by our research group showed that the adhesion, gene expression, and virulence of S. 
suis in a biofilm state differ from the planktonic state, and it is speculated that the virulent strains of S. suis may rely on the formation of biofilm to achieve their infectivity and their ability to exhibit drug resistance [12][13][14][15]. Given the ability of bacteria to develop resistance to traditional antibiotics, it is particularly important to identify/develop antibacterial agents exhibiting novel antibacterial modes of action. A promising strategy is based on the inhibition of biofilm formation by pathogens. Recent studies have shown that natural products have advantages over traditional antibiotics because of their ability to regulate the formation of biofilms by bacterial pathogens [16,17]. The benefits of natural products in inhibiting biofilms are related to their high specificity and low toxicity [18]. Plants are the most extensive source of natural products, some of which have been found to exhibit antibiofilm properties. For instance, cranberry proanthocyanidins have been reported to interfere with the quorum sensing of bacteria by competing with the binding of spontaneous inducers and receptors [19]. Similarly, extracts from green tea and onion can suppress the biofilm by interfering with the bacterial quorum sensing system [20,21]. In addition, the sub-minimum inhibitory concentration (sub-MIC) of rhubarb water extract can inhibit the biofilm of S. suis by inhibiting the histidine kinase and the two-component signal transduction systems (TCSs) constituent proteins of histidine kinase and response regulator. However, plant extracts as new biofilm inhibitors also have some drawbacks, such as toxicity, reactivity and instability, and the effective ingredients of the extract are often not clear [22]. Given that numerous plants are edible and considered safe, it is necessary to continue to search for potential anti-biofilm drugs. Paeoniflorin (PF) is a monoterpene glycoside compound found in many Paeoniaceae plants such as peony, for which various pharmacological effects including antibacterial, antioxidant, anti-inflammatory, and anti-tumor have been identified [23]. Studies have found that PF can inhibit the formation of carbapenem-resistant Klebsiella pneumoniae (CRKP) biofilm and have a significant inhibitory effect on CRKP [24]. In this work, we provide evidence that PF inhibits the formation of S. suis biofilm and its virulence in a mouse model by affecting the synthesis of AI-2 signaling molecule of the LuxS/AI-2 system. An in-depth analysis of the biofilm inhibitory mechanism of PF at the molecular level was also performed in view to develop a new anti-S. suis biofilm inhibitor. Bacterial strains, growth conditions, and reagents S. suis HA9801, Vibrio harveyi BB120, and V. harveyi BB170 were used to investigate the anti-biofilm activities of PF. S. suis HA9801 is a virulent serotype 2 strain isolated from a diseased swine in the HaiAn City in 1998. A luxS mutant (ΔluxS) of S. suis and a complemented mutant strain (CΔluxS) were constructed in our previous study [10]. S. suis was grown in Todd Soy broth (TSB) at 37°C or plated on TSB agar. Escherichia coli (E. coli) BL21 (DE3) was transformed with pET28-luxS in our previous study [25]. The pET28-luxS (DE3) was grown at 37°C in LB medium supplemented with 50 μg/ mL of kanamycin. V. harveyi BB120 and V. harveyi BB170 were kindly provided by Professor XianGan Han from Shanghai Veterinary Research Institute Chinese Academy of Agricultural Sciences (Shanghai, China). V. 
harveyi was grown in autoinducer bioassay (AB) medium at 28°C. PF (CAS: 23180-57-6, HPLC purity ≥99.5%) was obtained from the 3B Scientific Corporation Limited (Wuhan, Hubei, China). PF was transferred to pre-weighed vials and stored at −20°C. Prior to use, PF was dissolved in distilled water and filter-sterilized. The AI-2 precursor molecule, (S)-4,5-dihydroxy-2,3-pentanedione (DPD), was purchased from Omm Scientific Inc. (Dallas, TX) and used at a concentration of 3.9 μM. Biofilm inhibition assay The biofilm formation ability of S. suis was monitored as described previously [26]. S. suis HA9801 was grown in TSB medium for 12 h at 37°C, and the bacterial culture was then diluted with fresh TSB medium to a concentration of 10⁶ CFU/ml for the anti-biofilm assay. A PF stock solution was freshly prepared in distilled water at a concentration of 1.6 mg/ml. After filtration through a 0.22 μm aqueous filter, the stock solution was diluted to different concentrations ranging from 6.25 to 25 μg/ml in sterile culture medium, and an equal volume was added to the above bacterial suspension and incubated at 37°C for 24 h without shaking. A control culture with S. suis and no PF was also performed. Following growth, planktonic bacteria were removed and the biofilm was stained with 1% crystal violet for 10 min and then washed with phosphate-buffered saline (PBS). After adding 95% ethanol to release the dye, the absorbance at 595 nm (OD595) was recorded with a Tecan GENios Plus microplate reader (Tecan, Austria). Biofilm formation by the ΔluxS mutant of S. suis HA9801, previously constructed by our research group [10], was assessed as described above. The assay was performed in the presence of DPD (final concentration of 3.9 μM) and PF at 25 μg/ml. A control culture with the ΔluxS mutant and no PF was also performed. All assays were performed in triplicate and repeated three times. Scanning electron microscopy of biofilms Biofilms were observed for the following cultures: S. suis HA9801; S. suis HA9801 + 25 μg/ml PF; ΔluxS strain; ΔluxS + 25 μg/ml PF + AI-2 (3.9 μM DPD). An overnight culture of S. suis was diluted to reach a concentration of 10⁵-10⁶ CFU/ml. Then, 1 mL of the culture was added to a 24-well microplate (In Vitro Scientific, Hangzhou, China) containing a sterile cell slide (0.5 cm²). After culturing for 24 h at 37°C, the cell slides were rinsed with sterile PBS (0.2 M, pH 7.2) to remove planktonic and loosely bound bacteria. The biofilms were treated with 2.5% (w/v) glutaraldehyde for 6 h, washed with PBS (0.2 M, pH 7.2), and fixed in 1% osmium tetroxide. The samples were then dehydrated in a graded ethanol series (25, 40, 55, 75, 90, and 100% ethanol) and handled carefully throughout the drying process to prevent damage to the biofilm. Finally, they were sputter-coated with gold (10 mA, 3 min) and observed by SEM (JSM-5610LV, Japan). Confocal laser scanning microscopy of biofilms Biofilms formed by S. suis HA9801, S. suis HA9801 + 25 μg/ml PF, the ΔluxS strain, and ΔluxS + 25 μg/ml PF + AI-2 (3.9 μM DPD) were also examined by confocal laser scanning microscopy (CLSM) (Carl Zeiss LSM800, Germany). Biofilms were formed in a 24-well microplate containing a round coverslip according to the above method. Following growth, the round coverslips were gently washed three times in PBS to remove planktonic and loosely attached bacteria. After drying at room temperature, the biofilms were labeled with SYTO 9 according to the manufacturer's instructions for the LIVE/DEAD Biofilm kit (ABI L10316, Invitrogen, USA).
The stained biofilms were observed by CLSM at a magnification of 630×. Capsular polysaccharide formation assay A 10-mL overnight culture of S. suis (HA9801, ΔluxS) was used to inoculate 990 mL of TSB medium supplemented with PF at a final concentration of 25 μg/ml, and the culture was incubated at 37°C for 24 h. Control cultures with no PF were also prepared. After centrifugation at 10,000 g, the bacterial pellets were suspended in 10 mL of glycine buffer (0.1 M, pH 9.2), and then 100 mg of crystalline salt-free egg white lysozyme was added. The bacterial suspensions were incubated at 37°C for 6 h with shaking (100 rpm). After centrifugation at 10,000 g, proteinase K at a final concentration of 100 μg/ml was added to the supernatant, and incubation was carried out at 55°C for 2 h. CaCl₂ at a final concentration of 0.1 M was added, and the solution was stirred for 1 h prior to adding 25% (v/v) absolute ethanol. After 2 h at 4°C, the solution was centrifuged at 8000 g. To the supernatant, 80% (v/v) absolute ethanol was added and the solution was kept overnight at 4°C prior to centrifugation (8000 g, 4°C) to harvest the capsular polysaccharides (CPS). The CPS was quantified by the previously described phenol-sulfuric acid method [27]. The change rate of CPS was calculated according to the following equation: Change rate (%) = 100% × (carbohydrate content of sample group − carbohydrate content of S. suis group)/carbohydrate content of S. suis group. Extracellular polysaccharide (EPS) formation assay The effect of PF on EPS formation by S. suis and the ΔluxS mutant strain was determined by a previously described method [28]. TSB medium supplemented with PF at a final concentration of 25 μg/mL was inoculated with an overnight inoculum (1%) of S. suis (HA9801, ΔluxS). After incubation at 37°C for 24 h, 1 ml of the culture was centrifuged for 10 min (12,000 g, 4°C), and the supernatant was filtered (0.22 μm aqueous filter) prior to adding 3 mL of precooled ethanol; the mixture was left to stand at 4°C for 24 h. The solution was then centrifuged (10 min, 12,000 g, 4°C) and the pellet, which contains the EPS, was harvested and suspended in 1 mL of deionized distilled (dd) water. Using glucose as the standard, the EPS content was determined by the phenol-sulfuric acid method [27]. The equation of the standard curve is y = 0.0581x + 0.0913 (R² = 0.9969). The change rate of EPS was calculated according to the following equation: Change rate (%) = 100% × (carbohydrate content of sample group − carbohydrate content of S. suis group)/carbohydrate content of S. suis group. AI-2 activity assay To determine the effect of PF on the activity of AI-2 [29], S. suis HA9801 and ΔluxS were grown overnight at 37°C, and the bacterial cultures were diluted to 10⁵ CFU/ml and divided into three test groups: S. suis; S. suis with 25 μg/ml PF; ΔluxS. The above-mentioned diluted bacterial cultures were incubated at 37°C for 12 h. During this period, 1-ml aliquots of the cultures were harvested at 4, 6, 8, 10, and 12 h, and centrifuged at 10,000 g for 10 min at 4°C. The negative control was the supernatant obtained by centrifugation of E. coli DH5α under the above culture conditions. The supernatants were filtered through a 0.22 μm filter and stored at −80°C. In order to detect AI-2 activity in each test group, V. harveyi BB170 cultured overnight at 28°C was diluted 5000 times with AB medium.
Ninety μL of the BB170 diluted culture along with 10 μL of AI-2 supernatants (prepared from the above test groups) were incubated at 28°C in the dark for 6 h, and the bioluminescence value was measured using a Promega luminometer at a wavelength of 490 nm. The test was repeated 3 times independently. The test results are displayed as a ratio: luminescence value of each test group/luminescence value of E. coli DH5α. Molecular docking assay As previously described [30], a virtual molecular docking analysis was conducted to determine how PF interacts with the LuxS protein. The chemical structure of PF was downloaded from PubChem, while the three-dimensional structure of the LuxS protein of S. suis HA9801 was previously reported by our group. The LuxS protein model was pre-processed with the SYBYL-X 2.1 software (Tripos, USA), including hydrogenation, side-chain repair, and deletion of water molecules. Finally, the AMBER force field was used to minimize the energy of the protein. PF was prepared by adding hydrogen atoms and Gasteiger-Hückel charges, optimized with the Tripos force field of the SYBYL-X 2.1 software (convergence criterion: 0.005 kcal/(Å mol)), and saved in MOL2 format. Inhibition assay of PF on LuxS enzyme activity The LuxS protein was purified from BL21 competent cells transformed with the plasmid pET28-luxS as previously reported by our group [25]. The LuxS expression strain was grown in LB medium containing 50 μg/ml kanamycin at 37°C to an OD600 of 0.6. Then, 0.1 mM isopropyl-β-D-thiogalactopyranoside (IPTG) was added to induce LuxS expression and the bacterial culture was further incubated at 37°C for 5 h. The cells were collected by centrifugation (12,000 g, 4°C), suspended in lysis buffer, incubated at 25°C for 10 min, and lysed by ultrasonic treatment (working power: 400 W; 80 cycles of 5 s with a 10-s rest between cycles). The lysate was centrifuged (40,000 g, 4°C) for 30 min, and the supernatant was retained after filtration (0.45 µm pore size). The clarified lysate was loaded onto a 1 ml HisTrap HP column (GE Healthcare Life Sciences) using a GE Akta Pure system (GE Healthcare Life Sciences). Proteins were eluted from the column using a linear gradient of elution buffer (20 mM Tris-HCl pH 8.0, 300 mM NaCl, 1 M imidazole). Protein concentration was determined using a bicinchoninic acid (BCA) protein assay (Supplementary Materials 3) [25]. The preparation method of the LuxS substrate (SRH) was based on a previously published report [31]. Briefly, SAH (Sigma) was dissolved in 1 M HCl at a concentration of 1 mg/ml and incubated in a boiling water bath for 20 min. Then, the pH was adjusted to 7.2 with 1 M NaOH, and the SRH solution was diluted in a 200 mM sodium phosphate buffer (pH 7.2) to a concentration of 4 mM. LuxS activity was determined by quantification of homocysteine using the Ellman method [32]. The LuxS reaction mixture (total volume = 100 μl) contained 0.5 mM EDTA, 200 mM sodium phosphate buffer (pH 7.2), and 20 μg/ml LuxS. The reaction was initiated by adding different concentrations of SRH (1-1000 μM) and performed at 37°C for 5 min. Finally, 100 μl of 2 mM 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) was added, and the mixtures were further incubated at 37°C for 10 min. The absorbance at 412 nm was monitored with a Synergy HT Multi-Detection Reader (BioTek Instruments, USA). Then, the standard curve (Y = 0.01300·X + 0.006456, R² = 0.9997) was used to calculate the homocysteine concentration (Figure S1).
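To illustrate how this enzymatic readout can be turned into kinetic parameters, the short Python sketch below converts absorbance at 412 nm to homocysteine concentration using the standard curve reported above and then fits a Michaelis-Menten curve to estimate Km and Vmax. The substrate concentrations and absorbance values are invented for the example; only the standard-curve coefficients come from the text, and this generic least-squares fit stands in for the GraphPad nonlinear regression actually used (described next).

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard curve reported in the text: A412 = 0.01300 * [homocysteine] + 0.006456
def absorbance_to_homocysteine(a412):
    return (a412 - 0.006456) / 0.01300

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# Hypothetical data: SRH concentrations (μM) and A412 readings after a 5-min reaction.
srh_um = np.array([1, 5, 10, 25, 50, 100, 250, 500, 1000], dtype=float)
a412 = np.array([0.010, 0.022, 0.035, 0.065, 0.100, 0.140, 0.185, 0.205, 0.215])

# Convert absorbance to product concentration, then to an initial rate (μM/min).
product_um = absorbance_to_homocysteine(a412)
rate = product_um / 5.0  # 5-min reaction time

popt, _ = curve_fit(michaelis_menten, srh_um, rate, p0=[rate.max(), 50.0])
vmax, km = popt
print(f"Vmax ≈ {vmax:.2f} μM/min, Km ≈ {km:.1f} μM")
```

Repeating such a fit for each PF concentration would show whether Km rises while Vmax stays roughly constant, the signature of competitive inhibition discussed in the Results.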
Km (LuxS) value was determined by Michaelis-Menten equation nonlinear equation using graphPad 9.0. According to the above experimental method, different concentrations of PF (6.25, 12.5, 25 μg/ ml) were added to the reaction mixtures, and the Km values were determined. Cell viability assay Cell viability was detected using a previously described protocol [33]. Briefly, human laryngeal epidermoid carcinoma (HEp-2) cells were cultured in RPMI 1640 (PM150110, Procell, China) containing 10% bovine serum (16,170,060, Thermo Fisher, USA) and were seeded (1 × 10 4 cells) into the wells of a 96-well microplate and allowed to adhere for 24 h at 37°C under 5% CO 2 . The cells were treated with two-fold serial dilutions of PF (25,50,100,200, and 400 μg/ml) for 10 min. Then, PF was washed away and fresh culture medium was added prior to further incubate for 24 and 48 h. Cell viability was determined using an MTT (3-[4,5-diethylthiazol-2-yl]-2,5diphenyltetrazolium bromide) colorimetric assay according to the manufacturer's protocol. The cell survival rate is expressed as a percentage of the control value. Mouse protection assay A mouse protection assay was performed according to a protocol previously published by our group [15]. Mice were intraperitoneally injected with 1 × 10 6 CFU/ml of S. suis to induce infection. The mice infected with S. suis were randomly divided into five groups, each with 10 mice. The protective treatment administrated by injection of 100 µl through the tail vein consisted in: Group 1: WT + solvent (distilled water) group; Group 2: WT + 25 μg/g PF; Group 3: WT + 50 μg/g PF; Group 4: WT + 100 μg/g PF; Group 5: ΔluxS + solvent. The first administration was 2 h after the establishment of the S. suis infection, and thereafter twice a day. The mortality of the mice was recorded after 7 days. Mouse anti-infection assay S. suis was cultured overnight at 37°C, and then diluted with sterile PBS to a concentration of 5 × 10 6 CFU/mL. Fifteen female Balb/c mice (4-6 weeks) were equally divided into four groups, one of which was a blank control group without any treatment, and a group was inoculated with 200 μl of ΔluxS by intraperitoneal injection. Each mouse in the other two groups was inoculated with 200 μl of S. suis HA9801 by intraperitoneal injection. Then, different concentrations of PF (0, 100 μg/g) were used for treatment via tail vein injection. The first treatment was two hours after the infection of S. suis, and then two treatments a day thereafter. All mice were sacrificed two days later, and the brain, lung, liver, and spleen were dissected. Part of the brain, lung, liver, and spleen were aseptically taken and fixed in 4% paraformaldehyde (pH = 7) for 24 h. The tissues were embedded in paraffin, cutted into 4 μm-thick sections, stained with hematoxylin and eosin, and observed with an optical microscope. The remaining brain, lung, liver, and spleen tissues were added into 1 ml of PBS, homogenized, and serially diluted. The bacterial CFU counts were then determined. Statistical methods The significance of the data in Figure 1d, 2a, 2e and 2g was analyzed according to unpaired Student's twosided t-test. *p < 0.05, **p < 0.01, and ***p < 0.001. The samples/animals were randomly allocated to experimental groups and processed for blind evaluation. Anti-biofilm properties of PF MIC and MBC values of PF against S. suis HA9801 were 400 and >1600 μg/ml, respectively. Then, we used the CFU method to analyze the effect of sub-inhibitory concentrations of PF on the growth kinetics of S. 
suis. As shown in Figure 1c, when the concentration of PF ≤ 25 μg/ml (1/16 MIC), the growth of S. suis was not significantly different from that of the control (no PF). Furthermore, the semi-quantitative determination of biofilm by crystal violet staining showed that PF at 12.5 and 25 μg/ml significantly inhibits biofilm formation by S. suis, while a concentration of 6.25 μg/ml had no biofilm inhibitory effect. PF affects EPS production through LuxS/AI-2 system for anti-biofilm activity In order to identify the mechanism by which PF affects the formation of biofilm in S. suis, we monitored changes in the amounts of AI-2 secreted when bacterial growth was achieved in the presence of PF. As shown in Figure 2a, compared with control growth (no PF), the amount of AI-2 signal molecule secreted by S. suis HA9801 was significantly reduced for growth in the presence of PF, as also observed for the growth of ΔluxS (in the absence of PF). The above suggests that PF modulates the production of the signal molecule AI-2 by the LuxS/AI-2 system or/and directly acts on the AI-2 molecule during the growth process. To verify this hypothesis, we monitored biofilm formation by ΔluxS in a culture medium containing PF and supplemented with AI-2. As shown in Figure 2e, there was no significant difference between the biofilm formation ability of the wild-type strain (HA9801) grown in the presence of PF (25 μg/ml) and the ΔluxS mutant (no PF). Moreover, adding the AI-2 signal molecule to ΔluxS grown in the presence of PF (25 μg/ml), restored its biofilm formation ability, which was not significantly different from that of the S. suis wild strain. Biofilms formed under the above conditions were observed by laser confocal microscopy ( Figure 2b) and scanning electron microscopy (Figure 2c). The ΔluxS strain grown in the presence of PF and AI-2 formed a dense biofilm (Figure 2b) with a threedimensional structure composed of bacteria and biofilm matrix, with channels for allowing nutrient exchange with the external environment, similar to that formed by the wild strain (Figure 2c). On the contrary, the biofilms formed by the wild strain in the presence of PF and the ΔluxS mutant were similar; the bacterial cells were more dispersed and aggregated less, and the biofilm matrix was greatly weakened. These results indicate that PF can weaken the biofilm matrix through LuxS/AI-2 system, but will not inactivate AI-2. Since the biofilm matrix is mainly composed of polysaccharides, it is of interest to determine the effect of PF on the capsular polysaccharide and extracellular polysaccharide of S. suis. The result shown in Figure 2f provided evidence that PF did not inhibit the production of CPS through LuxS/AI-2. In addition, as shown in Figure 2g, in the presence of PF, the production of EPS by S. suis was markedly reduced, even not significantly different from ΔluxS. Finally, according to previous results obtained by our research group, we selected luxS gene-regulated virulence genes to perform quantification by qPCR. The results showed that PF has a down-regulating effect on the transcription of these virulence genes, to reach levels comparable to those observed with ΔluxS. (Figure 2h). This finding provides direct evidence that the reduction in the production of EPS induced by the presence of PF is a key factor in the weakening of the biofilm matrix. In conclusion, these results strongly indicate that PF can affect the biofilm of S. suis through the LuxS/AI-2 system. 
The gene expression level for the wild type strain (HA9801) in the absence of PF was set at 100%, and the gene expression level for the wild type strain + PF (25 μg/ml) and ΔluxS were relative to that of the wild type strain. In figures (a), (e), (f), (g), and (h), data are shown as the mean ± SD. Statistical significance was assessed by unpaired Student's two-sided t-test compared to the control group. ** p < 0.01, *** p < 0.001. All experiments were performed in triplicate. Molecular interaction between PF and LuxS The above results (Figure 2) suggest that PF can affect the LuxS/AI-2 system, but cannot inactivate the AI-2 signaling molecule. We thus analyzed the interaction between PF and LuxS using the Ellman method. As shown in Figure 3c, when the concentration of added PF increases, the Km value of LuxS gradually increases, although the maximum reaction rate does not change significantly, which indicates that PF is a competitive inhibitor of LuxS. In order to further understand the interaction between PF and LuxS, we conducted a virtual docking experiment. As shown in (Figure 3a), the threedimensional structure indicates that PF interacts with the LuxS active site and forms protein-ligand interactions with the key amino acid residues of LuxS. More specifically, the two-dimensional interaction map (Figure 3b) clearly shows that PF forms hydrogen bonds with ARG102, ILE109, SER160, and CYS116, respectively. Effect of PF in a mouse model of S. suis infection In order to evaluate the potential therapeutic effect of PF as an inhibitor of the LuxS/AI-2 system, we used a mouse infection model. As shown in Figure 4b, PF showed a protective effect against S. suis HA9801 infection at doses of 25, 50, and 100 μg/g. Among them, PF at the doses of 50, 100 μg/g, the lethality of S. suis on mice was not significantly different from that of the ΔluxS group. From the dissection of the mouse organs (Figure 4c), it was found that treatment with PF resulted in almost no edema in the brain of the mice, while the lungs, liver, and spleen have milder lesions. The total bacterial count in brain, liver, spleen and lung (Figure 4a) of the PF-treated mouse group was significantly lower than that of control group, but there was no significant difference from the ΔluxS group. Histological analysis (Figure 4d) showed that there were no obvious brain lesions in the PF-treated group. Although the alveoli were slightly congested, there was inflammatory cell infiltration in the portal area of the liver, and moderate congestion in the spleen. However, compared with the untreated mouse group, the PF-treated group showed signs of remission. In addition, assessment of cell viability showed that PF (<100 μg/ml) was nontoxic ( Figure S2) for human laryngeal epidermoid carcinoma cells (HEP-2). All the above clearly shows that PF may be an effective therapeutic agent to reduce S. suis infections by impairing the virulence of the bacterium. Discussion S. suis type 2 is a highly pathogenic zoonotic pathogen [1] that causes serious economic losses in the pig industry, in addition to representing a serious threat to human life [3]. The strain HA9801 used in this study is a serotype 2 showing a typical high virulence and a biofilm-forming ability. Biofilms of S. suis bind to extracellular matrix proteins in both endothelial and epithelial cells and causes persistent infections. We previously identified nine unique proteins in the biofilm of S. suis through comparative proteomics analysis [34]. 
Further research found that the pdh significantly up-regulates the adhesion and invasion ability of S. suis [13], and the otc can improve the pathogenicity in the mouse abdominal infection model [15]. Our previous studies have shown that S. suis serotype 2 can regulate biofilm formation and virulence factor expression through the LuxS/AI-2 density sensing system, leading to a marked resistance to fluoroquinolones and tetracycline antibiotics [10,12,14,15,35]. Currently, there is little treatment option for infections caused by S. suis serotype 2, and consequently the search for antibiotic substitutes with the ability to inhibit bacteria in a biofilm state is an active field of research. Studies have found that sub-inhibitory concentrations of Syringopicroside [36] and Emodin [37] can effectively inhibit the formation of S. suis biofilm. In addition, the essential oils of cinnamon, thyme, and winter fragrant can also significantly inhibit the biofilm formation ability of S. suis [38]. In recent years, the therapeutic effects of various medicinal plants and natural plant compounds exhibiting anti-biofilm activities have attracted much attention [18]. In most studies, plant materials are used in the form of crude extracts, decorations or tinctures. Although these simple pharmaceutical preparations are often effective, their mechanisms of action are often not scientifically verified. In this study, PF was found to significantly reduce the biofilm formation ability (Figure 1c) and virulence of S. suis at a concentration that does not affect the growth rate (Figure 1d). Notably, PF can reduce the production of AI-2 in S. suis, but it cannot inactivate AI-2 (Figure 2a). This shows that PF can directly or indirectly affect the biofilm formation ability and virulence of S. suis through the LuxS/AI-2 system. Moreover, the active site of PF is likely related to LuxS. In agreement with our observations, it was previously demonstrated that PF can affect Candida albicans (C. albicans) infection and inhibit the formation of carbapenemase-producing Klebsiella pneumonia (K. pneumonia) biofilm through QS system [24,39]. The three-dimensional structure of the biofilm of S. suis grown in the presence of sub-inhibitory concentrations of PF, as observed by scanning electron microscopy and laser confocal electron microscopy was found to be weakened. Biofilms are defined as aggregates of microorganisms embedded in polysaccharides secreted by them [40]. Therefore, we examined the effect of PF on the production of exopolysaccharides by S. suis. Previous reports have proved that bacterial polysaccharides are the matrix of bacterial biofilms [40]. The main components of the bacterial polysaccharides are divided into CPS and EPS. CPS are structural cell surface components of bacteria. Studies have shown that clinical pneumococcus encapsulated by CPS has impaired capacity to form biofilms [41]. The extracellular polysaccharide secreted in the mucilage can promote the adhesion between cells, thereby contributing to the formation of biofilm [42]. Further, we quantified the amounts of the extracellular polysaccharide produced by S. suis. The results showed that PF does not affect CPS content of cocci, but decrease the content of EPS. Obviously, the biofilm is probably affected by the weakening of EPS. This may be a mechanism by which PF can lead to the weakening of the S. suis biofilm. Our previous study showed that AI-2 overexpression [14] and deletion of luxS gene [10] can regulate some virulence factors of S. suis. 
In this study, PF can also regulate the transcription level of these virulence factors. Further studies are required to better determine how PF affects the biofilm and virulence of S. suis. In this study, PF was found to affect the expression of AI-2 signaling molecules in S. suis. It is well known that the core of LuxS/AI-2 quorum sensing system is the AI-2 signal molecule synthetase LuxS. The enzyme is involved in the formation of HCY and DPD, and DPD forms AI-2 through self-cyclization, a furanone acyl boronic acid diester structure. AI-2 is actually a byproduct of bacterial methyl metabolism (Figure 5a). Through the determination of the AI-2 concentration in S. suis, we found that PF does not affect the AI-2 signal molecules added to S. suis, but it can downregulate the amount of AI-2 signal molecules secreted by S. suis itself. Therefore, the key enzyme involved in the regulation of the synthesis of AI-2 signal molecules-LuxS enzyme in S. suis is likely the target of PF (Figure 5b). Our group previously purified the LuxS enzyme from S. suis type 2 and analyzed its structure in the early stage [25]. In this study, using a virtual molecular docking analysis, we demonstrated that PF may bind to LuxS enzyme, although further research is needed to further confirm these findings. We showed that PF can indeed inhibit LuxS enzyme activity, thus affecting the production of S. suis AI-2. Previous reports have proven that the quorum sensing system plays an important role in the formation of bacterial biofilms. Microorganisms use this information exchange, called Quorum Sensing (QS), to induce infectious diseases in eukaryotes, regulate their proliferation, and express their pathogenicity through QS, thereby evading the eukaryotic defense system. Further, the ability of PF to alleviate the symptoms of S. suis infections and to reduce colonization in vivo suggests broader applications aimed to prevent or treat S. suis associated infections ( Figure 4). In conclusion, our results indicate that PF interferes with the activity of the luxS enzyme in the LuxS/AI-2 quorum sensing system of S. suis, thereby reducing the secretion of EPS and attenuating the virulence, which ultimately leads to the decrease of pathogenicity. Therefore, PF may be used to guide the development of new anti-biofilm drugs to control S. suis infections. Disclosure statement No potential conflict of interest was reported by the author(s). Funding This work was supported by the National Natural Science Foundation of China(32172852, 31902309), Central Plains Scholars Fund of Henan Province (202101510003) and Funded Project of Henan Province Traditional Chinese Medicine Industry Technology System. Author contributions Formulation of overarching research goals and aims: YW, XGH and LY. Conducting a research and investigation process, specifically performing the experiments: JPL, CLM, and QYF. Application of statistical, mathematical, computational, or other formal techniques to analyze or synthesize study data: JPL, MYJ, and HZ. Preparation, creation, and/or presentation of the work, specifically writing the initial draft: XLZ, LYS, and JPL. Preparation, creation, and/or presentation of the work, specifically critical review, commentary, or revision: JPL, DG, LY and YW. The dashed box on the left is a diagram of the mechanism of PF affecting the formation of signaling molecule AI-2, and the dashed box on the right is a diagram of the mechanism of normal AI-2 signaling molecule formation. 
Data availability statement The data used and/or analyzed during the current study are available from the corresponding author on reasonable request.
v3-fos-license
2023-03-12T15:48:42.955Z
2023-03-08T00:00:00.000
257461249
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/ijz/2023/8863486.pdf", "pdf_hash": "43dc9343d136000cd2edbad17c1712becd1a105c", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44784", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "d752d1793925fc9ca5d46e73d93c22ecd742a025", "year": 2023 }
pes2o/s2orc
Spatial and Temporal Monitoring of North African Turtle Doves Streptopelia turtur arenicola (Hartert, EJO, 1894): First Migrants Arrive Early and Select Nesting Trees next to Foraging Resources while Second Breeders' Wave Breed around Earlier Nests Introduction The Turtle Dove Streptopelia turtur (Linnaeus, 1758) is a long-distance migrant Columbidae that migrates between Europe and Asia as breeding habitats (March-September) and Sub-Saharan Africa as wintering grounds between October and February [1][2][3]. In Europe, Streptopelia turtur is mentioned as a breeder in Spain, France, the UK, and Germany in the west [1,4,5], as well as in Turkey, Greece, and Bulgaria in the east [6,7]. In Africa, the species is recorded in North Africa as a breeding migrant [8][9][10], while in Sub-Saharan Africa it is considered a winterer [2]. On the other hand, the Turtle Dove is a remarkable example of a European long-distance migrant bird that has suffered a rapid and severe decline across its western European range (−78% in Great Britain from 1980 to 2020, as well as −70% in the Iberian Peninsula, mainly in Spain, from 1980 to 2017) [11]. Therefore, it has been ranked as "Vulnerable" globally and "Near Threatened" within the northern slope of the Mediterranean basin following recent evaluation [12,13]. Potential causes responsible for the species' deterioration include deprivation of breeding sites [14], scarcity of food availability due to the intensification of farming activities [15][16][17], unsustainable hunting policies [18], and variation in ecological conditions throughout the migration flyways [16,19]. In Morocco, studies of the North African Turtle Dove S. t. arenicola were mostly conducted in farmlands, principally olive, orange, and apple orchards [31,33,47]. These studies have detailed breeding biology, reproductive success, and threatening factors. However, the selection of nesting trees inside breeding orchards has been neglected in Europe [41,48] and North Africa [9,30,47,49]. This element is of great importance in clarifying patterns of nesting site selection with regard to orchard characteristics, tree heights, and disturbing factors [49]. Considering the vulnerable status and low population densities, an understanding of such elements is suggested to improve management measures in the agricultural habitats mostly colonized by this declining game species. This study aimed to (i) map the microdistribution of nests built by North African Turtle Doves inside orange orchards, (ii) analyse breeding parameters, including chronology and success, with regard to distribution in the monitored orchards, and (iii) compare breeding parameters between earlier and later clutches. These elements are suggested to fill the gap concerning the zonation and nesting strategies of Turtle Doves at selected breeding sites [30]. Study Area. Fieldwork was conducted in the Beni Mellal-Khenifra region, located in the center of Morocco (Figure 1). The study area is dominated by various climatic stages linked to altitudinal zonation, from the plains (400 m altitude) to the mountains (up to 1000 m altitude), and this induces a spatial variability of precipitation and temperature. The rainfall regime in the mountains (northern slopes of the Middle and High Atlas) is Mediterranean with oceanic influence, with annual precipitation between 550 mm and 700 mm in Azilal and up to 1000 mm in the High Atlas. In contrast, precipitation in the Beni Mellal and Tadla plains is low (around 436 mm). Equally, temperatures vary from 1.1°C in January to 35.7°C in August.
However, these climatic conditions are subject to strong interannual variability [50]. To monitor the migratory Doves, we selected one orange grove in Abou Khayma El Bazzaza village, located in the north of Beni Mellal (Figure 1). The grove covers about four hectares with 1,182 trees of Valencia late (Citrus sinensis), planted in lines and separated by 8 meters between trees. Cereals and legumes surround the grove on the south and east, while the west and north are bordered by other orange groves. The irrigation system is installed underground, and the water is turned on weekly. Breeding Chronology. Monitoring of the area started at the beginning of March (2016), considered the date of arrival for migrant Doves in Morocco and Northwest Africa [8,31]. We examined the dates of first arrivals to the region, and after the breeding season, we noted the date on which the last Doves were observed in the area (last departure dates). After the arrival of the first individuals, we conducted weekly visits to the study orchard to survey breeding activities (during the beginning of April, the orchard was visited twice to identify the first nesting attempts, while from May on, visits were reduced to once per week due to lower nesting activities). Breeding chronology, including the construction of nests, egg laying, hatching, and fledging of chicks, was monitored from the first week of March to the last week of September. We noted the evolution of nests, eggs, and chicks, as well as the failed ones. Failure factors were noted per visit and nest for each breeding stage. Spatial Distribution. To evaluate the spatial distribution of breeders inside the study orchard, we noted nesting trees with specific numbers and dates. To be more accurate, we used the lines and rows of citrus trees (Valencia late) as coordinates, as explained in Figure 1. For each nest found, we sanded the tree trunk and marked the nest number using sandpaper and a permanent marker. This method made it easy to find the position of each nest during subsequent visits (monitoring of nests from construction to the success or failure of chicks). We used a selfie stick to photograph the content of the nest and a piece of paper containing the plan of the orange grove to locate each nest with a symbol representing its content. Once a nest was located, it was monitored during each visit, from construction to fledging or loss of the clutch. Further, we noted the distance of nests to the cereals and legumes, to the central zone of the orchard (epicenter), and to the periphery of the orchard (periphery next to cereals and periphery next to other orange orchards, as clarified in Figure 1), based on the distance separating the nesting citrus trees and the targeted zone. Statistics. The breeding season was divided into first and second phases based on a long break in nest construction and egg-laying (hatching and fledging were not considered since failure factors are likely to affect them). Reproductive rates for nests (occupied nests/constructed nests), eggs (hatched eggs/laid eggs), and chicks (chicks that survived/hatched chicks) were calculated for the entire season, from the first nest to the last chicks, and for each breeding phase. We checked for normality and homogeneity of variance for all breeding parameters (variables) via the Kolmogorov-Smirnov test. We compared breeding success rates and failure factors (predation, desertion, destruction, and unhatched eggs) among breeding stages (nesting, laying, hatching, and fledging) using a one-way ANOVA test.
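As a purely illustrative sketch (the analyses in this study were run in SPSS), the Python lines below show how the normality check and the one-way ANOVA just described could be set up; the per-stage success-rate values are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-visit success rates (%) for each breeding stage.
nesting  = np.array([90, 88, 92, 95, 89], dtype=float)
laying   = np.array([70, 65, 72, 68, 66], dtype=float)
hatching = np.array([58, 60, 55, 62, 57], dtype=float)
fledging = np.array([92, 90, 94, 91, 93], dtype=float)

# Normality check: Kolmogorov-Smirnov test of standardized values against a normal law.
for name, sample in [("nesting", nesting), ("laying", laying),
                     ("hatching", hatching), ("fledging", fledging)]:
    z = (sample - sample.mean()) / sample.std(ddof=1)
    ks_stat, ks_p = stats.kstest(z, "norm")
    print(f"{name}: KS = {ks_stat:.3f}, p = {ks_p:.3f}")

# One-way ANOVA comparing success rates among the four breeding stages.
f_stat, p_value = stats.f_oneway(nesting, laying, hatching, fledging)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```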
Further, we compared breeding parameters and failure factors between the first and second breeding phases using the t-test. To evaluate the correspondence between breeding stages, nesting (number of nests), laying (number of eggs), hatching (hatched chicks), and fledging (fledged chicks), considered as response variables, and periods of 4 weeks per month (April to September), considered as factors, we used principal component analysis (PCA). For spatial distribution, we created the graphical plan of the orchard with Adobe Illustrator 26.0.3 (2022) software [51] (based on a map extracted from Google Earth). Nests were placed on the nesting trees in the graphical plan, based on data collected in the field (the lines and rows of nesting citrus trees were used as coordinates). The nests were divided into three periods: (i) first nesting wave (the first breeders to colonize the orchard after arrival dates); (ii) first breeding phase (first massive nesting stage after the colonization of the breeding orchard); and (iii) second breeding phase (second massive nest construction after the long break following the first phase). We reinforced the graphical mapping with detrended correspondence analysis (DCA) to demonstrate statistically the distribution of nests inside the orchard, as recently applied by Mansouri et al. [8] and Squalli [52] for passerine birds and Columbidae. In our case, nesting sites in the orchard, namely the central zone (the epicenter of the orchard), periphery-orchard (the marginal zones surrounded by other orange orchards), and periphery-cereals (marginal zones surrounded by cereals and legumes, i.e., potential foraging resources), were considered as response variables, while distances of nests (148 nests) to the central zone, periphery-orchard, periphery-cereals, and cereals were considered as dependent variables. For the graphical plot, only eigenvalues greater than 1.0 and axes with a percentage of variance >50% were selected. These methods are commonly used to assess the ecological requirements of birds, including dove species [7,12]. All statistical analyses were executed using SPSS 18 [53], while graphs were created with GraphPad Prism 8.3.0 [54]. Results are given as percentages for success rates and sample sizes and as the mean ± SD for breeding parameters. Arrival and Departure. In Beni Mellal province, the first birds of S. t. arenicola were observed during the third week of March. These birds arrived solitarily (singles and pairs) during the last days of March, while in April, arrivals were in groups of a few hundred and were mostly observed on electrical lines in the vicinity of roads, principally in rural areas. After the breeding season, migratory birds (adults and subadults) gathered at foraging sites near water resources, including rivers, dams, and irrigation tunnels. The last migrants were seen on 10 October, which marked the latest departure date in the Beni Mellal area. Chronology of Breeding Activities. The breeding chronology of S. t. arenicola in Beni Mellal is summarized in Figure 2. Construction of nests started during the second week of April, and nesting activities continued into the first week of August. Nesting activities were divided into the following two phases: the first phase from the first week of April to the first week of June, with a peak of nesting during the second week of May, and the second phase from the second week of June to the second week of August, with a peak during the third week of June.
Laying of eggs started during the third week of April, and egg-laying activity continued to the first week of August. The laying activities were divided into two phases: the first from the second week of April to the second week of June, with a peak of egg laying during the third week of May, and the second from the third week of June to the second week of August, with a peak during the third week of June. The number of eggs was significantly higher during the first laying stage. The occurrence of chicks (hatching of eggs) started during the first week of May, and hatching activities continued until the third week of August. The hatching and fledging activities showed two peaks, in the fourth week of May and the third week of June, respectively, and then fluctuations continued until the end of the breeding period. Both phases seem to be unclear in the curves, and this can be explained by modifications that occur in each nest during the incubation and rearing period, i.e., predation of eggs or broods, desertion due to disturbance, destruction of nests, etc. Multivariate analysis (PCA) of the optimum periods for breeding activities is summarized in Figure 3. Nesting and laying activities were concentrated principally during the first three weeks of May and the last two weeks of April. Hatching was concentrated between the fourth week of May and the second week of July, while fledging was mainly recorded between the fourth week of July and the first week of September. These periods marked the optimal breeding times for migrant Doves. Spatial Distribution of Nests. The distribution of nests inside the breeding orchard is summarized in Figures 4 and 5. The first nests (the first nests after prenuptial migration, between the second and third weeks of April) were placed in the periphery of the orchard surrounded by cereal farms. During the first nesting phase (after the installation of the earliest nests), breeders occupied the nesting trees without any oriented selection (nests were documented in the entire orchard). During the second phase, nests were constructed in gregarious forms next to the nests of the first breeding phase. Further, three support trees used during the first phase were reoccupied during the second breeding phase. Reproductive Rates. The total reproductive success of the North African Turtle Dove at Beni Mellal is summarized in Table 1. Among the 148 monitored nests, only 9.45% were deserted during the nesting stage. Among the 134 occupied nests (with eggs), 34.32% did not reach the hatching stage. Among the 261 counted eggs, only 153 (58.62%) succeeded in reaching the hatching stage, while 95 were predated, deserted, destroyed, or unhatched. The loss rate in the fledging stage was limited; among the 166 recorded chicks, 153 fledged successfully, giving a loss rate of 7.83%. Failure factors were variable (DF = 2, F = 33.960, P < 0.001). Nest desertion caused the highest loss of S. t. arenicola clutches (14 nests, 42 eggs, and 8 chicks), followed by predation (35 eggs and 4 chicks) and destruction (10 eggs). Nest desertion was caused principally by anthropic activities, including tree pruning, fruit harvesting, irrigation, pesticide use, and hunting, which were carried out at the same time as the breeding activities.
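For readers who want to check the reported proportions, the short sketch below reproduces the reproductive-rate arithmetic from the counts quoted above; the counts come from the text, while the helper function itself is only illustrative.

# Sketch reproducing the reproductive-rate arithmetic from the counts above.
def rate(successes: int, attempts: int) -> float:
    """Return a success rate as a percentage."""
    return 100.0 * successes / attempts

nests_built, nests_occupied = 148, 134        # 14 nests deserted at the nesting stage
eggs_laid, eggs_hatched = 261, 153
chicks_recorded, chicks_fledged = 166, 153

print(f"nest desertion rate: {100 - rate(nests_occupied, nests_built):.2f}%")     # ~9.46%
print(f"hatching success:    {rate(eggs_hatched, eggs_laid):.2f}%")               # ~58.62%
print(f"fledging loss rate:  {100 - rate(chicks_fledged, chicks_recorded):.2f}%") # ~7.83%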
The main predators observed in the monitored orchard were reptiles, mostly the Montpellier snake Malpolon monspessulanus (Hermann, 1804) and the Horseshoe whip snake Hemorrhois hippocrepis (Linnaeus, 1758), documented on nesting trees, as well as raptors, including the Common kestrel Falco tinnunculus, the Peregrine falcon Falco peregrinus, the Black-winged kite Elanus caeruleus, and the Barn owl Tyto alba. The comparison of reproductive success parameters and failure factors between breeding phases is summarized in Table 2. For breeding success, only fledging rates differed between the two breeding phases, while laying, hatching, and nesting rates were similar. Equally, failure factors (desertion, predation, and destruction) were similar between the first and second breeding phases. Discussion This study highlighted the temporal and spatial microdistribution of breeding North African Turtle Doves Streptopelia turtur arenicola in Beni Mellal's irrigated perimeter (Morocco). Our central objectives were to provide detailed data on the chronology of breeding activities and on the distribution of nests inside the breeding orchard. We obtained the first data mapping the distribution of nests in an occupied orchard and the temporal evolution of breeding activities. These findings are of great importance for mapping zones of high breeding rates and then orienting well-adapted conservation actions to protect this threatened game species in Morocco and the entire southern slope of the Mediterranean basin. Our study documented the breeding activities of Doves, which confirms the vital importance of the Beni Mellal area as a breeding and stopover area for Moroccan (breeding) and European (migrant) populations of Turtle Doves, respectively [8,32,46]. In our case, during spring, the first Doves were observed in the Beni Mellal province during the third week of March, which is in agreement with the results cited by Vaurie [20] in the same zone (last decade of March). However, these dates are earlier when compared with European migratory Doves (S. t. turtur), observed on 25 April in the Beni Mellal area, and with Moroccan breeders (S. t. arenicola) in high-altitude zones [32]. At high altitudes, low temperatures and high precipitation push Doves to delay their entire breeding chronology, including arrival dates, to avoid the abortion of their clutches [9]. In contrast, departure dates (mid-October) were similar between Beni Mellal and other highlands in Morocco [9], where Doves migrate on 13 October. Further, Mansouri et al. [8] recently recorded many wintering Doves in Morocco, and this is suggested to modify the phenological status of the species across North Africa. Our study revealed that the breeding season of S. t. arenicola in Beni Mellal province is divided into earlier and later clutches, which is in contradiction with previous studies conducted in other Moroccan [30,32,33,49] and Algerian regions [55] on the southern slope of the Mediterranean, as well as with prior studies conducted in Spain [56] and Britain [14,34] on the European side. The abundance of cereals and other cultivated seeds around breeding orchards is strongly suggested to encourage the two breeding clutches in S. t. arenicola at Beni Mellal [17]. Further, in Beni Mellal, breeding activities started with nest initiation recorded in the second week of April, followed by laying of eggs during the third week of April, hatching during the first week of May, and fledging during the fourth week of May.
Similar results were recorded in the adjacent areas of Tadla and Midelt, which are only 30 and 180 km from Beni Mellal, respectively [9,32]. However, these breeding dates are markedly later when compared with lower-latitude breeding habitats in Taroudant, located 400 km further south in Morocco [33], where breeding activities started in March, and earlier when compared with northern breeding habitats in Spain [56] and Britain [34], where the breeding season starts in mid-April. These differences may indicate an effect of the latitudinal gradient, as recorded in many Western Palearctic birds, including the European Turtle Dove Streptopelia turtur turtur, which takes nearly 17 days between North African stopovers (as lower latitudinal limits) and European breeding grounds (as higher latitudinal limits in the Northern Hemisphere) [41]. However, more investigations are needed to confirm this issue. This study highlighted the distribution of nests inside breeding orchards, which is the first of its kind. The first nesting sites were selected in marginal trees placed near cereals and other cultivated seeds, and this indicates the crucial role of foraging resources in the selection of breeding sites and habitats [9,17]. The selection of breeding trees near foraging seeds is suggested to support breeding pairs and their nestlings during the breeding season, as recently noted by Mansouri et al. [9]. Nests of the second breeding phase were constructed in gregarious forms, and nesting trees were selected next to the nests of the first breeding phase. As a potential explanation, we suggest that the first breeders prospect the selected orchards for nest security and forage availability via their earlier nests, while during the second wave of breeders (second breeding phase), Doves nest intensively near the trees selected by the prospectors (first wave of the breeding phase) based on the security and food availability offered to the first nests. However, this issue needs a specific investigation, and its results could distinguish nesting nuclei, where the density of nests is higher, from other nesting sites of lower density, as in the case of many gregarious and social birds, including the greater flamingo (Phoenicopterus roseus) [57,58] and the Eurasian Coot Fulica atra [59], which colonize secure sites. According to our annual breeding success evaluation, the breeding success rates of Turtle Doves in Beni Mellal province were medium during all breeding phases. In total, 92.17% of chicks survived from nesting to fledging. These results are very close to those cited in apple orchards at Midelt [32], in olive orchards at Taroudant [33], and in palm groves at Biskra in Algeria [60]. Despite the availability of food resources around the orchard and of water inside it, Turtle Dove clutches were regularly disturbed. Conclusion This study offers new data on the temporal and spatial distribution of the North African Turtle Dove subspecies (S. t. arenicola). In summary, the first breeders arrived early at the nesting sites, and the breeding season was divided into a first phase from the first week of April to the first week of June (earlier clutches) and a second breeding phase between the second week of June and the second week of August. The first breeders prospect breeding orchards and select nesting trees close to foraging resources, while during the second breeding phase nests were placed in gregarious forms close to those of the first phase.
However, despite the abundance of foraging resources and breeding requirements, breeding success was lowered by human disturbance, natural enemies, and the abortion of eggs. These data are of great importance for comparative research concerning the microdistribution of the vulnerable Doves in breeding sites, as well as for conservation actions at the orchard scale through the reduction of human activities in the most densely occupied areas of breeding groves. Further, the establishment of the second breeding phase between June and September needs more investigation, principally regarding the impact of hunting activities from July to September on breeders that make sorties between nests and foraging habitats. Data Availability The data used to support the findings of this study are included in the article. Conflicts of Interest The authors declare that they have no conflicts of interest.
v3-fos-license
2020-06-25T09:07:58.064Z
2020-06-21T00:00:00.000
225668062
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11465-020-0588-0.pdf", "pdf_hash": "f231a8db9caea29845213d22277141fa88efd7ef", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44786", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "749f16642c710e8214dc08b5296c5d8973033226", "year": 2020 }
pes2o/s2orc
Level set band method: A combination of density-based and level set methods for the topology optimization of continuums The level set method (LSM), which is transplanted from the computer graphics field, has been successfully introduced into the structural topology optimization field for about two decades, but it still has not been widely applied to practical engineering problems as density-based methods do. One of the reasons is that it acts as a boundary evolution algorithm, which is not as flexible as density-based methods at controlling topology changes. In this study, a level set band method is proposed to overcome this drawback in handling topology changes in the level set framework. This scheme is proposed to improve the continuity of objective and constraint functions by incorporating one parameter, namely, level set band, to seamlessly combine LSM and density-based method to utilize their advantages. The proposed method demonstrates a flexible topology change by applying a certain size of the level set band and can converge to a clear boundary representation methodology. The method is easy to implement for improving existing LSMs and does not require the introduction of penalization or filtering factors that are prone to numerical issues. Several 2D and 3D numerical examples of compliance minimization problems are studied to illustrate the effects of the proposed method. Introduction The successful topology optimization of continuums inspired from the study of the optimal thickness distribution of elastic plates [1] has led to the successive development of several topology optimization models, such as the homogenization method [2], solid isotropic material/microstructure with penalization (SIMP) method [3][4][5], rational approximation of material property method [6], evolutionary structural optimization (ESO)/bidirectional ESO (BESO) [7][8][9], level set method (LSM) [10][11][12][13][14], independent-continuous mapping method [15], moving isosurface threshold method [16], stiffness spreading method [17,18], and moving morphable component/void method [19,20]. The most mature method is SIMP, which has been successfully implemented in commercial software systems, such as OptiStruct, Tosca, and ANSYS. SIMP has been widely used worldwide, especially after Sigmund published the 99-line MATLAB code [21]. It provides an efficient way for new researchers to accept and start their work in topology optimization for continuums. ESO, which was first proposed by Xie and Steven [7], uses a similar idea to that of SIMP. The idea is to find an appropriate way of material distribution in the design domain. Later, Refs. [8,9,22] proposed BESO, which solves the drawback of ESO that a material cannot be added back into a structure after it is deleted. ESO/BESO has a clear concept of updating a structure to achieve optimum design, and it is easy to understand and implement. It has also been widely studied and implemented in the topology optimization tool Ameba. LSM [10][11][12][13][14][23][24][25][26][27][28][29][30], which is borrowed from computer graphics and image-processing fields, was first implemented in topology optimization around 2000 [11,12]. It uses a different idea based on the method of front tracking to drive the boundary of a structure iteratively to obtain the optimum design. It has received wide attention since it was first introduced into the topology optimization field and became popular in a short time due to its advantages [13,14]. 
For example, it always provides clear structural boundaries or material interfaces, making it suitable for optimization problems related to geometric control, and it does not require the introduction of a penalty factor, in contrast to SIMP; therefore, it is more stable when solving dynamic optimization problems [31]. However, LSM is developed from a method based on the boundary evolution concept. The iterative updating of a structure during optimization is always generated from the boundaries rather than from the entire domain as in density-based methods. Consequently, it lacks nucleation capacity and involves strong initial-design-dependent problems. To overcome these drawbacks, we analyze the fundamental scheme of LSM to evaluate its difficulties in handling topology changes and compare the differences between the conventional LSM (CLSM) and the variational LSM, which is called the zero LSM (ZLSM) in this paper. The comparison demonstrates that the latter is recommended for topology optimization in practical applications because it has the ability of nucleation and less stringent requirements on meshes compared with the former. A level set band method is proposed to improve the capability of LSMs in handling topology changes. The method is easy to implement, involving only one parameter, the level set band, and does not require the introduction of penalization or filtering factors that are prone to numerical issues. This proposal may pave the way for the wide acceptance of level set-based topology optimization methods in practical engineering applications. The remainder of this paper is organized as follows. The basic concept of LSM is introduced and a comparison between CLSM and ZLSM is presented in Section 2. The level set band method, which is a new method with a variable level set band, is proposed in Section 3. The optimization model is introduced in Section 4, and the method is studied and evaluated with several numerical examples in Section 5. Finally, conclusions are provided in Section 6. LSM for topology optimization LSM was first proposed by Osher and Sethian [10] for interface tracking. It had been successfully implemented in the computer graphics and image segmentation fields before being introduced into the topology optimization field [23,24]. The concept of a 2D design represented with a level set function is illustrated in Fig. 1. The basic idea of LSM is to define a surface of one dimension higher, i.e., the level set function Φ, use its zero level set to represent the boundary of an object, and control the evolution of the boundary by updating the level set surface, as illustrated in Fig. 1. The definitions of the boundary and the different parts of the domain are given as follows:

Φ(x, t) > 0 for x in Ω\∂Ω (the solid part),
Φ(x, t) = 0 for x on ∂Ω (the boundary),
Φ(x, t) < 0 for x in D\Ω (the void part),   (1)

where x ∈ R^2 or R^3 denotes a point in the design domain D ⊂ R^2 or R^3, Ω and ∂Ω are the solid part and the boundary of the object, respectively, and t is the pseudo time that represents the updating iteration steps during the evolution of the level set function. Conventional level set method In CLSM, i.e., the earliest proposed LSM [23,24], the name "LSM" refers to a systematic methodology for tracing the front of an interface in an implicit way on a fixed Eulerian grid mesh.
This methodology includes a discrete way to solve the Hamilton-Jacobi (H-J) equation by implementing the upwind scheme, ENO, or WENO [32,33], the reinitialization process to recreate a signed distance level set function [24,34], and the velocity extension approach [35] to extend the boundary velocity to the entire evolution domain or to a narrow band area around the boundary to alleviate the computation cost. This methodology can be considered the conventional way to implement LSM in related problems. The approaches have been summarized in two classical books [23,24]; LSM has been well developed in computer graphics, as demonstrated by the brilliant animations and its successful implementation in the film industry. However, the application of LSM in the field of topology optimization has remained at the academic level for a long time, and its potential has not been fully exploited due to several limitations. In the following part, the basic concept of CLSM is schematically discussed, and its inherent limitations are analyzed. In CLSM, a level set function is updated by solving the H-J equation given in Eq. (2), which is a partial differential equation with the spatial derivative ∇Φ and the temporal derivative ∂Φ/∂t:

∂Φ(x, t)/∂t − V_n |∇Φ(x, t)| = 0,   (2)

where V_n in topology optimization is usually obtained using the sensitivity analysis based on the theory of the shape derivative, by applying the finite element method (FEM) to solve the state and adjoint equations [14]. The "−" in Eq. (2) depends on the outward positive normal direction of V_n, which is determined by the level set representation model given in Eq. (1). If Φ(x, t) > 0 is defined as the outside part, then "−" should be changed to "+". Although CLSM has been introduced into the topology optimization field for a long time, it is still not as well understood by most people as density-based methods. The updating scheme of CLSM can be demonstrated with a diagram, as shown in Fig. 2, where the level set function is updated with one small time step Δt. The variation Φ̇ in Eq. (2) consists of two parts, which are (∂Φ/∂t)Δt and V_n |∇Φ| Δt. The two parts are illustrated in Fig. 2(a). The equation Φ̇ = 0 is always satisfied in each step. Therefore, the two parts cancel each other out, and a point on the level set surface Φ at the nth time step, p_n, moves horizontally to the point p_{n+1} in the next time step. Here, V_n denotes the normal velocity at point p_n. Figure 2(b) depicts that all points of the level set function move in a horizontal direction when solving the H-J equation. This condition can be used to explain an important drawback of CLSM, namely that it cannot create new holes inside, i.e., it lacks nucleation capability. Figure 3 illustrates several cases of updating of a level set surface, which can become steeper (Case 1) or flatter (Case 2) than before. Case 3 can never occur because the top part of the level set function cannot move downward to form a concave pit, as shown in Fig. 3. Case 4, in which the level set function is "pulled up", also cannot happen for the same reason. Figure 3 demonstrates a theoretical situation, but in practical numerical implementations, Δt is not an infinitesimal value. Numerical errors may occasionally result in Cases 3 and 4 if the re-initialization scheme is not applied to recreate the signed distance level set function frequently.
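To make the boundary-evolution behavior of Eq. (2) concrete, the following is a minimal sketch, written in Python/NumPy, of one explicit first-order Godunov upwind update of the level set function on a uniform grid with unit spacing; it is a generic illustration of the scheme discussed above (sign convention: Φ > 0 inside the solid, positive V_n grows the solid), not the authors' implementation.

import numpy as np

def upwind_step(phi, vn, dt):
    """One explicit Godunov/upwind update of Eq. (2): dPhi/dt = Vn * |grad(Phi)|.

    phi : 2D array of level set values (Phi > 0 inside the solid).
    vn  : 2D array of normal velocities (positive values grow the solid).
    dt  : time step; should satisfy a CFL-type restriction for stability.
    Grid spacing is assumed to be 1.
    """
    p = np.pad(phi, 1, mode="edge")
    dxm = phi - p[1:-1, :-2]    # backward difference along x (axis 1)
    dxp = p[1:-1, 2:] - phi     # forward difference along x
    dym = phi - p[:-2, 1:-1]    # backward difference along y (axis 0)
    dyp = p[2:, 1:-1] - phi     # forward difference along y

    # Godunov approximations of |grad(Phi)| for each sign of the velocity.
    grad_plus = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                        np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    grad_minus = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                         np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)

    return phi + dt * (np.maximum(vn, 0) * grad_minus + np.minimum(vn, 0) * grad_plus)

# Toy example: a circular solid region growing under a uniform normal velocity.
y, x = np.mgrid[0:41, 0:81]
phi = 10.0 - np.sqrt((x - 40.0)**2 + (y - 20.0)**2)   # signed distance, > 0 inside
vn = np.full_like(phi, 0.5)
for _ in range(10):
    phi = upwind_step(phi, vn, dt=0.5)

This mirrors the argument above: an update of this form only advances existing boundaries, so interior values far from the zero level set are not pulled across zero and no new holes nucleate.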
As mentioned above, CLSM inherently lacks nucleation capacity and is often criticized because it can induce the initial-design-dependent problem, which means that the final optimal design may heavily rely on the initial design. In the literature, researchers include numerous holes in the initial design to alleviate this problem. No widely recognized method exists for guiding people on how to establish the initial design. This drawback can be overcome by incorporating other nucleation schemes, such as topological derivatives [13], but CLSM still hardly handles topological changes as naturally as density-based methods do. Zero level set method When CLSM was introduced into the topology optimization field, a set of variational LSMs, such as a method based on a dynamic implicit surface function [36,37], the piecewise constant LSM [38], the LSM with a reaction-diffusion equation [39], the topology representation function [40], the parameterized LSM [41][42][43][44][45], and other methods [46], was developed to overcome the drawbacks of CLSM. Most of these methods can be considered "LSM", but they do not follow the solution procedure of CLSM. We suggest that these variational LSMs, which also use the zero level set to represent the boundary of a design but do not exactly follow the set of conventional numerical manipulations to update the level set function, can be called "ZLSMs" to distinguish them from CLSM. These ZLSMs are all laudable attempts to explore the practical implementations of LSMs and provide valuable experiences and references for further explorations. The key point of these ZLSMs is that they also use a clear boundary to represent a design during the optimization process. Nonetheless, the numerical operation for the level set updating is different. A way to realize ZLSMs is to revise Eq. (2) into Eq. (3) by directly removing the spatial difference term |∇Φ|:

∂Φ(x, t)/∂t − V_n = 0.   (3)

In accordance with Fig. 4, V_n indicates the vertical velocity of each point on the level set surface. The physical meaning of V_n has changed, and the notation V_n becomes unsuitable for it. The meaning of V_n in this model is similar to the sensitivity in density-based methods. The calculation of V_n can be borrowed from the SIMP or ESO/BESO models. If |∇Φ| = 1 is held with re-initialization, as CLSM always executes, the effects of V_n in those models become the same. We keep using the notation V_n in Eq. (3) for convenience in comparing Eqs. (2) and (3). Although the inherent updating logics of Eqs. (2) and (3) differ, in accordance with Figs. 2 and 4, their convergence conditions are equivalent, i.e., V_n equals zero, due to the nonzero property of |∇Φ| along the boundary. Therefore, we can use the same V_n to update the level set function in CLSM and ZLSM. In our opinion, ZLSM may be more suitable for topology optimization than CLSM. The reasons that support our opinion are as follows. First, ZLSM naturally has nucleation capability, which can greatly alleviate the initial-design-dependent problems, compared with CLSM. Second, ZLSM is more adaptable to miscellaneous meshes than CLSM. ZLSM uses an ordinary differential equation to update the level set function and has less complexity than the partial differential equation-driven CLSM, which is more conveniently solved on a structured grid. Thus, ZLSM can be easily implemented in practical engineering problems with complex design domains and unstructured meshes.
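By contrast with the upwind update sketched earlier, the ZLSM update of Eq. (3) reduces to an elementwise ordinary differential equation; the minimal forward-Euler sketch below (again an illustration, not the authors' code) shows why nucleation comes for free: wherever the velocity field is negative enough, interior values of Φ can be pulled below zero and a new hole appears.

import numpy as np

def zlsm_step(phi, vn, dt):
    """One forward-Euler update of Eq. (3): dPhi/dt = Vn.

    With |grad(Phi)| removed, every grid value of Phi is simply pushed up or
    down by the field vn, which plays a role analogous to a sensitivity in
    density-based methods; interior values may cross zero, nucleating holes.
    """
    return phi + dt * vn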
A reasonable way in a commercial software system can easily be realized by sharing the same mesh with the finite element analysis (FEA). The sensitivity analysis of ZLSM for certain problems can also be directly borrowed from density-based methods, and this point makes it easier to implement and accept, for instance for topology optimization on curved shell structures, for which sophisticated mathematical tools are needed in CLSM [47]. The characteristics of the "customized" LSM for topology optimization can be summarized as follows: it should be well connected to FEM and can be implemented as easily as the SIMP or ESO/BESO methods. Its advantages can be further developed. Although ZLSM overcomes certain drawbacks of CLSM, the boundary evolution-based strategy of the level set-based model remains less natural than density-based methods in dealing with topological changes. In density-based methods, the topological optimization problem is transformed into a size optimization problem, and the topological change becomes a continuous process [3,4]. In the level set-based method, the topological change is a discrete process. The objective and constraint functions in the feasible space are not as continuous as in the density-based methods, which involves more difficulties in handling the topology optimization. This condition may also be considered a limitation of LSM; consequently, it is difficult to commercialize. At this point, the density-based methods have predominant advantages in topology optimization. This issue has inspired us to introduce a density interpolation scheme to provide similar continuity in the level set-based method. In this study, we propose a new method that can improve the continuity of objective and constraint functions by using the high-dimensional information of the level set function. This method provides a simple way to combine the advantages of density-based and level set-based methods to realize a natural topology evolution, as in a density-based method, together with a clear boundary representation solution. Level set band method In this section, a level set band method, which can be considered a combination of LSM with a density-based method to utilize the advantages of both methods, is proposed. This method follows, but is not restricted by, the parameterized LSM [43] and incorporates a new parameter, i.e., the level set band Φ_b, which is the distance between a user-defined upper bound Φ_u and a lower bound Φ_l. The upper bound Φ_u and lower bound Φ_l do not denote the maximum and minimum values of the level set function; rather, they indicate a range within which the densities of the elements are interpolated from the values of the level set function. In this method, the density of each element in the structure and the sensitivity analysis depend on its nodal values of the level set function. In the numerical implementation, the level set function value Φ_i at the middle of the ith element, obtained by interpolation, can be adopted. Thus, the density ρ_i of the ith element can be calculated as

ρ_i = H(Φ_i),   (4)

where H ∈ {f : R → R+} is the Heaviside function; here, it is numerically approximated by a smoothed function (Eq. (5)) [14] that equals ε for Φ_i < −Δ, equals 1 for Φ_i > Δ, and interpolates smoothly between these limits over the band −Δ ≤ Φ_i ≤ Δ (with Δ = Φ_b/2 being the half-width of the level set band), where ε is a small value that indicates the lower-bound density of the void material. Figure 5(a) shows the interpolated density distribution for FEA with the level set function when Φ_b is adequately large, and Fig. 5(b) illustrates the density distribution when Φ_b is smaller. Thus, the design is determined by a narrow band around the boundary.
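As a concrete illustration of the band projection of Eqs. (4) and (5), the sketch below maps element-center level set values to element densities. The ε and 1 limits outside the band follow the description above; the cubic polynomial used inside the band is an assumed, commonly used smoothed-Heaviside form (in the spirit of Ref. [14]) rather than the paper's exact expression.

import numpy as np

def band_density(phi_elem, delta, eps=1e-3):
    """Project element-center level set values onto densities, Eq. (4)-(5) style.

    phi_elem : array of level set values at element centers.
    delta    : half-width of the level set band (Delta = Phi_b / 2).
    eps      : lower-bound density assigned to void elements.

    Outside the band the density is eps (void) or 1 (solid); inside the band a
    smooth cubic interpolation is used.  The cubic below is an assumed, commonly
    used smoothed Heaviside, not necessarily the original paper's expression.
    """
    x = np.clip(phi_elem / delta, -1.0, 1.0)
    smooth = 0.75 * (1.0 - eps) * (x - x**3 / 3.0) + 0.5 * (1.0 + eps)
    return np.where(phi_elem < -delta, eps,
                    np.where(phi_elem > delta, 1.0, smooth))

# As the band shrinks (delta -> 0), the projection approaches a 0/1 design
# defined by the zero level set:
phi = np.linspace(-6, 6, 13)
print(band_density(phi, delta=5.0))   # wide band: many intermediate densities
print(band_density(phi, delta=0.5))   # narrow band: nearly black-and-white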
If Φ_b decreases to zero, the design is then determined by the zero level set, where Φ = 0. The optimization model becomes a ZLSM. In the proposed method, the strategy is to gradually reduce the width of the level set band Φ_b during the optimization process, as shown in Fig. 5. When Δ is sufficiently large, as shown in Fig. 5(a), all of the level set function values fall into the band Φ_b. The density of each element is thus a projection of the related level set function value, similar to a definition in a density-based method such as SIMP, where the design variables are defined as the densities and the upper and lower bounds are fixed at 1 and 0 (usually, a small value ε instead), respectively. The strategy in SIMP is that the density of each element is forced to approach the upper or lower bound (1 or 0) with a penalization scheme. Then, a black-and-white optimal design can be obtained. Unlike SIMP, the proposed method has variable upper and lower bounds in mapping the level set function. The distance between the upper and lower bounds is defined as a density band Φ_b = 2Δ, which is represented in Eq. (4), and it is reduced gradually during the optimization process. When Δ becomes a considerably small value or zero, as shown in Fig. 5(b), the design is defined by the zero level set, and only the densities of the elements in an excessively small region around the boundary need to be calculated with the projection function of Eq. (5). This method becomes an LSM. The density-based method in the initial stage finally converges to an LSM by assigning the parameter Δ = Φ_b/2 a large value to start the optimization with a density-based method and then gradually decreasing the value of Δ during the optimization iteration process. We can combine the density-based method and LSM by involving only one parameter Δ to utilize the advantages of both methods. The density-based method presents flexibility in topology change and is minimally dependent on the initial design; LSM has clearly defined boundaries. A general way to implement the level set-based method can be considered the case with a small band between the upper and lower bounds, as shown in Fig. 5(b). LSM is usually applied on a fixed Eulerian grid. The mesh and the boundary are difficult to make conform, and the elements around the boundary are usually cut through by the zero level set. An accurate way to calculate the stiffness contribution of the "half" elements is to use the extended FEM [48][49][50]; another simpler but less accurate way is to use an approximate density to represent the stiffness contribution of the elements around the boundary [43]. The latter can be considered a means to interpolate the density of elements falling into the band or cut by the boundaries, as illustrated in Fig. 5(b), where the level set band is defined as a very small value. The proposed method can be considered an extension of this scheme to a wide spatial range around the boundary and to the entire optimization process over time. This method can be implemented in a simple manner through a slight modification of the 88-line MATLAB code of the parameterized LSM with radial basis functions [43]. However, this method is not limited to the parametric approach but can also be applied in a discrete way in the level set framework. The proposed level set band method not only can be combined with ZLSM, as demonstrated in this paper, but also can be implemented in the CLSM framework.
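The band-narrowing strategy described above can be expressed as a simple schedule for Δ; the sketch below is one possible realization, with the start value, decrement, and floor taken from the cantilever example reported later in the paper (Δ starting at 5, reduced by 0.1 per iteration down to 0.5) and otherwise treated as illustrative placeholders.

def band_schedule(iteration, delta0=5.0, d_delta=0.1, delta_min=0.5):
    """Half-width Delta of the level set band at a given iteration.

    A large Delta gives density-like behavior with flexible topology changes;
    as Delta shrinks toward the floor value, the model approaches a level set
    method with a nearly crisp boundary.  Defaults mirror the schedule of the
    cantilever example later in the paper.
    """
    return max(delta0 - d_delta * iteration, delta_min)

# Example: Delta reaches its floor value of 0.5 after 45 iterations.
deltas = [band_schedule(i) for i in range(60)]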
Compared with ZLSM, the proposed method needs further efforts in handling complex design problems. The level set band decreases from a large value to a small value. The proposed model reveals a gradual change from a density-based method to a level set-based method. An example of an intermediate solution is provided to illustrate the level set band method in Fig. 6, in which the level set band Φ b ¼ 2Δ ¼ 7 (the length of the finite element is 1), and the density distribution is obtained on the basis of the level set surface. The level sets on the upper and lower bounds are also plotted. They approach the same design as the zero level set after the level set band Φ b decreases to zero. From the viewpoint of the level setbased model, the proposed method can be considered an extended LSM by replacing the zero level set with a variable level set band. From another viewpoint, the level set band method can be considered a variation of a densitybased method by replacing the fixed band of the density distribution between 0 and 1 with an alterable value. In this model, penalization and filtering mechanisms, as in SIMP, need not be introduced; related numerical issues [51,52] no longer need to be processed. Figure 7 provides a schematic explanation of this point and shows how the proposed method can be considered a combination of the densitybased method and LSM. Similar concepts by changing the level set isosurface were also evaluated in previous studies. In the moving isosurface threshold method, which was proposed in Ref. [16], a variational isosurface threshold for response surface, which has a similar definition to that of the parameter Δ ¼ Φ b =2 in this study for the level set surface, is applied to adjust the boundary design and determined by the Karush-Kuhn-Tucker condition. The level set-based method [53] uses different layers of a unique characteristic level set function to represent different designs for optimizing connectable graded microstructures. This concept can also be used in minimum distance control [54]. In the SIMP-based model, different density projections can be obtained to conduct robust topology optimization by implementing different threshold values [55]; this process is similar to applying different levels in the density distribution function [56]. The BESO-based model [57] also involves a level set isosurface, which is iteratively determined by the upper and lower bounds. The two bounds approach almost the same value after convergence. The smooth boundaries are always clearly determined to separate the design domain into black and white parts during the optimization process, and the intermediate density elements only occur along the boundaries on the elements that are cut by the zero level set. The numerical examples indicate that the method illustrates similar properties to those of ZLSM because it adopts the zero level set to represent the design, although it comes from the BESO method. All of these successful implementations of the level set concept demonstrate its tremendous potential; it deserves further development in a future study. 
Optimization scheme The optimization model of the level set-based method can be defined as

minimize (over Φ):  J(u, Φ) = ∫_D f(u) H(Φ) dΩ,
subject to:  a(u, v, Φ) = l(v, Φ) for all v ∈ U,
             G(Φ) = ∫_D H(Φ) dΩ ≤ V_max ∫_D dΩ,
             u = u_0 on Γ_u,  (C : ε(u))·n = τ on Γ_τ,

where J is the objective function for a specific physical type described by f, u is the displacement field, ε is the linearized strain tensor, C is the elasticity tensor, v is the adjoint displacement in the space U of the kinematically admissible displacement fields, G(Φ) is the volume constraint that limits material usage, V_max is the maximum allowable volume fraction of the design domain, u_0, n, and τ are the given displacement, the boundary unit normal vector, and the traction, respectively, and Γ_u and Γ_τ are the Dirichlet and Neumann boundaries, respectively. The energy bilinear form a(u, v, Φ) and the load linear form l(v, Φ) are defined as

a(u, v, Φ) = ∫_D ε(u) : C : ε(v) H(Φ) dΩ,
l(v, Φ) = ∫_D b·v H(Φ) dΩ + ∫_{Γ_τ} τ·v dΓ,

where b represents the body force. In the framework of CLSM, the sensitivity analysis based on the shape derivative [13,14] can be used to derive the normal velocity V_n along the moving boundary in the steepest descent direction. (Fig. 7 illustrates that the level set band method can be considered a variation of both the density-based method and the LSM, obtained by replacing the fixed band Φ_b with an alterable one.) In the resulting velocity expression, κ = ∇·n is the curvature along the boundary, and l is the Lagrange multiplier that controls the volume constraint; it can be calculated with the augmented Lagrange multiplier method [43,58] or the bisection method [21]. For a compliance minimization problem with f(u) = ε(u) : C : ε(u) and without body forces or boundary tractions, the velocity can be simplified as

V_n = ε(u) : C : ε(u) − l.   (11)

With the velocity V_n obtained in Eq. (11), the CLSM of Eq. (2) or the ZLSM of Eq. (3) can be used to update the level set function until convergence to realize the topology optimization. This velocity is meaningful only along the boundary in CLSM because only the variation on the boundary can affect the objective function. In the proposed level set band method (Fig. 8), the evolution of the level set function in the solid part does not influence the objective function and the volume constraint, given that the changed part does not fall into the level set band, as shown in Fig. 8(b). The sensitivity with respect to the level set function can be considered zero there, but updating the level set function in that interior area has the potential to change the objective function and the topology of the design, as shown in Fig. 8(c). This diagram also illustrates how the existing level set band makes the topology change a continuous procedure. In CLSM or ZLSM, the topology change is always a discontinuous procedure and may cause numerical issues, such as oscillation or local optima. In this model, the velocity V_n is extended to the entire design domain by directly calculating the velocity with Eq. (11) over the design domain. This process can be considered the "natural velocity extension" approach applied in most level set-based topology optimization models [14,41]. Thus, the level set function can be updated in the entire design domain, and the nucleation capability is easily realized. This approach is unlike the conventional way of extending the velocity in CLSM to keep the level set function a signed distance function, as applied in the computer graphics field [10,34]. On the basis of Eq. (3), the level set function can be updated in the ZLSM manner with the first-order difference scheme

Φ^{i+1} = Φ^i + Δt V_n^i,

where the superscripts i and i + 1 indicate the iteration steps, and Δt is the time step size.
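As a schematic illustration of how Eq. (11) and the first-order update above fit together, the sketch below performs one design update: a placeholder array stands in for the strain energy density field that would come from an FEA solve, the Lagrange multiplier is found by bisection so that the projected volume fraction meets its target, and the level set values are advanced by a forward-Euler step. This is a generic sketch under those assumptions, not the authors' 88-line implementation.

import numpy as np

def heaviside(phi, delta=0.5, eps=1e-3):
    """Smoothed Heaviside projection used here to measure the material volume."""
    x = np.clip(phi / delta, -1.0, 1.0)
    smooth = 0.75 * (1.0 - eps) * (x - x**3 / 3.0) + 0.5 * (1.0 + eps)
    return np.where(phi < -delta, eps, np.where(phi > delta, 1.0, smooth))

def update_design(phi, strain_energy, vol_frac, dt=0.5, delta=0.5):
    """One design update: Vn = strain_energy - l (Eq. (11)), then
    phi <- phi + dt * Vn, with l chosen by bisection so that the updated
    design satisfies the volume constraint (uniform element sizes assumed)."""
    lo = 0.0
    hi = (strain_energy + phi / dt).max() + 1.0   # large enough to empty the design
    for _ in range(60):                            # bisection on the multiplier l
        lam = 0.5 * (lo + hi)
        phi_trial = phi + dt * (strain_energy - lam)
        if heaviside(phi_trial, delta).mean() > vol_frac:
            lo = lam                               # too much material: increase l
        else:
            hi = lam                               # too little material: decrease l
    lam = 0.5 * (lo + hi)
    return phi + dt * (strain_energy - lam)

# Placeholder data standing in for an FEA result on a 40 x 80 element grid.
rng = np.random.default_rng(0)
phi = rng.uniform(-3.0, 3.0, size=(40, 80))
strain_energy = rng.random((40, 80))
phi = update_design(phi, strain_energy, vol_frac=0.5)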
In this study, the evolution of the level set function is realized by updating the coefficients in the parameterized LSM [43]. Readers can refer to Ref. [43] and the provided code for a detailed implementation approach. This scheme can also be implemented on the basis of CLSM by updating the level set function with Eq. (2) to obtain an accurate boundary evolution solution. Nevertheless, the related numerical manipulations of CLSM would then have to be adopted, and the nucleation capability would be missing. Numerical examples In this section, several examples are analyzed to illustrate the effects of the proposed method. Please note that the numerical examples in this paper are dimensionless. The basic parameters follow the 88-line MATLAB code [43], including Young's elasticity modulus E = 1 for the solid material, E = 10^-9 for the void material, and a Poisson's ratio of 0.3. The radial basis functions used are multiquadric splines with c = 10^-4. Only the mean compliance minimization problem is studied, but this method can easily be applied to other problems, such as compliant mechanism design and material design problems. The convergence condition is set to be the same as in the 88-line MATLAB code, namely that in all of the last nine steps, the mean compliance M satisfies a prescribed convergence criterion (the subscript of M denotes the iterative step number). Cantilever beam The first example is shown in Fig. 9. A short cantilever beam is given, in which the left side is fixed. A concentrated force F = 1 is applied vertically downward at the middle point of the right side. The size of the design domain is 80 × 40, and 80 × 40 four-node bilinear square elements are used to perform the FEA. The total volume fraction is set as 50%. Figure 10 shows the iterative designs of the structure at Steps 1, 10, 20, 30, 50, and 109 (converged). Figure 10(a) shows the zero level set of the designs, Fig. 10(b) depicts the density distributions of the designs, Fig. 10(c) illustrates the level sets of the upper and lower bounds (red is the upper bound, and blue is the lower bound), and Fig. 10(d) indicates the level set functions and the upper and lower bound planes during the optimization. The initial value of Δ is 5 and decreases by 0.1 at each step to the minimum value of 0.5 after 45 steps. The initial value of the level set function Φ is given in the range −3 ≤ Φ ≤ 3, and the initial design is filled with gray elements, as shown in Fig. 10(b). In the first figure of Fig. 10(c), the initial level set function does not touch the upper and lower bounds (±5), and thus, no red or white area exists. After several iterations, the values of the level set function and Δ change, as shown in the second figure in Fig. 10(d). The red part in Fig. 10(c) indicates the area where the density is 1, and the white part indicates the area where the density is ε. Figure 10(a) indicates the zero level set, but it is not the real design because the real finite element model for analysis is given by Fig. 10(b). This problem converges after 109 iterations, and then the final zero level set becomes consistent with the finite element model, as depicted in Figs. 10(a) and 10(b). The level sets on the upper and lower bound planes almost perfectly coincide because the distance between the upper and the lower bounds is very small. The iteration process of this example can be considered a model of the density-based method; if the initial value of Δ is set to a small value (e.g., 0.5), the iteration process then becomes a zero level set model. The results are given in Fig.
11, and the iteration numbers are 1, 10, 20, 30, 50, and 131. Figure 11(b) depicts that the design has few gray elements around the boundary from the initial design to the final design. The zero level set is almost the same as the density distribution. Figure 11(c) also shows that the level sets on the upper and lower bound planes are almost coincident from the start to the end. Compared with the optimization process shown in Fig. 10, this case can be considered a zero level set model because the clear boundary is always given. Therefore, boundary-related problems, such as gravity or hydraulic pressures, can be easily handled. In the following part, the same problem is solved with different initial designs to illustrate the initial designdependent issue of the proposed method. Figures 12(a) and 12(b) show the case with decreasing Δ from 5 to 0.5 by 0.1 at each step, and Figs. 12(c) and 12(d) show the case with fixed Δ ¼ 0:5. Figures 12(a) and 12(c) are the zero level sets during the iteration, and Figs. 12(b) and 12(d) are the corresponding density distributions. In the two cases, the initial density of each element is 0.5. Figure 12 demonstrates that the density distributions and topologies in the intermediate steps of the two cases are relatively different. In Figs. 12(a) and 12(b), the topology change is driven by the density variation in the earlier stage, in which Figs. 12(a) and 12(b) can be considered a density-based model. In Figs. 12(c) and 12 (d), the density distribution is almost black and white everywhere during the optimization process; this can be considered a zero level set model with nucleation. The only difference between the two methods is the value of Δ or Φ b . The comparison in this example clearly illustrates the most important characteristic of the proposed model, i.e., the density-based method and LSM can be connected with only one parameter Δ. If Δ decreases from a large value to a small value, this model becomes a density-based model; if Δ is set as a small value from the beginning to the end, this model becomes a zero level set model. The penalization and density filter are not needed to be specifically applied in this model. 5.2 Different Δ updating schemes for a simply supported beam problem In the following part, a simply supported beam, shown in Fig. 13, is evaluated with the same method to illustrate the convergence property of the proposed method. The beam size is 160 Â 40 and is discretized to 160 Â 40 Q4 elements. On the basis of the symmetry of the problem, only the right half part of the design is optimized. The simply supported beam problem generally varies in terms of the topology of the final designs. In this part, different initial designs and three Δ updating Schemes (a), (b), and (c) are imposed to provide a clearer understanding of the proposed model as described in Figs. 14(a), 14(b), and 14(c), respectively. Here, three Δ updating schemes are implemented. Scheme (a) uses a fixed Δ value of 0.01 in the overall optimization process; Schemes (b) and (c) use decreasing Δ values with different reductions. Δ 0 denotes the initial Δ value, dΔ denotes the reduction value of Δ in each iteration, and minΔ denotes the minimum value of Δ. N indicates the total iteration number when the optimization convergences, and MC means the mean compliance of the design. As shown in Fig. 14, in each problem, the upper figure is the initial design, and the lower figure is the final design after convergence. 
In each updating scheme, three randomly generated initial designs are used to examine the algorithm, and the last one is a fixed initial design with three holes inside. The iteration numbers and objective functions are provided in Fig. 14. These data are plotted in Fig. 15, in which the distribution of the final results can be clearly studied. From the previous study, Scheme (a) can be considered a ZLSM. Based on the results shown in Figs. 14 and 15, it can be found with the random initial design, the results show that this scheme needs relatively more iteration steps and obtains high objective function values. When given an appropriate initial design, the scheme obtains the best result. This observation shows that ZLSM generally needs a proper initial guess; otherwise, it may not achieve a good result. This condition is considered the initial designdependent problem. The nucleation capability can greatly alleviate this problem, but the initial guess is still important for level set-based models. Schemes (b) and (c) involve the density distribution stage that makes them close to a density-based optimization model. Scheme (b) uses a larger reduction of Δ in each iteration and shows faster convergent speed compared with those of Scheme (c). A gradual decrease in Δ can generate a reasonable final design, as demonstrated in Fig. 15; all of the final designs using Scheme (c) have low final mean compliances. Another observation is that the initial design for the density-involved schemes is also important because it accelerates the convergence greatly in Scheme (b). In Scheme (c), the results are adequately good, and the improvement is thus not obvious. This numerical study can be concluded with the following suggestion: If an appropriate initial design can be easily obtained, Schemes (a) and (b) should be chosen; otherwise, a small value for decreasing the level set band as in Scheme (c) should be selected to obtain a reasonable design by sacrificing some efficiency. 3D cantilever beam This method can also be applied to 3D models without difficulties. Figure 16 illustrates the optimization iteration process of a 3D cantilever beam optimization problem. The left side of the beam is fixed, and a downward force is applied at the middle point of the bottom line of the right side. The structure is discretized with 60 Â 30 Â 10 elements. The volume fraction is set at 20%. Δ is set at 5 at the start and decreased by 0.1 each step to 0.5. Figure 16(a) is the zero level set of the design at Steps 1, 15, 30, and 130. The corresponding density distributions are shown in Fig. 16(b) by the way provided by Ref. [59]. The initial design has a uniformly distributed density of 0.5. The iteration process can be considered a density-based model. In the end, only a small number of elements are gray around the boundary. The iteration process illustrates that the topology change can be easily realized, and a clear boundary can be obtained without implementing any penalization and filter schemes. Conclusions In this paper, a comparison between CLSM and ZLSM for solving structural topology optimization problems is schematically discussed. ZLSM, which can be easily applied and solved with minimal numerical issues, is suggested in practical implementations. 
We propose the level set band method to learn from density-based methods with a remarkable flexibility change in topology and improve the level set-based method by introducing a level set band Φ b , which utilizes the high-dimensional information of the level set function to improve the continuity of the objective and constraint functions in handling topology changes. The density and the level set-based methods can be seamlessly combined by changing the value of Φ b gradually. The proposed model with a large value of the level set band illustrates the property of the density-based model and is highly flexible in handling topology change. When the size of the level set band is decreased, this method becomes similar to an LSM, which can provide clear boundaries or a black-and-white design, without involving penalization. Thus, the parameter, level set band, can be used to connect the density-based method and LSM. In another aspect, this algorithm shows that the two methods have no essential difference. Numerical examples with random initial designs are used to evaluate the convergence property of the proposed method. If the initial design is unclear, the level set band can be slowly decreased to obtain a highly reasonable design with lesser efficiency; otherwise, the size of the level set band can be decreased rapidly to accelerate the convergence process. 2D and 3D examples are solved to illustrate the effectiveness of the proposed method.
v3-fos-license
2023-03-25T15:02:27.441Z
2023-03-01T00:00:00.000
257729133
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/143487/20230323-1369-icr73t.pdf", "pdf_hash": "ed1f9d1fd591ade0075de1270e3d98cd2519c032", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44787", "s2fieldsofstudy": [ "Medicine" ], "sha1": "55e4b2f5a32e4b2e06102d0f1036f9a189061f18", "year": 2023 }
pes2o/s2orc
Long-Term Survival of Patients With Glioblastoma of the Pineal Gland: A ChatGPT-Assisted, Updated Case of a Multimodal Treatment Strategy Resulting in Extremely Long Overall Survival at a Site With Historically Poor Outcomes We present an updated case report of a patient with glioblastoma isolated to the pineal gland with an overall survival greater than five years and no progression of focal central nervous system (CNS) deficits since initial presentation. The patient underwent radiotherapy up to 60 Gy with concurrent and adjuvant temozolomide with the use of non-standard treatment volumes that included the ventricular system. The utilization of ventricular irradiation as well as the addition of bevacizumab at disease recurrence may have encouraged this unusually long survival by preventing/delaying leptomeningeal spread. We also present an updated review of the literature, which shows a median survival of six months, reinforcing the patients atypical disease trajectory. Finally, we utilize OpenAI’s language model ChatGPT to aid in synthesizing this manuscript. In doing so, we demonstrate that ChatGPT is apt at creating concise summaries of relevant literature and topic subjects, however its output is often repetitive with similar sentence/paragraph structure, less than ideal grammar and poor syntax requiring editing. Thus, in its current iteration, ChatGPT is a helpful aid that cuts down on the time spent in data acquisition and processing but is not a replacement for human input in the creation of quality medical literature. Introduction Glioblastoma of the pineal region is an uncommon location for what is the most common primary central nervous system (CNS) malignancy, with 40 cases reported in the literature. Despite advances in cancer treatment, overall survival (OS) for patients with pineal gland glioblastoma is often in the span of months [1], making this site a significant clinical challenge compared to typical glioblastomas, where the median OS is 1.5 years [2]. The pineal gland is a small, pinecone-shaped structure located in the center of the brain. It plays an essential role in regulating circadian rhythms and secretes melatonin, a hormone that helps regulate sleep-wake cycles. Pineal gland tumors are rare, accounting for less than 1% of all brain tumors, with an extremely small fraction consisting of glioblastoma. Glioblastoma of the pineal region presents a unique diagnostic and therapeutic challenge due to its location. In being bounded by the third ventricle and quadrigeminal cistern, disease can disseminate along the ventricular/cistern surfaces and be broadcasted through the CSF fluid, settling along the neuraxis. Additionally, the pineal gland is adjacent to critical structures such as the brainstem and optic structures. Treatment options for pineal gland glioblastoma typically involve a combination of surgery, radiation therapy (RT), and chemotherapy. However, the optimal treatment approach for this rare and aggressive site remains unknown. This case study elaborates on the clinical presentation and treatment course of a patient with pineal gland glioblastoma with an over five-year overall survival, which may not be limited by disease progression. We provide an updated literature review that reinforces the exceptional survival of this patient. To demonstrate the capabilities of language-model artificial intelligence in aiding medical writing, we utilized the assistance of ChatGTP in developing this case report and literature review. 
By sharing this case, we hope to contribute to the growing body of knowledge on this rare disease site and help inform future treatment decisions for patients with pineal gland glioblastoma. Case background and update We previously published the case of a 64-year-old man with no significant medical history other than gastroesophageal reflux disease (GERD) who presented with vertical diplopia, headaches, and insomnia, where the neurological exam found right cranial nerve IV palsy and gait difficulties [1]. CT imaging revealed a hyperdense pineal mass, with biopsy demonstrating a glioblastoma histologically (atypical cells with giant nuclei, seven mitoses per three high-powered fields, multiple microvascular foci, pseudopalisading necrosis present) and molecularly (isocitrate dehydrogenase 1/2 wildtype with O-6-methylguanine-DNA methyltransferase (MGMT) promoter hypermethylation). The enhancing mass ( Figure 1A) was approximately 25 mm in diameter with no significant surrounding edema, as shown in Figure 1B. Initial treatment details from the radiotherapy treatment plan are elaborated upon in our initial report [1]. Briefly, given the access to the ventricular system, a low dose volume of 50 Gy was applied consisting of the gross enhancing disease as well as compartments typically included in whole-ventricle irradiation with a 2 cm margin. This was followed by a cone-down to gross-enhancing disease with a 2 cm margin to 60 Gy delivered in 2 Gy daily fractions. There was no T2-flair signal beyond the enhancing disease; therefore, these sequences had no effect in developing the low dose volume. He experienced the expected increased in fatigue toward the end of radiotherapy and afterward developed temporary alopecia with no other toxicity. The patient continued with adjuvant temozolomide (TMZ) and had no evidence of disease progression or recurrence until the 12th cycle. At 58 weeks post-biopsy, two new lesions were appreciated in the brainstem and right parietal lobe, at which point bevacizumab (7.5 mg/kg every three weeks) was added to TMZ. Given these radiographic findings, the decision was made to continue TMZ for 18 cycles alongside bevacizumab initiation. Overall, 17 cycles of adjuvant TMZ (one cycle at 150 mg/m 2 , the remaining 200 mg/m 2 first five days of the 28-day cycle) were completed, with the 18th cycle omitted due to limited prescription availability at the time. TMZ was discontinued at 88 weeks post-biopsy, and bevacizumab was discontinued 129 weeks after biopsy. There were no radiographic changes upon follow-up MRI at 146 weeks post-biopsy. During the interim between this MRI and his prior imaging, the patient underwent physical therapy on a regular basis, which lead to improved diplopia and ataxia. There is an approximate 25-mm enhancing pineal mass on T1 post-contrast imaging (black arrow), with no evidence of edema to the parenchyma highlighted by the white arrows that surround the lesion on T2 flair MRI. Since this initial report, the patient continued to undergo close clinical and radiographic surveillance with MRIs every two months. At 154 weeks post-biopsy, the patient exhibited T1 hypersensitivity in the pineal region, measuring 19 mm × 15 mm. By 192 weeks post-biopsy, the pineal region had gradually reduced to 12 mm × 10 mm, while there was an increased enhancement to the right superior colliculus and right periatrial white matter. There was also interval development of a small unrelated parenchymal hemorrhage. 
The pineal region significantly increased in size to 30 mm × 16 mm × 19 mm at week 201 post-biopsy, with continued increase in enhancement of the atrium of the right lateral ventricle. At 217 weeks, the pineal region had reduced to 30 mm × 15 mm × 10 mm, while at 234 weeks, a new 3 mm insular region nodule was observed. Both the pineal region and insular region demonstrated interval growth at 256 weeks post-biopsy. During this period, the patient developed decreased appetite with a 15-lb weight loss. He had less mobility at this time as a result of his decreased participation in physical therapy. A final MRI was obtained at 263 weeks post-biopsy, where the enhancing pineal region remained stable in size (Figure 2A), although smaller in volume than at presentation. Figure 2A also delineates the enhancing insular region, which had grown to 9 mm. T2 MRI from this time did not show any evidence of edema in the parenchyma surrounding both lesions as highlighted by the gray arrows in Figure 2B. Mild nodular enhancement was appreciated along the right and left lateral ventricles. There were also several chronic small vessel ischemic changes throughout the white matter and basal ganglia. At this time, he developed a urinary tract infection (UTI) that progressed to urosepsis, and the decision was made to enter hospice care, where he passed away at week 265 post-biopsy. Prior to death, the patient continued to experience diplopia and cognitive deficits that were stable from initial presentation, with no headaches, seizures or paresthesia, and no other changes in vision or focal muscle weakness. Throughout the post-radiation therapy (RT) MRI imaging, there was no radiographic evidence of radionecrosis to the brain parenchyma. Figure 2: The pineal gland demonstrates stable post-contrast enhancement (white arrow), while there was mild growth of the enhancing lesion of the right insular cortex to 9 mm (black arrow). The gray arrows demonstrate a lack of edema in the parenchyma surrounding both the primary and right insular cortex lesions on T2 MRI. Updated literature review Since our last review in 2017 [1], there have been four published case studies into glioblastoma of the pineal region with a total of 14 cases. Orrego et al. published a case study with four unique cases of pineal glioblastoma in the Neurosurgery Department at the Instituto Nacional de Enfermedades Neoplasicas between 1994 and 2012 [3]. Case 1 was a 48-year-old female patient diagnosed with a pineal tumor (glioblastoma) who underwent a ventricular peritoneal shunt and subtotal resection. She received adjuvant radiation therapy but died 12 months after surgery. Case 2 was a 50-year-old male patient diagnosed with a pineal tumor (glioblastoma) who underwent a ventricular peritoneal shunt and partial resection. His radiotherapy was discontinued prior to completion, and he died six months after surgery. Case 3 was a 56-year-old male who underwent partial resection of his pineal glioblastoma with a ventricular peritoneal shunt. He received radiation therapy with concurrent TMZ but developed new symptoms nine months later and chose palliative care. He died 29 months after surgery. The final case involved a 25-year-old male patient who was diagnosed with a pineal glioblastoma and underwent a ventricular peritoneal shunt and maximal safe resection. He received radiation therapy with concurrent TMZ but developed a local recurrence. He passed away six months after treatment; however, this was secondary to pulmonary tuberculosis.
A case series of 215 pineal region tumors was published in the interim by a single surgeon between 1990 and 2017, of which 8 (3.7%) were glioblastoma [4]. The median age at diagnosis was 48.5 years, and 87.5% of patients were male. The most common symptoms were headache, vision changes, and gait imbalance/ataxia. The tumor origin for pineal region tumors was believed to be in the pineal gland in 3 (37.5%) of the cases with the others originating from the thalamus or indeterminate. The cause of symptoms was hydrocephalus, and it was managed by an endoscopic third ventriculostomy or ventriculoperitoneal shunt. In analyzing the eight patients whose glioblastoma was primary to or spread into the pineal gland, the median OS was 15 months, and tumors recurred locally except in one patient who had distal recurrence in the right frontal lobe. Recurrent subtotal resection was achieved in 75% of the eight-patient cohort, and all received standard fractionated external beam radiotherapy. One patient died perioperatively, and another was contraindicated for chemotherapy, while the rest were treated with TMZ as initial chemotherapy. Additional chemotherapies were attempted in 37.5% of the mixed group of eight patients. The perceived tumor origin did not influence the ability to achieve radical subtotal resection. Individual analysis of the three glioblastomas originating in the pineal gland was not reported. The Güzel group described the case of a 5-year-old girl who was admitted with symptoms of headache, dizziness, difficulty walking, and impaired vision for one month [5]. A neurological exam showed sleepiness, unequal pupil size, inability to look laterally and weakness on the left side. An MRI revealed a mass in the pineal region that had spread to the right thalamus and superior peduncle and was determined to be a glioblastoma through histopathology. The patient had a shunt inserted for hydrocephalus, and the tumor was removed through a surgical approach. Treatment was still ongoing from seven months post-diagnosis. A fourth case report detailed a 55-year-old female patient who was admitted to the hospital for dizziness, headache, blurred and double vision [6]. The physical examination revealed Parinaud syndrome, and an MRI confirmed a heterogeneously enhancing mass in the midline pineal region with ventriculomegaly. The patient underwent a surgical resection and a ventriculoperitoneal shunt to resolve the hydrocephalus. The histological diagnosis was glioblastoma. A genomic profiling showed telomerase reverse transcriptase (TERT) amplification, multiple TERT fusions, and FGFR2 fusions, as well as CDKN2A/CDKN2B loss, TP53 mutation, and 19q chromosome deletion. The patient received radiotherapy and TMZ chemotherapy, but her condition worsened leading to an overall survival of three months. Discussion Glioblastoma is a highly malignant form of brain cancer with a poor prognosis. Glioblastoma of the pineal region is an extremely rare site, with 40 cases documented in the literature as of this review. The purpose of this manuscript, in addition to elaborating on an atypical case, is to summarize the current understanding of glioblastoma of the pineal gland through analysis of other case reports and their treatments. These reports have documented a variety of treatments for glioblastoma of the pineal region with an overall survival of less than three years. We previously reported the median survival of six months (range, 2-24 months) for pineal glioblastoma [1]. 
The available data in the cases published after 2017 [3][4][5][6] have similarly demonstrated a median survival of six months (range, 3-7 months). Excluding the present case, the median survival of all current cases in the literature stands at six months (range, 2-24 months). In stark contrast to the median survival in other publications, the current patient showed a remarkable survival of 5.1 years where there was no evidence of progression of initial neurological deficits, and where the patient's failure to thrive was possibly not secondary to his slowly progressive intracranial disease. The patient's treatment differed in that the dose volume included a significant amount of the ventricular system, while other studies report irradiating the pineal region, but none have specifically mentioned expanding the volume into the ventricles to prevent leptomeningeal spread. In other tumors with common leptomeningeal spread, whole brain radiation therapy (WBRT) has been shown to improve survival [7][8][9]. Alternatively, whole ventricular irradiation has shown success in tumors such as germinomas in limiting disease to common areas of spread while maintaining reduced levels of cerebral toxicity and better cognitive function [10,11]. Our case employed this strategy to specifically irradiate the ventricular system with 50 Gy to prevent leptomeningeal spread due to proximity of the pineal gland. Another distinction in the present case is the choice of systemic therapies. While a number of the published case reports utilized TMZ, the present case is the second case of pineal gland glioblastoma in the literature to receive bevacizumab as well [12]. In that case report, the addition of bevacizumab led to the treatment response of a pineal gland glioblastoma that was refractory to radiotherapy and TMZ. Although not initially involving the pineal gland, a recurrent glioblastoma case with leptomeningeal dissemination was also shown to have a clinical and radiographic response with the incorporation of bevacizumab therapy [13]. Part of the current patient's longevity may also be in part due to the application of bevacizumab at initial recurrence, as a randomized trial of recurrent glioblastoma showed that those with an early response in the trial arm receiving bevacizumab had improved overall survival [14]. Ultimately, the irradiation of the ventricles when combined with bevacizumab at initial recurrence may explain the prolonged survival in our patient compared to the literature through the prevention and treatment of leptomeningeal spread. Use of ChatGPT in assisting writing ChatGPT is a state-of-the-art language model developed by OpenAI that uses deep learning algorithms to generate human-like text. It is trained on a massive amount of data, including a portion of medical literature, which makes it capable of generating coherent and informative responses related to medical topics. One of the strengths of using ChatGPT for medical writing is its ability to quickly summarize input articles. This can be especially useful for medical professionals who need to quickly understand the main findings of a study or review a large number of articles in a short amount of time. Additionally, ChatGPT can be used to generate written content with remarkable speed, allowing medical professionals to focus their time and resources on more critical tasks. Quickly writing down the main topics the authors want to discuss, ChatGPT can quickly turn those points into an entire section of the paper. 
A summarized case by the authors can be used as input into ChatGPT, which outputs detailed and technical writing that surpasses the level of input, saving time for the authors in developing the language and summaries used in the analysis. However, despite its training on medical literature, it is crucial to note that ChatGPT is not a substitute for professional medical advice. It is not capable of diagnosing or treating patients and should not be relied upon for making medical decisions. All medical advice generated by ChatGPT should be reviewed and approved by a licensed physician before it can be published or used in any official capacity. Another limitation of using ChatGPT for medical writing is that it has a knowledge cutoff of 2021, meaning it does not have access to more recent medical literature. Additionally, it cannot access the internet and is only able to generate responses based on the information it was trained on, so it may not be able to provide information on all medical topics or the most up-to-date findings for up-to-date literature reviews. With exception of the abstract and figure legends, all other parts of this article utilized ChatGPT to generate the foundation of the written text, which showed the strength in some tasks/aspects and weakness in others. For example, the contents of the prior three paragraphs are the successful unedited output from ChatGPT, with Figure 3 illustrating the input query that was used. Similarly, the general instruction given to ChatGPT in regard to creating the manuscript's introduction rapidly created a well-detailed, comprehensive summary of pineal glioblastoma with syntax and overall structure that is hard to distinguish from human efforts ( Figure 4). Updating some values and adding some missing content was all that was needed for the introduction as written. In contrast to the two aforementioned tasks, we found that ChatGPT often produced paragraphs with style that was repetitive and grammar that was suboptimal. As exemplified in Figure 5, the summaries of the updated review of literature were prone to repetition of sentence structure, paragraph structure, and improper use of grammar. Artificial intelligence was able to understand the input of medical summaries or summarize a publication but lacked the ability to synthesize a complete picture of the patient's record and diagnosis. Further, there were some words that, to our knowledge, are not medical terms or jargon present in ChatGPT's output, such as "mesh-like pupils." In summarizing the patient's updated history, ChatGPT was able to take the rough MRI results and add some new details but failed to provide a stylistic summary or add any good overall insights into the progression of disease ( Figure 6). Essentially, the language syntax in our generated responses had repeated syntax and sentence structure that required continued changes to become easily readable. While ChatGPT served as a time-saving aid in creating this manuscript, in most cases, it served as a rough draft that still required considerable human input to lead to an article with appropriate grammar, flow and content.
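The drafting workflow described above — supplying a condensed case summary or a list of discussion points and asking the model to expand them into manuscript prose — can also be scripted rather than typed into the chat interface. The sketch below is a minimal illustration only, assuming the OpenAI Python client; the model name, prompt, and summary text are hypothetical and this is not the procedure used by the authors.

```python
# Minimal sketch (assumption: OpenAI's Python client; model name and prompt are illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

case_summary = (
    "64-year-old man, pineal glioblastoma, 60 Gy with ventricular coverage, "
    "adjuvant temozolomide, bevacizumab at recurrence, overall survival >5 years."
)

response = client.chat.completions.create(
    model="gpt-4",  # hypothetical choice; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You draft concise, formal medical case-report prose."},
        {"role": "user", "content": f"Expand the following summary into a short case description:\n{case_summary}"},
    ],
)

draft = response.choices[0].message.content
print(draft)  # the draft still requires human editing for accuracy, grammar and flow
```

As the authors note, output produced this way is a starting point for human revision, not finished medical text.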
v3-fos-license
2018-04-03T02:07:31.541Z
2017-09-26T00:00:00.000
3299572
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-017-11977-5.pdf", "pdf_hash": "195cda2397efb41387c915704a6930479e65e42e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44794", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "195cda2397efb41387c915704a6930479e65e42e", "year": 2017 }
pes2o/s2orc
Reliable Detection of Implicit Waveform-Specific Learning in Continuous Tracking Task Paradigm Implicit waveform-specific (IWS) learning during a typical continuous tracking task paradigm has been reported for decades, as evidenced by better tracking improvement on the repeated segment of a specific target waveform than random segments. However, the occurrence of the IWS learning in such a task paradigm has been challenged by several unsuccessful results in recent literature. This research concerns reliable detection of the induced IWS learning and to this end, proposes to use the similarity between the cursor and the target along the direction corresponding to the waveform pattern as the performance measure. A 3-day experiment designed with full examination on IWS learning including a practice phase, an immediate test phase and a consolidation test phase after 24 hours was conducted to validate the feasibility and sensitivity of the Pearson’s correlation coefficient on the vertical movement r v in this study. Experiment results indicate that r v is more sensitive in detecting the IWS learning in all phases compared to the conventional root mean square error (RMSE) performance measure. The findings confirm the importance of the performance measure in implicit learning research and the similarity measure in accordance with the waveform could be promising for waveform-specific learning detection in this paradigm. Implicit learning is generally defined as a natural learning process, in which individual devotes sufficient attention to a structured stimulus environment without a clear awareness of what to learn or any conscious operation such as explicit strategies for learning 1 . The "implicit learning" phenomenon has been elicited in many learning processes of fundamental abilities in experiment situations, including language acquisition, object knowledge formation and motor learning 2 . Specifically in motor skills domain, the implicit learning involving many motor components is termed as implicit motor learning 3 , which has been investigated by a good number of studies 4-8 using the continuous tracking task paradigm developed by Pew 9 . In a typical continuous tracking task, implicit motor learning is represented by implicit waveform-specific (IWS) learning. The participants are instructed to track a target horizontally moving across the screen following an invisible trajectory with a hand-driven device. The waveform of the trajectory normally consists of three equal-duration segments, in which the middle segment is repeated throughout all trials while the other two segments are randomly generated for each trial under complexity control 10 . When participant's continuous tracking performance on the middle repeated segment outperforms the outer random segments, waveform-specific learning happens. Generally, the participant is blinded to the segment composition and unaware of the existence of the repeated segment, which enables one to conclude that the waveform-specific learning is implicit. The occurrence of IWS learning in the continuous tracking task paradigm was firstly reported by Pew 9 . 
Since then, many researchers utilized the continuous tracking task paradigm to investigate implicit motor learning in different research contexts such as comparison of implicit and explicit learning 10,11 , validation on the occurrence of implicit learning 4,11,12 , examination on the capability of implicit learning in older and younger adults 13 and patients 14,15 , and investigation on oculomotor and manual coordination in implicit motor learning 16 . Although most of the studies reported the occurrence of IWS learning in the continuous tracking task paradigm 4,5,[9][10][11][12][13]16,17 , the failure to observe the IWS learning in several studies challenged the reliability of this task paradigm. Chambaron et al. 18 observed IWS learning only when using exactly the same repeated segment reported by Wulf et al. 12 , but failed when the repeated waveform patterns assigned to the participants varied from each other. They inferred that the superior tracking performance in repeated segment observed by Wulf et al. 12 might be due to the easiness of the repeated segment. However, such an inference was debatable given the procedural difference (e.g. numbers of practice trials, task speed and hand-driven device) between these two studies: it was pointed out that in contrast to a larger amount of practice in the original experiment 12 , only a single practice session of 12 trials in the replication 18 might be insufficient for any effect of practice to occur 4 . In another study 19 , Lang et al. did not observe IWS learning in a standard continuous tracking task and they attributed the failure to the ceiling effect of tracking performance. This presumption was consistent with the viewpoint that the IWS learning did occur but the expression of knowledge was suffering from a ceiling effect 4 . In contrast to these negative results 18,19 , the validation study 4 and our recent work 5 successfully demonstrated the IWS learning in the continuous tracking task, in the condition that the repeated segments assigned to each participant were different. A number of researchers have attempted to improve the continuous tracking task paradigm from different aspects, in order to increase its reliability for implicit motor learning study [3][4][5]19 . A major effort has been devoted to reinforcing the IWS learning effect during the task, such as Lang et al. 's investigations on enhancing the implicit learning through a better predictability by increasing target sequence regularities in the repeated segments 3 , and removing the negative guidance effect that prevents the participants from learning by suppressing visual feedback 19 . Another approach concentrates on the detection of the induced IWS learning, for instance Künzell et al. concerned how the tracking path characteristics as well as the target speed affected the IWS learning detection 4 , and our previous work tested the time-on-task effect on the detection and also provided refinements on the paradigm for more effective detection 5 . This paper argues that a reliable tracking performance measure with specific sensitivity to IWS learning is critical to the continuous tracking task paradigm. The discrepant results from the aforementioned studies, to some extent, revealed the importance and difficulty of how to detect the IWS learning reliably and effectively. Currently, the root mean square error (RMSE) in screen pixels is the most widely adopted performance measure in the continuous tracking task for implicit motor learning research 5,6,8,9,12,20,21 . 
However, the RMSE index simply sums up the squared errors of every tracking, which focuses more on local errors than global similarity and is vulnerable to accidental errors that are unrelated to IWS learning, for instance the mistakes caused by hand-driven device control. Some researchers also used other conceptually similar dependent variables as performance measure, such as the integrated absolute error 9 and the radial error 22 , which also concerned much detailed local information. The low sensitivity and specificity of these existing performance measures in the IWS learning detection may imperil the reliability of the continuous tracking task paradigm for implicit learning studies. Differently, Lang et al. 19 applied the inter-correlation coefficient on the horizontal movement, which is essentially a typical similarity indicator of two time series calculated by Pearson's correlation test, to assess the tracking performance. Unfortunately, they still failed in detecting the IWS learning in the traditional continuous tracking task 19 , which might result from the ceiling effect due to the inappropriate difficulty set for the tracking task in the designed experiment as stated therein 19 . Aiming at reliable detection of IWS learning in the continuous tracking task paradigm, this study investigated the feasibility and sensitivity of the Pearson's correlation coefficient on the vertical movement (denoted by r v ) as the performance measure, in comparison with the conventional measure using RMSE. The experiment was particularly designed with a practice phase, an immediate test phase and a consolidation test phase, for a full examination on IWS learning. More specifically, twenty-four participants performed the continuous tracking task on three days (i.e. Day 1, Day 2 and Day 3) in which they were instructed to track a moving target displayed on a monitor with a stylus and pen tablet. There was one block on Day 1 as the performance baseline. On Day 2, five consecutive practice blocks followed by one transfer block and one retention block were performed. On Day 3, two retention blocks with one transfer block in-between were constructed for the consolidation test. Each block consisted of four trials and each trial was divided into three segments. In all trials, the waveforms in the first segment (Seg1) and the third segment (Seg3) were randomly generated, in the middle segment (Seg2) the waveform was repeated over trials in the practice and retention blocks, and randomly generated in the baseline and transfer blocks. Results The illustration of the continuous tracking trajectories with horizontal movements and vertical movements is presented in Fig. 1. Regarding the tracking performance measures, a smaller RMSE or a higher r v indicates a better tracking performance. Figure 2 gives an example of the target and the cursor movements of three segments in one trial to show the individual tracking performance. In order to fully examine the IWS learning in the continuous tracking task, two-way analysis of variance (ANOVA) with repeated measures were performed for the tracking performance measures. Both tracking performance measures were normally distributed examined by Shapiro-Wilk test, and Greenhouse-Geisser adjustments were used if Mauchley's test showed that assumptions of sphericity were violated. 
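As a schematic of how a two-way repeated-measures ANOVA of this kind can be set up outside dedicated statistics packages, the sketch below uses statsmodels with hypothetical column and file names; note that AnovaRM requires a balanced long-format table and does not itself apply the Greenhouse-Geisser correction mentioned above.

```python
# Schematic 2 (Segment) x 6 (Block) repeated-measures ANOVA on the r_v measure.
# Column names ('participant', 'segment', 'block', 'rv') and the file name are assumptions;
# the table must contain exactly one value per participant x segment x block cell.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("tracking_performance_long.csv")  # hypothetical long-format data

aov = AnovaRM(
    data=df,
    depvar="rv",
    subject="participant",
    within=["segment", "block"],
).fit()

print(aov)  # F tests for Segment, Block, and the Segment x Block interaction
```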
The analyses and comparisons between RMSE and r v were performed in three aspects based on the time course, including (1) across the practice phase, (2) during the immediate test phase on Day 2, and (3) during the consolidation test phase on Day 3, which are depicted in the following paragraphs. Across the practice phase. Tracking performance measured by RMSE and r v on the Seg2 and the mean of random segments Seg1 and Seg3 across the practice phase were analyzed respectively by 2 (Segment: Seg2 and the average of Seg1 and Seg3) × 6 (Block: Block 1 to Block 6) repeated ANOVA. Regarding both RMSE and r v , a main effect of Block was evident (RMSE: F 2.64, 60.65 = 28.009, p < 0.0001, partial-η 2 = 0.549; r v : F 1.91, 49.96 = 22.622, p < 0.0001, partial-η 2 = 0.496), indicating the improvement of tracking performance by practice. This is consistent with the tracking performance curve in Fig. 3 where all segments showed a decreasing trend in RMSE and an increasing trend in r v across the practice blocks. However, neither a main effect of Segment (F 1, 23 = 2.245, p = 0.148, partial-η 2 = 0.089) nor a Segment × Block interaction (F 3.33, 76.55 = 1.84, p = 0.141, partial-η 2 = 0.074) was evident for RMSE, indicating that the improvement of tracking performance over blocks measured by RMSE had no significant difference between the Seg2 and the mean of random segments Seg1 and Seg3. On the contrary, Segment showed a significant main effect on r v (F 1, 23 = 5.348, p = 0.03, partial-η 2 = 0.189) so that the tracking performance in the Seg2 measured by r v was superior to the mean of random segments Seg1 and Seg3. Importantly, an expected significant Segment × Block interaction (F 3.06, 70.44 = 2.965, p = 0.037, partial-η 2 = 0.114) was observed for r v , suggesting that the improvement of tracking performance over blocks in the Seg2 was more significant than that of the random segments Seg1 and Seg3. This result provided evidence of IWS learning in the practice phase. During the immediate test phase. The immediate test phase on Day 2 included the transfer test in Block 7 and the retention test which was adjusted to be the average of Block 6 and Block 8 for counterbalance. Figure 4 presents the tracking performance difference between the retention test and the transfer test (i.e. Transfer -Retention) across subjects (n = 24) measured by both RMSE and r v . It can be observed that the IWS learning was detected in most participants by both measures; nevertheless, the tracking performance measured by r v led to detection of the IWS learning in more participants than that measured by RMSE. Moreover, tracking performance measured by RMSE and r v were analyzed respectively by a 2 (Segment: Seg2 and the average of Seg1 and Seg3) × 2 (Test: transfer test and retention test) repeated ANOVA. For RMSE, a main effect of Test (F 1, 23 = 4.804, p = 0.039, partial-η 2 = 0.173) was observed, suggesting that the tracking performance in the transfer test was significantly lower than that in the retention test. However, neither a main effect of Segment (F 1, 23 = 0.614, p = 0.441, partial-η 2 = 0.026) nor a Segment × Test interaction (F 1, 23 = 2.894, p = 0.102, partial-η 2 = 0.112) was found, indicating that the decrease of tracking performance from the retention test to the transfer test had no significant difference between the Seg2 and the random segments Seg1 and Seg3. 
For r v , there was a significant main effect of Test, with tracking performance decreasing from the retention test to the transfer test (F 1, 23 = 12.612, p = 0.002, partial-η 2 = 0.354), and a significant main effect of Segment with higher tracking performance in the Seg2 in comparison with the random segments Seg1 and Seg3 (F 1, 23 = 6.141, p = 0.021, partial-η 2 = 0.211). More importantly, a significant Segment × Test interaction was observed in r v (F 1, 23 = 11.659, p = 0.002, partial-η 2 = 0.336), indicating that the decrease of tracking performance from the retention test to the transfer test in the Seg2 was significantly larger than that in the random segments Seg1 and Seg3. During the consolidation test phase. In order to compare the detection sensitivity of two performance measures to the offline consolidation of IWS learning, the tracking performance measured by RMSE and r v in the consolidation test phase on Day 3 were also analyzed respectively by a 2 (Segment: Seg2 and the average of Seg1 and Seg3) × 2 (Test: transfer test and retention test) repeated ANOVA. For RMSE, the main effect of Test (F 1, 23 = 5.358, p = 0.03, partial-η 2 = 0.189) was significant and the main effect of Segment approached significance (F 1, 23 = 3.578, p = 0.071, partial-η 2 = 0.135). Nevertheless, the Segment × Test interaction (F 1, 23 = 0.158, p = 0.695, partial-η 2 = 0.007) was far from evident. These results suggested that although the tracking performance significantly decreased from the retention test to the transfer test and the tracking performance was lower in the random segments Seg1 and Seg3 than the Seg2, the decrease of tracking performance did not differ between the Seg2 and the random segments Seg1 and Seg3. For r v , there was a significant main effect of Test (F 1, 23 = 4.515, p = 0.045, partial-η 2 = 0.164) but no main effect of Segment (F 1, 23 = 1.846, p = 0.187, partial-η 2 = 0.074). Nonetheless, we observed a significant Segment × Test interaction (F 1, 23 = 9.765, p = 0.005, partial-η 2 = 0.298), implying that the decrease of tracking performance was more pronounced in the Seg2 than in the random segments Seg1 and Seg3 when the repeated waveform pattern of Seg2 in the retention test was replaced by a random pattern in the transfer test. This result demonstrated the offline consolidation of IWS learning. Discussion In this study, we proposed to use the performance measure, i.e., the Pearson's correlation coefficient on the vertical movement r v , to detect the IWS learning in the continuous tracking task paradigm. The proposed measure has been investigated through a carefully designed experiment which comprises a practice phase, an immediate test phase and a consolidation test phase after 24 hours. To the best of our knowledge, this is the first time that IWS learning has been investigated in all three phases within one experiment, which enabled a full examination of IWS learning in all different phases and ensured the reliability of the experiment results. The feasibility and sensitivity of r v for the detection of IWS learning were compared with the conventional RMSE measure that is widely used for continuous tracking performance, and the experiment results indicated that r v was superior to RMSE in the detection of IWS learning in the continuous tracking task paradigm. In the practice phase, the ANOVA results showed a significant main effect of Block on both RMSE and r v , indicating that the tracking performance can be significantly improved by practice.
This result is in line with our expectation and consistent with our previous work 5 and the validation study of the continuous tracking task for implicit motor learning 4 . More importantly, the Segment × Block interaction revealed significance in r v , indicating that the middle segment showed significantly larger improvement over practice than the outer random segments, which provided strong evidence of IWS learning. On the contrary, no Segment × Block interaction was found in RMSE. Künzell et al. 4 and our previous work 5 did not find the IWS learning effect in the practice phase using RMSE as the performance measure, either. A possible reason suggested by Künzell et al. 4 for the lack of IWS learning detected during practice is that, although the IWS learning did happen, the expression of the learning effect might suffer from a ceiling effect. The analyses and comparisons between RMSE and r v in this study provided another possibility: the RMSE used for detecting IWS learning in these two aforementioned papers might not be sensitive enough to reflect the true extent of the learning effect. Also, the experiment results demonstrated that r v possesses a higher sensitivity than RMSE to reflect the IWS learning in the practice phase and in the immediate test phase. As one of the typical tasks to induce implicit motor learning, the continuous tracking task typically involves two types of learning: general motor skill (GMS) learning and IWS learning. GMS learning refers to the acquisition of expertise with the general requirement of the task 23 , and it occurs when tracking both the random and repeated segments. As no waveform-specific learning occurs in the random segments, GMS learning can be measured by the tracking performance improvement in the random segments across practice blocks 23 . IWS learning is a specific representation of implicit learning in the repeated segment, and therefore it can be seen that GMS learning and IWS learning happen simultaneously in the repeated segment across the blocks. Both the GMS learning and IWS learning contribute to the tracking accuracy in the calculation of the performance measure, which may increase the difficulty of the IWS learning detection. Obviously, in this task paradigm what we really aimed to measure is the waveform-specific learning performance, which concerns how similar the trajectory drawn by the participant is to the given waveform pattern. RMSE squares the point-to-point distance errors and does not count the direction of errors, which may cause ambiguity about the exact cursor position. Moreover, RMSE summarizes the squared errors at each point but does not evaluate the shape of the whole cursor trajectory in comparison with the target trajectory, which is what we care about the most. In addition, RMSE is easily contaminated by accidental errors that are unrelated to IWS learning, which may hinder or even overwhelm the reflection of IWS learning. On the contrary, as a typical similarity indicator, Pearson's correlation coefficient measures the degree of resemblance between two trajectories, which conceptually differs from RMSE and is closer to what we aimed to measure. For example, in Fig. 2 it can be seen that the tracking performance in the repeated segment Seg2 is better than that in the random segment Seg1, which can be reflected by r v rather than RMSE. What is more, the r v in this study reflects less of the GMS learning effect than RMSE, as the r v focuses more on the IWS learning induced by the waveforms specifically designed in the vertical direction.
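The contrast between the point-wise error measure and the correlation-based similarity measure discussed above can be made concrete with a short computational sketch. The snippet below is illustrative only; the variable names are hypothetical, and the two arrays stand in for the recorded vertical positions of the target and cursor over one analysed segment.

```python
# Illustrative comparison of the two performance measures on one tracked segment.
# target_y and cursor_y stand for vertical screen positions (in pixels) at matched
# time points; the trajectory and noise level are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1.5 * np.pi, 384)              # ~12 s of analysed data at 32 Hz
target_y = 100 * np.sin(t) + 40 * np.cos(2 * t)   # stand-in target waveform
cursor_y = target_y + rng.normal(0, 15, t.size)   # noisy tracking of that waveform

rmse = np.sqrt(np.mean((cursor_y - target_y) ** 2))  # point-wise error (pixels)
r_v = np.corrcoef(cursor_y, target_y)[0, 1]          # Pearson similarity of the shapes

print(f"RMSE = {rmse:.1f} px, r_v = {r_v:.3f}")
```

A tracking attempt that reproduces the shape of the waveform but with a constant spatial offset would score poorly on RMSE yet highly on r_v, which is the intuition behind preferring the similarity measure for waveform-specific learning.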
As shown in Fig. 1, the waveform pattern of the target movements consists of a combination of a sinusoidal trajectory in the vertical direction and a uniform rectilinear motion in the horizontal direction. These two different types of movements in two orthogonal directions are independent, implying that only the vertical movements carry the information of the repeated waveform pattern while the horizontal movements do not. As illustrated in Fig. 1b, no waveform-specific information is contained in the horizontal direction, and thus the horizontal tracking errors mainly arise from GMS learning but not IWS learning. Consequently, the Pearson's correlation coefficient r v that concentrates on the vertical movements can reduce the interference of GMS learning especially from the horizontal direction and consequently measures IWS learning more precisely. In addition to the implicit motor learning detection, performance measure is also a vital factor for the detection of implicit motor learning consolidation. Offline consolidation is an important issue for motor learning investigation in the continuous tracking task paradigm. Consolidation refers to that performance is robust and resistant to decay and interference with time passing and without further practice 24,25 . It can be assessed by repeating the test phase of the task separated by a period of time in which participants are not concerned with the task. Therefore, IWS learning consolidation can be measured by repeating the immediate test phase on a second day 5 . In this study, during the consolidation test phase on Day 3, it can be observed from Fig. 3 that RMSE showed an expected increase and r v had an expected decrease in the middle segment Seg2 in the transfer test. However, although RMSE detected the tracking performance difference between the transfer test and the retention test, it failed to reach significance in evaluating the tracking performance drop in the Seg2 from the retention test to the transfer test. Different from RMSE, r v presented a significantly sharper decline in the transfer test as a typical evidence of the IWS learning consolidation occurrence. These statistical results revealed that the offline consolidation of IWS learning can be significantly reflected by r v rather than RMSE, which indicated that r v also outperformed RMSE in the detection of IWS learning consolidation. These findings further underlined the importance of appropriate performance measure to reflect the true extent of implicit motor learning. A lot of efforts have been put in previous studies in order to increase the reliability for implicit motor learning research using the continuous tracking task paradigm, while on the other hand this reliability is reflected mainly by whether and how successful the IWS learning can be detected. As the GMS learning and the IWS learning may occur simultaneously, a successful detection of IWS learning depends on not only how strong the IWS learning is induced, but also how sensitive to the IWS learning the performance measure is, especially under the interference of GMS learning and other factors. This study proposed to use similarity in the waveform direction as the performance measure and the experiment results demonstrated that the proposed measure r v is superior to the widely used performance measure RMSE, leading to successful detecting of IWS learning in all three phases. 
This revealed the importance of the performance measure in the detection of IWS learning and provided more confidence when applying this paradigm as a tool for implicit motor learning research. Method Participant. A total of twenty-four right-handed young volunteers aged from 19 to 31 (mean: 24.1, SD: 3.06, 15 male and 9 female) participated in this study. All participants had normal or corrected-to-normal vision and none of them had prior experience or knowledge of the continuous tracking task. Informed consent was signed before the experiment and an honorarium for participation (approximately US $20) was paid to all participants after completing the whole experiment. This experiment was in accordance with the Declaration of Helsinki and approved by the Research Ethics Committee (University of Macau). Task. Each participant was seated comfortably in front of an LCD monitor (Sony, 17-inch, 1280 × 1024 pixel resolution) at a typical viewing distance of around 60 cm. The area of the full screen was proportionally projected to a large pen tablet (PTH-851, Wacom Intuos pro, Japan) with an active area of 12.8 × 8.0 inch. Holding a stylus with their right hand, participants were instructed to control the movement of a cross-shaped white cursor to track a red dot with a diameter of 9 mm displayed on the screen. The ratio of the pen movement on the tablet and the cursor movement on the monitor was calibrated to reach exactly 3:4. The goal of this task was to track a targeted red dot moving horizontally with an invisible sinusoidal trajectory. A custom Java program (Sun Microsystems, Santa Clara, CA) was applied to generate the waveform patterns and present the movements of the target and cursor. In the meantime, both the trajectories of the targeted red dot and the manually controlled cursor were recorded by this program at a sampling rate of 32 Hz. The horizontal movement of the target was a uniform rectilinear sweep across the screen, while the vertical coordinate was generated as a weighted sum of sine and cosine harmonics of θ_i, α_i = b_0 + Σ_k [ a_k sin(kθ_i) + b_k cos(kθ_i) ] (1), where α_i is the rounded vertical coordinate of the i-th position at which the target is to be displayed and θ_i = 2.14π × i/(time × freq), with time representing the segment duration and freq representing the sampling frequency. The duration is 17.14 s, and the sampling frequency for display is chosen as the same as that for recording for consistency and simplicity. In order to create a smooth transition between segments, the first 15% and last 15% of each segment was transformed to ensure that the initial and final locations of each segment fell on the horizontal line in the middle of the screen. Therefore, only the remaining 70% of each segment (i.e., 1.5π out of 2.14π, or 12 seconds in duration) was under complexity control and subsequently analyzed for tracking performance evaluation. Each of the three segments (Seg1, Seg2 and Seg3) had its own waveform pattern. The coefficients of the waveform patterns for all the three segments were generated following two criteria aiming to appropriately control the complexity: (a) the values of coefficients were within the range of ±5, and (b) the differences among the mean velocities of the generated waveform patterns of the three segments were no more than 1% when running the coefficients through the experiment setup. The waveform patterns of Seg1 and Seg3 were randomly generated and thus different for each trial, whereas the waveform patterns of Seg2 were repeated over trials for each participant.
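A target segment of the kind described in the Task section can be sketched in a few lines of code. In the sketch below, the number of harmonic terms is an assumption made for illustration, while the duration (17.14 s), sampling rate (32 Hz), the 2.14π span of θ, the ±5 coefficient range, and the 15% transition regions follow the description above.

```python
# Illustrative generation of one tracking segment (not the authors' Java implementation).
import numpy as np

FREQ = 32                  # Hz, display/recording rate from the text
DURATION = 17.14           # s, duration of one segment
N = int(DURATION * FREQ)   # samples per segment

rng = np.random.default_rng()
i = np.arange(1, N + 1)
theta = 2.14 * np.pi * i / N            # theta_i = 2.14*pi * i / (time * freq)

order = 3                               # number of harmonics: an assumption for illustration
coeffs = rng.uniform(-5, 5, size=(order, 2))   # coefficients kept within +/-5, as described
alpha = np.zeros(N)
for k, (a, b) in enumerate(coeffs, start=1):
    alpha += a * np.sin(k * theta) + b * np.cos(k * theta)

# Force the first and last 15% of the segment toward the screen mid-line for smooth transitions.
edge = int(0.15 * N)
ramp = np.linspace(0.0, 1.0, edge)
alpha[:edge] *= ramp
alpha[-edge:] *= ramp[::-1]

x = np.linspace(0.0, 1.0, N)            # uniform horizontal sweep (normalized screen width)
```

Only the central 70% of such a segment would enter the complexity control and the performance analysis, as stated in the text.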
Twenty-four selected waveform patterns from a pool of more than three thousand generated patterns following the criteria mentioned previously were randomly assigned to each participant so that the repeated patterns in Seg2 differed for each participant. Procedure. Participants were told that they would see a small red dot (tracking target) occurring on the left middle of the screen and moving horizontally until reaching the right edge of the monitor. The task for all participants was to try their best to track the dot with the cursor as accurately as possible by controlling a stylus to draw on the tablet. For each participant, the whole experiment consisted of 11 blocks of the continuous tracking task on three days, denoted by Day 1, Day 2 and Day 3 respectively. On Day 1, Block 1 was taken as a tracking performance baseline test with randomly generated waveforms in Seg2. On Day 2, Blocks 2 to 6 and Block 8 had repeated waveform patterns in Seg2, while in Block 7, the waveforms in Seg2 were replaced by random patterns. Blocks 1 to 6 were considered as the practice phase and Blocks 6 to 8 were the immediate test phase, in which Block 6 and Block 8 were retention tests while Block 7 was a transfer test. On Day 3, another test phase of three blocks (i.e. Blocks 9 to 11) was performed in order to test the offline consolidation of implicit motor learning. To counterbalance the effect caused by the order of blocks, Block 9 and Block 11 were designed as retention tests while Block 10 was a transfer test, and the segment settings were the same as in the immediate test phase on Day 2. Each block was composed of four trials with a 15-s interval between two consecutive trials and a 90-s break was also provided between blocks. In order to get familiar with the continuous tracking task, participants completed a warm-up trial right before the formal task on each day. After completing the whole experiment, participants were first asked whether they had noticed anything particularly about the tracking waveform and then whether they had noticed any repetition of any part of the tracking waveform. The participants, who claimed that they had noticed the repetition, were further asked which part was repeated over trials. As a result, no participant reported any awareness of the repeated waveform pattern, which ensured that the waveform-specific learning was implicit. The schematic representation of the experiment process was shown in Fig. 5. Performance measures. In order to investigate the feasibility and sensitivity of the Pearson's correlation coefficient on the vertical movement r v for the IWS learning detection in the continuous tracking task paradigm, both r v and RMSE were considered as the performance measures for comparison. The RMSE for each of the three segments was calculated respectively in each trial and then averaged across trials per block as the dependent measure of tracking performance in the corresponding block. Pearson's correlation coefficient r v for each segment in all trials was calculated as shown in Equation (2), where X = {x 1 , …, x n } represents the target vertical locations on the screen in time series, Y = {y 1 , …, y n } represents the cursor vertical locations on the screen in time series, x was the mean of X, and y was the mean of Y. For each of three segments, the average r v across four trials per block was taken as the tracking performance in the corresponding block.
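For completeness, the two performance measures can be written out explicitly. Equation (2) referenced above is presumably the standard Pearson product-moment correlation; both expressions below are the textbook definitions rather than reproductions of the original equations, with x_i denoting the target and y_i the cursor vertical positions of one segment.

```latex
% Textbook definitions of the two performance measures (x_i: target, y_i: cursor vertical positions).
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - x_i\right)^{2}},
\qquad
r_v = \frac{\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)}
           {\sqrt{\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^{2}}\;
            \sqrt{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^{2}}}
```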
v3-fos-license
2017-04-20T15:31:06.610Z
2014-01-17T00:00:00.000
5409794
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0085020&type=printable", "pdf_hash": "987278c2ef8c371f7531760b92ad3ca8358e83a6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44795", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "987278c2ef8c371f7531760b92ad3ca8358e83a6", "year": 2014 }
pes2o/s2orc
Cognitive Function in Peripheral Autonomic Disorders Objective: the aims of the current study were 1) to evaluate global cognitive function in patients with autonomic failure (AF) of peripheral origin and 2) to investigate the effect of a documented fall in blood pressure (BP) fulfilling the criteria for orthostatic hypotension (OH) on cognitive performances. Methods: we assessed 12 consecutive patients (10 males, 68±7 years old) with pure AF (PAF) or autoimmune autonomic neuropathy (AAN) and 12 age- and gender-matched controls. All patients had no clinical signs of central nervous system involvement and normal brain CT/MRI scan. Cognitive function was assessed on two consecutive days in 3 conditions: on day 1, while sitting, by means of a comprehensive battery of neuropsychological tests; on day 2, while tilted (HUT) and during supine rest (supine) in a randomized manner. BP and heart rate (HR) were continuously recorded non-invasively for the whole duration of the examination. Results: patients with PAF or AAN displayed a preserved global cognitive function while sitting. However, compared to supine assessment, during HUT patients scored significantly worse during the Trail Making Test A and B, Barrage test, Analogies test, Immediate Visual Memory, Span Forward and Span Backward test. Pathological scores, with regard to Italian normative range values, were observed only during HUT in the Barrage test and in the Analogies test in 3 and 6 patients respectively. On the contrary, in healthy controls, results of neuropsychological tests were not significantly different during HUT compared to supine rest. Conclusions: these data demonstrate that patients with PAF and AAN present a normal sitting global cognitive evaluation. However, their executive functions worsen significantly during the orthostatic challenge, possibly because of transient frontal lobe hypoperfusion. Introduction Orthostatic hypotension (OH) is defined as a systolic blood pressure (SBP) fall of at least 20 mmHg or a diastolic blood pressure (DBP) fall of at least 10 mmHg within 3 min of standing or head-up tilt (HUT) to at least 60° [1]. Previous cross-sectional studies reported an association between OH and cognitive decline in various conditions, including central neurodegenerative disorders with autonomic failure (AF) [2][3][4]. However, prospective studies failed to demonstrate that OH was a risk factor for cognitive decline, possibly because of the confounding effects of age, concomitant disorders, medications, cerebrovascular or neurodegenerative processes [4][5][6]. Data on cognitive function in peripheral autonomic disorders, rare conditions characterized by AF without central nervous system (CNS) involvement, are scant. So far, a single retrospective study reported cognitive impairment in 6 out of 14 patients with a longstanding diagnosis of pure AF (PAF) [7]. However, blood pressure (BP) values and cognition were not measured concurrently, patients were tested only while sitting, and 4 patients with cognitive deficits had abnormal CT/MRI scan. A more recent case series on 3 patients with autoimmune autonomic ganglionopathy reported that OH and elevated antibody titer were associated independently with neuropsychological impairment, which improved, even in the seated normotensive position, after plasmapheresis [8].
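As a concrete reading of the consensus OH criterion quoted at the start of this Introduction, the toy function below simply encodes the 20/10 mmHg thresholds; the function and variable names are illustrative only and are not part of the study protocol.

```python
# Toy encoding of the consensus orthostatic hypotension criterion quoted above:
# SBP fall >= 20 mmHg or DBP fall >= 10 mmHg within 3 min of standing / head-up tilt.
def meets_oh_criterion(supine_sbp: float, supine_dbp: float,
                       upright_sbp: float, upright_dbp: float) -> bool:
    """Return True if the supine-to-upright BP fall satisfies the OH definition."""
    sbp_fall = supine_sbp - upright_sbp
    dbp_fall = supine_dbp - upright_dbp
    return sbp_fall >= 20 or dbp_fall >= 10

# Example: a fall from 140/80 to 115/74 mmHg qualifies (SBP fall of 25 mmHg).
print(meets_oh_criterion(140, 80, 115, 74))  # True
```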
In a preliminary non-randomized study based on 10 patients with central and peripheral causes of AF we demonstrated that, despite a normal sitting global cognitive evaluation, our patients presented a significant worsening of global and executive cognitive functions during HUT [9]. However, except for a few patients who had some pathological performances in verbal abstract thinking and delayed recall of verbal memory, they scored within the Italian reference range values. Therefore, due to the paucity of data on cognitive function in peripheral autonomic disorders and since previous studies did not systematically take into consideration the effect of posture on neuropsychological results, we performed the current study aiming to: 1) evaluate global cognitive function in patients with AF of peripheral origin by means of a comprehensive battery of neuropsychological tests performed while sitting; and 2) investigate the effect of a documented fall in SBP fulfilling the criteria for OH [1] on cognitive performances, carrying out neuropsychological assessments in a randomized manner while the patients were supine and during HUT. Methods Twelve consecutive patients with a confirmed diagnosis of AF of peripheral origin and twelve age- and gender-matched controls were enrolled in the study. Each participant gave written informed consent before participating in the study, which was approved by the institutional review board of the University of Bologna, Italy. Patients' inclusion criteria comprised the presence of neurogenic OH [1], absence of clinical signs (parkinsonian, cerebellar or pyramidal) of CNS involvement and normal brain CT/MRI scan (absence of white matter lesions, cortical infarcts, atrophy and hydrocephalus). Of the twelve patients enrolled in the study (10 males, 68±7 years old, mean disease history 12±5 years), 9 had a probable PAF and 3 an autoimmune autonomic neuropathy (AAN) (Table 1 and Table 2). Patients were classified as PAF on the basis of symptoms associated with OH, results of cardiovascular reflexes, and negative cerebrospinal fluid (CSF) examinations [10,11]. Patients were classified as AAN if they presented with AF of subacute onset with albuminocytologic dissociation in the CSF [12]. All our AAN patients had negative ganglionic AChR antibody titers. All patients, except for patient 3, had features of autonomic failure for many years, thus making it most unlikely that this was the autonomic presentation of multiple system atrophy or another central neurodegenerative disorder. All participants were non-smokers and had no additional disorders that might affect cognitive functioning. Patients and controls underwent neuropsychological assessment on 2 separate days in 3 different conditions: first, while seated, by means of the Brief Mental Deterioration Battery (BMDB), Word and Semantic Fluency and the Stroop Color Word Test, to evaluate global cognitive function [11][12][13]; then, the day after, by means of a selection of neuropsychological tests during supine rest (supine) and head-up tilt (HUT) to assess the effect of OH on attention and executive function. These tests were selected, on the basis of our previous experience [9], in order to reduce the time needed for this evaluation and to make it feasible for our patients during the HUT.
This selection included the Digit Span Forward and Backward for immediate and working memory, Barrage test for visual search function, Immediate Visual Memory for visual memory, Analogies test for verbal abstract thinking, Trail Making A and Trail Making B for attention and executive functions [13][14][15][16]. During HUT patients were kept at an angle ranging from 30° to 50°, sufficient to cause a fall of at least 20 mmHg in SBP without evoking symptoms. Controls were all kept at an angle of 40°. BP and heart rate (HR) were continuously recorded, noninvasively, by means of a Task Force Monitor (CNSystem, Austria) for the whole duration of the examination. On both days participants were assessed in the morning, in a quiet clinical investigation room by the same examiner (R.P.). Healthy controls were investigated at a different time from the patients and assessment of cognitive function was not blinded. Participants were required to postpone their usual morning medications until after the end of the evaluation and to abstain from smoking and drinking alcohol or caffeinated beverages from the night before the study. Each of the HUT/supine sessions lasted approximately 15 min, and was separated by 30 min of supine rest. The sequence of execution of the HUT/supine evaluation was randomly assigned in order to have half of the patients and controls who performed neuropsychological assessment first during supine rest and then during HUT and the remaining half who performed the tests in the reverse order. To reduce the effect of learning, parallel forms of the tests were presented on each assessment and the sequence of presentation of the various tests was randomized. Patients' performances were compared on an individual basis to the Italian reference range values [14,16]. The results of the neuropsychological tests during the HUT and supine conditions were compared using the Wilcoxon signed-ranks test for related samples. Statistical analysis was performed using IBM SPSS Statistics 20.0; a p < 0.05 was considered significant. Results While sitting, the Mini Mental State Examination, the final result of the BMDB (a measure of global cognitive functioning), the results of the Word and Semantic Fluency test and the Stroop Color Word test were within the normal range in all patients (Table 3). Compared to the supine assessment, during HUT patients scored significantly worse on the Trail Making Test A and B, Barrage test, Analogies test, Immediate Visual Memory, and Digit Span Forward and Backward tests. Despite this significant worsening of executive functions, reversible pathological scores, with regard to Italian reference range values, were observed during HUT in the Barrage test and in the Analogies test only in 3 and 6 patients respectively (Table 3). On the contrary, controls' results of neuropsychological tests were not significantly different during HUT and supine assessment (Figure 1). To exclude a possible learning effect of test-retest, the results obtained in the last assessment of the second day were compared to the results obtained in the previous session, irrespective of the position in which they were performed (supine/HUT), and no statistically significant differences were observed in any of the performed tests. Discussion These results demonstrate that patients with peripheral AF present a significant reversible worsening of immediate and working memory, sustained attention, visual search and abstract thinking during the orthostatic challenge, but a normal global cognitive function while seated. Pathological scores may be observed in a minority of patients only during HUT.
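As an aside on the statistical comparison described in the Methods above, the paired HUT versus supine contrast can equally be run with open-source tools. The sketch below uses SciPy's Wilcoxon signed-rank test on hypothetical placeholder scores for one test across twelve patients; it is not the authors' SPSS analysis.

```python
# Hypothetical paired comparison of one neuropsychological score (e.g. Trail Making A time,
# in seconds, where higher means worse) obtained supine and during head-up tilt in the
# same twelve patients, mirroring the related-samples design described in the text.
from scipy.stats import wilcoxon

supine_scores = [42, 38, 45, 40, 37, 44, 39, 41, 36, 43, 40, 38]   # placeholder values
hut_scores    = [55, 47, 60, 52, 49, 58, 50, 54, 48, 57, 51, 46]   # placeholder values

stat, p_value = wilcoxon(supine_scores, hut_scores)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```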
On the contrary, no changes in cognitive function were observed in healthy controls during HUT compared to supine assessment. These data confirm our previous results and indicate that, even after a prolonged disease history, OH "per se" does not seem to be associated with permanent cognitive deficits. Our data suggest that the previously reported association between OH and cognitive decline may be the consequence of other factors, such as the presence of white matter lesions, silent cerebral infarcts or central neurodegeneration, possibly sharing the same pathogenesis as OH. On the contrary, the BP fall observed during the orthostatic challenge was associated with a reversible impairment of executive function, which may be related to systemic hypotension with transient cerebral hypoperfusion. This hypothesis is strengthened by a previous brain SPECT study, in which orthostasis caused a decreased blood flow in frontal areas in a patient with PAF, which reversed to normal values while lying flat [17]. The absence of changes in neuropsychological results in healthy controls during supine and HUT assessment indicates that our results on PAF and AAN are solid and are not affected by the motor performance related to the supine/HUT position or by a possible learning effect. We believe these data add valuable information to the current knowledge of these rare disorders and may help to clarify previous results regarding the relationship between OH and cognitive function. Moreover, they are clinically relevant in showing that patients with OH, if needed, have to be neuropsychologically tested in the supine position, possibly with BP monitoring, and considering all the conditions that are known to worsen OH such as food, alcohol and hypotensive drugs.
Reflections on research questions in mobile assisted language learning : Research questions are central to mobile assisted language learning (MALL) projects and studies, yet they have received little attention to date. Taking research questions as its central focus, this paper offers some reflections on the complexity of the broader field of mobile learning, on different kinds of research, on salient themes and challenges in mobile learning and MALL, and it suggests some research directions for the future. Since MALL research is interdisciplinary, and since research questions are an object of study in other fields of knowledge, the paper refers to sources from multiple disciplines to support a more comprehensive consideration of current and future research questions in MALL. The paper is fundamentally an invitation to a global conversation about research questions in MALL. Introduction In this paper I draw on my experience of research, evaluation, project development, as well as supervision and examining of doctoral students' theses over the past two decades, in mobile assisted language learning (MALL) and in mobile learning more broadly. During that time, there have been many opportunities to think about what is worth researching, how the research should be carried out, and what useful knowledge it may produce. Many research projects and studies in MALL are driven by one or more research questions, and that is the aspect I have chosen to focus on in this paper. It seems valuable to reflect on research questions because they underpin the development of any field of research, they may evolve as the field advances, and they set a course for future studies. There is a growing body of publications reviewing the state-of-the-art in mobile learning and mobile assisted language learning, but none of these works looks specifically at what research questions are being addressed overall. This stands in contrast to other fields of knowledge where research questions are frequently debated and priority questions are proposed (Antwis et al., 2017; Freudenberg & Sharp, 2010; Oldekop et al., 2016; Seddon et al., 2014). In the field of e-learning, Cronje (2020: 13) has recognized "a lack of clarity in terms of what is being researched, as well as in how it is being researched" and has proposed a model for developing research questions that are aligned to research aims, using design research as an illustrative example. In language education and applied linguistics, a reference book due to be published in late 2021 (Mohebbi & Coombe, forthcoming) promises a wealth of suggestions for research questions organized by topic, including some questions for MALL. As yet there appears to be no dedicated published work on existing or future research questions in the field of MALL. This paper offers some reflections on the nature and focus of research in language learning with mobile technologies and suggests directions for the future. The paper is simultaneously an invitation to a global conversation about research questions in MALL.
The conversation might include topics such as: What do researchers, teachers, and others, think about the questions that have guided MALL research to date? Would it be valuable to create a database of research questions that could be analyzed in various ways, and if so, how might they be analyzed? What new research questions, or types of questions, could be suggested for future studies? I hope that readers of this paper will be stimulated to continue this conversation around the world, approaching the issue of research questions from global and local perspectives alike. The complexity of mobile learning Since we shall be considering research questions in MALL, which may be seen as a subfield of mobile learning, a necessary first step is to acknowledge that conceptions of mobile learning are not universal, its definitions have evolved over the years and it has a relationship with the fields of e-learning as well as computer assisted language learning (CALL). Mobile learning may be understood as "an extension of e-learning" (Nami, 2020), or the exact opposite, "not an extension of e-learning" (Hewagamage, Wickramasinghe, & Jayatilaka, 2012), and "not a new variant of e-learning" (Kukulska-Hulme & Traxler, 2013). MALL may be considered an extension of CALL (Chang, Warden, Liang, & Chou, 2018), or the distinction between MALL and CALL may be emphasized (Kukulska-Hulme & Shield, 2008). These different viewpoints exist because mobile devices may be seen as a way to access or interact with materials, resources and communities on digital platforms; or alternatively, the focus may be on what learners can do with personal devices when they are mobile and immersed in diverse physical environments. E-learning is mainly associated with formal education and training, but mobile learning is conducted in both formal and informal settings. Any research questions may thus be influenced by the researcher's choice of perspective and setting. Different conceptions of mobile learning are likely to have an influence on research questions that guide projects and studies. An explanation of mobile learning offered by the Higher Education association EDUCAUSE (2021) has the traditional classroom as its central reference point: "Using portable computing devices (such as iPads, laptops, tablet PCs, PDAs, and smart phones) with wireless networks enables mobility and mobile learning, allowing teaching and learning to extend to spaces beyond the traditional classroom. Within the classroom, mobile learning gives instructors and learners increased flexibility and new opportunities for interaction." The research literature offers many other definitions of mobile learning developed over the last two decades. Crompton (2013) reviewed several definitions and concluded that there were four central constructs: pedagogy, technological devices, context, and social interactions. Together with the editors of the handbook for which she was writing, Crompton added the idea of "content interaction" and suggested that mobile learning should be defined as "learning across multiple contexts, through social and content interactions, using personal electronic devices" (Crompton, 2013, p. 4). While learning 'across multiple contexts' can include the classroom, this definition is more inclusive of diverse scenarios such as informal learning and on-the-job learning where no classrooms are involved. 
In line with this more inclusive perspective, MALL may be defined as "the use of smartphones and other mobile technologies in language learning, especially in situations where portability and situated learning offer specific advantages" (Kukulska-Hulme, 2018). It is important to note that definitions are frequently surrounded by further elaboration and exemplification that may also be relevant to consider; for example the definition in Kukulska-Hulme (2018) is accompanied by some important remarks: "Increasingly, MALL applications relate language learning to a person's physical context when mobile, primarily to provide access to location-specific language material or to enable learners to capture aspects of language use in situ and share it with others … Mobile learning is proving its potential to address authentic learner needs at the point at which they arise, and to deliver more flexible models of language learning" (para. 1). Definitions of mobile learning partly reflect real-world practices in terms of how researchers or practitioners are using mobile technologies in their projects, but perhaps they also describe what their authors would like mobile learning to be (ideally, or in the future) in terms of where the learning takes place or how it is supported. Some believe there is great value in situated mobile learning at home, at work, outdoors, or across a variety of contexts. Others wish to draw attention to how people may learn by interacting with different types of content (learning materials, artifacts, the natural world) and with other human and artificial beings (teachers, learners, support staff, parents, friends, volunteers, artificial agents) with whom they are in physical contact or connected remotely. Another distinction worth making is that although pedagogy is identified as one of the central constructs in definitions (Crompton, 2013), mobile learning research is primarily concerned with learners and learning. An alternative term, 'mobile pedagogy' (Kukulska-Hulme, Donohue & Norris, 2015) was devised to make explicit the teacher's role in the design and promotion of mobile activities that encourage students to engage in language learning as a form of mobile inquiry. For example, students can investigate how language is used in different settings, find ways to practice it in a variety of contexts, and record their experiences for subsequent reflection and discussion. Mobile pedagogy can be viewed as a subset of the broader field of mobile learning which also includes informal learning without teacher involvement. As pedagogy evolves towards more consideration of how and where learning takes place and how it can be developed beyond the classroom, definitions of mobile learning emphasizing pedagogy or learning design can be enriched by the broader context of learning within a society. Traxler (2009) gave a personal account of the evolution of mobile learning, highlighting numerous definitions and theories, and concluded by suggesting mobile learning is learning that is "most aligned to progressively more mobile societies" (p. 11). At that time, in 2009, it was not a common way of conceptualizing mobile learning and still seems not to have become widespread. What constitutes a 'mobile society' may be understood in various ways, but according to Traxler, Read, Kukulska-Hulme, and Barcena (2019) in the context of mobile learning what matters is that such societies are permeated by personal digital technologies. 
In mobile societies, there are likely to be evolving conceptions of learning and of how learning can be encouraged and supported through use of personal technologies such as smartphones, tablets, and wearables. Opinions also differ as to whether the field of mobile learning is young or already in a state of maturity. Kukulska-Hulme, Sharples, Milrad, Arnedillo-Sánchez, and Vavoula (2011) describe the origins of mobile learning in Europe as dating back to small trials and projects in the 1980s and 1990s, with larger projects emerging in the early 2000s. A review of MALL-related literature published between the years 2000 and 2012 (Duman, Orhon, & Gedik, 2015) noted an increase in publications in the year 2008 and a peak in 2012. According to a subsequent literature review by Aguayo, Cochrane, and Narayan (2017), "mobile learning as a research field has matured since the first attempts at large scale exploratory projects in the early 2000s" (p. 34). Yet some researchers are claiming that the use of mobile wireless technologies is a "recently developed approach to learning in educational environments" (Sarrab, Elbasir, & Alnaeli, 2016). Others are noting that in their specific contexts or countries mobile learning is new. For example, "Within the Greek formal educational context, mobile learning is in its infancy" (Nikolopoulou, 2018, p. 500); "learning through mobile devices is still in its infancy in Pakistan" (Uppal, Ali, Zahid, & Basir, 2020, p. 105); and "the field of mobile learning in Tanzania is still in its infancy" (Ndume, Songoro, & Kisanga, 2020). Consequently, research questions that are posed in different contexts may, to some extent, reflect the perceived maturity of mobile learning in those contexts. Furthermore, since most fields of research are made up of different strands in which new topics and challenges are constantly emerging, it is possible for a field to be mature in some respects and immature in others. Within the field of mobile learning, Buabeng-Andoh (2021) argues that research into the determinants of mobile learning acceptance is in its infancy phase, even though the issue of technology acceptance in education has been studied for a long time. Within MALL, Nami (2020) observes that "there have been relatively few published studies on the use of particular smartphone applications designed for language learning" and suggests that "app-based language learning research is still in its infancy" (p. 85). Therefore, the focus of research questions may be determined by perceptions of aspects of mobile learning that are under-researched, globally or in a particular setting. The fields of mobile learning and MALL are capacious, ready to embrace a wide range of themes and stages of development. Studies devised by experienced researchers may differ in scope, focus, and methods from those that are devised by researchers who are less experienced, and they may also differ from studies devised by teachers, some of whom have only just begun to research mobile learning as part of their classroom practice. Even if education practitioners are aware that there is an extensive literature on mobile learning, they may be constrained in the extent to which they can access or engage with it, which may mean that they have fewer opportunities to learn from the research questions that others in the field have already explored. Research into how people learn with the aid of mobile technologies has changed in some important respects over the past two decades (Crompton & Traxler, 2019).
In early studies, research participants were usually given smartphones and other devices they had never used before and they were asked to try out new applications and sometimes completely new activities. In later studies, a broader range of learners of all ages and in diverse settings have more often used their own technologies and increasingly familiar applications to extend their learning and to focus on specific personal requirements. In the past, research participants may have been largely passive "subjects" who undertook activities requested by researchers (for example, completing a questionnaire, carrying out a task), while more recently, some of them participate more actively, sometimes as co-researchers who may even be involved in the design of the study and learning tasks. In the early days, the bulk of mobile learning research was conducted, published and shared in English; now there are growing numbers of studies available in other languages too. Increasingly, studies investigate real-world contexts; they use a range of research designs and methods; some studies have compared mobile learning with other learning methods or they have compared different designs of mobile learning, but the majority have not been comparative and instead they have focused on mobile learning implementations, investigations, or design improvements (Lai, 2020). Mobile learning research is shaped by many factors, including, but not limited to: research traditions, competences and beliefs of researchers, available technologies and applications, software development capabilities, resources for data collection and analysis, and expectations regarding the outcomes as well as the impact of the research. These factors are rarely mentioned in overviews of mobile learning research. Burston (2021) has drawn attention to the fact that mobile learning studies are published in a broad range of journals across different disciplines and several domains of professional practice. This suggests that the nature of the research and the methods used are likely to be drawn from, or influenced by, those disciplinary and professional practice values, conventions, and traditions. The nature of research in mobile learning The OECD's Frascati Manual (OECD, 2015) states that research, along with experimental development (collectively known as Research and Development, or R&D), comprises "creative and systematic work undertaken in order to increase the stock of knowledge - including knowledge of humankind, culture and society - and to devise new applications of available knowledge" (p. 44). The Frascati Manual goes on to identify two types of research, basic and applied: "Basic research is experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundation of phenomena and observable facts, without any particular application or use in view." Applied research is also undertaken to acquire new knowledge; however, it is "directed primarily towards a specific practical aim or objective" (p. 45). Research in MALL is mainly "applied". Data on aspects such as teachers' or learners' perceptions, attitudes, and practices may be collected, interpreted, and used to inform learning designs and pedagogical interventions that make use of mobile technologies.
Empirical studies based in classroom practices are widely used to generate data that supports or refutes an explicit or implicit hypothesis about whether a proposed approach brings about a desired improvement in language learning and associated aspects such as learner motivation. Experimental development of systems and software is also common, especially in projects originating in software engineering and intelligent systems design. Researchers may be steeped in either quantitative or qualitative research traditions and may be unwilling or unable to adopt an alternative approach. In his commentary on research in educational technology, Selwyn (2010) argued that "academic researchers and writers should give greater acknowledgement to the influences on educational technology above and beyond the context of the individual learner and their immediate learning environment" and that "the use of technology in education needs to be understood in societal terms" (p. 68). If certain technologies are commonly used in daily life, then perceptions of their adoption in education may be influenced by those daily life experiences. Global and local societal challenges may be reflected in mobile learning research projects, for example recent concerns about sustainability (Ng & Cumming, 2016), and if a technology is widely available, it might be a means to satisfy increased demands for education in contemporary times (e.g., Ally & Tsinakos, 2014; Almarwani, 2011). In Canada, Pulla (2017) draws attention to the relevance of mobile learning to Indigenous cultures. In contrast to researchers who have been calling for more work on theory in mobile learning (e.g., Traxler & Koole, 2014), Pulla (2017) argues that "Canadian education researchers need to be re-focusing their efforts away from the theoretical frameworks of education technology and toward the practical application of the lessons learned from the international learning community in the design and delivery of scalable, accessible and inexpensive MLT [mobile learning technology] education applications" (p. 45). We may ask what mobile learning research is trying to achieve and who will assess whether it is generating useful knowledge. Hulley, Newman, and Cummings (1988) observed that "a good research question should pass the 'so what' test - getting the answer should contribute usefully to our state of knowledge" (p. 58). For example, answering the research questions makes a positive difference by providing evidence that could lead to an improvement in how something is taught or learned, or who benefits from such an improvement. However, the phrase "our state of knowledge" presupposes that there is a shared pool of knowledge that has been harnessed into an inspectable state, or perhaps it is taken on trust. In a world where publications are growing exponentially, researchers may rely on a subset of knowledge within their sphere of interest or they may refer to published, peer-reviewed state-of-the-art reviews. Such reviews, while valuable, are selective in the literature they include and exclude. They may focus on papers published in certain journals only, usually in English (for further discussion of where mobile assisted learning literature is published, see Burston, 2021). While "we" may not all agree on the state of knowledge in MALL or what is important for our context, the "so what" test is still worth applying.
Research topics in mobile learning and MALL Fifteen years ago, I took part in a workshop on "Big issues in mobile learning" (Sharples, 2006), in which we explored the meaning and dimensions of the emerging field of mobile learning, and identified several major themes that we (a small group of devoted researchers) considered important to address in mobile learning research: -How to enhance the learning experience without interfering with it -Affective factors in learning with mobile devices -Addressing conflicts between personal informal learning and traditional classroom education -Appropriate methods for evaluating learning in mobile environments -How learning activities using mobile technologies should be designed to support innovative educational practices -How mobile devices could be integrated with broader educational scenarios While these were identified 'issues' rather than specific research questions, they highlighted several areas of challenge, such as disruption, conflict, integration, informal learning, personalization, and learning design. Each of them seemed to matter. Mobile learning researchers have subsequently worked to some degree on these areas of challenge (e.g., Pollara & Broussard, 2011; Qing, 2017; Sharples, 2009). As mobile learning has become more widespread over the years, challenges have evolved and other research agendas have been developed (for example, Aguayo et al., 2017; Looi et al., 2010). Within MALL, alongside abundant research focusing on specific individual language skills or proficiency, recent prominent themes include learner autonomy (Nasr & Abbas, 2018; Sato, Murase, & Burden, 2020), use of online sources and apps (Loewen et al., 2019; Zou, Yan, & Li, 2020) and learning in authentic environments (Shadiev, Hwang, Huang, & Liu, 2018; Yeung & Sun, 2021). Hwang and Fu's (2019) review of mobile assisted language learning studies from the period 2007-16 confirms that early studies mainly focused on fostering learners' individual language skills, while later studies have looked at multiple skills in authentic learning environments. Based on a meta-analysis of MALL research and design, Chwo, Marek, and Wu (2018) highlighted "discrepancies between how teachers and instructional designers expected MALL devices to be used and how the students actually used them" (p. 66), which suggests that this could be a topic for further research. They also found that issues of access, motivation, and curriculum often have negative impacts on learning outcomes, which reminds us that research studies that are narrowly focused on mobile technology may overlook the influence of other factors. Commonly investigated topics in MALL studies and gaps in research are also discussed by authors such as Duman et al. (2015) and Burston (2021). Research questions in mobile assisted language learning Many years ago, Dillon (1984) observed that in general "little is known about the kinds of questions that may be posed for research" (p. 361). His review of classifications of question types showed them to be inadequate. Subsequently, White (2013, 2017) has inquired into research questions and has sought to guide researchers in the process of research question development. White (2013: 213) explains that the process of formulating, developing, and refining research questions "allows researchers to make connections with existing theories and previous empirical findings and helps avoid unnecessary repetition of, or overlap with, previous work".
Research questions chosen for a study may also reveal something about what researchers consider important to investigate. As noted by Farrow, Iniesto, Weller, and Pitt (2020), "almost all research projects are grounded in trying to answer a question that matters or has consequences" (p. 12) and the starting point for a research project "will usually be a research question framed within a particular paradigm" (p. 13) such as positivism or interpretivism. Cronje (2020) refers to the pursuits of knowledge, virtue, value, and power as four underpinning drivers for research. Yet what matters to a research project may also be related to a local context or issue. At times, what researchers have chosen to investigate may be influenced by what their organization, research funding agency, or Ministry of Education considers to be important. Furthermore, discussions with colleagues around the world suggest that our cultures and education systems play a role in shaping conceptions of MALL and what kinds of research studies and questions are valued. Such considerations are rarely articulated in published work. Research should begin with clear aims and objectives, which will help in the formulation of appropriate research questions. Yet even then, research questions are not easy to get right, and in some research studies they may have to be repeatedly revisited and refined as new data emerge and change our understanding of a phenomenon. Below are several common types of questions that may be encountered in mobile assisted language learning studies. They are formulated here as generic questions. Researchers may wish to consider whether research questions they have used to guide their own studies have been similar to any of these, or whether they are pursuing very different lines of inquiry. -Does the use of this mobile system lead to improvement in the acquisition of a specific language skill? This question type focuses on a specific language skill or skills, and it implies that improvement is the desired outcome. The study is probably looking at one specific mobile system. It is focused on a particular technology and a specific skill. It may be trying to measure improvement by using language tests and comparing results. -What evidence is there that the proposed mobile learning design supports learner collaboration, negotiation, critical thinking, etc.? In this type of question, the focus is on a mobile learning design. A learning design could include use of a mobile application as part of a task or a series of tasks. In this research there might be a concern with the various people involved when an application is used - not only the learners, but also those who support the learners, and what resources are available to them. Furthermore, in this question we can detect an interest in some interpersonal and thinking skills that might support language acquisition. -Does a mobile learning approach have beneficial effects on motivation or affective aspects of learning? This is another common focus for mobile learning designs and research. The focus here is on learner motivation and affective aspects of learning which may be linked to motivation. The research may be trying to find out whether students are enjoying their learning experience, and whether they are engaged in it, which might have immediate or long-term consequences for their learning. If they are motivated, then perhaps they will sustain their learning for longer and reap the benefits of that.
-How do learners engage in self-organized language learning and what benefits do they derive from it? Here the learners are using some mobile applications or resources that they found for themselves or that other people have recommended to them. So, the learning is self-motivated and self-organized. The researchers might be interested in the learners' motivation, but also what benefits they derive from the activity. With this type of research question, we are getting closer to the learners' experiences, engaging with the learners, trying to find out what they do and perhaps how it complements their classroom learning. These "generic" questions could perhaps be mapped on to the four broad question types identified by Cronje (2020) in the context of e-learning research, where the research aims or intent (expressed in research questions) would be to "explain", "develop", "describe", or "explore". However, in the generic questions outlined here, question types have been combined with discipline-specific areas of common interest and concern, namely language skills, interpersonal skills, personal experience, and learner agency. Sunderland's (2018) introduction to research questions in linguistics categorizes questions as descriptive, explanatory, and evaluative. Experimental research is perhaps increasingly valued in mobile learning, as researchers conduct studies to show how one way of teaching and learning may be superior to another or produce better outcomes for students. A recent experimental study by Hwang and Chang (2021) tries out a new approach to peer-assessment, using a mobile concept mapping system the researchers have developed which should facilitate knowledge construction in elementary science classes. The researchers are interested in the effectiveness of the bi-directional peer-assessment approach supported by their system. The study compares the bidirectional peer-assessment approach with a conventional peer-assessment approach. There are seven research questions, and all of them are in exactly the same form: RQ1-7: Did the students using the bi-directional peer-assessment approach have … [better science learning achievement/ better concept mapping scores/ better learning motivation/ better self-efficacy/ better environmental identity/ better critical thinking tendency/ better feedback quality/ lower cognitive load] … than those learning with the conventional peer-assessment approach? These questions enable the researchers to provide answers showing whether the proposed approach resulted in improvements with respect to the aspects they chose to investigate. "Did the students have … " is a straightforward question type that could also prove useful if other researchers wanted to run studies in a similar area and compare results. It can be used not only in science learning but in language learning too. In contrast to the experimental approach, Lai and Zheng's (2018) study explored language learners' self-directed use of mobile devices beyond the classroom. They have argued in favor of investigations of learning in informal settings, remarking that "insights into the learning experience in this territory are much needed in order to maximize the educational potentials of mobile learning" (p. 299). The following questions guided their research: RQ1. What are the different dimensions of learners' self-initiated, self-directed out-of-class language learning with mobile devices? RQ2. 
How do language learners utilize different technological tools to construct self-directed out-of-class mobile learning experiences? Factor analysis enabled the researchers to explain some of the variance in learners' out-of-class mobile learning experience, in particular their activity in terms of personalization, engagement in authentic learning, and using mobile devices to connect with native speakers of the target language and other learners. "What" and "How" in the research questions indicate that the researchers set out to discover more about self-directed mobile language learning and the learners' choice of tools. There is room for many different types of studies within mobile assisted language learning, guided by different questions, and more thought should be put into how such studies could complement and build upon each other. In the field of medicine, Perillat and Baigrie (2021) argue that a lack of prioritization among research questions and therapeutics related to the present pandemic was responsible for the duplication of clinical trials and the dispersion of precious resources. Thus sharing research questions could sometimes be a good strategy. It may, however, be constrained by the fact that in some contexts originality of questions may be an implicit requirement, for example in dissertations and theses at doctoral level. It might also be a factor in research that will be judged on its originality for the purposes of quality assessments that evaluate the extent to which an output makes an important and innovative contribution to understanding and knowledge (e.g. REF, 2021). Conclusions This paper has been a means to reflect on challenges and developments in mobile learning and MALL, with special reference to research questions. It brings into focus some of the research questions and question types that are guiding research in mobile assisted language learning, within the broader field of mobile learning. There is scope for a great deal of further investigation of the types of questions that have been pursued to date and what they may reveal about what different stakeholders consider important to investigate in MALL projects and studies in different contexts across the globe. The field of MALL continues to grow and diversify, which means that an analysis of research questions will need to consider the field in all its complexity. MALL research and development has expanded to include more diverse learners and communities, such as refugees and migrants (Abou-Khalil, Helou, Flanagan, Pinkwart, & Ogata, 2019; Kukulska-Hulme, 2019), learners with disabilities (Alonso, Read, & Astrain, 2020), and indigenous youth in cities (Shilling, 2020). In a recent chapter (Kukulska-Hulme, 2021) I highlighted some of the opportunities and issues associated with mobile assisted language learning, based on case studies representing innovative MALL across different sectors of education. Five notable themes running through the case studies were uncovered, suggesting that MALL supports breaking down barriers; unfettered flow of information; frequent interaction and reflection; enjoyment and personal gains; and that it involves a multiplicity of technologies, modalities, and methods. These themes represent key strengths of mobile approaches to teaching and learning that may be developed in environments where teachers and researchers have the ability to try out something new. Good research questions could help take these themes forward so that they grow into more substantial bodies of research. 
As mobile learning expands beyond smartphones and tablets to other ubiquitous, wearable, and companion-like technologies that are entering our lives, MALL research will continue to thrive. These thoughts are echoed by Shadiev, Hwang, and Huang (2017) when they declare that future studies should investigate the "application of newly learned knowledge to solve daily real-life problems in authentic language learning environments with technology, how to ensure that students are engaged in learning activities, and long-term continuation of such activities." (p. 290). For future studies, they suggest considering more advanced intelligent technologies for supporting language learning in authentic environments: "For example, wearable devices, such as clothing and accessories, incorporating computer and advanced electronic technologies. Some recent popular examples are optical head-mounted displays, smartwatches or smart bracelets" (p. 292). New research questions will need to accompany these developments, some based on previous questions and others that we have not thought of yet. Bionote Agnes Kukulska-Hulme The Open University, Milton Keynes, UK agnes.kukulska-hulme@open.ac.uk Agnes Kukulska-Hulme is Professor of Learning Technology and Communication in the Institute of Educational Technology at The Open University, UK, where she leads the Future Learning Research and Innovation Programme. Her work encompasses online distance education, mobile learning and language learning. Professor Kukulska-Hulme is on the Editorial Boards of ReCALL, RPTEL, International Journal of Mobile and Blended Learning, and Waikato Journal of Education. Her publications include over 200 articles, papers and books, and she has also authored policy and practice reports for UNESCO, British Council, the Commonwealth of Learning, the International Research Foundation for English Language Education and Cambridge University Press. She has been an invited speaker at over 100 international conferences and events.
Merging microarray data from separate breast cancer studies provides a robust prognostic test Background There is an urgent need for new prognostic markers of breast cancer metastases to ensure that newly diagnosed patients receive appropriate therapy. Recent studies have demonstrated the potential value of gene expression signatures in assessing the risk of developing distant metastases. However, due to the small sample sizes of individual studies, the overlap among signatures is almost zero and their predictive power is often limited. Integrating microarray data from multiple studies in order to increase sample size is therefore a promising approach to the development of more robust prognostic tests. Results In this study, by using a highly stable data aggregation procedure based on expression comparisons, we have integrated three independent microarray gene expression data sets for breast cancer and identified a structured prognostic signature consisting of 112 genes organized into 80 pair-wise expression comparisons. A classical likelihood ratio test based on these comparisons, essentially weighted voting, achieves 88.6% sensitivity and 54.6% specificity in an independent external test set of 154 samples. The test is highly informative in assessing the risk of developing distant metastases within five years (hazard ratio 9.3 with 95% CI 2.9–29.9). Conclusion Rank-based features provide a stable way to integrate patient data from separate microarray studies due to invariance to data normalization, and such features can be combined into a useful predictor of distant metastases in breast cancer within a statistical modeling framework which begins to capture gene-gene interactions. Upon further confirmation on large-scale independent data, such prognostic signatures and tests could provide a powerful tool to guide adjuvant systemic treatment that could greatly reduce the cost of breast cancer treatment, both in terms of toxic side effects and health care expenditures. Background Breast cancer is the most common form of cancer and the second leading cause of cancer death among women in the United States, with an estimated ~213,000 new cases and ~41,000 deaths in 2006 [1]. The main cause of breast cancer death is metastasis to distant sites. Early diagnosis and adjuvant systemic therapy (hormone therapy and chemotherapy) substantially reduce the risk of distant metastases. However, adjuvant therapy has serious short- and long-term side effects and involves high medical costs [2]. Therefore, highly accurate prognostic tests are essential to aid clinicians in deciding which patients are at high risk of developing metastases and should receive adjuvant therapy. Currently, the most widely used treatment guidelines, the St. Gallen [3] and the US National Institutes of Health (NIH) [2] consensus criteria, assess a patient's risk of distant metastases based on clinical prognostic factors such as tumor size, lymph node status, and histologic grade. These guidelines cannot accurately identify at-risk patients, and about 70-80% of patients defined as being at risk by these criteria and receiving adjuvant therapy would have survived without it [4]. In addition, many patients who would be cured by local or regional treatment alone are "over-treated" and suffer toxic side effects of adjuvant therapy unnecessarily. Therefore, there is an urgent need for new prognostic tests to precisely define a patient's risk of developing metastases to ensure that the patient receives appropriate therapy.
The advent of DNA microarray technology provides a powerful tool in various aspects of cancer research. Simultaneous assessment of the expression of thousands of genes in a single experiment could allow better understanding of the complex and heterogeneous molecular properties of breast cancer. Such information may lead to more accurate prognostic signatures for prediction of metastasis risk in breast cancer patients. Over the past few years, a number of studies have identified prognostic gene expression signatures and proposed corresponding prognostic tests based on these genes. In many cases, the prediction of breast cancer outcome is superior to conventional prognostic tests [5][6][7][8][9][10][11]. Among these studies, the two largest have attempted to identify gene expression signatures and prognostic tests strongly predictive of distant metastases. van't Veer et al. applied a supervised method to identify a 70-gene signature, and a correlation-based test capable of predicting a short interval to distant metastases, in a cohort of 78 young breast cancer patients (<55 years of age) with lymph-node-negative tumors [6]. The test was applied to a cohort of 295 patients with either lymph-node-negative or lymph-node-positive breast tumors [5]. Using a different microarray platform, Wang et al. derived a 76-gene prognostic signature from 115 lymph-node-negative patients who had not received adjuvant systemic treatment. The signature could be used to predict distant metastasis within five years in breast cancer patients of all age groups with lymph-node-negative tumors and was subsequently applied to a set of 171 lymph-node-negative patients [7]. These studies have shown that tests based on gene expression signatures would result in a substantial reduction of the number of patients receiving unnecessary adjuvant systemic treatment, thereby preventing over-treatment in a considerable number of breast cancer patients. The most striking observation when comparing the signatures from different studies is the lack of overlap of signature genes. For instance, in the studies of van't Veer et al. and Wang et al., despite the similar clinical and statistical designs, there is an overlap of only three genes in the two gene signature lists. These diverse results make it difficult to identify the most predictive genes for breast cancer prognosis. The disagreements in gene signatures may be partly due to the use of different microarray platforms and differences in patient selection, normalization procedures and other experimental choices. Moreover, in a recent study [12], reanalysis of the van't Veer data has shown that the prognostic signature is even strongly influenced by the subset of the patients used for signature selection within a particular study. This observation indicates that, given the small number of samples in the training sets, many genes might show what appear to be significant correlations with clinical outcome and the differences among these correlations might be small. Therefore, it is possible to combine genes in many ways to generate different signatures with similar predictive power when validated on internal test sets [12]. Moreover, in general, these prognostic tests are not robust, meaning that they cannot be validated on independent, external data sets [9]. Independent reanalysis on other microarray data sets has shown very similar findings [13].
Given the large numbers of features (~10,000 to 40,000 genes) in microarray data and the relatively small numbers of samples (~100 patients) used in the training set of each study, it is highly possible to accidentally find a set of genes with good predictive power on internal test sets. This is the type of "over-fitting" that is typical when the number of observed variables far exceeds the number of samples. In light of this general "small-sample dilemma" in statistical learning and the particular observations from the two reanalysis studies mentioned above, the disagreements in gene signatures obtained from different data sets are not surprising. We believe that much larger numbers of samples (patients), perhaps thousands, are needed to develop more robust prognostic tests and signatures. The rapid accumulation of microarray gene expression data suggests that combining microarray data from different studies may be a useful way to increase sample size and diversity. In particular, "meta-analyses" have recently been used to merge different studies in order to develop prognostic gene expression signatures for breast cancer [14,15]. However, effectively integrating microarray data from different studies is not straightforward due to several issues of compatibility, such as differing microarray platforms, experimental protocols and data preprocessing methods. Instead of directly integrating microarray gene expression values, meta-analyses combine results (e.g. t statistics) of individual studies to increase statistical power. The major limitation of meta-analyses is that the small sample sizes typical of individual studies, coupled with variation due to differences in study protocols, inevitably degrades the results. Also, deriving separate statistics and then averaging is often less powerful than directly computing statistics from aggregated data. In contrast to the meta-analysis approach, in which the results of individual studies are combined at an interpretative level, other methods, such as Z-score, Distance Weighted Discrimination (DWD), integrate microarray data from different studies at the expression value level after transforming the expressions to numerically comparable measures [14,[16][17][18][19][20]. In general, the procedure involves the following steps. First, a list of genes common to multiple distinct microarray platforms is extracted based on cross-referencing the annotation of each probe set represented on the microarrays. Cross-referencing of expression data is usually achieved using the UniGene database [21]. Next, for each individual data set, numerically comparable quantities are derived from the expression values of genes in the common list by applying specific data transformation and normalization methods. Finally, the newly derived quantities from individual data sets are combined to increase sample size and statistical methods are applied to the combined data to build diagnostic and prognostic signatures. One major limitation of these direct integration methods is that there is still no consensus on how best to perform data transformation and normalization. In our previous work [22], we proposed a novel method for molecular classification which builds predictors from relative expression values, which can be directly applied to integrated microarray data and which generates very simple decision rules. 
Because this method is based only on the ranks of the expression values within a profile (sample), there is no need to prepare the data for integration, in particular there is no need for data normalization, since ranks are invariant to all types of within-array monotonic preprocessing. This approach to data integration was validated on prostate cancer data [23], resulting in a powerful two-gene diagnostic classifier. It has also been applied recently to differentiating between gastrointestinal stromal tumors and leiomyosarcomas [24]. Here, we extend this method to predict distant metastases in breast cancer, and attempt to overcome the limitations of previous study-specific methods and meta-analyses. Summary We integrate three independent microarray gene expression data sets to obtain an integrated training set of 358 samples and identify a set of features for predicting distant metastases. All the samples included in this study are from lymph-node-negative patients who have not received adjuvant systemic treatment. Each feature is based on an ordered pair of genes and assumes the value one if the first gene is expressed less than the second gene, and assumes the value zero otherwise. These genes may not all be highly differentially expressed, and one gene in the pair may serve as a "reference" for the other one. Since the features are rank-based, no data normalization is needed before data integration. A classical likelihood ratio test is used to classify patients as either poor-outcome, meaning they are likely to metastasize, or good-outcome, meaning that they are unlikely to develop distant metastases. The choice of features is motivated by achieving the highest possible specificity at an acceptable level of sensitivity, taken here to be 90% in accordance with the St. Gallen and NIH treatment guidelines. The number of features chosen in the prognostic signature, as well as the threshold in the likelihood ratio test (LRT), is optimized with k-fold cross-validation on the integrated training set. The optimal feature number is estimated to be 80, corresponding to 112 genes (since some genes appear in more than one feature). The prognostic test based on this signature is validated using an independent microarray data set. Upon further validation on large-scale independent data, the prognostic gene expression signature could support other breast cancer prognostic tests with high enough specificity to help avoid over-treatment of newly diagnosed patients. Study data Four breast cancer microarray data sets are included in this study. Each data set has been downloaded from publicly available gene expression repositories (e.g. Gene Expression Omnibus) or supporting web sites [7,11,25,26]. All four data sets are generated from the same Affymetrix HG-U133A microarray platform. Here, the names of the first authors of individual studies are used as the names of the data sets. Three data sets, Miller (251 patients), Sotiriou (189 patients) and Wang (286 patients), are used as training data and the other one, Pawitan (159 patients), is used as independent test data. The reason for this division into training and test data is that detailed clinical information has been provided for the Miller, Sotiriou and Wang data sets and this information has been used to select specific patients for training, whereas little clinical information is provided for the Pawitan study.
For the Miller, Sotiriou and Pawitan studies, because the gene expression data sets provided by them have undergone cross-sample normalization, we have downloaded the raw CEL files and calculated expression values using the Affymetrix GeneChip Operating Software version 1.4. There is an 85-patient overlap between the Miller and Sotiriou data sets, so we have excluded the replicate samples from our study. Detailed patient information in each study has been described in the corresponding literature. Motivated by a recent study [27], we employ the idea of restricting training data to extreme patient samples, which are more informative in identifying a prognostic signature. Extreme patients are either short-term survivors with a poor outcome within a short period or long-term survivors who maintained a good outcome after a long follow-up time. Specifically, we select patients who developed distant metastases (relapse) within five years as poor-outcome samples and patients who were free of distant metastases (relapse) during the follow-up for a period of at least eight years as good-outcome samples. The sharp contrast between short-term and long-term survivors should identify more informative and reliable genes for a prognostic signature. Only early stage lymph-node-negative patients who had not received adjuvant systemic treatment are included in the training data because adjuvant treatment is likely to modify patient outcome. The selection is irrespective of age, tumor size and other clinical parameters. After applying the above selection criteria, a total of 358 patients are identified from the three training data sets and used to learn a prognostic signature and prognostic test. The numbers of selected patients from each training data set are listed in Table 1. A prognostic signature from integrated data We directly merge the three microarray data sets in Table 1, using the 22283 probe sets on the Affymetrix HG-U133A microarray, to form an integrated training data set. The integrated data set consists of 122 extreme poor-outcome samples (distant metastases within five years after surgery) and 236 extreme good-outcome samples (free of distant metastases during the follow-up for a period of at least eight years after surgery). Recall that each feature is based on a pair of genes. The integrated training set is used to estimate the relationship between the number m of features in a prognostic classifier and the specificity at the 90% sensitivity level, evaluated by 40-fold cross-validation, as described in 'Methods'. The result is plotted in Figure 1. As can be seen, the specificity is nearly constant after about 80 features are included. Our final prognostic signature then consists of the 80 top-ranked features (gene pairs) from the feature list generated from the original integrated training data, using the feature selection and transformation procedures described in 'Methods'. Because some genes appear in more than one feature, the 80 top-ranked gene pairs in our prognostic signature include 112 distinct genes (Table 2). To illustrate the behavior of the 80 features in the signature on the Wang data set (part of the integrated training data), we show the difference in expression between the two genes in each of the 80 gene pairs in the form of a heat map in Figure 2. Distinct patterns of expression differences can be observed for good- and poor-outcome samples.
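To make the rank-based feature construction concrete, the following Python sketch shows how binary gene-pair comparison features can be computed on an integrated expression matrix and how candidate pairs might be scored and ranked. The function names, the toy data, and the simple scoring criterion (difference in comparison frequency between classes, in the spirit of top-scoring-pair methods) are illustrative assumptions; the study's actual candidate generation and scoring are described in its 'Methods' section and may differ.

```python
import numpy as np

def pair_features(expr, gene_pairs):
    """expr: samples x genes array of expression values (any within-array normalization);
    gene_pairs: list of (i, j) gene-index pairs.
    Returns a binary matrix: feature = 1 if gene i is expressed less than gene j."""
    return np.array([[1 if expr[s, i] < expr[s, j] else 0
                      for (i, j) in gene_pairs]
                     for s in range(expr.shape[0])])

def score_pairs(expr, labels, candidate_pairs):
    """Score each candidate pair by the absolute difference between classes in the
    frequency of observing gene_i < gene_j (an illustrative selection criterion)."""
    feats = pair_features(expr, candidate_pairs)
    freq_poor = feats[labels == 1].mean(axis=0)  # frequency in poor-outcome samples
    freq_good = feats[labels == 0].mean(axis=0)  # frequency in good-outcome samples
    return np.abs(freq_poor - freq_good)

# Hypothetical toy data: 6 samples x 4 genes; label 1 = poor outcome, 0 = good outcome
expr = np.random.rand(6, 4)
labels = np.array([1, 1, 1, 0, 0, 0])
candidates = [(0, 1), (0, 2), (1, 3), (2, 3)]
scores = score_pairs(expr, labels, candidates)
top_pairs = [candidates[k] for k in np.argsort(scores)[::-1][:2]]  # keep the top-m pairs
print(top_pairs)
```

Because the features depend only on which gene in a pair is expressed higher within each sample, they are unchanged by any monotonic within-array transformation, which is what allows the three data sets to be pooled without cross-study normalization.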
In order to evaluate the reproducibility of the 112-gene signature, we repeat the same feature selection process with several re-samplings of 300 patients out of the 358 patients in the integrated data set. The average overlap is 39.0%. This is not surprising in view of the still modest sample size and the fact that most of the changes occur in the second half of the ranked list of gene pairs. Validation of the prognostic test on independent data To validate the prognostic test, we compute its sensitivity and specificity on an independent set of samples, the Pawitan data set [26], which consists of 159 primary breast cancer patients. This test set includes both patients with lymph-node-negative tumors and patients with lymph-node-positive tumors, who had or had not received adjuvant systemic therapy. Following the practice in most of the literature, our objective is to predict the development of distant metastases within five years. Of the 159 patients, 35 patients developed distant metastases (relapse) within five years ("poor-outcome"), and 119 patients were free of distant metastases (no relapse) during the follow-up for a period of at least five years ("good-outcome"). Note that the definition of good-outcome for patients in the validating data is different from the definition in the training data because we have used extreme samples to identify the prognostic signature. Our prognostic test is the classical likelihood ratio test, determined by assuming that the features are conditionally independent under both classes, namely "poor outcome" (the null hypothesis) and "good outcome" (the alternative hypothesis); see 'Methods'. The LRT reduces to comparing a weighted average of the 80 features to a threshold. The weights depend on the statistics of the individual features under both classes and are estimated from the training data; the threshold is also estimated from the training set, using cross-validation. The LRT built from the prognostic signature achieves a sensitivity of 88.6% (31 out of the 35 poor-outcome samples) and a specificity of 54.6% (65 out of the 119 good-outcome samples) on the 154 samples included in the validating data set. The remaining five patients, who either developed distant metastases after five years or were free of distant metastases with a follow-up period of less than five years, are not included in the validating data set. We compute the odds ratio of the prognostic test for developing metastases within five years between the patients in the poor-outcome group and in the good-outcome group as determined by the prognostic test. The prognostic test has a high odds ratio of 9.3 (95% confidence interval: 3.1-28.1) with a Fisher's exact test p-value < 0.00001. To make the results easier to understand, we have included in the additional files the heat maps of the two-group (good- and poor-outcome) supervised clusters of the integrated training data and test data for the 112 signature genes (see Additional file 1 and file 2). It is noteworthy that performance of the LRT on the validation data is actually somewhat better than the performance on the training set (which is estimated by cross-validation). Specifically, from Figure 1 (see also 'Methods'), the specificity of the LRT prognostic test is around 43% at approximately 90% sensitivity when estimated from the training data, whereas a specificity of approximately 55% at about the same sensitivity is achieved on the independent validation set.
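Under the conditional-independence assumption described above, the likelihood ratio statistic is a weighted sum of the binary features, and the test compares this sum to a threshold. A minimal Python sketch of such a classifier is given below; the per-class feature probabilities, the smoothing constant, and the threshold value are illustrative assumptions rather than the study's fitted parameters (in the study, the threshold is tuned by cross-validation to reach roughly 90% sensitivity).

```python
import numpy as np

def fit_naive_lrt(train_feats, train_labels, eps=1e-3):
    """Estimate P(feature = 1 | class) for the poor-outcome (1) and good-outcome (0)
    classes, clipped away from 0 and 1 to avoid infinite log-weights."""
    p_poor = np.clip(train_feats[train_labels == 1].mean(axis=0), eps, 1 - eps)
    p_good = np.clip(train_feats[train_labels == 0].mean(axis=0), eps, 1 - eps)
    w1 = np.log(p_poor) - np.log(p_good)          # contribution when a feature equals 1
    w0 = np.log(1 - p_poor) - np.log(1 - p_good)  # contribution when a feature equals 0
    return w1, w0

def lrt_score(feats, w1, w0):
    """Log-likelihood ratio statistic: a weighted sum of the binary feature values."""
    return feats @ w1 + (1 - feats) @ w0

def predict_poor_outcome(feats, w1, w0, threshold):
    """Flag a sample as poor outcome when the statistic exceeds the chosen threshold."""
    return lrt_score(feats, w1, w0) > threshold

# Toy usage with hypothetical binary pair features
train_feats = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0], [0, 0, 0]])
train_labels = np.array([1, 1, 0, 0, 0])
w1, w0 = fit_naive_lrt(train_feats, train_labels)
print(predict_poor_outcome(train_feats, w1, w0, threshold=0.0))
```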
Figure 2. The heat map of the 80 signature gene pairs. The Wang data set is used to illustrate the gene expression values of the signature genes. A heat map is generated using the matrix2png software [34]. There are 80 rows corresponding to the 80 gene pairs; the displayed intensities are the differences between the expression values of the two genes in each pair. The expression value for each difference is normalized across the samples to zero mean and one standard deviation (SD) for visualization purposes. Differences with expression levels greater than the mean are colored in red and those below the mean are colored in green. The scale indicates the number of SDs above or below the mean.

Figure 1. Choosing size of the signature. The relationship between the number of features in a prognostic signature and the specificity at 90% sensitivity of the corresponding prognostic test, evaluated by 40-fold cross-validation. We select m_opt = 80, the smallest value that achieves roughly maximum specificity at the 90% sensitivity level. The specificity observed on the validation set is in fact higher.

Figure 3. The Kaplan-Meier analysis (probability of remaining metastasis-free). For the prognostic test derived from the integrated data, the estimated hazard ratio of developing distant metastases within five years for patients assigned to the poor-outcome group relative to the good-outcome group is 9.3 (95% confidence interval: 2.9-29.9, p-value < 0.001).

Comparison of the prognostic signature to study-specific signatures
To evaluate the potential statistical power gained by integrating multiple data sets to increase diversity and sample size, we compare the predictive power of our integrated prognostic signature with each of the three separate study-specific prognostic signatures identified from the three data sets in Table 1. We use exactly the same method we used for the integrated data, and each of the resulting three prognostic tests is applied to the same independent test data, the Pawitan data. The results are reported in Table 3. In the case of the Sotiriou data, we do not achieve the targeted sensitivity of at least ninety percent due to the very small sample size; the estimate of the threshold in the LRT does not generalize to the Pawitan test set. For the Miller and Wang data sets, the desired sensitivity is achieved but the specificity is far lower than for the classifier learned from the integrated data set. The Wang data set is the largest. Using 40-fold cross-validation, the optimal feature number of gene pairs for the prognostic signature is m_opt = 60. The 94.3% sensitivity on the test set (33 out of the 35 poor-outcome samples) is close to the target of 90%. The specificity of the classifier is 10.1% (12 out of the 119 good-outcome samples), substantially lower than the classifier based on the integrated training set, albeit at somewhat higher sensitivity. (Indeed, the performance of the prognostic LRT test based on the Wang data alone is barely better than the completely randomized, data-independent procedure which chooses poor-outcome with probability 0.9 and good-outcome with probability 0.1, independently from sample to sample.) The odds ratio of this test is 1.9 (95% confidence interval: 0.4-8.7, Fisher's exact test p-value = 0.74), and the Kaplan-Meier curve (Figure 3B) shows a less significant difference between the patients in the poor-outcome and good-outcome groups than that of the signature from the integrated data.
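Where Kaplan-Meier comparisons between the predicted poor- and good-outcome groups are mentioned, an analysis of this kind could be sketched with the lifelines package as below; the input arrays are synthetic placeholders rather than the study data, and the original analyses were performed in MATLAB rather than Python.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 154
pred_poor = rng.random(n) < 0.55                       # test calls ~55% of patients "poor outcome"
time = rng.exponential(scale=np.where(pred_poor, 4.0, 12.0), size=n)   # follow-up in years (synthetic)
event = (time < 10) & (rng.random(n) < np.where(pred_poor, 0.8, 0.3))  # 1 = distant metastasis observed

kmf = KaplanMeierFitter()
ax = None
for label, mask in [("predicted poor outcome", pred_poor),
                    ("predicted good outcome", ~pred_poor)]:
    kmf.fit(time[mask], event_observed=event[mask], label=label)
    ax = kmf.plot_survival_function(ax=ax)

res = logrank_test(time[pred_poor], time[~pred_poor],
                   event_observed_A=event[pred_poor],
                   event_observed_B=event[~pred_poor])
print("log-rank p-value:", res.p_value)
```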
Finally, the estimated hazard ratio of 1.6 (95% confidence interval: 0.4 -6.8, 0.01 < p-value < 0.05) is much lower than that of the prognostic test from the integrated data. These comparisons demonstrate that the prognostic test derived from the integrated data is superior to the prognostic test derived from any of the individual studies and highlight the value of data integration. By integrating several microarray data sets with our rank-based methods, study-specific effects are reduced and more features of breast cancer prognosis are captured. Discussion Using a rank-based method for feature selection, we integrate three independent microarray gene expression data sets of extreme samples and identify a 112-gene breast cancer prognostic signature. The signature is invariant to standard within-array preprocessing and data normalization. All of the patients in the integrated training set had lymph-node-negative tumors and had not received adjuvant systemic treatment, so the identification of the prognostic signature is not subject to potential confounding factors related to lymph node status or systemic treatment. A LRT constructed from the prognostic signature is used to predict whether a breast cancer patient will develop distant metastases within five years after initial treatment. This prognostic test achieves a sensitivity of 88.6% and a specificity of 54.6% on an independent test data set of 154 samples. The test set includes patients who had and who had not received adjuvant systemic treatment, and those with both lymph-node-negative and lymph-node-positive tumors, indicating that our prognostic signature could possibly be applied to all breast cancer patients independently of age, tumor size, tumor grade, lymph mode status, and systemic treatment. It should be pointed out that, somewhat paradoxically, one reason for this ability to generalize is that, as with all machine learning methods, the feature seleciton process is not guided by specific biological knowledge about the underlying processes and pathways. One motivation for using the LRT is simplicity: under the assumption of independent features, the test statistic is a weighted average of the feature values and the test itself reduces to comparing this average to a fixed threshold. Another motivation stems from the Neyman-Pearson lemma of statistical hypothesis testing [28], which states that the LRT achieves optimal specificity at any given level of sensitivity. However, we cannot claim optimal specificity (at roughly ninety percent sensitivity) for our prognostic test since our LRT is constructed by assuming the 80 binary comparison features are statistically independent in each class, which is likely to be violated in practice due to correlations among the genes and genes appearing in multiple pairs. But this approach does offer a rigorous statistical framework for constructing prognostic tests at a given sensitivity. It also provides a direction towards more powerful procedures. Evidently, increasingly better approximations to the "true" LRT, and hence to optimal specificity, would be obtained by accounting for more and more of the dependency structure among the features. Indeed, accounting for pair-wise correlations alone would be a significant step in this direction. Comparison with the conventional treatment guidelines (e.g. St. Gallen and NIH) is instructive. While maintaining almost the same level of sensitivity (~90%), our prognostic test achieves a specificity which is well above the 10-30% range of the St. 
Gallen and NIH targets. This means that our test can spare a significant number of good-outcome patients from unnecessary adjuvant therapy, while ensuring roughly the same percentage of pooroutcome patients receive adjuvant therapy as recommended by the treatment guidelines. Therefore, our prognostic test and signature, if further validated on large-scale independent data, could potentially provide a useful means of guiding adjuvant systemic treatment, reducing cost and improving the quality of patients' lives. Other strengths of our study, compared with previous ones, are the larger number of homogeneous patients (lymph-node-negative tumors without adjuvant systemic treatment) in the training set, and an external independent test set. In each of the two major breast cancer prognostic studies [6,7], the training and validation data are extracted from the same study group from the same population. More specifically, the entire data set is randomly divided into two pieces, one serving as a training set and the other as a validation or test set. In this case, the training data and the validation data are likely to have similar properties. Therefore, the study-specific prognostic test identified from the training data usually gives over-promising results when assessed using the "internal" validation data. (Similar remarks apply to methods which measure performance using cross validation.) This argument may explain why the two major prognostic signatures, although validated internally with about 90% sensitivity and about 50% specificity, cannot be validated externally with an independent data set [9]. In addition, splitting the original data set into two pieces only aggravates the smallsample problem, as well as producing other sources of bias [12]. In our study, we increase diversity and sample size by integrating several microaray data sets involving patients from different populations. By selecting a homogeneous subgroup of patients and combining data from multiple studies, the derived prognostic test and signature is less sensitive to study-specific factors. An intriguing advantage of inter-study data integration is that it increases the statistical power to capture essential prognostic features which might be masked by study-specific features and the small sample sizes of individual data sets. In this sense, our prognostic test is more robust to interstudy variability and may facilitate external validation. Comparison of our prognostic signature with the two major signatures of van't Veer et al. and Wang et al. is not straightforward because of differences in patients, microarray platforms, and algorithms. The study of van't Veer et al. uses an Agilent array platform and our study uses an Affymetrix array platform. Only 46 out of the 112 genes in our prognostic signature are present on the Agilent Hu25K array and only 36 of the 70 genes in the van't Veer signature are present on the Affymetrix HG-U133A array. Therefore, we can neither validate the van't Veer prognostic test on our validation data nor validate our test on their data set. There is a three-gene overlap between the van't Veer signature and our signature (CCNE2, ORC6L, and PRC1). Since the data set in Wang et al. is included in our training set, we cannot validate our test on that data set. 
On the other hand, in order to validate the test proposed by Wang et al., we need to know the estrogen receptor (ER) status of our test samples because the classification rule based on their signature is depend on ER status, which is absent from our validation data. Again, there is a four-gene overlap between the Wang signature and our signature (AP2A2, CBX3, CCNE2, and MLF1IP). It is noteworthy that the gene CCNE2 is included in all of the three signatures and is reported to be related to breast cancer [29]. CCNE2 could be a potential target for the rational development of new cancer drugs. Using the program DAVID [30], according to the gene ontology biological process categories, the 112-gene signature is highly enriched in cell cycle (P-value = 1.45E-5) and cell division (P-value = 5.9E-4). To pinpoint the role of some of the genes in our signature, the cell cycle pathway is displayed in the additional files with our signature genes shown in red (see Additional file 3). These findings demonstrate that deregulation of these pathways has a direct impact on tumor progression and indicate that the 112-gene signature is biologically relevant. To assess the benefit of data integration, we compared the predictive power of our signature with that of three studyspecific signatures identified from the Sotiriou, Miller and Wang data sets using the same LRT procedure. When applied to the same independent test data, our prognostic test consistently outperforms the study-specific tests and the largest study (Wang) in particular, in terms of specificity (54.6% vs. 10.1%) at roughly the same 90% sensitivity level, odds ratio (9.3 vs. 1.9), hazard ratio (9.3 vs. 1.6), and Kaplan-Meier analysis. These findings again suggest a prognostic test derived from a single data set may be overdedicated and might perform weakly on external data. In contrast, a prognostic test derived from integrated data is more likely to be more robust to study-specific factors and to be validated satisfactorily on external data. Recently, some studies have shown that combining gene expression data and conventional clinical data (e.g. tumor size, grade, ER status) could lead to improved breast cancer prognosis [31,32]. An approach based on solid statistical principles can accommodate aggregating data of multiple types, e.g., combining gene expression signatures with traditional clinical factors. In this study, due to the lack of clinical information for some of the training samples, we could not incorporate such information into the development of our prognostic test. As clinical information becomes publicly available, it might be combined with the integrated gene expression data to further improve prognosis. Conclusion The opinion expressed in recent studies that gene expression information can be useful in breast cancer prognosis seems to be well-founded. However, due to the small sample sizes relative to the complexity of the entire expression profile, existing methods suffer certain limitations, namely the prevalence of study-specific signatures and difficulties in validating the prognostic tests constructed from these signatures on independent data. Integrating data from multiple studies to obtain more samples appears to be a promising way to overcome these limitations. 
We have integrated several gene expression data sets and developed a likelihood ratio test for predicting distant metastases that correctly signals a poor outcome in approximately ninety percent of test cases while maintaining about fifty-five percent specificity for good outcome patients. This well exceeds the St. Gallen and NIH guidelines and compares favorably with the best results previously reported (although not yet validated on external test data). As more and more gene (and protein) expression data is generated and made publicly available, modeling the interactions among genes (and gene products) will become increasingly feasible, and is likely to be crucial in designing prognostic tests which achieve high sensitivity without sacrificing specificity. Data integration Recently, our group has developed a family of statistical molecular classification methods based on relative expression reversals [22,33], and applied one variant based on a two-gene classifier to microarray data integration [23]. These methods only use the ranks of gene expression val-ues within each profile and achieve impressive results in both molecular classification and microarray data integration. An important feature of rank-based methods is that they are invariant to monotonic transformations of the expression data within an array, such as those used in most array normalization and other pre-processing methods. This property makes these methods especially useful for combining data from separate studies since the nature of the primary features extracted from the data, namely comparisons of mRNA concentration between pairs of genes, eliminates the need to standardize the data before aggregation. Specifically, the ranks of gene expression values are invariant to monotonic data transformations within each microarray. Consequently, we directly merge gene expression data of the patients from three training data sets in Table 1, using the 22283 probe sets on Affymetrix HG-U133A microarray, to form an integrated training data set of 358 samples. After aggregation, we extract a list of pair-wise comparisons; each of these "features" corresponds to a pair of genes and is assigned the value zero or one depending on the observed ordering of expressions; see the following section. The number of features retained is much smaller than the number of genes on the array. The procedure is now described in more detail. Feature selection and transformation Consider G genes whose expression values X = {X 1 , X 2 , ..., X G } are measured using a DNA microarray and regarded as random variables. The class label Y for each profile X is a discrete random variable taking on one of two possible values corresponding to the two prognostic states or hypotheses of interest, namely "poor-outcome," denoted Y = 1, and "good-outcome," denoted Y = 2. The integrated training microarray data represent the observed values of X and comprise a G × N matrix x = [x gn ], g = 1, 2, ..., G and n = 1, 2, ..., N, where G is the number of genes in each profile and N is the number of samples (profiles) in the integrated data set. Each column n represents a gene expression profile of G genes with a class label y n = 1 (poor-outcome) or y n = 2 (good-outcome) for the twoclass problem in our study. Among the N samples, there are N 1 (respectively, N 2 ) samples labeled as class 1 (respectively, class 2) with N = N 1 + N 2 . 
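A minimal sketch of the pairwise-comparison encoding described in this section, assuming the candidate gene pairs have already been chosen; it reduces a G x N expression matrix to an M x N binary matrix and estimates the class-conditional rates p_m and q_m used later.

```python
import numpy as np

def binarize_pairs(x: np.ndarray, pairs: list[tuple[int, int]]) -> np.ndarray:
    """x: G x N expression matrix (genes x samples).
    pairs: list of M ordered gene-index pairs (i, j), oriented so that
    P(X_i < X_j | poor outcome) >= P(X_i < X_j | good outcome).
    Returns an M x N binary matrix with Z_m = 1 when x_i < x_j."""
    M, N = len(pairs), x.shape[1]
    z = np.zeros((M, N), dtype=int)
    for m, (i, j) in enumerate(pairs):
        z[m] = (x[i] < x[j]).astype(int)
    return z

def class_conditional_rates(z: np.ndarray, y: np.ndarray):
    """Estimate p_m = P(Z_m = 1 | Y = 1) and q_m = P(Z_m = 1 | Y = 2)
    from training labels y (1 = poor outcome, 2 = good outcome)."""
    p = z[:, y == 1].mean(axis=1)
    q = z[:, y == 2].mean(axis=1)
    return p, q
```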
For each pair of genes (i, j), where i, j = 1, 2, ..., G, i ≠ j, let P(X_i < X_j | Y = k), k = 1, 2, denote the conditional probability of the event {X_i < X_j} given Y = k. We define a score by

Δ_ij = | P(X_i < X_j | Y = 1) − P(X_i < X_j | Y = 2) |,   (1)

where the two conditional probabilities are estimated by the corresponding fractions of training samples in each class,

P̂(X_i < X_j | Y = k) = (1 / N_k) Σ_{n: y_n = k} I(x_in < x_jn),  k = 1, 2,   (2)

with I(·) the indicator function; the M gene pairs with the largest scores form the candidate feature list. Suppose Z_m, m = 1, 2, ..., M, corresponds to the gene pair {i, j}. For convenience, the ordering (i, j) will signify which probability in Equation (1) is larger. The reason for this is to facilitate interpretation of the results, as will become apparent. If P(X_i < X_j | Y = 1) ≥ P(X_i < X_j | Y = 2), as estimated by the fractions in (2), we will write (i, j), and if P(X_i < X_j | Y = 1) < P(X_i < X_j | Y = 2) we will denote the pair by (j, i). The value assumed by Z_m is then set to be 1 if we observe X_i < X_j and set to 0 otherwise, i.e., if we observe X_i ≥ X_j. Of course the same definition is applied to each feature in the training set. In this way, observing Z_m = 1 (resp., Z_m = 0) represents an indicator of the poor-outcome (resp., good-outcome) class in the sense that p_m = P(Z_m = 1 | Y = 1) ≥ q_m = P(Z_m = 1 | Y = 2). For all the features selected in our signature we in fact have p_m > 1/2 > q_m. After this procedure, the original G × N data matrix is reduced to an M × N data matrix. The number of distinct genes in a prognostic signature is obviously fewer than 2M. In our practice, there are always more than M distinct genes among the top M gene pairs. Given that the numbers of genes in published breast cancer prognostic signatures are mostly fewer than 100, we fix M = 200 in this study to make sure we can identify a prognostic feature signature based on a reasonable number of genes.

Likelihood ratio test
The classical likelihood ratio test (LRT) is a statistical procedure for distinguishing between two hypotheses, each of which specifies a probability distribution for the observed binary feature vector z = (z_1, ..., z_M). The test compares the log-likelihood ratio to a threshold: choose Y = 1 (poor outcome) if

log [ P(z | Y = 1) / P(z | Y = 2) ] ≥ t

and choose Y = 2 (good outcome) otherwise. The threshold t is adjusted to provide a desired tradeoff between type I error and type II error, i.e., between sensitivity and specificity. Choosing t small provides high sensitivity at the expense of specificity and choosing t large promotes the opposite effect.

Naive Bayes Classifier
In the special case in which the random variables Z_1, ..., Z_M are binary (as here) and are assumed to be conditionally independent given Y, the LRT has a particularly simple form. It reduces to comparing a linear combination of the variables to a threshold. Recall that p_m = P(Z_m = 1 | Y = 1) and q_m = P(Z_m = 1 | Y = 2), m = 1, 2, ..., M. In this case,

P(z | Y = 1) = Π_{m=1}^{M} p_m^{z_m} (1 − p_m)^{1 − z_m},

and a similar expression holds for P(z | Y = 2) with p_m replaced by q_m. It follows that

log [ P(z | Y = 1) / P(z | Y = 2) ] = Σ_{m=1}^{M} [ z_m log(p_m / q_m) + (1 − z_m) log((1 − p_m) / (1 − q_m)) ],

and consequently the LRT reduces to the form: Choose Y = 1 if

Σ_{m=1}^{M} λ_m z_m + Σ_{m=1}^{M} log [ (1 − p_m) / (1 − q_m) ] ≥ t   (3)

and choose Y = 2 otherwise, where

λ_m = log [ p_m (1 − q_m) / ( q_m (1 − p_m) ) ].   (4)

Since p_m > q_m, all these coefficients in Equation (4) are positive and the decision rule in Equation (3) reduces to weighted voting among the pair-wise comparisons: every observed instance of z_m = 1 is a vote for the poor-outcome class with weight λ_m. Moreover, under the two assumptions of i) conditional independence and ii) equal a priori class probabilities (i.e., P(Y = 1) = P(Y = 2)), this is in fact the Bayes classifier (which is optimal) for the threshold t = 0.

Sensitivity vs. Specificity
Since our interest lies in high sensitivity at the expense of specificity if necessary, we do not choose t = 0. Since we want a very high likelihood of detecting the poor-outcome class, we choose the threshold t to achieve high sensitivity, defined to be above 90%. Let t_α denote the (largest) threshold achieving sensitivity 1 − α. That is, t_α is the largest threshold satisfying

P( Σ_{m=1}^{M} λ_m Z_m + Σ_{m=1}^{M} log [ (1 − p_m) / (1 − q_m) ] ≥ t_α | Y = 1 ) ≥ 1 − α.

(We explain how to estimate t_α from the training data in the next sections.)
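Assuming the reconstruction of Equations (3) and (4) above, the weights and the decision rule can be computed as follows; the small clipping constant eps is an implementation detail added here to avoid taking logarithms of zero and is not specified in the text.

```python
import numpy as np

def lrt_weights(p: np.ndarray, q: np.ndarray, eps: float = 1e-6):
    """Weights lambda_m (Equation (4)) and the z-independent offset of the
    log-likelihood ratio, computed from estimated rates p_m and q_m."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    lam = np.log(p * (1 - q) / (q * (1 - p)))      # Equation (4)
    offset = np.log((1 - p) / (1 - q)).sum()       # constant term, independent of z
    return lam, offset

def lrt_predict(z: np.ndarray, lam: np.ndarray, offset: float, t: float) -> np.ndarray:
    """Apply Equation (3): choose Y = 1 (poor outcome) when the weighted vote
    plus the offset reaches the threshold t, otherwise Y = 2 (good outcome).
    z is an M x N binary feature matrix."""
    score = lam @ z + offset
    return np.where(score >= t, 1, 2)
```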
Then, from the Neyman-Pearson lemma, we know that our decision rule achieves the maximum possible specificity at this level of sensitivity. More precisely, this threshold maximizes

P( Σ_{m=1}^{M} λ_m Z_m + Σ_{m=1}^{M} log [ (1 − p_m) / (1 − q_m) ] < t_α | Y = 2 ),

which is the probability of choosing good-outcome when in fact good-outcome is the true hypothesis. Of course this is only a theoretical guarantee and depends very strongly on the conditional independence assumption, which is surely violated in practice; indeed, some genes are common to several of the variables Z_m. Still, the LRT does provide a framework in which there are clearly stated hypotheses under which specificity can be optimized at a given sensitivity. Moreover, it provides a very simple test and the parameters p_m, q_m are easily estimated with available sample sizes. Most importantly, the decision procedure dictated by the LRT does indeed work well on independent test data (see 'Results').

Signature identification and class prediction
In clinical practice, when selecting breast cancer patients for adjuvant systemic therapy, it is of evident importance to limit the number of poor-outcome patients assigned to the good-outcome category. The conventional guidelines (e.g., St. Gallen and NIH) for breast cancer treatment usually call for at least 90% sensitivity and 10-30% specificity. Therefore our objective in selecting the threshold t is to maintain high sensitivity (~90%); the specificity is then determined by the sample size and the information content in the features. In order to meet these criteria, we employ k-fold cross-validation to estimate the threshold which maximizes specificity at ~90% sensitivity for each signature size for our likelihood ratio test. The idea is to use k-fold cross-validation to estimate the sensitivity and specificity for each possible value of m = 5, 10, 15, ..., 200 (the number of features in the LRT) and t = 1, 2, ..., 200 (the threshold in Equation (3)). For each such m we then compute the specificity at the largest threshold t(m) achieving at least 90% sensitivity; this function is plotted in Figure 1. (Obtaining 90% sensitivity can always be achieved by selecting a small enough threshold.) Finally, we then choose the smallest value m_opt which (approximately) maximizes specificity; the threshold is then t_opt = t(m_opt). From Figure 1, we see that m_opt = 80. Specifically, the steps are as follows (a condensed code sketch is given after this list):
1) Divide the integrated training data set into k disjoint subsets of approximately equal sample size;
2) Leave out one subset and combine the other k-1 subsets to form a training set;
3) Generate a feature list of M gene pairs ranked from most to least discriminating according to the score defined in Equation (1), and compute the corresponding binary feature vector of length M for every training sample and every left-out sample;
4) Starting from the top five features, sequentially add five features at a time from the ranked list, generating a series of 40 feature signatures of sizes m = 5, 10, ..., 200;
5) For each signature in 4), classify the left-out samples using the LRT in Equation (3) for each possible integer threshold t = 1, 2, ..., 200 and record the numbers of misclassified poor-outcome and misclassified good-outcome samples;
6) Repeat steps 1)-5) exhaustively for all k divisions into training and testing in step 1);
7) Calculate the sensitivity and specificity for the prognostic LRT test for each of the 40 signatures, and keep only the largest threshold for which the sensitivity exceeds 90%.
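A condensed sketch of the cross-validation search in steps 1)-7), reusing the helper functions from the earlier sketches (class_conditional_rates, lrt_weights); the fold construction is a plain K-fold split, which may differ from the exact partitioning the authors used.

```python
import numpy as np
from sklearn.model_selection import KFold

def cv_specificity_at_sensitivity(z, y, m_grid=range(5, 205, 5),
                                  t_grid=range(1, 201), k=40, target_sens=0.90):
    """For each signature size m, estimate by k-fold cross-validation the best
    specificity of the LRT whose sensitivity is at least target_sens, together
    with the largest threshold achieving it."""
    # errors[(m, t)] = [missed poor, total poor, missed good, total good]
    errors = {(m, t): [0, 0, 0, 0] for m in m_grid for t in t_grid}

    for tr, te in KFold(n_splits=k, shuffle=True, random_state=0).split(y):
        p, q = class_conditional_rates(z[:, tr], y[tr])
        ranked = np.argsort(-np.abs(p - q))        # rank pairs by the score in Equation (1)
        for m in m_grid:
            top = ranked[:m]
            lam, off = lrt_weights(p[top], q[top])
            score = lam @ z[np.ix_(top, te)] + off
            for t in t_grid:
                pred = np.where(score >= t, 1, 2)
                e = errors[(m, t)]
                e[0] += int(np.sum((y[te] == 1) & (pred == 2)))
                e[1] += int(np.sum(y[te] == 1))
                e[2] += int(np.sum((y[te] == 2) & (pred == 1)))
                e[3] += int(np.sum(y[te] == 2))

    best = {}
    for m in m_grid:
        feasible = [(t, 1 - errors[(m, t)][2] / errors[(m, t)][3])
                    for t in t_grid
                    if 1 - errors[(m, t)][0] / errors[(m, t)][1] >= target_sens]
        if feasible:
            best[m] = max(feasible)   # largest feasible threshold and its specificity
    return best
```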
The optimal number of features, m_opt, is the smallest number which effectively maximizes specificity. The final prognostic signature is the m_opt top-ranked features (gene pairs) generated from the original integrated training set. The final prognostic test is the LRT with these features and the corresponding threshold t_opt = t(m_opt); this is the classifier which is applied to the validation set and yields the error rates reported in 'Results'.

Additional statistical analysis
We compute the odds ratio of our prognostic test for developing distant metastases within five years between the patients in the poor-outcome group and the good-outcome group as determined by the LRT classifier. The p-values associated with odds ratios are calculated by Fisher's exact test. We also plot the Kaplan-Meier curve of the signature on the independent test data with p-values calculated by the log-rank test. The Mantel-Cox estimate of the hazard ratio of distant metastases within five years for the signature is also reported. All the statistical analyses are performed using MATLAB.

Authors' contributions
LX, under the supervision of RLW and DG, collected the microarray data sets and implemented the algorithms; all authors developed the methodology and contributed to the final manuscript.
Infrequent RAS mutation is not associated with specific histological phenotype in gliomas Mutations in driver genes such as IDH and BRAF have been identified in gliomas. Meanwhile, dysregulations in the p53, RB1, and MAPK and/or PI3K pathways are involved in the molecular pathogenesis of glioblastoma. RAS family genes activate MAPK through activation of RAF and PI3K to promote cell proliferation. RAS mutations are a well-known driver of mutation in many types of cancers, but knowledge of their significance for glioma is insufficient. The purpose of this study was to reveal the frequency and the clinical phenotype of RAS mutant in gliomas. This study analysed RAS mutations and their clinical significance in 242 gliomas that were stored as unfixed or cryopreserved specimens removed at Kyoto University and Osaka National Hospital between May 2006 and October 2017. The hot spots mutation of IDH1/2, H3F3A, HIST1H3B, and TERT promoter and exon 2 and exon 3 of KRAS, HRAS, and NRAS were analysed with Sanger sequencing method, and 1p/19q codeletion was analysed with multiplex ligation-dependent probe amplification. DNA methylation array was performed in some RAS mutant tumours to improve accuracy of diagnosis. RAS mutations were identified in four gliomas with three KRAS mutations and one NRAS mutation in one anaplastic oligodendroglioma, two anaplastic astrocytomas (IDH wild-type in each), and one ganglioglioma. RAS-mutant gliomas were identified with various types of glioma histology. RAS mutation appears infrequent, and it is not associated with any specific histological phenotype of glioma. Background Glioma is a common tumour originating in brain [1]. Glioblastoma is the most aggressive subtype and the most common in adult glioma [1]. Other than glioblastoma, diffuse gliomas include astrocytomas and oligodendrogliomas. and anaplastic astrocytomas and anaplastic oligodendrogliomas show poor prognosis compared in each subtype [1]. These subtypes had been classified mainly by histological diagnosis [2]. Recent intensive genomic and molecular biological analyses of gliomas have identified several significant driver gene mutations in IDH, BRAF, or H3F3 [3,4]. Dysregulations in the p53, RB1, and MAPK / PI3K pathways have also been suggested to be involved in the molecular pathogenesis of glioblastoma [5,6]. The importance of the molecular information to an understand the biological properties and pathogenesis of glioma is well recognized. The new 2016 World Health Organization (WHO) classification for central nervous system tumours has introduced the concept of multi-layered integrated diagnosis using a combination of traditional histopathological classification and information obtained from modern molecular analytical methods; therefore, the necessity for molecular information will increase in the neuro-oncological field [7]. RAS genes including KRAS, HRAS, and NRAS are wellknown oncogenic genes, and are involved in the ERK pathway, a subgroup of the MAPK pathway. Ligandmediated activation of receptor tyrosine kinases, such as epidermal growth factor receptor (EGFR), activate RAS proteins and initiate the cascade of the ERK signalling pathway. Activated RAS proteins activate the RAF, which can activate MEK just upstream of ERK [8,9]. In addition, RAS genes also activate PI3K [10]. Through these several pathways, RAS genes promote cell proliferation, survival, and growth. 
Mutations in RAS genes have been found in various cancer cells and lead to dysregulation of cell proliferation to promote oncogenesis [11,12]. RAS proteins are bound to GDP in a stable state, and switch to an activated state when bound to GTP [12,13]. GTPase switches GTP-bound RAS back to GDP-bound RAS [13]. RAS mutations have an impaired intrinsic GTPase and are insensitive to GTPase-activating proteins; therefore, inhibiting the conversion of GTP to GDP resulting in dysregulated cell proliferation and oncogenesis [11][12][13]. RAS mutations are mainly observed in codons 12, 13 and 61, and often in pancreatic, colorectal, lung and thyroid cancers [14,15]. KRAS-activating mutations are widely effective as predictors of resistance to anti-EGFR monoclonal antibodies in colorectal and lung cancer patients [15][16][17][18]. Anti-KRAS drugs have been under development [19,20], and some clinical trials are ongoing [21]. RAS mutation is now an important biomarker and therapeutic target in these solid cancers. In terms of central nervous system diseases, a recent study showed an important relationship between RAS mutations and cerebral arterio-venous malformations as a non-neoplastic pathology [22]. Although several reports have found a small number of cases bearing RAS mutations in various gliomas, the clinicopathological properties of these mutations have not been fully addressed [23][24][25][26]. This study analysed RAS mutations and their clinical significance in gliomas. Patients and samples Inclusion criteria for the present study were the local initial diagnosis of gliomas according to the 2007 WHO classification of central nervous system tumours, and frozen or fresh tumour tissues available for genetic analysis. The exclusion criteria were insufficient quality of results of genetic analysis, or clinical data, but no case was excluded. A total of 242 cases were enrolled, including 167 tumours operated on from July 2008 to October 2017 in Kyoto University Hospital, and 75 tumours operated on from May 2006 to March 2017 in Osaka National Hospital. Clinical data collected from each institution included age, sex, tumour location, extent of resection, clinical course including treatment protocol and dates of surgery, recurrence or progression, and death. Ki-67 index were analysed in 167 tumours which was operated in Kyoto University Hospital. MGMT promoter methylation analysis O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation was assessed by quantitative methylationspecific PCR (qMSP), in accordance with previous reports [31,32]. Genomic DNA samples were processed using the EZ DNA Methylation Gold Kit (Zymo Research Corporation, Irvine, CA). The methylation status of samples was analysed by qMSP using the QuantStudio 12 K Flex Real-Time PCR System (Thermo Fisher Scientific) with POWER SYBR® Green PCR Master Mix (Thermo Fisher Scientific) and specific primers (Supplementary Table 1) [33] by the standard curve method. The cut-off for determining a hypermethylated state was set as > 1% [32]. Integrated diagnosis Using all molecular pathological information, all cases received integrated diagnoses according to the 2016 WHO classification for central nervous system tumours. DNA methylation array DNA methylation profiles were examined by Filgen, Inc. (Aichi, Japan) using the Infinium® MethylationEPIC BeadChip system (illumina, San Diego, CA). 
Raw methylation data (idat files) were uploaded onto the MolecularNeuropathology.org website and compared to a reference cohort to then be classified into subgroups of the highest calibrated score for each sample [35]. Statistical analysis All statistical analyses were performed using JMP version 15 software (SAS institute INC). The continuous variates were analyses by Student's t-test. For survival analysis, overall survival (OS) was defined as the interval between the initial operative day and the date of death or last follow-up date on which the patient was known to be alive. Survival data were analysed using the logrank test and Cox regression analyses. Differences were considered significant for values of p < 0.05. The clinical courses for each case were not uncommon. But the meaning of RAS mutations in glioma for survival were difficult to be discussed in the present study due to the small number of patients, and the Kaplan-Meyer curve showed no difference in overall survival between anaplastic astrocytoma, IDH-wild type, with and without RAS mutation ( Supplementary Fig. 1). Case 1 A 26-year-old woman presented with a chief complaint of dizziness, and MRI showed left frontal lobe tumour with hyperintensity on T2-weighted imaging without gadolinium enhancement. She elected to follow a "wait and scan" approach ( Fig. 1a, b). Five years later, the slowly growing tumour was removed under awake craniotomy. Post-operative MRI showed total resection of the T2-hyperintense lesion. Histopathological examinations detected atypical glia-like cells proliferating densely, cells with round nuclei and clear cytoplasm resembling fried eggs, as well as astrocytic cells, in a substantial area of the tumour. No necrosis or microvascular proliferation was identified (Fig. 1c). FISH detected 1p/19q codeletion, and Ki-67 labelling index of the tumour was 12.5%. The pathological diagnosis was anaplastic oligoastrocytoma, and the patient was followed without post-surgical chemotherapy or radiotherapy. At 45 months after the first surgery, the tumour recurred, and a second surgery was performed to achieve total resection. No rerecurrence was seen until this presentation, 69 months after the first surgery. No anti-tumour treatment had been performed after the second surgery. Genetic analysis of primary tumour showed IDH1 R132H, TERT C250T, and KRAS G12A ( Supplementary Fig. 2), and no mutations in IDH2, H3F3A, or HIST1H3B. MGMT promoter was hypomethylated. MLPA analysis showed 1p/ 19q codeletion and no CDKN2A/B deletion (Fig. 1d). The integrated diagnosis from Sanger sequencing, MLPA, and pathological findings was anaplastic oligodendroglioma, IDH-mutant and 1p/19q codeleted. Interestingly, genetic analysis of recurrent tumour showed the same result about IDH1/2, TERTp, H3F3A and HIST1H3B, but KRAS mutation was not detected. Case 2 A 54-year-old woman presented with a 3-month history of increasing headache and dizziness. MRI showed a gadolinium-enhanced lesion in the genu of the corpus callosum and a T2 hyperintensity lesion spreading to bilateral frontal lobes (Fig. 2a, b). Emergent endoscopic surgery was performed because of progressing hydrocephalus and achieved partial removal of the tumour. Histopathological examinations showed increased atypical glial cells and numerous mitoses, but no microvascular proliferation or palisading necrosis in the specimen (Fig. 2c). Ki-67 labelling index was 40%. 
The pathological diagnosis was high-grade glioma, and postoperative treatment was radiotherapy concomitant with temozolomide [36]. After discharge, she received maintenance therapy with temozolomide and bevacizumab. However, she showed progressive disease 29 months after the first surgery and received bevacizumab in combination with ifosfamide, carboplatin, and etoposide (ICE) [37]. The tumour kept growing slowly, and she died 49 months after the first surgery. Genetic analysis revealed no mutations in IDH1/2, H3F3A, HIST1H3B or TERT promoter, and MGMT promoter was hypermethylated. In addition, KRAS E76D was detected (Supplementary Fig. 2). A DNA methylation array showed MGMT promoter hypermethylation, matching the qMSP result, but did not identify any matching methylation classes with high calibrated scores. The copy number profile showed no special characteristics (Fig. 2d). The final diagnosis was anaplastic astrocytoma, IDH-wildtype. To support this diagnosis, additional Sanger sequencing was performed and TP53 P72R was revealed. Case 3 A 45-year-old man presented with simple partial seizures involving the right side of the face. MRI showed a T2-hyperintense lesion without gadolinium enhancement in the left frontoparietal lobe. Histopathological examinations of stereotactic biopsy revealed tumour cells with semiround or round nuclei (Fig. 3a) of various sizes, and areas of mitoses, with a Ki-67 labelling index of 35%. No necrosis or vascular proliferation was seen, and FISH revealed no 1p/19q codeletion. The diagnosis was anaplastic glioma. He received chemoradiotherapy comprising 60 Gy with temozolomide, but MRI showed tumour progression 3 months later (Fig. 3b, c). He was treated with additional radiotherapy and bevacizumab with ICE but died 24 months after the first surgery. Genetic analysis revealed NRAS Q61R ( Supplementary Fig. 2), but no mutations in IDH1/2, H3F3A, HIST1H3B or TERT promoter, and MGMT promoter was not hypermethylated. Methylation-based profiling by the DNA methylation array classified this tumour as "methylation class family Glioblastoma, IDH wildtype" with a calibrated score of 0.55. This low score could be a result of low tumour content or low DNA quality in the analysed material, but the classification matched well with the clinical course and pathological findings. The copy 3d). Because there was no evidence of grade 4 histology, the integrated diagnosis was determined as anaplastic astrocytoma, IDH-wildtype. Case 4 A 36-year-old man was referred after a brain tumour was coincidentally identified on screening CT after a traffic accident. MRI revealed a left medial occipitotemporal tumour with gadolinium enhancement (Fig. 4a, b). Histopathological examination of stereotactic biopsy (Fig. 4c) revealed a dense, invasive proliferation of various-sized glial cells with some mitoses, and a Ki-67 labelling index of 5%. No necrosis or microvascular proliferation was identified. Immunohistochemistry showed positive results for olig2, GFAP, and p53, while FISH showed no 1p/19q codeletion. Based on these findings, the first diagnosis was anaplastic astrocytoma. The patient received chemoradiation and maintenance chemotherapy with temozolomide. As tumour progression was detected 18 months after biopsy, he underwent gross total resection of the tumour. No tumour recurrence was identified after the second surgery, and no additional treatment was performed for 24 months. Genetic analysis of primary tumour revealed KRAS Q61K ( Supplementary Fig. 
2), wild-type IDH1/2, H3F3A, HIST1H3B and TERT promoter, and no MGMT promoter hypermethylation. The DNA methylation array classified "methylation class family pilocytic astrocytoma" as the methylation class and "methylation class of lowgrade glioma, subclass hemispheric pilocytic astrocytoma and ganglioglioma" as the methylation class family member, with calibrated scores of 0.97 and 0.96, respectively. The copy number profile showed gain of chromosomes 7, 9, 11 and 12 (Fig. 4d). Histopathological re-examination revealed many large ganglion cells with anisonucleosis and some double nuclei (Fig. 4e), Nissl bodies and eosinophilic granular bodies (Fig. 4f) in specimens from the second surgery. Given these genetic results and histopathological findings, the final diagnosis was ganglioglioma. Like as case 1, KRAS mutation was not detected in the recurrent tumour. Review of the previous reported cases Previous 17 studies presented 44 gliomas with RAS mutations (Table 3). They were 17 glioblastomas (2 were glioblastomas with oligodendroglial component), 1 astrocytoma, 4 oligodendrogliomas, 3 anaplastic oligodendrogliomas, 1 oligoastrocytoma, 9 pilocytic astrocytomas, 2 anaplastic pilocytic astrocytomas, 2 fibrillary astrocytomas, 2 gangliogliomas, 2 pleomorphic xanthoastrocytomas, and 1 gliosarcoma. And they included 14 men and 13 women, and ages at diagnosis were described in 28 patients and they were 1-64 years (average, 33.3 years; standard deviation, 17.1 years). The co-existing mutations were various and IDH1 R132H was the major mutation which detected in 11 cases. NA Not available 2 of 40 gangliogliomas [25]. Literature review of RASmutant gliomas showed that RAS-mutant gliomas have various histologies and that RAS mutation coexisted with other genetic alterations. They were often reported in young cases. The larger database made by the Cancer Genome Atlas (TCGA) Research Network showed 2 KRAS mutation and 2 NRAS mutation in 590 glioblastomas, and 1 KRAS mutation and 1 NRAS mutation in low grade gliomas with IDH-mutant and 1p/19q codeletion, and 1 KRAS mutation and 2 NRAS mutation in those with IDH-mutant and no 1p/19q codeletion [52]. Summarizing by age group, RAS mutations were found in 1 out of all 93 gliomas under 30 years old, 6 out of 631 cases from 30 to 60 years old, and 1 out of 363 cases in over 60 years old, and there was no significant difference in frequency of RAS mutations [52]. Similar to these studies, we report RAS mutation as a rare occurrence with no association to a particular histological phenotype of glioma. Additionally, copy number analysis in the present study revealed no chromosomal gain or loss. Discussion In this study, RAS-mutant gliomas showed various histology, but all cases were in relatively young adults. RAS mutation was found in an anaplastic oligodendroglioma, two IDH-wildtype anaplastic astrocytomas, and a ganglioglioma. Among the 20-to 60-year-old patients of our present cohort, 14 tumours were anaplastic oligodendrogliomas, 23 were anaplastic astrocytomas (14 were IDH-wildtype), and one was ganglioglioma. Excluding the single ganglioglioma case present in our cohort, IDH-wildtype anaplastic astrocytomas in patients under 60 years old showed RAS mutation the most frequently (14.3%). Genetically, no other major driver mutations were identified in the anaplastic astrocytomas or the ganglioglioma, which had RAS mutations. 
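The TCGA age-group comparison cited above can be checked with a simple contingency-table test on the stated counts (1/93, 6/631, 1/363); the sketch below uses SciPy's chi-square test of independence, and given the small expected counts an exact test on pairwise tables would be a reasonable alternative.

```python
from scipy.stats import chi2_contingency

# RAS-mutant vs. RAS-wild-type gliomas by age group in the TCGA series [52]
table = [
    [1, 93 - 1],     # < 30 years
    [6, 631 - 6],    # 30-60 years
    [1, 363 - 1],    # > 60 years
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
# A p-value well above 0.05 is consistent with no detectable difference in
# RAS mutation frequency across age groups.
```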
The case of RAS-mutant anaplastic oligodendroglioma showed IDH1 and TERT promoter mutations, which are known to be detected in almost all oligodendrogliomas [27]. Because of the small number of RAS mutant tumours, clarifying the genetic properties of RAS mutant tumours and discussing associations between RAS mutations and other driver genes is difficult, however, some studies reported the co-existing other genetic alterations in RAS-mutant gliomas. Clinically, the two cases of anaplastic astrocytoma with RAS mutation showed aggressive infiltration during the clinical course with high Ki-67 labelling index, but clinical outcomes did not differ from those of other IDH-wildtype anaplastic astrocytomas (Supplementary Fig. 1). The other two cases of anaplastic oligodendroglioma and ganglioglioma showed benign clinical courses. Some studies have reported RAS mutation as a prognostic factor in some non-neuroepithelial solid cancers [53,54]. However, we could not explain the clinical significance of RAS mutation occurring in gliomas. The limitation of the present study was the rarity of RAS mutant gliomas due to the infrequency of RAS mutation in glioma. This was why the survival analysis was difficult in our cases, but it was also the same in another previous cohort. These issues should be addressed using larger cohorts in the future. KRAS mutation has been reported to increase vascular endothelial growth factor (VEGF) expression and to promote the construction of a tumour vascular network [55]. However, the present study found no evidence of an aggressive vascular network such as widespread gadolinium enhancement or intra-tumoral arteriovenous shunt. KRAS G12D is reportedly associated with gliosis [56]. Another report suggested that KRAS signalling is essential for the maintenance of glioblastoma in mice, and inhibition of KRAS expression result in tumour apoptosis [57]. These facts proposed that RAS mutation has some effect on glioma maintenance and proliferation, and MAPK / PI3K pathways, which are activated by RAS mutation, have been suggested to be involved in the molecular pathogenesis of glioblastoma [5,6]. Although the higher Ki-67 labelling index in the RAS-mutant gliomas had not been discussed previously, this may reflect the tumour proliferation activities. Some anti-RAS drugs are currently under development [19,20], and these drugs are expected to make contributions to improving the prognosis of RAS-mutant glioma in the near future. In the presented case series, recurrent tumours of case 1 (AO) and case 4 (ganglioglioma) showed no RAS mutations which were shown in their primary tumour. This fact may imply that tumour with RAS mutation was disappeared by treatment. Through direct comparison of the genomic landscape of gliomas at initial diagnosis and recurrence, a previous study showed that full set of mutations found in the initial tumour do not maintain in the recurrences and suggested that recurrent tumours are originate from cells derived at a very early stage of the evolution of tumours [58]. While IDH1 and TERTp mutations, and 1p/19q codeletion assigned as the truncal events during tumour evolution [3], RAS mutations in glioma may be an additional alterations to development. About the primary tumours, Sanger sequencing revealed TP53 mutation in one of these AAs, and methylation assay showed amplification of PDGFRA and loss of CDKN2A/B and TP53 in the other. 
This fact proposed that RAS mutation have a potential to be a driver gene of glioma development, but its effect may be supportive compared with major truncal driver mutations like as IDH mutation, TERTp mutation and 1p/19q codeletion. Because RAS mutation could switch at glioma recurrence, the molecular analysis is thought to be essential for recurrent as well as primary tumours when anti-RAS treatment are conducted. KRAS G12A and KRAS Q61K are present in 0.76 and 0.07% of cases in the Project Genomics Evidence Neoplasia Information Exchange (AACR GENIE) launched by the American Association for Cancer Research [59]. KRAS G12A has been identified in lung, colon, colorectal and rectal adenocarcinoma, and uterine endometrioid carcinoma, while KRAS Q61K has been found in colon, colorectal and pancreatic adenocarcinoma. KRAS G12A and KRAS Q61K are predictive biomarkers for the use of erlotinib, gefitinib, cetuximab, and panitumumab in patients [16-18, 60, 61]. Non-small cell lung carcinoma and colorectal carcinoma have the greatest number of therapies targeting KRAS G12A and KRAS Q61K or related pathways. KRAS E76D has not been reported in other types of cancer, and further study was needed whether if it has a role of an activating mutation. NRAS Q61R is present in 0.73% of AACR GENIE cases [59], and has been identified in cutaneous melanoma, melanoma, papillary thyroid cancer, poorly differentiated thyroid gland cancer, and colon adenocarcinoma [59]. NRAS Q61R is a predictive biomarker for the uses of cetuximab and panitumumab in patients [60,61]. Further, for NRAS-mutant melanoma, binimetinib reportedly improves progression-free survival compared with dacarbazine [62]. Lower grade astrocytomas in our cohort contained a large number of IDH-wild type tumours. This fact partially results from high frequency of TERTp mutation. In our IDH-wild type tumours, 8 out of 18 DAs and 15 out of 32 AAs showed TERTp mutation. Nowadays, IDHwild type astrocytomas with TERTp mutations are known as a group of astrocytomas with poor prognosis, and these tumours are supposed to be a different group from the group of common lower grade astrocytomas [63]. The diagnosis of lower grade astrocytoma without IDH mutation needs further discussion. Conclusions We found 4 RAS mutations in various types of 242 gliomas. All cases involved younger adults. No clear association was identified between RAS mutations and clinical or genetic characteristics of tumours. Clarification of the effectiveness of anti-RAS treatments for gliomas requires further investigations in larger cohorts.
Muscle function and omega-3 fatty acids in the prediction of lean body mass after breast cancer treatment Background Decreased lean body mass (LBM) is common in breast cancer survivors yet currently there is a lack of information regarding the determinants of LBM after treatment, in particular, the effect of physical activity and dietary factors, such as long-chain omega-3 fatty acids (LCn-3) on LBM and LBM function. This cross-sectional study explored associations of LBM and function with LCn-3 intake, dietary intake, inflammation, quality of life (QOL) and physical fitness in breast cancer survivors to improve clinical considerations when addressing body composition change. Methods Forty-nine women who had completed treatment (surgery, radiation and/or chemotherapy) were assessed for body composition (BODPOD), LCn-3 content of erythrocytes, C-reactive protein (CRP), QOL, dietary intake, objective physical activity, 1-min push-ups, 1-min sit-stand, sub-maximal treadmill (TM) test, and handgrip strength. Results After adjustment for age, LBM was associated with push-ups (r = 0.343, p = 0.000), stage reached on treadmill (StageTM) (r = 0.302, 0.001), % time spent ≥ moderate activity (Mod + Vig) (r = 0.228, p = 0.024). No associations were seen between anthropometric values and any treatment, diagnostic and demographical variables. Body mass, push-ups and StageTM accounted for 76.4% of the variability in LBM (adjusted r-square: 0.764, p = 0.000). After adjustment docosahexanoic acid (DHA) was positively associated with push-ups (β=0.399, p = 0.001), eicosapentanoic acid (EPA) was negatively associated with squats (r = −0.268, p = 0.041), with no other significant interactions found between LCn-3 and physical activity for LBM or LBM function. Conclusion This is the first investigation to report that a higher weight adjusted LBM is associated with higher estimated aerobic fitness and ability to perform push-ups in breast cancer survivors. Potential LCn-3 and physical activity interactions on LBM require further exploration. Introduction Loss of lean body mass (LBM) and simultaneous gains in fat mass are amongst the most common side effects following treatment for breast cancer (Mcdonald et al. 2011). This pattern of body composition change is distressing for the survivors and it is related to higher levels of chronic inflammation (Mourtzakis & Bedbrook 2009), and a greater risk for metabolic syndrome (Healy et al. 2010) and its related diseases (Healy et al. 2010;Pierce et al. 2009). A growing literature has established LBM, and in particular skeletal muscle tissue, as an influential organ in hormonal, immune and metabolic function (Pedersen & Febbraio 2012). Lifestyle factors such as physical activity and nutrient intake can enhance LBM size (Irwin et al. 2009) and function, (Courneya et al. 2007;Schmitz et al. 2005) and have also been associated with improved survival (Ibrahim & Al-Homaidh 2010) and quality of life (Mcneely et al. 2006) after treatment for breast cancer. Taken together, LBM is becoming an important marker for women who have been diagnosed with breast cancer. Findings from observational studies have indicated that chemotherapy has been associated with declines of LBM during and after treatment (Cheney et al. 1994;Demark-Wahnefried et al. 1997;Demark-Wahnefried et al. 2001;Gordon et al. 2011;Kutynec et al. 1999), however not all trials have reported LBM loss after chemotherapy (Campbell et al. 2007). 
In contrast, associations between higher LBM and aromatase inhibitor hormonal therapy have been reported in three different data sets (Francini et al. 2006;Montagnani et al. 2008;Van Londen et al. 2011). Modifiable variables such as dietary intake and physical activity have not been extensively explored with regard to LBM change in breast cancer populations. Some evidence exists for an association between decreased physical activity and increased adiposity (Irwin et al. 2005), while mixed results have been reported in relation to dietary intake and adiposity, (Sheean et al. 2012) however a deeper understanding of physical activity, dietary factors and LBM change are needed to better guide clinicians in the post-treatment period. Long chain omega-3 fatty acids (LCn-3) are established as anti-inflammatory agents and have been shown to protect LBM in cancer populations (Dewey et al. 2001;Murphy et al. 2012;Ries et al. 2012;Van Der Meij et al. 2011). However, conclusions from reviews of intervention studies in cancer populations investigating the effect of LCn-3's on LBM have been mixed (Murphy et al. 2012;Ries et al. 2012). Typically, older studies have shown a protective effect for LBM when the appropriate dose of LCn-3 is consumed (Fearon et al. 2006;Fearon et al. 2003). More recent studies investigating 2 g of EPA LCn-3 supplementation in individuals undergoing chemotherapy for non-small cell lung cancer (NSCLC) have shown significantly greater attenuation of LBM and improved levels of intra-muscular triglyceride (IMTG), compared to those not supplementing. (Murphy et al. 2010;Murphy et al. 2011). In non-cancer populations the effect of LCn-3 on LBM has been minimal, with the majority of controlled trials indicating limited clinical effect (Mcdonald et al. 2013b). Recent research has indicated that a greater effect may be seen when LCn-3 s are combined with an anabolic stimulus (Mcdonald et al. 2013b;Rodacki et al. 2012;Smith et al. 2011a;Smith et al. 2011b). Three small, well controlled studies combined LCn-3 supplementation with exposure to an anabolic stimulus, i.e. hyperinsulinaemic/hyperaminoacidaemic clamp or resistance training. Two reported an increased muscle protein synthetic (MPS) response to for young healthy (Smith et al. 2011b), and elderly participants (Smith et al. 2011a), yet LCn-3 alone made no difference to basal MPS. The third study that used resistance training reported increased peak torque development for the supplemented group above that of the group who received the resistance training program only (Rodacki et al. 2012). Considering LBM function, measured by strength or power development, may be more important to health outcomes than absolute values of LBM, (Newman et al. 2006;Ruiz et al. 2008) further investigations are required. Therefore, the objectives of this cross-sectional study was to explore associations of LBM and LBM function in the context of LCn-3 intake, dietary energy and protein intake, inflammation, quality of life (QOL) and parameters of physical fitness and activity in women who had completed breast cancer treatment. A secondary goal was to determine the effect of interactions between tissue content of LCn-3 and markers of physical fitness on LBM after treatment for breast cancer. Study design All participants provided written informed consent. The data presented here was collected as the baseline assessment for a 6-month 3-arm randomized controlled trial (RCT) investigating LBM in women who have completed treatment for breast cancer. 
Detailed rationale study protocol for the full trial has been published previously (Mcdonald et al. 2013a). The study was approved by the Uniting Care (UCH HREC: #1034) and the University of Queensland (#2011000079). Participants Participants were invited to participate through hospital breast cancer oncology centres, radio advertising, social media and breast cancer research registries in Brisbane, Australia. Baseline assessment occurred over one week, which included two visits 7 days apart. Eligibility Women ≥18 years of age; had been diagnosed with early stage breast cancer (Stage 0-IIIa as determined by the American Joint Committee on Cancer Care); had successfully completed surgery, radiation and/or chemotherapy in the last 12 months (participants could be currently receiving endocrine and/or herceptin therapy); were able to perform moderate intensity physical activity, and have a BMI of >20 and <35 kg/m 2 were eligible for enrolment. Participants were excluded if they had presence of metastatic growth or local/distal recurrence of cancer; a diagnosis of cardiovascular disease or diabetes; or, consumed >1 g of eicosapentanoic acid (EPA) and docosahexanoic acid (DHA) LCn-3 s combined per day. Anthropometric variables Height was measured to the nearest 0.5 cm using a stadiometer (Seca). Weight to the nearest 0.1 kg, LBM and fat mass were measured using the BODPOD digital scales and air displacement plethysmography (ADP) pod (BODPOD, COSMED USA Inc), respectively. Before each assessment day, the BODPOD scales and air chamber were calibrated as per the manufacturer's instructions using known weights and volumes. All measures were performed by a certified BODPOD assessor. Results were expressed as percentage LBM and body fat of total weight, then absolute LBM was calculated giving a value in kilograms of LBM. Quality of Life (QOL) QOL was measured using the Functional Assessment of Cancer Therapy-Breast + 4 (FACT-B + 4) tool (Cella et al. 1993). That FACT-F subset of questions was also added to capture participant fatigue. Higher scores are representative of better well-being. Diet history Dietary intake was measured by the practitioner assisted Diet History Questionnaire (Martin 2004). Participants were asked to complete the questionnaire based on their intake over the last month. An accredited practicing dietitian reviewed the questionnaire with the participant to clarify portion sizes and other relevant details. Nutrient analysis was carried out using Foodworks 7 (Xyris Software). Blood analyses Fasting high sensitivity-C Reactive Protein (CRP) was measured using a latex-enhanced immunoturbidimetric assay of blood serum. The 8.5 ml sample of whole blood was collected and analysed for CRP, then frozen at −20°C for transport to Victoria, Australia for fatty acid testing. Lipids from red cells were extracted with chloroform methanol mixture. The fatty acids were trans-esterificated to methyl esters with methylation reagent "Meth-Prep 2". The methylation extract was then analysed by gas liquid chromatography method with flame ionisation detection (gas chromatograph Schimadzu G-2010-FID). The proportion of fatty acids content of the erythrocytes expressed as % of total fatty acids. Muscle function and fitness tests Grip strength was performed on both arms, with the maximum of 3 attempts recorded. Participants were seated with feet flat on floor, shoulder in neutral position with elbow bent at 90 degrees. Upper body muscular strength-endurance was measured using a 1-minute push-up test. 
Participants were asked to perform as many push-ups (knees on ground) as possible in 1 minute (American College of Sports Medicine 2010). Lower body muscular endurance was measured using a 1-minute sit-stand test. The participant was asked to perform as many sit-stand movements as possible in 1 minute. Chair height was standardised at 43 cm height (American College of Sports Medicine 2010). Sub-maximal aerobic capacity was measured using the modified Balke sub-maximal treadmill test. Seated blood pressure was measured before each assessment to ensure safety of exercise (Sharman & Stowasserb 2009). The test being completed when the individual had reached 85% of their estimated maximum heart rate (max HR) (est. maxHR = 220-age). Statistical analysis Baseline characteristics were compared between treatment types and stages of disease using independent samples t tests or ANOVA. Spearman's correlation coefficient was used assess the strength of bivariate associations, % time in moderate and vigorous activity were grouped together into one variable: % time in ≥ moderate activity. To assess the significance of age-and/or weight-adjusted associations between an outcome and a potential predictor, multivariable linear regression was used. Multivariable linear regression was used to model LBM as a function of various markers of fitness while also controlling for total body mass. For missing data, only those with full data sets were included in the models. The variables considered for inclusion in the model were those that were individually associated with LBM after adjusting for age and weight. Markers of fitness were added to the model sequentially, with the order determined by decreasing r-values. A predictor was only retained in the model if its coefficient was significantly different from zero at the 0.05 level. Adjusted R-squared was used to compare nested models. Models were also fitted that included interaction terms that explored the respective LCn-3 indices combined with fitness markers on LBM. Results Participants were recruited over a 15-month period (Oct 2011 -Jan 2013). A total of 135 women were initially screened for inclusion criteria. The major reasons for exclusion were >12 months post treatment completion and daily consumption of >1000 mg EPA and DHA combined. Forty-nine participants were eligible for the study and completed baseline assessment. Descriptive statistics of the population are shown in Table 1. Compared to those who did not have radiation therapy, DHA values (t = 2.904; p = 0.016) and LCn-3: LCn-6 (t = 3.06; p = 0.004) ratios were higher for those who underwent radiation therapy. Otherwise, radiation therapy was not associated with markers of body composition, QOL, dietary intake, LBM function, endurance or physical activity. Individuals taking tamoxifen tended to have lower EPA content compared to those taking AIs or no hormonal therapy (0.78% vs. 1.16% & 1.23%; F = 3.153, p = 0.054), however, there was no evidence to support an association between hormonal treatment and other markers of body composition, QOL, dietary intake, LBM function or physical activity. Associations between LBM and dietary intake, inflammation, physical activity, markers of fitness and quality of life LBM was positively correlated with daily intake of total energy (r = 0.301, p = 0.036) and protein (r = 0.464, p = 0.001), and negatively correlated with higher squat test results (r = −0.39, p = 0.006) ( Table 2). 
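To make the adjustment and model-building steps described in the statistical analysis concrete, a minimal sketch is given below. It is not the study's code: the data file and column names (e.g. lbm_kg, pushups_per_min) are hypothetical placeholders. The weight- and age-adjusted associations reported next follow this general scheme.

```python
# Minimal sketch of the weight-adjusted, sequential model-building procedure
# described in the statistical analysis. Hypothetical file and column names;
# not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("baseline_assessments.csv")  # hypothetical data set

# Fitness markers ordered by decreasing bivariate r with LBM (assumed order)
candidates = ["pushups_per_min", "treadmill_stage", "pct_time_mod_vig"]

kept = ["weight_kg"]  # total body mass is always controlled for
model = smf.ols("lbm_kg ~ " + " + ".join(kept), data=df).fit()

for marker in candidates:
    trial = smf.ols("lbm_kg ~ " + " + ".join(kept + [marker]), data=df).fit()
    # Retain the marker only if its coefficient differs from zero at the 0.05 level
    if trial.pvalues[marker] < 0.05:
        kept.append(marker)
        model = trial

print(kept, round(model.rsquared_adj, 3))  # final predictors and adjusted R-squared
```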
However, after adjusting for weight and age, the only significant associations with LBM were % time spent in ≥ moderate intensity activity (ß: 0.228, p = 0.024), number of push-ups performed (ß: 0.343, p = 0.000) and treadmill stage completed (ß: 0.302, p = 0.001) ( Table 2). CRP was positively correlated with body fat %, waist and hip however, these associations were no longer significant after controlling for total body weight (data not shown). Associations between LCn-3 and anthropometric indices, inflammation & quality of life after breast cancer treatment No significant correlations were identified between absolute LBM or % LBM for total RBC n-3, ratio of AA: EPA or % RBC content of EPA or DHA (Table 3). No significant relationships were found between any other anthropometric variables and n-3 related values. No significant correlations were identified between CRP and erythrocyte LCn-3. No markers of body composition, CRP or indices of LCn-3 intake were significantly correlated with either measure of QOL. Predictors of LBM in women soon after breast cancer Number of push-ups, StageTM, and mod + vig activity were considered for inclusion in a weight-adjusted linear regression model for LBM (Table 2). Table 4 shows coefficients for the variables included in the final model. Table 4 also shows the value of adjusted R-squared obtained as each variable was successively added to the model. Mod + vig was not retained in the final model because the coefficient was not significantly different from zero (β=0.115, p = 0.177) in the presence of the other predictors. The model including weight, push ups and StageTM explained 76.4% of the variation in absolute LBM (Table 4). Interactions of physical activity and indices of LCn-3 intake on markers of LBM function The number of push-ups performed was positively correlated with time spent in ≥ moderate intensity activity (r = 0.467; p = 0.001), total n-3 levels (r = 0.385; p = 0.012) and DHA levels (r = 0.517, p = 0.000) ( Table 5). The correlation with total n-3 levels was no longer statistically significant after adjusting for DHA. DHA maintained a significant association after adjusting for age, weight, LBM and % time > mod activity (β=0.399, p = 0.001) % ≥. Mod activity remained a significant predictor (F-Test: 8.95, p = 0.005) of the number of push-ups performed in one minute after adjusting for DHA, age, weight and LBM. There were no significant interactions between RBC LCn-3 and time spent in any intensity of activity for any of the regression models of physical function (data not shown). Discussion This paper reports a positive relationship between LBM (adjusted for total weight) and physical function represented by the % time spent in ≥ moderate intensity physical activity, stage achieved on sub-maximal treadmill test and number of push-ups completed. To the authors' knowledge, this is the first study to determine associations between physical function and body composition in women who have completed treatment for breast cancer. Our results agree with previous cross-sectional and prospective cohort studies, which have shown that decreasing physical activity levels are associated with greater adverse body composition change, (Irwin et al. 2003;Irwin et al. 2005) while dietary measures (Demark-Wahnefried et al. 2001) have been less predictive of these changes. The findings relating to the influence of chemotherapy on LBM agree with two previous studies (Campbell et al. 2007;Winters-Stone et al. 
2009) but are in contrast to five studies that have shown a greater decrease in LBM after chemotherapy (Cheney et al. 1994;Demark-Wahnefried et al. 1997;Demark-Wahnefried et al. 2001;Gordon et al. 2011;Kutynec et al. 1999). In addition, Prado et al. reported that individuals with chemotherapy toxicity had a greater risk of sarcopenia (Prado et al. 2009). Differences in our results may be due to the cross-sectional nature of the study. Previously published data sets indicating LBM change after chemotherapy and hormonal therapies were prospective in nature (Cheney et al. 1994;Demark-Wahnefried et al. 1997;Demark-Wahnefried et al. 2001;Gordon et al. 2011;Kutynec et al. 1999) and were able to see trends over time. No associations were found between erythrocyte LCn-3 and markers of body composition. Recent studies in populations during and post-chemotherapy treatment have indicated a positive relationship between skeletal muscle mass and plasma phospholipid LCn-3 content (Murphy et al. 2010;Murphy et al. 2011), however these participants experienced significant and rapid muscle loss during treatment. After early stage breast cancer treatment, the rate and magnitude of muscle loss experienced is not typically as high as when compared to more advanced staged cancers (Mcdonald et al. 2011;Murphy et al. 2010). As a result, our results are comparable to metabolic/obese populations undergoing similar body composition change (Krebs et al. 2006;Noreen et al. 2010;Storlien et al. 2001). Total body mass, push-ups performed in one-minute, and stage completed on treadmill remained in the final model accounting for 76% of the variation in LBM. These results are of interest as they indicate an association with physical function and healthier body composition. Specifically, the strength of association with number of push-ups/minute as opposed to squats may indicate the importance of whole body resistance training to maintain or achieve a higher LBM and lower fat mass. A decrease in sports/recreational exercise has been previously associated with an increase in adiposity however, LBM change was not reported (Irwin et al. 2005). It is possible that those who performed more push-ups due to an increase in relevant exercise training may also be more conscientious in regards to dietary intake, however no association was found in this study. Both erythrocyte DHA and EPA content were associated with markers of physical function, surprisingly in positive and negative directions, respectively. DHA was strongly and independently associated with the ability to perform push-ups, while erythrocyte EPA content was negatively associated with squats performed. In addition, assessing predictive models for push-up performance, when%time ≥ moderate physical activity was added to the DHA model, a greater effect was seen. In contrast, EPA content remained significantly negatively associated with squats performed. Previous studies have indicated an increase in muscle protein synthesis (Smith et al. 2011a; Smith et al. 2011b) and peak torque development (Rodacki et al. 2012) after supplementation of LCn-3 s was combined with an anabolic stimulus. In advanced cancer populations, EPA LCn-3 supplementation (often in conjunction with a protein-rich supplement) has been associated with improvements in physical function (Moses et al. 2004) and strength (Fearon et al. 2006), while EPA and DHA LCn-3 + NSAIDs have been shown to improve handgrip strength (Cerchietti et al. 2007). 
Our results both agree and disagree with the previous literature, with no clear reason for the opposing directions of the associations between physical function, DHA and EPA. Further investigation into LCn-3 and physical activity interactions is required. Our population compared favourably with larger cohorts for body composition (Chlebowski et al. 2002; Irwin et al. 2005) and education level (Irwin et al. 2005); however, the exclusion of those with a diagnosed chronic disease (T2DM or CVD) and those who could not participate in moderate physical activity may have led to our participants being younger and more physically active than the general breast cancer population.
Conclusion
This is the first study to report that higher weight-adjusted LBM is associated with greater upper body strength-endurance and aerobic fitness in women after completion of treatment for breast cancer. Further research is required to elucidate LCn-3-exercise interactions.
Modifiable Individual Risks of Perioperative Blood Transfusions and Acute Postoperative Complications in Total Hip and Knee Arthroplasty Background: The primary aim of this study was to identify modifiable patient-related predictors of blood transfusions and perioperative complications in total hip and knee arthroplasty. Individual predictor-adjusted risks can be used to define preoperative treatment thresholds. Methods: We performed this retrospective monocentric study in orthopaedic patients who underwent primary total knee or hip arthroplasty. Multivariate logistic regression models were used to assess the predictive value of patient-related characteristics. Predictor-adjusted individual risks of blood transfusions and the occurrence of any perioperative adverse event were calculated for potentially modifiable risk factors. Results: 3754 patients were included in this study. The overall blood transfusion and complication rates were 4.8% and 6.4%, respectively. Haemoglobin concentration (Hb, p < 0.001), low body mass index (BMI, p < 0.001) and estimated glomerular filtration rate (eGFR, p = 0.004) were the strongest potentially modifiable predictors of a blood transfusion. EGFR (p = 0.001) was the strongest potentially modifiable predictor of a complication. Predictor-adjusted risks of blood transfusions and acute postoperative complications were calculated for Hb and eGFR. Hb = 12.5 g/dL, BMI = 17.6 kg/m2, and eGFR = 54 min/mL were associated, respectively, with a 10% risk of a blood transfusion, eGFR = 59 mL/min was associated with a 10% risk of a complication. Conclusion: The individual risks for blood transfusions and acute postoperative complications are strongly increased in patients with a low preoperative Hb, low BMI or low eGFR. We recommend aiming at a preoperative Hb ≥ 13g/dL, an eGFR ≥ 60 mL/min and to avoid a low BMI. Future studies must show if a preoperative increase of eGFR and BMI is feasible and truly beneficial. Introduction The probabilities of blood transfusions and perioperative complications in total joint arthroplasty (TJA) are highly influenced by patient-related risk factors such as age and comorbidities [1,2]. To lower the patients' individual risks by a target-orientated preoperative treatment, it is essential to identify potentially modifiable risk factors. A frequently reported modifiable risk factor is a low preoperative haemoglobin (Hb) concentration, which is not only associated with a higher rate of blood transfusions but also renal, cardiac and wound-related complications [3]. Since a low Hb concentration can often be successfully treated, an anaemia screening and the treatment of preoperative anaemia has become a foremost aim in the run-up for TJA [4]. However, it remains unclear which minimum Hb concentration should be aimed at to effectively reduce the rate of blood transfusions and perioperative complications. In particular, previous studies are controversial if female sex is an independent risk factor [5,6] and if different Hb thresholds should be used in male and female patients [7]. While a too high threshold may lead to a higher transfusion rate, a too low threshold may result in a treatment of non-diseased asymptomatic patients who are by definition not ill, which is not only a relevant cost factor but also a medicolegal dilemma. Therefore, we performed this study to investigate the predictive value of the preoperative Hb concentration and other patient-related risk-factors for blood transfusions and acute postoperative complications. 
On the basis of this data, we aimed to define useful target values of a preoperative treatment. Methods We performed this monocentric retrospective study after approval of the local ethics committee (Ethikkommission der Medizinischen Fakultät der Universität Würzburg, application number AZ-2018071001) and completed registration at the German register for clinical studies (Deutsches Register Klinischer Studien, registration number DRKS00015219). Data Collection We collected the data of all patients that underwent elective total hip and bicondylar knee arthroplasty between January 2016 and December 2018 in a single orthopaedic university hospital. Demographic, anamnestic and clinical data were collected retrospectively from the hospital's information technology system (ORBIS, Agfa Healthcare GmbH, Bonn, Germany). Data about the use of allogeneic and autologous blood transfusions were crosschecked using hard copy records. Of the demographic and anamnestic data, we recorded the patient's age, sex, height and weight as well as the preoperative medication, daily consumption of alcohol or nicotine and comorbidities. Of the preoperative clinical data, we recorded the American Society of Anaesthesiologists (ASA) status and the lab values of the last blood sample before undergoing surgery, including c-reactive protein (mg/dL), haemoglobin concentration (g/dL), haematocrite (%), mean corpuscular volume (MCV) (fl), platelet count (10 3 /µL), creatinine (µmol/L), estimated glomerular filtration rate (eGFR) (mL/min) Quick (%) and partial thromboplastin time (PTT) (sec). Of the intraoperative data, we collected the type of anaesthesia, the type of surgery, the use of tranexamic acid, duration of surgery, the use of drains and the use of an autologous re-transfusion system (cell saver). Outcome Measures The administration of at least one allogeneic red blood cell (RBC) transfusion was the primary outcome of this investigation. According to the hospital's guideline, an Hb concentration <6 g/dL was an unconditional trigger for RBC transfusions. A Hb concentration <8 g/dL was a conditional trigger regarding the patient's individual resources and clinical symptoms. Blood transfusions in patients with a Hb concentration >8 g/dL were well-founded exceptions. The secondary outcome measure was the occurrence of an adverse event during the patient's stay in hospital that required an unexpected change of treatment. Statistical Analysis Statistical analysis was performed on deidentified data using SPSS Statistics 26 (IBM inc., Armonk, NY, USA). To investigate the predictive power of the preoperative characteristics, we calculated logistic regression models for both outcome measures. To include only the strongest predictors, we modified the inclusion and exclusion criteria for the regression models. To predict an allogeneic RBC transfusion, a variable was included in the model when its inclusion improved the model fit with a significance of p ≤ 0.01. Variables were removed from the model, when adding of further variables reduced the significance of the variable dependent improvement to p ≥ 0.05. To predict a complication, a variable was included in the model when its inclusion improved the model fit with a significance of p ≤ 0.05. Variables were removed from the model when adding of further variables reduced the significance of the variable dependent improvement to p ≥ 0.1. 
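To illustrate the entry criterion just described (a candidate enters the transfusion model only when it improves the model fit at p ≤ 0.01), a minimal forward-selection sketch is shown below. It is not the study's code: the cohort file and variable names are hypothetical, and the published procedure additionally applied the stated removal criterion.

```python
# Illustrative forward selection for a logistic transfusion model.
# Hypothetical cohort file and column names; not the study's actual code.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

cohort = pd.read_csv("tja_cohort.csv")  # hypothetical cohort export
candidates = ["hb_g_dl", "bmi", "egfr_ml_min", "age", "asa_status"]

selected = []
current = smf.logit("transfusion ~ 1", data=cohort).fit(disp=0)

for var in candidates:
    trial = smf.logit("transfusion ~ " + " + ".join(selected + [var]),
                      data=cohort).fit(disp=0)
    # Likelihood-ratio test: does adding `var` improve model fit at p <= 0.01?
    lr = 2 * (trial.llf - current.llf)
    if chi2.sf(lr, df=1) <= 0.01:
        selected.append(var)
        current = trial

print(selected)
print(current.summary())
```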
In case the logistic regression models revealed a potentially modifiable risk factor for the outcome measures, we calculated a monovariate logistic regression model for the respective risk factor to receive a continuous estimation of the predictor-adjusted risks. Allogeneic red blood cell (RBC) transfusions were applied in 179 (4.8%) patients. Adverse events during the patient's stay in hospital were recorded in 239 (6.4%) patients. Predictor-adjusted risks of blood transfusions were calculated for preoperative Hb concentration, BMI and eGFR (Figure 1a-c). Predictor-adjusted risks of acute postoperative complications were calculated for eGFR ( Figure 1). The individual transfusion probability exceeded 10% in patients with a Hb concentration < 12.6 g/dL, BMI < 17.9 kg/m 2 or eGFR < 55 mL/min. The individual risk for an acute postoperative complication exceeded 10% in patients with an eGFR < 60 mL/min. The individual risk for an acute postoperative complication exceeded 10% in patients with an eGFR < 60 mL/min. Discussion This study was performed to investigate potentially modifiable patient-related risk factors for blood transfusions and acute postoperative complications in total joint arthroplasty. Our results show that numerous patient-related characteristics are potential risk factors for blood transfusion ( Table 2). Some of the identified characteristics are non or hardly modifiable such as the patient's age and comorbidities, which were in our study as cardiovascular diseases, haemophilia, low thrombocytes and ASA status. Provided that all patients receive adequate treatment of their comorbidities, these characteristics are non-modifiable unless by choosing an earlier time-point of surgery in the patient's life. An earlier surgery might not only reduce the patient's age but also age-related diseases [10], contributing to a lower transfusion rate. This is in line with the results of previous studies, which showed that younger and healthier patients have a lower risk for blood transfusions [6,11]. In our study, we focused on potentially modifiable predictors of blood transfusions such as a low body mass index (BMI), a low estimated glomerular filtration rate (eGFR) and a low haemoglobin (Hb) concentration. Our results confirm the results of a previous study that a high body mass index is a protective factor for blood transfusions in major surgery [12]. Since a high body weight is associated with a high blood volume [13] but not with a high blood loss [14], patients with a high BMI undergo a relatively low blood loss and are therefore less prone to blood transfusions [12]. However, considering the negative effects of a high BMI [15] we recommend avoiding not only malnutrition but also obesity if a long-term preparation for TJA is possible. In contrast, a low Hb concentration is a highly predictive factor of a blood transfusion that can also be addressed by a short-term preoperative treatment [16] using iron supplementation [17,18] and erythropoietin [19]. However, it remains unclear which Hb concentration is an optimum threshold for an efficient preoperative treatment, not least because of sparse data about the effect of iron supplementation in non-anaemic patients [20]. Different thresholds have already been proposed by previous studies [6,7]. As the surgical approaches and blood sparing techniques might differ between medical centres, it was our aim to find useful thresholds for our standard techniques in primary hip and knee arthroplasty. 
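The predictor-adjusted risk curves described above can be inverted to read off the predictor value at which the modelled risk crosses a chosen level, here 10%. The sketch below is purely illustrative: the logistic coefficients are hypothetical values chosen only so that the example lands near the reported 12.6 g/dL threshold, not estimates from the study data.

```python
# Inverting a monovariate logistic risk model at a 10% risk level.
# Coefficients are hypothetical, for illustration only.
import numpy as np

def risk(x, b0, b1):
    """Predictor-adjusted probability from a monovariate logistic model."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

def threshold(target_risk, b0, b1):
    """Predictor value at which the modelled risk equals `target_risk`."""
    return (np.log(target_risk / (1.0 - target_risk)) - b0) / b1

b0, b1 = 8.0, -0.81                # hypothetical intercept and Hb coefficient
print(threshold(0.10, b0, b1))     # Hb (g/dL) at which transfusion risk reaches ~10%
```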
An important finding was that the patient's sex and type of TJA were not significant predictors of blood transfusions. Therefore, we calculated the Hb-adjusted probability of blood transfusions regardless of the patient's sex and type of TJA (Figure 1). To illustrate the difference in the Hb-adjusted risks of male and female patients, we added the separated curves to Figure 1. Although in our study the overall rate of transfusions was under 5%, our results showed that many patients undergo a much higher individual probability of a blood transfusion. For instance, if the Hb concentration was 12.5 g/dL or less, the individual transfusion risk exceeded 10% (Figure 1). In our opinion, it is not appropriate to use the WHO criteria of anaemia [21] to decide whether a patient should receive a preoperative treatment or not. In line with previous results [22], we recommend aiming at a Hb concentration of at least 13 g/dL in the run-up for primary TJA in men and women. Another finding of our study showed that the preoperative estimated glomerular filtration rate (eGFR) was a predictive factor of blood transfusions as well, which has been reported before [6,23]. Possibly, a low eGFR is not an independent risk factor but is only associated with a risk factor that we did not account for. However, we considered numerous patient-related characteristics (Table 1) to calculate our logistic regression model. Therefore, impaired renal function is likely a true risk factor for blood transfusions. We calculated an eGFR-adjusted individual probability of blood transfusions that showed a value of at least 60 mL/min is associated with an individual transfusion probability under 10%. On the basis of the Hb concentration (13 g/dL), BMI (22 kg/m 2 ) and eGFR (60 mL/min) thresholds, we created "high-risk" groups to illustrate the impact on the transfusion rate ( Table 3). The results show that patients with a Hb under 13 g/dL have an eight-fold higher risk of a transfusion and patients with an GFR under 60 mL/min a 5-fold and patients with a BMI under 22 kg/m 2 an almost 3-fold higher risk of a blood transfusion. Future studies must show if a target orientated preoperative treatment in such patients results in a higher Hb concentration, BMI and eGFR to lower the transfusion rate. Our results further showed that a low eGFR is not only associated with a higher rate of blood transfusions but also with a higher risk of acute postoperative complications, mainly acute renal insufficiency but also cardiac complications and a higher risk of falling. According to the individual eGFR-adjusted risks, patients with an eGFR < 60 mL/min had an individual risk for a complication of more than 10% while the overall risk in our population was 6.4%. In patients with an eGFR under 40 mL/min, the risk of a complication even exceeded 20%. This is in line with sparse previous results that reported renal insufficiency as a risk factor for complications [23]. Probably our rather sensitive outcome considering also mild adverse events resulted in a clearer identification of a low eGFR as a risk factor. We recommend screening every patient for low eGFR in the run-up for TJA. At least in patients with an eGFR < 60 mL/min, a diagnostic work-up should be initiated. An increase of the preoperative eGFR might result in a decrease of transfusions and perioperative complications. 
Probably, at least a small portion of patients will benefit from such a ''patient kidney management", as some types of renal insufficiency might be modifiable by a higher intake of water [24,25] or an improved treatment of cardiovascular and renal diseases [26,27]. Such management to improve and protect kidney function should not only call for preoperative but also intra-and postoperative measures such as controlling renal perfusion [27], avoiding nephrotoxic medication and recognizing acute renal dysfunction as soon as possible. In future, some medication might contribute to kidney protection, but evidence is missing [28]. Interestingly, the preoperative Hb concentration was not a significant predictive factor for acute complications in our multivariate logistic regression model although preoperative anaemia is a frequently reported risk factor for complications [3,29]. This finding suggests that in some patients not the preoperative Hb concentration itself but associated characteristics such as higher age and ASA status are the underlying cause for acute postoperative complications [30]. Limitations Several limitations of this retrospective study must be addressed. Due to its retrospective nature, we do not know if the potentially modifiable risk factors are truly modifiable. Second, we do not know if a treatment of these risk factors, even if it changes the risk factor's value, truly lowers the risk of blood transfusions and complications. However, at least for a low Hb concentration, previous studies have already shown that a successful treatment results in a lower transfusion rate [17]. Another limitation of our study is that the uses of tranexamic acid, drains and cell savers were left at the discretion of the responsible anaesthesiologist and surgeon. This results in a high variability of the individual treatment and therefore reduces the accuracy of predictor-adjusted individual risks. To estimate a patient's individual risk for a transfusion or complication, many more than the potentially modifiable risk factors investigated here must be considered. Moreover, to address the high variability of the investigated characteristics, multi-centre studies with a high number of patients are indispensable. Follow-up studies are crucial for investigating the modifiability of Hb, BMI and eGFR and the usefulness of the thresholds recommended here. These thresholds will have to be updated regularly based on their effect on changes in rates of transfusions and complications. In addition, further development in preoperative treatment as well as in surgical and anaesthesiologic techniques will have to be regarded. Conclusions Our results confirm that a low Hb concentration is a main risk factor for blood transfusion. Men and women with a preoperative Hb concentration <13 g/dL undergo an 8-fold higher risk of blood transfusions, and the individual risk for a blood transfusion exceeds 10% in patients with a preoperative Hb of less than 12.6%. We recommend aiming at a minimum preoperative Hb concentration of 13 g/dL. The preoperative estimated glomerular filtration rate (eGFR) is also a significant risk factor for blood transfusions but also for acute postoperative complications. Patients with an eGFR under 60 mL/min had a five-fold higher risk of transfusions and three-fold higher risk of complications. Future studies must show if it is possible to increase the eGFR in the run-up for total joint arthroplasty and if the transfusion and complication rates can be further reduced. 
The third modifiable risk factor for blood transfusions was a low BMI. Therefore, malnutrition should also be addressed during "prehabilitation" for total joint arthroplasty.
Association between sleep disorders and headache in adolescents: systematic review
Objective: The aim of this study is to analyze the association between sleep disorders and headache in adolescents. Materials and Methods: A systematic literature review was carried out, analyzing articles indexed in the National Library of Medicine (Medline/PubMed), Scientific Electronic Library Online (SciELO), Cochrane Library and Scopus. The inclusion criteria were as follows: articles in English, with available abstracts; articles that answered the guiding question; and articles covering the age group of 10 to 19 years old. The advanced search used the descriptors "Adolescents", "Sleep disorder", "Headache", "Quality of life", "Poor sleep quality" and their synonyms recognized by the MeSH and DeCS vocabularies. Results: Of the 3,386 articles found, 2,318 were selected to have their titles and abstracts read. Among these, 41 were selected for reading in full, resulting in the selection of 10 studies to be included in this review. Conclusions: It is concluded that sleep disorders are associated with headache in adolescents, this association being a complex and bidirectional phenomenon that does not allow one to clearly distinguish which condition appears first.
Introduction
Chronic pain is a public health problem that generates both personal and social losses (Abu-Arafeh et al., 2010). Chronic headache is disabling and capable of negatively impacting the quality of life of individuals, interfering with daily activities and school performance and generating high costs to the health system (Heyer et al., 2014; Silva et al., 2015; Souza et al., 2015; Pimentel et al., 2020). Headache is one of the most common complaints during adolescence. In their epidemiological studies carried out in several countries, Abu-Arafeh et al. (2010) and Wober-Bingol (2013) found headache prevalence levels of 58.4% and 54.4%, respectively, in this age group. According to the International Classification of Headache (2014), the painful condition can be categorized as primary headache; secondary headache; and painful cranial neuropathies, as well as other facial pain and other forms of headache. Primary headaches are the most common form of headache found in the pediatric population, being divided into: migraine; tension-type headache; trigeminal-autonomic headaches; and other primary headaches. In primary headaches, sleep disorders are a common comorbidity.
However, the number of studies addressing this relationship is still small (Heyer et al., 2014). According to Torres-Ferrus et al. (2018), adolescents who experience episodes of primary headache have shorter sleep duration and other changes, such as insomnia and excessive daytime sleepiness. Sleep deprivation is considered a trigger for headache, especially primary migraine-type headache (Knezevic-Pogancev et al., 2014), and is associated with a lower sense of well-being and impaired daytime functioning, as well as reduced school performance and attendance, directly interfering with learning (Knezevic-Pogancev et al., 2014; Turco et al., 2011). Despite this, it is known that the interplay between primary headache and sleep disorders is complex, making it difficult to perceive which condition presents itself first. The presence of altered sleep patterns may trigger headaches, whereas headache is seen as a factor that generates changes in sleep (Heyer et al., 2014). Accordingly, this study aims to investigate the association between sleep disorders and headache. Therefore, an integrative literature review was carried out as a research strategy, since this approach provides the identification of existing evidence in the literature related to the topic in question. The present study can assist in planning actions aimed at adolescent health.
Methodology
This review followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines and was conducted through the analysis of articles on the theme "Sleep disorders and headache in adolescents". The present study included articles available in the following databases: National Library of Medicine (Medline/PubMed), Scientific Electronic Library Online (SciELO), Cochrane Library and Scopus. The PICO question guiding the research was the following: "Are poor sleep quality/sleep disorders associated with headache in adolescents?", with P: adolescents; I: presence of sleep disorder; C: absence of sleep disorder; O: headache. To refine the choice of articles, the following inclusion criteria were established: articles in English; abstracts available in the cited databases; articles that answered the guiding question; and studies covering the age group between 10 and 19 years old. During the selection, studies duplicated across the databases were excluded, as well as those classified as literature reviews, clinical trials and case reports and those not in an article format, such as editorials, guidelines, letters, conference summaries, theses and dissertations. A peer review was performed, that is, the articles were analyzed by two researchers individually (M.A.C.S. and S.S.S.) for further joint analysis, following the inclusion and exclusion criteria. In case of any divergence, a third reviewer was consulted (M.V.H.) until a consensus was reached, in order to bring greater reliability to the selection of the synthesis studies. Thus, the first stage consisted of reading the titles and abstracts of the articles. Those whose titles and abstracts did not address the researched subject were excluded.
The studies that met the inclusion criteria were selected to have their full text read, resulting in the selection of articles included in this synthesis. Results and Discussion The search strategies used in this review were adapted according to the access specificities of each database. By crossing the descriptors in the databases, 3.386 articles were found. Of these, 802 were found in PUBMED, 268 in Cochrane, 2.316 in Scopus and 0 in Scielo. The next phase included the exclusion of 1.068 duplicate studies between the bases, with 2.318 remaining to have their titles and abstracts screened. In this step, 2.277 studies were excluded, as they did not present titles and abstracts that addressed the researched topic, totaling 41 articles for reading the full text. After reading in full, 10 studies remained and were included in the synthesis (Figure 1). In the next stage, a data analysis and synthesis were carried out. The selected articles were organized in Table 1 according to the author, year of publication, country of origin, study design, sample, age group (years old) and results of the study. Research, Society and Development, v. 9, n. 10, e1019108247, 2020 (CC BY 4. The present study aimed to verify the association between sleep disorders and headache in adolescents. Both sleep disorders and headache are conditions that damage the quality of life of adolescents, as they prevent them from performing their daily tasks, also interfering in their performance (Heyer et al., 2014;Silva et al., 2015). It is observed that most of the studies included in this synthesis took place in North America, more precisely in the United States (Gilman et al., 2007;Heyer et al., 2014;;Pecor et al., 2015;Kemper et al., 2016;Ming et al., 2016;Lateef et al., 2019). None of the studies were conducted in Brazil. It was found that 80% of the studies have a cross-sectional design (Gilman et al., 2007;Grupta et al., 2008;Pecor et al., 2015;Kemper et al., 2016;Ming et al., 2016;Ming et al., 2016;Torres-Ferrus et al., 2018), thus, only demonstrating the association between the variables and preventing the cause and effect verification, only allowing to raise certain hypotheses. Therefore, the conclusions drawn from these studies should be cautiously considered. Adolescence is a period marked by several biological, social and psychological changes (Lima et al., 2014) Therefore, conducting longitudinal studies would be useful to identify critical factors which are characteristic of this age group, as well as their relationship with sleep disorders and headache, considering the impact on quality of life. The study by Knezevic-Pogancev et al. (2014), with a longitudinal design, showed that the age group directly influences the development of headaches, with insomnia being a trigger reported both in adolescents suffering from migraine (90.6%) and in those who had other types of primary headache (94.5%). The opposite was found in the study by Heyer et al. (2014), who demonstrated headache intensity (p<0.009) and its onset time (p<0.001) as predictive factors for sleep impairment. This divergence is present in the literature, since headache and sleep disorders interact in a bidirectional manner 2 , through a common pathophysiological substrate (Sousa & Rosato, 2011). In 50% of the studies included in the review, headache was seen as a predictive factor for the shorter duration of total sleep time (Gilman et al., 2007;Gruota et al., 2008;Ming et al., 2016;Torres-Ferrus et al., 2018;Lateef et al., 2019). 
According to Geib (2007), sleep is influenced by several modulators, including headache, categorized as an organic modulator, which leads to the installation of inappropriate sleep habits linked to night-time awakenings and latency time, resulting in shorter total sleep duration. A shorter total sleep duration leads to excessive daytime sleepiness (EDS). According to Pecor et al. (2015), adolescents who suffer from headaches have greater daytime sleepiness when compared to healthy adolescents. It is suggested that the presence of headache leads to longer latency and more night-time awakenings, resulting in a reduction in the total amount of sleep. According to Bittencourt et al. (2005), sleep deprivation is among the causes of EDS. The study by Gilman et al. (2007) corroborates this observation: their results show an association between more severe headache and greater sleep latency. In the study by Moschiano et al. (2012), individuals who reported more frequent and painful episodes of headache presented sleep disturbances. Although the study did not report which sleep disorders were found, it is clear that there is an association between the variables. According to Nunes (2002), insomnia, a type of sleep disorder, is related to several chronic diseases. Since primary headache is a chronic condition that modulates sleep habits, it can be suggested that it is a factor associated with insomnia. In the present review, only one study found no association between the variables (Kemper et al., 2016). This result may be explained by the sample used in the research, since only 29 adolescents were included, which is not a representative sample of the population. However, the depression variable was significantly associated with headache: adolescents who had depression often developed headaches, as demonstrated by the HIT-6 headache test (p = 0.0006). Taking into account that melancholic depression, also called typical depression, is associated with an altered sleep pattern in which the individual tends to present insomnia (Dalgalarrondo, 2019), this comorbidity may be a confounding factor in the present study, since it cannot be distinguished whether the insomnia resulted from the headache or from the depressive disorder.
Final Considerations
It is concluded that there is an association between sleep disorders and headache in the studied population. Among the types of headache, the most common found in adolescents are primary headaches, with migraine and tension-type headache frequently associated with insomnia, night-time awakenings and longer sleep latency.
Asymptotically flat vacuum solution for a rotating black hole in a modified gravity theory The theory of f(R)-gravity is one of the theories of modified Einstein gravity. The vacuum solution, on the other hand, of the field equation is the solution for black hole geometry. We establish here an asymptotically flat rotating black hole solution in an f(R)-gravity. This essentially leads to the modified solution to the Kerr black hole. This solution exhibits the change in fundamental properties of the black hole and its geometry. It particularly shows that radii of marginally stable and bound orbits and black hole event horizon increase compared to those in Einstein gravity, depending on the modified gravity parameter. It further argues for faster spinning black holes with spin (Kerr) parameter greater than unity, without any naked singularity. This supports the weak cosmic censorship hypothesis. I. INTRODUCTION General relativistic gravity of Einstein turns out to be a remarkable discovery to explain a range of astrophysical sources, apart from its theoretical integrity, even after more than 100 years of its original discovery. Eventually, all the predictions of Einstein's gravity proved to be correct, particularly after the direct detection of gravitational wave in 2015 [1]. In fact, the said discovery could be considered 'three in one': direct confirmation of gravitational wave, spinning black hole and binary black hole. Although to understand coalescence of, e.g., black holes and to probe the underlying gravitational radiation, strong field general relativity (GR) or numerical relativity is indispensable, most of the direct tests of GR are done based on weak field approximation. Therefore, the global validity of GR in the strong field regime, i.e. the true nature of gravity close to the source of gravity, remains questionable. Hence, no one can rule out possible modification to GR in natural systems, particularly when the theory is asymptotically flat. Asymptotic flatness assures reduction of modified GR to GR and to Minkowskian with distance from the source. Therefore, even if close to the source, i.e. a compact object like black hole, neutron star, actual gravitational theory is modified GR, the same theory will be able to explain any solar-system based or Earthbased experiment. One such example of modified GR is the theory of f(R)-gravity [2,3], which was explored to explain sub-and super-Chandrasekhar limiting mass white dwarfs in a unified theory, what GR as such could not. They are possibly leading to under-and over-luminous type Ia supernovae under the same model framework. Recently, we also established an asymptotically flat vacuum solution, unlike that for a white dwarf, of f (R)gravity in spherical symmetry [4]. This is essentially a modified solution for the Schwarzschild, hence nonrotating, black hole. We showed that depending on the modified gravity parameter, various basic characteristics of the black hole, e.g. marginally stable and bound circular orbits, event horizon etc., change. We also showed that for a very hot accretion flow, critical/sonic point location changes in modified GR. There are other explorations of black hole in modified GR as well [5][6][7]. However, most of the cosmic objects are rotating, hence more realistic, at least in general, black holes are expected to be rotating. The same goes with other compact objects described by non-vacuum solutions. What if, a black hole is rotating in modified GR, more precisely in f (R)-gravity? 
In other words, how the Kerr solution changes in the f (R)-gravity? In this work, we establish an asymptotically flat solution for a rotating black hole in modified GR. In place of obtaining a solution from the appropriate Einstein action for a modified GR, we rely on the Newman-Janis algorithm (NJA) [8]. We know that based on NJA the Kerr black hole solution can be derived from the Schwarzschild solution by making an elementary transformation involved with complex numbers. The basic idea is, as if due to the choice of coordinates combining realistic coordinates and metric parameters, the Kerr metric appeared to be diagonal and also spherical symmetric, like the Schwarzschild black hole. However, once it is expanded in realistic coordinates it turns out to have off-diagonal terms with axially symmetric nature of the metric. We plan to implement NJA in the modified Schwarzschild metric under f (R)-gravity [4] to obtain the corresponding modified Kerr solution. To the best of our knowledge, there is no venture towards this so-lution before this work. Once we obtain the modified Kerr solution, we explore various basic characteristics of the metric, e.g. radius of event horizon, marginally stable and bound circular orbits, various components of epicyclic oscillation frequency, orbital angular frequency, etc., with the change of black hole spin and modified gravity parameter. The paper is organized as follows. In the next two sections, we recapitulate the basic formalism of obtaining modified GR based field equation in f (R)-gravity and its solution for an asymptotically flat non-rotating black hole, respectively, in sections II and III. Thereafter, we establish a rotating black hole solution in section IV based on NJA. Further, we discuss the nature of singularity of the metric and horizons in, respectively, sections V and VI. For the latter, first we present the numerical solution and then approximate analytical solution. Subsequently, we explore various fundamental orbits, as in GR, in this modified gravity framework for a test particle motion in section VII and corresponding fundamental oscillation frequencies in section VIII. We conclude our work in section IX. II. BASIC FORMALISM OF FIELD EQUATION In GR, the Einstein-Hilbert action produces the field equation. With the metric signature (+ − −−) in 4-dimension it is given by [9] where c is the speed of light, R is the scalar curvature such that R = R µν g µν , often called Ricci scalar, with R µν being Ricci tensor, G is Newton's gravitation constant, L M is the Lagrangian of the matter field and g = det(g µν ) is the determinant of the metric tensor g µν . Varying this action w.r.t. g µν and equating it to zero with appropriate boundary condition produces the Einstein's field equation for GR, given by where T µν is the energy-momentum tensor of the matter field. This equation relates the matter to the curvature of the spacetime. In case of modified GR, here f (R) gravity, the Ricci scalar in Einstein-Hilbert action is replaced by f (R) (being a function of the Ricci scalar). The action is then represented as Now varying this modified action w.r.t g µν with appropriate boundary condition gives a modified version of the field equation, which is given by [10][11][12] where F(R) = d dR f (R), is the d'Alembertian operator given by = ∇ µ ∇ µ and ∇ µ is the covariant derivative. For f (R) = R, this equation reduces to the well-known Einstein field equation in GR. Now for the vacuum solution the energymomentum tensor vanishes, i.e. 
T µν = 0, and the equation reduces to The trace of this equation is given by Substituting f (R) from equation (6) into equation (5), we have III. SOLUTION FOR A NONROTATING BLACK HOLE Here we briefly recapitulate a solution for a non-rotating black hole in f (R)-gravity obtained earlier [4]. The vacuum solution of a spherically symmetric and static system can be written in the form of g µν = diag s (r) , −p (r) , −r 2 , −r 2 sin 2 θ . Now we assume that F (R) has a form such that, F (r) = 1 + B/r. Hence, as r → ∞, F (r) → 1, which generates the usual theory of GR. Note that B ≤ 0 to guarantee the attractive nature of gravity [4]. Now from equation (6) we have [13] 2 and where X (r) = p (r) s (r). Putting equation (10) in equation (9) we obtain the series solution for s(r) (for B 0) as where C 1 and C 2 are constants of integrations which can be obtained by arguing that the metric needs to behave as Schwarzschild metric at a large distance, which requires the coefficient of r 2 to vanish and coefficient of 1/r to be −2, which gives Thus, the temporal component of the metric turns out to be Thus the radial component of the metric can be found as g rr = −p (r), where p (r) = X(r)/s(r), and thus the power series solution takes the form as After the original discovery of the Kerr metric, Newman and Janis showed that the solution could be derived from the Schwarzschild solution by making an elementary transformation involved with complex numbers, assuming the black hole to be spinning. The spin (angular momentum per unit mass) of black hole comes into the solution as an arbitrary parameter. The static spherically symmetric metric and the line element could be written in the general form in (+ − −−) convention as [14] (16) In the null coordinates, this line element can be written, by advancing the time coordinate as dt = du +f dr and settingf = [s(r)/p(r)] − 1 2 , as Thus, the contravariant form of the metric can be written as Here "."s in equation (18) indicate that the metric is symmetric and will have the same elements as in the upper triangle. The contravariant form of the metric can be written so that it can be expressed in terms of its null tetrads [8,15,16] as where the null tetrads satisfy the conditions l µ l µ = m µ m µ = n µ n µ = 0, with the bar indicating the complex conjugate. Putting the elements of the metric from equation (18) to equation (19), along with equation (20), the null tetrads are found to be Then following NJA, we proceed by making a complex transformation as By considering this as a complex rotation of the θ − φ plane, the tetrads can be obtained as Note that s(r, θ) and p(r, θ) in equation (26) are completely different from s (r) and p (r) in equation (22) (and in equations (11) and (15); also see [17][18][19]). In fact, the new functions are functions of both r and θ, while the old ones are functions of only r. From equation (19), the contravariant form of the metric is obtained as where Σ = r 2 + a 2 cos 2 θ. The inverse of this metric, i.e. its covariant form, is Now we redefine the coordinates u and φ such that, du = dt+g (r) dr and dφ = dϕ+h(r) dr, with g and h as in a new coordinate system. This leads to all the non-diagonal elements, except g φt , go to zero. This transforms the metric to Boyer-Lindquist coordinate system. Now putting X (r, θ) = p (r, θ) s (r, θ), the metric in this coordinate system takes the form which essentially leads to the counter part of rotating black hole of the metric in equation (16). B. 
Transformation of specific functions under NJA and modified Kerr metric Equipped with the knowledge of NJA, the angular momentum parameter can be easily incorporated in the non-rotating vacuum solution. For this we first proceed by noting that while we make the complex transformation, the coordinates r and u are complexified and a new parameter a is introduced. However, since in the end one needs a real spacetime, a function Q must remain real and so its changes are given as [16,20] so that the functions 1/r 2n and 1/r 2n+1 must be written as Now suppose the function Q (r, r) has some terms of 1 (rr) n and 1 (rr) n 1 2 1 r + 1 r with at least one of them having a non-zero coefficient, then after the complex transformation of u → u = u − ia cos θ, r → r = r + ia cos θ, θ → θ = θ, φ → φ = φ, the components of Q (r, r) will transform as Thus, after the complex transformation, the function Q(r) transforms to Q(r, θ). 1 Applying 1 Q (r) and Q(r, θ) are not necessarily equal. equations (36) and (37) to the functions X (r), s(r) and p (r), we have Thus equations (32), (38), (39) and (40) essentially complete our development of the metric which is the asymptotically flat vacuum solution for a rotating black hole in a modified gravity. It can be easily seen that by setting B = 0, we obtain the usual Kerr-metric. V. SOURCE AND SINGULARITY From equation (32) we see that the metric becomes singular, when s(r) or p(r) becomes singular and that happens when Σ = 0, since Σ is present at the denominator in both. This shows that the metric becomes singular for [20] This can be seen to be a geometric singularity by computing the curvature contraction R µνρλ R µνρλ . Further, it is an extended singularity, rather than 'point -like' singularity (as in Schwarzschild metric). Now defining local rectangular coordinate system x =r sin θ cos φ + α sin θ sin φ, y =r sin θ sin φ − α sin θ cos φ, z =r cos θ, we immediately see that r = 0, θ = π/2 corresponds to x 2 + y 2 = α 2 and z = 0. Consequently, the physical singularity of the Kerr metric is a ring singularity. With the small B approximation as made in section VI B below, the term involved with spin angular momentum transforms as α ≈ a − 1.5B (as will be clearer in section VI B below), thus the radius and angular position as, respectively, Therefore, the singularity can be seen to be on a circle of radius α around the origin in the z = 0 plane. The solution can be considered to lie uniformly distributed on this circle, bounding an interior disc x 2 + y 2 ≤ α. This singularity signifies the presence of a rotating black hole and is termed as ring singularity. VI. HORIZONS In addition to the ring-like curvature singularity, there are also additional coordinate singularities. Such coordinate singularities can be removed by suitable choice of coordinates, but they often underlie important physical phenomenon and have geometric description. Considering the Boyer-Lindquist coordinates for the metric given by (32), we define ∆ as then g rr = −Σ/∆, which becomes singular when ∆ = 0. The solution of r for ∆ = 0 gives two real values r ± of which r − ≤ r + . These radii are referred to as outer (r + ) and inner (r − ) horizons; the former is called the event horizon and the later one Cauchy horizon, and the region r < r + is referred to as the 'interior' of the black hole. It can be shown that the event horizon marks the point of no return. 
Now since r − lies inside the event horizon and no actual observer can have access to the interior of the event horizon, we avoid any discussion about the inner horizon r − . A. Numerical Solution From equations (32), (38), (39) and (40) we obtain the metric components as a series solution and substituting them in equation (42) effectively gives ∆. Now ∆ = 0 has been numerically solved in order to obtain event horizon r H which is r + . We will obtain an analytic approximation of the result in the next section. Tables I and II show r H for different a and B in the equatorial plane. Tables I and II show that r H monotonically increases with the increase of |B| and monotonically decreases with the increase of a. From Table II and Figure 1 it can be seen that unlike in Kerr metric, |a max | > 1 is allowed due to B < 0. The variation of maximum a, i.e. a max , for varying B is shown in Figure 2. It can be seen from the Figure 2 that |a max | varies almost linearly with B. Exploring and interpreting these results with the exact solutions is beyond the scope of this work. We will look at an analytic approximation of the above feature and report the result in the next section, where we will calculate |a max |. We will confirm that indeed |a max | is allowed to be greater than unity in modified gravity and also varies approximately linearly with B. B. Analytical Approximation In order to assure the possibility of analytical solutions, we consider very small modifications to GR and hence we take B/r 1. Thus we take only terms up to r −2 , the functions s(r, θ) and p(r, θ) can then be written as p (r, θ) = 1 + (2 − 2B)r r 2 + a 2 cos 2 θ Taking terms upto r −2 , in Boyer-Lindquist coordinate system, the metric can be recast from equations (32), (43), (44) and taking further B 1 and having X ≈ 1, the nonzero component of the metric comes out to be This line element matches exactly with the results of black hole theories with higherdimensional branes [21,22]. This shows that the work presented here gives a more general metric and includes the results from higherdimensional branes. The effects of higherdimensional branes come from a specialized case where the modification to gravity has been taken to be very small. Now to find the horizons in this case, the equation ∆ = 0 has to be solved which approximately becomes, from equation (42), which gives Thus, to the first order in B, we obtain ∆ = r 2 + a 2 − 2r − β. Now solving the quadratic equation (48) gives two three-surfaces of constant r as These surfaces give the outer (r + ) and inner (r − ) horizons. Thus, the event horizon takes the form as It can be easily seen that by setting B = 0, we recover the well-known results of the event horizon in Kerr metric, r H 0 = 1+ √ 1 − a 2 , which confirms the validity of analytical solutions. Figure 3 shows how r H varies with a based on analytical approximate solution. It can be seen from Table I that for a = 0 the results match quite well with the analytical results presented here. However, as |B| increases, the value deviates a lot from the actual solution, which is because we have taken only terms up to r −2 in s (r) and p (r) in analytical calculation. 
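A minimal sketch of the approximate analytical event horizon discussed above is given below, assuming the small-B form ∆ = r² + a² − 2r − β, so that r_H = 1 + √(1 − a² + β). The parameter β (non-negative for B ≤ 0) stands for the first-order modification term of equation (48); its exact dependence on B is not reproduced here, so β is treated as an input. Setting β = 0 recovers the Kerr value 1 + √(1 − a²), while β > 0 allows |a| > 1, consistent with the numerical trend reported above.

```python
# Sketch of the approximate (small-B) horizon radius of equation (50),
# r_H = 1 + sqrt(1 - a^2 + beta), with beta >= 0 the first-order modification
# parameter (beta = 0 recovers the Kerr event horizon). The precise mapping
# beta(B) of equation (48) is not reproduced here; beta is taken as an input.
import numpy as np

def r_horizon_approx(a, beta=0.0):
    disc = 1.0 - a**2 + beta
    if disc < 0.0:
        raise ValueError("no real horizon: |a| exceeds sqrt(1 + beta)")
    return 1.0 + np.sqrt(disc)

def a_max(beta):
    """Largest spin admitting a real horizon in this approximation."""
    return np.sqrt(1.0 + beta)

if __name__ == "__main__":
    for beta in (0.0, 0.1, 0.3):
        print(beta, a_max(beta), r_horizon_approx(0.9, beta))
```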
Quantitatively, when B ≈ −0.1, very small compared to r, the numerical solution matches with the approximate analytical solution; thus, the analytical approximation is valid for the B ≥ −0.1 realm, so that From equation (49), for r H to be real we must have Thus, From equation (50) the maximum value of |a max | obtained to be different from that obtained from Kerr metric and because β ≥ 0, black holes can have spin parameter of value more than unity, i.e. |a| ≥ 1. The linear dependence of spin on modified gravity parameter can also be seen from equation (51) which nearly matches with Figure 2. Interestingly, this approximate analytical solution matches exactly with the Kerr-Newman metric if we replace β with −Q 2 , where Q is the charge of the black hole. However, we know that the Kerr-Newman solution is a vacuum solution of the Einstein's field equation when the integrand of action is a scalar curvature (Ricci scalar) dependent on the parameters M, a and Q. Hence, this approximate solution due to the perturbative correction to GR can be treated as the solution of Einstein's field equation itself with appropriate redefinition of the action and parameter(s). However, in general the solution (g µν ) obtained in §IV can be understood as the one corresponding to an appropriate choice of f (R) and then F(R) satisfying equation (5). VII. ORBITS IN EQUATORIAL PLANE Due to the source having an angular momentum, the system's geometry is no longer spherical and is only axisymmetric. Only the components of the angular momentum along the symmetry axis are conserved. There are orbits confined to the equatorial plane (θ = π/2), but the general orbit is not necessarily on the plane. However, to present a manageable solution, we consider the equatorial plane in this section. Thus, from equations (32), (38), (39) and (40) we can construct two Killing vectors corresponding to energy and angular momentum. The energy arises from the timelike Killing vector K µ = ∂ t , and the Killing vector whose conserved quantity is the magnitude of the angular momentum is given by L = ∂ ϕ . Thus, we can construct the conserved quantities as E and L as the conserved energy per unit mass and angular momentum per unit mass along the symmetry axes, which can be expressed as [23] and Now by inspecting the metric we have L = g tϕ u t + g ϕϕ u ϕ . A. Marginally bound circular orbit From normalization condition of four-velocity u· u = 1, together with u θ = 0, we obtain a radial equation for u r = dr/dτ as g tt u t 2 + g rr (u r ) 2 + 2g tϕ u t u ϕ + g ϕϕ (u ϕ ) 2 = 1. (58) Thus equations (56), (57) and (58) essentially calculate u r as a function of E, L, r, a and B. The effective potential can now be defined as [23,24] V ef f (E, L, r, a, B) := r 3 (u r ) 2 . (59) Now for circular orbits we must have the radial velocity to vanish and hence the effective potential must vanish. Thus for equilibrium condition, we must have an extremum in V ef f . Therefore, we obtain the relations It can be shown that unbound circular orbits have E > 1. Given an infinitesimal outward perturbation, a particle in such an orbit will escape infinity. Bound orbits exist for r > r mb , where r mb is the radius of the marginally bound circular orbit with E = 1. Thus, solving equation (60) with condition E = 1, we obtain the value of r = r mb . From Figure 4 the effect of B on r mb can be seen, and that r mb increases with increasing |B| for a fixed a, and r mb decreases with the increase of a for a fixed B. 
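As a GR baseline for the marginally bound orbit, the Kerr (B = 0) case has the closed form r_mb = 2 ∓ a + 2√(1 ∓ a) in units G = c = M = 1 (upper sign for prograde orbits). The short sketch below is an illustrative check of this limit, against which the numerical solution of equation (60) can be compared; it is not the modified-gravity calculation itself.

```python
# GR (B = 0) baseline for the marginally bound circular orbit (E = 1).
# For Kerr (G = c = M = 1) the closed form is r_mb = 2 -/+ a + 2*sqrt(1 -/+ a)
# (upper sign: prograde). Useful as a sanity check of the numerical solution
# of equation (60); the B < 0 case still requires the full V_eff treatment.
import numpy as np

def r_mb_kerr(a, prograde=True):
    s = 1.0 if prograde else -1.0
    return 2.0 - s*a + 2.0*np.sqrt(1.0 - s*a)

if __name__ == "__main__":
    for a in (0.0, 0.5, 0.9, 0.998):
        print(a, r_mb_kerr(a), r_mb_kerr(a, prograde=False))
    # a = 0 gives 4.0 (Schwarzschild); a -> 1 gives 1.0 for prograde orbits.
```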
It also can be seen that setting B = 0 gives the same results as in GR. B. Innermost stable circular orbit To find the innermost stable circular orbit, we opt for the same V ef f as defined in section VII A. Since we are considering circular orbits, equation (60) is still valid. All the bound circular orbits are not stable. For stability condition, we must have the condition Now, the minimum radius (innermost orbit) that satisfies equations (60) and (61) is termed as Innermost Stable Circular Orbit (ISCO) and the radius named as r ISCO . Numerically solving these three equations simultaneously we obtain the variation of r ISCO shown in Figure 5. Similar to the case of r mb , here we see r ISCO increases with increasing |B| for a fixed a, and r ISCO decreases with the increase of a for a fixed B. Also, it can be easily verified that as B = 0, the results of GR are preserved. VIII. EPICYCLIC FREQUENCY IN MODIFIED GRAVITY In this section we will briefly describe the derivation of epicyclic oscillation frequencies for the stationary, axisymmetric metric from the effective potential for circular geodesics, depicting the spacetime around a rotating black hole. From equations (32), (38), (39) and (40) the line element can essentially be expressed as ds 2 = g tt dt 2 +2g tϕ dtdϕ+g ϕϕ dϕ 2 +g rr dr 2 +g θθ dθ 2 , (62) with g µν as a function of r and θ and a symmetry along φ and t. It is most straightforward to obtain the epicyclic frequencies for a metric that can be expressed in this form. Epicyclic frequencies originate from the the relaxation of the circular orbits under external perturbation and it must be that this frequencies solely depend on the structure of the spacetime. Now the similar normalization condition as in equation (58) along with equations (56) and (57) but without a fixed θ, hence with u θ , can be rewritten as where the effective potential can be defined as V ef f = E 2 − g tt g ϕϕ + 2LE + g tϕ g tϕ + L 2 g tt g 2 tϕ − g tt g ϕϕ ∆ . (64) For circular orbits in the equatorial plane we have u r = u θ = 0, which implies V ef f = 0, anḋ u r =u θ = 0 give ∂ r V ef f = ∂ θ V ef f = 0. From these three conditions E and L can be obtained as [25] and the orbital angular frequency is given by [25] Ω ≡ 2πν ϕ = −∂ r g tφ ± ∂ r g tϕ 2 − ∂ r g ϕϕ ∂ r g tt ∂ r g ϕϕ , where the positive (negative) sign in equation (67) refers to the co-rotating (counter-rotating) orbits with respect to the black hole spin. Equation (67) also defines the quantity ν ϕ which is the frequency in which the particles move around the black hole in circular orbits. Now the proper angular momentum ( ) can be derived to be = − g tϕ + Ωg ϕϕ g tt + Ωg tϕ . For finding the epicyclic frequencies, we first consider the perturbation to the radial (r) and vertical (θ) coordinates so that where the perturbations are considered to be δr(t) ∼ e iΩ r t and δθ(t) ∼ e iΩ θ t , so as to have equations for harmonic oscillator of the form Here r 0 is the radius of the circular orbit and θ 0 = π/2, is the angle at which the equatorial plane exists. Now expanding the R.H.S. of equation (63) into second-order Taylor series along with the radial (r) and vertical (θ) components, replacing r and θ from equation (69), using equations (70) and (71), and after some simple algebra we obtain [25,26] (73) The dependence of the frequencies on B arises from various metric components. The explicit forms of the frequencies are huge and hence are not included in this work. Rather, we shall provide a numerical estimations of these frequencies. 
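For reference, the B = 0 limit of the three frequencies has the standard Kerr closed forms, which equations (72)–(73) must reproduce. The sketch below is only this baseline, evaluated in geometric units (G = c = M = 1) for prograde orbits at a few illustrative radii; the modified-gravity frequencies require the full metric components of equation (32).

```python
# GR (B = 0) baseline: Kerr epicyclic frequencies in geometric units
# (G = c = M = 1), prograde orbits. Angular frequencies Omega = 2*pi*nu.
# The modified-gravity frequencies need the full metric of equation (32);
# this limit is what equations (72)-(73) must reduce to when B = 0.
import numpy as np

def kerr_frequencies(r, a):
    omega_phi = 1.0 / (r**1.5 + a)                          # orbital
    omega_r2  = omega_phi**2 * (1.0 - 6.0/r + 8.0*a/r**1.5 - 3.0*a**2/r**2)
    omega_th2 = omega_phi**2 * (1.0 - 4.0*a/r**1.5 + 3.0*a**2/r**2)
    # omega_r2 turns negative inside the ISCO, where no stable circular orbit exists.
    nu = lambda w2: np.sqrt(w2) / (2.0*np.pi) if w2 > 0 else 0.0
    return omega_phi/(2.0*np.pi), nu(omega_r2), nu(omega_th2)

if __name__ == "__main__":
    a = 0.8
    for r in (3.0, 5.0, 8.0, 15.0):
        print(r, kerr_frequencies(r, a))
    # To quote frequencies in Hz for a black hole of mass M, multiply the
    # geometric-unit values by c^3/(G*M).
```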
It should also be noted that these frequencies are observables and will be the key in estimating the most favored value of B from observational data. The behaviors of ν r and ν θ are shown in Figures 6a and 6b with a fixed spin parameter a = 0.8. From Figure 6 it can be seen that ν r decreases, while ν θ and ν φ increase, with the increase of |B|, at a given r (particularly away from the black hole). However, the peak of ν θ decreases with increasing |B|. Also ν r vanishes at a larger radius with a smaller peak with increasing |B|. It can be easily seen from equation (67) that the GR result, i.e. Ω ∼ (r 3/2 ± a) −1 , can be found by setting B = 0. IX. CONCLUSION The idea of modified GR is in the literature for sometime, but its indispensable usefulness was not very clear. Although Starobinsky argued for R 2 -gravity (a kind of f (R)-gravity) to explain inflation [27], it was not clear if all the gravity theories are the same. In last one decade or so, the authors however showed that R 2 -gravity could be useful to sort out problems lying with neutron stars and white dwarfs [2,3,28,29] as well. Nevertheless, none of these solutions is black hole (vacuum) solution. In this work, we establish an asymptotically flat vacuum solution of the axially symmetric field equation in a modified GR, more precisely f (R)-gravity. The solution particularly describes the spacetime geometry around a rotating black hole, i.e. the modified Kerr black hole solution, for the first time of this kind to the best of our knowledge. It shows that depending on the modified gravity parameter, all the fundamental properties of the black hole change, e.g. the radii of black hole, marginally stable and bound circular orbits increase. Therefore, based on the observed size, e.g. by Event Horizon Telescope (EHT) image, the inference or estimate of spin of black hole would be incorrect unless proper theory is used. If indeed the gravity theory is based on an f (R)-gravity, the GR based inference of spin of the black hole would actually underestimate it. This has many far reaching astrophysical implications. The solution also implies that the naked singularity, as formed at the Kerr parameter a > 1, need not necessarily produce in modified GR. This naturally has important implications to the cosmic censorship hypothesis [30,31]. Therefore, black holes, according to this gravity theory, can spin faster without forming naked singularity depending on the modified gravity parameter.
v3-fos-license
2021-11-27T16:04:27.374Z
2021-11-25T00:00:00.000
244664978
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/wcmc/2021/6195212.pdf", "pdf_hash": "eb77d273edd107a2a5dfd713cec6401902c1b3b8", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44810", "s2fieldsofstudy": [ "Education" ], "sha1": "6bdebc5e8ff0caad99544482753bdecafee79b4d", "year": 2021 }
pes2o/s2orc
Analysis of Cultural Education and Behavior in Colleges and Universities Based on Image Recognition Technology When the rationality and importance of cultural education in colleges and universities have been confirmed, the key question is how to better realize cultural education. This requires an in-depth understanding and grasp of the cultural education work of colleges and universities, analyzing the internal logic contained in this education system and then discussing its practical rationale. This paper proposes a multiobjective Support Vector Machine (SVM) image recognition method for college cultural and educational behaviors based on three decision-making theories. Aimed at the problem of classification errors caused by image segmentation in image recognition of cultural and educational behaviors in colleges and universities, it is found that the reason that causes the same target to be divided into different subblocks is that the ordinary SVM classifier has only two choices of “yes” and “not” in classification. It is impossible to set the conditions for the classification of cultural and educational behavior images to satisfy all the existing conditions. In order to solve the above problems, this paper combines the three decision-making ideas with the image recognition of university cultural education behaviors and designs a multiobjective SVM image recognition method of university cultural education behaviors based on the three decision-making theories. We establish a higher education service quality structure model and use LISREL to explore the causal relationship among students’ perceived service quality, university image, student’s perceived service value, student trust, student satisfaction, and student behavior intentions in the field of higher education. The empirical results show that student satisfaction has a significant direct and positive effect on student behavior. Service quality has a significant direct and positive impact on perceived value and student trust. The perceived risk of higher education services determines the image of colleges and universities, and student trust plays an important role. The image of colleges and universities acts on student satisfaction through perceived value and student trust. The image of colleges and universities has a direct impact on students’ behavioral inclination, and it acts on students’ behavioral inclination through student trust and student satisfaction. Among the influences on behavior tendency, the direct positive effect of student trust is the largest, followed by the image of colleges and universities and the quality of service. Introduction With the general trend of artificial intelligence, its branch image recognition has also received close attention. As a collection of image processing and pattern recognition, it is a technology that uses computers to understand images to distinguish different target objects [1]. At present, image recognition is very common in daily life and business society, such as airport security check identity verification, commuting fingerprint check-in, or iris access control, and industry research is also very extensive, such as transportation, medical care, and agriculture. With the continuous development of the social economy and the advancement of the popularization of higher education, the employment pressure of undergraduates in our country is increasing, and "difficulty in obtaining employment" has become a common situation in society [2]. 
Under the impact of the social wave of the rise of high and new technology, the demand for structural talents in various enterprises is constantly expanding, and the demand for talents with strong professional skills is increasing. In order to increase the employment rate of graduates, colleges and universities have strengthened the training of students' professional skills to a certain extent. From a long-term perspective, professional education is important, but cultural quality education is also indispensable. It should run through the whole process of college students' training and education [3]. The educational goal of colleges and universities is to train people with all-round development of physical and mental health. Now, more emphasis is placed on national quality, cultural heritage, and patriotism. The core point of improving these is to strengthen the country's cultural quality education. The cultural education system has been in operation in the university environment, but people's understanding of this way of education still has a certain degree of ambiguity [4]. For example, in previous studies, the understanding of cultural education includes campus culture education, environmental culture education, and subject culture education. There is even a view that all education is cultural education in the final analysis. Mentioning "culture" seems to be able to describe a lot without thinking, but the answer to "what the hell is" is dumb. People's understanding of cultural education is the same. It interprets the connotative characteristics of cultural education in colleges and universities and analyzes its inherent logic on this basis. These rational interpretations can enhance people's understanding of cultural education in colleges and universities. Education is the true appeal of ideological and political education, and cultural education is an important part of ideological and political education [5]. Under the conditions of cultural globalization and under the background of building a socialist cultural power, the study of cultural education is a modern development of traditional ideological and political education based on society and inculcation of knowledge and education. It is both an update of educational concepts. It is also the development of educational methods. System theory emphasizes the whole and connection. The cultural education system of colleges and universities is to summarize and integrate the elements of cultural education according to the scientific and logical relationship in accordance with the principles and methods of system theory, so as to play a role that exceeds the sum of the effects of individual elements and enhance the effectiveness of ideological and political work in colleges and universities [6]. This paper constructs a classifier for image recognition of cultural and educational behaviors in colleges and universities based on three decisions. This paper is aimed at the problem that the traditional image segmentation of cultural and educational behaviors in colleges and universities are all regular rectangles, and the same subobjectives may be divided into different submodules. Compared with the commonly used methods of image recognition of cultural and educational behaviors in colleges and universities, this paper designs and builds a college based on three decisions. The selection of judgment conditions is obtained by retraining the previous classification results. 
Through a limited number of iterations, the university cultural and educational behavior image segmentation divides the same target into the same submodule as much as possible, thereby solving the problem of errors in the image segmentation of the university cultural and educational behavior and making the university cultural and educational behavior image segmentation more accurate. This paper introduces the three decision-making theories into the traditional SVM image recognition of cultural and educational behaviors in colleges and universities, designs a classifier based on the concept of the three decision-making theories, and establishes a recognition model of cultural and educational behavior images in colleges and universities to improve the accuracy of the recognition of cultural and educational behavior images in colleges and universities. Through the analysis of the three decision-making theories, the concept of delayed decisionmaking is humanized in the field of cultural and educational behavior image recognition in colleges and universities, and the delayed decision-making part is iteratively processed, and finally more accurate recognition results are obtained. We develop an educational service quality scale to analyze the service quality in the field of higher education, establish a higher education service quality measurement model, and use mathematical statistics to derive the factors of higher education service quality attributes. Confirmatory factor analysis shows that the reliability and validity of the factors are high, and the second-order confirmatory factor analysis shows that they constitute higher-order factors of education service quality attributes. This article will introduce the advanced ideas of service marketing into the education quality management from the perspective of university education service quality management, adopt the research method of combining theoretical research and empirical research and qualitative analysis and quantitative analysis, and systematically analyze higher education. Related Work Image recognition is the process of detecting, separating, and cognizing target objects in simple or complex backgrounds. It is a basic behavior that is vital to the survival of animals. Human activities depend to a large extent on the classification or recognition of a large number of visual objects [7]. We can recognize these objects quickly and effortlessly even under different lighting conditions or when they are blocked by other objects in a complex visual environment. People form and continuously improve their own object classification principles and visual recognition systems through continuous learning of new objects. When the observer encounters a new object and does not have the conceptual information of the classified object in advance, the classification principle he formed before will serve as the basis for classifying the new object [8]. In the process of learning to recognize basic objects, humans can detect consistent features with minimal changes between individuals. These features represent most samples of an object class, thereby extracting class invariance and continuously improving their classification principles and recognition system [9]. The important reason for the long-term survival of human beings in nature is that they can quickly recognize and understand their environment and make corresponding response plans. 
The key is to use human's own visual system 2 Wireless Communications and Mobile Computing to locate and identify targets and at the same time realize visual scenes. If the computer can automatically recognize images, it will further enrich and facilitate human life. This makes image recognition technology one of the important research directions in the field of artificial intelligence and big data analysis. Image recognition refers to the use of computer vision, pattern recognition, machine learning, and other technical methods to automatically recognize the concept of one or more target semantics in the image, and the generalized image recognition also includes the concept recognition of image region positioning. Image recognition technology can meet the user's vision application needs in different scenarios, including Internet-based image retrieval and data mining, human-machine dialogue, and information services on smart terminals, such as mobile devices and smart robots. There are different opinions on the construction of the cultural education system in academia, but most of them focus on the specific content, practical approach, and guarantee mechanism of the cultural education system [10]. Relevant scholars proposed that higher vocational colleges should start with the concept system, curriculum system, implementation system, and guarantee system to build a complete professional cultural education system [11]. Researchers believe that schools must first scientifically construct an ideal model of a practical-oriented regional excellent cultural education system, secondly creatively construct a practical model of a practical-oriented regional excellent cultural education system, and finally effectively construct a practical-oriented regional excellent cultural education [12]. The guarantee mechanism of the system can form a unique practice-oriented school regional cultural education system. Related scholars believe that one of the important connotations of the construction of the school cultural education system in the new era is to use the concepts of the new era to guide the construction of the school's cultural power, which mainly includes spiritual, material, behavioral, institutional, and other aspects [13]. Only when the main body of cultural power construction actively participates in the various cultural constructions of the school, a good development situation may emerge. Relevant scholars pointed out that the understanding of cultural quality education should be viewed from three different levels [14]. The three levels have a progressive relationship, from shallow to deep, and cannot be viewed independently: colleges and universities should take "improving the cultural literacy of college students" as the first priority. Researchers believe that the humanities used in the cultivation of college students' cultural qualities should be based on the thinking and behavior of college students, focusing on social practice and urging college students to exert conscious initiative in promoting social development [15]. Relevant scholars pointed out that only relying on the means of imparting knowledge will never achieve the goal of cultural quality education [16]. It should be combined with the overall progress and development of people and society; that is, colleges and universities should not only pay attention to the transmission of knowledge but also pay attention to morality when cultivating talents [17]. 
For the organic combination of the cultivation and cultivation of sentiment, colleges and universities should carefully design courses and pay attention to the creation of a fine and harmonious environment. Relevant scholars believe that in cultural quality education, the role of people should be emphasized (people-oriented), and the cultivation of "full" people should also be emphasized at the same time as the cultivation of individuality [18]. Individuality is the source of innovation and creativity. Relevant scholars pointed out that the cultural quality education of college students should be combined with the improvement of teachers' cultural literacy, ideological and political, and focus on the comprehensive training of humanities and science of talents [19,20]. Practice Theory of Cultural Education in Colleges and Universities Operating Structure of Cultural Education in Universities. Cultural education is a kind of practical activity, which is related to people's epistemology, and is essentially an activity about people's cognition. The formation process of human cognition is the process of achieving the goals of subject and object on the basis of practice. It can be seen that the realization of the effect of cultural education also presents a structure of subject and object, which is the result of the integration of cultural and educational elements. The quality ecosystem of cultural quality education in colleges and universities is shown in Figure 1. The cultural education system of colleges and universities depends on the social system, and it is also an organic operating system. Therefore, the grasp of its subject and object structure can be interpreted in the theory of system philosophy. In system philosophy, "subject" mainly includes three main forms: individual subject, group subject, and social subject. Regardless of the form of the subject, a basic feature is that it is social and conscious, and its activities are carried out with a certain purpose. "Object" mainly includes three object forms: natural object, social object, and spiritual object. The development of cognition activities is a process of mutual transformation between subject and object on the basis of practice, and the achievement of cognition goals is to realize the subjectivity of the object. The subject of cultural education in colleges and universities should explore the content, source of motivation, and achievement of cultural education. In other words, one convenience for mastering the initiative and dominant force of this activity is the subject, while the party that is affected, received, and influenced is the object. According to the "differential order" structure of operation and development, teachers and students, schools, and cultural fields in society can all act as subjects, playing the roles of individuals, groups, and social subjects, respectively. But this does not mean that the subject of cultural education in colleges and universities has fallen into "relativism." What needs to be pointed out is that in the composition of these subjects, administrative forces, school environment, teachers, etc. still occupy important positions of the subject of cultural education in colleges and universities. Teachers and campus practitioners must not only play the main role of educating 3 Wireless Communications and Mobile Computing people, but in terms of the overall social environment and the "postmetaphor" role of students to teachers, they are also the objects of cultural education to some extent. 
Generally speaking, the student group is still an important part of the object of campus culture education. The Operating Mechanism of Cultural Education in Colleges and Universities. When the object of education has certain cultural knowledge and forms a cognitive schema that conforms to a certain cultural content, the cultural education work has achieved a certain degree of success, but this is not the end, and it is not the complete completion of the cultural education work. The realization of the internalization of the cultural core at the level of educational objects is only half of the success of cultural education. It is important for the educational object to absorb and recognize the spiritual literacy reflected by this cultural core and enable such spiritual literacy to dominate daily behaviors. There are countless people who know and understand "integrity" in real life, while those who can truly achieve "integrity" in daily behavior will be greatly reduced. This involves the relationship between knowledge and action and the relationship between internalization and externalization. The goal of cultural education lies not only in simple knowledge transfer and transmission but also in the realization of the unity of knowledge and action in the object of education. The schematic diagram of the operating mechanism of cultural education in colleges and universities is shown in Figure 2. According to the basic view of new behaviorism, the occurrence of behavior is dominated by conscious experience, and a behavior must have a corresponding behavioral motivation. Individual behavior is affected by factors such Wireless Communications and Mobile Computing as environment, experience, and age, and most of these variables are changed through learning. Therefore, in the process of a behavior, different stimuli and response variables will be formed through different stages of learning, which will have a decisive impact on individual behavior. In contrast to the operation of the cultural education system in universities, its goal is to realize that the advanced socialist culture with Chinese characteristics has an impact on the ideological level of the main body of the university, so that the main body of the university has the content of the advanced socialist culture with Chinese characteristics. To achieve this goal, the first thing that needs to be solved is the conversion of the national macrotheory to the individual microideological level, that is, the assimilation and adaptation of the external cultural content discussed above to individual concepts. When the transformation of cultural content at the individual microlevel is completed, people will have new behavioral motives in their thoughts and concepts, and new variables that affect behavior will be produced. When similar thoughts exist in a certain range and function to produce similar behaviors, a group movement with common characteristics will be formed. This collective action has a strong social effect and can have an important leading role in social trends. In the university environment, if it is possible to realize that most of the behaviors of the main body of the university are dominated and guided by the advanced socialist culture with Chinese characteristics, it is the manifestation of the transformation of the effect of the university cultural education system from internalization to externalization. 
It is said that the cultural education system has achieved complete operation, and the effect of cultural education has finally been realized. Ways to Realize Cultural Education in Colleges and Universities 3.3.1. Improve the System Design of Cultural Education. The cultural education of colleges and universities should give full play to the leading role of politics, continuously improve the system design, innovate the leading model, and establish a long-term mechanism for cultural education. In the process of overall development of colleges and universities, we must always adhere to the party's centralized and unified leadership of cultural education in colleges and universities, follow the basic rules of cultural education, and innovate and create education models and methods suitable for the college's own characteristics. Cultural education itself has a systematic and long-term nature, and the development of cultural education activities must give full play to the functions of various elements. There are rich organizational advantages in the university environment. It is necessary to give full play to the role of party organizations at all levels, league organizations, student associations, and other grassroots organizations, carry out various cultural education activities at all levels, and strengthen the theoretical study and core value education of college students. The cultural education of colleges and universities should be based on the main channel of curriculum teaching, grasp the key to teachers, establish and improve curriculum ideological-and politicalrelated systems, use curriculum teaching to enrich the campus cultural atmosphere, and realize cultural education. 5 Wireless Communications and Mobile Computing on the "solo" of the school but also comprehensively coordinate the forces of all parties to form a cultural education "chorus." First of all, in terms of campus construction, we must pay attention to integrity and coordination. The creation of cultural atmosphere is not accomplished by setting up a functional department, but by infiltrating the concept of cultural education into all aspects of running a school. It is necessary to highlight cultural elements in the curriculum setting and campus environment construction, but also to create a cultural environment in the aspects of student accommodation, entertainment, and travel and cultivate teachers and students' patriotic and school-loving ideas with the ubiquitous and meticulous cultural infiltration concept. Secondly, it is necessary to make full use of external cultural education conditions such as social resources and family environment, link the construction of campus culture with the cultural construction of the city where it is located, and realize the organic integration of the cultural environment on campus and the cultural environment outside the campus. Create an Information Platform for Cultural Education. The information age has provided more possibilities for the innovation of cultural education methods in colleges and universities. The rapid development and universal application of Internet technology are the biggest "trend" that colleges and universities must adapt to in ideological and political work. 
The emergence of emerging communication methods and information processing methods such as the Internet of things, blockchain, big data, and cloud computing has provided important technical support for colleges and universities to implement the task of fostering people and improve the quality of education. The cultural education work of colleges and universities should focus on the cultural education effect of cyberspace, which is concentrated in the two aspects of "establishing" and "breaking." First of all, it is necessary to realize the "establishment" of network culture educating people. Colleges and universities should take an active and adaptive attitude to occupy the network position in time, make full use of Internet technology to build education platforms, enrich online content, and prevent the lack of mainstream cultural content in the network environment. The grasp of language and the guidance of topics are the key factors to realize the cultural education of the network environment. To carry out cultural education on the network platform, we must pay attention to the art of network language expression and be good at transforming profound theoretical content into popular network language to enhance the pertinence of cultural education. Second, we must pay attention to the "breaking" of network content. In the network environment, all kinds of information are muddled down, and the good and bad are mixed, which often dissolves the education effect of mainstream culture. External forces and reactionary forces with ulterior motives often attract young students with weak discriminating ability under the guise of "recovering the truth" and "digging into the secret history." Therefore, another important way of educating people in online culture is to eradicate these weeds in cyberspace, seek to "stand" in "breaking," use true and authoritative information to elimi-nate students' doubts, and realize the implantation of positive and correct values. Multiobjective SVM Image Recognition Algorithm for University Cultural Education Behavior Based on Three Decision-Making Theories Three-branch decision-making is a model based on human understanding of decision-making. The three-branch decision-making thinking adds the concept of delayed decision-making to the traditional right and wrong decisionmaking. Through the delayed decision-making mechanism, the credibility of the decision-making is improved under the continuous addition of new decision-making conditions. The traditional image recognition technology of cultural education in colleges and universities is generally based on Support Vector Machine (SVM). Through the judgment of right and wrong, we classify the useful information of the cultural and educational behavior images of colleges and universities, so as to achieve the purpose of recognition of the cultural and educational behavior images of colleges and universities. The useful information of cultural and educational behavior images in colleges and universities mainly includes color, brightness, direction, contour, and other characteristics, and these characteristics are easily lost in the collection and transmission of cultural and educational behavior images in colleges and universities, so they will exist in the recognition process. 
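The accept/reject/defer rule at the core of this mechanism can be sketched as a simple thresholding of a classifier's posterior probability for an image sub-block. The thresholds used below are illustrative assumptions; in the method developed later they are induced by the cost (risk) matrix, and the deferred region is re-examined after retraining.

```python
# Minimal sketch of the three-way (accept / reject / defer) rule applied to a
# classifier's posterior probability for one image sub-block. The thresholds
# alpha and beta are illustrative assumptions; in the full method they follow
# from the cost matrix, and deferred sub-blocks are re-examined after retraining.
def three_way_decision(p_positive, alpha=0.7, beta=0.3):
    """Return 'accept', 'reject', or 'defer' for a posterior P(target | sub-block)."""
    if p_positive >= alpha:
        return "accept"          # confident positive region
    if p_positive <= beta:
        return "reject"          # confident negative region
    return "defer"               # boundary region: delay the decision

if __name__ == "__main__":
    for p in (0.95, 0.55, 0.10):
        print(p, three_way_decision(p))
```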
Combining the three decision-making theories with the field of image recognition of cultural and educational behavior in colleges and universities can further judge and detect the fuzzy parts of the cultural and educational behavior images in colleges and universities by delaying decisionmaking and effectively train the determined parts. It effectively reduces the lack of useful information in the image of cultural and educational behavior in colleges and universities and improves the accuracy of image recognition of cultural and educational behavior in colleges and universities. Multiobjective SVM Image Recognition of University Cultural Education Behavior. In this paper, the traditional recognition method is improved, and the efficient image segmentation method of university cultural education behavior is used, so that the university cultural education behavior image will not be divided into different subblocks because of the same target, thereby increasing the recognition rate. This paper integrates methods such as multifeatures, segmentation, detection, and multiclassifiers to conduct deep learning training on sample college cultural and educational behavior images. While reducing the loss of college cultural and educational behavior images, the target area is segmented and retrained. The composite SVM recognizer used in this paper is an improved version based on SVM recognition, which uses multiple methods such as fusion and segmentation to perform the final detection and recognition to improve the recognition rate of cultural and educational behavior images in colleges and universities. Wireless Communications and Mobile Computing The traditional university cultural and educational behavior image recognition method is based on the composite SVM. We obtain the first hyperplane and the second hyperplane parallel to the hyperplane and the same distance from the sample data points. The hyperplane can be described by the following linear equation: The first hyperplane is The second hyperplane is Among them, when gðxÞ = 0, x is a point on the hyperplane, vector w is a vector perpendicular to the hyperplane gðxÞ = 0, w T represents the transpose of the w vector, and b represents a constant. The sample data points of the tobe-identified college cultural and educational behavior image on the first hyperplane and the second hyperplane are the points closest to the separating hyperplane. Three Decisions in Image Recognition of Cultural and Educational Behaviors in Colleges and Universities. Improving the accuracy of cultural and educational behavior image recognition in colleges and universities has always been a difficult point in the field of cultural and educational behavior image information in colleges and universities. The method of image recognition of cultural and educational behavior in colleges and universities is relatively single, and researchers are also easy to focus on the low-level visual feature points, which will lead to insufficient recognition of the amount of useful information in the cultural and educational behavior images in colleges and universities, thereby reducing the need for colleges and universities. 
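A minimal sketch of this separating hyperplane and its two margin hyperplanes is given below, using scikit-learn's SVC as a generic stand-in for the composite SVM recognizer and synthetic two-dimensional feature vectors in place of the colour, brightness, and contour features; the data and parameter values are illustrative assumptions, not the paper's configuration.

```python
# Sketch of the separating hyperplane g(x) = w^T x + b with a linear SVM.
# Synthetic 2-D feature vectors stand in for the colour/brightness/contour
# features extracted from image sub-blocks; scikit-learn's SVC is a generic
# stand-in for the composite SVM recognizer described in the text.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=+2.0, scale=0.8, size=(50, 2))   # "target" sub-blocks
X_neg = rng.normal(loc=-2.0, scale=0.8, size=(50, 2))   # "background" sub-blocks
X = np.vstack([X_pos, X_neg])
y = np.array([1]*50 + [0]*50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# decision_function gives the signed value of g(x); the margin hyperplanes
# g(x) = +/-1 pass through the support vectors, the training points closest
# to the separating hyperplane.
scores = clf.decision_function(X)
print("w =", w, "b =", b)
print("support vectors per class:", clf.n_support_)
```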
This paper proposes a method of image recognition of cultural and educational behaviors in colleges and universities based on three decision-making methods, which makes use of the useful information in the cultural and educational behavior images of colleges and universities to a greater extent and makes the recognition of cultural and educational behavior images in colleges and universities more accurate. The understanding is that the content of the semantic information of the cultural and educational behavior images in colleges and universities is far more than the visual characteristics of the cultural and educational behavior images in colleges and universities. The current image segmentation of cultural and educational behaviors in colleges and universities usually directly divides the cultural and educational behavior images of colleges and universities into regular rectangles, which will cause the same target to be segmented into different subblocks of the cultural and educational behavior images of colleges and universities. The subblocks of cultural and educational behavior images are divided into different regions, which affects the accuracy of the recognition of cultural and educational behavior images in colleges and universities. The existing recognition methods improve the fusion of classifiers to improve the recognition performance of cultural and educational behavior images in colleges and universities. Due to the complexity of the content of cultural and educational behavior images in colleges and universities, the classification task of cultural and educational behavior images in colleges and universities is very difficult. The semantic classification of cultural and educational behavior images in colleges and universities is still challenging in the fields of college cultural and educational behavior image recognition, computer vision, and cognitive science. In response to the above problems, this article proposes a three-branch decision-based image recognition method for cultural and educational behaviors in colleges and universities. The method uses the three-branch decision-making mechanism to process fuzzy information, thereby reducing the possibility of not being recognized because of fuzzy feature information. Since this method is to segment the college cultural and educational behavior images by continuously increasing the judgment conditions of the classifier, the college cultural and educational behavior images in different subblocks will be divided into the same subblock in the continuous iterative process. Each time the three decisionmaking classifiers add judgment conditions after filtering the existing conditions, the information added can be adjusted according to the current university cultural and educational behavior image's own attributes, and then the segmentation area can be adjusted continuously. Design of a Multiobjective SVM Image Recognition Classifier for College Cultural and Educational Behaviors Based on Three Decisions. According to the characteristics of human cognition, three decision-making methods are adopted. Due to the unique delayed decision-making characteristics of the three decision-making, new decision information is constantly added to the initial decision conditions for decision-making condition judgments, so the classifier will take the divided positive and negative regions as new training after each classification is completed. 
The set is retrained to form a new judgment condition and added to the delayed decision-making area, until the delayed decision-making area can no longer be divided. Finally, we perform image recognition of cultural education behaviors in colleges and universities. Each training will add new decision-making conditions to make the classification results of the three decisionmaking classifiers clearer, and the part that delays the decision-making is smaller and smaller, until a certain critical value is reached, so that the image recognition of cultural education in colleges and universities achieves the ideal result. Corresponding to the three decisions, we use α, β, and ξ to represent acceptance, rejection, and noncommitment, respectively. Suppose the evaluation function is defined as Pr ðXÞ, and the risk function is Rð△|xÞ, where △ represents the decision-making action on x. Based on the cost matrix, the following risk estimates can be obtained for the two states: The acceptance risk is Wireless Communications and Mobile Computing The rejection risk is The risk of noncommitment is In the decision-making problem, based on the risk function, the decision action with the least risk is selected. λαp represents the cost of accepting decision-making when the acceptance conditions are met, λαn represents the cost of accepting decisions that do not meet the decision-making conditions of the sample data set of college cultural and educational behavior images, and λβp represents the cost of accepting decisions that meet the decision-making conditions of the sample data set of college cultural and educational behavior images. Λβn represents the cost of rejecting decision-making under the condition of not satisfying the decision-making conditions of the sample data set of college cultural and educational behavior images, λξp represents the cost of noncommitting decisionmaking under the condition of satisfying the decisionmaking conditions of the sample data set of college cultural and educational behavior images, and λξn represents the cost of noncommitment decision-making under the decisionmaking conditions of the sample data set college cultural and educational behavior images, assuming that the risk function satisfies the following conditions: Since the loss function satisfies the uncertainty condition of randomness, its mathematical mechanism is that the conditions for time occurrence are insufficient, so that there is no decisive causal relationship between the condition and the result. First, the event can be repeated under basically the same conditions; second, under basically the same conditions, an event may be manifested in multiple ways, and it cannot be determined in advance. In what specific way, third, all the possibilities of the event in various ways can be foreseen in advance (the probability of its occurrence in a certain way, that is, the probability of occurrence in the repeated process). The process of image recognition of cultural and educational behaviors in colleges and universities based on three decisions is shown in Figure 3. Empirical Analysis of Higher Education Service Quality Table 1. Second-Order Factor Analysis. The confirmatory factor analysis results show that the curriculum, faculty services, academic resources, management mechanism, campus environment, school activities, and living and accommodation have a high degree of correlation among several variables. The approximate confidence interval test is conducted with standard errors. 
We use LISREL 8.7 to do a second-order factor analysis of the above variables. These variables can be classified as a higher-order factor "attribute of educational service quality." The standardized estimates are all greater than 0.5 and less than the critical value of 0.95, which is much greater than the 0.01 significance level of 2.58; that is, the factor load and t value of all indicators of the evaluation model on their respective measurement concepts (latent variables) are rela-tively significant, indicating that the data is relatively significant. At the same time, no large errors occurred in the measurement errors of the observed variables, indicating that the next step can be an analysis of the overall suitability of the evaluation model. Figure 4 shows the discriminant index results for evaluating the overall suitability of the model. The normative fit index (NFI), relative fit index (RFI), and incremental fit index (IFI) are all greater than 0.90, indicating that the model fits well. The construction reliability and average variation extraction (second-order) are shown in Figure 5. The calculated reliability of the variable construction is between 0.72 and 0.78, which is greater than the standard of 0.6, indicating that the observed variable provides a credible construction measurement of the latent variable. The average variance variation extraction is between 0.72 and 0.80, and both are greater than 0.5. Perceived curriculum is a combination of teaching materials, adequate teaching equipment, professional courses, modernization of teaching equipment, and teaching content. Perception of academic resources is a combination of library facilities, library resources, computer facilities, and the Internet; perception management mechanism is based on the convenience of service time, feedback channels, and student participation. Perceived campus environment is a combination of campus location, campus beauty, and campus safety; perceived school activities are community activities, internship opportunities, student exchanges, and physical activities; perceived life and accommodation are a combination of food quality, canteen facilities, food prices, accommodation conditions, and accommodation charges. Cronbach's α of all concepts included in this study is higher than the minimum critical value of 0.6 recommended by related researches, indicating that the measurement model exhibits good internal consistency. The weight of each latent variable is shown in Figure 6. Structural Equation Model Analysis. In the model designed in this article, expectation variables are not added. The formation of expectation is largely derived from the students' past consumption experience. Higher education is different from general services. In the field of education, students have little or no prior knowledge of the education to be received. Before consuming education services, although students' information sources include admission promotion, teachers, family members, or relatives and friends, the students themselves have no previous consumer experience to refer to, and they know very little about the level of services that the school will provide. Student expectations lack effectiveness, so student expectations are not included in the research model in this study. The standardized coefficient value of the research sample is greater than 0.5 and does not exceed 0.95; except for the path from service quality to student satisfaction, the t value is greater than 1.96 at the 0.05 significant level. 
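The reliability indices reported in this section can be computed directly from item scores and standardized loadings. The sketch below only illustrates the standard formulas for Cronbach's α, composite (construct) reliability, and average variance extracted; the loading values are placeholders rather than the study's LISREL output.

```python
# Sketch of the reliability indices used in this section. Cronbach's alpha is
# computed from raw item scores; composite (construct) reliability and AVE are
# computed from standardized factor loadings (Fornell-Larcker formulas). The
# study itself used LISREL 8.7; the numbers below are placeholder illustrations.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scores for one construct."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    return lam.sum()**2 / (lam.sum()**2 + (1.0 - lam**2).sum())

def ave(loadings):
    lam = np.asarray(loadings, dtype=float)
    return (lam**2).mean()

if __name__ == "__main__":
    lam = [0.78, 0.81, 0.74, 0.69]            # placeholder standardized loadings
    print("CR  =", round(composite_reliability(lam), 3))   # compare to the 0.6 cut-off
    print("AVE =", round(ave(lam), 3))                     # compare to the 0.5 cut-off
```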
Both the factor loadings and the t values are significant, indicating that the data have good convergent validity. The measurement errors of the observed variables show no large standard errors and no negative error variances. These results show that there are no offending estimates in the measurement model, so the fitness of the overall model can be tested. The construct reliability of each latent variable is greater than the 0.6 standard, indicating that the observed variables provide a credible measurement of the six latent variables, and the average variance extracted is greater than the 0.5 standard. Therefore, the reliability of each latent variable is relatively high. The comparison of the accuracy of different image recognition algorithms is shown in Figure 7. It can be seen that the multiobjective SVM based on three decision-making theories proposed in this paper has higher accuracy in image recognition of cultural and educational behaviors in colleges and universities. Student satisfaction has a significant direct positive effect on students' behavior tendency. Service quality has a direct positive impact on perceived value and student trust. The direct impact of service quality on student satisfaction is not significant, but service quality has an indirect impact on student satisfaction through perceived value and student trust, and an indirect impact on behavior tendency through student trust and student satisfaction. The image of colleges and universities acts on student satisfaction through perceived value and student trust. The image of colleges and universities has a direct impact on student behavior tendency, and it also acts on student behavior tendency through student trust and student satisfaction. Among the influences on behavior tendency, the direct positive influence of student trust is the biggest, followed by the image of colleges and universities and the quality of service. In addition, the time-consumption comparison of different image recognition algorithms is shown in Figure 8. It can be seen that the time consumption of the multiobjective SVM image recognition of university cultural education behavior based on three decision-making theories in this paper is the lowest, which shows that the real-time performance of the algorithm in this paper is the best. Conclusion This paper introduces three decision-making ideas into the improvement of traditional image recognition algorithms for cultural and educational behaviors in colleges and universities. Through theoretical analysis, it is concluded that the three-branch decision can be applied to the field of cultural and educational behavior image recognition in colleges and universities, and the traditional college cultural and educational behavior image classification algorithm is then improved from the traditional two-branch division to the three-branch decision. The process is more in line with the way of human thinking. A classifier for image recognition of cultural and educational behaviors in colleges and universities based on three decision-making theories is designed and constructed. Experiments have demonstrated the feasibility and accuracy of the classifier and confirmed that the three decision-making theories can be combined with the field of cultural and educational behavior image recognition in colleges and universities.
To address the problem that traditional image segmentation of cultural and educational behaviors in colleges and universities uses only regular rectangles, so that the same sub-object may be split across different sub-modules, the three-way decision-based image recognition classifier proposed in this paper introduces the idea of delayed decision-making. The judgment conditions are selected by retraining on the results of the previous classification; after a limited number of iterations, the segmentation assigns the same target to the same sub-module, thereby reducing the segmentation error that causes recognition errors in traditional approaches and improving the accuracy of cultural and educational behavior image recognition in colleges and universities.

The empirical results show that service quality has a direct and positive impact on perceived value and student trust, while the direct impact of service quality on student satisfaction is not significant. The image of colleges and universities acts on student satisfaction through perceived value and student trust. The image of colleges and universities has a direct impact on student behavioral tendency and also acts on behavioral tendency through student trust and student satisfaction. Among the influences on behavioral tendency, the direct positive influence of student trust is the largest, followed by the image of colleges and universities and service quality. Interactivity has an important influence on students' perception of education services: a one-way analysis of variance of the impact of teacher-student and classmate relationships on perceived education services shows that students with good teacher-student relationships, and students with good classmate relationships, rate education services higher.

Data Availability. The data used to support the findings of this study are included within the article.

Conflicts of Interest. The authors declare that they have no conflicts of interest.
How many replicates to accurately estimate fish biodiversity using environmental DNA on coral reefs? Abstract Quantifying fish species diversity in rich tropical marine environments remains challenging. Environmental DNA (eDNA) metabarcoding is a promising tool to face this challenge through the filtering, amplification, and sequencing of DNA traces from water samples. However, because eDNA concentration is low in marine environments, the reliability of eDNA to detect species diversity can be limited. Using an eDNA metabarcoding approach to identify fish Molecular Taxonomic Units (MOTUs) with a single 12S marker, we aimed to assess how the number of sampling replicates and filtered water volume affect biodiversity estimates. We used a paired sampling design of 30 L per replicate on 68 reef transects from 8 sites in 3 tropical regions. We quantified local and regional sampling variability by comparing MOTU richness, compositional turnover, and compositional nestedness. We found strong turnover of MOTUs between replicated pairs of samples undertaken in the same location, time, and conditions. Paired samples contained non‐overlapping assemblages rather than subsets of one another. As a result, non‐saturated localized diversity accumulation curves suggest that even 6 replicates (180 L) in the same location can underestimate local diversity (for an area <1 km). However, sampling regional diversity using ~25 replicates in variable locations (often covering 10 s of km) often saturated biodiversity accumulation curves. Our results demonstrate variability of diversity estimates possibly arising from heterogeneous distribution of eDNA in seawater, highly skewed frequencies of eDNA traces per MOTU, in addition to variability in eDNA processing. This high compositional variability has consequences for using eDNA to monitor temporal and spatial biodiversity changes in local assemblages. Avoiding false‐negative detections in future biomonitoring efforts requires increasing replicates or sampled water volume to better inform management of marine biodiversity using eDNA. | INTRODUC TI ON Biodiversity is changing faster than our ability to accurately quantify species losses and gains (Ceballos et al., 2020;Filgueiras et al., 2021), with consequent difficulties in evaluating the degradation of ecosystem functions and services upon which human well-being depends (Díaz et al., 2019). Traditional methods such as visual surveys are costly and time-consuming and require on-site taxonomic expertise (Ballesteros-Mejia et al., 2013;Dornelas et al., 2019;Kim & Byrne, 2006). Despite decades of sampling efforts, biodiversity monitoring still covers only a small fraction of global ecosystems and is challenging in isolated and remote regions across the oceans (Collen et al., 2009;Dornelas et al., 2018;Letessier et al., 2019;Webb et al., 2010). An emerging tool for rapid biodiversity assessment is environmental DNA (eDNA) metabarcoding Stat et al., 2017), which is proving to be effective in marine environments (Boulanger et al., 2021;Holman et al., 2021;Juhel et al., 2020). Widespread application of eDNA metabarcoding in marine ecosystems faces multiple challenges (Hansen et al., 2018). Importantly, variability sources exist in the recovered biodiversity estimates which are poorly understood (Bessey et al., 2020;Juhel et al., 2020;Rourke et al., 2021;Thalinger et al., 2021). 
Detection rates and the resultant variability in biodiversity estimates depend on eDNA (a) origin (the source of an organism's genetic material shed into its environment), (b) state (the forms of eDNA), (c) transport (e.g., through diffusion, flocculation or settling, currents, or biological transport, which can vary according to depth), and (d) fate (how eDNA degrades and decays) (Barnes & Turner, 2016; Harrison et al., 2019; Thalinger et al., 2021), with DNA particles best preserved in cold and alkaline waters with low exposure to solar radiation (Moyer et al., 2014; Pilliod et al., 2014; Strickler et al., 2015; but see Mächler et al., 2018). As a result, marine eDNA residence time is shorter than in freshwater and ranges from a few hours to a few days (Collins et al., 2018). Marine systems are open, with eDNA particles dispersed by oceanographic dynamics at local (e.g., tides, currents, and water stratification), regional (e.g., eddies), and large (e.g., thermohaline currents) scales. As such, significant dispersal of eDNA from its source may theoretically occur (Andruszkiewicz et al., 2019; Eble et al., 2020); however, many studies indicate that eDNA detection is limited to a small spatiotemporal sampling window (Boulanger et al., 2021; O'Donnell et al., 2017; Port et al., 2016; Stat et al., 2019; West et al., 2020; Yamamoto et al., 2017). We test whether eDNA sampling strategies need to overcome this potentially high noise-to-signal ratio or whether small spatiotemporal sampling windows exist that provide a consistent view of local biodiversity. The most common approach for concentrating marine eDNA is water filtration along transects (Kumar et al., 2020), but the appropriate amount of water to filter remains underdetermined (e.g., 1 L in Nguyen et al., 2020 and 30 L in Polanco Fernández et al., 2020). An increased volume of water should lead to increased compositional similarity among replicates, but even at 2 L, 30%-50% of the total species pool was missing in any given sample (Bessey et al., 2020).
The question remains whether a larger water volume, which integrates eDNA signal over multiple kilometers, can provide a less variable and more consistent estimate of biodiversity. In addition to the volume of water, a high level of eDNA sampling replication in the field can be required to reduce false negatives (species present but not detected) and improve the accuracy of biodiversity estimates of local sites and regions. For example, 92 × 2 L seawater samples accurately predict (R 2 = 0.92) the distribution of species richness for different fish families (Juhel et al., 2020). Spatial diversity gradients have been recovered from only 3 × 0.5 L water samples in temperate (Thomsen et al., 2012) and tropical systems (West et al., 2020). However, West et al. (2020) report that more replicates were necessary to avoid false negatives and better sample diversity in a given site (>8). Budget and time limitations constrain the number of sampling replicates available (Ficetola et al., 2015)which require optimization to take full advantage of eDNA-based surveys. Here, we compared biodiversity of replicated eDNA samples in terms of Molecular Operational Taxonomic Units (MOTUs) since genetic reference databases have many gaps for tropical fishes . We assessed within-site MOTU richness (αdiversity) and between-site MOTU dissimilarity (β-diversity) separating the turnover and nestedness components (Baselga, 2012). We targeted tropical fishes across eight different sites within the Caribbean, Eastern Pacific, and Western Indian Ocean using the same standardized sampling protocol. Over transects 2 km long, we filtered 30 L of water per sample, with paired samples per transect. In addition, we performed a replication experiment in two locations by repeating transects multiple times in a ~24 h period. Although multiple markers can be associated with greater recovery rates (Polanco-Fernandez et al., 2021), we used a single primer pair due to cost constraints and thus provide assessment for a pragmatic and costefficient sampling regime. Our objectives were to (a) establish the comparability of fish diversity estimates from replicated eDNA samples collected at the same time, in the same location and under similar conditions, (b) identify the number of eDNA replicates required to saturate diversity curves at a given local site for our protocol (e.g., a using single primer), (c) compare the above patterns among three ecologically distinct tropical ocean regions, and (d) examine whether our sampling protocol saturates regional fish biodiversity estimates. Given that we filtered far more water than previous saturation experiments, we may expect higher eDNA detections whereby MOTU richness and composition should be very similar among the paired replicates-providing robust estimates of biodiversity. In this case, the replicate accumulation curve should saturate rapidly and reach an asymptotic maximum suggesting that the maximum potential diversity for a given sampling design is achieved (e.g., filtering, primer, sequencing methodology). In the opposite case, it would indicate that even a high volume of filtration and a large number of replicates would be required to inventory fish biodiversity regionally. | Sampling sites and eDNA sampling protocol We filtered surface seawater across eight sampling sites in three different oceanic regions: Caribbean Sea, Western Indian Ocean, and the Eastern Pacific (Figure 1). 
[Figure 1 caption: Sampling sites in the Eastern Pacific, Caribbean, and Western Indian Ocean. The eight sampled sites, shown with Google Earth imagery, illustrate the spatial distribution of transects within sites. Markers represent the beginning of eDNA transects in each site; color and shape indicate whether samples were used in the local accumulation analysis (static samples repeated multiple times in a shorter period, red circles) or in regional/island-level accumulation curves (blue triangles).]

At each of the eight sampling sites, several transects were carried out with at least two filtration replicates per transect (see Table 1). Filtration replicates per transect were performed simultaneously on either side of a small boat moving at 2-3 nautical miles per hour while filtering surface seawater for 30 min, resulting in approximately 30 L of water filtered per replicate. The shape of the 2 km transects varied to match the configuration of the reefs but was always consistent between the compared replicates.

2.2 | eDNA processing, sequencing, and clustering

eDNA extraction, PCR amplification, and purification prior to library preparation were performed in separate, dedicated rooms following the protocols described in Polanco Fernández et al. (2020) and Valentini et al. (2016). eDNA was amplified using the teleo primer pair (forward: ACACCGCCCGTCACTCT, reverse: CTTCCGGTACACTTACCATG), which targets a ~60 base pair marker within the mitochondrial 12S ribosomal RNA gene and shows high accuracy in detecting both bony (Actinopteri) and cartilaginous fish (Chondrichthyes) (Collins et al., 2019). The primers were 5′-labeled with an eight-nucleotide tag unique to each PCR replicate, with forward and reverse tags identical, allowing the assignment of each sequence to the corresponding sample during sequence analysis. Twelve PCR replicates were run per sample, that is, 24 per transect. While sample-to-sample variation in PCR replicates exists, we used a multitube procedure and pooled 12 PCR replicates prior to analyses, which is shown to reduce PCR stochasticity (Taberlet et al., 1996). Further, this methodology has accurately recovered biodiversity patterns from traditional surveys (e.g., Czeglédi et al., 2021). Fifteen libraries were prepared using the MetaFast protocol (Fasteris). For seven libraries (Caribbean and East Pacific sites), paired-end sequencing (2 × 125 bp) was carried out using an Illumina HiSeq 2500 sequencer on a HiSeq Rapid Flow Cell v2 using the HiSeq Rapid SBS Kit v2 (Illumina), and for the remaining eight libraries (Western Indian Ocean sites), paired-end sequencing was carried out using a MiSeq (2 × 125 bp) with the MiSeq Flow Cell Kit v3 (Illumina), following the manufacturer's instructions. To control for any potential biases linked to the differences in sequencing platforms, the samples were titrated before library preparation to achieve a theoretical sequencing depth of 1,000,000 reads per sample in each library and sequencing platform. Library preparation and sequencing were performed at Fasteris facilities. Fifteen negative extraction controls and six negative PCR controls (ultrapure water, 12 replicates per PCR control) were amplified per primer pair and sequenced in parallel to the samples to monitor possible contaminants.

To provide accurate diversity estimation in the absence of a complete genetic reference database, we used sequence clustering and stringent cleaning thresholds. This procedure has been validated in Marques et al. (2020) and generates highly correlated alpha, beta, and gamma diversity between traditional taxonomic and MOTU-based diversity estimates (correlation r ~ 0.98). Clustering was performed using the SWARM algorithm, which uses sequence similarity and abundance patterns to cluster multiple variants of sequences into MOTUs (Fisher et al., 2015; Rognes et al., 2016). First, sequences were merged using vsearch (Rognes et al., 2016); next, we used cutadapt (Martin, 2011) for demultiplexing and primer trimming; and finally, we used vsearch to remove sequences containing ambiguities. SWARM was run with a minimum distance of one mismatch to make clusters. Once the MOTUs were generated, the most abundant sequence within each cluster was used as a representative sequence for taxonomic assignment (see Polanco Fernández et al., 2020 for details). We applied a post-clustering curation algorithm (LULU) to identify potential errors, using sequence similarity and co-occurrence patterns, which curates the data by removing MOTUs identified as artifactual without discarding rare but real MOTUs (Frøslev et al., 2017). We removed all occurrences with fewer than 10 reads per PCR. Finally, we removed all MOTUs present in only one PCR replicate within the entire data set. This additional step was necessary as PCR errors were unlikely to be present in more than one PCR occurrence, and it removed spurious MOTUs that inflated diversity estimates by a factor of two when compared to true diversity. As such, we provide conservative MOTU diversity estimates in which we limited the number of false-negative MOTUs while also removing many false positives. Pseudo-genes were unlikely to bias our analyses because nuclear DNA is rare in eDNA samples (Capo et al., 2021; Stat et al., 2017) and is outnumbered by a factor of hundreds to thousands by the mitochondrial eDNA of focus here (Robin & Wong, 1988).

| MOTU richness

We first compared MOTU local richness with the expected richness of the species pool in the eight sites. For this, we created MOTU presence-absence matrices containing every replicate of each region. We also compiled fish presence-absence matrices from species lists for each of the eight sites from the literature (e.g., the Scattered Islands) and tested whether richness differed between regions using a Kruskal-Wallis rank sum test. We also related the MOTU richness per replicate to the site richness (from species lists) using a linear model (but note the different sequencing platforms between regions). We estimated the recovered MOTU richness for each filtration replicate per transect and determined whether the mean α-diversity differed between paired filtration replicates for a given transect using a Wilcoxon signed-rank test.

| MOTU compositional dissimilarity

To understand the variability in MOTUs recovered between filtration replicates, we quantified the compositional similarity of MOTUs. We estimated the pairwise Jaccard's dissimilarity index (β_jac) between filtration replicates per transect using the R package vegan (Oksanen et al., 2019). The Jaccard index ranges from 0 (species composition between the replicates is identical, i.e., complete similarity) to 1 (no species in common between the replicates, i.e., complete dissimilarity). We partitioned the Jaccard index into turnover (β_jtu) and nestedness (β_jne) components using the R package betapart (Baselga & Orme, 2012). Nestedness quantifies the extent to which replicates are subsets of each other, whereas turnover indicates the amount of species replacement among replicates, that is, the substitution of species in one replicate by different species in the other (Baselga & Orme, 2012; Legendre & De Cáceres, 2013). A minimal numerical sketch of this partition is given below.
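The sketch below is an illustrative re-implementation in Python of what the R package betapart computes for a single pair of replicates (Baselga's Jaccard-family partition); the presence-absence vectors are hypothetical.

```python
import numpy as np

def jaccard_partition(x: np.ndarray, y: np.ndarray):
    """Partition pairwise Jaccard dissimilarity into turnover and
    nestedness-resultant components (Baselga 2012, Jaccard family).
    x, y: presence-absence vectors (1 = MOTU detected) of equal length."""
    a = np.sum((x == 1) & (y == 1))          # shared MOTUs
    b = np.sum((x == 1) & (y == 0))          # unique to replicate x
    c = np.sum((x == 0) & (y == 1))          # unique to replicate y
    beta_jac = (b + c) / (a + b + c)
    beta_jtu = 2 * min(b, c) / (a + 2 * min(b, c))   # turnover
    beta_jne = beta_jac - beta_jtu                   # nestedness-resultant
    return beta_jac, beta_jtu, beta_jne

# Hypothetical paired filtration replicates from one transect
rep1 = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 1])
rep2 = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 0])
print(jaccard_partition(rep1, rep2))  # high turnover, zero nestedness component here
```

In practice the study computes these quantities with vegan and betapart in R; the sketch only makes the arithmetic of the partition explicit.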
In addition, we tested whether β_jac differed between the regions using a Kruskal-Wallis rank sum test. (Table 1 gives an overview of eDNA sampling across regions and sites in our study.)

| Local-scale MOTU accumulation curves

To analyze the local-scale richness accumulation, we repeated circular transects multiple times in Malpelo and Santa Marta. We sampled two locations in Santa Marta, filtering 6 replicates at each within 20 hr, and one location in Malpelo, filtering 10 replicates within 3 days. This sampling design defined three local MOTU accumulation "experiments." We produced MOTU richness accumulation curves across filtration replicates from each location using the specaccum function from the R package vegan (Oksanen et al., 2019). The "random" method was used to generate 1,000 accumulation curves, which were used to fit 14 models with the sar_average function in the R package sars. We compared model fits, selecting the model with the lowest AIC. We generated multimodel mean averages, which were used for asymptote calculations, extrapolation, and visualization. We next used the sar_pred function to extrapolate MOTU richness for up to 60 filtration replicates. We defined asymptotes as the number of replicates at which less than 1 new MOTU was added per additional sample (a minimal sketch of this accumulation-and-asymptote procedure is given below).

| Regional-scale MOTU accumulation curves

In contrast to the saturation curves at one location, we assessed the extent to which our eDNA protocol captures regional fish biodiversity. MOTU accumulation curves were calculated using all filtration replicates in each of the eight sites. Species accumulation curves were produced and compared as above (Figure 1; Table 1) rather than within localized repeated transects. All transects and replicates from all stations within a sampling site were pooled to form a site-wide (or regional) accumulation curve.

| Overview of eDNA biodiversity patterns

We detected a total of 789 unique MOTUs assigned to bony and cartilaginous fish taxa. Site MOTU richness was significantly and positively associated with the size of the site species pool (slope = 0.1, t = 4.7, p < .001; Figure 2), reconstructing large-scale biodiversity gradients across the tropics.

| MOTU richness per replicate

The fish MOTU richness detected by each filtration replicate (n = 100) ranged from 3 to 162, with a mean of 58.3 ± 35.6 MOTUs (Figure 2).

| MOTU compositional dissimilarity between replicates …

| Local-scale MOTU accumulation curves

The accumulated fish MOTU richness in the two locations in Santa Marta … replicates (i.e., within our number of replicates), except for Santa Marta, where an additional 6 replicates are predicted to be required to reach an asymptote (Figure 5).

| Regional-scale MOTU accumulation curves

In the Western Indian Ocean, where sampling was less exhaustive, regional MOTU richness did not saturate and reached between 46.4% (Tromelin) and 82.7% (Grande Glorieuse) of the predicted asymptotic MOTU richness. To reach an asymptotic richness of 172.3-320.2 MOTUs in the Western Indian Ocean, our estimates suggest that between 30 and 52 replicates would be required. The shapes of the regional accumulation curves were qualitatively different between the three oceans and showed differing levels of both diversity and sampling exhaustiveness across sites (Figure 5).
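The accumulation-and-asymptote logic referred to above can be illustrated with a minimal sketch on a hypothetical replicate-by-MOTU presence-absence matrix; the real analyses used specaccum (vegan) and the sars package in R, so the code below is only a schematic stand-in.

```python
import numpy as np

def random_accumulation(matrix: np.ndarray, n_permutations: int = 1000, seed: int = 0):
    """Mean MOTU accumulation curve over random orderings of replicates.
    matrix: replicates x MOTUs presence-absence array."""
    rng = np.random.default_rng(seed)
    n_rep = matrix.shape[0]
    curves = np.zeros((n_permutations, n_rep))
    for i in range(n_permutations):
        order = rng.permutation(n_rep)
        seen = np.zeros(matrix.shape[1], dtype=bool)
        for j, rep in enumerate(order):
            seen |= matrix[rep].astype(bool)
            curves[i, j] = seen.sum()
    return curves.mean(axis=0)

def asymptote_reached(curve: np.ndarray):
    """First sample size at which < 1 new MOTU is added per extra replicate."""
    gains = np.diff(curve)
    idx = np.where(gains < 1)[0]
    return int(idx[0] + 2) if idx.size else None   # gains[k] is the step from k+1 to k+2 replicates

# Hypothetical data: 25 replicates x 200 MOTUs, each MOTU detected with probability 0.3 per replicate
rng = np.random.default_rng(1)
mat = (rng.random((25, 200)) < 0.3).astype(int)
curve = random_accumulation(mat)
print(curve.round(1))
print("asymptote at replicate:", asymptote_reached(curve))
```

If the asymptote is never reached within the available replicates, the function returns None, which corresponds to the non-saturated curves reported for the Western Indian Ocean sites.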
Our results were qualitatively insensitive to the definition of the asymptote used (see Figures S2-S3).

| DISCUSSION

Since biodiversity changes are most often detected as compositional turnover, but not necessarily richness changes, we highlight a major challenge in developing eDNA to monitor ecosystem modifications through space and time (Blowes et al., 2019; Dornelas et al., 2014; Hill et al., 2016; Santini et al., 2017). Our results imply that if sample variability is not accounted for, or survey designs are not well replicated, eDNA-derived time series could over-emphasize compositional turnover by containing many false negatives. This point will be exacerbated where incomplete reference databases recover a small portion of common species and falsely identify low species turnover among samples (Schenekar et al., 2020), even though the MOTU turnover identified here may be very high. We found MOTU compositional differences between replicates to be higher in the more speciose Western Indian Ocean (under similar sampling protocols), perhaps due to the larger species pool, further challenging eDNA applications in the most diverse tropical systems (Juhel et al., 2020). Current protocols should be cautiously applied to biomonitoring if such limitations remain unresolved. Our results also imply that many replicates of >30 L of water are needed to reach a stable estimate of total local biodiversity. Promisingly, the regional biodiversity of tropical systems was relatively well quantified through repeated eDNA sampling (e.g., Figure 5), and more exhaustive biodiversity estimates may be achieved by including mesophotic coral ecosystems (from −300 m depth to the subsurface) and various habitats (e.g., lagoons, reef slope, mangroves, seagrass) (Juhel et al., 2020).

The well-established community pattern that many species are rare and few are common (McGill et al., 2007) also likely exists in eDNA particles. Moreover, finding rare eDNA fragments in any given sample may be exacerbated by features of marine systems. For example, we likely sampled vagrant open-ocean species that pass through temporarily in some of our remote sites (e.g., Malpelo), which may have increased sampling variability. Compared to terrestrial systems, the seawater environment may homogenize eDNA that comes from different habitats (e.g., coral, rock, sand, seagrass). The eDNA species pool could be larger in a seawater sample than expected based on habitat variation along a given 2 km transect. Dispersion of eDNA between distinct habitats (e.g., from seagrass beds to coral reefs) would enhance the likelihood of finding a rare habitat specialist from a different habitat type and increase perceived sampling variability. As such, eDNA variability may be greater in seascapes with a greater diversity of habitats. Understanding where, and to what extent, these varying processes act to modify spatial and temporal eDNA distribution is critical to disentangling biodiversity variation from sampling variation on reefs. Sampling designs may need to account for the extent to which a given water body accumulates sources of eDNA, and the amount of habitat variation that a water sample signal is aggregated over. To use eDNA-derived data most effectively, statistical analyses may need to control for habitat variation before reaching conclusions (Boulanger et al., 2021). Marine eDNA protocols are challenged by the compositional turnover between replicates.
As in traditional approaches, saturation of biodiversity samples only occurs with many replicates on tropical reefs (MacNeil et al., 2008). However, traditional methods like underwater visual census (UVC) and baited remote underwater video (BRUVs) are systematically biased by observer effects and fish behavior, leading to false negatives for cryptic and elusive species (Ackerman & Bellwood, 2000; Bernard et al., 2013; MacNeil et al., 2008). For example, we found ~30 Chondrichthyes species that typically would not be encountered on visual surveys (e.g., two Mobula species).

[Figure 4 caption: Local-scale MOTU richness accumulation analysis of eDNA filtration replicates from Santa Marta and Malpelo. The curves show the multimodel mean average of the local MOTU richness and the richness extrapolation for the filtration replicates collected by repeated sampling at the same location over a short period. Colored text boxes indicate the final sampled richness and the percentage of the estimated richness asymptote reached with our filtration replicates. Points on the curve mark the asymptote (defined as a <1 MOTU increase in species richness per added sample). The asymptotic MOTU richness plus the number of filters required to reach the asymptote are noted in the white text box next to the curves. The solid line shows the richness of the filters collected during actual sampling; the dotted line is the extrapolation of richness up to 60 filters. The curve color corresponds to the sampling regions: Santa Marta (light orange: "tayrona_camera_1," dark orange: "tayrona_camera_2"), Malpelo (blue). See Figure S1 for the same analysis conducted on MOTUs assigned to the nearest taxonomic rank.]

Coral reefs are extremely speciose (Edgar et al., 2017; Fisher et al., 2015), and so 60 L (two replicates) or even 180 L (six replicates) does not seem to fully quantify local biodiversity. Instead, in support of other eDNA studies that filtered far less water, our replicates only sampled a portion of diversity (Bessey et al., 2020; DiBattista et al., 2017; Juhel et al., 2020; Koziol et al., 2019; Sigsgaard et al., 2019; Stat et al., 2019). In temperate systems, 20 L of water was sufficient for fish family richness to saturate (Koziol et al., 2019; but see Evans et al., 2017), but tropical systems are more challenging to monitor. The number of eDNA replicates needed to ensure tropical fish diversity saturation varies widely. For example, 32-39 samples of 0.5 L of water began to saturate fish genus diversity in western Australia, but 92 samples of 2 L did not saturate diversity in West Papua, Indonesia, a hotspot of fish diversity (Juhel et al., 2020). Furthermore, even the largest sample of 2 L in Bessey et al. (2020) only detected <43% (75/176) of the total species pool reported in the Timor Sea. eDNA accumulation curves often confound site-accumulated (regional) and replicate-accumulated (local) diversity, presenting challenges for replicate number and water volume refinements (but see Bessey et al., 2020). Comparing available estimates, integrative sampling (performed here), rather than point sampling (e.g., Stat et al., 2019; Juhel et al., 2020), appears very promising. For example, in Caribbean and Eastern Pacific sites, within ~25 filters we found that additional filters added only <1 MOTU.
Previous works using point samples have far higher sampling numbers and higher DNA analysis costs per filter, thus leading to apparently lower cost-effectiveness (unless filters are aggregated at the DNA extraction step; e.g., Juhel et al., 2020; Stat et al., 2019). Future work should optimize sampling designs and the trade-off between water sample volume and replicate number, which we only partially explore, and how these factors contribute to the precision of biodiversity estimates in controlled settings (Miya et al., 2015). For example, if sampling nearer to substrate bottoms greatly improves recovery of eDNA, the additional cost (e.g., divers, submersibles, and additional expertise) could work out as a cost-effective solution to address surface sampling variability. Another option would be to use previous knowledge of the biodiversity of each site to adapt the number of replicates needed to reach expected saturation.

A similar pattern of low compositional similarity, and consistent richness in replicates, could arise if filters saturate with eDNA and prevent the full quantification of biodiversity. Our analyses suggest this is unlikely because the richness recovered from the eDNA filters was associated with the size of the species pools, which would be unexpected if filters had a maximum richness capacity that was reached consistently. Furthermore, we might expect nestedness to be more important if filters or PCR processes were first saturated with the most commonly available eDNA, but we found that MOTU compositional differences between replicates were more strongly related to turnover than to nestedness. Finally, if filters first saturate with common species, eDNA recovery of rare species would be limited, but with our eDNA protocol we find many species that remain undetected or rare in visual surveys (Polanco Fernández et al., 2020). Promisingly, this suggests not only that our sampling protocol is robust but also that sampling and filtering an even greater water volume per filtration replicate is a feasible approach to better quantify the high fish diversity of coral reefs. Given the low biomass-to-water ratio in marine systems, a high volume of filtered water is likely a prerequisite for representative sampling of the marine environment (Bessey et al., 2020). However, other parameters must be considered and explored in the future to identify whether physicochemical and local oceanographic conditions introduce variability into biodiversity estimates (Collins et al., 2018).

[Figure 5 caption: Regional MOTU richness accumulation curves of eDNA filtration replicates across the Caribbean, Eastern Pacific, and Western Indian Ocean. The curves show the multi-model mean averages of the local richness and richness extrapolation (number of MOTUs) for the number of filters (sample size) from each region. Points on the curve represent the asymptote (defined as a less than 1 MOTU increase in species richness per added sample). The asymptote for the MOTU richness plus the number of filters needed to reach the asymptote is noted in the text box below the curves. The solid line shows the richness of the filters collected; the dotted line is the extrapolation of richness up to 60 filters. The colors of the curves correspond to the sampling area: Caribbean Sea (orange), Eastern Pacific (light blue), and Western Indian Ocean (grey).]

| CONCLUSION

Our findings underline both the promises and the limitations of eDNA-derived biodiversity estimates in hyperdiverse tropical ecosystems.
On one hand, local richness estimation appears to rapidly resolve broad-scale richness patterns of underdocumented tropical marine biodiversity (Costello et al., 2010; Menegotto & Rangel, 2018). On the other hand, stochasticity between sample replicates urges cautious application to biomonitoring, and further protocol refinement, to avoid misattribution of biodiversity trends to detection errors. A better understanding of the behavior of eDNA in diverse physicochemical marine environments will help design more effective eDNA sampling protocols and disentangle sampling errors from true biodiversity patterns (Harrison et al., 2019). Resolving whether more replicates, or greater water volumes, lead to a higher probability of eDNA recovery is critical for cost-effective eDNA protocols, but integrative sampling of tens of liters along boat transects appears a promising approach. Using multiple primer sets may also improve the rate of biodiversity sampling saturation, but this possibility remains unexplored here. We also recommend testing various water sampling strategies, for example sampling not only surface water but also taking eDNA along a depth gradient, where the ecology of eDNA may differ. Accurate, cheap, and fast biodiversity estimates are critically needed to monitor changes in the Anthropocene ocean. Current eDNA protocols provide higher and more realistic estimates of biodiversity than traditional methods for a given sampling effort. This opens very promising and realistic perspectives for quantifying biodiversity, since increasing the volume of water filtered and the number of replicates is feasible, particularly in regions with high biodiversity. Further refinement of our marine eDNA protocol will better quantify, monitor, and manage changing tropical marine biodiversity.

CA was funded by an "étoile montante" fellowship from the Pays de la Loire region (grant number 2020_10792).

ACKNOWLEDGMENTS
EM was supported by the FAIRFISH project (ERC starting grant: 759457). We thank SPYGEN staff for technical support in the laboratory and are grateful to PE Guerin for his support in bioinformatics pipeline development.

CONFLICT OF INTEREST
None declared.

DATA AVAILABILITY STATEMENT
We agree to archive our data in Dryad on acceptance of our manuscript.
Forskolin-driven conversion of human somatic cells into induced neurons through regulation of the cAMP-CREB1-JNK signaling Human somatic cells can be reprogrammed into neuron cell fate through regulation of a single transcription factor or application of small molecule cocktails. Methods: Here, we report that forskolin efficiently induces the conversion of human somatic cells into induced neurons (FiNs). Results: A large population of neuron-like phenotype cells was observed as early as 24-36 h post-induction. There were >90% TUJ1-, >80% MAP2-, and >80% NEUN-positive neurons at 5 days post-induction. Multiple subtypes of neurons were present among TUJ1-positive cells, including >60% cholinergic, >20% glutamatergic, >10% GABAergic, and >5% dopaminergic neurons. FiNs exhibited typical neural electrophysiological activity in vitro and the ability to survive in vitro and in vivo more than 2 months. Mechanistically, forskolin functions in FiN reprogramming by regulating the cAMP-CREB1-JNK signals, which upregulates cAMP-CREB1 expression and downregulates JNK expression. Conclusion: Overall, our studies identify a safer and efficient single-small-molecule-driven reprogramming approach for induced neuron generation and reveal a novel regulatory mechanism of neuronal cell fate acquisition. Introduction Neurons are some of the most important cells in the body and control a wide range of physiological activities [1].Mature neurons naturally lose their proliferation and regeneration abilities, and neuronal damage, especially to brain neurons, can cause severe motor dysfunction [2].However, the regeneration of functional neurons after neuronal injury remains a major challenge [3,4].Recent studies have demonstrated that somatic cells (fibroblasts, glial cells and astrocytes) can be converted into functional neurons both in vitro and in vivo through regulation of the expression of transcription factors or induction with small molecule cocktails [5][6][7][8][9][10].Thus far, viral-based expression of transcription factors has been largely used for the conversion of somatic cells into neurons [5,10]; however, this approach introduces exogenous genes, limiting its translation into clinical applications [11].In contrast, small Ivyspring International Publisher molecule cocktails that target signaling pathways, epigenetic modifications, or metabolic processes are also capable of directly reprogramming somatic cells into neuron progenitor cells [12] or neurons [6][7][8][9].Compared to transcription factor-based reprogramming, the small molecule-induced reprogramming approach is advantageous because it is nonviral, does not require transcription factors, is cost effective, is easy to alter and standardize, and has a broad range of downstream applications [13].Therefore, small molecule strategies could potentially be translated into clinical therapeutic applications.However, the small molecule cocktails currently used for reprogramming include several small molecules that may cause unpredictable potential side effects, since their induction effects are complicated and the reprogramming mechanisms have still not been elucidated [11].These issues have significantly impeded the further clinical application of small molecules in neuronal regeneration. 
Forskolin is a diterpene produced by the roots of the Indian plant Coleus forskohlii [14].The natural small molecule compound forskolin, which has a low molecular weight and easily crosses the cell membrane and internal tissue barrier, has been used for centuries in traditional medicine, and its safety has also been documented in modern medicine [14,15].Forskolin directly activates the adenylate cyclase enzyme (AC), which generates cAMP from ATP, thus increasing intracellular cAMP levels [16], and is commonly used to reduce body fat [17].Moreover, the increases in intracellular cAMP levels can also increase the expression of PKA/CREB1, which is beneficial for neuronal survival and health because it inhibits apoptosis signaling pathways, such as the JNK signaling pathway [18,19].Previous reports have shown that forskolin can be used as a small molecule compound to promote the neural differentiation of mesenchymal stem cells [20] and the generation of chemically induced neurons (ciNs) by small molecule cocktails [6,7]; however, forskolin is reportedly not a critical small molecule for the conversion of somatic cells into ciNs [8]. Surprisingly, in the current study, we discovered that forskolin induction alone can highly efficiently reprogram human somatic cells directly into induced neurons (FiNs), including a wide range of neuronal-subtype cells, which has never been described.These FiNs can survive for >2 months in culture and display significant robust neural electrophysiological activity.Injecting these induced neurons into the mouse brain in vivo revealed that these human FiNs can survive for >2 months.Moreover, our findings demonstrate that forskolin participates in the conversion of somatic cells into FiNs by regulating the cAMP-CREB1-JNK signals. The regulatory effects of any single site of this pathway can induce this conversion successfully.Therefore, this study identifies a natural small molecule for neuronal regeneration with a clear regulatory mechanism and may offer a novel strategy for clinical application in the treatment of neurodegenerative disease. 
Conversion of somatic cells into neuronal cells by forskolin induction Our previous study [21] showed that a small chemical cocktail, BFRTV (B, TTNPB; F, forskolin; R, RepSox; T, tranylcypromine; and V, VPA), could induce fibroblasts to reprogram into mammary epithelial cells derived from the embryonic ectoderm.We then hypothesized that BFRTV might be able to induce the conversion of fibroblasts into other ectoderm-derived cells, such as neurons.Through further small molecule compound (BFRTV) screening, we surprisingly discovered that many BJ cells (human skin fibroblasts) treated with induction medium (IM) (including 10 μM forskolin; F) exhibited a bipolar neuron-like cell morphology as early as 24-36 h post-induction and significantly exhibited this morphology at 2-3 days post-induction (Figure 1A-B, Figure S1A and Supplemental Video).These bipolar neuron-like cells yielded >50% TUJ1-positive cells and >20% MAP2-positive cells at 2 days post-induction (D2).Subsequently, we replaced the induction medium with neuron maturation medium (including 10 μM forskolin).On day 5 (D5), the positive rates of TUJ1 and MAP2 were greater than 90% and 80%, respectively, and >80% of the cells expressed the mature neuronal marker NEUN (Figure 1C and S1B).BJ cells (day 0) stained negative for the neuronal markers TUJ1, MAP2 and NEUN and were used as negative controls.Human induced neuronal cells generated from somatic cells by induction with a small molecule cocktail (VCRFSGY) as described in a previous report [7] stained positively for TUJ1, MAP2 and NEUN and were used as positive controls (Figure 1C and S1C-D).Subsequently, these FiNs survived >2 months in vitro in neuronal cell culture medium without the addition of 10 μM forskolin (Figure 2B, S1A, S2A-C).Meanwhile, the human astrocyte marker antibody GFAP did not significantly stain positive during this period, and BJ cells (D0) expressed the fibroblast marker VIM (Figures S3 A).In addition, the results of quantitative real-time PCR (qRT-PCR) were consistent with the above immunofluorescence (IF) results, showing that the expression of fibroblast marker genes was significantly downregulated, while that of neuronal marker genes was significantly upregulated (Figures S3 B-C).Moreover, at 5 days post-induction (D5), the ratios of cells positive for the neuronal subtype markers choline, vGlut1, GAD67 and TH to TUJ1-positive cells were more than 60%, 20%, 10%, and 5%, respectively (Figure 1D-E).These findings indicate that BJ cells can be rapidly, easily, and efficiently reprogrammed into multiple subtypes of neurons, including cholinergic, glutamatergic, GABAergic, and dopaminergic neurons, by using a single small molecule, forskolin, without the appearance of astrocytes during the whole process.Moreover, human adult somatic cells (human adult skin fibroblasts, HSFs and human adult ovarian granule cells, HGCs) also can be efficiently converted into induced neurons under forskolin induction (Figure S4).Therefore, forskolin is able to efficiently induce the conversion of human somatic cells into neurons. 
The cAMP-CREB1-JNK signals determine the cell fate conversion of BJ cells into neurons under forskolin induction. To further investigate the regulatory mechanism of FiNs in the reprogramming process, we screened small molecule compounds that act on forskolin induction-related signaling pathways. Interestingly, the results showed that the addition of cAMP (1 mM), 8-bromo-cAMP (PKA/CREB1 activator; 50 μM), SP600125 (JNK inhibitor; 10 μM) or LDN193189 (BMP/ALK2,3 inhibitor; 2.5 μM) to induction medium (no forskolin) could also reprogram BJ cells into neurons (Figure 2A), as determined by positive staining for the specific neuronal markers TUJ1, MAP2, and NEUN (Figure 2B-C). Therefore, based on known signaling pathway information, we speculate that the cAMP-PKA/CREB1-JNK signals may play a decisive role in FiN reprogramming from somatic cells (Figure 2D). To demonstrate the regulatory pathway and the key genes that regulate this reprogramming process, we further conducted gene overexpression and knockdown experiments to confirm the regulatory effects of CREB1 and JNK (MAPK8). The results showed that CREB1 overexpression with pLVX-IRES-CREB1-ZsGreen1 or JNK (MAPK8) downregulation with Lenti-CAS9-MAPK8-Puro could reprogram BJ cells into neurons, as determined by the neuronal cellular phenotype and positive IF staining of neuronal markers (TUJ1, MAP2 and NEUN) (Figure 3A-B and Figure S5 A-C). In contrast, JNK overexpression with pLVX-IRES-MAPK8-ZsGreen1 or CREB1 downregulation with Lenti-CAS9-CREB1-Puro significantly decreased the rates of TUJ1- and MAP2-positive neurons after SP600125 or forskolin induction, respectively (Figures S5 D-E). Therefore, our findings suggest that CREB1 and JNK (MAPK8) are critical regulatory genes for FiN reprogramming and demonstrate that forskolin functions in this reprogramming through the regulation of the cAMP-CREB1-JNK signals. FiNs show typical neural electrophysiological properties. To investigate the electrophysiological properties of FiNs, we used high-density microelectrode arrays (HD-MEAs) to record cells induced by forskolin for 0, 2, 5, 10, 15 and 30 days. During the forskolin induction process, the percentage of active electrodes on the HD-MEA chip gradually expanded from approximately 0.25% (D2, 2 days post-induction) to approximately 63% (D30, 30 days post-induction), and the mean firing rate (Hz) also gradually increased (Figure 4A, C-D and Figure S6 A-B). Neuronal networks are often characterized by synchronized activity (bursts) resulting from recurrent synaptic connections that form as the neuronal network matures. Testing of the different induction time groups mentioned above and raster plotting revealed that network activity, the number of spikes per burst, and the number of spikes per burst per electrode increased significantly with increasing induction time (Figure 4B, E-F). Moreover, we conducted whole-cell recordings of FiNs and found that at 15 days post-induction, FiNs were able to generate action potentials (APs) in response to injection of depolarizing step currents in current-clamp mode and possessed fast-decaying spontaneous excitatory postsynaptic currents (sEPSCs). These results are consistent with those of the HD-MEA method (Figure 4G-J). Moreover, the actual neurotransmitters (dopamine and GABA) were detected in the cell supernatant of induced neurons at 5 days post forskolin induction (Figure S6 C). Collectively, these findings suggest that FiNs possess typical neural electrophysiological activity.
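The network-level metrics reported above (mean firing rate and burst structure) can be summarized from spike timestamps in a straightforward way. The following is a minimal, illustrative sketch, not the HD-MEA vendor's analysis pipeline; the inter-spike-interval threshold and minimum burst size are hypothetical parameters, and the spike train is simulated.

```python
import numpy as np

def mean_firing_rate(spike_times: np.ndarray, duration_s: float) -> float:
    """Mean firing rate (Hz) of one electrode over a recording of given duration."""
    return len(spike_times) / duration_s

def detect_bursts(spike_times: np.ndarray, max_isi_s: float = 0.1, min_spikes: int = 3):
    """Group spikes into bursts: runs of >= min_spikes spikes whose
    inter-spike intervals are all <= max_isi_s (illustrative thresholds)."""
    bursts, current = [], [spike_times[0]]
    for prev, t in zip(spike_times[:-1], spike_times[1:]):
        if t - prev <= max_isi_s:
            current.append(t)
        else:
            if len(current) >= min_spikes:
                bursts.append(current)
            current = [t]
    if len(current) >= min_spikes:
        bursts.append(current)
    return bursts

# Hypothetical spike train for a single electrode (timestamps in seconds)
rng = np.random.default_rng(42)
spikes = np.sort(rng.uniform(0, 60, size=300))
print(f"mean firing rate: {mean_firing_rate(spikes, 60):.2f} Hz")
bursts = detect_bursts(spikes)
print(f"{len(bursts)} bursts; spikes per burst:", [len(b) for b in bursts])
```

Tracking these per-electrode summaries across induction days would reproduce, in outline, the kind of trend reported in Figure 4.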
Survival of transplanted FiNs in mouse brains To demonstrate whether transplanted FiNs could survive for long term in vivo, we conducted in vivo transplantation experiments.BJ cells were transfected with GFP-labeled lentivirus and then induced with forskolin for 2 d to generate GFP-FiNs.The GFP-FiNs-2d were trypsinized and injected bilaterally into the lateral ventricles of postnatal day 1 mice (Figure 4K).At 7, 30 and 60 days post-injection (DPI), mice were sacrificed for cryosectioning, and obvious green fluorescence was observed at the injection site at 7 DPI (Figure 4L).Subsequently, some transplanted cells with GFP in cryosections at 30 and 60 DPI were stained red by IF, which indicated that they expressed the neuronal markers TUJ1 and MAP2 (Figure 4M).These findings indicate that FiNs can survive >2 months when transplanted into mouse brains.Moreover, a few GFP-cells resembling neurons were not positive for TUJ1, which may suggest that these cells were dying-induced neurons, incompletely reprogrammed neurons or GFP-positive fibroblasts.However, additional experiments need to be conducted to clarify this point. Single-cell sequencing and mRNA-seq analyses demonstrate the cell fate conversion of BJ cells into FiNs To dissect the molecular events during FiN reprogramming, we performed single-cell RNA (scRNA)-seq to investigate the transcriptomes of individual cells collected at three time points along the reprogramming path: initiating BJ cells (BJs), cells at 3 days post-induction (D3) and cells at 5 days post-induction (D5).Using the unsupervised dimensional reduction and visualization method of uniform manifold approximation and projection (UMAP) plotting, we clustered cells from all stages into seven cell clusters (Figure 5A).Based on the marker genes for each cluster and the stages of the cells collected, the cells of cluster 1 expressing the fibroblast markers TAGLN and MYL9 were classified as initiating BJ cells, while the cells of clusters 6 and 7 within the sample at 5 days post-induction (D5) exhibited some neuron specific markers such as MAP2 and TUBB3 (Figure 5B). Moreover, based on the marker genes for each cluster, we first determined that cells of clusters 6 and 7 at 5 days post-induction (D5) were cells that had been successfully reprogrammed into induced neurons, which expressed a number of neuronspecific markers, including ASCL1, NEUROG2, NEUROD1, RBFOX3(NEUN), MAP2, TUBB3(TUJ1), etc., and the enrichment of neuronal-related GO terms (Figure 5 C-E).Some neural development and functional synapse related genes, such as the MEIS2, DDX5, SAT1, PURA and RORB, were expressed specifically in the cells of cluster 5 at 3 days post-induction (D3) (Figure 5C-D, Figure S7A), which indicated that the cells of cluster 5 more likely follow-up development into neurons.Second, we found that the cells of cluster 2 at 3 days postinduction dominantly expressed cell cycle-related genes (MKI67, CDK1, CENPF, etc.), which were enriched with cell cycle-related GO terms, while fibroblast-specific genes were downregulated (Figure 5C-D, Figure S7A).Interestingly, genes associated with early embryonic neurodevelopment were upregulated at the same time (ASPM, KIF20B, KNL1, etc.) 
(Figure 5C).These findings indicate that these cells of cluster 2 may have been in a preparatory state of neuronal lineage commitment.Third, compared with the cells of cluster 2, the cells of cluster 3 showed obvious downregulation of cell cycle-related genes followed by upregulation of a panel of genes involved in processes of neural differentiation and regeneration, such as MALAT1 (regulation of synaptogenesis and neurogenesis), NEAT1 (regulation of neuronal excitability), and NRG1 (the major synaptogenic protein) (Figure 5C).This may suggest that the cells of cluster 3 had already entered the neuronal lineage and were in an intermediate state of FiN reprogramming.Fourth, the cells of cluster 4 were enriched with many terms related to cell death and apoptosis, and neural-lineage genes were not significantly expressed, which did not seem to indicate successful reprogramming (Figure 5C-D).These findings indicated that four different cellular states may have been captured in the reprogramming route from BJ cells to FiNs.The above findings from the scRNA-seq analyses may indicate that forskolin can efficiently reprogram BJ cells into neuronal cells after forskolin induction. Based on the above findings from scRNA-seq analysis, the cell fate was changed significantly at 3 days post-induction.It may indicate that 3 days post-induction is the critical timepoint for FiN cell fate decision.In order to further clarify this point, we used a single-cell assay for transposase-accessible chromatin sequencing (scATAC-seq) to analyze BJs and 3 days post-induction cell samples (D0 and D3 cells), and the pseudo-time analysis showed that the D3 sample gradually transitioned from the fibroblast state (Figure 6A).The results revealed an open chromatin state of the cells at 3 days post-induction with increased accessibility at certain gene loci related to neural development and decreased accessibility at certain gene loci related to fibroblasts.However, the cells in BJ cells sample, showed chromatin accessibility profiles opposite those of the cells in D3 sample (Figure 6B-C).Gene Ontology (GO) analysis showed that the genes that were significantly activated at 3 days post-induction were enriched for the neural development related terms (Figure 6D).These findings are consistent with those of the above scRNA-seq analysis, and further demonstrated that some induced cells at 3 days post-induction had entered the neural cell fate commitment and subsequently had chances to develop into neuron cell fate at 5 days post-induction. 
Finally, in parallel to the scRNA/ATAC-seq analysis, we collected samples at D0, D2 and D5 to measure the global gene expression profiles by mRNA sequencing (mRNA-seq).The differentially expressed genes (DEGs) were grouped according to their expression patterns during the induction process.The genes in the upregulated group were enriched with GO terms related to neurogenesis, while the genes in the downregulated group were enriched with terms related to fibroblasts (Figure S7 B).Moreover, Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis showed that neuronal subtype terms were enriched for the D5 samples (Figure S7 C) and that neuronal subtype-related genes were upregulated gradually during the induction process (Figure S8 A-E).These findings from mRNA-seq analysis indicated that fibroblast-related genes were downregulated and neuronal-related genes were upregulated during the induction process and that BJ cells ultimately achieved neuronal cell fate.These findings were consistent with the findings of scRNA/ATAC-seq analysis.Overall, the multiomics sequencing analysis demonstrated that the conversion of forskolininduced neurons (FiNs) from somatic cells was an authentic phenomenon. ScRNA-seq reveals a successful FiN reprogramming path To more precisely understand the FiN reprogramming path, we used Monocle 2 to perform pseudotime ordering analysis to investigate the scRNA-seq data from the cells of the 3 days post-induction samples (cells in clusters 2, 3, 4 and 5).Pseudotime ordering showed that reprogramming was a continuous process that progressed from cluster 2 (pre-branch) through cluster 3 to cluster 5 (neural lineage commitment successful branch) or cluster 4 (failed branch) (Figure 7A-B).The induced cells in cluster 5 significantly expressed neural development related genes, such as the MEIS2, RORB, PURA, NFAT5 and ZBTB20, while these genes were not significantly expressed in the cells of cluster 4 (Figure 7B).Therefore, the above findings indicate that the significant expression of these genes may guarantee successful neuron cell fate commitment. Moreover, we clustered DEGs into three gene sets based on their expression dynamics along the pseudotime sequence (Figure 7C).The results of GO term enrichment analysis of each gene cluster showed that induced cells in the preparatory state (enriched with cell cycle-related terms, Figure 7C top, red) needed to go through a transient intermediate state (Figure 7C middle, green) to successfully achieve a stable neural lineage fate (enriched with neural development terms, Figure 7C bottom, blue).During the intermediate state, the expression of genes related to neuronal damage repair-related genes, such as MALAT1, MEG3, NEAT1, KCNQ1OT1, FTX, and the essential for neuronal function gene ADARB1 were also actively expressed in successful branches of neural lineage commitment rather than in failed branches (Figure 7D).These findings indicate that successful neuron cell fate commitment requires the orderly participation of multiple types of activated regulatory networks to ultimately achieve a neuron identity. 
Regulatory network of FiN reprogramming The above results demonstrated that the cAMP-CREB1-JNK pathway determines the cell fate conversion of BJ cells into neurons under forskolin induction and that CREB1 and JNK (MAPK8) are critical regulatory genes.Furthermore, the results of scRNA-seq analysis showed that CREB1 was upregulated after forskolin induction (Figure S9 A).Moreover, scATAC-seq analysis showed that the binding motif of CREB1 was significantly enriched in the cells in the day 3 (D3) cells sample (Figure S9 B).Interestingly, mRNA-seq analysis showed that a group of genes had transient upregulation in the D2 cells sample, and GO enrichment analysis revealed that the corresponding terms were related to the MAPK/JNK cascade (Figure S9 C).Therefore, these findings further confirm that CREB1 and JNK (MAPK8) are critical regulatory genes in FiN reprogramming, especially at the initial stage. Most importantly, scRNA-seq analysis also revealed the regulatory network of this reprogramming mediated by CREB1 and JNK (MAPK8).Based on pairwise correlation of the gene expression data, we constructed a regulatory network during progressive cell fate transitions from the trigger (the regulatory effects of CREB1 or JNK induced by forskolin) to FiNs (Figure 8A).In detail, three transcriptional regulatory subnetworks (the preparatory, intermediate and FiN subnetworks) were revealed chronologically.The preparatory-state subnetwork was connected with the trigger genes (CREB1 or JNK) and the transient upregulation of cell cycle-and neural development-associated genes was the result of the regulatory effects of the trigger genes.Moreover, the sequential switching of transcriptional circuits highlighted the intermediate subnetwork as the bridge linking the preparatory state to the FiN state.In fact, the continuously expressed intermediate-state genes may guarantee that the neural lineage-specific genes were expressed until the end of reprogramming to successfully stabilize the neuron cell fate. These findings indicate that the reprogramming of BJ cells to FiNs requires several transcriptional waves.First, under the regulatory effects of CREB1 or JNK as triggers, cell cycle-and neural developmentrelated genes are transiently upregulated in the preparatory state.Subsequently, the reprogrammed cells enter an intermediate state characterized by significant expression of neural development genes.Finally, neuron identity is further strengthened and stabilized by the continuous expression of these neural lineage-specific genes (Figure 8B). 
Discussion The conversion of human somatic cells into a neuronal cell fate via induction with small molecule cocktails has been reported previously [6,7]. However, there are no reports that a single small molecule can induce this conversion. Here, we report that a natural small molecule compound, forskolin, which has been safely used in many applications to treat human diseases and maintain human health [14], can efficiently induce the conversion of human fibroblasts into functional neuronal cells. Forskolin has been commonly included in small molecule cocktail-mediated induction to promote the conversion of somatic cells into neurons but has not previously been shown to induce this conversion on its own [7,8]. In our studies, we established a highly efficient induction platform for the conversion of human somatic cells into neuronal cells using the single small molecule forskolin. This induction platform allowed somatic cells to achieve a neuronal cell fate rapidly. The resulting cells included a large number of cells positive for neuron-specific markers with typical neural electrophysiological activity, including cells of several neuronal subtypes that survived for >2 months in vitro and in vivo. The speed and efficiency of induction of a neuronal cell fate from somatic cells were surprisingly faster and higher, respectively, than those reported for any other neurons induced from somatic cells. Moreover, this induction method can generate more neuronal subtype cells than any previously reported method [6][7][8][9]. It may therefore offer somatic cells access to a wide range of neuronal subtype cell fates for the therapy of various neurodegenerative diseases in vitro and in vivo. Notably, compared to the reported small molecule cocktail induction approaches, our single-small-molecule induction approach may avoid many potential side effects that can arise from different small molecules or their combined effects [11]. Such side effects could significantly affect further clinical applications. Moreover, a single small molecule can easily cross the blood-brain barrier and reach the defective tissue to induce neuronal regeneration in vivo; in contrast, it is difficult to guarantee that all compounds of a small molecule cocktail reach the defective tissue in vivo in the proper ratio, which may significantly impact their induction effects. Furthermore, forskolin may be combined with nanobioengineering techniques to establish a strategy for precise induction of in situ neuronal regeneration [22]. Overall, these findings may offer a safer, faster and more efficient approach for the generation of neurons chemically induced from somatic cells in vitro and in vivo. This approach could be used for the treatment of neurodegenerative diseases in a wide range of clinical applications.
Surprisingly, according to the signaling pathway related to forskolin's induction effects, we also discovered that other single small molecules, cAMP, 8-bromo-cAMP (PKA/CREB1 activator), SP600125 (JNK inhibitor) and LDN193189 (BMP/ALK2,3 inhibitor), can induce the conversion of human somatic cells into neuronal cells.Moreover, by conducting gene overexpression and downregulation experiments for CREB1 and JNK (MAPK8), we demonstrated that the cAMP-PKA/CREB1-JNK pathway mediates this reprogramming.Mechanistically, forskolin increases intracellular cAMP levels, subsequently upregulating PKA/CREB1 expression, downregulating JNK expression and activating three transcriptional regulatory networks (the preparatory, intermediate and FiN networks), which induces human somatic cells to reprogram into neurons.Interestingly, the regulatory effects of any single member of the cAMP-PKA/CREB1-JNK pathway can induce this conversion successfully.Therefore, we uncovered a novel regulatory pathway that mediates neuronal cell fate acquisition by somatic cells.Previous reports have indicated that the cAMP-PKA/CREB1 signaling pathway is important for regulating neuronal development [23], differentiation and survival.Activation of this pathway promotes neuronal survival and functional maintenance and repair [24,25].Moreover, JNK signaling, which is best known for its involvement in propagating proapoptotic signaling, plays a role in neuronal death, and there is evidence that this pathway may operate in various central nervous system (CNS) disease states [26].A previous report indicated that JNK inhibition may aid in the treatment of neurodegenerative diseases [27] by reducing neuronal apoptosis and that activation of the cAMP-PKA/CREB1 signaling pathway can suppress JNK activation and antagonize apoptosis [28,29].Therefore, in addition to the regulatory effects of the cAMP-PKA/CREB-JNK signaling pathway on neuronal differentiation and survival, we surprisingly discovered a novel regulatory effect of this pathway on the induction of neuronal regeneration from somatic cells.These findings reveal a novel regulatory mechanism for achieving neuronal cell fate and offer a promising therapeutic strategy for neurodegenerative diseases involving the induction of neurons from in situ somatic cells. 
Furthermore, we reconstructed the forskolin-induced reprogramming pathway via multiomics sequencing analysis. The results indicated that forskolin-induced somatic cells were directly reprogrammed into neuronal cells. This is consistent with previous reports on chemically induced neuronal cells derived from somatic cells [6,7]. However, we revealed a novel FiN reprogramming pathway driven by forskolin induction. In brief, at the early stage of forskolin induction, under the regulatory effects of CREB1 upregulation and JNK downregulation, the activation of cell cycle- (CENPF, MKI67) [30,31] and embryonic neurodevelopment-associated genes (ASPM and KIF20B) [32,33] may remodel the cell cycle and create an appropriate environment for neurogenesis. From this point of view, our findings are consistent with previously reported findings that cell cycle remodeling may be the key point of the initial phase of somatic cell reprogramming [34]. During this cell cycle remodeling process triggered by the regulatory effects of CREB1 and JNK, environmental signaling for neurogenesis could be released. As induction continues, the existence of an intermediate state allows the activation of neural development- and damage repair-related genes (NEAT1, MALAT1, ADARB1, NRG1, ZBTB20, PURA) [35][36][37][38][39][40]. The successful and continuous expression of these intermediate state-related genes may provide the basis for successful FiN reprogramming. Subsequently, the reprogrammed cells enter a state of stable expression of neural lineage-specific genes (OGT, MEIS2, MAP2 and TUBB3) [41][42][43] to achieve successful FiN reprogramming (Figure S7 D). This is the first study to clearly describe the reprogramming path and regulatory mechanism by which somatic cells achieve a neuronal cell fate under chemical induction. Overall, we established a robust, highly efficient, and novel method for chemically inducing the conversion of human somatic cells into a neuronal cell fate via a single small molecule with a clear regulatory mechanism. Moreover, we revealed that any single small molecule that can upregulate cAMP/CREB1 and/or downregulate JNK signaling is capable of inducing the conversion of somatic cells to a neuronal cell fate. Therefore, our findings offer insights into the mechanism of neuronal cell fate acquisition and identify a potentially powerful and clinically feasible strategy to treat neurodegenerative diseases by replacing lost neurons. Prospectively, application of the small molecule forskolin may be a potential approach for in situ neuronal regeneration for therapeutic purposes. Generation and culture of induced neurons derived from somatic cells BJ cells were seeded on poly-D-lysine (PDL) (Gibco, Cat.#A38904-01)-coated dishes and cultured with DMEM + 10% FBS to 80% confluence. The medium was then replaced with neuronal induction medium (IM). On day 2 (D2), the IM was replaced with neuronal maturation medium (MM), and this medium was replaced every 2 days. On D5, the 10 μM forskolin was removed, and the medium was replaced with neuronal cell culture medium (NM), which was replaced every 2 days.
IF staining IF staining was performed as previously reported [6,21].Cells were washed three times with PBS, fixed with 4% paraformaldehyde (PFA) for 20 min, and then blocked (in buffer containing 100 mmol/L glycine and 0.3% BSA in PBS) 3 times for 5 min each time.After blocking with 1% BSA for 1.5 h, the primary antibody was prepared, and the cells were incubated with this antibody at 4°C overnight.The next day, the cells were incubated with secondary antibodies for 1.5 h at room temperature, and then a fluorescence microscope was used for imaging and analysis. To calculate the positive rate of the cells above, we randomly selected 5-10 fields of view under a fluorescence microscope.Cells positive for TUJ1 or MAP2 with typical neuronal morphology were counted to quantify neurons, Hoechst-positive cells were counted to quantify total cells, and Choline-, vGLUT1-, GAD67-and TH-positive cells were counted to quantify the cells of each subtype.The ratio of the number of Tuj1/Map2-positive cells to the number of Hoechst-positive cells was the positive rate of TUJ1/MAP2.The ratio of the number of choline-/vGLUT1-/GAD67-/TH-positive cells to the number of TUJ1-positive cells was the positive rate of Choline/vGLUT1/GAD67/TH. The above experiments were repeated three times, and the average value was taken for quantitative analysis. qRT-PCR According to the product instructions, TRIzol (Vazyme, Cat.#R401-01) was used to extract total RNA, and a HiScript III RT SuperMix for qPCR kit (Vazyme, Cat.#R323-01) was used to reverse-transcribe the RNA into cDNA.Real-time quantitative PCR was performed on a LongGene Q2000B qPCR instrument using a ChamQ SYBR qPCR Master Mix kit (Vazyme, Cat.#Q711-02) according to the manufacturers' instructions. Plasmid construction Construction of the overexpression vector: First, the circular empty vector pLVX-IRES-ZsGreen1 was linearized by double enzyme digestion, and CREB1/MAPK8 was connected to the linearized vector by homologous recombination.Subsequently, the plasmid was transformed into E. coli for amplification, and colony PCR identification and Sanger sequencing identification were performed. Construction of the knockout vector: Single guide RNA (sgRNA) was designed according to the gene sequences of CREB1/MAPK8.At both ends of the sgRNA, BsmBI enzyme cutting sites were added to generate complementary sticky ends, which were annealed to form double-stranded DNA and then connected to a Lenti-CAS9-sgRNA-puro vector.The ligated product was transformed with competent cells, and after colony PCR verification, positive clones were obtained and sequenced to obtain a lentiviral plasmid expressing sgRNA with the correct sequence. Lentivirus infection The above lentiviral recombinant plasmids/ vectors (7.5 μg) were cotransfected with VSVG (3 μg) and NRF (4.5 μg) plasmids into 293T cells using Lipofectamine 3000 (Invitrogen) in 60 mm dishes.The virus supernatants were collected 48-72 h after transfection at 37°C under 5% CO2.The supernatant was centrifuged (4°C, 2000 rpm, 10 min) and filtered with a 0.45 μm filter, and BJ cells were infected with the supernatant and medium at a ratio of 1:1.After 48 h, fluorescence was observed under a fluorescence microscope, or puromycin screening was performed. 
Electrophysiology An HD-MEA chip was sterilized in 70% alcohol and then washed three times with sterile deionized water. After drying, the chip was placed in a 100 mm petri dish, and a 35 mm dish filled with 2 mL of deionized water was placed in the 100 mm petri dish to provide a humid environment. Next, 400 μL of medium was injected into the chip, which was placed in a 37°C, 5% CO2 incubator for pretreatment for 2 days. After 2 days, the culture medium was aspirated, and PDL solution was added to cover the electrode array of the chip. After incubation in the incubator for 1 h, the chip was washed 3 times with deionized water. Cells were seeded on the surface of the treated chip electrode, and various electrophysiology parameters were detected at different time points with MaxOne equipment (MaxWell Biosystems, Switzerland). Whole-cell recordings of FiNs were performed using a Multiclamp 700B amplifier (Molecular Devices). ACSF bubbled with 95% O2 / 5% CO2 was continuously perfused. Pipette solutions contained (in mM) 93 K-gluconate, 16 KCl, 2 MgCl2, 10 HEPES, 4 ATP-Mg, 0.3 GTP-Na2, 10 phosphate, 0.5 Alexa Fluor 568 (Invitrogen), and 0.4% neurobiotin (Invitrogen) (pH 7.25, 290-300 mOsm). Membrane potential was maintained around -70 mV, and step currents in increments of 5 pA were injected to induce action potentials. Step voltages in increments of 10 mV were injected to induce sodium currents. To block the sodium current, TTX was added at a final concentration of 1 µM. To record spontaneous postsynaptic currents, the membrane potential was held around -85 mV. Signals were sampled at 5 kHz with a 2 kHz low-pass filter. Data were analyzed using pClamp 10 software (Clampfit). Detection of neurotransmitters The cell supernatants were collected at the corresponding induction times, centrifuged and filtered to remove dead cells and impurities, and the corresponding neurotransmitters were detected according to the instructions of the ELISA kits (Abcam, ab285238 Dopamine ELISA Kit; ab287792 Human QuickDetect™ GABA ELISA Kit). In vivo transplantation of FiNs All experiments followed animal welfare policies and were approved by the ethics committee of Guangxi University or The People's Hospital of Guangxi Zhuang Autonomous Region. BJ cells were infected with GFP-tagged lentivirus, and the fluorescence rate was observed under a fluorescence microscope after 48 h (it reached more than 80%). These GFP-BJ cells were induced with the induction method described in this paper and were positive for Tuj1 after 2 days of induction. The induced cells were digested with TrypLE into single cells and resuspended in cold neuronal MM at a density of 5 × 10^4 cells/μL. The cell suspension was placed on ice and tapped every 5 min. Postnatal day 1 mice (C57BL/6) were immobilized and anesthetized on ice for 5 min. Using a Hamilton syringe (Hamilton, Cat.#701N), 2 μL of the cell suspension was injected into the lateral ventricle of each mouse at a rate of 0.5 μL/min; the same method was used for the contralateral side. The injections were made at sites two-fifths of both eyes of the mice at a depth of 2 mm [6,44]. The mice were then sacrificed 7, 30, and 60 days after injection, and their brain tissues were subjected to cryosectioning and IF analysis.
Cryosectioning and IF analysis After the mice were sacrificed at the above time points, their brains were removed as soon as possible, placed in liquid nitrogen and quickly frozen into blocks.The samples were precooled in a 4°C refrigerator for 5-10 min to allow the O.C.T. compound to permeate the tissue.The samples were placed in a constant-temperature cryostat for coronal sectioning, placed at room temperature for 30 min, fixed in acetone at 4°C for 5 min, dried in an oven for 20 min, and washed three times with PBS for 5 min each time.Finally, antigen heat retrieval was performed.The sections were air-dried at room temperature for 15 min and incubated with PBS containing 10% donkey serum for 1 h at room temperature. A primary antibody against TUJ1/MAP2 was diluted at a concentration of 1:50, and the sections were incubated with this antibody at room temperature for 2 h.The secondary antibody, anti-mouse-555/anti-rabbit-555, was diluted at a concentration of 1:200.The sections were incubated with the secondary antibody at 37°C for 1 h and washed with PBS 3 times for 5 min each time.Hoechst was added dropwise to stain the nuclei, and the sections were incubated at room temperature for 15 min.Then, 10 μL of neutral gum was added dropwise to seal the slides, and the slides were placed under a fluorescence microscope for observation and imaging. Induced cell dynamics tracking The cells were inoculated in a 96-well plate, an appropriate amount of PBS was added to the wells around the inoculated wells to maintain a suitable humid environment.The abovementioned 96-well plates were placed in a BioTek Cytation5 plate, and the appropriate exposure and length of exposure were set.Shots were taken every 2 h for a total of 132 h of tracking, and the induced cells were cultivated in accordance with the above steps. ScRNA-seq library construction and sequencing ScRNA-seq was performed on BJ cells and D3 cell samples (3 days post-induction) using a 10× Genomics system.Briefly, dissociated cells (~10,000 cells per sample) were loaded into a 10× Genomics Chromium Single Cell system using Chromium Single Cell 3' Reagent Kits v3.1 (10× Genomics, Pleasanton, CA).ScRNA libraries were generated by following the manufacturer's instructions.The libraries were pooled and sequenced on an Illumina NovaSeq 6000.The sequencing reads were processed through the Cell Ranger 4.0.0 pipeline (10× Genomics) using the default parameters. ScATAC-seq library construction and sequencing ScATAC-seq was performed on BJ cells and D3 cell samples (3 days post-induction) using a 10× Genomic Single Cell ATAC Reagent v1.1 Kit following the manufacturer's instructions.The libraries were pooled and sequenced on an Illumina NovaSeq 6000.The sequencing data were processed through the Cell Ranger ATAC 1.1.0pipeline (10x Genomics) using the default parameters. 
Bulk RNA-seq (mRNA-seq) library construction and sequencing Total RNA was extracted from cells using TRIzol® Reagent according to the manufacturer's instructions (Invitrogen), and genomic DNA was removed using DNase I (TaKaRa). The RNA-seq transcriptome library was prepared with the TruSeq™ RNA Sample Preparation Kit from Illumina using 1 μg of total RNA. After quantification, the RNA-seq library was sequenced with the Illumina NovaSeq 6000 sequencer in paired-end mode (2 × 150 bp read length). The clean reads were then aligned to the reference genome in orientation mode using HISAT2 software (http://ccb.jhu.edu/software/hisat2/index.shtml). The mapped reads of each sample were assembled with StringTie (https://ccb.jhu.edu/software/stringtie/index.shtml?t=example) in a reference-based approach. ScRNA-seq analysis The clean scRNA-seq reads for all of the samples were mapped to the human reference genome hg38 using Cell Ranger v.4.0.0 [45]. The expression matrices were loaded into R v.4.1.0 using the Read10X function in Seurat (v.4.1.0) [46] and then merged together by column. This resulted in a total of 11,488 cells from the 3 days post-induction samples and 17,504 cells from BJ cells. Cell-level quality control was performed by filtering cells by (1) total UMI counts of no more than 5,000 but higher than 500; (2) gene numbers higher than 500 but less than 2,500; and (3) mitochondrial gene percentages less than 10. The expression level of each gene in each cell was normalized using the NormalizeData function with the default parameters. Cluster-level quality control was performed after the standard Seurat clustering pipeline was implemented using the following functions in order: FindVariableFeatures with all features, ScaleData, RunPCA, FindNeighbors with the first 16 principal components (PCs) and FindClusters with resolution 0.2, otherwise default settings. Clusters with fewer than 50 cells were removed. After quality control, 10,575 cells from the 3 days post-induction sample and 2,461 cells from the BJ sample remained. Genes that were differentially expressed between clusters (cluster markers) were identified with the FindAllMarkers function using a Wilcoxon rank sum test and a minimum log-fold upregulation of 0.05. GO analysis of all gene groups was performed using the enrichGO function in the R package clusterProfiler [47]. Construction of a trajectory using DEGs Monocle 2 ordering was conducted using the set of variable genes with the default parameters, except that we specified reduction_method = "DDRTree" in the reduceDimension function [48]. The key regulatory factor was submitted to the STRING database to infer regulatory networks based on known interaction relationships (supported by data from curated databases, experiments, and text mining). Factors without any interactions with other proteins were removed from the network. The network was visualized with Cytoscape (v3.9.0). ScATAC-seq analysis All analyses (UMAP dimension reduction, cluster identification, and identification of differentially accessible regions) were performed according to the Signac (v1.6.0) [49] vignettes, and the default parameter settings were used to construct cell trajectories with Monocle 3 [50]. Quantification and statistical analysis Statistical analysis of quantified data was performed using GraphPad software. Significance was calculated with Student's t test or one-way ANOVA, unless otherwise stated. The data are presented as the mean ± SEM. *p < 0.05, **p < 0.01, ***p < 0.001.
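To make the scRNA-seq processing described above concrete, the following is a minimal R sketch of the cell-level QC, clustering and marker/GO steps. It assumes Seurat (v4), clusterProfiler and org.Hs.eg.db; the file paths, object names and the mitochondrial-gene pattern are hypothetical, while the thresholds mirror those stated in the text (500 < UMIs ≤ 5,000; 500 < genes < 2,500; <10% mitochondrial reads; 16 PCs; resolution 0.2; clusters with at least 50 cells retained).

```r
# Minimal sketch of the cell-level QC, clustering and marker/GO workflow described above.
# Assumes Seurat (v4), clusterProfiler and org.Hs.eg.db; paths and object names are hypothetical.
library(Seurat)
library(clusterProfiler)
library(org.Hs.eg.db)

# Load Cell Ranger output for the BJ (D0) and 3 days post-induction (D3) samples
d0 <- CreateSeuratObject(Read10X("cellranger/D0/filtered_feature_bc_matrix"), project = "D0")
d3 <- CreateSeuratObject(Read10X("cellranger/D3/filtered_feature_bc_matrix"), project = "D3")
obj <- merge(d0, y = d3, add.cell.ids = c("D0", "D3"))

# Cell-level QC: 500 < UMIs <= 5,000; 500 < genes < 2,500; mitochondrial reads < 10%
obj[["percent.mt"]] <- PercentageFeatureSet(obj, pattern = "^MT-")
obj <- subset(obj, subset = nCount_RNA > 500 & nCount_RNA <= 5000 &
                    nFeature_RNA > 500 & nFeature_RNA < 2500 & percent.mt < 10)

# Standard Seurat pipeline: normalize, scale, PCA, graph clustering (16 PCs, resolution 0.2)
obj <- NormalizeData(obj)
obj <- FindVariableFeatures(obj)
obj <- ScaleData(obj)
obj <- RunPCA(obj)
obj <- FindNeighbors(obj, dims = 1:16)
obj <- FindClusters(obj, resolution = 0.2)
obj <- RunUMAP(obj, dims = 1:16)

# Cluster-level QC: drop clusters with fewer than 50 cells
keep <- names(which(table(Idents(obj)) >= 50))
obj <- subset(obj, idents = keep)

# Cluster markers (Wilcoxon rank-sum test, minimum log-fold upregulation of 0.05)
markers <- FindAllMarkers(obj, test.use = "wilcox", logfc.threshold = 0.05, only.pos = TRUE)

# GO enrichment of the markers of one cluster (gene symbols as keys; cluster label illustrative)
ego <- enrichGO(gene = markers$gene[markers$cluster == "3"],
                OrgDb = org.Hs.eg.db, keyType = "SYMBOL", ont = "BP")
```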
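The Monocle 2 trajectory step could then look like the sketch below. It assumes the monocle (v2) Bioconductor package; the conversion from the Seurat object and the choice of ordering genes are illustrative rather than the authors' exact code, but the DDRTree reduction matches the parameter noted above.

```r
# Minimal sketch of the Monocle 2 pseudotime ordering described above (DDRTree reduction).
# Assumes the monocle (v2) Bioconductor package; conversion from the Seurat object is illustrative.
library(monocle)

# Build a CellDataSet from the clustered cells of interest (cluster labels are illustrative)
sub <- subset(obj, idents = c("2", "3", "4", "5"))
cds <- newCellDataSet(as(GetAssayData(sub, slot = "counts"), "sparseMatrix"),
                      phenoData = new("AnnotatedDataFrame", data = sub@meta.data),
                      expressionFamily = negbinomial.size())
cds <- estimateSizeFactors(cds)
cds <- estimateDispersions(cds)

# Order cells along pseudotime using the variable genes and DDRTree dimensionality reduction
cds <- setOrderingFilter(cds, VariableFeatures(sub))
cds <- reduceDimension(cds, reduction_method = "DDRTree", max_components = 2)
cds <- orderCells(cds)
plot_cell_trajectory(cds, color_by = "Pseudotime")
```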
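For the scATAC-seq side, a skeletal Signac workflow consistent with the vignette-default description above might look as follows. Signac (v1.6) and Seurat are assumed, the Cell Ranger ATAC output file names are hypothetical, and the differential-accessibility call is shown only as a commented illustration.

```r
# Skeletal Signac workflow for the scATAC-seq processing described above (vignette defaults).
# Assumes Signac (v1.6) and Seurat; the Cell Ranger ATAC output file names are hypothetical.
library(Signac)
library(Seurat)

counts <- Read10X_h5("cellranger-atac/D3/filtered_peak_bc_matrix.h5")
assay  <- CreateChromatinAssay(counts = counts, sep = c(":", "-"),
                               fragments = "cellranger-atac/D3/fragments.tsv.gz")
atac <- CreateSeuratObject(counts = assay, assay = "peaks")

# Latent semantic indexing (TF-IDF + SVD), then UMAP and graph-based clustering
atac <- RunTFIDF(atac)
atac <- FindTopFeatures(atac, min.cutoff = "q0")
atac <- RunSVD(atac)
atac <- RunUMAP(atac, reduction = "lsi", dims = 2:30)
atac <- FindNeighbors(atac, reduction = "lsi", dims = 2:30)
atac <- FindClusters(atac, algorithm = 3)

# Differentially accessible regions between D3 and BJ cells (after merging both samples):
# da <- FindMarkers(atac, ident.1 = "D3", ident.2 = "BJ", test.use = "LR",
#                   latent.vars = "nCount_peaks")
```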
Figure 2. The cAMP-CREB1-JNK signaling determines the cell fate conversion of BJ cells into neuronal cells under forskolin induction. A. Brightfield and regional magnification of the phenotype of neurons generated by only cAMP, 8-Bromo-cAMP, SP600125 or LDN193189 induction, respectively. Scale bars, 100 μm. Magnification scale bars, 20 μm. B. The neuronal markers TUJ1 (green), MAP2 (red), and NEUN (green) were expressed in neurons generated by only cAMP, 8-Bromo-cAMP, SP600125 or LDN193189 induction, respectively. Scale bars, 200 μm. Magnification scale bars, 40 μm. C. The positive rate of TUJ1, MAP2 and NeuN immunofluorescence of induced neurons generated by cAMP, 8-Bromo-cAMP, SP600125 and LDN193189 induction, respectively. D. Schematic diagram of the hypothetical regulatory pathways and sites of action of small molecules for the conversion of BJ cells into neuronal cells. Figure 4. Electrophysiological properties of FiNs and transplanted FiN survival in mouse brains. A. HD-MEA electrical images showing 2D spatial distribution maps of the active electrodes (%) (left) and mean firing rate (Hz) (right) on days 5, 10, 15 and 30. B. HD-MEA chips detected the electrical signals of cells in each channel (left) and the network activity (firing rate (Hz)) (right) with a cycle of 60 seconds on days 5, 10, 15 and 30. C-F. Active electrode rates (C), mean firing rate (Hz) (D), number of spikes per burst (E) and number of spikes per burst per electrode (F) of cells detected in HD-MEA chips on days 2, 5, 10, 15, and 30 (mean ± SEM, n = 3 biological replicates, *p < 0.05, **p < 0.01, ***p < 0.001, one-way ANOVA). G. Current-clamp recordings of FiNs showing a representative train of action potentials (top panel). Step currents were injected from -60 pA to 120 pA (bottom panel). H. Large currents of voltage-dependent sodium and potassium channels. I. Representative trace of spontaneous postsynaptic currents in FiNs. J. Synaptic currents evoked by a voltage step (60 mV, 1 ms) in the voltage-clamp mode. K. Schematic diagram of the bilateral intracerebral injection of FiNs. L. On day 7 after FiN injection (D7), cryosectioning showed green fluorescence (GFP) at the injection site. Scale bars, 1000 μm. Magnification scale bars, 200 μm. (n = 3 injected mice for analysis). M. On days 30 (D30) and 60 (D60) after FiN injection, cryosection IF showed FiNs with GFP labels that survived and expressed the neuronal markers TUJ1 (red) and MAP2 (red) in the mouse brain. Scale bars, 200 μm. Magnification scale bars, 40 μm. White arrows indicate transplanted GFP-FiNs. (n = 6 injected mice at each timepoint for analysis). Figure 5. ScRNA-seq analyses demonstrate the conversion of BJ cells into FiNs. A. Uniform manifold approximation and projection (UMAP) analysis of the BJ cells (D0), cells at 3 days post-induction (D3) and cells at 5 days post-induction (D5) samples (left). The UMAP plots display the induced cells collected on D0 (BJ cells), D3 and D5 that were clustered into 7 clusters (right). B. UMAP feature plots of the expression of the fibroblast marker genes TAGLN and MYL9 and the neuron marker genes MAP2 and TUBB3. C. Heatmap showing the differentially expressed genes (DEGs) cataloged in each cluster. D. GO analysis showing the enriched terms in each cluster. Figure 6. ScATAC-seq analyses demonstrate the conversion of BJ cells into FiNs. A.
scATAC-seq analysis of BJ cells and D3 cells. The UMAP plots show the induced cells collected on days 0 (BJ) and 3 (D3) (left). Color-coding indicates pseudotime (right). The UMAP overlay of pseudotime implies developmental progression. B. Dot plot showing the open chromatin accessibility of neural development- and neuron marker genes in the D3 cells sample. C. Genome tracks showing scATAC accessibility: neural development gene loci are highlighted in the D3 cells sample, and fibroblast gene loci are highlighted in the BJ sample. D. GO analyses of significantly opened genes in the D3 cells sample versus the BJ cells sample. The P value was determined by a one-sided hypergeometric test without adjustments. Figure 7. ScRNA-seq analyses reveal the successful reprogramming events of FiN reprogramming. A. Monocle-generated pseudotime trajectory of a subsampled population of cells (n = 2000) from each cluster in the D3 cells sample of scRNA-seq data. Pseudotime is shown colored in a gradient from dark to light blue. B. Trajectory reconstruction of three branches in the scRNA-seq data: the pre-branch (before bifurcation), the successful branch, and the failed branch (after bifurcation). Cluster 3 and cluster 8 indicate cells at the termini of the successful (neural lineage commitment) and failed branches, respectively. Violin plots of scRNA-seq data display the expression of representative neural development-related genes in the successful and failed branches of neural lineage commitment, respectively. C. Heatmap showing the expression patterns of key dynamically expressed genes along the reprogramming pseudotime (left). The enriched GO terms for each gene set cluster in the heatmap (right). D. Expression pattern scatter plot showing the expression levels and changes across branches in neural development genes that affected successful reprogramming. Solid lines represent successful branches, and dotted lines represent failed branches. Figure 8. ScRNA-seq analyses reveal the regulatory network of FiN reprogramming. A-B. Gene correlation network including the triggers, the preparatory subnetwork, the intermediate subnetwork and the FiN subnetwork, and a regulatory model summarizing the progression of reprogramming induced by forskolin.
Building Bridges Instead of Walls: Engaging Young Children in Critical Literacy Read Alouds Situated in the months after the 2016 United States presidential election, this qualitative case study illuminates third-grade children’s sense-making about the GOP Administration’s proposed border wall with Mexico. In light of these present-day politics, close analysis of how young children discuss social issues remains critical, particularly for social studies educators. Looking across fifteen book discussions, we zero in on three whole-class conversations about (im)migration beginning with initial read alouds through the final debrief wherein children conversed with a local university anthropologist about the clandestine migration of individuals across the U.S.’s southern border. During initial discussions, children in the Midwestern school demonstrated their frustration towards racist laws of the mid-1900s. Others responded with empathy or made personal connections to their own family heritage. In the findings, we note a clear progression in how children understood (im)migration issues as evidenced by how their questions and curiosities shifted in later lessons. We highlight how, when children are encouraged to engage with social topics, they can act as critical consumers and position themselves as politically active and engaged citizens. INTRODUCTION "It's because this stuff isn't just trash, it was real people that owned that stuff," replied Katie, a 10-year-old white girl, when asked about the importance of learning about (im)migrants' stories. For six weeks, Katie and her third-grade classmates had been immersed in an integrated humanities unit planned in response to the turbulent 2016 Presidential election in the United States. Her teacher Ms. Honey, a 34-year-old white woman, had used daily read alouds in their morning meeting to introduce concepts and spark conversations related to (im)migration. Katie and her peers had engaged with numerous children's literature texts, ranging from stories about refugees' journeys to the tales of migrant workers fighting for improved working conditions. After weeks of building background knowledge about historical and contemporary (im)migration issues, the children spoke with a local expert, Dr. Jason De León (2015), a highly regarded anthropologist studying the clandestine migration of individuals across the U.S southern border. With intrigued eyes and attentive ears, the children eagerly attended to the smartboard screen where Dr. De León shared with the children objects (im)migrants crossing the border left behind. "What were some of the messages you found?" asked one young learner. "How many backpacks have you found?" asked another. The children peppered Dr. De León with questions. As in the previous book discussions, the children displayed concern for (im)migrants, particularly upon hearing about the challenges they faced to enter the United States. However, as evidenced in Katie's response, the children understood people were at the center of (im)migration debates. After weeks of engaging in reading historical fiction and contemporary texts, the children were more knowledgeable about (im)migration and more inclined to critique policies they deemed unjust and inhumane while positioning themselves as politically aware, socially-engaged community members. 
Operating from an understanding that young children are capable of and interested in critical social issues (Halvorsen, 2017;Hauver, 2019;Payne, 2018;Payne et al., 2019;Vasquez, 2004Vasquez, /2014, in this paper, we share key moments of children's sense-making about the GOP Administration's proposed border wall with Mexico. This is a topic that lingers in American politics ahead of the 2020 presidential election, as the Trump administration recently announced $3.8 billion from the National Guard would be diverted to the wall (Choi, 2020). In light of these present-day politics, we argue that close analysis of how children discuss social issues remains critical. In this qualitative case study (Dyson & Genishi, 2005), we were guided by the following question: How might a series of critical literacy texts and class discussions focused on (im)migration inform young children's civic participation? In this paper, we first describe relevant studies from early childhood and elementary classrooms wherein children discussed critical topics and, specifically, inquiries wherein children's literature was used as a vehicle to do so. Then, we outline our methods and modes of inquiry before detailing read aloud sessions in the findings. Finally, we close with a discussion about how we see this work informing the educational communities now and in the future. LITERATURE REVIEW AND FRAMING As documented in humanities scholarship in early childhood, children are consistently shown as capable of engaging in dialogue about critical social issues like climate change and natural disasters Wargo & Alverado, 2019) and gun control (Ghiso, 2011(Ghiso, , 2015. However, in practice, teachers often avoid seemingly "adult" topics, naming them as "too political" (Vasquez, Tate, & Harste, 2013). Instead, teachers of young learners often opt to talk broadly about ideas of community issues, perhaps glossing scientific facts (e.g., the rate the Earth is warming) and forwarding individualistic solutions (e.g., recycling will save the planet). For many teachers, children's literature is a starting point for investigating community issues. One common approach to reading and analyzing such texts is through a critical literacies approach. Broadly, the term critical literacies refers to the use of the technologies of print and other media of communication to "analyze, critique, and transform the norms, rule systems, and practices governing the social fields of everyday life" (Luke, 2004, p. 21). Importantly, a critical literacies approach is not a checklist of instructional tasks or analytic strategies one employs as they read. Instead, it is a way of being in the world (Vasquez, Janks, & Comber, 2019). As such, critical literacies is not only of and for the English language arts (ELA) block, but it is interdisciplinary in nature because the approach foregrounds how all persons can learn to read the word and the world (Freire & Macedo, 1987). In doing so, individuals and collectives can act for a more just society. In social studies and ELA, a critical literacies approach begins with the understanding that no text is neutral; the political nature of any text-from a children's picture book to a history textbook-can be explored and critiqued (Dywer, 2016). Further, a critical literacies approach allows children to engage in critical meaning-making and to create analytical repertoires; they can apply to social phenomena such as poverty, unemployment, or workers' rights (Comber, 2015). 
A critical literacies approach to teaching and learning is an "overtly political orientation" (Luke, 2012, p. 5). Critical literacy is part and parcel of our understanding of global literacy and ultimately plays an important role in forwarding just civic and social values (Callow, 2017). With this understanding that teaching and learning are value-laden tasks (Barrett & Buchanan-Barrow, 2005) and that texts are ideological (Street, 1984), in our study we used a diverse array of children's literature to historically ground children's understandings about contemporary issues of (im)migration. We used texts in similar ways and for similar purposes to scholars like Cipparone (2014) who used the book Pancho Rabbit and the Coyote (Tonatiuh, 2013) to engage fourth grade students in conversations about the challenges involved in emigrating from Mexico to the U.S. We also read the book My Two Blankets (Blackwood & Kobald, 2014) with the same intention that Callow (2017) did in their work with primary students-to encourage them to understand and display empathy as well as recognize the plight of refugees. While we intended to engage children in conversations about (im)migration using texts in similar ways to Ciapparone (2014) and Callow (2017), we found it impossible to discuss this social issue without also introducing topics related to diversity in critical ways. Bridging scholarship from across the disciplines in early childhood, our thinking was informed by scholars like Husband (2018) who argued that multicultural picture books promote racial awareness and justice among children. In particular, we were informed by Husband's (2018) claim that educators should abandon colorblind approaches to race within their classrooms. Teaching children about racism both deals with racial stereotypes and messages and assists children in developing a sensitivity to racial injustices in their everyday lives and within society (Apfelbaum et al., 2010;Husband, 2018). While scholars have documented how literature can challenge misconceptions and expose stereotypes, so too can picture books perpetuate them. For instance, Kleekamp and Zapata (2018) noted portrayals of disabilities in children's literature often included themes of pity and exclusion. Grounded in the belief that books influence our understandings, Kleekamp and Zapata (2018) argued that inclusive children's literature must feature characters with agency and multidimensional lives who hold diverse identities (in their study, disability labels). Building on the work of scholars like Bishop (1990), Kleekamp and Zapata (2018) contend there exists an ethical imperative for children to read texts representing their own lived experiences. In this way, intentionally incorporating diverse picture books affords children the opportunity to gain insight into the lives of characters who experience the world like them, and those that live life differently than them (Kleekamp & Zapata, 2018;Solis, 2004). Likewise, Correia and Bleicher (2008) contend such reflections are part of a teachable skill set; in early learning spaces, children are frequently taught to make such connections by identifying whether the connection was to another text, to themselves, or to the world (Keene & Zimmerman, 1997). Additionally, we suggest exposure to such texts is critical because children live raced, classed, and gendered lives; thus, they deserve the opportunity and space to interrogate such topics (Mirra & Garcia, 2017). 
METHODS Situated within a public elementary school in the Midwestern United States, the larger study occurred across the 2016-2017 academic year. The data we draw on here was part of an integrated (e.g., social studies and ELA) unit wherein third graders were asked to contemplate contemporary social issues. Specifically, they were asked to consider the role of government and community members related to (im)migration policies. In the following sections, we detail the context, participants, and our methods for readers. Context and Participants Community School J (CSJ) was one of two elementary schools within the wider district that served children in grades 1 through 4. The school was the academic home for roughly 350 children that hailed from the neighborhood. The majority of children attending CSJ benefitted from the free or reduced lunch program. According to official school reports, the population at CSJ was predominantly white (52%); 48% of children were identified as children of Color (36% African American, 9% Asian American, 4% Hispanic, 1% Other). Students at CSJ were not only racially diverse, but many children arrived at school speaking a number of languages other than English. In this way, the racial and linguistic diversity of the school mirrored national demographics in the United States (Taylor, 2014). Twenty-two children (7 who self-identified as white, 5 as Black or African American, 4 as mixed or bi-racial, 2 as Asian American, 1 as Asian, 1 as Latino, 1 as Mexican American, 1 as Mexican, and 1 as Muslim) were enrolled in Ms. Honey's classroom. In Table 1, we offer a list of the children who appear in the findings as well as their self-selected pseudonyms and demographics. Prior to this study, Cassie had spent three years at CSJ and was a familiar face within the school (for more, see Brownell, 2017a, 2017b, 2018, 2019a, 2019b). As a white, monolingual, U.S.-born cisgender woman in her early 30s, Cassie fit in readily with the professional community at CSJ as her appearance paralleled that of the majority of the faculty. For instance, she and the focal teacher, Ms. Honey, shared these characteristics. Further, as a past early childhood educator, Cassie could readily communicate with Ms. Honey, despite the fact that Ms. Honey had nearly a decade more teaching experience. Although Anam, a trilingual, Pakistani-Canadian and Muslim cisgender woman in her early 20s, was not present during data generation, she worked alongside Cassie as an undergraduate research assistant during data analysis, in her third year at university. Given Anam's role as an intern with an International Non-Governmental Organization using play-based learning to empower vulnerable children around the world, she was well-suited to assist with this project. Specifically, Anam built upon her experiences analyzing, summarizing and writing project briefs on the positive impacts of play-based learning for children's life skill development, as well as content from her courses as an International Development Studies major. With Cassie, Anam synthesized and analyzed how the children engaged in critical conversations. Ms. Honey was a seasoned educator with 10+ years of teaching. Having started her teaching career in the Southwestern United States, she returned to the focal state where she was born and raised to teach at CSJ three years earlier. During her tenure at CSJ, Ms.
Honey became recognized as an educational leader and was frequently selected by the administrator to facilitate professional learning. Moreover, Ms. Honey was deemed a "successful" teacher because students in her class consistently performed well on top-down standardized assessments. In return for her leadership and marked success, Ms. Honey was granted more curricular freedom than some of her peers. Additionally, in the wake of the 2016 Presidential election, Ms. Honey felt teaching civic issues and governmental procedures was an ethical imperative, not just a curricular goal. Given all this and the past experiences Ms. Honey and Cassie had in completing a previous inquiry, they decided to collaboratively plan and implement the focal unit. Unit Overview Cassie and Ms. Honey created this unit for the purposes of integrating social and political activism in the social studies classroom. The integrated social studies and ELA unit served as a way for Ms. Honey to engage the children in discussion about controversial topics in a thoughtful manner, using children's literature as the vehicle to do so. The texts covered topics such as refugees, (im)migrants, and, more generally, the process of displacement and migration. The focal teacher, Ms. Honey, led the read alouds with children during their daily morning meetings; all conversations were recorded and later transcribed. Data Generation In the larger interpretive study (Erickson, 1986), Cassie considered children's diverse communicative practices related to critical social issues. Thus, she generated data in a number of different ways for this case study (Dyson & Genishi, 2005). Specifically, she used ethnographic methods such as participant observation, photography, and fieldnotes to generate data (Emerson, Fretz, & Shaw, 2011). Children were well-aware of the role of Cassie as a researcher and knew about her interest in their thinking about critical issues. Frequently, children would approach her to share ideas they thought Cassie might be able to use as part of what the children termed her "kid experiments." This included sharing their compositions or other resources they thought may be interesting to her. Cassie also generated daily audio-or video-recordings of classroom happenings, activities on the playground, and conversations in the cafeteria. Cassie frequently engaged Ms. Honey and the children in conversation, both as formal interviews and informal discussions. Like other talk, these were audio-or video-recorded for later transcription and analysis. For the purposes of this paper, we draw on a series of classroom conversations focused on children's literature related to (im)migration. Data Analysis Working alongside Cassie, Anam transcribed verbatim the collection of audio recordings Cassie generated. This included transcription of the daily read alouds as well as the whole-class conversations that occurred before, during, and after each reading. While transcribing the data, Anam paid particular attention to the key themes present in children's discussions, such as how they articulated their feelings and shared personal connections in response to the stories they were reading. Cassie then reviewed the original audio recordings alongside the transcripts and Anam's notes, reading these texts alongside the fieldnotes generated at the time of the study. Together, we developed a more detailed coding scheme for examining the texts in a way that accounted for our noticings. 
We looked for moments when kids made connections between texts, between texts and themselves, and between texts and their world (local or global), a heuristic Ms. Honey used in her teaching. Children were encouraged to make these connections as part of a more thoughtful social studies curriculum. For example, in a text-to-world connection, Ari observed, "The wall rips apart families"; over the course of the unit, children became more comfortable critiquing and sharing their opinions on social and political issues, particularly (im)migration. FINDINGS In this paper, looking across fifteen book discussions, we zero in on three whole-class conversations about (im)migration. We first describe an early read aloud, then a mid-unit book discussion, and finally, we share about a whole-class debrief of the conversation children had with Dr. De León. Across these three findings, we showcase how children's thinking about the topic of (im)migration was enriched within the integrated social studies and ELA unit. Additionally, we highlight how children shifted from only learning about new historical content from the picture books (e.g., segregation in the U.S. was not just Black/White) to eventually critiquing contemporary policies and structures (e.g., the proposed wall is oppressive and therefore wrong) using their learning from the books. Beyond Black and White: Facing the Hard History of U.S. Segregation In one of the earliest sessions of the six-week unit, Ms. Honey read aloud Tonatiuh's (2014) Separate is Never Equal: Sylvia Mendez and Her Family's Fight for Desegregation. As noted in the title, the story details how Sylvia Mendez, a U.S. citizen of Mexican and Puerto Rican descent, was denied enrollment at a "Whites only" school in her home state of California. For the children, this read aloud was one of the first in which they came to understand that the issue of school segregation (and segregation in the wider society) included more than just individuals who were Black or White. This was also the first time the children engaged in an explicit conversation about the realities of racism related to Mexican (im)migrants. Ms. Honey opened the lesson by gauging children's familiarity with the term segregation, a topic they had briefly discussed a few months earlier in relation to Black History Month. She activated their background knowledge by engaging them in a conversation wherein the children shared that they understood segregation as the separation between Black and White individuals. Children made mention of particular historical figures like Rosa Parks and child-activist Ruby Bridges, with one Black child noting she had known about Ruby Bridges "since second grade." Nearly all children seemed to understand segregation as an issue of "back then." For instance, another Black girl commented she had seen the "White people on one side and Black people on the other side" signs during a class trip to a local historical museum a few months prior. While the children's knowledge about the segregation of Black and White communities was, in many ways, robust, it was simultaneously limited; all children were unfamiliar with the segregation of Mexican American children. As Ms. Honey read the story, the children appeared disheartened by the hardships faced by Sylvia and her family. With prompting from Ms. Honey, they made sense of how segregation negatively impacted Mexican Americans as they heard how Sylvia's father advocated on her behalf. Mid-way through the book, Ms.
Honey commented that she noticed something about Sylvia's family and, after a turn-and-talk, asked the children to share what they were noticing. Evident in their comments, the children were starting to make sense of the importance of collective action taken by Sylvia's family to desegregate the school system, a theme that became clearer as the children continued to read about how Sylvia's father would travel across the area looking for other families that were disappointed by the limits on their children's schooling due to their racial or ethnic identity. As the story continued, the children expressed frustration and disbelief as they listened to how Mexican American children were denied attendance at the same school as their White counterparts because they were considered "unworthy" and "dirty" (Tonatiuh, 2014). To guide children in critical thinking and to engage their voices and perspectives, Ms. Honey encouraged the children to converse with their peers using the prompt, "I feel this because…". After turning-and-talking with a peer, the children shared aloud their thoughts in a whole-class discussion, where many expressed anger about the circumstances. Here, the children's understandings about the inequities of the situation, as described in the historical fiction text, become clearer. The children articulated a wide range of feelings: sadness, anger, frustration, and a general sense of displeasure and disappointment. For some, like Gabe, the feelings they harbored were due to text-to-self connections, particularly as they considered how such harmful policies may have impacted their own schooling. In the latter part of the conversations, children spoke one after the other and in response to one another. As Katie, Nicki, and Faith conversed, there was a shift in how they talked as they considered what things might be like if the roles were reversed. Underlying their comments is the notion that caretakers of all backgrounds want what is best for their children and that all children deserve a "good" school. With this shared understanding, the children's eruption into applause upon hearing the result of the Mendez court case (a win for Sylvia and her family), or in hearing about how proud Sylvia was to have made friends from all backgrounds and knowing that this was because her family had fought for her, should not have been a surprise. As Ms. Honey read aloud the story, the children demonstrated curiosity, concern, and empathy. It was during this read-aloud and the subsequent conversation that we noticed how children first started to make sense of critical topics like segregation, racism, and migrant work by articulating their feelings with the support of prompts from Ms. Honey. For us, this initial discussion demonstrated how children's literature can evoke critical conversations amongst children, allowing them to understand the unfair laws of the past and, as we demonstrate in the later findings sections, reflect on present-day politics. Sowing Seeds of Understanding: Explaining the Precarity of Employment After using the Tonatiuh (2014) picture book to situate race as a systemic issue impacting more than just those deemed Black or White, Ms. Honey used the text Harvesting Hope: The Story of Cesar Chavez (Krull & Morales, 2003) to discuss connections between race and class in a later week.
This piece of historical fiction brought to life for the children the story of Cesar Chaveza Mexican American labor leader who formed the National Farm Workers Association and fought tirelessly to improve the working conditions of migrant workers in the United States. Beginning with Chavez's childhood, the picture book details instances early on when he felt powerlessness because of policies that undervalued Chavez's humanity as a non-White, Spanish speaker. Later, the book traces his role as a labor leader and the radical shifts he made in this role. Unlike most of the other books, Ms. Honey read aloud the story of Chavez over two days. This afforded her time to discuss the book with the children and to emphasize the precarity of migrant work. On the opening day of the read aloud, for example, Ms. Honey and the children had a long discussion about the impact of drought on farms and, in turn, on the families of those working in the fields. As noted in the following transcript, Ms. Honey had the time to facilitate a discussion about who a migrant worker was and the challenges they faced in their work. During the two days of conversation, the children appeared more comfortable discussing (im)migration and, similar to the Tonatiuh (2014) reading, some children made personal connections to the text. As the children listened to the story, how they made sense of the moral implications of the stories of the real people portrayed in the texts became evident as well. Ms. Honey: We learned about how the conditions were not great [for migrant workers], do you remember? What were the conditions like on the farm where they worked? What were some of the things that made you go, oh no! Matt: That one person in one day would only make thirty cents. Ms. Honey: Right, they weren't making much money at all. Katie? What else? Katie: That their beds were all soaking. They were wet and damp. Ms. Honey: Thank you...Sameerah? Sameerah: That they couldn't say anything like they don't want to work anymore because they [farmers/bosses] could murder or hurt them. In grappling with the reality of Chavez story, the children appeared more inclined to make personal connections. For example, Gem told her classmates she herself was new to the United States, telling her peers, "I'm an immigrant." In this expression of her identity as a newcomer, Gem made a connection from the text to herself. While this sort of connection was one we saw many children make over the 15 read alouds and the related conversations, Gem was a unique case insofar as her place in the class shifted from a seemingly quiet classmate to a confident learner with specific expertise on the subject matter of the unit. Thus, for children like Gem, stories about activists like Sylvia Mendez and Cesar Chavez opened new avenues for her to participate in the social studies and ELA curriculum. Children also appeared willing to share their thoughts about the injustices faced by (im)migrant workers in the post-discussion. They had a seemingly shared opinion on the atrocious work conditions created by White individuals for Mexican American workers. Additionally, some children began to feel emboldened to state they specifically wanted to share their individual opinions. Ms. Honey: Alright, what an inspiration. Because during this time, White people didn't think to count for people [migrants] as being human. They felt like they could treat poor people in a way that you should not treat people. 
They thought of them like they were just things...things that could do their work for them because they were poor. What do you think about that?

Children: Yuck!!

Gabe: I want to share my thoughts on this.

Ms. Honey: Alright. I'm happy to hear it, Gabe.

Gabe: Alright, I'm happy, but I'm sad because who thinks another person is less than another person? That's a disgrace! And the reason I'm happy is because they actually made it [referring to the march Chavez made with labor colleagues]!

Gabe's text-to-self connections in earlier course readings and his shared identity as a Mexican American with labor leader Chavez likely informed his willingness to assert these sorts of connections in class. Like Gem, Gabe's read of the picture books included reading himself into the texts and, in turn, his classroom and world. In this way, the daily read alouds cultivated new avenues for children to feel they belonged, especially for children from marginalized communities that may not typically see themselves represented in literature or popular culture. The inclusion of historical fiction was most definitely a tool for children to make personal connections. However, it was also a vehicle for children to engage in and demonstrate critical thought. While in earlier lessons children used prompts from Ms. Honey to critically reflect on the texts, in this lesson Gabe used the story of Chavez to highlight the innate value of all humans, no matter their identity or background. Gabe did so without a sentence starter from his teacher, instead stating he had something to share and then actually sharing it with his classmates. Although the children made sense of each story in unique and personalized ways, across the 15 read alouds, we noticed how children like Gabe progressed in thinking about (im)migration and how their curiosities began to shift.

Unpacking Critical Concepts Through Real-World Experiences

In one of the final weeks of the unit, Ms. Honey read aloud a second picture book by Tonatiuh (2013), Pancho Rabbit and the Coyote. While many of the books included in the unit were historical fiction, this text differed from the others in that it was an allegorical tale featuring animal characters. For unfamiliar readers, in this text, Tonatiuh (2013) described the journey of the young Pancho Rabbit who lived south of the Rio Grande River. After his father did not return to the family home after completing his work as a migrant worker, the worried Pancho Rabbit packed a bag and headed north. Along the way, Pancho Rabbit met Coyote, who offered to help him travel toward his father, but ultimately Coyote wished to deceive Pancho, who was eventually rescued by his father. Ms. Honey and Cassie used this Tonatiuh (2013) text to once again emphasize the hardships (im)migrants faced, particularly those that must cross the United States' most southern border. After reading the full story, Ms. Honey also read the author's note. In it, Tonatiuh (2013) described the role of "coyotes" (e.g., smugglers) in assisting individuals crossing the border without the documents deemed necessary by the U.S. government. Additionally, Tonatiuh's (2013) author's note provided space for Ms. Honey to discuss dual-citizenship and deportations with the children. Such topics had been discussed in prior readings and, at the time, these issues were frequently appearing in the news.

Ms. Honey: Remember we talked about that word? Deported? Do you remember what that means? What does that mean Katie?
Katie: That they find you're there when you're not supposed to be and they send you back.

Across the course of the unit, children learned new terms such as deportation and came to understand what those terms meant in relation to (im)migration. In our review of the 15 read alouds, we saw a significant growth in the children's line of questioning as well as their understanding of (im)migration and its related terms. We also noticed instances wherein Ms. Honey shared more about individuals and communities she knew that were impacted by the (im)migration policies and practices Tonatiuh (2013) discussed in his author's note. Specifically, we noted how Ms. Honey spoke about how her former students in Arizona and their families' lives were influenced by U.S. laws. In speaking from personal experience, Ms. Honey brought to life the issues Tonatiuh (2013) wrote about and those the children had heard in previous weeks, such as in the transcript that follows.

Katie: I didn't know that there were such weird laws that were so mean about people just trying to survive.

Ms. Honey: Exactly. Yes, it was a really scary time. And it wasn't long ago, I remember it happening and feeling like that it was unfair and I had friends that, who were affected by that wall. And my students were affected by that wall because a lot of their parents were immigrants and they were really worried all the time that they might get deported. If they got caught, if the kids were born in the U.S., they would stay and the parents would be sent back.

Elliott: But who would they live with? Cause they [children] can't live by themselves.

Ms. Honey: Family, sometimes. Sometimes they were put in foster care. Sometimes it's just one of their parents that is deported and sent back.

We also used the Tonatiuh (2013) picture book and Ms. Honey's personal connections to prepare the children to learn more about the real people involved in (im)migration policies. Specifically, we used Tonatiuh's (2013) story to frame the virtual discussion the children had later in the day with Dr. De León. During this conversation, Dr. De León (2015) connected the Tonatiuh (2013) picture book to the work he engaged in as a researcher. He showed the children the items he found along the Arizona-Mexico border, including backpacks, children's toys, and food containers. The children were intrigued and eager to know more about the (im)migrants' stories and developed thoughtful questions for Dr. De León. In addition to the questions detailed in the introduction of this paper, the children were also curious about why Dr. De León decided to become an anthropologist. Dr. De León explained to them his interest in exploring the objects people left behind during their journey and how these objects could be used to shed light on the stories of individuals passing through. It was evident that by the end of their discussion with Dr. De León, the children had come to better understand the multi-faceted dimensions of (im)migration and that, as Katie stated, there were real people behind the objects Dr. De León found along the U.S./Mexico border.

Be friendly to people!

Katie: Just by looking at someone's stuff, you can learn a lot about a person.

Ms. Honey: Yeah, just by looking at a person's belongings, you can learn a lot about them.

As noted here, the children demonstrated they had made connections between the objects Dr. De León found, the stories of people those objects were connected to, and the factors that influenced why individuals crossed borders.
The children appeared to enrich their understanding about the negative implications a border wall would have on (im)migrants and their families, but they also discussed the negative impact a wall would have on the environment. For instance, children shared the following:

Nicki: He also said the wall is not good because it also hurts the animals and the habitat.

Ms. Honey: Yeah, good….

Savanna: The wall hurts the environment!

Ari: It rips apart families!

As demonstrated in this excerpt, by the close of the unit, a majority of the children came to understand that many of the GOP's proposed (im)migration policies would create harmful or dangerous situations for those seeking refuge in the United States. Moreover, the children understood from the various read alouds and related conversations the present-day realities many (im)migrants were challenged by, and they could imagine how proposed practices might inhibit others in the future.

DISCUSSION

Through snippets of transcripts from classroom conversation, we noted how children became more comfortable talking about (im)migration and called attention to how the children learned to critique current and historical policies. Moreover, we used these excerpts to showcase the role Ms. Honey had in thoughtfully engaging and facilitating conversations amongst her students as part of her social studies curriculum. While she initially encouraged participation through stems like, "I feel this because…," children became much more assertive in their commentary over time and eventually began with opening statements such as, "I want to share my thoughts on this" (see Gabe's comments in the second findings section). As children began to think more independently about (im)migration, they responded with empathy or by making personal connections to their own family heritage. The children also made connections between books. At times, this meant they recognized similarities in how individuals or communities advocated for themselves, while at other times they noticed that what was similar was the oppressive policies which led to the marginalization of a community. In turn, the children made connections between historical injustices and those which persist today. Cumulatively, we highlighted how, when children were encouraged to engage with social topics, they acted as critical consumers and positioned themselves as politically active and engaged community members. Within the integrated curriculum, Ms. Honey's role shifted as well. For instance, while in the earliest lesson there was a great deal of teacher talk and teacher-led conversation, in later lessons she encouraged children to reflect on their own. In this way, Ms. Honey engaged in teaching practices we would encourage others to take up as she became the facilitator, rather than the leader, of classroom conversations. To reach this level of conversation, Ms. Honey needed to scaffold the learning of her young students, assisting them with the task of analysis until they were able to do this work on their own.

CONCLUSION

In this paper, we detailed not only how open children were to talking about these ideas or opening wide the proverbial doors of the United States, but also how the children grappled with the ethical implications of the stories that were presented and how they related them to their own lives. Teaching controversial and critical topics, like (im)migration, addressed more than curricular goals within social studies or ELA.
The sort of critical teaching and learning within this integrated curriculum allowed children to voice their concerns while opening new avenues for them to connect to their personal experiences and perspectives within the social studies classroom. We see the teaching of critical topics like this as an ethical imperative insofar as such learning opportunities position children as critical, engaged, and active community members. This research demonstrates the importance of educators integrating social and political activism in their social studies classrooms for ethical and curricular purposes.
Physical Activity for Health and Wellness

Regular physical activity (PA) is both a preventive measure and a cure for non-communicable diseases (NCDs) [...].

Introduction

Regular physical activity (PA) is both a preventive measure and a cure for non-communicable diseases (NCDs). Moreover, PA improves mental health, quality of life, and well-being [1]. Conversely, physical inactivity and sedentary lifestyles have negative impacts on individuals, families, and society, as evidenced in particular by the spread of the obesity epidemic [2-6]. PA has proven to be a low-cost alternative for the treatment and prevention of disease. Therefore, interventions to prevent avoidable diseases by increasing the proportion of physically active people are fundamental. The Special Issue "Physical Activity, Wellness and Health: Challenges, Benefits and Strategies" was intended to collect research articles on anthropometric determinants of health and performance, PA and healthy habits, exercise and diet, exercise and body composition, interventions to promote PA for people of all ages, strategies for the implementation of an active life, and the beneficial effects of exercise on metabolic syndrome. Finally, 20 articles covering a wide range of information were published, indicating the interest generated by this call. Below we will provide a summary of the main contents of this Special Issue, highlighting proposals for future research that potentially contribute to the health benefits of being physically active. Topics included in this Special Issue fall mainly into the following three areas: anthropometry, health, and sport; health benefits of exercise; population studies and strategies for an active life.

Anthropometry, Health, and Sport

Anthropometric characteristics are important factors of a person's physical performance and health status. Four studies included in this Special Issue evaluated the contribution of these variables. Matias et al. [7] found that phase angle derived from bioelectrical impedance spectroscopy is predictive of maximal isometric forearm strength in cancer patients. Its relevance as a clinical indicator of disease-related function in breast cancer survivors was suggested. Handgrip strength was particularly influenced by body composition parameters and handedness according to Zaccagni et al. [8], so much so that the authors recommended it as a proxy for unhealthy conditions with impaired muscle mass, taking into account laterality. Further research should also provide evidence for the effectiveness and clinical relevance of hand strength testing in the assessment and prediction of critical health conditions. Barbieri et al. [9] investigated the efficacy and accuracy of a data mining methodology in predicting cardiovascular risk based on anthropometric, demographic, and biomedical data from a very large sample of the population involved in competitive sports practice. The procedure was conducted using a decision tree and logistic regression to classify individuals as at-risk or not. In addition, the authors used the receiver operating characteristic curve to assess classification performance, achieving satisfactory results. The fourth study by Rinaldo et al. [10] departs from the previous themes to deal with injuries that can occur in sporting activities, focusing on the relationship between anthropometric traits and injury occurrence.
Their findings pointed out that an increased body mass index, decreased calf muscle area, and being closer to the age of peak height velocity are significant risk factors for injuries in elite soccer players aged 9-13 years. Consistently with these findings, the authors claim that body composition and anthropometric characteristics should be monitored to reduce the risk of injury in young soccer players. Furthermore, training programs must be adapted to both the chronological age and the maturity status of the players.

Health Benefits of Exercise

PA contributes to preventing and treating a wide range of NCDs and can improve mental health, while also enhancing the quality of life and well-being. A total of seven studies in the Special Issue were conducted in this area. Two studies concern, in particular, the implications of regular exercise for disease prevention and treatment. Kanai et al. [11] reported that the health utility score was 0.77 in stroke survivors and was associated with the number of steps; the more stroke survivors walked, the higher their health utility score. Turning to multiple sclerosis, it is well known that physical inactivity reduces cardiorespiratory capacity, promotes physical deconditioning, and leads to comorbidities such as obesity, metabolic syndrome, and osteoporosis. In this field, Pau et al. [12] examined possible sex-related differences in the amount and intensity of PA performed by people with multiple sclerosis and showed that the pattern for women was characterized by greater sedentariness and less activity of light intensity than for men. Both studies [11,12] quantitatively assessed PA (moderate-to-vigorous physical activity, MVPA) using accelerometers. Five studies focused, in particular, on mental health and PA. PA promotes different kinds of positive psychological responses. Regular exercise has a beneficial impact on depression and anxiety. It reduces stress and improves overall well-being. The first study starts from the evidence that poor sleep quality, common in young people, increases the risk of morbidity and mortality. In this area, Zhai et al. [13] highlighted that regular PA can improve poor sleep quality among college students. PA could enhance sleep by helping individuals cope with stress, indicating that stress management could be a nonpharmaceutical treatment for sleep improvement. Considering the mental health of young people, Usán Supervía et al. [14] examined the relationships between the constructs of goal orientations, emotional intelligence, and burnout in high school students. The authors outlined that the psychological profile arising from these features could be important for academic performance and school participation. Bíró et al. [15] examined gender, as a socio-economic determinant of health, by testing the validity of the biopsychosocial model of health with a limited life course perspective on a very large sample of students from Hungarian universities and colleges. Their findings suggested that determinants of male health included fewer variables focused on physical activity, and were less influenced by social relationships, in contrast to female health, which was influenced by age and social support. Kim and Ahn [16] showed that exercise participation for six weeks led to positive changes in the self-esteem and mental health of college students. In a narrative review, Belvederi Murri et al. [17] investigated the beneficial effects of PA on depressed populations.
A specific public health problem is the premature mortality of depressed individuals. This is mainly caused by increased cardiovascular risk, as depression leads to the development or exacerbation of unhealthy lifestyles. According to their findings, PA can reduce depression severity and directly address cardiovascular risk factors. In the field of public health, the development and dissemination of initiatives promoting exercise-based interventions in depressed populations are recommended, focusing on their cost-effectiveness.

Population Studies and Strategies for an Active Life Implementation

Nine articles in the Special Issue deal with this topic. Two studies took into account the multiple negative effects of physical inactivity on health and the factors involved. In a South African adult population, Chifaku et al. [18] assessed the levels and correlates of PA. They found that gender, marital status, and health awareness were significant predictors, pointing out a high prevalence of insufficient PA in some vulnerable groups, particularly the elderly and obese, and a general lack of participation in sports and recreational activities. As PA plays a fundamental role in the process of growth and development, Baqal et al. [19] analyzed data from a national study, "Jeeluna", on a large sample of adolescents living in the Kingdom of Saudi Arabia. The authors found that 67% of adolescents who did not exercise led a sedentary lifestyle. Males and adolescents aged 10-14 years were significantly more likely to engage in PA than females and adolescents aged 15-19 years. Among the factors contributing to high rates of inactivity among adolescents, the authors include the lack of PA programs in schools, hot weather conditions, poor family and peer support, and socio-cultural barriers, which have a particular impact on girls. Despite the known benefits of regular PA, there is a high percentage of physically inactive adults worldwide. Increased national attention on PA as a tool for health promotion and disease prevention is therefore required [20]. Five studies in this Special Issue examine different approaches and strategies that aim to increase PA. The first article, by Potter et al. [21], is a pilot study on activities that naturally involve PA, considering a stealth health approach to increase PA among inactive dog owners. The approach tested in this study showed that dog obedience training could have, as a side effect, a positive impact on both PA and sedentary behavior among dog owners; dog owners are induced to walk more and sit less. Given the large number of dog owners, this new approach to promoting PA may have a significant impact on public health and merits further investigation. In Latin America, the prevalence of obesity and overweight is increasing in all countries, despite the efforts of governments to promote healthy lifestyles. In this context, Farías [22] analyzed which emotions out of fear and hope are most effective in stimulating individuals to make health-related decisions, showing that these appeals in health advertisements do not have any main effect on PA intention, although this effect is positively moderated by perceived body weight and past healthy eating behavior, and is negatively moderated by subjective norms in diet and exercise. Another study conducted by Shi et al. [23] on university students indicated that the combination of insufficient physical activity levels with mobile phone addiction is significantly linked to high levels of irrational procrastination.
To improve efficiency and reduce irrational procrastination, it would be necessary to increase physical activity and reduce mobile phone addiction. A systematic review by Zaccagni et al. [24] reported the consequences for physical activity and health of the general lockdown implemented in Italy from March to May 2020 due to the COVID-19 pandemic. Their analysis of 23 studies showed that this lockdown led to a general reduction in PA and a shift toward unhealthy dietary habits in Italy, with a deterioration of the health status in both the general population and people with chronic diseases. According to the authors, individual outdoor exercise should be promoted, especially during daylight hours, while maintaining physical distance in the case of another lockdown to contain current and future pandemics. Particularly in older people, sedentary behavior is a serious public health problem. Monteagudo et al. [25] examined the impact of overground walking interval training in sedentary older adults by comparing two different dose distributions during a longitudinal study. Both training protocols led to a significant overall improvement of physical function in older adults. As regards the strategy to be used in the elderly, Monteagudo showed that the bout length is not a determinant of the functional health effects associated with exercise; splitting a single exercise into two sets during the day can be beneficial for autonomy, agility, and health-related quality of life. In particular, the accumulative strategy is to be recommended when health-related quality of life is the main goal, whereas the continuous strategy is to be recommended when weakness may be a short- or medium-term threat. The last two studies of this section concern the fitness sector and the spread of sports venues. The research of Moustakas et al. [26] aimed to define the drivers of change in the fitness sector and to identify the skills needed by the fitness workforce to navigate these changes. The main finding was that technology, health needs, and customer loyalty are critical drivers of change in the fitness industry. Fitness professionals must therefore respond by improving both their professional skills, especially in providing services for special populations, and their soft skills, stressing the particular importance of engaging with technology and having an understanding of specific health issues. Mainland China, one of the most populous upper-middle-income countries, also has to deal with a prevalence of NCDs and physical inactivity. Analyzing the relevant characteristics of sports venues associated with leisure-time PA in China, Wang et al. [27] identified the number and area of sports venues as the most important indicators. The number of sports venues, which increased between 2000 and 2013, is still comparatively small compared to the United States and Japan. The urban-rural gap in sports venues exemplifies just a few aspects of the 'urban-rural dual structure' in Chinese society.

Conclusions

The 20 manuscripts included differ in subject matter and methodologies applied, and we consider this variability to be an enrichment for the Special Issue. According to the previous subdivision, the studies included in this Special Issue dealt mainly with interventions to promote PA for persons of all age groups and implementation strategies for active living in different populations.
In general, the studies made important suggestions for planning targeted interventions for specific diseases, ages, or population groups, but also for providing guidelines for a healthy lifestyle, tailored to the requirements of individuals to achieve maximum effectiveness. PA interventions are needed to reduce the treatment costs of chronic morbidity; such savings may result from a lower prevalence and better control of CVD and its risk factors. PA-based interventions have also been shown to be effective as additional interventions in mental health. In this respect, it should be emphasized that exercise is still underprescribed for depressed individuals. It is therefore important to eliminate the barriers that are currently restricting this prescription by clinicians. The findings of several studies support the relevance of specific anthropometric variables as potential health indicators, suggesting that anthropometric characteristics and growth rates should be monitored in younger athletes. To improve clinical decision making by reducing the number of unnecessary examinations, the application of data mining to biomedical data, including anthropometric data, may be effective. The importance of applying appropriate methodologies for measuring quantitative traits (PA, strength, body composition measurements, etc.) was often emphasized in the articles. All of the studies support strategies to promote PA and reduce sedentary behavior among adolescents, adults, and the elderly. There is no doubt that regular exercise is beneficial to health, but the general population should be encouraged to engage in more of it. With the support of all the contributing authors, we are confident that we have provided a significant contribution to the knowledge of the topic addressed in this Special Issue.
Androgen Receptor, Although Not a Specific Marker For, Is a Novel Target to Suppress Glioma Stem Cells as a Therapeutic Strategy for Glioblastoma

Targeting androgen receptor (AR) has been shown to be promising in treating glioblastoma (GBM) in cell culture and flank implant models but the mechanisms remain unclear. AR antagonists including enzalutamide are available for treating prostate cancer patients in clinic and can pass the blood–brain barrier, thus are potentially good candidates for GBM treatment but have not been tested in GBM orthotopically. Our current studies confirmed that in patients, a majority of GBM tumors overexpress AR in both genders. Enzalutamide inhibited the proliferation of GBM cells both in vitro and in vivo. Although confocal microscopy demonstrated that AR is expressed but not specifically in glioma cancer stem cells (CSCs) (CD133+), enzalutamide treatment significantly decreased CSC population in cultured monolayer cells and spheroids, suppressed tumor sphere-forming capacity of GBM cells, and downregulated CSC gene expression at mRNA and protein levels in a dose- and time-dependent manner. We have, for the first time, demonstrated that enzalutamide treatment decreased the density of CSCs in vivo and improved survival in an orthotopic GBM mouse model. We conclude that AR antagonists potently target glioma CSCs in addition to suppressing the overall proliferation of GBM cells as a mechanism supporting their repurposing for clinical applications treating GBM.

INTRODUCTION

Glioblastoma (GBM) is the most common type of malignant central nervous system tumor in adult patients in the US, accounting for about 50% of them (1). Standard treatment for GBM includes maximal safe resection of the tumor, followed by concurrent chemoradiation and adjuvant chemotherapy. Some of the recent promising studies focused on identification of aberrant genetic and signaling pathways to develop small molecules for targeted therapies, characterization of glioblastoma cancer stem cells, modulation of tumor immunological responses and understanding of the rare long-term survivors (2-9).
However, even with the extensive efforts of research, current standard care using temozolomide concurrently with brain radiation therapy (RT) after maximal safe surgery only achieves a median survival of fourteen months in the overall patient population, or 22 months in the best prognostic group of patients carrying a hypermethylated MGMT promoter (10,11). Because this disease is universally fatal due to its resistance to the standard treatment of RT and chemotherapy, any research advance, even a small one, may have a significant impact on survival and provide hope to the thousands of patients who are diagnosed annually with this cancer. Despite extensive research conducted to comprehend the molecular regulation of GBM for potential clinical applications, our present knowledge about the tumorigenesis of GBM remains limited. Interestingly, the incidence rate of GBM is significantly higher in adult men than in women, as reviewed by Kabat et al. (12). Overall, the incidence rate of all glioma in adulthood is also 50% greater in men than in women (13,14). The exact mechanism underlying this pronounced epidemiology is unclear. Using New York State tumor registry data for the period from 1976 to 1995, McKinley et al. calculated crude and age- and sex-specific incidence rates for three types of gliomas: glioblastoma, astrocytoma not otherwise specified, and anaplastic astrocytoma (15). Results showed that, overall, males were 1.5 to 2.0 times more likely to develop GBM compared with females, even with age adjustment. In addition, experimental studies indicate that glioblastomas transplanted into animals grow at a slower rate in females compared with males (16). The oncogenic potential of sexual hormones and androgen/androgen receptors cannot be ruled out in the carcinogenesis of GBM. Indeed, steroid hormone receptors including androgen receptor (AR) are members of a superfamily of ligand-activated transcription factors that are potentially oncogenic in gliomas, as has been proposed by other researchers (17,18) and has been confirmed in prostate cancer (19). In contrast to estrogen receptors (ERs) and progesterone receptors (PRs), whose expressions in human and animal glioma and glioblastoma cell lines are varied and inconsistent (20-26), androgen receptors were consistently detected in a high proportion of gliomas. For example, Caroll et al. investigated the expression of the androgen, estrogen, glucocorticoid, and progesterone receptor messenger ribonucleic acid (mRNA) and protein in a number of astrocytic neoplasms of various histological grades (17). Androgen receptor mRNA was detected in all astrocytic neoplasms examined, regardless of histological subtype. Estrogen receptor mRNA was undetectable in all astrocytic tumors examined in that study. Chung et al. detected AR expression immunohistochemically in 40% of GBMs (grade IV gliomas) and 75% of anaplastic astrocytomas (grade III gliomas) (27). Interestingly, AR expression was also present in 39% of the female glioma samples, similar to the detectable ratio in males (47%). A more recent study from Yu et al. confirmed significantly upregulated AR expression in the GBM tissue as compared to normal peripheral brain tissue in patients by Western blotting assays. Furthermore, AR expression was detected in all eight human GBM cell lines used in this study (28).
AR mediates androgen effects via hormone-receptor binding in normal tissues in both male and female, although androgen-independent AR activation is a common finding in castration-resistant prostate cancers. Androgens derive predominantly from the testis but also, to a lesser extent, from the adrenal glands. Testicular testosterone and adrenal (the source for females) dehydroepiandrosterone (DHEA) or androstenedione can be converted into bioactive 5α-dihydrotestosterone (DHT) by the enzyme 5α-reductase; DHT binds to the AR and induces its conformational change. This leads to the dissociation of chaperone and heat shock proteins and the subsequent interaction between AR and co-regulatory molecules and importin α, which facilitates nuclear translocation of AR-ligand complexes. In the nucleus, the AR undergoes phosphorylation and dimerization, which permits chromatin binding to androgen-responsive elements (ARE) within androgen-regulated target genes (29). To our knowledge, despite these preliminary expression pattern studies of AR in GBM and its known functions/signaling in prostate cancer, there have been no reported studies confirming the therapeutic role of targeting AR in GBM in the brain, although a previous study from Zalcman et al. and a very recent report from Werner et al. showed promising results in flank implant models (30,31). Therefore, we used a syngeneic orthotopic mouse model to test the hypothesis that AR suppression using the AR antagonist enzalutamide is effective in suppressing tumor growth in the brain. We also studied the expression pattern of AR in GBM tumor specimens from patients treated at our medical center. Simultaneous experiments were conducted in the laboratory on the mechanism of AR inhibition in GBM cell lines, in which AR expression status was correlated to the effects of AR inhibition on anchorage-dependent cell growth, tumor sphere formation, as well as cancer stem cell survival/marker gene expression.

Cell Proliferation Assay

Cell titer blue assays were performed with cells cultured in 96-well plates treated with different concentrations of AR antagonists (enzalutamide and bicalutamide; Selleckchem, Munich, Germany) for 48 h before changing to fresh media and continuing culture overnight. 20 µl cell titer blue reagents (Promega, Madison, WI, USA) were added to each well containing 100 µl medium. After incubation at 37°C for 2 h, the fluorescence was read at 560/590 nm using SpectraMax (Molecular Devices, San Jose, CA, USA). IC50s of AR antagonists on GBM cell lines were calculated using GraphPad software (Version 8.3.1, San Diego, CA, USA).

Confocal Immunofluorescence Microscopy

U87MG, U138MG, and MGPP3 GBM cell lines were treated with DMSO (control), 20 µM, or 40 µM enzalutamide for 48 h. Cells were fixed with 4% paraformaldehyde for 10 min at room temperature, permeabilized with 0.5% Triton X-100 for 10 min and then washed in PBST three times. Cells were blocked with 1% BSA for 30 min and then incubated with the primary antibodies for 1 h at room temperature. The primary antibodies include anti-c-Myc (1:100) (Abcam, Cambridge, MA, USA) and AR antibody (441) (1:50) (Santa Cruz Biotechnology, Inc., Dallas, TX, USA). After incubating with primary antibodies, cells were washed with PBST three times, 5 min each, and then incubated with secondary antibody conjugated with Alexa Fluor 488 or Alexa Fluor 647 (Abcam, Cambridge, MA, USA) for 1 h at room temperature.
Cell nuclei were stained with DAPI mounting medium (Thermo Fisher Scientific Inc., Waltham, MA, USA) before being captured with the LSM800 confocal microscope (ZEISS, Germany). Similar procedures were performed for FFPE mouse brain tumor specimens for confocal microscopy, with the following primary antibodies used: anti-AR antibody (ab3510) (Abcam, Cambridge, MA, USA), anti-Nanog (PA5-85110) (Thermo Fisher Scientific Inc., Waltham, MA, USA), and anti-CD133 antibody (ab19898) (Abcam, Cambridge, MA, USA). The weighted colocalization was analyzed for 4 different areas of the confocal images using ZEN colocalization software (ZEISS, Germany).

Tumor Spheroid Formation and Treatment

U87MG and MGPP3 cells were cultured in media with 0.5% FBS in 96-well plates (ultra-low attachment) (Corning, Inc., Corning, NY, USA) at a density of 10,000/well and maintained at 37°C under 5% CO2 in a humidified incubator. After tumor spheroids were formed, DMSO (control), enzalutamide, or bicalutamide at specified concentrations were added into the culture media. The diameters of the spheroids were monitored every day for an additional 3-4 days under a microscope, and growth curves of the spheroids were plotted and compared between groups.

Flow Cytometry on Cancer Stem Cells in Tumor Spheroids/Cell Culture

The subpopulation of cancer stem cells in GBM tumor spheroids or adherent cultures was sorted and evaluated, with and without enzalutamide treatment, respectively, with an anti-CD133 CSC surface antibody (Miltenyi Biotec, Germany). After treating the tumor spheroids for 4 days with 120 µM enzalutamide or 180 µM bicalutamide, the tumor spheroids were harvested and gently dissociated to single cell suspensions using ACCUTASE™ (STEMCELL Technologies Inc., Canada). First, a total of 10^6 cells were stained with Live/Dead fixable dead cell staining dyes and then incubated with APC-conjugated anti-CD133 antibody (Miltenyi Biotec, Germany). After 30 min of incubation with CD133 antibody at 4°C, cells were washed with phosphate-buffered saline (PBS) at 300×g for 10 min. Samples were sorted using a FACS LSRII G Flow Cytometer and percentages of the CSC subpopulation were analyzed by FACSDiva software (Becton Dickinson, Franklin Lakes, NJ, USA).

Limiting Dilution Assays In Vitro and In Vivo on Stem Cell Content

To determine the content of CSCs or stem-like cells in the cultured GBM cell lines with or without enzalutamide treatment, U87MG or MGPP3 cells were treated with DMSO (control) or enzalutamide for 2 days in adherent cultures before being trypsinized and dissociated into suspended single cells. A series of numbers of suspended single cells were seeded in ultra-low attachment 96-well plates (Corning, Inc., Corning, NY, USA) at 1, 10, 25, 50 and 100 cells/well and cultured in media with 0.5% FBS for 14 days, with intermittent assessment to confirm the formation of tumor spheroids. After 14 days, the numbers of wells that had at least one sphere were counted manually under a microscope. To further confirm the change of CSCs with or without enzalutamide treatment, an orthotopic in vivo LDA experiment was performed. MGPP3 cells were treated with DMSO (control) or enzalutamide for 3 days before dissociating into single cells. 10^3 and 10^4 cells, with or without enzalutamide treatment, were inoculated into the brains of six mice for each group. The growth of the tumors was monitored using the PerkinElmer In Vivo Imaging System (IVIS) every week.
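The limiting dilution design above is typically analyzed with a single-hit Poisson model, which is also the model underlying the extreme limiting dilution analysis (ELDA) referenced later in the Results: a well (or mouse) remains sphere- or tumor-free with probability exp(-f·d), where f is the frequency of sphere- or tumor-initiating cells and d is the number of cells seeded. As a rough, illustrative sketch only (not the software actually used in this study, and with made-up well counts), the maximum-likelihood estimate of f could be computed as follows:

import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(log_f, doses, n_wells, n_negative):
    # Single-hit Poisson model: P(no sphere in a well seeded with d cells) = exp(-f * d)
    f = np.exp(log_f)
    p_neg = np.exp(-f * doses)
    n_positive = n_wells - n_negative
    ll = np.sum(n_negative * np.log(p_neg) + n_positive * np.log1p(-p_neg))
    return -ll

def estimate_frequency(doses, n_wells, n_negative):
    # Maximum-likelihood estimate of the sphere-forming (stem-like) cell frequency f
    res = minimize_scalar(
        neg_log_likelihood,
        bounds=(np.log(1e-6), np.log(1.0)),
        args=(np.asarray(doses, float), np.asarray(n_wells, float), np.asarray(n_negative, float)),
        method="bounded",
    )
    return np.exp(res.x)

# Hypothetical example (not data from this study): cells seeded per well,
# wells plated per dose, and wells without a sphere after 14 days
doses = [1, 10, 25, 50, 100]
n_wells = [24, 24, 24, 24, 24]
n_negative = [24, 22, 18, 12, 4]

f = estimate_frequency(doses, n_wells, n_negative)
print(f"Estimated sphere-forming frequency: about 1 in {1.0 / f:.0f} cells")

In practice, the ELDA tool also reports confidence intervals and a pairwise test for differences in frequency between treated and control cells, which is how estimates such as the 1/3,071 versus 1/16,498 frequencies reported later are compared.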
CSC Marker Gene Expression Analysis With TCGA Database

Spearman's rank correlation coefficients of mRNA expression levels between AR and various CSC marker genes were calculated based on RNA-seq results of GBM patients from the TCGA database.

RNA-seq and Quantitative Real-Time PCR

U87MG cells were treated with 80 µM enzalutamide for 4, 24, and 48 h before the total RNA was isolated using the RNeasy Plus Mini Kit (Qiagen, Netherlands). The RNA-seq was performed by next-generation sequencing (NGS) using NextSeq550 (Illumina, San Diego, CA, USA). For data analyses, each RNA-seq read was trimmed using Trimmomatic (34) to ensure that the average quality score was greater than 30 and the minimum read length was 30 bp or longer. Reads were mapped to the human genome (NCBI build 37) using Tophat v2.1.1 (35), which together accurately aligned an average of 90% of paired-end reads. Numbers of reads in genes were counted with the software tool HTSeq-count (36) using corresponding human gene annotations, and the "union" resolution mode was used. Differential expressions were computed for whole gene regions by summing reads for each region. For pair-wise differential expression comparisons, DESeq (v.1.36.0) (37) was used to analyze the numbers of reads aligned to genes and to identify differentially expressed genes. A threshold value for fold-change of differential expression was set at log2(fold-change) >1 (a two-fold actual change) and adjusted P-values <0.05 for rejecting the null hypothesis. Quantitative real-time PCR (qPCR) was performed using Taqman probes from Applied Biosystems (Thermo Fisher Scientific Inc.) on the reverse-transcribed cDNAs from the same RNA samples used in RNA-seq to confirm the changes of the genes with or without enzalutamide treatment. Fold changes of the gene expression levels were calculated by the delta Ct method relative to the control samples. Beta-actin was used as the internal control for normalization. The primer sequences used are as follows:

Syngeneic Orthotopic GBM Mouse Model

5 × 10^4 MGPP3 murine glioblastoma cells were stereotactically implanted into the right brain hemisphere of 16- to 17-week-old male mice weighing 20 to 30 g. The growth of the tumor was monitored using the PerkinElmer In Vivo Imaging System (IVIS) each week. Mice were imaged 10 min after intraperitoneal injection of luciferin (Biosynth International, Inc., Itasca, IL, USA) at 150 mg/kg. The mice were regrouped into vehicle (negative control) (10% DMSO, 30% PEG400, 60% corn oil) or enzalutamide treatment groups with equivalent mean values of bioluminescence signals between groups at week 5 after the implantation. Enzalutamide (20 mg/kg, dissolved in the vehicle solution, 100 µl/injection) or vehicle only (100 µl/injection) was injected intraperitoneally (IP) into the mice three times per week per previously published protocols (38). The treatment was continually given to the mice until week 15 after the implantation (week 10 after starting drug treatment) or death. Mice that presented with signs of near death, such as seizures, were euthanized by neck dislocation. After death was confirmed, mice were perfused with 10% formalin in PBS and brain tissues were dissected for immunohistochemistry (IHC) studies. All studies were carried out in compliance with the local ethical guidelines for animal experiments. The protocol was approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Nebraska Medical Center (protocol #: 16-134-01).
All the mice with tumors received palliative care for pain control after surgery, during the follow-up, and prior to euthanization per institutional guidelines.

Immunohistochemistry

Serial unstained slides were cut from the formalin-fixed paraffin-embedded (FFPE) tissue blocks of GBM specimens from deceased patients, normal brain autopsy tissue from patients who died of non-neurological disease, and temporal lobectomy surgical specimens from patients with epilepsy, under a protocol approved by our Institutional Review Board. IHC for AR (clone SP107 rabbit monoclonal antibody, Cell Marque, Rocklin, CA, USA) was performed using the BenchMark Ultra IHC/ISH system (Roche, Basel, Switzerland). Slides cut from the FFPE mouse brain GBM tissue were incubated with anti-AR (ab3510), anti-CD133 (ab19898), anti-Sox2 (ab97959) and anti-c-Myc (ab32072) individually or in combination. All these antibodies were from Abcam, Cambridge, MA, USA. After staining for the above markers with substrates incubated and color developed, slides were scanned with a Ventana iScan HT slide scanner at 400× magnification and quantified using Definiens Tissue Studio (Ventana, Munich, Germany).

Statistics

Experimental data for cell proliferation assays, tumor spheroid sizes and IHC signals were calculated as mean ± standard error of the mean. Student's t-test (two groups) or one-way ANOVA (more than two groups) was performed using GraphPad Software (Version 8.3.1, San Diego, CA, USA). Overall survival (OS) was compared between the enzalutamide treatment and vehicle-only groups of mice with Kaplan-Meier analysis. Results were considered statistically significant if p <0.05.

AR Is Commonly Overexpressed in GBM Tumor Specimens From Patients

We performed immunohistochemistry (IHC) studies on tumor specimens from GBM patients and demonstrated overexpression of AR in tumor tissues when compared with control brain specimens (brain tissue from patients without neurologic disease/tumor or patients with temporal lobectomy for epilepsy, as shown in Figure 1). The majority of both male and female GBMs were found to have high AR nuclear expression levels in a significant percentage of cells in the tumor (Figures 1F, L). We observed a pattern of peri-arterially enriched AR expression (Figure 1K). In normal brain tissue controls from autopsy, no AR expression was detected (Figure 1Q). Similarly, lobectomy tissue from epilepsy patients showed very low AR expression detected in very few cells (Figure 1R). Forty-three out of 58 GBM patients (74%) examined so far in our database display positive AR expression in >10% of tumor cell nuclei, with the other eleven patients showing 1-10% positivity (93% with >1% positivity); only four patients' tumors were found to be completely devoid of AR staining. Ninety-seven percent of male and 87.5% of female patients, respectively, were found to be positively stained for AR in >1% of tumor cell nuclei (P = 0.33). Seventy-nine percent of male and 66.7% of female patients showed positive AR staining in >10% of tumor cell nuclei (P = 0.30). Review of the staining pattern and positivity counting was performed independently by two pathologists from our institute with a high level of consistency.

AR Antagonists Inhibit the Proliferation of GBM Cells In Vitro

We demonstrated that commercially available GBM cell lines, including A172, Ln229, M059K, U87MG and U138MG, are all AR-positive but with variable expression levels (Figure 1S). Our results are consistent with the finding from Yu et al.,
who showed that all twelve GBM cell lines tested expressed AR (28). AR antagonists, enzalutamide or bicalutamide, inhibited the proliferation of GBM cells and significantly reduced viability after two days of treatment in a dose-dependent manner in all human and murine GBM cell lines tested in vitro (Figure 2). The GBM cells' sensitivity to the drug was not found to be related to the level of AR expression. Even though AR expression levels are relatively low in some cell lines such as U87MG and Ln229, they were still susceptible to AR antagonists, with IC50s of enzalutamide and bicalutamide being ~40 and 80-160 µM, respectively, for all tested human GBM cell lines.

Enzalutamide Downregulates c-Myc and AR Expression in a Dose-Dependent Manner

c-Myc, an extensively studied oncogene, has an important role in ensuring tumor development, promoting proliferation and maintenance of cancer progenitor cells in human cancers (39-41). c-Myc, along with other stem cell genes including SOX2, BMI1 and OCT-4, is highly expressed in prostate cancer stem/progenitor cells (42). We studied the relationship between c-Myc expression and AR blockade in GBM cells. The human prostate cancer cell line LNCaP, an AR-positive cell line, was used as a positive control for AR and c-Myc expression in GBM cells (Figure 3A). Both AR and c-Myc expression levels in U87MG and MGPP3 cell lines were downregulated after 20 µM enzalutamide treatment for 24 h, and both decreased further with a higher concentration of enzalutamide (40 µM) (Figures 3B, D). The downstream genes of c-Myc such as FOXO3a and CDC25A also decreased significantly, or with strong trends, at the protein level after enzalutamide treatment in both U87MG and MGPP3 cells. Another downstream gene, GADD45A, showed a significant decrease in MGPP3 cells but not in U87MG after drug treatment (Figure 2S). MGPP3 murine GBM cells showed similar AR expression patterns and a dose-dependent response to enzalutamide treatment (Figures 3C, D). However, unlike LNCaP prostate cancer cells with nuclear-specific AR distribution, both GBM cell lines cultured in vitro showed cytosol-enriched subcellular localization of AR, which is in contrast with the nuclear-dominant localization in GBM patients' specimens based on IHC staining.

Cancer stem cells (CSCs) can be enriched in vitro in spheroids using an ultra-low concentration of serum (43). Although consensus has not been reached on the most representative marker(s) to detect CSCs in GBM, the cell surface marker CD133 has been the most commonly used (44,45). After the formation of U87MG spheroids in culture media with an ultra-low concentration of serum, incubation of the spheroids with AR antagonists suppressed their further growth (Figures 4A, B). In contrast, untreated (DMSO control) spheroids continued growing in culture media, but the growth was delayed or completely arrested with increasing concentrations of AR antagonists added. After treatment with either enzalutamide or bicalutamide, the proportion of CD133+ cells in U87MG spheroids was significantly decreased compared with the control group treated with DMSO only, based on flow cytometry (Figures 4C, D). The average percentages of CSCs in spheroids were 3.1 ± 0.3, 2.2 ± 0.1 and 1.6 ± 0.2 in the DMSO control, enzalutamide and bicalutamide groups, respectively.
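The IC50 values quoted above were obtained with GraphPad Prism. Purely as an illustration of the underlying calculation (not the authors' actual analysis, and with invented viability numbers), a four-parameter logistic dose-response fit could be performed as follows:

import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    # Standard 4-parameter dose-response curve: viability falls from 'top' to 'bottom' around IC50
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical viability data (fraction of DMSO control) for one GBM cell line
conc = np.array([2.5, 5, 10, 20, 40, 80, 160])           # enzalutamide, µM
viability = np.array([0.98, 0.95, 0.88, 0.72, 0.51, 0.28, 0.12])

p0 = [0.0, 1.0, 40.0, 1.0]                                # initial guesses: bottom, top, IC50, Hill slope
params, _ = curve_fit(four_param_logistic, conc, viability, p0=p0, maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ≈ {ic50:.1f} µM (Hill slope {hill:.2f})")

Run separately for each cell line and drug, this kind of fit yields the IC50 summaries described above.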
AR Antagonists Downregulate Cancer Stem Cell Marker Gene Expression in GBM Cells In Vitro in a Time-Dependent Manner

In addition to the cell surface marker CD133, other cancer stem cell markers/embryonic stem cell markers such as Nanog and Oct4 have also been widely used for cell lineage studies in cancers including GBM (46-50). The expression levels of Oct4 and Nanog in U87MG spheroids with and without enzalutamide treatment were studied by Western blotting. We observed enrichment in the stemness markers Nanog and Oct4 in U87MG cells over 72 h of culturing time in spheroids when treated with solvent control only (Figure 4E). Enzalutamide treatment significantly decreased expression of both Nanog and Oct4 proteins in spheroids after only one day of incubation with the drug as compared to controls with DMSO treatment only. The proportional reduction of Nanog and Oct4, relative to the DMSO control, became more significant after prolonged (three-day) treatment with enzalutamide (Figure 4F). GATA4, a downstream gene of Oct4 and Nanog, also decreased significantly in its protein expression after the treatment of enzalutamide (Figure 2S).

In Vitro and In Vivo Limiting Dilution Assays Demonstrate That Enzalutamide Suppresses Tumor Sphere-Forming Capability in GBM Cells In Vitro and Tumor Formation In Vivo

The in vitro limiting dilution assay (LDA) has been used widely to analyze the cancer stem cell population under various culturing conditions (51-54). Extreme limiting dilution analysis (ELDA) is a statistical software application that calculates the proportion of cancer stem cells in a mixed cell population (55). Our ELDA experiments demonstrated that enzalutamide treatment significantly decreased the tumor sphere-forming capacity of the cancer stem cell subpopulation in both U87MG and MGPP3 cell lines (Figure 4G). The inhibitory effects of the AR antagonist are dose-dependent, with 80 µM enzalutamide exhibiting significantly enhanced suppression of tumor sphere formation/the CSC subpopulation in the U87MG cell line compared with the negative control or the lower concentration of enzalutamide (40 µM). Interestingly, the inhibitory effects of enzalutamide on tumor sphere formation in MGPP3 cells are also significant compared to the negative control but seem to be saturated at doses above 40 µM (Figure 4G). In vivo LDA is the gold standard to test the tumor-initiating capability of CSCs. With each mouse brain inoculated with 10^4 MGPP3 cells, all six mice in the control group without drug pretreatment had tumor growth as expected, whereas only three out of six in the enzalutamide-pretreated group had tumor growth after 6 weeks. In mice inoculated with a decreased number of tumor cells (10^3 cells each), one out of six mice in the control group had tumor growth while zero out of six mice in the enzalutamide-pretreated group had tumor growth 6 weeks after implant. ELDA software estimated that the ratio of CSCs with tumor-forming capacity in the cell line decreased from 1/3,071 to 1/16,498 after the treatment of enzalutamide (p = 0.022) (Figure 4H).

mRNA Expression Levels of AR Are Positively Correlated With GBM CSC Marker Genes as Well as Genes/Pathways Related to Proliferation

To explore the correlation between the AR gene and GBM cancer stem cell genes in The Cancer Genome Atlas (TCGA) database, we selected 10 well-known GBM cancer stem cell genes (45,56).
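As a minimal sketch of this kind of correlation screen (the expression values below are randomly generated stand-ins rather than TCGA data, and the gene list is purely illustrative), Spearman coefficients between AR and candidate CSC markers could be computed as:

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Made-up stand-in for a TCGA GBM expression matrix (rows = samples, columns = genes);
# in practice these values would come from the TCGA RNA-seq data cited in the text.
rng = np.random.default_rng(0)
n_samples = 150
ar = rng.lognormal(mean=2.0, sigma=0.5, size=n_samples)
expr = pd.DataFrame({
    "AR": ar,
    "SOX2": ar * rng.lognormal(0.0, 0.3, n_samples),    # toy gene strongly tied to AR
    "PROM1": ar * rng.lognormal(0.0, 0.6, n_samples),   # toy gene weakly tied to AR
    "NANOG": rng.lognormal(1.0, 0.5, n_samples),        # toy gene independent of AR
    "GAPDH": rng.lognormal(3.0, 0.4, n_samples),        # housekeeping-style negative control
})

for gene in ["SOX2", "PROM1", "NANOG", "GAPDH"]:
    rho, pval = spearmanr(expr["AR"], expr[gene])
    print(f"AR vs {gene}: Spearman rho = {rho:.2f}, p = {pval:.1e}")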
We found that the mRNA expression levels of all GBM cancer stem cell genes are positively correlated with the AR gene, the highest correlation being with SOX2 (Spearman's rank correlation coefficient R = 0.59) (Figure 5B). No correlation was seen between the expression levels of AR and GAPDH, a housekeeping gene. In addition, from our RNA-seq analyses on U87MG cells cultured in vitro, we found that almost all cancer stem cell marker genes showed either significantly decreased expression (such as Nestin, ID1, FUT4 and L1CAM) or a trend toward decreased expression (Nanog, CD133 (Prom1), Sox2 and BMI1) after enzalutamide treatment (80 µM) for 48 h. Quantitative RT-PCR assays were further performed and confirmed that the mRNA expression levels of AR, CD133 (Prom1), Oct4, Nanog, and Sox2 were all significantly decreased after the treatment of enzalutamide (80 µM) for 48 h (Figure 5D). Based on the RNA-seq results after treating U87MG cells with enzalutamide for 48 h, comprehensive Gene Set Enrichment Analysis (GSEA) against the KEGG pathway database (https://www.genome.jp/kegg/pathway.html) was also performed to identify additional cellular functions of AR. Cancer stem cell signatures were included in different related pathways. The signaling pathways regulating pluripotency of stem cells, specifically the FoxO and TGF-β signaling pathways, are, as expected, listed among the top 15 pathways most affected after enzalutamide treatment; 32 differentially expressed genes (DEGs) after the treatment of enzalutamide were enriched in "hsa04550: Signaling pathways regulating pluripotency of stem cells" (p = 2.72 × 10^-4). In addition, genes/pathways involved in the cell cycle, Hippo, MAPK, PI3K-Akt and ErbB signaling pathways are listed as well, indicating the involvement of AR in promoting cell cycling/proliferation of differentiated GBM cells (Figure 5A). Furthermore, the heatmap generated from RNA-seq results confirmed the time-dependent downregulation of the expression levels of not only the genes specific for cancer stem cells but also those involved in TGF-β signaling, the cell cycle and cell proliferation in U87MG cells after enzalutamide treatment (Figure 5E).

Enzalutamide Downregulates Cancer Stem Cell Marker Gene Expression In Vivo, Inhibits GBM Tumor Progression and Significantly Prolongs Survival in Mice

We further examined the effects of the AR antagonist using a syngeneic orthotopic GBM mouse model. MGPP3 cells, which express luciferase constitutively, were intracranially injected into mice, and treatments (enzalutamide vs. vehicle control) were administered twice per week once tumors developed. Bioluminescence imaging was used to monitor the differences in tumor progression between treatment groups (Figures 6A, B). Representative IVIS images of progressed tumors (top right) and tumors that responded to enzalutamide treatment (two mice at bottom right) are shown in Figure 7B. We found that GBM tumor growth was suppressed (size reduced or stabilized) in five out of nine mice (55.6%) in the enzalutamide-treated group versus zero out of nine mice (0%) in the vehicle-only control group. Mice tolerated this dose of enzalutamide treatment well, with significantly more weight gain during the course of treatment (Figure 6C). Furthermore, mice treated with enzalutamide had significantly improved overall survival compared with those in the control group (p <0.05). The median survival for the control and enzalutamide-treated groups was 36 days and 54 days, respectively (Figure 6D).
All the mice in the control group died by Day 72 after tumor injection, whereas 50% of the mice in the enzalutamide-treated group survived. The tumors in these long-term surviving animals completely disappeared eight weeks after initiation of the drug treatment. An immunohistochemistry study on formalin-fixed and paraffin-embedded (FFPE) mouse brain tumor tissue demonstrated that expression of the cell surface CSC marker CD133 and the oncogene c-Myc was significantly decreased in tumors of enzalutamide-treated mice compared with those of the control group (Figures 6E, F). It was also interesting to observe that, although the percentages of AR-positive cells in the tumor did not differ significantly between enzalutamide-treated and control groups, the AR expression level, as reflected by IHC staining intensity, was decreased after enzalutamide treatment. Compared with control tissues without drug treatment, in which AR staining was mostly nuclear (95.4%), brain tumor specimens after enzalutamide treatment showed a significantly higher percentage of cells with a cytosol-dominant distribution (96.7%) (p < 0.05) (Figure 6E).

All CD133+ Cells Are AR Positive In Vivo But AR Expression Is Not Specific for Glioma CSCs

Confocal microscopy was performed to study the expression patterns of AR and glioma CSC marker genes on FFPE mouse brain tumor specimens. AR in orthotopically growing tumor cells showed variable staining intensities (Figure 7A) and nuclear-dominant expression patterns but was also detectable in the cytosol (Figure 7B). Clusters (thick arrow) as well as individually distributed (arrowheads) CD133+ CSCs are shown in Figure 7A. The CD133 staining pattern was consistent with a cell membrane distribution (Figure 7B). Because of the different subcellular distribution patterns, co-localization rates of AR and CD133+ cells were counted manually. All CD133+ cells (100.0% ± 0.0%) were positively stained for AR. Interestingly, individually localized CD133+ cells showed a high rate of co-localization with cells with higher-intensity AR staining, defined as >50% of maximal intensity (91.3% ± 7.5%) (arrowheads, Figure 7A). Clusters of CD133+ CSCs, as in the dashed-line box in Figure 7A, were all AR positive but were not always cells with high AR intensity (Figure 7B). AR expression in CD133-negative (AR+/CD133−) cells was also seen, but mostly (80.2% ± 8.3%) in cells with lower-intensity AR staining (thin arrows, Figures 7A, B). The staining pattern indicates that a higher AR expression level is associated with cells with higher stemness. Similarly, all Nanog-expressing (Nanog+) cells were AR+ by manual counting. More impressively, the Nanog-weighted Nanog/AR co-localization coefficient equaled 0.75 based on software analyses, which means that 75% of the positive Nanog staining signal co-localized with AR staining. Again, we noticed that Nanog+ cells tended to have stronger AR staining as well, with nearly all Nanog+ cells (89.5% ± 10.1%) showing AR signal intensities above the 50% threshold of the maximum (arrowheads, Figure 7C). In contrast, a large portion of AR+ cells (76.7% ± 10.2%) showed no detectable Nanog staining (thin arrows, Figure 7C). Again, these cells (AR+/Nanog−) mostly showed weak AR staining intensity (≤50% of maximum) (90.2% ± 5.1%), but still significantly higher than the background, as shown in the negative control (Figure 7D).
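The channel-weighted co-localization coefficients reported next (0.75 Nanog-weighted vs. 0.09 AR-weighted) are conventionally computed as Manders-style overlap fractions of thresholded fluorescence intensity. The exact software and thresholds are not specified in the text, so the following sketch (Python/NumPy, with synthetic pixel intensities and arbitrary thresholds) only illustrates the general calculation:

```python
import numpy as np

def weighted_colocalization(ch_a, ch_b, thr_b):
    """Manders-style coefficient: fraction of total channel-A intensity found in
    pixels where channel B exceeds its threshold (the 'A-weighted A/B overlap')."""
    ch_a = np.asarray(ch_a, dtype=float)
    above_b = np.asarray(ch_b, dtype=float) > thr_b
    return ch_a[above_b].sum() / ch_a.sum()

# Synthetic two-channel image patch standing in for Nanog and AR immunofluorescence.
rng = np.random.default_rng(7)
nanog = rng.random((256, 256))
ar = 0.5 * nanog + 0.5 * rng.random((256, 256))   # AR signal made deliberately broader than Nanog

nanog_weighted = weighted_colocalization(nanog, ar, thr_b=0.3)  # analogue of the Nanog-weighted value
ar_weighted = weighted_colocalization(ar, nanog, thr_b=0.7)     # analogue of the AR-weighted value
print(round(nanog_weighted, 2), round(ar_weighted, 2))
```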
Further support for these observations came from the software-based co-localization analyses, which showed that the AR-weighted Nanog/AR co-localization coefficient was 0.09, in contrast to 0.75 for the Nanog-weighted coefficient. We interpreted these results as follows: although all Nanog+ cells were also AR+ and Nanog protein staining heavily co-localized with AR protein, AR expression was seen much more diffusely across tumor cells than Nanog, because many weakly AR+ cells were Nanog-negative.

DISCUSSION

The presence of specific steroid hormone (estrogen, progesterone, or androgen)-binding receptors has been correlated with the clinical outcome and response to hormonal therapy in a number of different neoplasias, including breast, prostate and renal cell carcinoma (57). However, there is much less information available in brain tumors on steroid hormone receptor expression or on the response to hormonal, particularly androgen, suppression. Previous studies, as well as our own, showed that androgen receptors are consistently detected in a high proportion of gliomas (17,27,28,31). These findings may help to explain the gender difference in GBM incidence and indicate that AR might be a promising therapeutic target for treating GBM. Surprisingly, we found a similarly high proportion of AR-expressing GBM in female patients, and a GBM cell line derived from a female patient, Ln229, also responds to androgen receptor antagonists in a way similar to cell lines derived from male patients (Figure 2). The role of AR in gliomagenesis in female patients warrants further study. It has been reported that, following brain injury in rodent and bird models, astrocyte aromatase expression is upregulated transiently, starting from hours post-injury and lasting for a few weeks (58)(59)(60). These data provide a possible mechanism for the upregulation of AR and/or secondary AR self-activation through aromatase-mediated testosterone conversion/depletion, which chronically could induce AR overexpression and/or androgen independence. The ovaries and adrenal glands produce dehydroepiandrosterone (DHEA), androstenedione and testosterone. Furthermore, in postmenopausal women, the common age for GBM diagnosis, the ovary becomes an androgen-secreting organ (61). Androgen antagonists may thus play an equally therapeutic role in both genders. We have observed, for the first time, very consistent results from our studies both in vitro and in vivo showing that AR blockade downregulates the expression levels of the majority of the tested GBM CSC-specific marker genes. Cell culture studies showed significant reductions of cancer stem cell genes at both the mRNA and protein levels in both human and mouse cell lines cultured anchorage-dependently as well as in tumor spheroids. The AR gene is highly conserved between human and mouse. AR antagonists significantly suppress the expression of c-Myc, whose activity is required for proliferation, growth, and survival of glioma CSCs (56). The results strongly suggest that AR may be involved in the process of gliomagenesis and act as an essential factor for glioma CSC maintenance and/or proliferation, which is consistent with the findings that androgen/AR promotes neural stem cell proliferation (62). We acknowledge that the functions of androgen/AR in embryonic and somatic stem cells have been shown to be tissue type-dependent, and that the role of AR in cancer stem cells has been controversial, which again demonstrates the significance of our studies (63,64).
Supporting evidence for our hypothesis further stems from observations that AR expression is induced in the glial cells in animal brains after injuries (excitotoxic injury or stab wound induced) in both male and female rat and avian models (65,66). Although there is some discrepancy on whether reactive astrocytes or microglial cells are the source of an overexpression of AR, the data do suggest that AR might be playing a role in pathological conditions such as carcinogenesis in glial cells. In the rat model, AR was seen to be expressed at low levels in some cortex and hippocampal neurons but not in nonstimulated astrocytes, and no overexpression was seen in neurons adjacent to injury site. In mouse GBM specimens, we found negative staining of AR in adjacent and contralateral normal brain tissue. It is interesting to note that in patients with epilepsy, their temporal lobectomy tissue also showed some degree of elevation of AR expression when compared to brain tissues from autopsy of normal brain which again suggests the possible involvement of AR in the pathologic process of brain ( Figure 1). It is still debatable which CSC marker genes currently studied represent true stemness in these precursor tumor cells. Thus we have combined both in vitro and in vivo studies including tumor spheroid formation assays, limiting dilution assays in vitro and in vivo, CSC marker studies from TCGA database and from our RNAseq studies as well as IHC and confocal microscopy on multiple CSC marker genes to confirm that AR is essential for maintenance of glioma CSC population. Our results are consistent with other studies demonstrating that AR can bind directly to Nanog gene promotor and promote cancer cell stemness in hepatocellular carcinoma and ovarian cancer (67,68). Although our in vivo tumor model demonstrated a high percentage of CSCs in mouse GBM by IHC and confocal microscopy, there are very low abundance of CD133+ cells when GBM cells were cultured in vitro. Our findings are consistent with what have been reported previously that in vitro cultured GBM cells contain a very low percentage (0.3-5%) of CD133+ cells particularly in high serum conditions which induce differentiated state of the tumor cells that are CD133− (69)(70)(71)(72). Meanwhile, Jensen et al. reported that, when they implanted the U87MG cells into the mouse brain, they found that there was 30-40% CD133+ cells in the mouse GBM tissue developed, significantly higher than that in in vitro conditions. Interestingly, CD133+ cells also exhibited clustered distribution pattern in the culture tumor spheroids and brain tumor tissue as seen in our studies ( Figure 6E) (73). However, AR antagonists, both enzalutamide and bicalutamide, demonstrated significant efficacy in suppressing cell proliferation after only two days of treatment in cultured GBM cell lines indicating that AR may not only promote CSCs but also cell proliferation. Indeed, our unpublished studies also showed that blocking AR could induce G2/M cell cycling arrest in GBM cell lines. AR blockade can significantly downregulate c-Myc protein levels in GBM cells both in vitro and in vivo (Figures 2 and 6) with known cellular functions of c-Myc in cell proliferation and glycolysis in glioblastomas (74,75). Gene ontology from RNA-seq results also confirmed the additional functions of AR in cell cycling/ proliferation particularly by regulating Hippo, PI3K/Akt and MAPK signaling pathways which can all contribute to both CSC and differentiated cancer cell divisions ( Figure 5). 
Our results from confocal microscopy also confirmed that AR expression can be detected in both CD133+ and CD133− tumor cells, although AR expression levels, as indicated by staining intensity, were highest in isolated CD133+ cells compared with clustered CD133+ cells and AR+/CD133− cells (Figures 7A, B). Similar results were found using Nanog as another CSC marker (Figure 7C). Questions remain as to whether the higher AR staining intensity is due to a higher protein expression level or to protein aggregation in cells. Based on these results, we hypothesize that glioma CSCs may be more dependent on AR expression/function for maintenance and/or survival than more differentiated tumor cells. Cancer stem cells (CSCs), although comprising only a small portion of the tumor cell population, have the highest AR expression levels/staining intensities (AR+++), which decrease as the CSCs differentiate into partially differentiated cancer stem cells (PDCCs) (AR++) and subsequently into differentiated cancer cells (DCCs) with the lowest AR expression level/intensity (AR+) (Figure 7E). We are currently conducting further experiments, including overexpression of c-Myc in GBM cell lines to potentially overcome the effects of AR suppression, as well as CRISPR/Cas9-mediated AR knockout in GBM cell lines, to investigate the multi-faceted functions of AR in GBM tumor growth. Nonetheless, the particular efficacy of AR blockade in suppressing glioma CSCs signifies the importance of further research on this novel target, with its potential to overcome the tumor resistance mediated by CSCs to current standard care with RT and/or chemotherapy. AR antagonists have been used to treat prostate cancer for more than 35 years, with extensive clinical experience and accumulation of biological data (76). Enzalutamide, a new-generation AR blockade drug that is FDA-approved for metastatic prostate cancer and has also demonstrated excellent brain penetration capability, provides a readily testable drug for repurposing in GBM patients (77). Enzalutamide, unlike previous generations of AR antagonist drugs such as bicalutamide and flutamide, can not only prevent androgen binding to AR but also block the nuclear import of AR, including some of the AR splicing variants (AR-Vs). AR-Vs have been reported to contribute to prostate cancer progression through induction of epithelial-to-mesenchymal transition and acquisition of stem cell characteristics (78). The expression of AR-Vs lacking the C-terminal ligand-binding domain (LBD) was found to be increased in androgen-independent and metastatic prostate cancers (79,80). Some of these AR-Vs, such as AR-V7, are constitutively active and localized in the nuclear compartment, and their transcriptional activity is not regulated by androgens. However, Zhan et al. (81) also reported that, when expressed alone in cells, some AR-Vs (e.g., AR-V1, AR-V4, and AR-V6) localize mainly in the cytoplasm but can dimerize with AR-V7 or wild-type AR to become nuclear localized. There is very limited information on whether AR-Vs are present in GBM, except for a preliminary study from Zalcman et al. indicating that ~30% of glioblastomas in patients expressed a constitutively active AR splice variant (AR-V7/AR3) lacking the LBD (30). Whether AR-V7 is expressed in U87MG cells is controversial (30,82,83). No information exists on whether other types of AR-Vs are present in GBM, although data from Zalcman et al.
strongly indicate that the tumor is castration-resistant from the very beginning of GBM pathogenesis, unlike prostate cancer, which usually becomes castration-resistant only after prolonged androgen deprivation therapy. If that is the case, enzalutamide and other newer-generation AR antagonists such as apalutamide could provide superior GBM control compared with older-generation drugs or with antagonists/agonists of gonadotropin-releasing hormone (GnRH) that suppress androgen production, as demonstrated by a phase III clinical trial in metastatic prostate cancer (84). Our results showed that, when cultured in vitro, GBM cells, including the mouse MGPP3 cells, had a cytosol-dominant distribution pattern of AR (Figure 2) with or without enzalutamide treatment. However, IHC staining on both mouse and human brain tumor specimens demonstrated nearly 100% nuclear localization of AR, but a more cytosol-dominant distribution pattern after enzalutamide treatment in vivo. Although these results cannot establish whether cytosol-located AR-Vs are present in these GBM cells, they do indicate that the testosterone concentration in culture medium might not be as high as in brain tissue and thus might not induce translocation of AR to the nucleus when cells are cultured in vitro. Our results also provided evidence that enzalutamide can successfully block AR translocation to the nucleus, as reported before (85). With our results in GBM and previous studies in prostate cancer showing specific cancer stem cell suppression, AR antagonists could be good therapeutic candidates for repurposing in the treatment of GBM, particularly when combined with current standard-of-care modalities such as temozolomide and/or radiation therapy, to which cancer stem cells are known to be resistant (3,86,87). We also acknowledge that the effective in vitro dose of enzalutamide in GBM cells (IC50: ~40 µM) or for spheroids (60-120 µM) in our study seems to be higher than the therapeutic plasma concentration achievable in vivo. A phase I/II study revealed that the minimum (predose) plasma concentration (Cmin) of enzalutamide at steady state in the 150 mg PO daily dose cohort is about 20 µM, although nearly double that concentration can be achieved if the maximum tolerated dose is used (88). Preclinical studies on prostate cancer cells demonstrated that the Ki (inhibition constant) of enzalutamide is 86 nM and that the IC50 of enzalutamide for suppressing wild-type AR activation by testosterone, based on reporter gene transcription assays, is 219 nM. The IC50 of enzalutamide in cell viability assays for VCaP, an AR-dependent prostate cancer cell line, is 410 nM (89). These numbers are drastically lower than the IC50s we observed in cell proliferation assays in GBM cell lines. One explanation for the difference lies in the experimental conditions. For example, the culturing time after adding the drug and prior to the cell proliferation assays was much longer than ours (4 days vs. 2 days). The reason we chose a 2-day culturing time after adding the drug instead of 4 days is that after 3 days we started to see synchronized cell apoptosis, which develops rapidly, in U87MG and other GBM cell lines such as U138MG (data not shown). It is noted that in the studies by Zalcman et al., the concentrations of enzalutamide used to treat GBM cells for 48 h, the same duration as in our study, ranged from 10 to 80 µM, consistent with our data (30). We would also like to point out that the IC50s reported by Moilanen et al.
were measured under testosterone stimulation (mibolerone), which very likely would have caused a left-shift of the survival curve. However, Xue et al. reported that the IC50s of enzalutamide on different prostate cancer cell lines such as LNCaP, C4-2B, 22Rv1 and VCaP were 42, 20, 36, and 30 µM, respectively, without testosterone stimulation (90). The authors did not specify the culturing time for each cell line after adding the drug but stated that the shortest culturing time was 72 h. These IC50s are very similar to our results in GBM cell lines and probably reflect the conditions in vivo better, given that testosterone levels in elderly patients, male or female, are very low around the average age of GBM diagnosis. Another explanation for the higher IC50s seen in GBM cell lines is the potential presence of AR-Vs, as discussed above, which may render GBM cells much more resistant to AR antagonists compared with androgen-dependent prostate cancer cells; we will keep this in mind when developing future clinical trials repurposing AR antagonists for GBM treatment. Despite these explanations, enzalutamide in this range of concentrations (40-80 µM) might involve non-canonical target(s) with off-target effects, which warrants further study. Arguing against this hypothesis are the published data showing that knockdown of AR by siRNA resulted in significant inhibitory effects on GBM cell growth in vitro, although whether the effects were mainly on differentiated tumor cells or on CSCs is unclear (30). Nevertheless, our results support the potential of repurposing AR antagonists for GBM treatment. Enzalutamide showed significant efficacy in the syngeneic orthotopic mouse GBM model and was well tolerated, although only 50% of the mice survived long term (Figure 6). We did observe significantly more weight gain in drug-treated mice, which is a well-known side effect of androgen deprivation therapy. It is also noted that the survival curve after enzalutamide treatment initially showed the same pattern as the control group but eventually separated, indicating heterogeneity of the tumor response to the drug. Our pre-clinical results indicate that a further dose-escalation study for GBM patients, or combining this drug with other standard-of-care modalities for GBM such as temozolomide and/or RT, may be necessary to further improve the outcome, as supported by the most recently published data from Werner et al. in the flank implant tumor model (31). In summary, our data demonstrated tumor-suppressive effects of AR antagonists, particularly enzalutamide, in GBM cell lines and, for the first time, in an orthotopic mouse model. The potential mechanism of the drug effects appears to be at least partly mediated through inhibition of cancer stem cells via AR in gliomagenesis and may provide us with a novel target for GBM treatment.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are publicly available. These data can be found here: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE174295.

ETHICS STATEMENT

The animal study was reviewed and approved by the Institutional Animal Care and Use Committee of the University of Nebraska Medical Center.

AUTHOR CONTRIBUTIONS

CZ (16th author) and NZ designed the experiments and wrote the manuscript. NZ, CZ (16th author), FW, SA, KL, CZ (5th author), SC, DD, MP, BG, PZ, and SW did the experiments and analyzed the data. SB, TB, CL, and TH interpreted the data and revised the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING The work was supported by the National Institute of General Medical Sciences (1U54GM115458-01). ACKNOWLEDGMENTS The project described was supported by the National Institute of General Medical Sciences. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. We appreciate the thorough review and
v3-fos-license
2021-10-25T15:07:23.636Z
2021-06-30T00:00:00.000
239870607
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.35189/dpeskj.2021.60.2.2", "pdf_hash": "162946c492de5382880aaddd0588ff5c4235a372", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44826", "s2fieldsofstudy": [ "Education" ], "sha1": "3b6b7111138d6fa8343527060e18f16ac3d7c9ac", "year": 2021 }
pes2o/s2orc
STUDY ON IMPROVING COORDINATION SKILLS IN WOMEN’S BASKETBALL GAME . The purpose of this research is to highlight the effectiveness of the proposed motor programme for the improvement of coordination skills in female basketball players aged 13-15 years. The implementation of a motor programme focused on the development of coordination skills leads to their improvement and could have a significant impact on future results/performance. The participants in this study were 68 female basketball players who were divided into two groups, an experimental group (n = 36) and a control group (n = 32). The following tests were applied to assess the development level of their coordination skills: Tapping test, Rope jumping test, Alternate hand wall toss test, Square test and Kinaesthetic test. Both groups were assessed at the beginning of the competition season and at the end of the 6 months of specific motor programme. The programme proposed and applied to the experimental group consisted of: slalom circuits, elastic rope jumping, static and dynamic balance exercises, eye-hand, hand-foot and eye-foot coordination exercises, throwing and catching objects and rhythm-building exercises. The following research methods were used: literature review, experiment method, test method, mathematical and statistical method, graphical method. The results of the study showed that the experimental group recorded improved values in the final test compared to the results obtained by the control group. In conclusion, the proposed motor programme has proven its effectiveness as regards the improvement of coordination skills in female basketball players aged 13-15 years. Introduction In recent years, the game of basketball has developed significantly at world level, which has led to the improvement of sports performance.The permanent adaptation of training methods, the use of high-tech auxiliary equipment (FITLIGHT motion sensor, balance plate, dribbling glasses, training ladder, LED cable, etc.) and the implementation of technology in sport are factors that have contributed to achieving outstanding performance, which was hard to imagine a few years ago. It has lately been found that specialists in the field are extensively concerned with standardising the sports training process and implicitly developing original tests to identify solutions in this regard, but also with modifying the competitive system to ensure increased efficiency in the game of basketball (Oancea & Bondoc-Ionescu, 2015). One of the basic characteristics of basketball, namely the rapid alternation between offensive and defensive situations, allows players to display their technique but also fantasy and creativity, passion to compete, acrobatic shots, desire for affirmation and recognition (Krause & Nelson, 2018).Most basketball players have an impressive technical background, easily use both hands (ambidexterity) during the game and perform technical actions at high speed.Drinkwater et al. 
(2008) state that speed, agility and power are essential for basketball players.Therefore, specific physical training, along with competitive activity, involves a high level of coordination skills, which leads to the efficient adaptation of players' technical and tactical skills to the concrete game conditions (Bădău, 2006).Dragnea and Mate-Teodorescu (2002) define coordination skills as a complex of predominantly psychomotor qualities that require the ability to quickly learn new movements as well as rapid and efficient adaptation to various conditions specific to different types of activities by restructuring the existing motor background.According to Tudor (1999), coordination ability is a psychomotor quality that relies on the correlation between the central nervous system and skeletal muscles during the execution of a movement. Regarding the definition of coordination skills, several opinions have been expressed but, regardless of formulations, specialists in the field highlight that this psychomotor component is determined by the quality of the central nervous system.The coordination process refers to the individual's ability to match what is intended to be achieved with what is actually achieved (Potop et al., 2013).Therefore, the improvement of coordination skills should place emphasis on the ability to combine and connect movement, spatiotemporal perception and other specific qualities that are found in specific motor reactions such as perception of the opponent on the field, perception of distance or perception of the moment when the motor action begins (Erculj et al., 2010;Mishyn et al., 2018).Sadowski et al. (2014) state that the most important components of coordination ability include kinaesthetic differentiation, movement adjustment, reaction time, rhythm, spatiotemporal orientation, movement coupling and balance.Candra (2019) emphasises that one of the components of coordination ability that is often encountered in the game of basketball is related to eye-hand coordination; thus, the eye as a visual organ provides information, while the hand performs the task.In order to solve game tasks such as dribbling, shooting or passing the ball, cooperation is needed "in the nervous system of the hands and eyes" (Candra, 2019, p. 864). Coordination skills are known to significantly influence the level of sports performance, but more specifically, one can say that their individual level of development directly influences the player's technical background.Therefore, the main method of developing coordination skills is to practice, provided that exercises with progressively increased complexity are used (Bompa & Buzzichelli, 2002).When using a proper combination of coordination, intermediate and conditional skills, sports performance is optimal (Boccolini et al., 2013;Sevreza & Bourdin, 2015). Conditional skills (such as speed, endurance and power) are based on the metabolic efficiency of body systems and muscles, while coordination skills are based on the ability to receive and process the information received through optical, acoustic, vestibular, tactile and kinaesthetic analysers, which are involved in both movement and the development of motor skills.Instead, intermediate skills (such as suppleness) have limited effects on movement regulation. Taking into account the aforementioned aspects, we aimed to determine in our research the connection between basketball-specific technical and tactical elements and coordination skills, which is highlighted in Table 1. 
Purpose
This study aims to verify whether the implementation of a specific motor programme has positive effects on the development level of coordination skills in the game of basketball.

Objectives
• Designing a specific motor programme for the improvement of coordination skills in female basketball players aged 13-15 years;
• Determining the initial and final development levels of coordination skills in athletes from both groups (experimental and control).

Tasks
• selection of research groups;
• selection of tests/assessments;
• initial testing of research groups;
• development and implementation of the intervention programme;
• final testing of research groups;
• processing, analysis and interpretation of recorded data;
• drawing final conclusions.

Hypothesis
The implementation of a motor programme focused on the development of coordination skills has a significant impact on performance in the tests assessing these motor skills.

Participants
The participants in this study were 68 female basketball players aged 13-15 years, who were divided into two groups: an experimental group (n = 36) and a control group (n = 32). The experimental group was made up of athletes from three basketball teams, and the control group consisted of athletes from two basketball teams. The average age of the experimental group was 14.5 years (SD = 0.94); for the control group, the average age was 14.6 years (SD = 0.87). It should be mentioned that the athletes included in this research had been practising the game of basketball for a minimum of 3 years and a maximum of 5 years.

Instruments
In assessing the development level of coordination skills, the interrelation between tests and the components of coordination skills was taken into account, as highlighted in Table 2 and Figure 1.

Table 2. Interrelation between tests and the components of motor skills (personal contribution). Assessment of coordination skills: 1. Tapping test - quick reaction ability; 2. Rope jumping test - ability to combine movements (arms-feet); 3. Alternate hand wall toss test - ability to combine movements (eye-hand); 4. Square test - spatiotemporal orientation ability; 5. Kinaesthetic test - kinaesthetic differentiation ability.

To determine the level of coordination skills of the research participants and to collect data about the progress of the experimental group upon completion of the proposed specific motor programme, the following tests were used:

• Tapping test
The main purpose of this test is to assess the speed and coordination of the upper limbs. This test is part of the Eurofit Test Battery. To perform the measurement using this test, the following equipment is needed: a table, two yellow discs of 20 cm in diameter, a rectangle of 30 x 20 cm, measuring tape, stopwatch. The test requires the participant to stand in front of the table with feet apart at shoulder width, their non-dominant hand resting in the middle of the rectangle and their dominant hand touching the yellow disc on the same side. The two yellow discs are placed with their centres at a distance of 60 cm. At the "go" command, the participant must keep both feet on the ground and move their dominant hand from one side of the rectangle to the other to touch the yellow discs as quickly as possible. The timer is stopped when the participant has completed 25 full cycles (50 touches). The test is performed only once.
 Rope jumping test The main purpose of this test is to assess the ability to combine arm-foot movements.The following equipment is required to perform the measurement by means of this test: rope, stopwatch.The test consists in performing normal straight jumps with the rope from both feet to both feet for 60 seconds.The test is performed only once.  Alternate hand wall toss test The main purpose of this test is to assess the ability to combine eye-hand movements.The following equipment is required to perform the measurement with the help of this test: tennis balls or baseballs, a solid wall, marking tape, stopwatch.The test involves placing a mark at a certain distance from the wall (for example, 2 meters).The participant stands behind the line facing the wall.The ball is thrown with one hand in an underarm action against the wall and an attempt is made to catch it with the opposite hand.The ball is then thrown back against the wall and caught with the initial hand.The test can be performed for a set number of attempts or a set period of time (for example, 30 seconds, as in the present study).The number of catches is recorded.Through the constraint of a set period of time, the factor of working under pressure is also added.The test is performed only once.  Square test The main purpose of this test is to assess spatial orientation ability.The following equipment is required to perform the measurement by means of this test: meter, chalk, stopwatch.The test involves drawing on the ground a square of side 90 cm, which is divided into 9 squares of 30 cm.On the opposite sides, two more squares of the same size are drawn.From the square "0", the participant must perform jumps on both feet in ascending order, in the shortest possible time and without stepping on the dividing lines.The execution time is recorded and the number of errors is counted.For each error, 0.5 seconds are added to the final time.  Kinaesthetic test The main purpose of this test is to assess the kinaesthetic differentiation ability.Size 5 and size 7 basketballs are required to perform the measurement using this test.The test consists of free throws with balls of different sizes (in women's basketball, the official ball is size 6). Participants must perform 10 free throws by alternately using size 5 balls (500 g) and size 7 balls (620 g).The final result is given by the number of points scored in the 10 attempts. Procedure The research was conducted during the 2018-2019 competition season.In this period, athletes performed five workouts per week, and each workout lasted between 60 and 90 minutes.The proposed specific motor programme was applied to the experimental group and took 15-20 minutes of the total training time, while the control group performed training sessions in the classic manner of the coach.The specific motor programme was implemented over a period of 6 months.The content of the specific motor programme was based on action systems leading to the development of coordination skills, to which auxiliary equipment (FITLIGHT motion sensor, balance plate, dribbling glasses, LED cable, ball accessorysquare up, ball accessory -gloves, ball accessory -plastic bag, extender ball, coordination ladder) was added in order to achieve the desired results. 
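The Square test described above is scored by adding a fixed penalty to the measured time for every error; a small illustration of that rule follows (Python; the function and variable names are ours, not taken from the study).

```python
def square_test_score(raw_time_s: float, errors: int, penalty_s: float = 0.5) -> float:
    """Final Square test result: measured jumping time plus 0.5 s per counted error."""
    if raw_time_s < 0 or errors < 0:
        raise ValueError("time and error count must be non-negative")
    return raw_time_s + penalty_s * errors

# Example: a 4.2 s run with 2 line-touch errors is recorded as 5.2 s.
print(square_test_score(4.2, 2))
```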
Thus, the specific motor programme applied to the female athletes in the experimental group included different types of exercises that were largely dependent on the following components of coordination skills: ambidexterity, laterality, hand coordination and ball passing accuracy, general segmental body coordination, spatiotemporal perceptions, body coordination in performing various types of jumps, static and dynamic balance.Some types of exercises used in the intervention programme were: eye-hand, arm-foot and eye-foot coordination exercises, TE-TA circuits, throwing and catching objects, balance exercises, different types of jumps and changes of direction, but also reaction and rhythm exercises.The application of coordination exercises respected the didactic principles, especially the principle of accessibility that relies on three classic rules of the teaching practice, namely the transition from easy to difficult, from simple to complex, from known to unknown. Some concrete examples of exercises used to develop coordination skills are presented below:  eye-hand coordination: the athlete stands with both feet on the balance plate at a distance of half an arm's length away from the wall where the motion sensors are positioned.The athlete must deactivate the motion sensor by hand (using the hand that is closest to the sensor); 3 x 30 sec/series, break: 10-15 sec;  arm-foot coordination: the athlete stands an arm's length away from the wall.The motion sensors are placed on the ground, between the wall and the athlete.The athlete throws the tennis ball against the wall with one hand and catches it with the other hand while touching the light sensor with their foot; 2-3 x, 30 sec/series, break: 10-15 sec;  eye-foot coordination: five cones are placed in a circle.A motion sensor is attached to each cone.The athlete stands in the middle of the circle and must jump from both feet to both feet over the cone whose sensor lights up; 3 x 30 sec/series, break: 10-15 sec;  TE-TA circuit: the exercise begins when the FITLIGHT sensor positioned at the start lights up.From the baseline, the athlete performs footwork at the training ladder while facing the direction of movement with an inward trend simultaneously with ball passing around the trunk (to the right)one-count stoppingtwo-hand chest passing to the teammate who is in centre fieldsprinting to the basketregainingrunning shootingrecoveringdribbling to the second ladderfootwork facing the direction of movement with an inward trend concurrently with ball passing around the trunk (to the left)two-count stoppingtwo-hand chest passing to the teammate who is in centre fieldsprinting to the basketregainingjumping shooting (from the base of the three-second area, at an angle of 45 degrees).When the sensor placed near the training ladder lights up, the athlete must perform a squat;  throwing and catching objects: the athlete stands with both feet on the balance plate at a distance of half an arm's length away from the wall where the motion sensors are positioned.The athlete throws the tennis ball against the wall with one hand and catches it with the same hand while touching the light sensor with the opposite hand; 4 x 30 sec/series, break: 10-15 sec;  balance exercise: the athlete sits on the balance plate with both feet in the air.On either side of the athlete, at a distance of half an arm's length, a motion sensor is placed.The athlete must touch the sensor with the hand on that side; 3 x 30 sec/series, break: 10-15 sec;  jumping exercise: 
standing with legs apart and hands behind the head in front of a row of 5-6 crates placed at equal intervals of 2-3 m: multiple jumps on both feet over the crates of the same height (30 cm); 3 x, break: 30 sec;  reaction exercise: two balls of different colours are placed 4 m away from each other.The athlete is midway between the balls, and a sensor that lights up in different colours is placed 2 m in front of the participant.For each colour, the athlete has a different task to perform: RED = do a push-up; GREEN = move with added step to the right; YELLOW = touch the sensor; WHITE = move with added step to the left; 3 x 30 sec/series, break: 10-15 sec. For both groups (experiment and control), the study included an initial test at the beginning of the competitive year and a final test upon completion of the proposed motor programme.To assess the development level of coordination skills, the following five tests were used: Tapping test, Rope jumping test, Alternate hand wall toss test, Square test and Kinaesthetic test. The initial and final tests provided information on the performance of the research participants (experimental group and control group).This allowed noticing the progress of the experimental group after the implementation of the motor programme focused on improving coordination skills. Results The data were statistically processed with IBM SPSS software, version 23.The statistical analysis involved calculating the indicators of central tendency and mean differences between the experimental and control groups in pre-test and post-test.The effect size was also calculated using the Cohen's d coefficient.(Table 3 and Table 4) Table 3 In the first test for the assessment of coordination skills, Tapping test, female players in the experimental group obtained an average of 15.56 executions compared to female players in the control group, whose average was lower by approximately 3 executions (Figure 2).The differences are statistically significant.In the second test for the assessment of coordination skills, Rope jumping test, female players in the experimental group obtained an average of 153.66 executions compared to female players in the control group, whose average was lower by approximately 27 executions (Figure 3).The differences are statistically significant.In the third test for the assessment of coordination skills, Alternate hand wall toss test, female players in the experimental group obtained an average of 31.08 executions compared to female players in the control group, whose average was lower by approximately 11 executions (Figure 4).The differences are statistically significant.In the fourth test for the assessment of coordination skills, Square test, female players in the experimental group obtained an average of 4.7 seconds compared to female players in the control group, whose average was lower by approximately 4 seconds (Figure 5).The differences are statistically significant.In the last test for the assessment of coordination skills, Kinesthetic test, female players in the experimental group obtained an average of 7.58 executions compared to female players in the control group, whose average was lower by approximately 3 executions (Figure 6).The differences are statistically significant.As can be seen in Figure 7, the largest difference is found in the Kinesthetic test.Following the participation in the intervention programme, an average improvement of 127.62% was obtained in terms of performance.The smallest difference is found in the Tapping test where the 
percentage difference between the initial test and the final test is 19.66%.Therefore, it can be said that the proposed intervention programme leads to an improvement in performance of about 20% for this test.We also calculated an absolute average value of the percentage increase in the experimental team's performance for all tests assessing coordination skills and we obtained the value 54.93%.This value indicates that the implemented intervention programme has improved by 54.93% the average performance of the experimental team in the tests for the assessment of coordination skills, which confirms the research hypothesis. Discussion The present study aimed to test the effectiveness of a specific motor training programme for the improvement of coordination skills in female basketball players aged 13-15 years.We chose this age group to test the effectiveness of the proposed programme because it is the appropriate period for the acquisition of skills necessary to build the technical background of a basketball player.According to several authors, dribbling is the most important technical skill that should be highlighted in the game of basketball.Through dribbling, peripheral vision, the "sense of the ball" and the perception of distance from the opponent are developed, all of them combined with the intellectual abilities of a basketball player (Boychuk, 2015;Demcenco, 2017). The results of this study show that coordination skills can be significantly improved during a 6-month period through a systematic training programme that includes eye-hand, arm-foot and eye-foot coordination exercises, various TE-TA and slalom circuits, throwing and catching objects, static and dynamic balance exercises, different types of jumps and changes of direction but also reaction and rhythm exercises.The improvement of coordination skills was significant for all the components measured through the tests applied: speed and coordination of the upper limbs, body coordination, eye-hand coordination, spatial orientation, kinaesthetic differentiation ability.Of these components, the easiest to train through the proposed programme was the kinaesthetic differentiation ability.The relevance of these results is supported by many authors.Thus, Boichuk et al. (2018) consider that special attention should be paid to sport-specific skills because the technical training of basketball players depends on the development level of these coordination skills.Boichuk et al. (2017) claim that technical and tactical skills do not depend on a single coordination ability but on a combination of all the components of coordination ability.The research conducted by Kozina et al. (2018) highlights that the high level of development of coordination skills is decisive for improving the technique of the game and leads to its qualitative increase.At the same time, it enables the athlete to quickly adapt to changes during match play and make the best technical and tactical decisions. Coordination skills are also needed in recreational activities such as Adventure Park, which consist in completing routes of progressive difficulty, which are signalled by specific colours (yellow, green, red, blue and black).These types of activities require abilities related to balance, kinaesthetic differentiation, spatiotemporal orientation, reaction and coordination of body segment movements (Bădău & Bădău, 2018). 
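The effect sizes (Cohen's d) and percentage gains reported in the Results above follow standard formulas; the sketch below (Python/NumPy) shows the calculations with placeholder group summaries, since the full descriptive statistics are given in Tables 3 and 4 rather than in the text.

```python
import numpy as np

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's d between two groups using a pooled standard deviation."""
    pooled_sd = np.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def pct_improvement(pre_mean, post_mean):
    """Percentage change of the post-test mean relative to the pre-test mean."""
    return 100.0 * (post_mean - pre_mean) / pre_mean

# Placeholder means/SDs (only the 15.56 mean and the group sizes come from the text).
print(round(cohens_d(15.56, 1.3, 36, 12.60, 1.5, 32), 2))

# Averaging absolute per-test gains, as done for the 54.93% figure; only the first and
# last percentages are from the text, the middle three are invented for illustration.
per_test_gains = [19.66, 35.0, 48.0, 60.0, 127.62]
print(round(float(np.mean(np.abs(per_test_gains))), 2))
```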
In the preparation of badminton players (Srinivasan & Saikumar, 2012), handball players (Nesen et al., 2018) and volleyball players (Kozina et al., 2018), the use of the training ladder has led to improved coordination skills at lower limb level. The results of this study are relevant in at least two basic directions in basketball: specific technical training and secondary selection. Some limitations in conducting this study make us interpret the results with caution.First of all, the current training sessions of the female athletes from both groups were carried out by the study participants' coaches at that time.The experimental motor programme was developed by the main researcher, who is also licensed as a basketball coach.In the pre-test, there were no significant differences between the two groups.During the 6 months of the intervention, there may have been changes in the current training that were not kept under control but may have impacted the development of coordination skills.Second, coordination ability was assessed by specific tests that distinctly measured its components.It would be interesting to design a way of assessing the development of coordination skills in relation to actual performance during the game of basketball, either in a longitudinal study or in a predictive study. Conclusion The findings of this study show that the application of a specific motor programme integrated in the training sessions leads to significantly better results in terms of development of coordination skills.These results also attest to the significant role played by the use of modern auxiliary equipment (such as motion detection sensors) in the training and development of coordination skills. Therefore, we can say that the motor programme focused on the development of coordination skills should be included in the sports training process for the age group 13-15 years, where the priority should not be to get results but to get skills and abilities and develop qualities that will offer players solutions for any game situation. We hope that the results of this research will help physical education teachers and basketball coaches who teach for the age group 13-15 years. Figure 1 . Figure 1.Graphical representation of the interrelation between tests and the components of motor skills (personal contribution) Figure 2 . Figure 2. Differences between the experimental group and the control group in the Tapping test (T1 = initial test, T2 = final test) Figure 3 . Figure 3. Differences between the experimental group and the control group in the Rope jumping test (T1 = initial test, T2 = final test) Figure 4 . Figure 4. Differences between the experimental group and the control group in the Alternate hand wall toss test (T1 = initial test, T2 = final test) Figure 5 . Figure 5. Differences between the experimental group and the control group in the Square test (T1 = initial test, T2 = final test) Figure 6 . Figure 6.Differences between the experimental group and the control group in the Kinesthetic test (T1 = initial test, T2 = final test) Table 1 . Coordination skills specific to the technical elements and practices used in the game of basketball (personal contribution) Table 4 . Statistical analysis of the differences between the experimental and control groups in the tests for coordination skills -Final testing Experimental group: N = 36, Control group: N = 32; T2 = final test; M = mean; SD = standard deviation *** p ≤ .01,** p ≤ .05
v3-fos-license
2022-01-21T14:46:32.397Z
2022-01-21T00:00:00.000
246075455
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00246-022-02820-4.pdf", "pdf_hash": "6afefcf8c098857272e1e7088a52c0db37ee7aed", "pdf_src": "Springer", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44828", "s2fieldsofstudy": [ "Medicine" ], "sha1": "6afefcf8c098857272e1e7088a52c0db37ee7aed", "year": 2022 }
pes2o/s2orc
The Efficacy of Corticosteroids, NSAIDs, and Colchicine in the Treatment of Pediatric Postoperative Pericardial Effusion The objective of this study is to investigate and compare the efficacy of corticosteroids, NSAIDs, and colchicine in treating postoperative pericardial effusion (PPE) following cardiac surgery in the pediatric setting, on the basis of available literature. To investigate and compare the efficacy of corticosteroids, NSAIDs, and colchicine in treating postoperative pericardial effusion (PPE) following cardiac surgery in the pediatric setting, on the basis of available literature. A systematic review was conducted by carrying out a database search in PubMed on April 20th, 2021. An English language filter was added, but no time restrictions were applied. Lack of pediatric literature prompted a broadening of the search to include adult literature. One pediatric and four adult studies were included, but the pediatric evidence was not found to be of satisfactory quality, and the findings of adult literature could not be readily generalized to the pediatric setting. No well-founded conclusions could be drawn regarding the efficacy of corticosteroids, NSAIDs, or colchicine in treating PPE, as a striking lack of evidence for their efficacy in the pediatric setting were revealed. A knowledge gap was found in the literature, indicating a need for good-quality randomized controlled trials to bridge this gap. Introduction A common complication observed in patients following cardiac surgery is the development of pericardial effusion (PE) or the accumulation of excess fluid in the space around the heart. This can lead to life-threatening cardiac tamponade. Clinical signs of such postoperative pericardial effusion (PPE) include shortness of breath (dyspnea), malaise, discomfort or pain in the chest, low blood pressure, tachycardia, fever, and reduced urine output [1]. However, PPE can also present with non-specific symptoms or even asymptomatically [1]. The diagnosis of PPE can be carried out in a variety of ways, but the use of echocardiography and computed tomography has been reported in the literature [1][2][3]. While the precise pathogenesis of PPE remains to be elucidated, some theories have been proposed in the literature. The immune system is commonly implicated, for instance, with suggestions that an inflammatory mechanism is involved [2][3][4]. It has also been suggested that the development of PPE is the result of an autoimmune reaction, wherein the immune system produces antibodies against self-antigens that are exposed when the pericardium is damaged during surgery [3,4]. In fact, this theory may explain why younger children, who do not yet have a completely developed immune system and older adults, whose immune systems show a decline in competency, tend to exhibit lower incidence rates of clinically relevant PE [3]. Many studies have been performed to ascertain the incidence of this complication, but a wide range of values can be found in the literature, ranging from estimates as low as 1.1% to those as high as 6.2%, subject to variations in study design, sample size, and other factors [2,5]. Moreover, certain surgical procedures have been found to be associated more with this complication, than others; for instance, a study by Moh et al. found that patients undergoing coronary artery bypass grafting were more likely to develop pericardial effusion post-surgery than those who underwent valve replacements or other types of surgery [6]. 
A variety of factors have been suggested to influence the risk of developing PPE. Several studies in the literature have performed uni- and multivariate analyses to determine the factors that have a statistically significant impact on the likelihood of developing PPE [1,3] or on the likelihood of requiring readmission to the hospital with PE [5]. The findings of these studies have been summarized in Table 1. Given the prevalence of PE as a postoperative complication, one would expect a wide range of literature providing evidence for the effectiveness of the various methods of drug treatment reportedly being used. However, this does not seem to be the case. While many different approaches have been described for drug-based treatment of PPE in the literature, ranging from aspirin [2,5,6], non-steroidal anti-inflammatory drugs (NSAIDs) [2,3,5,6], and corticosteroids [5,6] to colchicine [2,3,5,6], not many studies have compared these approaches to one another in an attempt to elucidate which one is most effective. This is especially true in the pediatric setting, wherein literature on the effectiveness of individual drug treatment approaches is scarce to begin with. Thus, this systematic review will investigate the following question: which method of drug treatment is most effective for treating PPE in children following cardiac surgery?

Methods

A systematic literature search was performed using PubMed. The search terms included the MeSH terms 'pericardial effusion,' 'postpericardiotomy syndrome,' 'postoperative care,' 'anti-inflammatory agents, non-steroidal,' 'colchicine,' and 'adrenal cortex hormones' in various combinations with cardiac surgery, drug therapy, therapeutic use, and so on. The detailed search strategy can be found in Appendix 1. The studies were selected on the basis of pre-determined criteria: the participants must be human, the studies must be published in English, must have as their outcome the size (width or volume, assessed by means of echocardiography) and/or clinical signs of postoperative pericardial effusion (PPE) following cardiac surgery (early or late onset), must investigate the influence of drug-based treatments (specifically, colchicine, corticosteroids, or NSAIDs) on the outcome, and must either be open access or accessible through the Utrecht University library. Initially, only pediatric literature was sought, but upon finding a striking scarcity of literature in this age group, the search was broadened to include adult literature, in order to attempt a generalization of the latter's findings to the pediatric setting. No publication date restrictions were imposed.

Results

Studies investigating the impact of prophylactic drug-based treatments for PPE (n = 26) or PE developed as a result of causes unrelated to cardiac surgery (such as neoplastic causes) (n = 6), having animals as subjects (n = 1), and published in a non-English language (n = 16) were excluded. Case studies (n = 6) and reviews (n = 7) were later excluded. The selection process of papers can be visualized in the study flow diagram in Fig. 1.
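The MeSH-based search described in the Methods above can also be run programmatically against PubMed. The exact strategy is in Appendix 1 and is not reproduced here; the query string and the Biopython Entrez call below are only an illustrative sketch (the e-mail address is a placeholder required by NCBI).

```python
from Bio import Entrez  # Biopython

Entrez.email = "your.name@example.org"  # placeholder contact address

# Illustrative combination of the MeSH terms named in the text, not the Appendix 1 syntax.
query = (
    '("pericardial effusion"[MeSH] OR "postpericardiotomy syndrome"[MeSH]) '
    'AND ("anti-inflammatory agents, non-steroidal"[MeSH] OR "colchicine"[MeSH] '
    'OR "adrenal cortex hormones"[MeSH]) AND "cardiac surgical procedures"[MeSH]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "records; first PMIDs:", record["IdList"][:5])
```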
Table 1. Factors found to have a statistically significant impact on the likelihood of developing PPE or of requiring readmission with PE: type of procedure (patent ductus arteriosus repair, ventricular septal defect closure, conduit, and electrophysiology surgical procedures [5]; cardiac transplant, systemic-pulmonary artery shunt, atrial septal defect closure via surgery [5]), increased body surface area [3], cardiopulmonary bypass [3], use of inotropic agents [3], and Down syndrome [5].

One study was selected in the pediatric setting [8] and four in the adult setting [9][10][11][12]. The eligible studies were assessed for quality of evidence using the Cochrane Risk of Bias assessment tool, version 2.0 [7]. The pediatric study was judged to raise some concerns regarding possible bias in the randomization procedure and in selection of the reported result. The adult studies were all judged to have low risk of bias, except for [9,12], which were judged to raise some concerns or be at high risk (respectively) over bias in selection of the reported result. The domain-wise risk of bias assessment can be visualized in Fig. 2.

Fig. 1 (caption). Study flow diagram detailing the step-by-step process undertaken during the literature search for this review. The syntax entered into PubMed (see Appendix 1 for details) yielded 138 articles, or 137 non-duplicate articles. Of these, 16 were excluded after the English filter was applied, another 100 were excluded after the title/abstract screen, and finally another 16 articles were excluded after the full-text screen. Five articles were finally included in the review.

Fig. 2 (caption). Summary of risk of bias assessment carried out using the Cochrane Risk of Bias 2.0 assessment tool [7]. "?" indicates some concerns, "+" indicates low risk of bias, and "−" indicates high risk of bias. See text for details.

Table 2 (caption). Overview of literature on efficacy of corticosteroids in treating PPE in the pediatric setting [8]. Findings from pediatric literature: significant differences found between prednisone and placebo groups (p = 0.03).

Corticosteroids

A study by Wilson et al. [8] followed 290 children after they had undergone cardiac surgery. Of these, 21 were enrolled in the study (see Table 2 for inclusion criteria) and randomly assigned to the prednisone group (n = 12) or to the control group (n = 9). The former group was administered a prednisone suspension, while the latter group was given placebo. For the duration of the study, patients were not given any NSAIDs, including aspirin. Only simple analgesics like paracetamol were prescribed if required. Among other observations, the researchers studied the proportion of participants in each group who were in complete remission at 72 h and 1 week after the treatment had been started.
Horneffer et al. [9] followed 1019 patients after cardiac surgery, of whom 149 were enrolled in the study (see Table 3 for inclusion criteria) and randomly assigned to one of three groups initially: an ibuprofen group, an indomethacin group, or a placebo group. Patients were not given any aspirin for the duration of the study (any aspirin prescribed prior to enrollment was discontinued, to be resumed only at the end of the 10 days of treatment). Only non-aspirin or acetaminophen-containing analgesics were administered if requested. However, at 48 h post-initiation of treatment, the researchers assessed the patients and found that treatment in a number of participants (n = 74) had clearly failed (defined by persistence of one or more of the symptoms used to make a diagnosis of PPS) and required intervention. At this point, the study drug code was broken and a preliminary analysis of the data was conducted, revealing that of the patients in whom treatment had failed, a majority belonged to the placebo group (p < 0.02). Next, the patients were randomized into one of two groups, the ibuprofen group or the indomethacin group, for the remaining duration of the study. The study found that 90.7% of patients in the ibuprofen group, 87.5% of those in the indomethacin group, and 59.1% of those in the placebo group showed resolution of PPS symptoms. These differences were found to be statistically significant (p = 0.002) [9]. Meurin et al. [10] screened 5455 patients for PPE and of these, 196 were included (see Table 3 for inclusion criteria). These patients were randomly assigned to either a diclofenac or placebo group. Patients who had undergone CABG were additionally provided with "low-dose" aspirin. The intention-to-treat data analysis of the study findings revealed that while both groups showed a mean decrease in grade of PE severity, the difference in the magnitude of this change between the study groups (mean difference = −0.28 grade) was not statistically significant (p = 0.11). Additionally, the number of patients who developed cardiac tamponade (p = 0.49) or showed a decrease of at least 1 grade in PE severity (p = 0.845) did not differ significantly between the two groups. Change in mean width of PE (in mm) was also found not to differ significantly between the two groups (p = 0.07) [10]. An overview of studies [9,10] can be found in Table 3.

Colchicine

Amoli et al. [11] assessed 154 patients who had undergone open-heart surgery, all of whom developed PPE and were thus enrolled in the study (see Table 3 for inclusion criteria). The patients were randomly assigned to either a colchicine or placebo group. Patients who had undergone CABG were additionally administered 80 mg of aspirin per day. The study did not find any significant differences between the two groups, either in terms of mean PE size or PE severity at the end of treatment (p = 0.844) or in terms of the proportion of patients who showed at least a 1-grade reduction in PE severity as a result of treatment (p = 0.283) [11]. Meurin et al. [12] screened 8140 patients post-cardiac surgery for PE by means of transthoracic echocardiography (TTE) and of these, 197 patients were included in the study (see Table 3 for inclusion criteria). Participants were randomly assigned to either a colchicine or placebo group. Patients who had undergone CABG were also regularly given "low-dose" aspirin. At the end of treatment, patients were given a second TTE.
The intention-to-treat data analysis of the study findings revealed that mean change in PE grade from baseline did not differ significantly between the two groups (p = 0.23). Further, the number of patients who developed cardiac tamponade (p = 0.80) or showed a decrease of at least 1 grade in PE severity (p = 0.23) did not differ significantly between the two groups. Average change in width of PE (in mm) was also found not to differ significantly between the two groups (p = 0.27) [12]. An overview of studies [11,12] can be found in Table 3.

Table 3 footnotes: a,b In both cases, the study defines this as a PE of grade ≥ 2, "corresponding to a loculated effusion larger than 10 mm or a circumferential effusion of any size" [10,12].

Discussion

PPE is an important and potentially life-threatening complication after pediatric cardiac surgery. In spite of this, the evidence in support of current drug treatment options for PPE is extremely limited and based almost entirely on the findings of small-scale RCTs like the study by Wilson et al. [8]. Moreover, the guidelines provided by relevant bodies like the European Society of Cardiology on how to treat PPE seem to be merely an expert opinion, based purely on experience and not on scientific evidence. In fact, even the references provided by these guidelines for the use of anti-inflammatory therapy or colchicine (in adjunct with aspirin or NSAIDs) are studies that are not of very high quality or that describe the efficacy of the drug in prophylaxis as opposed to in the treatment of PPE [13]. To circumvent the problem of the lack of pediatric literature, adult data were included with the intention of attempting to generalize the findings of such studies to the pediatric setting. However, there were several limitations to this approach. Much of the adult literature included in this review involved samples of older adults (even though the age group of the cohorts in these studies is only specified to be above 18 years old, the procedures that the participants had undergone, CABG for instance, are characteristic of an older population [14]). Previous studies have suggested that extremely young children (as opposed to the relatively older children on whom the RCT included in this review was conducted) and older adults (who seem to be the primary study population of the RCTs included in this review) have immune systems that do not function optimally, which makes these groups less prone to developing severe PPE, given that the immune system is often implicated in its etiology [3,5]. This disparity in immune function between our population of interest and the population we have analyzed means that even though drugs like ibuprofen may achieve resolution of PPE in the latter, the same effect may not be observed in the former. This makes generalization of the findings from adult literature to the pediatric setting difficult and likely inadvisable. Another issue with the adult literature is that the results are conflicting and possibly even biased by prophylactic NSAID administration (after CABG surgery) in a significant percentage of patients in both placebo and drug groups, as seen in the studies by Meurin et al. [10], Amoli et al. [11], and Meurin et al. [12]. The issue is further compounded by a possible risk of bias in reporting results found in the studies by Horneffer et al. [9] and Meurin et al. [12].
Study [9] has a composite endpoint and does not report its findings on individual parameters, making it difficult to ascertain whether patients benefited from the drug specifically in terms of PPE (one of the parameters). Study [12], on the other hand, specifies frequency of pericardial drainage at 30 days post-initiation of treatment as a secondary endpoint, but fails to report its findings for this endpoint. Moreover, the procedure in study [9] did not include echocardiography for the participants, relying merely on clinical signs as inclusion criteria, which further makes it difficult to draw any well-founded conclusions about the efficacy of the drugs in question (ibuprofen and indomethacin) in the treatment of PPE. An interesting finding did, however, result from an analysis of the adult literature. It is notable that in studies [10][11][12], aspirin, an NSAID, was administered to CABG patients in both groups, and these studies also did not find significant differences in treatment outcome between their study groups. On the other hand, part of the procedure in study [9] was to withhold any aspirin from participants and only provide non-aspirin analgesics on demand, and this study did in fact find significant differences in outcome between its study groups. This indicates that the etiology of PPE might be inflammatory; the administration of NSAIDs to participants in both placebo and drug groups may have reduced the apparent effect of the drug being studied in [10][11][12], since the anti-inflammatory effects of aspirin may have led to greater resolution of PPE in the placebo group than might otherwise have been observed. A major limitation to being able to draw this conclusion with greater certainty, however, is the aforementioned potential risk of bias found in [9], the only study that did not administer any aspirin to its participants and also the only study to have found significant differences in treatment outcome. (It should be noted that the pediatric study by Wilson et al. [8] also withheld aspirin from its participants and also found significant results, but its sample size was too small (n = 21) for this finding to truly be of much significance, and as mentioned above, it also raised some concerns over risk of bias in the randomization procedure and selection of reported results.) The findings of this review were especially unexpected given the current prevalence of use of many of these drug treatments postoperatively, whether as treatment or prophylaxis for PPS. For instance, NSAIDs are commonly used for prevention of the development of PPS in children following cardiac surgery. A database search of PubMed in this case also served as a revelation; studies investigating the prophylactic use of NSAIDs (acetylsalicylic acid [15] and ibuprofen [16], both commonly employed in clinical practice) to prevent development of PPS in children found no significant results. That being said, the relatively low incidence of PPS, as surmised from [2,5] in the introduction section above, may have a role to play in these findings. A low incidence of PPS means that even at 100% drug efficacy, a large number of patients would need to be treated in order for PPS to be prevented in one patient; and since realistically no drug is 100% effective, the number of patients needed to be treated to show a significant effect of the drug would be higher still.
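As a rough illustration of this arithmetic, the sketch below uses hypothetical numbers that are not estimates from the cited studies:

```python
# Hypothetical number-needed-to-treat (NNT) calculation for PPS prophylaxis.
# The incidence and efficacy values are assumptions for illustration only.
baseline_incidence = 0.05          # assumed PPS incidence without prophylaxis (5%)
relative_risk_reduction = 0.50     # assumed drug efficacy (halves the risk)

absolute_risk_reduction = baseline_incidence * relative_risk_reduction  # 0.025
nnt = 1 / absolute_risk_reduction                                       # 40 patients

print(f"ARR = {absolute_risk_reduction:.3f}, NNT = {nnt:.0f}")
# Even a perfectly effective drug (RRR = 1.0) would need 1/0.05 = 20 patients treated
# to prevent a single case, so trials must be large to demonstrate a significant effect.
```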
Since such high numbers of patients needed to be treated are often difficult to achieve in practice, it might be worth not dismissing NSAIDs as a potential option for treatment or prophylaxis of PPS just yet. This can be supported by the possible confounding role NSAID administration may have played in the adult studies [10][11][12] included in this review; if administration of even "small amounts" of NSAIDs (as defined by the researchers of the above studies) was sufficient to skew the study results, then perhaps this can be used to a clinical advantage, especially given the relative safety of NSAID use even in children. Moreover, two additional studies retrieved during a PubMed database search found promising results using prophylactic NSAIDs (diclofenac) to prevent PPS development in adult populations [17,18], so perhaps further research is needed to determine the true efficacy of NSAIDs as prophylaxis or treatment for PPS in the pediatric population. Finally, this review may not have provided much concrete evidence for any of the three drugs investigated for the treatment of PPE, but it does shed light on the glaring lack of literature on the subject, indicating a need for future research. This is especially urgent for the pediatric setting, as children are not only more prone to developing PPE than the older individuals currently being studied [3,5], but there is also rather scarce literature on treating PPE in children. There is thus a need for well-designed pediatric trials confirming the efficacy of prednisone, NSAIDs, and colchicine in treating PPE and evaluating the possible side effects of such treatments, which are currently being prescribed entirely on the basis of individual experiences with the drugs. Since placebo treatment has already been shown to be ineffective and even risky [9], crossover trials may be better suited for this purpose (see Fig. 3 for a hypothetical study design). It might also be useful to conduct a study investigating the incidence of PPS in two cohorts, one being administered NSAIDs prophylactically and the other, placebo. Researchers must, however, account for the high number of patients needed to be treated to prove drug efficacy in this case.

Conclusion

The results of this study make apparent that very little is currently known about what the best drug treatment for PPE might be, and this is especially true for the pediatric setting. The inability to generalize the findings of adult literature to the pediatric setting further exacerbates the problem of the lack of pediatric evidence in support of any one drug treatment for PPE. Since PPE is a common postoperative complication with a possible impact on patient mortality, this severe lack of evidence must be rectified. There is, thus, an urgent need for good-quality clinical trials to investigate and compare the efficacy of corticosteroids, NSAIDs, and colchicine in treating pediatric PPE, a serious complication that modern medicine knows seriously little about.

Appendix 1

Detailed search strategy: search terms (PubMed)

Fig. 3 A hypothetical study design involving crossover trials to compare the efficacy of two drugs, A and B, in the treatment of PPE. Screening of participants would be followed by randomization into two study groups, one of which would receive drug A and the other, drug B. The end of Phase 1 of the study would be marked by primary data analysis and a crossover, wherein the two study groups would switch treatments.
Phase 2 of the study would then commence, and its end would be marked by the start of secondary data analysis. This would also mark the end of the study.
Lipid Mediator Informatics and Proteomics in Inflammation-Resolution

Lipid mediator informatics is an emerging area devoted to the identification of bioactive lipid mediators (LMs) and their biosynthetic profiles and pathways. LM informatics and proteomics applied to inflammation and tissue systems research provide a powerful means of uncovering key biomarkers for novel processes in health and disease. By incorporating them with systems biology analysis, we review here our initial steps toward elucidating relationships among a range of biomolecule classes and provide an appreciation of their roles and activities in the pathophysiology of disease. LM informatics employs liquid chromatography-ultraviolet-tandem mass spectrometry (LC-UV-MS/MS), gas chromatography-mass spectrometry (GC-MS), computer-based automated systems equipped with databases and novel searching algorithms, and enzyme-linked immunosorbent assays (ELISA) to evaluate and profile the temporal and spatial production of mediators; combined with proteomics at defined points during experimental inflammation and its resolution, these tools enable us to identify novel mediators in resolution. The automated system, including databases and searching algorithms, is crucial for prompt and accurate analysis of these lipid mediators biosynthesized from precursor polyunsaturated fatty acids, such as eicosanoids, resolvins, and neuroprotectins, which play key roles in human physiology and many prevalent diseases, especially those related to inflammation. This review presents detailed protocols used in our lab for LM informatics and proteomics using LC-UV-MS/MS, GC-MS, ELISA, novel databases and searching algorithms, and 2-dimensional gel electrophoresis and LC-nanospray-MS/MS peptide mapping.

INTRODUCTION

To qualify as a lipid mediator (LM), a product must be stereoselective in its actions and be generated by cells in quantities that are commensurate with its potency and range of action [1]. Low-energy ionization with electrospray avoids unwanted degradation and generates primarily molecular (or pseudomolecular) ions for collision-activated dissociation MS/MS analysis [4]. LC-UV-MS/MS can provide more direct spectral characterization for structural elucidation than GC-MS (gas chromatography coupled with MS) because samples can be analyzed without prior derivatization. The correlation of MS/MS fragments vs. structures of some LMs and their isomers has been determined [4,6,7,8]. The results indicate that physical properties are readily obtained and used for complete structural elucidation of LMs. GC-MS is also useful to provide additional information together with LC-UV-MS/MS to support structural identification and proposed structures. LC-UV is a widely used technique for eicosanoid analysis [9]. ELISA (enzyme-linked immunosorbent assay) is designed for quantification of specific LMs with high selectivity and sensitivity. It allows investigators to analyze a large number of samples in a timely fashion [10]. "Proteomics" is the study of the major functional component of the genome, i.e., the identification of all proteins in a chosen biological system and all their post-translational modifications [11]. This approach will be used to characterize genes and functional interactions among proteins that are important in inflammation and allow detection of subtle differences in protein levels that provide a detailed picture of inflammation.
Separation of proteins by two-dimensional (2D) gel electrophoresis, coupled with identification of proteins by tryptic peptide capillary LC-nanoelectrospray ionization (nanospray) ion-trap MS/MS followed by protein database searching using MS/MS spectra, is a powerful analytical method in proteomics, as depicted in Scheme 2. The comparison of changes in intensity and mobility of proteins of interest on 2D gels between samples from different treatment groups translates directly to changes in protein expression and modification of primary structure. Through identification of sets of proteins that are concertedly up- or down-regulated, the dynamic changes following a specific stimulus can be charted [11].

NOVEL LIPID MEDIATOR PATHWAYS IN INFLAMMATION AND RESOLUTION

It is now appreciated that inflammation plays a key role in many prevalent diseases. In addition to the chronic inflammatory diseases such as arthritis, psoriasis, and periodontitis, it is now increasingly apparent that diseases such as asthma, Alzheimer's disease, and even cancer have an inflammatory component associated with the disease process. Therefore, it is important for us to gain more detailed information on the molecules and mechanisms controlling inflammation and its resolution [12]. Toward this end, we recently identified new families of LMs generated from fatty acids during resolution of inflammation, termed resolvins and protectins [8,13,14] (Fig. 1) (Table 1). Resolvins and protectins are autacoids that play a critical and broad role in human health and diseases, especially those related to inflammation and resolution [3]. These novel mediators, generated from eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) and displaying potent bioactions, were first identified in resolving inflammatory exudates and in tissues enriched with DHA [7,8,13]. The trivial names resolvin (resolution phase interaction products) and docosatrienes (DTs) were introduced for the bioactive compounds belonging to these novel series because they demonstrate potent anti-inflammatory and immunoregulatory actions. The compounds derived from EPA carrying potent biological actions (i.e., in the 1-10 nM range) are designated E series, given their EPA precursor, and denoted as resolvins of the E series (resolvin E1 or RvE1), and those biosynthesized from the precursor DHA are resolvins of the D series (resolvin D1 or RvD1). Bioactive members from DHA with conjugated triene structures are DTs that are immunoregulatory [8,13] and neuroprotective [15] and are termed protectins/neuroprotectins. Aspirin treatment [38] impacts biosynthesis of these compounds and a related series by triggering endogenous formation of the 17R-D series resolvins and docosatrienes. These novel epimers are denoted as aspirin-triggered (AT)-RvDs and -DTs, and possess potent anti-inflammatory actions in vivo essentially equivalent to their 17S series pathway products.

Criteria for Identification of a Bioactive Lipid Mediator

The criteria used to identify a known LM for LC-UV-MS/MS-based lipidomic analysis follow those described in [9,16].

Sample Extraction Procedures for Lipidomic Analysis

All incubations and in vivo samples (i.e., exudates and tissues) will be stopped with 2 vol of cold methanol [15]. Briefly, this procedure is tailored as follows: an internal standard [for example, 50 ng of a deuterium-labeled LM (e.g., d4-PGE2 or d4-LTB4) or PGB2] will be selected and added to determine the extraction recovery (typically >90%) of the LMs.
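Because a known amount of internal standard is spiked before extraction, its measured recovery can later be used to correct the apparent amounts of the endogenous mediators. The snippet below is a minimal sketch of that correction; the numbers are hypothetical, and in practice the recovered internal-standard amount would itself be read off a calibration curve for the labeled standard.

```python
# Sketch: recovery correction using a spiked internal standard (hypothetical values).
spiked_is_ng = 50.0        # amount of deuterium-labeled internal standard added (ng)
recovered_is_ng = 46.0     # amount recovered after extraction, from its own calibration

recovery = recovered_is_ng / spiked_is_ng          # e.g., 0.92 (92% recovery)

measured_analyte_ng = 3.1  # apparent amount of an endogenous LM in the extract
corrected_analyte_ng = measured_analyte_ng / recovery

print(f"recovery = {recovery:.0%}, corrected amount = {corrected_analyte_ng:.2f} ng")
```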
The samples are centrifuged (3,000 rpm, 4 °C, 15 min) to remove cellular and protein materials. After the supernatants are decanted, they are diluted with 5 vol of Milli-Q water. The pH is adjusted to 3.5 with 1 M HCl for C18 solid-phase extraction (SPE). After washing with 15 ml of H2O and then 8 ml of hexane, the SPE cartridges (C18, 3 ml, Waters, MA) are eluted with 8 ml of methyl formate, and the effluent is reconstituted into methanol for lipidomic analysis using LC, GC-MS, or LC-MS/MS [17,18].

GC-MS-Based Lipidomic Analysis

These LMs need to be converted to derivatives for GC-MS analysis. The derivatization includes methylation of carboxyl groups and silylation of hydroxyl groups to trimethylsiloxy groups. Methylation will be conducted via reaction with diazomethane. MNNG (1-methyl-3-nitro-1-nitrosoguanidine; N-methyl-N'-nitro-N-nitrosoguanidine) (Sigma-Aldrich, MO) is converted with 5 N NaOH solution to diazomethane gas, which is trapped into ice-cold diethyl ether. Each sample will be treated in 0.5 ml of the ether solution of diazomethane for 30 min at room temperature. After the ether and excess diazomethane are removed with N2, the sample will be silylated with 0.1 ml of BSTFA (N,O-bis[trimethylsilyl]trifluoroacetamide) reagent (Pierce, IL) for 24 h, protected from light, as in Serhan [19]. GC-MS will be performed with an Agilent 6890 gas chromatograph coupled with a 5973 mass spectrometer (Agilent Technologies, CA). The conditions typically are: column, HP-1 0.25 mm × 0.25 μm × 30 m (Agilent); splitless on time, 0.9 min; column temperature program, 150 °C (1 min), 230 °C (4 min), 240 °C (8 min), and 245 °C (12 min), as in Nicolaou et al. [16] and Serhan [19].

LC-UV-Based Lipidomic and Chiral Analysis

LC-UV-based lipidomic analysis of LMs will be conducted on an Agilent 1100 HPLC-UV system or an Agilent 1040 HPLC-UV system with the photodiode-array (PDA) UV detector scanned from 200-360 nm. The general conditions are as follows: for achiral LC-UV, a Prodigy ODS(3) column (100 × 2 mm, 5 μm; Phenomenex, CA) will be used. The mobile phase runs at 0.2 ml/min as C (methanol:water:acetic acid 65:34.99:0.01) from 0-8 min, ramps to methanol from 8.01-30 min, then flows as methanol for 5 min, and then runs as C again. For chiral LC-UV, a Chiralcel OB-H column (4.6 × 250 mm) (Chiral Technologies, PA) will be used to determine R and S alcohol configurations of monohydroxy-PUFA using an isocratic mobile phase (hexane:isopropanol 95:5), with a 0.6 ml/min flow rate. The LM analytes are methylated before chiral LC analysis. The identification will be conducted by matching the retention times and UV spectra of unknown compounds to standards. After the compounds of interest are identified, they will be quantified on the basis of their chromatographic peak areas and calibration curves of chromatographic peak areas for authentic synthetic standards [9].

LC-UV-MS/MS-Based Lipidomic Profiling

LM informatics analysis will be conducted on an LCQ™ LC-PDA-ion-trap MS/MS system (ThermoFinnigan, CA) equipped with a LUNA C18-2 (150 × 2 mm, 5 μm) or Prodigy ODS(3) (100 × 2 mm, 5 μm) column (Phenomenex, CA), with photodiode-array UV detector scans from 200-360 nm. The conditions are as follows: the mobile phase flows at 0.2 ml/min as C (methanol:water:acetic acid 65:34.99:0.01) from 0-8 min, ramps to methanol from 8.01-30 min, then flows as methanol for 10 min, and then runs as C again for 10 min.
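Before turning to the MS/MS settings, note that quantification in both the LC-UV and LC-MS/MS workflows rests on calibration curves of peak area versus amount for authentic synthetic standards. The sketch below shows one conventional way this step can be implemented (a simple linear fit; all numbers are hypothetical):

```python
# Sketch: quantify an unknown from a peak-area calibration curve (hypothetical data).
import numpy as np

standard_ng = np.array([1.0, 5.0, 10.0, 25.0, 50.0])           # amounts injected
standard_area = np.array([2.1e4, 9.8e4, 2.0e5, 5.1e5, 1.0e6])  # measured peak areas

slope, intercept = np.polyfit(standard_ng, standard_area, 1)    # linear calibration

unknown_area = 3.3e5
unknown_ng = (unknown_area - intercept) / slope

print(f"estimated amount: {unknown_ng:.1f} ng")
```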
Conditions for MS/MS are: electrospray voltage, 4.3 kV; heated capillary, -39 V; tube lens offset, 60 V; sheath N2 gas, 1.2 l/min; and auxiliary N2 gas, 0.045 l/min [20]. Quantification will be based on the peak areas from selective ion monitoring (SIM) chromatograms and the calibration curve of chromatographic areas for each corresponding standard. Examples of LC-UV-MS/MS-based lipidomic analysis of eicosanoid standards are shown in Fig. 2. Fig. 3 shows the chromatograms of biogenic resolvins, AT-Rvs, and PD1/NPD1. Fig. 4 displays the MS/MS spectra of biogenic resolvins, AT-Rvs, and 17S-HDHA. Mass spectra of biogenic and synthetic PD1/NPD1 are given in Fig. 5; the MS/MS spectrum of synthetic RvE1 and the GC-MS spectrum of deuterated RvE1 are shown in Fig. 6.

Lipidomic Databases and Searching Algorithms

Using current chemical analytical technologies, most LMs are identified manually by direct comparison of the spectra, chromatographic behaviors, and in some cases biological activities acquired from sample tissues with those of authentic standards of known LMs. When authentic standards are not available, as in the case of novel LMs and their further metabolites, basic chemical structures can be obtained on the basis of the relationship between structures and the features of their spectra and chromatographic behaviors, compared to those of synthetic and biogenic products prepared to assist in the assignment. We routinely identify LMs by matching the unknown spectra (MS/MS, GC-MS, and UV spectra) and retention times (RTs) to those of authentic and synthetic standards if available, or, if standards are not available, with a theoretical database that consists of virtual UV and MS/MS spectra and RTs for discovering potentially novel LMs [2,8,13]. We initially developed a theoretical database and algorithm according to the relationships between LM structures and their spectral and chromatographic characteristics [2]. The proposed structures of novel potential LMs in the theoretical databases were based on PUFA precursors and established biosynthetic pathways. Mediator-lipidomic databases and search algorithms were constructed to assist in the identification of LM structures employing LC-UV-ion trap MS/MS, with the following objectives: (1) assembling a database using currently available mass-spectral software, (2) constructing a cognoscitive-contrast-angle algorithm and databases to improve the identification of LMs using MS/MS ion identities, which currently cannot be performed with available software, and (3) developing a theoretical database and algorithm for assessing potentially novel and/or unknown structures of LMs and their further metabolites in biologic matrices. It is particularly worthwhile to develop mediator-lipidomic databases and algorithms for ion trap mass spectrometers, which are relatively inexpensive and widely used. Moreover, the fragmentation rules and patterns for collision-induced dissociation (CID) spectra from triple-quadrupole mass spectrometers, another popular MS instrumentation, are similar to what we encounter using the ion trap [6,21].

Logic Diagram to Identify Lipid Mediators

The regular routes for LM identification and structure elucidation of potentially novel LMs were followed in the mediator-lipidomic databases and search algorithms that we constructed (Scheme 3).
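Scheme 3 itself is not reproduced here; as a rough illustration of the stepwise logic it summarizes (which is described in detail in the following paragraphs), the sketch below uses a toy record structure with hypothetical field names and tolerances, not the actual database format:

```python
# Sketch of the stepwise matching logic (Scheme 3); fields and tolerances are hypothetical.
def spectra_similar(spec_a, spec_b, min_shared=4):
    """Toy MS/MS comparison: count shared nominal fragment masses."""
    return len(set(spec_a) & set(spec_b)) >= min_shared

def match_candidates(unknown, database, mz_tol=0.5, rt_tol=1.0):
    """Return entries consistent with an unknown's [M-H]-, UV chromophore, MS/MS, and RT."""
    hits = []
    for entry in database:
        if abs(entry["mz_M_minus_H"] - unknown["mz_M_minus_H"]) > mz_tol:
            continue                                  # wrong molecular ion
        if entry["uv_chromophore"] != unknown["uv_chromophore"]:
            continue                                  # e.g., diene vs. triene vs. tetraene
        msms_ok = spectra_similar(entry["msms"], unknown["msms"])
        rt_ok = abs(entry["rt_min"] - unknown["rt_min"]) <= rt_tol
        if msms_ok and rt_ok:
            hits.append((entry["name"], "full match"))
        elif msms_ok:
            hits.append((entry["name"], "UV/MS/MS match only: possible geometric isomer"))
    return hits
```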
Two types of lipidomic databases for LMs were used for searching: one contains LC-UV-MS/MS spectra and chromatograms acquired on LM standards, and the other is based on theoretically generated LC-UV-MS/MS spectra and chromatograms. The searches were conducted stepwise against either the standards or the theoretical databases to increase the search speed. The search of MS/MS spectra was carried out only against the MS/MS subdatabase with the molecular ion of interest (i.e., M-1) and matched UV spectra (e.g., conjugated diene, triene, or tetraene chromophores). Subsequently, the matching of RTs was performed. If the UV spectral pattern was unclear, the MS/MS and RT were still searched to avoid potential errors in assignment. A standard LM or theoretical fragmentation/ion fragmentation pattern that fulfilled the above match criteria was then assigned to the unknown set. If the match was a "hit" only with UV and MS/MS spectra, but not with RT, the LM in the sample was likely to be a geometric isomer of a known LM.

[Figure caption fragment (panels not reproduced): Exudates were obtained and analyzed by procedures essentially identical to those described in Murphy et al. [6]. Selective ion chromatograms were at m/z 375 (top), 359 (middle), and 343 (bottom). The UV chromatogram was plotted at 300 nm to mark tetraene-containing chromophores. (C) Selected ion chromatogram (m/z = 359) shows 17S series resolvins and protectins produced in human neutrophils (30-50 × 10⁶ cells/incubation), which were exposed to zymosan A and 17S-H(p)DHA; products were analyzed using LC-UV-MS/MS (n = 5) [13].]

Databases Constructed with MassFrontier™ Software

A mediator-lipidomic database composed of LC-UV-MS/MS spectra and chromatograms acquired from authentic LMs was constructed with the GC-MS spectral software MassFrontier™ (ThermoFinnigan). The search algorithm for MassFrontier™ is a dot-product algorithm developed by Stein et al. [22,23,24]. The UV λmax of authentic LMs were written into the subdatabase names, and the RTs were written into the LM names, so that MassFrontier™ could handle the acquired UV spectral results and RTs for the identification of the unknown LMs following the logic diagram in Scheme 3.

[Figure caption fragment: (A) ... [3]. (B) RvD5 was generated in trout brain cells [38]. (C) RvD6 was produced by human PMN; (inset) UV spectrum [13]. (D) 17S-DHA was generated in trout brain [38].]

COCAD: Cognoscitive-Contrast-Angle Algorithm and Databases

The system with cognoscitive-contrast-angle algorithm and databases (COCAD) that we developed can be used to elucidate the fragmentation of LMs in mass spectrometry and to match unknown MS/MS spectra to those of synthetic and/or authentic standards [2]. In this process of matching, the intensity of each peak is treated differently based on the ion identity. MS/MS ions are clustered into three types: "peripheral-cut" ions, formed by neutral loss of water, CO2, amino acid, or amines derived from functional groups linked to the LM carbon-chain, such as hydroxy, hydroperoxy, carbonyl, epoxy, carboxy, amino acid, or amino groups; "chain-cut" ions, formed by cleavage of a carbon-carbon bond along the LM carbon-chain; and "chain-plus-peripheral-cut" ions, formed by a combination of chain-cut and peripheral-cut. Molecular ions formed during electrospray (ESI) can be converted easily to peripheral-cut ions in the MS/MS process. Similarly, chain-cut ions can also be converted readily to chain-plus-peripheral-cut ions (Scheme 4).
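As a small illustration of this ion classification, the sketch below flags, for a hypothetical [M-H]- precursor, which fragment m/z values can be explained as simple neutral losses of H2O and/or CO2 (peripheral-cut candidates); everything else is left as a candidate chain-cut or chain-plus-peripheral-cut ion. The masses and tolerances are illustrative only.

```python
# Sketch: classify MS/MS fragments as peripheral-cut candidates (neutral losses of
# H2O and/or CO2 from the precursor) versus candidate chain-cut ions. Illustrative only.
H2O, CO2 = 18.011, 43.990   # monoisotopic neutral-loss masses

def classify_fragments(precursor_mz, fragment_mzs, tol=0.3, max_h2o=3, max_co2=1):
    labels = {}
    for frag in fragment_mzs:
        label = "candidate chain-cut (or chain-plus-peripheral-cut)"
        for n_h2o in range(max_h2o + 1):
            for n_co2 in range(max_co2 + 1):
                if n_h2o == n_co2 == 0:
                    continue
                expected = precursor_mz - n_h2o * H2O - n_co2 * CO2
                if abs(frag - expected) <= tol:
                    label = f"peripheral-cut ({n_h2o}xH2O, {n_co2}xCO2 loss)"
        labels[frag] = label
    return labels

# Example with a hypothetical precursor at m/z 351.2 ([M-H]-):
print(classify_fragments(351.2, [333.2, 307.2, 289.2, 235.1]))
```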
Typical chain-cut ions for LMs in MS/MS are formed by α-cleavage of the carbon-carbon bonds connecting to the carbon with a functional group directly attached [4,6,8,13,25]. LMs readily undergo α-cleavage [6]. We proposed the nomenclature illustrated for the LXA4 structure presented in Scheme 4 to systematically name the segments formed via chain-cut and chain-plus-peripheral-cut, without concern for the hydrogen shifts occurring during mass spectrometric analysis of PUFA-derived products. All the possible chain-cut, peripheral-cut, and chain-plus-peripheral-cut segments for LXA4 are indicated; the details of the nomenclature can be found in Lu et al. [2]. An MS/MS ion detected from LM samples in negative-ion mode is generally formed from a specific segment with the addition or subtraction of hydrogen(s) caused by hydrogen shift during the cleavage [2]. The charge (z) of the LM negative ion is usually equal to one; therefore, the mass-to-charge ratio (m/z) of an LM ion is usually equal to its mass (m). Previous reports [4,6] and our published results [2,3,17,20] indicate that, via neutral loss of H2O, CO2, and/or amino acids, the chain-cut ions can form chain-plus-peripheral-cut ions. For the chain-cut and chain-plus-peripheral-cut ions in the present report, we focused on those formed by α-cleavages. Detected MS/MS ions that cannot be interpreted via the empirical rules mentioned above and neutral loss are taken as unidentified ions.

SCHEME 4. LC-UV-MS/MS database layout: example for naming LM segments [2]. In this case, the example shown is lipoxin A4, with segments formed via chain-cut, peripheral-cut, and chain-plus-peripheral-cut, for interpretation of MS/MS fragmentation.

Modification of MS/MS Ion Intensities According to Identities

Chain-cut ions are the most informative and can be diagnostic for determining specific LM structures, such as the position of functional groups and double bonds. Peripheral-cut ions in MS/MS spectra are similar among LM isomers and, therefore, are not specific enough for differentiation of individual LM isomers [2]. According to the empirical fragmentation rules mentioned above, the nth MS/MS peak can be identified as one or several chain-cut (C) ions, peripheral-cut (P) ions, and/or chain-plus-peripheral-cut (CP) ions. The weighted intensity ^y I_n of each identified ion is obtained from its measured intensity and an ion-type weight ^y W, where y is the MS/MS ion type identified as C, P, or CP; ^C W is 10, and ^CP W or ^P W (for chain-plus-peripheral-cut or peripheral-cut ions) is 1. The fingerprint features of chain-cut ions are used to define LM structure by multiplying their intensities by 10, which was determined to be the best among the values 2, 10, 20, and 100 tested. Weighted MS/MS ion intensities are used for COCAD and the theoretical system. ρ represents the contribution of peripheral-cut ions to I_n' (ρ = 3 for peripheral-cut ions formed via loss of one CO2 from the molecular ion, ρ = 10 for peripheral-cut ions formed via loss of one H2O from the molecular ion, and ρ = 1 for other peripheral-cut ions formed via multiple losses of CO2 and/or H2O from the molecular ion). The assignment of ρ values is arbitrary and based on the observation of relative intensities of peripheral-cut ions in MS/MS spectra of LMs.

COCAD Contrast Angle

COCAD used a contrast-angle algorithm to match an MS/MS spectrum between sample and standards. For this approach, the contrast angle is calculated from the following quantities: U_ν is equal to C_ν, CP_ν, or P_ν for the unknown spectrum to be identified; S_ν is equal to C_ν, CP_ν, or P_ν for the standard spectrum.
V is the total number of one type of virtual ion (formed via chain-cut, chain-plus-peripheral-cut, or peripheral-cut) for a specific LM; ^C B_nν is equal to 1 if the nth MS/MS peak can be identified as the νth virtual ion formed via chain-cut, or equal to zero if not; ^CP B_nν or ^P B_nν has a similar meaning but for ions formed via chain-plus-peripheral-cut or peripheral-cut; N is the total number of peaks in the MS/MS spectrum; D_C is the dot product between the virtual vectors of U (unknown sample) and S (standard) formed via chain-cut; D_CP or D_P is the dot product for chain-plus-peripheral-cut or peripheral-cut ions, respectively. D_C, D_CP, or D_P in (e) represents the similarity of ions formed via chain-cut, chain-plus-peripheral-cut, or peripheral-cut between an unknown spectrum and a standard spectrum. None of them is greater than 1. The νth virtual ion is not used for the calculation of the corresponding D_C, D_CP, or D_P if either U_ν or S_ν is zero. If every C_ν, CP_ν, or P_ν within the vectors is zero, then D_C, D_CP, or D_P is assigned the value zero, respectively. The COCAD contrast angle in formula (f) represents how well the spectrum of the sample matches the standard: if it is 0°, the two spectra match exactly; if it is 90°, the two spectra do not match at all; the smaller the contrast angle between 0° and 90°, the better the match [24,26]. The value is integrated and normalized from the dot products D_C, D_CP, and D_P (f). The numeric coefficient 10 in (f) was found to be the best value (2, 20, and 100 were also tested) for emphasizing the fingerprinting feature of chain-cut ions, because chain-cut ions are more important for determining the LM structure than are other types of ions. To normalize [(10 × D_C + D_CP + D_P) ÷ (11 + ω_CP)] in (f) to be no more than 1, 11 was used in the denominator of (f), and ω_CP is equal to 1 if at least one MS/MS ion is identified as a chain-plus-peripheral-cut virtual ion, or equal to zero if no such ion is identified. No chain-plus-peripheral-cut ion is identified in a few LM standard spectra. Therefore, ω_CP is introduced in equation (f) to normalize the COCAD contrast angle to zero when matching these types of spectra against themselves. Unidentified ions were excluded from matching in equations (b) to (f).

Theoretical Database and Search Algorithm for the Identification of Novel Lipid Mediators

Theoretical databases consist of the segments (Scheme 4), the UV λmax, and the RTs predicted for potentially novel LMs. Searching against a theoretical database is also performed stepwise as described in Scheme 3, from UV λmax, to MS/MS spectra, and then to RTs. Equation (g) is the matching score for an MS/MS spectrum of an unknown product compared with a virtual spectrum based on the segments and the empirical fragmentation rules noted above. The matching score in (g) summates the weighted intensities of all the identified MS/MS peaks in the spectrum acquired from the sample. The numerator of the formula is composed of three parts: summating the weighted intensities of MS/MS peaks identified as chain-cut ions; summating the weighted intensities of MS/MS peaks identified as chain-plus-peripheral-cut ions; and summating the weighted intensities of MS/MS peaks identified as peripheral-cut ions. ^C_f M_n is the total number of chain-cut ions via α-cleavage formed from the fth functional group and matched to the nth MS/MS peak.
F is the total number of functional groups in one LM; f is counted from the carboxyl terminus of the LM. For example, f is 1 for the 5-hydroxy, 2 for the 6-hydroxy, and 3 for the 15S-hydroxy group present in LXA4; F for LXA4 is 3 (Scheme 4). (^C I_n + ^CP I_n + ^P I_n) is used in (g) for normalization, to eliminate the impact of the total peak intensities in MS/MS spectra on the matching scores. The databases and search algorithms were developed on the basis of LC-UV-ion trap MS/MS data of LMs. The ion intensity patterns of MS/MS spectra generated from an ESI-triple quadrupole mass spectrometer are quite similar to those from ESI-ion trap MS, because the collision energy for both types of instruments is in the low-energy region (a few to 100 eV, laboratory kinetic energy of ions) [6,21,28,29]. Therefore, the constants and algorithms reviewed here and in Lu et al. [2] may fit the CID spectra from triple-quadrupole MS/MS without much modification. For high-collision-energy (several hundred to ~1000 eV) CID spectra generated via sector or TOF/TOF analyzers, the relative intensity patterns are quite different in comparison with the low-energy ones, although many ions occur under both energy regimes [4,6,21,28,29]; for example, the peripheral-cut ions are less abundant than the chain-cut ions. For ion-trap and triple-quadrupole MS/MS, peripheral-cut ions are more abundant than chain-cut ions. Our constants and algorithms give chain-cut ions more weight than peripheral-cut ions because chain-cut ions are more important for defining LM structures. Therefore, they may still fit high-collision-energy CID spectra. Nevertheless, the constants and algorithms should be thoroughly tested and modified accordingly to fit other instruments that may generate different fragmentation patterns and intensities of resulting ions. The RTs used in this set of experiments were obtained under specified chromatographic conditions (a column of 100-mm length, and in some cases 150-mm length), because several fundamental issues were our initial focus. Hence, these databases and algorithms were programmed so that new LC-UV-MS/MS data, including data from other chromatographic conditions, can be easily entered and used in the databases. A computer-based automated system equipped with these databases and searching algorithms was used successfully to identify 15S-HETE in murine spleen [2] (Fig. 7). The peak displayed at RT 20.4 min on the chromatogram (at m/z 219 of MS/MS 319, left inset) had a UV λmax of 235 nm. Therefore, the search on the theoretical system was narrowed down to the subdatabase with molecular ion m/z 319, UV λmax 235 nm, and RT 21 min. In this case, 15-HETE gave the highest matching score among all compounds in the subdatabase. The MS/MS peaks identified were annotated with the ion interpretation, which also shows a fragmentation mechanism [2]. Segments of 15-HETE that matched the MS/MS peaks according to the empirical fragmentation rules are italicized [2]. MassFrontier™ was also used to identify the peak, and it likewise identified it as 15-HETE [2].

ELISA-Based Lipidomic Analysis and Physical Validation

Specific ELISAs for AT LXA4 (ATL or 15-epi-LXA4), with high selectivity (cross-reactivity to LXA4 of less than 3%) and sensitivity (detection limit of 50 pg/ml), were developed [10,30]. Using this specific ELISA, we demonstrated that aspirin therapy triggers the production of anti-inflammatory ATL in healthy individuals in an 8-week, randomized and placebo-controlled clinical trial.
ATL production in the test groups was inversely related to inhibition of platelet thromboxane, even when aspirin was given in low doses (81 mg of aspirin daily). Thus, utilizing this specific ELISA, we are able to monitor plasma ATL providing an easy tool and positive signal for assessing individual responses to aspirin therapy. All ELISAs will be validated for each series of LMs using LC-MS/MS as in Chiang et al. [10]. 2D Gel Electrophoresis Soluble proteins from biomedical samples will be separated by isoelectric focusing and SDS-PAGE, according to [31,32]. Proteins are solubilized in a total volume of 185-μl rehydration buffer of the following composition: 7 M urea, 2 M thiourea, 4% CHAPS, 30 mM dithiothreitol (DTT), 0.2% v/v ampholytes (BioLyte pH 3-10, Bio-Rad, CA), 0.001% bromophenol blue. Isoelectric focusing strips with a linear immobilized pH gradient ranging from 3-10 (Bio-Rad) are rehydrated with sample-containing rehydration buffer for 30 min in the isoelectric focusing tray, overlaid with mineral oil, and further rehydrated actively at 50 V for 16 h in a Protean isoelectric focusing apparatus (Bio-Rad). Isoelectric focusing is subsequently performed by increasing the voltage linearly over 20 min to 250 V, followed by a linear increase over 2.5 h to 8,000 V and further focusing at 25,000 V-h at 8,000 V. The focused proteins are reduced by DTT for 10 min at room temperature in equilibration buffer 1, composed of 6 M urea, 2% SDS, 0.375 M Tris/HCl pH 8.8, 20% glycerol, and 130 mM DTT, followed by thioether formation by iodoacetamide for 10 min at room temperature in the dark in equilibration buffer 2 composed of 6 M urea, 2% SDS, 0.375 M Tris/HCl pH 8.8, 20% glycerol, and 135 mM iodoacetamide. The IPG strips are rinsed once in SDS-PAGE running buffer (25 mM Tris, 0.19 M glycine, 3.5 mM SDS), and mounted in agarose on top of 10.5-14% gradient SDS-polyacrylamide gels (Bio-Rad). Proteins are separated by size (range ~15-200 kDa) via electrophoresis with running buffer at 200 V for 60 min at 19 o C in a Dodeca tank (Bio-Rad), which allows multiple gels to be run. In-Gel Protein Digestion and Peptide Recovery The gels are fixed and stained with Sypro Ruby protein gel stain [33]. Protein gel spots of interest are excised and cut in ~1-mm 3 cubes with clean scalpels and placed in Eppendorf tubes. The protein is in-gel digested with trypsin, and peptides are recovered essentially as described by Rosenfeld et al.[34], with the following modifications: the gel pieces are washed twice for 45 min in 0.5 M Tris pH 9.2/acetonitrile 1:1 (v/v) at 37 o C. They are shrunk in acetonitrile and dried for 10 min by vacuum centrifugation (SpeedVac, Savant, NY). They are swollen in 100 mM NH 4 HCO 3 for 30 min at room temperature. They are shrunk again in acetonitrile and dried. In-gel trypsin digestion is performed by the addition of 100 μl of 50 mM NH 4 HCO 3 containing 500 ng of modified sequencing-grade trypsin (Promega, WI). After 1 h, additional 50 mM NH 4 HCO 3 is added to just cover the gel pieces. The in-gel digestion proceeds overnight at 28 o C. The fluid surrounding the gel pieces is transferred to a clean Eppendorf tube. The gel pieces are incubated for 1 h with 5% formic acid in 50% acetonitrile, and the extract is pooled with the first supernatant. The gel pieces are incubated another 15 min with 100% acetonitrile, which is transferred to the pooled peptide extracts. 
The combined extracts are dried by vacuum centrifugation (SpeedVac), and the tryptic digests are stored at -80 o C until further analysis. LC-Nanospray-MS/MS Proteomic Analysis Tryptic peptide mass and charge will be determined by capillary LC/electrospray ionization ion-trap MS/MS [35,36]. Tryptic peptides are separated by capillary LC using a capillary column (LC Packings, ID 75 μm, length 15 cm, particle size 3 μm) at 100 nl/min delivered by an Agilent 1100LC pump (400 μl/min) and a flow splitter (LC Packings, Accurate, NY). Peptides are loaded via a Rheodyne port onto a 2-μg capacity peptide trap (CapTrap, Michrom, CA) in 2% acetonitrile, 0.1% formic acid, and 0.005% trifluoroacetic acid. A mobile-phase gradient is run using mobile phase A (2% acetonitrile and 0.1% formic acid in ultrapure water) and B (80% acetonitrile and 0.1% formic acid in ultrapure water) from 0-10 min 0-20% B, 10-90 min 20-60% B. Water and acetonitrile are of mass spectral grade (Burdick & Jackson, MI). Peptide mass and charge are determined after low-flow electrospray ionization on a ThermoFinnigan Advantage ion-trap mass spectrometer. Electrospray ionization is performed with endcoated spray tips (silica-tip 5 cm, ID 360 μm, tip ID 15 μm, New Objective, MA) at a final flow rate of approximately 100 nl/min and a spray voltage of 1.8 kV. The mass spectrometer is tuned with angiotensin II in 30% B at 100 nl/min. Peptide parent ion mass is determined, and zoom scans and tandem MS/MS spectra of parent peptide ions above a signal threshold of 2 × 10 4 are recorded with dynamic exclusion using Xcalibur v. 1.3 data acquisition software (ThermoFinnigan). Peptide Mapping Protein identification of gel spots will be made by peptide mapping of tryptic peptide tandem mass spectra using Sequest. Sequest searches are performed within the BioWorks 3.1 software (ThermoFinnigan), using the NCBI nr.fasta protein database indexed for mouse proteins. Possible protein modifications taken into consideration will include alkylation of cysteine with iodoacetamide and acrylamide and oxidation of methionine. A protein is considered positively identified if a minimum of two tryptic peptides of that protein are matched with a cross-correlation score above 2.0. An example of determining the temporal changes of specific exudate proteins is shown in Fig. 8. We used a MS-based proteomic analysis with 2D gel electrophoresis and image analysis. Proteins were identified by peptide mapping of in-gel-digested proteins using capillary LC-nanospray ion trap MS/MS (nanospray-LC-MS/MS) and bioinformatics software. Fig. 8 shows a representative 2D gel of exudate proteins and the temporal profiles of several proteins with distinct kinetics during inflammation resolution. A list of proteins and their corresponding identified tryptic peptide fragments together with cross-correlation scores can be found in Bannenberg et al. [37], as well as the observed and theoretical (m.w.) and isoelectric point (pI) of the identified proteins. Serum proteins such as plasminogen, fibrinogen, and serum albumin were abundant in exudates 4 h after initiation of inflammation, indicating that protein exudation from blood made the largest contribution to the total exudate protein levels [37]. Haptoglobin (Fig. 8, B and C) displayed a delayed accumulation that is maximal at the onset of resolution interval(Ri). S100A9 rapidly accumulated in the exudate, achieving maximal levels during Ri, followed by a gradual decrease at 24 h. 
The exudate levels of a C-terminal fragment of α1-macroglobulin (pregnancy zone protein), plasminogen, and fibrinogen displayed the same kinetics as the total exudate protein levels [37]. In contrast, apolipoprotein E was present in the uninflamed peritoneum; its levels decreased during the Ri and returned to basal levels after 24 h. Proteinase inhibitor 1α rapidly appeared in the peritoneal exudate, with maximal levels at 4 h that thereafter decreased continuously. Transthyretin levels apparently did not change during the time course (Fig. 8B). Using this approach of "resolution proteomics", we identified several components that are likely founding members of the resolvers in novel resolution circuits and pathways operating in vivo during, and promoting, resolution.

SUMMARY

LM informatics and proteomics applied in inflammation research, and specifically to mapping the resolution phase, provide a powerful means of uncovering specific biomarkers for potential disease phenotypes. By incorporating them with systems biology analysis, we can begin to elucidate relationships among changes across a wide range of biomolecule classes and provide new insight into the pathophysiology of inflammatory disease. LM informatics employing LC-UV-MS/MS to evaluate and profile the temporal production of compounds, combined with proteomics at defined points during experimental inflammation and its resolution, enables us to elucidate the bioactions and roles of novel mediators in inflammation and resolution. The automated system reviewed here, including databases and searching algorithms, is crucial for prompt and accurate analysis of these lipid mediators, such as eicosanoids, resolvins, and protectins, which play critical roles in human physiology and many prevalent diseases, especially those related to inflammation.

[Fig. 8 caption, continued: The temporal profiles of several exudate proteins (haptoglobin, S100A9, a C-terminal fragment of α1-macroglobulin, apolipoprotein E, proteinase inhibitor 1α, plasminogen, the fibrinogen α- and β-chains, and transthyretin) are shown (values are means ± SEM, n = 3-6 gels). (C) Tryptic peptide mapping of haptoglobin by MS. Peptides that are matched are shown in red. The matching of the tandem mass spectrum of peptide YVMLPVADQDK is shown.]

ACKNOWLEDGMENTS

Many thanks to Mary Halm Small for expert editorial assistance in manuscript preparation and Katherine Percarpio for lab assistance. We thank Nicos A. Petasis (University of Southern California) and Core C of NIH grant no. P50-DE016191 for preparation of synthetic d4-resolvin E1. This work was supported in part by grants no. GM38765, DK074448, and P50-DE016191 (S.H., Y.L., C.N.S.) from the National Institutes of Health.
Neutropenia in Childhood—A Narrative Review and Practical Diagnostic Approach

Neutropenia refers to a decrease in the absolute neutrophil count according to age and race norms and poses a common concern in pediatric practice. Neutrophils serve as host defenders and play a crucial role in acute inflammatory processes. In this narrative review, we systematically present causes of neutropenia in childhood, mainly adopting the pathophysiological classification of Frater, thereby studying (1) neutropenia with reduced bone marrow reserve, (2) secondary neutropenia with reduced bone marrow reserve, and (3) neutropenia with normal bone marrow reserve. Different conditions in each category are thoroughly discussed and practically approached from the clinician's point of view. Secondary mild to moderate neutropenia is usually benign, most often due to childhood viral infections, and is expected to resolve in 2–4 weeks. Bacterial and fungal agents are also associated with transient neutropenia, although fever with severe neutropenia constitutes a medical emergency. Drug-induced and immune neutropenias should be suspected following a careful history and a detailed clinical examination. Cytotoxic chemotherapies treating malignancies are responsible for severe neutropenia and neutropenic shock. Rare genetic neutropenias usually manifest with major infections early in life. Our review categorizes clinical findings and associates them with specific neutropenia disorders. We consequently propose a practical diagnostic algorithm for managing neutropenic children.

Introduction

Neutrophils, also known as polymorphonuclear leukocytes, are produced from stem cells in the bone marrow [1]. About 1000-1500 × 10⁶/L neutrophils are produced daily, while their average lifespan is 7-10 days. Only 2-5% of the produced neutrophils enter the circulation, while the rest remain in the storage pool of the bone marrow [2]. They play a major role in acute inflammation and host defense against microbial pathogens [1]. For neutrophils to fulfill their role, an adequate number of them needs to be produced in the bone marrow, and an adequate number also needs to enter the circulation and migrate to the area of infection [3]. Several methods have been developed for counting neutrophils in peripheral blood; these correlate well with the gold standard method, which is their count on a peripheral blood smear: the Abbott method, where multiangle polarized scatter separation and three-color fluorescence detection are used; the Siemens method, where peroxidase staining, light scatter, and absorption are used; the Beckman Coulter method, where impedance volume/conductivity and five-angle light scatter are used; and the Sysmex method, where fluorescent staining, forward/side scatter, and side fluorescent light detection are used [4]. Neutropenia is defined as an absolute number of neutrophils less than 2500 × 10⁶/L in neonates and infants and less than 1500 × 10⁶/L in toddlers and older children and adults. Regarding African American children, this limit ranges from 1000 to 1500 × 10⁶/L [5,6].
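These age-dependent thresholds lend themselves to a simple decision rule. The sketch below encodes them directly from the figures quoted above; it is an illustrative helper rather than a validated clinical tool, and the choices of an infant age cutoff of one year and of the lower end of the quoted range for African American children are assumptions.

```python
# Sketch: age-adjusted screen for neutropenia based on the thresholds quoted above.
# ANC is given in x10^6/L (i.e., cells per microliter). Illustrative only.
def is_neutropenic(anc, age_years, african_american=False):
    if age_years < 1:                  # neonates and infants (assumed cutoff: <1 year)
        lower_limit = 2500
    elif african_american:             # quoted lower limit ranges from 1000 to 1500
        lower_limit = 1000             # assumption: use the lower end of the range
    else:                              # toddlers, older children, and adults
        lower_limit = 1500
    return anc < lower_limit

print(is_neutropenic(1200, age_years=4))                         # True
print(is_neutropenic(1200, age_years=4, african_american=True))  # False with the assumed limit
```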
The severity and frequency of infections are inversely correlated to the absolute neutrophil counts and directly correlated to the duration of neutropenia. On the other hand, the risk of infection is higher when the decreased number of neutrophils is caused by a decrease in cell production in the myeloid series in the bone marrow, in comparison to decreased numbers of neutrophils due to their destruction in the peripheral blood [7]. According to the absolute neutrophil count, neutropenia is classified as mild at values of 1000-1500 × 10⁶/L, moderate at values of 500-999 × 10⁶/L, and severe at values <500 × 10⁶/L. Classifications have also been proposed according to the benign nature of the neutropenia, the acuteness or chronicity, the age of onset, and the nature of the cause [1,4]. In the present study, we decided to follow the classification of Frater J. [4], a classification system that takes into account the physiology of granulocyte maturation in the bone marrow along with the course of the differentiated neutrophils in the peripheral blood and the other end organs. This system classifies neutropenia as (1) neutropenia with reduced bone marrow reserve, (2) secondary neutropenia with reduced bone marrow reserve, and (3) neutropenia with normal bone marrow reserve. The aim of our study, apart from a narrative review regarding the different conditions/disorders that can cause neutropenia in childhood, was to provide a practical diagnostic and therapeutic approach concerning neutropenia in this sensitive age group.

Cyclic Neutropenia

Cyclic neutropenia, or cyclic agranulocytosis, is a rare hematological disorder (1:1,000,000 in the general population) with an autosomal-dominant pattern of transmission, in which mutations occur in the gene for neutrophil elastase (ELA2 or ELANE) [8]. The disease presents with recurrent fever, deep and painful mouth ulcers, painful lymphadenopathy, and cellulitis from minor cuts on the hands and perineal areas, while sinusitis, otitis, pharyngitis, and bronchitis may often be present. Patients with cyclic neutropenia may also present with acute peritonitis (abdominal guarding, ileus, and septic shock). Between the periods of recurrent fever, mouth ulcers, and infections, patients present no pathological findings on physical examination. Typical cases of cyclic neutropenia have oscillations of neutrophils and monocytes with 21-day periodicity. During the neutropenic period, blood neutrophil levels fall to less than 200 × 10⁶/L for 3-5 days. The neutrophil count then usually increases to near the lower limit of normal, about 2000 × 10⁶/L, and remains at approximately this level until the next neutropenic period [9]. The availability of recombinant human granulocyte colony-stimulating factor (G-CSF) has greatly changed the management of cyclic neutropenia. Clinical trials have clearly established that G-CSF treatment (2-5 µg/kg/d) increases the amplitude of the neutrophil oscillations, shortens the duration of neutropenia, and changes the cycle length from 21 to about 14 days, while patients have reported a reduction in recurrent fevers, mouth ulcers, and all other disease manifestations [10,11].
Shwachman-Diamond Syndrome Shwachman-Diamond syndrome (SDS) is a rare autosomal recessive congenital disorder with an incidence of one in 77,000 individuals [12].The SDS gene (7q11) mutations have been detected in 80% of patients with SDS, suggesting a heterogeneous model of transmission.The disease is characterized by pancreatic insufficiency, bone marrow dysfunction, and skeletal abnormalities.Even though no specific biochemical or genetic test is available at the moment for a definite diagnosis, evidence of exocrine pancreatic dysfunction and hematological abnormalities are the main characteristic findings.Short stature, skeletal abnormalities, hepatomegaly, or biochemical abnormalities of the liver are supportive findings of the diagnosis [13].The clinical diagnostic criteria used by Dror and Freedman [14] are the following.(1) Exocrine pancreatic dysfunction (at least one of the following): (a) abnormal quantitative pancreatic stimulation test, (b) serum cationic trypsinogen below the normal range, and (c) abnormal 72 h fecal fat analysis plus evidence of pancreatic lipomatosis by ultrasonographic examination or computed tomography (CT) scan.(2) Hematological abnormalities (at least one of the following): (a) chronic (on two occasions at least 6 weeks apart) single lineage or multilineage cytopenia with bone marrow findings consistent with a productive defect ((i) neutrophils <1500 × 10 6 /L, (ii) hemoglobin concentration <2 standard deviations below mean, adjusted for age, and (iii) thrombocytopenia <150,000 × 10 6 /L) and (b) myelodysplastic syndrome.Management of children with SDS requires pancreatic enzymes for a significant proportion of patients.The dosage should be adapted to the severity of the symptoms, such as steatorrhea, abdominal pain, and growth parameters.Depending on the evolution of hematological abnormalities, a full blood count must be performed every 3-6 months or more frequently if symptoms require so.An annual bone marrow biopsy must be performed for surveillance of the acquisition of cytogenetic abnormalities [13]. Kostmann Syndrome Severe congenital neutropenia (SCN), known as Kostmann syndrome, is a rare heterogeneous group of diseases (3-8.5 per million individuals) characterized by arrested neutrophil maturation in the bone marrow [15].It is caused by HAX1 gene mutation, an autosomal recessive condition that displays recurrent respiratory tract, skin, and deep tissue infections from the first few months of life [16].The arrested neutrophil maturation at the promyelocyte stage, along with severe neutropenia (<500 × 10 6 /L) and death due to bacterial infections, pose the main characteristics of the syndrome [17].The only curative therapy is hematopoietic stem cell transplantation (HSCT), but due to the complications of this procedure, administration of G-CSF is preferable in most cases, with survival >80% of treated cases [18]. 
Chédiak-Higashi Syndrome Chédiak-Higashi syndrome (CHS) is an inherited condition that follows an autosomal recessive pattern.Less than 500 cases have been described worldwide [19].It is characterized by various symptoms, including frequent bruising, nosebleeds, bleeding from the gums or other mucosal surfaces, albinism affecting the skin and eyes, and recurring bacterial infections.The syndrome is caused by a mutation in a gene called lysosomal trafficking regulator protein (LYST), which results in a reduced ability to engulf and eliminate foreign particles, increasing the likelihood of recurrent bacterial infections.In the accelerated phase of the disease, fever, hepatosplenomegaly, lymphadenopathy, neutropenia, anemia, and sometimes thrombocytopenia are present.Long-term progression of the disease can lead to neurologic manifestations, such as stroke, coma, ataxia, tremor, motor and sensory neuropathies, and absent deep tendon reflexes.Most patients (90%) die within the first 10 years of life, during the accelerated phase, and due to recurrent infections.Abnormally large intracytoplasmic granules, which can be found especially in white blood cells and bone marrow, are diagnostic for the disorder.Molecular genetic testing can also be employed to identify the presence of two variants in the LYST gene, which is associated with the condition.When the diagnosis is confirmed, the accelerated phase should be assessed.Regarding therapy, absolute cure is achieved with an allogeneic hematopoietic stem cell transplantation (HSCT).The HSCT has better results when it is performed before the development of the accelerated phase.If indications of an accelerated phase become apparent, it is important to address hemophagocytosis and achieve remission before proceeding with HSCT.In regard to ocular symptoms, visual acuity might be improved by correcting refractive errors.Moreover, the use of sunscreen protects against skin malignancies.Early start of the rehabilitation program limits the neurologic complications, and finally, non-steroidal anti-inflammation drugs (NSAIDs) must be avoided, as they can cause bleeding events; the immunization program must be followed, and antibiotic treatment for bacterial infections must start as soon as possible [20]. Myelokathexis Myelokathexis is a rare condition that causes severe chronic neutropenia and leukopenia due to the retention of neutrophils in the bone marrow.Characteristic findings include degenerative changes, hypersegmentation of mature neutrophils, and hyperplasia of bone marrow myeloid cells.Diagnosis is made with bone marrow aspiration and microscopic examination of blood samples.The affected patients' bone marrow shows abundant neutrophil lineage cells and characteristic pyknotic nuclear lobes connected by fine chromatin filaments in the mature neutrophils.Microscopic examination of blood samples reveals >97% polymorphonucleated leucocytes.Treatment of the disease includes the administration of either G-CSF or granulocyte-macrophage-colony stimulating factor (GM-CSF) that increases the neutrophil count and reduces infection indices [21]. 
Reticular Dysgenesis

Reticular dysgenesis (RD) is a rare congenital disorder caused by mutations in the gene encoding adenylate kinase 2 (AK2). RD is defined clinically by a combination of severe combined immunodeficiency (SCID), agranulocytosis, and sensorineural deafness. Reticular dysgenesis is a rare disorder; only about 20 cases have been reported. Besides the typical combination of T-B-NK-SCID and agranulocytosis, patients with RD reportedly suffer from a profound sensorineural hearing deficit. Individuals typically experience severe infections at an early stage of life, often occurring shortly after birth. Swift identification and prompt treatment are essential to provide a potential cure for this fatal disease. RD presents with life-threatening infections, usually in the first days of life, accompanied by bacterial sepsis in most cases. Laboratory findings include lymphopenia with persistent agranulocytosis, T-cell numbers below the normal range, hemoglobin levels below the normal range, thrombocytopenia, and bone marrow hypoplasia. HSCT is the only curative therapy [22].

Dyskeratosis Congenita

Dyskeratosis congenita is an X-linked genetic disease that is characterized by ectodermal dysplasia and hematopoietic failure. Its incidence is estimated to be 1 case per million individuals [23]. The ectodermal dysplasia of dyskeratosis congenita presents with its classic triad of cutaneous reticular hyperpigmentation, nail dystrophy, and leucoplakia of the mucous membranes. Other symptoms such as obstructed tear ducts (epiphora), developmental delay, short stature, dental caries, tooth loss, early appearance of gray hair, and hair loss may also co-exist. Hematology indices include mild neutropenia and aplastic anemia associated with a high mean corpuscular volume (MCV) and elevated fetal hemoglobin (HbF). Infections are rarely seen [2,24].

Drug-Induced Neutropenia

Drug-induced neutropenia is a disorder that can be caused either by decreased production or by increased destruction of neutrophils. Neutropenia caused by decreased production of neutrophils is related to chemotherapeutic drugs that can suppress the myeloid progenitor cells in the bone marrow. Increased neutrophil destruction is related to idiosyncratic drug-induced neutropenia (IDIN), where nonchemotherapy drugs are responsible for the condition. The prevalence of IDIN is 1.6-15.4 per million per year [25]. Chemotherapy drugs that cause neutropenia include alkylating agents, anthracyclines, antimetabolites, camptothecins, epipodophyllotoxins, hydroxyurea, mitomycin C, taxanes, and vinblastine, while nonchemotherapy drugs such as clozapine, dapsone, hydroxychloroquine, infliximab, lamotrigine, methimazole, oxacillin, penicillin G, procainamide, propylthiouracil, quinidine/quinine, rituximab, sulfasalazine, trimethoprim/sulfamethoxazole, and vancomycin are causes of IDIN. Early diagnosis of IDIN is difficult, as patients are usually asymptomatic. A complete blood count will reveal a granulocyte count of <1500 × 10^6/L (more often below 500 × 10^6/L), while other cell counts (red blood cells, platelets) are within normal ranges.
The most important part of the treatment is the identification and cessation of the offending medication.Sometimes, due to multiple drug usage, it is difficult to determine the offensive one.After drug removal, in most cases, neutropenia will resolve, and only symptomatic treatment with antibiotics and good hygiene will be needed.The average duration for complete recovery of neutrophils is approximately 9 days.Patients with extended neutropenia may also require treatment with hematopoietic growth factors such as G-CSF [26]. T-Cell Large Granular Lymphocytic Leukemia T-cell large granular lymphatic (LGL) leukemia is a proliferation of cytotoxic (CD8+) T-cell clones that cause neutropenia, anemia, and thrombocytopenia, often associated with autoimmune disorders.It affects 0.14 per million individuals [27].Clinically, the disease is diagnosed because of recurrent bacterial infections, including cellulitis, perirectal abscesses, and respiratory infections.Other symptoms include fatigue due to anemia, increased temperature, night sweats, and decreased weight.Hepatosplenomegaly is commonly found, while some of the patients may be asymptomatic.Laboratory findings include neutropenia with absolute neutrophil count <500 × 10 6 /L.Half of the patients present with anemia and moderate thrombocytopenia.Peripheral blood smear examination reveals an increased number of granular lymphocytes with normal absolute lymphocyte number or mild lymphocytosis.These patients usually also have serological abnormalities, such as rheumatoid factor, antinuclear, antiplatelet, and antineutrophil antibodies, hyper/or hypogammaglobulinemia, positive Coombs test, monoclonal gammopathies, and increased β2-microglobulin.The diagnosis should be suspected in all patients with unexpected cytopenias and an increased number of LGLs by morphology and flow cytometry.Abnormal proliferation of CD8+ T-cells has to be shown as clonal for a definite diagnosis to be made.Polymerase chain reaction (PCR) is a widely used method with a sensitivity of 70-80%.Flow cytometry using monoclonal antibodies can also detect the clonal process of T-cell disorders [28]. Nutritional Deficiency Neutropenia is caused by nutritional deficiency of vitamin B12, folic acid, and copper, and severe protein-calorie deficit in nutrition leads to multiple cytopenias rather than solely neutropenia.Patients exhibit clinical manifestations such as fatigue, decreased weight, and pale skin.Laboratory findings include anemia in complete blood count and deficiency of vitamin B12, folic acid, copper, and ferritin elements [29]. Viral Infections The most common causative viruses for neutropenia include varicella, EBV, CMV, measles, hepatitis virus, and HIV.The mechanism of neutropenia caused by viral infections includes bone marrow granulopoiesis suppression, which occurs directly or through an immune-mediated process.The level of neutropenia can differ from mild to severe.Granulocyte-colony stimulating factor (G-CSF) treatment may be required in patients with severe neutropenia and detected infection [7]. 
Chronic Benign Neutropenia of Infancy and Childhood Chronic autoimmune neutropenia of infancy and childhood is a common disorder, which usually resolves by the age of 3-5 years.It occurs in 1:100,000 children/year, with the mean age of onset being 7-9 months [30].The disorder is benign despite the low ANCs.In most cases, it is detected during acute diseases, usually febrile ones.The persistence of neutropenia after the disease resolution should suspect physicians for the diagnosis.Numerous tests can be performed to confirm the diagnosis, including the identification of autoantibodies against surface antigens of neutrophils.In older children, identifying these antigens indicates further investigation for congenital immunological disorders.Screening for these disorders includes measurement of circulation T-cell receptor α/β positive, CD4+/CD8+ double negative T-cells, or serum immunoglobulin.Definite diagnosis of these conditions requires specialized immunological screening [2,31]. Non-Immune Chronic Benign Neutropenia Non-immune chronic benign neutropenia more commonly appears in adults than children.Usually, it is an accidental finding in complete blood count where the degree of neutropenia is mild and is caused by an increased level of destruction of neutrophils.There are no typical clinical manifestations of the disease.Splenomegaly is seen in some rare cases of adults, usually due to increased serum concentration of pro-inflammatory cytokines and chemokines, as well as a high level of soluble cell adhesion molecules [32]. Benign Familial Neutropenia Benign familial neutropenia is an autosomal-dominant inherited disease.It is usually met within specific ethnic groups, such as Americans, South African Blacks, and other African tribes.Its prevalence has been estimated to be 25-50:100 in Africans, 4.5:100 in African-Americans, 10.7:100 in Arabs, 11.8:100 in Yemenite Jews, and 15.4:100 in Black Ethiopian Jews [33].The cause of neutropenia in this disease is unknown, and the diagnosis is usually made in the aforementioned ethnicities when other pathological causes of neutropenia have been excluded.Because of the benign course of the disease, no treatment is necessary [6]. Autoimmune Neutropenia Autoimmune neutropenia is a rare disease (0.12-1.14:100European individuals) caused by antibodies directed against neutrophil-specific antigens, leading to their destruction [34].It includes primary and secondary autoimmune neutropenia.Anti-neutrophil antibodies, called human neutrophil allogen (HNA) antibodies, are directed against neutrophil surface glycoproteins.Diagnosis is performed directly by granulocyte immunofluorescence test, where paraformaldehyde-fixed neutrophils are incubated with serum to allow neutrophil antibodies to bind to the antigenic epitopes, and indirectly by serum granulocyte agglutination test (GAT).In this test, agglutination of neutrophils produced by IgG antibodies in the GAT is an active process, occurring in two phases: in the first phase, neutrophil reactive antibodies bind to native antigens on unfixed neutrophils, sensitizing them.In the second stage, sensitized neutrophils undergo chemostasis and move toward other polymorphonuclear neutrophils (PMNs) [35]. 
Primary Autoimmune Neutropenia Primary autoimmune neutropenia is mainly diagnosed at an early age (5-15 months), and spontaneous reduction in neutrophils occurs in almost all cases.Its incidence is 1:100,000 individuals [36].Despite the deficient number of neutrophils, these patients rarely present serious infections.Autoantibodies are not easily detected, and the screening needs to be repeated multiple times.Autoantibodies bound to NA1 and NA2 granulocytes alloantigen type [35]. Secondary Autoimmune Neutropenia Secondary autoimmune neutropenia is most commonly present in adulthood.Usually, it is related to autoimmune diseases like rheumatoid arthritis, systemic lupus erythematosus, and Sjogren syndrome.It can also be related to hematological diseases, solid tumors, or immunological deficiency syndromes [35]. Alloimmune Neutropenia Alloimmune neutropenia is caused as a response of the newborn's immune system to maternal incompatible antibodies.Its incidence is 0.5-2:1000 live births [37].Neutrophilspecific antibodies HNA-1a/1b/2a, HNA-1c, HNA-3a, and HNA-4a are identified in newborn blood.The diagnosis is usually made immediately after birth, and the resolution occurs at 2-3 months of age.The degree of neutropenia ranges from moderate to severe.Affected infants have an increased risk for skin, respiratory, and urinary tract infections, omphalitis, and fever.The treatment is usually supportive, except in septic patients, where G-CSF is indicated [38]. 4.6.Drug-Induced Neutropenia (Antibody-Mediated) Drug-induced neutropenia through an antibody-mediated mechanism is a rare disease with a high mortality rate.Clinical symptoms include fever, sore throat, stomatitis, pneumonia, and sepsis.Diagnosis is made by bone marrow biopsy, where bone marrow granulocytes are presented with a late maturation arrest.Treatment planning requests cessation of the offending drug.In rare advanced cases, splenectomy might be required, especially in patients with pronounced anemia and thrombocytopenia [4]. Infection-Related Neutropenia (Antibody-Mediated) Neutropenia presented as a result of bacterial or viral infection is called post-infection antibody-mediated neutropenia.Diagnosis is made by a history of previous infection accompanied by laboratory results confirming the diagnosis.Bone marrow microscopy might present decreased bone marrow reserve, especially shown in patients with bacterial sepsis.Treatment is mainly aimed at treating the infection, but if bone marrow maturation is shown to decrease in microscopy, G-CSF may be required [4]. Hypersplenism Patients with hypersplenism might develop mild neutropenia.Hypersplenism can be related to various conditions, such as infections, neoplasms, collagen vascular disease, hepatic diseases, and hemolytic anemia.Desolation and possible destruction of neutrophils within the spleen is the mechanism causing neutropenia.The degree of neutropenia caused by hypersplenism seems to be irrelevant to the spleen size.Diagnosis is established by imaging methods, where the spleen appears to be enlarged, along with neutropenia in laboratory results [39]. 
Maternal Hypertension Maternal hypertension can cause neutropenia during pregnancy, which usually resolves within the first months of life.Newborns with intrauterine growth restriction, HELLP (hemolysis, elevated liver enzymes, and low platelets) syndrome, and premature rupture of membranes are at higher risk of developing neutropenia.Neutropenia, even if it is self-limited, can increase the risk of hospital infections, as it requires hospitalization of the newborn [40,41]. Diagnostic Approach to Neutropenia Regarding the diagnostic approach of a child with neutropenia, firstly, neutropenia should be confirmed by absolute neutrophil count in peripheral blood smear [1] according to age-specific institutional reference ranges [4].If institutional reference ranges for the pediatric population are not provided, the International Council for Standardization in Haematology (ICSH) recommends the use of published reference ranges [42].In Table 1, we provide reference ranges for age-specific white blood cells and leukocyte differential in a routine blood count. Apart from the absolute neutrophil count in peripheral smears, other findings that are directly related to specific diseases or disorders should be evaluated (Table 2). Moreover, the pediatrician or laboratory physician who is called to investigate neutropenia in childhood must be familiar with the age-specific causes (Table 3). A positive family history of neutropenia, bacterial infections early in life (e.g., infection of the umbilical cord stump), susceptibility to infections, and unexplained sudden infant death in the family should be directed to congenital neutropenia syndromes [49,50].To date, more than 24 genes have been identified to be associated with congenital neutropenia syndromes (Table 5) [49], with the majority of published cases (60%) being associated with ELANE gene mutations [51].ELANE gene is present in a gene cluster on chromosome 19 and is associated, apart from congenital neutropenia syndromes, with cyclic neutropenia.The protein initially produced by this elastase gene undergoes proteolytic processing to create the active form of the enzyme.Once activated, this enzyme breaks down proteins found in specialized neutrophil lysosomes called azurophil granules, as well as proteins present in the extracellular matrix.The enzyme's activity is thought to contribute to degenerative and inflammatory diseases by breaking down collagen-IV and elastin.Moreover, this protein can degrade the outer membrane protein A (OmpA) of E. 
coli and the virulence factors of bacteria such as Shigella, Salmonella, and Yersinia [52]. Among the genes listed in Table 5 are SBDS [58] (recessive, 7q11.22; Shwachman-Diamond syndrome), EFL1 [59] (recessive, 15q25.2; EFL1 syndrome), GATA2 [60] (dominant, 3q21.3; GATA2 syndrome), G6PC3 [61] (recessive, 17q21; severe congenital neutropenia), SLC37A4 [62] (recessive, 11q23.3; glycogen storage disease type Ib), STK4 (MST1) (recessive, 20q13; STK4 syndrome), and SMARCD2 [78] (recessive, 17q23). The clinical physician must conduct a thorough physical examination. Growth, development, mental status, and phenotypical abnormalities must be recorded, while all systems must be examined [1]. In Table 6, we present the clinical findings observed in various disorders that can cause neutropenia. The extent of the laboratory examinations is determined by the severity and duration of neutropenia [1]. In the pediatric population, viral infections are the main cause of neutropenia, apart from neonatal sepsis, where neutropenia is caused by bacterial infections [4]. In most cases, it is hard to distinguish whether neutropenia is the cause or the result of the infection. Nevertheless, a complete blood count should be repeated in 2-4 weeks, and if the neutropenia has resolved, no further examinations are needed [1]. If neutropenia persists, the complete blood count should be repeated 2-3 times per week for 6 weeks to differentiate cyclic neutropenia from severe chronic neutropenia [9]. The detection of antineutrophil antibodies can diagnose chronic benign neutropenia of infancy and childhood [4]. Bone marrow aspiration must be performed when neutropenia progresses, and myelodysplastic syndrome or leukemia must be ruled out or confirmed [1,4]. In Figure 1, we provide a practical diagnostic algorithm that we believe will be helpful to any physician who is called to manage a pediatric case of neutropenia.

Conclusions

This narrative review attempts a comprehensive and critical analysis of the current scientific data on the topic of neutropenia in children. Acquired neutropenia is usually benign and most frequently attributed to viral infections. There should be caution about drug administration in children, as this can also lead to neutropenia. Pediatricians should familiarize themselves with autoimmune disorders that can cause neutropenia. We show that clinical examination directs diagnostic investigations and that clinical findings point to specific laboratory and genetic testing. Congenital neutropenia syndromes are a group of rare genetic disorders clinically manifesting with severe infections early in life. The ELANE gene should be tested in all cases of "unexplained-idiopathic" congenital neutropenia. Finally, our review concludes with a practical diagnostic approach to neutropenia in children, which can serve as a guide for the optimal handling of neutropenic patients.
Abbreviations: CI-confidence interval; Eos-eosinophils; hr-hour; Lymp-lymphocytes; mo-months; Mono-monocytes; Neu-neutrophils; WBC-white blood cells; wk-week; yr-year. All means and CIs are presented in × 10^6/L. * Neutrophils include band cells of all ages and a small number of metamyelocytes and myelocytes in the first few days of life.
Table 5. Known genes associated with congenital neutropenia syndromes.
Crystallization and fusion kinetics of Poly(butylene terephthalate)/Titanium Dioxide Abstract In this paper, the crystallization, fusion, and activation energy (Ea) of PBT/TiO2 were thoroughly evaluated using DSC. Increasing the rates shifted the peaks of melt crystallization to lower temperatures while the fusions were almost unaffected. TiO2 hindered the melt crystallization of PBT and lower crystallization rates, i.e., CMAX and K’ were acquired, in general, the crystallinity degree (Xc) was 4% higher in PBT/TiO2 which is in the marginal error. Pseudo-Avrami and Mo models were applied to evaluate the melt crystallization kinetics; both fitted the melt crystallization quite well; deviations were observed at the beginning and the crystallization end most due to the nucleation and spherulites impingement during the secondary crystallization. Ea was evaluated using the Friedman model, considering the values of Ea less energy has to be removed from PBT/TiO2 when compared to PBT, specifically at 1% of TiO2. Introduction Polyesters are plastic resins widely used in sundry industrial applications, from the general goods as commodities to the sophisticated products with high technological performance and added value. These resins contribute for almost 18% of the world's polymer production [1] . Among them, one of the most important is poly(butylene terephthalate) (PBT), a thermoplastic, semi-crystalline with excellent processing properties. Its high chemical, thermal and mechanical performances make PBT a potential candidate for many applications in science and technology [2][3][4][5][6][7] . Literature has reported the crystallization kinetics of PBT upon additives and fillers addition, the second phase addition may promote the heterogeneous nucleation and reduce the crystallization time, speeding up its general processing [8] . However, other properties can be achieved, such as significant improvement in the mechanical properties [9] , antistatic and super-strength characters [10] are examples of synergistic PBT compounds, filled with aluminum oxide (Al 2 O 3 ), epoxide elastomers and carbon nanotubes, for instance. In order to improve polymers' properties, additives and fillers are commonly added. For instance, Titanium dioxide (TiO 2 ) which is used due to its high thermal and chemical stability, non-toxicity, photo-catalytic character and antibacterial action, for instance [11,12] , the addition of TiO 2 to the compounds may increase the solar reflectance [13] , rigidity [14] , tenacity [15] , synthesize films [16] and increases the degree of crystallinity [17] . Due to these great achievements, adding TiO 2 can be attractive aiming at higher PBT performance, therefore in this work, PBT was doped with TiO 2 , in amounts ranging from 0 to 10% of the weight. Afterwards, the phase transitions, i.e., crystallization and melting were investigated. Zhou et al. [18] reported TiO 2 effect in nanocomposites of poly(butene 2,5-furan dicarboxylate) (PBF), a biologicalbased polyester similar to PBT, at concentrations up to 7% of the weight; TiO 2 acted as a nucleating agent accelerating the crystallization as well as improving UV resistance. 
In the present work, as later on discussed at low PBT/TiO 2 contents (1% wt) there was no nucleating effect, suggesting that the deterrent effect of TiO 2 's solid particles was greater than the nucleation ability into PBT matrix during the melt crystallization [5] , investigation of the crystallization kinetics and energetic measurements are presented contributing to scientific and technological databases. Crystallization and fusion of PBT and PBT/TiO 2 composites were recorded using differential scanning calorimetry (DSC) through non-isothermal scans, and applying several heating/ cooling rates. Crystallization kinetics was evaluated using Pseudo-Avrami and Mo models, measured discrepancies are provided validating the modeling. Additionally, this work reports the activation energy evaluations for crystallization and fusion processes, a methodology rarely reported. The activation energy for the melt crystallization was computed using the Friedman isoconversional model [19] . Regarding the activation energy of fusion, Toda et al. [20] suggested a model for polymer fusion, considering the geometry of melting cylindrical rods, however, the literature reports that for cases of sample overheating during fusion, the most robust isoconversional models for calculation of the crystallization activation energy are also suitable for the evaluation of the activation energy of the fusion, therefore Friedman's isoconversional method was applied in this work [21] . Based on our database, the kinetics of crystallization and its modeling discrepancies, as well as the activation energies evaluation for PBT/ TiO 2 , have been rarely reported for polymers composites. Materials PBT 195 Valox was supplied by Sabic company (Bergen op zoom Netherlands), with density of 1.31 g cm -3 . TiO 2 was purchased from Evonik Degussa Co. with surface area of 50 m 2 /g and a 75:25 ratio of anatase and rutile, with an average crystal size of 25 to 94 nm. Compounding PBT compounds with 1, 5, and 10% of the weight of TiO 2 were prepared in a Haake Rheomix 600 (Germany) laboratory internal mixer fitted with high-intensity rotors type rollers, at 240 °C, 60 rpm during 10 min. Scanning electron microscopy (SEM) Scanning electron microscopy images were captured using a LEO 1430 unit, from Zeiss (USA). The specimens were previously fractured in liquid nitrogen to avoid plastic deformation, afterwards, coated with a carbon layer aiming to avoid the charges accumulation. Differential scanning calorimetry The phase transitions, i.e., crystallization and fusion, as well as the thermal properties were monitored with a DSC Q20 from TA Instruments (USA). Specimens weighing approximately 3 mg were experimented in closed aluminum pans under nitrogen gas flow of 50 mL/min. The applied thermal cycle consisted of: heating from 25 °C to 270 °C, isotherm at 270 °C for 3 minutes, cooling from 270 °C to 20 °C and re-heating from 20 °C to 270 °C using constant heating/cooling/reheating rates of 5, 10, 20, and 30 °C/min. Figure 1 displays a typical DSC scan together with an applied thermal cycle illustrated as a dotted red line. The investigated phase transitions are presented and coded as F 1 : first fusion; C 1 : melt crystallization; and F 2 : second fusion. Integration and conversion during crystallization and fusion measurements The crystallizable mass conversion during crystallization or fusion, x = x (t), was estimated using Equation 1, through the energy flow between the starting and ending points previously defined. 
where J is the heat flow of the phase transition and t' is the time at partial conversion; J 0 is an adequate baseline, and E 0 (Equation 2) is the total heat exchanged between the specimen and its surroundings during the event [22]. The crystallization or fusion rate c = c(t) was computed using Equation 3 [22]. The degree of crystallinity X c developed during the event was evaluated using Equation 4. In this work, the equilibrium melting enthalpy used for PBT was 140 J/g and the equilibrium melting temperature was taken as 226 °C.

Figure 2 shows an SEM image of PBT/10% TiO 2 in which the white dots are TiO 2 particles, well dispersed in the PBT matrix as a result of the proper compounding parameters. The applied magnification was 10,000×.

Melt crystallization (C1) measurements

Relative crystallinity (X rel ) and crystallization rate (dx/dt) as functions of temperature for PBT and PBT/1% TiO 2 at the investigated cooling rates are displayed in Figure 3. The Supplementary Material presents plots for PBT/5% TiO 2 and PBT/10% TiO 2 (please see Figures S3 and S4). Sigmoidal behavior was verified for the X rel plots in Figure 3a and 3b, characterizing a phase transition without discontinuities, as commonly observed in polymers [24]. The dx/dt curves showed a bell shape, increasing at the beginning of crystallization, which is related to nucleation and primary crystallization, reaching a maximum and decreasing afterwards, which corresponds to secondary crystallization and spherulite impingement [25]. Sigmoids obtained at the higher cooling rates are displaced to lower temperatures due to the time effect, i.e., at higher cooling rates there is less time for nucleation, and crystal growth occurs at lower temperatures [26]. The crystallization rates increase at higher cooling rates, as may be confirmed from the heights of the bell-shaped curves. Quantitative data for the crystallization rates are tabulated in the Supplementary Material (Table S1). Concerning the TiO 2 addition, in general the sigmoids of PBT/TiO 2 were displaced to lower temperatures; nevertheless, the filler effect is nonlinear, and this topic is further discussed in terms of activation energy. Figure 4a and b presents the sigmoids collected for F 1 and F 2 , respectively; the corresponding melting rates are also shown within the plots. In general, the fusion is less sensitive to the heating rates and to TiO 2 addition, which is evidenced as a subtle displacement of the bell peaks. During the first fusion, both compounds presented quite similar melting rates and molten fraction profiles; a slight dissimilarity was verified for PBT/10% TiO 2 , which melted in a lower temperature range. The readers may find additional molten fraction plots in the Supplementary Material (please see Figures S2 and S8).

First (F1) and second (F2) fusion measurements

Regarding F 2 , the investigated compounds presented quite similar sigmoid and melting profiles; nevertheless, contrary to F 1 , the F 2 peaks displayed a complex character, which may be linked to distinct morphologies and crystal perfection [27]; it seems there are smaller/imperfect crystals that melt in the temperature range from 200 to 220 °C, while the more perfect/bigger ones melt between 220 and 240 °C [28,29]. It is supposed there was crystal reordering during the melt crystallization and the second heating, which promoted the development of more perfect crystals; a similar trend is reported in the literature for PP, PET, and Nylon 1212 [30-32], for instance.
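Before moving on to the effect of TiO 2 on the F 2 trend, it may help to illustrate the partial-area treatment introduced with Equations 1-4. The following Python sketch integrates a baseline-corrected heat-flow trace to obtain the conversion, its rate, and the degree of crystallinity; the synthetic exotherm, the function names, and the direct use of the 140 J/g equilibrium enthalpy are our own assumptions, not the authors' code.

```python
# Minimal sketch of the partial-area integration of a DSC exotherm (Equations 1-4).
# Synthetic, baseline-corrected heat-flow data are used only to make it runnable.
import numpy as np

def conversion_from_dsc(time_s, heat_flow_w_per_g, baseline_w_per_g=0.0):
    """Return (x, dxdt, E0): conversion x(t), its rate, and the total event heat E0 (J/g)."""
    j = heat_flow_w_per_g - baseline_w_per_g          # J - J0, the Eq. 1/2 integrand
    cumulative = np.concatenate(([0.0], np.cumsum(0.5 * (j[1:] + j[:-1]) * np.diff(time_s))))
    e0 = cumulative[-1]                               # Eq. 2: total exchanged heat
    x = cumulative / e0                               # Eq. 1: partial area over total area
    dxdt = np.gradient(x, time_s)                     # Eq. 3: crystallization (or fusion) rate
    return x, dxdt, e0

def degree_of_crystallinity(e0_j_per_g, delta_h_eq_j_per_g=140.0):
    """Eq. 4 as described in the text: measured event enthalpy relative to the
    equilibrium melting enthalpy of PBT (140 J/g), in percent."""
    return 100.0 * e0_j_per_g / delta_h_eq_j_per_g

if __name__ == "__main__":
    t = np.linspace(0.0, 120.0, 601)                   # s
    peak = 0.9 * np.exp(-((t - 60.0) / 15.0) ** 2)     # synthetic exotherm, W/g
    x, rate, e0 = conversion_from_dsc(t, peak)
    print(round(e0, 1), "J/g;", round(degree_of_crystallinity(e0), 1), "% crystallinity")
```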
Apparently, TiO 2 addition did not significantly change the F 2 trend, however during F 2 higher melting rates were verified suggesting easier melting, deeper discussion related to this topic and its relationship with the activation energy for melting is further on presented. The degrees of crystallinity computed for F 1 and F 2 are displayed in Figure 5. In general, X c decreased with increasing the heating rate, specifically for the heating rates higher than 10 o C/min, since for higher heating rates there is reduced time for the crystal formation. The thermal environment changes rapidly hindering or interfering in the crystals' nucleation and growth, hence producing shorter or imperfect crystallites [33] . Regarding TiO 2 addition, the composites presented a slight increase in Xc, i.e., approximately 4% higher. Melt crystallization kinetics -Pseudo Avrami Aiming to further analyzing the non-isothermal melt crystallization, the kinetics of crystallization of neat PBT and PBT/TiO 2 composites was analyzed. The relative crystallinity as crystallization time function was computed as the exothermic peak areas ratio using Equation 5: where: c dH dt is the released heat; rel X is the relative crystallinity measured from the peak integration as the ratio between the total and partial peak's area, 0 t and t ∞ are the onset and end melt crystallization times. Pseudo-Avrami modeling Avrami [33][34][35][36][37] developed a macrokinetic model to investigate the isothermal crystallization, based on microkinetics approaches. The Avrami model considers the relative crystallinity x as time function τ computed in the event starting according to Equation 6: K = K(Τ) and n = n(Τ) are the Avrami's parameters. K is the rate constant evaluated considering the nucleation and crystalline growth rates, and n is the Avrami exponent which is related with the crystallite geometry [37][38][39][40] . Nonisothermal crystallization data, acquired using constant cooling rates may be correlated through an expression formally identical to Avrami Equation 7: Nevertheless, when using this model for nonisothermal crystallization investigations the parameters K' and n' are the heating rate φ functions, and not of temperature as in the Avrami model. Therefore, our researcher group has named Pseudo-Avrami [25,26] . The relative crystallinity of PBT and PBT/TiO 2 composites are displayed in Figure S9 which presents the theoretical (solid lines) and experimental (symbols) data. All plots displayed sigmoidal shapes characterizing continuous phase transition as commonly observed in polymers. Plots in Figure S9 present reasonable fits without huge deviation between the experimental and theoretical data. Only for PBT/10% TiO 2 cooled at 5 °C/min presented deviation at the end of the primary crystallization. It can be verified that the experimental relative crystallinity developed subtly higher than the theoretical predictions, in general, when using rates lower than 10 o C/min and higher than 20 o C/min higher deviations are computed, which may be linked to the noise and time-lag effects, additionally, for PBT/10% TiO 2 it is supposed to be also linked to the TiO 2 addition influence. Nevertheless, in general, Pseudo-Avrami described the crystallization of PBT and PBT/TiO 2 composites in a reasonable mode. 
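Before examining how the sigmoids split into stages, a brief sketch may make the Pseudo-Avrami analysis concrete: the parameters n' and K' can be estimated from the usual double-logarithmic linearization, ln[-ln(1 - X rel )] = ln K' + n' ln t, restricted here to 20% < X rel < 80% in line with the range the authors use to judge the fit. The data and function names below are hypothetical.

```python
# Sketch of a Pseudo-Avrami linearization: fit ln[-ln(1 - Xrel)] versus ln(t)
# over 20% < Xrel < 80% to estimate the exponent n' and the rate constant K'.
import numpy as np

def pseudo_avrami_fit(time_min, x_rel, lower=0.20, upper=0.80):
    """Return (n_prime, k_prime) from a linear fit in the Avrami coordinates."""
    mask = (x_rel > lower) & (x_rel < upper) & (time_min > 0)
    y = np.log(-np.log(1.0 - x_rel[mask]))       # left-hand side of the linearized equation
    ln_t = np.log(time_min[mask])
    n_prime, intercept = np.polyfit(ln_t, y, 1)  # slope = n', intercept = ln K'
    return n_prime, np.exp(intercept)

if __name__ == "__main__":
    # Synthetic Avrami-like data (n = 2.5, K = 0.05 min^-n) used only for demonstration.
    t = np.linspace(0.1, 6.0, 60)
    x = 1.0 - np.exp(-0.05 * t ** 2.5)
    n_fit, k_fit = pseudo_avrami_fit(t, x)
    print(f"n' ~ {n_fit:.2f}, K' ~ {k_fit:.3f}")
```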
The sigmoids may be divided into three stages, i.e., the first stage due to the nucleation, the second stage due to the primary crystallization which takes place at an accelerated rate with a high amount of mass transformation, and the third stage due to the secondary crystallization that is slower and more prominent for the slower cooling rates. It is related to crystallite impingement when the crystallization is finishing [37,41] . As above verified for the crystallization rates, increasing the cooling rates displaced the sigmoids to higher times (lower temperatures), in general upon higher cooling rates the specimen crystallizes faster nevertheless the developed crystallites are shorter and/or imperfects, thus depending on the desired morphology the cooling rates may be a proper tool to attain it. As mentioned, the sigmoids may be divided into three stages, i.e., nucleation, primary crystallization, and secondary crystallization, related to the discrepancy between theoretical and experimental data the higher deviation was verified during the begging, 0 < X rel < 10% and the crystallization ending, i.e., X rel > 80%. From the sigmoids presented in Figure S9 the Pseudo-Avrami plots were built and are presented in Figure 6, through the plots of Y versus ln τ according to Equation 6. Linearity deviation was mainly verified when the crystallization was beginning and when it was finishing as illustrated. Clearly, Pseudo-Avrami plots may be divided into three stages: 1 stnucleation, 2 nd -primary crystallization, and 3 rd -secondary crystallization, corroborating with presented data in Figure S9. The discrepancy between theoretical and experimental data was measured and data are presented in Figure 7 for PBT/1% TiO 2 . In general, the higher deviation was verified for higher cooling rates, and for the beginning and ending of crystallization as mentioned. Whether the analysis is concentered between 20% < X rel < 80% the discrepancy goes down as demonstrated in Figure 7b, confirming Pseudo-Avrami fits quite well the crystallization from the melting of PBT and PBT/TiO 2 composites. Figure 8 presents the crystallization rate constant (K') and maximum crystallization rate (C max ) as the cooling rate function for the investigated compounds, both parameters are related to the crystallization rate and through the displayed data they increased with the cooling rates, i.e., theoretical and experimental crystallization rates followed similar trend [26,[42][43][44][45] . For a given cooling rate, the crystallization rate was higher for neat PBT indicating somehow TiO 2 decreased PBT's crystallizability, i.e., decreased PBT's ability to fast crystallize and hinder the transformation mechanisms, i.e., nucleation, primary and secondary crystallization, possibly changing the activation energy for the crystallization as further on investigated. In the Supplementary Material, in Table S2 the readers find the Pseudo-Avrami exponent and the R 2 parameter. Mo and co-workers modeling Mo and co-workers [45,46] developed a model to correlate non-isothermal crystallization parameters in polymers tested using constant cooling/heating rates, assuming the needed time τ to reach a given level of relative crystallinity due to the cooling/heating rate φ, according to Equation 8: Figure S12c shows sigmoids for PBT/5% TiO 2 where symbols are the experimental data acquired during cooling, and the solid lines are the theoretical data computed according to Mo model. 
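Before the quality of these fits is discussed, the following sketch illustrates how the Mo treatment of Equations 8 and 9 is typically applied: for a fixed relative crystallinity, ln φ is regressed against ln t across the cooling rates, the intercept giving ln F(X) and the slope giving -α. The times used below are hypothetical and serve only to make the example runnable.

```python
# Sketch of the Mo analysis: for a fixed relative crystallinity, ln(phi) is a linear
# function of ln(t) across cooling rates; intercept -> ln F(X), slope -> -alpha.
import numpy as np

def mo_parameters(cooling_rates, times_to_reach_x):
    """cooling_rates: phi values (e.g. degC/min); times_to_reach_x: time (min) needed at
    each rate to reach the same relative crystallinity X. Returns (F, alpha)."""
    ln_phi = np.log(np.asarray(cooling_rates, dtype=float))
    ln_t = np.log(np.asarray(times_to_reach_x, dtype=float))
    slope, intercept = np.polyfit(ln_t, ln_phi, 1)   # ln(phi) = ln F(X) - alpha * ln(t)
    return np.exp(intercept), -slope

if __name__ == "__main__":
    # Hypothetical times (min) to reach X = 50% at 5, 10, 20 and 30 degC/min.
    phi = [5, 10, 20, 30]
    t_half = [3.2, 1.9, 1.1, 0.8]
    f_x, alpha = mo_parameters(phi, t_half)
    print(f"F(50%) ~ {f_x:.2f}, alpha ~ {alpha:.2f}")
```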
Plots presented quite good fits between experimental and theoretical data with subtle deviation at the crystallization extremes, i.e., beginning and ending, i.e., possibly linked with the nucleation and spherulites impingement, following a similar trend as already observed for Pseudo-Avrami model. From these sigmoids Mo plots were built and presented in Figure S11d for PBT/10% TiO 2 and 10% < X rel < 90% from an overview of these data may be suggested Mo is adequate to modeling PBT and PBT/TiO 2 composites. Plots for other compounds are displayed in the Supplementary Material; please see the Figure S11 and Figure S12. Figure 9 shows the discrepancy between the theoretical and experimental data for PBT/10%TiO 2 evaluated using Mo model, following a similar trend as already observed for Pseudo-Avrami. A huger deviation was observed at the beginning and end of crystallization, nevertheless if assumed the range 20 < X rel < 80% the deviation is quite low, i.e., less than 5% which confirms Mo model describes very well the crystallization from the melt of PBT compounds [41,47] . Mo parameters F and α were measured using Equation 9 and are graphically presented in Figure 10 [47,48] . In the Supplementary Material, Table S3 the readers find the R 2 parameter for the investigated compounds. F increased with the degree of crystallization, i.e., for higher crystallinity much energy must be supplied to the system; a quite similar trend was observed for PP/PET blends as reported by Zhu et al. [49] . Related to TiO 2 addition, PBT composites displayed higher F suggesting that with the crystallization development the composites need much energy [50][51][52] . Mo exponent slowly increased with the degree of crystallinity suggesting crystalline structures more complexes were produced with the crystallization advance, i.e., nuclei are formed, as crystallization advance new macromolecules are added, progressing to the fibrils and then to the spherulites, which increase in size and can become more crystalline with the crystallization improvement, their crystallinity also depending on the applied crystallization parameters (time, temperature, cooling/heating rates), which can be used to control the whole crystallization. Parameters reported in the present paper may be used as proper tools to control the crystallization rate and the degree of crystallinity of PBT and PBT/TiO 2 composites. The following section presents the calculations for the activation energy for the melt crystallization and for the fusions. Activation Energy ( a E ) -Melt crystallization The conversion rate of a chemical reaction is commonly reported as the product of a temperature-dependent rate constant K(T) and a function of the f(x) conversion characteristic of the reaction mechanism, as shown in Equation 10: Isoconversional models are more applied to determine the activation energy of crystallizations. Friedman's model [53,54] is based on the logarithmic of the conversion rate assuming a constant rate K(T) defined by Arrhenius, which is shown in Equation 11: Where A is a pre-exponential factor constant and R = 8.314 J K -1 mol -1 is the universal gas constant. 
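As the text goes on to generalize this expression for a conversion-dependent activation energy, a small sketch may help to show how the Friedman evaluation is carried out in practice: at each fixed conversion, ln(dx/dt) measured at the different cooling rates is regressed against 1/T, and the slope gives -E a /R. The temperatures and rates below are synthetic, and the negative result simply mirrors the sign convention discussed for the melt crystallization in Figure 11.

```python
# Sketch of Friedman's differential isoconversional method: at a fixed conversion,
# the slope of ln(dx/dt) versus 1/T across the cooling rates gives -Ea/R.
import numpy as np

R_GAS = 8.314  # J mol^-1 K^-1

def friedman_ea(temps_k, rates_dxdt):
    """temps_k: temperature (K) at which a chosen conversion is reached for each cooling rate.
    rates_dxdt: corresponding conversion rates dx/dt (1/s). Returns Ea in kJ/mol."""
    inv_t = 1.0 / np.asarray(temps_k, dtype=float)
    ln_rate = np.log(np.asarray(rates_dxdt, dtype=float))
    slope, _ = np.polyfit(inv_t, ln_rate, 1)     # slope = -Ea / R
    return -slope * R_GAS / 1000.0

if __name__ == "__main__":
    # Hypothetical values at X_rel = 50% for cooling rates of 5, 10, 20 and 30 degC/min.
    # For melt crystallization the rate rises as temperature falls, so Ea comes out negative.
    T = [470.0, 466.0, 461.0, 458.0]             # K
    dxdt = [0.004, 0.009, 0.020, 0.030]          # 1/s
    print(f"Ea ~ {friedman_ea(T, dxdt):.0f} kJ/mol")
```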
Transformations from the amorphous/disordered state to the crystalline state in polymeric melt are considered complex reactions; therefore Equations 10 and 11 must be generalized to: Generally, a E is a function of conversion (in this case it is a function of (X rel ) and Equation 12 can be converted to the logarithmic form: , this treatment repeated for the different values of X rel results in a E as function of X rel . This method was applied in this work for X rel ranging between 0 and 1.0 as shown in Figure 11. As can be seen from Figure 11, all activation energies are negative, indicating that energy has to be removed from the system in order to promote the melt crystallization. Considering the absolute values of the activation energies, less energy has to be removed from the system for the TiO 2 / PBT compounds when compared to neat PBT. The only exception from this behavior is the composition with 10% TiO 2 load in the range 0-30% of relative crystallinity, which could be attributed to a measurement error. The fact that all the curves of a E versus X rel show up as almost parallel lines indicates that there is no significant change in the melt crystallization mechanism when adding the TiO 2 . The effect of filler is also nonlinear. A quite similar nonlinear shifting of activation energy curves was reported by Ries et al. [55] for the cold and melt crystallization of PHB/ZnO composites. Activation energy ( a E ) -First fusion In contrast to crystallization kinetics, fusion kinetics has been rarely investigated [56] . Few studies report polymer fusion kinetics by means of isoconversional kinetic models [57][58][59] . A differential or integral isoconversional method may be applied depending on the nature of the experimental data. If the reported data are from DSC measurements, therefore, Friedman's differential isoconversional method [53,54] may be used. Toda et al. [20,60] proposed a nucleation model for polymer fusion which fusion starts with melting the cylindrical cores. However, Friedman's isoconversional model is powerful to study the fusion kinetics and evaluate a E under superheating [61] . In those situations, a decrease in s of the fusions upon temperature increase is expected [62] . This behavior was mostly observed in this work. The numerical optimization method [63] is based on the data of Friedman analytical method. The acquired data from the Friedman method such as E(x) and A(x) are numerically optimized, the best fit between the experimental plots is obtained through non-linear optimization based on the least squares method. For fusion, this model-free method was the most suitable in this work, due to the better R 2 of the analytical plot, which ranged from 0,932 to 0,996. a E s were computed using the numerical optimization method and plotted as molten fraction function. Figure 12 shows a E for the first fusion of PBT and PBT/TiO 2 composites. Neat PBT fusion requires the highest activation energy; while the lowest a E was observed for PBT/1%TiO 2 then a further increase in filler content raises the activation energy again. This behavior is similarly nonlinear as the trend verified for the melt crystallization. Activation energy ( a E ) -Second fusion a E for the second fusion of investigated compounds was measured using the numerical optimization method, similarly to the first fusion. This method based on the Friedman model presented quite high R 2 , i.e., 0,975 < R 2 < 0,996. Plots are presented in the Supplementary Material, please see Figure S14. 
The acquired E a values are plotted as a function of the molten fraction (X m ) and shown in Figure 13. All investigated compounds presented a similar profile, i.e., E a decreased as the fusion advanced; the only exception observed was an increase for X m > 95% until the end of fusion. In the final stages of the second fusion, the verified trend for E a was: E a (95% PBT) > E a (90% PBT) > E a (99% PBT) > E a (100% PBT). During the second melting, there were no significant variations in E a with TiO 2 addition and, similarly to the first melting, there was no linear trend between the TiO 2 content and the computed E a . From Figure 12 and Figure 13 it may be verified that the character of the second fusion is quite different from that of the first one: the first fusion is related to the material quenched after mixing, while the second fusion is related to the melt-crystallized material. Hence, the second fusion was recorded during the second heating of material with a different thermal history and, mainly, a distinct morphology, which altogether leads to a different activation energy, as displayed in Figure 13.

Conclusions

PBT/TiO 2 compounds were successfully melt-mixed; according to SEM images, TiO 2 nanoparticles are well dispersed in the PBT matrix without evidence of agglomeration. The melt crystallization, the fusions, and the activation energy (E a ) were evaluated based on DSC scans. Upon integration of the DSC scans, the thermal events were visualized as sigmoids, indicating continuous phase transformation. Higher cooling rates shifted the sigmoids of the melt crystallization to lower temperatures, while the fusions were almost insensitive to the heating rates. The Pseudo-Avrami and Mo models fit the melt crystallization kinetics quite well, with subtle deviations verified only at the beginning and end of the crystallization; nevertheless, quite high R 2 parameters were acquired. Standard negative activation energies were computed for the melt crystallization and positive activation energies for the fusions; the Friedman model was applied to both phase transition evaluations, and the high R 2 values suggest that it is a proper methodology. As expected, the activation energies decrease upon temperature increase for all filler contents.

Supplementary Material

Supplementary material accompanies this paper. Figure S1. Typical DSC scan for PBT, collected during the applied thermal cycle with heating/cooling/re-heating at 10 °C/min. The dotted red line is the applied thermal program; the solid blue line is the heat flow signal with the investigated phase transitions, i.e., F 1 . Figure S2. The half melt crystallization time τ 1/2 (dashed line) and melt crystallization rate (solid line) of the produced compounds as a function of the tested cooling rates. Figure S3. Relative crystallinity (solid line) and crystallization rate (dotted line) as a function of temperature. Compounds and cooling rates indicated. Figure S4. Relative crystallinity (solid line) and crystallization rate (dotted line) as a function of temperature. Compounds and cooling rates indicated. Figure S5. Molten fraction (solid line) and melting rate (dotted line) as a function of temperature for F 1 . Compounds and cooling rates indicated. Figure S6. Molten fraction (solid line) and melting rate (dotted line) as a function of temperature for F 2 . Compounds and cooling rates indicated. Figure S7. Pseudo-Avrami plots of neat PBT and PBT/TiO 2 cooled at the indicated cooling rates, illustrating crystallization development in three stages. Figure S8.
Pseudo-Avrami plots of composites a) PBT/1%TiO 2 , b) PBT/5%TiO 2 and c) PBT/10%TiO 2 computed for the indicated cooling rates. Figure S9. Relative crystallinity of a) neat PBT, b) PBT/1%TiO 2, c) PBT/5%TiO 2 and d) PBT/10%TiO 2 at displayed cooling rates. The theoretical data are solid lines and the experimental are symbols. Figure S10. Discrepancy for the whole melt crystallization of a) neat polymer, b) PBT/5%TiO 2 and c) PBT/10%TiO 2 at indicated cooling rates. And Discrepancy for the melt crystallization between 20% < X rel < 80% of d) neat polymer, e) PBT/5%TiO 2 and f) PBT/10%TiO 2 . Plots built according to Pseudo-Avrami model. Figure S11. Mo plots for the melt crystallization of a) neat PBT, b) PBT with 1%TiO 2, c) PBT/5%TiO 2, and d) PBT/10%TiO 2 at indicated degree of crystallinity. Figure S12. Relative crystallinity for the melt crystallization of a) neat PBT, b) PBT/1%TiO 2, c) PBT/5%TiO 2, and d) PBT/10%TiO 2 at indicated cooling rates. Figure S13. Deviation between theoretical and experimental data during the melt crystallization evaluated using Mo model (cooling rates indicated) of a) neat PBT, b) PBT/1%TiO 2 and c) PBT/5%TiO 2 . And discrepancy evaluated for 0 < X rel < 80% of d) neat PBT, e) PBT/1%TiO 2 and f) PBT/5%TiO 2 . Figure S14. for the second fusion of PBT and PBT/TiO 2 composites using the numerical optimization method Table S1. Melt crystallization data for indicated compositions and rates. Table S2. Pseudo-Avrami expoent (n'), R 2 and degree of crystallinity (X c %) for the investigated compounds.
Long-term nusinersen treatment across a wide spectrum of spinal muscular atrophy severity: a real-world experience Background Spinal muscular atrophy (SMA) is an autosomal recessive disorder caused by a biallelic mutation in the SMN1 gene, resulting in progressive muscle weakness and atrophy. Nusinersen is the first disease-modifying drug for all SMA types. We report on effectiveness and safety data from 120 adults and older children with SMA types 1c-3 treated with nusinersen. Methods Patients were evaluated with the Hammersmith Functional Motor Scale Expanded (HFMSE; n = 73) or the Children’s Hospital of Philadelphia Infant Test of Neuromuscular Disorders (CHOP-INTEND; n = 47). Additionally, the Revised Upper Limb Module (RULM) and 6-minute walk test (6MWT) were used in a subset of patients. Patients were followed for up to 30 months of nusinersen treatment (mean, SD; 23, 14 months). Subjective treatment outcomes were evaluated with the Patients Global Impression–Improvement (PGI-I) scale used in all patients or caregivers at each follow-up visit. Results An increase in the mean HFMSE score was noted at month 14 (T14) (3.9 points, p < 0.001) and month 30 (T30) (5.1 points, p < 0.001). The mean RULM score increased by 0.79 points at T14 (p = 0.001) and 1.96 points (p < 0.001) at month 30 (T30). The mean CHOP-INTEND increased by 3.6 points at T14 (p < 0.001) and 5.6 points at month 26 (p < 0.001). The mean 6MWT improved by 16.6 m at T14 and 27 m at T30 vs. baseline. A clinically meaningful improvement in HFMSE (≥ 3 points) was seen in 62% of patients at T14, and in 71% at T30; in CHOP INTEND (≥ 4 points), in 58% of patients at T14 and in 80% at T30; in RULM (≥ 2 points), in 26.6% of patients at T14 and in 43.5% at T30; and in 6MWT (≥ 30-meter increase), in 26% of patients at T14 and in 50% at T30. Improved PGI-I scores were reported for 75% of patients at T14 and 85% at T30; none of the patients reporting worsening at T30. Adverse events were mild and related to lumbar puncture. Conclusions In our study, nusinersen led to continuous functional improvement over 30-month follow-up and was well tolerated by adults and older children with a wide spectrum of SMA severity. Supplementary Information The online version contains supplementary material available at 10.1186/s13023-023-02769-4. Background Spinal muscular atrophy (SMA) is an autosomal recessive disorder caused by a biallelic mutation in the survival motor neuron gene SMN1 on chromosome 5q13 [1][2][3].The lack of the SMN protein leads to anterior horn cell degeneration in the spinal cord, resulting in progressive muscle weakness and atrophy [4].The SMN2 gene is a centromeric copy of the SMN1 gene, but the genes differ by a C-to-T transition in exon 7.This difference results in the exclusion of exon 7 during the SMN2 pre-messenger-RNA splicing and production of the nonfunctional SMN protein, with only 10-15% of the SMN2 product being a full-length protein [5,6].The number of the SMN2 copies is the most important known modifier of SMA severity [7]. 
The incidence of SMA is about 1:11,000, and the carrier frequency is 1 in 40 to 67 [8].The phenotype of SMA ranges from a severe infantile form, with hypotonia and generalized weakness at birth, to an adult-onset disease with mild symptoms.Historically, based on the age of onset and the best motor function achieved, 5 types of SMA have been distinguished: SMA0, SMA1, SMA2, SMA3, and SMA4 [9].The SMA 0 type is placed at the most severe end of the disease spectrum.These patients present with a prenatal onset, arthrogryposis and severe respiratory failure at birth.SMA1 is the most common type of SMA.In the natural course of the disease, children with SMA1 never achieve ability to sit independently and their life span is limited due to a respiratory failure.In SMA 2, patients can sit unsupported but are never able to walk.Patients with SMA3 achieve the ability to stand and walk independently, however the age of onset, severity of the disease as well as the age of immobilization varies substantially in this group.SMA 4 refers to patients with the onset usually after 30 years of age with a mild phenotype of disease.Each type can be divided into sub-types with more severe or milder forms reflecting the continuum in the spectrum of the disease.SMA1 includes very severe type SMA1a, less severe SMA1b and SMA1c with prolonged survival.SMA1c patients can reach adulthood in some cases without gastrostomy or invasive ventilation.Patients with SMA2 can be divided into SMA2a or milder form SMA2b. SMA 3a and 3b refers to the patients with onset before 3 years of age or over 3 years, respectively The course and clinical presentation of SMA1c and SMA2a as well as SAM2b and SMA3a overlap even those patients differed in achievement of main motor milestones.This observation is especially evident in later stage of disease [10]. Natural history studies demonstrated progressive disease course in all types of SMA [10][11][12][13][14][15].Nusinersen is a splice-switching antisense oligonucleotide that promotes exon-7 inclusion into the SMN2 gene transcript [16,17], thus increasing the amount of functional SMN protein [18].It is the first disease-modifying drug for all SMA types, which was approved for use by the U.S. Food & Drug Administration and European Medicines Agency in 2016 and 2017, respectively.Since then, it has been used worldwide, with about 11,000 patients treated up to mid-2022 [19].In Poland, nusinersen treatment has been reimbursed since January 1, 2019, irrespective of patient age or SMA type and severity.Actually more than 850 patients are treated with nusinersen in Poland, another 120 receive other DMTs, accounting in total for about 80-85% of the whole population of Polish SMA patients.So far, the effectiveness and safety of nusinersen was demonstrated in clinical trials including pediatric patients only [18,20] and several recent studies reported real-world data on the effects of nusinersen treatment in the adult population.Most studies indicated benefits of nusinersen in adults regardless of the disease type, duration, and severity.However, most of them reported outcomes for a follow-up duration of up to 14 months, while data on long-term nusinersen treatment are limited [19,[21][22][23][24][25]. 
The aim of this real-world study was to investigate the safety and effectiveness of nusinersen treatment in patients with a wide spectrum of SMA severity, followed for up to 30 months.Additionally, we aimed to assess the subjective opinion of patients on the effect of nusinersen treatment on their disease course and symptoms. Patients and methods We prospectively assessed 130 patients who were treated with nusinersen between March 2019 and January 2022 when the data were cut.All patients received the treatment within the frame of a national reimbursement program at two centers that treat adults and children older than 5 years old. The inclusion criteria were defined by the national reimbursement program of nusinersen treatment in Poland and were the following: patients presented clinically with SMA types (1c-3; classification based on the highest motor milestone achievement), diagnosis was confirmed by genetic testing, with assessment of the number of SMN2 copies, the patients had no contraindication to lumbar punction or inability for lumbar punction.The program allows continuation of treatment in patients who started nusinersen before 2019, including Expanded Access Program (EAP).Additional criteria for inclusion into the study was the minimum and maximum treatment duration between 6 [T6] and 30 [T30] months, respectively.The patients were included into the treatment on a first-come, first-served basis from the region assigned to each center. Nusinersen administration All patients were treated with intrathecal loading doses of 12-mg nusinersen at days 1 (T0, baseline), 14, 28, and 63, followed by maintenance doses every 4 months (from month 6 [T6] to month 30 [T30]) according to the standard protocol.Intrathecal drug administration was performed by an experienced neurologist using a conventional lumbar puncture (LP) or by radiologist using computed tomography (CT)-guided LP with an ultra-low dose of radiation (a procedure developed by our team and reported previously [26]) or the C-arm fluoroscopy system.Local anesthesia (5% lidocaine/prilocaine cream) or sedation was offered to all patients and used if needed.Patients were monitored for at least 5 h after each procedure for possible adverse events. Functional assessment The Hammersmith Functional Motor Scale Expanded (HFMSE; score, 0-66), the Children's Hospital of Philadelphia Infant Test of Neuromuscular disorders (CHOP-INTEND; score, 0-66), Revised Upper Limb Module (RULM; score 0-37), and the 6-minute walk test (6MWT) were used to evaluate patients depending on functional ability or disease severity [27][28][29].In line with requirements of the national nusinersen reimbursement program, the HFMSE or CHOP-INTEND assessment was obligatory.RULM and 6MWT were additionally performed in one of the participating centers (Medical University of Warsaw, MUW) only, as there were not required by the national reimbursement program. The patients who were able to walk or sit independently underwent assessment by HFMSE test.Those who presented with severe muscle weakness: never sit independently (SMA1) or who lost this ability in course of disease or were weak sitters (SMA2, SMA3) were assessed by CHOP-INTEND adapted to adult patients.The assessment by CHOP-INTEND test was approved and required in the national nusinersen treatment program.The RULM test was applied to patients who sit or walked independently. 
Clinically significant improvement for the HFMSE, CHOP-INTEND, and RULM was defined as a change in the score of ≥ 3 points, ≥ 4 points, and ≥ 2 points, respectively [30][31][32].For patients able to walk independently, significant improvement in the 6MWT was defined as an increase in walking distance by at least 30 m [29]. The assessments were performed by experienced physiotherapists at T0 and at administration of each maintenance dose from T6. Whenever possible, the patients were tested by the same physiotherapist.Data on adverse events, including headache, nausea, vomiting, vertigo, fever, back pain with assessment of duration and intensity were collected using a questionnaire at each point of treatment.Information on hospitalization due to adverse event was also collected.It was also possible to report other adverse event.The subjective assessment of treatment by patients (or caregivers in the case of children) was performed using the 7-point Patient Global Impression -Improvement (PGI-I) scale [33] rated as follows: very much improved (1); much improved (2); minimally improved (3); no change (4); minimally worse (5); much worse (6); and very much worse (7).Patients assessed their clinical status at each time point of nusinersen treatment versus baseline (T0). Ethics and patient consent Patients or their caregivers, as appropriate, gave their informed consent for nusinersen treatment (National Health System form for reimbursed treatment program) and for data collection (Ethic Committee approval-BK/180/2008). Statistical analysis The results of functional assessments were presented as mean, SD, and 95% confidence intervals (CIs), and percentage of patients who showed improvement after treatment.The statistical inference of differences was assessed using the Wilcoxon signed-rank test and a paired t-test.Multivariate linear regression (least squares estimation) was used to identify factors responsible for the differences versus baseline.The initial regressions analysis model included numerous factors, such as age at onset, duration of the disease to the first dose, age at first dose, initial scores on motor function scales, number of SMN2 copies, dummy variables indicating BMI score < 18.5 and > 25 for low/high body mass index.A general-to-specific modelling strategy was used to obtain the final regression model.Specifically, nonsignificant factors were removed from the model.Due to a relatively large sample size, Student-t test were used to verify the significance of associations between explanatory variables and the outcome, with a p value of less than 0.05 level considered significant.Statistical calculations were performed with Stata 14.In addition to a full-sample analysis, the results were also reported separately for: (1) specific SMA types (1c-3); (2) sitting SMA2 and ambulant and non-ambulant SMA3. 
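To make the statistical workflow above concrete, the sketch below reproduces its two main ingredients in Python: paired tests of change from baseline and a general-to-specific (backward-elimination) linear regression. The study itself used Stata 14; this is only an illustrative analogue, and the variable names and synthetic data are assumptions, not the trial dataset.

```python
# Illustrative analogue of the analysis described above (synthetic data, hypothetical names).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

def paired_change_tests(baseline, followup):
    """Wilcoxon signed-rank test and paired t-test for change versus baseline."""
    _, w_p = stats.wilcoxon(followup, baseline)
    _, t_p = stats.ttest_rel(followup, baseline)
    return {"mean_change": float(np.mean(followup - baseline)),
            "wilcoxon_p": w_p, "paired_t_p": t_p}

def general_to_specific_ols(df, outcome, predictors, alpha=0.05):
    """Backward elimination: drop the least significant predictor until all p < alpha."""
    kept = list(predictors)
    while kept:
        X = sm.add_constant(df[kept])
        model = sm.OLS(df[outcome], X).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return model
        kept.remove(worst)
    return None

# Synthetic example, purely for illustration.
rng = np.random.default_rng(0)
n = 73
df = pd.DataFrame({
    "hfmse_t0": rng.integers(5, 60, n).astype(float),
    "age_at_first_dose": rng.uniform(18, 60, n),
    "smn2_copies": rng.integers(2, 5, n).astype(float),
    "female": rng.integers(0, 2, n).astype(float),
})
df["hfmse_t14"] = df["hfmse_t0"] + rng.normal(3.9, 3.0, n)
df["delta_t14"] = df["hfmse_t14"] - df["hfmse_t0"]

print(paired_change_tests(df["hfmse_t0"].values, df["hfmse_t14"].values))
final_model = general_to_specific_ols(
    df, "delta_t14", ["hfmse_t0", "age_at_first_dose", "smn2_copies", "female"])
if final_model is not None:
    print(final_model.summary().tables[1])
```

The backward-elimination loop mirrors the general-to-specific strategy described above: the least significant predictor is dropped until every remaining term satisfies p < 0.05.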
Results
The final analysis included 120 treatment-naive patients. Of 130 screened patients, 7 were excluded due to an insufficient follow-up duration, and 3 patients were excluded due to treatment discontinuation, including a 12-year-old boy with SMA3 who entered a clinical trial, a 24-year-old woman with SMA3 who did not tolerate the LP procedure, and a 26-year-old man with SMA1 who died before the fifth nusinersen dose due to tracheostomy bleeding unrelated to treatment. The first patient included in the analysis received the first nusinersen dose within the national reimbursement program on April 30, 2019, and the last patient started treatment on June 22, 2021. Most patients (n = 76, 63%) started treatment during the first 10 months after April 2019. Seven of the 120 patients started the treatment earlier, in 2017-2018, within the frame of the EAP. All had SMA1c. Six of them (adults) started nusinersen treatment in Belgium, were then transferred to continue the EAP in Poland (MUW) starting in September 2018, and continued the treatment in the National Health Service program. One SMA1c patient (a teenager) started nusinersen treatment in the EAP in Poland in one of the pediatric centers and was then transferred to the MUW center. The information on their functional assessment at the beginning of treatment (T0) was available in the patients' medical records. The mean treatment duration in the EAP for those 7 patients was 11 months (range, 6-14 months) and involved an administration of 6 doses on average. All but one patient were treated in the reimbursement program for at least 600 days (about 20 months).
The number of assessed patients decreased over time because they did not reach a given time point before the data were cut. Additionally, due to COVID-19 pandemic restrictions, some patients skipped the functional assessment at some points of treatment. The number of patients assessed at each time point by the two main tests is shown in Additional file 1 (Study Flow Diagram).
The baseline characteristics of patients are presented in Table 1. Among the 120 patients included in the analysis, 53 were female and 67 were male. Most patients were adults (88%, 105 patients). The mean age at T0 was 32 years (SD, 14 years; range, 5-66 years). Among the 15 children (1 SMA1c, 4 SMA2 and 10 SMA3) included in the study, the mean age at T0 was 9.3 years (SD 3.6, median 8 years; range 5-17 years). Eleven children were in the age range of 5-11 years and the remaining 5 children in the range of 12-17 years. SMA1c was reported in 12 patients (10%); SMA2, in 19 (16%); SMA3, in 89 (74%). The SMA3 group was divided into sitters (41 patients) and walkers (48 patients). In the SMA1c group, 11 of the 12 patients were adults. Their mean age at T0 and mean disease duration to the first dose were similar (because of onset in the first months of life), at 29 years (SD, 7.8 years; range, 13-45 years). The mean treatment duration for the whole study group was 23 months (SD, 14 months).
Lumbar puncture procedures
A total of 1023 intrathecal drug administrations via LP were performed during the study. Conventional intrathecal administration was performed in 87 of 120 patients (77%) and included 746 LPs. The remaining 277 intrathecal administrations of nusinersen were performed using CT-guided LP (in 30 patients) or the C-arm X-ray system (in 3 patients) due to a history of scoliosis surgery (12 patients), severe scoliosis (19 patients), or obesity (2 patients). These additional procedures for drug administration were required in 75% (9 patients), 63% (12 patients), and 14% (12 patients) of patients with SMA1c, SMA2, and SMA3, respectively. There were no administration failures. In 2 patients, LPs were performed via the intervertebral foramen using CT.
Hammersmith functional motor scale expanded
The HFMSE assessment at T0 was performed in 73 patients (43 men), including 6 patients (4 children) with SMA2 and 67 patients (10 children) with SMA3, including 19 non-ambulant SMA3 patients (see Table 2). Their mean age and mean disease duration at T0 were 31 years (SD, 15.6 years; range, 5-66 years) and 23.7 years (SD, 14 years; range, 4-62 years), respectively. One patient with SMA2 did not undergo assessment at day 180 (T6) but was assessed at the subsequent 4 time points; therefore, he was included in the analysis. At T30, 28 patients were evaluated using the HFMSE.
At least a 1-point improvement was noted in 52 of the 72 patients (72%) at T6 versus T0, and in 24 of the 28 patients (86%) at T30 versus T0. Clinically meaningful improvement (≥ 3 points) in the HFMSE score was observed in 26 of the 72 patients (36%) after six months of treatment (T6). The percentage of responders gradually increased to 71% (20 of the 28 patients) at T30 versus T0 (Additional file 2). In 11 of the 73 patients (15%), the HFMSE score improved by at least 10 points during the treatment. Of those patients, 8 were still able to walk, 6 had 4 copies of SMN2, and 5 had 3 copies of SMN2. Of the 73 patients, 4 had a score of ≥ 60 points at T0. Two patients who scored 64 points at T0 remained stable up to T26 and T30, respectively. One patient improved from 60 to 63 points at T14 and was stable until T22, and 1 patient improved from 61 to 63 points at T14 and was stable at T30. Worsening was observed in 8% of patients at T6 and 4% of patients at T30 (Additional file 2). Similar results were obtained in a separate analysis for the SMA3 group (Additional file 3). A separate statistical analysis for SMA2 was not performed because of the small number of those patients (n = 6). Three of the SMA2 patients were assessed until T30; 2 of them improved (one from 20 to 29 points, and the second from 17 to 21 points), and the third was stable (8 points at T0 and at T30). The other 2 SMA2 patients were assessed until T26 and both improved (one from 9 to 11, the other from 4 to 6 points). The sixth patient, the only one in whom worsening was observed, was treated until T22, and his score was 4 at T0 and 3 at T22.
The mean HFMSE score for the 73 patients at T0 was 34.0 points and gradually increased at subsequent time points of nusinersen treatment, up to 40.9 points at T30. The mean value of the differences between T0 and T6 was 2.5 points and doubled to 5.1 points at T30; the mean differences between T0 and each time point of treatment reached statistical significance (p < 0.001). The results are presented in Table 3 and Fig. 1 (see also Additional file 4). Additionally, there were significant differences in mean HFMSE scores between subsequent time points during the follow-up, starting from T6, with a continuous increase up to T30 (Additional file 5).
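For readers who want to reproduce this type of tabulation from their own longitudinal score data, a minimal sketch is given below. The data layout, column names, and the wide-format table `hfmse_wide` are assumptions made for illustration only; they are not the study's actual dataset.

```python
# Illustrative helper for tabulating mean change from baseline and responder rates
# at each visit, in the spirit of the HFMSE analysis above. Layout and names are assumed.
import pandas as pd

def change_and_responders(scores: pd.DataFrame, threshold: float) -> pd.DataFrame:
    """scores: one row per patient, columns T0, T6, T14, ... (NaN if not assessed).
    Returns mean change vs. T0 and % of assessed patients improving by >= threshold."""
    out = {}
    for col in [c for c in scores.columns if c != "T0"]:
        paired = scores[["T0", col]].dropna()            # patients assessed at both visits
        change = paired[col] - paired["T0"]
        out[col] = {
            "n": len(paired),
            "mean_change": round(change.mean(), 2),
            "pct_responders": round(100 * (change >= threshold).mean(), 1),
            "pct_worsened": round(100 * (change < 0).mean(), 1),
        }
    return pd.DataFrame(out).T

# Example call for HFMSE (clinically meaningful improvement defined as >= 3 points):
# summary = change_and_responders(hfmse_wide, threshold=3)
```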
The mean HFMSE score changes between baseline and each time point of treatment, assessed separately for ambulant (48) and non-ambulant (25) patients, revealed statistically significant differences at each point of treatment for each group. However, when these results were compared, statistically significant differences between ambulant and non-ambulant patients were not found at any point of treatment (p > 0.2) (Additional file 6).
Children's Hospital of Philadelphia Infant Test of Neuromuscular Disorders
Among the 47 patients (24 men [51%]) assessed with CHOP-INTEND, 12 patients had SMA1; 13, SMA2; and 22, SMA3 (see Table 4). The mean age at T0 was 33.7 years (SD, 11.0; range, 13-66), and the mean disease duration to the first dose was 31.7 years (SD, 9.5; range, 3.0-58.0). Forty-four patients were assessed at least at T0 and T6. The baseline CHOP-INTEND score was not available for the 3 adults with SMA1 who started treatment abroad within the EAP. They started evaluation in the study at T10, T14, and T18, respectively. In two of them, the assessment was available up to T30. The data are presented in a separate analysis of SMA1 patients (Additional file 7).
In the patients assessed by CHOP-INTEND, an improvement by at least 1 point was noted in 77% (34 of 44) of patients at T6 and in 94% (16 of 17) of patients at T26 vs. baseline. Clinically meaningful improvement (≥ 4 points) in the CHOP-INTEND score was observed in 20.5% (9 of 44) at T6 and in 65% (11 of 17) at T26 (Additional file 8). At T30, only 5 patients were assessed, and improvement versus baseline was noted in 4 (all SMA1). In separate analyses for SMA1, SMA2 and SMA3 patients, the highest percentage of patients who improved at each time point of treatment was noted for SMA3 (Additional files 7, 9, 10). The mean value of the differences between T0 and T6 was 2.23 points and increased to 5.59 points at T26. The mean differences between T0 and each time point of treatment up to T30 reached statistical significance (p < 0.001) (Table 5; Fig. 2). There were also statistically significant differences in the mean CHOP-INTEND score between subsequent time points.
Of the 12 patients with SMA1, 9 were assessed at T0 and were treated for at least ten months (assessment at T10), and all of them showed improvement in the CHOP-INTEND score by at least 1 point (Fig. 3). Eight of these patients were assessed at T26, and clinically meaningful improvement (≥ 4 points) was shown in 58.3% (range, 5-17 points). All 4 patients who were assessed at T0 and at T30 showed improvement by more than 4 points (range, 6-17 points). Of the 3 patients without assessment at T0, 2 patients showed improvement by 1 point, and 1 patient was stable during the follow-up (Fig. 3).
Revised upper limb module
Fifty-one patients (9 with SMA2 and 43 with SMA3; 30 men [59%]) were assessed by the RULM at T0 and at least 1 subsequent time point of treatment. The mean RULM score significantly increased between T0 and subsequent time points up to T30, except between T0 and T10 (Table 6). Similar results were obtained when the patients with the maximum score at baseline were excluded. Differences in the mean RULM score between individual time points are shown in Additional file 5.
The mean RULM score changes between baseline and each time point of treatment assessed separately for ambulant (26) and non-ambulant (25) patients revealed the statistically significant difference at each point of treatment for ambulant patients.In non-ambulant patients the significant improvement is observed only after T22.The differences in mean score between ambulant and not-ambulant patients was statistically significant in the period T14-T26 and it is very closed to statistical significant at T30.The data showed that nonambulant patients gained better improvement (Additional file 12). 6-minute walk test Twenty-seven patients with SMA3 (18 men [67%]) were evaluated by the 6MWT at T0 and at least 1 time point of treatment from T6 to T30.The lack of a fairly significant number of ratings in the 6MWT test was mainly due to patients' fear of staying too long in the hospital and contacting medical staff and other patients during the pandemic.The mean age of these patients at T0 was 27 years (SD, 13; range, 6-59), and the mean disease duration was 18 years (SD, 10; range, 4-33). Clinically meaningful improvement (change in 6MWT ≥ 30 m) was observed in 33% (5 of 15) at T6, and these values gradually increased to 50% (6 of 12) at T30.The number and percentage of patients with any worsening was relatively large in each point of treatment.At T6 was 40% (6 of 15) and at T30 was 33% (4 of 12) (Additional file 13). Multivariate regression Multivariate regression analyses with changes in the HFMSE score as an outcome variable showed that improvement in the first period of treatment (T0-T6, T0-T10) depended on sex, with women showing a greater improvement (p = 0.038, p = 0.010, respectively).The improvement in the longer horizon (T0-T26, T0-T30) is negatively associated with initial score on motor scale (p = 0.046, p = 0.018).None of the additional factors (number of the SMN2 copies, age at onset, duration of the disease to the first dose and age at the first dose, body mass index) showed a significant correlation with the treatment outcome (Additional file 14).Multivariate regression analyses with changes in the CHOP-INTEND score as an outcome variable did not show any significant association with factors tested in the HFMSE (data not shown). Safety Data on adverse events after LP and drug administration also included the loading doses (days 1, 15, 30, and 63) and were available for 1023 intrathecal injections.The procedure was generally well tolerated.Post lumbar puncture syndrome (PLPS) was observed in 198 of 1023 (19%) LPs.All patients with PLPS reported headache, mainly of mild intensity.Back pain was reported for 111 LP procedures (11%).Nausea was reported by 41 patients (4%) and vomiting by 12 (1%).Only 1 patient (SMA1) required a single hospitalization for severe back pain after LP.PLPS developed on the same day, on the second day, or on the third day after LP in 13%, 67%, and 13% of all LPs, respectively.In 7% of LPs, PLPS occurred after 3 or more days after the procedure but not later than after 7 days. The LP procedure supported by CT or the C-arm fluoroscopy system was associated with a lower risk of PLPS compared with conventional intrathecal drug administration (11% vs. 22%, respectively; p < 0.00001). In one case (a 26-year-old woman with SMA2 and history of scoliosis surgery), cerebrospinal fluid leak was observed after CT-guided injection at T10.The leak stopped within 1 h without intervention. 
Patient global impression -improvement Overall, 96.5-100% of patients reported subjective improvement or stabilization.During the 30 months of treatment, none of the patients reported feeling much or very much worse (grades 6 or 7) (Additional file 15). The distribution of responders, that is, patients who achieved a clinically meaningful improvement in each of the functional tests, is shown in Table 8. Discussion Adults constitute about half of all patients with SMA [34].Recently, numerous real-world studies reported the effectiveness and safety of nusinersen treatment in adults and older children [19, 22-24, 35, 36].However, while the studies confirmed the beneficial effect and a satisfactory safety profile, the longest follow-up was limited to 14 months, and thus data on long-term effects in adults are limited.In addition, there was no evidence on the effectiveness of nusinersen in patients with SMA1 with prolonged survival up to adulthood, that is in SMA1c.The present study was performed with the aim to fill the gap in the current scientific knowledge.SMA1c adult patients are rarely viewed as eligible for treatment, as no data was reported so far in this patient's group.SMA1c is a significantly a milder phenotype then SMA1a and SMA1b and the course and clinical presentation is similar to SMA2a phenotype especially in later stages of diseases [10]. Our study confirmed a significant improvement in mean HFMSE scores at 14 months versus baseline and demonstrated continued functional gain also after subsequent 16 months (T30) of nusinersen administration.Previous studies showed the beneficial effect of treatment at 14 months [21,22].Only few studies reported a longer observation time, but did not exceed 24 months [24,25].Our study showed significant differences in the mean HFMSE score between baseline and subsequent time points of treatment in all 73 patients, including 6 with SMA2 and 67 with SMA3.When patients with SMA3, ambulant SMA3, and non-ambulant SMA3 were evaluated separately, the differences in the mean score between baseline and subsequent time points were almost identical for all these groups.Interestingly, a recent study of 111 children and young adults with SMA2 and SMA3 (median age, 12.5 years) followed for 24 months showed different results [24].There was a significant increase between baseline and 12 months in SMA2, but not in SMA3.Moreover, a significant increase was noted in HFMSE between baseline and 24 months in SMA2 and SMA3 only in children younger than 5 years (p = 0.009 and p = 0.043, respectively), but not in older subgroups.Our results demonstrated a significant potential for improvement also in older patients with SMA2 and SMA3, which stands in contrast to the natural history of SMA2 and SMA3, with a functional decline manifesting as a mean loss of 0.5 to 1 points in the HFMSE score per year [10][11][12].Interestingly, although the most dynamic improvement in our study was observed during the first 18 months of treatment, it remained significant until the end of followup.The rate of responders as assessed by the HFMSE score increased to 71% (20 out of 28 patients) at T30.In an Italian study, the percentage of responders increased from 28% (33 of 116 patients) at T6 to 49% (25 of 51 patients) at T14 [21], while in a German cohort, it was only 40% (23 of 57) at T14 [22].These differences may be due to a higher proportion of patients with SMA2 and a lower HFMSE score at baseline in those studies as compared with our cohort.High HFMSE scores at baseline predict 
better improvement, at least during the first 14 months of treatment [22].The floor effect of HFMSE in weak sitters may affect the sensitivity to detect changes in adult patients and should be remember when interpreting the treatment results [25]. Our data support previous findings that even adult patients with poor motor function at baseline can derive significant benefits from nusinersen treatment [21,23,36].We demonstrated improvement in patients with SMA1c and severe SMA2 and SMA3 who were assessed by the CHOP-INTEND test.A mean CHOP-INTEND score significantly increased between baseline and subsequent time points up to T26, with 80% of responders at T30.At T26, 7 of 8 patients with SMA1c achieved a clinically meaningful response.Moreover, all 4 SMA1c patients who reached T30 were responders.There are no literature data on nusinersen effectiveness in adult patients with SMA1c. Upper limb function assessed with the RULM showed continuous improvement, not only during the first 14 months of treatment [21,22], but also until T30.Again, our study demonstrated a greater benefit than previous reports [21,24].The percentage of responders increased from 25% (5 of 20 patients) at T6 to 43% (10 of 23 patients) at T30.All patients with a maximum score at baseline maintained their function.The ceiling effect of the score makes it difficult to demonstrate improvement by means of the RULM in patients with milder form of SMA [21,24,25,36]. As for the 6MWT, our study indicated a continued benefit of treatment with stabilization after 18 months.The multivariate regression analysis showed that during the first 10 months of treatment, women showed greater improvement in the HFSME score than men; however, this difference was not observed in the long-term followup.The improvement in the longer horizon (T0-T26, T0-T30) is negatively associated with initial score on motor scale (p = 0.046, p = 0.018), which is in line with previous studies [22].The results concerning the association between changes in the HFMSE score and factors such as sex and initial HFMSE scale remain robust across regressions utilizing various sets of explanatory variables (with 0.1 < p < 0.01).It is important to note, however, that these findings should be interpreted with caution due to relatively small sample and the p-values within a range that indicates marginal statistical significance.None of the other factors/variables which were taken to account in the multivariate regression analyses did show a significant correlation with the treatment outcome.This observation was found also in previous research [21]. Our study confirmed that nusinersen administration is safe and well tolerated by patients, adverse events were seen in 30% of the patients but were mostly mild.This supports previous reports [21,22,35].We observed that although a CT-guided LP requires a more complex medical approach, the risk of PLPS was significantly reduced in comparison with conventional LP.It could be related to LP technique (less traumatic, guided approach) but also the functional status of the patients, as guided technique was employed in more advance, non-ambulant patients. The results of the PGI-I questionnaire confirmed a high level of patient satisfaction with treatment results [37]. 
Our study has several limitations. First, the sizes of the adult SMA1c and SMA2 samples were relatively small. The CHOP-ATTEND test, which is now validated for adult patients with severe symptoms, was not available at the time of the study. For this reason, we applied the CHOP-INTEND test in these cases, although it is not validated in adults; however, it was the only scale available at that time and recommended for use in non-sitters or very weak sitters, including adults [38,39]. Additionally, the results of some functional tests were not available for all time points due to restrictions imposed during the COVID-19 pandemic in 2020 and 2021. During the pandemic, 31 doses were delayed, and we were not able to control our analysis for this factor. Additionally, the study did not involve a control group of untreated patients, as the national program of nusinersen treatment in Poland does not have significant exclusion criteria and most (currently over 900) patients with SMA are treated.
In conclusion, our data provide real-world evidence for the continuous effectiveness and safety of long-term nusinersen treatment in adults and older children regardless of the type and severity of SMA, including adult patients with SMA1c.
Fig. 1 Mean differences in HFMSE score between baseline (T0) and subsequent treatment time points (in months) up to T30; p < 0.001 at all time points; n - number of patients assessed at each time point of treatment.
Fig. 2 Mean differences in CHOP-INTEND score between baseline (T0) and subsequent treatment time points (in months) up to T26; p < 0.001 at all time points; n - number of patients assessed at each time point of treatment.
Table 1 Baseline characteristics of patients.
Table 2 Baseline characteristics and demographics of analyzed patients at each time point of treatment: HFMSE assessment. Data are n (%) or mean (SD, range). HFMSE - Hammersmith Functional Motor Scale Expanded.
Table 3 Changes in the HFMSE score versus baseline (6 patients with SMA2, 67 patients with SMA3, including 48 ambulant and 19 non-ambulant patients).
Table 4 Baseline characteristics and demographics of analyzed patients at each time point of treatment: CHOP-INTEND assessment. Data are n (%) or mean (SD, range). CHOP-INTEND - Children's Hospital of Philadelphia Infant Test of Neuromuscular Disorders.
Table 6 Changes in the RULM score (max. 37 points) versus baseline (T0). *Wilcoxon test; NA - not applicable.
Table 7 Changes in 6MWT results versus baseline. NA - not applicable.
Table 8 Distribution of patients who achieved clinically meaningful improvement (responders) in each of the functional tests applied in the study. For the PGI-I, responders were defined as patients who improved minimally, much, or very much. *Three patients who did not undergo assessment at T0 were excluded.
v3-fos-license
2019-11-14T17:07:16.584Z
2019-11-12T00:00:00.000
209715613
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://aip.scitation.org/doi/pdf/10.1063/1.5121860", "pdf_hash": "650de9b5ca89f3f762fa2c79a5e8f8ca9391e7ff", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44837", "s2fieldsofstudy": [ "Physics" ], "sha1": "841ac43cd509ac8e86965165f76822c50afb3039", "year": 2019 }
pes2o/s2orc
Precision locking CW laser to ultrastable optical frequency comb by feed-forward method We locked a 1064 nm continuous wave (CW) laser to a Yb:fiber optical frequency comb stabilized to an ultrastable 972 nm CW laser with the feed-forward method. Consequently, the stability and coherent properties of the ultrastable laser are precisely transferred to the 1064 nm CW laser through the frequency comb's connection. The relative linewidth of the frequency-stabilized 1064 nm CW laser is narrowed to 1.14 mHz, and the stability reaches 1.5 × 10−17/s at the optical wavelength of 1064 nm. The phase noise characterization in the 1 mHz–10 MHz range is presented to indicate that feed-forward locking a CW laser to an ultrastable comb will offer a potential technique for many important applications, such as optical frequency synthesis and gravitational wave detection. © 2019 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1063/1.5121860
Optical frequency comb (OFC) and frequency-stabilized continuous wave (CW) lasers are two key components in many applications such as optical clocks, high-resolution spectroscopy, time and frequency transfer, astrocombs, absolute distance measurement, coherent property transfers in optical fields, and space-based gravitational wave detection. [1][2][3][4][5][6][7][8] In all the above applications, coherent locking between an OFC and a CW laser is generally required. A traditional connection technique is based on a servo phase-locked loop (PLL) electronics that generates appropriate control signals, which are fed back to control the actuators (e.g., the piezoelectric transducer, the acoustic optical modulator, the pump current, or the temperature of the system) in laser. Locking an OFC to a CW laser or locking a CW laser to an OFC has been accomplished with the servo feedback phase-locked loop method in the past few decades. [9][10][11][12][13] However, the PLL electronics with large servo bandwidth requires more complex circuit design and more careful adjustment of P-I filter parameters. Although a wide bandwidth locking up to 1.3 MHz in the extended cavity diode laser (ECDL) has been obtained by direct feedback control of the injection current of the diode laser with error signal, 14 in most solid-state lasers and fiber lasers, the locking bandwidth of the whole system is usually limited to less than 200 kHz by the properties of the actuator, the noise introduced by a complex circuit, and the pump-gain process. Taking an example of the impact of the solid-state gain medium, in the Ti:sapphire comb, the upper level lifetime of the Ti:sapphire medium is 3.2 μs, which corresponds to the theoretical servo bandwidth of 300 kHz, but in the case of locking the carrier-envelope phase offset (CEO) frequency with the PLL electronics, it typically only reaches approximately 50 kHz. 15 In 2011, a novel feed-forward method, based on an acousto-optic frequency shifter (AOFS) without the PLL, was developed to stabilize the carrier envelope phase offset (CEO) signal in OFCs. 16 Due to the real-time response of the frequency shift in the AOFS, the CEO noise decreased dramatically, and a residual timing jitter of 12 as was successfully obtained in the Ti:sapphire comb. Since then, the feed-forward configuration has been used in multiple OFCs, such as fiber combs, to minimize the CEO phase noise. 17,18 In 2012, this scheme was adopted in locking the CW laser to the frequency comb by Sala et al. 19,20 The ∼0.6 MHz feed-forward locking bandwidth and the ∼10 kHz linewidth in the locked CW laser have been demonstrated. In this letter, we demonstrated an ultranarrow linewidth 1064 nm CW laser by applying the feed-forward method to lock to a Yb:fiber OFC that was stabilized to an ultrastable 972 nm CW laser. 21 The 1064 nm laser beam was separated into a transmitted beam and a first-order diffracted beam with the AOFS. To lock and analyze the 1064 nm CW laser, the beat signals between the comb and these two beams were recorded by in-loop and out-of-loop configurations. The relative linewidth of the out-of-loop beat note was reduced to 1.14 mHz after feed-forward locking the 1064 nm laser to the comb, which was the first time the millihertz level was accessed by the feed-forward locking technique.
Over 3 h of measurement, the frequency shift showed an Allan deviation of 1.5 × 10−17 at a 1 s sampling rate. We also measured the phase noise power spectral density (PSD) of the stabilized out-of-loop beat note signal from 1 mHz to 10 MHz. The integrated phase noise (IPN) was 381 mrad from 1 Hz to 10 MHz, which corresponds to 216 as of timing jitter in a 1 s series. Accordingly, the IPN from 1 mHz to 10 Hz was 20.5 mrad, which corresponds to 11.5 as of timing jitter in a long 1000 s series. This long-term frequency stability and extremely low phase noise, especially in the low frequency regime, are of particular importance for applications in optical frequency synthesis and gravitational wave detection.
The experimental setup is shown in Fig. 1. The 1064 nm CW laser is a diode-pumped monolithic Nd:YAG laser that operates at a single frequency with 500 mW output power. 22 The laser is prestabilized to one of the absorption spectral lines of iodine molecules (127I2). 23,24 Because the temperature of the iodine cell is not strictly controlled and the residual amplitude modulation is not eliminated, the frequency offset of the 1064 nm CW laser still shows a few kilohertz of jitter in the short term and dozens of kilohertz of drift in the long term after prestabilization. 25 This prestabilization benefits the further locking to the ultrastable comb; however, since the 1064 nm CW laser has already been locked to the iodine absorption lines, it is difficult to lock it further to an ultrastable comb by feeding back to internal or external ports that modulate the laser cavity. For this setup, the feed-forward method, which changes only the output beam frequency instead of modulating any internal laser cavity parameters, is a good solution. The optical frequency comb is based on a 250 MHz, 20 mW, 1030 nm Yb-doped fiber comb that has been stabilized to an ultrastable 972 nm CW tunable diode laser, which has been locked to a high-finesse F-P cavity with the Pound-Drever-Hall (PDH) technique. The relative stability of the Yb:fiber comb was reported to reach 2 × 10−18/s, one of the best among Yb:fiber combs, as described in Ref. 21. To complete the feed-forward locking between the 1064 nm laser and the ultrastable comb, an AOFS with a center modulation frequency of 80 MHz was inserted into the beam from the 1064 nm CW laser. Its modulation bandwidth is approximately 1.4 MHz, which allows wide-bandwidth phase locking. When the deflection efficiency of the AOFS was set to 10%, the 1064 nm laser beam was divided into a transmitted beam with 450 mW (in-loop) and a first-order diffracted beam with 50 mW (out-of-loop). They were then separately superimposed and heterodyned with the nearest line of the Yb:fiber comb. A self-heterodyne fiber delay technique was used to evaluate the spectral content of both the first-order diffracted beam and the frequency comb in Refs. 19 and 20. In our experiment, the ultrastable laser to which the Yb:fiber comb is locked has a hertz-level linewidth, which means a very long coherence length, so the self-heterodyne fiber delay technique is not appropriate here. Hence, we built the out-of-loop interferometer to detect the beat signal between the first-order diffracted beam and the comb and to assess the phase noise of the stabilized 1064 nm CW laser.
In the in-loop interferometer, the beat signal was detected by using a photodiode (PD), f_in-loop = |f_n − f_cw|, where f_n is the nth comb mode and f_cw is the CW laser frequency. The f_in-loop was then mixed with an RF synthesizer (LO) signal, f_LO, which is derived from the frequency synthesizer to generate the driving signal of the AOFS, f_AOFS. Thus, it has been shown that f_AOFS = f_LO ∓ f_in-loop. In the out-of-loop interferometer, the beat frequency f_out-of-loop, heterodyned between the first-order diffracted beam and the comb, can be expressed as

f_out-of-loop = |f_n − (f_cw + f_AOFS)| = |f_n − f_cw − f_LO ± f_in-loop| = f_LO.    (1)

From this equation, it is clear that f_out-of-loop is stabilized at the frequency f_LO, and the noise carried by f_cw is removed by the f_in-loop term, which enters f_AOFS with the opposite sign. In one experiment, we obtained the beat signal of about 22.2 MHz and mixed it with an RF signal of 102.2 MHz to obtain the 80 MHz driving signal for the AOFS. According to Eq. (1), f_out-of-loop should be fixed at 102.2 MHz after being locked. Figure 2 shows the f_out-of-loop beat note frequency spectrum and the linewidth after locking. The frequency spectrum content of f_out-of-loop is recorded by using the frequency spectrum analyzer (R&S, FSW26) with 1 kHz resolution bandwidth. The f_out-of-loop had a 45 dB signal-to-noise ratio and contained more than 90% of the RF power within the coherent carrier. In addition, two bumps appear at approximately 0.28 MHz on both sides of the center frequency f_out-of-loop, indicating that the actual bandwidth of the whole feed-forward system is approximately 280 kHz. This was dependent on the rising time of acoustic wave transmission in the AOFS. To investigate the exact relative linewidth of f_out-of-loop, we used a fast Fourier transform (FFT) analyzer (SRS, SR770) with a resolution bandwidth of 0.47 mHz to measure the frequency spectrum distribution. The curve in Fig. 2(b) shows that the original data points match a Lorentzian line shape, and the relative linewidth is reduced to 1.14 mHz. We also compared the frequency shift of f_out-of-loop in a long time series with a frequency counter, using the 1064 nm CW laser with and without feed-forward locking to the comb. When the 1064 nm CW laser was only pre-iodine-stabilized, f_out-of-loop had kilohertz-level jitter in a 1 s time gate and dozens of kilohertz of drift in the long term, dropping from 40 kHz to −40 kHz over a 1.8 h time period, shown by the blue shade in Fig. 3(a). After feed-forward locking to the ultrastable comb, both the fast jitter and the slow shift in f_out-of-loop were suppressed to a large extent. The red line in Fig. 3(a) represents the frequency drift of the stabilized f_out-of-loop, like a straight line without any fluctuation compared to the blue curve. The detail over the 3-h time period of the stabilized f_out-of-loop is described in Fig. 3(b), which corresponds to a 4.1 mHz standard deviation of the shift. The calculated Allan deviation is shown in Fig. 3(c) with a tracking stability of 1.5 × 10−17 at a 1 s sampling rate at the central wavelength of 1064 nm, and the tracking stability dropped below 2.7 × 10−19 at a 1000 s gate time. This was attributed to the AOFS with ∼1.4 MHz frequency mismatch tolerance, which is larger than in a general servo-loop circuit. For a deeper look at the noise performance, the phase noise power spectral density (PSD) and the integrated phase noise (IPN) of the locked f_out-of-loop were observed in a frequency-resolved 1000 s record (1 mHz-10 MHz).
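A short numerical check helps make the frequency bookkeeping above concrete. The sketch below assumes a +1-order diffracted beam, the sign choice f_AOFS = f_LO − f_in-loop, and an arbitrary comb-line frequency; these conventions are illustrative assumptions, not taken from the article, but they reproduce the quoted numbers (a 22.2 MHz in-loop beat, a 102.2 MHz LO, an 80 MHz AOFS drive, and an out-of-loop beat pinned at f_LO).

```python
# Numerical sanity check of the feed-forward frequency bookkeeping sketched above.
# Sign conventions (CW laser above the nearest comb line, +1 diffraction order,
# f_AOFS = f_LO - f_in-loop) are assumptions for illustration only.
import numpy as np

MHz = 1e6
f_n = 281.630e12                                   # a nearby comb line (arbitrary optical frequency, Hz)
f_lo = 102.2 * MHz                                 # RF synthesizer (LO) frequency

rng = np.random.default_rng(1)
jitter = rng.normal(0.0, 5e3, 10_000)              # kHz-level residual jitter of the prestabilized laser
f_cw = f_n + 22.2 * MHz + jitter                   # 1064 nm CW laser frequency

f_inloop = np.abs(f_cw - f_n)                      # in-loop beat detected on the photodiode
f_aofs = f_lo - f_inloop                           # AOFS drive derived by mixing with the LO
f_diffracted = f_cw + f_aofs                       # +1st-order beam carries the frequency shift
f_outofloop = np.abs(f_n - f_diffracted)           # out-of-loop beat against the same comb line

print("AOFS drive ~", np.mean(f_aofs) / MHz, "MHz")                  # ~80 MHz
print("out-of-loop beat mean:", np.mean(f_outofloop) / MHz, "MHz")   # pinned at f_LO = 102.2 MHz
print("out-of-loop beat spread:", np.std(f_outofloop), "Hz")         # ~0: the laser jitter cancels
```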
We could not measure the PSD of the free running beat note signal f_out-of-loop because of strong jitter in the regime of exceeding 1 s where the analyzer was unable to read the data. At high frequencies, from 1 Hz to 1 MHz, as shown in Fig. 4(a), the total IPN was 381 mrad, which corresponded to 216 as timing jitter. The inflection point at 280 kHz across the PSD distribution reflected the bandwidth of the feed-forward scheme, in accordance with the analysis of Fig. 2(a). The 80% of residual integrated phase noise originated from outside the servo bandwidth, and the noise below hundreds of kilohertz was well suppressed. Figure 4(b) shows the low frequency noise, characterized from 1 mHz to 10 Hz, measured with the FFT analyzer. The IPN from 1 mHz to 10 Hz was 20.5 mrad, which corresponded to 11.6 as residual timing jitter. When transformed to the frequency noise amplitude spectral density in the unit of Hz/√Hz, the noise amplitude was 2 × 10−3 Hz/√Hz at 1 mHz. It is worth noting that a CW laser with such low IPN in the low frequency range benefits the detection of low frequency gravitational waves. 26 In conclusion, we demonstrated that precision locking can occur between a 1064 nm CW laser and an ultrastable optical frequency comb by the feed-forward method without the servo-loop. The out-of-loop measurements confirmed that the stability and coherent properties of the ultrastable comb were transferred to the 1064 nm CW laser. The relative linewidth of the stabilized CW 1064 nm laser was narrowed to 1.14 mHz, and the Allan deviation was 1.5 × 10−17 in 1 s, which is on the same scale of the ultrastable comb. The integrated phase noises in the range of the high frequencies (1 Hz-10 MHz) and low frequencies (1 mHz-10 Hz) reached 381 mrad and 20.5 mrad, respectively. The long-term stability and the phase noise have shown the robustness and reliability of the feed-forward scheme and indicate that such a stabilized CW laser may find important applications in low frequency gravitational wave detection and high precision optical frequency synthesis. In the future, we will lock multiwavelength CW lasers to ultrastable frequency comb with the feed-forward scheme to establish an ultrastable and ultraprecise optical frequency synthesizer.
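The two figures of merit quoted in this section, the Allan deviation of the counter record and the conversion of integrated phase noise into timing jitter via Δt = φ_rms/(2πν), can be post-processed with a few lines of code. The sketch below uses synthetic counter readings with the ~4.1 mHz scatter mentioned above; the gate times, sample count, and noise model are assumptions made for illustration.

```python
# Post-processing sketch for the two figures of merit quoted above: the Allan
# deviation of the out-of-loop beat and the conversion of integrated phase noise
# (IPN) into residual timing jitter. Data are synthetic; only the formulas matter.
import numpy as np

def allan_deviation(y, tau0, m_list):
    """Non-overlapping Allan deviation of fractional-frequency samples y at gate time tau0."""
    y = np.asarray(y, dtype=float)
    taus, adevs = [], []
    for m in m_list:
        n_bins = len(y) // m
        if n_bins < 3:
            break
        ybar = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)   # averages over m samples
        taus.append(m * tau0)
        adevs.append(np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2)))
    return np.array(taus), np.array(adevs)

def timing_jitter_from_ipn(ipn_rad, carrier_hz):
    """Convert rms integrated phase noise (rad) to timing jitter (s): dt = phi / (2*pi*nu)."""
    return ipn_rad / (2.0 * np.pi * carrier_hz)

nu_1064 = 299_792_458.0 / 1064e-9                  # optical carrier of the 1064 nm laser (Hz)
rng = np.random.default_rng(2)
beat_dev = rng.normal(0.0, 4.1e-3, 10_800)         # 3 h of 1-s counter readings minus 102.2 MHz
y_frac = beat_dev / nu_1064                        # beat fluctuations referenced to the optical carrier

taus, adevs = allan_deviation(y_frac, tau0=1.0, m_list=[1, 10, 100, 1000])
print(dict(zip(taus, adevs)))                      # ~1.5e-17 at 1 s for ~4 mHz scatter
print(timing_jitter_from_ipn(0.381, nu_1064))      # ~2.2e-16 s, i.e. ~216 as (1 Hz-10 MHz IPN)
print(timing_jitter_from_ipn(0.0205, nu_1064))     # ~1.2e-17 s, i.e. ~11.6 as (1 mHz-10 Hz IPN)
```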
v3-fos-license
2021-03-17T05:22:33.677Z
2021-03-01T00:00:00.000
232241040
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1424-8220/21/5/1714/pdf", "pdf_hash": "89457545845ea9b471f923e8a6a3a067129de094", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44838", "s2fieldsofstudy": [ "Business" ], "sha1": "89457545845ea9b471f923e8a6a3a067129de094", "year": 2021 }
pes2o/s2orc
A New GNSS Interference Detection Method Based on Rearranged Wavelet–Hough Transform Since radio frequency interference (RFI) seriously degrades the performance of a global navigation satellite system (GNSS) receiver, interference detection becomes very important for GNSS receivers. In this paper, a novel rearranged wavelet–Hough transform (RWHT) method is proposed in GNSS interference detection, which is obtained by the combination of rearranged wavelet transform and Hough transform (HT). The proposed RWHT method is tested for detecting sweep interference and continuous wave (CW) interference, the major types of GNSS interfering signals generated by a GNSS jammer in a controlled test bench experiment. The performance of the proposed RWHT method is compared with the conventional techniques such as Wigner–Ville distribution (WVD) and Wigner–Hough transform (WHT). The analysis results show that the proposed RWHT method reduces the influence of cross-item problem and improves the energy aggregation property in GNSS interference detection. When compared with the WHT approach, this proposed RWHT method presents about 90.3% and 30.8% performance improvement in the initial frequency and chirp rate estimation of the GNSS sweep interfering signal, respectively. These results can be further considered to be the proof of the validity and effectiveness of the developed GNSS interference detection method using RWHT. Introduction With the increasing global navigation satellite system (GNSS) applications in various civil and military fields, reliable navigation and positioning performance has become a key and important requirement for GNSS receivers [1][2][3][4]. The direct sequence spread spectrum (DSSS) scheme which has been used in most of the satellite navigation systems spreads the received GNSS signal power over a wider bandwidth; this ensures a dispreading gain in the GNSS receiver, which can reduce impairments caused by the undesired disturbing signals [1]. Although GNSS has a certain capability to be immune from interference, due to the reception of the very low GNSS signal power, intentional or unintentional radio frequency interference (RFI) can cause serious performance degradation of a GNSS receiver; for instances, RFI may cause degradation of GNSS signal quality, serious errors in navigation and timing results, and even completely loss lock of the receiver [2][3][4][5][6]. Therefore, interference detection has become a critically important role for GNSS applications [7][8][9][10][11][12][13][14][15][16]. The commonly used methods in GNSS interference detection mainly include automatic gain control (AGC) method, time domain methods, and frequency domain methods. AGC adjusts the GNSS input signal level to the range of the analog-to-digital converter (ADC), which performs interference detection by monitoring the AGC level [17][18][19]; however, in a weak interference environment, the detection performance of the AGC method is significantly reduced. For the time domain method, this method can be implemented at the digital intermediate frequency (IF) level after the ADC at the front end of the GNSS receiver; this time domain method is only effective for narrowband RFI sources, because broadband interference cannot be easily distinguished from thermal noise [20,21]. For the frequency domain method, due to its spectral characteristics, it is usually used to detect narrowband carrier interference [22][23][24]. 
Concerning GNSS interference detection by using time domain methods or frequency domain methods, they cannot fully describe the nature and characteristics of the time-varying interference present in the received GNSS signals [13]. To deal with the TF resolution trade-off problem with the spectrogram and the cross-term problem with the WVD, the Wigner-Hough transform (WHT) has been adopted [25][26][27]. In the detection of linearly frequency modulated (LFM) interfering signal for GNSS applications, Hough transform converts the problem of global straight-line detection in the image space to the local peak detection problem in the parameter space [15,16]. Although WHT shows improved performance in interference detection [15], under low signal-to-noise ratio (SNR) scenario, false WHT peak occurs in the parameter space due to strong noise, resulting in interference detection error. The rearrangement operation can be combined with the TF representations for signal detection applications [8,9,28,29]. To improve GNSS interference detection performance, in this paper, the rearrangement operation has been introduced to the continuous Wavelet transform (CWT), obtaining the rearranged wavelet transform, which is beneficial for the significant improvement of the TF aggregation property in the TF plane; then, the obtained rearranged wavelet transform has been further combined with the Hough transform, which is very effective to deal with the cross-term problem; in this way, a new rearranged wavelet-Hough transform (RWHT) method has been proposed in interference detection for GNSS receivers particularly working in challenging interfering environments, which can be considered to be the main novelty of this paper. In this paper, first, the cross-term problem with the conventional TF analyses such as WVD used in GNSS interference detection has been discussed and then the Hough transform has been adopted to overcome this tough cross-term problem; second, the rearranged wavelet transform has been developed to improve the TF resolution in the TF plane; finally, the rearranged wavelet transform has been further combined with the Hough transform obtaining the novel RWHT which is expected to improve the interference detection performance for GNSS receivers. To prove the validity and effectiveness of the proposed RWHT-based GNSS interference detection method, the interference detection experiment has been performed by using the real GPS L1-C/A signal collected in the presence of sweep interference or continuous wave (CW) interference. The analysis results have shown that the proposed RWHT technique effectively suppress the cross-terms in the bilinear TF distributions and greatly enhances the TF localization property in the TF plane, which significantly improves the interference detection performance for GNSS receivers particularly working in the difficult jamming scenarios. This paper is organized as follows: Section 2 introduces the models and methods used in GNSS interference detection; Section 3 analyzes the GNSS interference detection results; the discussion has been made in Section 4; finally, the conclusion has been addressed in Section 5. GNSS Signal and Interference Model In an interfering environment, the model of the signal at the input of a GNSS receiver can be represented as [10][11][12][13]15]: where r RF,i (t) represents the ith GNSS signal (i = 1, 2, · · · , N s ), N s denotes the number of satellites in view, and η RF (t) is the disturbing term. 
When a single useful signal is considered, the GNSS signal transmitted by the ith satellite can be given as [10][11][12][13]15]: where: • A i is the amplitude of the ith useful GNSS satellite signal; • τ i is the propagation delay for the ith satellite signal; In general, the disturbing term r RF (t) can be expressed as [10][11][12][13]15]: where j RF (t) is the non-stationary interfering signal and w RF (t) is the GNSS receiver thermal noise which is usually in the form of a zero-mean stationary Additive White Gaussian Noise (AWGN). Potentially, there are different forms of interfering signals generated by RFI sources. In this paper, without loss of generality, the interference term j RF (t) can be considered to be a sweep interference (linear chirp), usually frequency modulated with near constant amplitude. Sweep interference, i.e., LFM interference, is considered to be one of the main types of interfering signals. When considering GNSS applications, sweep interference is regarded as an interference pattern that performs periodic linear scanning of the GNSS target frequency band, thereby effectively reducing the reliability or safety of satellite navigation and positioning services in the target frequency band. Sweep interference can be expressed by sinusoids in the time domain as: where A inst (t) is the carrier amplitude of the sweep interfering signal, f inst (t) is the instantaneous frequency of the sweep interference, and ϕ 0 denotes the initial carrier phase of the sweep interference, which can be considered to be a random variable presenting a uniform distribution in the range [−π, +π). The instantaneous frequency f inst (t) of the linear chirp can be expressed as: where f 0 is the initial frequency, t j denotes the frequency sweep period for the interfering signal, and k represents the chirp rate or the frequency modulation rate. The CW interference can be considered to be a special case of the sweep interference with a fixed carrier frequency f 0 ; in this case, j RF (t) can be expressed as [13]: where A CW (t) is the amplitude of the CW interfering signal, f CW (t) denotes the center frequency of the carrier, and ϕ 0 represents the initial carrier phase of the CW interference. The input signal y RF (t) defined in Equation (1) is filtered and down-converted by the GNSS receiver front end. Then, the received GNSS signal before the ADC is expressed as [10][11][12][13]15]: where f IF is the intermediate frequency (IF) of the GNSS receiver; c i (t − τ i ) represents the spreading code sequence after filtering in the GNSS receiver front end. Here the effect of the GNSS receiver front-end filter is neglected assuming the simplifying condition c i (t) ≈ c i (t); and η(t) represents the disturbing component after down-conversion and filtering, η(t) = j(t) + w(t). To avoid the cross-terms caused by the interaction between the positive and negative frequency parts of the spectrum, an analytic form of the received GNSS signal has been proposed [7,[10][11][12][13]15], expressed as: where j is the imaginary root unit and the analytic signal y a (t) contains a real part y(t) representing the original GNSS signal and an imaginary partŷ(t) denoting the Hilbert transform of y(t). Wigner-Ville Distribution (WVD) WVD is a commonly used TF distribution for analyzing non-stationary time-varying signals, which belongs to a typical quadratic or bilinear TF representation since the analyzed signal is used twice in the calculation. 
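The interference models above translate directly into a few lines of signal generation, which is useful for testing detectors against known ground truth. The sketch below synthesizes a periodic linear sweep and a CW tone in AWGN at a digital IF; the sampling rate, IF, amplitudes, sweep period, and chirp rate are illustrative assumptions, not the parameters of the test-bench experiment.

```python
# Illustrative generator for the interference models described above: a periodic
# linear sweep (LFM) jammer and a CW tone, both added to white Gaussian noise at
# a digital IF. Parameter values are assumptions only.
import numpy as np
from scipy.signal import hilbert

fs = 16.368e6                              # sampling rate (Hz), typical of GPS L1 front ends
f_if = 4.092e6                             # intermediate frequency (Hz)
t = np.arange(int(2e-3 * fs)) / fs         # 2 ms of samples

def sweep_interference(t, f0, chirp_rate, sweep_period, amp=1.0, phi0=0.0):
    """Periodic LFM jammer: f_inst(t) = f0 + k*(t mod t_j); phase is the integral of f_inst."""
    tm = np.mod(t, sweep_period)
    phase = 2.0 * np.pi * (f0 * tm + 0.5 * chirp_rate * tm ** 2) + phi0
    return amp * np.cos(phase)

def cw_interference(t, f_cw, amp=1.0, phi0=0.0):
    """CW jammer: a fixed carrier at f_cw."""
    return amp * np.cos(2.0 * np.pi * f_cw * t + phi0)

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, t.size)                     # receiver thermal noise (AWGN)
j_sweep = sweep_interference(t, f0=f_if - 1e6, chirp_rate=2e9, sweep_period=1e-3, amp=3.0)
j_cw = cw_interference(t, f_cw=f_if + 0.5e6, amp=2.0)

y = noise + j_sweep + j_cw                               # the GNSS signal itself sits far below the noise
y_analytic = hilbert(y)                                  # analytic signal y_a(t) = y(t) + j*H[y](t), as in the text
```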
Wigner-Ville Distribution (WVD)
The WVD is a commonly used TF distribution for analyzing non-stationary, time-varying signals. It is a typical quadratic (bilinear) TF representation, since the analyzed signal enters the calculation twice. The WVD can be defined as the Fourier transform of the time-dependent instantaneous auto-correlation function R_y(t, τ) of the analyzed signal: where the instantaneous correlation function is R_y(t, τ) = y_a(t + τ/2) y_a*(t − τ/2), y_a(t) is the analytic signal, and τ is the lag variable. Although the WVD has many attractive properties and provides nearly the best TF resolution among all TF analysis techniques, its main drawback is the cross-term problem caused by its bilinear nature [30,31]. Consider the signal y(t) = y_1(t) + y_2(t), where y(t), y_1(t) and y_2(t) are analytic. Expanding the instantaneous auto-correlation function of y(t) gives where R_y1y2(t, τ) and R_y2y1(t, τ) are the instantaneous cross-correlation functions (e.g., R_y1y2(t, τ) = y_1(t + τ/2) y_2*(t − τ/2)). Taking the Fourier transform of Equation (9) with respect to τ yields where WVD_y1(t, ω) and WVD_y2(t, ω) are the WVDs of y_1(t) and y_2(t), respectively, and the last term is the cross-WVD (XWVD) between y_1(t) and y_2(t): From Equation (11) it follows that the WVD of the sum of two signals is not simply the sum of their individual WVDs, but also contains their XWVD. In other words, the spectral energy density of the sum of two signals does not reduce to the sum of the individual densities (unless the signals are spectrally disjoint). If y_1(t) and y_2(t) are mono-component signals, WVD_y1(t, ω) and WVD_y2(t, ω) are the auto-terms, while 2Re{WVD_y1y2(t, ω)} is the cross-term. Consequently, if a signal contains more than one component in the TF plane, its WVD exhibits spurious cross-terms located halfway between each pair of auto-terms. As an example, an LFM signal in AWGN is analyzed, with the SNR set to 3 dB. The WVD of the noisy LFM signal is shown in Figure 1a, and the contour of the computed WVD in Figure 1b. In Figure 1, the TF peaks representing the noisy LFM signal can be observed, but serious cross-terms appear in regions of the TF plane where no energy is expected. These cross-terms have no physical meaning and make proper signal interpretation very difficult; this is the main drawback of the WVD approach [30,31]. Although the quadratic WVD shows satisfactory TF aggregation, the WVD of a multi-component signal inevitably presents cross-terms due to its bilinear nature [30,31]. To deal with the cross-term problem, the wavelet transform can be adopted, as explained in the following section.
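As a minimal illustration of the cross-term mechanism discussed around Equations (9)-(11), the sketch below computes a simplified discrete pseudo-WVD of a two-tone analytic signal and checks the energy that appears halfway between the two auto-terms. The discretization choices (no lag window, frequency bins spanning 0..fs/2) are assumptions of this sketch rather than the paper's implementation.

```python
import numpy as np

def discrete_wvd(x):
    """Simplified discrete (pseudo) Wigner-Ville distribution of an analytic signal x.
    Rows index frequency bins (spanning 0..fs/2), columns index time; a sketch, not
    an exact reproduction of the continuous definition."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    W = np.zeros((n, n))
    for m in range(n):                           # time index
        lmax = min(m, n - 1 - m)                 # largest admissible lag
        lags = np.arange(-lmax, lmax + 1)
        r = x[m + lags] * np.conj(x[m - lags])   # R_y(t, tau) on the sample grid
        R = np.zeros(n, dtype=complex)
        R[lags % n] = r                          # wrap lags into an FFT-length buffer
        W[:, m] = np.real(np.fft.fft(R))         # Fourier transform over the lag variable
    return W

# Two spectrally separated tones: the WVD shows both auto-terms plus an
# oscillating cross-term midway between them, as discussed around Eq. (9)-(11).
fs, n = 1000.0, 256
t = np.arange(n) / fs
y = np.exp(2j * np.pi * 100 * t) + np.exp(2j * np.pi * 300 * t)
W = discrete_wvd(y)
mid_bin = int((100 + 300) / fs * n)   # (f1+f2)/2 under the 0..fs/2-over-n-bins convention
print("mean |energy| at the cross-term frequency:", np.abs(W[mid_bin, :]).mean())
```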
Rearranged Wavelet Transform
The CWT is a linear transform and therefore does not suffer from the cross-term problem of quadratic TF representations. It can be expressed as [32]: where y_a(t) denotes the analytic form of the received GNSS signal; a is a scale parameter (a ≠ 0) representing the degree of compression; b is a translation parameter determining the location of the wavelet; ψ(t) is the mother wavelet, a function continuous in both the time and frequency domains that is used as a source function to produce the daughter wavelets ψ_a,b(t) = (1/√|a|) ψ*((t − b)/a), i.e., translated and scaled versions of ψ(t); and (*) denotes complex conjugation. Since the integral of the squared amplitude of the CWT is proportional to the energy of the analyzed signal, the squared modulus of the CWT defines the scalogram: the scalogram is the signal energy distribution in time-scale space and represents the time-varying behavior of the analyzed signal in the TF plane; a is the stretching parameter that adjusts the daughter wavelet's oscillation frequency f = f_0/a, where f_0 is the center frequency of the mother wavelet ψ(t). In this paper, the Morlet wavelet is chosen to extract the linear frequency modulation feature of the sweep interfering signal. The Morlet wavelet kernel is given as [33,34]: where t is time and ω_0 denotes the angular frequency of the mother wavelet. Substituting Equation (15) into Equation (13) yields the Morlet wavelet transform: where y_a(s) is the input analytic signal, a is the scale factor and t the shift factor. Figure 2a shows the wavelet scalogram of the noisy LFM signal, and Figure 2b the corresponding contour plot. In Figure 2, TF energy peaks are observed without cross-terms and roughly trace the frequency modulation law of the LFM signal in AWGN; however, Figure 2b shows that the wavelet scalogram has very poor TF localization, i.e., unsatisfactory TF resolution for LFM signal detection. Since rearrangement (reassignment) can improve the TF energy aggregation property in signal detection applications [11,12], the rearrangement algorithm is adopted in this paper to strengthen and concentrate the TF energy peaks and thereby address the TF resolution limitation of the wavelet scalogram. The TF resolution of the wavelet scalogram varies with the scale parameter a: at high frequencies the scalogram provides high time resolution but low frequency resolution, while at low frequencies it provides high frequency resolution but low time resolution; overall it is bounded by the Heisenberg uncertainty principle. The scalogram SC_y(t, a; ψ) represents the average signal energy density within a local area geometrically centered at the point (t, f) in the TF plane. The TF energy distribution within this local area is usually not geometrically symmetric, which degrades the TF aggregation of the energy distribution; it is therefore not appropriate to assign the average energy density to the geometric center of the local area [11,12]. One way to solve this problem is to reallocate the average energy density to the gravity center (t̂, f̂) of that local domain, which represents the local TF energy distribution more faithfully [11,12]. When this rearrangement operation is applied to the wavelet scalogram, the energy at any point (t, f) of the TF plane is moved to the new point (t̂, f̂), the gravity center of the signal energy distribution around the original point (t, f), thereby enhancing the TF energy localization.
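Before the reassignment is formalized in Equation (17), the following sketch shows one way the Morlet scalogram of Equations (13)-(16) could be computed by direct convolution. The scale-to-frequency mapping and the truncation of the wavelet support are illustrative choices, and `simulate_received_signal` refers to the earlier sketch; this is not the authors' implementation.

```python
import numpy as np

def morlet_scalogram(x, fs, freqs, w0=6.0):
    """Scalogram |CWT|^2 of Eq. (13)-(16) with a Morlet mother wavelet, computed by
    time-domain convolution. The scale-to-frequency map a = w0/(2*pi*f) and the
    truncation of the wavelet support are illustrative choices."""
    x = np.asarray(x, dtype=complex)
    scalogram = np.zeros((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        a = w0 / (2 * np.pi * f)                      # scale whose wavelet peaks at f
        half = int(4 * a * fs)                        # truncate at about four wavelet widths
        tt = np.arange(-half, half + 1) / fs
        # kernel h(u) = (1/sqrt(a)) * psi*(-u/a) for the Morlet wavelet of Eq. (15)
        kern = np.exp(1j * w0 * tt / a - 0.5 * (tt / a) ** 2) / np.sqrt(a)
        cwt_row = np.convolve(x, kern, mode="same") / fs
        scalogram[i] = np.abs(cwt_row) ** 2           # Eq. (14): squared modulus
    return scalogram

# Usage sketch on the simulated sweep-interference record from the earlier snippet:
# t, y, y_a = simulate_received_signal(kind="sweep")
# freqs = np.linspace(3.5e6, 5.5e6, 120)
# S = morlet_scalogram(y_a, fs=20e6, freqs=freqs)     # the ridge should follow f0 + k*t
```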
With this relocation rule, the rearranged wavelet scalogram obtained with the Morlet wavelet transform can be written as: (17) where SC_y^(r)(t′, a′; ψ) is the rearranged wavelet scalogram; t̂(y; a, b) and â(y; a, b) are the reassigned values that determine the coordinates of the gravity center of the signal's energy distribution; and δ(·) denotes the Dirac delta function. The rearrangement operator is given as: where τψ and Dψ denote multiplication and differentiation by the running variable, respectively, i.e., τψ(t) = t·ψ(t) and Dψ(t) = dψ(t)/dt; f_0 is the center frequency of the mother wavelet ψ(t); and T_y(t, a; ψ) is the Morlet wavelet transform of the analyzed signal y_a(t). Figure 3a shows the reassigned wavelet scalogram of the noisy LFM signal, and Figure 3b the corresponding contour plot. In Figure 3 the TF energy peaks are squeezed and concentrated into a linear region of the TF plane, showing improved TF aggregation, so the modulation law of the LFM signal can be roughly characterized.
Hough Transform
The Hough transform can be used to detect the line representing the frequency modulation law of a given signal in the TF image space. Its principle is illustrated in Figure 4, where the center of a rectangular TF image is taken as the origin O of a Cartesian coordinate system XOY and the TF image size is L × H. If the position of a pixel in the TF image is denoted (t, f), the corresponding coordinate relations follow from the geometry of Figure 4. For a straight line l in the TF space, the normal form of the Hough transform is [15]: x cos θ + y sin θ = ρ (20) where ρ is the distance from the origin to the closest point on the straight line l, and θ is the angle between the X axis and the normal of l through the origin O. For a single point in the TF image, all straight lines passing through that point correspond to a unique sinusoidal curve in the ρ-θ plane. Consequently, a set of two or more points lying on a straight line in the XOY system produces sinusoidal curves that all pass through the particular point (ρ, θ) corresponding to that line in the ρ-θ space. In this way, the problem of detecting collinear points along a line in the image space is converted into the problem of searching for concurrent sinusoidal curves in the parameter space.
Wigner-Hough Transform
When the WVD is used to detect an LFM interfering signal (sweep interference) and the undesired cross-terms are ignored, the auto-term's energy is concentrated on a straight line representing the signal's frequency modulation law in the TF plane [15]. If the Hough transform is combined with the WVD, the problem of detecting this global straight line in the TF plane is converted into that of finding the corresponding energy peak concentrated at a particular point in the ρ-θ plane [15]. The resulting Wigner-Hough transform (WHT) is defined as [15,27]: where y_a(t) is the analytic form of the input signal y(t), y_a*(t) denotes its complex conjugate, f_0 is the initial frequency and k is the chirp rate.
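The Hough accumulation described above can be sketched as follows. The energy-weighted voting and the image-centred coordinate convention of Figure 4 are assumptions of this illustration, not a prescription from the paper.

```python
import numpy as np

def hough_accumulate(tf_image, n_theta=181, n_rho=None):
    """Energy-weighted Hough accumulation over a TF image (Eq. (20) geometry).
    Coordinates are centred on the image middle, as in Figure 4; weighting each
    vote by the pixel energy is an illustrative choice."""
    n_f, n_t = tf_image.shape
    diag = np.hypot(n_f, n_t) / 2.0
    if n_rho is None:
        n_rho = 2 * int(diag) + 1
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta))

    f_idx, t_idx = np.nonzero(tf_image > 0)      # pixels carrying TF energy
    x = t_idx - n_t / 2.0                        # time coordinate, centred
    y = f_idx - n_f / 2.0                        # frequency coordinate, centred
    w = tf_image[f_idx, t_idx]
    for j, th in enumerate(thetas):
        rho = x * np.cos(th) + y * np.sin(th)    # Eq. (20) for every pixel
        r_idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        np.add.at(acc, (r_idx, j), w)            # each pixel votes along its sinusoid
    return acc, rhos, thetas

def find_peak(acc, rhos, thetas):
    """Return the (rho, theta) of the strongest accumulator cell."""
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return rhos[i], thetas[j]
```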
In Figure 5, the WHT of the noisy LFM signal is presented in the ρ-θ plane. An energy peak denoting the LFM signal is concentrated within a small point area, so the LFM signal can be distinguished from the AWGN component: the problem of detecting the straight line representing the frequency modulation law of the LFM signal in the TF plane has been converted into the easily solved problem of searching for the WHT energy peak position in the ρ-θ space. The characteristic parameters of the LFM signal can then be determined from the Hough transform. In Figure 4, assume there are N_f sampling points on the frequency axis and N_t sampling points on the time axis, satisfying tan β = −cot θ = N_f/N_t. The actual frequency modulation slope of the LFM interference is then given as: where β is the angle between the straight line l and the t axis, β = θ − π/2; ∆f is the frequency resolution, equal to f_s/(2L); ∆t is the time resolution, equal to 1/f_s; and f_s denotes the sampling rate. From the geometric relationship shown in Figure 4, the relationship between the initial frequency f_0 and the polar coordinates (ρ, θ) of the energy peak in the parameter space follows as: When the WHT is used for sweep interference detection in GNSS applications, the collinear points (t_i, f_i) (i = 0, ..., N − 1, where N, the number of points on the straight line representing the frequency modulation law of the analyzed signal, equals the number of input signal samples) on the frequency modulation line within the TF image are mapped onto a particular point (ρ, θ) in polar form, whose position is given by the WHT energy peak formed by N concurrent sinusoidal curves in the ρ-θ parameter space; in this way the characteristic parameters of the LFM signal, such as the chirp rate k and the initial frequency f_0, can be estimated.
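A hedged sketch of how a detected (ρ, θ) peak might be mapped back to the chirp parameters, in the spirit of Equations (22)-(23), is given below. The pixel-grid conventions (frequency axis spanning 0..f_s/2 uniformly over the image rows, image centred as in Figure 4) are assumptions; the exact mapping depends on how the TF image is constructed.

```python
import numpy as np

def peak_to_chirp_parameters(rho, theta, fs, n_t, n_f):
    """Map a detected (rho, theta) peak back to an initial frequency and chirp rate,
    following the spirit of Eq. (22)-(23). The pixel conventions below are assumptions
    of this sketch, not the paper's exact definitions."""
    d_f = (fs / 2.0) / n_f             # Hz per frequency pixel (the paper's f_s/(2L))
    d_t = 1.0 / fs                     # seconds per time pixel
    beta = theta - np.pi / 2.0         # angle between the detected line and the time axis
    k_hat = np.tan(beta) * d_f / d_t   # chirp rate estimate, Hz/s

    # frequency where the line crosses the first time sample (x = -n_t/2 in centred pixels);
    # assumes sin(theta) != 0, i.e. the detected line is not parallel to the frequency axis
    x0 = -n_t / 2.0
    y0 = (rho - x0 * np.cos(theta)) / np.sin(theta)
    f0_hat = fs / 4.0 + y0 * d_f       # the image centre corresponds to fs/4 here
    return f0_hat, k_hat
```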
Rearranged Wavelet-Hough Transform
The straight line denoting the frequency modulation law of the sweep interfering signal, as produced by the rearranged wavelet transform, is usually described with Cartesian coordinates (x, y) (equivalent to (t, f)) in the image space; through the Hough transform it can be converted into the polar coordinates (ρ, θ) of the parameter space, written as: where ρ is the normal distance between the straight line and the origin O of the XOY coordinate system, and θ denotes the angle between the normal and the X axis, θ ∈ [0, π]. In moving from Cartesian to polar coordinates, the point-line duality is converted by the Hough transform into a point-sinusoidal-curve duality; the detection of straight lines in the TF image space thus becomes the detection of intersection points of sinusoidal curves in the (ρ, θ) parameter space. Therefore, to improve GNSS interference detection performance, the rearranged wavelet scalogram can be combined with the Hough transform to develop a novel RWHT, determined as: where SC_y^(r) is the reassigned wavelet scalogram in the TF domain and δ(·) denotes the Dirac delta function. Equation (25) can also be rewritten in polar-coordinate form [15], provided as: Equation (26) shows that, through the Hough transform, the straight line denoting the sweep interfering signal's frequency modulation law, as represented by the rearranged wavelet scalogram in the TF plane, is mapped to concurrent sinusoidal curves concentrated at a particular point, in polar coordinates, within the (ρ, θ) parameter plane; this is the core idea of the proposed RWHT algorithm for GNSS interference detection applications. When dealing with sweep interference present in the received GNSS signal, the concurrent sinusoidal curves representing the collinear points on the sweep interference frequency modulation line intersect at the same point within the (ρ, θ) parameter space, so an energy peak forms in this point domain where the RWHT energy distributions are greatly strengthened and concentrated. This RWHT energy distribution feature can be used to differentiate the sweep interference from the AWGN and useful GNSS signal components in the parameter space. By searching for the RWHT energy peak position, characteristic parameters of the sweep interference such as the initial frequency and chirp rate can be effectively estimated. The proposed RWHT-based GNSS interference detection method is illustrated in Figure 6. The received GNSS signal in analytic form is wavelet-transformed and the scalogram is calculated, which is then redistributed by the rearrangement operation to obtain the rearranged scalogram. The rearranged scalogram presents improved TF aggregation, but under low jammer-to-noise ratio (JNR) conditions the TF energy distribution along the interference line may become discontinuous in the TF plane; to deal with this discontinuity, the Hough transform is introduced and combined with the rearranged scalogram, yielding the RWHT. All the points on the straight line represented by the rearranged scalogram in the TF image space correspond to the RWHT energy peak concentrated at a determined point (ρ, θ) within the parameter space. When the RWHT is used to detect sweep interference present in the received GNSS signal, a sharp energy peak appears in a specific point domain within the parameter space, carrying all the information about the frequency modulation law of the linear chirp interference. By searching for the local RWHT peak and determining the position of this point where the RWHT peak energy concentrates, the sweep interfering signal can be effectively detected, and the initial frequency and chirp rate of the linear chirp interference can then be precisely estimated from the peak detection process.
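The detection flow of Figure 6 could be prototyped roughly as below, reusing the helper sketches given earlier (`hough_accumulate`, `find_peak`, `peak_to_chirp_parameters`). For brevity, the rearrangement is approximated by a frequency-only relocation based on the CWT phase derivative; the full method of Equations (17)-(18) also relocates in time via the τψ and Dψ wavelets, so this is a simplified stand-in, not the authors' implementation.

```python
import numpy as np

def reassign_scalogram_in_frequency(x, fs, freqs, w0=6.0):
    """Frequency-reassigned Morlet scalogram: a simplified stand-in for Eq. (17)-(18)
    that relocates energy along the frequency axis only, using the phase derivative of
    the CWT as the local instantaneous-frequency estimate."""
    rows = []
    for f in freqs:
        a = w0 / (2 * np.pi * f)
        half = int(4 * a * fs)
        tt = np.arange(-half, half + 1) / fs
        kern = np.exp(1j * w0 * tt / a - 0.5 * (tt / a) ** 2) / np.sqrt(a)
        rows.append(np.convolve(x, kern, mode="same") / fs)
    W = np.array(rows)                                   # (n_freqs, n_samples), complex
    energy = np.abs(W) ** 2
    # local instantaneous frequency from the phase increment between adjacent samples
    inst_f = fs / (2 * np.pi) * np.angle(W[:, 1:] * np.conj(W[:, :-1]))
    reassigned = np.zeros_like(energy)
    for i in range(len(freqs)):
        target = np.clip(np.searchsorted(freqs, inst_f[i]), 0, len(freqs) - 1)
        np.add.at(reassigned[:, 1:], (target, np.arange(energy.shape[1] - 1)), energy[i, 1:])
    return reassigned

def rwht_detect(y_a, fs, freqs, threshold_quantile=0.99):
    """RWHT pipeline of Figure 6: reassigned scalogram -> threshold -> Hough -> peak ->
    chirp-parameter estimate. Assumes `freqs` is a uniform grid over (0, fs/2], so that
    the pixel conventions of peak_to_chirp_parameters apply."""
    S = reassign_scalogram_in_frequency(y_a, fs, freqs)
    S = np.where(S >= np.quantile(S, threshold_quantile), S, 0.0)   # keep the ridge only
    acc, rhos, thetas = hough_accumulate(S)
    rho, theta = find_peak(acc, rhos, thetas)
    return peak_to_chirp_parameters(rho, theta, fs, n_t=S.shape[1], n_f=S.shape[0])
```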
Results
In this section, the performance of RWHT-based GNSS interference detection is analyzed in comparison with traditional interference detection techniques such as the WVD and the WHT. The scheme of the GNSS interference detection test is illustrated in Figure 7, and the experimental setup for collecting the GPS L1-C/A signal corrupted by interference is depicted in Figure 8. A computer-controlled jammer was used to produce the interference, which was added to the GPS L1-C/A samples collected by the GNSS software receiver, whose front end was connected through a cable to the receiver antenna placed on the building roof. The combined signal was captured using a GNSS signal collector and sent over a USB cable to the GNSS software receiver implemented on another computer. To demonstrate the validity and effectiveness of the proposed RWHT-based interference detection method for GNSS receivers, several tests were performed, characterized by the parameters listed in Table 1. In the experiment, the carrier-to-noise power density ratio C/N_0 of the received GPS L1-C/A signal was 40 dB-Hz, the intermediate frequency f_IF was set to 39.96 MHz, and the sampling frequency f_s was set to 20.47 MHz in compliance with the band-pass sampling theorem. The collected GPS L1-C/A signal in zero-mean AWGN was jammed either by a constant-amplitude LFM interference (linear chirp) or by CW interference, chosen as the typical test bench in GNSS interference detection. For the linear chirp interference, the initial frequency f_0 was set to 4.9 MHz, the sweep period t_j to 0.1 ms, and the chirp rate k to −1.0 × 10^4 MHz/s. Figure 9a shows the WVD of the GPS L1-C/A signal in the presence of sweep interference, and Figure 9b the corresponding contour plot. Although a straight line denoting the frequency modulation law of the sweep interference is visible in the TF plane, serious cross-terms are also observed due to the bilinear nature of the WVD; these inevitably introduce error into the estimation of the instantaneous frequency of the sweep interfering signal and complicate interference detection, so the GNSS interference characteristic parameters cannot be correctly extracted with the WVD. To mitigate the cross-term problem of the WVD, the rearranged wavelet scalogram of the interfered GPS L1-C/A signal is shown in Figure 10. The cross-terms are effectively suppressed in the TF plane and the TF energy peaks are distributed along the straight line denoting the frequency modulation law of the sweep interference present in the received GPS L1-C/A signal; the interference detection ability of the rearranged wavelet transform therefore outperforms the WVD, as verified in Figure 10. However, due to the rearrangement operation, the TF energy distribution along the frequency-modulation line of the sweep interference becomes discontinuous in places, which may introduce error into the instantaneous frequency estimation of the interfering signal. The Hough transform can be used to mitigate the cross-terms present in the TF energy distributions. Figure 11a shows the WHT of the interfered GNSS signal for a JNR of −8 dB, and Figure 11b the corresponding contour plot. In Figure 11a, a WHT peak denoting the sweep interference occurs in the ρ-θ plane, but the peak position is relatively fuzzy in the parameter space, making it difficult to determine the peak position accurately because of the undesired cross-terms, as verified in Figure 11b.
Errors may therefore be introduced into the estimation of the characteristic parameters of the sweep interfering signal under low JNR conditions. When the JNR of the sweep interference is set to −2 dB, the WHT of the interfered GNSS signal is presented in Figure 11c and the corresponding contour in Figure 11d. The concurrent sinusoidal curves representing the collinear points on the straight line of the frequency modulation law of the sweep interference intersect within a specific point area of the ρ-θ plane, where the energy distributions accumulate; once the WHT energy peak position is roughly determined in the parameter space, the GNSS interference feature can be extracted. To further improve the interference detection performance for GNSS receivers, the RWHT of the interfered GPS L1-C/A signal is provided in Figure 12a for a JNR of −8 dB, with its contour in Figure 12b. A distinct peak occurs in the parameter space: the RWHT energy contributed by the sweep interference present in the received GPS L1-C/A signal is accumulated and strengthened in the particular point area where the concurrent sinusoidal curves intersect, forming a strong RWHT energy peak that represents the collinear points along the straight line of the reassigned wavelet scalogram of the interfered GPS L1-C/A signal in the TF plane; in this way, the discontinuity problem of the rearranged wavelet transform in the TF space is effectively resolved by the proposed RWHT-based interference detection method in the ρ-θ plane. Moreover, in contrast to the WHT result shown in Figure 11b, the RWHT peak position can be determined in the ρ-θ plane even at a JNR of −8 dB, demonstrating the improved interference detection ability of the developed RWHT technique, as verified in Figure 12b. For a JNR of −2 dB, the RWHT of the interfered GNSS signal is provided in Figure 12c and its contour in Figure 12d. A very sharp peak occurs in a point domain within the ρ-θ space, where the RWHT energy distributions are highly concentrated at the intersection point. Meanwhile, outside the intersection point domain of the concurrent sinusoidal curves in the ρ-θ plane, the RWHT energy contributed by the white Gaussian noise and the useful GPS L1-C/A signal components is reduced significantly relative to the energy peak located at the intersection point, and the energy distributions of these components become much sparser than in the WHT case; moreover, the cross-terms are effectively suppressed by the RWHT in comparison with the WHT approach. Therefore, the sweep interfering signal can be clearly distinguished from the white Gaussian noise and useful GPS L1-C/A signal terms, and its characteristic features can be easily extracted.
Furthermore, compared with the JNR = −8 dB case, the size of the RWHT energy peak point domain where the concurrent sinusoidal curves intersect is much reduced, which means that considerably better estimation precision for the sweep interference characteristic parameters can be obtained, since the RWHT energy peak position in the ρ-θ space can be determined more accurately. The estimated initial frequency of the sweep interfering signal is 4.9207 MHz, and the estimated chirp rate is −1.0051 × 10^4 MHz/s. To comprehensively evaluate the GNSS detection performance of the proposed RWHT method, the root mean square error (RMSE) of the initial frequency and chirp rate estimates of the sweep interference was analyzed in comparison with the WHT approach. The RMSE results for the estimated initial frequency and chirp rate of the sweep interference present in the received GPS L1-C/A signal, under different JNR scenarios, are provided in Figure 13. The RMSE values of both the RWHT and the WHT remain almost unchanged as the JNR of the sweep interfering signal varies, indicating that both methods are stable and robust in interference detection and feature parameter estimation. In detail, in Figure 13a, the RMSE of the GNSS sweep interference initial frequency estimate with the WHT stays at about the 3.3 × 10^−4 level, while with the RWHT it stays at about the 3.2 × 10^−5 level, i.e., roughly a 90.3% improvement in initial frequency estimation precision over the WHT approach. Similarly, in Figure 13b, the RMSE of the chirp rate estimate obtained with the WHT remains near 3.9 × 10^−4, while that achieved with the RWHT remains near 2.7 × 10^−4, a 30.8% improvement in chirp rate estimation precision over the WHT approach. These RMSE results show that, for the estimation of both the initial frequency and the chirp rate of the sweep interference present in the received GPS L1-C/A signal, the RWHT technique provides much improved interference detection performance for GNSS receivers.
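Curves such as those in Figure 13 could, in principle, be reproduced with a Monte-Carlo harness of the following kind, built on the simulation and detection sketches above. The error normalizations, grid choices, and trial counts are assumptions of this sketch, not the paper's procedure.

```python
import numpy as np

def rmse_vs_jnr(jnr_values_db, n_trials=50, fs=20e6, f0_true=4.9e6, k_true=-1e10):
    """Monte-Carlo harness sketching how RMSE-vs-JNR curves such as Figure 13 could be
    produced with the earlier helpers (simulate_received_signal, rwht_detect). The RMSEs
    are reported normalized by fs and by |k_true|, which is an assumption here."""
    freqs = np.linspace(fs / 2 / 256, fs / 2, 256)   # uniform grid, consistent with the
                                                     # pixel conventions assumed earlier
    results = {}
    for jnr_db in jnr_values_db:
        f0_err, k_err = [], []
        for _ in range(n_trials):
            _, _, y_a = simulate_received_signal(fs=fs, T=1e-4, jnr_db=jnr_db,
                                                 kind="sweep", f0=f0_true, k=k_true)
            f0_hat, k_hat = rwht_detect(y_a, fs, freqs)
            f0_err.append((f0_hat - f0_true) / fs)
            k_err.append((k_hat - k_true) / abs(k_true))
        results[jnr_db] = (np.sqrt(np.mean(np.square(f0_err))),
                           np.sqrt(np.mean(np.square(k_err))))
    return results

# e.g. rmse_vs_jnr([-8, -6, -4, -2]) -> {jnr: (normalized RMSE of f0, normalized RMSE of k)}
```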
Moreover, the proposed RWHT method is also expected to be valid for the detection of CW interference, since CW interference can be regarded as a special case of an LFM interfering signal with a fixed carrier center frequency [13]. In the experiment, GNSS CW interference characterized by a JNR of −2 dB is considered in the disturbing scenario. Figure 14a shows the WVD of the GPS L1-C/A signal in the presence of CW interference, and Figure 14b the corresponding contour plot. In Figure 14, a horizontal straight line at a fixed frequency can be faintly observed in the TF plane, denoting the TF characteristic of the CW interference; however, very severe cross-terms are present at the same time, which seriously hinder a correct interpretation of the CW interference in the received GPS L1-C/A signal. Figure 15a shows the proposed RWHT of the GPS L1-C/A signal in the presence of CW interference, and Figure 15b its contour. In Figure 15, a very sharp vertical peak of the RWHT energy distribution is observed in a particular point domain within the ρ-θ parameter space, and the RWHT energy is highly concentrated at this point, whose position can be used to estimate the characteristic parameters of the CW interference; outside this point domain, the RWHT energy distributions corresponding to the white Gaussian noise and the useful GPS L1-C/A signal components are suppressed to a negligible level compared with the RWHT energy peak. The proposed RWHT method removes the cross-term artifacts of bilinear TF distributions such as the WVD while presenting satisfactory energy aggregation in GNSS interference detection. In summary, the proposed RWHT method is effective not only for sweep interference detection but also for CW interference detection in GNSS receivers.
Discussion
The GNSS sweep interference detection results show that the proposed RWHT method detects GNSS sweep (linear chirp) interference effectively, since it successfully deals with the cross-term problem and provides much improved energy localization. The characteristic parameters of the sweep interfering signal, such as the initial frequency and chirp rate, have been estimated, and the corresponding quantitative RMSE results have been compared between the proposed RWHT method and the WHT approach. The RWHT method shows much improved precision in estimating the characteristic parameters of the sweep interference present in the received GPS L1-C/A signal. Furthermore, the proposed RWHT method has been experimentally verified to be valid and effective for the detection of CW interference in GNSS receivers; an RMSE analysis of the RWHT method for CW interference in GNSS applications will be carried out in future research. In addition, since both Beidou and GPS signals use spread-spectrum communication and code division multiple access (CDMA) technologies [1], the proposed RWHT method is expected to be effective in detecting interference present in Beidou signals. The rationale behind the proposed RWHT method is the large difference between the energy distributions contributed by the useful GNSS signal and by the interfering signal (whether sweep or CW interference) when they are evaluated in the ρ-θ parameter space. This will be further analyzed in our future research.
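As a small side note to the discussion of the two interference types, the detected peak position itself can be used to tell them apart: under the pixel conventions assumed in the earlier sketches, a CW maps to θ close to π/2 (near-zero chirp rate), whereas a sweep does not. The tolerance used below is purely illustrative.

```python
import numpy as np

def classify_interference(rho, theta, fs, n_t, n_f, cw_slope_tol_hz_per_s=1e8):
    """Classify a detected RWHT peak as CW-like or sweep-like from its (rho, theta)
    position, reusing peak_to_chirp_parameters and its assumed pixel conventions.
    A CW corresponds to a horizontal TF line, i.e. theta near pi/2 and chirp rate
    near zero; the tolerance below is an illustrative choice."""
    f0_hat, k_hat = peak_to_chirp_parameters(rho, theta, fs, n_t, n_f)
    if abs(k_hat) < cw_slope_tol_hz_per_s:
        return "CW", f0_hat, 0.0
    return "sweep", f0_hat, k_hat
```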
Conclusions
In this paper, a novel GNSS interference detection method based on the rearranged wavelet-Hough transform has been proposed. To prove the validity and effectiveness of the developed technique, a comprehensive interference detection performance evaluation has been carried out against existing TF analysis approaches, namely the WVD, the WHT, and the rearranged wavelet scalogram. The interference detection tests were performed on real GPS L1-C/A signals in the presence of sweep interference or CW interference to verify the theoretical analyses. The experimental results show that the traditional WVD approach yields unsatisfactory interference detection performance because it suffers from the serious cross-term problem; the rearranged scalogram can partially suppress cross-terms, but the rearrangement operation introduces discontinuities in the TF energy distribution that degrade the GNSS interference detection performance; when the WVD is combined with the Hough transform, the resulting WHT can partially alleviate the cross-terms present in the TF plane. When the Hough transform is instead combined with the rearranged scalogram, the developed RWHT method effectively suppresses the undesired cross-terms, since the RWHT energy distributions are greatly strengthened and concentrated in a limited point domain within the parameter space, while the rearrangement operation simultaneously provides satisfactory energy aggregation; this combination can be considered the main novelty of this paper. The proposed RWHT method therefore provides a significant performance improvement in GNSS interference detection compared with existing TF analysis approaches. The RWHT-based interference detection method has proven very effective at removing the cross-terms of the traditional bilinear TF distributions while improving the TF energy aggregation property, which is critically important for enhancing GNSS interference detection performance. Based on these technical improvements, the RWHT-based GNSS interference detection method outperforms the WVD, rearranged wavelet scalogram, and WHT techniques. In summary, by using the developed RWHT algorithm, the proposed GNSS interference detection method is effective not only for GNSS sweep interference detection but also for GNSS CW interference, and it is very promising for anti-interference design in GNSS receivers working in difficult and challenging interfering environments. Acknowledgments: The author would like to thank the editor and anonymous reviewers for their thoughtful comments and constructive suggestions. Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2022-07-14T18:10:36.413Z
2022-07-01T00:00:00.000
250505477
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1999-4915/14/7/1516/pdf?version=1657603935", "pdf_hash": "90debe0c03796f85804d76a1e0bbe1979d004480", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44839", "s2fieldsofstudy": [ "Biology" ], "sha1": "8b4705935177a2d6c80e9299e85b186ab71666e8", "year": 2022 }
pes2o/s2orc
Monitoring Urban Zoonotic Virus Activity: Are City Rats a Promising Surveillance Tool for Emerging Viruses? Urban environments represent unique ecosystems where dense human populations may come into contact with wildlife species, some of which are established or potential reservoirs for zoonotic pathogens that cause human diseases. Finding practical ways to monitor the presence and/or abundance of zoonotic pathogens is important to estimate the risk of spillover to humans in cities. As brown rats (Rattus norvegicus) are ubiquitous in urban habitats, and are hosts of several zoonotic viruses, we conducted longitudinal sampling of brown rats in Vienna, Austria, a large population center in Central Europe. We investigated rat tissues for the presence of several zoonotic viruses, including flaviviruses, hantaviruses, coronaviruses, poxviruses, hepatitis E virus, encephalomyocarditis virus, and influenza A virus. Although we found no evidence of active infections (all were negative for viral nucleic acids) among 96 rats captured between 2016 and 2018, our study supports the findings of others, suggesting that monitoring urban rats may be an efficient way to estimate the activity of zoonotic viruses in urban environments. Introduction During the last decades, spillover of viruses from wildlife hosts have caused highimpact diseases in humans, e.g., hemorrhagic fevers caused by hantaviruses or arenaviruses, two epidemics of severe acute respiratory syndrome related to zoonotic-origin coronaviruses (SARS-CoV), Ebola virus disease, and influenza A. Anthropogenic changes, including increasing human population density, increasing international travel, land-use change, and urban sprawl, appear to be drivers in the spillover and spread of zoonotic viruses to humans [1,2]. In particular, cities are unique ecosystems where dense human populations and their companion animals live in relatively close proximity with wildlife species. Given that the majority (60.3%) of emerging infectious diseases are caused by zoonotic pathogens, of which 71.8% originate from wildlife [3], knowledge of zoonotic pathogens carried by wildlife hosts is critical to understanding pathogen prevalence in the environment, geographic distribution, and risk of spillover to humans. Brown rats (Rattus norvegicus) are described as "urban exploiters" [4] in that they proliferate in urban settings where they live in close proximity with humans [5]. As urban brown rats are hosts of several zoonotic pathogens [6,7], they may act as reservoirs of these pathogens to humans and livestock [8,9]. However, information on the viruses carried by urban wild rats that could pose a zoonotic risk to human health is scarce. The objective of this study was to conduct a cross-sectional survey to assess the prevalence of some viruses of zoonotic importance in wild brown rats within the city center of Vienna, Austria. The studied sites are highly frequented by humans and were suspected to present a high rat density. We focused our investigation on a limited number of viruses, previously identified in rodents at different locations worldwide [9][10][11] based on three criteria: The virus was previously reported in urban brown rats, e.g., Seoul orthohantavirus (SEOV) [12,13], hepatitis E virus (Orthohepevirus A, HEV) [14][15][16], influenza A virus (IAV) [17], and coronaviruses [18][19][20]; 2. 
These viruses are transmitted from rat to rat mainly via direct or indirect contact with excreta (e.g., hantaviruses, encephalomyocarditis virus) or saliva (e.g., hantaviruses, coronaviruses), perhaps during intraspecific aggression (e.g., hantaviruses) [34]. Some viruses are considered non-seasonal in rats (e.g., hantaviruses [11,35]), while others have demonstrated seasonal variations (e.g., for seasonal IAV [17]) in rat populations. Because urban brown rats are synanthropic and may come into contact with food supplies [5,36,37], rat-to-human transmission is likely to occur via direct or indirect contact with rat excreta (e.g., hantaviruses, HEV, encephalomyocarditis virus) or saliva (e.g., hantaviruses), via the bite of competent vector (e.g., WNV, USUV), or direct inoculation via close contact with infected animals (poxviruses). In general, much is unknown about the role of urban rats in the transmission of zoonotic viruses to humans, and studies such as this one may shed light on poorly understood aspects of viral zoonoses (e.g., precise transmission routes, seasonality, etc) [1,38,39]. Ethical Statement This study followed institutional and national standards for the care and use of animals in research. It was approved by the institutional ethics and animal welfare committee and the national authority (GZ 68.205/0196-WF/V/3b/2016). Study Areas and Sampling Methods Rattus norvegicus were trapped between 12 September 2016 and 13 November 2018 in three sites highly frequented by humans in the city center of Vienna, Austria: (i) at a promenade along the Danube canal (mean coordinates of the trapped rats in decimal degrees: 16.365 N, 48.226 E); (ii) at Karlsplatz (16.363 N, 48.200 E), one of the tourist attractions in the city; and (iii) at Schwedenplatz (16.375 N, 48.212 E), a cruise ship port on the Danube river. These sites were chosen as rats could be observed during daytime, suggesting that the rats were abundant and that these locations may represent critical interfaces for virus transmission between rats and humans. Rats were captured live in spring and autumn season (we avoided too cold/warm temperatures due to ethical and animal welfare considerations) using Manufrance live-traps (280 × 100 × 100 mm). Traps were set between 17.00 and 19.30 and retrieved the following morning between 6.00 and 8.00 (more details on trapping can be found in [42,43]). Live captured animals were transferred to a pathology laboratory where they were anesthetised in an induction chamber using 5% isoflurane before euthanasia via an intra-peritoneal barbiturate overdose. Rats were identified to the species level based on morphological characters. For each animal, morphological data were recorded. We chose to sample and analyze the lung tissue because it is the preferential organ for detection of hantaviruses [44], EMCV [45], and influenza A viruses [17]. It is also the most highly vascularized organs in rats [46], enabling us to potentially monitor blood-circulating viruses even if they do not have a lung tropism. During necropsy, lung tissue was collected aseptically and placed into RNAlater™ (ThermoFisher, Waltham, MA, USA). A ten mm tail tip was sampled for molecular barcoding purposes. All samples were maintained at −80 • C until RNA or DNA extraction. Barcoding To confirm the morphological identification of the Rattus species, we followed the DNA barcoding protocol as described in [47], consisting in the amplification and sequencing of a 585-bp fragment of the mitochondrial DNA (mtDNA) D-loop. 
Preparation of Tissue Lung tissue was removed from RNAlater, and an approximately 1 g section was placed into 400 µL ice-cold sterile phosphate-buffered saline (PBS) in a tube with 4 copper-coated steel beads. The tissue was homogenized on a TissueLyser (Qiagen, Hilden, Germany) for 3 min at 30 Hz and then centrifuged at 8000× g for 4 min at 4 °C. Nucleic acids were extracted from 200 µL of cleared homogenate with a commercial kit (ZymoResearch, Irvine, CA, USA). Nucleic acid extraction was checked by spectrophotometry (NanoVue, Biochrom GmbH, Berlin, Germany), which confirmed that extraction was successful. Detection of Viral Nucleic Acids Virus nucleic acids were detected by following various previously published PCR protocols using 2.5-5 µL of nucleic acid template (Table 1). Highly sensitive real-time reverse transcription PCR (RT-qPCR) to detect RNA viruses was preferred when protocols were available, using Luna® One-step RT-qPCR mix (New England Biolabs, Ipswich, MA, USA) on an Applied Biosystems 7500 light cycler with published temperature cycling programs (Table 1). Conventional RT-PCR to detect RNA viruses was performed using a one-step RT-PCR mix (OneTaq®, New England Biolabs) followed by capillary gel electrophoresis (QIAxcel, Qiagen, Hilden, Germany) to visualize amplicons. Similarly, conventional PCR was used to detect poxviruses using GoTaq G2 mix (Promega, Madison, WI, USA) followed by capillary gel electrophoresis to visualize amplicons. Positive controls were used for flaviviruses (a WNV lineage 4c isolate [48] and a USUV cell culture isolate "939/01" [24]), as these are well-characterized in our laboratory and potential false positives could be identified by sequencing. Otherwise, no positive controls were used to reduce the possibility of false-positive results. Instead, we used common diagnostic assays to detect some viruses (coronaviruses, hantaviruses, influenza A virus, and chordopox viruses) or used multiple tests (one RT-qPCR and one RT-PCR for hepatitis E virus; two RT-qPCR and one nested RT-PCR for EMCV) to reduce the possibility of false negatives. The samples were screened for the following viruses: CoVs [49,50], flaviviruses [51] including specific assays to detect WNV [24] or USUV [52], Old World hantaviruses [53], two assays to detect HEV [54,55], IAV [56], three assays to detect EMCV [57][58][59], and poxviruses [60] (Table 1). Detection of Viruses No virus nucleic acids were detected in the rat lung tissue samples. For viruses which we presumed were highly likely to be detected, multiple assays were performed and were all negative: hepatitis E virus (one RT-qPCR and one RT-PCR) and EMCV (two RT-qPCR and one nested RT-PCR) (Table 1). Discussion Among the zoonotic viruses investigated here, two, namely EMCV and HEV, were the most likely candidates for detection in urban brown rats. Rodents are reservoirs and vectors of EMCV, and sporadic outbreaks in domestic animals have been linked to rodent exposure [62]. In Italy, outbreaks within zoos or on farms have occasionally been associated with either EMCV-positive rodents [58] or increased rodent abundance [61]. EMCV is present in Austria and neighboring countries, and exposure has been detected in domestic pigs, where the virus causes little to no clinical pathology [63,64], and in humans [65,66]. Specifically, human exposure was linked to hunters and zoo workers, so contact with wild game animals and captive wildlife may be risk factors for exposure in Austria.
EMCV can be detected up to 22 dpi in the lungs of experimentally inoculated laboratory rats with a high rate of transmission between rats (R 0 >> 1) [45], nonetheless the infection is ultimately transient. Therefore, we cannot rule out a low level of EMCV infection in the urban rat population; however, a larger sample size may be necessary to detect active virus infections. We also expected to detect HEV in our survey of urban brown rats, particularly as HEV has recently been detected in urban brown rats captured in Vienna [25]. Recent virological and serological surveys of Austrian blood donors suggested that approximately 14% of the human population had been exposed to HEV, and 0.01% had active infections [67,68]. HEV (specifically the species Orthohepevirus A) is an emerging viral zoonosis, and in Europe, wild ungulates (wild boar and deer) and domestic pigs are the principal reservoirs [69]. HEV-positive wild R. norvegicus were detected in the cities of Lyon, France (12/81, 15%) [70]; Hamburg, Germany (2/30, 6.7%) [15]; and Vienna, Austria (7/43, 16.2%) [25], although in these studies, the virus was detected in liver [25,70] and feces [15,25]. Studies in Germany have revealed that HEV isolates from brown rats were phylogenetically different from epizootic strains [14] and have since been assigned to Orthohepevirus C species (genotype C1). The so-called "rat HEV" was first detected with a "broad-spectrum RT-PCR" [15], which was not used in our study. In the previous study that detected HEV RNA-positive rats in Vienna, all were Orthohepevirus C (i.e., "rat HEV"), and not the epizootic Orthohepevirus A, which has been isolated from a variety of animals and linked to human disease [25]. Importantly, in that study, no orthohepeviruses were detected by two RT-qPCR assays (one of which was specific to epizootic HEV and was used in our study) but were rather detected by a conventional RT-PCR assay (also used in our study) [25]. Thus, although we did not use the "broad-spectrum" HEV assay [15], Ryll et al. clearly demonstrated that rat HEV is circulating in Vienna and could be detected by the conventional RT-PCR used in our study [25]. This may suggest the distribution of HEV in the urban rat population in Vienna is spatially focal and/or seasonal, as has been observed elsewhere [14]. The fact that we sampled lung tissue (and not liver, feces, or other tissue) should not have been a major limitation, as HEV and EMCV are blood-borne pathogens and therefore present in the large blood-volume of the rodent lung. Overall serological testing combined with molecular testing would provide more information regarding the exposure of urban rats to zoonotic viruses and the zoonotic risk to humans and domestic animals. Serological testing is needed as not all the viruses cause lifelong infection in rodents, and infection is not necessarily concomitant with the presence of antibodies (e.g., [14]). From our results, we can only infer that the rate of active viral infection of urban rats at the three investigated sites is low, but we did not determine their exposure to these viruses at these sites. Among the other viruses that were screened, hantaviruses are blood-borne viruses which are present in Austria but were not detected in our study. In Austria, Puumala orthohantavirus is the most common hantavirus (the reservoir host is the bank vole, Myodes glareolus), causing the mild disease "nephropathia epidemica" in humans [26][27][28]. 
The more pathogenic Dobrava-Belgrade orthohantavirus (the reservoir host is the yellownecked mouse, Apodemus flavicollis) is also present and may cause a more severe form of disease termed hemorrhagic fever with renal syndrome (HFRS) [71,72]. Evidence of cross-species transmission of rodent-borne hantaviruses exists [73][74][75]; although we note that Apodemus sp. were occasionally found in the traps, we cannot infer the likelihood of virus spillover. The detection of SEOV may have been more likely, as black rats (Rattus rattus) and brown rats are known reservoirs. SEOV may also cause HFRS in humans and has a wide geographic distribution due to global trade: SEOV RNA was detected in the lungs of wild urban brown rats in France [76,77], Belgium [44], UK [78], and New York City, USA [9]. Hantavirus reservoirs are typically persistently infected, and therefore it is likely that we would have detected infection if present. Therefore, our data support the hypothesis that human exposure to hantaviruses is unlikely in urban habitats of Vienna, and risk of spillover of endemic hantaviruses to other rodents is also limited. The mosquito-borne flaviviruses WNV and USUV are endemic in Austria, and are known to cause occasional disease in humans, birds, and horses [22,79]. These zoonotic viruses are maintained in an enzootic cycle involving Culex mosquitoes and avian hosts. USUV and WNV have never been reported in wild R. norvegicus, although antibodies to WNV were detected in R. rattus and/or R. norvegicus in Pakistan, Israel, Austria, Tunisia, central Africa, and Madagascar, as well as Maryland, Washington, DC, and Louisiana, USA [80]. USUV has been detected in wild R. rattus and other rodents in Senegal [81]. As 2018 was notable for an extraordinarily high rate of WNV infection in humans, mosquitoes, and horses [82], transmission to other urban mammals during this time appeared probable; however, we detected no USUV or WNV in our samples. We used robust universal flavivirus primers as well as virus specific USUV and WNV primers, which are well documented to amplify many flavivirus species [51]; however, we cannot exclude the fact that rats, like other mammals, are dead-end hosts for these arboviruses with a brief stage of low viremia. Influenza A virus is maintained in aquatic avian cycles and enters human epidemic transmission cycles via domestic swine or can spill-over to humans directly [83]. However, IAV was described in the lung of urban brown rats in Boston, USA (2/163, 1.2%) [17]. Rats are unlikely to be important hosts of IAV. Similarly, to our knowledge, rodent-human transmission of coronaviruses has never been recorded, although human coronavirus OC43 is thought to share a common ancestor with some rodent coronaviruses [84]. Murine coronaviruses (genus Betacoronavirus), including murine hepatitis virus and sialodacryoadenitis virus, can be common in laboratory rodents, including R. norvegicus strains, and many species/strains of related coronaviruses have been characterized from wild rat populations [10,19,20,84,85]. Human infection with zoonotic coronaviruses is well documented (e.g., SARS-CoV-1, SARS-CoV-2, MERS-CoV), but rats have never been conclusively implicated in the transmission cycle [20,86]. Therefore, the lack of IAV and CoV in our samples is not surprising. We recommend that serological tests or virological tests of other tissue samples (e.g., intestine or feces) are investigated in a future surveillance project of urban rats. 
Finally, poxviruses are known to be transmitted from rats to humans and domestic animals. According to previous surveys, cowpox is present in several rodent species in Central Europe with a high seroprevalence [87,88]. While these broad surveys of wild rodents did not include rats, exposure to rats has been suspected in some confirmed human poxvirus infections: cowpox virus was detected in a patient with exposure to a pet rat [89]; poxvirus was isolated from skin lesions of a wild R. norvegicus in Kuwait [90]; and in the Netherlands, a cowpox virus was isolated from wild R. norvegicus [91,92], including a case of direct transmission to a human [92]. Cowpox has occasionally been detected in Austria [31,32], yet cases in individuals from younger generations who did not receive the vaccinia vaccine are increasing. Future surveillance efforts should focus on serosurveillance to determine exposure to cowpox virus in the urban rat population to better ascertain the zoonotic risk of infection. While we are confident in the diagnostic techniques used here, it is clear that detecting active infections requires a larger sample size, particularly when there is a bias towards sampling a young population as expected with live traps (in contrast to kill traps, as were probably used in [25]). Calculating a sample size based on prevalence found in the literature encounters two major issues: (i) prevalence of rat-borne diseases varies spatially at the global and local scale [6,14,93]; (ii) researchers acknowledge certain challenges in studying urban rats, especially as trapping success is generally quite low [94,95], which often precludes reaching the targeted sample size. We acknowledge other potential study limitations, including the variability in sensitivity of the assays to detect a given virus and the investigation of a single tissue (versus virus-specific target tissues). As the likelihood of infection with viruses increases with age for many viruses, and many viral infections follow an acute course, there was a low probability of detecting active virus infections [9,10]. However, zoonotic infections were detected in the examined sample in previous studies [42,43,96], while viral infections have been detected in comparable [9,35,70,76] or lower [15,25,77] sample sizes, demonstrating that a modest sample of the rat population can reveal current viral infections. Therefore, targeted surveillance of the rat population for zoonotic viral pathogens should focus on older rats, or at least attempt to include all age classes. Thus, the principal limitations of our study were the small number of sites investigated, the relatively low sample size (n = 96), and the overall young age of the captured rats. A higher diversity of microenvironments could have revealed variations in prevalence, thereby highlighting favorable environments for virus transmission. A larger sample size would have provided a more accurate picture of the epizootiologic situation. Negative results are rarely published [97]. However, the (sole) publication of positive results greatly limits a realistic perspective of the entire epidemiological situation [98]. In particular, publishing negative results helps to interpret positive results that may be obtained in the future at this location or in similar studies at different locations.
Furthermore, when searching for potential animal reservoirs or vectors of (emerging) infectious diseases, having positive results only (presence data) induces a bias and restricts the perception and the understanding of the global epidemiology of these infectious diseases [99]. Publication bias (toward positive results) makes scientific literature unrepresentative of the research field. It may lead to misconception of the reality, e.g., that prevalence of zoonotic pathogen infections in urban rats is generally high. Absence data are as critical as presence data to the understanding of the eco-epidemiology of zoonotic viruses, mapping of their true geographic coverage, and assessment of contributing factors of virus emergence. Publication of absence data is of high interest for addressing the epidemiological situation at a given instance and enables dating of pathogen emergence or shift in the epidemiological situation. In addition to eradication efforts, monitoring of wild rats for potential zoonotic viruses is potentially a valuable resource to predict future human outbreaks. For example, the detection of Leptospira spp. in rats has proven to be a good spatial predictor of human leptospirosis cases in leptospirosis-endemic urban habitats [100]. Moreover, urban brown rats could theoretically be used as sentinels for fine-scale spatial monitoring of environmental contamination with antimicrobial resistant bacteria [42], lead [101], and other heavy metals [102]. Therefore, within a One Health approach and operationalization, (sero)surveillance of rat populations may prove valuable to assessing the zoonotic risk from viruses, particularly of EMCV or HEV, in the human population but also in domestic animals, in urban and periurban habitats. The use of urban brown rats as sentinels for the active surveillance of some targeted viral pathogens should be further evaluated as a promising surveillance tool in settings where the viruses are known as well as not yet recorded. Conclusions Our transverse study provided absence data about eight zoonotic viruses in wild urban brown rats sampled across three sites within the city center of Vienna, Austria, that are highly frequented by humans. Our findings showed that in these specific sites and at the time of sampling, rats did not constitute a hazard for the zoonotic transmission of the investigated viruses to humans. We recommend the publication of absence data (in any format, short communication, dataset, dedicated website) about rat-borne pathogens for better, unbiased assessments of emergence risk factors. In addition, the authors thank Steve Smith and Gopi Munimanda from the Konrad Lorenz Institute for Ethology, University of Veterinary Medicine Vienna, Austria, for the barcoding of the samples.
v3-fos-license
2015-07-06T21:03:06.000Z
2013-04-15T00:00:00.000
9096662
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://nutritionj.biomedcentral.com/track/pdf/10.1186/1475-2891-12-48", "pdf_hash": "e8b633abff6926616b85dd9b3c511c97ca7ad86d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44842", "s2fieldsofstudy": [ "Medicine" ], "sha1": "ff45be2deb5ba112d06e21d4863baa81bb4cf62a", "year": 2013 }
pes2o/s2orc
Long-term effects of low-fat diets either low or high in protein on cardiovascular and metabolic risk factors: a systematic review and meta-analysis Background Meta-analyses of short-term studies indicate favorable effects of higher protein vs. lower protein diets on health outcomes like adiposity or cardiovascular risk factors, but their long-term effects are unknown. Methods Electronic databases (MEDLINE, EMBASE, Cochrane Trial Register) were searched up to August 2012 with no restriction to language or calendar date. A random-effects meta-analysis was performed using the software package Review Manager 5.1 by the Cochrane Collaboration. Sensitivity analysis was performed for RCTs with a Jadad score ≥ 3 and excluding type 2 diabetic subjects (T2D). Results 15 RCTs met all eligibility criteria and were included in the present meta-analysis. No significant differences were observed for weight, waist circumference, fat mass, blood lipids (i.e. total cholesterol, LDL-cholesterol, HDL-cholesterol, triacylglycerols), C-reactive protein, diastolic and systolic blood pressure, fasting glucose and glycosylated hemoglobin. In contrast, the improvement of fasting insulin was significantly more pronounced following high-protein diets as compared to the low-protein counterparts (weighted mean difference: -0.71 μIU/ml, 95% CI -1.36 to -0.05, p = 0.03). Sensitivity analysis of high-quality RCTs confirmed the data of the primary analyses, while exclusion of studies with diabetic subjects resulted in an additional benefit of high-protein diets with respect to a more marked increase in HDL-cholesterol. Conclusion According to the present meta-analysis of long-term RCTs, high-protein diets exerted neither specific beneficial nor detrimental effects on outcome markers of obesity, cardiovascular disease or glycemic control. Thus, it seems premature to recommend high-protein diets in the management of overweight and obesity. Background With respect to the optimal macronutrient composition of the daily diet, most international authorities recommend increasing intakes of carbohydrates at the expense of fat and protein [1,2]. However, in the face of the worldwide increase in the prevalence of both overweight and obesity, there is a plethora of recommendations for diets aiming at weight loss and weight management. Among them, a high-protein (HP) regimen has gained increasing interest in recent years [3]. For the general population, the recommended dietary reference intake (DRI) for protein is 0.66 g per kg body weight per day [4]. Actual consumption data for the US American population average 1.3 g per kg body weight per day in the 19-30 age group, indicating a protein intake in excess of needs [5]. The Acceptable Macronutrient Distribution Range (AMDR) for protein is given as 5-35% of daily calories depending on age [6]. A recent meta-analysis comparing HP vs. low-protein (LP) diets with a duration between 28 days and 12 months observed favorable effects of HP diets on biomarkers of obesity as well as cardiovascular risk factors such as HDL-cholesterol (HDL-C), triacylglycerols (TG), and blood pressure [7]. Several randomized controlled trials (RCTs) investigated the short-term effects of HP vs. LP diets, reporting advantages of HP protocols including a reduction in TG concentration [8][9][10]. A meta-regression of 87 studies concluded that low-carbohydrate, HP diets favorably affected body mass and composition independently of energy intake [11].
The benefits of HP diets might be explained by increased thermogenesis and satiety [12,13]. Recent data from the 26-year follow-up of the Nurses' Health Study (NHS) revealed that protein sources such as red meat and high-fat dairy products were significantly associated with an elevated risk of coronary heart disease, while higher intakes of poultry, fish, and nuts correlated with a lower risk of coronary heart disease (CHD) [14]. Since there is a lack of information concerning studies with different protein contents covering a longer dietary intervention period, the aim of this meta-analysis was to compare the long-term effects of HP vs. LP regimens on biomarkers of obesity and cardiovascular complications, as well as adverse effects of HP. Methods The review protocol has been registered in PROSPERO, the International Prospective Register of Systematic Reviews (crd.york.ac.uk/prospero/index.asp, Identifier: CRD42012002791). Literature search The literature search was performed using the electronic databases MEDLINE (between 1966 and August 2012), EMBASE (between 1980 and August 2012), and the Cochrane Trial Register (until August 2012) with restriction to randomized controlled trials, but no restriction to language or calendar date, using the following search term: (high protein diet). Moreover, the reference lists from retrieved articles were checked to search for further relevant studies. This systematic review was planned, conducted, and reported adhering to standards of quality for reporting meta-analyses [15]. The literature search was conducted independently by both authors, with disagreements resolved by consensus. Eligibility criteria Studies were included in the meta-analysis if they met all of the following criteria: (1) randomized controlled design; (2) minimum intervention period with a follow-up of 12 months; (3) comparing a HP (≥ 25% of total energy content, TEC) with a LP dietary intervention (≤ 20% of TEC), with both protocols adopting a low-fat diet (≤ 30% of TEC) [16]; (4) assessment of the outcome markers: weight, waist circumference (WC), fat mass (FM), total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), HDL-C, TG, diastolic and systolic blood pressure (DBP, SBP), C-reactive protein (CRP), fasting glucose (FG), fasting insulin (FI) and glycosylated hemoglobin (HbA1c); (5) report of post-intervention mean values (if not available, the mean of two time points was used) with standard deviation (or basic data to calculate these parameters). If data from ongoing studies were published as updates, only the results of the longest intervention period were included. Quality assessment of studies Full copies of studies were independently assessed for methodological quality by both authors using the Jadad score [17]. This 5-point quality scale includes points for randomization (randomized = 1 point; table of random numbers or computer-generated randomization = an additional 1 point), double-blinding (double-blind = 1 point; use of a placebo = an additional 1 point), and follow-up (numbers and reasons for withdrawal in each group are stated = 1 point) within the report of an RCT. An additional point was awarded if the analysis was by intention-to-treat, to compensate for the fact that double-blinded study protocols are elusive in dietary intervention studies. Final scores of 0-2 were considered low quality, while final scores of ≥ 3 were regarded as representing studies of high quality.
Furthermore, the trials were assessed for methodological quality using the risk of bias assessment tool by the Cochrane Collaboration [18] (Figure 1). Data extraction and statistical analysis The following data were extracted from each study: the first author's last name, publication year, study duration, participants' sex and age, BMI, % diabetics, sample size, outcomes, dropouts, and post-intervention mean values or differences in means between two time points, with the corresponding standard deviation. Subsequently, a standardized data extraction form for this systematic review was created according to Avenell et al. [19]. For each outcome measure of interest, a meta-analysis was performed in order to determine the pooled effect of the intervention in terms of weighted mean differences (WMDs) between the post-intervention (or differences in means) values of the HP and LP groups. Combining both the post-intervention values and differences in means in one meta-analysis is a legitimate method described by the Cochrane Collaboration [20]. All data were analyzed using the REVIEW MANAGER 5.1 software, provided by the Cochrane Collaboration (http://ims.cochrane.org/revman). Heterogeneity between trial results was tested with a standard χ² test. The I² parameter was used to quantify any inconsistency: I² = ((Q − d.f.)/Q) × 100%, where Q is the χ² statistic and d.f. is its degrees of freedom. A value of I² > 50% was considered to represent substantial heterogeneity [21]. To account for heterogeneity, the random-effects model was used to estimate WMDs with 95% confidence intervals (CIs). Forest plots were generated to illustrate the study-specific effect sizes along with a 95% CI. Funnel plots were used to assess potential publication bias (e.g. the tendency for studies that yield statistically significant results to be more likely to be submitted and accepted for publication). To determine the presence of publication bias, the symmetry of the funnel plots, in which mean differences were plotted against their corresponding standard errors, was assessed. One study [22] included two types of LP diets, and these diets were combined into one group as described in the Cochrane Handbook [20]. Data extraction was conducted independently by both authors, with disagreements resolved by consensus. Literature search and characteristics of studies A total of 15 studies extracted from 3862 articles met the inclusion criteria and were analyzed in the systematic review [22][23][24][25][26][27][28][29][30][31][32][33][34][35][36]. The detailed steps of the meta-analysis article selection process are given as a flow chart in Figure 2. General study characteristics are given in Table 1. In case of more than one LP/LF group within a single study design, all LP interventions were combined as recommended by the Cochrane Collaboration [20]. Type 2 diabetes mellitus (T2D) was not defined as an exclusion criterion, and a total of three studies enrolling subjects with T2D were included in the present meta-analysis [24,32,33]. 13/15 studies reported the distribution of gender (1200 women vs. 690 men). The pooled estimates of effect size for the effects of HP as compared to LP on primary and secondary outcomes are summarized in Table 2. Figure 1 Risk of bias assessment tool. Across trials, information is either from trials at a low risk of bias (green), from trials at unclear risk of bias (yellow), or from trials at high risk of bias (red).
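As a rough illustration of the pooling procedure described above, the following Python sketch computes the Q statistic, the I² inconsistency measure, and a DerSimonian-Laird random-effects weighted mean difference with a 95% CI. It is a minimal sketch using invented study values, not a reimplementation of Review Manager 5.1.

    import math

    def random_effects_wmd(mean_diffs, std_errs):
        # DerSimonian-Laird random-effects pooling of weighted mean differences.
        w = [1.0 / se ** 2 for se in std_errs]                    # fixed-effect weights
        fixed = sum(wi * d for wi, d in zip(w, mean_diffs)) / sum(w)
        q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, mean_diffs))
        df = len(mean_diffs) - 1
        i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0       # I^2 = 100% * (Q - d.f.)/Q
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)                             # between-study variance
        w_re = [1.0 / (se ** 2 + tau2) for se in std_errs]        # random-effects weights
        pooled = sum(wi * d for wi, d in zip(w_re, mean_diffs)) / sum(w_re)
        se_pooled = math.sqrt(1.0 / sum(w_re))
        return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), i2

    # Hypothetical per-study differences in fasting insulin (uIU/ml) and their standard errors.
    wmd, ci, i2 = random_effects_wmd([-0.9, -0.4, -1.2, -0.3], [0.35, 0.40, 0.55, 0.30])
    print(f"pooled WMD = {wmd:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I^2 = {i2:.0f}%")

When I² exceeds the 50% threshold used above, the random-effects weights flatten the influence of the most precise studies relative to a fixed-effect analysis.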
Glycemic control Decreases in FI were significantly more pronounced in subjects adhering to an HP diet as compared to those following an LP regimen. Sensitivity analysis Only articles with a Jadad quality score ≥ 3 were included in the sensitivity analyses. A total of 8/15 studies remained for sensitivity analyses [22,24,26,28,32,33,35,36]. The results of the primary analyses were confirmed for all parameters that were not significantly altered in different ways in the HP and LP groups. Furthermore, changes in FI turned out to be of similar magnitude when studies with poor Jadad scores were excluded. In an additional sensitivity analysis, studies enrolling patients with T2D [33,34,36] were discarded to account for a potential "reproducibility effect" on the pooled WMD when comparing the results of the present systematic review with a meta-analysis by Santesso et al. [7], where T2D represented an exclusion criterion. Results were not significantly different as compared to the comprehensive meta-analyses (Figure 4). Publication bias The funnel plots (with respect to effect size changes for weight, WC, FM, TC, LDL-C, HDL-C, TG, CRP, DBP, SBP, FG, FI and HbA1c in response to HP diets) indicate little to moderate asymmetry, suggesting that publication bias cannot be completely excluded as a factor of influence on the present meta-analysis. It remains possible that small studies yielding inconclusive data have not been published. Discussion In this systematic review, HP dietary protocols were compared with LP regimens with respect to their effects on biomarkers of obesity and obesity-associated disorders such as diabetes or cardiovascular disease. Analyses were restricted to HP as well as LP diets providing ≤ 30% of TEC in the form of fat, to prevent potential bias due to variations in total fat intake. The main findings suggest no advantages or disadvantages of a higher dietary protein content. Neither of the dietary protocols turned out to be superior to its counterpart with regard to the biomarkers under investigation. Following the primary analysis, decreases in fasting insulin were significantly more pronounced with HP diets; however, this was no longer valid after restricting the secondary analysis to high-quality trials only. The rise in HDL-C turned out to be more pronounced in the HP group compared to the LP group following the sensitivity analysis excluding studies that enrolled patients with T2D. In a previous study, HP diets exerted a 12% increase in HDL-C under closely supervised dietary control [37]. Two meta-analyses provide evidence that higher fat intake was associated with higher levels of HDL-C when compared to low-fat diets [38,39]. With respect to the studies included in the present systematic review, the trials by Gardner et al. [22], Dansinger et al. [26], and McAuley et al. [34] reported higher intakes of total fat at the end of their 12-month protocols (dietary records) in the HP groups as compared to the respective LP counterparts. When these trials were omitted in the sensitivity analysis, changes in HDL-C turned out to be similar in both HP and LP regimens (data not shown), suggesting that the HDL-C response was due to dietary fat content rather than to protein consumption. Taken together, these results are in discrepancy with a recent meta-analysis by Santesso and co-workers [7], who reported weight loss, WC, HDL-C, TG, SBP, DBP and FI to be significantly more improved following short- and long-term HP diets as compared to LP protocols.
The different findings might at least in part be explained by the fact that only long-term studies with a duration ≥ 12 months were included in the present meta-analysis. In addition, both post-intervention values as well as changes in mean differences were used as suggested by the Cochrane Collaboration [20] to avoid a standardized mean differences method, whereas Santesso et al. [7] separated between primary (change from baseline values) and secondary (final values) analyses. These results indicate that HP diets do not exert favorable effects on anthropometric measures like body weight, fat mass and waist circumference. However, in a meta-regression by Krieger et al. [11] high-protein intake turned out to be a significant predictor of fat free mass retention, thereby compensating a potential side-effect of long-term energy restriction. Dietary protein content of the high-protein diets included in this meta-analysis varied between 30-40% of TEC, which is within the age-dependent AMDR of 5-35% for all but one RCT [31]. Via analysis of the National Health and Nutrition Examinations Survey conducted between 2003 and 2004, Fulgoni [5] concluded that the actual intake of protein in US-American adults of 1.3 g * kg body weight -1 * d -1 exceeds the DRI values of 0.66 g * kg body weight -1 * d -1 . He suggested that recommendations could be adapted to 25-30% of TEC, assuming benefits of higher protein intake e.g. on regulation of body weight. Regarding biomarkers such as weight, waist circumference or fat mass, the present meta-analysis does not support this concept. Three RCTs included in this meta-analysis investigated the effects of HP regimens on biomarkers of kidney function in patients with T2D. In all trials, HP diets did not affect renal functions assessed via measurement of serum creatinine and microalbuminurea [32,33,36]. Likewise, a 2-year RCT by Friedman et al. [40] reported no harmful effects of a high-protein/low carbohydrate diet on glomerular filtration rate, albuminuria, or fluid and electrolyte balance. With respect to prospective cohort studies, a systematic review by Mente et al. [41] indicated no significant correlations between animal protein sources, e.g. eggs, milk or meat on coronary heart disease (CHD), whereas vegetable protein sources like nuts were associated with a decreased risk. Findings from Greece, Sweden and the US noted an increased all-cause mortality following a HP/low carbohydrate diet based on animal sources in both women and men whereas a vegetable-based low-carbohydrate diet was associated with lower all-cause and cardiovascular disease mortality rates [42][43][44]. This systematic review did not consider unpublished data, and with respect to the moderate asymmetry of the Funnel plots, it cannot be excluded that publication bias such as lack of published studies with inconclusive results may have at least a moderate impact on the effect size estimates. An important limitation of dietary intervention trials is the heterogeneity of various aspects and characteristics of the study protocols. The literature chosen for the present meta-analysis varies regarding type(s) of diets used, definitions of HP and LP diets, study population (i.e. BMI, type 2 diabetics, abnormal glucose metabolism), intervention time, nutritional assessment as well as longterm follow-ups (between 1 and 2 yrs.). In addition, some studies were performed on hypocaloric terms, while others provided an isocaloric diet. 
Not all of the studies gave details on the quality of their respective setup (e.g. method of randomization, follow-up protocol with reasons for withdrawal), yielding Jadad scores < 3. However, following sensitivity analyses including high-quality studies only (Jadad score ≥ 3), pooled estimates of effect size were similar to those obtained with the complete set of studies. Some comparisons within the present meta-analyses were done using both post-intervention values and changes in mean difference, which is considered to be a legitimate procedure as described by the Cochrane Collaboration [20], and should not be regarded as a limitation. In summary, the present meta-analysis investigated the long-term effects of HP vs. LP diets, both low in fat, on biomarkers predicting the outcome of obesity, cardiovascular disease and glycemic control. Since the biomarkers under investigation were not affected by changes in dietary protein content, a general recommendation of a high-protein dietary approach is not supported by the evidence. With respect to the potential risks of high protein intakes, further studies are required before dietary recommendations can be changed towards a higher percentage of daily protein consumption.
v3-fos-license
2020-03-19T20:12:11.853Z
2019-12-26T00:00:00.000
213457541
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "https://journal.uny.ac.id/index.php/reid/article/download/20924/13795", "pdf_hash": "c8179f32ecf4bbd088177f471a81267c63965ca3", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44843", "s2fieldsofstudy": [ "Mathematics", "Education" ], "sha1": "ea1f5b9b453f69b3ff3c4264b86889dcb38ab0b7", "year": 2019 }
pes2o/s2orc
Estimation of college students’ ability on real analysis course using Rasch model This study is aimed at estimating the difficulty level of essay tests and the accuracy of students’ ability in Real Analysis essay test using the Rasch model with the QUEST program and R 3.0.3 package eRm program. The population in this study was all students of the Department of Mathematics Education, Universitas Pancasakti Tegal in the academic year 2016/2017, who were enrolled in the Real Analysis course. The data were analyzed using the R 3.0.3 package eRm program and QUEST program. The students’ ability was obtained from the result of the course final exam of the first Real Analysis course. The analysis shows that: (1) by using Rasch model for partial credit scoring, the difficulty level shows that 100% of essay questions in Real Analysis final exam is categorized as difficult, (2) the estimation of students’ ability in Real Analysis course using Rasch Model with CML method is better than the estimation of students’ ability using Rasch Model with JML approach. Introduction One important component in the formation of quality human resources is education. The most important factor to be able to compete globally in the 21st century is education. According to Mardapi (2012, p. 12), efforts to improve the quality of education can be pursued through improving the quality of learning and the quality of the assessment system. Thus, in the process of education in Higher Education, for example in learning mathematics must strive to implement the learning process and assessment as well as possible. A good process of learning mathematics can certainly be done by providing flexibility for students to develop and explore their abilities. Today, education in Indonesia is still considered very low, especially for mathematics. Even though mathematics is the main science taught from elementary school to university. This indication can be seen from the low student achievement in each academic year. Ironically, mathematics is a subject that is not liked. Many students are afraid of mathematics. For them, math is like a frightening enemy they want to avoid. Schwartz (2005, p. 1) suggests the basic success of mathematics education is to support the development of intelligence in mathematics from a variety of life conditions. Student's mathematical skills in living conditions at the School can be seen when students take the test. The implementation of the test is basically to assess the success of students during the learning process. The test is very necessary so that the educator in this case the lecturer can know the student's learning achievement after being given the subject matter in the learning process. Therefore, making a good test needs to be pursued by considering the ability of students, so that the tests carried out as a measuring tool to test student achievement can reflect/ describe the true abilities of students. Students of the Mathematics Education program at Universitas Pancasakti Tegal all this time consider the most difficult subjects to be Real Analysis. Real Analysis comprises deductive and axiomatic topics. Previous observation on the performance of students of Universitas Pancasakti revealed the students' ability in this course is relatively low. It is indicated by their ability to prove a convergent sequence yet, they found it difficult in solving some problems related to convergent sequence as there are many theorems are included. 
Student learning evaluation activities are one of the important tasks that must be done by lecturers. In the field of education, evaluation of student learning achievements is conducted to determine the progress of students in the curriculum that has been taught. One effort to evaluate students is to give examinations in the middle of the semester and at the end of the semester. However, sometimes giving questions that are too difficult or too easy causes it to be difficult for lecturers to distinguish students' abilities. Therefore, an analysis of exam questions is needed in the hope that the exam results present the ability of students. Evaluation is a series of activities in improving the quality, performance, or productivity of an institution in carrying out its program. Through evaluation, information about what has been achieved and which have not will be obtained, then this information is used to improve a program. According to Tyler (1950), evaluation is a process of determining the extent to which educational goals have been achieved. According to Griffin and Nix (1991), evaluation is a judgment on the value of the measurement results or implications of the measurement results. Tyler emphasizes the achievement of the objectives of a pro-gram, while Griffin and Nix emphasize the use of assessment results. Thus, the focus of evaluation is a program or group, and there is a judgment element in determining the success of a program (Mardapi, 2012, p. 4). The form of real analysis subject evaluation is the midterm and the final semester examination. The test is in the form of a description test, the advantages of the description form test are easy in the preparation. This form of description will also train students in expressing opinions both systematically and logically (Buckley, Winkel, & Leary, 2004). A lecturer will be able to find out where the weaknesses of the students are in the material that has been taught so that they will give input on what things must be improved. Scoring on the description form tests takes a long time and is relatively more difficult so the form of the description test is difficult to use for large-scale tests. An assessment will be meaningful if the results can be used to improve the quality of the learning process. An assessment will be meaningful if the results can be used to improve the quality of the learning process (McMillan, 2005). The existence of the midterm and final semester exams in the Real Analysis course is to evaluate the ability of students. Some theories and models that can be used to analyze test items are the ones with the Rasch Model. In this study, Rasch model was employed to analyze test items. According to Imaroh, Susongko, and Isnani (2017), the items parameter does not depend on the sample. Further, Ningsih and Isnani (2010) revealed the different reliability levels of essay test items analyzed using Item Response Theory model (1PL, 2PL, 3PL) and Rasch model. The concept of objective measurement in the social sciences and the assessment of education, according to Wright and Mok (2004), must have five criteria, namely: (1) producing linear measurements with equal intervals, (2) exact estimation process, (3) identifying inaccurate (misfits) or uncommon items (outliers), (4) able to handle missing data, (5) produce measurements that are independent of the parameters studied. Of the five conditions, so far only the Rasch model can fulfill these five conditions. 
The quality of measurements in the assessment of education carried out with the Rasch model will have the same quality as the measurements made in the physical dimension in the field of physics (Sumintono & Widhiarso, 2015). In measuring modern test theory, the Rasch model is seen as the most objective measurement model. The use of the Rasch model in measuring education has advantages in specific objectivity and the stability of high grain parameter estimates (Wu & Adams, 2007). The main characteristic of the Rasch Model is that this model considers all responses of a test taker regardless of the sequence in solving the problems. It means that the level of difficulty of each test item is not necessarily in consecutive order. The main advantage of the Rasch model is that the mental process used by participants in solving the problems is more accurate. Moreover, compared to other models (particularly classical test theory) this model has the ability to predict the missing data based on a systematic response pattern. This model has been applied to mathematics and reading tests, e.g., at the National Assessment of Educational Progress (NAEP) (Susongko, 2014). This model is also suitable for analyzing personality scale responses that have a multi-point scale. Unlike the Rasch model which includes all responses without considering the sequence in solving the problems, the Gradation model requires sequential responses of the test takers from a low to a high category. In the Gradation model, the level of difficulty of each test item is arranged in sequence, while in classical test theory, the pattern of students' answers is not considered as classical test theory merely considers correct and incorrect answers. Gradation model is suitable for a course that requires regularities or sequential responses of each test item, such as mathematics, physics, and chemistry. According to Lababa (2008), one of the oldest test theories about behavioral assessment is classical true-score theory. Classical test theory has an easy application. Moreover, it is a practical model to describe how measurement errors can affect the observed score. Quantitative item analysis emphasizes the analysis of internal test characteristics through empirically obtained data. Internal characteristics include test item parameters which are the level of difficulty and discrimination power of a test. Rasch model is a dichotomous scoring model that merely has two categories, namely the correct answer with a score of 1 and the incorrect answer with a score of 0. Currently, it has been developed more extensively in polytomous scoring. According to Retnawati (2014, p. 32), the polytomous scoring model is an item response model that has more than two scoring categories. In the Rasch model, it is assumed that all items have the same discrimination index (Isgiyanto, 2011). To deal with polytomous data with various ranks, a new type of analysis of the Rasch model is developed, namely the Partial Credit Model. However, the main purpose of the Rasch model is to create a scale measurement at equal intervals. Meanwhile, as the raw scores are not shown in interval form, the scores cannot be used directly to interpret the students' ability. Rasch model requires both per person score data and per item score data. These two scores become the basis for estimating true scores that indicate the level of individual ability as well as the degree of difficulty of the test. Rasch modeling uses both per person score data and per item score data. 
These two scores become the basis for estimating true scores that indicate the level of individual ability as well as the degree of difficulty of the test. The advantage of the Rasch Model compares to other models, particularly classical test theory, is the ability to predict the missing data, based on a systematic response pattern. Some studies had been carried out related to the use of the Rasch Model in analyzing test items. A study by Kurniawan and Mardapi (2015) showed that the Rasch model provides complete information about test items, including its difficulty level. This study is aimed at estimating the difficulty level of the essay test on the first Real Analysis course by using the Rasch Model and describing the estimation of students' ability in Real Analysis course by using the Rasch Model, QUEST program, and R 3.0.3 package eRM program. Method This research is an explorative descriptive study of data sets of items and responses of participants in the semester's final examination of the real analysis subject in the academic year 2016/2017. This research is a post-hoc diagnosis that is described as a retrofitting approach (Gierl, 2007). The retrofitting approach is carried out through analysis of the items and item response data in the final semester exam in the real Analysis 2016/2017 academic year. Some studies have implemented the Rasch model by involving 30 to 300 students as the sample (Bond & Fox, 2007;Keeves & Masters, 1999). The subject of this present study was 82 students of Mathematics Education Department of Universitas Pancasakti Tegal in the academic year 2016/2017 who took the first Real Analysis course. The sampling technique used in this study is purposive sampling. It is one of the non-random sampling techniques where the researcher determines sampling by specifying specific characteristics suitable with the objectives of the study so that it is expected to answer the research problems. Based on the explanation of the purposive sampling, there are two things that are very important in using the sampling technique, namely non-random sampling and setting specific characteristics according to the research objectives by the researchers themselves. The instrument used in this study was the final exam test on the first Real Analysis course. The test items include the introduction material, Real Numbers, Sequences and Series, and Limit (Bartle & Sherbert, 2000). Rasch model was applied to analyze the collected data. This analysis resulted in a description of the difficulty level of the test items. By using the eRm package in R Program version 3.0.3, the analysis generated the estimation of item parameters on the exam of Real Analysis. Measurement modeling explains the procedure of how to organize raw scores into more meaningful information. Moreover, it can utilize a mathematical model that can interpret raw scores into a score that provides more valid and accurate information. The analysis of raw scores leads to a new finding: the opportunity for students to correctly answer an item is the same as the comparison of students' ability and the difficulty level of the test items. (Bryan, 2004) OCFs (Ogive Curve Function) become a prototype of Rasch model development for polytomous items. If i is a polytomous item with score category = 0, 1, 2,. . . 
, mi, then the probability of participant n obtaining score x on item i is described by the Category Response Function (CRF) of the partial credit model (Glas & Verhelst, 1989):
P(Xni = x) = exp(Σ j=0..x (θn − δij)) / Σ k=0..mi exp(Σ j=0..k (θn − δij)), where the empty sum for j = 0 is defined as zero.
Equation (2) can be elaborated according to the number of score categories of the test items. For example, if a scale has three score categories 0, 1, and 2, there are three individual probability equations, one for each category j. The probability of category 0 is
P(Xni = 0) = 1 / [1 + exp(θn − δi1) + exp(2θn − δi1 − δi2)],
the probability of category 1 is
P(Xni = 1) = exp(θn − δi1) / [1 + exp(θn − δi1) + exp(2θn − δi1 − δi2)],
and the probability of category 2 is
P(Xni = 2) = exp(2θn − δi1 − δi2) / [1 + exp(θn − δi1) + exp(2θn − δi1 − δi2)].
In the probability of category 0 there is a 1 in the numerator because the Rasch model defines exp(Σ j=0..0 (θn − δij)) = exp(0) = 1 (Glas & Verhelst, 1989). Findings and Discussion The parameter of the difficulty level of test items is expressed on the same interval scale as the parameter of participants' ability (θ). The bij value ranges from -∞ to +∞; however, the values which are practically (or rationally) used are only between -4.0 and +4.0. This means that the more negative the difficulty level of an item, or the closer to -4, the easier the problem; on the other hand, the more positive the difficulty level, or the closer to +4, the more difficult the problem (Naga, 2003, p. 224). If the difficulty parameter of a test item meets bj ≤ -2, the item is categorized as very easy; if it meets -2 ≤ bj ≤ 0, the item is categorized as easy; and if it meets 0 < bj ≤ 2 or bj ≥ 2, the item is categorized as difficult or very difficult, respectively (Hambleton, Swaminathan, & Rogers, 1991). The analysis of question number 1 showed that δ11 = 0.861, δ12 = 0.374, and δ13 = 0.45, which implies that the difficulty level of the first, second, and third steps is included in the difficult category. In question number 2, the difficulty level of the first step is included in the difficult category (δ21 = 1.731), while the difficulty level of the second step is identified as very difficult (δ22 = 2.787). In question number 3, the results obtained were δ31 = 1.149 and δ32 = 1.796, which suggest that the difficulty level of the first and second steps can be included in the difficult category. The analysis of question number 4 resulted in δ41 = -0.363 and δ42 = -0.963, which indicates that the difficulty level of both steps is included in the easy category. The results showed that three step categories (δ12, δ21, δ41) were identified as easy, one category (δ11) was identified as very easy, and six categories (δ22, δ31, δ32, δ42, δ51, and δ52) were categorized as difficult. In general, the average difficulty level of the items was 0.594, thus the four test items were identified as difficult. It can be inferred from the aforementioned results that the final exam items of the Real Analysis course are categorized as difficult for the participants, even though all topics in the questions had been discussed during the course. The value of the item difficulty level varies (typically) from about -2.0 to +2.0. Item number 1, with the sub-topic of the Completeness of Real Numbers, was identified as a difficult item. Likewise, item number 2 and item number 3, with the sub-topics of the Limit of a Sequence and the Theorems of the Limit of a Sequence, respectively, were categorized as difficult items. On the contrary, item number 4, with the sub-topic of the Theorems of the Limit of a Sequence, was identified as an easy item. To make it clearer, Figure 1, Figure 2, and Figure 3 present the questions in the test and samples of students' answers.
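Before turning to the students' answers shown in those figures, the partial credit formulation above can be made concrete with the following Python sketch, which is not part of the original analysis. It evaluates the category response probabilities of one item; the step difficulties are those reported for question 1 (0.861, 0.374, 0.45), while the ability values are illustrative assumptions.

    import math

    def pcm_probabilities(theta, deltas):
        # Partial credit model: P(X = x) for x = 0..m, given ability theta
        # and step difficulties deltas = [d1, ..., dm].
        numerators = [1.0]          # category 0: empty sum, exp(0) = 1
        cumulative = 0.0
        for d in deltas:
            cumulative += theta - d
            numerators.append(math.exp(cumulative))
        total = sum(numerators)
        return [n / total for n in numerators]

    deltas_q1 = [0.861, 0.374, 0.45]      # step difficulties reported for question 1
    for theta in (-1.0, 0.0, 1.0, 2.0):   # illustrative ability values
        print(theta, [round(p, 3) for p in pcm_probabilities(theta, deltas_q1)])

At low ability the probability mass sits in category 0 and shifts toward full credit as theta passes the step difficulties, which is the behavior underlying the difficulty classification reported above.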
From the students' answers which are presented in Figure 1, Figure 2, and Figure 3, it can be foreseen that the student was incapable to solve the problems number 1, 2, and 3 systematically, because of the incapacity in understanding some theorems and definetions which are related to the problems. The students could not recognize and analyze the relation between the theorems and definitions. It is presented in Figure 4 that in the fourth problem, the student seemed to comprehend the topic. The theorems related to sequences and series were analyzed before the implementation for solving a problem. It can be seen from the sample in which the student could use the theorems systematically as suggested in solving the problem. The result of the analysis showed that the ability of the test participants was quite diverse. In fact, merely a small number of students can solve questions number 1, 2, and 3 correctly. Most of the students could not determine specific theorems and definitions to solve the problems, especially in the second and third problems. In contrast, most of the students already understand the theorems used to solve the fourth problem, which are the sequences and series theorems, even though they faced a difficulty to analyze the theorems. The estimation of the students' ability is presented in the interval scale (-3, +3). The category score in Rasch Model shows the number of the required steps to solve an item correctly. A high score indicates a good ability category. On the contrary, a low score indicates a low category of ability as well. The output of the estimation of ability parameter obtained from QUEST program and the package eRM with partial credit modeling or Rasch Model is used to illustrate the comparison between the students' ability estimated using the Joint Maximum Likelihood (JML) approach with the package eRM and those estimated using the Conditional Maximum Likelihood (CML) approach with the QUEST program. In JML approach, the students' ability could not be expressed in score 0 and score 100. Meanwhile, in CML approach, the students' ability can be expressed in score 0 (approximately a value of -3.09) and score 100 (as approximately a value of 85). Therefore, it can be inferred that Rasch Model using CML approach is more suitable than Rasch Model using JML approach to estimate the students' ability in understanding the subject-matter. The result of analysis meets the OutfitMSQ criteria if the value is 0.035 < OutfitMSQ < 3.239. The analysis resulted a value of 0.5 < OutfitMSQ < 1.5, thus it fulfills the range of OutfitMSQ. The criteria of INFIT MNSQ is 0.5 < MNSQ <1.5. According to the mean value and the standard deviation of Rasch model, the CML approach with the package eRM is eligible since the mean and the standard deviation meets the criteria. On the contrary, the JML approach with Quest program is less appropriate as indicated by the mean and the standard deviation that do not meet the criteria. In conclusion, the result of analysis on the estimation of students' ability reveals that the estimation of students' ability using Rasch model with CML approach and eRm program is more accurate than the estimation of students' ability using Rasch model with JML approach and QUEST program. Similarly, based on OutfitMSQ, Rasch model using CML approach with eRm program has better performance than Rasch model using JML approach with Quest program. 
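The OutfitMSQ and INFIT MNSQ criteria cited above are computed from standardized residuals between observed and model-expected responses. The sketch below is a simplified illustration for dichotomous responses, not a reproduction of the QUEST or eRm output; the response, ability, and difficulty values are invented.

    import math

    def rasch_probability(theta, b):
        # Dichotomous Rasch probability of a correct response.
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def fit_statistics(responses, thetas, b):
        # Unweighted (outfit) and information-weighted (infit) mean squares for one item.
        squared_std_resid, variances, squared_resid = [], [], []
        for x, theta in zip(responses, thetas):
            p = rasch_probability(theta, b)
            var = p * (1.0 - p)                       # model variance of the response
            squared_std_resid.append((x - p) ** 2 / var)
            variances.append(var)
            squared_resid.append((x - p) ** 2)
        outfit = sum(squared_std_resid) / len(squared_std_resid)
        infit = sum(squared_resid) / sum(variances)
        return outfit, infit

    # Invented data: three persons answering one item of difficulty 0.5.
    outfit, infit = fit_statistics([1, 0, 1], [0.2, -0.4, 1.1], 0.5)
    print(f"outfit MNSQ = {outfit:.2f}, infit MNSQ = {infit:.2f}")

Values between roughly 0.5 and 1.5 are conventionally read as acceptable fit, which matches the range applied in the comparison above.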
Conclusion Based on the results and discussions, it can be concluded that the essay test items on the first Real Analysis course that have been tested to the students of Mathematics Education Department, Universitas Pancasakti Tegal can be classified as a good test. Besides, the students' ability can be estimated precisely by using Rasch Model with CML approach and eRm package. The estimation of participants' ability was quite diverse. A small number of students can solve questions number 1, 2, and 3 correctly despite these questions were classified difficult. Meanwhile, most of students already understand the theorems used to solve the fourth problem. The students are capable to apply the theorems systematically to solve the fourth problem.
v3-fos-license
2018-05-31T23:23:06.787Z
2015-12-25T00:00:00.000
44072068
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.mcser.org/journal/index.php/mjss/article/download/8700/8358", "pdf_hash": "f5903ba965607278f2333b1e0ef07538c89d307c", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44846", "s2fieldsofstudy": [ "Education" ], "sha1": "f5903ba965607278f2333b1e0ef07538c89d307c", "year": 2015 }
pes2o/s2orc
Methods of Teaching and their Impact on Learning This study highlights the role and importance of teaching in which the student is the focus of our basic education, some of the phenomena that accompany the learning and education process, and some considerations about the ways, techniques and most effective forms of present-day student-centered teaching. To achieve the main goal of this work and the objectives set, in addition to the use of an extensive contemporary literature in the field of teaching, I gathered my own material by interviewing students, parents and teachers, as well as conducting surveys with them. The primary sources were interviews, observations, questionnaires and surveys, both structured and semi-structured. The instruments of this survey combine quantitative and qualitative measurement, in a word, mixed methods. To understand better the functioning of teaching methods and their effect on the learning process, classes were followed and observed in the institutions selected for observation. Data collection and processing were conducted through three instruments: review of the literature, direct observation and questionnaires. The main objectives of this paper are: to interpret the method of student-centered teaching, which is the most efficient and the most used by teachers; and to describe the current situation of these teaching methods, how far they are aligned with European standards, what the possible solutions for these schools are, and the premises for creating a much more effective teaching. This paper is thus an attempt to distinguish some of the causes that lead to the malfunctioning of the learning process in our basic education, and to suggest some ways and forms of a much more contemporary teaching. Introduction 1. In the "Dictionary of Education" (Grillo, K., p. 182), instruction is defined as the act of teaching in an educational institution; it is the operation and management of the learning process by the teacher. Teaching has always been a deliberate process through which certain objectives should be implemented. Like any deliberate act, teaching does not happen by chance; it is a planned process that requires effective implementation. Teaching is: a) the transmission of knowledge from teacher to pupil, accompanied by the question "What?" - what to teach; b) the organization and direction of the work of students, accompanied by the question "How?", and the facilitation of learning (what will be taught and how will the student be taught?). Learning marks the activity of students aimed at acquiring knowledge. It is a motivated and deliberate process, the result of many skills, such as: the ability to convince yourself that learning is a personal duty, the ability to motivate yourself positively, the skills to plan time, to master the study environment, to plan activities, to work alone or in groups, and to become aware of the results obtained, aiming to develop a critical attitude toward what you learn, starting with yourself. Learning is a phenomenon of personal character. Learning is so important to human life that it is hard to find a situation in which it is not involved. In school practice, teachers, and especially new teachers, often find it very difficult to orient themselves in unforeseen problems and situations related to pupils. This impels them to find the most appropriate solution. This fact is the source of many ideas and conceptions about learning and teaching.
Teaching, in the strict sense is the act of teaching in an educational institution.The process of teaching is one of the most important factors and most influential in achieving learning goals.Teaching helps students learn.Given this, some describe teaching as a process, as an action or an interpersonal activities, where the teacher interacts with one or more students and influence them, but that does not understand the impact of student to teacher.(Manual, Group of authors, 1999 p.20).In the broadest sense, teaching is the direction from the teacher in the learning situation and learning that includes: a) proactive process of decision-making for planning, drafting and preparation of materials for teaching and learning, for diagnosing the needs of students, creating learning environment for the use of instructional time. b) the process interactive, direct cooperation of teacher-student, using the forms, methods and means of teaching.c) prospectively process, summarizing evaluation, self-assessment, establishing procedures to evaluate learning, to put grades to assess the curriculum, to evaluate teaching materials.So, teaching marks the teacher's activity.It is a set of strategies, tactics and pedagogical procedures teacher uses to teach students.Also, it is said that teaching is an intentional process, which means that this process is intended to be achieved certain goals, always oriented learning.(Ibid,p.22)Learning and teaching are two basic processes that underlie the activities of students and teachers.Learning is the activity of the students, which is a phenomenon with individual character.Teaching is the direction of teacher learning situation and learning.Given that each student has its own uniqueness and learning, we cannot claim absolute recipes in his learning.Teachers can help students to choose the best method of learning.The learning process cannot be understood only in the capture of information, which remains just under reproduction information.Acquisition of real information will note only if we have the answer of the subject to students learning situation, whether there will be a sense or awareness of this new behavior, which leads to a conception on the part of students.Man undergoes changes throughout life, are the consequences of learning in turn, learning from educational institution and educational and learning from Micro-social. Traditional pedagogy is taking time to determine the way in which concrete behavioral changes, taking into account teaching, as well as learning.Given this, professors and philosophers have made radical reforms in the structure of the school and its contents, which were spread throughout the world.Learning has attracted the attention of many researchers, education professionals and teachers.Special attention was paid to learning techniques in school, as an activity typically noted for individual, hence the necessity of knowledge and mastery of techniques to realize learning as a way that facilitates teaching and consequently that of learning. 
Teaching cannot be defined simply as a set of methods. It means the fulfillment of a number of objectives for a particular group of students. Teaching must find the balance between direct instruction and guiding students who are working on their own or in groups. Good teaching means getting the student to develop skills and learning strategies while simultaneously learning the subject matter itself. Although teaching is more than a set of strategies, some of the teaching methods should be part of the repertoire of every teacher. The Development of Critical Thinking among Students 2. The essential characteristic at the heart of productive teaching is the development of critical thinking. It is considered the basic pillar of modern teaching, on which the realization of educational activities is predicated. The changes and reforms that have begun to be practiced in education require students to build on information themselves, while the teacher is considered the main guide who directs the activity in class by showing students the road towards new inquiry. This tendency manifests itself in the teaching methodology that is called "student-centered teaching." Developing critical thinking is essential for students, and the only assistant who can cultivate it is the teacher (Guideline 1, AEDP 2000). No rule or fixed arrangement of thoughts and ideas from any group of people can tell what the best idea is in a given circumstance. The work done during shared interaction is done to improve the student's learning, which should be sustainable and should last for a lifetime. Much has been written about critical thinking. For people who think critically, the information base is a starting point rather than the end of learning. The development of critical thinking involves the absorption of ideas and the review of their implications, presenting them in a sophisticated way, balancing them against opposing views, building reliable systems to test them, and taking a sustainable stance on these structures. Critical thinking is a complex process involving ideas and creative resources, the re-conceptualization and restructuring of concepts and information. It is a cognitive, active and interactive process, which occurs simultaneously at many levels of thinking. Thinking is often directed towards a goal, but it can also be a creative process in which the goals may be vague. Critical thinking is a very sophisticated way of thinking. It occurs most readily in older students, but even young students are fully able to engage in developmentally appropriate levels of critical thinking. This includes optional tasks that require them to solve difficult problems and to display higher levels of thinking on issues related to decision-making. What is critical thinking?
Thinking is a similar process as reading, writing, speaking or listening.It is an active, interacting and complex process, which includes thinking about something real.In school, learning critically is better learned while testing it as a way of acquisition of content, as something that is part of the overall results.Actually, recent researches about critical thinking and learning model suggests us that the focused method reduces the development of critical thinking.For example, Brown (1989) argued that learning skills, shared objectives and real-world tasks, enable students to perform an objective test well, but they are unable to apply these skills to new situations.Richer definitions for learning and thinking are supported by researchers of cognitive psychology (cognitive), philosophy and multicultural education.The most common side in this research study are: • Fruitful and long learning, which can be applied to new situations is a matter in whose information and ideas, have an important meaning.This happens when students energize in learning, enter inside learning and synthesize & produce information by theirselves.(Anderson et.al, 1985).• Students learning expands when there are used strategies for thinking.But even students should do their part in the learning process to have the required results.(Palinskar and Brown, 1989).• Learning and critical thinking are expanded when students have the opportunity to apply new learning in the real tasks (Resnik 1987).Learning expands when it is built on previous knowledge and on the experience of the students (Roth 1990).Critical thinking and learning occurs when teachers understand and appreciate differences of ideas and experiences.Also critical thinking mentality occurs when "only one question is right" (Benks.1988).Teaching for critical thinking is not a simple task or a task that can be performed in a certain class and then forgotten.Also there are not resolved steps in which the students and the teacher should walk along.However, there are some rules which promote the development of critical thinkers.Conditions which are described below are essential to promote critical thinking: • Provide time and a chance to show critical thinking experiences. • Create opportunities for students to think more about certain issues. • Consider ideas ,thoughts and opinions. • Encourage the active involvement of students in the learning process. • Make sure the students to have a safe environment in which they do not have doubts that anyone could tease them.• Express conviction in the ability of each student to make critic judgments. • Appreciate the critical opinion expressed by students.Critical thinking for various reasons takes a lot of time.Before thrown a hypothesis, first there is required time to discover what you think and how you are convinced in the topic. 
Time is also needed to start the expression of opinions by the students in their own words and to hear their speech and expression of ideas.But while, we must recognize that the exchange of ideas and critical thinking also requires time.Without this exchange it is not possible to assess or comment on the results of others, who are able to process thoughts and to reflect further.To promote critical thinking in the classroom, students should be given enough time to express their ideas and comments.During time spent in expressing thoughts in an environment which fosters the exchange of ideas, you should create opportunities for them to express themselves more clearly.Students do not always think freely and seriously about important issues.They often expect the teacher to determine "the only real answer."Students put together their own ideas and concepts in different ways.Some of these combination may be more productive than others, some may seem reasonable at first, but thinking further, they become less meaningful, or other definitions may seem stupid at first, but to make valid, they require processing.For this to happen, students should be given permission for this kind of thinking in order to create and express a clear opinion or "stupidity".When students understand the idea or thought, they are eligible to be involved more deeply in a critical analysis which can be required after.When teachers will give permission to students to deepen to their thinking and to become critical thinkers, they must do so by being under the control of their teacher.But even here we must distinguish between the provision of permit and being allowed.When students do not lack in the issue even the critically thinking will lack.Enough students come to school as slack students believing that the teacher and text contain knowledge and the teacher and text are responsible for their own learning.They see knowledge as immutable, that only need "to be emptied" by the teacher in their heads and reproduce command to show learning by reproducing knowledge.This suggestively learning shows that students are not involved in the process of developing critical thinking.Involvement in the process of developing critical thinking occurs when students are involved in the learning process through taking responsibility about their own learning.Classroom teaching modes, which include student in reflective thought, the exchange of ideas and opinions are the ones that will promote and outlive students during their learning.In the classes in which students are allowed to remain idle, lacking critical thinking and this is the behavior and the importance of teachers about their concepts on learning.The free and critical thinking can be indangered.Ideas may come to mind in different ways, with humor, sometimes even contradictory.It is part of critical thinking that sometimes come with "silent ideas" performed together with clever combinations or confusing notions.The teacher should provide students that this is a natural part of the learning process.It is also important to make clear that scorn or ridicule the individual ideas and opinions of everyone's, should not be allowed, because it creates an environment where everyone can prove a personal risk.Thinking is best formulated in a safe and indangered environment, where ideas are respected and where students are highly motivated to engage actively in critical thinking. 
Active involvement Mihaly Csikszentmihalyi (1975) demonstrated that when students are actively engaged in the learning process at an appropriately challenging level, they express great satisfaction with their involvement and increase their capacity for judgment and understanding. Students can fully understand that when they devote their full energy to the lesson, and even succeed in it, they reach the required goals. The exchange of opinions and ideas is a disciplined behavior. It requires the exchanger (the one who participates together with a partner, contributing to the joint elaboration of a thought, an experience, etc.; for short, from now on we will call this exchange) to give up something for the sake of others. It is taught by parents to young children as an important social and survival skill. However, children may accept the idea of assistance or exchange not because their parents expect it, but because they do not see it as a substantial sacrifice. They understand that the completion of something (a job started together) will be achieved, and something valuable obtained, when students are set to exchange views and ideas. The exchange of beliefs, ideas and opinions can carry risk. It asks students to present themselves to others as thinkers and persuaders who are capable of great thoughts and of avoiding errors. Educating everyone in this way shapes the learning community within which we educate everyone. 3. The methodology used in this paper reveals the role and importance of the factors affecting the correct application of research-based learning, in accordance with the level of competence, and the level of their impact on student outcomes. To determine these factors, the factors that have left their mark on the structure, cultural level and global awareness of children were studied and interpreted, such as the demographic turnover associated with migration from rural to urban areas and the capacity of our educational institutions to cope with new situations. The facilities, the classrooms where learning takes place, the high number of students in classrooms, and the school texts available for students to follow the class appropriately are all factors that influence the adoption of teaching methods in which the student is the center.
Research questions
1. What are the most common methods used by teachers, and when are teacher-centered and student-centered methods used?
2. With which method are students more active and focused, and when are student results higher: when only the teacher explains, or when the class is organized around the students?
3. Are classrooms and textbooks appropriate for developing lessons properly?

Hypotheses
1. Teachers use modern, student-centered methods during their work in the classroom.
2. Student-centered teaching is more effective and more productive than teacher-centered teaching.
3. Curricula and textbooks are suitable for carrying out the learning process appropriately, based on the implementation of modern methods and student-centered teaching.

Population and sampling
A total of 200 students were surveyed: about 100 students from four classes of compulsory education (fifth to ninth grade) and 100 students from three classes of secondary education (high school), who took part in the survey and in our observations. The survey was completed independently, in the form of a structured questionnaire with the same items, adapted to the characteristics of the groups. Measurement instruments were developed on the basis of these questions. The research questions and hypotheses were then linked to the dependent and independent variables, to the model of statistical analysis and to an appropriate sample.

Data collection
During the work on this paper, data and evidence were collected from observations and assessments carried out in the respective schools. In particular, selected lessons were observed, focusing on some of the most effective ways and methods of determining the quality of teaching. Many teachers use different forms of discussion in the classroom. Usually it is the teacher who is at the center of the discussion that takes place during teaching: the teacher asks and the students answer questions, and the teacher may address different students while remaining at the center of the discussion. There are also cases of student-centered discussion, in which the students lead: one student takes the role of leader and guides the group discussion, and the discussion then passes from one student to another. In this method the teacher is an observer, taking notes during the discussion for later use in evaluating student performance and as reference material for the next presentation. For class discussion, groups of 5 to 8 students are formed. The atmosphere of a student-centered discussion should be one in which democracy prevails and students are free to express their opinions without fear. Teachers, who have the major role in teaching, should be competent in teaching methodology, which means:
• To know and possess a variety of teaching methods, techniques and procedures.
• To know how to use these methods and to adapt them to the needs of students.
• To determine when to work with the entire class, with groups of students or with individual students.
• To recognize students' learning difficulties and their specific learning needs.
• To select and use a variety of teaching resources, including up-to-date information technology.
• To support students to focus and to make connections within the subject.
During the observations it emerged that explanation occupies a greater share of class time than discussion methods: about 60% of the lesson is taken up by explanation, versus 40% by discussion. Another phenomenon evident in the survey is the environment in which the class takes place. It is very important for the classroom to be a friendly place: the classroom environment influences students' success and is an important condition for them to develop and reveal their feelings. It is the teacher's responsibility to create an appropriate atmosphere in the classroom. The teacher should use the personal pronoun "we" instead of "I", so that students feel the teacher is working with them. This helps the teacher establish positive teacher-student and student-student relationships, creating a climate of emotional support in which students learn to respect all individuals and their ideas, and in which each group helps its members develop their own ideas. The teachers surveyed generally take care to create a suitable environment for their students. Unfortunately, classes in our country consist of a large number of students, and when leading a discussion the teacher usually applies the Socratic method. To achieve a fruitful discussion in the classroom and to meet the objectives they have set, teachers follow steps such as the following:
• selecting the generalizations that students should learn;
• providing students with information, obtained through explanation, texts or various techniques for conveying new information;
• using research questions that guide students toward deriving principles and generalizations from the information given.
Asked about the number of methods, techniques or strategies used within one lesson, 58% of teachers say they usually implement 3 and 24% implement 4; a small percentage, 7%, implement 5 teaching methods within one lesson. The number of methods and techniques applied within a lesson depends mainly on the structure of the class. Usually these methods lead to the intended conclusions, but unforeseen conclusions are often reached as well. In implementing this approach, the teacher repeatedly asks questions that encourage students' higher-level thinking, better known as critical thinking, aiming to lead the students to the conclusions and generalizations that the teacher set as goals at the beginning of the lesson.

What are the most common methods used by teachers, and when are teacher-centered and student-centered methods used?
The basic question on which this study rests is the one raised above: the level of impact of the teaching methods applied in the teaching process. On the basis of this question the first hypothesis was formulated:
H.1 Teachers use modern, student-centered methods during their work in the classroom.
To test this hypothesis, data from teachers and students, as well as the surveys they completed, were reviewed and analyzed. The discussions, surveys and questionnaire results show that teachers, supported by information and communication technology, nowadays use modern methods more than traditional ones. They regard student-centered methods as effective because the objectives are achieved to a greater extent and students acquire knowledge better through them. They often combine these methods, but remain at a theoretical level in using them; there is a gap between theory and the scientific, professional practice of these methods. In most cases the teacher applies a method as an off-the-shelf template, without considering to whom it is addressed in cognitive, physical and emotional terms. A superficial knowledge of the stages of a child's development, coupled with only theoretical knowledge of teaching methods, means that student-centered methods remain beautiful models in lesson plans. The processed results are presented in the charts below: 85% of the teachers surveyed prefer contemporary teaching methods, versus 15% who still tend to use traditional methods (Chart No. 1), and 60% of teachers say that over the school year they use modern methods more than 65% of the time (Chart No. 2). The ratio of traditional to contemporary methods is not related to the age of teachers: a teacher is not traditional simply because of age, and not every teacher new to the profession can claim to be contemporary. Teachers' documentation (annual teaching plans and lesson diaries) and classroom observations point to a combination of contemporary and traditional methods. Each teacher nevertheless has a preference, a philosophy inclining toward one method, and no one is entirely free of either; there is no absolute separation between traditional and contemporary. Today, when all pre-university classes work with textbooks selected by teachers from the approved alternative list, student-centered methods prevail, but it would be wrong to conclude that the other two approaches, the teacher-centered and the text-centered, have no place and no value. The structure of the alternative texts, based on the new curriculum and in accordance with the new programs, considerably facilitates teachers' preparatory work and their direct work with students in class. Learning strands and skills come to life when a capable teacher puts the student at the center of the class and builds the lesson around standards of achievement. The issue is not to judge a teacher by whether he or she knows the modern methods and uses them or not during class; the center of gravity lies in how the roles of the actors in the classroom are realized. For example, the Venn diagram technique has no value when the teacher "intervenes" and completes it in the student's place; likewise, what value remains in the "cluster" technique (the tree of thought) when the teacher "adds" to what the students say? Teachers therefore need to be oriented toward the principle "talk less, listen more", which should govern how class time is divided between the teacher and the students.
Teachers today recognize student-centered learning and plan strategies, methods and techniques depending on the structure of the class and on the content of the teaching unit. In the completed questionnaires, teachers list a large number of teaching methods and techniques, especially those related to critical thinking. Their lists are so long that more than a few teachers group techniques under methods, or label some of them traditional. Without excluding some confusion on the teachers' part, the time factor has already done its work: a teacher who has been doing group work for more than 15 years will rightly call it a traditional practice. Yet any strategy, method or technique, however modern and effective, will not meet the teacher's expectations if it is not planned and used in view of the objectives of the lesson. Methods must therefore start from the objectives: when objectives are well defined, realistic and measurable, they pave the way for interactive teaching methods and for lessons that weave together the students' independent work and work in small groups. Teachers, who have the major role in teaching, should be competent in teaching methodology, as outlined above: knowing a variety of methods and techniques, adapting them to students' needs, deciding when to work with the whole class, with groups or with individual students, recognizing learning difficulties and specific needs, selecting a variety of teaching resources including information technology, and supporting students to focus and make connections within the subject. The goal is not to declare current practice wrong, because there is always room for improvement. Opportunities for improvement begin with identifying the real problems, applying in practice what theory recommends, adapting it to the psychosocial environment of our schools, and stressing the great importance of teachers' professional preparation and of their passion for the mission they have undertaken.

4.2.2 With which method are students more active and focused, and when are student results higher: when the teacher only explains, or when the class is organized around student-centered methods?
The second question seeks to explain the use of student-centered methods and their role in the learning process.
H.2 Student-centered teaching methods are more effective and more productive than teacher-centered ones.
Direct observation of professional practice shows that student-centered teaching is in most cases confused with mere freedom of speech and expression in the classroom. Teachers have remained at a theoretical level in using the new methods, and there is a gap between theory and the scientific, professional practice of these methods. In most cases the teacher applies a method as an off-the-shelf template, without considering to whom it is addressed in cognitive, physical and emotional terms; a superficial knowledge of the stages of a child's development, coupled with only theoretical knowledge of teaching methods, means that student-centered methods remain beautiful models in lesson plans.
Knowledge of cognitive developmental psychology is very important for success in any form of teaching. According to Piaget, significant cognitive changes occur during middle childhood, especially when the child starts school. Around the age of 7 the child enters the stage of concrete operations, in which, on Piaget's account, the child develops a set of general rules and strategies for examining and interacting with the world. This stage of cognitive development is called "concrete" because the child's thinking is still based on concrete objects or events; until the age of 11-12 the child cannot grasp important abstract concepts. It follows that educational activity for children of this age should be as concrete and as close to life as possible. At the time the research was conducted, the degree to which teachers place the student at the center of the process, at least as a mentality, was not at its highest level, even though student-centered methods and techniques are among those most commonly reported. The ongoing reforms in education, though aimed entirely at improvement, have had their negative sides. Driven by the desire for European integration and the wish to reach European standards, many developments in education have been premature. The use of the internet in learning, with all the benefits it brings, cannot become part of mass learning because many families have no internet access. The use of other sources of information, such as extracurricular books and encyclopedias, with all the benefits they bring to knowledge and learning, is likewise not available to every student, for social and economic reasons. The repeated change of curricula in the name of improvement brings continuing confusion into education. Difficulties were encountered with the 5+4 curriculum, where many teachers found themselves having to teach classes for which they were not specialized, which required reviewing and acquiring the knowledge to be taught; faced with this, teachers found themselves unprepared, even those with years of work experience in education. The diverse publications, or alternative texts, although presented as a good opportunity for teachers to match the text to the level of their students, also created problems, because the person for whom these texts were written, the student, was not always taken into account. The development of new curricula in recent years confronted educational institutions and pre-university schools with a strong need to train teachers and future teachers and prepare them in time for the current requirements, since only by knowing and mastering these curricula can they implement them competently. Teachers not only feel this need; they have been and remain very interested in their professional growth, and their broad, active participation in national and local seminars is one of the positive indicators. Teachers, regardless of age, have adopted new teaching strategies, methods and techniques previously unknown to them, moving away from obsolete and ineffective traditional practices. Teachers were asked: Do you think there should be changes in the new pre-university curriculum and, if so, what would you want to change? The responses show that 44.5% of teachers clearly state that there should be changes in the curriculum of 9-year education, while 63.5% consider the pre-university curriculum good and have welcomed the new changes.
How suitable are the textbooks you work with? 58.3% of teachers said that the texts they work with are largely suitable, 8.2% that they are not at all appropriate, and 33.3% consider them appropriate. Teachers today therefore face the demand to answer the structure and content of the new curriculum with a more comprehensive strategy. This can be achieved by implementing various forms of classroom work (work in pairs, group work, individual work) and various models of teaching that put the spotlight on the students, take into account students' potential and individual skills, and engage all students regardless of the difficulties they have. Only the use of alternative models, varied strategies, different forms of learning, interactive methods, individual programs and the like facilitates the learning process for all students.

5. For years, state institutions and centers, foundations and non-governmental agencies have been engaged in the training and retraining of teachers, providing courses, qualification seminars, contemporary literature, and experience with alternative approaches to teaching and learning (Ornstein, Allan C., Curriculum: Foundations, Principles and Issues, ISP publication, Tirana, 2003, p. 71). Amid all these innovations, the teacher has the academic freedom to select what he or she prefers, what seems most appropriate, and what can be realized most easily in the conditions of the school and classroom where he or she teaches, although there have also been cases in which senior education specialists and inspectors dictated ready-made recipes to teachers and demanded the use of template models they judged to be the best. We can already speak of effective experience in a majority of schools in all regions of the country. In addition to the professional programs conducted by educational institutions in cooperation with foundations and NGOs, teachers, especially those with experience, have pursued self-qualification to update their practices in teaching and learning. The qualification of teachers, which remains a priority for educational institutions and for other stakeholders active in the field of education, and self-qualification, a continuing demand on every teacher, serve professional updating and have an impact on student achievement when implemented effectively. Teachers cite as their sources of qualification: personal libraries, the school library, trainings organized by the RED/EO, and training from outside educational organizations. 35% of teachers cite libraries and training from the RED/EO as their qualification and training resources, while 26.5% cite, in addition to libraries and RED/EO training, courses provided by independent, non-governmental agencies and centers. About 70% of the teachers who took part in the questionnaire have obtained the necessary qualifications in modern methods. Teachers were also asked: What are the main sources of your information and training?
Independent, non-governmental training centers and agencies were cited by 20% of them. Teachers in the country have difficulty using computers and in this way lag behind the development of technology. Teachers who do not know the technology cannot guide students toward learning from alternative sources, whereas today even primary school students obtain ready-made information from the internet and are able to analyze and synthesize it. Programs and MES guidelines recommend integrated learning, although it is not genuinely applied in teaching, despite its particular importance in easing learning and in making learning interesting and concrete. Student-centered learning aims at learning by doing, but with all these endless reforms and with the current school infrastructure in the country, marked by a lack of teaching aids as well as of teacher training, this remains wishful thinking. The teacher is the key factor in teaching, and adequate training will be the key to the teaching we aim to achieve.

6. Conclusions
Student-centered teaching is one of the approaches that puts the learner at the center of attention and gives priority to critical thinking and to the formation of the students' personality. With traditional methods, researchers and teachers have noticed superficial understanding and passive knowledge among students of all ages and classes. Effective teaching and learning require the use of appropriate pedagogical and methodological tools. "Learning by doing" has become a motto for all teachers, after the convincing argument brought by John Dewey that children should be engaged in seeking out new ideas; he also stressed that students should be presented with real-life problems, so that we can then help them find the information they need to solve these problems themselves. Many teachers combine different methodologies in their teaching. No method, old or new, should serve as the sole way of teaching. Contemporary and traditional methods aim at the same things in education: improving teaching, raising student achievement, forming good and productive citizens, and improving society. Schools and teachers should seek a middle way, an elusive and abstract notion, in which neither the subject matter nor the students receive extreme emphasis. Teaching is, conceptually and economically, whatever serves student learning. The purpose and timing of changes to methodology, and to education in general, through successive reforms matter greatly, because a society cannot assimilate all at once views that bring in the extreme of the "new" and still remain a democratic education; the type of society in which schools develop is significantly reflected in our educational system. We think that work should be done on teacher training and on improving curriculum development toward greater space for the application of research, not only as a vision but as a basic element of content. A precondition for the success of the methods mentioned will be the level of democratization of the curriculum-teacher-student relationship: at this level of the trinomial, the teacher shows his or her level in taking on the role of teacher and coach of the students' abilities and competencies for self-directed learning. As suggestions and main tasks of this paper we recommend:
• avoiding continuous, overlapping reforms that confuse the primary and secondary goals and objectives;
• raising the standard of learning;
• improving initial and continuing teacher training;
• increasing the responsibility of school inspectors for quality control, not only for administrative purposes;
• increasing the motivation of teachers through financial incentives, new perspectives in their professional careers, improved working conditions, and so on;
• involving the community, parents and intellectuals in the problems of education, so that they give their opinion and contribute to the improvement of teaching;
• coherence and comprehensiveness across all schools, at all levels, in every town or village, in training programs for innovation in our education.
In percentage terms, the technique that stands out in the table is brainstorming: 82.6% of respondents stated that they find it the most useful. Project work, by contrast, is one of the least used techniques: 62.6% of teachers say they have never used it, and the share of teachers who implement it almost all the time, 12.2%, is lower than for any other technique. Curricula and textbooks are suitable for carrying out the learning process appropriately, based on the implementation of modern methods and student-centered teaching.
v3-fos-license
2016-05-12T22:15:10.714Z
2013-01-02T00:00:00.000
8871137
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0053017&type=printable", "pdf_hash": "46fbded46ef772b1a05dd453380d8778a7c2ad7c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44847", "s2fieldsofstudy": [ "Biology" ], "sha1": "46fbded46ef772b1a05dd453380d8778a7c2ad7c", "year": 2013 }
pes2o/s2orc
Proteomic Analysis of Lipid Droplets from Caco-2/TC7 Enterocytes Identifies Novel Modulators of Lipid Secretion In enterocytes, the dynamic accumulation and depletion of triacylglycerol (TAG) in lipid droplets (LD) during fat absorption suggests that cytosolic LD-associated TAG contribute to TAG-rich lipoprotein (TRL) production. To get insight into the mechanisms controlling the storage/secretion balance of TAG, we used as a tool hepatitis C virus core protein, which localizes onto LDs, and thus may modify their protein coat and decrease TRL secretion. We compared the proteome of LD fractions isolated from Caco-2/TC7 enterocytes expressing or not hepatitis C virus core protein by a differential proteomic approach (isobaric tag for relative and absolute quantitation (iTRAQ) labeling coupled with liquid chromatography and tandem mass spectrometry). We identified 42 proteins, 21 being involved in lipid metabolism. Perilipin-2/ADRP, which is suggested to stabilize long term-stored TAG, was enriched in LD fractions isolated from Caco-2/TC7 expressing core protein while perilipin-3/TIP47, which is involved in LD synthesis from newly synthesized TAG, was decreased. Endoplasmic reticulum-associated proteins were strongly decreased, suggesting reduced interactions between LD and endoplasmic reticulum, where TRL assembly occurs. For the first time, we show that 17β-hydroxysteroid dehydrogenase 2 (DHB2), which catalyzes the conversion of 17-keto to 17 β-hydroxysteroids and which was the most highly enriched protein in core expressing cells, is localized to LD and interferes with TAG secretion, probably through its capacity to inactivate testosterone. Overall, we identified potential new players of lipid droplet dynamics, which may be involved in the balance between lipid storage and secretion, and may be altered in enterocytes in pathological conditions such as insulin resistance, type II diabetes and obesity. Introduction Lipid droplets (LD) comprise a core of triacylglycerols (TAG) and cholesterol esters surrounded by a monolayer of phospholipids, cholesterol and of a variety of proteins [1,2]. TAG synthesis takes place at the endoplasmic reticulum (ER) membrane, where enzymes required for their synthesis are located. It is now widely accepted that the newly synthesized TAG accumulate between the two phospholipid leaflets of the ER membrane and that, after reaching a critical size, the nascent lipid droplet may bud off toward the cytosol but also, in hepatocytes and enterocytes, into the ER lumen where triglyceride-rich lipoprotein (TRL) assembly occurs [1,3,4]. The current model of TRL assembly proposes a two-step process, consisting of the formation of a lipid-poor apolipoprotein B (apoB) particle followed by its fusion with a luminal TG-rich apoB-free lipid droplet formed independently. The microsomal TAG transfer protein (MTP) plays an essential role in TRL assembly, for the co-translational lipid recruitment by apoB to form the primordial apoB particle as well as for the luminal LD production (for reviews, see [5,6]). The function and fate of TAG present in LD vary depending on cell types. LD were essentially studied in adipocytes, because they are specialized in TAG storage and have a single very large lipid droplet filling the cytoplasm. Upon fasting, TAG of the LD are hydrolyzed and fatty acids are released into the circulation to provide energy to other organs such as muscles and heart. In mammary cells, the LD are exocytosed to create the milk globules during lactation. 
In hepatocytes and enterocytes, TAG present in cytosolic LD may contribute to TRL assembly through a mechanism of hydrolysis-reesterification [7,8]. The fatty acids, mono-and diacylglycerols released by lipolysis from cytosolic LD can participate to new TAG synthesis at the ER membrane. However, the proteins and enzymes involved in the control of the TAG partition between cytosol and ER lumen, i.e. between storage and secretion, and the underlying mechanisms, are still poorly understood in these cells. The proteins associated with LD have been characterized in different specialized mammalian cell types including 3T3-L1 adipocytes, mammary epithelial cells, hepatic cells (for review [9]), Caco-2/TC7 enterocytes [10], muscle cells [11] and insulin-producing b-cells [12]. These studies indicate that the proteome of cytosolic LD depends on the cell type although common features occur. For example, the structural PLIN proteins (previously known as PAT family proteins) [13] are always identified on LD. Perilipin-1 is found specifically on the adipocyte lipid droplet, perilipin-5/OXPAT is expressed in cells that have a high capacity for fatty acid oxidation, such as cardiac muscle cells, while perilipin-2/ADFP/ADRP and perilipin-3/TIP47 are ubiquitous (for review [14]). Similarly, proteins involved in lipid metabolism, intracellular traffic or signalling are always identified, but can vary from one cell type to another [9]. Moreover, the protein composition of LD in a given cell type may differ depending on the physiopathological state of the cell. In summary, although cytosolic lipid droplets were previously considered simply as long term lipid storage bodies, it is now clear that they are cellular organelles involved actively in the control of lipid metabolism, in direct and dynamic interaction with other organelles like the ER and mitochondria [11,12,15,16]. Observations made in enterocytes in vivo during lipid absorption have clearly shown that a dynamic accumulation and depletion of TAG in LD occurs during the process of fat absorption, suggesting that TAG present in cytosolic LD contribute to chylomicron production [10,17,18]. Recently, we characterized the protein endowment of cytosolic LD isolated from Caco-2/TC7 enterocytes 24 h after incubation with lipid micelles and thus in a state of cytosolic LD-associated TAG mobilization [10]. When supplied with lipid micelles, these human enterocytes are able to produce TRL and to store, in cytosolic LD, TAG that can be subsequently mobilized to contribute to TRL production in the absence of lipid micelles [19,20]. Furthermore, we showed that the extent of TAG targeting into the ER lumen, and thus the balance between storage and secretion, is modulated by nutrients, including glucose [19] or polyphenols [21]. High levels of intestinally derived lipoproteins are associated with increased cardiovascular risk and there is evidence of altered TRL secretion by intestine in pathological conditions, such as insulin resistance, type II diabetes and obesity [22,23,24,25]. An imbalance between the cytosolic and luminal LD dynamics could contribute to this altered TRL secretion and it is thus important to determine the underlying mechanisms that control the TAG partition between cytosol and ER in enterocytes. The hepatitis C virus (HCV) core protein has the ability to impair the balance between TAG storage and secretion in hepatocytes [26]. 
This structural protein, forming the capsid shell of HCV, is targeted to the cytosolic side of the ER membrane from where it migrates onto the surface of LD, possibly by lateral diffusion [27]. Infected patients can develop hypobetalipoproteinemia as well as liver steatosis [26,28], and studies in transgenic mice expressing HCV core protein indicated that core protein on its own is sufficient to provoke these effects in hepatocytes, i.e. a decrease of TRL secretion and a cytosolic accumulation of LD [29,30]. Moreover, although lipoprotein secretion was not examined, it has been shown that cells transfected with HCV core protein accumulate LD [31,32]. This study aimed to identify, in Caco-2/TC7 enterocytes, LD-associated proteins that could be involved in the partition of TAG between storage and secretion. HCV core protein was used as a tool to modify the protein coat of LD. Results demonstrated that HCV core protein expression in Caco-2/TC7 enterocytes impaired their TRL secretion capacity, as compared to control cells. Differential proteomics allowed the identification of proteins that were differentially expressed in LD fractions isolated from Caco-2/TC7 cells expressing HCV core protein or not. Among them, we show for the first time that 17β-hydroxysteroid dehydrogenase 2 (DHB2), a member of the short chain dehydrogenase/reductase superfamily, modulates lipid secretion by Caco-2/TC7 enterocytes.
The vector pCDNA3.1-mycSV40 was made by cloning the SV40 promoter, isolated as a MluI-HindIII fragment from pGL3 (Promega), between the MluI and HindIII restriction sites of pCDNA3.1-myc (provided by D. Pasdeloup [36]). All the other constructs used were generated by PCR, as described in Table 1, and cloned into pCDNA3.1-mycSV40. Caco-2/TC7 cells grown to 70% confluence on 35-mm dishes were transfected with 0.5 µg of plasmid DNA using Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. To obtain the stable Caco-2/TC7 cell line expressing GFP-HCV core protein (Caco-2/TC7 GFP-CP), transfected cells were sorted by FACS and grown under antibiotic selection. For immunofluorescence studies, Caco-2/TC7 cells were seeded on glass coverslips and transfected as described above when 60% confluent. When appropriate, five hours after transfection, cells were incubated with 0.6 mM oleic acid for the last 24 h of culture to promote lipid droplet formation. Since lipid micelles are cytotoxic to undifferentiated Caco-2/TC7 cells, oleic acid was supplied complexed to BSA. For this, oleic acid (6 µl of a 100 mM stock solution in chloroform/methanol 2:1 (v/v) per ml of final medium to prepare) was dried under a stream of nitrogen, then complexed to BSA by incubation with fetal calf serum (0.2 ml per ml of final medium to prepare) for 1 h at 37 °C. The mixture was then adjusted to 1 ml with culture medium without serum and supplied to the cells. HEK 293T cells (American Type Tissue Culture Collection) were grown at 37 °C in Dulbecco's modified Eagle medium (DMEM, Invitrogen) supplemented with 10% fetal calf serum.
shRNA
The use of lentiviral vectors expressing small hairpin RNA (shRNA) was described previously [38]. Briefly, HEK 293T cells were cotransfected with three plasmids: pVSV-G, pCMVΔR8.91 (provided by D. Trono [http://tronolab.com/index.php]), and pLKO.1Puro-shDHB2.
The cell supernatant containing recombinant lentivirus was harvested 3 days post-transfection and used to transduce Caco-2/TC7 cells, seeded on filters 4 days before, in the presence of hexadimethrine bromide (5 µg/ml polybrene; Sigma). After overnight incubation, the cells were maintained in selective medium containing puromycin (10 µg/ml) for 3 days until confluence, then cultured up to day 18 for differentiation. Silencing of HSD17B2 was done by using the 19-nucleotide sequence TGGTGAATGTCAGCAGCAT (shDHB2). ShControl corresponds to a sequence specific to the luciferase gene (GTGCGTTGCTAGTACCAAC). Silencing efficiency was estimated by quantitative RT-PCR.
Reverse Transcription and Real-time PCR Analysis
Total RNA was isolated using TRI Reagent (Molecular Research Centre) according to the manufacturer's protocol. The reverse transcription experiments were performed with 1 µg of total RNA in a total volume of 20 µl. PCR reactions were performed in quadruplicate using a LightCycler machine (Roche). For each reaction, a 1:400 final dilution of the reverse transcription product was used with a 0.4 µM final concentration of each primer in SYBR Green I master mixture (Roche). PCR conditions were one step of denaturation (8 min at 95 °C) followed by 45 cycles (each cycle consisted of 10 s at 95 °C, 10 s at 60 °C (62 °C for ACSL3), and 10 s at 72 °C). Gene expression was normalized to the expression of human ribosomal protein L19. The oligonucleotide primers used for RT-PCR analysis are shown in Table S1.
Fluorescence Microscopy
Cells on glass coverslips were fixed with 4% paraformaldehyde for 10 min at room temperature and, after two washes with phosphate-buffered saline (PBS), permeabilized with 0.03% saponin in PBS for 30 min. After incubation with the appropriate primary antibody for 1 h at room temperature, the coverslips were washed twice with PBS and incubated with the secondary antibody for an additional hour. After two further washes with PBS, they were stained for neutral lipids by incubation for 10 min with BODIPY 493/503 (10 µg/ml; Invitrogen) or with LD540 (0.5 µg/ml), kindly provided by C. Thiele [39], or mounted directly in Fluoprep (BioMérieux) containing 1 µg/ml 4',6-diamidino-2-phenylindole dihydrochloride (DAPI; Sigma). The samples were examined using a Zeiss LSM 710 Meta confocal microscope.
Subcellular Fractionation
Lipid droplets from Caco-2/TC7 cells were isolated by density gradient centrifugation as described previously [10]. Briefly, differentiated Caco-2/TC7 cells incubated for 24 h with lipid micelles to promote LD formation were lysed twice using a cell disruption bomb, then cell homogenates were centrifuged for 10 min at 1000 g at 15 °C. The LD-containing supernatant was adjusted to 0.33 M sucrose, put in a new centrifuge tube, and overlaid with sucrose-containing buffers to form a discontinuous sucrose gradient ranging from 0.33 to 0 M. Tubes were centrifuged for 2 h (150 000 g, 15 °C) and 1 ml fractions were recovered from top to bottom. The pellet was resuspended in 2 ml of buffer.
Western Blotting
Proteins were resolved on 10% (5% for apoB) sodium dodecyl sulfate-polyacrylamide gels and transferred to Hybond ECL membrane (Amersham). Blots were blocked for 30 min with 5% dried milk powder in 20 mM Tris-HCl, pH 7.6, 137 mM NaCl, and 0.1% Tween 20 (TBS-Tween) and incubated overnight at 4 °C with appropriate antibodies diluted in TBS-Tween containing 1% dried milk.
Blots were developed by enhanced chemiluminescence using ECL reagent (Amersham) and bands were visualized using the Image Reader LAS-4000 (Fujifilm).
In-gel Trypsin Digestion, iTRAQ Labelling and Nano-liquid Chromatography-tandem Mass Spectrometry Analysis (LC-MS/MS)
The 1 ml top fractions recovered from Caco-2/TC7 and Caco-2/TC7 GFP-CP cell samples by density gradient ultracentrifugation were freeze-dried, and all of the material was subjected to in-gel trypsin digestion as described previously [10]. The iTRAQ (isobaric tag for relative and absolute quantitation) labelling of peptides was performed according to the manufacturer's instructions (Applied Biosystems). Briefly, one unit of label (defined as the amount of reagent required to label 100 µg of protein) was thawed, reconstituted in 700 µl of ethanol and incubated with the samples for 2 h at room temperature. After labelling with different iTRAQ reagents, samples prepared from Caco-2/TC7 and Caco-2/TC7 GFP-CP cells were pooled by pair. Nano-liquid chromatography and tandem mass spectrometry were performed as described in [40]. Four independent experiments were performed in duplicate. The proteins identified in every sample were used to normalize the iTRAQ ratios between the different experiments. The mean value of the iTRAQ ratios for all these "standard" proteins was 0.9945, i.e. very close to one, as expected. To be listed in Table 2, a protein had to be identified at least three times out of the four experiments. For each protein, the normalized iTRAQ ratios and the mean ± SD were calculated and compared to the theoretical mean (0.9945) to determine whether the protein was significantly differentially expressed between the LD fractions isolated from Caco-2/TC7 GFP-CP cells and Caco-2/TC7 cells.
Lipid Analysis and Estradiol/Estrone Analysis
After incubation with lipid micelles containing [1-14C]oleic acid, lipids extracted from cells and culture media were analyzed as described previously [20]. Briefly, lipids were extracted with chloroform/methanol (2:1, v/v) and fractionated by TLC. Incorporation of [1-14C]oleic acid into lipids was measured by liquid-scintillation counting of excised radioactive bands of the TLC plates. The DHB2 activity assay was performed using a protocol adapted from [41]. Basolateral media (0.5 ml) of cells incubated with [3H]E2 were extracted with 1 ml ethyl acetate:isooctane (1:1, v/v), then the organic phase was evaporated. The residue was dissolved in 50 µl chloroform/methanol (2:1, v/v) and mixed with carrier steroids (250 nmoles each of E1 and E2). Steroids were separated by TLC using chloroform:ethyl acetate (3:1) as the mobile phase and visualized with I2 vapour. The E1 and E2 spots were excised and measured by liquid-scintillation counting.
Statistical Analysis
Data are presented as means ± SD. Statistical significance was evaluated using Student's t test for unpaired data.
Creation of the HCV Core Protein-expressing Cell Line Caco-2/TC7 GFP-CP
To study the effect of HCV core protein on the TAG balance between storage and secretion in Caco-2/TC7 enterocytes, we generated the cell line Caco-2/TC7 GFP-CP, which expresses the HCV core protein. For this, the core gene followed by the signal sequence of the next protein of the polyprotein, i.e. envelope protein E1, from HCV genotype 1b was cloned into the GFP expression vector pNeoSV40-EGFP-C1.
The signal sequence targets the core protein to the ER membrane and, after cleavage by signal peptidase and signal peptide peptidase, core protein will traffic to lipid droplets [27,42]. An N-terminal fusion was thus required in order to prevent separation of core protein from the GFP-tag after cleavage of the signal peptide. Expression of the GFP-CP fusion protein was under the control of the SV40 promoter, which has been shown to preserve moderate expression of the transgene even in differentiated Caco-2/TC7 cells [34]. GFP-positive cells were sorted by FACS and, after antibiotic selection, several clones were isolated and the stable cell line Caco-2/TC7 GFP-CP was established. Caco-2/TC7 GFP-CP cells were cultured on filters and the expression of GFP-CP was analyzed over time at the mRNA and protein levels by quantitative RT-PCR (Fig. 1A) and by western blot (Fig. 1B), respectively. Under these conditions cells reach confluence at day 7 and then differentiate into enterocyte-like cells, i.e. gradually acquire their capacity to secrete TRL upon addition of lipid micelles [20]. As shown in Fig. 1A, the core gene was indeed expressed only in Caco-2/TC7 GFP-CP cells and, although the mRNA level of HCV core protein decreased with time, it remained expressed in cells cultured on semi-permeable filters for 18 days, i.e. differentiated Caco-2/TC7 GFP-CP cells, allowing lipid secretion analysis. As shown by western blot, a similar pattern was obtained at the protein level (Fig. 1B). Absence of free GFP was checked (Fig. 1B and Fig. S1). No difference in core expression was observed whether lipid micelles were supplied or not (data not shown).
Impact of HCV Core Protein on Lipid Metabolism in Caco-2/TC7 Cells
We then analyzed whether core protein expression had an impact on the lipid metabolism of Caco-2/TC7 enterocytes. Differentiated Caco-2/TC7 cells, cells expressing the empty vector (Caco-2/TC7 GFP) and cells expressing the GFP-core fusion protein (Caco-2/TC7 GFP-CP) were incubated with lipid micelles containing [1-14C]oleic acid for 24 h. After this incubation period, less than 10% of the oleic acid remained in the apical medium (Fig. S2A). This percentage was already reached after 16 h of incubation, suggesting that fatty acid uptake was complete. Cells and media were analyzed for lipid synthesis and secretion (Fig. 2A and B). Lipids were extracted from cell lysates and media, fractionated by TLC, and the radioactivity recovered in the TAG and PL spots was measured. A significant decrease in TAG and PL secretion was observed in Caco-2/TC7 GFP-CP cells, compared to control cells (50% and 35% reduction, respectively) (Fig. 2B). Interestingly, the decreased lipid secretion was not accompanied by decreased apoB secretion (Fig. 2C). Since there is one apoB molecule per TRL, this suggests secretion of a similar number of smaller TRL. Since the time course of fatty acid uptake was similar for Caco-2/TC7 and Caco-2/TC7 GFP-CP cells during the incubation period (Fig. S2B), the significantly decreased lipid secretion by Caco-2/TC7 GFP-CP cells was not due to a delayed fatty acid uptake that could have led to delayed TAG secretion. The amount of newly synthesized intracellular TAG and PL was not significantly different between core protein-expressing Caco-2/TC7 cells and control cells (Fig. 2A). However, it must be noted that the approximately 50% decrease of TAG secretion by Caco-2/TC7 GFP-CP cells represents about 10 nmoles (Fig. 2B), a value that was indeed within the error bar of the intracellular lipid content, since the percentage of secretion is a minor fraction of the total synthesized lipids (about 10%; compare the scales of the y-axis in Fig. 2A and B).
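As a rough illustration of how figures like those quoted above (roughly 10% of newly synthesized TAG secreted) can be derived from the scintillation counting described in the Methods, the sketch below converts hypothetical dpm values from excised TLC spots into nmoles of incorporated oleate and a percent-secretion value. The specific activity and count values are invented for illustration only and are not taken from the paper.

```python
# Illustrative sketch (hypothetical numbers): converting scintillation counts from
# excised TLC spots into nmoles of labelled oleate and a percent-secretion figure.

def dpm_to_nmol(dpm: float, specific_activity_dpm_per_nmol: float) -> float:
    """Convert disintegrations per minute to nmoles of labelled oleate."""
    return dpm / specific_activity_dpm_per_nmol

# Hypothetical inputs
specific_activity = 2000.0      # dpm per nmol of [1-14C]oleic acid (assumed)
tag_cell_dpm = 400_000.0        # dpm in the cellular TAG spot (assumed)
tag_medium_dpm = 40_000.0       # dpm in the basolateral-medium TAG spot (assumed)

tag_cell_nmol = dpm_to_nmol(tag_cell_dpm, specific_activity)
tag_medium_nmol = dpm_to_nmol(tag_medium_dpm, specific_activity)

total_tag = tag_cell_nmol + tag_medium_nmol
percent_secreted = 100.0 * tag_medium_nmol / total_tag
print(f"TAG synthesized: {total_tag:.1f} nmol, secreted: {tag_medium_nmol:.1f} nmol "
      f"({percent_secreted:.1f}% of total)")
```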
Table 2. List of proteins identified in lipid droplet fractions isolated from Caco-2/TC7 GFP-CP cells compared to those of Caco-2/TC7 cells, ranked by decreasing iTRAQ labelling ratios. The two samples of peptides generated by trypsin digestion of the proteins present in lipid droplet fractions were labelled with two different iTRAQ labels and then analyzed by LC-MS/MS. The proteins listed were identified in at least three out of four independent experiments performed in duplicate. An iTRAQ ratio above one indicates that the protein was more abundant in the lipid droplet fraction of Caco-2/TC7 GFP-CP cells than in that isolated from Caco-2/TC7 cells; conversely, a ratio below one indicates that the protein was less abundant in the lipid droplet fraction of Caco-2/TC7 GFP-CP cells. Stars highlight proteins whose amounts are significantly different between the two cell lines (P<0.05). doi:10.1371/journal.pone.0053017.t002
Next, we analyzed the localisation of HCV core protein in Caco-2/TC7 enterocytes by immunofluorescence. After incubation with oleic acid for 24 h, LD were clearly induced, as visualized by the neutral lipid stain LD540 (Fig. 2D, compare + and - oleic acid). Because free GFP was not detected by western blot in Caco-2/TC7 GFP-CP cells (Fig. 1B and Fig. S1), the GFP fluorescence was due to the GFP-CP fusion protein. The GFP-CP fusion protein localized to LD of Caco-2/TC7 GFP-CP cells (Fig. 2D, panel d). Overall, we have shown that in Caco-2/TC7 enterocytes HCV core protein localizes to LD and leads to a decreased lipid secretion, as observed previously in hepatocytes [29,31].
Differential Proteomics of Lipid Droplet Fractions Isolated from Differentiated Caco-2/TC7 GFP-CP Versus Caco-2/TC7 Cells
To gain insight into how HCV core protein interferes with the protein composition of LD, we performed differential proteomics on the LD fractions isolated from differentiated core-expressing Caco-2/TC7 cells versus native Caco-2/TC7 cells, which allowed both the identification and the relative quantification of proteins between the two samples. Caco-2/TC7 GFP-CP and Caco-2/TC7 cells were grown on filters for 17 days for differentiation and then incubated with lipid micelles for the last 24 h. LD were then isolated using sucrose gradients. While the silver-stained gels of the isolated LD fractions were obviously different from the starting cell lysates, the protein profiles of the LD fractions isolated from Caco-2/TC7 and Caco-2/TC7 GFP-CP cells were rather similar (Fig. S3). However, on a 1D silver-stained gel, a single band may contain many proteins and the quantitative modification of one of them may not be observed, in particular if its relative amount is low. As described previously for Caco-2/TC7 cells [10], the lowest density fraction (fraction number 1) isolated from Caco-2/TC7 GFP-CP was highly enriched in perilipin-2/ADRP, a marker of lipid droplets (Fig. 3a). Because differentiated Caco-2/TC7 cells secrete TRL, which might co-purify with cytosolic LD, the sucrose gradient fractions were examined for the presence of apoB48, the non-exchangeable apolipoprotein present in TRL. The results reported in Fig. 3f indicate clearly that fraction 1 was not contaminated by TRL. Indeed, apoB48 was detected in the bottom fractions, which contained membranes (including microsomes).
Fractions were also tested for PDI (protein disulfide isomerase), calnexin and GRP78 (78 kDa glucose-regulated protein), all of which are microsomal proteins. These proteins, routinely identified by proteomics in LD fractions (for review, see [9]), could hardly be detected by western blot in LD-containing fractions (Fig. 3b, c, e). Finally, the mitochondrial marker HSP60 could not be detected in the LD fraction (Fig. 3d).
Figure 1. HCV core protein expression in Caco-2/TC7 GFP-CP cells as a function of time in culture. Caco-2/TC7 cells expressing HCV core protein-GFP (TC7 GFP-CP) or not (TC7) were grown on filters for the indicated days: confluence is reached on day 7, then cells differentiate, i.e. TRL secretion increases gradually with time in culture. Cells were analyzed for expression of HCV core transcripts by quantitative RT-PCR (A) and for HCV core protein and GFP content by western blot (B) using antibodies against HCV core protein or against GFP. Blots were probed for actin as protein loading control. doi:10.1371/journal.pone.0053017.g001
For differential proteomics, proteins contained in the LD fractions isolated from Caco-2/TC7 and Caco-2/TC7 GFP-CP cells were digested with trypsin and then labeled with different iTRAQ reagents. Peptides were identified by LC-MS/MS. This differential proteomic approach allowed the relative quantitative identification of 42 different proteins (Table 2). An iTRAQ ratio greater than one indicated a higher abundance of the protein in LD isolated from core-expressing Caco-2/TC7 cells than from control Caco-2/TC7 cells. Conversely, a ratio less than one indicated that the protein was less abundant in the LD fraction from Caco-2/TC7 GFP-CP cells than from Caco-2/TC7 cells. The range of the ratios was rather limited (i.e. 0.389-1.837) and consistent with the similarity of the protein profiles observed on 1D silver-stained gels of LD fractions isolated from Caco-2/TC7 or Caco-2/TC7 GFP-CP cells. The most abundant protein identified was perilipin-2 (data not shown), as reported previously for Caco-2/TC7 cells [10], confirming the validity of this approach. Remarkably, 21 proteins (50%) were directly related to lipid metabolism, including LD coat proteins (perilipins) and enzymes involved in fatty acid activation or in the synthesis or degradation of acylglycerols, phospholipids, cholesterol and steroids (Table 2). Proteins involved in intracellular trafficking (Rabs) or known to be associated with the ER were also identified. The two proteins that were most up-regulated in LD fractions from Caco-2/TC7 GFP-CP cells were both involved in steroid metabolism. 17β-hydroxysteroid dehydrogenase type 2 (DHB2) is a member of the SDR (short chain dehydrogenase/reductase) superfamily [43] and catalyzes the oxidative conversion between 17-ketosteroid and 17β-hydroxysteroid pairs such as estrone and estradiol or androstenedione and testosterone [41]. 3β-hydroxysteroid dehydrogenase (3BHS1) is responsible for the oxidation and isomerization of Δ5-3β-hydroxysteroid precursors to form Δ4-ketosteroids and plays a crucial role in the biosynthesis of all classes of hormonal steroids. However, because of its variability, the iTRAQ ratio obtained for 3BHS1 did not reach statistical significance.
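The Methods state that, for each protein, the normalized iTRAQ ratios from the independent experiments were compared to the theoretical mean of 0.9945 to decide whether the protein was differentially abundant. The sketch below shows one way such a comparison could be carried out (per-experiment normalization followed by a one-sample Student's t test); the ratio and normalization values are hypothetical, and the authors' exact procedure may differ in detail.

```python
# Hedged sketch of per-experiment iTRAQ ratio normalization and a one-sample
# t test against the theoretical mean of the "standard" proteins (0.9945).
# All numerical values below are invented for illustration.
import numpy as np
from scipy import stats

# Hypothetical raw GFP-CP / control iTRAQ ratios for one protein,
# one value per independent experiment.
raw_ratios = np.array([1.92, 1.75, 1.81, 1.86])

# Per-experiment normalization factors derived from proteins identified
# in every sample (assumed values).
norm_factors = np.array([1.05, 0.97, 1.01, 0.99])

normalized = raw_ratios / norm_factors
mean, sd = normalized.mean(), normalized.std(ddof=1)

# One-sample t test against the theoretical mean of 0.9945; a significant
# result would flag the protein as differentially abundant between the lines.
t_stat, p_value = stats.ttest_1samp(normalized, popmean=0.9945)
print(f"normalized ratio = {mean:.3f} +/- {sd:.3f}, p = {p_value:.4f}")
```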
Other up-regulated proteins were monoglyceride lipase (MGLL), which hydrolyses monoacylglycerols to free fatty acids and glycerol, the PLIN protein perilipin-2, and lysophosphatidylcholine acyltransferase 2. In contrast, microsomal triglyceride transfer protein (MTP) and its subunit protein disulfide isomerase A1 (PDIA1), which are required for TRL assembly and lipid droplet production in the ER lumen, were strongly decreased in LD-containing fractions isolated from Caco-2/TC7 GFP-CP cells, as were a number of ER-associated chaperones. Additionally, unlike perilipin-2, perilipin-3 was decreased.
Protein Expression and Localisation of Selected Proteins in Caco-2/TC7 and Caco-2/TC7 GFP-CP Cells
Results obtained by this quantitative proteomic approach were confirmed by immunoblotting of LD fractions isolated from differentiated Caco-2/TC7 and Caco-2/TC7 GFP-CP cells supplied with lipid micelles for 24 h (Fig. 4). Because there is no suitable loading control for LD fractions, and since there was no modification of the TAG content between the cell lines (data not shown), experiments were performed by starting with equal numbers of cells and loading equal volumes. To improve the immunodetection of the proteins, LD fractions isolated from Caco-2/TC7 or Caco-2/TC7 GFP-CP cells were freeze-dried in order to load ten times more material per well than for the western blots shown in Fig. 3. The western blots performed using available antibodies against eight proteins identified in the proteomic study (Table 2) confirmed the relative increase, stability or decrease of these proteins between LD fractions isolated from Caco-2/TC7 cells expressing core protein or not (Fig. 4). As a consequence of the higher amount of material loaded on the gels, PDI, which was hardly detected when using unconcentrated isolated LD fractions (Fig. 3), became detectable. Next, we analyzed by confocal microscopy the intracellular localisation of some of the identified proteins connected to lipid metabolism. We selected DHB2, 3BHS1 and PCAT2, which were found up-regulated in Caco-2/TC7 GFP-CP cells compared to Caco-2/TC7 cells, and ACSL3 and CB043, which were not altered but are frequently found in proteomic studies [9]. Caco-2/TC7 cells were transfected with plasmids encoding the proteins of interest fused to an N-terminal myc-tag, and LD formation was then induced by incubation with 0.6 mM oleic acid for 24 h. The proteins were visualised using a specific anti-myc antibody and LD were visualised with BODIPY 493/503. As shown in Fig. 5A, a clear LD-associated localisation was detected for all the tested proteins. However, whereas the localisation of 3BHS1 and CB043 was almost exclusively around LD, the localisation of DHB2, PCAT2 and ACSL3 around LD was partial. Double transfections of Caco-2/TC7 cells with pGFP-CP, expressing GFP fused to the HCV core gene of genotype 1b, and the proteins listed above showed co-localisation of the studied proteins with the core protein around lipid droplets (Fig. 5B). Because several of these proteins were enriched in LD fractions (Table 2), we analyzed whether they were up-regulated at the mRNA level as well. As shown in Fig. 6A, the mRNA levels of HSD17B2, HSD3B1, perilipin-2 and, indeed, of HCV core protein were significantly higher in Caco-2/TC7 GFP-CP cells than in Caco-2/TC7 cells. However, the mRNA levels were unchanged for MGLL and LPCAT2, proteins that were found up-regulated in the proteomic approach, as well as for C2orf43 and ACSL3, proteins that were not modified.
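The Methods indicate only that gene expression was normalized to ribosomal protein L19; the comparative Ct (2^-ΔΔCt) calculation sketched below is one common way to obtain such normalized fold changes and is given purely as an illustration, with invented Ct values, not as the authors' actual computation.

```python
# Minimal sketch of the comparative Ct (2^-ddCt) method for normalizing a target
# gene to a reference gene such as ribosomal protein L19. Ct values are hypothetical;
# the authors may have used a different quantification strategy (e.g. standard curves).

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of the target gene in 'sample' vs 'control', normalized to the reference gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: HSD17B2 in GFP-CP cells vs control cells, normalized to L19 (invented Cts)
fold = relative_expression(ct_target_sample=24.1, ct_ref_sample=16.8,
                           ct_target_control=25.6, ct_ref_control=16.9)
print(f"HSD17B2 fold change (GFP-CP vs control): {fold:.2f}")
```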
The Caco-2 cell line derives from a human epithelial colorectal adenocarcinoma and TC7 is a clone of Caco-2 cells [37]. Although these cells differentiate such that their phenotype resembles absorptive enterocytes of the small intestine, the line still has a cancerous origin and proteins might therefore be differently expressed in normal cells from the human intestine. To assess the physiological relevance of these results, we performed similar experiments on mRNA isolated from human small intestine (Fig. 6B). All of the above-mentioned genes were expressed in human small intestine except HSD3B1. Therefore, since 3BHS1 protein was also not significantly up-regulated in LD isolated from Caco-2/TC7 GFP-CP as compared to Caco-2/TC7 cells, it was not studied further. We focused on the protein DHB2 which, with an iTRAQ ratio of 1.837 ± 0.092, was the most up-regulated LD-associated protein in Caco-2/TC7 GFP-CP cells as compared to Caco-2/TC7 cells. DHB2 is expressed in the gastrointestinal tract as well as in the Caco-2 cell line [44,45] (see also Fig. 6A and B). To distinguish a local LD enrichment from an overall higher cellular amount, we analyzed DHB2 levels in cell lysates by western blot. Fig. 6C and D show that DHB2 was significantly overexpressed in Caco-2/TC7 GFP-CP cell lysates compared to Caco-2/TC7 cells. Overall, our results indicate that DHB2 localizes partially to LD and that HCV core protein expression leads to an increased expression of DHB2 at both the mRNA and protein levels.
Impact of DHB2 Depletion on the Lipid Metabolism of Caco-2/TC7 and Caco-2/TC7 GFP-CP Cells
If the decreased lipid secretion observed in Caco-2/TC7 GFP-CP cells compared to Caco-2/TC7 cells was related to the increased DHB2 expression, silencing DHB2 by shRNA should lead to increased lipid secretion. To test this hypothesis, Caco-2/TC7 cells transduced with a lentiviral vector expressing shDHB2 or shControl were cultured for 17 days for differentiation and then incubated with micelles containing [1-14C]oleic acid for 24 h. Lipids extracted from cells and media were separated by TLC and the radioactivity in the resulting spots was measured. The efficiency of silencing was checked by quantitative RT-PCR (Fig. 8A). While TAG and PL synthesis was not altered in cells depleted of DHB2 compared to control cells (Fig. 8B), TAG secretion was increased 2.5-fold (Fig. 8C). The rise in TAG secretion was not accompanied by a modification of apoB secretion (Fig. 8D), suggesting the secretion of larger TRL by cells depleted of DHB2 compared to control cells.
Figure 5. Proteins identified by LC-MS/MS in the lipid droplet fractions isolated from Caco-2/TC7 cells localise to lipid droplets (A) and co-localise with HCV core protein around lipid droplets (B). Caco-2/TC7 cells were transfected with plasmids encoding the proteins of interest fused to a myc-tag (A) or double-transfected with plasmids expressing proteins of interest fused to a myc-tag and the core-expressing plasmid pGFP-CP (B), and incubated with 0.6 mM oleic acid/BSA for 24 h to induce lipid droplet formation. The myc-tag was detected with mAb 9E10 and Alexa Fluor 568-conjugated anti-mouse IgG (red) and the core protein by GFP fluorescence (green). Panels: (a) 17β-hydroxysteroid dehydrogenase type 2 (DHB2), (b) 3β-hydroxysteroid dehydrogenase (3BHS1), (c) lysophosphatidylcholine acyltransferase type 2 (PCAT2), (d) UPF0554 C2orf43 (CB043) and (e) long-chain-fatty-acid-CoA ligase 3 (ACSL3). Scale bars, 10 µm. doi:10.1371/journal.pone.0053017.g005
Figure 6. Gene and protein expression in Caco-2/TC7 cells (TC7), Caco-2/TC7 GFP-CP cells (TC7 GFP-CP) and human jejunum of some proteins identified by LC-MS/MS in lipid droplet fractions. (A) Caco-2/TC7 and Caco-2/TC7 GFP-CP cells were cultured on filters for 17 days and then supplied with lipid micelles for 24 h; mRNA levels were measured by quantitative RT-PCR for core (HCV core protein), HSD17B2 (17β-hydroxysteroid dehydrogenase type 2), HSD3B1 (3β-hydroxysteroid dehydrogenase), PLIN2 (perilipin-2), MGLL (monoacylglycerol lipase), LPCAT2 (lysophosphatidylcholine acyltransferase 2), C2orf43 (UPF0554 protein C2orf43) and ACSL3 (long-chain-fatty-acid-CoA ligase 3). (B) mRNA levels for the same genes were quantified in human jejunum mRNA samples. (C) Lysates of Caco-2/TC7 and Caco-2/TC7 GFP-CP cells were analyzed by western blot for 17β-hydroxysteroid dehydrogenase type 2 (DHB2) and actin. (D) The immunoblot shown in C was quantified and standardized to actin used as the loading control. Results shown are the means ± SD from three independent experiments performed in triplicate, except for human jejunum (one sample measured in triplicate). *, p<0.05 compared to control cells. doi:10.1371/journal.pone.0053017.g006
Discussion
Our objective in this study was to identify LD-associated proteins that could be involved in the balance between TAG storage and secretion in enterocytes. For this, we took advantage of the ability of HCV core protein, which localizes to LD, to modify this balance, since in hepatocytes HCV core protein impairs TRL secretion and induces lipid accumulation in the cytosol [29,30,31,47]. To obtain an overview of the proteins that were quantitatively modified on LD isolated from cells expressing HCV core protein or not, we used a differential proteomic approach: LC-MS/MS of iTRAQ-labeled peptides obtained from trypsin-digested proteins. As a model of enterocytes, we used Caco-2/TC7 cells, which are the only cell culture model of human enterocytes able to secrete TRL and to store TAG as cytosolic LD that can later be mobilized for TRL production [20]. In Caco-2/TC7 enterocytes, the expression of HCV core protein led to a 50% decrease in TAG secretion. Since the amount of apoB secreted was unaltered, this suggests the secretion of a similar number of smaller TRL and therefore an impaired production of LD in the ER lumen. These results also show that the effect of HCV core protein on lipid metabolism is observed both in hepatocytes and in enterocytes, suggesting common mechanisms for this effect.
Using iTRAQ quantitative proteomics, we identified and differentially quantified a total of 42 proteins in the LD fractions isolated from Caco-2/TC7 GFP-CP cells, compared to those from Caco-2/TC7 cells. Interestingly, 50% of these proteins were related to lipid metabolism, including the LD coat proteins perilipins and enzymes involved in fatty acid activation or acylglycerol, phospholipid, cholesterol and steroid synthesis or degradation. Among them, DHB2 and ACSL5 (long-chain-fattyacid-CoA ligase 5), which belong to the HSD (hydroxysteroid dehydrogenase) and ACSL families, respectively, were identified for the first time in LD fractions. It has been shown that other members of these families are present in LD fractions [9,48], and in hepatic cells ACSL3 knockdown was shown to result in decreased apoB secretion [49]. However, proteomics showed no quantitative modification of ACSL3 or ACSL5 suggesting that they are not involved in the effect of HCV core protein on lipid secretion in enterocytes. ACSL5 was not identified previously on LD most probably because its expression is particularly high in epithelial cells of the small intestine as compared to other organs [50,51]. It is noteworthy that eight proteins were linked to sterol metabolism: DHB2, 3BHS1, NB5R3, DHB11, DHB7, NSDHL, DHRS3 and ERG1. While enzymes involved in cholesterol biosynthesis have been routinely identified in proteomic studies [9,10,52], the identification of enzymes involved in steroid metabolism was more surprising because the intestine is not reputed as a steroidogenic organ. However, as suggested for DHB11 [53], these enzymes may be involved in the metabolism of diet-derived or oxidized hydrophobic, potentially toxic molecules. Though liver is reputed to be the major xenobiotic-metabolizing organ, enterocytes are in contact with a large variety of xenobiotics and intestine contributes to the first steps of detoxification [54,55]. By confocal microscopy of cells expressing these proteins of interest, we confirmed the localization around LD of CB043, which was routinely identified in LD fractions, but only by proteomics [9,10]. CB043 has no function assigned yet but contains homologies for an esterase-lipase superfamily domain and an abhydrolase-6 domain. Our proteomic analysis showed no quantitative modification of CB043 suggesting that the CB043 protein amount on LD per se is not involved in the effect of HCV core protein on lipid secretion in these cells. Indeed, next to the protein amount, the protein/enzyme activity of CB043 may be controlled by many factors including post-translational modifications, cofactors or substrate availability. Six Rab proteins that are involved in vesicular traffic and are routinely identified in LD fractions [9,56] were identified, but none of them were significantly differentially expressed between the two cell lines. Concerning the perilipin family that contributes to the protein coat of LD, perilipin-2 was significantly enriched while perilipin-3 was depleted in the LD fractions isolated from Caco-2/TC7 GFP-CP, compared to Caco-2/TC7 cells. Perilipin-2 and perilipin-3 are the only PLIN family proteins expressed in enterocytes [18]. In adipocytes, it has been shown that the earliest deposits of neutral lipids are coated with perilipin-3 [57] and, more recently, that perilipin-3 is involved in the biogenesis of LD [58]. In enterocytes, Lee et al. 
[59] suggested that perilipin-3 plays a role in the synthesis of LD from newly synthesized TAG, while perilipin-2 plays a role in the stabilization of TAG stored in longer term. Moreover, overexpression of perilipin-2 reduces lipolysis catalyzed by Adipose Triglyceride Lipase (ATGL) [60], which was identified in LD fractions of Caco-2/TC7 cells [10]. Overall, the modification of the perilipin-2/perilipin-3 balance induced by HCV core protein in Caco-2/TC7 cells favours stabilisation of the LD. This is in good agreement with the recent report describing that LDlocalized core protein slows down the turnover of TAG in LD [61]. However, there was no modification of the TAG content between the cell lines. To go further, it would be worth studying other key enzymes involved in the TAG hydrolysis/reesterification process such as lipases and acyltransferases as well as examining TAG turnover by using pharmacological inhibitors. Interestingly, a number of chaperones associated with the ER, such as PDI isoenzymes, and proteins involved in lipoprotein secretion, such as MTP, were strongly depleted in LD-associated fractions isolated from Caco-2/TC7 GFP-CP cells, compared to Caco-2/TC7 cells. Although the identification by proteomics, in LD fractions, of proteins reputed to have an ER location has been often considered as a contamination, it is now accepted that there are strong physical relationships between ER and LD [15,16]. The iTRAQ quantitative approach clearly indicated a decreased association of LD with ER in cells expressing HCV core protein. In core protein expressing hepatocytes, it has been shown previously that the MTP amount is not modified but MTP activity is impaired, a mechanism suggested to contribute to the development of steatosis and the decreased secretion of TRL by these cells [29]. An additional, non exclusive, hypothesis is that core expression leads to a decreased association of LD to ER membranes, which may contribute to the impaired secretion of lipids observed in core-expressing Caco-2/TC7 cells versus control cells. Additionally to perilipin-2, other proteins were found enriched in the LD fractions of Caco-2/TC7 cells expressing HCV core protein versus control cells. Immunofluorescence studies confirmed the LD-associated localization of DHB2, 3BHS1 and PCAT2. Moreover, stable HCV core expression in Caco-2/TC7 cells led to altered gene expression for some of these proteins. DHB2 was the most enriched around LD, up-regulated at the mRNA and protein levels, and has never been described associated to LD or connected to HCV core protein. DHB2 belongs to the family of 17ß-hydroxysteroid dehydrogenases that occupy pivotal positions in steroid metabolism pathways, regulating the intracellular concentrations of active (E2 and T) and inactive (E1 and androstenedione) steroid pairs. DHB2 inactivates E2 and T into E1 and androstenedione, respectively, while the reverse reaction is catalysed by DHB1 for E1 and DHB3 for androstenedione [41]. In Caco-2 cells, these isoforms are present [44]. Indeed, DHB2 is expressed in the human gastrointestinal tract, particularly in the epithelial cells of the small intestine [45]. In these cells, DHB2 has been suggested to be involved in the inactivation of endogenous and exogenous active sex steroids. DHB2 also has a 20a-HSD activity [62]. Possibly, DHB2 is an enzyme with unknown yet substrates and thus with additional functions [63]. 
Using estradiol as a substrate for DHB2, we showed that Caco-2/TC7 GFP-CP cells, which overexpressed DHB2 compared to control cells, were more potent at transforming estradiol into estrone, as expected. The silencing of DHB2 in Caco-2/TC7 cells caused an increase of TAG secretion, confirming the capacity of DHB2 to impair TAG secretion by Caco-2/TC7 cells. Similar experiments performed on Caco-2/TC7 GFP-CP cells showed only a partial, not significant, restoration of TAG secretion. Keeping in mind that HCV core protein led to the quantitative modification of many proteins in LDs fractions, the lack of a clearcut effect by modulating the amount of one single protein was not that surprising. The effect of DHB2 on TAG secretion could be mediated by its enzyme activity on substrates, or by coating the LD surface and impairing the access of other proteins. We thus tested the effect of E2 and T on lipid secretion by Caco-2/TC7 cells and found that, while E2 had no effect, T led to an increase of TAG secretion. In humans, sex steroids are known to exert profound and complex effects including on lipid metabolism [64,65,66]. However, data from literature on the effect of T on serum lipid levels are contradictory [67,68]. The effects of steroid hormones are mediated through interaction with specific intracellular receptors, which are also present in the gastrointestinal tract [69]. To our knowledge, no data describing the impact of T on lipid secretion by intestine are available. We show for the first time that T leads to an increased lipid secretion in enterocyte-like Caco-2/TC7 cells, although the effect remains modest. Gathering all the present results, we can formulate the following model: DHB2, which is upregulated in HCV core-expressing Caco-2/TC7 cells, leads to a more rapid inactivation of steroid hormones, including testosterone, that stimulates lipid secretion in Caco-2/TC7 enterocytes. In summary, by a differential proteomic approach, we identified proteins on lipid droplets that are altered by HCV core protein in Caco-2/TC7 enterocytes. Because HCV core protein led to a decreased TRL secretion, the identified players are potentially involved in the control of the balance between lipid storage, as LD, and secretion, as TRL, in Caco-2/TC7 enterocytes. Factors modifying this balance may be simple proteins, as shown for DHB2, or the ratio between two proteins, such as the perilipin-2/ perilipin-3 ratio, or the extent of LD association to organelles, such as the ER. Further studies on these identified factors will help to gain more knowledge about this white spot on the map of cellular pathways i.e. the crosstalk of cytosolic LD and TRL formation in the ER. High levels of intestinally derived lipoproteins are associated with increased cardiovascular risk and there is evidence of altered TRL secretion by intestine in pathological conditions, such as insulin resistance, type II diabetes and obesity [22,23,24,25]. This altered TRL secretion may result from an imbalance between the cytosolic and luminal LD dynamics and the underlying mechanisms are important to characterize. Additionally, since we used HCV core protein to perturb the TAG balance, our results may help to further characterize the effect of HCV core protein on LD metabolism, which is necessary for HCV replication and production in hepatocytes. Figure S1 Western blot analysis of GFP-HCV core protein produced by the stably transfected Caco-2/ TC7 GFP-CP cell line. 
Caco-2/TC7 cells expressing GFP-HCV core protein (TC7 GFP-CP) were grown on filters for 18 days then cell lysates were analyzed for GFP (A) or HCV core protein (B). Cell lysates from Caco-2/TC7 cells, Caco-2/TC7 cells transiently transfected with plasmids encoding GFP or HCV core protein are included as controls. (TIF)

Figure S2 Time course of oleic acid incorporation into Caco-2/TC7 (TC7) and Caco-2/TC7 GFP-CP (TC7 GFP-CP) cells. Cells were grown on filters for 17 days then incubated for various durations with lipid micelles supplemented with [1-14C]oleic acid. The radioactivity remaining in the apical medium was counted and expressed as percentage of the radioactivity present at time 0 (A). Radioactivity contained in cell lysates was counted and expressed as nmoles of oleic acid incorporated per dish (B). Data are means ± SD of three independent experiments performed in duplicate. (TIF)

Figure S3 Silver stained gels of lipid droplet fractions and cell lysates from Caco-2/TC7 cells (TC7) and Caco-2/TC7 GFP-CP (TC7 GFP-CP) cells. Cells were cultured on filters for 17 days then supplied with lipid micelles for 24 h. Lipid droplet fractions were prepared as described in the Materials and Methods section, freeze-dried for concentration and one tenth of the lipid droplet fraction was loaded per well. One mg of cell lysates was loaded per well. Proteins were separated by 10% SDS-PAGE and silver stained. (TIF)
Combined Effect of Temperature and Oil and Salt Contents on the Variation of Dielectric Properties of a Tomato-Based Homogenate Tomato-based processed foods are a key component of modern diets, usually combined with salt and olive oil in different ratios. For the design of radiofrequency (RF) and microwave (MW) heating processes of tomato-based products, it is of importance to know how the content of both ingredients will affect their dielectric properties. Three concentrations of olive oil and salt were studied in a tomato homogenate in triplicate. The dielectric properties were measured from 10 to 3000 MHz and from 10 to 90 °C. Interaction effects were studied using a general linear model. At RF frequencies, the dielectric constant decreased with increasing temperature in samples without added salt, but this tendency was reversed in samples with added salt. The addition of salt and oil increased the frequency at which this reversion occurred. At MW frequencies, the dielectric constant decreased with increasing temperature, salt, and oil content. The loss factor increased with increasing salt content and temperature, except in samples without added salt at 2450 MHz. Penetration depth decreased with increasing frequency and loss factor. Salt and oil contents have a significant effect on the dielectric properties of tomato homogenates and must be considered for the design of dielectric heating processes. Introduction Tomatoes and olive oil are two key components of the Mediterranean diet. Olive oil acts as an excipient food that increases the bioavailability of the nutrients present in tomatoes [1]. According to the European Commission, the production of processed tomatoes is expected to increase due to higher yields, whereas consumption will probably increase mainly due to growing demand for convenient and healthy food. EU exports of processed tomatoes are expected to increase 1% per year, with Spain, Italy, and Portugal contributing more than 90% of the production [2]. The thermal treatment of tomato-based products is usually carried out in heat exchangers; this operation produces characteristic cooked flavors while ensuring an extended shelf-life. However, the high temperatures at the surface of the tube and long processing times associated with conventional pasteurization are known to diminish the sensory and nutritional qualities of raw tomato products such as "salmorejo" or "gazpacho" [3]. Furthermore, tomato is a valuable source of vitamin C, which has been extensively reported to be a thermolabile nutrient, suffering important losses at high temperatures and over long processing times [4]. One of the most promising alternatives to conventional heating is dielectric heating. By subjecting the food matrix to an alternating electric field, volumetric heat is generated inside the food product. The volumetric heating overcomes the barrier of slow heat transfer rates and implies much faster processing, maintaining desirable food quality attributes such as nutrition and flavor [5]. The way materials interact with electric fields depends on their dielectric properties, which quantify a material's ability to reflect, store, and transmit electromagnetic energy. Knowledge of these properties is essential in the design and implementation of dielectric heating processes, since they have a direct effect on the penetration depth of the electromagnetic waves in the food matrix and the heating rate [6]. 
In the presence of a dielectric, the intensity of an electric field is reduced by a factor of ε_r, which corresponds to the relative permittivity of the material and is expressed as a ratio with the permittivity of vacuum (ε_0 = 8.854 × 10^−12 J/(V^2·m)):

ε_r = ε/ε_0 (1)

Multiple dielectric mechanisms contribute to the complex dielectric permittivity, which is defined as [7]:

ε_r = ε′ − jε″ (2)

The real component ε′ is often referred to as the dielectric constant; it is related to the capacity of a dielectric to polarize and orientate towards an applied electric field. The imaginary component ε″ is the loss factor, which is related to various energy dissipation mechanisms and thermal conversion. Thermal and dielectric properties are influenced by temperature, frequency of the alternating electric field, and food composition [7].

Recent developments in continuous-flow dielectric heating technologies have prompted a higher interest in the study of dielectric properties of fluid foodstuffs, including soy sauce [8], salsa con queso [9], mirin [10], vinegar [11], honey [12], fruit juices [13], chili sauce [5], and milk [14]. Fewer studies deal with the contribution of different ingredients to the dielectric properties of food products. Ahmed et al. [15] studied the influence of olive oil and other ingredients on the dielectric spectra of hummus. Franco et al. [16] studied the individual contributions of water, salt, and sugar to the dielectric properties of green coconut water. Luan et al. [17] studied the effect of oil, salt, sucrose, and bentonite on the dielectric properties of bentonite water pastes at MW frequencies. Regarding studies that have dealt specifically with tomato, De los Reyes et al. [18] measured the dielectric properties of fresh and osmotically dehydrated cherry tomatoes at 20 °C and 2450 MHz. More recently, Peng et al. [19] studied the effect of adding 0.2 g/100 g of NaCl and 0.055 g/100 g of CaCl2 on the dielectric properties of three different tomato tissues in the range of 300-3000 MHz over a wide range of temperatures. It is expected that a high-moisture product, such as a tomato homogenate, will have a lower dielectric constant with higher oil content and a loss factor that increases with salt and temperature. However, it is unclear if the effect of the emulsification of olive oil and the structural changes in the tomato tissue during cooking will affect this dielectric behavior at different frequencies. The objective of this work was to study the effect of temperature and different concentrations of salt and olive oil on the dielectric properties and penetration depth of a tomato homogenate at RF and MW frequencies.

Sample Preparation

Vine tomatoes were purchased from a local producer. After reception, tomatoes were washed with tap water through aspersion and processed using a cutter (model CUT-35; Castellvall, Girona, Spain), a 3 mm steel automatic sieve (model C80; Robotcoupe, Vincennes, France) and a colloid mill (model MZ-100; FrymaKoruma AG, Rheinfelden, Switzerland) working at 3000 rpm to obtain a homogeneous paste. Samples with three different concentrations of salt (0, 0.5, and 1%) and extra virgin olive oil (0, 5, and 10%) content were prepared and homogenized for 1 min using an immersion blender. Overall, 9 treatments were studied in triplicate (27 samples). These concentrations were chosen based on the common values found in commercial tomato-based products. Ingredients were combined to investigate possible interactions. Samples were left overnight at 4 °C for stabilization prior to analysis.
Sample Characterization

To characterize the raw material, moisture content and total soluble solids of the tomato homogenate were measured in all samples. Moisture content was obtained by drying in an oven at 103 ± 2 °C until reaching constant weight [20]. Total soluble solids content was measured using a portable refractometer (model Quick-Brix 90; Mettler Toledo GmbH, Giessen, Germany) and expressed as °Brix. The tomato homogenate had a moisture content of 94.8 ± 0.8% and total soluble solids of 4.1 ± 0.1 °Brix. Particle size of the homogenized product was determined at room temperature using a laser diffraction particle size analyzer (model Mastersizer S; Malvern Instruments Ltd., Worcestershire, UK) with a measurement range of 0.01-3500 µm and an obscuration range of 8-15%. Particle size of the samples, expressed as Dv(90) (the size below which 90% of the sample lies), was found to be 836 ± 26 µm.

Measurement of Dielectric Properties

The measurement of dielectric properties (dielectric constant, ε′, and loss factor, ε″) was carried out in all samples following the procedure described by Muñoz et al. [14]. Measurements were made using an open-ended coaxial line with a high-temperature probe, connected to a 5 Hz-3 GHz network analyzer (model E5061B, Keysight Technologies, Santa Rosa, CA, USA) through a DC-4 GHz electronic calibration module (model N7550A, Keysight Technologies, Santa Rosa, CA, USA), which was fan-cooled for higher thermal stability. The instrument was warmed up for 2 h and then calibrated with air, a short-circuit block (supplied by the manufacturer), and deionized water. Samples were contained in a stainless steel (316 L-s) 750 mL autoclave (106 mm in height and 108 mm in diameter), identical to the one described by Muñoz et al. [14] and pressurized to 500 kPa to avoid the formation of air bubbles and evaporation at high temperatures, which may affect the measurements. To ensure uniform temperature distribution, samples were heated slowly by manually adjusting the power of an electrical resistance. Temperature was monitored using a temperature probe (TESTO, Lenzkirch, Germany) placed near the dielectric probe. Dielectric measurements were taken every 10 °C from 10 to 100 °C, over a frequency range from 10 to 3000 MHz. Special attention was given to the dielectric properties at the RF frequencies (27.12 and 40.68 MHz) and MW frequencies (915 and 2450 MHz) allocated for use in industrial, scientific, and medical applications. All measurements were performed in triplicate.

Calculation of the Frequency at Which the Relationship between the Dielectric Constant and Temperature Reverses

For each sample, a linear regression of dielectric constant on temperature was fitted at each frequency, and the slopes were obtained (data not shown). The frequency at which the slope changed sign was taken as the reversion point P_r of the sample.

Calculation of the Penetration Depth

Penetration depth is defined as the distance at which power decreases to 1/e of the initial value at the surface. It is a parameter that provides insight on the temperature uniformity during dielectric heating [8]. For each mixture of tomato homogenate, the penetration depth was calculated at 27.12, 40.68, 915, and 2450 MHz using Equation (3):

d_p = c / (2πf √(2ε′)) · [√(1 + (ε″/ε′)^2) − 1]^(−1/2) (3)

where c is the speed of light in vacuum and f is the frequency of the electric field.

Statistical Analysis

Analysis of variance was performed for each selected frequency (27.12, 40.68, 915, and 2450 MHz) using the GLM procedure of the SAS statistical package (SAS Inst., Inc., Cary, NC, USA).
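The two derived quantities defined above, the reversion point P_r and the penetration depth d_p of Equation (3), lend themselves to a short numerical illustration. The Python sketch below is not part of the original study: it assumes the standard penetration-depth expression given above and uses hypothetical array names for the measured ε′ and ε″ values.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def penetration_depth(freq_hz, eps_real, eps_loss):
    """Penetration depth d_p from Equation (3) for a given eps' and eps''."""
    ratio = eps_loss / eps_real
    return (C0 / (2.0 * np.pi * freq_hz * np.sqrt(2.0 * eps_real))
            / np.sqrt(np.sqrt(1.0 + ratio ** 2) - 1.0))

def reversion_point(freqs_hz, temps_c, eps_real_grid):
    """Reversion point P_r: first frequency at which the slope of eps'
    versus temperature changes sign (eps_real_grid shape: [n_freq, n_temp])."""
    slopes = np.array([np.polyfit(temps_c, row, 1)[0] for row in eps_real_grid])
    change = np.where(np.diff(np.sign(slopes)) != 0)[0]
    return freqs_hz[change[0] + 1] if change.size else None

# Single illustrative point: unsalted homogenate at 915 MHz (eps' = 77.2, eps'' = 13.4)
print(penetration_depth(915e6, 77.2, 13.4))  # roughly 0.03 m, i.e. a few centimetres
```

Applied to the full 10-3000 MHz sweep, the same slope-sign criterion used for P_r in the text can be evaluated with reversion_point.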
For this analysis of variance, the statistical linear model was:

Y_ijklm = µ + SC_i + OC_j + (SC × OC)_ij + s_ijl + T_k + (SC × T)_ik + (OC × T)_jk + e_ijklm (4)

where Y_ijklm is the observed value (ε′, ε″, or d_p), µ is the overall mean, SC_i is the salt effect at the i concentration (i = 0, 0.5, and 1%), OC_j is the oil effect at the j concentration (j = 0, 5, and 10%), T_k is the temperature effect at k °C (k = 10, 20, ..., 100 °C), s_ijl is the effect of the l sample within the group of samples with i salt concentration and j oil concentration, and e_ijklm is the random residual of the model. The factor sample, nested in the factor SC × OC, was used as the error term for testing the effects of SC, OC, and SC × OC. The residual was used as the error term for testing the effects of T, SC × T, and OC × T. For the reversion point, the statistical linear model was:

Y_ijk = µ + SC_i + OC_j + (SC × OC)_ij + e_ijk (5)

where Y_ijk is the reversion point, µ is the overall mean, SC_i is the salt effect at the i concentration (i = 0, 0.5, and 1%), OC_j is the oil effect at the j concentration (j = 0, 5, and 10%), and e_ijk is the random residual of the model. When a significant effect was encountered, least-squares means were compared using LSMEANS with Tukey's test option. Differences were deemed significant at the 5% probability level. Values of least-squares means and standard deviations can be found in Tables S1-S13 of the Supplementary Materials.

Dielectric Constant (ε′)

Values of ε′ decreased with increasing frequency in all samples. This decrease is in accordance with the Debye relation [21]. The faster the alternation of the electric field, the more difficult it is for the water molecules to orientate towards it, which reduces the dielectric polarization and the energy storage [16]. This decline was sharper at RF frequencies, especially in samples with higher salt content at high temperatures, similarly to what was reported by Zhu et al. [13] in fruit juices. For tomato homogenate without salt or oil, the values of ε′ ranged from 77.2 to 57.7 at 915 MHz and from 74.9 to 57.2 at 2450 MHz. These values are in accordance with previous results obtained by Peng et al. [19], who reported values ranging from 78 at 22 °C to 57 at 120 °C for pericarp, locular, and placental tissues of raw tomatoes at 915 MHz, and slightly lower values at 2450 MHz. The dielectric constant was significantly affected by salt and temperature at all frequencies (Table 1). There was a significant interaction between salt and temperature at RF frequencies. There was a significant effect from oil at all frequencies (except at 27.12 MHz) and from its interaction with temperature at MW frequencies. Figure 1 shows the interaction effect of salt and temperature on ε′ for the two evaluated frequencies in the RF region. At these frequencies, ε′ increased with increasing salt concentration. Temperature did not significantly affect ε′ in non-salted samples, but there was a significant increase in ε′ with temperature in salted samples. Higher temperature and ion concentration increase ionic conductivity, which, according to Zadeh et al. [22], causes an increase in dielectric dispersion, increasing the dielectric constant. At RF frequencies, there was a significant negative main effect from oil on the dielectric constant of tomato at 40.68 MHz (Figure 2). This effect was not significant at 27.12 MHz, possibly due to ionic conduction being the dominant polarization mechanism at low frequencies [23]. At MW frequencies, there was a significant negative main effect from salt on the dielectric constant (Figure 3).
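The significance tests summarized in Table 1 and in the preceding paragraph come from the model in Equation (4), fitted with the SAS GLM procedure. Purely as an illustration of that model structure, and not the authors' code, the sketch below fits a fixed-effects approximation in Python; the column names are hypothetical, and the nested random term s_ijl would require a mixed model (for example, statsmodels' mixedlm) to reproduce the SAS error-term structure exactly.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format table: one row per measurement with columns
# salt (%), oil (%), temp (degC), sample (replicate id), eps_real
df = pd.read_csv("dielectric_measurements_915MHz.csv")

# Fixed-effects approximation of Equation (4); C() treats each factor as categorical.
model = smf.ols(
    "eps_real ~ C(salt) * C(oil) + C(temp) + C(salt):C(temp) + C(oil):C(temp)",
    data=df,
).fit()

# Type II ANOVA table for the main and interaction effects
print(anova_lm(model, typ=2))
```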
The negative main effect of salt on the dielectric constant at MW frequencies (Figure 3) shows that above a certain frequency between the RF and MW regions, the tendency was reversed and ε′ started decreasing with increasing salt content. The negative effect from salt on the dielectric constant is caused by the binding of water molecules, which results in an obstruction of their polarization and overall energy storage [24]. The negative interaction effect of oil and temperature on the dielectric constant at MW frequencies is shown in Figure 4. The effect of oil is caused by water molecules being replaced by dielectrically inert molecules with almost no polarization [25]. The effect of temperature is caused by an increase in the Brownian motion of water molecules and a general reduction of viscosity in the mixtures, which in turn reduces the relaxation time and the energy storage [16]. The negative main effect of salt at MW frequencies was very small in comparison to that of oil. This is in accordance with what was reported by Peng et al. [19], where the authors did not find any significant differences in the dielectric constant of tomatoes after adding 0.2 g/100 g NaCl at 915 and 2450 MHz. The opposite effect of temperature and salt content at RF and MW frequencies is similar to what was reported by Muñoz et al. [14] in milk, by Wang et al. [26] and Nelson and Bartley [27] in whey protein gels, by Luan et al. [17] in bentonite pastes, and by Guan et al. [28] in mashed potatoes. According to Nelson [29], the frequency at which the dielectric constant starts decreasing with increases in temperature or salt content marks the point from which dipole relaxation becomes the dominant loss mechanism over ionic conduction. Addition of ingredients changed the frequency at which this reversion occurred. Specifically, there was a statistically significant (p < 0.05) positive interaction effect from salt and oil on the reversion point (Figure 5). This effect is possibly caused by the increased loss in ionic conduction due to the addition of salt and the reduction in loss by dipole relaxation due to the addition of oil.

Loss Factor (ε″)

Values of the loss factor showed a general tendency to decrease with increasing frequency and increase with increasing salt content and temperature. Values of ε″ of pure tomato homogenate ranged from 13.4 to 28.7 at 915 MHz and from 12.9 to 14.9 at 2450 MHz. This is in accordance with the values reported by Peng et al. [19], which ranged from 10 to 38 at 915 MHz and from 8 to 16 at 2450 MHz for pericarp, locular, and placental tissues of raw tomatoes. Conversion of electromagnetic energy into heat occurs mainly due to dipole rotation and ionic conduction [7]:

ε″ = ε″_d + ε″_σ (6)

where ε″_d is the relative dipole loss and ε″_σ is the relative ionic loss. The latter is directly proportional to the electrical conductivity (σ) and inversely proportional to the frequency (f):

ε″_σ = σ / (2πf ε_0) (7)

Ionic loss (ε″_σ) is the primary factor affecting dielectric loss at low frequencies. The relationship between ε″_σ and f is characterized by a linear decreasing function in a log-log plot [30]:

log ε″_σ = a − b log f (8)

where a and b are constants. The loss factor maintains this linearity as long as ε″_σ is the primary loss mechanism, but it starts to deviate at high frequencies, where the frequency of the applied electric field begins to match the relaxation time of the water molecules, causing the dipole loss (ε″_d) to become more prevalent and increase until a maximum value at the point known as the critical frequency [19,30].
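Equations (6)-(8) suggest a simple way to separate the two loss mechanisms from a measured spectrum: fit the low-frequency, conduction-dominated region on a log-log scale and subtract the extrapolated ionic contribution. The Python sketch below is illustrative only and is not taken from the study; the cutoff frequency and variable names are assumptions.

```python
import numpy as np

def fit_ionic_loss(freqs_hz, loss_factor, fit_below_hz=100e6):
    """Fit log10(eps'') = a - b*log10(f) on the low-frequency points,
    where ionic conduction dominates (Equation (8))."""
    mask = freqs_hz <= fit_below_hz
    slope, intercept = np.polyfit(np.log10(freqs_hz[mask]),
                                  np.log10(loss_factor[mask]), 1)
    return intercept, -slope  # a and b, with b reported as a positive constant

def split_loss(freqs_hz, loss_factor, a, b):
    """Extrapolate the ionic component from the log-log fit and estimate the
    dipolar remainder eps''_d = eps'' - eps''_sigma (Equation (6))."""
    ionic = 10.0 ** (a - b * np.log10(freqs_hz))
    return ionic, loss_factor - ionic
```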
The positive interaction effect of salt and temperature on the loss factor of tomato homogenate can be seen in Figure 6 for the studied frequencies in the RF region and in Figure 7 for the studied frequencies in the MW region. By adding salt, the increase in ion concentration increases the relative contribution of ionic loss [17]. At higher temperatures, the reduced viscosity increases ion mobility and overall ionic loss [14]. Loss factor did not significantly increase with temperature in samples without added salt at 2450 MHz. In this case, the high frequency and low salt content cause the dipole loss to become more prevalent. Dipole loss decreases with increasing temperature due to the relaxation times becoming increasingly shorter than the frequency of the applied electric field [19,22]. The opposing effects of temperature on dipole loss and ionic loss resulted in relatively constant values of ε across the studied temperature range. In Figure 7, it can be seen that samples without added salt showed very little reduction in their values of ε from 915 MHz to 2450 MHz in comparison with those containing salt. For pure tomato homogenate at 10 • C, loss factor increased from 13.4 at 915 MHz to 14.9 at 2450 MHz. This is explained by the higher dipole contribution to the loss factor, which increased with increasing frequency in the studied range. The samples with added salt have a higher relative contribution of ionic loss; therefore, their loss factor values linearly decrease in the measured range according to equation 8 [17]. This dependence of ionic and dipole loss on temperature, frequency, and salt content is expected for high-moisture foods and is in accordance with what has been reported for many foodstuffs, including raw tomatoes [19], apples [31], green coconut water [16], milk [14], fruit juices [13], mirin [10], and soy sauce [8]. There was not a significant effect from oil on the loss factor at 27.12, 40.68, or 915 MHz. At 2450 MHz, there was a significant negative main effect; values decreased from 24.7 to 22.7 when increasing oil content from 0% to 10%. Samples with added oil had less polarizable molecules per unit volume and lower migration rate of ions due to the higher viscosity, which was expected to cause a general reduction of the loss factor. This reduction could only be appreciated at 2450 MHz, where the overall values of ε are small and the effect of dipole rotation becomes prevalent [15,24]. The results of this study are similar to the findings of Luan et al. [17] in bentonite pastes, where the authors postulate that ε was mainly affected by oil content while ε was mainly affected by salt content. Table 1 shows the level of significance of olive oil, salt, and temperature on the penetration depth of tomato homogenate at each selected frequency. Oil content had no significant effect on d p . Values of d p decreased with increasing frequency, and there was a significant negative interaction effect from salt and temperature (Figures 8 and 9). According to equation 3, d p is inversely dependent on the frequency and loss factor, so these results are in line with the expectations. Samples without added salt had an increasing penetration depth at 2450 MHz from 10 to 40 • C, in accordance with their increasing loss factor in that temperature range. Penetration Depth (d p ) Similar results have been obtained for many foodstuffs, including mirin [10], soy sauce [8], inulin solutions [24], milk [14], and fruit juices [13]. Peng et al. 
[19] obtained values of d p ranging from 3.31 cm to 1.24 cm in tomato pericarp tissue at 915 MHz without added salt, and at 2450 MHz, increasing values from 1.24 cm to 1.45 cm at 60 • C and then decreasing again to 1.26 cm at 100 • C. Our results are in good agreement with these findings. According to Schiffmann [32], the thickness of a food product should not exceed 2 or 3 times its penetration depth to ensure heat generation inside the product. During continuous-flow dielectric heating, tomato homogenates with added salt would require lower tube diameters to maintain a uniform temperature rise, regardless of the oil content. The increased penetration depth at lower frequencies (RF region) has the potential to reduce overheating, thus allowing for improved nutritional and sensory properties in vegetable homogenates. However, most of the research on continuous-flow dielectric heating has focused on MW frequencies. Coming years will probably see growing interest in exploring industrial applications in the RF region. Conclusions The dielectric properties of homogenates of tomato, salt, and olive oil were measured at four frequencies relevant to industrial dielectric heating applications. The dielectric constant was mainly affected by changes in temperature at all frequencies, by salt content in the RF range, and by oil content in the MW range. The loss factor was mainly affected by the salt content and temperature. The reversion point increased with the addition of salt and oil. The penetration depth generally decreased with increasing temperature, salt content, and frequency. In dielectric heating processes, changes in dielectric properties resulting from new formulations must be taken into consideration because they will influence the temperature distribution and the heating rates associated with the thermal treatment. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/foods10123124/s1, Table S1: Least-squares mean value of dielectric constant at different combinations of temperature and salt content at 27.12 MHz. Lowercase and uppercase different letters indicate significant differences for temperature and salt variable, respectively (p < 0.05), Table S2: Least-squares mean value of dielectric constant at different combinations of temperature and salt content at 40.68 MHz. Lowercase and uppercase different letters indicate significant differences for temperature and salt variables, respectively (p < 0.05), Table S3: Least-squares mean value of dielectric constant at different combinations of temperature and oil content at 915 MHz. Lowercase and uppercase different letters indicate significant differences for temperature and oil variables, respectively (p < 0.05), Table S4: Least-squares mean value of dielectric constant at different combinations of temperature and oil content at 2450 MHz. Lowercase and uppercase different letters indicate significant differences for temperature and oil variables, respectively (p < 0.05), Table S5: Least-squares mean value of reversion point (MHz) at different combinations of temperature and oil content at 2450 MHz. Lowercase and uppercase different letters indicate significant differences for salt and oil variables, respectively (p < 0.05), Table S6: Least-squares mean value of loss factor at different combinations of temperature and salt content at 27.12 MHz. 
Lowercase and uppercase different letters indicate significant differences for temperature and salt variables, respectively (p < 0.05), Table S7: Least-squares mean value of loss factor at different combinations of temperature and salt content at 40.68 MHz. Lowercase and uppercase different letters indicate significant differences for temperature and salt variables, respectively (p < 0.05), Table S8: Least-squares mean value of loss factor at different combinations of temperature and salt content at 915 MHz. Lowercase and uppercase different letters indicate significant differences for temperature and salt variables, respectively (p < 0.05), Table S9: Least-squares mean value of loss factor at different combinations of temperature and salt content at 2450 MHz. Lowercase and uppercase different letters indicate significant differences for temperature and salt variables, respectively (p < 0.05), Table S10: Leastsquares mean value of penetration depth at different combinations of temperature and salt content at 27.12 MHz. Lowercase and uppercase different letters indicate significant differences for temperature and salt variables, respectively (p < 0.05), Table S11: Least-squares mean value of penetration depth at different combinations of temperature and salt content at 40.68 MHz. Lowercase and uppercase different letters indicate significant differences for temperature and salt variables, respectively (p < 0.05), Table S12: Least-squares mean value of penetration depth at different combinations of temperature and salt content at 915 MHz. Lowercase and uppercase different letters indicate significant differences for temperature and salt variables, respectively (p < 0.05), Table S13: Least-squares mean value of penetration depth at different combinations of temperature and salt content at 2450 MHz. Lowercase and uppercase different letters indicate significant differences for temperature and salt variables, respectively (p < 0.05).
Lack of Adjuvant Radiotherapy May Increase Risk of Retropharyngeal Node Recurrence in Patients with Squamous Cell Carcinoma of the Head and Neck after Transoral Robotic Surgery

Purpose. Transoral robotic surgery (TORS) has increased in popularity in the management of squamous cell carcinoma of the head and neck. However, TORS does not address the neck or retropharyngeal nodes (RPN). In the current report, we highlight the impact of the lack of adjuvant radiotherapy on RPN recurrence after TORS. Materials and Methods. A 58-year-old Caucasian male presented with squamous cell carcinoma of the head and neck of unknown primary. He was offered radiotherapy as definitive management for clinical stage T0N2aM0, stage IVA, but he opted for left neck dissection. A follow-up PET-CT scan revealed recurrence in the left base of tongue and a right level II lymph node. He was offered radiotherapy, which he declined, and opted for TORS and right neck dissection. A follow-up PET-CT scan showed recurrence in the left RPN, for which he underwent salvage concurrent chemoradiotherapy to 70 Gy. Results. After a followup of 9 months from the date of salvage chemoradiotherapy completion, the patient is with no evidence of disease. Conclusions. TORS followed by adjuvant radiotherapy seems reasonable in the context of squamous cell carcinoma of the head and neck due to the odds of RPN involvement. Further reports are warranted to optimize post-TORS adjuvant treatment.

Introduction

Transoral robotic surgery (TORS) has increased in popularity in a variety of different indications. In 2006, O'Malley et al. began the introduction and application of the Intuitive Surgical da Vinci robot to head and neck surgery [1]. Hurtuk et al. reported on 64 patients with oropharyngeal carcinoma. Out of 64 patients, 50% of stage I and II patients were spared adjuvant radiation therapy (RT) or combined chemoradiation (chemo-RT), while 34% of stage III and IV patients were spared chemotherapy [2]. Obviously, most patients treated with definitive RT or chemo-RT are spared TORS and spared neck dissection (ND). However, TORS does not address the nodal disease of the neck, which may require a second surgical procedure. Furthermore, TORS and ND do not address the retropharyngeal lymph nodes (RPN). Thus, while TORS can be an excellent approach, it usually requires multiple procedures and leaves a key area unaddressed. In the current report, we highlight the impact of the lack of adjuvant radiotherapy (RT) on RPN recurrence following TORS.

Case Report

A 58-year-old Caucasian male presented with a painless, solitary, left-sided upper neck mass of 2-month duration. He reported smoking one pack per day for 30 years but denied alcohol or drug abuse. Comprehensive physical examination was unremarkable, and fine needle aspiration of the neck mass was performed.

Management

3.1. Surgery

The patient was presented at our institutional head and neck tumor conference and the consensus was to proceed with multiple targeted biopsies from the base of tongue (BOT) and bilateral tonsillectomy, all of which were negative for malignancy. The patient was offered RT as definitive management for metastasis of unknown primary (MUP), clinical stage T0N2aM0, stage IVA, but he opted for left ND. Pathology revealed one positive LN for SCC, at level II, out of fifty-six LNs. It measured 4.5 cm in the greatest dimension with no extracapsular extension. The pathologic staging was MUP stage IVA (pT0pN2aM0).

Followup.
PET-CT scan, 6 months later, showed interval development of an intense hypermetabolic focus in the region of the left BOT and right level II LN. He underwent incisional biopsy of the left BOT mass, which was positive for SCC and HPV/p16. The patient was offered RT which he declined and opted to TORS and right ND, which revealed SCC of the left BOT, tumor measured 1.7 cm (pT1) with negative margins, and all the 29 LN were negative (pN0). He was pathologically staged as BOT, pT1pN0 M0, stage I. It is not clear whether the patient developed a second primary in the form of squamous cell carcinoma of the BOT or the primary of the initial undiagnosed MUP has emerged. Subsequently, he was offered adjuvant RT but he persistently declined. The patient did very well until a follow-up PET-CT scan (18 months later) showed a solitary 1.5 × 1.3 × 2.5 cm (Figure 1) markedly hypermetabolic left RPN with SUV of 9. He underwent a CT-guided FNA with pathology positive for SCC. Radiotherapy. The patient was represented at our institutional multidisciplinary head and neck tumor conference and the consensus was to proceed with salvage concurrent chemo-RT. The patient underwent CT simulation and IMRT based treatment. Two PTVs with dose painting were designed. The high dose region included the grossly enlarged FDG avid left RPN which received 70 Gy in 33 fractions. The lower dose for elective regions included the contralateral RPN, BOT, lymphatic in transit, and bilateral neck; all received 54 Gy in 33 fractions (Figures 2(a)-2(c)). He received concurrent chemotherapy in the form of cisplatin 100 mg/m 2 (days 1, 22, 43). He experienced the expected acute RT related toxicity in the form of grade ≤2 (mucositis, dysgeusia, dysphagia, xerostomia, and dermatitis). After a followup of 9 months from the date of salvage chemo-RT completion, the patient is with no evidence of disease. Discussion TORS represents a shift from the conventional treatment paradigm on multiple levels. It does not require a mandibulotomy, mandibular swing, or tracheotomy for airway protection. Avoiding these surgical maneuvers provides patients with a far less morbid procedure [1][2][3]. Although the initial studies of TORS were focused on safety and efficacy, data regarding long-term oncologic results and functional outcomes are now available. Weinstein et al. reported on 47 patients with oropharyngeal SCC [3]. All patients underwent TORS + ND, 57% underwent post-TORS chemo-RT, 28% underwent post-TORS RT alone, and 2% underwent post-TORS chemotherapy alone. With a mean followup of 27 months, the local, regional, and distant control rates were 98%, 96%, and 91%, respectively. The 2-year actuarial overall and disease specific survival rates were 79% and 90%, respectively. Additionally, they reported 2.4% incidence of PEG dependence at 2 years, which is comparable to the current chemo-IMRT induced PEG dependence rates. Similar results have been reported by other investigators [1][2][3][4][5][6][7]. The primary drainage of the oropharynx is to the neck nodes (mainly level II) and to the lateral RPN. RPN are located in the retropharyngeal and parapharyngeal space that is closely related to cranial nerves IX through XII, the internal jugular vein, and the internal carotid artery at the base of skull which make them inaccessible surgically. Metastases to the RPN are most commonly associated with cancers of the nasopharynx, oropharynx, and pharyngeal wall. Notably, these metastases occur primarily along the lateral RPN chains. 
Involvement of the medial chain is extremely rare [8,9]. The dismal clinical impact of RPN metastases has been reported in the literature. McLaughlin et al. reported on 774 patients with SCCHN. They found that the number of cervical nodal groups involved was the most significant factor ( < .0001) relating to the incidence of RPN involvement. The rates of neck relapse (40% at 5 years) and distant metastasis were significantly higher in patients with RPN involvement, and the rates of 5-year disease-free survival and absolute survival were significantly lower. They concluded that RPN involvement is a strong predictor of poor prognosis [10]. RT is used as adjuvant therapy, ±chemotherapy as dictated by the surgical pathology. Due to the rarity of reports that addressed the RPN involvement after TORS, there is no universal consensus on the management of this situation but traditionally salvage RT ± chemotherapy would be recommended. To the best of our knowledge there is no data to support the routine use of adjuvant RT after TORS especially with favorable pathological findings. However, due to the odds of RPN involvement in the context of oropharyngeal tumors, we believe that post-TORS adjuvant RT would be wise, as it comprehensively covers all areas at risk. In cases with adverse prognostic features (positive margins and extra capsular extension) concurrent chemo-RT should be offered [11,12]. In the current report, salvage chemo-RT may offer a successful regional control for the RPN with acceptable toxicities. This case report is particularly important because it is unlikely that a prospective trial will be performed in this patient population. As there is little in the literature to guide treatment, we treated this patient in a similar fashion to salvage treatment strategy to SCCHN with complete success thus far despite the short followup. The real risks of local, regional nodal relapse or metastatic potential after TORS are unknown. Therefore, the appropriate areas to receive higher or lower doses, including nodal levels, are unclear. Another issue that must be considered in an era of depleting health resources is the costeffectiveness of the interventions. Investigators have reported that the costs of multimodality approach (i.e., TORS, ND, RT ± chemotherapy) were 10 times the cost of treatment with chemo-RT alone for operable tumors of the oropharynx. The majority of cost was related to inpatient and outpatient care, rather than surgical procedure [13][14][15]. Thus, while TORS can be an excellent approach, there are important issues that need to be addressed. This becomes a real discussion with the patient to truly present the pros and cons of all treatment approaches, so that the patients can make the right decision for them. Conclusion TORS followed by adjuvant RT seems reasonable in the context of BOT of the head and neck due to the odds of lateral RPN involvement. Further reports are warranted to optimize post-TORS adjuvant treatment.
A Screening of Antineoplastic Drugs for Acute Myeloid Leukemia Reveals Contrasting Immunogenic Effects of Etoposide and Fludarabine Background: Recent evidence demonstrated that the treatment of acute myeloid leukemia (AML) cells with daunorubicin (DNR) but not cytarabine (Ara-C) results in immunogenic cell death (ICD). In the clinical setting, chemotherapy including anthracyclines and Ara-C remains a gold standard for AML treatment. In the last decade, etoposide (Eto) and fludarabine (Flu) have been added to the standard treatment for AML to potentiate its therapeutic effect and have been tested in many trials. Very little data are available about the ability of these drugs to induce ICD. Methods: AML cells were treated with all four drugs. Calreticulin and heat shock protein 70/90 translocation, non-histone chromatin-binding protein high mobility group box 1 and adenosine triphosphate release were evaluated. The treated cells were pulsed into dendritic cells (DCs) and used for in vitro immunological tests. Results: Flu and Ara-C had no capacity to induce ICD-related events. Interestingly, Eto was comparable to DNR in inducing all ICD events, resulting in DC maturation. Moreover, Flu was significantly more potent in inducing suppressive T regulatory cells compared to other drugs. Conclusions: Our results indicate a novel and until now poorly investigated feature of antineoplastic drugs commonly used for AML treatment, based on their different immunogenic potential. Introduction Recent evidence indicated that, under certain circumstances, chemotherapy stimulates the immune system. Indeed, in the last decade, some chemotherapeutic agents used for acute myeloid leukemia (AML) treatment, such as anthracyclines, have been shown to induce a type of cell death that can promote modifications in cancer cells, which activate the immune system against leukemia cells [1][2][3][4]. In particular, the treatment of AML cells with daunorubicin (DNR), but not cytarabine (Ara-C), results in the maturation of dendritic cells (DCs) and in the efficient cross-priming of anti-leukemia T cells [1,[3][4][5][6][7]. This process, immunogenic cell death (ICD), is characterized by the coordinated emission of danger-associated molecular patterns (DAMPs), including the translocation of the endoplasmic reticulum (ER) chaperones such as calreticulin (CRT) and heat shock proteins (HSPs) 70 and 90 on the cell surface, the active secretion of adenosine triphosphate (ATP), the release of high mobility group box 1 (HMGB1) from the nucleus in the extracellular milieu [8][9][10][11][12] and, finally, Only Treatment with DNR and Eto, but Not Flu and Ara-C, Induced Translocation of CRT and HSPs from ER to Plasma Cell Membrane in AML Cells ICD is represented by the coordinated emission of DAMPs, including CRT and HSPs 70 and 90 translocation from the ER to the cell surface, the active secretion of ATP and the active release of HMGB1 from the nucleus to the extracellular milieu [12]. We then in vitro tested and compared the capacity of each drug to induce these events. First, the evaluation of CRT, HSP70 and HSP90 exposure was determined in apoptotic cells by flow cytometry. HL-60, KG-1 or primary AML cells were treated with DNR (500 ng/mL), Ara-C (20 µg/mL), Eto (20 µg/mL) or Flu (70 µg/mL) for 24 h. The protein expression levels obtained by flow cytometry varied considerably among the different conditions ( Figure 2). 
In particular, in the case of CRT, the expression significantly increased from 0.6 ± 0.1%/2.4 ± 0.4%/4 ± 1.8% (HL-60/KG-1/primary AML cells, respectively) in un-treated cells to 39.7 ± 9.7%/37.6 ± 4.9%/22.8 ± 5.5% in DNR-treated cells (p < 0.0001/0.001/0.001, respectively), or 35.5 ± 9.7%/33.5 ± 0.3%/22.2 ± 3.5% in Eto-treated cells (p < 0.001/0.01/0.05, respectively). In contrast, no significant differences were observed when Ara-C (2.9 ± 0.3%/4.3 ± 1.9%/3.3 ± 0.7%) or Flu (3.7 ± 1.5%/3 ± 2%/5.8 ± 3.5%) were used. Similarly, a significant HSP exposure was observed for HL-60, KG-1 and primary AML cells after DNR and Eto treatment, but not after Ara-C and Flu treatment ( Figure 2). The only exception to this trend was represented by primary AML cells for HSP90 protein, where no up-regulation after Eto treatment was observed. As shown in Figure 3, flow cytometry data regarding CRT translocation were also confirmed using immunofluorescence. Taken together, these in vitro data indicate that Eto is comparable to DNR in early ICD event induction. In contrast, Flu treatment is not capable of inducing either CRT or HSP translocation, similarly to Ara-C treatment. Flow cytometry analysis of calreticulin (CRT) and heat-shock protein (HSP) translocation on the cell-surface of acute myeloid leukemia (AML) cells after chemotherapy treatment. The HL-60, KG-1 and primary AML cells were treated with daunorubicin (DNR) (500 ng/mL), cytarabine (Ara-C) (20 µg/mL), etoposide (Eto) (20 µg/mL) and fludarabine (Flu) (70 µg/mL) for 24 h. The percentage of CRT + , HSP70 + and HSP90 + cells (gated on apoptotic Ann-V + cells) was analyzed by flow cytometry. Un-treated cells (no treatment; NT) were used as a negative control. The values are represented as mean ± SEM of 5 independent experiments. * p <0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001 compared to un-treated cells. As shown in Figure 3, flow cytometry data regarding CRT translocation were also confirmed using immunofluorescence. Taken together, these in vitro data indicate that Eto is comparable to DNR in early ICD event induction. In contrast, Flu treatment is not capable of inducing either CRT or HSP translocation, similarly to Ara-C treatment. 2.3. Only Treatment with DNR and Eto, but Not Flu and Ara-C, Induced the HMGB1 Release from the Nucleus to the Extracellular Space of AML Cells After cellular stress, during the late post-apoptotic phase, pro-inflammatory nuclear factor HMGB1 translocates to the cytosol and is consequently released to the extracellular space [12]. After binding to specific receptors on DCs, HMGB1 induces the full maturation of DCs as evaluated by the up-regulation of CD40, CD54, CD80, CD83 and MHC II. To test this event, the HL-60 cells were treated with DNR (500 ng/mL), Ara-C (20 µg/mL), Eto (20 µg/mL) or Flu (70 µg/mL) for 24 h, then fixed, permeabilized, stained and analyzed for HMGB1 expression by immunofluorescence microscopy. As shown in Figure 4A, the extracellular release of HMGB1 was well-documented after DNR and Eto, but not after Ara-C and Flu, similarly to what was observed for CRT and HSP induction. These results were also confirmed when the HMGB1 expression density between un-treated and treated cells was evaluated ( Figure 4B). Collectively, in line with early ICD-related events, the immunofluorescence evaluation confirmed the presence of HMGB1 in the extracellular milieu after DNR (as expected) and Eto treatment, but not after Flu and Ara-C. 
Figure 4. HL-60 cells were treated or not (no treatment; NT) with daunorubicin (DNR) (500 ng/mL), cytarabine (Ara-C) (20 µg/mL), etoposide (Eto) (20 µg/mL) and fludarabine (Flu) (70 µg/mL) for 24 h. (A) The release of HMGB1 (FITC-conjugated) from the nucleus (DAPI-stained) to the cytoplasm and then the extracellular space was visualized by immunofluorescence microscopy. One representative experiment for each drug is reported. Bar 20 µm. (B) Quantitative analysis of HMGB1 fluorescence intensity outside the nucleus in un-treated and treated HL-60 cells. A representative field was used for quantification. The signal outside the nucleus was measured by densitometry (n = 21; randomly selected cells). The cells are grouped in classes of fluorescence intensity and plotted relative to HMGB1 expression.

2.4. Only Treatment with DNR and Eto, but Not Flu and Ara-C, Induced ATP Release to the Extracellular Space of AML Cells

One of the most distinctive features of ICD is represented by the extracellular release of ATP from dying cells during the late apoptosis phase [17]. Autophagy-dependent active secretion of ATP, which binds purinergic receptors on DCs, promotes their recruitment, survival and differentiation [18]. ATP release was tested after 24 h by luminescence in HL-60, KG-1 or primary AML cells treated with DNR (500 ng/mL), Ara-C (20 µg/mL), Eto (20 µg/mL) or Flu (70 µg/mL) for 24 h. For HL-60 cells, comparable ATP levels were observed after DNR (fold change of luminescence 20.7 ± 5.8) and Eto treatment (22.4 ± 13.8), both significantly increased as compared to un-treated cells (p < 0.05), whereas both Ara-C and Flu treatment showed a lower capacity to induce ATP release (Figure 5). A similar trend of higher ATP release after DNR and Eto treatment was observed also for KG-1 cells (Figure 5). Finally, a significant ATP release was observed after DNR treatment in primary AML cells (fold change of luminescence 9.9 ± 2.9) compared to un-treated cells (p < 0.05), whereas both Ara-C and Flu treatment showed a lower capacity to induce ATP release (Figure 5). Collectively, these data indicate that, among the drugs that have been proposed to increase the efficacy of the conventional chemotherapy backbone including DNR and Ara-C, Eto has a similar and comparable capacity to DNR in inducing both early and late ICD events. On the contrary, Flu has a low if any effect, proving similar to Ara-C.
2.5. All Tested Drugs Induced DC Maturation Mediated by Chemotherapy-Treated HL-60, KG-1 and Primary AML Cells, but Only DNR and Eto Induced CD83 Up-Regulation, and Only DNR, Ara-C and Eto Induced CCR7 Expression

When emitted in the correct spatiotemporal context, the DAMPs recruit DCs in the proximity of ICD and activate them to engulf TAAs [12]. As a consequence, DCs become fully matured and competent in skewing cytokine production toward immunostimulation, a process which is strictly necessary for T cell priming and activation. The HL-60, KG-1 and primary AML cells were treated with DNR (500 ng/mL), Ara-C (20 µg/mL), Eto (20 µg/mL) or Flu (70 µg/mL) for 24 h and loaded in immature DCs (immDCs). After 24 h, the DC phenotype was evaluated by flow cytometry. As shown in Figure 6, all tested drugs induced a significant up-regulation of one or more DC-maturation markers, but only DNR significantly improved the expression of all of these (compared to immDCs), at least for HL-60 cells. For KG-1 and primary AML cells, DNR treatment significantly up-regulated three of four DC-maturation markers (Figure 6). Very similar results were obtained after Eto treatment. In particular, a significant up-regulation of CD80, CD86 and CCR7, which is required for DC migration to lymph nodes, was observed in HL-60 cells. Interestingly, only DNR, Eto (in HL-60 and KG-1 cell lines) and also Ara-C (in KG-1 cell line) induced CCR7 expression among all tested cells (Figure 6). Taken together, all four tested drugs induced a significant up-regulation of CD86 suggesting a partial DC maturation caused by the inflammatory microenvironment after chemotherapy treatment.
Only DNR and Eto have the capacity to induce full DC maturation, including the up-regulation of CD83 and CCR7, which is known to regulate the capacity of DCs to migrate into T-cell enriched areas of draining lymph nodes.

Figure 6. The DC phenotype was evaluated by flow cytometry. Un-loaded immature DCs (immDCs) were used as a negative control. The values are represented as mean ± SEM of 5/3/3 independent experiments for HL-60/KG-1/primary AML cells, respectively. * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001 compared to immDCs.

DCs loaded with chemotherapy-treated HL-60, KG-1 or primary AML cells, or un-loaded immDCs, were then used in co-culture with CD3+ T cells, and T-cell proliferation was evaluated by flow cytometry after 5 days. For HL-60 cells, the autologous un-loaded immDCs induced a modest proliferation (proliferation index 1.89) compared to un-stimulated CD3 (1), as shown in Figure 7. The proliferation status improved significantly after adding DCs, previously loaded with HL-60 treated with DNR or Eto, as compared to un-stimulated CD3. In particular, Eto induced the highest proliferation index (4.1) (Figure 7). On the contrary, Flu treatment was capable of inducing little increase of T-cell proliferation (2.2) over un-loaded immDCs (1.89). A significant up-regulation of the proliferation index was observed also after DNR treatment for KG-1 cells (3.1) and after Eto treatment for primary AML cells (2.5) compared to un-stimulated CD3 (1), as shown in Figure 7. Collectively, our data demonstrate that Eto can be considered an ICD inducer comparable to DNR. In particular, along with the induction of all ICD-related events and full DC maturation, Eto treatment was the most powerful among the tested drugs in stimulating T-cell proliferation, thus suggesting a significant capacity to activate the immune response. On the contrary, Flu had weak immunogenic potential and can be considered a non-immunogenic chemotherapy drug.

Flu-Treated Leukemic Cells Induced a Population of Suppressive T Regulatory Cells via DCs

To test the tolerogenic potential of the drugs, the induction of T regulatory cells (Tregs) was evaluated. DCs loaded for 24 h with HL-60, KG-1 and primary AML cells treated with DNR (500 ng/mL), Ara-C (20 µg/mL), Eto (20 µg/mL), Flu (70 µg/mL), or un-loaded immDCs were used in co-culture with allogeneic T cells. After 5 days, the total Tregs induction, characterized by the expression of CD3+CD4+CD25+/high CD127−/low, as well as the suppressive Tregs subpopulation, characterized by the expression of CD3+CD4+CD25high CD127−/low CD45RA− FOXP3+/high, was evaluated by flow cytometry. As shown in Figure 8A, none of the tested drugs significantly induced a total population of Tregs, but, interestingly, Flu induced a significant number of suppressive Tregs (Figure 8B), as compared to other drugs.
In particular, DCs loaded with HL-60 cells treated with Flu induced 3.5 ± 0.8 (fold change) of suppressive Tregs compared to un-loaded DCs (p < 0.0001) or DCs loaded with DNR-/Ara-C-/Eto-treated HL-60 cells (0.9 ± 0.2/1.1 ± 0.1/1.6 ± 0.4, respectively; p < 0.05/p < 0.001/p < 0.0001). A similar pattern was observed when DCs loaded with primary AML cells treated with Flu (fold change 7.5 ± 0.5) were used for Tregs induction compared to un-loaded DCs (p < 0.0001) (Figure 8B). Similarly to HL-60 cells, DCs loaded with primary cells treated with Flu also induced a significant up-regulation of suppressive Tregs compared to DNR/Ara-C/Eto (p < 0.05/p < 0.0001/p < 0.0001, respectively). For DCs loaded with Flu-treated KG-1 cells, only a trend in suppressive Tregs up-regulation was observed (fold change 2.7 ± 0.1). Moreover, a more in-depth characterization of Flu-induced suppressive Tregs revealed an up-regulation of programmed cell death protein 1 (PD-1) expression, indicating their potentiality and effector function (Figure 8C). The fold change of the mean fluorescence intensity (MFI) of PD-1 expressed on suppressive Tregs increased to 4.9 ± 0.7 when DCs loaded with Flu-treated HL-60 were used (p < 0.0001 compared to un-loaded DCs). Similarly, DCs loaded with KG-1 or primary AML cells also induced a significant up-regulation of PD-1 MFI on suppressive Tregs to 3.9 ± 0.3 or 4.0 ± 0.3, respectively (fold change), after Flu treatment compared to un-loaded DCs (p < 0.05 or p < 0.01, respectively) (Figure 8C). Interestingly, we observed an up-regulation of PD-1 also after DNR and Ara-C, but not Eto treatment, when DCs loaded with HL-60 cells were used to induce Tregs. These data are in line with previously obtained results highlighting the contrasting immunological effect of Flu and Eto treatment.

Figure 7. Flow cytometry analysis of CD3+ T-cell proliferation mediated by dendritic cells (DCs) loaded with chemotherapy-treated HL-60, KG-1 and primary AML cells.
The HL-60, KG-1 and primary AML cells were treated with daunorubicin (DNR) (500 ng/mL), cytarabine (Ara-C) (20 µg/mL), etoposide (Eto) (20 µg/mL) and fludarabine (Flu) (70 µg/mL) for 4 h and loaded in immature DCs (immDCs) for 24 h. After 24 h, un-loaded immDCs or DCs loaded with treated HL-60, KG-1 and primary AML cells were used as a stimulus for CD3+ T cells for 5 days. The proliferation index of CD3+ T cells was then analyzed by flow cytometry and expressed as fold change. Un-stimulated CD3+ T cells were used as reference and set as 1. The values are represented as mean ± SEM of 5/3/3 independent experiments for HL-60/KG-1/primary AML cells, respectively. * p < 0.05; ** p < 0.01; **** p < 0.0001 compared to un-stimulated CD3+ T cells.

Figure 8. The values are represented as mean ± SEM of 5/3/3 independent experiments for HL-60/KG-1/primary AML cells, respectively. CD3 cells stimulated with un-loaded immDCs were used as reference and set as 1. * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001 compared to CD3 cells stimulated with un-loaded immDCs.

Taken together, these data indicate Flu as a non-immunogenic chemotherapy drug with suppressive effects on the immune system and are in line with previous results obtained in this study.

Discussion

Our results demonstrate that, among the four anti-leukemia drugs commonly used in AML treatment, Eto is comparable to DNR in ICD-related event induction, whereas Flu, similarly to Ara-C, has a weak immunogenic effect and, interestingly, may increase Tregs. In recent years, a growing body of evidence has shed new light on the composition of the immunological microenvironment in AML patients. Multiple and contrasting aspects of T cell function are operative in AML bone marrow (BM) at diagnosis, such as activation along with exhaustion and senescence. These data reveal the capacity of AML, similarly to solid tumors, to shape and edit anti-leukemia immune response. Although in solid tumors the effect of some chemotherapy and targeted agents on the tumor immunological microenvironment is well-established [41], the impact of chemotherapy on immune response in AML has not been extensively investigated.
Recent evidence indicates the elasticity of AML cells in modulating CD8 + T cell responses and the plasticity of their signatures upon chemotherapy response, which is capable of reversing some dysfunctional features of BM-infiltrating T cells [42]. These data extend to the AML field the notion that chemotherapy drugs are important not only to directly eliminate tumor cells but also to induce or reinforce immune system responses that may be crucial for the eradication of chemo-resistant malignant cells [43]. In this scenario, our work expands our knowledge regarding the immunogenic potential of the chemotherapy drugs that are commonly used as an induction regimen in AML patients. We and others have previously demonstrated that DNR is a very strong ICD inducer [1,3,44], whereas Ara-C has a weak immunogenic capacity. Interestingly, Eto and Flu, which are alternatively combined with the conventional DNR and Ara-C-based induction regimen with the aim of increasing its effectiveness, proved to be profoundly different in their immunogenic activity. Very similarly to DNR, Eto is a strong ICD inducer. Recently, it was shown that Eto induces cell apoptosis through a mechanism involving the ER stress pathway [45]. In the ICD context, it was well demonstrated that phosphorylation of eukaryotic initiation factor 2 (eIF2α) is essential for the ER stress response and is correlated with CRT/ERp57 complex exposure, leading to DC activation in various tumor models [46][47][48][49]. These findings support our hypothesis that a molecular target of Eto inducing ICD process could be the eIF2α inducing the ER stress. Moreover, Cheng et al. demonstrated that Eto in combination with 2-deoxyglucose (2-DG; inhibitor of glycolysis), but not 2-DG alone, induces ICD in mouse lymphoma model, and that this effect was at least partially mediated through CRT exposure on the plasma membrane. This is a first sign that Eto also has immunogenic properties in other tumor model [45,50]. Contrary to DNR and Eto, Flu, more similarly to Ara-C, reduced ICD capacity and, interestingly, may directly exert a tolerogenic function by inducing a population of suppressive Tregs. These data are intriguing and could be correlated with the recent evidence that different immunological landscapes exist in cancer, including AML, that are associated with important and clinically relevant differences in chemosensitivity as well as in response to immunotherapy approaches [51]. In AML, inflammatory patterns as well as inhibitory signals, such as the expression of immune checkpoint receptors on leukemic cells [52], have been recently associated with high-risk cytogenetic and molecular profile, which in turn associates with resistance to standard chemotherapy, including anthracyclines [31,53]. In this scenario, we may speculate that the use of proinflammatory chemotherapy drugs, such as Eto and DNR, may further increase a condition of T-cell exhaustion, correlated with inflammation. In contrast, anti-inflammatory, even tolerogenic, drugs, such as Flu, may favor the induction of anti-tumor immunity by preventing T-cell anergy and exhaustion. Indeed, a better understanding of the immunologic effects of chemotherapy drugs, along with the use of immunogenomics in the in-depth characterization of the AML microenvironment, may guide the clinical choice toward a more personalized and immunological-driven use of chemotherapy and targeted agents in AML. 
It is well-known that the AML microenvironment is mostly enriched in Tregs, which interacts with effector T cells, thus crucially dampening the anti-leukemia immune response and favoring leukemia immunological escape [54,55]. In particular, the role of Tregs in AML is very important for both the characterization of the BM microenvironment composition before chemotherapy and the prediction of response to chemotherapy [56][57][58][59]. Indeed, as compared to healthy individuals, AML patients at diagnosis may have higher numbers of Tregs, whose frequency is directly correlated with response to chemotherapy [57,60], and a rapid turnover of Tregs after chemotherapy has been demonstrated in AML patients [58]. Our group has recently addressed the mechanisms by which DNR and Ara-C may contribute to modify the immune response in AML patients and in a mouse model of AML [1]. Briefly, we demonstrated that especially DNR, along with the activation of anti-leukemia immunity, may also induce tolerance by increasing the number of leukemia-infiltrating tolerogenic DCs and, more importantly, by expanding a population of Tregs. These findings suggested that Tregs induced after DNR treatment may play an important regulatory role in the choice between tolerance and immunity in response to chemotherapy-treated dying leukemia cells and are in line with other recent studies which use preclinical models of self-tolerance and autoimmunity [61]. The in vitro results of the present study extend our knowledge on the tolerogenic capacity of other drugs commonly used in the therapy of AML, such as Eto and Flu. In particular, Flu proved to be a potent inducer of Tregs when used to treat AML cells before in vitro cultures, whereas Eto has a weak capacity to induce Tregs, while maintaining T-cell stimulatory function. These results are not surprising, given the well-established immunosuppressive activity of Flu, especially in the context of lymphomas and chronic lymphocytic leukemia [62]. However, to our knowledge, this is the first demonstration that, when used to pulse DCs in T-cell cultures (cross-priming effect), Flu-treated AML cells may potently act as a Treg inducer, thus providing a new immunosuppressive mechanism associated with Flu administration. Moreover, our data indicate that Tregs obtained after cultures with DCs pulsed with Flu-treated AML cells have the highest expression of PD-1 among all tested drugs. It is known that PD-1 could be highly expressed on Tregs and is fundamental for the inhibition of effector T cell function as well as for the induction and maintenance of T cell tolerance via Tregs [63,64]. Accordingly, the expression of PD-1 on Tregs correlates with their immunosuppressive activity [65][66][67], and the accumulation of PD1 + Foxp3 + Tregs within the tumor microenvironment of solid tumors strongly supports their immunosuppressive potential [65,67]. In AML patients, PD-1 expression was observed in different T-cell subpopulations, including Tregs [52]. Our group recently confirmed these data in AML, showing higher PD-1 expression on Tregs in both in vivo mouse models and AML patients after chemotherapy treatment [1]. Since the PD-1/PD-1 ligand axis represents a target of therapy in many clinical studies for solid tumors and leukemias including AML [44], the up-regulation of PD-1 on Tregs after AML treatment with Flu may have interesting clinical implications. 
In particular, the use of anti-PD-1 checkpoint inhibitors in combination with chemotherapy has the potential of targeting Tregs, which prominently contributes to the tolerogenic microenvironment in AML. Taken together and with the limitations of an in vitro study, the present investigation expands the knowledge on the immunogenic and tolerogenic potential of the chemotherapy drugs commonly used in the therapy of AML. Among these, important differences have been observed, indicating that, particularly in an era when immunotherapy is being included in the clinical stage of AML treatment, the immunological perspective of chemotherapy should be taken into consideration in therapy decision-making. As a future goal, a set of in vivo studies are planned to confirm these in vitro data. CRT and HSP70 and 90 Staining by Flow Cytometry One hundred thousand HL-60, KG-1 or primary AML cells treated with DNR, Ara-C, Flu and Eto as described above or un-treated were stained with human-specific primary monoclonal antibodies (mAbs) for CRT (AB92516; Abcam, Cambridge, UK), HSP70 (AB181606; Abcam) and HSP90 (AB13495; Abcam) at dilution ratios of 1:100, 1:230 and 1:250, respectively, in blocking solution (PBS/FBS 2%). After 30 min of incubation, the cells were washed with cold PBS and stained for another 30 min in the dark with secondary mAb Donkey Anti-Rabbit IgG-AlexaFluor 647 (AB150075, Abcam) diluted at 1:5000. After a final wash with cold PBS, the cells were stained with Ann-V by Annexin-V-FLUOS Apoptosis Detection Kit (Roche) for 15 min and then analyzed on flow cytometer BDAccuriC6 (BD Biosciences, Franklin Lakes, WI, USA). At least 10,000 events were analyzed. The cells stained only with secondary mAb for each condition were used as negative fluorescence control. Quantification of ATP Release HL-60, KG-1 or primary AML cells were seeded in 96-well flat bottom plates (1 × 10 6 /mL) and treated with chemotherapeutic agents DNR, Ara-C, Flu and Eto (as described above), for 4 h. Cells were then washed and after 20 h ATP quantification in the supernatants was performed in triplicate using ENLITEN rLuciferase/Luciferin Reagent (Promega, Madison, WI, USA), according to the manufacturer's instructions. Luminescence was measured at the single-tube luminometer Glomax 20/20 (Promega), with 10-second RLU (relative light units) signal integration time. CRT and HMGB1 Staining by Immunofluorescence HL-60, KG-1 or primary AML cells (500,000/condition) treated with DNR, Ara-C, Flu and Eto, as described above, or un-treated were re-suspended in PBS and centrifuged by Cytospin (Shandon-Elliott Instruments Limited, Runcorn, UK) for 10 min at 1000 rpm, speed 40. The samples were then fixed with 4% paraformaldehyde for 10 min. After repeated washing with cold PBS, the cells were stained according to the following protocols. The images were acquired and processed by Axiovert 40 CFL microscope (Carl Zeiss Microscopy-LLC, New York, NY, USA). CRT Exposure First, the cells were treated with blocking solution (PBS + 5% bovine serum albumin; BSA from Sigma-Aldrich) for 30 min. Then, primary antibody anti-human calreticulin (AMAB29516; Abcam) diluted at 1:100 in blocking solution was used for staining. After 30 min of incubation, the cells were washed with cold PBS and stained with secondary antibody Alexa Fluor 488 anti-rabbit (A27034; Life technologies/ThermoFisher Scientific, Waltham, MA, USA) at a concentration of 5 µg/mL in blocking solution for 30 min in the dark. 
After the last wash in cold PBS, one drop of ProLong Gold Antifade Mountant reagent with DAPI (Thermofisher) was added. The slide was closed with transparent nail polish and stored at −20 • C in the dark before microscope analysis. For quantitative analysis of CRT + cells by immunofluorescence, a total of 100 cells were used for quantification. HMGB1 Release First, the cells were washed with 0.1% PBS-Tween (Sigma Aldrich) and permeabilized with 0.2% PBS-Triton (Sigma-Aldrich) for 10 min. After another washing with 0.1% of PBS-Tween, the cells were incubated for 30 min in blocking solution (PBS + 5% of BSA) and stained with primary antibody anti-human HMGB1 (3935S; Cells Signaling, Danvers, MA, USA) diluted at 1:100 in blocking solution for 30 min. The cells were then washed with 0.1% PBS-Tween and stained with secondary antibody Alexa Fluor 488 anti-rabbit (A27034; Life technologies/ThermoFisher Scientific) at a concentration of 5 µg/mL in blocking solution for 30 min in the dark. After the last wash in 0.1% PBS-Tween, one drop of ProLong Gold Antifade Mountant reagent with DAPI (Thermofisher) was added. The slide was closed with transparent nail polish and stored at −20 • C in the dark before microscope analysis. The immunofluorescence intensity was measured by densitometry using Photoshop (Adobe Photoshop software 6.0). The cells were grouped in classes of low, intermediate and high fluorescence intensity (6 cells per group). The values were corrected by pixel number to compare cells with different dimensions [68]. DC Generation, Pulsing and Maturation Human monocyte-derived DCs were generated by a 5-day culture of CD14 + cells in complete RPMI in the presence of granulocyte-macrophage colony-stimulation factor (50 ng/mL; GM-CSF Endogen, Worldwide, St. Louis, MO, USA) and IL-4 (800 U/mL; MiltenyiBiotec), as previously described [69,70]. DC maturation was induced by pulsing with treated or un-treated HL-60, KG-1 or primary AML cells, as described in Section 4.2. For DC pulsing, chemotherapy-treated HL-60, KG-1 or primary AML cells were cultured for 20 h with immDCs (2:1 ratio) in complete RPMI. After culture, mature and immDCs were tested for immunophenotype and used for proliferation and Treg induction testing. Proliferation Test Twenty thousand immDCs, chemotherapy-treated HL-60, KG-1 or primary AML cells pulsed DCs (as described in Section 4.6) were irradiated at 3000 cGy and co-cultured for 5 days in complete RPMI with 200,000 autologous CD3 + T cells at the ratio of 1:10. The CD3 + T cells were stained before the co-culture with 5 µM of carboxyfluorescein succinimidyl ester (CFSE; Abcam). The proliferation was analyzed on a FACS Canto II (BD Biosciences) flow cytometer, and the proliferation index was calculated using FCS express 6 software.
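The per-pixel correction and intensity binning described for the densitometric HMGB1 analysis are simple arithmetic. The following minimal Python sketch illustrates the idea; the intensity values, pixel counts and class cut-offs are hypothetical placeholders, not measurements from this study.

# Sketch: normalize integrated fluorescence by pixel count so that cells of
# different sizes become comparable, then bin cells into intensity classes.
cells = [  # (integrated_intensity, pixel_count) per randomly selected cell (assumed values)
    (12500, 480), (30400, 510), (8800, 350), (45200, 620), (15300, 400),
]

per_pixel = [intensity / pixels for intensity, pixels in cells]  # pixel-number correction

def intensity_class(value, low_cut=30.0, high_cut=60.0):
    # Assumed cut-offs for low / intermediate / high fluorescence classes.
    if value < low_cut:
        return "low"
    return "intermediate" if value < high_cut else "high"

for value in per_pixel:
    print(f"{value:.1f} -> {intensity_class(value)}")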
v3-fos-license
2019-04-29T13:16:31.690Z
2015-05-10T00:00:00.000
85526289
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.4172/2169-0022.1000168", "pdf_hash": "7e07685e99357f53f4abbdc7d8010220042c10c8", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44856", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "sha1": "591035550b7aee3a4e7c16d2a60701273854e530", "year": 2015 }
pes2o/s2orc
Effect of SiC Particles on Dielectric Properties of Epoxy Reinforced by (Bi-Directional) Glass Fiber

Introduction

Pure polymers are generally electrical insulators in their nature, so they are applied as electrically insulating materials. Polymers contain a very low concentration of free charge carriers, and thus they are nonconductive and transparent to electromagnetic radiation [1]. Plastic polymers have chemical reaction properties similar to those of small molecules, though the polymers themselves are larger in size. This means that a range of different factors, including thermal conditions, stress cracking, or the diffusion of chemical additives, can alter the molecular structure, and thus the fundamental properties, of most plastic polymer materials [2]. Some changes, such as unintentional reduction in molecular weight, can lead to plastic degradation and product failure, while others can supplement or improve a polymer's characteristics [3]. Epoxy is one of the most important thermosetting polymers. Due to its high chemical and corrosion resistance, good mechanical properties and low thermal conductivity, epoxy has been extensively used in various fields including coatings, high-performance adhesives, and composite matrices [4]. Hybrid epoxy composites are required for several industrial applications, such as electrical, optoelectronic and electronic devices, to protect electrical components from short circuiting, dust and moisture. In the electronics industry, epoxy resins are the primary resin used in over-molding integrated circuits, transistors and hybrid circuits, and in making printed circuit boards [5]. Development of a hybrid polymer composite retaining both types of characteristics is considered to be an active field of research. This research work aims to develop a hybrid polymer composite material using ceramic particles such as SiC together with glass fiber, which will retain the advantages of both fiber-reinforced and particle-reinforced composites and emerge as a viable alternative to the existing polymer composites [6,7]. The preparation of the composites is carried out using the hand lay-up method, and hence no expensive machinery/equipment is required. Many researchers have studied the effect of fillers such as glass fiber, ferrite, silica, SiC and Al2O3 on the dielectric properties of epoxy over wide frequency and temperature ranges [8]. There are many types of glass fiber, with the most common being E-glass fiber, C-glass fiber and S-glass fiber.
E-glass fiber is the most common in use, because it draws well and has good tensile and compressive strength and stiffness, and good electrical and weathering properties [9].

Abstract: In this research, the dielectric properties of epoxy composites were studied as a function of frequency and of the weight fraction and particle size of the fillers. Composite plates were prepared by incorporating glass fiber and SiC particles of 0.1 µm, 3 µm and 40 µm diameter at 10, 20, 30 and 40 weight percent in the epoxy matrix. The experiments were performed to measure the dielectric constant and electrical conductivity in the kHz range.

Experimental

SiC particles with different particle sizes (0.1, 3 and 40 µm) were used at weight percentages of 10%, 20%, 30% and 40% and then mixed with epoxy reinforced by two layers of glass fiber (0°–90°). Euxit 50 resin K (epoxy), supplied by the Egyptian Swiss Chemical Industrials Co., was used with a formulated amine hardener in a ratio of 3:1 for curing. The epoxy resin is a transparent liquid with low viscosity. For preparing the composite samples, a weighed quantity of SiC powder was first thoroughly mixed with a measured volume of epoxy resin. Then a half volume of hardener was added and the resulting mixture was well mixed so as to obtain a uniform composition. For the a.c. measurements, each sample was shaped like a disc with a diameter of 30 mm and a thickness of 1 mm. A thin aluminum layer was deposited on both sides of each sample by an evaporation technique under a pressure of 10^-9 bar, using an Edwards (E306A) coating unit, to minimize the contact resistance and space charge effects.

Electrical Measurements

The broadband dielectric properties were measured by a precision LCR meter (HP-4275) at a constant temperature and various frequencies (10^3 Hz to 10^6 Hz). The dielectric constant ε′ was calculated from the measured capacitance using

ε′ = C d / (ε0 A) (1)

where C is the capacitance (F), ε0 is the free-space dielectric constant (8.85 × 10^-12 F/m), A is the capacitor area (m²), and d is the thickness of the sample (m). The a.c. conductivity (σ_a.c.) was calculated from the relation

σ_a.c. = ω ε0 ε′ tan δ (2)

where tan δ is the loss tangent, calculated from the relationship

tan δ = ε″ / ε′ (3)

with ε″ the dielectric loss and ω the angular frequency. In this paper, the dependence of ε′ and σ_a.c. of the pure and hybrid (glass fiber, SiC)–epoxy composites on frequency, particle size and weight fraction is studied.

Dielectric constant

Frequency dependence: It can be seen from Figures 1-3 that the dielectric constant of the unfilled epoxy and the epoxy composites decreases with increasing frequency. The dielectric constant is a frequency-dependent parameter in polymer systems. In a typical epoxy system based on an epoxy resin cured with a hardener, as in the present case, the epoxy component of the dielectric constant is governed by the number of orientable dipoles present in the system and their ability to orient under an applied electric field [10]. Usually, the molecular groups which are attached perpendicular to the longitudinal polymer chain contribute to the dielectric relaxation mechanism. At lower frequencies of the applied voltage, all the free dipolar functional groups in the epoxy chain can orient themselves, resulting in a higher dielectric constant value at these frequencies. As the electric field frequency increases, the bigger dipolar groups find it difficult to orient, so the contribution of these dipolar groups to the dielectric constant goes on reducing, resulting in a continuously decreasing dielectric constant of the epoxy system at higher frequencies [11].
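As a numerical illustration of Equations (1)–(3), the short Python sketch below converts a measured capacitance and loss tangent into the dielectric constant and a.c. conductivity for the disc geometry described in the Experimental section. The capacitance and tan δ readings used here are assumed placeholder values, not measurements from this work.

import math

EPS0 = 8.85e-12  # permittivity of free space, F/m

def dielectric_constant(C, d, A):
    # Equation (1): eps' = C*d / (eps0*A), with C in F, d in m, A in m^2.
    return C * d / (EPS0 * A)

def ac_conductivity(freq_hz, eps_prime, tan_delta):
    # Equations (2) and (3): sigma_ac = omega * eps0 * eps' * tan(delta).
    omega = 2.0 * math.pi * freq_hz
    return omega * EPS0 * eps_prime * tan_delta

# Sample geometry from the Experimental section: 30 mm diameter disc, 1 mm thick.
A = math.pi * (0.030 / 2.0) ** 2   # electrode area, m^2
d = 1.0e-3                         # thickness, m

# Hypothetical LCR-meter readings: (frequency in Hz, capacitance in F, tan delta).
readings = [(1e3, 25e-12, 0.020), (1e4, 23e-12, 0.015), (1e5, 21e-12, 0.012)]

for f, C, tan_d in readings:
    eps_p = dielectric_constant(C, d, A)
    sigma = ac_conductivity(f, eps_p, tan_d)
    print(f"{f:>8.0f} Hz  eps' = {eps_p:5.2f}  sigma_ac = {sigma:.3e} S/m")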
Similarly, the inherent dielectric constants of the SiC particles and the glass fiber also decrease with increasing frequency of the applied field. This combined decreasing effect of the dielectric constant for both the epoxy and the filler results in a decrease in the dielectric constant of the epoxy composites as the frequency of the applied field increases. This behavior is in agreement with [10,12].

Weight fraction and particle size dependence: As shown in the same figures, the dielectric constant increases as the fraction of fillers in the polymer matrix is increased, because the system becomes more heterogeneous than the pure epoxy as more filler is added to it [13]. The increase in dielectric constant with increasing filler content is attributed to the formation of clusters. A cluster may be considered as a region in the polymer matrix where particles are in physical contact or very close to each other. The average polarization associated with a cluster is larger than that of an individual particle because of an increase in the dimensions of the inclusion and, hence, a greater interfacial area. Similar results have been reported in [10,13,14]. The effect of filler size on the dielectric constant of the epoxy composites can also be seen in the same figures. The 40 µm composites show a higher dielectric constant value than the small-particle-sized composites. There can be two reasons for this observation: (1) epoxy chain immobility in the small-particle-sized composites and (2) the influence of the SiC permittivity. It has been reported that the permittivity of bulk SiC in small-sized particle composites is almost the same as or higher than that of coarse-grain SiC [14]. So, the higher dielectric constant value in the 40 µm composites is probably due to the fact that there is no restriction of the mobility of the epoxy chains in them, in contrast to the case of the small-sized particle composites [10].

Electrical conductivity

Frequency dependence: The variation in electrical conductivity of the composites as a function of frequency is shown in Figures 4-6. The electrical conductivity slowly increases as the frequency is increased in the kHz range. This behaviour of the composite materials may basically be attributed to the high dislocation density near the interface. Electrical conductivity in turn depends on the number of charge carriers in the bulk of the material, the relaxation time of the charge carriers and the frequency of the applied electric field. Since the measurement temperatures are maintained constant, their influence on the relaxation times of the charge carriers is neglected. Over the current frequency range of measurement, charge transport will be mainly dominated by lighter electronic species [10]. In this situation the electrical properties of the filler become almost dominant, since a network may start to connect filler particles to each other and a new kinetic path may be formed.

Weight fraction and particle size dependence: The general theory to explain the conduction mechanism of filler (particle, fiber) filled polymer composites is the "theory of conductive paths", which suggests that it is the existence of conductive paths (fiber and particle contacts) that results in the conductivity of the composites. With increasing filler content, the number of conductive paths among the fillers increases and the average distance between the fillers becomes smaller; thus, the resistivity of the composites decreases and the electrical conductivity increases.
The addition of a small amount of small-sized particles helps to build the conductive network and lowers the resistivity of the composite. Yet once the network is connected, the addition of further small particles seems only to increase the relative contribution of the contact resistance between the particles. Due to its small size, the small-sized SiC powder contains a larger number of particles when compared with the 40 µm powder. This large number of particles should be beneficial to the interconnection between particles. However, it also inevitably increases the contact resistance. As a result, the overall effect is an increase in resistivity upon the addition of small-sized SiC particles, which leads to a lower electrical conductivity for the small-sized particles when compared with the 40 µm particles [15,16].

Conclusions

The dielectric constant of the epoxy composites with SiC and glass fiber decreases with an increase in frequency. The dielectric constant and electrical conductivity increase with an increase in the weight fraction of fillers, which has been attributed to interfacial polarization. The electrical conductivity increases with increasing frequency, since a network may start to connect filler particles to each other and a new kinetic path may be formed. In this research, the epoxy composites with 40 µm particles showed a higher dielectric constant and electrical conductivity than the 3 µm and 0.1 µm composites.
v3-fos-license
2020-03-26T10:30:38.403Z
2020-03-20T00:00:00.000
216131940
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2073-8994/12/3/482/pdf", "pdf_hash": "e8934ceec20f67df6c594b6b74a56b4381921fb3", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44857", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "0b49d95c5a67e4c03550bc014c27e48f916cb29d", "year": 2020 }
pes2o/s2orc
Classification of Guillain–Barré Syndrome Subtypes Using Sampling Techniques with Binary Approach : Guillain–Barré Syndrome (GBS) is an unusual disorder where the body's immune system affects the peripheral nervous system. GBS has four main subtypes, whose treatments vary among them. Severe cases of GBS can be fatal. This work aimed to investigate whether balancing an original GBS dataset improves the predictive models created in a previous study. Balancing a dataset means pursuing symmetry in the number of instances of each of the classes. The dataset includes 129 records of Mexican patients diagnosed with some subtype of GBS. We created 10 binary datasets from the original dataset. Then, we balanced these datasets using four different methods to undersample the majority class and one method to oversample the minority class. Finally, we used three classifiers with different approaches to create predictive models. The results show that balancing the original dataset improves the previous predictive models. The goal of the predictive models is to identify the GBS subtypes by applying Machine Learning algorithms. It is expected that specialists may use the model as a complementary diagnostic tool using a reduced set of relevant features. Early identification of the subtype will allow starting the appropriate treatment for patient recovery. This is a contribution to exploring the performance of balancing techniques with real data.

Guillain-Barré Syndrome

Guillain-Barré Syndrome (GBS) was initially described in 1916 by Guillain, Barré and Strohl. It is a rare acute paralytic polyneuropathy with four principal clinical variants. It is an autoimmune disorder of the peripheral nervous system [1]. GBS is characterized by a fast development, normally from a few days up to four weeks, with an incidence of close to one to two in 100,000 people. It occurs in adults and children. GBS can damage the nerves controlling movements, pain, temperature, and touch sensations [2]. In critical cases, GBS may lead to respiratory failure and can also be fatal. The progression of GBS can be described in three phases: 1. Initial phase: evolution of symptoms, lasting days up to four weeks. 2. Plateau phase: lasting weeks to months. 3. Recovery phase: remyelination, lasting weeks to months; critical patients can take a minimum of two years or more. Full recovery is not achieved in some cases. The exact cause is unknown, but GBS is frequently associated with a respiratory or gastrointestinal infection. Cytomegalovirus and Zika are associated with GBS [3]. The GBS subtypes are mainly [4]: AIDP, AMAN, AMSAN and Miller Fisher syndrome (MF). Table 1 describes the characteristics of each of the GBS subtypes [5].

Table 1. Features of GBS subtypes [5]. AIDP: Most common variant (85% of cases). Primarily motor inflammatory demyelination ± secondary axonal damage. Maximum of four weeks of progression. Macrophages invade intact myelin sheaths and denude the axons. AMAN: Motor only with early and severe respiratory involvement. Primary axonal degeneration. Often affects children, young adults. Up to 75% positive Campylobacter jejuni serology. Macrophages invade the nodes of Ranvier where they insert between the axon and the surrounding Schwann-cell axolemma, leaving the myelin sheath intact.
AMSAN: Motor and sensory affection with a critical course of respiratory and bulbar involvement. Primary axonal degeneration with poorer prognosis. Similar to AMAN but also involving ventral and dorsal roots. Abnormality in sensory conduction, although the underlying pathology is not clear.

The first approach in the diagnosis of GBS is based upon the clinical features, since it is a non-invasive method. Nevertheless, diagnostic mechanisms such as cerebrospinal fluid (CSF) analysis and electrodiagnostic studies are useful to determine the specific subtype the patient is suffering from [6]. These methods have several disadvantages since they are invasive and costly. In this exploratory study, we used different sampling methods to balance the GBS multiclass dataset. We aimed to create different predictive models using real data to identify the four main GBS subtypes, applying Machine Learning algorithms. It is expected that specialists may use the model as a complementary diagnostic tool using a reduced set of relevant features. Early diagnosis of the GBS subtype is essential due to the rapid progress of this disorder. The treatments vary according to the subtype contracted. Sequelae and economic costs can be high unless proper treatment is started immediately.

Imbalanced Data Classification

A dataset is imbalanced when one of its classes has fewer instances (minority class) with respect to the other class (majority class) [7]. One instance is a row in a dataset. For this study, there are 129 instances that belong to patients diagnosed with some type of GBS. Classes are the way the data are grouped in a dataset. For example, in this work, there are four classes in the original dataset. Each class represents a subtype of GBS. Standard classifiers are designed to work with balanced datasets. When a dataset is imbalanced, the classifiers take the majority class for decision making, ignoring the minority class. This affects the performance of the classifiers because, in real-life cases, it is generally the minority class that needs to be identified [8]. For example, in cases of cancer diagnosis, there are more healthy patients than those diagnosed with the disease. If we apply a classifier to imbalanced data to identify cancer patients, the classifier biases the result toward healthy patients (majority class), ignoring cancer patients (minority class). The accuracy will be high; however, it is more important to identify cancer patients than healthy patients. There are two types of imbalanced data. Binary imbalance occurs when a dataset contains two classes and one of them has fewer instances (minority class) than the other class (majority class). On the other hand, multiclass imbalance is present when the dataset has more than two classes and the numbers of instances that form them are unequal with respect to each other [9]. There are three main methods used in the literature to handle imbalanced data: * Algorithm Level: It makes a modification to the algorithm, generally adding more weight to the minority class. This method requires a deep knowledge of the operation of the algorithm to be modified. Each algorithm must be adapted to the dataset to be used. * Data Level: It consists of balancing the training set by matching the majority class with the minority class. This method is known as preprocessing since the modification of the data is done before the application of the classification algorithm. Standard classifiers are designed to work with a balanced dataset.
The advantages of this method are that it is easy to configure and it can be used with any classification algorithm. There are three sampling methods: Undersampling: It consists of eliminating instances of the majority class until their number matches that of the minority class. There are other undersampling variants that eliminate instances in a directed manner, such as noisy instances or instances that are on the border of the decision area. Oversampling: This method adds instances to the minority class until the majority class is balanced with the minority class. There are different variants of oversampling. For example, Random Oversampling (ROS) makes copies of existing instances and adds them randomly. SMOTE is one of the most successful methods for oversampling; it adds synthetic instances to the minority class. There are also variants of SMOTE which have demonstrated great precision. Hybrid: It is the combination of the different Oversampling and Undersampling methods. * Cost-sensitive: Combines the methods of the Data Level and Algorithm Level. It considers the costs associated with misclassification. Preprocessing methods have shown that balancing the training set by oversampling and undersampling of classes significantly improves classifier results on imbalanced data [10][11][12].

The goal of this research was to identify the best algorithm to balance the Guillain-Barré Syndrome (GBS) dataset by applying different data balancing techniques at the data level, oversampling the minority class and undersampling the majority class. In the specialized literature, there are no studies to classify the subtypes of GBS using Machine Learning algorithms. In previous studies [13,14], predictive models were created to classify the four main GBS subtypes using different classifiers. These models, created using an imbalanced dataset, obtained an accuracy of 90%. In this experimental study, the data were preprocessed using different balancing techniques to balance the original dataset, with the objective that the classifiers use balanced data and to know whether it is possible to surpass the previously created models. The results show that balancing the data helps the performance of the predictive models; in some cases, the models improved on the 90% accuracy. In this study, we try to make the number of instances of each subtype symmetrical by applying four different undersampling algorithms (Random Undersampling, RUS; Tomek Link, TM; One Side Selection, OSS; and Neighborhood Cleaning Rule, NCR). Then, we compared these results with those found by the Synthetic Minority Oversampling Technique (SMOTE) using different percentages of oversampling. We binarized the multiclass dataset with two different techniques: One versus All (OVA) and One versus One (OVO). We used three classifiers with different approaches: decision tree (C4.5), Support Vector Machines (SVM) and JRip. Decision tree and JRip create predictive models understandable by humans, and this is an advantage; especially in this case, the models obtained may be useful for physicians to diagnose GBS subtypes. Moreover, C4.5, JRip, and SVM stand out for their excellent results in classification tasks. The goal was to investigate whether data balancing techniques allow creating a predictive model with a statistically significant difference with respect to a predictive model built with imbalanced data.
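A minimal sketch of how this sampling step could be set up with the Python package imbalanced-learn is shown below. The feature matrix and labels are synthetic stand-ins for one of the binary GBS datasets, and the sampler parameters are library defaults rather than the settings used in this study.

# Sketch: balancing one binary GBS dataset with the under/oversampling methods named above.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import (RandomUnderSampler, TomekLinks,
                                     OneSidedSelection, NeighbourhoodCleaningRule)

rng = np.random.RandomState(0)
X = rng.rand(129, 16)                          # placeholder for the 16 relevant features
y = np.array(["AIDP"] * 90 + ["OTHER"] * 39)   # assumed binary (one-versus-all) labels

samplers = {
    "RUS": RandomUnderSampler(random_state=0),
    "TL": TomekLinks(),
    "OSS": OneSidedSelection(random_state=0),
    "NCR": NeighbourhoodCleaningRule(),
    "SMOTE": SMOTE(random_state=0),
}

for name, sampler in samplers.items():
    X_res, y_res = sampler.fit_resample(X, y)
    counts = dict(zip(*np.unique(y_res, return_counts=True)))
    print(name, counts)   # class distribution after resampling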
This article is organized as follows. In Section 2, we show a literature review. In Section 3, we present a description of the dataset, the machine learning algorithms and the performance measure used in the study. Section 4 describes the experimental procedure. In Section 5, we show and discuss the experimental results. Finally, in Section 6, we summarize results, provide conclusions, and suggest future work.

Related Work

In real life, imbalanced data are frequent in cases of medical diagnosis or in the identification of variants of diseases. The main problem occurs because there are more cases of healthy patients than of patients with any given disease. For this type of challenge, researchers have applied data preprocessing techniques which consist of oversampling the minority class or undersampling the majority class. These techniques have shown that balancing datasets significantly improves the performance of classifiers. In [15], Han and coworkers proposed Distribution-Sensitive (DS), an oversampling algorithm for medical diagnosis with imbalanced data. DS analyzes the position of the minority class instances and carefully classifies them into noise samples, unstable samples, limit samples, and stable samples. Each of these samples is processed differently by the algorithm. The objective is to choose the most suitable sample to synthesize new samples. The authors apply sample synthesis methods according to the closeness among surrounding samples, and thus guarantee that the newly synthesized samples and the original minority samples share characteristics. The results showed that the accuracy of the classification algorithm is improved. Bach et al., in 2016 [16], analyzed a dataset of 729 patients. In total, 92.6% belonged to healthy cases and 7% of cases suffered from Osteoporosis. For this imbalanced data, the authors applied oversampling and undersampling methods to detect patients with Osteoporosis. To oversample the dataset, they applied SMOTE. To undersample, they used two different methods, Random Undersampling (RU) and Edited Nearest Neighbours (ENN). Bach found that SMOTE at 300% combined with ENN gave the best results. In Kalwa et al. [17], a smartphone application was used to diagnose melanoma, which is a type of skin cancer considered the most deadly and difficult to treat in advanced stages. The application analyzes images and compares them with 200 images of a public dataset. This research uses SMOTE to oversample the cases of melanoma patients. The results were compared with those obtained without any preprocessing technique, and SMOTE obtained better performance than the non-oversampled data. In [18], Le et al. propose a framework for the detection of self-care problems in children with physical and motor disabilities. This research uses SMOTE to improve the prediction for the SCADI (Self-Care Activities Dataset) dataset. The results show that extreme gradient boosting using SMOTE outperforms Artificial Neural Network, Support Vector Machine and Random Forest (RF). The accuracy of their framework reaches 85.4%. Fazal proposes a Hybrid Prediction Model (HPM) [19]. This study analyzes a dataset to improve early diagnosis of Type 2 Diabetes and Hypertension. HPM consists of Density-based Spatial Clustering of Applications with Noise-based outlier detection, SMOTE, and RF. The authors successfully predict diabetes and hypertension using three benchmark datasets. Elreedy et al. [20] conducted an experimental study to explore SMOTE performance factors, analyzing the relationship between the number of records created and the dataset dimension.
They also analyzed the performance of several classifiers and the effects of applying SMOTE. Finally, they included in the study some variants of SMOTE, such as Borderline_SMOTE1, Borderline_SMOTE2 and ADASYN, and their performance. For this work, they used five public datasets taken from UCI. As a result, they found that SMOTE improves the performance of the classifiers; however, this varies from one type of classifier to another. They found that the more examples of the minority class exist, the greater the accuracy, because the K-nearest-neighbor patterns become closer to each other. They concluded that SMOTE can be used in classification problems with small datasets, since increasing the size of the data improves the classification performance. In [21], Devi and coworkers presented a modification of the Tomek Link undersampling algorithm, based on the fact that, in addition to class imbalance, there are other factors, such as the existence of redundant borderline records and outliers in the data space, that critically reduce the performance of classifiers. They used 10 public UCI datasets and four single classifiers for their experiments. The proposed algorithm facilitates the removal of redundant boundary records rather than simple boundary ones, with the aim of creating a sparse majority region near the decision boundary. This may help convergence towards a balanced class distribution. This undersampling method achieves less loss of information and better performance. Bach et al. [22] compared four different undersampling methods to balance data: Edited Nearest Neighbor, Neighborhood Cleaning Rule, Tomek Link, and Random Undersampling, against their proposed algorithm, called KNN_Order. This algorithm removes records from high-density areas to minimize loss of information. They evaluated the performance of this algorithm using 18 public datasets. In addition to class imbalance and noise, the overlap of instances of different classes affects the performance of classifiers. In [23], the authors proposed removing potentially overlapping data points to tackle binary class imbalance, using neighborhood search with different criteria. This method identifies and eliminates instances of the majority class. They use 66 synthetic datasets and 24 public datasets from the UCI and KEEL repositories in their experiments. These methods were compared with other balancing methods, achieving competitive performance over traditional approaches. In [24], Kovacs et al. performed a detailed comparison of 85 variants of oversampling techniques for the minority class. They used 104 imbalanced datasets as well as four classifiers for their experiments. They found that oversampling leads to better classification results on imbalanced datasets. Regarding SMOTE variants, polynom-fit-SMOTE, ProWSyn, and SMOTE-IPF gave the best results. In [25], the authors introduced Farthest SMOTE (FSMOTE), a modification of SMOTE. This approach enlarges the decision area by considering minority samples closer to the boundary. They compare different oversampling methods: SMOTE, ADASYN, Borderline-SMOTE, and Safe-Level-SMOTE. For the experiments, they used seven datasets and two classifiers: Naive Bayes and SVM. Results showed that FSMOTE improves on the existing techniques. Debashree and coworkers [26] proposed a modification of the Tomek-Link undersampling method. They present a solution to class imbalance and class overlapping, as these two problems affect the performance of standard classifiers.
The objective of their research was overlapping region detection, cleaning up of the overlapping region, undersampling of the majority records, and an effective data-preprocessing framework. The proposed model increases the performance on the minority class while keeping the majority class performance intact. On the other hand, there are several studies employing bioinformatics techniques, such as microarray tests [27]. However, the most significant disadvantage of microarrays is the high cost of a single experiment. Data balancing through sampling methods can be applied to any imbalanced dataset, regardless of the subject. In finance, for example, classification can be improved: in [28], SMOTE was applied to create financial risk models. These models help companies to prevent threats from the external economic environment or from bad financial decisions. In this study, the authors used 2628 Chinese companies listed on the stock exchange. The imbalance occurs because there are more companies with healthy finances (2190, belonging to the majority class) than companies with financial risk (438, belonging to the minority class). They performed three types of experiments. In the first experiment, they used the imbalanced data and applied Adaboost and Support Vector Machine (SVM). In the second experiment, they applied data balancing with SMOTE and subsequently applied Adaboost with SVM. For the third experiment, they executed Adaboost with SVM, but SMOTE was applied at the same time as the classifiers. The results show that balancing the data improved on the models built with the imbalanced data; among the balanced models, the third one showed a significant improvement over the second one. Online banking operations using credit cards increase every day, and with this growth, credit card fraud has also become more common. In a study addressing this problem, the results showed that the best classifiers were Bagging and SVM; SMOTE-ENN obtained the best performance compared to the other oversampling methods, and among the undersampling methods, TL obtained the best performance. Phishing is a technique used by cybercriminals to deceive users and obtain personal information such as passwords, credit card data, and bank account numbers, typically through fraudulent emails. The large amount of mail sent and received can help build models with machine learning algorithms that predict future cyber-attacks. However, most of the emails that reach an inbox are legitimate compared to phishing emails, which results in imbalanced data. In [30], the authors used SMOTE to balance a dataset with 812 instances obtained from the UCI Machine Learning Repository. The dataset is divided into three classes (phishy, suspicious and legitimate). Three algorithms were used to create the models (Support Vector Machine, Random Forests, and XGBoost). The results show that the imbalanced data yield poor performance, while the data balanced using SMOTE achieved better performance.
Dataset
The dataset used in this work consists of records of 129 patients diagnosed with Guillain-Barré Syndrome (GBS). They received treatment for one of the four subtypes of GBS: AIDP, AMAN, AMSAN and MF. The data were collected at the Instituto Nacional de Neurología y Neurocirugía. Table 2 shows the characteristics of the dataset. Table 3 shows the 16 relevant features selected in a previous study [31]. These attributes were selected from the original dataset of 365 features. The features V22, V29, V30, and V31 are integer-valued; the remaining ones are decimal.
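As a small illustration of the class structure just described, the R sketch below loads a GBS-like table and inspects its subtype distribution. It is not taken from the original study: the file name gbs.csv and the label column subtype are hypothetical placeholders.
# Minimal R sketch (illustrative only): inspect the subtype distribution of a
# GBS-like dataset. "gbs.csv" and the "subtype" column are hypothetical names.
gbs <- read.csv("gbs.csv", stringsAsFactors = TRUE)
counts <- table(gbs$subtype)            # absolute frequency of the four subtypes
print(counts)
print(round(prop.table(counts), 3))     # relative frequencies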
Imbalance Ratio
In binary classification, it is common to find real-life cases with highly imbalanced data. An example is credit card fraud detection, where there are usually far more correctly executed operations than fraudulent ones [32]. However, when the number of records of one class is similar to that of another, it is not straightforward to determine whether a dataset is imbalanced. For example, in [33] the researchers classified three different types of pediatric brain tumors with a dataset of 90 patients divided into three classes of 38, 42, and 10 records. In cases like this, there is no consensus among experts in the field on whether there is an imbalance between classes. The imbalance ratio (IR) is the widely accepted measure to determine data imbalance. As shown in Equation (1), IR is the ratio of the number of records of the majority class to the number of records of the minority class [34]:
IR = (number of majority-class records) / (number of minority-class records)   (1)
A dataset can be considered imbalanced if IR > 1.5 [35].
Machine Learning Algorithms
In this study, we include four undersampling methods with different approaches. These methods have demonstrated their success in improving the performance of classifiers by eliminating instances of the majority class [36]. We applied these methods to investigate whether eliminating random instances of the majority class affects the performance of classifiers. On the other hand, it has been shown that not only the imbalance between classes affects the performance of classifiers, but also factors such as noise [37]. For this reason, we apply three different undersampling methods aimed at noise elimination. We also apply SMOTE, the most commonly used method for oversampling the minority class with synthetic data, using six different synthetic oversampling percentages. This method has demonstrated its success with imbalanced datasets [38]. We used three classifiers from different families, since we wanted to investigate which of them achieves the best performance compared to those reported in previous studies using the imbalanced dataset.
Random Undersampling (RUS)
RUS is a non-heuristic method for randomly reducing data. RUS takes the majority class and randomly removes instances according to the percentage required in the algorithm, with the objective of equalizing the majority class with the minority class until the desired balance between the two classes is reached [39]. One of the advantages of this method is that it decreases the run time [40].
Tomek Link (TML)
This is one of the most widely used data undersampling techniques [41]. TML is based on the Condensed Nearest Neighbor algorithm. TML is also known as a data cleaning method since it eliminates noise from the majority or minority class. TML does not balance the classes; instead, it looks for Tomek examples and deletes only the majority-class example of each Tomek link found. The algorithm works as follows: a pair of records m_i and m_j is called a Tomek link if they belong to different classes and are each other's nearest neighbors, that is, there is no record m_l such that d(m_i, m_l) < d(m_i, m_j) or d(m_j, m_l) < d(m_i, m_j), where d denotes the distance between two records. If two records form a Tomek link, then one of them is noise or both lie on the class boundary [42].
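The experiments reported later in this section use the unbalanced R package for these undersampling methods. The sketch below shows how RUS and TML might be applied with that package, continuing from the hypothetical gbs data frame above; the argument names, the 0/1 label convention (1 assumed to mark the minority class) and the returned X/Y components are assumptions about the package API rather than details taken from the paper.
library(unbalanced)
# Predictors and a binary 0/1 response; 1 is assumed to mark the minority class.
X <- gbs[, setdiff(names(gbs), "subtype")]
Y <- factor(ifelse(gbs$subtype == "MF", 1, 0))
# Random undersampling: keep only part of the majority class (exact semantics
# of "perc" follow the package documentation).
rus <- ubUnder(X = X, Y = Y, perc = 50, method = "percPos")
# Tomek links: drop the majority-class member of each Tomek link found.
tml <- ubTomek(X = X, Y = Y)
# Rebuild balanced training frames from the returned components.
trainRUS <- data.frame(rus$X, Class = rus$Y)
trainTML <- data.frame(tml$X, Class = tml$Y)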
One Side Selection (OSS)
OSS is the combination of two different undersampling methods that carefully remove records of the majority class. First, OSS applies Condensed Nearest Neighbor (US-CNN), which removes records of the majority class that are far from the decision boundary (redundant examples). Subsequently, OSS uses TML to remove records of the majority class that are noisy examples, as well as instances at the border of the decision area (unsafe examples). Instances of the majority class that were not eliminated are used for learning (safe examples) [43]. Algorithm 1 shows the OSS steps.
Algorithm 1: One Side Selection (OSS).
Data: T (the original training set)
Result: S (the resulting set)
begin
  D = all minority instances from T together with randomly selected majority instances;
  Classify T with the 1-NN rule using the records in D, and contrast the assigned class labels with the original ones;
  Move all misclassified records into D, which is now consistent with T while being smaller;
  Remove from D all majority instances believed to be borderline and/or noisy;
  S = the retained instances;
end
The objective of OSS is to balance the training set, keeping only the most significant records of the majority class without eliminating instances of the minority class [44].
Neighborhood Cleaning Rule (NCR)
NCR is a modification of the Edited Nearest Neighbor Rule (ENN) [45]. NCR improves the cleanliness of the majority class for binary imbalanced data. NCR stands out among other undersampling methods because it considers the quality of the deleted data: it focuses on data cleansing rather than on balancing the classes of the training set [46]. NCR works as follows: for each sample N_1 in the training set, its three closest neighbors are found. When N_1 belongs to the majority class and the classification given by its neighbors is the opposite of the original class of N_1, then N_1 is removed. When N_1 belongs to the minority class and its neighbors belong to the majority class, then those nearest neighbors are removed [47]. Algorithm 2 shows the NCR steps. NCR eliminates outliers in the majority class of imbalanced datasets [48].
Synthetic Minority Oversampling Technique (SMOTE)
SMOTE, introduced in [49], is one of the most successful and commonly used oversampling methods for binary class imbalance problems. This technique oversamples the minority class by creating synthetic or artificial data based on feature-space similarities between existing minority examples. SMOTE introduces synthetic examples along the line segments that join a minority example to its closest minority-class neighbors. Depending on the amount of oversampling required, the neighbors among the k nearest neighbors are chosen at random. These synthetically created data improve on earlier techniques that simply replicate minority examples. Synthetic data balance the training set, helping the classifier to improve its results significantly [50]. Algorithm 3 shows the SMOTE steps. In Figure 1, we show the operation of SMOTE: synthetic objects in the minority class are created by interpolating between an object and its k nearest neighbors. In Figure 1a, we can see a dataset consisting of two classes, a majority and a minority class. Figure 1b shows the nearest neighbors selected to apply SMOTE, together with the synthetic instances of the minority class. Figure 1c shows the dataset balanced using synthetic oversampling. We used SMOTE to oversample the minority class of our imbalanced dataset.
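The study reports later in this section that SMOTE was applied through the DMwR R package at several oversampling percentages. A hedged sketch of such a call is shown below; the training data frame, its subtype column and the perc.under value are illustrative assumptions rather than settings taken from the authors' code.
library(DMwR)
# Binary training frame with a factor label; the minority level is oversampled.
train$subtype <- factor(train$subtype)
# perc.over = 100 adds one synthetic case per original minority case (100%);
# perc.under controls how many majority cases are sampled alongside them.
smote100 <- SMOTE(subtype ~ ., data = train, perc.over = 100, perc.under = 200, k = 5)
smote300 <- SMOTE(subtype ~ ., data = train, perc.over = 300, perc.under = 200, k = 5)
table(smote100$subtype)   # inspect the resulting class distribution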
Single Classifiers
Decision tree (C4.5): C4.5 divides the original problem into sub-groups. At each iteration, the tree is grown according to the feature with the best gain: the decision tree is constructed top-down, and the feature with the highest information gain is used to make each decision [51]. This method is one of the most popular inductive algorithms and has been successfully applied to medical diagnosis [52]. Support Vector Machines (SVM): SVM is used in binary classification problems. Given a training set, SVM searches for the optimal hyperplane that maximizes the margin between the classes [53]. The larger the margin between the classes, the lower the error and the higher the accuracy of the classifier [54]. SVM is kernel-based. RIPPER (JRip): JRip, a rule-based approach, is one of the most popular algorithms for classification problems [55]. Classes are examined in increasing order of size. Then, an initial rule set for the class is created using incremental reduced-error pruning. JRip creates a rule set for the records of each class, one class at a time [56].
Performance Measure
We used the Receiver Operating Characteristic (ROC) curve as the performance measure, a frequently used tool for evaluating classifiers [57]. It has advantages over other evaluation measures, such as precision-recall. The ROC curve is a two-dimensional graph that provides a good summary of a classification model's performance in the presence of imbalanced datasets with unequal error costs [58]. ROC curves are commonly employed in medical scenarios where diagnosing the presence or absence of an abnormal condition is the task of interest [59]. The area under the curve takes a value between 0.5 and 1, where 1 represents a perfect diagnosis and 0.5 represents a test with no discriminatory capacity.
Binarization Techniques
In multiclass classification, it is common to decompose the original dataset containing all the classes into binary datasets. One versus All (OVA) and One versus One (OVO) are two approaches commonly used for binarization. OVA and OVO facilitate the application of data preprocessing techniques to balance the data before the training set is passed to the classifier [60]. The OVA approach takes one class as the minority, while the remaining classes are combined and transformed into the majority class; this procedure is carried out for each of the n classes of the dataset [61]. OVO trains a classifier for each possible pair of classes, n(n-1)/2 in total (pairwise learning) [62]. Figures 2 and 3 show examples of the OVA and OVO approaches applied to a multiclass imbalanced dataset. We use the OVA and OVO binarization techniques, which are widely used in classification problems [63]. From a medical perspective, OVA and OVO may assist physicians in distinguishing one subtype from another, an important task since each subtype varies in severity and treatment.
Validation
We used train-test evaluation for each single classifier, employing two-thirds of the data for training and one-third for testing. Figure 4 describes the experimental procedure. We tackle our multiclass classification problem by dividing it into two different sets of binary subproblems using the OVA and OVO approaches. The sampling methods operate on binary datasets made up of a minority and a majority class. For this reason, we used two different techniques to binarize our original GBS multiclass dataset, creating 10 binary datasets divided into two groups. The OVA technique takes one subtype of GBS as the minority class, while the majority class is made up of the three remaining GBS subtypes combined.
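To make the two decompositions concrete, the base-R sketch below builds one OVA relabelling and one OVO pair from a four-class label vector, continuing from the hypothetical gbs data frame used earlier. The subtype names follow the dataset description above; the data frame, column name and the "REST" label are illustrative. The OVO scheme itself is described in the next paragraph.
# OVA: one subtype (here AIDP) against the three remaining subtypes merged.
ovaAIDP <- gbs
ovaAIDP$subtype <- factor(ifelse(gbs$subtype == "AIDP", "AIDP", "REST"))
# OVO: keep only the records of one pair of subtypes (here AIDP vs. AMAN).
ovoAIDPvsAMAN <- droplevels(subset(gbs, subtype %in% c("AIDP", "AMAN")))
table(ovaAIDP$subtype); table(ovoAIDPvsAMAN$subtype)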
Applying OVA, we obtained four imbalanced pairs of subsets. The OVO technique builds all possible combinations of two classes from a dataset; for this experimental study, six imbalanced subset pairs were obtained by combining the GBS subtypes of the original dataset. We conducted a Wilcoxon test [64] to search for statistical differences among the models, using a significance value of 0.05. A nonparametric test was used since it does not require a particular data distribution [35]. The experiments were performed in R, a free, open-source language for statistical analysis and high-quality graphics, using RStudio 1.2.1335, an integrated development environment that provides a console, a syntax editor supporting code execution, and tools for plotting, debugging and managing the workspace. R packages, collections of functions, data, and documentation that extend the capabilities of R, are available from CRAN (the Comprehensive R Archive Network). We used the DMwR package [65] for oversampling with SMOTE and the unbalanced package [66] to undersample the majority class with the RUS, TML, OSS and NCR methods. We applied three classifiers to create the predictive models, using the RWeka package [67] for C4.5 and JRip and the e1071 package [68] for the SVM classifier. Other packages used were rJava [69], a low-level interface to Java that allows the creation of objects. The data partition and the confusion matrix were created using the caret package [70]. To calculate the imbalance ratio, we used the imbalance package [71]. ROC curves were created using pROC [72]. We used lattice [73] for data visualization and rpart [74], a recursive partitioning method for classification trees; to plot the models created by rpart we used rpart.plot [75]. The SVM was tuned with the tune function, assigning the values 0.001, 0.01, 0.1, 1, 10, 50, 80, and 100 to the C parameter.
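Putting the pieces together, the sketch below illustrates the kind of pipeline described above for a single binarized subset: a two-thirds/one-third split with caret, the three classifiers, the SVM cost grid quoted above, and ROC evaluation with pROC. It continues from the hypothetical ovaAIDP subset sketched earlier, and everything beyond the package names and the cost grid is illustrative rather than taken from the authors' code.
library(caret); library(RWeka); library(e1071); library(pROC)
set.seed(1)
idx   <- createDataPartition(ovaAIDP$subtype, p = 2/3, list = FALSE)
train <- ovaAIDP[idx, ]
test  <- ovaAIDP[-idx, ]
c45  <- J48(subtype ~ ., data = train)    # C4.5 decision tree
jrip <- JRip(subtype ~ ., data = train)   # RIPPER rule learner
tuned <- tune(svm, subtype ~ ., data = train,
              ranges = list(cost = c(0.001, 0.01, 0.1, 1, 10, 50, 80, 100)))
svmFit <- tuned$best.model
# ROC/AUC on the held-out third, here for the tuned SVM.
dv   <- attr(predict(svmFit, test, decision.values = TRUE), "decision.values")
roc1 <- roc(response = test$subtype, predictor = as.numeric(dv))
auc(roc1)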
Results and Discussion
This section shows the results obtained by applying the four different undersampling techniques and the SMOTE oversampling technique to the four imbalanced subsets obtained using OVA, as well as to the six imbalanced subsets obtained using OVO. Each value is the average ROC obtained across 60 runs, each with a different seed. We applied the C4.5, SVM and JRip classifiers after data balancing and evaluated model performance using ROC, the most accepted metric for imbalanced problems. We used the Wilcoxon test to evaluate the statistically significant difference between the models built on imbalanced data and the models built on balanced data. In Tables 8 and 9, we show the IR computed for the GBS subsets from OVA and OVO. The highest IR values were obtained with OVA; this is because the larger the majority class is with respect to the minority one, the higher the ratio. However, for GBS3 the IR is 1.1864; some authors consider that a dataset is imbalanced when IR > 1 [76]. For OVO, in all cases IR > 1.5. Tables 10-13 show in bold the cases with a statistically significant difference. The structure of the four tables is as follows: the first column shows the subsets obtained using the binarization techniques (OVA, OVO), the GBS subtype included, and the number of instances for each of them. The second column shows the three classifiers used for each subset. The third column shows the results of the classifiers using the imbalanced data. Subsequent columns show the results of applying the balancing techniques and the corresponding Wilcoxon test, where NS (Not Significant) stands for no statistically significant difference between results using imbalanced data and results using balanced data, NC (Not Computed) means that the test could not be performed due to many identical results across the 60 runs or that the best results were obtained using imbalanced data, and S (Significant) indicates a statistically significant difference between results using imbalanced data and results using balanced data. Table 10 shows the results obtained after applying RUS, TML, OSS, and NCR to the four imbalanced subsets obtained through OVA. A total of 48 balanced-data cases were obtained. In 16 cases, balanced data could not improve on imbalanced data. In 24 cases, balanced data improved on the imbalanced data with no statistically significant difference. Eight cases presented a statistically significant difference; these cases are listed below with their corresponding ROC value. The GBS4 subset obtained the best results: in all 12 cases, the balanced data improved on the imbalanced data across all four undersampling methods and all three classifiers, and a statistically significant difference was found in four of them. The GBS3 subset obtained the worst performance: balanced data could not improve on the imbalanced data in eight cases, and improved it in only four cases, with no statistically significant difference. The best undersampling method using OVA was RUS, because it improved on imbalanced data in 8 cases, half of them with a statistically significant difference. OSS improved results in seven cases, three of them with a statistically significant difference. NCR improved on imbalanced data in 8 cases, but only one of them showed a statistically significant difference. TML obtained the worst performance: although results were improved in nine cases, none of them showed a statistically significant difference. We conducted 16 experimental cases for each classifier, derived from applying the four undersampling methods to the 4 GBS subsets. From these experiments, C4.5 obtained the best results: in 11 cases balanced data improved on imbalanced data, three of them with a statistically significant difference. Applying SVM, in 13 cases balanced data improved on imbalanced data, but only two of them with a statistically significant difference. Finally, with JRip, in nine cases balanced data improved on imbalanced data, three of them with a statistically significant difference. Table 11 shows the results obtained after applying RUS, TML, OSS and NCR to the 6 imbalanced subsets obtained through OVO. A total of 72 balanced-data cases were obtained. In 40 cases, balanced data could not improve on imbalanced data. In 20 cases, balanced data improved on the imbalanced data with no statistically significant difference. Twelve cases presented a statistically significant difference; these cases are listed below with their corresponding ROC value. The GBS6 subset obtained the best results: in 11 out of 12 cases the balanced data improved on the imbalanced data, 5 of them with a statistically significant difference, and in only one case the balanced data could not improve on the imbalanced data. The GBS1 subset had the worst performance: in none of the 12 cases did the balanced data improve on the imbalanced data.
The best undersampling method using OVO was TML, since it improved on imbalanced data in 9 cases, 4 of them with a statistically significant difference. RUS and OSS behaved the same: in 8 cases the balanced data improved on the imbalanced data, 3 of them with a statistically significant difference. NCR had the worst performance: in 7 cases the balanced data improved on the imbalanced data, 2 of them with a statistically significant difference. For OVO, we conducted 24 experimental cases for each classifier (six subsets combined with four undersampling methods). From these experiments, C4.5 obtained the best results: in 13 cases the balanced data improved on the imbalanced data, 8 of them with a statistically significant difference. Applying JRip, in 13 cases the balanced data improved on the imbalanced data, but only 2 of them with a statistically significant difference. With SVM, in 6 cases the balanced data improved on the imbalanced data, 2 of them with a statistically significant difference. Table 12 shows the results obtained after applying SMOTE at 100%, 200%, 300%, 400%, 500%, and 1000% to the 4 imbalanced subsets obtained through OVA. A total of 72 balanced-data cases were obtained by applying the three classifiers to 24 oversampled subsets. In 28 cases, balanced data could not improve on imbalanced data. In 26 cases, balanced data improved on the imbalanced data with no statistically significant difference. Eighteen cases presented a statistically significant difference; these cases are listed below with their corresponding ROC value. The GBS4 subset obtained the best results: of its 18 balancing cases with SMOTE, in only one case could the balanced data not improve on the imbalanced data; in 7 cases, balanced data improved on imbalanced data without a statistically significant difference, and in 10 cases a statistically significant difference was found. On the other hand, GBS2 obtained the worst performance: a statistically significant difference was found in only one case; in 4 cases, balanced data improved on imbalanced data without a statistically significant difference, and in 13 cases balanced data could not improve on imbalanced data. For OVA with SMOTE, the best performance was obtained by applying SMOTE at 100%, since in seven cases balanced data improved on the imbalanced data, 5 of them with a statistically significant difference. SMOTE at 400% obtained the worst performance: although in 9 cases balanced data improved on the imbalanced data, only one of them reached a statistically significant difference. As for the classifiers, JRip obtained the best performance, given that in 13 cases balanced data improved on imbalanced data without a statistically significant difference and in another 8 cases we found a statistically significant difference. With C4.5, in 11 cases balanced data improved on imbalanced data, but only 5 of them obtained a statistically significant difference. Applying SVM, in 12 cases balanced data improved on imbalanced data, but only 5 of them with a statistically significant difference. We conclude that SMOTE at 100% combined with JRip obtained the best results. Table 13 shows the results obtained after applying SMOTE at 100%, 200%, 300%, 400%, 500%, and 1000% to the 6 imbalanced subsets obtained through OVO. A total of 108 balanced-data cases were obtained by applying the 3 classifiers to 36 oversampled subsets. In 72 cases, balanced data could not improve on imbalanced data. In 29 cases, balanced data improved on the imbalanced data with no statistically significant difference. Seven cases presented a statistically significant difference.
These cases are listed below with their corresponding ROC value. The GBS4 subset obtained the best results: in 6 cases, a statistically significant difference was found; in 2 cases, balanced data improved on the imbalanced data with no statistically significant difference; and in 10 cases, balanced data could not improve on the imbalanced data. The GBS3 subset obtained the worst performance: in all 18 cases, balanced data could not improve on the imbalanced data. For OVO with SMOTE, the best performance was obtained by applying SMOTE at 100%, since in 5 cases balanced data improved on the imbalanced data without a statistically significant difference and in 2 cases a statistically significant difference was found; in 11 cases, balanced data could not improve on the imbalanced data. SMOTE at 400% obtained the worst performance, since in 14 cases balanced data could not improve on the imbalanced data; in 4 cases, balanced data improved on the imbalanced data, but only one case showed a statistically significant difference. As for the classifiers, JRip obtained the best performance: in 8 cases balanced data improved on the imbalanced data with no statistically significant difference, in 6 cases we found a statistically significant difference, and in 16 cases balanced data could not improve on the imbalanced data. Applying C4.5, in 19 cases balanced data could not improve on the imbalanced data, and in 11 cases balanced data improved on the imbalanced data without a statistically significant difference. SVM obtained the worst performance: in only 5 cases did balanced data improve on the imbalanced data, and no statistically significant difference was found. We conclude that, as with OVA, SMOTE at 100% combined with JRip obtained the best results for OVO.
Conclusions
The aim of this work was to investigate whether balancing the original GBS dataset improves the predictive models for identifying GBS subtypes created in a previous study. We performed 4 independent experiments applying data-level techniques. We started by creating 10 binary datasets divided into two groups, using the OVA and OVO techniques on the original dataset to obtain 4 and 6 binary subsets respectively. We divided each GBSn subset into 2 sets, 66% for training and 34% for testing. We balanced the training subsets using two sampling approaches: the majority class of each training subset was undersampled applying 4 different methods (RUS, NCR, OSS, and TML), and the minority class was oversampled applying SMOTE at 100%, 200%, 300%, 400%, 500%, and 1000%. Undersampling and oversampling were applied for both OVA and OVO. Once the training subsets were balanced, we applied 3 different classifiers: C4.5, JRip, and SVM. The scores are the average ROC obtained over 60 runs, each with a different seed. We used the Wilcoxon test to assess whether there is a statistically significant difference between the imbalanced and the balanced models. The number of cases with a statistically significant difference between imbalanced data and balanced data across the 4 experiments was: 8 for OVA with undersampling, 12 for OVO with undersampling, 18 for OVA with SMOTE, and 7 for OVO with SMOTE. Across all 4 sampling experiments, the best results were obtained by combining SMOTE with OVA. Regarding classifiers, JRip obtained the best performance, since it produced the most cases with statistically significant differences across all experiments. Balancing a subset with oversampling yielded better performance.
Adding synthetic instances to the minority class with SMOTE helped the classifiers achieve the best performance. On the other hand, eliminating instances of the majority class resulted in losing information that the classifiers needed to achieve better performance. Moreover, factors independent of class imbalance, such as noise, can affect the performance of the classifiers. We found that the best results were obtained in the combinations where the majority class clearly exceeds the minority class. In these cases, the instances are clearly distinguishable from each other, and the undersampling algorithms only had to eliminate noise or class overlap, which helped improve the performance of the classifiers. Conversely, the worst results were produced when the classes had a similar number of instances. The results achieved in this research show that balancing the original dataset improves on the previous predictive models. In addition, these predictive models can help specialists identify the GBS subtype a patient suffers from; early identification of the subtype allows the appropriate treatment for patient recovery to be started. This is a contribution to exploring the performance of balancing techniques with real data. As future work, we will experiment with different variants of SMOTE and will apply a hybrid approach using the OVA and OVO techniques. We also plan to build more accurate predictive models using different single and ensemble methods.
Slot-and-Frame Schemas in the Language of a Polish-and English-Speaking Child: The Impact of Usage Patterns on the Switch Placement : How does the bilingual child assemble her first multiword constructions? Can switch placement in bilingual combinations be explained by language usage? This study traces the emergence of frozen and semi-productive patterns throughout the diary collection period (0;10.10–2;2.00) to document the acquisition of constructions. Subsequently the focus falls on most frequently produced monolingual and bilingual combinations captured through 30 video recordings (1;10.16–2;5.11) which are linked to the diary data to confirm their productivity. First, we verify that like in monolingual development, frequency-based piecemeal acquisition of constructions can be reproduced in our bilingual diary data: in the child’s earliest combinations 87% are deemed as semi-productive slot-and-frame patterns. Second, video recordings show that productivity, understood as a function of type frequency, plays a role in determining the switch placement in early bilingual combinations only to some extent. A more accurate explanation for why frames from one language take slot fillers from another is their autonomous use and semantic independence. We also highlight limitations of input: while the child was raised with two languages separated in the input, she continued to switch languages which suggests that switching is developmental. Introduction Study of children's early productions has been a focal point of usage-based research due to their potential to account for the journey a child's mind makes towards adult-like linguistic proficiency. Similarly, children's early language use has enjoyed considerable interest in research on bilingualism, as the simultaneous acquisition of two languages allows us to go into questions about how children learn to separate languages in their minds. Usage-based studies of bilingual acquisition, however, are rare, and our paper aims to contribute to a growing recognition that this gap needs filling. We will argue that both contributing research traditions stand to gain from considering bilingual acquisition data through a usage-based lens. Specifically, children's use of codeswitching (CS) in a setting in which parental input only contains negligible instances of the phenomenon will tell us something about the limitations of input characteristics in accounting for linguistic competence, while in no way denying that input is of crucial relevance. In addition, we will argue that studying children's CS from a usage-based perspective improves our understanding of dominance, an obvious but theoretically problematic concept in bilingualism studies. Usage-Based Perspective on Bilingual Acquisition Our view of language acquisition reflects constructivist models of linguistic representation which see linguistic competence as an inventory of form-meaning pairings (constructions) whose nature forever oscillates on the continuum from frozen unprocessed chunks, to partially schematic and eventually more abstract (Croft 2001;Goldberg 1995;Langacker 1987). Frozen constructions are multi-morphemic and multiword chunks which are acquired from the speech stream as wholes (e.g., gimme and all gone) and used as if they were single lexical items; children seemingly having no awareness of the parts (Peters 1983;MacWhinney 1978MacWhinney , 2014. 
Frozen items feature heavily in early speech: in children's first 50-word vocabularies, calculated as both single and multiword productions, on average 17.8% (range: 2-42%) and in their first 100-word vocabularies on average 21.2% (range 5-44%) of items are frozen multiword combinations (Lieven et al. 1992). The acquisition of frozen chunks provides a window into the development of schemas: when two words such as All gone are first used together, the child only becomes aware of a pattern with repeated opportunities to hear it in different configurations which allow for it to be segmented. As a result, one of these two words (e.g., all) is replaced with another (e.g., mummy, daddy), thus creating a slot X gone into which similar words from the child's linguistic repertoire can be inserted spontaneously at the moment of speaking. Such assembly necessitates categorical perception which allows for words to be selected and combined. Meanwhile, the word gone remains fixed acting as a pivot for this two-word combination. Obviously, adult competence consists of more than just frozen forms, and indeed child data show early forms of productivity. This is visible in the second type of combination, called partially schematic constructions, which combine frozen and productive elements. In such multiword constructions, there are at least two frozen elements: one or more morphemes or words, and a pattern in which that word or morpheme (sometimes referred to as 'pivot') is a fixed element; the pattern is often referred to as a 'frame.' The open element in the pattern, often referred to as the 'slot', is subject to filling by whichever relevant word or morpheme helps conveying the intended meaning. The mechanism by which a speaker, be it a child acquiring the language or a fully functioning adult speaker, arrives at the activation of optimal slot fillers is not very well understood yet. However, we do know that they dominate early lexicons: using a combination of diary and video recorded data, Lieven et al. (1997) report that among the first 400 multiword constructions used by 11 monolingual toddlers (1;0-3;0) on average 60% (ranging from 51-72%) are such partially schematic patterns including Put in X, I want to X and Go to X. The time it takes to develop 25 patterns from the vocabularies of first 100 words ranges from 3 to 9 months (Lieven et al. 1997). Data from monolingual two-year-olds video recorded on a dense sampling schedule show that 78-92% of utterances can be classified as instantiating frames with open slots, with most slots filled with nouns and noun phrases and increasing in complexity as a function of increasing mean length of utterance (MLU) ). Later in acquisition and with mounting experience of language use, all parts of such utterances become fully processed and open to a broader range of elements they attract as fillers. Such utterances are referred to as novel if they appear to have been constructed through activation of an entrenched syntactic template and selection of lexical elements which cannot be traced to any other language produced. Looking at child data, one wonders what determines the division of labour between the deployment of multiword frozen chunks, partially open frames, and completely open patterns (i.e., syntactic templates). 
One factor that has received much attention in the literature on language acquisition is the frequency of multiword combinations in the input: it appears that frozen chunks often found in child speech are also found with high frequency in child-directed speech. Cameron-Faulkner et al. (2003), for example, show such repetitiveness of child-directed speech in utterance initial positions of 12 English-speaking mothers with high correlation to the child's phrases built around these highly frequent items. In a follow-up study with languages with freer word order, such as Russian, German and English, Stoll et al. (2009) report that the input directed at two-year-old children is indeed lexically restricted, at least at the beginnings of utterances they study. High degrees of repetitiveness of items such as That's a X was found in the speech of all examined Russian-, German-and English-speaking mothers. There were also intriguing differences, with English having the most and the longest frames, accounting for more of the input data than the two remaining languages, a finding explained by that language having the most restricted word order (Stoll et al. 2009). Evidence from adult usage shows that productivity of a pattern does not necessitate the use of the completely empty pattern, which would be the equivalent of rule-based activation (Walsh et al. 2010). While one of the production routes is certainly a process which seems akin to a beads-on-a string assembly, this route is costly as it requires considerable cognitive work (Walsh et al. 2010). For example, when a child opens a sticker book, points at a missing sticker, and says apple gone or house gone, such cognitive work would be attributed to activating the rule that a subject noun precedes a participle. More likely, the child activates a slightly less schematic pattern: gone is preceded by a noun. Adult data indicate that the preferred route is to activate a construction via lexical means in a process referred to as unit-based recall (Walsh et al. 2010). This would be possible if the fixed element, gone in our example, first starts being used with one item that fills the slot more frequently than others, as would be expected of the versatile construction it's gone. As these words recur together in speech, they are expected to form a collocational bond that helps make their production more automatic and ensures smooth transition from one to the other during utterance (Walsh et al. 2010). In the process, the morphosyntactic relation between the elements is backgrounded, and they are recalled as one unit (Bybee 2001;Walsh et al. 2010). Importantly, this view assumes non-redundancy. The nature of linguistic representation allows for productive schemas to co-exist with less productive patterns and for constructions to be assembled via rule-based ('beads-on-a-string') and lexical ('unit-based') means, with, importantly, many gradations in between. This usage-based approach captures the dynamic nature of language in its continuity between grammar and lexis which are subject to constant change depending on one's individual experience of language usage (e.g., Bybee 2001). Our Research Questions Studies of monolingual children have provided ample evidence that children's journey towards adult-like competence originates in such 'slot-and-frame' schemas and that it is piecemeal and mostly lexically-based, at least in the early stages of acquisition. 
Usage-based studies of monolingual children reach back to Braine (1963) and his three-rule pivot grammar which sowed the first seeds for change in the way we now view early child language. More recent research into 'slot-and-frame' patterns has focused on verb-argument constructions (Keren-Portnoy 2006;Ninio 1999;Tomasello 1992), and interrogative constructions (Dąbrowska 2000;Dąbrowska and Lieven 2005). However, considering that most of the world is bilingual, there has been surprisingly little interest in how acquisition of slot-and-frame patterns proceeds in contexts where two languages are present in the environment. We aim to fill this gap by referring to data from a bilingual child exposed to Polish and English from birth. The first research question we thus ask in our study is how the acquisition of such schemas proceeds longitudinally under conditions of bilingual exposure, from the first lexically fixed combinations produced to more open constructions. The idea behind the current article is that bilingual acquisition data may give us richer insight into how children build up their syntactic productivity. In the current study, we are particularly interested in the evidence for productivity given by CS in child data, since the child we will report on, like many other child study participants in the bilingual acquisition literature, grew up in a family in which there was very little CS in the input. Also, since input in the two languages is rarely equal, bilingual acquisition data show to what extent quantitative, and perhaps also qualitative, differences in the input in each language lead to differences in the acquisition of syntactic productivity. This would allow us to get a more sophisticated view of the role of frequency, including of its limitations rather than just the demonstration that it plays a pivotal role. To the best of our knowledge, only one bilingual study of slot-and-frame patterns exists: Quick et al. (2018) report the constructing of language from both English and German input by a child called Tim recorded between the ages of 1;10-3;1. Using the 'Traceback' method (see below), the study arrives at a quantitative breakdown of all bilingual constructions produced in four logically possible types of constructions. (a) Completely lexically fixed chunks, here referred to as frozen, e.g., hilf-me (help me): 18% (b) Creative combinations of multiple chunks, e.g., let's kaputt-machen (let's break it): 11% (c) Partially schematic constructions where the fixed element is either monolingual, e.g., ich-kann-nicht X (I can't X), or bilingual, e.g., ich want X (I want X): 60% (d) Other, e.g., utterances with no schemas, e.g., ein open Mama (one open Mama): 11% If we combine the categories 'b' and 'c' as both instantiating partial schematicity (where 'a' is lexical and 'd' is syntactic as these terms are commonly understood), it is clear that most language production concerned partially schematic units, i.e., frames with open slots. We may expect that most of these bilingual constructions involve a frame in one language and a slot filler in the other. This invites our second research question about the extent to which productivity of a given pattern results in openness of that pattern to items from either language. We expect that partial productivity is only part of the explanation as some of the CS in category (c) occurs within the frozen constructional frames (Quick et al. 2018). 
If our expectation is confirmed, we will examine child usage data for any further evidence which could help us to explain why CS occurs at certain points in the constructions. The very fact that CS occurs at all in Tim's data also needs explaining. The parents used solely English at home while German input was delivered in nursery. This suggests that frozen bilingual constructions must have resulted from the child's own language usage rather than from hearing parental CS. Usage-based linguistics tends to privilege the passive part (witness the emphasis on input), but of course usage is both input and own production. Without attention to the latter, usage-based linguistics runs the risk of appearing as a sophisticated update of behaviourism and its fascination with imitation. Partially schematic constructions are by definition sites of productive (or 'creative') language use and a gateway to more abstract syntax: the schema may be entrenched by the time it is used, but filling its open slot with a novel item not used in that slot before means a novel utterance has been produced. Another question which thus needs to be addressed is how the building of constructions resembles parental input and the child's own experience of language practice. More concretely: how come children codeswitch when the input emphasizes separation of the languages. This is our third research question, and by referring to evidence provided to address the first two questions, we will suggest that the answer has to do with the development of syntactic productivity and with the relative unnaturalness of language separation. Our Contribution to Bilingual Research Our study aims to use the slot-and-frame approach in relation to bilingual data to expand on what is already known about bilingual acquisition through studies produced to date. Such studies have been particularly helpful in highlighting the general trends observed in children studied across linguistic communities. It is now well established that CS is commonplace before the age of two but it tends to phase out if both languages are kept separate in the child's environment (Nicoladis and Genesee 1996;Redlinger and Park 1980;Volterra and Taeschner 1978;Paradis and Nicoladis 2007). Some of this early CS may be due to lexical gaps: around the age of two children sometimes use a word from another language because they do not have an appropriate translation equivalent (Nicoladis and Secco 2000;Quay 1995). However, with increasing proficiency in both languages, bilingual children learn to use more translation equivalents (Legacy et al. 2016) and this presumably allows them to figure out how to use them in context sensitive ways. As children as young as two display interlocutor sensitivity in that they adapt their speech to that of their caregivers (Deuchar and Quay 2000;Lanza 1997;Nicoladis and Genesee 1996), early CS also appears to be a function of parental language use: children codeswitch more if CS is not challenged by their parents (Lanza 1988 but see Deuchar and Muntz 2003;Nicoladis and Genesee 1998); they also codeswitch more when CS is modelled in the input (Comeau et al. 2003). However, parental adherence to the OPOL strategy does not guarantee lack of mixing by children (Mishina-Mori 2011). 
Of particular relevance to the qualitative nature of CS is also the observation that if the child's two languages display asymmetry in acquisition, with one language developing faster than the other, such asymmetry will determine the nature of words which are used in bilingual combinations. Dominance is likely to be of importance in mixing as most bilingual children are dominant in one of their two languages (Gathercole 2016;Paradis and Nicoladis 2007) and this shows in various measures, including amount of exposure to both languages (Unsworth 2015;Nicoladis et al. 2018), the MLUs in both languages (e.g., Quick et al. 2018), the number of TE equivalents available (Legacy et al. 2016;Nicoladis et al. 2018), parental reports and relative proportions of language used (Nicoladis et al. 2018). By referring to the Matrix Frame Model of CS (Myers-Scotton and Jake 2001) which assumes a strict division between grammar and lexis, it has been argued that it is usually the child's 'dominant' language which provides the functional frame while the language used less frequently provides individual content words (Bernardini and Schlyter 2004;Cantone 2007;Gawlitzek-Maiwald and Tracy 1996;Petersen 1988) though more recent studies show that frames can sometimes be derived from the weaker language (Müller et al. 2015). To demonstrate the relationship between dominance and mixing, Petersen (1988), for example, constructs the Dominant Language Hypothesis which allows her to define the dominant language as one which contains fewer mixes. Under this hypothesis, grammatical morphemes from the dominant language can occur with lexical items of either the dominant or the weaker language; however, grammatical morphemes from the weaker language can occur only with lexical morphemes from that language. Meanwhile, the accounts presented by Gawlitzek-Maiwald and Tracy (1996) and Bernardini and Schlyter (2004), for example, explain how CS proceeds when one language is dominant and provides a functional skeleton for the weaker language to grow into. However, it remains unclear whether it is dominance which exerts influence on how languages are mixed or the other way round. More importantly, circularity is a danger: we explain a particular asymmetry in the data with reference to dominance, but use the asymmetry to establish dominance. The concept becomes more useful, we will argue, if dominance is linked to which language provides more of the syntactic frames (including partially schematic constructions) that host slot fillers from the other language and especially if this asymmetry can be linked to differences in the child's linguistic experience and thus to higher degrees of entrenchment for that language's partially schematic units. The three research questions introduced earlier in this section will be addressed by referring to data from Polish and English, a language pair not studied before for the acquisition of early constructions or CS. We find that the typological distance between Polish (a highly inflected language) and English (a fusional language) allows us to ascribe a language index more easily to individual words and patterns. The Participant This study is part of a project which examined one child's productions in light of her language usage patterns, using diary data and video recordings (for further details see Gaskins 2017). Informed consent for inclusion of the child in the study was gained from her mother before the study was launched. 
The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the School of Sciences, History and Philosophy Ethics Committee at the University of London (code 2012-09). The main participant of this study is Sadie, a first-born and normally developing child who presents a case of bilingual first language acquisition (BFLA). Sadie was born and raised in England and she heard English at home from her father who did not know any Polish. Polish, on the other hand, was heard regularly only from her mother, the only speaker of that language in her immediate environment, whose command of English did not go unnoticed by the child. In addition, the parents spoke English with each other at home. In her second year of life, Sadie attended an English nursery three days a week, 10 h a day, and spent the remaining two weekdays addressed in Polish by her mother, and weekends with both parents. In the summer, both at the end of her first and second year of life, she spent two weeks in Poland each time, fully immersed in Polish. Additionally, once every three months, she was visited by her maternal grandmother who stayed with her for two weeks at a time and addressed her only in Polish. When Sadie was at home, her parents conformed fairly consistently to the OPOL strategy (one-parent-one-language): Sadie's Polish-speaking mother used 8 types and 12 tokens of individual English words over the course of the ten Polish recordings (vs. 1626 types and 9485 tokens of Polish words) while her English-speaking father used 16 types and 24 tokens of individual Polish words across the ten English recordings (vs. 1119 types and 13,675 tokens of English words). Sadie's language acquisition is asymmetrical, for at least four reasons. First, the diary data reveal that throughout her second year of life Sadie received roughly 65% of her linguistic input in English. Second, at the age of 2;02 Sadie's word stock was 74% English (292 words) and 26% Polish (103 words). Third, when recorded on video speaking to her father, she used mostly English with only 2% of the words Polish. When recorded speaking to her mother, however, she used on average 90% English and only 10% Polish word tokens, at comparable and relatively stable rates throughout the data collection period (see also Gaskins 2017). This shows that English was her dominant language of interaction regardless of the language addressed to her. Lastly, at 1;10.16 her MLU measured in monolingual English utterances was 1.63 and increased to 2.35 by 2;05.11 while at 1;10.20 her MLU measured in monolingual Polish utterances was 1.03 and dropped to 1 word per turn at 2;04.15. The Data Following the Language Diary Method (De Houwer and Bornstein 2003), a diary was kept to record Sadie's development between 0;10.10-2;3.22. The diary contained quantitative information on the amount of input she received in each language such as which language was addressed to her in each 30-minute segment of the day; it also listed any new words and multiword combinations she produced. The diary was updated as and when a new word or word combination was heard or when an existing combination was heard combined with a new word. Between the ages of 1;11.01 and 2;0.10 the diary was updated throughout the day every single day as Sadie's mother was off work. After 2;0.10, this updating happened throughout the day on 4 days a week when Sadie's mother stayed at home and in the evenings on the remaining 3 days when Sadie's mother worked during the day. 
No diary entries were made when Sadie's mother was at work. Up to the age of 1;10 all new language was recorded, including single words and multiword combinations, but between 1;10-2;2 priority was given to new combinations as the sheer amount of language Sadie produced meant that it was impossible to record it all. Despite the limitations of diary data, which could not capture every single instance of language use, access to the diary gave us a privileged insight into slot-and-frame patterns from a very young age and it allowed us to capture the very first instance of when words were combined together in speech. As a second source of data, 30 half-hour video recordings (1;10.16-2;5.11) were transcribed, amounting to fifteen hours' worth of interactions. These recordings are representative of three sociolinguistic contexts: there are ten recordings with Sadie's father where she was addressed in English, ten with Sadie's mother where she was addressed in Polish and ten with both parents present where she heard both languages. However, seeing that regardless of the context Sadie always preferred to speak English, all the data were collapsed into one dataset. All the recordings were made at dinnertime, followed by playtime which often involved looking at books, matching up animal cards and playing with Lego. Video recorded data allowed us to capture the most frequently produced combinations. These were then verified against diary data: if there was no CS in an utterance within a given schema on video, we verified whether this also held for the diary data; if the diary contained conflicting information, schemas were then shifted to the category of bilingual combinations. Data Analysis All Sadie's monolingual and bilingual utterances, as recorded in the diary, were examined using what we call a 'diary Traceback method'. This method was adapted from that used to analyse densely sampled corpora of recorded speech and to trace constructions back to those recorded in prior videotaped interactions (Dąbrowska and Lieven 2005; Lieven et al. 2009; Quick et al. 2018). In our study, to verify whether piecemeal acquisition holds for our context of bilingual exposure, we first traced construction development throughout the diary recording period. The tracing consisted of following longitudinally all the earliest two-word and multiword constructions noted down in the diary to establish which of their elements were frames and which the words selected to fill the slots in those frames. Depending on the data available, all the word combinations recorded in the diary were subsequently divided into three groups: frozen, novel and partially schematic. If neither of the two (or more) elements of a combination was ever seen to be replaced with another in speech, as was the case with Thank you, the combination was deemed to be a frozen unit, as there was no evidence that it had been built using a productive frame. If a combination seemed to have been assembled following a more abstract schema, it was deemed to be novel, even though there were not sufficient data to ascertain schematicity beyond doubt. It is possible that such combinations had been picked up holistically, yet they corresponded closely with existing partially schematic units (e.g., Red car corresponded with Naughty X and Silly X), which gave us reasons to believe that they had been assembled productively. Everything else belonged to the final category of partially schematic constructions, which we will focus on in this paper. 
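To make the classification procedure concrete, the following minimal sketch (our own illustration, not part of the original method) shows how diary combinations can be sorted on the basis of the fillers attested for a candidate frame. The toy entries and the simplifying assumption that the slot is always the final word are ours; the frozen/novel distinction additionally relies on the resemblance-to-abstract-schema criterion described above, which the sketch does not attempt to automate.

```python
from collections import defaultdict

# Minimal sketch of the diary Traceback classification (illustrative only).
# Assumptions (ours, not the study's): each entry is (age, list of words) and
# the candidate frame is everything except the final word; in the real data
# the slot can occupy other positions (e.g., X gone).
diary = [
    ("1;6.03",  ["no", "potatoes"]),
    ("1;7.07",  ["no", "bed"]),
    ("1;7.20",  ["no", "more"]),
    ("1;9.21",  ["want", "it", "yoghurt"]),
    ("1;10.22", ["want", "it", "cheese"]),
    ("1;8.00",  ["thank", "you"]),
]

# Collect, for every candidate frame, the set of fillers it occurs with.
fillers = defaultdict(set)
for _, words in diary:
    fillers[tuple(words[:-1])].add(words[-1])

for age, words in diary:
    frame = tuple(words[:-1])
    if len(fillers[frame]) >= 2:
        # Two or more attested fillers: evidence for a productive frame.
        label = "partially schematic ({} X)".format(" ".join(frame))
    else:
        # A single attested filler: frozen, unless the combination closely
        # matches a more abstract schema, in which case it counts as novel.
        label = "frozen or novel"
    print(age, " ".join(words), "->", label)
```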
We will illustrate the development of such constructions with the example of No X. The word no was first produced at 1;02.10. Its first occurrence within multiword combinations was no potatoes, at 1;6.03. After that, no was also recorded with bed (No bed at 1;7.07) and more (No more at 1;7.20), and therefore the word no could now be considered to have given rise to a frame No X. Following the same logic, the phrase I don't want X emerged from a frozen chunk which was initially heard once in the combination want it yoghurt (1;9.21), a non-target-like construction with two objects. Once the word cheese was also produced in the same frame (at 1;10.22), the combination Want it X was recognised as a frame. Its non-target-like character persisted, perhaps because the conjoined usage of the words want and it is frequent (in common expressions such as I don't want it and Do you want it?). Eventually, Sadie stopped using the word it in the above frame, leading to more target-like usage of the verb-object construction, and the disappearance of the productive schema Want it X. By now, the word want started being used repeatedly in the extended combinations I don't want it and I don't want X. Since our video recording schedule typically involved only 3-4 recordings per month, our video recorded data were not sufficiently dense to lend themselves to exactly the same kind of analysis as that adopted by other researchers (Dąbrowska and Lieven 2005; Lieven et al. 2009; Quick et al. 2018). Instead, once the video recorded data had been transcribed on CHAT, they were traced back to the slot-and-frame patterns from the diary and then linked to them using FREQ and KWAL commands on CLAN. Altogether 6465 of Sadie's utterances were examined, of which 1717 were multiword. Of these, 198 were bilingual. We defined bilingual constructions as any utterances which contained at least one word from Polish and one from English. To link constructions' productivity to switch placement, we further examined the four most frequently produced monolingual and the six most frequently produced bilingual partially schematic utterances, as the latter were by far more common in the child's data. These constructions were further analysed in terms of type/token ratios (TTR) of slot fillers in order to establish their productivity. The more types of words used within the slot, the higher the TTR and the more productive the slot (Bybee 2001). The constructions were also traced back to the diary to examine their earliest usage patterns, specifically the form in which they emerged early in acquisition. Schematic and Specific Units We first look at all word combinations in Sadie's diary data. At this point we do not distinguish between English and Polish, and lump all data together, including combinations that instantiate CS. The most frequently produced monolingual and bilingual combinations will be explored in detail in Sections 3.2.1 and 3.2.2. The results discussed here demonstrate how word combination, and in particular frame formation, proceeded in Sadie's bilingual acquisition. Overall, 315 tokens of combinations produced by Sadie in English, Polish or combining both languages were found in the diary between 1;4.17 and 2;2.00. All these were divided into the three schematicity categories, depending on the evidence available to support their categorization (see Figure 1). The first group were frozen multiword units, because no other combinations were found with either of the words. 
They account for 4% (n = 13) of Sadie's 315 types of multiword combinations (See Appendix A). Among them were imitations of phrases heard on TV, from books and from parents (e.g., Wait for me!) social phrases (e.g., Well done!), linguistic routines (e.g., What's that?) and compound nouns (Bath time). The second group of constructions, and those we will focus on in this paper, were partially schematic, with at least two examples for each lexically-based pattern. They included discourse routines (e.g., Bye bye X), questions (e.g., Where's X), noun phrases (e.g., The X), prepositional phrases (e.g., In the X), noun-based schemas (e.g., Sadie X), verb-based schemas (e.g., Go away, X!) and pronoun-based schemas (e.g., Everybody X) as well as vocatives (e.g., Daddy, X!). Some constructions were also included within others, e.g., The X is a construction in its own right but also part of In the X and On the X. Most fixed elements in the frames were functional items, such as social words, question words, adjectives, determiners, prepositions and functional verbs. Partially schematic combinations account for 87% (274) of Sadie's 315 multiword constructions; recall that the slot X can be either in English or in Polish (see Appendix B). The 67 types of partially schematic patterns evolved over the period of 9.5 months, with the first one emerging at 1;4.17 and the last just before the end of data sampling at 2;2.00. If we define the language of the frame as the language of the fixed element, 50 of the 67 patterns had English frames, 14 had Polish frames and three had a frame that fitted both languages. If we assume that partially schematic units reflect the emergence of syntax, this finding shows that under the conditions of imbalanced bilingual exposure Sadie experiences, the distribution of frames mirrors her input: most frames come from English, which is Sadie's dominant language, with single words from English and sometimes Polish filling the open slot (marked as X in the examples above). 
There was also a third group of 28 (9%) combinations that remained unclassified after identifying all frozen and partially schematic constructions and which were considered novel (see Appendix C). In these cases, no more than one example was found for a particular verb plus argument, which invited an interpretation that they had been constructed in accordance with a more abstract schema, i.e., both verb and argument were considered slot fillers in an entirely schematic unit. Given the limitations of the data, we cannot be sure of course whether there were no earlier occurrences of any of these words in these patterns: in other words, the Traceback method is a conservative method whose technical definition of novel combination likely makes us overestimate the proportion of novel combinations. Among them were noun-based schemas (e.g., Red car), Subject-Verb [SV] structures (e.g., Baby's crying), Verb-Subject [VS] structures (e.g., Jedzie pociąg 'is going the train'), Subject-Verb-Object [SVO] structures (e.g., Ja chcę smoczek 'I want the dummy'), imperatives (e.g., Come back, pies! 'come back, dog'), questions (e.g., Has daddy got bicycle?) and combinations of multiple schemas (e.g., What happened, everybody?). Most of the novel combinations were based around a verb. Investigating these instantiations is important, as they potentially show the emergence of syntax, i.e., the use of schemas more abstract than the constructions that are only partially schematic, which we focus on here. All three groups make up the full constructional inventory of Sadie's output in the period under investigation, as far as our data allow its reconstruction, without regard for whether the constructions contained English or Polish lexical material. Most of her output was English. In the next section we will look at the division across the languages in more detail, focusing on the occurrence of CS. 
We will see that this mostly took the form of Polish lexemes used in English frames, not vice versa. Evidence for Productivity vs. Motivation for CS in Schematic Units It should not come as a surprise that Sadie codeswitches, despite the fact she was raised with the OPOL strategy and her parents did not codeswitch, which is confirmed by lack of mixed units among her frozen combinations. The method of tracing constructions forward in a diary of course is not perfect: it is always possible that a partially schematic construction instantiating CS was indeed heard in parental speech or, more likely, produced by Sadie but not recorded, but its conservative nature inspires confidence that Sadie's use of CS in such constructions illustrates her expanding grammatical competence. Children raised in OPOL surroundings are indeed routinely reported to go through at least a phase in which they mix their languages, suggesting it is a natural phenomenon (e.g., Mishina-Mori 2011). The analysis in this section contributes to that literature, but we mainly want to explore the evidence for productivity that CS affords when found in the output of an OPOL-raised bilingual child. Technically speaking, inserting a foreign word should be possible for any partially schematic pattern. Examining which ones do in fact host foreign words should tell us something about what kinds of constructions attract CS and therefore play a role in accommodating loanwords. It may also tell us something about productivity, as constructions that host words of foreign origin may be the most productive patterns. To explore this in more depth, we zoom in onto the most frequently used constructions with high token numbers in the diary and on video, taking large numbers of slot fillers. In Sadie's data, there are two prototypes among these constructions in terms of their openness to CS: constructions whose instantiations are always monolingual and constructions that frequently have their slot filled by material from the other language. In this section, we analyse this difference and suggest an explanation. Constructions with low token numbers are not considered here: they may not have been captured in sufficient breadth to warrant meaningful analysis. The asymmetry between the languages noted earlier also has implications for the CS in the data. Most monolingual constructions had fixed material only drawn from English (i.e., they are 'English constructions') and always hosted slot fillers drawn from English (Group 1). The few Polish schemas virtually always hosted slot fillers from the same language. The full inventory of monolingual constructions from this group is presented in Appendix D Table A1. This set of constructions includes five of the ten Polish schemas, and 19 English constructions, including the 33 tokens of The X and 26 tokens of I X, the two most frequently produced monolingual partially schematic constructions. Appendix D Table A2, on the other hand, lists all instantiations of the 37 partially schematic constructions that did sometimes include CS (Group 2), e.g., all ten occurrences of Bye-bye X. Inspecting these data allows us to look for the patterns of productivity. Some patterns are less productive than others because they occur with only a limited set of complements (like personal names and vocative words such as daddy; such patterns might not be patterns in cognitively real terms but just collections of similar frozen units). 
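Before turning to the individual patterns, a minimal sketch (ours, purely for illustration) of how such instantiations can be examined may be helpful. The language tags, the constructed example utterances and the slot-equals-last-word simplification below are assumptions rather than part of the original analysis, which was carried out by hand and with CLAN's FREQ and KWAL commands; the definition of a bilingual utterance, however, is the one used in the study (at least one Polish and one English word).

```python
# Sketch: flag utterances as bilingual and group them by the frame they
# instantiate. Example utterances are constructed for illustration only.
utterances = [
    [("more", "eng"), ("woda", "pol")],                 # More X, Polish filler
    [("more", "eng"), ("tissue", "eng")],               # More X, English filler
    [("the", "eng"), ("dummy", "eng")],                 # The X, monolingual
    [("daj", "pol"), ("mi", "pol"), ("teddy", "eng")],  # Daj mi X, mixed
]

def is_bilingual(utt):
    """Bilingual = contains at least one Polish and one English word."""
    langs = {lang for _, lang in utt}
    return {"pol", "eng"} <= langs

for utt in utterances:
    # Simplification: treat the final word as the slot filler.
    frame = " ".join(word for word, _ in utt[:-1]) + " X"
    status = "bilingual" if is_bilingual(utt) else "monolingual"
    print(frame, "->", status)
```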
The most productive patterns are the syntactically most interesting ones, and we study them with one overall question in mind: what accounts for the CS we see in some of these patterns but not the others? To follow from this, we advance an account for why the abovementioned extremely productive schemas The X and I X gave rise to solidly monolingual English instantiations. Group 1: Mostly Monolingual Constructions In the monolingual group (see Table 1) are the invariably monolingual I X and The X (the latter of which is also enclosed in longer monolingual constructions with prepositions such as In the X and On the X) as well as the nearly always monolingual My X and Where's X (for slot fillers see Appendix E). Note that all four have an English frame. Although some Polish words were slotted into these combinations, they occurred at points of the utterance other than directly after the pronoun central to the frame, as the following instantiations of I X show: I don't want it ser 'cheese' (1;11.05); I don't want it spać 'to sleep' (1;11.07); I don't want it do domu 'home' (1;11.12); I go swim (2;0.04); I don't know (2;0.08); I want again (2;0.18); I need milk (2;0.29); I lost the dummy (2;1.03); I made it (2;1.17) and I did it (2;1.18). Overall, 55 tokens of the word I were recorded, always as part of constructions, with 15 types of fillers used (TTR = 0.272). Group 2: Frequently Bilingual Constructions Next, we look at the constructions that were also highly frequent but often contained a slot filler from the other language (see Table 2): More X, X gone, No X, Daj mi X 'give me X', (I don't) want (it) X, (Where) other (one) X. One of these constructions is Polish. Overall, these six schemas account for 51% of the tokens (n = 97) of mixed constructions recorded on video. More X Examples include More woda 'water' (1;11.08); More pić 'to drink' (1;11.13); More kukurydza 'corn' (1;11.14); More woda 'water' (1;11.15); More kaczki 'ducks' (1;11.24); More tissue (2;0.06); I want more bread (2;0.21) and I want more ice-cream (2;1.11). Of 94 tokens of the word more, 28 were not followed by slot fillers and seven were followed by the word please. In the 53 used in constructions, there were 33 different types of fillers (TTR = 0.622). Daj mi X 'give me X' Overall, 23 tokens of Daj (mi) were recorded, of which two are produced without a slot filler. In the other 21, 5 types of fillers are used (TTR = 0.238). (I don't) want (it) X Want it emerged as an unprocessed chunk at 1;9.11 and was used on its own until 1;11.04, when it was used in the mixed constructions I don't want it pies 'dog' and I don't want it mleko 'milk', and then at 1;11.05 in I don't want it ser 'cheese'. Overall, 84 tokens of want (it) were recorded, of which 42 are produced without a slot filler and 42 are used with 34 types of fillers (TTR = 0.809). (Where) other (one) X Other one emerged as an unprocessed chunk /auauan/ at 1;9.20 and was used on its own until 1;11.06, when it was recorded in Other one peppa pig and later Other one teddy (2;0.29); its use in constructions alternated with its use on its own. The frame was later extended through the addition of the word where to form a longer frame Where other one, first recorded in the construction Where other one cat (2;1.06). Overall, 79 tokens of (where) other (one) were recorded on video, of which six were produced without a filler and 73 with 33 types of fillers (TTR = 0.452). This frame was typically produced in the context of a card game in which she was expected to find matching animals. 
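The productivity measure used throughout this section is a simple type/token ratio over slot fillers. Purely for illustration, the sketch below recomputes it from the filler counts reported above; the mapping and naming are ours, not part of the original analysis.

```python
# Type/token ratio (TTR) of slot fillers: distinct filler types divided by
# filler tokens for a given frame. Counts are taken from the figures reported
# in this section.
filler_counts = {
    "I X":                   {"types": 15, "tokens": 55},
    "More X":                {"types": 33, "tokens": 53},
    "Daj mi X":              {"types": 5,  "tokens": 21},
    "(I don't) want (it) X": {"types": 34, "tokens": 42},
    "(Where) other (one) X": {"types": 33, "tokens": 73},
}

for frame, counts in filler_counts.items():
    ttr = counts["types"] / counts["tokens"]
    print(f"{frame}: TTR = {ttr:.3f}")

# Yields roughly 0.273, 0.623, 0.238, 0.810 and 0.452 respectively, matching
# the values reported in the text up to rounding.
```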
The usage data show that generally constructions that contain CS some of the time (mean TTR = 0.581) were more productive than the ones that never do (mean TTR = 0.307). However, the data concerning productivity, at least the kind defined as a function of type frequency, are inconsistent. The X (TTR = 0.435) from Group 1 was as productive as both (Where) other (one) X (TTR = 0.452) and No X (TTR = 0.438) from Group 2. Likewise, the productivity of I X (TTR = 0.272), My X (0.363) and Where('s) X (TTR = 0.114) from Group 1 was comparable to that of the Polish construction Daj mi X (TTR = 0.238) from Group 2. This suggests that reasons for openness of constructions to CS need to be examined in more detail. A closer look at the usage patterns reveals that in Group 1, the words I, the, my and where all emerged as, and all remained parts of, constructions. Although a range of words filled the slots in these constructions, thus potentially leading to segmentation of their component parts and increased productivity of the schema, the words of the frames were never used on their own, so it is not clear whether these words were conceptualized as individual linguistic entities. On the other hand, in Group 2 all the frames emerged first as individual words or longer multiword units and possibly became entrenched and conceptualized as such through holophrastic use. In the next section, we will summarize these findings and discuss their implications for theories of language acquisition, particularly in bilingual settings. Discussion We have demonstrated how the acquisition of constructions proceeds in a two-year-old exposed to Polish and English from birth, and who shows a preference for using the latter regardless of who her interlocutor is. Lumping output in both languages together, the diary data yielded 315 constructions recorded between 1;4.17 and 2;2.00, of which 4% (n = 13) were frozen, 87% (n = 274) partially schematic and 9% (n = 28) potentially novel. From among the 67 types of Sadie's partially schematic constructions, 50 had English and 14 Polish frames, while the three remaining frames could be interpreted as either Polish or English, a division that corresponds with the asymmetry in the input Sadie received from her environment. These findings allow us to answer our first question in that they confirm that much of the creativity of children's language use seems to be located in the use of slot-and-frame patterns (e.g., Keren-Portnoy 2006; Lieven et al. 1992; Quick et al. 2018), and to consist of the filling of slots with new lexical items. To address the second question about the openness of partially schematic constructions to CS, we examined a pool of the most frequent monolingual and bilingual units from the video recordings and supplemented them with the diary data. Given the child's dominance in English, this mostly meant a comparison of constructions mostly or only instantiated as fully English chunks (Group 1) and English constructions that often contained some Polish material, usually a content word (Group 2). The usage data show that constructions that never contain CS are less productive (mean TTR = 0.307) than those that contain CS some of the time (mean TTR = 0.581). This suggests that the occurrence of CS in a construction is a sign of its productivity, or that the usefulness of a construction in hosting new words, including words from the other language, is what drives its productivity. 
However, the TTRs of individual constructions within each group are dispersed across a wide range, which suggests that type frequencies of slot fillers may not be the most accurate way of predicting the openness of such constructions to material from the 'other' language. Our data suggest rather that this openness has something to do with the usage patterns of a frame from early emergence through to subsequent use. Examination of the frames in Group 1 suggests that their production always came about by virtue of being part of longer stretches of speech. This lack of articulatory and semantic autonomy may explain why they were never or rarely combined with Polish items. On the other hand, examination of frames from Group 2 shows that they were not tightly attached to other words, and therefore must have gained some articulatory autonomy, allowing them to be combined more freely with items from both languages. This shows that in a context where CS is rarely or never modelled in the input, productive assembly of bilingual speech is facilitated by the words having more independent semantic identity and therefore having been entrenched through solitary use. Other factors which contribute to CS most likely include whether or not the slot projects for elements typically amenable to CS, such as semantically specific words. All schemas from Group 2 project for a noun, almost any noun. Taking More X as an example, more and X are relatively autonomous, in the sense that they both contribute semantics that is essential for the meaning of the whole and both more and X (whatever it is that fills the slot) will often occur without the other. For schemas from Group 1, such as I X or The X, that is not the case: I and the are more dependent on co-occurring material than more is. We suggest this is why we find more on its own and not I or the. Due to this difference in autonomy and dependency I and the virtually always trigger further material with which they form multiword units in Sadie's mind, and by virtue of the relatively monolingual modes she is usually in, all or most of these units will be completely English. Furthermore, in I X (though in no other constructions from Group 1) the X category actually consists of many multiword chunks with different structural properties, so that in some sense it is less productive. This, ultimately, has to do with the distribution of semantic autonomy and dependence, and may explain why 'productivity' by itself, at least the kind understood merely as a product of type frequency, is insufficient as an explanation. Lack of CS in The X could also be linked to its sheer frequency in speech: as it does not have competitors in other definite articles, apart from those nouns which require zero article, it is in wide use and initially children may not be aware that the is a separate word. This may sound counterintuitive in light of what we know about type frequency. After all, if the recurs with a high number of nouns in the slot of the construction, we should expect high productivity of that slot (Bybee 2001) and, by extension, high levels of CS within that slot. However, The X is virtually impervious to CS. It may be useful to refer to some cross-linguistic data to help us to explain this observation. French deploys a range of definite articles, depending on gender (l' as in l'amour 'love'; la as in la vie 'life'; le as in le matin 'the morning' and les as in les bonbons 'candies'). 
In the early acquisition of Definite article X, whole noun phrases are replicated as whole constructions and no errors are evident in use (Leroy-Collombel 2010). Once the concrete constructions have been analysed, gender errors begin to occur (e.g., le poule 'the hen' instead of la poule) and finally children start to use the relevant determiners with the right gender (Leroy-Collombel 2010). The case of French thus shows initial tight attachment of particular articles to particular nouns which is likely the result of rote-learning. As French children hear contrastive use of three different articles, they learn that the articles are separate elements. By extension, we speculate that if the English the had competitors in other English definite articles, children would be forced to experiment with its use earlier on and they would figure out sooner how to use it productively. We suspect that this typologically determined ease of detachment of articles from nouns has some implications for CS. Let us move on to the example of German, a language with three definite articles (der as in der Mann 'the man'; die as in die Frau 'the woman' and das as in das Brot 'the bread'). Quick et al. (2018) discuss the bilingual acquisition of one child, Tim, and report that he switches within noun phrases when the definite article is German, e.g., Und das X and And die X, but not when it is English, i.e., The X. Presumably, high type frequency of definite articles in German leads to quicker emergence of the determiner category which, in turn, facilitates CS. By extension, the fact that there is only one definite article in English leads to a delayed emergence of this category. In the case of German Article X construction, it is thus the bilateral processing of that construction which facilitates CS within it because it triggers schematizing at an earlier stage. Our data confirm that the unilateral processing of The X in English leads to a slower emergence of partial schematicity and therefore it does not trigger CS within the construction in our data. These findings have important implications for our understanding of the relationship between productivity and CS. One main observation was that in Sadie's data partial schematicity accounts for CS only to some extent. Is this because overall productivity is insufficient in explaining CS or because partial schematicity is not a sufficient determinant of CS? We show that most of the child's productions can be classed as partially schematic, regardless of whether they are bilingual or monolingual, and indeed that some of such partially schematic constructions remain impervious to CS. We also show that the words which facilitate CS are those which have been entrenched through autonomous use; and that some lexically fixed frames may require bilateral processing of the whole construction to trigger CS, as in the case of The X. This invites our conclusion that type frequency leading to partial processing of a schema is not a sufficient predictor of CS: some constructions may need more than just partial productivity though this would need to be confirmed in future research on a larger set of bilingual constructions. In answer to the third question asked in this study, as to why Sadie codeswitches when her input emphasizes the separation of her languages, we suggest that CS is simply a reflection of her emerging syntactic productivity. As most of Sadie's frames are English, she switches to English in order to be able to say more regardless of the language of interaction. 
Her CS is likely supported by many factors other than the way in which the two languages are presented in the input. One of them is the higher entrenchment of some words compared to their translation equivalents, which facilitates their access and retrieval (see Quick et al. forthcoming). For example, most of Sadie's frames are English and the fact that they are activated even when Sadie is addressed in Polish suggests they must be more entrenched and easier to access. Sadie's CS thus shows limitations of input in accounting for language use: despite being raised in an OPOL environment, she still combined her two emerging languages together in speech. Additionally, purely on the basis of the description of the family linguistic situation one could have expected that Sadie's Polish would be very rudimentary. However, high numbers of CS utterances as well as some novel Polish combinations show that extensive input is perhaps not needed to build up some decent degree of competence and self-confidence in the minority language, an issue often discussed in the literature on Family Language Policy. The observation of CS also shows that Sadie, who does not experience intraclausal CS in the input, is really 'working' her languages. Particular frames are especially productive in this way; and we can see how the typical characteristics of insertion CS (i.e., a grammatical frame from one language hosts content words from the other) could develop if Sadie were to continue producing mixed speech. Whether she does or not is mostly dependent on sociolinguistic factors. Finally, Sadie's usage data also show language dominance to be just a by-product of more basic processes of usage-based selection of words and constructions combined with sociolinguistic pressures on a child that stimulate an awareness of the language affiliation of these words and constructions. That language affiliation comes from two sources: the natural abstraction of knowledge from co-occurrence patterns, which holds for all humans everywhere, and sociolinguistic emphasis on language separation, which may be strong (the usual case in the bilingual acquisition literature) or not (rarer, perhaps because of empirical bias or perhaps because of social reality). Clearly, naturalistic data can only be indicative due to production limitations and the restricted context of recordings. Therefore, ideally future studies should complement our knowledge by investigating the questions we asked here under experimental conditions. Conclusions In this study, we have shown that, as in monolingual development, bilingual constructions can also be accounted for by emerging slot-and-frame patterns. Access to frequently produced monolingual and bilingual constructions allowed us to highlight a limited role of type frequency for the processing of words in speech. Their autonomous use, as well as bilateral processing of the constructions of which they are part, appears to be a better predictor of their productivity and their readiness to enter into combination with words from another language. Despite showing links between the child's own language usage and the patterns observed in her CS, we have also highlighted limitations of input in predicting a child's own language outcomes. The case of the child we studied shows that despite being raised in an OPOL context, she went through at least a phase of combining words from her two languages, which suggests that CS is a natural manifestation of the bilingualism that results from being raised with two languages. 
(a) Imitations of phrases heard on TV and from books: Are you ready to go? Don't worry my friend; What does she look like? (b) Imitations of phrases most likely heard from parents: Pies cicho! 'Dog; quiet!'; Do domu pies! 'Home dog!' Oh there it is; Wait for me!; What have you got? (c) Compound nouns: Bath time (d) Social phrases: Well done; Night night (e) Linguistic routines: Co to? 'What's that?'; W tę stronę 'This way' Appendix B Partially schematic units (a) Discourse routines: Bye-bye X; Papa X 'bye bye X'; Hello X; Hi X; More X; Jeszcze X 'more X'; No X; Nie X 'Not X' Yes X; Thank you X; X gone; Nie ma X 'not there X'; X please (b) Questions: What's that X?; Where/where's X?; Where other one X?; X [where] are you? What X? (c) Noun phrases: The X; My X; Mine X; Moje X 'my X'; This X; Two X; X's turn; Other one X; Naughty X; Silly X; possessives (Sadie X = English and Polish word order; X Sadie-Polish word order) (d) Prepositional phrases: X on; In the X; On the X; Do X 'in the direction of X' (e) Verb-based schemas: imperative (Tickle X; Watch (it) X; X come!; Daj mi X! 'give me X'; X look!; Idź X! 'Go away X!'); affirmative (X coming; Xśpi 'is asleep'; I'm gonna X; Jedziemy do X 'we're going to X'); requests (I want [it] X; No want X; I don't want it X; I lost the X; I need X); demonstrative (To jest X 'this is X'; Got no X; Jest X 'is X'; Look X; X's here) (f) Noun-based schemas: with verbs missing: Sadie X (e.g., Sadie bicycle too; Sadie in there); with verbs included: Sadie X (e.g., Sadie clean up; Sadie otworzy 'Sadie will open'; Sadie broke it; Sadie wants bicycle; Sadie jest tutaj 'Sadie is here') and X the door (Shut the door! Open the door!) (g) Pronoun-based schemas: I X (e.g., I see you; I made it; I did it; I do it; I go swim; I finished) and I'm X-ing (e.g., I'm swimming; I'm cleaning; I'm bouncing a ball; I'm coming to get you); X you (e.g., Thank you; Bless you) and Everybody X (with verbs missing: Everybody shower; Everybody up; Everybody bicycle; Everybody apple; Everybody tired; Everybody ząbki 'Everybody teeth') and Everybody X (with verbs included: Everybody sit down!) (h) Vocatives: Daddy X! (e.g., Daddy have a go! Daddy help Sadie! Daddyśpij! 'Daddy sleep!'); Tata X! 'Daddy X!' (e.g., Tata
A qualitative study of online mental health information seeking behaviour by those with psychosis Background The Internet and mobile technology are changing the way people learn about and manage their illnesses. Little is known about online mental health information seeking behaviour by people with psychosis. This paper explores the nature, extent and consequences of online mental health information seeking behaviour by people with psychosis and investigates the acceptability of a mobile mental health application (app). Methods Semi-structured interviews were carried out with people with psychosis (n = 22). Participants were purposively recruited through secondary care settings in London. The main topics discussed were participants’ current and historical use of online mental health information and technology. Interviews were audio-recorded, transcribed and analysed by a team of researchers using thematic analysis. Results Mental health related Internet use was widespread. Eighteen people described searching the Internet to help them make sense of their psychotic experiences, and to read more information about their diagnosis, their prescribed psychiatric medication and its side-effects. Whilst some participants sought ‘expert’ online information from mental health clinicians and research journals, others described actively seeking first person perspectives. Eight participants used this information collaboratively with clinicians and spoke of the empowerment and independence the Internet offered them. However nine participants did not discuss their use of online mental health information with their clinicians for a number of reasons, including fear of undermining their clinician’s authority. For some of these people concerns over what they had read led them to discontinue their antipsychotic medication without discussion with their mental health team. Conclusions People with psychosis use the Internet to acquire mental health related information. This can be a helpful source of supplementary information particularly for those who use it collaboratively with clinicians. When this information is not shared with their mental health team, it can affect patients’ health care decisions. A partnership approach to online health-information seeking is needed, with mental health clinicians encouraging patients to discuss information they have found online as part of a shared decision-making process. Our research suggests that those with psychosis have active digital lives and that the introduction of a mental health app into services would potentially be well received. Electronic supplementary material The online version of this article (doi:10.1186/s12888-016-0952-0) contains supplementary material, which is available to authorized users. Background Over 3 billion of the world's population is now estimated to have access to the Internet (http://www.Internetworldstats.com/stats.htm). In the UK 83 % of adults have Internet access, and over half own a smartphone [1]. The Internet is changing the way people learn about and manage their illnesses. In a recent European survey, over 75 % of respondents felt that the Internet was a good resource for finding out more about health and 60 % reported using the Internet to look up health information [2]. Due to the increasing availability of the Internet and the anonymity it offers, online health resources may present an attractive source of information for those with stigmatised conditions [3]. 
It is reported that more than half of the people with first episode psychosis use the Internet as a source of information about their mental health [4] and that over 50 % of the people with psychiatric problems use the Internet to find out about their diagnosed mental health condition [5]. Current evidence suggests that people with psychosis access and use digital technology in a similar way to individuals unaffected by mental illness [6,7]. However, little is known about how people with psychosis view and interact with mental health information online. Although E-Mental health interventions are in their infancy, those that have been developed to support people with psychosis and their families have been well received by users [8,9]. Online and mobile-phone based interventions are associated with improved medication management amongst people with psychosis and seem to be at least as effective as standard care in relation to adherence [8]. We aimed to explore the nature, extent and consequences of online health information seeking by people with psychosis, in order to inform future clinical practice and the potential development of a novel E-mental health app. Settings This study took place in Camden and Islington NHS Foundation Trust, which is a NHS mental health provider in an ethnically and socially diverse inner-London area. Participants were recruited through an Early Intervention Service for psychosis, a NHS residential Crisis House and an Acute Day Unit, which offers a day service to people in mental health crises. Ethical permission was obtained from a UK National Health Service Research Ethics Committee (REC reference: 13/EE/0222). Participants Participants were purposively recruited to obtain views across age groups, gender, ethnicity, psychotic diagnoses, and educational levels. Inclusion criteria were: English-language speaker; aged 18-65 years; diagnosis of psychosis (schizophrenia, schizoaffective disorder, bipolar disorder with psychotic symptoms, persistent delusional disorder or psychosis not otherwise specified); and currently using or had previously used the Internet. Eligible patients were initially identified and approached by a member of their mental health team. Those who expressed interest were contacted by the researcher, who provided further information and obtained informed consent. Taking account of the potential range of sociodemographic and clinical positions of our interviewees, we estimated that we would require between 15-25 participants to reach the point of theoretical saturation. Data collection A semi-structured topic guide was used to ensure certain topics were discussed with every participant whilst remaining open enough to allow new areas to be explored depending on each participant's views/experiences. One researcher (GA) retrieved and reviewed key papers about online mental health enquiry [5,7,10], identifying major themes for a draft topic guide. Other members of the research team reviewed the themes identified, and a pilot topic guide was agreed upon. This was piloted with 2 participants and was finalised in collaboration with the study research team. The same basic topic guide (See Additional file 1), with options to probe and explore answers, was used with all participants. 
The topic guide (see Additional file 1) covered: participants' current and historical health related Internet use; the reported impact of this on their mental health; their current and historical use of E-Mental health technology such as mental health apps; and their experience of, and attitudes towards, health related Internet use and E-Mental health technology. Semi-structured interviews, up to 90 min long, were conducted by GA, an Academic Clinical Fellow in Psychiatry, at the secondary care setting at which the participant was being seen. Demographic data (gender, age, ethnicity, education levels, psychosis and other comorbid diagnoses) were also recorded. Analysis The interviews were audio-recorded, transcribed verbatim, and any identifying information was removed to preserve anonymity. The manuscripts were imported to QSR NVivo 10 for Windows [11]. The material was analysed using thematic analysis [12]. To enhance validity, a second researcher (PO) separately coded ten of the transcripts and the developing coding frame was discussed and reviewed with the wider research team throughout analysis. Recruitment, data collection, and analysis occurred concurrently until saturation was reached. We judged this to have occurred after 22 participants' transcripts were analysed, as no further themes were generated. Results The characteristics of the participants are shown in Additional file 1: Table S1. The sample was predominantly from White and Black British backgrounds, single, and in receipt of employment or disability living allowance. Their ages ranged from 21 to 57. The most common diagnosis was Psychosis Not Otherwise Specified (Psychosis NOS). The duration of interviews varied from 21 to 65 min. One interview (P12) was cut short as the degree of psychotic symptoms experienced by the participant meant he was unable to complete the interview. Whilst data gathered from his initial responses are included in the results, he was unable to discuss the more complex questions regarding the extent and impact of his online mental health enquiry. Digital lives and overview Eighteen of the participants reported that they had access to wireless Internet where they lived and most had a personal device from which they could access the Internet. Seventeen reported accessing the Internet daily, four others reported regular Internet use and only one person said they rarely used the Internet. Of the thirteen participants who owned smartphones, all described using general apps downloaded to their mobile devices, but none had used a mental health app and most stated that they had never heard of any. Results are organised into four thematic sections. 1) How and why the majority of participants sought mental health information online. 2) Participants' experiences in navigating, accessing, and processing this information. 3) Impact of online information on the participants' emotions and behaviour and how this was influenced by their relationship with clinicians. 4) Respondents' views on self-management apps for psychosis. These key themes and findings are discussed below. 1) Seeking and Finding Mental Health Information Online The majority of participants had used the Internet to find out more information about mental health (n = 18). Sixteen people described searching the Internet to help them make sense of their experiences (including delusions and other symptoms of psychosis), and read more information about their diagnosis, their prescribed psychiatric medication, and its side-effects. 
Three people discussed searching for mental health advocacy and mental health organisations. All of these participants reported using Google to direct them to relevant mental health information, with six participants stating that they often limit their browsing to the top Google search results. Wikipedia was the most frequently identified information source. One participant described not knowing the names of specific diagnoses, so he found searching symptoms and then following links to be a useful way of navigating the Internet in search of mental health information. Two participants reported following links, one through a UK-based mental health charity's Twitter account and the other through the NHS Choices website. However, there was a general lack of awareness of NHS online resources or specific mental health charity websites and of how to access them, with only five participants reporting having had experience of using them. Despite feeling that NHS websites would be well informed and a responsible source of information, four participants discussed having never accessed them as they were unsure how to. Information about medication and medication side-effects was the most common topic of mental health enquiry online (n = 15). Participants described going online to enhance their knowledge and understanding because the information they found was more detailed and in-depth than other resources, such as leaflets or clinician-provided information. Several participants described how accessing this detailed information helped them to feel better informed with regard to their medication and mental health problems, and how this led to a sense of security and reassurance. One person described how reading scientific explanations of her delusions helped her to manage her experiences. "I would have a difficult time not believing my delusions and it would help to look at a sort of medical thing [online] so I could affirm the idea that I was experiencing mental health problems." (P04) Whilst some participants sought 'expert' online information originating from mental health clinicians and research journals, others described actively seeking first person perspectives. Participants described seeking these experiential accounts for a number of reasons. One described feeling resentful towards mental health services following his first compulsory admission to a mental health hospital and wanting to make sense of this by reading about other people's experiences (P10). Some participants described looking for advice from people with lived experience on how to cope with their diagnosis or manage their mental health, while others simply wanted reassurance that they were not alone in their experiences of mental health problems. One participant reported that reading about other people's experiences had helped her make sense of her own and another described how reading recovery stories gave him hope for his own future (P08). "Reading other people's success stories regarding how they've gone back to a normal life can be, you know, somewhat reassuring." (P08) A minority of participants had been advised to look up mental health resources online by family and mental health clinicians (n = 4). In contrast to participants who were internally motivated by a personal desire to further their understanding of mental health, these participants spoke of either looking only briefly, not having the motivation to fully explore the subject, or only looking at the recommended site if it was of specific interest to them. 
"My key worker at the hostel, she said she found out about it from somewhere … I didn't go to the website yet. I just didn't do it really, I had motivation difficulties really." (P18) Four participants spoke of never having used the Internet for mental health related enquiry. These people all had Internet access at home and spoke of using it regularly for other purposes such as communication and social media. Whilst these participants were from a wide range of age groups and ethnic backgrounds, none of them had received higher education. When online mental health enquiry was discussed, one expressed an interest in accessing mental health information but reported that they had not thought of using the Internet to do so. The other three were aware that the Internet could be used for this purpose but either reported having no interest in further information about their mental health or treatment, or that they actively avoided information about mental health problems. All four described relying solely on written and verbal information provided by their mental health teams, and feeling satisfied that such information adequately met their needs. This seemed to preclude any felt need for independent enquiry. "Basically I don't feel that it's something I need to look up online. Me being in a mental health hospital, I put my trust in the doctors to come through for me to help me get better. If I was relying on it myself then I would look it up but because I am relying on it with other people, I don't feel I need to look it up. I feel that they have the solutions." (P06) Accessibility and availability Many participants who used the Internet to search for mental health information described the benefits of having access to current and in-depth information online that was more accessible across space and time than other sources, including clinicians. "It's readily available, it's easily accessible. Not that a clinician isn't, I mean … you can access it any time that you want … I suppose the accessibility is … really positive." (P04) However, several participants described barriers to access that applied to all E-mental health information and services. Financial barriers were the most commonly cited. Several participants reported being unable to afford a replacement after their smartphones were broken, stolen or sold. Others were concerned about having sufficient data allowance on their mobiles to access Eresources and having to go a café to access public wireless Internet. Four participants described the interplay of their mental health and Internet use. These participants reported reduced Internet use when they were unwell, due to either feeling unmotivated or finding Internet resources difficult to engage with. Conversely, two other participants described not wanting to access online mental health information or E-mental health services when they felt well. They explained that they did not have the time or did not want to think about their mental health difficulties. "I feel like I'm on top of my mental health at the moment. So yeah, I don't feel any inclination to start planning out charts and stuff about how I'm doing … to me the focus on it is kind of what gets me down about it, you know? I prefer it if it just didn't really exist." 
(P10) One participant, who had a mild learning disability and a diagnosis of paranoid schizophrenia, spoke of needing support from a clinician to access online information as her symptoms and antipsychotic medication left her feeling too tired and unwell to navigate the Internet independently, beyond social media and online shopping. Others described lacking confidence with digital resources and felt they would need guidance or support from mental health clinicians before starting to use new online mental health resources. Navigation and comprehension Participants' experiences of finding and understanding information on the Internet varied. Many described the information as easier to understand than other written sources due to clearer presentation and simpler language. "I prefer Internet to books. Books [have] big words that I don't even know the meanings. I prefer Internet, it's easy words." (P11) Several participants framed their ability to understand online information in terms of the level of professional expertise needed to comprehend it. Some described information they found on the Internet as accessible to the lay person, while others implied that finding and processing mental health information required specialist training and skill. "It's very practical and … down to earth in how it's written. You don't have to be a mental health professional to get an understanding and benefit from it. So yea, it's good." (P8) However, not all participants spoke positively of the depth and abundance of information, and some described difficulties in accessing or processing it. Several participants described feeling overwhelmed by the large quantity of online information, or feeling incapable of understanding what they found. One participant felt that she lacked the medical expertise required to find and comprehend mental health information online (P17). Another preferred relying on clinicians to filter available information over independent research, having struggled with the amount and depth of online information when looking things up independently (P01). Reliability There was a wide range of views with regard to the reliability of information on the Internet. Whilst some participants felt that it was accurate, others were more cautious about the nature of the information found online. Participants' judgements of the reliability of online information were strongly linked to their level of trust in those to whom they credited authorship, and the qualifications of the author. Participants who thought that the information was accurate generally expressed a belief that trusted professionals, such as doctors or researchers, kept the information up-to-date. In particular, Wikipedia was identified as the most reliable source of health information online. "I look at my medications on Wikipedia and stuff like that … it has all the side-effects … really good researchers already scan all the books and have a big book list at the end of it." (P18) Other participants were more cautious of the credibility of online health information. Some questioned its reliability due to having found contradictory information, and questioned the qualifications, motivations, and sources of those posting. Two participants expressed distrust towards the NHS and NHS professionals, and a consequent suspicion of information on NHS websites. They described anti-psychiatry and conspiracy theory websites as more reliable, and reported using these as their sole source of mental health information. 
"I kind of look at mental health and psychologists, I don't trust them 100 %. So I never look up on anything that comes out of the NHS or it has to be conspiratorial if I look at mental health issues."(P09) Four participants spontaneously expressed an interest in obtaining a recommended list of mental health websites. They either reported difficulty finding reliable sources despite spending hours looking through information online, or said that they did not know what was available. Two other participants said that the discussion from the interview had reminded them to search for sites beyond the top search results from Google. Anonymity and privacy Over half of the participants who used online health resources spoke of the security and anonymity of the Internet. Participants described the importance of the privacy of this online world that could be accessed in their own space. "The Internet's nature, you know, in itself it, there is a sense of security with it because you're at home [and], I suppose, safe." (P08) Participants also described feeling more relaxed when looking up information online than in a face-to-face meeting with a mental health clinician. "I suppose there can be a bit of pressure when you are speaking to the doctor because it can be a big deal to start on a new medication and it can be a nice to have a calmer look at the information." (P04) 3) The Patient-Internet-Clinician Relationship This section is based on data from seventeen people who used the Internet to access mental health information online. This does not include the four people who were non-users and the participant whose interview was terminated early and did not answer these questions. Participants who used the Internet for mental health information could be divided broadly into two groups. Eight people accessed Internet information, discussed this with their mental health team, and worked in conjunction with clinicians to make shared care decisions. For the purposes of the study we describe this group as collaborators. The other group, (n = 9), parallel universes were active users of the Internet. They used it to gather more information about their mental health, but did not bring this new information into consultations with their mental health clinicians. They kept their online searches, and the impacts of these, separate from discussions in clinical consultation, so that these two domains of knowledge and activity co-existed separately as parallel universes. Whilst both groups were derived from similarly diverse ethnic backgrounds, education levels and diagnoses, the participants who were collaborators were on average younger (30 vs. 38 years old), with fewer years' contact with mental health services (3 vs. 10 years), fewer psychiatric admissions (3 vs. 7), and fewer involuntary admissions (1 vs. 2). The group of collaborators were all based at an EIS (Early Intervention in Psychosis) service. The parallel universe group were predominantly made up of residents of a Crisis House and/or an Acute Day Unit (n = 6) with a minority of EIS patients (n = 3). In this section we explore these distinct groups and the impact that these two styles of patient-internet-clinician relationship have on participants' emotional experiences and clinical care. Fear and anxiety Both groups of participants identified negative consequences of mental health enquiry online. The most commonly described experience was anxiety after reading about medication side-effects (n = 7). 
Four of these participants spoke of reading about sudden death in relation to antipsychotic medication, and another individual disclosed her fear of developing a learning difficulty after reading information suggesting that this happened to 1 % of those taking Aripiprazole (P11). In the parallel universe group, two participants who had previously used the Internet as a source of information reported no longer doing so because of concerns about what they had read, explaining that they didn't want to worry any more. Two other participants reported being so anxious about what they had read online that they had discontinued their medication without discussion with their clinicians. One said they had experienced a psychotic relapse and an admission to hospital as a result of stopping their medication (P5). The other described how finding this additional information online had led her to doubt the reliability of her mental health team (P11). Another individual described how reading about the side-effects of Olanzapine had made him want to take cocaine, which he felt would be safer. "I went online, and it worried me … I left my meds sitting there for a good three weeks, and I didn't take them because I was scared about the side-effects. I got paranoid, let's put it that way." (P05, parallel universe) Those in the collaborator group expressed a similar underlying anxiety in relation to online information about the side-effects of antipsychotic medication. Unlike the parallel universe group, however, collaborators reported having discussed their concerns with their mental health teams. These participants described how clinicians had put frightening statistics into a clinical context, provided reassurance and, in some cases, proposed alternative medication. Whilst some described remaining anxious about possible medication side-effects, they reported that their medication adherence and behaviour remained unchanged following their Internet use. Two participants in the parallel universe group reported feeling anxious and hopeless after reading or watching YouTube videos online about mental illness. Both described how this material left them feeling confused and unable to process the information. This was in contrast to the experiences of the collaborator group, where some individuals reported consulting a doctor in order to check the accuracy of the information they had found, which relieved the anxiety created by this supplementary information. "There's a lot of information on the Internet that may not necessarily be correct. It may be correct at the time but it may have changed. So I was … checking and then going back to the doctors and asking." (P13, collaborator) Empowerment, control, and negotiating the clinicianpatient relationship For those in the collaborator group, both the act of independent research online and the understanding and knowledge gained as a result were closely linked with feelings of control and empowerment. "I think for one thing, it makes me feel more in control of things to be able to look at things independently and to get new information about it." (P04, collaborator) "I looked at my diagnosis … it's very helpful, when you understand it. It sort of gives you the ability to recover from it." (P2, collaborator) This greater sense of self-reliance did not seem to extend to those in the parallel universe group. 
These participants described their online mental health related enquiry as a solitary activity that they did not share with friends, family, or mental health professionals. Similarly, they reported healthcare-related decisions (for example, discontinuation of medication) that were influenced by online enquiry as having been made secretly or in private. The majority of the parallel universe group explained their reluctance to share online information with clinicians in terms of their clinicians' failure to initiate discussions about online health information or to recommend sites or E-mental health resources. However, participants' explanations for their own reticence in clinical consultations regarding their online activity were suggestive of assumptions about power and knowledge in the clinician-patient relationship, in which the clinician was perceived as the 'expert' provider of information. Some participants expressed an explicit belief that doctors did not like patients exploring the Internet for supplementary information, but most described a more implicit sense that independent mental health information seeking online was somehow at odds with the status-quo: "It wasn't that I felt I was usurping the medical authorities. It was just, I don't know, it somehow felt like I shouldn't be doing it. That sounds bizarre doesn't it? Really. Because everyone is entitled to be self-informed." (P20) "I trust [my key worker] but it's not really that I can tell [them] 'oh I searched and this and I found out I am like schizophrenic people' … I wouldn't tell her that I think I have schizophrenia. It's not really a nice thing to say. Because, well, she's the person who finds my sicknesses, not me that finds it." (P11) 4) Future Use of Mental Health Apps As part of the interview schedule, participants were asked about their experience and thoughts regarding mental health apps. None of the participants had used mental health apps and despite most using other apps in their daily life (n = 14), they said they were not aware that any mental health apps existed. They were asked about their views regarding the potential introduction of a self-management smartphone app and what would encourage or discourage them from using this. There was a positive attitude towards the idea of a mental health self-management app, particularly among those who were currently completing a paper diary, with eighteen participants stating they would find it helpful. These participants felt an app should be clear, concise and easy to use. Some participants recommended tick boxes as an alternative to text input, while others suggested having free-text space to record any difficulties they had experienced. Several participants described how such an app could potentially help them overcome communication and recalled difficulties that they had experienced when seeing their mental health clinicians and discussing the nature and degree of psychotic or mood symptoms. "When I go to see my social worker or psychiatrist … it's so hard sometimes to express yourself on how your mood has been, you don't remember and you don't know if there's a pattern or not." (P19) Five people spoke of an app giving them a sense of purpose and helping them to set goals for their recovery, with one participant likening the app to the acute day centre he was attending (P22). 
"A lot of the time when you've got a mental health problem you get up but you don't know what to do with yourself … [an app] would be really useful because it would give them a goal and when you've got a goal, it helps." (P20) Four participants spoke about the usefulness of having links or mental health information on the apps, which would allow them access to reliable and credible information through their mobile devices. Perceived barriers to app use echoed those relating to general availability of the Internet, with financial barriers being commonly cited. Participants discussed being unable to afford smartphones, and two said that it would be advantageous to have an app that was also available on a tablet or computer. Others were concerned about having sufficient data allowance to allow access to such an app. Four participants described motivation to complete such an app as a key factor. Some felt that they may not be motivated to use it when unwell and others anticipated difficulty finding the motivation to engage with it on a daily basis, especially if they felt it was not useful. Two participants described lacking confidence with digital resources and felt they would need an induction or support from mental health clinicians before starting to use such an app. Several participants expressed a concern that digital health technology might replace human contact. The four participants who said they would not use a self-management mental health app were from a range of age groups and ethnic backgrounds and included those from all three participant groups (collaborators, parallel universe and non-users of the Internet for mental health enquiry). Three out of four of these participants had smartphones and were using general apps. All of them stated that they felt stable with regards to their current mental health and felt that any additional emphasis on mental health or illness would be destabilising. The person who did not have a smartphone expressed concern about technology replacing the face-to-face contact with clinicians. Discussion The internet provides alternative perspectives and new information People with psychosis are using the Internet to acquire mental health related information. Most of our participants use search engines and review the top search results only, which is typical of how the general population navigates the Internet for health information [10]. The popularity of Wikipedia, and the general lack of awareness of other sources of health informationsuch as NHS Choices -could be due to its high ranking on Google searches [13]. Alternatively it could be attributable to participants' beliefs that it is the most comprehensive and up-to-date source on the Internet. There is a paucity of research on the credibility of health information on Wikipedia. One review suggests that although it has high accuracy, its readability is poor and does not meet the criteria for patient information leaflets and would benefit from further professional input [14]. Our research has shown that people affected by psychosis appreciate the accessibility of online health information and find this empowering. This is supported by existing evidence [7]. However, the results of this study suggest that while some participants find online information helpful and reassuring, for others who do not use this information collaboratively with their mental health team, it can lead to concern and affect health related decisions, including medication adherence. 
Since medication non-adherence is associated with a number of negative outcomes for people with psychosis [15], this is an important finding and warrants further research. Clinicians and patients need to communicate about the "virtual" world Most participants did not discuss their use of online mental health information with their clinicians. These participants were generally older, had a longer psychiatric history with a greater number of compulsory admissions, and were recruited from a non-EIS setting. These participants attributed this lack of discussion to the fact that their clinicians did not initiate the conversation. This may be because some mental health clinicians believe that, despite the growing body of evidence [6], patients are not using the Internet. Participants reported that they did not volunteer information about their Internet use for fear of undermining their clinician's authority. This suggests a tension between the potential independence and empowerment offered by online health information seeking and the sense of dependence and respect for authority engendered by the traditional patient-clinician relationship. This has repercussions for shared decision-making beyond digital technology and may reflect perceived inequalities of power in the therapeutic relationship [16]. The participants who shared and discussed information with their clinicians were all from an EIS service. This may reflect the model of collaborative care that has been fostered and developed in these services, as opposed to the more traditional hierarchical model of care. Previous qualitative research from EIS has highlighted the value that patients place on being involved in treatment decisions and working jointly with clinicians on their care plans [17]. Perhaps greater openness and equality in EIS therapeutic relationships have facilitated sharing and discussion regarding patients' online mental health searches. A partnership approach to online health information seeking is needed, with mental health clinicians encouraging patients, particularly those with a longer psychiatric history and from an older age group, to discuss information they have found online as part of a shared decision-making process. As other researchers have reported [7], patients want mental health clinicians to recommend websites and appropriate resources. This could provide an opportunity to initiate dialogue around patients' mental health related Internet use. In addition, professionals could play an important role in enabling patients to critically evaluate and interpret information that they read on the Internet to reduce the risk of misinformation and alleviate concerns. In other branches of medicine, E-health information is becoming increasingly embedded in the relationship with patients, with speciality wiki pages such as the Cancer Guidelines Wiki, created by the Australian Cancer Council (http://wiki.cancer.org.au/australia/Guidelines). Mental health clinicians should be advised and encouraged to follow suit. Limitations and strengths The strengths of this study include the breadth of its sample, encompassing varying ethnicities, diagnoses, and levels of education. There is little previous relevant qualitative work and none of such depth. Several limitations should be noted. Only participants who had used the Internet were recruited into the study, and speaking of past Internet use introduces recall bias, so we may have over-estimated health related Internet use.
While small samples are often sufficient to achieve theme saturation in qualitative research [18], a larger sample would increase confidence that our results reflected a full range of patients' experiences. All qualitative work is affected by the role of the interviewer. Since the interviewer has a clinical background, steps were taken to limit and critically appraise the influence of this on the results including involving nonclinicians in the research team and patients in our study design and analysis of the results. Clinical and research recommendations Our findings are consistent with other studies suggesting that mental health mobile technology would be well received [19,20]. It would require financial assistance for some patients, as lack of suitable funds was cited by a number of participants who did not have access to wireless Internet or smartphones. Mental health clinicians should consider ways to introduce and discuss online mental health enquiry in consultations with patients. This may alleviate concern about misinformation or overwhelming information and could have a positive impact on their health care decisions and outcomes. Conclusions People with psychosis use the Internet to acquire mental health related information. This can be a helpful source of supplementary information particularly for those who use it collaboratively with clinicians. When this information is not shared with their mental health team, it can affect patients' health care decisions. A partnership approach to online health-information seeking is needed, with mental health clinicians encouraging patients to discuss information they have found online as part of a shared decision-making process. Our research suggests that those with psychosis have active digital lives and that the introduction of a mental health app into services would potentially be well received. Additional file Additional file 1: Topic Guide for Internet use amongst people who use mental health services. (DOCX 33 kb) Abbreviations Apps, an application, especially as downloaded by a user to a mobile device; EIS, Early Intervention in Psychosis; NHS, National Health Service; Psychosis NOS, Psychosis Not Otherwise Specified.
Quantum Natures of Single-Mode Displaced Squeezed Vacuum State The displaced squeezed vacuum state is produced by applying the displacement operator to the squeezed vacuum state. With the help of the density operator we find the Q function, and with the Q function the mean, the variance, and the quadrature variance can be calculated. From these we determine that the system has super-Poissonian statistics, and that the squeeze parameter is directly proportional to both the mean and the variance of the photon number but inversely proportional to the quadrature variance. The squeezing occurs in the plus quadrature, with a maximum squeezing of 99.7% for r = 3. Introduction Squeezed states of light have been observed in a variety of quantum optical systems and are used to enhance measurement sensitivity in optomechanics [1] and even in biology [2]. In squeezed states of light, the noise of the electric field at certain phases falls below that of the vacuum state. This means that, when we turn on the squeezed light, we see less noise than with no light at all. This apparently paradoxical feature is a direct consequence of the quantum nature of light and cannot be explained within the classical framework [3]. Squeezed states are described in terms of single-mode states, two-mode states, and mixtures with other quantum states of light. Single-mode squeezed light is produced by a degenerate parametric amplifier, consisting of a nonlinear crystal pumped by coherent light, while two-mode squeezed light is generated by a nondegenerate subharmonic system, also consisting of a nonlinear crystal pumped by coherent light. The two-mode squeezed vacuum state is defined by applying the two-mode squeeze operator to the two-mode vacuum state. Squeezed states of light can also be combined with other quantum states of light; the displaced squeezed number state, for example, is obtained by applying the squeeze and displacement operators to a number state [4][5][6][7][8][9]. In this paper we seek to determine the quantum nature of the displaced squeezed vacuum state. We obtain it by applying the displacement operator to the squeezed vacuum, and by calculating its density operator we determine its quantum nature as described below. Displaced Squeezed Vacuum State 2.1 Single-mode squeezed vacuum state The single-mode squeezed vacuum state is produced by a degenerate parametric amplifier, the prototype of which consists of a nonlinear crystal pumped by coherent light [10]. The Hamiltonian of the degenerate parametric amplifier is given by H = (iε/2)(â†² − â²), (1) where ε is proportional to the amplitude of the pump. The state vector for single-mode light initially in a coherent state |α⟩ then evolves into a displaced squeezed state. To this end, we can write the displaced squeezed vacuum state in the form [11] |α, r⟩ = D(α)S(r)|0⟩. The Variance of Photon Number The variance of the photon number for a single-mode state can be put in anti-normally ordered form. From this, the quadrature squeezing relative to the vacuum state can be derived: setting the squeeze parameter to zero (r = 0) in Eq. (34) gives the vacuum quadrature variance, so that Eq. (35) takes the form S = 1 − Δa₊². Conclusions In this paper we determine the quantum nature of the DSVS; with the help of the Q function we calculate the mean and variance of the photon number, the quadrature variance, and the quadrature squeezing. We find that the mean is greater than the variance, which shows the super-Poissonian statistics of the system, and that squeezing occurs in the plus quadrature. The remaining results are summarized in Table 1, from which we can see that as the squeeze parameter increases the quadrature variance decreases but the quadrature squeezing (S) increases, and we obtain a maximum degree of squeezing of 99.7% for r = 3.
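As a rough numerical cross-check on the figures quoted above, the standard closed-form expressions for a displaced squeezed vacuum state D(α)S(r)|0⟩ can be evaluated directly. The sketch below uses the textbook results (mean photon number |α|² + sinh²r, plus-quadrature variance e^(−2r) with the vacuum variance normalized to 1, as the relation S = 1 − Δa₊² implies); it is an illustration of those formulas rather than the paper's own derivation, and the value of α is an arbitrary placeholder.

```python
import numpy as np

def mean_photon_number(alpha, r):
    """Mean photon number of the displaced squeezed vacuum state D(alpha)S(r)|0>."""
    return abs(alpha) ** 2 + np.sinh(r) ** 2

def plus_quadrature_variance(r):
    """Variance of the squeezed (plus) quadrature, normalized so the vacuum value is 1."""
    return np.exp(-2.0 * r)

def quadrature_squeezing(r):
    """Degree of squeezing relative to the vacuum, S = 1 - (plus-quadrature variance)."""
    return 1.0 - plus_quadrature_variance(r)

if __name__ == "__main__":
    alpha = 2.0  # arbitrary displacement amplitude, chosen only for illustration
    for r in (0.5, 1.0, 2.0, 3.0):
        nbar = mean_photon_number(alpha, r)
        var_plus = plus_quadrature_variance(r)
        s = 100.0 * quadrature_squeezing(r)
        print(f"r = {r}: nbar = {nbar:.3f}, var(a+) = {var_plus:.4f}, S = {s:.2f}%")
    # For r = 3 this gives S = 99.75%, consistent with the ~99.7% quoted in the text,
    # and the quadrature variance falls as r grows while the mean photon number rises.
```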
Evaluation of recreational health risk in coastal waters based on enterococcus densities and bathing patterns. We constructed a simulation model to compute the incidences of highly credible gastrointestinal illness (HCGI) in recreational bathers at two intermittently contaminated beaches of Orange County, California. Assumptions regarding spatial and temporal bathing patterns were used to determine exposure levels over a 31-month study period. Illness rates were calculated by applying previously reported relationships between enterococcus density and HCGI risk to the exposure data. Peak enterococcus concentrations occurred in late winter and early spring, but model results showed that most HCGI cases occurred during summer, attributable to elevated number of exposures. Approximately 99% of the 95,010 illness cases occurred when beaches were open. Model runs were insensitive to 0-10% swimming activity assumed during beach closure days. Comparable illness rates resulted under clustered and uniform bather distribution scenarios. HCGI attack rates were within federal guidelines of tolerable risk when averaged over the study period. However, tolerable risk thresholds were exceeded for 27 total days and periods of at least 6 consecutive days. Illness estimates were sensitive to the functional form and magnitude of the enterococcus density-HCGI relationships. The results of this study contribute to an understanding of recreational health risk in coastal waters. Southern California's beaches attract 100 million visitors annually. To protect swimmers from exposure to fecal contamination, the microbiological quality of coastal waters is extensively monitored by state and local agencies (1). Rapid population growth and urban development have resulted in regional domestic sewage and urban runoff problems, and beach contamination has become the focus of public safety concern. Elevated fecal bacterial indicator levels forced the Orange County Health Care Agency to close Huntington Beach, California, for much of the summer of 1999 (2). Large-scale investigations have been conducted to identify the source of contamination (3), but not to quantify an incidence rate of illness attributable to bathing there. Exposure to marine recreational water of poor microbiologic quality has been linked to multiple adverse health outcomes including infections of the eyes, ears, skin, and gastroenteritis (4). The results of prospective studies (5), however, suggest that, of these outcomes, only gastrointestinal symptoms are both swimming associated and pollution related. Epidemiologic investigations of illness in marine recreational bathers have addressed gastroenteritis from exposure to sewage contamination (5,6) and, more recently, storm drain runoff (7). Surf zone bacterial contamination at Huntington Beach may be due to sewage pollution, non-point source storm drain runoff, or a combination of these inputs. This region receives a mixture of primary and secondary sewage from the Orange County Sanitation District (OCSD) daily. The volume of sewage discharge fluctuates with seasonal water usage. Storm drain runoff reaches Southern California's coastal waters in high volumes following rainfall events during winter months and to a lesser extent during dry weather conditions in summer (8,9). The relative contribution of sewage effluent and non-point source runoff in driving surf zone bacterial fluctuations is currently under study by independent researchers (10). 
Superimposed upon seasonal water quality trends are spatial and temporal variations in beach usage for marine water-contact recreation, which have important implications for microbial risk assessment. If recreational water contact occurs at times during which the water is safe, there may be a low degree of health risk to bathers. But if peak use of beaches occurs at locations and times during which unsafe levels of contaminants are present, then aggregate health risk will be elevated. Of several bacterial indicators commonly used for microbial risk assessment (e.g., total coliform, fecal coliform, and enterococcus), the enterococcus density in seawater is believed to be the best single measure of its quality relative to the risk of swimming-associated, pollutionrelated infectious disease (11)(12)(13). For example, enterococci show higher correlation with swimming-associated gastroenteritis in wastewater-influenced water bodies than fecal coliform and total coliform (13). Changes made to California's monitoring standards (14) in 1998 required the adoption of enterococcus as an indicator of marine recreational water safety to supplement existing standards for fecal coliform and total coliform. In the absence of large-scale prospective health risk studies, the objective of this study was to create a model to compute a historical incidence rate of gastroenteritis in swimmers based on enterococcus densities in Huntington Beach and neighboring Newport Beach. Three assumptions were tested in the model regarding a) the relationship between enterococcus density and gastrointestinal illness risk, b) bathing activity levels at sampling locations, and c) the fraction of beachgoers who bathed during beach closure days. Materials and Methods Study site. We studied a contiguous stretch of coastline (8.5 miles) in Huntington Beach and Newport Beach, California ( Figure 1). Approximately 5.5 million instances of swimming and surfing occur there each year (15). The Santa Ana River (SAR), a major freshwater input draining a 2,850-square-mile watershed, bifurcates the beaches. Approximately 243 million gallons per day of treated sewage effluent are discharged into the ocean by the OCSD through an outfall pipe located 4 miles offshore from the mouth of the SAR (2). Data sources. Historical enterococcus density data were collected by the OCSD approximately three times per week at each of 13 surf zone monitoring stations located at 1,000-foot intervals along the beach (Figure 1). A total of 503 data points were available for the 31month study period between 1 June 1998 and 31 December 2000. Missing values were treated in the model by linear interpolation of surrounding known values. In cases where sample counts were quantified as a range (i.e., above or below detection limits), the lower point of the range was used to provide conservative estimates of contamination. Aggregate beach attendance was provided through local lifeguard agencies and fire departments and was available for > 99% of days studied. To estimate the fraction of beachgoers who bathe at different times of the year at local beaches, we used a report of the seasonal amount of marine water contact recreation activity as a fraction of beach attendance. During the months of October-March, approximately 18% of total beachgoers bathe, whereas the summertime fraction (April-September) is 27% (15). The times and locations of beach closures were provided by the Orange County Health Care Agency. 
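A minimal sketch of the data-cleaning steps just described (taking the lower bound of counts reported as a range and linearly interpolating missing sampling days) is given below. The function and the example values are hypothetical placeholders built with pandas, not the authors' actual processing code.

```python
import pandas as pd

def clean_density_series(raw_counts):
    """
    Prepare one station's enterococcus series as described in the text:
    counts reported only as a range are replaced by the lower bound of that
    range (a conservative choice), and missing values are filled by linear
    interpolation between the surrounding known values.
    """
    series = pd.Series(raw_counts, dtype="object")
    # A (lower, upper) tuple stands in for a count quantified only as a range.
    series = series.apply(lambda v: v[0] if isinstance(v, tuple) else v).astype(float)
    return series.interpolate(method="linear", limit_direction="both")

# Hypothetical daily counts in CFU/100 mL; None marks days without a sample.
raw_counts = [12.0, None, None, 35.0, (10.0, 100.0), None, 60.0]
print(clean_density_series(raw_counts).round(1).tolist())
```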
Of the sampling locations, beaches at stations 9N and 6N were closed the most frequently, with 55 total beach closure days each during the study period. Stations 3N, SAR, and 3S were each closed for 13 total days. Station 6S was closed for 10 total days. None of the beaches south of station 6S in the study area were closed during the study period. Enterococcus-HCGI relationships. We applied two relationships between enterococcus density and highly credible gastrointestinal illness (HCGI) to determine risk to the individual bather from exposure to sewage (5) and storm drain runoff (7), respectively. For consistency with the definition of exposure used in the original epidemiologic investigations, individuals engaged in water contact activities leading to likely immersion of the head, regardless of duration, were counted as bathers in the model. HCGI was defined as symptoms of vomiting, diarrhea, nausea, or stomachache, accompanied by a fever (5). The primary enterococcus density-HCGI relationship used in the model was drawn from prospective health risk studies of sewage exposure conducted by Cabelli et al. (5), on which current federal bacterial water quality guidelines are based (13). A relatively strong (r 2 = 0.74) statistical association between enterococcus density and gastrointestinal illness risk was found across several years and at several sites. The dose-response relationship was expressed as follows: where X is the mean enterococcus density, and Y is the rate difference of gastrointestinal illness in swimmers versus nonswimmers. Application of this equation yielded a 1.9% attack rate of HCGI at an enterococcus density of 35 colony forming units (CFU) per 100 mL sewage-polluted seawater; the detectable increase in the attack rate for swimmers versus nonswimmers was 12 CFU/100 mL (5). In the model created for this study, upper and lower confidence intervals (CIs) for the dose-response curve of Cabelli et al. (5) were reproduced at the 95% level by fitting the original reported data to a piecewise linear curve, which diverged from the mean response curve line at both low and high enterococcus densities. The second relationship used in the model was drawn from a study of exposure to storm drain runoff conducted by Haile et al. (7). Haile et al. used the definition by Cabelli et al. (5) of HCGI as "HCGI1" and reported an elevated gastrointestinal illness risk in bathers near storm drain outlets only at enterococcus concentrations exceeding 104 CFU/100 mL. Because of the lack of a clear dose-response curve, the enterococcus-HCGI risk relationship of Haile et al. was represented in the model as a step, or threshold, function with a relative risk of 1.0 for exposure to enterococcus densities ≤ 104 CFU/100 mL, and a relative risk of 1.31 for bathing in waters above that count. Model architecture. The model was constructed using Vensim 4.0 software (Ventana Systems, Harvard, MA). The model made use of daily historical estimates of aggregate beach attendance for two beaches: Huntington Beach, California, (comprising Huntington City Beach and Huntington State Beach), and Newport Beach, California. Figure 2 shows the flow of information within the model. For each day in the study period, an HCGI risk curve was applied to historical enterococcus counts at each sampling location to estimate elevated risk associated with bathing at each location. 
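The two exposure-response relationships described above can be written as simple functions of enterococcus density. The sketch below is illustrative only: the log-linear coefficients are an assumption taken from the 1986 EPA marine criteria regression underlying Cabelli's relationship (they reproduce the 1.9% attack rate at 35 CFU/100 mL quoted above, since the equation itself is not reproduced in this copy of the text), and the step form of Haile's relationship follows the description given earlier in this section.

```python
import math

def cabelli_attack_rate(density_cfu_per_100ml):
    """
    Swimming-associated HCGI attack rate (as a fraction of bathers) from a
    log-linear dose-response in mean enterococcus density. The coefficients
    are assumed from the 1986 EPA marine criteria regression
    (illnesses per 1000 swimmers = 0.20 + 12.17 * log10(density)).
    """
    density = max(density_cfu_per_100ml, 1.0)  # guard the logarithm at very low counts
    per_1000_swimmers = 0.20 + 12.17 * math.log10(density)
    return max(per_1000_swimmers, 0.0) / 1000.0

def haile_relative_risk(density_cfu_per_100ml):
    """Step-function relative risk: 1.0 at or below 104 CFU/100 mL, 1.31 above."""
    return 1.31 if density_cfu_per_100ml > 104 else 1.0

print(f"Attack rate at 35 CFU/100 mL:  {cabelli_attack_rate(35):.2%}")  # about 1.9%
print(f"Relative risk at 150 CFU/100 mL: {haile_relative_risk(150)}")   # 1.31
```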
To estimate the number of bathers at each beach, historical beach attendance data for each day were combined with the seasonal fraction of beachgoers who bathe. Beach-specific aggregate bather counts were then combined with spatial distribution of bathers by season (winter or summer) to yield the total number of bathers by sampling location. Estimated bather counts by location were then multiplied by elevated HCGI risk associated with swimming to generate cumulative HCGI cases over the study period. Sensitivity analyses. Three sensitivity analyses were performed to examine the degree to which results reflect assumptions made about model input parameters. In the first sensitivity analysis, the enterococcus density-HCGI risk relationship for storm drain runoff exposure reported by Haile et al. (7) was substituted for the relationship established for sewage exposure reported by Cabelli et al. (5) (hereafter referred to as "Cabelli's and Haile's relationship," respectively). In the absence of detailed historical data on the spatial patterns of bathing along each beach, the second one-way sensitivity analysis examined the impact of changed assumptions regarding bather distribution on illness estimates over the study period. Illness rates resulting from a uniform bather distribution scenario were compared with those under a clustered bather distribution scenario described in Table 1. In the clustered scenario, water contact activity is concentrated at beaches with coastal amenities such as parking lots, piers, jetties, and the mouth of the SAR. The clustered distribution of bathers, used as a default assumption in the model, is consistent with the assertion that beachgoers in this region prefer beaches with coastal amenities (16) and is substantiated by observations made for this study. Under a uniform distribution scenario, bathers were assumed to be equally spread between sampling locations along the 8.5-mile stretch of coastline over the study period. For the third sensitivity analysis, no bathers were assumed to have been in the water during beach closure days, leading to a level of zero risk of contracting swimmingassociated HCGI. The impact of varying this estimate to 10% of bathers in the water despite beach closures was examined in terms of total expected HCGI cases over the study period. This assumption was based on communication with a local public health official who revealed that beach closures may not completely prevent swimming on beach closure days (17). Beach usage and water quality. Combined beach attendance is shown in Figure 3. Approximately 42,520,000 people attended the beaches over the 31-month study period. Seasonal variations in beach attendance were pronounced, as well as increases in beach attendance during weekends and holidays. Many summer days had more than 180,000 beach visits, compared with several winter days with less than 3,000 total beachgoers. Figure 4A shows the mean enterococcus levels for each sampling location over the study period. Highest enterococcus concentrations were found near the mouth of the SAR, where the average enterococcus density exceeded 100 CFU/100 mL. However, the SAR station was not included in the risk analysis because few or no bathers were observed at this location. Enterococcus levels were generally higher at sampling locations north of the SAR in Huntington Beach than at stations south of the SAR in Newport Beach. A time series of enterococcus levels for all stations over the study period is shown in Figure 4B. 
A 31-day centered moving average for the data was superimposed on the time series. Peak enterococcus concentrations were frequently detected during late winter and early spring. For example, the highest daily average sample counts between February 2000 and April 2000 exceeded 350 CFU/100 mL. Abnormally high enterococcus concentrations were found during summer 1999, causing beach closures for most of the summer (2). A pronounced variation between years in mean enterococcus density for all stations combined was noted, with an average of 19 CFU/100 mL in 1999 and 30 CFU/100 mL in 2000. Risk analysis. Application of Cabelli's relationship to total number of exposures yielded 95,010 cumulative HCGI cases over the study period (Figure 5A). The total number of HCGI cases ranged from 47,012 to 129,853 when Cabelli's lower and upper 95% CIs were used, respectively. Figure 5A also shows that the use of Cabelli's relationship leads to gradual, low-frequency illness trends over time. Substitution of Haile's relationship for Cabelli's relationship yielded far fewer illness cases. A total of 2,056 HCGI cases occurred during the study period, approximately 98% fewer than total illness cases using Cabelli's relationship (Figure 5A). Figure 5B compares the number of illness cases per day using two different relationships. Application of Cabelli's relationship resulted in peak attack rates of approximately 600 cases per day in summer months, with the maximum number of HCGI cases at 665. Roughly 75% of total days during the months of May through August have more than 100 individuals contracting HCGI. In contrast, < 0.3% of days in the months of November through February have 100 individuals contracting HCGI. Use of Haile's relationship led to an average of only two HCGI cases per day over the study period. Figure 6 illustrates that HCGI attack rates are highly influenced by the enterococcus-HCGI risk relationships applied to the exposure data. The average risk for contracting HCGI over the study period was 0.89% when Cabelli's relationship was applied to enterococcus densities at each sampling location. Use of Cabelli's upper and lower 95% CIs yielded a 1.2 and 0.4% illness rate, respectively. The HCGI attack rate resulting from application of Haile's relationship was 0.2%. Effect of spatial distribution on illness estimates. Figure 7 shows a comparison of HCGI rates under the clustered and uniform bather distribution scenarios. Approximately 95,010 HCGI cases resulted under the clustered scenario, compared with 90,000 illness cases in the uniform distribution scenario. The two scenarios yielded broadly comparable results, suggesting that spatial location of bathers did not make a substantial difference in terms of the estimated aggregate illness rates. However, clustered patterns of bathing may heighten exposure to elevated enterococcus levels by up to 15% when particular beaches are examined in isolation (e.g., Stations 3S-29S in Newport Beach, data not shown). Bathing activity during beach closures. Adjustment of bathing activity during beach closures from 0 to 10% accounted for only a 0.1% increase in the total number of HCGI cases over the study period. Discussion The results suggest that the majority of HCGI cases occur in the summer months and, to a lesser degree, in the late spring, regardless of bather distribution.
This temporal illness pattern reflects a large number of exposed individuals in the water during summer months and holds despite the fact that late winter and early spring typically exhibit the poorest water quality. Based on empirical analysis, aggregate beach usage patterns predispose individuals to only a 5.6% increase in risk over exposure levels had spatial considerations been ignored. Nonetheless, illness rates are substantially elevated at particular beaches when bathing activity is concentrated at contaminated locations. The vast majority of illness cases (99%) occur when these beaches are open. A lack of sensitivity of model illness rate estimates to bathing activity during beach closures is attributable to the low number of beach closure days at most sampling stations. For example, there were no beach closures at 8 of 13 locations. Thus, reduced bathing activity during beach closure periods only minimally lessens the number of potential illness cases. Although the computed HCGI attack rate is within the 1.9% level of acceptable risk under the U.S. Environmental Protection Agency's marine water contact guidelines (13) for the entire study period, illness rates exceed the threshold levels of acceptable risk for 2.9% of total days (Figure 8). The single sample beach closure standard is currently set at 104 CFU/100 mL, whereas the 1.9% acceptable risk threshold is reached at an enterococcus density of 35 CFU/100 mL. The acceptable risk threshold can also be crossed when monthly standards are enforced, mandating no more than 20% of samples to exceed a 30-day log geometric mean enterococcus density of 35 CFU/100 mL. The average enterococcus density at all stations was 30 CFU for the year 2000. Application of Cabelli's relationship suggests 27 total days and periods of up to 6 consecutive days during which the tolerable risk threshold was crossed (Figure 8). Application of Cabelli's upper CI yielded periods of up to 20 consecutive days with risk levels considered unacceptable under federal guidelines (data not shown). Relocation of amenities away from beaches with persistent water quality problems has been suggested as a means to dissuade potential bathers from swimming in contaminated waters (16). However, the lack of sensitivity of illness risk to bather distribution in this study indicated that bather relocation to less contaminated beaches may not substantially reduce public health risk in the long term. Addition of storm drain filters or implementation of other pollution abatement measures at contaminated beaches may reduce pathogen levels. The implementation of more stringent marine water contact standards without water quality improvement would result in more frequent beach closures. Beach closures prevent illness, but also deprive public use and enjoyment of the beach, which is contradictory to the goals of the Clean Water Act (18). An increase in the frequency of beach closures might also contribute to the fraction of bathers who enter the water despite beach closure warnings. A number of caveats apply to the interpretation of our model results. First, the environmental conditions under which the original health risk studies were conducted may be of limited applicability. Cabelli et al. (5) measured illness rates in East Coast bathers exposed to sewage-based contamination in dry weather. Haile et al.
(7) assessed exposure to storm drain runoff under exclusively dry weather conditions. Neither offers an exact match to the study period and region. The defined susceptible population upon which illness rates were generated for this model might be more inclusive than the defined susceptible populations in the original health risk studies. Cabelli et al. (5) and Haile et al. (7) both excluded as potential subjects bathers who swam in the ocean in the weeks leading up to their trials in order to target single-exposure, water-related illness risk. This model drew bather numbers as a fraction of total beach attendance, including frequent beach users who would have been excluded as subjects from those studies. Consequently, the illness rate computations may overestimate or underestimate the true number of HCGI cases, depending upon the influence of repeat exposures and other susceptibility factors in frequent beachgoers. Flat rates of water contact recreation for summer and winter months used in this model are based on limited published estimates of seasonal water use rather than upon objective measurements. The amount of marine contact recreation activity has first-order impacts on illness rate calculations. Further elucidation of the functional form and magnitude of the relationship between exposure to enterococcus and HCGI risk also has first-order impacts on illness rate estimates and the sensitivity of those estimates to other factors. A more detailed approximation of both point estimates and the functional form of the response relationship can be used to determine whether aggregate illness rates are likely to increase in punctuated or gradual manners. For example, the discontinuous functional form of Haile's relationship is represented as a threshold, making it highly sensitive to noise and changes in other parameter values. The assumption of a relative HCGI risk of 1 at enterococcus concentrations ≤ 104 CFU/100 mL is an issue raised in the original study (7). Non-water-related risk factors for HCGI, including food consumption, household illness, and medical history of stomach problems, may not have been adequately controlled for when the original concentrationresponse relationships were generated (19). Conservative estimates of contamination are presented in this study (based on the use of lower limits of a range where enterococcus counts were provided as such) to reduce the probability of illness rate overestimation. Each enterococcus count is applied to a uniform risk level for an entire day in the model. Illness rate estimates may be affected by fluctuations of indicator level throughout the day (20)(21)(22). Furthermore, a reanalysis of the results of Cabelli et al. (5) by other researchers suggested the possibility of underestimating true HCGI risk by 14-57% (21). Despite asserted weaknesses in the methodology and data analysis of Cabelli et al. (23), federal marine water contact recreation guidelines remain based on the results of their studies because of the strength and power of statistical association found between indicator density and health outcome over many years at multiple sites. The presumed etiologic agent of HCGI is frequently a suspected Norwalk-like virus or human rotavirus (5,24,25). A one-dimensional functional relationship between enterococcus density and HCGI risk only indirectly accounts for nonbacterial contributions to water-related illness. Indicator bacteria concentrations may exhibit low correlation to levels of viruses and protozoa in coastal waters (7,26). 
Using protozoa and viruses, dynamic population-level models of infectious disease transmission have been developed for drinking water as well as selected recreational waters (27)(28)(29)(30)(31)(32). However, a lack of sufficient timeseries data on specific pathogens in our study site precludes the use of these organisms for water-based risk assessment at present. Therefore, in the absence of adequate virus and protozoa data for coastal waters, enterococci are believed to be the best available predictors of adverse health outcomes with a viral-based etiology (1,25). During model construction, immune response variability and secondary transmission of illness were ignored because of lack of information on infection status. Illness transmission was treated as a stationary process where the probability of individual infection was multiplied by the number of exposed individuals to predict disease incidence. The model does not take bather shedding of pathogens into account. The exposure of an individual to microbial hazards is assumed to be independent of the infection status of other individuals in the population, and the overall magnitude of exposure is assumed to be independent of the total number of infected individuals. Therefore, the results of this study present the most conservative estimates of recreational illness at the study site. Conclusion This study combined spatial and temporal patterns of marine recreational water usage with historical microbial indicator levels to illustrate the public health implications of recreational exposure to contaminated waters. Results indicated that illness rates were highest during the summer months, despite peak concentrations of fecal enterococcus frequently detected during late winter and early spring. Spatial distribution of bathers along the beach had minimal effect on aggregate illness rates, but may account for up to a 15% increase at selected beaches. Illness rates were highly sensitive to the relationship between enterococcus density and HCGI risk. The daily risk level fluctuated throughout the study period, with 2.9% of total days in excess of federal recreational risk guidelines. Further characterization of the enterococcus density-HCGI risk relationship will provide a better understanding of these recreational health risks.
Effect of Virtual Reality on Pediatric Pain and Fear During Procedures Involving Needles: Systematic Review and Meta-analysis Background: Virtual reality (VR) is used as a distraction measure during painful clinical procedures associated with the use of needles. These procedures include vaccinations, blood draws, or the administration of medications, which can cause children to feel increased levels of pain and fear. Objective: The objective of this study was to collect and analyze the current evidence regarding the effectiveness of VR as a tool to distract children from pain and fear during needle procedures as compared to that of standard techniques. Methods: A systematic review and meta-analysis was performed. We included randomized clinical trials (RCTs) or quasi-RCTs with participants younger than 21 years who underwent needle procedures in which the main distraction measure used was VR and where the main outcome measure was pain. The databases searched included the PubMed, Web of Science, Scopus, PsycINFO, CINAHL, and Cochrane libraries. In this systematic review, the studies were analyzed by applying the Critical Appraisal Skills Program guide in Spanish and the Jadad scale. In the meta-analysis, the effect size of the studies was analyzed based on the results for pain and fear in children. Results: From 665 unique search results, 21 studies were included in this systematic review, most of which reported low methodological quality. The study sample cohorts ranged from a minimum of 15 participants to a maximum of 220 participants. Ten studies were included in the meta-analysis. The global effect of using VR as a distraction measure was a significant reduction in pain (inverse variance [IV] –2.37, 95% CI –3.20 to –1.54; Z =5.58; P <.001) and fear (IV –1.26, 95% CI –1.89 to –0.63; Z =3.92; P <.001) in children in the experimental groups. Conclusions: The quality of the studies was mostly low. The main limitations were the impossibility of blinding the participants and health care personnel to the VR intervention. Nonetheless, the use of VR as a distraction measure was effective in reducing pain and fear in children during procedures involving needles. Background The main problems experienced in pediatric care are pain and fear. This is especially true for procedures associated with the use of needles such as vaccinations, blood draws, or the administration of medications [1,2]. This causes difficulties in the administration of health care and can result in parental dissatisfaction [3]. The International Association for the Study of Pain defines pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage or described in terms of such damage" [4]. Pain, therefore, is a complex experience that involves sensory, cognitive, behavioral, and psychological factors [5]. In turn, fear is an immediate alarm reaction to danger, which triggers an escape behavior and an intense physiological response [6]. The pain and fear that children experience when facing needle procedures is a concern for health care professionals. Therefore, various techniques are being studied to help reduce its impact. Indeed, the administration of drugs is not always indicated to reduce pain and fear in these procedures [7]. Rather, the use of distractions during painful procedures appears to be one of the most effective ways to decrease pain and distress in children [8]. 
For example, music or toys have already been effectively used as distraction measures to help reduce pediatric pain. Nonetheless, virtual reality (VR) is a novel technique that has been proven to be more effective than traditional methods [3]. VR is a computer technology that creates a 3D-simulated artificial environment [5]. It usually requires wearing special glasses that cover a wide field of vision and which include motion tracking systems at the eye level [9]. These glasses can be connected to a computer or a telephone [5]. VR makes it easier to divert attention away from the painful procedure so that children will have a slower response to pain signals by counteracting them with an experience of pleasant stimuli [10,11]. Several studies have evaluated the use of VR as a distraction measure during painful procedures such as venipuncture [3,[12][13][14][15], tooth extraction [16][17][18][19], or burns treatment [20][21][22][23][24]. However, these studies have certain limitations such as the use of small sample sizes or poor methodological quality. Comparing the findings of these studies is difficult because the works published to date have evaluated a wide breadth of invasive medical care types. Furthermore, we were able to identify only 2 systematic reviews and 1 meta-analysis that analyzed the use of VR in children. However, these studies had evaluated several medical procedures, including dental procedures, burns treatments, oncological care, or physical therapy sessions [3,25]. The variation in the procedural conditions using VR implies a lack of evidence to support its use in needle procedures. Thus, highlighting these issues, this systematic review and meta-analysis focused on the effect of VR on pain and fear during needle procedures in children. Objectives The general objective of this study was to collect and analyze the current evidence available regarding the effectiveness of VR as a tool to distract pediatric patients from potential pain and fear while undergoing needle procedures compared to the distractions by standard techniques. Regarding the specific objectives, our first aim was to analyze the studies included in the systematic review to assess their methodological quality. Second, our objective was to analyze the effect of the randomized controlled trials (RCTs) included in our meta-analysis. Research Question Is the use of VR as a distraction measure effective for reducing the perception of pain in children while performing needle procedures? Study Design This is a systematic review and meta-analysis of studies that evaluated the effect of VR as the main distraction measure to reduce the perception of pain in children undergoing needle procedures. Inclusion Criteria Studies were included in this paper based on the following criteria: (1) the participants were younger than 21 years; (2) studies where the use of VR was the primary distraction means used during needle procedures; (3) studies, including pilot studies, with an RCT or quasi-RCT methodological design; and (4) studies where the main outcome measure was pain. Data Sources For this study, we consulted the PubMed, Web of Science, Scopus, PsycINFO, CINAHL, and Cochrane databases. The literature search was conducted between January 2020 and June 2021. Two independent researchers comprehensively reviewed the results obtained in each of the studies and subsequently compared the selected papers. 
Research Strategy The medical subject heading keyword terms used in the search were reality, virtual, virtual reality, virtual reality headset, virtual reality exposure therapy, child*, pediatric, adolescent, intervention, program*, pain, ache, procedural, acute pain, pain perception, fear, and fears. All these terms were combined with the Boolean AND and OR functions and no filters were applied to limit the search. Search strategies were created specifically for each database by using the medical subject heading terms described above (Multimedia Appendix 1). No publication date or language restrictions were applied. Study Selection Process First, we evaluated the scientific literature to identify studies that met the inclusion criteria. To do this, we read the title and abstract from each of the identified papers. Two of our authors (RCG and MLV) independently performed an initial screening by reading the study titles and abstracts. After this process, the researchers discussed their results based on the predetermined inclusion and exclusion criteria. There was a 6% discrepancy in the opinions of these authors, which was resolved by further discussion to reach a consensus. Data Extraction Once the full-text papers were selected, 2 authors (RCG and CRZ) analyzed the studies based on their general characteristics and methodological quality. In this process, these researchers jointly extracted the relevant information from these publications. This information was transferred to 2 tables. First, the general characteristics of the studies were included in Multimedia Appendix 2. Subsequently, the methodological quality of all the studies was analyzed based on the Critical Appraisal Skills Program guide in Spanish (CASPe) scale, and this information was completed by performing a quantitative evaluation using the Jadad scale; these data are shown in Multimedia Appendix 3. Protocol and Registration This systematic review was registered with the Open Science Framework (Osf.io/cd8nr) in October 2021. Data List The general characteristics (Multimedia Appendix 2) of the studies provide information, including the following elements: author, study year and country, overall sample size, number of participants in the control and intervention groups, participant age, study type, variables and measurement instruments used, and finally, positive (P<.05), negative (P>.05), or inconclusive (±) results. Multimedia Appendix 3 provides an assessment of the methodological quality of the studies that we included in this review according to the CASPe [26]. This tool organizes data about each study into 3 sections: validity, results, and applicability. We used the Jadad scale [27], which assesses research quality on a scale of 0 to 5 points according to the responses to a series of questions, to complete this information. Scores below 3 points suggested that little methodological rigor had been applied during the study in question. This allowed us to objectively assess the following parameters: random sequence generation, allocation concealment, blinding of participants and personnel, and blinding to the outcome assessment. To guarantee the quality of this meta-analysis, we followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) statement guidelines [28] (Tables S4 and S5 of Multimedia Appendices 4 and 5, respectively). 
Risk of Bias Assessment The Cochrane Collaboration Risk of Bias Tool [29] was used to assess the risk of bias in the studies included in the meta-analysis in 5 categories: selection bias, performance bias, detection bias, attrition bias, and reporting bias. For selection bias, which refers to the introduction of differences between groups at baseline, random sequence generation and allocation concealment were judged. Performance bias was analyzed based on blinding of the participants and personnel. Detection bias referred to blinding of the outcome assessors. Attrition bias included different rates of withdrawals between groups and was judged according to the proportion of incomplete outcome data. Finally, reporting bias described selective reporting. The Cochrane Collaboration Handbook for Systematic Reviews for Interventions was used to analyze the risk of bias from studies not included in the meta-analysis. This analysis included selection bias when randomization was analyzed, performance bias when blinding between participants and personnel was tested, detection bias when blinding between participants and outcome assessors was tested, attrition bias where dropouts were analyzed, and reporting bias where they were analyzed, and the outcomes were selectively reported [29]. Analysis of the Meta-analysis Data Employing the random effects model in Review Manager software (RevMan v.5.2; Cochrane Collaboration), 2 meta-analyses were carried out to examine the overall effect of the intervention on pain and fear in children. We used this model because we wanted to limit overestimation of the effect size. The studies included had an RCT design and contained complete statistical information; the effects were expressed as mean differences with a 95% CI. The heterogeneity of the studies was assessed by calculating the I 2 statistic, and the variance between the studies was examined by calculating Tau 2 . When the significance level was set at .05, the heterogeneity of the studies we included was high for both these variables (94% and 96%, respectively; P<.01). Lastly, to increase the precision of the effect size estimator, the effect sizes proposed by Cohen [30] were calculated (small effect, d=0.20; medium effect, d=0.50; and large effect, d=0.80). Search Results As shown in Figure 1, our initial search returned a total of 665 papers. After eliminating 211 duplicates, 2 researchers (RCG and MLV) initially screened the 454 studies by reading their titles and abstracts. There was a 6% discrepancy in their opinions, which was resolved by reaching a consensus based on the eligibility criteria of the papers. This selection further reduced the sample to 96 manuscripts. Reading the full texts of these papers revealed that only 46 papers had focused on the use of VR to reduce pain during procedures involving needles, some of which had also addressed fear in these patients. Lastly, 3 of our authors (RCG, MLV, and CRZ) critically read all these papers and excluded another 25 papers because they did not meet the inclusion criteria, as described in Figure 1. Thus, 21 studies were finally included in this systematic review, and only 10 were eligible for inclusion in the meta-analysis [31] ( Figure 1). Figure 1. Flowchart showing the screening and selection process for the papers included in this systematic review and meta-analysis. Created using the guidelines on Page et al [31]. 
WOS: Web of Science. Risk of Bias The Cochrane Collaboration Risk of Bias Tool [29] was used by 2 reviewers to assess the risk of bias of the 10 studies included in the meta-analysis. Based on this tool, only 1 of the studies was at high risk of bias, 8 were at unclear risk of bias, and 1 was at low risk of bias (Figure 2). Based on the Cochrane Collaboration criteria for different types of bias, we analyzed the 11 studies not included in the meta-analysis. As shown in Multimedia Appendix 3, the biases related to blinding, both of the participants and the personnel as well as of the outcome assessment, reached the highest levels in 82% (9/11) of the studies. Of the 11 studies, most had a moderate risk of bias (5/11, 46%); 3 (27%) studies were identified as having a high risk of bias and 2 (18%) studies had a low risk of bias. One study (9%) was classified as having a low risk of bias, but no information on blinding could be obtained. Effects of VR on the Perception of Pain The studies were heterogeneous in both of the measured outcomes (I²=89%-92%). We were able to analyze the effect size for pain in 10 of the 21 studies (Figure 3). The main results showed statistically significant differences in favor of the experimental group in several studies, including that by Wolitzky et al [52]. Chen et al [40] also found a significant reduction in pain in the intervention group (d=0.37; IV -1.00, 95% CI -1.90 to -0.10). As shown in Figure 3, the global effect of using VR as a distraction measure was a significant reduction in pain in children in the experimental groups (IV -2.37, 95% CI -3.20 to -1.54; Z=5.58; P<.001). Figure 3. A random-effects forest plot of the association between pain and study group (control vs virtual reality) [34,36,40,41,43,45,47,51,52]. b: Wong-Baker Faces Pain Rating Scale; Buzzy: a device that applies local cold and vibration at the injection site; DC: distraction card; IV: inverse variance; VR: virtual reality. Effects of VR on Fear We were only able to analyze the fear variable in 5 of the 21 studies. The use of VR produced a statistically significant reduction in fear in the experimental groups in the study by Chen et al [40] (d=0.35; IV -0.46, 95% CI -0.90 to -0.02) and a large reduction in the Koç Özkan and Polat study [47] (d=0.17; IV -2.36, 95% CI -2.74 to -1.98). Likewise, fear was significantly reduced in the studies by Erdogan and Aytekin Ozdemir [43] in the VR versus control group (d=1.17; IV -1.30, 95% CI -1.82 to -0.78) and in the intervention by Piskorz et al [34] with active VR (d=1.36; IV -2.60, 95% CI -3.76 to -1.44). As shown in Figure 4, the global effect of using VR as a distraction measure was a significant reduction in the perception of fear in children in the experimental groups (IV -1.26, 95% CI -1.89 to -0.63; Z=3.92; P<.001). Figure 4. A random-effects forest plot of the association between fear and study group (control vs virtual reality) [34,40,43,47,51]. Buzzy: a device that applies local cold and vibration at the injection site; DC: distraction card; IV: inverse variance; VR: virtual reality.
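For readers who want to see how pooled inverse-variance estimates, Tau², and I² such as those reported above are obtained, the following is a minimal Python sketch of DerSimonian-Laird random-effects pooling of mean differences. It is illustrative only: the authors performed the analysis in RevMan 5.2, and the per-study mean differences and standard errors used here are hypothetical rather than values extracted from the included trials.

```python
import numpy as np

def random_effects_pool(md, se):
    """DerSimonian-Laird random-effects pooling of per-study mean differences.

    md: per-study mean differences (e.g., VR minus control pain scores)
    se: corresponding standard errors
    Returns the pooled mean difference, its 95% CI, Z value, Tau^2 and I^2 (%).
    """
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se ** 2                                   # fixed-effect (inverse-variance) weights
    k = len(md)
    mu_fe = np.sum(w * md) / np.sum(w)                  # fixed-effect pooled estimate
    q = np.sum(w * (md - mu_fe) ** 2)                   # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    w_re = 1.0 / (se ** 2 + tau2)                       # random-effects weights
    mu = np.sum(w_re * md) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    ci = (mu - 1.96 * se_mu, mu + 1.96 * se_mu)
    return mu, ci, mu / se_mu, tau2, i2

# Hypothetical mean differences and standard errors, for illustration only
md = [-2.0, -1.0, -3.5, -2.4, -1.6]
se = [0.45, 0.46, 0.60, 0.50, 0.55]
mu, ci, z, tau2, i2 = random_effects_pool(md, se)
print(f"pooled MD={mu:.2f}, 95% CI=({ci[0]:.2f} to {ci[1]:.2f}), "
      f"Z={abs(z):.2f}, Tau^2={tau2:.2f}, I^2={i2:.0f}%")
```

The random-effects weights include the between-study variance, so no single large trial dominates the pooled estimate when heterogeneity is high, which is why this model was preferred over a fixed-effect analysis given the I² values observed here.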
Discussion To the best of our knowledge, this is the first systematic review with a meta-analysis designed to examine the effectiveness of the use of VR as a distraction measure to reduce pain and fear in the pediatric population during procedures involving needles. Based on the high effect sizes that we found, our results suggest that VR distraction is possibly more effective than the habitual routine or other distractions used during needle procedures to reduce the perception of pain and fear felt by children. It is difficult to compare these results with those of other studies because most of them included different medical processes or did not analyze the effect on the children's fear. However, other meta-analyses found similar results, indicating that the effects of VR are beneficial in reducing fear during medical processes involving pain, especially in children [54]. However, these comparisons must be analyzed with caution because neither the studies included nor their participants were homogeneous in terms of age or characteristics, the medical procedures analyzed, or the tools used to measure pain. Most of the papers included in this review found that VR had a positive effect by helping to reduce pain in children. Of note, all the studies that had included more than 100 participants and had used the Wong-Baker Faces Pain Rating Scale (WBFPS) had reported statistically significant results. This may be because this visual assessment scale is more effective in assessing pain in children than other scales that use numerical assessment scales such as the visual analog scale (VAS) for pain [55]. Although the VAS is a reliable method for assessing acute pain, children younger than 7 years may have difficulty in its use, as indicated by the reduced reliability of the results reported in these studies [56]. In addition, the VAS and WBFPS have been widely used in studies evaluating pain in other procedures such as wound healing [57], physiotherapy sessions after complex surgical interventions [58], or dental procedures [59] in which they produced positive results. Most of the papers included in this review [32,33,35,[37][38][39][40][41]44,45,[47][48][49][50][51][52] had analyzed the effect of VR on pain and fear in pediatric patients with cancer during venipuncture or reservoir puncture procedures. Furthermore, most of the studies we retrieved (20/21, 95%) had been carried out in hospitals, while only 5% (1/21) had been carried out in primary health care centers. This may have been a result of the health care provision resources available at the sites where these previous studies had been carried out, given that most of this work had been carried out in hospitals, thanks to the teaching function of these centers [60][61][62]. These data indicate that scant research has been carried out for this level of care, which is surprising, considering that needle procedures are frequent in primary care contexts because of the systematic vaccination programs carried out in the pediatric population. Among other possible explanations, perhaps this lack of research can be explained by health care staff overload or low levels of motivation among professionals or toward the support of research [63][64][65][66][67]. However, 2 study protocols have recently been published that will aim to evaluate the effectiveness of VR against pain during vaccination in the pediatric population through RCTs with estimated sample sizes of 100 [68] to more than 400 participants [69]. 
Although we found that VR is effective in reducing children's fear, very few studies have demonstrated the usefulness of VR in reducing fear during procedures involving needles [40,47]. Thus, the absence of a validated scale to measure this variable may be inhibiting its proper evaluation [70]. According to Taddio et al [71], most studies that measure fear do so by using questionnaires developed by the investigators, nonvalidated scales, or scales for measuring anxiety [72,73]. Thus, this review reveals the lack of consensus on the most appropriate instruments for evaluating and clearly differentiating between fear and distress in the pediatric population. Although in clinical practice, the difference between fear, anxiety, and stress may not always be relevant, these represent different theoretical constructs, which are not always rigorously differentiated. Notwithstanding, both fear and distress are important factors that are related to and impact the pain perceived by children [74,75]. Of note, the quality of the studies included in this systematic review (based on CASPe and Jadad assessments) was mostly low. However, some studies with low quality or even small samples showed important effects. We assume that in the future, a meta-regression model could be used to expand existing knowledge about these intervention types and their methodological quality. For this reason, this systematic review and meta-analysis highlights the need to design and implement new research with high methodological quality that would allow extraneous variables to be isolated, favoring the cause-effect relationship. The principal reasons for the studies included in this meta-analysis to be of low quality were that it was nearly impossible to blind both the participants and health care personnel to the VR intervention because of the nature of these devices [76]. Furthermore, in many cases, the absence of randomization was justified for ethical reasons. Indeed, more than half of the studies we examined had considered small sample sizes of fewer than 100 participants [77], which, in addition to being unreliable and inefficient, can lead to overestimation of the study effect size and can produce low reproducibility of the results. Finally, chronological age and neurological development are among the factors that influenced children's perceptions of pain and fear of procedures involving needles, and therefore, adjusting the age of children to less than 21 years should be considered in future studies [78]. Blinding and randomization are also the issues that were identified in the risk of bias analysis of studies not included in the meta-analysis. The studies included in the meta-analysis generally had a low level of risk, while studies not included tended to have a higher level of risk of bias. This may be due both to the fact that meta-analysis studies are more robust and to the use of different measurement tools in these papers. The main limitations of this work were, on the one hand, the lack of studies with nonsignificant results available in the scientific literature. This meant that we may not have included all the relevant studies, and therefore, it was not possible to control for publication bias [79]. 
On the other hand, although the random effects model that we used favored the most realistic observation of the data by specifically weighting each study, the heterogeneity of the included studies, both in terms of their outcome measures and their methodological approaches, means that we must be cautious about the interpretation of our results. This problem was also identified in a similar recent meta-analysis in which heterogeneity was found in studies with young patients [54]. Finally, the studies included did not address the effect of VR in children younger than 4 years, which implies a limitation of the results when it comes to generalizing this effect in all children. Based on all the above, the methodological design of future work must adequately calculate the required sample sizes and use appropriate sampling, participant study group allocations, and blinding techniques to be able to extrapolate any data obtained to the wider pediatric population. This review was limited by the quality of the studies it included. Generalization of these findings to younger children should also be done with caution because the studies we considered had not included children younger than 4 years. In conclusion, the findings of this review indicate that VR could be a feasible distraction measure to reduce the perception of pain and fear in the pediatric population during procedures involving needles. However, these results are limited by the heterogeneity of the studies included. In this sense, more trials with larger sample sizes and quality methodological techniques will be needed in the future. Acknowledgments This work was supported by a grant from the University CEU Cardenal Herrera (ICLINIC/1906). Authors' Contributions MLV and CRZ conceptualized and designed the study, drafted the initial manuscript, designed the data collection instruments, and reviewed the manuscript. RCG collected the data, carried out the analysis, and revised the manuscript. LP drafted the initial manuscript. LGG and MISL critically reviewed the manuscript for important intellectual content. All the authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work. Conflicts of Interest None declared. Multimedia Appendix 1 Search strategies.
v3-fos-license
2021-11-25T06:22:46.412Z
2021-11-23T00:00:00.000
244528020
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.nature.com/articles/s41391-021-00470-w.pdf", "pdf_hash": "18d4ce2115b43fd53bc7effb72874a4c1be09ead", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44867", "s2fieldsofstudy": [ "Medicine" ], "sha1": "fb6bca00824beba2665318d14aa7c5cde988bc2d", "year": 2021 }
pes2o/s2orc
Effects of yoga in men with prostate cancer on quality of life and immune response: a pilot randomized controlled trial Background Diagnosis and treatment of prostate cancer is associated with anxiety, fear, and depression in up to one-third of men. Yoga improves health-related quality of life (QoL) in patients with several types of cancer, but evidence of its efficacy in enhancing QoL is lacking in prostate cancer. Methods In this randomized controlled study, 29 men newly diagnosed with localized prostate cancer were randomized to yoga for 6 weeks (n = 14) or standard-of-care (n = 15) before radical prostatectomy. The primary outcome was self-reported QoL, assessed by the Expanded Prostate Index Composite (EPIC), Functional Assessment of Cancer Therapy-Prostate (FACT-P), Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT–F), Functional Assessment of Cancer Therapy-General (FACT-G) at baseline, preoperatively, and 6 weeks postoperatively. Secondary outcomes were changes in immune cell status and cytokine levels with yoga. Results The greatest benefit of yoga on QoL was seen in EPIC-sexual (mean difference, 8.5 points), FACIT-F (6.3 points), FACT-Functional wellbeing (8.6 points), FACT-physical wellbeing (5.5 points), and FACT-Social wellbeing (14.6 points). The yoga group showed increased numbers of circulating CD4+ and CD8+ T-cells, more production of interferon-gamma by natural killer cells, and increased Fc receptor III expression in natural killer cells. The yoga group also showed decreased numbers of regulatory T-cells, myeloid-derived suppressor cells, indicating antitumor activity, and reduction in inflammatory cytokine levels (granulocyte colony-stimulating factor [0.55 (0.05–1.05), p = 0.03], monocyte chemoattractant protein [0.22 (0.01–0.43), p = 0.04], and FMS-like tyrosine kinase-3 ligand [0.91 (−0.01, 1.82), p = 0.053]. Conclusions Perioperative yoga exercise improved QoL, promoted an immune response, and attenuated inflammation in men with prostate cancer. Yoga is feasible in this setting and has benefits that require further investigation. Trial registration clinicaltrials.org (NCT02620033). INTRODUCTION In 2020, there were 191,930 new prostate cancer cases in the US [1]. A diagnosis of prostate cancer may have profound psychological effects that contribute to poor physical, emotional, and social quality of life (QoL). Up to 30% of men diagnosed with prostate cancer experience significant anxiety, fear, and distress concerning the disease and treatment-related complications [2]. Moreover, the risk of suicide is doubled in the year following a diagnosis of prostate cancer [3]. Increasing use of natural approaches such as acupuncture, herbal supplements and vitamins, yoga, meditation, massage therapy and aromatherapy, coupled with the unmet need for effective management of QoL-related symptoms, has created a demand for integrative medicine in these men [4][5][6]. Many studies have demonstrated that yoga improves healthrelated QoL and emotional, physical, and mental wellbeing in patients with cancer. Mindfulness defined as "paying attention in a particular way, on purpose, in the present moment, and nonjudgmentally"-has been shown to be improved with yoga practice with focus on breath work [7,8]. Moreover, in addition to improving fitness, flexibility, and muscle tone, yoga lessens anxiety and stress [9][10][11][12][13][14][15][16]. 
There is also evidence indicating that yoga attenuates oxidative stress and chronic inflammation associated with stressful situations [11,13,17]. However, our understanding of the molecular mechanisms involved in these effects remains limited. Furthermore, most of the relevant studies have been performed in breast cancer, and there are limited data in prostate cancer. The aims of this pilot study were to assess the effects of a perioperative yoga exercise program on QoL, fatigue, sexual and urinary function, and mindfulness and on the cellular immune response and proinflammatory marker levels. MATERIALS AND METHODS Study design and participants This block randomized, open-label, parallel-group clinical trial included 29 men with prostate cancer who were scheduled for radical prostatectomy. This was a pilot study of a yoga program in men with prostate cancer to elicit a hypothesis and estimate an appropriate effect size. Patients were accrued from September 25, 2015, to February 6, 2019. The study inclusion criteria were as follows: age 30-80 years; pathologically and/or radiographically confirmed new diagnosis of localized prostate cancer; scheduled for radical prostatectomy (robotic-assisted or open); no active synchronous malignancy; not currently practicing yoga and/or meditation; adequate pain control; no neurological or musculoskeletal comorbidity that would interfere with exercise; willingness to be randomized to either study group and undergo phlebotomy; and ability to provide informed consent. Patients with an absolute contraindication to exercise testing or a psychotic, addiction-related, or major cognitive disorder were excluded. The patients were randomized into a yoga group (n = 14) that participated in a yoga program for 6 weeks preoperatively and postoperatively and a control group (n = 15) that received standard-of-care only. The control group comprised patients with a new diagnosis of prostate cancer who did not undergo the yoga intervention prior to their surgery. All patients completed health-related QoL surveys at baseline (6 weeks preoperatively), immediately before surgery, and 6 weeks postoperatively. Blood samples were collected at these three time points for examination of immune cell status and cytokine levels. The full study protocol is provided as Supplementary Text 1. The study was approved by the institutional review board of Long School of Medicine, UT Health San Antonio (approval: HSC20150406H). Written informed consent was obtained from all study participants. Yoga program The yoga intervention was developed for patients with prostate cancer as a collaborative effort between the ThriveWell Cancer Foundation, a local yoga studio, and the lead author (DK). The program consisted of 60 min of yoga exercise twice weekly for 6 weeks preoperatively (depending on surgeon and theater availability) and for 6 weeks starting 3-6 weeks postoperatively. The yoga sessions were led by certified instructors from the ThriveWell Cancer Foundation and the local yoga studio. Sessions were held at various locations in San Antonio, TX, and participants could choose their most convenient location. We utilized the Hatha yoga method; Hatha yoga generally refers to the practice's focus on the use of physical postures. Hatha yoga was combined with focused attention on gentle breath while moving with awareness through the practice to gently mobilize the major joints of the body.
The study practice also provided the breathing and pelvic floor engagement awareness in seated meditation at the beginning of yoga practice. Each participant was shown how to perform yoga correctly and safely, with tailoring of exercises to their comfort level. The instructors monitored patients' progress by observing their ability to breathe smoothly, rhythmically, and continuously while performing yoga. The study was performed in San Antonio, Texas. Clinical outcomes Health-related QoL was assessed using the Functional Assessment of Cancer Therapy-Prostate (FACT-P) scale. FACT-P is a modification of the FACT scale, a 27-item measure of QoL across the domains of physical, social/family, emotional, and functional wellbeing, and contains 12 additional items specific to the impact of prostate cancer symptoms. Therefore, the FACT-P yields both a prostate cancer-related QoL score and a total QoL score [18]. The Five Facets of Mindfulness Questionnaire was used to evaluate the effects of yoga on everyday mindfulness. This 39-item measure includes five domains (Observe, Describe, Act with Awareness, Non-judging of Inner Experience, and Nonreactivity to Inner Experience) and has been validated in both English and Spanish [19]. Cancer-specific fatigue was measured using the 13-item Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT-F) questionnaire [20], urinary continence using the 7-item Expanded Prostate Index Composite (EPIC) urinary questionnaire [21], and erectile function using the 9-item EPIC-sexual function questionnaire [22]. All three questionnaires have been confirmed to have good reliability and validity. Analysis of immune cells Cryopreserved human peripheral blood mononuclear cells (PBMCs) were obtained at all assessment times and processed for immune analysis as previously described [23]. Briefly, PBMC cryovials were thawed rapidly at 37°C and washed in 9 ml of warm serum-free RPMI-1640 medium (Corning, Corning, New York, NY). Single-cell PBMC suspensions were quantified for total number of live cells using an automated cell counter (Vi-Cell XR, Beckman Coulter, Brea, CA). After centrifugation at 4°C and 1200 rpm for 5 min, the cell pellets were resuspended with ice-cold flow buffer (sterile 2% fetal bovine serum in phosphate-buffered saline) at a maximum of 1 × 10 6 cells with a staining volume of 100 µl on 96-well U-bottom plates; they were then incubated with human Fc blocker for 20 min and stained using fixable viability dye and fluorochrome-conjugated anti-human monoclonal antibodies for 45 min at 4°C in the dark (Supplementary Table 1). For examination of the cytokine response in immune cells, the thawed PBMCs were resuspended in complete RPMI-1640 medium containing 10% fetal bovine serum, penicillin/streptomycin, and L-glutamine (Corning) on a 96-well Ubottom plate and incubated in a resting state at 37°C overnight. Next, the cells were stimulated for 5 h using Cell Activation Cocktail (BioLegend, San Diego, CA) at a dilution of 1:500. Cell surface and intracellular staining was then performed using a Fixation/Permeabilization Solution Kit (Cytofix/ Cytoperm, BD Biosciences, San Jose, CA). The stained samples were analyzed using an LSRII cytometer (BD Biosciences) with FACS Diva software. Using the gating strategy for immune cell subsets, all live PBMCs were first gated from singlets and fixable viability dye-negative populations. CD4 + and CD8 + T-cells were gated under a live CD3 + population. 
Statistical analysis Domain and subdomain scores for all patient-reported outcomes were calculated as per their respective guidelines and then scaled to a 0-100 score. A higher score indicates better health-related QoL. All continuous variables were evaluated for normality using the Kolmogorov-Smirnov test. Between-group differences in sociodemographic, clinical, and immune parameters were evaluated at baseline using the Student's t-test for continuous variables and the chi-squared test for categorical variables (Fisher's exact test was used as appropriate). The effect of the intervention was evaluated by calculating the improvement in patient-reported outcome scores, cytokine expression, and numbers of immune cells between baseline and immediately before surgery. Between-group differences were evaluated using the Student's t-test if the variable was normally distributed and the Wilcoxon rank-sum test if not. The significance level was set to α = 0.05 (two-tailed) but not adjusted for multiple comparisons using methods such as Bonferroni correction because we were not testing any a priori hypothesis. The primary outcome was self-reported QoL at baseline, preoperatively, and 6 weeks postoperatively. Secondary outcomes were changes in immune cell status and cytokine levels with yoga. The analysis was restricted to only the first two time points because only two of the 15 participants in the yoga group completed the third time point (per-protocol analysis). All statistical analyses were performed using R software (R Foundation for Statistical Computing, Vienna, Austria). RESULTS Twenty-nine of 30 patients assessed for eligibility were randomized to the yoga group (n = 14) or the control group (n = 15). Twelve patients in the yoga group completed their yoga program and could be followed up before surgery, and one patient in the control group was lost to follow-up. Therefore, complete data for 26 patients were available for the analyses (Fig. 1). Patient demographics and clinical characteristics The baseline sociodemographic and clinical characteristics are shown in Supplementary Table 2. The median patient age was 60 years (IQR 59-61) in the control group and 56 years (IQR 55-60.5) in the yoga group. Most of the patients had organ-confined disease. Approximately 23% of the cohort was Hispanic. There was no significant between-group difference in QoL at baseline (Supplementary Table 3). Table 1 shows the changes in mean scores in both study groups at the scale and subscale levels. Overall, there was a statistically nonsignificant trend towards improvement in sexual function (EPIC questionnaire, yoga vs control: 9.1 vs. 0.6; p = 0.098), fatigue (FACIT-F questionnaire, 1.8 vs. −4.5; p = 0.098), general QoL (FACT-P: 1.9 vs. −6.3; p = 0.065), and prostate-specific QoL (0.6 vs. −5.3; p = 0.08). We then calculated the minimally important difference (MID), namely, the minimal effect that would be meaningful to patients, for each patient-reported QoL outcome, using one-third of a standard deviation as the threshold. A change of this magnitude has been shown to have a clinically meaningful impact on a patient's QoL [24,25]. We then found that the FACT-P, FACT-General, FACIT-F, and EPIC-Sexual scores were improved meaningfully by yoga (Fig. 2). On further substratification of these scales using the MID, there were improvements in the FACT-P physical, social, and functional wellbeing scores and in EPIC-Sexual function (Supplementary Fig. 1) in the yoga group.
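To make the two analytic steps described above concrete (the distribution-based MID of one-third of a standard deviation and the normality-gated choice between the Student's t-test and the Wilcoxon rank-sum test), a minimal Python sketch is given below. This is not the study's code: the analysis was performed in R, the simulated score changes and the pooling of both groups into a single SD are assumptions for illustration, and only the group sizes (12 yoga and 14 control completers) follow the numbers reported here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical baseline-to-preoperative changes in a 0-100 scaled QoL score;
# group sizes follow the completer numbers (12 yoga, 14 control), values are simulated
yoga_change = rng.normal(5, 10, 12)
control_change = rng.normal(-4, 10, 14)

# Distribution-based minimally important difference: one-third of the pooled SD
pooled_sd = np.std(np.concatenate([yoga_change, control_change]), ddof=1)
mid = pooled_sd / 3

def compare(a, b, alpha=0.05):
    """Student's t-test if both samples look normal, otherwise Wilcoxon rank-sum.

    Normality is checked here with a one-sample KS test on standardized values,
    a rough stand-in for the Kolmogorov-Smirnov screening used in the paper.
    """
    normal = all(
        stats.kstest((x - x.mean()) / x.std(ddof=1), "norm").pvalue > alpha
        for x in (a, b)
    )
    if normal:
        return "Student's t-test", stats.ttest_ind(a, b).pvalue
    return "Wilcoxon rank-sum", stats.ranksums(a, b).pvalue

test, p = compare(yoga_change, control_change)
diff = yoga_change.mean() - control_change.mean()
print(f"MID threshold = {mid:.1f} points, between-group difference = {diff:.1f} points")
print(f"{test}: p = {p:.3f}; " + ("meets the MID" if abs(diff) >= mid else "below the MID"))
```

In practice, the MID would be computed separately for each instrument (FACT-P, FACIT-F, EPIC) from the observed standard deviations of that scale rather than from simulated data.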
Immune cells We characterized lymphocytes from the patients' blood samples using multi-parametric gating flow cytometry (Supplementary Fig. 2 shows the results and Supplementary Table 1 lists the antibodies used). We then analyzed the immune cell data by creating a t-distributed stochastic neighbor embedding (tSNE) plot; tSNE is a non-linear dimensionality reduction algorithm that provides "big picture" data. A global tSNE map of PBMCs was obtained using a 12-parameter flow cytometry panel. Using the tSNE algorithm, we identified qualitative phenotypic differences between the yoga group and the control group (Fig. 3a). We further delineated the populations of T-cells, NK cells, and subsets of myeloid cells and MDSCs. We then plotted the differences in frequencies and absolute numbers of immune cells between the study groups using box and whisker plots (Supplementary Fig. 3a-e). We identified an increased IFN-γ response in peripheral cytotoxic CD4+ (p = 0.007) and CD8+ (p = 0.004) cells in the yoga group in comparison with the control group (Fig. 3b). Cytokine levels are shown in Fig. 4. Changes in the levels of 38 cytokines from baseline are compared between the study groups in Supplementary Fig. 4 and Supplementary Table 4. DISCUSSION Although there have been some small retrospective studies and one relevant randomized trial [14], the effects of yoga in patients with prostate cancer remain unclear, particularly regarding QoL and its molecular impact. Therefore, we designed this clinical trial to obtain preliminary clinical and translational data. Our preliminary data add to the literature by providing a molecular explanation for the similar improvements in QoL seen in patients with prostate cancer. The greatest impact of yoga was on sexual function, fatigue, prostate cancer-specific QoL, and physical, social, and functional wellbeing. Our data also indicate that yoga modulates several key immune cells that are important drivers of antitumor activity. Furthermore, the analysis of cytokines/chemokines suggests that yoga attenuates the inflammatory response. Our data demonstrate a positive effect of yoga in several clinical domains. First, we observed a significant improvement in QoL in the perioperative setting, which was reflected in enhanced physical, social, and functional wellbeing as well as improvement in symptoms of fatigue and stress. These findings are consistent with those of a trial by Ben-Josef et al. [14] in which 50 patients with prostate cancer undergoing radiation therapy were allocated to yoga classes (n = 22) or standard-of-care (n = 28) for 6-9 weeks. In that study, patients in the yoga group experienced significantly less global fatigue and severity of fatigue than those in the control group. Perioperative exercise studies for patients who underwent radical prostatectomy have mostly focused on urinary continence after the surgery. Few studies of perioperative exercise interventions have examined the impact of exercise on cancer-specific QoL. In their randomized controlled trial assessing 49 patients, Park et al. [26] reported that a postoperative combined exercise intervention resulted in improvement of physical function and QoL. In a large systematic review and meta-analysis of 1057 prostate cancer patients enrolled in 13 randomized clinical trials, exercise intervention significantly improved fatigue symptoms [mean difference (MD) 4.83, 95% CI 3.24-6.43; p < 0.00001] as assessed according to the Functional Assessment of Cancer Therapy (FACT)-Fatigue scale.
Fatigue remained improved at 6 months (MD 3.60, 95% CI 2.80-5.12; p < 0.00001). Furthermore, exercise interventions improved QoL measured using the FACT-General (MD 3.93, 95% CI 1.37-5.92; p = 0.003) and FACT-Prostate (MD 3.85, 95% CI 1.25-6.46; p = 0.04) scales [27]. A recent multi-institutional study examined the effect of yoga on quality of sleep in 410 cancer survivors (>90% female) who were randomly assigned to 4 weeks of yoga (n = 206) or standard-of-care (n = 204) [9]. Compared with the control group, the yoga group demonstrated significant global improvements in quality, duration, and efficiency of sleep and less use of sleep medication. Longer-term follow-up of that cohort revealed significant improvement in all subdomains of cancer-related fatigue [28]. The Society for Integrative Oncology has produced an evidence-based guideline on the use of integrative therapies during and after treatment of breast cancer that has been endorsed by an American Society of Clinical Oncology expert panel [29]. This guideline recommends yoga for reduction of anxiety and stress, amelioration of depression/mood, and improvement of QoL. Our data add to the literature by providing a molecular explanation for the similar improvements in QoL seen in patients with prostate cancer. There are data suggesting that psychological stress has a negative effect on the adaptive cellular immune response, including decreased production of NK cells and T-cells. Chronic stress results in stimulation of the hypothalamic-pituitary-adrenal axis, which produces glucocorticoids, and the sympathetic-adrenal axis, which produces catecholamines [30]. Leukocytes have receptors for these stress-related hormones and can modulate their binding [31]. T-cells have more of these receptors and are exquisitely sensitive to fluctuations in stress hormone levels. Recent data show that stress and anxiety may lead to metabolic changes that impair the function of CD4+ T-cells [32]. In the present study, we identified a robust IFN-γ response in CD4+ and CD8+ cells, increased expression of the Fc receptor (CD16) in NK cells, and decreases in the numbers of regulatory T-cells and MDSCs. These findings, although only hypothesis-generating, point to a strong immune response, less stress, and better QoL in patients with prostate cancer who practice yoga. Future studies are needed to clarify the impact of yoga on T-cell subpopulations. Research has shown a relationship between persistent fatigue and overactivation of the inflammatory network in patients with cancer. Other studies have shown an association between QoL indicators, including fatigue, anxiety, and depression levels, and increased production of proinflammatory cytokines, including IL-6 and TNF-α [33]. Our data show that 6 weeks of a yoga exercise program reduced fatigue and the expression of proinflammatory markers, including G-CSF, MCP-1, and Flt-3 ligand. G-CSF has been shown to activate production of endothelial cells and cytokines and to promote angiogenesis [34]. MCP-1 is a chemokine that is expressed by glial cells and neurons. Higher plasma MCP-1 levels are associated with more rapid and severe cognitive decline secondary to neuronal loss, whereas lower levels are neuroprotective [35,36]. There are emerging data showing that MCP-1 acts as a potent chemotactic factor regulating stromal-epithelial cell interactions in prostate cancer.
It regulates prostate cancer cell motility and proliferation in the bone microenvironment, and overexpression of its receptors (CCR2 and CCRL1) may contribute to the progression and biochemical failure of prostate cancer [37]. Finally, higher expression levels of Flt-3 ligand have been linked to an autoimmune response and to chronic inflammatory responses in the lung, central nervous system, and gastrointestinal tract [38]. By lessening inflammation and fatigue and improving mood, yoga is an ideal exercise that can be modified for individuals with a sedentary lifestyle or functional limitations [11,16,28,[39][40][41]. Our pilot data are consistent with those of previous retrospective studies demonstrating the beneficial effects of yoga on several psychological and QoL outcomes, add granularity at the molecular level, and identify putative inflammatory markers for future research. This study has several limitations. First, the study cohort was small. Second, assessments were performed at only two time points, and adherence rates are not available. Third, given that we were not testing any a priori hypothesis, the significance level was not adjusted for multiple comparisons using methods such as Bonferroni correction. Therefore, our data are hypothesis-generating, and further research is required. In conclusion, our present findings indicate that yoga improves QoL, generates a robust immune response, and attenuates expression of key inflammatory cytokines in men with newly diagnosed prostate cancer. Our data show that patients are motivated to perform yoga in the preoperative setting (no attrition) but not in the postoperative period (high attrition). Larger-scale studies are needed to replicate these results. Future studies should incorporate a translational component to clarify the mechanism via which yoga improves QoL and examine the effects of yoga on progression and recurrence of prostate cancer. DATA AVAILABILITY The principal investigator (DK) and biostatistician (PS) had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
v3-fos-license
2017-10-17T12:33:23.344Z
2017-06-30T00:00:00.000
29823899
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.ecologyandsociety.org/vol22/iss2/art45/ES-2017-9422.pdf", "pdf_hash": "083ade771b5c763d5b027296c98a79fa097ac918", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44870", "s2fieldsofstudy": [ "Environmental Science", "Political Science", "Sociology" ], "sha1": "083ade771b5c763d5b027296c98a79fa097ac918", "year": 2017 }
pes2o/s2orc
Resilience, political ecology, and well-being: an interdisciplinary approach to understanding social-ecological change in coastal Bangladesh The commodification of peasant livelihoods through export-oriented aquaculture has brought about significant social-ecological changes in low-lying coastal areas in many parts of Asia. A better understanding of the underlying drivers and distributional effects of these changes requires integration of social and ecological approaches that often have different epistemological origins. Resilience thinking has gained increased traction in social-ecological systems research because it provides a dynamic analysis of the cross-scalar interactions between multiple conditions and processes. However, the system-oriented perspective inherent in resilience thinking fails to acknowledge the heterogeneous values, interests, and power of social actors and their roles in navigating social-ecological change. Incorporation of political ecology and well-being perspectives can provide an actor-oriented analysis of the trade-offs associated with change and help to determine which state is desirable for whom. However, empirical demonstrations of such interdisciplinary approaches remain scarce. Here, we explore the combined application of resilience, political ecology, and well-being in investigating the root causes of social-ecological change and identifying the winners and losers of system transformation through empirical analysis of the differential changes in farming systems in two villages in coastal Bangladesh. Using the adaptive cycle as a structuring model, we examine the evolution of the shrimp aquaculture system over the past few decades, particularly looking at the power dynamics between households of different wealth classes. We found that although asymmetric land ownership and political ties enabled the wealthier households to reach their desired farming system in one village, social resilience achieved through memory, leadership, and crisis empowered poorer households to exercise their agency in another village. Material dimensions such as improved living standards, food security, and cash incomes were evidently important; however, freedom to pursue desired livelihood activities, better environmental quality, mental peace, and cultural identities had significant implications for relational and subjective well-being. INTRODUCTION Social-ecological changes brought about by the rapid growth of the aquaculture industry and increased occurrence of climatic shocks and stresses have significantly modified the vulnerability contexts of low-lying coastal areas in many parts of Asia (Pokrant 2014, Orchard et al. 2016, Abdullah et al. 2017). An extensive body of empirical work has studied the vulnerabilities of households or communities to specific shocks and stresses, often providing a snapshot of a single spatial scale at a given time (Miller et al. 2010). However, the underlying drivers of social-ecological change and its differential effects on the well-being of social actors remain understudied (Tucker et al. 2015). These knowledge gaps can be attributed to a lack of communication between different disciplines, which often limits the scope of empirical work within the boundaries of a given concept (Janssen et al. 2006, Miller et al.
2010). Given the inherent complexity of social-ecological systems, a holistic, in-depth analysis of different elements within the system requires an integrative, interdisciplinary approach that bridges across several ecological and social knowledge domains (Binder et al. 2013, McGinnis and Ostrom 2014). Since the mid-2000s, resilience has emerged as an important concept in evaluating social-ecological change because it provides a dynamic approach to system analysis and management, with emphasis on nonlinearity and multiscalar feedback mechanisms (Ingalls and Stedman 2016). Resilience thinking, however, is criticized for its system-level bias that does not account for the role of power dynamics in navigating social-ecological change and the distribution of costs and benefits associated with change (Cote and Nightingale 2012, Fabinyi et al. 2014, Brown 2016). As such, several authors have highlighted the potential of a political ecology perspective in analyzing the asymmetries in power (Peterson 2000, Davoudi 2012, Turner 2013) and have emphasized the need to integrate well-being approaches in addressing the differential needs and values of social actors (Coulthard et al. 2011, Armitage et al. 2012, Coulthard 2012). Despite theoretical progress, only a few empirical studies have combined resilience thinking with political ecology (e.g., Beymer-Farris et al. 2012, Moshy et al. 2015) or human well-being (e.g., Marschke and Berkes 2006, Moshy et al. 2015) to analyze the politics of desirable states, the trade-offs associated with adaptation strategies, and the winners and losers of change. Here, we aim to explore empirically the combined application of resilience, political ecology, and well-being perspectives in understanding the drivers and distributional effects of social-ecological change in coastal Bangladesh. In doing so, we compare the cases of two villages that had similar levels of exposure to natural shocks and stresses but experienced differential changes in farming systems. We use the resilience concept to describe the nature of the changes in relation to the system characteristics and functions and apply a political ecology lens to examine the roles of different actors in shaping the trajectories of change. We then analyze the implications of these changes for the well-being of actors with heterogeneous interests and needs. We first briefly review the theoretical literature on resilience, political ecology, and well-being and highlight the need for overcoming disciplinary boundaries to better theorize the social dimensions of social-ecological resilience. LITERATURE REVIEW Resilience thinking has captured interest in environmental social science research by analyzing human-nature interactions in the face of global environmental change (Leach 2008, Speranza et al. 2014). Originating from the ecological sciences, resilience embraces change as an inevitable feature of a system and places emphasis on either maintaining its character by absorbing the disturbance or transforming to a new regime when conditions become untenable (Walker et al. 2004, Folke 2006). Although early definitions of social-ecological resilience mainly focused on a system's post-event buffer capacity (Adger 2000, Walker et al. 2004), social scientists later expanded the concept to include the capacity of actors to learn from experience and build knowledge and skills for transformation (Folke 2006, Cutter et al.
2008). The adaptive renewal cycle, a heuristic model within resilience thinking, suggests that all complex systems, whether natural ecosystems or human societies, undergo cyclic changes comprising exploitation (r), conservation (K), release (Ω), and reorganization (α) phases (Holling 1986, Gunderson and Holling 2002). As a system passes through these four phases, its resource use and structure gradually increase until the system becomes so rigid that a disturbance leads to a chaotic collapse followed by a new growth phase characterized by innovation and experimentation (Holling 2001, Folke 2006). The dynamics of a system at a certain scale are influenced by the slow- and fast-moving variables at higher and lower scales, creating a hierarchy of nested sets of adaptive cycles, referred to as panarchy (Gunderson and Holling 2002). Resilience thinking is often criticized for its system-oriented approach, which puts little emphasis on the entities that comprise a system unless they are captured within the system's structure (Turner 2013). It tends to homogenize social complexity and assume that all actors within the system have similar interests, expectations, and behavior (Fabinyi et al. 2014). The process of building resilience, either through incremental adjustments or through radical transformations, often creates new patterns of winners and losers because certain system regimes may be considered more desirable by one segment of society than another (Walker and Salt 2006). Academic literature in the field to date has insufficiently addressed the basic issues of power, politics, and agency, as well as debates over fundamental questions such as "What is desirable?" and "For whom?" (Cote and Nightingale 2012, Davoudi 2012). To understand the drivers and differential effects of social-ecological change, there is a need to account for the different perspectives and desired states of the people involved (Cote and Nightingale 2012, Fabinyi et al. 2014) and to consider inequities in decision-making procedures and the distribution of costs and benefits resulting from change (Davoudi 2012). This has led to increased calls for bringing in insights from political ecology, which would enable resilience studies to engage sufficiently with power dynamics among social actors (Peterson 2000, Beymer-Farris et al. 2012, Cote and Nightingale 2012, Turner 2013, Fabinyi et al. 2014, Brown 2016, Ingalls and Stedman 2016). A political ecology approach highlights how power relations influence the access, control, and management of resources and places politics at the forefront of analysis to identify the social origins of environmental degradation and the plurality of perceptions (Peet and Watts 1996, Bryant 1998). Contentions among social and political scientists have generated various perspectives of power. Power involves the ability of an actor within a social relation to carry out his or her own will despite resistance from others (Weber 1947, Dahl 1957), to set the agenda or prevent the discussion of controversial issues (Bachrach and Baratz 1962), and to shape others' perceptions and preferences in ways that cause them to act contrary to their own interests (Lukes 1974). Applying these three dimensions of power to study complex social-ecological interactions is, however, complicated because it is unfeasible to attribute causal relationships between individual actions and undesirable collective outcomes (Olsson et al.
2014, Boonstra 2016). To address these challenges, it is necessary to identify the availability, distribution, and mobilization of various sources of power and conceptualize power both as a "conduct shaping" and a "context shaping" force (Boonstra 2016). Recognizing the indirect consequences of human behavior on social structures and events that influence the conditions for subsequent actions can facilitate the integration of power in resilience studies (Boonstra 2016). Asymmetries in social power can shape social-ecological change in ways in which the interests of some actors are privileged over others, thus involving trade-offs and creating distributional inequities (Ingalls and Stedman 2016). Human well-being has emerged as an important concept within the literature on resilience and ecosystem services as a means to analyze the heterogeneous needs of different social groups and identify the winners and losers of change (Coulthard et al. 2011, Daw et al. 2011, 2015, Armitage et al. 2012, Coulthard 2012, Hossain et al. 2017). Well-being is defined as "a state of being with others, where human needs are met, where one can act meaningfully to pursue one's goals, and where one enjoys a satisfactory quality of life" (Wellbeing in Developing Countries Research 2007:1). It is a three-dimensional concept comprising what people have (the material dimension such as food, shelter, health, assets, and standard of living), what they can do with what they have (the relational dimension, including personal relationships, networks of support and obligations, cultural identities, inequalities and conflict, and scope for personal and collective action), and how they think about what they have and can do (the subjective dimension, involving life satisfaction, fears and aspirations, trust and confidence, and sense of meaning; McGregor 2007, Copestake 2008, White 2010).
Understanding the drivers and distributional effects of social-ecological change through the combined application of resilience, political ecology, and well-being perspectives entails incorporation of social stratifiers as a means of disaggregating different social groups. We use household poverty level (alternatively referred to as wealth class) as a central lens for differentiation, whereby poverty is assessed from a multidimensional approach involving a wide range of context-relevant indicators. We next describe the research methods and study sites. Empirical evidence from the study sites is then presented, followed by a discussion about how an interdisciplinary approach can greatly enhance our understanding of the complex processes and outcomes of social-ecological change.
Table 1 (extract). Data collection methods.
Household questionnaire survey: semistructured questionnaire to collect quantitative data on households' demographic profile, asset ownership, livelihood activities, perceptions of brackish-water shrimp farming, and changes in well-being. Sample: 150 households (25% of approximately 600 households) in each village, selected through a random route sampling method; each of the villages was divided into neighborhoods, and households were selected within each neighborhood via a "random walk." Household heads were the primary respondents; however, participation from any willing household member was welcomed to obtain more accurate data.
Livelihood trajectory interview: unstructured interviews to generate qualitative data on changes in assets, livelihood strategies, and well-being over the previous decades, with detailed exploration of the underlying causes of these changes. Sample: 25 interviews in each village with adult males and females selected through a purposive sampling method, ensuring representation from all wealth classes and different occupations.
RESEARCH METHODS A mixed-method approach comprising focus group discussions (FGDs), participatory wealth ranking (PWR), household questionnaire surveys, and livelihood trajectory interviews was used to collect empirical evidence in late 2014 (Table 1). Data from PWR and household surveys were used to stratify households by wealth class. Understanding the drivers of social-ecological change involved the analysis of qualitative data from FGDs and interviews in relation to the characteristics of the adaptive cycle, whereas assessments of well-being impacts were based on both survey and interview data.
PWR was used to identify the number of wealth classes within each village and outline the main characteristics that differentiate one class from another. Coincidentally, participants in both villages disaggregated households into five wealth categories, namely, rich, upper middle, lower middle, poor, and extreme poor, using agricultural land ownership as the most important determinant along with indicators such as relative income, housing materials, education, and food security (refer to Table A1.1 in Appendix 1 for details). Asset ownership data from household surveys were used to generate household wealth indices and calculate the numbers of sample households belonging to each of the five categories (Table A1.2 in Appendix 1). Principal component analysis (PCA) was carried out using 17 indicators under seven dimensions (refer to Table A1.3 in Appendix 1 for descriptive statistics). All components with an eigenvalue > 1 were extracted, of which the factor scores and factor loadings of the first principal component (PC1) were considered as the household wealth indices and indicator weights, respectively (Table A1.4 in Appendix 1). K-means cluster analysis with five clusters was then applied on the PC1 factor scores to quantitatively disaggregate households into five wealth classes. PCA also revealed the variation in asset ownership within and between different classes and inequality in wealth distribution within the two communities (refer to Tables A1.5 and A1.6 in Appendix 1 for details on asset ownership).

Following translation and transcription, qualitative data from FGDs and livelihood trajectories were scrutinized, and chunks of text related to historical events were coded as per the spatial scale (international, national, regional, or local) and the domain in which they occurred (socio-political, agro-ecological, or economic). The events closely adhered to the characteristics defining each of the four phases of the adaptive cycle in terms of the system's potential (that is, the wealth of the system) and connectedness (that is, the internal controllability of the system; refer to Table A1.7 in Appendix 1 for details of data analysis). The events were then arranged chronologically, demarcating boundaries between the phases for the two villages respectively. Although this demarcation aided structuring and analysis of data, it should be noted that these boundaries are highly flexible and represent broader time periods instead of rigid start and end dates.

Quantitative data from household surveys were used to construct bar charts on households' changes in well-being resulting from the changes in farming systems. The questionnaire included an open-ended question asking respondents whether they were better off, worse off, or same as before, and why. Using this subjective line of inquiry resulted in a wide range of responses in which relational factors such as having a peaceful community often emerged in addition to the usual objective factors such as income and assets. These were also supplemented with qualitative data from interviews that provided deeper insights into individuals' values, struggles, and aspirations. Individuals' responses may not be representative of all members within the household; however, because we primarily focused on understanding the power dynamics between different wealth classes, intrahousehold differences and gender dimensions were not studied.
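As an illustration of the wealth-index construction described above, a minimal computational sketch (not the authors' original analysis scripts) could proceed as follows; the file name, column names, and random seed are hypothetical placeholders.

# Minimal sketch of the procedure described above: standardize the asset
# indicators, take PC1 factor scores as the household wealth index and PC1
# loadings as indicator weights, then split households into five classes
# with k-means. File and column names are illustrative placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def classify_households(csv_path):
    data = pd.read_csv(csv_path)                      # one row per household
    indicators = data.drop(columns=["household_id"])  # the asset-ownership indicators
    scaled = StandardScaler().fit_transform(indicators)

    pca = PCA()
    scores = pca.fit_transform(scaled)
    data["wealth_index"] = scores[:, 0]               # PC1 factor scores
    weights = pd.Series(pca.components_[0], index=indicators.columns)  # PC1 loadings

    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
    data["wealth_class"] = kmeans.fit_predict(data[["wealth_index"]])
    return data, weights

# Hypothetical usage:
# households, indicator_weights = classify_households("household_assets.csv")
# print(households.groupby("wealth_class")["wealth_index"].mean())

Clustering on the PC1 scores, as in this sketch, mirrors the quantitative disaggregation into five wealth classes reported above, with the PC1 loadings playing the role of the indicator weights in Appendix 1.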
STUDY SITES
The study villages, Mithakhali and Kamarkhola, are located in southwestern coastal Bangladesh (Fig. 1), an active deltaic floodplain characterized by high vulnerability to salinity intrusion and cyclones accompanied by tidal surges (Shameem et al. 2014, Huq et al. 2015). Salinity intrusion is largely a seasonal phenomenon; changes in upstream river flows lead to a relatively freshwater regime during the wet season and high levels of water and soil salinity during the dry season (Nuruzzaman 2006). However, this natural process has been exacerbated by the construction of the Farakka dam on the Ganges River in India. While both villages traditionally depended on paddy cultivation during the wet season (July-December), a number of agroecological, socioeconomic, and political factors caused a two-stage change in the farming systems since the 1980s (Fig. 2). In the first stage, brackish water shrimp cultivation (Penaeus monodon, locally known as Bagda) was introduced during the dry season (February-June), along with wet season paddy in both villages. In the second stage, the two villages embarked on different trajectories, with Mithakhali phasing out paddy gradually and replacing it with freshwater whitefish farming, and Kamarkhola banning shrimp cultivation and reverting to traditional subsistence-based paddy farming along with freshwater prawn (Macrobrachium rosenbergii, locally known as Galda) and whitefish. The underlying causes of these differential changes in farming systems and their implications for human well-being are discussed next.

Drivers of social-ecological change
Exploitation and conservation phases in both Mithakhali and Kamarkhola
Fig. 3 illustrates the chronology of events at different spatial scales during the four phases of the adaptive cycle. The exploitation and conservation phases were similar for both villages, as well as the south-western coastal region in general. During the 1960s and early 1970s, the government implemented the "Coastal Embankment Project," under which hundreds of polders were constructed in the coastal region of Bangladesh to increase wet season agricultural productivity by keeping out saline water. From the late 1970s, increased international market demand and high prices for shrimp spurred an interest among farmers in shrimp aquaculture, causing agricultural lands to be turned into shrimp farms during the dry season. The sluice gates were kept open from February to April to allow saline water to enter farms, along with a wide variety of fish fry and natural shrimp postlarvae. Meanwhile, between 1979 and 1996, the World Bank's Structural Adjustment Programme aimed to promote the country's economic growth through the creation of an export-oriented, market-based economy. Many infrastructure development programs along with improved technology dissemination and fiscal incentives were launched to expand the shrimp industry. Apart from the expansion in the number of shrimp farms, the industry experienced concurrent growth in associated services such as hatcheries, processing plants, ice plants, and shrimp depots.

Fig. 3. Timeline of events characterizing the four phases of the adaptive cycle in Mithakhali and Kamarkhola.
In addition to the government's role in promoting the sector, many outside entrepreneurs, including businessmen, politicians, army, and civil officials, started to invest in shrimp farming in the late 1980s.While the profits were huge, the amount of land suitable for shrimp cultivation was in short supply; hence, the appropriation of public land became a source of power play in the region.Because of interdepartment conflicts, absence of precise distribution policy, and underhanded dealings, most of the public land and canals were allocated to politically powerful persons.These outsiders also pressured local farmers to lease out their lands for shrimp farming, and in some cases, used hired musclemen to forcefully evict marginal rice farmers from their land. During the 1980s, local farmers in this area did not have much knowledge about the prospects of shrimp farming.Slowly, powerful businessmen came to this area and started to inundate our land with saline water during the dry season.The incoming water contained large quantities of wild shrimp postlarvae.The businessmen made huge profits without any investment; when they drained out the water in June, the local landowners could plant paddy.But after a few years, when rice yield started to decline, farmers wanted compensation from these businessmen, who then started paying rent.As yields continued to decline, the rents continued to increase.Participant in FGD, Mithakhali. During the 1990s, to increase production and cope with the decline in natural shrimp fry availability, farmers started to release hatchery-bred postlarvae, which are comparatively cheaper but more susceptible to diseases than the natural ones sold by fry collectors.In addition to Bagda shrimp, many farms also harvested large amounts of predatory fish, which entered the farms along with the tidal waters.Large-scale conversion of agricultural land to shrimp farms, deliberate flooding of rice fields and canals with saline water, and legal and illegal construction of gates and pipelines through embankments significantly increased soil and water salinity.Although the shrimp industry led to increased national income and greater employment opportunities through the establishment of associated activities, most of the income was enjoyed by a few powerful entrepreneurs.Landless farmers and sharecroppers, who traditionally leased land to grow crops, lost access to these productive resources and became unemployed. 
Release and reorganization phases in Mithakhali During the release and reorganization phases, the farming systems in the two villages followed different trajectories.Overtime, local farmers started to realize that they were deprived of the huge profits that were generated from their own land by outside entrepreneurs while also suffering from the adverse effects of shrimp cultivation, including decline in paddy yield, loss of homestead gardens, restricted access for fishing in canals, and livestock rearing.False contractual agreements, nonpayment of lease money, and disputes over common public lands led to increased social tensions.Local people were involved in street protests and violent confrontations with the outside entrepreneurs, leading to serious disruptions in law and order, violations of human rights, and even incidents of rape and murder.During the 1990s, almost all candidates of union council elections took advantage of people's sentiments and used an antishrimp position in their electoral campaigns.In Mithakhali, the locally elected lawmaker passed a law in 1996 stating jomi jar, gher tar (only the true landowner has full rights over the shrimp farms on his land).Local farmers were able to regain control over their lands and subsequently divided the large commercial farms into smaller farms managed by individual landowners. Following the eviction of the outside entrepreneurs, local farmers in Mithakhali continued to farm brackish water shrimp along with predatory fish in the dry season, followed by paddy and small amounts of freshwater whitefish in the wet season.However, by the time the landowners gained control over their land, the "golden era" of shrimp was almost over.Prolonged shrimp farming, often supplemented with additional salts, caused the soil to lose fertility over time.Moreover, in the mid-1990s, white spot syndrome virus, believed to have originated from imported postlarvae, spread across shrimp farms and is still a major concern for farmers.Paddy yields declined considerably until costs became higher than revenue.Large farmers became reluctant to grow rice; hence, from July onward, when monsoon rains diluted the water and decreased its salinity, several species of whitefish were released onto farms.These fish were harvested in December, after which the water was drained out entirely and the land was prepared for shrimp cultivation in the following season. The rich people were always looking out for poor people who wanted to either lease out their land or sell it altogether.Poor people lacked foresight; they were happy with the high rent or price they were offered.They are also naïve; they never saw this much cash in hand before.Hundreds of small farms were slowly assimilated into the larger ones, making the rich more powerful.The large landowners were reluctant to drain out water from their land after the end of the dry season.And unless the large landowners removed water from their farms, the small farmers could not plant rice in the wet season.One kilogram of shrimp sold for BDT 700-800 (1 USD ≈ 80 BDT), whereas one maund (37 kg) of rice sold for BDT 300; so any economically rational being would opt for aquaculture.Lower middle class farmer, Mithakhali. 
The final blow came in 2007, when cyclone Sidr brought in highly saline tidal water and degraded the soil to such an extent that crop cultivation became impossible. This was followed by cyclone Aila in 2009; the tidal surge inundated the village during high tide, and the water receded back again on the same day during low tide. The cyclone had relatively smaller impacts in Mithakhali because it is located toward the inner part of Mongla subdistrict, further away from the main rivers. Apart from the immediate loss of fisheries and increased soil salinity in subsequent years, there was no damage to infrastructure. However, further land degradation and increased disease outbreaks had severely dwindled incomes from shrimp cultivation.

As estimated by the manager of a shrimp cooperative, shrimp mortality had increased from 5% to 80% over 15 years, and at the time of study, a farmer could still earn about BDT 42,000 per hectare (compared to BDT 340,000 per hectare in the past) during the dry season, followed by another BDT 67,000 per hectare from whitefish farming during the wet season. However, given that the mean agricultural land ownership of poor and extreme poor households, who together composed 68% of the total population, was only 0.57 and 0.016 hectares, respectively, the cash income for most people from shrimp and whitefish cultivation was very limited. Increased soil salinity and private control of water canals precluded all other sources of subsistence such as rice, vegetables, open-access fish, and livestock. Lack of funds and specialized skills constrained these households from entering other high-return nonfarm activities. While small farmers faced food insecurity and rising debts, large farmers could still enjoy economies of scale and cope with losses by intensifying production. Thus, most households were strongly against brackish water shrimp cultivation, with some expressing ambivalent opinions. This reflected that when people internalize the harshness of their circumstances, they do not desire what they never expect to achieve. However, a limited number of households, mainly from the rich and upper middle classes, were in favor of shrimp cultivation.

The big landlords want shrimp cultivation to continue so that they can get money by sitting in Khulna city, Dhaka city, or even abroad. In a given season, they can earn up to BDT 10 million. During election time, they will be able to fund the local politicians, whereas someone like me won't be able to contribute a penny. So obviously, the politicians will support them. Those who are poor want the embankment to be built. If the embankment is there, we can keep out the saline water and use freshwater stored in canals to grow rice as well as whitefish. We can also grow winter crops like sesame and pulses. Poor farmer, Mithakhali.

In contrast, an interview with a rich farmer revealed a different perspective; he explained that although most people were against shrimp cultivation, reverting to the paddy-based system was not feasible.

I understand how decades of shrimp cultivation has adversely affected the agro-ecology of this village. But we cannot stop it at once even if we wanted to. This is something many of the farmers don't realize. If we stop shrimp cultivation today, it would take at least 3-5 years for the soil to regain its fertility. Thirty years of land degradation cannot be altered in a day. So how will these people survive in the meantime? Who will support us? Rich farmer, Mithakhali.
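To see why the per-hectare returns quoted above imply very limited cash income for most households, they can be combined with the reported mean landholdings in a simple back-of-the-envelope calculation; this is illustrative arithmetic only, using the exchange rate cited by respondents.

# Rough arithmetic check of the figures quoted above (illustrative only).
shrimp_bdt_per_ha = 42_000      # dry-season shrimp income at the time of study
whitefish_bdt_per_ha = 67_000   # wet-season whitefish income
bdt_per_usd = 80                # exchange rate quoted in the interviews

for label, hectares in [("poor", 0.57), ("extreme poor", 0.016)]:
    annual_bdt = (shrimp_bdt_per_ha + whitefish_bdt_per_ha) * hectares
    print(f"{label}: ~BDT {annual_bdt:,.0f} per year (~USD {annual_bdt / bdt_per_usd:,.0f})")
# poor: ~BDT 62,130 per year (~USD 777); extreme poor: ~BDT 1,744 (~USD 22)

Even the higher of the two estimates is modest, and the extreme poor, with almost no land, obtain essentially nothing from aquaculture, which is consistent with the text's conclusion.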
Release phase and reorganization phases in Kamarkhola During the mid-1990s, in Kamarkhola, shrimp cultivation was mainly carried out by outside entrepreneurs, who leased land from local farmers in exchange of meagre rents.The success of these early entrepreneurs inspired local large landowners, who established their own independent farms or engaged in cooperative farming along with small farmers.Over time, as the adverse effects of shrimp cultivation became more apparent, people were divided over whether to continue shrimp aquaculture.Large landowners and some medium-sized ones who had gained good profits from shrimp, as well as some landless people who benefitted from working on shrimp farms, wanted to continue shrimp farming, whereas most others, especially small landowners and some large owners who faced losses from shrimp, were against it.In late 2008, residents of Kamarkhola and neighboring villages united to chase away the outside entrepreneurs when they tried to open the sluice gates in the embankment.The newly elected local parliamentary member and a couple of antisaline-water environmental protection groups played key roles in mobilizing farmers and helping them express their collective frustration against years of injustice.Finally, an order from the High Court permanently banned brackish water shrimp farming in Kamarkhola. In mid-2009, Kamarkhola was severely affected by cyclone Aila, which caused massive infrastructural damage, displacing people to temporary settlements on the embankment and prohibiting agricultural activities for approximately 1.5 years.Despite the short-term hardships, many people referred to the event as a blessing in disguise because it brought the area into the limelight.Institutional support, in terms of relief and rehabilitation materials, enabled the people to survive during the farming system transition and led to overall infrastructural development, including better housing, water, and sanitation facilities, cyclone shelters, and embankment reinforcement.After agricultural activities resumed in 2011, most farmers obtained good yields from rice, and some used their experience from shrimp farming to grow freshwater prawn and whitefish as polyculture in ponds or as integrated culture on their agricultural lands.Thus, in contrast to Mithakhali, the social-ecological system in Kamarkhola managed to reorganize and prevent the farming system from tipping over to a state that is undesirable for most farmers. Material well-being The material dimensions of well-being received comparatively greater attention than the other dimensions because income and food security were the most basic needs for survival.In Mithakhali, material well-being decreased for the majority of households (Fig. 
4), particularly in the middle income and poor classes, because of dwindling profits from shrimp farming, inability to grow rice or fish for subsistence, and the need to purchase all grocery items from the market.Poor shrimp yields also led to a reduction in land rents and profits from shrimprelated businesses.The lack of agricultural activities within the village compelled wage laborers to migrate to nearby subdistricts, often agreeing to work for lower wages.The increased use of bamboo cages for harvesting shrimp also lowered the need for labor on large shrimp farms.In contrast, rich households reported improved well-being, particularly because of the accumulation of land over three decades, which allowed them to carry out largescale aquaculture and invest in high-return nonfarm activities.Similarly, households from other wealth classes mentioned good profits from shrimp or income from multiple sources as the main reasons for increased well-being. The shrimp business has enabled my family, as well as many others, to escape from the poverty-stricken minimalistic rural lives. My father dropped out of primary school and worked as a medium-scale rice farmer during his 20s and 30s. Later, shrimp farming allowed him to earn lots of money, which he spent to educate his children. Now my brother and I have good jobs in Khulna city, where we live with our families. We come to the village from time to time to supervise the managers who look after our shrimp farms. Upper middle class farmer, Mithakhali In Kamarkhola, although the transition of the farming system was desirable for most people, many of them still considered themselves as being worse off than previously, particularly in the material dimension.This was mainly for three reasons.First, at the time of the study, it had been only three years since farming activities had resumed after cyclone Aila, and many households had not yet successfully started freshwater Galda prawn farming or livestock rearing on an economically beneficial scale.Second, although environmental quality was better, in the absence of shrimp disease outbreaks, the cash income from shrimp was much higher than that from rice.Third, households that were reliant on shrimp-related businesses were now solely reliant on other villages in the region. During shrimp cultivation, millions of taka worth of goods would be carried along these rivers day and night.People had cash in their pockets, and they could purchase the goods they needed.Now it's difficult to get over that addiction to cash.I secretly farm shrimp in a small parcel of land outside the embankment.But there is no satisfaction in cultivating shrimps stealthily in such small amounts of land.Rich farmer, Kamarkhola. However, the opportunity to pursue multiple livelihood activities such as agriculture, business, and service generated both marketand subsistence-oriented income, thus improving material wellbeing for some households.Poor and extreme poor farmers, who were previously dependent on wage labor only, had the opportunity to engage in sharecropping contracts with large farmers.In both villages, material well-being remained unchanged for some households because the increase in cash income was offset by rising expenditures to raise a family.Some poor and extreme poor families, who solely depended on physical labor and were not directly involved with farming, did not experience any significant changes, often saying, "We live hand to mouth; we were poor, and will always be poor." 
Relational well-being
While the material dimension refers to what people have, the relational dimension reflects what people can do with what they have, thus emphasizing people's freedom to act in ways that correspond with their own interests and values. In Mithakhali, most farmers reported a loss of relational well-being because large landowners used their power to shift from a shrimp-paddy rotational system to year-long aquaculture-based livelihoods. This suffocated the agency of smallholding farmers by trapping them in an undesirable farming system. People's words, tone of voice, and facial expressions often reflected a sense of despair, injustice, and frustration. The lack of autonomy in choosing livelihood strategies, the need to adhere to existing rules of farming, and fears about long-term livelihood outcomes were evident in some narrations.

Even if I want, I can never stop shrimp cultivation on my own. If other farmers adjacent to my land are doing so, I have to do it as well. Recently, due to the oil spillage in Sheila River near the Sundarbans, the government is thinking of creating an alternative route by dredging our nearby Passur River. But no matter how much they dredge, each high tide will bring tonnes of sediment and raise the river bed once again. The only solution is to stop shrimp farming and cut all the dykes along the farms so that the silt and clay can be deposited on the land during high tide. You must allow water exchange to occur in its natural way. If shrimp cultivation is stopped, the soil will start regaining its fertility in a year. Lower middle class farmer, Mithakhali.

There was also a general lack of faith in institutions such as the national and local government and nongovernmental organizations, and a lack of trust among community members and different actors in the aquaculture supply chain. Farmers faced losses from both ends; increased disease outbreaks were reducing shrimp yields, and farmers sometimes failed to receive a good market price for their produce.

I used to collect drums of shrimp from the farms and sell them at the depots in Khulna. Now the shrimp yields have decreased and many more people are involved in this business, so there is no profit. All the farmers used to trust me with their shrimp because they knew that I would repay them in time. The people at the depots used to tell me that they never found a bad fish in the drums I supplied. Even today, when I go there, they hug me out of affection and respect. But nowadays, the middlemen are pushing gels and water into the shrimp to increase their weight and get more profits. But in the long run, the European countries are identifying these adulterations and are now showing reluctance to buy our shrimp. Lower middle class farmer, Mithakhali.

In contrast, some considered the change in farming a necessary transformation that enabled farmers to cope with the changing needs of society. Three decades previously, the population size was smaller, and competition for natural resources was limited. People could spend their entire lifetimes within the confines of their village, with food sufficiency being the only concern. However, a better life in the new millennium necessitates cash for pursuing education, accessing proper health care, and purchasing consumptive goods such as televisions and mobile phones. Thus, relational well-being improved because cash from shrimp farming provided freedom of choice.
In Kamarkhola, although material well-being remained unchanged or even worsened for some people, relational well-being improved significantly because people had the freedom to act in ways that were meaningful to them. People had confidence in the local government leader, who helped them take collective action against the outside entrepreneurs. However, they perceived nongovernmental organizations as profit-making organizations that ripped off the poor in the name of development. Nevertheless, many households relied on microcredit for investment in crops and fisheries. Well-being also involved living well together as a community, rather than pursuing one's own selfish motives.

Those who say that they were better off during the shrimp period are salt pirates! They are like predatory animals! Shrimp farming only benefitted 5 out of 100 people, while the poor and landless suffered from poverty. If they asked for some fish, they'd be beaten up by the farm owner. But now if a hungry person comes to my door asking for rice, he does not return empty handed. Lower middle class farmer, Kamarkhola.

Subjective well-being
Subjective well-being refers to what people think or feel about what they have or do. In rural Bangladesh, rice farming traditionally formed an integral aspect of cultural identity. There was pride and satisfaction in being recognized as a successful rice farmer. Large landowners often served as informal village leaders and supported smallholding farmers in times of need. In Mithakhali, the inability to grow rice and the general shift in the social structure led to a loss of subjective well-being.

We have been rice farmers for generations; we neither understand nor can do anything other than rice. After the harvest, my yard would be filled with piles of paddy, and workers would be busy milling them. The paddy heaps were so high that our children would climb them to see the entire village. My homestead yard used to be filled with large buffalos that were used for ploughing the land. Now I have a couple of malnourished cows. Upper middle class farmer, Mithakhali.

The opposite was true for most farmers in Kamarkhola; the farming system transformation led to better environmental quality and greater peace of mind. Vegetation cover and soil quality improved over time. Although some farmers reported losses in prawn yield because of disease outbreaks, freshwater prawn cultivation was relatively less risky.

During shrimp cultivation, the roads used to be so muddy all the time that if you walked along, your shirt would be spilled with mud. The air was very toxic, it felt as if we were inhaling chemicals. Now it feels great to have so many fruit trees around our house. Our children have something to eat. When a guest like you comes along we have something to offer. Upper middle class farmer, Kamarkhola.

DISCUSSION AND CONCLUSION
Social-ecological systems such as the aquaculture system studied here are complex adaptive systems in which actors with different values and interests interact with each other and their natural environment. Actors learn from their past experiences and use their accumulated knowledge to respond to challenges. The system is not governed by deterministic laws: as the system evolves, the rules of the game change (Darnhofer et al. 2010).
2010).Unpacking these complex chains of interactions, spanning from local to global scales, requires interdisciplinary approaches of scientific investigation; however, differences in epistemological origins and methodologies associated with different concepts often create cognitive challenges to capturing the breadth without sacrificing the depth (Stojanovic et al. 2016).Here, we sought to bridge these disciplinary divides by empirically demonstrating the combined application of resilience, political ecology, and wellbeing perspectives in understanding the drivers and distributional effects of social-ecological change in coastal Bangladesh. The adaptive cycle heuristic offered a useful analytical framework to analyze the changes in system characteristics and behaviors and understand the multiple cross-scale interactions among several domains.Although the adaptive cycle may not be applicable to all complex systems (Cumming and Collier 2005), it has been particularly useful in analyzing changes in characteristics and behaviors of capture fisheries systems (Seixas and Berkes 2003, Goulden et al. 2013, Jacques 2015, Prado et al. 2015), with relatively limited application in culture fisheries systems (Garschagen 2010, Beymer-Farris et al. 2012).The evolution of the shrimp aquaculture system studied here closely adheres to the attributes of the different phases of the adaptive cycle model.The exploitation phase was characterized by https://www.ecologyandsociety.org/vol22/iss2/art45/plentiful natural resources and rapid growth of the aquaculture industry.Availability of fallow lands during the dry season, good soil productivity, and abundance of wild postlarvae enabled outside entrepreneurs to earn huge amounts of cash with minimal investment.New social hierarchies were formed as the traditional patron-client relationships among peasants were replaced by market-oriented cash crops.During the conservation phase, the growth rate slowed as land scarcity impeded further extensification of shrimp aquaculture, and productivity was increased by stocking hatchery-bred postlarvae in addition to wild ones.The system's potential and connectedness increased at the cost of decreasing resilience.Development of associated services such as hatcheries, depots, and processing plants expanded social networks along the supply chain, and the shrimp-paddy rotational system was institutionalized throughout the coastal region.Disease outbreaks on shrimp farms, declining paddy yields, and distributional injustices between outside entrepreneurs and local farmers triggered the release phase.The cohesive social structure became unstable, and contradictory coalitions of interests started to emerge.Whereas some farmers still favored the cash crop economy, others preferred to revert to the traditional subsistencebased farming system, thus forming new constellations of values in both villages.The farming systems in both villages transformed to a new state; however, local governance processes and power dynamics among farmers of different wealth classes determined whose desirable state was reached.During the reorganization phase, farmers drew upon their skills and knowledge to experiment with newer forms of livelihoods such as pond-based polyculture of freshwater prawn and whitefish (in Kamarkhola) and land-based farming of different marine and freshwater fish (in Mithakhali). 
The resilience approach thus provided a system-oriented analysis of the changes in potential and connectedness within the social and ecological spheres without sufficiently engaging with the roles of power, interests, and agency in navigating change.Integration of a political ecology approach offered an actor-oriented perspective that proved to be essential in explaining the root causes of the different responses of the two villages.For instance, in Mithakhali, at the time of study, shrimp cultivation was carried out by large local landowners rather than by outside entrepreneurs, who were overthrown in the late 1990s.This made it difficult for local people to protest against shrimp farming because the large landowners had the rights to farm their own land as they pleased.The ability to gain profits from shrimp mainly depended on the capacity to own, lease, and control land; thus, large landowners with political connections and the financial resources necessary for investment turned out to be winners, whereas poor and landless farmers were pushed into further poverty (see also Abdullah et al. 2017).However, in Kamarkhola, shrimp cultivation was mostly carried out by outside entrepreneurs, making it comparatively easier for the local farmers to evict them in 2008, with the support of local political leaders and grassroots organizations.Moreover, in Mithakhali, outside entrepreneurs were evicted in the 1990s because local farmers wanted to cultivate shrimp on their own land and earn increased cash.At that time, the negative effects of shrimp cultivation on other livelihood sources were not apparent.However, in Kamarkhola, when local farmers protested against outside entrepreneurs in 2008, they wanted to stop shrimp and revert to paddy because they were aware of the adverse consequences of brackish water shrimp farming. In addition, in Kamarkhola, the presence of certain key elements of social resilience such as social memory, leadership, and crisis enabled the transition of the farming system to a more desirable state.During the 1990s, there were various kinds of local resistance in Khulna to the appropriation of public lands, coercive treatment of small-scale rice farmers reluctant to lease out their land, and flooding of paddy fields with saline water.Hence, the culture of social movements and dealing with crisis actively through collective action was embedded within the social memory of the local people (see also Beymer-Farris et al. 2012).As in previous social movements, the protests in Kamarkhola were supported by members of local political parties or nongovernmental organizations who played key roles in organizing local community members and helping them express their collective frustration against years of injustice.In this case, the local parliamentary member, in association with a couple of antisaline-water environmental protection groups, played a crucial role in mobilizing people and ultimately obtaining an order from the High Court that banned shrimp cultivation in the area.Finally, the destruction created by the cyclone opened up opportunities to start a new farming regime.Whereas farmers in Mithakhali were concerned about the immediate difficulties of stopping shrimp cultivation, those in Kamarkhola could depend on cyclone aid during the transition period. 
Application of the social conception of well-being provided a nuanced understanding of the distributional effects of social-ecological change on households of different classes. The empirical evidence showed that well-being was not only determined by economic gains, it also relied on people's freedom to act in ways that were consistent with their own values and aspirations, which were, in turn, shaped by their perceptions of the surrounding environment and understanding of what constitutes a good life. Rice cultivation was not just a job but a way of life; sufficiency in rice is an important aspect of well-being for most people in rural Bangladesh (see also White 2010). A recent study in coastal Bangladesh also noted that material gains from shrimp farming were offset by worsening subjective well-being caused by the loss of self-sufficiency in rice, frustration at injustices related to land expropriation, and despair about the future (Belton 2016). In contrast, freshwater prawn, fish, and paddy cultivation positively contributed to societal well-being by enabling both cash income and food security, creating more equitable distribution of resources, and by retaining a cultural identity as rice farmers (Belton 2016).

The integration of resilience, political ecology, and well-being approaches here thus helped to engage better with the social complexities of change and provided a more grounded analysis of what is desirable and for whom. Social-ecological systems research is often dominated by system-oriented approaches that tend to rely on quantitative measurements of linkages among various components. Factors and processes that do not fit within the boxes and arrows of the system model are sometimes ignored. For instance, in studying the interrelationships between ecosystem services and well-being in coastal Bangladesh, Hossain et al. (2016, 2017) used indicators such as percentage of population below the poverty line, gross domestic product, and production cost as measures of material well-being; education as a proxy for freedom of choice; and water and sanitation facilities, housing conditions, and birth by a skilled health trainer as measures of quality of life. Analysis of aggregate indicators at a regional level reflected strong positive relationships between provisioning services and material well-being, and weak relationships with regulating services. As argued by Dawson and Martin (2015), such reductionist approaches fail to acknowledge the conflicting objectives of different interest groups, the power relations, and trade-offs associated with changes in ecosystem services. Aggregate measures can lead to policies that seek to increase overall economic growth to promote human development. The adverse socioeconomic and agro-ecological impacts resulting from the unregulated growth of the shrimp industry in coastal Bangladesh are a living example of the dangers of such reductionist research approaches and policy formulation. Interdisciplinary approaches are essential for studying human-nature interactions; however, using social theories as addendums to established ecological frameworks can prove to be counterproductive. Methodological approaches should be tailored to capture the inner workings of human societies and heterogeneous needs of different people.
Fig. 1. Map of Bangladesh showing the locations of the two study sites.

Fig. 4. Changes in material well-being resulting from changes in farming systems in Mithakhali and Kamarkhola.

Wealth class characteristics (Poor and Extreme Poor):
Poor
• Kacha houses with mud floors, mud/bamboo walls and leaf/straw roofs • Can afford two meals a day, with occasional protein intake • Income usually not enough to meet household expenses; often have loans from NGOs
• Poor housing with mud floors and walls/roofs made of palm leaves/straw • Always face food shortage, hardly can afford protein items • Income not enough to meet household expenses; often have loans from NGOs
Extreme Poor
• Do not have any agricultural land, many residing on the embankment • Mainly dependent on wage laboring; some engaged in sharecropping
• Do not have any agricultural land • Mainly dependent on wage laboring/petty trades

Table A1.7. Characteristics used for structuring and analyzing data in relation to the adaptive cycle (characteristics of a SES in terms of its potential and connectedness; characteristics of the shrimp industry as identified from empirical evidence).
Exploitation phase
• Abundance of resources, allowing competition among alternative social or ecological groups and formation of new hierarchies; system exhibits flexibility and high resilience
• Availability of fallow land during the dry season; abundance and diversity of post-larvae and fish juveniles in tidal water; adoption of export-oriented growth policy, creating demand for market-based products; traditional patron-client peasant societies being replaced by commercial aquaculture
Conservation phase
• Accumulation of ecological capital, such as biomass and nutrients, and social capital, such as skills, networks, trust and human relationships; system exhibits stability and rigidity, as resources are bound up by tight organisation, thus excluding domination by alternative species or social institutions
• High levels of financial investments by the government as well as large local farmers; development of ancillary services along the supply chain, creating employment and trade networks; shrimp cultivation became the dominant livelihood activity, occupying private farmland, mangrove forests, public land and waterbodies
Release phase
• Release of accumulated capital and collapse of system structure; social capital and behavior can break away from normalised routines and positions
• Increased salinity leading to adverse impacts on subsistence-based livelihood activities; disease outbreaks in shrimp farms; reluctance to continue brackish water shrimp farming and social movements against outside entrepreneurs; occurrence of severe cyclones and tidal surges
Re-organisation phase
• Social learning and memory support experimentation and development of novel ideas, while crises provide windows of opportunity; specific coalitions of interests emerge and compete for discursive dominance
• Skills acquired from brackish water shrimp cultivation used to experiment with whitefish or freshwater prawn cultivation; destruction by cyclone Aila providing opportunity for changes in farming systems; difference in perceptions on brackish water shrimp cultivation; recognition of the ecological and economic potential for integrated freshwater prawn and paddy farming
v3-fos-license
2016-05-15T00:46:21.109Z
2011-03-31T00:00:00.000
208914032
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1107/s160053681101097x", "pdf_hash": "71b0bd60c53e8f4fa8640525131e1c0cc36be00c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44871", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "eac1cb260d1124359d346c1a1727181de9fe30fa", "year": 2011 }
pes2o/s2orc
5-Cyclohexyl-2-(4-fluorophenyl)-3-isopropylsulfonyl-1-benzofuran

In the title compound, C23H25FO2S, the cyclohexyl ring adopts a chair conformation. The 4-fluorophenyl ring makes a dihedral angle of 50.74 (4)° with the mean plane of the benzofuran fragment. In the crystal, molecules are linked by intermolecular C—H⋯π interactions. Cg is the centroid of the C2-C7 benzene ring. Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: EZ2240).

(… et al., 2009, Galal et al., 2009, Khan et al., 2005). These compounds occur in a wide range of natural products (Akgul & Anil, 2003; Soekamto et al., 2003). As a part of our ongoing study of the substituent effect on the solid state structures of 2-aryl-5-cyclohexyl-3-methylsulfinyl-1-benzofuran analogues (Choi et al., 2011a, b), we report herein on the molecular and crystal structures of the title compound. In the title compound (Fig. 1), the benzofuran unit is essentially planar, with a mean deviation of 0.010 (1) Å from the least-squares plane defined by the nine constituent atoms. The 4-fluorophenyl ring makes a dihedral angle of 50.74 (4)° with the mean plane of the benzofuran fragment. The crystal packing is stabilized by intermolecular C—H⋯π interactions between a cyclohexyl H atom and the benzene ring (Table 1; C14—H14A⋯Cg i, Cg is the centroid of the C2-C7 benzene ring).

Experimental
77% 3-chloroperoxybenzoic acid (448 mg, 2.0 mmol) was added in small portions to a stirred solution of 5-cyclohexyl-2-(4-fluorophenyl)-3-isopropylsulfanyl-1-benzofuran (331 mg, 0.9 mmol) in dichloromethane (40 mL) at 273 K. After being stirred at room temperature for 6 h, the mixture was washed with saturated sodium bicarbonate solution and the organic layer was separated, dried over magnesium sulfate, filtered and concentrated at reduced pressure. The residue was purified by column chromatography (benzene) to afford the title compound as a colorless solid [yield 73%, m.p. 417-418 K; Rf = 0.66 (benzene)]. Single crystals suitable for X-ray diffraction were prepared by slow evaporation of a solution of the title compound in ethyl acetate at room temperature.

Refinement
All H atoms were positioned geometrically and refined using a riding model, with C—H = 0.95 Å for aryl, 1.00 Å for methine, 0.99 Å for methylene and 0.98 Å for methyl H atoms, respectively. Uiso(H) = 1.2Ueq(C) for aryl, methine and methylene, and 1.5Ueq(C) for methyl H atoms.

Special details
Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.
Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F², conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > 2σ(F²) is used only for calculating R-factors(gt) etc.
and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
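For readers who wish to verify reported ring-plane geometry, the dihedral angle between two least-squares (mean) planes, such as the 50.74 (4)° angle between the 4-fluorophenyl ring and the benzofuran fragment, can be estimated from Cartesian atomic coordinates with the following minimal numpy sketch; this is an illustrative calculation, not the routine used by the refinement software, and the array names are placeholders.

# Minimal sketch: dihedral angle between two least-squares planes
# fitted through sets of atomic positions (Cartesian coordinates in Å).
import numpy as np

def plane_normal(coords):
    """Unit normal of the least-squares plane through an (N, 3) array of points."""
    centered = coords - coords.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

def dihedral_between_planes(ring_a, ring_b):
    n1, n2 = plane_normal(ring_a), plane_normal(ring_b)
    cos_angle = abs(np.dot(n1, n2))   # report the acute angle between the planes
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Hypothetical usage with (N, 3) coordinate arrays for the two ring fragments:
# angle = dihedral_between_planes(fluorophenyl_xyz, benzofuran_xyz)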
v3-fos-license
2023-03-04T16:13:24.972Z
2023-03-01T00:00:00.000
257329555
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1648-9144/59/3/488/pdf?version=1677718632", "pdf_hash": "a6c74303806074a53b39b566b2a76b630b8ad7fa", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44872", "s2fieldsofstudy": [ "Medicine" ], "sha1": "87296bb1dfa6f293524d9c28a07f731a47c0160e", "year": 2023 }
pes2o/s2orc
Adaptive Immunosuppression in Lung Transplant Recipients Applying Complementary Biomarkers: The Zurich Protocol Achieving adequate immunosuppression for lung transplant recipients in the first year after lung transplantation is a key challenge. Prophylaxis of allograft rejection must be balanced with the adverse events associated with immunosuppressive drugs, for example infection, renal failure, and diabetes. A triple immunosuppressive combination is standard, including a steroid, a calcineurin inhibitor, and an antiproliferative compound beginning with the highest levels of immunosuppression and a subsequent tapering of the dose, usually guided by therapeutic drug monitoring and considering clinical results, bronchoscopy sampling results, and additional biomarkers such as serum viral replication or donor-specific antibodies. Balancing the net immunosuppression level required to prevent rejection without overly increasing the risk of infection and other complications during the tapering phase is not well standardized and requires repeated assessments for dose-adjustments. In our adaptive immunosuppression approach, we additionally consider results from the white blood cell counts, in particular lymphocytes and eosinophils, as biomarkers for monitoring the level of immunosuppression and additionally use them as therapeutic targets to fine-tune the immunosuppressive strategy over time. The concept and its rationale are outlined, and areas of future research mentioned. Introduction Three decades ago, before the introduction of thiopurine methyltransferase (TPMT) enzyme level testing, azathioprine dosing was adjusted according to total leucocyte counts, with a cut-off at 3.5 or 3.0 G/L. This was a simple strategy using a biomarker to monitor drug effects [1][2][3]. While azathioprine was one of the commonly applied immunosuppressive drugs in the early days of lung transplantation, it has been largely replaced by mycophenolate mofetil (MMF), after the latter demonstrated a decreased incidence of biopsy-proven acute cellular rejection and chronic allograft dysfunction [4]. Lung transplantation was initiated in Zurich, Switzerland, in 1992 by the thoracic surgeon Walter Weder and the pulmonologist Rudolf Speich. In the initial phase, the program benefited from experiences obtained from the Toronto lung transplant program. The research of these early Swiss transplant specialists focused on the anastomotic technique and handling of infections in lung transplant recipients as well as the characterization of chronic rejection as "bronchiolitis obliterans syndrome". By the turn of the century, 88 transplantations had been performed and by the end of 2022, we count 619 lung transplantations conducted in Zurich. For nearly 3 decades, ciclosporin has been our first choice of calcineurin inhibitor (CNI) except in pediatric candidates where tacrolimus is preferred. As in many transplant centers, therapeutic drug monitoring is the mainstay for adapting the drug dosing mainly for CNIs (namely ciclosporin and tacrolimus), and to a lesser extent for MMF and the mammalian-target of rapamycin (mTOR) inhibitor everolimus [5]. Frequently, CNI drug target levels are lowered in patients with relevant kidney dysfunction due to their nephrotoxicity and impaired immune responses due to the renal failure itself [6]. Further deterioration in kidney function should be avoided since it reduces quality of life, graft survival, and may lead to end-stage renal disease. 
A relevant proportion of lung transplant recipients (LTRs) requires renal replacement therapy, i.e., dialysis or even kidney transplantation. In patients with reduced kidney function, we adapt target drug levels, especially for CNIs or CNIs in combination with everolimus (Table 1).

Table 1. Target trough levels by time after transplantation:
               10-12 **     8-10 **
3-6 months     9-11         7-9
6-12 months    8-10         6-8 ***
12-24 months   7-9 ***      5-7 ***
>24 months     6-8 ***      4-6 ***
MMF/MPA trough levels: 2-5 mg/mL (as orientation; mainly controlled by differential blood counts). AUC = area under the curve; MMF = mycophenolate mofetil (CellCept); MPA = mycophenolic acid (Myfortic); Everolimus trough level: 3-5 ng/mL. Note: Everolimus is avoided in the first 1-2 months after transplantation due to antifibrotic activity and potential wound healing problems. ** if patient is 60 years or older, then the target range is one level lower (applicable to the whole timeframe). *** if everolimus is given, then 3-5 mg/L target level for both everolimus and tacrolimus (beyond 9 months post-transplant).

At our center, we have also been adjusting immunosuppression by applying complementary biomarkers retrieved from differential blood counts [7][8][9]. Naturally, we have observed total leukocyte counts and total neutrophil counts, but in most cases, these do not pose a major concern unless certain antibiotic or antiviral agents with bone marrow suppressing side effects are used simultaneously, or in the elderly, where the bone marrow reserves are reduced. Nevertheless, severe neutropenia (<0.5 G/L) is associated with decreased survival and increased infection rates and should be managed by dose reduction in immunosuppressive therapy [10].

Serum Lymphocyte Counts
Our approach mainly focuses on total lymphocyte and eosinophil counts and, if in doubt, we consider the percent values of these cells in relation to total leukocytes, thus taking a possible leukocytosis into account (Figure 1).

Figure 1. Adaptive immunosuppression considering differential white blood cell counts (eosinophils, lymphocytes) and other factors (therapeutic drug monitoring levels of calcineurin inhibitors (CNI), kidney function, and donor-specific antibodies (DSA)). We refer to this concept as the "Rule of Five" (the number five appearing in many of the main target values), showing graphically the components of the immunosuppressive therapy including target zones of lymphocytes and eosinophils.

As a rule of thumb, we keep lymphocytes suppressed during the first six months, targeting lymphocyte counts just below 1 G/L and above 0.5 G/L. Lower lymphocyte counts will increase the risk of infections disproportionally (Figure 1). If lymphocyte counts fall below 0.5 G/L, we reduce the antimetabolite (mycophenolate) drug dosage irrespective of measured mycophenolic acid (MPA) serum drug levels. There are only very few exceptions to this rule, such as low lymphocyte counts during mild viral infections or extracorporeal photopheresis (ECP), when antimetabolite dosages are already low and the patient condition is stable without recurrent infections. In these situations, we maintain the antimetabolite dose. With our standard daily dose of 2 × 1.5 g mycophenolate mofetil combined with a moderate dose of prednisone (i.e., 0.5 mg/kg body weight early post-transplant) we usually achieve suppressed lymphocyte counts during the first few months post-transplant. In underweight patients, we initially apply 2 × 1 g daily.
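Expressed schematically, the lymphocyte rule above amounts to the following minimal sketch; the thresholds are those stated in the text, whereas the function name, parameters, and returned phrases are illustrative simplifications rather than a validated clinical decision tool.

# Illustrative sketch of the lymphocyte-guided rule described above
# (target 0.5-1.0 G/L during the first six months; below 0.5 G/L, reduce
# the antimetabolite dose irrespective of MPA levels, except in the stated
# special situations). Simplified; not a clinical decision tool.
def lymphocyte_guided_action(lymphocytes_gl, months_post_tx,
                             mild_viral_infection=False, on_ecp=False,
                             antimetabolite_dose_already_low=False):
    if lymphocytes_gl < 0.5:
        if (mild_viral_infection or on_ecp) and antimetabolite_dose_already_low:
            return "maintain antimetabolite dose (stated exception)"
        return "reduce antimetabolite (mycophenolate) dose, regardless of MPA level"
    if months_post_tx <= 6 and lymphocytes_gl >= 1.0:
        return "lymphocytes not suppressed below 1 G/L; review overall immunosuppression"
    return "within the described target zone"

# Hypothetical example, three months post-transplant:
# print(lymphocyte_guided_action(0.4, months_post_tx=3))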
We do not increase the mycophenolate above 2 × 2 g (this is rarely used, since it is an off-label dosage in Switzerland). In our experience, the elderly have a reduced bone marrow reserve and are more sensitive to myelotoxic compounds, in particular when combinations are used such as combinations of mycophenolate, valaciclovir, and myelosuppressive antibiotics such as piperacillin/tazobactam [7]. Additionally, we take the presence of HLA antibodies and in particular donor-specific antibodies (DSA) into account during the tapering of overall immunosuppression. Our treat-to-target approach aims for the lowest possible immunosuppression preventing rejection, thus reducing organ toxicity and infections. Fearing rejection, this simple concept is often ignored by transplant physicians in our setting, favoring more intense immunosuppressive strategies, if in doubt. Serum Eosinophil Counts As a third biomarker, we monitor eosinophil counts and aim for values <0.5 G/L (or less than 5%) during the entire post-lung transplant follow-up ( Figure 1). Often, eosinophil counts remain very low under the immunosuppressive triple therapy, with values ranging between 0 to 0.25 G/L. Once they tend to increase over time, we check for signs of rejection, the overall immunosuppression (considering the steroid dose, the CNI dose, the antimetabolite dose, signs of viral replication CMV, etc.), and finally the presence of HLAantibodies and DSA to assess if there is any evidence of insufficient immunosuppression or, less likely in this situation, over-immunosuppression (i.e., no antibodies at all, but possibly viral replication: EBV or CMV). For the long-term management of immunosuppression, we mainly take the evolution of drug dosages, lung function, and renal function into account. Therefore, since the start of the Zurich lung transplantation program in 1992, we have recorded drug dosages, drug levels, lung function, renal function, differential blood counts, C-reactive protein, creatinine, and other biomarkers of each individual patient in a database-spreadsheet, allowing us to monitor changes over time and assisting our decision-making process on modifications to improve individualized immunosuppression. Of course, we also search for other reasons for increased eosinophils, such as primary eosinophilia and secondary causes including parasitic infections, but usually only in cases with high eosinophilia, i.e., serum eosinophils >0.5 G/L or >5%. Finally, bronchoalveolar lavage (BAL) differential blood-cell counts may also be a potential marker for allograft rejection [8,9]. The disadvantage of BAL sampling, however, is that it requires a more invasive strategy (bronchoscopy) to obtain the sample, which is more time-consuming as well as resource-intensive and poses an increased risk to the patient compared to a simple differential blood count. Monitoring for Acute Rejection by Surveillance Bronchoscopies and Tapering Steroid Dose We perform four to six surveillance bronchoscopies during the first year post-transplant. The obtained information from cytological (BAL) and histological samples (transbronchial biopsies, mucosal biopsies, or cryobiopsies) helps us taper or modify immunosuppression in this first post-transplant year, leading to a monthly reduction in the steroid doses by 5 mg if no signs of rejection are documented. Based on our preliminary experience, cryobiopsies tend to have a higher diagnostic yield in the evaluation of acute allograft dysfunction as compared to transbronchial biopsies [11]. 
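The eosinophil target and the checks triggered when it is exceeded can be sketched in the same schematic fashion; the thresholds are those given in the text, and the returned list is a condensed paraphrase of the work-up described, not an exhaustive protocol.

# Illustrative sketch of the eosinophil target described above (< 0.5 G/L and
# < 5% of total leukocytes) and the checks triggered when it is exceeded.
def eosinophil_check(eosinophils_gl, leukocytes_gl):
    eos_percent = 100.0 * eosinophils_gl / leukocytes_gl
    if eosinophils_gl >= 0.5 or eos_percent >= 5.0:
        return ["check for signs of rejection",
                "review overall immunosuppression (steroid, CNI, antimetabolite doses)",
                "look for viral replication (CMV, EBV)",
                "test for HLA antibodies / donor-specific antibodies",
                "consider other causes of eosinophilia if counts are high"]
    return ["within the described target zone (< 0.5 G/L and < 5%)"]

# Hypothetical example:
# print(eosinophil_check(eosinophils_gl=0.6, leukocytes_gl=7.0))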
In the case of acute allograft rejection (ISHLT > 2), we implement a steroid pulse therapy ranging from oral glucocorticoids (ISHLT A2) to high-dose glucocorticoids with methylprednisolone 500 mg to 1 g per day for 3 days (ISHLT A3-4). At the same time, the immunosuppressive therapy is adjusted by evaluating a CNI switch between ciclosporin and tacrolimus, or a switch between other compounds such as prednisone and prednisolone or mycophenolate mofetil and mycophenolic acid. We rarely introduce everolimus during the first 6 months after lung transplantation but consider it at a later stage, especially in the context of progressive renal failure [12]. Sometimes, we add azithromycin or pravastatin, and rarely montelukast for immunomodulation in this early phase when evidence of allograft dysfunction becomes evident. We typically perform a follow-up bronchoscopy approximately 4 to 6 weeks after modification of immunosuppression, especially when higher grades of acute allograft dysfunction were documented histologically. Some Practical Dosing Rules and Consideration of Renal Dysfunction A short word on immunosuppression medication dosing principles. We usually try to aim for symmetric doses, i.e., the same dose of CNI or antimetabolite in the morning and evening. However, when fine-tuning is necessary and specific low-dosed tablets are not provided by the pharmaceutical industry to allow precise dose adjustments, it is not always possible to stick to this principle and so we no longer aim for symmetry. If the CNI dose is asymmetric, the higher dose is given in the morning, due to the higher water/liquid intake during the day. If the antimetabolite dose is asymmetric, the higher dose is prescribed in the evening in order to counterbalance the morning-only dose of steroids and a higher morning dose of CNI (in the case of asymmetric dosing). With the advent of extended-release tacrolimus, we quite often give tacrolimus only in the morning and fine-tune the immunosuppression or the tacrolimus levels by altering the co-medication, in particular the itraconazole dosing. As a rule of thumb, we give itraconazole life-long and concomitantly with the CNI, thus benefiting from the drug metabolism inhibition of this agent, leading to substantially lower drug dosing requirements for the CNI drugs. This is a major cost-saving factor using a low-cost antifungal agent, thus also preventing some typical fungal infections. In the dosing of itraconazole, we usually start with 200 mg twice daily and adjust the dose according to drug levels, aiming for levels of 0.5-1.0 mg/L. An electrocardiogram is routinely performed when measurable itraconazole levels have been documented, usually about 3-5 weeks post-transplant, and again once maintenance immunosuppression has been established (i.e., at one-year post-transplant) in order to detect a possible prolonged QT time. In the long-term management of LTR with deteriorating kidney function, we consider the combination of everolimus with low-dose tacrolimus to limit further deterioration of renal function. In this combination, the target drug levels are 3-5 µg/L for both drugs. Sometimes, we even reduce these target ranges further to 2-4 µg/L for both drugs in cases of severe renal failure (eGFR < 30). Extracorporeal Photopheresis for Chronic Lung Allograft Dysfunction A further strategy to modify immunosuppression is to introduce extracorporeal photopheresis (ECP). 
This additional method of influencing and modifying the immune response allows us to reduce the dose of the pharmacological immunosuppressive agents, in particular the CNI in the case of nephrotoxicity or more uncommon CNI-adverse effects, such as posterior reversible encephalopathy syndrome (PRES). In case of malignancies such as skin cancer or post-transplant lymphoproliferative disorder (PTLD), the introduction of ECP allows a reduction in the antimetabolite dosing. Due to cost regulations in Switzerland, the patient must also have been diagnosed with chronic lung allograft dysfunction (CLAD) and all known immunomodulatory strategies should have been applied. In cases of cutaneous carcinogenesis, affecting approximately 20% of our LTRs, mTOR inhibitors (everolimus) are preferred instead of an antimetabolite [13]. It is noteworthy that everolimus can significantly delay wound-healing. Therefore, we stop everolimus 1-2 weeks prior to elective surgery and restart it after complete wound-healing. In rare cases, everolimus is used to replace the CNI completely (i.e., cancer diagnosis with chemotherapy). Why Consider Eosinophils and Lymphocytes? The exact relationship of eosinophil counts and allograft rejection is not fully understood and many investigators are still trying to dissect the complex interaction of key mediators in transplant alloimmunity [14][15][16][17][18][19][20][21]. For several years, we have considered eosinophils as an epiphenomenon highly correlating with lung allograft rejection, which is supported by recent research [18]. A key cytokine in eosinophil homeostasis is interleukin (IL)-5, which is secreted by type 2 helper (Th2) T cells and the more recently identified type 2 innate lymphoid cells (ILC2) [22,23]. The latter are tissue-resident cells expressing the IL-33 receptor mediating type-2 immune responses including parasite clearance and allergic responses, especially in the lungs [24,25]. It is thus interesting to hypothesize that increased eosinophil counts are either a biomarker for a Th2-mediated graft rejection and/or damaged lung graft epithelial cells, which by secreting IL-33 promote IL-5-expressing ILC2s, eventually inducing eosinophilia. However, the precise mechanism of lung allograft rejection, and especially the sometimes contradictory roles of helper T-cell subsets, is incompletely understood [26]. Due to the emergence of biologics targeting the Th2 axis by blocking IL-4 and IL-13 (e.g., with dupilumab) or IL-5 signaling (e.g., with mepolizumab and benralizumab), understanding the mechanism of eosinophilia in the lung transplant setting is of great importance to act as the basis for bringing these drugs into use in novel indications such as chronic allograft dysfunction. However, we have considered eosinophil counts as a supporting biomarker when modifying overall immunosuppression. We have been doing this due to a lack of other widely available biomarkers, which could represent the level of overall immunosuppression. Not only have we closely observed eosinophil counts in the post-transplant setting, but we have aimed to keep them in a target zone between ≤0.5 G/L and ≤5%, respectively ( Figure 1). In case of increased values, both the corticosteroids and the antimetabolite mycophenolate are most effective in reducing eosinophil counts. 
Since increasing the corticosteroid dose has its disadvantages due to known adverse effects, our main focus has been on increasing the mycophenolate in stable patients without signs of acute allograft dysfunction, defined by lung function decline. In acute allograft dysfunction, we apply steroid pulses as described above. In all other cases, a slight increase in mycophenolate dose is usually sufficient to reverse the trend of increasing eosinophil counts. An analogue phenomenon that we have observed is with DSA, which we attempt to detect after 3, 6, and 12 months and subsequently every 12 months unless a previous sample has shown DSAs. In that case, we measure DSA levels every 3 months, particularly when we have adjusted the immunosuppression, generally by increasing the antimetabolite. We usually increase the daily mycophenolate dose by 250 mg or 500 mg mycophenolate mofetil or 180 mg or 360 mg of mycophenolic acid for the enteric-coated preparation. As a rule, we rarely use serum MPA values to guide dosing mainly because there is insufficient evidence that therapeutic drug monitoring for this component improves lung transplant outcomes. If we measure mycophenolate levels, it is usually to assess drug adherence and intestinal absorption, as sometimes bioavailability can be an issue. Over-Immunosuppression and Calcineurin Inhibitor Therapeutic Drug Monitoring Over-immunosuppression also has drawbacks including infections (CMV, EBV, CARV), carcinogenesis, and PTLD. Therefore, the most appropriate target values for eosinophils as a biomarker for overall immunosuppression remain to be determined, which may also include a lower limit as part of the target range. In the future, there may be better measures, such as cell-free DNA or Torque teno virus, for estimating overall immunosuppression, which would guide us better in tailoring immunosuppressive drug doses. However, in the meantime, we rely on the "crude" value of eosinophils and lymphocytes. The target levels of CNI are tailored individually depending first on the documented acute cellular rejection episodes, presence of CLAD, and quantitative counts of CMV and EBV, and secondly on CNI side effects, the presence of kidney dysfunction, and the presence of DSA. We also measure total IgG values and substitute them by intravenous immunoglobulins in cases of recurrent infections and a total IgG level below the reference value. Therapeutic drug monitoring of ciclosporin after lung transplantation has traditionally used trough (C0) levels [27]. However, C0 levels have a poor correlation with areaunder-the-curve (AUC) measurements of ciclosporin exposure [28]. Hence, in patients with ciclosporin, the area under the curve (AUC) is determined after 3 months posttransplantation, then after 6 months and annually thereafter. This helps us to determine if the Cmax is after one or (more often) 2 h following drug intake (C1 or C2-type), and allows us to determine not only trough levels but also Cmax-values, and to calculate the AUC for the 4 h, giving us an idea of drug exposure. Cmax values are supposed to correlate much better with the ciclosporin exposure measured by AUC. In addition, using C1/C2 as target values might be associated with reduced rates of acute cellular rejection and CLAD of the bronchiolitis obliterans phenotype. Table 1 shows our target levels for CNIs, which have been compiled from various sources and slightly adapted to our clinical context, which includes an induction treatment with basiliximab after transplantation. 
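The 4-hour ciclosporin exposure described above can be approximated from timed concentration measurements with the trapezoidal rule. The sketch below is a generic numerical illustration; the sampling times and concentrations are hypothetical and are not taken from Table 1 or from any patient data.

```python
def auc_trapezoid(times_h, conc_ug_per_l):
    """Approximate AUC over the sampled interval using the trapezoidal rule."""
    auc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += (conc_ug_per_l[i] + conc_ug_per_l[i - 1]) / 2.0 * dt
    return auc

# Hypothetical 4-hour ciclosporin profile (illustrative values only):
# C0 = 150, C1 = 800, C2 = 1000, C4 = 500 ug/L -> AUC(0-4 h) = 2875 (ug/L)*h,
# with Cmax at 2 h, i.e. a "C2-type" profile in the terminology used above.
print(auc_trapezoid([0, 1, 2, 4], [150, 800, 1000, 500]))
```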
Age above 60 years has been considered as a target-dose lowering-factor, since our long-standing experience has shown that advanced age can be more safely treated with CNI drug target levels one step lower than for younger LTRs. Our approach to immunosuppression is in many ways similar to strategies by other institutions, but the personalized medicine approach adapts the immunosuppressive strategy to the lymphocyte and eosinophil counts. Studies are required to delineate if this strategy is associated with better outcomes, what underlying mechanisms may predominantly define lymphocyte and eosinophil counts, and if this strategy can compete with newer strategies that use other markers for the level of overall immunosuppression, such as cell-free DNA or Torque teno virus-based approaches. In conclusion, adaptive immunosuppression for lung transplant recipients incorporates many known principles of the triple immunosuppressive treatment and additionally considers the white blood cell counts, in particular lymphocyte and eosinophil counts. These values are considered a proxy for the level of immunosuppression and the underlying cytokine patterns and can be correlated with allograft dysfunction. Certain target values are considered in a time-dependent manner as displayed in Figure 1, which captures key elements considered in the personalized approach to immunosuppression. This concept needs to be studied in different settings and compared with newer methods of quantifying and guiding overall immunosuppressive levels by measuring donor-derived cell-free DNA and Torque teno virus levels. The advantage of this approach is that it uses highly standardized and widely available laboratory measurements.
v3-fos-license
2018-04-03T00:05:57.255Z
2014-08-28T00:00:00.000
9194246
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0103404&type=printable", "pdf_hash": "df229f2bc62c285af88bcb020655d4d7d0ca3822", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44873", "s2fieldsofstudy": [ "Biology" ], "sha1": "df229f2bc62c285af88bcb020655d4d7d0ca3822", "year": 2014 }
pes2o/s2orc
The Gene Expression Profile of CD11c+CD8α− Dendritic Cells in the Pre-Diabetic Pancreas of the NOD Mouse Two major dendritic cell (DC) subsets have been described in the pancreas of mice: The CD11c+CD8α− DCs (strong CD4+ T cell proliferation inducers) and the CD8α+CD103+ DCs (T cell apoptosis inducers). Here we analyzed the larger subset of CD11c+CD8α− DCs isolated from the pancreas of pre-diabetic NOD mice for genome-wide gene expression (validated by Q-PCR) to elucidate abnormalities in underlying gene expression networks. CD11c+CD8α− DCs were isolated from 5 week old NOD and control C57BL/6 pancreas. The steady state pancreatic NOD CD11c+CD8α− DCs showed a reduced expression of several gene networks important for the prime functions of these cells, i.e. for cell renewal, immune tolerance induction, migration and for the provision of growth factors including those for beta cell regeneration. A functional in vivo BrdU incorporation test showed the reduced proliferation of steady state pancreatic DC. The reduced expression of tolerance induction genes (CD200R, CCR5 and CD24) was supported on the protein level by flow cytometry. Also previously published functional tests on maturation, immune stimulation and migration confirm the molecular deficits of NOD steady state DC. Despite these deficiencies NOD pancreas CD11c+CD8α− DCs showed a hyperreactivity to LPS, which resulted in an enhanced pro-inflammatory state characterized by a gene profile of an enhanced expression of a number of classical inflammatory cytokines. The enhanced up-regulation of inflammatory genes was supported by the in vitro cytokine production profile of the DCs. In conclusion, our data show that NOD pancreatic CD11c+CD8α− DCs show various deficiencies in steady state, while hyperreactive when encountering a danger signal such as LPS. Introduction Diabetes mellitus type 1 (T1DM) is caused by an autoimmune reaction to the islets of Langerhans in the pancreas resulting in an autoimmune insulitis, in which the beta cells disappear with as consequence an absolute insulin deficiency. The NOD mouse model is considered an excellent model of human T1DM and spontaneously develops an autoimmune insulitis similar to T1DM patients [1][2][3]. With regard to the early phases of the NOD autoimmune insulitis Diana et al. recently showed a transient accumulation of a small number of plasmacytoid DCs, lymphocytes and B-cells in the pancreatic islets of NOD mice at 2 weeks of age [4]. An interaction between these infiltrating cells was shown to be involved in the onset of autoimmunity against the beta cells. For this very early time point of 2 weeks of age small transient accumulations of conventional DCs (cDCs) and macrophages around the islets have been reported on before [5,6], as well as on apoptosis abnormalities in the pancreas of the NOD mouse [7]. This first relatively mild intra-islet and peri-islet accumulation of immune cells at 2 weeks of age is followed by a second wave of a larger para-and peri-islet immune cell accumulation starting at 5 weeks of age consisting predominantly of cDCs and macrophages, later followed (7-8 weeks) by a massive lymphocyte accumulation [8,9] and a second wave of plasmacytoid DCs. At the time of this larger para-and peri-islet immune cell accumulation there is also a steady increase of dispersed cDCs and macrophages in the exocrine pancreas [8]. 
A key role for the peri-islet and pancreas accumulating cDC and macrophages in the pathogenesis of the destructive insulitis at 5 weeks of age is indicated by the demonstration that a temporal depletion of cDCs and macrophages at 5 weeks of age before the onset of lymphocytic insulitis blocks or significantly delays the diabetes onset in NOD mice [10,11]. Two major cDC subsets with different phenotypes have been described in the lymphoid organs of mice: The tolerogenic CD8a + CD103 + cDCs that induce T cell apoptosis and the CD8a 2 CD11c + cDCs that are strong inducers of CD4 T cell proliferation [12]. Both these subsets can also be found in the islets of the pancreas of mice [13,14], where the CD8a 2 CD11c + cDCs form the majority of cDC, which accumulate from 5 weeks onwards around the islets of Langerhans [13,15]. We have recently reported on reduced numbers of the minor population of tolerogenic CD8a + CD103 + DCs in the 5 week old pre-diabetic pancreas of NOD mice [16] and hypothesized that the reduced number of these tolerogenic DC contributes to the development of progressive destructive autoimmune insulitis. In this report we focus on the larger subset of immunogenic CD11c + CD8a 2 cDCs (from here referred to as CD8a 2 DCs) in the pancreas of NOD mice of 5 weeks of age [13,16]. We firstly analyzed this population versus a control pancreas CD8a 2 DC population of the C57BL/6 mouse in a genome-wide gene expression analysis to elucidate abnormalities in gene expression networks. Abnormally expressed key genes in the networks were validated by Q-PCR and in functional assays such as cell proliferation assays and flow cytometric analysis (tolerance inducing genes). To assess the responsiveness of the CD8a 2 DCs to a danger signal, we stimulated NOD (and control C57BL/6) CD8a 2 DCs isolated from the pancreas in vitro with LPS and measured the production of inflammatory cytokines; and in addition used whole genome analysis to measure changes in the networks of gene expression. Materials and Methods Mice C57BL/6J and NOD/ShiLtJ female mice were purchased at Charles River Laboratories (Maastricht, The Netherlands). Mice were housed in groups (littermates) under specific pathogen-free conditions with a standard dark-light cycle and fed ad libitum. The diabetes incidence in female NOD mice is about 80%. All mice were euthanized by CO 2 inhalation before collection of the tissues. All experimental procedures were approved by the Erasmus University Animal Welfare Committee in accordance with the Experiments on Animals Act ('Wet op de dierproeven'). Preparation of cell suspensions Pancreases of 5 week old mice were isolated after a cardiac perfusion, cut into small pieces and digested with Collagenase Type 1 (1 mg/ml), hyaluronidase (2 mg/ml) (both Sigma Aldrich, St. Louis, MO, USA) and DNAse I (0.3 mg/ml) (Roche Diagnostics, Almere, The Netherlands) for 40 minutes at 37uC. Subsequently, cells were flushed through a 70 mm filter and washed with DMEM +10% FCS. All cells were resuspended in PBS containing 0.1% BSA and were ready for flow cytometric staining. Single-cell suspensions from pancreas were labeled with CD45 beads (Miltenyi, Leiden, The Netherlands) and CD45 + cells were pre-sorted with the AutoMACS pro (Miltenyi) to remove most of the non-immune cells. The pancreatic CD45 + cells were further processed for FACS analysis or DC isolation. DC isolation and in vitro stimulation The pre-sorted pancreatic CD45 + cells were labeled with CD11c and CD8a in PBS containing 0.1% BSA. 
Subsequently, CD8α− DCs were sorted on a FACSAria II (Becton Dickinson). Figure S1 in File S1 shows the gating strategy. Re-evaluation of the sorted CD8α− DCs indicated >98% purity of the sorted cells. Half of the total number of sorted CD8α− DCs were washed and directly lysed in PicoPure extraction buffer (Arcturus, Applied Biosystems, Bleiswijk, The Netherlands) and stored at −80°C until the RNA isolation procedure. The other half of the CD8α− DCs were cultured for 18 hours in RPMI-1640 medium supplemented with 10% FCS and 50 µM beta-mercaptoethanol, with or without 1 µg/ml LPS from E. coli 0111:B4 (Sigma, Saint Louis, MO, USA). Finally, cells were harvested with 2 mM EDTA and lysed in extraction buffer for RNA isolation. Supernatants were collected and stored at −80°C.

RNA isolation, amplification and gene expression analysis

RNA was isolated with the PicoPure kit (Arcturus, Applied Biosystems) according to the manufacturer's protocol, including a DNase I treatment (Qiagen, Venlo, The Netherlands) to remove genomic DNA contamination. RNA quality was assessed on the Bioanalyzer (Agilent Technologies, Amstelveen, The Netherlands) and samples with a RIN > 8 were accepted. The RNA was reverse transcribed, amplified, biotinylated and fragmented with the Ovation Pico WTA v2 and Encore Biotin Module (NuGEN Technologies, Leek, The Netherlands) and subsequently hybridized on Mouse Genome 430 2.0 Arrays (Affymetrix, High Wycombe, UK) according to the manufacturer's protocols. The raw data (.CEL files), including metadata and a matrix with normalized gene expression, were uploaded to GEO and will be accessible from the publication date under accession number GSE45028 at http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE45028.

Microarray analysis and qPCR validation

Microarray analysis. Quality analysis of the CEL data was assessed by running a standardized workflow developed at the BiGCaT department of Maastricht University, The Netherlands (http://www.arrayanalysis.org/). The expression data (.CEL files) were imported and processed further with BRB-ArrayTools (R. Simon, http://linus.nci.nih.gov/BRB-ArrayTools.html). Gene expression data were normalized using RMA (Robust Multichip Average) [17]. A list of differentially expressed genes (DEGs) among the two classes was identified by using a multivariate permutation test with the class comparison tool in BRB-ArrayTools. The multivariate permutation test was used to provide 90% confidence that the false discovery rate (FDR) was less than 10% [18,19]. The FDR is the proportion of the list of genes claimed to be differentially expressed that are false positives. Partek Genomics Suite (Partek Inc., Saint Louis, MO, USA) was used for the principal component analysis (PCA) and for the hierarchically clustered representation of the DEGs. Ingenuity Pathway Analysis (Ingenuity Systems, www.ingenuity.com) was used for annotation, for mapping of the DEGs to known biological networks, and to visualize interactions between genes.

Quantitative PCR validation. RNA and cDNA for the Q-PCR validation were prepared according to the same procedure as described above. Q-PCR was performed with a commercially available mix (TaqMan Universal PCR Master Mix) according to the manufacturer's protocol on a 7900HT Fast Real-Time PCR System (Applied Biosystems). All TaqMan probes and consensus primers were preformulated and designed by the manufacturer (TaqMan Gene Expression Assays; Applied Biosystems).
The quantitative value obtained from Q-PCR is a cycle threshold (Ct). Normalized expression values for each gene were calculated from the Ct values.

BrdU incorporation and detection

Mice were injected intraperitoneally at an age of 5 weeks with 1 mg BrdU from the FITC BrdU flow kit (Becton Dickinson); BrdU (0.8 mg/ml) was added to the drinking water for the next 96 h. Mice were sacrificed after 24, 48 and 96 h and tissue was prepared as described in the preparation of cell suspensions. The pre-sorted pancreatic CD45+ cells were stained with cell surface markers and subsequently fixed and permeabilized using Cytofix/Cytoperm and Perm/Wash buffer from the BrdU flow kit according to the manufacturer's protocol. BrdU was detected with a monoclonal antibody, BrdU-FITC (Becton Dickinson). BrdU expression in the pancreatic DC was detected using a BD FACSCanto HTSII (Becton Dickinson) flow cytometer and analyzed with FlowJo software (Tree Star).

Cytokine measurements

Concentrations of IL-6, IL-10, IL-12p70 and TNF-α were measured in the supernatants from CD8α− DC cultures with the FlowCytomix cytometric bead array according to the manufacturer's protocol (eBioscience). Briefly, a mixture of beads coated with antibodies against IL-6, IL-10, IL-12p70 and TNF-α was incubated with the supernatant or standard mixture. The antigens present in the sample bind to the antibodies linked to the different fluorescent beads. A biotin-conjugated second antibody mixture was added and finally streptavidin-PE, to emit a fluorescent signal.

Statistical analysis and figures

For direct comparisons between the strains, the Mann-Whitney U test was used for unpaired analysis, or as noted otherwise in the figure legend. All analyses were carried out using IBM SPSS Statistics 20 software (SPSS, Chicago, IL, USA) and considered statistically significant if P < 0.05. Graphs were designed with GraphPad Prism 5.0 (GraphPad Software, La Jolla, CA, USA).

Results

Reduced expression of proliferation, maturation, migration, inflammation and growth factor gene networks in NOD steady state pancreatic CD8α− DCs

To characterize the CD8α− DC subset in the NOD, a microarray analysis was conducted on CD8α− DCs isolated from 5 weeks old NOD and C57BL/6 pancreases. In total, 2122 differentially expressed genes (DEGs) among the pancreatic NOD and C57BL/6 CD8α− DCs were identified, using a multivariate permutation test. Hierarchical clustering of the samples and PCA analysis indicated a clear distinction in global gene expression profiles between the NOD and C57BL/6 CD8α− DCs (Figure 1a, b). The majority of DEGs (1380; 65%) were down-regulated in the NOD pancreas CD8α− DCs. Figure 1c shows a hierarchically clustered heat map of all significant DEGs (Figure 1 legend: heatmap with hierarchical clustering, Euclidean distance with average linkage, of the DEGs among the NOD (cyan) and C57BL/6 (orange) pancreatic CD8α− DCs; normalized log2-transformed probeset expression values are visualized as a gradient from low (blue) to high (red) expression). We conducted Ingenuity Pathway Analysis of the DEGs and mainly found a reduced expression of gene networks, including growth factor networks with islet regeneration factors, in NOD steady state pancreatic CD8α− DCs. Table 1 shows the top abnormally expressed genes (lowest P-value, largest fold-up/down) in these networks as categorized by Ingenuity.
Of the networks we validated several of these top-ranking genes in q-PCR (Table 1; in bold) and the q-PCR results confirmed the results obtained from the microarray analysis. Proliferation/apoptosis network An important proportion of the discriminating cell proliferation and apoptosis DEGs were involved in down-regulation of proliferation, including a gene network involved in the proliferation of phagocytes ( Figure S2 in File S1) suggesting a poor proliferation capacity of the DCs. To functionally verify a putative reduced proliferation capacity of NOD pancreatic DCs we injected NOD and C57BL/6 mice with BrdU, and BrdU + CD11c + DCs in the pancreas were assessed by flow cytometric analysis after 24, 48, and 96 h (Figure 2a; CD8a was not included in the analysis as the vast majority of CD11c + DCs are CD8a 2 ). The total number of CD11c + BrdU + DCs in the NOD pancreas was significantly decreased as compared to the C57BL/6 pancreas at all timepoints (Figure 2b), corroborating the finding of the reduced expression of genes involved in cell proliferation. Cell maturation, inflammation and cell migration network A high ranking network within the reduced network of maturation and inflammation consisted of the down-regulation of various inflammatory response genes ( Figure S3 in File S1), such as Il10, Il12b, Ifng and Chi3l3 (YM1, an enzyme involved in alternative macrophage polarization and known for its Th2 cell promoting effects [19,20]) and the down-regulation of the costimulatory gene CD40. These findings suggest a reduced classical DC maturation of steady state NOD pancreatic CD8a 2 DCs. This observation is supported by previous functional studies of our group, in which we found with regard to the DC differentiation from bone marrow precursors that the cells showed a poor differentiation into fully competent classical immune stimulatory DCs, yet deviated to another phenotype, i.e. a more ''macrophage like'' phenotype [20]. It is therefore important to note that we found in this network various inflammatory genes known for macrophage activation up-regulated (such as Ifi202b, Il36g and Mif) as well as CD209b. CD209b also known as SIGN-R1 is a Ctype lectin involved in binding and capture of dextran, pathogens and encapsulated bacteria and highly expressed on macrophages [17]. Collectively these gene expression data point to a qualitatively different immune and inflammatory set point of the NOD CD8a 2 DCs (more ''macrophage-like''). We also found in the network of cell maturation, inflammation and cell migration the down-regulation of a set of genes involved in tolerance induction (CD200R3, CCR5 and CD24). To validate these findings we performed a limited flow cytometric analysis for the proteins encoded by these genes. NOD pancreatic CD8a 2 DCs expressed minor, but significantly reduced levels of CD200R3, CCR5, and CD24 as well as for CD86 (Figure 3). Several of the maturation and inflammatory network genes also belong to gene networks involved in migration of mononuclear leukocytes (e.g.Ccl2). The down-regulation of this network suggests a reduced migration/trafficking of the pancreatic CD8a 2 DCs in the NOD mouse model. This is in line with the previous functional observations of our group on a reduced migration capability of NOD DC [21,43]. 
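The strain comparisons in this and the following sections rely on the unpaired Mann-Whitney U test described in the methods. As a minimal illustration, the sketch below runs that test on invented BrdU+ DC counts; the numbers are hypothetical and are not the study's data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical BrdU+ CD11c+ DC counts per pancreas (invented values,
# chosen only to illustrate the test used for Figure 2b-type comparisons).
nod_counts = [120, 95, 140, 110]
b6_counts = [260, 310, 240, 280]

u_stat, p_value = mannwhitneyu(nod_counts, b6_counts, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")  # significant if p < 0.05
```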
Growth and support networks Of particular interest was also the down-regulation of growthfactor networks and a specific set of genes involved in islet regeneration belonging to the Islet Regenerating (Reg) gene family, which were strongly down-regulated in the NOD pancreatic CD8a 2 DCs (Table 1). These genes included Reg1, 2, 3a, 3d and 3g. On the other hand various genes important in interaction with neurons were found upregulated and of these neuronal cell adhesion molecule (NRCAM) was highly significant. NRCAMs play a role in neuronal cell adhesion and axon guidance, but is also expressed in the pancreas [18]. Other molecules that have been described as important in neuron interaction and that were found up-regulated were neuronal Ank2 and Cacna1 (Table 1). Interactions of DCs and macrophages with islet nerves in the early phases of the NOD insulitis are well documented [21,22]. NOD pancreas-derived steady state CD8a 2 DCs are hyperreactive to LPS stimulation with regard to the upregulation of inflammatory response genes We continued to analyze by microarray analysis the responsiveness of the pancreas CD8a 2 DC subset of both the NOD and C57BL/6 mouse to in vitro inflammatory stimulation with LPS for 18 hours (with PBS as control). Hierarchical clustering of the samples resulted in two clusters representing the mouse strain, each containing two sub-clusters representing LPS stimulation (Figure 1a). In addition, the PCA showed 6 clusters: two separate clusters indicating the DCs under steady state conditions (cyan and orange spheres), and four clusters representing the in vitro PBS/ LPS-stimulation of the DCs for both mouse strains (Figure 1b). Multivariate permutation testing was used to identify DEGs among the PBS-and LPS-stimulated CD8a 2 DCs obtained from either NOD or C57BL/6 pancreases. A total number of 66 common LPS-responsive genes were identified in the NOD and C57BL/6 pancreatic CD8a 2 DCs (Figure 4a). Ingenuity pathway analysis indicated that these genes were mainly involved in inflammatory responsiveness with a strong up-regulation of genes such as Il10, Il1b and Ptgs2 (Table 2 and Figure S4 in File S1). In addition, a unique pattern of LPS-responsive genes was identified for each mouse strain (Figure 4b). A larger number of LPSresponsive genes was identified in the NOD (666 in total) pancreas CD8a 2 DCs compared to C57BL/6 (17 in total), suggesting that NOD DCs are more sensitive to the effect of LPS. This was confirmed by the PCA analysis ( Figure 1b); there is a clear distinction between the in vitro PBS/LPS-treated NOD DCs (blue and purple spheres) in contrast to the C57BL/6 DCs (red and green spheres). There were no differences in the expression of tolllike receptor 4 on both the C57BL/6 as well as the NOD CD8a 2 DCs (data now shown). Ingenuity pathway analysis indicated that indeed the LPS-inducible genes unique for the NOD pancreatic CD8a 2 DCs were involved in inflammatory responsiveness including TREM-1 signaling. Of particular interest were the cytokines: Il6, Csf2 and Tnf, which were all specifically upregulated in the in vitro LPS-stimulated NOD pancreatic CD8a 2 DCs (Table 2). We therefore additionally measured a panel of cytokines in the supernatant of the LPS-stimulated pancreatic CD8a 2 DCs. LPS did not stimulate the production of IL-12 ( Figure 5). IL-10 and TNF-a production were increased after LPS stimulation (as in gene expression), but only reached statistical significance for IL-10 in the CD8a 2 DCs from the C57BL/6 pancreas. 
IL-6 production was stimulated by LPS and there was a small, but significant increase in IL-6 concentration in LPS stimulated NOD pancreatic CD8a 2 DCs as compared to PBS stimulated DCs and as compared to the LPS-stimulated CD8a 2 DCs isolated from the C57BL/6 pancreas ( Figure 5). These cytokine production findings of a slightly higher IL-6 production support the view that NOD CD8a 2 DCs are hyperreactive to LPS stimulation, though do not show a highly excessive pro-inflammatory cytokine production from LPS stimulated NOD CD8a-DCs. With regard to the other gene networks found repressed in NOD steady state pancreas CD8a 2 DCs, such as the proliferation network and the network of growth factor genes for islets, these networks were neither hyperreactive to LPS, nor different anymore between NOD and C57BL/6 under LPS conditions. It must be noted that in general LPS stimulation reduced the production of REG gene expression (although not significant) (Figure 4b). Genes important in the migration network, such as CCR2 and CCR5, stayed repressed in the NOD CD8a 2 DCs in comparison to C57BL/6 CD8a 2 DCs (Figure 4b) after LPS stimulation. Discussion Previously, we showed in a number of studies [20,23] that DC generation from NOD bone marrow precursors resulted in a low yield of DCs that had various macrophage characteristics, such as a high acid phosphatase content. These DCs were defective in stimulating T cells. On the other hand -and in contradictionthere are also a number of reports of other investigators showing that DCs generated from NOD bone marrow precursors have elevated co-stimulatory, IL-12 and NF-kB activation resulting in an enhanced stimulatory function and in Th1 skewing abilities [24][25][26]. Also in type 1 diabetic (T1DM) patients discrepancies with regard to the differentiation and maturation state of DCs have been reported [27]. We described in 1995 a defective maturation and stimulatory function of DCs derived from monocytes in T1DM patients [28], an observation which was supported by later studies of Takahashi et al and Skarsvik et al who also found the defects in pre-diabetic individuals [29,30]. However Zacher et al. did not find gross differences between monocyte-derived DCs of T1DM patients and healthy controls, with the limited discrepancies observed actually suggesting an enhanced maturation of the cells in T1DM [31]. Peng et al. also found signs of an activation of DCs in T1DM and described higher numbers of more mature circulating DCs as determined by flow cytometric analysis of the peripheral blood of recent onset T1DM patients [32]. However Vuckovic et al. using a similar methodology found decreased dendritic cell counts in children with recent onset T1DM [33]. This report on NOD pancreatic CD11c + CD8a 2 DCs provides greater insight into the above described discrepancies. It shows that the major subset of steady state DCs isolated from the early pre-diabetic NOD pancreas, the CD8a 2 DCs, has an altered gene expression set point and a reduced expression of several molecular networks important for the prime functions of the cell, such as cell renewal, immune tolerance induction, migration and the provision of growth factors for beta cell regeneration. 
This generally reduced expression state was easily switched over to hyper stimulation: The NOD steady state CD8a 2 DCs were hyperreactive to the danger signal LPS resulting in a state of gene expression with a number of classical pro-inflammatory factors and cytokine genes excessively raised which were particularly down in steady state. We also found some indications that this pro-inflammatory hyperreactivity occurred at the protein level: In the limited set of cytokine production experiments carried out, IL-6 production profiles from NOD LPS stimulated CD8a 2 DCs supported a hyperreactivity towards LPS. Interestingly a hyperproduction of inflammatory cytokines of NOD macrophages upon encounter of another danger-associated molecular patterns (DAMP), i.e. upon encounter with apoptotic or necrotic cells, has been described before [34]. It is tempting to speculate that the reduced expression of CD24 on the defective CD8a 2 DCs found here plays a key role in the exaggerated switch of the DCs to the pro-inflammatory state. CD24 represses DAMP-signal-induced immune responses and CD24 deficient mice display massive increases in pro-inflammatory cytokines [35]. Our study is in fact the first study to assess gene expression profiles of CD8a 2 DCs isolated from the pre-diabetic steady-state pancreas. Kodama et al. (2008) and Wu et al. (2012) studied gene expression profiles in splenocytes isolated from pre-diabetic NOD mice, (a mixture of leukocytes, mainly including lymphocytes, macrophages and DCs). These authors also found various abnormal gene expression patterns partly overlapping with ours, particularly the proliferation and immune response gene expression profiles [36,37]. Also, Kodama et al. and Wu et al. found the majority of genes to be repressed as compared to normal mice. The authors observed that a large part of the abnormally expressed genes were coded for in the diabetes susceptibility regions, the Idd chromosomal loci, and they suggested that this might explain the abnormal expression. Indeed, some of our highly significant abnormally expressed genes (the MHC II-class related genes and the Fgf2 gene) are also part of these loci. In contrast, Wu et al. found several abnormalities in metabolic and enzymatic activity pathways, which we did not observe in the present study. It is likely that the heterogeneity of cell types in the spleen might play a significant role in this discrepant outcome, because less than 10% of total splenocytes are DC. Our present and previous data [38] and also those of others [39] strongly suggest a deficiency in the proliferation, differentiation and maturation capabilities of steady state NOD mouse DCs both systemically and in the pancreas. This general ''immune deficiency-like state'' probably not only affects the effector immune functions of the DCs, but also their tolerance inducing capabilities, since we found important molecules playing a role in tolerance induction (CD24, CD200R3) to be down-regulated on the steady state pancreas CD8a 2 DCs of the NOD mouse. Previous experiments with adoptive transfers of mature bone-marrow derived DCs expressing high levels of co-stimulatory molecules, such as CD80, CD86 and CD40, significantly reduced diabetes incidence in NOD mice [40]. Interestingly, treatment with immature DCs that express low levels of co-stimulatory molecules did not protect against diabetes [41] indicating that overcoming the poor differentiation and maturation state of DCs is of key importance for protection from diabetes in NOD mice. 
Novel is our finding of the down-regulation in NOD pancreas CD8a -DCs of a network of important genes for beta cell regenerating growth factors, the REG genes. REG genes were initially discovered for their role in the generation of beta cells in the human, rat and mouse pancreas [42][43][44]. DCs and macrophages play an important role in islet development [6] and it is tempting to speculate that the REG produced by these cells is instrumental in this support function. The down-regulation of REG genes in the deficient NOD pancreas DCs might in such view result in an insufficient support for islet growth and the aberrant islet morphogenesis that has been observed in the NOD pancreas from birth onwards prior to the first signs of lymphocytic insulitis [6,45]. Whether an insufficient provision of REG growth factors also plays a role in the actual disappearance of the beta cells in the insulitis phase is not known. There is an increased expression of Reg2 in the total pancreas of NOD mice during diabetes development and staining of the 10-week old NOD pancreas showed expression of Reg 1, 2 and Reg3a, -c proteins in the islets. In addition, all REG genes seem to have an IL-6 responsive element and treatment of healthy human islets with IL-6 results in increased REG production [46]. It is therefore not surprising that adjuvant immunotherapy increased expression of Reg2 which resulted in regeneration of beta cells [47]. Although this study has several limitations: e.g. the pancreas enzymatic digestion method and the low yield of cells making only limited (flow cytometric) and in vitro cytokine production studies possible. Another limitation is the CD11c + CD8a 2 cell population might contain a small fraction plasmacytoid DCs. We are still confident that we can conclude that the gene expression profiles together with the limited flow cytometric data and our previous functional data support a view that under steady state conditions the CD8a 2 DCs in the 5 week old pancreas of the NOD mouse display an altered phenotype with reduced cell renewal, migration, maturation and tolerance induction capabilities. These altered steady state NOD DCs are hyperresponsive to a danger stimulus, leading to a DC type with an exaggerated inflammatory molecular profile. Supporting Information File S1 Supporting file containing: Figure S1. CD11c + CD8a 2 DCs subset in the pancreas of C57BL/6 and NOD mice. Figure S2. Down regulation of phagocyte proliferation network under steady-state conditions. Figure S3. Down regulation of inflammatory response network under steady-state conditions. Figure S4. Inflammatory response network after in-vitro LPS stimulation. (DOCX)
v3-fos-license
2017-10-14T14:03:04.635Z
2012-12-20T00:00:00.000
36229914
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=25948", "pdf_hash": "d65dd960ec26cb55c7d3ff2b2cba67ccc735f1ec", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44876", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "d65dd960ec26cb55c7d3ff2b2cba67ccc735f1ec", "year": 2012 }
pes2o/s2orc
Evaluation of Potential for Translocation of Listeria monocytogenes from Floor Drains to Food Contact Surfaces in the Surrounding Environment Using Listeria innocua as a Surrogate Floor drains in processing environments harbor Listeria spp. due to continuous presence of humidity and organic substrates. Cleaning and washing activities in food-processing facilities can translocate the bacterial cells from the drain to the surrounding environment, thus contaminating food products still in production. This study evaluated the potential for translocation of Listeria monocytogenes from drains to food contact surfaces in the surrounding environment using Listeria innocua as a surrogate. A 7 × 7 × 8-foot polycarbonate flexi-glass chamber with a 10-inch-diameter drain mounted on an aluminum cabinet was used. Stainless steel coupons (6.4 × 1.9 × 0.1 cm, 12 per height) were hung at 1, 3, and 5 feet inside the chamber. Four treatment sets; non-inoculated, non-treated; non-inoculated, treated; inoculated, treated; inoculated non-treated; and two subtreatments of 8 h and 48 h were performed. For the inoculated sets, meat slurry (10 g of ground beef in 900 mL water) and a four-strain cocktail of Listeria innocua at 7 8 log CFU/mL were used. For the treated sets, in addition, a commercial cleaner and sanitizer was applied. The drain was cleaned using a pressure hose (40 50 psi) after 8 h and 48 h. Coupons were then removed and enriched in listeria enrichment broth to establish if any cell translocated from the drain onto the stainless steel coupons via aerosols generated during washing. Confirmation was done using VIP Listeria rapid test kits. Results indicated translocation at all three heights ranging from 2% 25%. Significantly higher translocation (p < 0.05) was found at 1 foot (up to 25%), followed by 3 feet (up to 11%) and 5 feet (up to 2.7%). This research indicated that translocation of Listeria spp. from drains to food contact surfaces does occur and increases with increased proximity to the drain. Introduction Bacteria have been shown to enter foods as a result of contact with contaminated surfaces [1], but contamination of commercially processed food products with Listeria monocytogenes and other Listeria spp.occurs in post-processing environments rather than as a result of organisms surviving the processing operation.L. monocytogenes is also known to be associated frequently with raw materials used in food processing facilities, which may constantly reintroduce the organism to the plant environment [2].Pulsed-field gel electrophoresis (PFGE) typing of Listeria strains isolated from a meat-processing plant in a 2-year period showed the persistence of closely related Listeria strains in the plant environment [3].Listeria monocytogenes can cause mild (listerial gasteroentritis) to severe, life-threatening illnesses (invasive listeriosis) [4]. The foods that have been commonly implicated in in-vasive listeriosis outbreaks are ready-to-eat (RTE) foods.RTE foods can be contaminated if the ingredients are contaminated with L. monocytogenes and are not sufficiently processed to destroy viable cells of this pathogen, or if introduction of L. monocytogenes occurs because of improper sanitary conditions or practices [5]. 
Several factors contribute to the growth of microorganisms in food-processing environments, including moisture, nutrients, pH, oxidation-reduction potential, temperature, presence or absence of inhibitors, microbial interactions, and time.Moisture plays an increasingly important role and promotes the survival of bacterial cells on different surfaces.Processing plant structures, including equipment, as well as maintenance, repair, and practices that entrap moisture often result in microbial niche development [6].Numerous sampling studies have been conducted to assess the prevalence of Listeria spp. in different food production and processing facilities. Samples were taken from the floor, drains, processing equipment, food contact surfaces, and environment.Significant findings included the recovery of L. monocytogenes from the floor drains from 2% -50% in all tested food establishments [7]. Floor drains in processing environments harbor Listeria spp.due to the continuous presence of humidity and organic substrates.Listeria adhere to, colonize, and become entrapped on the drain surface in a slimy mucilaginous coating of colonizing bacterial cells and associated polymers called biofilm.This biofilm coating protects the bacterial cells against environmental stress, offers resistance to cleaning and disinfection, and is difficult to eradicate or remove compared with free, living cells [2,8].The time available for biofilm formation depends on the frequency of cleaning activities in a processing unit.Food contact surfaces typically may be cleaned several times a day or at the end of each shift; however, environmental surfaces such as walls and drains may be cleaned only once per week.Biofilm clearly has more time to develop on environmental surfaces [9].A study found that although bacterial cells readily attached to food contact surfaces in processing facilities, extensive surface colonization and biofilm formation occurred only on environmental surfaces [10].Several studies carried out in fish-processing plants have shown a correlation between the presence of L. monocytogenes in drains and on food contact surfaces, hence on the finished product [11].Microbial cells may be transferred to the food product by vectors such as air, personnel, and cleaning systems [12,13].The open nature of drains means that they are continuously challenged by a wide range of microbes, which vary depending on the site of the drain.Listeria spp., if present in the drains, may transfer from drains onto food contact surfaces, thus contaminating the food being processed.In dairy processing, an outbreak of Listeria associated with chocolate milk, which sickened 45 people, was traced to a drain that contaminated the milk filler above it [14].Migration of the organism may occur from drains to food through workers and food handlers, contaminated equipment, and high-pressure cleaning and scrubbing in food-processing environments.Because aerosols generated as a result of high-pressure cleaning and washing activities (40 -60 psi) may translocate bacterial cells, our study was designed to evaluate the potential for translocation of L. monocytogenes from drains onto food contact surfaces in the surrounding environment using L. innocua as a surrogate [15]. 
Bacterial Cultures and Inoculum Preparation The bacterial cultures used in this study included four strains of Listeria innocua (ATCC 33091, 51742, 49595, and 33090) which were obtained from the American Type Culture Collection (ATCC).The lypholized microorganisms were individually transferred to 9 mL tryptic soy broth (TSB, Difco, Franklin Lakes, NJ, USA), vortexed to mix the suspension well, and incubated at 35˚C for 24 h.Each strain was then combined into a single mixed culture suspension to obtain a four-strain cocktail of L. innocua.A 7 -8 log CFU/mL culture suspension was used for inoculation purposes.The cell density of this suspension was determined by serially diluting the pure culture that was grown in TSB, and plating in duplicate onto modified oxford medium agar (MOX, Difco, Franklin Lakes, NJ, USA).The bacterial cell counts were obtained after incubating the plates at 35˚C for 24 h. Preparation of Drain Surface A 10-inch-diameter, circular, painted cast iron drain, mounted onto a 2 × 3-feet "090" with a two-part white epoxy finish aluminum cabinet was used.The drain was placed in a 316 stainless bowl and a schedule 40 PVC male 4-inch adapter was screwed into the drain and was fitted with a 40 PVC pipe (manufactured by RGF Pvt. Ltd., West Palm Beach, FL, USA).A 5-gallon polyethylene bucket was used to collect the drain wash water. Preparation of Surfaces Stainless steel is a surface finish commonly found in food-processing environments.A research study showed that Listeria grew on stainless steel, teflon, nylon, and polyester for 7 to 18 d, whereas its biofilm formation was supported at 21˚C but was reduced at 10˚C [16].Hence, stainless steel coupons were hung inside the chamber at three different heights and used for sampling to test translocation.Polished stainless steel coupons (6.4 × 1.9 × 0.1 cm) were washed with Fisherband Sparkleen (Fisher Scientific, Hampton, New Hampshire, USA) detergent and autoclaved for use. Preparation of Meat Slurry For the preparation of the meat slurry, 10 g of ground beef 80:20 (All Natural Ground Beef Chuck) was placed into a stomacher bag.To this, 100 mL of distilled water was added, then stomached for 1 min; 900 mL of distilled water was added to this mixture to make it 1 liter.10 mL of bacterial cocktail (7 -8 Log CFU/mL) was then added to the meat slurry for the inoculated sets. Inoculation of the Drain The drain was inoculated with meat slurry at regular intervals, as described further, to simulate the normal conditions of drain surfaces in a food-processing facility. Cleaning and Washing Activities Commercial cleaning and washing operations are done with a water pressure hose from 40 -50 psi.Such high pressure generates aerosols.In this study, a commercial cleaner (alkaline-sodium hypochlorite 0.1% -0.5%) and sanitizer (chlorinated ammonium compound consisting of N-alkyl dimethyl benzyl ammonium chlorides, Nalkyl dimethyl ethylbenzyl ammonium chlorides, and ethyl alcohol) were used.The sampling was performed at the end of 8 h based on the usual duration of a shift in a typical production facility.The time period for development of biofilms, 48 h, was also evaluated. VIP for Listeria VIP for Listeria (BioControl Systems.Inc., AOAC approved 997.03) was used for confirmation.If Listeria is present, an antigen-antibody-chromogen complex is formed that is visually read on the kit as a band formation. 
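The inoculum density determination described at the start of this section (serial dilution followed by duplicate plating on MOX) reduces to a simple back-calculation. The sketch below shows that arithmetic with hypothetical colony counts; the counts, dilution, and plated volume are illustrative assumptions, not the study's data.

```python
def cfu_per_ml(colony_counts, dilution, plated_volume_ml=0.1):
    """Estimate CFU/mL from replicate plate counts at a single dilution.

    colony_counts: colonies counted on replicate plates (e.g., duplicates).
    dilution: total dilution of the plated sample (e.g., 1e-6).
    plated_volume_ml: volume spread per plate.
    """
    mean_count = sum(colony_counts) / len(colony_counts)
    return mean_count / (dilution * plated_volume_ml)

# Hypothetical example: 85 and 92 colonies on duplicate plates of a 10^-6
# dilution with 0.1 mL plated -> about 8.9e8 CFU/mL, i.e. roughly 8.9 log CFU/mL.
print(cfu_per_ml([85, 92], 1e-6))
```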
Procedure Autoclaved stainless steel coupons with binder clips were passed through 1-mL pipettes and placed on cooling racks were hung at 1, 3, and 5 feet with nylon thread strings inside the chamber.A total of 12 racks (4 per height) were used.On each of these racks, a set of 3 coupons was placed; making a total of 12 coupons per height.This study was performed for 8-h and 48-h time periods, each consisting of 4 sets; Non-Inoculated, Non-Treated; Non-Inoculated, Treated; Inoculated, Non-Treated; and Inoculated, Treated.The term inoculated refers to use of bacterial cocktail whereas treated refers to use of a commercial cleaner and sanitizer. Non-Inoculated, Treated and Non-Inoculated, Non-Treated The drain was inoculated with meat slurry at 0, 4, and 8 h.The prepared slurry was poured into the drain at 0 h.The drain was washed with a high-pressure water hose (40 psi) and poured again with slurry at 4 h.The process was repeated at 8 h.The drain was then allowed to sit for 30 min and washed with water (40 psi).The commercial cleaner was then applied and allowed to sit for 60 sec before the sanitizer was used, per manufacturer's instructions, in the treated set whereas no cleaner or sanitizer was used in the non-treated set.The coupons hung inside the chamber during cleaning were then collected in individual sterile plastic bags. Inoculated, Treated and Inoculated, Non-Treated The drain was inoculated with meat slurry at 0, 4, and 8 h.The slurry with 10 mL L. innocua cocktail was poured into drain at 0 h.The drain was washed with a highpressure water hose (40 psi) and again poured with slurry at 4 h.The process was repeated at 8 h.The drain was then allowed to sit for 30 min and washed with water (40 psi).The commercial cleaner was then applied and allowed to sit for 60 sec before sanitizer was used, per manufacturer's instructions, in the treated set whereas no cleaner or sanitizer was applied in the non-treated set.The coupons hung inside the chamber during cleaning were then collected in individual sterile plastic bags. For each of these sets, after collection of coupons, 100 mL of listeria enrichment broth (LEB, Difco, Franklin Lakes, NJ, USA) was added to each of the bags containing stainless steel coupons.The coupons with LEB were incubated at 35˚C for 48 h.After 48 hours of incubation, the turbid broths were streaked onto the prepoured MOX plates, then incubated at 35˚C for 48 h.If black colonies were seen on the MOX plates, those were recorded as presumptive positive for Listeria.Typical Listeria colonies from the MOX plates were isolated and grown in 9 mL TSB test tubes for 48 h at 35˚C.To confirm the presence of Listeria spp., in the turbid TSB test tubes, the rapid VIP Listeria Test was performed.Positive test kits were used as confirmation of the presence of Listeria in the samples. The same procedure was repeated for 48 h to study the translocation of bacterial cells when biofilms have been developed in the drain surface.The drain was inoculated with an L. innocua cocktail in meat slurry, as described previously, at 0, 8, 12, 24, 36, and 48 h. 
For sampling the drain, the sponge method was used. Three drain sites were sampled: drain surface (197.98 cm2), drain crate (278.07 cm2), and drain pipe (335.98 cm2). Sampled sponges (18 oz. "Speci Sponge", 3.8 × 7.6 cm; Nasco Laboratory, Fort Atkinson, WI, USA) were placed in sterile bags with 20 mL Letheen broth (Difco, Franklin Lakes, NJ, USA). Serial dilutions were made and spread plated on tryptic soy agar (TSA, Difco), MOX, and thin agar layer MOX (TALMOX) [17]. The wash water from the drain collected in a bucket was also plated for enumeration for each set. The plates were incubated at 35˚C for 48 h. Bacterial counts were taken and reported as CFU/area. Three replications of each of the experimental sets for both 8 h and 48 h were performed.

Statistical Analysis

For statistical analysis, a single-factor model with binomial distribution was used, and data were analyzed using the GENMOD procedure (SAS 9.1.2, 2004, Cary, NC, USA). The analysis was performed to find the probability of positive test coupons obtained as a result of translocation of bacterial cells from the drain to the stainless steel coupons. The experimental sets (Inoculated, Treated and Inoculated, Non-Treated, for both the 8-h and 48-h periods) were observed to fit adequately into the model. The height at which the coupons were hung inside the chamber had a significant effect (p < 0.05) on the number of positive coupons obtained due to cell translocation from the drain to the coupons.

Results and Discussion

Bacterial populations enumerated from sponge sampling of the drain ranged between 3.5 and 4 log CFU/area in the inoculated, treated sets at 8 h, while counts were 6-8 log CFU/area in the 48-h inoculated sets. The treatment with commercial cleaner and sanitizer reduced the bacterial population in the drain by only 0.5 log CFU/area. However, samples obtained from wash waters showed 3 log CFU/mL and 4 log CFU/mL reductions in bacterial population in the 8-h and 48-h treated sets, respectively. Previous prevalence studies have shown a reduction in the prevalence of Listeria monocytogenes of over 50% in a slaughterhouse and up to 16% in a smokehouse due to cleaning activities undertaken [18]. Tables 1 and 2 show the percentage of positive samples obtained for the different experimental sets. If there was no contamination in the drain to begin with, as indicated by the non-inoculated sets for both 8 h and 48 h, no translocation of bacterial cells occurred from the drain onto the coupons and the surrounding environment.

In the 8-h set, translocation of bacterial cells was seen at all three heights. The percentage of positive samples was 2%-17%. Higher translocation was seen at 1 foot, followed by 3 feet and 5 feet, respectively, indicating that the closer the proximity to the drain, the greater the number of bacterial cells that transfer from the drain to the surrounding surfaces.
The translocation at 1 foot for the Inoculated, Non-Treated set was 16.6%, whereas for the Inoculated, Treated set it was 13.8%. These figures further indicate that when a cleaning and sanitizing treatment is applied to control or eliminate the bacterial cells in the drain, fewer cells translocate compared with the untreated drain. The translocation at 3 feet for the Inoculated, Non-Treated set was 11.1%, compared with 5.5% for the Inoculated, Treated set. At 5 feet, the translocation for the Inoculated, Non-Treated set was 2.7% but 0% for the Inoculated, Treated set. These percentages further reinforce the need for cleaning and sanitizing treatments for floor drains, because the number of cells translocated from the non-treated drain is higher than from the treated drain.
In the 48-h set, coupons were found positive for translocation at 1, 3, and 5 feet. The range of percentage positives in this case, 2% to 25%, was higher than in the 8-h set. This may be attributed to the longer time available for the bacterial cells to grow and proliferate in the drain and to form a biofilm as protection against environmental stress. The average translocation was highest at 1 foot (25%), compared with 6.9% at 3 feet and 1.8% at 5 feet.
At the height of 1 foot, the percentage translocation for both the Inoculated, Non-Treated and Inoculated, Treated sets was found to be 25%. At 3 feet, 8.3% positive coupons were obtained from the Inoculated, Non-Treated set, whereas 5.5% were seen in the Inoculated, Treated set. At 5 feet, 2.7% positive samples were seen in the Inoculated, Treated set.
This study agrees with previous research findings indicating that the use of high-pressure hoses can discharge Listeria spp. to unreachable areas and food contact surfaces [19], and suggests that optimization is required in the cleaning and washing steps to limit the generation of viable aerosols. A similar suggestion was made in a study which found that, in fish-processing plants that did not use high-pressure sprayers for cleaning, L. monocytogenes was overall infrequently isolated from food contact surfaces even when a high number of positive samples was obtained from the drains [20]. Studies on guidelines to control L. monocytogenes in small- to medium-scale fresh-cut and packaging operations have also indicated modification of the cleaning and sanitizing procedures as one means of pathogen control [21]. A higher degree of mechanical action and the use of detergents may play a role in reducing the spread of contamination by aerosols.
Because of the ubiquitous nature of Listeria monocytogenes in the general environment, minimizing its presence throughout food production and processing environments is vital. Effective and reliable personnel practices and hygiene are required in addition to the application of effective cleaning procedures to the manufacturing equipment and the food-processing environment itself.
Association of IL-17A and IL-10 Polymorphisms with Juvenile Idiopathic Arthritis in Finnish Children
To analyze the role of interleukin (IL)-17A and IL-10 polymorphisms in susceptibility to juvenile idiopathic arthritis (JIA), 98 Finnish children and adolescents with JIA were studied. Data from the 1000 Genomes Project, consisting of 99 healthy Finns, served as the controls. The patients were analyzed for four IL-17A and three IL-10 gene-promoter polymorphisms, and the serum IL-17A, IL-17F, IL-10, and IL-6 levels were determined. The IL-17A rs8193036 variant genotypes (CT/CC) were more common among the patients than the controls, especially in those with polyarthritis (OR 1.93, 95% CI 1.11–3.36; p = 0.020). The IL-17A rs2275913 minor allele A was more common in the patients than in the controls (OR 1.45, 95% CI 1.08–1.94; p = 0.014), and especially among the patients with oligoarthritis and polyarthritis (OR 1.61, 95% CI 1.06–2.43; p = 0.024). Carriers of the IL-17A rs4711998 variant genotype (AG/AA) had higher serum IL-17A levels than those with genotype GG. In addition, carriers of the variant genotypes of IL-17A rs9395767 and rs4711998 appeared to have higher IL-17F levels than those carrying the wildtype. IL-10 rs1800896 variant genotypes (TC/CC) were more abundant in the patients than in the controls (OR 1.97, 95% CI 1.06–3.70; p = 0.042). Carriers of the IL-10 rs1800896 variant genotypes had lower serum levels of IL-17F than those with the wildtype. These data provide preliminary evidence of the roles of IL-17 and IL-10 in the pathogenesis of JIA and its subtypes in the Finnish population. However, the results should be interpreted with caution, as the number of subjects included in this study was limited.
Introduction
Juvenile idiopathic arthritis (JIA) is a pediatric rheumatic disease defined by the International League of Associations for Rheumatology (ILAR) as arthritis that has persisted for at least 6 weeks in one or more joints of a child or an adolescent under 16 years of age [1]. The ILAR classification further divides the disease into seven subtypes, including JIA with systemic onset, oligoarthritis (divided into persistent oligoarthritis and extended oligoarthritis, which develops into polyarthritis after 6 months from onset), rheumatoid factor (RF)-positive polyarthritis, RF-negative polyarthritis, psoriatic arthritis, enthesitis-related arthritis (ERA), and undifferentiated arthritis (not falling into any of the previous categories, or classifiable as more than one of them).
It is evident that JIA is a heterogeneous group of immunological disturbances; its subtypes differ in clinical characteristics, disease activity, prognosis, and the most effective treatment options [2]. Genes previously associated with JIA appear to be linked to a general susceptibility to autoimmunity rather than being specific to JIA [2,3]. It is likely that subtypes of JIA can arise from various genetic backgrounds, which may differ between ethnic populations. Cytokines like interleukin (IL)-6, tumor necrosis factor (TNF)-α, IL-1, IL-18, IL-17, and IL-10 play a crucial role in the pathogenesis of JIA, contributing to the inflammatory process and tissue damage [4].
Single-nucleotide polymorphisms (SNPs) in cytokine genes can lead to an altered expression of cytokines that appears to influence the risk of autoimmune diseases [5], including arthritis [6], their severity, and may also shape the responses to treatment.The assembly of polymorphisms of hallmark immune regulators may set up a context where an individual develops JIA, and these polymorphisms and their haplotypes may have a considerable influence on disease progression and response to treatment. IL-17A and IL-17F produced by Th17 cells have been linked to the development and chronicity of synovial inflammation in rheumatic diseases, including JIA [7].It has been suggested that these cytokines may be linked to certain subtypes of JIA and possibly disease activity in JIA [8,9].Higher serum levels of IL-17A, among other inflammatory cytokines at JIA onset, have been shown to associate with ongoing disease activity after 1 year [10].Also, the higher number of peripheral blood Th17 cells has been associated with a more prolonged time to reach an inactive disease state [11] and is associated with a prolonged need for treatment in polyarthritis [12].Higher numbers of Th17 cells and lower numbers of Tregs are found in the joints of patients with extended oligoarthritis than in the joints of patients with persistent oligoarthritis, representing a more limited type of oligoarthritis [13,14].Th17 cells appear to play an especially important role in ERA, where increased levels of Th17 cells and IL-17A levels have been found in synovial fluid (SF), correlating with disease activity [15].Fischer et al. found the highest numbers of Th17 cells in the synovial fluid of patients with ERA and the lowest numbers in the joints of patients with antinuclear antibody (ANA)+ oligo-or polyarthritis [10].In line with this, anti-IL-17A blocker secukinumab has been approved for the treatment of two subtypes of JIA: enthesitis-related arthritis and juvenile psoriatic arthritis. IL-10 is a major anti-inflammatory cytokine that inhibits the activation and effector functions of T cells, the antigen-presenting cell function, and the proliferation of monocytes and macrophages [16].It downregulates the production of TNF-α, IL-6, and other inflammatory cytokines and chemokines [17].On the other hand, IL-10 induces B-cell proliferation, differentiation, and antibody isotype switching [11].In animal models of arthritis, IL-10 deficiency has been demonstrated to result in an elevated production of Th17 and Th1 pro-inflammatory cytokines [18,19].Genes associated with IL-10 signaling have been shown to be upregulated in the peripheral blood of patients with persistent oligoarthritis and seronegative polyarthritis, as well as in systemic JIA [14].The plasma level of IL-10 as well as the level of IL-17 at diagnosis, have been listed among the predictive biomarkers of disease outcome in non-systemic JIA [20].Some studies have found low levels of IL-10 in the joints or peripheral blood of patients with non-systemic JIA [21,22], and it has been suggested that intrinsic low IL-10 production insufficient to control inflammation may set up a risk of JIA with worse outcomes [23]. 
Several single-nucleotide polymorphisms (SNPs) have been identified across the coding and regulatory regions of IL-10 and have been strongly implicated in the pathogenesis of various autoimmune diseases. In particular, three promoter-area mutations, −1082 G>A (rs1800896), −819 C>T (rs1800871), and −592 C>A (rs1800872), have been found to affect gene activity [24] and the secretion of IL-10 in in vitro studies [25,26]. Previous studies have also shown that the IL-10 ATA haplotype (minor alleles of rs1800896, rs1800871, and rs1800872) was associated with reduced production of IL-10 in human whole-blood cultures of patients with juvenile arthritis [23,27]. SNPs in the promoter region of IL-17A influence its expression and, potentially, JIA pathogenesis and outcome [28]. The most studied SNP in the IL-17A promoter region is rs2275913 (−197 G>A), which plays a pivotal role in the regulation of IL-17A transcription and secretion [29]. In many studies, the A-allele is associated with elevated levels of IL-17A; however, the findings are contradictory, and in some conditions reduced levels of IL-17A have also been reported.
In the era of biologicals, the clinical outcome of JIA has improved considerably [8,9], but a share of patients do not respond to treatment and develop complications. It remains necessary to advance our understanding of the disease mechanisms in JIA to better recognize the specific disease subtypes that would profit from individual treatment approaches. In order to promote a more precise classification and targeted treatment of JIA, our approach has been to study whether polymorphisms in the IL-17A and IL-10 genes are associated with susceptibility to JIA or its subtypes in a Finnish population and whether these polymorphisms could serve as biomarkers for more specific clinical entities of JIA. To our knowledge, comprehensive analyses of multiple SNPs in the promoter regions of the IL-17A and IL-10 genes have not been performed in JIA and its subtypes. In addition, contradictory results have previously been reported on the association of these polymorphisms with susceptibility to JIA in various genetic backgrounds. We chose to exclude patients with sJIA and RF-positive polyarthritis from this study because of the rarity of these patients and the fact that these subtypes appear to differ markedly in genetic background from the other subtypes of JIA [30].
Characteristics of the Patient Population
Of the 98 patients with JIA included in the study, 71 (72.4%) were females and 27 (27.6%) were males. All were of Caucasian origin, and all but one were of Finnish origin. At the time of blood sampling for the study, the median age of the patients was 11.5 (range 2.2–16.9) years, and the median disease duration was 2.9 (range 0–13.6) years. The median age at diagnosis was 5.4 years (range 1.0–14.9). Fifty-one (52.0%) patients were diagnosed as having oligoarthritis, thirty-four (34.7%) with polyarthritis, ten (10.2%) with enthesitis-related arthritis (ERA), and three (3.0%) had other types of JIA. Twenty-nine patients (29.9%) were human leukocyte antigen B27 (HLA-B27) positive, and sixty-five (67.7%) were ANA-positive. A total of 20 patients (20.4%) had uveitis at some point in the disease course, and 26 (26.5%) patients had temporomandibular joint (TMJ) synovitis at some point in the disease course. Detailed characteristics of the JIA patients are shown in Table 1.
Notes to Table 1: The data are presented as numbers (n) of children and valid percentages (%). Missing data are presented as numbers and percentages of the total number of patients. 1 Persistent oligoarthritis, n = 44 (44.9%), and extended oligoarthritis, n = 7 (7.1%). 2 Includes diagnoses of psoriasis-related juvenile idiopathic arthritis (ICD-10: M09.0*, L40.5), n = 1, and juvenile arthritis, unspecified (ICD-10: M08.9), n = 2. 3 Medication at the time of sampling. 4 Disease activity was determined as described earlier and is presented here at the time of diagnosis, 1 year after, and at the time of sampling, respectively. 5 Laboratory values.
Next, we compared the prevalence of the IL-17 and IL-10 polymorphisms between the control population and the patients with JIA. In both groups, no significant sex-based differences were observed concerning the prevalence of the SNPs (p ≥ 0.05).
IL-17A Polymorphisms
The distributions of the IL-17A polymorphisms rs9395767, rs2275913, rs4711998, and rs8193036 were analyzed in the JIA patients and the control group (Table 2). As shown in Table 2, the IL-17A rs8193036 variant genotypes CC/CT were more common in the patients than in the controls (71.4% vs. 59.6%, p = 0.038, OR 1.92; 95% CI 1.06–3.47). In addition, the minor allele A of IL-17A rs2275913 was more frequent in the JIA patients than in the controls (OR 1.45, 95% CI 1.08–1.94; p = 0.014).
Notes to Table 2: The data are presented as numbers (n) of children and valid percentages (%). Two control groups are included: 1 the 1000 Genomes FIN population [31] and 2 the Finnish STEPS study cohort [32]. Of the latter, data for IL-17A rs2275913 and IL-10 rs1800896 and rs1800872 are available. 3 Includes diagnoses of juvenile arthritis subtypes: oligoarthritis, polyarthritis, juvenile enthesitis-related arthritis, juvenile psoriatic arthritis, and juvenile arthritis unspecified. Fisher's exact tests were used to analyze p-values. The odds ratio (OR) and the corresponding 95% confidence intervals (95% CI) were calculated. 4 Dominant model, where the wildtype was compared with heterozygous and homozygous variants. p-values < 0.05 were considered significant (*).
In a further analysis, where the patients with extended oligoarthritis (n = 7) were excluded and the patients with persistent oligoarthritis (n = 44) and polyarthritis (n = 34) were compared, no statistical differences in the distribution of the IL-17A polymorphisms were found. However, when the patients with persistent oligoarthritis were compared to the patients with extended oligoarthritis, we found that all the patients with the extended type of oligoarthritis had the variant types of both the IL-17A rs2275913 and rs8193036 genotypes, whereas 20.5% of the patients with persistent oligoarthritis had the wildtype (GG) of IL-17A rs2275913 and 34% had the wildtype (CC) of IL-17A rs8193036. However, the number of patients with extended oligoarthritis was too low to draw definite conclusions about the differences (Supplementary Table S1).
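For readers who want to reproduce this kind of genotype comparison, the following sketch shows a dominant-model 2×2 analysis (variant carriers vs. wildtype, patients vs. controls) with Fisher's exact test and a Woolf-type confidence interval for the odds ratio. The counts are illustrative reconstructions, not the study's genotype table.

```python
# Dominant-model 2x2 analysis with Fisher's exact test and an approximate 95% CI
# for the odds ratio. Counts below are illustrative only.
import math
from scipy.stats import fisher_exact

patients_carrier, patients_wt = 70, 28     # JIA patients: variant carriers, wildtype
controls_carrier, controls_wt = 59, 40     # controls: variant carriers, wildtype

table = [[patients_carrier, patients_wt],
         [controls_carrier, controls_wt]]
odds_ratio, p_value = fisher_exact(table)

# Woolf (log) method for an approximate 95% CI of the odds ratio
se_log_or = math.sqrt(sum(1.0 / n for n in (patients_carrier, patients_wt,
                                            controls_carrier, controls_wt)))
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.3f}")
```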
As shown in Table 4, the majority (82.4%) of JIA patients who carry the IL-17A rs9395767 TT genotype were ANA-positive, whereas only 55.6% of those who carry the AA genotype were ANA-positive (p = 0.032, OR 3.8; 95% CI 1.13–13.09). No significant correlation between the distribution of the other IL-17A polymorphisms and ANA-positivity was observed. We further studied the link between the IL-17A polymorphisms and the risk of uveitis, as well as the risk of a typical feature of JIA, TMJ arthritis. The JIA patients were divided into those who had uveitis at some point in the disease course (n = 20) and a group that never had uveitis (n = 78). No statistically significant differences were found in the studied IL-17A genotypes between the patients with a positive vs. negative history of uveitis when the whole JIA population was analyzed (Table 4). Furthermore, there were no significant differences when the patients with oligoarthritis, polyarthritis, and ERA were analyzed separately. A total of 26 patients (26.5%) had been TMJ-affected at some point in the disease course. No correlations were found either when we similarly studied the association of the IL-17A polymorphisms with a history of TMJ arthritis in these patient groups (Table 4).
IL-10 Polymorphisms
The distributions of the three IL-10 polymorphisms (rs1800896, rs1800871, and rs1800872) were analyzed in the JIA patients and the control group (Table 2). The IL-10 rs1800896 variant genotypes (TC/CC) were significantly more abundant in the JIA group than in the controls (p = 0.042, OR 1.66; 95% CI 1.02–2.72). No difference in the other two SNPs was noticed between the patients and controls. Furthermore, no differences in any of the three SNPs studied were found between the different subtypes of JIA studied (Table 3). Nevertheless, the variant IL-10 rs1800896 genotype may be more abundant in ERA than in the controls: of the ten patients with ERA, nine had the variant genotype. However, the number of ERA patients is too low to draw definitive conclusions. Similarly, no associations were found between the IL-10 polymorphisms and the patients with oligoarthritis, polyarthritis, or the combined group of oligoarthritis and polyarthritis.
No statistically significant differences were found between the IL-10 rs1800896, rs1800871, and rs1800872 genotypes in patients with positive vs. negative ANA (Table 4). However, when the JIA patients were analyzed against the controls, there seemed to be an association between the IL-10 rs1800896 variant genotype (GA/AA) and HLA-B27 positivity (p = 0.033; OR 3.01; 95% CI 1.04–8.71). Furthermore, no statistically significant differences were found in the IL-10 rs1800896, rs1800871, and rs1800872 genotypes in patients with a positive vs. negative history of uveitis, or a positive vs. negative history of TMJ arthritis, when the whole population of JIA patients and the different subtypes were analyzed.
IL-10 and IL-17 Haplotypes
The linkage disequilibrium (LD) of the IL-10 polymorphisms was strong in the controls (Figure 1a) and in the JIA patient group (Figure 1b). In addition, IL-10 rs1800872 and rs1800871 exhibit complete LD. There was no evidence of LD among the IL-17A SNPs in the JIA group, and in the control group only IL-17A rs2275913 showed low evidence of LD with rs8193036 and rs9395767. The haplotype analysis showed that the IL-17A rs2275913 and rs8193036 allelic TG-haplotype was more frequent in the control group (0.433) than in the patient group (0.321) (p = 0.0211). No other significant differences were found in the IL-17 and IL-10 haplotypes between the patients and controls.
Next, we analyzed the LD of three JIA subgroups: oligoarthritis, polyarthritis, and ERA. What stands out in Figure 2 is that the ERA group clearly differed in terms of the IL-10 haplotypes, both from the controls and from the other JIA subgroups. The most significant difference was related to IL-10 rs1800896 and rs1800872/rs1800871, where there was high evidence of LD between the SNPs in the polyarthritis group (Figure 2b) but not in the oligoarthritis group (Figure 2a).
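The pairwise LD displayed in Figures 1 and 2 is typically summarized with the normalized statistic D′. The sketch below shows the D′ calculation for a single pair of biallelic SNPs; the haplotype and allele frequencies are hypothetical, and in practice they would be estimated from the genotype data (e.g., with an EM algorithm, as in Haploview).

```python
# Illustrative D' calculation for two biallelic loci, given the frequency of
# haplotype AB (p_ab) and the allele frequencies p_a and p_b.
def d_prime(p_ab, p_a, p_b):
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max > 0 else 0.0

# Example: strong LD between two IL-10 promoter SNPs (hypothetical frequencies)
print(round(d_prime(p_ab=0.28, p_a=0.30, p_b=0.35), 2))   # ~0.9
```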
The Role of IL-10 and IL-17 Polymorphisms in Disease Activity
To study whether the IL-17A and IL-10 polymorphisms were associated with disease activity, the patients with oligoarthritis or polyarthritis were divided into four groups (inactive disease, low disease activity, moderate disease activity, and high disease activity) on the basis of their JADAS10 scores, as defined by Consolaro et al. [33,34], at two different time points: at diagnosis and 1 year after diagnosis.
At the time of diagnosis, all patients had active disease, with 98.3% having moderate (n = 13) or high (n = 44) disease activity (Figure 3). Most of the patients with moderate disease activity at the time of diagnosis had persistent oligoarthritis. One year after diagnosis, 32.4% (n = 24) of patients had inactive disease, and 27% (n = 20) had low disease activity. Despite active treatment, one year after diagnosis 28.4% (n = 21) of patients had moderate and 12.2% (n = 16) high disease activity (Figure 3). While the patients diagnosed with poly- or extended oligoarthritis seemed to have more active disease one year after diagnosis, the patients with polyarthritis also showed a more systematic reduction in disease activity from the time of diagnosis compared with the other subtypes.
The polymorphisms of IL-17A (rs8193036, rs2275913, rs9395767, and rs4711998) and IL-10 (rs1800896 and rs1800871) and their association with disease activity were analyzed using the collected juvenile arthritis disease activity score (JADAS10), physician's global assessment (PGA), and childhood health assessment questionnaire (CHAQ) data. It appeared that the IL-10 rs1800871 variant genotype may be associated with disease activity. No statistically significant differences in JADAS10 scores at diagnosis or 1 year after were observed between patients with the WT or variant rs1800871 genotype, but at one year after diagnosis the PGA score was significantly lower (median 0.00, IQR 0.5) in patients with the variant genotype than in patients with the WT genotype (median 0.6, IQR 1.38; p = 0.026). Also, the JIA patients carrying the variant type of IL-10 rs1800871 had a significantly lower CHAQ at the time of diagnosis (0.25; IQR 0.59; p = 0.013) and one year after (0.00; IQR 0.25; p = 0.01) compared with the WT carriers. No significant associations were found between disease activity and the other studied SNPs.
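A comparison like the PGA difference between rs1800871 wildtype and variant carriers can be reproduced with a Mann-Whitney U test, as in the hedged sketch below; the score vectors are invented for illustration.

```python
# Hypothetical sketch: PGA scores one year after diagnosis, compared between
# IL-10 rs1800871 wildtype and variant carriers with the Mann-Whitney U test.
from scipy.stats import mannwhitneyu

pga_wildtype = [0.6, 1.2, 0.8, 0.0, 1.5, 0.9, 0.4]   # illustrative values
pga_variant  = [0.0, 0.2, 0.0, 0.5, 0.0, 0.1]

stat, p = mannwhitneyu(pga_wildtype, pga_variant, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```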
The Effect of IL-17A and IL-10 Polymorphisms on Serum Cytokine Levels
To analyze the functional significance of the IL-17A and IL-10 polymorphisms in JIA, we measured the concentrations of IL-6, IL-10, IL-17A, and IL-17F in the sera of the patients and compared them to the genotypes. The mean concentrations of the serum cytokine levels are shown in Table 5.
Pearson's correlation analysis of the cytokine levels showed a positive correlation between the serum IL-17F and IL-10 levels (r² = 0.599, p ≤ 0.001). No correlation was observed between the other cytokine levels. No statistically significant differences were found in the cytokine levels between the sexes, although males (22.61 pg/mL, interquartile range (IQR) 49.39) appeared to have higher serum IL-10 levels than females (5.11 pg/mL, IQR 44.74) (p = 0.093).
As seen in Table 5, of the three subgroups presented, the polyarthritis patients had the highest concentration of serum IL-6 (264.24 pg/mL; IQR 9706.65) and the ERA patients the highest concentration of serum IL-10 (30.87 pg/mL; IQR 51.27). However, no statistically significant differences were observed in the serum cytokine levels across the categories of diagnoses. The only significant difference found was between the oligo- and polyarthritis patients, with the polyarthritis patients having higher IL-17A levels (p = 0.049).
The patients with oligo- or polyarthritis were further divided into four activity categories, as described previously. Interestingly, the levels of IL-17F (p = 0.012) and IL-10 (p = 0.006) differed across the categories of disease activity, being lowest in the inactive patients and highest in those with moderate disease activity. However, inconsistently, the patients in the high-activity group had very low levels of all the studied cytokines. We did not observe an association of the serum levels of IL-17A and IL-6 with disease activity.
Disease activity at the time of sampling includes the patients with oligoarthritis or polyarthritis and is based on the JADAS10 score, as defined by Consolaro et al. [33,34].
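The correlation reported between serum IL-17F and IL-10 can be computed as in the following sketch; the concentration values are hypothetical.

```python
# Pearson correlation between two serum cytokine series (illustrative values, pg/mL).
import numpy as np
from scipy.stats import pearsonr

il17f = np.array([4.5, 12.3, 30.1, 8.7, 55.0, 22.4, 6.1])
il10  = np.array([0.9,  5.2, 14.8, 3.1, 28.7,  9.9, 1.5])

r, p = pearsonr(il17f, il10)
print(f"r = {r:.2f}, r^2 = {r**2:.2f}, p = {p:.4f}")
```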
When the associations between the individual polymorphisms of IL-17A and IL-10 and the cytokine levels were further analyzed, IL-17A rs9395767 and rs4711998 and IL-10 rs1800896 were found to be associated with the serum IL-17F levels. Carriers of the variant type of the above-mentioned IL-17 SNPs had significantly higher levels of serum IL-17F than those who had the wildtype genotypes (Table 6). In addition, the IL-17A rs4711998 variant genotype carriers had higher serum IL-17A levels than the WT carriers when the dominant model (combined heterozygous and homozygous variants compared with the WT) was used (p = 0.019). In contrast to these findings, JIA patients who carry the variant type of IL-10 rs1800896 had a lower level of serum IL-17F (p = 0.010, with the dominant model). None of the studied SNPs appeared to be statistically significantly associated with either the serum IL-10 or IL-6 levels. The Quade nonparametric ANCOVA test was used to take account of the covariates as listed: sex, JIA subtype, age at the time of sampling, disease duration, JADAS10, and the use of biological medication. The test clearly showed that IL-17A rs4711998 was associated with serum IL-17F (p = 0.012) and IL-17A (p = 0.009) levels. IL-10 rs1800896, on the other hand, was associated with both serum IL-17F (p = 0.030) and IL-10 (p = 0.042) levels.
Discussion
In this exploratory study, we aimed to study the role of several IL-17A and IL-10 promoter-area gene polymorphisms in the susceptibility to and disease activity of JIA in children. It is known that SNPs in cytokine genes affect cytokine production, which can influence the risk of arthritis [23,35]. To our knowledge, this is the first study addressing the role of IL-17A and IL-10 polymorphisms in JIA in the Finnish population. The disease characteristics of the patients in this study, including the distribution of JIA subtypes, age, and gender, are in line with what has previously been reported in Nordic studies [36,37]. Patients with sJIA and seropositive polyarthritis were not included in the study due to the low prevalence of these subtypes and their differing pathogenesis from other subtypes. This study is representative of the roles of IL-17A and IL-10 polymorphisms mainly in the oligoarthritis and polyarthritis types of JIA (which are the two major groups) in a Finnish population, as the numbers of ERA patients and other types of arthritis were low.
The IL-17A rs8193036 and rs2275913 variant genotypes were found to increase the risk of JIA in our cohort. IL-17A rs2275913 variant genotypes seem to be associated with an increased risk of RA in Caucasians [38,39], but in a study by Zhang et al., no association with the risk of JIA in Chinese children was observed [40]. The IL-17A rs8193036 variant genotype has previously been associated with an increased risk of rheumatoid arthritis (RA) in the Chinese population [41]. However, a meta-analysis, including fourteen studies and 3118 patients with RA, did not find an association between IL-17A rs8193036 and susceptibility to RA [42]. Our analyses confirm that the IL-17A rs8193036 polymorphism did not significantly affect the serum levels of IL-17A or other cytokines. However, it should be kept in mind that the majority of patients in our study were receiving medication, including DMARDs and biologicals, at the time of sampling, which may affect the results; although, our data suggest that the cytokine levels were not dependent on the use of biological drugs.
Our results suggest that both IL-17A rs8193036 variant genotypes CT/CC and IL-17A rs2275913 variant genotypes GA/AA increase the risk of oligoarthritis and polyarthritis types of JIA.Interestingly, our results showed that IL-17A rs8193036 minor allele C is associated with an increased risk, especially for the polyarthritis type of disease.Extended oligoarthritis has basically the same disease features as polyarthritis but with a more gradual onset, and in line with this, among the oligoarthritis group, all patients with an extended type of oligoarthritis had the variant genotype type of both IL-17A rs8193036 and IL-17A rs2275913.The number of patients with extended arthritis was low, but it remains of interest to further study if the IL-17A rs8193036 polymorphism in oligoarthritis patients could serve to predict the risk of disease extension.Thus far, there has been no published data on utilizing secukinumab (currently the only IL-17 blocker having an indication in pediatric rheumatic disease) in the treatment of other pediatric rheumatic conditions than ERA and psoriatic arthritis.It is tempting to speculate that some oligoarthritis or polyarthritis patients with the IL-17A rs8193036 variant genotype could benefit from IL-17 blocking. ANA-positive patients have been suggested to form a subtype of JIA independent of the number of joints affected [30].Typical features of ANA-positive JIA include female predominance, disease onset in the early years, and a high risk of uveitis [30].However, neither IL-17A rs8193036 nor IL-17A rs2275913 seemed to associate with ANA-positivity in our study.In contrast, the IL-17A rs9395767 variant genotype was more abundant in ANA-positive patients than in ANA-negative patients, with the majority of patients with the variant genotype being ANA-positive.The functional role of this particular IL-17A SNP remains uncharacterized.A study by Gan et al. has demonstrated that higher serum levels of IL-17A, and especially higher levels of IL-17F, are associated with higher autoantibody (including ANA) levels in patients with primary Sjögren's syndrome.Interestingly, treatment of patients with psoriasis or psoriatic arthritis with IL-17 blocker secukinumab has been shown to lead to diminished levels of ANA [43].None of the studied IL-17A or IL-10 polymorphisms seemed to mediate the risk of uveitis.However, uveitis is not always present at disease onset and the risk of developing uveitis is greatly diminished after disease onset while patients are receiving efficient therapy, which may influence the outcome in this type of study setting. IL-10 polymorphisms have been previously associated with the risk of JIA in some studies.Fathy et al. (2017) have demonstrated an increased risk of JIA and, especially, of polyarthritis in Egyptian children with the IL-10 rs1800896 variant AA genotype [44].However, no association of IL-10 rs1800896, IL-10 rs1800871, and IL-10 rs1800872 with JIA was found in an Iranian study [45], nor in the meta-analysis that included seven studies by Jung et al. 
[46].In our study, IL-10 polymorphisms rs1800896, rs1800871 and rs1800872 were not found to significantly associate with the risk of JIA or its subtypes in this Finnish JIA population.However, the IL-10 rs18000896 variant/IL-10 rs1800871 variant haplotype was more common in the JIA group compared to the controls, suggesting that while the effect of one individual variation in the IL-10 gene promoter is not enough to increase the risk of JIA, two might be.Based on most studies on the IL-10 rs 18000896 polymorphism in chronic inflammatory diseases, it is commonly believed to lead to diminished IL-10 production, although some studies have associated it with higher IL-10 production [47].In our study, both the IL-10 18000896 variant genotype and IL-10 rs 18000896 variant/IL-10 rs 1800871 variant haplotype were associated with lower IL-10 production.Also, Fathy et al. have found lower serum levels of IL-10 in JIA patients with the variant IL-10 rs1800896 genotype compared to those with the WT genotype [44].A study by Hee et al. (2007) investigating the role of the IL-10 gene promoter polymorphism in RA in Malaysian patients showed that the haplotype comprising all minor alleles in rs1800896, rs1800871, and rs1800872 (ATA haplotype) was associated with lower IL-10 production when compared with the other haplotypes, and the RA patients who did not display the ATA haplotype produced significantly higher levels of IL-10 when compared with those carrying either one or two polymorphisms [48]. Based on PGA, patients with the IL-10 rs1800871 variant genotype were found to have significantly lower disease activity 1 year after diagnosis compared to patients with the IL-10 rs1800871 WT genotype.This could reflect a better response to medication, as there was a trend towards higher PGA with the patients with variant genotypes at diagnosis; however, this data did not reach statistical significance.CHAQ was lower in patients with the IL-10 rs1800871 variant genotype both at diagnosis and at 1 year.However, the correlation of CHAQ with the joint counts has been shown to be low in early disease and better with more longstanding disease, with the correlation increasing alongside disease duration [49].In a previous work by Schotte et al., the association of IL-10 promoter SNPs (−2849 G>A (rs6703630), −1082 G>A (rs1800896), −819 C>T (rs1800871), and −592 C>A (rs1800872) with a response to etanercept treatment in RA patients was studied [50].They found the most favorable response in patients with the -2849 A-allele or the haplotypes AGCC and GATA, whereas an unfavorable treatment response was found in patients with the GGCC genotype [50]. Although the number of ERA patients included in our study was very low, it stands out that IL-10 is differently regulated in ERA compared to oligoarthritis and polyarthritis.Our results indicate a trend towards the association of ERA with the variant IL-10 rs1800896 genotype, which would be in line with the findings by Braga et al., who have shown that the rs1800896 variant genotype increased the risk of ankylosing spondylitis (AS) by three-fold [51].They also showed that this association was independent of HLA-B27.Also, a study by Mu et al. 
recognized IL-10 rs1800896 (along with other IL-10 polymorphisms) as a risk factor for AS in the Chinese population [52].The IL-10 rs1800896 variant genotype has been shown to be associated in a number of studies with high IL-10 production in AS [51].While we did not find increased serum levels of IL-10 in all JIA patients with the IL-10 rs1800896 variant genotype, rather, we found decreased serum levels along with decreased IL-17F levels compared to patients with the WT genotype; however, this may not apply to patients with ERA.Furthermore, there was a general trend towards higher IL-10 serum levels in these patients. The global incidence of juvenile idiopathic arthritis (JIA) is relatively low.For instance, in Finland, the incidence rate is 31.7 cases per 100,000 individuals [53].This low incidence rate significantly hampers the recruitment process, resulting in a slower accrual of study participants.Consequently, the limited number of available patients represents a primary limitation of this study, and in the case of ERA patients, the number of subjects was definitely too low to reliably define the population. Therefore, we considered our study to be an exploratory study to discover the preliminary associations of IL-17A and IL-10 polymorphisms with JIA, and, if present, they need to be replicated before more credence is given to these results.Specially, the correlations of polymorphisms with disease activity need to be confirmed in larger populations.Another clear limitation regarding the cytokine data is that the majority of patients were on varying medications at the time of sampling and that this most likely affected the results; however, the medication was included in the Quade nonparametric ANCOVA model as one covariate to minimize an effect. Study Design and Subjects All study subjects were recruited from the Pediatric Rheumatology Clinic at Turku University Hospital in Turku, Finland, between November 2020 and September 2023.Altogether, 130 patients were invited, of which 122 provided their consent to participate and were enrolled.Study samples were collected by providing the guardian with coded tubes to be given to the lab personnel while having routine laboratory tests taken and, eventually, study samples were obtained from 98 patients who were included in the analyses.At the time of initial recruitment, all patients were under 16 years of age and fulfilled the ILAR classification criteria for JIA.The exclusion criteria included systemic onset of JIA, seropositive polyarthritis (due to the expected small number of patients), a known chromosomal abnormality, known genetic disorder, participation in other clinical intervention studies, or whose informed consent could not be ensured due to the lack of a shared language or other factors.As a control population, we used genetic data from the 1000 Genomes Project, which consists of 99 healthy Finns (FIN population) [54].In addition, we analyzed one SNP of IL-17 (rs2275913) and two SNPs of IL-10 (rs1800896 and rs1800871) in our previous STEP study in which healthy infants were followed [55]. Before enrollment in the study, the patients and their guardians were informed about the procedures and the aim of the study, and their informed written consent was obtained. The current study was approved by the Ethics Committee of Turku University and the Hospital District of Southwest Finland (ETMK 31/1801/2020, 16 June 2020). 
The following clinical data/parameters were collected: the age at disease onset (time of diagnosis and birthdate), the age at the time of sampling, sex, duration of disease, erythrocyte sedimentation rate (ESR), C-reactive protein (CRP) levels, antinuclear antibody titer (ANA), human leukocyte antigen (HLA)-B27, number of affected joints, history of uveitis, history of temporomandibular joint (TMJ) arthritis, medication, child health assessment questionnaire (CHAQ), physician global assessment of disease activity (PGA), and juvenile arthritis disease activity score (JADAS10). The cumulative activity and clinical parameters at the selected time points (at diagnosis, one year after diagnosis, three years after diagnosis, at the start of the first biological, one year after the start of the first biological, and at the time of sampling) were retrospectively collected from the medical records by a physician.
Sampling and Data Collection
Serum and EDTA samples were drawn from a peripheral vein at the time of routine monitoring tests. The serum samples were allowed to clot for 60 min at room temperature and were then centrifuged at 2000× g for 10 min at 4 °C, transferred into two cryovials, and stored at −20 °C until used for the analyses.
Genetic Analyses
Genomic DNA was extracted from 250 µL of EDTA whole blood samples using the E.Z.N.A Blood DNA Mini Kit (Omega Bio-tek, Inc., Norcross, GA, USA) according to the manufacturer's protocol. DNA concentrations were determined by a spectrophotometer (NanoDrop 2000, Thermo Scientific, Waltham, MA, USA), and the DNA samples were stored at −20 °C prior to the analyses.
A total of eight IL-17 and three IL-10 polymorphisms were analyzed (Figure 4) from JIA patients using Sanger sequencing at Eurofins Genomics (Eurofins Genomics GmbH, Konstanz, Germany). All the SNPs that had a minor allele frequency of >5% were included in the final analyses and are presented in Table 2.
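The >5% minor-allele-frequency filter mentioned above can be expressed as a short helper; the genotype counts used here are hypothetical.

```python
# Illustrative MAF filter: keep only SNPs with minor allele frequency > 5%.
def minor_allele_frequency(n_hom_ref, n_het, n_hom_alt):
    n_alleles = 2 * (n_hom_ref + n_het + n_hom_alt)
    alt_freq = (2 * n_hom_alt + n_het) / n_alleles
    return min(alt_freq, 1 - alt_freq)

genotype_counts = {            # SNP -> (hom. reference, heterozygote, hom. alternate)
    "rs2275913": (30, 48, 20),
    "rare_snp":  (95, 3, 0),
}
kept = {snp: c for snp, c in genotype_counts.items()
        if minor_allele_frequency(*c) > 0.05}
print(sorted(kept))            # -> ['rs2275913']
```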
The primers for IL-17 and IL-10 were designed with the Primer-BLAST design tool (National Center for Biotechnology Information (NCBI), U.S. National Library of Medicine, Bethesda, MD, USA) and were ordered from Eurofins Genomics (Eurofins Genomics GmbH, Konstanz, Germany). The Invitrogen Platinum Taq DNA polymerase (Thermo Fisher Scientific Inc., Waltham, MA, USA) was used for the PCR reactions according to the manufacturer's instructions. All PCR reactions were performed in a total volume of 30 µL, consisting of 4 µL of genomic DNA and 26 µL of master mix, including 1.5 mM MgCl2, 1× buffer, 0.2 mM dNTPs, 0.2 µM of each primer, and 2 U of Platinum Taq DNA polymerase enzyme (Thermo Fisher Scientific). The PCR conditions were as follows: an initial activation of 2 min at 95 °C, followed by 40 cycles of denaturation at 94 °C for 30 s, annealing at the primer-specific temperature for 30 s, and extension at 72 °C for 1 min, with no final extension step. The primers and annealing temperatures used in the PCR are listed in Table 7. Prior to the sequencing, the PCR products were purified enzymatically with Thermo Scientific Exonuclease FastAP and Exo I (Thermo Fisher Scientific, Waltham, MA, USA).
[Table 7 fragment: F: 5′-CCAGATATCTGAAGAAGTCCTG-3′; R: 5′-CCTAGGTCACAGTGACGTGG-3′; 55; 901; rs1800871, rs1800872. 1 Primer used for Sanger sequencing. 2 The frequency of the point mutation was less than 5%, so it is not presented in Table 2 or included in the analyses.]
The IL-17A rs2275913 and IL-10 rs1800896 and rs1800871 SNPs from the STEP study subjects have been previously analyzed using the Sequenom massARRAY iPlex Gold system (Sequenom Inc., San Diego, CA, USA) at the University of Eastern Finland, Kuopio, Finland [55].
Cytokine Measurements
The serum cytokine concentrations of IL-17A, IL-17F, IL-10, and IL-6 were measured using the multiplex immunoassay (Bio-Plex 200, Bio-Rad Laboratories, Hercules, CA, USA) with the Milliplex Map Human Th17 Magnetic Bead Panel (Merck KGaA, Darmstadt, Germany) according to the manufacturer's protocol. There was no or negligible cross-reactivity between the antibodies for an analyte and any of the other analytes in the panel. The reported intra-assay %CVs were 3, 2, 3, and 5, and the inter-assay %CVs were 13, 10, 11, and 7 for IL-17A, IL-17F, IL-10, and IL-6, respectively. The cytokine concentrations that were below the lowest detection limit were assigned values of half of the minimum detectable concentration of each cytokine, which were 1.05 pg/mL, 4.5 pg/mL, 0.15 pg/mL, and 0.85 pg/mL for IL-17A, IL-17F, IL-10, and IL-6, respectively.
Statistical Analysis
Statistical analyses were performed using SPSS software, version 28.0 (IBM Corp., Armonk, NY, USA) and GraphPad Prism, version 8 (GraphPad Software, La Jolla, CA, USA). The calculation of the sample size and the power analysis were performed with the QUANTO program, version 1.2.4 [56]. The current sample size would have 35–93% power at a 0.05 significance level to detect a difference between the controls and the patients with JIA when the controls-per-case ratio is 1:10 (IL-17A rs2275913 and IL-10 rs1800896, rs1800871, and rs1800872). When the controls-per-case ratio is 1:1, the power of the study is 57% at a 0.05 significance level, indicating that these results are exploratory and should be interpreted with a critical perspective.
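The handling of readings below the assay's detection limit, described under Cytokine Measurements above, amounts to a simple substitution rule; a minimal sketch (with an illustrative reading list) is shown below.

```python
# Below-detection-limit handling: readings under the minimum detectable
# concentration are replaced with half of that minimum before analysis.
DETECTION_LIMIT = {"IL-17A": 2.1, "IL-17F": 9.0, "IL-10": 0.3, "IL-6": 1.7}  # pg/mL
# (half of these limits gives the 1.05, 4.5, 0.15 and 0.85 pg/mL values quoted above)

def impute_below_lod(cytokine, readings):
    lod = DETECTION_LIMIT[cytokine]
    half_lod = lod / 2.0
    return [r if r >= lod else half_lod for r in readings]

print(impute_below_lod("IL-10", [0.1, 5.2, 0.25, 14.8]))  # -> [0.15, 5.2, 0.15, 14.8]
```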
Deviations from the Hardy-Weinberg equilibrium (HWE) for the IL-17 and IL-10 SNPs were studied using the chi-square test. Categorical data were compared using the chi-square test or Fisher's exact test. Non-normally distributed data were compared using the Mann-Whitney U test. Odds ratios (OR) with 95% CIs were determined, and repeated-measures ANOVA was used to analyze the repeated measurements of the subjects. The Quade nonparametric ANCOVA test was used to calculate the associations between the serum IL levels and the SNPs with the following covariates: sex, JIA subtype, age at the time of sampling, disease duration, JADAS10 (= disease activity), and the use of biological medication. A two-tailed p < 0.05 was considered significant, and in the haplotype analyses, the Bonferroni correction was used to determine the adjusted p-values.
Conclusions
In this study, we present new evidence on the roles of IL-17A and IL-10 polymorphisms in the pathogenesis of JIA. We have shown that carrying IL-17A rs2275913 (AG/AA) or rs8193036 variant genotypes (CT/CC) and IL-10 rs1800896 variant genotypes (TC/CC) increases the risk of JIA, and specifically, the risk of seronegative polyarthritis is increased in the IL-17A rs8193036 variant background. Also, the IL-17A rs2275913 minor allele A was identified as a risk factor for the oligoarthritis and polyarthritis types of JIA. We further show that the IL-10 rs1800871 variant genotype may be related to a better response to treatment at 1 year. Differences in the serum cytokine profiles between the patients with wildtype and variant genotypes of the IL-17A and IL-10 polymorphisms were detected. Further studies with treatment-naïve patients are needed to specify the associations with cytokine production. Also, further studies with larger sample sizes are planned to specify the role of these cytokine polymorphisms in disease subtypes and patients' responses to treatment.
Disease activity parameters were recorded at four time points: A = at the time of diagnosis, B = one year after diagnosis, C = three years after diagnosis, and D = at the time of sampling. Abbreviations: HLA = human leukocyte antigen, ANA = antinuclear antibody, TMJ = temporomandibular joint, ESR = erythrocyte sedimentation rate, CRP = C-reactive protein, JADAS = juvenile arthritis disease activity score, PGA = physician global assessment of disease activity, and CHAQ = child health assessment questionnaire.
Figure 1. The linkage disequilibrium (LD) block structure of the controls (a) and JIA patients (b). The LD consisted of four SNPs located in the IL-17A gene and three SNPs located in the IL-10 gene. The LD is displayed according to the following colour scheme: bright red, LOD > 2, D′ = 1; shades of pink/red, LOD > 2, D′ < 1; and white, LOD < 2, D′ < 1. D′ values multiplied by 100 are marked in each cell; cells without a number indicate D′ = 100.
Figure 3. Disease activity in all JIA patients at the time of diagnosis and 1 year after diagnosis. Disease activity score: 0 = inactive, 1 = low activity, 2 = moderate activity, and 3 = high activity.
Funding: This research was funded by The Finnish Cultural Foundation, 00210761 and 00230841; the University of Turku; the Foundation for Pediatric Research; the Maire Lisko Foundation; the Tampere Tuberculosis Foundation, 26006205 (Q.H.); and the Sigrid Juselius Foundation, 240045 (Q.H.).
Institutional Review Board Statement: This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Ethics Committee of The Hospital District of Southwest Finland (ETMK 31/1801/2020, 16 June 2020).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Data are available upon request.
Table 1. Characteristics of the JIA patients.
Table 2. The distribution of IL-17A and IL-10 genotypes in the JIA patients and controls.
Table 3. The distribution of IL-17A and IL-10 genotypes in JIA patients by the three main JIA subtypes. The data are presented as numbers (n) of children and valid percentages (%). Fisher's exact or chi-square tests were used to analyze p-values. The odds ratio (OR) or relative risk (RR) and the corresponding 95% confidence intervals (95% CI) were calculated. 1 Dominant model, where the wildtype was compared with heterozygous and homozygous variants. 2 When the major genotype AA was compared with the homozygous variant genotype TT: OR 3.8; 95% CI 1.13–13.09.
Table 5. Serum cytokine levels in the patients. All cytokine concentrations (median and interquartile range, IQR) in the serum are expressed as pg/mL.
Table 6. (Continued.) All cytokine concentrations (median and interquartile range, IQR) in the serum are expressed as pg/mL. With the dominant model, p-values are 0.019 for 2 IL-17A, 0.002 for 3 IL-17F, and 0.01 for 4 IL-10.
Table 7. Primers used for PCR and Sanger sequencing.
Female Genital Mutilation in Shii'a Jurisprudence
Female genital mutilation is an ancient tradition which has been imbued with myth and religious belief through the millennia. As a vast portion of its victims are Muslims, this study discusses whether this tradition has a real Islamic root. The conclusion demonstrates a huge gap between the currently prevailing impression and the real attitude of Islam, especially the Shii'a sect, towards FGM.
Opinion
Female genital mutilation has a lengthy history among populations. According to a quotation attributed to Imam Ali (the first Imam of Shii'a and the fourth Imam of Sunni Muslims), the first one who ordered male mutilation was Abraham the prophet, who commanded his wife Sarah to perform mutilation on their son Isaac [1]. So, in parallel with male mutilation, which developed in the Abrahamic religions due to his order, female genital mutilation, although lacking any binding religious law, became popular. Nowadays, female genital mutilation still takes place all over the world, especially in developing African countries, despite increasing waves of objection among human rights experts and law and ethics specialists [2]. This short manuscript aims at studying whether Female Genital Mutilation (FGM) has powerful religious support or not. As the main countries carrying out FGM have different proportions of Muslims, this paper focuses on the Islamic perspective regarding FGM and examines the details of Shii'a jurisprudence, one of the main sects in the Muslim world, with about 400 million followers worldwide.
First of all, it is notable that the context of quotations attributed to the Shii'a Imams (Leaders) shows that when Islam emerged (7th to 9th century A.D.), FGM was a common surgery among female children as well as male ones in Hijaz (recognized as Saudi Arabia after World War I). Therefore, it is not surprising to find some quotations permitting FGM at that time. However, in these statements there is no sign of sanctity for FGM; on the contrary, in one statement the interest of the future couple in having a pleasant sexual relationship is mentioned as the reason for authorizing FGM [3]. This permission is limited to partial clitoridectomy, which involves partial removal of the clitoris; thus, it does not seek to deprive the female of sexual pleasure, but rather, according to Shirazi, the aim of FGM is to provide the couple with a more successful sexual relationship. Sanctity requires compulsion, but as mentioned above, FGM is not regarded as obligatory by Muslim jurists, including Shii'a clerics [4]. The only mandatory mutilation is male mutilation, which itself has its critics among Shii'a jurists. Even though denying the mandatory nature of male mutilation is a minority viewpoint, the situation is totally different when we discuss FGM. As stated before, there is no compulsory rule imposing FGM on Muslim women, although some believe in its beneficence [5]. In some quotations attributed to the Shii'a Imams, it is suggested that FGM take place when the girl reaches 7 years of age. Hence, FGM before this age is not regarded as beneficent in the Islamic view. This is one more sign conveying the message that Islam does not permit any harm to the female subject of mutilation and, to protect the female child, postpones it until she reaches that age [6]. According to one Shii'a thinker, the ruler has no authority to coerce a girl to have FGM.
Considering the evident Islamic law that grants wide authority to the ruler in society and to the father in the family, alongside the currently decreasing authority of both over society and family members, one concludes that nowadays it is not within a father's discretion to decide about his daughter's FGM. So it seems that any FGM imposed without the girl's consent should be forbidden and, considering the non-sacred nature of FGM, it is the girl herself who has to decide, provided she has become mature and enjoys full capacity [7]. Despite some current thought that recognizes a reward for FGM in the hereafter, a quotation of the 6th Imam of Shii'a acknowledges that FGM is not part of the "Sunna", i.e., it is neither obligatory nor recommended as a religious act expected to lead to heavenly reward. Moreover, a contemporary Shii'a cleric states: "If FGM is not a recommended religious act (=Mustahab), it must be regarded as an oppression of the girl's rights". He adds: "Islam authorized FGM only because it was not due time to interdict it and (the best solution) was to let this custom be abandoned gradually by the society itself" [8]. A joint consideration of Shii'a jurists from the emergence of Islam until now clearly demonstrates a decreasing tendency to authorize FGM. We even find that some contemporary Shii'a jurists refuse to answer their followers' questions about the recommendation of FGM in Islam, and through this cautious silence the message is clearly heard that Islam, and especially the Shii'a perspective, does not recognize FGM as a religious matter at all. In other words, the current trend of the Shii'a school denies any Islamic recommendation of FGM and instead refers it to the category of cosmetic surgeries, which requires other conditions, including the girl's informed consent after acquiring full capacity, i.e., the achievement of sufficient power of rational discretion, which naturally cannot occur in childhood [9][10][11][12][13]. As considered above, the Islamic attitude, and especially the Shii'a way of thought, is dramatically different from common knowledge, even among Muslims themselves. It may be the shadow of ignorance, even among specialists, that masks the face of truth. Religious reference texts, including Islamic quotations, enjoy a wide range of tolerance, and the authors deeply believe that with more caution, more cleverness, and more responsibility toward human values, we will witness fewer conflicts between religious teachings and currently admired rational lifestyles. This orientation is fully compatible with a famous quotation from the prophet Mohammad, who describes his mission as "to attain the peak of morality".
v3-fos-license
2021-11-11T06:23:50.858Z
2021-11-10T00:00:00.000
243939594
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://jgeb.springeropen.com/track/pdf/10.1186/s43141-021-00266-4", "pdf_hash": "53b371ac6135f841cfd9d6dd630f2a4e505ac5db", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44881", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "e3c4054f3def82d49936b70176c159a016644973", "year": 2021 }
pes2o/s2orc
Evaluation of seven gene signature for predicting HCV recurrence post-liver transplantation Background Orthotopic liver transplantation (OLT) offers a therapeutic choice for hepatocellular carcinoma (HCC) patients. A major cause of poor outcome after liver transplantation is HCV recurrence. Several genome-wide association studies (GWAS) have reported many genetic variants to be associated with HCV recurrence. Seven gene polymorphisms form a cirrhosis risk score (CRS) signature that could be used to distinguish chronic HCV patients at high risk from those at low risk for cirrhosis in non-transplant patients. This study aims to examine the association of the CRS score and other clinical parameters with the probability of HCC emergence and/or the rate of HCV recurrence following liver transplantation. Results Seven gene polymorphisms, forming the CRS, were genotyped by real-time PCR using an allelic discrimination protocol in 199 end-stage liver disease patients (79 child A, 43 child B, and 77 child C), comprising 106 patients who underwent liver transplantation. Recipient CRS scores were correlated with HCV recurrence (HCV-Rec) at the end of the third year after OLT. Around 81% (39) of recipients whose donors had low steatosis (LS; <3.5%) revealed no HCV recurrence (non-Rec) (p<0.001). The CRS score could distinguish between child A, child B, and child C only in the low-risk group. Among the HCV-Rec group, 27% (8/30), 40% (12/30), and 33% (10/30) fell into the high, moderate, and low CRS risk groups, respectively. Stepwise logistic regression identified two features more likely to be seen in HCV-Rec patients: abnormal ALT [OR, 1.1; 95% CI, 1.02–1.2] and donor steatosis >3.5% [OR, 46.07; 95% CI, 1.5–1407.8]. Conclusions Accordingly, the CRS score seems to be of limited use for predicting HCV recurrence after OLT. ALT and donor steatosis (exceeding 3.5%) can significantly promote HCV recurrence post-OLT. Moreover, the combination of MMF and CNI positively heightens HCV recurrence. Supplementary Information The online version contains supplementary material available at 10.1186/s43141-021-00266-4. Background Despite the incredible progress in treating hepatitis C virus (HCV), many patients are still at risk of disease progression to cirrhosis and hepatocellular carcinoma (HCC) at different rates [1][2][3]. Our country launched the "100 million lives" campaign, declaring that viral hepatitis should be eliminated by 2030. Elimination of HCV will confer substantial health and economic benefits and, most critically, the avoidance of more than 1.2 million deaths yearly [4]. The Child-Pugh score has been used as a prognostic predictor of postoperative mortality and has been taken into account in a number of staging systems [5,6]. To date, surgery remains the mainstay for the long-term survival of HCC patients; nevertheless, HCC is frequently associated with chronic viral hepatitis, and over 80% of tumors are unresectable [7,8]. OLT offers a therapeutic choice for HCC patients, particularly in cirrhotic patients without distant metastasis of HCC. Nonetheless, the main cause of poor outcome in liver transplanted patients post-OLT is HCV recurrence [9]. Recurrent HCV-associated liver disease leads to a consequent loss of the graft in about one third of patients within 5 years of OLT, and recurrent HCV-associated graft failure is the main cause of patient mortality and re-transplantation in the 5th postoperative year [10]. 
Several factors are crucial to minimize the complications and improve the clinical outcome, such as the choice of a suitable donor, appropriate immunosuppressive treatment, and genetic risk stratification prior to transplantation [11,12]. To date, no study has delineated specific predictive biomarkers of HCV recurrence in post-transplant patients [19]. The Cirrhosis Risk Score (CRS) is a polygenic signature first defined by Huang [20], and it stratified the cirrhosis risk in many populations better than clinical factors did [20,21]. The CRS relies on a set of seven single-nucleotide polymorphisms (SNPs) in six genes: AP3S2, AQP2, AZIN1, STXBP5L, TLR4, TRPM5, and in the intergenic region between DEGS1 and NVL (see Table 1). Theoretically, the CRS may be used to stratify patients who are eligible for OLT better than a liver biopsy. The latter represents a single time point in the extended natural history of chronic infection, while genetic markers are "life-long." Also, we have a growing base of evidence linking a variant in the IL6 rs1800795 G allele with HCV recurrence post-LT [28]. Moreover, we finished a promising study which validated CRS performance in 240 Egyptian HCV-infected patients with different fibrosis grades [29]. Herein, to fuel the novel debate on the Child-Pugh score, we assess it alongside the CRS signature (as an intrinsic genetic marker). Moreover, we aim to validate the CRS signature in recipients to assess the risk of HCV recurrence following OLT in Egyptian liver transplanted patients, as it may serve as an early noninvasive genetic biomarker for HCV recurrence post-OLT. Study design The study included a total of 199 end-stage liver disease patients (79 child A, 43 child B, and 77 child C), comprising 106 patients who underwent OLT. All patients underwent orthotopic LT for HCV. The histologic degree of macrovesicular steatosis was determined. Patients suffering from acute rejection episodes were excluded. The primary immunosuppressive regimen for all patients consisted of a calcineurin inhibitor (CNI) with or without Mycophenolate mofetil (MMF) or Everolimus in the second year (according to specific side effects or renal function). In this study, the patients were divided into two groups according to HCV recurrence: group 1, the HCV-Rec group (n=32), and group 2, the non-Rec group (n=48). All patients were also evaluated by clinical and laboratory parameters, including biochemical tests (alanine aminotransferase (ALT), aspartate aminotransferase (AST), albumin, total bilirubin, and platelet count (Plt)), serological testing (anti-HCV), and histopathology of liver biopsy. The diagnosis of HCC was made after reviewing images generated with several imaging modalities. Patients having other cancers were excluded. Liver biopsy evaluation Liver biopsy was performed for all recipients at the end of the third year following primary OLT. Liver biopsies were evaluated by a pathologist who was unaware of the clinical and demographic data that were obtained. Fibrosis stages were defined using the METAVIR scoring system and categorized according to F0: none, F1: portal widening, F2: bridging fibrosis, and F3: bridging fibrosis with lobular distortion. We also stratified the fibrotic patients based on the inflammation activity into A1, A2, and A3, referring to mild, moderate, and severe, respectively. Extraction of peripheral blood DNA Peripheral blood on EDTA was withdrawn from all subjects, and genomic DNA was extracted using genomic DNA extraction kits (Qiagen, Milan, Italy). 
Purified genomic DNA samples were quantified by ultraviolet absorbance at 260 nm using a Thermo Scientific NanoDrop™ spectrophotometer. The DNA was stored at −20°C. Cirrhosis risk signature (CRS) genotyping The 7 SNPs identified previously by Huang et al. [20] were genotyped using a real-time PCR protocol based on the pre-validated TaqMan MGB™ probe allelic discrimination assay (Applied Biosystems). Briefly, 1.25 μL of a 40X combined primer and probe mix (ABI/Life Technologies, USA) was added to 12.5 μL of 2X TaqMan® Universal PCR master mix (ABI/Life Technologies, USA) together with template in a 25-μL final volume made up with DNAse/RNAse-free water (Invitrogen/Life Technologies, USA). The cycle conditions were 95 °C for 10 min, then 95 °C for 15 s and 60 °C for 1 min; the last two steps were repeated 40 times. The PCR run was performed on a Rotor-Gene real-time PCR system (Qiagen, Santa Clarita, CA). Allelic discrimination plots were produced in the Statistical Package for the Social Sciences (SPSS version 16.0; SPSS, Chicago, IL). In this study, we deliberately used the classification established in the original publication by Huang et al. [20]: a CRS > 0.7 signifies patients with a high risk of advanced liver fibrosis, a CRS < 0.5 signifies a low risk of fibrosis, and a CRS of 0.5 to 0.7 signifies an intermediate risk; based on the score, each patient was assigned to the appropriate risk category. Statistical analysis Data were analyzed using SPSS 16.0. Data were presented as mean ± standard deviation. Categorical variables were compared with the χ2 or Fisher's exact tests, as appropriate, and the effect of differences was established by calculating the odds ratio with the 95% confidence interval (95% CI). According to the variable distribution, one-way ANOVA or the nonparametric Kruskal-Wallis test was used for multi-group comparisons. The nonparametric Mann-Whitney U test was used to compare median values between two groups for quantitative data. A difference between groups was considered significant if P < 0.05. Description of the study patients Our study started with 199 end-stage liver disease patients who were categorized into 79 Child-Pugh class A, 43 child B, and 77 child C. Male patients represented 75% of child A, 81% of child B, and 75% of child C (p=0.7). Patients' baseline characteristics are represented in Table 2. Medical data records allowed a follow-up of only 120 patients who were eligible for liver transplantation (see Table 2). Genotyping of the seven genes The individual 7 candidate SNPs included in the genetic risk score (CRS) for each patient are listed in the S1 table. Some of the allelic discrimination results obtained from the real-time PCR for some genes are represented in Fig. 1. Previously, Huang et al. selected the seven genes that are involved in cirrhosis prediction and evaluated the probability of each genotype in cirrhotic and non-cirrhotic patients in a Caucasian population. Their findings were tabulated in Table 3, which illustrates that each SNP can take the value 0 or 1 based on the obtained genotype, and each value then has two probabilities (assuming that the patient can be cirrhotic or non-cirrhotic). Each SNP was calculated independently of the other SNPs. The values obtained from Table 3 were substituted into a Naïve Bayes formula, in which P(S|cirrhosis) and P(S|no cirrhosis) refer to the estimated probabilities for cirrhotic and non-cirrhotic patients, respectively. 
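The formula itself is not reproduced above; purely as a hedged illustration, the following Python sketch assumes the standard naïve Bayes form, in which per-SNP conditional probabilities of the kind tabulated by Huang et al. [20] are multiplied and normalized into a posterior score between 0 and 1, and then mapped onto the risk categories used in this study. The SNP names and probability values in the example are placeholders, not the published values.

```python
# Hedged sketch of a naive-Bayes-style CRS calculation.
# Assumption: CRS = P(S|cirrhosis) / (P(S|cirrhosis) + P(S|no cirrhosis)),
# with P(S|class) taken as the product of per-SNP conditional probabilities.
# The lookup values below are illustrative placeholders, NOT the values of Huang et al.

ILLUSTRATIVE_TABLE = {
    # snp_id: {indicator_value: (P(value | cirrhosis), P(value | no cirrhosis))}
    "rs_SNP1": {0: (0.40, 0.60), 1: (0.60, 0.40)},
    "rs_SNP2": {0: (0.55, 0.45), 1: (0.45, 0.55)},
    "rs_SNP3": {0: (0.30, 0.50), 1: (0.70, 0.50)},
}

def crs_score(genotype_indicators: dict) -> float:
    """genotype_indicators maps SNP id -> 0/1 indicator derived from the genotype."""
    p_cirr, p_no_cirr = 1.0, 1.0
    for snp, value in genotype_indicators.items():
        pc, pn = ILLUSTRATIVE_TABLE[snp][value]
        p_cirr *= pc
        p_no_cirr *= pn
    return p_cirr / (p_cirr + p_no_cirr)

def risk_group(score: float) -> str:
    # Cutoffs as used in the study (classification of Huang et al.).
    if score > 0.7:
        return "high"
    if score < 0.5:
        return "low"
    return "intermediate"

if __name__ == "__main__":
    patient = {"rs_SNP1": 1, "rs_SNP2": 0, "rs_SNP3": 1}
    s = crs_score(patient)
    print(f"CRS = {s:.3f}, risk group = {risk_group(s)}")
```

In a real analysis the lookup table would hold the seven published per-SNP conditional probabilities and the indicator coding defined in the original publication; the sketch only shows how such values combine into a single score.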
In the current study, we followed the same steps to build up the CRS value and to evaluate the validity of this formula in the Egyptian population. The detailed calculation method is shown in Huang et al. [20] (Table 4). Importantly, we also stratified the patients based on whether clinically evident HCC occurred or not; the CRS median for patients who progressed to HCC was 0.62. Description of the liver transplantation patients Of all 106 patients, the recipient's score was available in 80 patients (the CRS could not be calculated in the remainder owing to technical issues such as amplification failure of one of the SNP reactions). Among the 80 consecutive recipients, 78% were male and 22% female, with a mean age of 50.2±7.3 years, ranging between 23 and 60 years. The mean age of the HCV recurrence group (HCV-Rec) was 50.44 years (see Tables 5 and 6). Patients with HCV-Rec had statistically significantly higher levels of AST, ALT, GGT, ALP, and total and direct bilirubin (p for all <0.001), and significantly lower potassium (p=0.004), platelet count (p=0.001), WBC count (p=0.001), and albumin level (p=0.002), as compared to those in the no HCV recurrence (non-Rec) group. Urea, sodium, BMI, and creatinine levels were also slightly elevated but without reaching significance. The MELD score did not display any difference between the two studied groups. To examine the role of pre- and/or post-operative levels of HCV RNA on HCV-Rec frequency, each level was correlated independently with recurrence. Pre-OLT serum HCV RNA levels did not reveal any correlation (p=0.8). On the contrary, serum HCV-RNA levels post-OLT were 1806.20 ± 918.02 IU/mL in the HCV-Rec group and 1280.97 ± 888.48 IU/mL in the non-Rec group. The mean loads of serum HCV-RNA after OLT were significantly related to HCV-Rec (p = 0.002; see Table 6). Upon categorizing patients according to the immunosuppressive regimen, 24 (77.4%) of the HCV-Rec group were treated with CNI plus MMF, 4 (13%) were treated with CNI plus Everolimus, and 3 (10%) were treated only with CNI. On the other side, 13 (27%) of the non-Rec group were treated with CNI plus MMF, 9 (19%) were treated with CNI plus Everolimus, and 26 (54%) were treated with CNI only. Clearly, the CNI plus MMF regimen was significantly more frequent in the HCV-Rec group (p<0.001). Detection of the HCV recurrence according to donor steatosis percentage To gain insight into the impact of donor steatosis on HCV recurrence after transplantation, patients were grouped according to donor steatosis percentage into those whose donors had high steatosis (HS; > 3.5%) and those whose donors had low steatosis (LS; < 3.5%). Around 81% (39) of non-Rec patients had LS donors, while 61% (19) of HCV-Rec patients had HS donors. On the other hand, 39% (12) of HCV-Rec patients had LS donors, while 19% (9) of non-Rec patients had HS donors (p<0.001; see Table 7). Detection of the HCV recurrence according to CRS score According to the CRS score suggested by Huang et al., the patients were stratified into three risk subgroups (high risk, CRS > 0.7; moderate risk, CRS 0.5-0.7; low risk, CRS < 0.5). To determine whether the CRS score could discriminate between patients who experienced HCV-Rec and non-Rec patients, the distribution of the CRS score was compared between the two groups. Among the HCV-Rec group, 27% (8/30), 40% (12/30), and 33% (10/30) fell into the high, moderate, and low CRS risk groups, respectively, while among the non-Rec group, 30% (13/44), 37% (16/44), and 34% (15/44) fell into the high, moderate, and low CRS risk groups, respectively. 
Unfortunately, the association between CRS score subgroups and HCV-Rec did not reach statistical significance (p=0.9; see Table 6). Notably, the CRS values could not predict HCV recurrence: only 33.3% of the patients (10/30) with a CRS < 0.5, 40% of the patients (12/30) with a CRS of 0.5 to 0.7, and 27% of the patients (8/30) with a CRS > 0.7 suffered HCV-Rec (p=0.9). Importantly, the median of the CRS score was not significantly different between HCV-Rec and non-Rec patients (median=0.6 for both groups; p=0.4). Detection of the severity of inflammation according to CRS score To examine the potential role of the CRS in hepatic inflammation, patients were grouped into mild (A0F0-A1F1) and advanced inflammation (A2F2-A3F3) groups. Overall, 40% of the transplant patients progressed to at least A2F2 during follow-up, whereas 60% of the subjects remained between A0F0 and A1F1. Around 28% (8/29) of patients in the (A2F2-A3F3) group met the high-risk CRS score, while 31% (9/29) met the low-risk CRS score. In comparison, 29% (13/45) of patients in the (A0F0-A1F1) group met the high-risk CRS score, while 36% (16/45) met the low-risk CRS score (p=0.9). Detection of the severity of inflammation according to donor steatosis Regarding the impact of donor steatosis on the recipient's inflammation progression, around 61% (19/29) of the recipients who progressed to at least A2F2 during follow-up had an HS donor, whereas only 19% (9/45) of the recipients with mild inflammation (A0F0 to A1F1) had an HS donor. Around 81% (39/45) of recipients with mild inflammation (A0F0 to A1F1) had an LS donor, and 39% (12/29) of recipients who progressed to at least A2F2 had an LS donor (p<0.001). CRS scoring using new cutoff for HCV recurrence prediction Our latest study on 400 HCV-infected patients with different fibrosis grades concluded that our best CRS cutoff value, appraised from ROC curve analysis, is 0.59 (under publication); accordingly, the patients of the current study were regrouped into low-risk patients with a CRS < 0.59 and high-risk patients with a CRS above the mentioned cutoff. Of the patients who suffered HCV-Rec, 57% (17/30) had a high-risk CRS (> 0.59), whereas 41% (18/44) of non-Rec patients had a low-risk CRS (< 0.59) (p=0.8). Stepwise logistic regression analysis When stepwise logistic regression was applied to the baseline data, two features were more likely to be seen in HCV-Rec patients than in non-Rec patients: abnormal ALT [odds ratio (OR), 1.1; 95% confidence interval (CI), 1.02-1.2] and donor steatosis >3.5% (OR, 46.07; 95% CI, 1.5-1407.8; see Table 7). The results of this analysis are depicted in Table 8. The CRS was not an independent predictor of HCV-Rec. Discussion In HCV-related hepatic cirrhosis, hepatocellular carcinoma (HCC) occurs at an annual rate of about 3% [30]. Orthotopic liver transplantation (OLT) offers a treatment option for end-stage liver disease patients. HCV reinfection is nearly universal after OLT, and it is estimated that up to 70% of patients will develop histologic chronic hepatitis C [31], with a greater risk of graft rejection relative to recipients who are transplanted for other etiologies. It is noteworthy that genetic data can be used to assess disease risk, with possible therapeutic benefits [14,32]. The CRS score successfully differentiated chronic HCV patients at high risk from those at low risk for cirrhosis, better than clinical factors [20]. 
We currently examined the association of the CRS score with the probability of HCC emergence and/or the rate of HCV recurrence following liver transplantation. Theoretically, each of the seven most predictive markers provides only moderate predictability, whereas the combination of these 7 SNPs seems to be robust and predictive. The median of the CRS score significantly differentiated patients with clinically evident HCC from patients who did not progress to HCC. The median of the CRS score was significantly different between child A, child B, and child C only in the low-risk group. New research suggests that the use of the Child-Pugh score as a risk prediction tool should be revisited [33]. On the other side, the median of the CRS score was not significantly different between HCV recurrence and non-recurrence patients. Accordingly, the results in our cohort indicate that the CRS cannot predict HCV recurrence after OLT. However, a recent study shed light on the clinical significance of the CRS genotype in the donor organ and revealed a strong association between the donor CRS and early fibrosis progression after OLT, especially in HCV-negative patients [34]. It is worth noting that coinfection with other viruses triggers cellular apoptosis and accelerates HCC development; therefore, early diagnosis of cirrhosis is crucial to avoid the mortality associated with HIV. Fernández-Rodríguez et al. reported that the diagnostic value of the CRS for detecting liver fibrosis deterioration is limited in HIV/HCV coinfected patients [35]. Other well-known genetic variations such as IL1B and IL28B showed a statistically significant correlation with poor outcome post-transplantation [12]. Liver function tests have repeatedly been reported to affect HCV recurrence after OLT [10,12]. Indeed, our data showed that the increased risk of HCV recurrence was correlated with augmented ALT, AST, and ALP levels. Feurer et al. [36], by contrast, found no correlation between serum liver function levels and disease-free survival rates. Our data also indicate that HCV viral load post-transplant significantly affects HCV recurrence. Supportive studies affirmed that advanced donor age and/or high HCV viral loads post-transplant corresponded with aggressive HCV recurrence and allograft loss in HCV-positive liver transplant recipients [12,37]. A cyclosporine (calcineurin inhibitor, CNI)-based regimen is the main immunosuppression protocol followed in this study. This regimen was accompanied by mycophenolate mofetil (MMF) or Everolimus. CNI binds to the cytosolic protein cyclophilin (an immunophilin) of T-lymphocytes [38]. Mycophenolic acid acts as a selective and reversible inhibitor of inosine-5′-monophosphate dehydrogenase, whereas Everolimus is an inhibitor of the mammalian target of rapamycin (mTOR) [39]. In our study, the addition of MMF to CNI positively heightened the HCV recurrence rate, while Everolimus did not adversely alter it, which is supported by former studies [40]. Decisively, the outcome is better once a proper selection of patients is performed [10]. However, many surgeons have reported an augmented risk of inferior post-transplant outcomes in the case of donor livers with moderate or severe large droplet macrosteatosis (ld-MaS), although donor livers with small droplet macrosteatosis (sd-MaS) or mild (<30%) ld-MaS are safe for transplantation [41,42]. 
The combined analysis confirmed that keeping the degree of steatosis in donor livers below 3.5% avoids the possibility of a worse outcome. Donor liver steatosis impacts graft function, the long-term consequences for the recipient allograft, and donor hepatic recovery. Indeed, transplanting a steatotic liver may lead to ischemia/reperfusion injury that may progress to an advanced rate of early graft dysfunction. Safe cutoffs for transplantation range from 10 to 30%, in accordance with transplantation center regulations [43,44]. Our study is limited by the small sample size (due to sample scarcity) and by the absence of the donor genotype and/or of donors with macrovesicular steatosis of 30% or greater as a criterion for better comparison. No doubt the donor genotype may strengthen the genotype-phenotype association. From another point of view, the recipient genotype is more feasible, being obtainable before transplantation and earlier than the donor genotype. However, many studies correlated only the recipient genotype (for TLR4, IL6, and IL-28B SNPs) with HCV recurrence [28,41,45,46], and more attention is needed to identify new predictors. Conclusions Based on our results, the prognostic value of donor steatosis for HCV recurrence holds true in Egyptian CHC patients. Regression analysis showed that donor steatosis and ALT can significantly promote HCV recurrence post-OLT. Because of the lack of significance, the use of the Child-Pugh score as a prognostic tool needs to be reassessed. Moreover, it is unlikely that the CRS is applicable for predicting the probability of HCV recurrence after OLT.
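As a hedged, self-contained illustration (not the authors' actual analysis code or data), the following Python sketch shows how odds ratios with 95% confidence intervals of the kind reported in the stepwise logistic regression above can be obtained; the variable names and the synthetic data are placeholders introduced only for the example.

```python
# Illustrative only: synthetic data and hypothetical variable names.
# Shows how odds ratios (OR) and 95% CIs are derived from a logistic regression,
# analogous to the model reported for abnormal ALT and donor steatosis > 3.5%.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 80
df = pd.DataFrame({
    "alt": rng.normal(60, 25, n),        # hypothetical ALT values (U/L)
    "donor_hs": rng.integers(0, 2, n),   # 1 = donor steatosis > 3.5% (hypothetical coding)
})
# Synthetic outcome loosely driven by both predictors (illustration only).
logit = -4 + 0.03 * df["alt"] + 1.5 * df["donor_hs"]
df["hcv_rec"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["alt", "donor_hs"]])
model = sm.Logit(df["hcv_rec"], X).fit(disp=False)

or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
})
print(or_table.round(2))
```

Exponentiating the fitted coefficients and their confidence bounds yields the OR and 95% CI for each predictor; a stepwise procedure would additionally add or drop predictors based on a significance criterion.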
v3-fos-license
2019-10-19T13:02:31.424Z
2019-10-01T00:00:00.000
204772906
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1422-0067/20/20/5130/pdf", "pdf_hash": "dd68cb10595357b3d435f3400de8f426c94d8127", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44882", "s2fieldsofstudy": [ "Biology" ], "sha1": "6df6d6cc19c00bd85c2c6ebc2573bec3b73c7561", "year": 2019 }
pes2o/s2orc
Myoferlin Regulates Wnt/β-Catenin Signaling-Mediated Skeletal Muscle Development by Stabilizing Dishevelled-2 Against Autophagy Myoferlin (MyoF), which is a calcium/phospholipid-binding protein expressed in cardiac and muscle tissues, belongs to the ferlin family. While MyoF promotes myoblast differentiation, the underlying mechanisms remain poorly understood. Here, we found that MyoF not only promotes C2C12 myoblast differentiation, but also inhibits muscle atrophy and autophagy. In the present study, we found that myoblasts fail to develop into mature myotubes due to defective differentiation in the absence of MyoF. Meanwhile, MyoF regulates the expression of atrophy-related genes (Atrogin-1 and MuRF1) to rescue muscle atrophy. Furthermore, MyoF interacts with Dishevelled-2 (Dvl-2) to activate canonical Wnt signaling. MyoF facilitates Dvl-2 ubiquitination resistance by reducing LC3-labeled Dvl-2 levels and antagonizing the autophagy system. In conclusion, we found that MyoF plays an important role in myoblast differentiation during skeletal muscle atrophy. At the molecular level, MyoF protects Dvl-2 against autophagy-mediated degradation, thus promoting activation of the Wnt/β-catenin signaling pathway. Together, our findings suggest that MyoF, through stabilizing Dvl-2 and preventing autophagy, regulates Wnt/β-catenin signaling-mediated skeletal muscle development. Introduction Autophagy, including macroautophagy, microautophagy, and chaperone-mediated autophagy, was first described in 1963 [1]. Autophagy refers to the encapsulation of cytoplasmic components, such as proteins and organelles, destined for transport to lysosomes for degradation [2]. Under stress conditions, such as starvation and hypoxia, autophagy is activated to promote cell survival by releasing energy substrates through degradation of cellular components and elimination of defective or damaged organelles [3]. It is now clear that the cell survival function of autophagy is an important evolutionary conservative mechanism by which eukaryotic cells maintain homeostasis and achieve renewal [4]. Skeletal muscle is composed of highly organized myofibers and its lean mass provides a tissue amino acid source that can be used under conditions of stress or starvation [5]. The ubiquitin-proteasomal pathway and the autophagic/lysosomal pathway are two highly conserved pathways mediating protein degradation in skeletal muscle [6]. In the ubiquitin-proteasomal pathway, ubiquitin-tagged proteins are degraded in the proteasome complex after conjugation with multiple ubiquitin moieties [7]. The turnover of most soluble and myofibrillar proteins in normal skeletal muscle is performed through the constitutively operative ubiquitin-proteasomal pathway [8]. Many studies have shown that autophagy is also constitutively active in normal skeletal muscle and lysosome-related gene deficiency induces accumulation of autophagosomes [9]. Autophagy regulates muscle homeostasis, removing protein aggregates and abnormal organelles that otherwise lead to muscle toxicity and dysfunction [10]. For instance, deficiency in the autophagy-related genes Atg5 or Atg7 is lethal in neonatal mice due to disruption of the supply of transplacental nutrients [10,11]. These findings suggest that autophagy deficiency plays a role in various forms of hereditary muscular dystrophy, including Bessler myopathy, Ullrich congenital muscular dystrophy, and Duchenne muscular dystrophy (DMD) [12]. 
Generally, muscle consists of multinucleated myofibers; myoblasts are mononuclear but fuse to form myotubes, and myoblasts can repair segmental loss of myofibers and fuse with existing fibers [13]. It is well known that the canonical Wnt signaling pathway is vital to regulate myoblast differentiation during skeletal muscle regeneration [14]. Dishevelled-2 (Dvl-2) is a key component of Wnt/β-catenin, which can be degraded by autophagy and further negatively regulates the canonical Wnt signal transduction [15]. In 2000, Davis et al. reported that myoferlin (MyoF) is upregulated in dystrophin-null muscle, thus implicating MyoF as a candidate gene in the pathogenesis of muscle dystrophy [16]. MyoF is a 230-kDa protein belonging to the ferlin family of proteins that also includes dysferlin and otoferlin. Ferlin family proteins share similar domain architecture, including a carboxy-terminal transmembrane domain and multiple amino terminal C2 domains [17]. Extensive studies have shown that MyoF is highly expressed in myoblasts and is essential in muscle cell functions such as plasma membrane integrity, myoblast fusion, and vesicle trafficking [18]. MyoF-deficient mice have been reported to display marked muscular dystrophy due to the unsuccessful fusion of myoblasts [19]. Studies in muscular dystrophy mice showed promoter-recapitulated normal MyoF expression was downregulated in healthy myofibers and was upregulated in response to myofiber damage, indicating that MyoF modulates muscle injury in both myoblasts and myofibers [20]. Based on the observation that the insulin-like growth factor 1 (IGF1) receptor accumulated in large vesicular structures in MyoF-null myoblasts, Demonbreun et al. concluded that MyoF is required for the IGF factor response and muscle growth [21]. However, other functions of the MyoF, such as its role in the regulation of muscle autophagy, remain to be clarified. In this study, we investigated the function of MyoF in C2C12 myoblast differentiation and its role in muscular autophagy. We ascertained that MyoF interacts with Dvl-2 via the canonical Wnt signaling pathway. Taken together, our results provide a novel insight into the role of MyoF in autophagy. MyoF is Highly Expressed in Differentiated Myogenic Cells Searches of the Gene Expression Omnibus (GEO) database showed that MyoF mRNA is highly expressed in mdx mice. We verified this information and found that MyoF was indeed upregulated in mdx mice, suggesting that MyoF is involved in skeletal muscle regeneration and repair (Figure 1a,b). To investigate the effects of MyoF on skeletal muscle development, we used the C2C12 myogenic cell line to mimic skeletal muscle differentiation in vitro. We found that MyoF expression increased gradually with myotubular differentiation of C2C12 cells. Increased MyoF expression was accompanied by increased myosin heavy chain (MyHC) expression during C2C12 myoblast differentiation (Figure 1c,d).
Figure 1. (a) Quantitative real-time PCR (qRT-PCR) analysis of MyoF expression in wild-type (WT) and mdx mice (aged 9 months); n = 3 per group. (b) Western blot analysis of MyoF protein levels in WT and mdx mice (aged 9 months); n = 3 per group. (c) qRT-PCR analysis of MyoF mRNA expression in C2C12 cells during differentiation. Bars not sharing the same letter labels are significantly different (p < 0.05; n = 3 independent cell cultures). (d) Western blot analysis of MyoF and MyHC protein levels during differentiation; GAPDH was used as loading control. Data represent means ± SEM (n = 3 independent cell cultures). * p < 0.05; ** p < 0.01.
Role of MyoF in Skeletal Muscle Differentiation To investigate the role of MyoF during differentiation of C2C12 myoblasts, we silenced its expression by transfection with shRNA directed against MyoF (Figure 2a,b).
Monitoring the morphological changes during differentiation showed a significant decrease in the total areas of myotubes, indicating that MyoF silencing impaired myoblast differentiation into myotubes (Figure 2c).
(Figure legend excerpt, MyoF overexpression: representative images of myotubes formed by Ctrl and MyoF-Flag cells; immunofluorescence staining of MyHC in C2C12 cells transfected with Ctrl or MyoF-Flag, with a bar graph showing the myotube area (%); qRT-PCR analysis of MyoD, MyoG, and MyHC mRNA levels and Western blot analysis of MyHC and MyoG protein levels in Ctrl and MyoF-Flag cells; GAPDH was used as loading control. Data represent means ± SEM (n = 3 independent cell cultures). * p < 0.05; ** p < 0.01.)
MyoF Rescues Skeletal Muscle Atrophy We first studied the effect of MyoF on the expression of atrophy-related genes in myotubes. Myotubes transfected with shMyoF exhibited increased expression of two atrophy-related genes, Atrogin-1 and MuRF1, at the mRNA level, and Atrogin-1 at the protein level (Figure 4a,c). MyoF overexpression decreased expression of the two atrophy-related genes at the mRNA level and Atrogin-1 at the protein level (Figure 4b,d). We next investigated the ability of MyoF to rescue muscle atrophy by silencing or overexpressing MyoF during dexamethasone-induced myotube atrophy in vitro. In the presence of dexamethasone, MyoF silencing exacerbated the expression of atrophy-related genes in the myotubes at the protein level (Figure 4c). Overexpression of MyoF attenuated the elevation of Atrogin-1 induced by dexamethasone (Figure 4d).
MyoF Functions by Dvl-2-Mediated Canonical Wnt Signaling Wnt/β-catenin signaling plays an important role in the control of myoblast differentiation. In our investigations of the role of MyoF in Wnt/β-catenin signaling, we found that expression of the Wnt target genes, lymphoid enhancer factor 1 (Lef1), MYC proto-oncogene (c-Myc), and APC downregulated 1 (Apcdd1), was significantly reduced in cells transfected with shMyoF (Figure 5a). Similarly, the level of active β-catenin in the nucleus was significantly reduced in MyoF-silenced cells compared to that in the control-transfected cells (Figure 5b). Western blot analysis also showed that Dvl-2 protein levels were significantly reduced in MyoF-silenced cells (Figure 5c). The TOP/FOP reporter assay also suggested that MyoF silencing significantly decreased Wnt signaling pathway activity (Figure 5d). Wnt family member 3a (Wnt3a), which is a classical ligand of the Wnt signaling pathway, activates the canonical Wnt signaling pathway [22]. We showed that Axin1 was degraded in control cells in the presence of Wnt3a. However, Axin1 still accumulated in MyoF-silenced C2C12 cells after Wnt3a treatment (Figure 5e). Furthermore, β-catenin translocation into the nucleus was significantly decreased in response to Wnt3a treatment in MyoF-silenced cells (Figure 5f). This indicated that MyoF is required for Wnt3a activation of the canonical Wnt signaling pathway. To further investigate the mechanism by which MyoF activates the Wnt pathway, we used 1-azakenpaullone (1-AKP), which is known to activate the canonical Wnt signaling pathway independently of Dvl-2 [23]. We verified that 1-AKP can activate the Wnt signaling pathway and promote the expression of MyHC (Figure 5g,h). MyoF silencing reduced the levels of Axin1 and glycogen synthase kinase 3 beta (GSK3β) after the addition of 1-AKP (Figure 5i). Moreover, MyHC expression was increased following the addition of 1-AKP to MyoF-silenced cells (Figure 5j,k). These data showed that MyoF regulates myoblast differentiation via Dvl-2-mediated regulation of the canonical Wnt signaling pathway.
MyoF Stabilizes Dvl-2 by Preventing Autophagy Autophagy attenuates Wnt signaling by inducing Dvl-2 degradation. Thus, we investigated the influence of MyoF expression in skeletal muscle on autophagy induction. Expression of Dvl-2-Flag alone resulted in promotion of GFP-LC3 puncta formation in C2C12 cells. However, cotransfection with MyoF-HA and Dvl-2-Flag significantly reduced LC3 puncta formation in C2C12 cells compared to that observed following transfection with Dvl-2-Flag alone (Figure 6a). In addition, Western blot showed that MyoF silencing increased LC3II expression and decreased p62 expression at the protein level (Figure 6b). We observed a significant increase in mRNA expression of ATG5 and ATG7 in MyoF-silenced cells compared to that in control-transfected cells (Figure 6c). Immunofluorescence analysis also revealed a significantly increased number of LC3 puncta in MyoF-silenced cells (Figure 6d). Electron microscopy showed a significant increase in the number of autophagosomes in MyoF-silenced cells compared to that in control-transfected cells (Figure 6e). Immunofluorescence analysis showed that MyoF and Dvl-2 were uniformly distributed in C2C12 cells. In further investigations of the interaction between MyoF and Dvl-2 in the antagonistic autophagy system, immunoprecipitation analysis showed that MyoF interacts with Dvl-2 (Figure 6f). Moreover, the levels of Dvl-2 ubiquitination were significantly increased in MyoF-silenced cells compared to those in control-transfected cells (Figure 6g). Collectively, these results indicate that MyoF interacts with Dvl-2 to facilitate its resistance to ubiquitination, and thus prevent the autophagy process.
Discussion Although MyoF has been proven to be a pathogenic gene of muscular dystrophy, its antagonism of autophagy through the stabilization of Dvl-2 had not yet been determined. In this study, we found that MyoF is highly expressed in mdx mice and participates in the growth of C2C12 cells, which is in accordance with the reported unique function of MyoF in muscle regeneration and degeneration in muscular dystrophies [24]. MyoF is widely regarded as a muscle-specific protein. The growth of skeletal muscle is a multistep process accompanied by myoblast fusion to form myotubes, a process in which MyoF functions in the maturation of myotubes [25]. Our results showed that the normal development of C2C12 cells was retarded by interference with shMyoF, while, conversely, overexpression of MyoF facilitated the growth of C2C12 cells, indicating that MyoF plays a role in promoting C2C12 differentiation. MyoF expression increases markedly when myoblasts undergo fusion. However, the function of MyoF may depend on cooperation with other molecules. Doherty et al. showed that the second C2 domain of MyoF interacts with EHD2 and that this combination regulates normal myoblast membrane fusion [26]. Furthermore, MyoF regulates not only myoblast fusion, but also the formation of transverse tubules and responses to muscle injury in both myoblasts and mature myofibers [27]. Taken together, these reports demonstrate that MyoF plays a vital role in the growth and development of skeletal muscle, although its specific molecular mechanism remains to be fully elucidated. Muscle atrophy occurs under conditions such as injury, denervation, glucocorticoid treatment, sepsis, and aging [28]. The physiology of muscle atrophy appears as a loss of tension, and there is substantial evidence that a reduction in protein synthesis and enhanced protein degradation contribute to muscular atrophy [29]. The complex molecular signaling events underlying atrophy are not fully understood. In our study, we performed MyoF knockdown and overexpression in C2C12 cells to explore the role of MyoF in muscle atrophy. Our results showed that the protein abundance of Atrogin-1 and MyHC was affected by MyoF knockdown, overexpression, and dexamethasone treatment. Since Atrogin-1 is a marker gene of muscle atrophy, these results indicate that MyoF has the potential to rescue muscle atrophy. It has been reported that MyoF is upregulated in muscle undergoing repeated degeneration and regeneration, and MyoF is regarded as a candidate gene involved in the pathogenesis of muscle dystrophy [16]. In spite of extensive research on muscle disease, little is known about the role of MyoF in muscular atrophy, although MyoF may be a modifier. The canonical Wnt signaling pathway is widely reported to impact all aspects of skeletal muscle, including myogenic lineage specification and cell proliferation [30]. Key proteins in the canonical Wnt signaling pathway, such as Wnt1, Wnt3a, and Wnt5a, regulate proliferation of skeletal muscle satellite cells during injury healing [31]. 
A wealth of recent data show that the canonical Wnt is targeted and activated to regulate myoblast proliferation. R-spondin1 has been shown to mediate reciprocal control of the canonical Wnt signaling pathway in muscle stem cell progeny to ensure muscular tissue repair after wounding [32]. Furthermore, the canonical Wnt has been shown to promote differentiation in skeletal muscle positively regulated by HDAC8 [33]. In our study, MyoF silencing had marked effects on the expression of Lef1, c-Myc, Apcdd1 and active β-catenin, and Dvl-2, the Wnt target genes and node proteins. Further investigations showed that MyoF silencing disturbed the canonical Wnt pathway in C2C12 by upregulating Axin 1. Previous studies confirmed that Axin 1 mediates the disassembly of β-catenin structure by promoting its phosphorylation catalyzed by GSK-3β [34]. Therefore, we hypothesized that MyoF plays a vital but indirect role in controlling skeletal muscle development via the canonical Wnt signaling pathway. Dvl contains three highly conserved domains, termed Dvl-1, Dvl-2, and Dvl-3, and is expressed ubiquitously throughout development [35]. In the canonical Wnt pathway, Dvl-2 mediates the integration of the receptors and a destruction complex to induced β-catenin degradation, after which β-catenin is translocated into and accumulates in the nucleus, where it interacts with T cell-specific factor/LEF to initiate Wnt target gene transcription [36]. Dvl-2 plays a role in the upstream of the Wnt signal transduction pathway of β-catenin and GSK-3β, and can positively regulate the Wnt signal pathway [34]. 1-AKP is a selective inhibitor of GSK-3β, which results in GSK-3β not being able to form complex with beta-catenin in collaboration with APC and Axin, leading to the accumulation of β-catenin in the cytoplasm and the introduction of large amounts of β-catenin into the nucleus, thus requiring no direct activation of the Wnt signaling pathway by Dvl-2 [21]. Autophagy is a lysosome-dependent degradation pathway that exists widely in eukaryotic cells and is regulated by cell signaling pathways, such as the canonical Wnt pathway. In 2010, Gao et al. demonstrated that ubiquitinated Dvl-2 is recognized by p62, resulting in the formation of a large aggregate composed of p62 and LC3, which then selectively induces autophagy and degradation via the lysosomal pathway [37]. These findings illustrated that autophagy cooperates negatively with the canonical Wnt pathway by inverse regulation of Dvl-2. Indeed, a rich set of data have confirmed that Dvl-2 degradation is negatively regulated by the autophagy signaling pathway. Recent studies have shown that the Wnt pathway is inhibited and Dvl-2 degradation enhanced by GABARAPL1 via autophagy signaling [38,39]. It has also been reported that autophagic degradation of Dvl-2 is restrained by IRS1/2, which interacts and forms a complex with Dvl-2. Thus, IRS1/2 positively controls Wnt/β-catenin signaling via Dvl-2 [36]. In our study, we observed that cotransfection of C2C12 cells with MyoF and Dvl-2 attenuated autophagic degradation and thus, we speculated that MyoF is involved in the regulation of Dvl-2. Indeed, our data suggest that MyoF interacts with Dvl-2 protein to regulate muscle development through regulation of autophagy via the canonical Wnt signaling pathway. In conclusion, our study shows that MyoF regulates the canonical Wnt signaling pathway by stabilizing Dvl-2 to downregulate its autophagic degradation (Figure 7). 
These findings extend our understanding of the molecular mechanism by which MyoF is involved in skeletal muscle development and shed light on the role of MyoF in the alleviation of muscular autophagy mediated by the canonical Wnt signaling pathway. MyoF Knockdown and Overexpression For overexpression or silencing of MyoF in C2C12 myoblasts, cells were seeded into 6-well plates and transfected with a pcDNA3.1 expression vector encoding MyoF-Flag or mouse MyoF-shRNA 2 µg, respectively, using Lipofectamine 3000 (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. MyoF was detected by Western blot and quantitative qRT-PCR analyses. 
RNA Extraction and Real-Time PCR Cells were washed twice with phosphate buffer saline (PBS), and total RNA was extracted from with TRIzol reagent (Takara, Tokyo, Japan) according to the manufacturer's instructions. Quantitative RT-PCR was performed according to a previously described method [40]. All amplicon primer sets were designed using the Sangon Biotech Primer Design Center (Shanghai, China); details of the primers used are shown in Table 1. Gene expression was determined using average cycle thresholds normalized to GAPDH according to the 2 −∆∆CT method [41]. Western Blot and Immunoprecipitation (IP) Analysis For Western blot analysis, cells were washed with PBS and lysed in RIPA lysis buffer (Bioss, Beijing, China). Next, total protein (200 µg) was separated by 12% SDS-polyacrylamide gel electrophoresis (SDS-PAGE), and transferred to a polyvinylidene fluoride (PVDF) membrane (Millipore Corporation, Billerica, MA, USA). The PVDF membrane was blocked with 5% nonfat milk at room temperature for 1 h, followed by incubation with the appropriate specific primary antibodies overnight at 4 • C. The PVDF membrane was then rinsed with Tris-Buffered Saline Tween-20 (TBST) and stained with the appropriate horseradish peroxidase (HRP)-labeled secondary antibody for 1 h at room temperature. After washing with TBST, proteins were visualized with Electrochemiluminescence (ECL) reagent (Amersham Pharmacia Biotech, Piscataway, NJ, USA). For immunoprecipitation analysis, the cells were lysed with IP lysis buffer, and the total protein (5 µg) was immunoprecipitated with anti-MyoF and anti-Dvl-2 antibodies. Immunocomplexes were washed three times with IP lysis buffer and analyzed by Western blotting as described. Quantification of protein blots was performed with the Quantity One 1-D software (version 4.4.0) (Bio-Rad, Hercules, CA, USA) using images acquired from an EU-88 image scanner (GE Healthcare, King of Prussia, PA, USA). Immunofluorescence and Confocal Microscopy Cells grown on coverslips were rinsed in PBS and fixed with 4% paraformaldehyde (Solarbio) for 10 min. After fixation, cells were washed twice with PBS and permeabilized with 0.2% Triton X-100 for 10 min, washed with PBS, and incubated with the relevant antibodies diluted in PBS/10% FSC for 1 h. The cells were then rinsed three times with PBS for 5 min each time. After incubation with the relevant primary antibody, cells were washed and incubated with the fluorescence-labeled secondary antibody for 1 h at room temperature in the dark. Subsequently, the cells were washed three times with TBST and fluorescence intensity was observed with an Olympus FluoView FV1000 confocal microscope (Olympus, Melville, NY, USA). Transmission Electron Microscopy Cells were scraped gently from culture plates and washed twice with PBS. The cells were then fixed in 2.5% glutaraldehyde PBS for 15 min, and postfixed in 1% osmium tetroxide for 2 h at room temperature. After washing three times in distilled water, the cells were exposed to 1% uranylacetate for 15 min. The samples were dehydrated in a graded ethanol series and embedded in Spurr's low-viscosity media. Ultrathin sections (80 nm) were prepared stained with uranyl acetate and lead citrate, and observed using JEM-1400 TEM (JEOL, Tokyo, Japan). Images were captured using a CCD camera AMT (Sony, Tokyo, Japan). Statistical Analysis All statistical analyses were performed using SPSS 17.0 (SPSS Inc., Chicago, IL, USA). 
Data are presented as least squares means ± standard error of the mean (SEM), and values were considered statistically different at p < 0.05.
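As a worked illustration of the relative quantification described in the RNA Extraction and Real-Time PCR section above, the 2^−ΔΔCt calculation can be sketched as follows; the Ct values and sample labels are purely illustrative and are not taken from the study.

```python
# Minimal sketch of the 2^-ddCt relative-quantification method used for the
# qRT-PCR data (illustrative Ct values only; GAPDH is the reference gene).

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Return the fold change of the target gene (treated vs. control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to GAPDH
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # compare to control sample
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for a MyoF-overexpression sample vs. a control sample.
fold = relative_expression(ct_target_treated=22.1, ct_ref_treated=16.0,
                           ct_target_control=24.3, ct_ref_control=16.1)
print(f"Relative expression (fold change): {fold:.2f}")
```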
v3-fos-license
2017-10-19T07:27:27.669Z
2017-08-30T00:00:00.000
404250
{ "extfieldsofstudy": [ "Computer Science", "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1099-4300/19/9/443/pdf?version=1506411514", "pdf_hash": "74ba0b07b46e142ab172123c0998b5d941ea6cfc", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44883", "s2fieldsofstudy": [ "Physics" ], "sha1": "74ba0b07b46e142ab172123c0998b5d941ea6cfc", "year": 2017 }
pes2o/s2orc
A Numerical Study on Entropy Generation in Two-Dimensional Rayleigh-Bénard Convection at Different Prandtl Numbers

Entropy generation in two-dimensional Rayleigh-Bénard convection at different Prandtl numbers (Pr) is investigated in the present paper by using the lattice Boltzmann method. The major concern of the present paper is to explore the effects of Pr on the detailed local distributions of entropy generation due to frictional and heat transfer irreversibility and on the overall entropy generation in the whole flow field. The results of this work indicate that the significant viscous entropy generation rates (Su) gradually expand to the bulk of the cavity with increasing Pr, that the thermal entropy generation rates (Sθ) and total entropy generation rates (S) mainly concentrate where the temperature gradient is steepest, that the entropy generation in the flow is dominated by heat transfer irreversibility, and that, for the same Rayleigh number, the amplitudes of Su, Sθ and S decrease with increasing Pr. It is also found that the amplitudes of the horizontally averaged viscous, thermal and total entropy generation rates decrease with increasing Pr. The probability density functions of Su, Sθ and S further show a much thinner tail for small entropy generation values, while the tails for large entropy generation values fit the log-normal curve well with increasing Pr. The distribution and the departure from log-normality become robust with decreasing Pr.

Introduction

Natural convection heat transfer is widely applied in important engineering processes such as thermal storage, environmental comfort, grain drying, electronic cooling and other areas [1,2]. Rayleigh-Bénard (RB) convection is one of the most classical natural convection problems in engineering. Several works with experimental [3][4][5][6][7][8][9][10][11] and numerical approaches [12][13][14][15][16][17][18] in various areas are available. Prandtl numbers (Pr) ranging from O(10⁻²) for mercury and molten metals to O(10⁴) for silicone oils arise in various convection applications, and Pr values as large as 10²³ occur in the viscous rocky part of the Earth's mantle and further arise in convection of planetary interiors. Thus, a systematic investigation of the dependence of the efficiency loss on the Prandtl number is worth performing. The efficiency loss in all real processes is closely related to friction, mass transfer, thermal gradients, chemical reactions, etc. Previous studies of entropy generation had emphasized its potential advantages for evaluating such losses in engineering applications [13,14,[19][20][21][22]. De [13] reported the entropy generation due to heat and flow transport in a cavity and minimized the entropy generation by using the second law of thermodynamics. An optimal configuration with minimum loss of available energy may be obtained using this method.
The importance of thermal boundary conditions in heat transfer processes and entropy generation characteristics inside a porous enclosure was investigated by Zahmatkesh [14]. To do this, a wide range of Darcy-modified Rayleigh numbers was analyzed by simulating natural convection processes in a porous enclosure. Nayak [19] reported on entropy generation in a nanofluid-filled cavity with a block insertion. The thermodynamic optimization of the mixed convection was demonstrated by evaluating the entropy generation and the Bejan number, and it was shown that the heat transfer rate increases remarkably with the addition of nanoparticles. The natural convection and entropy generation of nanofluid-filled cavities having different shaped obstacles with a magnetic field effect was studied by Oztop [20]. The very good review paper on entropy generation in nanofluid flow by Mahian et al. [21] should also be mentioned in this context. A critical review of contributions to the theory and application of entropy generation analysis to different types of engineering systems was reported by Sciacovelli et al. [22]. The focus of that work is on contributions oriented toward the use of entropy generation analysis as a tool for the design and optimization of engineering systems [22].

The main aim of the present work is the study of entropy generation in RB convection processes at different Prandtl numbers, based on the minimal entropy generation principle, by numerical simulation. The minimal entropy generation principle states that entropy generation in flow systems is associated with a loss of exergy. This is important when exergy is used in a subsequent process and its loss therefore has to be minimized. The detailed local distributions of entropy generation due to frictional and heat transfer irreversibility at different Prandtl numbers, as well as the overall entropy generation in the whole flow field, are analyzed separately. All the numerical simulations have been implemented using a lattice Boltzmann scheme. Previous studies of the lattice Boltzmann method had emphasized its potential advantages in a variety of single-phase, multiphase and thermal fluid hydrodynamic problems [23][24][25][26][27][28][29]. The governing equations and numerical methods are briefly described first in the following section. After that, the detailed numerical results and discussions are presented. Finally, some concluding remarks are provided.

Governing Equations

To study the dynamics of the fluid, the classical Oberbeck-Boussinesq equations (Ahlers et al. [8]; Lohse and Xia [6]) are adopted in this paper:

∇ · u = 0,
∂u/∂t + (u · ∇)u = −∇p + ν∇²u + gβθŷ,
∂θ/∂t + (u · ∇)θ = κ∇²θ,

where ν and κ represent the kinematic viscosity and the diffusivity, respectively.

Entropy Generation

The amount of phenomenological information contained in the local entropy generation rates has been studied by many researchers. As discussed in Bejan [30], Iandoli [31], Magherbi [32], Rejane [33], Mahian [21], Sheremet [34], Bhatt [35,36], Abbas [37] and Qing [38], etc., it is possible to derive an exact formula for both the viscous and the thermal components of the local entropy generation rates. In two-dimensional Cartesian notation, the expressions are

S_u = (μ/T₀) { 2[(∂u/∂x)² + (∂v/∂y)²] + (∂u/∂y + ∂v/∂x)² },
S_θ = (k/T₀²) [ (∂T/∂x)² + (∂T/∂y)² ],

and the total entropy generation rate can be given by

S = S_u + S_θ.
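The local entropy generation rates defined above lend themselves to a straightforward post-processing step. The sketch below evaluates S_u, S_θ and S on a discrete grid with finite differences; the field arrays and the property values (k, mu, T0) are placeholders, and the exact non-dimensionalization used in the paper may differ.

```python
import numpy as np

# Sketch: local viscous (S_u), thermal (S_theta) and total (S) entropy generation
# rates for a 2-D incompressible flow, following the exact formulas quoted above.
# k, mu, T0 are illustrative property values, not those of any specific fluid.
def entropy_generation(u, v, T, dx, dy, k=0.6, mu=1.0e-3, T0=300.0):
    dudx, dudy = np.gradient(u, dx, dy, edge_order=2)   # axis 0 = x, axis 1 = y
    dvdx, dvdy = np.gradient(v, dx, dy, edge_order=2)
    dTdx, dTdy = np.gradient(T, dx, dy, edge_order=2)

    s_theta = (k / T0**2) * (dTdx**2 + dTdy**2)                          # thermal
    s_u = (mu / T0) * (2.0 * (dudx**2 + dvdy**2) + (dudy + dvdx)**2)     # viscous
    return s_u, s_theta, s_u + s_theta

# Toy fields on a 64 x 64 grid, only to illustrate the call signature.
n = 64
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="ij")
u = np.sin(np.pi * X) * np.cos(np.pi * Y)
v = -np.cos(np.pi * X) * np.sin(np.pi * Y)
T = 300.0 + 1.0 - Y                                  # hot bottom, cold top
s_u, s_theta, s_tot = entropy_generation(u, v, T, x[1] - x[0], y[1] - y[0])
print(s_u.mean(), s_theta.mean(), s_tot.mean())
```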
Previous studies of the Bejan number (Be) had emphasized its usefulness for assessing the importance of heat transfer irreversibility in the domain [39]. Be was proposed by Paoletti et al. [39], who investigated the contribution of the heat transfer entropy generation to the overall entropy generation by using Be. Be is defined as

Be = S_θ / (S_u + S_θ).

The range of Be is from 0 to 1. When Be is equal to 0, the irreversibility is dominated by fluid friction; correspondingly, the irreversibility is dominated by heat transfer when Be is equal to 1. The irreversibility due to heat transfer dominates in the flow when Be is greater than 1/2, while Be < 1/2 implies that the irreversibilities due to viscous effects dominate the process. The heat transfer and fluid friction entropy generation are equal at Be = 0.5 [39].

Numerical Method

Two simple lattice Bhatnagar-Gross-Krook (LBGK) collision operators are introduced. Specifically, the evolution of the LBGK scheme is described by the following equations [27][28][29]:

f_i(x + c_i δt, t + δt) − f_i(x, t) = −[f_i(x, t) − f_i^eq(x, t)]/τ_ν + δt F_i,
g_i(x + c_i δt, t + δt) − g_i(x, t) = −[g_i(x, t) − g_i^eq(x, t)]/τ_θ,

where f_i(x, t) and g_i(x, t) stand for the probability density functions of finding at (x, t) a particle whose velocity belongs to a discrete and limited set c_i (with i = 0, ..., 8 in the D2Q9 lattice adopted here). F_i is the discrete mesoscopic force corresponding to the buoyant body force of Equation (2), and τ_ν and τ_θ are the relaxation times for flow and temperature in the lattice Boltzmann equations, respectively. The equilibrium function for the density distribution function is given as [28]

f_i^eq = w_i ρ [1 + 3(c_i · u) + 4.5(c_i · u)² − 1.5 u²],

where w_i is the associated weighting coefficient [23]. The kinematic viscosity ν and the diffusivity κ are given by

ν = c_s²(τ_ν − 1/2) δt,    κ = c_s²(τ_θ − 1/2) δt.

Density, momentum, and temperature are defined as coarse-grained (in velocity space) fields of the distribution functions:

ρ = Σ_i f_i,    ρu = Σ_i c_i f_i,    θ = Σ_i g_i.

A Chapman-Enskog expansion leads to the equations for density, momentum, and temperature from (8) and (9). To derive the classical Oberbeck-Boussinesq equations (Equations (1)-(3)), two macroscopic time scales (t₁ = εt, t₂ = ε²t) and a macroscopic length scale (x₁ = εx) are introduced. As for the FHP model, two time scales and one spatial scale with ∂_t = ε∂_t₁ + ε²∂_t₂ and ∂_x = ε∂_x₁ are used. According to the above Chapman-Enskog expansion, the streaming step on the left-hand side reproduces the inertial terms in the classical Oberbeck-Boussinesq equations (Equations (1)-(3)).

Two important dimensionless parameters in RB convection are introduced as follows. The Rayleigh number is defined as Ra = gβΔθH³/(νκ). The enhancement of the heat transfer can be quantified by the Nusselt number, Nu = 1 + ⟨u_y θ⟩H/(κΔθ), in the numerical results of the LBM, where Δθ is the temperature difference between the bottom and top walls, H is the channel height, u_y is the vertical velocity, and ⟨·⟩ represents the average over the whole flow domain.
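As a small worked instance of the Nu definition above, the sketch below evaluates the Nusselt number from the volume-averaged convective heat flux; the field arrays and parameter values are placeholders only, not results from the paper.

```python
import numpy as np

# Nu = 1 + <u_y * theta> * H / (kappa * dtheta), evaluated on toy placeholder fields.
H, dtheta, kappa = 1.0, 1.0, 1.0e-3
nx = ny = 64
rng = np.random.default_rng(1)
uy = 1.0e-3 * rng.standard_normal((nx, ny))            # stand-in vertical velocity
theta = np.tile(np.linspace(1.0, 0.0, ny), (nx, 1))    # linear conductive profile
nu_number = 1.0 + np.mean(uy * theta) * H / (kappa * dtheta)
print(f"Nu = {nu_number:.3f}")   # ~1 for this near-conductive toy state
```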
Analysis of S_u and S_θ

The entropy generation problem due to RB convection with various Pr in rectangular cavities is investigated. The incompressible, Boussinesq and two-dimensional flow approximations are adopted in the present paper. A schematic view of the cavity is indicated in Figure 1. The grid verification of the results is inspected before the comparison. One example at a Rayleigh number of 5.4 × 10⁹ is presented in Table 1. The number of grid points is taken to be the same in both the x and y directions in the present study; the grid is of size N × N, in which N is the number of grid points in each spatial direction. Table 1 shows how the calculated Nusselt number (Nu) changes with N. It is seen that when N increases, Nu quickly approaches the benchmark result in Table 1. When N is further increased from 2012 to 2400, not much improvement occurs in the result, so 2012 × 2012 lattices can give very accurate results for Ra = 5.4 × 10⁹.

Numerical simulations of two-dimensional RB convection at Pr = 6, 20, 100 and 10⁶ are carried out using the LBM at Ra = 5.4 × 10⁹ in the present study. All two-dimensional simulations at different Pr are performed on 2012 × 2012 lattices. No-slip boundary conditions are imposed on the top and bottom plates as well as on the left and right walls in all simulations. The dimensionless initial temperature of the bottom plate is equal to 1, the dimensionless initial temperature of the top plate is equal to 0, and the initial temperature between the top and bottom plates varies linearly from 0 to 1. When the flow on the 2012 × 2012 lattice domain reaches a statistically steady state, the CPU time for one case is about 10 h using 16 CPU cores.
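The collision-streaming update outlined in the Numerical Method section can be illustrated with a deliberately simplified D2Q9 double-distribution sketch. It uses a basic first-order buoyancy forcing and periodic streaming via np.roll instead of the no-slip wall treatment described above, so it is a structural illustration rather than the scheme actually used in the paper; all parameter values are placeholders.

```python
import numpy as np

# Illustrative D2Q9 double-distribution LBGK update (collision + streaming) for
# Boussinesq RB convection. Simplified: first-order buoyancy forcing, periodic
# streaming, no wall boundary conditions.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])          # D2Q9 lattice velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                    # lattice weights

def feq(scalar, ux, uy):
    """D2Q9 equilibrium for a scalar field (density or temperature) and velocity."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * scalar * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbgk_step(f, g, tau_nu, tau_theta, beta_g=1e-4):
    # Macroscopic moments (density, momentum, temperature).
    rho = f.sum(axis=0)
    ux = (c[:, 0, None, None]*f).sum(axis=0) / rho
    uy = (c[:, 1, None, None]*f).sum(axis=0) / rho
    theta = g.sum(axis=0)

    # BGK collision with a simple buoyant body force along +y.
    Fi = 3.0 * w[:, None, None] * c[:, 1, None, None] * (beta_g * theta * rho)
    f_post = f - (f - feq(rho, ux, uy)) / tau_nu + Fi
    g_post = g - (g - feq(theta, ux, uy)) / tau_theta

    # Streaming: shift each population along its lattice velocity (periodic here).
    for i in range(9):
        f_post[i] = np.roll(f_post[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
        g_post[i] = np.roll(g_post[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    return f_post, g_post

# Small lattice with uniform density and a linear initial temperature profile.
nx = ny = 64
rho0 = np.ones((nx, ny))
theta0 = np.tile(np.linspace(1.0, 0.0, ny), (nx, 1))        # hot bottom, cold top
zero = np.zeros_like(rho0)
f, g = feq(rho0, zero, zero), feq(theta0, zero, zero)
f, g = lbgk_step(f, g, tau_nu=0.8, tau_theta=0.8)
```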
Figure 2a-d show the flow field and typical snapshots of the instantaneous temperature field obtained at four Prandtl numbers (Pr = 6, 20, 100 and 10⁶, Ra = 5.4 × 10⁹). Blue (red) regions correspond to cold (hot) fluid. A large-scale circulation of the fluid is formed, which develops mainly in the region around the center of the cavity in Figure 2a, and small vortices emerge in the four corners of the square cavity. The large-scale circulation in the cavity gradually dissolves with increasing Pr, which is similar to experimental visualizations at large Pr [6,11]. Large-scale structures of smaller thermal plumes that rise from the bottom wall and fall from the top wall gradually develop with increasing Pr.

The corresponding logarithmic fields of the viscous entropy generation rates S_u at the four Prandtl numbers are shown in Figure 3a-d. From Figure 3a, it can be seen that the significant S_u concentrates in the narrow region adjacent to the walls at Pr = 6, which results from the steepest velocity gradients occurring in the near-wall regions. It is observed that, with the increase of Pr, the significant S_u gradually expands to the bulk of the cavity from Figure 3b to Figure 3d, which results from the steepest velocity gradients occurring in the bulk of the cavity.
The distributions of the thermal entropy generation rates S_θ for the four cases are shown in Figure 4. It is observed that the significant S_θ concentrates in the narrow region adjacent to the walls at Pr = 6 in Figure 4a, which results from the steepest temperature gradients occurring in the near-wall regions. From Figure 4b to Figure 4d, it can be seen that the significant S_θ gradually expands to the bulk of the cavity, which results from the steepest temperature gradients occurring in the bulk of the cavity.

The corresponding logarithmic fields of the total entropy generation rates S are shown in Figure 5. S is similar to the visualization of S_θ at the same Pr, which shows that heat transfer dominates in the flow in the cavity. Comparing Figures 3 and 4, it is noted that S_θ is much larger than S_u. This also indicates that the entropy generation in the flow is dominated by heat transfer irreversibility. Moreover, one sees that the amplitudes of both S_u and S_θ decrease with increasing Pr.
Figure 6 shows the distribution of Be at different Pr. For all cases, regions with Be greater than 0.5 occupy both the boundary layers and the bulk of the cavity, which also indicates that the entropy generation in these regions is dominated by heat transfer irreversibility.

Vertical Profiles of S_u and S_θ

Figure 7 displays the vertical profiles of the horizontally averaged viscous entropy generation rates ⟨S_u⟩_x at various Pr. From Figure 7, it can be seen that the horizontally averaged viscous entropy generation rates ⟨S_u⟩_x in the top and bottom boundary layers are greater than in the bulk of the cavity at all Pr, which results from the steepest velocity gradients occurring in the near-wall regions. Moreover, one sees that the amplitudes of the horizontally averaged viscous entropy generation rates ⟨S_u⟩_x decrease with increasing Pr.

The vertical profiles of the horizontally averaged thermal entropy generation rates ⟨S_θ⟩_x are shown in Figure 8. Comparing Figures 7 and 8, it is noted that the horizontally averaged thermal entropy generation rates ⟨S_θ⟩_x behave similarly to the horizontally averaged viscous entropy generation rates ⟨S_u⟩_x: the values of ⟨S_θ⟩_x in the top and bottom boundary layers are greater than in the bulk of the cavity at all Pr. Figure 9 shows the horizontally averaged total entropy generation rates ⟨S⟩_x, which are also similar to the horizontally averaged viscous entropy generation rates. Moreover, it is observed that the amplitudes of the horizontally averaged total entropy generation rates also decrease with increasing Pr.
Figure 10 shows the mean values of S_u, S_θ and S over the whole domain versus Pr. It is observed that the mean values of S_u, S_θ and S in the whole domain decrease with increasing Pr for the same Rayleigh number. It is also observed that the thermal entropy generation is about two orders of magnitude larger than the viscous entropy generation, which again indicates that the entropy generation in the flow is dominated by heat transfer irreversibility.
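A minimal sketch of how the horizontally averaged profiles (Figures 7-9) and whole-field means (Figure 10) could be extracted from the local fields is given below; the toy arrays merely stand in for simulation output.

```python
import numpy as np

# Toy (nx, ny) stand-ins for the local entropy generation rates (axis 0 = x,
# axis 1 = y); in practice these come from the post-processing sketch above.
nx, ny = 64, 64
y = np.linspace(0.0, 1.0, ny)
wall_peak = np.exp(-(y / 0.05) ** 2) + np.exp(-((1.0 - y) / 0.05) ** 2)
s_theta = np.tile(1.0 + 50.0 * wall_peak, (nx, 1))
s_u = 0.01 * s_theta
s_tot = s_u + s_theta

# Horizontally averaged vertical profiles <.>_x(y), as in Figures 7-9.
su_profile = s_u.mean(axis=0)
stheta_profile = s_theta.mean(axis=0)
stot_profile = s_tot.mean(axis=0)

# Whole-field means, as in Figure 10, and the thermal-to-viscous ratio.
print(f"<S_u> = {s_u.mean():.3e}, <S_theta> = {s_theta.mean():.3e}, "
      f"ratio = {s_theta.mean() / s_u.mean():.1f}")
```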
Probability Density Functions (PDFs) of S_u and S_θ

Figures 11-13 plot the probability density functions (PDFs) of S_u, S_θ and S, normalized by their respective rms values, at various Pr. Self-similarity of the viscous, thermal and total entropy generation rate fluctuations is revealed by the observation that the PDFs obtained at distinct times collapse well on top of each other for S_u, S_θ and S. In addition, strong fluctuations of S_u, S_θ and S are revealed by the long tails of the calculated PDFs. In correspondence with the cases of both passive [40] and active scalars, a stretched exponential function is used to fit the fraction of the PDF that extends from the most probable (mp) amplitude to the end of the tail. The stretched exponential function is given as

P(Y) = C exp(−α|Y|^m),    (14)

where C, m and α are fitting parameters, and Y = X − X_mp with X = S_u/(S_u)_rms, S_θ/(S_θ)_rms or S/(S)_rms and X_mp the abscissa of the most probable amplitude. The best fit of Equation (14) to the data yields m = 0.86 and α = 0.72 for S_u, m = 1.15 and α = 0.69 for S_θ, and m = 1.06 and α = 0.72 for S.
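The stretched-exponential tail fit of Equation (14) can be reproduced with a standard least-squares routine, as sketched below on a synthetic sample; the data and the resulting fit parameters are illustrative only and are not those reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the stretched-exponential tail of Eq. (14), P(Y) = C * exp(-alpha * |Y|**m),
# where Y = X - X_mp and X is the entropy generation rate normalised by its rms.
def stretched_exp(Y, C, alpha, m):
    return C * np.exp(-alpha * np.abs(Y) ** m)

# Synthetic stand-in for the entropy generation samples.
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)
x = samples / np.sqrt(np.mean(samples ** 2))          # normalise by the rms value
hist, edges = np.histogram(x, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
x_mp = centers[np.argmax(hist)]                       # most probable amplitude

tail = centers >= x_mp                                # fit from X_mp to the tail end
popt, _ = curve_fit(stretched_exp, centers[tail] - x_mp, hist[tail],
                    p0=(hist.max(), 1.0, 1.0), maxfev=10_000)
print("C = {:.3f}, alpha = {:.3f}, m = {:.3f}".format(*popt))
```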
To highlight the differences in the present case for various Pr, we plot in Figure 11 the PDFs of S_u, in Figure 12 the PDFs of S_θ and in Figure 13 the PDFs of S on a log-log scale. The dashed lines in Figures 11-13 indicate the log-normal distribution for comparison at various Pr. It is seen that, for the viscous, thermal and total entropy generation rates, small entropy generation values show a much thinner tail, while the tails for large entropy generation values fit the log-normal curve well with increasing Pr. The distribution and the departure from log-normality become robust within the self-similarity range with decreasing Pr.

Conclusions

The entropy generation in two-dimensional thermal convection at different Pr is investigated in the present study with the LBM. Special attention is paid to analyzing separately the detailed local distributions of entropy generation due to frictional and heat transfer irreversibility and the overall entropy generation in the whole flow field. Several conclusions can be summarized.

Firstly, the significant S_u gradually expands to the bulk of the cavity with the increase of Pr, which results from the steepest velocity gradients occurring in the bulk of the cavity, while S_θ and S mainly concentrate where the temperature gradient in the cavity is steepest.

Secondly, heat transfer irreversibility plays the dominant role in the entropy generation of the flow, and the frictional irreversibility can be neglected.
Thirdly, the amplitudes of S_u, S_θ and S decrease with increasing Pr for the same Rayleigh number. Further, the amplitudes of the horizontally averaged S_u, S_θ and S decrease with increasing Pr.

Finally, the PDFs of S_u, S_θ and S obtained at various Pr indicate that, with increasing Pr, the tails for large entropy generation values fit the log-normal curve well while small values show a much thinner tail. The distribution and the departure from log-normality become robust with decreasing Pr.

In this study it was possible to observe that the thermal and hydrodynamic problems are highly coupled. For a thermophysical configuration involving thermal convection, a larger Pr is the better option, since increasing Pr increases the system efficiency. These results for different Pr and thermophysical configurations could be applied, for example, in technical applications in which convection is characterized by very different Pr, ranging from O(10⁻²) for mercury and molten metals to O(10⁴) for silicone oils.
Figure 7. Mean vertical profiles of the horizontally averaged viscous entropy generation rates ⟨S_u⟩_x at various Prandtl numbers.
Figure 8. Mean vertical profiles of the horizontally averaged thermal entropy generation rates ⟨S_θ⟩_x at various Prandtl numbers.
Figure 9. Mean vertical profiles of the horizontally averaged total entropy generation rates ⟨S⟩_x at various Prandtl numbers.
Figure 10. Mean values of S_u, S_θ and S in the whole area vs. Prandtl number.
Figure 11. PDFs of viscous entropy generation rates S_u normalized by their rms value (S_u)_rms.
Figure 12. PDFs of thermal entropy generation rates S_θ normalized by their rms value (S_θ)_rms.
Figure 13. PDFs of total entropy generation rates S normalized by their rms value (S)_rms.
Table 1. Grid verification for RB convection in a square cavity at Ra = 5.4 × 10⁹.
v3-fos-license
2018-03-27T13:05:31.654Z
2018-03-27T00:00:00.000
4335455
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fncel.2018.00085/pdf", "pdf_hash": "91e45c87a6dadf999f4667f8de867026c50f728f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44884", "s2fieldsofstudy": [ "Biology" ], "sha1": "91e45c87a6dadf999f4667f8de867026c50f728f", "year": 2018 }
pes2o/s2orc
Regulation of Brain-Derived Neurotrophic Factor and Growth Factor Signaling Pathways by Tyrosine Phosphatase Shp2 in the Retina: A Brief Review

SH2 domain-containing tyrosine phosphatase-2 (PTPN11 or Shp2) is a ubiquitously expressed protein that plays a key regulatory role in cell proliferation, differentiation and growth factor (GF) signaling. This enzyme is well expressed in various retinal neurons and has emerged as an important player in regulating survival signaling networks in neuronal tissues. The non-receptor phosphatase can translocate to lipid rafts in the membrane and has been implicated in regulating several signaling modules, including the PI3K/Akt, JAK-STAT and mitogen-activated protein kinase (MAPK) pathways, in a wide range of biochemical processes in healthy and diseased states. This review focuses on the roles of Shp2 phosphatase in regulating brain-derived neurotrophic factor (BDNF) neurotrophin signaling pathways and discusses its cross-talk with various GF and downstream signaling pathways in the retina.

INTRODUCTION

SH2 domain-containing tyrosine phosphatase-2 (Shp2) is a 593 amino acid non-transmembrane protein tyrosine phosphatase (PTP) encoded by the PTPN11 gene (He et al., 2014). This phosphatase is ubiquitously expressed and contains Src homology 2 (SH2) domains that facilitate its interactions with phospholipids and phosphoproteins in response to endogenous ligands such as hormones, growth factors (GFs) and cytokines (Dance et al., 2008). Shp2 plays prominent biological roles in regulating several signal transduction cascades associated with its functions in the early development of vertebrates, cell proliferation, differentiation, transcription regulation and metabolic control (London et al., 2012). Shp2 dysregulation has been reported to be associated with cardiovascular disorders (Lauriol et al., 2015) as well as neurodegenerative disorders of the brain and eye (Gupta et al., 2012a; Gómez del Rio et al., 2013). Upregulation of the phosphatase has been linked to juvenile myelomonocytic leukemia, acute myeloid leukemia and the progression of various types of cancer (Mohi and Neel, 2007; Jiang and Zhang, 2008). Dysregulation and inactivation of Shp2 in lower vertebrates lead to severe developmental defects and abnormalities of the central nervous system (CNS), heart and mammary gland (Grossmann et al., 2009).

Abbreviations: BDNF, Brain-derived neurotrophic factor; EGFR, Epidermal Growth Factor Receptor; FRS2, Fibroblast growth factor receptor substrate 2; GCL, Ganglion cell layer; MAPK, Mitogen Activated Protein Kinase; NGF, Nerve growth factor; RGCs, Retinal ganglion cells; Trk, Tropomyosin Related Kinase; Shp2, SH2 domain-containing tyrosine phosphatase-2; Shc, Src homology 2 domain containing.

With respect to its structure, the Shp2 molecule comprises a single catalytic PTP domain, two tandemly arranged SH2 domains at the N-terminus and a carboxy-terminal hydrophobic tail (Figure 1A; Neel et al., 2003; Ostman et al., 2006). Both SH2 domains are involved in selectively discerning phosphorylated sites on other molecules and binding to them, thereby mediating Shp2 interactions with different receptors and adaptor proteins (Tartaglia et al., 2001; Li et al., 2016), while the C-terminal tail might promote protein-protein interactions (Neel et al., 2003).
Crystal structural studies of Shp2 phosphatase revealed that, under normal conditions, the N-SH2 domain exhibits an intramolecular interaction with the PTP active site, thereby auto-inhibiting the Shp2 catalytic activity (Neel et al., 2003; Tartaglia and Gelb, 2005). However, upon Shp2 engagement by tyrosine-phosphorylated proteins, a conformational change in the domain relieves the auto-inhibitory effects, thereby unlocking the Shp2 phosphatase activity (Figure 1B; Neel et al., 2003; He et al., 2014). This phosphatase is well expressed in various regions of the brain such as the cerebellum, brain cortex and hippocampus (Rusanescu et al., 2005). Intracellular signaling mediated by Shp2 has been found to be crucial in mediating neural cell-fate decisions throughout nervous system development, ensuring that cortical precursor cells generate neuronal cells rather than a glial cell type during brain development, while its neuroprotective actions are reported to be directed against ischemic brain injury (Ke et al., 2007; Cai et al., 2010). In the retina, Shp2 is well expressed in the ganglion cell layer (GCL) and inner nuclear layer (INL), and its reactivity has been detected in photoreceptors (Kinkl et al., 2002). Shp2 is suggested to be involved in neuronal morphogenesis during early embryonic stages, whereas no further deficits in retinal differentiation were observed in Shp2 mutants during postnatal development, indicating a critical role of the protein during early retinal development (Cai et al., 2010). Retinal degenerative changes, particularly localized to the inner retina, along with optic nerve atrophy in Shp2-ablated rodent models reinforce the important role played by Shp2 in the retina (Cai et al., 2011; Pinzon-Guzman et al., 2015). Shp2 was further demonstrated to mediate Sema4D repulsive signaling to provide axonal guidance in the embryonic chick and mouse retinas (Fuchikawa et al., 2009). Numerous in vitro and in vivo studies have addressed the involvement of Shp2 phosphatase and its functional and biochemical effects in various cell signaling pathways in depth. Here we discuss the modulation of brain-derived neurotrophic factor (BDNF) as well as multiple GF signaling networks by the Shp2 phosphatase and its implications in the retina.

ROLE OF SHP2 IN BDNF MEDIATED SURVIVAL SIGNALING

Neurotrophins (NTs) are secreted proteins that regulate neural growth, survival and function by negatively affecting the induction of various cellular apoptotic pathways. Retinal ganglion cells (RGCs) express various neurotrophic factors and are also supported by neurotrophic factors obtained locally from the Muller cells and retrogradely from the brain through axonal flow (Takihara et al., 2011). The neuromodulatory effects of BDNF, in particular, play an important role in neuronal regeneration and development, maintaining the health of RGCs and protecting them from apoptosis (Nakazawa et al., 2002). Neurotrophin-regulated signaling cascades have been shown to protect the RGCs and suppress apoptotic pathways (Liu et al., 2002; Gupta et al., 2013). BDNF is a high-affinity ligand of tropomyosin-related kinase B (TrkB) and has been shown to be effective in suppressing RGC death caused by axotomy or axonal injury in rodent models (Notaras et al., 2017).
BDNF induces Shp2 phosphorylation and its subsequent association with adaptor proteins, including fibroblast growth factor receptor substrate 2 (FRS2; Easton et al., 2006), Src homology 2 domain containing (Shc; Gupta et al., 2013) and GRB2/SOS, for complete mitogen-activated protein kinase (MAPK) activation (Figure 2; Easton et al., 2006; Gupta et al., 2013), suggesting a role of this protein in mediating scaffold functions (Chitranshi et al., 2017a). MAPK activation is shown to be neuroprotective in glaucoma conditions (Cai et al., 2011), where neurodegeneration leads to irreversible vision defects. A consensus amino acid sequence, the NPXY motif, in the TrkB sequence is recognized by the phosphotyrosine-binding (PTB) domain of FRS2 and is indispensable for Shp2-FRS2 association (Easton et al., 2006). This interaction helps mediate the pathway in a BDNF-dependent manner (Kumamaru et al., 2011). Shp2 may thus potentially function as a transducing protein connecting BDNF and TrkB to MAPK activation, suggesting a positive role of Shp2 in regulating the MAPK signaling pathway (Figure 2; Easton et al., 2006; Kumamaru et al., 2011). The phosphatase also plays important roles in the early development of the retina. Cai et al. (2010) demonstrated an essential role of Shp2 phosphatase in retinal cell fate during optic vesicle formation in the early embryonic period, although its deletion was not found to influence retinal development after the initiation of retinal differentiation. Retinas with mutant Shp2 showed features of retinal gliosis, progressive apoptosis of all retinal cell types and other degenerative changes, suggesting an important role for Shp2 in Muller glial cells (Cai et al., 2011). Dysfunction of Muller glial cells due to genetic disruption of Shp2 may indirectly affect other retinal neurons, such as photoreceptors and RGCs, leading to retinal neuronal death and degeneration (Joly et al., 2008; Bringmann et al., 2009). This is due to the neuroprotective support that Muller cells provide to the whole retina, potentially by producing neuroprotective factors such as BDNF that enhance survival signaling through extracellular signal-regulated kinase (ERK) and Akt pathway activation (Bringmann et al., 2009; Table 1). Genetic disruption of Shp2 resulted in aberrant ERK phosphorylation in Muller cell bodies within the GCL and INL with extensive retinal degeneration and optic nerve dystrophy. K-ras activation, however, partially rescued retinal loss, suggesting that Shp2 might act in a Ras-MAPK-dependent signaling pathway (Cai et al., 2010, 2011; Table 1). Functions of this phosphatase in sustained activation of Ras/ERK signaling effectors and its positive involvement in BDNF/TrkB-promoted survival effects on PC12 cells and also on cultured cerebral and ventral mesencephalic neurons have previously been reported (Neel et al., 2003; Zhang et al., 2004). The positive effects of Shp2 on Ras and ERK/Akt signaling pathways can possibly be mediated through its regulatory effects on negative regulators such as Ras-GTPase-activating protein or C-terminal Src kinase (CSK; Figure 2; Zheng et al., 2003). Depending upon the partners and downstream signaling pathways, Shp2 phosphatase has also been shown to exert dominant negative regulatory effects (Tartaglia and Gelb, 2005). Rusanescu et al. (2005) suggested that BDNF-induced activation of Ras, Akt and ERK is regulated by increased cross-talk between Shp2 and the TrkB receptor.
This interaction negatively affects TrkB autophosphorylation and its activation. Accordingly, Shp2 deletion resulted in TrkB activation and enhanced the survival rate of glutamate-exposed neural cells (Rusanescu et al., 2005). The negative effects of Shp2 on TrkB activation have also been identified in RGCs isolated from the animal retina (Gupta et al., 2012b; also see Table 2). An increased Shp2-TrkB interaction was observed under glaucomatous stress conditions, indicating a pathological cross-talk between the two proteins, while its inhibition restored TrkB activity under the same conditions (Harper et al., 2009; Gupta et al., 2012b). TrkB activation has been shown to play a critical role in RGC survival under various stress conditions, and therefore Shp2 activation or its enhanced interaction with TrkB is likely to suppress the neuroprotective pathways, leading to RGC loss and optic nerve axonal deterioration (Gupta et al., 2012b). The effects of the Shp2-TrkB interaction on axonal regeneration are another potential area for investigation under glaucoma conditions. The Shp2-TrkB interaction has been demonstrated to be mediated through the adapter protein caveolin-1 (Cav-1), the prominent structural constituent of caveolae (Figure 2). Experimental glaucoma stress conditions caused Cav-1 protein hyperphosphorylation, which resulted in increased binding to Shp2 phosphatase in the RGCs (Gupta et al., 2012b; Chitranshi et al., 2017b). Shp2 was shown to affect Cav-1 cellular functions through binding to phosphorylated Cav-1 under various stress conditions, hindering complex formation between Cav-1 and CSK and thereby positively regulating the Src signaling pathway and ERK phosphorylation (Yun et al., 2011; Jo et al., 2014). Phosphorylation of Shp2 has previously been shown to be dependent on the presence of Cav-1, and Cav-1 downregulation using small interfering RNA (siRNA) significantly reduced Shp2 tyrosine phosphorylation (Yun et al., 2011). This interaction is shown to be mediated via the N-SH2 binding domain of Shp2 but not the C-terminal PTP motif that regulates the downstream signaling (Park et al., 2015). The roles of other significant caveolar proteins, such as the cavin family member polymerase I and transcript release factor (PTRF), which participates in caveolae formation through its interaction with Cav-1, have not been extensively investigated, and their potential cross-talk with Shp2 remains to be explored (Hansen et al., 2013).

FIGURE 2 | Schematic representation of various biochemical intracellular signaling pathways involving Shp2 and its cross-talk with other receptors, leading to downstream effects on cell survival, growth, differentiation and proliferation. Arrows and T-bars indicate positive and negative regulation, respectively, while the dashed lines show possible interactions. See the text for regulation details. P, Phosphorylation; CNTF, Ciliary neurotrophic factor; IR, Insulin receptor; FGF, Fibroblast growth factor; EGF, Epidermal growth factor; TrkB, Tropomyosin-related kinase B; FRS2, FGF receptor substrate 2; GAB1, GRB2-associated binder-1; GRB2, Growth factor receptor-bound protein 2; Shc, Src homology 2 domain containing; CSK, C-terminal Src kinase; GAPs, GTPase-accelerating proteins; ERK, Extracellular signal-regulated kinases; MEK, Mitogen-activated protein kinase kinase; PI3K, phosphoinositide 3-kinase.
Our ongoing studies have indicated that Shp2 overexpression, both in a neuroblastoma cell line (SH-SY5Y cells) and in the RGCs in vivo, leads to enhanced induction of the endoplasmic reticulum stress response and diminished TrkB activity (Chitranshi et al., 2017a). Overall, these studies might explain the transient effects of TrkB or BDNF modulation in delaying RGC death under glaucomatous or stress conditions. BDNF/TrkB is a potent survival pathway in the visual system (Fu et al., 2009). Therefore, the negative regulation of TrkB through Shp2 phosphatase might explain why BDNF/TrkB activation in RGCs has only transient protective effects in vivo (Gupta et al., 2012b). In cerebellar granule neurons, the suppressive functions of Shp2 are reported to abolish axonal regeneration through paired immunoglobulin-like receptor B (PIR-B)/Shp-mediated TrkB inhibition (Fujita et al., 2011a). Myelin-associated glycoprotein (MAG) stimulation resulted in PIR-B-mediated recruitment of Shp2 and Shp1. Shp2/Shp1 downregulation using siRNA was sufficient to reduce the MAG-induced TrkB dephosphorylation and the subsequent neurite growth inhibitory effects caused by MAG/PIR-B signal transduction. The negative regulation of TrkB by Shp2 was also confirmed in dissociated retinal neurons as well as in animals subjected to optic nerve injury, where Shp2 knockdown contributed to reduced MAG-induced TrkB dephosphorylation in RGCs and significantly promoted optic nerve regeneration (Fujita et al., 2011a,b). Nerve growth factor (NGF), another member of the neurotrophin family, exerts its survival-promoting effects by stimulating neural development and differentiation through the MAPK cascade (Lambiase et al., 2002). The pivotal functions of this neurotrophin in the visual system, and the retina in particular, are highlighted by the expression of its high-affinity receptor TrkA in RGCs, glial and bipolar cells, which also express NGF, providing protective effects on these neurons against various diseases. NGF therapies have been helpful in protecting the retina and optic nerve in raised intraocular pressure (IOP) models or in neurodegenerative disorders such as Alzheimer's disease (AD; Lambiase et al., 2009; Roberti et al., 2014), which shares various ocular manifestations with glaucoma (Mirzaei et al., 2017). The survival effects of NGF on the retina have been investigated by several researchers, demonstrating that endogenous and exogenous NGF might be helpful in clinical approaches to treat retinal damage and attenuate RGC degeneration caused by glaucoma (Lambiase et al., 2009; Roberti et al., 2014). It has previously been shown that NGF binding to the TrkA receptor results in enhanced Shp2 phosphatase activity, which together with FRS2/FRS3 and GRB2 plays a critical role in inducing neurite extension (Dixon et al., 2006; Easton et al., 2006) in cultured cortical neurons and PC12 cells (Goldsmith and Koizumi, 2002). Yet, the role of Shp2/TrkA interactions or Shp2-mediated effects on NGF signaling in the retina has not yet been explored. FGF family members are implicated in signaling pathways responsible for vertebrate retinal development, differentiation and lens vesicle patterning, where Shp2 functions as a vital downstream mediator (Cai et al., 2010). Signaling is initiated via FGF stimulation, and the direct interaction of the FRS2 mediator with the activated FGFR leads to recruitment of other adaptor proteins, including GRB2 and Shp2 phosphatase, thereby activating the Ras/ERK signaling pathway (Figure 2).
However, the overarching hypothesis suggests that the FRS2α phosphorylation sites for Shp2 binding play a major role in ERK activation and the resultant control of eye development (Gotoh, 2008; Kim et al., 2015; Table 1). Accordingly, an FRS2α mutant lacking these binding sites failed to support normal lens and retinal development. This phosphatase, consequently, has proven roles in providing neuroprotection, maintaining Muller cell function, directing retinal neuronal fate during early retinal and lens development and regulating intrinsic retinal survival mechanisms, while its ablation has been shown to severely disrupt retinal cell maturation, leading to extensive retinal cell death and degeneration (Gotoh et al., 2004; Cai et al., 2010). The Akt intracellular signaling process is apparently not involved in FGF-induced developmental processing, and it was not able to compensate for the retinal degenerative phenotype linked to Shp2 ablation (Cai et al., 2011). Shp2 was shown to negatively regulate phosphoinositide 3-kinase (PI3K) signaling in human glial cells in response to EGF treatment. Zhang et al. (2002) demonstrated that Shp2 is involved in regulating the duration and strength of PI3K activation, and GAB1-mediated PI3K signaling could be activated in fibroblasts expressing mutant Shp2 (Zhang et al., 2002; Mattoon et al., 2004). In contrast, Cai et al. (2011) indicated that although both Shp2 and PI3K are involved in normal retinal protection, these two proteins operate separately, showing little cross-talk, and enhancing PI3K signaling did not compensate for Shp2 deficits in the retinas of Shp2 mutant mice. Conversely, PI3K activation was reduced upon Shp2 ablation following stimulation with other GFs, including PDGF and IGF-1, suggesting differential effects of Shp2 in response to various GFs (Zhang et al., 2002). IGF-1 is expressed in the retinal pigment epithelium and was shown to play vital roles in the differentiation of cultured retinal neuroepithelial cells in the presence of laminin-1, while its absence or antibody-mediated blocking seriously affected retinal neuronal differentiation (Frade et al., 1999). The EGF-inhibitory impact of Shp2 was also investigated in a glioma cell line (SNB19). Despite the lack of a proliferative response of SNB19 cells to EGF stimulation, it was found that an interfering Shp2 mutant could restore the cells' ability to proliferate following EGF induction (Reeves et al., 1995). Using mutants exhibiting various levels of EGFR activity has revealed differential effects of this receptor during retinal development (Oishi et al., 2006). However, the in-depth mechanisms of Shp2-dependent EGFR activity in retinal cell development remain to be explored. Shp2 has also been suggested to negatively regulate the Janus kinase-signal transducer and activator of transcription (JAK-STAT) signaling cascade, which has major functions in cellular processes such as proliferation, differentiation and apoptosis (Kisseleva et al., 2002). Hee et al. (2006) demonstrated that, in brain microglia, Shp2 is involved in the transient stimulation of JAK-STAT signaling following ganglioside induction. This occurs through lipid raft-mediated Shp2 phosphorylation and its subsequent association with JAK kinase, leading to negative regulation of signaling (Hee et al., 2006). In retinal neurons (rod photoreceptors), Shp2/Shp1 phosphatase is recruited through an IGF-1-induced pathway, reduces the level of phosphorylated STAT3 and thereby promotes photoreceptor differentiation (Pinzon-Guzman et al., 2015; Table 2).
Interestingly, a novel pathway was recently identified by Salvucci et al. (2015) that highlighted the negative regulation of STAT1 by Shp2 phosphatase (Salvucci et al., 2015). EphrinB2, a critical regulator of retinal vasculature pruning and vessel survival (Salvucci and Tosato, 2012), was demonstrated to be involved in recruitment of Shp2 phosphatase, providing physiological pruning of hyaloid vessels during eye development (Salvucci et al., 2015). Nevertheless, Shp2 ablation in photoreceptors was shown to stimulate STAT3 activation, which might either suggest a regulatory role of the phosphatase in the photoreceptor survival pathway or be an injury-dependent response protecting the retina from further profound injury caused by ERK downregulation. This signaling, although required for cell differentiation in the postnatal retina, is dispensable for retinal homeostasis under normal physiological conditions (Cai et al., 2011). JAK2 and STAT3 effectors have been mainly localized to the retinal layers GCL and INL, suggesting that they mediate neuroprotective activity in ganglion and Muller cells through CNTF stimulation (Peterson et al., 2000). In the late embryonic period and in the postnatal stage, STAT3 is activated through the CNTF-mediated gp130 receptor and entirely inhibits differentiation of rod photoreceptors (Table 1, Ozawa et al., 2004; Pinzon-Guzman et al., 2015). CNTF also functions through the Shp2-mediated Ras/MAPK downstream pathway involved in cell growth and survival (Hirano et al., 1997; Ohtani et al., 2000). However, activation of the STAT3 downstream effector, but not Shp2-mediated signaling, is required during postnatal retinal development (Ozawa et al., 2004). In addition, the phosphatase has a proven role in oligodendrocyte maturation and differentiation (Liu et al., 2011). Shp1 genetic ablation is associated with negative modulation of myelination (Massa et al., 2004), while Shp2 plays a pivotal role in thyroid hormone (T3)-dependent maturation of oligodendrocyte precursor cells (OPC) in the CNS, which is regulated through the Akt and ERK1/2 pathways (Liu et al., 2011). T3 is predominantly involved in optic nerve OPC differentiation and impacts RGC survival (Baas et al., 2002), which might be attributed to the potential regulatory role of Shp2 on the oligodendrocytes within the optic nerve. Different studies suggest that Shp2 activity is required for activation of insulin receptor (IR) downstream signaling, including the Ras, Raf, MEK and MAPK cascade. Insulin receptor substrate 1 (IRS-1), one of the major IR substrates, is a multisite docking protein which interacts with SH2 domain-containing proteins such as Shp2. Milarski and Saltiel (1994) showed that IRS-1- and Shc-dependent phosphorylation of IR was markedly attenuated following Shp2 mutation in fibroblasts, confirming the crucial role of Shp2 phosphatase in regulating IR activity (Milarski and Saltiel, 1994). IR signaling is vital in the light-dependent PI3K/Akt cascade, which provides neuroprotection to the photoreceptors and rescues them from apoptosis, while its deletion leads to stress-mediated degeneration of photoreceptors (Rajala et al., 2013). The association of this receptor with Shp2 in neuronal cells of the retina has not been rigorously explored. SHP2 INVOLVEMENT IN RETINAL PATHOLOGICAL CHANGES Shp2 involvement in regulating retinal survival signaling pathways links dysregulation of this phosphatase to various physiological and pathological conditions.
Under glaucomatous stress, Shp2 leads to preferential RGC degeneration by inhibiting BDNF/TrkB downstream signaling through dephosphorylation and deactivation of TrkB (Gupta et al., 2012b; You et al., 2013). The potential effects of glaucoma extend well beyond the retina into the optic nerve and higher visual centers in the brain through transneuronal changes (Gupta et al., 2016). Loss of BDNF/TrkB signaling is reported to be strongly associated with other neurodegenerative diseases, including AD, Huntington's and Parkinson's disease (Yin et al., 2008; Baydyuk et al., 2011; Gupta et al., 2013), all of which display characteristics of retinal damage and dysfunction (Muqit and Feany, 2002; Bodis-Wollner, 2009; Gupta et al., 2016). Further investigations are required to explore the potential involvement of Shp2 mutations or associated polymorphisms in various retinal indices in both health and disease conditions. Indeed, many patients with Noonan syndrome, of whom 50% typically harbor a PTPN11 gene mutation (Tartaglia et al., 2002; Zenker et al., 2004), were identified with ocular abnormalities including fundal changes (Marin et al., 2012), optic disk excavation, enhanced cup-to-disk ratio and myopia, symptoms which are associated with a higher risk of glaucoma and retinal degeneration (Whitmore, 1992; Marin et al., 2012; Lee and Sakhalkar, 2014). Genetic ablation of the PTPN11 gene in animal studies resulted in extensive retinal degeneration, cell death and optic neuropathy during various developmental stages, highlighting its regulatory function in retinal neuroprotection and progenitor retinal cell fate (Cai et al., 2011; Puri and Walker, 2013). Furthermore, Shp2 plays key roles in regulating Akt/mTOR-driven myelination, which might reflect its role in multiple sclerosis (MS; Ahrendsen and MacKlin, 2013). Many MS patients suffer from visual loss, optic neuritis and RGC degeneration; however, any association between the role of Shp2 in myelination and its possible impact on the optic nerve has not been investigated (Gundogan et al., 2011; London et al., 2012). Additional approaches, including generating knockout or conditional/inducible gene ablation in different layers of the retina, would shed light on the cell-specific molecular mechanisms of the phosphatase in the retina. CONCLUDING REMARKS AND EMERGING CONCEPTS Shp2 plays an emerging and important role in retinal development and its preservation. Preliminary in vivo and in vitro studies imply that the major effect of the phosphatase is mediated through its regulatory effects on various GFs and their downstream effectors, which activate multiple signaling pathways. In this review we outlined and analyzed the existing evidence regarding Shp2 involvement in BDNF- and other GF-dependent signaling networks, with a specific focus on retinal neuronal cells. Although significant breakthroughs in the functional characterization of Shp2 have provided more knowledge of the physiological importance of the phosphatase, other areas of future study, including manipulating Shp2 expression in retinal and other neuronal cell lineages or developing specific phosphatase inhibitors, will further define the mechanisms through which Shp2 positively or negatively mediates various retinal signaling pathways. Another fascinating aspect would be considering Shp2 as a potential molecular target to modulate other neuronal signaling pathways, thereby serving as a mechanism-based therapy.
The involvement of Shp2 in a broad range of biochemical actions may make it challenging to develop a specific therapeutic strategy that isolates a particular neuroprotective role. However, prospective investigations probing Shp2 protein expression, post-translational modifications, sub-cellular localization and interactome changes will unravel the role of this protein in various neurodegenerative diseases and retinal disorders. AUTHOR CONTRIBUTIONS The review was conceptualized, written and edited by each of the authors. Supervisor: SLG.
v3-fos-license
2019-11-15T14:08:20.622Z
2019-11-15T00:00:00.000
208014925
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2019.01149/pdf", "pdf_hash": "b40946e6bbcedca8c06570e7bcbc41d54ea41f66", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44885", "s2fieldsofstudy": [ "Biology" ], "sha1": "4f97f8c537bffb01ce090c59cbdfedb849c8a53f", "year": 2019 }
pes2o/s2orc
Reconstruction and Analysis of the lncRNA-miRNA-mRNA Network Based on Competitive Endogenous RNA Reveal Functional lncRNAs in Dilated Cardiomyopathy Dilated cardiomyopathy (DCM) is an important cause of sudden death and heart failure with an unknown etiology. Recent studies have suggested that long non-coding RNA (lncRNA) can interact with microRNA (miRNA) and indirectly interact with mRNA through competitive endogenous RNA (ceRNA) activities. However, the mechanism of ceRNA in DCM remains unclear. In this study, a miRNA array was first performed using heart samples from DCM patients and healthy controls. For further validation, we conducted real-time quantitative reverse transcription (RT)-PCR using samples from DCM patients and a doxorubicin-induced rodent model of cardiomyopathy, revealing that miR-144-3p and miR-451a were down-regulated, and miR-21-5p was up-regulated. Based on the ceRNA theory, we constructed a global triple network using data from the National Center for Biotechnology Information Gene Expression Omnibus (NCBI-GEO) and our miRNA array. The lncRNA-miRNA-mRNA network comprised 22 lncRNA nodes, 32 mRNA nodes, and 11 miRNA nodes. Hub nodes and the number of relationship pairs were then analyzed, and the results showed that two lncRNAs (NONHSAT001691 and NONHSAT006358) targeting miR-144/451 were highly related to DCM. Then, cluster module and random walk with restart for the ceRNA network were analyzed and identified four lncRNAs (NONHSAT026953/NONHSAT006250/NONHSAT133928/NONHSAT041662) targeting miR-21 that were significantly related to DCM. This study provides a new strategy for research on DCM or other diseases. Furthermore, lncRNA-miRNA pairs may be regarded as candidate diagnostic biomarkers or potential therapeutic targets of DCM. INTRODUCTION Chronic heart failure (CHF), a main cause of morbidity and mortality, especially in aging. CHF is a complex clinical syndrome resulting from various structural and functional impairments in ventricular filling or blood ejection (Garin et al., 2014). The lifetime risk of developing CHF has been calculated to range from 20 to 33% worldwide, and nearly half of the patients with CHF will die within 5 years despite all the advanced therapies (Dobre et al., 2014). In addition, as the population ages, the cost associated with CHF is also expected to increase substantially. The etiology of CHF can be classified as ischemic (ICM) or non-ischemic cardiomyopathy (NICM), and dilated cardiomyopathy (DCM) is one of the major causes of ICM. In contrast to revascularization therapies for ICM, novel treatments for DCM remain scarce. Therefore, studies focused on developing new strategies for DCM are urgently required. Accumulating evidence has suggested that rather than being transcriptional noise, diverse non-coding RNAs (ncRNAs) serve as master regulators in CHF initiation and progression at the post-transcriptional level (Kumarswamy and Thum, 2013;Pinet and Bauters, 2015;Thum, 2015). Among them, long non-coding RNAs (lncRNAs) are conventionally described as transcripts longer than 200 nucleotides with no or little protein-coding capacity (Greco et al., 2016;Dangwal et al., 2017). Owing to their versatility, lncRNAs have been reported to participate in several cellular processes ranging from chromatin modification and RNA stability to translational control. Biochemically, lncRNAs exert their function via RNA-RNA, RNA-DNA, or RNA-protein interactions (Li et al., 2013;Dey et al., 2014;Shi et al., 2015). 
Of note, lncRNAs have been reported to competitively interact with microRNAs (miRNAs) and thus inhibit target mRNA degradation by a competitive endogenous RNA (ceRNA) regulatory mechanism (Sen et al., 2014;Tay et al., 2014). Recently, studies have identified several aberrantly expressed lncRNAs in CHF models. Moreover, overexpression/ knockdown of specific lncRNAs have been reported to significantly influence the process of cardiac development, aging, hypertrophy, and fibrosis in mice (Cheng et al., 2014;Michalik et al., 2014;Devaux et al., 2015;Pourrajab et al., 2015;Uchida and Dimmeler, 2015). However, because of low sequence conservation among species, it is difficult to extend the findings derived from murine models to humans; therefore, little is known about the function of lncRNAs in human hearts. Current reports of lncRNAs in DCM patients are focused on expression profiles from RNA sequencing or microarray (Schiano et al., 2017;Haas et al., 2018). Therefore, considering the large number and limited knowledge of lncRNAs, how to develop computational model for identification of lncRNAs and downstream miRNAs or mRNAs are of significant importance. In our study, we conducted a microarray profile of miRNAs in myocardial biopsy samples from end-stage DCM patients compared with those in normal myocardial samples. Furthermore, based on the ceRNA theory, we constructed a global triple network by using data from the GEO database, as lncRNA and mRNA form a triplet by sharing the same miRNA. We identified human DCM-related lncRNAs with high reliability and our results showed that the lncRNA-miRNA-mRNA network provides a new understanding of the mechanisms and potential therapeutic targets for DCM. Patients and Tissue samples The experimental procedure for evaluating the differential expression of lncRNA/miRNA/mRNA is described in (Figure 1) The study protocol was approved by the Medical Ethics Committee of the Third Affiliated Hospital of Soochow University in Changzhou, Jiangsu Province, China, and informed consent was obtained from each patient. Tissues for detection were collected from the left ventricular wall of explanted hearts from patients with a diagnosis of DCM undergoing heart transplantation (clinical data were presented in our previous paper) (Tao et al., 2016) and from unmatched healthy donors. Animal Model Doxorubicin-induced cardiomyopathy mouse model was induced by chronically administrating mice with either doxorubicin or phosphate-buffered saline (PBS) by six intraperitoneal (i.p) injections (day 0, 2, 4, 6, 8, and 10) at a dose of 4 mg/kg. After 4 weeks, echocardiography was performed and mice were sacrificed. RNA Isolation Total RNA was harvested using TRIzol and purified with the RNeasy mini kit (Qiagen, Hilden, Germany) according to manufacturer's instructions. cDNA synthesis was performed with Bio-Rad iScripTM cDNA Synthesis Kit (Bio-Rad, Hercules, CA, USA) in a reaction volume of 10 μl. miRNA Microarray gene Expression Profiling The miRNA expression profiling assay system based on Affymetrix 4.0 (OE Biotech's, Shanghai, China) was used to perform miRNA expression profiling of myocardial samples from three patients with DCM and three healthy donors (No. GSE112556). Clinical and echocadiography parmeters for patients with dilated cardiomyopathy were presented in Supplemental Table 1. The threshold of up-regulated or down-regulated miRNA was a fold change greater than two, and P < 0.05 using Student's t-test was considered statistically significant. 
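For illustration, the dysregulated-miRNA screen described above (fold change greater than two in either direction, Student's t-test P < 0.05) can be expressed as a short filtering step over the array intensity matrix. The sketch below is a minimal, hypothetical example: the file name, column labels, and group sizes are assumptions for illustration and are not taken from the actual array data.

```python
# Hedged sketch of a fold-change / t-test screen for dysregulated miRNAs.
# "mirna_expression.csv", the column labels and the group sizes are assumptions.
import pandas as pd
from scipy import stats

expr = pd.read_csv("mirna_expression.csv", index_col=0)  # rows: miRNAs, columns: samples
dcm_cols = ["DCM_1", "DCM_2", "DCM_3"]                    # assumed DCM sample labels
ctrl_cols = ["CTRL_1", "CTRL_2", "CTRL_3"]                # assumed control sample labels

records = []
for mirna, row in expr.iterrows():
    dcm_vals = row[dcm_cols].astype(float)
    ctrl_vals = row[ctrl_cols].astype(float)
    fold_change = dcm_vals.mean() / ctrl_vals.mean()      # linear-scale intensities assumed
    _, p_value = stats.ttest_ind(dcm_vals, ctrl_vals)     # unpaired Student's t-test
    records.append({"miRNA": mirna, "fold_change": fold_change, "p_value": p_value})

res = pd.DataFrame(records)
# Keep miRNAs changed more than two-fold in either direction with P < 0.05
dysregulated = res[((res.fold_change > 2.0) | (res.fold_change < 0.5)) & (res.p_value < 0.05)]
print(dysregulated.sort_values("p_value"))
```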
lncRNA and mRNA Microarray Data GEO is a public functional genomics data repository that supports MIAME-compliant data submissions. Human lncRNA/ mRNA expression profiles were downloaded from NCBI-GEO (GSE42955) (Molina-Navarro et al., 2013). The threshold of up-regulated or down-regulated lncRNA/mRNA was a fold change greater than 1.5, and P < 0.05 using Student's t-test was considered statistically significant. This database was further analyzed with our miRNA microarray profile. Real-Time Quantitative Reverse Transcription (Rt)-PCR For quantitative mRNA analysis, a template equivalent to 400 ng of total RNA was subjected to 40 cycles of quantitative PCR using the Takara SYBR Premix Ex TaqTM (TliRNaseH Plus, Takara, Tokyo, Japan) in the 7900HT Fast Real-Time PCR System. The absolute expression levels of miRNAs were normalized to the internal control small nuclear U6 and the expression levels of lncRNAs were normalized to 18s, all the data were calculated by the ΔΔCt method. Each reaction was primed using a genespecific stem-loop primer. The RT stem-loop primers and PCR primers are listed in Supplemental Table 2. Prediction of the miRNA Targets of lncRNAs and mRNAs The miRNA targets of lncRNA were predicted and the minimum free energy (MFE) of miRNA-lncRNA duplexes was calculated by using the RNAhybrid program. MiRNA sequences were obtained from miRbase and lncRNA sequences were obtained from NCBInucleotide. MiRNA target binding sites on the entire lncRNA sequence were predicted. Data of miRNA-mRNA interactions were downloaded from the Miranda and Targetscan prediction tools. Construction of the lncRNA-miRNA-mRNA Network The lncRNA-miRNA-mRNA network was constructed based on ceRNA theory as follows: (1) Expression correlation between lncRNAs and mRNAs was calculated using the Pearson correlation coefficient (PCC). The lncRNA-mRNA pairs with PCC > 0.99 and P value < 0.05 were selected as target pairs. (2) Among the selected lncRNA-mRNA pairs, if both the lncRNA and mRNA targeted and were negatively co-expressed with a common miRNA, this lncRNA-miRNA-mRNA group was identified as a co-expression competing triplet. Functional Enrichment Analysis Significant progress in data mining has provided a wide range of bioinformatics analysis tools, including the gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases. The GO database provides gene ontologies, the annotations of genes, and gene products to terms. The combination of ontologies and annotations renders the GO database as a powerful tool for further analysis. The KEGG database is a relational database comprising searchable molecular interaction pathways and reaction networks in metabolism, various cellular processes, and multiple human diseases. Reconstruction of the Key lncRNA-miRNA-mRNA subnetwork Every lncRNA and its related miRNAs and mRNAs in the global triple network were extracted to construct a new subnetwork using Cytoscape software. The number of related lncRNA-miRNA-mRNA triplets was calculated. By comparing the node degree of lncRNA and the number of related lncRNA-miRNA and miRNA-mRNA pairs, the target lncRNAs were identified. We then performed qRT-PCR to confirm the changes of lncRNA in samples from DCM patients and healthy controls. For further analysis, GO and KEGG enrichment analyses were performed for each of the validated lncRNAs by using their mRNA neighbors in the lncRNA-miRNA-mRNA subnetwork. 
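The network-construction and hub-selection steps described above lend themselves to a compact computational sketch. The outline below is illustrative only and assumes that the predicted target pairs and expression matrices are already available as Python objects; it is not the authors' actual pipeline, which relied on RNAhybrid, Miranda/Targetscan, and Cytoscape. Ranking nodes by degree, betweenness, and closeness, as in the topological analysis reported later, then flags candidate hub lncRNAs.

```python
# Hedged sketch of ceRNA triplet assembly and hub ranking (assumed inputs):
# expression DataFrames with matched sample columns and sets of predicted
# (lncRNA, miRNA) and (miRNA, mRNA) target pairs.
from itertools import product
from scipy.stats import pearsonr
import networkx as nx

def cerna_triplets(lnc_expr, mrna_expr, lnc_mir_pairs, mir_mrna_pairs,
                   pcc_cutoff=0.99, p_cutoff=0.05):
    """Return (lncRNA, miRNA, mRNA) triplets that share a miRNA and whose
    lncRNA-mRNA co-expression passes the PCC and P-value cutoffs."""
    triplets = []
    for lnc, gene in product(lnc_expr.index, mrna_expr.index):
        r, p = pearsonr(lnc_expr.loc[lnc], mrna_expr.loc[gene])
        if r > pcc_cutoff and p < p_cutoff:
            lnc_mirs = {mir for (l, mir) in lnc_mir_pairs if l == lnc}
            gene_mirs = {mir for (mir, g) in mir_mrna_pairs if g == gene}
            for mir in lnc_mirs & gene_mirs:          # common miRNA => ceRNA triplet
                triplets.append((lnc, mir, gene))
    return triplets

def rank_hubs(triplets, top_n=20):
    """Build the triple network and rank nodes by degree, betweenness and closeness."""
    g = nx.Graph()
    for lnc, mir, gene in triplets:
        g.add_edge(lnc, mir)
        g.add_edge(mir, gene)
    metrics = {"degree": dict(g.degree()),
               "betweenness": nx.betweenness_centrality(g),
               "closeness": nx.closeness_centrality(g)}
    return {name: sorted(vals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
            for name, vals in metrics.items()}
```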
Neonatal Rat Ventricular Cardiomyocyte and Fibroblast Isolation and miRNA Transfection All rats were purchased and raised in the Experimental Animal Center of Soochow University (Suzhou, Jiangsu Province, China). All procedures were in accordance with guidelines on the use and care of laboratory animals for biomedical research published by the National Institutes of Health (No. 85-23, revised 1996), and the experimental protocol was approved by the Animal Care and Use Committee of Soochow University. The isolation of neonatal rat ventricular cardiomyocytes (NRCM) and fibroblasts (NRCF) and the miRNA transfection were conducted as described in our previous paper (Tao et al., 2018). All transfections and assays on cardiomyocytes and fibroblasts were conducted in serum-free medium and low-serum medium (1% FBS), respectively. Cardiac fibroblasts at passage two were exposed to miRNA agomir versus negative control (50 nM), or antagomir versus negative control (100 nM) (RiboBio, Guangzhou, China) for 48 h, and treated with 5 ng/ml or 10 ng/ml TGFβ (Peprotech, Rocky Hill, NJ, USA) for 24 h. Statistical Analysis In this study, data are expressed as the mean ± SD. Student's t-test, Chi-square test, or one-way ANOVA followed by Bonferroni's post hoc test was used to compare the one-way layout data when appropriate. P values less than 0.05 were considered to be significantly different. All analyses were performed using GraphPad Prism 5. RESULTS Screening for Differentially Expressed lncRNAs, miRNAs, and mRNAs miRNA arrays were used to determine differentially expressed miRNAs (DEMis) in samples from DCM patients and healthy controls (owing to the difficulty of obtaining human heart tissues, the DCM sample size was small). A total of 11 miRNAs were found to be dysregulated (fold change > 2.0; P < 0.05, Figure 2A and Table 1). Based on the qRT-PCR analysis, we confirmed that miR-144-3p, miR-144-5p, and miR-451a were down-regulated and miR-21-5p was up-regulated in DCM heart samples (Figure 2B). To further confirm the changes in miRNA expression in DCM models, we determined the expression levels of miR-144-3p/5p, miR-451a, and miR-21-5p in a doxorubicin-induced cardiomyopathy rodent model (the echocardiographic parameters for the doxorubicin-induced DCM model are presented in Supplemental Figure 1). Interestingly, miR-144-3p and miR-451a were consistently down-regulated, and miR-21-5p was up-regulated in the DCM rodent model (Figure 2C). Human lncRNA/mRNA expression data were obtained from the GEO database (GSE42955). Considering the differences in expression level between samples, the threshold for up-regulated or down-regulated lncRNAs/mRNAs was a fold change greater than 1.5, and P < 0.05 using Student's t-test was regarded as statistically significant. A total of 61 lncRNAs and 172 mRNAs were selected for the following analysis. lncRNA-miRNA-mRNA ceRNA Network First, we predicted lncRNA-miRNA and miRNA-mRNA pairs according to both base sequence and expression level. Considering that one miRNA may associate with several mRNAs or lncRNAs and that one lncRNA may also target several miRNAs, we analyzed the whole miRNA microarray profile. Based on the intersection elements, 199 miRNA-lncRNA pairs and 293 miRNA-mRNA pairs were identified (Figures 3A, B). Furthermore, 69 lncRNA-mRNA pairs were selected according to the ceRNA score and expression level (Figures 3C, D). The lncRNA-miRNA-mRNA network, composed of 22 lncRNA nodes, 32 mRNA nodes, and 11 miRNA nodes, was then constructed (Figures 3D-F).
Topological Analysis of the DCM-Related lncRNA-miRNA-mRNA Network As we know, hub nodes play significant roles in biological networks. We first analyzed the topological properties of the DCM-related lncRNA-miRNA-mRNA network. We calculated the degree, closeness, and betweenness of the network, and we ranked all the node topological features of the network. We listed the top 20 of each dimension. Interestingly, we found that six lncRNAs (Table 2) appeared in the list. Moreover, the number of first-level relationship pairs of lncRNA-miRNA and second-level relationship pairs of miRNA-mRNA was calculated (Table 3). Among the top 11 lncRNA-miRNA pairs, seven lncRNAs were identified in the ceRNA network (NONHSAT072651, NONHSAT006358, NONHSAT001691, NONHSAT027151, NONHSAT072212, NONHSAT119759, and NONHSAT139620). It is worth noting that four lncRNAs (NONHSAT001691, NONHSAT072651, NONHSAT006358, and NONHSAT027151) not only had higher betweenness and node degree but also had a higher number of lncRNA-miRNA and miRNA-mRNA pairs, which suggested that these four lncRNAs may play crucial roles in the origin and development of DCM and may be selected as key lncRNAs.
FIGURE 2 | MiRNA array using heart samples from DCM patients and healthy controls. (A) Heatmap of the miRNA array between DCM patients and healthy controls; a total of 11 miRNAs were found to be dysregulated. (B) qRT-PCR analysis of miRNA between DCM patients and healthy controls (n = 3). (C) qRT-PCR analysis of miRNA between doxorubicin-induced DCM models and controls (n = 6). *, p < 0.05; **, p < 0.01; ***, p < 0.001.
Key lncRNA-miRNA-mRNA Subnetwork We then searched for the four key lncRNAs in the ceRNA network we had previously constructed and found that the four lncRNAs mainly targeted miR-144-3p, miR-144-5p, and miR-451. To further validate the target lncRNAs, qRT-PCR was performed using samples from DCM patients and healthy controls (primers for lncRNAs are presented in Supplemental Table 2). The results suggested that NONHSAT001691 and NONHSAT006358 were significantly increased in DCM patients (Figure 4A). We then identified the mRNAs and miRNAs associated with these two lncRNAs in the global triple network and reconstructed new subnetworks. GO function and KEGG pathway annotations for each of these two lncRNAs were performed. For NONHSAT001691, we identified the biological processes of "positive regulation of cell-substrate adhesion," "positive regulation of apoptotic process" and "regulation of angiogenesis," and the enriched KEGG pathways included the "AMPK signaling pathway," "PPAR signaling pathway," and "adipocytokine signaling pathway" (Figures 4B-D). For NONHSAT006358, the biological processes were similar to those of NONHSAT001691, and the KEGG pathways included the "PPAR signaling pathway," "adipocytokine signaling pathway," and "glycerophospholipid metabolism" (Figures 4E-G). The subnetworks of lncRNAs NONHSAT001691 and NONHSAT006358 are presented in Figures 4D, G. Module Analysis of the DCM-Related lncRNA-miRNA Network To further investigate the crosstalk between mRNAs and lncRNAs, we performed bidirectional hierarchical clustering using the R package "gplots." In the heat map, we discovered two modules (Figures 5A-C) that were highly related to DCM. Then, we performed GO enrichment analysis and KEGG analysis of genes in the modules (Figures 5D-G). In module 1, the "triglyceride biosynthetic process" was significantly and highly related to DCM.
KEGG analysis demonstrated that "glycerophospholipid metabolism" was the most significant signaling pathway in DCM. In module 2, the "interferon gamma mediated signaling pathway" had the most notable relationship with DCM. Among all the lncRNAs in the two modules, we found 10 lncRNAs included in the ceRNA network (NONHSAT005601, NONHSAT026953, NONHSAT006250, NONHSAT007750, NONHSAT127244, NONHSAT127857, NONHSAT133928, NONHSAT009028, NONHSAT041662, and NONHSAT039699). Similar to previous approaches, we also conducted qRT-PCR using heart tissues from DCM patients and healthy controls. NONHSAT026953, NONHSAT006250, NONHSAT133928, and NONHSAT041662 were down-regulated in the DCM group. Interestingly, miR-21-5p was the major target of these four lncRNAs. As miR-21-5p has been validated in DCM patients, the involvement of these four lncRNAs in DCM was further confirmed. miR-144-3p/451a Play Different Roles in NRCM and NRCF in Vitro Next, we determined the relative expression levels of miR-144/451a/21 in isolated neonatal rat cardiomyocytes versus fibroblasts and demonstrated higher expression levels of miR-144-3p and miR-451a in cardiomyocytes, while miR-21-5p was enriched in fibroblasts compared to cardiomyocytes (Figure 6A). DCM is characterized by left ventricular dilation and interstitial fibrosis, which are the main causes of heart failure (McNally and Mestroni, 2017). MiR-21-5p is widely expressed in fibroblasts and has been reported to promote transdifferentiation from cardiac fibroblasts into myofibroblasts by targeting the Notch ligand Jagged1, contributing to cardiac fibrosis post myocardial infarction (Zhou et al., 2018). We then placed emphasis on miR-144-3p and miR-451a, which are closely clustered at a single gene locus. miR-451a has been reported to regulate cardiac hypertrophy and autophagy; forced expression of miR-451a in NRCM decreased cell size, whereas knockdown of miR-451a increased cell surface area (Song et al., 2014).
FIGURE 6 (legend fragment) | Forced expression of miR-144-3p attenuated TGFβ-induced cardiac fibroblast proliferation and trans-differentiation, as evidenced by EdU/α-SMA staining (n = 4) and qRT-PCR analysis of α-SMA, Col1a1 and Col3a1 (n = 6) (I, J). Inhibition of miR-144-3p exacerbated TGFβ-induced cardiac fibroblast proliferation and trans-differentiation, as evidenced by EdU/α-SMA staining (n = 4) and qRT-PCR analysis (n = 6). Scale bar: 50 µm. *, p < 0.05; **, p < 0.01; ***, p < 0.001.
We then transfected miR-144-3p into NRCM; the transfection efficiency of the miR-144 agomir or antagomir is shown in Figure 6B. However, overexpression or downregulation of miR-144 did not have any effect on cell size (Figure 6C) or on markers of pathological hypertrophy (Figures 6D, E), supporting a more prominent role for miR-451a than miR-144 in cardiomyocyte hypertrophy. Cardiac fibrosis is another hallmark of DCM. By qRT-PCR, the expression level of miR-144-3p was found to be downregulated in a cultured neonatal rat cardiac fibrosis model stimulated by TGF-β, while miR-451 did not show a statistically significant change (Figure 6F). To gain mechanistic insight into the role of miR-144 in regulating fibrosis, we investigated the effect of miR-144 in cardiac fibroblasts in vitro. MiR-144-3p overexpression decreased cardiac fibroblast proliferation and trans-differentiation induced by TGF-β, as evidenced by a decrease in EdU and α-SMA staining, and decreased expression levels of α-SMA, Col1a1, and Col3a1 (Figures 6G, H).
Contrary to the effects of miR-144-3p overexpression, inhibition of miR-144-3p further enhanced cardiac fibroblast proliferation and differentiation in the presence of TGF-β (Figures 6I, J), indicating a potential protective effect of miR-144-3p against cardiac fibrosis. DISCUSSION DCM is an important cause of sudden cardiac death (SCD) and heart failure and is the major indication for cardiac transplantation in children and adults worldwide. DCM is characterized by ventricular chamber enlargement and systolic dysfunction with excessive cardiac fibrosis (McNally and Mestroni, 2017). Over the past few years, great efforts have been made to explore the molecular mechanisms of DCM. miRNA-mediated myocardial gene expression is one of the novel mechanisms in DCM (Naga Prasad and Karnik, 2010; Miyamoto et al., 2015). In this study, we first performed a miRNA microarray using heart samples from DCM patients and healthy controls. miRNAs with a fold change > 2.0 and P value < 0.05 were further evaluated by qRT-PCR in DCM patients and a doxorubicin-induced cardiomyopathy rodent model. miR-144-3p and miR-451a were identified as down-regulated and miR-21-5p was identified as up-regulated in DCM. Previous studies have demonstrated that miR-144 and miR-451 are closely clustered and evolutionarily conserved [23]. miR-144 and miR-451 are processed from a single gene locus that is regulated by the essential hematopoietic transcription factor GATA-4 (Zhang et al., 2010). The miR-144/451 cluster has been identified to play crucial roles in cardiac ischemic lesions. In cardiomyocytes, ectopic expression of miR-144 or miR-451 augmented survival compared to controls in response to simulated ischemia/reperfusion, and survival was further improved by overexpression of the miR-144/451 cluster [26]. In a miR-144/451 knockout mouse model, loss of the miR-144/451 cluster limited the cardioprotection conferred by ischemic preconditioning by up-regulating the Ras-related C3 botulinum toxin substrate 1 (Rac1)-mediated oxidative stress signaling pathway [27], indicating a cardioprotective effect of the miR-144/451a cluster against ischemic dysfunction. However, few studies have addressed the functional role of miR-144/451 in DCM-related pathological changes. Previous studies have shown that forced expression of miR-451a decreased cell size, whereas knockdown of miR-451a had the opposite effect (Song et al., 2014). In our study, miR-144-3p did not show any effect on cell size under physiological conditions. Although miR-144-3p was enriched in cardiomyocytes compared to fibroblasts, overexpression of miR-144-3p attenuated fibroblast proliferation and transdifferentiation into myofibroblasts induced by TGF-β, which is also in line with previous in vivo studies (Li et al., 2018). Therefore, although they evolved as a cluster, miR-144-3p and miR-451a may perform distinct roles in DCM. Furthermore, miR-144-3p may also participate in other pathological processes in cardiomyocytes, including inflammation (Hu et al., 2014), autophagy, and mitochondrial metabolism (Li et al., 2016), which need to be explored in the future. MiR-21 is one of the first identified miRNAs implicated in cardiac hypertrophy and fibrosis (Duygu and Da Costa Martins, 2015; Zhou et al., 2018). In our study, miR-21-5p was upregulated in DCM heart samples, which is consistent with a larger-sample study (Satoh et al., 2011).
However, whether the regulatory effects of miR-21-5p on DCM result from cardiac fibrosis alone, hypertrophy alone, or a combination of diverse pathological processes needs further exploration. In addition to miRNAs, accumulated data have shown that lncRNAs participate in a variety of biological processes and complex diseases, including DCM. Unfortunately, functional studies of lncRNAs are relatively more complicated than those of coding RNAs or miRNAs. Therefore, an efficient and accurate way to infer the potential function of lncRNAs is to detect their relationships with miRNAs and/or mRNAs, whose functions have been annotated. In our study, we used the interaction data from NCBI-GEO and our miRNA microarray to generate a global triple network based on the ceRNA theory, which suggests that lncRNAs and mRNAs share the same miRNA in one triplet. The lncRNA-miRNA-mRNA network was composed of 22 lncRNA nodes, 32 mRNA nodes, and 11 miRNA nodes. Then, the hub nodes and the number of relationship pairs were used to perform topological and subnetwork analyses. In general, an lncRNA with more relationship pairs is a hub that participates in more ceRNA interactions and plays essential roles in network organization. In this study, four lncRNAs (NONHSAT001691, NONHSAT072651, NONHSAT006358, and NONHSAT027151) were observed to be key topological nodes whose node degrees and numbers of lncRNA-miRNA and miRNA-mRNA pairs were significantly higher than those of other lncRNAs. These four lncRNAs were then validated in heart samples of DCM patients and healthy controls by using qRT-PCR. NONHSAT001691 and NONHSAT006358 were identified as significantly up-regulated in DCM patients. Interestingly, miR-144-3p and miR-451a were the potential targets of these two lncRNAs in the ceRNA network, indicating that the NONHSAT001691/NONHSAT006358-miR-144/451a signaling pathway may play a crucial role in the development of DCM. GO and pathway analyses have been used to assess biological functions that are enriched among differentially expressed coding genes. Owing to their similar miRNA targets, the significant GO terms of NONHSAT001691 and NONHSAT006358 shared common trends involving "positive regulation of cell-substrate adhesion and the apoptotic process," and the results were consistent with those of previous studies on DCM (Miller et al., 2004; Pulinilkunnil et al., 2014; Isserlin et al., 2015). Pathway analysis of NONHSAT001691 and NONHSAT006358 showed that metabolic pathways were mainly enriched, including the AMPK, PPAR, adipocytokine, glucagon, and fatty acid degradation signaling pathways, all of which have been shown to play important roles in DCM (Nikolaidis et al., 2004; Giannessi et al., 2011; Roh et al., 2014; Sung et al., 2015). Moreover, bidirectional hierarchical clustering analysis was conducted to investigate the crosstalk between mRNAs and lncRNAs. A total of 10 lncRNAs were found in the ceRNA network, among which four lncRNAs (NONHSAT026953, NONHSAT006250, NONHSAT133928, and NONHSAT041662) were identified as down-regulated in the DCM group by qRT-PCR. miR-21-5p was the common target of these four lncRNAs, further confirming the feasibility of our miRNA microarray. Moreover, GO and KEGG analyses of these two modules also indicated that metabolism-related signaling pathways play a crucial role in the development of DCM, which provides a novel direction for the study of mechanisms underlying DCM.
Currently, discovering non-coding RNA-disease associations plays an increasingly vital role in devising diagnostic and therapeutic tools for diseases. However, since uncovering associations via experimental studies is expensive and time-consuming, novel and effective computational models for the identification of non-coding RNAs associated with DCM or other diseases are in demand. Several different computational methods have been proposed to calculate potential non-coding RNA (including lncRNA and miRNA)-disease association scores (Chen and Yan, 2013; Chen and Huang, 2017; Chen et al., 2018a; Chen et al., 2018b; Chen et al., 2019). In our study, we constructed a lncRNA-miRNA-mRNA network based on the ceRNA theory. NONHSAT001691/NONHSAT006358-miR-144-3p/451a and NONHSAT026953/NONHSAT006250/NONHSAT133928/NONHSAT041662-miR-21-5p were further identified as potential key signaling pathways correlated with DCM. Therefore, this study provides a framework for constructing powerful computational models to predict potential lncRNA-miRNA-disease associations and to select the most promising DCM- or other disease-related lncRNAs/miRNAs for experimental validation. However, our study still has some limitations. First, owing to the difficulty of obtaining human heart tissues, the sample size was small (six DCM samples and six healthy control samples). Due to the lack of samples, there may be false positives in the results. Second, in the process of converting different gene IDs from different databases, a number of genes may have been lost, which would decrease the accuracy of our results. Finally, our study was mainly focused on lncRNA/miRNA changes in DCM samples; therefore, the underlying biological functions need further exploration. DATA AVAILABILITY STATEMENT The data on miRNAs discussed in this manuscript have been deposited in NCBI's GEO and are accessible through GEO Series accession number GSE112556. ETHICS STATEMENT The study protocol was approved by the Medical Ethics Committee of the Third Affiliated Hospital of Soochow University in Changzhou, Jiangsu Province, China, and informed consent was obtained from each patient.
v3-fos-license
2021-12-02T14:31:52.724Z
2021-11-30T00:00:00.000
244779247
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2021.748890/pdf", "pdf_hash": "eb318942e15dd607362f5321450cf991fb649d84", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44886", "s2fieldsofstudy": [ "Biology" ], "sha1": "eb318942e15dd607362f5321450cf991fb649d84", "year": 2021 }
pes2o/s2orc
HigB1 Toxin in Mycobacterium tuberculosis Is Upregulated During Stress and Required to Establish Infection in Guinea Pigs The extraordinary expansion of Toxin Antitoxin (TA) modules in the genome of Mycobacterium tuberculosis has received significant attention over the last few decades. The cumulative evidence suggests that TA systems are activated in response to stress conditions and are essential for M. tuberculosis pathogenesis. In M. tuberculosis, Rv1955-Rv1956-Rv1957 constitutes the only tripartite TAC (Toxin Antitoxin Chaperone) module. In this locus, Rv1955 (HigB1) encodes for the toxin and Rv1956 (HigA1) encodes for antitoxin. Rv1957 encodes for a SecB-like chaperone that regulates HigBA1 toxin antitoxin system by preventing HigA1 degradation. Here, we have investigated the physiological role of HigB1 toxin in stress adaptation and pathogenesis of Mycobacterium tuberculosis. qPCR studies revealed that higBA1 is upregulated in nutrient limiting conditions and upon exposure to levofloxacin. We also show that the promoter activity of higBA1 locus in M. tuberculosis is (p)ppGpp dependent. We observed that HigB1 locus is non-essential for M. tuberculosis growth under different stress conditions in vitro. However, guinea pigs infected with higB1 deletion strain exhibited significantly reduced bacterial loads and pathological damage in comparison to the animals infected with the parental strain. Transcriptome analysis suggested that deletion of higB1 reduced the expression of genes involved in virulence, detoxification and adaptation. The present study describes the role of higB1 toxin in M. tuberculosis physiology and highlights the importance of higBA1 locus during infection in host tissues. INTRODUCTION Tuberculosis (TB), caused by Mycobacterium tuberculosis (M. tuberculosis) is a major health concern and infects nearly one-third of the world population.
The failure of BCG vaccine to impart protection in adult population and HIV co-infection has negative impact over the control of global TB cases. There is a significant increase in the number of patients infected with the M. tuberculosis strain resistant to front-line TB drugs such as isoniazid and rifampicin. Studies have shown that the proportion of TB patients infected with multi-drug resistant strains lies in the range of 4.6-25% (Lange et al., 2019). Control of the spread of drug-resistant TB and eradication of TB is hampered by the limited efficacy of therapeutic approaches against drug resistant M. tuberculosis strains and our poor understanding of the strategies used by the pathogen for survival inside the human host. M. tuberculosis has emerged as a highly successful intracellular pathogen due to its ability to sense extracellular stimuli and reprogram metabolic pathways that enables it to survive in host tissues under varied stress conditions. Toxin antitoxin (TA) systems are mostly two component modules that are widely present in the genome of prokaryotes and have been implicated in bacterial stress adaptation, persister formation and virulence (Page and Peti, 2016;Harms et al., 2018;Slayden et al., 2018). TA module encodes a toxin that generally inhibits bacterial growth in a bacteriostatic manner by inhibiting an essential cellular function (Schuster and Bertram, 2013;Kedzierska and Hayes, 2016;Page and Peti, 2016;Harms et al., 2018;Fraikin et al., 2020;Kamruzzaman et al., 2021). The antitoxin component of the TA operon neutralizes the activity of this toxin. Based on the mechanisms for neutralization of toxin by the cognate antitoxin, the TA systems have been classified into eight types (Lobato-Marquez et al., 2016;Choi et al., 2018;Harms et al., 2018;Song and Wood, 2020;Wang et al., 2021). The M. tuberculosis genome encodes for majorly Type II TA systems, where the antitoxin forms a tight complex with toxin and abrogates the activity associated with toxin (Pandey and Gerdes, 2005;Ramage et al., 2009;Sala et al., 2014;Tandon et al., 2019). VapBC family that encodes for the VapC toxin and VapB antitoxin is the most abundant subfamily of TA systems in M. tuberculosis (Ahidjo et al., 2011). VapC toxins are characterized by the presence of PIN domain and have been shown to inhibit M. tuberculosis growth by targeting either tRNA or rRNA or mRNA (Sharp et al., 2012;Cruz et al., 2015;Winther et al., 2016). The inducible expression of VapC toxins inhibits the growth of M. tuberculosis or M. smegmatis or E. coli in a bacteriostatic manner (Ramage et al., 2009;Winther et al., 2016;Agarwal et al., 2018). Several studies have shown that the expression of a "subset" of TA systems is increased in stress conditions such as nutrient deprivation, low oxygen and in macrophages (Ramage et al., 2009;Gupta et al., 2017;Agarwal et al., 2018). Previously, we have shown that deletion of vapBC3, vapBC4, vapBC11, and vapC22 in the genome of M. tuberculosis impairs its growth in guinea pigs Deep et al., 2018). However, parental, vapC28 mutant and vapC21 strain displayed comparable growth kinetics in guinea pigs and mice, respectively Sharma et al., 2020). MazF toxins belonging to MazEF TA systems are sequence specific endonucleases that are cumulatively required for M. tuberculosis to establish infection in host tissues (Tiwari et al., 2015). RelE toxins belonging to the RelBE TA system have been shown to be individually non-essential for M. 
tuberculosis virulence in mice tissues but contribute to antibiotic tolerance in a drug-specific manner (Singh et al., 2010). HigBA TA system was originally identified on Proteus vulgaris plasmid, Rts1 with unique gene arrangement as HigA antitoxin is present downstream of HigB toxin (Tian et al., 1996). HigB belongs to RelE subfamily of toxins and cleaves mRNA in a ribosome dependent manner in V. cholerae, P. vulgaris, and E. coli (Christensen-Dalsgaard and Gerdes, 2006;Hurley and Woychik, 2009;Christensen-Dalsgaard et al., 2010). HigBA TA module from A. baumannii is expressed during stationary phase and under iron deficient conditions (Armalyte et al., 2018). In P. aeruginosa, activation of HigB toxin influences the levels of intracellular c-di-GMP and virulence factors like pyocyanin and pyochelin (Wood and Wood, 2016). Furthermore, HigB toxin also promotes the killing of immune cells by increasing the expression of type III secretion system in ciprofloxacin induced persisters in P. aeruginosa (Li et al., 2016;Wood and Wood, 2016;Zhang et al., 2018). Besides other bicistronic TA operons, M. tuberculosis encodes for two HigBA TA loci, HigBA2 (Rv2022c-Rv2021c) and HigBA3 (Rv3182-Rv3183). The genome of M. tuberculosis also encodes for a tripartite Toxin Antitoxin Chaperone (TAC) system. TAC system comprises of HigB1 toxin (Rv1955), HigA1 antitoxin (Rv1956) and SecB like chaperone (Rv1957). SecB like chaperone prevents HigA1 aggregation and degradation by interacting with Chad like sequences present within HigA1 (Fivian-Hughes and Davis, 2010;Bordes et al., 2016;Guillet et al., 2019). Studies in E. coli and M. smegmatis have shown that overexpression of HigB1 toxin inhibits bacterial growth which is restored upon co-expression of cognate antitoxin, HigA1 (Gupta, 2009;Ramage et al., 2009). It has been shown that HigB1 and HigA1 are co-transcribed with upstream genes Rv1954A, Rv1954c and downstream gene, Rv1957. The locus comprises of two promoters, the P2 promoter controls the expression of Rv1954A-Rv1957 locus, whereas, the P1 promoter is inducible in DNA damaging conditions and controls the expression of Rv1955-Rv1957 only. Previously, it has also been reported that HigA1 possesses helix-turn-helix motif at the amino-terminus, binds to the motif ATATAGG(N) 6 CCTATAT and represses the expression of Rv1954A-Rv1957 locus (Fivian-Hughes and Davis, 2010). Schuessler et al. (2013) have shown that inducible expression of higB1 decreased IdeR and Zur transcript levels and also cleaves tmRNA. Recently, Texier et al. (2021) has shown that ClpXP1P2 protease complex is involved in HigA1 degradation and proposed a model for HigB1 toxin activation. In the present study, we have performed experiments to investigate the physiological role of HigB1 in M. tuberculosis. Here, we report that HigB1 toxin is upregulated in M. tuberculosis under nutrient limiting conditions and upon exposure to levofloxacin. Further, we also demonstrate that in comparison to the parental strain, the growth of higB1 mutant strain was impaired in guinea pigs. We also observed that reduced tissue damage in lung sections of higB1 mutant strain infected guinea pigs in comparison to the sections from guinea pigs infected with the parental strain. The expression of genes involved in virulence, detoxification and adaptation were reduced in the higB1 mutant strain in comparison to the wild type strain. Taken together, in this study, we have investigated the role of HigB1 toxin in physiology and pathogenesis of M. tuberculosis. 
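As a worked illustration of the CFU enumeration mentioned at the start of this Methods section (aliquots diluted in 10-fold steps and plated on Middlebrook 7H11 agar), the short sketch below converts a hypothetical plate count into CFU per millilitre and log10 CFU. The colony count, dilution level, and plating volume are assumptions for illustration only and are not actual data from this study.

```python
# Worked example (assumed numbers) of converting a plate count into CFU/ml and
# log10 CFU under a 10-fold serial dilution scheme.
import math

colonies_on_plate = 84        # hypothetical colonies counted on one 7H11 plate
dilution = 10 ** 4            # plate assumed to come from the 10^-4 dilution
plated_volume_ml = 0.1        # assumed 100 microlitre plating volume

cfu_per_ml = colonies_on_plate * dilution / plated_volume_ml
print(f"{cfu_per_ml:.2e} CFU/ml, log10 CFU = {math.log10(cfu_per_ml):.2f}")
```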
Culture Conditions, Construction of higB1 Mutant and Complemented Strains The E. coli and mycobacterial strains were cultured at 200 rpm, 37 • C in Luria Bertani medium and Middlebrook 7H9 medium supplemented with 0.2% glycerol, 0.05% Tween-80 and 1x ADS, respectively. For CFU enumeration, an aliquot was removed at designated time points, diluted 10-folds and plated on Middlebrook 7H11 agar supplemented with 1x OADS plates at 37 • C for 3-4 weeks. Unless mentioned, all reagents and chemicals used in the study were purchased from Sigma Aldrich, Merck. M. tuberculosis higB1 gene was deleted from the genome of M. tuberculosis using temperature sensitive mycobacteriophages as described previously (Bardarov et al., 2002). Briefly, pYUB854 higB1 construct was prepared via cloning 800bp upstream (F-gaggccttacgtcctggacaccaacgtggtg, R-gtctagaacccatggcggctggatcaggggg) and downstream (Fgaagcttagagccttcggcgacaccccaccga, R-gactagtactcgaaatcagcggtg gctacgtc) regions of higB1 gene in cosmid pYUB854 flanking the hygromycin resistance cassette. The recombinant cosmid was digested with PacI and packaged in phagemid, phAE87 using Gigapack III Gold Packaging Extract. The recombinant cosmid was electroporated in M. smegmatis to generate high titer temperature sensitive mycobacteriophages. The high titer phages were used to transfect M. tuberculosis H37Rv strain to generate higB1 mutant strain. The deletion of higB1 gene was confirmed by performing whole genome sequencing using the Nextera XT kit and associated protocols on MiSeq (Illumina). The complemented strain was constructed by cloning higB1 gene with its upstream region in the integrative mycobacterium expression vector pMV306K. The recombinant pMV306K-higB1 was electroporated in higB1 mutant strain and transformants were selected on 7H11 agar plates containing hygromycin and kanamycin. Real Time Polymerase Chain Reaction Studies In order to determine higB1 and higA1 expression levels in disease relevant stress conditions, total RNA was isolated from M. tuberculosis H37Rv strain exposed to various stress conditions. These conditions were (i) oxidative stress (5 mM H 2 O 2 ), nitrosative (5 mM NaNO 2 , 7H9 medium, pH-5.2), nutritional stress (1x Tris buffer saline with 0.05% Tween 80), isoniazid treatment (10 µg/ml) and levofloxacin treatment (10 µg/ml). In order to measure intracellular expression levels, total RNA was isolated from J774.1 macrophage infected with M. tuberculosis. The isolated RNA from different conditions was DNase I treated, cDNA synthesized and qPCR was performed as previously described . Promoter Activity Assays For promoter activity assays, upstream region of higB1 was Polymerase Chain Reaction (PCR) amplified and cloned in an EGFP-based promoter reporter vector, pSCK301T3. The recombinant plasmid was electroporated into wild type, higB1, ppk1, and relA strains and transformants were selected on Middlebrook 7H11 medium supplemented with kanamycin and hygromycin. For measurement of promoter activity, strains were cultured in MB7H9 medium till different stages of growth and fluorescence measurements were determined using a Spectramax M5 plate reader (Molecular devices, Inc., United States) with excitation at 490 nm and emission at 520 nm. In vitro Stress and Drug-Persistence Experiments In vitro growth characteristics of parental, higB1 mutant and complemented strains ( higB1-CT) was determined in MB7H9 medium by measuring OD 600 nm at regular intervals. 
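Relating to the qPCR expression analyses described in the preceding subsection, the following is a minimal sketch of relative quantification, assuming the standard ΔΔCt approach with sigA as the housekeeping reference (as used for the expression data in this study). All Ct values shown are hypothetical and serve only to illustrate the arithmetic.

```python
# Minimal ΔΔCt sketch (hypothetical Ct values) for relative quantification of
# higB1 against the sigA housekeeping gene.
ct = {
    "higB1_stress": 24.1, "sigA_stress": 18.3,    # assumed Ct values, stressed culture
    "higB1_control": 26.0, "sigA_control": 18.2,  # assumed Ct values, control culture
}

delta_ct_stress = ct["higB1_stress"] - ct["sigA_stress"]     # normalise to sigA
delta_ct_control = ct["higB1_control"] - ct["sigA_control"]
delta_delta_ct = delta_ct_stress - delta_ct_control
fold_change = 2 ** (-delta_delta_ct)                         # relative expression
print(f"higB1 relative expression (stress vs control): {fold_change:.1f}-fold")
```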
For in vitro stress experiments, early-log phase cultures of various strains were exposed to either 5 mM H 2 O 2 , 5 mM NaNO 2 , 0.25% SDS, or 2.5 mg/ml lysozyme for 24 or 72 h. For nutritional starvation, early-log phase cultures of various strains were harvested, washed and resuspended in 1x tris buffered saline containing 0.05% Tween 80 (1x TBST-80) for either 7 or 14 days. The biofilm formation and colony morphology experiments for various strains were performed as previously described Arora et al., 2018). For in vitro drug-susceptibility assays, mid-log phase cultures of various strains were exposed to drugs that possess a different mechanism of action. The drugs used in the study were isoniazid (cell wall inhibitor), rifampicin (transcription inhibitor), and levofloxacin (replication inhibitor). Animal Experiments In vivo guinea pig experiments were performed as per the guidelines provided by Committee for the Purpose of Control and Supervision of Experiments on Animals (CPCSEA, Govt of India). The experiments were conducted with prior permission of the institutional animal ethics committee of University of Delhi, South Campus. Single cell suspension was prepared from mid-log phase cultures of various strains and aerosol infection was performed using Madison aerosol exposure chamber. The aerosol infection resulted in implantation of 50-100 bacilli in lung tissues at day 1 post-infection. The bacterial loads and histopathology analysis were performed at day 28 and day 56 post-infection. For CFU enumeration, both lungs and spleens were homogenized in 2 ml saline and 100 µl of 10.0-fold serial dilutions was plated on MB7H11 plates in duplicates. The upper left lobe of infected animals was fixed with 10% formalin and stained with hematoxylin and eosin for histopathology analysis as described previously (Singh et al., 2013(Singh et al., , 2016. Microarray Experiments For gene expression profiling, total RNA was isolated from wild type H37Rv, higB1 mutant and complemented strains as previously described (Singh et al., 2013). The isolated RNA was treated with DNase I (Thermo Fischer, United States) and quantified using Nanodrop 2000c spectrophotometer (Thermo Scientific, United States). The purity and integrity of RNA samples were checked on Agilent 2100 Bio analyzer (Agilent Technologies Inc., United States). Further, 25 ng of RNA was amplified and labeled using Low input Quick Amp WT Labeling kit (Agilent Technologies, United States) as described previously (Venkataraman et al., 2014). The labeled cRNA was purified using RNeasy columns (Qiagen, United States) and total yields were quantified on Nanodrop 2000c spectrophotometer. The hybridization was performed using Gene expression hybridization kit as per manufacturer's recommendation (Agilent Technologies, United States). Hybridizations were performed in triplicates. The slides were washed after hybridization as per manufacturer's instructions and scanned using the Agilent Microarray Scanner at a resolution of 5 µM. The settings used for scanner were: Agilent HD_GX_1 color (61 × 21.6 mm), TIFF 20-bit, Photomultiplier tube (PMT) gain 100%. The scanned image was analyzed using Agilent Feature Extraction software (v10.5). The raw data obtained from microarray experiment was normalized and analyzed using GeneSpring GX v.11.5 software. Normalization of the raw data was performed by taking the 50th percentile for each sample. Baseline correction was applied to the median of all samples. 
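For the normalisation step just described, a hedged sketch is given below: GeneSpring-style per-array scaling to the 50th percentile followed by per-probe baseline correction to the median of all samples. The input DataFrame, its contents, and the assumption of log2-scale intensities are illustrative; this is not the vendor software's implementation.

```python
# Hedged sketch of per-array 50th-percentile normalisation and per-probe
# baseline correction to the median. `raw` is an assumed DataFrame of log2
# signal intensities (rows: probes, columns: arrays).
import pandas as pd

def normalise(raw: pd.DataFrame) -> pd.DataFrame:
    # Per-array scaling: subtract each array's 50th percentile (log2 data assumed)
    scaled = raw - raw.quantile(0.50, axis=0)
    # Baseline correction: subtract each probe's median across all arrays
    return scaled.sub(scaled.median(axis=1), axis=0)

# Tiny hypothetical matrix used purely to demonstrate the call
raw = pd.DataFrame({"array_1": [8.1, 10.2, 6.3], "array_2": [8.4, 10.0, 6.1]},
                   index=["probe_a", "probe_b", "probe_c"])
print(normalise(raw))
```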
The normalized data has been submitted to NCBI's Gene Expression Omnibus database (GEO) and can be queried via accession number GSE179403. The differential expression analysis of samples was performed. Genes that showed a twofold or higher change with a P value of < 0.05 (unpaired Student t-test) were considered to be differentially expressed. Gene Ontology and Pathway analysis of differentially expressed genes was done using DAVID tool 1 and Panther Classification system 2 . Functional and Protein Interaction Network was performed using StringDB 3 . Clustering of the biologically enriched genes was done using Heatmapper online tool 4 . Gene regulatory network of enriched pathways and genes was performed using Pathreg algorithm (Theomics International Pvt Ltd, Bangalore, India) and visualized using Cytoscape V2.8.3. Statistical Analysis Statistical analysis and generation of graphs was done using Prism 8 software (Version 8.4.3; GraphPad software Inc, CA, United States). Differences between groups were compared using two-tailed t-test and were considered significant at Pvalue of < 0.05. The David analysis was also as per the statistical criteria. Deletion of higB1 Doesn't Alter the in vitro Characteristics of Mycobacterium tuberculosis Mycobacterium tuberculosis higBA1 locus is unusual as the toxin (HigB1, Rv1955) and antitoxin (HigA1, Rv1956) are co-transcribed along with the upstream gene, Rv1954A and the downstream gene Rv1957 (Cole et al., 1998; Figure 1A). Previously, it has been shown that SecB regulates the activity of HigBA1 locus as it prevents aggregation and degradation of HigA1 antitoxin (Sala et al., 2013;Bordes et al., 2016). It has also been shown that HigB1 overexpression results in growth arrest in M. tuberculosis and E. coli (Gupta, 2009;Schuessler et al., 2013). In order to determine the role of HigB1 protein in M. tuberculosis physiology, higB1 mutant strain was constructed using temperature sensitive mycobacteriophages as described in Materials and Methods ( Figure 1A, Bardarov et al., 2002). The generation of higB1 mutant strain was validated by PCR (data not shown) and whole genome sequencing. As shown in Figure 1B, no sequence reads aligning to the Rv1955 region were obtained in the mutant genome compared to the wild type H37Rv strain genome, confirming that the Rv1955 (higB1) sequence was absent in the mutant strain. Further, we did not identify any other secondary mutations in the genomic DNA sequence of the mutant strain. For construction of complemented strain, pMV306K-higB1 was electroporated into higB1 mutant strain. Next, the growth patterns of various strains were measured in vitro in liquid medium. We did not observe any significant differences in the growth patterns of various strains by measuring either absorbance or bacterial numbers at regular intervals (Figures 2A,B). The bacterial counts of H37Rv, higB1 mutant and complemented strains after 10 days were ∼ 9.2 × 10 8 , 1.56 × 10 9 , and 1.81 × 10 9 , respectively ( Figure 2B). In Pseudomonas aeruginosa, excess of HigB toxin has been shown to reduce the production of various virulence factors and biofilm formation (Wood and Wood, 2016). TA systems have also been implicated in biofilm formation and quorum sensing (Ren et al., 2004;Wang and Wood, 2011;Sun et al., 2017;Fu et al., 2018). Next, the ability of wild type, mutant and complemented strains to form biofilms was compared ( Figure 2C). 
We observed that the wild type, higB1 mutant and higB1 complemented strains were comparable in their ability to form biofilms in vitro (Figure 2C). Also, the colony morphology of the higB1 mutant strain was similar to that observed for the parental strain on Middlebrook 7H11 medium (Figure 2D). Differential Expression of HigBA1 Locus in Stress Conditions and Its Regulation by RelA Gene Product in Mycobacterium tuberculosis Previous studies have shown that the TAC operon in M. tuberculosis is upregulated upon exposure to DNA damaging agents, heat shock, nutritional stress and low oxygen conditions (Betts et al., 2002; Stewart et al., 2002; Rand et al., 2003; Rustad et al., 2008). We also determined the relative levels of higB1 toxin and higA1 antitoxin by qPCR using gene-specific primers after exposure to different in vitro stress conditions as described in Materials and Methods. In concordance with previous studies, we observed that the expression of higB1 was increased by ∼3.8-fold in M. tuberculosis upon exposure to nutritional stress. In contrast, the transcript levels of higA1 were increased by 1.5-fold in nutritionally starved growth conditions (Figure 3A). No differences in the transcript levels of higB1 and higA1 were observed after exposure to either oxidative or nitrosative stress (Figure 3A). As shown in Figure 3A, the transcript levels of higB1 were also increased by ∼2.8-fold upon exposure to levofloxacin. The expression of higA1 remained unchanged in levofloxacin-treated M. tuberculosis cultures. Further, we also measured the transcript levels of higB1 and higA1 in macrophages at 24 h post-infection. As shown in Figure 3A, we did not observe any significant changes in the transcript levels of higB1 and higA1 in M. tuberculosis-infected macrophages. Studies have shown that bacteria adapt to nutrient-limiting conditions by changing their transcriptome profile to support prolonged survival (Rohde et al., 2012). The change in transcription profiles in bacterial pathogens is associated with the synthesis of two intracellular alarmones, guanosine 5′,3′-bispyrophosphate (ppGpp) and guanosine pentaphosphate (p)ppGpp. In bacteria, the cellular (p)ppGpp levels are regulated by the enzymatic activities of RelA (alarmone synthetase) and SpoT (alarmone synthetase and hydrolase) (Ronneau and Hallez, 2019). The genome of M. tuberculosis encodes a single RelA, which is responsible for maintaining the cellular pools of the (p)ppGpp alarmone (Primm et al., 2000). Studies have shown that the RelA protein of M. tuberculosis is essential for its long-term survival under starvation and for establishing infection in mouse tissues (Primm et al., 2000; Weiss and Stallings, 2013). (p)ppGpp levels also regulate the intracellular levels of inorganic polyphosphate (PolyP). The levels of PolyP in bacterial pathogens are regulated by polyphosphate kinase 1 (PPK-1), exopolyphosphatases and polyphosphate kinase 2 (PPK-2). Dysregulation of PolyP levels is associated with attenuation of various intracellular pathogens in animal models (Kornberg et al., 1999; Singh et al., 2013). The (p)ppGpp alarmone and PolyP levels are known to accumulate during stress conditions, and these molecules regulate the bacterial stress response, specifically under nutrient starvation. As the higBA1 locus was simultaneously induced when M. tuberculosis H37Rv was exposed to nutrient-limiting conditions, we further analyzed the promoter activity of the higBA1 locus in relA and ppk1 mutant strains.
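The qPCR fold changes reported above were normalized to the housekeeping gene sigA, but the text does not state the quantification model; the sketch below assumes the commonly used 2^-ΔΔCt calculation purely for illustration, with hypothetical Ct values.

```python
# Hedged sketch: relative higB1/higA1 levels normalized to sigA, assuming the 2^-ΔΔCt method.
def fold_change_ddct(ct_target_stress, ct_sigA_stress, ct_target_control, ct_sigA_control):
    d_ct_stress = ct_target_stress - ct_sigA_stress       # normalize target to sigA (stressed)
    d_ct_control = ct_target_control - ct_sigA_control     # normalize target to sigA (control)
    return 2 ** (-(d_ct_stress - d_ct_control))

# Hypothetical Ct values chosen to give roughly the ~3.8-fold starvation induction of higB1
print(round(fold_change_ddct(24.1, 18.0, 26.0, 18.0), 1))   # ≈ 3.7
```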
FIGURE 2 | The growth patterns of various strains were compared by measuring absorbance at 600 nm (A) and CFU enumeration (B). The biofilm formation (C) and colony morphology (D) assays were performed as described in section "Materials and Methods." The data shown in panels (A,C,D) are representative of two independent experiments. The data shown in panel (B) are mean ± SE of Log10 CFU obtained from two independent experiments performed in duplicates.
FIGURE 3 | qPCR studies to determine the relative levels of higB1 and higA1 in different stress conditions. (A) The transcript levels of higB1 and higA1 were determined using gene-specific primers as described in section "Materials and Methods." The data obtained were normalized to the levels of sigA, a housekeeping gene, and shown as mean ± SE obtained from three independent experiments. (B) Promoter activity assays. The promoter activity was measured in various strains at different stages of growth as described in section "Materials and Methods." The data shown are mean ± SE of promoter activity obtained in various strains. Statistically significant differences were observed for the indicated groups, *P < 0.05, **P < 0.01.
For the promoter activity assay, pSCK301T3, harboring eGFP downstream of the higBA1 promoter region, was electroporated into either the wild type, higB1, relA, or ppk1 strains, and fluorescence was determined at mid-log and stationary stages of growth for the various strains. We noticed that, in comparison to the wild type strain, the promoter activity was increased by ∼2.0-fold in stationary phase cultures of the relA strain of M. tuberculosis (Figure 3B, *P < 0.05). Further, a 1.6-fold increase in promoter activity was also seen in mid-log phase cultures of the relA strain as compared to the wild type strain (Figure 3B, *P < 0.05). As shown in Figure 3B, we noticed that, in comparison to the parental strain, the promoter activity was increased by 2.5-fold and 2.7-fold in the higB1 strain during mid-log and stationary phase of growth, respectively (**P < 0.01). These observations suggested that HigB1 toxin acts as a negative regulator of TAC operon expression. The P2 promoter but not the P1 promoter is reported to be regulated by the HigA1 antitoxin (Fivian-Hughes and Davis, 2010). Our data suggest that the HigA1-HigB1 complex negatively regulates the operon. As shown in Figure 3B, no differences were observed in the activity of the promoter of the higBA1 TAC operon during different stages of growth between the parental and ppk1 mutant strains. The observed increase in the promoter activity of the higBA1 TAC operon in the higB1 and relA mutant strains indicates that both the higBA1 and relA gene products regulate the expression of the higBA1 promoter. HigB1 Locus Is Dispensable for Growth of Mycobacterium tuberculosis in Different Stress Conditions In order to survive, the pathogen should be able to sense, adapt and respond to exogenous stress conditions (Fang et al., 2016; Flint et al., 2016). M. tuberculosis possesses the unique ability to adapt to different environmental conditions inside host tissues during infection. TA systems are generally considered stress-responsive elements, as they are differentially regulated under various stress conditions. Under specific stress conditions, the antitoxin protein is degraded by cellular proteases, and the free toxin can inhibit bacterial growth by targeting essential cellular processes, which further facilitates bacterial survival under these conditions. Previous studies have shown that the TAC operon of M.
tuberculosis is significantly induced in response to heat shock, nutritional starvation, hypoxia and persistence (Betts et al., 2002; Stewart et al., 2002; Rand et al., 2003; Rustad et al., 2008). In concordance, we also observed that the transcripts of higB1 were increased upon exposure to nutritional stress and levofloxacin; therefore, we next investigated the role of the HigB1 toxin in M. tuberculosis stress adaptation and drug tolerance. We compared the survival of wild type, higB1 mutant and complemented M. tuberculosis strains upon exposure to various in vitro stress conditions. Although higB1 is upregulated in nutrient-limiting growth conditions, we observed that the survival of the mutant strain was comparable to that of the parental strain in these conditions (Figure 4A). Also, in the other stress conditions tested, the survival of the higB1 strain was comparable to that observed for the parental strain (Figures 4B-E). Taken together, we conclude that the HigB1 toxin does not influence stress adaptation of M. tuberculosis in vitro. HigB1 Is Required for the Survival of Mycobacterium tuberculosis in the Presence of Levofloxacin in vitro Antibiotic persistence is the ability of a bacterial subpopulation to survive antibiotic treatment. The bacterial persister population makes TB treatment more difficult and prolonged. Several genes, including TA systems, have been implicated in persister formation, but the role of TA systems in antibiotic-mediated persistence remains highly debated. There have been studies which report that overexpression of toxins results in metabolic shutdown that helps the bacterial subpopulation to persist in the presence of antibiotics (Keren et al., 2004a,b, 2011; Tripathi et al., 2014). In M. tuberculosis, it has been shown that MazF toxins (MazF3, MazF6, and MazF9) contribute cumulatively to drug persistence in the presence of levofloxacin and rifampicin (Tiwari et al., 2015). However, deletion of either VapBC3 or VapBC4 or VapBC11 or VapC21 or VapC28 or VapC22 in the genome of M. tuberculosis did not contribute to drug persistence in vitro (Deep et al., 2018; Sharma et al., 2020). The HigBA1 and HigBA2 TA modules were also shown to be overexpressed in M. tuberculosis persisters (Keren et al., 2011). In P. aeruginosa, overexpression of the HigB toxin was shown to increase bacterial survival by 1000-fold after exposure to ciprofloxacin (Li et al., 2016). In order to determine the role of HigB1 in drug persistence, we compared the survival of various strains upon exposure to drugs with different mechanisms of action. In accordance with the qPCR results, we observed that the higB1 mutant strain was 3.0-fold more susceptible to levofloxacin after 7 days of exposure in comparison to the parental strain in vitro (Figure 4F, *P < 0.05). As shown in Figure 4F, the deletion of HigB1 did not affect the survival of M. tuberculosis upon exposure to either isoniazid or rifampicin. We also observed that both the parental and higB1 strains displayed comparable MIC99 values against isoniazid, rifampicin, levofloxacin and ethambutol (Supplementary Table 1). Taken together, these studies suggest that the mutant strain was more susceptible to killing upon exposure to levofloxacin. However, the phenotype was not completely restored in the complemented strain. HigB1 Toxin Is Essential to Establish Mycobacterium tuberculosis Infection in Guinea Pigs Based on the in vivo growth phenotype, M.
tuberculosis strains have been classified as severe growth in vivo (sgiv) or growth in vivo (giv) or persistence (per) or altered pathology mutants (Hingley-Wilson et al., 2003). TA systems have been implicated in bacterial pathogenesis. We have previously reported that MazF toxins (MazF3, MazF6, and MazF9) contribute to M. tuberculosis pathogenesis (Tiwari et al., 2015). Also, in comparison to the parental strain, deletion of either vapBC3 or vapBC4 or vapBC11 or vapC22 attenuates the growth of M. tuberculosis in guinea pigs (Deep et al., 2018). Recently, it has been reported that deletion of the higB toxin reduces the virulence of E. piscicida in fish tissues (Xie et al., 2021). We next investigated the role of HigB1 in M. tuberculosis virulence using the guinea pig model of infection. The animals were infected with either the parental or higB1 mutant or complemented M. tuberculosis strains via the aerosol route, and disease progression was determined during the acute (28 days) and chronic (56 days) stages of infection.
FIGURE 4 | HigB1 is dispensable for M. tuberculosis growth in different stress conditions or drugs in vitro. The survival of various strains was compared in different stress conditions such as nutritional (A) or oxidative (B) or nitrosative (C) or cell wall disrupting agents, SDS (D), lysozyme (E) or after exposure to drugs (F). The data shown in these panels are mean ± SE of Log10 CFU obtained from two independent experiments performed in either duplicates or triplicates. Statistically significant differences were observed for the indicated groups, *P < 0.05.
In concordance with earlier reports, we observed discrete lesions in lung tissues of wild type strain infected guinea pigs (Figures 5A,B). In comparison, significantly fewer lesions were seen in higB1 mutant strain infected guinea pigs. The bacterial counts in the lungs of wild type strain infected animals were log10 5.81 and log10 5.27 at 28 days and 56 days post-infection, respectively (Figures 5C,D). We observed that, in comparison to the wild type strain, the growth of the higB1 mutant strain was impaired in lung tissues by ∼42.0-fold and 31.0-fold during the acute and chronic stages of infection, respectively (Figures 5C,D, **P < 0.01 and ***P < 0.001). The in vivo growth defect of the higB1 mutant strain was more prominent at the chronic stage, specifically in the spleens of infected animals. In concordance with the lung data, the bacterial numbers in the spleens of parental strain and higB1 mutant strain infected guinea pigs were log10 4.6 and log10 3.22, respectively, at 4 weeks post-infection (Figure 5C, *P < 0.05). The reduction in splenic bacillary loads of higB1 infected animals increased to ∼242.0-fold at 56 days post-infection (Figure 5D, *P < 0.01). The complementation of the higB1 mutant strain only partially restored the growth defect in the spleens of guinea pigs at both time points (Figures 5C,D). Taken together, these observations suggest that the TAC locus is required to establish the chronic stage of infection in guinea pigs. Further, we performed histopathology analysis of tissue sections obtained from the lungs of guinea pigs infected with various strains of M. tuberculosis at both 4 and 8 weeks post-infection. In concordance with the CFU enumeration data, tissue damage was significantly decreased in the tissue sections from higB1 mutant infected guinea pigs. In comparison, the tissue sections from guinea pigs infected with the parental M. tuberculosis strain showed heavy tissue damage in both the acute and chronic phases of infection (Figure 6).
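The fold reductions quoted above are back-transformations of the log10 CFU values; a minimal sketch of that arithmetic, using the 4-week spleen values reported in the text, is shown below.

```python
# Fold difference between two bacterial loads expressed as log10 CFU:
# fold = 10 ** (log10_wild_type - log10_mutant)
def fold_reduction(log10_wild_type, log10_mutant):
    return 10 ** (log10_wild_type - log10_mutant)

# Spleen loads at 4 weeks post-infection from the text: log10 4.6 (wild type) vs log10 3.22 (mutant)
print(round(fold_reduction(4.6, 3.22), 1))   # ≈ 24-fold lower splenic bacillary load
```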
As shown in Figure 6, cellular infiltration of lymphocytes and macrophages was seen in the sections from animals infected with the wild type strain. In comparison, lung sections from higB1 strain infected guinea pigs displayed more alveolar space and less damage of lung parenchyma (Figure 6). At 8 weeks post-infection, necrotic areas were present within granulomas, signifying extensive tissue damage in sections from parental strain infected guinea pigs (Figure 6). In concordance with the CFU data, no necrosis was seen in sections from animals infected with the higB1 mutant strain at 56 days post-infection. We observed normal lung parenchymal space in tissue sections from guinea pigs infected with the higB1 mutant strain (Figure 6). In higB1 complemented strain infected guinea pigs, intermediate levels of tissue damage were observed. Overall, in vivo CFU enumeration and histopathological analysis demonstrate the importance of HigB1 in the establishment of successful M. tuberculosis infection in guinea pigs.
FIGURE 5 | (legend continued) The data shown in this panel are mean ± SE of Log10 CFU obtained from 6 or 7 animals per group per time point. Statistically significant differences were observed for the indicated groups, *P < 0.05, **P < 0.01, and ***P < 0.001.
Global Transcriptome Profiling of higB1 Mutant Strain of Mycobacterium tuberculosis In order to gain mechanistic insights into the attenuated phenotype of the higB1 mutant strain of M. tuberculosis, we performed microarray experiments to compare the global transcriptome profiles of the H37Rv, higB1 mutant and complemented strains. For microarray experiments, total RNA was isolated from mid-log phase cultures of the strains in either duplicates or triplicates. Using a cut-off value of twofold and a P-value < 0.05, the fold change was calculated for the genes differentially expressed in the higB1 mutant vis-à-vis the wild type H37Rv (Table 1). As expected, higB1 transcript levels were reduced by 9.67-fold in the mutant strain (Table 1). Also, the transcript levels of higA1 were reduced by 2.18-fold in the mutant strain (Table 1). The transcript levels of Rv1957 did not show any significant change in the mutant strain. The transcript levels of higB1 were increased by ∼5.32-fold in the complemented strain in comparison to the wild type strain (Supplementary Table 2). Also, there was only a marginal increase in the transcript levels of higA1 and Rv1957 in the complemented strain in comparison to the wild type strain. However, the differential expression of genes observed in the mutant strain was not fully restored in the complemented strain. Unsupervised hierarchical clustering of the samples also showed that the profiles obtained in the complemented strain clustered separately from the profiles obtained in the wild type strain (Figures 7A,C). This finding is in concordance with the guinea pig data, wherein we did not observe complete restoration of the attenuated phenotype in the complemented strain. It is important to mention here that the complemented strain harbors the integrative pMV306K:higB1 while the H37Rv wild type and higB1 knockout strains lack this site-specific integrative plasmid. To the best of our knowledge, there is no evidence of interference of this plasmid in the gene expression profiles of M. tuberculosis.
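The twofold / P < 0.05 filter described above is easy to express programmatically; the sketch below assumes normalized, linear-scale intensities with one value per replicate and uses an unpaired Student's t-test, as stated in the text. It only illustrates the filtering logic, not the GeneSpring pipeline actually used.

```python
import numpy as np
from scipy import stats

def is_differentially_expressed(mutant, wildtype, fold_cutoff=2.0, p_cutoff=0.05):
    """mutant, wildtype: normalized intensities for one gene, one value per replicate."""
    fold = np.mean(mutant) / np.mean(wildtype)
    _, p = stats.ttest_ind(mutant, wildtype)           # unpaired Student's t-test
    changed = fold >= fold_cutoff or fold <= 1.0 / fold_cutoff
    return fold, p, bool(changed and p < p_cutoff)

# Hypothetical triplicate intensities for one down-regulated gene
print(is_differentially_expressed(np.array([120.0, 135.0, 128.0]),
                                  np.array([410.0, 395.0, 430.0])))
```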
Since one of the triplicates for the mutant strain (higB1-1) and one for the complemented strain (higB1-CT-1) appeared as outliers in the hierarchical clustering, they were not included in the unsupervised hierarchical clustering analysis of the data (Figures 7B,C). We noticed that the relative transcript levels of 73 genes were differentially regulated in the higB1 mutant strain compared to the wild type strain (Table 1). Among these, the transcripts of Rv2987c, Rv2988c, Rv2989, and Rv1361c were significantly increased by ∼5.0-fold, while the transcript levels of Rv0053, Rv2624c, Rv3135, Rv0250c, and Rv2631 were increased by ∼3.0-fold in the higB1 mutant strain in comparison to the parental strain (Table 1). Further, we also observed that the transcript levels of ribosomal proteins such as Rv0055, Rv0056, Rv0651, Rv0652, Rv0700, Rv0714, Rv0715, Rv0717, and Rv3924c were increased in the higB1 mutant strain as compared to the wild type strain (Table 1). The transcriptional profile of ribosomal proteins and their associated proteins in the higB1 mutant strain bore close similarities to that induced by relE3 overexpression and exposure to protein translation inhibitors (Boshoff et al., 2004; Singh et al., 2010). This corroborates the fact that HigB1 is a translation inhibitor. In addition, the transcript levels of Rv0315 were upregulated by twofold in the higB1 mutant strain (Table 1). Rv0315 encodes an immunostimulatory M. tuberculosis antigen which activates dendritic cells and drives the Th1 cell response upon M. tuberculosis infection (Byun et al., 2012). The transcripts encoding Rv3027c (GCN5-related N-acetyltransferase), Rv3628 (inorganic pyrophosphatase) and Rv3340 (cystathionine β-lyase) were upregulated by 2.9-, 2.0-, and 2.3-fold, respectively, in the higB1 mutant strain. We also noticed that the transcripts of genes such as Rv2007c, Rv2624c, Rv2625c, Rv2631, and Rv3128c belonging to the DosR regulon were also increased in the mutant strain (Park et al., 2003; Table 1). Among these, Rv2624c has previously been reported to be a highly immunogenic antigen, and it has been shown to induce higher levels of IFN-γ and TNF-α (Bertholet et al., 2011; Chegou et al., 2012). Further, the transcript levels of 24 genes were downregulated in the mutant strain in comparison to the wild type strain (Figure 7B and Table 1). Among these, 30% and 21% of the proteins belong to the functional categories of intermediary metabolism and respiration, and virulence, detoxification and adaptation, respectively. The transcript levels of Rv0341 (iniB, isoniazid-inducible protein), Rv0914c, and Rv3139 (keto acyl-CoA thiolase) were reduced by 8.6-fold, 3.3-fold, and 3.0-fold, respectively, in the higB1 mutant strain (Table 1). We also noticed that deletion of higB1 in the genome of M. tuberculosis reduced the expression of the β-propeller gene Rv1057 by 2.88-fold (Table 1). Previous studies have shown that Rv1057 regulates ESAT-6 secretion and intracellular growth of M. tuberculosis (Fu et al., 2018). Also, the transcript levels of Rv3084, Rv3085, Rv3086, and Rv3087, which belong to the acid-responsive mymA operon (Rv3083-Rv3089), were significantly reduced by ∼2.0-fold in the mutant strain in comparison to the parental strain (Table 1; Singh et al., 2005). The expression of Rv0311, a protein shown to be essential for M. tuberculosis to establish extrapulmonary TB infection, was also downregulated by ∼2.0-fold in the mutant strain.
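The unsupervised hierarchical clustering and outlier exclusion described at the start of this section can be sketched with SciPy; the distance metric and linkage below (correlation distance, average linkage) and the toy values are illustrative assumptions, not the settings of the GeneSpring analysis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Rows = samples, columns = genes (hypothetical log-intensity values)
profiles = np.array([
    [5.1, 7.2, 3.3, 9.0],   # H37Rv replicate 1
    [5.0, 7.1, 3.5, 9.2],   # H37Rv replicate 2
    [8.9, 4.0, 6.1, 2.2],   # higB1 mutant replicate 1
    [9.1, 4.2, 6.0, 2.0],   # higB1 mutant replicate 2
    [3.0, 9.0, 8.8, 3.1],   # outlier-like replicate
])

tree = linkage(pdist(profiles, metric="correlation"), method="average")
# With three flat clusters requested, the outlier-like replicate separates from both strain groups
print(fcluster(tree, t=3, criterion="maxclust"))
```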
Further, GO-enrichment analysis was performed using the DAVID tool, and the most enriched pathways associated with these differentially expressed genes were identified, as shown in Figure 8A. These gene sets were then used to create their regulatory network, shown in Figure 8B. As evident, the key pathways affected by the higB1 deletion were translation, transcription and the oxidoreductase family. While a large number of translation-associated proteins and transcription factors were upregulated in the mutant strain, reduced expression of the genes belonging to the oxidoreductase family was observed in the higB1 mutant strain. We also observed that, in comparison to the parental strain, the expression of enzymes involved in translation and transcription pathways was not affected in the complemented strain. These observations indicate that HigB1 expression in the complemented strain restored the expression of proteins belonging to these pathways. However, the expression of enzymes belonging to the oxidative phosphorylation pathway was compromised in the complemented strain as compared to the parental strain (Figure 8C). DISCUSSION Chromosomally encoded TA systems are induced in different stress conditions. These have been implicated in helping bacteria adapt to different stress conditions by downregulating metabolism and potentiating the transition into a dormant-like stage (Unterholzner et al., 2013; Chan et al., 2016). In addition to slowing down bacterial metabolism, TA systems have also been shown to be essential for persistence and bacterial pathogenesis (Wen et al., 2014; Yang and Walsh, 2017; Alonso, 2021). The large number of TA systems in the genome of M. tuberculosis makes it difficult to perceive the involvement of individual TA systems in pathogen biology. Previous studies have demonstrated that MazF toxins contribute cumulatively, and that VapBC3, VapBC4, VapBC11, and VapC22 are essential, for M. tuberculosis pathogenesis (Tiwari et al., 2015; Agarwal et al., 2018, 2020; Deep et al., 2018). Here, in this study, we have investigated the role of the HigB1 toxin in M. tuberculosis physiology and pathogenesis. In M. tuberculosis, HigB1 cleaves tmRNA and inhibits the growth of bacteria in a bacteriostatic manner, and this is abrogated by high levels of HigA1 (Schuessler et al., 2013). The transcript levels of higB1 are also increased in M. tuberculosis after exposure to nutrient-limiting growth conditions and drugs (Betts et al., 2002; Keren et al., 2011). However, its role in bacterial adaptation to these conditions is still not understood. It is still not clear how upregulation of HigB1 to adapt to starvation or exposure to drugs is beneficial to the bacteria, or whether there is a loss of survival/competency in the absence or repression of the toxin. In concordance with previous studies, the transcript levels of higB1 were increased in nutrient-limiting growth conditions. However, no upregulation of higB1 expression was observed in the other stress conditions evaluated in the study. The transcript levels of higB1 were also upregulated in levofloxacin-treated samples but not after exposure to isoniazid. We observed differential induction of higB1 and higA1, belonging to the higBA1 TA system, upon exposure to levofloxacin and nutritional stress. This might be attributed to the differential stability of the toxin and antitoxin transcripts under these stress conditions. Similar post-transcriptional regulation of TA systems has also been reported in E. coli and M.
tuberculosis in different growth conditions (Korch et al., 2009;Singh et al., 2010;Kasari et al., 2013;Ramirez et al., 2013;Tiwari et al., 2015). Under nutrient limiting conditions, M. tuberculosis activates the highly conserved stringent response through guanosine pentaphosphate (p)ppGpp (Primm et al., 2000;Weiss and Stallings, 2013). Previous studies have shown that (p)ppGpp-mediated stringent response in bacteria, in combination with TA activity, can act as a regulated switch to a persistent phenotype (Tian et al., 2017). In M. tuberculosis, RelA and PPK-1 are the main enzymes involved in (p)ppGpp and inorganic polyphosphate biosynthesis (Primm et al., 2000;Singh et al., 2013). Since, we observed the increased expression of higBA1 locus under nutrient limiting conditions, we determined the promoter activity of TAC locus in parental, higB1, relA and ppk1 mutant strain. The increased promoter activity in relA strain suggests that higBA1 locus promoter is negatively regulated by (p)ppGpp/relA gene product levels in the cells. Also, we speculate that higBA1 locus promoter is negatively regulated by HigB1 either alone or in complex with HigA1 antitoxin as reported previously for other bacterial TA systems (Dienemann et al., 2011;Kang et al., 2017;Nikolic, 2019). This data supports autoregulation and cross regulation between stringent response and TAC system in M. tuberculosis. To further elucidate the role of HigB1 toxin toward in vitro and in vivo fitness of M. tuberculosis, we constructed higB1 mutant strain using temperature sensitive mycobacteriophages. The construction of higB1 strain was confirmed by Next generation sequencing. The colony morphology and ability to form biofilms was comparable between the parental and higB1 mutant strain. Despite being upregulated in nutrient limiting growth conditions, we observed that the survival of both wild type and higB1 mutant strain was comparable in nutrient limiting and other stress conditions. In concordance with qPCR results, in comparison to the parental strain, we observed that higB1 strain was compromised for growth upon exposure to levofloxacin. Further, we observed that despite being non-essential in vitro, higB1 is required for the pathogenesis of M. tuberculosis in guinea pigs. Histopathological analysis revealed necrotic granulomatous tissue in lung sections from animals infected with the parental strain. In comparison, normal parenchyma space was seen in sections from animals infected with the higB1 mutant strain. The observed attenuation phenotype associated with the mutant strain was more prominent in spleen specifically at chronic stage of infection. The histopathology sections obtain from higB1 mutant strain infected guinea pigs appeared more similar to sections from uninfected animals as reported earlier (Singh R. et al., 2003;Patel et al., 2011;Cai et al., 2019). The phenotype associated with higB1 mutant strain was similar to that observed for M. tuberculosis strains deficient in either CarD or PerM or MymA or Icl1 or PcaA. These strains were also attenuated for growth in spleens and the phenotype was more drastic in chronic stage of infection (Glickman et al., 2000;McKinney et al., 2000;Singh et al., 2005;Weiss et al., 2012;Goodsmith et al., 2015). The success of M. tuberculosis as an intracellular pathogen lies in its ability to persist in later stages of infection despite the induction of host adaptive immune response. 
These observations suggest that HigB1 is important for disease progression and dissemination in guinea pig model of infection. To gain further mechanistic insights into the attenuation of higB1 mutant strain in guinea pigs, we compared the global transcriptome profile of H37Rv, higB1 mutant and complemented strains. We observed that the transcripts of ribosomal proteins of both smaller and larger subunits of ribosome were upregulated in the higB1 mutant strain. Studies have shown that these ribosomal proteins are able to elicit a strong CD4 + immune response that might be associated with the faster clearance of the mutant strain in host tissues (Johnson et al., 2017;Kennedy et al., 2018). In addition to this, expression level of higA1 (Rv1956), an adjacent gene of higB1 toxin was also reduced in the higB1 deletion strain. However, transcripts levels of SecB-chaperone protein Rv1957 was not significantly changed in the mutant strain. Microarray studies revealed that the expression of higB1 was restored in the complemented strain by fivefold in comparison to the wild type strain, whereas only marginal change was observed in the levels of higA1 and Rv1957. Despite the restoration of higB1 levels in complemented strain, expression of other genes such as Rv0311, Rv0315, Rv0341, ribosomal proteins (Rv0651, Rv0652, Rv0700, Rv0714), Rv0914c, Rv1057, and mymA operon (Rv3083-Rv3089) was not restored in the complemented strain. In both cases ( higB1 knock out and complemented strains) higB1 levels were either significantly depleted (9.67fold) or significantly increased (5.18-fold) in comparison to the parental strain. Previous studies have shown that toxin and antitoxin stoichiometry is important for their autoregulation and activation (Vandervelde et al., 2017;Fraikin et al., 2020). We speculate that changes in intracellular toxin antitoxin ratios in both mutant and complemented strain might be responsible for the observed attenuated phenotype and partial restoration of disease pathology in guinea pigs infected with the higB1 complemented strain. Taken together, we have performed experiments to elucidate the role of HigB1 toxin in M. tuberculosis physiology and pathogenesis. We show that HigB1 of M. tuberculosis is important to establish infection in guinea pigs. Microarray analysis revealed that deletion of higB1 leads to increase in the transcripts of ribosomal proteins and reduction in expression of genes involved in virulence, detoxification and adaptation. This might be responsible for the observed attenuated phenotype of higB1 mutant strain. Lack of complementation of the mutant strain could be attributed to altered intracellular ratios of toxin, antitoxin and observed differences in the transcription profiles of wild type and complemented strains. In conclusion, HigB1 is vital for M. tuberculosis pathogenesis. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material. ETHICS STATEMENT The animal study was reviewed and approved by University of Delhi South Campus. AUTHOR CONTRIBUTIONS RS and AG conceived the study and designed the work plan. AS, KS, BV, NG, NC, and TG performed the cloning and microbiology assays. AS, NC, and TG performed the animal experiments. BV isolated the genomic DNA. AG performed the NGS. KS, AG, and AS performed the microarray studies and analysis. NB carried out the analysis of the NGS data. 
RS, AS, KS, and AG analyzed the data, interpreted them, and wrote the manuscript. All authors contributed to the article and approved the submitted version. FUNDING The authors acknowledge the funding received from Department of Biotechnology, India (Grant ID; BT/PR5510/MED/29/513/ 2012). RS acknowledge the funding received from DBT-Wellcome India Alliance as a Senior Fellow (IA/S/19/2/504646). The authors acknowledge the funding received from Translational Health Science and Technology Institute under Translational Research Program and from UGC-SAP scheme to Department of Biochemistry, UDSC. pSC301GFP vector, that has been modified and used in this study, was kindly provided to AG by Dr. Yossef Av-Gay (University of British Columbia, Vancouver, BC, Canada). AS acknowledges research fellowship received from Indian Council of Medical Research. NG was recipient of research fellowship from Council of Scientific and Industrial Research. RS is a recipient of Ramalingaswami fellowship and National Bioscience Award from Department of Biotechnology. ACKNOWLEDGMENTS TG is thankful to Department of Biotechnology for her fellowship. NC is thankful to Department of Science and Technology for funding under NPDF-scheme. The authors are thankful to staff members of Infection disease research facility and University of Delhi South Campus BSL-3 facility for technical help. The authors are also thankful to DBT-Supported Genomics Facility at South campus for Next Generation Sequencing and Gene Expression Microarray studies. The authors acknowledge help Mr. Madavan Vasudevan in analysis of Microarray data and lab attendants Mr. Rajesh and Mr. Sher Singh for technical help.
Genome-Wide Analysis of Q-Type C2H2 ZFP Genes in Response to Biotic and Abiotic Stresses in Sugar Beet Simple Summary A plant’s C2H2-type zinc finger proteins (C2H2-ZFPs) play crucial roles in the process of plant growth and development, as well as various stress responses. The Q-type ZFP family, which contains a conserved “QALGGH”, has been reported in many plants. Sugar beet is an important crop for sugar production. Salt stress and viral infection significantly reduce both sugar yield and processing quality of sugar beet. So far, the genome-wide analysis of Q-type C2H2 ZPFs and their expression pattern in sugar beet have not been analyzed yet. This study analyzed 35 Q-type ZFPs in sugar beet and their expression patterns under salt stress and virus. These results will provide theoretical evidence for understanding the functions of Q-type ZFPs. Abstract A plant’s Q-type C2H2-type ZFP plays key roles in plant growth and development and responses to biotic and abiotic stresses. Sugar beet (Beta vulgaris L.) is an important crop for sugar production. Salt stress and viral infection significantly reduce the root yield and sugar content of sugar beet. However, there is a lack of comprehensive genome-wide analyses of Q-type C2H2 ZFPs and their expression patterns in sugar beet under stress. In this study, 35 sugar beet Q-type C2H2 ZFPs (BvZFPs) containing at least one conserved “QALGGH” motif were identified via bioinformatics techniques using TBtools software. According to their evolutionary relationship, the BvZFPs were classified into five subclasses. Within each subclass, the physicochemical properties and motif compositions showed strong similarities. A Ka/Ks analysis indicated that the BvZFPs were conserved during evolution. Promoter cis-element analysis revealed that most BvZFPs are associated with elements related to phytohormone, biotic or abiotic stress, and plant development. The expression data showed that the BvZFPs in sugar beet are predominantly expressed in the root. In addition, BvZFPs are involved in the response to abiotic and biotic stresses, including salt stress and viral infection. Overall, these results will extend our understanding of the Q-type C2H2 gene family and provide valuable information for the biological breeding of sugar beet against abiotic and biotic stresses in the future. Introduction Transcription factors (TFs) function as crucial molecular regulators for gene expression in all organisms [1].In plants, a number of TF families have been identified, including bZIP [2], WRKY [3], NAC [4], MYB [5], and zinc finger proteins (ZFPs) [6], which play important roles in many biological processes, such as growth, development, reproduction, and stress responses.Among these, ZFPs represent one of the largest TF families in plants [6].ZFPs contain a variable number of zinc finger domains, each of which consists of cysteine (Cys) and histidine (His) residues combined with a zinc ion to form a threedimensional finger-type structure.According to the number and arrangement of Cys and His residues, ZFPs have been classified into C2H2, C2HC, C2HC5, CCCH, C3HC4, C4, C4HC3, C6, and C8 [6]. 
C2H2-type ZFPs, also known as TFIIIA-type zinc fingers, are among the most extensively studied and abundant ZFPs in eukaryotes [7].Recently, the in silico genome-wide identification and functional characterization of plant C2H2-type ZFPs were well analyzed in many species, including 321 members in Glycine max [8], 301 in Brassica rapa [9], 218 in Medicago truncatula [10], 204 in Triticum aestivum [11], 189 in Oryza sativa [12], 176 in Arabidopsis thaliana [13], 150 in Zea mays [14], 145 in Sorghum bicolor [15], 129 in Cucumis sativus [16], 109 in Populus trichocarpa [17], and 104 in Solanum lycopersicum [18].These findings indicate that C2H2 ZFPs are ubiquitous in the plant kingdom, playing important regulatory roles in various biologic processes, including development and organogenesis, as well as responses to stresses and defense.The C2H2-type ZFPs in A. thaliana are categorized into three sets (A, B, and C), with each set further divided into several different subsets, such as C1, C2, and C3 [19].To date, the plant C1 subset has been the most extensively investigated, and its members have been further classified into five subclasses (C1-1i to C1-5i) based on the number of zinc finger domains [20].Moreover, the majority of plant C2H2 ZFPs contain a highly conserved QALGGH sequence within their zinc finger domain (CX 2-4 CX 3 FX 3 QALGGHX 3-5 H), identifying them as the plant-specific Q-type subfamily of C2H2 ZFP. The first Q-type C2H2 ZFP was discovered in petunia, and a total of 21 Q-type ZFPs were identified based on their specific structures [21,22].The conserved QALGGH motif was found to be essential for DNA binding activity [23].In addition to the conserved zinc finger domain, the N-terminal of some plant C2H2 ZFPs contains a B-box and an L-box motif, while the C-terminal has an ethylene-responsive element binding-factor-associated amphiphilic repression (EAR) motif.The B-box region acts as a nuclear localization signal (NLS).The L-box motif, usually consisting of a core sequence of EXEXXAXCLXXL, is thought to relate to protein-protein interactions.The EAR motif, also known as the DLNbox, has been identified as playing a role in transcriptional repression [24]. The plant Q-type C2H2 ZFP is involved in plant development and various abiotic stress responses, such as drought, salt, osmotic, low temperature, and oxidative stresses [25].For example, the expression of AZF1, AZF2, AZF3, STZ/ZAT10, ZAT11, and ZAT18 is strongly induced by drought, salt, cold stresses, or abscisic acid treatment [26], while the expression levels of ZAT7 and ZAT12 are upregulated by oxidative stress, heat shock, or wounding in A. thaliana [27].ZAT7 has been shown to be involved in plant growth suppression and increased tolerance to salinity stress depending on its EAR motif [28].Overexpressed Zat7 was more tolerant to salinity stress than seedlings of wild-type plants in Arabidopsis [28].Transgenic plants overexpressing Zat12 could tolerate oxidative stress in Arabidopsis [27].More interestingly, ZAT18 was initially identified as a positive regulator of drought stress tolerance in Arabidopsis [29].A recent study indicated that Pseudomonas syringae induces ZAT18 expression to repress the transcription of EDS1 for bacterial infection [30].ZAT18 overexpressing plants were more susceptible to Pst DC3000 compared to Col-0, while ZAT18-KO plants displayed enhanced resistance in Arabidopsis [30].In T. 
aestivum, 47 Q-type C2H2-ZFPs were identified, and the expression of the majority of TaZFP genes was responsive to drought stress in either leaf or root [31].In O. sativa, OsZFP179 was characterized as a salt-responsive gene, and it was found to enhance salt tolerance in transgenic rice plants [32].Until now, more Q-type C2H2 ZFPs have been identified in A. thaliana, T. aestivum, Brassica oleracea, S. lycopersicum, and Medicago.sativa via genomewide identification [8,20,31,33,34]. Sugar beet (Beta vulgaris), a member of the Amaranthaceae family [35], is an important crop in temperate climates zone, accounting for 20-30% of the world's sugar production.Sugar beet also provides essential raw materials for bioethanol, animal feed, pulp, pectin extract, and functional-food-related industries [36].Sugar beet is frequently subjected to various biotic and abiotic challenges that reduce both sugar yield and processing quality [37].Salinity is a major abiotic stress that limits plant growth and development [38]. Although sugar beet is a salt-tolerant crop, prolonged exposure to salt stress can result in a significant yield loss in beet production [39].In addition, rhizomania caused by beet necrotic yellow vein virus (BNYVV) stands as one of the most severe biotic threats to sustainable beet production globally [40].Susceptible varieties infected by BNYVV exhibit the pronounced lateral rootlet proliferation of taproot and yellow veins on systemically infected leaves [41].This leads to significant losses in root yield as well as a decline in sugar content.Given these challenges, the identification and analysis of sugar beet genes involved in abiotic and biotic stress is crucial, as it offers genetic resources for molecular breeding.Recently, the genome of industrial diploid sugar beet (2n = 18 chromosomes) was sequenced, accelerating sugar beet breeding for tolerances and resistances against abiotic or biotic stresses [35].So far, the structure and function of the BvbZIP and BvWRKY family, as well as their expression pattern under salt stress, have been genome-wide analyzed for sugar beet [42,43].However, information about Q-type C2H2-ZFPs in sugar beet remains unknown.Since Q-type ZFP genes play vital roles in plant development and stress responses, it is essential to identify and analyze the Q-type ZFP gene family in the sugar beet genome. In this study, we identified 35 Q-type ZFP genes (BvZFPs) in sugar beet comprising different numbers of zinc finger domains.The phylogenetic relationships, genomic location, gene structure, chromosome distribution, gene duplication, and cis-regulatory elements are also explored.In addition, we determined their mRNA expression profiles in both leaf and root tissues under salt stress and BNYVV infection.This study aims to provide a comprehensive understanding of the Q-type C2H2 gene family and shed light on the roles of the BvZFP family in sugar beet, especially the functional characterization of plant development processes and stress responses.This research study can help improve plant quality, stress resistance, and enhanced crop production via genetic modifications in Q-type C2H2 gene family members in sugar beet. 
Chromosomal Distribution, Protein Characterization, and Amino Acid Properties The starting and ending positions of all BvZFP genes on each chromosome were retrieved from the sugar beet gene annotation database, and the results were visualized using the Gene Location Visualize program of Tbtools.Out of 35 genes, 34 BvZFP genes were located on the nine chromosomes, while the remaining 1 may be located on unannotated intergenic regions of the genome.The members of sugar beet BvZFP were renamed according to the location on the chromosome.Protein properties, including the length of the amino acid (aa), molecular masses (MW), theoretical isoelectric point (pI), instability index, and subcellular localization, were predicted using ExPASy Server (https://web.expasy.org/protparam/(accessed on 27 March 2023)). Multiple Sequence Alignment and Phylogenetic Tree Construction The full-length sequences of Arabidopsis C2H2 C1-2i zinc finger proteins (Q-type ZFPs) were downloaded from the TAIR database (http://www.arabidopsis.org/(accessed on 16 March 2023)).The multiple sequence alignment of Q-type zinc finger protein members in sugar beet and Arabidopsis was performed using MEGA software with the default parameters.Full-length amino acid sequences were aligned using ClustalX, and the phylogenetic tree was constructed using the neighbor-joining method with the following parameters: bootstrap method with 1000 replicates and partial deletion. Analysis of Members of the C2H2 ZFP Gene Family The full-length protein sequences of each subfamily were analyzed for conserved motifs using Multiple Expectation Maximization for Motif Elicitation (MEME, https:// meme-suite.org/meme/(accessed on 22 March 2023)) [46].We used the classic discovery mode and adjusted the parameters as follows: distribution of motifs = any number of repetitions, the maximum number of motifs = 10, and the optimum motif width range from 6 to 50 (inclusive).The gene structure and conserved motifs of C2H2 genes were visualized using Gene Structure View tools in Tbtools software. Promoter Analysis of the BvZFP Genes in Sugar Beet The 2000 bp promoter sequences upstream of the 35 BvZFP start codons were extracted from the sugar beet EL10_1.0 genome.The predicted cis-acting elements on promoters with their positional information were identified using the PlantCARE online tool (http: //bioinformatics.psb.ugent.be/webtools/plantcare/html/(accessed on 23 March 2023)).The obtained results were used to predict putative stress-and hormone-responsive cisacting elements and visualized by Simple BioSequence Viewer in TBtools. Collinearity Analysis of Arabidopsis and Sugar Beet BvZFP Genes Arabidopsis was selected for collinearity analysis with sugar beet.The genome sequences and annotation files of Arabidopsis were downloaded from NCBI databases, and the chromosome length and location information for the ZFP genes on the genome of the two species were extracted.Using the One Step MCScanX tool of TBtools, we investigated gene replication events and collinearity relationships for gene pairs between two species.All data were visualized via the use of the Advanced Circos program of TBtools software.Ka and Ks substitution between gene pairs were also calculated using the Simple Ka/Ks Calculator tool. 
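The protein-level properties described above were computed with the ExPASy ProtParam server; for readers who prefer a scripted route, the sketch below shows how equivalent values can be obtained with Biopython's ProtParam module on a made-up fragment (not an actual BvZFP sequence).

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Hypothetical protein fragment; replace with a real BvZFP sequence from the genome annotation.
seq = "MDQNQALGGHARSHESERLAQALGGHMRKHKEE"
pa = ProteinAnalysis(seq)
print("length:", len(seq))
print("MW (Da):", round(pa.molecular_weight(), 1))
print("pI:", round(pa.isoelectric_point(), 2))
print("instability index:", round(pa.instability_index(), 2))
print("GRAVY:", round(pa.gravy(), 3))
```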
BvZFP Genes Expression Network Analysis The transcriptome data of sugar beet under salt stress and BNYVV were downloaded from the NCBI SRA database (accession number: PRJNA666117, https://www.ncbi.nlm.nih.gov/bioproject/PRJNA666117/ (accessed on 27 March 2023)), which could be used to analyze the expression pattern of BvZFP genes under 300 mM salt stress. In addition, the database of E-MTAB-8187 from NCBI was used to analyze the expression pattern of BvZFP genes under BNYVV infection. GraphPad Prism was used to map gene expression. To compare BvZFP gene expression between the two groups, the expression level of each transcript was calculated based on the number of fragments per kilobase of exons per million mapped reads (FPKM). RSEM (http://deweylab.biostat.wisc.edu/rsem/ (accessed on 1 April 2023)) was used to determine gene abundance. The R statistical package edgeR (empirical analysis of digital gene expression in R, http://www.bioconductor.org/packages/2.12/bioc/html/edgeR.html (accessed on 1 April 2023)) was used for differential expression analysis. Identification and Chromosomal Localization of the C2H2 Q-Type ZFP Subclass in Sugar Beet To identify C2H2 ZFPs in the B. vulgaris genome, we utilized hidden Markov model (HMM) files (PF13912, PF12756, and PF00096) to conduct a genome-wide HMM search. This search yielded 104 putative non-redundant sugar beet C2H2 ZFP proteins (BvZFPs). This number was greater than that present in PlantTFDB, where 64 BvZFPs have been deposited for sugar beet (http://planttfdb.gao-lab.org/family.php?sp=Bvu&fam=C2H2 (accessed on 17 March 2023)). Subsequently, 35 Q-type ZFPs, characterized by the specific CX2CX3FX3QALGGHX3-5HX domain, were manually selected (Supplementary File S1: Tables S1 and S2). According to the sugar beet genome database [47], we generated a map that detailed the physical positions of the Q-type BvZFPs (Figure 1). These Q-type BvZFPs were then renamed from BvZFP1 to BvZFP35 based on their physical positions on sugar beet chromosomes (Chr). All BvZFP genes were distributed widely and unevenly on the nine Chrs, except for BvZFP35, which was located on unmapped scaffolds (Figure 1). Chr6 contained the largest number of BvZFP members, with 8 BvZFPs, followed by Chr2, Chr3, Chr9, Chr1, Chr5, and Chr7, each of which contained 3 to 5 BvZFPs. Chr4 and Chr8 contained relatively fewer BvZFP members, with only two and one gene, respectively.
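The manual selection of Q-type candidates by the CX2CX3FX3QALGGHX3-5HX signature can be approximated with a simple pattern scan; the sketch below is only an illustration of that idea, not the exact screening procedure used by the authors, and the example sequence is hypothetical.

```python
import re

# Regular expression approximating the Q-type zinc finger signature CX2CX3FX3QALGGHX3-5H
# described above (X = any residue). Real screening would start from the HMM search hits.
QTYPE_PATTERN = re.compile(r"C.{2}C.{3}F.{3}QALGGH.{3,5}H")

def count_qtype_fingers(protein_seq):
    """Return the number of non-overlapping Q-type zinc finger matches in a protein sequence."""
    return len(QTYPE_PATTERN.findall(protein_seq))

# Hypothetical example sequence containing one matching finger
example = "MSSNKRCAVCSRAFSSGQALGGHMRAHGGEKPY"
print(count_qtype_fingers(example))   # 1
```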
Characterization of the Sugar Beet C2H2 Q-Type ZFP Subclass Based on the number of zinc finger domains and the spacing between the two His residues, the 35 BvZFP genes were divided into four groups, including 22 members in the C1-1i group (one zinc finger domain), 6 members in the C1-2i group, 6 members in the C1-3i group, and 1 member in the C1-4i group (Supplementary File S1: Table S3). Interestingly, several zinc finger domains with certain modifications of the "QALGGH" motif were observed for all 3i and 4i members (Supplementary File S1: Table S3). According to a previous study [16], the modified zinc finger domains were classified as the M-type. Furthermore, the protein properties of these genes were predicted using the ExPASy Server, and the results are shown in Table 1 and Supplementary File S1: Table S4. The protein lengths ranged from 162 to 323, 186 to 456, and 237 to 572 amino acids for the 1i, 2i, and 3i groups, respectively. The molecular weights of the 1i group's proteins ranged from 18,233 to 35,834 Da, with an average weight of 25,264 Da. For the 2i group, the molecular protein weights ranged from 20,667 to 50,063 Da, with an average weight of 32,905 Da. The average molecular weight of 3i group proteins is 50,602 Da, with individual weights
ranging from 25,331 to 62,896 Da.For all BvZFPs, the theoretical isoelectric point (pI) fell between 5.69 and 9.22, and the instability index varied from 37.2 to 75.14.The GRAVY values, ranging from −1.27 to −0.358, revealed that sugar beet Q-type BvZFPs were hydrophilic proteins.In addition, subcellular localization prediction suggested that all BvZFPs are located in the nucleus. Phylogenetic Analysis of Q-Type BvZFP Genes To explore the evolutionary relationship between C2H2 Q-type genes in sugar beet and A. thaliana (AtZFP), we constructed a phylogenetic tree using MEGA10 based on the alignment of 93 Q-type ZFPs amino acid sequences at the whole protein level, which included 35 from sugar beet and 58 from Arabidopsis.The resulting tree classified the members into six major clades, including C1-1Q-A, C1-1Q-B, C1-2Q, C1-QM-A, C1-QM-B, and C1-QM-C (Figure 2).Remarkably, ZFP proteins that possessed the same types and numbers of zinc finger domains were clustered into the same clade.In the 1i group, 15 BvZFPs together with 16 AtZFPs were grouped in the C1-1Q-A clade, while 7 BvZFPs with 11 AtZFPs were grouped into C1-1Q-B clades.Both C1-1QA and C1-1QB clades contained one conserved "QALGGH" motif in their protein sequences.For the members of the 2i group, five BvZFPs along with 11 AtZFPs belonged to the C1-2Q clade.Following the same nomenclature as the C1-1Q clade, the members in the C1-2Q clade contained two conserved "QALGGH" motifs.The rest of the BvZFP members, which presented a variety of combinations with different numbers of Q-and M-type zinc finger domains, were defined as the C1-QM clades.For 3i and 4i group members, four BvZFPs were clustered into C1-QM-A clade together with nine AtZFPs, while four BvZFPs and seven AtZFPs were grouped into the clade C1-QM-B (Figure 2).Moreover, an unrooted phylogenetic tree comprising 35 BvZFPs was constructed.Based on sequence similarity and tree topology, these BvZFPs were further classified into three clades, C1-1Q, C1-2Q, and C1-QM, and five subclades, including C1-1Q-A, C1-1Q-B, C1-2Q, C1-QM-A, and C1-QM-B (Figure 3A), according to the classification in Figure 2. 
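The trees described above were built in MEGA from ClustalX alignments using the neighbor-joining method with 1000 bootstrap replicates; the sketch below only illustrates the neighbor-joining step with Biopython on three toy, pre-aligned fragments, without bootstrapping.

```python
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio import Phylo

# Toy pre-aligned fragments standing in for the real BvZFP/AtZFP alignment
aln = MultipleSeqAlignment([
    SeqRecord(Seq("CAVCSKAFSSGQALGGHMRAH"), id="BvZFP_a"),
    SeqRecord(Seq("CAVCSRAFSSGQALGGHMRSH"), id="BvZFP_b"),
    SeqRecord(Seq("CGVCNKAFPSGQALGGHKRCH"), id="AtZFP_c"),
])

dm = DistanceCalculator("identity").get_distance(aln)   # pairwise identity distances
tree = DistanceTreeConstructor().nj(dm)                 # neighbor-joining tree
Phylo.draw_ascii(tree)
```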
Gene Structure and Conservative Motif Analysis of C2H2 Q-Type BvZFPs We analyzed the number of exon and intron structures of all 35 Q-type BvZFP genes. As shown in Figure 3, 30 BvZFPs were intronless, 4 BvZFPs contained a single intron, and only 1 BvZFP had four introns. All members with introns except for BvZFP17 belonged to the 1i group. To better understand the characteristic regions of BvZFP proteins, we used the MEME online website to identify predicted conserved motifs. A total of 10 conserved motifs were identified, with their details presented in Supplementary File S1: Table S5. Motifs 1 and 2 were recognized as Q-type zinc finger domains (Figure 3D,F), while motif 4 represented the M-type zinc finger domains (Supplement Figure S1A). Motif 1 was widely distributed in all 35 BvZFP proteins (Figure 3C), and motif 2 was found in members of the C1-2Q, C1-QM-A, and C1-QM-B clades (Figure 3D). Motif 4 was specific to the C1-QM clades. Motif 3, known as the EAR motif (Figure 3E), had conserved amino acid signatures "LxLxL" or "DLNx(1-2)P". Among these BvZFPs, 18 BvZFPs contained the "LxLxL" type of EAR motif, while 14 BvZFPs contained the "DLNx(1-2)P" type of EAR motif. In addition, BvZFP EAR motifs were predominantly located at the C-terminus region, except for BvZFP21 and BvZFP14. Motif 5, termed the L-BOX motif, was characterized by a core sequence of EXEXXAXCLXXL (Figure 3G). All 14 members in the C1-2Q and C1-QM clades contained Motif 5. Notably, some motifs were clade-specific. For example, motifs 10 and 6 mainly existed in C1-2Q. Motif 9 appeared in members of C1-2Q, C1-QM, and C1-1Q-B. Motif 8 was unique to C1-1Q-A (Figure 3). In summary, it was found that the BvZFPs clustered in the same subclades shared a similar motif composition, suggesting functional similarities among these evolutionarily conserved BvZFPs within the same clades.
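The EAR ("LxLxL", "DLNx(1-2)P") and L-box (EXEXXAXCLXXL) signatures described above translate naturally into pattern searches; the sketch below is a simplified regular-expression illustration on a hypothetical fragment, not the MEME models used in the analysis.

```python
import re

# Illustrative patterns for the motifs described above (X = any residue).
EAR_LXLXL = re.compile(r"L.L.L")
EAR_DLN   = re.compile(r"DLN.{1,2}P")
LBOX      = re.compile(r"E.E..A.CL..L")

def motif_summary(protein_seq):
    return {
        "EAR_LxLxL": bool(EAR_LXLXL.search(protein_seq)),
        "EAR_DLNxP": bool(EAR_DLN.search(protein_seq)),
        "L_box": bool(LBOX.search(protein_seq)),
    }

# Hypothetical C-terminal fragment carrying an LxLxL-type EAR motif
print(motif_summary("KSSSLDLELRLGYAS"))
```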
Genomic Collinearity Analysis of C2H2 Q-Type BvZFPs between Sugar Beet and Arabidopsis
To investigate the potential evolution processes of C2H2 Q-type BvZFP genes, we analyzed the synteny relationship of 34 BvZFPs using TBtools. Intraspecific collinearity analysis showed that three pairs of BvZFPs exhibited collinearity within sugar beet. The ratio of the nonsynonymous substitution rate to the synonymous substitution rate (Ka/Ks) is widely used for evaluating the selective pressure of duplication events [34]. Generally, Ka/Ks > 1 indicates positive selection, Ka/Ks = 1 indicates neutral selection, and Ka/Ks < 1 indicates purifying selection [48]. The Ka/Ks value of BvZAT7 and BvZAT32 is 0.1419 (Figure 4, Supplementary File S1: Table S10-1). Moreover, the phylogenetic tree analysis showed that these two genes were on the same branch. The results indicated that these synteny genes have undergone purifying selection in their evolutionary history. To further explore the evolutionary relationship of C2H2 Q-type BvZFPs, interspecific synteny comparisons between B. vulgaris and A. thaliana were also made. In our study, we identified 24 pairs of ZFP genes in sugar beet and Arabidopsis (Figure 5). The Ka/Ks ratios of these gene pairs ranged from 0.1 to 0.4, with an average of 0.25 (Supplementary File S1: Table S10-2), suggesting that the C2H2 Q-type ZFP genes remained relatively conserved in different species.
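As a small illustration of the Ka/Ks interpretation used above, the sketch below classifies gene pairs by their substitution-rate ratio. Only the BvZAT7-BvZAT32 ratio of 0.1419 comes from the text; the individual Ka and Ks values and the second pair are hypothetical placeholders, and the thresholds simply restate the conventional rule of thumb.

```python
def classify_selection(ka: float, ks: float, tol: float = 1e-6) -> str:
    """Classify selective pressure from Ka/Ks: >1 positive, ~1 neutral, <1 purifying."""
    if ks == 0:
        return "undefined (Ks = 0)"
    ratio = ka / ks
    if abs(ratio - 1.0) < tol:
        return "neutral selection"
    return "positive selection" if ratio > 1.0 else "purifying selection"

# The BvZAT7-BvZAT32 Ka/Ks of 0.1419 is reported in the text; the Ka and Ks values
# below are hypothetical numbers chosen only so that their ratio matches it.
pairs = {
    ("BvZAT7", "BvZAT32"): (0.01419, 0.1000),   # ratio = 0.1419
    ("geneA", "geneB"): (0.3000, 0.2500),        # hypothetical pair, ratio > 1
}

for (g1, g2), (ka, ks) in pairs.items():
    print(f"{g1}-{g2}: Ka/Ks = {ka / ks:.4f} -> {classify_selection(ka, ks)}")
```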
Promoter Analysis of the Q-Type BvZFP Genes in Sugar Beet
Cis-acting regulatory elements play essential roles in modulating the plant response to biotic and abiotic stresses. Therefore, the 2000 bp promoter regions located upstream of the BvZFP genes were extracted from the sugar beet EL10_1.0 genome database [44]. These sequences were then analyzed for cis-acting elements using the PlantCARE website. The results showed that 385 cis-regulatory elements belonging to 26 categories were obtained (Supplementary File S1: Table S6). The basal promoter elements, such as TATA-box and CAAT-box, and unannotated elements were excluded from this count. Most cis-elements were related to hormonal response and stress signal responsiveness (Supplementary File S1: Table S7). Among them, 197 elements were found to be related to hormonal responses, including 63 abscisic-acid-responsive elements (ABRE), 62 MeJA-responsive elements (TGACG-motif), 30 salicylic-acid-responsive elements (TCA-element), 29 gibberellin-responsive elements (TATC-box), and 13 auxin-responsive elements (TGA-element). Meanwhile, 46 elements associated with plant development, 47 elements that respond to abiotic stress, and 22 elements related to biotic stress were identified. In addition, the abiotic-stress-responsive elements were related to drought and low-temperature responses. As shown in Figure 6, 22 BvZFPs contain more than 10 functional elements from different categories. We also found that BvZFP8, BvZFP34, BvZFP24, and BvZFP17 had a higher number of cis-regulatory elements, while BvZAT2 and BvZAT26 contained fewer cis-elements (Figure 6). The presence of these cis-regulatory elements in the BvZFP promoter regions suggested their involvement in the regulation of various plant pathways.
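The 2000 bp promoter extraction described above can be reproduced in outline as follows. This is only a sketch under assumed inputs: a genome FASTA and a dictionary of gene coordinates stand in for the EL10_1.0 database, the file name and gene entry are hypothetical placeholders, and strand handling is shown in its simplest form.

```python
from Bio import SeqIO  # Biopython

PROMOTER_LEN = 2000

# Hypothetical gene coordinates: gene_id -> (chromosome, start, end, strand),
# using 1-based coordinates as in a typical GFF file.
genes = {
    "BvZFP_example": ("Chr1_example", 150000, 152400, "+"),
}

# Load the genome; "genome_example.fasta" is a placeholder file name.
genome = {rec.id: rec.seq for rec in SeqIO.parse("genome_example.fasta", "fasta")}

promoters = {}
for gene_id, (chrom, start, end, strand) in genes.items():
    chrom_seq = genome[chrom]
    if strand == "+":
        # Region immediately upstream of the annotated gene start (5' side).
        region = chrom_seq[max(0, start - 1 - PROMOTER_LEN): start - 1]
    else:
        # For minus-strand genes the upstream region lies past the gene end,
        # reverse-complemented so it reads 5'->3' relative to the gene.
        region = chrom_seq[end: end + PROMOTER_LEN].reverse_complement()
    promoters[gene_id] = str(region)
    print(gene_id, len(region), "bp extracted")
```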
Expression Profiles Analysis of Q-Type BvZFPs in Different Tissues
To investigate the expression patterns of Q-type BvZFP genes in different tissues, we analyzed sugar beet transcriptome data for leaf and root tissues from three-pair-euphylla-stage sugar beet, which were taken from the SRA database [49]. A heatmap was constructed to visualize the expression patterns via TBtools software (Figure 7, Supplementary File S1: Table S8). In the control group, some BvZFPs displayed a tissue-specific expression pattern. Specifically, BvZFP3, BvZFP8, BvZFP9, BvZFP16, BvZFP18, BvZFP19, BvZFP32, and BvZFP35 were only expressed in roots. Meanwhile, BvZFP2, BvZFP6, BvZFP7, BvZFP14, BvZFP17, BvZFP20, BvZFP27, BvZFP28, BvZFP30, and BvZFP34 were expressed in both leaf and root tissues, while 17 BvZFPs did not show expression in either tissue. Interestingly, the expression levels of all BvZFPs in root tissues were generally higher than those in leaf tissue, except for BvZFP14.
Responses of Q-Type BvZFP Genes under Salt Treatment and Viral Infection
To further provide insight into the response of BvZFPs to abiotic and biotic stress, transcriptome data from the SRA database [49] were used to investigate the expression patterns of BvZFPs under salt stress or viral infection (Supplementary File S1: Tables S8 and S9). Under a 300 mM NaCl treatment, the expression of most BvZFPs was changed. At 72 h after treatment, BvZFP2, BvZFP6, BvZFP17, BvZFP30, and BvZFP34 were significantly up-regulated in leaves, while BvZFP2, BvZFP14, BvZFP16, and BvZFP34 were significantly up-regulated in roots (Figure 7). Comparing the expression patterns between root and leaf tissues, we found that BvZFP2 and BvZFP34 were up-regulated in both tissues, suggesting their important role in response to salt stress in these tissues.
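The expression heatmaps referred to above were generated with TBtools from the transcriptome tables; the snippet below is a rough open-source equivalent, assuming a small expression matrix. The gene names and values here are hypothetical placeholders, not the data in Table S8.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical expression matrix (e.g., TPM): rows are genes, columns are samples.
data = pd.DataFrame(
    {
        "leaf_control": [0.0, 5.2, 1.1],
        "root_control": [3.4, 8.9, 0.0],
        "leaf_salt_72h": [0.2, 15.6, 1.0],
        "root_salt_72h": [6.1, 20.3, 0.1],
    },
    index=["BvZFP_example1", "BvZFP_example2", "BvZFP_example3"],
)

# A log2(x + 1) transform is a common choice to compress the dynamic range before plotting.
log_data = np.log2(data + 1)

sns.heatmap(log_data, cmap="viridis", annot=True, fmt=".1f",
            cbar_kws={"label": "log2(TPM + 1)"})
plt.title("Hypothetical BvZFP expression heatmap")
plt.tight_layout()
plt.savefig("bvzfp_expression_heatmap_example.png", dpi=150)
```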
Transcription factors in plants also play a pivotal role in the response to pathogen infections. Using beet necrotic yellow vein virus-infected sugar beet transcriptome data from the database [50], it was revealed that nearly half of the BvZFP genes exhibited differential expression at varying degrees. Among these genes, BvZFP6, BvZFP8, BvZFP28, and BvZFP34 were significantly up-regulated, while BvZFP2, BvZFP16, BvZFP17, and BvZFP30 were moderately induced. Conversely, the expression levels of BvZFP13, BvZFP25, BvZFP26, and BvZFP33 were down-regulated (Figure 8). The above results indicated that certain BvZFP genes may be involved in either promoting or inhibiting viral infection by regulating downstream target genes.
Discussion
C2H2 ZFPs are one of the most extensively studied transcription factors that play crucial roles in many biological processes in eukaryotic organisms [6,21]. The Q-type ZFP, a plant-specific subfamily of C2H2-ZFPs, was found to be involved in plant development, as well as various stress responses [15,34,51]. So far, Q-type ZFPs have been extensively studied in many species, such as A. thaliana, S. lycopersicum, Solanum tuberosum, T. aestivum, O. sativa, and P. trichocarpa (Table 2). However, this subfamily has yet to be explored in B.
vulgaris, one of the most important crops for sugar production.In this study, we performed a comprehensive genome-wide investigation of Q-type C2H2 ZFPs in sugar beet.A total of 104 C2H2 BvZFPs were first identified from the current B. vulgaris genome using bioinformatic analysis.This number of BvZFP was lower than most previously studied species, except for Vitis vinifera and C. annuum (Table 2).Among these genes, 35 Q-type C2H2 BvZFPs, with each containing at least one conserved "QALGGH"-type zinc finger domain, were selected for further investigation.As expected, sugar beet also has the fewest Q-type ZFP genes compared with other sequenced plant genomes (Table 2).These Q-type ZFP genes were renamed from BvZFP1 to BvZFP35 based on their physical positions on the sugar beet chromosomes.Thirty-four members were unevenly distributed in all nine chromosomes, except for BvZFP35 (LOC104893654) (Figure 1).Based on the number of zinc finger domains and the spacing between the two His residues, 63% (22 out of 35) members belonged to C1-1i group, 34% of members existed in both C1-2i and C1-3i groups, and only one gene BvZFP22 was categorized in the C1-4i group.We did not identify any Q-type ZFPs in sugar beet that possessed five zinc finger domains, which are present in Arabidopsis and rice genomes [12,20].Interestingly, we found several zinc finger domains with certain modifications to the "QALGGH" motif (M-type) for all 3i and 4i members.The lengths of the BvZFP proteins differed among groups, ranging from 162 to 323, 186 to 456, and 237 to 572 amino acids for 1i group, 2i group, and 3i group, respectively.All Q-type BvZFPs were predicted to be localized within the nucleus, indicating that these proteins indeed function as transcription factors in the nucleus.Phylogenetic analyses of 35 BvZFPs, using 58 Arabidopsis ZFPs as templates, classified 35 Q-type BvZFP genes into five major clades according to the number of "KS/KA/RS/RA/QA-LGGH" motifs they contained, including C1-1Q-A, C1-1Q-B, C1-2Q, C1-QM-A, and C1-QM-B (Figure 2).Proteins clustered into the same clade may have similar functions under various stresses and close evolutionary relationships.Among them, C1-2Q clusters with Arabidopsis C1-2i, which is highly conserved during evolution.It has been reported that mutations in the QALGGH sequence greatly affect DNA-binding activity [59].Cis-acting elements are involved in the regulation of gene activity and serve as fundamental molecular switches during transcriptional regulation [60].A previous study confirmed that many of these elements, such as ABREs and DREs, have been reported to widely participate in abiotic stress responses in Artemisia annua, A. thaliana, and T. 
aestivum [31].In plants, hormones, such as auxin, abscisic acid, and gibberellin, play an important role in the growth and development of plants and the response to adversity stress [61,62].However, no study has investigated these key regulatory elements in sugar beet.Among the BvZFP gene family members, 385 cis-regulatory elements were identified, which contain elements related to hormone response (abscisic acid, gibberellin, ethylene, and auxin), stress response elements (low temperature and drought), and growth-and development-related response elements.These results spotlight potential candidate genes for anti-abiotic stress, although their specific functions need to be confirmed by further investigation.Previous studies have shown that genes with fewer introns are more prone to be activated in response to stress [63].In this study, approximately 86% (30/35) of BvZFPs have no introns, except for five BvZFPs, which indicates that BvZFPs could rapidly react to external stimuli.Collinearity analysis, an essential analytical strategy in comparative genomics, illuminates both large-scale and small-scale molecular evolutionary events across species.In this study, about 70% of the 34 mapped BvZFP genes are collinear with respect to four C2H2-ZFP genes on Arabidopsis chromosomes.The Ka/Ks ratios showed the conservation of BvZFPs throughout evolution, aligning with the previous views that regarded the C2H2 family as evolutionarily stable [64,65].All this detailed information helps us better understand and screen for appropriate C2H2 gene family members in sugar beet. Besides the conserved zinc finger domains, many previously identified plant ZFP proteins have EAR motifs.These motifs, characterized by conserved amino acid signatures "LxLxL" or "DLNxxP" in their respective C-terminal regions [66], play a key role in transcriptional repression [67,68].It has been reported that 26% of wheat TaZFP members contained potential EAR motifs [31], while approximately 55% of grape VviZFP members possessed this motif [56].Genome-wide analysis suggests that the EAR motif is conserved across all plant species and is involved in developmental and stress-related processes [69].For instance, an EAR repressor named NIMIN1 negatively regulates the expression of the PR1 defense gene in Arabidopsis [70].AtZP1, which contains an EAR motif, negatively regulates Arabidopsis root hair initiation and elongation [71].Most strikingly, our results reveal that 32 BvZFPs have at least one EAR motif, accounting for 91% of BvZFP members.As a key repression domain, alterations to any individual residue within the EAR can reduce or completely eliminate its transcriptional repression capability [72]. 
Previous studies demonstrated that many plant Q-type ZFP genes exhibited different expression patterns across various tissues [73][74][75][76]. For example, the Q-type genes of cucumber could be clustered into four groups according to their expression levels; CsZFP genes in group 1 showed high expression levels, while CsZFPs in group 4 exhibited little to no expression in all tissues [16]. In potatoes, some Q-type StZFP genes displayed a tissue-specific expression pattern. Specifically, PG0005486 and PG0030311 were predominantly expressed in leaves and roots, respectively [58]. In strawberries, FaZAT10 was highly expressed in roots, followed by leaves and stems [74]. Moreover, RNA-seq data analysis for Q-type TaZFPs in different wheat tissues showed that 75% of TaZFPs were predominantly expressed in roots, implying their potential role in regulating wheat root development [31].
In the current study, we also found that many BvZFPs displayed a specific expression pattern between leaves and roots (Figure 6). With respect to the three-pair-euphylla-stage seedlings of sugar beet, 46% of BvZFPs (16 out of 35) were not observed to be expressed in either leaf or root tissue, suggesting that these genes might not influence development at this growth stage. In contrast, 23% of BvZFPs (8 out of 35 genes) were expressed preferentially in roots, while 29% of BvZFPs (10 out of 35) were expressed in both leaf and root tissues to variable degrees. Among the genes expressed in both tissues, root expression levels for BvZFPs were relatively higher than in leaf tissue except for one gene. In addition, BvZFP6, BvZFP27, and BvZFP30, each containing two "QALGGH" domains, were significantly expressed in roots. Therefore, the results suggested that many Q-type BvZFP genes are involved in sugar beet development under normal growth conditions, especially root development.
Plants are frequently affected by various abiotic and biotic stresses. To adapt to these stresses, plants have evolved sophisticated mechanisms. Numerous Q-type ZFP genes have been found to respond to diverse stresses, such as drought, salt, osmotic, low temperature, oxidative stress, and pathogen infection in many plant species [16,30,32]. Studies have shown that the overexpression of MdZAT5 promotes the expression of anthocyanin-biosynthesis-related genes NHX1 and ABI1 to actively regulate anthocyanin synthesis and increases sensitivity to salt stress in apple calli and Arabidopsis [75]. The ZFP gene Csa6G303740 was significantly up-regulated in response to drought, heat, and salt stresses, indicating it could be involved in the regulation of abiotic stresses in cucumbers [16]. Potato StZFP1, which responds to salt, dehydration, and infection by Phytophthora infestans, has the potential to improve salt tolerance in transgenic tobacco [76].
In this work, we analyzed publicly available transcriptome data and found that five BvZFP genes, BvZFP2, BvZFP6, BvZFP17, BvZFP30, and BvZFP34, were up-regulated in leaves or/and roots under salt treatment, indicating that these genes may play roles in salt response in sugar beet.These genes belong to the C1-2i and C1-3i groups, and all contain two Q-type zinc finger domains.In addition, BvZFP2, BvZFP17, and BvZFP 30 are grouped into the same C1-QM clade, while BvZFP6 and BvZFP34 are clustered into the C1-2Q clade.For biotic stress, twelve BvZFP genes were differentially expressed by BNYVV infection.Interestingly, BvZFP2, BvZFP6, BvZFP17, BvZFP30, and BvZFP34 were induced either by salt treatment or viral infection.On the other hand, some BvZFP genes only responded to viral infection; for example, BvZFP8 and BvZFP28 were significantly up-regulated in the BNYVV-infected samples, and BvZFP13, BvZFP25, BvZFP26, and BvZFP33 were downregulated exclusively due to viral infection.These four down-regulated genes all belong to the C1-1i group and are clustered into the C1-1Q-A clade.BvZFP8 and BvZFP28 are in the C1-1Q-B and C1-2Q clades, respectively. Generally, members in the same cluster may have conserved functions.Previous studies have shown that Arabidopsis ZAT6 is strongly induced by various stresses, including salt, cold, dehydration, and pathogen infection [77].Plants overexpressing AtZAT6 improved resistance to salt, drought, freezing stresses, and pathogen infection.In our study, the BvZFP30 clustered with AtZAT6 in C1-QM was differentially expressed under salt in sugar beet, indicating that BvZFP30 may have functions that are similar to AtZAT6 under abiotic stress.ZAT18 was initially shown to be a positive regulator of drought stress tolerance in Arabidopsis [29].A recent study has indicated that Pseudomonas syringae induces ZAT18 expression to repress EDS1 transcription during bacterial infection [30].Furthermore, BvZFP6, BvZFP34, and BvZFP28, which are clustered with ZAT18, are induced by BNYVV infection.Given that both BvZFPs and ZAT18 are highly expressed under the attack of pathogens, it is plausible that these genes may have similar functions with respect to responding to biotic stress.Taken together, these results indicate that BvZFP genes are involved in plant development and stress resistance in sugar beet, particularly in response to both abiotic and biotic stresses. 
Conclusions
In summary, we performed a genome-wide exploration of Q-type ZFPs in sugar beet, and a total of 35 Q-type BvZFP genes that contain at least one conserved "QALGGH" motif were identified. We analyzed the physiochemical properties, genomic location, gene duplication events, gene structure, conserved motif compositions, and cis-regulatory elements of these BvZFP genes. Phylogenetic analysis revealed that the 35 Q-type BvZFP genes were classified into five subclades. Moreover, the BvZFPs clustered within the same clade generally shared a similar motif composition. In addition, the expression profiles of BvZFPs in leaf and root tissues, as well as their responses to salt stress and viral infection, were analyzed. Q-type BvZFPs are predominantly expressed in roots and are enriched with members that are responsive to abiotic and biotic stresses. These results will enhance our understanding of the BvZFP gene family and provide valuable information for the further functional analysis of Q-type BvZFPs relative to the abiotic and biotic stress tolerance of sugar beet. Meanwhile, this study provides a theoretical basis for the biological breeding of sugar beet against salt stress and viral infection in the future.
Figure 1. Chromosomal localization of Q-type BvZFP genes on sugar beet chromosomes. The 34 members are distributed over 9 chromosomes.
Figure 2. Phylogenetic relationships of Q-type C2H2-ZFPs between sugar beet and Arabidopsis. The unrooted phylogenetic tree was constructed using MEGA10 via the neighbor-joining method with 1000 bootstrap replicates. The tree was divided into six phylogenetic clusters. The red circles represent Arabidopsis ZFPs, the blue circles represent sugar beet ZFPs.
Figure 3. Motif distributions and gene-structure analysis of Q-type BvZFP genes. (A) The phylogenetic tree was built using the NJ method with a bootstrap value of 1000. (B) Exons, introns, and the untranslated region (UTR) are represented by yellow rectangles, gray lines, and green rectangles, respectively. (C) The conserved motifs in BvZFP proteins (1-10) are shown in different colors. The gray lines represent relative protein lengths. (D-G) The common conserved motifs in BvZFP proteins.
Figure 4. Distribution and collinearity of Q-type BvZFP genes in sugar beet. The outer circle represents the location of BvZFP genes on the chromosomes, and the inner circle histogram and heat map represent gene density. The grey lines represent all collinear genes on the sugar beet genome, and the colored lines connect the collinear BvZFP gene pairs.
Figure 5. Collinearity analysis of Q-type C2H2-ZFPs in sugar beet and Arabidopsis genomes. The outer circle represents the different chromosomes of the two species, and the inner circle histogram and heat map represent the gene density of each species. The grey curves indicate the collinear gene regions within the genomes of the two species, and the colored curves emphasize the specific collinearity relationships among Q-type BvZFP genes between the two species.
Figure 6. Distribution of cis-acting elements in the promoter of Q-type BvZFP members in sugar beet. The phylogenetic tree was developed on MEGA 10 using neighbor-joining phylogenetic method analysis. Both the bootstrap test and the approximate likelihood ratio test were set to 1000 times.
Table 1. The physical and chemical properties of Q-type BvZFPs in sugar beet.
Table 2. The number of C2H2 and Q-type zinc finger proteins in different species.
v3-fos-license
2021-10-15T15:27:01.630Z
2021-10-01T00:00:00.000
239474485
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1648-9144/57/10/1075/pdf", "pdf_hash": "f7d894eba836e900f056a9be0d323b988af71d3b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44894", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "63f425fc67ef80af2672dc7cb99c1d5690706878", "year": 2021 }
pes2o/s2orc
PAX7, PAX9 and RYK Expression in Cleft Affected Tissue Background and Objectives: Cleft lip with or without cleft palate is one of the most common types of congenital malformations. Transcription factors paired box 7 and 9 (PAX7, PAX9) and receptor-like tyrosine kinase (RYK) have been previously associated with the formation of orofacial clefts but their exact possible involvement and interactions in the tissue of specific cleft types remains uncertain. There is a limited number of morphological studies analyzing these specific factors in cleft affected tissue due to ethical aspects and the limited amount of available tissue material. This study analyses the presence of PAX7, PAX9, and RYK immunopositive structures within different cleft affected tissue to assess their possible involvement in cleft morphopathogenesis. Materials and Methods: Cleft affected tissue was collected from non-syndromic orofacial cleft patients during cleft correcting surgery (36 patients with unilateral cleft lip, 13 patients with bilateral cleft lip, 26 patients with isolated cleft palate). Control group oral cavity tissue was obtained from 7 patients without cleft lip and palate. To evaluate the number of immunopositive structures in the cleft affected tissue and the control group, a semiquantitative counting method was used. Non-parametric statistical methods (Kruskal–Wallis H test, Mann–Whitney U test, and Spearman’s rank correlation) were used. Results: Statistically significant differences for the number of PAX7, PAX9, and RYK-positive cells were notified between the controls and the patient groups. Multiple statistically significant correlations between the factors were found in each cleft affected tissue group. Conclusions: PAX7, PAX9, and RYK have a variable involvement and interaction in postnatal morphopathogenesis of orofacial clefts. PAX7 is more associated with the formation of unilateral cleft lip, while PAX9 relates more towards the isolated cleft palate. The stable presence of RYK in all cleft types indicates its possible participation in different facial cleft formations. Introduction Cleft lip and palate are relatively common congenital malformations, and they cause excessive functional disabilities in affected children and increase socioeconomic burden and suffering within affected individuals and their families. There are multiple possible propositions for the pathogenesis of cleft lip and palate, but the etiology of orofacial clefts is mainly understood as multifactorial in nature with the involvement of both environmental factors and individual genetic factors [1,2]. The complicated interactions between the surrounding external environmental factors during pregnancy and multiple genes involved in craniofacial region development can initiate and impact the formation of different orofacial clefts. Multiple different cleft candidate gene interactions and mutations have been associated with the development of craniofacial clefts, for example, the involvement of paired box (PAX) genes such as PAX7 and PAX9 in some orofacial cleft cases [2,3]. Improved understanding of the presence and interactions of specific cleft candidate genes Materials and Methods The study was conducted in accordance with the 1964 Declaration of Helsinki. All tissue samples used for the study were taken from patients with a voluntary agreement from the parents of patients from each patient group and the parents of controls to allow the donation of the tissue samples for scientific research. 
Patient and control group tissue samples were acquired from the Cleft Lip and Palate Centre of the Institute of Stomatology of Riga Stradins University (RSU) and the analysis and study was performed in the Department of Morphology of RSU. The Ethics committee of RSU provided the approval of the study protocol (22.05.2003.; Nr.6-1/10/11, 24.09.2020). The study groups were divided based on the cleft type (unilateral cleft lip, bilateral cleft lip, isolated cleft palate). The soft cleft tissues with the oral cavity epithelium and the underlying connective tissue were taken during cleft surgery. The inclusion criteria for the patient groups were the following: diagnosis and surgery of non-syndromic unilateral cleft lip, bilateral cleft lip, and isolated cleft palate, respectively, patient age before primary dentition (age 3-18 months), no periodontal disease detected, or no other pathology which would impede the patient from receiving cleft lip and palate reparative surgery. For the unilateral cleft lip group, 36 patients participated in the study (20 boys and 16 girls) aged 3-8 months. For the bilateral cleft lip group, 13 patients participated in the study (10 boys and 3 girls) aged 4-16 months. For the isolated cleft palate group, 26 patients participated in the study (18 boys and 8 girls) aged 4-14 months. Control group oral cavity tissue was taken from 7 patients who received labial frenectomy due to the surgical correction of hypertrophic upper lip frenulum. The structure of the control group was composed of four boys and three girls (8-11 years old). The control group inclusion criteria were the following: patients with the diagnosis of hypertrophic upper lip frenulum, no inflammation and no other pathological process detected in the tissue sample, no craniofacial clefts in anamnesis or in family history. Due to the very limited amount of control group tissue material, PAX7 immunoreactivity could be evaluated from only 5 control group patients. PAX9 and RYK immunoreactivity could be evaluated from all 7 control group patients. Standard biotin and streptavidin immunohistochemical method was performed for the detection of PAX7, PAX9, and RYK [19]. The tissue samples were fixed in 2% formaldehyde and 0.2% picric acid in 0.1 M phosphate buffer (pH 7.2). The washing procedure was performed in phosphate-buffered saline (PBS) fluid containing 10% saccharose for 12 h. The embedding procedure was performed in paraffin and cutting was performed into 6-7 µm thick sections. Later, deparaffinization was carried out and further slide staining was performed with the biotin-streptavidin immunohistochemical method for detection of the presence of specific proteins within the tissue with antibodies for PAX7 (ab55494, 1:100, Abcam, Cambridge, UK), PAX9 (orb11242, 1:100, Biorbyt Ltd., Cambridge, UK), and RYK (orb38371, 1:100, Biorbyt Ltd., Cambridge, UK). The visual illustration of slides was provided by Leica DC 300F digital camera (Leica Microsystems GmbH, Wetzlar, Germany). Further processing of images and image analysis was performed with Image-Pro Plus software (Media Cybernetics, Inc., Rockville, MD, USA). A semi-quantitative counting method was used to record and provide a non-parametric evaluation of the relative frequency of immunopositive cells by using the immunohistochemical method [20]. The relative frequency of positively stained cells was analyzed with light microscopy in five visual fields of each section by two independent researchers. 
No positive structures or cells were labeled as 0, a rare occurrence of positive structures was labeled as 0/+, a few positive structures were labeled as +, a few to moderate number of positive structures: +/++, moderate number of positive structures: ++, moderate to numerous number of positive cells: ++/+++, numerous number of positive cells: +++, numerous to abundant number of positive structures: +++/++++, and an abundance of positive cells in the visual field was labeled as ++++.
Analysis of data was performed by using both analytical and descriptive statistical methods. The count of PAX7-, PAX9-, and RYK-positive cells for each visual field, the median value, and the interquartile range were calculated for further evaluation using Spearman's rank correlation analysis. The Spearman's rank correlation coefficient (Spearman's rho value, rs) was interpreted as follows: rs = 0.0-0.2, a very weak correlation; rs = 0.2-0.4, a weak correlation; rs = 0.4-0.6, a moderate correlation; rs = 0.6-0.8, a strong correlation; rs = 0.8-1.0, a very strong correlation. The semi-quantitative count of immunoreactive structures is shown as median values. Statistical significance was calculated with the Kruskal-Wallis H test and Mann-Whitney U test between each group. The statistical analysis of data was provided with the statistics program SPSS Statistics (version 25.0, IBM Company, Chicago, IL, USA). A p-value of <0.05 was considered statistically significant for all statistical calculations.
Routine Hematoxylin and Eosin-Stained Slide Evaluation
Hematoxylin and eosin-stained slides for all three patient groups were prepared to notify the presence of the surface epithelium and the underlying connective tissue. In all slides of the patient groups, stratified squamous epithelium with underlying connective tissue was found. The tissue fragments in all three patient groups were mainly similar to relatively normal oral cavity and lip tissue (without inflammation, without fibrotic changes, and without vacuolization of the epithelium) with some slight variations. These variations include the presence of minimal subepithelial inflammation with infiltration of inflammatory cells (more visible in the isolated cleft palate group with seven individuals having minor subepithelial inflammation when compared to five individuals in the unilateral cleft lip group and two individuals within the bilateral cleft lip affected tissue group). Relatively minor vacuolization (a few to moderate number of epitheliocytes) within the surface epithelium was notified in four individuals within the unilateral cleft lip group and in one individual within the bilateral cleft lip group, but epithelial vacuolization was not notified in the isolated cleft palate tissue group. In some cleft affected tissue slides, patchy vacuolization of the oral cavity epithelium was visible in epithelial cells (Figure 1A,B). A patchy proliferation of the basal cells of the oral cavity epithelium was noticed in some slides. In some isolated cleft palate slides, the presence of subepithelial inflammation with fibrotic changes in the connective tissue was visible (Figure 1C).
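The statistical workflow described in the Methods above (performed in SPSS) can be approximated with open-source tools. The sketch below maps the semiquantitative grades to ordinal ranks and runs the Kruskal-Wallis, Mann-Whitney, and Spearman tests with SciPy; the grade lists are hypothetical examples, not the study data.

```python
from scipy.stats import kruskal, mannwhitneyu, spearmanr

# Ordinal coding of the semiquantitative scale used in the text (0 ... ++++).
GRADE_RANK = {"0": 0, "0/+": 0.5, "+": 1, "+/++": 1.5, "++": 2,
              "++/+++": 2.5, "+++": 3, "+++/++++": 3.5, "++++": 4}

def to_ranks(grades):
    return [GRADE_RANK[g] for g in grades]

# Hypothetical epithelial PAX7 grades per patient in each group (not the study data).
controls = to_ranks(["+", "+/++", "+/++", "++", "+"])
unilateral = to_ranks(["++/+++", "+++", "++", "+++/++++", "++/+++", "+++"])
bilateral = to_ranks(["++", "+", "++", "++/+++"])
palate = to_ranks(["+/++", "0/+", "++", "+++", "+/++"])

# Kruskal-Wallis H test across all four groups.
H, p_kw = kruskal(controls, unilateral, bilateral, palate)
print(f"Kruskal-Wallis: H = {H:.3f}, p = {p_kw:.4f}")

# Pairwise Mann-Whitney U test: controls vs. the unilateral cleft lip group.
U, p_mw = mannwhitneyu(controls, unilateral, alternative="two-sided")
print(f"Mann-Whitney (controls vs unilateral): U = {U:.1f}, p = {p_mw:.4f}")

# Spearman correlation between two factors scored in the same patients (hypothetical).
pax7_scores = to_ranks(["++", "+++", "++/+++", "+++", "++"])
ryk_scores = to_ranks(["+/++", "+++", "++", "+++/++++", "++"])
rho, p_sp = spearmanr(pax7_scores, ryk_scores)
print(f"Spearman: rho = {rho:.3f}, p = {p_sp:.4f}")
```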
PAX7 Immunohistochemical Evaluation
The number of factor-positive cells found in the different cleft affected tissue groups and the controls was quite variable. Within the control group, the median number of PAX7-positive epitheliocytes in the epithelium was few to moderate (+/++) and it ranged from a few (+) to moderate (++) number of PAX7-positive cells. In the connective tissue of the control group, the median number of PAX7-positive connective tissue cells was few to moderate (+/++) and ranged from a few (+) to moderate to numerous (++/+++) PAX7-positive cells (Figure 2A). For PAX7 within unilateral cleft lip affected tissue, the median number of PAX7-containing epitheliocytes was moderate to numerous (++/+++) and the number of factor-positive cells ranged from few to moderate (+/++) to numerous to abundant (+++/++++). Within the connective tissue of the unilateral cleft lip group, the median number of PAX7-containing positive cells was numerous (+++) and ranged from a few to moderate (+/++) to abundant (++++) positive cells, which were mainly macrophages, fibroblasts, and endothelial cells (Figure 2B).
For PAX7 in bilateral cleft lip affected tissue, the median number of PAX7-positive epitheliocytes was moderate (++) and the number of factor-positive epitheliocytes ranged from a few (+) to numerous (+++) PAX7-positive cells. For PAX7 in bilateral cleft lip affected connective tissue, the median number of factor-positive cells (PAX7 was mainly found in macrophages and also in some fibroblasts) was few to moderate (+/++) and ranged from a few (+) to moderate to numerous (++/+++) within the bilateral cleft lip patient group (Figure 2C). In isolated cleft palate affected tissue, the median number of PAX7-containing epitheliocytes was a few to moderate (+/++) within the epithelium and ranged from barely detectable PAX7-positive cells (0/+) to numerous to abundant (+++/++++). The median number of PAX7-positive cells in isolated cleft palate affected tissue was moderate to numerous (++/+++) and the number of PAX7-positive cells (fibroblasts, macrophages, and endothelial cells) in connective tissue ranged from a few (+) to numerous to abundant (+++/++++) (Figure 2D). The use of the Kruskal-Wallis H test notified that a statistically significant difference was found in the number of PAX7-positive structures in the epithelium between the controls, unilateral cleft lip, bilateral cleft lip, and isolated cleft palate groups (H = 25.804, df = 3, p < 0.001). The Kruskal-Wallis H test also indicated a statistically significant difference in the number of PAX7-positive structures in the connective tissue between these groups. The Mann-Whitney U test notified a statistically significant difference in the number of PAX7-positive epitheliocytes in the epithelium between the control group and the unilateral cleft lip affected tissue group (U = 5.5, p = 0.001). A statistically significant difference was also seen for the number of PAX7-containing cells within the connective tissue between the control group and the unilateral cleft lip affected tissue group (U = 15.5, p = 0.002). The Mann-Whitney U test notified that no statistically significant difference was detected in the number of PAX7-positive epitheliocytes in the epithelium between the control group and the bilateral cleft lip affected tissue group (U = 18.5, p = 0.154). The Mann-Whitney U test also indicated that there was no statistically significant difference present in the number of PAX7-positive cells within the connective tissue between the controls and the bilateral cleft lip affected tissue group (U = 32.0, p = 0.959). The Mann-Whitney U test indicated that no statistically significant difference was notified for the number of PAX7-containing epitheliocytes in the epithelium between the control group and the isolated cleft palate group (U = 46.0, p = 0.481). No statistically significant difference was found in the number of PAX7-positive cells in the connective tissue between the controls and the isolated cleft palate affected tissue (U = 34.5, p = 0.091).
PAX9 Immunohistochemical Evaluation
Within the control group, the median number of PAX9-containing epitheliocytes in the epithelium was moderate (++) and it ranged from moderate (++) to moderate to numerous (++/+++) PAX9-positive cells. Within the connective tissue of the control group, the median number of PAX9-positive structures was 0 (no PAX9-positive cells) and it ranged from no PAX9-positive cells (0) to barely detectable (0/+) (Figure 3A). The median number of PAX9-positive epitheliocytes in the epithelium of unilateral cleft lip affected tissue was few to moderate (+/++) and ranged from no PAX9-containing cells (0) to numerous (+++) PAX9-positive cells within the unilateral cleft lip patient group. The median number of PAX9-positive connective tissue cells, such as fibroblasts, macrophages, and endothelial cells, within the connective tissue of the unilateral cleft lip patient group was moderate (++) and ranged from no positive cells (0) to numerous (+++) positive cells (Figure 3B). The median number of PAX9-positive cells within the epithelium of bilateral cleft lip patient group tissue was a few (+) positive cells and ranged from no positive epitheliocytes (0) to moderate to numerous (++/+++) positive epitheliocytes. Within the connective tissue of the bilateral cleft lip patient group tissue, the median number of PAX9-positive cells was a barely detectable (0/+) number of PAX9-containing cells and the values ranged from no positive structures (0) to a moderate number (++) of PAX9-containing cells (Figure 3C). The median number of PAX9-positive epitheliocytes in the surface epithelium of isolated cleft palate affected tissue was a barely detectable number of positive cells (0/+) and ranged from no positive cells (0) to moderate (++) number of PAX9-containing cells. The median number of PAX9-positive cells within the connective tissue of the isolated cleft palate patient group was a few (+) immunopositive cells (mainly endothelial cells and some macrophages) and had a range from no positive cells (0) to moderate to numerous (++/+++) in some patients (Figure 3D). The Kruskal-Wallis H test notified a statistically significant difference for the number of PAX9-positive structures in the epithelium between the control group, unilateral cleft lip group, bilateral cleft lip group, and isolated cleft palate group (H = 28.308, df = 3, p < 0.001). The Kruskal-Wallis H test also indicated that there was a statistically significant difference for the number of PAX9-positive structures in the connective tissue between the control group, unilateral cleft lip group, bilateral cleft lip group, and isolated cleft palate group (H = 33.917, df = 3, p < 0.001). The Mann-Whitney U test notified that no statistically significant difference was found for the number of PAX9-containing epitheliocytes within the epithelium between the control group and the unilateral cleft lip affected tissue group (U = 78.0, p = 0.107).
There was a statistically significant difference in the number of PAX9-containing cells in the connective tissue between the control group and the unilateral cleft lip affected tissue group (U = 9.5, p < 0.001). The Mann-Whitney U test indicated a statistically significant difference for the number of PAX9-positive epitheliocytes in the surface epithelium between the control group and the bilateral cleft lip affected tissue group (U = 13.5, p = 0.012). The Mann-Whitney U test also indicated no statistically significant difference for the number of PAX9-positive cells within the connective tissue between the control group and the bilateral cleft lip affected tissue group (U = 29.0, p = 0.155). The Mann-Whitney U test notified a statistically significant difference for the number of PAX9-positive epitheliocytes in the surface epithelium between the control group and the isolated cleft palate affected tissue group (U = 2.5, p < 0.001). The Mann-Whitney U test indicated a statistically significant difference for the number of PAX9-positive cells within the connective tissue between the control group and the isolated cleft palate affected tissue group (U = 18.5, p = 0.001).
RYK Immunohistochemical Evaluation
Within the control group, the median number of RYK-positive epitheliocytes in the epithelium was a barely detectable (0/+) number of immunopositive cells and it ranged from no positive cells (0) to moderate (++) number of RYK-containing cells. Within the connective tissue of the control group, the median number of RYK-containing connective tissue cells was a few (+) positive cells and it ranged from no RYK-containing cells (0) to a moderate (++) number of RYK-positive cells (Figure 4A).
For RYK, the median number of immunopositive cells within the epithelium of the unilateral cleft lip patient group was moderate to numerous (++/+++) and ranged from a few to moderate (+/++) number to a numerous to abundant (+++/++++) number of RYK-positive cells. In the connective tissue of the unilateral cleft lip patient group, the median number of RYK-containing cells (mainly endothelial cells and macrophages) was moderate to numerous (++/+++) and ranged from a few (+) positive cells to numerous (+++) (Figure 4B). The median number of RYK-containing epitheliocytes in the epithelium of the bilateral cleft lip patient group was moderate to numerous (++/+++) and ranged from few to moderate (+/++) to numerous (+++). The median number of RYK-containing cells (mostly endothelial cells and some macrophages) within the connective tissue of the bilateral cleft lip patient group was moderate to numerous (++/+++) and ranged from few to moderate (+/++) to numerous (+++) (Figure 4C). The Kruskal-Wallis H test indicated a statistically significant difference for the number of RYK-positive structures in the epithelium between the controls, unilateral cleft lip group, bilateral cleft lip group, and isolated cleft palate group (H = 22.868, df = 3, p < 0.001). The Kruskal-Wallis H test also notified that a statistically significant difference was found in the number of RYK-positive structures in the connective tissue between the control group, unilateral cleft lip patient group, bilateral cleft lip patient group, and isolated cleft palate patient group (H = 18.307, df = 3, p < 0.001). The Mann-Whitney U test indicated a statistically significant difference for the number of RYK-containing epitheliocytes in the epithelium (U = 5.5, p < 0.001) and the connective tissue (U = 11.0, p < 0.001) between the control group and the unilateral cleft lip affected tissue group. The Mann-Whitney U test indicated a statistically significant difference for the number of RYK-positive epitheliocytes in the epithelium between the control group and the bilateral cleft lip affected tissue group (U = 3.0, p = 0.001). A statistically significant difference was also found in the number of RYK-containing cells in the connective tissue between the control group and the isolated cleft palate affected tissue (U = 3.0, p = 0.001). The Mann-Whitney U test calculation notified statistically significant differences between the control group and the isolated cleft palate affected tissue group in the number of RYK-containing cells within the epithelium (U = 10.0, p < 0.001) and in the connective tissue (U = 5.0, p < 0.001). The semiquantitative evaluation of PAX7, PAX9, and RYK immunoreactivity is summarized in Table 1.
Correlations
Spearman's rank correlation calculation showed statistically significant correlations between the number of immunopositive structures for PAX7, PAX9, and RYK within the epithelium and the connective tissue within each type of cleft tissue analyzed in the study (unilateral cleft lip patient group, bilateral cleft lip patient group, and isolated cleft palate patient group). No statistically significant correlations were found within the control group tissue.
Correlations in Bilateral Cleft Lip Affected Tissue In the bilateral cleft lip affected tissue, very strong statistically significant correlations (rs = 0.8-1.0) were seen between the number of PAX9-containing epitheliocytes in the epithelium and the number of PAX9-containing connective tissue cells (rs = 0.882, p < 0.001), and between the number of PAX7-containing epitheliocytes in the epithelium and the number of RYK-containing cells in the epithelium (rs = 0.869, p < 0.001). The correlations between the factors in bilateral cleft lip affected tissue can be found in Table 3. Table 3. Correlations between paired box 7 (PAX7), paired box 9 (PAX9), and receptor-like tyrosine kinase (RYK) immunopositive structures in bilateral cleft lip affected tissue based on Spearman's rank correlation coefficient calculation (rs - Spearman's rho value). The correlations between the factors in isolated cleft palate affected tissue can be found in Table 4. Table 4. Correlations between paired box 7 (PAX7), paired box 9 (PAX9), and receptor-like tyrosine kinase (RYK) immunopositive structures in isolated cleft palate affected tissue based on Spearman's rank correlation coefficient calculation (rs - Spearman's rho value). Discussion The formation of non-syndromic orofacial clefts is still unclear, and there is limited information about the differences in genetic factors and signaling pathways in specific types of craniofacial clefts, such as unilateral or bilateral cleft lip and isolated cleft palate. Our study showed that there are statistically significant differences in the number of PAX7-, PAX9-, and RYK-positive structures between the control group tissue and the different types of cleft affected tissue. Our research showed statistically significant differences in the number of PAX7-positive cells in the epithelium and connective tissue between the control group and the unilateral cleft lip affected tissue, but not the bilateral cleft lip or isolated cleft palate affected tissue. PAX7 is a transcription factor involved in the development of the craniofacial region that regulates the formation and differentiation of neural crest cells, which form the connective tissue of the orofacial region [21]. Genome-wide association studies have shown that mutations in the PAX7 gene are associated with the formation of craniofacial clefts [22][23][24]. Our study suggests that PAX7 could be more functionally involved in the development of specific types of craniofacial clefts, such as unilateral cleft lip, and less involved in the formation of bilateral cleft lip and isolated cleft palate; however, further research could help to elaborate this possible functional and pathogenetic connection of PAX7 with specific cleft types. The data showed statistically significant differences in the number of PAX9-containing structures in the connective tissue only between the control and the unilateral cleft lip group, while such a difference between the control group and the bilateral cleft lip affected tissue was found only within the epithelium. The comparison of the control group and the isolated cleft palate group showed statistically significant differences in the number of PAX9-containing cells both within the epithelium and within the connective tissue. PAX9 is a transcription factor that has been previously described as necessary for the regulation of palatogenesis [25]. Dysfunction of PAX9 has been previously associated with the development of craniofacial abnormalities, such as cleft palate and tooth agenesis [26,27].
Mouse models have shown that PAX9 gene deletion and downregulation of PAX9 cause the formation of a cleft secondary palate [28], and associations have been found between PAX9 and the development of cleft lip in mice [29]. PAX9 has been linked with the formation of cleft lip with or without cleft palate in humans [30]. Our study results suggest that the number of PAX9-positive structures could differ among cleft affected tissues depending on the cleft type, which could influence the pathogenetic pathways in each cleft type. The significant immunoreactivity of PAX9 in the surface epitheliocytes and connective tissue cells of isolated cleft palate seems to emphasize the interaction between the epithelium and the underlying connective tissue in this specific type of cleft, which could affect tissue growth and remodeling during cleft formation, while the significant presence of PAX9 within the unilateral and bilateral cleft lip affected tissue and its possible involvement in cleft pathogenesis within these specific types of clefts cannot be excluded. Statistically significant differences were found in the number of RYK-positive cells and structures in the epithelium and connective tissue between the controls and all three cleft types analyzed in this study. The available information about RYK function and the formation of clefts is quite limited, but an association between orofacial cleft formation and RYK dysfunction and loss of RYK activity has previously been found in humans [14]. Our study results could suggest that the significant presence of RYK-positive cells compared to controls in the different cleft types might be explained by possible similarities in the pathogenetic signaling mechanisms of cleft formation in which RYK is involved. Multiple statistically significant correlations were found between PAX7, PAX9, and RYK within the cleft affected tissue groups, which most likely could be explained by the interaction between these factors in the developing orofacial region during the postnatal period. An interesting question relates to the intercorrelation of these specific gene proteins. A statistically significant strong correlation was detected between PAX9-containing epitheliocytes in the surface epithelium and PAX9-containing connective tissue cells in the tissue of unilateral cleft lip. Similarly, a strong correlation was also found between PAX9-positive structures within the epithelium and PAX9-positive structures in the connective tissue in the isolated cleft palate affected tissue. A very strong correlation between PAX9-positive structures in the epithelium and PAX9-positive connective tissue cells was found in bilateral cleft lip affected tissue, which might indicate a stronger interaction between the cleft affected epithelium and the underlying connective tissue compared to other types of clefts. Previous studies have concluded that PAX9 is involved not only in the formation of the palatal region but also in that of the upper lip region [29]. These correlations of PAX9-positive structures between the connective tissue and the oral cavity epithelium in all types of cleft affected tissue may indicate an interaction and the presence of possibly similar pathogenetic mechanisms within the different cleft types analyzed in this study. Multiple statistically significant moderate correlations were found between PAX7, PAX9, and RYK within the unilateral cleft lip and isolated cleft palate affected tissue. Both PAX7 and PAX9 are involved in craniofacial region development [31].
The correlations between PAX7 and PAX9 could be explained by their interaction within the Wnt and Notch signaling pathways, which play an important regulatory role during the formation of orofacial structures [8,32]. RYK also functions within the Wnt signaling pathway, which in turn interacts with PAX9. Wnt signaling can interact with and activate Fgf (fibroblast growth factor) signaling and modulate PAX9 to repress DKK (Dickkopf) protein, an inhibitor of the canonical Wnt signaling pathway, creating a positive feedback loop within the craniofacial development process [33]. The disruption of these molecular signaling pathways could eventually lead to the formation of craniofacial clefts. These interactions could explain the correlations between the factors within the different cleft affected tissues. A very strong statistically significant correlation was found between PAX7-containing epitheliocytes in the surface epithelium and RYK-containing epitheliocytes in the surface epithelium within bilateral cleft lip affected tissue. Similarly, a strong correlation between PAX7-positive epitheliocytes in the epithelium and RYK-positive structures in the epithelium was found in isolated cleft palate affected tissue, but in unilateral cleft lip affected tissue this correlation was only moderate. This possible interaction between PAX7 and RYK is most likely indirect, occurring within different Wnt signaling pathways that regulate tissue remodeling and growth processes in the developing orofacial region [34,35]. The differences in the strength of the correlation may indicate differences in factor interaction, pathogenic signaling, and development within the different types of cleft affected tissue. A limitation of this study is the use of only immunohistochemistry to detect the presence of PAX7, PAX9, and RYK in cleft affected tissue; additional methods such as gene amplification and in situ hybridization would provide a good addition to this study. The use of these additional techniques is planned for future work. Another limitation is the size of the control group, which was relatively small and for which the collection of tissue material is complicated due to ethical concerns. Conclusions 1. Transcription factors PAX7 and PAX9 are variably involved in the postnatal morphopathogenesis of different facial cleft types: PAX7 is stably associated with the formation of unilateral cleft lip, while PAX9 relates more to the isolated cleft palate. 2. The participation of the cleft candidate gene RYK in all patterns of facial clefting is proven by its stable appearance in all cleft-affected tissue postnatally. 3. Interactions between PAX7, PAX9, and RYK prove the involvement of all of these factors in the clefting process via possibly similar signaling pathways disrupted by other still unknown factor/s that influence gene expression during postnatal life. Informed Consent Statement: Informed consent was obtained from all parents of subjects involved in the study. Written informed consent has been obtained from the parents of patients to publish this paper. Data Availability Statement: The data described and analyzed in this study are available on request from the corresponding author. Due to ethical considerations and the use of tissue material from children, the data are not publicly available.
The orchid seed coat: a developmental and functional perspective Orchid seeds are 'dust-like.' The seed coat is usually thin, with only one to a few cell layers. It originates from the integuments formed during ovule development. In orchids, the outer integument is primarily responsible for forming a mature seed coat. The inner integument usually fails to develop after fertilization, becomes compressed, and collapses over the expanding embryo. Hence, the seed coat is formed from the funiculus, chalaza, and outer integumentary cells. The outermost layer of the seed coat, the testa, is lignified, usually at the radial and inner tangential walls. The subepidermal thin-walled layer(s), the tegmen, subsequently collapses, resulting in seeds having only a single layer of seed coat cells. In some species, cells of the inner integument remain alive with the ability to synthesize and accumulate lipidic and/or phenolic compounds in their walls covering the embryo. This cover is called the 'carapace,' a protective shield contributing to the embryo's added protection. A developmental and functional perspective of the integuments and seed coat during seed development and germination is presented in this review. Supplementary Information The online version contains supplementary material available at 10.1186/s40529-023-00400-0. Background In seed plants, the embryo is protected by a seed coat originating from the integument(s) formed during ovule development. The importance of the seed coat in seed development and germination is well recognized and a subject of reviews (Mohamed-Yasseen et al. 1994; Boesewinkel and Bouman 1995; Moise et al. 2005; Radchuk and Bonisjuk 2014; Matilla 2019). Since the seed coat completely encloses the embryo, the essential functions are to supply nutrients to the developing embryo and offer physical protection during embryo development and germination. In past decades, as the published literature shows, additional information on seed coat formation and function has been elaborated, e.g., as in Brassicaceae species (Raviv et al. 2017) and tomato (Chaban et al. 2022), indicating the uniqueness and importance of the seed coat. The evolution and molecular control of seed coat development have also been summarized recently by Matilla (2019). Orchid seeds are 'dust-like.' The seed coat is usually thin. The inner seed coat, the tegmen, tends to collapse as the seed matures, leaving an outer layer, the testa, with distinct surface features. Comprehensive information on orchid seed morphology and seed coat structure is available, i.e., Dressler (1993), Rasmussen (1995), and Molvray and Chase (1999). Arditti and Ghani (2000) detailed the numerical and physical characteristics of orchid seeds in an extensive review. Barthlott et al. (2014) provided a scanning electron microscopy survey of orchid seed diversity, illustrating the seed coat's surface features and morphology. In the study of orchid seeds, most investigations focus on surface features relating seed morphology to seed dispersal and biosystematics discussion, e.g., Molvray and Kores (1995), Gamarra et al. (2007), Hariyanto et al. (2020), and Aprilianti et al. (2021). Collier et al.
(2023) recently reported differences in seed morphometrics of orchids native to North America and Hawaii.Their goal is "a better understanding of seed morphometrics, and especially the structure and function of the testa may be useful in developing more effective protocols aimed at in vitro seed germination."Moreover, detailed ontogenetic accounts of seed coat formation in orchids are not readily available in the literature.Furthermore, its potential functions during embryo development are seldom discussed.Recently, Yeung (2022) reported that the inner integument in Epidendrum ibaguense takes on an active cytological appearance with wall ingrowths at fertilization and proembryo development.This observation draws attention to the integuments in seed formation and warrants further investigation. The primary objective of this review is to illustrate the structural features and different patterns of seed coat formation in selected orchid species, as shown in Figs. 1,2,3,4,5,6,7,8 and to discuss seed coat functions during seed development and germination.Questions and suggestions are included in the discussion, hoping to generate more interest and debate in studying the orchid seed coat.Embryo development in orchids is unusual, and many questions remain, especially on regulating its development (Yeung 2022).Understanding the orchid seed coat can provide additional insights into the embryo and seed development. Integument formation during orchid ovule development The integuments become the seed coat after fertilization.In most orchid species, integuments form during megasporogenesis (Yeung and Law 1997).The inner integument usually initiates earlier than the outer integument.The bitegmic condition of ovules, i.e., having two integuments, is most common in orchids.Moreover, variations in integument formation and structural organization are noted.Orchid ovules having a single integument are also known, e.g., Epipogium aphyllum (Kusano 1915;Afzelius 1954), Gastrodia elata (Abe 1976;Li et al. 2016), and Paphiopedilum godefroyae (Ren and Wang 1987).The single integument is highly reduced in size in Epipogium roseum (Arekal and Karanth 1981), and it does not cover the nucellus of a mature embryo sac and lacks a distinct micropyle (Additional file 1).And recently, a species, Pogoniopsis schenckii, with ategmic ovules, have been reported (Alves et al. 2019); only the nucellus encloses the embryo sac and subsequently becomes the seed coat.Abe (1972) considered species with unitegmic ovules to be more advanced from an evolutionary perspective. Integument formation in orchids with bitegmic ovules Epidendrum ibaguense-a tropical epiphyte orchid In E. ibaguense, the integuments initiate during the archesporial cell formation (Yeung and Law 1989;Yeung 2022).Surface nucellar cells begin to divide near the archesporial cell of the ovular primordia.The inner integument develops rapidly, enclosing the megasporocyte (Fig. 1A).It is two cells thick.Moreover, the integumentary cells at the micropylar end of the ovule have additional divisions, forming a prominent micropyle (Fig. 1B-D).The inner integumentary cells increase in cytoplasmic density, especially during fertilization and proembryo development (see Fig. 1C in Yeung 2022). 
The outer integument initiates later than the inner integument and develops near the chalaza. Periclinal walls in the nucellar epidermis mark its initiation (Yeung and Law 1989). When the mature embryo sac forms, the outer integument has overtaken the inner integument. It is a bilayer structure and continues to elongate and extends toward the funiculus (Fig. 1D). The E. ibaguense ovule takes on an anatropous orientation, with the micropyle facing the placental tissue. The funiculus connecting the ovule to the placenta is narrow and approximately four cells thick. At the funiculus-placenta junction, cells are small and mitotic figures are visible (Fig. 1E). After fertilization, cells at the junction elongate to accommodate seed elongation. By the time of fertilization, the single-layered nucellar tissue is crushed by the expanding embryo sac and is difficult to discern. Phaius tankervilliae-a subtropical terrestrial orchid In P. tankervilliae (Fig. 2), commonly known as "the Nun orchid," a subtropical terrestrial orchid, integument formation resembles that of E. ibaguense, with the initiation of the inner integument at the time of megasporocyte formation (Fig. 2A). This is soon followed by the appearance of the outer integument. The inner integument consists of a single cell layer except at the micropylar end, where it becomes a bilayer (Fig. 2B, C). The outer integument extends beyond the inner integument as the ovule matures (Fig. 2B). As in E. ibaguense, the cells of the inner integument at the micropyle enlarge with increased cytoplasmic density (Fig. 2C). The nucellar tissue surrounding the mature embryo sac is compressed and becomes difficult to discern. The funicular cells connecting the placental tissues remain small at fertilization and will elongate to accommodate seed growth after fertilization (Fig. 2D). Calypso bulbosa-a temperate terrestrial orchid In C. bulbosa, a temperate terrestrial orchid, the inner integument initiates near the archesporial cells during ovule development (for micrographs, see Law and Yeung 1989; Yeung and Law 1992). It develops rapidly until it completely encloses the tip of the nucellus containing the developing megaspores. The outer integument grows slowly and does not extend beyond the inner integument before fertilization. Thus, the micropyle is formed from the inner integument. Elongation of the outer integument takes place after fertilization. In C. bulbosa, starch granules are present in the outer integument and cells of the chalazal tissues, especially after fertilization. Similar to E. ibaguense, the cells of the inner integument increase in cytoplasmic density during fertilization and proembryo development (Yeung and Law 1992). Vanilla species as examples of bitegmic ovules with multicell-layered integuments Vanilla, a tropical orchid genus, is known for having a thick and hard seed coat. The seedling growth is initially terrestrial. As the vine continues to grow and climbs up trees, the orchid becomes epiphytic. Because of its economic importance, information about its reproductive biology is readily available in the literature, e.g., Swamy (1947), Nishimura and Yukawa (2010), Kodahl et al. (2015), and Yeh et al. (2021). The thick seed coat originates from multilayered integuments of the ovule before fertilization. In V. planifolia, the inner integument differentiates during archesporial cell formation and is composed of two to three cell layers (Fig.
3A).It envelops the developing ovule before the completion of megasporogenesis.The outer integument appears when the megasporocyte undergoes meiotic divisions.It comprises three to four cell layers with additional layers at the chalazal end (Swamy 1947) (Fig. 3B).In V. imperialis (Kodahl et al. 2015), integument formation is similar to V. planifolia, except for differences in the timing of integument initiation and nucellar cell degeneration. Unlike the inner integument, the multilayer outer integument grows slowly and does not envelop the inner integument before fertilization (Fig. 3C).Moreover, at fertilization, the outer integument grows rapidly and completely encloses the embryo sac and the inner integument (Fig. 3D).Notably, the cells of the outermost layer of the outer seed coat enlarge rapidly during fertilization; the outer tangential and radial walls become thickened considerably (Fig. 3D).The thickened wall stained pinkish-red with the toluidine blue O (TBO) stain indicates the thickened wall remains primary in character. Gastrodia and Epipogium species as examples of orchids with unitegmic ovules Some mycoheterotrophic orchids, such as Gastrodia and Epipogium, have ovules with a single integument, termed the unitegmic ovule (Tohda 1967;Abe 1976;Arekal and Karanth 1981;Li et al. 2016).In Gastrodia species, i.e., G. elata and G. nantoensis, a single layer of nucellar cells encloses the developing megaspores and, subsequently, the embryo sac (Li et al. 2016).A single integument initiates during megaspore formation (Fig. 4A, B).As the ovule matures, the integumentary cells elongate rapidly, eventually enclosing the embryo sac, leaving a micropyle opening (Fig. 4C).Prominent starch granules are present in the integumentary and chalaza tissues during ovule development (Fig. 4C).Similar to G. nantoensis, the single integument of E. roseum has not covered the nucellus of a mature embryo sac at the time of fertilization (Additional file 1).The integument tissue encloses the fertilized embryo sac after the first cell division of the zygote and becomes the seed coat.Although the unitegmic ovule has simpler integument structures, the pollen tube's guidance and the synergids' penetration still occur normally in the absence of a distinct micropyle (Fig. 4D). The roles of the integuments The integuments formed during ovule formation are programmed to become the seed coat after fertilization.Moreover, judging from its developmental patterns and cytological features, the inner integument appears to take on functional roles during the ovule and early embryo development.Whereas the outer integument functions in seed coat formation after fertilization.The fact that outer integument is not necessarily developed at fertilization, as shown in Cremastra appendiculata (Abe 1968) and Calypso bulbosa (Law and Yeung 1989), and it does not take part in micropyle formation such as Bletilla striata (Abe 1971), indicates that it is programmed to function in seed coat formation after fertilization. 
Although there are indications that the inner integument possesses unique biochemical properties, its importance in development tends to be overlooked. A high peroxidase activity has been localized histochemically in the inner integument of Encyclia tampensis (Alvarez 1968) and the micropylar region of the integument in Cypripedium (Zinger and Poddubnaya-Arnoldi 1966). A marked activity of dehydrogenases has also been detected in the ovules' integument in several orchids (Zinger and Poddubnaya-Arnoldi 1966). These earlier studies indicate that the inner integument has unusual biochemical characteristics. The increased staining of inner integumentary cells at fertilization in E. ibaguense (see Fig. 1c in Yeung 2022) draws attention to the special cytological features. When re-examining reports on orchid ovule development, increased staining intensity in the inner integument is often noted, e.g., Oncidium flexuosum (see Figs. 24-29 in Mayer et al. 2011), Acianthera johannensis (see Fig. 5 in Duarte et al. 2019) and Dendrobium nobile (see Figs. 4e and f in Kolomeitseva et al. 2021). The inner integument of Calypso bulbosa shows a higher staining intensity until the suspensor begins to extend beyond it (see Figures in Yeung and Law 1992). In Liparis parviflora (see Fig. 1 in Kolomeitseva et al. 2019), the inner integument gives a strong autofluorescence at fertilization. Although the exact function is unknown, the biochemical and cytological features indicate that the integument can play an important role during fertilization and proembryo development. As discussed by Yeung (2022), since an endosperm fails to form, could the inner integument function as an "endosperm substitute" in orchid seeds during early embryo development? It is well established that auxin is a crucial player in embryogenesis. Recently, Robert et al. (2018) demonstrated that the integuments are the source of auxin, regulating embryo morphogenesis in Arabidopsis. In the asexual race of Spiranthes cernua, cells of the inner integument, especially those at the tip of the micropyle, become highly cytoplasmic and develop into adventive embryos (Swamy 1948). In the Zeuxine strateumatica complex, adventive embryos can arise from the nucellar epidermis or inner integument (Vij et al. 1982). Judging from the increased staining intensity and metabolic activities of the inner integument, plant growth substances could be one type of product produced, generating added morphogenetic potential. The funiculus is thin, with no vascular elements connecting the developing ovules and seeds to the placenta (Figs. 1F, 2D, and 3F). In an ovule, nutrients are transported in a symplastic manner through plasmodesmata from the chalaza to the embryo sac. Although the translocation path is shorter from the hypostase/postament to the embryo sac, a longer route is preferred. In Vanilla, the fluorescent marker uranin is transported to the micropylar end along the inner integument before the appearance of fluorescence in the egg apparatus (Zhang and Zheng 1988). Together with the cytological features of the cells, the inner integument could have enhanced nutrient transfer ability, especially at the micropylar end, where the proembryo develops after fertilization.
The micropyle is a unique and common feature of an ovule; it serves as the entry point for the pollen tube during fertilization.In flowering plants, the micropyle is organized by the contribution of both integuments.Even though the orchid ovules are usually bitegmic, the micropyle is often organized by the inner integument alone.This feature is noted in a majority of orchids, as reported in Amitostigma kinishitae (Abe 1977), Herminium monorchis (Fredrikson 1990), Microstylis wallichii (Sood and Rao 1986), and Neuwiedia veratrifolia (Gurudeva 2019).Histologically, the inner integument becomes multilayered, forming a prominent extension at the micropyle.The cells have a dense cytoplasm.With the numerous ovules present, can the micropylar integumentary cells play a role in attracting the pollen tubes to the ovules and aid in the fertilization process? In the study of orchid ovule development, although descriptive accounts of integument formation are available, the potential functions of the integuments are seldom discussed.We hope to draw attention to the importance of the integuments in ovule and proembryo development and encourage more focused studies of this tissue in the future. Seed coat development and structural features After fertilization, the integuments develop into the seed coat.The nucellus disintegrates at the time of embryo sac maturation or soon after.The inner integument usually fails to develop further, becomes compressed, and collapses over the expanding embryo.Hence, the seed coat is derived mainly from the funiculus, chalaza, and outer integumentary cells in a mature orchid seed. Cells of the outermost layer of the seed coat are lignified, usually at the radial walls and the inner tangential walls.The subepidermal thin-walled layer(s) subsequently collapsed, resulting in seeds having a single layer of seed coat cells.Moreover, in some terrestrial species such as Dactylorhiza majalis (Rasmussen 1995), Epipactis (Additional file 2), Cypripedium formosanum (Figs. 5 and 6) (Lee et al. 2005), and Cypripedium plectrochilum (Fig. 7), the inner integument remains alive with the ability to synthesize and accumulate lipidic and phenolic compounds before the cells collapse over the embryo.This additional covering is termed the 'carapace, ' a protective shield (Veyret 1969;Rasmussen 1995), contributing to the added embryo protection.The following examples document the development of seed coats with different structural organizations. Orchid seeds with a single layer of seed coat cells at maturity and without a carapace In E. ibaguense, the fertilized ovules undergo rapid enlargement and elongation along the length of the funiculus-chalaza.The inner integumentary tissue is ruptured and destroyed with the rapid growth of the embryo proper (Fig. 1F, G).As a result, the cells of the inner integument appear as remnants adhering to the embryo proper, within the seed cavity.Hence, the mature seed coat forms from the outer integumentary tissue only. As the embryo develops, the suspensor elongates towards the tip of the micropyle formed by the outer integument.The suspensor is in close contact with the inner cells of the seed coat, especially on the funiculus side (Fig. 1H).These seed coat cells remain thin-walled and not lignified, as judged by the purple color of the TBO stain.During the early stages of seed development, these thin-walled cells remain alive, as indicated by a nucleus within cells (Fig. 
1H).As the embryo matures, the suspensor and the thin seed coats become dried and difficult to be discerned. A cavity is often noted in orchid seeds, especially at the chalazal end of the seed.The air inside the seed coat makes the seeds buoyant and readily dispersed.In E. ibaguense, the inner seed coat cells in the chalaza region fail to divide further after fertilization.With fewer cells and continual elongation of the outer layers, the inner cells separate and disintegrate, forming a chalazal cavity (Fig. 1I).In a mature seed, cell remnants suspend the embryo in this air-filled cavity.Lignification of the outermost layer of the seed coat cells begins early, with lignin deposition occurring in the radial walls and inner tangential wall while the outer walls remain thin (Fig. 1J).Moreover, lignified outer tangential walls can be seen in some mature seed coat cells near the embryo proper.Only a single lignified seed coat encloses the embryo at the time of seed maturation. A similar pattern can be found in P. tankervilliae (Fig. 2).The expanding embryo cavity results in the compression and collapse of the inner integument (Fig. 2E).The expanding suspensor protrudes beyond the micropyle.It grows towards the outer opening delimited by the outer integument (Fig. 2F, Ye et al. 1997).Like E. ibaguense, the suspensor is in close contact with the seed coat cells, which are not lignified as judged by the staining reaction towards the TBO.These inner seed coat cells remain thin-walled and alive (Fig. 2G) before embryo maturation.In P. tankervilliae, lignification of the seed coat's outermost layer begins before embryo maturation.The radial and inner tangential walls show secondary thickenings (Fig. 2H).At maturity, all inner thin-walled seed coat cells have collapsed, partially covering the embryo (Fig. 2I).Thus, a mature seed coat is comprised of only a single layer of cells (Fig. 2I). In the above examples, the behavior of the suspensor influences the final seed coat structure.The rapid growth of the suspensor and the increased size of the embryo prevent further development of the inner integument into an integral structural component of the mature seed coat. Orchid seed coat with a carapace In several Cypripedium species, besides having an outer seed coat, the mature embryo is covered by a tight thin layer which has been called "carapace" (Figs.5H, I) (Lee et al. 2005(Lee et al. , 2015)).In C. formosanum, the ovule's inner integument forms the carapace.The inner integument appears as a small projection at the base of the nucellar filament during archesporial cell formation (Fig. 5A).As the megaspore undergoes meiosis, the inner integument continues to extend toward the tip of the nucellar filament (Fig. 5B).It eventually encloses the developing embryo sac (Fig. 5C).After fertilization, the embryo cavity enlarges slightly after fertilization and remains the same till seed maturation.Mitotic activity is not detected within the inner integument (Fig. 5D).As the seed approaches maturity, the cells of the inner integument begin to dehydrate and compress into a tight thin layer (Fig. 5E-G), wrapping around the embryo.It stains blue with the TBO stain and reacts positively to Nile red stain, indicating the presence of lignin and cuticular substances, respectively (Fig. 6A, B).It is important to note that the embryo of C. formosanum has a short, singlecelled suspensor (Fig. 5F).It is not a haustoria-like suspensor similar to E. ibaguense and P. tankervilliae. 
The structural features of carapace vary among species.In C. plectrochilum, a distinct carapace is formed during seed development.A thin transparent seed coat houses the color carapace derived from the inner integument, which covers the mature embryo (Fig. 7).Similar to C. formosanum, cuticular substance, and lignin is present in the carapace cell walls.In addition, phenolic substances are synthesized and fill the vacuole of the cells (Fig. 7A).This gives the seeds an orange-black color.At maturity, the carapace shrinks, forming a distinct and thick layer wrapping around the embryo (Fig. 7B). Orchid seed with a multilayered seed coat and the presence of a carapace The Vanilla seeds differ morphologically and structurally from other orchid seeds.The seed coat is sclerotic, a feature seldom found in orchid seeds (Fig. 3I).In V. planifolia, two distinct seed coat layers surround the embryo during seed development (Figs.3E and 8A). The inner seed coat, derived from the inner integument, is two cells thick, and the walls remain primary during the early stages of seed development (Fig. 8A, B).As the seeds approach maturity, the inner seed coat becomes gradually compressed and eventually forms a thin layer at maturity covering the embryo, creating a carapace (Fig. 8C, D).Using the Nile red staining, the inner seed coat's innermost and outermost surface walls react positively, indicating the possible accumulation of a cuticular substance in the wall of these cell layers (see micrographs in Yeh et al. 2021). The outer integument is responsible for forming the seed coat.Before fertilization, the outer integument is still growing (Fig. 3B, C).It has not enclosed the embryo sac completely.At this stage, the walls of the outer seed coat cell are relatively thin (Fig. 3C).After fertilization, the walls of the outermost layer of the seed coat become thickened (Fig. 3D-F).As the embryo becomes matured, the thickened walls of the outer seed coat become sclerified (Fig. 3G, H).At the same time, the dark material accumulates further in the outer and lateral walls of the outermost cell layer (Fig. 8B, C).The thickened cell walls with dark material occupy the entire cell cavity, and the cells become sclerotic as the embryo matures (Figs.3I and Fig. 8D).Near maturity, the inner layers of the outer seed coat gradually compress and attach to the sclerified outermost layer of the seed coat (Fig. 8C, D).Using the TBO stain, the cell wall of the outermost layer of the seed coat stained greenish-blue, indicating the presence of lignin in the wall. The fruits and seeds in Vanilla species are designed for zoochory (Nishimura and Yukawa 2010;Pansarin and Ferreira 2021).The fruits and the sclerotic seeds in Vanilla are intended to be eaten by birds or other animals as the fresh fruits turn red as they mature, and birds are confirmed to be the primary seed dispersal agent.The digestive enzyme of birds sclerifies the hard seed coats, breaking dormancy and promoting germination (Nishimura and Yukawa 2010; Pansarin and Ferreira 2021;Zhang et al. 2021). Seed coat from unitegmic ovules The unitegmic ovule is found in some mycoheterotrophic orchids, e.g., Gastrodia and Epipogium (Tohda 1967;Abe 1976;Arekal and Karanth 1981;Li et al. 2016).In these species, the seed coat comprises a single integument with only two cells thick.During the seed development of G. nantoensis, the seed coat cells become more vacuolated and enlarge further, and the starch grains are metabolized as the seed matures (Fig. 
4E-G).At maturity, seed coat cells eventually compress into a thin layer and envelop the embryo (Fig. 4H).In G. nantoensis, the compressed thin seed coat stains greenish blue with the TBO stain, indicating the lignified cell wall (Fig. 4H).The seed coat also reacts weakly to the Nile red staining (Fig. 4I).Still, the signals could be easily quenched by pre-staining of TBO, indicating the absence of distinct cuticular materials.The fruiting period of Gastrodia and Epipogium is relatively short compared to most orchids; their aboveground parts last only 3-4 weeks, then vanish (Arekal and Karanth 1981).Since Gastrodia and Epipogium are fully mycoheterotrophic species that rely entirely on the nutrient supply from mycorrhizal fungi (Yagame et al. 2007;Li et al. 2016), the seed coat's simple structure may help reduce nutrient investment during reproduction. The characteristics of lignin and cutin deposits in the orchid seed coat The seed coat is the first protective barrier against environmental stresses such as moisture and pathogens (Mohamed-Yasseen et al. 1994;Rajjou and Debeaujon 2008).In addition to the cellulosic walls, different polymers can be found embedded or encrusted in the seed coat cell walls, i.e., lignin, suberin, and cutin (Sano et al. 2016).These compounds can offer additional protection and reinforce the walls.In the orchid seed coat, lignification of seed coat cells appears universal, and its presence is deemed essential in its ability to protect the embryo within. Lignin is readily identified using histochemical tests, i.e., phloroglucinol-HCl and TBO, and autofluorescence characteristics when viewed with a fluorescence microscope.Modern techniques such as vibrational spectroscopy and nuclear magnetic resonance provide vigorous methods for identifying lignin and studying its chemistry (Lupoi et al. 2015).Barsberg et al. (2013) confirm the presence of lignin in Cypripedium calceolus using FT-IR spectroscopy.In recent years, a new form of lignin, the C-lignin, was discovered by nuclear magnetic resonance (NMR) spectroscopy in seed coats of certain species belonging to Orchidaceae and Cactaceae (Chen et al. 2012(Chen et al. , 2013; see Barsberg et al. 2018).The C-lignin differs from the commonly known G/S lignin because it is synthesized from caffeyl alcohol.Using the ATR-FT-IR spectroscopy, Barsberg et al. (2018) characterized seed coat ontogenesis and chemistry in three orchid species, i.e., Neuwiedia veratrifolia, C. formosanum, and Phalaenopsis aphrodite and discuss C-lignin properties and possible function to seed coat properties.They revealed and noted the marked diversity with respect to the seed surface chemistry of the orchids studied.Future investigations will provide further insight and possible implications for seed ecology and germination (Barsberg et al. 2018). The presence of a cuticle is a common feature in many seed coats, e.g., cotton (Yan et al. 2009) and soybean (Ranathunge et al. 2010).The accumulation of cuticular material is commonly observed in the epidermal tissue, forming a vital hydrophobic barrier over the aerial surfaces, preventing water loss and gaseous exchanges (Esau 1977).In recent years, Nile red, a sensitive lipid stain (Greenspan et al. 1985), is often used to detect lipidic substances on the surface of epidermal cells and the embryo and has contributed to the characterization of cuticular substances in plant cell walls. 
In orchids, the deposition of cuticular substances in orchid seed coats varies among species, and a distinct cuticle is absent in the seed coat walls.A lipid component is not detected using the lipid stain, Sudan III (Carlson 1940), and the IR spectroscopic method (Barsberg et al. 2013) in the C. parviflorum and C. calceolus seed coat, respectively.Cyrtosia javanica, a mycoheterotrophic orchid species, has a thick seed coat from the outer integument (see micrographs in Yang and Lee 2014).The outermost layer is sclerified with thick lignified walls.However, Nile red staining fails to detect the presence of a lipidic substance in the outer seed coat layers.Moreover, weak positive staining is found in the walls of the inner seed coat cells derived from the inner integument (Yang and Lee 2014).A positive Nile red staining is noted in Cymbidium sinense (Yeung et al. 1996) and C. formosanum (Lee et al. 2005).However, the stain is quenched by prestaining with TBO, indicating that the cuticular substance is adcrusted in the wall and not as a distinct cuticle similar to that commonly seen in leaf epidermal cells. Cutin deposits are more consistently found when a carapace is present and at the embryo's surface.In Cephalanthera falcata, lignin and cuticular material accumulation have been reported in the inner seed coat (Yamazaki and Miyoshi 2006).Similar intense staining of Nile red can be seen in the inner walls (carapace) derived from the inner integument in C. formosanum (Lee et al. 2005).Positive Nile red staining is often noted in the outer walls of the orchid embryos, such as C. sinense (Yeung et al. 1996) and Paphiopedilum delenatii (Lee et al. 2006).The presence of cuticular material offers additional protection to the embryo. Seed coat functions during seed development and germination in orchids Due to the simplicity of the seed coat structures in orchids, besides aiding in seed dispersal and serving a protective function during seed germination, a discussion on its functions during development is absent from the literature.Here, we summarize current observations and draw attention to the seed coat's additional functions during development and germination. Nutrient supply during seed development The inner layer of the seed coat derived from the outer integument is destined to aid in nutrient transfer to the developing embryo, especially when a haustoria-like suspensor is present.As shown in E. ibaguense and P. tankervilliae, the inner layers of the seed coat derived from the outer integument remain alive with thin walls during the early stages of embryo development.The walls stain purple with the TBO stain.This polychromatic stain can distinguish lignin, cellulose, and pectic substances based on color differences (O'Brien et al. 1964;O'Brien and McCully 1981).The purple-color reaction towards the TBO stain indicates the absence of phenolic compounds in the wall, which can impede the apoplastic transport process.Furthermore, the absence of autofluorescence and Nile red stain in these cell layers indicates the lack of lipidic and phenolic compounds in the walls (Yeung et al. 1996).These features enable the suspensor to obtain nutrients apoplastically through the walls of the seed coat and translocate them to the embryo proper.Our earlier study demonstrates that the suspensor cell of P. 
tankervilliae has a more negative osmotic potential than neighboring cells, providing a driving force for the uptake of water and nutrients from adjoining seed coat cells (Lee and Yeung 2010).A recent comparative study using suspensors of Arabidopsis and beans indicates that genes involved in transport and Golgi body organization are upregulated in the suspensor (Chen et al. 2021), indicating that the suspensor has unique physiological properties.By positioning itself next to the source of nutrients, i.e., the thin seed coat cells, nutrient acquisition for the embryo can be achieved. Carapace formation for added protection in seed dispersal and germination The term carapace is defined as a protective shell.It originates from the inner integument and wraps around the embryo (Veyret 1969;Rasmussen 1995;Lee et al. 2005;Yamazaki and Miyoshi 2006).This structure is common in temperate, terrestrial orchids such as Dactylorhiza species (Custódio et al. 2016) and Paphiopedilum species (Lee et al. 2006).The thickness of the carapace varies.Synthesis and deposition of phenolic compounds occur before the inner integumentary cells collapse, offering further protection to the embryo.In Cephalanthera falcata (Yamazaki and Miyoshi 2006), a carapace is readily detected and wrapped tightly around the embryo.A thin carapace is seen in C. formosanum (Lee et al. 2005), while Limodorum (Veyret 1969) and C. plectrochilum (Fig. 7) have a relatively thick carapace. From the case histories shown earlier, it is clear that a carapace cannot be formed in embryos with a haustoria-like suspensor.As seen in E. ibaguense, P. tankervilliae as well as Phalaenopsis (Additional file 3; Lee et al. 2008), the rapid elongation of the suspensor and the growth of the embryo tend to rupture the inner integument preventing carapace formation.Moreover, to fulfill a protective function, we propose that the term 'carapace' should be applied to those seeds with a distinct inner layer derived from the inner integument, having lipidic and or phenolic deposits incorporated in the cellulosic walls.The added compounds serve to provide added protection to the embryo in addition to the seed coat. It is well established for in vitro seed germination that carapace is one of the major causes inhibiting mature seed germination.Veyret (1969) noted that seeds with a particularly well-developed carapace, such as Cephalanthera and Epipactis species (Additional file 2), germinated with difficulty.The carapace acts as a barrier to water and nutrient absorption.Sonification modifies the carapace through physical scarification and improves germination (Miyoshi and Mii 1988).Stratification of the seed coat using NaOCl improves tetrazolium staining in seeds with a thick carapace (Custodio et al. 2016).Seed pretreatment could improve seed coat hydrophilicity and permeability, allowing germination (Miyoshi and Mii 1998;Lee et al. 2007;Lee 2011;Šoch et al. 2023). The presence of a carapace is important to the survival of orchid seeds in their natural environment.The carapace is more often found in seeds of temperate terrestrial orchids (Rasmussen 1995;Lee et al. 2005;Yamazaki and Miyoshi 2006).Besides, functions as an additional protective layer, can the presence of a carapace result in seed coat-imposed dormancy, regulating seed germination in its natural habitat?In the temperate region, seeds are shed in the autumn.A carapace may protect the embryo, allowing the seeds to survive the winter months and delaying germination until spring. 
Seed coat features allow water uptake during germination As indicated above, the inner seed coat layer cells have no phenolic deposits and will not pose as an apoplastic barrier for water movement during seed imbibition.Even though the cells have collapsed as the seeds dry, the walls can still serve as channels for the apoplastic movement of water and water-soluble materials during germination. Particular structural adaptations for water uptake and storage have been noted in the seed coat of Sobralia dichotoma (Prutsch et al. 2000).The seed coat in S. dichotoma consists of different cell types, i.e., helical tracheoidal cells and collapsed cells with walls rich in pectin.Imbibition leads to uncoiling, stretching the helical tracheoidal cells forming a pipe, and shaping a central capillary.The reversible movement of the helical tracheoidal cells is interpreted as a mechanism of water uptake (uncoiling) and -storage (coiling).The pectin-rich cells may function in water storage, thereby protecting the mature embryo against desiccation.This intricate design demonstrates that orchid seed coat can have specialized functions even though cells are no longer alive. The varied thickened seed coat adapts to the seed dispersal mechanism Although most orchids have a thin seed coat at maturity, orchids with fresh, colorful fruits and thick-walled seeds are designed for zoochory.As shown in Apostasia nipponica (Suetsugu 2020), C. javanica (Yang and Lee 2014), Cyrtosia septentrionalis (Suetsugu et al. 2015), Neuwiedia singapureana (Zhang et al. 2021) and Yoania japonica (Suetsugu 2018a, b), these species have fleshy fruits containing seeds with a thick seed coat.The thickened and lignified seed coat protects seeds from the digestive enzyme as they pass through the digestive tracts of birds.Moreover, the digestive process modifies the seed coat, enhancing germination (Zhang et al. 2021).A similar observation is well documented in Vanilla species (Pansarin and Ferreira 2021; Yeh et al. 2021).It is likely for those orchid species with fresh fruit and seeds, having a thick seed coat is an adaptation to their elected reproductive strategies.In their review, Coen and Magnani (2018) recently indicated, "the seed coat architecture evolved to adapt to different environment and reproductive strategies in part by modifying its thickness."The varied number of integuments and thickness of the seed coat found in orchid species are likely to be adaptive features for seed dispersal and germination. The seed coat directs the entry of fungal hyphae through the micropyle during symbiotic seed germination For symbiotic seed germination, the successful penetration and establishment of a compatible mycorrhizal fungus into the embryo ensures protocorm formation.Although the seed coat structure is simple, its design is part of the strategy ensuring success.The prominent micropylar opening is a clever design providing an initial site of entry for the mycorrhizal fungi for most orchid species.In Bletilla striata, embryos with the seed coat removed result in a lower germination rate than intact seeds infected with appropriate symbiotic fungi (Miura et al. 2019).This finding indicates that restricting the invasion of fungal hyphae at the initial stage of fungal colonization allows proper symbiotic establishment.The entry of mycelium through the micropyle into the degenerated suspensor of the embryo is one of the preferred pathways (Yeung et al. 
2019), ensuring that the 'planned' sequence of events, such as peloton formation, can occur, resulting in protocorm growth and development. In Caladenia tentaculata, the embryo produces a UV autofluorescing substance which gradually recedes towards the suspensor region near the micropyle (Wright et al. 2005). Although its nature and function are unknown, this substance may interact with compatible mycorrhizal fungi, establishing symbiotic interactions. It is important to note that even though the suspensor has degenerated, the absence of cuticular materials in its wall enables the ready penetration of mycelium into the embryo proper. The seed coat at the chalazal end can also accommodate the expansion of the embryo, forming a tight fit over the embryo at the chalazal end during the early stages of germination. This safeguards the embryo's future shoot apical zone against the entry of fungal hyphae, allowing proper shoot development. Moreover, the compatibility between the fungi and the orchids is a critical factor in determining the ultimate success of symbiotic seed germination (Chen et al. 2022). Although the function of the seed coat is 'passive', the structural design enables it to play a role in the early stages of symbiotic seed germination. Perspective The orchid seed coat has a simple structure. The minute size of the seeds makes this a difficult experimental material to study. Moreover, their simple organization is likely an adaptation to reproductive strategies. This review draws attention to aspects of seed coat structures and their potential functions during seed development and germination. Moreover, many important questions remain. For example, does the seed coat have a morphogenetic role in embryo development without an endosperm, besides nutrient supply? In flowering plants, there is a close interplay between the endosperm and seed coat formation (Ingouff et al. 2006; Wang et al. 2021). Without endosperm, could a similar process occur between the orchid embryo and the seed coat? With an improved appreciation of seed coat development and function, we can focus on studying key processes such as nutrient transfer between the seed coat and the embryo and the biosynthesis of secondary metabolites in carapace formation. Our understanding of the molecular control of seed coat development still has many gaps (Matilla 2019). Recently, a MADS-box gene, PeMADS28, has been identified in the orchid Phalaenopsis equestris and has been shown to play an essential role in ovule integument development (Shen et al. 2021). Is there a seed coat-specific promoter in orchids that regulates integument and seed coat development? More studies are needed on molecular genetics and gene functions during development. We also see the potential of the seed coat system in unraveling new regulatory mechanisms and providing new perspectives on plant biology. The recent successful use of the RNA-seq method with the laser microdissection technique described by Millar et al. (2015) and Balestrini et al. (2021) can provide precise answers to the questions posed. With further refinement in cell isolation techniques, it would be possible to apply single-cell RNA sequencing technology (Xu and Jackson 2023) to study specific events in integument and seed coat development.
Fig. 1 The ovule and seed development of Epidendrum ibaguense. A The archesporial cell enlarges and differentiates into the megasporocyte and is enveloped by a single layer of nucellar cells (arrowhead). At the same time, both the inner (*) and outer (arrow) integuments have developed. Scale bar = 20 μm. B A mature embryo sac (arrowhead) showing the egg apparatus. The inner integument (*) is well developed at the micropylar end, forming the micropyle. The outer integument has extended beyond the inner integument as the ovule matures. Scale bar = 50 μm. C After fertilization, the zygote (arrowhead) has a dense cytoplasm with a prominent nucleus and some starch deposits (small red dots). The inner integumentary cells (*) at the micropylar end become densely cytoplasmic; each cell has a distinct nucleus. The walls of the inner integumentary cells thicken, and wall ingrowths are present. Scale bar = 20 μm. D A lower magnification micrograph giving a general overview of the contrasting staining intensity between the inner and outer integuments. The fertilized ovule and the inner integument have a stronger staining intensity compared to the vacuolated outer integumentary cells. Scale bar = 150 μm. E A narrow funiculus connects the developing seed to the maternal placental tissue. Mitotic activity (arrowhead) can be discerned at the time of fertilization. Scale bar = 40 μm. F As the proembryo increases in size and the suspensor begins to protrude beyond the opening of the inner seed coat, the cells of the inner seed coat (arrowhead) gradually become compressed. Scale bar = 50 μm. G The embryo continues to increase in size. As a result, the inner seed coat is crushed, and only remnants (arrowhead) remain adhering to the embryo proper. Scale bar = 50 μm. H Light micrograph showing a portion of the suspensor (arrowhead) pressing against the walls of the seed coat cells. The inner layers of the seed coat stain purple with the TBO stain, indicating the absence of phenolic compounds in the walls. Scale bar = 10 μm. I Fewer mitotic divisions in the inner chalazal cells result in creating a cavity (*) during seed development. Scale bar = 60 μm. J Light micrograph showing a TBO-stained section of a mature seed coat (arrowhead). Judging from the staining reaction, there is a preferential deposition of lignin in the seed coat's inner periclinal and radial walls. Scale bar = 50 μm
Fig. 4 The ovule and seed development of Gastrodia nantoensis. A The archesporial cell is differentiating into a megasporocyte. Cell division (arrow) near the ovule's chalazal end signifies the integument tissue's initiation. Scale bar = 20 μm. B The second meiotic division results in the formation of two megaspores of unequal size. At the same time, the initiation of integument tissue is becoming visible (arrow). Scale bar = 20 μm. C A longitudinal section through a mature embryo sac showing the egg apparatus (*). The integument tissue (arrow) has completely enclosed the embryo sac at this stage. Starch grains (arrowhead) start to accumulate in the integument tissue. Scale bar = 20 μm. D At the time of fertilization, the pollen tube (arrowhead) penetrates the embryo sac, and the integument tissue elongates further and becomes the seed coat (arrow). Scale bar = 20 μm. E Light micrograph showing a proembryo with a suspensor cell (S). Scale bar = 20 μm. F A longitudinal section through a developing globular embryo. At this stage, the nucellus (arrowhead) gradually compresses, and large starch grains (double arrowheads) are abundant in the cells of the embryo proper and the seed coat. The suspensor cell (S). Scale bar = 20 μm. G As the seed approaches maturity, starch grains (double arrowhead) are prominent within the embryo cells, and the suspensor cell (S) has reduced its size and begins to degenerate. At this stage, the nucellus (arrowhead) has compressed and degenerated. Scale bar = 20 μm. H At maturity, the embryo has smaller cells near the chalazal end and larger cells in the micropylar end. The suspensor has degenerated at this stage, and the embryo proper is enveloped by a shriveled seed coat (arrow). Scale bar = 20 μm. I Nile red staining fluorescence micrograph of a mature seed at the same stage as that seen in Fig. 4I. The seed coat (arrow) and the surface wall (arrowhead) of the embryo proper react positively to the stain. Scale bar = 20 μm. Fig. 6 The formation of carapace in Cypripedium formosanum seeds. A Light micrograph showing a cross-section through a developing seed at the globular stage. The embryo is enclosed by the inner seed coat (arrowhead) from the inner integument. The tangential and radial walls of the outer seed coat (arrow) have thickened. Scale bar = 30 μm. B Light micrograph showing the fluorescence pattern of a cross-section through a developing seed at a stage similar to A after Nile red staining. The surface wall of the embryo proper possesses fluorescent signals (double arrowheads). In addition, the tangential and radial walls of the outer seed coat (arrow) and the inner and outer surface walls of the inner seed coat (arrowheads) fluoresce brightly. Scale bar = 30 μm. C Electron micrograph showing the adjoining region of the embryo proper cell and the cell of the inner seed coat at the globular stage. Osmiophilic lipid bodies (OL) have accumulated within the embryo proper cell (EP), and a distinct osmiophilic layer (arrowhead) is present in the inner surface wall of the inner seed coat (IS). N, nucleus. Scale bar = 2 μm
8 The seed coat development of Vanilla planifolia.A The seed coat consists of an inner seed coat (IS, two cells thick) and an outer seed coat (OS, three to four cells thick).At the time of fertilization, the cell wall of the outermost layer of the outer seed coat remained primary in nature.Scale bar = 20 μm.B In the globular embryo stage, the cell wall of the outermost layer of the outer seed coat (OS) thickens, and the inner seed coat (IS) becomes dehydrated and compressed.Scale bar = 20 μm.C As the seed matures, the thickened outermost layer of the outer seed coat and the inner layers gradually dehydrate and compress.The inner seed coat (IS) has compressed into a thin layer at this stage.Scale bar = 20 μm.D At maturity, both the thin inner seed coat (IS) and the thickened outer seed coat (OS) compress and envelop the embryo E tightly.Scale bar = 20 μm in Yeung 2022) draws attention to the special cytological features.When re-examining reports on orchid ovule development, increased staining intensity in the inner integument is often noted, e.g., Oncidium flexuosum (see Figs. 24-29 in Mayer et al. 2011), Acianthera johannensis (see Fig. 5 in Duarte et al. 2019) and Dendrobium nobile (see Figs. 4 e and f in
Stress, burnout and work engagement among physicians of the state of Paraná, Brazil Introduction Precariousness of medical work, with loss of autonomy and devaluation, in addition to unstable and non-guaranteed employment bonds, has caused health problems in professionals, hampering the assistance provided. Research shows a high prevalence of stress, burnout, depression, and suicide among physicians. This study investigated aspects of mental health in physicians from the state of Paraná, Brazil. Objectives We aimed to evaluate indicators of stress, burnout, and work engagement measured by inventories specifically designed for each one: Stress Symptoms Inventory, Burnout Syndrome Inventory, Utrecht Work Engagement Scale, respectively. Methods The professionals answered the questionnaires remotely, after accepting the invitation for the study and signing the consent form. Results A total of 1,201 physicians answered the questionnaires, with mean age of 37 years; 53.9% of participants were women; 63.5% graduated in Paraná. Of the total number of participants, 97.5% and 93.4% presented psychological and physical symptoms of stress, respectively. According to the Inventário da Síndrome de Burnout, the prevalence of diagnosis of burnout was estimated at 59.4%. As for work engagement, 40% of participants showed very high levels in the overall score of the construct. Conclusions Most physicians showed signs of stress; burnout rates were high; negative organizational conditions prevailed in the work environment; work engagement was frequent. INTRODUCTION Valued for its social role, medicine has been discredited and undergone radical changes in its practice.Precarious working conditions leading to suffering, stress, and anxiety are a no longer questioned reality in medical professional practice, since work organization is the main driving force of workers' mental life. 1,2edical work is characterized by excessive workload, extended working shifts, long-distance duties, and little autonomy, and physicians have an average of two to four employment bonds, reaching up to 20 bonds.This situation generates several ethical and technical problems, in addition to increased levels of burnout, illness, and frequency of involvement in traffic accidents with victims, sometimes fatal. 1,2umerous problems generate stress in physicians, including idealized behavior, intense and frequent contact with pain, suffering, death, and dying; dealing with intimacy and with difficult patients; uncertain and limited medical knowledge. 1,2Nonetheless, physicians continue working with dedication, and the number of individuals dropping out the medical profession or the medical school is low, possibly because this profession maintains an "artistic" component, 1 and medicine has been historically considered a ministry. Stress, conceptualized as a "general adaptation syndrome," is related to a strategy of the human body to adapt to changes, demands, disappointments, and other everyday events.People usually experience a certain level of stress, only varying in its intensity. 3,4Inability to overcome stressful experiences wears individuals down, and the results of this process depends on its degree, frequency, and duration. 
4tress is manifested in stages: in the beginning, individuals encounter a source of stress, then attempt to recover themselves from physiological changes, returning to balance; failure to achieve this balance leads to exhaustion.It is a dynamic process in which thoughts, feelings, behaviors, and biophysiological mechanisms attempt to adapt themselves. 3,4The results of the response will depend on individual and social differences, cultural characteristics, and individual behavioral adaptive patterns. 4tress at the workplace is unavoidable, due to the natural contemporary demands.According to the World Health Organization (WHO), 5 occupational stress is manifested when professionals are presented with demands and pressures that are not matched to their knowledge and abilities and which challenge their ability to cope.These demands may occur at the organizational level, such as company's culture and structure; at the group level, such as scarce teamwork and rivalries; and at the individual level, such as unclear and conflicting roles, limited work. Physicians and nurses are more prone to chronic occupational stress, because they are exposed to multiple factors and sources of stress in their everyday work. 3,6ndividual's adaptation to a stressful situation requires coping strategies, not always leading to good results. 3When these strategies fail, diseases such as burnout syndrome may occur. 7,8rganizational factors are extremely important in the study of burnout syndrome, because they may provide resources for the maintenance/promotion of well-being at work, but they may have an opposite impact if these resources are lacking.Organizational conditions may be positive (POC), which are factors that facilitate activities and associated with organizational resources and work engagement; or negative (NOC), associated with occupational costs and demands and related to burnout and stress. 7urnout syndrome is characterized by high levels in three dimensions: emotional exhaustion (EE), a manifestation of chronic stress; depersonalization or dehumanization (DEs), formerly known as "cynicism", characterized by negative attitudes; and reduced professional accomplishment (rPA), defined as reduced productivity, low morale, and inability to work. 8,9This syndrome is manifested as organic or psychological symptoms, such as sleep disturbances, muscular or musculoskeletal pain, headache, gastrointestinal or cardiovascular disorders, among others, in addition to symptoms involving feelings and emotions. 1,9mong physicians, burnout syndrome affects quality of care, safety, disease evolution, and degree of patients' satisfaction; at work, it interferes with team dynamics and with institution's financial health. 10ianchi et al. 11 believe that EE is more associated with depressive symptoms and that chronic stress at work may lead to burnout and cause depression; they added that antidepressants improve burnout. 11,12Schonfeld & Bianchi 12 add that they found an association between burnout and depressive symptoms and anxiety.According to Maslach & Leiter, 9 burnout syndrome is a specific occupational dysphoria and depression is a mental disease different from burnout.Oquendo et al. 13 believe that there is a reluctance to diagnose depression in physicians; therefore, they refer to burnout, a problem related to the workplace that is not an endogenous condition requiring psychiatric treatment. 
13][15] With the emergence of positive psychology, which privileges the study of health aspects, work engagement emerges as a construct referring to a positive cognitive state, present over time, of a motivational and social nature. 16,17][18] Engaged professionals are more likely to be committed to activities and have satisfactory results both quantitatively and qualitatively, in addition to being less likely to develop burnout syndrome. 17,19his research aimed to evaluate the prevalence of symptoms of stress and burnout syndrome, as well as organizational conditions and work engagement, among physicians working in the state of Paraná, Brazil. METHODS This is an empirical, cross-sectional, quantitative, ex post facto, descriptive study. In October 2018, an invitation letter was sent to all physicians registered at Paraná's Regional Council of Medicine (Conselho Regional de Medicina do Paraná, CRM-PR) up to February 1 st , 2017.A link was made available to those who agreed to participate in the research, in order for them to access the research protocol and answer it online.Due to the low adherence, a new letter was sent 30 days later, emphasizing the importance of the study, with a 10-day deadline to answer the questionnaires. The protocol, made available through the Qualtrics platform, contained a social and professional questionnaire followed by questions related to the specific tests: Stress Symptoms Inventory (Inventário de Sintomatologia de Estresse, ISE), 20 Burnout Syndrome Inventory (Inventário da Síndrome de Burnout, ISB), 7 and Utrecht Work Engagement Scale (UWES). 16he ISE comprises 27 statements considered to indicate stress, subdivided into Physical Symptoms (PhyS, 7 items), e.g., "I have been feeling fatigue," and Psychological Symptoms (PsyS, 20 items), e.g.,: "I feel angry and impatient," measured by a 5-point Likert scale, with 0 corresponding to "never" and 4 corresponding to "frequently". The ISB, developed in Brazil to cover several areas of occupational practice, has two parts: the first one assesses individuals' perception of their work environment ("antecedent factors"), with POC items (related to a good work environment), e.g.: "I feel that I am effectively part of a working team," and NOC (defining an unfavorable work environment), e.g.: "Bureaucracy takes most of my time at work." The second part evaluates burnout syndrome and includes 19 items distributed into the following dimensions: EE (n = 5), e.g.: "I feel that my work has consumed all my energy"; PA (n = 5): "My work fulfills me professionally"; DEs (n = 4): "I've had to toughen up to maintain my job;" and emotional withdrawal (EW) (n = 5): "I realize that I avoid closer contact with people at work." To diagnose the syndrome, individuals should have high scores in EE, DEs and EW dimensions, and low scores in the PA dimension. 3,7he UWES, version for workers, 16 assesses work engagement in general and in specific dimensions: VI (n = 6), such as "At my work, I feel bursting with energy"; DE (n = 5), "I find the work that I do full of meaning and purpose"; and AB (n = 6), "Time flies when I'm working." Answers were given on 7-point Likert scale ranging from 0 to 6 (0 = never and 6 = always/everyday).The instrument and its manual were translated and adapted by Agnst et al. 
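To make the scoring logic of the instruments described above concrete, the sketch below shows one way the ISB dimension scores and the stated diagnostic rule (high EE, DEs and EW together with low PA) could be operationalized. The item-to-dimension mapping, the use of mean scores, and the cut-off value are illustrative assumptions introduced here; the inventory's actual item keys and normative cut-offs are defined in its manual and are not reproduced in this paper.

# Illustrative sketch only: item indices, the use of mean scores, and the
# cut-off below are assumptions, not the published ISB norms.
from statistics import mean

# Hypothetical mapping of the 19 ISB part-2 items to dimensions
# (counts follow the text: EE = 5 items, PA = 5, DEs = 4, EW = 5).
DIMENSIONS = {
    "EE":  [0, 1, 2, 3, 4],        # emotional exhaustion
    "PA":  [5, 6, 7, 8, 9],        # professional accomplishment
    "DEs": [10, 11, 12, 13],       # depersonalization / dehumanization
    "EW":  [14, 15, 16, 17, 18],   # emotional withdrawal
}

# Hypothetical cut-off on the 0-4 Likert scale: "high" means mean >= 2.5,
# "low" means mean < 2.5 (the real instrument uses normative cut-offs).
HIGH = 2.5

def isb_dimension_scores(answers):
    """answers: list of 19 Likert responses (0-4) to ISB part 2."""
    return {dim: mean(answers[i] for i in items) for dim, items in DIMENSIONS.items()}

def meets_burnout_rule(scores):
    """Diagnostic rule described in the text: high EE, DEs and EW plus low PA."""
    return (scores["EE"] >= HIGH and scores["DEs"] >= HIGH
            and scores["EW"] >= HIGH and scores["PA"] < HIGH)

if __name__ == "__main__":
    example = [3, 4, 3, 3, 4,  1, 1, 2, 1, 1,  3, 3, 4, 3,  3, 2, 3, 3, 3]
    s = isb_dimension_scores(example)
    print(s, "burnout rule met:", meets_burnout_rule(s))

Any real scoring would substitute the published item assignments and norms for these placeholders.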
Participation in the research was voluntary, and only active medical professionals who answered the questions related to social and professional identification and to the instruments were considered. The participants were informed that they would not be identified and that information would be processed as one data set.
STATISTICAL ANALYSIS
Results were calculated using the SPSS, version 21, and AMOS, version 18 statistical software. Analyses included descriptive analysis [means (M), standard deviation (SD), and calculation of scores for each instrument], reliability (Cronbach's alpha), correlation analysis (Pearson correlation), differences in means (Student's t-test and ANOVA), and confirmatory factor analysis (structural equation modeling). A 95% confidence interval was used in all analyses.
ETHICAL CONSIDERATIONS
The work complied with the standards of Resolution no. 466/12 of the Brazilian National Health Council and was approved by the Research Ethics Committee of a higher education institution (HEI) under number CAE 2269960, in September 2017, on Plataforma Brasil.
RESULTS
Among the 23,524 physicians to whom the questionnaire was sent, only 1,334 (5.7%) returned it, some of them partially completed, and 133 physicians were excluded because they only answered the identification questions. The 1,201 (5.1%) participants who answered most questions were included in the analyses, in order to avoid further losses; thus, the total (n) is not the same for the different aspects researched.
Out of the 1,201 participants, 647 (53.9%) self-reported as female and 553 as male (46.0%); only one self-reported as belonging to another gender. Among those who informed their age, mean age was 37 years (ranging from 24 to 81 years); 770 respondents (64.2%) were 42 years or younger, whereas 104 (8.7%) were 60 years or older; 214 physicians (17.8%) reported being younger than 30 years. At the time of data collection, 880 (83.2%) out of 1,058 physicians were in a stable relationship. Among those who informed where they earned their medical degree, 762 (63.5%) studied at a HEI in the state of Paraná, 100 (8.32%) in the state of Santa Catarina, 70 (5.8%) in the state of Rio Grande do Sul, 62 (5.2%) in the state of São Paulo, and 60 (5.0%) in the state of Rio de Janeiro. The other participants reported having graduated in other Brazilian states or in other countries, such as Argentina, Bolivia, Ecuador, Portugal, and Cuba, totaling 128 physicians (10.7% of the sample). Considering those who earned their medical degree in the state of Paraná, most of them graduated from Universidade Federal do Paraná (40.7%).
With regard to academic degree, 190 obtained an undergraduate degree; 460 completed medical residency, and 343 had a specialization (without specifying whether it was residency or another type of specialization - some participants had more than one); furthermore, 141 had a master's degree; 67 had a doctoral degree, and 16 had a post-doctoral degree.
Among the 1,058 physicians who answered most questions, 47.6% were or had already been on psychotherapeutic or psychiatric treatment; 53% reported taking medications of continuous use, and 29% used controlled medications; 74.5% believed that medical practice was a source of stress, and 46.8% had already thought of changing profession.
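Before turning to the instrument results, the snippet below illustrates the reliability criterion used throughout the analyses (Cronbach's alpha > 0.7). It is a generic sketch on simulated Likert responses rather than the study data, and the original computations were carried out in SPSS/AMOS rather than in code like this.

# Generic sketch of the Cronbach's alpha computation used as the reliability
# criterion (alpha > 0.7); toy data, not the study's responses.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(0, 5, size=(100, 1))       # shared "trait" component
    noise = rng.integers(-1, 2, size=(100, 7))     # item-specific noise
    responses = np.clip(base + noise, 0, 4)        # 7 correlated Likert items
    alpha = cronbach_alpha(responses)
    print(f"alpha = {alpha:.2f}  (criterion: alpha > 0.7)")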
In the results obtained with the instruments used to investigate signs of stress, burnout syndrome, and engagement, there is variation in the sample population (n), because some participants did not complete the tests. Among the 1,057 who answered most items in this phase, 473 (44.7%) were men and 584 (55.2%) were women.
As for the ISE, 97.5% of the physicians had high levels of PsyS, and 93.4% of PhyS. According to the results presented in Table 1, reliability indexes were higher than 0.7, a result considered adequate with regard to the criterion α > 0.7. With regard to confirmatory factor analyses, results were also adequate, attesting to the quality of the model for the present sample: comparative fit index (CFI) = 0.8; adjusted goodness-of-fit index (AGFI) = 0.8; root mean square error of approximation (RMSEA) = 0.1 (0.08 more specifically). Regression indexes ranged from 0.4 (ISE17-PsyS) to 0.8 (ISB10-PhyS), being thus higher than 0.4, which indicates that these items are constitutive components of their scales.
Results for the ISB, presented in Table 2, show high scores for NOC in 91.9% of participants (n = 972), whereas scores for POC had an almost equal distribution between high (34.6%, n = 366), medium, and low scores. Mean values were lower for POC (M = 2.04; SD = 0.8) and higher for NOC (M = 2.9; SD = 0.3) compared to those found in a validation study of the scale with 604 participants with different occupations. 7 With regard to burnout syndrome, most participants presented high scores in the following dimensions: EE (87.1%), DEs (96.3%), EW (76.3%) and rPA (88.1%) (Table 2). The results for Cronbach's alpha tests were considered adequate (>0.7), all ranging around 0.9. Confirmatory factor analyses also showed adequate results, attesting to the quality of the model for the present sample: CFI = 1.0; AGFI = 0.9; RMSEA = 0.1. Regression indexes ranged from 0.6 (ISB1-DEs) to 0.8 (ISB8-DEs), being thus higher than 0.4, which indicates that these items are constitutive components of their scales.
The diagnosis of burnout syndrome according to already established criteria 7 could be inferred in 628 physicians, accounting for 59.4% of the 1,058 who completed the instrument; 266 men (42.4%) and 362 women (57.6%).
The results obtained with the UWES revealed that most professionals had very high, high, or medium levels of engagement. Approximately 40% of them showed very high levels in the four dimensions assessed. Means ranged from 3.64 to 4.11, with the following distribution across dimensions: VI (M = 3.6; SD = 1.3), DE (M = 3.9; SD = 1.1), AB (M = 4.1; SD = 1.0), and overall engagement (M = 3.9; SD = 1.1), indicating mostly moderate and high levels of work engagement. Results are described in Table 3. Reliability rates were adequate, ranging from 0.8 (AB) to 0.9 (UWES). Results of confirmatory factor analyses were also adequate, attesting to the quality of the model for the present sample: CFI = 0.9; AGFI = 0.9; RMSEA = 1.7. Regression indexes ranged from 0.5 (UWES9-AB) to 0.9 (UWES2-VI), being thus higher than 0.4, which indicates that these items are constitutive components of their scales. Mean UWES scores were higher than those presented in the data from the instrument's manual. 16
The Pearson correlation test between the scales showed that all results were statistically significant and coherent with the theoretical premise of a positive correlation between stress and burnout and a negative correlation between work engagement and these variables. 19,22
Pearson correlation of ISE scores was 0.6. With regard to the ISB, the highest value was found between POC and NOC (r = -0.7), and the lowest between NOC and PA (r = -0.4). With regard to the UWES, all correlations were higher than r = 0.8 (VI-AB).
With regard to inter-instrument correlations, the highest value was found between PA and DE (r = 0.8), which indicates that PA tends to be closely related to dedication to work activities. Conversely, the lowest correlation was observed between NOC and AB at work (r = -0.3), indicating that the strength of the correlation between these variables, although significant, is the weakest among those analyzed.
DISCUSSION
One of the limitations of the study was the number of questionnaires returned, which was below the expected, a fact that may be explained by the size of the questionnaires, since each inventory consisted of several questions requiring attention from respondents; in general, physicians do not usually volunteer their limited time for that. Another limitation was the non-probabilistic sampling technique used in the study, which prevents generalization of the results to the population of physicians in Paraná.
It is worth noting the high number of professionals who were or had already been on psychiatric or psychotherapeutic treatment and of those using continuous and/or controlled medications, suggesting that most physicians in the sample are experiencing physical and/or emotional stress. This fact may have influenced their decision to answer the questionnaire completely; for them, the study may have represented a cry for help. Those who did not answer the questionnaire were not interested in the subject or did not want to expose themselves, despite being informed about the confidential nature of the research.
Another important aspect is that most physicians (74.5%) believed that medical practice was a source of stress and that almost half (46.8%) had already thought of changing profession. Unfortunately, it was not possible to correlate these findings with demographic data, because working bonds, hours, and places varied significantly, thus hampering the compilation of answers and the comparison of test results.
The analyses of answers to the ISE allowed us to conclude that participants are in a stressful situation, with a very high percentage of them showing high levels of PsyS and/or PhyS, with higher mean scores compared to those of studies with other professional categories. 19,22 It is known that the strategies used by individuals to adapt to a stressful situation, either cognitive or material, are not always successful. 3 When they fail, it may result in diseases such as burnout syndrome and depression, among others. 1,2 The results obtained by the ISB may be indicative of this process. The fact that most physicians were relatively young, with a mean age of 37 years, thus at the beginning of their career, partially explains these findings, since occupational stress is more frequent in this phase. 1,2,5 However, these findings do not rule out the possibility that older professionals and those with longer professional experience develop signs and symptoms of occupational diseases such as those investigated in the present study. Only 8.7% of participants reported being 60 years or older, and it was not possible to establish a relationship between belonging to this age group and having the aforementioned diseases or other chronic diseases.
Most participants reported being in a stable relationship at the time of the research, an aspect that would have a protective effect against stress, as would regular physical activity, which, however, was reported by only 20% of respondents.
The results for the tests of the first part of the ISB indicated that an unfavorable work environment predominated in the sample, as shown by high scores in NOC, which may undesirably interfere with work activities and contribute to the development of stress and burnout.
Burnout indicators had higher mean scores than those found in the study by Benevides-Pereira et al. 23 with 701 professionals from industries. The diagnosis of burnout syndrome could be inferred in almost 60% of physicians and was more frequent in women (362; 57.6% of 628). This high percentage of burnout suggests that the work environment is compromising the health of these professionals, as shown by the predominance of NOC, considered factors preceding the syndrome. 23 It is necessary to develop occupational health strategies to ensure safety and maintenance of health, technical quality, and satisfactory productivity for these professionals.
In a study conducted by Lima et al. 24 including 134 (22.3%) of the 600 physicians from a hospital in Rio de Janeiro, Brazil, the percentage of burnout was 10%, a value considerably lower than that reported in the present study, although 82.1% of physicians in their study showed high levels in at least one of the dimensions of the syndrome. Schwartz 25 published the results of a study with 1,838 Brazilian physicians, in which 8% of professionals reported suffering from depression and 26% from burnout, whereas 11% reported suffering from both conditions; this study was based on self-reported diagnosis.
The results for the UWES 16 indicated that, in general, professionals were engaged with their work. Approximately 40% of them had very high levels in the four dimensions assessed: VI, DE, AB, and overall engagement, with AB and overall engagement showing the highest means. Similar results were obtained by Teixeira et al. 26 in a study conducted in Brazil using the same instrument and including 36 pediatric residents, with higher means for the DE dimension. International studies with the UWES, such as an investigation with 123 surgeons in Germany, revealed that participants showed higher levels of overall engagement than those reported in the present study (M = 4.4; SD = 0.9), 27 a more desirable result for the construct. In a sample of 111 workers from a high-complexity surgical unit in Cali, Colombia, Ortiz & Jaramillo 28 found higher mean scores for overall engagement (M = 4.9), scores also higher than those reported for the sample from the state of Paraná, Brazil. These authors 28 also point out that these findings indicate a positive mental state, with commitment, persistence, and work enthusiasm, but they note that this does not rule out the issue of occupational stress.
The findings obtained in the present research coincide with those of other authors who also questioned why, despite precarious working conditions and high levels of stress, most physicians are always willing to work, sometimes working overtime, a fact that should be further analyzed.
14,24,29ngagement may have contributed to protecting against burnout and in strategies to cope with stress, a desirable aspect in occupational health psychology, indicating a direction and the concrete possibility of transforming work processes to become them health promoter. 1Engagement reinforces the effects of the available organizational resources, contributing to increase in performance and organization well-being. 17,18here was a positive correlation between stress and burnout, and a negative correlation of work engagement with stress and burnout; a result already expected according to the theoretical premise. 19,22Interinstrument correlations showed that PA is associated with dedication to work activities and that NOC are negative correlated with AB at work. It is important that professional organizations and health managers reflect on and suggest health promotion and well-being actions of these professionals.In general, the few intervention programs that may improve physicians' health state act only at the individual levels, advising on how to deal with and reduce stress. 15rganizational intervention programs are rare and should be encouraged. Ensuring occupational safety should be the goal of professional organizations and health managers, paying attention to factors such as number of working hours, high demand of activities, sleep deprivation, and degree of autonomy, which affect occupational health and put professionals under pressure, leading them to EE, which has a confirmed association with depression. 14There is the need for studies on the process of medical work in its multiple forms, when searching for an improvement in quality of life and health of these professionals, which will certainly have a positive influence on their practices. 1,15 CONCLUSIONS 1.Although the number of physicians who answered the tests were below the expected, the aim of evaluating stress, burnout, and work engagement was achieved; 2. Most physicians in the sample experienced signs of stress; 3. NOC prevailed in the work environment; 4.There was a high number of individuals with burnout in the sample population; 5. Work engagement was frequent, despite the predominance of stress and NOC; 6.The instruments showed adequate reliability indexes and confirmatory factor analyses, as well significant correlations between the scales, denoting that the constructs were associated for the present sample. Further studies are recommended in order to better explain these results.It may be interesting to apply the tests in a random sample and, as much as possible, applying each instrument in different groups, in order to increase physicians' adherence, since long questionnaires are associated with withdrawal. It is suggested to analyze the associations between test results and demographic characteristics, in to investigate a possible cause-effect relationship, an aspect that was hampered in this study. HOMAGE The author Ana Maria Benevides Pereira participated actively in the development of the article, but unfortunately passed away before its publication.We thank her for her valuable contributions to this study.
Thymoquinone protects against hyperlipemia-induced cardiac damage in low-density lipoprotein receptor-deficient (LDL-R−/−) mice
Background Hyperlipemia is a risk factor for cardiac damage and cardiovascular disease. Several studies have shown that thymoquinone (TQ) can protect against cardiac damage. The aim of this study was to investigate the possible protective effects of TQ against hyperlipemia-induced cardiac damage in low-density lipoprotein receptor-deficient (LDL-R−/−) mice. Methods: Eight-week-old male LDL-R−/− mice were randomly divided into the following three groups: the control group fed a normal diet (ND group), the high fat diet (HFD) group, and the HFD mixed with TQ (HFD+TQ) group. All groups were fed the different diets for 8 weeks. Blood samples were obtained from the inferior vena cava, collected in serum tubes, and stored at -80 °C until use. Cardiac tissues were fixed in 10% formalin and then embedded in paraffin for histological evaluation. The remainder of the cardiac tissues was snap-frozen in liquid nitrogen for mRNA preparation or immunoblotting. Results The levels of metabolism-related factors, such as total cholesterol (TC), low-density lipoprotein-cholesterol (LDL-c), and high-sensitivity C-reactive protein (hs-CRP), were decreased in the HFD+TQ group compared with that in the HFD group. Periodic acid-Schiff staining demonstrated that lipid deposition was lower in the HFD+TQ group than that in the HFD group. The expression of pyroptosis indicators (NOD-like receptor 3 [NLRP3], interleukin [IL]-1β, IL-18 and caspase-1), pro-inflammation factors (IL-6 and tumour necrosis factor alpha [TNF-α]), and macrophage markers (cluster of differentiation [CD]68) was significantly downregulated in the HFD+TQ group compared with that in the HFD group. Conclusions Our results indicate that TQ may serve as a potential therapeutic agent for hyperlipemia-induced cardiac damage.
Introduction
Hyperlipemia is a critical damage-inducing element in cardiovascular disease (CVD) [1]; individuals with hyperlipidaemia have a higher risk of CVD compared with those with normal cholesterol levels [2]. Furthermore, increasing evidence has shown that dyslipidaemia-related cardiac damage is associated with lipid accumulation, oxidative stress, and inflammation [3,4]. Several researchers have investigated various drugs for treating hyperlipidaemia, such as statins; however, as these are related to the development of resistance in cells and as these are associated with adverse effects, new methods for treating hyperlipidaemia are needed. Thymoquinone (TQ) is the major constituent of Nigella sativa [5], commonly known as black seed or black cumin, and is globally used in folk (herbal) medicine for treating and preventing a number of diseases and conditions [6].
Previous studies have reported that TQ suppresses chronic cardiac inflammation [7], and regulates the expression of factors, such as vascular endothelial growth factor and nuclear factorerythroid-2-related factor 2 (Nrf2), thereby improving the antioxidant potential of the cardiac muscle. In addition, TQ alleviates diabetes-associated oxidative stress in cardiac tissues [8]. Additionally, several studies have shown that the protective effect of TQ against cardiac damage such as in case of ischemic damage [9] and acute abdominal aortic ischemia-reperfusion injury [10]is mediated via the pyroptosis pathway [11]. Recently, pyroptosis, an inflammatory form of programmed cell death [12], has been gaining increasing attention, especially during hyperlipemia [13,14]; however, the pathophysiological mechanisms underlying the relationship between hyperlipemia and cardiac damage are not yet fully understood. Therefore, in this study, we investigated the role of TQ in hyperlipidaemia-induced cardiac damage in a low-density lipoprotein receptor-deficient (LDL-R⁻/⁻) mouse model. Animal model LDL-R ⁻/⁻ mice were purchased from Beijing Vital River Lab Animal Technology CO., LTD. (Beijing, China). All mice were bred in a room with a 12/12-h light-dark cycle at a controlled temperature (24 -26 °C). Male LDL-R⁻/⁻ mice (8-week-old) were randomly divided into the following three groups: mice fed a normal diet (ND group, n = 8), mice fed a high-fat diet (HFD group, n = 8), and mice fed a highcholesterol diet + 50 mg/kg/day of TQ (HFD+TQ group, n = 8). The experimental diet was purchased from Shanghai Slac Laboratory Animal Co., Ltd. (Shanghai, China). Mice in all groups were fed with the appropriate diet for 8 weeks. Blood samples were acquired from the inferior vena cava, collected in serum tubes, and stored at -80 °C until use. Cardiac tissues were fixed in 10% formalin and embedded in paraffin for histological evaluation. The remaining cardiac tissues were snap-frozen in liquid nitrogen for mRNA isolation and immunoblotting analyses. The animal experiment was approved by the Animal Ethics Committee of Beijing Hospital. Biochemical measurements Sera were separated from the collected blood samples by centrifugation at 3000 rpm for 15 min. The levels of total cholesterol (TC), low-density lipoprotein cholesterol (LDL-c) and high-sensitivity Creactive protein (hs-CRP) in the serum were detected using the Total Cholesterol, low-density lipoprotein cholesterol, and high-sensitivity C-reactive protein Assay Kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China), as per the manufacturer's instructions. Haematoxylin and eosin staining The cardiac tissues were fixed with 10% buffered formalin for 30 min and then dehydrated in 75% ethanol overnight, followed by paraffin embedding. Serial sections (4 μm) were stained with haematoxylin and eosin for pathological analysis. Periodic acid-Schiff (PAS) staining Cardiac tissues from each group were stored in 10% formalin, dehydrated in an ascending alcohol series (75, 85, 90 and 100% alcohol, 5 min each) and then embedded in paraffin wax. Paraffin sections (4-μm-thick), sliced from these paraffin-embedded tissue blocks, were then de-paraffinized via immersion in xylene (three times, 5 min each) and rehydrated using a descending alcohol series (100, 90, 85 and 75% alcohol, 5 min each). Samples were stained with PAS stain to investigate the changes in cardiac morphology. Red staining indicated lipid deposition. 
The relative signal intensity was quantified using NIH ImageJ software. RNA isolation and real-time PCR (qPCR) Total RNA was isolated from cardiac tissues and complementary DNA (cDNA) was synthesised using the TransScript One-Step gDNA Removal and cDNA Synthesis SuperMix kit (Transgen, Beijing, China) according to the manufacturer's protocol. Gene expression was quantitatively analysed by qPCR using the TransStart Top Green qPCR SuperMix kit (Transgen). β-Actin was amplified and quantitated in each reaction in order to normalise the relative amounts of the target genes. Primer sequences are listed in Table 1. Abbreviations: TNF-α, tumor necrosis factor-α; IL-6, interleukin-6; NLRP3, the nucleotide-binding and oligomerization domain-like receptor 3; IL-18, interleukin-18; IL-1β, interleukin-1β Statistical analysis All data are presented as the mean ± standard error of mean (SEM). Statistical analysis was performed using SPSS software version 23.0 (SPSS Inc., Chicago, IL, USA). Inter-group variation was measured using one-way analysis of variance (ANOVA) and subsequent Tukey's test. The minimal level for statistical significance was set at P < 0.05. Metabolic Characterisation The metabolic characteristics of LDL-R⁻/⁻ mice after 8 weeks of different treatments are summarised in Table 2. The heart/body weight ratio did not change in the three groups. TC, LDL-c and hs-CRP levels were markedly increased in the HFD group, but significantly decreased in the HFD + TQ group. TQ reduced HFD-induced cardiac damage To evaluate inflammatory cell infiltration into the cardiac tissue, haematoxylin and eosin staining was performed (Fig. 1). HFD+TQ group mice showed markedly reduced inflammatory cell infiltration in their cardiac tissue compared with that in the HFD group mice, indicating that TQ reduced HFDinduced cardiac damage. To evaluate lipid accumulation in cardiac tissue, we evaluated PAS staining and the expression of CD36 and CD68 (Fig. 2). Increased lipid retention was detected in the cardiac tissues of HFD group mice. Interestingly, HFD +TQ group mice showed markedly reduced lipid deposition in the cardiac tissue compared with that in HFD group mice. TQ reduced HFD-induced expression of pro-inflammatory cytokines in mouse cardiac tissues To examine the involvement of pro-inflammatory cytokines in the cardiac tissues of the three groups of mice, mRNA expression of IL-6 and tumour necrosis factor alpha (TNF-α) was measured using qPCR ( Fig. 3). Although IL-6 and TNF-α mRNA were upregulated in the HFD group, this upregulation was attenuated in the HFD+TQ group. TQ reduced HFD-induced pyroptosis in cardiac tissues To evaluate pyroptosis in cardiac tissues, we examined the mRNA and protein expression of pyroptosis indicators NLRP3, caspase-1, IL-1β, and IL-18 (Fig. 4). NLRP3, caspase-1, IL-1β and IL-18 mRNA was significantly downregulated in the HFD+TQ group compared with that in the HFD group ( Fig. 4a). Western blotting (Fig. 4b) demonstrated that the levels of NLRP3, caspase-1, IL-1β, and IL-18 were markedly reduced in the cardiac tissues of the HFD+TQ group compared with that in the HFD group ( Fig. 4b-c). These results indicate that TQ reduced HFD-induced upregulation of NLRP3, caspase-1, IL-1β, and IL-18 expression. TQ reduced HFD-induced increase in P-ERK levels in the cardiac tissues of mice To investigate the effect of TQ on the regulation of the ERK signalling pathway, we analysed P-ERK levels in the respective treatment groups by western blotting (Fig. 5). 
P-ERK level was higher in the HFD group than that in the ND group, and the HFD+TQ group exhibited significantly low P-ERK levels than the HFD group. Discussion The present study demonstrates that TQ has a protective effect against hyperlipemia-induced progressive lipid deposition, pro-inflammatory cytokine expression, and pyroptosis. Metabolic characteristic analysis indicated that the levels of TC and LDL-c were increased in the HFD group compared to that in the ND group mice. These results are in agreement with reports by Kolbus et al. [15]. Interestingly, the TC and LDL-c levels in the HFD + TQ group were significantly lower than those in the HFD group. Several clinical studies have indicated that hs-CRP can serve as a biomarker for the risk prediction of cardiovascular events [16,17]. Our results show that the HFD + TQ group had markedly reduced serum hs-CRP levels compared with that in the HFD group, indicating that TQ influences cholesterol metabolism and hs-CRP levels. Hyperlipidaemia promotes macrophage accumulation and lipid deposition in cardiac tissues [18]. Cellular lipid homeostasis involves the regulation of influx, synthesis, catabolism, and efflux of lipids. An imbalance in these processes can result in the conversion of macrophages into foam cells [19]. The CD68 marker identifies a population of macrophages; CD68 positive cells are often observed infiltrating cardiac tissues [18]. The results of our lipid deposition assays showed that CD36 expression and PAS staining were significantly increased in the LDL-R⁻/⁻ HFD group mice compared with that in the ApoE⁻/⁻ ND mice; however, this damage was significantly inhibited in the HFD + TQ group. Pro-inflammatory cytokines have been reported to be highly expressed in hyperlipidaemia, and are known to contribute to cardiac damage [20,21]. Our study showed that the expression of IL-6, and TNF-α was reduced in the HFD + TQ group compared with that in the HFD group, indicating that TQ downregulated HFD-induced expression of IL-6 and TNF-α. Pyroptosis is a novel programmed cell death mechanism. Recent studies have reported that pyroptosis contributes to the development of hyperlipidaemia. Pyroptosis induction is closely associated with the activation of the NLRP3 inflammasome, which has been linked to key cardiovascular risk factors including hyperlipidaemia [22,23]. A significant decrease in atherosclerotic lesion size has also observed at the aortic sinus of HFD-fed LDL-R⁻/⁻ mice reconstituted with NLRP3 knockout bone marrow cells [23]. In addition, previous studies have shown that NLRP3 recruits caspase-1, leading to the activation of caspase-1, maturation and secretion of IL-1β and IL-18, and initiation of pyroptosis [24][25][26][27]. Our results showed that the cardiac tissues in the HFD + TQ group expressed markedly reduced levels of NLRP3, caspase-1, IL-1β and IL-18 compared with that in the HFD group, indicating that TQ downregulated HFD-induced pyroptosis. ERK is a cytoplasmic kinase whose activity is regulated by phosphatases [28]. Previous studies have suggested that TQ increases the phosphorylation of mitogen-activated protein kinases and ERK [29]. P-ERK modulates cellular metabolism by a series of reactive oxygen stress activities. TQ increases P-ERK levels to regulate cellular activity [30]. Our study showed that P-ERK levels decreased in the HFD group, but were significantly increased in the HFD + TQ group. 
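As a side note on the qPCR analysis described in the Methods above: the paper normalizes each target gene to β-actin but does not spell out the relative-quantification formula. A common choice is the 2^(-ΔΔCt) method, sketched below with made-up Ct values; the numbers, the sample labels, and the use of 2^(-ΔΔCt) itself are assumptions for illustration, not data or stated methods from the study.

# Illustrative 2^(-ddCt) relative-quantification sketch with made-up Ct values;
# the study reports normalization to beta-actin but not the exact formula used.
def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """Fold change of a target gene vs. a reference (e.g., ND) sample,
    normalized to beta-actin, using the 2^(-ddCt) method."""
    d_ct_sample = ct_target - ct_actin          # normalize sample to actin
    d_ct_ref = ct_target_ref - ct_actin_ref     # normalize reference to actin
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

if __name__ == "__main__":
    # Hypothetical Ct values: IL-6 in an HFD heart vs. an ND heart.
    fold = relative_expression(ct_target=24.1, ct_actin=17.0,
                               ct_target_ref=26.3, ct_actin_ref=17.2)
    print(f"IL-6 fold change vs. ND: {fold:.2f}")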
Conclusions
Our data establish that TQ contributes to the mitigation of hyperlipidaemia-induced cardiac damage, as shown by reduced lipid deposition and pyroptosis and downregulated pro-inflammatory cytokine expression. These findings provide new insights into the role of TQ in hyperlipidaemia-induced cardiac damage and introduce the possibility of a novel therapeutic intervention for treating CVDs.
Availability of data and materials
All data generated or analyzed during this study are included in this published article.
Competing interests
The authors declare that they have no competing interests.
Figure 3 Pro-inflammatory gene expression in the cardiac tissue. Relative mRNA expression of TNF-α and IL-6 in cardiac tissue of the three groups with different treatments. Data are given as the means ± SEM; n = 5-6 in each group. * P < 0.05; ** P < 0.01.
Regional Variation on Rates of Bronchopulmonary Dysplasia and Associated Risk Factors Background. An abnormally high incidence (44%) of bronchopulmonary dysplasia with variations in rates among cities was observed in Colombia among premature infants. Objective. To identify risk factors that could explain the observed high incidence and regional variations of bronchopulmonary dysplasia. Study Design. A case-control study was designed for testing the hypothesis that differences in the disease rates were not explained by differences in city-of-birth specific population characteristics or by differences in respiratory management practices in the first 7 days of life, among cities. Results. Multivariate analysis showed that premature rupture of membranes, exposure to mechanical ventilation after received nasal CPAP, no surfactant exposure, use of rescue surfactant (instead of early surfactant), PDA, sepsis and the median daily FIO2, were associated with a higher risk of dysplasia. Significant differences between cases and controls were found among cities. Models exploring for associations between city of birth and dysplasia showed that being born in the highest altitude city (Bogotá) was associated with a higher risk of dysplasia (OR 1.82 95% CI 1.31–2.53). Conclusions. Bronchopulmonary dysplasia was manly explained by traditional risk factors. Findings suggest that altitude may play an important role in the development of this disease. Prenatal steroids did not appear to be protective at high altitude. Introduction Despite all the advances in the care of premature infants with respiratory distress syndrome (RDS), including the use of antenatal steroids and early management with surfactant, bronchopulmonary dysplasia (BPD) continues to be a major cause of chronic morbidity among this population. There are large variations in the incidence and severity of this disease. According to the National Institutes of Health of USA (NICHD) consensus [1], mild BPD is defined as a need for supplemental oxygen for ≥28 days at 36 weeks postmenstrual age (wPMA) or discharge, moderate BPD as supplemental oxygen for ≥28 days plus treatment with <30% oxygen at 36 wPMA, and severe BPD as supplemental oxygen for ≥28 days plus ≥30% oxygen and/or positive pressure at 36 wPMA. Currently, the estimated incidence of BPD defined as need for supplemental oxygen at 36 wPMA in the United States is approximately 30% for premature infants with a birth weight <1000 grams and <7% in infants with a birth weight >1250 grams or who were at least 30 weeks of gestation at birth [1,2]. There is little information about trends in the epidemiology and pathogenesis of BPD in developing countries. ISRN Pediatrics The most recent report of the incidence of BPD in Latin America comes from the NEOCOSUR Neonatal Group study in 1,825 very-low-birth-weight (VLBW) infants born in sixteen hospitals from Argentina, Chile, Paraguay, Peru, and Uruguay [19]. The authors found an incidence of BPD (oxygen requirement at 28 days of life with chronic radiographic changes) of 24.4%. A randomized controlled trial of early bubble nasal continuous positive airway pressure (nCPAP) and surfactant in premature infants conducted in three different cities in Colombia [20] found an incidence of BPD (defined as supplemental oxygen requirement at 36 wPMA) twice as large (44%) as the one observed in less mature premature infants in developed countries or in the NEO-COSUR study. 
The Colombian trial also revealed significant variations in BPD rates among participating cities: Bogotá 50%, Cali 18%, and Bucaramanga 13%. These cities have different characteristics, the most important being altitude: Bogotá is 2600 meters above sea level (masl), Bucaramanga 959 masl, and Cali 956 masl. We conducted this study with the aim of identifying environmental, maternal, infant, and therapeutic risk factors associated with BPD in Colombia. We also tested the hypothesis that differences in BPD rates were not explained by differences in city-of-birth-specific population characteristics or by differences in respiratory management practices during the first seven days of infants' life among cities. Materials/Subjects and Methods This is a nested case-control study based on data collected as part of a multicenter randomized controlled trial carried out by the Colombian Neonatal Research Network in eight neonatal intensive care units (NICUs) located in three cities (Bogotá, Bucaramanga, and Cali) in Colombia. A detailed description of this trial has been published [20]. Briefly, premature infants born between 27 and 31 weeks of gestation with clinical evidence of respiratory distress during the first hour of life, and who did not require intubation as part of their initial resuscitation and stabilization, were placed on bubble nCPAP and then randomized to receive very early surfactant therapy through transient intubation followed by nCPAP or to expectant management on nCPAP alone. A total of 279 premature infants were enrolled in the trial from January 1, 2004 to December 31, 2006. All study sites provided comprehensive continuous care for critically ill neonates and were staffed by trained nurses and specialized physicians with fully equipped and modern neonatal units. Prenatal and neonatal data were collected prospectively until death or discharge. Additional data were collected at 36 wPMA. The neonatal survival for the population included in the RCT was 90.7%. Neonatal mortality rates were similar among cities. For the present study, all analyses were limited to infants who survived to 36 wPMA. Bronchopulmonary dysplasia was defined as the need for supplemental oxygen for ≥28 days at 36 wPMA [1]. The target oxygen arterial saturation (SaO 2 ) was 92% for infants treated in Bogota and 96% for those treated in other cities. Given the high altitude in Bogotá it was expected that SaO 2 would be lower than in infants at lower altitudes [21,22]. Neonates who fell under this definition were considered as cases. Since we used an epidemiological definition for BPD and we did not have the radiographic findings to confirm BPD cases, to avoid misclassification bias "controls" were all infants who were not receiving supplemental oxygen at 36 wPMA and had required <20 days of oxygen supplementation or had not required oxygen supplementation at all. Cases and controls were compared to identify risk factors associated with BPD. Relevant exposure data included maternal and infant perinatal characteristics, infant postnatal diagnosis, and respiratory management practices. Preterm premature rupture of membranes (PPROM) was defined as >12 hours, use of antenatal steroids as the administration of a complete course (two 12 mg intramuscular doses of betamethasone within 24 hours) of maternal steroids at least 24 hours before delivery, confirmed chorioamnionitis as a positive amniotic fluid culture, and suspected chorioamnionitis as maternal fever during labor and fetal tachycardia. 
Gestational age (GA) at birth was estimated using the last menstrual period date; when GA was inconsistent with the physical examination, the Ballard score was used [23]. Small for gestational age (SGA) was defined as a birth weight <10th percentile for age [24]. PDA was confirmed by echocardiography and subsequently treated with indomethacin or surgical ligation. Sepsis was defined as a clinical deterioration with temperature instability, recurrent apnea, and at least one of the following indications of altered organ function: lethargy, hypoxemia, and increased serum lactate level [25]. Grades III and IV IVH were identified by head ultrasound using the Papile et al. criteria [26]. Air leak syndrome included radiological evidence of pneumothorax, pneumomediastinum, or pulmonary interstitial emphysema. The Score for Neonatal Acute Physiology (SNAP II) on the day of admission was calculated from clinical data collected prospectively during the first day of life [27]. Respiratory management variables included the type of ventilatory support received: (a) only nCPAP, when infants were placed on nCPAP without subsequent mechanical ventilation (MV) during their NICU stay; (b) nCPAP + MV, when infants on nCPAP met treatment failure criteria and required MV. Length of MV was categorized as ≥7 days or <7 days. Exposure to surfactant was divided into three categories: (a) no surfactant exposure included all infants who did not receive surfactant during the trial, (b) very early surfactant included infants administered surfactant within the first hour of life, and (c) rescue surfactant included infants administered surfactant after the first hour of life. Exposure to oxygen supplementation was measured by the fraction of inspired oxygen (FIO2) registered daily.
Demographic and clinical characteristics of the study population were summarized. To identify differences in the distribution of studied variables between cases and controls, the statistical analysis included hypothesis testing using the Pearson Chi-square test for categorical variables and the Student's t-test for continuous variables. To explore associations between studied variables and BPD, crude and adjusted odds ratios (ORs) with 95% confidence intervals (95% CIs) were estimated in a bivariate analysis, followed by a multivariate regression analysis using a log-binomial generalized linear model that included all significant variables in a manual forward stepwise approach.
Table 1 (excerpt): Apgar score at 5 min, median (IR): 9 (8-9) | 9 (8-9) | 9 (8-9), P = 0.228; SNAP-II score, mean (SD): 5.7 (9) | 7.6 (10.5) | 5.4 (8.6), P = 0.102. BPD: bronchopulmonary dysplasia; wPMA: weeks postmenstrual age; SGA: small for gestational age; SNAP-II: Score for Neonatal Acute Physiology; SD: standard deviation; IR: interquartile range (difference between the values at the 75th and 25th percentiles).
The Wilcoxon rank sum test was used to compare the median values between groups and the generalized Cochran-Mantel-Haenszel test to assess the statistical significance of associations between categorical variables. To determine differences in population characteristics or in respiratory management variables during the first week of life that could explain the differences in BPD rates among cities, a descriptive analysis of selected variables was performed by city of birth, followed by a bivariate analysis to identify variables mainly associated with BPD in each city.
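To make the odds-ratio estimation described above concrete, the snippet below computes a crude OR with a Woolf (log-based) 95% confidence interval from a 2x2 exposure-by-outcome table. The counts are invented for illustration, and the adjusted ORs reported in the paper additionally come from multivariable log-binomial/logistic models rather than from a single 2x2 table.

# Crude odds ratio with a Woolf 95% CI from a 2x2 table; counts are invented,
# and the paper's adjusted ORs come from multivariable models, not this formula.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a: exposed cases, b: exposed controls, c: unexposed cases, d: unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

if __name__ == "__main__":
    # Hypothetical counts: PPROM exposure among BPD cases vs. controls.
    or_, lo, hi = odds_ratio_ci(a=30, b=40, c=34, d=108)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")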
To identify possible differences in the distribution of studied variables between three cities among cases of BDP, the statistical analysis included hypothesis testing using the Pearson Chi-square test for categorical variables and the Analysis of Variance or Kruskall Wallis test for continuous variables. Multivariate analyses were then conducted to explore associations between city of birth and BPD while controlling for differences in population demographic characteristics, postnatal diagnosis, and respiratory management variables during the first seven days of life. With the final sample of cases and controls, the power calculation for the research hypothesis tested was 82%, according to the following parameters: incidence of BPD 30%, type I error probability of 0.05, and expected size effect (odds ratio, OR) of 2.0. Results of all multivariate analyses are expressed as odds ratios (ORs) with their corresponding 95% CI. All analyses were carried out using the SAS program (SAS Institute, Cary, NC). Results A total of 216 (77.42%) infants survived to 36 wPMA, and four were excluded from the analysis due to missing data or because they did not meet the case or control definitions. The final analysis included 212 infants; 64 (30%) met the definition of BPD. Identification of Risk Factors for BPD in the Whole Population. Mean GA and birth weight were lower in the BPD group (Table 1); no differences were observed in other perinatal characteristics between cases and controls. Tables 2 and 3 present the results of bivariate analysis controlling for GA at birth. Use of rescue surfactant, no surfactant exposure, diagnosis of PDA, chorioamnionitis, and confirmed or suspected PPROM were variables independently associated to BPD. Following a stepwise forward approach, the initial logistics model showed that PPROM, nCPAP + MV, no surfactant exposure,rescue surfactant, PDA, median daily FIO 2 (sub index is not allowed in this website), and sepsis were associated with a higher risk of BPD. When city of birth was introduced into the model as a control variable, we observed that median daily FIO 2 and PDA lost statistical significance, suggesting the presence of an interaction. We conducted a series of multivariate models testing for interactions between the variables: city of birth, median daily FIO 2 ; sepsis and PDA. The only significant interaction observed was between the variables "Bogotá as city of birth" and "median daily FIO 2 " The results of the final logistic regression model controlling this interaction are showed in Table 4. "Bogotá as city of birth" was the most significant independent variable associated with BPD; other variables associated with BPD in the initial model remained significantly associated (Table 4). BPD Rates and City-Specific Differences. The distribution of maternal and infant perinatal characteristics, infant postnatal diagnosis, and respiratory management practices in the first seven days of the infants' life was similar in all cities (Table 5); however, there are statistically significant differences between cases and controls among cities. The proportion of BPD cases in infants whose mothers received antenatal steroids was significantly higher in Bogotá than in any other city. In Bucaramanga, infants with BPD had lower birth weights than infants in Cali or Bogotá. 
Infants born in Bogotá who required MV after nCPAP (nCPAP + MV) and those treated with rescue surfactant or with no surfactant exposure had a higher incidence of BPD compared to infants of similar characteristics born in Cali or Bucaramanga. Infants born in Bogotá had higher median values of daily FIO2 compared to Bucaramanga and Cali, and infants diagnosed with BPD in Bogotá received higher concentrations of daily supplemental oxygen than controls, while BPD cases in Cali and Bucaramanga did not. In relation to postnatal diagnoses during the first 7 days of life, the diagnosis of PDA was more frequent in Bucaramanga, but the largest proportion of BPD cases among infants with PDA was observed in Bogotá. The diagnosis of sepsis was also more frequent in Bucaramanga and Cali, but the largest proportion of BPD cases among infected infants was seen in Bogotá, followed by Cali and Bucaramanga (P < 0.0001). Bivariate analysis stratified by city of birth (Table 5) showed the following: for infants born in Bogotá, the diagnosis of sepsis in the first seven days of life and incremental levels of daily median FIO2 were the variables mainly associated with a higher risk of BPD; for infants born in Cali, air leak syndrome, PPROM, and PDA in the first seven days of life were the variables associated with BPD, while in Bucaramanga PPROM was the only significant risk factor associated with BPD. To explore associations between city of birth and BPD, we generated a logistic model controlling for potential confounders. Multivariate results are shown in Table 6; Bogotá as city of birth was strongly associated with an increased risk of developing BPD (OR 1.82, 95% CI 1.31-2.53). Other variables in the model associated with a higher risk of BPD were rescue surfactant, no surfactant exposure, nCPAP + MV, median daily FIO2, PDA, and sepsis. Discussion In this study population, the incidence of BPD was 30%. This rate is higher than the average rates reported in populations with similar gestational ages in developed countries [1,2,5-7]. Our results showed that infants born in Bogotá had nearly twice the risk of developing BPD as infants born in Bucaramanga or Cali, independently of differences in maternal, infant, and therapeutic risk factors. Additionally, Bogotá showed the highest rates of BPD cases associated with the presence of air leak syndrome, exposure to MV, rescue surfactant or no surfactant exposure, PDA, and sepsis, when compared to the other cities. It also had the highest values of daily supplemental oxygen needed to reach the target SaO2 among all the cities. Exposure to antenatal steroids did not appear to protect infants born in Bogotá from developing BPD. Because Bogotá is located at a higher altitude (more than 2600 masl) than Bucaramanga and Cali, these results could suggest that altitude may play an important role in the pathogenesis of BPD in Colombia. There are few publications on altitude-related disease and pulmonary hemodynamics in pediatric populations, and the altitude at which an infant is born has not clearly been proven to be associated with the development of BPD. The effect of living at high altitudes (>2500 masl) on lung diffusion capacity and pulmonary hemodynamics has been described in highland children [28]. As a result of the low partial pressure of oxygen in the environment, oxygen uptake into the lungs is enhanced by increases in minute ventilation, lung compliance, and pulmonary diffusion.
The decreased partial pressure of oxygen in the lungs of highland children has also been associated with higher pulmonary artery pressures [28][29][30][31]. Several investigators have also found that functional closure of the ductus arteriosus in the newborn is delayed at high altitudes as a consequence of increased pulmonary vascular pressures [32][33][34][35]. The transition from oxygenation via the placenta to oxygenation across the formerly fluid-filled lungs is especially precarious in a low-oxygen environment [36]; with the onset of ventilation immediately after birth, the oxygen tension in the alveolus and pulmonary capillaries of neonates may not increase as expected, resulting in postnatal persistence of fetal circulation [36]. In the absence of pulmonary disorders, normal neonates may experience frequent episodes of arterial oxygen desaturations and hypoxia during the first week of life. Studies comparing healthy infants born at high altitudes (>3100 masl) with infants born at sea level have shown that a week after birth the SaO 2 of high-altitude infants declines, whereas SaO 2 gradually rises after birth or remains constant over time in infants born at sea level [32]. As a result of these events, it is possible that premature infants born at high altitudes have a prolonged transition period and early dependency on high concentrations of oxygen and ventilatory support compared to their counterparts born at lower altitudes. The presence of RDS would enhance ventilation perfusion mismatch leading to more hypoxia, oxygen dependency, oxidative stress, and higher levels of ventilatory support, increasing the risk and severity of BPD. It is also possible that the observed dependency on supplemental oxygen in our population of premature infants is the result of physiological oxygen dependency and not BPD. This may be due to the fact that the definition used for BPD does not take into account clinical or radiographic changes. The association between PDA and BPD has previously been documented [4,12,37]. It is also possible that the prolonged closure of the PDA may play a role in the pathogenesis of BPD, especially as the pulmonary vascular resistance begins to drop, but we cannot answer this question because we did not assess the duration of the PDA, the presence of pulmonary hypertension, or the presence of pulmonary edema or hemorrhage in this population of infants. Likewise, we did not assess the effect of fluid intake on the development of PDA and BPD because this information was not collected in the initial trial [7,14,37,38]. Previous studies have demonstrated the relationship between the presence of lateonset sepsis, PDA, and the development of BPD [12,37]. Our study suggests that altitude enhances their negative effects and other traditional risk factors associated with BPD [38]. In our population, PPROM was found to be a significant risk factor for BPD while chorioamnionitis was not. This finding could be explained in part because amniocentesis and amniotic culture fluids were not taken routinely in all participating centers as part of the workup for suspected chorioamnionitis in mothers with preterm labor or premature rupture of membranes. The association of BPD with preterm labor and PPROM has been well documented in the medical literature [39]. Finally, our study emphasizes the need to minimize MV exposure and offer surfactant replacement therapy within the first hour of life to infants with RDS on NCPAP in order to decrease the incidence of BPD in this population [20,40,41]. 
To summarize, this study suggests that altitude may be an important risk factor associated with increased supplemental oxygen dependency and the development of BPD. Future studies need to determine whether altitude plays a role in the pathophysiology of BPD or whether it is simply a marker of physiologic oxygen demand. They also need to use a more accurate definition of BPD, measuring not only the need for supplemental oxygen but also radiographic findings and clinical signs. Disclosure The authors declare that the material in this paper is original research; it has not been previously published and has not been submitted for publication elsewhere while under consideration. Conflict of Interests The authors declare that they do not have any competing financial interests in relation to the work described in this paper.
Periodic Solutions for the Generalized Anisotropic Lennard-Jones Hamiltonian We characterize the circular periodic solutions of the generalized Lennard-Jones Hamiltonian system with two particles in $\mathbb{R}^{n}$, and we analyze which of these periodic solutions can be continued to periodic solutions of the anisotropic generalized Lennard-Jones Hamiltonian system. We also characterize the periods of antiperiodic solutions of the generalized Lennard-Jones Hamiltonian system on $\mathbb{R}^{2n}$, and prove the existence of $0<\tau^{*}\le\tau^{**}$ such that this system possesses no $\tau/2$-antiperiodic solution for all $\tau\in(0,\tau^{*})$, at least one $\tau/2$-antiperiodic solution when $\tau=\tau^{*}$, precisely $2^{n}$ families of $\tau/2$-antiperiodic circular solutions when $\tau=\tau^{**}$, and precisely $2^{n+1}$ families of $\tau/2$-antiperiodic circular solutions when $\tau>\tau^{**}$. Each of these circular solution families is of dimension $n-1$ modulo the $S^{1}$-action.
Introduction and Statement of the Main Results Frequently, simple mathematical models are used in molecular dynamics and computational chemistry to describe the interaction between a pair of molecules or atoms, see for instance [5,9]. One of the most used empirical potentials in molecular dynamics is the Lennard-Jones potential, see [8], which models the interaction between two neutral atoms or molecules under two different forces, in the limits of small and large separation. These forces are: a repelling force at short distances (coming from overlapping electron orbitals, related to Pauli's exclusion principle), and an attractive force at long distances (coming from the van der Waals force, or the dispersion force). The Lennard-Jones potential is $V(r_1,r_2)=a\big[(\sigma/\|r_1-r_2\|)^{12}-(\sigma/\|r_1-r_2\|)^{6}\big]$, where $a/4$ is the depth of the potential energy, $\sigma$ is the finite distance at which the interparticle potential vanishes, and $\|r_1-r_2\|$ is the distance between the two particles localized at the positions $r_1$ and $r_2$ in $\mathbb{R}^{n}$. The values of these parameters are chosen in order to reproduce experimental data, or deduced from accurate quantum chemistry computations; [3] is a good reference for these considerations. We rescale the unit of length and the unit of mass in such a way that the constants $\sigma$ and $a$ become 1; then the Lennard-Jones potential becomes $\|r_1-r_2\|^{-12}-\|r_1-r_2\|^{-6}$. When one of the atoms or molecules is at the origin of coordinates and the position of the other atom or molecule is $x=(x_1,\dots,x_n)$, then the Lennard-Jones Hamiltonian writes as in (1), where $|x|=\sqrt{\sum_{k=1}^{n}x_k^{2}}$. In the coordinates $(x,p_x)$ the generalized Lennard-Jones potential is central, and consequently it is integrable with the independent first integrals given by the angular momentum $C=x\wedge p_x$, where $\wedge$ is the exterior product of the vectors $x$ and $p_x$. The norm of the angular momentum $C$ on a solution of the Hamiltonian system (2) is denoted by $c$, and of course it is also a first integral. For stating our first result on the circular periodic solutions of the generalized Lennard-Jones Hamiltonian system we need some notation; we define the constants $\rho_1$, $\rho_2$ and $\gamma$ used below. The next two propositions characterize the circular periodic solutions of the generalized Lennard-Jones Hamiltonian system. Proposition 1. (a) For $c=-\gamma$ the Hamiltonian system (2) has only one retrograde circular periodic solution centered at the origin of coordinates of radius $\rho_2$.
The period of this orbit is 2πρ 2 2 /γ . (b) For each c ∈ (−γ, 0) the Hamiltonian system (2) has exactly two retrograde circular periodic solutions centered at the origin of coordinates, one with radius this period tends to 2πρ 2 2 /γ when r 1 (c) ρ 2 and tends to ∞ when r 1 (c) ρ 1 ; and the other periodic solution with radius r 2 (c) in the interval (ρ 2 , ∞) of period this period tends to 2πρ 2 2 /γ when r 2 (c) ρ 2 and tends to ∞ when r 2 (c) ∞. (c) For c = 0 the Hamiltonian system (2) has a circle of equilibra centered at the origin of coordinates and of radius ρ 1 . (d) For each c ∈ (0, γ ) the Hamiltonian system (2) has exactly two direct circular periodic solutions centered at the origin of coordinates, one with radius r 1 (c) in the interval (ρ 1 , ρ 2 ), and the other with radius r 2 (c) in the interval (ρ 2 , ∞). The periods of these two orbits have the behavior described in statement (b). (e) For c = γ the Hamiltonian system (2) has only one direct circular periodic solution centered at the origin of coordinates of radius ρ 2 . The period of this orbit is the same than in statement (a). (a) For each c ∈ (−γ, 0) the Hamiltonian system (2) has exactly one retrograde circular periodic solution centered at the origin of coordinates with radius r 1 (c) in the interval (ρ 1 , +∞) of period this period tends to +∞ when r 1 (c) ∞ and when r 1 (c) ρ 1 . (b) For c = 0 the Hamiltonian system (2) has a circle of equilibra centered at the origin of coordinates and of radius ρ 1 . (c) For each c ∈ (0, γ ) the Hamiltonian system (2) has exactly one direct circular periodic solution centered at the origin of coordinates with radius r 1 (c) in the interval (ρ 1 , ∞). The period of this orbit has the behavior described in statement (a). (b) For c = 0 the Hamiltonian system (2) has a circle of equilibra centered at the origin of coordinates and of radius ρ 1 . (c) For each c ∈ (0, ∞) the Hamiltonian system (2) has exactly one direct circular periodic solution centered at the origin of coordinates with radius r 1 (c) in the interval (ρ 1 , ∞). The period of this orbit has the behavior described in statement (a). Our first main goal is to characterize which of the circular periodic solutions described in Proposition 1 can be continued to the generalized anisotropic Lennard-Jones Hamiltonian system defined by the Hamiltonian for a given integer m and a given ε such that 1 < m < n and |ε| is sufficiently small. Therefore the corresponding Hamiltonian system iṡ We define Our first main result characterize the circular periodic solutions of the generalized Lennard-Jones Hamiltonian system (2), given in Proposition 1, which can be continued into the generalized anisotropic Lennard-Jones Hamiltonian system (5) for small values of |ε|. (a) If c ∈ (−γ, 0) and R is not an integer, then at every 2-dimensional plane P through the origin of coordinates the retrograde circular orbit of radius r 1 (c) and angular momentum c of the generalized Lennard-Jones Hamiltonian system (2) can be continued into the generalized anisotropic Lennard-Jones Hamiltonian system (5) for small values of |ε|. Theorem 4 is proved in Sect. 4. Following exactly the arguments of the proof of Theorem 4 we could be able to characterize the circular periodic solutions of the generalized Lennard-Jones Hamiltonian system (2), given in Propositions 2 and 3, which can be continued into the generalized anisotropic Lennard-Jones Hamiltonian system (5) for small values of |ε|, but we do not state them here in order to avoid a length article. 
In what follows we shall characterize the periods of antiperiodic solutions of the Lennard-Jones Hamiltonian system on whether there exist such solutions. For a given τ > 0 we study the τ -periodic solutions of the Lennard-Jones Hamiltonian system (2), which now we rewrite into the form where U ∈ C 1 (R 2n \{0}, R) is defined by where we suppose 0 < α < β, a > 0 and b ∈ R. Firstly, for a given τ > 0 we plug the τ -periodic circular motion x = x(t) into the system (6) with the potential function U = U (x) of (7), and try to see which circular motion can become solution of (6). Proposition 5 is proved in Sect. 5 below. Note that Proposition 5 provides results on the circular periodic solutions of the generalized Lennard-Jones Hamiltonian system (2) for a > 0 and b ∈ R, while Propositions 1-3 only for a > 0 and b > 0. By a similar proof which is left to the readers, we have the following result for the higher dimensional case. for any x ∈ R 2n \{0}, we can look for τ/2antiperiodic solutions of (6), i.e. those solutions x satisfying Note that circular solutions of (6) found by Proposition 5 are all τ/2-antiperiodic solutions. It is well known that τ/2-antiperiodic solutions of (6) are critical points of the functional defined on the space where S τ = R/(τ Z). Motivated by the method of [11], we obtain our second main result below. Here this theorem characterizes the period τ > 0 for which (6) possesses no, at least one or more τ/2-antiperiodic solutions. Theorem 7 is proved in Sect. 6 below. See [1,4,14] and the references therein for other works on the periodic orbits of the Lennard-Jones potential, or related with these periodic orbits. Proofs of Propositions 1 and 2 Since the notion of angular momentum is defined in any dimension by using the exterior product in R n , one would guess that central force problems in any dimension are completely integrable, as it is well known for n = 3. This was proved explicitly in [7], by constructing n first integrals independent and involution: the energy and some combinations of the angular momentum components. It is shown that the motion of these central problems are always reduced to a 2-dimensional plane through the origin of coordinates. The Lennard-Jones Hamiltonian (1), restricted to a 2-dimensional plane P through the origin of coordinates with initial position and momenta in P, in polar coordinates in P becomes where is the norm of the angular momentum restricted to P in polar coordinates. Therefore its corresponding Hamiltonian system writeṡ We fix the value of = c. On a circular periodic solution in P we haveṙ = R = 0. ThereforeṘ Hence the value of the angular momentum over the circular periodic solution of radius r in P is Proof of Proposition 1 The radius of a circular periodic orbit must satisfy r ≥ ρ 1 , see the graphic of the function c(r ) in Fig. 1. The maximum and the minimum of the function c(r ) takes place when r = ρ 2 and c(ρ 2 ) = ±γ . So the value of the angular momentum on the circular periodic solutions in P run in the interval [−γ, γ ], as it is stated in Proposition 1. On a circular periodic solution we have that R = 0, and from (15) its angular velocity isθ = ω = c/r 1 (c) 2 . Therefore its period is Note that the period is the necessary time in order that θ increases or decreases by 2π . Now the statements of Proposition 1 follows easily from Fig. 1. Since β > α > 0, the period T as a function of r 2 can be written as Clearly (β + 2)/2 > (β − α)/2, so T → +∞ as r 2 → ∞. Fig. 
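To make the circular-orbit conditions used in this proof explicit, the standard textbook computation for a planar central-force Hamiltonian can be written as follows; here $V(r)$ denotes a generic radial potential and $\Theta$ the momentum conjugate to $\theta$, so the normalization is not necessarily the one of the paper's own displayed formulas.
\[
H(r,\theta,R,\Theta)=\tfrac12\Big(R^{2}+\frac{\Theta^{2}}{r^{2}}\Big)+V(r),\qquad
\dot r=R,\quad \dot R=\frac{\Theta^{2}}{r^{3}}-V'(r),\quad \dot\theta=\frac{\Theta}{r^{2}},\quad \dot\Theta=0.
\]
On a circular solution $r\equiv r_{0}$ one has $R=\dot R=0$, hence
\[
c^{2}=\Theta^{2}=r_{0}^{3}\,V'(r_{0}),\qquad
\dot\theta=\omega=\frac{c}{r_{0}^{2}},\qquad
T=\frac{2\pi}{|\omega|}=\frac{2\pi r_{0}^{2}}{|c|},
\]
which is exactly the relation between the angular momentum $c$, the radius of the circular orbit and its period used above.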
3 the proof is completely similar to the proofs of Propositions 1 and 2. Basic Results on the Continuation of Periodic Solutions We deal with autonomous differential systemṡ with ε 0 > 0 is an interval where the parameter ε takes values, and as usual the dot denotes the derivative with respect to the time t. We denote its general solution as Consider the T -periodic solution φ(t, x 0 ; 0). A continuation of this periodic solution is a pair of smooth functions, u(ε), τ (ε), defined for |ε| sufficiently small such that u(0) = x 0 , τ (0) = T and φ(t, u(ε); ε) is τ (ε)-periodic. One also says that the periodic solution can be continued. This means that the solution persists when the parameter ε varies, and the periodic solution does not change very much with the parameter. The variational equation associated to the T -periodic solution φ(t, where M is a m × m matrix. Note that the matrix f x (x; ε) is the Jacobian matrix of the vector field f (x; ε). The monodromy matrix associated to the T -periodic solution φ(t, x 0 ; ε) is the solution M(T, x 0 ; ε) of (19) satisfying that M(0, x 0 ; ε) is the identity matrix of R m . The eigenvalues of the monodromy matrix associated to the periodic solution φ(t, x 0 ; ε) are called the multipliers of the periodic orbit. Let φ(t, x 0 ; ε) be a T -periodic orbit of the C 2 differential system (18). The eigenvector tangent to the periodic orbit has associated an eigenvalue equal to 1. So the periodic orbit has at least one multiplier equal to 1, for more details see for instance Proposition 1 in [10]. Let F : U → R be a locally non-constant function of class C 1 such that Then F is called a first integral of system (18), because F is constant on the solutions of this system. Here the dot · indicates the usual inner product of R m , and the gradient of F is defined as We say that k first integrals F j : U → R for j = 1, . . . , k are linearly independent if their gradients are independent in all the points of U except perhaps in a set of Lebesgue measure zero. Let F j : U → R a first integral for j = 1, . . . , k with k < m. Assume that F 1 , . . . , F k are linearly independent in U . Let γ be a T -periodic orbit of the vector field f (x; ε) such that at every point x ∈ γ the vectors ∇ F 1 (x), . . . , ∇ F k (x) and f (x; ε) are linearly independent. Then 1 is a multiplier of the periodic orbit γ with multiplicity at least k + 1, see for instance Theorem 2 of [10]. If the differential system (18) has k independent first integrals, we say that a periodic solution φ(t, x 0 ; ε) is non-degenerate if 1 is an eigenvalue of the monodromy matrix M(T, x 0 ; ε) with multiplicity k + 1. The following result goes back to Poincaré, for a proof see for instance the proof of Proposition 9.1.1 of [12]. Proposition 8 A non-degenerate periodic solution of a differential system (18) with ε = 0 and k independent first integrals can be continued to differential systems (18) with |ε| sufficiently small. Proof of Theorem 4 We shall work in a fixed 2-dimensional plane P through the origin of coordinates in the space of positions. In fact, from [7] we know that we can find 2n − 3 independent first integrals, such that 2n − 4 fix the motion on the plane P, and the additional first integral is the restriction of the Hamiltonian of the system to the invariant plane P. From section 3 it follows that a circular periodic solution in the plane P is nondegenerate if it has 2n − 2 multipliers equal to 1, and the remainder two are different from 1. 
Since we shall work with the differential system (18) restricted to the invariant position plane P, in order to see that a circular periodic solution contained in P is non-degenerate it is sufficient to prove that their multipliers are 1 with multiplicity two, and two other multipliers different from 1. The Jacobian matrix of the Hamiltonian vector field corresponding to the Hamiltonian system (15) is ⎛ When we evaluate this matrix on the circular periodic solution of radius r 1 (c) and c ∈ (0, γ ) with recall (16), we obtain the matrix Now the variational equation (19) becomeṡ where M is a 4 × 4 matrix, and the solution M(t) of this differential equation such that M(0) is the identity matrix of R 4 , evaluated at the period (17) of the circular periodic orbit of radius where r 1 = r 1 (c), and Of course, by definition this last matrix is the monodromy matrix of the circular periodic solution of radius r 1 (c). Its eigenvalues are the multipliers of this periodic solution, namely Since r 1 ∈ (ρ 1 , ρ 2 ) we have that AB < 0, and consequently Finding τ/2-Antiperiodic Circular Solutions of System (6) Proof of Proposition 5 By definition (7) of U , we obtain Let x ±,r (t) = r cos 2π t τ , ±r sin 2π t τ for some r > 0 to be determined later. Then we obtainẍ Thus we havë That is, x is a solution of (6) if and only if r > 0 is a root of ϕ τ (r ). This completes the proof of Proposition 5. Proof of Theorem 7 In order to prove Theorem 7 we need the following two inequalities given in the next two lemmas. Lemma 9 (Wirtinger's inequality, cf. Theorem 258 of [6]) For real numbers a < b, and the equality holds if and only if for some constant c 1 and c 2 ∈ R. Lemma 10 (Jensen's inequality, cf. Theorem 204 of [6]) For real numbers a < b, let φ = φ(t) satisfying φ (t) > 0 and be finite for all t ∈ (a, b), and f and p be integrable on [a, b] and satisfying with m and M may be infinite, and f (t) is almost always different from m and M. Then . Here equality holds if and only if f (t) is a constant function on [a, b]. Proof of Theorem 7 We carry out the proof in two steps. Here we used the condition 2 < α. Now other claims of Theorem 7 follow from Propositions 5 and 6. The proof of Theorem 7 is complete. Remark 1 (i) By our above study, it is natural to ask whether for every τ ≥ τ * , the system (6) possesses any τ/2-antiperiodic solutions which are not circular motions. (ii) It is not clear so far whether τ * = τ * * holds, as well as whether there exists any τ/2-antiperiodic solutions for τ ∈ (τ * , τ * * ) if τ * < τ * * holds. (iii) Based on results in Propositions 3 and 4, it is natural to ask whether the Theorem 5 continues to hold when the potential function is a weak force, i.e., 0 < α ≤ 2 < β. (iv) Here we would like to draw readers attentions to a remarkable result of Ambrosetti and Coti Zelati, i.e., Theorem 9.1 of [1] in 1993, in which they proved the existence of at least one τ -periodic solution of the system (6) for every τ > 0. In their proof, they constructed a mountain pass structure which depends on a set of suitable functions with non-zero mean integral values. Their τ -periodic solutions are not τ/2-antiperiodic. For 0 < τ < τ * , this conclusion follows from our Theorem 7. Here a natural task of the future study on the system (6) is to understand the global structures of the sets of its τ/2-antiperiodic solutions and τ -periodic solutions respectively for prescribed suitable τ > 0 with the potential function being strong force or weak force.
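For reference, the classical textbook forms of the two inequalities invoked in the proof of Theorem 7 above (cf. Hardy, Littlewood and Pólya) are recalled below; the hypotheses of Lemmas 9 and 10 may be phrased slightly differently in the paper.
\[
\text{Wirtinger: }\int_a^b \dot f(t)^{2}\,dt\;\ge\;\Big(\frac{2\pi}{b-a}\Big)^{2}\int_a^b f(t)^{2}\,dt
\]
for every $C^{1}$ function $f$ with $f(a)=f(b)$ and $\int_a^b f(t)\,dt=0$, with equality if and only if $f(t)=c_{1}\cos\frac{2\pi t}{b-a}+c_{2}\sin\frac{2\pi t}{b-a}$ for some constants $c_{1},c_{2}\in\mathbb{R}$.
\[
\text{Jensen: }\ \varphi\!\left(\frac{\int_a^b p(t)\,f(t)\,dt}{\int_a^b p(t)\,dt}\right)\;\le\;\frac{\int_a^b p(t)\,\varphi\big(f(t)\big)\,dt}{\int_a^b p(t)\,dt}
\]
for a convex function $\varphi$ on $(m,M)$, a weight $p\ge 0$ with $\int_a^b p(t)\,dt>0$ and $m<f(t)<M$; for strictly convex $\varphi$, equality holds if and only if $f$ is constant on $[a,b]$.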
Environmental pollutants-dependent molecular pathways and carcinogenesis Exposure to environmental pollutants can modulate many biological and molecular processes such as gene expression, gene repair mechanisms, hormone production and function and inflammation, resulting in adverse effects on human health including the occurrence and development of different types of cancer. Carcinogenesis is a complex and long process, taking place in multiple stages and is affected by multiple factors. Some environmental molecules are genotoxic, able to damage the DNA or to induce mutations and changes in gene expression acting as initiators of carcinogenesis. Other molecules called xenoestrogens can promote carcinogenesis by their mitogenic effects by possessing estrogenic-like activities and consequently acting as endocrine disruptors causing multiple alterations in cellular signal transduction pathways. In this review, we focus on recent research on environmental chemicals-driven molecular functions in human cancers. For this purpose, we will be discussing the case of two receptors in mediating environmental pollutants effects: the established nuclear receptor, the Aryl hydrocarbon receptor (AhR) and the emerging membrane receptor, G-protein coupled estrogen receptor 1 (GPER1). ‡,§ § | ‡ © El Helou M et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are The environment presents all the elements that surround us (Schmidt 2012).In the environment, humans are exposed to pollutants in many ways, including orally, by inhalation or by the dermal route.Pollution of the environment is suspected to be one of the main causes of cancer (Parsa 2012).The process of carcinogenesis is mainly divided into three stages: initiation, promotion and progression.The initiation step follows a repeated exposure to "initiators" such as oxidative stress, chemical pollutants, virus and X-rays that increase the frequency of genetic mutations.The promotion step requires a non-mutagenic stimulus known as "promoters" such as chronic inflammation, estrogens and xenoestrogens (natural or chemical compounds that imitates estrogens) that promote proliferation of the initiated cells.The progression step comprises the expression of the malignant phenotype characterised by angiogenesis and metastasis (Liu et al. 2015).Exposure to environmental compounds may interfere at all stages of carcinogenesis, in particular at the initiation and promotion stages.Several studies have evaluated the association between widespread environmental pollutants and carcinogenesis.Indeed, epidemiological studies and in vitro approaches suggest that a great number of cancers could be induced via exposure to chemicals that humans are likely to encounter in their environment (Antwi et al. 2015, Boffetta 2006, Braun et al. 2016, Rochefort 2017, Rodgers et al. 2018, Wilde et al. 2018). 
The International Agency for Research on Cancer (IARC) evaluated the carcinogenic risks to humans and has classified around 120 agents as carcinogenic, where the chemical substances represent the majority (IARC 2018).There are many kinds of environmental pollutants: 1) agriculture chemicals including pesticides such as 1,1,1-trichloro-2,2-bis(4chlorophenyl)ethane (DDT); 2) the industrial chemicals including dioxins such as 2,3,7,8tetrachlorodibenzo-p-dioxin (TCDD), metals such as arsenic compounds, plasticisers such as bisphenol A (BPA) and health care products such as phthalates; 3) the air pollutants including polycyclic aromatic hydrocarbons (PAH) such as benzo[a]pyrene (B[a]P), N-Nitrosamines such as N-Nitrosodimethylamine (NDMA), air microparticles such as sulphur dioxide and carbon monoxide; 4) drugs including exogenous hormones and 5) some natural compounds such as aflatoxines.Pollutants are characterised by their higher persistence and pervasive nature due to high lipid solubility that allows them to remain, bioaccumulate in fatty tissues and interact with the environment for a long period of time (Mathew et al. 2017).These molecules can have different mechanisms of action; they could be genotoxic or non-genotoxic which include molecules that are able to induce epigenetic modifications, to alter the endocrine system, to act as immunosuppressors or inducers of tissue-specific toxicity and inflammatory responses (Caldwell 2012, Hernández et al. 2009).In this review, we will be discussing mainly the genotoxic compounds and the endocrine disruptors. A "genotoxic" agent is able to damage the genetic material by inducing DNA damage, mutation or both (Hayashi 1992).Genotoxicity is a key feature of carcinogenesis; it promotes chromosome changes that may be structural (such as translocations, deletions, insertions, inversions, micro-nuclei and changes in telomere length) or numerical, affecting the numbers of chromosomes as in the case of aneuploidy and polyploidy (Smith et al. 2016).Genotoxicity, due to environmental molecules, can alter the oncogenes and tumour suppressor genes that regulate processes such as cell proliferation, cell death, cell differentiation and genomic stability (Hanahan and Weinberg 2011). Endocrine disruptors or endocrine disrupting chemicals (EDC) are pseudo-persistent compounds present in the environment at very low concentrations; however, these low levels are able to interfere with hormonal regulation pathways causing effects leading to a variety of health problems, such as cancer, specifically the hormone-dependent type (breast, ovarian, endometrial, prostate, testicular) (Abaci et al. 2009, Nohynek et al. 2013, Rachoń 2015, Rochefort 2017).Endocrine disruptors act directly with hormone receptors by imitating or preventing the action of natural hormones (Schug et al. 2016).Most of these compounds have structures similar to steroid hormones such as estrogen and could interfere with the action of this hormone through binding to estrogen receptors (ER) (Shanle and Xu 2011, Tilghman et al. 2010).It is important to mention that estrogens activate different signalling pathways known to play an important role in tumour development (Vrtačnik et al. 2014). In Table 1, we listed some of the main environmental genotoxic molecules or endocrine disruptors that are known/thought to be implicated in the process of carcinogenesis.2014, Parada et al. 2012, Sagara et al. 2010 Table 1. 
List of the most common environmental molecules (genotoxics and endocrine disruptors) and the different types of cancers developed following their exposure. Environmental pollutants-dependent molecular pathways and carcinogenesis Receptors targeted by environmental pollutants Previous studies have suggested that environmental factors are able to induce deleterious effects within the cells through the activation of cellular receptors (Mnif et al. 2007, Routledge et al. 2000, Shi et al. 2009).It is important to note that the interactions between most of the environmental pollutants and their receptors are implicated in the regulation of molecular pathways involved in cancer progression, such as proliferation, metabolism of xenobiotics and apoptosis (Burz et al. 2009, Duronio and Xiong 2013, Rushmore and Kong 2002).It is known that environmental molecules, such as TCDD, B[a]P, BPA and phthalates, have the ability to interact with the two types of cellular receptors: nuclear and membrane receptors (Delfosse et al. 2014, Thomas and Dong 2006, Wallace and Redinbo 2013).Most of the exogenous agents act either as receptor's agonists or antagonists and compete with endogenous ligands to bind to their receptors (Handschin and Meyer 2003, Schlyer and Horuk 2006, Venkatakrishnan et al. 2013, Wang and LeCluyse 2003).In general, the effects of these interactions are able to induce two types of mechanisms: 1) the activation of cell surface receptors that induce signal transduction pathways leading to various physiological and pathological processes and playing important roles in cancer biology (Kampen 2011, Pierce et al. 2002); 2) an intracellular activation mediated by nuclear receptors acting as transcription factors in the nucleus resulting in modifications in the expression of several genes including enzymes involved in the metabolism of the exogenous molecules (Delfosse et al. 2014, Sever andGlass 2013). Nuclear receptors Nuclear receptors are activated by both intracellular and extracellular signals and act as transcription factors of target genes (Sever and Glass 2013).Many of these target genes are involved in cell growth and cell differentiation, development and metabolism (Carlberg and Seuter 2010, Kininis and Kraus 2008).There are three most common sub-families of nuclear receptors: 1) the classical steroid hormone receptors or endocrine receptors that bind to a unique high affinity ligand such as estrogens, androgens, glucocorticoids, thyroxin, progesterone, mineralocorticoids etc. and exert a wide range of biological functions including cell homeostasis, differentiation, regulation of proliferation, survival and cell death (Ward and Weigel 2009).Both the endogenous ligands (hormones) and the hormone receptors are targeted by environmental chemicals.For instance, the drug prulifloxacin activates the androgen receptor, while BPA and dicyclohexyl phthalate activate the glucocorticoid receptor (Lynch et al. 2017, Sargis et al. 2010).The classical nuclear ER, ERα and ERβ, are the most sensitive receptors to be targeted by some EDC that will compete with endogenous estrogen and target directly ER.EDC includes the pharmaceutical chemicals diethylstilbestrol, BPA, DDT and phytoestrogens such as genistein (Chen et al. 
2018, Shanle andXu 2011).2) Orphan receptors, called as such because of their unknown physiological ligands, but represent candidate receptors for new ligands or hormones; they play important roles in cellular homeostasis and diseases including cancer where over-or under-expression of some receptors have prognostic significance for patient survival (Aesoy et al. 2015, Hummasti and Tontonoz 2008, Safe et al. 2014). 3) The xenobiotic receptors which are the most important group of nuclear receptors towards environmental molecules (Li and Wang 2010).They play an important role in cellular responses to accumulated endotoxins, chemicals compounds and their metabolites (Li and Wang 2010).To date, studies are focusing on three main xenobiotic receptors: the constitutive androstane receptor, the pregnane X receptor and the aryl hydrocarbon receptor (AhR), because of their predominance in the regulation of hepatic responses either to drugs or to environmental chemicals, such as some PAH and dioxin compounds (Banerjee et al. 2015, Verma et al. 2017, Vondráček and Machala 2016).Xenobiotic receptors play an important role between the environment and the physiological mechanisms due to their involvement in the transcriptional regulation of cytochromes P450 (CYP) family which represents one of the most important and predominant enzyme superfamilies involved in metabolism of xenobiotics; however, in some cases this metabolic transformation of xenobiotics may also produce active metabolites, able to induce DNA adducts and mutations or toxic intermediates (Fujii-Kuriyama and Mimura 2005, Guéguen et al. 2006, Tolson and Wang 2010).In addition, these three xenobiotic receptors are known to regulate, at the transcriptional level, the ridine 5'-diphosphoglucuronosyltransferase which is an enzyme involved in the detoxification process and the ATP-binding cassette sub-family G member 2 Breast Cancer Resistance Protein frequently associated with therapy resistance in cancers (Jigorel et al. 2006, Spitzwieser et al. 2016, Sugatani et al. 2001, Tompkins et al. 2010). Membrane receptors Membrane receptors are transmembrane proteins that serve as a communication interface between cells and their external and internal environments (Pierce et al. 2002, Venkatakrishnan et al. 2013).Three major classes of membrane receptors exist: 1) the enzyme linked-receptors that lack intrinsic catalytic activity and dimerise after binding with their ligands, in order to activate downstream signal transductions pathways through one or more cytosolic protein-tyrosine kinase (i.e.human growth factor receptors) (Dudek 2007); 2) the channel-linked receptors (also called ligand-gated ion channels) where the ligand binding changes the conformation of the receptor; in this case, specific ions flow through the channel altering the electric potential across the membrane of the target cell (Absalom et al. 2004); and 3) the G-protein coupled receptors (GPCRs). GPCRs represent one of the largest and most diverse families of membrane proteins.They are encoded by more than 800 genes and constitute the largest class of drug targets in the human genome (Ghosh et al. 2015, Venkatakrishnan et al. 2013).After ligand binding, GPCR undergo conformational changes; they couple to and activate a G protein, then trigger a cascade of signal transduction leading to various physiological and pathological processes (Venkatakrishnan et al. 
2013).GPCRs are also targeted by environmental pollutants, such as TCDD which was identified to activate the GPCR signalling pathway maps (Jennen et al. 2011).In endothelial cells and adipocytes, B[a]P is able to bind the beta(2)-adrenergic receptor (β2ADR), a subfamily of GPCRs and induce intracellular calcium mobilisation and lipolysis (Irigaray et al. 2006, Mayati et al. 2012).GPCRs can also be targeted by endocrine disruptors.Indeed, some phthalate esters have the potential to bind to the G protein-coupled cannabinoid-1 (CB1) receptor and to modify CB1 receptordependent behaviour; DDT acted as a positive allosteric modulator on the human follitropin receptor function (Bisset et al. 2011, Munier et al. 2016). GPCRs are involved in many diseases including cancer (Nohata et al. 2017, Schlyer andHoruk 2006).A known GPCR, involved in the activation of intracellular signalling pathways that promote cancer development, is the G protein-coupled estrogen receptor 1 (GPER1), also known as GPR30, which is largely localised within intracellular membranes predominantly in the endoplasmic reticulum, while also found weakly expressed at the cell surface membrane (Cheng et al. 2011, Gaudet et al. 2015).GPER1 is activated by a large range of stimuli, including hormones and environmental molecules (Lu and Wu 2016).This receptor is characterised by its involvement in the estrogen signalling pathway and its high affinity to xenoestrogens and 17β-estradiol (E2), especially in cells that do not express classical ER (Filardo et al. 2000, Maggiolini and Picard 2010, Prossnitz and Hathaway 2015). The present review will highlight the recent research advances regarding carcinogenic mechanisms with the focus on two receptors in mediating environmental pollutants effects: the established nuclear receptor the Aryl hydrocarbon receptor (AhR), known to have a major role in the metabolism of toxic compounds and the promotion of tumours and the emerging membrane receptor G-protein coupled estrogen receptor 1 (GPER1), known to mediate estrogenic activity of environmental xeno-estrogens in different cell types (Filardo 2018, Xue et al. 2018). Overview AhR is a cytosolic nuclear receptor that, after binding with its ligand, moves to the nucleus and acts as a transcription factor (Denison et al. 2002, Schmidt andBradfield 1996).It belongs to the family of basic-helix/loop/helix per-Arnt-sim (bHLH/PAS) domain containing transcription factors (Burbach et al. 1992, Fukunaga et al. 1995).The structure of AhR is composed of an amino (N-) terminal bHLH domain, which is a common entity in a variety of transcription factors, required for DNA binding; followed by two per-Arnt-sim (PAS) domains (A and B) and a carboxy (C-)terminal transactivation domain (TAD) (Crews and Fan 1999, Fukunaga et al. 1995, Jones 2004).The ligand binding site of AhR is present within the PAS-B domain (Burbach et al. 1992, Coumailleau et al. 1995).In the absence of ligand, AhR is sequestered in the cytoplasm by the heat shock protein 90 (Hsp90), hepatitis B virus x-associated protein 2 (XAP2) and the p23 protein.Activation by a ligand induces the dissociation of XAP2 and p23; and the AhR/Hsp90 complex translocates to the nucleus forming the first essential step in AhR activation (Ikuta et al. 2000, Kazlauskas et al. 2001, Tsuji et al. 
2014).Once in the nucleus, the AhR detaches from Hsp90 and heterodimerises with AhR nuclear translocator (ARNT), allowing the AhR/ARNT complex to bind to response elements called xenobiotic responsive elements (XRE), located in the promoters of target genes to induce their transcription (Dolwick et al. 1993, Fukunaga et al. 1995).It has been shown that AhR activation can cause toxic and carcinogenic effects (Schmidt and Bradfield 1996).Many metabolites could be candidates for natural endogenous AhR ligands such as the arachidonic acid metabolites (i.e. the lipoxin A4), heme metabolites (i.e. the bilirubin) and the tryptophan metabolites (i.e. the kynurenine and the kynurenic acid) (Schaldach et al. 1999, Sinal and Bend 1997, Wirthgen and Hoeflich 2015).The bestcharacterised AhR ligands that act as powerful activators have been identified as environmental toxins (Denison and Nagy 2003).These activators derive mainly from two classes of compounds: PAH such as B[a]P and halogenated aromatic hydrocarbons such as TCDD which have high-affinity for AhR binding (Denison et al. 2002).It has been proven that B[a]P induced its carcinogenicity at least via AhR (Shimizu et al. 2000).During its activation, AhR stimulates the expression of target genes, such as CYP1A1, CYP1A2 and CYP1B1, that are important in the metabolism and bioactivation of carcinogens (Kerzee andRamos 2001, Oyama et al. 2012). AhR and cancer Constitutive activation of AhR.Studies have shown that AhR can be constitutively active, presumably because of endogenous ligands and plays an important role in the biology of several cell types when exogenous ligands (environmental molecules) are absent.Environmental molecules that deregulate cell cycle control via AhR pathway.As cited previously, the TCDD and B[a]P represent high-affinity xenobiotic ligands for the AhR. Emerging evidence has demonstrated the role of the AhR and its ligands in cancer.A study showed that the treatment of rat liver normal cells with TCDD leads to the activation of the transcription factor JUN-D, via AhR, resulting in the transcriptional induction of the cell cycle regulator proto-oncogene Cyclin A, that provokes a release from contact inhibition (Weiss et al. 2008).Within the same cell lines, the B[a]P, also via AhR, disrupts the contact inhibition and enhances cell proliferation (Andrysík et al. 2007).Studies in a human adenocarcinoma cell line revealed that AhR agonist (TCDD) was able to stimulate the growth of cancer cells by inducing the expression of E2F/DP2 complex which is involved in cell cycle regulation and DNA synthesis (Shimba et al. 2002).Thus, the activation of AhR plays a significant role in cell cycle deregulation induced by environmental molecules. Environmental molecules that influence apoptosis.Inhibition of apoptosis is also a factor for tumour promotion/progression.In a model for studying hepatocarcinogenesis, TCDD stimulates the clonal expansion of pre-neoplastic hepatocytes by inhibiting apoptosis (Bock and Köhle 2005).It was also demonstrated in vitro that the use of AhR antagonist abolishes resistance to TCDD-induced apoptosis in three different lymphoma cell lines.Indeed, the TCDD-mediated inhibition of apoptosis via AhR was associated with an increase in cyclooxygenase-2 (COX-2) and deregulation of genes of the B-cell lymphoma-2 (Bcl-2) family such as the anti-apoptotic proteins Bcl-xl and Mcl-1 (Vogel et al. 
2007).In addition, the activation of AhR by TCDD in mouse fibroblasts represses the induction of the pro-apoptotic E2F1 target genes such as TP73 and Apoptotic protease activating factor 1 (Apaf1); however, the inhibition of AhR causes an increase in E2F1 protein that will promote apoptosis (Marlowe et al. 2008).Moreover, Bekki et al. explored the activation of AhR by TCDD and kynurenine (an endogenous ligand for AhR) and found that these compounds were able to suppress the apoptotic response induced by anti-cancer therapy in breast cancer cells and induce inflammatory genes, such as COX-2 and nuclear factor kappa-light-chain-enhancer of activated B cells subunit RelB (NF-κB) (Bekki et al. 2015).These studies showed an anti-apoptotic function of the AhR suggesting its tumour promoting role. Environmental molecules that affect cellular plasticity.Deregulation of cell-cell contact and tumour malignancy is associated with increased AhR expression.For instance, Diry et al. (2006) highlighted the effect of TCDD and 3-methylcholanthrene via AhR on cellular motility.Dioxin stimulated cytoskeleton remodelling, resulting in an increased interaction with the extracellular matrix and loosening of the cell-cell contact.This pro-migratory activity was mediated by the activation of Jun NH2-terminal kinase (JNK) and reverted with a JNK inhibitor (Diry et al. 2006) The molecular mechanisms affected by the activation of AhR by environmental pollutants, discussed above, are represented in Fig. 1. The case of GPER1, an emerging receptor in mediating environmental pollutants impact 4.1 Overview GPER1 is a seven transmembrane-domain G protein-coupled receptor that shares, with other GPCR, a similar global architecture which consists of a transmembrane canonical part formed of seven helices α with various sequences serving as a communication link between the ligands and the G protein coupling region; the extracellular part consists of three extracellular loops containing the N-terminus and the intracellular part consisting of three intracellular loops with the C-terminus (Lu and Wu 2016).A large number of molecules that bind to classical ER can also bind to GPER1.Amongst these ligands, we distinguish some molecules that bind strongly to GPER1 such as: 1) the endogenous ligands including E2 acting as agonist and estriol (E3) acting as antagonist; 2) the antiestrogens tamoxifen and ICI 182,780 used in hormone therapy, in contrast to their antagonistic properties on ER, act as agonists on GPER1; and 3) the xeno-estrogens such as DDT, mono-2-ethylhexyl phthalate (MEHP) and BPA (Fitzgerald et al. 2015, Lappano et al. 2010, Thomas et al. 2005, Thomas and Dong 2006, Tiemann 2008).The localisation of GPER1 in the membrane promotes this coupling with heterotrimeric G proteins composed of Gαs and Gβ/γ subunits (Maggiolini andPicard 2010, Thomas et al. 
2005). Following the activation of GPER1, which is localised on the membrane of the endoplasmic reticulum, by a ligand, the receptor adopts a conformational change resulting in an exchange of guanosine diphosphate for guanosine triphosphate at the level of the G protein, which in turn triggers the dissociation of the α subunit from the β/γ subunits and from the receptor. The Gβ/γ subunits stimulate Src tyrosine kinase, leading to the activation of matrix metalloproteinases (MMP) and therefore triggering a series of intracellular signal transduction cascades comprising the epidermal growth factor receptor (EGFR), a plasma membrane-associated enzyme which belongs to the ErbB/HER family of tyrosine kinase receptors (Filardo et al. 2000, Maggiolini and Picard 2010, Quinn et al. 2009). The MMP then release heparin-bound EGF (HB-EGF) from the cell surface; EGF binds to its receptor, the EGFR, and thus activates the underlying signalling pathways, such as the PI3K/Akt pathway and the MAPK/ERK pathway, in normal and malignant cells (Fan et al. 2018, Maggiolini and Picard 2010). As for the Gαs subunit, it activates adenylyl cyclase and then produces cAMP, which in turn activates phospholipase C (Maggiolini and Picard 2010). GPER1 and cancer GPER1 may promote carcinogenesis. The chemical structure of BPA, which resembles that of E2, confers estrogenic properties on BPA (Brzozowski et al. 1997). It was demonstrated that, besides its activity through ER, BPA induces cell proliferation and migration via the GPER1/EGFR/ERK pathway in breast cancer cells (Pupo et al. 2012). In addition, the fact that BPA is able to bind GPER1 and to activate non-genomic pathways could explain these fast effects on the activation of signalling pathways, even at low doses (Richter et al. 2007, Talsness et al. 2009). For instance, at doses of 10⁻⁹ M to 10⁻¹² M, BPA showed a proliferative effect on testicular cancer JKT-1 cells by activating the signalling pathways involving protein kinase A and protein kinase G via GPER1 (Bouskine et al. 2009). Moreover, in seminoma cells, BPA was also able to promote proliferation through GPER1 (Chevalier et al. 2011). By binding to GPER1, BPA induced activation of ERK1/2 and transcriptional regulation of c-fos in human breast cancer cells via the AP1-mediated pathway (Dong et al. 2011). Additionally, in breast cancer cells and through GPER1, BPA activated signal transduction pathways; it mediated migration and invasion by inducing the expression of kinases such as FAK, Src and ERK2 and by increasing AP-1 and NFκB-DNA binding activity through a Src- and ERK2-dependent pathway (Castillo Sanchez et al. 2016). Interestingly, in non-hormonal cancers, BPA binds to GPER1 and induces cancer progression in laryngeal squamous cell carcinoma and lung cancer cells (Li et al. 2017, Zhang et al. 2014). In a hypoxic microenvironment, BPA stimulated cell proliferation and migration of vascular endothelial cells and breast cancer cells in vitro by up-regulating hypoxia inducible factor-1 alpha and VEGF expression in a GPER1-dependent manner, and enhanced tumour growth in vivo (Xu et al. 2017). A recent study showed that one of the BPA derivatives, 4,4'-thiodiphenol, displaying more powerful estrogenic activity than BPA, was able to stimulate cell proliferation in ERα-positive cancer cells by activating the GPER1-PI3K/AKT and ERK1/2 pathways (Lei et al.
2017).Therefore, more attention should be paid to BPA exposure.In addition, lower concentrations of phthalates were able to promote human breast cancer progression by inducing a proliferative effect through the PI3K/AKT signalling pathway (Chen and Chien 2014).Recent data showed that the MEHP, an environmental xenoestrogen, triggered the proliferation of cervical cancer cells within a GPER1/Akt-dependent-manner by directly binding to GPER1 (Yang et al. 2018).As for DDT, in 2006 Thomas and Dong showed that the derivative compounds of DDT displayed affinity for GPER1, but to date, data lack studies for testing the effect of DDT on carcinogenesis via the GPER1-dependent manner (Thomas and Dong 2006). GPER1 is implicated in pathways that lead to the activation of the transcriptional machinery. Studies have demonstrated the involvement of GPER1 in cell proliferation, cell survival and cell migration mechanisms by inducing the transcription of genes such as cyclin D2, Bcl-2, connective tissue growth factor (CTGF) and the oncogene c-fos etc. (Kanda and Watanabe 2003, Kanda and Watanabe 2004, Maggiolini et al. 2004, Pandey et al. 2009).These data suggest possible roles for GPER1 in the development of metastases and in the resistance to anti-estrogens.The role of GPER1 in promoting cancer is also reinforced with the presence of a cross talk between GPER1 and the insulin-like growth factor receptor-1 which is associated with multiple tumour progression characteristics, such as the development of metastases and resistance to chemotherapy by triggering downstream pathways, such as ERK and AKT (De Marco et al. 2013, Knowlden et al. 2008, Lappano et al. 2013). The possible GPER1 biomarker value in cancer.Several studies have highlighted the use of GPER1 as a cancer biomarker.The results of a clinical study showed that the expression of GPER-1 might correlate with clinical and pathological-poor outcome biomarkers, by showing an association with metastasis, human epidermal growth factor receptor 2 (HER2) expression and tumour size (Filardo et al. 2006).GPER1 has also been shown to be an important prognostic factor in high-risk endometrial cancer patients with lower survival rates (Smith et al. 2007).High expression levels of GPER1 have been correlated with low survival rates in breast cancer patients treated with tamoxifen and in patients with the aggressive epithelial ovarian cancer (Ignatov et al. 2011, Smith et al. 2009).In silico, a bad prognostic value for high levels of expression of GPER1 in HER2+ breast cancers subtype was obtained (Yang and Shao 2016).As well, Fahlén et al. ( 2016) have found that malignant breast tumours showed a high expression of GPER1 compared to benign tumours.Interestingly, a recent study showed that GPR30 expression was observed in both the cytoplasm and nucleus of cells from ovarian cancer tissues where the nuclear GPER1 expression predicts poor survival in patients with ovarian cancer, especially in those with a high grade malignancy (Zhu et al. 2018).To date, there is no study showing a correlation between the environmental carcinogen exposure and GPER1 expression to be used as a biomarker. The molecular mechanisms affected by the activation of GPER1 by environmental pollutants, discussed above, are represented in Fig. 2. 
Conclusion Several risk factors have been identified as playing important roles in carcinogenesis. Some major factors were attributed to exposure to environmental molecules. In this review, we showed that exposure to environmental molecules can play a crucial role in the process of carcinogenesis. These molecules have the ability to interact with cellular receptors and to act either as initiators of carcinogenesis through their genotoxic effects or as agents promoting carcinogenesis via their estrogenic-like activities (xenoestrogens). Among cellular receptors, we highlighted two main receptors, AhR and GPER1, for which many studies have demonstrated an implication in carcinogenesis. As discussed, studies reported that environmental pollutants exert estrogenic effects. AhR has long been identified as a receptor that mediates the effects of environmental pollutants. Acting as a transcription factor that responds to xenobiotics, it plays significant roles in the development and progression of cancer cells, including proliferation and differentiation, genetic damage, toxin metabolism, angiogenesis and survival, and its overexpression and constitutive activation have been observed in various tumour types. Some studies have also suggested that higher AhR activity could be correlated with increased aggressiveness and a poor prognosis. Previously, mechanistic studies focused on the actions of these pollutants mediated by the ER pathway and gave less importance to their effects mediated by the GPER1 pathway. However, GPER1 has proved to be an emerging membrane receptor in mediating the impact of environmental pollutants. The currently available data suggest that GPER1 is a potential target for xenoestrogens in the human body. There is now good evidence that GPER1 may contribute separately to estrogen-induced carcinogenesis owing to its ability to activate the transcriptional machinery and to engage different intracellular signalling mechanisms that promote cancer progression, such as cell proliferation, migration, and escape from apoptosis and cell cycle arrest. Moreover, several studies do suggest that GPER1 measurement alone may be a significant biomarker in cancer and may therefore hold prognostic significance. In this context, more studies are needed to fully establish the role of the pollutants to which we are chronically (daily) exposed in inducing carcinogenesis and to develop a better understanding of how cellular receptors cooperate with these molecules to drive the biology of cancer. In fact, this type of research encounters important barriers to progress; for instance, some chemicals are rapidly metabolised, and many exposures are complex mixtures of chemicals with varied mechanisms of action, so reconstructing environmental exposures to assess the effects of pollutants is a great challenge. Furthermore, research needs to include continued support of cohorts with prospective exposure measurements from early life, so that further follow-ups will be informative. Finally, epidemiological studies highlight the need for better chemical testing and risk assessment approaches that are relevant to cancer and that could be essential for cancer prediction and prevention. Clearly, the current scientific challenge is to identify new molecular biomarkers of environmental exposure that could be used to develop candidate prevention strategies against environmental carcinogenesis induced by molecules with different mechanisms of action.
Schlezinger et al. (2006) reviewed the involvement of AhR in mammary gland tumourigenesis through the inhibition of apoptosis and the promotion of the transition to an invasive phenotype. Additionally, in a human hepatoblastoma cell line, Terashima et al. (2013) demonstrated that, under glucose deprivation, the AhR pathway induces vascular endothelial growth factor (VEGF) expression by activating transcription factor 4. In addition, knockdown or inhibition of AhR inhibits the invasion and migration of cancer cells, as well as downregulating the expression of metastasis-associated genes in tumour cells (Goode et al. 2014, Parks et al. 2014). In vivo and in vitro, D'Amato and colleagues demonstrated that the tryptophan 2,3-dioxygenase (TDO2)-AhR pathway plays a crucial role in the anoikis resistance and metastasis of triple-negative breast cancer (TNBC) cell lines. TNBC cells upregulate the enzyme TDO2, thereby causing AhR activation by kynurenine, an endogenous ligand whose production is catalysed by TDO2 (D'Amato et al. 2015). Moreover, in the human breast cancer cell line MDA-MB-231, knockdown of AhR by RNAi decreased proliferation, anchorage-independent growth and migration of the cells, suggesting a pro-oncogenic function of AhR (Goode et al. 2013). Figure 1. Schematic summary representing the main effects of environmental pollutants in carcinogenesis mediated by the AhR receptor. Exogenous ligands activate AhR and affect several molecular mechanisms, resulting in the expression of various genes that cooperate to promote carcinogenesis. Figure 2. Schematic summary representing the main effects of environmental pollutants in carcinogenesis mediated by the GPER1 receptor. Exogenous ligands activate GPER1 and affect several molecular mechanisms, resulting in the expression of various genes that cooperate to promote carcinogenesis. B[a]P was reported to disrupt contact inhibition and reduce gap junctional intercellular communication via downregulation of connexin-43 in an AhR-dependent manner (Morris and Seifter 1992, Rundle et al. 2000). In addition, activation of the B[a]P-dependent signal transduction pathway, in which AhR involvement is primordial for B[a]P-induced carcinogenesis, also interferes with biological processes involved in the migration and invasion of breast cancer cells and triple-negative breast cancer (TNBC) cells, which represent the worst-prognosis subtype of breast cancer (Castillo-Sanchez et al. 2013, Guo et al. 2015, Novikov et al. 2016, Shimizu et al. 2000). B[a]P exposure has also been associated with DNA breaks, DNA damage and mutations in oncogenes and tumour suppressor genes (Chiang and Tsou 2009, IARC 2012, Morris and Seifter 1992, Rundle et al. 2000, Tarantini et al. 2011). BPDE was demonstrated to induce K-ras mutations in normal human bronchial epithelial and fibroblast cells; these mutations have also been found in lung tumours of people exposed to the smoke of charcoal combustion during their work (Feng et al. 2002, IARC 2010). AhR has also been reported to affect cancer stem cells (CSC), a subtype of cancerous cells, and to lead to the initiation, progression and development of metastases in carcinogenesis (Gasiewicz et al. 2017). One hypothesis assumes that tumours are maintained by a self-renewing CSC population, which is also able to differentiate into the non-self-renewing cell populations that make up the mass of the tumour (McDermott and Wicha 2010). Stanford et al. (2016) have shown that activation of AhR increases the development of CSC and their characteristics in TNBC cells.
AhR activation induces CYP1A1 (Romagnolo et al. 2015), which is involved in the biotransformation of B[a]P, a procarcinogen, into B[a]P-diol-epoxide (BPDE), an ultimate mutagen with a strong electrophilic power that allows it to form DNA adducts causing cytogenetic alterations. Furthermore, hyperactivation of AhR by B[a]P increases the activity of the stem cell-specific marker aldehyde dehydrogenase (ALDH) and the expression of migration/invasion-associated genes such as Snai1, Twist1, Twist2, Tgfb1 and Vim. In addition, AhR ligands increase the translocation of the sex-determining region Y-box 2, a master regulator of self-renewal, to the nucleus (Stanford et al. 2016). These data highlight the role of AhR in the development of cells with cancer stem cell-like properties and, above all, the role of environmental AhR ligands in intensifying breast cancer progression. A recent study demonstrated that the AhR/CYP1A1 signalling pathway, activated by TCDD and 2,4-dimethoxybenzaldehyde, appears to be involved in the regulation (development, maintenance and self-renewal) of breast CSC via the PTEN/Akt and β-catenin pathways, by inhibiting the expression of PTEN and activating the expression of Akt and β-catenin (Al-Dhfyan et al. 2017). The possible AhR biomarker value in cancer Few studies have investigated the prognostic value of AhR in cancer. In upper urinary tract tumours, high levels of nuclear AhR expression predicted a higher tumour grade (Ishida et al. 2010). Later, nuclear translocation of AhR was found to be associated with a poor prognosis in squamous cell carcinoma (Su et al. 2013). Moreover, ERα-negative breast cancers exhibited a high expression of AhR coupled with hypermethylation of the CpG islands of the BRCA1 gene promoter (in other words, with BRCA1 inactivation), suggesting that it could serve as a predictive biomarker for tumour development (Romagnolo et al. 2015).
v3-fos-license
2018-04-03T02:02:05.668Z
2016-10-03T00:00:00.000
14658943
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0163277&type=printable", "pdf_hash": "50ca806120c1fff8e4e56a07d1b5e37ab0a691f4", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44908", "s2fieldsofstudy": [ "Biology" ], "sha1": "50ca806120c1fff8e4e56a07d1b5e37ab0a691f4", "year": 2016 }
pes2o/s2orc
Transcriptional Regulation of Frizzled-1 in Human Osteoblasts by Sp1 The wingless pathway has a powerful influence on bone metabolism and is a therapeutic target in skeletal disorders. Wingless signaling is mediated in part through the Frizzled (FZD) receptor family. FZD transcriptional regulation is poorly understood. Herein we tested the hypothesis that Sp1 plays an important role in the transcriptional regulation of FZD1 expression in osteoblasts and osteoblast mineralization. To test this hypothesis, we conducted FZD1 promoter assays in Saos2 cells with and without Sp1 overexpression. We found that Sp1 significantly up-regulates FZD1 promoter activity in Saos2 cells. Chromatin immunoprecipitation (ChIP) and electrophoretic mobility shift (EMSA) assays identified a novel and functional Sp1 binding site at -44 to -40 from the translation start site in the FZD1 promoter. The Sp1-dependent activation of the FZD1 promoter was abolished by mithramycin A (MMA), an antibiotic affecting both Sp1 binding and Sp1 protein levels in Saos2 cells. Similarly, down-regulation of Sp1 in hFOB cells resulted in less FZD1 expression and lower alkaline phosphatase activity. Moreover, over-expression of Sp1 increased FZD1 expression and Saos2 cell mineralization, while MMA decreased Sp1 and FZD1 expression and Saos2 cell mineralization. Knockdown of FZD1 prior to Sp1 overexpression partially abolished Sp1 stimulation of osteoblast differentiation markers. Taken together, our results suggest that Sp1 plays a role in human osteoblast differentiation and mineralization, which is at least partially mediated by Sp1-dependent transactivation of FZD1. Introduction Transcription factor Sp1 regulates genes in both a positive and negative manner [1]. Sp1 plays an important role in cell cycle progression [2,3], apoptosis [4,5], and the cellular response to hormone/growth factor stimulation [6,7]. Sp7 (osterix), another member of the Sp transcription factor family, is essential for bone development and mineralization [8]. Knockout of Sp7 leads to a significant delay and reduction of bone maturation and mineralization in newborn mice [8]. Although a direct role of Sp1 in osteoblast differentiation and bone formation is less well known, a single nucleotide polymorphism (SNP) affecting Sp1 binding in the COL1A1 gene promoter has been associated with reduced bone mineral density (BMD) [9] and increased risk of osteoporotic fracture [10][11][12][13][14]. These studies support a potential role of Sp1 in osteoblast differentiation and mineralization. Frizzled1 (FZD1) is a receptor for the Wnt signaling pathway, and promoter variants in FZD1 have been associated with BMD [15,16]. FZD1 plays a role in osteoblast mineralization, and the FZD1 promoter is regulated by several transcription factors including early growth response 1 (EGR1), E2F transcription factor 1 (E2F1) and activating protein 2 (TFAP2) [15,17,18]. In addition, allele-specific transactivation of the FZD1 promoter by EGR1 has also been reported [15]. To further investigate the transcriptional regulation of FZD1, we performed bioinformatics analysis in silico and identified putative binding sites for Sp1 in the FZD1 promoter. To determine whether Sp1 is a regulator of osteoblast mineralization and FZD1 expression, we analyzed the transactivation of the FZD1 promoter by Sp1 and the effects of Sp1 on osteoblast mineralization in Saos2 cells and further validated the findings in human fetal osteoblasts (hFOB).
Saos2 is a cell line derived from a primary osteosarcoma and has been well documented to undergo osteoblastic differentiation in a natural manner [19,20]; therefore, we used Saos2 as our in vitro osteoblast mineralization model. We identified a novel functional Sp1 binding site and its role in the activation of the FZD1 promoter. Furthermore, Sp1 enhanced mineralization of Saos2 osteoblastic cells at a later stage of osteoblast differentiation. Our findings suggest that Sp1 regulates FZD1 gene expression and influences mineralization of human osteoblasts. Construction of plasmid and luciferase assay Luciferase reporter plasmids of pGL3 basic (Promega, USA) containing 726 base pair (bp, full length -655 to +71 nucleotide relative to the translation start site) or 246 bp (proximal -175 to +71 nucleotide relative to the translation start site) FZD1 promoter fragments (FZD1-pGL3 plasmids) were described previously [15,17] and used for transfection. Recombinant plasmids containing mutated nucleotides AAA in each of the two putative core Sp1 binding sites (-44 to -40 and -97 to -93 nucleotide relative to the translation start site) were generated using the wild-type proximal FZD1-pGL3 plasmid and the QuikChange Lightning site-directed mutagenesis kit (Agilent Technologies, USA). Mutation was confirmed by direct sequencing, and the plasmids were used for transfection and luciferase assay. Expression plasmids for Sp1 and mutated Sp1 were purchased from Addgene (#12097 and #12098, respectively). For transfection experiments, Saos2 cells were seeded at a density of 1 × 10^5/well in 24-well plates for 24 hr, followed by co-transfection of 100 ng FZD1 reporter plasmid and 250 ng expression plasmid for Sp1. Co-transfection of FZD1 reporter and β-gal expression plasmid was used as a control. A Renilla luciferase reporter was included as an internal control for all transfections. Transfected cells were cultured for another 48 hr, and whole-cell protein was harvested in 1× passive lysis buffer for the luciferase assay. Dual luciferase activity was measured on a SpectraMax L microplate reader (Molecular Devices, USA) using a dual luciferase assay kit (Promega, USA). The luciferase assay was carried out in triplicate and repeated three times. Chromatin immunoprecipitation (ChIP) assay ChIP assay was performed as described [17,18]. In brief, Saos2 cells were treated with 1% formaldehyde freshly made in PBS for 10 min at room temperature. Chromatin samples were prepared and subjected to ChIP assay with antibodies against Sp1 or normal IgG as a control. Pulled-down DNA fragments and input DNA were used as templates for PCR using primers designed to amplify the -273 to +54 or -44 to +54 region relative to the translation start site of the FZD1 promoter containing the putative Sp1 binding sites. PCR products were analyzed using agarose electrophoresis. Electrophoretic mobility shift assay (EMSA) Sense and antisense oligonucleotides containing the putative Sp1 core binding site were synthesized (CCGCCGGCCGTGCCCCTGGCAGCC, with the Sp1 binding site underlined), end-labeled with biotin and annealed. Saos2 cells were infected with Adenovirus-Sp1 (Applied Biological Materials Inc.) for 48 hr, and nuclear extracts were prepared using a Nuclear Extraction Kit (Active Motif, USA). Two micrograms of nuclear protein were used for each binding reaction with the Sp1 binding site oligonucleotides, and EMSA experiments were carried out using a LightShift Chemiluminescent EMSA kit according to the manufacturer's instructions (Thermo Scientific, USA).
For Sp1 supershift experiments, 0.2 μg of Sp1 antibody (Santa Cruz, sc-59) was added to the reaction and incubated for an additional 60 min at 4°C following the standard binding reaction. Normal rabbit IgG was used in parallel as a control. The EMSA assay was repeated three times. Cell culture and mithramycin A (MMA) treatment Saos2 cells were seeded at a density of 4 × 10^5 cells/35-mm dish and cultured for 24 hr, followed by MMA treatments for another 48 hr at 20, 100 and 200 nM (a 100 μM stock was prepared in 100% ethanol). The MMA treatment assay was repeated two times. Gene knockdown by siRNA in osteoblasts Human fetal osteoblast cells (hFOB) and Saos2 cells (1 × 10^5 cells/well) were seeded in 12-well plates overnight prior to transfection. Sp1 siRNA (50 nM) or an equal amount of scramble siRNA was transfected into Saos2 and hFOB cells. At 48 hr post transfection, the cells were harvested for further analysis including Western blot, real-time quantitative PCR and alkaline phosphatase (ALP) staining. FZD1 knockdown experiments were performed as described [17]. Real-time quantitative PCR Human FOB cells were transfected with 50 nM Sp1 siRNA or scramble siRNA and cultured for an additional 48 hr. The cells were harvested for total RNA isolation using TRIzol reagent (Invitrogen, USA). One microgram of total RNA was used for reverse transcription using the High Capacity cDNA Reverse Transcriptase kit (Life Technology, USA). Real-time quantitative PCR was performed using 10 ng cDNA and a SYBR green master kit and carried out on a QuantStudio™ 5 System (Applied Biosystems, USA). PCR primers for the FZD1, ALP, COL1A, OCN and OPN genes are described [17,18]. The expression levels of Sp1 were determined using the PCR primer pair 5'-CCGCAGGTGAGAGGTCTTG-3'/5'-ACAGCCCAGATGCCCAACC-3'. Saos2 cells were transfected with 20 μM of FZD1 siRNA or scramble siRNA and cultured for 24 hr. The cells were subsequently transfected with 1 μg of Sp1 or β-gal expression plasmid and cultured for an additional 48 hr. Cells were harvested for total RNA isolation and real-time quantitative PCR analysis as described above. Alkaline phosphatase staining in osteoblasts Saos2 and hFOB cells were seeded and transfected with siRNA as described above. At 48 hr post transfection, growth media were changed and cells were cultured for an additional 72 hr. Cells were then fixed with 4% formaldehyde at room temperature for 10 min, followed by incubation with BCIP/NBT liquid substrate solution (Sigma, USA) at room temperature for 15 min. Stained cells were photographed, and ALP activity (staining intensity) was determined by densitometry using ImageJ software (https://imagej.nih.gov/ij/index.html). ALP activity experiments were conducted in triplicate and repeated at least 2 times independently. Mineralization and Alizarin red S staining Saos2 cells were cultured in osteoblastic differentiation medium containing 50 μg/ml ascorbic acid and 10 mM β-glycerophosphate for up to 18 days. Differentiation medium was changed every other day. For the Sp1 over-expression experiment, cells were infected with Adenovirus-Sp1 or Adenovirus-β-gal for 48 hr and then cultured in differentiation medium for 9 and 18 days to determine the effects of Sp1 in the early and late stages of differentiation and mineralization. For the MMA treatment, 100 nM MMA or an equal volume of ethanol was added to the MMA treatment or control group, respectively.
After 24 hr, the cells were cultured in the same differentiation condition as described above and fixed at day 14 for the mineralization assay. Cells were also cultured in growth media without the supplements in parallel as an undifferentiated control for both experiments. Cell fixing and staining with Alizarin red-S and quantification of mineralization in these cells using cetylpyridinium chloride were performed as described [18]. Statistical analysis Statistical analysis was performed using Student's t-test or one-way ANOVA followed by a Bonferroni multiple comparison adjustment. Differences were considered significant at P < 0.05. Sp1 up-regulates FZD1 promoter activity Our previous studies reported that several transcription factors (EGR1, E2F1 and TFAP2) regulate FZD1 promoter activity in osteoblasts and that regulation by Egr1 was modulated by a promoter polymorphism (rs2232158). Moreover, both rs2232157 and rs2232158 in the FZD1 promoter have been associated with bone mineral density (BMD) [15,16]. Bioinformatics analysis of rs2232157 identified a putative Sp1 binding site on the antisense sequences for each of the alleles (C/AGGGCGCGC) using the PROMO transcription factor site search engine (http://alggen.lsi.upc.es/cgi-bin/promo_v3/promo/promoinit.cgi?dirDB=TF_8.3), and the sequences associated with the T allele had a better match to the binding site matrix compared to the wild-type G allele. To test whether Sp1 regulates FZD1 promoter activity and whether this effect is modified by rs2232157, we co-transfected Sp1 expression plasmids with rs2232157 and rs2232158 haplotype-specific pGL3-FZD1 reporters and analyzed the promoter activity. Overexpression of Sp1 significantly increased FZD1 promoter activity approximately 3-fold for all three naturally occurring haplotypes (Fig 1A). However, there were no significant differences in Sp1-dependent promoter transactivation for the GC and TC haplotypes corresponding to the G and T allele of rs2232157, respectively. Furthermore, all three haplotypes had similar increases in promoter activity in cells overexpressing Sp1, suggesting that the Sp1 transactivation is independent of the two promoter SNPs. To determine the specific regions responsible for the Sp1 transactivation of FZD1, we tested Sp1 effects on the promoter activity of both the full-length (726 bp) and proximal (246 bp) FZD1 promoter reporters. Overexpression of Sp1 produced similar increases (approximately 4-fold) for both promoters regardless of the promoter length and the G or C allele of rs2232158 (Fig 1B). Transactivation was abolished for the full-length and proximal promoters when a loss-of-function Sp1 mutant was overexpressed (Fig 1C), suggesting that the proximal promoter is responsible for Sp1 activation. Furthermore, treating the Sp1-overexpressing cells with a known inhibitor of Sp1 activity and Sp1 protein levels, Mithramycin A (MMA), partially abolished the Sp1 activation of the FZD1 promoters (Fig 1D). These results suggest that Sp1 transactivates the FZD1 promoter through the proximal region of the promoter. Direct binding of Sp1 to the FZD1 promoter To identify the specific Sp1 binding site within the proximal promoter, we performed bioinformatic analysis of the FZD1 promoter and identified two putative Sp1 binding sites at -44 to -40 and -97 to -93 (relative to the translation start site). To characterize these Sp1 sites further, we performed site-directed mutagenesis of these sites using the wild-type proximal promoter and co-transfected the constructs with Sp1 expression plasmid into Saos2 cells.
Mutation of the Sp1 binding site located at the upstream promoter region (-97 to -93) did not affect Sp1 transactivation (Fig 2A, Mut-up), whereas mutation of the downstream region (-44 to -40) reduced Sp1 transactivation of the FZD1 promoter by 89% (Fig 2A, Mut-dn). To test whether Sp1 binds directly to the FZD1 promoter, we conducted an Sp1-specific ChIP assay using Saos2 cells. FZD1-specific products were amplified from the ChIP-precipitated DNA using PCR primer pairs spanning either the downstream Sp1 binding site (F1 and R) or both of the putative Sp1 binding sites (F2 and R) (Fig 2B). The FZD1 promoter was amplified with both sets of primer pairs, demonstrating that Sp1 binds to the -44 to +55 region in vivo (Fig 2C). Direct binding of Sp1 to the -44 to -40 region was further confirmed by EMSA analysis with both wild-type and mutated oligonucleotides spanning the binding site. Formation of specific binding complexes with the labeled wild-type probe was abolished by unlabeled wild-type but not mutated probes (Fig 2D). The addition of an Sp1-specific antibody dramatically interfered with the formation of Sp1-specific binding complexes, further suggesting that Sp1 binds to this specific site (Fig 2D). Sp1 upregulates FZD1 expression To determine whether the Sp1-dependent activation of the FZD1 promoter leads to up-regulation of FZD1 expression, we performed Sp1 over-expression or down-regulation experiments in Saos2 cells. Overexpression of Sp1 increased FZD1 expression in a dose-dependent manner (Fig 3A). To down-regulate Sp1 protein levels, we treated the cells with MMA, a known antibiotic that down-regulates Sp1 protein expression in different cell types [21][22][23], and observed dose-dependent decreases in Sp1 protein in Saos2 cells. Consistent with our over-expression experiment, FZD1 protein was also decreased in an MMA dose-dependent manner (Fig 3B). Similarly, down-regulation of Sp1 by siRNA in hFOB cells resulted in lower expression levels of FZD1 and the osteoblast differentiation markers ALP and osteocalcin (OCN), compared to control siRNA (Fig 4A and 4B). Furthermore, ALP activities measured by ALP staining were also reduced in both Saos2 and hFOB cells treated with Sp1-specific siRNA (densitometry intensity 40 versus 82 and 4 versus 13 for Saos2 and hFOB cells, respectively; Fig 4C and 4D). These experiments further support that Sp1 positively regulates the expression of FZD1 and differentiation markers of human osteoblast cells. Sp1 alters osteoblast differentiation through regulation of FZD1 To determine whether the Sp1 effects on osteoblast differentiation were mediated through FZD1, we performed an experiment with a combination of FZD1 knockdown and Sp1 overexpression in Saos2 cells. FZD1 knockdown resulted in significantly lower expression levels of FZD1 and COL1A1 (Fig 5A, #). Overexpression of Sp1 in control siRNA pre-treated cells increased FZD1 and ALP expression levels (Fig 5A, *). However, knockdown of FZD1 prior to the overexpression of the Sp1 gene abolished the Sp1-mediated effects on ALP (Fig 5A, &). Therefore, Sp1 regulation of ALP gene expression appears to be mediated by FZD1. We have reported that FZD1 is important in osteoblast mineralization in vitro using both FZD1 knockdown and overexpression systems [17,18]. Since down-regulation of Sp1 in hFOB cells reduced the expression of both the ALP and OCN genes and Sp1 up-regulates FZD1 expression, we tested whether modulating Sp1 expression affects osteoblast mineralization.
Mineralization of Saos2 cells was significantly increased at day 18 in cells overexpressing Sp1 compared to cells infected with the β-gal control. Interestingly, we did not observe increased mineralization at an early stage of differentiation (day 9, Fig 5B and 5C). Similarly, treatment of the cells with MMA dramatically inhibited the mineralization of Saos2 cells (day 14, Fig 5D and 5E). Furthermore, pre-knockdown of the FZD1 gene reduced the increase in mineralization induced by Sp1 overexpression, while scramble siRNA pre-treatment did not alter this Sp1 effect in Saos2 cells (data not shown). Thus, our findings suggest that modulation of Sp1 protein expression alters osteoblast differentiation and mineralization in vitro through activation of FZD1. Discussion Sp1 is a common transcription factor and plays an important role in cellular growth, cell cycle regulation and apoptosis [1,2,5]. In this study, we discovered that Sp1 is a novel transcriptional activator of FZD1, a co-receptor for Wnt signaling in osteoblasts. We also demonstrated that down-regulation of Sp1 reduced the expression of differentiation markers in both hFOB and Saos2 cells. Furthermore, modulation of Sp1 expression directly affected osteoblast differentiation and mineralization, and knockdown of FZD1 prior to Sp1 overexpression abolished the Sp1 effects. Our results suggest that Sp1 plays a novel role in human osteoblast differentiation and mineralization, and that these effects are at least partially mediated by Sp1-dependent transactivation of FZD1. Sp1 regulates the expression of a number of genes in osteoblasts. For example, podoplanin (PDPN), encoding an integral membrane glycoprotein, is upregulated by Sp1 in MG63, a human osteoblast-like cell line [24]. In Saos2 cells, Sp1 directly binds to the collagen XI alpha 2 (COL11A2) proximal promoter and increases both promoter activity and expression of endogenous COL11A2 [25]. Sp1 also regulates gene expression in rat [26] and mouse osteoblasts [27], bone marrow stromal cells (BMSC) [28], and osteoclasts [27]. In ROS17/2.8, a rat osteoblast-like cell line, Sp1 directly binds to and regulates the PTH/PTHrP receptor gene [26]. In mouse osteoblasts and BMSC, Sp1 regulates the basal transcription of receptor activator of nuclear factor kappa B ligand (RANKL) [28]. The promoter activity and gene expression of integrin β5 are also upregulated by Sp1 in MC3T3-E1 and mouse macrophage cells [27]. Sp1 regulates bone cell differentiation and activity by controlling the levels of transforming growth factor β type I receptor (TGFβ-RI) [29] and by regulating Runx2 expression during osteogenesis [30]. Interestingly, increased Sp1 binding to the type II collagen gene (COL2A1) promoter is required for the stimulation of COL2A1 gene expression by 17β-estradiol in differentiated and dedifferentiated rabbit chondrocytes [31]. Among the Sp protein family, Sp7/Osterix is a widely studied family member relevant for activation of osteoblast-specific genes and appears to be essential for osteoblast differentiation and bone formation, as illustrated in Sp7 knockout mice [8]. In contrast, Sp1 knockout mice die embryonically, making it difficult to explore its function in bone in vivo [32]. However, the in vitro studies in osteoblasts [24,25], osteoclasts [27], chondrocytes [31] and BMSCs [28] all suggest a role of Sp1 in bone cell differentiation and bone formation, which is consistent with our results showing that Sp1 increases FZD1 expression and osteoblast mineralization.
Sp1 function is mediated by direct binding to a GC-rich DNA sequence [30,31]. Mithramycin A (MMA) is a GC-specific DNA-binding antibiotic that inhibits RNA synthesis initiation [33,34]. MMA has been shown to inhibit Sp1 binding to other genes with GC-rich promoter motifs [35], and Sp1 expression is also inhibited by MMA in a number of cell types, such as cervical cancer KB cell lines [36], human gastric cancer N87 cell lines [37], primary neuronal cells [38] and prostate cancer PC3 and LNCaP cells [39]. In our study, the Sp1 protein level was down-regulated by MMA in a dose-dependent manner in Saos2 cells, which is consistent with results from the above studies. More importantly, MMA treatment decreased both the Sp1-dependent activation of the FZD1 promoter and the upregulation of FZD1 protein expression. Furthermore, MMA treatment also decreased mineralization of differentiated Saos2 cells. Therefore, these results provide additional evidence of Sp1 as a regulator of FZD1 and of osteoblast differentiation and mineralization. Since Sp7 binds to similar GC-rich sequences in targeted promoters, it will be important to determine whether Sp7 is also a transcriptional regulator of FZD1 in future studies. In conclusion, our study demonstrates that Sp1 is a novel and positive transcriptional regulator of FZD1 expression in human osteoblasts. Furthermore, Sp1 regulation of human osteoblast differentiation and mineralization appears to be partially mediated by upregulation of FZD1. Additional studies are needed to dissect the role of Sp1 in the regulation of other FZD family members. Figure 5 (legend, continued). Scramble siRNA pre-treated cells were used as a control. RNA was isolated after 48 hr and used for real-time PCR analysis. Gene expression levels in cells treated with specific FZD1 siRNA or scramble siRNA were compared (#). Gene expression levels were compared between cells transfected with Sp1 or β-gal expression plasmid for FZD1 siRNA ($) or scramble siRNA (*) pre-treated cells. The effects of pre-knockdown of FZD1 on Sp1-dependent regulation of gene expression were observed by a direct comparison between FZD1 siRNA and scramble siRNA pre-treated cells that were subsequently transfected with Sp1 expression plasmid (&). A P value of < 0.05 was considered significant and is labeled with each of the above-described symbols (#, $, * and &). (B) AR-S staining for mineralization of Saos2 cells over-expressing the Sp1 gene. Saos2 cells infected with Ad-Sp1 or Ad-β-gal were cultured in the presence or absence of osteoblast differentiation medium for 9 days or 18 days, and then the cells were fixed and stained with AR-S. (C) Quantitative results of the AR-S staining for Fig 4A. * indicates significant differences in mineralization between Sp1 and β-gal treated Saos2 cells (P < 0.05). (D) Saos2 cells were treated with MMA or reagent control for 24 hours, cultured in differentiation medium for an additional 14 days, and then the cells were fixed and stained with Alizarin Red-S (AR-S). (E) Quantitative results of the AR-S staining for Fig 4C. * indicates significant differences in mineralization between MMA treated and untreated Saos2 cells (P < 0.05). doi:10.1371/journal.pone.0163277.g005
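As a purely illustrative aside, the dual-luciferase readings described in the Methods are conventionally normalized to the Renilla internal control and expressed as fold change over the β-gal control before statistical testing. The short Python sketch below shows one way such data might be processed; all numeric values are invented placeholders and do not come from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical raw readings (three replicate wells per condition);
# values are illustrative placeholders, not data from the study.
firefly_sp1  = np.array([5200.0, 4800.0, 5100.0])   # FZD1 reporter + Sp1
renilla_sp1  = np.array([900.0, 850.0, 880.0])
firefly_bgal = np.array([1500.0, 1400.0, 1450.0])   # FZD1 reporter + beta-gal control
renilla_bgal = np.array([880.0, 910.0, 870.0])

# Normalize the firefly signal to the Renilla internal control in each well.
ratio_sp1  = firefly_sp1 / renilla_sp1
ratio_bgal = firefly_bgal / renilla_bgal

# Express promoter activity as fold change over the mean of the beta-gal control.
fold_change = ratio_sp1 / ratio_bgal.mean()
print(f"Mean fold activation by Sp1: {fold_change.mean():.2f}")

# Two-group comparison of the normalized ratios (Student's t-test, as in the Methods).
t_stat, p_value = stats.ttest_ind(ratio_sp1, ratio_bgal)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```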
v3-fos-license
2023-08-12T13:11:33.663Z
2023-08-08T00:00:00.000
260808813
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "b7b410b917b3cd03ec6338f5cf119f303833b8a8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44912", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "sha1": "6cd1eaf12a421f3a79952c501d26ee4a52cd8877", "year": 2024 }
pes2o/s2orc
Gut Microbiota-mediated Alleviation of Dextran Sulfate Sodium-induced Colitis in Mice Background and Aims Gut dysbiosis characterized by an imbalanced microbiota is closely involved in the pathogenesis of a widespread gastrointestinal inflammatory disorder, inflammatory bowel disease. However, it is unclear how the complex intestinal microbiota affects the development of, or resistance to, mucosal inflammation. Our aim was to investigate the impact of the gut microbiota on susceptibility in a mouse model of ulcerative colitis. Methods We compared the susceptibility to dextran sulfate sodium (DSS)-induced colitis of inbred BALB/c mice obtained from the 3 main distributors of laboratory animals in Japan. Clinical symptoms of the colitis and the faecal microbiota were assessed. A cohousing approach was used to identify whether the gut microbiota is a primary factor determining disease susceptibility. Results Here, we showed differences in the susceptibility of BALB/c mice from the vendors to DSS colitis. Analysis of the gut microbiota using 16S ribosomal RNA sequencing revealed clear separation of the gut microbial composition among mice from the vendors. Notably, the abundance of the phylum Actinobacteriota was strongly associated with disease activity. We also observed the expansion of butyrate-producing Roseburia species in mice with decreased susceptibility to the disease. Further cohousing experiments showed that variation in clinical outcomes was more correlated with the gut microbiota than with genetic variants among substrains from different suppliers. Conclusion A BALB/c substrain that was resistant to DSS-induced colitis was observed, and the severity of DSS-induced colitis was mainly influenced by the gut microbiota. Targeting butyrate-producing bacteria could have therapeutic potential for ulcerative colitis. Introduction Laboratory mice are important species for preclinical animal experiments in biomedical research and contribute to mechanistic studies and drug development in the context of various human diseases. Differences among inbred mouse strains can account for variation in host immune characteristics as well as behavioural phenotypes.1,2 Numerous substrains have been derived from the original inbred strains.3 Substrains are defined as branches of an inbred strain produced by separate brother-sister mating over at least 20 generations, and are available from multiple vendors.4,15 Inflammatory bowel disease (IBD) is a multifactorial immune-mediated inflammatory disease that includes 2 conditions, Crohn's disease and ulcerative colitis.16,17 Short-chain fatty acids (SCFAs) produced by gut bacteria act as coenzymes in fat and carbohydrate metabolism, thus exhibiting anti-inflammatory effects in IBD.19-21 Among the SCFAs, butyrate serves as a principal energy source for intestinal wound healing and barrier function. The gut microbiota of IBD patients exhibits a selective decrease in the levels of butyrate producers, and the colonocytes of IBD patients are incapable of transferring and utilizing butyrate.22 The dextran sulfate sodium (DSS)-induced colitis model is routinely used as a principal mouse model of ulcerative colitis. DSS administration in drinking water mediates gut epithelial damage, which causes inflammation. Mouse strain and sex differences influence colitis susceptibility.23 For example, similar to the clinical development of ulcerative colitis in humans, male mice are more likely to develop DSS-induced colitis than female mice.24,25 Additionally, BALB/c mice require higher concentrations of DSS to induce colitis than C57BL/6J mice.24
C57BL/6 wild-type mice from the same inbred strain purchased from 2 vendors (substrains), Jackson Laboratory and Taconic Farms, showed different bacterial compositions, and Jackson mice showed significantly fewer species of bacteria.26 Colonization of the gut by a segmented filamentous bacterium was found only in Taconic mice, and the bacterium induced the production of inflammatory Th17 cells in the lamina propria of the small intestine, which resulted in increased resistance to Citrobacter rodentium-induced colitis.26 In addition, the composition of gut microbes that influence susceptibility to several diseases, including abdominal sepsis, varies among substrains.27 Despite these advances in understanding, whether the gut microbiota is a primary factor determining disease susceptibility in DSS-induced colitis remains unknown. We compared mice from the same inbred strain (BALB/c) that were obtained from the 3 main distributors of laboratory animals in Japan and identified considerable variability in the presentation of DSS-induced colitis among mice from different vendors. We, therefore, quantified the influence of the faecal microbiota associated with mice from different vendors. We identified that mice from each vendor harboured a distinct gut microbiota. Thus, using a cohousing approach, we revealed that disease resistance mainly relies on the gut microbiota and not on genetic differences among substrains. All animal experiments were approved by the Institutional Animal Care and Use Committee of Tokyo Medical and Dental University (Protocol number: A2021-088C9) and the Animal Care and Use Committee of Osaka University Graduate School of Dentistry (R05-009-0). All animals that were used in this study were housed in groups of 3-6 mice, were fed a standard pellet diet, and were kept under a 12-hour light/dark cycle. 16S rRNA Sequencing Mouse faeces or colon luminal contents were collected on day 0 and day 8. Collected samples were stored at −80 °C until further use. Bacterial DNA was isolated using a NucleoSpin DNA stool kit (Takara Bio, Shiga, Japan). The V3-V4 regions of the 16S ribosomal RNA (rRNA) gene were amplified in each sample. Sequencing was performed on the Illumina MiSeq platform using a MiSeq Reagent Kit V3 (300 bp × 2) (Eurofins Genomics, Tokyo, Japan and Bioengineering Lab. Co., Ltd., Kanagawa, Japan). Raw sequences were curated using the software package Qiime2. Sequences were assigned to operational taxonomic units using a cut-off of 0.03 and classified using the SILVA platform with a 70% confidence threshold. We used the linear discriminant analysis effect size (LEfSe) method28 (http://huttenhower.sph.harvard.edu/lefse), which is used to perform a combined assessment of statistical significance and biological relevance. Cohousing For the cohousing experiment, four-week-old BALB/c female mice purchased from SLC and Charles River were housed separately for one week in the same room and were fed the same diet before cohousing. Then, the mice were transferred into a new cage, and SLC mice and Charles River mice were cohoused for 4 weeks as described previously.29 SLC mice or Charles River mice that were kept in separate cages were used as controls. Acute colitis was induced with 4% (w/v) DSS for 7 days afterwards.
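As a rough, hedged illustration of the downstream processing that follows the Qiime2/SILVA step described in the 16S rRNA Sequencing section above, the Python snippet below computes per-sample relative abundances and the Shannon diversity index from a generic feature (OTU) count table. The table layout (samples in rows, taxa in columns) and the file name are assumptions made for the example, not part of the published pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical OTU/feature count table: rows = samples, columns = taxa.
# 'otu_table.csv' is a placeholder file name, not from the study.
counts = pd.read_csv("otu_table.csv", index_col=0)

# Convert counts to relative abundances per sample.
rel_abund = counts.div(counts.sum(axis=1), axis=0)

def shannon(p):
    """Shannon diversity index H' = -sum(p_i * ln(p_i)) over non-zero proportions."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Feature-level Shannon diversity for each sample.
alpha_diversity = rel_abund.apply(shannon, axis=1)
print(alpha_diversity)
```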
Statistics and Reproducibility Comparisons of 2 groups were performed using an unpaired t (parametric) test or a Mann-Whitney U (nonparametric) test. Differences among more than 3 groups were evaluated using one-way analysis of variance for parametric analysis or the Kruskal-Wallis test for nonparametric analysis, followed by Bonferroni correction (parametric) or Steel-Dwass correction (nonparametric). The normality of the data was analyzed using the Kolmogorov-Smirnov test. Homogeneity of variance was analyzed using the F test (2 groups) or Bartlett test (more than 3 groups). The sample distribution of the gut microbiota was analyzed by a non-metric multi-dimensional scaling method. Correlation analysis was performed using the Pearson correlation coefficient. Error bars represent the standard deviation of a data set. All statistical analyses were performed with R statistical software. Results The Phenotype of DSS-Induced Colitis Varied Among Mice From Different Vendors We compared female six- to seven-week-old BALB/c wild-type mice obtained from 3 vendors: SLC, CLEA, and Charles River. Acute colitis was induced by the administration of 4% (w/v) DSS in drinking water for 8 days. To investigate whether colitis induction by DSS was comparable among mice from different vendors, weight loss, rectal bleeding, stool consistency, colon shortening, and spleen enlargement were observed as clinical features of disease. Regarding weight loss, the body weights of mice from all vendors slightly increased during the first few days of the experiment. The weight of CLEA mice and Charles River mice gradually began to decrease afterward, and the body weights on day 8 were 14.4 ± 9.7% and 16.8 ± 5.7% below the initial weight, respectively (Figure 1A). SLC mice showed little weight loss, and the body weight on day 8 was only 1.5 ± 4.1% below the initial weight. The level of weight loss in CLEA mice and Charles River mice significantly differed from that in SLC mice on day 7 (P = .04 and .03, respectively) and day 8 (P = 8.0 × 10^-3 and 4.0 × 10^-3, respectively). Similarly, the DAI score, which comprises the degrees of weight loss and intestinal bleeding, was significantly higher (indicating more severe colitis) in CLEA and Charles River mice than in SLC mice on day 7 (P = .04 and .02, respectively) and day 8 (P = .02 and .02, respectively) (Figure 1B). The DAI score ranges from 0 to 12 (total score). Moreover, SLC mice did not exhibit considerable colon shortening (Figure 1C and D) or spleen enlargement (Figure 1E), unlike CLEA or Charles River mice. There was a trend of more colitis symptoms in Charles River mice than in CLEA mice, although significant differences in these symptoms of colitis were not observed.
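The ordination described in the Statistics section above (non-metric multidimensional scaling on Horn-Morisita dissimilarities, as used for Figure 2A) can be sketched as follows. This is a minimal illustration assuming a small placeholder count matrix and one common formulation of the Morisita-Horn index; the authors' actual analysis was performed in R and may differ in detail.

```python
import numpy as np
from sklearn.manifold import MDS

def morisita_horn_dissimilarity(x, y):
    """1 - Morisita-Horn similarity between two count vectors."""
    X, Y = x.sum(), y.sum()
    dx = np.sum(x**2) / X**2
    dy = np.sum(y**2) / Y**2
    similarity = 2 * np.sum(x * y) / ((dx + dy) * X * Y)
    return 1.0 - similarity

# Placeholder count matrix: rows = samples, columns = taxa (not study data).
counts = np.array([
    [120, 30, 5, 0, 45],
    [100, 40, 10, 2, 50],
    [10, 200, 80, 15, 5],
    [12, 190, 70, 20, 8],
], dtype=float)

# Pairwise dissimilarity matrix.
n = counts.shape[0]
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = morisita_horn_dissimilarity(counts[i], counts[j])
        dist[i, j] = dist[j, i] = d

# Non-metric MDS on the precomputed dissimilarity matrix (2 dimensions).
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(dist)
print(coords)
```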
The Gut Microbial Compositions of BALB/c Mice Largely Varied Among Mice From Different Vendors We compared the gut bacterial composition of mice from SLC, CLEA, and Charles River using 16S rRNA sequencing. Nonmetric multidimensional scaling ordination using Horn-Morisita dissimilarities based on community membership indicated a clear separation of the microbiota among mice from the different vendors before DSS treatment (Figure 2A). Consistent with previous studies, a significant decrease in microbial diversity, which is a characteristic of dysbiosis, was observed in CLEA and Charles River mice, whereas SLC mice did not show diversity changes after DSS administration (Figure 2B). Microbial diversity was not significantly different among vendors either before or after DSS administration. Fifteen phyla were identified in total, as shown in Figure 2C. Contrary to our expectations, Charles River mice, but not SLC mice, were distinct from mice from the other vendors before DSS treatment. In other words, SLC mice and CLEA mice were similar despite differences in disease severity. Notably, although Bacteroidetes and Firmicutes are the 2 main phyla of the gut microbiota, the phylum Firmicutes was the most dominant in Charles River mice before DSS treatment, comprising up to 95%; hence, a low abundance of Bacteroidetes (2.2%) was observed. The phylum Firmicutes consists of the class Bacilli and the class Clostridia (Figure A1), and the levels of Clostridia in mice before DSS treatment were significantly lower in mice from SLC compared to CLEA and Charles River (Figure 2D), suggesting that the high Firmicutes abundance in Charles River mice was mainly due to the class Clostridia. The levels of Clostridia in mice after DSS treatment were not significantly different among vendors. Further investigation of the taxa comprising the class Clostridia was performed, and the levels of abundant families that were ≥1% abundant in at least one group are shown in Figure 2E. A decrease in the levels of the SCFA-producing Clostridium clusters IV (Ruminococcaceae and Clostridia UCG-014) and XIVa (Lachnospiraceae) is often associated with gut dysbiosis.30
The levels of Lachnospiraceae and Ruminococcaceae in mice before DSS treatment were significantly lower in mice from SLC compared to CLEA and Charles River. The level of Oscillospiraceae in mice before DSS treatment was significantly lower in mice from SLC compared to Charles River. In addition to the presence of the 2 main phyla, Firmicutes and Bacteroidetes, the analysis of the microbiota before DSS treatment showed a very strong association between the DAI inflammation score and the Actinobacteriota proportion (R² = 0.80), although the abundance of this phylum was quite low (Figure 2F and Figure A2). To distinguish the bacterial taxa that are commonly found in mice from each vendor, differences in microbial taxa at the family level among vendors were calculated by LEfSe and displayed as a heatmap (Figure 3 and Figure A3). Eight of the 13 taxa that were significantly abundant in Charles River mice before DSS treatment were from the class Clostridia; the exceptions were Lactobacillaceae, RF39, Deferribacteraceae, Erysipelatoclostridiaceae, Acholeplasmataceae, and Eggerthellaceae. The family Eggerthellaceae is a member of the phylum Actinobacteriota, and all Actinobacteriota bacteria found in this study belonged to the Eggerthellaceae. In contrast to the mice before DSS treatment, taxa comprising the class Clostridia were present at significantly lower levels in Charles River mice after DSS treatment. Members of the families Desulfovibrionaceae and Rikenellaceae were significantly abundant in SLC mice both before and after DSS treatment. A more detailed analysis of composition at the genus level by LEfSe indicated a similar tendency (Figure 4A and B). Members of the genera Desulfovibrio and Bilophila, both comprising the family Desulfovibrionaceae, were significantly abundant in SLC mice both before and after DSS treatment. Twenty of the 28 taxa that were significantly abundant in Charles River mice before DSS treatment were from the class Clostridia, and there were no abundant taxa in these mice after DSS treatment. Furthermore, the genus Roseburia was the only taxon comprising the class Clostridia in SLC mice after DSS treatment (Figure 4B). Among the SCFA-producing bacteria, Roseburia intestinalis (a member of the family Lachnospiraceae) and Faecalibacterium prausnitzii (a member of the family Oscillospiraceae) are the primary butyrate producers in the human gut.31 The mean abundance of the genus Roseburia was higher in Charles River and CLEA mice than in SLC mice before DSS treatment (3.1%, 0.81%, and 0.05%, respectively), and the abundance decreased in Charles River and CLEA mice after DSS treatment, whereas that in SLC mice increased after DSS treatment (0.17%, 6.4 × 10^-3%, and 0.68%, respectively). The genus Faecalibacterium was not identified in any of the samples.
Commensal Intestinal Bacteria From Mice With Severe Colitis Influence the Severity of Disease in Mice With Milder Colitis Our gut microbiota analysis suggested that specific colonic microbes may induce colitis susceptibility. We next aimed to address the possibility that disease resistance to colitis in SLC mice might have been due to genetic variants among BALB/c mouse substrains from the different vendors. Therefore, we cohoused SLC mice (mildest colitis symptoms) with Charles River mice (severest colitis symptoms) to allow horizontal bacterial transmission. SLC mice that were cohoused with Charles River mice for 4 weeks showed statistically higher DAI scores than SLC mice that were kept in separate cages (day 7, P = .03), suggesting that the gut microbiota, rather than the presence of genetic variants, is a dominant factor in the variable response to DSS between these 2 mouse substrains (Figure 5A). Regarding Charles River mice, although both Charles River mice housed separately and cohoused mice showed significantly higher DAI values than SLC mice (day 7, P = .03 and .01, respectively), no change was observed due to cohousing with SLC mice. Similarly, more colon shortening was observed in cohoused SLC mice, separately housed Charles River mice, and cohoused Charles River mice than in separately housed SLC mice (P = 1.6 × 10^-4, 2.0 × 10^-3, and 1.7 × 10^-3, respectively) (Figure 5B and C). We further analyzed the gut microbiota of cohoused SLC mice using 16S rRNA sequencing and compared the bacterial compositions to those from solo-housed SLC, CLEA, and Charles River mice. After 4 weeks of cohousing, SLC mice cohoused with Charles River mice maintained the Firmicutes/Bacteroidota balance, and the bacterial compositions were generally similar to those of solo-housed SLC mice (Figure 6A and B). Regarding the butyrate-producing genus Roseburia, the abundance of Roseburia in cohoused SLC mice was 0.38% after cohousing and decreased to 0.19% with DSS challenge. The decrease in Roseburia in cohoused SLC mice indicates a loss of persistence of Roseburia, unlike in disease-resistant solo-housed SLC mice (Figures 4C and 6C). Furthermore, we performed LEfSe analysis to compare disease-resistant SLC mice with disease-susceptible Charles River, CLEA, and cohoused SLC mice following DSS treatment. Six genera were abundant in disease-resistant SLC mice: Turicibacter, Roseburia, Alistipes, Rikenellaceae RC9 gut group, Ruminococcaceae, and Parasutterella (Figure 6D). Discussion The DSS-induced colitis model is widely used because it can be established quickly and is simple.24 Here, we found clear differences in susceptibility to DSS-induced colitis among Japanese laboratory mice from 3 vendors. BALB/c mice purchased from SLC showed lower disease symptoms than mice from the other 2 vendors. The gut microbiota is known as an indispensable factor in gut inflammation.32 To quantify the connection between disease severity and the gut microbiota, the faeces of mice from the 3 vendors collected before and after DSS treatment were compared using 16S rRNA sequencing, revealing that each group of mice from the different vendors harboured a distinct gut microbiota. Moreover, the cohousing data from the present study further show that the severity of DSS-induced colitis was mainly influenced by the gut microbiota. The variability in DSS-induced colitis among individual mice from the same inbred strain has been documented in a large-scale animal experiment using genetically identical laboratory mice from a single animal facility.33
The researchers reported that the presence of specific gut bacteria was mainly responsible for the variable experimental outcomes in the DSS model. In humans, a large clinical cohort study examined genetic-microbial associations in healthy people who had different ancestral backgrounds but shared a relatively similar environment. The study showed that host genetics or ancestral backgrounds have a minor role in determining the gut microbiome; rather, the microbiota is shaped predominantly by lifestyle and is similar among individuals who share a relatively homogenous environment.34 IBD patients show a distinct distribution of certain bacterial taxa; IBD is accompanied by a decreased abundance of Bacteroidetes, Firmicutes, Clostridia, Lactobacillus, and Ruminococcaceae and an increased abundance of Gammaproteobacteria and Enterobacteriaceae.35,36 In particular, the Firmicutes/Bacteroidetes ratio is widely accepted to have vast influences on the maintenance of gut homeostasis, and an imbalance in these taxa can lead to various pathologies.32 Related to these epidemiological studies, our study revealed a higher percentage of Firmicutes in Charles River mice, and these bacteria were classified mainly into the class Clostridia.38,39 In particular, Roseburia intestinalis is one of the primary butyrate producers in the human gut.31 The genus Roseburia consists of obligate gram-positive anaerobic bacteria, all of which are known to be SCFA producers.40 We reported that the abundance of the genus Roseburia increased precipitously in SLC mice after DSS treatment, whereas cohoused SLC mice as well as mice from the other 2 vendors showed a decreased abundance of Roseburia with DSS treatment. This implies that the persistence of Roseburia correlates with decreased susceptibility to disease in this setting. In addition to Roseburia, 5 genera were more abundant in disease-resistant SLC mice than in cohoused SLC mice and mice from the other 2 vendors, as shown in Figure 6D. Some members of the genera Turicibacter, Alistipes, and Ruminococcaceae are also known as SCFA producers.41 Similarly, in a previous study that compared the gut microbiota of mice from 2 vendors (substrains) in the United States, colonization of the gut by colitis-resistant Candidatus arthromitus (a segmented filamentous bacterium and member of the family Clostridiaceae) was found in Taconic mice.26 The distribution of disease-resistant bacteria may not be ubiquitous but may have similar characteristics among different mice.
In summary, a mouse substrain that was resistant to DSSinduced colitis was observed, and the severity of DSS-induced colitis was mainly influenced by the gut microbiota.DSSinduced colitis is one of the central preclinical models used in the gastrointestinal field.When studying disease susceptibility in laboratory mice, the mouse vendor and/or bleeding conditions that influence the gut commensal microbiota may be the reason for variable outcomes. Figure 1 . Figure 1.Clinical symptoms of colitis are highly variable among mice from different vendors.Mice from 3 commercial vendors, SLC, CLEA, and Charles River (CHA), were treated with 4% DSS for 8 days.Mice were evaluated daily, and weight loss and disease activity index scores were recorded.(A) Body weight changes, (n ¼ 5).(B) DAI score, a score from 0 to 12.A higher number indicates more severe colitis (n ¼ 5).(C) Gross images of the colon on days 0 (pre-DSS) and 8 (post-DSS).(D) Colon length was measured on day 8. (E) Gross images of the spleen on days 0 and 8. *P < .05,**P < .01. Figure 2 . Figure 2. The gut microbiota in mice from 3 vendors assessed using 16S rRNA sequencing.The colorectal microbial composition in SLC, CLEA, and Charles River (CHA) mice treated with 4% DSS for 8 days was assessed using 16S rRNA amplicon sequencing.n ¼ 4-5.(A) A nonmetric multidimensional scaling analysis identified a clear difference among mice vendor SLC (pink), CLEA (grey), and CHA (blue) before DSS treatment.(B) Dot plots show species-level microbial diversity measured by the Shannon diversity index.(C) Relative abundance of bacterial phyla presents in faeces on days 0 (pre-DSS) and 8 (post-DSS).(D) Relative abundance of class Clostridia comprising the phylum Firmicutes in faeces on days 0 (pre-DSS) and 8 (post-DSS).(E) Relative abundance of the bacterial family comprising the class Clostridia that was !1% abundant in at least one group of mice before DSS treatment.Significantly different among vendors indicates.(F) Relationship between DAI score and relative abundance of phyla Actinobacteriota in mice from vendor SLC (pink) CLEA (grey), and CHA (blue) before DSS treatment (R 2 ¼ 0.83).*P < .05,**P < .01. Figure 3 . Figure 3. Differences in the microbiota of mice from the 3 vendors at the family level.Differences in microbiota taxa at the family level among mice from the 3 vendors were calculated by LDA effect size (LEfSe) on day 0 (A) and day 8 (B) (n ¼ 4-5).Bacterial taxa comprising the class Clostridia.(C) Heatmap showing bacterial family frequency distribution across mice from the 3 vendors before DSS treatment (n ¼ 3). Figure 4 . Figure 4. Differences in the microbiota of mice from the 3 vendors at the genus level.Differences in microbiota taxa at the genus level among mice from the 3 vendors were calculated by LDA effect size (LEfSe) on day 0 (A) and day 8 (B).Bacterial taxa comprising the class Clostridia.(C) Relative abundance of the butyrate-producing genus Roseburia of mice before and after DSS treatment. Figure 5 . Figure 5. SLC mice cohoused with Charles River mice developed DSS-induced colitis.SLC mice (CO-SLC) and Charles River mice (CO-CHA) were cohoused for 4 weeks followed by 7 days of DSS administration (n ¼ 8).SLC mice (SLC) or Charles River mice (CHA) that were kept in separate cages were used as controls (n ¼ 6).Two independent experiments with identical results were combined.(A) DAI score.(B and C) Colon length was measured on day 7. *P < .05,**P < .01. Figure 6 . 
Figure 6. Comparison of the gut microbiota of SLC mice cohoused with Charles River mice to that of solo-housed mice from the 3 vendors. The gut microbial composition in cohoused SLC mice (CO-SLC) was assessed using 16S rRNA amplicon sequencing and compared with that of solo-housed mice from the 3 vendors, SLC, CLEA, and Charles River (CHA). n = 4-5. (A) Relative abundance of bacterial phyla present in faeces before and after DSS treatment. (B) A nonmetric multidimensional scaling analysis of gut microbial compositions among CO-SLC (green), SLC (pink), and CHA (blue) mice before (circle) and after (square) DSS treatment. (C) Relative abundance of the butyrate-producing genus Roseburia in CO-SLC mice before and after DSS treatment. (D) Differences in microbiota taxa at the genus level among mice from the 3 vendors as well as CO-SLC mice, calculated by LDA effect size (LEfSe) after DSS treatment.
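Figure 2B summarizes within-sample diversity with the Shannon index. Purely as an illustration of how such a value is obtained from taxon counts, a minimal sketch is given below; the function name and the example counts are hypothetical and are not part of the authors' 16S pipeline.

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity index H = -sum(p_i * ln(p_i)) over observed taxa."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]      # ignore taxa absent from the sample
    p = counts / counts.sum()        # relative abundances
    return float(-(p * np.log(p)).sum())

# Hypothetical per-sample taxon read counts (e.g., species-level)
sample = [120, 45, 30, 5, 1]
print(f"Shannon H = {shannon_index(sample):.3f}")
```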
v3-fos-license
2018-12-12T19:54:02.368Z
2018-12-04T00:00:00.000
54500820
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0207926&type=printable", "pdf_hash": "885ab5a7a9d979a30319a9ab4ea96b67d041679d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44914", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "sha1": "885ab5a7a9d979a30319a9ab4ea96b67d041679d", "year": 2018 }
pes2o/s2orc
Biomarkers of erosive arthritis in systemic lupus erythematosus: Application of machine learning models Objective Limited evidences are available on biomarkers to recognize Systemic Lupus erythematosus (SLE) patients at risk to develop erosive arthritis. Anti-citrullinated peptide antibodies (ACPA) have been widely investigated and identified in up to 50% of X-ray detected erosive arthritis; conversely, few studies evaluated anti-carbamylated proteins antibodies (anti-CarP). Here, we considered the application of machine learning models to identify relevant factors in the development of ultrasonography (US)-detected erosive damage in a large cohort of SLE patients with joint involvement. Methods We enrolled consecutive SLE patients with arthritis/arthralgia. All patients underwent joint (DAS28, STR) and laboratory assessment (detection of ACPA, anti-CarP, Rheumatoid Factor, SLE-related antibodies). The bone surfaces of metacarpophalangeal and proximal interphalangeal joints were assessed by US: the presence of erosions was registered with a dichotomous value (0/1), obtaining a total score (0–20). Concerning machine learning techniques, we applied and compared Logistic Regression and Decision Trees in conjunction with the feature selection Forward Wrapper method. Results We enrolled 120 SLE patients [M/F 8/112, median age 47.0 years (IQR 15.0); median disease duration 120.0 months (IQR 156.0)], 73.3% of them referring at least one episode of arthritis. Erosive damage was identified in 25.8% of patients (mean±SD 0.7±1.6), all of them with clinically evident arthritis. We applied Logistic Regression in conjunction with the Forward Wrapper method, obtaining an AUC value of 0.806±0.02. As a result of the learning procedure, we evaluated the relevance of the different factors: this value was higher than 35% for ACPA and anti-CarP. Conclusion The application of Machine Learning Models allowed to identify factors associated with US-detected erosive bone damage in a large SLE cohort and their relevance in determining this phenotype. Although the scope of this study is limited by the small sample size and its cross-sectional nature, the results suggest the relevance of ACPA and anti-CarP antibodies in the development of erosive damage as also pointed out in other studies. Introduction Joint involvement is one of the most common features in patients affected by Systemic Lupus Erythematosus (SLE): a high proportion of patients (69-95%) could experience this manifestation during disease course. A great heterogeneity characterizes this manifestation, moving from arthralgia to more severe arthropathy, with possible development of erosive damage [1]. For a long time, the presence of an erosive arthritis in SLE patients has been considered a rare condition and generally identified in subjects overlapping with Rheumatoid Arthritis (RA). The introduction of more sensitive imaging techniques in the assessment of inflammatory arthritis, such as ultrasonography (US), allowed the identification of erosive damage in up to 40% of patients with SLE-related arthritis [2]. Nevertheless, few data are available concerning specific biomarkers able to recognize patients at risk to develop erosive damage. Several studies investigated the role of RA specific autoantibodies, moving from their relevance in the identification of individuals at risk to develop RA and in determining erosive arthritis [3]. 
The presence of anti-citrullinated peptide antibodies (ACPA) has been analyzed in SLE patients, identifying this biomarker in up to 50% of SLE patients with X-ray detected erosive arthritis [1,4,5]. Conversely, few data are available concerning the association between anticarbamylated proteins antibodies (anti-CarP) and bone erosions: Ziegelasch and colleagues have recently identified a significant association between X-ray detected erosive damage and anti-CarP in a small SLE cohort [6]. More recently, we confirmed this association in a large SLE population with joint involvement in which the damage was assessed by US [7]. Machine learning methodologies have already been applied in the medical setting. Artificial Neural Networks (ANNs) have been used in SLE cohorts to predict specific outcomes, such as chronic damage development or 3 years kidney graft survival in recipients affected by SLE [8,9]. Moreover, these mathematical models can be used to select the factors able to identify the presence of a specific outcome and to rate the relevance or ranking of different factors in determining it. Similar approaches have also been exploited in specific medical conditions, such as gene selection task in DNA microarray datasets, selection of genes associated with diffuse large B-cell lymphoma, and, finally, in the analysis of Alzheimer's disease progression [10][11][12]. Moving from these premises, we considered the application of machine learning models to identify relevant factors in the development of US-detected erosive damage in a large single center cohort of 120 SLE patients with joint involvement. In this study, we employed Logistic Regression and Decision Trees, both machine learning models for classification which are easily interpretable, in conjunction with an iterative feature selection technique, in order to recognize factors associated with erosive bone damage. Materials and methods Consecutive SLE patients with a clinical history of joint involvement, attending at the Lupus Clinic of the Rheumatology Unit, Sapienza University of Rome (Sapienza Lupus Cohort) were enrolled in the present study. SLE diagnosis was performed according to the revised 1997 American College of Rheumatology (ACR) criteria [13]. The study was performed according to the protocol and good clinical practice principles and Declaration of Helsinki statements and was approved by the Ethic committee of the Sapienza University of Rome, Policlinico Umberto I, Rome, Italy. All the patients signed an informed consent. The clinical and laboratory data of enrolled patients were collected in a standardized computerized electronically filled form, including demographics, past medical history with the date of diagnosis, co-morbidities, previous and concomitant treatments, serological status [C3/C4 levels (radial immunodiffusion), ANA (IIF on HEp-2), anti-dsDNA (IIF on Crithidia Luciliae), anti-Ro/SSA, anti-La/SSB, anti-Sm, and anti-RNP, anti-Cardiolipin (anti-CL) and anti-β2 Glycoprotein-I (anti-β2GPI) (ELISA assay), lupus anticoagulant (LA) according to the guidelines of the International Society on Thrombosis and Hemostasis]. Patients were divided according to the presence of arthralgia and arthritis. 
Arthralgia was defined as the presence of recurrent (minimum three episodes) or persistent (minimum 6 weeks) pain or stiffness (lasting at least 30 minutes) of at least one joint during patient's clinical history; arthritis as the occurrence of at least 1 episode of clinical synovitis (swelling, effusion or tenderness) and at least 30 minutes of morning stiffness of at least 1 joint. The activity of joint involvement was assessed by using the disease activity score on 28 joints (DAS28) and the swollen to tender ratio (STR), both previously applied in SLE cohorts with joint involvement [14,15]. SLE Disease Activity Index 2000 (SLEDAI-2k) was used to assess disease activity, while chronic damage was evaluated by SLICC Damage Index (SDI) [16,17]. Each subject underwent peripheral blood sample collection. Rheumatoid Factor (RF) and ACPA were detected by using commercial ELISA kits (Diamedix, Miami, USA; DELTA BIO-LOGICALS, Rome, Italy, respectively): the results were evaluated according to the manufacturers' instructions. For ACPA, values above 25 U/mL were considered positive, while for RF, values above 10 U/mL. Anti-CarP antibodies were detected by a home-made ELISA using carbamylated foetal calf serum (Ca-FCS) and non-modified FCS as antigens. Ca-FCS was obtained using the method described by Shi et al [18]. A titration curve of two positive reference sera with medium-high ELISA immunoreactivity for Ca-FCS was performed to show the performance of the tests and to transform the absorbance of Ca-FCS to arbitrary units per milliliter (aU/mL). The cut-off was established as the mean OD + 3 standard deviations (SD) of fifty-six age-and sex-matched healthy subjects (blood donors) and then the obtained value was converted into aU/mL (corresponding to 340 aU/mL). US imaging was performed in all SLE patients by using a MyLab70 XVG machine (Esaote S.p.A., Florence, Italy) equipped with a 6-18 MHz multifrequency linear array transducer. By using a fixed 18-MHz frequency, bone surfaces of metacarpophalangeal (MCP) and proximal interphalangeal (PIP) were studied on multiplanar scans, according with the EULAR US guidelines [19]. Each joint was scanned in both the longitudinal and transverse planes from the medial to lateral sides on both volar and dorsal aspects to enable maximum coverage of the joint surface area. At each joint, according with OMERACT definition, the presence of erosions was registered with a dichotomous value (0/1), allowing the possibility to obtain a total score, ranging from 0 to 20 [20]. Statistical analysis The statistical analyses were performed using the version 5.0 of the GraphPad statistical package (La Jolla, California). Normally distributed variables were summarized using the mean ± SD, and non-normally distributed variables by the median and interquartile range (IQR). Frequencies were expressed by percentage. Machine learning In order to understand factors leading to the development of erosive damage in SLE patients, we employed machine learning techniques. In particular we used binary classification models, which can be applied to learn a function that partitions of the data in two groups: in our case the two groups were SLE patients with and without bone erosions. The separating function depends on the features describing the data. Once a function has been identified, we can deepen our understanding on the factors more influencing the function behavior and on the modality by which the function relates to the different features. 
One question naturally arises: what degree of trust can we place in the factors identified in this way? Of course, the relevance of the factors extracted by looking at the separating function is only as good as the function itself. For this reason, a model has to be evaluated on a test set, as is usually done in machine learning. Better models will lead to more reliably identified factors, and the test accuracy can be used to measure the goodness of the obtained ranking. Because the presence of irrelevant features in the data can degrade the performance of machine learning models, especially when few cases (patients) are available, a feature selection technique was employed in order to select, among all the available features, a smaller subset of meaningful ones. After the application of feature selection, the model tailored on the selected subset of features can be applied to gain insights into the relative importance of each selected feature. In the following, we describe the Logistic Regression and Decision Tree models for classification, and the Forward Wrapper feature selection technique. We chose these two models for two reasons: 1) their natural interpretability, a desirable characteristic when the goal is to assess the importance of each feature in the outcome produced by the model, and 2) more complex models, such as neural networks, usually perform poorly when the sample size is small.

The data consist of a training set of pairs (x_i, y_i), i = 1, ..., m, where x_i contains the variables (called features) of patient i and y_i is a binary variable which indicates whether such patient has developed erosion or not. Once a model is trained, we can evaluate its generalization capabilities by testing it on unseen data.

Logistic Regression. Logistic Regression is a linear classifier which aims to find a function h_{w,b} such that h_{w,b}(x_i) ≈ y_i for all i = 1, ..., m. In particular, the function h_{w,b}(x) takes the logistic (sigmoid) form h_{w,b}(x) = 1 / (1 + exp(−(w·x + b))), where the weight vector w and bias b are learned parameters, tuned with an iterative procedure. To avoid over-fitting the training data, the model also employs a penalty term l(w) to control the complexity of the model. Here we use l(w) = λ‖w‖², where λ > 0 is a hyper-parameter which has to be tuned during model selection. Such a procedure is performed by evaluating different configurations of hyper-parameters on a held-out portion of the train set (or several different portions, as in "leave-one-out" or k-fold cross-validation). Here we select the hyper-parameter configurations from a grid of candidates (grid search). After the training phase, the value h_{w,b}(x) can be seen as a probability estimate for the class of the input vector x: if h_{w,b}(x) ≥ 0.5, we classify x as positive; otherwise, we classify it as negative. The weight vector w of the fitted Logistic Regression model can be used to measure the "importance" of each feature. Several approaches to determine the relative importance of the explanatory variables from the coefficients of a logistic regression model exist in the literature. Here we follow the method proposed in [21].

Decision Trees. A Decision Tree is a non-parametric machine learning model. It is structured as a tree where each internal node, including the root, is a decision test over one of the features. In particular, for continuous features, the test is of the form x_i ≤ α for some threshold α. For categorical features, instead, the test is of the form x_i = c, where c is one of the possible categories.
The training phase is responsible for choosing the right decision tests, i.e., the feature that defines a node and its associated "threshold". After the tree is constructed, starting from the root, an example x is recursively assigned to a subtree, according to the different decision tests that are applied at each visited node. In the end, x will be assigned to one of the terminal nodes of the tree, also called "leaves". Each leaf has an associated response y, chosen by a majority vote over the values y_i corresponding to the training examples x_i that end up in the leaf in question. A number of different regularization techniques can be used. For example, we can limit the maximum depth of the tree and the minimum number of training examples that define a leaf. These techniques help in reducing over-fitting, which is of real concern given the fact that decision trees are non-parametric. The maximum depth and the minimum number of examples in a leaf are hyper-parameters that, as in the logistic regression case, must be tuned during model selection. Decision Trees are also naturally interpretable. The features appear in a hierarchical fashion in the tree, where the features closer to the root can be seen as more important than the ones appearing in the lower part of the tree. Moreover, a tree implicitly performs feature selection: some features may not be present in any of the nodes of the tree.

Feature selection. Starting from the entire set of features, feature selection is performed with the aim of selecting a subset of relevant features. The main reasons are the simplification of the resulting model, the identification of the most important features, and the achievement of a higher generalization capability of the machine learning models. Since an exhaustive search among all 2^n possible subsets of features, where n is the total number of features, is generally impracticable, several methods have been proposed in the machine learning literature to overcome this limitation. These methods can be divided into three main classes [22]:
• Filter Methods: the selection of the features does not rely on the use of a model; generally, a score is assigned to each feature in order to obtain a ranking.
• Wrapper Methods: several subsets of features are evaluated and compared by assigning a score based on the accuracy achieved with a predictive model; the subsets can be explored with systematic, stochastic, or heuristic search procedures. An example is the forward (or backward) method, where features are iteratively added (or removed) based on the obtained model accuracy.
• Embedded Methods: this class of methods tries to combine the advantages of both previous classes, with feature selection embedded in the learning process.
The Forward Wrapper belongs to the class of wrapper methods. It starts with an empty set of features. Then, at each iteration, all the features not included in the current feature set are added independently, and the one which leads to the best performance of the employed predictive model is inserted. The process is not stopped in case the insertion of a new feature leads to a worse performance score. Thus, the process requires an overall number of steps equal to the total number of features. At the end, the overall best score achieved across the iterations is considered, and the corresponding subset of features is chosen as the result of the feature selection. A minimal sketch of this procedure is given below.
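For illustration only (this is not the authors' exact implementation), the Forward Wrapper coupled with a classifier and an MCC score could be sketched as follows; the helper name, fold scheme, and random seed are assumptions.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import StratifiedKFold, cross_val_predict

def forward_wrapper(X, y, model, n_splits=5, random_state=0):
    """Greedy forward feature selection scored by cross-validated MCC.

    Features are added one at a time without early stopping; the subset with
    the best score over the whole path is returned, as described in the text.
    """
    remaining = list(range(X.shape[1]))
    selected, history = [], []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=random_state)
    while remaining:
        scores = []
        for f in remaining:
            cols = selected + [f]
            pred = cross_val_predict(clone(model), X[:, cols], y, cv=cv)
            scores.append(matthews_corrcoef(y, pred))
        best = int(np.argmax(scores))
        selected.append(remaining.pop(best))
        history.append((list(selected), scores[best]))
    return max(history, key=lambda t: t[1])   # (best subset, best MCC)

# Hypothetical usage with a feature matrix X and binary outcome y (erosion yes/no):
# best_subset, best_mcc = forward_wrapper(X, y, LogisticRegression(max_iter=1000))
```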
Results We enrolled 120 SLE patients with joint involvement [M/F 8/112, median age 47.0 years (IQR 15.0); median disease duration 120.0 months (IQR 177.0)]. Eighty-eight patients (73.3%) referred at least one episode of arthritis during their disease history. The main clinical, laboratory, and therapeutic features of the whole cohort are described in Table 1. At the time of study enrollment, a median SLEDAI-2k of 2.0 (IQR 4.0) was registered. Concerning joint involvement activity, the whole population showed a median DAS28 of 3.42 (IQR 2.2) and a median STR of 0.08 (IQR 0.68). By US assessment, erosive damage was identified in 31 SLE patients (25.8%), with a mean±SD score of 0.7±1.6 (range 1-9). All these patients referred at least one episode of clinically evident arthritis.

Moving to the application of machine learning for the characterization of erosive damage in SLE patients, we evaluated the generalization capabilities of Logistic Regression and Decision Trees using 100 Monte Carlo repeated trials. Namely, we partition the data into 80% train and 20% test, preserving the positive-negative ratio, train the model on the training set, and evaluate it on the test set. We used two different metrics to evaluate the models, the area under the ROC curve (AUC) and the Matthews Correlation Coefficient (MCC), defined as MCC = (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)). To choose the hyper-parameters of the model (at each trial) we employed, instead, a "leave-one-out" procedure, since the number of examples available for training after the initial split is limited. Such a procedure is, in fact, often employed when the amount of available data is limited, making it impracticable to set aside both a sufficiently large train set to build robust models and a validation set for evaluating their performance. The "leave-one-out" procedure overcomes these problems by excluding a single example from the train set, training the model on the slightly reduced dataset, and computing the prediction of the model on the excluded example (a sketch of this repeated train/test evaluation is given below). In the first experiment, we employed the implementations of Logistic Regression and Decision Tree available in version 0.18 of the scikit-learn library, using all the available features. The results are reported in Table 2. Next, in order to improve the generalization capability, we used the Forward Wrapper method with both Logistic Regression and Decision Trees. We used MCC as the metric to evaluate each subset of features at each step of the algorithm. Table 3 reports the numerical results for the subset of features selected by the Forward Wrapper. We note that the feature selection approach resulted in an improvement of the generalization capability in terms of both AUC and MCC for both models, although Logistic Regression performs better w.r.t. both metrics. The mean MCC score for each step of the feature selection process, for Logistic Regression, is reported in the Supplementary material (S1 Fig). The subset of features selected by the algorithm includes anti-CarP, ACPA, arthralgia, Jaccoud's arthropathy, anti-Sm, and neurological manifestations. Since the obtained results in terms of generalization capability are satisfactory, the Logistic Regression model combined with feature selection characterizes well the relation between erosive damage and the selected features. Thus, this model can be confidently used to evaluate the relevance of each feature in the development of erosive damage in SLE patients, which is the primary aim of the analysis.
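A minimal sketch of the repeated 80/20 evaluation with leave-one-out hyper-parameter tuning, written against the scikit-learn API, is given below; the grid values, trial count, and solver choice are assumptions rather than the authors' exact settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef, roc_auc_score
from sklearn.model_selection import GridSearchCV, LeaveOneOut, train_test_split

def repeated_holdout(X, y, n_trials=100, seed=0):
    """Stratified 80/20 Monte Carlo trials; returns mean/SD of AUC and MCC."""
    rng = np.random.RandomState(seed)
    aucs, mccs = [], []
    for _ in range(n_trials):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=rng.randint(10**6))
        # Leave-one-out grid search on the training split only;
        # in scikit-learn, C plays the role of 1/lambda for the L2 penalty.
        search = GridSearchCV(LogisticRegression(penalty="l2", solver="liblinear"),
                              {"C": np.logspace(-3, 3, 7)}, cv=LeaveOneOut())
        search.fit(X_tr, y_tr)
        model = search.best_estimator_
        aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
        mccs.append(matthews_corrcoef(y_te, model.predict(X_te)))
    return (np.mean(aucs), np.std(aucs)), (np.mean(mccs), np.std(mccs))

# Hypothetical usage: (auc_mean, auc_sd), (mcc_mean, mcc_sd) = repeated_holdout(X, y)
```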
To evaluate the relevance of each feature, we considered the features selected by the Forward Wrapper with Logistic Regression and trained a new model on all the available data, thereby obtaining a weight vector. The hyper-parameters for this final model were chosen with a leave-one-out scheme performed on all the available examples. The relative importance of each selected feature is computed by analyzing this weight vector. Fig 1 reports the relative importance (%) of the selected features. Note that the coefficients of the model can be further used to determine whether a feature is positively (blue in Fig 1) or negatively (orange) associated with the development of bone erosion. As reported in Fig 1, anti-CarP, ACPA, and arthralgia were the most relevant features for our model. We note that while a patient with high levels of anti-CarP and ACPA will likely be classified as positive by the model, arthralgia, instead, has an inverse effect, confirming the observation that only SLE patients with at least one arthritis episode develop erosive damage. The combination of the 6 features reported in Fig 1 (anti-CarP, ACPA, Jaccoud's arthropathy, anti-Sm, arthralgia, neurological manifestations) yielded an AUC value of 0.806±0.02 and an MCC value of 0.481±0.03. To compare our results with those obtained by Verheul and colleagues, we also performed a test with the three antibodies anti-CarP, ACPA, and RF [3]. This test showed a decrease in both the AUC (0.676±0.02) and the MCC value (0.22±0.05). To summarize, the overall process applied in the present study consisted of: 1. selecting the best subset of features with the Forward Wrapper coupled with Logistic Regression and Decision Tree; 2. fitting the best model on all the available data; 3. using this model to assess the relative importance of the selected features. The complete process is sketched in Fig 2.

Discussion To the best of our knowledge, this is the first study aimed at applying machine learning models to identify factors associated with US-detected erosive bone damage in a large SLE cohort and their relevance in determining this specific phenotype. In particular, a decision tree model was compared with logistic regression, in conjunction with forward wrapper feature selection. Thanks to this approach, we confirm the relevance of ACPA and anti-CarP in determining SLE-related erosive damage, suggesting their pathogenic role in the development of this phenotype. In particular, these autoantibodies showed a positive association and a similar relative importance, which was higher than 40% for both autoantibodies. These results reinforce the role of these autoantibodies as biomarkers of bone damage, suggesting a pathological link between their presence and the development of bone erosions. Despite the high frequency of SLE-related joint involvement, data concerning pathogenic mechanisms and specific biomarkers are lacking. In recent years, building on evidence derived from RA, a role for post-translational modifications has been suggested: in particular, citrullination and carbamylation may induce the generation of neo-antigens and the production of autoantibodies in genetically susceptible individuals [23]. Indeed, ACPA have been frequently observed in SLE patients with X-ray detected erosive arthritis [1,2,4,5]. Nonetheless, a substantial percentage of SLE patients with erosive arthritis is ACPA negative, suggesting a different pathogenic scenario. To fill this gap, anti-CarP antibodies have also been evaluated in SLE cohorts.
In a previous study, we found the presence of anti-CarP in 46.1% of SLE patients with joint involvement, a prevalence similar to that identified in RA patients and significantly higher than in healthy controls [24]. Moreover, Ziegelasch and colleagues identified an association between anti-CarP and radiographically detected erosions in a small cohort of SLE patients [6]. By using machine learning techniques, in particular the Forward Wrapper method, we confirmed this association and, interestingly, we estimated the weight of ACPA and anti-CarP in determining a more aggressive phenotype in SLE-related joint involvement. These results reinforce the need to better understand the pathogenic mechanisms that could explain this association. We could hypothesize that ACPA and anti-CarP exert an action on osteoclasts, leading to the development of erosive damage. Furthermore, a mild relevance was identified for the presence of Jaccoud's arthropathy in the development of erosive damage. This result is in agreement with our previous study specifically evaluating SLE patients with this arthropathy. We found US-detected erosive damage in almost 60% of the patients evaluated, observed prevalently at the level of the first and second MCP joints; moreover, the erosive damage was significantly associated with ACPA [25]. The low relative importance obtained for this feature could be related to its low frequency, ranging from 2 to 35% in described SLE cohorts [25]. On the other hand, if a patient did not present arthritis but only arthralgia during the clinical history, it is unlikely that bone erosions will develop. This is in agreement with previous studies performed on cohorts of patients with inflammatory arthritis and other diseases such as inflammatory bowel disease, in which erosions could not be detected by MRI [26]. Of note, in the present study we evaluated the erosive damage by US assessment: this imaging technique demonstrated higher sensitivity compared with radiographic assessment in the evaluation of bone erosions at the level of the MCP and PIP joints in RA patients, especially during the early disease phase [27]. A limitation of this study is the cross-sectional design. A longitudinal analysis is certainly needed in order to further understand the importance of the different prognostic factors of SLE-related erosive damage. Moreover, the relatively small sample size discourages the use of more complex machine learning models that, when trained with large datasets, could arguably perform better than the simpler methods we consider here. Finally, a larger sample size would surely lead to more robust results. In conclusion, despite the small sample size and the cross-sectional design of this study, the application of machine learning models provides a new point of view in the search for biomarkers of SLE-related erosive arthritis, confirming the possible role of ACPA and anti-CarP in this specific phenotype.
v3-fos-license
2014-10-01T00:00:00.000Z
2012-01-04T00:00:00.000
2736943
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0026924&type=printable", "pdf_hash": "819808b074d889315bfbf7710602715af6a3f7b1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44915", "s2fieldsofstudy": [ "Biology" ], "sha1": "819808b074d889315bfbf7710602715af6a3f7b1", "year": 2012 }
pes2o/s2orc
Pyrokinin β-Neuropeptide Affects Necrophoretic Behavior in Fire Ants (S. invicta), and Expression of β-NP in a Mycoinsecticide Increases Its Virulence Fire ants are one of the world's most damaging invasive pests, with few means for their effective control. Although ecologically friendly alternatives to chemical pesticides such as the insecticidal fungus Beauveria bassiana have been suggested for the control of fire ant populations, their use has been limited due to the low virulence of the fungus and the length of time it takes to kill its target. We present a means of increasing the virulence of the fungal agent by expressing a fire ant neuropeptide. Expression of the fire ant (Solenopsis invicta) pyrokinin β -neuropeptide (β-NP) by B. bassiana increased fungal virulence six-fold towards fire ants, decreased the LT50, but did not affect virulence towards the lepidopteran, Galleria mellonella. Intriguingly, ants killed by the β-NP expressing fungus were disrupted in the removal of dead colony members, i.e. necrophoretic behavior. Furthermore, synthetic C-terminal amidated β-NP but not the non-amidated peptide had a dramatic effect on necrophoretic behavior. These data link chemical sensing of a specific peptide to a complex social behavior. Our results also confirm a new approach to insect control in which expression of host molecules in an insect pathogen can by exploited for target specific augmentation of virulence. The minimization of the development of potential insect resistance by our approach is discussed. Introduction The spread of fire ants is considered a classic example of worldwide biological invasions of a species into previously unoccupied habitats with the potential to result in significant ecosystem alterations. The red imported fire ant (Solenopsis invicta), native to South America, is considered by the World Conservation Unit as one of the top 100 worst invasive alien species, and its detrimental impact on humans, domestic and wild animals, agriculture, and ecosystems is well-documented [1,2,3]. It is a major invasive pest insect to almost the entire Southeastern United States and continues to expand it range north and westwards causing agricultural and ecosystem disruptions that extend from crop losses to declines of native species [4]. Fire ant have continued to spread despite the treatment of over 56 million hectares with Mirex bait alone and tons of other chemical insecticides [5], which themselves have significant damaging environmental consequences. Biological control of fire ants using entomopathogenic fungi, such as Beauveria bassiana, offers a more environmentally friendly alternative to chemical pesticides [6,7,8,9]. The use of entomopathogenic fungi, however, has met with limited success partially due to the relatively long time (3-10 days) it can take for the fungus to kill target insects. Ants have posed a particular challenge due to communal behaviors such as grooming and nest cleaning which can decrease the efficacy of microbial agents [10]. Previous work has shown that the potency of fungal insecticides can be improved [11]. Expression of a 70 amino acid scorpion (Androctonus australis)derived neurotoxin in the fungal insect pathogen, Metarhizium anisopliae, increased its toxicity 9-fold against Aedes aegyptii as compared to its wild-type parent [12]. Here, we sought to use a different approach, namely to express host molecules, e.g. hormones or neuropeptides, in the fungal pathogen. 
As the fungus targets the insect, it will produce the host molecule, disrupting the normal endocrine or neurological balance of the host. The desired outcome is to make the target (fire ant) more susceptible to the invading fungus, thus increasing the potency of the fungal agent. As candidates for expression in the fungus we sought to use a recently described strategy whereby peptides that participate in a critical host physiological process are used [13]. Depending upon the molecule (peptide) chosen, in theory, the increased virulence can, to a particular degree, be host specific, thus minimizing nontarget effects. The pyrokinin/pheromone biosynthesis activating neuropeptide (PBAN) family consists of insect neurohormones characterized by the presence of a C-terminal FXPRL amine sequence [14,15]. First isolated from the cockroach, Leucophaea maderae, as a myotropic (visceral muscle contraction stimulatory) peptide, members of this peptide family are widely distributed within the Insecta, where depending upon the species, they function in a diverse range of physiological processes that includes stimulation of pheromone biosynthesis, melanization, acceleration of pupariation, and induction and/or termination of diapause [16,17,18]. In the natural insect host, these peptides are C-terminal amidated, a modification often required for their activity. In Lepidoptera, the PBAN peptide is encoded on a translated ORF that is subsequently processed (cleaved) to yield diapause hormone (DH), and the a-, b-, and c-neuropeptides, along with the PBAN peptide itself (which is found between the band c-neuropeptides). More recently, isolation of a cDNA sequence for the fire ant, S. invicta, led to the identification of PBAN and related peptide homologs [19]. Analysis of the ORF revealed the presence of DH, as well band c-neuropeptide homologs, but no a-neuropeptide. Here, we assessed the impact of expressing the b-NP peptide in the fungal insect pathogen B. bassiana. Our data show a decrease in both the lethal dose (LD 50 ) and lethal time (LT 50 ) it takes to kill target fire ants in the b-NP expressing strain as compared to its wild-type parent. The effect was host specific, and no increase in virulence was noted when the strain was tested against the greater wax moth, Galleria mellonella. By using a host molecule the chances of resistance are minimized due to the simple fact that the fungalexpressed peptide represents a host molecule that is regulated in both tissue specific and developmental patterns. Any mutations that could compensate for the increased dose given by the fungus during infection would be significantly compromised, indeed, potentially dependent upon the fungus for proper development. Unexpectedly, we observed that the cadavers of ants killed by the b-NP expressing B. bassiana strain were treated differently, i.e. removed slower, than controls or those killed by the WT fungus. Experiments testing the effects of synthetic peptides on cadaver removal or necrophoretic behavior resulted in another serendipitous result, namely that ant cadavers treated with the b-NP-NH 2 peptide were removed much more rapidly than b-NP or control treated cadavers. The implications of these results in terms of biological control of ants and chemical sensing are discussed. Purification and identification of b-NP from fungi cultures using HPLC and MS/MS In order to verify (extracellular) b-NP production in the recombinant B. 
bassiana strain, fungal cultures (Bb::spb-NP gpd and the WT parent) were grown first grown in SDBY (Sabouraud dextrose broth with 0.5% yeast extract) for 2 d, after which 1.5 g of washes cells were transferred to Czapek-dox broth (50-100 ml) for 3 days. Fungal cells were removed by centrifugation, the resultant supernatant filtered through a 0.22 mm filter, and the supernatant samples subsequently lyophilized and stored at 220uC until used. Lyophilized samples were rehydrated in 3.0 ml of water containing 0.1% TFA, and applied onto a C 18 reverse phase SepPak column. The column was washed with 0.1% TFA and peptides were eluted with 80% acetonitrile-0.1% TFA. The eluted fraction (in acetonitrile) was dried in a SpeedVac, resuspended in water-0.1% TFA (0.5 ml) and chromatographed on a C 18 reversed phase HPLC column with eluting factions monitored via absorbance at 214 nm. Fractions eluting at the same retention time as an initial run using synthetic b-NP used as a standard, were collected, dried with a fine stream of N 2 , rehydrated to 0.2 ml with water-0.1% TFA, and rechromatographed as above. Fractions were collected as above, dried under N 2 and analyzed by LC-MS/MS (University of Florida, Dept. of Chemistry, analytical Services). A standard curve using synthetic b-NP was made in order to quantify the amount of peptide in the sample. Insect Bioassays S. invicta colonies were collected from the field, separated from the soil by drip flotation and maintained in Fluon-coated trays with a diet consistaing of 10% sucrose solution, a variety of freezekilled insects, fruits and vegetables, and chicken eggs. Fungal cultures were grown on potato dextrose agar (PDA). Plates were incubated at 26uC for 14-21 d, and aerial conidia (spores) were harvested by flooding or scraping the plates with sterile distilled H 2 O containing 0.05% Tween 80. Spores concentrations were determined by direct count using a hemocytometer and adjusted to the desired concentration for use (typically between 10 6 -10 8 conidia/ml). Two types of bioassays were used to assess the virulence of the fungal strains: (1) ''classical bioassay'' using S. invicta workers. Test groups of ants (25/chamber) were inoculated with fungal suspensions (concentrations ranging from 10 6 -10 8 conidia/ml) using a spray tower as described [22]. The ants were housed in plastic cups (ø = 6 cm) whose sides had been coated with Fluon and topped with a perforated lid. Ants were given 10% sucrose solutions in 1.5 ml Eppendorf tubes with a cotton plug. Experiments were performed at 26uC and mortality was recorded daily. Controls were treated with Tween-80 and the mortality assays were repeated at least three times. (2) ''Mock mini-mound'' assays. Larger scale bioassays were performed using larger test chambers (ø = 19 cm). Test chambers contained a small Petri dish (ø = 3 cm) containing moist dental plaster, that served as the nest for the mini-mound. Ants (0.5 gm, ,2,000 individuals) including 3-4 dealate reproductive females were placed in the test chamber that included a 10% sucrose solution in an Eppendorf tube. Treatments and assay conditions were identical to the classical bioassay. Duplicate samples were performed for each experiment and the entire assay repeated three times with independent batches of fungal spores. For all experiments, a x 2 -test was first used to determine homogeneity among variance of the repeats (p,0.05). 
Further statistical analysis of the mortality was performed using SPSS which was used to estimate the median lethal time (LT 50 ), the median lethal concentration (LC 50 ), fiducial limits and other regression parameters. Necrophoretic behavior assays Assay chambers and methods were based upon a previously described protocol [23]. Briefly, the conical end of a 15 ml polypropylene tube (nest) was cut off and connected via a short tubing (ø = 8 mm, 10 cm long) to a round plastic container (ø = 19 cm, foraging arena) into which a hole had been punched out in the bottom at the middle of the container. Test ants (0.1 gm, ,400 ants with at least one dealate) were placed in the assay chamber and allowed to equilibrate for 1-2 hr before the experiment was initiated. Three separate experimental protocols were employed. (1) Freeze killed; ants killed by the WT B. bassiana strain, and ants killed by the Bb::spb-NP gpd strain were presented to untreated ants. For the freeze-killed ants, ants were placed at 280uC for 15 min, and then placed at R.T. for 24 hr before use. For fungal-killed ants, infections were performed as described above and the dead ants removed daily. Test ants were derived from those that died on day 4 post-infection. To measure necrophoretic behavior, the test items (5-10 dead ants) were placed in a ring around (1 cm from) the nest entrance. The time interval between introduction and removal of each item was recorded up to a time limit of 600-800 minutes. The number of test objects that were not moved within this interval was also recorded. (2) The effect of infection on necrophoretic behavior was probed by presenting WT-or Bb::spb-NP gpd -killed ants to (a) uninfected ants, (b) WT-infected ants, or (c) Bb::spb-NP gpdinfected ants. Ants were infected (5610 7 conidia/ml) with the fungal strains 2 d prior to testing. Test objects (dead ants) were prepared and tested as described above. (3) The effect of synthetic peptides on ant cadaver removal was evaluated by having three peptides; (a) b-NP (QPQFTPRL, no C-terminal amidation), (b) b-NP-NH 2 (QPQFTPRL-NH 2 ), and (c) QAGVTGHA-NH 2 (control 8 amino acid amidated peptide) synthesized (GenScript, Piscataway, NJ). Freeze-killed ants (15 min at 280uC, allowed to thaw for 15 min R.T.) were immersed in 100 nM solutions (resuspended in sterile distilled H 2 O) of the test peptide or H 2 O alone for 30 s, and then allowed to air-dry for 30 min on a Kim-wipe towel. Necrophoretic behavior to the ants was measured as described above. P-values were obtained from an analysis of variance (1 or 2 way-ANOVA) for each data set, using a permutation test to guard against possible non-normality. 10,000 permutations were used for each test statistic. The unknown (i.e. never moved test objects) data had no effect on the analysis. Construction and bioassay of b-NP expressing B. bassiana The fire ant b-NP, comprised of the eight-amino acid sequence, QPQFTPRL, was expressed in B. bassiana via transformation of an expression vector containing a constitutive B. bassiana-derived gpd-promoter, and the nucleotide sequence corresponding to the b-NP peptide fused to a 28-amino acid signal sequence derived from the B. bassiana chitinase (chit1) gene to produce strain Bb::spb-NP gpd . Heterologous expression of the peptide was confirmed by partial purification and mass spectrometry analysis of culture supernatants. These data indicated the production of a non-amidated b-NP peptide by the fungus at a concentration of ,0.2-0.4 mM. 
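The group comparisons of cadaver-removal times reported in the Results below rely on the permutation-based ANOVA described in the statistics paragraph above (10,000 permutations). A minimal one-way sketch of such a test is given here for illustration only; the function and the grouping of removal times into arrays are assumptions, and the study also used two-way designs, which would require permuting within the appropriate factor structure.

```python
import numpy as np

def permutation_anova(groups, n_perm=10000, seed=0):
    """One-way ANOVA F-test with a permutation-based P-value.

    `groups` is a list of 1-D arrays of removal times, one per treatment.
    Group labels are shuffled n_perm times to build the null distribution
    of F, guarding against non-normality of the raw data.
    """
    rng = np.random.default_rng(seed)
    data = np.concatenate(groups)
    labels = np.concatenate([np.full(len(g), i) for i, g in enumerate(groups)])

    def f_stat(values, labs):
        grand = values.mean()
        ks = np.unique(labs)
        ss_between = sum(len(values[labs == k]) * (values[labs == k].mean() - grand) ** 2
                         for k in ks)
        ss_within = sum(((values[labs == k] - values[labs == k].mean()) ** 2).sum()
                        for k in ks)
        df_b, df_w = len(ks) - 1, len(values) - len(ks)
        return (ss_between / df_b) / (ss_within / df_w)

    observed = f_stat(data, labels)
    exceed = sum(f_stat(data, rng.permutation(labels)) >= observed
                 for _ in range(n_perm))
    return observed, (exceed + 1) / (n_perm + 1)   # F statistic, permutation P-value
```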
Both classical worker-group and mock-mound assays were used to assess the virulence of the WT and b-NP expressing B. bassiana strains. Bb::spb-NP gpd was much more potent (P<0.001) than WT, causing 50% mortality against fire ants after 5 days post-infection with an LD 50 of 1.5±0.9×10^7 conidia/ml, compared to an LD 50 of 1.0±0.7×10^8 conidia/ml for the WT parent. Thus, it takes 6-7-fold fewer conidia to provide the same level of mortality. Expressing b-NP also significantly reduced survival times (Fig. 1). At a concentration of 2×10^7 conidia/ml, the mean lethal time to achieve 50% mortality (LT 50) was reduced from 177±11 hr for the WT to 122±5 hr for the b-NP expressing strain, representing a ~30% reduction in the mean survival time (P<0.01). At lower spore concentrations (4×10^6 conidia/ml) the effect was even more dramatic, with the WT LT 50 reaching 211±23 hr and the b-NP expressing strain 135±7 hr (P<0.001). In order to determine whether expression of the fire ant b-NP would affect virulence towards other insects, bioassays were performed with several other insect species. No significant difference was noted between the virulence of the WT and Bb::spb-NP gpd strains towards the lepidopteran host, Galleria mellonella, in which the LT 50 values were 158±5 hr and 166±8 hr for the WT and b-NP expressing strains, respectively (P>0.05). Similarly, no difference was noted between the WT and b-NP expressing strains when tested against the tobacco hornworm, Manduca sexta.

Alterations in ant social behavior mediated by b-NP In the course of performing mock fire ant mound experiments, we noted that ants infected with the Bb::spb-NP gpd strain appeared altered in their necrophoretic, or disposal of the dead, behavior (Fig. 2). Whereas mock-treated and WT B. bassiana-infected ants disposed of their dead in well-defined "bone piles", Bb::spb-NP gpd-infected ants appeared to have randomly scattered piles of dead throughout the assay chamber, although typically at the periphery. In order to further probe this observation, we examined the responses of workers to nestmate corpses by placing corpses near the nest entrance in an experimental arena and monitoring the time taken to remove the corpses. Workers moved ants killed by the WT B. bassiana strain faster than freeze-killed ants (~24 hr old, P<0.01), but not those killed by the Bb::spb-NP gpd strain, which showed wider variation but was not significantly different from the response to the freeze-killed ants (Fig. 3). Thus, expression of the b-NP peptide appeared to delay removal of corpses. The large variation in removal time observed with Bb::spb-NP gpd-infected ants may be due to differences in levels of b-NP expression in infected ants resulting from differential fungal growth within individual ant hosts. The infection state of the ants themselves did not appear to make a significant difference (P = 0.86). When WT- or Bb::spb-NP gpd-infected ants were presented with either WT- or Bb::spb-NP gpd-killed ants, they moved the Bb::spb-NP gpd-killed ants more slowly than WT-killed ones (P<0.001, Fig. 4). These experiments confirmed that ants killed by Bb::spb-NP gpd were treated differently from WT-killed ants, which were more rapidly removed regardless of the infection state of the ants themselves.
This finding has potentially important application consequences since it may increase the lethality of the fungus in field applications due to reduced removal of cadavers which would increase the contact time and possible dispersal of the fungal agent within mounds. In order to further probe the effects of b-NP, a series of synthetic peptides were examined. Since both pheromonotropic and myotropic activity of pyrokinin/PBAN peptides have been demonstrated via topical application of the peptides onto insects [24], we sought to determine the effects of b-NP-NH 2 (Cterminal amidated), b-NP (non-amidated peptide), and a control amidated peptide (QAGVTGHA-NH 2 ) on the necrophoretic behavior of the fire ants. Freeze-killed ants were immersed in a 100 nM solution of the tested synthetic peptides and presented to untreated ants. Surprisingly, we found that ant corpses treated with the b-NP-NH 2 peptide were moved significantly faster than buffer treated, b-NP-treated ants, or ants treated with a control eight-amino acid amidated peptide (P,0.001, Fig. 5). b-NP-treated ants were not moved any slower than control or buffer treated ants, although their distribution and the number of ants that were never removed within our assay conditions was larger for the b-NP treatment than for any other treatment examined. Discussion The reconstruction of the global invasion history of fire ants, from introduction into the United States from their native South American range to their subsequent spread to newly colonized habitats worldwide, has highlighted the unintended perils and risks associated with the interconnected nature of global trade and travel [25]. The destructive nature of fire ant establishment and spread into new ecosystems has led to intense efforts at their control or eradication, in which, even fire ant detecting dogs have been employed [26]. The use of chemical pesticides has failed to stem the spread of fire ants, resulted in the emergence of pesticide resistance, and has not been without controversy [5,27]. Thus, there has been much interest in the use of biological control strategies for fire ant control ranging from release of various parasites including mites and phorid flies, to the use of viruses, microsporidia, nematodes, and fungi [6]. The use entomopathogenic fungi, such as B. bassiana, although promising in several studies, has thus far met with limited success [6,28,29]. Although newer formulation technologies have increased their field efficacy, the relatively slow kill rate of these fungi coupled to ant behavioral responses such as grooming and corpse removal continue to pose significant obstacles to the use of entomopathogenic fungi [10,30]. Recent efforts have demonstrated success in increasing the virulence of entomopathogenic fungi. Expression of a scorpion toxin in M. anisopliae increased virulence 22-fold towards the tobacco hornworm, Manduca sexta, and 9-fold against the mosquito Aedes aegypti. Expression of the same toxin in B. bassiana was shown to increase its virulence towards a variety of hosts including the pine caterpillar, Dendrolimus punctatus [31]. The heterologous expression of toxins, however, has not been without controversy, and the potential for the development of resistance to the toxin remains. We sought, therefore, to develop a different strategy for increasing virulence by using host molecules against the host from which it came [13]. There are several important features of this strategy; first a suitable insect molecule (peptide) must be identified. 
Numerous insect derived peptides have already been suggested or even employed for insect control [32,33,34]. Depending upon the peptide chosen (its distribution and orthology), in principle, various levels of selectivity can be obtained. Although it should be emphasized that such selectivity would need to be verified via experimental analysis, data concerning the spectrum of targets for a number of insect peptides proposed to be used for insect control already exists (see references above). Second, using a host molecule should minimize issues concerning the development of resistance primarily because any mechanism for potentially developing resistance to the host molecule is likely to severely compromise the host. In the case of fire ants, resistance development is even less likely since only queens produce progeny, thus selection occurs within a very small population. In this report, we improved the virulence of a B. bassiana strain to fire ants by expressing a fire ant (neuro-) peptide in the fungal pathogen. Increased virulence in the b-NP expressing fungal strain was noted in both standard and mock mound assays. The increased virulence was specific and no effects were detected against a Lepidopteran hosts (Galleria mellonella and Manduca sexta), indicating that target-specific virulence can be achieved. This has significant potential for fungal strain improvement and regulatory agencies approval for insect control applications. Unexpectedly, we noted an altered behavioral pattern in ants infected with the b-NP expressing strain. Rather than forming organized corpse piles as seen in uninfected and wild-type infected ant assays, the dead appeared to remain dispersed throughout the assay chambers. Removal of dead nestmates is thought to limit the potential spread of pathogens, particularly within a social society, and is a common behavior in many ants species. Our observation pointed to altered behavioral effects resulting from application of the b-NP expressing strain on fire ants. These behavioral effects were further probed using the various fungal strains as well as synthetic peptides. Experiments using synthetic peptides indicated that: (1) worker ants have chemosensory perception mechanisms that are able to discriminate between surface peptides, and (2) the b-NP-NH 2 peptide, but not the non-amidated form, can act as a semiochemical specifically eliciting enhanced necrophoretic behavior. However, although ants killed by the b-NP expressing fungus were moved slower than WT killed or controls, treatment of dead ants with the synthetic b-NP peptide did not show any significant differences in movement times of the dead ants as compared to the controls, i.e. the biological application displayed a phenotype not observed in the application of the synthetic peptide. There are several possible explanations for these results. First, in the biological application, b-NP would be expressed both within the ant (as the infection proceeds) as well as without. Internal expressed b-NP could then have agonistic interactions with the host's amidated peptide and/or receptor(s) or disrupt other host physiological processes that in turn affect (cadaver) recognition cues. Second, the fungal infection may elicit or suppress microbial pathogen/infection detection mechanisms which would not occur when the synthetic peptide is administered externally. Finally, the slower removal times observed using B. bassiana b-NP-expressing killed ants could be due to a combination of fungal factors (i.e. 
fungal produced enzymes, toxins, volatiles, or other compounds) that act in conjunction with the presence of b-NP to affect cadaver removal. As a facultative parasite, our results expand the realm of examination between B. bassiana and their insect targets, which represents a model system in which molecular and cellular dissection of the host-pathogen interaction is beginning to emerge [35,36,37,38,39]. Our results also open up a new avenue of research with respect to the role and functions of PBAN/pyrokinin peptides in insects, linking them with a complex social behavior. A number of chemical stimuli have previously been reported to act as signals for mediating dead nestmate recognition and removal, i.e. necrophoretic behavior [23,40]. In particular, increasing concentrations of decomposition products, especially fatty acids such as myristoleic, palmitoleic, oleic, and linoleic acids, appear to be major stimuli in eliciting necrophoretic behavior. It has also been proposed that chemical stimuli that elicit removal of nestmate corpses are present on both live and dead ants, however, live ants contain additional compounds that mask these signals, which are subsequently lost or dissipated upon death. [23]. To date, there are no reports on a peptide acting as a necrophoretic modulating semiochemical. Intriguingly, the draft genome of S. invicta has revealed over 400 potential odorant receptor (OR) loci (of which 297 appear to be intact), one of the largest repertoire of such receptors found in insects thus far [41]. Although most ORs are thought to bind hydrophobic and/or volatile compounds and chemicals, it is interesting to speculate that within the S. invicta OR set there may be members that can recognize peptides (and discriminate between C-terminal amidated and non-amidated) peptides. This report links b-NP-NH 2 and necrophoretic behavior. Topical application of PBAN/pyrokinins are known to induce pheromotropic and myotropic activity in live insects [24], however, the full range of their physiological activities remains obscure. Members of the PBAN/pyrokinin family can apparently act as necrophorectic-eliciting cues on dead insects, expanding the potential physiological and sanitary roles of these peptides.
v3-fos-license
2019-07-18T14:22:03.189Z
2019-07-01T00:00:00.000
197422141
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2079-4991/9/7/1016/pdf", "pdf_hash": "7577923ed2e6a49c939700c83dd3bd5c58021f96", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44916", "s2fieldsofstudy": [ "Biology" ], "sha1": "7577923ed2e6a49c939700c83dd3bd5c58021f96", "year": 2019 }
pes2o/s2orc
Biological Responses of Onion-Shaped Carbon Nanoparticles Nanodiamonds are emerging as new nanoscale materials because of their chemical stability, excellent crystallinity, and unique optical properties. In this study, the structure of nanodiamonds was engineered to produce carbon nano-onion particles (CNOs) with multiple layers. Following a series of physicochemical characterizations of the CNOs, various evaluations for biological responses were conducted for potential biotechnological applications of the CNOs. The possibility of biological applications was first confirmed by assessment of toxicity to animal cells, evaluation of hemolysis reactions, and evaluation of reactive oxygen species. In addition, human immune cells were evaluated for any possible induction of an immune response by CNOs. Finally, the toxicity of CNOs to Escherichia coli present in the human colon was evaluated. CNOs have the chemical and physical properties to be a unique variety of carbon nanomaterials, and their toxicity to animal and human cells is sufficiently low that their biotechnological applications in the future are expected. Introduction Nanomedicine is broadly defined as the biomedical application of nanotechnology. More specifically, nanomaterials with a variety of physical, chemical, and biological characteristics can be employed in biomedical applications to overcome difficulties that have remained unresolved. Recently, new therapeutic and diagnostic methods have been developed by combining simple supramolecular components using nanotechnology [1][2][3][4][5]. Examples include the detection of pathogenic antigens [6], diagnosis and imaging of disease [7], development of drug delivery vehicles [8][9][10], and development of antibacterial agents [11]. Carbon nanomaterials with various dimensions, such as one-dimensional carbon nanotubes, two-dimensional graphene, and zero-dimensional fullerenes, are very attractive nanomaterials. They have gained much attention and have been actively studied because of their small nanoscale size and various physicochemical characteristics that are due to the large surface-to-volume ratio. For example, applications in tissue engineering [12,13], drug or gene transfer biosensors [14], photothermal therapy [15], and antibacterial substances [16] are actively being pursued. Recently, carbon nanomaterials with various new structures, such as carbon dots or nanodiamonds, have been developed and are being continuously studied. However, some carbon nanomaterials have been reported to be toxic to certain cells or animals and to induce an immune response [17][18][19]. Carbon nanomaterials within the mammalian cell are reported to promote reactive oxygen species (ROS), which are the cause of the toxicity. The incidence and toxicity of ROS are known to vary depending on the type of carbon nanomaterial [20]. Therefore, a thorough evaluation of biocompatibility in the use of carbon nanomaterials in the field of nanomedicine is inevitable. Carbon nanomaterials such as graphene, graphene oxide, carbon nanotubes (CNTs), and acid functionalized CNTs have been studied for their potential toxicity and their cytotoxic mechanisms in cells for further biomedical applications. It was found that neat graphene was toxic when treated with a Raw 264.7 cell by inducing the production of reactive oxygen species and apoptosis [21] and the nanocomposite of metal and graphene can be applied as an electrochemical sensor, using excellent thermal conductivity and electric conductivity [22]. 
Graphene oxide is involved in the release of lactate dehydrogenase (LDH) released from dead or damaged cells by apoptosis or necrosis and causing toxicity, due to the accumulation of autophagosome [23]. There are many applications for the application of graphene oxide, but, for example, there is a case applied to photodynamic therapy (PDT). By directly coupling the polyethylene glycol (PEG) with functional groups on the graphene oxide sheet, the dispersibility of the existing graphene oxide was further improved. Furthermore, branched polyethyleneimine (BPEI) was conjugated using EDC-NHS coupling. BPEI increased the loading capability of Chlorin e6 (Ce6), a photosensitizer, and increased photodynamic efficacy [24]. In addition, single-walled carbon nanotubes (SWCNTs) have been shown to cause damage to mitochondria [25], and the acid functionalized SWCNTs were also found to be involved in LDH release and accumulation in the autophagosome, leading to toxicity. [23]. Multi-walled carbon nanotubes (MWCNTs) likewise have been shown to induce apoptosis, the release of LDH [26], and mitochondrial damage [27]. The toxicity of SWCNTs, MWCNTs, and acid functionalized MWCNTs to intestinal microbes was evaluated and found to be antimicrobial. All of these can be applied as antimicrobial agents and CNTs have been reported to exhibit antibacterial effects by destroying the cell walls and membranes of bacteria [28]. Furthermore, carbon nanomaterials are used as a genosensor [29], drug delivery carrier [30], and biosensor [31]. Table 1 summarizes carbon materials for their main biological applications and toxicity. [27,31] Previous studies on properties and toxicity of carbon nano-onions (CNOs) have been carried out. A team of researchers studied surface functionalization to improve the problems of existing CNOs [32,33]. The other team evaluated the derivatives of CNO against Hydra vulgaris [34], providing a model system for the application of anti-microbial carbon nanomaterials. In this study, we synthesized CNOs (multi-walled fullerenes) using nanodiamonds. In addition, carboxyl groups were introduced on the surface of CNO to increase the water dispersion of the nanomaterials, which is essential for their biomedical applications. We investigated the feasibility of biomedical applications of CNOs by analyzing their physicochemical characteristics and biological responses. We evaluated whether CNO is a biocompatible carbon nanomaterial by performing assessments of CNO cytotoxicity to human Nanomaterials 2019, 9, 1016 3 of 12 dermal fibroblast (HDF) cells and peripheral blood mononuclear cells (PBMCs), immune response assays, hemolysis evaluations, assays for ROS generation (which is the main toxic factor of carbon nanomaterials), and measurement of toxicity to Escherichia coli intestinal bacteria. In conclusion, CNO showed no toxicity to HDF cells and PBMCs and did not induce the secretion of IL-2 and tumor necrosis factor-alpha (TNF-alpha) (which are cytokines in human immune cells). Moreover, the hemolysis rate tended to be very low, and CNO was not toxic to E. coli. Particularly because of the low occurrence of ROS, CNO is considered promising for use as a new carbon nanomaterial in research in various biomedical fields, such as tissue engineering, drug delivery systems, and biosensors. Pyrolysis of Nanodiamond Carbon nano-onions (CNOs) were generated by the pyrolysis of nanodiamonds (NDs) (Plasma Chem, Berlin, Germany) at high temperature. 
A crucible of ND (3 g of gray powder) was placed in a furnace tube and annealed at 1400 • C for 1 h under a nitrogen purge. The temperature was gradually lowered to room temperature under nitrogen. The NOs obtained in this annealing process were entirely black. Carboxylated Nano-Onion The surface of the CNO treated by pyrolysis was functionalized with carboxyl groups by the Hummers' method [35] to create CNO-COOH, as shown in Figure 1. Sulfuric acid (360 mL) and phosphoric acid (40 mL) were carefully added to a round bottom flask in an ice bath. CNO (3 g) and potassium permanganate (9 g) were added into the sufficiently cooled sulfuric acid/phosphoric acid solution. The CNO was homogeneously dispersed for 1 h using a bath sonicator. The CNO dispersion was oxidized with gentle stirring at 50 • C. After 12 h, 800 mL of deionized (DI) water was added to the dispersion; the mixture was cooled and then combined with hydroperoxide (3 mL) to stop reactivity. The mixture was vacuum-filtered using a 0.2 µm polytetrafluoroethylene (PTFE) hydrophilic membrane filter (SciLab, Seoul, Korea). The filtered compact cake was washed twice with DI water (200 mL), HCl (200 mL), and ethanol (200 mL) through vacuum filtration. Finally, after washing with diethyl ether (200 mL), the cake was dried overnight in an 80 • C air-circulating oven. Characterization X-ray photoelectron spectroscopy (XPS) measurements were performed with a K-alpha+ spectrometer (Thermo Fisher Scientific, Waltham, MA, USA), using an Al Kα energy source. The spectra were analyzed using Advantage software. Transmission electron microscopy (TEM) images were acquired using a JEM-2100F electron microscope (JEOL, Tokyo, Japan). The CNO dispersion was dropped onto a Lacey Formvar/Carbon 200-mesh grid (Ted Pella, Redding, CA, USA) and dried for 10 min in a 60 • C oven. CNO and CNO-COOH(1 mg) were dispersed in ethanol (1 mL) and diluted with an appropriated concentration for the images. Cell Viability Assay Cell viability was determined using a CCK-8 cell counting kit [36] (Dojindo Molecular Technology, Kumamoto, Japan). Cells were seeded with equal density into each well of 96-well plates (5 × 10 3 cells per well), using 100 µL of cell culture medium (low-glucose Dulbecco's Modified Eagle Medium (DMEM), supplemented with 10% (v/v) fetal bovine serum and 1% sterile antibiotic), and were incubated for 24 h at 37 • C. Cells were then treated in 96-well plates with varying concentrations of CNO and CNO-COOH particles in a serum-free medium for 24 h at 37 • C. Untreated cells served as a control group. At the end of the treatment, CCK-8 dye was added to each well, and the plates were incubated for another 2 h at 37 • C. To prevent particles from interfering with this assay, the solution in each well of each plate was quantitatively transferred to an empty well in another plate after centrifugation. Subsequently, the absorbance was measured at 450 nm, using a microplate reader. Each treatment was repeated three times. The cell viability with the CNO and CNO-COOH was further assessed using a LIVE/DEAD ® Viability/Cytotoxicity Kit (Invitrogen™; Life Technologies, Carlsbad, CA, USA). The kit can quickly discriminate live from dead cells by simultaneously staining with green-fluorescent calcein acetoxymethyl ester to indicate intracellular esterase activity and with red-fluorescent ethidium homodimer-1 to indicate loss of plasma membrane integrity. After 24 h of incubation with varying concentrations of CNO and CNO-COOH, the culture medium was removed. 
Next, 200 µL of LIVE/DEAD stain was added to each well, and the wells were incubated for 30 min at 37 °C. Finally, the samples were observed using a fluorescence microscope. Hemolysis Test Aliquots (1 mL) of 2% red blood cells (from sheep blood) suspended in phosphate-buffered saline (PBS) were mixed with CNO and CNO-COOH solution (final concentrations of CNO and CNO-COOH were 5000, 10,000, 50,000, and 100,000 ng/mL each) and incubated at 37 °C for 1 h. The samples were then centrifuged at 2000 rpm for 5 min to remove intact red blood cells, and the absorbance of the supernatant was measured at 545 nm for the release of hemoglobin. PBS and 5% Triton X-100 were used as a negative and positive control, respectively. All measurements were performed in triplicate, and the hemolysis rate (%) was determined as HR (%) = (OD sample − OD negative control)/(OD positive control − OD negative control) × 100% [37] (a small numerical sketch of this calculation is given at the end of this section). Intracellular Reactive Oxygen Species Measurement The intracellular ROS was determined using a well-characterized probe, namely 2′,7′-dichlorofluorescein diacetate (DCFH-DA) [38]. DCFH-DA passively enters the cell and is hydrolyzed by esterases to DCFH. This nonfluorescent molecule is then oxidized to the fluorescent compound dichlorofluorescein (DCF) by cellular oxidants. A 10 mM DCFH-DA stock solution (in methanol) was diluted 1000-fold in the cell culture medium without serum or other additives to yield a 10 µM working solution. Cells were washed twice with PBS and then incubated with DCFH-DA working solution for 20 min in a dark environment (37 °C incubator). This was followed by treatment with varying concentrations of CNO and CNO-COOH particles for 24 h. The cells were then washed three times with PBS to eliminate DCFH-DA that did not enter the cells. Cells were collected in suspension and the fluorescence was determined at 488 nm excitation and 525 nm emission, using a fluorescence spectrophotometer. Cytokine Profiling Assay The cytokine profiling was performed using the enzyme-linked immunosorbent assay (ELISA) [39]. An unlabeled capture antibody was diluted to a final concentration of 0.5-8 µg/mL in coating buffer (Cat. No. 421701, BioLegend, CA, USA) and 100 µL were transferred to each well of a high-affinity, protein-binding ELISA plate (for example BioLegend Cat. No. 423501). The plate was incubated at 4 °C overnight. After three washes with PBS/Tween, non-specific binding sites were blocked by adding 200 µL of blocking solution to each well. The plate was incubated at room temperature for 1 h. After three additional washes with PBS/Tween, 100 µL of the supernatant of cells treated with varying concentrations of CNO and CNO-COOH particles was added to each well in the ELISA plate and incubated at room temperature for 2-4 h. After another three washes with PBS/Tween, 100 µL of biotin-labeled detection antibody, diluted to a concentration of 0.25-2 µg/mL in blocking solution, was added to each well and incubated at room temperature for 1 h. After another three washes with PBS/Tween, 100 µL of Av-HRP conjugate (BioLegend Cat. No. 405103) at its predetermined optimal concentration in blocking buffer (usually between 1/500 and 1/2000) was added to each well. After incubation and washing, 100 µL of TMB Reagent was transferred to each well and incubated at room temperature for color development. The optical density (OD) of each well was read with a microplate reader at 450 nm wavelength. Bacterial Tests The in vitro bacterial activities of CNO and CNO-COOH were examined using the colony counting method [40]. Gram-negative E. coli (ATCC 25922) were used as microorganisms. Sterilized Luria-Bertani (LB) broth was measured (1 mL) into sterile tubes. The CNO and CNO-COOH at varying concentrations (1.5625, 3.125, 6.25, 12.5, 25, and 50 µg/mL) were introduced into the LB broth solution, which contained approximately 1.5 × 10^5 colony forming units (CFU) of E. coli. The mixtures were cultured at 37 °C in a shaking incubator for 12 h. Pure PBS buffer and antibiotics were also tested as a negative control and positive control, respectively. A 100 µL aliquot of each of these cell solutions was seeded onto LB agar using a surface spread plate technique. The plates were incubated at 37 °C for 24 h. The bacterial CFUs were then counted to calculate the survivors. Statistical Analysis A one-tailed Mann-Whitney U test was performed using GraphPad Prism (v 7 for Mac OS X; GraphPad Software, La Jolla, CA, USA).
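The hemolysis rate defined above, the CCK-8 viability readout, and the colony counts from the bacterial test all reduce to simple arithmetic on the raw measurements. The following minimal Python sketch illustrates those calculations; all numerical values are invented for illustration, and the normalization of CCK-8 absorbance to the untreated control and the spread-plate back-calculation are standard conventions assumed here rather than formulas stated in this study (only the hemolysis-rate formula is given explicitly above).

# Illustrative only: the OD readings and colony counts below are invented, not data from this study.

def hemolysis_rate(od_sample, od_negative, od_positive):
    # HR (%) = (OD_sample - OD_negative) / (OD_positive - OD_negative) * 100
    return (od_sample - od_negative) / (od_positive - od_negative) * 100.0

def viability_percent(od_treated, od_untreated):
    # CCK-8 viability expressed relative to the untreated control (standard normalization, assumed here)
    return od_treated / od_untreated * 100.0

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    # Back-calculates CFU/mL from a spread-plate count (100 uL plated, as in the method above)
    return colonies * dilution_factor / plated_volume_ml

# Hypothetical triplicate 545 nm readings for one particle concentration.
od_samples = [0.062, 0.058, 0.065]
od_pbs, od_triton = 0.045, 0.920   # negative (PBS) and positive (Triton X-100) controls
rates = [hemolysis_rate(od, od_pbs, od_triton) for od in od_samples]
print(f"mean hemolysis rate: {sum(rates) / len(rates):.2f} %")

print(f"viability: {viability_percent(0.81, 0.85):.1f} %")   # hypothetical 450 nm absorbances
print(f"{cfu_per_ml(152, 1000):.2e} CFU/mL")                 # e.g., 152 colonies from a 1:1000 dilution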
Results and Discussion In this study, the sp³ structure nanodiamond was annealed at 1400 °C for 1 h, and then the sp² structure CNO was synthesized. The CNO-COOH was subsequently synthesized by Hummers' method (Figure 1A). The TEM images of CNO and CNO-COOH showed visualized features and the lattice gap between molecular layers, as shown in Figure 1B. Both CNO and CNO-COOH show the amorphous onion-like layers, unlike the crystalline structure of the nanodiamond. CNOs exist as aggregated forms between 300 and 400 nm, while each CNO is 5-8 nm. High-magnification TEM images of CNO and CNO-COOH show lattice gaps of 0.17 nm and 0.37 nm, respectively. The greater spacing in the lattice compared to the crystal has the potential to serve as a reservoir for electrons or small-molecule drugs. Two-dimensional graphene is believed to complex with small molecules via π-π stacking or physical adsorption, as small molecules settle in the interface between sheets. The XPS spectra of the C1s and O1s peaks of the CNO and CNO-COOH are shown in Supplementary Materials Figure S1A. CNO consists mainly of carbon (97.96%) and has low contents of oxygen (1.47%) and nitrogen (0.57%). In contrast, CNO-COOH has a high oxygen content (19.16%), with 80.13% carbon and 0.71% nitrogen. A simple comparison shows that the oxygen content increased 13-fold in the transformation of CNO into CNO-COOH, indicating that oxidation by the Hummers' method functionalized the onion surface with carboxyl groups (Supplementary Materials Figures S1 and S2 and Supplementary Materials Table S1). The O1s peaks of the CNO and CNO-COOH were deconvoluted into components at 531.6 eV (C=O) and 533.1 eV (C-OH and C-O-C). Based on the O1s peak, the relative C=O contents were 19.9 and 26.9% for the CNO and CNO-COOH, respectively (Supplementary Materials Table S2). The amplification of the C=O carbonyl group content and oxygen content indicates that the carboxyl group was successfully functionalized on the CNO surface. The C1s peaks of the CNO and CNO-COOH were deconvoluted into six component peaks at 284.6 (-C=C-), 285.4 (-C-C-), 286.0 (C-O), 287.2 (C=O), 289 (-COO-), and 290.55 eV (π-π*). The relative contents of -C=C-, which represent an amorphous onion-like sp² layer, were similar (54.0% and 49.7%) for the CNO and CNO-COOH. In contrast, the relative contents of -C=O- and -COO- were 18.13% and 9.66%, respectively, for the CNO-COOH, showing the high carboxylate content on the CNO surface. HDF cells and human PBMCs, which are composed of lymphocytes (for example T cells, B cells, and NK cells) and monocytes, were used to determine the biocompatibility of CNO and CNO-COOH (Figure 2). Cytotoxicity tests were performed by using the CCK-8 assay after 24 h of sample treatment with different concentrations (namely 100, 500, 1000, 5000, and 10,000 ng/mL). For the negative control, the basal medium without any nanoparticle sample was added to the cells. As a positive control, Triton X-100 was used, which is known to dissolve the cell membrane as a surfactant and to kill the cell. Results show that cell viability of the CNO treatment group is high at all concentrations, whereas CNO-COOH decreases the cell viability to 80% at particle concentrations higher than 1000 ng/mL (Supplementary Materials Figure S3A). In the image analyses of the HDF cells, an increased number of aggregated CNO particles are observed compared with CNO-COOH, because the number of ionizable functional groups is lower in CNO. Particle aggregation and hydrophobicity would be the primary reasons for CNO having lower toxicity than CNO-COOH at the same concentration when co-cultured with cells. This phenomenon is also common in other carbon nanomaterials. In a previous report [41], a comparison of the toxicity of reduced graphene oxide with that of a neat graphene oxide showed that the reduced graphene oxide, which has better solubility in an aqueous environment, was highly toxic. The cell viability test with PBMC also showed no toxicity with either CNO or CNO-COOH at any concentration (Supplementary Materials Figure S3B). It should also be noted that CNO is present as an aggregate due to lack of oxygen function and is larger than CNO-COOH. CNO-COOH has excellent dispersibility due to its large number of oxygen functional groups. As CNO-COOH exists as individual particles, it has a large surface area to contact with cells and is more cytotoxic than CNO. Hemolysis, an important consideration for the blood compatibility of nanoparticles, was tested with both CNO and CNO-COOH by co-culturing them with red blood cells. The test consisted of measuring the concentration of hemoglobin leaking out of red blood cells because of the collapse of the red blood cells. As shown in Figure 2C, both CNO and CNO-COOH tended to increase the rate of hemolysis in proportion to their concentrations, but showed a hemolysis rate of less than 4%, even at a very high particle concentration of 100,000 ng/mL. ROSs are highly reactive molecules containing oxygen ions and hydrogen peroxide. Oxidative stresses, due to their high reactivity, can damage the cell structure. ROS generation caused by the presence of carbon-based nanomaterials is one of the major causes of induced cell cytotoxicity. Therefore, the quantity of ROS produced at various concentrations (500, 1000, 5000, and 10,000 ng/mL) of CNO and CNO-COOH was investigated (Figure 3). The negative control was treated with a neat basal medium, and the positive control was treated with hydrogen peroxide as reactive oxygen species. The results show that a substantial quantity of ROS was detected in the positive group treated with hydrogen peroxide, and no significant differences were found among the CNO, CNO-COOH, and control group; however, the ROS production decreased slightly with increasing concentrations of 500 to ~10,000 ng/mL. The quantity of ROS was reduced as the particle concentration increased because the quantities of CNO and CNO-COOH internalized in the cells also increased. CNO and CNO-COOH taken up in the cell would interfere with the ROS assay, quenching the fluorescence signal of ROS. This trend is more significant with CNO-COOH because of its excellent solubility and efficient cell penetration. It should be noted that the current assay results are correct for the CNO functionalized by Hummers' method, having COOH, CHO, or C=O groups due to the oxidation processes. The ROS results here may not be generalized for all functionalized CNO. The T cells in PBMCs secrete cytokines, induce proliferation of macrophages, and promote immune cell differentiation. To investigate the immunological responses of T cells possibly induced by CNO and CNO-COOH, the levels of interleukin-2 (IL-2) and tumor necrosis factor-alpha (TNF-alpha) secreted by the PBMCs were assayed using ELISA, with varying concentrations of CNO and CNO-COOH (Figure 3C,D). The results indicate that the quantity of IL-2 secreted after treatment with CNO and CNO-COOH was less than 500 pg in the negative control and at all concentrations of CNO and CNO-COOH. In addition, the quantity of TNF-alpha secreted was less than 200 pg in the negative control and at all concentrations of CNO and CNO-COOH. Therefore, CNO and CNO-COOH are negligibly toxic to human cells and non-immunogenic and are potentially biocompatible nanomaterials. There are approximately 100 trillion microorganisms in the human body, called the human microbiota; this number is ten times higher than the number of human cells and includes both beneficial and harmful bacteria. These microorganisms are present in various parts of the body, such as the skin, oral cavity, genitalia, respiratory tract, and gastrointestinal tract. The gastrointestinal tract has the most numerous and greatest variety of microorganisms. The significant role that microorganisms play in interactions with the human body, such as absorption and metabolism of nutrients in the human body, maturation, development of the immune system and nervous system, and the occurrence and prevention of various diseases, is well documented. An upset in the balance between beneficial and harmful bacteria could lead to multiple diseases, such as obesity, diabetes, and colorectal cancer [42][43][44]. To test the cytotoxicity of CNO and CNO-COOH against microorganisms, E. coli was chosen as a model in our evaluation. The negative control was supplemented with the no-treatment medium, whereas the positive control was given an antibiotic-antimycotic. The test results showed that all the bacteria were killed in the positive control with antibiotics, but there was no difference between the negative control and CNO and CNO-COOH, even at their highest concentration of 50,000 ng/mL, and they did not affect the E. coli (Figure 4). Summary To evaluate the biocompatibility of CNO and CNO-COOH, an in vitro cytotoxicity evaluation, immunological assays, hemolysis test, ROS production analysis, and toxicity evaluation against E. coli were performed. CNO and CNO-COOH showed no toxicity to human HDF cells and PBMCs at concentrations below 500 ng/mL. Neither CNO nor CNO-COOH induced IL-2 and TNF-alpha secretion significantly. The hemolysis rate was also low, indicating that CNO and CNO-COOH have blood compatibility. Neither CNO nor CNO-COOH was toxic to the E. coli intestinal bacteria. The results of this study show that CNO and CNO-COOH have excellent biocompatibility because of the low occurrence of ROS, which is believed to be the leading cause of carbon nanomaterial toxicity. CNOs are promising for future use in diverse biomedical and biomolecular engineering applications, including drug delivery, theranostics, and biosensors. In this study, conjugation of various biomolecules is made possible by functionalizing the CNO with COOH. For example, peptides, DNA, and proteins can be attached to the COOH of carbon nanomaterials through EDC-NHS crosslinkers. A fluorescent dye, metal nanoparticles, or other non-biomolecular materials are also expected to be conjugated to the COOH of carbon nanomaterials for further applications. Supplementary Materials: The following are available online at http://www.mdpi.com/2079-4991/9/7/1016/s1. Figure S1. XPS spectra of C1s (A and C) and O1s (B and D) peaks of CNO (A and B) and CNO-COOH (C and D). Figure S2. Spectral survey of (A) CNO and (B) CNO-COOH, showing increased oxygen contents. Figure S3. (A) Fluorescent microscopic images from live and dead assay for CNO and CNO-COOH with different concentrations on HDF cells after 24 h incubation; (B) microscopic images of PBMC after treatment of CNO and CNO-COOH with different concentrations and after 24 h incubation. Table S1. Atomic composition of CNO and CNO-COOH. Table S2. Components of CNO and CNO-COOH from C1s and O1s peaks.
v3-fos-license
2018-10-14T17:01:43.615Z
2018-09-09T00:00:00.000
52889342
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1155/2018/1537371", "pdf_hash": "dea8db8c8a2e14758cf91632a08275478aa8c3b0", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44918", "s2fieldsofstudy": [ "Medicine" ], "sha1": "978c4f28a19d005f06c9f6a02995d41c3db64f0a", "year": 2018 }
pes2o/s2orc
Anxiolytic and Antidepressant Effects of Maerua angolensis DC. Stem Bark Extract in Mice Introduction The stem bark extract of Maerua angolensis DC. (Capparaceae) is used as a traditional remedy for management of anxiety, psychosis, and epilepsy. Aim of the Study We therefore aimed at evaluating the anxiolytic and antidepressant potential of the plant in mice models. Methods The dried stem bark was extracted with petroleum ether/ethyl acetate (50:50) mixture to obtain the extract, MAE. We employed Irwin's test to identify the preliminary behavioral and autonomic effects. Subsequently, MAE was administered per os to male mice and subsequently assessed, 1 h later, for anxiety parameters in the elevated plus maze (EPM) and the regular Suok tests. The forced swim (FST) and tail suspension (TST) tests were employed to assess the antidepressant potential of the extract (100-1000 mg kg−1). Results In our preliminary assay, MAE (100-5000 mg/kg) exhibited analgesic effects and a reduction in fear response in the Irwin's test. The spontaneous locomotor activity was reduced at 1000 mg/kg. Additionally, MAE (1000 mg/kg) increased the latency to PTZ-induced convulsions, and duration to sleep in the pentobarbitone induced sleeping time assay. MAE (1000 mg/kg), similar to diazepam, in the anxiolytic assay, increased the percentage time spent in the open arms while decreasing protected head dips and unprotected stretch attend postures in the EPM. Correspondingly, there was a reduction in anxiety-induced immobility and freezing in the Suok test (300 mg/kg) without loss of sensorimotor coordination. Additionally, there was a significant reduction in immobility duration in the FST (300 mg/kg) and TST (1000 mg/kg). Conclusion The petroleum ether/ethyl acetate fractions of Maerua angolensis stem bark possess anxiolytic and acute antidepressant effects in mice. Introduction Depression is a significant contributor to the total economic and health burden of every country [1]. This burden is more severe in third world countries where diagnosis and medications for treatment are inadequate and relatively expensive [2]. In contrast, most current treatment regimen available have proven less efficacious at ameliorating the condition. Anxiety is usually comorbid with depression states and treatment options that assuage both conditions are associated with a higher efficacy with a correspondingly lower relapse rates [3]. This has made the search for molecules with superior pharmacological profile and possibly effective at multiple related targets important. Plants have served as a rich source of new molecules with pharmacological properties that fill an essential gap in the search for superior therapeutic agents. Local remedies, over the years, have served as a relatively cheap source of therapy and have been employed in the management of disorders such as anxiety, schizophrenia, and epilepsy. The therapeutic 2 Depression Research and Treatment claims of preparations from local herbs have over the years provided valuable clues for the direction of pharmacological investigations [4][5][6] Maerua angolensis DC. (Capparaceae) is a local plant found in various parts of West and Central Africa with a myriad of uses for neurologic disorders [7]. The root and stem bark decoction is sedating and have been used in the management of pain, epilepsy and psychosis [7][8][9][10]. Additionally, Maerua angolensis is used in traditional medicine for ameliorating anxiety associated with other disease states such as schizophrenia. 
Recent pharmacological investigations demonstrate that the plant possesses significant in vivo antioxidant [11] and anti-inflammatory [8,12] properties. Despite the plants popular use, there is sparse scientific evidence supporting its purported CNS activity. Hence, it is important to investigate the potential of Maerua angolensis extract in anxiety and the related disorder depression in order to provide some scientific evidence for the plants folkloric use. The current work assessed the anxiolytic potential of Maerua angolensis extract in the elevated plus maze, open field, and Suok tests in mice. We further explored the potential antidepressant effects of Maerua angolensis extract in the tail suspension and forced swim tests. Plant Extraction and FT-IR Analysis of Crude Extract. Maerua angolensis extract was obtained according to methods described by Benneh and colleagues [13,14]. The concentrate obtained was further dried in a hot air oven at 55 ∘ C for 72 h to obtain a green semisolid mass (∼8.5 g) which was then stored in the freezer at -40 ∘ C until use. The spectral region between 400 and 1400 cm −1 is usually considered as the unique region for every compound/ compound mixtures and hence can be used for identification and quality control. Hence, triplicate FT-IR (PerkinElmer5 UATR Two) spectra were subsequently generated for the extract. Chemicals and Drugs. Imipramine; pentylenetetrazole; caffeine; sodium pentobarbitone; Tween 80 (Sigma-Aldrich Inc., St. Louis, MO, USA), fluoxetine (Eli Lilly and Co., Indianapolis, IN, USA), diazepam (INTAS, Gujarat, India) were used. Caffeine, pentobarbitone, diazepam, and pentylenetetrazole were dissolved in distilled water before oral or intraperitoneal administration. To avoid temperatureinduced breakdown of pentylenetetrazole, the solution was constantly kept on ice throughout the experimental duration. Maerua angolensis extract (MAE), fluoxetine, and imipramine were prepared by solubilizing the fine powder with Tween 80 q.s. A maximum of 1 mL was delivered by oral gavage for per os treatment. A maximum volume of 0.2 mL was set for subcutaneous injection and 1.0 mL for intraperitoneal injection. Animals. Male ICR mice (20-25 g) were obtained from the vivarium of the Department of Pharmacology, KNUST, Kumasi, Ghana. They were housed, in groups of 5, in stainless steel cages (34 × 47 × 18 cm 3 ) with soft wood shavings as bedding and housing conditions as follows: temperature maintained at 23-25 ∘ C, relative humidity 60-70 %, and 12 h light-dark cycle. All mice had free access to water and pellet diet (GAFCO, Tema, Ghana). All experiments were compliant with NIH Guidelines for the Care and Use of Laboratory Animals. Ethical approval was obtained from the Department of Pharmacology, Animal Ethics Committee, KNUST.. [16]. The setup consisted of a 2-meter aluminum rod (diameter = 2 cm) divided into 10 equal segments and elevated 25 cm high. To avoid or reduce harm to mice falling from the rod, the base of the setup was covered with a thick layer of paper towels. Preliminary Neuropharmacological Forty-two (42) mice were allowed to acclimatize for 24 h in a dimly lit experimental room for an hour before drug treatment and testing. Mice were the randomly selected and distributed into seven groups of six (6) animals each. Animals received either MAE (30, 100, and 300 mg kg −1 , p.o.), diazepam (0.1, 0.3, and 1.0 mg kg −1 , i.p.), or 1% tween in distilled water (10 ml kg −1 , p.o.). 
One hour after oral and thirty minutes after intraperitoneal administration, mice were placed in the central region of the rod. The behaviour on the rod was captured for five (5) minutes with the aid of a camcorder mounted approximately 2 meters away from the rod. The exploratory activity and specific behaviours were then scored and analyzed with the aid of JWatcher software. Behaviours assessed included (a) duration of immobility, (b) frequency of freezing, and (c) number of leg slips. Mice that fell off the rod were returned to the position of fall and recording continued. Elevated Plus-Maze. The elevated plus-maze test was performed according methods described by to Pawlak et al. [17]. The elevated plus maze consists of two closed (30 × 5 × 1 cm 3 ) and two open arms (30 × 5 × 30 cm 3 ) with a central arena (5 × 5 cm 2 ). The maze is elevated 60 cm above the floor with the aid of a platform. Behavioural testing was performed under dim light in a noise-attenuated room. Fifty-six (56) ICR mice were randomly selected and distributed into ten groups of seven (7) animals each. Animals received either MAE (30, 100,300, and 1000 mg kg (mouse stretches forward and retracts without moving its feet). An arm entry was defined as a mouse having entered one arm of the maze with all four limbs. Behaviours were defined as protected if they occur in the closed arms or center and unprotected when they were exhibited in the open arm region of the maze. Acute Antidepressant Tests 2.5.1. Forced Swim Test. The forced swim test was carried out according to the method described by Porsolt et al. [18]. Seventy (70) ICR mice were randomly assigned to ten groups of seven animals each. After acclimatization, mice were dosed with either MAE (100, 300, and 1000 mg kg −1 , p.o.), fluoxetine (3, 10, and 30 mg kg −1 , p.o.), imipramine (10, 30, and 100 mg kg −1 p.o.), or 1% tween in distilled water (10 ml kg −1 p.o.) 60 minutes before behavioural testing. Mice were gently dropped individually into identical transparent cylindrical tanks (25 cm high and 10 cm deep) containing water (26 ±1 ∘ C) up to 20 cm for a total of 6 minutes. Each session was videotaped with a camcorder suspended above the cylinder. The duration of immobility, latency to immobility, climbing, and swimming during the last 4 minutes were quantified using JWatcher Version 1.05. After the end of each session, animals were removed from the cylinders dried with a towel and placed near a heater until they were completely dry. The latency to and duration of immobility give an indication of antidepressant-like activity. An increased latency and reduced immobility are typically exhibited by antidepressant agents. The type and duration of the escape oriented behaviours (climbing and swimming) can be used to predict the possible mechanism(s) of action to the agent been tested. Tail Suspension Test. The tail suspension test was carried out according to the method previously described by Steru et al., 1985. ICR mice were randomly assigned to ten groups of seven animals each. After acclimatization, mice were dosed with either MAE (100, 300, and 1000 mg kg −1 , p.o.), fluoxetine (3, 10, and 30 mg kg-1 , p.o.), imipramine (10, 30, and 100 mg kg −1 p.o.), or 1% tween in distilled water (10 ml kg −1 p.o.). One hour after oral dosing animals were suspended at their tail (1 cm from the tip) with an adhesive tape on a horizontal bar raised 50 cm from a tabletop. 
Behaviours exhibited by the mice were recorded for a period of 6 min and subsequently analyzed for escape-oriented behaviours such as pedaling, curling, and swinging and for immobility time, which were quantified for the last 4 min of each session. Behaviours assessed included the following: (1) immobility, a mouse was judged to be immobile when it hung by its tail without engaging in any active behaviour; (2) swinging, a mouse was judged to be swinging when it continuously moved its paws in the vertical position while keeping its body straight and/or it moved its body from side to side; (3) curling, a mouse was judged to be curling when it engaged in active twisting movements of the entire body; (4) pedaling was defined as when the animal moved its paws continuously 2.6. Statistics. Data are presented as mean ± SEM. Data were analyzed using one-way analysis of variance (ANOVA). When ANOVA was significant, multiple comparisons between treatments were done using Sidak post hoc test. Dose-response curves were constructed using iterative curve fitting with the following nonlinear regression (three-parameter logistic) equation: Y = a + (b − a)/(1 + 10^(log ED50 − X)), where X is the logarithm of dose (mg kg −1 ) and Y is the response. Y starts at a (the bottom) and goes to b (the top) with a sigmoid shape. The fitted midpoints (ED 50 ) of the curves were compared statistically using the F test with GraphPad Prism for Windows version 6.01 (GraphPad Software, San Diego, CA, USA) (an illustrative fit of this model is sketched after the Conclusions below). FT-IR Analysis of Crude Extract. The characteristic spectra ( Figure 1) in the region from 400 to 1400 cm −1 were used as a fingerprint region for subsequent comparison of future extracts. In the pentylenetetrazole-induced convulsion assay ( Figure 3), lethality was recorded before the 30th minute in the solvent control and MAE-treated groups; hence, the frequency and duration of convulsions were not assessed. A survival analysis ( Figure 4) was instead employed to reveal the degree of protection offered by the administered agents. Although the survival analysis suggested a trend in the degree of protection offered by MAE, comparison with the solvent control did not reveal a statistically significant difference. Diazepam (8 mg kg −1 ), the reference anticonvulsant, increased the latency to convulsions (P < 0.0001) while decreasing the lethality significantly. Additionally, these doses produced a significant decrease in duration of protected head dips (F(7, 46) = 4.252, P = 0.0011) and protected stretch-attend postures (F(7, 46) = 7.653, P < 0.0001) (Figures 10(a) and 10(b)). The number of protected head dips (F(7, 45) = 6.152, P < 0.0001) and protected stretch-attend postures (F(7, 46) = 4.091, P = 0.0014), as a percentage, decreased in a similar fashion as seen above (Figures 10(b), 11(b)). Figure 12 represents the effect of acute administration of MAE (100-1000 mg kg −1 p.o.), imipramine (10-100 mg kg −1 p.o.), or fluoxetine (3-30 mg kg −1 p.o.) on mice behaviours in the forced swim test. From Figure 13, it was observed that oral administration of fluoxetine, MAE, and imipramine reduced immobility time in a dose-dependent manner by a maximum (E max ) of 44.11 ± 14.29%, 71.72 ± 7.78%, and 91.47 ± 2.865%, respectively. The ED 50 values show the order of potency of the test compounds as MAE < imipramine < fluoxetine. Tail Suspension Test.
Administration of fluoxetine, MAE, and imipramine reduced immobility time in a dose-dependent manner by a maximum (E max ) of 56.07 ± 14.62%, 82.06 ± 9.35%, and 86.19 Discussion The current study demonstrates the anxiolytic and acute antidepressant effects of the petroleum/ethyl acetate extract of Maerua angolensis stem bark. Contrary to results obtained in other studies [9], the fraction we employed showed no significant antiseizure activity. The present study demonstrates that the lipophilic fraction of the stem bark extract demonstrates significant anxiolytic and antidepressant activity in male mice. These anxiolytic effects are in agreement with zebrafish anxiolytic and antidepressant studies [13]. The Irwin test evaluates the qualitative effects of test substances on the behaviour and autonomic and the physiological function of a test animal [15]. Results from such test can give an approximate onset and duration of action of different measured effects. In the Irwin test, the extract showed analgesic effect and reduction to fear and touch response at doses of 100-5000 mg kg −1 p.o. Continuous observation for 48 hours after the test revealed no physical signs of toxicity or lethality at all tested doses. This suggests that the LD 50 in mice is beyond 5000 mg kg −1 . The onset of action was increased with dose increments, with the fastest onset observed at 30 min. Based on onset and duration of the effects registered in the Irwin test, further tests were carried out 60 minutes after oral administration since lower doses (below 1000 mg kg −1 ) were adopted for subsequent tests. The activity meter test was then employed to assess, quantitatively, the spontaneous behaviour with respect to locomotion after oral administration of MAE. There was significant reduction in locomotor activity after 1000 mg kg −1 MAE administration. Locomotor activity can be reduced significantly after dosing test animals with a sedative dose of CNS depressants. Also, a reduction in locomotor activity could also be due to motor impairment induced by the test compound. Consequently, the effects observed after MAE administration can be attributed to the sedative or locomotor impairment potential of the extract. Caffeine, a CNS stimulant, on the other hand, increased whilst diazepam, a CNS depressant, decreased the locomotor activity in this test. In the Irwin test, test compounds that possess seizure induction potential can be identified by observing physical signs such as tonic and or clonic convulsions during the test. However, the test is not sensitive at identifying proconvulsant effects of compounds. Instead, the proconvulsant potential (seizure liability) is uncovered after pretreatment with chemoconvulsants such as pentylenetetrazole. Such treatments can additionally be used to screen potential anticonvulsants since compounds with anticonvulsant properties being known to reduce seizure parameters induced by these agents. In the convulsive threshold test in mice, only the highest dose of the MAE increased the latency to clonic convulsion and survival compared to the saline group. A Kaplan-Meier analysis of survival revealed no significant protection compared to vehicle control. This indicates that the extract was not effective in delaying and preventing lethality induced by pentylenetetrazole (85 mg kg −1 , s.c.). Barbiturates are general CNS depressants that induce a state of calm, sedation, and hypnosis at high doses [19]. 
In mice, acute administration of barbiturates induces a state of sleep which is indicated by a loss of righting reflex. Consequently, the sedating effects of potential CNS depressants are usually unmasked by coadministration with pentobarbitone. This sedative effect is known to be reversed by stimulants and enhanced general CNS depressants. A significant decrease in latency to sleep and increase in sleep duration after coadministration with pentobarbitone indicates that MAE possesses sedative effects, which is in agreement with results obtained from the activity meter test. Anxiety studies were performed in the mice models. The amelioration of innate anxiety induced by novel environment was explored in the elevated plus maze test and the regular Suok test. The elevated plus maze and Suok tests assess the behaviour of mice in a conflict situation. The elevated plus maze assesses the aversion to height and open spaces [20]. The Suok test, however, possesses a unique advantage of assessing the anxiety state of rodents as well as the sensorimotor coordination on an elevated horizontal rod [16,21]. In general, anxiolytics are known to increase affinity for the aversive stimuli or environment whilst anxiogenic agents are known to enhance the innate aversion to these stimuli or environment In the EPM test, vehicle-treated mice exposed to the maze made fewer entries and spent less time in the open unprotected arms compared to the closed arms, a behaviour which is consistent with literature that mice generally avoid open unprotected arenas [22,23]. Pretreatment with MAE (100-1000 mg/kg ) or diazepam (0.1-1.0 mg/kg) significantly reversed this behaviour at all tested doses suggesting an anxiolytic property of MAE. Although time spent is key indicator of the anxiety state of the test animal, its measurement provided offers a scintilla of information relating to the actual behavioural measures whilst exploring the maze. It is therefore important to assess, describe, and possibly measure the general behavioural repertoire of the animal of interest in such behavioural assays to offer more insights into the general behaviour of the test animals after treatment with drugs of interest. Over the past 25 years, several workers have developed protocols that allow a comprehensive profiling of behaviour of mice in the elevated-plus maze based on the defensive behaviours exhibited in the test. While the open arm entries as well as % open arm entries have been found to be a consistent measure of anxiety in rodents, the total headdips (HD) is a measure of exploration. Additionally, the total SAP provides an additional measure of risk assessment of the test organism. Behaviours such as freezing, stretch attend postures, and head-dips are some of the ethological parameters that can give an indication of the anxiety and behavioural state of the test mouse [24]. Mice in general tend to move freely in the closed arms with an increased tendency to freeze in the open arm of the EPM. Anxiolytics reduce this freezing behaviour while a converse occurs after giving an appropriate dose of an anxiogenic agent. Based on the above facts, it was realized that a single ethological parameter could increase or decrease depending on whether it occurs in the open or closed arms of the EPM. Hence the designation, "protected" and "unprotected", is ascribed to an ethological parameter occurring in the closed or opened arms, respectively [24,25]. 
Anxiogenic agents increase the duration and frequency of protected behaviours (head dips), with a corresponding decrease in unprotected ethological behaviours. The converse is true for agents that possess anxiolytic properties in mice. Similar to diazepam, MAE (1000 mg/kg) administration increased the number of head dips. An increase in the number of head dips is an indication of low anxiety states while a decrease indicates high anxiety states [26]. To assess the effect on sensorimotor coordination and further establish the anxiolytic potential of MAE, the regular Suok test was employed. This test combines aspects of the EPM, OFT, and beam walk tests. Behaviours such as headdips, side-looks, and frequency and duration of freezing bouts are used as endpoints for assessing of the anxiety state of mice whilst the number of falls and missteps are known to predict the degree of impairment of sensorimotor coordination. Diazepam is known to exhibit anxiolytic effects at lower doses in several test paradigms including the Suok test [16]. However, at relatively higher doses, diazepam induces a state of impaired motor coordination. This made it an ideal positive control in the Suok test which explores both behaviours. MAE (300 mg/kg ) similar to diazepam (0.1-1.0 mg/kg) reduced the number of freezing events suggestive of an anxiolytic effect since increased freezing bouts are indicative of a heightened anxiety state. The sensorimotor coordination was impaired at the highest dose of diazepam (1 mg/kg) which was reflected in the increased number of leg slips. However, the anxiolytic doses of MAE (300 mg/kg) did not affect the number of leg slips. Taken together, it is suggested that MAE exhibits anxiolytic behaviour at doses that does not affect motor coordination although further tests will be required to corroborate this evidence. Anxiety and depression are intimately linked and usually appear as comorbid states and treatment of both states positively affect the outcome of therapy [27]. Selective serotonin reuptake inhibitors are usually considered first-line treatment for patients with depression and have significant anxiolytic effects [3]. Several classes of drugs that modify serotonin (5-HT) neurotransmission have previously been explored for their possible role in depression and schizophrenia [28]. Based on the above premise, the potential antidepressant effect of MAE was assessed in two acute depression models in mice: tail suspension and forced swim test. These models work on the principle that when mice are subjected to unavoidable, inescapable stress, they assume escape oriented behaviours with intermittent moments of despair usually in the form of immobility [29]. Periods of immobility is known to model some aspects of depressive symptoms and hence most antidepressants are known to decrease the duration of immobility. Consequently, these tests have been employed in the screening of potential antidepressant drugs. In the TST, significant decrease in immobility duration was achieved after MAE (1000 mg/kg), imipramine (100 mg/kg), and fluoxetine (3-30 mg/kg) treatment suggesting antidepressant activity. Antidepressants that inhibit serotonin and/or NA reuptake decrease immobility and increase swinging behaviour of mice in the TST, a behaviour that was not significantly altered in MAE-treated mice. Opioids are known to decrease immobility whilst increasing curling behaviour [30,31]. 
Hence significant increase in the curling duration after MAE administration can be attributed to a possible interaction with the -opioid receptors. Similarly, MAE (300 mg/kg) exhibited antidepressant activity comparable to fluoxetine (10 and 30 mg/kg) and imipramine (100 mg/kg) in the FST (Figure 15). Antidepressants acting through the serotonergic system, including fluoxetine, selectively increase swimming behaviour. In addition, the FST differentiates between antidepressants that work through serotonergic mechanisms or noradrenergic mechanisms, as noradrenergic compounds selectively increase climbing behavior [32] and drugs with dual effects increased both swimming and climbing [33]. In this study, MAE caused a dose-dependent reduction in the immobility time at 300 mg/kg, increase in the swimming behaviour, and increase climbing duration at 30-100 mg/kg. This behavioural profile may suggest that the mechanism of the antidepressant-like activity of MAE may be due to an interaction with both noradrenergic and serotonergic system. Conclusions Results from this study indicate that the petroleum ether/ethyl acetate fraction of Maerua angolensis stem bark possesses anxiolytic effects in male ICR mice. MAE also possesses antidepressant effects which might be due to interaction with opioid receptors and noradrenergic and serotonergic systems.
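As flagged in the Statistics subsection, the three-parameter logistic model used for the dose-response analysis can be fitted with standard curve-fitting tools. The minimal Python sketch below uses SciPy's curve_fit; the doses and responses are invented purely for illustration and have no bearing on the ED50 values reported in this study.

# Illustrative three-parameter logistic fit; dose/response values are invented, not study data.
import numpy as np
from scipy.optimize import curve_fit

def logistic3(log_dose, bottom, top, log_ed50):
    # Y = a + (b - a) / (1 + 10**(log ED50 - X)), with X = log10(dose)
    return bottom + (top - bottom) / (1.0 + 10.0 ** (log_ed50 - log_dose))

doses = np.array([30.0, 100.0, 300.0, 1000.0, 3000.0])   # mg/kg, hypothetical
effect = np.array([5.0, 20.0, 45.0, 68.0, 78.0])         # % reduction in immobility, hypothetical

params, _ = curve_fit(logistic3, np.log10(doses), effect, p0=[0.0, 80.0, 2.5])
bottom, top, log_ed50 = params
print(f"fitted ED50 ~ {10 ** log_ed50:.0f} mg/kg (bottom {bottom:.1f}%, top {top:.1f}%)")

Comparing ED50 values between treatments, as described above, is then typically done by testing a shared-ED50 fit against separate fits with an extra-sum-of-squares F test, which is how such comparisons are usually implemented in GraphPad Prism.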
v3-fos-license
2022-07-24T04:20:01.165Z
0001-01-01T00:00:00.000
237418847
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2021.710332/pdf", "pdf_hash": "a9259b3eb1e7a19791ca9f6a313607806835720f", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44919", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "sha1": "684550f6a965ab4cd49448a53462831b2ef8414f", "year": 2021 }
pes2o/s2orc
ScholarWorks @ UTRGV ScholarWorks @ UTRGV Frontotemporal Dementias in Latin America: History, Frontotemporal Dementias in Latin America: History, Epidemiology, Genetics, and Clinical Research Epidemiology, Genetics, and Clinical Research has focused on clinical and neuropsychological features ( n = 247), including the local adaptation of neuropsychological and behavioral assessment batteries. However, there are little to no large studies on prevalence ( n = 4), biomarkers ( n = 9), or neuropathology ( n = 3) of FTD. Conclusions: Future FTD studies will be required in Latin America, albeit with a greater emphasis on clinical diagnosis, genetics, biomarkers, and neuropathological studies. Regional and country-level efforts should seek better estimations of the prevalence, incidence, and economic impact of FTD syndromes. has focused on clinical and neuropsychological features (n = 247), including the local adaptation of neuropsychological and behavioral assessment batteries. However, there are little to no large studies on prevalence (n = 4), biomarkers (n = 9), or neuropathology (n = 3) of FTD. INTRODUCTION Frontotemporal lobar degeneration (FTLD) is a neuropathological designation used to identify a group of neurodegenerative diseases of the frontal and anterior temporal lobes, typically associated with specific pathologies (1). In most cases, FTLD features pathological inclusions of either the microtubule-associated protein tau (MAPT) or the transactive response DNA-binding protein of 43 kDa (TDP-43), named FTLD-tau and FTLD-TDP, respectively (2). TDP-43 is the major pathological protein deposited in FTLD and amyotrophic lateral sclerosis (ALS) (3)(4)(5). FTLD can be sporadic or hereditary, the latter most commonly due to mutations in several genes, such as MAPT, progranulin (GRN), TARDBP, or chromosome 9 open reading frame 72 (C9orf72) expansion. The core clinical syndromes associated with FTLD are behavioral or language symptoms and are generally called frontotemporal dementia (FTD). There are three main clinical variants distinguished by early and predominant symptoms: behavior variant frontotemporal dementia (bvFTD); semantic variant primary progressive aphasia (svPPA); and non-fluent variant primary progressive aphasia (nfvPPA) (6) bvFTD accounts for roughly 60% of FTD cases, and the other 40% are language variants of FTD (7). Related FTD disorders include frontotemporal dementia with motor neuron disease (FTD-MND), progressive supranuclear palsy syndrome (PSP-S), and corticobasal syndrome (CBS). FTD is the second most common dementia disorder in individuals under the age of 65 years old and accounts for 5-10% of dementia patients older than 65 years (3,4). In the US, the total number of cases with FTD syndromes range from 15 to 22 per 100,000 people in the US (8,9) with ∼20,000 to 30,000 persons living with FTD (9). The incidence of FTD is estimated to be 1.61 to 4.1 cases per 100,000 people annually (8,9). FTD is likely underdiagnosed due to the relatively low recognition within the medical community, little disease awareness in the population, and the overlap with a multitude of psychiatric disorders (10)(11)(12)(13). Therefore, prevalence studies on bvFTD and the other FTD syndromes are challenging because many cases are misclassified, as the disease is largely unrecognized (7,9). The frequency and correlates of the impact of FTD are less clear in Latin American countries. 
Although there is a growing number of dementia studies in Latin America, little is known collectively about FTD studies by country, its clinical heterogeneity, risk factors, and genetics in Latin American countries. Therefore, we aimed to systematically review FTD studies reported in Latin America. This systematic review offers an overview of the history and evolution of FTD in Latin America and reports on FTD prevalence and clinical and neuropsychological syndromes. This is followed by a review of the biomarkers, neuropathology, and genetic studies in the region. METHODS A systematic review was completed at identifying and describing the frequency, clinical heterogeneity, and research studies on FTDs in Latin American populations. The search strategy was developed with assistance from a research committee formed by a medical librarian, representatives from multiple Latin American countries (local dementia experts and clinical researchers), and other stakeholders with expertise in FTD. The research committee provided feedback and guidance on the proposed search strategies, selection criteria, and data analysis approach. The published literature was searched using strategies designed by a medical librarian for the concepts of FTD, Latin American countries, and related synonyms. These strategies were created using a combination of controlled vocabulary terms and keywords and were executed in Medline (Ovid) 1946-, Embase.com 1947-, Scopus 1823-, PsycInfo, Cochrane Library (including CENTRAL), LILACS 1982-, and SciElo.org. No filters or limits were applied to the search. All searches were completed on September 14, 2020. Full search strategies are provided in the Supplementary Material. A total of 483 results were retrieved from the literature search and imported into Endnote. Dementia experts and clinical researchers from Latin America (at least one per country) were asked to provide information on FTD publications in the Latin American region, yielding 213 records through hand-searching. A total of 696 citations retrieved by these methods (literature search + dementia experts reports) were compiled and screened for duplicates. Duplicate citations (n = 272) were accurately identified and removed for a total of 424 unique citations. After removing duplicates all citations (n = 424) were screened for appropriateness against the inclusion and exclusion criteria. Studies were included if they reported on (1) clinical features of FTD and (2) reports from populations living in Latin American countries. Reports describing non-FTD studies were excluded from this study. Studies published by Latin American authors but that did not include Latin American participants, as well as studies of Hispanics not living in Latin American countries, were also excluded. Studies that were done in collaboration (regional or international) were included if they involved Latin American participants. Poster presentations and meetings abstracts were excluded, except in areas where its relevance was sought to contribute to the understanding of FTD in Latin America (e.g., genetics and prevalence studies). After the abstract screening phase, studies that met the inclusion criteria (n = 398) underwent full-text assessment for eligibility (second screening stage) and were selected based on their relevance. Three hundred and twenty-two (322) peer-reviewed publications were selected for the final analysis (Figure 1). 
At least one author per Latin American country summarized the FTD literature that was found in their country; collaborative or regional studies were reviewed during consensus meetings. From each research study, information on sociodemographic characteristics, country report, and genetics was extracted. Information on clinical features (age at onset, age of death, disease duration, clinical presentation, atypical manifestations, and neurological findings) was obtained when available. We considered each symptom or sign as present or absent when clearly stated in the reports. A group composed of three of the authors (MIB, JL, and RN) received all the comments and classified the FTD reports from Latin American countries according to publication date (before 2000 or after 2000), epidemiology, clinical presentation, genetics, and neuropathology. This study was reported according to the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines (14). Most of the research has focused on clinical and neuropsychological features (n = 247) (Table 1), including the local adaptation of neuropsychological and behavioral assessment batteries. However, there are little to no large studies on prevalence (n = 4), biomarkers (n = 9), or genetics (n = 36) of FTD (Figure 3). Table 1 notes: FTD, frontotemporal dementia; bvFTD, behavioral variant FTD; ALS, amyotrophic lateral sclerosis; CBS/PSPS, corticobasal syndrome/progressive supranuclear palsy syndrome; PPA, primary progressive aphasia. *Two additional epidemiological abstracts are discussed but not included in this table. **Seventeen additional abstracts are discussed but not included in this table (eight from Brazil, five from Argentina, one from Colombia, one from Cuba, one from the Dominican Republic, and one from Puerto Rico). There are 22 regional collaborative studies (nine between Argentina, Chile, and Colombia; seven between Argentina and Colombia; two between Argentina and Chile; two between Argentina and Perú; one between Brazil and Chile; and one between Cuba, Uruguay, and Ireland). Collaborative studies were assigned to the country of the first author or to the nationality of the patients included. Evolution of FTD in Latin America (Twentieth Century) The first Latin American publication of bvFTD associated with ALS was reported by Tretiakoff and Amorim in 1924 (15). The case report described a young woman with absolute indifference, complete absence of affective feelings, and severe impairment of memory, which were followed by motor neuron signs of ALS. The neuropathological examination of the case described evidence of ALS but no signs of other dementia-causing pathology in the brain. The authors hypothesized that dementia was part of ALS and recommended the search for signs of involvement of motor neurons in dementia cases, a practice currently accepted in the clinical workup of FTD cases. In 1987, Nitrini et al. described three patients with progressive supranuclear palsy (PSP) in Brazil who presented with elementary motor perseveration before the appearance of any other distinctive features of the disease (16). The authors suggested that motor perseveration was an important sign for early diagnosis and a key element for the clinical characterization of PSP. In 1989, Oliveira et al. reported a patient with difficulties in comprehension of written texts that were followed by other language disturbances and dementia (17). 
In 1992, Trevisol-Bittencourt, also from Brazil, reported a case of PSP with dementia and highlighted diagnostics challenges due to the presence of both "subcortical dementia" and frontal lobe syndrome (18). In 1994, Donoso et al., from Chile, reported six cases of degenerative dementia with frontal or frontotemporal hypoperfusion on SPECT (19). Five cases were classified as "frontal progressive dementia, " whereas one patient had progressive aphasia. In the same year, Leiguarda et al., from Argentina, in collaboration with the Institute of Neurology of the University College of London, published a description on apraxia and corticobasal degeneration, followed by a relevant contribution to the knowledge of apraxia (20)(21)(22)(23). Several case descriptions populated the regional literature from 1995 to the 2000s. In a publication on the diagnosis of 100 patients evaluated in an outpatient memory clinic in Brazil, Nitrini et al. (24), reported two cases classified as frontal lobe dementia. In Delgado et al. (25) from Brazil, reported a non-fluent PPA with MRI revealing atrophy on the left perisylvian fissure region. In 1998, three patients with neuropathologically confirmed FTD with motor neuron disease who manifested hallucinations were reported, and a hypothesis about the occurrence of hallucination in dementia associated with MND was proposed by Nitrini and Rosemberg (26). Caixeta and Nitrini described the clinical features of 10 Brazilian patients with FTD, searching for qualitative and quantitative behavioral changes. Disinhibition predominated in six patients, apathy in four, while all patients manifested repetitive behaviors (27). In 1998 Allegri et al. (28) compared the cognitive profile of 12 Argentinian patients with bvFTD and 20 patients with probable Alzheimer's disease, showing that FTD patients scored significantly better than AD patients in memory tests, calculations, visuospatial abilities, and the naming test. AD patients performed better on executive tasks. A clinical and pathological report of a case of FTD associated with ALS was published by De Brito-Marques and De Mello (29), describing neuropathological findings similar to those described by Gustafson (30). In Doval and Gaviria (31), from Venezuela, published a review on FTD emphasizing their opinion that FTD was not a new clinical entity but a redefinition of the classical Pick's disease, an opinion that reflected the central concept on dementia diagnosis during most of the twentieth century in Latin America and most of the Western countries (32,33). Finally, a Chilean and an Uruguayan investigator participated in the development of the Frontal Assessment Battery (FAB) test (34). After these early papers, the number of scientific publications increased exponentially (Figure 2). Clinical Presentation and Neuropsychology of FTD in Latin America In the decade between 2000 and 2010, most of the publications described clinical, neuropsychological features, and structural imaging of FTD cases ( Table 1). In addition, several authors have raised concerns about the difficulties and under-diagnosis of FTD and related disorders in Latin American countries (35)(36)(37)(38). FTD Prevalence Estimates in Latin America There are few studies on the prevalence of FTD in Latin American countries. In a systematic review, Custodio et al. (39) described FTD prevalence in three Latin American countries [Venezuela (40), Perú (41), Brazil (42,43)] ranging from 1.2 to 1.7 per 1,000. 
In a population-based study in an area of Maracaibo, Venezuela, in subjects older than 54 years, the prevalence of allcause dementia was 8.04%, while the prevalence of FTD was 1.5% (44). There are also two studies presented at International conferences: one population study from Mexico with 2003 participants estimated a prevalence of FTD of 0.9%, and another 5-year population study with nearly 3,000 participants from Habana Cuba found a prevalence of FTD of 1.1% (45). Other studies report the frequency of FTD within dementia cohorts in memory clinics. One study in Brazil reported a 3.5% frequency of FTD in 261 dementia cases assessed between 1989 and 1998, using the Brun criteria (46). Two studies from Memory clinics in Colombia report an FTD frequency between 11.5 and 12.9% (47,48). Finally, one study in a memory clinic in Santiago, Chile, found 57 FTD patients among 3,700 dementia patients assessed between 1981 and 2008, using the Neary et al. (3) criteria in a memory clinic in Santiago (1.5%) (49). FTD Clinical and Neuropsychology Studies in Latin America The majority of the publications in Latin America (n = 247) describe the clinical features of FTD. Brazil has the largest number of publications on the clinical and neuropsychological characteristics of FTD. Also, there are case reports of late-onset (>85) bvFTD (50). It is also interesting to mention a paper on long-term severe mental disorders preceding bvFTD in a Brazilian cohort (51). The relationship between FTD and creativity and theory of mind has also been explored (68)(69)(70). Recent papers also report the use of automated computational approaches and machine learning to aid in the diagnosis of FTD (71, 72). Taragano et al. (73)(74)(75) published several papers on mild behavioral impairment and Tabernero et al. (76,77) published papers on facial emotion recognition. There are several publications related to the validation of tests in Spanish and Brazilian Portuguese (78)(79)(80)(81)(82)(83)(84)(85). It is also important to mention that a group at the Institute of Cognitive Neurology (INECO) in Argentina developed the INECO Frontal Screening (IFS) as a brief, sensitive, and specific tool to assess executive functions in dementia (86). This test has also been validated in Chile (83), Perú (87), and Brazil (88). Genetics of FTD in Latin America The genetics of FTD syndromes in Latin America remains understudied, with no FTD large genetic studies aimed at identifying novel or functional rare variants in the region. However, there are family reports from various countries, including Brazil (91,92), Argentina (93), Uruguay (94), Cuba (95), Chile (96), and Caribbean origin families (97) (Figure 4). Families carrying C9ORF72 have been described in Chile (96), Cuba (95), Brazil (98,99), and Argentina (100,101), presenting with a significant phenotypic heterogeneity (ALS vs. bvFTD vs. bvFTD-MND). Families featuring GRN pathogenic variants have been described in Brazil (91,92), Uruguay (94), Argentina (102), and the Caribbean (97). MAPT mutations have only been reported in Brazilian (103), and Argentinian (104), families, while TARDBP mutations have only been reported in Brazil. A missense mutation (R93C) in the valosin-containing protein (gene) was also described in a Brazilian family presenting with progressive myopathy together with clinical and cognitive features of FTD (105). The study of other genetic factors related to FTD is also limited in Latin America (95,106). 
FTD Biomarkers and Neuropathology in Latin America We found relatively few reports with extensive documentation on neuropathology, biomarker profiles, and disease progression in Latin American populations, making genotype-phenotype correlations difficult in the region. Although the use of dementia biomarkers is not widespread across Latin American countries, studies using biomarkers in FTD cohorts are available in Argentina (108), Brazil (109), and Uruguay (110). Neuroimaging studies in Latin American populations mainly describe structural findings consistent with the atrophy patterns reported in FTD studies from high-income countries. Neuropathological reports were scarce and only available in Brazilian cohorts (107,111,112). Primary Progressive Aphasia In our review, we found a relatively low number of PPA reports in Latin America, with two reports before 2,000, 13 between 2000 and 2010, and 42 from 2011 to 2020. Similar to the findings in bvFTD, Brazil has the greatest number of publications in Latin America (36 vs. 63, respectively). There are PPA studies reported in Argentina (n = 12), Chile (n = 1), Colombia (n = 4), Peru (n = 2), Cuba (n = 1), Mexico (n = 1), and three collaborative studies: Argentina/Chile/Australia (n = 1), Argentina/Chile/Colombia, and Australia (n = 2). Some of these manuscripts have already been cited in the previous sections. According to the available reports, the frequency of PPA syndromes is low. Diagnostic classification also varies within PPA cohorts and country reports. (116). Clinical presentations as "psychiatric disorders" have also been reported (117). Hosogi Senaha et al. (118) published the case study of a SD patient without surface dyslexia, a sign usually found in most of the SD cases to date. Similarly, in 2012, Wilson and Martínez-Cuitiño (119) reported a Spanish-speaking SD case similar to the Brazilian case. Both studies raise awareness about the possible absence of surface dyslexia in Spanish and Portuguese speakers presenting with SD, probably related to the relatively transparent orthographies of both languages. It is worth noting that both patients were able to read non-words, regular and irregular words, and foreign words correctly but with difficulties in written comprehension. In both studies, the authors associated patients' performance-reading of irregular and foreign words without meaning-with the use of the direct lexical reading process. To the best of our knowledge, there are no large neuropathology reports on PPA cohorts. Most of the reports are based on case experiences. de Brito-Marques et al. (2011) reported a nfvPPA longitudinal case study with histopathologic analysis (120). Strategies for languages rehabilitation in PPA has been reported from single or multiple case studies in Brazil and Mexico (121)(122)(123)(124). FTD and Motor Neuron Disease Frontotemporal dementia and motor neuron disease (FTD-MND) has been recognized as overlapping multisystem disorders (125). In this section, we focus our review on Latin American studies describing the overlap between the two conditions. Studies describing amyotrophic lateral sclerosis (ALS) cohorts without assessments of cognitive measures were excluded from this review. As mentioned above, reports of cases combining the clinical picture of MND with mental symptoms, personality change, or dementia in Latin America date back to 1924 (15). 
Most of the reports on FTD/MND in Latin America are case reports, including a wide range of cognitive presentations combined with different MND syndromes, including ALS (29,(126)(127)(128) and primary lateral sclerosis (PLS) (129). There is a relative lack of large studies describing the overlap between the two conditions in Latin America, which might be related to the scarcity of adequate cognitive screening methods suitable for Spanish-and Portuguese-speaking populations with low education. To the best of our knowledge, there are only two cohorts studies exploring cognitive and behavioral presentations overlapping with MND/ALS (130,131). Recent efforts in the region, especially in Brazil, are on the way aimed to validate and implement adequate and more systematic cognitive screening methods in Dementia/ALS cohorts. DISCUSSION The first publications of Latin American authors in the twentieth century were mostly case reports or small series of patients in which the clinical features were described. There were also a few papers with deeper reasoning on apraxia in several movement disorders and on frontal type of disinhibition in PSP. In the last two decades, most of the papers report on clinical and neuropsychological features of FTLD. Case descriptions, translations, and adaptations of neuropsychological and behavioral tests were the predominant publications by Latin American authors. Argentina has contributed with several interesting publications on social cognition and decision making. Although there were only a few reports on FTD prevalence in the region, the reported prevalence is relatively low compared to North America and Europe. Nevertheless, future studies will be needed to determine whether this is true or a reflection that the disease is still underrecognized in Latin American counties. Available data from surveys suggest that FLD is not recognized by families and general physicians (35)(36)(37)(38). There are fewer studies published in Latin America related to the language variants of FTLD in comparison to the number of studies related to the bvFTD. Studies on PPA have increased substantially during recent years and also advanced from case reports to case series and, more recently, to rehabilitation initiatives. However, more sensitive methods to detect language variants are needed, especially as the classical testing methods used for English speakers cannot apply to Spanish or Portuguese speakers. Similarly, there is a relative lack of large studies describing the overlap between FTD/MND in Latin America or exploring the cognitive and behavioral manifestations in MND/ALS, which may be related to the scarcity of adequate cognitive screening methods suitable for Spanish-and Portuguesespeaking populations with low education. Two instruments, that provide adequate cognitive screening methods suitable for Spanish and Portuguese-speaking populations with low education, have been recently validated and are expected to improve studies in this area. Only a few neuropathological studies on FTLD have been published, and all of them are from Brazil. The relatively low number of neuropathology studies might be related to lack of resources; brain donation protocols require the existence of brain banks and trained personnel, which are scarce in the region. 
Overall, most of the FTD studies are concentrated in a few countries (Brazil, Argentina, Colombia, and Chile), with only a few collaborative studies between Latin American countries and between Latin American countries and more developed centers in North America and Europe. Collaboration may represent an alternative to achieve better results and more robust studies in a region where research resources and funding are scarce. Genetics is another area where future studies will be required. Much of the population of Latin American countries is a mixture of native American, European, African, and some Asian immigration. Therefore, it is expected to find similar mutations to those already described in the literature. In addition, the existence of novel mutations in the native American populations and the effect of admixture in gene expression, disease onset, and clinical heterogeneity should be further studied. This systematic review also found several relevant conference abstracts with large series of cases but, unfortunately, they did not end up in peer-reviewed publications. This may be explained by a lack of privileged time and grants to perform research in Latin American countries, as well as difficulties in reaching publications in a foreign language. Although there has been improvement in the last few years, academic and governmental institutions in Latin America should implement protected time for their researchers aimed to facilitate research dissemination. Public and private funds should be directed toward research grants that will improve the research and consistency of reports coming from Latin American researchers. CONCLUSIONS The analysis of the history of FTLD research in Latin America shows that there are several gaps in knowledge that remain to be explored and activities to be developed by the community. Based on our findings, we believe research on epidemiology and genetics of FTD in Latin America should be priorities. Several studies show that general physicians, neurologists, psychiatrists, and the lay public are unaware of these diseases. More collaborative studies are needed, both between Latin American countries and with developed centers in HIC, mainly on genetics and biomarkers. The interchange of undergraduate, graduate, and post-graduate students and academic professors between research centers in Latin America with those in the developed world has already started, and this is likely to change the history of FTD in Latin America. The recent formation of the Latin America network (RedLat) to study FTLD is tasked to increase these collaborations. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s. AUTHOR CONTRIBUTIONS JL-G, MB, and RN: study concept and design, acquisition, analysis, interpretation of data, and drafting of the manuscript. JL-G: project administration. RN: study supervision. All authors critical revision of the manuscript for important intellectual content, had full access to all the data in the study, and take responsibility for the integrity of the data and the accuracy of the data analysis. FUNDING MB has received funding from FONDECYT: 1190958, RN has received funds from CNPq.
v3-fos-license
2023-07-11T16:24:40.529Z
2023-07-04T00:00:00.000
259641546
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnhum.2023.1176001/pdf", "pdf_hash": "d99173c316eb182ccfc36125a11b298d7a3f9e18", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44921", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "sha1": "607abc6f26a4aef252960e41c559e61df8422f34", "year": 2023 }
pes2o/s2orc
A diagnostic model of nerve root compression localization in lower lumbar disc herniation based on random forest algorithm and surface electromyography Objective This study aimed to investigate the muscle activation of patients with lumbar disc herniation (LDH) during walking by surface electromyography (SEMG) and establish a diagnostic model based on SEMG parameters using random forest (RF) algorithm for localization diagnosis of compressed nerve root in LDH patients. Methods Fifty-eight patients with LDH and thirty healthy subjects were recruited. The SEMG of tibialis anterior (TA) and lateral gastrocnemius (LG) were collected bilaterally during walking. The peak root mean square (RMS-peak), RMS-peak time, mean power frequency (MPF), and median frequency (MF) were analyzed. A diagnostic model based on SEMG parameters using RF algorithm was established to locate compressed nerve root, and repeated reservation experiments were conducted for verification. The study evaluated the diagnostic efficiency of the model using accuracy, precision, recall rate, F1-score, Kappa value, and area under the receiver operating characteristic (ROC) curve. Results The results showed that delayed activation of TA and decreased activation of LG were observed in the L5 group, while decreased activation of LG and earlier activation of LG were observed in the S1 group. The RF model based on eight SEMG parameters showed an average accuracy of 84%, with an area under the ROC curve of 0.93. The RMS peak time of TA was identified as the most important SEMG parameter. Conclusion These findings suggest that the RF model can assist in the localization diagnosis of compressed nerve roots in LDH patients, and the SEMG parameters can provide further references for optimizing the diagnosis model in the future. Introduction Lumbar disc herniation (LDH) is a common cause of low back pain and lower limb neuralgia (Furlan et al., 2009). The highest incidence of LDH occurs at the L4/5 and L5/S1 levels, with most patients experiencing radiculopathy involving a single nerve root, typically the L5 or S1 root (Al-Khawaja et al., 2016). At present, clinical symptoms, physical examination and imaging findings are usually combined to determine the diagnosis of LDH and the corresponding nerve root compression. However, LDH is a disease in which physical examination, symptoms and imaging findings are not always reliable or correlated. Previous studies have found that the ability to diagnose radiculopathy caused by LDH is not ideal either by physical examination alone or by isolated imaging findings. Magnetic resonance imaging (MRI) is the gold standard to evaluate the structural relationship between intervertebral disc, surrounding soft tissue and nerve tissue (Li et al., 2015). However, MRI is usually performed in the resting state, which cannot monitor and evaluate the compression and functional states during movement. Several studies have found that many people without neurological symptoms exhibit positive MRI signs (Brinjikji et al., 2015a), and the accuracy of MRI in the diagnosis of compressed nerve roots is also lower (Lee and Lee, 2012;Brinjikji et al., 2015b;de Schepper et al., 2016). Therefore, there is a need to explore other means to support the localization diagnosis of LDH from a functional perspective. Surface electromyography (SEMG) is a prevalent tool for functional assessment, enabling real-time, quantitative evaluation, and analysis of an individual's dynamic neuromuscular function (Wakeling, 2009). 
Prior research has illuminated the neuromuscular function alterations following LDH from various perspectives. LDH patients typically exhibit diminished muscle strength and endurance, weakness in certain lower limb muscles, and fatigue in lumbar muscles (Dedering, 2012;Supuk et al., 2014;Djordjevic et al., 2015). Moreover, studies have posited that LDH patients with differing nerve root compressions display distinct EMG characteristics . Our study aimed to delve further into the application of SEMG parameters in diagnosing LDH patients. However, additional exploration is required to analyze synchronous gait and SEMG changes, summarize muscle activation patterns, and more accurately classify and identify these patterns in patients with different nerve root compression. In this study, we concentrated on the tibialis anterior (TA) and lateral gastrocnemius (LG) muscles, because these muscles exhibited significant alterations in their SEMG characteristics in LDH patients with L5 and S1 nerve root compression. In instances of L5 nerve root compression, neurological control disorders are primarily observed in the TA. Conversely, in cases of S1 nerve root compression, these disorders are predominantly exhibited by the LG . These observations suggest that when a specific nerve root is compressed, the functional state of the muscles primarily innervated by that nerve root changes. Machine learning, a novel data processing method, can extract valuable information from vast amounts of data through learning and training, and construct effective prediction models. Random forest (RF) is a potent method that has found extensive application in the medical field. Machine learning has been employed in gait recognition, robot rehabilitation, motion control, among other fields (He et al., 2020;Shi et al., 2020). Therefore, our aim is to analyze the SEMG characteristics of LDH patients with different compressed nerve roots, summarize muscle activation regularity, establish an RF diagnostic model, and verify its diagnostic efficiency. By doing so, we aspire to provide a fresh approach for the localization diagnosis of LDH. Participants A total of 58 patients with LDH scheduled for lumbar decompression were recruited for this study from the Beijing Rehabilitation Hospital, Capital Medical University in Beijing, China. There were 29 patients with L4/5 herniation combined with L5 nerve root compression (L5 group), and 29 patients with L5/S1 herniation combined with S1 nerve root compression (S1 group). Thirty healthy adults (Healthy group) without previous neurological or musculoskeletal diseases or surgery were recruited as a control group. The sample size was preliminary estimated using G * Power 3.1 software. 1 Based on the results of a pilot study, the effect size was set at 1.06, the significance level at two-tailed α = 0.05, and the statistical power at 0.95, which indicated that a sample size of 24 was required. Considering the possible dropout rates and other uncertainties, 30 participants were planned for each group. However, one participant in each patient group was unable to complete the experiment due to personal reasons. Inclusion criteria for patients with LDH: (1) patients with a confirmed diagnosis of LDH with sciatic radicular pain; (2) herniated disc segments requiring MRI and surgical confirmation; (3) compressed nerve roots limited to L5 or S1 nerve roots; (4) indications for surgery and the need for surgical treatment; and (5) no contraindication to neurophysiology and can undergo SEMG. 
LDH patients with the following symptoms were excluded: (1) pacemaker or any other metal implant in the body; (2) related or other peripheral nerve diseases and abnormal motor fiber conduction; (3) spastic paralysis or other muscle diseases of lower limb muscles, such as cerebral palsy or muscular dystrophy; (4) previous history of spinal surgery; (5) clinical manifestations of lumbar spinal stenosis; and (6) combined with other serious diseases, such as severe cardiopulmonary disease, defined as a condition that requires continuous oxygen therapy or hospitalization for respiratory failure. Healthy controls with the following symptoms were excluded: (1) abnormal gait due to congenital skeletal deformity or neurological disorders, such as cerebral palsy or multiple sclerosis; (2) lower extremity degenerative diseases and clinical symptoms, such as osteoarthritis or peripheral arterial disease, which affect walking function; (3) pregnant or perinatal women; and (4) suffering from other diseases that affect walking and daily activities, such as severe heart failure or end-stage renal disease. Visual Analogue Scale (VAS) Pain scores and Japanese Orthopaedic Association scores (JOA) were obtained for all patients and the general characteristics of all subjects are shown in Table 1. Although MRI has some diagnostic limitations and does not guarantee 100% diagnostic accuracy, in current clinical practice it is still the primary basis for the localized diagnosis of nerve root compression in LDH. In the present study, patients were initially enrolled by physician examination, special examination, clinical symptoms, and MRI diagnosis, subjected to SEMG testing and further confirmed by surgery (Gurdjian et al., 1961;Al Nezari et al., 2013). MRI of a typical patient with nerve root compression is shown in Figure 1. Figure 2D). The electrodes were placed on the surface of the muscle belly. Its sampling frequency is up to 2,000 Hz, transmission range is 20 m, and it can detect up to 16 muscles at the same time. The SEMG signal was synchronized with an 8-camera 3D motion capture system (Vicon, Oxford, UK) and two embedded force platforms (AMTI, Watertown, MA, USA) to divide gait cycles ( Figure 2C). The gait cycle was defined using a heel strike frame on the force platform. The Vicon system and force measurement platform had sampling frequencies of 100 and 1,000 Hz, respectively. The Vicon system used a Plug-in gait model with 16 markers to define the body segments. Measurement methods The tests in this study were conducted in a dedicated room with a clean, distraction-free environment. Participants wore closefitting, non-black, and non-reflective clothing to minimize capture errors. Prior to testing, the skin of the bilateral TA and LG muscles was cleaned and prepared. According to the guidelines established by the Surface ElectroMyoGraphy for the Non-Invasive Assessment of Muscles (SENIAM) project, 2 the collection electrodes of the Delsys wireless dynamic EMG tester were strategically positioned on the most prominent portions of TA and LG muscles on both sides (Figures 2A, B). The EMG signals were filtered using a bandpass filter with a range of 20-500 Hz. Subjects walked at their own comfortable speed until at least six successful trials were captured (excluding the initial acceleration and deceleration phases of the assessment). 
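The acquisition and pre-processing settings described above (EMG sampled at up to 2,000 Hz, a 20-500 Hz band-pass filter, and gait cycles delimited by heel strikes on the force platform) can be illustrated with a short pre-processing sketch in Python. This is a minimal illustration rather than the authors' own code: the filter order and the use of zero-phase filtering are not stated in the text and are assumed here, and the heel-strike times and synthetic signal are hypothetical placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS_EMG = 2000.0  # Hz, sampling frequency of the wireless EMG system described above

def bandpass_emg(raw_emg, low_hz=20.0, high_hz=500.0, order=4, fs=FS_EMG):
    """Zero-phase Butterworth band-pass filter (20-500 Hz) for one EMG channel."""
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="bandpass")
    return filtfilt(b, a, raw_emg)

def split_gait_cycles(emg, heel_strike_times_s, fs=FS_EMG):
    """Cut a filtered EMG trace into gait cycles (heel strike to heel strike)."""
    idx = (np.asarray(heel_strike_times_s) * fs).astype(int)
    return [emg[i0:i1] for i0, i1 in zip(idx[:-1], idx[1:])]

# Example with synthetic data: 10 s of noise standing in for a raw EMG trace and
# hypothetical heel-strike times at roughly 1.1 s intervals.
raw = np.random.randn(int(10 * FS_EMG))
filtered = bandpass_emg(raw)
cycles = split_gait_cycles(filtered, heel_strike_times_s=np.arange(0.5, 9.5, 1.1))
print(len(cycles), "gait cycles extracted")
```

In practice the heel-strike times would come from the synchronized force-platform data rather than being supplied by hand, as in this toy example.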
Parameters and data analysis At the end of the test, the synchronization results of the Vicon and DELSYS data were imported into Python software for data processing and analysis. The SEMG parameters included in this study are as follows: (1) Time domain parameters: root mean square peak (RMS-peak) and RMS-peak time (i.e., the onset of the RMSpeak within the gait cycle). The RMS-peak represents the muscle force exerted during exercise; the RMS-peak time reflects the time of muscle activation during the gait cycle (Merletti et al., 2008). The specific data were processed as follows: RMS values during each gait cycle were calculated using a 30 ms window and a step length of 20 ms (Merletti et al., 2009). After normalizing the time according to the gait cycle, the average RMS value of the six gait cycles is calculated and the distribution of the RMS values is plotted, and then the RMS-peak and RMS-peak times (as a percentage of the gait cycle) are calculated. (2) Frequency domain parameters: the mean power frequency (MPF) and median frequency (MF) were chosen to reflect mainly the degree of muscle fatigue (Molinari et al., 2006). The specific data processing methods are as follows. First, the fast Fourier transform of the SEMG signal was performed to calculate the MPF and MF for each gait cycle, and then the average MPF and MF for the six gait cycles were calculated. Establishment of RF diagnosis model 2.3.1. Method of model establishment To establish the diagnostic model, we employed the RF algorithm using Python 3.7 scikit learn. To mitigate individual differences, we selected parameters that showed significant differences between the two lower limbs. Specifically, we considered the absolute value of the difference between the parameters of healthy controls and the difference between symptomatic and asymptomatic sides of LDH patients. Process of model establishment (1) Architecture of input and output layers: the RF model used in this study comprises eight input parameters and three output layers: no compression, L5 nerve root compression, and S1 nerve root compression. The input parameters consist of the SEMG parameters of TA and LG, namely RMS-peak, RMSpeak time, MPF, and MF. (2) Training parameter settings: (1) Sample size setting: we selected 88 subjects, with 50% of them being allocated to the training set and the remaining 50% to the prediction set. (2) Superparameter setting: we selected n_Estimators, which refers to the number of sub-datasets generated by bootstrapping the original dataset, and set it as 50 in this study. (3) During the training process, all data were used in each round. We set the stopping criteria based on two situations: firstly, when the required accuracy is achieved (RMS error reaches 0.005), the training is stopped. Secondly, when the training process fails to achieve the required accuracy, we stop the training until the maximum number of iterations, which is set to 1,500 times, is reached. Model validation During the experiments, the accuracy, precision, recall rate, F1score, and Kappa values were calculated 10 times using the repeated reservation experiment principle. Additionally, the area under the receiver operating characteristic (ROC) curve was utilized to evaluate the efficiency of the diagnosis model. Establishment of the final model In order to ensure the reliability of the diagnosis results, the testing procedure of the RF diagnosis model was repeated up to 10 times. 
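A minimal end-to-end sketch of the pipeline described above is given below, using Python and scikit-learn as named in the text. The feature computations follow the stated definitions (30 ms RMS window with a 20 ms step, RMS-peak time expressed as a percentage of the gait cycle, and MPF/MF from the FFT power spectrum), and the classifier uses n_estimators = 50 with a 50/50 train/prediction split. All function and variable names, the synthetic stand-in data, the macro-averaged F1, and the one-vs-rest AUC are illustrative assumptions, and the RMS-error and maximum-iteration stopping criteria mentioned above have no direct scikit-learn counterpart, so they are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, f1_score, cohen_kappa_score,
                             roc_auc_score)

FS = 2000.0  # Hz, EMG sampling frequency reported above

def rms_curve(cycle_emg, fs=FS, win_s=0.030, step_s=0.020):
    """Sliding-window RMS over one gait cycle (30 ms window, 20 ms step)."""
    win, step = int(win_s * fs), int(step_s * fs)
    starts = range(0, max(len(cycle_emg) - win, 1), step)
    return np.array([np.sqrt(np.mean(cycle_emg[s:s + win] ** 2)) for s in starts])

def time_domain_features(cycle_emg):
    """RMS-peak and RMS-peak time (as a percentage of the gait cycle)."""
    rms = rms_curve(cycle_emg)
    peak_idx = int(np.argmax(rms))
    return rms[peak_idx], 100.0 * peak_idx / max(len(rms) - 1, 1)

def frequency_domain_features(cycle_emg, fs=FS):
    """Mean power frequency (MPF) and median frequency (MF) from the FFT power spectrum."""
    power = np.abs(np.fft.rfft(cycle_emg)) ** 2
    freqs = np.fft.rfftfreq(len(cycle_emg), d=1.0 / fs)
    mpf = np.sum(freqs * power) / np.sum(power)
    cum = np.cumsum(power)
    mf = freqs[np.searchsorted(cum, 0.5 * cum[-1])]
    return mpf, mf

def subject_features(ta_cycles, lg_cycles):
    """Average each parameter over the gait cycles: four parameters per muscle, eight per subject."""
    feats = []
    for cycles in (ta_cycles, lg_cycles):
        td = np.mean([time_domain_features(c) for c in cycles], axis=0)
        fd = np.mean([frequency_domain_features(c) for c in cycles], axis=0)
        feats.extend([*td, *fd])  # RMS-peak, RMS-peak time, MPF, MF
    return np.array(feats)

rng = np.random.default_rng(0)

# One synthetic "subject": six random traces standing in for six gait cycles per muscle.
toy_cycles = [rng.normal(size=2200) for _ in range(6)]
print(subject_features(toy_cycles, toy_cycles).shape)  # -> (8,)

# X: per-subject differences of the 8 parameters between the two lower limbs
# (symptomatic vs. asymptomatic side, or left vs. right for controls);
# y: 0 = no compression, 1 = L5, 2 = S1. Random data stands in for the cohort.
X = rng.normal(size=(88, 8))
y = rng.integers(0, 3, size=88)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)
print("accuracy :", accuracy_score(y_te, pred))
print("macro F1 :", f1_score(y_te, pred, average="macro"))
print("kappa    :", cohen_kappa_score(y_te, pred))
print("AUC (ovr):", roc_auc_score(y_te, proba, multi_class="ovr"))
print("feature importances:", clf.feature_importances_)
```

Repeating the split-and-fit step with ten different random seeds and averaging the resulting metrics would approximate the repeated reservation experiments used for validation, and the feature importances correspond to the parameter weights discussed in the Results.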
If the results were not satisfactory, the procedure was repeated starting from the screening of independent variables. However, if the results were deemed satisfactory and reliable, the data from all patients were used to retrain and establish the RF diagnosis model. This approach aimed to ensure that the final diagnosis model had high accuracy, precision, recall rate, F1-score, Kappa values, and efficiency in detecting the different levels of nerve root compression. Statistical analysis The data were presented as mean ± standard error or median (interquartile range) based on the distribution characteristics. Normal distribution of data was assessed using the K-S test. Paired t-test and one-way ANOVA were used for comparison between groups for normally distributed data. For non-normally distributed data, non-parametric rank sum test such as Wilcoxon-Mann-Whitney test was used for comparison of two related groups, Kruskal-Wallis rank sum test was used for comparison of multiple groups, and pairwise comparison of multiple groups was performed using Bonferroni test. Statistical analysis was performed using SPSS 26.0 and P < 0.05 was considered statistically significant. SEMG characteristics of patients with different nerve root compression In the healthy group, the SEMG performance of each healthy subject was combined and averaged for their left and right sides, as all healthy subjects walked symmetrically. Their peak RMS, RMS-peak time, MPF, and MF were not significantly different from the asymptomatic side of patients in the L5 and S1 groups (Figure 3). In L5 group, compared with the asymptomatic side, the RMS-peak time of TA in the symptomatic side was significantly delayed (P < 0.001), the MPF and MF of TA were significantly decreased (both P < 0.001); the RMS-peak of LG was also significantly decreased (P = 0.016) ( Table 2). The delayed activation of TA was manifested by a later occurrence in the gait cycle [symptomatic side: 35 (25.5, 61), asymptomatic side 11 (7.5, 15), P < 0.001] and resulted in the co-contraction with LG (Figure 3). In S1 group, compared with the asymptomatic side, as for LG, the RMS-peak in the symptomatic side was significantly decreased (P = 0.003), the RMS-peak time was significantly moved forward (P < 0.001), the MPF and MF were significantly decreased (both P < 0.001), and the RMS-peak of TA in the symptomatic side was also significantly decreased (P = 0.043) ( Table 3). The early Panels (A-C) are typical SEMG presentations of L5 nerve root compression: on the symptomatic side, delayed activation of TA (A) and decreased peak RMS of LG (B), showing co-contraction of TA and LG (C). Panels (D-F) are typical SEMG presentations of S1 nerve root compression: on the symptomatic side, activation of the LG is shifted forward (D), peak RMS of the TA is decreased (E), and a double peak and co-contraction of the LG and TA is found (F). (Figure 3). Compared to the healthy group (Table 4), the L5 group showed significant delays in the RMS-peak time of TA (P < 0.001) and significant decreases in the MF (P = 0.002) and MPF (P = 0.001) of TA and LG. Similarly, the S1 group showed significant differences in the RMS-peak (P = 0.043) and MPF (P < 0.001) of LG and in the MPF (P = 0.033) and MF (P = 0.001) of TA, when compared to the healthy group. Additionally, the RMS-peak time of TA was significantly delayed in the L5 group compared to the S1 group (P < 0.001), while the RMS-peak of LG was significantly decreased (P = 0.001). 
Conversely, the RMS-peak time of LG was significantly earlier in the S1 group than in the L5 group (P = 0.045). Table notes: *There were significant differences among the three groups (P < 0.05). $There was a significant difference for either the L5 or the S1 group compared with the healthy group (P < 0.05). There was a significant difference between the L5 and S1 groups (P < 0.05). Establishment of RF diagnosis model based on SEMG parameters In this study, we selected the difference of parameters between the bilateral lower limbs as the input parameter. According to our statistical results, there were significant differences in the RMS-peak and RMS-peak time of TA, as well as the RMS-peak, RMS-peak time, and MPF of LG when compared to the healthy group. Furthermore, when compared with the patients in the L5 and S1 groups, significant differences were observed in the bilateral RMS-peak and RMS-peak time of TA (Table 5). After 10 iterations of retention experiments, we confirmed that the diagnostic accuracy of the RF model based on the SEMG parameters was 84%. Additionally, the precision, recall, F1-score, and kappa values were found to be 85%, 84%, 0.84, and 0.76, respectively. The area under the ROC curve was calculated to be 0.93 (Figures 4, 5). FIGURE 4 Confusion matrix of optimal RF diagnosis model based on SEMG parameters. The color scheme represents the consistency between the predicted and actual results. The numbers in the matrix denote the count of correctly predicted samples within specific categories. The percentages indicate the proportion of correctly predicted samples within those categories. FIGURE 5 The ROC curve of RF diagnosis model based on SEMG parameters. Furthermore, we analyzed the weights of the RF model and found that among the eight SEMG parameters used in the model, the weights ranged from 6 to 26% (Figure 6). FIGURE 6 The weight of SEMG parameters in RF model. Notably, the RMS-peak time of TA had the highest weight (26%), followed by LG's RMS-peak time (15% each). Analysis of SEMG characteristics in lower limb muscles Lumbar disc herniation patients often experience reduced muscle strength and endurance. Previous studies have demonstrated that compression of the L5 nerve root can result in TA dysfunction, while compression of the S1 nerve root can lead to gastrocnemius dysfunction (Barr, 2013; Wang and Nataraj, 2014). Additionally, LDH patients with low back pain tend to experience increased multifidus muscle fatigue (Ramos et al., 2016). However, it can be challenging to diagnose the cause of abnormal gait when multiple pathological conditions coexist. In one case, peroneal nerve compression caused by a ganglion cyst combined with L5 radiculopathy was observed, and electrical diagnosis was found to improve diagnostic accuracy in addition to MRI and other imaging methods (Park et al., 2019). SEMG is an effective tool for accurately assessing neuromuscular function in patients and has been widely used for clinical diagnosis and evaluation of various diseases and dysfunctions. While some studies have used SEMG to observe and record muscle function in LDH patients (Rønager et al., 1989; Dedering, 2012; Supuk et al., 2014; Djordjevic et al., 2015), most of these studies have focused on SEMG signals from paravertebral muscles and other muscles in the lumbar region, with only a few studies examining changes and dynamic adjustments in lower limb muscles in LDH patients. A study in 2020 found that abnormal gait in LDH patients was associated with abnormal lower limb muscle activity and neurological control disorders (Wang et al., 2020). In this study, SEMG analysis was conducted on LDH patients, which revealed significant changes in the muscle activation patterns of patients with L5 and S1 nerve root compression. In cases where the L5 nerve root was compressed, neural control disorders were observed mainly in the TA. Under normal circumstances, the activation of the TA muscle occurs prior to the completion of 12% of the gait cycle. However, in L5 group patients, the RMS-peak time of the TA in the symptomatic side was significantly delayed, with a median activation time of 35% of the gait cycle, resulting in a tendency of co-contraction with LG. Additionally, the MPF and MF were significantly decreased, and the RMS-peak of LG in the symptomatic side was also decreased. In cases where the S1 nerve root was compressed, neural control disorders were observed mainly in the LG. Normally, the activation peak of LG appears in the late stance phase of the gait cycle, but in S1 group patients, the activation time of LG was advanced, with the median activation time shifting from 44 to 27% in the symptomatic side, resulting in a bimodal activation pattern and a tendency of co-contraction with TA. The RMS-peak, MPF, and MF of LG decreased, and the RMS-peak of TA also decreased accordingly. Compared with the healthy group, LDH patients showed similar changes in muscle activation patterns on the symptomatic side. Moreover, compared with the S1 group and healthy group, the activation time of TA in the L5 group was significantly delayed, and the degree of fatigue in the TA was increased. The RMS-peak in LG was also significantly lower than in the S1 group. The RMS-peak time of LG in the S1 group was significantly advanced compared to that in the L5 group, and the MPF and MF of TA, and the RMS-peak and MPF of LG, were significantly lower than in the healthy group. The results of this study demonstrate that the functional state of the main muscles innervated by the corresponding nerve root changed when patients with different nerve root compressions were walking, which was related to neuromuscular control disorders after nerve compression. This led to abnormal recruitment and fatigue of the corresponding muscle at a specific stage of the gait cycle. In the case of L5 nerve root compression, mechanical compression of the nerve can trigger conduction function abnormalities, resulting in a significant delay in the peak activation of the symptomatic TA and a significant increase in the overlap contraction area with LG. At this point, the LG and TA, a pair of antagonist muscles, exhibit a co-contraction phenomenon, which is an ineffective muscle coordination strategy that is different from normal alternating contraction. This finding is consistent with the findings of Wang et al. (2020), who also observed inappropriate co-contraction between the TA and gastrocnemius during walking in LDH patients. Taking into account that LG is mainly innervated by the S1 nerve root, the compression of the L5 nerve root has a relatively small effect on LG. 
Therefore, to avoid dysfunction caused by muscle co-contraction, the RMS-peak of LG on the symptomatic side decreased accordingly. Co-contraction of antagonist muscles can cause joint stiffness or postural abnormalities (Lo et al., 2017;Du et al., 2018), significantly increasing energy expenditure during exercise and making muscles more prone to fatigue (Hallal et al., 2013). This is consistent with the decrease in MPF and MF of the symptomatic TA . It may also partially explain why LDH patients often experience symptoms such as joint stiffness, claudication, muscle pain, and discomfort during walking (Wang et al., 2020). When the S1 nerve root is compressed, the RMS distribution of LG changes to a bimodal activation pattern in the gait cycle, with the first peak occurring in the mid-stance stage. This early contraction of LG is thought to be a compensatory mechanism that helps speed up the transfer of the center of gravity, reduce weight-bearing on the symptomatic side, facilitate knee flexion, and reduce the length of the lower limb to avoid pain during single leg support. Our previous study also found similar SEMG changes in patients with S1 nerve root compression (Qie et al., 2020). Due to the compensatory contraction being small, the RMSpeak of LG was significantly lower on the symptomatic side than the asymptomatic side, and the earlier activation also led to an increase in the overlap contraction area with TA, which resulted in the same co-contraction of the antagonist muscles seen in L5 nerve root compression. Additionally, the RMS-peak of TA also decreased accordingly. However, we also recognize that different diagnoses of L5 and S1 nerve root compression may lead to different treatment options. For example, L5 nerve root compression may result in altered activity patterns in the TA and may require physiotherapy targeting the TA to improve its function and reduce cocontraction with the LG. Conversely, when the S1 nerve root is compressed, the activity pattern of the LG is altered and physiotherapy targeting the LG may be required to improve its function. In forthcoming research endeavors, our aspiration is not only to delve deeper into this salient issue to assist clinicians in diagnosing the location of nerve root compression more accurately but also to evaluate pre-and post-operative EMG patterns. In particular, we are interested in scrutinizing cases that result in substantial nerve decompression and meaningful pain improvement post-surgery. Such assessments could serve as pivotal indicators of normalized EMG activation, as per our hypothesis, ultimately enhancing the scope and efficacy of treatment options for patients. RF diagnosis model based on SEMG parameters Random forest is a highly flexible and innovative machine learning algorithm that has a broad range of potential applications. It was proposed by American scholar Breiman in 2001, building on the classification tree algorithm developed in the 1980s (Breiman, 2001). Compared to other current algorithms, RF offers exceptional accuracy, can effectively handle large datasets, and is adept at processing input samples with high-dimensional features while evaluating the importance of each feature in classification problems. He et al. (2020) applied the RF algorithm to evaluate the gait of elderly individuals, demonstrating that RF improved gait classification accuracy. Shi et al. 
(2020) found that the RF algorithm can provide a solution for fusing human and exoskeleton equipment by giving corresponding weight to the original data, enhancing the real-time classification of traditional SEMG signals. In this study, we trained an RF classification model using SEMG parameters. The training process involved ten repeated cross-validation experiments, where the model was trained on 50% of the patients and validated on the remaining 50% each time. The results showed that the RF model had a high diagnostic accuracy and could assist in localizing compressed nerve roots in LDH patients. We used the area under the ROC curve as the performance metric to evaluate the model's performance. The ROC curve is a plot of sensitivity (true positive rate) against 1-specificity (false positive rate) for different threshold values. A larger area under the curve indicates higher diagnostic accuracy. Based on our results, the RF model achieved an area under the ROC curve of 0.93, which indicates that it is an effective diagnostic model. In addition, RF algorithm can give corresponding weights to the original data, score the classification ability of different parameters, and identify the parameters that play an important role in the classification. Based on the characteristics of RF, we also compared the parameter weights of this model, and found that the RMS-peak time of TA has the highest weight ratio (26%). It is suggested that RMS-peak time of TA can be used as the most important SEMG parameter to identify L5 or S1 compressed nerve roots, which can provide further reference for optimizing the diagnosis model in the future. Our study is a replication of the observational study by Li et al. (2018) and we are in a larger, new cohort of patients where we confirm previous findings and also further expand the knowledge in this area. Our findings suggest that the RF model can assist in the localization and diagnosis of compressed nerve roots in LDH patients, while the SEMG parameters can provide a further reference for optimizing the diagnostic model. However, while SEMG is a powerful tool to help us understand the mechanisms of disease, the feasibility of implementing such a diagnostic procedure in a clinical setting also needs to be considered. While SEMG equipment is relatively easy to obtain and use, motion capture systems may require more equipment and space. In addition, some training and equipment maintenance may be required in order to integrate such a diagnostic procedure with existing diagnostic processes. Despite these challenges, we believe that as technology advances and costs decrease, the use of SEMG and motion capture systems in clinical settings will become increasingly feasible. We look forward to future research that will further explore the implementation of such diagnostic procedures to help clinicians more accurately diagnose the location of nerve root compression and provide better treatment options for patients. Conclusion This study highlights the potential of SEMG as a diagnostic tool for LDH patients with L5 and S1 nerve root compression. The differences in SEMG characteristics between TA and LG during walking provide valuable insights into the location of nerve root compression. The RF algorithm-based diagnostic model demonstrated high accuracy, precision, and recall, indicating its potential as a reliable diagnostic tool. 
The model's ability to identify the weights of different SEMG parameters provides clinicians with a better understanding of the relative importance of each parameter in diagnosis. Overall, this study suggests that SEMG can serve as an effective complementary diagnostic tool for LDH, helping clinicians accurately diagnose the location of nerve root compression and provide better treatment options for patients. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Ethics statement The studies involving human participants were reviewed and approved by the Ethics Committee of Capital Medical University, Beijing, China. The patients/participants provided their written informed consent to participate in this study.
v3-fos-license
2019-03-16T13:05:42.177Z
2016-01-01T00:00:00.000
27337233
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.oatext.com/pdf/OHNS-1-102.pdf", "pdf_hash": "5a09a836cf30d8edac6d62e14fc1dd98175413df", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44923", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3c87c7315fd7e4b6bb814944f069cdc54eb894bd", "year": 2016 }
pes2o/s2orc
20 ways of removing a nasal foreign body in the emergency department Background: Nasal foreign body (NFB) is a common presentation to the emergency department (ED). It is often a dilemma for the treating physician to choose the best method, as removal of NFB can be challenging, frustrating and distressing for the physician, the patient, and the patient's parents. Method: We have listed the 20 possible methods one could use to remove NFB in an ED setting. Results 1. Positive Pressure (Child) Method 2. Positive Pressure (Parent) Method – Also known as the kissing technique 3. Modified Positive Pressure (Parent) Method – It is a modification to the kissing technique 4. Positive Pressure with Ambu bag 5. Positive-Pressure Device 6. Beamsley Blaster Positive-Pressure Device 7. Saline Washout Technique 8. Cyanoacrylate (super glue) 9. Catheters 10. The Katz extractor 11. Using a Magnetic Device 12. Instrumentation – Alligator/Crocodile forceps 13. Instrumentation – Tilley nasal packing forceps 14. Instrumentation – Jobson-Horne Probe / ring curette / wax loop 15. Instrumentation – Right angle probe 16. Instrumentation – Frazier suction catheter 17. Instrumentation – Cut-down flexible suction catheter 18. Instrumentation – Refashioned / bent paper clips 19. Snare Technique – Wire loop snare grasped by hemostat 20. "Hook-Scope" Technique for Endoscopic Extraction of NFB Discussion: Any of the chosen methods has its own benefits and most are suitable for some but not all NFB shapes. In addition, for the removal of NFB with any technique to be successful, four prerequisites need to be fulfilled: a well-restrained patient, a good head light with optimal illumination, a nasal speculum and a decongested nasal cavity. Conclusion: The child usually allows only one or two attempts at most for removal of NFB, hence assess the child carefully and, if you do not feel you are the right person for the job, get help from seniors or refer directly to the ENT team should you anticipate a difficult removal of a NFB. Correspondence to: Dr. Tian-Tee Ng, ENT Unit, Department of Surgery, Frankston Hospital, Frankston 3199, Victoria, Australia, Tel: +61-0426 266 890; E-mail: tntdynamites@yahoo.com The dilemma for the treating physician is to choose the best method of removing the NFB. Removal of NFBs can be challenging, frustrating and distressing for the physician, the patient, and the patient's parents. In this article, we report 20 ways of removing NFBs in the emergency department. Methods A literature search was performed to look for additional ways of removing NFB other than those we routinely apply. Thus, by examining published methods, coupled with our experiences, we have compiled a list of all the possible methods one could employ in removing NFB in an ED setting. Positive Pressure (Child) Method [1] Technique: The child is asked to exhale forcibly through the nostril containing the object with the opposite nostril occluded and the mouth closed. Good for: Solid object e.g. beads Note: Difficult or impossible for young patients to accomplish on their own. Positive Pressure (Parent) Method [1,2] – Also known as the kissing technique Technique: The parent uses his or her mouth to apply positive pressure into the patient's mouth with simultaneous occlusion of the contralateral nostril [1]. Good for: Solid object e.g. beads Note: Potentially less emotionally traumatic for the child than direct physical removal of the object [1] and good for very young patients. 
Modified Positive Pressure (Parent) Method – a modification of the kissing technique Technique: Using a drinking straw, or similar tubing, between the parent's mouth and the child's mouth, the child is instructed to make a tight seal, as if drinking, and the parent delivers a quick puff [1,3]. Good for: Solid object e.g. beads Note: Risk of bleeding from the tip of the straw or rigid tubing if the patient moves. Positive Pressure with Ambu bag Technique: Oral insufflation with an ordinary Ambu bag [1]. Good for: Solid object e.g. beads Note: Reasonable alternative in instances when parent and child have difficulty cooperating with the parent-applied mouth-to-mouth positive pressure. Positive-Pressure Device [4] Technique: Nasal occlusion device consisting of a medium or large disposable headset attached to an 8F feeding tube (Figure 1). The headset includes a hose that is connected to a standard oxygen outlet, with an oxygen flow rate of 15 L/min, which is equivalent to an output pressure of 100 to 160 mm Hg. With the patient restrained and properly positioned (sitting on the parent's lap), the occlusion device is connected to the oxygen hose. The oxygen outlet is opened at a flow of 15 L/min, and the hose bent to occlude passage of pressure. After the device is placed in the unaffected nostril, the pressure is suddenly released and the device is immediately removed. Good for: Solid object e.g. beads Note: This device is comfortable, easy to use, and regulates sufficient positive pressure necessary to expel a NFB. Beamsley Blaster Positive-Pressure Device [4] Technique: Uses an oxygen tube adapter that provides unmodulated pressure in the posterior nasopharynx to eject the NFB. Good for: Solid object e.g. beads Note: Barotrauma manifested as periorbital subcutaneous emphysema has been reported as a complication and is therefore a risk. Saline Washout Technique [1,5] Technique: A bulb syringe filled with approximately 7 ml of sterile normal saline is placed in the opposite nostril. The bulb syringe is advanced several centimetres into the nasal cavity so that a tight seal is maintained. The bulb syringe is forcibly squeezed and the object is propelled out by the flow of saline back through the nasal passage that contains the foreign body. Good for: Friable foreign body Note: Potential reflux of saline and/or nasal contents into the eustachian tubes and potential aspiration of the foreign body [5]. However, no known adverse event has been reported by investigators who have regularly performed this technique to obtain nasal specimens for research studies [5]. Cyanoacrylate (super glue) Technique: Cyanoacrylate is applied to the end of a plastic swab stick for the removal of nasal foreign bodies [1]. The stick must be pressed and held onto the NFB for 60 seconds before being withdrawn. Good for: Solid object e.g. beads Note: Any cyanoacrylate stuck and remaining on the skin can be removed using 3% hydrogen peroxide or acetone. Catheters Technique: A balloon catheter (Fogarty or Foley) is used. After checking the integrity of the balloon, the catheter is inserted above and distal to the foreign object (passing the catheter below the object can potentially drive the object into a tighter position). Once beyond the foreign body, the balloon is inflated with a predetermined amount of saline (1 mL for a no. 4 Fogarty catheter, 2-3 mL for a no. 6 Fogarty or 8F Foley catheter) and maintained at that size with pressure from the practitioner's thumb. Gentle traction is then applied to remove the object [1] (Figure 2). 
Good for: NFB that cannot be visualised but has a reliable history or signs of NFB lodgement (unilateral nasal obstruction or discharge) [6]. Note: Dwyer has reported more than 200 successful NFB removals in children via this technique, which has become his sole strategy in removing NFB [6]. The Katz extractor [1] Technique: The success of the catheter method has led to the development of a disposable catheter made specifically for the removal of NFB, called the Katz extractor [1]. The Katz extractor catheter is smaller than the catheters mentioned above, which results in a greater chance of the catheter being passed beyond the foreign body. Good for: NFB that cannot be visualised Note: Easy to use. It only takes three simple steps to complete an extraction with the Katz Extractor: Insert / Inflate / Extract. Using a Magnetic Device [7] Technique: A strong magnet is used; for example, in the ED setting it could be the magnet used for deactivating pacemakers. Good for: Metallic NFB and button batteries Note: If the NFB is a button battery, this is a medical emergency and it must be removed as soon as possible. Instrumentation – Alligator/Crocodile forceps [8] (Figure 4) Technique: Using one hand to elevate the nasal tip of the patient, the other hand inserts forceps into the nasal cavity to grasp the NFB. Good for: Any firm to hard NFB Note: Not suitable for round NFBs, as repeated attempts to grasp the object, only for it to escape, could propel the NFB further back into the nasal cavity. Instrumentation – Tilley nasal packing forceps [8] (Figure 4) Technique: For removal of NFB deemed too large to be grasped with Alligator forceps. Good for: Any firm to hard NFB Note: Not suitable for round NFBs as the forceps are likely to slip off and propel the NFB back further into the nasal cavity. Instrumentation – Jobson-Horne Probe/ring curette/wax loop [8] (Figure 4) Technique: With the nasal tip of the patient elevated using the other hand, the probe is inserted into the nasal cavity beyond the NFB, and the tip is then angled to allow the probe to propel the NFB out ahead of it as the probe is withdrawn from the nose [8]. Good for: Solid round object e.g. beads Note: The author has found this method to have a 100% success rate and has consequently adopted it as his practice of choice in removing NFB. Instrumentation – Right angle probe [9] (Figure 4) Technique: The probe is manoeuvred alongside and past the NFB, then rotated so that the right angle is behind the NFB, and then withdrawn along with the object [9]. Instrumentation – Frazier suction catheter [9] (Figure 4) Technique: Place the end of the suction catheter on the surface of the object, apply suction and gently pull out the catheter with the NFB attached. Good for: Solid round object e.g. beads Note: Risk of epistaxis if the patient moves or struggles while the catheter is in the nose. Instrumentation – Cut-down flexible suction catheter [9] (Figure 4) Technique: Place the end of the suction catheter on the surface of the object, apply suction and gently pull out the catheter with the NFB attached. Good for: Solid round object e.g. beads Note: Less risk of epistaxis compared to using a Frazier suction catheter. Instrumentation – Refashioned / bent paper clips Good for: Solid objects e.g. beads Note: Use it when a Jobson-Horne Probe / ring curette / wax loop is needed but cannot be found. Snare Technique – Wire loop snare grasped by haemostat Technique: A 24-gauge wire "snare" loop is created, then held and its position guided by a haemostat. 
The snare is inserted into the nasal aperture and used to separate a plane between the NFB and the septal, turbinate and nasal floor mucosa until all sides are free. Once the posterior free edge of the NFB is palpable with the loop, it is rotated 90 degrees and retracted outward, freeing the NFB and bringing it forward (Figure 6). Good for: Impacted NFB, e.g. button battery Note: This technique is noted to be a rapid, atraumatic, and effective means for the removal of difficult NFBs [10]. "Hook-Scope" Technique for Endoscopic Extraction of NFB Technique: A flexible nasal endoscope (3.7 mm diameter) connected to a video system is needed. First assess the NFB and surrounding nasal cavity using nasal endoscopy. Upon location of the NFB, the scope's head is then turned superiorly to identify the nasal area above the superior margin of the NFB, which will be the pathway for the scope to travel through. The existence of such a pathway is a prerequisite for the success of this technique. The endoscope is subsequently advanced above and posterior to the NFB and finally descends to the area facing the nasal floor in a manoeuvre that bypasses the object (Figure 7). After a quick assessment of the posterior extension of the NFB and the status of the posterior nasal cavity, the tip of the scope is turned anteriorly towards the object, encasing the NFB like a hook. The NFB is then disengaged and mobilized by gently pulling the scope anteriorly outward towards the nasal vestibule, keeping the scope locked in the flexed mode and the object enclosed within its hook. Following successful extraction, a diagnostic nasal endoscopy is performed to reassess the entire nasal cavity. The core of this technique is that the NFB is actually embraced by the endoscope, which subsequently acts as an extractor [11]. Good for: Posteriorly located and round objects that are difficult to grasp [11]. Note: Particular attention must be paid in order not to inadvertently dislocate the object towards the choana. The endoscope may become damaged in the hands of an unfamiliar operator. Discussion Any of the chosen methods, from non-instrumental methods such as positive pressure to instrumental extraction, has its own benefits and advantages. For any removal of NFB to be successful, there are four prerequisites that need to be fulfilled; these are: First – a restrained patient. A moving or mobile patient is at risk of epistaxis with instrumentation, which will make the process of removing the NFB even more difficult, make visualisation of the NFB almost impossible and risk posterior dislodgement of the NFB. A child can be safely and securely restrained by having the child sit on their parent's lap with the body facing to the front. The parent has one arm over the child's body and arms, and the parent's other arm holds onto the child's forehead, pushing the back of the child's head against the parent's chest. The child's lower limbs are securely locked between the parent's thighs (Figure 8). Second – a good head light, e.g. a Vorotek headlight. I cannot overstress the importance of having a good otolaryngology headlight as it frees both the operator's hands and gives focused illumination into the tiny nasal cavities of the child. Third – a nasal speculum, e.g. a Killian nasal speculum (Figure 4). In my early years as a junior registrar I missed NFBs in two paediatric patients when examining without a nasal speculum. 
Fortunately, both patients presented back for review and had a proper nasal examination with a nasal speculum, where the NFB was identified and removed safely. Fourth – nasal decongestion and analgesia with a local topical agent (cophenylcaine nasal spray). This is important as it decongests and numbs the nasal cavity to aid in the removal of the NFB. The topical spray is dose dependent, hence be careful when using it on paediatric patients. Conclusions It is down to the operator to find the method he or she is most comfortable with to remove NFBs. Bear in mind that the child usually allows only one or two attempts at most for removal of a NFB, hence assess the child carefully and, if you do not feel you are the right person for the job, get help from seniors or refer directly to the ENT team if the removal of the NFB is anticipated to be difficult. If all else fails, no further attempt at removing the NFB in the ED should be made, and NFB removal under general anaesthesia would be the next course of action. Copyright: ©2016 Tian-Tee Ng. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
v3-fos-license
2014-10-01T00:00:00.000Z
2012-03-19T00:00:00.000
7986796
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0033034&type=printable", "pdf_hash": "f13009ede180f648579c9fac0a57f303d0bd8b37", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44926", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Materials Science" ], "sha1": "2a2321adb906fd5a1af2d554aed305e13e0178c8", "year": 2012 }
pes2o/s2orc
Deep Annotation of Populus trichocarpa microRNAs from Diverse Tissue Sets Populus trichocarpa is an important woody model organism whose entire genome has been sequenced. This resource has facilitated the annotation of microRNAs (miRNAs), which are short non-coding RNAs with critical regulatory functions. However, despite their developmental importance, P. trichocarpa miRNAs have yet to be annotated from numerous important tissues. Here we significantly expand the breadth of tissue sampling and sequencing depth for miRNA annotation in P. trichocarpa using high-throughput smallRNA (sRNA) sequencing. miRNA annotation was performed using three individual next-generation sRNA sequencing runs from separate leaves, xylem, and mechanically treated xylem, as well as a fourth run using a pooled sample containing vegetative apices, male flowers, female flowers, female apical buds, and male apical and lateral buds. A total of 276 miRNAs were identified from these datasets, including 155 previously unannotated miRNAs, most of which are P. trichocarpa specific. Importantly, we identified several xylem-enriched miRNAs predicted to target genes known to be important in secondary growth, including the critical reaction wood enzyme xyloglucan endo-transglycosylase/hydrolase and vascular-related transcription factors. This study provides a thorough genome-wide annotation of miRNAs in P. trichocarpa through deep sRNA sequencing from diverse tissue sets. Our data significantly expand the P. trichocarpa miRNA repertoire, which will facilitate a broad range of research in this major model system. Introduction Given the environmental and bioenergetic interest in lignocellulosic biomass, understanding the underlying molecular basis of wood formation is of great importance [1]. P. trichocarpa, a woody model organism with a fully sequenced genome, is uniquely positioned to address the genomics of wood formation and, as such, significant work has been done analyzing the molecular pathways leading to secondary differentiation and growth in P. trichocarpa [2][3][4]. In particular, Du and Groover [4] emphasized the importance of transcriptional regulation in secondary wood formation. Within this context, miRNAs have emerged as a critical regulatory component of diverse genetic programs, often by regulating transcript levels [5]. Experimental annotation of the entire miRNA component is, therefore, an essential first step to fully utilizing P. trichocarpa as a woody model organism and lignocellulosic feedstock. miRNAs, a group of short (~21 nt) non-coding RNAs in plants and animals, are known to play critical roles in diverse plant developmental processes through sequence-specific gene regulation via target transcript cleavage and translational repression [6,7]. One feature that distinguishes these molecules from other small RNAs is precise biogenesis from a stereotypical hairpin [8]. Long pri-miRNA transcripts are trimmed by Dicer-like 1 (DCL1) to form pre-miRNAs that fold into stable secondary hairpin structures [6]. This pre-miRNA hairpin is further cleaved to give rise to short double-stranded miRNA:miRNA* fragments [6]. The dsRNA fragment is exported to the cytoplasm where it is dissociated and the ~21 nt miRNA is incorporated into a protein complex known as the RNA-induced silencing complex (RISC), of which ARGONAUTES are the core components [5,6,9]. In turn, the RISC complex, guided by the miRNA sequence, regulates specific target transcripts [5,6]. 
Thorough annotation of miRNAs in an organism of interest is a critical component of transcriptome annotation. In the context of other plant systems, P. trichocarpa is the only woody model whose genome has been completely decoded [10]. However, despite its importance, we lack a broad and deep annotation of its miRNAs. Klevebring [11] took the first step in annotating miRNAs from P. trichocarpa via 454 pyrosequencing of concatenated sRNA sequences, which yielded a total of 901,857 sequences. However, the sRNA library for this study was made solely from leaf tissue [11]. Two other related studies have sought to identify stress-responsive miRNAs in P. trichocarpa [12,13]. Sequencing of sRNAs cloned from mechanically treated xylem by Lu et al. [12] resulted in a total of 898 sequences, while sequencing of cloned sRNAs from abiotically stressed samples and mechanically stressed samples resulted in a total of 2648 and 1179 unique sequences, respectively [13]. While both of these studies made significant advances in our understanding of miRNAs, progress in next-generation sequencing allows us to study sRNAs at significantly greater depth. The goal of this study is to expand the miRNA annotation depth of P. trichocarpa by sampling across a diverse set of tissue types, including the first sampling of reproductive tissue, and using deeper sequencing approaches. We have also analyzed publicly available, previously unannotated P. trichocarpa sRNA sequencing runs from xylem, mechanically treated xylem, and leaves. By examining these datasets in concert, we gain a better understanding of the complexity of P. trichocarpa miRNA expression profiles and enable the identification of xylem-enhanced miRNA-target interactions. sRNA Sequencing Statistics Overview Four individual sRNA libraries were analyzed. A pooled sRNA library prepared from growing vegetative apices, male flowers, female flowers, female apical buds, and male apical and lateral buds was sequenced using the SOLiD ABI platform. Leaf-specific, xylem-specific, and mechanically treated xylem (MTX)-specific sRNA libraries available for download from http://smallrna.udel.edu/ and Gene Expression Omnibus (GEO) were also analyzed. Table 1 summarizes the read counts obtained from these four sequencing runs. MTX mechanical treatment and sample collection were performed as described in Lu [13]. The data were analyzed using the UEA sRNA toolkit [14] in conjunction with the most recent P. trichocarpa genome assembly (v.156, available from http://www.phytozome.net/) as a reference [10]. Adaptors were removed from SOLiD sequences and colorspace sequence was converted to base-space using custom perl scripts (the three downloaded Illumina libraries already had their adaptor sequences removed). Filtering to remove tRNA and rRNA was performed. These reads were then mapped against the reference P. trichocarpa genome and only perfect matches were allowed (Table 1). Length distributions before filtering are shown in Figure 1. microRNA Annotation Filtered sRNA datasets were uploaded individually into the miRCAT pipeline [14]. The miRCAT pipeline annotates miRNAs based on expressed sRNA sequences and stable hairpin structures [14]. miRNA annotation was performed according to criteria described in [8]. A total of 276 miRNAs were identified from these four datasets (Tables S1, S2, S3, S4). The sequence and genomic location of all known P. 
trichocarpa miRNAs were downloaded from miRBASE version 17 [10,[15][16][17] and these previously annotated miRNAs were overlaid on the new dataset. This allowed us to compare the genomic locations extracted from the gff annotation file (available for download from miRBASE) to the sequences of miRNAs annotated in the current study [15][16][17]. Based on this, we have identified a total of 155 new miRNAs. There are a total of 234 P. trichocarpa miRNAs currently deposited on miRBASE (v18); however, only 198 of these miRNAs have genome coordinates. Of the 198 P. trichocarpa miRNAs already annotated in miRBASE, this study identifies 122 and misses 76. In order to understand the similarities and differences between the new sRNA sequencing runs, we created a Venn diagram comparing presence-absence of miRNAs across the four datasets (Figure 2A). We annotated 164 miRNAs from the pooled dataset, 173 from leaves, 169 from xylem, and 158 from mechanically treated xylem. miRNAs specific to P. trichocarpa were identified by looking for a matching miRNA (within 4 mismatches) anywhere in green plants in miRBASE. It is important to note that in this context, P. trichocarpa specificity is based on failure to annotate a specific miRNA from other genomes, which does not necessarily imply absence from other plant genomes. A total of 110 P. trichocarpa specific miRNAs were identified. These include 36, 53, 51, and 37 P. trichocarpa specific miRNAs identified from pooled, leaf, xylem, and mechanically treated xylem, respectively. To add another level of stringency, we then asked which of these miRNAs has a corresponding miRNA* sequence (Figure 2B). A total of 157 miRNA sequences with a corresponding miRNA* were identified, including a total of 33 P. trichocarpa specific miRNAs. The majority of miRNAs with a miRNA* (124 of 157) could be grouped into more broadly conserved miRNA families (as defined by a miRNA family presence in other green plant species on miRBASE). Perhaps not surprisingly, the pooled tissue sample showed the highest percentage of miRNAs annotated with a corresponding miRNA* sequence - 62% (102/164). The leaf, miRNA-target prediction miRNA target prediction was performed using the psRNAtarget predictor [18]. For target prediction, the most recent P. trichocarpa coding sequences were downloaded from Phytozome. Predicted miRNA-target interactions are reported with the expectation score as originally defined by [19]. Expectation scores depend on the degree of miRNA-target complementarity. Perfectly complementary miRNA-target binding sites receive an expectation score of 0, while mismatches or G-U base-pair wobbles in the miRNA-target site increase the expectation score. Target predictions for miRNAs based on each sRNA library (pooled, leaf, xylem, and MTX) are available in the supplement (Tables S5, S6, S7, S8). Expression of miRNAs In addition to miRNA annotation, miRNA expression data can be recovered via high-throughput sRNA sequencing datasets. The raw abundance of every miRNA annotated for each dataset was determined. To account for variations in sequencing depths between libraries, raw abundance was divided by the total number of perfectly mappable reads and multiplied by a constant: [miRNA expression = (Raw Abundance)/(Number of Mappable Reads) × 1,000,000] [20]. As has been observed in other plant species [21,22], more broadly evolutionarily conserved miRNAs are expressed at higher levels than P. trichocarpa specific miRNAs (Figure 3). 
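The normalization above is a standard reads-per-million scaling, and a minimal sketch of the calculation is given below for a toy count table; the miRNA names, counts and library sizes are invented placeholders rather than data from this study.

import pandas as pd

# Toy raw counts for a few hypothetical miRNAs in two libraries (not real data).
raw_counts = pd.DataFrame(
    {"xylem": [150, 30, 5], "leaf": [90, 60, 2]},
    index=["ptc-miR164", "ptc-miRX50", "ptc-miRX87"],
)
# Total perfectly mappable reads per library (placeholder values).
mappable_reads = pd.Series({"xylem": 4_000_000, "leaf": 6_500_000})

# miRNA expression = raw abundance / number of mappable reads * 1,000,000 (reads per million).
rpm = raw_counts.div(mappable_reads, axis=1) * 1_000_000
print(rpm.round(2))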
Hierarchical clustering of the four sequenced datasets based on annotated miRNAs and expression levels of these miRNAs was performed using MATLAB's Clustergram algorithm (Figure 4) [23,24]. Hierarchical clustering indicates that the xylem and MTX are the most similar datasets based on miRNA expression patterns (Figure 4). Xylem and MTX miRNAs and their targets All plant cell walls are composed of essentially the same basic components [2]. What commonly distinguishes one cell wall, and often one cell type, from another is the proportion and arrangement in which these building blocks are deposited [2]. Understanding the genetic regulation that controls the deposition process provides insight into the regulation of secondary and primary wall biosynthesis and holds the potential to facilitate manipulation of the process, a subject with important economic potential. For these reasons, we would like to highlight several predicted miRNA-target interactions that are uniquely associated with xylem and MTX. A total of 57 xylem-enriched miRNAs were identified in this study, of which 11 miRNAs were enriched in MTX. Of the xylem- or MTX-enriched miRNAs, 12 can be grouped into more broadly conserved miRNA families. One MTX-enriched miRNA is more broadly conserved while six more broadly conserved miRNAs are enriched in xylem. Below we discuss the potential significance of predicted miRNA-targeted genes that have been implicated in wood formation. All predicted miRNA-target interactions described below are for miRNAs that to date have only been identified in P. trichocarpa. ptc-miRX50 targeting of XTH16. A MTX-specific miRNA, ptc-miRX50, is predicted to target XTH16, which encodes a xyloglucan endotransglycosylase/hydrolase (XTH). This predicted interaction has a category score of 3.5 and ptc-miRX50 is expressed at a moderate to high level (normalized expression of 30.4 - Figure 3). This predicted interaction in MTX is of particular interest given the role of xyloglucan in tension wood. Tension wood is a special type of wood that contains gelatinous fibers (G-fibers), which are found on the upper side of branches and contain an increased proportion of highly oriented cellulose. As the cellulose fiber cells swell, a hoop stress is generated, resulting in the contraction of the entire G-fiber. The asymmetric distribution of G-fiber cells on the tree limb as a whole - the G-fiber cells being concentrated on the top of the branch - results in a drooping stem being pulled up when the G-fibers contract longitudinally. In addition to highly oriented cellulose, xyloglucan has been found to be the predominant component of the cellular matrix of G-fibers [25], and both in situ hybridization and antibody labeling have localized XTH to the G-fiber cells [25]. The biochemical function and localized expression of XTH led Nishikubo [25] to hypothesize that XTH could play a role in repairing xyloglucan cross-links as the G-fibers are shrinking in tension wood. It is possible, therefore, that ptc-miRX50 plays a role in G-fiber formation and function by modulating levels of XTH16. Predicted targeting of NAC domain transcription factors. In addition to predicted targeting of XTH16, ptc-miRX50 (MTX-enriched) is predicted to target a NAC domain transcription factor, NAC083, with an expectation value of 3.5. A second NAC domain transcription factor, NAC050, is also a predicted target of the xylem-specific miRNA, ptc-miRX87 (normalized expression level 4.6 and target expectation score of 3.5 - Figure 3). 
While NAC transcription factors are known to play essential roles in regulating secondary growth [26,27], little is known about NAC050 or NAC083 in P. trichocarpa. It is interesting to note that while certain NAC genes are known targets of the miR164 family [6], neither NAC050 nor NAC083 are targets of this deeply conserved miRNA family. Furthermore, the predicted miRNA binding sites of ptc-miRX87 and ptc-miRX50 in NAC050 and NAC083, respectively, are not conserved across other related NAC genes in the angiosperms. A third miRNA, xylem-enriched ptc-miRX73, is predicted to target a vascular-related NAC transcription factor called VND7 with a target expectation score of 4.5. ptc-miRX73 has a normalized expression value of 3.1 (Figure 3). VND7, a NAC domain transcription factor involved in xylem vessel differentiation [28], has been a gene of considerable interest in wood formation [26,27]. A recent paper by Yamaguchi [28] identified a range of genes that are directly regulated by VND7, including several IRX genes and an XCP1 cysteine protease. IRX genes play a role in secondary wall formation [29] while XCP1 genes play a role in programmed cell death [30]. ptc-miRX41 targeting of a cellulose synthase CSLD4. Cellulose is a basic component of plant cell walls that is produced by the cellulose synthase complex. ptc-miRX41 (MTX-enriched) is predicted to target a cellulose synthase gene, CSLD4, with an interaction score of 3.5. ptc-miRX41 is specifically expressed at a moderate level in MTX (normalized expression of 6.9 - Figure 3). Discussion Through the use of high-throughput sRNA sequencing, we have significantly expanded the breadth and depth of miRNA annotation in P. trichocarpa, annotating a total of 155 new miRNAs that are now deposited in miRBASE. This includes the first sampling of reproductive tissue in P. trichocarpa. The dramatic improvement in the breadth of tissue examined greatly increases the significance of identified xylem- and MTX-enriched miRNA-target interactions. These are likely to be of considerable economic and bioenergetic interest given their potential role in regulating wood development. Thus, the public availability of these datasets will promote a wide range of research in a critical model for woody plants. Materials and Methods Pooled sRNA library sequencing sRNA sequencing of pooled tissue was performed using SOLiD ABI sequencing technology. The pooled P. trichocarpa tissue, which included growing vegetative apices, male flowers, female flowers, female apical buds, and male apical and lateral buds, was sequenced using the SOLiD ABI platform. The diversity of the tissue used to make the pooled library required a variety of growth conditions including field, greenhouse, and regulated growth chambers. RNA was collected using the Plant RNA Purification Reagent (Invitrogen). The library was prepared with the SOLiD Total RNA-Seq kit using the small RNA variant protocol per the manufacturer's instructions. Adaptor sequences were removed and color-space converted to base-space with custom perl scripts. Tissue specific sRNA libraries Tissue-specific sRNA sequencing data were downloaded from GEO: leaf-specific sRNA, GSM717875; xylem sRNA, GSM717876; and mechanically treated xylem, GSM717877 [31]. Plant RNA Purification Reagent (Invitrogen) was used to collect RNA and libraries were prepared by Illumina (Hayward, CA). Illumina SBS sequencing technology was used to sequence these libraries. Adaptor-trimmed sequences were downloaded and used for analyses. miRNA annotation The most current P. 
trichocarpa genome, version 156, available on www.phytozome.com, was used as a reference. Annotation of miRNAs was performed using miRCAT [14]. miRCAT is an online tool developed to annotate miRNAs based on next-generation high-throughput sequencing data. Default miRCAT [14] options were used for initial miRNA annotation. These requirements adhere to the criteria of plant miRNA annotation described by Meyers et al. [8], except for the minimum number of paired bases (17 nt) in the miRNA region of a folded hairpin and miRNA processing precision. Meyers et al. [8] require that there be no more than 4 unpaired bases in the miRNA region of the hairpin. For 21 nt miRNAs the miRCAT pipeline adheres to the Meyers et al. [8] criteria. However, given the requirement of a minimum of 17 paired bases, it is possible that a 22 nt miRNA has 5 unpaired bases in the miRNA region. To account for this possibility and verify that all annotated miRNAs adhere to Meyers et al. [8], folding of all pre-miRNAs was performed and unpaired bases in the miRNA region were counted. miRNA processing precision is a key criterion in annotating miRNAs [8] and is not explicitly taken into account in the miRCAT pipeline. Processing precision was calculated using custom perl scripts. Small RNA reads from a given library were aligned to miRNA precursor sequences (Tables S1, S2, S3, S4). Raw abundance values of each read were summed for half of the hairpin. The raw abundance value of the predicted miRNA was divided by the total raw abundance for that half of the hairpin to give a processing precision value. All miRNAs with processing precision below 25% were discarded. This criterion guarantees that the miRNA sequence represents a quarter or more of the total reads on the miRNA half of the hairpin.
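As a rough illustration of the processing-precision filter described above, the sketch below computes the fraction of arm reads accounted for by a candidate mature miRNA and applies the 25% cutoff; the read positions and abundances are hypothetical, and the actual analysis was carried out with custom perl scripts against aligned precursor sequences.

# Hypothetical reads aligned to one arm of a candidate precursor:
# (start position within the precursor, raw abundance). Not real data.
arm_reads = [(12, 850), (13, 60), (14, 25), (20, 40)]
candidate_abundance = 850     # raw abundance of the predicted mature miRNA read

total_arm_abundance = sum(abundance for _, abundance in arm_reads)
processing_precision = candidate_abundance / total_arm_abundance

# Keep the candidate only if the mature miRNA accounts for at least 25% of the arm reads.
keep = processing_precision >= 0.25
print(f"precision = {processing_precision:.2%}, keep = {keep}")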
v3-fos-license
2018-12-16T07:44:30.992Z
2016-01-01T00:00:00.000
55455986
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2016/12/epjconf_nn2016_09002.pdf", "pdf_hash": "72d096b11703706b4a6ee8d8f50dd9ce1e4ba450", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44927", "s2fieldsofstudy": [ "Physics" ], "sha1": "0f5e507b994c64f231a1a3d194a49db4eb5aea42", "year": 2016 }
pes2o/s2orc
LUNA: Present status and future prospects One of the main ingredients of nuclear astrophysics is the knowledge of the thermonuclear reactions responsible for powering the stellar engine and for the synthesis of the chemical elements. At astrophysical energies the cross section of nuclear processes is extremely reduced by the effect of the Coulomb barrier and often extrapolations are needed. The Laboratory for Underground Nuclear Astrophysics (LUNA) is placed under the Gran Sasso mountain. Thanks to the environmental background reduction provided by its position, many reactions involved in hydrogen burning have been measured directly at astrophysical energies. Based on this progress, there are currently efforts in several countries to construct new underground accelerators. The exciting science that can be probed with these new facilities will be highlighted. Introduction Nuclear processes generate the energy that makes stars shine. Moreover, they are responsible for the synthesis of the elements (and isotopes) in stars. As a matter of fact, hydrogen, helium and all isotopes up to lithium and beryllium are synthesized during Big Bang Nucleosynthesis (BBN). All other nuclei are produced during the different characteristic phases of stellar evolution [1]. The understanding of these nuclear processes is the goal of nuclear astrophysics and, in particular, the knowledge of the nuclear cross-sections involved in those processes. At astrophysical energies the cross section is highly reduced by the effect of the Coulomb repulsion and the nuclear reactions can occur only via the tunnel effect. Due to these small cross-section values, the rate of the reactions, characterized by a typical energy release of a few MeV, is too low, down to a few events per year, to stand out from the laboratory background. In many cases it is not even possible to reach energy values close to the Gamow peak, and extrapolations are needed, leading to substantial uncertainties. A way to handle that background problem is to go to an underground environment. As a matter of fact, the natural shielding provided by an underground site guarantees a reduction of the cosmic-ray flux by orders of magnitude, which has been key to the success of underground experimental nuclear physics. LUNA [2] is placed under the Gran Sasso National Laboratories of INFN. Two accelerators have been used over the years: first a 50 kV accelerator (hereafter LUNA1) [3] and then a 400 kV accelerator (hereafter LUNA2) [4]. Under the Gran Sasso Laboratory the muon flux is reduced by a factor 10 6 and the neutron flux by a factor of 1000 [5,6]. Further background reduction in the region below 3 MeV in the gamma spectrum can be achieved by implementing shielding made of copper and lead [7]. A review of the results achieved by the LUNA collaboration is presented in this paper, combined with a discussion of the future projects for underground nuclear astrophysics with a MV accelerator. Solar hydrogen burning Hydrogen burning in the Sun proceeds mainly by the proton-proton chain, with a 0.8% contribution from the carbon-nitrogen-oxygen cycle (CNO cycle) [8]. The basic processes are by now well understood, leading to the so-called standard solar model [9] that explains both helioseismological data and neutrino observations. The main uncertainty affecting this model, the solar neutrino puzzle, has been spectacularly solved by large neutrino detectors [10-12, e.g.] showing that the missing solar neutrinos have undergone flavour oscillation. 
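For readers less familiar with how the extrapolations mentioned in the Introduction are handled, the cross section at sub-Coulomb energies is conventionally written in terms of the astrophysical S-factor, which factors out the steep tunnelling suppression. This is the standard textbook parametrization, not a result specific to LUNA:

\sigma(E) = \frac{S(E)}{E}\,\exp\bigl(-2\pi\eta(E)\bigr), \qquad 2\pi\eta(E) = \pi\,\alpha\,Z_1 Z_2\,\sqrt{\frac{2\mu c^2}{E}},

where Z_1 and Z_2 are the charge numbers of the interacting nuclei, \mu their reduced mass and \alpha the fine-structure constant. The slowly varying S(E), rather than \sigma(E) itself, is the quantity that experiments measure and, when necessary, extrapolate down to the Gamow peak.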
LUNA started its work by studying the 3 He( 3 He,2p) 4 He reaction, since there was discussion of a possible resonance at the Gamow peak energy [13,14]. Using the LUNA1 50 kV accelerator, this reaction was studied directly at the energy of the Gamow peak, ruling out the existence of the resonance [15]. The 2 H(p,γ) 3 He reaction, responsible for the production of 3 He, was also studied at LUNA1 [16]. The neutrino fluxes emitted by the Sun are strictly correlated with the nuclear processes involved in hydrogen burning. The LUNA2 program was focused on these processes, achieving important results. The 3 He(α,γ) 7 Be reaction was studied by using both prompt gamma detection [17] and the activation technique [18], finding perfect agreement between the two methods. This result was important not only to reduce the systematics, but also to solve the discrepancy previously shown in experiments based on the two different techniques. The CN neutrino fluxes are governed by the 14 N(p,γ) 15 O reaction. It is the bottleneck of the first CNO cycle and therefore the 13 N and 15 O neutrinos are controlled by this reaction. LUNA studied this reaction in depth, finding that the S-factor was a factor of two lower than reported in the NACRE database [19]. These results are shown in [20] and references therein. A new, precise knowledge of the 14 N(p,γ) 15 O cross section is needed to address the so-called Solar Composition Problem [8]: the conflict between helioseismology and the new metal abundances (i.e. the amount of elements different from hydrogen and helium) that emerged from improved modelling of the photosphere [21]. As a matter of fact, the CNO neutrino flux is decreased by about 30% in going from the high- to the low-metallicity scenario. This way it will be possible to test whether the early Sun was chemically homogeneous [22], a key assumption of the standard Solar Model. In order to reduce the nuclear uncertainties in the solar model a new measurement was performed, reaching the final value of S 1,14 (0) = 1.57±0.13 keV barn [23,24]. Second, third CNO cycles, and the Mg-Al cycle In recent years the LUNA collaboration has focused its attention on several reactions involved in hydrogen burning in Nova explosions. The first reaction studied in this program was 15 N(p,γ) 16 O [25]. This is the link from the first to the second CNO cycle and it was studied intensively at the LUNA accelerator. Two new experiments were performed using 15 N enriched solid targets ([26] and references therein). The LUNA measurements fully cover the Gamow peak for Nova explosions, where 15 N(p,γ) 16 O is important, and the cross section was found to be lower by about a factor of 2 with respect to what is reported in the NACRE database [19]. This leads to a reduction of the 16 O produced by Novae of about 40% [26]. The 17 O(p,γ) 18 F reaction was investigated from 2011 to 2013. In particular, the ratio between the reaction rates of the 17 O(p,α) 14 N (Q = 1.2 MeV) and 17 O(p,γ) 18 F (Q = 5.6 MeV) channels is one of the most important parameters for the galactic synthesis of 17 O, the stellar production of radioactive 18 F, and for the predicted O isotopic ratios in presolar grains [27,28]. 
Since 18 F is a radionuclide with a half-life of ≈110 min, the cross section has been derived both by detecting the prompt gamma rays and by counting the 511 keV γs emitted following 18 F decay. The results are in perfect agreement, considerably reducing the systematic uncertainties [29]. The LUNA results affect not only the direct-capture evaluation; the strength of the 183.3 keV resonance was also measured, with a value of ωγ = (1.67±0.12) μeV. As a matter of fact, the LUNA measurements cover the whole Gamow peak relevant to Nova scenarios, reducing by a factor of 4 the uncertainty on this reaction in stellar models, and in particular on the oxygen and fluorine isotopes produced in Nova explosions [29,30]. The very low uncertainty obtained in this experiment was possible thanks to an intensive study of the target, realised and tested directly by the LUNA group with IBA and SIMS techniques [31]. To also measure the 17 O(p,α) 14 N reaction, a new chamber has been constructed which allows 8 silicon detectors to be placed in backward directions. The setup is described in detail in [32]. The study of the CNO cycles is the natural precursor to the study of hydrogen burning in the Ne-Na and Mg-Al cycles. One of the most interesting reactions of the Ne-Na cycle is 22 Ne(p,γ) 23 Na. This is one of the slowest reactions of the entire cycle and dominates the uncertainty for the production of many isotopes from neon to aluminum in different stellar scenarios. This is due to the presence of many resonances at stellar energies for which only upper limits were given. With a setup characterized by two fully shielded high-purity germanium detectors [33], this reaction has been studied and several resonance strengths measured, drastically reducing the uncertainties on the reaction rate [34]. The problem of 26 Al production is one of the most interesting cases [35]. LUNA precisely measured several resonances for 24,25 Mg(p,γ) 26,27 Al in order to reduce the uncertainties on those reactions, as required by the astrophysical models [36,37]. The impact of the LUNA results is discussed in detail in a recent work [38]. It is worth mentioning that the 26 Al uncertainty in C/Ne explosive burning is dominated by the 25 Mg(α,n) 28 Si reaction, as discussed in [39] and references therein. Big Bang nucleosynthesis The 3 He(α,γ) 7 Be reaction has an important role in solving the problem of the Spite plateau [40]. LUNA measured this reaction in the Gamow peak for Big Bang Nucleosynthesis, reducing the uncertainties to 3% overall. Another problem concerning lithium isotopes has been raised recently: 6 Li has been measured to be 3 orders of magnitude higher than expected from BBN [41,42]. For 6 Li production in the Big Bang, the main nuclear physics unknown is the 2 H(α,γ) 6 Li reaction rate. The setup used to study this reaction has already been described in detail [43]. A long and detailed study of this background was required in order to perform the analysis of the data [43,44]. In this way a possible nuclear solution of the 6 Li problem has been ruled out by LUNA. 
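To put quoted resonance strengths such as ωγ = (1.67±0.12) μeV into context, the contribution of a narrow, isolated resonance to the thermonuclear reaction rate follows directly from its strength and energy; in the usual notation (a standard textbook relation, not specific to the LUNA analyses),

N_A \langle \sigma v \rangle_{\mathrm{res}} \;\propto\; (\mu T)^{-3/2}\, \omega\gamma\, \exp\!\left(-\frac{E_r}{k_B T}\right),

where E_r is the resonance energy, T the stellar temperature and \mu the reduced mass. A measured ωγ therefore translates directly into the stellar rate, which is why determining the strengths of low-energy resonances, as done for 17 O(p,γ) 18 F and 22 Ne(p,γ) 23 Na, directly reduces the reaction-rate uncertainties quoted above.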
4 Science case for a future higher-energy accelerator underground Recent advances in astronomy and astrophysics require nuclear data at energies that are higher than the high-energy limit of LUNA2. Most notably, the 12 C(α,γ) 16 O reaction still eludes experimental and theoretical efforts to pin down its precise rate. This reaction, together with the triple-α reaction, determines the ratio of carbon to oxygen at the end of helium burning, a value that has wide-ranging impacts on the nucleosynthesis of heavier elements. Whereas a direct 12 C(α,γ) 16 O study at the relevant energy of 300 keV is impossible due to the forbiddingly low absolute yield, a study in a low-background environment such as LUNA at higher energies can help improve the necessary extrapolations by providing constraints at energies where there are currently no data. The study of this reaction relies on a precise knowledge of the targets, since the background induced by the parasitic (α,γ) reaction on 13 C can overwhelm the signal of the 12 C(α,γ) 16 O reaction if the isotopic ratio 12 C/ 13 C is less than 10 5 (at least three orders of magnitude higher than in natural carbon). The LUNA collaboration has started a deep investigation of 12 C enriched targets and their stability, performing analysis tests on different backings and cleaning techniques and studying the behaviour of the produced targets under irradiated charge. Those tests are performed at the Laboratori Nazionali di Legnaro and will continue in 2015 in order to reach a complete understanding of the targets and to keep the production techniques used in their creation under better control. This work is essential for the success of the 12 C(α,γ) 16 O cross section measurements. Another important open issue of nuclear astrophysics is the neutron source reactions, in particular the 13 C(α,n) 16 O and 22 Ne(α,n) 25 Mg reactions. They are responsible for the production of neutrons involved in the slow neutron capture process, called the astrophysical s-process. Whereas the subsequent neutron-capture reactions are the subject of intensive experimental study, the reactions actually producing the neutrons have so far not been measured in the relevant energy range; they should be addressed by an underground accelerator. A third topic is to complement, at higher energy, some of the proton- and α-capture reactions studied at the LUNA2 accelerator. Such a continuation is particularly important for the Big Bang reactions 3 He(α,γ) 7 Be and 2 H(α,γ) 6 Li, where the present LUNA2 400 kV machine can only cover the lower part of the relevant energy region. 
Future underground accelerator facilities Based on the successes of the LUNA collaboration, several efforts are underway around the world to install high-current, stable-beam accelerators in underground sites. The LUNA-MV project was started in order to install a 3 MV machine in the underground laboratories of Gran Sasso. The new accelerator has already been financed by the Italian government and it should be installed in the coming years at the Gran Sasso Laboratory. The synergy between the existing LUNA2 and the new LUNA-MV accelerator will allow reaction studies to be performed over a wide range of energies with a complete understanding of the setups involved. The planned DIANA facility at the Deep Underground Science Laboratory DUSEL in the United States also includes a megavolt and a lower-energy machine. Another project is under discussion at the Canfranc underground laboratory in the Pyrenees, Spain. As part of a staged approach, even an accelerator laboratory in a shallow-underground facility such as Felsenkeller (Dresden, Germany) is under consideration. At present, the existing 400 kV LUNA2 machine continues the scientific program outlined here. The next few years will show where this highly successful approach will eventually be complemented by one or more higher-energy accelerators underground. The technique is sufficiently mature not only to address the data needs of the astrophysics community, but also to benefit the astroparticle and other communities. Financial support from K101328, NN83261, DFG (BE 4100/2-1), and NAVI is also gratefully acknowledged.
v3-fos-license
2018-12-02T16:55:09.748Z
2018-11-26T00:00:00.000
53726218
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/AC48EB9F94A7655A509584DAB09BDD34/S0033291718003161a.pdf/div-class-title-auditory-and-visual-hallucination-prevalence-in-parkinson-s-disease-and-dementia-with-lewy-bodies-a-systematic-review-and-meta-analysis-div.pdf", "pdf_hash": "e83a07e55b6514f2def48b633e6a53591cf6b7f2", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44929", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3df50467c3502ea281bd35ff7698584f640e3f9f", "year": 2018 }
pes2o/s2orc
Auditory and visual hallucination prevalence in Parkinson's disease and dementia with Lewy bodies: a systematic review and meta-analysis Background Non-motor features of Parkinson's disease (PD) and dementia with Lewy bodies (DLB), such as auditory hallucinations (AH), contribute to disease burden but are not well understood. Methods Systematic review and random-effects meta-analyses of studies reporting AH associated with PD or DLB. Prevalence of visual hallucinations (VH) in identified studies meeting eligibility criteria were included in meta-analyses, facilitating comparison with AH. Synthesis of qualitative descriptions of AH was performed. PubMed, Web of Science and Scopus databases were searched for primary journal articles, written in English, published from 1970 to 2017. Studies reporting AH prevalence in PD or DLB were screened using PRISMA methods. Results Searches identified 4542 unique studies for consideration, of which, 26 met inclusion criteria. AH pooled prevalence in PD was estimated to be 8.9% [95% confidence interval (CI) 5.3–14.5], while in DLB was estimated to be 30.8% (±23.4 to 39.3). Verbal hallucinations, perceived as originating outside the head, were the most common form of AH. Non-verbal AH were also common while musical AH were rare. VH were more prevalent, with an estimated pooled prevalence in PD of 28.2% (±19.1 to 39.5), while in DLB they were estimated to be 61.8% (±49.1 to 73.0). Meta-regression determined that the use of validated methodologies to identify hallucinations produced higher prevalence estimates. Conclusions AH and VH present in a substantial proportion of PD and DLB cases, with VH reported more frequently in both conditions. Both AH and VH are more prevalent in DLB than PD. There is a need for standardised use of validated methods to detect and monitor hallucinations. Introduction Parkinson's disease (PD) and dementia with Lewy bodies (DLB) are neurodegenerative diseases associated with α-synuclein dysfunction. Estimates suggest PD prevalence is 1% in people over 60 (De Lau and Breteler, 2006), while DLB has a prevalence of 0.4% in people over 65 (Vann Jones and O'Brien, 2014). Both conditions are characterised by motor dysfunction but non-motor features contribute extensively to their presentation and disease burden. Hallucinations, spontaneous aberrant perceptions, occur in a significant proportion of cases (Diederich et al., 2009). Hallucinations can be induced by medications such as anticholinergics (Celesia and Wanamaker, 1972), dopamine agonists (Baker et al., 2009) and a range of medications modulating diverse neurochemical pathways (Porteous and Ross, 1956;Lees et al., 1977;Gondim et al., 2010;Friedman et al., 2011;Wand, 2012). This presents challenges to determining the causes and nature of AH in PD and DLB. The majority of hallucinations in PD and DLB are chronic, recurring and progressive in spite of stable medication regimens (Fénelon et al., 2000). Indeed, cognitive, sensory and circadian aspects also contribute to hallucinosis (Mosimann et al., 2006). Neuroleptics are often administered on presentation of AH, yet have moderate efficacy and potentially severe side effects, including increased mortality (McKeith et al., 1992a;Weintraub et al., 2016). Visual hallucinations (VH) constitute a core feature of DLB diagnosis (McKeith et al., 2017) and have been described as a hallmark of PD (Onofrj et al., 2007). 
An associated, but distinct condition, PD dementia (PDD) (Dubois et al., 2007), also presents with motor and non-motor features, including hallucinations, but is under-reported in the literature. In PD, PDD and DLB, auditory hallucinations (AH) are generally considered of secondary concern, in spite of the progressive nature of AH, their contribution to loss of insight, decreased quality of life and consequent influence on the decision to move patients into long-term care (Goetz and Stebbins, 1993;Aarsland et al., 2000). Previous reports vary widely in reported prevalence of AH in PD from 2% (Leu-Semenescu et al., 2011) to 45% (Amar et al., 2014;Llorca et al., 2016). Prevalence rates also range widely in DLB, from 18% (Suárez-González et al., 2014) to 43% (Piggott et al., 2007). Previous studies reporting prevalence of AH have predominantly been cross-sectional, with limited focus on the nature of AH reported. Furthermore, methods of determining the presence of hallucinations are diverse, potentially leading to differing reporting rates. Aims of this study: This complex picture suggested a need to characterise, with increased precision, the prevalence and nature of AH in PD and DLB. Therefore, in this study, we aimed to conduct a systematic review and meta-analysis of studies reporting AH prevalence in PD and DLB. Furthermore, we aimed to assess the types of AH in both conditions and compared their prevalence with that of VH, which are more commonly investigated. Methods Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used to standardise the conduct and reporting of this study. The protocol for this study was registered in advance on PROSPERO (registration number: CRD42017067337). Ethical approval for this study was awarded by the Faculty of Science and Engineering Ethics Committee at Manchester Metropolitan University (EthOS Reference Number: 0240). Search strategy Literature searches for candidate studies were undertaken in the following databases: PubMed, Web of Science and Scopus. Search terms were text words: auditory, auditory hallucinations, hearing, dementia, Lewy bodies, dementia with Lewy bodies, Lewy body dementia, Parkinson's disease. The Boolean operator AND was used to maximise the number of identified papers containing combinations of search terms. A search matrix was used to ensure all paired combinations of search terms were searched for in each database. Study selection The search was conducted from December 2016 to November 2017. Papers published from 1 January 1970 to 13 November 2017 were considered for inclusion. Titles and abstracts were examined to remove duplications and irrelevant studies. To be included, studies needed to (i) be written in English, (ii) report measures of AH prevalence in patients with PD or DLB, and (iii) be structured as a prospective cohort, case-control or crosssectional study. Unpublished data were not pursued or included. Both investigators examined full-text versions of studies meeting the above criteria to assess compliance with inclusion criteria and extract data. We reviewed reference lists of all included articles to identify other potentially eligible studies. 
Data extraction and risk of bias assessment Both authors extracted data from included papers, including study authors; publication year; study title; journal; volume; issue; pages; study design; number of participants; number of female participants; number of participants diagnosed with PD/DLB; mean age of disease onset; number of participants with AH and/or VH; qualitative descriptions of AH; time window for hallucination presentation; method of hallucination assessment. Both reviewers independently evaluated risk of bias for each study using criteria adapted from the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies (https:// www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools). One point was awarded for each question we felt could be answered in the affirmative. Our implementation of this tool assessed study quality across nine domains: aims/objectives stated; experimental protocol appropriately described; selection bias (see Results for details); participant inclusion/exclusion criteria (selected from similar populations/appropriate diagnosis of PD or DLB); statistical analyses appropriate; condition assessed prior to outcomes; dropouts reported; timeframe for hallucination presentation sufficient; outcome measures clearly defined, valid, reliable and implemented consistently across all study participants; and presence of detailed qualitative description of AH. Scores were summed for each study to provide an overall score of bias and quality. No weighting was used in bias assessment. Studies were then grouped into those of highest (scores = 9/9), high (score = 8/9), moderate-to-high (score = 7/9), moderate (score = 6/9) and poor (score = 1-5/9) quality. Poor quality studies were excluded (Inzelberg et al., 1998;Katzen et al., 2010;Goetz et al., 2011;Grau-Rivera et al., 2013). A comparison of all binary decisions made found 77.4% agreement between the authors. Discrepancies were settled by discussion and consensus. Data analysis Our primary outcome measure was AH prevalence in patients with PD or DLB. All included studies also reported prevalence of VH, which we also extracted as a secondary outcome. Where studies reported longitudinal data, we extracted the maximum values reported. Prevalence estimates in longitudinal studies were not higher than other study designs. Indeed, one longitudinal estimate (Goetz et al., 1998) reported the lowest prevalence of both auditory and VH in PD, suggesting this method did not bias our findings. We conducted meta-analyses of AH and VH in Lewy body disease (LBD; pooled PD and DLB), and PD and DLB. Due to the range of study designs and different patient populations we anticipated would be included in our study, and the consequent assumption that effect sizes would be sampled from a population of effect sizes that could vary due to factors other than just sampling error, we planned to carry out random-effects models a priori. We constructed random-effects models using Comprehensive Meta-Analyses software (Borenstein et al., 2009). We calculated pooled prevalence estimates with 95% confidence intervals (CIs) and assessed heterogeneity using the I 2 statistic. Possible publication bias was assessed via the fail-safe N, Begg's funnel plot and Begg and Mazumdar's rank correlation tests. If publication bias was suspected, we used Duval and Tweedie's trim and fill to adjust our prevalence estimates. 
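To illustrate the kind of random-effects pooling performed here, the sketch below implements a DerSimonian-Laird estimator on logit-transformed proportions and back-transforms the pooled estimate and 95% CI; the event counts are invented placeholders rather than data from the included studies, and the Comprehensive Meta-Analysis software may use a different transformation or estimator.

import numpy as np

# Hypothetical per-study data: patients with AH (events) and total patients (n).
events = np.array([12, 5, 30, 8, 20])
n = np.array([150, 60, 210, 95, 180])

# Logit-transformed prevalences and within-study variances.
p = events / n
y = np.log(p / (1 - p))
v = 1 / events + 1 / (n - events)

# DerSimonian-Laird estimate of the between-study variance (tau^2).
w = 1 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
k = len(y)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)

# Random-effects pooled estimate with 95% CI, back-transformed to a proportion.
w_star = 1 / (v + tau2)
y_pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
expit = lambda x: 1 / (1 + np.exp(-x))
pooled, lo, hi = expit(y_pooled), expit(y_pooled - 1.96 * se), expit(y_pooled + 1.96 * se)

# I^2 heterogeneity statistic.
I2 = max(0.0, (Q - (k - 1)) / Q) * 100
print(f"pooled prevalence = {pooled:.3f} (95% CI {lo:.3f}-{hi:.3f}), I2 = {I2:.1f}%")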
Outputs from these analyses were imported to an online forest plot generator to create figures (https://www.evidencepartners.com/resources/forest-plot-generator/). Meta-regression models were created to investigate the potential contribution of study-level covariates to the observed heterogeneity in our pooled prevalence estimates. Log pooled prevalence estimate was the dependent variable, while study quality score, mean age at disease onset and the use of validated methods to detect hallucinations were set as predictive variables. Due to the diversity of methods employed to detect hallucinations, it was not possible to compare each technique. However, a clear distinction could be drawn between those studies that employed validated methods [NeuroPsychiatric Inventory (NPI), Manchester and Oxford Universities Scale for the Psychopathological Assessment of Dementia (MOUSPAD), Columbia University Scale for Psychopathology in Alzheimer's Disease (CUSPAD), Psycho-Sensory hAllucinations Scale (PSAS), University of Miami Parkinson's Disease Hallucinations Questionnaire (UM-PDHQ), Parkinson's Psychosis Rating Scale (PPRS) or Queen Square Visual Hallucination Inventory (QSVHI)] and those that did not (Rush Hallucination Inventory, semi-structured interview, questionnaire, screening hospital records or diagnostic interview and checklist). We undertook sensitivity analyses to assess the robustness of our pooled prevalence estimate of AH and VH. We investigated the effect of year of publication by sequentially excluding studies published before 2000, 2005 and 2010. We also examined whether study design influenced outcomes by examining cross-sectional studies only and examined the effect of different quality score cutoff values for inclusion by sequentially excluding studies with scores of less than seven or eight out of nine. Study selection After duplicate removal, we identified 4542 unique articles through primary database searches. Screening titles and abstracts led to the elimination of 4499 irrelevant articles. Full-text versions of the remaining 43 potentially eligible articles were assessed. Of these, 13 did not meet inclusion criteria, leaving 30 articles, published between 1992 and 2016, in the qualitative synthesis. A further four articles were excluded from quantitative meta-analyses due to low-quality assessment scores (Fig. 1). This produced 26 studies eligible for inclusion in the meta-analysis of AH prevalence (online Supplementary Table S1). Characteristics of included studies Of the included studies, 19 were cross-sectional studies, four were longitudinal studies and three were case-control studies. These studies represent data from 10 countries, the majority of which were conducted in Europe (n = 15), while others were undertaken in North America (n = 6) and Asia (n = 5). Our quality assessment rated four studies as highest quality (score = 9/9), 12 as high quality (8/9), seven as moderate-to-high quality (7/9) and four as moderate quality (6/9) (online Supplementary Table S2). As a population, study quality was weakest in reporting of qualitative descriptions of hallucinations, with other areas of quality assessment scoring being consistently high among included studies. Demographics of dataset The mean age at onset of diagnosis was 61.9 years (S.D. = 7.6). 
We attempted to conduct group-wise comparisons between diagnoses of PD without dementia (PDWD), PDD and DLB; however, PDD and PDWD were only separately reported in one study; consequently, these data were pooled into one PD group. Data from 3774 patients were identified, of which 3420 (90.6%) had PD, with the remainder DLB, and 1178 (31.2%) were female (range = 3.3-60.4%). A higher proportion of females was found in the DLB group (mean = 48.6%; range = 20.0-56.1%) than in PD (mean = 30.6%; range = 3.3-60.4%). Mean age at onset for DLB was 74.1 years (S.D. = 8.1), while mean age at onset for PD was 58.8 years (S.D. = 10.6). Overall pooled prevalence of AH The overall random-effects model pooled prevalence of AH in LBD (Fig. 2) was 11.9% (95% CI 7.9-17.7). To compare the relative prevalence of AH in DLB and PD, two further random-effects models were constructed for each condition, independent of the other. Pooled prevalence of AH in DLB (Fig. 3a) was 30.8% (95% CI 23.4-39.3), while in PD (Fig. 3b) it was 8.9% (95% CI 5.3-14.5). Overall pooled prevalence of VH We also extracted information on VH from the 26 included studies. The study by Leu-Semenescu et al. (2011) was excluded from this analysis because, by design, all 100 of its PD patients had VH, introducing a selection bias; this left 25 studies in the analysis. The overall random-effects pooled prevalence of VH in LBD (Fig. 4) was 35.9% (95% CI 26.2-47.0). Pooled prevalence of VH in DLB (Fig. 5a) was 61.8% (95% CI 49.1-73.0), while in PD (Fig. 5b) it was 28.2% (95% CI 19.1-39.5). Low prevalence of single modality auditory or VH Pure sensory hallucinations (i.e. those in only one single sensory domain) were less common. There were two (0.6%) reports of pure AH in DLB and 23 (0.7%) in PD. Pure VH were more common than pure AH in both conditions, being found in 36 (10.2%) DLB cases and 122 (3.6%) PD cases. Qualitative analysis of included longitudinal studies (Graham et al., 1997; Goetz et al., 1998; Ballard et al., 2001; de Maindreville et al., 2005) revealed that pure VH tended to predate AH in both PD and DLB. As each condition progressed, AH tended to bind with recurrent complex VH to form multi-modal hallucinations that increased in prevalence from 1.5 to 10 years post baseline assessment (Goetz et al., 2011). Types of AH The rate of reporting qualitative descriptions of AH was low, with data only available from six studies; five of these described data from PD patients, while the other described data from DLB patients. The most commonly reported type of AH was verbal, reported in all six studies. Verbal AH were described as human voices originating outside the patient's head and outside the visual field, often indistinct or incomprehensible. Verbal AH were characterised as 'non-threatening', 'non-imperative', 'non-congruent' and 'non-paranoid'. Three studies reported the proportion of AH that were verbal (Fénelon et al., 2000; Amar et al., 2014; Suárez-González et al., 2014). Non-verbal sounds, such as inanimate (bullet fired, doorbell ringing, tinkling of bells, walking on steps, cracking sounds or squeaking) or animate sounds (dogs barking, tigers and lions roaring), were also common. Musical hallucinations were rare, being reported in only three patients by Fénelon et al. (2000), two of whom were described as 'deaf'.
Meta-regression analyses We observed considerable heterogeneity in all meta-analyses (I² range = 51.2-96.3), suggesting a large proportion of the observed variance may be due to real differences between studies. To investigate whether some of the observed heterogeneity could be explained by moderator variables, such as study quality score, mean age of disease onset or method of hallucination assessment, we constructed meta-regression models for those meta-analyses comprising sufficient study numbers (LBD and PD but not DLB). The results of these four models (online Supplementary Table S3) revealed that the use of validated hallucination assessment methods could explain a significant proportion of the variance for each meta-regression (R² range = 0.09-0.38), but study quality, mean age at disease onset and disease duration could not. The fail-safe N for our meta-analysis of AH in LBD was 5649 (Z = −28.9; p < 0.0001), while there was only minor asymmetry in Begg's funnel plot (online Supplementary Fig. S6A). Begg and Mazumdar's rank correlation test suggested that publication bias was not present (Kendall's τ-b = −0.23; p = 0.09) and employing Duval and Tweedie's trim and fill did not modify the random-effects effect size estimate. Similar values were found for our meta-analyses of AH in PD (online Supplementary Fig. S6B) and DLB (online Supplementary Fig. S6C), though trim and fill on the latter analysis imputed three studies, which increased the pooled prevalence estimate to 37.5% (95% CI 27.7-48.5). The fail-safe N for our meta-analysis of VH in LBD was 1280 (Z = −14.16; p < 0.0001), while there was only minor asymmetry in Begg's funnel plot (online Supplementary Fig. S6D). Begg and Mazumdar's rank correlation test suggested that publication bias was not present (Kendall's τ-b = −0.11; p = 0.44) and employing Duval and Tweedie's trim and fill did not modify the random-effects estimate. Similar values were found for our meta-analyses of VH in PD (online Supplementary Fig. S6E) and DLB (online Supplementary Fig. S6F), though trim and fill on the former analysis imputed four studies and increased the pooled prevalence estimate to 33.4% (95% CI 31.4-35.4). We further investigated the potential role of cognition in hallucination status using Mini-Mental State Examination (MMSE) scores. Sensitivity analyses We assessed the impact of year of study, study design and quality score on the robustness of our pooled prevalence estimates of auditory and VH in LBD. These analyses indicated that our estimates were robust, although they were a few percentage points lower than analyses only including studies published from 2010 onwards or analyses only including moderate-to-high and high-quality studies (online Supplementary Table S4). Discussion We report that AH and VH are present in a significant proportion of PD and DLB cases, with both forms of hallucination being more prevalent in DLB. We found that VH have a higher prevalence than AH in both conditions, both occurring at rates much higher than those found in the general population (Ohayon, 2000; Waters et al., 2018). Of note was the wide variety of methods used to determine the presence of hallucinations. We found that more recently published studies, using validated methods, produced higher estimates of hallucination prevalence, suggesting a need for wider adoption of such approaches. Taken together, these data demonstrate that AH have a higher prevalence in PD and DLB than commonly assumed.
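For readers unfamiliar with the publication-bias checks reported above, the fragment below sketches Begg and Mazumdar's rank correlation: each effect is standardized against the fixed-effect pooled mean and then rank-correlated with its variance using Kendall's tau. The effect sizes and variances shown are invented for illustration and are not values from these meta-analyses.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical logit-prevalence effects and their variances (illustrative values only).
effects = np.array([-2.1, -1.6, -2.4, -1.2, -1.9, -2.8])
variances = np.array([0.15, 0.08, 0.30, 0.05, 0.12, 0.40])

# Standardize deviations from the fixed-effect pooled mean (Begg & Mazumdar, 1994).
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
var_dev = variances - 1.0 / np.sum(weights)   # variance of (effect - pooled mean)
z = (effects - pooled) / np.sqrt(var_dev)

tau, p_value = kendalltau(z, variances)       # a small p suggests funnel-plot asymmetry
print(f"Kendall's tau-b = {tau:.2f}, p = {p_value:.2f}")
```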
Challenges to existing models of hallucinations Existing models of recurrent complex VH have considered VH to exist in isolation from other modalities. Some models highlight dysfunctional attentional, cognitive and perceptual networks (Collerton et al., 2005;Diederich et al., 2005;Shine et al., 2011). Our data suggest that in PD and DLB, most cases of VH progress to become multi-modal hallucinations, incorporating a bound AH to the VH (e.g. hallucinations of people progress such that they can be heard talking). Attentional-cognitive models could account for these observations; however, sensory deficits seem incongruous with bottom-up perceptual elements of existing models. The contribution of bottom-up sensory aspects to VH has been shown via studies detailing ocular (Urwyler et al., 2014) and occipital lobe dysfunction (Meppelink et al., 2009), while central, top-down contributions involving frontal (Sanchez-Castaneda et al., 2010) and temporal (Harding et al., 2002) lobes also play a role. However, hearing loss and auditory dysfunction are common at ages associated with PD and DLB diagnosis (Lin et al., 2011). It is therefore challenging to account for VH progressing to multimodal hallucinations, binding with AH, due to a visual perceptual deficit occurring in these cases, followed by an auditory deficit. Models of simple AH, such as tinnitus, incorporate loss of peripheral drive with adaptive changes in gain (Eggermont, 1990), reductions in inhibition throughout the auditory pathway (Wang et al., 2011) and mismatches with central predictive coding (Sedley et al., 2016), yet rarely do these changes lead to more complex AH. Attentional networks may tend to be directed more towards the visual scene, leading to more prevalent reporting of VH when these networks dysfunction. This may be due to attentional focus being more easily directed towards visual than auditory objects (Shinn-Cunningham, 2008). As widespread degeneration progresses, attentional deficits may facilitate widespread connectivity and phantom binding of VH with AH, perhaps acting via hyperexcitable cortical and subcortical networks (Grossberg, 2000;Robson et al., 2018). Strengths and limitations Our findings are supported by the large proportion of moderateto-high quality studies included in our meta-analyses (online Supplementary Table S2). The accuracy of our prevalence estimates are supported by the majority of studies being cross-sectional, the best experimental design by which to estimate prevalence (Mann, 2003). Furthermore, our sensitivity analyses showed that only including cross-sectional studies had little effect on our estimates (online Supplementary Table S4). However, the time window for detection of auditory or VH ranged up to 30 years post-diagnosis (Graham et al., 1997), limiting the temporal precision of our estimates. A selection bias may exist in our estimates due to studies selecting patients from movement disorder clinics with few community-based samples; future studies of different populations may allow insights into hallucinations in different populations. The use of various methods to identify hallucinations was a major contributor to the high degree of heterogeneity in our meta-analyses (online Supplementary Table S3). Meta-regression models found that validated methods produced higher prevalence estimates than non-validated, suggesting that there are advantages to such approaches. 
However, within the range of validated measures reported in our sample (UM-PDHQ, MOUSPAD, CUSPAD, PSAS, PPRS, NPI and QSVHI) there exist substantial differences in approach and outcomes. Comparisons between these approaches are beyond the scope of this study. Future work comparing these methods within the same study population may be useful. Method of hallucination assessment is an important issue, as most patients do not report AH when they first perceive them (Chou et al., 2005). This may be because AH are less easily identified than VH, but could also reflect patient knowledge that AH are commonly associated with psychiatric conditions. Our initial focus was to estimate AH prevalence in PD and DLB. As all studies included in this analysis also reported the number of participants who had VH, we also extracted and analysed these data. Numerous papers in the literature report VH but do not report other forms of hallucinations. Consequently, our estimates of VH do not contain all available evidence, but do provide comparison data that allow us to have confidence that VH are more common than AH. Across the timespan of included studies, diagnostic criteria for PD have largely remained unchanged, while there have been multiple iterations of the consensus criteria for DLB diagnosis, each of which has modified the specificity and sensitivity of this diagnosis (Rizzo et al., 2017). This may be another source of between-study heterogeneity; however, the low number of DLB studies included in our analyses did not allow meta-regression models to be constructed for these data. The lowest I² values we observed in our meta-analyses were found for DLB studies (Figs 3a and 5a), which argues against a substantial contribution of diagnostic criteria to the observed heterogeneity. Two potentially confounding covariates were the later age of diagnosis in DLB than in PD and the inclusion of lower-quality studies (online Supplementary Table S2). Importantly, neither of these was found to account for a significant proportion of the variance in our meta-regression models (online Supplementary Table S3). This does not exclude the possibility that age of diagnosis, or age per se, contributes to hallucinosis. Indeed, the presentation of VH in age-matched DLB and PDD patients shows extensive overlap, suggesting age-related changes may contribute to their generation (Mosimann et al., 2006). We were able to include data from four longitudinal cohort studies; however, two of these studies reported data at 1 year post-diagnosis, meaning that estimates of point prevalence over time were not possible in the present study. There is evidence that hallucination point prevalence in PD increases over time to affect a majority of patients (Hely et al., 2008). Indeed, there is evidence that hallucinations increase in prevalence with age, post-PD diagnosis (Graham et al., 1997; Biglan et al., 2007), with VH progressing towards polysensory phenotypes (Goetz et al., 2011). Furthermore, once perceived, hallucinations generally recur and are accompanied by a lack of insight (Goetz et al., 2006), leading to increased risk of requiring placement in care facilities (Goetz and Stebbins, 1993; Aarsland et al., 2000). Types of AH AH in LBD are complex, typically polymodal and varied in their presentation, although there is a paucity of high-quality qualitative descriptions. Most common were verbal AH, perceived as originating outside the head, which differentiates verbal AH in PD or DLB from those found in schizophrenia.
Interestingly, of the three studies to report relative rates of different types of AH, two found that verbal hallucinations formed the majority in PD (Fénelon et al., 2000; Amar et al., 2014), while Suárez-González et al. (2014) reported that these were a minority in DLB. This may suggest a difference in AH presentation between the two conditions. Non-verbal AH (acouasms) were also common and complex, whether animate or inanimate. These findings suggest auditory cortex and wider temporal and frontal lobe involvement in AH in PD and DLB, a speculation that is supported by neuroimaging data (Matsui et al., 2006). Simpler acouasms were also reported. The potential overlap between acouasms and tinnitus suggests that our estimated pooled prevalence of AH may be lower than the true prevalence in the population. Indeed, a recent cross-sectional study found that in a sample of 1000 patients in a cognitive neurology clinic, verbal and musical hallucinations had a prevalence of 0.9%, while tinnitus was present in 6.9% (Bayón et al., 2017). AH were described as providing a soundtrack to VH (Fénelon et al., 2000), such as when a patient hears the conversations of visually hallucinated people. This presentation is of note, as numerous authors described VH as preceding AH, while the polymodal combination of VH and AH may provide diagnostic utility in differentiating cognitive and functional impairment in DLB from Alzheimer's disease (Suárez-González et al., 2014). These data agree with early operational criteria for DLB diagnosis (McKeith et al., 1992b). While the first consensus guidelines for DLB diagnosis included AH as supportive features (McKeith et al., 1996), more recent updates have removed them from consideration (McKeith, 2006; McKeith et al., 2017). A recent analysis found that the consensus criteria for DLB had become more sensitive but less specific through these iterations, with little change in diagnostic accuracy (Rizzo et al., 2017). Whether the use of validated methods to detect AH is of any diagnostic utility in DLB or PD requires further investigation. It is interesting to note that two of the three participants who reported musical hallucinations in our sample were described as 'deaf' (Fénelon et al., 2000). Musical hallucinations have been associated primarily with hearing impairment (Gordon, 1997; Cope and Baguley, 2009; Perez et al., 2017), though they have been reported in PD (Ergün et al., 2009) and DLB (Golden and Josephs, 2015) without hearing impairment. The contributions and potential interactions between hearing impairment and PD or DLB require further investigation, as there are suggestions that hearing impairment may present as a non-motor feature of PD (Lai et al., 2014) and that hearing impairment may be more common in PD than in age-matched controls (Yılmaz et al., 2009). Conclusion This study is the first, to our knowledge, to summarise, synthesise and contrast evidence for AH and VH prevalence in PD and DLB. AH and VH contribute to disease burden in a significant proportion of LBD cases. Methods of identification and assessment of AH and VH require investigation to standardise measurements. Successful developments in this field may improve the accuracy of hallucination diagnosis and inform disease progression monitoring and interventions. Supplementary material. The supplementary material for this article can be found at https://doi.org/10.1017/S0033291718003161 Contributors.
CLE and LDO were involved in the conception, design, planning, data extraction, analysis, writing and overseeing completeness of the manuscript. Both authors gave final approval of the manuscript. Conflict of interest. None. Financial support. This work was supported by an Action on Hearing Loss Flexi grant to LDO (grant number F68_ORTON).
v3-fos-license
2018-04-03T02:21:07.857Z
2015-03-22T00:00:00.000
15226200
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/crionm/2015/163727.pdf", "pdf_hash": "9e2dc490123808a754d47ea1cd1c0e7760d378d9", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44934", "s2fieldsofstudy": [ "Medicine" ], "sha1": "9e2dc490123808a754d47ea1cd1c0e7760d378d9", "year": 2015 }
pes2o/s2orc
Case Report of a Patient with Left Ventricular Assistance Device Undergoing Chemotherapy for a New Diagnosis of Lung Cancer The optimal management of cancer in patients with severe heart failure is not defined. This issue is particularly challenging when a diagnosis of limited-stage small cell lung cancer (SCLC) is made incidentally in the context of evaluating patient for candidacy for cardiac transplantation. Limited-stage SCLC is typically managed on a curative therapeutic paradigm with combined modality approach involving chemotherapy and radiation. Even with excellent performance status and good organ function, the presence of severe cardiomyopathy poses significant challenges to the delivery of even single modality approach with chemotherapy or radiotherapy, let alone the typical curative combined modality approach. With mechanical left ventricular devices to provide cardiac support, treatment options for cancer in the setting of advanced heart failure may be improved. Here we discuss the therapeutic dilemma involving a patient with severe cardiomyopathy and left ventricular assistant device (LVAD) who was found to have limited-stage SCLC during the evaluation process for cardiac transplantation. Introduction The current era of novel therapeutics has enabled groundbreaking consequences in the world of medicine as survivorship rates from life-threatening conditions are improving. The implantation of left ventricular assistant devices (LVAD) is one such trailblazing treatment modality for heart failure. The LVAD has emerged as a bridge to transplantation until a donor heart is available as well as destination therapy in patients unfit for cardiac transplantation. This has significantly alleviated the mortality risks related to heart failure [1]. Advanced heart failure requires careful attention to fluid balance and treatments with any potential cardiac toxicity may lead to decompensation and death. Unique challenges may further arise in case of a new cancer diagnosis preceding the LVAD implant. We present a case of limited-stage small cell lung cancer (SCLC) of the lung in a patient with an LVAD, undergoing evaluation for cough while under consideration for cardiac transplantation. This paper aims to discuss the optimal management options for SCLC in light of the comorbidities present. Case Presentation We report a 57-year-old male with history of extensive prior tobacco use, nonischemic cardiomyopathy, and an Eastern Cooperative Oncology Group (ECOG) performance status (PS) of 3. After successfully completing his initial evaluation, he was deemed cancer-free and eligible for cardiac transplantation. Due to severe heart failure despite standard medications, he received mechanical cardiac support with an LVAD. Three months after surgery for the LVAD, he was noted to have mediastinal widening on a chest X-ray performed for dyspnea and cough. CT scans showed mediastinal lymphadenopathy without evidence of disease outside the chest (Figures 1 and 2). Renal and liver functions were within normal limits. The patient underwent mediastinoscopy and the pathologic examination was consistent with SCLC. Due to headaches and distended neck veins, he was evaluated for superior vena cava syndrome. The patient was removed from active consideration for cardiac transplantation. After extensive discussion with patient and his family, chemotherapy was administered while hospitalized for close monitoring. 
He received carboplatin (area under the curve (AUC) 5) on day 1 and intravenous (IV) etoposide 100 mg/m 2 on days 1-3. Patient received IV dolasetron 100 mg for 30 minutes on days 1-3 and IV prochlorperazine 10 mg every 6 hours as needed. The patient did not receive any prophylactic antibiotics. Carboplatin was used instead of cisplatin due to concerns over aggressive hydration and inducing volume overload. The radiation oncologist had an extensive discussion with the patient and the multidisciplinary team including the LVAD manufacturer and provided the information about the risks, benefits, and complications of concurrent radiation treatments. The patient ultimately decided not to pursue any radiation treatment. The treatment course was complicated by cellulitis, neutropenic fevers with pseudomonas aeruginosa infection, and protracted nausea and vomiting. The patient was treated with prolonged IV antibiotics course including aztreonam and ciprofloxacin. The patient received oral dolasetron 100 mg on day 4 and subcutaneous pegfilgrastim 6 mg on day 4 after completion of chemotherapy. Anticoagulation with warfarin was started. Although the renal function remained within normal limits, the patient developed signs of worsening overload with left-sided pleural effusion and peripheral edema. Subsequently he became weaker with weight loss of about 25 pounds. After his first cycle of chemotherapy, the patient elected not to receive further chemotherapy and workup. Patient was discharged home under hospice care and passed away about 6 months after the administration of chemotherapy. Discussion and Conclusion To our knowledge, this case represents the first reported circumstance of chemotherapy administration to a patient with LVAD. As expected, chemotherapy administration was complicated by different challenges imposed by the severely compromised cardiac function in a patient with potentially curable cancer. With advanced technology and care, patients with an LVAD can be expected to survive for several years, and therefore this situation might be encountered more often and more data is needed to better understand the best ways to administer chemotherapy in this setting. Development of evidence-based guidelines for use of chemotherapy and radiotherapy use in this situation will likely be difficult due to lack of availability of high-quality data and management will have to rely on expert opinions, personal experience, and individualized patient choices. A multidisciplinary approach of care involving experienced providers (cardiologists, oncologists, radiation oncologists, pulmonologists, and others) in a tertiary specialized center is warranted for optimal outcomes. SCLC is divided into limited and extensive stage disease. The limited-stage disease is confined to an ipsilateral hemithorax which can safely be encompassed within a tolerable radiation field. The standard chemotherapy regimen consists of etoposide and a platinum agent [2]. Carboplatin is often used in place of cisplatin as it is known to reduce the risk of emesis, neuropathy, and nephropathy. However, the use of carboplatin carries a greater risk of myelosuppression [3]. Carboplatin does not require large fluid administration making it preferable in heart failure patients while cisplatin administration in contrast requires prolonged hydration of large amounts of fluid to maintain renal function [3,4]. 
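The carboplatin dose in this case was prescribed to a target AUC of 5. For illustration, AUC-based carboplatin dosing is conventionally calculated with the Calvert formula, total dose (mg) = target AUC × (GFR + 25); the sketch below applies it with an assumed GFR, which is not a value reported for this patient.

```python
def carboplatin_dose_mg(target_auc: float, gfr_ml_min: float, gfr_cap: float = 125.0) -> float:
    """Calvert formula: total carboplatin dose (mg) = target AUC (mg/mL*min) x (GFR + 25)."""
    # Many centres cap the GFR used in the calculation to limit overdosing
    # when the estimated renal clearance is very high.
    return target_auc * (min(gfr_ml_min, gfr_cap) + 25.0)

# Illustrative only: target AUC 5 with an assumed GFR of 70 mL/min.
print(f"Calculated carboplatin dose: {carboplatin_dose_mg(5, 70):.0f} mg")  # 5 x (70 + 25) = 475 mg
```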
Cisplatin use has also been associated with cardiotoxicity including myocardial infarction, cerebrovascular ischemic events, acute venous thrombotic events, and Raynaud's phenomenon [5]. Combined-modality treatment with chemotherapy and thoracic radiation therapy improves overall survival by approximately 5% at 3 years in patients with limited-stage SCLC [6]. In this particular case, management was complicated by the lack of relevant medical literature regarding optimal oncologic therapy for potentially curable limited-stage SCLC in a patient with a coexisting LVAD. While SCLC can ordinarily be managed according to established guidelines, a coexisting LVAD implant, and the complications that can arise from it, makes planning a reasonable management approach considerably more difficult. Malignant and nonmalignant lesions detected on routine imaging in patients with an LVAD have been reported, and numerous noncardiac surgical procedures have been performed in these patients to date [7,8]. A case report published in 2011 described a 58-year-old female who was implanted with an LVAD despite a pre-existing pulmonary nodule, which was later diagnosed as an adenocarcinoma [7]. A lower lobectomy was performed cautiously under strict hemodynamic control owing to the challenges posed by the LVAD, underscoring the need for stringent cancer screening and patient selection before LVAD implantation [7]. Wei et al. reported a similar case [8]. These patients, however, were not reported to have received chemotherapy. Patients with advanced heart failure who are considered for cardiac transplantation are meticulously screened for neoplasms, including rectal examination and stool occult blood testing, with pelvic examination, Pap smear and mammography for women [9]. Identification of a malignancy not only precludes eligibility for cardiac transplantation but also poses grave challenges to treatment of the malignancy, including morbid but not fatal infections [10]. Most malignancies with metastatic potential, except primary CNS tumors, are considered a contraindication to cardiac transplantation unless successfully treated without recurrence for five years [11]. One such patient underwent a radical prostatectomy to reacquire his transplantation candidacy status [12]. More data should be reported to allow the development of management guidelines for administering chemotherapeutic agents to LVAD patients with concurrent malignancy and to allow delivery of the best care in the context of a balanced risk-benefit assessment.
v3-fos-license
2021-09-26T06:17:27.955Z
2021-09-25T00:00:00.000
237627394
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00408-021-00477-z.pdf", "pdf_hash": "bf6da85d43b46fc1feec38252f2d327c0e3764e8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44937", "s2fieldsofstudy": [ "Medicine" ], "sha1": "787c66989e685b4c69600a077d52a1824b2bf47f", "year": 2021 }
pes2o/s2orc
4D Electromagnetic Navigation Bronchoscopy for the Sampling of Pulmonary Lesions: First European Real-Life Experience Purpose The use of Electromagnetic navigation bronchoscopy (ENB) for the diagnosis of pulmonary peripheral lesions is still debated due to its variable diagnostic yield; a new 4D ENB system, acquiring inspiratory and expiratory computed tomography (CT) scans, overcomes respiratory motion and uses tracked sampling instruments, reaching higher diagnostic yields. We aimed at evaluating diagnostic yield and accuracy of a 4D ENB system in sampling pulmonary lesions and at describing their influencing factors. Methods We conducted a three-year retrospective observational study including all patients with pulmonary lesions who underwent 4D ENB with diagnostic purposes; all the factors potentially influencing diagnosis were recorded. Results 103 ENB procedures were included; diagnostic yield and accuracy were, respectively, 55.3% and 66.3%. We reported a navigation success rate of 80.6% and a diagnosis with ENB was achieved in 68.3% of cases; sensitivity for malignancy was 61.8%. The majority of lesions had a bronchus sign on CT, but only the size of lesions influenced ENB diagnosis (p < 0.05). Transbronchial needle aspiration biopsy was the most used tool (93.2% of times) with the higher diagnostic rate (70.2%). We reported only one case of pneumothorax. Conclusion The diagnostic performance of a 4D ENB system is lower than other previous navigation systems used in research settings. Several factors still influence the reachability of the lesion and therefore diagnostic yield. Patient selection, as well as the multimodality approach of the lesion, is strongly recommended to obtain higher diagnostic yield and accuracy, with a low rate of complications. Introduction The early detection and diagnosis of pulmonary lesions represents the cornerstone in lung cancer mortality reduction [1]. Electromagnetic navigation bronchoscopy (ENB) provides a multiplanar approach to lung lesions, leading the bronchoscope in close proximity for sampling procedures [2]. The navigation bronchoscopy system allows the bronchoscopist to better find the correct route to the target pulmonary lesion, compared to the conventional fluoroscopyguided bronchoscopy [3]. Over the last years, many studies have been published on this subject, showing a pooled diagnostic yield of ENB between 65% and 74%, with a sensitivity of 77% [4][5][6]. The variability of the ENB diagnostic yield is influenced by many factors: some are dependent on the characteristics of lesions such as size, lobar location, presence of bronchus sign [3,[7][8][9][10][11][12][13], whereas some depend on both navigation system and tools used, or are operator dependent [14][15][16][17]. The results of a large real-life study reported a low diagnostic yield for endobronchial navigation system, even when its use was associated with radial endobronchial ultrasound (r-EBUS), questioning about the real performances of navigation system [3]. Nevertheless, a recent meta-analysis confirmed a higher diagnostic yield of navigation bronchoscopy systems for pulmonary nodules, 1.69 times higher than other non-navigation bronchoscopy ones [18,19]. Recently, a new ENB system based on 4D technology was introduced to approach peripheral lesions: the 4D ENB was developed to overcome the respiratory motion, reducing the inaccuracies of previous ENB systems in sampling procedures of nodules moving during respiratory cycle [20]. 
A pre-procedure CT collects images during the inspiratory and expiratory phases and then, during the airway inspection, the sensor probe collects 3D points reconstructing both the lumen registration map and the pathway to the target lesion. Pulmonary nodules location was demonstrated to be closer on expiratory phase acquisition images than inspiratory ones, suggesting a coordination during expiratory phases of sampling procedure [21]. Moreover, this technology incorporates the electromagnetic guidance system to perform a transthoracic needle aspiration (TTNA) sampling by using the same CT images. This approach had a diagnostic yield of 83%, up to 87% when TTNA was combined with the sameprocedure ENB [22]. The primary aim of our study is to report the 4D SPiN® Thoracic Navigation System feasibility, diagnostic yield, accuracy and safety in approaching pulmonary lesions; the second aim is to evaluate factors that could influence the diagnostic performance of this navigation system. Materials and Methods This study was conducted in accordance with the STROBE statement for observational studies [23]. Our local institutional Review Board approved of the study. All the procedures performed in this study were in accordance with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Patients We conducted a single-centre retrospective observational study, including all patients with pulmonary nodules or masses who underwent ENB between 24th July 2018 and 30th September 2020 at the Interventional Pulmonology Unit of Maggiore della Carità Hospital in Novara, Italy. Demographic data, main findings of pulmonary lesions at CT scan images were recorded for each patient. In particular, for each lesion we specified size, localization, distance to the pleura (visceral pleura), different type of bronchus sign (type A when the responsible bronchus clearly reaches the inside of the target lesion, type C when no bronchus can be detected in relation to the lesion, type B when the CT finding cannot be categorized either into type A or type C [24]), lesions' standardized uptake values (SUV) at positive emission tomography/CT (PET/CT). For each lesion, we also evaluated whether it was previously sampled during a conventional fluoroscopy-guided bronchoscopy or directly approached with ENB. For each lesion sampled during ENB, we recorded which sampling tool was used (transbronchial needle aspiration, TBNA, transbronchial lung biopsies with forceps, TBLB, bronchoalveolar lavage, BAL), the diagnosis achieved with the ENB procedure and, in case of a negative ENB, which other procedure reached a final diagnosis. Diagnostic Pathway As previously published, ENB was used as a second step of diagnostic approach when the patients previously underwent a non-diagnostic fluoroscopy-guided bronchoscopy [2]. We reserved ENB as first approach in selected difficult cases (e.g. presence of bronchus sign, lesions located in the upper lobes, diameter smaller than 20 mm), with a high risk of procedure-related complications (i.e. lesions surrounded by emphysema, chronic respiratory failure), with undetectable lesion by conventional fluoroscopy or, finally, according to the preference of the patient [2]. In the case of non-diagnostic ENB, the patients underwent either an additional diagnostic procedure (i.e. fluoroscopic or CT-guided TTNA or surgical biopsy) or a clinical and radiological follow-up, until a final diagnosis was achieved. 
Procedures The day of the procedure patients underwent a chest CT scan (0.5 mm interval, 0.75 mm thickness), with acquisition images at maximal inspiratory breath hold and expiratory breath hold at functional residual capacity, as previously reported [22]. The post processing of the acquired images generated a virtual airway map (Veran Medical Technologies, Inc., St. Luis, MO, USA) after the placement of a navigational tracking pad (vPAD2, Veran Medical Technologies, Inc.) on patients' anterior chest. The bronchoscopist, then, identified the target lesion and an endoscopic planning route was generated by the software (Fig. 1). All the procedures were performed under general anaesthesia by a single operator (PEB), highly experienced in ENB procedures (the operator performed more than 200 ENB procedures with different ENB systems), using the SPiN® Thoracic Navigation System (Veran Medical Technologies, Inc.). No other guidance systems were used for all sampling procedures (i.e. fluoroscopy or r-EBUS). Target lesions were reached using an electromagnetic tip-tracked biopsy instrument (21 Gauge Needle and 1.8-mm outer diameter Serrated Cup Always-On Tip Forceps, Veran Medical Technologies, Inc.) inserted in the working channel of a therapeutic bronchoscope (Olympus BF-H190, except for three cases where Olympus BF-MP190F was used). Once reached, the lesion was sampled with a hierarchical approach: a maximum of 4 passages of TBNA were followed by up to 4 TBLB with forceps and, finally, a selective BAL with a 50-mL sterile saline was performed. In our cohort, after each TBNA sampling, we always performed a rapid on site evaluation (ROSE), in order to decide whether or not to proceed with TBLB, providing more tissue for the pathology evaluations [2]. For each ENB procedure, we reported the sampling tools used and potential complications (pneumothorax, haemoptysis, respiratory distress). Statistical Analysis Categorical variables are presented as absolute value and percentage, while for continuous ones we reported mean ± SD or median and interquartile range [IQR], as appropriate. Statistical comparisons between ENB diagnosis and categorical variables were made using chi-square test, while t-student or non-parametrical tests were used for the continuous ones. Navigation success (number of lesions reached by ENB over all target lesions), diagnostic yield (malignancies and benign diagnoses over all target lesions), diagnosis accuracy (malignancies, benign diagnoses and intermediate results confirmed correct over all sampled nodules with known final diagnosis) and sensitivity for malignancies (malignancies over the final number of malignancies after further testing) were then calculated [4]. Navigation success was defined when the diagnostic marked tool (needle or forceps) reached the surface of the lesion, highlighting the target lesion with green colour. The 95% confidence interval [95% CI] was also reported. A p value < 0.05 was considered as statistically significant. Statistical analysis was performed using SAS 9.4. (SAS Institute Inc., Cary, NC, US). Results One-hundred-three ENB sampling procedures were performed among 77 subjects. Most patients were male (68.8%) with a median age of 72.6 years (minimum 39.13, maximum 86.99 years). The lesions were mainly located in the upper or middle lobes (79.6%) and were solid (79.6%) with spiculated margins (52.4%). Bronchus sign pattern was type A in 28.1%, B in 52.4% and C in 19.4% of cases. 
Most sampled lesions were nodules (61.1%), with a median maximum diameter of 26 mm, and were located in the outer diameter of the lung parenchyma (median distance to the visceral pleura of 4 mm). Median PET/CT SUV was 5.96 [3. .33] g/mL; 45 patients (58%) had a prior negative fluoroscopy-guided bronchoscopy ( Table 1). ENB allowed to reach the lesion in 83 cases (navigation success 80.6%) and the diagnosis was achieved with ENB in 57 cases (57/83 = 68.3%). The final diagnosis was definitively achieved with other techniques in 16 cases: 13 with surgery and 3 with TTNA. The diagnosis was achieved in 70.9% of the sampled lesions (73/103). For the remaining 30 lesions, 11 were lost at follow-up, 13 were considered malignant because the subsequent CT control demonstrated an enlargement of lesions' diameter, 1 lesion was stable after 2 years of CT follow-up, whereas 5 patients are still in follow-up. Among the characteristics of lesions, only size influenced the diagnostic performance of ENB, with diagnosed lesions having a median maximum diameter of 28 mm (p = 0.0201). Other characteristics were not associated with ENB diagnosis ( Table 2); even if non statistically significant, nearly 80% of ENB diagnostic procedures were performed on lesions located in the upper-middle lobes, with a solid pattern and had a diameter greater than 20 mm. We observed only one case of pneumothorax, which did not require to be drained. No other procedure-related complications were recorded. Discussion In our cohort, we report diagnostic performances of a 4D navigation system platform for diagnosis of peripheral pulmonary lesions. The diagnostic yield and accuracy were, respectively, 55.3% and 66.3% with a sensitivity for malignancy of 61.8%. These results, acquired without other guidance tools, are slightly lower than the previously reported ones for other navigation systems used in research settings [4]. Nevertheless, they are in line with, or even higher than, real-life studies, where diagnostic yield reaches 38%, questioning the role of navigation systems for the diagnosis of pulmonary nodules in real-life settings [3]. Raval et al. used the SPiNDrive system on 49 patients with 61 lesions with a majority of pulmonary nodules; they reported an overall diagnostic yield of 83.3% [19]. The results of a recent meta-analysis, evaluating the value of navigation bronchoscopy for the diagnosis of peripheral pulmonary lesions, underlined that the diagnostic yield of navigation bronchoscopy was higher than non-navigation bronchoscopy approaches, with an overall odds ratio of 1.69 [18]. There are many factors that could influence ENB diagnostic yield: lesion size, lobe location, distance from the pleural surface, presence of a bronchus sign and malignant nature of the lesion are those that mainly influence diagnostic yield [18]. In particular, pulmonary lesions with diameter < 20 mm were more frequently diagnosed with navigation bronchoscopy systems than non-bronchoscopy ones (64.09% versus 48.67%) [18]. Other previously published studies identified both the size and the bronchus sign as factors influencing ENB performance [13]. In the study of Raval et al. [19], bronchus sign was present in 52% of cases and, among them, diagnosis was achieved 88% of times. By contrast, even in the absence of a bronchus sign, a diagnosis was achieved in 78% of cases [19]. In our study, we found that the only characteristic associated with better ENB performance was size, with a median diameter of 28 mm in diagnosed lesions. 
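The yield, accuracy and sensitivity figures above follow directly from the count-based definitions given in the statistical analysis section. The sketch below reproduces them; the counts for navigation success and diagnostic yield (103 target lesions, 83 reached, 57 diagnosed by ENB) are taken from the Results, whereas the accuracy and sensitivity denominators are back-calculated assumptions chosen only to match the reported percentages.

```python
def enb_performance(n_targets, n_reached, n_diagnosed_by_enb,
                    n_correct_of_known_final, n_known_final,
                    n_malignant_by_enb, n_malignant_final):
    """Performance measures as defined in the statistical analysis section."""
    return {
        "navigation success": n_reached / n_targets,
        "diagnostic yield": n_diagnosed_by_enb / n_targets,
        "diagnostic accuracy": n_correct_of_known_final / n_known_final,
        "sensitivity for malignancy": n_malignant_by_enb / n_malignant_final,
    }

# 103 lesions, 83 reached, 57 diagnosed by ENB (reported); remaining counts are assumptions.
for name, value in enb_performance(103, 83, 57, 55, 83, 42, 68).items():
    print(f"{name}: {value:.1%}")   # ~80.6%, 55.3%, 66.3%, 61.8%
```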
We did not confirm our previously published results, where a bronchus sign was the only factor associated with a higher diagnostic yield [2]. This may be influenced by a bias in patient selection: in order to maximize the pre-test probability to have a diagnostic ENB by having many patients with lesions in the upper lobes and with a bronchus sign, in our cohort most of the sampled lesions had a bronchus sign (80.58%) and were located in the upper or middle lobes (79.6%). The 4D SPiN® Thoracic ENB System allows the physician to overcome some crucial limits of previous ENB systems. Firstly, the motion of pulmonary lesions during the respiratory cycle, especially when they are located in the lower lobes: the acquisition of inspiratory and expiratory CT sequences allows a better virtual reconstruction of the endobronchial pathway to the lesion. Secondly, the acquisition of CT on the same day of the bronchoscopic procedure could reveal last-minute variations in the characteristics of lesions [21]. The NAVIGATE post hoc analysis confirmed that in the multimodal approach strategy to the nodule, the aspirating needle and forceps had higher true-positive rates [6,25]. The Acquire registry reported that TBNA improved diagnostic yield when compared with other diagnostic tools, such as forceps biopsy, transbronchial brushing and lavage [3]. In our cohort, the extensive approach with all three instruments (TBNA, TBLB and BAL) was used 63% of times and TBNA was the mostly used tool (93% of times) with the higher diagnostic yield (70%); other sampling tools had lower diagnostic rates, even if they were used, beside TBNA, nearly 80% of times. Another factor that could influence positively the diagnostic rate is the use of an ultra-thin bronchoscope. Ali et al. achieved a 90% diagnostic yield using an ultra-thin bronchoscope in combination with Cone Beam CT [24]. In our cohort, we used an ultra-thin bronchoscope only three times and the navigation success was 100%: the tip of the bronchoscope could always reach the lesion under ENB guidance and all three lesions were sampled. The main difference is the manoeuverability of an instrument with a diameter of 3.0 mm in a combined working channel of 1.7 mm (Olympus BF-MP190F) against standard bronchoscope (Olympus BF-H190) with a diameter of 5.0 mm and a working channel of 2.0 mm. The transthoracic approach to pulmonary lesions is generally performed under CT guidance processing multiple samplings to achieve a real-time diagnosis. Even though the diagnostic yield is higher than bronchoscopy without guidance systems, the number of procedure-related complications is higher, with 16% of pneumothorax and 1% of major haemorrhage [26]. The introduction of the SPiN System™ allows the pulmonologist to biopsy a pulmonary lesion by performing a single percutaneous passage under electromagnetic guidance alone [22]. Mellow et al. in a retrospective analysis of 129 procedures using SPiNPerc™ for transthoracic sampling of pulmonary nodules reported a diagnostic yield of 73%, which raised to 81% when it was combined with ENB [27]. The reported complication rate in their study was 22.5%, 17% of which were pneumothorax [27]. In our cohort, we never used this approach: the implementation of transthoracic sampling, possibly during the same procedure, taking advantage of ROSE, would ideally achieve a diagnosis even in those three patients who subsequently underwent a diagnostic TTNA. We also confirmed the low incidence of procedure-related complications. 
We reported only one case of pneumothorax that resolved spontaneously. In literature, the prevalence of complications was 3.2% and the most common complications were pneumothorax (1.7%) and haemorrhage (1.38%) [18]. Major limitations of our study are as follows: the retrospective nature and consequently the absence of a control group for comparison analysis; we did not have an r-EBUS in use as additional guidance tool, circumstance which limited the possibility to better define interrelations between bronchus lumen and lesions, in particular, for those lesions with a type B bronchus sign. The inflexibility of forceps and needles as well as the use of operative bronchoscopes (large diameters of bronchoscope's tip, anatomical bronchial angulations) may have influenced our diagnostic performances. The rate of navigation success: our results are lower than those reported in literature [28]; such results are probably conditioned by technical and anatomical aspects (i.e. number or bronchial division, strict bronchial angulations); the implementation with other guidance tools (i.e. r-EBUS or guide sheath, unavailable at our institution) could improve the navigation success rate. Moreover, among those procedures that we defined as non-success four were definitively diagnosed as benign lesions; considering these lesions as diagnosed with ENB the diagnostic yield would slightly increase. Moreover, we defined navigation success when the tip of the sampling tool reached the lesion's surface; we based this definition on a virtual image, not in real life. The combination of other guidance tools (i.e. r-EBUS, cone beam CT) would probably increase the rate of navigation success and consequently the diagnostic yield. The acquisition of the CT on the same day of the bronchoscopic procedure needs a great coordination between all the involved services (i.e. radiologists, anaesthesiologists, pulmonologists). Finally, the diagnostic pathway designed in our institution could have influenced the selection of patients, as well as the pre-test probability of lung cancer. However, to the best of our knowledge, this is the first European report of a large real-life cohort of patients undergoing bronchoscopy with the use of 4D SPiN® Thoracic Navigation System for sampling pulmonary nodules and masses. Conclusions In conclusion, with our real-life study, we reported a diagnostic yield of 55% and an ENB diagnostic rate of 68% for the sampling of pulmonary lesions and masses; these results are lower than those previously reported in the literature using other guidance tools. The selection of patients and lesions (upper-middle lobes, diameter greater than 20 mm, solid), as well as the use of all the sampling tools in combination, provide better results in absence of the risk of major complications. CT acquired the same day of the procedure, with acquisition of inspiratory and expiratory scans, could help bronchoscopist during the sampling with a better coordination, in phase with respiratory motion, although this not fully overcomes all the challenges in peripheral sampling. Conflict of interest The authors have no conflicts of interest to declare. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
v3-fos-license
2018-04-03T00:37:07.813Z
2004-06-25T00:00:00.000
13216980
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.jbc.org/content/279/26/26885.full.pdf", "pdf_hash": "85dfa87b08afd359eae19e824822f1cc949a6f69", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44938", "s2fieldsofstudy": [ "Biology" ], "sha1": "4378471acc17b63da42a90448448fa1c5d1847f0", "year": 2004 }
pes2o/s2orc
Stimulatory Actions of Caffeic Acid Phenethyl Ester, a Known Inhibitor of NF-κB Activation, on Ca2+-activated K+ Current in Pituitary GH3 Cells* Caffeic acid phenethyl ester (CAPE), a phenolic antioxidant derived from the propolis of honeybee hives, is known to be an inhibitor of activation of the nuclear transcription factor NF-κB. Its effects on ion currents have been investigated in pituitary GH3 cells. This compound increased Ca2+-activated K+ current (IK(Ca)) in a concentration-dependent manner with an EC50 value of 14 ± 2 μM. However, the magnitude of CAPE-induced stimulation of IK(Ca) was attenuated in GH3 cells preincubated with 2,2′-azo-bis-(2-amidinopropane) hydrochloride (100 μM) or t-butyl hydroperoxide (1 mM). CAPE (50 μM) slightly suppressed voltage-dependent L-type Ca2+ current. In the inside-out configuration, CAPE (20 μM) applied to the intracellular face of the detached patch enhanced the activity of large conductance Ca2+-activated K+ (BKCa) channels with no modification in single-channel conductance. After BKCa channel activity was increased by CAPE (20 μM), subsequent application of nordihydroguaiaretic acid (20 μM) ...

... inflammatory reaction in brain (6). Several lines of evidence also indicate that CAPE may modify the redox state in transformed fibroblast cells and in leukemic HL-60 cells (7)(8)(9). Furthermore, it has been reported that this compound inhibited the contractile response to phenylephrine or to high K+ solution in isolated rat thoracic aorta (10). However, to our knowledge, the effects of CAPE on ion currents have not been thoroughly studied. Large conductance Ca2+-activated K+ (BKCa) channels play important roles in controlling the excitability of nerve, muscle, and other cells by stabilizing the cell membrane at negative potentials (11). Their gating is known to be controlled by intracellular Ca2+ and/or membrane depolarization. Challenge of cells with oxidizing agents has been found to suppress the activity of these channels (12). Pituitary GH3 cells have been demonstrated to exhibit the activity of these channels (13). Riluzole and ciglitazone, both of which were reported to prevent neuronal injuries, could enhance the activity of BKCa channels functionally expressed in these cells (14,15). Importantly, openers of these channels have been shown to counteract the deleterious effects of excitatory neurotransmitters following neurotoxic or ischemic injuries (16). Previous studies also revealed that the BKCa channel might be a relevant target of DNA synthesis in cultured Müller glial cells (17).
Therefore, the objective of this study was to (a) address the question of whether CAPE could affect Ca2+-activated K+ currents (IK(Ca)) in GH3 cells; (b) determine the effects of this compound on the activity of BKCa channels; and (c) examine whether it can influence the membrane potential. Interestingly, the present results indicate that in GH3 lactotrophs the effects of CAPE do not appear to be linked exclusively to NF-κB, despite the ability of this compound to inhibit NF-κB activation in these cells (18). The CAPE-induced increase in BKCa channel activity may account, at least in part, for its effects on cellular functions in neurons or neuroendocrine cells. MATERIALS AND METHODS Cell Culture-The clonal strain GH3 cell line, originally derived from a rat anterior pituitary adenoma, was obtained from the Culture Collection and Research Center (CCRC-60015; Hsinchu, Taiwan). The detailed methods have been previously described (19). Briefly, the cells were cultured in Ham's F-12 medium (Invitrogen) supplemented with 15% heat-inactivated horse serum (v/v), 2.5% fetal calf serum (v/v), and 2 mM L-glutamine (Invitrogen) in a humidified environment of 5% CO2/95% air. The experiments were generally performed 5 or 6 days after cells were subcultured (60-80% confluence). Electrophysiological Measurements-Immediately before each experiment, the cells were dissociated, and an aliquot of the cell suspension was transferred to a recording chamber positioned on the stage of an inverted microscope (DM IL; Leica, Wetzlar, Germany). The cells were bathed at room temperature (20-25°C) in normal Tyrode's solution containing 1.8 mM CaCl2. The recording pipettes were pulled from Kimax-51 capillaries (Kimble Glass, Vineland, NJ) using a two-stage microelectrode puller (PP-830; Narishige, Tokyo, Japan), and the tips were fire-polished with a microforge (MF-83, Narishige). When filled with pipette solution, their resistance ranged between 3 and 5 MΩ. Ion currents were measured in the cell-attached, inside-out, and whole-cell configurations of the patch-clamp technique, using an RK-400 patch-clamp amplifier (Bio-Logic, Claix, France) (19). Data Recording and Analysis-The signals were displayed on an analog/digital oscilloscope (HM 507; Hameg, East Meadow, NY) and on a liquid crystal projector (PJ550-2; ViewSonic, Walnut, CA). The data were stored on a Pentium III-grade laptop computer (Slimnote VX3; Lemel, Taipei, Taiwan) at 10 kHz through a Digidata 1322A interface (Axon Instruments, Union City, CA). This device was controlled by commercially available software (pCLAMP 9.0; Axon Instruments). The currents were low-pass filtered at 1 or 3 kHz. Ion currents obtained during whole-cell experiments were stored without leakage correction and analyzed using pCLAMP 9.0 (Axon Instruments) or Origin 6.0 (Microcal, Northampton, MA). To calculate the percentage stimulation of IK(Ca) by CAPE, each cell was depolarized from 0 to +50 mV, and the current amplitude during cell exposure to CAPE was measured and compared. The amplitude of IK(Ca) in the presence of this compound at a concentration of 200 μM was taken as 100%. The concentration of CAPE required to produce a 50% increase in current amplitude was then determined using a Hill function, y = Emax × x^nH / (EC50^nH + x^nH), where x is the CAPE concentration, EC50 is the concentration required for a 50% increase, nH is the Hill coefficient, and Emax is the CAPE-induced maximal increase in the amplitude of IK(Ca).
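For readers who want to reproduce this type of analysis, the following is a minimal sketch of the Hill fit just described; it is not the authors' code, and the concentration-response values are invented placeholders rather than data from this study.

```python
# Minimal sketch (hypothetical data): fitting the Hill equation described above,
# y = Emax * x^nH / (EC50^nH + x^nH), to a concentration-response relation.
import numpy as np
from scipy.optimize import curve_fit

def hill(x, emax, ec50, nh):
    # x: CAPE concentration in uM; returns the percentage increase of I_K(Ca)
    return emax * x**nh / (ec50**nh + x**nh)

# Placeholder concentration-response points (percent of the increase at 200 uM)
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 200.0])   # uM
resp = np.array([2.0, 10.0, 38.0, 80.0, 97.0, 100.0])   # % stimulation

(emax, ec50, nh), _ = curve_fit(hill, conc, resp, p0=[100.0, 10.0, 1.5])
print(f"Emax = {emax:.0f}%, EC50 = {ec50:.1f} uM, nH = {nh:.2f}")
```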
The amplitudes of single BKCa channel currents were determined by fitting Gaussian distributions to the amplitude histograms of the closed and open states. The channel open probability in a patch was expressed as N·Po, which can be estimated using the following equation: N·Po = (A1 + 2A2 + ... + nAn)/(A0 + A1 + ... + An), where N is the number of active channels in the patch, A0 is the area under the curve of an all-points histogram corresponding to the closed state, and A1 ... An represent the histogram areas reflecting the levels of distinct open states for 1 to n channels in the patch. The relationships between the membrane potentials and the probability of channel openings obtained before and after the application of CAPE (20 μM) were fitted with a Boltzmann function of the form Po = nP/{1 + exp[-(V - V1/2)/k]}, where nP is the maximal open probability, V is the membrane potential in mV, V1/2 is the voltage at which there is half-maximal activation, and k is the slope factor of the activation curve (i.e., the voltage dependence of the activation process in mV per e-fold change). The averaged results are presented as the mean values ± S.E. The paired Student's t test was used for the statistical analyses. To further clarify the statistical differences among two or four treatment groups, analyses of variance with Duncan's multiple-range test for multiple comparisons were also performed. Differences between values were considered significant when p < 0.05. The composition of normal Tyrode's solution was 136.5 mM NaCl, 5.4 mM KCl, 1.8 mM CaCl2, 0.53 mM MgCl2, 5.5 mM glucose, and 5.5 mM HEPES-NaOH buffer, pH 7.4. To record K+ currents or membrane potential, the recording pipette was backfilled with a solution consisting of 140 mM KCl, 1 mM MgCl2, 3 mM Na2ATP, 0.1 mM Na2GTP, 0.1 mM EGTA, and 5 mM HEPES-KOH buffer, pH 7.2. The free Ca2+ concentration of this solution was estimated to be 230 nM, assuming that the residual contaminating Ca2+ concentration was 70 μM, and ratiometric fura-2 measurement with an F-2500 fluorescence spectrophotometer (Hitachi, Tokyo, Japan) showed that this solution contained 205 ± 12 nM free Ca2+ in three different experiments. To measure voltage-dependent Ca2+ current, KCl inside the pipette solution was replaced with equimolar CsCl, and pH was adjusted to 7.2 with CsOH, whereas the bathing solution contained 1 μM tetrodotoxin and 10 mM tetraethylammonium chloride. For single-channel current recordings, the high-K+ bathing solution contained 145 mM KCl, 0.53 mM MgCl2, and 5 mM HEPES-KOH buffer, pH 7.4, and the pipette solution contained 145 mM KCl, 2 mM MgCl2, and 5 mM HEPES-KOH buffer, pH 7.2. The value of the free Ca2+ concentration was calculated assuming a dissociation constant for EGTA and Ca2+ (at pH 7.2) of 0.1 μM. To provide 0.1 μM free Ca2+ in the bath solution, 0.5 mM CaCl2 and 1 mM EGTA were added. RESULTS Effect of CAPE on Ca2+-activated K+ Current (IK(Ca)) in GH3 Cells-In the first series of experiments, the whole-cell configuration of the patch-clamp technique was used to investigate the effect of CAPE on ion currents in these cells. The cells were bathed in normal Tyrode's solution containing 1.8 mM CaCl2, and the pipette solution contained 0.1 mM EGTA and 3 mM ATP. ATP (3 mM) included in the pipette solution was effective at suppressing ATP-sensitive K+ channels (20). To inactivate other types of voltage-dependent K+ currents, each cell was held at the level of 0 mV. As illustrated in Fig. 1, when the cell was held at 0 mV and different potentials ranging from -10 to +60 mV in 10-mV increments were applied, a family of large, noisy, outward currents was elicited.
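Before continuing with the recordings, here is a minimal sketch of the two single-channel analyses defined under "Materials and Methods" above, the N·Po estimate from all-points histogram areas and the Boltzmann fit of the activation curve; the equations are the standard forms written out from the definitions given there, and all numeric values are placeholders rather than data from this study.

```python
# Minimal sketch of the single-channel analyses described under
# "Materials and Methods"; all numbers below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def n_po(areas):
    """N*Po from all-points histogram areas.
    areas[0] = A0 (closed state); areas[i] = Ai (i channels simultaneously open)."""
    areas = np.asarray(areas, dtype=float)
    levels = np.arange(len(areas))
    return float(np.sum(levels * areas) / np.sum(areas))

def boltzmann(v, p_max, v_half, k):
    """Open probability as a function of membrane potential v (mV)."""
    return p_max / (1.0 + np.exp(-(v - v_half) / k))

# Example: a mostly closed patch with two resolvable open levels.
print("N*Po =", round(n_po([0.85, 0.12, 0.03]), 3))   # 0.18

# Example activation data (open probability vs. holding potential).
v = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])    # mV
po = np.array([0.001, 0.004, 0.02, 0.07, 0.21, 0.32])
(p_max, v_half, k), _ = curve_fit(boltzmann, v, po, p0=[0.35, 75.0, 10.0])
print(f"p_max = {p_max:.2f}, V1/2 = {v_half:.1f} mV, k = {k:.1f} mV")
```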
These outward currents have been previously identified as IK(Ca) (19). Interestingly, within 1 min of exposing the cells to CAPE (20 μM), the amplitude of the outward currents was greatly increased throughout the entire range of the voltage-clamp steps. For example, when the cells were depolarized from 0 to +50 mV, current amplitudes measured at the end of the depolarizing pulses were increased to 552 ± 36 pA from a control of 228 ± 25 pA (p < 0.05; n = 8). The relationship between the CAPE concentration and the percentage increase of IK(Ca) was constructed (Fig. 1C). This compound increased the amplitude of IK(Ca) in a concentration-dependent manner with an EC50 value of 14 ± 2 μM. At a concentration of 200 μM, it fully increased IK(Ca). The Hill coefficient was found to be 1.8, suggesting positive cooperativity in its stimulation of IK(Ca). These results indicate that CAPE can produce a stimulatory action on IK(Ca) in these cells. Effect of CAPE on Voltage-dependent L-type Ca2+ Current (ICa,L) in GH3 Cells-IK(Ca) can be functionally coupled with Ca2+ influx through plasmalemmal voltage-dependent Ca2+ channels (21). A recent report also demonstrated that the action of CAPE on vasorelaxation in rat thoracic aorta could be due to blockade of Ca2+ movement through the cell membrane (10). For these reasons, we further investigated whether it could exert any effect on ICa,L, which was previously described in these cells (21, 22). These experiments were conducted with a Cs+-containing solution. Exposure to CAPE (20 μM) was found to have little or no effect on ICa,L. However, this compound at a concentration of 50 μM slightly suppressed ICa,L, although it did not modify the I-V relationship of ICa,L (Fig. 2). For example, CAPE (50 μM) decreased the amplitude of ICa,L to 42 ± 3 pA from a control value of 51 ± 6 pA (p < 0.05; n = 7) when cells were depolarized from -50 to 0 mV. Therefore, this compound stimulated IK(Ca) in a manner conceivably unlikely to be linked to an increase in the amplitude of ICa,L. Effect of CAPE on IK(Ca) in Cells Preincubated with 2,2′-Azobis(2-amidinopropane) Dihydrochloride (AAPH) or t-Butyl Hydroperoxide-CAPE is known to be an antioxidant flavonoid (1, 23). Therefore, we next evaluated whether changes in reactive oxygen species can influence CAPE-induced stimulation of IK(Ca) in GH3 cells. Interestingly, the results showed that CAPE-stimulated IK(Ca) was attenuated in GH3 cells preincubated with either 100 μM AAPH or 1 mM t-butyl hydroperoxide (Fig. 3). t-Butyl hydroperoxide is an oxidative agent, whereas AAPH is known to be an azo compound that can generate free radicals (24). A subsequent application of dithiothreitol (10 μM) increased IK(Ca) in cells treated with AAPH or t-butyl hydroperoxide. When the AAPH-treated cells were depolarized from 0 to +50 mV, CAPE (200 μM) increased the density of IK(Ca) by about 15%. Conversely, in control cells, CAPE (200 μM) nearly fully increased the density of these currents. These results suggest that the stimulation of IK(Ca) caused by this compound can be modified in the presence of these oxidizing agents. Effect of CAPE on the Activity of BKCa Channels in GH3 Cells-The results from our whole-cell experiments suggest that IK(Ca) may be K+ flux through the BKCa channel (13, 19), because the CAPE-induced increase in IK(Ca) was suppressed by paxilline yet not by glibenclamide or apamin.
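As a side note on the recording solutions described under "Materials and Methods," the quoted free-Ca2+ levels can be recovered from simple 1:1 Ca2+/EGTA buffering; the sketch below assumes the stated values (0.1 mM EGTA with roughly 70 μM contaminating Ca2+ for the pipette solution, 1 mM EGTA with 0.5 mM CaCl2 for the bath, and a Kd of about 0.1 μM at pH 7.2) and is not taken from the original study.

```python
# Minimal sketch: free Ca2+ of an EGTA-buffered solution, assuming 1:1 binding.
import math

def free_ca_um(ca_total_um, egta_total_um, kd_um):
    """Solve Kd = [Ca_free]*[EGTA_free]/[CaEGTA] as a quadratic in Ca_free (uM)."""
    b = egta_total_um - ca_total_um + kd_um
    c = -kd_um * ca_total_um
    return (-b + math.sqrt(b * b - 4.0 * c)) / 2.0

# Pipette solution: 0.1 mM EGTA with ~70 uM contaminating Ca2+ (Kd ~ 0.1 uM)
print(f"pipette free Ca2+ ~ {free_ca_um(70.0, 100.0, 0.1) * 1000:.0f} nM")    # ~230 nM
# Bath solution aimed at 0.1 uM free Ca2+: 0.5 mM CaCl2 with 1 mM EGTA
print(f"bath free Ca2+    ~ {free_ca_um(500.0, 1000.0, 0.1) * 1000:.0f} nM")  # ~100 nM
```

Both recipes come out close to the values quoted in the text (about 230 nM and 0.1 μM free Ca2+, respectively).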
To elucidate how it could act to affect IK(Ca), the effect of this compound on BKCa channels was further investigated. In these experiments, single-channel recordings in the inside-out configuration were performed in a symmetrical K+ concentration (145 mM). The bath solution contained 0.1 μM Ca2+, and the potential was held at +60 mV. As shown in Fig. 4, the activity of BKCa channels could be readily observed in an excised patch. An increase in channel activity could also be obtained in cell-attached patches when cells were exposed to ionomycin (10 μM) or squamocin (10 μM). These two agents were previously reported to be Ca2+ ionophores (25). When CAPE (20 μM) was applied to the intracellular face of the detached patch, the channel open probability was increased (Fig. 4). The open probability obtained at the level of +60 mV in the control was 0.112 ± 0.005 (n = 6). The application of CAPE (20 μM) significantly increased channel activity to 0.289 ± 0.035 (p < 0.05; n = 6). When this compound was washed out, the open probability returned to the control level. However, the single-channel amplitude remained unaltered in the presence of 20 μM CAPE (Fig. 4C). Moreover, curcumin (20 μM) applied to the intracellular face of the excised patch was not found to have any effect on the probability of channel openings, whereas cilostazol (20 μM) could increase the channel activity effectively. Similar to CAPE, curcumin has been shown to inhibit the activation of NF-κB (26). Cilostazol has recently been found to stimulate IK(Ca) in human neuroblastoma SK-N-SH cells (27). Effect of Nordihydroguaiaretic Acid on BKCa Channels in GH3 Cells-Nordihydroguaiaretic acid was previously reported to stimulate BKCa channels (28). We also examined whether the stimulatory effects of CAPE and nordihydroguaiaretic acid on these channels are additive. Interestingly, as shown in […] (14). Taken together, the results indicate that the stimulatory effects of CAPE and nordihydroguaiaretic acid on a single BKCa channel are not additive in GH3 cells. Lack of Effect of CAPE on Single-channel Conductance of BKCa Channels-In the next series of experiments, the effect of CAPE on BKCa single-channel conductance was investigated. In the inside-out configuration, the cells were bathed in a symmetrical K+ concentration (145 mM), and the bath solution contained 0.1 μM Ca2+. Fig. 5 (C and D) illustrates the I-V relationships of BKCa channels obtained in the absence and presence of CAPE (20 μM). The single BKCa channel conductance calculated from a linear I-V relationship in control was 196 ± 12 pS (n = 11) with a reversal potential of 0 ± 3 mV (n = 11). Notably, the value of single-channel conductance did not differ from that (197 ± 11 pS; p > 0.05, n = 10) obtained in the presence of CAPE (20 μM). These results indicate that CAPE causes no modification in single-channel conductance, despite its ability to increase the channel open probability. Fig. 5E shows the activation curve of BKCa channels in the absence and presence of CAPE (20 μM). The plot of the open probability of BKCa channels as a function of membrane potential was fitted with a Boltzmann function as described under "Materials and Methods." In control, nP = 0.35 ± 0.04, V1/2 = 75.4 ± 1.6 mV, and k = 10.7 ± 0.4 mV (n = 6), whereas in the presence of CAPE (20 μM), nP = 0.71 ± 0.07, V1/2 = 61.2 ± 1.9 mV, and k = 10.9 ± 0.6 mV (n = 6). The data showed that the activation curve was shifted along the voltage axis to less positive potentials in the presence of CAPE.
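To make the reported shift concrete, the fitted parameters just quoted can be plugged back into the Boltzmann function given under "Materials and Methods"; a minimal sketch, using only the mean values reported above:

```python
# Minimal sketch: evaluate the Boltzmann activation curve with the fitted
# mean parameters reported above to illustrate the ~14 mV leftward shift in V1/2.
import math

def po(v_mv, p_max, v_half, k):
    """Boltzmann open probability at membrane potential v_mv (mV)."""
    return p_max / (1.0 + math.exp(-(v_mv - v_half) / k))

params = {"control": (0.35, 75.4, 10.7), "CAPE 20 uM": (0.71, 61.2, 10.9)}
for label, (p_max, v_half, k) in params.items():
    print(f"{label}: Po at +60 mV = {po(60.0, p_max, v_half, k):.2f}")
```

At a fixed test potential of +60 mV, the CAPE curve yields a severalfold higher open probability than the control curve, in the same direction as the single-channel measurements described above.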
In contrast, no significant change in the slope (i.e., the k value) of the activation curve was detected in the presence of this compound. Taken together, these results indicate that CAPE applied to the intracellular surface of the channel is capable of increasing the open probability in a voltage-dependent fashion. Effect of Internal Ca2+ Concentration on CAPE-stimulated BKCa Channel Activity in GH3 Cells-Whether the CAPE-induced increase in the activity of these channels is associated with the internal Ca2+ concentration was also studied. In these experiments, when an excised membrane patch was formed, various concentrations of Ca2+ in the bath were applied before and during exposure to CAPE (20 μM). As shown in Fig. 5F, the stimulatory effect of CAPE on BKCa channel activity was […]. When cells were exposed to CAPE (20 μM), the membrane became hyperpolarized, and the repetitive firing of action potentials was gradually reduced (Fig. 7, A and B). CAPE (20 μM) decreased the firing frequency from 1.05 ± 0.08 to 0.36 ± 0.05 Hz (p < 0.05; n = 6). Paxilline (1 μM), a known blocker of BKCa channels, reversed the CAPE-induced decrease in firing frequency to 0.86 ± 0.07 Hz (p < 0.05; n = 5). Thus, it is clear that this compound can regulate the firing of action potentials in these cells. Effect of CAPE on IK(Ca) That Is Active in Normal Action Potential Waveforms-To determine whether CAPE affects IK(Ca) that is active during normal action potentials, each cell was held at -50 mV, and ramp hyperpolarization pulses from +20 to -50 mV with a duration of 100 ms at a rate of 0.05 Hz were delivered to mimic the action potential-like waveforms of GH3 cells (22). As shown in Fig. 7 (C and D), when cells were bathed in normal Tyrode's solution containing 1.8 mM CaCl2, current traces representing the I-V relationships of IK(Ca) were observed in response to a voltage ramp protocol ranging from +20 to -50 mV. The application of CAPE (20 μM) increased peak outward currents from 203 ± 15 to 506 ± 34 pA (p < 0.05; n = 6). A subsequent application of paxilline (1 μM) decreased the CAPE-stimulated IK(Ca) from 506 ± 34 to 312 ± 26 pA (p < 0.05; n = 6). Thus, consistent with its inhibition of spontaneous action potentials, the results indicate that CAPE can increase IK(Ca) that is active during normal action potentials. DISCUSSION This study shows that CAPE (a) increases the amplitude of IK(Ca) in a concentration-dependent manner in pituitary GH3 cells, (b) enhances the activity of BKCa channels in a voltage-dependent manner, and (c) reduces the repetitive firing of action potentials. This compound increased the open probability of these channels through a mechanism unlikely to be linked to its inhibition of NF-κB activation. The stimulation by CAPE of IK(Ca) could conceivably be one of the mechanisms underlying CAPE-induced actions, if similar results occur in neurons or neuroendocrine cells in vivo. The EC50 value of CAPE required for the stimulation of IK(Ca) was 14 μM in this study. This value is similar to that required for the inhibition of NF-κB activation (2). Therefore, there might be a link between the actions of CAPE on neurons or neuroendocrine cells and its observed effects on ion channels, although further experiments are required to find out whether CAPE can interact with the BKCa channel to influence IK(Ca) in other types of cells. However, the lack of effect of glibenclamide excludes the involvement of ATP-sensitive K+ channel activation in CAPE-stimulated IK(Ca) in these cells.
A previous study showed that activation of NF-κB could be accompanied by a decrease in the current density of the smooth muscle L-type Ca2+ channel (29). However, the present results indicate that the CAPE-induced increase in IK(Ca) does not depend on an increased availability of intracellular Ca2+ resulting from enhanced Ca2+ influx through voltage-dependent Ca2+ channels, because CAPE was not found to increase the amplitude of ICa,L. These observations are compatible with a recent report showing the inability of CAPE to alter the decrease in intracellular Ca2+ induced by low-K+ solution in cerebellar granule cells (30). Our results demonstrating that this compound at a concentration of 50 μM produced a slight reduction in the amplitude of ICa,L can also account for its ability to inhibit high-K+-induced vasoconstriction in isolated rat aorta (10). It has been demonstrated that dithiothreitol could stimulate IK(Ca) in GH3 cells (31). In AAPH-treated cells, the stimulatory effect of CAPE on IK(Ca) was attenuated, and a subsequent application of dithiothreitol effectively increased the amplitude of IK(Ca). These results suggest that sulfhydryl oxidizing and reducing agents can produce an effect on IK(Ca) in GH3 cells. It will be interesting to determine to what extent the decreased production of reactive oxygen species caused by CAPE affects its stimulatory effect on IK(Ca), because this compound is known to be a potent flavonoid antioxidant (1, 23). Moreover, it seems likely that a decrease in the production of reactive oxygen species caused by CAPE is upstream of its stimulation of IK(Ca). Direct activation of BKCa channels and indirect inhibition of the production of reactive oxygen species may synergistically contribute to the underlying cellular mechanisms through which this compound modifies the repetitive firing of these cells. In addition, like NS004 (32), CAPE was found to increase the Ca2+ sensitivity of BKCa channels observed in GH3 cells. Its ability to increase the Ca2+ sensitivity of BKCa channels suggests that the CAPE molecule may modify the cysteine residues near the carboxyl-terminal Ca2+ bowl domain of these channels (33). Our study demonstrated that CAPE did not modify the single-channel conductance of BKCa channels, but it did increase the channel open probability. The increase in the amplitude of IK(Ca) caused by CAPE is primarily thought to be a result of a decrease in mean closed time. It was also seen that CAPE shifted the activation curve of BKCa channels to the left with no modification in the slope factor of this curve. This compound thus appears to produce the stimulation of BKCa channels by a direct effect on the channel or a closely associated site, although the precise mechanisms of its action remain to be further elucidated. However, our data demonstrated that CAPE applied to the intracellular face of the excised patch caused a fraction of channel closings to shift to short-lived closings, resulting in one closed kinetic state.
It is worth mentioning that unlike the molecules of NS004, NS1619, or riluzole, the CAPE molecule has a juxtaposition of two aromatic rings, a structure similar to that of some BKCa channel openers, such as nordihydroguaiaretic acid and resveratrol (Fig. 8) (34). The present results demonstrated that the stimulatory effects of CAPE and nordihydroguaiaretic acid on the BKCa channel were not additive. It is thus tempting to speculate that these two compounds, which are structurally related, may interact with the same binding site on the channel. CAPE has recently been reported to induce the release of cytochrome c from mitochondria to the cytosol in C6 glioma cells (35). Cytochrome c was found to activate K+ channels (36). However, in inside-out configurations, we showed that CAPE applied to the intracellular face of the excised patches enhanced BKCa channel activity. It is therefore unlikely that the ability of CAPE to increase the amplitude of IK(Ca) is primarily due to the release of cytochrome c from mitochondria. Curcumin, another inhibitor of NF-κB activation, was not found to affect BKCa channels when it was applied to the intracellular face of the excised patch. These results lead us to suggest that CAPE induces the change in the activity of BKCa channels in GH3 cells through a mechanism unlikely to be linked to its inhibition of NF-κB activation. However, its change in membrane potential can be explained by its stimulatory effect on these channels. Such an effect may be responsible for its actions on neurons or neuroendocrine cells in vivo, despite the ability of this compound to inhibit activation of NF-κB in GH3 cells (18). Furthermore, CAPE and other structurally related compounds seem to be intriguing pharmacological tools for characterizing the properties of BKCa channels. Elucidation of the structure of the binding site for CAPE or other structurally related compounds might provide a structural basis for the pharmacological modulation of BKCa channels.
v3-fos-license
2021-09-15T06:17:59.690Z
2021-09-01T00:00:00.000
237505324
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.eneuro.org/content/eneuro/8/5/ENEURO.0116-21.2021.full.pdf", "pdf_hash": "7ec933556cd4169f271efe8e678be2692d175b45", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44940", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "d37b015ceca86349c932686baed348b5d9fe6cff", "year": 2021 }
pes2o/s2orc
Understanding the Significance of the Hypothalamic Nature of the Subthalamic Nucleus Abstract The subthalamic nucleus (STN) is an essential component of the basal ganglia and has long been considered to be a part of the ventral thalamus. However, recent neurodevelopmental data indicated that this nucleus is of hypothalamic origin, which is now commonly acknowledged. In this work, we aimed to verify whether the inclusion of the STN in the hypothalamus could influence the way we understand and conduct research on the organization of the whole ventral and posterior diencephalon. Developmental and neurochemical data indicate that the STN is part of a larger glutamatergic posterior hypothalamic region that includes the premammillary and mammillary nuclei. The main anatomic characteristic common to this region involves the convergent cortical and pallidal projections that it receives, which is based on the model of the hyperdirect and indirect pathways to the STN. This whole posterior hypothalamic region is then integrated into distinct functional networks that interact with the ventral mesencephalon to adjust behavior depending on external and internal contexts. Introduction Initially the whole ventral diencephalon was included in a region named "regio subthalamica" by Forel (Forel, 1877) or "hypothalamus" by Wilhelm His (His, 1893). However, Herrick (Herrick, 1910) made the distinction between the hypothalamus proper, which covers a large collection of nuclei and areas within the ventral margin of the diencephalon, and the ventral thalamus, which essentially comprises the reticular thalamic nucleus, the zona incerta and the subthalamic nucleus (STN; Fig. 1A). This organization model was largely adopted until the end of the 20th century as it seemed to agree with functional differences: the hypothalamus is involved in the control of neuroendocrine/autonomic responses as well as the expression of instinctive behaviors, while the ventral thalamus participates in higher cognitive processes or voluntary motor actions by mediating cortico-thalamic interactions or as part of the basal ganglia network. However, in the late 20th century, the borders as well as the internal organization of these brain regions were strongly debated once again. The former consensus that both the ventral thalamus and the hypothalamus belong to the ventral diencephalic vesicle was shaken by evidence that both regions are best regarded as rostral rather than ventral to the thalamus (Puelles and Rubenstein, 2015; Puelles et al., 2019). The borders between the hypothalamus and ventral thalamus were disputed yet again. For example, in 1980, it was believed that the STN undeniably belonged to the ventral thalamus; however, it is now considered to be a part of the hypothalamus (Altman and Bayer, 1986; Swanson, 2004, 2012). Furthermore, while the STN ventral thalamic identity was being challenged, organizational analogies between the basal ganglia and the hypothalamic networks were also recognized. Indeed, the systematic study of hypothalamic medial zone nuclei connections led to the conclusion that these nuclei are entangled in loop circuits with the thalamus, cerebral cortex and cerebral nuclei that parallel similar loops that are representative of the basal ganglia network in which the STN is integrated (Fig. 1B; Risold et al., 1994; Risold and Swanson, 1995, 1996; Swanson, 2000, 2012).
Unfortunately, this dramatic increase in our knowledge about the development and anatomy of the forebrain has not yet led to a new accepted view of the organization of the forebrain that can be shared with a general audience. In brief, neuroanatomists and developmentalists know that the former concepts of forebrain organization are not in tune with our actual knowledge; however, a new and accepted schema has struggled to emerge, and changes such as the anatomic identity of the STN may be viewed by many other neuroscientists as merely a matter of academic discussion, without any tangible consequences. In contrast, it is now appropriate to think about the implication of the STN having a hypothalamic identity, as this will profoundly influence our understanding of the organization of the posterior hypothalamus and thus the hypothalamus and forebrain altogether. In this work, we analyze available data in the literature about the development, connectivity, and functions of the STN and of the neighboring posterior hypothalamic cell groups. We demonstrate that a specific glutamatergic posterior hypothalamic region, which comprises nuclei from the STN to the mammillary body (MBO), receives convergent cortical and pallidal inputs from the telencephalon and is involved, along with the striatally targeted ventral mesencephalon, in the coordinated control of the behavioral response of the individual. The STN Belongs to the Posterior Hypothalamus The STN was first named after its discoverer, the French neurologist Jules Bernard Luys (1828-1897), before receiving its definitive appellation as the "nucleus subthalamicus" (in Altman and Bayer, 1986). A hypothalamic identity for the STN was suggested by Rose (Rose, 1942) and Kuhlenbeck (Kuhlenbeck, 1973) in the 20th century, against the dominant perception that this region is located within the ventral thalamus. However, to the best of our knowledge, Altman and Bayer (Altman and Bayer, 1986) were the first to show that the STN is generated within the caudal hypothalamic anlage. In a comprehensive study of the development of the hypothalamus, these authors showed that "postmitotic subthalamic neurons migrate by a semicircular route from the anterodorsal mammillary recess neuroepithelium" following an outside-in gradient, as classically described for the hypothalamus. Therefore, following the work of Altman and Bayer, it can be stated that neurons of the STN are generated in a region that adjoins the premammillary (PM) and mammillary nuclei and, therefore, the STN is a part of the posterior hypothalamus. From the 1990s to the present day, the analysis of the distribution and action of dozens of developmental genes, many of which encode morphogenic proteins or transcription factors, has resulted in a better understanding of the precise molecular orchestration that drives brain patterning and neurogenesis (Puelles and Rubenstein, 1993, 2015; Alvarez-Bolado et al., 1995; Shimogori et al., 2010; Diez-Roux et al., 2011; Moreno and González, 2011; Puelles et al., 2013). Therefore, information about the mechanism that governs the formation of the posterior hypothalamus is slowly emerging (Bedont et al., 2015; Kim et al., 2020). Based on the current literature, it can be stated that the initial processes involved in the differentiation of the posterior hypothalamic and the ventral mesencephalic anlagen depend on the diffusion of morphogenic proteins that drive the expression of transcription factors through the mesodiencephalic floorplate (Fig. 2; Alvarez-Bolado et al., 2012; Bedont et al., 2015).
While the processes involved in the interactions between these proteins are not yet fully clear, the early distribution of these molecules delimits three domains (Bedont et al., 2015; Nouri and Awatramani, 2017). (1) Above the mesencephalic flexure, the ventral mesencephalic domain produces dopaminergic (DAergic) neurons in the substantia nigra (SN)/ventral tegmental area (VTA). (2) The ventral floor plate of the diencephalon is lined by a postoptic hypothalamic domain that is often referred to as the tuberal hypothalamus and in which the ventromedial hypothalamic nucleus (VMH), dorsomedial hypothalamic nucleus (DMH) and tuberal lateral hypothalamic area (LHA) are produced. (3) Between the mesencephalic and tuberal hypothalamic anlagen, we find the posterior hypothalamic domain. This domain produces the STN, parasubthalamic nucleus (PSTN), calbindin nucleus (CbN), Parvafox nucleus, Gemini nucleus, ventral PM (PMv), dorsal PM (PMd), and MBO (Fig. 3). These three domains require the expression of the morphogenic protein sonic hedgehog (SHH). However, the posterior hypothalamic anlage is also characterized by the specific expression of Wnt8b (Fig. 2). The role of the expression of this gene is unknown, but an interplay between Shh and Wnt8b has been observed in the patterning of the dorsomedial pallium, another region showing intense Wnt8b expression, which gives rise to cortical areas that, as we will see, are connected to the posterior hypothalamus in the mature brain. This posterior hypothalamic domain also expresses neuronal progenitor markers such as the transcription factors Nkx2.1 and Dbx1, which play important roles in hypothalamic patterning and are expressed in the tuberal hypothalamus (Fig. 2). The expression of Nkx2.1 is restricted to two regions of the prosencephalon (Kimura et al., 1996; Sussel et al., 1999; Flandin et al., 2010; Moreno and González, 2011; Alvarez-Bolado et al., 2012; Magno et al., 2017): a large basal telencephalic zone encompassing the pallidum and the preoptic area, and a postoptic territory that includes the tuberal and posterior hypothalamus. Since Nkx2.1 is expressed throughout most of the hypothalamus except a restricted anterior region between the preoptic and postoptic hypothalamus, it is often considered a hypothalamic marker. Experimental silencing of the Nkx2.1 gene critically perturbs the formation of the hypothalamus, leading to a reduction in the size of many tuberal structures such as the VMH, DMH, or LHA, and ablation of the mammillary/premammillary structures as well as the STN (Kimura et al., 1996; Kim et al., 2020). Dbx1 is required for the differentiation of many hypothalamic cell types in both the tuberal and the posterior hypothalamus (Sokolowski et al., 2016; Nouri and Awatramani, 2017; Alvarez-Bolado, 2019). Therefore, according to the early distribution and functions of Nkx2.1 and Dbx1, the region that gives birth to the STN and MBO is hypothalamic in nature. However, recent studies also point toward intriguing relationships between mesencephalic and posterior hypothalamic neuronal lineages. As the grafting of DAergic neurons produced from embryonic or induced pluripotent stem cells is a promising field of research for the development of treatments for Parkinson's disease, much attention has been focused on the genetic mechanisms involved in the differentiation of these neurons. Therefore, many of the progenitor and postmitotic markers of DAergic neurons have been identified.
Interestingly, most of the currently known DAergic progenitor markers, including Lmx1a and Foxa2, among others, are also expressed rostrally to the mesencephalic anlage into the posterior hypothalamus, but not into the tuberal hypothalamic domain (Nouri and Awatramani, 2017). Nouri and Awatramani (Nouri and Awatramani, 2017) dissected the distribution of Lmx1a and Foxa2 in the posterior hypothalamus. They showed intense expression of the two progenitor markers in STN, PSTN, and PMv neurons coexpressing Dbx1. The close relationship between the cell lineage of the posterior hypothalamus and MES-DA may also be reflected by the expression of the DA transporter (DAT) in adult PMv neurons (Stagkourakis et al., 2018), whereas this protein is otherwise found only in DAergic neurons throughout the midbrain/forebrain (Ciliax et al., 1995). In wild-type embryos, the rostral boundary of En1 expression in the ventral mesencephalon abuts the expression domain of Dbx1 in the posterior hypothalamus (Nouri and Awatramani, 2017). It is suspected that some corepressive interactions take place between these two transcription factors, which are probably important for maintaining the respective identities of the ventral mesencephalon and of the posterior hypothalamus (Nouri and Awatramani, 2017). Indeed, the forced expression of En1 in the posterior hypothalamic region induces the ectopic differentiation of DAergic neurons scattered in the mammillary region. In addition to early progenitor markers, postmitotic transcription factors such as Pitx2 are also necessary for the development of both the ventral mesencephalon and the posterior hypothalamus. In the posterior hypothalamus, Pitx2 plays a determinant role in the migration of STN neurons or the establishment of the mammillothalamic tract and is still expressed in the entire posterior hypothalamus of adult mice (Smidt et al., 2000; Skidmore et al., 2012; Waite et al., 2013). However, most postmitotic DAergic neuron markers such as Pitx3 are not found in the posterior hypothalamus. Each nucleus of the posterior hypothalamus is otherwise characterized by a specific combination of transcription factors, such as Barhl1 for the STN or Lhx5 for the MBO (Miquelajáuregui et al., 2015), but the lineages of most cell types constituting this region still require investigation.

An important neurochemical feature needs to be stressed here as it characterizes most of the posterior hypothalamic region and has important functional consequences: posterior hypothalamic structures are mostly glutamatergic, while abundant GABAergic neurons can be found in the adjacent tuberal hypothalamus (DMH, LHA), zona incerta and ventral mesencephalon (SN, VTA). In the embryonic posterior hypothalamic domain, the lack of Dlx and Gad gene expression distinguishes the posterior hypothalamus from adjacent structures (Puelles et al., 2013; Figs. 2, 3). The Dlx genes code for transcription factors that are responsible for orienting differentiating neurons toward a GABAergic phenotype (Lindtner et al., 2019). The glutamic acid decarboxylase (GAD) enzyme is necessary for the synthesis of GABA (Esclapez et al., 1993; McDonald and Augustine, 1993). In the adult brain, GABAergic cells are present in the posterior hypothalamic nucleus and the capsule of the PMv, which are close to the tuberal hypothalamus, or in the supramammillary nucleus, which abuts the VTA (Esclapez et al., 1993). However, the nuclei that form the core of this region, namely, the STN, PSTN, Parvafox, Gemini nucleus, core of the PMv, PMd, and MBO, are massively glutamatergic and contain very few or no GABAergic cells (Fig. 3). Therefore, the STN differentiates within a specific anlage that also produces the premammillary and mammillary nuclei. The MBO was already included in the hypothalamus by His (His, 1893), and some of the genes that are necessary for the differentiation of this posterior hypothalamic region are emblematic hypothalamic markers. However, this region also requires the expression of progenitor markers that are needed for the development of the ventral mesencephalon, and it displays the specific feature of being massively glutamatergic.

Figure 1. […] (Herrick, 1910). B, Model of circuitries involving the basal ganglia (top) and the medial zone nuclei of the hypothalamus (bottom). Both involve loop pathways with the thalamus and the cortex. The descending projections of the basal ganglia are classically divided into direct, indirect, and hyperdirect pathways. Such pathways for the medial zone nuclei of the hypothalamus have not yet been identified. BN: basal nuclei; CPu: caudoputamen nucleus; Ctx: cerebral cortex; GPe: globus pallidus, external part; GPi: globus pallidus, internal part; HYP: hypothalamus; PAG: periaqueductal gray; PBN: parabrachial nuclei; PPN: pedunculopontine nucleus; SC: superior colliculus; SNr: substantia nigra, reticular part; STN: subthalamic nucleus; TH: thalamus.

Figure 2. Development of the posterior hypothalamus. A-C, Pictures reprinted from the Allen Brain Institute (image credit: Allen Institute; 2020 Allen Institute for Brain Science; Allen Brain Atlas: Mouse Brain; available from http://mouse.brain-map.org/experiment/show/100092704, http://mouse.brain-map.org/experiment/show/100029214, and http://mouse.brain-map.org/experiment/show/100030632) illustrating the distribution of genes coding for the morphogenic proteins Shh and Wnt8b on sagittal sections of embryonic brains (embryonic stages 11.5 or 13.5). D-F, Pictures reprinted from the Allen Brain Institute (image credit: Allen Institute; available from http://mouse.brain-map.org/experiment/show/100093267, http://mouse.brain-map.org/experiment/show/100076539, and http://mouse.brain-map.org/experiment/show/100030677) illustrating the distribution of the neuronal progenitors Nkx2.1, Lmx1a, and En1 on sagittal sections of the embryonic mouse brain. G-H, Pictures reprinted from the Allen Brain Institute (image credit: Allen Institute; available from http://mouse.brain-map.org/experiment/show/100026263 and http://mouse.brain-map.org/experiment/show/100076531) to illustrate the embryonic distribution of the postmitotic transcription factor Pitx2 and the enzyme GAD. I, Line drawing summarizing the division of the embryonic prosencephalon and the distribution of Nkx2.1 (blue and red) and Lmx1a (green and red). J, Diagram illustrating the distribution of transcription factors involved in the differentiation of the posterior hypothalamus. The development of the ventral mesencephalon/posterior hypothalamic continuum depends on the action of morphogenetic proteins such as SHH. However, the expression domain of Wnt8b is specific to the posterior hypothalamus. The posterior hypothalamic anlage is characterized by the expression of hypothalamic (Nkx2.1, Dbx1) and mesencephalic (Lmx1a, Foxa2) neuronal progenitor genes. Some postmitotic transcription factors are also common to the mesencephalon, but then each nucleus of the posterior hypothalamus necessitates the action of specific transcription factors such as Barhl1 for the STN or Lhx5 for the MBO. Finally, the posterior hypothalamic region is massively glutamatergic while adjacent territories contain a mix of GABAergic and glutamatergic neurons. ANT: presumptive anterior area of the hypothalamus; DMH: dorsomedial hypothalamic nucleus; dPal: dorsal pallium; Glu: glutamate; HYP: hypothalamus; LHA: lateral hypothalamic area; MBO: mammillary nuclei; MES: mesencephalon; MesDA: DAergic ventral mesencephalon; mPal: medial pallium; PAL: pallidum; PO: presumptive preoptic area; POST: presumptive posterior hypothalamic area; PTH: prethalamus (ventral thalamus); SN: substantia nigra; STN: subthalamic nucleus; TEL: telencephalon; TH: thalamus; TUB: presumptive tuberal hypothalamic area; VMH: ventromedial hypothalamic nucleus; VTA: ventral tegmental area; zli: zona limitans intrathalamica.

Figure 3. Architecture of the glutamatergic posterior hypothalamus. A, Line drawing to illustrate the nuclear parcellation of the glutamatergic posterior hypothalamus in the rat. The pink nuclei are glutamatergic. B-E, Pictures reprinted from the Allen Brain Institute (image credit: Allen Institute; 2020 Allen Institute for Brain Science; Allen Brain Atlas: Mouse Brain; available from http://mouse.brain-map.org/experiment/show/79591669) to illustrate the distribution of GAD2 in the posterior hypothalamus of the mouse. ARH: arcuate nucleus of the hypothalamus; CbN: calbindin nucleus; cpd: cerebral peduncle; fx: fornix; LHA: lateral hypothalamic area; lht: lateral hypothalamic tract; LM: lateral mammillary nucleus; MM: medial mammillary nucleus; mtt: mammillothalamic tract; NG: nucleus gemini; PH: posterior hypothalamic nucleus; pm: principal mammillary tract; PMd: dorsal premammillary nucleus; PMv: ventral premammillary nucleus; PSTN: para-STN; SNr: substantia nigra, reticular part; STN: subthalamic nucleus; SUM: supramammillary nucleus; VTA: ventral tegmental area; ZI: zona incerta.

Convergence of Cortical and Pallidal Projections into the Posterior Hypothalamus As the STN shares clear developmental and neurochemical features with the premammillary and mammillary nuclei, the appraisal of comparable anatomic traits is legitimate. Historically, the circuit involving the MBO was first described by James Papez in 1937 (Papez, 1995). This circuit involves a strong hippocampal input that reaches the MBO through the fornix, a very conspicuous tract that longitudinally crosses the entire anterior and postoptic hypothalamus. By comparison, the STN is targeted by isocortical projections that constitute the hyperdirect pathway of the basal ganglia. It also receives abundant projections from the pallidum in the basal telencephalon, constituting the well-described indirect pathway of the basal ganglia. Therefore, the cortex and the pallidum could be important sources of afferences that drive the activity of neurons in this region. Cortical afferences or hyperdirect pathways The basal ganglia hyperdirect pathways The hyperdirect pathway of the basal ganglia is still the subject of regularly published anatomic articles using classic tract tracing or modern tractography (Temiz et al., 2020). Observations in humans, primates and rodents are concordant, and the STN can be subdivided into three domains partially depending on the origin of the cortical input.
Many authors recognize a large dorsolateral motor, a ventral associative and a medial "limbic" sector (Parent and Hazrati, 1995a; Emmi et al., 2020). This tripartite organization of the STN is debated because no obvious boundaries can be traced within the nucleus and projections from the telencephalon often overlap. Nevertheless, this points toward a topographical organization in the telencephalic (including cortical) afferences to the nucleus. The latest studies conducted in humans and primates extended the concept of the hyperdirect pathway to include the LHA that is medially adjacent to the STN (Haynes and Haber, 2013; Temiz et al., 2020). This region is referred to as the "medial subthalamic region" in primates and humans, and it receives projections from the ventral medial prefrontal, entorhinal and insular cortices that do not innervate the STN proper. Therefore, in primates including humans, the STN receives isocortical projections while periallocortical areas such as the ventral medial prefrontal and insular areas target LHA regions that are medially adjacent to the STN. In rodents, a similar observation was made, but, in contrast to that in primates, the LHA nuclei medially adjacent to the STN are well characterized (Barbier et al., 2017, 2020; Bilella et al., 2016; Chometton et al., 2016). The posterior LHA contains the PSTN, the closely related small calbindin nucleus (CbN) and the Parvafox nucleus (Fig. 3), which receive inputs from insular and orbital areas, respectively (Tsumori et al., 2006; Chometton et al., 2016; Babalian et al., 2019; Barbier et al., 2020). From the Parvafox, orbital cortex projections continue and end in the Gemini nucleus (Babalian et al., 2019). Ventral medial prefrontal axons (i.e., from the infralimbic area) also innervate the caudal lateral LHA in rodents, but the exact distribution of these axons with regard to the posterior LHA nuclei still requires investigation. Ventral medial prefrontal axons also reach the PMd and enter the MBO (Shibata, 1989; Hurley et al., 1991; Gonzalo-Ruiz et al., 1992; Comoli et al., 2000; Fisk and Wyss, 2000). Therefore, the ventral medial prefrontal input is not limited to the posterior LHA. In contrast, dorsal medial prefrontal areas (cingulate) target the medial STN (Canteras et al., 1990; Parent and Hazrati, 1995a; Emmi et al., 2020). The fornix system and the stria terminalis Since the mammillary circuit (or Papez circuit) involves some major fiber tracts such as the fornix and the mammillothalamic tract, its general architecture was understood very early. It was known since the beginning of the 20th century that the origin of the fornix is the hippocampal formation (Cajal, 1909). However, Swanson and Cowan (Swanson and Cowan, 1977) and Meibach and Siegel (Meibach and Siegel, 1977) were the first to identify pyramidal neurons in the dorsal subiculum at the origin of the postcommissural fornix, while it was observed that Ammon's horn projects mostly through the precommissural fornix to innervate the lateral septal complex (the lateral nucleus of the septum and the septofimbrial nucleus; Swanson et al., 1981). This was confirmed by many other authors (Shibata, 1989; van Groen and Wyss, 1990; Gonzalo-Ruiz et al., 1992), and it is now well established that the dorsal subiculum innervates the medial mammillary nucleus while the para-pre-postsubiculum innervates the lateral mammillary nucleus. The projections from these cortical areas reach the MBO through the fornix.
By contrast, the projections from the ventral subiculum reach the hypothalamus through the medial cortico-hypothalamic tract. In the anterior and postoptic hypothalamus, this tract courses parallel to the stria terminalis, which arises in the amygdala, and both the medial cortico-hypothalamic tract and the stria terminalis converge and mostly end in the PMv. The stria terminalis carries, in part, glutamatergic axons from the posterior nucleus of the amygdala, which lies adjacent to the ventral subiculum and is a cortico-amygdalar nucleus with a pallial origin (Swanson and Petrovich, 1998). Therefore, the projections from the posterior amygdalar nucleus to the PMv should also be viewed as cortical in nature. Finally, and for the sake of completeness, other cortical nuclei of the amygdala (i.e., the anterior part of the basomedial nucleus) project through the direct amygdalo-hypothalamic pathway into the ventral posterior LHA (CbN; Barbier et al., 2017). Conclusions about the connections between the cerebral cortex and the posterior hypothalamus This short survey of the cortical innervation of the posterior hypothalamus shows that the glutamatergic nuclei of the posterior hypothalamus receive topographically organized inputs from the cortex, with the MBO and PMv receiving projections mostly from the allocortex (hippocampal formation, cortico-amygdala) and the STN receiving projections from the isocortex, while nuclei in-between these medial and lateral poles receive projections mostly from the periallocortex, including the ventral medial prefrontal, insular and orbital areas (Fig. 4). Therefore, the allocortical and periallocortical projections to the glutamatergic posterior hypothalamic structures are parallel to and topographically organized with the isocortical projections to the STN. In this way, the hyperdirect pathways arise from the cortical mantle as a whole and innervate glutamatergic nuclei of the posterior hypothalamic region. These cortical projections arise from pyramidal glutamatergic neurons. The STN is innervated by collaterals of descending axons that continue in the pyramidal tract. By contrast, the fornix ends in the MBO. However, at least in rats, the first axons constituting the fornix reach the mesencephalon during development and later emit collaterals that innervate the MBO while the distal mesencephalic branches recede (Stanfield et al., 1987). Subcortical afferences or indirect pathways General organization of the subpallium Based on the topographic organization of descending cortical inputs as well as on cytoarchitectural and neurochemical considerations, it has long been proposed that the cerebral nuclei of the basal telencephalon belong either to a striatal or to a pallidal compartment (Swanson, 2000, 2012; Risold, 2004). Therefore, the telencephalon would be organized according to a basic plan with the pallium innervating the striatum, which itself projects onto the pallidum. This organization of the telencephalon has been adopted by the Allen Brain Institute (Allen Institute, 2004), whose atlases and databases are extensively used by the scientific community (Table 1).
According to the Allen Brain Institute's nomenclature, four striatal divisions receive projections from the cerebral cortex, including the dorsal striatum (caudoputamen) innervated by the isocortex as well as the ventral (nucleus accumbens, olfactory tubercle), medial (lateral septal complex) and caudal (striatal-like amygdalar nuclei) striatum receiving allocortical and periallocortical projections. The striatal compartment, whose main cell type is the GABAergic somatospiny neuron, projects in a topographically organized way onto the dorsal [globus pallidus (GP)], ventral [ventral pallidum (VP), also named substantia innominata (SI)], medial (medial septal complex), and caudal [bed nucleus of the stria terminalis (BST)] pallidum (for additional information, see Tables 1, 2). The direct and indirect pathways of the basal ganglia Both striatal and pallidal compartments are then bidirectionally connected to the brainstem, but the organization of the descending pathways that connect these cerebral nuclei with the brainstem has been best portrayed for the dorsal striatum/dorsal pallidum, forming the well-known basal ganglia network (Fig. 5). Indeed, in addition to the hyperdirect pathway from the isocortex to the STN, the basal ganglia network is usually divided into direct and indirect pathways (Künzle, 1975; McGeorge and Faull, 1989; Graybiel et al., 1994; Parent and Hazrati, 1995a; Nambu et al., 2002; Graybiel, 2004; Gerfen and Bolam, 2016; Tecuapetla et al., 2016). The direct pathway involves several types of medium spiny neurons in the dorsal striatum that project into the internal part of the GP (GPi) and the reticular part of the SN (SNr). The indirect pathway originates from another class of medium spiny neurons of the dorsal striatum that project into the external part of the GP (GPe). The main output of the GPe is to the STN as well as to the SNr. In turn, the STN projects into the whole GP and the SNr. Therefore, the STN is an additional station between the striatum and the GPi/SNr.

Table summarizing the parcellation of the telencephalon based on the nomenclature of the Allen Brain Atlas and Swanson (Allen Institute, 2004; Swanson, 2004), with a slight modification from Barbier et al. (2020; CEAm is adjoined to the PALc; see comments in Phillipson, 1979). Commentaries about the used parcellation: although we have remained very close to the nomenclature used by the Allen Brain Atlas (Allen Institute, 2004), a few adaptations seemed necessary to us. (a) The CEA is one of the striatal-like amygdalar nuclei. However, the original cytoarchitectonic study by McDonald (McDonald, 1982) revealed that only the lateral and central parts of the CEA contain striatal-like medium spiny neurons, while the medial part does not contain such neurons. The medial part of the CEA (CEAm) receives afferences from the lateral CEA as well as from the fundus striatum (belonging to the ventral striatum), which signifies that the CEAm is targeted by striatal-like structures. Furthermore, it is intensely and selectively bidirectionally connected to the PSTN adjacent to the STN. Based on these considerations, the CEAm belongs to the pallidum and not to the striatum. This assertion is also compatible with developmental data (Bupesh et al., 2011; Barbier et al., 2020). (b) The MEA is also one of the striatal-like amygdalar nuclei. Without wishing to question this hypothesis, it is necessary to make a comment. Indeed, the MEA is made of a complex collection of neurons. In particular, it contains abundant populations of glutamatergic neurons with a hypothalamic or a pallial origin (Ruiz-Reig et al., 2018). These neurons are abundant in the posteroventral part of the medial amygdalar nucleus (MEApv), which sends dense projections to the PMv. Therefore, a better characterization of the neurochemical nature of the MEA projection to the PMv is necessary to understand the organization of this complex amygdalar nucleus. (c) For practical reasons only, we divided the substantia innominata/VP into three parts: the anterior VP is deep to the olfactory tubercle. The central anterior pallidum corresponds to most of the pallidum as illustrated by Root et al. (2015); the posterior VP corresponds to the posterior substantia innominata excluded from the VP by Root et al. (2015).

Organization of subcortical projections to the posterior hypothalamus As hyperdirect-like projections were described for the glutamatergic nuclei of the posterior hypothalamus, the comparison with the STN can be prolonged by analyzing the origin of subcortical projections to other posterior nuclei of the hypothalamus. A general inspection of Table 2, which summarizes these data, reveals that the posterior hypothalamus is predominantly and intensely connected to the pallidal compartment of the telencephalon as defined by the Allen Brain Atlas (Canteras et al., […]). Projections from the pallidal compartment to the posterior hypothalamus are topographically organized (Fig. 6). Along the projection from the GPe to the whole STN, the medial tip of the STN receives inputs from the ventral VP (Groenewegen et al., 1993; Root et al., 2015; Groenewegen and Berendse, 1990). The rostral region of the VP (following the nomenclature of Root et al., 2015; see Table 2) sends its axons through the ventrolateral hypothalamic tract and innervates the Parvafox and Gemini nuclei (Lundberg, 1962; Heimer et al., 1990; Price et al., 1991). These nuclei also receive inputs from the magnocellular preoptic nucleus and from the nucleus of the diagonal band (Heimer et al., 1990; Groenewegen et al., 1993). The olfactory nature of this pathway was demonstrated by Price 30 years ago (Price et al., 1991). Located between the Parvafox and STN, the PSTN is targeted by the posterior VP (Grove, 1988; Chometton et al., 2016). The PSTN also receives convergent inputs from the medial division of the central nucleus of the amygdala [CEAm; included in a recent study in the pallidal compartment, see the legend (a) of Table 2; Bupesh et al. (2011)], from the rhomboid nucleus of the BST and, to a lesser extent, from the anterolateral and oval nuclei of the BST (Dong et al., 2001; Dong and Swanson, 2003, 2004a; Chometton et al., 2016; Barbier et al., 2017). The caudal BST projects mostly into the PMv and PMd. These two hypothalamic nuclei are innervated by projections from the principal (BSTpr) and interfascicular (BSTif) nuclei of the BST, respectively (Comoli et al., 2000; Gu et al., 2003; Dong and Swanson, 2004b; Cavalcante et al., 2014). Finally, the medial septal complex (medial pallidum) innervates the medial mammillary nucleus (Swanson and Cowan, 1979; Shibata, 1989; Vann and Aggleton, 2004; Vann, 2010). This input is not as dense as other pallidal projections into the posterior hypothalamic nuclei, but it is the sole subcortical projection from the telencephalon identified in the medial mammillary nucleus, and it serves important functions in this nucleus (Dillingham et al., 2021).
Both the PMv and the PMd are known to be integrated into circuits with other hypothalamic medial zone nuclei, and these circuits are also under the command of subcortical telencephalic projections (Fig. 7). The PMv is bidirectionally connected to the medial preoptic nucleus (MPN) while the PMd is bidirectionally linked with the anterior nucleus (AHN; Swanson, 2000). The MPN shows a strong sexual dimorphism, and the MPN-PMv circuit is called the sexually dimorphic circuit (Simerly and Swanson, 1988;Swanson, 2000). The AHN and PMd are involved in defense responses Risold et al., 1994;Swanson, 2000). Both the MPN and the AHN receive strong inputs from the BSTpr and BSTif, respectively (Dong and Swanson, 2004b), along with intense innervation from the ventral and rostral parts of the lateral septal nucleus (LSNv and LSNr, respectively; . Functional Considerations All glutamatergic nuclei of the posterior hypothalamus receive topographically arranged projections from the telencephalon which comprise inputs from the cortical mantle that are reminiscent of the hyperdirect pathway as well as from the pallidal compartment reminiscent of the indirect pathway. However, this whole analysis is worth considering only if it improves our understanding of the functional organization of this region. To date, most nuclei of the posterior hypothalamus have been studied independently and each one of them is involved in its own specific response: motor behavior for the STN, control of feeding for the PSTN/CbN, agonistic behaviors for the Parvafox, PMv, and PMd, and complex cognitive functions related to encoding spatial information for the MBO Parent and Hazrati, 1995a;Swanson, 2000;Gerfen and Bolam, 2016;Barbier et al., 2020;Dillingham et al., 2021). Therefore, no functional relationship seems to link these different structures, contrary to what the developmental and anatomic data suggest. To understand the functional organization of the glutamatergic posterior hypothalamic region as a whole, once again, the STN may serve as a model. Indeed, it is important to remember that we understand the functions of the STN in collaboration with and often as opposed to that of the striato-nigral direct pathway. Therefore, the function of each nucleus of the posterior hypothalamus should be considered within a larger anatomic network also involving the ventral mesencephalon. Indeed, the ventral mesencephalon is implicated in behavioral responses (motor, feeding, social, and agonistic behaviors) similar to those of the posterior nuclei of the hypothalamus (Wei et al., 2021). Summary of the functional organization of the basal ganglia network At the lateral pole of the posterior hypothalamic glutamatergic region, STN functions are related to that of the basal ganglia network to which it belongs. GPi and SNr are the output stations of the basal ganglia network: they innervate the pedunculopontine nucleus and the superior colliculus that grant access to the somatic motoneurons and the cerebellar network (Gerfen and Bolam, 2016;Fig. 5). They also project into several nuclei of the thalamus forming the classic loops of the basal ganglia network with the motor cortex (Alexander et al., 1986;Parent and Hazrati, 1995b;Deniau et al., 1996;Haber, 2003;Kim and Hikosaka, 2015). 
However, as the medium spiny neurons in the caudoputamen as well as GP and SNr neurons are GABAergic, the direct pathway results in tonic inhibition of its targets which are disinhibited when cortical glutamatergic inputs stimulate the striatum and this pathway is also known as the "Go" pathway. On the other hand, the STN is glutamatergic and stimulates GPi and SNr neurons on disinhibition through the cortex-striatum-GPe pathway or activation by the hyperdirect pathway. Therefore, the activation of the STN through indirect or hyperdirect pathways, results in the inhibition of ongoing motor actions and the indirect pathway is also known as the "No-Go" pathway (Bahuguna et al., 2015;Baghdadi et al., 2017;Bariselli et al., 2019). This No-Go action is deemed important for the suppression of competing motor programs that would otherwise interfere with the execution of the desired movement, as well as for switching motor action and adapting behavior to environmental changes perceived by the isocortex (Wessel and Aron, 2013;Fife et al., 2017;Chen et al., 2020) Posterior hypothalamus and VTA functional networks The VTA in the ventral mesencephalon is involved in similar behavioral responses to many nuclei of the posterior hypothalamus, excluding the STN and MBO. Through its connections with the accumbens nucleus and the VP, the VTA initiates approach or avoidance responses in relation to feeding or agonistic/social behaviors. Generally, the VTA is thought to be involved in reinforcing behavioral responses and increasing or decreasing reward-seeking behaviors (Bouarab et al., 2019;Morales and Margolis, 2017;Parker et al., 2019). Data that integrate posterior nuclei of the hypothalamus in the functional network of the VTA are lacking. An anterograde study illustrates projections from the PSTN into the VTA (Goto and Swanson, 2004). Unfortunately, the functional significance of these connections has not yet been further investigated. Nonetheless, anatomic links also exist through the ventral/caudal/medial striato-pallidal complexes or through other nuclei of the hypothalamus (Phillipson, 1979;Groenewegen et al., 1993;Geisler and Zahm, 2005;Kaufling et al., 2009;Luo et al., 2011), suggesting at least indirect interactions at functional levels between the posterior hypothalamus and the VTA (Table 2). Social behaviors in relation to reproduction and parental care The nucleus accumbens-VTA network is involved in reproduction through the regulation of sexual preferences (Beny-Shefer et al., 2017). The projections from the VTA to the nucleus accumbens can encode and predict key features of social interactions (Gunaydin et al., 2014). The medial preoptic area (MPO) is a key center for the expression of many aspects of reproductive behaviors. Several populations of neurons within this region serve distinct aspects of reproduction, including copulatory behaviors, nest building, pup retrieval and grooming. In lactating females, a specific medial preoptic-VTA pathway is involved in nursing and pup retrieval (Fang et al., 2018;Fig. 8). Moreover, oxytocinergic projections from the paraventricular nucleus of the hypothalamus to the VTA and SNc drive DAergic neuron activity in opposite directions by increasing the activity of the VTA and decreasing that of the SNc (Xiao et al., 2017). Oxytocin-modulated DAergic neurons give rise to canonical striatal projections and oxytocin release in the VTA is necessary to elicit social reward, and is involved in attachment or bonding between parents and pups. 
The PMv is involved in many other aspects of reproductive behaviors as part of the sexually dimorphic circuit with the MPN: it receives pheromonal information from the MEA and BSTpr, and the exposure of individuals to conspecific pheromonal stimuli induces a strong c-Fos expression in the PMv (Yokosuka et al., 1999;Nordman et al., 2020). Then, depending on the hormonal status of the individual and the sex of the intruder, the PMv either facilitates copulation or promotes an aggressive response. For example, the PMv is involved in intermale aggression or male copulatory behavior (Pfaus and Heeb, 1997;Stagkourakis et al., 2018;Fig. 8). In the case of females in estrus, this nucleus stimulates lordosis behavior. This is also a key site for leptin's regulation of reproduction, and it relays this information about the nutritional state to regulate gonadotropin-releasing hormone (GnRH) release (Leshan and Pfaff, 2014). In contrast to the VTA in lactating females, the PMv promotes a maternal aggressive response against a male intruder (Motta et al., 2013), but reports about the role of the PMv in caring for pups are lacking to date ( Fig. 8; see also Wei et al., 2021). Therefore, both the VTA and the PMv are connected to the medial preoptic region, but while the VTA plays a role in reinforcing social bonds between partners and parents/ infants, the role of the PMv is dictated by the hormonal status of the individual and the sex and status of conspecifics, and its role ranges from copulatory behavior to fight initiation, depending of context. Feeding behavior The VTA through a rewarding action involving the nucleus accumbens, promotes the ingestion of hedonic food (Valdivia et al., 2014;Coccurello and Maccarrone, 2018;Koch et al., 2020). In general, DA-deficient mice are hypoactive, aphagic and adipsic (Zhou and Palmiter, 1995). The virally-induced rescue of DAergic signaling in the ventral striatum selectively restores the feeding of DAdeficient mice (Szczypka et al., 1999). Therefore, DAergic projections from the VTA to the ventral striatum, affect the motivation to eat regardless of homeostatic constraints. By contrast, the PSTN and CbN have been implicated in the cognitive and physio-pathologic control of feeding (Barbier et al., 2020). Some authors also considered the PSTN as part of a satiety network (Zséli et al., 2016). These nuclei respond to the ingestion of hedonic food and to sickness. The response to hedonic food ingestion is even stronger if this food is consumed for the first time (Chometton et al., 2016;Barbier et al., 2020). However, they are preferentially involved in limiting food consumption in a way that was compared with the No-Go action of the STN (Barbier et al., 2020). The network involving these nuclei encompasses bidirectional connections with the insular cortex, the CEA and the posterior SI. Additionally, it comprises ascending calcitonin gene-related peptide (CGRP) inputs from the parabrachial nucleus in the pons that convey aversive signals from the periphery (Carter et al., 2015;Chometton et al., 2016;Barbier et al., 2017Barbier et al., , 2020Chen et al., 2018;Palmiter, 2018). Therefore, both the PSTN/CbN and the VTA respond to hedonic food intake, but DAergic signaling in the VTA increases consumption while the PSTN/CbN limits the ingestion of such food if circumstances are not favorable (e.g., neophobia, sickness). Defensive behavior Both the VTA and the PMd have been extensively involved in the response to environmental threats. 
These responses include freezing, escape and even fighting. Concerning the VTA, it has been shown that noxious stimuli are able to excite ventral DAergic neurons while dorsal DAergic neurons are inhibited (Brischoux et al., 2009). DAergic inputs in the basolateral nucleus of the amygdala mediate the freezing response in contextual conditioned fear (de Oliveira et al., 2017) and, more recently, Barbano The PMd has also long been associated with a defense circuit involving connections with the AHN in the anterior hypothalamus, the ventral part of the anteromedial nucleus of the thalamus, and the dorsolateral sector of the periaqueductal gray (Blanchard et al., 2003;Aguiar and Guimarães, 2011;Litvin et al., 2014). This nucleus also depends on olfactory/pheromonal inputs for its functions. Initially, it was mostly involved in freezing responses to either a predator or predator odors, or to a dominant conspecific (social threat; Canteras et al., , 2008Canteras et al., , 2015Pavesi et al., 2011;Rangel et al., 2018). Anatomical evidence for a circuit suggesting that the AHN and PMd may influence eye and head movements was described long ago (Risold and Swanson, 1995). Indeed, recently, a study by Wang et al., provided further insights into the function of the PMd (Wang et al., 2021). These authors showed that this nucleus coordinates escape with spatial navigation. Projections from the PMd to the dorsolateral periaqueductal gray are necessary for the flight response, but its projection into the ventral part of the anteromedial nucleus of the thalamus is required to choose complex and suitable routes to escape a threat. Therefore, this nucleus plays a specific role in versatile context-specific escape. Mammillary nuclei cooperation with the basal ganglia network The MBO forms the medial pole of the glutamatergic posterior hypothalamic region. It is made of two nuclei that have similar and parallel projections with the ventral or dorsal tegmental nuclei of Gudden and with the anterior thalamic nuclei, but have distinct cell types and functions (Vann and Aggleton, 2004;Vann, 2010). Being the farthest from the STN, these two nuclei have no obvious connections with the ventral mesencephalon. Nevertheless, the current notion concerning the functions of these nuclei suggests that they may complete or influence basal ganglia action in the expression of behavior. Occulomotor and head direction Eye and head movements are important for scanning the environment and their control is indissociable from attentional processes and the ability to adapt to the environment. The basal ganglia direct and indirect pathways play a key role in many aspects of these processes through the projections from the SNr to the superior colliculus (Kim et al., 2017;Hikosaka et al., 2019). By and large, the basal ganglia control gaze, gaze orientation and smooth pursuit (saccadic eye movements). Again, direct and indirect pathways play complementary roles with the indirect pathway being important for object choice and deteriorating gaze orientation to "bad" objects (Kim et al., 2017;Hikosaka et al., 2019). In addition, deep-brain stimulation of the STN used for the treatment of Parkinson's disease, affects eye movements (Klarendic and Kaski, 2021). Other striatal compartments may as well affect oculomotor responses from the SN. The amygdalo (from the CEA, caudal striatum)-nigral pathway is involved in boosting oculomotor action in motivating situations (Maeda et al., 2020). 
Projections from the superior colliculus into the pontine nucleus are important to control basal ganglia oculomotor responses. Indeed this nucleus along with the nucleus reticularis tegmenti pontis are intimately involved in the visual guidance of eye movements and are known to influence the cerebellar vermis and flocculus (Allen and Hopkins, 1990;Liu and Mihailoff, 1999). Interestingly, the descending output of the MBO into the nucleus reticularis tegmenti pontis and the dorsomedial pontine nucleus are also well documented (Allen and Hopkins, 1990;Liu and Mihailoff, 1999). Therefore, the MBO may also mediate visual and vestibular related information through an anatomic pathway that includes mammillopontine projections to these precerebellar relay nuclei. However, the lateral mammillary nucleus (LM) is mainly concerned with head direction. The LM along with the dorsal tegmental nucleus of Gudden, is probably particularly important for transforming vestibular information to signal head direction. Head direction cells are found in the LM but also in all the structures belonging to the LM circuit including the Gudden's dorsal tegmental nucleus, anterodorsal nucleus of the thalamus, retrosplenial cortex and postsubiculum (Vann and Aggleton, 2004;Vann, 2010;Fig. 9). Selective LM lesions abolish the anterior thalamic head direction signal as well as the directional specificity of hippocampal place field repetition. Head direction cells are critical for navigation and recent computational and experimental studies show that they interact with place and grid cells in large parts of the temporal cerebral cortex to support spatial memory, scene construction, novelty detection and mental navigation (Bicanski and Burgess, 2018;Soman et al., 2018;LaChance et al., 2020). Medial mammillary nucleus and theta rhythm Theta band oscillations encode information critical to mnemonic processing across a wide range of diencephalic and cortical brain areas, including the hippocampal formation, medial septum, MBO, Gudden's ventral tegmental nucleus (VTN) and anterior nuclei of the thalamus (ATN; Vann and Aggleton, 2004;Vann, 2010;Dillingham et al., 2021). Over the years, theta activity in the medial mammillary nucleus (MM) was thought to depend on descending input from the dorsal hippocampus through the fornix, but recent data indicate that MM-VTN interactions comprise an independent theta source and that the MBO-ATN pathway forms a medial diencephalic theta network that arises independently of the hippocampus (Dillingham et al., 2021). Therefore, the mammillothalamic pathway may contribute to contextual encoding, and as suggested by Dillingham and colleagues, "the MB-ATN axis may be specifically tuned (via theta oscillations) to process and relay context-rich and time-critical information that is further integrated and distributed to higher-order areas by thalamocortical circuits." At this point, it is important to remember that functional connectivity between basal ganglia neuronal activity and theta band activity in the hippocampus exists (Allers et al., 2002). The medial prefrontal cortex (MPF) is affected by theta rhythm generated in the hippocampus (Colgin, 2011). 
These connections are important for decision-making, as a dorsal medial prefrontal-subthalamic pathway supports action selection in a spatial working memory task (Heikenfeld et al., 2020) and theta oscillations in the STN also increase when individuals are making decisions in the presence of conflict (Zaghloul et al., 2012;Zavala et al., 2013Zavala et al., , 2018. A next step would be to verify whether the MM-ATN pathway could also be involved in such responses and whether a coupling of functions between the MM and STN occurs through an MM-ATN-MPF-STN pathway that is inferred by anatomy (Fig. 9). Concluding functional considerations Glutamatergic posterior hypothalamic structures are involved in controlling basal ganglia motor output or in Figure 10. Diagram summarizing the organization of the telencephalic input to the glutamatergic posterior hypothalamus and SN/ VTA. The posterior hypothalamus receives convergent cortical and pallidal afferences while the SN/VTA receives striatal inputs. The GPe input to the SNr is not illustrated to keep the schema simple and as they were not addressed within this paper. Cer. Cortex: cerebral cortex; PAL: pallidum; Post. Hyp.: posterior hypothalamus; Pth: pathway; SN: substantia nigra; STR: striatum; VTA: ventral tegmental area. Figure 9. Organization of circuits involving the LM and MM. A, The LM is bidirectionally connected to the DTN. It also projects into the AD of the anterior thalamus which innervates the RSP and hippocampal formation. In turn the LM is innervated by the fornix. This circuit is involved in head direction. B, The MM is bidirectionally connected with the VTN and projects into the AM and AV of the anterior thalamus. The AV innervates the RSP, ENT, and HF, but through the AM, MM can also influence frontal areas and the anterior cingulate cortex, and modulates, along hippocampal projections, the activity of indirect and hyperdirect pathways from these isocortical areas (for more details, see text and Dillingham et al., 2021). AD: anterodorsal nucleus of the thalamus; AM: anteromedial nucleus of the thalamus; AV: anteroventral nucleus of the thalamus; Cing: cingulate cortex; CPu: caudoputamen; Ctx: cortex; DTN: dorsal tegmental nucleus (Gudden); ENT: entorhinal area; GP: globus pallidus; HF: hippocampal formation; LM: lateral mammillary nucleus; LSc: lateral septal nucleus, caudal part; MM: medial mammillary nucleus; MS: medial septal nucleus; RSP: retrosplenial area; STN: subthalamic nucleus; VTN: ventral tegmental nucleus (Gudden). strategic decision-making regarding reactions toward conspecifics, ingestion of hedonic food or finding a path to escape a threat. As a whole, they appear to perform non-rewarding actions correlated to spatial or internal contexts, while the SN/VTA is associated with reinforcement, motivation and reward of actions also relying on gaze and attention. However, the medial and lateral nuclei of the posterior hypothalamus show differences in the kind of responses in which they are involved: the STN, PSTN, and PMv are clearly involved in controlling specific motor/behavioral outputs by directly or indirectly interacting with the telencephalic basal nuclei/ventral mesencephalic networks. The MBO influences cognitive processes through ascending thalamo-cortical projections and interacts with the medial wall of the pallium and of the striatum/pallidum whose functions are less dependent on ascending DAergic mesencephalic inputs. 
In particular, the MM contributes to the perception of the spatiotemporal context by the hippocampal formation, which then provides this information to the iso/periallocortex. The PMd has an interesting intermediary position. Active research related to the role of the STN within the basal ganglia network is constantly being conducted in human and animal models (Hikosaka et al., 2019). To date, similar studies that examine the comparative roles of the posterior hypothalamic networks and that of the SN/VTA are still rare but will constitute a promising future field of research. Hypothesis and Perspectives A little more than two decades ago, it was established that the circuits involving the allocortex and periallocortex, cerebral nuclei and medial zone nuclei of the hypothalamus structurally resembled the basal ganglia loop with the isocortex. In the meantime, it was noticed that the STN, which is an essential component of the basal ganglia network, belonged to the hypothalamus. To reconcile the two observations, we have reviewed recent developmental, anatomic and functional data concerning the STN and the posterior hypothalamus. The developmental data showed that the STN is integrated within a larger glutamatergic posterior hypothalamic region generated in a specific embryonic anlage that is adjacent to the ventral mesencephalon where the SN/VTA differentiates. We then realized that this posterior hypothalamic region receives convergent and topographically organized cortical and pallidal projections. This pattern of telencephalic input can be compared with the intense striatal projections that reach the SN/VTA (Fig. 10). Finally, the structures belonging to this posterior glutamatergic hypothalamic region and the SN/VTA serve complementary functions to organize behaviors. In the end, it becomes tempting to hypothesize here that the glutamatergic posterior hypothalamic region is involved in decision-making processes in situations that are dictated by environmental or internal contexts and that require immediate behavioral adaptation (e.g., social or predator threats), or that require bypassing the direct pathways of the basal ganglia to limit the pursuit of rewarding actions and prevent negative consequences (e.g., limiting the ingestion of palatable but unknown food). Based on this analysis, it is plausible to hypothesize that the hypothalamic longitudinal circuits that interconnect hypothalamic medial zone nuclei and the basal ganglia circuitry are built on a similar basic plan (see also Croizier et al., 2015). The fact that the STN has a hypothalamic origin is clear evidence supporting this hypothesis. The relationship between the preoptic region and the pallidal anlage in the embryonic brain is another sign that should not be neglected. Pursuing investigations in this direction (see also Swanson et al., 2019) may prove fruitful for achieving a better understanding of how the hypothalamus is integrated within large-scale neural circuits in the prosencephalon.
Effectiveness of distal tibial osteotomy with distraction arthroplasty in varus ankle osteoarthritis Background In highly active older individuals, end-stage ankle osteoarthritis has traditionally been treated using tibiotalar arthrodesis, which provides considerable pain relief. However, there is a loss of ankle joint movement and a risk of future arthrosis in the adjacent joints. Distraction arthroplasty is a simple method that allows joint cartilage repair; however, the results are currently mixed, with some reports showing improved pain scores and others showing no improvement. Distal tibial osteotomy (DTO) without fibular osteotomy is a type of joint preservation surgery that has garnered attention in recent years. However, to our knowledge, there are no reports on DTO with joint distraction using a circular external fixator. Therefore, the purpose of this study was to examine the effect of DTO with joint distraction using a circular external fixator for treating ankle osteoarthritis. Methods A total of 21 patients with medial ankle arthritis were examined. Arthroscopic synovectomy and a microfracture procedure were performed, followed by angled osteotomy and correction of the distal tibia; the ankle joint was then stabilized after its condition improved. An external fixator was used in all patients, and joint distraction of approximately 5.8 mm was performed. All patients were allowed full weight-bearing walking immediately after surgery. Results The anteroposterior and lateral mortise angle during weight-bearing, talar tilt angle, and anterior translation of the talus on ankle stress radiography were improved significantly (P < 0.05). Signal changes on magnetic resonance imaging also improved in all patients. Visual analog scale and American Orthopedic Foot & Ankle Society scores improved significantly (P < 0.05), and no severe complications were observed. Conclusion DTO with joint distraction may be useful as a joint-preserving surgery for medial ankle osteoarthritis in older patients with high levels of physical activity. Level of evidence Level IV, retrospective case series. Background To date, ankle arthrodesis or total ankle arthroplasty has been performed in patients with progressive or endstage ankle osteoarthritis. With the emergence of a super-aged society and an increased number of older patients with high levels of physical activity, jointpreserving surgery has become has become increasingly popular. The surgical treatment of ankle osteoarthritis varies according to the stage. In patients with stage II and IIIA ankle osteoarthritis, low tibial osteotomy (LTO) is indicated for correcting the alignment of the lower end of the tibia surface [1][2][3][4][5], whereas total ankle arthroplasty or arthrodesis is generally indicated for patients with end-stage arthropathy (stages IIIB and IV) [6][7][8][9]. Intraarticular deformities may also be treated with total ankle arthroplasty, and distraction arthroplasty is indicated for patients with stage III or IV arthropathy [10][11][12][13][14]. In younger-aged patients with post-traumatic osteoarthritis, distraction arthroplasty is often performed with an external fixator. However, Tellisi and Fragomen [15] reported that in terms of joint preservation in the osteoarthritic ankle, older patients (more than 60 years old) tend to have better outcomes with distraction arthroplasty than their younger counterparts. 
Horn and Fragomen [16] also reported that supramalleolar osteotomy using circular external fixation is an effective method for correcting distal tibial deformities in the adult population. Plafond-plasty is also well indicated for various stages of intra-articular varus ankle osteoarthritis (including stage IIIB) associated with ankle instability [17]. Distal tibial osteotomy (DTO) is a type of jointpreserving surgery allowing patients to reacquire ankle stability and achieve weight-bearing; hence, it has been reported that DTO using a site with remaining healthy cartilage is indicated for stage II-IIIB arthropathy. However, to our knowledge, there are no reports on DTO with joint distraction using a circular external fixator. Therefore, this study aimed to examine the effect of DTO on ankle osteoarthritis with joint distraction using a circular external fixator. Methods Twenty-one patients (7 males and 14 females; mean age: 68.2 years; age range: 60-80 years) with medial ankle osteoarthritis (Takakura classification stage IIIA: 4 cases and stage IIIB: 17 cases), who underwent DTO with joint distraction using a circular external fixator and had undergone ≥2 years of follow-up, were included in the study. Overall, 17, 2, 1, and 1 patients had primary ankle osteoarthritis, ankle rheumatoid arthritis, post-traumatic ankle osteoarthritis (tibial shaft fracture), and poliorelated ankle osteoarthritis, respectively. The left side was affected in 12 patients. The patients were all highly active people, and included farmers, manual laborer, and sports enthusiasts; they had chronic ankle pain with swelling, stiffness, and difficulty in walking, all of which are symptoms of ankle osteoarthritis. The mean period to bone union was 85.0 days (range, 77-121 days). The mean period with an external fixator was 89.2 days (range, 80-128 days) and the mean follow-up period was 3.2 years (range, 2-9 years). Magnetic resonance imaging (MRI) was performed at a mean of 11.2 months (range, 10-14 months) after frame removal. Pre-and postoperative images were obtained, as well as the latest X-ray. The Patient Archiving and Communication System (PACS) software was used, which allowed for remarkably consistent measurements on radiographs. We confirmed the absence of errors on the AP, lateral, and mortise views of the radiographs on every occasion, using the scale in PACS. We also checked for weightbearing on AP, lateral, and mortise views of the radiographs every 1 to 3 months, and assessed the arthropathic changes such as the appearance of loss of joint space, subchondral sclerosis, cysts, and eburnation. Surgical methods An external fixator was used in all patients. Ankle arthroscopy was first performed, following which a microfracture procedure was performed under arthroscopy after synovectomy. Osteotomy was then performed to create an opening from the medial aspect of the distal tibia, towards the tibiofibular joint. The osteotomy line was defined from a point at the medial cortex, 3 cm proximal to the joint line, to a point on the lateral tibial cortex 1 cm proximal to the joint line, and an opening was created in the tibia. The deformity was corrected in the coronal plane after the surgeon pushed on the ankle until it was stable or until the talus was perpendicular to the tibia. Further correction was performed until radiographic signs of subluxation disappeared on the lateral fluoroscopy image (Fig. 1). An ipsilateral iliac bone was subsequently transplanted into the opening. 
In patients who continued to experience less than 10 degrees of ankle dorsiflexion after opening-wedge correction osteotomy, Vulpius-type Achilles tendon lengthening techniques were additionally required. In addition to one foot ring, four circular external rings were applied from the proximal tibia to immediately above the ankle. The distal bone fragment was fixed in place immediately above the ankle using six straight wires (Fig. 2). During fixation with a circular external fixator, foot ring fixation (for the calcaneus) and joint distraction of approximately 5.8 mm were performed [13]. We also reduced the width of distraction during surgery when a large opening (i.e., approximately 20 mm) was required at the osteotomy site and the tension of the soft tissue was strong. To avoid tibial nerve palsy, we gradually applied slight distraction to the ankle while checking the condition of the soft tissue and the ability to move the ankle and toes after surgery; this was continued until a 5.8-mm distraction was achieved in the ankle joint. If the radiographic joint space during distraction arthroplasty showed a minimum 5.8-mm distraction gap, it ensured no contact between the joint surfaces during full weight-bearing. All patients were allowed full weight-bearing walking immediately after surgery (Fig. 4). Patients treated with joint distraction performed articulation while wearing an external fixator, which reportedly increases the range of motion (ROM), promotes regeneration of a good articular surface, and facilitates maturation and regeneration of fibrocartilage. Hinges were placed along the axis of ankle motion, which was evaluated using anteroposterior (AP) and lateral fluoroscopy images to ensure proper placement. Two universal hinges were attached on either side of the ring using threaded rods (Fig. 5). Evaluation Static parameters were assessed using pre- and postoperative radiographic images, including the AP and mediolateral (ML) mortise angles during weight-bearing (Fig. 6). Dynamic parameters were also assessed using pre- and postoperative radiographic measurements of manual mortise ankle stress, which were available for all patients preoperatively. Both lateral and mortise stress radiographs were obtained at the final follow-up visit. The tibiotalar tilt was measured on the mortise stress view of the ankle. Anterior displacement of the talus with respect to the tibia was assessed on the lateral stress radiograph. Varus instability was assessed as the degree of talar tilt and measured as the angle (in degrees) between the superior surface of the talus and the tibial plafond on a mortise radiograph of the ankle. Maximal manual pressure was used to exert an inversion/varus force across the ankle joint. Anterior instability was assessed as the anterior translation of the talus (the distance between the posterior edge of the tibial articular surface and the posterior edge of the talar trochlea) on a lateral radiograph stress test. Radiographic anterior translation of the talus was assessed by measuring the nearest distance from the posterior edge of the distal tibial plafond to the posterior edge of the joint surface on the talar dome. Maximal manual pressure was used to exert anterior translation of the talus (Fig. 6). The rate of improvement in MRI signal changes (preoperative versus postoperative) was also determined (Fig. 6). MRI was performed on a 1.5-T system; T1- and T2-weighted sequences were utilized to assess signal changes.
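The pre- and postoperative radiographic parameters described above are later compared with a paired Student's t-test (see the Statistical analysis section below). As a purely illustrative sketch of that kind of comparison — not the authors' actual computation, which was run on the full cohort — the following Python snippet applies a paired t-test to a few invented talar tilt values; the numbers and the use of SciPy are assumptions made only for this example.

```python
# Illustrative only: hypothetical pre/post talar tilt angles (degrees) for a
# handful of ankles, compared with a paired Student's t-test.
from scipy import stats

pre_talar_tilt = [18.0, 21.5, 17.0, 23.0, 19.5, 20.0]   # hypothetical preoperative angles
post_talar_tilt = [3.5, 2.0, 4.0, 2.5, 3.0, 3.5]        # hypothetical postoperative angles

result = stats.ttest_rel(pre_talar_tilt, post_talar_tilt)

# Mean reduction in talar tilt across the hypothetical ankles
mean_reduction = sum(a - b for a, b in zip(pre_talar_tilt, post_talar_tilt)) / len(pre_talar_tilt)
print(f"mean reduction = {mean_reduction:.1f} degrees, "
      f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
print("significant at p < 0.05" if result.pvalue < 0.05 else "not significant")
```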
The rate of adjacent joint arthritis was measured using pre-and post-operative visual analog scale (VAS) and American Orthopedic Foot and Ankle Society (AOFAS) scores. Statistical analysis The paired Student's t-test was used to compare preand post-operative values. All data passed the normality and equal variance Shapiro-Wilk tests. P < 0.05 was considered statistically significant. Imaging evaluations The AP mortise angle during weight-bearing improved from a mean preoperative value of 80.5°to a mean postoperative value of 98.0°. The ML mortise angle during weight-bearing also improved from a mean preoperative value of 76.0°to a mean postoperative value of 85.0°. Additionally, the talar tilt angle on ankle stress radiography improved from a mean preoperative value of 19.5°t o a mean postoperative value of 3.0°, while the anterior translation of the talus improved from a mean preoperative value of 27 mm to a mean postoperative value of 2.6 mm. All patients showed improvements in MRI T1weighted image signal changes, with a disappearance of preoperative signal changes and a reduction in the signal change area in 9 and 12 patients, respectively; additionally, in MRI T2-weighted images, there was a disappearance of preoperative signal changes and a reduction in the signal change area in 3 and 18 patients, respectively. None of the patients demonstrated radiographic evidence of arthropathic changes in the peripheral joints. Clinical evaluation The ROM was preserved, with similar pre-and postoperative values. The VAS improved from a preoperative mean of 9.1 points to a postoperative mean of 1.6 points. Additionally, the AOFAS improved from a preoperative mean of 35.5 points to a postoperative mean of 88.4 points. There were 14 superficial pin-tract infections, which were treated with empirical oral antibiotics and daily pin-tract dressings. None of the patients experienced skin disorders that required additional surgery, deep infection, deep venous thrombosis, nerve palsy, adjacent joint disorders, or arthrofibrosis. One patient (79-yearold female) was using a crutch to walk by 5 years after surgery. One patient transiently experienced mild postoperative pain at the iliac crest; however, this had disappeared at the latest follow-up. Discussion Various options exist for the surgical treatment of ankle osteoarthritis. Selection of the optimal treatment requires a thorough consideration of the patient's characteristics. In patients with progressive or end-stage ankle osteoarthritis, total ankle arthroplasty is indicated for cases with bilateral involvement or degeneration in the adjacent joints. However, total ankle arthroplasty is contraindicated in patients with infectious ankle osteoarthritis or severe deformities (≥15°varus and valgus deformity of the ankle joint); it is also not appropriate for patients with a high level of physical activity (including sports and farming), even if they are ≥60 years of age [18]. Although ankle arthrodesis shows stable long-term outcomes and is effective in reducing pain, it has certain disadvantages including a loss in the ankle ROM and adjacent joint disorders. Furthermore, in countries such as Japan where people do not wear shoes and tend to sit on the floor in the house, patient satisfaction with ankle arthroplasty is relatively low [7,9]. LTO includes valgus correction of the alignment and an outward shift of the weight-bearing line. The procedure has been reported to have good outcomes in patients with stage I-IIIA ankle osteoarthritis. 
However, LTO, which involves extraarticular osteotomy, is contraindicated in patients with ankle joint instability and may require additional surgery, such as ligament reconstruction [1,19,20]. Distraction arthroplasty includes cell mobilization from the bone marrow in the talus and tibial mortise (via a microfracture procedure or drilling), and requires treated patients to perform articulation while wearing an external fixator, allowing for an increased ROM. Joint traction for an appropriate period of time prevents damage to the regenerated tissue, and articulation promotes its maturation [18,21].

Fig. 6 Pre- and postoperative images. (a) Preoperative anteroposterior mortise angle on X-ray. (b) Postoperative anteroposterior mortise angle on X-ray. (c) Preoperative lateral mortise angle on X-ray. (d) Postoperative lateral mortise angle on X-ray. (e) Preoperative talar tilt angle on ankle stress radiography. (f) Postoperative talar tilt angle on ankle stress radiography. (g) Preoperative anterior translation of the talus on ankle stress radiography. (h) Postoperative anterior translation of the talus on ankle stress radiography. (i) Signal changes (arrow) observed on magnetic resonance imaging (MRI) before osteotomy. (j) Signal changes (arrow) on MRI disappeared after osteotomy.

In this study, we combined DTO and distraction arthroplasty. DTO has been shown to be effective in older patients with a high level of physical activity, as it preserves the ROM [22,23]. In a previous study, DTO was successfully performed in patients with stage IIIB arthropathy and ankle joint instability. DTO offers certain advantages over arthrodesis, which include preservation of joint function and pain reduction. Another merit is that it exerts less influence on peripheral joints, which often cause problems in fixation. Hence, none of the patients in the present study had an adjacent joint disorder. Deliberate flexion of the osteotomy serves to stabilize and provide more coverage to the talus. Since most patients with ankle osteoarthritis lack dorsiflexion, many surgeons are hesitant to flex the osteotomy and increase equinus. However, we overcame this issue by employing transverse Vulpius gastrocsoleus recession for increased equinus, which provided better coverage of the talus and shifted the healthier posterior cartilage anteriorly. This approach may have contributed to the observed good results. The advantage of DTO over LTO is that it improves ankle joint stability by an angled osteotomy of the proximal tibial attachment site of the anterior tibiofibular ligament with valgus correction [23]. Without fibular osteotomy, DTO is similar to LTO with fibular osteotomy, in that they both correct alignments. Both osteotomies may shift the weight-bearing axis laterally by angulation of the osteotomized distal part of the tibia. However, only DTO without fibular osteotomy can narrow the lateral mortise in cases of medial ankle arthritis with mortise widening [22]. Therefore, DTO with joint distraction using a circular external fixator may also be beneficial to the cartilage [10,24]. There are numerous reports on supramalleolar osteotomy with or without fibular osteotomy for varus ankle arthritis. Hongmou et al. [25] reported that fibular osteotomy may be necessary in supramalleolar osteotomy cases with a large talar tilt and small tibiocrural angles. Stufkens et al. [26] also reported that only supramalleolar osteotomy with fibular osteotomy shifts the pressure laterally for varus ankle arthritis.
However, further research is required on this subject. Since long-term non-weight-bearing leads to reduced walking ability in older patients, walking with a circular external fixator with strong fixation immediately after surgery may greatly benefit them and mechanical stimulation by weight-bearing may have additional effects. Conversely, DTO using a plate requires 1 to 2 months of non-weight-bearing [22,23]. Caution is required with higher degrees of correction as it places a greater burden on soft tissues. In our study, the evaluation of joint space narrowing on pre-and postoperative radiographs permitted the visualization of postoperative improvements with our technique (Fig. 7). Furthermore, MRI evaluations confirmed the improvements, with reductions or disappearance of preoperative signal changes after surgery. This study has certain limitations. First, patients may find the use of a circular external fixator uncomfortable. However, one of the major reasons explaining the absence of deep infections or soft tissue complications requiring additional surgery in this cohort, may be the avoidance of plate fixation. Additionally, improvement of talus instability without ligament reconstruction requires a relatively large opening (i.e., about 20 mm) at the osteotomy site in most patients; this substantially increases the tension on the medial soft tissue in most patients. Therefore, additional studies with a larger number of older patients with ankle osteoarthritis and a high level of physical activity, are needed to validate the suitability of DTO with distraction arthroplasty using a circular external fixator as a treatment option for end-stage ankle osteoarthritis. Conclusions In the treatment of patients with ankle osteoarthritis, it is important to consider the patient's age and physical activity level while selecting the optimal surgical strategy. DTO with joint distraction may be useful as jointpreserving surgery for medial ankle osteoarthritis in young patients, and in older patients with a high level of physical activity.
Psychological Impact of COVID-19 Pandemic and Related Variables: A Cross-Sectional Study in a Sample of Workers in a Spanish Tertiary Hospital Introduction: We intend to objectify the psychological impact of the COVID-19 pandemic on the workers of a tertiary hospital. Methods: All the workers were invited to an online survey. In total, 657 workers were recruited, including 536 healthcare workers (HCWs) and 121 non-healthcare workers (nHCWs). General Health Questionnaire-12 items (GHQ-12) was used as a screening tool. Sociodemographic data, working environmental conditions, and health behaviors were also analyzed. Results: inadequate sleep, poor nutritional and social interaction habits, misuse of psychotropics, female gender, COVID-19 clinical diagnosis, and losing a relative by COVID-19 were variables associated with higher probability of GHQ-12 positive screening. Significant differences between “frontline workers” and the rest were not found, nor was higher the probability of psychological distress in healthcare workers compared to non-healthcare workers. After 3 months from the peak of the pandemic, 63.6% of participants screening positive in GHQ-12 reported remaining “the same or worse.” Limitations: Causal inferences cannot be established. Retrieval and selection biases must be considered as the survey was not conducted during the peak of the outbreak. Conclusions: psychological impact of COVID-19 has been broad, heavy, and persistent in our institution. Proper assessment and treatment must be offered to all hospital workers. Introduction There is growing evidence about the fact that healthcare workers (HCWs), particularly those involved in direct assistance to infected patients (so called "frontline HCWs"), have been exposed to mental health issues while working during the SARS-CoV-2 pandemic in China [1][2][3][4] and all around the world [5][6][7][8][9][10], including Spain, the country where our study has been carried on [11][12][13]. The reasons for the psychological distress to which medical health workers were exposed might be related to the many difficulties of being safe at work, such as the initially insufficient knowledge about the SARS-CoV-2 virus, the lack of prevention and control strategies, the long-term workload for staff, the high risk of exposure to patients with COVID-19, the shortage of personal protective equipment (PPE), the lack of getting rest, and the exposure to critical life events (like infection and death of loved ones) [14,15]. Not only HCWs but also the general population has been psychologically affected by the COVID-19 pandemic [16][17][18], assuming then that fear and psychological distress are extended conditions in the general population subjected to an epidemiological context, such as that of the COVID-19 pandemic. Experience gained in previous epidemic outbreaks (Severe Acute Respiratory Syndrome -SARS, Middle-East Respiratory Syndrome -MERS, Ebola crisis) shows that both HCWs and the general population can be affected by psychiatric symptoms of anxiety, depression, and posttraumatic stress disorder (PTSD) when exposed to these kinds of situations [19,20]. Less is known about the repercussion of the pandemics on non-healthcare workers (nHCWs) in hospitals, although some research carried in other countries show inconclusive results about whether they get more or less psychologically impacted than HCWs [15,[21][22][23]. The COVID-19 outbreak has been an adaptative challenge for workers in the Spanish health system. 
In Spain, medical assistance is universal, free, and easily accessible, and offers wide coverage. Medical assistance to COVID-19 is being fully assumed by the Public Health System. During the first wave of the COVID-19 pandemic, large hospital infrastructures and primary health care had to modify their normal work routine, while also requiring an increase in the endowment of certain devices and human resources. In Spain, the main peak of COVID-19 contagion took place from the first weeks of March to the end of April 2020. In this period, all the healthcare institutions had to transform their infrastructure and human resources in order to assume the care demand raised by the COVID-19 outbreak [11]. Our Hospital, Ramón y Cajal University Hospital (Madrid), is a third-level hospital with a total number of 901 hospital beds before the COVID-19 outbreak. On March 30, the maximum number of hospitalizations for COVID-19 was reached in our institution with 1028 patients admitted for this cause from a total figure of 1293 admissions. The total number of COVID-19 admissions along the first wave of the pandemic outbreak was 2654, of which 2127 were discharged and 527 died. The number of ICU (Intensive Care Unit) and ventilatory support beds had to increase from 77 to 94 beds, reaching a total figure of 200 critical patients along the peak of the pandemic. Global mortality during that period was 567 patients, with 151 patients during the same period as the previous year. Our hospital has a total number of workers of around 5250, and all the staff (including HCW and non-HCW) had to adapt their working routine to the emerging situation, which meant changes in their tasks and a huge increase in workload and shifts. During the first wave of the pandemic, the hospital had to hire a total of 640 new professionals, including 226 nurses, 201 nursing assistants, 47 physicians, 51 hospital porters, and 14 senior graduates. Of the entire workforce, a total of 1562 had to maintain some period of home isolation. During the outbreak, a hotline for psychological support/psychiatric evaluation for professionals was implemented. In parallel, a program for psychological assistance for COVID-19 patients and relatives of patients was also developed. Individual assessment and treatment were implemented if needed. In this work, we intended to study the scope of the emotional impact of the COVID-19 outbreak on our hospital workforce, and to determine whether HCWs were more intensely affected when compared to nHCWs, assuming the hypothesis that those performing clinical work in close contact with COVID-19 patients would be exposed to a higher risk of developing mental health issues. We also aimed to analyze a group of variables (demographic, professional, health-related, working environment-related), measuring their association with the presence of psychological burden on the workforce. The results obtained would facilitate the design and planning of preventive and therapeutic interventions to improve the mental health of hospital workers. Materials and Methods This is a cross-sectional study. We designed a form to make an online survey among all the staff working in Ramón y Cajal University Hospital during the COVID-19 outbreak. The survey was distributed online by institutional mailing, and it was uploaded to the Hospital's intranet. All workers of different categories were encouraged to participate. 
In addition, 657 workers were recruited, which represents 12.4% of total workers, from which 121 were nHCWs and 536 were HCWs. All participants were administered the General Health Questionnaire (GHQ), a validated tool for screening non-psychotic psychiatric disorders in the general population. The GHQ is a self-administered screening questionnaire, designed for use in consulting settings aimed at detecting individuals with a diagnosable psychiatric disorder [24]. In its original version, it had 60 items (GHQ-60). The 12-Item General Health Questionnaire (GHQ-12) [25] is the most extensively used screening instrument for common mental disorders, besides being a more general measure of psychiatric well-being. It is validated for its use in the Spanish population [26]. Individuals with positive screening were defined as those with a total score of GHQ-12 scale of 12 or beyond (using the Likert scoring system, ranging from 0 to 3 for each item assessed) [27]. It is assumed that those surveyed who scored 12 points or more reveal a situation of relevant emotional impact, which makes it advisable to rule out the presence of mental disorders. The survey was conducted from 15 June 2020 to 25 July 2020 and was previously approved by the Ethics Committee for Clinical Research of the Hospital (study number 150/20, approved on 26 May 2020). Informed consent was required for all individuals before participating. The form was divided in four sections, which grouped different kinds of variables: sociodemographic data (gender, age, type of familiar coexistence) and professional and health status during the pandemic (professional category, experience, type of activity, mental health personal history, infection by SARS-CoV-2, COVID-19 symptoms), stress factors of workers related to the working environment and activities during the pandemic, risk and protection behaviors outside the workplace during the pandemic, and the GHQ-12 scale. Initially, a raw analysis of the results was carried out. To enhance the power of the analysis, variables were recoded and grouped the following criteria that were considered clinically relevant. Continuous variables were described by a mean and standard deviation (sd). Categorical variables were described by absolute and relative frequency. Inferential statistics of a student's t-test was used for quantitative variables. The association between categorical variables was made using the Chi-square test. To study the association between psychopathological alterations and risk variables, a backward stepwise (Wald) logistic regression model was used, adjusting for those variables that were assumed, based on the bibliographic review, the raw results of our study, and the biological plausibility, which might have influenced the selected scale variable GHQ-12. The possible existence of interaction and confusion was explored. All contrasts were bilateral and with a significance level of p less than 0.05. All analyses were performed with the Statistical Package for the Social Sciences (SPSS), version 19 (IBM Corp. Released 2010. IBM SPSS Statistics for Windows, Version 19.0. Armon, NY, USA: IBM Corp.). Results Of 657 individuals which participated in our sample, 79.1% were women. The variable "age" was recorded in closed intervals of 10 years in width. The estimated average age was of 41.06 years (sd: 11.63). 
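As a purely illustrative aid to the scoring rule just described (12 items, each rated 0-3 under the Likert method, total range 0-36, positive screening at a total of 12 or more), the short Python sketch below classifies one hypothetical respondent. It is not part of the study's analysis pipeline, which was carried out in SPSS, and the item responses shown are invented.

```python
# Illustrative sketch of the GHQ-12 Likert scoring rule: 12 items scored 0-3,
# with positive screening at a total of 12 or more. Hypothetical data only.
CUTOFF = 12

def ghq12_likert_total(item_scores):
    """Sum 12 item scores (each 0-3) into a Likert total; reject invalid input."""
    if len(item_scores) != 12 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("expected 12 item scores, each in 0-3")
    return sum(item_scores)

def screens_positive(item_scores, cutoff=CUTOFF):
    """Return True if the respondent meets or exceeds the screening cut-off."""
    return ghq12_likert_total(item_scores) >= cutoff

example_respondent = [1, 2, 1, 0, 2, 1, 1, 2, 0, 1, 2, 1]  # hypothetical answers
print(ghq12_likert_total(example_respondent))  # 14
print(screens_positive(example_respondent))    # True (14 >= 12)
```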
Furthermore, 84.2% of the sample exceeded the cut-off point (12 points or more total score, Likert scoring system) of the GHQ-12 test, suggesting the need to further explore the presence of any non-psychotic mental disorder. The average Goldberg score in our sample was 16.8 (sd: 5.5). In addition, 81.6% were healthcare workers (HCWs), and the average professional experience was 15.3 years (sd: 10.9). After analyzing the descriptive data of the sample, we first conducted an analysis to determine which variables were associated with positive screening in GHQ-12. Statistical significance was found for the following variables: female gender (p = 0.003), age (p = 0.016), professional category (being a nurse or a nursing assistant) (p = 0.001), having developed COVID-19 infection symptoms (p = 0.021), having been diagnosed with COVID-19 infection (p = 0.004), and experiencing the loss of a relative/close person from COVID-19 (p = 0.044). Healthcare workers (HCWs) were not related to positive screening in a significant way (p = 0.268), but, because of the variable's relevance, we decided to include it in the logistic regression analysis. Frontline workers vs. second-line workers did not fully reach statistical significance (p = 0.084), but because of the amount of evidence about a statistically significant association between frontline workers and psychological distress in previous research [1,28,29], we decided to include this variable in the logistic regression analysis (Table 1). Note that the total n for each variable may not match the sum of the partial n values, given that some respondents did not correctly complete the GHQ-12. As for health habits and risk behaviors (Table 2), the following variables were significantly more frequent in those individuals with positive GHQ-12 screening scores: inability to maintain adequate sleep hygiene habits (p < 0.001), inability to maintain adequate nutritional habits (p < 0.001), inability to maintain structured leisure activities and disconnecting from work (p < 0.001), inability to maintain adequate social interaction (p < 0.001), inability to maintain a regular physical activity routine (p = 0.008), and irregular use of psychotropic drugs (by self-prescription or prescribed by colleagues) (p < 0.001). Inability to regulate exposition to information in media and social networks nearly reached statistical significance (p = 0.076). The results of the logistic regression analysis are shown in Table 3. We introduced the following variables in our analysis: inability to maintain adequate sleep hygiene habits, inability to maintain adequate nutritional habits, inability to maintain structured leisure activities and disconnecting from work, inability to maintain adequate social interaction, inability to regulate the exposition to media and social networks, inability to maintain a regular physical activity routine, increasing the use of alcohol or illicit drugs, use of self-prescribed (or prescribed by a colleague) psychotropic drugs, performing relaxation/meditation/mindfulness techniques, gender, age, type of cohabitation, experiencing COVID-19 symptoms, being diagnosed with a COVID-19 infection, having a risk for COVID-19, loss of a relative, professional category, and working in close contact with COVID-19 patients. As for the evolution of psychological disturbances in our sample, it is striking that, after 2-3 months from the peak of the COVID-19 outbreak in Madrid, 59.9% of our sample responded that they felt emotionally "the same or worse" compared to then.
If we looked exclusively at those who reached the cut-off point for GHQ-12 positive screening (therefore, those potentially with clinical disorders), the data were even more worrying: 63.6% were the same (34.9%) or worse (28.7%) than then. Logistic regression was performed, meeting the basic assumptions of independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers. We introduced the following variables in step 1 of our analysis: inability to maintain adequate sleep hygiene habits, inability to maintain adequate nutritional habits, inability to maintain structured leisure activities and disconnecting from work, inability to maintain adequate social interaction, inability to regulate the exposition to media and social networks, inability to maintain a regular physical activity routine, increasing the use of alcohol or illicit drugs, use of self-prescribed (or prescribed by a colleague) psychotropic drugs, performing relaxation/meditation/mindfulness techniques, gender, age, type of cohabitation, experiencing COVID-19 symptoms, being diagnosed of COVID-19 infection, having a disease of risk for COVID-19, loss of a relative, history of mental disorder, professional category, and working in close contact with COVID-19 patients. After exploring the association between the independent variables and the dependent variable using the backward stepwise (Wald) logistic regression model, predictive variables for GHQ-12 screening positive were those shown in Table 3. The risk of screening positive on the GHQ-12 scale among those who had inadequate sleep habits was 2.256 (95% CI: 1.325 to 3.842) greater than the same risk for those who had adequate sleep habits, if all the other variables remained constant. In addition, the factors of inadequate social interaction and inadequate nutritional habits multiplied the risk of exceeding the cut-off point in said scale by 3.169 (95% CI: 1.801 to 5.574) and 1.736 (95% CI: 0.933 to 3.229), respectively. The odds of exceeding the cut-off point on the GHQ-12 scale for females over the same odds for males was 1.736 (95% CI: 1.036 to 2.909). Using irregularly psychotropic drugs to control anxiety or insomnia multiplied the risk of exceeding the cut-off point on the GHQ-12 scale by 2.010 (95% CI: 1.047 to 3.860). Having been diagnosed of COVID-19 infection (2.024 (95% CI: 1.191 to 3.441)), and having suffered the loss of a relative or close friend due to COVID-19 (2.022 (95% CI: 1.047 to 3.904)) were factors related to a greater risk of screening positive on the selected scale. Discussion The COVID-19 pandemic has carried a great burden of psychological distress in the population of workers in our hospital, with no statistically significant differences between HCWs and nHCWs. Although this might be unexpected on a first approach, we must consider that the core stressful events about being in risk for infection by SARS-CoV-2 could be universal to all hospital workers, and they could be extensive to the general population in some sense. The results of a recent extensive review [17] comparing the levels of depression, anxiety, and other psychological symptoms reported in research carried worldwide conclude that patients are always the most affected subjects, being followed by HCWs and the general population (showing these two latter groups overlapping frequency figures, without significant differences). 
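For readers who wish to reproduce the kind of output reported in Table 3, the sketch below fits a plain (non-stepwise) logistic regression with statsmodels and reports odds ratios with 95% confidence intervals; the predictor names echo variables mentioned in the text, but the data are simulated and the backward stepwise (Wald) selection used in the study is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated analysis dataset; predictor names follow variables listed in the
# text, but the values are generated for illustration only.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "inadequate_sleep": rng.integers(0, 2, n),
    "inadequate_diet":  rng.integers(0, 2, n),
    "poor_social":      rng.integers(0, 2, n),
    "female":           rng.integers(0, 2, n),
    "covid_diagnosed":  rng.integers(0, 2, n),
    "loss_of_relative": rng.integers(0, 2, n),
})
logit = (-1.0 + 0.8 * df["inadequate_sleep"]
         + 1.1 * df["poor_social"] + 0.6 * df["female"])
df["ghq12_positive"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df.drop(columns="ghq12_positive").astype(float))
model = sm.Logit(df["ghq12_positive"].astype(int), X).fit(disp=0)

# Odds ratios with 95% confidence intervals, in the format of Table 3.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(3))
```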
In our institution, the COVID-19 outbreak challenged the hospital infrastructure and human resources to a great extent, with no clinical areas free from COVID-19-infected patients (except from Oncological and Trauma wards). Besides this, physicians from different specialties, with scarce training in treating this kind of pathology, were moved to treat patients admitted with COVID-19 pneumonia. Scarcity of individual protective equipment during the first wave of the pandemic in Madrid made professionals experience a great fear of getting infected and infecting significant others. The entire workforce, including HCWs and nHCWs, had to increase their workload and assumed a higher amount of pressure in the working environment, facing longer working shifts and limitation of rest days. The results of our research showed that variables found associated with psychological distress in previous reports referred in the introduction section (as being frontline workers vs. second-line workers, or being HCW vs. nHCW) did not predict a higher risk of showing psychological distress by scoring positive in GHQ-12 in our sample. It is remarkable that keeping inadequate basic habits (sleep hygiene, nutrition habits, and social interaction) were variables that predicted a greater risk of scoring positive in GHQ-12 in our sample, so these behavioral variables were more significant than others (professional or working environment-related variables) in our results, contradicting our expectations. Promoting adequate hygiene-dietary habits and minimizing social isolation (respecting epidemiological restrictions) could be profitable to minimize the negative psychological burden of hospital workers under these circumstances. A relevant potential limitation of the study is that the variable nHCWs groups together different types of professionals, with different levels of exposure to infected patients, ranging from those relatively preserved from contacts of infectious risk (i.e., administrative personnel, maintenance/technical staff) to those who keep close contact with patients affected by COVID-19 (i.e., hospital porters). Besides this, not all physicians or nurses were equally exposed to infected patients, since many of the assistance activities were differed or carried out through telematics. To limit that kind of bias in our study, variables were redefined in order to group our individuals in terms of direct exposure to COVID-19 infected patients or not. Despite this, in our sample, direct contact with patients infected with SARS-CoV-2 did not reach statistical significance in determining a higher probability of positive screening in GHQ-12. Although nHCWs are not involved in clinical tasks with COVID-19-infected patients, the psychological impact they reported in our study could be determined by other variables, which were not explored in this paper. The association between these variables (e.g., growing workload, disruption of their usual working procedures, and extension of work shifts) and psychological distress in nHCWS could explain the absence of significant differences between HCWs and nHCWs risk to screen GHQ-12 positively in our study. This hypothesis needs further investigation to be confirmed. As some previous research points out, psychological impact on non-health workers could also be due to the lesser availability of adequate coping strategies and scarce knowledge of self-protection and prevention techniques in this group of workers [23]. 
In our research, female gender appeared to be a variable significantly related to positive screening on the GHQ-12. This finding is consistent with the conclusions drawn in numerous previous papers [1,3,15,17,21,22]. There are also some reports of women being biologically more disposed to develop higher levels of anxiety and PTSD than men [30]. In our research, the professional category of nurse did not reach statistical significance with regard to positive screening on the GHQ-12 in the logistic regression analysis, which is a common finding in most of the papers cited in our references. We must consider that the female gender is overrepresented among the nurse/nursing assistant categories, a fact that can condition the interpretation of the results. In a further exploratory linear regression analysis, we did find that nurses and nursing assistants tended to report significantly higher GHQ-12 scores than other professional categories. Keeping adequate sleep and nutritional habits and maintaining adequate social interaction emerged as protective factors against positive GHQ-12 screening during the pandemic in our sample. Losing a relative or a close friend to COVID-19 and being diagnosed with COVID-19 infection were variables associated with positive screening on the GHQ-12 scale. A vast majority of the participants in our survey stated that they felt "the same or worse" than in the most severe moments of the COVID-19 outbreak in Spain in the months of March and April. This finding is consistent with previous work, which outlines that the psychological reaction to this kind of stressful situation may present with anxiety and fear in an initial phase, but may consolidate into persistent depressive and post-traumatic stress (PTS) symptoms in some individuals [31]. Most of the research carried out during the initial phases of the COVID-19 outbreak outlined the need for preventive and screening strategies among health workers, but monitoring the evolution of psychological distress over the long term is also needed, in order to take care of the most chronically impaired individuals. The limitations of our study include the fact that, being cross-sectional research, associations between variables and positive GHQ-12 screening cannot be interpreted in terms of causality. Besides this, the fact that the survey was conducted several weeks after the critical stage of the pandemic may introduce recall bias regarding the psychological aspects experienced during the crisis. Furthermore, the time elapsed since the most acute moments of the health emergency could have led to the preferential participation of the most severely and chronically affected individuals. It would also have been desirable for a larger proportion of the working staff to have engaged in the survey, which reached 12.51% of the total workforce. The measurement tool used was a screening instrument validated in the general population and could therefore underestimate certain dimensions of symptomatic discomfort in health sector professionals that might be specific to their care activities. As a screening tool, it does not allow the diagnosis of a specific disorder, but it points to a greater probability of mental disorders that should be evaluated in greater depth. Specific kinds of psychological disturbances have been linked to clinical assistance during the COVID-19 pandemic. 
Moral injury [32], vicarious traumatization [21], compassion fatigue [33], and burnout syndrome [34,35] have been assessed in previous research, but were not the targets of our work. Previous research has suggested the need to implement different kinds of measures to prevent and minimize psychological disturbance in healthcare workers, since we already know they are overexposed to many sources of stressful events during a pandemic [36][37][38][39]. Considering the results obtained, preventive and therapeutic strategies should perhaps be expanded to include non-health hospital workers, specifically targeting the groups identified as being at higher risk (women). Conclusions In our study, healthcare workers did not screen positive on the GHQ-12 test more frequently than non-healthcare workers, with both categories showing a generalized, widespread, and persistent psychological impact during the COVID-19 pandemic. Moreover, professionals placed in frontline duties did not score positive significantly more frequently than others. Female gender, health behaviors, and infection by SARS-CoV-2 of the individuals and/or their relatives were significantly associated with positive GHQ-12 screening as risk/protective factors. These results make it advisable to further investigate the evolution of psychiatric symptoms in hospital workers, and to implement appropriate preventive assessment and therapeutic programs to meet their needs.
v3-fos-license
2021-05-18T13:22:07.339Z
2021-01-01T00:00:00.000
234752495
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.degruyter.com/document/doi/10.1515/epoly-2021-0032/pdf", "pdf_hash": "1ecd99cd2288c147933f59b4f7056982f0a97716", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44945", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "sha1": "0d9525f6ed7a343e94ca622761dbeff3ca60b5ac", "year": 2021 }
pes2o/s2orc
Modification of sodium bicarbonate and its effect on foaming behavior of polypropylene As a potential physical blowing agent, sodium bicarbonate (SB) is environmentally friendly and low in cost, but its low decomposition temperature cannot meet the requirements of polyolefin foam materials. Herein, to enhance the thermal properties of SB, a modification route was offered to fabricate various SB-based capsules via suspension polymerization. In the modified SB-based capsules, epoxy resin (EP) together with several organic acids was successfully coated on the surface of SB, serving as a heat-insulation layer for SB. Various physicochemical characterizations provided reliable evidence of the good coating effect, and the thermal performance of the modified SB was improved. Further, the composite SB capsules were applied to the foaming of polypropylene (PP), and the foaming behavior of the SB-based capsules in PP was significantly improved, with a more uniform distribution, smaller cell diameter, and higher cell density. In all, this work fully proved that the coated shells enhanced the thermal properties of SB, and that the modified SB capsules significantly improved the foaming quality of foamed PP. Introduction In recent years, facing the increasing demand for foaming agents in the plastics industry, such as polyisocyanurate-polyurethane (PIR-PUR) foams for insulation constructions in wall panels (1) and the application of rigid polyurethane foams (RPUF) in flame retardancy (2), as well as the environmental problems caused by the depletion of natural resources and the increase of organic waste gas emissions, higher requirements have been put forward for the selection and use of foaming agents. The foaming agent is a significant component in the synthesis of foamed materials; commonly used examples are azodicarbonamide (AC) and sodium bicarbonate (3). The AC foaming agent can generate a large amount of gas, which makes it widely used in diverse applications (4,5). However, AC produces many unpleasant nitrogen-containing compounds during the foaming process, as well as some toxic residue, which pollutes the environment. In contrast, SB has the advantages of large gas generation and is cheap, easy to obtain, and environmentally friendly (6). At the same time, the gas produced by SB decomposition readily dissolves and diffuses in the polymer matrix, and SB has therefore attracted wide attention in the field of foaming materials owing to these advantageous properties. However, bottlenecks have emerged: the low decomposition temperature and wide decomposition temperature range of sodium bicarbonate can lead to a plasticizing effect of the foam in the polymer (7), which further limits its applications in the foaming area. Therefore, research to improve the thermal performance of SB is the key problem for its engineering application. In terms of improving the thermal properties of SB, some researchers have shown that these properties can be effectively improved by building a shell on the surface of SB. Yao et al. (7) reported a modified SB coated with stearic acid (SA) by a melting method and found that the decomposition temperature of SB was improved from 112.6°C to 146.9°C. Petchwattana and Covavisaruch (8) reported an SB modified with citric acid (CA) by high-speed mixing; the results showed that the decomposition temperature of the modified SB was increased to 130°C. This method slightly increased the decomposition temperature of SB, but the distribution of CA on the surface of SB was not uniform. Cao et al. 
(9) have investigated a modified SB coated with erucamides solution and found that the initial decomposition temperature of SB was raised from 115°C to 165.8°C, and the decomposition temperature range narrowed from 61°C to 46.7°C after the modification. Although these methods can increase the decomposition temperature of SB, it is still lower than the melting temperature of PP. PP is one of the most important general-purpose plastics with excellent mechanical properties, high-heat resistance, and excellent characteristics and is nontoxic, cheap, and has been widely used in automobiles, home appliances, construction, and other fields (10)(11)(12). Based on these considerations, it is of great significance to explore new modification methods and to improve the decomposition temperature of SB and its application as a foaming agent of polyolefin materials. Epoxy resin is a kind of macromolecular thermosetting polymer with good viscosity, thermal stability, and mechanical strength accompanied, so it is widely used in various industrial fields such as adhesives and paints (13,14). In this paper, to improve the thermal properties of SB, epoxy resin with several organic acids was coated on the surface of SB. The effect of surface modification of SB by EP and different organic acids on its thermal properties was systematically investigated, and the synthesized SB-based capsules were applied to the foaming of PP, and the foaming behavior of modified SB was further explored. This study not only conforms to the concept of environmentally friendly materials, but also provides direction guidance for the development of PP foaming products. Synthesis of the various SB-based capsules Firstly, SB was grinded with a ball mill and selected to collect the size fraction below 400 meshes. 1 g of EP was dissolved in 120 mL of ethyl alcohol and ultrasound around 5 min to obtain homogeneous mixture. The mixture of EP was placed in a 250 mL three-necked flask equipped with a magnetic stirrer; the flask was placed in an oil bath. Sodium bicarbonate (5 g) was added into the above solution and stirred for 20 min at room temperature. Subsequently, curing agent of TETA (10% mass fraction relative to EP), dispersing agent of SDBS, and catalytic agent of DMP-30 were dispersed in the above liquid and maintained to allow for 40 min at 40°C in an oil bath. The mixture was then heated to 70°C and sequentially stirred for 6 h. (It is worth mentioning that the weight loss rate of pure sodium bicarbonate is 3.18% at 70°C for 6 h; therefore, the longer heat treatment time in the preparation process has little effect on the later preparation, as shown in Figure A1). Finally, the obtained composite (EP@SB) was washed with deionized water and ethanol and dried in an oven at 60°C for 48 h for later use. The CA was mixed in a 250 mL three-necked flask with 120 mL of ethanol, followed by the addition of the nitric acid to adjust the pH to 2. The EP@SB composite was added to the above mixed liquid. The solution was kept stirring for 15 min to obtain homogeneous mixture at room temperature and then heated to 70°C for 7 h. Finally, the reaction was cooled at room temperature, and the obtained composite (CA/EP@SB) was washed three times with ethanol and distilled water to remove impurities and dried in an oven at 60°C for 48 h. Similarly, the experiments were prepared with PA and SA compounds to obtain the samples of PA/EP@SB and SA/EP@SB. 
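As a convenience for reproducing the coating step, the small helper below scales the stated proportions of the EP@SB recipe (1 g EP and 120 mL ethanol per 5 g SB, with TETA at 10 wt% of the EP mass) to an arbitrary SB batch; the function and the example batch size are illustrative, and the SDBS and DMP-30 amounts are not included because they are not quantified in the text.

```python
def scale_coating_recipe(sb_mass_g: float) -> dict:
    """Scale the EP@SB coating recipe to a given mass of sodium bicarbonate.

    Ratios follow the synthesis description: 1 g EP and 120 mL ethanol per
    5 g SB, with the TETA curing agent at 10 wt% of the EP mass. SDBS and
    DMP-30 amounts are not quantified in the text, so they are left out.
    """
    factor = sb_mass_g / 5.0
    ep_mass = 1.0 * factor
    return {
        "SB (g)": sb_mass_g,
        "EP (g)": ep_mass,
        "ethanol (mL)": 120.0 * factor,
        "TETA (g)": 0.10 * ep_mass,
    }

# Example: a 20 g SB batch.
print(scale_coating_recipe(20.0))
# -> {'SB (g)': 20.0, 'EP (g)': 4.0, 'ethanol (mL)': 480.0, 'TETA (g)': 0.4}
```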
Preparation of the foamed masterbatch and substrate Different SB-based capsules were blended with LDPE at a weight ratio of 1:9 in a torque rheometer (XSS-300, Shanghai Kechuang Rubber & Plastic Machinery Equipment Co., Ltd), and these composites were mixed for about 6 min at a temperature of 100°C (SB did not decompose in this process, as confirmed in Figure A2). The above composites were cooled at room temperature and crushed with a pulverizer to acquire the different types of foam masterbatches. The blends of PP with 3 wt% MMT were made with a twin-screw extruder (TSE40A, Nanjing Ruiya Polymer Equipment Co., Ltd). The melting temperatures of PP/MMT from zone 1 to zone 10 were 170°C, 173°C, 176°C, 180°C, 185°C, 186°C, 187°C, 189°C, 190°C, and 193°C, respectively. Afterwards, the PP/MMT extrudate was cooled in a water bath, cut into small particles, and dried at 80°C for 12 h in a vacuum oven. Preparation of the PP foaming materials PP/MMT mixed with 15 wt% of the previously prepared masterbatch of SB-based capsule foaming agents was injected using an injection molding machine with feeding zone temperatures of 190°C, 189°C, 188°C, and 187°C, respectively; the injection speed was 95% with an injection pressure of 40 MPa. Then, the dumbbell-shaped PP/SB, PP/EP@SB, PP/PA/EP@SB, PP/SA/EP@SB, and PP/CA/EP@SB foaming products were obtained; they were designated as PP-1, PP-2, PP-3, PP-4, and PP-5, respectively. Foam characterization The densities of the PP/MMT blends and the foamed samples were measured using a water displacement method. The cell size was determined by SEM, and at least 100 cells were measured to acquire the average cell diameter. Image-Pro Plus software was used to analyze the SEM images, and the average diameter and cell density were calculated as follows (14,15): N0 = (nM^2/A)^(3/2) × 1/(1 − Vf), with Vf = 1 − ρf/ρ, where N0 was the cell density per unit volume, ρ and ρf were the densities of the sample before and after foaming, respectively, n was the number of cells observed under the microscope, Vf was the porosity percentage, A was the area of the field observed under the microscope, and M was the magnification factor. Characterizations Fourier-transform infrared spectroscopy was conducted on an FT-IR spectrometer (NEXUS670, Thermo Nicolet, USA) over the range of 500-4,000 cm^-1. The thermal properties of the SB-based capsules were evaluated by thermogravimetric analysis (TGA, Q50, Perkin Elmer, USA), conducted from 40°C to 600°C at 10°C/min under an N2 atmosphere. The thermal stability of the modified SB composites was characterized by differential scanning calorimetry (DSC, Q10, Perkin-Elmer, USA); the samples were first heated from room temperature to 80°C and held for 10 min to eliminate the mass error of the sample, then cooled to room temperature and heated to 300°C at a rate of 10°C/min under an N2 atmosphere. Nitrogen adsorption measurements were conducted on a TriStar II 3020 instrument (Micromeritics). The morphology of the modified SB composites was observed by SEM (KYKY-EM6200, Beijing Science and Technology Instrument Co., Ltd.). Transmission electron microscopy (TEM, Tecnai F20, FEI, USA) was used to study the shell thickness of the capsules. The thermal conductivity of the samples was tested using a hot-wire thermal conductivity meter (TC3000, Xiaxi Co., Ltd., China) at room temperature. Results and discussion Structure of the synthesized sodium bicarbonate capsules To better understand the structure of the modified SB, a series of characterizations were conducted. 
As shown in Figure 1, in the FT-IR spectra, the prominent peaks at 835 cm −1 and 696 cm −1 in SB were assigned to the stretching vibration of CO 3 2− (Figure 1b), and the peak at 1,509 cm −1 in EP was corresponding to the stretching vibration of benzene ring (16,17). In the pure SA, PA, and CA, the peaks at 2,923 cm −1 and 2,849 cm −1 were originated from the symmetrical stretching vibration of the C-H of CH 2 and CH 3 (17,18) (Figure 1c). Moreover, the peak at 1,708 cm −1 was ascribed to the stretching of the C]O groups of the several organic acids, and the broad region of the CA spectrum at the range of 3,290-3,500 cm −1 came from the contribution of -OH bonds. After modification, as mentioned above, only absorption peak at 1,703 cm −1 was disappeared and others peaks still retained, whereas an obvious peak at 1,386 cm −1 corresponding to -COO groups was appeared (19) (Figure 1d), which may appear from the chemical reaction between organic acid and epoxy resin, indicating that the organic acid was grafted on the surface of the epoxy coating. FT-IR spectra indicated that the epoxy resin and organic acids were successfully coated on the surface of SB. The morphology of the SB-based capsules was observed by SEM, as shown in Figure 2. It could be seen that the pure SB exhibited smooth surface ( Figure 2a and a 1 ). After modified with EP, the surface of EP@SB presented a lot of wrinkles without visible agglomeration and exposed SB particles, indicating that the SB was effectively enclosed in EP (Figure 2b and b 1 ). Moreover, the EP@SB was further modified by a series of organic acids that could interact with EP. As shown in Figure 2c-e, the surface of the secondary modified products of PA/EP@SB, SA/EP@SB, and CA/EP@SB became rougher than for EP@SB. This may be due to the hydrogen bond interaction between organic acids and EP, which made the molecular chain of EP become dense, and the intertwined molecular chains made the surface rough (20). These data proved that the EP and some organic acids were successfully attached on the surface of SB, and the rough surface with intertwined molecular chains could improve the property of heat insulation of the outer shells. Moreover, in order to identify the dispersion of SB-based compounds before and after modification, N 2 adsorption isotherms were used. According to the isotherm adsorption curve and Brunauer-Emmet-Teller (BET) equation fitting, the specific surface area of SB, EP@SB, PA/EP@SB, CA/EP@SB, and SA/EP@SB was calculated as 5.82, 6.45, 9.89, 9.32, and 7.78 m 2 /g, respectively, as shown in Figure A3, which indicated the good dispersion of SB-based compounds. To further prove the morphology of the modified SB, TEM was conducted. Herein, sample of EP@SB was used as example, as shown in Figure 2b 1 , the EP@SB showed an irregular shell profile, in which core of SB is more contrast, and shell of EP coating layer is less contrast, demonstrating that EP was successfully coated on the surface of SB. Figure 2c 1 -e 1 displayed an obvious boundary between the core and the shell, and their shell was rougher than observed in Figure 2b 1 , which also proved that the organic acids were grafted on the EP. These results further suggested that the epoxy resin and organic acids were successfully coated on the surface of sodium bicarbonate. Thermal properties of the synthesized SB-based capsules The influences of epoxy resin and organic acids coating on the thermal behavior of SB were characterized by TG. 
As shown in Figure 3a and Table 1, the initial decomposition temperature (T0) of the pure SB was 119°C, with 63% residue at 600°C. In the curve of EP@SB, weight loss was observed in the temperature ranges of 150-200°C and 250-600°C. Among them, the first broad temperature range was attributed to the decomposition of SB, while the second wide temperature range was due to the partial and complete decomposition of the epoxy resin. It could be seen that the initial decomposition temperature of EP@SB was 36°C higher than that of pure SB, which was mainly due to the heat transfer through the outer epoxy resin shell, causing the gradual thermal decomposition of SB (21,22). When the temperature was higher than the decomposition temperature of the epoxy, the outer capsule of epoxy resin swelled until the gas escaped. To further improve the thermal performance of EP@SB, a variety of organic acids were coated on the basis of EP@SB. It could be seen that the T0 of PA/EP@SB, SA/EP@SB, and CA/EP@SB was 178°C, 174°C, and 165°C, with weight losses of 28%, 24%, and 30%, respectively. The T0 was increased by nearly 46-59°C compared with pure SB. Therefore, the effect of the secondary modification with organic acids was better than that of the primary modification with epoxy resin alone, which may be due to some chemical reaction or interaction between the epoxy resin and the organic acids, resulting in an increase of the crosslinking density in the molecular chains and greater difficulty in the movement of the macromolecular chains. These results showed that we successfully enhanced the thermal stability and improved the thermal insulation performance of SB. The thermal stability of the pure SB, EP@SB, PA/EP@SB, SA/EP@SB, and CA/EP@SB samples was also explored by DSC. It could be observed from Figure 3b and Table 1 that the maximum peak temperature (Tp) of pure SB was 152°C, with a decomposition temperature range (ΔT) of 69°C. After being modified by epoxy resin, the ΔT decreased from 69°C to 46°C compared to pure SB, and the Tp increased to 171°C, which was attributed to the coating shell on SB, leading to a better barrier effect (23). After further coating with the different organic acids, it could be observed in Table 1 that when PA was added, the maximum peak temperature of PA/EP@SB increased from 152°C (pure SB) to 186°C, suggesting better thermal stability, and the decomposition temperature range was 28°C, which was narrower than that of the pure SB sample by 41°C. This results from the fact that once the heating temperature exceeds the decomposition temperature of the coating shell, the shell decomposes and ruptures instantaneously, the temperature of the SB core rises rapidly, and the temperature at this point corresponds to the maximum endothermic peak of SB. Furthermore, the decomposition temperature ranges of SA/EP@SB and CA/EP@SB were 31°C and 38°C, and the maximum peak temperatures were 186°C and 185°C, respectively. By comparison, the multilayer coating with organic acids was more effective than the epoxy single-layer coating for SB, especially the modification of PA/EP@SB. These results demonstrated that the decomposition temperature range of SB decreased with the different coating layers, and that multiple coating layers gave SB better thermal stability. 
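The thermal parameters discussed above (T0, Tf, weight loss, and ΔT) can be extracted from a TGA mass-loss curve along the following lines; the onset and end-point criteria in this sketch are assumptions chosen for illustration, since the exact determination method is not stated in the text, and the synthetic curve only roughly mimics pure SB.

```python
import numpy as np

def tga_metrics(temp_c: np.ndarray, mass_pct: np.ndarray,
                onset_loss_pct: float = 2.0, end_plateau_pct: float = 0.5):
    """Estimate T0, Tf, and total weight loss from a TGA curve.

    T0 is taken as the temperature where cumulative mass loss first exceeds
    `onset_loss_pct`; Tf as the temperature where the remaining loss falls
    within `end_plateau_pct` of the final value. These thresholds are
    illustrative, not the criteria used in the paper.
    """
    loss = 100.0 - mass_pct
    t0 = temp_c[np.argmax(loss >= onset_loss_pct)]
    final_loss = loss[-1]
    tf = temp_c[np.argmax(loss >= final_loss - end_plateau_pct)]
    return {"T0 (degC)": float(t0), "Tf (degC)": float(tf),
            "weight loss (%)": float(final_loss), "dT (degC)": float(tf - t0)}

# Synthetic single-step mass-loss curve (~37% loss centred near 150 degC),
# only mimicking the shape of a pure SB trace; not measured data.
T = np.linspace(40, 300, 500)
mass = 100 - 37 / (1 + np.exp(-(T - 150) / 10))
print(tga_metrics(T, mass))
```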
The improvement of the SB-based capsules' thermal performance encouraged us to explore the thermal conductivity of the coating shell, and the thermal conductivity of pure EP, EP/CA, EP/SA, and EP/PA was measured with a thermal conductivity meter (TC300, Xi'an Xiaxi Electronic Technology Co., Ltd.). As shown in Figure 3c, the thermal conductivity of pure EP was 0.0639 W/m K at room temperature. After introducing the different organic acids, the thermal conductivity of EP slightly decreased, and the thermal conductivity of PA/EP, SA/EP, and CA/EP was 0.0603, 0.0604, and 0.0624 W/m K, respectively. This result may be due to the interaction between EP and PA, leading to the scattering of heat-carrier phonons (24,25). Another reason may be that the addition of organic acids increased the density of the epoxy polymer chains, disrupted the continuity of the epoxy polymer, and thus changed its thermal conductivity (26,27). In all, the addition of the organic acid coating reduced the thermal conductivity of the samples, which gave the SB good thermal insulation performance and further improved the thermal behavior of SB.

Table 1: Thermal parameters of pure SB and the SB-based capsules obtained from TGA (T0, Tf, L) and DSC (T0, Tp, Tf, ΔT).

Sample     | T0 (°C, TGA) | Tf (°C, TGA) | L (%) | T0 (°C, DSC) | Tp (°C, DSC) | Tf (°C, DSC) | ΔT (°C)
NaHCO3     | 119 | 186 | 37 | 119 | 152 | 188 | 69
EP@SB      | 155 | 180 | 32 | 151 | 171 | 197 | 46
PA/EP@SB   | 178 | 202 | 28 | 173 | 186 | 201 | 28
SA/EP@SB   | 174 | 199 | 24 | 169 | 186 | 200 | 31
CA/EP@SB   | 165 | 193 | 30 | 165 | 185 | 205 | 38

T0: initial decomposition temperature; Tf: end thermal decomposition temperature; L: weight loss of the various SB-based capsules and pure SB; Tp: peak endothermic temperature; ΔT: thermal decomposition temperature range, ΔT = Tf − T0.

Foaming behavior and cell morphology The good thermal insulation performance of the SB-based capsules encouraged us to apply the modified SB as a blowing agent for polyolefin foaming. The foaming process of SB in polyolefin mainly comprised the formation of a stable gas-melt homogeneous system, cell nucleation, cell growth, and cell stabilization (28). Firstly, to observe the foaming behavior of the SB-based capsules in PP, SEM was conducted. The average cell diameters and cell densities are presented in Figures 4 and 5a and b. Figure 4a1 showed the phenomenon of cell merging and collapse, with large and few cells. The cell diameter was 163 μm and the cell density was 0.872 × 10^5 cells/cm^3. This may be due to the rapid decomposition of SB, whose gas could not be enriched in the PP polymer. In contrast, the cell size shown in Figure 4b1 was relatively small, and the cell diameter and cell density were 142 μm and 0.961 × 10^5 cells/cm^3, respectively. Compared with Figure 4a1, the cells shown in Figure 4b1 were more regular, which may be due to the increased thermal stability and foaming temperature of EP@SB (29). Besides, the melt strength of the polymer matrix was enhanced by EP, and MMT and EP had a heterogeneous nucleation effect, resulting in a decrease of cell size and an increase of cell density (30,31). The presence of a less-foamed stage in Figure 4a1 and b1 was possibly due to the lower initial decomposition temperatures of EP@SB and pure SB compared with the melting temperature of PP, resulting in bubble nucleation during cell formation (32)(33)(34). In other words, SB had already released gas before some of the PP had completely melted. In addition, with the introduction of organic acids into EP@SB, the cell densities and cell sizes of the PP/SA/EP@SB and PP/CA/EP@SB samples shown in Figure 4d and e were 1.20 × 10^5 and 1.18 × 10^5 cells/cm^3, and 118 μm and 125 μm in diameter, respectively. 
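The cell density and weight-reduction figures quoted in this section follow from simple ratios of the quantities defined in the Foam characterization section; the sketch below assumes the relation commonly paired with those variable definitions, N0 = (nM^2/A)^(3/2)/(1 − Vf) with Vf = 1 − ρf/ρ (refs. 14,15), and all input numbers are illustrative rather than measured values from the paper.

```python
def cell_density(n_cells: int, magnification: float, area_cm2: float,
                 rho_solid: float, rho_foam: float) -> float:
    """Cell density per cm^3, using N0 = (n*M^2/A)^(3/2) / (1 - Vf)
    with the void fraction Vf = 1 - rho_foam/rho_solid."""
    vf = 1.0 - rho_foam / rho_solid
    return (n_cells * magnification ** 2 / area_cm2) ** 1.5 / (1.0 - vf)

def weight_reduction_pct(mass_unfoamed_g: float, mass_foamed_g: float) -> float:
    """Weight reduction of a molded part after foaming, in percent."""
    return 100.0 * (mass_unfoamed_g - mass_foamed_g) / mass_unfoamed_g

# Illustrative inputs only (not taken from the paper's micrographs):
# 100 cells counted at 50x magnification over a 0.1 cm^2 field, with sample
# densities of 0.90 g/cm^3 before and 0.72 g/cm^3 after foaming.
print(f"N0 = {cell_density(100, 50, 0.1, 0.90, 0.72):.3e} cells/cm^3")

# A 10.00 g part foamed to 7.83 g corresponds to ~21.7 wt% weight reduction,
# close to the value reported for PP/PA/EP@SB.
print(f"weight reduction = {weight_reduction_pct(10.00, 7.83):.1f} wt%")
```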
Notably, in the case of PA modifier, it can be clearly shown in Figure 4c 1 that there were many regular, uniform, and small cells in PP, and the cell density and cell size were 1.31 × 10 5 cells/cm 3 and 108 μm in diameter, respectively. This result was better than the reported melting modification method, as shown in Figure A4. Compared with Figure 4a 1 , the average cell diameter in Figure 4c 1 reduced from 163 to 108 μm; the cells density increased from 0.872 × 10 5 to 1.31 × 10 5 cells/cm 3 . Based on the above data analysis, it could be concluded that under the same amount of foaming agent, the foaming effect of organic acid-modified sodium bicarbonate in PP was superior than that of epoxy modified, and the long alkyl chain grafting improved the dispersibility of the CO 2 adduct in PP and favored a homogeneous release of CO 2 to blow PP during the exothermic foaming process (35). As presented in Figure 5c, the weight of PP foaming products was reduced after foaming injection molding, and the weight reduction rates of PP/SB, PP/EP@SB, PP/PA/EP@SB, PP/SA/EP@SB, and PP/CA/EP@SB were 9.7, 14.1, 21.73, 18.47, and 16.30 wt%, respectively. In contrast, PP/PA/EP@SB exhibited the highest weight loss rate than others, indicating that PP/PA/EP@SB had the best foaming effect and the highest gas production, which may be due to the excellent compatibility of PP/PA/EP@SB in the PP foaming system, and its foaming temperature was closed to the melting temperature of PP. The apparent density of these PP foaming samples was also measured, as shown in Figure 5d. It could be seen that the apparent density of these PP materials was significantly decreased after foaming, and the apparent density of PP products using pure SB as foaming agent was the lowest. It was considered that the coating on SB improved the thermal properties of SB and further improved the foaming behavior of SB. Notably, the foamed PP samples prepared by blowing agents of PP/PA/EP@SB presented the lowest apparent density of 0.72 g/cm 3 . This is because the interaction between EP and organic acid, which improved the performance of the coating, thereby further improved the thermal performance of SB, in turn affecting the foaming behavior of SB in PP. These data indicated that in the PP foaming system, by coating the foaming agent of SB with polymer or increasing the strength of the SB coating compound, the early decomposition of SB in PP can be restricted to achieve better foaming performance. This study not only conformed to the concept of environmentally friendly materials, but also provided direction guidance for the development of PP foaming products. Conclusion In summary, the green environmental SB foaming agent was modified by epoxy resin and some different organic acids. The modification of SB and its effect on foaming behavior of PP were investigated. After the modification, the decomposition temperature and decomposition temperature ranges of SB were significantly improved. In particular, compared with the pure SB, the decomposition temperature of PA/EP@SB composite increased from 119°C to 179°C, increased by nearly 50°C, and the decomposition temperature range was 41°C and 23°C lower than that of pure SB and EP@SB, respectively. Meanwhile, the foaming quality of PP was also significantly improved, and the cells were more uniform with smaller cells size and higher cells density compared to unmodified SB in PP. 
The results mainly ascribed to the addition of shells with EP and organic acids, which improved the thermal properties of SB, as well as enhanced the melt strength of PP. This study highlights that increasing the strength of the shell can enhance the decomposition temperature of SB and improve its foaming behavior in PP.
v3-fos-license
2018-05-23T13:16:44.654Z
2018-05-22T00:00:00.000
43956333
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41419-018-0644-4.pdf", "pdf_hash": "d90a1ee9c667974aca34c0863b088e737b5b1541", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44946", "s2fieldsofstudy": [ "Biology", "Chemistry", "Medicine" ], "sha1": "d90a1ee9c667974aca34c0863b088e737b5b1541", "year": 2018 }
pes2o/s2orc
TRIM50 suppressed hepatocarcinoma progression through directly targeting SNAIL for ubiquitous degradation Tripartite motif-containing 50 (TRIM50) belongs to the tripartite motif (TRIM) protein family, which has been implicated in the pathogenesis of multiple cancers. However, the role of TRIM50 in hepatocellular carcinoma (HCC) remains to be clarified. Here we showed that TRIM50 expression was significantly decreased in liver cancer tissues compared with corresponding non-cancerous liver tissues, and its decreased expression was significantly correlated with advanced disease progression. Gain-of-function assay by exogenous overexpression of TRIM50 in HCC cells showed that proliferation, colony formation, migration and invasion of HCC cells were significantly inhibited, whereas loss-of-function assay by TRIM50 knockdown showed that these malignant behaviors of HCC cells were significantly increased. Further investigation showed that TRIM50 could directly bind with SNAIL and induced K-48 linked poly-ubiquitous degradation of SNAIL protein, which further reversed SNAIL-mediated epithelial-to-mesenchymal transition (EMT) process of HCC cells. In vivo assay by xenograft tumor model verified the antitumor effect of TRIM50 on HCC. Taken together, these results showed that TRIM50 acted as a tumor suppressor in HCC cells by directly targeting SNAIL and reversing EMT, which further indicated that positive modulation of TRIM50 might be a novel therapeutic strategy for SNAIL overexpressed HCC cells. Introduction Hepatocellular carcinoma (HCC) is the primary malignancy of the liver and the third leading cause of cancerrelated death worldwide [1][2][3] . Most of the patients are diagnosed at late stages with limited therapeutic options. Identifying novel disease marker and clarifying the pathological mechanism will provide new insight into this disease and facilitate discovery of novel therapeutic strategies. In recent years, the role of tripartite motif (TRIM) proteins in the development of cancer has attracted much research interest, and novel tumor promoters and tumor suppressors have been identified in TRIM family members 4,5 . TRIM protein family includes >70 highly conserved proteins, which are usually composed of a RING (R) domain, one or two B-boxes (B) domain(s) and a predicted coiled coil (CC) domain 6,7 . TRIM proteins have been reported to play important roles in development, inflammation, anti-virus immunity and cancer 8 . Several TRIM family members were identified to play important roles in the development of liver cancer, which demonstrated that they might have potential applications as novel therapeutic targets or prognostic markers. Tripartite motif-containing 50 (TRIM50) is a newly identified member of TRIM family and it was first identified as an E3 ubiquitin ligase in Williams-Beuren syndrome 9 . Later reports indicated that TRIM50 promoted the formation of sophisticated canaliculi and microvilli during acid secretion in parietal cells 10 . Another two reports suggested that TRIM50 interacted with HDAC6 and was involved in the regulation of P62 degradation 11,12 . Up to now, reports about the function of TRIM50 is very limited, and its biological function is far from being clarified. The role of TRIM50 in carcinogenesis has never been identified. Because of its recognized E3 ligase activity in diseases, we expected it might be involved in the regulation of the development of HCC. 
In the study, we detected the expression of TRIM50 in clinical HCC specimen, analyzed the correlation of TRIM50 expression with disease progression, and further investigated its role in tumor growth, migration, and invasion of HCC cells. All these data revealed that TRIM50 acted as a tumor suppressor in HCC via directly targeting SNAIL and reversing epithelial-to-mesenchymal transition (EMT) process. Thus, this work provided a novel insight into the development of hepatocarcinoma and indicated a novel strategy for the treatment of SNAIL overexpressed HCC cells. TRIM50 was downregulated in HCC tissues and its decreased expression was correlated with advanced disease progression To explore whether expression of TRIM50 in HCC tissues was altered during the development of liver cancer, we detect the levels of TRIM50 in HCC tissues and corresponding non-cancerous liver tissues by immunohistochemistry (IHC), quantitative real-time polymerase chain reaction (qRT-PCR), and western blot. We first detected TRIM50 expression by IHC in HCC tissues and corresponding non-cancerous liver tissues from 79 clinical HCC patients, and our data showed that TRIM50 expression was significantly decreased in the liver cancer tissues compared with corresponding distal noncancerous liver tissues (Fig. 1a, Supplementary Table 1). To further clarify whether decreased expression of TRIM50 in HCC tissues contributed to disease progression, we further analyzed the correlation between TRIM50 expression and clinical disease status in these IHC staining data. Statistical analysis showed that patients with poorly differentiated tumors, as well as patients with metastasis were prone to have lower levels of TRIM50 expression (Fig. 1b, c). Then, we did qRT-PCR assay in a cohort of 51 HCC patients and western blot assay in another cohort of 52 HCC patients. Both the qRT-PCR data (Fig. 1d) and western blot data (Fig. 1e) verified the IHC data, which showed that TRIM50 expression was significantly decreased in HCC tissues compared with corresponding non-cancerous liver tissues. Further assay of western blot data showed that patients with advanced Tumor Lymph Node Metastasis stages (TNM stages), Barcelona Clinic Liver Cancer stages (BCLC stages) and metastasis were prone to have lower levels of TRIM50 expression (Fig. 1f, h). Altogether, these data indicated that TRIM50 was downregulated in HCC tissues and its decreased expression contributed to HCC progression. TRIM50 inhibited proliferation, colony formation, and invasion of HCC cells To explore the effect of TRIM50 on the malignant behaviors of HCC cells, we constructed gain-of-function model by transfection of TRIM50 into HCC cells, and loss-of-function model by transfection of small interference RNA against TRIM50 into HCC cells. Western blot data showed that the protein levels of TRIM50 were lower in BEL7402 cells and HUH7 cells compared with those in HepG2 cells and SMMC7721 cells (Fig. 2a). Thus, BEL7402 and HUH7 cells were transfected with TRIM50 plasmid to construct the gain-of-function cellular model; and HepG2 cells and SMMC7721 cells were transfected with small interference RNA against TRIM50 (Si-TRIM50) to construct the loss-of-function model. These cellular models were investigated to define the effect of TRIM50 on the malignant behaviors of HCC cells. Our data showed that after successful overexpression of TRIM50 in HCC cells (Fig. 2b), proliferation, colony formation, migration, and invasion capabilities of HCC cells were significantly inhibited ( Fig. 2c-f). 
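The significance statements in this section rest on two-group comparisons of quantitative readouts (band intensities, colony counts, OD values); a minimal sketch of such a comparison is shown below, assuming a simple unpaired t-test on simulated triplicate colony counts, since the text does not specify which statistical tests were applied.

```python
import numpy as np
from scipy.stats import ttest_ind

# Simulated colony counts from three independent experiments per group;
# the numbers are illustrative, not the study's data.
mock_counts = np.array([215, 198, 224])
trim50_counts = np.array([112, 97, 126])

t_stat, p_value = ttest_ind(trim50_counts, mock_counts)
print(f"TRIM50 vs mock colony formation: t = {t_stat:.2f}, p = {p_value:.4f}")
```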
After successful knockdown of TRIM50 expression by its specific siRNAs (Fig. 2g), the proliferation, colony formation, migration, and invasion capabilities of HCC cells were significantly increased ( Fig. 2h-k). Altogether, both of our gain-of-function model and loss-of-function model support the conclusion that TRIM50 could act as a tumor suppressor to inhibit malignant behaviors of HCC cells. TRIM50 reversed resistance to anoikis of HCC cells Resistance to anoikis is the hallmark of cancer and the prerequisite step for distant metastasis of HCC cells. Our previous data showed that HCC cells resisted to anoikis after anchorage deprival and acquired more malignant properties during its anoikis-resistant process 13,14 . In this study, we are interested to know whether TRIM50 also plays a role in the resistance to anoikis of HCC cells. Our data showed that TRIM50 overexpression significantly decreased cell viabilities in the anchorage-deprived HCC cells, which indicated that TRIM50 could reverse resistance to anoikis of HCC cells (Supplementary Figure 1A). Caspase cascade assay further verified that TRIM50 reversed anoikis resistance of HCC cells and induced apoptotic cell death after anchorage deprival (Supplementary Figure 1B and 1C). Our previous data showed that during the process of anoikis resistance, the malignant behaviors of HCC cells were also significantly increased 13,14 . Thus, we are interested to verify whether these malignant behaviors of anoikis-resistant HCC cells could also be influenced by TRIM50. Our data showed that TRIM50 overexpression significantly inhibited the colony formation and invasion capabilities of anchorage-deprived HCC cells (Supplementary Figure 1D and E). These data further support the role of TRIM50 in HCC cells as a tumor suppressor. TRIM50 exerted its antitumor effect through directly targeting SNAIL and reversing EMT TRIM family members usually take their effects by direct binding with target proteins and exert their function through modulation of target molecules. To further define the molecular mechanism of TRIM50 in the regulation of HCC progression, we tested a series of molecules, which might be involved in the process of carcinogenesis to define the target of TRIM50 by immunoprecipitation (data not shown). Our immunoprecipitation data showed that TRIM50 could bind with SNAIL protein (Fig. 3a). Further immunofluorescence (IF) data showed that TRIM50 and SNAIL could colocalized in HCC cells (Fig. 3b), which indicated the interaction between TRIM50 and SNAIL in HCC cells. To further define whether the interaction between TRIM50 and SNAIL is a direct binding effect, we did immunoprecipitation assay with an in vitro transcription and translation system as described before 15 . Our data showed that TRIM50 protein could directly interact with SNAIL protein as detected by the in vitro translation system (Fig. 3c), which indicated that TRIM50 could bind with SNAIL protein directly. Thus, these data indicated that TRIM50 might exert its function through its direct interaction and modulation of SNAIL. To further define whether TRIM50 could have any influence on the expression of SNAIL, we detected the protein levels of SNAIL in TRIM50 overexpressed cells and TRIM50 knockdown cells by western blot and mRNA level of TRIM50 by real-time PCR. Our data verified that TRIM50 could negatively regulate SNAIL expression at the protein level but not at the mRNA level (Fig. 3d, e). 
This negative regulation of SNAIL by TRIM50 was also verified in the clinical HCC patients (Supplementary Figure 2). Further cyclohexamide (CHX) chase assay showed that TRIM50 increased the degradation of SNAIL after de novo protein synthesis was blocked (Fig. 3f). When we put proteasome inhibitor MG132 in HCC cells, the negative regulation of SNAIL by TRIM50 was significantly rescued, which indicated that TRIM50 regulated SNAIL by proteasome mediated degradation (Fig. 3g). The presence of RING domain confers E3 ligase activity to TRIM family members, thus we were interested to know whether RING domain was responsible for the negative regulation of SNAIL by TRIM50. Therefore, we transfected HUH7 with TRIM50 RING domain mutant and analyzed its effect on the expression of SNAIL. Our data showed that deletion of RING domain in TRIM50 significantly rescued the negative regulation of SNAIL by TRIM50 (Fig. 3h). Thus, these data indicated that TRIM50 negatively regulated SNAIL expression via its RING domain. Fig. 1 TRIM50 was downregulated in HCC tissues and its expression was inversely correlated with advanced disease progression. a Immunohistochemical staining was used to determine the location and expression of TRIM50 in HCC tissues and corresponding non-cancerous liver tissues from 79 clinical HCC patients. The intensities of the IHC staining were quantitatively analyzed by IPP6 software and statistically analyzed (right panel). b, c TRIM50 protein levels in different pathology grades (b) and different metastasis stages (c) from 79 clinical HCC patients used for IHC staining were statistically analyzed and compared. d mRNA expression of TRIM50 was determined by qRT-PCR in HCC tissues and corresponding non-cancerous liver tissues from 51 HCC patients. e Western blot analysis of protein levels of TRIM50 in the liver cancer tissues and corresponding non-cancerous liver tissues from 52 HCC patients, with GAPDH expression as internal references. The presented images are representative blots from 24 HCC patients. Band intensities of all the investigated patients were measured by Image J software and statistically analyzed (right panel). f-h Statistical analysis of TRIM50 protein level in different TNM stages (f), different BCLC stages (g), and different metastasis stages (h) from HCC patients used for western blot assay. *P < 0.05, **P < 0.01, and ***P < 0.001 for statistical analysis of the indicated groups SNAIL is recognized as a transcription factor, which plays a critical role in the regulation of EMT process and further promotes the development of cancer. It is reported that suppression of E-cadherin is a key step in EMT process, whereas SNAIL is reported to direct repress E-cadherin 16 . Thus, SNAIL-mediated E-cadherin repression is a critical step in the EMT process of cancer 17 . As our data showed the effective suppression of SNAIL by TRIM50 in HCC cells, we are further interested to define whether TRIM50 acts as a tumor suppressor through its suppression of SNAIL-mediated EMT process. Our data showed that when HCC cells were transfected with TRIM50 plasmid, with the The basic protein levels of TRIM50 in BEL7402, SMMC7721, HepG2, and HUH7 cells were detected by western blot. b BEL7402 and HUH7 cells were transfected with TIRM50 expression plasmid or mock control, and western blot assay was performed to define the successful exogenous overexpression of TRIM50 in HCC cells. 
c BEL7402 and HUH7 cells were transfected with TRIM50 plasmid or mock control, and proliferation status of the transfected HCC cells was detected at 0 h, 12 h, 24 h, 36 h, and 48 h by CCK8 assay. d BEL7402, HUH7, and HepG2 cells were transfected with TRIM50 expression plasmid or mock control, and further cultured for 24 h before being transferred to six-well plates at the density of 1000 cells per well for colony formation assay. The clone formations were harvested after 14 days and the number of clone formation was counted. e, f After transfection with TRIM50 plasmid or mock control, transwell migration assay (e) and transwell invasion assay (f) were performed to investigate the migration and invasion capabilities of HCC cells. g HepG2 cells were transfected with siRNAs specifically targeting TRIM50 (Si-TRIM50-1&2, and Si-TRIM50-3), the cells transfected with random sequences (Si-NC) were used as mock control. The cells were further cultured for 24 h before being harvested and the block efficiency was measured by western blot. h HepG2 cells were transfected with siRNAs specifically targeting TRIM50 (Si-TRIM50-1&2) or its nonsense control (Si-NC), and proliferation status of transfected HCC cells were measured at 0 h,12 h, 24 h, 36 h, and 48 h by CCK8 assay. i HepG2 cells and SMMC7721 cells were transfected with Si-TRIM50-1&2 or Si-NC and further cultured for 24 h. The transfected cells were further transferred to six-well plates at the density of 1000 cells per well and allowed to grow for 14 days for colony formation assay. j, k After transfection of Si-TRIM50 or nonsense control (Si-NC) to HepG2 cells, transwell migratory assay (j) and invasion assay (k) of HCC cells were performed. Similar results were obtained in at least three independent experiments. *P < 0.05 and ***P < 0.001 for statistical analysis of the indicated groups suppression of SNAIL expression, expression of the epithelia marker E-cadherin was significantly upregulated, whereas expression of the mesenchyme marker vimentin was significantly downregulated (Fig. 3i). When TRIM50 expression was blocked by its specific interference RNAs, expression of E-cadherin was significantly downregulated, whereas vimentin level was significantly upregulated (Fig. 3j). Further IF assay confirmed the positive regulation of E-cadherin and β-catenin by TRIM50, and negative regulation of N-cadherin and SNAIL by TRIM50 (Fig. 3k). Besides, the phenotypic changes of HCC cells after overexpression of TIRM50 also indicated negative regulation of the EMT process by TRIM50 (Supplementary Figure 3). These data verified that TRIM50 acted as a tumor suppressor through its negative regulation of SNAIL and further reversing the EMT process. TRIM50 induced ubiquitous degradation of SNAIL by K-48 linked poly-ubiquitination Based on the presence of typical RING domain in TRIM50, we speculated that TRIM50 might exert its E3 ligase activities on SNAIL via its RING domain. Thus, we co-transfected the HCC cells with TRIM50 plasmid and HA-UB plasmid, followed by the immunoprecipitation assay to verify whether TRIM50 could put the polyubiquitin chain to SNAIL. Our data showed that at the presence of TRIM50, the poly-ubiquitin chain was successfully put to SNAIL protein, which indicated that TRIM50 regulated SNAIL by its poly-ubiquitous modification of SNAIL protein (Fig. 4a). 
Further assay showed that RING domain deleted TRIM50 mutant failed to put the poly-ubiquitin chain to SNAIL, which indicated that TRIM50 regulated ubiquitous modification of SNAIL via its RING domain (Fig. 4b). Lysine-48 (K-48)-linked polyubiquitous modification is mainly involved in targeting proteins for proteasomal degradation, whereas Lysine-63 (K-63)-linked poly-ubiquitous modification is coupled to mediate non-proteolytic signals, including those regulating subcellular localization, protein activation, and protein interactions 18 . By co-transfection of SNAIL and K-63only or K-48-only ubiquitin constructs into HCC cells, we found that TRIM50 could put the K-48 linked but not K- Fig. 3 TRIM50 exerted its antitumor effect through directly targeting SNAIL and reversing EMT. a BEL7402 and HepG2 cells were transfected with TRIM50 plasmid or mock control, the binding between TRIM50 and SNAIL protein were detected by immunoprecipitation. b HCC cells were cultured for 24 h before immunofluorescence assay to detect the expression and colocalization status of TRIM50 (green) and SNAIL (red). c TRIM50 and SNAIL proteins were separately expressed by the in vitro transcription and translation system, and the direct binding between TRIM50 and SNAIL were analyzed by co-IP assay. d, e BEL7402 cells were transfected with TRIM50 plasmid or mock control, and HepG2 cells were transfected Si-TRIM50 or Si-NC, the transfected cells were further cultured for 24 h. e The protein level of SNAIL was detected by western blot and further quantitatively analyzed (d); and the mRNA level of SNAIL was detected by qRT-PCR (e). f BEL7402 cells were transfected with TRIM50 plasmid or mock control and further cultured for 24 h before cyclohexamide (CHX) was put into the transfected cells. The cells were further cultured for 0 h, 2 h, and 4 h before being harvested for western blot assay of SNAIL expression. The band intensities were further quantitatively analyzed (right panel). g BEL7402 cells and HUH7 cells were transfected with TRIM50 plasmid or mock control, and further cultured for 24 h. Transfected cells were treated with MG132 (10 μM) for 4 h before protein lysates were isolated to detect the expression level of TRIM50 and SNAIL by western blot. h Wild-type Myc-tagged TRIM50 or its RING domain deleted mutant(△RING) was co-transfected with SNAIL into HUH cells, and further cultured for 24 h before being harvested for western blot assay of SNAIL. i BEL7402 cells were transfected with TRIM50 plasmid or mock control, and further cultured for 24 h. The expression of Ecadherin, vimentin, and SNAIL was detected by western blot and quantitatively analyzed. j HepG2 cells were transfected with Si-TRIM50 or Si-NC, and further cultured for 24 h. The expression of E-cadherin, vimentin, and SNAIL was detected by western blot and quantitatively analyzed. k BEL7402 cells were transfected with TRIM50 plasmid or mock control, and the cells were further cultured for 24 h before immunofluorescence assay to detect the expression of E-cadherin, β-catenin, N-cadherin, and SNAIL. Similar results were obtained in at least three independent experiments. **P < 0.01 and ***P < 0.001 for statistical analysis of the indicated groups 63 linked poly-ubiquitin chain to SNAIL protein (Fig. 4c). These data suggested that TRIM50 acted as an E3 ubiquitin ligase and mediated K-48 linked poly-ubiquitous degradation of SNAIL protein. 
To further clarify whether TRIM50-mediated ubiquitous degradation of SNAIL occurred in the nuclear or cytoplasmic compartment, we isolated different compartments of the HCC cells for further analysis. Our immunoprecipitation data showed that TRIM50 could bind with SNAIL in both the nuclear and cytoplasm (Fig. 4d). When we co-transfected TRIM50 and HA-UB plasmid into HCC cells, our data showed that TRIM50 could successfully put the poly-ubiquitin chain to SNAIL in both the nucleic and cytoplasmic compartments. These data indicated that TRIM50 mediated ubiquitous degradation of SNAIL in the cytoplasmic, as well as in the nucleic compartment (Fig. 4e). Collectively, these above data also support our previous results showing that TRIM50 induced K-48 linked poly-ubiquitous degradation of SNAIL in HCC cells. Exogenous overexpression of SNAIL rescued the antitumor effect of TRIM50 Our data showed that TRIM50 directly targeted SNAIL for degradation and further inhibited malignant behaviors of HCC cells. Thus, we are interested to know whether overexpression of SNAIL could rescue the tumorsuppressor role of TRIM50. We co-transfected TRIM50 and SNAIL plasmid into HCC cells, and western blot assay verified successful overexpression of both TRIM50 and SNAIL proteins (Fig. 5a, b). Further assay showed that inhibition of malignant behaviors of HCC cells by TRIM50 overexpression was significantly reversed after transfection with SNAIL plasmid (Fig. 5c, d). These data further confirmed that TRIM50 in these HCC cells prohibited cancer progression through directly targeting SNAIL for degradation. Xenografted tumor model verified the antitumor effect of TRIM50 To further assess the antitumor effect of TRIM50 in vivo, we constructed xenograft tumor models by injection of BEL7402 cells to both flanks of nude mice. When visible tumor appeared, we injected TRIM50 expression plasmid to the left flanks and mock control to the right flanks of the mice. These plasmids were injected to the formed tumor every other day, and the sizes of the formed tumor were also measured until the mice were sacrificed on day 28 after the transplantation. The growth kinetics of the formed tumor showed that transfection of TRIM50 significantly inhibited tumor growth (Fig. 6a). The excised tumors from each group were compared, which showed that TRIM50 overexpressed tumors were much smaller than the mock group (Fig. 6b, c). The average size (d) and weight (e) of the TRIM50 transfected tumors were significantly decreased compared with the mock control group (Fig. 6d, e). qRT-PCR, IHC, and western blot assay further verified that TRIM50 was successfully overexpressed in the TRIM50 plasmid transfected group (Fig. 6f-h). Western blot assay confirmed that SNAIL expression was significantly suppressed in the TRIM50 transfected tumors (Fig. 6g). IHC assay confirmed the positive regulation of E-cadherin and β-catenin by TRIM50, and negative regulation of Ncadherin and SNAIL by TRIM50 (Fig. 6h). Thus, these in vivo data further verified our in vitro data that Fig. 4 TRIM50 induced ubiquitous degradation of SNAIL by K-48 linked poly-ubiquitination. a BEL7402 cells and HepG2 cells were transfected with TRIM50 plasmid and/or HA-UB plasmid as indicated, and further cultured for 24 h. The ubiquitous status of SNAIL was analyzed by co-IP assay. b HUH cells were co-transfected with Myc-tagged TRIM50 plasmid or TRIM50 truncation mutant (△RING) together with SNAIL expression plasmid, and further cultured for 24 h. 
The ubiquitination status of SNAIL was analyzed by co-IP assay. c BEL7402 cells and HepG2 cells were co-transfected with TRIM50 and HA-K-48-UB/HA-K-63-UB plasmids, and the ubiquitination status of SNAIL was analyzed by co-IP. d Cytoplasmic and nuclear fractions were prepared from HepG2 cells, and co-IP assay was used to detect the interaction between TRIM50 and SNAIL in the different fractions of HCC cells. e The ubiquitination status of SNAIL protein in the different fractions of HCC cells was also detected by co-IP. Lamin B1 was used as nuclear internal control, GAPDH was used as cytoplasmic control, and β-actin served as whole-cell loading control. Similar results were obtained in at least three independent experiments. TRIM50 inhibited HCC growth through its suppression of SNAIL.

Discussion

Recent studies indicated that several members of the TRIM protein family are important regulators of carcinogenesis. Among these, TRIM24 and TRIM26 were identified as tumor suppressors in the development of HCC, whereas TRIM31 was identified as a tumor promoter for HCC 5,15,19 . However, the role of TRIM50 in the progression of HCC was unknown. In this study, we identified the tumor-suppressor role of TRIM50 in the development of HCC and clarified its underlying molecular mechanism for the first time. We first investigated the expression of TRIM50 in clinical specimens, and all of our data showed a significantly decreased expression of TRIM50 in HCC tissues, and its expression was inversely correlated with the clinical stage and differentiation status of the patients. These data indicated that decreased expression of TRIM50 may facilitate the development of liver cancer. Further cellular model data showed that the proliferation, colony formation, and invasion capabilities of HCC cells were significantly inhibited after ectopic overexpression of TRIM50 in HCC cells, whereas these malignant behaviors were significantly enhanced after knockdown of TRIM50 in HCC cells. These data indicated the tumor-suppressor role of TRIM50 in HCC cells, and further suggested that loss of TRIM50 in HCC tissues could lead to the progression of liver cancer. Recent reports showed that the function of TRIM proteins often depends on their interactions with other proteins, usually target proteins 20 . In this study, we identified SNAIL as a novel binding partner of TRIM50 in liver cancer cells. At the cellular level, we demonstrated that TRIM50 negatively regulated SNAIL expression. Further investigation showed that TRIM50 could directly bind with SNAIL and induce K-48 linked poly-ubiquitination of SNAIL protein.

were transfected with TRIM50 plasmid and SNAIL plasmid, and the expression of TRIM50 and SNAIL was detected by western blot. c BEL7402 and HUH7 cells were transfected with TRIM50 plasmid or SNAIL plasmid, and transwell invasion assay was performed to detect the invasive capability of these transfected HCC cells. d BEL7402 cells were transfected with TRIM50 expression plasmid or SNAIL expression plasmid, and the cells were transferred to six-well plates at a density of 1000 cells per well for colony formation assay. The colonies were stained after 14 days, and the number of colonies was counted and statistically analyzed. **P < 0.01 and ***P < 0.001 for statistical analysis of the indicated groups
To further clarify whether decreased expression of SNAIL by TRIM50 was required for TRIM50-induced antitumor effect on HCC cells, we reintroduced SNAIL into TRIM50 overexpressed cells and measured its influence on the malignant behaviors of HCC cells. Our data showed that ectopic overexpression of SNAIL significantly rescued the antitumor effect of TRIM50, which further verified that TRIM50 exerted its effect on HCC cells through its negative regulation of SNAIL. SNAIL is a conserved transcription factor playing an essential role in EMT during cancer metastasis. EMT is a critical process involved in cancer progression. A hallmark for EMT is the loss of cell adhesion molecule Ecadherin, and it is reported that SNAIL could directly repress E-cadherin 16,21,22 . Our data showed that overexpression of TRIM50 in HCC cells could increase Ecadherin expression, which indicated that TRIM50 might exert its antitumor effect through reversing SNAILmediated EMT process. It is reported that SNAIL is expressed in both the cytoplasm and nuclear in cancer cells 16,23 . To further clarify the interaction between TRIM50 and SNAIL, we separated different compartments of HCC cells and did the immunoprecipitation assay to identify where this interaction occurred. Our data showed that TRIM50 could bind with SNAIL in both the cytoplasmic and nucleic compartments of HCC cells (Fig. 7). These data indicated that TRIM50 could act as a tumor suppressor by directly targeting SNAIL in both cytoplasmic and nuclear compartments of cancer cells. Like other TRIM family members, TRIM50 has a typical RING domain, which may confer it ubiquitous activation to its target proteins 24 . Ubiquitination is one of the most abundant and versatile post-translation modifications in cells where the ubiquitin is covalently added to lysine residues of target protein. There are several types of ubiquitin modifications with different effects on target proteins. For instance, the K-48 linked polyubiquitination could induce ubiquitous degradation of target proteins, whereas the K-63 linked polyubiquitination could modulate the activation of target proteins 18,25 . To better understand the posttranslational regulation of SNAIL by TRIM50, we performed the immunoprecipitation assay by co-transfection of SNAIL and ubiquitin expression plasmids into HCC cells. Our Fig. 6 Antitumor effect of TRIM50 was verified in xenografted tumor model. BEL7402 cells (10 7 cells) were subcutaneously injected to both flanks of the nude mice. When visible tumor appeared, TRIM50 expression plasmid was injected to the formed tumor in the left flank and mock control plasmid was injected to the tumor in the right flank every other day before the mice were sacrificed on day 28. a The growth curves of tumors with TRIM50 or mock plasmid transfection were analyzed every other day before the mice were sacrificed. b The formed tumors with TRIM50 or mock plasmid transfection were isolated and compared. c Images presented were the representative mice with subcutaneous xenograft tumor. d, e The volume (d) and weight (e) of the formed tumors transfected with TRIM50 plasmid or mock plasmid were analyzed and compared. f mRNA levels of TRIM50 in TRIM50 or mock control transfected tumors were analyzed by qRT-PCR. g Western blot assay was performed to detect the protein levels of TRIM50 and SNAIL in the formed tumors. h Immunohistochemical staining was used to detect the level of TRIM50, E-cadherin, β-catenin, Ncadherin, and SNAIL in formed tumors. 
*P < 0.05, **P < 0.01, and ***P < 0.001 for statistical analysis of the indicated groups data showed that TRIM50 could successfully put the polyubiquitin chain to SNAIL. Further analysis showed that TRIM50 could induce K-48 linked, but not K-63 linked poly-ubiquitination of SNAIL protein. Thus, we identified SNAIL as a novel important target for TRIM50-mediated poly-ubiquitination, and further analysis verified that TRIM50 induced K-48 linked ubiquitous degradation of SNAIL. In conclusion, we investigated the role of TRIM50 in HCC progression in an integrate investigation system including clinical specimen, cellular model, and animal model. Our study showed that TRIM50 expression was significantly decreased in HCC tissues compared with corresponding distal non-cancerous tissues. Its downregulation was significantly inversely correlated with disease progression, which indicated its involvement in the development of cancer. Further in vitro and in vivo study verified the antitumor effect of TRIM50 on HCC cells was mediated by its K-48 linked poly-ubiquitous degradation of SNAIL protein. Altogether, this study provided clues to understand the pathogenesis of HCC, and it indicated that therapeutic strategy by upregulating TRIM50 in SNAIL overexpressed cancers may pave a new avenue for manipulating liver cancer. Tissue samples Paired samples of HCC tissues and corresponding non-cancerous liver tissues from the Department of Hepatobiliary Surgery of the Provincial Hospital Affiliated to Shandong University were used for detection of TRIM50 expression. Among them, 79 pairs of liver cancer and corresponding non-cancerous tissues were used for IHC assay, 52 pairs of matched specimens were used for western blot assay, and 51 pairs of matched specimens were used for qRT-PCR. These procedures dealing with human specimen were approved by Shandong University Research Ethics Committee and all the protocols dealing with the patients met the ethical guidelines of the Helsinki Declaration. Written informed consent was obtained from each patient before participation and approved by the ethics committee of Shandong University. Details of the clinicopathologic characteristics of these recruited HCC patients were shown in Table 1. Immunohistochemistry IHC was performed to detect the expression and location of TRIM50 on paraffin sections of HCC tissues and non-cancerous liver tissues. IHC staining and evaluation were performed according to the procedure described before 26,27 . Specific antibody against TRIM50 (ab174880) was from Abcam company (Cambridge, MA, USA). Immunohistochemical staining was evaluated using Image-Pro Plus v6.2 software (Media Cybernetics, Inc., Bethesda, MD, USA). For accurate reading of the staining, we used the same setting for all the analyzed fields. Integrated optical density (IOD) was measured in all investigated fields, and density of positive staining was evaluated as IOD/the total area of each field. Quantitative real-time PCR Total RNA was extracted from liver cancer tissues and qRT-PCR was performed as described before 15,27 . Primers for human TRIM50 gene were forward: 5′-CCCAT TTGCCTGGAGGTCTTC-3′, reverse: 5′-CAGGACAG CATAGCTCGGAG-3′. Relative gene expression levels were normalized to β-actin. Primers for β-actin gene were forward: 5′-GGCACCACACCTTCTACAATG-3′, reverse: 5′-TAGCACAGCCTGGATAGCAAC-3′. Fig. 7 Working model of the role of TRIM50 in HCC progression. TRIM50 was significantly downregulated in HCC cells and its decreased expression further promoted HCC progression. 
Further investigation showed that TRIM50 could target SNAIL for K-48 linked poly-ubiquitination and degradation and thus reverse SNAIL-mediated epithelial-to-mesenchymal transition (EMT). Altogether, loss of TRIM50 in HCC cells led to upregulation of the EMT process and further promoted the malignant behaviors of HCC cells, including proliferation, colony formation, anoikis resistance, and invasion, thus promoting HCC progression.

The relative mRNA levels of target genes were obtained using the 2^(-ΔΔCt) method, with all assays performed in triplicate.

Cell culture, transfection and IF

All of the HCC cell lines, including BEL7402, SMMC7721, HepG2 and HUH7 cells, were obtained and cultured as previously described 15 . HCC cells grown on normal plates and on poly-2-hydroxyethylmethacrylate (poly-HEMA)-coated plates were established as attached cells and detached, anchorage-deprived cells, respectively 13,14,28 . The TRIM50 plasmid was synthesized by OriGene (OriGene Technologies, Maryland, USA). The RING domain-deleted mutant of TRIM50 was generated using the KOD-Plus-Mutagenesis kit (Toyobo, Osaka, Japan) according to the manufacturer's protocol. The small interfering RNAs targeting TRIM50 and SNAIL were synthesized by RIBOBIO (RIBOBIO, Guangzhou, China). Transfection and IF assays were performed as previously described 15 .

In vitro binding assay

The direct interaction between TRIM50 protein and SNAIL protein was assessed using a TNT Quick Coupled Transcription and Translation System (Promega, Madison, WI, USA) according to the manufacturer's protocol. The TRIM50 and SNAIL proteins were expressed, mixed together, and analyzed by immunoprecipitation with the TRIM50 antibody, followed by western blot with the SNAIL antibody to determine the direct binding of TRIM50 and SNAIL proteins.

Subcellular fractionation

Extraction and isolation of nuclear and cytoplasmic protein from HCC cells were performed with the Nuclear and Cytoplasmic Protein Extraction Kit (Beyotime, Jiangsu, China) according to the manufacturer's protocol.

In vivo tumor growth assay

Five-week-old immunodeficient male BALB/c athymic nude mice (Huafukang Biotechnology Ltd, Beijing, China) were used for construction of the xenograft tumor model as described before 15,26 . When visible tumors appeared, we injected 30 μg of pCMV-TRIM50 and of the empty pCMV vector control into the tumors in the respective flanks once every other day until the mice were sacrificed by cervical dislocation. The tumors from the TRIM50-transfected group and the mock control group were then isolated and analyzed.

Statistical analysis

Statistical analysis was performed with SPSS 16.0 software (SPSS, IL, USA) and GraphPad Prism software (version 5.0). The χ2-test was employed to compare qualitative variables. Analysis of quantitative variables was performed using Student's t-test or one-way analysis of variance (ANOVA). Data are presented as mean ± S.D. A P-value < 0.05 was considered statistically significant for all tests, and all statistical tests were two-sided.
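For readers who want to reproduce the relative-expression arithmetic, the short Python sketch below implements the 2^(-ΔΔCt) calculation named in the qRT-PCR section above. The use of β-actin as the reference gene follows the methods; the Ct values, sample pairing, and variable names are hypothetical placeholders for illustration, not data from the study.

```python
# Minimal sketch of the 2^(-ΔΔCt) calculation described above.
# Ct values below are hypothetical placeholders, not data from the study.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of the target gene (e.g. TRIM50) in a sample relative to a
    control, normalized to a reference gene (e.g. beta-actin)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # ΔCt (sample)
    delta_ct_control = ct_target_control - ct_ref_control   # ΔCt (control)
    delta_delta_ct = delta_ct_sample - delta_ct_control     # ΔΔCt
    return 2 ** (-delta_delta_ct)

# Example: triplicate Ct measurements for one hypothetical tumor/non-tumor pair,
# given as (TRIM50 Ct, beta-actin Ct) tuples.
tumor = [(28.1, 17.2), (28.4, 17.1), (28.0, 17.3)]
normal = [(25.9, 17.0), (26.1, 17.2), (25.8, 17.1)]

fold_changes = [
    relative_expression(t_ct, t_ref, n_ct, n_ref)
    for (t_ct, t_ref), (n_ct, n_ref) in zip(tumor, normal)
]
print(fold_changes)  # values < 1 would indicate lower TRIM50 expression in tumor
```

In this convention a fold change below 1 corresponds to reduced target-gene expression in the tumor sample relative to its paired non-cancerous tissue.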
v3-fos-license
2016-05-04T20:20:58.661Z
2014-06-07T00:00:00.000
15320342
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/1471-2180-14-149", "pdf_hash": "ad0c87a688ed39b7cc83aad492c34aed612113cb", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44948", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Chemistry" ], "sha1": "56fa1ecc33c025ff10b3f72a57e14ff07515bed2", "year": 2014 }
pes2o/s2orc
The genetic diversity of cereulide biosynthesis gene cluster indicates a composite transposon Tnces in emetic Bacillus weihenstephanensis Background Cereulide is a cyclic dodecadepsipeptide ionophore, produced via non-ribosomal peptide synthetases (NRPS), which in rare cases can lead to human death. Early studies had shown that emetic toxin formation belongs to a homogeneous group of Bacillus cereus sensu stricto and the genetic determinants of cereulide (a 24-kb gene cluster of cesHPTABCD) are located on a 270-kb plasmid related to the Bacillus anthracis virulence plasmid pXO1. Results The whole genome sequences from seven emetic isolates, including two B. cereus sensu stricto and five Bacillus weihenstephanensis strains, were compared, and their inside and adjacent DNA sequences of the cereulide biosynthesis gene clusters were analyzed. The sequence diversity was observed, which classified the seven emetic isolates into three clades. Different genomic locations of the cereulide biosynthesis gene clusters, plasmid-borne and chromosome-borne, were also found. Potential mobile genetic elements (MGEs) were identified in the flanking sequences of the ces gene cluster in all three types. The most striking observation was the identification of a putative composite transposon, Tnces, consisting of two copies of ISces element (belonging to IS6 family) in opposite orientations flanking the ces gene cluster in emetic B. weihenstephanensis. The mobility of this element was tested by replacing the ces gene cluster by a KmR gene marker and performing mating-out transposition assays in Escherichia coli. The results showed that Tnces::km transposes efficiently (1.04 × 10-3 T/R) and produces 8-bp direct repeat (DR) at the insertion sites. Conclusions Cereulide biosynthesis gene clusters display sequence diversity, different genomic locations and association with MGEs, in which the transposition capacity of a resistant derivative of the composite transposon Tnces in E. coli was demonstrated. Further study is needed to look for appropriate genetic tools to analysis the transposition of Tnces in Bacillus spp. and the dynamics of other MGEs flanking the ces gene clusters. Background The Bacillus cereus group consists of B. cereus sensu stricto, Bacillus thuringiensis, Bacillus anthracis, Bacillus weihenstephanensis, Bacillus mycoides, Bacillus pseudomycoides and Bacillus cytotoxicus, which share close genetic and biochemical relatedness. They have traditionally been classified as different species based on their distinct virulence characteristics or phenotypes [1,2], the formers are mostly directly associated with large plasmids. B. anthracis causes the fatal animal and human disease anthrax, genetically determined by its pXO1 and pXO2 plasmids [3]. Similarly, the biopesticidal properties of B. thuringiensis, which distinguish it from B. cereus, are due to large plasmids encoding cry genes [4]. Ubiquitous in natural environment and best known as an opportunistic pathogen and food contaminant, B. cereus sensu stricto can cause two distinct forms of food poisoning with symptoms of diarrhea or vomiting. The diarrheal type, generally mild and mostly self-healed, is caused by several potential heat-labile enterotoxins, e.g. Hbl, Nhe, and CytK, whereas the emetic type, which represents the most serious food safety risk linked to B. cereus, is associated with a heat stable peptide toxin named cereulide. Most virulence genes of B. 
cereus are located on the chromosome [5,6] with the exception of the cereulide genetic determinants [7,8]. B. cytotoxicus is a recently described thermotolerant member of the B. cereus group [1]. The remaining members of the group, B. mycoides, B. pseudomycoides and B. weihenstephanensis, are mainly distinguished on the basis of their morphology (rhizoidal growth) and physiology (psychrotolerance), respectively [9,10], but may also have enteropathogenic potential [11,12]. In this respect, two B. weihenstephanensis isolates were found to produce a higher amount of cereulide than the reference B. cereus AH187, as quantified by liquid chromatography mass spectrometry [13,14]. Cereulide ((D-O-Leu-D-Ala-L-O-Val-L-Val) 3 ) is a small, heat- and acid-stable cyclic dodecadepsipeptide with a molecular weight of 1.2 kDa [15,16] and shares similar characteristics with valinomycin, i.e. chemical structure and toxicology [17,18]. Like valinomycin, cereulide is synthesized enzymatically via non-ribosomal peptide synthetases (NRPS), and is toxic to mitochondria by acting as a potassium ionophore [19]. It has been reported to inhibit human natural killer cells [20]. Indeed, severe and even lethal cases have been reported after the ingestion of food contaminated with high amounts of cereulide [21][22][23][24]. The cereulide genetic determinants correspond to a cluster of seven NRPS genes (cesA, B, C, D, H, P and T), which was originally found residing on a large plasmid [8]. This 270 kb element, pCER270, displays similarity to the anthrax virulence plasmid pXO1 from B. anthracis [7,25]. It is a member of the pXO1-like plasmid family, including pCER270, pPER272, pBC10987 and pBCXO1, which share a highly conserved core region containing genes involved in plasmid replication and maintenance, sporulation and germination, and a formaldehyde-detoxification locus [25,26]. Previous studies have shown that enterotoxin production is broadly distributed among different members of the B. cereus group [6,27] and is also found in other Bacillus spp. [28,29], whereas emetic toxin formation has been reported to be restricted to a homogeneous group of B. cereus sensu stricto [30]. Although rare, cereulide-producing B. weihenstephanensis strains have also recently been isolated [14]. In order to explore the phylogenetic relationship between the emetic B. cereus sensu stricto and B. weihenstephanensis isolates, and to analyze the potential mode of genomic transfer of the cereulide genetic determinants, the genetic diversity between B. cereus sensu stricto and B. weihenstephanensis was analyzed in detail.

Sequence diversity of the ces gene cluster

All the emetic strains harbor the seven ces genes with the same sizes. The two "cereus" isolates, IS075 and AH187, differ by only three nucleotides in their cesB gene. Among the five "weihenstephanensis" isolates, MC67 and MC118 from Denmark each display only one synonymous mutation, in cesA and in cesT, respectively, and CER057, CER074 and BtB2-4 from Belgium are 100% identical. Each ces gene displays 90~95% identity between B. cereus and B. weihenstephanensis, and 95~100% identity within the B. weihenstephanensis isolates. Similar but slightly lower identity levels were observed for the corresponding proteins. Thus, based on the concatenated ces gene and protein sequences, two main clusters, namely "cereus" and "weihenstephanensis", could be distinguished, and within the "weihenstephanensis" cluster, two subclades were identified (Figure 1B).
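The clustering described above was obtained with a neighbor-joining tree built in MEGA with 1,000 bootstrap replicates (see Sequence analysis below). Purely as an illustration of that kind of analysis, the Biopython sketch below builds a neighbor-joining tree from a concatenated, pre-aligned set of ces sequences; the input file name, the simple identity distance, and the omission of bootstrapping are assumptions for this sketch, not the study's exact settings.

```python
# Illustrative sketch (not the study's MEGA 5.2 workflow): a neighbor-joining
# tree from concatenated ces gene sequences. "ces_concatenated_aln.fasta" is a
# hypothetical pre-aligned FASTA file with one concatenated cesHPTABCD sequence
# per emetic strain, all of equal aligned length.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("ces_concatenated_aln.fasta", "fasta")

# Pairwise distance = 1 - fraction of identical aligned sites, so distances of
# roughly 0.05-0.10 would correspond to the 90~95% inter-group identity above.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
print(distance_matrix)

# Neighbor-joining tree; the "cereus" and "weihenstephanensis" isolates would be
# expected to fall into separate clusters, as in Figure 1B.
tree = DistanceTreeConstructor().nj(distance_matrix)
Phylo.draw_ascii(tree)
```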
Genomic location of the ces gene clusters IS075 harbors a larger plasmid pool than AH187. The cereulide gene cluster of IS075 was observed to be located on a large plasmid with a size similar to that of pCER270 (270 kb) in AH187 ( Figure 2A). Like pCER270, IS075 was PCR-positive to the pXO1 backbone genes pXO1-11, pXO1-14, pXO1-45, pXO1-50 and pXO1-55, which all encode hypothetical proteins (data not shown). It was also observed that the IS075 contig containing the ces gene cluster is ca. 180.7 kb with 146 predicted CDSs, of which 85.6% matched to those of pCER270, with a good synteny ( Figure 2B). This indicated that the emetic plasmid in IS075 is pXO1-like with high similarity to pCER270. The deduced proteins from 21 predicted CDSs not matching those of pCER270 were blasted with databases (Nr and Swissprot). The result showed that two matched putative transposases, one was related to putative DNA topoisomerases I, one to putative transcriptional repressors, and the others to hypothetical proteins, all with homologs in other B. cereus group plasmids. For BtB2-4 and CER057, although large plasmid with smaller size to pCER270 was observed in the profile, no hybridization signal was detected ( Figure 2A). It was observed that the contig containing the ces gene cluster in CER057 is about 245.4 kb with 215 predicted CDSs, of which 80% and 85% matched those of the chromosomes of AH187 and KBAB4, respectively. Except for the ces genes, the deduced proteins of 25 predicted CDSs not matching the chromosome of KBAB4 were compared to protein databases (Nr and Swissprot). It was found that four CDSs encode putative transposase, acetyltransferase, phage integrase, and phosphoglycolate phosphatase, 17 encode hypothetical proteins with chromosomal homologs among B. cereus group strains and four had no hit. The linear alignment showed that the main matches were located in chromosome positions 2.15 M~2.34 M for AH187, and 2.05 M~2.28 M for KBAB4 ( Figure 2B). Thus, it is most likely that the ces gene cluster in CER057 has a chromosomal location. The hybridization bands of MC118 and MC67 are larger than that of pCER270, although the corresponding plasmid bands are rather weak (Figure 2A). This strongly suggests that the cereulide genetic determinants of both MC118 and MC67 (named pMC118 and pMC67) are located on plasmids larger than pCER270, which were PCR-negative to pXO1 backbone genes. Unfortunately, the contigs containing the ces gene clusters in MC67 and MC118 were very short, ca. 56.7 and 26.6 kb, respectively. Besides the seven ces genes, 30 putative CDSs were predicted in the larger contig of MC67, of which 9 had no hit, and the other 21 had homologs in the plasmids or chromosomes of other B. cereus group strains, including putative transposases, spore germination proteins, thiol-activated cytolysin, dehydratase and hypothetical proteins. However, although the gapped genome of MC67 was tentatively aligned with all the published plasmid sequences of the B. cereus group using the MAUVE contig aligner, no obvious colinear match was observed to large fragment (data not shown). Identification of putative mobile genetic elements (MGEs) flanking the cereulide genetic determinants About 5 kb DNA sequences upstream of cesH and downstream of cesD from the "ces" contigs were used for detailed analysis. In the case of MC67 and MC118, because the available flanking sequences were shorter they were obtained by primer walking. Three types of flanking sequences could be observed ( Figure 3). 
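The matched-CDS percentages quoted in this section come from BLASTN comparisons of the ces-containing contigs against reference replicons with an e-value cutoff of 1e-5 (see Sequence analysis). The sketch below shows one way such a tally could be computed from a standard tabular BLAST report; the file names and the one-hit-per-CDS counting rule are assumptions made only for illustration.

```python
# Sketch: count predicted CDSs from a ces-containing contig that have at least
# one BLASTN hit (e-value <= 1e-5) against a reference replicon, using a
# pre-computed tabular (-outfmt 6) report. File names are hypothetical.
E_VALUE_CUTOFF = 1e-5

def cds_with_hits(blast_tab_path):
    """Query CDS identifiers with a qualifying hit. Expects the default
    -outfmt 6 columns: qseqid sseqid pident length mismatch gapopen
    qstart qend sstart send evalue bitscore."""
    hits = set()
    with open(blast_tab_path) as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            if float(fields[10]) <= E_VALUE_CUTOFF:  # column 11 = evalue
                hits.add(fields[0])
    return hits

def read_cds_ids(list_path):
    """One predicted CDS identifier per line, e.g. exported from the annotation."""
    with open(list_path) as handle:
        return {line.strip() for line in handle if line.strip()}

all_cds = read_cds_ids("IS075_ces_contig_cds.txt")
matched = cds_with_hits("IS075_vs_pCER270.blastn.tab") & all_cds
print(f"{len(matched)}/{len(all_cds)} CDSs matched "
      f"({100 * len(matched) / len(all_cds):.1f}%)")  # cf. the 85.6% reported above
```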
A potential group II intron, carrying an ncRNA Aligned segments are represented as dots (20~65 bp) and lines (>65 bp), with red and blue colors refer to forward and reverse matching substrings, respectively. and reverse endonuclease gene, is located 2.4 kb downstream of cesD in the plasmid of both AH187 and IS075, while an integrase/recombinase gene is located 1.1 kb downstream of cesD in chromosome of BtB2-4, CER057 and CER074. No other potential MGEs were observed in the flanking sequences of cesH of these strains. Strikingly, the ces gene cluster of pMC67 and pMC118 was found to be flanked by two copies of an IS element at each end, in opposite orientation (located ca. 2 kb from cesH and 800 bp from cesD), reminiscent of a typical class I composite transposon (designated Tnces). This IS element (named ISces) is 853 bp, contains a transposase gene and 16 bp terminal invert repeats (IR) and belongs to the IS6 family. In addition, an NERD domain or topoisomerase domains, belonging to DNA-breaking-rejoining enzyme superfamily, were also observed located between ISces and cesH and downstream of cesD and ISces on pMC67 and pMC118, respectively. Downstream of the Tnces, there is another transposase-encoding ORF showing high identity with the upstream ones, but with a shorter size. It is also flanked by the 16 bp IR (Figure 3). Transposition of ISces-based composite transposon In order to test the potential "transposability" of Tnces, the ces gene cluster was replaced by a Km R gene marker and a recombinant plasmid pTnkm was created and used for the transposition assay using a well-developed mating-out assay [32,33]. Conjugation between the donor strain E. coli JM109 (R388, pTnkm) and the recipient strain HB101 (Sm R ) was performed. The average transposition frequency of Tnces::km onto R388 in three independent experiments was estimated as 2.31 × 10 −3 (number of Km R Tp R Sm R transconjugants per Tp R Sm R transconjugants). The final transfer frequency, which is equal to the actual transposition frequency multiplied by the conjugation frequency, was calculated as 1.04 × 10 −3 Km R Sm R transconjugants per Sm R recipient. 60 transconjugants were randomly screened for Ampicilin resistance by disk diffusion assays and all displayed a positive result, indicating the formation of a cointegrate between the host chromosome and pTnkm. In order to distinguish whether the Km R Sm R transconjugants were achieved by transposition or other recombination events leading to plasmid integration, and whether the transposition happened randomly, a Southern-blot analysis was performed on nine transconjugants from two independent conjugation experiments that were randomly selected according to the resistance screening and the PCR validation. The hybridization was conducted on the transconjugants NdeI-digested genomic DNA using an internal bla fragment (pUC18), ISces and km as probes ( Figure 4). Both hybridizations with the bla and km probes produced a single signal band, the former confirming the formation of a cointegrate of the whole pTnkm into the recipient chromosome. Using the ISces probes, besides the expected 1 and 3.1 kb bands observed in all the transconjugants, at least one extra band with variable sizes was observed in the nine tested transconjugants, indicating that independent multi-events had occurred at distinct genomic sites ( Figure 5). 
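Because the two frequencies reported above are simple ratios of colony counts, the arithmetic can be summarized in a few lines. The counts in the sketch below are hypothetical placeholders chosen only to mirror the definitions in the text, not the actual plate counts behind the reported values.

```python
# Minimal sketch of the frequency calculations described above.
# Colony counts are hypothetical placeholders.

km_tp_sm_transconjugants = 231      # Km^R Tp^R Sm^R colonies (hypothetical)
tp_sm_transconjugants = 100_000     # Tp^R Sm^R colonies (hypothetical)
sm_recipients = 1_000_000           # Sm^R recipient colonies (hypothetical)

# Transposition frequency onto R388:
# Km^R Tp^R Sm^R transconjugants per Tp^R Sm^R transconjugant.
transposition_frequency = km_tp_sm_transconjugants / tp_sm_transconjugants

# Conjugation frequency: Tp^R Sm^R transconjugants per Sm^R recipient.
conjugation_frequency = tp_sm_transconjugants / sm_recipients

# Final transfer frequency = transposition frequency x conjugation frequency,
# i.e. Km^R Sm^R transconjugants per Sm^R recipient (T/R).
transfer_frequency = transposition_frequency * conjugation_frequency

print(f"transposition: {transposition_frequency:.2e}, "
      f"conjugation: {conjugation_frequency:.2e}, "
      f"transfer: {transfer_frequency:.2e}")
```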
To detect if the transposition of Tnces::Km displayed target site biases, the flanking sequences of insertion sites of the transconjugants used in hybridization were determined by primer walking. For three transconjugants, it was found that Tnces::Km insertions occurred in three distinct sites on plasmid R388 and that an 8-bp direct The transposase-mediated fusion of pTnkm and the target molecules generate a third copy of ISces. There are two theoretically possible results of transposition, depending on which ISces is duplicated. Three probes 1, 2, and 3, indicated by dotted lines, represent an internal fragment of bla in cloning vector pUC18, ISces, and Km, respectively, were used for the survey of the transposition. The NdeI sites in km R sm R transconjugants were indicated. No matter which ISces was duplicated, hybridization with probe 1 and 3, a 3.5 kb band and a 1.6 kb band is expected, respectively; with probe 2, besides the 1 kb and 3.5 kb expected bands, extra bands with variable sizes in each independent transconjugant are probably detected due to multi-transpositions. Although there is also a (remote) possibility for the duplication of the whole Tnces::km element, the result will be similar except that more bands with probe 2 are expected. repeat (DR) was produced after transposition (Table 2), which is a typical feature of IS6 family members (see the ISfinder database, http://www-is.biotoul.fr) [34]. For the other six transconjugants, although repeated several times, it is difficult to get the flanking sequences of insertion sites by primer walking, probably due to sequence complexity caused by multiple transposition events of ISces. Discussion The taxonomy of B. cereus group has long been controversial, since many of the species are genetically heterogenous, with the exception of B. anthracis, which is essentially a clone in nature [35]. One of the reasons of this difficulty is that many toxins used for classification are encoded on MGEs that have HGT potential, e.g. plasmids or transposons [3,36,37]. Cereulide may cause severe and potential lethal infection during an "emetic" form of B. cereus food poisoning. Most emetic B. cereus strains belong to a homogeneous group of B. cereus sensu stricto. Although rare, the emetic B. weihenstephanensis strains were recently isolated in nature [13]. Furthermore, a heat stable toxin, structural related to cereulide, has also been found in Paenibacillus tundra strain [38]. As a consequence, the intra-and inter-species diversity and potential transmission of the cereulide biosynthetic gene cluster is therefore thought provoking. In this study, the sequence diversity of emetic B. cereus sensu stricto and B. weihenstephanensis was analyzed. Since emetic B. cereus sensu stricto had been found to be restricted to a homogeneous group [30], only two B. cereus sensu stricto isolates were analyzed and compared the other five known B. weihenstephanensis. Except for AH187, the unfinished gapped genome sequences of the other emetic isolates were recently submitted [39]. As expected, the two emetic B. cereus sensu stricto isolates share very similar gene content in genome level. Furthermore, their "ces" plasmids are quite coherent in terms of synteny, protein similarity and gene content. Compared to AH187, IS075 has a larger plasmid pool, of which the "ces" plasmid is pXO1-like, but the presence of a pXO2like plasmid was also indicated [40]. Sequence diversity between B. cereus sensu stricto and B. weihenstephanensis or within B. 
weihenstephanensis was observed. It was also evidenced that the ces cluster had undergone horizontal gene transfer (HGT). This could be clued by the fact that the cluster is present in different hosts (B. cereus sensu stricto vs. B. weihenstephanensis), which have different chromosomal background, and displays different genomic locations (plasmids vs. chromosome). Moreover, another striking indication for HGT was the presence of putative MGEs in all tested emetic strains. The composite transposon, Tnces, located on large plasmids (pMC67/pMC118) in two B. weihenstephanensis strains isolated from soil in Denmark was identified. The mobility of Tnces was also proved by transposition experiments performed on a Tnces-derived element, indicating a HGT potential of the cereulide gene cluster in pMC67/ pMC118. Although the ces gene cluster is not flanked by IS elements in the other two types of emetic isolates, a Group II intron carrying an endonuclease gene in AH187 and IS075, and a putative integrase/recombinase gene in CER057, CER074 and BtB2-4 were also observed downstream of cesD. Both Group II intron and recombinase can potentially be involved in genome dynamics. Group II introns are self-splicing mobile retroelements, some of which have been shown experimentally to be able to invade new DNA sites and transfer between species, sometimes accompanied by adjacent sequence deletion or rearrangement [41][42][43]. This also relates to previous observations that bacterial group II introns tend to be located within mobile DNA elements such as plasmids, IS elements, transposons or pathogenicity islands (PAI), which could account for their spread among bacteria [44][45][46]. Based on our results, it is reasonable to suggest that MGEs have played a key role in the transmission of the cereulide gene cluster. In many cases, plasmids encode passenger genes originated via HGT that generally confer adaptive functions to the host cell, the classic example being antibiotic resistance genes. For instance, the NRPS gene cluster responsible for the production of β-lactam antibiotics (e.g. penicillins and cephalosporins) was proved to be transmitted by HGT from bacteria to bacteria and from bacteria to fungi [47,48]. This is also the general mode for toxin evolution [49,50]. In contrast, as a natural analog, a recent study reported that a vertical transmission (VT) origin rather than a HGT for the vlm gene cluster in Streptomyces spp. Although there is a significant structure and toxicology similarity between valinomycin and cereulide and an organizational similarity between the vlm gene cluster and the ces gene cluster, they are highly divergent from each other at the DNA level [51]. They may also have quite different evolution history. The conjugative and transfer promoting capacities of the emetic plasmids were also assessed by bi-and tri-parental matings, respectively. None were indicative of self-conjugative or mobilizable activities, at least under the conditions used in the assay (detection limit of 10 −7 T/R) (data not shown). Yet, the emetic strains can host the conjugative plasmid pXO16, which could be transferred from its native B. thuringiensis sv. israelensis to the emetic strains and, subsequently from the emetic strains to the original B. thuringiensis sv. israelensis host [52]. 
An important concern arising from this study is that the cereulide gene cluster may have the potential to be transmitted by transposition and, therefore, if the emetic strain can randomly encounter the conjugative plasmid pXO16 in nature, transposition of the cereulide gene cluster into pXO16 might happen at a low frequency, and as a consequence the resulting emetic pXO16, crossing boundaries within the B. cereus group by conjugation, could pose a serious public health issue. Conclusion Emetic B. cereus group isolates display more variations than originally thought. The cereulide biosynthesis gene cluster was present in different hosts (B. cereus sensu stricto and B. weihenstephanensis), which have different chromosomal background and display different genomic locations (plasmids vs. chromosome). The sequences of cereulide genetic determinants are diverse and coevolved with the host. Three types of MGEs were identified in the flanking sequences of the cereulide biosynthesis gene cluster, of which the transposition capacity of a resistant derivative of the composite transposon Tnces in E. coli was demonstrated. Further study is needed to look for appropriate genetic tools to analysis the transposition of Tnces in Bacillus spp. and the dynamics of other MGEs flanking the ces gene clusters. Strains and plasmids Emetic strains used in this study are listed in Table 1. A non cereulide-producing B. cereus isolate CER071 was used as negative control. E. coli DH5α and JM109 were used as bacterial hosts in electroporation experiments. Plasmid R388 (Trimethoprim resistant) [53], a conjugative plasmid devoid of transposon, was used for transposition assay. E. coli was routinely cultivated at 37°C in Luria-Bertani (LB) media. B. cereus group strains were grown at 30°C. Antibiotics were used at the following concentrations: Kanamycin (Km), 50 μg/ml; Ampicilin (Amp), 50 μg/ml and Trimethoprim (Tp), 50 μg/ml. Insertion site determination of the cereulide gene cluster and Tnces::Km Regions flanking the cereulide gene cluster sites of the emetic B. cereus isolates and the target site and flanking sequences of the composite transposon were obtained by the method of genome walking (Takara genome walking kit), using the primer walking sets listed in Table 3. All the sequences obtained by this method were validated by PCR and subsequent sequencing. DNA manipulation and plasmid construction Plasmid and genomic DNA were isolated using Plasmid Mini-Midi kits and Bacterial genome extraction kit (QIAGEN), respectively. Primers (Table 3) were designed [54]. Plasmid profiling and hybridization Plasmid profiling of the emetic isolates was performed according to Andrup et al. [55]. Genomic DNA from E. coli strains HB101, JM109 (pTnKm), JM109 (R388, pTnKm) and transconjugants were digested with NdeI and run in a 0.8% agarose gel electrophoresis before the separated DNA fragments were transferred from agarose gels to a positively charged nylon membrane (Boehringer Mannheim, Germany). DIG-labeled probes were designed by using the "PCR DIG Probe Synthesis Kit" from Roche. Probe P ces , consisting of an internal fragment of cesB using EmF and EmR primers, was used for the location of cereulide gene cluster. Probes 1, 2, and 3, which consisted of an internal fragment of bla pUC18 using APF1 and APR1 primers, an internal fragment of IS using ISF3 and ISR3 primers, and an internal fragment of km using kmF3 and KmR3 primers, were used for transposition survey. 
After transfer and fixation of the DNA on the membrane, the hybridization was performed with the "DIG High Prime DNA Labeling and Detection Starter Kit I" (Roche Diagnostic, Mannheim, Germany), according to the manufacturer's instructions. Transposition experiments The transposition of the pTnKm was examined using a mating-out experiment, as previously described [32,33]. For this purpose, E. coli JM109 harboring pTnKm and plasmid R388 (Tp R ) was used as the donor to mate with E. coli HB101 (Sm R ) on a membrane filter. The transposition frequency was expressed as the number of Km R Sm R transconjugants per Sm R recipients (T/R) and the plasmids in the transconjugants were further characterized by PCR and restriction digestion. Sequence analysis The complete genome sequence of AH187 and the gapped genome sequences of the other six emetic strains were obtained from NCBI (Table 1). A fragmented allagainst-all comparison analysis was performed using Gegenees (version 1. Each ces gene and the concatenated sequences, as well as the deduced amino acid sequences, were aligned by MEGA version 5.2 software. A neighbor-joining (NJ) phylogenetic tree based on the concatenated gene sequences was constructed with a bootstrap of 1,000. The contigs containing the ces gene cluster were compared with the genomes of AH187 and B. weihenstephanensis KBAB4 by BLASTN with an e-value cutoff of 1e-5. Linear alignment was finished by MUMmer software package (release 3.23) [56]. The sequences upstream of cesH and downstream of cesD were obtained from the complete genome sequence of AH187 and the contigs with the ces gene cluster located within the gapped genome sequences of the emetic strains (NCBI -
v3-fos-license
2021-10-23T06:16:58.471Z
2021-10-21T00:00:00.000
239456883
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-021-00110-2.pdf", "pdf_hash": "002238a3655b7cd9211a0b751f2b632984c35144", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44949", "s2fieldsofstudy": [ "Environmental Science", "Geography", "Biology" ], "sha1": "de47b024f09756d80276be6e1e1355bb553e40f4", "year": 2021 }
pes2o/s2orc
A twig-like insect stuck in the Permian mud indicates early origin of an ecological strategy in Hexapoda evolution

Full body impressions and resting traces of Hexapoda can be of extreme importance because they bring crucial information on the behavior and locomotion of the trace makers, and help to better define trophic relationships with other organisms (predators or prey). However, these ichnofossils are much rarer than trackways, especially for winged insects. Here we describe a new full-body impression of a winged insect from the Middle Permian of Gonfaron (Var, France) whose preservation is exceptional. The elongate body with short prothorax and legs, and long wings overlapping the body, might suggest plant mimicry as in some extant stick insects. These innovations are probably related to an increasing predation pressure by terrestrial vertebrates, whose trackways are abundant in the same layers. This discovery would possibly support the recent age estimates for the appearance of phasmatodean-like stick insects, nearly 30 million years older than the previous putative records. The new exquisite specimen is fossilized on a slab with weak ripple-marks, suggesting the action of microbial mats favoring the preservation of its delicate structures. Further prospecting at sites with this type of preservation could enrich our understanding of the early evolutionary history of insects.

Some outcrops are clearly more favourable for the preservation of trackways, resting traces and full body impressions than for the organisms themselves. Arthropod trackways have been rather frequent in the fossil record since the Devonian, and they are especially frequent in the continental outcrops of the 'red' Permian in Europe and North America 1-3 . Nevertheless, the trackmakers are generally difficult to determine, especially for those attributed to arthropods 4 . Resting traces and full-body impressions (FBIs) are more complex than trackways and give more information on the external morphology of the trackmakers. FBIs can be nearly as well preserved as body fossils 5 . However, they are much rarer than trackways, and among them those attributed to the Hexapoda are the rarest, being mainly assigned to apterous Archaeognatha (Supplementary Information) [5][6][7][8][9][10][11] . FBIs of winged insects are extremely rare: only 10 specimens are recorded in the world, from the Carboniferous and Permian (Table 1 in Supplementary Information), and they mostly preserve only the ventral side of the trackmaker. Here we describe an exquisite full-body impression of an elongate winged insect strongly resembling a Phasmatodea (stick insect), from the Middle Permian of the Luc Basin (Var, Provence, France). It consists of a delicate lateral impression of the entire insect. With only a few occurrences from the Middle Triassic to the Cenozoic, the fossil record of the Phasmatodea is very poor compared to that of other Polyneoptera such as the Orthoptera [12][13][14][15][16][17][18] . The earliest occurrences of stick insects are scarce and based on incomplete bodies or wing fossils 14,[17][18][19][20] . Stick insects are one of the most specialised insect orders in terms of plant mimicry.
New discoveries about their early origins give clue to the development of such a defence strategy, currently known from only a few fossil taxa 16,[21][22][23][24] ; 'Th' longer and thicker than 'H'; 'Th' divided into three parts from front to back, their limits being determined by the positions of the legs and wings: first part, 3.2 mm long, appearing very short dorsally and carrying a pair of legs in its ventral part; second part, 7.3 mm long, very long and also carrying posteriorly a pair of legs and a pair of very long and wide structures (forewing tracks lfw and rfg); third part, 5.9 mm long, also very long, carrying one or two visible legs (only distal parts visible, their bases and femora being hidden by forewings) and a pair of hind wing tracks (lhw & rhw) quite distinct from forewing tracks, located just after level of bases of median legs; these wing structures being large, and covering at least 1/2 of total length of impression; third tagma impression ' Abd' the longest, 24.1 mm long, longer than 'H' and 'Th' combined, and of comparable thickness as 'Th' , subdivided into a dorsal and a ventral parts; numerous indentations visible on it at fairly regular intervals (at least seven); no trace of dorsal or ventral appendages; a small structure ending in two points 'app' visible at apex of ' Abd' . Etymology. Dedicated to the wizard Radagast of the mythology of J.R.R. Tolkien (The Hobbit, 1937) who tames stick-insects in the adaptation of P. Jackson (The Hobbit: An Unexpected Journey, 2012). Discussion The impression most likely corresponds to a winged insect lying on its left side based on the presence of impressions of three tagmata with the anterior tagma much shorter than the two others and carrying structures that would correspond to antennae and possible mouthparts (mp) (Figs. 1a, 2); the median tagma corresponds to a thorax divided into a short prothorax bearing two legs, an elongated mesothorax with a pair of legs located anteriorly and a pair of wings located posteriorly, and a metathorax similar to the mesothorax, also carrying pairs of wings and legs; a segmented, elongated abdomen (Fig. 1c), legless and carrying a pair of complex terminal structures (cerci and/or ovipositor?) (Fig. 1b). The 'enlarged' legs and inconspicuous joints between the thorax and legs indicate a certain viscosity of the substrate and possible weak motion of the legs. This could also be due to low movements of the substrate as indicated by the ripple marks. Most insects are fossilized in a dorsal or ventral position, especially paleopterans, but for the neopteran insects that hold their wings along the body, a lateral impression is possible. FBIs can remain unnoticed because of their shallow relief, requiring appropriate lighting to distinguish them. Here the delicate FBI is visible because the thin pelitic layer is exposed. The slab also has weak ripple-marks at the same level, showing a low-energy current and the presence of a biofilm [25][26][27] , which allowed the exceptional preservation of Phasmichnus. It is difficult to interpret the moment of life captured by this impression. This FBI probably corresponds to that of dying animal that was transported on fresh mud and laid its impression on it. The insect itself was probably destroyed later by the microbial activity. Evidence of such mats are frequent in the outcrop (folding of the mat, trace of grazing by organisms under the mat, etc.). 
Similar phenomena have been recorded in the late Jurassic lithographic limestone of Cerin, another well-known outcrop generated by microbial mats 28 . No other known ichnotaxon of this size and shape can be brought closer to Phasmichnus gen. nov. Only Knecht et al. 5 described an unnamed FBI of winged insect (attributed to an Ephemeropterida or a Plecoptera 29-31 ) www.nature.com/scientificreports/ from the Late Carboniferous of the Massachusetts (USA), presenting an elongated body of comparable dimensions, but lacking distinct head and impressions of wings. It also preserves a possible cerci impression similar to that of Phasmichnus. The position of the wings on the thorax and abdomen excludes the attribution of Phasmichnus gen. nov. to a Paleoptera because these have their wings unfolded over the body, except in the Diaphanopterodea, which have very different shorter and broader body shapes. The positions of the wing impressions of Phasmichnus exclude an attribution to the Ephemeropterida and Plecoptera and thus to the FBI described by Knecht et al. 5 . An Odonatoptera Archizygoptera or Zygoptera in which the wings may be lying on the body, especially in the case of a drowned individual with 'wet' wings, is conceivable. However, this attribution is unlikely because Odonatoptera have a highly modified thorax. Their short and reduced prothorax is followed posteriorly by a diamond-shaped structure as high as long and formed by the fusion of the meso-and metathorax 32 . Phasmichnus gen. nov. does not have a thorax of this type at all. We attribute it to a Neoptera capable of folding its wings over the abdomen. The presence of wings, the short prothorax plus the very elongated and narrow thorax and abdomen are only found in Phasmatodea among the extant insects, which also have the bases of their hind wings located behind the mesothoracic legs. This is indeed the case here. The other extant Neoptera have shorter abdomen and thorax in relation to their diameters, with the exception of the Mantophasmatodea, now apterous but whose ancestors were possibly winged. The extant and fossil Mantophasmatodea have the thoracic segments of nearly the same lengths, which is not the case here 33 . The Palaeozoic Caloneurodea also had narrow bodies but distinctly longer legs; and the Carboniferous Geraridae had an elongate prothorax and long legs, especially the hind legs that have enlarged femora, unlike Phasmichnus gen. nov 34 . The Late Carboniferous-Early Permian archaeorthopteran clade Cnemidolestidae had also elongate bodies with a relatively short prothorax, but they strongly differ from Phasmichnus gen. nov. in their very long and strong legs, especially the fore legs. Extant stick insects, even the leaf-mimicking Phylliidae, differ from Phasmichnus gen. nov. in their shortened forewings, much shorter than the hind wings. But the Mesozoic winged representatives of the stem group Phasmatodea had fore-and hind wings of similar lengths, as in Phasmichnus gen. nov. 15,17 , possibly supporting its attribution to this lineage. Winged stick insects have their mid legs and forewings situated near the posterior margin of the elongate mesothorax as in Phasmichnus gen. nov. Some molecular dating suggest that the Phasmatodea originated during the Middle Jurassic 35,36 , while recent paleontological discoveries show that the phasmid crown group was already well diversified at that time 15,17 . Triassic representatives of their stem group are known 14 , clearly more in accordance with more recent dating, viz. 
Permian-Triassic 37 , Middle Permian with confidence interval Carboniferous-end Permian 38 , or Carboniferous-Permian for stem group of Phasmatodea and Permian-Triassic for crown group 39 . The clade ((Mantophasmatodea + Grylloblattodea) + (Phasmatodea + Embioptera)) is considered as sister group of the Dictyoptera 40 , and therefore at least as old as the Carboniferous 41 . Thus, it is highly probable that representatives of the stem group of Phasmatodea or of the stem group of the clade ((Mantophasmatodea + Grylloblattodea) + (Phasmatodea + Embioptera)) existed in the Middle Permian. Furthermore, some Permian putative stem Embioptera have been described 42 , and an undescribed wing of an Embioptera was recently found in the Middle Permian of Southern China (Huang and Nel, in prep.), supporting the existence of stem group representatives of the two sister clades at that time. This would be in accordance to a putative attribution of the present discovery to the stem group Phasmatodea. Phasmichnus radagasti cannot be identified with certainty as a body impression of a stick insect sensu stricto as no anatomical features and no strict apomorphies of stick insects (e.g. fusions of first abdominal tergites and sternites with metathorax) are directly preserved, but it is an evidence of the presence of phasmatodean-like insects in the Middle Permian. A FBI is not a direct representation of the anatomy of the trackmaker: some anatomical structures may not have been imprinted and the overall impression may have undergone deformations, depending on the substrate 43 . The typical morphology of Phasmatodea with narrow elongate bodies and elongate wings is clearly present in Phasmichnus. More precisely the general body and wing of Phasmichnus fits well with that of extant Tropidoderus childerni that has a very long body and hind wings and can hide itself in the vegetation with great efficiency (see internet site https:// www. flickr. com/ photos/ petri chor/ 21776 32362). This type of morphology is consistent with adaptations to mimicry of elongated plant elements such as stems, branches or elongate leaves 43 , obviously present in the Permian vegetation and allowing concealing from predators. More generally, such narrow elongated body and wings are consistent with mimicry with plants among the extant terrestrial Neoptera (Mantodea, Heteroptera Reduviidae, Neuroptera Mantispidae, etc.), together with other functions such as predation. The type of fossilization of Phasmichnus does not allow to find more information on this fossil. In particular, the possible pattern of coloration or details of ornamentations (spines, etc.), present in many extant stick insects and increasing the mimicry, are not available. Several camouflages strategy are known among the late Carboniferous-Permian insects but they generally implicate disruptive strategies (spots and/or bands of different colors or even eyespots on wings). The archaeorthopteran family Cnemidolestidae shows an impressive diversity of such structures 45 . But plant mimicry is clearly less frequent during these periods. The orthopteran Permotettigonia is the only other case of a large leaf mimicry during the Middle Permian 24 . The strategy is different in Phasmichnus that would have imitated small branches and/or elongate leaves. We found several slabs showing tetrapod tracks near Phasmichnus radagasti. 
The tetrapod ichnofauna of the type locality consists of small temnospondyl amphibians (Batrachichnus salamandroides), bolosaurian parareptiles (Varanopus isp.) and captorhinid eureptiles (Hyloidichnus bifurcatus). All these small vertebrates were probably insect hunters. Permotettigonia is the first accurate case of plant mimicry known in the Middle Permian of France 24 , Phasmichnus gen. nov. could be the second. The presence of a bio-mat could have been a large reserve of resources for the Hexapoda gathering on small water points and attracting the possible predators aforementioned. www.nature.com/scientificreports/ The trophic pressure of such potential predators in playa environments was possibly high enough to enable the rise of insects developing strategies of escape and/or plant-mimicry as early as the Middle Permian, in accordance with the opinion of Tihelka et al. 39 : 'We recover a Permian to Triassic origin of crown Phasmatodea coinciding with the radiation of early insectivorous parareptiles, amphibians and synapsids' [the only restriction to make to this sentence is that the Permian to Triassic stick insects did not belong to the crown but to the stem group of Phasmatodea]. Insectivorous synapsids and sauropsids, became very diverse in the Middle Permian 46 . Material and methods Preparation, observation and description. The holotype specimen was photographed with a Nikon D800 macro lens Micro Nikor 60/2,8 and drawn using Krita v.4.2.9 and PhotoFiltre 7 v.7.2.1. The nomenclature of Buatois et al. 10 is followed to classify the types of arthropod tracks. The specimen was collected by one of us (RG). Geological setting. The specimen was collected in the Gonfaron A site, located south-west of the Luc Basin in the Pelitic Formation 47,48 . The Pelitic Formation corresponds to the upper part of the Luc Basin stratigraphy 47 . It is currently dated to the Wordian thanks to ichnological studies 47 . This formation is characterised by red pelites with drying up facies (such as mudcracks) and ripplemarks suggesting shallow non-marine environments 49 . The Gonfaron A site is located in pelitic badlands (locally named 'red earth') on the edge of the Maures plain and topped by a Triassic cliff (Bundsandstein). Sedimentological data from the site indicate a floodplain environment of playa type (see Supplementary Fig. 1 and Supplementary Information for a complete interpretation of the stratigraphy of the site). Several fossil arthropods have been found recently 50 . Others have been collected, also in the adjacent Permian basins and will be described later 51 .
v3-fos-license