Agricultural soils are a major source of nitric oxide (NO) and nitrous oxide (N 2 O), which are produced and consumed by biotic and abiotic soil processes. The dominant sources of NO and N 2 O are microbial nitrification and denitrification, and emissions of NO and N 2 O generally increase after fertiliser application. The present study investigated the impact of N-source distribution on emissions of NO and N 2 O from soil and the significance of denitrification, rather than nitrification, as a source of NO emissions. To eliminate spatial variability and changing environmental factors which impact processes and results, the experiment was conducted under highly controlled conditions. A laboratory incubation system (DENIS) was used, allowing simultaneous measurement of three N-gases (NO, N 2 O, N 2 ) emitted from a repacked soil core, which was combined with 15 N-enrichment isotopic techniques to determine the source of N emissions. It was found that the areal distribution of N and C significantly affected the quantity and timing of gaseous emissions, and 15 N-analysis showed that N 2 O emissions resulted almost exclusively from the added amendments. Localised higher concentrations, so-called hot-spots, resulted in a delay in N 2 O and N 2 emissions, causing a longer residence time of the applied N-source in the soil, therefore minimising NO emissions while at the same time being potentially advantageous for plant uptake of nutrients. If such effects are also observed for a wider range of soils and conditions, then this will have major implications for fertiliser application protocols to minimise gaseous N emissions while maintaining fertilisation efficiency.
Introduction
Agricultural soils are a dominant source of nitrous oxide (N 2 O) and nitric oxide (NO) emissions (IPCC, 2007b;Ravishankara et al., 2009). N 2 O is a potent greenhouse gas (GHG) with a global warming potential 298 times that of CO 2 for a 100-year timescale (IPCC, 2007a), while NO catalyses the formation of ground level ozone affecting human health and vegetation (Crutzen, 1981) and takes part in the formation of acid rain and the eutrophication of semi-natural ecosystems. Both gases are produced in soils by nitrification, denitrification, nitrifier denitrification and nitrate ammonification (Baggs, 2011). Which of these processes dominate in soil depends on several factors such as pH, temperature, nutrient availability, soil structure and soil water filled pore space (WFPS). Denitrification is a mainly bacterially mediated process occurring under absence/limitation of oxygen (O 2 ) as most denitrifying bacteria are facultative anaerobes. In addition, most denitrifying bacteria couple nitrate (NO 3 − ) reduction with organic carbon (C org ) oxidation to gain energy, making a supply of readily available C org a usual requirement for denitrification to occur (Knowles, 1982). High WFPS reduces the oxygen availability within the soil by replacing air in soil pores with water and with available C org present, this promotes denitrification. Inhomogeneous fertiliser application or excretions of grazing animals can change the factors influencing the processes resulting in high NO and N 2 O emissions in small areas, creating hot-spots of microbial activity.
In a comprehensive review Saggar et al. (2013) described the biological and chemical characteristics of denitrification. The denitrification process consists of several reactions with each reaction supplying the substrate for the subsequent one. Each reaction becomes progressively energetically less favourable. When the soil microbial community is supplied with NO 3 − as the first substrate of denitrification, it is transformed via NO 2 − to NO. NO is a very reactive gas, as well as toxic to most organisms (Richardson et al., 2009). Because of its toxicity, most organisms produce the enzyme nitric oxide reductase (Nor) which catalyses the transformation of NO to N 2 O, resulting in low NO:N 2 O ratios. During the next step in the denitrification process N 2 O is transformed by the nitrous oxide reductase (Nos) to nitrogen gas (N 2 ). However, the denitrification systems of most fungi and around one third of sequenced denitrifying bacteria lack the gene encoding Nos and consequently for those organisms, N 2 O will evolve as the final denitrification product rather than N 2 (Saggar et al., 2013), resulting in larger N 2 O:N 2 ratios. Both NO:N 2 O and N 2 O:N 2 ratios have been used as indicators for the relative contribution of denitrification and nitrification and the availability of C, respectively (del Prado et al., 2006;Scheer et al., 2009;Wang et al., 2011;Wang et al., 2013). Microbial denitrification is often the dominant process generating N 2 O and there is a good understanding of the abiotic factors regulating N 2 O emissions via denitrification (Beaulieu et al., 2011). However, even though NO is an obligatory intermediate of N 2 O formation in denitrification it is quickly reduced (Wolf and Russow, 2000;Russow et al., 2009).
Most experiments suggest that NO emitted from soils is mainly produced through nitrification (Skiba et al., 1997). Denitrifying conditions are favoured by high water content, soil compaction and fine soil texture, all of which also lower gas diffusivity, so it has been assumed that NO is further reduced to N 2 O before it can escape to the soil surface (Skiba et al., 1997). Recent findings, however, challenge these assumptions (Loick et al., 2016). Using the gas-flow-soil-core technique, which has been proven to be a reliable tool for quantifying emissions from denitrification, Wang et al. (2013) observed significant NO fluxes from NO 3 − -amended soils. Attributing these emissions specifically to denitrification has previously remained elusive due to methodological constraints: earlier approaches relied on acetylene inhibition and isotope labelling techniques with no ability to directly quantify 15 N-NO production (Baggs, 2008). One factor affecting denitrification is the amount of N available to the denitrifying microbial community. It has been shown that the positive relationship between NO 3 − concentrations and denitrification rates (NO 3 − -N < 1 mmol (Ogilvie et al., 1997; Zhong et al., 2010)) changes to a negative one when NO 3 − -N concentrations are above 50 μg g − 1 soil (Luo et al., 1996) or from 2 to 20 mM (Senbayram et al., 2012). On grazed fields, N is deposited at very high but localised concentrations via livestock excreta. The high concentrations of N and available C in urine and dung result in a relatively high default emission factor of 2% of the applied N, but emissions also vary with pH and salinity (van Groenigen et al., 2005). Although applying fertiliser to grass- or arable land via spreaders distributes the N more evenly, there are still 'hot-spots' of N around fertiliser granules. There is still large uncertainty about the contribution of these hot-spots to net GHG emissions.
Models have been used to predict N 2 O emissions depending on soil structure (Laudone et al., 2011;Laudone et al., 2013). Understanding how hot-spots of N and C affect losses of N is crucial for the design of effective GHG mitigation strategies.
In the context of the complexity of the nitrification and denitrification processes occurring in soil, and the conflicting results which occur under varying conditions, unambiguous results can only be obtained by tightly controlling the conditions of the system and carrying out the experiments on a single soil type. The studies can then be carefully extended to other conditions and soil types, from which wider ranging conclusions can be drawn.
The aim of the present study was to investigate (i) the effects of N-source distribution on emissions of NO and N 2 O from soil under highly controlled, denitrification-favouring conditions, and (ii) the significance of denitrification as a source of NO emissions. We hypothesize that nutrient concentration and application area will affect the magnitude and timing of N emissions. This would result in the need to consider different mitigation strategies depending on hot-spots of nutrient availability.
Experimental design
To investigate the effects of nutrient concentration and application area, the experimental design tightly constrained the following factors: lateral diffusion of nutrients (monitoring vertical diffusion); water-filled pore space (WFPS); temperature; soil heterogeneity; surface mass transfer coefficient; ambient atmosphere (N 2 -free, to allow measurement of N 2 emissions); ratio of soil volume to nutrient concentration; and ratio of soil surface to nutrient concentration.
The implicit assumption is that we have therefore set up a one-dimensional system without any highly localised variation in WFPS and consequently without any spatial variation in microbial activity.
Conditions were chosen so that they were optimal for denitrification.
The incubation experiment was carried out using the DENItrification System (DENIS), a specialized gas-flow-soil-core incubation system (Cárdenas et al., 2003) in which environmental conditions can be tightly controlled; it comprises 12 vessels containing 3 soil cores each (Fig. 1). Cores were packed to a bulk density of 0.8 g cm − 3 to a height of 75 mm into plastic sleeves of 45 mm diameter. To promote denitrification conditions, the soil moisture was adjusted to 85% WFPS, taking the amendment with nutrient solution into account. To measure N 2 fluxes, the native N 2 was removed from the soil and headspace while maintaining the O 2 levels that would be present in air. This was achieved by using a mixture of He:O 2 (80:20). First the soil cores were flushed from the bottom at a flow rate of 30 ml min − 1 for 14 h. To measure baseline emissions, flow rates were then decreased to 12 ml min − 1 and the flow re-directed over the surface of the soil core for three days before amendment application. The vessels were kept at 20 °C during flushing as well as for the 13-day incubation period after amendment application. The experiment was set up to investigate the effect of a heterogeneous distribution of N and C on gaseous emissions from denitrification, by applying a high concentration of N and C localised to only a third of the total surface area (i.e. one of the three cores) within a vessel, as opposed to an even distribution of the same amount of N and C over a three times larger area (i.e. evenly distributed over all 3 cores within a vessel). There were two reasons why the treatment was physically separated into one of three separate cores, rather than simply applying the treatment to one third of the surface of a larger core. The first was to remove subsurface lateral dispersion effects which could not be quantified. For future modelling purposes, the physical separation allows the system to be approximated as one-dimensional, to a workable level of approximation.
The second reason is that gaseous emissions are controlled at the surface of the soil by the mass transfer coefficient which is directly related to the size of the transmitting layer, and diffusion through the stationary boundary layer of gas between the soil (with or without treatment) and the flowing gas stream (Laudone et al., 2011). Wetting precisely one third of the surface addressed both of these parameters.
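As a rough illustration of the moisture target above, the water content corresponding to 85% WFPS can be back-calculated from the core geometry and bulk density given in the text. The particle density of 2.65 g cm − 3 used below is a common assumption for mineral soils and is not stated in the paper:

```python
# Sketch: water volume per core at 85% WFPS.
# Core geometry and bulk density are from the text; the particle
# density of 2.65 g cm^-3 is an ASSUMPTION, not a stated value.
import math

BULK_DENSITY = 0.8        # g cm^-3 (stated)
PARTICLE_DENSITY = 2.65   # g cm^-3 (assumed, typical for mineral soil)
RADIUS_CM = 4.5 / 2       # 45 mm diameter sleeves
HEIGHT_CM = 7.5           # 75 mm packing height

core_volume = math.pi * RADIUS_CM**2 * HEIGHT_CM       # ~119 cm^3 per core
porosity = 1 - BULK_DENSITY / PARTICLE_DENSITY         # total pore fraction
water_per_core = 0.85 * porosity * core_volume         # cm^3 (= ml) at 85% WFPS

print(f"porosity = {porosity:.3f}, water per core ≈ {water_per_core:.1f} ml")
```

With these assumptions each core holds roughly 71 ml of water at the target WFPS; a different assumed particle density would shift this figure.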
The experiment involved the following 3 treatments (Fig. 1), with four replicate vessels per treatment: HS = hot-spot, one of the three cores inside a vessel was amended with 15 N-KNO 3 enriched to 5 at% and glucose; ED = equal distribution, all three of the cores inside a vessel were amended with 15 N-KNO 3 enriched to 5 at% and glucose; Control = only water was applied to each of the three cores. Considering the total surface area of the vessel, N was applied at a rate of 75 kg N ha − 1 (i.e. 125 mg N kg − 1 dry soil) and C as glucose at 400 kg C ha − 1 resulting in 35.78 mg N and 190.85 mg C per vessel. For treatment HS this resulted in all of the 35.78 mg N and 190.85 mg C being applied in solution with 5 ml water to one of the three cores, while the other two cores each received 5 ml water only. For treatment ED the same amount of N and C was diluted in 15 ml water and 5 ml of that solution were added to each one of the three cores inside one vessel. In order to maintain the incubation conditions, the amendment was applied to each of the three cores via a syringe through a sealed port on the lid of the incubation vessel.
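The stated per-vessel amounts can be cross-checked from the application rates and the core geometry given above; the following sketch reproduces the paper's figures of 35.78 mg N, 190.85 mg C and 125 mg N kg − 1 dry soil:

```python
# Cross-check of the amendment amounts from rates and geometry (all
# inputs are values stated in the text; only the arithmetic is added).
import math

N_RATE = 75              # kg N ha^-1
C_RATE = 400             # kg C ha^-1
CORE_DIAMETER_M = 0.045  # 45 mm sleeves
CORES_PER_VESSEL = 3

core_area_m2 = math.pi * (CORE_DIAMETER_M / 2) ** 2
vessel_area_ha = CORES_PER_VESSEL * core_area_m2 / 1e4   # 1 ha = 10^4 m^2

n_mg = N_RATE * vessel_area_ha * 1e6   # kg -> mg
c_mg = C_RATE * vessel_area_ha * 1e6

# Per-mass rate: dry soil mass from bulk density 0.8 g cm^-3, 7.5 cm cores
soil_kg = CORES_PER_VESSEL * 0.8 * math.pi * 2.25**2 * 7.5 / 1000
print(f"{n_mg:.2f} mg N, {c_mg:.2f} mg C per vessel; "
      f"{n_mg / soil_kg:.0f} mg N kg^-1 dry soil")
```

The computed values match the paper's 35.78 mg N and 190.85 mg C per vessel, confirming that the per-area and per-mass rates are mutually consistent.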
Soil preparation
A clayey pelostagnogley soil of the Hallsworth series (Clayden and Hollis, 1984) (44% clay, 40% silt, 15% sand (w/w), Table 1) was collected on the 4th of November 2013 from a typical grassland in SW England, located at Rothamsted Research, North Wyke, Devon, UK (50°46′50″ N, 3°55′8″ W). Spade-squares (20 × 20 cm, to a depth of 15 cm) of soil were taken from 12 locations along a 'W' line across a field of 600 m 2 which was surrounded by larger fields of similar grassland. After sampling, the soil was air dried to ~30% gravimetric moisture content, sieved to < 2 mm and stored at 4 °C until preparation of the experiment. Before starting the experiment, the soil was preincubated to avoid the pulse of respiration associated with wetting dry soils (Kieft et al., 1987). For this, the required soil was spread to 3-5 cm thickness. Then, while being mixed continuously, the soil was primed by spraying it with water containing 25 kg N ha − 1 of KNO 3 , which is a typical yearly rate of N deposition through rainfall in the UK (Morecroft et al., 2009; RoTAP, 2012). The soil was then left for 3 days at room temperature before packing into cores and starting the incubation.
Gas analyses and data management
Gas samples were taken every 10 min, so that each of the 12 vessels was measured once every 2 h. Fluxes of N 2 O and CO 2 were quantified using a Perkin Elmer Clarus 500 gas chromatograph (GC; Perkin Elmer Instruments, Beaconsfield, UK) equipped with an electron capture detector (ECD) for N 2 O and with a flame ionization detector (FID) and a methanizer for CO 2 . N 2 emissions were measured by GC with a helium ionization detector (HID, VICI AG International, Schenkon, Switzerland) (Cárdenas et al., 2003), while NO concentrations were determined by chemiluminescence (Sievers NOA280i, GE Instruments, Colorado, USA). All gas concentrations were corrected for flow rate through the vessel, which was measured daily, and fluxes were calculated on a kg N or C ha − 1 h − 1 basis. CO 2 fluxes showed constant emissions of 0.67 kg C ha − 1 h − 1 before and after the peak in all vessels. In order to show emissions attributed to amendment application only, the CO 2 fluxes were adjusted by subtracting this baseline.
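The baseline correction described above amounts to subtracting a constant background flux; a minimal sketch (the flux values are invented for illustration, not measured data):

```python
# Minimal sketch of the baseline correction: the constant pre- and
# post-peak CO2 flux of 0.67 kg C ha^-1 h^-1 is subtracted so that only
# amendment-driven respiration remains. Flux values are invented.
BASELINE_CO2 = 0.67  # kg C ha^-1 h^-1, constant background stated in the text

def amendment_flux(measured_fluxes, baseline=BASELINE_CO2):
    """Subtract the constant baseline, clipping at zero and rounding for display."""
    return [round(max(f - baseline, 0.0), 2) for f in measured_fluxes]

print(amendment_flux([0.67, 1.50, 2.80, 0.60]))  # -> [0.0, 0.83, 2.13, 0.0]
```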
Initial emission rates for each gas and vessel were determined from the beginning of each peak until the increase in concentrations slowed down: for NO, 12 h from day 0; for N 2 O, 24 h from day 0; for N 2 , 36 h from day 2.5 for treatments ED and Control and from day 4.5 for treatment HS; for CO 2 , 36 h from day 0 (see Table 2).
Gaseous emissions were measured per incubation vessel. Additionally, emissions attributed to the amended area within a vessel were calculated (per core basis). In treatment ED and the Control all cores within a vessel received the same application, i.e. emissions calculated for the vessel are the same as when calculated for the amendment concentration. For treatment HS, however, only one core received N at a rate of 225 kg ha − 1 . To calculate emissions from this one core only, the following equation was used:

E HS⁎ = 3 V HS − 2 V C (1)

with E HS⁎ = emissions from the one core from treatment HS that received N and C at three times the concentration compared to the single cores in treatment ED in kg N or C ha − 1 h − 1 ; V HS = emissions from the whole vessel of treatment HS in kg N or C ha − 1 h − 1 ; V C = emissions from the whole vessel of the Control treatment in kg N or C ha − 1 h − 1 .

Table 1. Soil characteristics (before (bp) and after priming (ap), but before amendment application). Mean ± standard error (n = 3).

Parameter | Amount
pH (water) [1:2.5] | 5.6 ± 0.27
Available magnesium (mg kg − 1 dry soil) | 100.4 ± 4.81
Available phosphorus (mg kg − 1 dry soil) | 10.4 ± 1.10
Available potassium (mg kg − 1 dry soil) | 97.5 ± 12.83
Available sulphate (mg kg − 1 dry soil) | 51.7 ± 0.62
Total N (% w/w) | 0.5 ± 0.01
Total oxidised N (mg kg − 1 dry soil) | bp 46.0 ± 0.21; ap 97.5 ± 0.40
Ammonium N (mg kg − 1 dry soil) | 6.1 ± 0.09
Organic matter (% w/w) | 11.7 ± 0.29

Table 2. Initial production rates of measured gaseous emissions in g per hour. Mean ± standard error (n = 4). The rates were measured over the following time periods: NO: 0-0.5 days; N 2 O: 0-1 day; N 2 : ED and Control 2.5-4 days, HS 4.5-6 days; CO 2 : 0-1.5 days. Different letters indicate significant differences between treatments (n = 4; p = 0.01). N 2 O emission rates are significantly different between 'HS' and 'Control' at the 95% confidence level (p = 0.017).
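From the variable definitions, Eq. (1) follows because the HS vessel flux, expressed on an areal basis, is the average of one amended core and two water-only cores that behave like the Control: V HS = (E HS⁎ + 2 V C )/3, hence E HS⁎ = 3 V HS − 2 V C . A sketch with illustrative (not measured) flux values:

```python
# Sketch of Eq. (1): on an areal basis the HS vessel flux averages one
# amended core and two water-only (control-like) cores:
#   V_HS = (E_HS* + 2*V_C) / 3   =>   E_HS* = 3*V_HS - 2*V_C
# Flux values below are illustrative, not measured data.

def hotspot_core_flux(v_hs, v_c):
    """Flux from the single amended HS core (kg N or C ha^-1 h^-1)."""
    return 3 * v_hs - 2 * v_c

print(hotspot_core_flux(v_hs=0.30, v_c=0.03))  # ≈ 0.84
```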
Isotopic N 2 O
Gas sampling times for 15 N analysis were pre-determined based on data from previous experiments (data not shown). Samples were taken just before (0 h) and 4 h after amendment, then every 24 h for the first week, followed by a final sample at day 11. This sampling strategy covered changes in isotopic signature before amendment, as well as during the main period of NO and N 2 O fluxes, and after emissions returned to background levels. Samples were taken from the outlet line of each vessel using 12 ml exetainers (Labco) which had previously been flushed with He and evacuated. 15 N-enrichment of N 2 O was measured using a TG2 trace gas analyser (Europa Scientific, now Sercon, Crewe, UK) and Gilson autosampler, interfaced to a Sercon 20-22 isotope ratio mass spectrometer (IRMS). Solutions of 6.6 and 2.9 at% ammonium sulphate ((NH 4 ) 2 SO 4 ) were prepared and used to generate 6.6 and 2.9 at% N 2 O which were used as reference and quality control standards.
The process leading to the formation of the measured N 2 O, i.e. whether it is produced by nitrification or denitrification, can be determined using 15 N-labelling techniques (Stevens and Laughlin, 1998). When the NO 3 − pool is labelled and the N 2 O flux is greater than the IRMS method detection limit (2 ppm), calculations of the fraction of N 2 O derived from the denitrifying pool (d′ D ) can be performed. The sources of N 2 O were apportioned into d′ D and the fraction derived from the pool or pools at natural abundance, d′ N = (1 − d′ D ), calculated as described in Arah (1997).
To determine the source of the measured N 2 O, i.e. how much of it was derived from the amendment (N 2 O_N amend ) rather than the native soil N, the following equation was used for the labelled treatments (Senbayram et al., 2009)
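Both apportionments rest on two-pool isotope mixing. The sketch below shows only that basic principle and is not the exact formulation of Arah (1997) or Senbayram et al. (2009); the natural-abundance value of 0.3663 at% and the sample enrichment used are assumptions for illustration:

```python
# Hedged sketch of two-pool isotope mixing, the principle behind both
# source apportionments above. The published equations are more involved;
# the at% values below are illustrative only.

def pool_fraction(a_sample, a_labelled, a_natural=0.3663):
    """Fraction of N2O-N derived from the labelled pool.

    a_sample   - measured 15N at% of the emitted N2O
    a_labelled - 15N at% of the labelled pool (5 at% KNO3 here)
    a_natural  - 15N at% at natural abundance (assumed 0.3663 at%)
    """
    return (a_sample - a_natural) / (a_labelled - a_natural)

f_amend = pool_fraction(a_sample=4.6, a_labelled=5.0)
print(f"~{100 * f_amend:.0f}% of N2O-N from the labelled amendment")
```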
Soil analyses
Soil samples were taken at the beginning and end of the incubation to determine the initial and final moisture contents and the NH 4 + and total oxidised N (TOxN) concentrations (Searle, 1984). TOxN comprises NO 3 − and NO 2 − ; however, NO 2 − is generally thought to accumulate very rarely in nature, and it has been shown that NO 2 − is rapidly transformed in soil (Paul and Clark, 1989; Burns et al., 1995, 1996), so TOxN is treated here as NO 3 − . 15 N-enrichment of NO 3 − in the soil solution was determined at the Thünen Institute of Climate Smart Agriculture (Braunschweig, Germany) using the bacterial denitrification method (Sigman et al., 2001) and the 15 N-N 2 O obtained was analysed using a modified GasBenchII preparation system coupled to a MAT 253 isotope ratio mass spectrometer (Thermo Scientific, Bremen, Germany) according to Lewicka-Szczebak et al. (2013).
Statistical analysis
Statistical analysis was performed using GenStat 16th edition (VSN International Ltd.). Cumulative emissions were calculated from the area under the curve after linear interpolation between sampling points. Prior to the statistical tests the data were analysed to determine whether the conditions of normality (Kolmogorov-Smirnov test) and equality of variance (Levene test) were satisfied. Where needed to fulfil these assumptions, the data were log-transformed before analysis. Differences in total emissions between treatments for each gas measured were assessed by ANOVA at p < 0.01. Where treatment effects proved to be significant, Fisher's Least Significant Difference (LSD) test was used to ascertain differences between treatments.
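The analysis chain (trapezoidal integration of fluxes, assumption checks, one-way ANOVA) was run in GenStat; an equivalent open-source sketch, with invented data standing in for the study's measurements, might look like:

```python
# Illustrative reanalysis pipeline mirroring the steps described above
# (GenStat is not scriptable here, so scipy stands in).
# All data below are invented, not the study's measurements.
from scipy import stats

# Cumulative emissions: area under the flux curve by linear interpolation
# between sampling points (trapezoidal rule), as described in the text.
t = [0.0, 0.5, 1.0, 2.0, 4.0]            # sampling times (days)
flux = [0.0, 0.8, 1.5, 0.9, 0.1]         # flux (kg N ha^-1 d^-1)
cumulative = sum((t[i + 1] - t[i]) * (flux[i] + flux[i + 1]) / 2
                 for i in range(len(t) - 1))

# Equal-variance check, then one-way ANOVA across the three treatments.
hs = [3.1, 2.8, 3.4, 3.0]                # hypothetical cumulative emissions
ed = [3.2, 2.9, 3.3, 3.1]
control = [0.30, 0.40, 0.20, 0.35]
_, p_levene = stats.levene(hs, ed, control)
_, p_anova = stats.f_oneway(hs, ed, control)
print(f"cumulative = {cumulative:.3f}; Levene p = {p_levene:.3f}; "
      f"ANOVA p = {p_anova:.2e}")
```

A normality check (e.g. `scipy.stats.kstest` against a fitted normal) and a log-transform branch would slot in before the ANOVA in the same way.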
Per vessel
Nitric oxide (NO) emissions (Fig. 2a) increased immediately after amendment application with a peak lasting for about 2.5 days. NO emissions from the ED (equal distribution) treatment were about 4 times greater during the initial 12 h after amendment application than in the HS (Hot Spots) treatment (Table 2). Emissions from the ED treatment peaked after 26 h before decreasing again. In the HS treatment however, there was a plateau in NO emissions from about 24 to 48 h before showing the same decrease as the ED treatment. Cumulative emissions of NO (Table 3) were 2.7 times greater from the ED treatment compared to the HS treatment. Emissions of NO from the Control treatment were negligible.
Similar to NO emissions, N 2 O emissions increased immediately after amendment application (Fig. 2b). Over the course of the experiment, N 2 O fluxes from HS and ED showed the same shape and reached the same maximum fluxes, but at different times. The initial emission rate, determined over the first 24 h after amendment application, was about three times higher in the ED treatment than in the HS treatment (Table 2). In contrast to NO emissions, N 2 O emissions reached similar maximum fluxes for both treatments as well as similar cumulative emissions (Table 3). However, due to the initially slower increase in emissions, the maximum N 2 O fluxes in the HS treatment were reached about 2 days later than in the ED treatment. The Control treatment only showed very small N 2 O emissions from 12 to 36 h after water addition.
Di-nitrogen gas (N 2 ) emissions were initially close to baseline levels, but showed an increase 3.5 days after amendment in the ED treatment and about 5 days after amendment in the HS treatment. Similar to N 2 O emissions there was no significant difference in the maximum fluxes (Fig. 2c) or cumulative N 2 emissions (Table 3) between the two treatments, while both were significantly higher than the Control which showed N 2 emissions around baseline levels. The rate of increase in N 2 concentrations was measured over 36 h following the start of the N 2 peak (days 2.5-4.0 for the ED treatment, days 4.5-6.0 for the HS treatment). In contrast to NO and N 2 O emissions, there was no significant difference in the rates at which N 2 emissions increased ( Table 2).
Total denitrification was calculated as the sum of all N emitted (Table 3) and was not significantly different between the HS and ED treatment. However, with 9 times higher N emissions than the Control treatment, both amended treatments had a significantly higher total N loss through gaseous emissions.
Carbon dioxide (CO 2 ) fluxes behaved in a similar manner to N 2 O fluxes. For both the ED and HS treatments, CO 2 emissions increased immediately after amendment application (Fig. 2d). In the ED treatment concentrations increased at about twice the rate of the HS treatment (Table 2), peaking after about 3 days. In the HS treatment concentrations peaked after about 4.5 days at a slightly lower maximum flux (2.2 kg C ha − 1 h − 1 ) than in the ED treatment (2.8 kg C ha − 1 h − 1 ). With a p-value of 0.011, cumulative emissions (Table 3) were different at the 95% level. CO 2 emissions above background levels were negligible for the Control treatment. (1 kg ha − 1 h − 1 = 1.74 × 10 − 5 mg cm − 2 h − 1 ).
Per amended area
Using Eq. (1), average emissions from cores that received 225 kg N + 1200 kg C ha − 1 (the one amended core from the HS treatment, HS*) could be compared to those that received N and C at a rate of 75 and 400 kg ha − 1 (each core in the ED treatment), and those that only received water (the two unamended cores from the HS treatment and all three cores from the Control vessels) (Fig. 2e-h, Table 3). Results show that total NO emissions were similar between the amended cores ( Fig. 2e, Table 3), independent of the amount of N and C added, but significantly higher than the control cores.
Total N 2 O and CO 2 emissions (Table 3), on the other hand, were about three times higher from the core that had received 3 times the amount of N and C. Fig. 2f and h show that initial emissions up to day 2 were the same in both treatments, but while emissions decreased from the cores with the lower application rate (75 kg N) and reached background levels by day 5, emissions from the core with the higher N application (225 kg N) continued to rise, reaching their maximum at day 5 and only returning to background levels by day 9. N 2 emissions from the cores receiving the lower application rate were similar to the control, but were higher from the 225 kg N amended core (Fig. 2g). Total denitrification, calculated as the sum of all emitted N gases, was about three times as high from the cores with the higher amendment (225 kg N) as from the cores with the lower N and C concentration (75 kg N) (Table 3). With a p-value of 0.015, the cores with the higher rate of N and C applied show significantly higher total N emissions at the 95% confidence level.
Soil mineral N
Results of the final soil analysis are given in Table 5. When considering only the amended core of each treatment, the core amended with 225 kg N ha − 1 from the HS treatment (column 225 kg ha − 1 ) showed significantly higher concentrations of NO 3 − than the other cores, both at the top and at the bottom of the core. In all amended cores the 15 N enrichment of NO 3 − was higher in the top half (1.683 ± 0.423 and 2.611 ± 0.508 at% for the 75 and 225 kg N ha − 1 amended cores, respectively) than in the bottom half (1.469 ± 0.327 and 2.514 ± 0.491 at% for the 75 and 225 kg N ha − 1 amended cores, respectively). The enrichment was significantly higher in the cores receiving the higher N concentration (p < 0.01). By the end of the experiment, in the cores amended with the higher N concentration about 45% of the remaining soil NO 3 − originated from the amendment, equating to 110.3 mg N kg − 1 dry soil, while in the cores amended with the lower N concentration about 25% of the remaining NO 3 − originated from the amendment, equating to 44.0 mg N kg − 1 dry soil (Fig. 4). The soil NH 4 + -N concentrations were lower than the NO 3 − concentrations at the end of the incubation in all treatments, with significantly higher values in the bottom section of the core. Looking at the whole vessel as well as at individual amended cores, the vessel/core receiving 75 kg N ha − 1 (ED treatment) showed significantly lower amounts of NH 4 + (both in the top and bottom half of the core) than the vessel, and also the 225 kg N ha − 1 amended core, from the HS treatment. The Control treatment showed NH 4 + amounts similar to the HS treatment at the top and significantly lower amounts at the bottom of the cores. Soil moisture was 85% WFPS at the start of the incubation and remained similar between all cores irrespective of treatment.
Gaseous emissions
Only negligible gaseous emissions were detected in the control treatment. It can therefore be assumed that N 2 O emissions in the HS and ED treatments result almost exclusively from the amendments, which was confirmed by 15 N analysis (see below). Overall, total emissions of N 2 O, N 2 and CO 2 were not significantly different between the HS and ED treatments, meaning that the one amended core in the HS treatment produced three times the amount of gases as one core within the ED treatment. This indicates that the emission of those gases is related to the amount of applied NO 3 − and C, i.e. NO 3 − and C being the factors limiting denitrification activity, rather than the soil area (and mass) that receives the amendment. Therefore, three times more N 2 O, N 2 and CO 2 were produced when three times the amount of KNO 3 was applied. A similar effect has been observed by Wang et al. (2013) who found increasing N 2 O, N 2 and CO 2 emissions with increasing initial NO 3 − concentrations.
Though total emissions were similar, the peak of N 2 O and N 2 fluxes was delayed by about 2 days in the HS treatment. There was no leaching in this experiment, so this delay implies that the applied nutrients remained in the soil for a longer period in the HS than in the ED treatment, where the transformation products in the form of N 2 O were detected and increased immediately after nutrient application. In contrast, NO emissions were three times lower in the HS treatment than in the ED treatment, meaning that emissions from each amended core were the same, independent of the amount of KNO 3 applied. This suggests that NO emissions were related to the area (or soil volume) that received the amendment and not to the amount of applied nutrients. NO emissions are therefore not a good indicator of hot-spot activity.
Denitrification reactions
In the ED treatment the amendment solution was spread over all three cores, supplying a three times larger microbial community with the nutrients than in the HS treatment. The lower amounts of NO emitted from the HS treatment can be explained both by a larger microbial community accessing the supplied NO 3 − substrate in the ED treatment, and by a delay in the production of NO reductase (Nor), the enzyme responsible for reducing NO to N 2 O. In the HS treatment a smaller microbial community was supplied with the NO 3 − substrate and less NO was produced than by the larger community in the ED treatment, which resulted in smaller initial emission rates. The microbial community using the NO 3 − substrate could grow and was therefore able to reduce more NO 3 − to NO. However, by the time the community was increasing NO production, it had also had time to develop the ability to further reduce the NO to N 2 O. The consumption of NO then resulted in a plateau in NO emissions in the HS treatment after just over 24 h. A similar pattern for NO emissions was also found by Wang et al. (2013). While they found that cumulative NO emissions increased with initial NO 3 − concentrations when those were below 50 mg N kg − 1 dry soil, they found no difference in NO emissions at higher concentrations. Similarly, Shannon et al. (2011) found no difference in the activity of Nor in an experiment where they inoculated Pseudomonas mandelii into anoxic soil with glucose (500 mg C kg − 1 dry soil) and NO 3 − at concentrations ranging from 0 to 500 mg N kg − 1 dry soil. In addition, it has been shown that the production of Nor is delayed by 24 to 48 h following the onset of anaerobic conditions (Saggar et al., 2013). However, NO emissions are not solely dependent on the NO 3 − concentration but also on the soil water content, pH, the soil temperature and the ambient NO concentration (Ludwig et al., 2001; Obia et al., 2015).
In contrast to NO emissions, N 2 O emissions were similar between the HS and ED treatments, but calculating the gaseous N emissions per amended core confirmed a higher amount of N 2 O emitted from the cores receiving the higher concentration of KNO 3 and C (225 kg N and 1200 kg C ha − 1 ), meaning that total emissions were related to the amount of N and C applied and independent of the area they were applied to. During denitrification N 2 O is the product of NO reduction. The low amounts of detected NO are explained by NO being reduced to N 2 O before it can reach the soil surface and be measured. Following the denitrification process, N 2 O should be further reduced to N 2 . Although N 2 concentrations were elevated in the core with the higher concentrated amendment, concentrations were low and the difference to the cores receiving the lower N amendment was not significant.
This result can be explained by the metabolism of the denitrifying microbial community. Because NO is membrane-labile and highly toxic, most bacteria, including all denitrifiers, synthesise the Nor enzyme to reduce NO to N 2 O and so avoid poisoning. However, many denitrifiers lack one or more of the other enzymes that catalyse the reduction steps of denitrification (Saggar et al., 2013). Very often this is the N 2 O reductase (Nos), which reduces N 2 O further to N 2 . Additionally, energy yields from denitrification reactions lessen in order of their sequence, with the reduction of NO to N 2 O being more energetically favourable than the reduction of N 2 O to N 2 (Koike and Hattori, 1975; Saggar et al., 2013). The combination of relatively high N 2 O production and the very low amounts of N 2 detected in this experiment can be explained by these factors, which promote an accumulation of N 2 O. Additionally, NO 3 − was present in abundance, and although denitrification requires available C, which was also applied, the C might have become limiting before the NO 3 − was used up, so the microorganisms never performed the last, less energetically favourable step of reducing N 2 O to N 2 . Carbon dioxide emissions are a measure of biological activity and are often used to indicate microbial activity or respiration (Parkin et al., 1996). Denitrification requires an electron donor such as C. In this experiment glucose-C was applied, resulting in the production of CO 2 . The measured CO 2 concentrations increased similarly to the N 2 O emissions, peaking just before the maximum N 2 O emissions were measured. The simultaneous occurrence of peak CO 2 and N 2 O fluxes may indicate that both denitrifying and other heterotrophic microbes were active at the time (Tiedje, 1988).
Molar ratios of denitrification gases
Ratios of NO:N 2 O as well as N 2 O:N 2 have been used as indicators of the relative contributions of nitrification and denitrification to the detected NO and N 2 O emissions. For the ED and HS treatments the molar NO:N 2 O emission ratios in this experiment decreased from 0.0046 to 0.0002 during the first 5 days due to a decrease in NO emissions and an increase in N 2 O emissions. With decreasing N 2 O emissions those ratios increased again to 0.0016 by day 7, after which NO emissions were below the detection limit. In the Control, ratios decreased similarly until day 1.5 but then increased gradually to 0.012 by day 7. Ratios of total, cumulative emissions were below 0.001 for all treatments, irrespective of whether and how an amendment was applied (i.e. as a hot-spot (HS) or equally distributed (ED), as a high (225 kg ha − 1 ) or low (75 kg ha − 1 ) concentration, or without nutrient addition (Control)).
Values < 0.01 have been associated with denitrification and restricted aeration (Skiba et al., 1992) and while our results fit this assumption, it should be noted that other studies have clearly shown that using the NO:N 2 O ratio to judge whether nitrification (NO:N 2 O > 1) or denitrification (NO:N 2 O < 1) is the dominant source process must be reconsidered (del Prado et al., 2006; Scheer et al., 2009; Wang et al., 2011; Wang et al., 2013). The N 2 :N 2 O ratios peaked with the N 2 peak of the respective treatment. The largest N 2 :N 2 O ratios are expected if available C is high and the denitrification reactions are followed all the way to N 2 , whereas if NO 3 − concentrations are high but available C is low, the reduction of N 2 O to N 2 is inhibited and N 2 O may be the sole end product, resulting in a low N 2 :N 2 O ratio (Wang et al., 2011). Ratios of cumulative emissions were around 0.1 for the amended treatments (HS, ED, 75 kg ha − 1 , 225 kg ha − 1 ) and 1 for the Control treatment. Decreasing N 2 :N 2 O ratios after day 4 in ED and after day 6 in HS indicate C limitation in this experiment. However, a great range of ratios, from < 1 to 200, has been reported in the literature, indicating that those ratios can vary significantly depending on soil NO 3 − , C availability, redox potential, soil properties and denitrifier activity (Wang et al., 2013).
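As a concrete illustration of how such ratios are obtained, the sketch below converts cumulative emissions reported on an N-mass basis into molar NO:N 2 O and N 2 :N 2 O ratios. The emission values are hypothetical, chosen only to mirror the orders of magnitude discussed above, not measured data from this experiment.

```python
# Molar ratios from cumulative N-gas emissions reported on an N-mass basis.
# NO carries one N atom per molecule; N2O and N2 each carry two.

M_N = 14.0  # g of N per mol of N atoms

def moles(mass_n_g, n_atoms_per_molecule):
    """Moles of gas molecules given the mass of N they carry."""
    return (mass_n_g / M_N) / n_atoms_per_molecule

# Hypothetical cumulative emissions (g N per hectare).
no_n, n2o_n, n2_n = 0.5, 600.0, 60.0

no = moles(no_n, 1)
n2o = moles(n2o_n, 2)
n2 = moles(n2_n, 2)

print(f"molar NO:N2O = {no / n2o:.4f}")  # small ratio: denitrification-dominated
print(f"molar N2:N2O = {n2 / n2o:.2f}")  # low ratio: incomplete reduction to N2
```

With these illustrative numbers the NO:N 2 O ratio falls well below 1, consistent with denitrification rather than nitrification dominating.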
15 N-N 2 O
15 N analysis was used to determine whether the native soil NO 3 − or the NO 3 − added with the amendment was the source of the emitted N 2 O. Results showed that emissions measured in the ED treatment were mainly from the added NO 3 − throughout the whole incubation period.
In the HS treatment, however, a low 15 N enrichment of the measured N 2 O after 4 h indicates that during the first few hours most of the emitted N 2 O was derived from the native soil NO 3 − pool. As the production of N 2 O is low at this stage, the N 2 O produced from the non-amended cores is likely to mask the effect of the amendment on N 2 O production. While the microbial communities receiving nutrient amendment are expected to be stimulated to the same extent, in the HS treatment only one third of the soil and its microbial community received nutrient amendment. The lower percentage of amendment-derived N 2 O 4 h after N application in the HS treatment may therefore be explained by this smaller volume of soil receiving the enriched amendment. At this stage the two cores that only received water within this treatment were producing N 2 O from native soil N sources, like the Control treatment. The higher ratio of amendment-derived N 2 O later in the HS treatment possibly results from the enhanced accessibility of the amendment within a small core volume, replacing the use of native soil N, which might be harder for the microbial community to access. Fig. 4b shows that at the end of the experiment about 130 mg NO 3 − -N kg − 1 remained in both treatments which was not derived from the amendment. This large total amount of NO 3 − at the end of the experiment indicates that denitrification might have stopped due to a lack of available C.
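The source partitioning described here follows the standard two-pool isotope mixing logic, which can be sketched in a few lines. The enrichment values below are assumed for illustration and are not the measured atom% values from this experiment.

```python
# Two-pool 15N mixing model: fraction of N2O-N derived from the enriched
# amendment, given the 15N atom% of the sample, the native soil NO3- pool,
# and the applied K15NO3. All numbers below are illustrative assumptions.

def amendment_fraction(sample_atpct, soil_atpct, amendment_atpct):
    """Fraction of N2O-N derived from the 15N-labelled amendment."""
    return (sample_atpct - soil_atpct) / (amendment_atpct - soil_atpct)

soil = 0.3663       # natural 15N abundance of the native NO3- pool (atom%)
amendment = 50.0    # assumed enrichment of the applied K15NO3 (atom%)

early_hs = amendment_fraction(2.0, soil, amendment)   # shortly after application
late_ed = amendment_fraction(45.0, soil, amendment)   # later in the incubation

print(f"amendment-derived N2O, early HS: {early_hs:.1%}")
print(f"amendment-derived N2O, late ED:  {late_ed:.1%}")
```

A sample barely above natural abundance implies mostly native-soil-derived N 2 O; a sample close to the amendment enrichment implies mostly amendment-derived N 2 O.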
Conclusions
The results of our study showed that under the given conditions NO emissions were proportional to surface area, while N 2 O emissions were proportional to nutrient concentration.
Results of this experiment showed that applying nutrients in a localised manner reduced the rate of NO emissions, a gas of environmental concern. At the same time it delayed gaseous emissions of N 2 O, resulting in a longer residence time of the parent compound in the soil.
This study therefore showed that emissions of different gases are not influenced by the same factors in the same way. The amount of NO emitted depends on the area/soil volume that received KNO 3 and C fertiliser, while the scale of N 2 O and N 2 emissions depends on the amount of applied KNO 3 and available C.
Our results indicate that, under conditions promoting denitrification, the tendency for higher activity at nutrient hot-spots is greater for N 2 O and N 2 emissions. Due to the relatively low amounts of emitted NO, the contribution of this gas to the total gaseous emissions of N was negligible. However, as mitigation strategies reduce emissions of N 2 O, NO will become of more interest in the future, and the different factors influencing its emission will need to be considered and incorporated into mitigation strategies.
This study was performed under the highly controlled conditions necessary to investigate the effects of single factors. However, because of these conditions the results cannot be scaled up directly to the field. Further experiments are needed to expand our knowledge of the conditions affecting emissions. While this study did not include mechanistic investigations, future studies should include analyses such as methods to determine denitrification kinetics. It is possible that DNRA (or nitrate ammonification) contributed to these emissions, although several studies have demonstrated this process to be low under high nitrate conditions such as in our experiment (e.g. Rütting et al. (2011), van den Berg et al. (2015)).
Additionally, this experiment was designed to investigate soil effects only. However, when plants are introduced to the system in future experiments, it is expected that this delay in NO 3 − reduction will give those plants more time to take up the NO 3 − , thereby reducing the amount of NO 3 − in the soil. Decreasing the NO 3 − available to denitrifiers as an electron acceptor can lower N 2 O emissions not only through reduced substrate availability but also by driving those organisms to perform the subsequent, less energetically favourable step of denitrification, i.e. reducing N 2 O to N 2 , and hence lowering GHG emissions even further.
Human Capital Accumulation and the Evolution of Overconfidence
This paper studies the evolution of overconfidence over a cohort’s working life. To do this, the paper incorporates subjective assessments into a continuous time human capital accumulation model with a finite horizon. The main finding is that the processes of human capital accumulation, skill depreciation, and subjective assessments imply that overconfidence first increases and then decreases over the cohort’s working life. In the absence of skill depreciation, overconfidence monotonically increases over the cohort’s working life. The model generates four additional testable predictions. First, everything else equal, overconfidence peaks earlier in activities where skill depreciation is higher. Second, overconfidence is lower in activities where the distribution of income is more dispersed. Third, for a minority of individuals, overconfidence decreases over their working life. Fourth, overconfidence is lower with a higher market discount rate. The paper provides two applications of the model. It shows the model can help make sense of field data on overconfidence, experience, and trading activity in financial markets. The model can also explain experimental data on the evolution of overconfidence among poker and chess players.
Introduction
Evidence from economics and psychology shows that entrepreneurs, currency traders, fund managers, car drivers, college professors, and aviation pilots have one thing in common: they all hold overly positive views of their relative abilities. The tendency that individuals have to make overly positive evaluations of their relative abilities is a staple finding in psychology. According to [1], a textbook in social psychology: "(...) on nearly any dimension that is both subjective and socially desirable, most people see themselves as better than average." Throughout the paper this bias is referred to as overconfidence.
Overconfidence influences behavior in many economically relevant situations. For example, [2] shows experimentally that there is more entry into markets when self-selection and relative skill determines payoffs. Ref. [3] finds that CEO overconfidence is associated with a higher likelihood of making acquisitions. Ref. [4] shows that CFO overconfidence is correlated with own-firm project overconfidence and increased corporate investment. Overconfidence also has implications for labor market decisions, as reviewed by [5].
Interestingly, experience with an activity and repeated feedback do not necessarily diminish overconfidence. For example, [6] ran experiments where participants showed no overconfidence as they begin an activity, quickly became overconfident, and then overconfidence leveled off while performance continued to increase. They label this finding the "beginner's bubble hypothesis" whereby individuals begin their career at some activity by quickly becoming Second, the model predicts that if there are strong diminishing returns to the production of skills from increases in the capability to produce human capital, then one should find smaller levels of overconfidence in activities where the distribution of income is more dispersed. The intuition for this result is as follows. It is a well-known result from standard human capital accumulation models that an increase in heterogeneity in the capability to produce human capital increases income dispersion. This result also applies to our model. Additionally, if there are strong diminishing returns to the production of skills from increases in the capability to produce human capital, then an increase in heterogeneity in the capability to produce human capital also lowers overconfidence. This happens because when individuals' capability to produce human capital becomes more variable, the chance of moving up in relative rankings through skill investment decreases.
Third, for the majority of individuals overconfidence first increases and then decreases over their working life, but for a minority-those who start with high initial skills and who have low ability to produce human capital-overconfidence decreases over their working life.
Fourth and last, overconfidence is lower with a higher market discount rate. When the market discount rate is high the future is heavily discounted, and individuals will devote fewer resources to producing human capital. If that is the case, then the correlation between productivity and final skills will be smaller and so will be overconfidence.
The rest of the paper proceeds as follows. Section 2 reviews related literature. Section 3 reviews empirical evidence on the evolution of overconfidence. Section 4 sets up the model. Section 5 contains the findings. Section 6 presents two applications. Section 7 discusses the main assumptions and alternative explanations. Section 8 concludes the paper. Appendix A contains the proofs of all results.
Related Literature
This section relates the human capital accumulation and subjective assessments model to the existing literature on the evolution of overconfidence. More importantly, this section shows that the model's main prediction-that overconfidence first increases and then decreases over a cohort's working life-stands in contrast with the predictions of the existing literature, except those of [8,9].
In the psychology literature, overconfidence falls under the rubric of "biases in judgment" together with optimism (overestimation of the chances of experiencing favorable events), and the self-serving bias in causal attribution (the fact that most people tend to attribute success to effort or ability and failure to bad luck). Ref. [10] distinguishes between three main types of overconfidence: overestimation, overplacement, and overprecision. Overestimation is the tendency to overestimate one's absolute skills, performance, or desirable personality traits. Overplacement is the tendency to overestimate one's relative skills, performance, or desirable personality traits. Overprecision is the tendency to overestimate the precision of one's estimates or knowledge. This paper uses the term overconfidence in the sense of overplacement.
Overconfidence can be the outcome of Bayesian updating from a common prior. In [11] individuals learn their ability by actively undertaking costly experiments. The costs of experimenting are proportional to expected output, which increases in expected ability. Individuals will continue testing their abilities until their posterior beliefs become high enough, at which point they stop. Those with higher beliefs start producing early, since their opportunity cost of experimenting is higher. In contrast, those with lower beliefs keep experimenting until they strike a string of good signals, and so will end up with high posteriors. This way, the share of individuals with high posterior beliefs grows over time.
In [12] individuals passively learn their ability through their personal experiences (success or failure) working at an activity. If unfavorable signals are rare (the activity is easy), the population becomes overconfident. In contrast, if unfavorable signals are frequent (the activity is hard), the population becomes underconfident. Over time, as signals accumulate, individuals' posterior beliefs converge to their true ability and the population ends up with correct beliefs.
Overconfidence can arise in a population of Bayesian rational agents with differing priors or opinions [8,13]. Evidence from social psychology demonstrates that individuals make subjective assessments when evaluating the abilities of others. That is, in order to evaluate the behavior of others, they apply the standards that they use on themselves. Ref. [8] shows that in the presence of skill enhancement, subjective assessments lead to overconfidence. This model implies that overconfidence of a cohort should increase with experience, provided that skill investment opportunities increase with experience. However, [8] does not consider the impact of skill depreciation over a finite horizon on the evolution of overconfidence.
Overconfidence can be a consequence of confirmation bias: the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's prior beliefs or values [14,15]. In [16] there are two possible states of the world, and agents receive binary signals that are correlated with the true state. Agents initially view the two states as equally likely and, after receiving each signal, update their beliefs about the true state. Ref. [16] assumes agents display confirmation bias, that is, when an agent receives a signal that runs counter to his current belief about which state is more likely, there is a positive probability that he misinterprets that signal. The model shows that the first signals an agent observes play a disproportionately large role in determining his posterior beliefs, and that the agent displays overconfidence in the sense that his belief in favor of one state is stronger than what is justified by the available evidence. When the bias is mild, learning will eventually lead the agent to the truth. However, when the bias is severe, learning can exacerbate it.
Overconfidence can also be a result of the self-serving bias in causal attribution: the fact that people tend to attribute success to skill and failure to bad luck [17]. In [9] individuals start out with a common prior belief about their ability, observe a sequence of signals, and display a learning bias inspired by the self-serving bias in causal attributions: they overweight their successes when they form their posterior beliefs. The model shows that as soon as an individual observes one success, he overestimates his ability. In the short run, after a few signals, individuals will tend to overestimate their abilities. In the long run, as signals accumulate, and provided that the learning bias is not too large, individuals will end up with correct beliefs. Hence, [9] also predicts an inverse U-shaped pattern for the evolution of overconfidence, provided that the learning bias is not too large.
Overconfidence might also exist because it provides strategic benefits that compensate for its decision-making costs. In [18,19] a large population of individuals are continuously and randomly matched in pairs to interact with one another. Individuals may differ in the way they perceive the returns of their actions. An overconfident individual overestimates the return to his action for any given action taken by the rival, while an underconfident individual underestimates it. Individuals' perceptions are perfectly observable. In every pairwise interaction, the matched individuals choose actions to maximize their perceived payoff functions and receive payoffs according to their actual payoff functions. Actions can be either strategic substitutes or complements. The proportion of more successful perceptions in the population increases over time at the expense of less successful perceptions. Ref. [18,19] shows that the distribution of perceptions converges to a unit mass where individuals slightly overestimate the returns to their actions. All other perceptions, including correct ones, become extinct asymptotically.
Empirical Evidence
This section presents empirical evidence showing that experience with an activity and repeated feedback do not necessarily diminish overconfidence. More surprisingly, in many instances experience and overconfidence are positively correlated.
Ref. [20] studies aviation pilots' perceptions of relative flying ability. Aviation pilots report their flight hours and assess their relative ability to avoid inadvertent flights into clouds or fog (and to fly out of clouds or fog) by comparison with other pilots with similar flight experience.
One question asked "In comparison with other pilots with similar flight background and experience as yourself, how would you rate your ability to avoid inadvertent flight into instrument meteorological conditions (i.e., cloud or fog)?" Another question asked "In comparison with other pilots with similar flight background and experience as yourself, how would you rate your ability to successfully fly out of instrument meteorological conditions should inadvertent flight into cloud or fog occur?" The pilots' answers show that they believed they were more able than average both to avoid inadvertently flying into clouds or fog and to successfully fly out of clouds or fog. Ref. [20] also finds that flight hours are a significant predictor of pilots' assessments of their relative ability. Hence, the experience of aviation pilots seems to raise their overconfidence rather than reduce it. An older study of aviation pilots [21] also finds evidence of overconfidence about flying ability. However, in contrast to [20], younger pilots are more overconfident about their abilities than older pilots.
Ref. [22] studies the impact of expertise on several judgment biases. To this purpose, they ran two experiments. The first one involved a group of 29 German professional traders at a bank (median age of 33 years, median of 5 years of experience in the bank, 14 had a university diploma) and a control group of 75 advanced students in Banking and Finance (median age of 24 years). The second one involved a group of 90 professional investment bankers (median age of 34 years) and another control group of 76 advanced students (median age of 24 years). Among other judgment biases, they wanted to compare the overconfidence of professionals to that of students. They asked subjects to state subjective confidence intervals for 20 questions (10 questions concerning general knowledge and 10 questions concerning economics and finance). After that, each professional (student) was asked to evaluate his own performance and the performance of an average professional (student). Ref. [22] finds that the degree of overconfidence of professionals is greater than that of the student control group in both experiments. Thus, the experience of professional traders seems to exacerbate the degree of overconfidence rather than reduce it.
Ref. [23] finds evidence of overconfidence in German fund managers. The survey asked "How do you evaluate your own performance compared to other fund managers?" The fund managers could pick from five categories from "much better" (coded as 5) to "much worse" (coded as 1). The mean assessment for all fund managers was 2.67, which indicates a tendency to see oneself as better than others. Ref. [23] also collected data on each fund manager's professional experience. Fund managers were divided into "inexperienced" (less than 5 years of professional experience), "experienced" (more than 5 and less than 15 years of professional experience), and "very experienced" (more than 15 years of professional experience). The mean assessment of the inexperienced group was 2.33, the mean assessment of the experienced group was 2.72, and the mean assessment of the very experienced group was 2.89.
Ref. [24] finds that participants in poker and chess tournaments overestimate their relative performance even when given monetary incentives to make accurate predictions. They also find that overestimation of relative performance of poker players increases with experience. By contrast, they find that chess players' forecasts of relative performance in tournaments becomes more accurate with experience.
Ref. [25] uses a survey to study self-confidence of North American foreign exchange (FX) traders. Among other things the survey asked "How successful do you see yourself as an FX trader?" The top rank of 7 was assigned to "Much more successful than other FX traders"; the bottom rank of 1 was assigned to "Much less successful than other FX traders." Ref. [25] also asked participants' immediate superiors (i.e., head traders or chief dealers) to rank them on a seven-point scale for three separate measures of performance: "trading potential," "trading profits," and "overall contribution to the organization." The currency market professionals gave themselves a mean ranking of 5.06 or "better than average." Almost three quarters of FX traders (73.6%) perceived themselves as more successful than other FX traders. Both FX traders at top-tier and lower-tier institutions exhibited the same tendency. A strong tendency for overestimation of relative performance was found when FX traders' assessments were compared to their superiors' assessments. The FX traders in the survey tended to be fairly experienced and high-ranking: the average work experience in the FX market was 12 years, and 75% of the participants were senior traders. Traders' work experience in the FX market was positively correlated with overconfidence.
Ref. [6] conducted six studies on the evolution of overconfidence. The first four studies were laboratory experiments where participants completed a novel medical diagnostic task over repeated trials. Participants in the first four studies showed no overconfidence as they began the activity, but after a few learning trials their confidence rose and then leveled off while performance continued to increase. The last two studies switched to a real-world task: subjective assessments of financial literacy across the life span. The data were obtained from panels from the Financial Industry Regulatory Authority (FINRA) survey on financial capability. Each panel queried a nationally representative sample of roughly 25,000 U.S. respondents on their financial history, habits, and opinions. Participants' subjective assessments of financial knowledge were compared to a financial literacy test. Ref. [6] finds that overconfidence about financial literacy increases across the life span.
Ref. [7] finds that managers of a chain of food-and-beverage stores who compete repeatedly in high-stakes tournaments overplace themselves relative to a range of different predictors obtained from past tournament outcomes. Overplacement is persistent under repeated feedback, and there is evidence of selective memory: managers with poorer past performances have larger recall errors, and these are skewed towards overly positive memories. In addition, managers who have overly positive memories of past feedback are those who are particularly likely to overplace themselves.
The Model
The human capital accumulation model introduced by [26] has proved one of the most successful models in explaining the evolution of individuals' earnings over their working life. The model has withstood empirical testing and provides a plausible theoretical benchmark for studying skill investment decisions over time. The human capital accumulation model in this paper is based on [26] and is given by

max_{I_1, I_2} ∫_0^T e^(−rt) [λ_1 K_1(t) + λ_2 K_2(t) − I_1(t) − I_2(t)] dt, (1)

subject to the laws of motion for the two skills described below, where K_i represents units of skill i, λ_i represents the marginal perceived productivity of skill i, and I_i(t) represents the amount spent to increase skill i. According to this model, an individual chooses how much to invest in each of two skills with the objective of maximizing his discounted sum of perceived disposable income over his working cycle. Perceived disposable income at time t is the difference between perceived gross income at time t, λ_1 K_1(t) + λ_2 K_2(t), and the amount spent on goods and services to increase the two skills at time t, I_1(t) + I_2(t). Perceived gross income is an increasing function of the stock of each skill K_i(t) and its perceived productivity λ_i . More precisely, perceived gross income is a linear function of the two skills weighted by their perceived productivity.
The model assumes an individual cannot buy skills by going to the capital market; instead he has to produce them. The rate of change of the stock of each skill, K̇_i(t), is determined by the amount that is produced, A^α I_i(t)^b, where α ∈ (0, 2) and b ∈ (0, 1), less the depreciated stock δK_i(t), where δ is the constant rate of depreciation and δ ∈ [0, 1]:

K̇_i(t) = A^α I_i(t)^b − δK_i(t), i = 1, 2.

The parameter A measures the capability of an individual to produce human capital. The assumption that α ∈ (0, 2) implies that there are decreasing returns to the production of skills from increases in the capability to produce human capital. The parameter b measures the impact of investments in goods and services on skill production. The assumption that b ∈ (0, 1) implies the production of skills exhibits decreasing returns to increases in direct expenditures in goods and services. The individual can borrow and lend at the constant market discount rate r ∈ (0, 1).
The model differs from [26] in four main ways. First, it assumes there is more than one skill. Second, it assumes different individuals perceive the productivity of the skills differently. These two critical assumptions are needed for the model to generate overconfidence. The intuition follows from Santos-Pinto and Sobel (2005). If there is only one skill, then each individual only invests in that skill. If all individuals have the same initial stock of that skill and the same capability to produce it, then all end up with the same final stock of that skill. Hence, whether or not there is heterogeneity in the productivity of this skill, everyone thinks he is as good as everyone else. When there are two skills and the productivity of each skill is evaluated differently, an individual will invest more in the skill that he values the most. If different individuals evaluate the two skills differently, then their final stocks of the two skills will differ. Furthermore, since each individual uses his own evaluation to assess the worth of the final stocks of skills of others, both individuals will tend to think they are better than the other. Third, the model does not consider the choice between time spent in formal education and time working. Fourth, the model abstracts from the choice between how much time to devote to market production versus skill production. These last two assumptions make the model tractable.
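This two-skill intuition can be checked with a toy numeric example (all numbers hypothetical): two individuals with mirrored perceived productivities end up with mirrored skill stocks, and each one, applying his own weights, ranks himself above the other.

```python
# Mutual overconfidence from subjective assessments, in miniature.
# Individual 1 perceives productivities (2, 1) and accumulates more of
# skill 1; individual 2 perceives (1, 2) and accumulates more of skill 2.
# The skill stocks are illustrative, not model solutions.

lam1, lam2 = (2.0, 1.0), (1.0, 2.0)   # perceived marginal productivities
K1, K2 = (4.0, 2.0), (2.0, 4.0)       # illustrative final skill stocks

def perceived_worth(lam, K):
    """Value of a skill bundle under a given set of perceived weights."""
    return lam[0] * K[0] + lam[1] * K[1]

# Each individual evaluates both bundles with his own weights.
print(perceived_worth(lam1, K1), perceived_worth(lam1, K2))  # 10.0 8.0
print(perceived_worth(lam2, K2), perceived_worth(lam2, K1))  # 10.0 8.0
```

Both individuals assign themselves the higher worth, so each believes he is better than the other, even though their situations are perfectly symmetric.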
Finally, the model makes several simplifying assumptions. It assumes the unit cost of investment in each skill is the same and that the rate of skill depreciation of each skill is identical. Generalizations of these two assumptions would have no implications in terms of the main results of the model. The model also assumes that the production function of each skill does not depend on the current stock of that skill. Usually, the production function would be specified with two inputs: current skill stock and the amount spent in market goods. Assuming the production of skills also depends on current skill levels complicates the algebra without changing the main insights in the paper. Finally, the model could have allowed for α_1 ≠ α_2 and b_1 ≠ b_2, and also for different prices of expenditures in goods and services for each skill. This generalization also has no implications for the main results. The model assumes symmetry in the cost and production of skills to focus on the implications of heterogeneity in perceived skill productivity for skill investments.
Solving the Model
Applying standard control theory to Problem (1), one finds that the evolution of investment in skill i is given by Equation (2). Equation (2) is a Bernoulli differential equation with constant coefficients, with solution given by Equation (3). It follows from (3) that the amount invested in skills decreases over time, reaching zero at t = T. At the beginning of an individual's working life there are strong incentives to produce human capital, since at that time human capital generates income for many periods. Similarly, when an individual approaches the end of his working life there are almost no incentives to produce new human capital, since at that time human capital generates income for only a few periods. It also follows from (3) that investment in skills does not depend on the stocks of skills. This happens because the production function of human capital does not depend on current skill levels.
Substituting (3) into the equation for the evolution of the stock of skill i gives us Equation (4). It follows from (4) that at the end of an individual's working life K̇ i (T) = −δK i (T); that is, since there is no new production of human capital at time T, the stock of each skill is reduced by the amount of depreciation.
Equation (4) can be solved for any b contained in (0, 1). When b/(1 − b) is an integer, the solution to (4) is a finite series. However, when b/(1 − b) is not an integer, the solution to (4) is an infinite series. From now on assume b = 1/2. This assumption makes the problem tractable without loss of generality. For a detailed discussion of this simplifying assumption see [27,28]. Thus, setting b = 1/2 in (4) gives Equation (5). Equation (5) is a linear, non-homogeneous differential equation with solution given by Equation (6), which introduces the function ω(t). Equation (6) describes the evolution of the stock of skill i given the initial stock of that skill, the rate of human capital depreciation, the capability to produce human capital, the perceived productivity of the skill, and the market discount rate. It follows from (6) that if an individual's initial stocks of the two skills are identical, then he will end up with more of the skill that is more valuable to him.
Understanding the behavior of the function ω (t) will be critical for understanding the evolution of overconfidence. Thus, our first result characterizes the function ω (t). Lemma 1. The function ω (t) satisfies four properties: (i) ω (0) = 0, (ii) ω (T) > 0, (iii) ω (t) is concave, and (iv) ω (t) attains its maximum at t * , with t * ∈ (0, T). A limiting case is one where the capability to produce human capital is very low and initial talent is almost all that matters. All the findings in the paper also apply to this case.
Skill Comparisons
Assume that initial skills K i (0), i = 1, 2, the capability to produce human capital, A, and the perceived productivity of skills, λ, are independently distributed. Let λ 1 = λ and λ 2 = 1 − λ, and assume that λ has a symmetric beta distribution (the results in the paper are valid for more general distributions of λ). Finally, assume that A has a distribution with support on [A̲, Ā] with 1 ≤ A̲ < Ā, and that initial skills have a distribution with support on R + .
An individual with initial skills K(0), capability to produce skills A, and perceived productivity of skills λ measures his ability at time t as in Equation (7), where φ(t; K(0), A, λ) denotes the optimal stocks of skills at time t as a function of the parameters K(0), A, and λ. Making use of (6), one obtains Equation (8). The same individual measures the expected ability of the population at time t analogously, where K̄ i (t), i = 1, 2, denote the average skill levels in the population at time t. Making use of (6), one obtains Equation (9), where K̄ i (0), i = 1, 2, denote the average initial skills in the population. Following Santos-Pinto and Sobel (2005), let Equation (10) define the difference between an individual's ability and the expected ability of the population, where ability is measured according to that individual's perceived productivity. Refer to D * (t; K(0), A, λ) as an individual's ability gap at time t.
Substituting (8) and (9) into (10) gives us Equation (11). It follows directly from (i), (ii), and (iii) in Lemma 1 that ω(t) > 0 for t ∈ (0, T]. This implies that an individual's ability gap at time t increases in A. The ability gap is always positive for individuals who have high initial skills and a high capability to produce human capital. The ability gap can be negative for individuals who have a low capability to produce human capital. Since initial skills K i (0), i = 1, 2, the capability to produce human capital, A, and the perceived productivity of skills, λ, are independently distributed, the expected ability gap of a cohort at time t is proportional to E(λ − 0.5)² E(A^α) ω(t). The expected ability gap is positive for all t ∈ (0, T] since E(λ − 0.5)² > 0, E(A^α) > 0, and ω(t) > 0 for t ∈ (0, T]. Thus, the cohort exhibits overconfidence during the entire working life.
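As a quick sanity check on the sign of the E(λ − 0.5)² term, the sketch below (a hypothetical illustration; the specific Beta parameters are assumptions, since the paper only requires a symmetric beta distribution) computes E[(λ − 1/2)²] for several symmetric Beta(a, a) distributions. Because the mean of a symmetric beta is 1/2, this expectation equals Var(λ) and is strictly positive:

```python
# Illustration: for symmetric Beta(a, a), E(lambda) = 1/2, so
# E[(lambda - 1/2)^2] = Var(lambda) > 0 -- the term that makes the
# expected ability gap positive for t in (0, T].

def beta_var(a: float, b: float) -> float:
    """Variance of a Beta(a, b) distribution."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

for a in (0.5, 1.0, 2.0, 5.0):
    # Symmetric case: second central moment around the mean 1/2.
    print(f"Beta({a}, {a}): E[(lambda - 1/2)^2] = {beta_var(a, a):.4f}")
```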
Results
The main result of the paper describes the pattern of overconfidence over time implied by human capital accumulation and subjective assessments when there is a positive rate of skill depreciation.

Proposition 1. If δ ∈ (0, 1], then the expected ability gap is increasing in t for 0 < t < t * and decreasing in t for t * < t < T, where t * = arg max ω(t).
Proposition 1 tells us if skills depreciate, then human capital accumulation and subjective assessments imply that a cohort's overconfidence increases at the beginning of working life, reaches its peak at t * , and then decreases until the end of working life. Since the intuition for this result was already discussed in Section 1, let us now discuss the main assumptions behind it.
Clearly, the assumption of heterogeneity in perceived skill productivity together with the assumption that individuals make subjective assessments are the ones that are responsible for an increase in overconfidence in the earlier stages of working life. Support for these assumptions can be found in [8] and will not be discussed here.
Let us then discuss the role of the assumption of positive skill depreciation. One can show that if there is no skill depreciation, then overconfidence, measured by the expected ability gap, always increases over time. To see this, notice that overconfidence reaches its peak at t * , where t * = arg max ω(t). From the definition of ω(t) and Lemma 1, t * is the solution to Equation (12). Solving (12) for t gives Equation (13). If we set δ = 0 in (13), then t * = T. Thus, if human capital does not depreciate, then the overconfidence of a cohort always increases over time.
Taking a linear approximation of t * around δ = 0 gives Equation (14). By inspection of (14), we see that if the market discount rate is close to one and the rate of skill depreciation is close to zero, then ((r − δ)/r)T is a good approximation to t * . Thus, if the discount rate is close to one and the rate of skill depreciation is close to zero, then the overconfidence of a cohort reaches its peak close to the end of working life. Simulations of the model with different parameter values confirmed this. For example, with T = 60, r = 0.8, and δ = 0.1, we have t * = 54.105, while the approximation gives ((r − δ)/r)T = (0.7/0.8) × 60 = 52.5. The approximation also shows that overconfidence should peak earlier in activities where skill depreciation is high (e.g., computer programming, playing a musical instrument) than in activities where skill depreciation is low (e.g., typing, sorting, and flipping through files), since ((r − δ)/r)T decreases with δ.
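The approximation is simple enough to check numerically. A minimal sketch, reusing the example values T = 60, r = 0.8, δ = 0.1 quoted above:

```python
# Approximate peak of cohort overconfidence, t* ~ ((r - delta) / r) * T.
def t_star_approx(T: float, r: float, delta: float) -> float:
    return (r - delta) / r * T

# Paper's example: exact t* = 54.105, approximation 52.5.
print(round(t_star_approx(60, 0.8, 0.1), 6))  # 52.5

# Higher skill depreciation moves the approximate peak earlier in working life.
print(t_star_approx(60, 0.8, 0.3) < t_star_approx(60, 0.8, 0.1))  # True
```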
Another implication of the model is stated formally in the next proposition.
Proposition 2.
If α ∈ (0, 1), then a mean preserving spread in the distribution of A reduces the expected ability gap for all t.
Several studies show that heterogeneity in the capability to produce human capital is key for human capital accumulation models to explain the evolution of earnings over working life. According to [29], "(..) mean earnings and measures of earnings dispersion and skewness all increase in US data over most of the working life-cycle for a typical cohort as the cohort ages." In fact, labor economists who use human capital accumulation models to explain the evolution of earnings over working life agree that the assumption that individuals have different capabilities to produce human capital is the only way to explain the increase in earnings dispersion over working life. For a good discussion of this topic see [30].
Proposition 2 shows that heterogeneity in capability to produce human capital constrains the degree of overconfidence. Everything else constant, an increase in heterogeneity in capability to produce human capital lowers overconfidence at any point in time. This happens because when individuals' capability to produce human capital is more variable, the chance of moving up in relative rankings through skill investment decreases. This result is the equivalent of Proposition 9 in [8]. The novelty here is the interpretation of the result in the context of a human capital accumulation model.
It follows from Proposition 2 that, everything else equal, overconfidence should be smaller in activities where the distribution of income is more dispersed. In other words, controlling for all other variables that have an impact on overconfidence (average income, the number of skills required in different activities, experience, etc.), we should expect to find smaller levels of overconfidence if we ask individuals to evaluate their skills in activities where the distribution of income is more dispersed. One implication of this result is that if overconfidence leads to poor decision making, then this effect will be small in activities where income is very dispersed but large in activities where income is not very dispersed. For example, [31] finds that 94% of college instructors think their teaching ability is above average. If college instructors' income does not become dispersed over working life, then the model implies that their high level of overconfidence will persist. If college instructors' overconfidence leads them to make lower investments in teaching skills, then there can be adverse welfare consequences.
Proposition 2 only holds when α ∈ (0, 1) . This assumption implies that there are strong diminishing returns to the production of skills from increases in the capability to produce human capital. It also guarantees that the expected ability gap is a concave function of the capability to produce human capital, and this implies that an increase in variability in the distribution of A reduces the expected ability gap. If α ∈ (1, 2) , that is, there are weak diminishing returns to the production of skills from increases in the capability to produce human capital, then the opposite result would follow, that is, a mean preserving spread in the distribution of A increases the expected ability gap for all t.
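The Jensen's-inequality logic behind this sign reversal can be seen in a minimal two-point example (the numerical values are hypothetical, chosen only so that both distributions of A share the same mean):

```python
# E(A^alpha) under a degenerate distribution (A = 2 for sure) versus a
# mean-preserving spread (A = 1 or 3 with equal probability).
def expected_A_alpha(values, probs, alpha):
    return sum(p * v ** alpha for v, p in zip(values, probs))

concentrated = expected_A_alpha([2.0], [1.0], 0.5)
spread = expected_A_alpha([1.0, 3.0], [0.5, 0.5], 0.5)
print(spread < concentrated)  # True: alpha in (0, 1), spread lowers E(A^alpha)

# With alpha in (1, 2) the ranking flips, as stated in the text.
print(expected_A_alpha([1.0, 3.0], [0.5, 0.5], 1.5)
      > expected_A_alpha([2.0], [1.0], 1.5))  # True
```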
As we have seen, the model shows us that the process of human capital accumulation together with subjective assessments imply that, for the majority of individuals in a cohort, overconfidence should first increase and then decrease over time. However, for a minority, overconfidence decreases over most of working life. This is stated precisely in the next result.
Proposition 3 tells us that individuals who are initially very talented but who have low capability to produce human capital will exhibit decreasing overconfidence over time for most of their working life. We can state one additional result.
Proposition 4.
An increase in the market discount rate r reduces the expected ability gap for all t.
If the market discount rate r is large the future is heavily discounted, and so individuals devote fewer resources to producing human capital. If that is the case, then the correlation between productivity and final skills will be smaller and so will overconfidence.
Applications
This section discusses two applications of the model. It shows that the model can help make sense of data on overconfidence, experience, and trading activity in financial markets. It also shows how an extension of the model can explain why poker players' perceptions of relative skill become more inflated over time, whereas those of chess players become more accurate.
Overconfidence, Experience, and Trading Activity
The model can shed light on the question of gender and trading activity, which has been the focus of a number of studies starting with [32]. The argument in [32] goes as follows. Overconfidence is one of the most prominent explanations for why some individuals trade more frequently than others in financial markets. If men are more overconfident than women, then men should trade more than women. Consistent with this prediction, [32] analyzes the common stock investments of men and women from 1991 to 1997 using account data for over 35,000 households from a large discount brokerage firm and finds that men trade 45% more than women.
The human capital accumulation and subjective assessments model offers an alternative explanation for why men trade more than women in [32]. Suppose that men and women are equally likely to be overconfident, that trading experience increases overconfidence, and that overconfidence increases trading activity. If that is the case, then if men have more trading experience than women, men should trade more than women. In fact, according to [32], "The differences in self-reported experience by gender are quite large. In general, women report having less investment experience than men." Ref. [33] finds that the switch from phone-based trading to online trading activity is associated with greater trading activity. Furthermore, they report a dramatic erosion in the performance of investors after they switch to online trading. They argue that investors who switch to online trading are likely to be more overconfident after going online than before. This happens because these investors usually experience unusually strong performance prior to the switch and low performance after. According to [33], the strong prior performance leads to overconfidence via the self-serving attribution bias. The human capital accumulation and subjective assessments model offers an alternative explanation for this finding. Suppose that trading experience increases overconfidence and that overconfidence increases trading activity. If this is the case, then if online investors have more trading experience than other investors, online investors should trade more. In fact, in [33], online investors report having more trading experience than other investors.
Ref. [34] finds that investors who think they are better than average, in terms of investment skills or past performance, trade more. Ref. [35] confirms this prediction using an asset market experiment. Moreover, [35] shows that overconfidence leads to increased trading activity and that individuals with more trading experience tend to trade more. Interestingly, in [35], women had about the same level of both overconfidence and trading activity as did men. Thus, contrary to the findings in [32], there is little evidence that overconfidence and trading activity are in any meaningful way related to gender.
Overconfidence of Poker and Chess Players
The model also assumes that skills have different perceived productivities for different individuals. It would be absurd to pretend that this assumption applies to all settings. It does not. In many activities each skill has the same productivity for all individuals. Even in that case, we cannot rule out the influence of skill investment and subjective assessments in determining individuals' perceptions of relative skill. In fact, it is possible to incorporate skill investment and subjective assessments into a Bayesian learning model where each skill has the same productivity across all individuals. For example, one could assume that the process generating income as a function of skills is given by an equation in which λ j , j = 1, ..., J, represents the productivity of skill j and ε(t) is a random term. Individuals start with subjective prior beliefs about the productivity of skills and learn about the true productivity over time. In this case, individual i's perception of the income-generating process would replace each λ j with λ i j (t), the expected productivity of skill j from the perspective of individual i, a function of individual i's past income observations. In this model individuals choose investments in skills to maximize the sum of their discounted perceived disposable income over the working life. Individuals observe their own income at each period in time and use that information to update their beliefs about the productivity of skills. After updating their beliefs about the productivity of skills, individuals use their own beliefs to compare their skills to the skills of others. Note that if individuals had full information about the income of their peers, they could use that information and individuals' assessments would no longer be subjective.
In a model like this, individuals will use skill investments to learn about the technology, that is, there is learning by experimentation. This complicates the analysis substantially. The pattern of overconfidence over time will depend critically on the variability of the random term. If the random term has a large variance, then learning about λ will take time, and the impact of skill investment and subjective assessments will persist. In this case, overconfidence will increase with experience over most of an individual's working life. By contrast, if the random term has a small variance, then learning about λ is fast, and the impact of skill investment and subjective assessments will vanish quite rapidly.
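A back-of-the-envelope version of this point (a sketch under standard Gaussian-learning assumptions, not the paper's formal model): with noise variance σ² and a diffuse prior, the posterior variance of a productivity parameter after n income observations is roughly σ²/n, so the experience needed to reach a given precision scales linearly with σ²:

```python
import math

def obs_needed(sigma2: float, target_var: float) -> int:
    """Smallest n such that sigma^2 / n <= target_var."""
    return math.ceil(sigma2 / target_var)

# High-noise ("poker-like") activity: learning the productivities is slow.
print(obs_needed(25.0, 0.5))  # 50

# Low-noise ("chess-like") activity: a handful of observations suffice.
print(obs_needed(1.0, 0.5))   # 2
```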
Ref. [24] finds that poker players' overestimation of relative performance increases with experience, whereas chess players' forecasts of relative performance become more accurate with experience. If poker is an activity where random factors are very important in determining outcomes, and poker players can improve different skills and make subjective assessments, then it may take a long time until experience with poker tournaments reduces poker players' overconfidence. By contrast, if chess is an activity where random factors are not so important in determining outcomes, and chess players can improve different skills and make subjective assessments, then playing a few chess tournaments might be enough to reduce chess players' positive views about their relative skill.
Discussion
This section discusses the implications of relaxing the main assumptions of the model and alternative explanations for the evolution of overconfidence.
Main Assumptions
To clarify the predictions of the model, consider the implications of dropping its two main assumptions, skill acquisition and subjective assessments, one at a time. Suppose first that individuals cannot increase their skills but make subjective assessments. Since, by assumption, initial skills and productivity of skills are independently distributed, individuals should, on average, have an accurate view of their relative ability. It also follows that each individual's self-confidence does not change over time. Now, suppose that individuals do not make subjective assessments but are able to increase their skills. In this case, individuals become better over time in absolute terms, but all individuals should have an accurate view of their relative ability at any period in time.
An implicit assumption of the model is that individuals do not use any empirical observations about the income of their peers to make comparisons. This assumption is not valid for activities where individuals receive unambiguous information about the income of their peers.
Alternative Explanations
There are alternative explanations that can account for some of the evidence on the evolution of overconfidence discussed in Section 3. These alternative explanations do not require that individuals are able to increase their skills. They also do not rely on individuals making subjective assessments.
Consider a situation where individuals differ in their ability at a task. To make things simple, suppose each individual can have either high or low ability and that there is a selection effect that rewards high ability. For example, high-ability individuals survive with probability 75% and the low-ability ones with probability 25%. Furthermore, suppose that every time an individual is wiped out he is replaced by an (inexperienced) individual (who may be of high or low ability with 50% probability each). In this case, the more experienced individuals have, on average, a higher ability than the less experienced individuals. Thus, self-confidence increases with experience. It is easy to see that, without any added feature, this description of behavior implies that there is no overconfidence in the population.
One simple way to generate overconfidence is to assume that the individuals who survive compare themselves against the wrong pool. For example, experienced individuals may overestimate the percentage of inexperienced individuals in the population. If that is the case, and assuming that inexperienced individuals compare themselves against the correct pool, then, on average, individuals will be overconfident, and cross-sectional overconfidence will increase with experience. If individuals who survive have an accurate assessment of the composition of the population, and the inexperienced individuals underestimate the percentage of experienced individuals in the population, then there would still be overconfidence in the population, but this would decrease with experience. If there are strong selection effects towards the survival of the best mutual fund managers or foreign exchange traders, then this explanation can account for the evolution of overconfidence displayed by these individuals.
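A toy version of this wrong-pool story can be written down directly (the survival probabilities come from the example above; scoring ability as high = 1 and low = 0, and the 50/50 split between experienced and inexperienced individuals, are illustrative assumptions):

```python
# Selection: high-ability individuals survive with prob. 0.75, low-ability
# individuals with prob. 0.25, starting from a 50/50 ability mix.
def survivor_share_high(p_high: float = 0.75, p_low: float = 0.25) -> float:
    num = 0.5 * p_high
    return num / (num + 0.5 * p_low)

mean_exp = survivor_share_high()  # mean ability of experienced = P(high) = 0.75
mean_inexp = 0.5                  # inexperienced are a fresh 50/50 draw

# True pool: half experienced, half inexperienced.
true_pool_mean = 0.5 * mean_exp + 0.5 * mean_inexp  # 0.625

# Experienced individuals who wrongly benchmark against an all-inexperienced
# pool perceive a larger gap than the true one: cross-sectional overconfidence.
perceived_gap = mean_exp - mean_inexp   # 0.25
true_gap = mean_exp - true_pool_mean    # 0.125
print(perceived_gap > true_gap)  # True
```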
Another alternative explanation is that overconfidence causes experience. This happens if overconfidence leads to better relative performance, and better relative performance (through a selection effect) leads to more experience. For example, overconfidence may lead to better relative performance if it reduces stress [36]. Overconfidence may also lead to better relative performance when it has strategic effects on others that are beneficial to the self. For example, an overconfident person may cause a more favorable impression on his superiors and so may be promoted more quickly. Alternatively, an overconfident person may look more aggressive to competitors, and this may give that person a strategic edge [18,19]. Each of the variations of this second explanation may account for the pattern of overconfidence displayed by mutual fund and foreign exchange traders. However, this explanation is not able to account for the pattern of overconfidence displayed by airplane pilots.
Finally, experience may cause overconfidence through self-serving bias in causal attributions [9]. Suppose that, before engaging in a job, individuals have incomplete information about their ability, but they know that they can be of either high or low ability. Individuals learn about their ability over time by observing a series of outcomes that are correlated with ability. If this is the case, then, on average, inexperienced individuals will develop overconfidence in their abilities. However, as experience with the task accumulates, and provided that individuals are not too biased, they will eventually learn their true ability. In other words, when self-serving bias is not too large, this model predicts that overconfidence first increases and then decreases with experience. Of course, if self-serving bias is very large, then overconfidence will always increase with experience.
Conclusions
This paper shows that the processes of human capital accumulation, skill depreciation, and subjective assessments imply that individuals' perceptions of skill do not have to become more accurate over time; on the contrary, they may become increasingly inflated. Moreover, the model predicts that overconfidence of a cohort first increases and then decreases over the cohort's working life. This prediction is consistent with the "beginner's bubble hypothesis" in [6].
The explanation in this paper is an additional contribution to the literature on the evolution of overconfidence. Explaining the evolution of overconfidence across different activities is beyond the scope of this paper and is left for future research. Still, the paper shows that some of the ingredients that should be part of such an analysis are the possibility of self-selection into an activity, the presence or absence of skill investment opportunities, the possibility of making subjective assessments, and the frequency and quality of information about an individual's performance at the activity.
Funding: This research received no external funding.
Conflicts of Interest:
The author declares no conflict of interest.
Derivation of Equation (2)
The Hamiltonian for the human capital accumulation problem yields the optimality conditions (A1) for the control variables and (A2) for the state variables. Solving (A1) for µ i (t), taking logs, differentiating with respect to t, and making use of (A1) and (A2) gives, after simplification, Equation (2). Q.E.D.

Derivation of Equation (3)

Equation (2) is a Bernoulli differential equation with constant coefficients and can be solved by performing a change of variable. Letting W i (t) = (I i (t)) 1−b , Equation (2) becomes (A3), which is a first-order nonhomogeneous linear differential equation. The solution to (A3) is (A4), where C i is a constant. At the end of an individual's working life investment in human capital must be zero; imposing this condition and solving for C i gives (A5). Substituting (A5) into (A4) yields Equation (3). Q.E.D.
Derivation of Equation (6)

Rearranging (5), the solution to this linear differential equation is

K i (t) = e^{−δt} [ C + ∫ (1/2) (A^α λ i /(r + δ)) (1 − e^{−(r+δ)(T−t)}) e^{δt} dt ] = C i e^{−δt} + …,

where C i is a constant. At the start of an individual's working life the stock of skill i is given by K i (0), which pins down the constant through a condition involving the term (A^α λ i /((r + δ)(r + 2δ))) e^{−(r+δ)T} (Equation (A6)). Solving for C i gives (A7). Substituting (A7) into (A6) yields Equation (6). Q.E.D.
Proof of Proposition 1. The change in the expected ability gap over time is completely determined by the change in ω(t) over time. Thus, Lemma 1 implies that the expected ability gap is increasing with t for 0 < t < t * and decreasing with t for t * < t < T, where t * = arg max ω(t). Q.E.D.
Proof of Proposition 2. The proof is a direct application of Proposition 9 in Santos-Pinto and Sobel (2005). If α ∈ (0, 1) then D * (t; K(0), A, λ) is concave in A and so a mean preserving spread in the distribution of the capability to produce human capital decreases E A D * (t; K(0), A, λ). Q.E.D.
Proof of Proposition 3. From (10) we see that K i (0) ≥ K̄ i (0) implies that the first two terms in (10) are nonnegative. We also see that A^α [λ² + (1 − λ)²] − (1/2)E(A^α) < 0 implies that the third term in (10) is negative. For t ∈ (0, t * ), an increase in t increases the contribution of the third term and reduces the contribution of the first two terms to the individual's ability gap. Q.E.D.
Proof of Proposition 4. By Lemma 1, ω(t) is non-negative. The numerator in the second term is non-negative. The numerator in the third term is also non-negative, since (T − t)/T ≥ e^{−(r+2δ)t} for t ∈ [0, T). We also have that
Impact of enhanced compliance initiatives on the efficacy of rosuvastatin in reducing low density lipoprotein cholesterol levels in patients with primary hypercholesterolaemia
Background: The effectiveness of lipid-lowering medication critically depends on the patients’ compliance and the efficacy of the prescribed drug. Objectives: The primary objective of this multicentre study was to compare the efficacy of rosuvastatin with or without access to compliance initiatives, in bringing patients to the Joint European Task Force’s (1998) recommended low-density lipoprotein cholesterol (LDL-C) level goal (LDL-C, <3.0 mmol/L) at week 24. Secondary objectives were comparison of the number and percentage of patients achieving European goals (1998, 2003) for LDL-C and other lipid parameters. Patients and methods: Patients with primary hypercholesterolaemia and a 10-year coronary heart disease risk of >20% received open label rosuvastatin treatment for 24 weeks with or without access to compliance enhancement tools. The initial daily dosage of 10 mg could be doubled at week 12. Compliance tools included: a) a starter pack for subjects containing a videotape, an educational leaflet, a passport/goal diary and details of the helpline and/or website; b) regular personalised letters to provide message reinforcement; c) a toll-free helpline and a website. Results: The majority of patients (67%) achieved the 1998 European goal for LDL-C at week 24. 31% required an increase in dosage of rosuvastatin to 20 mg at week 12. Compliance enhancement tools did not increase the number of patients achieving either the 1998 or the 2003 European target for plasma lipids. Rosuvastatin was well tolerated during this study. The safety profile was comparable with other drugs of the same class. 63 patients in the 10 mg group and 58 in the 10 mg Plus group discontinued treatment. The main reasons for discontinuation were adverse events (39 patients in the 10 mg group; 35 patients in the 10 mg Plus group) and loss to follow-up (13 patients in the 10 mg group; 9 patients in the 10 mg Plus group). 
The two most frequently reported adverse events were myalgia (34 patients, 3% respectively) and back pain (23 patients, 2% respectively). The overall rate of temporary or permanent study discontinuation due to adverse events was 9% (n = 101) in patients receiving 10 mg rosuvastatin and 3% (n = 9) in patients titrated up to 20 mg rosuvastatin. Conclusions: Rosuvastatin was effective in lowering LDL-C values in patients with hypercholesterolaemia to the 1998 European target at week 24. However, compliance enhancement tools did not increase the number of patients achieving any European targets for plasma lipids.
Introduction
Sponsor of the study: AstraZeneca AG, Grafenau 10, CH-6301 Zug, Switzerland

Clinical trials of lipid modification either by diet or drugs have shown that CHD risk associated with elevated cholesterol can be substantially reduced [5].
Guidelines for the management of risk factors influencing CHD were initially based on large-scale epidemiological surveys conducted in the USA. After similar studies in other countries it became apparent that there were significant inconsistencies in the assessment of risk for CHD, and in the LDL-C or TC threshold levels indicating lipid-lowering treatment [6]. In 1998 the Second Joint Task Force of European and other Societies issued recommendations on goals for LDL-C (<3.0 mmol/L) and TC levels (<5.0 mmol/L) [5]. These recommendations applied to patients with CHD (or other atherosclerotic disease) and patients with a 10-year CHD risk ≥20% either at their present age or when projected to age 60 [5].
New guidelines were issued by the Third Joint Task Force of European and other Societies in 2003 [7]. For asymptomatic patients with a 10-year risk of a fatal coronary event <5%, these guidelines contain the same goals for plasma LDL-C (<3.0 mmol/L) and TC (<5.0 mmol/L) as those recommended in 1998 [5,7]. However, for patients with clinically established cardiovascular disease (CVD), diabetes or a 10-year risk of a fatal coronary event >5%, the new guidelines recommend lower goals for LDL-C (<2.5 mmol/L) and TC levels (<4.5 mmol/L) [7].
Statins (HMG-CoA reductase inhibitors) have been shown to be highly effective in reducing the level of LDL-C and the rate of major coronary events, and in improving overall survival in patients with CHD [8-11]. Moreover, statins are well tolerated and are cost-effective in secondary prevention of CHD [12-14]. Rosuvastatin is a highly potent statin that effectively reduces LDL-C in patients with hypercholesterolaemia [15].
Compliance is a complex behavioural process and is strongly influenced by the environment, the healthcare provider's practice, and the care delivery system [16,17]. Several clinical trials have shown suboptimal compliance with lipid-lowering therapy with statins [18-20]. Achieving a high level of compliance presupposes that the patient has the requisite knowledge, motivation and resources to follow treatment recommendations. Several approaches may be considered to improve compliance, the most promising being combinations of interventions involving, amongst other things, patient education, self-monitoring, social support and telephone follow-up [21-24].
The primary objective of this study was to compare the efficacy of treatment with rosuvastatin, with or without access to compliance initiatives, in bringing patients to the Joint European Task Force (1998) recommended LDL-C goal (<3.0 mmol/L).
Trial design
This was a cluster-randomised, multicentre, open-label, parallel-group study of 24 weeks' duration conducted throughout Switzerland. A cluster randomisation procedure was used and all patients in each centre were assigned to the same treatment group.
Patients
Patients (≥18 years) attending primary care physician practices for treatment of primary hypercholesterolaemia with a 10-year CHD risk >20% (as defined by the 1998 European Guidelines [5]), CHD or other atherosclerotic disease were eligible for the study. The patients were statin-naïve or on an accepted starting dose of lipid-lowering medication, which had proved ineffective in reaching the target level of LDL-C for that dose. Statin-naïve patients (with a fasting LDL-C level >3.5 mmol/L) were required to complete dietary counselling before entering the study. Patients who switched from accepted starting doses of other lipid-lowering medication (with a fasting LDL-C level >3.1 mmol/L) were directly enrolled in the study. Another inclusion criterion was a fasting triglyceride (TG) level ≤4.52 mmol/L.
Patients were excluded from the study if they were known to have heterozygous or homozygous familial hypercholesterolaemia, type III hyperlipoproteinaemia (familial dysbetalipoproteinaemia), or secondary hypercholesterolaemia. Those known to have hypersensitivity reactions or serious adverse effects (e.g. myopathy) in relation to other statins were also excluded. Pregnant or breast-feeding women were excluded, whilst women of child-bearing potential were asked to use adequate contraception during the study. Other exclusion criteria included unstable cardiovascular disease, uncontrolled diabetes, active liver disease, renal impairment as defined by a serum creatinine level >220 µmol/L, any medical condition requiring cyclosporine therapy, and a history of alcohol and/or drug abuse.
The study was conducted in accordance with Good Clinical Practice Guidelines and local law. The study protocol was approved by the appropriate ethics committees. Written informed consent was obtained from each patient.
Study procedure
Patients received a daily oral treatment with either rosuvastatin alone (10 mg group) or with rosuvastatin and access to compliance enhancement tools (10 mg Plus group) for 24 weeks. Patients were assessed at week 4 and at week 12 to review fasting levels of TC, LDL-C, HDL-C, and triglycerides. For patients not achieving the 1998 European target for LDL-C at week 12 the daily dose of rosuvastatin was increased to 20 mg for the remainder of the study. Patients in the 10 mg Plus group received a starter pack containing a videotape and educational leaflets concerning their condition. These patients also received newsletters at regular intervals and had access to both a telephone helpline and an Internet website, all designed to reinforce the initial message in the starter pack.
Rosuvastatin tablets were dispensed to all patients during the first study visit (week 0) and at week 12. Patients were asked to return all unused rosuvastatin tablets and containers to the investigator at week 12 and week 24. The patient's compliance was determined from the difference between the dispensed and the returned tablets in comparison with the number of days between the visits. Patients who dropped out were not considered for the compliance assessment; only the data from patients who completed the whole protocol were taken into account.
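As an illustrative sketch of the tablet-count rule described above (the function name and numbers below are hypothetical, not taken from the protocol), compliance can be computed as the number of tablets presumed taken divided by the number of days between visits:

```python
def compliance_percent(dispensed, returned, days_between_visits):
    """Tablet-count compliance: tablets presumed taken per expected daily dose.

    As in the study, unreturned tablets are assumed to have been taken,
    which can push the estimate to (or even above) 100%.
    """
    taken = dispensed - returned
    return 100.0 * taken / days_between_visits  # one tablet expected per day

# Hypothetical example: 90 tablets dispensed, 3 returned, 84 days between visits
print(compliance_percent(90, 3, 84))
```

The assumption that unreturned tablets were taken is what allows mean compliance values of 100% (or more), a point revisited in the discussion.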
Efficacy and safety endpoints
The primary efficacy endpoint was the number and percentage of patients in both treatment groups who reached the 1998 European goal for LDL-C (<3.0 mmol/L) after 24 weeks of therapy. Secondary efficacy endpoints included the number and percentage of patients within the 1998 European goal for LDL-C at week 12, the number and percentage of patients within the 1998 European goal for TC (<5.0 mmol/L) at week 12 and week 24, and the number and percentage of patients within the 2003 European goals for LDL-C (<2.5 mmol/L) and TC (<4.5 mmol/L) at week 12 and week 24. Other secondary efficacy endpoints were the number of patients with a dose increase to 20 mg rosuvastatin at week 12; the percentage change in LDL-C, TC, HDL-C, and TG between baseline and week 24; and the patient's compliance as measured by tablet count for both treatment groups.
Safety assessment included adverse events reporting, clinical chemistry measurements and physical examinations.
Statistical methods
Efficacy analyses were performed on data from the intention-to-treat (ITT) population, which included patients who received ≥1 dose of study drug and had ≥1 post-baseline lipid value. The last observation was carried forward (LOCF) for missing efficacy data at week 24. All analyses performed at week 12 were based upon observed data (OC). All patients who received at least one dose of rosuvastatin were included in the safety population. Comparisons were performed between treatment groups using logistic regression analysis. The number of patients achieving the EAS LDL-C target was analysed using a logistic model, with terms included for patient type (naïve or switched), treatment group (rosuvastatin alone or rosuvastatin plus compliance tools) and the interaction between treatment group and patient type. The interaction term was found to be not statistically significant (change in −2 log L = 0.608, p = 0.4356) and was dropped from the final model. The adjusted odds ratios derived from the final model and their 95% CIs (estimated from the likelihood ratio method) are shown with corresponding p-values. Statistical significance was accepted at the 5% level.
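A minimal sketch of the LOCF imputation rule used for the week-24 analysis; the data values below are made up for illustration:

```python
def locf(values):
    """Last observation carried forward: replace each missing value (None)
    with the most recent non-missing observation, if any."""
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Hypothetical LDL-C series (weeks 0, 4, 12, 24); patient lost after week 4
print(locf([3.9, 3.1, None, None]))  # [3.9, 3.1, 3.1, 3.1]
```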
Patient demographics
A total of 1128 patients were randomised to the two treatment groups: 601 patients to rosuvastatin alone (10 mg group) and 527 patients to rosuvastatin with access to compliance enhancement tools (10 mg Plus group). All 1128 patients were part of the safety population. A total of 126 patients failed to provide a lipid sample at baseline or at least one lipid sample post-baseline and were excluded from the efficacy analyses. Data for 1002 patients (531 patients in the 10 mg group; 471 patients in the 10 mg Plus group) were therefore available for the efficacy analyses (ITT population). Demographic characteristics of the ITT population entering the study are shown in Table 1. Patients in both treatment groups received rosuvastatin for similar periods of time. The mean duration of treatment was 177 days for patients in the 10 mg group and 174 days for those in the 10 mg Plus group. At week 12, 309 patients (31%) in the ITT population (162 patients in the 10 mg group; 147 patients in the 10 mg Plus group) were titrated up to 20 mg rosuvastatin.
Primary efficacy analysis
In the 10 mg group, 67% of patients (358) achieved the 1998 European LDL-C goal (<3.0 mmol/L) at week 24 (LOCF) compared to 61% of patients (286) in the 10 mg Plus group, as shown in Figure 1. A patient assigned to the 10 mg Plus treatment group had 0.74 times the odds (95% CI 0.60-0.97; p = 0.032) of achieving the target compared with a patient assigned to the 10 mg (alone) group.
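As a consistency check, the unadjusted odds ratio can be recomputed directly from the reported counts; it comes out close to the adjusted value of 0.74 from the logistic model:

```python
def odds_ratio(events_a, total_a, events_b, total_b):
    """Unadjusted odds ratio for group A vs. group B achieving the goal."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# 10 mg Plus: 286 of 471 at goal; 10 mg alone: 358 of 531 at goal
print(round(odds_ratio(286, 471, 358, 531), 2))  # 0.75 (adjusted model: 0.74)
```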
Secondary efficacy analyses
In the 10 mg group, 62% (316 patients) achieved the 1998 European LDL-C goal at week 12 (OC), compared to 61% (272 patients) in the 10 mg Plus group (Figure 1); the difference was not statistically significant. Patients in both treatment groups were found to have the same likelihood of achieving either the 1998 European LDL-C goal or the 2003 European LDL-C goal (<2.5 mmol/L) at week 12 and week 24. Similar findings were obtained for the 1998 European TC goal (<5.0 mmol/L) at week 12 and week 24, and for the 2003 European TC goal (<4.5 mmol/L) at week 12 and week 24.
There were favourable changes in lipid levels from baseline to week 24 (Figure 2). Mean percentage decreases from baseline to week 24 in LDL-C, TC and TG levels were similar for both treatment groups (not statistically significant). LDL-C, TC and TG decreased in the 10 mg group by 37.3%, 26.4% and 11.4% respectively, and in the 10 mg Plus group by 37.2%, 26.1% and 9.8% respectively. In general, higher reductions in LDL-C, TC and TG levels were reported for statin-naïve patients in comparison with those who switched from other lipid-lowering medication. A statin-switched patient had 0.38 times the odds (95% CI 0.29-0.51; p = 0.0001) of achieving the EAS LDL-C target compared with a statin-naïve patient. In the 10 mg group, 75.5% of statin-naïve vs. 58.8% of switched patients achieved the primary endpoint, compared to 66.4% of statin-naïve vs. 54.8% of switched patients in the 10 mg Plus group.
Patient compliance with treatment was reported as high, with mean values of 97% (10 mg group) and 100% (10 mg Plus group) between week 12 and week 24.
Safety and tolerability
Table 2 shows the common adverse events occurring in more than 1% of patients. The two most frequently reported adverse events were myalgia (34 patients; 3%) and back pain (23 patients; 2%). The overall rate of temporary or permanent study discontinuation due to adverse events was 9% (n = 101) in patients receiving 10 mg rosuvastatin and 3% (n = 9) in patients titrated up to 20 mg rosuvastatin.
74 patients reported serious adverse events (SAEs). The two most common SAEs were myocardial infarction (4 patients) and angina pectoris (3 patients). One patient died from a sudden cardiac event while receiving 20 mg rosuvastatin; according to the investigator, the death of this patient was not treatment-related. 6 patients experienced SAEs for which a causal relation with rosuvastatin therapy was suspected. These SAEs were hepatitis (1 patient), creatinine elevation (2 patients), dyspnoea (1 patient), muscle pain (1 patient), and blood pressure elevation (1 patient).

Figure 2. Mean percentage change in plasma lipids between baseline and week 24 in patients treated with rosuvastatin alone (10 mg group) or in combination with compliance enhancement tools (10 mg Plus group).

Changes in clinical laboratory parameters were generally minor. Three patients experienced clinically important creatinine elevations (>upper limit of normal and >100% over baseline). Three patients had clinically important ASAT and ALAT elevations (>3× the upper limit of normal). There were no patients with a clinically important increase of creatine kinase (>10× upper limit of normal).
There were no clinically significant changes in body weight, blood pressure and heart rate of patients during the study.
Discussion
Statins (HMG-CoA reductase inhibitors) are effective therapeutic agents for reducing plasma lipid levels and lowering cardiovascular morbidity and mortality in patients with CHD. Evidence from multiple clinical trials suggests that rosuvastatin offers the highest lipid-lowering efficacy at the lowest dose for the treatment of patients with hyperlipidaemia [25-27]. It is suggested that this lipid-lowering efficacy is based on the higher affinity of rosuvastatin to HMG-CoA reductase compared with other statins including atorvastatin, simvastatin and pravastatin [28].
In this study up to 67% of the patients treated with rosuvastatin reached the 1998 European goal for LDL-C (<3.0 mmol/L) at week 24. This was achieved although the initial dosage of 10 mg rosuvastatin had to be increased to 20 mg in only 31% of patients. These findings differed from similar studies, where as many as 74-88% of the patients achieved the 1998 European goal for LDL-C during treatment with a 10 mg daily dose of rosuvastatin [27,30-32]. However, baseline levels of TC and LDL-C were higher in this study than in other similar studies and 49% of the patients were already pre-treated with another statin. This could explain why fewer patients achieved the 1998 European goal, and is supported by data showing that reductions in levels of LDL-C in patients in this study were similar to reductions reported in other studies.
Rosuvastatin effectively reduced the mean plasma levels of LDL-C in patients who were either statin-naïve or had switched from other lipid-lowering medication, by 49.2% and 31.2% respectively. The mean reduction in LDL-C was comparable with findings from other studies showing reductions of 42-52% with 5 mg or 10 mg daily doses of rosuvastatin [26,33].
The reduction in TC and TG levels and increase in HDL-C levels achieved with rosuvastatin in this study was comparable to those reported in other studies [26,27,34].
The reductions in LDL-C, TC and TG levels were generally higher for statin-naïve patients than for those switched from other lipid-lowering medication. Moreover, statin-naïve patients treated with rosuvastatin showed a greater increase in HDL-C levels compared to those switched from other lipid-lowering medication. These changes in the level of plasma lipids were in agreement with observations from other studies [34].
The effectiveness of lipid-lowering therapy crucially depends on the patient's compliance [35], which was determined by counting tablets during the study. Excellent compliance with treatment was indicated by mean values of 97% (10 mg group) and 100% (10 mg Plus group) between week 12 and week 24. It is unusual for compliance proportions of 100% to be achieved in clinical trials. This may be due to missing values resulting from unreturned medication, as it was assumed that all the unreturned tablets had been taken.
Compliance enhancement initiatives employed during the present study were not found to increase the number of patients achieving the 1998 or 2003 European goals for plasma lipids. However, this may be explained by the fact that patients in both groups were already very compliant with their treatment regimens. Moreover, patients participating in a clinical trial are more likely to be compliant with their treatment than in real life, regardless of which treatment group they are in.
Rosuvastatin was generally well tolerated. The adverse events and safety profile for rosuvastatin were similar to those of other statins [36]. Myalgia was the most commonly reported adverse event, affecting 3% of patients.
This study shows that rosuvastatin was effective in reducing LDL-C in the majority of patients with hypercholesterolaemia to the 1998 European goals for LDL-C at week 24. Compliance was high and the use of compliance enhancement tools did not increase the number and percentage of patients achieving European goals for plasma lipids. Rosuvastatin has a safety profile similar to that of other statins and was well tolerated by patients.
The authors acknowledge the support of AstraZeneca AG (Zug, Switzerland) for this trial. They express gratitude to archimed medical communication ag (Zofingen, Switzerland) for providing medical writing support and to the investigators and patients who participated in the study.
Table 2
Modelling and mathematical analysis of the M2 receptor-dependent joint signalling and secondary messenger network in CHO cells
The muscarinic M2 receptor is a prominent member of the GPCR family and strongly involved in heart diseases. Recently published experimental work explored the cellular response to iperoxo-induced M2 receptor stimulation in Chinese hamster ovary (CHO) cells. To better understand these responses, we modelled and analysed the muscarinic M2 receptor-dependent signalling pathway combined with relevant secondary messenger molecules using mass action. In our literature-based joint signalling and secondary messenger model, all binding and phosphorylation events are explicitly taken into account in order to enable subsequent stoichiometric matrix analysis. We propose constraint flux sampling (CFS) as a method to characterize the expected shift of the steady state reaction flux distribution due to the known amount of cAMP production and PDE4 activation. CFS correctly predicts an experimentally observable
Introduction
The muscarinic acetylcholine receptor (M2 receptor), encoded by the CHRM2 gene, belongs to the family of G protein-coupled receptors (GPCR) and is, among other locations, expressed in cardiomyocytes, where it influences the heart beat rate (Brodde & Michel, 1999). It is related to negative dromotropic and negative chronotropic events. Its malfunctioning has been associated with a number of diseases, such as cardiomyopathies (Brodde & Michel, 1999). GPCRs represent one of the most important target classes of proteins for drug discovery (Zheng, 2006). Hence, development of specific agonists and antagonists for muscarinic receptors, including the M2 receptor, is still of high interest. Iperoxo is a highly affine and efficacious muscarinic agonist (Schrage et al., 2013, 2014) that has recently served to elucidate the crystal structure of the active state of the M2 receptor (Hu et al., 2010; Kruse et al., 2013). In traditional pharmacology, the ligand-binding event, second messenger concentrations, ion channel function, as well as tissue, organ or body responses are recorded. New label-free, whole-cell techniques are nowadays used to dissect signalling of intact cells into different components (Schröder et al., 2011). In addition, iperoxo and its derivatives turned out to be valuable tools for gaining deeper insight into structure-signal relationships (Bock et al., 2014; Antony et al., 2009).
Recent experimental work explored the cellular response to iperoxo-induced M2 receptor stimulation in Chinese hamster ovary (CHO) cells (Kruse et al., 2013; Schrage et al., 2013, 2015). The cellular response was measured by dynamic mass redistribution (DMR), a technique to quantify the intracellular mass movement via optical density (Schröder et al., 2011). Since the DMR response can be assumed to be dependent on the M2 receptor-dependent signalling, our aim was to model and study the corresponding reaction system. The pathway consists of proteins as well as the secondary messenger cyclic adenosine monophosphate (cAMP). The respective biochemical reactions are principally well known (Pierce et al., 2002; Linderman, 2009; Sunahara & Taussig, 2002; Taylor et al., 2012), but to the best of our knowledge no effort has been taken so far to derive a mathematical model, especially for CHO cells, which are very important in pharmaceutical research and for the industrial production of recombinant protein therapeutics (De Jesus & Wurm, 2011; Walsh, 2010).
In this work, we developed a mass action based mathematical description of the M2 receptor-dependent signalling network. Our developed model consists of 79 reactions, altogether involving 64 relevant proteins and secondary messenger molecules described in the literature. In our joint signalling and secondary messenger model, all binding and (de-)phosphorylation events are explicitly taken into account in order to enable subsequent stoichiometric matrix and flux distribution analysis (Wiback et al., 2004). Although this kind of analysis is usually only employed for metabolic networks, our explicit modelling of binding and phosphorylation events enables the adaptation of these techniques to a mixed signalling and secondary messenger system. The usefulness of applying stoichiometric matrix analysis techniques to signalling pathways has e.g. been demonstrated by Behre & Schuster (2009), who adapted elementary flux mode (EFM) analysis to this situation. We here show how the known flux sampling technique (Smith, 1996) can be extended to incorporate partially available experimental information (here: cAMP production, phosphodiesterase 4 (PDE4) activation). We tested our combined modelling and data-driven sampling method by predicting key signalling mechanisms known from literature, but not explicitly encoded into the model. Our proposed constraint flux sampling (CFS) technique allows for qualitative predictions of downstream stimulation effects on actin and tubulin levels, which here serve as markers for the mass redistribution effect. These qualitative predictions are in agreement with the experimental observations, which suggests CFS as a technique for model checking. This is further underlined by the possibility to combine CFS and EFM analysis, yielding a statistical ranking of EFMs according to their expected biological relevance.
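The core idea of constraint flux sampling can be illustrated on a deliberately tiny, hypothetical branched network (not the authors' 79-reaction model): sample steady-state flux vectors uniformly, then keep only those consistent with a "measured" flux, and observe how the distribution of the remaining fluxes shifts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy branched network (species A, B, C):
#   r1: -> A,  r2: A -> B,  r3: A -> C,  r4: B -> ,  r5: C ->
S = np.array([[1, -1, -1,  0,  0],   # A
              [0,  1,  0, -1,  0],   # B
              [0,  0,  1,  0, -1]])  # C

def sample_steady_fluxes(n, lo=0.0, hi=10.0):
    """Sample nonnegative flux vectors v with S v = 0, parametrized by the
    two independent branch fluxes v2 (A->B) and v3 (A->C)."""
    v2 = rng.uniform(lo, hi, n)
    v3 = rng.uniform(lo, hi, n)
    return np.column_stack([v2 + v3, v2, v3, v2, v3])

flux = sample_steady_fluxes(100_000)

# Constraint flux sampling: keep only samples consistent with a hypothetical
# measurement of the B-branch output flux, v4 = 2.0 +/- 0.2.
kept = flux[np.abs(flux[:, 3] - 2.0) < 0.2]

print(flux[:, 1].mean())  # A->B flux over the unconstrained cone, ~5
print(kept[:, 1].mean())  # shifts towards the measured value, ~2
```

The real method operates on the full stoichiometric model with the measured cAMP production and PDE4 activation as constraints; this sketch only shows the conditioning step.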
Biological Background and Network Reconstruction
GPCR-induced signalling is well known in the literature (Pierce et al., 2002; Linderman, 2009; Taylor et al., 2012; Sunahara & Taussig, 2002). Specifically, the link to cyclic AMP (cAMP, a secondary messenger molecule)-induced signalling is in the focus of current pharmaceutical research (Milligan & Kostenis, 2006; Hu et al., 2010). Figure 1 depicts a schematic representation of the whole set of relevant molecules and their interplay, which are considered in our model. In particular, the process of receptor-induced G protein (GP) activation is well studied: the ligand-bound receptor changes its physical structure and the inactive associated GP interacts with the receptor and dissociates into its subunits (Pierce et al., 2002). Thereby the alpha-i/alpha-s and beta/gamma subunits are activated and are able to interact independently with other proteins like adenylyl cyclase (AC) (Sunahara & Taussig, 2002; Milligan & Kostenis, 2006). The GP subunit alpha-o has no significant influence on AC, but it has an influence on the DMR (Milligan & Kostenis, 2006). AC is one of the most important proteins within the GP-mediated pathway and is responsible for the secondary messenger production. The large number of AC and GP subtypes causes a highly complex sub-network with many cross-reactions (Milligan & Kostenis, 2006; Sunahara & Taussig, 2002). The receptor activation cycle itself is also not trivial. This first step in the signalling cascade is highly interesting for pharmaceutical research and has led to well-developed models for receptor activation and inhibition (Woodroffe et al., 2009; Chen, 2003; Strange, 2009; Bornheimer et al., 2004).
Besides this completely membrane-bound sub-network, the protein kinase A (PKA)-induced phosphorylation cascade and the feedback loop causing cAMP degradation are well studied (Taylor et al., 2012). Cyclic AMP binds to PKA and causes its activation. But an increase of PKA activity also leads to an increase in phosphodiesterase (PDE) activity, which inactivates cAMP by degrading it to AMP (Boswell-Smith et al., 2006). Through this mechanism the cell prevents a continuous overstimulation by excessive cAMP levels. Stimulation of the receptor population via the muscarinic agonist iperoxo induced a cellular DMR response at concentrations that are far lower than the corresponding concentration-binding relationships (Schrage et al., 2013). The same authors reported this amplification phenomenon also for other ligands, including the natural ligand ACh. The exact nature of the amplification process is not understood so far, but may at least be partially attributed to intracellular signalling events (Schrage et al., 2015). According to common literature, we suppose regulators of G protein signalling (RGS) and G protein-coupled receptor kinases (GRK) to be of relevance. These proteins are closely related to the deactivation of the receptor and the GP subunits (De Vries et al., 2000; Pierce et al., 2002; Hollinger et al., 2003). In this approach, we chose RGS14, GRK6 and GRK2 as important representatives for each group.

Nowadays, several readouts for the stimulation response are well established, e.g. the Ca2+ level or the cAMP concentration (Paredes et al., 2008; Hennen et al., 2013; Gabriel et al., 2003). In addition, DMR has been introduced into the pharmaceutical field in order to specifically quantify the stimulation effect on the cytoskeleton (Fang et al., 2005; Schröder et al., 2011). DMR is an optical biosensor based procedure and measures the shift in wavelength resulting from intracellular mass movement caused by rearrangement of cell organelles and transportation processes. The optical density is very sensitive to intracellular reorganization and morphological changes of the cell, and by comparing the optical density of unstimulated and stimulated cells one can measure the specific wavelength shift and draw conclusions about the intensity of the cellular response (Schröder et al., 2011). Maximal DMR response induced by iperoxo typically occurs after approximately 10 minutes (Schrage et al., 2013). According to this timescale and common literature, we did not consider transcriptional downstream responses (Shaywitz & Greenberg, 1999; Mayr & Montminy, 2001).
In this work, we chose actin and tubulin as DMR markers. Actin and tubulin are closely related to cellular movement and we assume a strong correlation between changes in both proteins and the relative wavelength shift measured by DMR (Hammond et al., 2008; Schmidt & Hall, 1998). As described in Fang et al. (2005), Strange (2009) and Schröder et al. (2011), the wavelength shift is caused by intracellular mass movement. Therefore, we took all proteins directly linked to actin and tubulin into account and assumed their activation to be correlated with the wavelength shift (Fig. 1). For further references see the supplementary material.
Mathematical modelling
All interactions shown in Fig. 1 are explicitly formalized as mass action based elementary reactions (Horn and Jackson, 1972) and all known proteins and their occurring complexes are included. Hence biological information from the available biochemical knowledge is preserved. Let $x_1, \ldots, x_n$ denote the concentrations of all molecules in the system. Then the concentration change of molecule $i$ can be written as

$$\frac{dx_i}{dt} = \sum_{j} s_{ij} v_j,$$

where $s_{ij}$ is the stoichiometric coefficient of molecule $i$ in reaction $j$ and $v_j$ denotes the corresponding rate of reaction $j$. As an example, we here show the PKA activation by cAMP (Corbin et al., 1988):

$$\text{PKA}^* \rightarrow \text{PKA}, \qquad \text{PKA} + 2\,\text{cAMP} \rightarrow \text{PKA}^*.$$

Here, PKA denotes the inactive form of PKA. Let us denote the rates of both reactions by $v_1$ and $v_2$, respectively. With $x_1 = [\text{PKA}^*]$, $x_2 = [\text{PKA}]$ and $x_3 = [\text{cAMP}]$, the stoichiometric coefficients are $s_{11} = s_{22} = -1$, $s_{12} = s_{21} = 1$ and $s_{32} = -2$. We obtain

$$\frac{dx_1}{dt} = -v_1 + v_2, \qquad \frac{dx_2}{dt} = v_1 - v_2, \qquad \frac{dx_3}{dt} = -2 v_2. \tag{2.6}$$

Altogether our modelled system contains 79 elementary reactions, which can be found in the supplementary material. The full reaction system can be represented via a stoichiometric matrix $S \in \mathbb{R}^{n \times m}$. In this matrix, every molecule is represented by one row and every reaction is represented by one column.
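The bookkeeping for the small PKA example can be checked numerically; the species ordering below (x1 = PKA*, x2 = inactive PKA, x3 = cAMP) is an assumption consistent with the stated coefficients, and the rate values are purely illustrative:

```python
import numpy as np

# Rows: x1 = PKA*, x2 = PKA (inactive), x3 = cAMP (assumed ordering).
# Columns: reaction 1 (PKA* -> PKA), reaction 2 (PKA + 2 cAMP -> PKA*).
S = np.array([[-1,  1],
              [ 1, -1],
              [ 0, -2]])

v = np.array([0.3, 0.5])  # illustrative reaction rates v1, v2
dxdt = S @ v              # dx/dt = S v
print(dxdt)               # dx1/dt = 0.2, dx2/dt = -0.2, dx3/dt = -1.0
```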
There are some details that need to be mentioned: our modelled system consists of several biochemical reaction types, namely binding, stimulation and inhibition. These biochemical events need to be represented appropriately in the reaction system and the stoichiometric matrix, respectively. This was done as follows: protein activation via phosphorylation was modelled with help of an intermediate molecule which represents the complex of the substrate and the related kinase. The kinase binds reversibly to the substrate and forms an intermediate complex which then dissociates irreversibly into the kinase and the modified substrate. For instance, GRK2 is phosphorylated by PKA (Cong et al., 2001). For this purpose, we introduce the intermediate complex PKA:GRK2 and write the reaction system as

$$\text{PKA} + \text{GRK2} \rightleftharpoons \text{PKA}{:}\text{GRK2} \rightarrow \text{PKA} + \text{GRK2}^*.$$

For every phosphorylation step, we assumed a backward reaction $P^* \rightarrow P$, which dephosphorylates the phospho-protein $P^*$ with the help of an unknown phosphatase. In our example, GRK2* is dephosphorylated into GRK2:

$$\text{GRK2}^* \rightarrow \text{GRK2}. \tag{2.9}$$

Protein inhibition by kinases is modelled in a similar manner. A kinase binds reversibly to the target protein and forms an intermediate complex which then dissociates irreversibly into the kinase and the inactive protein. The inactive protein is now able to be activated again by another kinase. We illustrate this process using the inhibition of GEF by GRK2 (Eijkelkamp et al., 2010):

$$\text{GRK2} + \text{GEF} \rightleftharpoons \text{GRK2}{:}\text{GEF}, \tag{2.10}$$
$$\text{GRK2}{:}\text{GEF} \rightarrow \text{GRK2} + \text{GEF}_{\text{inactive}}. \tag{2.11}$$

As shown above, these reactions can be represented in a stoichiometric matrix $S$.
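The expansion of a phosphorylation event into its elementary steps can be done mechanically; the helper function below is a hypothetical sketch of that rule, not code from the model:

```python
def phosphorylation_reactions(kinase, substrate):
    """Expand 'kinase phosphorylates substrate' into the elementary steps used
    in the model: reversible binding, irreversible dissociation into kinase
    plus phospho-substrate, and dephosphorylation by an unknown phosphatase."""
    complex_ = f"{kinase}:{substrate}"
    return [
        f"{kinase} + {substrate} <-> {complex_}",   # reversible binding
        f"{complex_} -> {kinase} + {substrate}*",   # irreversible dissociation
        f"{substrate}* -> {substrate}",             # dephosphorylation
    ]

for reaction in phosphorylation_reactions("PKA", "GRK2"):
    print(reaction)
```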
The dimension of the stoichiometric matrix can be decreased by expressing the forward and backward direction of the same reversible reaction in one column, where reaction rates can be both positive and negative. This is in contrast to strictly irreversible reactions, where only positive reaction rates are allowed. We also used the stoichiometric model to derive a system of ordinary differential equations based on the assumption of mass action kinetics; see the supplementary material.
Conservation relationships
Since signalling events are relatively fast, we can assume that for each protein the overall total amount of phosphorylated, bound and unphosphorylated protein is approximately constant, provided that the biological system is in steady state and the model is correct. Hence, checking conservation relationships is a means to check the consistency of our model. According to Palsson (2006), conservation relationships under steady state conditions can be identified mathematically from the null space of S^T. That means conservation relationships are all those vectors g for which

S^T g = 0. (2.12)

Each entry in g corresponds to exactly one molecule. Analysis of the entries of the vectors g thus provides a means to verify whether the expected constant total concentration of each protein holds. Moreover, we can also obtain insights into possibly existing constant protein concentrations within whole reaction cascades.
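Conservation relationships can be computed numerically as a basis of the null space of S^T. The sketch below is one straightforward SVD-based way to do this, not the authors' implementation; the tolerance is an arbitrary choice.

```python
import numpy as np

def conservation_relationships(S, tol=1e-10):
    """Basis of all vectors g with S.T @ g = 0.
    Each column g defines a conserved quantity: g.T @ x stays constant
    along any trajectory dx/dt = S @ v."""
    _, sv, Vt = np.linalg.svd(S.T)
    rank = int(np.sum(sv > tol))
    return Vt[rank:].T  # columns span the null space of S.T

# Toy example: one reversible conversion A <-> B, split into two
# irreversible reactions; the total [A] + [B] is conserved.
S = np.array([[-1.0,  1.0],
              [ 1.0, -1.0]])
G = conservation_relationships(S)
```

For the toy system, G has a single column proportional to (1, 1), recovering the expected conservation of [A] + [B].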
Stimulation of the system
We are interested in qualitative changes upon receptor stimulation. For the unstimulated system, we assume a steady state characterized by constant concentrations of all molecules. Receptor stimulation causes a perturbation of this steady state, resulting in dynamic changes of molecular concentrations. However, we assume that after some relaxation time the system will attain a second, presumably different, steady state, which is characterized by the molecular concentrations in the stimulated state. In reality, the stimulated state need not be a dynamic equilibrium in the strict sense, but we believe it to be a useful approximation for a situation of maximum response, in which all concentrations are nearly constant over time. We believe that this working hypothesis is useful for analysing qualitative changes between the unstimulated and stimulated states, which is also supported by the fast (usually milliseconds) time scale of the signalling events in comparison to the observable duration of responses to receptor stimulation.
Mathematically, all stationary reaction rates v in the steady state (so-called fluxes) are given as solutions of the underdetermined system of equations

Sv = 0. (2.13)

The unstimulated and the stimulated state correspond to different solutions of this equation. Our strategy will be to constrain the solution space of (2.13) using experimental data. We will then use Monte Carlo sampling (see below) to compare possible fluxes in the stimulated and unstimulated state. For a qualitative comparison, we suppose the DMR response to be given as the sum of all fluxes with known influence on the wavelength shift,

DMR = Σ_{i=1}^k v_i. (2.14)

Here, v_i denotes the activating flux related to molecule i with influence on the wavelength shift. The sum runs over all k in-fluxes into tubulin and actin, which are considered as markers of the DMR response (Hammond et al., 2008; Schmidt & Hall, 1998; Schröder et al., 2011).
Sampling the flux polytope
Since we are interested in the general behaviour of the system without incorporating additional rate parameters, steady state solutions of the system can in principle be found through Markov Chain Monte Carlo Hit-and-Run sampling (Smith, 1996; Brooks, 1998; Price et al., 2004). A single move in the Hit-and-Run sampling is performed by making, from a given feasible solution, a uniformly random move within the unit sphere. Afterwards the step size is adjusted such that the new solution is also feasible (Smith, 1996; Megchelenbrink et al., 2014). A solution v* is called feasible if it satisfies

Sv* = 0 (2.15)

with bounds α_i ≤ v*_i ≤ β_i. Note that without further constraints fluxes could take any real value, but in reality fluxes are bounded. Hence, we set for all reversible reactions α_i = −1000 and β_i = 1000 as loose bounds. For irreversible reactions we set α_i = 0. The flux bounds can in principle be used to incorporate experimental data. We will modify the flux bounds to qualitatively incorporate fold changes between stimulated and unstimulated cells, as explained in the following section.
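The Hit-and-Run step described above can be sketched as follows. This is a minimal illustration of the general scheme (sampling the set {v : Sv = 0, α ≤ v ≤ β}), not the implementation used in the study; the numerical tolerances and burn-in length are arbitrary choices.

```python
import numpy as np

def hit_and_run(S, alpha, beta, v0, n_samples, rng=None, burn=100):
    """Hit-and-Run sampling of {v : S @ v = 0, alpha <= v <= beta}.
    v0 must be a feasible starting point."""
    rng = np.random.default_rng(rng)
    # Orthonormal basis N of the null space of S (via SVD), so that
    # every candidate v = v0 + N @ w automatically satisfies S @ v = 0.
    _, sv, Vt = np.linalg.svd(S)
    rank = int(np.sum(sv > 1e-10))
    N = Vt[rank:].T
    w = np.zeros(N.shape[1])
    samples = []
    for it in range(burn + n_samples):
        d = rng.standard_normal(N.shape[1])
        d /= np.linalg.norm(d)          # uniform direction on the unit sphere
        Nd = N @ d
        v = v0 + N @ w
        # Largest step interval [t_lo, t_hi] keeping alpha <= v + t*Nd <= beta.
        t_lo, t_hi = -np.inf, np.inf
        for di, vi, ai, bi in zip(Nd, v, alpha, beta):
            if abs(di) < 1e-12:
                continue
            lo, hi = (ai - vi) / di, (bi - vi) / di
            if di < 0:
                lo, hi = hi, lo
            t_lo, t_hi = max(t_lo, lo), min(t_hi, hi)
        w = w + rng.uniform(t_lo, t_hi) * d
        if it >= burn:
            samples.append(v0 + N @ w)
    return np.array(samples)
```

For instance, for a single metabolite with one in-flux and one out-flux (S = [[1, −1]]), every sample satisfies v_1 = v_2 within the chosen bounds.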
Constrained flux sampling
We incorporate partially available experimental data of measured relative (steady state) molecular concentrations into the flux sampling scheme described above in order to make qualitative predictions about flux changes upon stimulation. The approach thus does not require detailed knowledge of kinetic rate constants.
Let ṽ_j denote the steady state flux of the j-th reaction in the case of an unstimulated receptor. According to the law of mass action (see section 2.2) with rate parameters k_j, we have

ṽ_j = k_j Π_i x̃_i^(s_ij),

where {x_i} is the set of molecules taking part in the particular reaction and x̃_i their concentrations. Note that at this point we suppose involved reversible reactions to be split into two irreversible ones. Accordingly, the flux v̂_j for the same reaction under stimulation can be defined, now with concentrations x̂_i. Usually, experiments yield relative concentration changes (fold changes) f_i = x̂_i / x̃_i, which leads to

v̂_j = ṽ_j Π_i f_i^(s_ij). (2.19)

Note that the stoichiometric coefficients s_ij in most cases are 1. The equation suggests a principal two-step procedure: (1) Perform conventional flux sampling for the unstimulated situation. This yields a set of fluxes ṽ_j.
(2) Perform flux sampling for the stimulated situation by plugging observed fold changes into Eq.(2.19) in order to constrain sampled fluxes.
In reality it may be more appropriate to consider confidence intervals [f_i^Min, f_i^Max] for f_i, because fold changes are subject to uncertainty. This can be addressed straightforwardly by replacing Eq. (2.19) by the inequality

ṽ_j Π_i (f_i^Min)^(s_ij) ≤ v̂_j ≤ ṽ_j Π_i (f_i^Max)^(s_ij). (2.20)

The quantity ṽ_j in practice needs to be estimated from the empirical flux distribution under steady state conditions. A reasonable choice is to take the mean or median of the sample distribution plus/minus the standard deviation for that purpose.
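One simple way to turn Eq. (2.20) into concrete sampling bounds is sketched below. The estimator for ṽ_j (median ± standard deviation of the unstimulated sample, clipped at zero as for an irreversible reaction) follows the "reasonable choice" mentioned in the text, but the exact estimator used by the authors may differ; the parameter s stands for the (reactant-side) stoichiometric coefficient and defaults to 1.

```python
import numpy as np

def stimulated_flux_bounds(unstim_fluxes, fold_lo, fold_hi, s=1):
    """Bounds on the stimulated flux v^_j implied by Eq. (2.20), given
    sampled unstimulated fluxes and a fold-change confidence interval."""
    center = np.median(unstim_fluxes)
    spread = np.std(unstim_fluxes)
    v_lo = max(center - spread, 0.0)   # clip at zero (irreversible reaction)
    v_hi = center + spread
    return v_lo * fold_lo**s, v_hi * fold_hi**s
```

For example, with the measured cAMP fold-change interval [2.22, 2.71] and an unstimulated flux tightly concentrated around 1, the stimulated flux would be constrained to roughly [2.22, 2.71].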
Elementary flux modes
Schuster and Hilgetag introduced EFM analysis for characterizing the geometry of the solution polytope of the equation system Sv = 0 in a biologically interpretable manner (Schuster & Hilgetag, 1994). All solution vectors occur as linear combinations of EFMs. More specifically, Schuster & Hilgetag (1994) and Llaneras & Pico (2010) consider the convex polyhedral flux cone

P(S) = { v = Σ_j w_j e_j | w_j ≥ 0 }. (2.21)

EFMs e_j are then defined as the extreme rays or edges of the flux cone P(S). A formal assumption made in this equation is that reversible reactions are split into irreversible ones. Each EFM can be characterized as the minimal set of reactions which are required for a sub-system to exist as a functional unit (Papin et al., 2004). These sub-systems either reflect fluxes through the whole reaction system or functional cycles within the system. Thus, analysis of EFMs allows for identifying biologically functional and interpretable 'building blocks' of the biological reaction system. In the case of signalling, this also implies that without stimulation there exists no EFM representing the whole network and no EFM describing the signalling flow through it.
In this article, we combine EFM analysis with CFS: after having determined the flux distributions of the overall system under stimulated and unstimulated conditions, we map fluxes to each of the calculated EFMs. This is possible because each flux corresponds uniquely to one reaction. We then compute the median of all fluxes related to a specific EFM. Since our sampling procedure generated a large (here: 100,000) sample of flux vectors, we obtained an empirical distribution of these medians for each EFM. The significance of the difference in these distributions between stimulated and unstimulated conditions can be assessed via a Wilcoxon rank test, yielding a P-value. Because we compare not only one but several EFMs, multiple testing correction of P-values is performed via control of the false discovery rate (FDR) (Benjamini & Hochberg, 1995). Moreover, we estimated the median fold change between stimulated and unstimulated conditions.
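The multiple-testing step can be sketched as follows; this is a generic Benjamini-Hochberg adjustment, not the authors' analysis script.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted P-values (one per input P-value)."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest rank downwards.
    adj = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out
```

An EFM is then called significant if its adjusted P-value falls below the chosen FDR threshold.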
Data: cAMP, PDE4 and DMR
Parts of the experimental data (dose-response relationships) were taken from Schrage et al. (2013): in that article, DMR was measured at 13 different concentrations of the M2 receptor-specific activator iperoxo, giving rise to dose-response curves (Fig. 2). In addition to these data by Schrage et al., the cAMP response to iperoxo treatment was measured here (Fig. 2). This was done after 30 minutes of iperoxo incubation at a concentration of 0.1 μM = 10^−7 M, which corresponds to a full DMR response (Schrage et al., 2013).
The induced cAMP fold change, calculated as the ratio between the cAMP level at the iperoxo concentration of 0.1 μM and the basal cAMP level, has the 95% confidence interval [2.22, 2.71] (see Fig. 2).
In addition, we measured the activation of PDE4 after 30 minutes at the iperoxo concentration of 0.1 μM. The 95% confidence interval of the active PDE4 level (see Fig. 2) is [3.21, 3.46]. For experimental details, we refer the reader to the supplementary material.
Resulting conservation relationships
We uncovered 14 conservation relationships within the modelled biological system under steady state conditions. Figure 3 illustrates which sets of proteins have to maintain a constant total concentration. As illustrated in Fig. 3, we found at least one conservation relationship for each protein, reflecting the main endogenous signalling cycles. Thus our expectations arising from the signalling character of our modelled system are verified.
Constrained flux sampling correctly predicts the DMR response under receptor activation
We applied the CFS framework described above, incorporating cAMP as well as PDE4 fold changes into the flux constraints. DMR response measurements were not taken into consideration at this point, but left out for independent validation. Figure 4 depicts the distributions of those selected fluxes which, according to our CFS analysis, are predicted to show a statistically significant shift under stimulation (FDR < 1%, Wilcoxon signed rank test with Benjamini and Hochberg's FDR control of P-values for multiple testing (Hollander, 1999; Benjamini & Hochberg, 1995)). CFS predicts a significant change of RGS14. According to our simulation, a slight decrease of the receptor de-activation and an increase of the GP subtype alpha-i de-activation via RGS14 can be expected under stimulation. The receptor de-activation is compensated by an increasing receptor recycling. This phenomenon of signal regulation by RGS14 and GRK6 is well described in the literature, where both proteins are known as important signal regulators (Pierce et al., 2002; Dale & Rang, 2011; Berridge, 2014). The RGS family is involved in the extinction of GP-dependent signalling (Zhang and Mende, 2011), e.g. via receptor desensitization or endocytosis (Reiter & Lefkowitz, 2006). This causes the downregulation of GP alpha-i downstream and, together with cAMP, forms a positive feedback loop. More specifically, the inhibition of the cAMP inhibitor GP alpha-i subunit leads to an increase of cAMP production. In addition, significant GP alpha-i de-activation causes a significant increase of GRK6 activation.
The boxplots clearly highlight that, besides cAMP, increasing levels of AMP production and cAMP degradation are expected under stimulation, which is in agreement with the current literature (Pierce et al., 2002; Sunahara & Taussig, 2002; Strange, 2009; Berridge, 2014). CFS is able to correctly predict a significant positive wavelength shift, i.e. DMR response, under receptor stimulation, which is in agreement with our experimental validation data (see Fig. 2). Hence, CFS allows for a qualitative check of our pathway model.
Knock-out simulations
To further check the hypothesized relevance of RGS14 for the observed DMR response, we conducted an in silico knock-out simulation. This means we restricted all fluxes going through this molecule to zero while repeating our CFS. To investigate the effect of the different molecules on the DMR response, we performed knock-out simulations for all molecules except for GP alpha-s, AMP, AC, cAMP, PKA, the receptor and the ligand. We then ranked the molecules according to the statistical significance of their influence on DMR related fluxes. GP alpha-s is not considered for the knock-out simulation because no steady state solutions are possible when constraining the fluxes through this important signalling molecule to zero. AMP, AC, cAMP, PKA, the receptor and the ligand are not considered because these are characteristic molecules for signalling and removing them is unphysiological. Table 2 shows the proteins with influence on the response under stimulation.
Altogether our simulations underline our findings from section 3.3. The highest impact was found for RGS14 and GP alpha-o, followed by PDE4. Furthermore, GRK6 has a significant influence.
Combining EFMs and CFS reveals important sub-networks and regulatory mechanisms
In a last step, we applied EFM analysis to the system with ligand stimulation (see section 2.4), resulting in 63 EFMs (see supplementary material). Notably, many of these EFMs represent similar biological mechanisms. We ranked all EFMs with respect to their predicted change under stimulation using the method described in section 2.7. Table 3 shows all EFMs with an FDR lower than 0.001 and a median fold change greater than 1. Interestingly, four of the most significant EFMs describe the GP alpha regulation via RGS14, and the receptor regulation via GRK6 is also among the most significant EFMs (Figs 5 and 6). This is in full agreement with our previous findings and provides a possible explanation for the relevance of these molecules. Further significant EFMs are related to PDE activation and cAMP/GEF production (in agreement with our experimental data).
Discussion
In this article, we presented a comprehensive mathematical model of the M2 receptor-dependent joint signalling and secondary messenger network. The motivation for our work comes from the pharmacological relevance of the M2 receptor and the induced cellular responses. Whereas in principle the individual parts of our studied system are well described in the biological literature, to our knowledge there have been no attempts so far to combine this information into a mathematical model.
Fig. 1 .
Fig. 1. Cartoon of the M2 receptor-dependent signalling and secondary messenger network in CHO-hM2 cells based on the known literature. The receptor is activated by a ligand (e.g. iperoxo) and induces the membrane-bound signalling cascade, including G protein (G) activation and production of cAMP by adenylate cyclases (AC). Via cAMP the signal is transferred to the PKA-induced phosphorylation cascade. Detailed reactions are suppressed for simplification. The detailed reaction system can be found in the supplementary material.
Fig. 2 .
Fig. 2. (a) DMR concentration-response curve of iperoxo, modified from Schrage et al. (2013). The affinity between iperoxo and the receptor (pK_D) of the iperoxo-induced ligand binding curve for intact cells (obtained from Schrage et al. (2014)) is marked by the blue line. (b) Concentration-effect curve of measured iperoxo-induced G protein alpha-s mediated cAMP accumulation with standard deviations and estimated confidence interval marked by the blue line. The inactivation of G protein alpha-i was induced via a pretreatment with pertussis toxin (PTX). The inactivation of the cAMP-inhibiting G protein alpha-i allows for matching the measurements with their corresponding G protein alpha-s mediated network fluxes. cAMP accumulation in the absence of test compounds was set to 0% and maximum forskolin-induced binding was set to 100%. (c) Western blot for the total amount of PDE4 (Pan-PDE4) and active PDE4 (pUCR1) under stimulation with 0.1 μM iperoxo, normalized against GAPDH. (d) Fold change for normalized active PDE4 (pUCR1).
Fig. 3 .
Fig. 3. Calculated conservation relationships: each column represents one conservation relationship and each row a protein. Red cells indicate proteins involved in a conservation relationship. The sum over all marked protein concentrations per column is constant. The inactive form of each protein is indicated by the subscript 'in'.
Fig. 4 .
Fig. 4. Predicted fluxes without stimulation (yellow/left) and under stimulation (green/right). (a) Boxplots illustrating the distribution of selected fluxes under different conditions using cAMP measurements. Yellow (left) boxes indicate the unstimulated and green (right) boxes the stimulated condition. Ligand-induced G protein activation is shown for the alpha-s subtype here. Boxplots for all fluxes can be found in the supplementary material. (b) Overall response given as the sum of fluxes into tubulin and actin, see Eq. (2.14). Related median fold changes are shown in Table 1.
Fig. 5 .
Fig. 5. Elementary flux mode for G protein (G) regulation via RGS14. First the inactive G protein complex, consisting of the subunits alpha-i and beta/gamma, binds to the active receptor, and the bound GDP (guanosine diphosphate) is replaced by GTP (guanosine-5'-triphosphate) while the G protein dissociates from the receptor and splits into its subunits beta/gamma and alpha-i. Afterwards the activated alpha subunit is deactivated by replacing GTP with GDP, mediated by RGS14. In the last step, the deactivated GDP-bound alpha subunit again associates with the beta/gamma subunit and forms the inactive G protein.
Fig. 6 .
Fig. 6. Elementary flux mode for the GRK-mediated receptor inactivation via phosphorylation. The ligand-bound receptor gets phosphorylated by GRK. The phosphorylated and hence inactive receptor is no longer able to mediate G protein activation. After ligand dissociation the receptor gets de-phosphorylated and is again able to mediate G protein activation.
Table 1
Median fold changes related to Fig. 4
Table 2
Proteins ranked with respect to their predicted influence on the DMR response. The influence on the response was estimated by the median fold change expected from a knock-out simulation of each protein. The statistical significance of each simulated fold change is shown in terms of FDR. A high fold change implies a strong influence of the particular protein.
Table 3
Significant EFMs (FDR < 1E−6) ranked by their median predicted fold change induced by stimulation. Note that the first three fold changes are not computable, because without stimulation there is no flux through these EFMs. For the complete table please see the supplementary material.
The endogenous development of pastoral society: an anthropological case study in East Ujimqin Banner in Inner Mongolia
Through empirical research on the social development of pastoral society in the East Ujimqin Banner, this study puts forward that within the government’s “passive development” discourse, local herders would prefer to consciously practice “active development”. This method both respects local culture and traditions, and triggers a shift from exogenous to endogenous development. My study shows that only by cultivating the self-development mechanism of ethnic minorities and the initiative to participate in development can we realize the social development of pastoral areas. The survival practices constructed by the local society according to its traditional mechanisms are not only connected with the external market and the state as modes of production, but also enable the local society’s modes of livelihood to be maintained and the traditional social culture to continue amid the continuous transformation taking place under the impact of pastoral modernization.
Introduction
There are two main theoretical standpoints in the academic community when it comes to social development, namely, exogenous development, which stems from outside a community, and endogenous development, which is achieved by the community itself. In the 1970s, the United Nations Economic and Social Council put forward a model that differed from the exogenous development model widely used in developing countries. This model later became the prototype for regional endogenous development, and stressed the impact that internal factors like equality, freedom, and economic democratization have on regional development. In 1975, the Swedish Dag Hammarskjöld Foundation published the Dag Hammarskjöld Report on Development and International Cooperation: What now? Another Development at the United Nations, which formally proposed the concept of endogenous development and expounded the significance of human beings, the environment, culture, ecology, and diversified development (Linstone H. A 1979: 95-96). After the 1980s, the research focus of regional endogenous development theory gradually shifted from "material" to "human", putting people first. Hence, a reflection on endogenous development theory emerged from Japan. Tsurumi Kazuko and Kawata Tadashi defined endogenous development theory from a sociological perspective and published Endogenous Development Theory (Kazuko and Tadashi 1989: 46-47).
As a synonym of "another development", endogenous development aims to explore development modes that are different from European and American modernization. Compared with exogenous development theory, endogenous development theory advocates using a strong foothold in the specific local ecosystems and traditions of a region to creatively transform the external factors affecting the region, and encourage its development. The Declaration of Madrid, released at the 2000 United Nations International Conference on a Culture of Peace, declared, on the basis of the four "new contracts", the necessity of promoting global endogenous development based on knowledge and internal capacities (Declaration of Madrid 2006).
In the field of Chinese Anthropology, Fei Xiaotong put forward his views on ethnic minority development as early as the 1980s. Fei held that development efforts among ethnic minority groups must pay attention to the unique physical and cultural advantages of the group themselves, giving them due power in the development of their region through the development of their social productivity and spiritual culture (Fei Xiaotong 1993:220). In fact, these thoughts on the development of Chinese society bring together a wide range of Fei Xiaotong's ideologies and research methods, such as rural community studies, differential pattern, small-town theory, regional development research, cultural awareness, and the pattern of diversity in unity of the Chinese nation.
In the past ten years, many Chinese scholars have proposed their own views on regional social development, for example that different regions should exert their autonomy by concentrating their funds and power in directions that benefit regional development (Tu Renmeng 1993: 21-25). Moreover, scholars have posited that an endogenous development model combining the exogenous power of rural primary governmental organizations with the endogenous power of villages themselves has become the ideal form of Chinese rural development (Lu Xueyi 2001: 9). Therefore, in order to pursue sustainable development, it is necessary to empower local people (Wu Chongqing 2016: 6).
The above studies demonstrate that regional endogenous development theory has become a widely accepted new perspective in academic circles both in China and beyond. These studies have been produced from a variety of academic backgrounds, providing a substantial theoretical and practical basis for anthropological research. However, few scholars have published results from applying this theory to the social development and anthropology of pastoral societies. Moreover, there are even fewer adequate case studies on the development of pastoral society. This study seeks to explore the systems that empower herders to actively change their situation as their region undergoes social transformation, by interpreting the diverse endogenous social development of pastoral areas. In so doing, the study aims to provide new cases to support the previous theoretical system and past research on pastoral development.
In practice, herders are usually remarkably good at making use of the original social mechanisms at their disposal, and at combining traditional natural and social resources to find the best way to survive. This makes the endogenous development strategy an important way for them to adapt to social transformation and promote regional development. At present, by way of addressing the development of rural areas, the Chinese government has proposed a "rural revitalization strategy", which places rural pastoral development as the top priority for the government's future "three rural issues". Therefore, as a vital part of the development of ethnic regions in China, how can pastoral society transform the traditional nomadic lifestyles that have existed for thousands of years? How can we deal with the relationship between modernity and locality that underpins development discourse? What active strategies will the herders take to adapt to the rapid social transformation? What is the cultural logic behind the modernization of animal farming practices and the livelihoods adopted by herders? These issues must be the concern not only of policymakers, but also of academics.
Field site and research methods
The field site selected for this study is the East Ujimqin Banner, located in the eastern part of the Inner Mongolia Plateau and at the western slopes of the Greater Khingan Mountains. Its geographical location is bounded by the Hinggan League in the east, West Ujimqin Banner in the south, Abag Banner in the west and Sükhbaatar Province in the north. The total area is 47,300 km², including 69.17 million mu (1 mu = appr. 666.5 m²) of natural pastures, 95% of which are available for grazing. At present, the East Ujimqin Banner has jurisdiction over 5 towns, 4 sumu towns, 9 township-level administrative districts, and the Wulagai agriculture and animal husbandry comprehensive development zone. There are 57 gacha settlements, 1 state-owned forest farm, 13 communities, 220 herder groups and 192 resident groups. As of 2016, the East Ujimqin Banner had a total population of 70,700, a Mongolian population of 45,600 and a livestock rearing population of 33,000. The East Ujimqin Banner is a border animal husbandry area with ethnic Mongols as the main ethnic group.
Located in the hinterland of the Xilingol Grassland, the East Ujimqin Banner has preserved its grassland vegetation and nomadic livelihoods in a relatively authentic way. Therefore, it is a National Key Ecological Function Area known as "the hometown of Mongolian long songs, the cradle of Mongolian wrestling, the capital of traditional ethnic clothing, a famous location for nomadic farmers and an ecological paradise". The modernization of animal husbandry in the area has achieved remarkable results in recent years, and the East Ujimqin Banner is a typical, representative example of the development of pastoral areas in China. The Ujimqin herders have lived a nomadic lifestyle for generations. Since the beginning of the 1980s, the local pastoral society has been involved in a nationalized and market-oriented modern system, developing from nomadism to settlement and resettlement, then from assigning households a certain value of livestock to contracting households with areas of grassland. Today, we can see a revitalization of pastoral areas and a modernization of animal husbandry. Local people's livelihoods have experienced a historic process of rapid change and transformation. It is reasonable to argue that this change is not only influenced by the natural environment and historical conditions; it is also related to the regulation of multiple forces such as policies and markets.
In order to explore the characteristics of endogenous development and livelihood transformation in the pastoral society of East Ujimqin Banner, this study adopts the classic anthropological method of participant observation. In August 2011, from August 2012 to September 2013, and in August 2018, I conducted over a year of field research into the livelihood of animal husbandry and the social development of the East Ujimqin Banner. I also took a one-year post at the Bureau of Agriculture and Animal Husbandry in the East Ujimqin Banner. Therefore, the multiplicity of the author's roles in the field research, the diversity of the investigation sites, and the effectiveness of the progressive combination of the roles of government staff, researcher, and research subject, have provided favorable conditions for this research and multiple perspectives for observation. This has allowed me to carry out a progressive multi-point study across the entire East Ujimqin Banner area.
A traditional method in anthropological research is to use participant observation to conduct a meticulous, in-depth piece of research that dissects the ways of life of a village or community. Of course, this approach is typical of anthropological fieldwork, which habitually focuses on research of a microscopic level of detail. However, in doing so, the perspective of such research can become mired in detailed information specific to one small community, thus failing to achieve any overarching, holistic understandings or reflections. In his writing, Fei Xiaotong provided his own valuable reflections on anthropological research methods. He believed that the ethnographic method of anthropology in the past was inadequate, and we should instead adopt a research method that combines individual parts with the whole, considering different types and different levels of information in order to explain the manifold and integrative structure of the Chinese civilization system (Fei Xiaotong 2000: 9).
In the present paper, therefore, I did not confine the research perspective to a single community, but took the entire Banner as the study area. I selected sites in East Ujimqin Banner covering a range of geographical locations, types of vegetation, degrees of degradation, and livelihood statuses, then carried out in-depth classification and comparison. This approach allows us to begin with the region's individual parts before studying the whole, systematically and progressively clarifying the economic characteristics and cultural specialties of the entire pastoral society. Among anthropological studies of pastoral areas, therefore, this study is positioned as a multi-point ethnography with the East Ujimqin Banner as its unit of study. It aims to provide insightful breakthroughs and advances on the methods formerly used in pastoral anthropology, which focused on individual communities.
Results and discussion: development practices of pastoral society in the East Ujimqin Banner
In the face of problems such as the degradation of grassland ecosystems, the increasing fragmentation of grassland, and the constant impact of modern systems such as the nation, the market, and technology, how can herders, as the main subject of this case study, adapt to the processes of change and transformation while maintaining the continuity of their nomadic livelihoods and traditional culture? As Loye said, adaptation is beginning to be seen not as the result of changes to an organism determined by its environment, but instead as the result of the organism's own active response to perceived environmental restrictions (Loye 2004: 122). At the uncomfortable juncture between tradition and modernity, adaptation and selection, what kind of initiative and active features do the herders have at their disposal, and what kind of attributes are unique to this group? It is with these questions in mind that we embark on our discussion.
The initial attempt: changes in development concepts and management practices
After the grasslands and livestock were contracted to households, the function of the household as an independent business unit was strengthened beyond recognition. This new social environment placed new demands on herders, endowing them with new responsibilities, obligations and roles to play. It required herders to change their traditional ways of thinking and behaving and to pick up new knowledge and skills, so as to adapt to the new natural environment and social culture while maintaining the sustainability of a nomadic livelihood. Independent management and self-financing are the current realities that herders must face. Against the background of individualized livestock management and the constant improvement of market concepts, learning how to manage and how to maximize the benefits of these changes has become the key to every herder family's survival. As a result, a number of herders took a flexible attitude and responded early, beginning their exploration towards prosperity and becoming the first pastoral elite to adapt to the new environment. After the division of local grasslands, Qinggeletu became the first person to attempt to change the management system and expand family herds.
Case 1
Qinggeletu, a 63-year-old Chinese Mongol, migrated from Tongliao to Arslan Gacha, Huretunao'er Sumu in 1968. The livestock distribution rule in 1983 was 15 sheep and 5 cattle per person. There were 5 children in Qinggeletu's family; together with their parents, the 7 of them were given 105 sheep and 35 cattle. In the second year, the grassland was divided at 2000 mu per person, and Qinggeletu's family received 14,000 mu of grassland in total. Qinggeletu was shrewd: he bought Ujimqin lamb rakes 5 locally for 45 yuan apiece and sold them to herders in Hulunbuir for 85 yuan each. With the proceeds, he bought goats and lambs to increase the size of his herd. Because local Mongolian cattle are small, he spent 500-600 yuan on improved fattened calves from northeastern China and raised each to sell for 1200 yuan. The whole family worked painstakingly, and after a few years the herd had quickly grown to over 2000 sheep and 500 cattle, making the family a typical example of the Banner's wealthy households.
In the East Ujimqin Banner, herders like Qinggeletu are not in the minority. At the beginning of the grassland distribution, each household received relatively few livestock, but market demand and the rising prices of livestock products motivated herders to expand their herds to increase their family income. In fact, as long as some herders benefit from a change in management methods, their surrounding contemporaries will begin to follow the trend, thus accelerating the socio-economic transformation and reform of the East Ujimqin Banner.
Furthermore, herders need to stock thousands of jin (1 jin = 0.5 kg) of fodder every August. This stockpiling is done in preparation for the winter to ensure that livestock can be kept fat and as many lambs as possible can be delivered successfully. During the summer, herders set aside a certain area of grassland to be used for hay making instead of grazing. If their pastures are seriously degraded and are insufficient for fodder reserves in winter and spring, they must buy grass and fodder from other herders. This shows that herders are aware of the effects of grassland degradation, which has rendered it impossible for the herds to maintain their physical strength and nutritional balance on their ordinary diet of grass alone. As a result, it is necessary to combine grazing with drylot feeding to survive. For herders who have lived a traditional nomadic lifestyle for generations, adapting their way of thinking has been a slow process: from relying completely on pasture grazing to combining grazing and supplementary feeding. Qingbate, who is responsible for distributing subsidies, told me about the initial attitude of the herders towards government subsidies: "From 2003 to 2006, the government subsidized the herders with fodder because of the spring rest period from grazing. At that time, we were responsible for distributing corn seeds to herders at the health team's farmyard. Having just received their corn seeds, as soon as they left the farmyard many herders would sell their provisions on to the Han people in the Banner to feed their pigs. A bag of corn seed fodder worth 100 yuan would sell for just 30 yuan. At that time, they did not know about the fodder or its effects on livestock. They also said, 'Our sheep eat natural pasture, not those things.' Now, there's not one herder that doesn't know about the fodder, and every family comes to the Banner to buy truckloads of fodder to feed their sheep in the winter".
Therefore, during this process of constant learning, adaptation, change and selection, herders are doing all they can to maintain the traditional nomadic lifestyle while they transition from a total reliance on natural grazing to a combination of grazing and drylot feeding. Since ancient times, pastoral production has always followed the traditional herding custom of reasonably adjusting each family's herd structure according to the type of pasture, in order to maintain the best conditions for the pasture to grow. The herd structure refers to the proportion of different breeds in the family herd, and the proportional composition of age and sex within one kind of livestock (Wang Jiange 2001: 47). Wang Jiange believes that the herd structure is not only a variable in ecological conditions, but also a variable in social structures, especially the Mongolian class structure. The discussion of herd structure here is mainly based on the proportions of different breeds in the family herd under specific ecological conditions. According to the amount of grazing, the herders of East Ujimqin Banner divide the five types of livestock into two categories, large and small. Large animals include Mongolian horses, Mongolian cattle, and camels, while small animals are mainly Mongolian goats and sheep. In terms of utility, large animals are used as working livestock, and small animals as both dairy and meat livestock. They are also sold as commodities or given as gifts at weddings and social gatherings.
According to the survey, the traditional "five key livestock" herd structure has changed since the 1980s. In general, the proportion of large animals has decreased and the proportion of small animals has increased year by year; among them, the proportion of sheep has been increasing rapidly. At the end of 2015, the total number of livestock in the entire Banner was 1,642,634, including 127,766 large animals (80,887 cattle, 46,066 horses, and 813 camels) and 1,514,868 small animals (1,433,441 sheep and 80,887 goats). 6 Based on the data obtained from the field survey of the East Ujimqin Banner, I conducted a statistical analysis of the composition of the entire Banner's herd population structure after the 1980s, taking 10 years as a statistical cycle (Fig. 1).

The above statistics show that from 1981 to 2011, the number of large animals, namely horses, cattle, and camels, decreased sharply year by year. Horses decreased from 8.7% of the total herd in 1981 to 1.4% in 2011, cattle decreased from 10.4% to 5.5%, and camels saw the most acute reduction, from 0.3% to 0.03%. By contrast, the number of small animals grew rapidly. Among them, sheep increased from 72.5% to 86.09%, while goats showed a trend of increasing at first, then decreasing: they rose from 8.1% in 1981 to 16.3% in 1991 and 19.9% in 2001, then fell to 7% in 2011. The reason for this unprecedented decrease is that goats are more active and prefer to eat grass roots; not only can they easily cross through fences into the pastures of other herders, they can also cause serious damage to the grassland ecology. In addition, the price of cashmere has fluctuated greatly in recent years, so herders began to continually reduce their goat populations.
Therefore, we can see that in dispersed family operations, wherein people bear their own profits, losses, and risks, herders have begun to adjust their herd structures and expand their herds' size, which can quickly increase family income in the short term. The number of the "five key livestock" owned by a herder used to indicate which tribe they were a member of, as well as their social status. Now, however, the number of sheep in a herd has become the symbol of a herder's social status and living conditions. An important matter for herders is choosing when to sell their sheep: they always weigh up the price, then wait for the most opportune and profitable time to sell. In this regard, Lattimore once pointed out, "None of these livestock can provide a higher economic value to nomads on the grassland than sheep" (Lattimore 2005: 53). In view of this claim, I selected three cases from among 100 recorded interviews for further analysis. The selected cases come from different geographical locations, with different types of grassland vegetation and different family economic conditions.
Case 2
Wuenbaiyila, a 30-year-old ethnic Mongolian herder from Taidaomude Gacha, Wuliyasi. He was born in 1984, the year after grassland contracting was introduced, and therefore did not receive any grassland himself. After his father died, his mother gave him 3119 mu of grassland. Wuenbaiyila's wife had no grassland. At present, there are 304 sheep in the family, including 200 ewes, 60 large lambs, 40 small lambs, and 4 sheep rakes, but no goats. In addition, there are 2 horses used for grazing during heavy winter snows, and 4 cows for milk and dairy production.
Case 3
Hobart, a 45-year-old ethnic Mongolian herder from Mandulatu Gacha, Samai Sumu. In 1984, when the local grassland was divided, 1225 mu were allocated to each person. Hobart's family of four received 4900 mu of grassland and rented a further 4000 mu from other herders. There are currently 1300 sheep in the family, including 20 sheep rakes, no cattle or goats, and 2 horses, mainly used for Nadam exhibitions and horse racing during the Obo Festival.
Case 4
Siren, a 62-year-old ethnic Mongolian herder from Shangdu Gacha, Gadabuqizhen. In 1983, when the local grasslands were divided at 1969 mu per person, Siren and his mother, younger brother, wife, and four children, a total of 8 people, received 15,752 mu of grassland. Now that his mother and brother have passed away, the grassland of the family of six has not been re-divided. Siren's herd numbers more than 1000 sheep, including over 20 goats, but no cattle or horses.
The above cases show that the herders in the central part of the East Ujimqin Banner (see Case 2) mainly raise sheep, keeping two horses and four cattle but no goats. The herders of Case 3, living in the north of the East Ujimqin Banner, also raise sheep as their main livestock, along with 2 horses, but no cattle or goats. The herders of Case 4 in the west raise only sheep and goats, with no cattle or horses. It can be seen that the composition of herds is dominated by small animals: among the sheep, mainly ewes, supplemented by rams, together with some goats. Large animals such as cattle, horses, and camels have largely been expelled from the family herd structure because of their long rearing cycles and low economic benefits. Small animals are mainly represented by sheep, which have seen a substantial increase in number and proportion within the herd. This strong representation of sheep is connected to the animal's increasingly important position in the life of herders, because sheep breed fast, have short feeding periods, bring high economic benefits, and hold a large competitive advantage in the market. Herders, who have been introduced to the market system in a short space of time, have begun to rely more and more on the market and economic liquidity, striving to participate in the market economy in order to increase family income.
Technological integration & livelihood transformation: modernization of pastoral mode of production
My investigation has also found that many herders, under the encouragement and guidance of local policies, have been actively engaging in modern livestock production. This is shown in how they have modernized and industrialized livestock production methods and technologies, utilizing the excellent resources available in the grasslands to transform the way they breed and feed their livestock. While we investigate how pastoral livelihoods have changed, we must also pay attention to the significant impact modern technology has had on these traditional livelihoods. Modern technological devices such as grass trimmers and hay rakes, automatic milking pumps, mechanized shearing machines and solar-powered motor homes are being increasingly integrated into the traditional pastoral mode of production. Interviewees told me that before 2009, they could only trim their grasslands by hand, as they did not yet have modern mechanical devices like grass trimmers, hay rakes and balers at their disposal. Now, however, almost all the herders use mechanical devices to cut their grasslands. Households that lack such devices usually employ temporary workers to trim and bale the grass. In order to reduce the cost of independently hiring temporary workers, such households often choose to hire these workers cooperatively, or lend each other hay rakes for communal use. In this way, herders have managed to blend their traditional cooperative organizational mechanisms into a modern technological system.
White theorized that culture is a dynamic system; it can move and evolve when provided with energy (Shupin and Peihua 1998: 287). White also divided culture into three sub-systems, namely the technical system, the social system and the ideological system, pointing out that they interrelate and interact with one another. Among them, the technical system plays a leading role, because people require technical means in order to survive in nature. Nowadays, building on traditional grazing, herders have begun to consciously acquire advanced knowledge about how to improve their livestock and fatten their cattle and sheep. Furthermore, they are actively participating in government-organized training programs such as "Introducing Science and Technology to Pastoral Areas" and "Livestock Improvement". These programs allow the herders to strive for better policy support and continuously improve their craft, as well as their ability to use the technology. As a result, many families who were relatively successful in the management of their household production appeared in the East Ujimqin Banner region, including those who specialized in areas such as structural adjustment of herds, ram breeding, delivering lambs in early spring, lamb fattening, Simmental cattle breeding, yellow cattle improvement, and Mongolian horse breed conservation. These families were issued certificates as rewards by the Bureau of Agriculture and Animal Husbandry. During the study, I found that medals had become a special part of the landscape in the homes of many herders, with titles such as "Standardized Herd of Ujimqin Sheep", "Model Household for the Promotion of Agricultural Machinery" and "Ujimqin's Best Ram Rearing". Below is an example of a typical family specializing in Simmental cattle breeding.
Case 5
Wuliji, a 46-year-old ethnic Mongolian herder from Jirigalang Gacha, Enhe, Wuliyasitai town. His family of five was allocated 11,040 mu of grassland, where they raise 610 sheep. Since 2005, Wuliji has invested 30,000 yuan in building 4 warm cattle sheds made of brick and tile covering 200 m², installing a motor-pumped well and renovating the necessary winter infrastructure, including livestock sheds and water wells. At the same time, he has adjusted his herd's compositional structure and improved the breeds of his livestock, making Ujimqin sheep the priority. In 2007, he removed 200 goats and 20 cows to adjust the structure to accommodate more sheep. Of the original total of 610 sheep, the 230 that were in poor condition, had low meat yields, or had small tails were all slaughtered. The remaining 380 sheep were raised as breeding ewes. On this basis, he also changed his former practice of allowing the sheep to inbreed by mating within their immediate blood relations. Instead, he implemented the "cross breeding" method to avoid the degeneration of sheep breeds due to inbreeding. Every 3 years, he selects a number of breeding rams with a good bodily condition, high meat yield, high production rates and large tails from another herd of non-blood-related sheep to mate naturally with his own ewes. By doing so, he can gradually improve the level of purebred Ujimqin sheep in his flock. After several years of hard work, each sheep can yield 15-20 jin more, achieving prices 45-60 yuan higher than before. At present, from the 240-260 lambs delivered each year, he selects 50-60 ewe lambs of good bodily condition and with large tails to replenish his stock of breeding ewes. The rest of the lambs are all slaughtered or sold in the same year. The family's annual net income from sheep sales can be as high as 50,000 yuan.
This case suggests that in order to modernize animal husbandry, local herders are making every effort to learn the latest farming technology and expertise, transforming the traditional way of viewing and managing their businesses. After examining their past experience of breeding, herders took a number of measures: infrastructure was enhanced; the percentages of breeding ewes and rams in the herd were adjusted; and cross breeding was introduced to improve the livestock's genetic purity. As a result, there is a higher proportion of good quality livestock on the farms, and the well-known Ujimqin sheep are even purer than before. These phenomena demonstrate a positive reform in how the herders think and operate when managing their livelihoods. By utilizing more natural, economic and social capital, they are trying to adapt to the trend of modernization in this industry and pursuing sustainable methods of development through which to live in harmony with nature. Obviously, what we have seen so far are all examples of endogenous development, since the economic strategies are made by individuals or families themselves. This active model stands in marked contrast to the passive development driven by external policies.
Leaving the land, but not the countryside: diversified operations and independent development beyond the grassland

The phrase "Leaving the land without leaving the countryside" refers to the demographic transition taking place among the agricultural population: the processes of urbanization and modernization are turning an agricultural population into a non-agricultural one without removing it from the land. This is achieved mainly by developing non-agricultural management in rural areas, and through small towns absorbing the rural population who leave their farmlands. In effect, this increases the number of people who live in the countryside but do not work in agricultural production (Zhao Xishun 1984: 11). Many scholars have viewed this as a route to urbanization suited to China's national conditions. They argued that, with a monumental population and thus a small per capita area of arable land, employment problems would inevitably arise as a large amount of rural surplus labor transitions into a non-agricultural population. Furthermore, in reality it is impossible for the Chinese government to invest more into setting up new factories to absorb the newly created non-agricultural population. Therefore, the localized demographic transition taking place within the agricultural population is a vital means for China to achieve urbanization.
By the end of 2019, China had an urban population of 848.43 million and a rural population of 551.62 million, accounting for 60.6% and 39.4% of the country's total population respectively. 7 As the country undergoes urbanization, it would be impossible for the limited number of large and medium-sized cities to absorb this entire rural population of over 500 million people. The Chinese government has put forward a series of policies highlighting the need to transfer the large amount of rural surplus labor to cities and towns. This is necessary in order to revitalize rural development and achieve urbanization. However, what awaits the rural areas if all their residents are gone? To this end, we must carefully examine ways to guide and coordinate positive interaction and balanced development between urban and rural areas. Meanwhile, the resources of urban areas should be made available to rural areas when appropriate, thus enhancing the integrated development of the two.
With regard to the development of the pastoral areas in the East Ujimqin Banner, as traditional livestock farming gears itself towards modernization, increased productivity will decrease the amount of labor needed to produce the same amount of livestock products as before. At the same time, the emergence of a large number of specialized livestock farming cooperatives will inevitably lead to intensification and up-scaling of livestock production. The integration of resources will also cause a gradual swell in the surplus labor force, and the utilization of this surplus labor will become a problem. If these newly unemployed individuals moved into town, they would encounter difficulties in their family lives, as they have long been accustomed to the nomadic way of life and cannot easily blend in with the new way of life in urban areas. Therefore, the abovementioned localized demographic transition is currently a relatively plausible model for completing the transformation of the nomadic population into an urban population. Without needing to leave their grassland, the surplus labor can be combined with other production conditions beyond the grassland to stimulate the initiative of producers and promote self-sufficiency, turning the production methods of pastoral areas towards diversification and endogenous development. This will require the local government to utilize the surplus labor force to boost development in other industries in the pastoral areas, while encouraging herders to step up livestock production, thus establishing a multi-sectoral economic structure that is suitable for pastoral development. As part of this, the government can encourage herders to develop local cultural industries, building a culturally rich pastoral economy based on the region's own traditional systems of knowledge.
The subject of Case 6, Siqintuya, is a perfect example of the East Ujimqin Banner's many impressive entrepreneurs working in the ethnic cultural sector. Led by Siqintuya, several sewing enthusiasts established the "Shuangyi" Ethnic Clothing Sewing Center, now famous in the East Ujimqin Banner.
Case 6
Siqintuya is a 45-year-old ethnic Mongolian herder from Taosen Baola Gegacha, Samai Sumu. She is dexterous and naturally talented. Through the teaching of elders and her own hard work, she became skilled in the production of ethnic clothing. Initially, she became the preferred tailor for herders in the Gacha village because of her fine workmanship and her skill at infusing fashion into the making of traditional clothes. Later, as more and more people asked her to make Mongolian robes, the business grew bigger and bigger and it became difficult for her to manage it all by herself. Therefore, she decided to bring her household production of ethnic clothing into the market and expand it. In June 1997, she led other female herders from the Gacha village in founding the "Shuangyi" Ethnic Clothing Sewing Center, located in the East Ujimqin Banner. The Center also took 14 hobbyists into its ranks. To facilitate the development of the Center, she consulted experts in Hohhot and even as far afield as Beijing and Shanghai. She surveyed the market and sought the support of preferential government policies on the transfer and employment of herders as workers. In terms of the Center's production, garment processing falls into two parts based on the technical specialties of the processors, i.e., garment construction and embroidery. Siqintuya herself is in charge of orders and quality assurance. With the efforts of Siqintuya and the hobbyists, "Shuangyi" has grown into a large garment processing plant. Each year, they sew more than 300 pieces of ethnic clothing and attend many ethnic clothing exhibitions and performances, both in and outside of the Banner.
Siqintuya's ethnic garments produced at the "Shuangyi" Ethnic Clothing Sewing Center have gained serious recognition. For example, in July 2002, the Center won an Excellency Award at the first East Ujimqin Banner "Bai Ce Shu" Nadam Fair for Ethnic Arts and Culture; in January 2005, her ethnic clothing won third prize for Mongolian ethnic clothing at the East Ujimqin Banner "Winter Nadam Fair" Art Exhibition; and in November 2005, she attended the exhibition at the 9th Women's Representatives' Conference of the Inner Mongolia Autonomous Region on behalf of the East Ujimqin Banner Women's Federation. Furthermore, in January 2006, her design works were highly popular at the Region's first Arhada Cup Ujimqin Clothing Design and Performance Contest, winning second prize in design and a place among the "Top Ten Design Works". Now, two new training centers have been established in her garment factory: the East Ujimqin Banner Reemployment Training Center for Women and the Ethnic Clothing Craftsmanship Training Center. In the past 2 years, more than 80 women have completed training in these centers; while a number of trainees have stayed on to work at the center, most have started their own traditional ethnic clothing shops after completing their training. This approach is an effective response to the pressure on local herders to set up businesses in their native regions. So far, more than 20 trainees have set up their own businesses making ethnic clothing. The income of Siqintuya's clothing center is increasing year by year: in 2012 alone, the factory's revenue reached RMB 220,000, of which RMB 120,000 was paid to employees in salaries, leaving a net income of RMB 100,000.
Without leaving the grassland where she was born, the subject of this case study, Siqintuya, successfully established a sewing center for ethnic clothing, featuring traditional craftsmanship and attracting many like-minded hobbyists in the process. Her story sets a perfect example for the localized demographic transition behind the idea of "leaving the land without leaving the countryside". She managed to innovate on traditional economic practices, using methods specifically suited to local conditions. Consequently, projects like this help to advance the positive reorganization of both the economic and demographic structure of the pastures, while preserving traditional ethnic culture. Furthermore, the surplus labor from the livestock production industry, in this case herders who have left herding itself but not the grassland, have better options available to them. As Siqintuya described, the ethnic clothing they make is mainly for sale, but the designing and studying that they do every day is extremely beneficial for preserving the near-extinct traditional craftsmanship used in the making of Ujimqin clothing.
Case 7
Sarina, a 39-year-old ethnic Mongolian herder from Dabuxilatu Gacha, Wuliyasitai Town. In 2002, Sarina decided to reorient production in order to alleviate the problems brought about by grassland degradation, such as a lower family income and the increasing difficulty of making a living. She sold all of her livestock and enclosed her 3000 mu (2 km²) of allocated grassland to set up the "Ujimqin Dairy Franchise Station". In 2005, aiming to expand the scale of production, Sarina purchased a series of processing equipment used for fermentation, milk refining, molding, and air drying. As for the sources of her milk, in addition to her own dairy cattle, she also purchased about 1000 jin of fresh milk per day at 1.00 yuan per jin from the dairy farmers in the Suen Baolige community. Due to the large amount of processing and production needed, and the lack of a sufficient workforce, she hired three herders from the Gacha at a monthly salary of 400-600 yuan. The Station currently produces over 300 jin of dairy food products every day, with a gross income of around 1000 yuan. In June 2006, Sarina registered the trademark "Zhusaleng". Then, by continuously improving product packaging and integrating the products into grassland tourism, Sarina was able to gradually build up the East Ujimqin Banner's dairy farming cultural industry by integrating production and marketing within one operation. In recent years, over 10 herders from the Gacha have found employment at the Station as the industry develops and expands.
By selling her milk products on the market, Sarina has set the standard for the other young herder entrepreneurs of Dabuxilatu Gacha. Under her influence, more and more young herders have begun to experiment with diversified business management and independent development beyond animal husbandry. To a certain extent, these efforts have promoted the transformation of Gacha herders' methods of production and management, stimulating pastoral economic development in the East Ujimqin Banner. In recent years, under the guidance and encouragement of the local government, Mongolian clothing, leather boots, bone carvings, saddles and other manufacturing workshops have sprung up all over the streets of East Ujimqin Banner. Herders strive to maintain nomadic lifestyles while consciously inheriting their national culture. Their domestic space is also expanding with economic activities, bringing advanced resources and technologies back to the grassland after "leaving home" in the short-term, so as to better engage in the animal husbandry economy "without leaving the land". Herders constantly and flexibly adjust their family strategies, but always adhere to the tradition of "making a living" directly from the grassland (Fei Xiaotong 1998).
The localized transition mainly relies on the local people themselves. Combining the natural and cultural characteristics of the pastoral area, vigorous work is underway to develop the productivity of animal husbandry and to improve the herders' independent management methods and development capabilities. People are now trying to break away from the single economic structure of the pastoral area, so as to combine animal husbandry with other professions to promote coordinated development between each sector. In this way, it could be possible to avoid the outflow of labor force, capital, raw materials and traditional culture, which is usually caused by the regional economic development rigidly driven by the growth of large and medium-sized cities and towns. Meanwhile, the gap between herders and urban residents could also be narrowed, both in terms of productivity and ideology. Therefore, we can conclude that the multi-occupational structure, constructed deliberately by local herders and adapted to grassland ecology, further demonstrates the various possibilities of social development in pastoral areas. They are striving to make their own voices heard in the national discourse on development, taking advantage of their existing natural and social capital to actively improve their ability to develop economically, and constructing a set of behavioral strategies that are suitable for the modern market economy.
Conclusions: another perspective on endogenous development
The above ethnographic cases reveal how the once isolated pastoral society at China's northern frontier is able to respond to the government's standpoint on modernization and effectively put this standpoint into practice. Local society refused to passively develop, striving instead to actively adapt itself to new conditions. Working with the family as the central unit, local people were able to constantly change and adapt their ways of life. They actively transformed their concepts of development, adjusted herd structures, promoted livestock improvement, and carried out lamb fattening. Pastoral society has been striving to learn new skills, incorporate technology into traditional livestock production, and even extend their business beyond the nomadic economy. Based on local traditional knowledge, they have independently developed their minority ethnic cultural industries and created a broader range of survival strategies that combine the traditional with the modern. The concept of active development can be seen clearly in the process of modernization undergone in the pastoral areas of the East Ujimqin Banner. Their practices beg further reflection on the previous concept of stimulating development from outside a community.
The autonomous development that is suitable for this ethnic group's personal and cultural characteristics may be an appropriate basis for the development of wider pastoral society. The concept of "endogenous development", put forward by Professor Tsurumi of Japan's Sophia University in the 1980s, provides inspiration for how today's pastoral societies will develop in the future. It is a theory that emphasizes taking full advantage of the natural and social resources within a region in order to give full play to the consciousness and subjectivity of the local people. In short, this is an approach that attaches importance to development triggered from within a community. At present, a number of questions arise in the context of the implementation and promotion of the Rural Revitalization Strategy. How best to transform such vast pastoral areas? How can we strike a balance between modernity and local issues in the discourse of development, so as to better integrate the development of the first, second, and third sectors together as one? In formulating policies aimed at the economic development of ethnic areas, close attention must be paid to the development demands and spiritual worlds of local societies. We must seek to deeply understand each local society, both in its particular details and in its overall nature. Policies should give full play to the traditional mechanisms of the local culture, giving substantial consideration to the histories, cultures, and realities that circulate there. We must not act blindly as we implement the abolishment of the dual urban-rural structure. If we eliminate the small-farm and small-pastoral economies to radically speed up China's urbanization, we run the risk of casting farmers and herders into urban slums and robbing them of their sustainable traditional livelihoods.
The study shows that cultivating the ethnic minorities' own development mechanisms and fostering their initiative to take part in development is an effective and beneficial measure. Otherwise, economic growth and development can only be a short-term expansion in scale, unable to sustain itself sufficiently to support the ethnic minorities' long-term development. There is evidence to prove that local society tends to understand development and can practice transformation through their own original social and cultural systems. Under the new circumstances, the endogenous development among the pastoral herders of the East Ujimqin Banner is now a survival practice that is based on traditional nomadic life and the reconstruction of local knowledge. This set of practical strategies combining traditional and modern development not only connects pastoralists with the external market and the nation as modes of production, but also enables the livelihoods of local societies to be maintained and traditional social culture to continue despite the ever-changing impacts of modernization.
|
v3-fos-license
|
2021-06-22T17:54:41.969Z
|
2021-05-06T00:00:00.000
|
235556234
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2311-7524/7/5/100/pdf",
"pdf_hash": "ec35181bc38132607520d1fbcebc94842cdb1537",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2097",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"sha1": "5747cfc3bf5f56a6bd975caaddcaf807254a1745",
"year": 2021
}
|
pes2o/s2orc
|
Effects of Vermicompost Leachate versus Inorganic Fertilizer on Morphology and Microbial Traits in the Early Development Growth Stage in Mint (Mentha spicata L.) And Rosemary (Rosmarinus officinalis L.) Plants under Closed Hydroponic System
The objective of this study was to compare the morphology of M. spicata and R. officinalis plants, as well as relative abundance quantification, colony-forming units, ribotypes, and biofilm-forming bacteria, under an inorganic fertilizer versus vermicompost leachate in the rhizosphere in a closed hydroponic system. In mint (Mentha spicata) plants treated with the vermicompost leachate, a growth increase was determined mainly in root length, from an average of 38 cm in plants under inorganic fertilizer to 74 cm under vermicompost leachate. In rosemary (Rosmarinus officinalis), no changes were determined between the two treatments. There were differences in the compositions of the microbial communities: for R. officinalis, eight ribotypes were identified, seven for inorganic fertilizer and four for vermicompost leachate. For M. spicata, eight ribotypes were identified, three of them exclusive to vermicompost leachate. However, no changes were observed in microbial communities between the two treatments, although some changes were observed in the compositions of these communities over time. In both cases, the main phylum found was Firmicutes, with 60% for R. officinalis and 80% for M. spicata, represented by the Bacillus genus. In conclusion, the use of vermicompost leachate in a hydroponic system is a viable alternative to achieve an increase in the production of M. spicata, and for both plants (mint and rosemary), the quality of the product and the microbial communities that inhabited them remained unaltered.
Introduction
At present, the growing global population has put pressure on agriculture in different ways: the increase in demand for food and the need to meet this demand in an environmentally friendly manner. Although the use of chemical fertilizers has led to an enhancement in crop production, several major health- and environment-related concerns are associated with their use [1,2]. Pollution and the increase in global temperature are predicted to have negative consequences for agriculture in the coming decades [3]. Likewise, future climate-change scenarios predict a more frequent occurrence of extreme conditions [4]. In this sense, hydroponic systems have emerged as an alternative to improve yield, product quality, water management, land saving, nutrient recycling, and environmental and pathogen control. Hydroponic systems are cultivation technologies that use nutrient solutions rather than soil. The mean, maximum, and minimum temperatures in the shade-enclosure facility were 21.4, 31.8, and 8.9 °C, respectively, with a mean of 70% relative humidity. Meteorological records were obtained during the study from an automated weather station located inside the shade-enclosure facility.
Plant Cultivation Conditions and Hydroponic System
The experiment was carried out from September to November. M. spicata and R. officinalis cuttings were obtained from mother plants of their regional cultivars and were placed in pots with vermiculite until they developed enough roots to be able to absorb nutrients from the fertilizers applied in the treatments. The pots were placed in 30 polypropylene containers of 20 L (24.5 × 16 × 10 cm (length × width × height)) filled with water. Oxygen supplementation in the containers was provided with a Blogger Sweetwater pump (model SST20, 50 Hz). The water volume was maintained constant to form a closed hydroponic system; there was no recirculating water because the study covered the early vegetative stage (September to November).
Treatments and Experimental Design
The experimental design consisted of two treatments: one applying vermicompost leachate (L) and the other applying inorganic fertilizer (SS; control group) [26]. Vermicompost leachate (L) was produced at the CIBNOR experimental field according to recommendations by Gunadi et al. [27]. The vermicomposting process was carried out in 200 L containers cut in half, to which five holes were made in the base. Subsequently, a 5 cm thick layer of gravel and an anti-aphid mesh were placed to separate the gravel from the bed where the earthworms developed. Kitchen waste and manure were used as food for the earthworms in a 1:1 volume:volume ratio. Both the kitchen waste and the manure were precomposted for 21 days before being used as food for the earthworms. The feeding process was carried out using 5 cm thick layers of precomposted food every week for 12 weeks. The vermicomposting process was considered to have ended when a homogeneous material was observed without remnants of the original material. The vermicompost was then laid out and sheltered in a dry place away from light for 90 days for its mineralization. Vermicompost leachate was obtained according to the methodology described by García-Galindo et al. [28], whereby 5 kg of vermicompost was placed in a container, three liters of distilled water was poured into the container, and the leachate was collected. Information on the nutrient content of both the inorganic fertilizer and the vermicompost leachate is shown in Table 1. The experiment was established under a completely randomized design with 15 replicates for each treatment (vermicompost leachate and inorganic fertilizer). Each replicate consisted of a container as described above, with 12 pots and one plant per pot.
Treatments were applied once at five days after sowing (DAS). For the inorganic fertilizer, a commercial 17% NPK fertilizer was used to prepare 10 mL containing 0.0079, 0.000087, and 0.070 parts per million of N, P, and K, respectively, diluted in 40 L of tap water (the capacity of the pot container). For the vermicompost-leachate treatment, 140 mL containing 0.00709, 0.000259, and 0.074 parts per million of N, P, and K, respectively, was diluted in 40 L of tap water. The nutrient doses of N-P-K corresponded to the minimum established for these crops in the region, to examine whether any differences could be detected in microbial and morphological traits with the use of an organic versus an inorganic fertilizer. Plants were analyzed in early-stage growth at 35 days after fertilizer application.
Morphological Traits and Relative-Growth Analysis
Stem length (SL, cm), fresh stem weight (FSW), dry stem weight (DSW), foliar area (FA), fresh foliar weight (FFW), dry foliar weight (DFW), root length (RL), fresh root weight (FRW), and dry root weight (DRW) were evaluated in five M. spicata plants and five R. officinalis rosemary plants before treatment application and at the end of the experiment (35 DAS). Stem and root weights (g) were obtained using an analytical scale (Mettler Toledo, AG204); for dry weights, an oven with forced air circulation at 70 °C (Shel-Lab®, FX-5, series 1000203) was used until constant weight. Data of the initial and final dry weights were used to calculate the total growth rate (TGR), foliar growth rate (FGR), root growth rate (RGR), and stem growth rate (SGR) in grams per day, according to Hunt [29], following Formula (1):

GR = (DW2 − DW1)/(t2 − t1)    (1)

where DW2 and DW1 are the total-plant (TGR), foliar (FGR), root (RGR), or stem (SGR) dry weight (g) recorded at times t2 (time of sampling) and t1 (beginning of the experiment), respectively. The difference (t2 − t1) is expressed in days. TGR, FGR, RGR, and SGR are expressed in g day−1.
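As an illustration, Formula (1) can be applied in a few lines of Python (a minimal sketch; the function name and the sample dry weights are ours, not the paper's):

```python
def growth_rate(dw1: float, dw2: float, t1: float, t2: float) -> float:
    """Absolute growth rate in g per day, following Hunt's Formula (1):
    GR = (DW2 - DW1) / (t2 - t1)."""
    if t2 <= t1:
        raise ValueError("t2 must be later than t1")
    return (dw2 - dw1) / (t2 - t1)

# Hypothetical example: total dry weight rises from 1.2 g to 2.6 g over 35 days.
tgr = growth_rate(dw1=1.2, dw2=2.6, t1=0, t2=35)
print(f"TGR = {tgr:.3f} g/day")  # 1.4 g over 35 days = 0.040 g/day
```

The same function serves for TGR, FGR, RGR, and SGR; only the pair of dry weights changes.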
Photosynthetic Pigments
For M. spicata and R. officinalis plants under the organic and inorganic treatments, we determined chlorophyll in seven plants (one leaf per plant) per treatment. M. spicata SPAD values [30,31] were recorded for 20 consecutive days after the beginning of both the organic and inorganic treatment applications. In R. officinalis plants, chlorophyll was evaluated twice: before any treatment application, and 20 days after both treatment applications. For R. officinalis, chlorophyll was extracted from leaf tissue following the acetone extraction methodology, and absorbance was measured with a UV/visible spectrophotometer (model HELIOS OMEGA, Thermo Scientific, Vantaa, Finland). Chlorophyll a and b concentrations were estimated by applying the following functions [32]:

Chlorophyll a (mg mL−1) = 11.64(A663) − 2.16(A645)

where A663 and A645 correspond to the absorbance values at wavelengths (λ) of 663 and 645 nm, respectively.
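As a quick illustration, the chlorophyll-a function can be applied directly (a sketch; only the chlorophyll-a coefficients are reproduced in the text above, so chlorophyll b is omitted, and the absorbance readings are hypothetical):

```python
def chlorophyll_a(a663: float, a645: float) -> float:
    """Chlorophyll a (mg/mL) from absorbances at 663 and 645 nm [32]:
    Chl a = 11.64 * A663 - 2.16 * A645."""
    return 11.64 * a663 - 2.16 * a645

# Hypothetical absorbance readings from an acetone extract:
print(f"Chl a = {chlorophyll_a(a663=0.52, a645=0.31):.4f} mg/mL")
```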
Sampling for Bacterial-Community Characterization
To determine the influence of organic and inorganic fertilizers on the microbial communities of the plant rhizosphere, samples of the root rhizosphere were taken in the hydroponic system as follows: a water sample of 50 mL with the roots (0-0.5 cm) from three different reservoirs at three times (1, 7, and 35 DAS). The collected samples were processed immediately for: (i) total DNA isolation from the water (rhizosphere) samples, and (ii) bacterial isolation from R. officinalis and M. spicata root samples with the methodology that follows below. The vermicompost was free of pathogens. To determine the colony-forming units (CFU), one milliliter of the remaining sample was used to perform serial dilutions in 0.85% (w/v) saline solution (from 10−2 to 10−7). Lastly, 100 µL of each dilution (from 10−2 to 10−7) was plated on nutrient agar (NA) and incubated for 24 h at 30 °C. After 24 h, the CFU count was performed.
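Back-calculating the concentration in the original sample from a plate count follows standard plate-count arithmetic (CFU/mL = colonies × dilution factor / volume plated). A minimal sketch with made-up counts; the helper function and its values are ours, not the paper's:

```python
def cfu_per_ml(colonies: int, dilution_exponent: int, plated_ml: float = 0.1) -> float:
    """Back-calculate CFU/mL of the undiluted sample from a plate count.

    colonies: colonies counted on the plate
    dilution_exponent: e.g. 5 for the 10^-5 dilution
    plated_ml: volume spread on the plate (100 uL = 0.1 mL, as in the protocol)
    """
    return colonies * (10 ** dilution_exponent) / plated_ml

# Hypothetical: 42 colonies on the 10^-5 plate from a 100 uL aliquot.
print(f"{cfu_per_ml(42, 5):.2e} CFU/mL")  # prints 4.20e+07
```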
After the CFU count, bacterial colonies were isolated on the basis of their morphology. A representative colony of the five most abundant colonial morphologies was reseeded by streak dilution on a new plate of NA and incubated at 30 °C overnight. This step was repeated until a pure isolate in each case (a single bacterial morphology per isolate) was obtained. The obtained pure isolates were stored in glycerol 30% (v/v) at −80 °C until their use.
DNA Isolation
The total DNA isolation of the water samples and bacterial isolates was carried out according to the protocol with slight modifications [33]. For water samples, 25 mL was centrifuged at 5000× g for 10 min, and the supernatant was discarded. For bacterial isolates, 3 mL of liquid culture in nutrient broth (NB) was grown at 30 °C overnight and centrifuged at 5000× g for 5 min, and the supernatant was discarded. Both the pellets from the water samples and the bacterial isolate pellets were processed in the same way. The resulting pellet was resuspended in 1 mL of a lysis buffer (15% sucrose, 0.3 mg/mL lysozyme, 0.05 M EDTA, and 1 M Tris, pH 8) and incubated for 30 min at 37 °C. Then, 100 µL of 10% SDS (w/v), 100 µL of 5 M NaCl, and 5 µL of proteinase K (0.4 mg/mL) were added and incubated under agitation for 1 h at 50 °C. After incubation, 200 µL of phenol-chloroform-isoamyl alcohol (25:24:1) was added to 500 µL of the solution, which was briefly vortexed and then centrifuged at 12,000× g for 5 min. The aqueous phase was recovered, and 200 µL of ammonium acetate (7.5 M) and 500 µL (1 volume) of absolute ethanol were added; the mixture was mixed by inversion and left to precipitate at 4 °C overnight, then centrifuged at 12,000× g at 4 °C for 15 min. The supernatant was discarded, and the pellet was washed twice with 100 µL of 70% (v/v) ethanol. The DNA was dried at room temperature, resuspended in molecular-biology-grade water, and stored at −20 °C until use.
Relative-Abundance Quantification by qPCR
The relative abundance of the bacterial population was assessed through qPCR to determine the effect of treatments. The qPCR was performed on a CFX96 Touch™ Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA) according to the instructions of the iTaq™ Universal SYBR ® Green Supermix (Bio-Rad, Hercules, CA, USA). The relative abundance of the total bacteria in the rhizosphere samples for each treatment was assessed according to the methodology described by López-Gutiérrez et al. [33] with slight modifications.
Characterization of Bacterial Communities by Ribotype Assay Analysis (16S rRNA Gene)
Ribotype assay analysis was conducted according to the Bogino et al. [34] methodology. Total DNA from 36 water samples (3 samples × 3 times × 2 treatments × 2 plant species = 36 samples in total) and 60 bacterial isolate strains (30 isolate strains for each plant across both the organic and inorganic fertilization treatments) were characterized by amplified ribosomal DNA restriction analysis (ARDRA). Bacterial genomic DNA was extracted from each isolate as mentioned previously. For 16S rRNA gene amplification, we used primers fD1 (5′-AGAGTTTGATCCTGGCTCAG-3′) and rD1 (5′-AAGGAGGTGATCCAGCC-3′). PCR amplification products (~1500 bp) were processed by a restriction endonuclease assay with HaeIII (Thermo Fisher Scientific), and the resulting fragments were electrophoretically separated on a 2% (w/v) agarose gel, stained with ethidium bromide, visualized under UV radiation, and photographed. Ribotype identification is directly associated with a specific restriction fragment fingerprint. The community structure dendrogram was constructed on the basis of the ribotypes of the bacterial isolates with GelCompar II software. Bacterial isolate strains belonging to either unique majority ribotypes or common ribotypes were selected for further identification through 16S rRNA gene nucleotide sequence analysis with primers COM 1 (5′-CAGCAGCCGCGGTAATAC-3′) and COM 2 (5′-CCGTCAATTCCTTTGAGTTT-3′), following the methodology described by Stach et al. [35]. The 16S rRNA gene sequences were analyzed using the BLAST (blastn) search program (National Center for Biotechnology Information (NCBI)).
Biofilm-Formation Assay
Biofilms are microbial communities that adhere to surfaces and are enclosed in a protective matrix; this is also the primary structure from which bacteria interact with plants and other eukaryotes. Thus, to characterize the capability of the rhizosphere (water sample) isolate strains from M. spicata and R. officinalis to form biofilms, we carried out the crystal violet (CV) staining quantitative assay of Labrie et al. [36] with slight modifications. CV staining absorbance was measured at 590 nm using a spectrophotometer (Multiskan Spectrum, Thermo Scientific, Wilmington, DE, USA).
Statistical Analysis
Data were analyzed using univariate and multivariate analysis of variance (ANOVA and MANOVA) for one-way classification, with the nutrition source as the study factor. For chlorophyll content, multivariate analysis of variance (MANOVA) was used, and significant differences between means for each recorded date were determined by two-way analysis of variance (ANOVA). Least significant differences (LSD) in Tukey's HSD test (p = 0.05) were estimated for one-way ANOVA. In all cases, differences between means were considered significant at p < 0.05. All statistical analyses were performed with the Statistica software program v10.0 and GraphPad Prism version 6.0 (GraphPad Software, San Diego, CA, USA).

Morphological traits for M. spicata are shown in Table 2. There was no difference between the vermicompost leachate and inorganic treatments for the relative growth rates of leaves (FGR), stems (SGR), and the total growth rate (TGR); the root growth rate (RGR) was lower under vermicompost leachate than under inorganic fertilizer (Table 3). Chlorophyll a, b, and total content did not show any differences between plants under the vermicompost leachate or the inorganic treatment (Table 4 and Figure 1).
R. officinalis
For all morphological traits, there were no differences between the vermicompost leachate and inorganic treatments (Tables 2 and 3), except for rosemary under the leachate treatment, which showed lower growth in RGR (Table 3). Chlorophyll a, b, and total content were not affected by either the organic or the inorganic treatment; the only variable that exerted an effect was the time (date) of chlorophyll sampling (Table 4).
CFU Quantification and Relative Abundance of Bacterial Communities
The relative abundance of total bacterial communities due to the effect of treatments was assessed by CFU estimation and by a qPCR-based assay. For both M. spicata and R. officinalis, no differences were determined between the vermicompost leachate and inorganic treatments regarding the abundance of bacterial populations; however, an increase in relative abundance in time was more evident for the vermicompost leachate ( Figure 2).
Bacterial community structure kinetics between both vermicompost leachate and inorganic treatments was analyzed. Thirty-six total DNA water samples were analyzed by amplified ribosomal DNA restriction analysis (ARDRA). As this test showed for M. spicata and R. officinalis, bacterial community structures underwent changes through time without a significant effect between treatments (Figure 3a,b). Thus, these results highlight the feasibility of replacing inorganic fertilizer with the vermicompost leachate without significant impact on the bacterial abundance or bacterial community structures of M. spicata and R. officinalis in hydroponic systems.
Composition and Diversity of Bacterial Communities
A total of 60 bacterial isolate strains (30 isolate strains for each plant across both the vermicompost leachate and inorganic fertilization treatments) were characterized by ARDRA. From ARDRA, 15 ribotypes were identified in M. spicata and R. officinalis according to the fingerprints yielded by the restriction assay with the HaeIII restriction enzyme (Table 5). In the case of R. officinalis, eight different ribotypes were identified (Figure 4). Of these eight ribotypes, seven were present in the inorganic treatment and four in the vermicompost leachate. Of the ribotypes present in the inorganic treatment, four were exclusive to this treatment, while only one ribotype was exclusive to the vermicompost leachate. In the case of M. spicata, there were also eight different ribotypes across the vermicompost leachate and inorganic treatments. In the inorganic treatment, there were five ribotypes, none of which was exclusive to this treatment. In the vermicompost leachate treatment, eight ribotypes were present, three of which were exclusive to this treatment. However, it was not possible to characterize the ribotype to which three bacterial isolates from M. spicata belonged (two from the inorganic treatment and one from the organic treatment).
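The treatment-exclusive ribotype tallies above reduce to simple set arithmetic. A sketch with hypothetical ribotype labels chosen to reproduce the R. officinalis counts (the actual ribotype identities are in Table 5, not here):

```python
# Hypothetical ribotype labels per treatment (illustration only).
inorganic = {"R1", "R2", "R3", "R4", "R5", "R6", "R7"}  # 7 ribotypes
leachate = {"R4", "R5", "R6", "R8"}                     # 4 ribotypes

total = inorganic | leachate              # union: all ribotypes observed
exclusive_inorganic = inorganic - leachate  # found only under inorganic
exclusive_leachate = leachate - inorganic   # found only under leachate

print(len(total))               # 8 ribotypes overall
print(len(exclusive_inorganic)) # 4 exclusive to inorganic
print(len(exclusive_leachate))  # 1 exclusive to leachate
```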
Representative bacterial strains were identified by 16S rRNA gene sequencing. Bacterial isolate strains were selected according to their ribotype ARDRA profiles (Table 6). Most bacterial isolate strains belonged to the Firmicutes phylum, mainly composed of the Bacilli class, the Bacillaceae family, and the Bacillus genus. Bacterial isolate strains belonging to the Alphaproteobacteria, Betaproteobacteria, and Gammaproteobacteria classes of the Proteobacteria phylum were also found (Table 6). Ribotypes found in rosemary bacterial isolate strains belonged to Firmicutes (60%), mainly composed of the Bacillus genus. Comparing the vermicompost leachate and inorganic treatments, the Firmicutes phylum was the most abundant in both treatments, and the Alphaproteobacteria, Betaproteobacteria, and Gammaproteobacteria classes showed greater abundance in the inorganic treatment than in the vermicompost leachate treatment (Figure 4, Table 6). The ribotypes found in M. spicata bacterial isolate strains belonged to Firmicutes (80%) and were mainly composed of the Bacillus genus. Interestingly, 10% of the bacterial isolate strains were unclassified. Comparing the vermicompost leachate and inorganic treatments, the most abundant phylum was Firmicutes, followed by the Gammaproteobacteria class (Tables 5 and 6). The Betaproteobacteria class showed greater abundance in the vermicompost leachate treatment than in the inorganic treatment (Tables 5 and 6). Therefore, the Firmicutes phylum was the most abundant in both R. officinalis and M. spicata plants, under both the vermicompost leachate and the inorganic treatment.
Biofilm-Forming Ability of Bacterial Communities
All bacterial isolate strains from R. officinalis (30 isolates) and M. spicata (30 isolates) were assessed for adhesion and biofilm-establishment capability with a CV assay. The CV assay showed that all bacterial isolates were able to adhere to the surface and establish biofilms ( Figure 5). Differences were found in biofilm formation that were categorized according to the capability to retain CV measured by the OD at 595 nm (CV-OD595) [28], for all bacterial isolate strains as follows: weak (<0.6), moderate (0.6-1.2), and strong (>1.2). R. officinalis bacterial isolate strains with the vermicompost leachate treatment showed that 3 bacterial isolates formed a moderate biofilm, 2 a strong biofilm, and the remaining 10 a weak biofilm. For the bacterial isolate strains from the inorganic treatment, 4 bacterial isolates formed a moderate biofilm, 1 a strong biofilm, and the remaining 10 a weak biofilm. The M. spicata bacterial isolate strains with the vermicompost leachate treatment showed that 1 bacterial isolate formed a strong biofilm, 2 a moderate biofilm, and the remaining 12 formed a weak biofilm. For the inorganic treatment, 2 bacterial isolates were able to form a strong biofilm, 1 a moderate biofilm, and the remaining 12 a weak biofilm. Altogether, for the R. officinalis and M. spicata plants and both the vermicompost leachate and the inorganic treatment, most bacterial isolates were able to form weak biofilms in the conditions assessed in this study.
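The weak/moderate/strong categorization described above is a simple thresholding of the CV-OD595 readings; a minimal sketch (the function name and the sample readings are ours, not the paper's):

```python
def biofilm_category(od595: float) -> str:
    """Categorize biofilm formation by crystal-violet OD at 595 nm,
    using the thresholds given in the text:
    weak < 0.6, moderate 0.6-1.2, strong > 1.2."""
    if od595 < 0.6:
        return "weak"
    if od595 <= 1.2:
        return "moderate"
    return "strong"

# Hypothetical OD readings for three isolates:
for od in (0.35, 0.90, 1.45):
    print(od, biofilm_category(od))
```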
Discussion
The vermicompost leachate treatment did not negatively affect the growth of either M. spicata (mint) or R. officinalis (rosemary) plants; indeed, for M. spicata plants, we were able to determine a growth increase for several morphometric parameters. For R. officinalis plant growth, across all morphometric parameters, the only difference was in root growth, which was lower under vermicompost leachate than under inorganic fertilizer; similar results were found by Peng et al. [37]. This is important, since the aim of healthy food production is to avoid the application of inorganic fertilizer [25,[38][39][40][41]. Furthermore, vermicompost leachate contains a high amount of plant hormones of microbial origin, such as auxins, gibberellins, and cytokinins, giving rise to plant-growth enhancement and acting as a liquid fertilizer [15,[42][43][44][45]. Emperor and Kumar [45] determined that organic matter processed in the earthworm gut and then excreted as vermicast undergoes an increased level of microbial population, microbial respiration, microbial enzyme activity, N, P, and K enrichment, bacterial exopolysaccharide production, establishment of lignocellulolytic activity, and proliferation of nitrifying and nitrogen-fixing microorganisms. The above allows us to conclude that the use of vermicompost to replace inorganic fertilizers is a viable option in hydroponic systems [43,[46][47][48][49]].
The relative abundance of the bacterial communities showed no differences between the vermicompost leachate and inorganic treatments for either R. officinalis or M. spicata plants, but did show time-related differences, as expected and in accordance with previous work in which the analyzed bacterial communities behaved in the same way [50,51]. The bacterial-community structures for the R. officinalis and M. spicata plants under both treatments were mainly composed of the Firmicutes phylum, followed by the Proteobacteria phylum, represented by the Alphaproteobacteria, Betaproteobacteria, and Gammaproteobacteria classes; we were also able to determine the presence of beneficial bacteria of the Bacillus (Firmicutes phylum) and Pseudomonas (Proteobacteria phylum) genera. These bacteria are designated beneficial or plant-growth-promoting bacteria (PGPB), and characterization of the rhizosphere bacterial-community structures of other plant members (Thymus vulgaris, T. citriodorus, T. zygis, Santolina chamaecyparissus, Lavandula dentata, and Salvia miltiorrhiza) of the Lamiaceae family showed that Proteobacteria, Firmicutes, Bacteroidetes, Actinobacteria, Acidobacteria, and Gemmatimonadetes were among the most abundant bacterial phyla [5,52-56].
Lastly, the capability to establish biofilms was assessed for all 60 bacterial isolate strains from the M. spicata and R. officinalis plants under both treatments, with no differences between treatments, highlighting the essential role of biofilm development in bacterial survival and physiology [36]. Most of the isolates (66.67% in R. officinalis and 80% in M. spicata) had a weak capacity (CV-OD595) to form a biofilm; a smaller proportion were able to produce a strong biofilm, for both plants and both treatments. In an aqueous environment such as a hydroponic system, biofilm establishment may follow other mechanisms that are not yet characterized.
Conclusions
In this study, we showed that substituting vermicompost leachate for inorganic fertilizer in a hydroponic system allows us to maintain or increase the production of two crop plants of agricultural importance, M. spicata (mint) and R. officinalis (rosemary). Furthermore, we determined that this fertilizer substitution modifies neither the bacterial communities of the two plants nor their ability to form biofilms. Over time, bacterial relative abundance tended to increase under vermicompost leachate, which is important to consider in future studies. Therefore, in agreement with our results and recent research conducted on open-field cultures, we propose vermicompost leachate as a feasible replacement for inorganic fertilizer in hydroponic systems, enabling sustainable and ecofriendly agricultural production in the face of a growing population and the pollution derived from the use of inorganic fertilizers.
|
v3-fos-license
|
2018-12-16T18:51:52.230Z
|
2017-01-01T00:00:00.000
|
54930936
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/42/matecconf_eitce2017_01003.pdf",
"pdf_hash": "ef0f4d3682b51c6f9af6353e89657c446daeb8e7",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2098",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"sha1": "ef0f4d3682b51c6f9af6353e89657c446daeb8e7",
"year": 2017
}
|
pes2o/s2orc
|
Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition
In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we first structure the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. Then low-rank and sparse decomposition is performed, guided by the speech rank value, to remove the noise. Extensive experiments have been carried out under white Gaussian noise conditions, and the experimental results show that the proposed method performs better than conventional speech enhancement methods, yielding less residual noise and lower speech distortion.

Keywords: speech enhancement; subspace method; low-rank plus sparse decomposition.
Introduction
Speech enhancement refers to the improvement in quality and intelligibility of noise-corrupted speech signals, using supervised or unsupervised methods. It is widely used as a pre-processing block in many applications, such as automatic speech recognizers and other communication systems.
Over the last five decades, many algorithms have been proposed for speech enhancement. Typical algorithms include spectral subtraction [1], minimum mean square error (MMSE) estimation [2][3][4], Wiener filtering [5][6][7][8], and subspace methods [9][10][11][12][13]. Spectral subtraction and Wiener filtering have been widely used for enhancing speech because of their simplicity and ease of implementation in single-channel systems, but one of their major drawbacks is the musical noise they produce after enhancement. Signal subspace approaches [9][10][11][12][13] have been shown to give a better compromise between residual noise and signal distortion of the output than the other existing techniques.
The signal subspace approach was first proposed by Ephraim et al. The principle of this method is to separate the noisy speech observation space into a signal subspace and a noise subspace, and to reconstruct the enhanced speech using only the components of the signal within the signal subspace. In subspace-based algorithms, subspace decomposition is a critical step for subspace separation, often performed via the Karhunen-Loeve transform (KLT) [10] or singular value decomposition (SVD) [9]. The main issue in developing a subspace-based model is how to split and refine the signal and noise subspaces in an optimal way. In [14], a variance-of-the-reconstruction-error criterion was introduced to optimize subspace selection for speech enhancement. In [15], human auditory psychoacoustic properties were incorporated into the subspace filter to reconstruct the enhanced signal. Although many efforts have been made to improve subspace methods, existing subspace-based speech enhancement methods still suffer from low decomposition accuracy in the presence of large noise, resulting in high residual noise within the enhanced speech in strong-noise cases.
In this paper, we propose a new subspace-based method for speech enhancement based on the principle of low-rank and sparse decomposition (LSD). The main idea behind our method is motivated by the recent development of low-rank and sparse theory [16]. According to this theory, if a given corrupted data matrix Y has an underlying low-rank structure yet is corrupted by sparse additive noise, the underlying low-rank component L can be effectively recovered by solving a convex optimization problem, even if the noise is arbitrary in magnitude. In the time domain, owing to the short-time stability of human speech, speech signals can be assumed to have a low-rank structure. On the other hand, due to its randomness, background noise is more variable and can thus be viewed as sparse and high-rank. LSD theory can therefore be exploited to recover the underlying speech from corrupted speech signals.
The rest of the paper is organized as follows. We first briefly review previous work in Section 2. In Section 3, we describe the LSD-based signal subspace speech enhancement method. Section 4 presents the experiments and results. Finally, we give the conclusions and future work in Section 5.
Related work
The goal of the principal component analysis (PCA) technique is to determine the most significant basis with which to re-express a noisy speech data set [17]. This new basis filters out the noise and reduces multidimensional speech data to lower dimensions by avoiding redundancy.
Let us consider the problem of enhancing a speech signal contaminated by independent additive noise. Let x(t) and d(t) denote the sampled clean speech and noise signals, respectively. The observed noisy speech signal is

y(t) = x(t) + d(t). (1)

Suppose y(t) is divided into frames of length N. Arranging the N-dimensional vectors into an (M-l+1)×l Toeplitz-structured matrix, we obtain

Y = X + D. (2)

Assuming that the rank of the underlying speech matrix X is r, the optimal enhanced speech matrix X̂ can be estimated according to the least-squares criterion

min ||Y - X̂||_F^2, subject to rank(X̂) = r, (3)

where ||·||_F denotes the Frobenius norm of a matrix. If d(t) is white Gaussian noise, it satisfies E[D^T D] = σ_d^2 I, where σ_d^2 is the variance of the noise. The optimal solution of (3) can be obtained by applying the singular value decomposition (SVD) of Y.
Writing the SVD as Y = UΣV^T, U and V are orthogonal matrices holding the left and right (approximate) singular vectors of the given matrix, and Σ is a diagonal matrix holding the singular values σ_1 ≥ σ_2 ≥ … ≥ σ_l. Retaining only the r largest singular values gives the low-rank matrix X̂, which represents the original speech matrix X in the sense of least-squares minimization. This yields the optimal estimate when the noise is small, independent, and identically distributed Gaussian.
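As an illustration (not the paper's own code; function name ours), the rank-r truncated-SVD estimate described above can be computed with NumPy:

```python
import numpy as np

def lowrank_svd_estimate(Y: np.ndarray, r: int) -> np.ndarray:
    """Return the best rank-r approximation of Y in the Frobenius
    norm, obtained by keeping only the r largest singular values
    of the SVD Y = U S V^T (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```

For a matrix that is truly low-rank plus small Gaussian noise, this recovers the underlying component to within the noise level, which is exactly the regime the text describes.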
However, PCA is highly sensitive to the presence of large corruptions. Even a single outlier in the data matrix can render the estimate of the low-rank component arbitrarily far from the true model. In [16], a new theory called Robust PCA (RPCA) was developed to address this shortcoming. The basic idea of RPCA is to decompose the data matrix M as M = L + S, where L is low-rank and S is a sparse matrix with a small number of non-zero coefficients of arbitrarily large magnitude. RPCA can be solved by minimizing the convex program

min ||L||_* + λ||S||_1, subject to M = L + S,

where ||·||_* denotes the matrix nuclear norm, defined as the sum of all singular values and suggested as a convex surrogate for the rank function [18], and ||·||_1 denotes the l1-norm of a matrix, defined as the sum of the absolute values of its elements. This problem is known to have a stable solution provided L and S are sufficiently incoherent [19], i.e., the low-rank matrix is not sparse and the sparse matrix is not low-rank. More recently, RPCA theory was introduced into the speech enhancement task in [20], where a constrained low-rank and sparse matrix decomposition (CLSMD) algorithm was designed for noise reduction.
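The two norms in the RPCA objective are simple to evaluate; a minimal sketch (function names ours):

```python
import numpy as np

def nuclear_norm(M: np.ndarray) -> float:
    """||M||_*: sum of singular values, the convex surrogate
    for the rank of M used in the RPCA objective."""
    return float(np.linalg.svd(M, compute_uv=False).sum())

def l1_norm(M: np.ndarray) -> float:
    """||M||_1: sum of absolute values of all entries, the convex
    surrogate for the number of non-zero entries of M."""
    return float(np.abs(M).sum())
```

For a diagonal matrix the two coincide (both equal the sum of absolute diagonal entries), which is a quick sanity check when implementing RPCA-style solvers.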
LSD-based speech denoising method
In this work, we propose a new subspace decomposition algorithm based on LSD, which is less sensitive to large noise interference.
Firstly, we formulate the speech enhancement problem as the following optimization problem:

min ||Y - L - S||_F^2, subject to rank(L) ≤ r, card(S) ≤ k, (6)

where card(S) denotes the number of non-zero entries of S. This can be solved by alternately solving the following two subproblems until convergence:

L_i = argmin ||Y - L - S_{i-1}||_F^2, subject to rank(L) ≤ r, (7-a)
S_i = argmin ||Y - L_i - S||_F^2, subject to card(S) ≤ k. (7-b)

Given an estimate of the sparse matrix S_{i-1}, the minimization in (7-a) over L learns a rank-r low-rank matrix from the partial observations. This is a fixed-rank approximation problem, which we solve with fast low-rank matrix approximation based on bilateral random projections (BRP): letting X = Y - S_{i-1}, we compute Y_1 = X A_1 and Y_2 = X^T A_2, where A_1 and A_2 are Gaussian random matrices, and form L_i = Y_1 (A_2^T Y_1)^{-1} Y_2^T. The minimization in (7-b) over S learns a sparse matrix from the partial observations. It can be computed via an entry-wise hard-thresholding function [21], which keeps an entry of Y - L_i if its magnitude is above the threshold (equivalently, keeps the k largest-magnitude entries) and sets it to zero otherwise. In summary, we have the following optimization algorithm for LSD.
Algorithm 1. Optimization algorithm for LSD
Given the rank r, the maximum number of iterations T, and the tolerance ε, alternately update L and S via (7-a) and (7-b) until convergence or until T iterations are reached.

Figure 1 shows the scheme of the LSD-based speech enhancement method. First, the noisy speech signal is divided into frames in the time domain. Each frame of the noisy speech is then arranged into a Toeplitz matrix. After estimating the effective rank r with the analysis-by-synthesis approach [22], the noisy speech matrix Y is decomposed into the low-rank matrix L of rank r using the LSD algorithm. Since L is not a Toeplitz matrix, we average its elements along each diagonal to restore the Toeplitz form. Finally, the enhanced speech is constructed by taking the inverse transform of the Toeplitz matrix, followed by least-squares overlap-add synthesis [23].
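The Toeplitz embedding and the diagonal-averaging step described above can be sketched as follows (function names ours; the paper does not publish its code):

```python
import numpy as np

def frame_to_toeplitz(y: np.ndarray, l: int) -> np.ndarray:
    """Arrange a length-N frame y into an (N-l+1) x l Toeplitz matrix
    whose row i is [y[i+l-1], y[i+l-2], ..., y[i]], so entries are
    constant along each diagonal."""
    N = len(y)
    return np.array([y[i + l - 1::-1][:l] for i in range(N - l + 1)])

def toeplitz_to_frame(T: np.ndarray) -> np.ndarray:
    """Invert frame_to_toeplitz by averaging along each diagonal,
    restoring a length-(rows + cols - 1) signal. The averaging is
    what projects a non-Toeplitz matrix (such as the low-rank
    estimate L) back onto Toeplitz form."""
    m, l = T.shape
    y = np.zeros(m + l - 1)
    cnt = np.zeros(m + l - 1)
    for i in range(m):
        for j in range(l):
            y[i + l - 1 - j] += T[i, j]
            cnt[i + l - 1 - j] += 1
    return y / cnt
```

Round-tripping a frame through both functions returns it unchanged; applied to the low-rank estimate L, the second function performs the diagonal averaging before overlap-add synthesis.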
Experimental results
For evaluation of the proposed LSD method, we chose a total of 30 sentences (sp01~sp30) from the NOIZEUS database. Both speech and noise were sampled at 8 kHz with 16 bits. The time-frame length is 264 sample points with 50% frame overlap. White Gaussian noise was added to clean speech at various levels. We use segSNR and PESQ (Perceptual Evaluation of Speech Quality) scores as performance measures, and compare against conventional speech enhancement methods: spectral subtraction (SSboll [1]), SVD-based subspace decomposition (SSVD [9]), a Wiener-filter-based method (Wiener [8]), the minimum mean-square error algorithm (MMSE [24]), KLT [12], and CLSMD [20].
Tables 1 and 2 show the performance comparison in terms of PESQ and segSNR scores; the larger the PESQ and segSNR scores, the better the performance. The proposed LSD method obtained the highest PESQ and segSNR scores among all the compared methods, except at 0 dB, where CLSMD has the highest segSNR score.
Conclusions
In this paper, we presented an LSD-based signal subspace speech enhancement method. The proposed method is less sensitive to large interference than traditional algorithms and can significantly reduce noise. Experiments demonstrate that the proposed method improves overall enhanced speech quality, especially at low SNRs. It should be pointed out that the LSD method improves on the original SVD-based subspace method and can remove more residual noise. In future work we will devote more effort to improving the noise reduction performance in colored noise.
Figure 1. The scheme of the LSD-based speech enhancement method.
Figure 2. Comparison of the spectrograms for speech enhanced by different methods.

Fig. 2 presents spectrogram comparisons for the various speech enhancement methods at 10 dB SNR. As the enhanced speech spectrograms show, along with high levels of noise reduction, the proposed LSD-based method still preserves most of the low-energy speech components compared with the other speech enhancement methods.
Table 1. PESQ scores in the white noise case at different SNRs.
Table 2. segSNR scores in the white noise case at different SNRs.
|
v3-fos-license
|
2020-10-28T19:21:12.635Z
|
2020-10-13T00:00:00.000
|
225095077
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/omcl/2020/3534570.pdf",
"pdf_hash": "dcf57ca11071b6f53c96e10980df1a884cd19d90",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2099",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "af1c73e8d887a40bed04ca16bea5035904d99836",
"year": 2020
}
|
pes2o/s2orc
|
The Antiaggregative and Antiamyloidogenic Properties of Nanoparticles: A Promising Tool for the Treatment and Diagnostics of Neurodegenerative Diseases
Due to the progressive aging of the society, the prevalence and socioeconomic burden of neurodegenerative diseases are predicted to rise. The most common neurodegenerative disorders nowadays, such as Parkinson's disease, Alzheimer's disease, and amyotrophic lateral sclerosis, can be classified as proteinopathies. They can be either synucleinopathies, amyloidopathies, tauopathies, or TDP-43-related proteinopathies; thus, nanoparticles with a potential ability to inhibit pathological protein aggregation and/or degrade already existing aggregates can be a promising approach in the treatment of neurodegenerative diseases. As it turns out, nanoparticles can be a double-edged sword; they can either promote or inhibit protein aggregation, depending on coating, shape, size, surface charge, and concentration. In this review, we aim to emphasize the need of a breakthrough in the treatment of neurodegenerative disorders and draw attention to nanomaterials, as they can also serve as a diagnostic tool for protein aggregates or can be used in a high-throughput screening for novel antiaggregative compounds.
Introduction
Undoubtedly, progress in medical and biological studies has led to increased quality of life and an extended life span. At the same time, overall fertility has dropped, and these two factors contribute to the aging of society. Due to this phenomenon, the increase in prevalence of neurodegenerative diseases is predicted to become even more visible in the future than it currently is. According to the World Health Organization, the number of people aged ≥ 65 is projected to grow from about 524 million in 2010 to around 1.5 billion in 2050 [1]. Neurodegenerative diseases impose a burden not only on the people affected but also on their caregivers. There are three major neurodegenerative diseases whose prevalence and incidence rise significantly with age.
The first and most common of them is Alzheimer's disease (AD), which affects approximately 30% of people aged 85 or older. After the age of 85, the incidence of AD rises gradually from 6 to 8% per year, in contrast to the 0.5% rise per year between the ages of 65 and 75 [2]. The second most common is Parkinson's disease (PD), which affects 10-15 per 100 000 people annually [3]. Its prevalence was more than two times higher in 2016 (6.1 million cases) than in 2010 (2.5 million cases) and may reach 2% among people aged ≥ 65. Consequently, it is estimated that in 2050 there will be more than 12 million cases of PD worldwide [4]. Finally, the annual incidence of amyotrophic lateral sclerosis (ALS) is approximately 1-2.6 new cases per 100 000 persons. This disease is characterized by rapid progression, with an average survival of 3-4 years from onset, whereas the average age of onset is currently 59-60 years [5].
Indeed, the aforementioned neurodegenerative diseases present a variety of symptoms, but their exact pathophysiology is still elusive. Nevertheless, it should be emphasized that they share some common pathogenic features, among which genetic [6] and environmental [7] factors can be listed. Yet the most classical feature of all these diseases is protein misfolding in specific brain regions; thus, these disorders can be classified as proteinopathies (Table 1). The hallmark of proteinopathies is either intra- or extracellular accumulation in the central nervous system of aggregates abundant in β-sheets. In these diseases, altered forms of proteins that normally play a physiological role accumulate in the brain. After modifications of their 3D structure, they acquire pathological functions, which leads to self-aggregation, aggregate growth, and eventually precipitation [8].
Unfortunately, the current and only available treatment of neurodegenerative diseases is strictly symptomatic. The treatment of PD has not changed significantly over the decades: L-DOPA has been the gold standard for some 60 years. Apart from levodopa-carbidopa preparations, other dopamine agonists, monoamine oxidase-B inhibitors, cholinesterase inhibitors, and selective serotonin and norepinephrine reuptake inhibitors are also used in the drug regimen [13,14]. The treatment of AD is not much more sophisticated and is based on cholinesterase inhibitors and an NMDA receptor antagonist, namely, memantine. It addresses not only the behavioural and cognitive symptoms but also the functional ones [15]. A review of treatments for AD in clinical trials can be found in a recent article [16], demonstrating that there is no effective antiaggregative treatment so far. Similarly, information about PD drugs in clinical trials can be found in another review [17]. When it comes to ALS, there is only one FDA-approved drug, riluzole, which has antiglutamatergic activity and extends the survival of patients by only 2-3 months [18,19].
Due to the abovementioned facts, in this review, we aim to highlight the burden of neurodegenerative diseases and discuss novel approaches to their treatment using nanomaterials ( Figure 1). Furthermore, we would like to point out the versatility and impact of the nanoparticles used to combat proteinopathies, on pathological protein aggregation.
Protein Aggregation in Neurodegeneration
A body of evidence suggests that the accumulation and transmission of α-synuclein (α-syn) aggregates in the midbrain are highly associated with the pathogenesis of PD [20]. α-Synuclein is a presynaptic protein, which probably plays a regulatory function in modulation of synaptic plasticity, control of presynaptic vesicle pool size, release of neurotransmitters, and vesicle recycling. Its structure can be divided into three regions: an amphiphilic N-terminus, an acidic C-terminus, and a hydrophobic central domain, which is known as the nonamyloid β component (NAC). The NAC region is crucial for α-syn aggregation and formation of β-sheet fibrils, which are the main elements of Lewy bodies [21]. Studies showed that electrostatic forces play a crucial role in α-syn fibrillation; thus, this process can be obstructed by charged nanoparticles [22].
Yet, interestingly, the exact molecular mechanism, timing, and influence of protein misfolding on the onset and/or progression of these particular diseases are still beyond reach. According to Janezic et al., who introduced a new mouse model for PD studies, neurophysiological changes precede and are not driven by α-syn aggregate formation [23]. Nevertheless, the search for antiaggregative agents remains highly desirable.
Amyloid β peptide has a leading role in the onset and progression of AD. In this disease, amyloid plaques containing aggregated amyloid-β protein (Aβ) are surrounded by morphologically altered neurons, cause synapse and memory loss, and induce neurotoxicity [24].
As a matter of fact, Aβ is physiologically present and derives from the amyloid precursor protein (APP), which is implicated in the regulation of synapse formation. Unfortunately, under particular circumstances, it starts to aggregate and initiates disease progression [25]. Amyloid-β protein monomers tend to aggregate into several forms, namely, soluble oligomers, protofibrils, and insoluble amyloid fibrils, which can further aggregate into amyloid plaques. This process is accompanied by oxidative stress, leading to the formation of oxidized proteins and lipid peroxidation. Products of lipid peroxidation, especially 4-hydroxynonenal, can in turn disrupt the function of glucose and glutamate transporters and of ion-dependent ATPases [26]. Therefore, Aβ promotes synaptic membrane depolarization, uncontrolled Ca2+ influx, and mitochondrial damage, which cause undesirable changes in cellular activity [27]. Additionally, tau protein becomes hyperphosphorylated because of changes in protein kinase activity that result from Aβ aggregation. The hyperphosphorylated form of tau becomes the core of neurofibrillary tangle (NFT) formation, whereas physiologically tau fosters the assembly of tubulin into microtubules and helps to maintain their stability. The link between the existence of NFTs and neuronal dysfunction is straightforward. Notwithstanding, the relationship between Aβ and NFTs is intertwined, because inhibition of tau generation can impact the production of Aβ and its derivatives [28].
Nanoparticles as Therapeutics of the Future
A nanoparticle (NP) is defined as a particle of matter that is between 1 and 100 nanometres in at least one dimension. Nanoparticles have emerged as attractive tools for both therapeutic and diagnostic applications, especially in imaging, diagnostics, and drug delivery. They can be synthesized from a broad range of materials, such as polymers, metals, or carbon-based molecules. NPs are also highly functional because of the ease with which their shape, size, and surface properties can be modified. Furthermore, NP properties can be altered by attaching other substances to the surface or entrapping them within NP cavities, if these exist (Figure 2) [29].
3.1. Graphene Quantum Dots. Graphene quantum dots (GQDs) are less than 100 nm in size and are made of single- or few-layer graphene (Figure 3). They have been widely used in nanobiomedicine by virtue of their low cytotoxicity and high biocompatibility [30]. The group of Kim et al. demonstrated that GQDs were able to pass through the blood-brain barrier (BBB). In the brain, they reduced α-syn fibrillization and triggered fibril disaggregation in a time-dependent manner by direct interaction with mature fibrils. The binding between GQDs and α-syn is driven by the negatively charged carboxyl groups of GQDs and the positively charged region of α-syn. Furthermore, these GQDs did not manifest any long-term toxicity in vivo or in vitro; they were able to prevent neuronal death, diminish Lewy body and Lewy neurite formation, and alleviate mitochondrial damage and dysfunction, and, last but not least, they prevented neuron-to-neuron transmission of pathological α-syn. Moreover, experiments performed on a mouse model showed that GQDs protected against α-syn preformed fibril-induced loss of dopaminergic neurons and alleviated motor deficits [31].
With regard to AD, GQDs were also used to inhibit Aβ aggregation. The β-amyloid peptide consists of 39-42 amino acids, in which several regions can be defined. The His13-Lys16 (HHQK) region plays a significant role in oligomerization and fibril formation. This region is a crucial component of the glycosaminoglycan (GAG) binding site, which facilitates the conformational change of Aβ from a soluble, unordered α-helix to a stable β-sheet [32]. A construct composed of GQDs and tramiprosate, a mimic of GAGs that specifically binds to the HHQK motif and inhibits Aβ peptide aggregation, showed inhibition of Aβ aggregation driven by the breaking of β-sheets. Furthermore, GQDs combined with tramiprosate evidently protected PC12 cells from Aβ-induced cytotoxicity, while exhibiting a synergistic effect [33].
3.2. Dendrimers. Dendrimers are highly branched, tree-like polymers with unique properties thanks to their terminal functional surface groups (Figure 4). Their size, shape, and surface charge change with increasing generation. Dendrimers are highly functional because of the simplicity of modifying their biological and/or physicochemical properties [34,35]. There is evidence that generations 3, 4, and 5 of PAMAM dendrimers are able to interfere with Aβ aggregation by blocking the growth of new fibrils and breaking existing ones in a concentration- and generation-dependent manner: the higher the dendrimer concentration and generation, the lower the number of new fibrils [36]. A similar impact of dendrimers on α-syn aggregates has been observed: the dendrimers inhibit the formation of β-sheet structures and disrupt remaining β-sheets or agglomerates, again along the concentration and generation axes [37]. Furthermore, only full-generation PAMAM dendrimers, which carry cationic amino groups on their surface, were able to interact with the basic-amino-acid N-terminal region of α-syn responsible for β-sheet formation and protein aggregation, contrary to half-generation PAMAM dendrimers [38]. Third- and fifth-generation polylysine dendrimers obstructed amyloid aggregation in solution, whereas generation 3 dendrimers also protected SH-SY5Y cells against amyloid-induced toxicity [39].
3.3. Metal Nanoparticles. Cerium oxide nanoparticles (CeO2 NPs), or nanoceria, are multifaceted nanomaterials. They are characterized by good bioavailability and the ability to mimic superoxide dismutase or catalase activity, and they are quite potent ROS and nitric oxide scavengers. The antioxidant properties of CeO2 NPs are linked to the Ce3+/Ce4+ redox shift. Additionally, research shows that nanoceria are able to protect neurons against Aβ-induced mitochondrial fragmentation and also reduce DRP-1 hyperphosphorylation on Ser616, which is related to AD and neurodegeneration; inhibiting this posttranslational modification turns out to be a potential mechanism of mitochondrial preservation [40]. Beyond that, another group of researchers studied the influence of CeO2 NPs in a yeast model of PD. Nanoceria significantly increased the viability of yeast cells expressing α-syn. In addition, these NPs decreased α-syn-induced ROS production and alleviated mitochondrial dysfunction and fragmentation. The most probable mechanism of inhibiting the formation of α-syn aggregates is a direct interaction of the nanoceria with α-syn monomers or oligomers; their versatility is also shown by the ability to adsorb α-syn on the nanoparticle surface [41].
Gold nanoparticles (AuNPs) have been extensively used in biomedicine because of their great biocompatibility, chemical inertness, and effortless size control. AuNPs are also able to abrogate the aggregation of pathological proteins. Nevertheless, they may be toxic; the toxicity of gold NPs depends significantly on their size, charge, and coating. Large AuNPs (36 nm and 18 nm) increase Aβ fibrillation, whereas small ones are able to delay (6 nm) or utterly inhibit (1.9 nm) this process [42]. In particular, smaller, anionic NPs exhibit a better ability to halt protein aggregation. Researchers have studied four different coatings (citrate, poly(acrylic acid) (PAA), poly(allylamine) hydrochloride (PAH), or polyelectrolyte surfaces) and three different sizes of AuNPs (8 nm, 18 nm, and 40 nm). Altogether, the results demonstrated that PAA-coated, 18 nm AuNPs were superior in inhibiting Aβ aggregation and were the least toxic towards human neuroblastoma SH-SY5Y cells [43]. In order to improve the ability of AuNPs to cross the BBB, Prades et al. created an AuNP conjugated with two peptides, where one of the peptide sequences was designed to interact with the transferrin receptor; the authors suggest that this platform can increase the efficiency of drug delivery into the brain [44]. Noteworthily, natural compounds are also able to obstruct amyloid fibrillation and break existing amyloid fibrils, one of which is curcumin [45]. Because of its hydrophobicity, and thus insolubility in water, curcumin has to be conjugated with other compounds [46]. Water-soluble curcumin-functionalized gold nanoparticles turned out not only to efficiently inhibit amyloid fibrillation but also to break and dissolve Aβ fibrils. Furthermore, these curcumin-AuNPs protect neuro2a cells from Aβ 1-40 fibril-induced cytotoxicity, giving a nearly doubled improvement in viability.
It is suspected that the great inhibitory efficiency is a result of nanoparticle binding to the fibrils via curcumin moiety and disrupting the elongation phase of fibrillation [47].
3.4. Antioxidant-Loaded NPs.
Apart from the abovementioned example, other phytochemicals have also proven useful in preventing pathological protein aggregation in neurodegenerative diseases (Figure 5). Among them, baicalein [48], chlorogenic acid [49], gallic acid [50], and many other natural compounds [51] are able to inhibit the formation of α-syn aggregates and/or even disaggregate existing ones. Selenium nanoparticles (SeNPs) have turned out to be an effective carrier of antioxidants. Their peculiar biomedical applications and wide range of therapeutic properties are ascribed mainly to their ability to modulate the redox state. Moreover, SeNPs show low toxicity and great biodegradability in vivo [52]. Yang et al. investigated the anti-Aβ-aggregative and antioxidative properties of SeNPs conjugated with chlorogenic acid (CGA-SeNPs). These authors hypothesized that binding CGA to nanoparticles would improve its bioavailability and stability. They proved that the antiaggregative properties of CGA-SeNPs stem from their ability to bind Aβ40 on their surface. Furthermore, CGA-SeNPs effectively scavenged ROS and protected PC12 cells against Aβ-induced toxicity [53]. Likewise, the same group designed SeNPs modified with resveratrol and tested their properties against metal-ion-induced Aβ42 aggregation. They obtained effects similar to those described above, i.e., resveratrol and SeNPs exhibit a synergistic effect regarding the inhibition of pathological protein aggregation [54]. A nanocomposite engineered from quercetin, SeNPs, and polysorbate 80 can serve as another example of SeNPs combined with antioxidants.
In vitro analyses showed that the nanocomposite exhibited greater solubility in water compared to quercetin itself, which has poor aqueous solubility. On top of that, the nanocomposite had exceptional antioxidative activity, inhibited Aβ 1-42 monomer aggregation, and protected PC12 cells from hydrogen peroxide-induced cell death [55]. Zhang et al. studied both EGCG-SeNPs and NPs conjugated with EGCG and the Tet-1 peptide. Tet-1-EGCG-SeNPs showed better efficacy compared to NPs without the peptide. Both types of NPs not only protected PC12 cells against amyloid-induced cytotoxicity and inhibited Aβ fibrillation but were also able to dissociate existing fibrils into a nontoxic monomeric state. Nevertheless, the peptide-containing NPs performed better overall owing to increased neuronal targeting efficiency in vitro [56]. NPs loaded with other antioxidants, namely ferulic acid (a powerful anti-inflammatory agent) and tannic acid (an inhibitor of α-syn fibrillation), exhibited a potent inhibitory effect on α-syn aggregation, diminished proinflammatory responses, and reduced oxidative stress caused by α-syn [57]. Additionally, curcumin-loaded NPs inhibited amyloid-like aggregation of superoxide dismutase (SOD) 1, which occurs in about 20% of familial ALS cases [58].
Nanoparticles loaded with synthetic antioxidants can also serve as antiaggregative agents. Nitroxides exhibited better efficacy in preventing nitration reactions and were more reactive than the natural antioxidant vitamin E [59]. It has been established that nitroxide-containing redox NPs are able to alleviate typical aspects of neurodegenerative diseases: they protect cells against oxidative stress, improve mitochondrial function, and inhibit Aβ aggregation [60,61].
Other Therapeutic Approaches
Unquestionably, transition metals are among the main culprits of pathological protein accumulation. Moreover, they contribute widely to an altered redox state; thus, chelators might alleviate the toxic activity of these metals. Liu et al. created a chelating nanoparticle, in short, a NP conjugated with 2-methyl-N-(2′-aminoethyl)-3-hydroxyl-4-pyridinone. This construct significantly inhibited Aβ aggregation, protected human cortical neuronal cells from Aβ-induced cytotoxicity, and had no impact on cell proliferation [62].
Given that nanoparticle efficacy in inhibiting protein aggregation depends greatly on surface charge, the use of amino acids as coating agents is not surprising; being zwitterionic, they may enhance the biocompatibility of nanoparticles. Antosova et al. showed that amino acid-coated superparamagnetic nanoparticles can be quite a powerful tool for the treatment of amyloidopathies; in their hands, tryptophan-coated NPs exhibited the best antiaggregative properties [63]. Furthermore, others demonstrated that histidine-coated nanoparticles can completely suppress amyloid fibril formation [64]. Moreover, lysine-coated Fe3O4 NPs were less toxic than bare iron oxide NPs, bound strongly to monomeric α-syn, and inhibited the early phases of its aggregation [65].
NPs can also be used as safer carriers for gene therapy, instead of viral vectors. Niu et al. created multifunctional magnetic nanoparticles, a complex platform that combines cell targeting, controlled drug release, and gene therapy. The authors developed a NP that interferes with α-syn synthesis via shRNA, hence alleviating its toxic effect, so that cell death is inhibited both in vitro and in vivo [66].
The Dark Side of the Nanoparticles with a Useful Outcome
Despite the undoubted success of some nanoparticles as promising antineurodegenerative compounds, it is important to mention that there are also data on their possible contribution to disease progression. A plethora of evidence suggests that nanostructures can influence protein fibrillation depending on various conditions, including coating, size, surface charge, and concentration. Such a discrepancy has been seen, for example, in silica-based nanoparticles, where positively charged silica nanoparticles inhibited α-syn fibrillation and negatively charged ones had the opposite effect [67]. It was also established that SiO2 NPs upregulate α-syn expression, inhibit protein levels of the ubiquitin-proteasome system, and induce autophagy by interfering with the PI3K-Akt-mTOR signalling pathway [68]. Contrary to that, negatively charged gold nanoparticles act as chaperones and prevent Aβ fibrillation [69]. Yet, regarding α-syn, the opposite effect was seen: gold nanoparticles are also a double-edged sword. Citrate-capped (negatively charged) AuNPs accelerated the formation of α-syn aggregates at nanomolar concentrations, and the duration of the nucleation phase depended on surface availability. The smaller the NPs (10-14 nm), the greater the acceleration of aggregate growth, whereas particular sizes (22 nm) were able to inhibit fibril growth; in summary, the aggregative properties of AuNPs hinge on their size and concentration [70].
Nowadays, numerous NPs are used in a variety of fields, such as electronics, pharmaceuticals, cosmetics, and fabrics; hence, their toxicity has begun to be observed more widely, and studies of the health risks lag behind the rapid development of nanotechnology, despite a body of evidence of toxic effects of NPs both in vitro and in vivo [71]. For example, Shah et al. showed that nanoscale alumina can accumulate in the brains of exposed animals and thus induce oxidative stress and neurodegeneration. It promoted the production of toxic Aβ through the amyloidogenic pathway, caused overexpression of APP, and increased β-secretase BACE1 activity, which boosted the formation of Aβ aggregates. Their findings suggest that exposure to nanoalumina might increase the probability of neurodegenerative disease onset [72].
It is worth mentioning that TiO2 NPs are commonly used in numerous everyday products such as cosmetics and antiseptic agents. These NPs turned out to induce α-syn fibrillation by shortening its nucleation process and may contribute to PD onset [73]. Additionally, there is a positive correlation between α-syn expression levels and TiO2 NP concentration [74]. Moreover, exposure of wild-type mice to inhalation of nickel-containing NP air for 3 h increased both Aβ40 and Aβ42 amyloid peptide levels in the brain by 72-129% [75].
Moreover, Yarjanli et al. gave excellent examples of the role of iron in neurodegeneration. They speculated whether iron ions released from NPs are capable of activating a positive feedback loop of iron accumulation. First and foremost, there is evidence that released iron ions can support Fenton's reaction and produce ROS from hydrogen peroxide and superoxide. Beyond that, iron NPs can decrease GSH content, which may lead to increased oxidative stress and mitochondrial degradation. Given these factors, it is no surprise that such NPs can boost protein aggregation. Nevertheless, the authors emphasize that the toxicity of iron NPs depends on their size, shape, surface charge, coating, functional groups, and concentration, and their utility must be considered with regard to these aspects [76].
In any case, knowledge about the diverse nature of nanoparticles has been used to look for other applications in the field of neurodegenerative disorders. As a case in point, nanoparticle-induced protein fibrillation can be employed as a fast screening method for novel potential antiaggregative compounds [77] and also as a methodology for rapid detection of protein aggregation that can likewise be used to analyze the fibrillation process [78]. NPs can also serve as an advanced, real-time screening platform to help identify various mechanisms of Aβ aggregation [79].
Here, SOD1-functionalized AuNPs served as a colorimetric detection platform for evaluating SOD1 aggregates. The test is simple and sensitive compared to other methods, as it is based on absorbance; thus, such a sensor system can serve as a diagnostic tool for SOD1 aggregates, a hallmark of a fraction of familial ALS cases [80].
Going a step further, some nanoparticles might be designed not only to inhibit aggregation of pathological proteins but also to serve as a diagnostic tool. The results presented by Skaat et al. indicate that conjugation of a BAM10 antibody to near-infrared fluorescent Fe3O4 nanoparticles not only significantly hinders Aβ40 fibrillation but also marks the aggregates, so that they can be detected by MRI or fluorescence imaging [81]. Another example of "traceable" and successful antineurodegenerative NPs is superparamagnetic iron oxide nanoparticles conjugated with two cell-targeting molecules: a peptide with strong affinity for the transferrin receptor, used to enable the NP to cross the BBB, and mazindol, a dopamine inhibitor that stimulates dopamine transporter internalization to facilitate specific internalization into dopaminergic neurons. EGCG attached to this NP prevents α-syn aggregation [82].
Conclusions
This review gives an insight into the burden and predicted prevalence of the most common neurodegenerative diseases and the lack of effective treatment. The contemporary regimen is solely symptomatic; thus, we wanted to point out the emerging significance of nanoparticles as a promising approach in the treatment and diagnostics of these disorders. Despite the complexity of mechanisms underlying neurodegenerative diseases, some pathological aspects tend to overlap; thus, nanoparticles can act on many levels. Further studies, both in vitro and in vivo, are extremely important for the discovery of the most efficient treatment of these diseases.

Abbreviations

Aβ: Amyloid-β protein
AD: Alzheimer's disease
ALS: Amyotrophic lateral sclerosis
APP: Amyloid precursor protein
AuNPs: Gold nanoparticles
BBB: Blood-brain barrier
CGASeNPs: SeNPs conjugated with chlorogenic acid
L-DOPA: L-3,4-Dihydroxyphenylalanine
EGCG: Epigallocatechin gallate
GAGs: Glycosaminoglycans
GQDs: Graphene quantum dots
GSH: Glutathione
NAC: Nonamyloid β component
NFTs: Neurofibrillary tangles
NMDA: N-Methyl-D-aspartate
NPs: Nanoparticles
PAMAM: Poly(amidoamine)
PD: Parkinson's disease
ROS: Reactive oxygen species
SeNPs: Selenium nanoparticles
SOD: Superoxide dismutase
α-syn: Alpha-synuclein
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Authors' Contributions
M. P. wrote the main part of the manuscript. G. B. participated in the revision of the manuscript. I. S.-B. was responsible for the concept of the review and preparation of the manuscript. She was also responsible for providing the funding for the study. All authors have read and approved the final manuscript.
|
v3-fos-license
|
2016-09-14T22:35:13.896Z
|
2006-01-01T00:00:00.000
|
18042256
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.47795/zoxa7742",
"pdf_hash": "8811000306389ba43aa087340dc5876dc0710458",
"pdf_src": "Grobid",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2101",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "8811000306389ba43aa087340dc5876dc0710458",
"year": 2006
}
|
pes2o/s2orc
|
Hydrocephalus: A Practical Guide to CSF Dynamics and Ventriculoperitoneal Shunts
Hydrocephalus is defined and the mechanisms of CSF hydrodynamics discussed. Supplementary tests used in the investigation of idiopathic normal pressure hydrocephalus are reviewed, with a detailed explanation of constant flow CSF infusion tests. The principles governing valve selection are illustrated.

Hydrocephalus is the abnormal accumulation of CSF within the cranium due to defective CSF production, flow or absorption. The CSF usually accumulates within the ventricular system; however, 'external hydrocephalus' with widening of the subarachnoid spaces is described. Hydrocephalus can be due to obstructive causes preventing normal CSF flow through the CSF pathways, or due to abnormal absorption of CSF: communicating hydrocephalus. CSF flow studies frequently show a complex picture with contributions from both mechanisms. Overproduction of CSF that exceeds the absorption capacity of the arachnoid granulations is rare.

CSF physiology

CSF is mainly produced by passive ultrafiltration of plasma with some active electrolyte transport from the ventricular choroid plexus. The rate of CSF production is around 20ml/hour, although MRI evidence suggests that this may increase significantly during sleep. To maintain equilibrium, CSF is normally absorbed into the major venous sinuses by a passive mechanism through the one-way valves of the arachnoid villi. The normal CSF pressure at the reference level (the foramen of Monro) in the recumbent adult is 100-200mm H2O (7-15mmHg), with mean pressures of 20mmHg regarded as elevated. Pressures from 0-7mmHg do not usually signify any pathology. The CSF pressure fluctuates with the arterial pulse wave and respiratory excursions. Symptomatic patients with obstructive hydrocephalus continue producing CSF. CT and MRI scans identify the site of obstruction. The definitive treatment for most of these patients is removal of the obstructive cause.
If CSF diversion is required and the outlet of the IIIrd or IVth ventricle is obstructed, a IIIrd ventriculostomy is usually the first choice of treatment, particularly in newly diagnosed patients with aqueduct stenosis. Communicating hydrocephalus occurs when the lateral, IIIrd and IVth ventricles appear to communicate freely. The absorption capacity of the arachnoid villi is exceeded or obstruction of CSF flow occurs within the subarachnoid space. This condition is usually managed with a ventriculoperitoneal shunt system and provides the focus for this paper.
Normal pressure hydrocephalus
In the pre-CT scan era Adams et al. reported three cases of ventriculomegaly associated with gait disturbance, dementia and incontinence. 1 All three patients had normal CSF pressure (140-50, 160 and 175mmH2O) on lumbar puncture but improved with either a ventriculo-atrial shunt (two cases) or a Torkildsen (lateral ventricle to cisterna magna) diversion (one case). Two cases were idiopathic and one was due to a cyst in the IIIrd ventricle. The condition is now classified as (i) Primary or Idiopathic Normal Pressure Hydrocephalus (INPH) and (ii) Secondary Normal Pressure Hydrocephalus. In the latter group of patients a well-established cause is evident (eg. subarachnoid haemorrhage, traumatic brain injury, meningitis). Whilst the primary pathology may increase the certainty of diagnosing hydrocephalus, the results of treatment may be confounded by the original brain insult.
Even in the presence of a classic triad of symptoms the response to treatment is often disappointing. Indeed, Black reported that 67.2% of patients with gait, cognitive and urinary symptoms and signs improved with a shunt. 2 The outcome was significantly worse in patients with only dementia and gait disturbance (31.6% improved). Overall, 35.4% of the 62 patients studied suffered complications, including subdural haematomas and fits. The challenge therefore lies in increasing diagnostic accuracy and timely management at a point when symptoms and signs are retrievable.
The Symptomatic Triad
The gait disturbance in INPH includes at least two of the following features: wide based stance, out-turned feet, decreased step height, decreased step length, decreased speed, increased trunk sway, en bloc turning requiring three or more steps for 180°, and poor heel-toe walking. Cognitive features are wide-ranging and include attention deficits, psychomotor retardation, impaired recall and memory deficits, executive dysfunction, and behavioural and personality changes. Such features can be quantified using a summative mental state examination. Urinary dysfunction is characterised by nocturia, urgency, frequency or incontinence, reflecting a low capacity neurogenic bladder.
Evidence-based clinical diagnostic criteria for the diagnosis of INPH have only recently been developed. A consensus panel recommends that INPH candidates be categorised into 'probable' and 'possible' groups based upon history, examination, brain imaging and CSF opening pressure. 3
Probable INPH
This requires a gait disturbance and either cognitive and/or urinary disturbances in a patient over 40 years old. In addition the history, imaging and lumbar puncture opening pressures must be consistent with the diagnosis. The imaging findings are characterised by ventriculomegaly not due to atrophy or obstructive hydrocephalus, associated with one or more of the following: temporal horn enlargement, a callosal angle of 40° or more (due to bowing of the corpus callosum), periventricular lucency not due to ischaemia, and a flow void in the aqueduct or IVth ventricle. The accepted range of CSF opening pressure for probable INPH is 70-245mmH2O (5-18mmHg).
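The two pressure units quoted above can be interconverted using the mercury-to-water density ratio (1 mmHg ≈ 13.6 mmH2O). As an illustrative aside, not part of the original article, a quick check that the quoted ranges agree:

```python
# Convert CSF pressures between mmH2O and mmHg.
# 1 mmHg corresponds to ~13.6 mmH2O (density ratio of mercury to water).
MMH2O_PER_MMHG = 13.6

def mmh2o_to_mmhg(p_mmh2o):
    """Pressure in mmH2O expressed in mmHg."""
    return p_mmh2o / MMH2O_PER_MMHG

# Probable-INPH opening pressure range quoted in the text: 70-245 mmH2O
low = mmh2o_to_mmhg(70)
high = mmh2o_to_mmhg(245)
print(f"{low:.1f}-{high:.1f} mmHg")  # close to the quoted 5-18 mmHg range
```

The same conversion reconciles the normal range given earlier (100-200 mmH2O against 7-15 mmHg).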
Possible INPH
This group may have a more acute history in a younger patient with only one of the triad of symptoms and an opening CSF pressure outside the guidelines above.The imaging findings may appear to be consistent with atrophy.
Correspondence to:
Peter Whitfield
Email: [email protected]

Although periventricular changes are commonly observed in T2-weighted MRI, they are also associated with hypertension and cerebrovascular disease and are therefore not pathognomonic of hydrocephalus. 4 Post-shunting MRI scans do show an improvement in the frontal horn periventricular changes, but such pre-operative features are not required to predict a good outcome. 5 Calculation of the 'stroke volume' of CSF moving in a craniocaudal direction during systole using phase contrast CSF velocity MR imaging has shown that a volume greater than 42µL correlated with a favourable response to shunting. 6 This is characterised by a signal flow void. PET cerebral blood flow (CBF) studies have shown that pre- and post-shunt assessments of haemodynamic reserve, using a carbonic anhydrase inhibitor to stimulate increased PaCO2, indicate that shunt responders show an improvement in their cerebrovascular reserve compared with non-responders. 7 This suggests that altered CBF dynamics are important in the pathogenesis of INPH and in determining the success of treatment. Unfortunately specific thresholds for low CBF have not been identified as pre-operative predictors of treatment success.
Due to this conundrum and the difficulties in determining shunt responsive cases, several other tests have been developed to aid management. These include intracranial pressure monitoring, CSF infusion tests, the tap test and a period of CSF drainage. The main drawback of these tests is the low sensitivity and poor predictive value of some of them (see Table 1). The additional tests that are often used are detailed below.
ICP monitoring
Patients with INPH frequently have normal ICP. However, 24-hour monitoring may reveal several abnormalities that indicate poor cerebral compliance (Figure 1). An ICP recording shows systolic and diastolic pulsations. Plateau (A) waves with elevations exceeding 50mmHg for periods of 5-20 min are not normally seen in patients with idiopathic hydrocephalus. However, careful analysis of the ICP trace, using computer software with threshold filters, reveals low amplitude (commonly 1-5mmHg) superimposed B waves with a period of 30 seconds to 2 minutes. 8 The prevalence of B waves appears to increase during normal REM sleep and with rises in intracranial pressure. A recent detailed analysis in patients with communicating and non-communicating hydrocephalus indicates that B waves are commonly observed but have a poor correlation to clinical outcome. 9
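As a toy illustration of the kind of threshold-filter analysis mentioned above (a hypothetical sketch, not the software used by the authors), the amplitude of slow B waves can be estimated by correlating a de-meaned ICP trace against sinusoids with periods in the 30 s to 2 min band:

```python
import math

# Synthetic 1 Hz ICP trace: 10 mmHg baseline plus a 3 mmHg B wave (60 s period).
FS = 1.0                          # sampling rate, Hz
N = 3600                          # one hour of monitoring
icp = [10 + 3 * math.sin(2 * math.pi * i / 60) for i in range(N)]

def b_wave_amplitude(signal, fs, periods_s=(30, 60, 90, 120)):
    """Amplitude (mmHg) of the strongest oscillation among candidate B-wave
    periods, estimated by correlating the de-meaned trace with sin/cos."""
    n = len(signal)
    mean = sum(signal) / n
    best = 0.0
    for period in periods_s:
        w = 2 * math.pi / (period * fs)   # radians per sample
        c = sum((x - mean) * math.cos(w * i) for i, x in enumerate(signal))
        s = sum((x - mean) * math.sin(w * i) for i, x in enumerate(signal))
        best = max(best, 2 * math.hypot(c, s) / n)
    return best

print(f"B-wave amplitude ~ {b_wave_amplitude(icp, FS):.1f} mmHg")
```

An estimated amplitude in the commonly reported 1-5 mmHg range would then flag B-wave activity; a real implementation would use proper spectral estimation over sliding windows rather than a handful of fixed candidate periods.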
Tap test
Many authors have reported the withdrawal of 40-50ml of CSF as a useful test, with responders benefiting from shunt insertion. However the test has a low sensitivity (26-62%) and should not be used to rule out a diagnosis of idiopathic normal pressure hydrocephalus. 10

External Lumbar CSF Drainage

This test developed from the concept that a trial of controlled CSF removal (10ml/hr) for 72 hours might predict shunt responders. The sensitivity of the test has been reported as 50-100%, with a specificity of 60-80% and a positive predictive value of 80-100%. 10

Figure 2B: CSF infusion study performed via a ventricular access device in a patient with probable idiopathic normal pressure hydrocephalus; infusion rate 1.5ml/min. The opening pressure is normal (10mmHg) but CSF infusion produces a plateau around 34mmHg, enabling the Rcsf to be calculated (16mmHg/ml/min). This level is just below the 18mmHg/ml/min threshold described by Boon et al. 12 Strong vasogenic waves are also evident at pressures above 25mmHg, with an increase in the pulse amplitude. The derived Pressure Volume Index (PVI) was elevated (9.1ml), reflecting poor compensatory reserve.
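The sensitivity, specificity and predictive values quoted for these tests are linked through the pre-test probability. As a hedged sketch (the 50% pre-test probability below is an assumption for illustration, not a figure from the article), PPV follows from Bayes' rule:

```python
def ppv(sensitivity, specificity, pretest_prob):
    """Positive predictive value from sensitivity, specificity and the
    pre-test probability of shunt-responsive hydrocephalus (Bayes' rule)."""
    true_pos = sensitivity * pretest_prob
    false_pos = (1 - specificity) * (1 - pretest_prob)
    return true_pos / (true_pos + false_pos)

# Mid-range figures for external lumbar drainage (sensitivity 50-100%,
# specificity 60-80%) with an assumed 50% pre-test probability:
print(round(ppv(0.75, 0.70, 0.50), 2))  # 0.71
```

With a higher pre-test probability (well-selected candidates), the same test figures push the PPV towards the 80-100% range quoted above, which is one reason the reported predictive values vary between series.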
CSF infusion test
The resistance to CSF absorption by the arachnoid villi can be measured and helps predict shunt responsive patients.
The test is commonly performed in the left lateral position using a constant infusion technique. Lumbar puncture needles are inserted at 2 levels; the use of a solitary needle with a three-way tap is not as reliable. A pressure transducer is connected to one needle and the baseline opening pressure recorded. Normal saline is infused at 1.5ml/min through the second needle whilst the pressure is continuously measured. In most patients the pressure rises steadily and then reaches a plateau. The resistance (Rcsf) to CSF absorption can be calculated using an Ohm's Law analogy:

Rcsf (mmHg/ml/min) = (plateau pressure - baseline pressure) (mmHg) / infusion rate (ml/min)

The Rcsf in normal subjects ranges from 6 to 10mmHg/ml/min. It increases in the elderly; in such cases the rate of CSF production probably decreases to prevent hydrocephalus ensuing. 11 By using an infusion rate of 1.5ml/min, a 30-mmHg increase in CSF pressure provides evidence of the Rcsf exceeding 20mmHg/ml/min. The use of higher infusion rates (eg. 3ml/min) imposes limitations in that the pressure needs to rise by 60mmHg to confirm an Rcsf of 20mmHg/ml/min. We recommend aborting the test if CSF pressure exceeds 50mmHg. In this case a minimum value for the Rcsf can still be calculated using the (peak pressure - baseline pressure) as the numerator in the equation. Boon et al. have reported that in patients with probable NPH (mainly idiopathic but also including some secondary cases) a positive response to shunting was likely if Rcsf exceeded 18mmHg/ml/min, with a PPV of 92% and a likelihood ratio of 3.5 in their series of 95 patients. However, the sensitivity of the test at this threshold was only 46%, although the specificity was high at 87%. 12

Performing the infusion test via a frontal ventricular access device appears to minimise the effect of CSF leakage around lumbar needles and may increase the predictive value of the investigation (Figures 2A and 2B). With sophisticated computer analysis (see www.neurosurg.cam.ac.uk/icmplus) of the pressure waveform, further information about the elastance and compliance of the craniospinal axis, including the Pressure Volume Index (PVI), can be derived in both lumbar and ventricular CSF infusion studies. This may assist the decision making process in borderline cases.
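The arithmetic behind the Rcsf calculation and Boon's threshold can be checked directly. A sketch using the Figure 2B values quoted in the text (the function name is my own):

```python
def rcsf(plateau_mmhg, baseline_mmhg, infusion_ml_min):
    """Resistance to CSF outflow via the Ohm's-law analogy:
    pressure rise divided by infusion rate (mmHg/ml/min)."""
    return (plateau_mmhg - baseline_mmhg) / infusion_ml_min

# Figure 2B: opening pressure 10 mmHg, plateau 34 mmHg, infusion 1.5 ml/min
print(rcsf(34, 10, 1.5))   # 16.0, just below Boon's 18 mmHg/ml/min threshold

# Positive likelihood ratio at that threshold (sensitivity 46%, specificity 87%)
lr_plus = 0.46 / (1 - 0.87)
print(round(lr_plus, 1))   # 3.5, matching the quoted likelihood ratio
```

Note that the quoted likelihood ratio of 3.5 is exactly sensitivity/(1 - specificity) for the 46%/87% figures, so the reported numbers are internally consistent.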
Choosing a CSF shunt
Most neurosurgeons advocate a ventriculo-peritoneal shunt (VP shunt) as the preferred system for implantation. In some circumstances (eg. inadequate absorption in a patient with multiple previous abdominal operations) alternative sites are required (eg. ventriculo-pleural, ventriculo-atrial). VP shunt insertion is associated with numerous potential complications. These include:
• Peri-operative intracranial bleeding
Attention to detail during the placement of a VP shunt is crucial to minimise the risks of shunt insertion. Meticulous sterility, accurate catheter placement and secure connections between shunt components are essential. Consideration needs to be applied to the choice of shunt hardware. The ventricular catheter most widely used is a straight non-flanged device with multiple apertures in proximity to the catheter tip. There is no consensus over the best anatomical site for catheter placement. The distal catheter provides a conduit to drain CSF to the peritoneal cavity. Distal slit valves are unnecessary and may increase the risk of distal obstruction provided a valve is utilised proximally. Antimicrobial impregnated ventricular and peritoneal catheters have been developed in an attempt to reduce shunt infection rates.
Valves
Valve systems with different hydrodynamic properties have been developed to try to minimise complications such as over-drainage with low-pressure postural headaches and subdural fluid collections. The properties of valves have been independently evaluated in vivo. 13 Valves are designed to be (1) flow regulated or (2) differential pressure regulated (Figure 3). The Orbis Sigma Valve is the archetypal flow controlled device. CSF flows through a diaphragmatic aperture whose diameter decreases as the flow rate rises above 20ml/hr. This increases the resistance of the valve, regulating flow. A safety mechanism leading to a reduction of resistance at differential pressures of 25-30mmHg is incorporated to avoid acute severe elevations in intracranial pressure.
Most valves are differential pressure regulated. These devices are
Figure 1: Overnight ICP monitoring in a normal pressure hydrocephalus patient who responded to subsequent VP shunt insertion. Graphs show ICP, amplitude, slow B waves and RAP coefficient. The baseline pressure was normal (8-10mmHg) with many vasogenic waves exceeding 20mmHg. The pulse amplitude of the ICP waveform was elevated, especially during the vasogenic waves. The averaged amplitude of the slow B waves was above 5mmHg and the derived RAP coefficient was above 0.7 most of the time, signifying poor compensatory reserve.
Figure 2A: CSF infusion study performed via a ventricular access device; normal result showing ICP, heart rate and ICP pulse amplitude; infusion rate 1.5ml/min. The opening pressure (5mmHg) and amplitude were normal. During infusion the ICP increased to a plateau of 15mmHg, enabling calculation of the resistance to CSF outflow (7mmHg/ml/min). The low pulse amplitude and absence of vasogenic waves are characteristic of a normal study. Measurement of the heart rate from the pulse amplitude enables the technical quality of the recording to be assessed.
Figure 3A: Flow-pressure curves for a differential pressure-regulating valve. The pressure regulating mechanisms try to maintain the same differential pressure across the valve regardless of the flow rate. In practice most manufacturers market high, medium and low-pressure valves, each with different pressure-flow characteristics.
Figure 3B: Flow-pressure curve for a flow regulated valve. The flow regulator attempts to change its resistance in response to the differential pressure, thereby maintaining flow at a constant level. The Orbis Sigma Valve has a variable resistance that increases in the mid-range, acting as a flow control mechanism.
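The contrast between the two valve designs in Figure 3 can be summarised numerically. This is a hypothetical sketch: the opening pressure, resistance and safety threshold below are illustrative values, not manufacturer data.

```python
def differential_pressure_valve(dp_mmhg, opening_mmhg=5.0, resistance=0.5):
    """Flow (ml/hr) grows with differential pressure above the opening
    pressure; `resistance` is mmHg per (ml/hr). Illustrative values only."""
    return max(0.0, (dp_mmhg - opening_mmhg) / resistance)

def flow_regulated_valve(dp_mmhg, opening_mmhg=5.0, plateau_ml_hr=20.0,
                         resistance=0.5, safety_mmhg=28.0):
    """Orbis Sigma-like behaviour: ordinary resistance at low pressures,
    flow capped near 20 ml/hr in the mid-range, and a safety release above
    ~25-30 mmHg differential pressure."""
    unregulated = differential_pressure_valve(dp_mmhg, opening_mmhg, resistance)
    if dp_mmhg > safety_mmhg:      # safety mechanism: resistance drops again
        return unregulated
    return min(unregulated, plateau_ml_hr)

for dp in (10, 20, 40):            # low, mid-range, above the safety threshold
    print(dp, differential_pressure_valve(dp), flow_regulated_valve(dp))
```

In the mid-range (20 mmHg here) the two designs diverge: the differential pressure valve lets flow climb with pressure, while the flow-regulated valve holds it near the 20 ml/hr plateau until the safety threshold is crossed.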
Neurosurgery Article

Marek Czosnyka, PhD (Warsaw), DSc (Warsaw) in Biomedical Engineering, is Reader in Brain Physics and Director of Neurosurgical Physics in the Academic Neurosurgical Unit, University of Cambridge, UK. He is also Associate Professor at Warsaw University of Technology, Faculty of Electronics, Poland. He is interested in hydrocephalus research with an emphasis on CSF dynamics, cerebrovascular factors and mathematical modeling.
Peter Whitfield is Consultant Neurosurgeon at the South West Neurosurgical Centre, Plymouth.He has a PhD in the molecular biology of cerebral ischaemia.Clinical interests include vascular neurosurgery, image guided tumour surgery and microsurgical spinal surgery.
|
v3-fos-license
|
2021-07-26T00:06:09.455Z
|
2021-06-08T00:00:00.000
|
236259339
|
{
"extfieldsofstudy": [
"Economics"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.preprints.org/manuscript/202106.0240/v1/download",
"pdf_hash": "da3435dde52e065db3ba85524b701d3bdb4c9be5",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2104",
"s2fieldsofstudy": [
"Economics"
],
"sha1": "8dc4c1c7c09f95fe01cde2bb734496eb2c3908ca",
"year": 2021
}
|
pes2o/s2orc
|
Factors that Determine and Influence Foreign Exchange Rates
The foreign exchange rate is crucial for determining the economic health of a country. The foreign exchange rate provides financial stability, enhances purchasing power and allows global trade. This rate usually fluctuates due to the market forces which control the supply and demand of the currency. Nominal and relative inflation and income levels have a substantial effect on determining exchange rates. Government measures, international situations, natural disasters or any unexpected situation, such as Covid-19 or the Rohingya crisis, can also affect exchange rates. Besides this, interactions between these factors can affect the market in different ways. This study tries to identify some of these factors with relevant examples.
Introduction
Foreign exchange rates are one of the most critical determinants of the economy of any country. Exchange rates play a vital role in any country's trade conditions. Foreign currencies are exchanged at the equilibrium exchange rate. When the rate increases, it is called appreciation of the exchange rate, and when it decreases, it is called depreciation. The foreign exchange rate is determined over time by changes in supply and demand. These changes in the demand and supply schedules occur because of certain factors (Eitrman, 2013). These are known as the factors influencing foreign exchange rates.
Impact of Foreign Exchange Rates on Economy
Before discussing what factors influence the exchange rate, we describe the effect of the exchange rate on a country's economy. A higher exchange rate makes exports expensive and imports cheaper for the country, while a lower exchange rate makes exports more affordable and imports costly. Any country expects a higher exchange rate, as it keeps the country's condition favorable. It is also expected because a higher exchange rate helps keep the country's balance of trade lower. Thus, the factors that influence exchange rates positively or negatively impact the whole economy of a nation.
Factors Influencing Foreign Exchange Rates
There are many factors that influence exchange rates, but five are considered most important: inflation, interest rate differentials, differences in income level, government controls and changes in expectations (Madura, 2010). All the factors are shown in one equation below:

e = f(ΔINF, ΔINT, ΔINC, ΔGC, ΔEXP)

where
e = percentage change in the foreign exchange spot rate
ΔINF = change in the differential between home country inflation and foreign country inflation
ΔINT = change in the differential between home country interest rates and foreign country interest rates
ΔINC = change in the differential between home country income level and foreign country income level
ΔGC = change in government controls
ΔEXP = change in expectations of future exchange rates
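The functional form of f(·) is not specified in the text. Purely as an illustrative sketch (the linear weights below are invented, and real sensitivities, including their signs, would have to be estimated empirically):

```python
def pct_change_spot_rate(d_inf, d_int, d_inc, d_gc, d_exp,
                         weights=(0.5, 0.4, 0.3, 0.2, 0.4)):
    """e = f(dINF, dINT, dINC, dGC, dEXP) as a hypothetical linear model:
    each weight is the sensitivity of the spot rate to one differential.
    The weights here are made up for illustration."""
    differentials = (d_inf, d_int, d_inc, d_gc, d_exp)
    return sum(w * d for w, d in zip(weights, differentials))

# No change in any differential leaves the spot rate unchanged:
print(pct_change_spot_rate(0, 0, 0, 0, 0))   # 0.0
# A 2-point rise in the inflation differential alone moves e by 2 * 0.5 = 1.0%:
print(pct_change_spot_rate(2, 0, 0, 0, 0))   # 1.0
```

The point of the sketch is only the structure of the equation: e responds to changes in each differential, and the factors can act simultaneously.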
Relative Inflation Rates
Changes in relative inflation rates affect international trade activity, which influences the demand and supply of currencies and thereby influences exchange rates (Eun, 2007). Suppose a US firm and a UK firm sell goods that are substitutes. If UK inflation increases and US inflation remains the same, then demand in the UK for US goods will increase, increasing UK demand for US dollars. Moreover, US demand for UK goods will fall, reducing the supply of US dollars. Because of the reduction in supply, the supply curve will shift leftward, and because of the increased demand, the demand curve will shift rightward; the new equilibrium exchange rate will therefore be higher than the present rate. Let the US exchange rate be £1.5 per dollar. The increase in UK inflation raises UK demand for dollars and reduces their supply, creating upward pressure on the dollar price, so the new equilibrium exchange rate is £1.6 = $1. But if US inflation had increased and UK inflation remained the same, the opposite would have occurred: the US supply of dollars would have increased, UK demand would have decreased, and the exchange rate would have fallen to a new equilibrium price. In this situation, the exchange rate would have been £1.4 = $1.
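The shift described above can be reproduced with a minimal linear supply-demand sketch. The curve intercepts and slopes are invented so that the equilibrium moves from £1.5 to £1.6, as in the example:

```python
def equilibrium_price(demand_intercept, supply_intercept,
                      demand_slope=-100.0, supply_slope=100.0):
    """Price (GBP per USD) where quantity demanded equals quantity supplied:
    demand_intercept + demand_slope*p = supply_intercept + supply_slope*p."""
    return (demand_intercept - supply_intercept) / (supply_slope - demand_slope)

base = equilibrium_price(demand_intercept=320, supply_intercept=20)
print(base)     # 1.5  (GBP per USD before the inflation change)

# UK inflation rises: demand for dollars shifts right, supply shifts left.
shifted = equilibrium_price(demand_intercept=330, supply_intercept=10)
print(shifted)  # 1.6  (dollar appreciates, as in the example)
```

Shifting the curves the other way (demand left, supply right) produces the mirror-image £1.4 outcome described for the opposite inflation scenario.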
Relative Interest Rates
Investment in foreign securities is affected by changes in relative interest rates, which alter the demand and supply of a currency and so influence exchange rates (Alam, Khondker & Molla, 2013). Assume the UK interest rate rises while the US rate remains the same. UK investors will likely reduce their demand for dollars, since UK rates are now more attractive and US banks need to keep fewer deposits. US investors with excess cash, wanting to invest more in the UK, will increase the supply of dollars for sale. Because the demand curve for dollars shifts inward and the supply curve shifts outward, the equilibrium exchange rate falls. Suppose the exchange rate is £1.5 = $1. As UK interest rates rise and US rates stay the same, reduced demand and increased supply of dollars depreciate the dollar to a new equilibrium exchange rate of £1.4 = $1. Conversely, if the US interest rate rises while the UK rate remains the same, demand for dollars increases as US investors prefer to invest at home, and the supply of dollars falls as banks need to keep more deposits. The outward shift of the demand curve and inward shift of the supply curve raise the equilibrium exchange rate to £1.6 = $1.
Relative Income Levels
The third factor influencing the exchange rate is the relative income level, because income affects the amount of imports demanded and thereby the exchange rate (Eun, 2007). Suppose that, between the US and the UK, the UK income level increases while the US income level remains unchanged.
As a result, the demand curve shifts upward because higher UK income raises UK demand for US goods, while the supply schedule is not expected to change; the equilibrium exchange rate therefore rises. Suppose the exchange rate is £1.5 = $1. If the UK's income level increases and US income remains the same, UK demand for US goods increases but supply is unchanged, so the rate appreciates to £1.6 = $1. Conversely, if the UK's income level decreases while US income stays the same, UK demand for US goods falls, the demand curve shifts downward with no change in the supply schedule, and the equilibrium exchange rate falls to £1.4 = $1.
Government Controls
Governments can influence equilibrium exchange rates in many ways, imposing measures such as foreign exchange barriers, foreign trade barriers, and direct intervention in the foreign exchange market (Madura, 2010). Return to the interest rate example above, where the UK rate rose while the US rate stayed the same; the typical result would be an increase in the supply of dollars. If the government imposes a high tax on interest income earned from foreign investment, investors will be discouraged from exchanging dollars for pounds, and the exchange rate will rise rather than fall. If the government imposes the other barriers described above, supply decreases and, whether demand rises or stays unchanged, the exchange rate moves in the opposite direction to the unrestricted case. Suppose the exchange rate is £1.5 = $1. Under the normal mechanism, the rise in UK interest rates should push it down to £1.4 = $1; but with a high tax on foreign investment, the supply curve shifts inward and the rate instead rises to £1.6 = $1. Likewise, if demand stays the same but government barriers reduce supply, the rate rises to £1.6 = $1; and if demand for dollars increases while intervention cuts supply, the equilibrium rises further still. In these ways government intervention can push the exchange rate against its market direction.
Expectations
The last factor that influences the exchange rate is expectations of future exchange rates. Like all financial markets, the foreign exchange market reacts to news about the future, and the effect can either raise or lower the exchange rate (Eitrman, 2013). Suppose news spreads in the US that inflation will rise in the near future. Traders will sell US dollars, increasing supply while demand is unaffected, and the exchange rate will fall. If instead news spreads that there will be deflation in the US, traders will buy dollars, increasing demand for US dollars with no change in supply, and the equilibrium exchange rate will rise. Suppose the equilibrium exchange rate is £1.5 = $1. As news of future US inflation spreads and traders sell their dollars, supply increases with no rise in demand, and the rate falls to £1.4 = $1. News of US deflation would instead raise traders' demand for dollars.
Natural Disasters or Unexpected Event
Natural disasters such as floods, cyclones, and tornadoes occur from time to time and affect the foreign exchange rate. They can create food shortages and heavy damage to the economy; the domestic currency weakens and demand for foreign currency grows, shifting the exchange rate to a level adverse for the country (Escaleras & Register, 2011). Unexpected events such as Covid-19 can also affect the foreign exchange market, as governments of different countries impose lockdowns and economic growth is severely hit (Ahamed, 2021). The recent Rohingya crisis in the country likewise put pressure on food and other imports, which in turn affected the currency's price (Minar, 2019). International trade and exchange markets stagnate and the global supply chain faces severe challenges, creating volatility in the foreign exchange market and increasing pressure on the country's economy.
Current Account Deficit
The current account is the part of the balance of payments that reflects the exchange of goods and services with other countries. A deficit in this account shows that the country is spending more abroad than it earns, which means it needs more foreign currency than its earnings provide. This excess demand for foreign currency pushes the home currency's exchange rate down until home goods become cheap enough for foreigners (Agaroal, 2009).
Public Debt and Tax
Countries with large public debts spend most of their earnings on debt service, which discourages foreigners from doing business with them, making imports expensive and exports cheap (Eitrman, 2013). This, in turn, depreciates those countries' exchange rates against foreign currencies. Changes in the tax rate also draw money into and out of a country: empirical evidence shows a significant expansionary effect of tax cuts on macroeconomic variables, with cuts in personal and corporate income taxes raising output, investment, employment, and consumption (Alam, 2021).
Political Stability
Foreign investors seek politically stable countries: the more stable a country's political condition, the more attractive it is in the eyes of foreign investors, and vice versa (Eun, 2007). A politically stable country therefore draws greater foreign demand for its currency, so its exchange rate appreciates against foreign currencies and its currency strengthens.
Terms of Trade
Terms of trade are another important factor affecting a country's exchange rate. If a country exports more than it imports, its terms of trade are favorable; if the opposite occurs, they are not. When exports rise, the country's goods are in strong demand abroad, which translates into increased demand for its currency, and the currency appreciates (Madura, 2010).
Market Judgment and Speculation
The foreign exchange market does not always follow a logical pattern. Rates are sometimes driven by judgments and emotions as much as by analysis of economic and political events; before information becomes public, the market forms its own judgment and exchange rates move accordingly (Madura, 2010). Speculation by the market's major traders is another important influence. Direct movement of currencies in the international market is very low; most trades are speculative trades on currencies, and these influence exchange rates (Agaroal, 2009).
Interaction of Factors
Factors rarely affect the exchange rate individually; their effects are simultaneous, and interaction among them can alter each factor's influence. Because of this interaction, the exchange rate sometimes moves in the opposite direction from what a single factor would predict (Feinberg, 1986). For example, an increase in income levels can raise expectations of higher interest rates, while also increasing imports and financial inflows. Favorable financial flows can strengthen the local currency, so the exchange rate falls against foreign currencies. In this way the interaction of factors can produce different results.
Conclusion
In conclusion, many factors influence exchange rates. The five most important are inflation, interest rate differentials, differences in income level, government controls, and changes in expectations; these shift the demand and supply schedules and establish a new equilibrium exchange rate. Other factors, such as political stability, terms of trade, and market judgment, also play an essential role in determining the demand for a currency and hence changes in the foreign exchange rate.
Role of SDF1/CXCR4 Interaction in Experimental Hemiplegic Models with Neural Cell Transplantation
Much attention has been focused on neural cell transplantation because of its promising clinical applications. We have reported that embryonic stem (ES) cell derived neural stem/progenitor cell transplantation significantly improved motor functions in a hemiplegic mouse model. It is important to understand the molecular mechanisms governing neural regeneration of the damaged motor cortex after the transplantation. Recent investigations disclosed that chemokines participated in the regulation of migration and maturation of neural cell grafts. In this review, we summarize the involvement of inflammatory chemokines including stromal cell derived factor 1 (SDF1) in neural regeneration after ES cell derived neural stem/progenitor cell transplantation in mouse stroke models.
Introduction
Cerebral vascular diseases often cause severe neurological dysfunctions with high disability and mortality [1]. There is accumulating evidence that supports the effectiveness of neural tissue transplantation in the recovery of neurological dysfunctions in experimental stroke models [1][2][3][4][5][6]. In rodents, endogenous neural stem cells are enriched in the subventricular zone (SVZ) [7] and the subgranular zone (SGZ) [8] even in adult brains. Neural stem cells in the SVZ migrate to the olfactory bulb (OB) through the rostral migratory stream (RMS) and differentiate into neural cells [9]. This migration of stem cells is called chain migration because of its histological features. Neural stem/progenitor cells have a high capacity to proliferate and to differentiate into several neural cell types in the CNS [10][11][12]. Therefore, endogenous neural stem/progenitor cells may be responsible for the spontaneous functional recovery frequently observed in the damaged CNS of mice. In brain injury models of mice, astrocytes around the injured area activate neural stem cells in the SVZ and SGZ, and prompt them to migrate to the injured area, where they differentiate into mature neural cells (Figure 1) [6,13,14]. However, the number of endogenous neural stem/progenitor cells is not sufficient to repair the damaged brain efficiently. Neural cells derived from endogenous neural stem cells migrating from the SVZ replace only 0.2% of the dead neurons in the injured area in the middle cerebral artery occlusion (MCAO) model [13].
The human brain has an RMS consisting of cells similar to those in the rodent brain, including the OB [15]. The activity of cells in the RMS is relatively lower in humans than in rodents, suggesting that the cell replacement rate in the human OB is much lower than in rodents. Moreover, the human OB weighs less than 0.1% of the whole human brain, whereas the rodent OB occupies more than 20% of the total rodent brain [16]. These findings suggest that the regenerative and proliferative potential of the cells in the OB is much lower in humans than in rodents [15].
Transplantation of exogenous neural cells derived from ES and induced pluripotent stem (iPS) cells has the possibility to provide sufficient numbers of neural cells to the damaged tissue, leading to the restoration of lost motor functions. In our hemiplegic model of mice, transplantation with both mouse and monkey ES cell derived neural stem/progenitor cells improved motor functions [4,5,17]. The neural graft derived from the ES cells migrated to the injured area and expressed neural cell adhesion molecules which mediated homophilic binding [5]. We speculate that both cell-to-cell and cell-to-soluble factor interactions are essential for the grafted neural stem/progenitor cells to differentiate toward neural cells suitable for their neighboring environment and to form new neural circuits with the damaged host neurons.
Chemokines are small polypeptides consisting of about 100 amino acids, initially identified as molecules which belong to a subtype of cytokines produced from immune cells, and contribute to the maturation and trafficking of leukocytes. Chemokines and their receptors play an important role in neural cell migration and they are constitutively expressed in glial and neural cells in the CNS.
Chemokines are categorized into four groups (CXC, CC, C and CX3C chemokines) by the number and position of the conserved cysteine residues in their amino termini. Chemokine receptors are categorized into four groups (CXCRn, CCRn, XCRn and CX3CRn), each corresponding to the respective chemokine nomenclature above, and belong to seven-transmembrane-domain G-protein coupled receptors. Several chemokines are involved in neural formation in both ontogenic development and tissue regeneration through their interaction with neural cell surface molecules [18][19][20].

Figure 1. (a,b) Stereographic drawing of the brain and three-dimensional positions of the subventricular zone (SVZ), subgranular zone (SGZ), rostral migratory stream (RMS) and olfactory bulb (OB). Continuous generation of neural cells is shown in both the SVZ of the lateral ventricle and the SGZ of the dentate gyrus in the mouse hippocampus; (c,d) endogenous neural stem cell (NSC) migration to facilitate regeneration of the damaged area in the middle cerebral artery occlusion (MCAO) model. The migratory pathway of the endogenous neural cells generated in the SVZ is shown: they migrate to the injured area directly or through the RMS and OB, but their numbers are insufficient to enable repair of the damaged tissue; (e,f) neural cell migration after ES cell derived NSC transplantation in the hemiplegic mouse with brain injury. ES cell derived neural stem/progenitor cells were injected into the periventricular region of the striatum; the transplanted cells migrated to the injured motor cortex and spread diffusely over the cortex, presumably replacing and regenerating the damaged cortex and leading to recovery of the hemiplegia.
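The four-group nomenclature above can be captured in a small lookup table pairing each chemokine group with its receptor family. The example chemokines are ones discussed in this review; this is a simple sketch, not an exhaustive mapping.

```python
# Chemokine groups, named by the spacing of the conserved N-terminal
# cysteines, paired with the corresponding receptor family.
GROUP_TO_RECEPTOR_FAMILY = {
    "CXC": "CXCRn",
    "CC": "CCRn",
    "C": "XCRn",
    "CX3C": "CX3CRn",
}

# A few chemokines mentioned in this review, keyed to their group.
CHEMOKINE_GROUP = {
    "SDF1": "CXC",         # signals through CXCR4 (and CXCR7)
    "MCP1": "CC",          # signals through CCR2
    "RANTES": "CC",        # signals through CCR5
    "fractalkine": "CX3C",
}

def receptor_family(chemokine):
    """Return the receptor family for a known chemokine."""
    return GROUP_TO_RECEPTOR_FAMILY[CHEMOKINE_GROUP[chemokine]]

print(receptor_family("SDF1"))  # CXCRn
```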
In our hemiplegic model, stromal cell derived factor 1 (SDF1) and some neural cell associated adhesion molecules, such as L1 cell adhesion molecule (L1CAM), neural cell adhesion molecule (NCAM) and N-cadherin, were associated with migration and maturation of the neural stem/progenitor cells derived from monkey ES cells. Here, we review chemokine regulation of neural regeneration after ES cell derived neural stem/progenitor cell transplantation.
Transplanted Neural Cells and Chemokines
After brain injury, astrocytes respond to some signals from the injured area and produce various molecules which provoke neural stem/progenitor cell proliferation, migration and differentiation to start regeneration of the damaged tissue (Figure 3a,b) [42][43][44]. Astrocytes produce several inflammatory chemokines, such as SDF1, MCP1, MIP1α, MIP1β, RANTES and fractalkine [18,20]. Neural stem/progenitor cells express chemokine receptors, such as CCR2, CCR5, CXCR3, and CXCR4 [45]. It has been reported that CCR2 (the receptor of MCP1) and CXCR4 (the receptor of SDF1) are involved in the migration of endogenous neural stem cells to the injured area in mice [44,46]. Interaction between SDF1 and CXCR4 activates several signaling molecules in neural stem cells including p38MAPK, ribosomal S6 kinase, c-Jun and paxillin. The interactions are followed by various cell activities, such as proliferation, chemotaxis and migration with conformational changes of cytoskeleton for neurite outgrowth. The SDF1/CXCR4 interaction on the neural cell graft activates several signaling pathways including mitogen-activated protein kinase (MAPK). Various cellular responses including proliferation and migration of the neural cell graft are induced by the MAPK activation. As for migration, activated p38MAPK is involved in actin reorganization essential for cell migration. The ribosomal S6 kinase (RS6K), which is activated by extracellular signal-regulated kinases (ERK), provokes the phosphorylation of cytoskeletal molecules which are mandatory for cell migration and neurite elongation. MAP/MEK kinase 1 (MEKK1) is essential for cell migration, because the c-Jun N-terminal kinase (JNK), which is phosphorylated by MEKK1, phosphorylates the focal adhesion adaptor molecule, paxillin, important for cell migration. 
Collectively, accumulation of the phosphorylation and subsequent activation events following SDF1/CXCR4 interaction brings about neural cell migration, observed in mice with neural cell grafts (Figure 3c) [47][48][49][50]. SDF1 stimulates cortical astrocyte proliferation through Src-ERK1/2 activation [36].
SDF1 and CXCR4 are highly expressed in the embryonic brain. Mice with congenital deficit of CXCR4 or SDF1 shared the same pathological development, hypoplasia of hippocampal external granule cell layer [51,52]. Recent analysis using CXCR4 transfected neural progenitor cells revealed that SDF1/CXCR4 regulated adult neural progenitor cell motility but not differentiation [53]. SDF1 utilizes both CXCR4 and CXCR7 for its receptor. SDF1 activates Galpha (i1) protein-dependent signaling pathway through CXCR4. Through CXCR7, SDF1 does not activate Galpha (i1) signaling pathway but activates the MAP kinase pathway [54]. In mouse development, CXCR7 also contributes to cortical interneuron migration. Mice deficient either in CXCR4 or CXCR7 show similar phenotypes. However, there is a distinct difference in the interneuron tangential and radial migration motility between CXCR4 and CXCR7 deficient mice during the early embryonic stage [55]. We think that further studies are needed to clarify the relation between these molecules in the regeneration of damaged CNS after transplantation.
When we conducted monkey ES cell derived neural cell transplantation into hemiplegic mice with brain injury (Figure 1), a significant recovery of motor functions was observed [3]. We found unidirectional migration of the neural cells from the grafted periventricular region toward the damaged motor cortex where SDF1 was expressed extensively. This migration resembled so-called chain migration, physiologically shown in embryonic brain development and adult forebrain (Figure 3b,e).
Using a microchemotaxis assay in vitro, we showed that the migration ability of mouse and monkey ES cell derived neural stem/progenitor cells depended on the concentration gradient of SDF1. Blocking CXCR4 signaling with AMD3100, an antagonist of CXCR4, inhibited migration of the neural stem/progenitor cells both in vitro and in vivo [5,56]. Migration of the neural stem/progenitor cells was not affected by other chemokines, such as MCP1, CTACK, RANTES, fractalkine and MIP1α. The ES cell derived neural stem/progenitor cells expressed CXCR4 but did not express CCR2 or any other major chemokine receptors [5].

Figure 3. (a,b) Astrocytes and vascular endothelial cells in the injured area (pink) produce several chemokines (e.g., SDF1). After expressing chemokine receptors (e.g., CXCR4), the neural stem/progenitor cells react with the chemokines and start to move along the concentration gradient, which is highest at the injured area; (c) the SDF1/CXCR4 interaction on the neural cell graft activates several signaling molecules including p38MAPK, ribosomal S6 kinase, c-Jun and paxillin, followed by various cell activities, such as proliferation, chemotaxis and migration with conformational changes of the cytoskeleton for neurite outgrowth; (d) conceptual classification of neural cell migration in mice with transplantation: multidirectional migration, in which neural stem/progenitor cells accumulated in a region migrate in all directions, versus unidirectional migration; (e) unidirectional migration, in which neural stem/progenitor cells move continuously in one direction, following the concentration gradient of the chemokines from lower to higher; cells in unidirectional migration look like a chain of cells on histological examination.
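The dependence of migration on the SDF1 concentration gradient can be illustrated with a toy one-dimensional model, in which a cell repeatedly steps toward the neighbouring position with the higher concentration. The concentration profile, positions, and step rule below are all hypothetical, chosen only to show gradient-following behaviour.

```python
# Toy 1-D chemotaxis: a cell at position x moves one step per iteration
# toward the neighbouring position with the higher SDF1 concentration.
def sdf1_concentration(x, injury_site=100):
    """Hypothetical concentration profile, peaking at the injured cortex."""
    return 1.0 / (1.0 + abs(x - injury_site))

def migrate(start, steps, injury_site=100):
    x = start
    for _ in range(steps):
        left = sdf1_concentration(x - 1, injury_site)
        right = sdf1_concentration(x + 1, injury_site)
        if left == right:      # at the concentration peak: no net movement
            break
        x += 1 if right > left else -1
    return x

# A graft "injected" at position 0 migrates up the gradient to the injury.
print(migrate(start=0, steps=150))  # 100
```

Blocking the receptor, as with AMD3100, would correspond to removing the gradient term, leaving the cell with no directional cue.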
In another in vitro study of our own, we found that ES cells differentiated efficiently into neural cells in the presence of SDF1, whereas MCP1 and other major chemokines did not affect their differentiation. We think that migration and subsequent maturation of the neural stem/progenitor cells transplanted to the damaged brain are mainly caused by SDF1 secreted from glial cells accumulating around the injured area (Figures 2 and 3b) [5].
We found that ES cell derived neural stem/progenitor cells migrated as a cell aggregate from the periventricular region of the striatum, where they had been injected, to the damaged motor cortex. The migration was inhibited by the administration of AMD3100, suggesting that the cells were guided by the concentration gradient of SDF1 secreted by glial cells accumulated in the damaged cortex. Migration of the grafted neural stem/progenitor cells resembled so-called chain migration, not radial (multidirectional) migration, in the recipient brain (Figure 3d) [5].
CCR2 deficient mice do not show any abnormality of neural stem/progenitor cells throughout the embryonic CNS development. Low expressions of MCP1 and CCR2 molecules on the CNS are physiologically recognized in the mice throughout their lives. Redundancy of chemokines/cytokines may explain the lack of abnormality [20].
Neural Stem/Progenitor Cells and Neural Cell Associated Adhesion Molecules
Eventually, the grafted cells migrate into the superficial layer of the injured motor cortex and, with their extended axons, reconnect the pyramidal tract that was once damaged. Migration of the endogenous neural cells in the adult forebrain, including the RMS, needs interactions with neighboring cells via neural cell associated adhesion molecules and other cell surface molecules.
The polysialylated form of the neural cell adhesion molecule (PSA-NCAM) is one of the homophilic binding cell adhesion molecules. Homophilic PSA-NCAM interaction between endogenous neural stem/progenitor cells emerging from the SVZ and the surrounding neural cells already present there is important to form the chain migration (or tangential migration) of the RMS [57,58]. The neural cells in chain migration are surrounded by a microenvironment mainly consisting of astrocytes, with which the neural stem/progenitor cells interact to promote their migration (Figure 3b) [59].
Neural cell associated adhesion molecules, such as L1CAM and NCAM, and N-cadherin are important for axon elongation [60,61]. L1CAM and NCAM are members of the immunoglobulin superfamily and they are widely expressed in neural tissues during development. Both L1CAM and NCAM mediate homophilic and heterophilic adhesion [62]. Cell adhesion molecules are assigned an important role in the cytoskeletal and transcriptional event during neurite outgrowth [60]. L1CAM plays a role in neurite extension and NCAM is important for the cone protrusive growth of axon [63]. L1CAM deficient mice have enlarged ventricles and severe hypoplasia of the corticospinal tract [64]. NCAM deficient mice show a primary defect in embryonic neural cell migration and subsequent defects in axon growth and fasciculation [65]. Mice deficient in N-cadherin show that neuroepithelial and radial glial cells do not expand their processes to span the distance and therefore terminate them in the middle zone of the cortex, suggesting that N-cadherin contributes to neurite outgrowth and synaptic connection [66].
In our study, ES cell derived neural stem/progenitor cells started to express NCAM and L1CAM mRNAs 4 h after SDF1 stimulation in vitro [5]. The cells grafted to mouse brain started to express NCAM, L1CAM and N-cadherin simultaneously soon after transplantation, and formed homophilic and heterophilic intercellular bindings and de novo neural network at the damaged cortex 28 days after transplantation [5]. It was possible that L1CAM, NCAM, and N-cadherin induced by SDF1 contributed to regenerating neural network by promoting the extension of axons/neurites in the damaged cortex [67,68].
Conclusions
SDF1 and several neural cell adhesion molecules play a role in migration and differentiation of the grafted neural stem/progenitor cells and subsequent neural network reconstruction in the damaged brain. We found that SDF1 was one of the most important molecules among the chemokines tested so far for the regulation of neural stem/progenitor cell migration and the formation of a neural network. Endogenous glial cells around the injured area mainly secreted SDF1. NCAM was expressed on the transplanted neural cells after they reached the damaged cortex. SDF1 induced the expression of neural cell associated adhesion molecules, which in turn helped promote appropriate differentiation of the neural stem/progenitor cells and subsequent regeneration of the neural network in vivo.
Our hemiplegic mouse model served as a basis to understand the molecular mechanisms governing neural regeneration after transplantation, and indicated the importance of SDF1 and the neural adhesion molecules for the recovery of motor functions.
Not too small to benefit society: insights into perceived cultural ecosystem services of mountain lakes in the European Alps
Although the importance of lakes for providing cultural ecosystem services (CES) is widely recognized, the integration of associated values and benefits in decision making is still underdeveloped. Therefore, this study aimed at collecting and analyzing people’s perceptions related to various CES of mountain lakes using an online questionnaire. We thereby distinguished societal values in terms of CES from individual experiences that contribute to subjective well-being and elicited perceived pressures reducing the quality of nature-based experiences. Based on 526 responses, our results indicate that bequest, symbolic, aesthetic, and spiritual values are perceived as most important, while representation and entertainment were less important. Accordingly, experiences such as connection to nature, relaxation, and freedom had the highest values. In terms of pressures, crowdedness was mentioned most often, followed by noisiness and garbage. These pressures mostly affected experiences such as connection to nature, freedom, relaxation, peace, and memories, with negative effects also on CES, mainly on aesthetic value, sense of place, existence value, and symbolic value. In general, the perceptions were highly consistent across different socio-cultural groups. Nevertheless, some differences emerged between groups with different cultural backgrounds with respect to CES and pressures, while differences in experiences were mostly related to gender. Our findings advance the understanding of CES related to mountain lakes and provide useful insights for research as well as decision and policy making, emphasizing the high intrinsic value expressed by the respondents as well as the variety of CES and experiences associated with mountain lakes. 
Moreover, the identified pressures provide a valuable basis for consideration in tourism management, the protection of natural resources, and sustainable development because they advance our understanding of how infrastructure development and socioeconomic changes may aggravate impacts on societal values and individual experiences.
We distinguish the CES potential of mountain lakes, representing societal values, from CES experiences representing individual values related to subjective well-being. In particular, we understand CES potential as the capacity of mountain lakes to support activities and interactions of people with ecosystems, creating cultural values that are of societal importance (Kenter et al. 2015, Small et al. 2017, Muhar et al. 2018). We distinguish CES experiences that can be associated with a visit to mountain lakes by referring to different facets of subjective well-being (Russell et al. 2013, Bryce et al. 2016), which represent individual perceptions depending on personal preferences and values (Small et al. 2017, Muhar et al. 2018). In this way, we recognize the interactions between the individual level (CES experience) and the collective level (CES potential) in the perception and understanding of nature-human relationships (Muhar et al. 2018).
The subjective character of CES makes it difficult to quantify CES in biophysical or monetary terms (Daniel et al. 2012). Many studies have therefore applied non-monetary methods, including stated preference methods such as interviews, questionnaires, and participatory mapping methods, or revealed preference methods using social media data (Cheng et al. 2019). The analysis of social media data, e.g., photographs posted on online platforms, is relatively cost-efficient and can be applied for CES such as outdoor recreation at regional or cross-regional level (e.g., Angradi et al. 2018, Keeler et al. 2015, Oteros-Rozas et al. 2018). Insights from social media data are still limited because not all CES can be assessed without interviewing people and asking for their thoughts or feelings (Moreno-Llorca et al. 2020). Stated preference methods are often applied to elicit in-depth insights on CES, indicating a generally high agreement about preferences concerning the attributes of valued ecosystems or semi-natural contexts among respondents (Daniel et al. 2012). Some studies have also found that perception of CES can be influenced by the socio-cultural characteristics of the respondents (Quintas-Soriano et al. 2018), i.e., perceptions can diverge because of different underlying values and belief systems (Muhar et al. 2018). For example, younger people prefer urban green spaces for social interactions, while older people appreciate more quiet nature-based recreational activities (Riechers et al. 2018), or females value CES provided by grassland more than males do (Nowak-Olejnik et al. 2020). Other studies have found some differences between local residents and visitors in terms of landscape preferences (Soliva et al. 2010, Zoderer et al. 2016a) or their connection or affinity with specific land-use types (Sayadi et al. 2009, van Zanten et al. 2016).
In summary, three major issues need to be addressed to better support the integration of CES into policies and management with regard to mountain lakes. First, to overcome conceptual issues related to CES (Plieninger et al. 2015), the distinction between community-based values and individual benefits may provide useful information for decision makers (Small et al. 2017). Second, knowledge about socio-cultural differences in perceptions can support the development of better-targeted management strategies. Finally, in contrast to large and low elevation lakes, small mountain lakes have rarely been the focus of CES assessments. Research on mountain lakes has mostly focused on ecological issues, but little is known about people's perceptions of the provided CES. Such knowledge is particularly important to develop sustainable management and conservation strategies considering the increasing impacts of global change on these sensitive ecosystems (Schmeller et al. 2018, Moser et al. 2019) as well as the increasing demand for outdoor recreation opportunities and nature-based tourism in mountain regions (Buckley et al. 2015, Pröbstl-Haider et al. 2021). To address these challenges, we aimed to identify how people perceive CES of mountain lakes. Specifically, we aimed to (1) assess the variety of values people associate with these small ecosystems, distinguishing between CES potential (henceforth referred to as CES) and CES experiences (henceforth referred to as experiences); (2) recognize differences across socio-cultural groups; (3) identify positive and negative correlations among CES and experiences; and (4) assess the pressures on experiences. A better understanding of these issues may be helpful in managing the pressures on sensitive ecosystems and in anticipating potential conflicts between different user groups.
Using a questionnaire, which was distributed to people living inside and outside the European Alps, we collected people's perceptions on CES, experiences, and pressures as well as socio-demographic information on the respondents.
Conceptual design
The data for this analysis were derived from a questionnaire (Appendix 1) with closed and open-ended questions. The questionnaire included five sections, which were shown on separate pages:
- The first section started with a short description of the study's purpose and provided a definition of mountain lakes as being "smaller-sized natural lakes, which are located at least 1000 meters above sea level."
- Section 2 referred to CES. We selected CES that are associated with mountain lakes based on the latest version of the Common International Classification of Ecosystem Services (CICES; Haines-Young and Potschin 2018) because the CICES is widely used in mapping efforts and policies in Europe. It offers a high level of detail, and it is partly based on previous classification systems used in the Millennium Ecosystem Assessment (MEA), the Economics of Ecosystems and Biodiversity (TEEB), and different national ecosystem assessments (Burkhard and Maes 2017). We asked the respondents in a closed question to indicate how accurate the provided statements related to CES (Table 1) were on a four-point rating scale ("does not apply at all," "does not really apply," "somewhat applies," "definitely applies") with an additional option for "I don't know."
- In section 3, we asked the respondents about their perceptions of experiences. Based on literature describing different facets of subjective well-being (Russell et al. 2013, Bryce et al. 2016), we selected different types of experiences (Table 2) that can be associated with a visit to mountain lakes. Perceptions of experiences were collected using closed questions, asking the respondents to indicate how accurate the provided statements (Table 2) were on a five-point rating scale from 1 ("does not apply at all") to 5 ("definitely applies").
- In section 4, we included an open-ended question linked to the pressures on experiences related to a visit to a mountain lake.
- In section 5 at the end of the questionnaire, we asked the participants to provide information on gender, age, their relation to the Alps, frequency of visits in nature, and lake affinity (see also Fig. A2.1). The answers were not required, and respondents could complete the questionnaire without filling out the responses. The information from this section was used to compare perceptions across socio-cultural groups (Table 3).
A complete draft of the questionnaire was sent to 17 people, selected to represent the target population, that is, people of different gender, age, educational level, profession, and living in and outside the European Alps. In this pre-test, participants filled out the questionnaire and provided feedback on the presentation, clarity, and completeness of the questions and response options. After evaluating and incorporating suggested changes, the final questionnaire was translated into the three languages: English, German, and Italian.
Data collection
Data were collected via an online survey focusing on respondents living in or visiting the European Alps. The European Alps are the highest mountain range in Europe, extending over about 192,000 km² across different cultures and societies. They include about 6000 small natural lakes (between 0.005 km² and 1 km²) that are located above 800 m a.s.l. (Schirpke et al. 2021a). Being also one of the most important European touristic destinations with more than 100 million visitors each year (Batista e Silva et al. 2018), the greater Alpine region is suitable for analyzing perceptions related to CES of mountain lakes and to examine the socio-cultural influence on values.
The questionnaire was made available between July and December 2020. We targeted people that directly benefit from CES of mountain lakes, for example, during hiking excursions, such as members of Alpine clubs, or people with a professional interest in mountain lakes such as members of associations of biologists, limnologists, etc. To reach many potential respondents living or working in the European Alps and surroundings, we asked various organizations to distribute the links to the questionnaires via their newsletters and social media channels (e.g., Facebook). Our request was supported by the Alpine clubs of different countries (Austria, Germany, and Italy), different associations of biologists and limnologists in Austria and Northern Italy, and the International Commission for the Protection of the Alps (CIPRA), among others. Moreover, we sent invitations via email to research partners and colleagues located in and around the European Alps with the request to forward the links also to their relatives, friends, and colleagues. The responses of all completed questionnaires were registered in a database. Before filling out the questionnaire, the participants were informed that the study was carried out in accordance with national and institutional legal and ethical requirements, i.e., that participation was anonymous and on a voluntary basis (see Appendix 1). All participants also confirmed their voluntary participation. To secure privacy, all data were collected via a web survey with no collection of identifiers/codes and therefore analyzed anonymously.
Data analysis
We analyzed the responses using a combination of qualitative and quantitative methods. To quantify the values of CES and experiences, we assigned numeric values to each answer of a respective question (0 = does not apply at all/don't know, 1 = does not really apply, 2 = somewhat applies, 3 = definitely applies). We then calculated the mean value for each CES and type of experience from these values. For a comparison of values across different socio-cultural groups, we categorized the respondents into groups of similar sample size based on different socio-cultural variables (Table 3). Because some respondents did not provide information on some or all socio-cultural variables, sample sizes may differ from the total sample. Groups with a very small sample size were excluded, e.g., English-speaking respondents (n = 22). We calculated the mean values of CES, experiences, and pressures for each group using cross-tabulation and Chi-Square tests to assess the significance of the differences between groups.
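The coding and group comparison described above can be sketched as follows. This is a minimal illustration in Python with SciPy rather than the SPSS cross-tabulation workflow used in the study; all response counts and group labels are invented.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Responses for one CES item, coded as in the study:
# 0 = does not apply at all / don't know, 1 = does not really apply,
# 2 = somewhat applies, 3 = definitely applies
responses = np.array([3, 2, 3, 1, 0, 2, 3, 3, 2, 1])
mean_value = responses.mean()  # mean value reported per CES

# Cross-tabulation for two illustrative socio-cultural groups:
# rows = groups, columns = counts of the codes 0..3 (invented numbers)
contingency = np.array([
    [5, 10, 40, 45],   # e.g., German-speaking respondents
    [8, 12, 35, 30],   # e.g., Italian-speaking respondents
])
chi2, p, dof, expected = chi2_contingency(contingency)
# a small p (e.g., p <= 0.001) would indicate a significant group difference
```

The same pattern would be repeated per CES, experience, and pressure category across each socio-cultural grouping variable.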
We assessed the relations between individual types of CES and experiences as well as between CES and experiences using correlation analysis to quantitatively evaluate positive and negative relationships (Cord et al. 2017). For each pair, we calculated bivariate correlations (Pearson's r coefficient) in SPSS Statistics (IBM SPSS 26), indicating the strength and direction of the relationship. All significant correlations were plotted as correlograms using the package corrplot version 3.3.3 for R (R Core Development Team 2019).
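A single bivariate correlation of this kind can be reproduced as follows (a sketch using SciPy's `pearsonr` in place of SPSS; the two rating vectors are invented and only stand in for a pair of CES/experience items):

```python
import numpy as np
from scipy.stats import pearsonr

# Invented 0-3 ratings from the same ten respondents for two items,
# e.g., symbolic value and spiritual value
symbolic = np.array([3, 2, 3, 1, 2, 3, 0, 2, 3, 1])
spiritual = np.array([3, 2, 2, 1, 2, 3, 1, 2, 3, 0])

# Pearson's r indicates the strength and direction of the relationship;
# significant pairs would then be plotted in a correlogram (corrplot in R)
r, p = pearsonr(symbolic, spiritual)
```

In practice this is computed for every pair of items, and only the significant coefficients are carried forward to the correlogram.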
Pressures were identified from the open-ended question adopting a qualitative analysis of free lists (Bieling et al. 2014, Wartmann and Purves 2018). All German and Italian responses were translated into English, merging conceptually and semantically similar terms (e.g., too many people/crowds of people, rubbish/garbage), which resulted in 90 different terms. Based on the frequencies of these terms, we identified six broader categories of pressures (crowdedness, noisiness, garbage/pollution, touristic exploitation, bad weather, and anthropization). We then assigned all responses to one or more categories, which were coded into presence/absence. Mentions that did not fit into one of the categories were summarized in a separate category (other). The pressures were also examined for differences between socio-cultural groups using a Chi-Square test.
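The free-list coding can be illustrated like this. The answers and keyword lists below are invented stand-ins for the authors' actual codebook of 90 merged terms, and only four of the six categories are shown:

```python
from collections import Counter

# Invented free-list answers to the open-ended question on pressures
answers = [
    "too many people, loud music",
    "crowds of people",
    "rubbish left at the shore",
    "garbage and noise",
    "ski lifts everywhere",
]

# Merge conceptually/semantically similar terms into broader categories
categories = {
    "crowdedness": ["too many people", "crowds"],
    "noisiness": ["loud music", "noise"],
    "garbage/pollution": ["rubbish", "garbage"],
    "touristic exploitation": ["ski lifts"],
}

counts = Counter()
for answer in answers:
    text = answer.lower()
    for category, keywords in categories.items():
        if any(k in text for k in keywords):  # presence/absence per answer
            counts[category] += 1
```

An answer can contribute to more than one category (as in the study), and anything unmatched would fall into a residual "other" category.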
To identify the differences in the influence of pressures on experiences and consequently on CES, we coded all variables into presence/absence, assigning 1 to "definitely applies" and 0 to all other response options of experiences. We then created correspondence tables to depict the relations between pressures and experiences in a Sankey plot using SankeyMATIC (http://sankeymatic.com/build/). Similarly, the correlations between experiences and CES were assessed using only significant correlations between experiences and CES (p ≤ 0.001).
Characteristics of respondents
In total, we obtained 526 valid responses, with a higher share of female respondents (61%), more German-speaking people (56%), and almost 50% were younger than 45 years ( Fig. A2.1). In terms of their relation to the European Alps, the largest group were respondents visiting the Alps for touristic purposes (44%), followed by residents who were also born in the Alps (38%). A high share of the respondents had frequent contact with nature spending time in nature at least several times a week (64%). More than half of all respondents (54%) also indicated a high lake affinity because they visited mountain lakes at least four times a year.
CES relating to mountain lakes
Respondents attributed the highest value to bequest values, followed by symbolic values, aesthetic values, education, spiritual values, and existence values, whereas entertainment and representation obtained the lowest values of all CES (Fig. 1). Across the socio-cultural groups, high statistically significant differences (p ≤ 0.001) in the valuation of individual CES mainly occurred in relation to cultural background (Table A2.1). Accordingly, German-speaking respondents valued symbolic and spiritual values higher than Italian-speaking respondents, who in turn perceived scientific research, existence value, education, and sense of place as more important. Although almost all socio-cultural groups agreed on bequest value as being the most important CES, some differences in the rankings of CES occurred as well, although they were not always significant (Table A2.1). For example, female respondents valued symbolic values slightly higher than aesthetic values, whereas male respondents assigned higher values to existence values than to symbolic values.
CES were partly correlated to each other (Fig. 2). The highest synergies occurred between symbolic and spiritual values. Bequest values correlated with many other CES, mainly with existence, spiritual, symbolic, and aesthetic values, while existence values were more related to scientific research, education, aesthetic value, and sense of place. The only negative correlation was found between scientific research and spiritual value.
Experiences relating to mountain lakes
In terms of experiences, the highest values occurred for connection to nature, relaxation, and freedom, followed by peace and memories (Fig. 3). The greatest differences between socio-cultural groups occurred for gender (Table A2.2): female respondents generally valued experiences higher than male respondents, in particular, connection to nature, relaxation, peace, memories, health, inspiration, and excitement. Sense of belonging was more important for respondents with high lake affinity, residents, as well as respondents with frequent visits in nature. Synergies occurred between most experiences (Fig. 4). The highest synergies were found between connection to nature and peace. Freedom and excitement were related to all other experiences with the exception of life lessons, whereas the weakest synergies generally occurred for refreshment and life lessons. Some positive and negative correlations were also found between CES and experiences (Fig. A2.2). Experiences were mainly related to aesthetic value, existence value, and sense of place, specifically, inspiration, sense of belonging, health, life lessons, freedom, and connection to nature. Weak negative correlations were found between sense of belonging and outdoor recreation as well as between memories and scientific research. Although refreshment was only related to representation, none of the experiences correlated with entertainment.
Pressures on experiences and CES
Based on the responses to the open-ended question, we identified several pressures that would diminish the experience or prevent extending the visit to mountain lakes. Most often respondents mentioned crowdedness (70.0%), followed by noisiness (27.2%), garbage/pollution (21.3%), touristic exploitation (18.6%), bad weather (12.0%), anthropization (11.8%), and other aspects (4.6%; e.g., disturbance by animals or difficult access). Some significant differences occurred between socio-cultural groups (Table A2.3). A higher share of German-speaking respondents mentioned noisiness, garbage, and touristic exploitation compared to Italian-speaking people. Although noisiness was indicated more often by female respondents, garbage/pollution was stated more often by younger people.
In general, the indicated pressures mostly affected connection to nature, freedom, relaxation, peace, and memories, while having lower impacts on excitement and life lessons (Fig. 5). Garbage/ pollution and anthropization seemed to affect all experiences similarly, whereas the other pressures had some statistically significant differences. Crowdedness had the greatest impact on freedom and sense of belonging. Unlike touristic exploitation, noisiness had the greatest negative effect on inspiration and the least effects on the sense of belonging and refreshment. Bad weather had the highest influence on life lessons, while the other mentioned pressures were most important for refreshment. These negative impacts on experiences also affected CES at different levels (Fig. 5). Relaxation (13.9%), peace (13.0%), freedom (11.9%), and inspiration (11.5%) had the highest impacts on CES, mostly affecting aesthetic value (22.6%), followed by sense of place (20.5%), existence value (18.2%), and symbolic value (15.2%).
CES and experiences relating to mountain lakes
In many studies examining the perceptions of various CES across different landscapes, outdoor recreation was found to be one of the most valued CES (Bieling et al. 2014, Rall et al. 2017, Ko and Son 2018). Consequently, water-based activities are less important in mountain lakes than other recreational activities that are carried out in the surrounding landscape, such as hiking or biking that do not require direct contact with the lake (Pröbstl-Haider et al. 2021, Schirpke et al. 2021b). This is also reflected in the most valued experiences, which are related to mental and restorative experiences such as connection to nature, relaxation, and freedom, while respondents deemed physical benefits provided by mountain lakes as less relevant.

Fig. 5. Impacts of pressures (left) on experiences (center) and related cultural ecosystem services (CES; right). Only significant relations between experiences and CES were included (see Fig. A1.2). The width of the lines indicates the magnitude of influence, and the colors show individual pressures, experiences, and CES.
By analyzing the correlations between CES and experiences, our results indicate high positive correlations between three CES (aesthetic value, existence value, and sense of place) with most of the experiences (Fig. A2.2). This high level of interrelation and overlap has also been acknowledged in previous studies in a similar way (Bieling et al. 2014, Hausmann et al. 2016, Wartmann and Purves 2018). The two negative correlations that emerged suggest that there are distinct user groups with specific preferences regarding some CES (outdoor recreation vs. sense of belonging and scientific research vs. memories), which probably do not overlap. Other studies mention similar competing interests, for example, in relation to recreational activities or biodiversity conservation goals (Ament et al. 2017, Roux et al. 2020). Sense of belonging was valued highest by people with high lake affinity regardless of being resident or tourist, whereas for outdoor recreation the opposite was true. Therefore, diverging interests seem to be related to specific values that are not limited to specific socio-cultural groups but rather to their value orientations (Kaltenborn and Bjerke 2002). This also allows the binary distinction between residents and tourists to be overcome and for intermediate categories of beneficiaries to be identified, e.g., tourists with a high lake affinity with similar perceptions compared to specific groups of residents. This understanding can be important for local institutions to promote a certain type of tourism through a more conscientious infrastructure development and accounting for social-ecological impacts over time (Haraldsson and Ólafsdóttir 2018).
Socio-cultural differences
Although almost all socio-cultural groups agreed that mountain lakes are "worth preserving in their natural state," most differences in values of other CES emerged between groups with different cultural backgrounds. These findings are supported by other studies pointing out the importance of accounting for socio-cultural differences (Soliva et al. 2010, Zoderer et al. 2016a, b, Quintas-Soriano et al. 2018, Dou et al. 2020). For example, many studies found differences between male and female respondents, suggesting that women more often indicate immaterial values than men (Plieninger et al. 2013, Nowak-Olejnik et al. 2020). In concordance with these findings, women attributed higher values to sense of place in our study, while men scored representation higher. Such differences were even more pronounced in the valuation of the experiences, of which 7 out of 11 received significantly higher values from women, in particular excitement, inspiration, and peace, while refreshment and life lessons were evaluated most similarly. The differences across socio-cultural groups reflect that different uses and experiences are appreciated by different types of beneficiaries (Van Berkel and Verburg 2014, Scolozzi et al. 2015, Ament et al. 2017, Small et al. 2017, Schirpke et al. 2018).
Regarding mountain lakes, our results suggest two orientations between the groups: (1) people who are more practice-oriented (doing, enjoying, teaching, and learning something), and (2) people with a more reflexive orientation (contemplating, meditation moments, regardless of any particular activities). By integrating further information on the preferred activities and pressures, this information can be used as a basis for further analysis to anticipate potential conflicts between groups with different preferences (Confer et al. 2005, Schirpke et al. 2020).
Pressures on CES and experiences
Our results clearly indicate that crowdedness is the most frequently mentioned pressure on the quality of nature-based experiences, agreeing with previous empirical and recreational studies (Moyle and Croy 2007, Arnberger and Mann 2008, Zehrer and Raich 2016, Roux et al. 2020). Perceived crowdedness is considered to be the subjective negative evaluation of density levels in a specific location, emerging when the actual experience diverges from the expectations (Oliver 1980). These expectations can be influenced by gender, age, frequency of visit, and specific situations (Zehrer and Raich 2016). Confer et al. (2005) found that garbage, noise, and congestion increase negative associations with recreational activities, resulting in less tolerance toward other users. Similarly, our results suggest that the behavior of other visitors is decisive in interfering with experiences, as many respondents also specified noisiness (e.g., loud music, screaming people) or garbage left around the lake. The pressures may also be linked to different types of activities, which can provoke conflicts between different user groups (Scolozzi et al. 2015, Schirpke et al. 2020); in the case of mountain lakes, these may be people visiting lakes alone or with their partner to enjoy the quietness versus people having a barbecue with friends. In addition, these pressures also affect highly valued CES such as aesthetic value, sense of place, existence value, and symbolic value. Here, management measures could start with specific interventions for raising awareness on abandoned garbage or encouraging respectful behavior toward nature and other people. As in Lapointe et al. (2020), we found some significant differences between socio-cultural groups in pressures, indicating that Italian-speaking respondents are more tolerant toward noisiness, garbage, and touristic exploitation.
Management implications
Our findings emphasize the importance of mountain lakes in providing CES and experiences, but also call for more attention from decision makers and managers as stressed by the indicated pressures, in particular crowdedness. Previous research on crowdedness mostly concerned built environments (e.g., urban areas and urban parks) or specific contexts, such as ski areas, national parks, or forests (Moyle and Croy 2007, Arnberger and Mann 2008, Kainzinger et al. 2015, Roux et al. 2020). Open landscapes, however, have received less attention in studies addressing the interrelationships between crowdedness and the benefits of CES. Such research is particularly important in the light of global megatrends because it is expected that relatively remote and natural places will experience increasing pressures in the coming decades because of a constantly growing demand for nature-based experiences (Buckley et al. 2015), despite and possibly also due to economic and health crises (Gössling et al. 2021, Wen et al. 2021). Hence, efforts should be made to identify carrying capacities to support visitor management plans, which should also account for the impacts of recreational activities on lake ecosystems (Dokulil 2014, Senetra et al. 2020). These could be integrated with voluntary codes of conduct for hikers/tourists for attentive and considerate behaviors toward other visitors.
Beyond the regional and national borders, the European Alps can be considered as a large natural area in the center of a vast urbanized area, providing high levels of CES and being exposed to similar pressures and trends (Schirpke et al. 2019, Egarter Vigl et al. 2021). Therefore, increasing pressures from socioeconomic and climatic changes can also be expected on mountain lakes in the future, requiring the attention of decision makers. Mountain lakes are particularly vulnerable to increasing use, including water abstraction, livestock farming, tourism, and hydropower generation, and such pressures may severely affect the lake ecosystem and related CES (Dokulil 2014, Van Colen et al. 2018, Schmeller et al. 2018, Brunner et al. 2019, Moser et al. 2019). In addition, these pressures may escalate the competing interests of different users (Schirpke et al. 2020). A careful evaluation of potential environmental impacts is therefore necessary before the construction of new infrastructures, the increase of farming activities, or the promotion of lakes as a tourist destination. By understanding and acknowledging societal values as well as individual well-being benefits, as reported in this study, decision makers may be able to better balance potential impacts and conflicts. For example, considering the high perception of intrinsic values, tourism management and nature conservation may pool forces to maintain the high environmental quality of lakes, while offering opportunities to visitors to engage with nature to encourage pro-environmental behaviors (Mackay and Schmitt 2019).
Limitations and future directions
Our study is limited by several factors. One is related to our sample, because we aimed to collect perceptions of people that directly benefit from CES of mountain lakes, also asking mountaineering associations to inform their members about the survey. We can therefore assume that the sample of respondents represents mostly people with a high interest in hiking and an elevated level of nature awareness because the mountaineering associations are very engaged in protecting the environment and supporting a sustainable development of the mountain regions. Accordingly, only 5% of the respondents never visit mountain lakes, meaning that our results do not sufficiently reflect the preferences of other types of visitors, e.g., mountain bikers and leisure tourists (Scolozzi et al. 2015) or anglers and kayakers (Confer et al. 2005). Moreover, our results mostly depict the perceptions of German-speaking and Italian-speaking people, but it would also be interesting to include people with other cultural backgrounds, which seems to be a key factor for differences in perceptions of CES. Future studies should also account for psycho-cultural aspects to improve the understanding of human behavior and human-nature interactions, which can improve management issues and improve the characterization of the respondents (Kumar and Kumar 2008).
Another issue is related to the choice of using an online survey because of Covid-19 restrictions. This is a disadvantage when including open-ended questions because we could not ask participants to specify their answers as during interviews in situ (e.g., Bieling et al. 2014, Wartmann and Purves 2018). For example, many people indicated "too many people" as a pressure, but there was no indication of how many people and whether these people posed a disturbance simply because of their presence or because of a specific behavior, e.g., screaming or leaving garbage. Nevertheless, the broad categorization of the mentioned pressures matches those used in other studies (Confer et al. 2005, Roux et al. 2020), and the results are a useful starting point for further studies. Further research could investigate the "style of enjoyment" for each type of visitor to identify and possibly anticipate important issues for destination management. This could include a potential impact profile (e.g., high or low tendency to litter, high or low potential for noisiness) and sensitivity to crowdedness (e.g., ranges in the number of other users on the same site that make people feel like it is crowded). These would require research approaches similar to those used in market studies, in which the variables can be controlled; this is feasible for specific variables but difficult to perform for open landscape features such as mountain lakes.
Concerning destination management, previous examples include the models of wildlife tourism established in several British destinations (Curtin 2013). However, in the macro-region of the European Alps, an area with a high complexity of institutions and diversity of administration forms, this would require long-term visions and an anticipatory governance approach (Jurgilevich 2021) to be shared between destination marketers, local administrations, conservation NGOs, and private sector operators.
CONCLUSIONS
This study addressed several challenges related to the assessment of CES and focused on small mountain lakes, which are different from many other ecosystems studied in terms of recognized appreciation and values. First, by distinguishing between CES and experiences, this study emphasizes synergies between a variety of values associated with small mountain lakes. This provides a basis for better consideration of CES in nature-based tourism and conservation management because it encourages decision makers and landscape managers to evaluate how interventions that affect the lake characteristics can change experiences. Second, mountain lakes are relatively more sensitive to global megatrends and local pressures (e.g., nature-based tourism and outdoor recreation, climate change) than large lakes, and therefore require the attention of decision makers. This study provides novel insights into the variety of values people associate with mountain lakes and recognizes the potential pressures on related experiences. Finally, our results relating to the socio-cultural groups suggest that there are different user groups with distinct preferences and value orientations, with lake enjoyment varying from more active and recreational to more contemplative practices. Unlike previous studies on other natural areas, these groups are not binarily divided between residents and visitors; gender, age, cultural background, lake affinity, and frequency of visits in nature indicate finer differences in perceived values between the groups. This understanding can be important for institutions to promote sustainable tourism through a more aware infrastructure development and accounting for social-ecological impacts over time.
MOUNTAIN LAKES IN THE ALPS
Section 1
The questionnaire below is composed of questions about your perception of mountain lakes. These are smaller-sized natural lakes located at least 1000 meters above sea level. Completing this questionnaire will take approximately 10-15 minutes. Within each section, please choose the option that is most appropriate for you.
Participation is voluntary and anonymous. You may withdraw at any time. All data will be treated confidentially and will not be passed on to third parties.
Section 3
Have a look at the picture. Imagine that you have arrived at this mountain lake and are taking a short rest.
How accurate are the following statements for you?
Visiting a place like this leaves me with the feeling that I have learned something from nature *
The Impact of Two Hourly Purposive Rounds (2HNR) on Nurses’ Perception and Satisfaction: A Cross-Sectional Study
Nurses are the backbone of healthcare organizations, and they play a crucial role in delivering quality care and ensuring patient safety; this, in turn, can be supported through two-hourly purposive nursing rounds (2HNR). The 2HNR is a structured round conducted by bedside nurses on a one- to two-hourly basis. It has been associated with increased patient satisfaction and nursing care quality, improved nurse-patient interaction, reduced incidence of falls and call bell frequency, and early identification of patients' needs.
Introduction
Nurses are the backbone of healthcare organizations, and they play a crucial role in delivering quality care and ensuring patient safety [2]. The two-hourly purposive nursing round (2HNR) is one means of supporting patient safety [3]. In 2018, a tertiary hospital in Oman implemented the 2HNR in both adult and paediatric general medical and surgical wards to ensure equitable, high-quality, and standardized nursing care. The 2HNR has been studied by several authors and has been associated with positive outcomes [1,2].
The 2HNR is a structured round conducted by bedside nurses on a one- to two-hourly basis [2]. In the literature, the 2HNR is also referred to as intentional rounds, timely rounds, or structured rounds, and it has been associated with increased patient satisfaction and nursing care quality, improved nurse-patient interaction, reduced incidence of falls and call bell frequency, and early identification of patients' needs [1,2]. To the best of our knowledge, this is the first study of the 2HNR in Oman; hence, it adds to the body of knowledge on the effect of the 2HNR in hospital settings. This study aims to evaluate the impact of the 2HNR on nurses' perception and satisfaction at a tertiary hospital in Oman [3].
Study setting
The study was conducted at a tertiary hospital in Oman to evaluate the impact of the 2HNR on nurses' perception and satisfaction.
Study design and population
A cross-sectional study was conducted from January 2020 to July 2020. Nurses working in general units, including medical, surgical, paediatric, maternity, cardiac, and oncology units, were invited to participate by completing an online survey. A total of 513 nurses completed the survey, which assessed their satisfaction and perception of the two-hourly purposive nurses round (2HNR) [4]. The online survey link was sent to nurses by their ward in-charges and clinical nurse educators. Nurses working in critical care and outpatient units were excluded. The total number of nurses working in general units was approximately 1000, so the sample of 513 nurses is representative of the target population.
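As a rough illustration of why a sample of 513 drawn from roughly 1000 eligible nurses supports the representativeness claim, the worst-case margin of error for a sample proportion can be computed with a finite population correction. This sketch is not from the paper; the function name and the 95% confidence z-value are our own assumptions.

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion,
    with a finite population correction (FPC) for sampling
    without replacement from a small population."""
    se = math.sqrt(p * (1 - p) / n)        # standard error; p = 0.5 is the worst case
    fpc = math.sqrt((N - n) / (N - 1))     # finite population correction factor
    return z * se * fpc

# 513 respondents from a population of about 1000 general-unit nurses
moe = margin_of_error(n=513, N=1000)
print(f"Worst-case margin of error: ±{moe * 100:.1f} percentage points")
```

With more than half the population sampled, the correction shrinks the margin of error to about ±3 percentage points, well within conventional survey tolerances.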
Study tool
The survey questionnaire was designed in English, the common working language of the hospital. It consisted of nine closed-ended questions intended to reveal nurses' perceptions of the 2HNR. A five-point Likert scale, from strongly agree to strongly disagree, was used; such scales are commonly used to assess participants' attitudes and opinions. The questions covered nurses' perceived ability to perform the round during the three shift duties, the perceived benefits and value for nurses and patients, the perceived workload and time consumption, reduction of call bell use from the nurses' point of view, satisfaction, and challenges of performing the 2HNR [5]. The questionnaire was constructed by the researchers based on the program's objectives and was piloted among 20 nurses in 2019. Nurses' feedback indicated that the questionnaire was clear.
Data collection
The online link to the survey questionnaire was distributed by the researchers to all general units through their ward in-charges and clinical educators, who invited nurses to participate. Ward in-charges and clinical educators were used for recruitment because they had access to all nurses' contact details. A seven-month period was allowed for nurses to participate and provide feedback on the practice based on their experience. Frequent reminders were sent, with the help of ward in-charges and clinical nurse educators, to reach the targeted sample size.
Statistical analysis
Descriptive analysis was performed to summarize participant characteristics such as gender, length of experience, and qualification using frequencies, counts, and percentages (Table 1). Chi-square and cross-tabulation analyses were conducted to correlate nurses' qualifications with their perceptions (i.e., the perceived ability to perform the round during the three shift duties, perceived benefits and value for nurses and patients, perceived workload and time consumption, reduction of call bell use from the nurses' point of view, satisfaction, and challenges of performing the 2HNR).
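The chi-square test of independence applied to these cross-tabulations can be sketched in a few lines. The contingency counts below are hypothetical (the paper reports only p-values, not the underlying tables), and this plain Pearson chi-square function stands in for whatever statistical package the authors actually used.

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom
    for a 2D contingency table (list of rows of counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # expected count under the independence hypothesis
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Hypothetical cross-tabulation: qualification (rows) x agreement (columns)
table = [
    [120, 80, 77],  # Diploma: agree / neutral / disagree
    [90, 60, 50],   # BSN
    [15, 12, 9],    # MSN
]
stat, df = chi_square(table)
```

The statistic is then compared against the chi-square distribution with `df` degrees of freedom to obtain the p-values reported in the Results section.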
Ethical consideration
This study was approved by the Hospital Scientific Research Board under approval number 62/2019. Participants were informed of the objective of the research, and the voluntary nature of participation was explained via an online link prior to the survey questions. Additionally, participants were informed through the link that by clicking the agreement button they automatically consented to participate in the study. The tool collected no details that could identify individual participants. To secure the collected data, all data were transferred directly online to the primary researcher.
Participants' characteristics
Based on the survey findings, a total of 513 nurses completed the survey, and females constituted the highest proportion (96.1%). In terms of experience, 27% had more than 12 years of experience, followed by 24% with 3-6 years. About 54.1% of participants held a Diploma in General Nursing, while MSN-qualified nurses formed the smallest share of participants (Table 1).
Benefits and challenges of the 2HNR
Around 53.9% of participants believed that the two-hourly nurses round (2HNR) had enhanced their time-management skills. Moreover, 61.1% were able to prioritize their patients' care as a result of the two-hourly rounding. Participants reported being able to perform the 2HNR in all shifts: the morning shift (54.5%), the afternoon shift (61.1%), and the night shift (69.1%) [6][7][8].
Regarding the benefits and value added by the 2HNR, participants agreed that it benefits both patients (76%) and staff (67%). The 2HNR was seen as a way to decrease workload and save time by 32.6% of participants, whereas 36.9% disagreed that it reduces workload and saves time. Approximately 50.8% of participants were satisfied with the 2HNR, and about 53.4% reported challenges in performing it. Based on the findings, the following significant correlations were found: there is a significant relationship between qualification level and nurses' ability to manage time (p = 0.024); the 2HNR adds value and benefits for both patients (p = 0.018) and staff nurses (p = 0.035); and the 2HNR decreases workload and saves time (p = 0.003). Participants also perceived that the 2HNR reduces call bell frequency when performed on a regular basis (p = 0.14).
Discussion
Females represented the highest proportion in this survey (96.1%), reflecting the hospital's workforce, in which female nurses outnumber males by roughly nineteen to one. As the 2HNR addresses the basic needs of patients, general diploma nurses, who are usually the bedside nurses, formed the highest percentage of participants.
Moreover, nurses who pose higher qualification
In terms of the ability to manage time and prioritize care by performing intentional rounds, this study found that effective rounding can enhance nurses' time-management and task-prioritization skills. This is supported by Fabry (2014) and Langley (2015), who found that intentional rounds cause fewer interruptions and save time during nursing activities [4,8]. Moreover, intentional rounds enhance nurses' ability to manage their time, as they allow nurses to continuously assess and prioritize nursing care and to identify patients' concerns early [5,[9][10][11][12][13][14][15][16][17][18]].
More than 54% of participants managed to conduct the 2HNR during all shifts, indicating that the 2HNR is feasible and can be integrated into daily nursing practice. However, special attention must be paid to the factors that prevented the remaining participants from performing the 2HNR in all shifts. Proper planning of the 2HNR process and policy can help overcome barriers to implementation [18].
In terms of the benefits and value associated with the 2HNR, a significant percentage of participants gained time- and task-management skills, which are crucial to carrying out nursing tasks. More than half of the participants acknowledged that the 2HNR benefits both staff and patients; it can therefore add considerable value to the overall quality of nursing care. These findings parallel reports stressing that intentional nursing rounds have a positive effect on patients, as they help recognize changes and deterioration in patients' health status and spare time for nurses to stay with sicker patients (Patterson, 2014) [15,6,16]. Importantly, by checking on patients frequently, needs such as pain management, comfort, and safety can be met [13]. The 2HNR is also seen as beneficial to nurses, which supports the literature's view of intentional rounds as helpful for understanding the patient, planning care accordingly, and reducing patients' and families' anxiety and uncertainty [19].
In this study, some nurses perceived the 2HNR as a burden and extra work. This result aligns with the studies of Fabry (2014), Harris et al. (2019), and Ryan et al. (2019), in which some nurses also perceived intentional rounds as additional work and a burden [4,6,9,16]. This was attributable to two factors: first, the burden of documenting the rounds; second, nurses' perception that they already checked on their patients regularly and frequently in the normal course of care, so the 2HNR would add little to knowing patients' basic needs [6,9,16]. In the current study, some nurses were reluctant to perform the 2HNR because they felt it took them away from sicker patients. Although participants valued the role of the 2HNR in prioritizing patient care, it was still perceived as a workload [16]. This finding contradicts the task-prioritization goal of the 2HNR and may indicate inadequate education on the 2HNR and a lack of buy-in among nurses [9,16].
Based on participants' perceptions, call bell usage was reduced whenever the 2HNR was performed regularly. This finding matches those of Krepper (2014), Mathew (2014), and Langley (2015) [7,8,10]. Attending to patients every two hours helps fulfill their needs and reduces call bell use [7,17].
This study found that more than half of the nurses were satisfied with the current 2HNR process. Langley (2015) also found that intentional rounds have a positive effect on staff satisfaction, secondary to enhanced teamwork and communication [8]. The 2HNR was nonetheless associated with challenges: more than half of the respondents (53.4%) reported difficulties in performing it. These included the documentation load, difficulty fitting the rounds around other nursing procedures, and the rapid flow of patients. According to the literature, time constraints and the perception of being too busy, fluctuations in patient numbers and acuity, staffing levels, and documentation loads are the major challenges for the 2HNR [13]. These challenges can be overcome with a well-organized 2HNR process, which can offer multiple benefits to nurses, patients, families, and the entire healthcare organization.
Conclusion
In conclusion, this cross-sectional study evaluated the impact of the 2HNR in a tertiary hospital in Oman. The study was conducted in general units from January 2020 to July 2020 (seven months). A total of 513 nurses responded to the online survey, which assessed the perceived ability to perform the round during the three shift duties, the perceived benefits and value for nurses and patients, the perceived workload and time consumption, reduction of call bell use from the nurses' point of view, satisfaction, and challenges of performing the 2HNR. The collected data were analyzed descriptively to describe participant characteristics and with chi-square and cross-tabulation analyses to correlate nurses' qualifications with the studied variables. Ethical approval was obtained from the Hospital Scientific Research Board (approval 62/2019). Most respondents were female nurses (96.1%), with a good distribution of experience varying from less than one year to more than 12 years.
This study affirmed the role of the 2HNR in prioritizing patient care. Whenever the 2HNR was performed regularly, call bell usage was reduced, and the majority of nurses were satisfied with the current 2HNR process. On the other hand, the challenges nurses perceived as workload were the documentation load, difficulty fitting the rounds around other nursing procedures, and the rapid flow of patients. Overall, this study suggests that a well-organized 2HNR procedure would contribute positively to the organization's quality and patient safety standards and to staff satisfaction. A more rigorous study is needed to further explore challenges to 2HNR practice at the tertiary hospital in Oman, considering staffing levels, ward structure and layout, leadership, structured rounding education, and workload [11].
Limitation
The main limitations of the current study are as follows. This is a single-centre study, which might limit the generalization of the results to other healthcare organizations. Moreover, the study relied on an online self-administered survey questionnaire; verifying the survey findings with on-site observations by the researchers would have added credibility to the study findings. Finally, cross-tabulation of the data was limited to participants' qualifications, whereas years of experience and gender could also have influenced the findings.
Interferon-β deficiency at asthma exacerbation promotes MLKL mediated necroptosis
Defective production of antiviral interferon (IFN)-β is thought to contribute to rhinovirus-induced asthma exacerbations. These exacerbations are associated with elevated lung levels of lactate dehydrogenase (LDH), indicating occurrence of cell necrosis. We thus hypothesized that reduced lung IFN-β could contribute to necrotic cell death in a model of asthma exacerbations. Wild-type and IFN-β−/− mice were given saline or house dust mite (HDM) intranasally for 3 weeks to induce inflammation. Double-stranded RNA (dsRNA) was then given for an additional 3 days to induce exacerbation. HDM induced an eosinophilic inflammation, which was not associated with increased expression of cleaved caspase-3 or cleaved PARP, or with elevated bronchoalveolar lavage fluid (BALF) LDH levels, in wild-type mice. However, exacerbation evoked by HDM + dsRNA challenges increased BALF levels of LDH, apoptotic markers and the necroptotic markers receptor-interacting protein (RIP)-3 and phosphorylated mixed lineage kinase domain-like protein (pMLKL), compared to HDM + saline. Absence of IFN-β at exacerbation further increased BALF LDH and protein expression of pMLKL compared to wild-type mice. We demonstrate that cell death markers are increased at viral stimulus-induced exacerbation in mouse lungs, and that absence of IFN-β augments markers of necroptotic cell death at exacerbation. Our data thus suggest a novel role of deficient IFN-β production at viral-induced exacerbation.
Up to 80 percent of all asthma exacerbations are triggered by respiratory viral infections, which cause severe lower respiratory tract illness in asthmatics 1 . Pattern recognition receptors (PRRs) play a major role in innate immune responses to allergens and viruses 2,3 and may also recognize components of dying cells 4 . Rhinoviruses produce double-stranded RNA (dsRNA) during replication, which is recognized by PRRs notably Toll-like receptor (TLR)-3 and retinoic acid-inducible gene I (RIG-I)-like receptors 5 . The result of activation of these PRRs involves the production and release of interferon (IFN)-β, which induces an antiviral state in surrounding cells 6 . It has been shown that primary cells from asthmatics may have a deficient ability to produce IFN-β at rhinoviral infection and dsRNA stimulation, the latter representing a given viral infection burden 7,8 . IFN-β is a multipurpose cytokine. In addition to its antiviral properties it can both induce cell death and, by contrast, promote cell survival in various cell types 9,10 . However, little is known regarding any association between IFN-β deficiency and occurrence of cell death in asthma or experimental models of asthma.
Virus infection-associated asthma exacerbations have been characterized by increased cell necrosis, as reflected by released lactate dehydrogenase (LDH) 11, a pan-cell-necrosis marker. A variety of cells in the asthmatic airways, including granulocytes and epithelial cells, may undergo necrosis at asthma exacerbations [12][13][14]. However, it is not known which modes of cell necrosis are involved. Eosinophil necrosis is clearly regulated in part by factors previously mistaken to specifically indicate apoptosis in these cells 15. Apoptosis is a form of regulated cell death controlled by caspases and required for many physiological processes 16. Apoptosis can be induced by extrinsic signals such as activators of cell surface death receptors or PRRs, including TLR-3 17. Once the initiator caspases are activated, they cleave and activate caspase-3, which executes apoptosis by proteolytic cleavage of several proteins, including poly(ADP-ribose) polymerase (PARP), a protein involved in DNA repair 18. If apoptotic cells are not phagocytosed they will undergo necrosis, which has been termed 'secondary necrosis'. Necrosis is classically induced by physical trauma such as heat damage or hypoxia; of special interest in disease, however, is regulated necrosis 19. Different modes of regulated necrosis have now been identified: secondary necrosis, necroptosis, and pyroptosis, all of which manifest with necrotic morphology 20.
Necroptosis is a proposed form of programmed cell death that so far has not been clearly associated with human lung diseases, although it is speculated to be involved in chronic obstructive pulmonary disease (COPD) and acute respiratory distress syndrome (ARDS) 21,22. Necroptosis involves the proteins receptor-interacting protein (RIP)-1 and -3 and mixed lineage kinase domain-like protein (MLKL). Upon activation, RIP1 and RIP3 form a complex called the necrosome, which phosphorylates MLKL to its active form, causing plasma membrane rupture. To avoid extensive necroptosis, the kinase activity of RIP1 and RIP3 is suppressed by full-length caspase-8 23. Necroptosis has also been associated with inflammasome activation and subsequent interleukin (IL)-1β secretion and maturation 24,25. The occurrence of necroptosis markers in asthma and animal models of asthma awaits exploration.
We have recently developed a mouse model of viral stimulus-induced exacerbation of asthma with similarities to human exacerbations, including increased bronchoalveolar lavage fluid (BALF) levels of LDH compared to allergic lung inflammation without exacerbation 26. In this study, we test the hypotheses (A) that necroptosis occurs at viral-induced exacerbations and (B) that IFN-β deficiency may be involved in increased lung necrosis. Parts of the present results have previously been reported in the form of abstracts 27.
Results
Allergic airway inflammation induced by HDM does not involve LDH release or caspase-3 activation. Mice were challenged with HDM or saline for three weeks to establish experimental asthma (Figure S1). HDM challenges induced an increase in the total numbers of eosinophils, neutrophils and lymphocytes (Fig. 1A). There were also higher total protein levels in BALF in HDM-challenged mice compared to saline-challenged mice (Fig. 1B), in line with previously published data 26. We could not detect any difference in the release of the cell death marker LDH in BALF after HDM challenges (Fig. 1C). H&E staining of mouse lungs showed that HDM challenges increased perivascular and peribronchial infiltration of immune cells and induced mucus production, which was not found in mice challenged with saline (Fig. 1D,E). We then analyzed protein expression of the apoptotic markers cleaved caspase-3 and cleaved PARP; there was no difference between mice challenged with HDM and those challenged with saline (Fig. 1F-H).
Increased expression of both apoptotic and necroptotic markers during viral stimulus-induced asthma exacerbation. Asthma exacerbations have been associated with increased cell death, but the mechanisms leading to cell death are largely unknown. Since we found that allergic airway inflammation induced by HDM developed without pronounced involvement of apoptosis or necrosis, we wanted to examine the occurrence of various molecular cell death markers during viral-induced asthma exacerbation. We used a previously established mouse model of experimental asthma exacerbation in which two different doses of dsRNA as a viral mimic (50 μg, 100 μg) or saline control were given intranasally to mice with established HDM-induced airway inflammation 26 (Figure S1). We found that exacerbation evoked by both doses of dsRNA increased the apoptotic markers cleaved caspase-3 and cleaved PARP compared to HDM:saline challenged mice (Fig. 2A,B). Further, both 50 μg and 100 μg dsRNA increased the expression of full-length caspase-8 (Fig. 2C). The expression of the necroptotic effector proteins RIP3 and phosphorylated MLKL was also increased to a similar level with both doses of dsRNA (Fig. 2D,E), indicating occurrence of necroptosis at exacerbation.
Interferon-β deficiency increases BALF LDH levels at dsRNA-induced asthma exacerbation in mice. Having examined the effects of dsRNA, with regard to cell death markers, on mouse lungs previously challenged with HDM for three weeks, we next performed a study with 50 μg dsRNA that included mice deficient in IFN-β. There was a trend towards an increased total cell count in BALF in wild-type mice at exacerbation compared to HDM:saline challenged wild-type mice (Fig. 3A). There was also a higher total cell count in BALF at exacerbation compared to saline:dsRNA challenged wild-type mice, indicating that the combination of HDM and dsRNA produced an aggravated immune response (Fig. 3A).
The total cell count in IFN-β −/− mice showed a similar pattern to that in wild-type mice (Fig. 3A). In wild-type mice at exacerbation there was a higher percentage of neutrophils compared to HDM:saline challenged wild-type mice, while the percentage of eosinophils was similar between the two groups (Figure S2B,D). However, saline:dsRNA challenged IFN-β −/− mice had higher percentages of neutrophils, lymphocytes and eosinophils compared to wild-type mice with the same treatment (Figure S2B-D). Furthermore, there was a shift towards an increased percentage of lymphocytes in IFN-β −/− mice at exacerbation compared to wild-type mice at exacerbation (Figure S2C). Total protein and LDH levels in BALF in wild-type mice were increased at exacerbation compared to HDM:saline challenged wild-type mice (Fig. 3B,C). Similarly, total protein and LDH release in BALF were increased in IFN-β −/− mice at exacerbation compared to IFN-β −/− mice challenged with HDM:saline (Fig. 3B,C). Strikingly, LDH levels in BALF at exacerbation were much higher in IFN-β −/− mice than in wild-type mice, indicating occurrence of cell necrosis (Fig. 3C). This was accompanied by increased gene expression of IL-1β at exacerbation in IFN-β −/− mice compared to wild-type mice (Fig. 3D). H&E staining showed a trend towards increased perivascular recruitment of immune cells close to large airways in wild-type mice at exacerbation compared to both HDM:saline and saline:dsRNA challenged wild-type mice (Fig. 3E,F). Tissue staining revealed a comparable inflammation pattern in IFN-β −/− mice (Fig. 3E,G).
Lack of interferon-β increases pMLKL in HDM-challenged mice compared to wild-type mice.
We then studied specific cell death markers in wild-type and IFN-β −/− mice. We found that protein expression of the apoptotic markers cleaved caspase-3 and cleaved PARP was increased to a similar extent in both saline:dsRNA and HDM:dsRNA challenged wild-type mice compared to wild-type mice not receiving dsRNA (Fig. 4A,B). In IFN-β −/− mice, there was also higher protein expression of cleaved caspase-3 and cleaved PARP in both saline:dsRNA and HDM:dsRNA challenged mice compared to IFN-β −/− mice not receiving dsRNA (Fig. 4A,B), although cleaved PARP did not reach statistical significance at exacerbation. The protein expression of RIP3 was also increased in wild-type mice at exacerbation compared to HDM:saline challenged mice; however, this was not the case for full-length caspase-8. In IFN-β −/− mice, RIP3 expression also tended to be increased at exacerbation compared to HDM:saline challenged IFN-β −/− mice, although not significantly (Fig. 4D). There was a 3-fold higher protein expression of pMLKL, which causes cell membrane rupture, at exacerbation in IFN-β −/− compared to wild-type mice (Fig. 4E). TUNEL-positive cells were found in similar patterns in both wild-type and IFN-β −/− mice at exacerbation (Fig. 4F). (Fig. 4E legend: pMLKL from homogenized lungs; optical density was measured, and bands were related to the housekeeping protein GAPDH and normalized to HDM:dsRNA 50; data are mean ± SEM, n = 5-6; *p < 0.05, **p < 0.01.)
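The densitometry normalization described in the figure legend (band optical density divided by the GAPDH loading control, then expressed relative to the HDM:dsRNA 50 reference group) can be sketched as follows. The optical density values below are invented for illustration; only the two-step normalization scheme comes from the text.

```python
def normalize_bands(target, loading_control, reference_index=0):
    """Normalize densitometry values to a loading control (e.g. GAPDH),
    then express each lane relative to a chosen reference lane."""
    # step 1: correct each lane for loading differences
    ratios = [t / g for t, g in zip(target, loading_control)]
    # step 2: express every lane relative to the reference group
    reference = ratios[reference_index]
    return [r / reference for r in ratios]

# Hypothetical optical densities for pMLKL and GAPDH across four lanes;
# lane 0 plays the role of the HDM:dsRNA 50 reference group
pmlkl = [1.2, 0.8, 2.5, 3.6]
gapdh = [1.0, 0.9, 1.1, 1.2]
relative_expression = normalize_bands(pmlkl, gapdh)
```

After normalization, the reference lane is 1.0 by construction, and any lane's value reads directly as a fold change relative to the reference group, as in the reported 3-fold pMLKL increase.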
Discussion
This study addressed the occurrence of different modes of cell death and their dependence on IFN-β in a model of viral stimulus-induced asthma exacerbations. We demonstrate that the exacerbation is associated with increased markers of both apoptosis and necroptosis along with increased release of the pan-necrosis indicator, LDH. Furthermore, cell death indices at exacerbation were further increased in IFN-β deficient mice; the most conspicuous observation being a marked increase in LDH release together with an index of necroptosis and increased gene expression of IL-1β. These data are novel and of interest with regard to potential pathogenic roles of cell death and IFN-β deficiency in asthma, respectively.
We used HDM as the inducer of baseline allergic inflammation because this allergen is commonly involved in asthma. HDM-induced inflammation interacts with the viral stimulus dsRNA to produce robust and reproducible exacerbation features, including increased necrosis as reflected by increased BALF levels of LDH 26. By the present regimen, HDM produced an eosinophilic inflammation with no signs of cell death. These results suggest that allergic lung inflammation is not always associated with an aggravated cell death response, an observation that suited the present focus on exacerbation. However, it is acknowledged that authors employing other HDM regimens and cell culture studies have reported that HDM has the capacity to evoke cell death 28,29, including markers of epithelial cell apoptosis that were not increased in this study. The present viral stimulus, dsRNA, is employed as a rhinovirus infection intermediate known to mimic biological effects of actual infection 30,31. This study also reproduced central features of viral exacerbations reported previously that agree with observations in human asthma, including LDH release and mixed granulocyte and protein exudation features 11. There is a further potential advantage of dsRNA challenges in exploratory studies comparing different interventions: they provide the opportunity to expose the animals to a given pathogen burden. This goal may be difficult to achieve with actual infection, in particular when variations between animals in the antiviral response and pathogen resistance occur. We demonstrated that two different doses (50 μg and 100 μg) of dsRNA, administered to animals with an established HDM-induced allergic condition, produced significant increases in lung indices of apoptosis (cleaved caspase-3 and cleaved PARP) as well as necroptosis markers (RIP3 and pMLKL). These observations demonstrate the involvement of regulated cell death in viral-induced exacerbations.
The lower dose of dsRNA (50 μg) was therefore chosen for further studies of the effects of dsRNA alone and for comparisons of exacerbation features between wild-type mice and IFN-β deficient animals.
In test systems involving cancer cells and macrophages, dsRNA has known effects on cell death indices, reportedly involving both apoptosis and necroptosis 32,33. Similarly to dsRNA, influenza infection has been shown to induce both apoptosis and necroptosis 34. Hence, it is not surprising that dsRNA increased LDH levels and markers of both apoptosis and necroptosis in wild-type mice in this study. Mice deficient in IFN-β showed amplified levels of LDH and the necroptotic marker pMLKL at exacerbation compared to their wild-type counterparts. Furthermore, there were also higher levels of LDH and pMLKL in IFN-β −/− mice at exacerbation compared to saline:dsRNA challenged IFN-β −/− mice. In contrast, apoptotic markers were not altered in IFN-β −/− mice at exacerbation. However, this finding does not exclude that secondary necrosis contributed to the high levels of LDH. In a model of severe asthma, a majority of apoptotic eosinophils remained non-phagocytosed and underwent secondary necrosis, which, along with direct cytolysis of eosinophils, was associated with increased airway epithelial derangement and inflammation 35. Our results suggest that virus in combination with allergy could lead to a more detrimental cell death response in asthmatics with reduced IFN expression.
Currently, the cellular source of the LDH released during asthma exacerbations and in experimental models of asthma is not known. Wild-type and IFN-β −/− mice at exacerbation showed a similar pattern of TUNEL-positive cells. The TUNEL assay detects cells with fragmented DNA, including both apoptotic and secondary necrotic cells 36 . The TUNEL-positive cells were found in inflammatory foci, suggesting that they could be recruited inflammatory cells. Indeed, non-injurious resolution of inflammation in mucosa-lined hollow organs such as the lungs is not dependent on apoptosis of inflammatory immune cells, because the disease-driving cells in the airway wall are evidently eliminated through transmigration into the airway lumen for final clearance by mucociliary transport 37 . Shedding of epithelial cells is a hallmark of asthma that likely contributes to the pathogenesis of the disease. Especially large numbers of epithelial cells are shed at exacerbations. They appear in sputum and BALF as conglomerates called Creola bodies, consisting of 10 cells or more including live and dead cells 13 . However, it is of note that epithelial cells do not have to die before being shed but may instead be released through down-regulation of adhesion proteins in intercellular junctions 38 .
Necroptosis may have contributed to the high LDH levels because pMLKL was increased at exacerbation in IFN-β-deficient animals. Yet, RIP3 did not follow the same pattern as pMLKL, which remains to be explained. pMLKL-induced necroptosis has been associated with inflammasome activation and increased IL-1β expression 24 . Interestingly, in this study we also demonstrated increased IL-1β expression along with the increased pMLKL, potentially extending the role of necroptosis to promotion of IL-1β-dependent features of asthma exacerbation 39 .
Previous studies have focused on the importance of IFN-β as an antiviral agent and its deficiency at viral-induced asthma exacerbations 40 . Based on the present findings, we suggest a novel additional role of IFN-β deficiency as a regulator of necrosis and necroptosis at exacerbation of asthma. Evidence on how IFN-β may affect cell death in the context of asthma has previously been limited to observations in vitro, where IFN-β was required for an apoptotic fate of virus-infected cells 7 . It has been reported that IFN-β knockout mice have higher levels of cytokines potentially promoting necrotic cell death, including TNF-α in the central nervous system, compared to wild-type mice 41 . Our results showed that inflammation in wild-type and IFN-β −/− mice did not differ markedly at exacerbation, at least at the single time-point examined. Interestingly, we found that IFN-β-deficient mice challenged with three weeks of HDM had increased expression of pMLKL compared to wild-type. In contrast, the apoptotic markers were mainly induced by dsRNA in both wild-type and IFN-β −/− mice. Hence, the inflammation induced by HDM in mice deficient in IFN-β may involve a dysfunctional tissue repair mechanism engaging necroptosis, which might lead to prolonged inflammation. The present finding thus provides a basis for future exploration of time-course aspects of pathogenic factors emanating from necroptosis at asthma exacerbations. Future studies are also warranted to validate the present novel findings with dsRNA in experimental exacerbations involving HDM and live rhinovirus infections.
The occurrence of necroptosis in lung diseases has only recently started to be explored 42 . To our knowledge, this is the first paper to show involvement of necroptosis, and hence potentially pathogenic cell death, in asthma models. In COPD patients, elevated expression levels of RIP3 have been observed in lung epithelial cells compared to controls, but pMLKL, considered the most appropriate marker of necroptosis, was not studied 20,43 . Diseases that have been suggested to involve necroptosis also have an increased incidence of other forms of necrosis. This may be expected because the cell death pathways, as reflected by currently employed molecular markers, are highly intertwined 44 . How much necroptosis and necrosis contribute individually to driving inflammation needs further study. Similarly, the present discovery of a novel role of IFN-β as a regulator of necrosis/necroptosis at viral-induced exacerbations needs validation in future studies.
Materials and Methods
Additional information about materials and methods is provided in the supplementary material.
Animals. Animal experiments were conducted in accordance with Institutional Animal Care and Use Committee (IACUC) guidelines. The IFN-β −/− mice have previously been evaluated during infection with Sendai virus and in an experimental model of autoimmune encephalomyelitis 41,45 . The mice were fed ad libitum. Experimental asthma and asthma exacerbation in mice were induced as previously described 26 . Briefly, mice were challenged intranasally with HDM (25 μg/mouse; Greer, Lenoir, USA) or saline 3 times/week for 3 weeks in order to establish experimental asthma. For the exacerbation model, HDM- or saline-challenged mice received 50 μg or 100 μg dsRNA {polyinosinic-polycytidylic acid [Poly(I:C)]; InVivogen, San Diego, USA} or saline intranasally as control for 3 additional days. Mice were divided into seven groups: saline, HDM, saline/saline, saline/dsRNA, HDM/saline, HDM/dsRNA 50 μg and HDM/dsRNA 100 μg. The experiment was terminated three days after the last saline/HDM challenge for the first two groups, and 24 hours after the last saline/dsRNA or saline administration for the other groups (Figure S1).
Bronchoalveolar lavage fluid. BALF was obtained by rinsing the lungs with PBS. BALF was centrifuged and the supernatants were used to measure LDH release. The cell pellet from BALF was resuspended in PBS and analyzed for total cell count with a NucleoCounter (Chemometec, Allerod, Denmark). 50,000 cells were loaded into a cytospin funnel and centrifuged at 450 g for 6 minutes. The cytospin slides were then stained with May-Grünwald Giemsa and analyzed under a microscope for differential cell count.
Lung dissection and preparation. The left lung lobes were fixed in 4% formaldehyde (Histolab, Gothenburg, Sweden), paraffin embedded and sectioned. The sectioned lungs were then used for terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL) and hematoxylin and eosin (H&E) staining. The H&E slides were analyzed under a light microscope and a score (1-6) was given reflecting the degree of lung inflammation; sections with no obvious cell infiltrate were scored 0. All slides were analyzed blindly. The right lobe was snap frozen in liquid nitrogen until used for western blot analysis or RT-qPCR. The right lung lobes were weighed and homogenized mechanically using an OmniPrep Rotor Stator Generator (Omni International, Waterbury, USA). For western blot, the lungs were additionally lysed chemically with lysis buffer (1% TritonX-100, 10 mM Tris-HCl, 50 mM NaCl, 5 mM EDTA, 30 mM sodium pyrophosphate, 50 mM NaF, 0.1 mM Na 3 VO 4 ) together with 1% protease and 1% phosphatase inhibitor cocktail (Sigma-Aldrich, Stockholm, Sweden). Total protein was measured with the Pierce BCA assay (Thermo Scientific, Waltham, USA).
Statistical analysis. Data are presented as mean ± SEM. Between-group comparisons were made using the Mann-Whitney U test. P-values < 0.05 were considered statistically significant. Statistics were performed using GraphPad Prism version 6.0g (GraphPad Software).
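As an illustration of the between-group testing described above, the sketch below applies SciPy's Mann-Whitney U test to two small groups; the readings and group names are invented placeholders, not data from this study.

```python
# Hypothetical illustration of the Mann-Whitney U comparison described
# above; the BALF LDH readings below are invented, not study data.
from scipy.stats import mannwhitneyu

# Illustrative BALF LDH readings for two groups (arbitrary units)
wild_type = [0.8, 1.1, 0.9, 1.0, 1.2]
ifnb_knockout = [1.9, 2.4, 2.1, 2.6, 2.2]

stat, p_value = mannwhitneyu(wild_type, ifnb_knockout, alternative="two-sided")
significant = p_value < 0.05  # significance threshold used in the study
```

With the fully separated placeholder groups above, the exact two-sided p-value falls well below 0.05, so the comparison would be reported as significant.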
Data availability. The authors declare that all the data supporting the findings of this study are available from the corresponding author on request.
Comparison of Two Doses of Elemental Iron in the Treatment of Latent Iron Deficiency: Efficacy, Side Effects and Blinding Capabilities
Adherence to iron supplementation can be compromised due to side effects, and these limit blinding in studies of iron deficiency. No studies have reported an efficacious iron dose that allows participants to remain blinded. This pilot study aimed to determine a ferrous sulfate dose that improves iron stores, while minimising side effects and enabling blinding. A double-blinded RCT was conducted in 32 women (18–35 years): 24 with latent iron deficiency (serum ferritin < 20 µg/L) and 8 iron sufficient controls. Participants with latent iron deficiency were randomised to 60 mg or 80 mg elemental iron or to placebo, for 16 weeks. The iron sufficient control group took placebo. Treatment groups (60 mg n = 7 and 80 mg n = 6) had significantly higher ferritin change scores than placebo groups (iron deficient n = 5 and iron sufficient n = 6), F(1, 23) = 8.46, p ≤ 0.01. Of the 24 who completed the trial, 10 participants (77%) on iron reported side effects, compared with 5 (45%) on placebo, but there were no differences in side effects (p = 0.29) or compliance (p = 0.60) between the iron groups. Nine (69%) participants on iron, and 6 (56%) on placebo, correctly guessed their treatment allocation. Both iron doses were equally effective in normalising ferritin levels. Although reported side effects were similar for both groups, a majority of participants correctly guessed their treatment group.
Introduction
Young women are at high risk of iron deficiency secondary to menstruation and childbirth [1]. The nutritional disorder affects one in five young women in Australia [2] and is associated with poorer general health and wellbeing and high levels of fatigue [3,4]. It is imperative that iron deficiency is effectively managed to prevent progression to anaemia. Increased dietary iron intake, iron fortification and iron supplementation are used to improve iron status [5]. Clinical practice guidelines for the management of iron deficiency have been developed in the United States [6], the United Kingdom [7] and in Australia [8,9]. These all recommend the use of dried ferrous sulfate which contains approximately 33% elemental iron. Clinical practice guidelines recommend a daily dose of 80-105 mg of elemental iron for treatment of iron deficiency anaemia in adults [10]. A systematic review conducted in 2011 assessed the effects of intermittent oral iron supplementation on anaemia in menstruating women, compared with no intervention, a placebo or daily supplementation [11]. This study found weekly supplementation with 60 to 120 mg elemental iron was effective in improving haematological markers [11]. Treatment of latent iron deficiency and the impact of using lower dose iron treatment on iron status are not articulated within current literature and iron treatment guidelines.
Ideally, supplementation should achieve maximal absorption with minimal side effects [12]. Oral iron has been associated with gastrointestinal side effects such as nausea, constipation and darkening of stools which can decrease compliance [8,13]. Such side effects can compromise blinding within trials. Lower dose iron supplements have fewer side effects [10,14] yet the effect of varying the dosage of iron on iron status [15][16][17] has rarely been studied, with no studies conducted in non-pregnant young women. Whether lower doses are absorbed as efficiently as higher doses in non-pregnant young women remains unknown [18]. Therefore, the current study aims to determine the efficacy of two different doses of iron supplementation in improving iron status whilst maintaining blinding to treatment groups.
Experimental Section
Testing was conducted at the University of Newcastle, Callaghan Campus in NSW, Australia between April 2010 and April 2013. Women aged 18-35 years were recruited via flyers and promotion in lectures within the University. Recruitment also included flyers at the Technical Education (TAFE) College, accessing the volunteer register at Hunter Medical Research Institute and word-of-mouth. All interested individuals were screened for eligibility against inclusion criteria using an author-designed questionnaire (refer to supplementary material). The inclusion criteria were: female, 18-35 years; BMI 18-30 kg/m²; English as primary language; not iron deficient within the last 12 months; not currently taking iron supplementation (those who had been on a standard multivitamin, containing minimal or no iron, were eligible to participate and asked to cease the supplement); no chronic medical condition; not taking medication that could potentially interfere with results; ability to provide blood samples for biomarkers of iron status; not having donated blood within the last three months and no blood donation during the trial; not pregnant, or planning a pregnancy within the following 4 months; available to participate in the intervention for 4 months. Those eligible were provided with an information statement and informed consent was obtained prior to the commencement of the study.
Participants
Thirty two women were included in the intervention. As shown in Figure 1, eight participants were included in the iron sufficient control group and were provided placebo capsules, and 24 iron deficient participants were randomised to either placebo, or treatment (60 mg or 80 mg iron).
Haematological Testing
Serum ferritin, haemoglobin and soluble transferrin receptor (sTfR) were used as biomarkers of iron status, and alpha-1-glycoprotein (A1GP) was used as a marker of inflammation. A1GP is slower to rise, but remains at a high concentration longer than C-reactive protein (CRP), so it is a better indicator of chronic sub-clinical infection than CRP, and may better reflect changes in the concentration of ferritin during infections [1]. Blood tests were performed by Hunter Area Pathology Service, accredited by the National Association of Testing Authorities Australia, using standard techniques. The timing of the blood testing was not restricted in order to optimise recruitment and compliance [19]. Results of the blood tests were sent directly to the research team at the University, and participants remained blinded to blood test results until the completion of the trial. Iron deficiency was defined as having ferritin < 20 µg/L [20] and all other markers within reference ranges (haemoglobin 115-165 g/L [20], soluble transferrin receptor 0.9-2.30 mg/L [21,22], A1GP 0.51-1.17 g/L [23]). sTfR reflects the number of iron receptors expressed on cell membranes and is raised once tissue iron starts to become limited [24]. It should theoretically represent a definitive marker of latent iron deficiency [25]. Participants with haemoglobin results below the reference range were excluded from the intervention and were immediately referred to their General Practitioner. At completion of the trial, all women were given copies of their blood test results for communication with their General Practitioner. Figure 1 summarises group allocation and progression through the trial. Young women found to be iron deficient at baseline were randomly assigned to one of two different doses (60 mg or 80 mg) of ferrous sulfate or placebo for 16 weeks.
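As a minimal sketch, the biomarker cut-offs above can be written as a screening helper; the function name and argument layout are illustrative assumptions, not part of the study protocol.

```python
# Illustrative helper encoding the study's definition of latent iron
# deficiency: low ferritin with all other markers within range.
# Function name and argument order are hypothetical.
def is_latent_iron_deficient(ferritin, haemoglobin, stfr, a1gp):
    """ferritin in µg/L, haemoglobin in g/L, sTfR in mg/L, A1GP in g/L."""
    others_in_range = (
        115 <= haemoglobin <= 165    # haemoglobin reference range, g/L
        and 0.9 <= stfr <= 2.30      # soluble transferrin receptor, mg/L
        and 0.51 <= a1gp <= 1.17     # A1GP (inflammation marker), g/L
    )
    return ferritin < 20 and others_in_range
```

A participant with ferritin of 12 µg/L but all other markers in range would be classified as latently iron deficient, whereas low haemoglobin would instead trigger exclusion and referral as described above.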
Ferrous sulfate was used as it is the most common type of elemental iron used to treat iron deficiency; two different doses were used to determine the most effective dose in improving iron status with the fewest side effects. The 60 mg and 80 mg doses have been associated with fewer side effects than the doses recommended in the national guidelines [10,14]. The duration of supplementation was chosen because correction of iron deficiency anaemia takes between 2 and 4 months [10]. The first eight iron sufficient participants were invited into the intervention as a control group. A single-blinding approach was used with the control group, who were all provided with placebo capsules containing lactose. Subsequent iron sufficient participants exited the study following baseline testing. Participants were not informed of their treatment or iron status until trial completion. All participants were contacted on a four-weekly basis to report any potential side effects, using a specifically designed questionnaire. To increase compliance, it was explained to participants that any remaining capsules would be counted following the intervention. In addition, participants were provided with a "Tips and Reminders" sheet for taking capsules, which included the following advice: take one capsule per day; leave your container of capsules next to your toothbrush; keep one or two capsules in the small container provided and leave this in your handbag in case you forget to take your capsule in the morning and remember part way through the day; use the calendar to cross off each day once you have taken your capsule, as this will help you keep track of how regularly you are taking them; take the capsule two hours apart from any other regular medication (except the oral contraceptive pill, which can be taken at the same time); do not take two capsules on the same day to compensate for missing your capsule the previous day (take only one capsule per day); return any left-over capsules in your container when you return for follow-up testing; and take note of your compliance with the treatment regimen and any side effects you experience, and report these to the research team when contacted by phone every four weeks. Immediately following the 16-week intervention, participants were asked to guess which treatment protocol they thought they had been allocated to. Participants had repeat blood tests after 16 weeks.
Capsules and Randomisation
Compounding chemists were contracted to provide the iron and placebo supplements and used Random Allocation Software to allocate treatments to participant Identification Numbers [26]. The active and placebo supplements were identical in appearance and were packaged in identical containers. The researchers and participants remained blinded to the treatment protocol and the randomisation code was held by a third party researcher only to be broken once the final results were collected. The study protocol was approved by the University of Newcastle Human Research Ethics Committee.
Statistical Analysis
STATA-IC 11 statistical analysis software was used, with an alpha level of 0.05 set for statistical significance. The Kruskal-Wallis rank test was used to analyse the effect of group on iron markers at baseline and follow-up, and the difference in ferritin change score between oral contraceptive pill users and non-users on iron treatment. One-way analysis of variance (ANOVA) was used to examine the difference in iron marker change scores between treatment (60 mg iron and 80 mg iron) and no treatment groups (control and placebo). Fisher's exact test was used to examine the frequency of reported side effects and the frequency of correct treatment guesses between treatment and no treatment groups.
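The analysis plan above can be sketched with SciPy equivalents of the named tests (the study itself used STATA); all values below are illustrative placeholders, not study data.

```python
# Illustrative versions of the three tests named above, via SciPy.
from scipy.stats import kruskal, f_oneway, fisher_exact

# Kruskal-Wallis rank test across three groups (e.g. baseline ferritin, µg/L)
g60, g80, placebo = [8, 12, 15, 10], [9, 11, 14, 13], [10, 13, 16, 11]
h_stat, kw_p = kruskal(g60, g80, placebo)

# One-way ANOVA on change scores: treatment vs. no treatment
f_stat, anova_p = f_oneway([25, 30, 28, 33], [2, 5, 1, 4])

# Fisher's exact test on a 2x2 table of reported side effects
table = [[10, 3],  # side effects reported: treatment, placebo
         [3, 8]]   # no side effects:       treatment, placebo
odds_ratio, fisher_p = fisher_exact(table)
```

Each call returns a test statistic and a p-value, which would then be compared against the 0.05 alpha level described above.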
Participants
Twenty-four (75%) participants (mean age ± SD 25.6 ± 4.1 years) completed the intervention (60 mg iron n = 7, 80 mg iron n = 6, placebo n = 5, control n = 6). Reasons given for withdrawing from the study were unrelated illness (n = 3) or being too busy (n = 3); two participants gave no reason. Participant demographics are shown in Table 1. Participants were primarily Australian, had a mean BMI of 21.2 kg/m² and 48% used an oral contraceptive pill (OCP). Note: BMI, body mass index; age and BMI data are presented as mean ± SD.
Iron Status
Ferritin, haemoglobin, and sTfR levels at baseline and follow-up, together with change scores for each group (60 mg iron, 80 mg iron, iron deficient placebo, iron sufficient control) are presented in Table 2. The A1GP was normal in all participants and was unchanged following the intervention.
Baseline
Kruskal-Wallis analyses performed on iron status markers for the three iron deficient groups (60 mg, 80 mg and placebo) confirmed there were no significant between group differences in ferritin (p = 0.38), haemoglobin (p = 0.34) or sTfR-Index (p = 0.82) at baseline. As shown in Table 3, analyses comparing iron sufficient (controls) and iron deficient participants (60 mg, 80 mg and placebo groups combined) revealed that controls had significantly higher ferritin (p < 0.01) and lower sTfR-Index (p < 0.01) than the combined iron deficient groups at baseline, but no difference in haemoglobin (p = 0.30).
Follow-Up
Analysis of iron status at follow-up revealed a significant difference in ferritin between the placebo group and the combined 60 mg, 80 mg and iron sufficient control groups (p ≤ 0.01), but no difference in haemoglobin (p = 1.0) or sTfR-Index (p = 0.11) (as shown in Table 3). Post hoc analysis showed the placebo group had significantly lower ferritin at follow-up than the 60 mg iron group, the 80 mg iron group and the controls (p = 0.02, p = 0.02 and p = 0.04, respectively). There was no significant difference in ferritin between controls and 60 mg iron (p = 0.57), controls and 80 mg iron (p = 0.87), or the 60 mg and 80 mg groups (p = 0.89) at follow-up.
Change Scores
Change scores between baseline and follow-up for the iron treatment (60 mg and 80 mg combined) and placebo (iron deficient placebo and iron sufficient controls combined) groups were compared using one-way ANOVA. As shown in Table 3, the analyses revealed that the increase in ferritin levels was significantly greater following iron treatment compared with placebo, F(1, 23) = 8.46, p ≤ 0.01. There were no differences in haemoglobin change, F(1, 22) = 0.60, p = 0.45, or sTfR-Index change, F(1, 15) = 3.95, p = 0.07, between iron treatment and placebo groups.
Based on our criteria for iron deficiency (ferritin < 20 µg/L, haemoglobin 115-165 g/L, soluble transferrin receptor 0.9-2.30 mg/L, A1GP 0.51-1.17 g/L), at follow-up six (75%) iron deficient participants on 60 mg iron became iron sufficient and one remained iron deficient (13%). Four (57%) iron deficient participants on 80 mg iron became iron sufficient and two remained iron deficient (28%). Two (25%) iron deficient participants on placebo became iron sufficient and four (50%) remained iron deficient. All of the iron sufficient controls except one (who became iron deficient) remained iron sufficient (80%) (see Table 4). Among participants on iron treatment, there was no difference in ferritin change score between oral contraceptive pill users and non-users (p = 0.94).
Side Effects and Compliance
Reported side effects included nausea, darkening of stools and constipation. While these were more commonly reported by participants in the 80 mg elemental iron group, particularly dark stools (see Table 4), a Fisher's exact test indicated there were no statistically significant differences in the frequency of reported side effects between the 60 mg and 80 mg groups (p = 0.29), between the placebo and control groups (p = 0.55), or between the treatment and placebo groups (p = 0.42). Kruskal-Wallis analyses performed on compliance scores (% of capsules taken) showed no statistically significant difference between the 60 mg and 80 mg groups (p = 0.22), between the placebo and control groups (p = 0.25), or between the treatment and placebo groups (p = 0.60).
Participants' Treatment Guesses
Of the 13 participants taking iron supplements who completed the trial, 9 (69%) correctly guessed they were taking iron supplements. A Fisher's exact test showed no difference in the number of correct treatment guesses between the 60 mg and 80 mg groups (p = 0.27). Of the 11 participants taking placebo capsules who completed the trial, 6 (56%) correctly guessed that they were taking placebo capsules. There was also no significant difference in the number of correct treatment guesses between the placebo and control groups (p = 0.08).
Change in Iron Status
Limited knowledge exists on the efficacy of different doses of iron supplementation on iron status in non-pregnant young women with latent iron deficiency. Our study aimed to determine a ferrous sulfate dose that improves iron stores in women with latent iron deficiency, while minimising side effects. At follow-up, iron deficient participants who were randomised to ferrous sulfate (60 mg or 80 mg) had significant improvements in ferritin from baseline levels. Following this improvement, there was no difference in ferritin compared to controls at follow-up. Iron deficient participants randomised to placebo had significantly lower ferritin than the iron treatment groups and controls at follow-up. This shows that 16 weeks of elemental iron is effective in normalising iron levels in most participants in this population of young women, and that without such treatment iron stores remain depleted. This pilot study had 68% power to detect a difference in ferritin change score between the treatment and no treatment groups. The analysis showed a significantly higher ferritin change in treatment compared to no treatment groups. The power observed in this study will be used to inform sample size calculations in future studies, as data on the treatment of latent iron deficiency in non-pregnant women are limited.
Ferritin increased in iron deficient participants on placebo to a much lesser degree than those on iron treatment. Altering dietary iron was not part of the intervention and participants were not given any advice about changing their diet in order to keep their current intakes stable during the trial. It is possible that participants altered their dietary iron intake, which may explain some of the change in iron status and is a limitation that must be acknowledged. However, participants remained blinded to their iron status until the completion of the trial, so it is just as likely that iron sufficient controls changed their dietary iron intake as the iron deficient participants. Results also demonstrated that a daily 60 mg dose was as effective as an 80 mg dose in treating latent iron deficiency. Seventy five per cent of participants on 60 mg iron dose became iron sufficient at the end of the trial as compared with 57% of participants in the 80 mg iron group. However, there was no significant difference in iron status at follow-up between the 60 mg vs. 80 mg iron groups.
A systematic review of the literature has shown that weekly dosing at 60-120 mg is adequate for treating iron deficiency in menstruating women [11]; however, national guidelines recommend 80 or 105 mg daily, which is also what General Practitioners in Australia recommend [10]. We have shown that 60 mg daily is efficacious in young women with latent iron deficiency.
Compliance
The incidence of reported side effects was not statistically significantly different between the placebo and treatment groups in this trial. The gastrointestinal effects of iron supplementation appear to be highly individual. Clear dose-related side effects have been reported in previous studies using low (15 mg) and high doses (222 mg) [8,15], whereas others have found no difference in side effects between placebo and treatment groups, even when daily doses of 260 mg were used [27]. In the current study, there was no statistically significant difference in compliance between groups. Galloway et al. (1994) reviewed the literature on participants' compliance with iron supplement regimens in research studies and reported that compliance decreases as dose increases; however, as in the current study, they found little evidence of side effects causing low compliance [28].
Side Effects and Treatment Guess
This study also aimed to examine the effect of potential side effects of the two different doses of iron supplementation on blinding to treatment groups. To assist with blinding, capsules were used rather than tablets, because ferrous sulfate is slightly green in colour and has a distinctly metallic taste; producing tablets for a blinded trial would involve finding inactive compounds to mimic or hide both the colour and taste of ferrous sulfate. Sixty-nine per cent of participants in the treatment groups correctly guessed that they were on iron, which is much higher than the 48% of 191 participants correctly guessing they were taking iron reported by Makrides et al. [16], though that study was in pregnant women, who were undergoing significant bodily changes that would make any additional effects of iron treatment difficult to identify. In the current study, the incidence of reported side effects did not differ between the treatment groups and placebo. This suggests that factors other than side effects play a role in participants' identification of their treatment, such as perhaps feeling more energetic. Although there was no formal assessment of fatigue and vitality in the current study, Patterson et al. (2001) showed improved vitality and decreased fatigue after treatment of iron deficiency in young women [4].
Limitations
Several limitations of this study must be acknowledged, including the small sample size and low power, which are likely to have affected the reliability of results. Some participants may have self-selected for this study believing they were iron deficient; however, we made it clear that individuals with iron deficiency within the 12 months prior to enrolment were not eligible. In addition, physical activity and dietary intake were not assessed; these factors may have influenced individuals' iron status at follow-up [29]. Despite the possible influence of day-to-day variation [30], menstruation [31] and seasonal variation [32] on haematological results, the timing of the blood testing was not controlled, to prevent unnecessary participant burden.
Conclusions
Results of this study revealed that a 60 mg iron dose can normalise iron status in non-pregnant young women with latent iron deficiency. No differences were found in the incidence of reported side effects or the level of compliance between the treatment groups and placebo. Further double-blinded trials should examine the effectiveness of iron doses lower than 60 mg for improving iron status in young women, and determine whether awareness of treatment allocation is reduced.
|
Identification of a MicroRNA Signature Associated With Lymph Node Metastasis in Endometrial Endometrioid Cancer
Background Lymph node metastasis (LNM) is an important prognostic factor in endometrial cancer. Anomalous microRNAs (miRNAs) are associated with cell functions and are becoming a powerful tool to characterize malignant transformation and metastasis. The aim of this study was to construct a miRNA signature to predict LNM in endometrial endometrioid carcinoma (EEC). Method Candidate target miRNAs related to LNM in EEC were screened by three methods: differentially expressed miRNAs (DEmiRs), weighted gene co-expression network analysis (WGCNA), and decision tree algorithms. Samples were randomly divided into training and validation cohorts. A miRNA signature was built using a logistic regression model and was evaluated by the area under the receiver operating characteristic curve (AUC) and decision curve analysis (DCA). We also conducted pathway enrichment analysis and built a miRNA-gene regulatory network to look for potential genes and pathways engaged in LNM progression. Survival analysis was performed, and the miRNAs were tested for differential expression in an independent GEO dataset. Result Thirty-one candidate miRNAs were screened and a final 15-miRNA signature was constructed by logistic regression. The model showed good calibration in the training and validation cohorts, with AUCs of 0.824 (95% CI, 0.739-0.912) and 0.821 (95% CI, 0.691-0.925), respectively. The DCA demonstrated that the miRNA signature was clinically useful. Hub miRNAs in the signature appeared to contribute to EEC progression via the mitotic cell cycle, cellular protein modification processes, and molecular function. MiR-34c was significantly associated with survival, with higher miR-34c expression indicating longer survival. MiR-34c-3p, miR-34c-5p, and miR-34b-5p were expressed differentially in GSE75968. Conclusion The miRNA signature could work as a noninvasive method to detect LNM in EEC with high prediction accuracy. In addition, the miR-34c cluster may be a key biomarker of LNM in endometrial cancer.
INTRODUCTION
Endometrial cancer is the fourth most commonly diagnosed malignancy in the female population worldwide. The estimated numbers of new cases and deaths in 2020 in the United States were 65,620 and 12,590, respectively (Siegel et al., 2020). Endometrial endometrioid carcinoma (EEC) is the most common histological type of endometrial cancer (Creasman et al., 2006). Lymph node metastasis (LNM) is a key determinant of the prognosis and treatment of EEC. It has been reported that the 5-year survival of patients whose tumor was limited to the uterine corpus was 80-90%, while that of patients with LNM was 50-60% (Creasman et al., 2006;Lewin and Wright, 2011). Therefore, lymph node evaluation is critical for diagnosis and further adjuvant therapy. Lymphadenectomy used to be routine therapy for EEC and was critical for surgical staging. However, evidence shows that lymphadenectomy may be unnecessary for early-stage EEC because of its limited benefits, and it may lead to nerve injury, prolonged operation time, lymphedema, blood loss, and lymph cyst formation (Morrow et al., 1991;Homesley et al., 1992;Orr et al., 1997;Abu-Rustum et al., 2006). Therefore, more selective lymphadenectomy is now applied, and new noninvasive ways to evaluate lymph node status before surgery need to be explored.
MicroRNAs (miRNAs) are small RNA molecules that posttranscriptionally regulate gene expression by guiding target mRNA cleavage or translational inhibition. Multiple studies have shown that miRNAs play significant roles in the occurrence, development, and prognosis of cancer, making them potential markers for diagnosing specific cancers and progression (Cai et al., 2009;Chan et al., 2011). For example, a miRNA signature consisting of miR-155, miR-21, and 33 other miRNAs was found to distinguish clear-cell kidney cancer from normal kidney tissue with high confidence (Juan et al., 2010). The specific miRNA panels also have good performance on the prediction of prognosis of colon cancer, liver cancer, and lung cancer (Budhu et al., 2008;Hur et al., 2015;Cen et al., 2020). Previous studies have tried to determine the miRNAs associated with EEC compared to normal endometrial tissue (Tsukamoto et al., 2014;Wang Q. et al., 2020). However, few studies have worked on LNM evaluation in EEC using miRNA signatures. Therefore, the aim of this study was to evaluate whether miRNA profiles can predict LNM and to identify candidate target miRNAs and their relations to LNM progression in EEC.
Study Workflow
The schematic of the study workflow is shown in Figure 1. Clinical data and miRNA profiles were obtained from The Cancer Genome Atlas (TCGA). Three methods, namely differentially expressed miRNAs (DEmiRs), weighted gene co-expression network analysis (WGCNA), and decision tree algorithms, were applied to the LNM-positive and LNM-negative groups to screen candidate target miRNAs. Samples from TCGA were randomly divided into training and validation cohorts.
A miRNA signature was built using a logistic regression model in the training cohort. The performance of the miRNA signature was evaluated by the receiver operating characteristic (ROC) curve and decision curve analysis (DCA). Pathway enrichment analysis and a miRNA-gene regulatory network were constructed to look for potential genes and pathways engaged in LNM progression. The expression of the miRNAs in the signature was validated in an independent Gene Expression Omnibus (GEO) database. Finally, survival analysis was performed to explore the prognostic significance of the identified miRNAs.
The Cancer Genome Atlas miRNA Expression Profiles
Transcriptome data, including miRNA and mRNA expression, for EEC were obtained from TCGA (TCGA-UCEC) as count data. The corresponding clinical data, including age, stage, and histological type and grade, were also collected. Only cases with a histologic EEC diagnosis and complete clinical information regarding tumor grade and lymph node status were selected for analysis. Additionally, we selected only patients with clinical stage I (negative lymph nodes) or IIIC (positive lymph nodes) disease for comparison.
Screening Candidate miRNA
Three methods were used to screen candidate miRNA related to LNM in EEC including DEmiR, WGCNA, and decision tree algorithms, which were combined to come up with a union set of candidate miRNAs for further analysis.
Differential Expression Analysis
The downloaded miRNA data were standardized, and the edgeR package was then used for differential expression analysis. The screening criteria were |fold change| > 2 and false discovery rate (FDR) < 0.05.
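The selection step above can be sketched as follows. This is not the edgeR analysis itself (which runs in R); it is a minimal Python illustration of the stated cutoffs, with the Benjamini-Hochberg procedure converting raw p-values into FDR estimates. The function and variable names are hypothetical.

```python
# Illustrative sketch of the DEmiR filter: |fold change| > 2 (i.e. |log2FC| > 1)
# and Benjamini-Hochberg FDR < 0.05. The real analysis used the edgeR R package.

def bh_fdr(pvalues):
    """Benjamini-Hochberg adjusted p-values (FDR / q-values)."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    q = [0.0] * n
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end          # 1-based rank of this p-value
        prev = min(prev, pvalues[i] * n / rank)
        q[i] = prev
    return q

def select_demirs(mirnas, log2fc, pvalues, fc_cut=1.0, fdr_cut=0.05):
    """Return miRNAs passing |log2FC| > fc_cut and BH-FDR < fdr_cut."""
    q = bh_fdr(pvalues)
    return [m for m, fc, qv in zip(mirnas, log2fc, q)
            if abs(fc) > fc_cut and qv < fdr_cut]
```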
Construction of Co-expression Network
Weighted gene co-expression network analysis aims to identify modules of co-expressed genes for EEC-related networks and interactions (Langfelder and Horvath, 2008). Following the WGCNA protocol, the networks were constructed based on weighted correlation matrices. Briefly, the gene expression profiles were transformed into connection weights that can be summarized as topological overlap measures (TOMs). We selected the module most relevant to LNM and then screened target miRNAs within the chosen module.
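To make the WGCNA transformation concrete, the sketch below shows (in Python, for illustration only; the analysis itself used the WGCNA R package) how a gene-gene correlation matrix is soft-thresholded with the power β into an adjacency matrix, and how the pairwise topological overlap is then computed from it.

```python
# Illustrative: unsigned soft-threshold adjacency and topological overlap (TOM),
# the quantities WGCNA clusters into co-expression modules.

def soft_threshold(cor, beta=4):
    """Unsigned adjacency: a_ij = |cor_ij| ** beta, with the diagonal set to 0."""
    n = len(cor)
    return [[0.0 if i == j else abs(cor[i][j]) ** beta for j in range(n)]
            for i in range(n)]

def tom(adj):
    """Topological overlap: TOM_ij = (l_ij + a_ij) / (min(k_i, k_j) + 1 - a_ij),
    where l_ij = sum_u a_iu * a_uj and k_i is the connectivity of node i."""
    n = len(adj)
    k = [sum(row) for row in adj]
    t = [[1.0] * n for _ in range(n)]   # diagonal is 1 by convention
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            l_ij = sum(adj[i][u] * adj[u][j]
                       for u in range(n) if u != i and u != j)
            t[i][j] = (l_ij + adj[i][j]) / (min(k[i], k[j]) + 1 - adj[i][j])
    return t
```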
Decision Tree Algorithms
Decision tree algorithms are widely used for detecting the important features in classification in the machine learning field (Monteiro and Murphy, 2011). In our research, we applied decision tree algorithms to identify target miRNAs related to LNM. LightGBM, a state-of-the-art Gradient Boosting Decision Tree (GBDT) algorithm, was used as our feature-ranking algorithm (Ke et al., 2017). Features were ranked according to their feature importance value, defined as the number of times a feature is selected as a partition point. To ensure that the final ranking of features was reliable, the process was repeated 1,000 times. In each cycle, the learning rate, feature fraction, and bagging fraction were set randomly between 0.005 and 0.015, 0.7 and 1, and 0.7 and 1, respectively.
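The randomized feature-ranking loop can be sketched as below. The actual study trained LightGBM models; here the training step is abstracted behind a caller-supplied `train_once` function (a hypothetical stand-in that returns per-feature split counts), and the filtering of over/underfitted models is omitted for brevity.

```python
import random
from collections import Counter

def sample_params():
    """Random hyperparameters drawn from the ranges stated in the paper."""
    return {
        "learning_rate":    random.uniform(0.005, 0.015),
        "feature_fraction": random.uniform(0.7, 1.0),
        "bagging_fraction": random.uniform(0.7, 1.0),
    }

def rank_features(train_once, n_rounds=1000, top_k=10, seed=0):
    """Aggregate split-count feature importances over many randomized runs.

    `train_once(params)` stands in for fitting one GBDT (e.g. LightGBM) and
    returning {feature_name: importance}, where importance is the number of
    times the feature was chosen as a split point.
    """
    random.seed(seed)
    total = Counter()
    for _ in range(n_rounds):
        total.update(train_once(sample_params()))
    return [name for name, _ in total.most_common(top_k)]
```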
Model Construction and Validation
Patients in the TCGA-UCEC dataset were randomly divided into training and validation cohorts, with t tests and chi-square tests confirming no significant differences in patient characteristics between the two cohorts. Logistic regression analysis was used in the training cohort to build the miRNA signature. After removing miRNAs that contributed little to the prediction of LNM, the final miRNA signature was defined. The logistic regression formula was then applied to the validation cohort, and a risk score for LNM was calculated. The ROC curve was constructed, and the area under the curve (AUC) was calculated to validate prediction performance. DCA was conducted in RStudio to evaluate the clinical application value of the signature.
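As an illustration of how such a risk score and its AUC are computed (a minimal sketch; the coefficients and miRNA names in the usage below are hypothetical, not the fitted model from the paper):

```python
import math

def risk_score(expr, coef, intercept=0.0):
    """Logistic-regression risk score: sigmoid of the weighted miRNA expressions.
    `coef` maps miRNA name -> fitted coefficient (hypothetical values here)."""
    z = intercept + sum(coef[m] * expr[m] for m in coef)
    return 1.0 / (1.0 + math.exp(-z))

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation:
    the probability that a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```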
The Gene Ontology Annotation and Kyoto Encyclopedia of Genes and Genomes Analysis of miRNAs in the Signature
The functional enrichment analysis of miRNAs in the signature was applied by Gene Ontology (GO) annotation and Kyoto Encyclopedia of Genes and Genomes (KEGG) signaling pathway in miRPath v.3 (Vlachos et al., 2015).
miRNA-Gene Interaction Network
We screened transcriptional target genes of the miRNAs in our signature using the miRWalk database (Dweep and Gretz, 2015).
Gene Expression Omnibus Data Validation
We then tested whether miRNAs in the signature were expressed differentially in another independent GEO database. GSE75968 consisted of 12 tumor samples and 12 paired normal tissues from patients with EEC from the GPL19117 platform. Probes were converted to the gene symbols based on a manufacturer-provided annotation file, and duplicated probes for the same gene were removed by determining the median expression value of all of its corresponding probes.
Survival Analysis
To determine the association of specific miRNAs with survival, Kaplan-Meier survival analysis was performed using the TCGA-UCEC database. The log-rank test was used to compare survival curves between the "high" and "low" expression groups. All statistical analyses were conducted using SPSS Version 23.0 or R version 3.6.0. Two-tailed tests were used, with p values < 0.05 considered significant.
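The Kaplan-Meier estimate behind these survival curves can be illustrated with a short sketch (pure Python, for exposition only; the actual analysis used SPSS/R):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns (time, estimated survival probability) at each death time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = at_this_time = 0
        # Group all subjects sharing the same event/censoring time.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            at_this_time += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= at_this_time
    return curve
```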
Differential Expression Analysis
After filtering out excluded cases, 324 patients were selected for analysis. In total, 113 miRNAs were differentially expressed between patients with and without LNM; among them, 73 miRNAs were upregulated and 40 were downregulated in patients with LNM (Figure 2A). The ten miRNAs with the most significant differences were selected for constructing the predictive signature.
Construction of Co-expression Network
To build a scale-free network, soft threshold values (β) from 1 to 20 were tested following the WGCNA protocol. With β = 4, the degree of scale independence reached 0.9 and the mean connectivity approached zero, indicating that the network met the requirements for a scale-free distribution. Gene modules close to each other were visualized by the dynamic tree cut method ( Figure 2B). Finally, 10 modules were obtained, and only the modules significantly correlated with certain clinical features were selected ( Figure 2C). There was a significant negative correlation between the green module and LNM. In addition, correlation analysis showed that gene significance (GS) and module membership (MM) in the green module were significantly correlated (cor = 0.41), suggesting that miRNAs in the green module may be related to LNM progression.
Decision Tree Analysis
The GBDT construction process was repeated 1,000 times with random hyperparameters. To ensure that the GBDTs were neither overfitted nor underfitted, only the 147 GBDTs that met the selection criterion were retained among the 1,000 models. We then summed the feature importance values across these GBDTs for feature ranking and took the top 10 as potential target miRNAs ( Figure 2E).
Together with the three methods, a total of 31 miRNAs were screened for signature construction.
Construction and Validation of the miRNA Signature
A total of 324 patients with an average age of 62.81 years were included in this study from TCGA-UCEC database, and 36 (11.1%) had LNM. They were randomly partitioned into a training cohort (n = 226) and a validation cohort (n = 98). As shown in Table 1, the demographics of the two cohorts were well balanced, including age, body mass index, the proportion of LNM, and G stage.
The Prediction Confidence of the miRNA Signature
The prediction confidence of the 15-miRNA signature was validated in the training and validation cohorts, with AUCs of 0.824 (95% CI, 0.739-0.912) and 0.821 (95% CI, 0.691-0.925), respectively (Figures 3A,B). The DCA showed that the miRNA signature would be more clinically beneficial than the "treat all" or "treat none" strategy for predicting LNM if the threshold probability of a patient was between 0.1 and 0.8 ( Figure 3C). Therefore, the ROC and DCA results both indicated that the miRNA signature had good predictive performance.
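The quantity plotted in a decision curve is the net benefit at each threshold probability; a minimal sketch of that computation (illustrative only, with hypothetical data in the assertions below):

```python
def net_benefit(scores, labels, threshold):
    """Decision-curve net benefit at threshold probability p_t:
    NB = TP/N - FP/N * p_t / (1 - p_t).
    "Treat all" is the same formula with every patient classified positive;
    "treat none" has net benefit 0 by definition."""
    n = len(scores)
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)
```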
The Gene Ontology Annotation and Kyoto Encyclopedia of Genes and Genomes Analysis
The functional enrichment analysis of the miRNAs in the signature, based on GO annotation and the KEGG signaling pathways, is displayed in Figure 4. The GO annotation showed that miRNAs in the signature play roles in the mitotic cell cycle, the cellular protein modification process, molecular function, and so on, some of which may contribute to the metastasis of EEC. The KEGG analysis suggested that seven pathways were significantly enriched, including extracellular matrix (ECM)-receptor interaction, proteoglycans in cancer, the transforming growth factor (TGF)-beta signaling pathway, and fatty acid metabolism.
The Construction of miRNAs and mRNA Regulatory Network
From the TCGA database, a total of 188 mRNAs were differentially expressed between EEC patients with and without LNM (|fold change| > 2, FDR < 0.05). Using the miRWalk database, mRNAs targeted by the miRNAs in our signature were identified, and the 30 most related mRNAs were selected to construct a miRNA-mRNA regulatory network in Cytoscape 3.7. As shown in Figure 5, there were 114 interactions in this network. Among them, hsa-miR-135a-3p, hsa-miR-4788, and hsa-miR-122-5p regulated the most target mRNAs; meanwhile, RGS8, DCT, and SP7 were regulated by the most miRNAs.
Gene Expression Omnibus Data Validation and Survival Analysis
Among the 15 miRNAs, miR-34c-3p, miR-34c-5p, and miR-34b-5p were expressed differentially in GSE75968. The expression values of miR-34c-3p, miR-34c-5p, and miR-34b-5p in LNM-positive patients were significantly lower than those in LNM-negative patients (3.163 vs. 5.343, 1.557 vs. 3.259, and 3.445 vs. 6.113, respectively), suggesting that the miR-34 cluster may be a key miRNA cluster related to LNM progression (Figure 6). We then applied Kaplan-Meier survival analysis to the miRNAs in our signature using the TCGA-UCEC database. During the follow-up period, among the 324 EEC patients, 30 died (9.26%) and one was lost to follow-up (0.31%). The 5-year overall survival rate was 88.9%. As shown in Figure 7, miR-34c-3p and miR-34c-5p were significantly associated with survival: higher expression of miR-34c-3p and miR-34c-5p was associated with longer survival. Thus, miR-34c is related to prognosis, and further research on the molecular mechanism of miR-34c in EEC is warranted.
DISCUSSION
Endometrial cancer is a major gynecological malignancy worldwide, with a cumulative risk of 1% by the age of 75 years and a death risk of 0.2% (Morice et al., 2016;Van Nyen et al., 2018). LNM is a critical prognosis-related risk factor for EEC, and the status of the lymph nodes is an essential consideration when making clinical decisions. Since lymphadenectomy is not applied as routine therapy in EEC, new ways to determine lymph node status need to be explored. Sentinel lymph node (SLN) mapping can be an alternative thanks to its increased detection rate compared with lymphadenectomy (Ballester et al., 2011;Rossi et al., 2017). However, reliable SLN mapping requires surgeons and institutions to have the relevant expertise and skills, and SLN mapping is performed during surgery. Consequently, finding preoperative methods that can accurately identify LNM would have great clinical value. Similar to most tumors, the occurrence, development, and metastasis of EEC involve complex molecular mechanisms (Stampoliou et al., 2016). Recently, research using dysregulated miRNAs as powerful tools to characterize tumor environments and identify novel oncogenic pathways has been emerging (Rupaimoole et al., 2016). Furthermore, there is a view that miRNA dysregulation patterns and signatures work better than mRNA for identifying tumor origins because of their stability, robust expression, and lack of transcript variants (Chan et al., 2011). Thus, miRNAs may be reliable molecular biomarkers to predict LNM and help in the diagnosis and treatment of EEC. For the first time, we developed a miRNA signature to predict LNM in patients with EEC using the TCGA-UCEC cohort. Innovatively, we used three different methods to screen candidate miRNAs. Identifying differentially expressed genes or miRNAs by fold change between two groups is the most common way to find hub biomolecules in current bioinformatics research.
However, a disadvantage of using fold change is that it is biased and may misclassify differentially expressed genes with large differences but small ratios, leading to poor identification of changes at high expression levels (Mariani et al., 2003). Recently, WGCNA has been widely used to construct modules of co-expressed genes that relate to prognosis or other clinical outcomes. For instance, researchers found by WGCNA that Prostaglandin D2 Synthase (PTGDS) predicted poor survival, while ANO1 might be a potential marker for good prognosis in endometrial cancer (Zou et al., 2020). In addition, an increasing number of studies apply machine learning to the biomedical field. Decision tree algorithms are used to detect important features for classification in the machine learning field and are also applicable to the diagnosis and classification of diseases. Using the three aforementioned methods, 31 miRNAs were screened as candidate target miRNAs for signature construction. It should be noted that the miRNAs screened by each method scarcely overlapped, indicating that the three approaches analyze the data in distinct statistical ways; combining them makes better use of the data and can lead to more discoveries.
We constructed the final 15-miRNA signature to predict LNM in EEC by logistic regression, and a risk score for LNM was calculated. The AUC values were 0.824 and 0.821 in the training and validation cohorts, respectively. Thus, our miRNA signature has potential for LNM prediction and may provide biological insights into EEC. The result of the DCA suggested that the miRNA signature had clinical value: the signature would be more beneficial than the "treat all" or "treat none" strategy in most cases, with a threshold probability range from 0.1 to 0.8. Subsequently, functional enrichment analyses were performed to define biological processes, molecular functions, and signaling pathways. Determination of these pathways could provide potential therapeutic targets for treatments in EEC and help in future clinical use. Meanwhile, a miRNA-mRNA interaction network was visualized in Cytoscape. Identifying the interactions between the miRNAs in our signature and mRNAs improved our understanding of the regulation by the target miRNAs in EEC.
To validate whether the miRNAs in the signature were expressed differentially in another independent database, we compared the expression of our miRNAs between the LNM-positive and LNM-negative groups in GSE75968. Consistent with our findings in TCGA-UCEC, the expression of miR-34c-3p, miR-34c-5p, and miR-34b-5p was significantly lower in the LNM-positive group. Moreover, miR-34c-3p and miR-34c-5p were significantly associated with survival: higher expression of miR-34c-3p and miR-34c-5p was associated with longer survival time, indicating that miR-34c may be a key miRNA related to LNM progression and survival. Many of the miRNAs in our signature are known to function in oncogenesis or have been reported to have prognostic value in cancers, especially in endometrial cancer. In our signature, hsa-miR-34c-5p, hsa-miR-34c-3p, hsa-miR-135a-3p, hsa-miR-449b-3p, hsa-miR-34b-5p, hsa-miR-34b-3p, hsa-miR-122-5p, hsa-miR-449c-5p, and hsa-miR-4788 were downregulated in patients with LNM in EEC. It was reported that overexpression of miR-34c-5p significantly inhibited cell proliferation, colony formation, migration, and invasion and induced cell cycle arrest and apoptosis by targeting E2F3 in HEC-1-B cells (Li et al., 2015). Liu et al. (2020) also found that miR-34a/c induced caprine endometrial epithelial cell apoptosis by regulating circ-8073/CEP55 via the RAS/RAF/MEK/ERK and PI3K/AKT/mTOR pathways. Simultaneously, miR-34 may have regulatory effects on the epithelial-mesenchymal transition (EMT) of cancers by targeting SNAIL. Studies have concluded that miR-34b might act as a tumor suppressor in endometrial serous adenocarcinoma, estrogen-dependent breast cancer, and lung cancer (Lee et al., 2011;Hiroki et al., 2012). The impact of miR-135 on endometrial cancer is contradictory in the literature. Wang J. et al. (2020) revealed that miR-135a promoted proliferation, migration, and invasion and induced chemoresistance of endometrial cancer cells, but miR-135a was also reported to act as a tumor suppressor by targeting ASPH in endometrial cancer (Chen et al., 2019). A positive correlation was also observed between the expression of miR-135a and endometriosis lesions, a disease that also involves migration of endometrial tissue (Mirabutalebi et al., 2018;Petracco et al., 2019). The expression of miR-449b was markedly reduced in type II endometrial cancer tissues, and its reduction was associated with endometriosis lesions via endometrial stromal cell proliferation and angiogenesis (Braza-Boils et al., 2014;Ye et al., 2014;Liu et al., 2018). Similarly, the literature revealed that miR-449 suppressed endometrial cancer invasion and metastasis by targeting N-MYC downstream regulated gene 1 (NDRG1).
On the other hand, hsa-miR-483-3p, hsa-miR-548n, hsa-miR-137, hsa-miR-612, hsa-miR-4795-3p, and hsa-miR-875-3p were upregulated in patients with LNM in EEC. MiR-483 has not been reported to be associated with endometrial cancer; nevertheless, miR-483-5p was significantly downregulated in patients with endometriosis (Laudanski et al., 2013). Zhu et al. (2020) reported that loss of miR-548n suppressed the progression of colorectal cancer through the miR-548n/TP53INP1 signaling pathway. Moreover, miR-548 downregulated the host immune response via direct targeting of IFN-λ1 and thereby might provide a better microenvironment for tumor progression (Li et al., 2013). The expression of miR-137 was higher in patients with LNM in TCGA-UCEC; however, others have reported that miR-137 is a tumor suppressor in endometrial cancer and is repressed by DNA hypermethylation (Banno et al., 2014;Zhang W. et al., 2018). Another study found that miR-612 might compete with lncRNA H19 to regulate the expression of the target gene HOXA10, which is related to cancer cell proliferation in endometrial carcinoma. Similarly, miR-612 was associated with esophageal squamous cell carcinoma development and metastasis, mediated through TP53 (Zhou et al., 2017).
Additionally, some mRNAs targeted by our identified miRNAs have been reported to engage in tumorigenesis and progression. Zhang et al. (2009) found that the zinc finger transcription factor INSM1 interrupted cyclin D1 and CDK4 binding and induced cell cycle arrest. Besides, single-nucleotide polymorphisms (SNPs) adjacent to the gene B4GALT1 could be associated with cervical cancer development (Danolic et al., 2020). Research revealed that both SLC30A3 and GABRB2 have diagnostic and prognostic value for colon adenocarcinoma (Yin et al., 2020). As for Regulator of G Protein Signaling Like 1 (RGSL), novel mutations in this gene have been related to the pathophysiology of breast cancer (Wiechec et al., 2011). Cell Adhesion Molecule 3 (CADM3), targeted by miR-140-5p, is engaged in retinoblastoma cell proliferation, migration, and invasion (Miao et al., 2018).
Although the miRNAs in our signature have been reported to be closely related to the occurrence and progression of tumors, the relationships between some of these miRNAs and EEC remain uncertain. Furthermore, there is little research on the target genes regulated by these miRNAs and on their interactions. Thus, further investigations into these miRNAs and genes are warranted.
The current study has several limitations. The proportion of patients with LNM in the TCGA database was low. Besides, both our training and validation cohorts were obtained from the TCGA database, so more EEC samples are needed for further validation of the constructed signature before application. Another limitation is that the mechanisms of most of the identified miRNAs in EEC are unclear, so downstream experimental studies on these miRNAs need to be completed in the future.
CONCLUSION
In conclusion, we constructed a miRNA signature that works as a noninvasive method to detect LNM in EEC and achieves high prediction accuracy. In addition, the miR-34c cluster may be a key biomarker of LNM in endometrial cancer.
AUTHOR CONTRIBUTIONS
JX designed the study. KF, YL, JS, WC, WW, and XY prepared material and collected and analyzed the data. KF wrote the first draft of the manuscript. All authors read and approved the final manuscript.
Kinesin superfamily protein Kif26b links Wnt5a-Ror signaling to the control of cell and tissue behaviors in vertebrates
Wnt5a-Ror signaling constitutes a developmental pathway crucial for embryonic tissue morphogenesis, reproduction and adult tissue regeneration, yet the molecular mechanisms by which the Wnt5a-Ror pathway mediates these processes are largely unknown. Using a proteomic screen, we identify the kinesin superfamily protein Kif26b as a downstream target of the Wnt5a-Ror pathway. Wnt5a-Ror, through a process independent of the canonical Wnt/β-catenin-dependent pathway, regulates the cellular stability of Kif26b by inducing its degradation via the ubiquitin-proteasome system. Through this mechanism, Kif26b modulates the migratory behavior of cultured mesenchymal cells in a Wnt5a-dependent manner. Genetic perturbation of Kif26b function in vivo caused embryonic axis malformations and depletion of primordial germ cells in the developing gonad, two phenotypes characteristic of disrupted Wnt5a-Ror signaling. These findings indicate that Kif26b links Wnt5a-Ror signaling to the control of morphogenetic cell and tissue behaviors in vertebrates and reveal a new role for regulated proteolysis in noncanonical Wnt5a-Ror signal transduction.
Introduction
The Wnt family of extracellular signaling factors orchestrates diverse developmental processes during both embryogenesis and adult tissue homeostasis. Dysfunction of Wnt signaling has been implicated in many human diseases ranging from congenital birth defects to neoplasia (Clevers and Nusse, 2012;Kikuchi et al., 2012). Wnt ligands achieve high functional versatility in part by activating multiple biochemically distinct pathways to regulate diverse cell biological processes (Veeman et al., 2003;Semenov et al., 2007).
A growing consensus suggests that the Ror family of receptor tyrosine kinases mediates Wnt5a-dependent morphogenetic functions in the developing animal (Oishi et al., 2003;Green et al., 2008a;Mikels et al., 2009;Ho et al., 2012). However, how Wnt5a signaling via Ror receptors affects downstream cellular processes remains poorly understood. In a previous study, we found that among the many biochemical activities previously proposed to be downstream of Wnt5a signaling, only the phosphorylation of the cytoplasmic scaffolding protein Dishevelled (Dvl) required the expression of both Wnt5a and Ror proteins (Ho et al., 2012). This finding suggested that Wnt5a-Ror-dependent phosphorylation of Dvl specifically mediates the biological functions of Wnt5a signaling and led us to propose that Ror and Dvl are key components of the noncanonical Wnt5a pathway. The assignment of these proteins to a common pathway is further supported by the observation that human mutations in WNT5A, ROR2, DVL1 and DVL3 can cause Robinow syndrome, a congenital disorder characterized by short-limbed dwarfism and morphological defects in craniofacial and genital structures, demonstrating that the Wnt5a-Ror-Dvl pathway regulates morphogenesis during human development (Afzal et al., 2000;van Bokhoven et al., 2000;Person et al., 2010;Bunn et al., 2015;White et al., 2015, 2016). However, since the function of Dvl phosphorylation is not clear, and Dvl is a common component of several signaling pathways including the canonical Wnt signaling pathway and the planar cell polarity (PCP) pathway, how the Wnt5a-Ror pathway signals to carry out its biological functions remains incompletely understood.
In this study, we conducted a whole phosphoproteome-scale mass spectrometric screen comparing wild-type cells with cells lacking the Ror family of proteins in an effort to identify additional effectors of Wnt5a-Ror signaling. The screen identified a number of candidate proteins whose levels or phosphorylation status was influenced by Wnt5a-Ror signaling, including factors involved in cytoskeletal regulation and cell adhesion, processes crucial for the morphogenesis of tissues. We then focused the remainder of the study on characterizing Kif26b, a member of the kinesin microtubule motor superfamily, which stood out as a particularly promising target of Wnt5a-Ror signaling for the following reasons. Mutations in the C. elegans orthologs of Ror and Kif26b produce similar neuronal migration and axon guidance phenotypes, suggesting that these molecules might function in a common molecular pathway (Wightman et al., 1996;Forrester et al., 1998). Moreover, recent studies demonstrated that Kif26b plays crucial roles in regulating cytoskeleton-driven processes, including cell migration, polarization and adhesion, raising the possibility that Kif26b could function specifically as a cytoskeletal effector of the Wnt5a-Ror pathway (Uchiyama et al., 2010;Guillabert-Gourgues et al., 2016).
Through a series of biochemical studies, we demonstrate that Wnt5a-Ror signaling regulates the steady-state abundance of Kif26b in cells via a mechanism involving the ubiquitin-proteasome system that is independent of the canonical Wnt/β-catenin-dependent pathway. Importantly, gain- and loss-of-function experiments in cultured mesenchymal cells indicate that Wnt5a-Ror-Kif26b signaling modulates mesenchymal cell migration. We also find that perturbation of Kif26b function disrupts a number of Wnt5a/Ror-dependent processes in vivo. For example, in developing zebrafish embryos, mis-expression of Kif26b causes axis and craniofacial malformations, thus mirroring the effects of mis-expression of Wnt5a or Ror in zebrafish. In developing mouse embryos, Kif26b expression is required for primordial germ cells to populate the developing gonad, a process that also requires the expression of Wnt5a or Ror proteins. Taken together, these findings establish Kif26b as a downstream effector of the noncanonical Wnt5a-Ror pathway that regulates cell and tissue behaviors during development.
A phosphoproteomic screen identifies Wnt5a-Ror signaling targets
We sought to discover downstream effectors of Wnt5a-Ror signaling, as these could provide insight into the biochemical regulation and cell biological functions of the pathway. We reasoned that perturbation of upstream pathway components, such as the Ror receptors, would result in alterations in the biochemical regulation of downstream effectors. To test this hypothesis, we took advantage of primary mouse embryonic fibroblasts (MEFs) carrying conditional knockout alleles for the Ror1 and Ror2 genes (Ho et al., 2012) and screened for biochemical changes that occur upon genetic ablation of these genes. We previously showed that embryonic day 12.5 (E12.5) MEFs are a useful physiologically-relevant system for studying Wnt5a-Ror signaling. Not only are these cells derived from mesenchymal tissues that undergo active Wnt5a-Ror signaling in vivo, they continue to express high levels of endogenous Wnt5a, Ror1, Ror2 and Dvl proteins in culture and undergo autocrine/paracrine Wnt5a-Ror signaling without the addition of exogenous Wnt5a (Ho et al., 2012).
Using these conditional knockout MEFs, we performed a phosphoproteome-wide mass spectrometric screen to identify Ror-dependent changes in protein phosphorylation and/or abundance. Our reasoning was as follows: Wnt5a signaling regulates the phosphorylation state of known downstream components of the Wnt5a-Ror pathway, including Ror1, Ror2 and Dvl proteins (Bryja et al., 2007b;Nishita et al., 2010;Grumolato et al., 2010;Ho et al., 2012), and microarray analysis of primary MEFs lacking both Ror1 and Ror2 proteins failed to identify transcriptional changes relative to wild-type cells; therefore, Wnt5a-Ror signaling likely affects cellular functions via a transcription-independent process in MEFs (M.W.S., M.E.G., H.H.H. unpublished data).
To conduct the screen, we employed tandem mass tag (TMT) technology that enables the characterization and quantification of peptides from six experimental conditions in a single, multiplexed mass spectrometric (MS) analysis (Ting et al., 2011). This paradigm enabled the direct comparison of the identity, abundance and post-translational modifications of proteins present in cells in which the Ror proteins have been knocked out relative to wild-type control cells. Specifically, we analyzed phosphopeptides isolated from MEFs derived from E12.5 Ror1 f/f ; Ror2 f/f ; CAG-CreER embryos. As we described previously, the Ror1 and Ror2 conditional knockout alleles combined with the 4-hydroxytamoxifen (4-OHT)-inducible CAG-CreER allele enable the acute elimination of Ror1 and Ror2 protein expression in vitro (Ho et al., 2012).
For the first four of the six experimental conditions analyzed in the MS screen, we derived MEFs from two separate Ror1 f/f ; Ror2 f/f ; CAG-CreER embryos and treated each group with either 4-OHT or a vehicle control ( Figure 1A). 4-OHT treatment of Ror1 f/f ; Ror2 f/f ; CAG-CreER MEFs effectively eliminated Ror1 and Ror2 protein expression, as measured by western blotting, and reduced the phosphorylation of Dvl2, as measured by a mobility shift of this protein on SDS-PAGE gels (Figure 1-figure supplement 1). This result confirmed the acute elimination of Ror1 and Ror2 protein expression and a decrease in Wnt5a-Ror-Dvl signaling in both biological replicates analyzed in the MS screen.
For the last two of the six experimental conditions in the screen, we cultured control MEFs from a single Ror1 +/+ ; Ror2 +/+ ; CAG-CreER embryo and treated these cells with either 4-OHT or a vehicle control to identify any nonspecific effects due to the addition of 4-OHT and the induction of Cre recombinase expression ( Figure 1A). As expected, treatment of these control cells with 4-OHT did not alter the expression of Ror proteins or the phosphorylation of Dvl2, as compared with the vehicle control ( Figure 1-figure supplement 1). Together, the two experimental replicates and the control condition allowed us to identify changes in the abundance of phosphorylated proteins and/or specific changes in protein phosphorylation events that are due to the disruption of Ror expression.
6498 unique phosphopeptides, representing 7426 distinct phosphosites, were quantified in the screen (Supplementary file 1). For high-confidence identification of biochemical changes that are specific to the cells in which the Ror proteins were inducibly knocked out, phosphopeptides categorized as 'hits' had to meet the following criteria: (1) an average of ≥2-fold increase or decrease in the abundance of the phosphopeptide in 4-OHT-treated Ror1 f/f ; Ror2 f/f ; CAG-CreER MEFs relative to vehicle-treated Ror1 f/f ; Ror2 f/f ; CAG-CreER MEFs; (2) a significant fold change (p<0.05) across two experimental replicates; and (3) a <2-fold increase or decrease in the abundance of the phosphopeptide in 4-OHT-treated Ror1 +/+ ; Ror2 +/+ ; CAG-CreER MEFs relative to vehicle-treated Ror1 +/+ ; Ror2 +/+ ; CAG-CreER MEFs. The 2-fold threshold was chosen to capture candidates whose change in abundance ranked in the top 0.2 percent of all phosphopeptides analyzed in the screen. Hits in this screen could reflect two possibilities: (1) Ror signaling mediates the phosphorylation or dephosphorylation of a candidate protein, or (2) Ror signaling alters the total level of expression of a candidate protein. The first possibility is more likely when a specific phosphorylated peptide changes in abundance in a Ror-dependent manner while other phosphopeptides from the same protein do not change. The second possibility is more likely when each phosphorylated peptide from a given protein increases or decreases in abundance in a Ror-dependent manner.
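The three hit criteria amount to a simple per-phosphopeptide predicate. The following is an illustrative sketch, not code from the study: the function name, the use of linear fold-change ratios as inputs, and the example values are all assumptions.

```python
def is_hit(fc_ko_rep1, fc_ko_rep2, p_value, fc_control,
           threshold=2.0, alpha=0.05):
    """Hypothetical filter mirroring the screen's three hit criteria.

    Fold changes are linear ratios (4-OHT-treated / vehicle-treated).
    """
    mean_fc = (fc_ko_rep1 + fc_ko_rep2) / 2.0
    # (1) average >= 2-fold increase or decrease in Ror-knockout MEFs
    changed_in_ko = mean_fc >= threshold or mean_fc <= 1.0 / threshold
    # (2) significant (p < 0.05) across the two experimental replicates
    significant = p_value < alpha
    # (3) < 2-fold change in the 4-OHT-treated wild-type control MEFs
    unchanged_in_control = (1.0 / threshold) < fc_control < threshold
    return changed_in_ko and significant and unchanged_in_control
```

On this scheme, a Kif26b-like phosphopeptide (roughly 3-fold up in both knockout replicates, p<0.05, unchanged in the 4-OHT-treated wild-type control) passes all three criteria, while a peptide that changes little or changes in the control as well does not.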
A total of fifteen unique phosphopeptides were identified as hits ( Figure 1B and Supplementary file 2). Eleven phosphopeptides increased in abundance and four phosphopeptides decreased in abundance upon Ror depletion. Eight of the phosphopeptides that increased in abundance mapped to the same protein, the kinesin superfamily member Kif26b, making it a high-confidence candidate target of Wnt5a-Ror signaling ( Figure 1B,C and Figure 1-figure supplement 2). Moreover, all eight Kif26b phosphopeptides exhibited a significant increase in abundance following genetic ablation of Ror expression in both experimental replicates ( Figure 1B and Supplementary file 2). Taken together, these observations strongly suggest that Wnt5a-Ror signaling leads to a decrease in the level of Kif26b protein expression.
Kif26b is a downstream target of noncanonical Wnt5a-Ror signaling
Kif26b is a highly conserved atypical kinesin of the Kinesin-11 family, which includes Kif26a and Kif26b, two proteins whose developmental and cellular functions have only begun to be revealed in recent years (Uchiyama et al., 2010;Hirokawa and Tanaka, 2015;Guillabert-Gourgues et al., 2016). To further test the hypothesis that Wnt5a-Ror signaling leads to a decrease in Kif26b protein expression, as suggested by our MS screen, we generated polyclonal antibodies that specifically recognize the Kif26b protein. We validated the specificity of these antibodies by western blotting of protein extracts obtained from wild-type MEFs, multiple Kif26b shRNA-knockdown MEFs or Kif26b -/- MEFs (Figure 2-figure supplement 1A-C). We found that the anti-Kif26b antibodies recognized protein bands at the predicted size of Kif26b (~220 kD) in wild-type MEF lysates but not in the Kif26b knockout or knockdown MEF lysates, confirming that our antibodies specifically recognize endogenous Kif26b.
Using these antibodies, we assessed the expression of Kif26b protein in primary MEFs in which Ror1 and Ror2 proteins had been inducibly knocked out. We found that Kif26b levels were elevated in Ror1 f/f ; Ror2 f/f ; CAG-CreER MEFs treated with 4-OHT as compared to MEFs with the same genotype treated with a vehicle control ( Figure 2A). This finding suggests that Wnt5a-Ror signaling negatively regulates total Kif26b protein expression, as opposed to selectively catalyzing the dephosphorylation of multiple phosphorylation sites across the Kif26b protein. Moreover, this finding validates that our MS screening approach reliably identifies cellular proteins whose expression is regulated by Wnt5a-Ror signaling.
[Figure 1 legend, continued: in the volcano plots, 'hit' phosphopeptides are marked in blue and Kif26b phosphopeptides are circled in orange; panel (C) shows all phosphopeptides quantified in the control group, with fold change (4-OHT/vehicle) on the x-axis and arbitrary y-axis positions, since there is only one control replicate and no significance value is calculated. DOI: https://doi.org/10.7554/eLife.26509.002]
To test directly whether Wnt5a signaling regulates Kif26b protein levels, we assessed the expression of Kif26b in primary MEFs in which Wnt5a had been knocked out. We observed that Wnt5a -/- MEFs also had a higher level of Kif26b protein relative to wild-type control MEFs, indicating that Wnt5a, like Ror1 and Ror2, negatively regulates the steady-state level of Kif26b expression ( Figure 2B). This finding suggests that Wnt5a signaling via Ror proteins leads to a decrease in Kif26b protein expression.
We next investigated whether acute activation of Wnt5a-Ror signaling by the addition of exogenous Wnt5a triggers a decrease in Kif26b protein expression in MEFs. We stimulated Wnt5a -/- MEFs with purified, recombinant Wnt5a and found that this treatment led to a decrease in Kif26b protein expression in a dose-dependent manner ( Figure 2C). The decrease in Kif26b was accompanied by a commensurate increase in Ror1 and Dvl2 phosphorylation, a readout of Wnt5a-Ror signaling ( Figure 2C). These findings suggest that Wnt5a induces the downregulation of endogenous Kif26b expression as the Wnt5a-Ror-Dvl pathway becomes activated.
We found that the Wnt5a-induced downregulation of Kif26b expression is first detected 1 hr after Wnt5a stimulation and that Kif26b expression is maximally decreased after 6 hr ( Figure 2D). The kinetics of Kif26b protein downregulation closely paralleled those of Ror1 and Dvl2 phosphorylation in response to acute Wnt5a stimulation ( Figure 2D), suggesting that Wnt5a signaling regulates these downstream biochemical events in a coordinated manner. In addition, we found that stimulation with Wnt5a does not change the level of Kif26b mRNA in Wnt5a -/- MEFs, as assayed by reverse transcription-quantitative PCR (RT-qPCR), throughout the course of the experiment ( Figure 2E), suggesting that Wnt5a-dependent downregulation of Kif26b protein levels occurs post-transcriptionally. Together, these findings establish that Wnt5a-Ror signaling leads to a decrease in the steady-state levels of Kif26b protein expression in MEFs and validate Kif26b as a bona fide target of the Wnt5a-Ror signaling pathway.
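Relative mRNA expression by RT-qPCR is conventionally computed with the 2^-ΔΔCt (Livak) method; the text does not state which quantification scheme was used, so the following is a generic sketch with hypothetical Ct values, not the authors' analysis.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the standard 2^-ΔΔCt (Livak) method.

    Ct values are PCR threshold cycles; the reference gene (e.g. a
    housekeeping transcript) normalizes for input amount.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Identical delta-Ct in stimulated and unstimulated cells gives a fold
# change of 1.0, i.e. no transcriptional change, as reported for Kif26b.
no_change = fold_change_ddct(24.0, 18.0, 24.0, 18.0)
```

A one-cycle drop in the target Ct relative to the reference, by contrast, would correspond to a two-fold increase in transcript abundance.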
The Wnt5a-Ror pathway is generally thought to operate via a noncanonical, β-catenin-independent mechanism (Green et al., 2008b). To determine whether Wnt5a-Ror signaling induces the downregulation of Kif26b expression via a noncanonical Wnt signaling mechanism, we tested whether blocking the canonical Wnt/β-catenin pathway with Dkk-1, an antagonist of the β-catenin-dependent Wnt pathway that binds and prevents the phosphorylation of the canonical Wnt signaling pathway co-receptors Lrp5 and Lrp6, inhibits the Wnt5a-induced decrease in Kif26b protein in MEFs (Bafico et al., 2001). We found that exposure of Wnt5a -/- MEFs to exogenous Wnt5a protein induced Kif26b downregulation to a similar degree with and without the addition of Dkk-1 ( Figure 2F). To ensure that the Dkk-1 protein used in the experiment was active, we assessed whether the same concentration of Dkk-1 was capable of blocking signaling by Wnt3a, a prototypic canonical Wnt that induces the phosphorylation of the Wnt receptor Lrp6. We found that Wnt3a-dependent phosphorylation of Lrp6 was completely blocked in the presence of Dkk-1 ( Figure 2F).
[Figure 2 legend, panels C-G: immunoblots of Kif26b, Ror1, Dvl2 and phospho-Lrp6 (serine 1490) in primary E12.5 Wnt5a -/- MEFs treated with recombinant Wnt5a (dose course; time course at 0.1 mg/ml), Wnt3a, Dkk-1 (0.1 mg/ml, 8 hr prior to lysis) or IWR-1 (7 hr prior to lysis); RT-qPCR of Kif26b and β-actin mRNA after 1 hr or 6 hr of Wnt5a stimulation, plotted as fold change relative to unstimulated cells (unpaired t-test: Kif26b 1 hr vs. 6 hr, p=0.249, not significant; β-actin 1 hr vs. 6 hr, p=0.320, not significant; error bars ± SEM of three technical replicates). α-tubulin served as loading control in all experiments; all immunoblot samples were normalized for total protein by BCA assay.]
These findings suggest that Wnt5a-dependent regulation of Kif26b occurs via a noncanonical Wnt signaling mechanism that is independent of the canonical Wnt/β-catenin pathway.
To verify that Wnt5a regulation of Kif26b occurs via a noncanonical Wnt signaling mechanism, we next tested whether inhibiting the Wnt/β-catenin pathway at a more downstream step blocks Wnt5a-dependent downregulation of Kif26b levels. For this purpose, we used IWR-1, a small molecule that inhibits Wnt/β-catenin signaling by stabilizing Axin2, a key component of the β-catenin destruction complex (Lee et al., 2003;Chen et al., 2009). Similar to our findings using Dkk-1 to block the canonical Wnt/β-catenin pathway at the receptor level, we found that pre-treatment of Wnt5a -/- MEFs with IWR-1 did not block the ability of Wnt5a or Wnt3a to induce the downregulation of Kif26b protein expression ( Figure 2G). These results further support the conclusion that Wnt5a-Ror signaling leads to the downregulation of Kif26b protein expression via a noncanonical, β-catenin-independent mechanism.
It is interesting to note that the addition of exogenous Wnt3a protein, classically considered a canonical Wnt, also led to a decrease in the expression of Kif26b in Wnt5a -/- MEFs, even in the presence of Dkk-1 or IWR-1 ( Figure 2F,G). These findings indicate that exogenous Wnt3a also signals via a noncanonical Wnt signaling mechanism to downregulate the expression of Kif26b and support the emerging view that the distinction between canonical and noncanonical Wnt signaling is not strictly determined at the level of Wnt ligands (van Amerongen et al., 2008). However, it is also possible that there is some specificity to Wnt3a and Wnt5a signaling in the developing embryo that is lost when these factors are studied using cultured MEFs.
Wnt5a signals the degradation of Kif26b via the ubiquitin-proteasome pathway
We next investigated the biochemical mechanisms by which Wnt5a-Ror signaling leads to decreased Kif26b protein expression. To more accurately quantify Kif26b levels in live cells, we developed a flow cytometry-based reporter assay using NIH/3T3 cell lines stably expressing a GFP-Kif26b fusion protein. We chose NIH/3T3 cells for the reporter assay because these cells express key components of the Wnt5a-Ror pathway, including Ror1, Ror2, Dvl2 and Kif26b, which we found to be similarly regulated by Wnt5a as in MEFs (Figure 3-figure supplement 1A,B). Furthermore, NIH/3T3 cells are an immortalized cell line derived from mouse embryonic mesenchymal cells that undergo morphogenetic movements during development (Todaro and Green, 1963), and they have been used previously to study cell behaviors in the context of Wnt5a-Ror signaling (Endo et al., 2012). Western analysis and immunostaining of these NIH/3T3 cell lines confirmed that the GFP-Kif26b protein is stably expressed in these cells.
Given that Wnt5a signaling acutely downregulates the level of endogenous Kif26b expression in MEFs, we tested whether Wnt5a similarly downregulates GFP-Kif26b levels in the reporter NIH/3T3 cells. Treatment of these cells with exogenous recombinant Wnt5a protein induced an approximately 50% decrease in GFP-Kif26b expression as detected by three independent methods: flow cytometry ( Figure 3A), western analysis (Figure 3-figure supplement 2A) and time-lapse microscopy (Video 1). These results indicate that Wnt5a treatment leads to a decrease in GFP-Kif26b in NIH/3T3 cells similar to that observed for endogenous Kif26b in MEFs, and that the GFP-Kif26b reporter can be used reliably to monitor the decrease in Kif26b protein expression in response to Wnt5a signaling. To the best of our knowledge, this is the first fluorescence-based reporter for real-time measurement of a non-transcriptional Wnt5a-Ror signaling response in live cells.
For the remainder of this article, the Wnt5a-Ror-Kif26b reporter assay will be referred to as the WRK reporter assay.
We next used the WRK reporter assay in conjunction with flow cytometry to determine the dose-response relationship between Wnt5a and GFP-Kif26b. We found that nanomolar amounts of Wnt5a induced the downregulation of GFP-Kif26b expression with a calculated EC50 of 79.8 ng/ml (or 2.1 nM; Figure 3B). This response to Wnt5a occurs at a Wnt5a concentration that is similar to that of other previously reported Wnt-induced cellular responses (Bryja et al., 2007a, 2007b;Witze et al., 2008;Ho et al., 2012;van Amerongen et al., 2012;Witze et al., 2013;Park et al., 2015;Connacher et al., 2017), suggesting that Wnt5a-induced Kif26b downregulation is a physiologically relevant response to Wnt5a.
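The mass-to-molar conversion behind the reported EC50 (79.8 ng/ml ≈ 2.1 nM) is simple unit arithmetic; back-calculating, it implies a molecular weight of roughly 38 kDa for the recombinant Wnt5a, which is an inference from the two reported numbers, not a value stated in the text.

```python
def ng_per_ml_to_nM(conc_ng_per_ml, mw_kda):
    """Convert a mass concentration to molarity.

    1 ng/ml = 1 ug/L, and a molecular weight in kDa equals ug/nmol,
    so the quotient is nmol/L, i.e. nM.
    """
    return conc_ng_per_ml / mw_kda

# Assumed ~38 kDa for recombinant Wnt5a (inferred, see lead-in)
ec50_nM = ng_per_ml_to_nM(79.8, 38.0)  # -> 2.1 nM, matching the text
```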
Wnt5a-Ror signaling could downregulate the steady state level of Kif26b protein either by decreasing Kif26b synthesis or by increasing Kif26b turnover. Given that exposure of fibroblasts to Wnt5a leads to a decrease in Kif26b expression over minutes to hours but does not lead to a reduction in the level of Kif26b mRNA ( Figure 2D,E), we favored the latter possibility. To directly measure the rate of Kif26b turnover, we treated WRK reporter cells with cycloheximide to block new protein synthesis and then used flow cytometry to measure the effect of Wnt5a treatment on the rate of GFP-Kif26b turnover. Consistent with the hypothesis that Wnt5a treatment leads to the increased degradation of Kif26b, we found that Wnt5a accelerated the turnover of GFP-Kif26b in the reporter cells ( Figure 3C). In the absence of Wnt5a stimulation, 80.2% of the GFP-Kif26b signal remained in the cells 6 hr after the initiation of the cycloheximide treatment. By contrast, in the presence of Wnt5a stimulation, only 33.2% of the GFP-Kif26b signal remained in the same period. These results strongly suggest that Wnt5a treatment leads to a decrease in the steady-state levels of Kif26b expression in cells by promoting Kif26b turnover.
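The cycloheximide-chase percentages above can be converted into approximate half-lives if one assumes simple first-order turnover; this kinetic model is an assumption of the sketch below, not a claim made in the text.

```python
import math

def half_life_hours(fraction_remaining, elapsed_hours):
    """Half-life under assumed first-order decay: N(t) = N0 * exp(-k*t)."""
    k = -math.log(fraction_remaining) / elapsed_hours
    return math.log(2.0) / k

# 80.2% of GFP-Kif26b remained after 6 hr without Wnt5a, versus
# 33.2% with Wnt5a (values from the text)
t_half_unstimulated = half_life_hours(0.802, 6.0)  # roughly 19 hr
t_half_wnt5a = half_life_hours(0.332, 6.0)         # roughly 3.8 hr
```

Under this assumption, Wnt5a stimulation would shorten the apparent Kif26b half-life about five-fold, consistent with the conclusion that Wnt5a promotes Kif26b turnover rather than suppressing its synthesis.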
We next investigated whether Wnt5a downregulates Kif26b expression by increasing the rate of Kif26b degradation via the ubiquitin-proteasome system (UPS). To test this hypothesis, we asked whether a selective inhibitor of the proteasome, epoxomicin, blocks downregulation of Kif26b expression in response to Wnt5a (Meng et al., 1999). Pre-treatment of the WRK reporter line with epoxomicin strongly blocked Wnt5a-induced Kif26b downregulation ( Figure 3D,F). This result suggests that Wnt5a-induced downregulation of Kif26b occurs via proteasome-dependent degradation.
To determine whether protein ubiquitination is specifically required for Wnt5a-induced Kif26b degradation, we tested whether an inhibitor of the E1 enzyme required for ubiquitin activation, PYR-41, also blocks Wnt5a-induced Kif26b downregulation (Yang et al., 2007). Pre-treatment of the WRK reporter cells with PYR-41 blocked Wnt5a-induced Kif26b downregulation to an extent similar to that observed upon epoxomicin treatment ( Figure 3E,F). This finding provides further evidence that downregulation of Kif26b expression induced by Wnt5a occurs via the UPS. We conclude from these results that exposure to Wnt5a leads to the downregulation of the steady-state levels of Kif26b expression by promoting ubiquitin- and proteasome-dependent degradation of Kif26b.
Frizzled and Dishevelled proteins mediate Wnt5a-induced Kif26b degradation
Our pharmacological inhibitor experiments suggest that the WRK reporter assay might be used to interrogate other molecular components of the Wnt5a-Ror signaling pathway that operate upstream of Kif26b. To test this idea, we first investigated a possible role of Frizzled (Fzd) proteins in Wnt5a-dependent Kif26b degradation. Fzds make up a family of 10 seven-transmembrane domain receptor proteins that function as co-receptors in canonical Wnt/β-catenin signaling and as polarity determinants in the PCP pathway (Vinson and Adler, 1987;Bhanot et al., 1996). Recent work additionally implicates Fzd proteins in aspects of noncanonical Wnt function as well as in the phosphorylation of Ror2 in cultured cells (Habas et al., 2001;Nishita et al., 2010;Grumolato et al., 2010;Sato et al., 2010). Moreover, protein sequence homology analysis and in vivo mouse genetic studies revealed that Fzd1, Fzd2 and Fzd7 form a distinct sub-family that functions redundantly to control tissue morphogenetic events such as convergent extension of embryonic tissues and closure of the palate and ventricular septum (Yu et al., 2010;Yu et al., 2012). As many of these developmental processes also require the Wnt5a-Ror pathway, we hypothesized that certain Fzd proteins, such as those in the Fzd1, Fzd2 and Fzd7 sub-family, might participate in Wnt5a-Ror-Kif26b signaling.
To test whether Fzd proteins mediate Wnt5a-Ror-Kif26b signaling, we first took a loss-of-function approach using Shisa proteins, which are Wnt signaling regulators that inhibit Fzd processing and trafficking by sequestering Fzds in the endoplasmic reticulum (Yamamoto et al., 2005). We found that viral transduction of Shisa2 expression, but not Cas9 expression as a control, partially blocked the ability of Wnt5a to induce Kif26b degradation in the WRK reporter cell line ( Figure 4A,B). This observation suggests that the Fzd family plays a role in Wnt5a-dependent regulation of Kif26b. To more directly test the involvement of Fzds in Wnt5a-Kif26b signaling, we asked whether overexpression of Fzds could induce Kif26b degradation in the absence of exogenously added Wnt5a. We first focused on members of the Fzd1, Fzd2 and Fzd7 sub-family, as the mouse knockout phenotypes of these Fzds are most consistent with a functional interaction with the Wnt5a-Ror pathway (Yu et al., 2010;Yu et al., 2012). Interestingly, we found that lentiviral transduction of Fzd1 or Fzd7 expression, but not of the negative control Cas9, constitutively induced Kif26b degradation as measured in the WRK assay ( Figure 4C,D). This finding, taken together with the decreased Wnt5a-Ror-Kif26b signaling upon expression of the Fzd inhibitor Shisa2, strongly suggests that Fzd family proteins are involved in Wnt5a regulation of Kif26b degradation, possibly as co-receptors together with members of the Ror family of proteins.
We also considered the possibility that functional specificity might exist among the different Fzd sub-families. We therefore used the WRK assay to test multiple members from each Fzd sub-family, as defined previously (Yu et al., 2010;Yu et al., 2012). Interestingly, we found that all the Fzd family members that we tested were able to induce Kif26b degradation (Figure 4-figure supplement 1). However, it is possible that the specificity of Fzd protein function is lost when these proteins are overexpressed.
We next investigated the role of Dvl proteins in Wnt5a-Ror-dependent Kif26b degradation. Our previous study identified Dvl phosphorylation as a specific downstream target of Wnt5a-Ror signaling (Ho et al., 2012), and a recent study demonstrated that the Dvl and Kif26b proteins physically interact (Guillabert-Gourgues et al., 2016). Moreover, we found that Wnt5a-induced Kif26b degradation occurs with similar kinetics as Wnt5a-induced Dvl phosphorylation ( Figure 2D). However, it remains unknown whether Dvl proteins are required for Wnt5a-dependent degradation of Kif26b. Since the presence of three Dvl genes in the mammalian genome makes loss-of-function analysis of Dvl proteins challenging, we took a gain-of-function approach to determine if Dvl protein expression affects Wnt5a-Ror-dependent Kif26b degradation. Notably, a similar overexpression approach was previously used to demonstrate a role for Dvl proteins in Wnt/β-catenin signaling, as overexpression of Dvl proteins in Xenopus embryos induces axis duplication recapitulating overexpression of canonical Wnts (Sokol et al., 1995). Using the WRK reporter assay, we found that overexpression of Dvl1, but not overexpression of a control protein Cas9, led to an increase in Kif26b degradation ( Figure 4E,F), mimicking the effects of Wnt5a stimulation or Fzd overexpression. Taken together, these results suggest that Dvl proteins, like Fzd proteins, function in the Wnt5a-Ror pathway upstream of Kif26b degradation.
The Wnt5a-Ror-Kif26b signaling cassette directs the migratory behavior of cells
Genetic studies in C. elegans have shown that mutations in the nematode orthologs of Kif26b (vab-8) and Ror (cam-1) cause similar polarized cell migration and axon guidance phenotypes, and a recent study in human umbilical vein endothelial cells (HUVECs) demonstrates a physical interaction between Kif26b and Dvl3 (Forrester et al., 1998;Wolf et al., 1998;Chien et al., 2015;Guillabert-Gourgues et al., 2016).
These findings, together with the observations reported above, raise the possibility that the cell biological effects of noncanonical Wnt signaling are mediated by the Wnt5a-Ror-dependent degradation of Kif26b. To test this hypothesis, we employed both gain- and loss-of-function approaches in cultured NIH/3T3 cells, the same cells used for the WRK assays. Western analysis of the NIH/3T3 cell lines used in the WRK assay showed that GFP-tagged Kif26b protein is overexpressed relative to endogenous Kif26b in NIH/3T3 cells ( Figure 5-figure supplement 1A), indicating that these cells could be used to assess the effects of Kif26b overexpression on cell responses. For examining the effects of loss of Kif26b expression on cellular responses in NIH/3T3 cells, we used CRISPR/Cas9-mediated genome editing to generate stable cell lines in which the expression of Kif26b is knocked out (Figure 5-figure supplement 1B and C). Western blot analysis verified complete elimination of Kif26b protein relative to a control cell line expressing the Cas9 endonuclease without a guide RNA ( Figure 5-figure supplement 1B). We next confirmed that neither GFP-Kif26b overexpression nor loss of Kif26b affects cell proliferation or survival through mitotic index quantification and TUNEL staining, respectively ( Figure 5-figure supplement 1D-G), indicating that these NIH/3T3 cell lines could be used to study other possible Kif26b-dependent cellular responses.
We employed an automated kinetic wound-healing assay that provides a quantitative, integrated readout of morphogenetic cell behaviors such as cell polarization, cell motility and cell adhesion by measuring the wound closure efficiency of cells under different experimental conditions (Gujral et al., 2014a). The wound-healing assay has been used previously to assess effects of noncanonical Wnt signaling (Gujral et al., 2014a, 2014b). Using this assay, we asked if the level of Kif26b expression affects wound closure efficiency. We found that Kif26b-knockout cells exhibit a decrease in wound closure efficiency relative to control cells while cell lines overexpressing GFP-Kif26b exhibit an enhanced rate of wound closure relative to control cells ( Figure 5A,B). These findings suggest that Kif26b may promote cell migration and are consistent with previous observations in HUVECs where Kif26b expression was correlated with increased directional cell migration (Guillabert-Gourgues et al., 2016). Taken together, these findings suggest that by controlling the rate of Kif26b protein degradation, Wnt5a-Ror signaling regulates cell migration.
Given that exposure of NIH/3T3 cells to Wnt5a acutely downregulates Kif26b protein expression, we next tested whether this treatment affects the migration of these cells in the wound-healing assay. We found that Wnt5a treatment decreases the wound closure efficiency of cells overexpressing GFP-Kif26b to a rate approximating that of a control NIH/3T3 cell line in which GFP-Kif26b is not overexpressed ( Figure 5B). This Wnt5a-mediated decrease in wound closure efficiency was correlated with a concomitant decrease in the cellular abundance of Kif26b, suggesting that the specific degradation of Kif26b could underlie the decrease in wound closure efficiency ( Figure 5-figure supplement 2). Importantly, Wnt5a stimulation of NIH/3T3 cells in which Kif26b expression had been knocked out via CRISPR/Cas9-mediated genome editing resulted in no decrease in wound closure efficiency (i.e. had no effect on cell migration) ( Figure 5-figure supplement 3), suggesting that Kif26b expression is required for Wnt5a-dependent changes in cell migration. Taken together, these results indicate that one key function of Kif26b is to promote cell migration, and that Wnt5a signaling may control the extent of cell migration by regulating the degradation of the Kif26b protein.
In vivo perturbation of Kif26b function produces phenotypes characteristic of noncanonical Wnt5a-Ror signaling defects
We next sought to determine whether in vivo perturbation of Kif26b during embryonic development results in phenotypes consistent with an important function for Kif26b in Wnt5a-Ror signaling. A previously used approach for determining if a given signaling molecule functions as part of the noncanonical Wnt pathway was to determine if mis-expression of the signaling molecule leads to a phenotype similar to that observed when Wnt5a or Rors are mis-expressed in developing Xenopus or zebrafish embryos (Moon et al., 1993;Hikasa et al., 2002;Bai et al., 2014;Habas et al., 2001). Mis-expression of either Wnt5a or Ror2 in these embryos produces tissue morphogenesis defects, including defective convergent extension movements and a shortened or bent body axis, which are characteristic of disrupted noncanonical Wnt signaling. We therefore asked whether mis-expression of Kif26b protein in developing zebrafish embryos similarly induced phenotypes typical of abnormal noncanonical Wnt signaling.
To mis-express proteins in the developing zebrafish embryo, we microinjected in vitro-transcribed mRNA into one-cell stage embryos. Replicating previous reports (Moon et al., 1993;Bai et al., 2014), microinjection of Wnt5a mRNA caused axis truncation and bending phenotypes ( Figure 6A, B). Strikingly, microinjection of Kif26b mRNA into one-cell zebrafish embryos also caused axis truncation and bending phenotypes resembling those caused by mis-expression of Wnt5a ( Figure 6A,B). These phenotypes were rarely observed in uninjected embryos or in embryos injected with a negative control mRNA of similar size, Cas9, indicating that the axis truncation and bending phenotypes produced by Kif26b mis-expression are specific ( Figure 6A,B). Together, these findings suggest that Kif26b specifically affects morphogenetic movements of cells in developing embryos in a manner similar to that of Wnt5a or Ror2 and support a model whereby Kif26b functions as part of a noncanonical Wnt5a-Ror regulatory cassette that regulates morphogenetic movements during embryogenesis.
To investigate further if Kif26b functions as part of the Wnt5a-Ror pathway during mouse development, we analyzed the development of primordial germ cells (PGCs), a process previously shown to require the Wnt5a-Ror pathway (Laird et al., 2011;Chawengsaksophak et al., 2012). During mouse embryogenesis, PGCs are specified from the epiblast at ~E7.25 and subsequently migrate through the hindgut and dorsal mesentery to populate the gonadal ridges by E11.5. PGCs that fail to enter the gonad are eliminated by programmed cell death (McLaren, 2003;Laird et al., 2008). Loss-of-function mutations in Wnt5a or Ror2 alleles result in a substantial decrease in the number of PGCs that successfully colonize the gonad at E11.5 as compared to wild-type controls (~75% fewer in Wnt5a and ~50% fewer in Ror2 mutants) (Chawengsaksophak et al., 2012;Laird et al., 2011), indicating that Wnt5a-Ror signaling is required for the proper colonization of the gonadal ridges by migrating PGCs. We reasoned that if Kif26b mediates biological activities of the Wnt5a-Ror pathway during PGC development, genetic perturbation of Kif26b might also disrupt PGC colonization of the gonads.
To test this hypothesis, we quantified the number of PGCs in the gonadal ridges of E11.5 Kif26b -/- embryos by whole-mount SSEA1 (a marker of PGCs) staining. The mean number of SSEA1-positive PGCs in the Kif26b -/- gonads (372.8 ± 74.95, n=4) was decreased significantly to 44.9% of Kif26b +/+ littermate controls (829.5 ± 88.553, n=2) ( Figure 6C,D). The relative similarity of the PGC depletion phenotypes observed in E11.5 Kif26b -/-, Ror2 -/- and Wnt5a -/- mouse embryos suggests that Kif26b functions in a common signaling pathway with Wnt5a and Ror proteins to orchestrate PGC development. This experiment, taken together with the mis-expression analysis in zebrafish, provides in vivo evidence that Kif26b contributes to Wnt5a-Ror signaling in multiple tissues during embryonic development.
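As a quick arithmetic check on the quoted 44.9%, using only the group means reported above:

```python
# Mean SSEA1-positive PGC counts reported in the text
kif26b_ko_mean = 372.8   # Kif26b -/- gonads (n=4)
wild_type_mean = 829.5   # Kif26b +/+ littermate controls (n=2)

percent_of_control = 100.0 * kif26b_ko_mean / wild_type_mean  # ~44.9%
```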
Discussion
Since the seminal discovery that certain Wnt proteins can signal independently of β-catenin-mediated transcription to affect organ and tissue morphogenesis during development (Moon et al., 1993), few downstream effectors of this signaling system have been identified and validated in biological systems. In our previous work, we used in vivo mouse genetics to demonstrate that Ror receptors are essential mediators of a core noncanonical Wnt5a pathway crucial for tissue morphogenesis (Ho et al., 2012). In the current study, we integrate conditional mouse genetics with quantitative proteomics to identify new targets of Wnt5a-Ror signaling, and focus on one particularly high-confidence target, Kif26b.
The first indication that Kif26b may mediate Wnt5a/Ror-dependent developmental processes came from studies in C. elegans. Orthologs of Kif26b (vab-8) and Ror (cam-1) were among the fourteen genes identified in a forward mutagenesis screen for genes required for the directional migration of the C. elegans canal-associated neuron (Forrester et al., 1998). C. elegans mutants of vab-8 and cam-1 display similar polarized cell migration and axon guidance defects where cell bodies and axons that normally move specifically toward the posterior end of the body become abnormally anteriorly displaced (Wightman et al., 1996;Forrester et al., 1999). In addition, both vab-8 and cam-1 mutants exhibit a lower penetrance withered-tail (Wit) phenotype reminiscent of the posterior A-P body axis truncation phenotype seen in Wnt5a or Ror mutant mice (Forrester et al., 1998). These studies, taken together with the findings in mammalian cells described in this article, suggest that Wnt5a, Ror and Kif26b proteins may function as part of an evolutionarily conserved pathway that orchestrates morphogenetic processes during the development of tissues. Further evidence suggesting Kif26b may function in noncanonical Wnt signaling came from a recent study that identified Kif26b as a binding partner of Dvl3 in HUVECs (Guillabert-Gourgues et al., 2016), as Dvl proteins are known targets of Wnt5a-Ror signaling (Ho et al., 2012). Interestingly, this study demonstrated that Kif26b promotes the directional cell polarization and growth of HUVECs (Guillabert-Gourgues et al., 2016), consistent with our finding that Kif26b controls the migration of cells.
Our study demonstrates that Wnt5a-Ror signaling regulates Kif26b degradation, thereby influencing dynamic cellular processes such as the migratory behavior of cells. Moreover, by perturbing Kif26b function during embryogenesis, we provide in vivo evidence that Kif26b mediates certain biological effects of the Wnt5a-Ror pathway in body axis elongation and PGC development. Together, these findings suggest that a Wnt5a-Ror-Kif26b pathway comprises a conserved signaling cassette crucial for the execution of noncanonical Wnt functions.
Biochemical mechanisms of Wnt5a-Ror-Kif26b signaling
Our observation that Wnt5a-Ror signaling triggers the ubiquitin/proteasome-dependent degradation of Kif26b demonstrates that UPS-mediated proteolysis is a conserved strategy employed by both the canonical Wnt/β-catenin and the noncanonical Wnt5a-Ror pathways to control the cellular abundance of their respective downstream effectors (Figure 7). In addition, we provide further evidence that Fzd and Dvl proteins are shared functional components of both Wnt signaling branches. Collectively, these observations suggest that the Wnt-Fzd-Dvl signaling module is an ancient and conserved feature common to multiple Wnt signaling systems. During evolution of the pathway, different signaling branches appear to have adopted additional regulatory mechanisms to achieve signaling and functional specificity. For example, while the Wnt/β-catenin pathway uses the Lrp5/6 receptors in conjunction with Fzd receptors to transmit a canonical Wnt signal across the plasma membrane, the Wnt5a-Ror pathway uses Ror1/2 receptors, likely also in conjunction with Fzds, to transmit a noncanonical Wnt signal.
At present, we do not understand the biochemical mechanisms by which Ror and/or Fzd receptors function in the pathway. In the conventional paradigm of receptor tyrosine kinase signaling, ligand binding induces receptor dimerization or oligomerization, which in turn enhances the intrinsic kinase activity of the receptor (Lemmon and Schlessinger, 2010). This then triggers receptor autophosphorylation and the subsequent recruitment and/or phosphorylation of downstream cellular effectors. Whether the Ror family of receptor tyrosine kinases possesses catalytically active kinase activity, however, is still under debate. It was reported that Wnt5a can stimulate the ability of full-length Ror2 to phosphorylate a GST-Ror2 kinase domain fusion protein on tyrosine residue(s) in vitro, and that mutagenesis of conserved tyrosine residues within the tyrosine kinase regulatory loop of Ror2 impairs the protein's ability to repress canonical Wnt/β-catenin signaling (Mikels et al., 2009). Ror1 and Ror2, however, are known to harbor substitutions at amino acid residues generally believed to be critical for normal kinase function, and recent biochemical and structural studies have further suggested that the Ror proteins are pseudokinases (Gentile et al., 2011; Artim et al., 2012; Mendrola et al., 2013). In our phosphoproteomic study, we identified nine distinct Kif26b phosphorylation sites on eight unique phosphopeptides, and all these sites mapped to serine or threonine residues (Figure 1-figure supplement 2 and Supplementary file 1). In general, tyrosine phosphorylation is more dynamic than serine or threonine phosphorylation and tends to be underrepresented in large-scale phosphoproteomic studies (Lombardi et al., 2015). We therefore examined whether Wnt5a-Ror signaling induces tyrosine phosphorylation of Kif26b in NIH/3T3 cells using an anti-phosphotyrosine antibody-based affinity pull-down approach.
However, under the various experimental conditions tested, we observed no effect of Wnt5a-Ror signaling on the tyrosine phosphorylation of Kif26b (S.S.C. and H.H.H., unpublished data). Thus, whether Ror receptors are active kinases or pseudokinases, and whether Kif26b is a direct substrate of these receptor tyrosine kinases will require further investigation.
It is also possible that phosphorylation of Kif26b by other cellular kinases is involved in Wnt5a-dependent regulation of Kif26b. Interestingly, a previous study showed that cyclin-dependent kinase (CDK) phosphorylates Kif26b on multiple serine and threonine sites and that these phosphorylation events play a critical role in controlling the stability of Kif26b by recruiting the E3 ubiquitin ligase Nedd4 (Terabayashi et al., 2012). It will be important in future studies to test whether these mechanisms, as well as the additional phosphorylation sites identified in our proteomic screens, are required for Wnt5a regulation of Kif26b degradation.
The functional interplay between Ror and Fzd receptors is also currently unclear. We have demonstrated that overexpression of Fzd family members is sufficient to induce Kif26b degradation in NIH/3T3 cells ( Figure 4C,D). Overexpression of Ror2, however, does not have the same effect (E.P.K. and H.H.H., unpublished observation). These observations raise the possibility that Fzd proteins may function as the signaling receptors, while Ror proteins may play a modulatory role. This model is also consistent with our observation that even though endogenously expressed Wnt5a requires Ror1 and Ror2 expression to downregulate levels of Kif26b ( Figures 1B and 2A), high concentrations of exogenously added Wnt5a can induce Kif26b degradation in Ror1 and Ror2 double knockout MEFs (M.W.S., M.E.G. and H.H.H., unpublished data), suggesting that Wnt5a can signal through receptors other than Rors. A deeper understanding of how Ror and Fzd proteins mediate Wnt5a-Kif26b signaling will require additional biochemical, structural and functional studies.
Our study also directly implicates Dvl proteins in promoting Kif26b degradation ( Figure 4E,F) and further suggests a broader role for Dvl in regulating the stability of Wnt signaling effectors. Interestingly, during canonical Wnt signaling, Dvl functions to inhibit the degradation of β-catenin, whereas during noncanonical Wnt5a signaling, Dvl functions to induce the degradation of Kif26b (Figure 7). How Dvl transmits pathway-specific signals remains to be elucidated, but it likely involves differential post-translational modifications such as phosphorylation, differential protein-protein interactions, or both (Wallingford and Habas, 2005). The identification of Kif26b as a specific target of Dvl in the noncanonical Wnt signaling branch provides a new inroad to address this important question.
There is also emerging evidence that noncanonical Wnt signaling may function more generally to regulate the degradation of multiple proteins. For instance, Wnt5a signaling can induce the degradation of the cell adhesion receptor Syndecan 4 via the proteasome (Carvallo et al., 2010), and noncanonical Wnt5a and Wnt11 signaling has been associated with the proteasomal degradation of the focal adhesion molecule Paxillin (Kurayoshi et al., 2006;Iioka et al., 2007). It is possible that Wnt5a downregulates multiple effector molecules to affect the shape and behavior of cells in a variety of cell types and developmental contexts.
Cellular and developmental functions of Kif26b
Data from our wound-healing assay suggest that one function of the Wnt5a-Ror-Kif26b axis is to modulate the migration of cells via Wnt5a-dependent degradation of Kif26b. How might Kif26b function at a subcellular level to affect cell migration? Mesenchymal cell migration typically involves four reiterative steps: protrusion of the leading edge, formation and maturation of focal adhesions, forward translocation of the cell body and finally retraction and de-adhesion of the trailing edge. Using the GFP-Kif26b NIH/3T3 cell line, we observed that the Kif26b protein is predominantly localized to the trailing edge of the cell, which is most consistent with a role of Kif26b in promoting trailing edge de-adhesion and/or retraction (Figure 3-figure supplement 2D,E; Video 1). This model is supported by a previously reported binding interaction between Kif26b and Myosin IIb, a key regulator of cell trailing edge de-adhesion and contractility (Uchiyama et al., 2010). It may also explain why Wnt5a has been reported in some studies to promote, while in others to suppress, cell migration (Kremenevskaja et al., 2005;Dissanayake et al., 2007;Nomachi et al., 2008;Enomoto et al., 2009;McDonald and Silver, 2009;Jiang et al., 2013;Liu et al., 2014;Zhang et al., 2014;Prasad et al., 2016;Yu et al., 2016;Connacher et al., 2017). It is well established that a delicate balance of cell adhesive and contractile activities dictates the manner by which cells migrate (Parsons et al., 2010;Huttenlocher and Horwitz, 2011). Moreover, this balance is likely to be different in different cell types and subject to further regulation by the extracellular environments to which the cells are exposed. Thus, if the primary function of the Wnt5a-Kif26b signaling axis is to modulate the strength of cell adhesion and/or contractility at the trailing edge, it could potentially manifest in opposing cell migratory behaviors depending on the cell types and the culture conditions used in a given study. 
Higher resolution imaging studies capable of directly measuring cell adhesion dynamics and contractile forces will be crucial to understanding how Kif26b functions at the trailing edge of the cell, or at other subcellular locations, to affect cell migration.
Although we have shown in a wound-healing assay that higher Kif26b levels promote the migration of NIH/3T3 cells and that Wnt5a signaling negatively regulates the migratory behavior of these cells via the degradation of Kif26b, it is likely that the signaling dynamics of the Wnt5a-Ror-Kif26b pathway are actually considerably more complex in vivo. During embryogenesis, migration and other morphogenetic processes must occur in a highly choreographed fashion where signals that promote and restrict cell movements are finely tuned in time and space. It is therefore possible that the relatively modest Wnt5a-dependent alteration of Kif26b levels we observed in vitro, as measured across entire cell populations, does not accurately reflect the dynamic regulation of the pathway that occurs during the development of tissues. The function of the Wnt5a-Ror pathway may not be simply to degrade Kif26b constitutively within a cell or tissue, but rather to tune the activity of Kif26b with a high degree of temporospatial resolution to achieve proper tissue morphogenesis. Indeed, cells are known to integrate and amplify shallow spatial gradients of extracellular cues into robust changes in behaviors, such as during axon pathfinding and neutrophil chemotaxis (von Philipsborn and Bastmeyer, 2007).
The in vivo significance of Kif26b in noncanonical Wnt5a-Ror function is demonstrated by our gene perturbation experiments. Disruption of Kif26b expression, whether by mis-expression in zebrafish or by loss-of-function gene ablation in mouse embryos, results in developmental defects that are similar to those observed when Wnt5a or Ror expression is perturbed. These findings, taken together with the observation that loss-of-function cam-1 and vab-8 mutants in C. elegans exhibit similar neuronal migration and polarization phenotypes (Forrester et al., 1998), support the model that Wnt5a, Ror and Kif26b comprise a signaling cassette crucial for the morphogenesis of tissues during development. However, it is important to note that while our in vivo experiments focus on comparing the gross morphological phenotypes of Kif26b, Wnt5a and Ror mutants, they do not directly demonstrate that the similarities in the observed phenotypes are caused by common underlying mechanisms. Therefore, it remains crucial to characterize the molecular and cellular basis of the Wnt5a, Ror and Kif26b mutant phenotypes, which will further establish the functional interactions among these molecules.
It is notable that genetic loss-of-function perturbation of Kif26b expression in mice does not closely mimic perturbation of Wnt5a or Ror expression in all developmental contexts. For instance, Kif26b knockout mice do not exhibit the global tissue truncation and craniofacial malformation phenotypes seen in Wnt5a and Ror1/2 double mutants (Yamaguchi et al., 1999;Uchiyama et al., 2010;Ho et al., 2012). In addition, Kif26b knockout mice fail to elicit sprouting of the ureteric bud, resulting in kidney agenesis or severe kidney hypoplasia (Uchiyama et al., 2010), whereas Wnt5a or Ror knockout mice exhibit duplication of the ureteric bud, resulting in pleiotropic kidney defects (Huang et al., 2014;Nishita et al., 2014;Yun et al., 2014).
There are a number of explanations that might reconcile these seemingly discordant findings in different biological contexts. First, given the highly complex temporal and spatial dynamics that Wnt5a-Ror-Kif26b signaling likely undergoes in vivo, it becomes difficult to predict the ultimate phenotypes of embryos that constitutively lack the expression of downstream signaling components across development. Second, Kif26b may not be required in all tissues that undergo Wnt5a-Ror signaling during their morphogenesis. It is likely that other effector molecules may compensate for the genetic loss of Kif26b in certain developing tissues (Rossi et al., 2015). For instance, the closely related Kif26a gene is broadly expressed during mouse embryogenesis, and the Kif26a protein is also a target of Wnt5a-Ror signaling (M.K.S., E.P.K., H.H.H. unpublished data). Thus, one plausible explanation is that the developmental requirement of Kif26b in tissue extension and craniofacial morphogenesis may be masked by the continued expression of Kif26a in Kif26b knockout mice. By RT-qPCR, we verified that Kif26a mRNA is indeed expressed at appreciable levels in both wild-type and Kif26b knockout E12.5 MEFs, although we did not find evidence that Kif26a transcription is specifically upregulated in Kif26b knockout MEFs (E.P.K. and H.H.H., unpublished data). Kif26a single knockout mice also do not exhibit phenotypes typical of noncanonical Wnt5a-Ror signaling defects, instead manifesting with megacolon from enteric nerve hyperplasia (Zhou et al., 2009). Thus, discovering the full extent of developmental processes regulated by the Kinesin-11 family of proteins, as well as their functional relationship to the Wnt5a-Ror pathway, awaits a loss-of-function analysis of both Kif26a and Kif26b together, which may reveal non-overlapping phenotypes not present in either Kif26a or Kif26b single loss-of-function mutants. 
Lastly, it is also likely that other signaling branches apart from Kif26b exist within the Wnt5a-Ror pathway, and that these branches control distinct cellular processes that do not require the expression of the Kinesin-11 family of proteins. Thus, building a more complete inventory of the pathway components and understanding how these components interact functionally represents an important direction of future investigation.
TMT/MS3 phosphoproteomic screen
Two E12.5 Ror1 f/f ; Ror2 f/f ; CAG-CreER/+ embryos were individually dissected and used to derive primary MEFs. One E12.5 Ror1 +/+ ; Ror2 +/+ ; CAG-CreER/+ embryo was used to derive the control MEFs. The primary passage of cells derived from each embryo was cultured in a 10-cm plate until confluent. The primary passage was then split into two 10-cm plates at six million cells per plate (day 0). 24 hr later (day 1), 4-OHT was added to one plate (0.25 μM final concentration), and the drug vehicle (EtOH) was added to the other plate. On day 2, a media change was performed using fresh media containing 0.25 μM 4-OHT. On day 3, a media change was performed again, but the final 4-OHT concentration was reduced to 0.1 μM. On day 4, a media change was performed without 4-OHT. On day 5, cells were washed once in ice-cold PBS, and cells from each 10-cm plate were scraped into 1 mL of ice-cold lysis buffer (8 M urea, 75 mM NaCl, 50 mM Tris pH 8.2, 1 mM NaF, 1 mM β-glycerophosphate, 1 mM Na3VO4, 10 mM Na4P2O7, 1 mM PMSF, and cOmplete EDTA-free protease inhibitor (04693159001, Roche, Indianapolis, IN)). Cells were homogenized by pipetting up and down using a P-1000 pipettor and then sonicated in a Bioruptor (Diagenode, Denville, NJ; 17 × 30 s ON/OFF cycles). Cell lysates were then centrifuged at 40,000 RPM for 20 min at 4˚C. The clarified high-speed supernatants were collected, snap frozen in liquid nitrogen and stored at −80˚C until the TMT/MS3 analysis was performed. Protein concentrations were determined using BCA reagents (23225, Thermo Fisher Scientific).
To perform the TMT/MS3 screen, tryptic peptides were prepared from whole cell lysates and the peptide mixtures from the six experimental conditions were individually labeled with the TMT reagents, such that reporter ions at m/z of 126, 127, 128, 129, 130 and 131 would be generated in tandem mass spectrometry. Phosphopeptides were enriched by TiO2 chromatography. Liquid chromatography, MS3 tandem mass spectrometry and data analysis were carried out as previously described (Ting et al., 2011; McAlister et al., 2014; Paulo et al., 2015).
Cloning of mouse Kif26b cDNA
A first-strand cDNA library was generated from MEF total RNA using M-MuLV reverse transcriptase (M0253S, New England BioLabs, Ipswich, MA) and the d(T)23VN primer according to the manufacturer's instructions. This cDNA library was then used as template for PCR amplification of the Kif26b open reading frame with the following primers: Forward: gatcggccggcctaccatgaattcggtagccggaaataaag; Reverse: gatcggcgcgccttatcggcgcctggaggtgatgtc. The PCR product was subcloned into a modified pCS2+ vector using the FseI and AscI restriction sites. The entire Kif26b open reading frame was confirmed by sequencing.
Antibodies
Antibodies against Kif26b were generated using a previously described antigen (Uchiyama et al., 2010). A C-terminal fragment of Kif26b was PCR amplified using the following primers and subcloned into a modified pGEX (28-9546-63, GE Healthcare, Pittsburgh, PA) vector to generate a GST fusion protein in E. coli. Forward: gatcggccggcctaccatgcgaaacgtgcaagagcctgagtcc; Reverse: gatcggcgcgccttatcggcgcctggaggtgatgtc. Protein expression was induced in the E. coli strain BL21 (DE3) using IPTG (0.3 mM). To purify the C-terminal Kif26b protein fragment, bacterial pellets were lysed in STE (150 mM NaCl, 1 mM Tris pH 8.0, 1 mM EDTA) supplemented with protease inhibitors (04693159001, Roche) and 0.1 mg/ml lysozyme (L-6876, Sigma-Aldrich, St. Louis, MO) and incubated on ice for 15 min. Just before sonication, DTT was added to a final concentration of 2 mM and prediluted sodium lauroyl sarcosinate (in STE) was added to a final concentration of 10%. Lysates were then sonicated in a Bioruptor (5 × 30 s ON/OFF cycles) and then centrifuged at 60,000 RPM for 30 min at 4˚C. Triton X-100 was added to the supernatant to a final concentration of 3% and incubated with Glutathione Sepharose 4B beads (17075601, GE Healthcare) for affinity purification. Purified C-terminal Kif26b proteins were dialyzed in PBS and used for immunization of rabbits.
Western blotting
Quantitative western blotting was performed using the Odyssey infrared imaging system (Li-Cor Biosciences, Lincoln, NE) according to the manufacturer's instructions. Non-saturated protein bands were quantified using Odyssey software, with the gamma level set at 1. Protein lysates for SDS-PAGE and western blotting were prepared in 1x or 2x LDS sample buffer (NP0008, Thermo Fisher Scientific) supplemented with 2-mercaptoethanol (5% final concentration). If BCA assays were required to quantify the protein lysate concentrations, the lysates were prepared instead in a homemade 1x SDS sample buffer (50 mM Tris pH 6.8, 2% SDS, 10% glycerol) without bromophenol blue or 2-mercaptoethanol. Once the protein concentrations were determined and normalized, the lysates were then mixed with 1/3 volume of 4x SDS sample buffer containing bromophenol blue (0.025%) and 2-mercaptoethanol (20%). Protein lysates used for Kif26b western blotting were not heated, as the Kif26b signal weakens substantially after heating, likely due to heat-induced protein aggregation. All other protein lysates were heated at 90˚C for 5 min before SDS-PAGE and western blotting.
Generation of shRNA targeting Kif26b
The lentiviral vector pLLX3.7 was used to generate recombinant lentiviruses expressing shRNA that target mouse Kif26b. The following sequences were targeted: gtgccttgcaaatctttat and gctcgagatacctcagaat.
Generation of stable NIH/3T3 cell lines
To construct the GFP-Kif26b expression plasmid, the eGFP open reading frame was first subcloned into pENTR-2B (11816-014, Thermo Fisher Scientific), and the full-length mouse Kif26b open reading frame was subcloned in frame to the C-terminus of GFP. The resulting construct was verified by sequencing and then recombined with the pEF5-FRT-V5 vector (V602020, Thermo Fisher Scientific) using LR clonase II (11791100, Thermo Fisher Scientific) to create pEF5-GFP-Kif26b-FRT. The pEF5-GFP-Kif26b-FRT plasmid was used to generate stable isogenic cell lines using the Flp-In system and Flp-In NIH/3T3 cell line (Thermo Fisher Scientific). DNA transfection was performed in 10-cm plates with Genjet In Vitro Transfection Reagent (SL100488; SignaGen Laboratories, Rockville, MD). Cells that stably integrated the Flp-In constructs were selected using 200 μg/ml hygromycin B and expanded. A more detailed protocol is described at Bio-Protocol (Karuna et al., 2018).
Lentivirus-mediated protein overexpression
Recombinant lentiviruses were generated using the pLEX_307 (for Dvl1 and all Fzd constructs) or pLVX-EF1a-mCherry-N1 (for Shisa2) vectors. Both vector systems use the EF1 promoter for driving transgene expression. pLEX_307 was a gift from David Root (Plasmid 41392, Addgene, Cambridge, MA) and pLVX-EF1a-mCherry-N1 was purchased (631986, Clontech Laboratories, Mountain View, CA). The human Dvl1 open-reading frame was cloned by PCR from a HeLa cell cDNA pool. The mouse Shisa2 open reading frame was PCR amplified from a Shisa2-containing plasmid (a gift from Xi He). The Fzd open reading frames were PCR amplified from the following Addgene plasmids: 42253, 42259, 42255, 42256, 42267, 42258, 42270 and 42261 (gifts from Chris Garcia and Jeremy Nathans). The open-reading frames in all lentiviral constructs were verified by sequencing. Lentiviruses were packaged and produced in HEK293T cells by co-transfection of the lentiviral vectors with the following packaging plasmids: pRSV-REV, pMD-2-G and pMD-Lg1-pRRE (gifts from Thomas Vierbuchen). 3 ml or 0.3 ml of the viral supernatants was used to infect WRK reporter cells seeded at 50% confluency in six-well plates. Puromycin selection (0.002 mg/ml) was carried out for 3 days. Cells from the viral titer that killed a large proportion of cells (60-90%) were expanded and used for FACS; this ensured that the multiplicity of infection (MOI) was ~1 for all cell lines used in the experiments.
Following lentivirus infection and puromycin selection, NIH/3T3 cells were passaged for 3 days to allow time for mutagenesis to occur. Individual cell clones were picked from cell populations targeted with each of these sgRNAs, expanded and then screened initially by western blotting. Clones that appeared to lack Kif26b expression were sequenced to confirm genome modification.
Cell proliferation and survival assays
For quantifications of cell proliferation, NIH/3T3 cell lines were plated on glass coverslips 24 hr prior to fixation. Cells were fixed with 4% paraformaldehyde in Cytoskeleton Buffer with sucrose (10 mM MES pH 6.1, 138 mM KCl, 3 mM MgCl2, 2 mM EGTA, 0.32 M sucrose) for 20 min at room temperature, permeabilized in TBS-0.5% Triton X-100 for 10 min, then rinsed 3x with TBS-0.1% Triton X-100. Cells were blocked in Antibody Diluting Solution (AbDil) (TBS-0.1% Triton X-100, 2% BSA, 0.1% sodium azide) for 30 min at room temperature, then incubated overnight at 4˚C with 1:500 of rabbit anti-phospho-Histone H3 (Ser10, Mitosis Marker) (#3377, Cell Signaling Technology) diluted in AbDil, or a no primary antibody control. After five washes in TBS-0.1% Triton X-100, Alexa dye-conjugated secondary antibodies were added at 1:1000 in AbDil for 45 min at room temperature. After five washes in TBS-0.1% Triton X-100, coverslips were mounted in DAPI Fluoromount-G (0100-20, SouthernBiotech, Birmingham, AL). Images of cells were acquired using a 10x objective at equal exposure, and then analyzed for the presence of nuclear staining of the Mitosis Marker per DAPI-positive nuclei counted.
For quantifications of cell survival, NIH/3T3 cells lines were plated similarly as for the cell proliferation assays. TUNEL staining was performed according to the manufacturer's instructions (In Situ Cell Death Detection Kit, TMR red, 12156792910, Roche), including a DNase-positive control (M0303S, New England BioLabs). Images of cells were acquired using a 10x objective at equal exposure, and then analyzed for the presence of nuclear TUNEL staining per DAPI-positive nuclei counted.
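Both assays reduce to the same per-image readout: the fraction of DAPI-positive nuclei that also score positive for the marker (phospho-Histone H3 for proliferation, TUNEL for cell death). A minimal sketch of that calculation; the counts are illustrative placeholders, not data from the study:

```python
# Hedged sketch of the per-image quantification described above: percentage of
# DAPI-positive nuclei that are also marker-positive.
def marker_index(n_marker_positive, n_dapi_nuclei):
    """Return the percentage of counted nuclei positive for the marker."""
    if n_dapi_nuclei == 0:
        raise ValueError("no nuclei counted")
    return 100.0 * n_marker_positive / n_dapi_nuclei

# Example (illustrative): 12 phospho-H3-positive nuclei out of 400 DAPI-positive nuclei.
print(f"mitotic index: {marker_index(12, 400):.1f}%")  # 3.0%
```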
Kinetic wound-healing cell migration assay
Cells were plated on 96-well plates (Essen ImageLock, 4379, Essen Instruments, Ann Arbor, MI). Wnt-C59 (100 nM final concentration) was added to cells 24 hr prior to wound creation with a wound scratcher (Essen Instruments). For Wnt5a treatment, recombinant Wnt5a (100 ng/ml final concentration) was added immediately after creation of wounds. Wound confluence was monitored with Incucyte Live-Cell Imaging System and software (Essen Instruments). Wound closure was observed every 1-2 hr for 48-96 hr by comparing the mean relative wound density of at least four biological replicates in each experiment.
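The readout described above, mean relative wound density (RWD) across biological replicates at each imaging time point, can be sketched as follows; the RWD values are illustrative placeholders, not Incucyte data from the study:

```python
import statistics

# Mean relative wound density (RWD, %) per time point across biological
# replicates, as in the wound-healing readout described above. Values are
# illustrative placeholders.
rwd_by_time = {
    0:  [0.0, 0.0, 0.0, 0.0],
    24: [35.2, 38.9, 33.1, 36.4],
    48: [71.8, 75.0, 69.3, 73.5],
}

for t_hr, replicates in sorted(rwd_by_time.items()):
    assert len(replicates) >= 4  # at least four biological replicates per experiment
    print(f"{t_hr:>3} hr: mean RWD = {statistics.mean(replicates):.1f}%")
```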
Immunocytochemistry
NIH/3T3 cells were plated at low density and grown on glass coverslips for 24 hr. Cells were rinsed 1x with PBS, then fixed with either ice-cold methanol for 3 min or with 4% paraformaldehyde in Cytoskeleton Buffer with sucrose for 20 min at room temperature. Cells were then rinsed 3x in PBS, permeabilized in TBS-0.5% Triton X-100 for 10 min, then rinsed 3x with TBS-0.1% Triton X-100. Cells were blocked in AbDil for 30 min at room temperature. Primary antibodies were added at the following dilutions in AbDil overnight at 4˚C: chicken anti-GFP at 1:1000, mouse anti-α-tubulin at 1:5000, anti-Myosin IIb at 1:200, and anti-GM130 at 1:50. After five washes in TBS-0.1% Triton X-100, Alexa dye-conjugated secondary antibodies were added at 1:1000 in AbDil for 45 min at room temperature. After five washes in TBS-0.1% Triton X-100, coverslips were mounted in DAPI Fluoromount-G.
Reverse transcription and qPCR
Total RNA was isolated from MEFs using the PureLink RNA Mini Kit (121830108A, Thermo Fisher Scientific) according to the manufacturer's instructions. Isolated RNA was treated with DNase I (recombinant, RNase-free; 4716728001, Roche) and a cDNA library was synthesized using the High Capacity cDNA Reverse Transcription Kit (4368814, Thermo Fisher Scientific). The cDNA was the source of input for qPCR, using 7900 HT FAST and SYBR Green reagents (4329001, Thermo Fisher Scientific). The following qPCR primer pairs were used: mKif26b forward, CAAGTACGAGTGGCTGATGAA; mKif26b reverse, GGACCTGCTCCAAGTCAAAT; β-actin forward, GCTTCTAGGCGGACTGTTACTGA; β-actin reverse, GCGCAAGTTAGGTTTTGTCAAA.
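The text lists the primer pairs but does not spell out the quantification step; a common choice for SYBR Green data of this kind is the comparative Ct (ΔΔCt) method with β-actin as the reference gene. A sketch with illustrative, hypothetical Ct values (not data from the study):

```python
# Hedged sketch of relative qPCR quantification by the delta-delta-Ct method,
# normalizing the target gene (e.g. Kif26b) to a reference gene (e.g. beta-actin).
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of target in a sample vs. a control sample, reference-normalized."""
    delta_ct_sample = ct_target - ct_reference
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(delta_ct_sample - delta_ct_control)

# Illustrative example: target Ct 24.0 vs reference Ct 18.0 in the sample;
# 25.0 vs 18.0 in the control sample.
print(f"fold change: {relative_expression(24.0, 18.0, 25.0, 18.0):.2f}")  # 2.00
```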
Flow cytometry
NIH/3T3 cells were plated at a density of 0.09 million/well in a poly-D-lysine-coated 48-well plate. 24 hr after plating, the cells were incubated with 10 nM Wnt-C59 and allowed to reach confluency. 48 hr after plating, cells were stimulated with either Wnt proteins or an equivalent volume of the control buffer (PBS with 0.1% BSA and 0.5% (w/v) CHAPS) in the presence of 10 nM Wnt-C59 for 6 hr. Cells were then harvested, resuspended in PBS + 0.5% FBS and analyzed using a flow cytometer (FACScan with a 488 nm laser, Becton Dickinson, San Jose, CA). Raw data were acquired with CellQuest (Becton Dickinson) and processed in FlowJo X (FlowJo, Ashland, OR). Processing entailed gating out dead cells, calculation of median fluorescence, percent change of medians, and overlay of histograms. Dose-response curves based on percent change were fitted in MATLAB with the doseResponse function (written by Richie Smith and publicly available on MATLAB File Exchange, File ID # 33604). A more detailed protocol is described at Bio-Protocol (Karuna et al., 2018).
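The MATLAB doseResponse fit used above amounts to fitting a sigmoidal (Hill) curve to the percent-change data. A dependency-free Python sketch of the same idea, using a coarse grid search with the asymptotes pinned to the data range; the doses and responses below are synthetic placeholders, not study data:

```python
# Hedged sketch of dose-response fitting (scipy.optimize.curve_fit would be the
# usual tool; a stdlib-only grid search is shown). All numbers are illustrative.
doses = [12.5, 25, 50, 100, 200, 400]           # hypothetical Wnt5a doses (ng/ml)
responses = [4.0, 9.0, 21.0, 38.0, 49.0, 54.0]  # hypothetical % change of median fluorescence

bottom, top = min(responses), max(responses)    # fix asymptotes at the data range

def hill(dose, ec50, n):
    """Hill curve with bottom/top fixed from the data."""
    return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** n)

def sse(ec50, n):
    """Sum of squared residuals for a candidate (EC50, Hill slope) pair."""
    return sum((hill(d, ec50, n) - r) ** 2 for d, r in zip(doses, responses))

ec50, n = min(((e, s) for e in range(20, 301, 5)
               for s in (0.5, 1.0, 1.5, 2.0, 2.5)), key=lambda p: sse(*p))
print(f"best-fit EC50 ~ {ec50} ng/ml, Hill slope {n}")
```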
Live imaging
0.05 million cells were plated in a collagen-coated 35-mm glass bottom plate (P35GCOL-0-10-C, MatTek Corp, Ashland, MA). After adhering to the plate, cells were incubated in culture media supplemented with 25 mM Hepes and 10 nM Wnt-C59 for 24 hr. Cells were stimulated with 200 ng/ml recombinant Wnt5a in the presence of 25 mM Hepes and 10 nM Wnt-C59. Cells were imaged every 10 min for 16 hr, with 500 ms exposure at 40x magnification.
Zebrafish
Wild-type NHGRI-1 fish were bred and maintained using standard procedures (LaFave et al., 2014). Embryos were obtained by natural spawning and staged as described (Kimmel et al., 1995). All zebrafish work was approved by the Institutional Animal Care and Use Committee, Office of Animal Welfare Assurance, University of California, Davis.
In vitro transcribed capped RNAs were prepared using the mMessage mMachine RNA Synthesis Kit (AM1340, Thermo Fisher Scientific) and purified using the RNeasy Mini Kit (74104, Qiagen, Germantown, MD) following manufacturers' instructions.
PGC analysis
For PGC analysis, E11.5 embryos were dissected from timed matings. E0.5 is defined as noon of the day when the vaginal plug is detected. To expose the gonadal ridges, the abdominal cavity was opened and the visceral organs removed. The embryos were then cut just anterior to the forelimbs and just posterior to the hindlimbs. The midsection containing the gonadal ridges was washed once in cold (−20˚C) methanol:DMSO (4:1), and then stored in the same fixative solution at −20˚C until analysis. Genotypes were determined by PCR.
For whole-mount immunofluorescence, fixed embryos were rehydrated and rocked at 4˚C overnight in PBSMT (PBS with 2% nonfat dry milk and 0.5% Triton X-100) with antibodies to SSEA1 (mouse IgM, MC-480, Developmental Studies Hybridoma Bank, Iowa City, IA, 1:200). Three PBSMT washes were followed by overnight incubation with secondary antibodies (1:200) and Hoechst (1:1000) in PBSMT. Embryos were then washed three times in PBS, dehydrated in a series of 5 min washes in 50%, 70%, 95%, and two times in 100% ethanol while rocking in the dark, and cleared with methyl salicylate for imaging.
Confocal imaging was carried out at room temperature with a 10x dry objective on a Leica SP5 TCS microscope equipped with 405, 488, 543, 594, and 633 nm lasers. Use of the 10x objective typically required the addition of a 1.5x digital zoom for optimal visualization of PGCs for quantification. Files of 1024 × 1024 pixel images with 2-3 µm spacing between z-slices were captured by a scanner with maximal frame resolution and Leica acquisition software. PGCs were counted in Imaris imaging software (Bitplane, Belfast, UK) using the Spots module. Spots of 7 µm size in the SSEA1 channel were identified by the software and visually inspected to confirm accuracy. All measurements were exported to Excel (Microsoft, Redmond, WA) for calculations and statistical analyses.
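The Spots-based counting step can be approximated outside Imaris. A minimal open-source sketch with scipy, using a synthetic image and an arbitrary threshold and size cutoff (all values here are illustrative, not the protocol's settings):

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for a projection of the SSEA1 channel:
# a zero background plus three bright square "PGC" blobs.
img = np.zeros((64, 64))
for y, x in [(10, 10), (30, 40), (50, 20)]:
    img[y - 2:y + 3, x - 2:x + 3] = 1.0

mask = img > 0.5                       # intensity threshold
labels, n_spots = ndimage.label(mask)  # connected components = candidate spots
areas = ndimage.sum(mask, labels, range(1, n_spots + 1))
counted = int(np.sum(areas >= 9))      # discard spots below a minimum area
print(counted)  # prints 3
```

Real data would use a 3-D stack and a physical spot diameter rather than a pixel-area cutoff; the visual-inspection step in the protocol has no code analogue.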
Antiviral Activities of Silymarin and Derivatives
Silymarin flavonolignans are well-known agents that typically possess antioxidative, anti-inflammatory, and hepatoprotective functions. Recent studies have also documented the antiviral activities of silymarin and its derivatives against several viruses, including the flaviviruses (hepatitis C virus and dengue virus), togaviruses (Chikungunya virus and Mayaro virus), influenza virus, human immunodeficiency virus, and hepatitis B virus. This review will describe some of the latest preclinical and clinical studies detailing the antiviral profiles of silymarin and its derivatives, and discuss their relevance for antiviral drug development.
Silymarin, Its Components, and Derivatives
Silymarin, an extract from the seed of the milk thistle plant (Silybum marianum [S. marianum]), is widely known for its hepatoprotective functions, mainly due to its anti-oxidative, anti-inflammatory, and immunomodulatory effects [1]. The primary bioactive components of the extract consist of several flavonolignans (silybin, silychristin, silydianin, isosilybin, and dehydrosilybin) and a few flavonoids, mainly taxifolin [2]. The 1:1 mixture of silybin A and silybin B is also known as silibinin (C25H22O10, PubChem CID: 31553; Figure 1), which makes up the major active ingredient (roughly 50%) of silymarin [2,3]. Although silymarin is known mostly for its hepatoprotective functions, accumulating evidence now suggests that the extract possesses potent antiviral activities against numerous viruses, particularly hepatitis C virus (HCV). Consequently, silymarin is the most commonly consumed herbal product among HCV-infected patients in western countries [4]. Despite its potent medicinal effects, silymarin suffers from poor solubility, which limits its bioavailability in vivo. To address this issue, the chemically hydrophilized silibinin Legalon ® SIL (C66H56Na4O32, PubChem CID: 76956344) was developed by the pharmaceutical company Rottapharm Madaus (Monza, Italy) for administration by intravenous infusion, and the drug was granted orphan medicinal product designation (EU/3/10/828) by the European Medicines Agency (EMA) in 2010 for the prevention of recurrent hepatitis C in liver transplant recipients [5]. To date, silymarin and its derivatives have been examined for potential bioactivities against several viruses, and various strategies to address their drug delivery challenges have also been explored.
This review examines the current literature concerning the antiviral effects of silymarin and silymarin-derived compounds used in preclinical and clinical studies, the challenges to clinical application, as well as their prospects as clinically applicable antiviral agents.
Antiviral Activity of Silymarin and Its Derivatives In Vitro, In Silico, and In Vivo
Viral infections represent an important public health concern and socioeconomic burden globally. Presently, numerous viral infectious diseases lack effective vaccines and/or specific antiviral treatments. The increased significance of viruses as human pathogens, and the rising epidemic outbreaks worldwide due to increased population density and migration/travel, underscore the need to continuously identify antiviral strategies against these infectious agents. Silymarin and its derivatives have been reported to possess potent antiviral activities against a number of viruses by targeting multiple steps of the viral life cycle. We describe below the antiviral activities of silymarin and its derivatives against different important human viruses in preclinical studies. Results of the in vitro or in silico studies and of the in vivo studies are summarized in Tables 1 and 2, respectively.
The Flaviviridae Family
Flaviviruses are (+)ssRNA viruses that include important human pathogens such as hepatitis C virus (HCV) and dengue virus (DENV). HCV is known to cause chronic infection (hepatitis C) that can lead to end-stage liver diseases such as cirrhosis and hepatocellular carcinoma (HCC) [6]. DENV, on the other hand, is the etiologic agent of dengue fever (DF) and the more severe dengue hemorrhagic fever (DHF) and dengue shock syndrome (DSS), severe illnesses that can lead to death in young children [7]. Currently, there are no effective vaccines against these viruses. Although treatment of HCV infection has improved remarkably with the advent of direct-acting antivirals (DAAs), important issues such as cost, selection of drug-resistant mutants, and challenges in difficult-to-treat populations have limited the widespread use of these drugs. DENV infection, on the other hand, has no available antiviral treatment. These circumstances necessitate the search for novel/alternative forms of therapy to complement the existing treatment options.
Hepatitis C Virus
The effect of silymarin on HCV has been extensively studied and the antiviral activity of the drug against HCV in vitro is well documented. Using a standardized silymarin extract (MK-001), Polyak et al. demonstrated that MK-001 not only inhibited the genotype 2a HCV strain JFH-1 infection in both the pretreatment and post-infection analysis, but also blocked TNF-α and NFκB transcriptional activity in peripheral blood mononuclear cells (PBMCs) and hepatoma Huh-7 cells, respectively, suggesting that the extract possesses both antiviral and anti-inflammatory bioactivities [8]. Further mechanistic studies demonstrated that although MK-001 treatment alone only modestly affected the interferon (IFN) JAK-STAT pathway, the combination of MK-001 with IFN-α augmented the antiviral efficacy of exogenously added IFN, leading to the conclusion that the antiviral effect of MK-001 is mediated by potentiating the JAK-STAT antiviral signaling pathway which, in turn, inhibits HCV replication.
Following the above discovery, the same authors in two independent studies demonstrated that silymarin treatment blocked different steps of the HCV (JFH-1) life cycle, including entry/fusion, replication, and virion production in the host cells [9,10]. Furthermore, silymarin and its derived pure compounds exhibited potent hepatoprotective functions by inhibiting the HCV-induced oxidative stress, NFκB-dependent transcription, and T-cell receptor (TCR)-mediated proliferation [9]. Interestingly, both studies observed an impact of silymarin on HCV NS5B RNA-dependent RNA polymerase (RdRp) activity, albeit at concentrations higher than that required for its anti-HCV effect. Consistent with the inhibition of the RdRp activity, Belkacem et al. employing silybin A, silybin B, and Legalon ® SIL demonstrated that all three forms of silymarin components inhibited HCV replication by targeting the HCV RdRp activity of NS5B, with an IC50 value ranging between 75-100 µM [11].
In contrast, Blaising et al. in 2013 showed that the major bioactive component silibinin exerts its antiviral effect against HCV by blocking clathrin-mediated endocytosis [12]. Another study in 2013 indicated that silibinin impedes HCV infection by targeting the HCV NS4B protein [13], which is known to mediate the membranous web formation where HCV RNA replication occurs [14], thereby affecting the morphogenesis of the viral replication sites. More recently, we developed silibinin nanoparticles (SB-NPs) and showed that both the SB-NPs and conventional silibinin inhibited HCV infection by blocking viral cell-to-cell spread [15]. However, in our hands, the drug had minimal impact on other steps of the viral life cycle (entry, replication, and virion production) or on modulating the type I IFN response.
In 2016, DebRoy et al. investigated the antiviral effect of intravenous Legalon ® SIL monotherapy in uPA-SCID chimeric mice with humanized livers [16]. Mice chronically infected with HCV were treated with different intravenous doses of Legalon ® SIL (469, 265, or 61.5 mg/kg) for 14 days before serum HCV, human albumin, and liver HCV RNA levels were analyzed. The results demonstrated that Legalon ® SIL monotherapy led to a biphasic serum viral decline without affecting human albumin levels, suggesting that the antiviral effect observed was not due to a decline in the human hepatocytes. Furthermore, administration of Legalon ® SIL induced anti-inflammatory and anti-proliferative gene expression, demonstrated by a decrease in TNF-α- and NFκB-associated transcriptional activation. Interestingly, microarray analysis showed that Legalon ® SIL treatment inhibited the expression of genes such as interleukin 8 (IL-8), nicotinamide N-methyltransferase (NNMT), and osteopontin/secreted phosphoprotein 1 (SPP1), which are known to facilitate HCV replication, consistent with the inhibition of HCV production. These results suggest that Legalon ® SIL can efficiently inhibit HCV production in mice in the absence of adaptive immunity in vivo.
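A biphasic serum viral decline of the kind reported above is conventionally described as a sum of two exponentials: a rapid first phase followed by a slower second phase. A sketch with invented parameters, purely to illustrate the shape of such a decline (none of these numbers are fitted to the study's data):

```python
import numpy as np

def biphasic_decline(t, v0, a, lam1, lam2):
    """V(t) = V0 * (A*exp(-lam1*t) + (1-A)*exp(-lam2*t)), t in days."""
    return v0 * (a * np.exp(-lam1 * t) + (1.0 - a) * np.exp(-lam2 * t))

# Invented parameters: baseline 1e6 IU/mL, 99% of virus cleared in the
# fast phase (rate 3/day), the remainder declining slowly (rate 0.15/day).
v0, a, lam1, lam2 = 1e6, 0.99, 3.0, 0.15
t = np.array([0.0, 1.0, 7.0, 14.0])
log_drop = np.log10(v0) - np.log10(biphasic_decline(t, v0, a, lam1, lam2))
print(np.round(log_drop, 2))  # roughly 0, 1.2, 2.5, 2.9 log10 IU/mL
```

The steep early drop followed by a shallower slope is the "biphasic" signature; fitting such a model to serial viral loads is how kinetic studies attribute treatment effects to blocking infection versus blocking production/release.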
Together, the results discussed above support the robust anti-HCV activity of silymarin and its derivatives, although the underlying anti-HCV mechanism varies substantially from study to study. Possible explanations include discrepancy in the source or type of drug used (e.g., silymarin component, purity, and extraction method), variation in experimental design such as concentration and treatment protocol, and the specific model systems employed. This is supported by Wagoner et al.'s work demonstrating that the oral and intravenous (i.e., Legalon ® SIL) formulations of silibinin exert different effects on HCV life cycle, inflammation, and antiviral signaling in vitro [17]. Thus, concerted efforts should be made to delineate the main mechanism of action of this promising drug. Nonetheless, given that silymarin and its constituents are bioactive against several cell physiological processes, it is likely that silymarin and its major active components may exhibit impact on multiple steps of the virus life cycle, either directly or indirectly.
Dengue Virus
Using an in silico drug development approach against the dengue nonstructural protein 4B (NS4B), Qaddir et al. docked 2750 phytochemicals from different medicinal plants to the DENV NS4B protein and identified nine phytochemicals with potential inhibitory effects against NS4B, including silibinin and other compounds from the plant S. marianum [18]. This result suggests that silibinin could potentially inhibit DENV replication. Given that molecular docking analysis is only predictive, further in vitro and in vivo studies would be necessary to confirm the antiviral effect.
Influenza A Virus
Influenza A virus (IAV) is a highly contagious virus and a leading cause of mortality and morbidity globally. Influenza epidemics and pandemics pose a serious threat to both human and animal populations. Although effective vaccines are available against IAV, these vaccines must be regularly updated due to the ability of the virus to undergo frequent antigenic drift and occasional antigenic shift in its envelope glycoproteins. Moreover, only a few IAV antiviral therapeutics have been clinically approved and are currently in use, including neuraminidase inhibitors [19] and the more recent inhibitor of cap-dependent endonuclease [20]. Therefore, the continuous identification of novel therapeutic strategies to expand or complement the existing options against this important pathogen is highly desirable. Gazák et al. developed silibinin derivatives conjugated with long-chain fatty acids and demonstrated their superior anti-influenza virus activity compared to conventional silibinin in a plaque reduction assay [21]. Later, using a cytopathic effect (CPE) reduction method, Song et al. explored the antiviral activity of silymarin against IAV [22]. The authors demonstrated that silymarin dose-dependently inhibited IAV replication without significant cytotoxicity. Further examination revealed that the silymarin-mediated inhibition of influenza replication occurred through inhibition of late viral mRNA synthesis. However, whether silymarin could modulate other phases of the influenza life cycle was not investigated. The other study that examined the anti-influenza activity of silymarin was that of Dai et al. [23]. Given the importance of autophagy in promoting influenza replication, these authors designed a bimolecular fluorescence complementation-fluorescence resonance energy transfer (BiFC-FRET) assay to analyze the anti-influenza activity of 89 medicinal plants and discovered that S. marianum L. possessed excellent activity in the assay.
In order to improve the anti-influenza activity, the authors synthesized five silybin amino acid derivatives (S0-S5) and demonstrated using the sulforhodamine B (SRB) antiviral assay that S3 is the most effective against IAV. Further analysis with plaque inhibition assays revealed that, in addition to inhibiting autophagy elongation complex formation, the S0 and S3 derivatives robustly inhibited IAV replication as well as several physiological processes induced by influenza replication, such as oxidative stress and the activation of the extracellular signal-regulated kinase (ERK)/p38 mitogen-activated protein kinase (MAPK) and I kappa B (IκB) kinase (IKK) pathways. In agreement with the report above, the authors demonstrated through a time-course analysis that S0 and S3 mainly inhibited IAV replication without exerting any significant effect on viral adsorption. Therefore, it appears that the antiviral activity of silymarin and its derivatives against IAV is chiefly mediated by the inhibition of viral replication. Finally, in vivo oral administration of S0 and S3 not only increased the survival rate of mice infected with a lethal dose of IAV, but also decreased the viral titers in their lungs [23]. Interestingly, this finding is compatible with the accumulation of free silibinin in the lung after oral administration that we and others have observed [15,24]. These results, together, point to a prominent role of silymarin as a potent IAV inhibitor.
Human Immunodeficiency Virus
The human immunodeficiency virus (HIV) is a lentivirus that causes acquired immunodeficiency syndrome. HIV is estimated to currently infect over 38 million people, the majority of whom live in sub-Saharan Africa. The advent of highly active antiretroviral therapy has monumentally improved the survival of the HIV-infected population by substantially decreasing the viral load while preserving the CD4 count. Despite this advancement, the treatment is life-long and associated with side effects. Currently, there is no effective vaccine against this deadly human pathogen. Therefore, identifying novel therapeutics to complement the existing ones would be expected to further improve the management of HIV-infected patients. Interestingly, about 30% of HIV-infected patients in Europe and North America are co-infected with HCV [25]. With the aim of identifying drugs that could simultaneously target both HCV and HIV, McClure et al. explored the anti-HIV activity of Legalon ® SIL [26]. The authors showed that Legalon ® SIL inhibited HIV replication in the HeLa cell line TZM-bl, PBMCs, and the human T lymphoblastic leukemia cell line CEM in vitro. Mechanistic studies revealed that Legalon ® SIL blocked HIV replication by attenuating cellular functions implicated in T-cell activation and proliferation, resulting in fewer CD4+ T cells expressing the HIV co-receptors CXCR4 and CCR5. In a separate study, the authors further characterized the role of Legalon ® SIL in HIV infection and demonstrated that Legalon ® SIL treatment at the time of virus adsorption in PBMCs and CEM cells blocked HIV infection [27]. Intriguingly, the authors showed that, in contrast to their previous report, silibinin's perturbation of T-cell metabolism is not involved in its ability to block HIV entry. Thus, it appears that silibinin can simultaneously block HIV entry and T-cell activation.
Taken together, these results provide evidence for a robust anti-HIV role of silibinin, which therefore merits further evaluation for potential development as an anti-HIV agent.
The Togaviridae Family
Togaviruses are arthropod-borne (+)ssRNA viruses that contribute to various human diseases. Presently, there are no effective treatments or preventive vaccines against important togaviruses, including Chikungunya virus (CHIKV) and Mayaro virus (MAYV). Both CHIKV and MAYV belong to the alphavirus genus and are implicated in a variety of human illnesses such as encephalitis, arthralgia, fever, and rash. Therefore, exploring candidate agents capable of inhibiting these togaviruses may provide potential treatment options.
In 2015, Lani et al. investigated the antiviral effect of several flavonoids including silymarin against CHIKV using a CPE reduction assay and RT-PCR analysis [28]. The authors discovered that silymarin robustly inhibited CHIKV-induced CPE. Mechanistic studies further demonstrated that silymarin inhibited CHIKV infection by targeting the post-entry steps of the viral life cycle.
In a similar study, Camini et al. explored the antiviral activity of silymarin against the related togavirus MAYV [29]. Using an analogous approach, the CPE reduction assay, the authors demonstrated that silymarin at non-cytotoxic concentrations inhibited MAYV replication. Further experiments demonstrated that silymarin pretreatment inhibited MAYV-induced oxidative stress. However, whether the inhibition of oxidative stress is due, in part, to the inhibition of viral replication, or whether inhibition of reactive oxygen species (ROS) itself is sufficient to impede MAYV replication, was not addressed. Nonetheless, the findings above suggest that silymarin mainly inhibits alphaviruses by hindering viral replication.
Hepatitis B Virus
Hepatitis B virus (HBV) is an important human liver pathogen belonging to the Hepadnaviridae family. The virus is estimated to chronically infect 240 million people worldwide, killing approximately 1 million people every year through HBV-associated end-stage liver diseases such as cirrhosis and HCC [30]. Although effective vaccines against the virus have existed for the past few decades, the current treatment strategies can only control and suppress the HBV viral load but cannot cure the infection. Thus, the continuous identification of new treatment strategies against this liver pathogen is still needed. Recently, Umetsu et al. demonstrated that, similar to HCV, silibinin inhibited HBV entry into the permissive HepG2-NTCP-C4 and PXB cells by blocking clathrin-mediated endocytosis without affecting HBV-receptor interaction, replication, or release [31]. More importantly, the combination of silibinin and entecavir, a known nucleoside reverse transcriptase inhibitor, reduced HBV DNA in the culture supernatant more than either mono-treatment alone in HepG2-NTCP-C4 cells with established HBV infection, highlighting the anti-HBV potential of silibinin.
In 2008, using a different approach, Wu et al. tested the effect of silymarin on HBV X protein (HBx) transgenic mice and demonstrated that the natural product possesses therapeutic effects at the early stages of HCC development when given orally to 4-6-week-old transgenic mice [32]. Specifically, oral administration of silymarin dose-dependently reversed fatty liver changes and restored normal liver histopathology in these animals. Further analysis revealed that administration of silymarin to precancerous HBx transgenic mice prevented the development of HCC. In contrast, silymarin treatment could not block the progression of established cancer in mice and had no significant effect on HBx gene expression. The fact that silymarin did not modulate HBx gene expression could imply that the drug does not affect HBV replication, which is consistent with the in vitro study above. Thus, it appears that silymarin blocks HBV infectivity by interfering with early viral entry.
In summary, the in vitro and in silico studies described above identify silymarin and its derivatives as attractive antiviral candidates against multiple viruses. The extract and its molecular components appear to inhibit viral infection by targeting several steps of the viral life cycle, either directly or indirectly, thereby highlighting the robust antiviral activities of silymarin and its derivatives.
Antiviral Activity of Silymarin and Its Derivatives in Clinical Trials
To date, clinical studies of silymarin, its components, and their derivatives are mostly limited to HCV-related infections due to their pronounced effects in preclinical studies. Here we review the antiviral effects of silymarin-associated drugs in chronic hepatitis C, liver transplantation, and the difficult-to-treat HIV/HCV-coinfected patients.
Chronic Hepatitis C
Several studies have evaluated the effect of silymarin or its component silibinin in patients with chronic hepatitis C. Oral administration of silymarin capsules, at doses ranging from 140 mg 3 times per day (for 1 year) to 700 mg 3 times per day (for 24 weeks), failed to decrease HCV viral load in three previous studies conducted in Egypt [33], Israel [34], and the United States [35]. Interestingly, however, Malaguarnera et al. demonstrated in two randomized controlled trials (RCTs) that patients who received silybin-vitamin E-phospholipid complex pills for 12 months as a supplement to pegylated interferon (Peg-IFN) + ribavirin (RBV) treatment achieved a lower viral load than those who received Peg-IFN+RBV treatment alone [36,37]. The silybin-vitamin E-phospholipid complex was reported to improve silybin's solubility and bioavailability [38]; therefore, the difference between these results may underline the importance of improving silibinin's or silybin's solubility to enhance its effect in clinical use. Consistent with the above results, intravenous infusion of Legalon ® SIL appeared to produce a better anti-HCV effect in clinical studies. Ferenci et al. demonstrated in a before-after study that 14 consecutive days of Legalon ® SIL infusion combined with 7 days of Peg-IFN+RBV treatment dose-dependently and continuously decreased HCV viral load in chronic hepatitis C patients who were previously non-responders to Peg-IFN+RBV therapy [39]. After the infusion schedule, antiviral treatment with Peg-IFN+RBV and oral silymarin continued, and HCV RNA became undetectable in several patients of the 15 or 20 mg/kg/day groups. A later study analyzed the pattern of viral load decline over time with intravenous Legalon ® SIL monotherapy and suggested that Legalon ® SIL may block both viral infection and viral production or release, with its dose-dependent effect mainly associated with blocking HCV production/release [40]. Following Ferenci's study, Biermer et al.
also reported successful suppression of HCV viremia to undetectable levels in a Peg-IFN+RBV non-responder using a combination of RBV, Legalon ® SIL (20 mg/kg/day), and Peg-IFN with a modified administration protocol [41]. The two groups also demonstrated over the next three years that additional Legalon ® SIL infusion, at 20 mg/kg/day for 14 or 21 days [42] or 1400 mg/day for 2 days [43], could induce undetectable viral load in over half of the on-treatment non-responders to Peg-IFN+RBV therapy. Later, in 2015, Dahari et al. published a case report showing that a patient who had severe adverse effects from an IFN-containing regimen achieved SVR (sustained virological response) after 33 weeks of Legalon ® SIL infusion plus RBV and vitamin D therapy [44]. These results highlight the potential of using Legalon ® SIL in combination with either IFN-based or IFN-free treatments to treat patients who are non-responsive or intolerant to IFN-containing treatments.
Liver Transplantation in Hepatitis C
HCV-associated liver cirrhosis and HCC are common indications for liver transplantation. The clearance of viremia pre-transplant or post-transplant is critical to prevent graft failure. However, with the introduction of DAAs for HCV, the need for transplantation has declined, and the management of liver transplant candidates and recipients has been revolutionized [45]. The clinical trials reviewed here were conducted in the "pre-DAA era," when the IFN-based regimen remained the standard of care and failure to achieve SVR with such a regimen was a huge challenge for successful liver transplantation.
In 2010, Neumann et al. reported the first successful prevention of HCV reinfection, with SVR 24 (sustained virological response 24 weeks after treatment), after liver transplantation by the post-transplant administration of intravenous Legalon ® SIL monotherapy for 14 days in a genotype 3a patient who was non-responsive to Peg-IFN+RBV therapy [46]. The next year, Beinhardt et al. reported another IFN non-responder, with mixed genotype 1a/4 infection, achieving SVR 20 after liver transplantation by the administration of intravenous Legalon ® SIL monotherapy starting from 15 days pre-transplant to 25 days post-transplant [47]. Both groups suggested in their studies that a relatively low viral load before transplantation could be a good prognostic factor [46,47]. Concordant with this viral load observation, Eurich et al. further reported a case series of four Peg-IFN+RBV non-responders who started intravenous Legalon ® SIL treatment months after liver transplant [48]. The patient who had the lowest viral load eliminated the virus during the first week of the 14-day Legalon ® SIL monotherapy, while the patient who had the second lowest viral load eliminated the virus under Peg-IFN+RBV therapy 2 months later. Both patients achieved SVR 24. As for the other two patients with higher initial viral loads, drops of 2.3 and 2.9 logs, respectively, were observed during the first 10 days of Legalon ® SIL administration, despite rebounding viremia during the follow-up Peg-IFN+RBV therapy. On the other hand, Aghemo et al. presented a genotype 2a patient who started intravenous Legalon ® SIL monotherapy 24 h pre-transplant but failed to eliminate graft reinfection [49]. The authors proposed that this may be a result of genotype differences, but did not discuss the patient's relatively high viral load (more than 10^6 IU/mL) before treatment [49]. In contrast, Knapstein et al.
reported a genotype 3 Peg-IFN+RBV non-responder who also had a pre-treatment viral load of more than 10^6 IU/mL but was successfully treated with Peg-IFN+RBV and intravenous Legalon ® SIL combination therapy at the post-transplant stage and achieved SVR 24 [50].
Later, in 2013 and 2014, two randomized placebo-controlled trials [51,52] and a non-treated controlled trial [53] further explored the effect of intravenous Legalon® SIL monotherapy in preventing HCV recurrence in the liver-transplantation setting. Both pre-transplant (a maximum of 21 consecutive days pre-transplant plus 7 days post-transplant) [51] and post-transplant [52,53] administration of Legalon® SIL significantly decreased viral loads during treatment; however, in all three studies viremia rebounded after the end of treatment, the difference between the two groups became insignificant, and no patient reached SVR after 24 weeks of follow-up. These results confirmed the anti-HCV effect of Legalon® SIL, but eradication of the virus may require longer administration or combination with other antivirals.
HIV/HCV Coinfection
Because they share similar transmission routes, 25–30% of HIV patients are coinfected with HCV [54]. HIV/HCV-coinfected patients have traditionally been more difficult to treat with Peg-IFN+RBV combination therapy than their HCV-monoinfected counterparts. However, the recently introduced DAAs have been demonstrated to yield similar SVR rates in these patients to those achieved in patients infected with HCV alone [55]. Despite this achievement, the DAAs are very expensive and can select for resistant mutants. Thus, alternative treatment strategies for these patients are highly desirable. Because silymarin and its derivatives can target both viruses, several clinical trials have explored their use in HIV/HCV-coinfected patients.
Payer et al. explored the use of Legalon® SIL infusion in a 27-year-old female with the unfavorable IL28B single-nucleotide polymorphism (SNP) genotype T/T, who was coinfected with HIV and HCV and refractory to Peg-IFN+RBV treatment [56]. The patient received Legalon® SIL (20 mg/kg/day) monotherapy for 14 days. On day 8, combination therapy with Peg-IFN+RBV was started and continued until week 16, when treatment had to be stopped owing to psychiatric and other adverse events. One week of Legalon® SIL monotherapy substantially decreased both HCV and HIV RNA levels, and after 2 weeks of Legalon® SIL therapy, including 1 week of Peg-IFN+RBV combination therapy, both HCV RNA and HIV RNA were undetectable. While HIV RNA rebounded 24 weeks after cessation of treatment, HCV RNA remained negative over the same time frame [56].
Using a different protocol, Braun et al. examined the efficacy of Legalon® SIL lead-in treatment in 16 HIV/HCV-coinfected Peg-IFN+RBV non-responding patients with advanced liver fibrosis in the clinical trial named THISTLE [57,58]. All patients were given 20 mg/kg/day of intravenous Legalon® SIL monotherapy for 14 days, after which Peg-IFN+RBV combined with the HCV protease inhibitor telaprevir was initiated for 12 weeks, followed by a Peg-IFN+RBV dual regimen for another 36 weeks. Fifteen of the 16 patients (94%) had undetectable HCV RNA at weeks 4 and 12, 11 patients (69%) had undetectable HCV RNA at week 48, and 10 patients (63%) reached SVR12; the remaining six patients did not. Collectively, these studies provide evidence that Legalon® SIL lead-in treatment combined with Peg-IFN+RBV and HCV protease inhibitors may be a promising option for HIV/HCV-coinfected patients, avoiding potential drug–drug interactions and improving treatment success with DAAs [58].
Challenges to Clinical Application and the Need to Enhance Bioavailability
Drug solubility has an important influence on drug absorption and hence bioavailability. Despite the wide range of biological and pharmacological effects of silymarin, the extract is relatively insoluble in water (0.4 mg/mL), and other solvents such as ethanol, glyceryl monooleate, polysorbate 20, and transcutol can increase its solubility to 33–350 mg/mL [59]. Studies of silymarin's primary active molecular component, silybin, indicate extensive enterohepatic circulation following oral administration, rapid excretion in bile and urine with an elimination half-life of about 6 h, and low absorption from the gastrointestinal tract, with a reported oral bioavailability of 0.73% in rat plasma [60,61]. In addition, silybin is particularly susceptible to conjugation reactions in phase II metabolism in the human liver, yielding various silybin metabolites conjugated with sulfates and glucuronides [61]; on average, only about 10% of the silybin isomers were found in unconjugated form in the plasma of orally dosed healthy volunteers [62]. A recent phase II trial in chronic hepatitis C patients who received silymarin capsules showed that serum silybin levels varied widely, from 2.1 to 2048 ng/mL, despite the high doses used (420–700 mg, 3× daily), indicating absorption and bioavailability issues that likely affected the efficacy of the drug against hepatitis C [35]. These factors contribute to the poor oral bioavailability of silymarin and, likewise, of its active constituent silybin. For this reason, most clinical trials and case studies, including those against chronic hepatitis C and HIV/HCV coinfection, employed the more water-soluble salt derivatives such as Legalon® SIL (silibinin-C-2′,3-dihydrogen succinate, disodium salt) [39][40][41][42][43][44][46][47][48][51][52][53][56][57][58]. However, Legalon® SIL is inconvenient to administer, because it is given by i.v.
infusion and cannot be administered orally. The available pharmacokinetic and clinical studies highlight the need to overcome drug delivery problems and formulate or modify silymarin and its active derivatives into more soluble forms that can achieve higher bioavailability.
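To illustrate why the roughly 6 h elimination half-life limits drug exposure, here is a minimal sketch assuming idealized first-order elimination (an oversimplification, since silybin also undergoes the extensive enterohepatic recirculation noted above): the fraction of drug remaining after t hours is 0.5^(t / t½).

```python
def fraction_remaining(t_hours: float, half_life_hours: float = 6.0) -> float:
    """Idealized first-order elimination: fraction of drug left after t hours."""
    return 0.5 ** (t_hours / half_life_hours)

# With a ~6 h half-life, only about 6% of an absorbed silybin dose remains
# after 24 h -- one reason trough serum levels are low between oral doses.
print(fraction_remaining(24))  # 0.5 ** 4 = 0.0625
```

Combined with the low fractional absorption reported above, this rapid clearance helps explain the wide variation in serum silybin levels seen in the capsule trial.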
To address this challenge, several methods have been explored to increase the bioavailability of silymarin and its constituents. These include combination with phosphatidylcholine [63] or β-cyclodextrins [64], formation of salts and glycoside derivatives [65,66], liposome delivery [67,68], solid dispersion incorporation [69,70], self-microemulsifying drug delivery systems (SMEDDS) [59,71,72], and nanoformulations [73][74][75], all of which can improve the solubility of silymarin as well as enable prolonged and sustained release of silybin. As an example, we recently employed a nano-emulsification strategy to address the solubility and bioavailability issues of standardized silibinin (silybin isomers). Specifically, silibinin-loaded nanoparticles (SB-NP) with diameters <200 nm were developed using the hydrophilic carrier polyvinylpyrrolidone (PVP), which converted the silibinin crystalline structure into an amorphous state in the SB-NP and significantly enhanced solubility [15]. Interestingly, free silibinin was efficiently released from the nanoformulation at pH 7.4 but not at pH 1.2, indicating that the drug would be released extensively in the alkaline intestine rather than the acidic stomach, thus favoring intestinal absorption. Importantly, the SB-NP retained their antioxidant activity and antiviral function against HCV infection in vitro, and were safe and orally bioavailable in vivo [15]. Enhanced serum concentrations and superior biodistribution to the liver were observed compared with non-modified silibinin following oral administration in rats [15]. The orally applicable SB-NP, with their improved solubility, absorption, and higher accumulation in the liver, offer an advantage for application against viral hepatitis, including hepatitis C, and underscore their potential for further development as a promising candidate drug agent.
Altogether, given the well-documented pharmacological effects but low solubility and bioavailability of silymarin and its derivatives, increasing their oral bioavailability is critical to their development and application in clinical settings, as attested by the numerous studies to date aiming to address these challenges.
Prospects of Silymarin and Derivatives in Antiviral Development
A growing number of studies have demonstrated the hepatoprotective and antiviral effects of silymarin and its derivatives both in vitro and in vivo. Although the in vivo hepatoprotective activity of the drug and its derivatives remains ambiguous [76], partly because of low bioavailability, improving bioavailability, for example through nanoformulation and other approaches, could help resolve this controversy. Antiviral activities of silymarin and derivatives have been shown against liver and non-liver pathogens, making them potential broad-spectrum antivirals, at least for some of the enveloped viruses explored to date. In addition, considering the polypharmacological activity of silymarin and derivatives towards multiple host-cell targets, such as innate immunity and inflammation [8,17], oxidative stress production [15], and autophagy [23], all cell physiological processes known to be elicited or subverted by many viral infections, these natural products likely exert their antiviral activities by modulating the cellular environment, in addition to any direct antiviral function(s) against a specific viral protein. In the context of hepatic diseases, the ability of silymarin and derivatives to exert both hepatoprotective and antiviral activity makes them ideal candidates, particularly for hepatitis C, for which the greatest number of preclinical and clinical studies have been undertaken thus far. Given that current antiviral agents such as the DAAs only abrogate viral replication without displaying any hepatoprotective effects and are mechanistically different from silymarin's known antiviral activities, combining such drugs with silymarin or its derivatives would be expected to substantially improve patient outcomes.
This notion is supported by the examples reviewed above, including the use of Legalon® SIL in combination with Peg-IFN and/or RBV, or with protease inhibitors, in HCV non-responders [39,[41][42][43][44][46][47][48][51][52][53][56][57][58]. These findings provide compelling evidence for exploring silymarin and its derivatives in combination with existing antivirals as a potential treatment strategy, particularly for chronic viral hepatitis. Further research to improve bioavailability and delivery, as well as to elucidate the main mechanisms of antiviral activity of silymarin and its derivatives, could deepen our understanding of these drugs and accelerate their development as hepatoprotective antiviral agents.
Conflicts of Interest:
The authors declare no conflict of interest.
Protein kinase Cε stabilizes β-catenin and regulates its subcellular localization in podocytes
Kidney disease has been linked to dysregulated signaling via PKC in kidney cells such as podocytes. PKCα is a conventional isoform of PKC and a well-known binding partner of β-catenin, which promotes its degradation. β-Catenin is the main effector of the canonical Wnt pathway and is critical in cell adhesion. However, whether other PKC isoforms interact with β-catenin has not been studied systematically. Here we demonstrate that PKCε-deficient mice, which develop proteinuria and glomerulosclerosis, display lower β-catenin expression compared with wild-type mice, consistent with an altered phenotype of podocytes in culture. Remarkably, β-catenin showed a reversed subcellular localization pattern: Although β-catenin exhibited a perinuclear pattern in undifferentiated wild-type cells, it predominantly localized to the nucleus in PKCε knockout cells. Phorbol 12-myristate 13-acetate stimulation of both cell types revealed that PKCε positively regulates β-catenin expression and stabilization in a glycogen synthase kinase 3β-independent manner. Further, β-catenin overexpression in PKCε-deficient podocytes could restore the wild-type phenotype, similar to rescue with a PKCε construct. This effect was mediated by up-regulation of P-cadherin and the β-catenin downstream target fascin1. Zebrafish studies indicated three PKCε-specific phosphorylation sites in β-catenin that are required for full β-catenin function. Co-immunoprecipitation and pulldown assays confirmed PKCε and β-catenin as binding partners and revealed that ablation of the three PKCε phosphorylation sites weakens their interaction. In summary, we identified a novel pathway for regulation of β-catenin levels and define PKCε as an important β-catenin interaction partner and signaling opponent of other PKC isoforms in podocytes.
The PKC family of serine and threonine kinases is subdivided into the three subfamilies of conventional, atypical, and novel PKCs (1), depending on the involvement of diacylglycerol (DAG) and calcium during activation. Conventional, atypical, and novel PKCs play different roles in signal transduction via phosphorylation of their target proteins. In the kidney, several studies have shown that PKCs play a pivotal role in the development of diabetic nephropathy. For PKCα and PKCβ, two members of the conventional PKC subfamily, complete depletion seems to lead to a better outcome in streptozotocin-induced diabetic mice in terms of proteinuria (2) or renal hypertrophy and glomerular injury (3). This implies that these PKC isoforms induce or exacerbate kidney injury. In contrast, we have shown previously that knockout of PKCε in mice leads to a renal phenotype with glomerulosclerosis, indicating that PKCε may protect from diabetic nephropathy (4,5).
PKCε belongs to the subfamily of novel PKCs and is activated via DAG alone, unlike the conventional isoforms, which are activated by calcium and DAG. PKCε was shown previously to play an essential role in cell proliferation, migration, invasion, and survival, and lack of PKCε is linked to reduced cardioprotective effects (6) and neurotransmission (7). Moreover, PKCε has been well characterized as an important binding protein of actin and thus has a regulatory function for the cytoskeleton (8,9). β-Catenin is the key component of the highly conserved canonical Wnt pathway, regulating cell–cell adhesion as well as serving as a transcription co-factor (10). Widely expressed, β-catenin is also found in podocytes and plays a pivotal role in cell adhesion and differentiation. Depending on phosphorylation, β-catenin exists in an active (non-phosphorylated) and an inactivated (phosphorylated) state. Early studies demonstrated that overexpression of β-catenin alone can cause or aggravate damage in podocytes (11), whereas, in apparent contradiction, a later study showed that β-catenin knockout mice under diabetic conditions display more albuminuria than control mice in long-term studies (>15 weeks) (12). From their conflicting data, Kato et al. (12) went on to show that a balanced expression of active and inactive β-catenin might be what is important for cell function.
The presence of β-catenin is strongly regulated via glycogen synthase kinase 3β (GSK3β), which forms a degradation complex with adenomatous polyposis coli (APC), Axin, and CK1 in the absence of Wnt. The resulting N-terminal phosphorylation of β-catenin leads to ubiquitination and proteasome-mediated reduction of the cellular β-catenin level. In contrast, activation of Wnt signaling by Wnts, a group of secreted glycolipoproteins, leads to stabilization and accumulation of β-catenin in the cytoplasm and translocation into the nucleus, which then results in Wnt target gene expression (13).
Other, so-called non-canonical Wnt signaling pathways, especially the Wnt/Ca2+ pathway, are involved in PKC signaling (14); many studies have demonstrated the alternative degradation of β-catenin by PKC isoforms such as PKCα (15,16). PKCα has been shown to mediate GSK3β-independent β-catenin down-regulation by phosphorylation of the N-terminal serine residues Ser-33, Ser-37, and Ser-45 of β-catenin. This finding is consistent with the known common phosphorylation sites of GSK3β (17).
The aim of this study was to analyze the relationship between β-catenin and PKCε in podocytes. We used a murine PKCε knockout podocyte cell line to perform in vitro immunofluorescence staining and to characterize β-catenin expression under phorbol 12-myristate 13-acetate (PMA) stimulation, and adenoviral constructs to investigate the effects of overexpression of β-catenin in PKCε knockout podocytes. Furthermore, we created β-catenin mutants and validated these in vitro and in vivo to investigate which domain of β-catenin may contribute to the interaction/function with PKCε.
PKCε knockout in mice leads to proteinuria and reduced expression of β-catenin
To evaluate podocytic expression of β-catenin in mice, we performed immunofluorescence co-staining of the podocyte marker synaptopodin and β-catenin on glomeruli of 12-week-old wild-type and PKCε−/− mice. β-Catenin expression was detected in the nucleus as well as in the cytoplasm in wild-type mice, whereas, in PKCε−/− glomeruli, we observed predominantly nuclear localization. Further, β-catenin expression was less intense in PKCε−/− than in wild-type mice (Fig. 1A). Semiquantitative analysis of β-catenin in fluorescent cells in the glomeruli of mice (≥10 glomeruli were included for each genotype) confirmed that β-catenin expression in WT mice was significantly higher than in PKCε−/− mice at 12 weeks of age (Fig. 1B; ****, p < 0.0001). Of note, Coomassie staining of urine revealed that glomerular barrier function was already impaired, displaying mild albuminuria onset at 12 weeks of age in PKCε-deficient mice (Fig. 1C).
PKCε influences the subcellular distribution of β-catenin and its activity
Because we detected a difference in podocytic β-catenin expression between glomeruli of wild-type and PKCε−/− mice, we wanted to investigate expression in vitro in wild-type and PKCε−/− podocytes. Interestingly, PKCε−/− podocytes displayed a very distinctive phenotype in tissue culture, as they were significantly smaller than wild-type podocytes, detached very easily, and never reached a confluent growth pattern. Western blot analysis using antibodies against active and total β-catenin showed similar results as those in vivo, displaying increased expression from day 0 to day 10 (Fig. 2A), with stronger up-regulation in wild-type cells than in knockout cells. The up-regulation supported findings in other cell lines (18,19), which indicated that the presence of β-catenin is critical for cell differentiation. An immunofluorescent staining time course (0, 4, and 8 days after differentiation) revealed reverse localization of β-catenin in wild-type and PKCε−/− podocytes during differentiation (Fig. 2B). In wild-type cells, the localization of β-catenin changed from predominant expression in the perinuclear areas, exhibited on day 0, to expression in nuclei and cell junctions during differentiation (on days 4 and 8). In contrast, β-catenin in PKCε−/− podocytes translocated from the nuclei (day 0) to the perinuclear areas and cell junctions (days 4 and 8). We quantified the shift and intensity of the localization of β-catenin by expressing the ratio of the mean fluorescence of the nuclei in relation to the perinuclear areas. The quantification supported the observation that β-catenin expression changes significantly from perinuclear areas to nuclei in wild-type podocytes during differentiation (Fig. 2, B and C; ****, p < 0.0001), whereas, in PKCε−/− podocytes, β-catenin switches from nuclei to perinuclear areas (Fig. 2, B and C; ***, p < 0.0002).
These data suggest that PKCε orchestrates the subcellular localization of β-catenin in podocytes, which shifts significantly during the differentiation time course (days 0 and 8; ****, p < 0.0001; *, p < 0.05). Interestingly, on day 4, no difference was observed between the two genotypes, implying a PKCε-dependent checkpoint for β-catenin during podocyte differentiation.
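The quantification used here, expressing β-catenin localization as the ratio of mean fluorescence in the nucleus to that in the perinuclear area, can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the image values and masks are hypothetical, and segmentation of nuclear and perinuclear regions is assumed to have been done beforehand.

```python
import numpy as np

def nuclear_perinuclear_ratio(image, nuclear_mask, perinuclear_mask):
    """Ratio of mean fluorescence intensity in nuclei vs. perinuclear areas.

    image: 2D array of fluorescence intensities for one cell.
    nuclear_mask / perinuclear_mask: boolean arrays of the same shape,
    assumed to come from a prior segmentation step (hypothetical here).
    """
    return image[nuclear_mask].mean() / image[perinuclear_mask].mean()

# Toy example: a 4x4 "cell" whose nucleus (top-left 2x2 block) is brighter.
img = np.array([[8.0, 8.0, 2.0, 2.0],
                [8.0, 8.0, 2.0, 2.0],
                [2.0, 2.0, 2.0, 2.0],
                [2.0, 2.0, 2.0, 2.0]])
nuc = np.zeros((4, 4), dtype=bool)
nuc[:2, :2] = True        # nuclear region
peri = ~nuc               # everything else treated as "perinuclear" here
print(nuclear_perinuclear_ratio(img, nuc, peri))  # 8.0 / 2.0 = 4.0
```

A ratio above 1 indicates predominantly nuclear signal; values falling over the differentiation time course would reflect the nuclear-to-perinuclear shift described for wild-type cells.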
PKCε stabilizes active β-catenin levels in a GSK3β-independent manner
To further explore the effects of PKCε on β-catenin activity, PMA (a reversible and highly potent PKC activator) was used to stimulate murine wild-type and PKCε knockout podocytes. Normalization of Western blot results indicated that, after differentiation for 8 days, PMA stimulation for 30 min and 1, 2, 4, 8, and 24 h results in an increase of active (non-phosphorylated) β-catenin in wild-type podocytes (Fig. 3A) up to 2 h, followed by continuous reduction over the 24-h time course. In contrast, PKCε−/− podocytes showed no transient up-regulation of β-catenin; instead, we noted a gradual diminishment of active β-catenin from the start of the time course to its end, indicating accelerated degradation of active β-catenin in PKCε−/− podocytes. The impact of PMA stimulation on active β-catenin expression was significantly higher (*, p < 0.05; **, p < 0.01) in wild-type podocytes than in PKCε-deficient cells (Fig. 3B).
To investigate whether the diminished β-catenin levels resulted from rising activation of GSK3β, the main inhibitor of the canonical Wnt pathway, we analyzed GSK3β activity after PMA stimulation. GSK3β was detected at a constant expression level in both wild-type and PKCε knockout podocytes (Fig. 3, C and D). Because phosphorylation of GSK3α/β at Ser-9/21 leads to decreased activity (20), we also looked at p-GSK3 after PMA stimulation. p-GSK3 increased in both groups in a similar fashion during the time course (Fig. 3C), indicating that the reduced β-catenin expression in the PKCε−/− podocytes is independent of GSK3β expression and activity.
β-Catenin overexpression rescues the impaired actin cytoskeleton of PKCε-deficient podocytes
As shown previously (21), PKCε-deficient murine podocytes in culture display a malfunction in the organization of the actin cytoskeleton. Staining for F-actin and the focal adhesion marker paxillin revealed an overall smaller cell size, fewer stress fibers, and reduced size and number of focal adhesions in PKCε−/− podocytes (Fig. 4A). Adenoviral transduction of PKCε-deficient podocytes with a PKCε wild-type construct indicated complete rescue of the phenotype, as expected, and the cells displayed a rearranged cytoskeleton and higher levels of paxillin. Interestingly, adenoviral transduction of PKCε knockout cells with a wild-type β-catenin construct led to a similar recovery as the rescue with the PKCε wild-type construct. Measurement of average cell size with ImageJ demonstrated that PKCε knockout cells transduced with PKCε or β-catenin reached cell sizes similar to those of wild-type cells and a similar distribution pattern of paxillin expression with elongated focal contacts (Fig. 4, A and B).
Because β-catenin plays a major role in cell adhesion with its binding partner P-cadherin (22), and PKCε knockout podocytes in culture detach very easily, we examined the impact of PKCε deficiency on P-cadherin expression during cell differentiation. As depicted in Fig. 4C, P-cadherin expression was drastically reduced in knockout cells. As expected, we could restore normal P-cadherin expression by overexpressing a wild-type PKCε construct (Fig. 4, D and E). Interestingly, overexpressing a wild-type β-catenin construct in PKCε knockout podocytes also led to a significant increase in P-cadherin expression (*, p < 0.05), which was not detected in cells transduced with the pAd-Dest vector alone.
(Legend to Fig. 1: A, immunofluorescence staining of murine glomeruli at 12 weeks of age in PKCε+/+ and PKCε−/− mice; co-staining for β-catenin (green), the podocyte marker synaptopodin (red), and DAPI (blue) shows expression and localization within glomeruli. B, semiquantitative analysis of the number of cells with β-catenin expression in PKCε+/+ and PKCε−/− mice (****, p < 0.0001). C, SDS-PAGE/Coomassie gel staining of urine from wild-type and PKCε knockout mice at 9 and 12 weeks; BSA at 1, 5, and 10 µg/ml served as both control and standard. Data are mean ± S.D. of at least three independent experiments.)
To explore how the actin cytoskeleton is restored by β-catenin, we considered known protein targets of β-catenin that are downstream in the Wnt pathway and that may interfere with the cytoskeleton. This led us to discover reduced mRNA expression levels of fascin1 in PKCε-deficient podocytes. The fascin1 level significantly increased when cells were transduced with either a PKCε or a β-catenin construct (Fig. 4F). Fascin1, an actin filament-bundling protein (23), is known to bind to β-catenin at the cellular edges and possesses a PKC binding site at serine 39 (24,25). Western blot analysis of lysates from the adenovirus-transduced cells confirmed the quantitative RT-PCR results (Fig. 4, G and H). The knockout cells that were rescued by overexpression of PKCε or β-catenin exhibited significantly higher fascin1 protein expression than mock (pAd-Dest)-transduced PKCε−/− podocytes (**, p < 0.05). These data suggest that β-catenin mediates the podocytic actin cytoskeleton via regulation of fascin1 expression.
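The paper does not state how the quantitative RT-PCR data for fascin1 were analyzed; relative mRNA levels are conventionally computed with the 2^(−ΔΔCt) (Livak) method, sketched here with hypothetical Ct values and a hypothetical reference gene:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Standard 2^-ddCt relative quantification (Livak method).

    All Ct values here are hypothetical; the paper reports no raw Cts.
    ct_target / ct_ref: sample Cts for the gene of interest and reference gene.
    ct_target_ctrl / ct_ref_ctrl: the same Cts measured in the control sample.
    """
    delta_ct_sample = ct_target - ct_ref
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(delta_ct_sample - delta_ct_control)

# Example: fascin1 amplifying two cycles later (relative to the reference
# gene) in knockout cells than in wild-type controls -> ~4-fold lower level.
print(relative_expression(26.0, 18.0, 24.0, 18.0))  # 2 ** -2 = 0.25
```

Values below 1 indicate reduced expression relative to the control, consistent with the lower fascin1 mRNA reported for PKCε-deficient podocytes.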
PKCε-specific phosphorylation sites in β-catenin are indispensable for filtration barrier function
Having found that β-catenin and PKCε expression levels influence each other, we wanted to explore whether PKCε is a binding partner of β-catenin, as reported previously for PKCα (15,16). To this end, we performed co-immunoprecipitation experiments in HEK cells transfected with GFP-tagged PKCε (or PKCα as a positive control) and FLAG-tagged β-catenin constructs. Indeed, after immunoprecipitation of FLAG-tagged β-catenin, we detected an interaction with PKCε as well as with PKCα (Fig. 5A).
To further characterize whether the interaction between PKCε and β-catenin derives from a direct or indirect linkage, we performed a pulldown assay with pure recombinant GST–β-catenin and His-PKCε. These experiments confirmed a direct interaction between the two proteins in vitro (Fig. 5B).
To elucidate which domain of β-catenin is important and functionally altered by its interaction with PKCε, we searched for specific phosphorylation sites with phosphomotif-predicting programs such as PhosphoNET, HPRD release 9, dbPTM 3.0, SysPTM 2.0, and UniProt. We chose seven phosphomotifs with matches in at least three databases (Fig. 5C). We performed site-directed mutagenesis to ablate these phosphorylation sites in β-catenin (serine was mutated to either alanine or arginine, and threonine was switched to alanine) and performed zebrafish experiments to explore the biological relevance of these mutations of potential PKCε binding sites. To accomplish this, we first established a knockdown model of zebrafish β-catenin1 expression using a β-catenin1 morpholino. Zebrafish express two isoforms of β-catenin that seem not to be functionally redundant during development (26). We decided to use specific morpholinos for the β-catenin1 isoform for our knockdown experiments because this isoform shows significantly higher sequence similarity to the mammalian and Xenopus protein sequences (27). As depicted in Fig. 5D, reduction of β-catenin in zebrafish leads to a phenotype with edema of the yolk sac, pericardial effusion, and a shorter tail compared with control morpholino-injected fish. The dorsalized phenotype thus displayed the typical features of β-catenin1 knockdown also seen in other studies (26,27). Using our previously described eye assay, we measured the level of proteinuria in zebrafish larvae (28). In brief, the eye assay is an indirect method for determining proteinuria: the experimental zebrafish are transgenic for a liver promoter-driven, GFP-labeled vitamin D-binding protein with a molecular mass of 78 kDa that accumulates in the circulation under normal conditions and can be easily quantified in the retinal blood vessel plexus of the fish.
Decreased fluorescence levels, as displayed by the β-catenin knockdown fish, indicated a significant loss of high-molecular-weight proteins from the circulation of the fish (Fig. 5E). To verify the specificity of the β-catenin1 knockdown, we also performed a cross-species rescue experiment with a wild-type mRNA construct, leading to full recovery of the proteinuria phenotype (Fig. 5E). Next, we performed cross-species rescue experiments by co-injecting the β-catenin zebrafish morpholino and cRNA of different murine β-catenin mutant constructs. The mutants S47R, T551A/S552A, S675A, and S715R/S718A showed partial or full rescue of the proteinuria phenotype, with increased levels of circulating fluorescence in the fish. The rescue was considered partial when the mean fluorescence level was significantly higher than that of the β-catenin knockdown zebrafish (*, p < 0.05) and full when the statistical significance reached ***, p < 0.001. In contrast, the mutants S352R, T472A/S473R, and S663A were not able to rescue the β-catenin phenotype, indicating a functional role of these phosphorylation sites. Interestingly, co-immunoprecipitation experiments using anti-FLAG beads and HEK cells transfected with GFP-tagged PKCε and with these FLAG-tagged β-catenin mutants also confirmed a lower binding affinity of the β-catenin mutants that did not exhibit rescue activity in the zebrafish experiments (Fig. 5F). These observations indicated that the ablated phosphorylation sites are relevant for the interaction between PKCε and β-catenin in vivo and in vitro. As a control, to ensure that the constructs themselves would not influence glomerular filtration function, we also overexpressed all constructs in developing fish larvae and detected no difference in fluorescence levels compared with scrambled morpholino-injected control fish (data not shown).
These data indicate that the interaction between PKCε and β-catenin is indispensable for proper filtration barrier function and depends on several binding sites.
Discussion
We and others have shown previously that the dynamics of the actin cytoskeleton are crucial for podocyte function and often depend on a single player in the network of cytoskeletal signaling components (29). Here we demonstrate that PKCε is a key player in this context via its interaction with and regulation of β-catenin (Fig. 6). PKCε dictates the subcellular localization of β-catenin; immunofluorescence staining of PKCε-deficient cells showed that β-catenin translocation from the nucleus to perinuclear areas and the membrane during differentiation depends on PKCε. PKCε shifted β-catenin into the cytosol and to the cellular membrane. Under normal conditions in undifferentiated cells in culture, β-catenin first accumulates in the cytoplasm and is localized to the membrane, whereas, during the differentiation process, excess β-catenin enters the nucleus to stimulate Wnt target signaling (30).
Use of PMA stimulation as a PKC activator in tissue culture revealed a more detailed picture. PKCε knockout decreased active β-catenin, whereas increasing β-catenin levels were found in control cells. PMA triggers not only PKCε activity (31) but also other C1 domain-containing isoforms such as PKCα and PKCβ. PKCα has been described previously as a negative regulator of β-catenin, as its specific inhibition by the small molecule A23187 results in β-catenin up-regulation (15), and our data indicate that PKCε might counterbalance the influence of PKCα on β-catenin. We showed that β-catenin and PKCε interact in a GSK3β-independent manner. These results are in line with those of previous studies in other cell types, indicating that PKCα also regulates β-catenin expression separately from GSK3β, when β-catenin is reduced after withdrawing glucose from the medium (32). Our findings also support our previous observation suggesting an antagonistic role of PKCα and PKCβ compared with PKCε in the context of diabetic nephropathy (3). Under physiological conditions, β-catenin levels are highly regulated via their phosphorylation sites. The major pathway of β-catenin degradation is via GSK3β, which relies on prior phosphorylation of β-catenin by CK1 (33). Beyond this pathway, alternative routes of β-catenin degradation are not well characterized. In the Ca2+/DAG-dependent pathway, PKCα knockdown leads to an accumulation of β-catenin (16), indicating that this conventional PKC plays a direct role in β-catenin degradation. In the planar cell polarity pathway, inhibition of PKCδ, another novel PKC, has been shown to induce stabilization of β-catenin (16). Our data define PKCε as an antagonistic regulator of β-catenin stability in both signaling pathways.
PKCε-deficient podocytes in vitro show a distinct phenotype, displaying an abnormally small cell size, disturbed actin cytoskeleton dynamics, a higher tendency to undergo spontaneous apoptosis, and lower expression of podocyte differentiation markers (21). Adenoviral transfection of PKCε−/− podocytes with a PKCε wild-type construct led to complete recovery of the cells. Interestingly, viral transduction of knockout cells with a human β-catenin construct could also overrule the effects of the PKCε deficiency. The podocytes appeared normal in size and exhibited a normalized actin cytoskeleton arrangement (Fig. 4A).
P-cadherin, one major factor of the cell-cell adhesion complex of podocytes, was down-regulated in PKCε-deficient podocytes throughout the differentiation process (Fig. 4C). This finding could explain the poor cell adhesion of PKCε-deficient podocytes, because β-catenin and P-cadherin together form a complex, and both show reduced expression. Transfection with PKCε rescued the phenotype as expected. β-Catenin overexpression in PKCε-deficient podocytes also induced up-regulation of P-cadherin expression not detected in control-transfected cells. This observation surprised us because it is established that aberrant E-cadherin expression promotes accumulation of an unbound cytoplasmic β-catenin pool. This excessive β-catenin can then further act as a transcription cofactor (34). Our findings suggest that there is a mutual relationship between the two proteins, leading to up-regulation of P-cadherin expression by ectopic expression of β-catenin. Further, our results indicate that PKCε enhances rather than down-regulates P-cadherin. These results further supplement the described relationship between the novel PKCε and P-cadherin, in addition to findings by other groups (35) describing novel PKCs as important regulators of endocytosis and recycling of E-cadherin in other cell types.
Regulation of β-catenin by PKCε

Fascin1, a downstream target of the canonical Wnt/β-catenin signaling pathway, is an actin filament-bundling protein. It is well-known for binding to β-catenin at the cellular edges and possesses a PKC binding site at Ser-39 that has been described for conventional PKCs such as PKCα (24). Larsson (8) reported that fascin1 phosphorylation by PKCα leads to release of fascin1 from actin bundles, presumably enabling cell spreading. However, so far, an interaction between PKCε and fascin1 has not been presented. Our data suggest an interdependence among PKCε, β-catenin, and fascin1 that supports a healthy actin cytoskeleton (Fig. 6). Whether fascin1 up-regulation relies solely on increased β-catenin expression mediated by PKCε or could also be induced by PKCε alone needs further investigation, as the rescue of PKCε−/− podocytes with the PKCε construct did not restore fascin1 mRNA expression to the levels present in wild-type cells, whereas Western blot analysis suggests a normalized fascin1 protein level.
The question remains whether rescue of the PKCε knockout phenotype by β-catenin overexpression is derived from enhanced Wnt-signaling activity or from accumulation at the cell adhesion complex. We will address this in the future.
We could identify three different PKCε binding/phosphorylation sites in β-catenin, located in its central domain within the armadillo repeats. This location differs from those of GSK3 and PKCα, which have been demonstrated previously to regulate β-catenin via phosphorylation sites in the N-terminal domain (Ser-33, Ser-37, Thr-41, and Ser-45) (36). Further in vivo studies in the zebrafish model demonstrate that knockdown of β-catenin via morpholino and coinjection of β-catenin RNA lacking Ser-352, Thr-472/Ser-473, or Ser-663 cannot rescue the proteinuria phenotype, suggesting that these binding sites are indispensable for phosphorylation of β-catenin by PKCε, leading to its stabilization.
Conclusion
In summary, our data indicate that PKCε deficiency leads to low β-catenin expression and defective actin cytoskeleton organization and contributes to disrupted podocyte function, leading to an impaired glomerular filtration barrier. Our results further support the hypothesis that balanced β-catenin levels are important for normal kidney function (12). The novel binding sites of PKCε identified here and the role of PKCε in β-catenin regulation, opposing those of GSK3 and other PKCs, might represent a new way of counterbalancing physiological β-catenin levels. Therefore, our study could be the basis for further pharmacological intervention studies targeting these binding sites, which will be explored in future studies.
Figure 5. PKCε is a functional binding partner of β-catenin, and the interaction is indispensable for filtration barrier function.
A, FLAG immunoprecipitation (IP) of HEK cells transduced with a FLAG-tagged β-catenin construct and either a GFP, GFP-tagged PKCα (positive control), or GFP-tagged PKCε construct. Immunoblotting against FLAG shows successful transfection. B, in vitro direct interaction between β-catenin and PKCε was examined in a GST pulldown experiment with pure recombinant GST-β-catenin and His-PKCε. His-PKCε was incubated with either GST or GST-β-catenin. The pulldown fractions were analyzed by Western immunoblotting against PKCε and GST. C, schematic illustrating the β-catenin protein structure, which is divided into the N-terminal, Armadillo repeat, and C-terminal parts. The suggested phosphorylation sites of PKCε are shown. D, individual larvae at 120 hpf, indicating control morpholino-injected fish with no edema and β-catenin morpholino-injected fish with edema and a dorsalized phenotype. E, murine β-catenin mutants were tested in the zebrafish model by injecting the capped mRNA into fertilized zebrafish eggs at the one- to four-cell stage. The transgenic zebrafish produce a vitamin D-binding protein fused with GFP that, under normal conditions, accumulates in the retina and is quantified at 120 hpf by measuring the fluorescence level. Reduced fluorescence indicates a disturbed glomerular filtration barrier. The graph shows the GFP fluorescence intensity of the eye assay for wild-type fish, β-catenin1 morpholino injection, and co-injection of β-catenin1 morpholino with a human wild-type RNA construct. F, group mean fluorescence intensities of the eye assay. The combined injection of β-catenin morpholino plus each β-catenin mutant was compared against β-catenin morpholino injection alone (*, p < 0.05; **, p < 0.01; ***, p < 0.001; ns, not significant). Binding of each β-catenin mutant to GFP-tagged PKCε was tested in vitro by FLAG immunoprecipitation in transfected HEK cells. Immunoblotting was performed against GFP and FLAG.
Urine analysis
Murine spontaneous spot urine samples were produced by abdomen massage and collected on parafilm. The samples were analyzed by SDS-PAGE followed by Coomassie Blue staining.
Western blot analysis
For protein extractions, podocytes were lysed in radioimmune precipitation assay buffer (50 mM Tris (pH 7.5), 150 mM NaCl, 0.5% sodium deoxycholate, 1% Nonidet P-40, and 0.1% SDS). The lysates were stored at −80°C overnight and centrifuged at 11,000 rpm for 15 min at 4°C. Afterward, the supernatant was collected and transferred into a new tube. Protein concentrations were then determined with the BCA protein assay kit (Thermo Scientific, Rockford, IL) according to the manual. Equal amounts of protein were separated by 10% SDS-PAGE and electrotransferred to a PVDF membrane (Immobilon-P, Millipore). After blocking in 2% BSA (SERVA Electrophoresis GmbH, Heidelberg, Germany), the membrane was sequentially probed with the primary antibody (1:1000) and an HRP-conjugated secondary antibody (1:10,000). Visualization of the HRP signals was achieved with the enhanced chemiluminescence kit (Pierce). As a loading control, either GAPDH or total protein analysis via Coomassie staining of the membrane was used. For Coomassie staining, the membrane was incubated in 0.1% Coomassie 250G (w/v) in 50% (v/v) methanol for 1 min, destained for 15 min in a solution of water/ethanol/acetic acid in proportions of 4:5:1, and then air-dried overnight.

Figure 6. Schematic overview of β-catenin regulation by PKCε in wild-type and PKCε−/− podocytes. In wild-type podocytes, PKCε regulates the cytoplasmic β-catenin level in balance with other PKC isoforms: PKCα phosphorylates β-catenin and thus leads to its degradation. PKCε inhibits degradation via its interaction sites (Ser-352, Thr-472/Ser-473, and Ser-663) in a GSK3-independent manner and leads to a predominantly cytosolic localization of β-catenin, which then might bind to the actin-bundling protein fascin1 in filopodia of the podocytes or localize to the membrane, forming the cell adhesion complex with P-cadherin and α-catenin, resulting in a stabilized actin cytoskeleton and cell adhesion. In PKCε-deficient podocytes, this balance is disturbed, leading to degradation of β-catenin, abolished P-cadherin expression, and lower fascin1 expression, which then results in impaired cell adhesion and actin bundling and, thus, in a weakened actin cytoskeleton.
Cell culture and drug treatment
Cell culture of conditionally immortalized mouse podocytes was performed following the description of Mundel et al. (37). Under permissive conditions, podocyte proliferation was induced in the presence of 10 units/ml γ-interferon (Cell Sciences, Canton, MA) in RPMI 1640 medium (Biochrom AG, Berlin, Germany) including 10% FCS and 1% penicillin/streptomycin (both from Gibco/Invitrogen) at 33°C. The podocytes were allowed to differentiate at the nonpermissive temperature of 37°C in the same medium but without γ-interferon. All flasks used for podocytes were coated with collagen I (BD Biosciences) mixed in 20 mM sodium acetate (pH 4.7). After being cultured at 37°C for 9 days, the podocytes were pretreated in starvation medium containing 1% FCS and 1% penicillin/streptomycin for 16 h. On the 10th day of differentiation, the podocytes were treated with PMA (Sigma-Aldrich) and harvested at different time points. For immunocytochemistry, the podocytes were plated on collagen I-coated coverslips in a 24-well plate. To analyze differentiation, podocytes were cultured under nonpermissive conditions and harvested at different time points (30 min, 1 h, 2 h, 4 h, 8 h, and 24 h). HEK 293T cells were cultured in DMEM (Invitrogen) containing 10% FCS and 1% antimycotic solution (Gibco/Invitrogen) at 37°C and plated in 10-cm dishes for transfection.
Transfection
HEK 293T cells were seeded in 10-cm dishes 2 days prior to transfection. According to the user manual, 1-2 µg of plasmid and 6 µl of FuGENE HD transfection reagent (Promega Corp., Fitchburg, WI) were mixed gently in 200 µl of serum-free medium and incubated for 20 min at room temperature. Afterward, the mixture was added dropwise to each dish and incubated for 48 h.
Immunoprecipitation
About 48 h after transfection, HEK293T cells were washed with ice-cold PBS and lysed in 1 ml of ice-cold radioimmune precipitation assay buffer (50 mM Tris-HCl (pH 7.5), 200 mM NaCl, 1 mM EDTA, 1 mM EGTA, 1% Triton X-100, and 0.25% deoxycholic acid sodium salt) with protease inhibitor and phosphatase inhibitor (Roche Diagnostics GmbH, Mannheim, Germany) on ice. The lysate was collected, rotated for 1 h at 4°C, and then centrifuged at 11,000 rpm for 15 min at 4°C. The supernatant was transferred into new tubes, and 50 µl of FLAG beads (Sigma, 50% slurry in Triton buffer) was added to each tube. The tubes were then rotated at 4°C for 3 h. The beads were collected by centrifugation at 3000 rpm for 3 min, washed with 1 ml of radioimmune precipitation assay buffer, and rotated for 5 min at 4°C, three times. After detachment from the beads by adding loading buffer and boiling at 95°C, the proteins were separated by SDS-PAGE and analyzed by Western blotting as described above.
Pulldown assay
30 µl of magnetic GST beads (Sigma) was first washed in washing buffer (50 mM TBS, 138 mM NaCl, and 2.7 mM KCl (pH 8.0)) before adding either 1 µg of pure recombinant GST or GST-β-catenin (Sigma) protein and rotating the mixture for 30 min at room temperature. After washing the mixture twice for 1 min each time with 300 µl of washing buffer, pure recombinant PKCε protein was added and rotated at 4°C overnight. After washing three times, 120 µl of elution buffer (TBS with 15 mM reduced glutathione (pH 8.0)) was added, and the mixture was rotated for 30 min. The eluate was then separated by SDS-PAGE and analyzed by Western blotting.
Immunofluorescence staining
The immortalized mouse podocytes were plated on coverslips for differentiation and fixed with 4% paraformaldehyde at different time points. After permeabilization with 0.1% Triton X-100, podocytes were blocked in 10% donkey serum (Jackson ImmunoResearch Laboratories, Suffolk, England) for 30 min and incubated with the primary antibody at 4°C overnight, followed by incubation with Alexa Fluor 488 donkey anti-rabbit IgG and Alexa Fluor 546 phalloidin for 1 h (Invitrogen). Afterward, glass coverslips were mounted in Aquapolymount medium (Polysciences Inc., Warrington, PA) with DAPI.
For immunohistochemistry, murine kidney sections were blocked in 10% donkey serum and incubated with the primary antibody at 4°C overnight. Then the sections were rinsed with TBS three times, incubated with the Alexa Fluor 488 donkey anti-rabbit IgG for 1 h, and mounted with DAPI in Aquapolymount medium. A Zeiss Axioplan-2 imaging microscope and digital image processing software (Axio Vision 4.6, Zeiss, Jena, Germany) were used for analysis of the images.
Real-time PCR
Following the manual of the RNeasy Mini Kit (Qiagen, Hilden, Germany), total RNA was extracted from the cultured mouse podocytes. Reverse transcription was performed using 1 µg of total RNA, Moloney murine leukemia virus reverse transcriptase, oligo(dT)15, and random primers (Promega). cDNA amplification was achieved using Fast Start Taq polymerase (Roche Diagnostics), SYBR Green (Molecular Probes, Eugene, OR), and gene-specific primers in the following thermal cycle: 95°C for 5 min, followed by 45 cycles of 10 s at 95°C, 10 s at 60°C, and 10 s at 72°C. Each reaction was performed in triplicate and normalized to the constitutive gene mouse hypoxanthine phosphoribosyltransferase 1 (mHPRT-1). Melting curve analysis was used to verify the specificity of the PCR product. The primers used for amplification were as follows: HPRT-1, 5′-CAGTCCCAGCGTCGTGATTA-3′ and 5′-AGCAAGTCTTTCAGTCCTGTC-3′; fascin1, 5′-AACGTGTCCACGCGCC-3′ and 5′-GCAGCTGGCGTTCTTGGT-3′.
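The text specifies only triplicate reactions normalized to mHPRT-1; a common way to express such normalized qPCR data is the 2^-ΔΔCt method. The sketch below assumes that method, and all Ct values are illustrative:

```python
from statistics import mean

def ddct_relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.
    Each argument is a list of triplicate Ct values."""
    dct_sample = mean(ct_target) - mean(ct_ref)            # normalize to reference gene
    dct_control = mean(ct_target_ctrl) - mean(ct_ref_ctrl)
    ddct = dct_sample - dct_control                        # relative to control group
    return 2 ** (-ddct)

# Hypothetical example: fascin1 normalized to mHPRT-1, treated vs. control
fold = ddct_relative_expression(
    ct_target=[24.0, 24.2, 23.8],       # fascin1, treated
    ct_ref=[20.0, 20.1, 19.9],          # mHPRT-1, treated
    ct_target_ctrl=[25.0, 25.1, 24.9],  # fascin1, control
    ct_ref_ctrl=[20.0, 20.0, 20.0],     # mHPRT-1, control
)
```

With these illustrative values the treated sample shows a twofold relative expression.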
Site-directed mutagenesis
After searching five phosphorylation databases (PhosphoNET, HPRD release 9, dbPTM 3.0, SysPTM 2.0, and UniProt), predicted phosphosites were chosen and compared across five species (human, mouse, zebrafish, Drosophila, and Caenorhabditis elegans) to determine the most conserved ones. Finally, seven promising phosphomotifs were selected as candidates, and the corresponding site-specific mutations of mouse-derived DNA were introduced into the plasmid by overlapping PCR. The primers used were as follows: S47R forward, 5′-CAGCTCCTTCTCTGAGA… The original bases are shown in parentheses. Site-directed mutagenesis was performed with the QuikChange site-directed mutagenesis kit following the manufacturer's instructions. All constructs were sequenced to verify the nucleotide sequences.
Adenoviral production and infection
Gateway technology (Invitrogen) was used for the generation of adenoviral vectors. With BP Clonase II, donor vectors were established by cloning wild-type or site-directed mutant β-catenin into pDONR221 vectors. Afterward, adenoviral constructs were generated by recombining pDONR221 with pAd/CMV/V5-DEST using LR Clonase II. All reactions were accomplished following the manufacturer's manual. All constructs were confirmed by DNA sequencing. Adenoviral expression and amplification were performed in HEK293 cells. Adenoviral transduction of PKCε−/− podocytes with either pDest, PKCε, or β-catenin constructs was conducted on day 5 of differentiation. About 24 h later, the medium was changed, and 72 h after transduction, the cells were harvested or fixed for further analysis.
Cell size
ImageJ was used to quantify the total area of phalloidin-stained podocytes. At least 30 single cells (verified by DAPI staining) from every subgroup were photographed in black-and-white format.
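The area measurement reduces to counting foreground pixels in the black-and-white image and applying the pixel calibration. A minimal sketch of that step follows; the mask and the calibration factor are illustrative assumptions, not values from the study:

```python
def cell_area_um2(binary_mask, um_per_pixel):
    """Total stained area from a black-and-white mask (rows of 0/1 values),
    converted to square micrometers via the pixel calibration."""
    pixel_count = sum(sum(row) for row in binary_mask)
    return pixel_count * um_per_pixel ** 2

# Tiny illustrative mask: 5 foreground pixels, 0.5 um per pixel
mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
]
area = cell_area_um2(mask, um_per_pixel=0.5)
```

Averaging such per-cell areas over the ≥30 cells per subgroup gives the group mean.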
Zebrafish experiments
We followed the method described by Hentschel et al. (28). Zebrafish (L-FABP:DBP-EGFP) were mated and housed at 28.5°C in embryo rearing medium (E3). After having been embedded in 1.2% agarose, one- to four-cell-stage fertilized embryos were injected using a Nanoject II injection device (Drummond Scientific, Broomall, PA). For overexpression experiments, mRNA of wild-type or site-directed mutant β-catenin was diluted 1:1 with injection buffer (20 mM HEPES, 200 mM KCl, and 0.01% phenol red) and injected at a final concentration of 30 ng/µl in a total volume of 4.6 nl. For rescue experiments, wild-type or site-directed mutant β-catenin mRNA was mixed with β-catenin morpholino diluted in injection buffer and injected at final concentrations of 30 ng/µl mRNA and 100 µM morpholino in a total volume of 4.6 nl.
Scrambled morpholino was injected as a control. Morpholino sequences were designed and ordered from GeneTools (Philomath, OR) as follows: standard control sequence, 5′-CCTCTTACCTCAGTTACAATTTATA-3′; β-catenin sequence, CTGTGTCAAAAGCTGTATATTCCTG. The β-catenin morpholino sequence was blasted to ensure that no off-target splice junction or start codon annealing occurred for sequence matches greater than 14 nt. At 120 h post-fertilization (hpf), zebrafish larvae were anesthetized with a 1:28 dilution of 4 mg/ml tricaine (MESAB, 1% Na2HPO4 (pH 7.0)) and photographed at ×10 magnification. The fluorescence of GFP-labeled vitamin D-binding protein in the pupil of the zebrafish eye was measured and analyzed with ImageJ. The animal protocol was approved by the Animal Care and Use Committee of the Mount Desert Island Biological Laboratory (Protocol 14-06).
Statistical analysis
Data are shown as means ± S.D. and were compared with unpaired Student's t tests. PMA stimulation experiments were compared using two-way analysis of variance. Prism 6 was used for data analysis. Differences were considered significant at p < 0.05.
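The unpaired Student's t statistic used for two-group comparisons (pooled-variance form) can be sketched as follows; the sample data are illustrative only:

```python
from math import sqrt
from statistics import mean, stdev

def unpaired_t(group_a, group_b):
    """Unpaired two-sample Student's t statistic with pooled variance,
    as used to compare two treatment groups."""
    na, nb = len(group_a), len(group_b)
    ma, mb = mean(group_a), mean(group_b)
    # Pooled variance from the two sample variances
    sp2 = ((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

t = unpaired_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

The resulting t is compared against the t distribution with na + nb − 2 degrees of freedom at the chosen significance level.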
Ethics statement
Animal work, which was performed following the guidelines of the American Physiological Society, was approved by the Institutional Animal Care and Use Committee of Hannover Medical School and the animal welfare authorities of Lower Saxony. All efforts were made to minimize the number of animals used and their suffering.
The glucose uptake of type 2 diabetic rats by Sargassum olygocystum extract: In silico and in vivo studies
This study aimed to improve glucose uptake in type 2 diabetic rats using Sargassum olygocystum extract. The identity of the S. olygocystum metabolome was determined by high performance liquid chromatography-high resolution mass spectrometry. The binding energies of the interactions between S. olygocystum metabolites and pioglitazone against protein tyrosine phosphatase 1B (PTP1B) were determined by the docking method. Male Rattus norvegicus weighing between 180 and 200 g were used as an experimental animal model for type 2 diabetes mellitus. The experiment consisted of six groups, i.e., normal, type 2 diabetes mellitus (DM2), DM2 + pioglitazone, and DM2 + S. olygocystum extract administered once, twice, or thrice per day, consecutively. The treatment was carried out for 45 days. The parameters were blood glucose, the area under the glucose curve, insulin, homeostasis model assessment-insulin resistance, and expression of phosphatidylinositol-3-kinase (PI3K) and Akt. The data are stated as mean and standard deviation, and differences between treatments were determined by the Duncan test. The significance level used in this study was 5%. This study showed that S. olygocystum extract is capable of reducing blood glucose and that rhamnetin from this seaweed extract enhances glucose uptake in type 2 diabetes via inhibition of PTP1B activity and induction of PI3K/Akt expression.
INTRODUCTION
Type 2 diabetes mellitus is a metabolic disorder in which the body's cells become resistant to insulin. This is possibly due to increased activity of protein tyrosine phosphatase 1B (PTP1B) (Abdelsalam et al., 2019) and reduced activation of phosphatidylinositol-3-kinase (PI3K) and Akt (Huang et al., 2018). These changes reduce glucose transporter 4 (Glut 4) translocation from the cytoplasm to the cell membrane, so blood glucose uptake into the cells also decreases (Afzalpoura et al., 2016). Increasing blood glucose uptake is one mechanism for controlling the blood glucose level in type 2 diabetics (Natali and Ferrannini, 2006).
PI3K is an enzyme that catalyzes the formation of phosphatidylinositol-3,4,5-triphosphate in the cell membrane (Abdelsalam et al., 2019). The formation of this phosphate compound can activate Akt, which then plays a role in controlling the most important cellular processes in metabolism, including Glut translocation (Abdelsalam et al., 2019; Natali and Ferrannini, 2006). Previous studies have shown increased Glut 4 translocation in 3T3-L1 cells and type 2 diabetic mice via the PI3K and Akt signaling pathways (Jang et al., 2020; Ramachandran and Saravanan, 2015) and increased glucose uptake in experimental animals and type 2 diabetics through the cumulative presence of Gluts (Różańska and Regulska-Ilow, 2018).
The protein tyrosine phosphatase PTP1B is an enzyme that catalyzes the dephosphorylation of tyrosine-phosphorylated proteins and plays a role in the state of insulin resistance. This enzyme is widespread in the muscles, liver, adipose tissue, and brain (Cho, 2013; Valverde and González-Rodríguez, 2011). Previous studies have shown that inhibiting PTP1B activity can increase the translocation of Glut 4 and thereby improve blood sugar levels in type 2 diabetic test animals (Chen et al., 1997; Yang et al., 2018; Zhang et al., 2014).
It is known that brown algae contain many bioactive substances that are beneficial to human health and may include hypoglycemic agents (Gabbia and De Martin, 2020). Some Sargassum species that have been studied as hypoglycemic agents include Sargassum hystrix, Sargassum yezoense, Sargassum polycystum, Sargassum hemiphyllum, Sargassum serratifolium, and Sargassum echinocarpum. The active ingredients in Sargassum spp. that are known to act as hypoglycemic agents include plastoquinones, polyphenols, and phlorotannins, although their bioavailability is low. The mechanisms of these compounds as hypoglycemic agents include α-amylase and α-glucosidase inhibition, insulin secretion enhancement, insulin sensitivity enhancement, and PTP1B activity inhibition (Ali et al., 2017; Corona et al., 2016; Firdaus and Chamidah, 2018; Gotama et al., 2018; Soliman et al., 2020; Hwang et al., 2015; Motshakeri et al., 2013).
Decocting refers to a method of extracting active ingredients using water and heat. This method is used because it extracts many active ingredients, is cheap, and yields extracts free from toxic solvents (Yang et al., 2020). Previous studies showed that decoctions of Syzygium cumini (Perera et al., 2017) and traditional Chinese medicine (Qi et al., 2019) had the ability to lower blood sugar in rats and people with type 2 diabetes. Most of the active ingredients dissolved in such decoctions are organic acid derivatives and polyphenols (Akhtar et al., 2019).
Sargassum sp. is known to contain bioactive substances that lower blood glucose in animals with diabetes mellitus induced by alloxan and streptozotocin. However, the active substances of S. olygocystum obtained by decoction, and their role in lowering blood glucose through glucose uptake in a type 2 diabetes animal model, have not been explored. Therefore, the purpose of this study was to identify the active ingredient from an S. olygocystum decoction that plays a role in glucose uptake in type 2 diabetic rats.
Materials
Sargassum olygocystum was collected in February-March 2021 from Talango waters, Sumenep, Madura. The seaweed was authenticated by the Research Centre of Oceanography, Indonesian Institute of Sciences (1368/IPK.2/KS). Sargassum olygocystum was boiled for 23 min in aquadest (1:6.5, w/v) at a temperature of about 90°C to obtain the extract. High performance liquid chromatography (HPLC)-grade aquadest, acetonitrile, and formic acid were used to identify the bioactive compounds of S. olygocystum. The structures of compounds identified from S. olygocystum in SDF format were downloaded from the PubMed database. An HPLC-high resolution mass spectrometry (HRMS) system (Thermo Scientific Dionex Ultimate 3000 RSLCnano) with a Hypersil GOLD aQ column (50 × 1 mm, 1.9 µm particle size) was used to identify the active compounds of S. olygocystum. An HP computer with an Intel® Core™ i3-5005U processor and the Microsoft Windows 10 operating system was used for the in silico method. Open Babel GUI version 2.4.1, PyMOL 1.7.4 Edu (Schrödinger), BIOVIA Discovery Studio 2019 (Dassault Systèmes BIOVIA Corp.), and PyRx 0.8 (The Scripps Research Institute) were used for the docking analysis (Firdaus et al., 2020). The materials used in the in vivo study were male Rattus norvegicus aged 2-3 months, pioglitazone (Dexa Medica), streptozotocin (BioWorld), a rat insulin kit (BT-Lab E0707Ra), a rat PI3K kit (BT-Lab E0438Ra), and a rat Akt kit (BT-Lab E0201Ra).
HPLC-HRMS analysis
Sargassum olygocystum was decocted in water (1:6.7, w/v) for 23 minutes at around 90°C, cooled at room temperature, and then filtered with Whatman No. 40 paper. The filtrate was then diluted with aquadest containing 0.1% formic acid, vortexed at 2,000 rpm for 2 minutes, and spun down at 6,000 rpm for 2 minutes. Afterward, the supernatant was filtered with a 0.22 µm syringe filter, and 1 ml of supernatant was injected into the HPLC-HRMS autosampler (Thermo Scientific™) for untargeted metabolome identification. This analysis used aquadest with 0.1% formic acid as solvent A and acetonitrile with 0.1% formic acid as solvent B. The flow rate of the mobile phase was 40 μl/minute. The gradient ratios of solvents A and B were 95:5 at minutes 0-15, 40:60 at minutes 15-22, and 5:95 at minutes 22-25. The column temperature was 30°C. The metabolome identification was based on the similarity between detected compounds and the compound information contained in the Compound Discoverer, mzCloud MS/MS Library.
Docking methods
The 3D ligand structures of S. olygocystum compounds and pioglitazone in SDF format were converted to PDB format using Open Babel. Before the docking process, the energy of these ligands was minimized with Open Babel to optimize their conformations. The minimized structures were then converted to pdbqt format, ready for the docking process. The macromolecule was PTP1B (ID: 2hnp), which was downloaded from http://www.rcsb.org/ (Huang et al., 2018). PTP1B as a macromolecule in *.pdb format was converted into *.pdbqt format using PyRx. Each ligand was kept flexible and interacted with the macromolecule under rigid conditions. AutoDock Vina was used to simulate the docking of the test ligands and the comparison ligand against PTP1B (Hwang et al., 2015). All calculations were executed with a grid-box size of x = 66.77 Å, y = 49.04 Å, z = 40.19 Å and a grid center of x = 43.42 Å, y = 15.89 Å, z = 14.73 Å. An exhaustiveness search parameter of eight was used to predict the binding affinities, given the probability of finding the global minimum of the scoring function. The docking results were evaluated, and the best value (the most negative ΔG) was observed in the area where the ligand attached to the macromolecule. Interactions in the form of hydrogen bonds, hydrophobic bonds, and electrostatic bonds, together with bond distances, were visualized in 2D and 3D with Discovery Studio and PyMOL using an interaction radius of 5 Å (Firdaus et al., 2020).
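The grid-box settings above can be collected into an AutoDock Vina configuration file. The sketch below only assembles the text of such a file using Vina's standard configuration keywords; the receptor and ligand file names are illustrative assumptions:

```python
def vina_config(receptor, ligand, center, size, exhaustiveness=8):
    """Build an AutoDock Vina configuration-file string from the grid-box
    parameters reported in the text (all values in Angstroms)."""
    lines = [
        f"receptor = {receptor}",
        f"ligand = {ligand}",
    ]
    for axis, value in zip("xyz", center):
        lines.append(f"center_{axis} = {value}")
    for axis, value in zip("xyz", size):
        lines.append(f"size_{axis} = {value}")
    lines.append(f"exhaustiveness = {exhaustiveness}")
    return "\n".join(lines)

# Grid box from the text; file names are hypothetical
cfg = vina_config(
    receptor="2hnp.pdbqt",
    ligand="rhamnetin.pdbqt",
    center=(43.42, 15.89, 14.73),
    size=(66.77, 49.04, 40.19),
)
```

The resulting text would be saved (e.g., as `config.txt`) and passed to Vina with `--config`.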
Animal model
Two- to three-month-old male Wistar rats weighing 200-250 g were acclimatized in individual cages for 1 week with feed and drink ad libitum. A type 2 diabetic rat model was obtained by high-fat feeding and diabetogen injection in the normal rats. After the acclimatization phase, the treated groups of rats were administered a high-calorie diet until hypercholesterolemia developed. The rats were then injected intraperitoneally with streptozotocin (stz) at a dose of 30 mg/kg body weight. Ten days after the injection, blood glucose levels were determined. Rats with a glucose level >200 mg/dl were declared diabetic, while those with lower glucose levels were excluded from the study (Firdaus and Chamidah, 2018). This study included six groups, namely, normal (A), DM (B), DM + pioglitazone at a dose of 2 mg/kg (C), DM + S. olygocystum extract at 4 ml/kg administered once (D), twice (E), or thrice (F) per day.
Blood glucose and area-under-curve glucose (AUCglu)
The measurement of blood glucose in rats was carried out by taking blood samples from the tail. On day 45 of the animal experiment, fasting (after overnight fasting) and instantaneous glucose levels were measured. Blood glucose was measured with a glucometer (GlucoDr AGM-2100) and expressed in mg/dl. The AUCglu determination was carried out on rats based on an oral glucose tolerance test in which blood glucose levels were observed at 0, 30, 60, and 120 minutes after administering 5 ml/kg body weight of a 10% glucose solution (Cai et al., 2016). This assay was performed in rats that had been fasted overnight. The AUCglu formula (the trapezoidal rule, with time expressed in hours) is as follows: AUCglu = 0.25 × A + 0.5 × B + 0.75 × C + 0.5 × D (A, B, C, and D represent blood glucose levels at 0, 30, 60, and 120 minutes, respectively).
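Assuming the standard trapezoidal rule over the 0-, 30-, 60-, and 120-minute sampling points (with time converted to hours), the AUCglu computation can be sketched as follows; the glucose values are illustrative only:

```python
def auc_glucose(times_min, glucose_mg_dl):
    """Trapezoidal area under the glucose curve; sampling times in minutes
    are converted to hours, giving AUC in h*mg/dl."""
    auc = 0.0
    for i in range(len(times_min) - 1):
        dt_h = (times_min[i + 1] - times_min[i]) / 60.0   # interval width in hours
        auc += dt_h * (glucose_mg_dl[i] + glucose_mg_dl[i + 1]) / 2.0
    return auc

# OGTT sampling points used in the text: 0, 30, 60, and 120 min
auc = auc_glucose([0, 30, 60, 120], [100.0, 180.0, 160.0, 120.0])
```

For these four fixed time points the trapezoidal sum reduces to 0.25A + 0.5B + 0.75C + 0.5D.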
Homeostasis model assessment-insulin resistance (HOMA-IR)
HOMA-IR was determined based on fasting glucose and insulin levels and was calculated using the following formula (Esteghamati et al., 2010): HOMA-IR = fasting insulin (µU/ml) × fasting glucose (mg/dl) / 405.
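A minimal sketch of this calculation, assuming the conventional mass-unit form of the index (fasting glucose in mg/dl, fasting insulin in µU/ml, normalized by the constant 405); the input values are illustrative:

```python
def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uU_ml):
    """HOMA-IR from fasting glucose (mg/dl) and fasting insulin (uU/ml);
    405 is the conventional normalizing constant for these units."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

value = homa_ir(90.0, 9.0)
```

Higher values indicate greater insulin resistance, so a fall in HOMA-IR after treatment reflects improved insulin sensitivity.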
Biochemical determination
The insulin, PI3K, and Akt levels of rats were measured by the enzyme-linked immunosorbent assay method, following the guidelines listed in each kit. Blood was drawn from the heart for insulin determination, whereas the liver was taken for PI3K and Akt determination. These samples were centrifuged at 3,000 rpm for 20 minutes to obtain serum and supernatant, respectively. The serum and supernatant were stored at −20°C until use. Approximately 50 μl of a standard solution was placed in the standard well, while 40 μl of sample and 10 μl of insulin or kinase antibody were added to the sample well. Fifty μl of streptavidin-horseradish peroxidase was then added to both wells and homogenized. The solution was incubated for 60 minutes at 37°C. After that, the wells were washed with a washing buffer five times and soaked in 0.35 ml of the buffer for 1 minute. The wells were then dried, and 50 μl of substrate A and 50 μl of substrate B were added. The well plates were incubated for 10 minutes at 37°C in the dark, and finally, 50 μl of a stopping solution was added. The optical density of the color change was measured 30 minutes after adding the stopping solution on a microplate reader (Bio-Rad Model 550) at a wavelength of 450 nm.
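The kit guidelines convert sample optical densities to concentrations via the standard curve. A minimal sketch of that step, assuming simple linear interpolation between bracketing standards (the concentrations and OD values below are illustrative, not kit values):

```python
def elisa_concentration(od_sample, standards):
    """Estimate analyte concentration by linear interpolation between the
    two bracketing standards; `standards` is a list of (concentration, OD)
    pairs sorted by increasing OD."""
    for (c_lo, od_lo), (c_hi, od_hi) in zip(standards, standards[1:]):
        if od_lo <= od_sample <= od_hi:
            frac = (od_sample - od_lo) / (od_hi - od_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("sample OD outside the standard range")

# Hypothetical standard curve: (concentration, OD at 450 nm)
standards = [(0.0, 0.05), (5.0, 0.25), (10.0, 0.45), (20.0, 0.85)]
conc = elisa_concentration(0.35, standards)
```

In practice a four-parameter logistic fit is often preferred over piecewise interpolation, but the bracketing logic is the same.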
Data analysis
The data were expressed as the mean and standard deviation. Differences between treatments were analyzed using a completely randomized design. The significance level used in this study was α = 5%.
The metabolites of the S. olygocystum extract consist of essential amino acids, nonessential amino acids, amino acid derivatives, terpenes, terpenoids, indoles, caprolactam, sulfonamides, nucleotides and their derivatives, carboxylic acid derivatives, cinnamic acid derivatives, flavonoid derivatives, and polyphenols. Previous studies have reported that this type of algae also contains phenols and flavonoids (Kanimozhi et al., 2015; Mehdinezhad et al., 2016). Meanwhile, another study showed that Cystoseira barbata contains rhamnetin, a flavonoid derivative (Ibrahim and Abdel-Tawab, 2020). The presence of these metabolites in these algal genera is plausible because their biosynthesis requires the precursor compounds phenylalanine and cinnamic acid, both of which were found in this algae extract (Koes et al., 2005; Milke et al., 2018).
Docking analysis
The interaction analysis of the bioactive compounds of the S. olygocystum extract against PTP1B showed that rhamnetin had the strongest binding affinity among the active substances of S. olygocystum and a greater affinity than pioglitazone. The binding affinity of pioglitazone is −7.6 kcal/mol, while that of rhamnetin is −8.4 kcal/mol. Table 1 displays the binding affinity values of pioglitazone and the bioactive compounds of the S. olygocystum extract. Table 2 exhibits the interactions and binding affinities of pioglitazone and rhamnetin. Figures 1 and 2 show 2D and 3D visualizations of the interactions of pioglitazone and rhamnetin with PTP1B. Figure 1 and Table 2 show hydrogen bonds between pioglitazone and the PTP1B residues aspartate 265, threonine 263, and glycine 266, with pioglitazone acting as a proton donor for aspartate 265 but as a proton acceptor for threonine 263 and glycine 266. A second interaction of pioglitazone with threonine 263 is also a hydrogen bond, but with pioglitazone acting as a proton donor. Pioglitazone also acts as a proton acceptor in a carbon-hydrogen bond with serine 216. An alkyl interaction occurs between pioglitazone and alanine 217. Figure 2 and Table 2 show hydrogen bonds between rhamnetin and the PTP1B residues glutamate 115, aspartate 181, cysteine 215, arginine 221, aspartate 265, and glycine 266; rhamnetin acts as a proton donor for glutamate 115, aspartate 181, and aspartate 265, and as a proton acceptor for cysteine 215, arginine 221, and glycine 266. The interaction of rhamnetin with threonine 263 is a pi-orbital hydrogen bond with threonine as the proton donor.
Insulin resistance is a characteristic metabolic disorder in people with type 2 diabetes mellitus. Blood glucose levels in type 2 diabetics remain high despite high insulin levels in the blood. The low sensitivity of cells to insulin leads to low glucose uptake by the body. Pioglitazone is a sensitizer of fat, liver, and muscle cells to the presence of insulin. An in silico study showed that pioglitazone inhibits PTP1B but does not anchor on this protein's active site; in that study, a glitazone derivative could replace pioglitazone because its binding occurs directly at the enzyme's active site, namely Cys215 and Arg221, at a distance of 4-5 Å (Bhattarai et al., 2010). Rhamnetin from brown seaweed also docked at the protein's active site, with interacting residues only 3.5-4 Å apart. This means that this bioactive compound has tremendous potential as a PTP1B inhibitor compared with the glitazone derivative. This ability may be due to the conformation of the rhamnetin hydroxyl groups, which readily accept protons from the two residues at the enzyme's active site (Lopez et al., 2017).
Glucose, insulin, HOMA-IR, and AUC glu
The results of blood glucose, insulin, HOMA-IR, and AUC glu determination showed that treatment with the S. olygocystum extract resulted in lower parameter levels than in the untreated diabetic rats, although the levels were higher than in the animals treated with pioglitazone. Table 3 and Figure 3 present the blood glucose, insulin, HOMA-IR, and AUC glu levels of diabetic rats treated with the S. olygocystum extract.
The cells of type 2 diabetics have low insulin sensitivity, and glucose entering the blood circulation cannot enter directly into the body's cells. This study also obtained the HOMA-IR and AUC glu values of the experimental animals. The administration of pioglitazone improved insulin sensitivity, as shown by decreased blood glucose levels and hyperinsulinemia. Similar results were also reported for the use of pioglitazone in people with type 2 diabetes (Rajagopalan et al., 2015). Pioglitazone is a hypoglycemic agent that increases insulin sensitivity in the liver, muscle, and fat tissue. Glitazone, besides working by activating peroxisome proliferator-activated receptor-γ, is also able to inhibit PTP1B. Sargassum olygocystum extract treatment can improve these metabolic disorders. Improvements in insulin sensitivity in type 2 diabetic rats have also been attributed to S. polycystum and Sargassum coreanum extracts (Motshakeri et al., 2013; Park et al., 2016). The administration of the S. serratifolium extract showed improvement through the inhibition of PTP1B. Plastoquinones from S. serratifolium can perform competitive and noncompetitive inhibition of this enzyme by binding to its allosteric site or to the substrate-enzyme complex (Ali et al., 2017). In this study, the improvement of insulin resistance due to the S. olygocystum decoction is likely due to the presence of rhamnetin, a quercetin derivative. Quercetin is known to control blood glucose levels by increasing glucose uptake in muscle. This enhancement of glucose uptake is induced by the activation of AMPK and PI3K/Akt expression, and the increase in the expression of these kinases can be caused by the inhibition of PTP1B activity (Shi et al., 2019).
PI3K and Akt expression
The results showed that treatment with the S. olygocystum extract increased the PI3K and Akt expression levels in the liver of diabetic rats, although the values were lower than in the diabetic rats treated with pioglitazone. Table 4 presents the PI3K and Akt expression levels in the liver of rats. PI3K and Akt are kinases that play essential roles in various metabolic activities, including the control of blood glucose levels. The activity of these kinases was decreased in diabetic animals but increased in the groups given pioglitazone and the S. olygocystum extract. It has been shown that impaired Glut 4 translocation in diabetic animals is due to the low activity of PI3K and Akt (Pinent et al., 2004). Pioglitazone treatment in people with type 2 diabetes can increase glucose uptake (Rajagopalan et al., 2015). Glitazone can increase glucose uptake due to its ability to inhibit PTP1B activity (Bhattarai et al., 2010). Glut proteins are transporters responsible for the entry of glucose into cells. These transporters are translocated from the cytoplasm to the membrane in response to insulin. Glut 4 is the Glut isoform most abundant in muscle and fat tissue. Glut 4 translocation to muscle and fat cell membranes occurs through a series of reactions triggered by insulin via the PI3K/Akt pathway, and many flavonoids affect glucose uptake through this route. Procyanidins, polymers of the flavan-3-ols catechin and epicatechin, increase glucose uptake in 3T3-L1 adipose cells and L6E9 myotubes through Akt activity (Afzalpoura et al., 2016), while the flavanone eriodictyol and the flavonoid 7-O-methylaromadendrin increase glucose uptake via the PI3K/Akt pathway in liver cells and fat cells (Zhang et al., 2010; Zhang et al., 2012).
CONCLUSION
This study found that the S. olygocystum extract lowered blood sugar levels and increased PI3K and Akt expression in the liver of rats with type 2 diabetes. HPLC-HRMS analysis identified the bioactive compounds contained in the S. olygocystum extract. The docking analysis of the identified active substances showed that rhamnetin was the most effective compound for inhibiting PTP1B. In summary, rhamnetin from the S. olygocystum extract is a natural ingredient that plays an important role in lowering blood sugar levels in rats with type 2 diabetes through the mechanism of inhibiting PTP1B activity and activating PI3K/Akt expression. However, an in vivo study of the ability of rhamnetin itself to control blood sugar levels in type 2 diabetes needs to be performed.
Size variation of infrared vibrational spectra from molecules to hydrogenated diamond nanocrystals: a density functional theory study
Summary Infrared spectra of hydrogenated diamond nanocrystals of one nanometer length are calculated by ab initio methods. Positions of atoms are optimized via density functional theory at the level of the generalized gradient approximation of Perdew, Burke and Ernzerhof (PBE) using 3-21G basis states. The frequencies in the vibrational spectrum are analyzed against reduced masses, force constants and intensities of vibration. The spectrum can be divided into two regions depending on the properties of the vibrations or the gap separating them. In the first region, results show good matching to several experimentally obtained lines. The 500 cm−1 broad-peak acoustical branch region is characterized by pure C–C vibrations. The optical branch is centered at 1185 cm−1. Calculations show that several C–C vibrations are mixed with some C–H vibrations in the first region. In the second region the matching also extends to C–H vibration frequencies that include different modes such as symmetric, asymmetric, wagging, scissor, rocking and twisting modes. In order to complete the picture of the size dependence of the vibrational spectra, we analyzed the spectra of ethane and adamantane. The present analysis shows that acoustical and optical branches in diamond nanocrystals approach each other and collapse at 963 cm−1 in ethane. Variation of the highest reduced-mass-mode C–C vibrations from 1332 cm−1 of bulk diamond to 963 cm−1 for ethane (red shift) is shown. The analysis also shows the variation of the radial breathing mode from 0 cm−1 of bulk diamond to 963 cm−1 for ethane (blue shift). These variations compare well with experiment. Experimentally, the above-mentioned modes appear shifted from their exact positions due to overlap with neighboring modes.
Introduction
Diamond nanocrystals are a very important material theoretically and experimentally. This importance seems to originate from the extraordinary properties of bulk diamond that include high hardness, inertness and high thermal conductivity. The additional properties added by reduction to the nanoscale make diamonds and related carbon materials a focus for recent investigations [1][2][3][4][5][6][7][8][9]. One of the first steps of investigating a material is the characterization of its properties. The present work is concerned with the theoretical calculation of vibrational infrared frequency lines of diamond nanocrystals and the variation of these vibrations from molecular to bulk sizes. Several previous calculations from other authors assigned different origins for some of the well observed lines, such as the experimental 500 and 1130-1332 cm −1 diamond nanocrystal lines [2,3,6]. In the present work we shall try to explain and calculate some of these lines. In addition, we shall discuss C-H vibrations and their mixing with C-C vibrations. The importance of identifying C-H frequencies will be shortly demonstrated in the subsequent sections. The variation of C-C vibrations with the size of the carbon-hydrogen molecules or nanocrystals is shown by including the investigation of ethane and adamantane molecules.
Theory
Density functional theory at the level of the generalized gradient approximation of Perdew, Burke and Ernzerhof (PBE) is used in the present work to determine stable optimized positions of atoms in the nanocrystal [10]. Double-zeta 3-21G basis functions are chosen to perform the above calculations so that all vibrational analysis is performed with the same level of theory, which is feasible within our computer system in terms of memory and time. The chosen diamond nanocrystal is of 1 nm length. It has the stoichiometry C 64 H 84 . After optimizing geometrical positions, vibrational frequencies are determined by solving coupled perturbed Hartree-Fock equations [11,12]. The frequencies are then analyzed against other vibrational properties such as reduced masses, force constants and infrared vibration intensities. To complete the picture of C-C vibrations with size variation we included the infrared vibrational frequencies of ethane and adamantane molecules using the same level of theory.
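In Gaussian input terms, the optimization-plus-frequency workflow described above corresponds to an Opt Freq route with the PBE functional (keyword PBEPBE) and the 3-21G basis. The file below is an illustrative sketch only, not the authors' actual input; the memory directive and the coordinate block are placeholders.

```text
%mem=2GB
# PBEPBE/3-21G Opt Freq

C64H84 hydrogenated diamond nanocrystal: optimization + IR frequencies
(illustrative input; coordinate list truncated)

0 1
C    0.000000   0.000000   0.000000
C    0.890000   0.890000   0.890000
...
```

The Freq step produces exactly the quantities analyzed in this paper: harmonic frequencies, reduced masses, force constants, and IR intensities.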
Results
The program Gaussian 03 [13] is used to optimize the geometries and calculate the vibrational spectra of the diamond nanocrystal, ethane and adamantane molecules. The calculated frequencies need to be corrected for the systematic frequency error that results from ab initio calculations [10]. A previous estimation of this scale factor for PBE theory using the 3-21G basis is 0.991 [14]. Note that different authors use different scale factors for the same basis at the same level of calculation [10,14,15]. The present scale factor is one of the closest to the unscaled data (very close to 1) and will be used without modification for all spectra. Figure 1 shows the analysis of vibrational reduced masses, force constants and infrared vibrational intensities against the frequency of vibration. Figure 2 shows the vibrational frequencies of adamantane. Figure 3 shows the vibrational frequencies of ethane. Figure 4 shows the variation of the highest reduced-mass mode (HRMM) of C-C vibrations and the radial breathing mode (RBM) with the number of carbon atoms. Figure 5 shows the RBM displacement vectors in the diamond nanocrystal C 64 H 84 at 330 cm−1. For comparison with the present calculations, a wide range of references exists for C-C and C-H vibrations [16][17][18][19]. The theoretical behavior of radial breathing modes in nanomaterials can be found in reference [20].
Discussion
In the first part of Figure 1 (0-1589 cm−1), we can note that most vibrations have a reduced mass of 2 atomic mass units (amu) or greater. This means a less active C-H vibrational contribution in this region; most vibrations in the first region are C-C vibrations. The reduced mass of two particles of masses m_a and m_b is given by:

μ = (m_a × m_b)/(m_a + m_b) (1)

Although the above equation is for diatomic molecules, it can be used to understand the vibrational modes of larger molecules. We can note from the above equation that the reduced mass of two carbon atoms (the mass of one carbon atom is approximately 12 amu) can be approximately 6 amu, which is the case for C-C vibrations and can be seen not to be exceeded by the reduced masses in Figure 1c. In the two following figures (Figure 2c and Figure 3c) the highest reduced masses are 6.49 and 4.06 amu, respectively, which shows that the above rule is also approximately followed. The value of 6.49 amu in Figure 2c is due to the movement of carbon atoms in phase with one of their bonded hydrogen atoms, which makes their effective masses 13 instead of 12. On the other hand, the C-H reduced vibrational mass is approximately 0.923 amu (the mass of one hydrogen atom is approximately 1 amu), and none of the three figures (Figures 1c-3c) goes below this value. All the reduced-mass points in Figures 1-3 lie between these two values (0.923 to 6.49 amu).
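The reduced-mass limits quoted in this paragraph can be checked with a short Python sketch of Equation 1, using atomic masses rounded to whole amu as in the text.

```python
# Reduced mass mu = m_a*m_b/(m_a + m_b) (Eq. 1) for the bond pairs
# discussed in the text; approximate atomic masses in amu.

def reduced_mass(m_a, m_b):
    return m_a * m_b / (m_a + m_b)

print(reduced_mass(12, 12))             # C-C pair: 6.0 amu
print(round(reduced_mass(12, 1), 3))    # C-H pair: ~0.923 amu
print(reduced_mass(13, 13))             # CH units moving in phase: 6.5 amu
```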
In order to determine peaks related to the bulk diamond structure and separate them from surface C-H vibrations, two conditions have to be met. The first condition is that the reduced mass has to be closest to the reduced-mass value of a C-C atom pair at 6.5 amu (the highest possible value in the spectrum), and the second condition is that it has to have a distinguishably high force constant that converges to the bulk diamond force constant of 4.7 mDyne/Å [16]. Two trends can be seen in the reduced masses of Figure 1c. The first trend is the acoustical branch, which begins at the start of the spectrum and has peaks at 330, 516, 646, and 782 cm−1. This broad peak ends at nearly 819 cm−1, where the optical branch begins, which has strong oscillations between a lower limit above 1 amu reduced mass and the highest values of reduced masses (HRMM) (5.25 amu) at 1185 cm−1. However, the highest-intensity peaks in the above two regions have different positions than the highest reduced masses due to the overlap between neighboring peaks. We shall continue to refer to the value of the pure mode rather than the experimentally observed peak, which may move slightly due to overlapping. The range 300-700 cm−1 is the only wide range of frequencies that is totally free from the contaminating C-H vibrations, which have less than 2 amu of reduced mass. In our opinion this is the origin of the broad 500 cm−1 peak of diamond nanocrystals reported repeatedly in the literature with varying explanations [2,4]. The existence of this broad peak is a definite signal of the existence of diamond nanocrystal structures.
From displacement vectors, the vibrational mode at the 330 cm−1 peak is identified as the RBM in diamond nanocrystals (see Figure 5). This mode corresponds to radial expansion-contraction of the nanocrystal. From the values of the reduced mass and force constant, the HRMM peak at 1185 cm−1 is the distorted original experimental 1332 cm−1 diamond bulk line [2]. This phenomenon is termed the vibrational red-shift effect in nanocrystals [21]. The ideal strong bonds of bulk diamond are weakened in nanocrystals because of surface and reconstruction effects. Since surface effects penetrate at least three layers of the surface [1], the present nanocrystal, which has four layers between surface and core, will maintain a small number of ideal tetrahedral bonds at its center or core. The value of the force constant at this frequency (Figure 1b) supports this argument, having the exceptional value 4.4 mDyne/Å, close to the ideal bulk-diamond force constant mentioned earlier (4.7 mDyne/Å). As nanocrystals grow in size, the intensity and force constant of this line increase and surface effects decrease, which enhances the strength of the bonds. Since the frequency of vibration is proportional to the square root of the force constant of the vibrating bond, as given by the equation

ν = (1/(2πc)) √(k/μ) (2)

the frequency of the present 1185 cm−1 line will increase as the nanocrystals increase in size and head towards that of the bulk at 1332 cm−1. Note that many C-H vibrations interfere with the highest peak at 1185 cm−1 and continue to the end of the first part of Figure 1. This can be noted from the high oscillation in reduced masses at the end of the first part of Figure 1c. The above range of frequencies is identified experimentally in nanodiamonds, such as the lines 1132, 1134, 1140, 1150, and 1240 cm−1 in references [2,4,5,7,8], respectively.
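Equation 2 can be checked numerically. The sketch below, plugging in the HRMM force constant (4.4 mDyne/Å) and reduced mass (5.25 amu) quoted in this paragraph, yields a wavenumber close to the 1185 cm−1 line; the small discrepancy is consistent with the mode overlap discussed in the text.

```python
import math

# Harmonic-oscillator wavenumber nu = (1/(2*pi*c)) * sqrt(k/mu) (Eq. 2).
# k in mDyne/A (1 mDyne/A = 100 N/m), mu in amu.

AMU = 1.66053907e-27    # atomic mass unit, kg
C_CM = 2.99792458e10    # speed of light, cm/s

def wavenumber_cm1(k_mdyne_per_A, mu_amu):
    k = k_mdyne_per_A * 100.0   # convert to N/m
    mu = mu_amu * AMU           # convert to kg
    return math.sqrt(k / mu) / (2.0 * math.pi * C_CM)

# HRMM values from the text: close to the 1185 cm^-1 peak
print(round(wavenumber_cm1(4.4, 5.25)))
```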
These lines all belong partially to the originally distorted bulk diamond line at 1332 cm −1 , which appears at different positions in different sizes of nanocrystals. C-H vibrations on the surface of diamond-like carbon or hydrocarbon molecules are near to the present C-H vibrations [16][17][18]. As an example, C-H vibrations that nearly match the low-frequency and reduced-mass vibrations in Figure 1 include those at 700, 755, 840, 910, 1030, 1075 and 1110 cm −1 as can be seen in Table 8 of [18]. These vibrations include different modes of vibrations such as rocking, bending, twisting and wagging [16][17][18].
The end of the first part of Figure 1 (1170 to 1589 cm −1 ) is distinguished from the beginning of Figure 1 by lower reduced masses and is mainly due to C-H vibrations that are coupled with C-C vibrations in the first part of the figure. C-H vibrations of similar frequencies on the surface of diamond-like carbon that match frequencies in Figure 1 include those at 1170, 1180, 1280, 1325, 1445, 1450 and 1490 cm −1 as can be seen in Table 8 of [18].
A frequency gap in the range 1589-2920 cm −1 is seen in Figure 1 compared to a frequency gap in the range 1490-2850 cm −1 in bulk diamond-like carbon or hydrocarbon molecules [18]. The differences can be attributed to various reasons, such as an internal structure effect, scale factor effect, size effects, etc.
An important feature of the second part of Figure 1 (range 2920-3174 cm−1) is that the reduced masses are all practically equal to 1. This is a sign that all modes are C-H modes. This includes symmetric and asymmetric stretching of CH 2 and CH 3 surface clusters, and symmetric deformation of the CH 3 cluster. At the end of Figure 1 an sp 2 hybridization mode is expected [18]. As in the case of the first part of Figure 1, many of the present frequencies are very near to their analogous lines in diamond-like carbon surfaces and hydrocarbon molecular frequencies, such as the lines 2850, 2875, 2920 cm−1, etc. [18]. Some of these lines are identified experimentally in diamond nanocrystals themselves, such as the 2857, 2930, 2971 cm−1, etc., lines [7].
In order to examine the variation of the RBM and HRMM vibrational properties across the size transition from molecules to nanocrystals, we investigated adamantane (C 10 H 16 ), the smallest diamondoid, and ethane (C 2 H 6 ), the smallest hydrocarbon with C-C bonding, using the same level of theory. As we can see from Figure 2 and Figure 3, the general shape of the intensity, force constant and reduced mass data is nearly the same as in Figure 1, with the exception of a lower number of points and different peak heights. The breathing modes of adamantane and ethane are indicated in Figure 2 and Figure 3, respectively. The highest reduced-mass C-C lines are also indicated in these figures. Figure 4 shows the frequency variation of the RBM and HRMM as a function of the number of carbon atoms. In this figure we also included our theoretical results for the RBM and HRMM of diamantane (C 14 H 20 ) and triamantane (C 18 H 24 ) using the same level of theory. The experimental values of the breathing and highest reduced-mass C-C modes of ethane and diamondoids [14,19], and the limits of these modes as nanocrystals grow in size, are shown [2,20]. Since there is only one C-C bond in ethane, the breathing mode in ethane is actually a one-dimensional stretching mode. The breathing-mode frequency is inversely proportional to the mean radius of the nanoparticles, such that it approaches 0.0 cm−1 as the particles grow in size [20]. Experimental results for the HRMM C-C mode are limited. This can be attributed to the weak and gradually increasing intensity of this mode with increasing size. As an example, the theoretical intensities for the three present working examples C 2 H 6 , C 10 H 16 and C 64 H 84 are 0.0, 0.0001 and 1.1148 km/mole, respectively, as calculated by the present theory. This mode becomes the dominant mode in intensity in bulk diamond crystals at 1332 cm−1.
Force constants for this mode also increase, as can be seen in the above figures, from 2.26 to 4.42 mDyne/Å as we go from ethane to C 64 H 84 . To the best of our knowledge, the trend of the frequencies of these modes (RBM and HRMM) in Figure 4 as hydrocarbon molecules or diamond nanocrystals grow in size has not been presented previously in the literature. The two modes converge to one mode as the number of carbon atoms decreases to the 2 atoms of ethane. Since the RBM is actually an acoustical mode while the HRMM is an optical mode, Figure 4 also shows the collapse of the acoustical and optical modes of bulk diamond to one mode at the ultimate molecular scale. To the best of our knowledge this "branch collapse" has not been reported before. In view of the recent applications of diamond nanocrystals, which include medical [22] and industrial [23] applications involving the use of infrared spectroscopy, the present analysis is of particular importance. The present method, which incorporates reduced-mass and force-constant analysis, is sometimes used in solid-state physics and nanocrystals [21,24]. It is also used for large molecules, such as RNA [25,26].
Conclusion
As concluding remarks, we can note that the present theory can adequately reproduce many of the experimental data of infrared vibrational frequencies. This includes the 330-1185 cm −1 modes in the C-C vibrational region. The region around the broad peak at 500 cm −1 has pure C-C vibrations and is a sign of diamond structure in nanocrystals. The present theory reproduces adequately various C-H vibrations, which include symmetric, asymmetric, wagging, scissor, rocking and twisting modes. It also reproduces the movement of the radial breathing mode and highest reduced-mass C-C mode as nanocrystals grow in size. The variation of the highest reduced-mass mode C-C vibrations from that of ethane at 963 cm −1 to that of bulk diamond at 1332 cm −1 is shown. The variation of the radial breathing mode from that of ethane at 963 cm −1 to that of bulk diamond at 0 cm −1 is also shown and is also found to coincide with experimental values. Acoustical and optical vibrational branches of bulk diamond are proved in the present work to approach each other at the nanoscale and collapse at the molecular limit.
Molecular epidemiology of type 1 and 2 dengue viruses in Brazil from 1988 to 2001
Dengue is a mosquito-borne viral infection that in recent decades has become a major international public health concern. Epidemic dengue fever reemerged in Brazil in 1981. Since 1990 more than one dengue virus serotype has been circulating in this tropical country and increasing rates of dengue hemorrhagic fever and dengue shock syndrome have been detected every year. Some evidence supports the association between the introduction of a new serotype and/or genotype in a region and the appearance of dengue hemorrhagic fever. In order to study the evolutionary relationships and possible detection of the introduction of new dengue virus genotypes in Brazil in the last years, we analyzed partial nucleotide sequences of 52 Brazilian samples of both dengue type 1 and dengue type 2 isolated from 1988 to 2001 from highly endemic regions. A 240-nucleotide-long sequence from the envelope/nonstructural protein 1 gene junction was used for phylogenetic analysis. After comparing the nucleotide sequences originally obtained in this study to those previously studied by others, and analyzing the phylogenetic trees, we conclude that, after the initial introduction of the currently circulating dengue-1 and dengue-2 genotypes in Brazil, there has been no evidence of introduction of new genotypes since 1988. The increasing number of dengue hemorrhagic fever cases seen in Brazil in the last years is probably associated with secondary infections or with the introduction of new serotypes but not with the introduction of new genotypes.
Introduction
Dengue viruses belong to the Flaviviridae family and are transmitted to humans through the bite of female Aedes mosquitoes. As the most important arthropod-borne viral infection of humans, dengue represents an important public health problem for urban populations in the tropical and subtropical areas of the world. About 2.5 billion people in 100 countries are at risk for infection, and over 100 million cases of human infections and about 20,000 deaths occur each year. Symptomatic human infections may range from a mild, flu-like syndrome, sometimes associated with a rash (dengue fever, DF), to a more severe form of disease associated with plasma leakage, thrombocytopenia, hemorrhage (dengue hemorrhagic fever, DHF) and/or shock (dengue shock syndrome, DSS) (1)(2)(3).
The dengue virus genome is an ~11-kb single-stranded positive-sense RNA with a single open reading frame which encodes a polyprotein precursor of about 3,400 amino acid residues. Proteolytic cleavages generate 10 proteins that are detected in infected cells (C, prM, E, NS1, NS2A, NS2B, NS3, NS4A, NS4B, and NS5) (4,5). On the basis of antigenic variability, dengue viruses are classified into 4 serotypes (DEN-1 to 4). In addition to serotype classification, significant variation in genomic composition among viruses of each serotype permits genotype classification (5). Unlike other RNA viruses, some segments of the dengue virus genome have a high degree of stability where fixed mutations are common. Partial sequencing of some genomic regions has been successfully employed to determine the genetic variation of dengue viruses and to characterize genotypes within serotypes (6)(7)(8)(9)(10)(11).
The genomic regions encoding the envelope protein (E) and the nonstructural protein 1 (NS1) seem to be the most appropriate for characterizing genotypes within serotypes, especially a 240-nucleotide-long sequence spanning the E/NS1 junction (6,8,12). Studies of the evolutionary relationships of dengue viruses have revealed several major genotypes within each of the four serotypes. For DEN-1 viruses five genotypes have been described: one group representing strains from the Americas, Africa and Southeast Asia (I), a Sri Lankan group (II), a Japanese group (III), a fourth group including strains from Southeast Asia, the South Pacific, Mexico, and Australia (IV), and a fifth group composed of Taiwanese and Thai strains (V). For DEN-2 viruses, phylogenetic analysis initially identified five genotypes: strains from the Caribbean and South America (I); strains from the South Pacific, i.e., Taiwan, Philippines and New Guinea C prototype viruses and an older Thai strain (II); Vietnamese, Jamaican and Thai strains (III); isolates from Indonesia, the Seychelles, Burkina Faso and Sri Lanka (IV); and, finally, isolates from rural Africa (V) (6,8). Further studies incorporating additional DEN-2 strains resulted in the merging of genotypes II and III into one genotype (the Asian/American-Asian genotype) (13). Infection by one serotype does not protect against infection by a second serotype, and epidemiologic and laboratory studies have shown that cross-reactive immune responses, including infection-enhancing antibodies, contribute to the higher frequency of DHF/DSS in persons with sequential infections (14,15). The occurrence of DHF/DSS in some regions has been associated with the introduction of new serotypes and/or genotypes of dengue virus (6,8,13,16).
As in many other countries in Latin America, the Brazilian population has been seriously affected by dengue infections. About 80% of notified dengue cases in the Americas occurred in Brazil, and in recent years more than 1 million cases have been reported across all five Brazilian geographic regions (17). In spite of the importance of dengue as a serious public health problem in Brazil, only a few dengue viruses isolated from endemic regions have been analyzed with respect to genomic variability (6,7,9,16,(18)(19)(20).
In the present study, we analyzed partial nucleotide sequences of a significant number of DEN-1 and DEN-2 strains isolated in Brazil since 1988 from regions of high endemicity in order to determine the evolutionary relationships among them and the introduction and circulation of new genotypes that could be associated with the more severe cases seen in the last years.
Viruses
All 25 strains of dengue viruses originally analyzed in this study were isolated from acute-phase sera of patients with DF who had been infected in different states of Brazil from 1995 to 2001. Twenty-two of these strains were randomly selected from the collection of the Arbovirus Section, Adolfo Lutz Institute, São Paulo, São Paulo State, Brazil. Two other DEN-2 strains were obtained from the collection of Evandro Chagas Institute, Belém, Pará State, Brazil, and the most recent strain (D1-BRA/SP/01) was isolated by the investigators from a patient with DF living in São Paulo State. Acute-phase sera were used to infect monolayers of a mosquito cell line (C6/36, Aedes albopictus), and serotypes were identified by indirect fluorescent antibody tests using type-specific monoclonal antibodies (15F3-1 and 3H5-1) kindly donated to Adolfo Lutz Institute by the Centers for Disease Control and Prevention, Atlanta, GA, USA (21,22). All strains were submitted to a single passage in cell culture. After growth for 7 days at 28ºC, virus-infected supernatants were collected, clarified by centrifugation and stored at -70ºC until the time of use. The identification of viral strains and their distribution in Brazilian regions are listed in Table 1.
Viral RNA extraction and RT-PCR amplification
Viral RNA was extracted from 140 µl of supernatant medium of virus-infected cells using the QIAamp® Viral RNA system according to the manufacturer's protocol (Qiagen®, Chatsworth, CA, USA). Viral RNA was reverse transcribed to cDNA in a 20-µl reaction volume with the Superscript II reverse transcriptase system (Invitrogen, Carlsbad, CA, USA) and pd(N)6 random primers (Amersham-Pharmacia, Piscataway, NJ, USA). Reverse transcription was allowed to proceed at 42ºC for 50 min, followed by reverse transcriptase inactivation at 70ºC for 15 min. cDNA amplification was performed with synthetic primers as previously described (6). Both sense and antisense primers were used to amplify a 408-bp fragment of the E/NS1 junction region of the viral RNA; the 240-nucleotide segment used for genetic comparison is comprised within this fragment (nucleotides 2282 to 2521 for DEN-1 and 2311 to 2550 for DEN-2). PCR amplifications consisted of 35 cycles of denaturation (94ºC for 1 min), annealing (55ºC for 1 min) and extension (72ºC for 2 min) for both DEN-1 and DEN-2 strains in a GeneAmp PCR System 2400 thermal cycler (Applied Biosystems, Foster City, CA, USA). A final extension step was carried out at 72ºC for 10 min. Each PCR was run with positive and negative controls, and the fragments were separated by 2% agarose gel electrophoresis, stained with 1 µg/ml ethidium bromide, and detected under ultraviolet light.
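For illustration, the cycling parameters above can be captured as plain data, e.g. to sanity-check the 240-nucleotide comparison windows and to estimate the programmed block time. The run-time arithmetic is ours and ignores ramp rates; only the temperatures, hold times and coordinates come from the text.

```python
# RT-PCR cycling program for the 408-bp E/NS1 fragment, as described in the text.
CYCLES = 35
STEPS = [  # (step name, temperature in degrees C, seconds per cycle)
    ("denaturation", 94, 60),
    ("annealing",    55, 60),
    ("extension",    72, 120),
]
FINAL_EXTENSION_SEC = 600  # final extension: 72 C for 10 min

def total_minutes(cycles=CYCLES, steps=STEPS, final_sec=FINAL_EXTENSION_SEC):
    """Sum of programmed hold times (ramp rates ignored), in minutes."""
    per_cycle = sum(sec for _, _, sec in steps)
    return (cycles * per_cycle + final_sec) / 60

# The 240-nt segment used for genetic comparison lies inside the amplicon:
DEN1_WINDOW = (2282, 2521)  # inclusive nucleotide coordinates, DEN-1
DEN2_WINDOW = (2311, 2550)  # inclusive nucleotide coordinates, DEN-2
assert DEN1_WINDOW[1] - DEN1_WINDOW[0] + 1 == 240
assert DEN2_WINDOW[1] - DEN2_WINDOW[0] + 1 == 240

print(total_minutes())  # 35 * (1 + 1 + 2) min + 10 min = 150.0
```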
Sequencing
Direct nucleotide sequencing of both strands was performed with an automated Sequence Detector System (ABI Prism 310 sequencer; Applied Biosystems) using a commercially available kit (BigDye Terminator Cycle Sequencing Ready Reaction®, Applied Biosystems) according to the manufacturer's protocol. Briefly, for each sequencing reaction, 50 to 100 ng of DNA was mixed with 3.2 pmol of a sense or antisense primer, 5 µl of water and a reaction mixture containing the four dye-labeled dideoxynucleotide terminators. Sequencing reactions were performed in 25 cycles of denaturation (96ºC for 30 s), annealing (50ºC for 1 min) and extension (60ºC for 4 min) on a GeneAmp PCR System 2400 (Applied Biosystems). The final reaction mixture was purified with 75% isopropanol and the cycle-sequenced DNA was then dried in a vacuum centrifuge for 20 min. The pellet was resuspended in 20 µl of template suppression reagent (Applied Biosystems) and loaded onto the sequencer. Sequences were base-called using the DNA Sequencing Analyses software (Applied Biosystems).
Phylogenetic analysis
A data set of 72 E/NS1 junction sequences was used for comparison and phylogenetic analyses (Table 1). These data included the 25 sequences of Brazilian dengue viruses first described in the present study (16 DEN-1; 9 DEN-2) combined with 27 sequences of Brazilian dengue viruses previously reported (6 DEN-1; 21 DEN-2), and 20 global reference sequences of different genotypes of both serotypes (10 DEN-1; 10 DEN-2) deposited in GenBank. None of the viruses compared had additions or deletions in the genomic region studied. Alignments were done manually using nucleotide sequences. Phylogenetic analyses and construction of phylogenetic trees for both DEN-1 and DEN-2 strains were done using the neighbor-joining method and p-distance (MEGA software, version 2.1, Tempe, AZ, USA) (23). Sequences from representative strains of dengue serotypes 3 (strain H87, Philippines, 1956) and 4 (strain 814669, Dominican Republic, 1981) obtained from GenBank (accession numbers M93130 and M14931, respectively) were used as an outgroup to root the trees. The bootstrap method, with 500 replicates, was used to estimate the reliability of the predicted trees.
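The two quantitative ingredients of this analysis, the p-distance and bootstrap resampling of alignment columns, are simple enough to sketch in plain Python. The study itself used MEGA 2.1; the toy sequences below are hypothetical and only illustrate the definitions.

```python
import random

def p_distance(a: str, b: str) -> float:
    """Proportion of aligned sites at which two sequences differ."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

def bootstrap_columns(alignment: list, rng: random.Random) -> list:
    """One bootstrap replicate: resample alignment columns with replacement."""
    n = len(alignment[0])
    cols = [rng.randrange(n) for _ in range(n)]
    return ["".join(seq[i] for i in cols) for seq in alignment]

# Toy 10-nt 'alignment' (hypothetical sequences, for illustration only):
aln = ["ACGTACGTAC",
       "ACGTACGTTC",   # 1 difference from the first sequence -> p = 0.1
       "ACGAACGTTA"]   # 3 differences from the first sequence -> p = 0.3

print(p_distance(aln[0], aln[1]))  # 0.1
print(p_distance(aln[0], aln[2]))  # 0.3

# 500 replicates, as in the study; trees would be rebuilt on each replicate
# and node support counted as the fraction of replicates recovering the node.
rng = random.Random(0)
replicates = [bootstrap_columns(aln, rng) for _ in range(500)]
print(len(replicates))  # 500
```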
Results
In the present study, we obtained and analyzed original partial nucleotide sequences from 16 strains of DEN-1 and nine strains of DEN-2 selected to represent the viruses circulating in all five Brazilian geographical regions. All viruses had undergone only one passage in C6/36 cells, and nucleotide sequences were deposited in GenBank (accession numbers AY159257-AY159279, AY277245 and AY277246). Analysis of these nucleotide sequences in comparison to reference virus sequences revealed new mutations for both DEN-1 and DEN-2, probably representing evolutionary mutations. The majority occurred in the third base of codons, resulting in silent mutations.
All mutations in the DEN-1 sequences were silent, whereas for DEN-2, mutations in three codons resulted in amino acid changes between residues of the same non-polar hydrophobic class.
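To make the silent-versus-conservative distinction concrete, here is a small illustrative sketch. The partial codon table, the amino acid class set and the example codons are ours for demonstration and are not taken from the strains analyzed.

```python
# Minimal, deliberately PARTIAL standard genetic code table (RNA codons) --
# just enough entries for this demo, not a complete translation table.
CODON_TABLE = {
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "GUU": "Val", "GUC": "Val", "GUA": "Val", "GUG": "Val",
    "CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
    "AUU": "Ile", "AUC": "Ile", "AUA": "Ile",
}

# Non-polar hydrophobic residues, the class mentioned in the text.
NONPOLAR = {"Ala", "Val", "Leu", "Ile", "Met", "Phe", "Trp", "Pro", "Gly"}

def classify_mutation(codon_ref: str, codon_mut: str) -> str:
    """Silent if the encoded amino acid is unchanged; otherwise conservative
    when both residues belong to the same (here: non-polar) class."""
    aa_ref, aa_mut = CODON_TABLE[codon_ref], CODON_TABLE[codon_mut]
    if aa_ref == aa_mut:
        return "silent"
    if aa_ref in NONPOLAR and aa_mut in NONPOLAR:
        return "conservative (same non-polar class)"
    return "non-conservative"

print(classify_mutation("GCU", "GCC"))  # third-base change, Ala -> Ala: silent
print(classify_mutation("GUU", "GCU"))  # Val -> Ala: conservative (same non-polar class)
```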
The phylogenetic trees generated by neighbor-joining analysis of nucleotide sequences are presented in Figures 1 and 2. Some reference nucleotide sequences of different genotypes of both DEN-1 and DEN-2 viruses isolated in other countries and published in the literature, as well as nucleotide sequences from Brazilian isolates available in the GenBank database, were also included for phylogenetic purposes. Brazilian DEN-1 strains segregated into one large group along with reference strains from the Americas and the Caribbean (Colombia, Surinam, El Salvador, Haiti, and Mexico), whereas Brazilian DEN-2 strains segregated into a group along with reference strains from the Americas and Southeast Asia (Jamaica, Thailand, and Vietnam). Bootstrap values of statistical significance were obtained for these groups. There was no segregation of Brazilian DEN-1 viruses according to regions or states, but some nucleotide variations were observed between viruses from the same regions. However, for Brazilian DEN-2 viruses, most strains isolated in 1990 from the Southeast region presented a segregation pattern distinct from that of the viruses isolated more recently. The Brazilian DEN-1 strains analyzed in this study presented 97.9% similarity amongst themselves and 96.8% similarity to other samples belonging to the American/Caribbean genotype, the same genotype as the sample isolated in Brazil in 1988 (6). Brazilian DEN-2 isolates presented 97.3% similarity amongst themselves and 94.2% similarity to other samples of the Asian/American-Asian genotype, the same genotype as that of two viruses isolated in Brazil in 1990 and 1991 (16).
Discussion
Epidemics of DF reemerged in Brazil in 1981 when an outbreak caused by DEN-1 and DEN-4 viruses occurred in the Northern region (Roraima State). Subsequently, in 1986, the first outbreak of greater proportions, caused by DEN-1, occurred in the metropolitan area of Rio de Janeiro and then spread towards the urban areas in the Northeast and Midwest regions of Brazil. In 1990, a new epidemic broke out in Rio de Janeiro, now related to the introduction of DEN-2. Since then, with the spread and circulation of more than one serotype, several Brazilian regions have reported outbreaks with severe illness and deaths (17,24-26). From 1995 to 2001, an increasing number of DF and DHF cases has been observed in several urban areas of the country (17). Recently, DEN-3 has been isolated in Brazil, but its association with DHF has not been clearly established (27,28).

In a country of continental proportions and incredible heterogeneity of people and environment such as Brazil, phylogenetic studies of dengue isolates must represent all regions of the country. The present study contributes to this viewpoint in the sense that viral strains from Brazilian endemic regions poorly studied before (North and South) were also analyzed. Although all strains were isolated from patients with DF, they represent a sample of the dengue viruses circulating in Brazil during the study period, and we assume that they would also carry the ability to cause DHF/DSS. Supporting this assumption, evolutionary divergence of DF- versus DHF-associated viruses (serotype 2) from Thailand was not observed, denoting that one particular strain has the potential to cause both DF and DHF in different hosts (13).

Different genomic regions of Brazilian dengue viruses have been previously studied by comparative analysis of partial nucleotide sequences for both serotypes 1 and 2. For DEN-1, comparative analysis of a protein E gene segment (nucleotides 52 to 288) of two DEN-1 samples isolated in Brazil, one in 1986 and another in 1990, classified them into the American/Caribbean genotype (9). Analysis of the DEN-2 E protein gene (nucleotides 1685 to 2504) of twelve samples isolated between 1990 and 1995 in the States of Rio de Janeiro, Ceará, Bahia, and Alagoas classified them into the Asian/American-Asian genotype (18). Similar results were also obtained by nucleotide sequence analysis of another E protein gene segment (nucleotides 85 to 282) of three DEN-2 samples isolated in Brazil in 1990 (7). Nucleotide sequence analysis of the NS5/3'NC junction area of two DEN-1 samples isolated in 1990 and 1994 from Rio de Janeiro and São Paulo States, respectively, classified these viruses into the American/Caribbean genotype (19). Recently, a nucleotide sequence of the entire genome of a Brazilian DEN-2 virus was determined for the first time, and a phylogenetic analysis also classified it into the Asian/American-Asian genotype (29). Nucleotide sequences of the entire genome were also determined recently from four Brazilian DEN-1 viruses (30).

Regarding the E/NS1 junction region, many authorities advocate it as one of the best segments for molecular epidemiology and phylogenetic studies of dengue viruses, since it is a genomic region not involved in immune recognition/stimulation and characterized by a uniform rate of random mutations that occur most frequently in the third base of codons (silent mutations) (6,8,12). Only two DEN-1 viruses isolated in Brazil in the mid-80's and 21 DEN-2 viruses isolated in the 90's have been previously studied regarding the E/NS1 junction region (6,16,20). Comparing all these sequences with others of different origins and genotypes, it becomes clear that, in spite of the detection of new evolutionary mutations in the genome of Brazilian dengue viruses in the last 15 years, phylogenetic analysis did not identify the circulation of new genotypes for either DEN-1 or DEN-2 in our country.

Although genetic and antigenic differences in dengue virus strains have become evident, the lack of animal models of the disease has made it difficult to detect differences in virulence among dengue viruses. The most accepted hypothesis to explain the development of serious illness in some people infected with dengue virus suggests that sequential infection with different serotypes followed by antibody-dependent enhancement of infection plays a major role in the pathogenesis of DHF/DSS (2,14,15). However, DEN-1 circulating in the Americas is capable of producing DHF/DSS even in people not affected by sequential infection. In addition, despite co-circulation of several serotypes in Central America and the Caribbean since 1970, only in 1981 was the first outbreak of DHF/DSS observed in Cuba. Epidemiological studies in Latin America have demonstrated that sequential infections and co-circulation of different serotypes in a population at risk are not sufficient for DHF/DSS occurrence, and other factors related to virus, host, vector, and environment must be involved (2,31-39).

Phylogenetic studies of many different dengue virus samples have led to the association between specific genotypes and the presentation of more or less severe disease. In Cuba, the 1981 outbreak was associated with the introduction of a new DEN-2 genotype, different from the genotype that had been circulating in that country. The DEN-2 genotype circulating in Central America before 1981 was associated with DF. The introduction of a different genotype imported from Southeast Asia coincided with the appearance of DHF/DSS in four different American countries (Venezuela, Brazil, Colombia, and Mexico; 6,16). Even though specific dengue genotypes have been associated with DHF/DSS, our study shows that the growing number of DHF/DSS cases in Brazil in the last years seems not to be associated with the introduction of a new genotype. Of note, the DEN-2 genotype circulating in Brazil since the 90's is the Southeast Asian genotype, but there was no explosive outbreak of DHF/DSS during the first years after its introduction into the country. There are no clear data accurately showing when the Southeast Asian DEN-2 genotype was introduced into Brazil. Rico-Hesse et al. (16) analyzed two DEN-2 viruses isolated in Venezuela and Colombia some time around 1987, and concluded that they belonged to the Southeast Asian genotype. In Brazil, this genotype was probably introduced either at the same time or at some point during 1988, as in other areas of the Americas (6,8,16,18,20,38,40).

The present study is the largest one involving serotypes 1 and 2 of Brazilian dengue viruses regarding genomic variability. The 25 new nucleotide sequences determined in this study allowed us to expand considerably the number of Brazilian dengue virus strains evaluated so far regarding their genomic variability and molecular epidemiology. As a huge tropical country with many frontiers, a high rate of A. aegypti infestation and a growing migration flow, Brazil has been recognized as being at high risk for the introduction and circulation of new serotypes and/or genotypes of dengue virus (25,27). To date, this has been true for the introduction and circulation of new serotypes, but not for new genotypes, as demonstrated by the present study. The reasons why introduction of new genotypes did not occur in Brazil in the last years are not understood. In other tropical countries the co-circulation of two or more genotypes of dengue virus is commonplace (36). Actually, we cannot rule out the possibility that new genotypes of dengue virus have already been introduced into Brazil but, for reasons not clearly defined, they were not successful in establishing themselves and consequently spreading to the whole country. Ecological, vector and host factors may be involved and should be addressed in further studies. Efforts should be directed at obtaining the complete genomic sequence of Brazilian dengue viruses so that a more detailed comparative analysis can be done. The genomic variability of Brazilian dengue virus strains isolated from patients with DHF/DSS should also be studied. Continuous monitoring of the introduction of new serotypes as well as new genotypes in Brazil is necessary so that control measures can be promptly implemented in order to reduce the potential risk for more serious epidemics.
IAL = viruses donated by Adolfo Lutz Institute and isolated on the C6/36 Aedes albopictus cell line. IEC = viruses donated by Evandro Chagas Institute and isolated in the C6/36 Aedes albopictus cell line. SMRP = strain isolated at the School of Medicine of Ribeirão Preto. Notation for Brazilian strains: Brazil (BRA)/state (region); S = South; SE = Southeast; N = North; NE = Northeast; MW = Midwest. ¶ Sequence data obtained in the present study.

R.J. Pires Neto et al.
Figure 1. Phylogenetic relationships among dengue-1 (DEN-1) viruses. Phylogenetic tree generated by neighbor-joining analysis of nucleotide sequences from the E/NS1 junction of 32 strains of DEN-1 and representatives of serotypes 2, 3 and 4. Viruses are listed by strain abbreviation for state and year of isolation (see Table 1). Horizontal branch lengths are drawn to scale. Bootstrap values (500 replicates) are shown for some key nodes that connect the genotypic groups of DEN-1.
Figure 2. Phylogenetic relationships among dengue-2 (DEN-2) viruses. Phylogenetic tree generated by neighbor-joining analysis of nucleotide sequences from the E/NS1 junction of 40 strains of DEN-2 and representatives of serotypes 1, 3 and 4. Viruses are listed by strain abbreviation for state and year of isolation (see Table 1). Horizontal branch lengths are drawn to scale. Bootstrap values (500 replicates) are shown for some key nodes that connect the genotypic groups of DEN-2.
Table 1. Dengue virus strains compared by sequence analysis.
Extreme Lateral Interbody Fusion Complicated by Fungal Osteomyelitis: Case Report and Quick Review of the Literature
The authors describe a 67-year-old man with a prior history of alcohol abuse who presented with a complaint of worsening low back pain. Four months prior to his presentation, the patient had undergone extreme lateral interbody fusion (XLIF) of his lumbar 3-4 segment for the treatment of chronic low back and leg pain. Imaging revealed loosening of his interbody fusion implant on top of his prior lumbar spine instrumentation. In surgery, the removal of the loose implant was followed by decompression, stabilization of the collapsed segment, and implantation of an antibiotic-impregnated polymethyl-methacrylate (PMMA) spacer and beads. At a later stage, the patient underwent an interbody fusion of the affected segment as well as a segmental fusion from T10 to his pelvis. Whereas all aerobic and anaerobic stains were negative for organisms, multiple fungal smears from the failed segment were positive for yeast, and the patient was placed on oral fluconazole. Infections complicating the surgical site of interbody fusions performed by minimally invasive techniques are rare. To the best of our knowledge and after reviewing the literature, this is the first report of an extreme lateral interbody fusion implant complicated by fungal osteomyelitis.
Introduction
Minimally invasive surgery (MIS) has substantially evolved in recent years, allowing both decompression and stabilization in a variety of conditions affecting the spine [1]. Among the reported advantages of MIS over the traditional open approach is the lower incidence of surgical site infections (SSI), with some reports citing an almost six-fold decrease in the likelihood of acquiring SSI with the former [2]. Regardless of approach, the most commonly cultured pathogen remains Staphylococcus aureus, affecting more than 50% of all postoperative spine infections [3]. Here, we present a case in which interbody fusion performed using a lateral MIS approach was complicated by fungal osteodiscitis, leading to a septic loosening of the implant.
Case Presentation
History and physical examination

A 67-year-old male presented to the emergency department with complaints of worsening low back pain and a progressive inability to ambulate as well as to maintain an upright posture. No complaints of fever or bowel and bladder dysfunction were noted. The patient's past medical history was positive for alcohol abuse and pancreatitis, as well as chronic low back and bilateral leg pain. Relevant past surgical history was positive for prior L4-S1 posterior and interbody fusion performed in 2012 and a recent extreme lateral interbody fusion (XLIF) of L3-4, performed four months prior to his presentation for adjacent segment degeneration and stenosis. The physical exam revealed diffuse weakness, rated 3-4/5, of all bilateral lower extremity key muscles. The workup to rule out infection, including white blood cell count, C-reactive protein (CRP), and erythrocyte sedimentation rate (ESR), was negative. Initial diagnostic imaging consisting of a lumbar x-ray showed that the implanted L3-4 cage had developed significant cavitation around it. In addition, new compression fractures were noted at the vertebral bodies of L1 and L2 (Figure 1).
FIGURE 1: Pre-op AP (A) and lateral (B) lumbar X-rays
A previously placed XLIF cage (black asterisk) in the L3-4 disc space is surrounded by a welldemarcated cavitation (white arrowhead).
XLIF: extreme lateral interbody fusion; AP: antero-posterior
Lumbar magnetic resonance imaging (MRI) with contrast demonstrated diffuse edema and enhancement of the L3 and L4 vertebral bodies, strengthening possible infection as the primary etiologic mechanism (Figure 2). Finally, abdominal and pelvic computed tomography (CT) for ruling out a possible intra-abdominal involvement was negative.
FIGURE 2: Pre-op lumbar MRI (A,T2 sequence; B, contrast)
In addition to adjacent spinal stenosis noted in the T2 sequence (left), the L3 vertebral body shows increased contrast uptake (white asterisk), highly suggestive of infection.
Surgical treatment and postoperative course
In light of the acute infection resulting in segmental instability, the patient was planned for a two-stage intervention. In the first stage, removal of his existing L4-S1 posterior hardware was followed by spinal canal decompression, which allowed the retrieval of the loose L3-4 interbody implant as well as multiple tissue samples for culture and pathology. Spinal stabilization was achieved by placing a temporary antibiotic-impregnated polymethyl-methacrylate (PMMA) spacer in the L3-4 disc space and posterior spinal instrumentation from L2 to S1 (Figure 3).
FIGURE 3: Postoperative lumbar AP (A) and lateral (B) X-rays
Removing of the existing hardware, including the L3-4 XLIF, was followed by instrumentation from L2-S1 and the placement of a cement spacer in the L3-4 disc space.
XLIF: extreme lateral interbody fusion; AP: antero-posterior

In addition, the placement of antibiotic-impregnated PMMA beads allowed for the optimization of local control of the infection, whereas empirically administered intravenous ceftriaxone and vancomycin enabled systemic control. Specimens taken intraoperatively for aerobic and anaerobic cultures and Gram stain were negative. Surprisingly, several separate fungal smears yielded yeast, resulting in the adjustment of treatment to oral fluconazole only.
Following the uneventful surgery, the patient's back pain and ambulation progressively improved and the patient was discharged home. The complete resolution of his symptoms, as well as persistently negative CRP and ESR at ambulatory follow-up, suggested that his infection had resolved. Four months after the first stage, the patient was taken back to the operating room for a planned second stage. The removal of the PMMA spacer and beads and irrigation were followed by definitive fusion of the L3-4 segment as well as from T10 to his pelvis (Figure 4).
FIGURE 4: Coronal (A) and sagittal (B) CT images of the lumbar spine following the second stage
The temporary PMMA spacer was replaced by an interbody fusion and the previous instrumentation extended to T10 and to the pelvis.
Discussion
Interbody fusion with a cage performed by the extreme lateral approach (XLIF) has become popular in recent years for the treatment of various degenerative, traumatic, and deformity conditions affecting the spine. Similar to other MIS techniques, the procedure is not without risk, with the most commonly reported complications being neurological deficits and anterior thigh pain [4]. Surgical site infection (SSI) is yet another possible major complication, often demanding revision surgery and a prolonged hospital stay. While the incidence of SSI in open approaches has been estimated to be between 1.9% and 5.5% [5][6], MIS has been associated with a six-fold rate decrease [2]. A study looking specifically at the infection rates associated with the lateral approach found comparably low rates, with 0.27% and 0.14% rates of superficial and deep wound infections, respectively [7].
The most common pathogen to cause deep infection and vertebral osteomyelitis following spinal instrumentation is Staphylococcus aureus followed by Escherichia coli and Enterococcus faecalis [3]. Fungal infections of the spine are uncommon, usually affecting patients who are immunocompromised secondary to diabetes mellitus, chemotherapy, chronic corticosteroid use, or malnutrition. Fungal vertebral osteomyelitis following spinal surgery is an extremely rare occurrence, requiring a high clinical index of suspicion as one-third of patients with candida-related spondylitis lack fever and lab tests are usually non-specific [8].
Conclusions
In this case, a history of chronic alcohol abuse with relative malnutrition probably played a role in the patient's pathogenesis, leading to the extremely rare occurrence of fungal osteomyelitis following an MIS lateral approach intervertebral fusion. In conclusion, we suggest that in the presence of the above-mentioned patient-related risk factors, a fungal infection should be considered in the differential diagnosis, regardless of the approach and extent used.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Revisiting Non-Conventional Crystallinity-Induced Effects on Molecular Mobility in Sustainable Diblock Copolymers of Poly(propylene adipate) and Polylactide
This work deals with molecular mobility in renewable block copolymers based on polylactide (PLA) and poly(propylene adipate) (PPAd). In particular, we assess non-trivial effects on the mobility arising from the implementation of crystallization. Differential scanning calorimetry, polarized light microscopy and broadband dielectric spectroscopy were employed in combination for this study. The materials were subjected to various thermal treatments aiming at the manipulation of crystallization, namely, fast and slow cooling, isothermal melt- and cold-crystallization. Subsequently, we evaluated the changes recorded in the overall thermal behavior, semicrystalline morphology and molecular mobility (segmental and local). The molecular dynamics map for neat PPAd is presented here for the first time. Unexpectedly, the glass transition temperature, Tg, in the amorphous state drops upon crystallization by 8–50 K. The drop becomes stronger with the increase in the PPAd fraction. Compared to the amorphous state, crystallization leads to significantly faster segmental dynamics with severely suppressed cooperativity. For the PLA/PPAd copolymers, the effects are systematically stronger in the cold- as compared to the melt-crystallization, whereas the opposite happens for neat PLA. The local βPLA relaxation of PLA was, interestingly, recorded to almost vanish upon crystallization. This suggests that the corresponding molecular groups (carbonyl) are strongly involved and immobilized within the semicrystalline regions. The overall results suggest the involvement of either spatial nanoconfinement imposed on the mobile chains within the inter-crystal amorphous areas and/or a crystallization-driven effect of nanophase separation. The latter phase separation seems to be at the origins of the significant discrepancy recorded between the calorimetric and dielectric recordings on Tg in the copolymers. 
Once again, compared to more conventional techniques such as calorimetry, dielectric spectroscopy was proved a powerful and quite sensitive tool in recording such effects as well as in providing indirect indications for the polymer chains’ topology.
Introduction
Polymers are all over the world, from everyday life uses to industrial and space applications [1,2]. This is mainly due to the combination of unique properties of polymers (high performance), the relatively easy processing and low economic cost of production. During the last decades, there has been a significant growth in environmental concerns, which, in the case of polymers, is reflected in the oil-based molecular origin of the most on segmental mobility in PLA/PPAd blends as compared to neat PLA and initial PPAd. The investigation mainly involves the implementation of three states regarding crystallinity, namely, amorphous, isothermally melt-crystallized and isothermally cold-crystallized. Finally, we construct the corresponding molecular mobility (dynamics) map for these cases. This is the first recording of the molecular dynamics for initial PPAd, to the best of our knowledge.
Materials
The materials investigated here are diblock copolymers based on PLA and PPAd, synthesized in a previous work by Terzopoulou et al. [30]. Briefly, the copolymers were prepared by using an initial PPAd polymer of low M n~6 kg/mol, forming the first block, onto which the second block of PLA was build, in situ, via ring opening polymerization of L-lactide at 180 • C ( Figure 1). The samples differ in the mass ratio PLA (%)/PPAd (%) as 95/5, 85/15 and 75/25 and are listed in Table 1 along with values on the estimated average molar masses (M n ). As reference samples, we comparatively study the initial PPAd (the same as that used in the copolymer preparation) and a neat PLA prepared by a similar ROP route (Table 1). Regarding the initial scope of these copolymers, this was a success as the enzymatic degradation of PLA is significantly accelerated in the copolymers as compared to PLA in bulk.
Molecules 2022, 27, x FOR PEER REVIEW 3 of 22 crystallization protocols. We employ the combination of differential scanning calorimetry (DSC), polarized light microscopy (PLM) and broadband dielectric spectroscopy (BDS) to study the effects on segmental mobility in PLA/PPAd blends as compared to neat PLA and initial PPAd. The investigation mainly involves the implementation of three states regarding crystallinity, namely, amorphous, isothermally melt-crystallized and isothermally cold-crystallized. Finally, we construct the corresponding molecular mobility (dynamics) map for these cases. This is the first recording of the molecular dynamics for initial PPAd, to the best of our knowledge.
Table 1. The samples studied (sample code name, Mn in g/mol) [30]. Included are the values for the selected temperatures of isothermal annealing of crystallization, Tanneal,mc and Tanneal,cc, for melt- and cold-crystallization, respectively.
Differential Scanning Calorimetry
The glass transition and crystallization of the block copolymers as well as of initial PLA and PPAd were assessed by DSC. To that aim, we employed a TA Q200 calorimeter (TA, New Castle, DE, USA), combined with a liquid nitrogen control system. The instrument had been calibrated with indium for temperature and enthalpy and with sapphires for heat capacity. The thermograms were recorded in N2 atmosphere of high purity (99.995%) and within the range from −110 to 200 °C. In total, five (5) scans were performed, schematically described in Figure 2, first involving a heating scan for erasing thermal history (Scan 1), and four main scans aiming at manipulating crystallization. In particular, in Scan 2, the melted samples were cooled at 10 K/min; in Scan 3, the melted samples were cooled at the highest achievable rate in order to eliminate crystallization; while in Scans 4 and 5, the samples were subjected to isothermal melt- and cold-crystallization annealings at selected temperatures (Tanneal,mc and Tanneal,cc, respectively), different for each sample and chosen based on the results of the previous Scans 2 and 3. Details on the selection of Tanneal are given along with the experimental results. The values for Tanneal are listed in Table 1. Upon each crystallization treatment, the sample was cooled to −110 °C and subsequently a final heating scan was recorded. The heating rate was fixed for all scans at 10 K/min.

The characteristic temperature of the glass transition step, Tg, was estimated from the heating curve as the point of half increase in the heat capacity. Crystallization and melting events were evaluated in terms of peak temperature maxima, onsets and enthalpy changes (ΔH in J/g). The crystalline fraction, CF, was estimated from both the melt- and cold-crystallization peaks, both isothermal and non-isothermal, by comparing the corresponding crystallization enthalpy, ΔH, with the theoretical heat of fusion of a 100% crystalline PLA, ΔH100%,PLA, usually taken as 93 J/g [42], according to Equation (1):

CF = (ΔH / ΔH100%,PLA) × 100%    (1)
Please note that, more recently, compared to the work by Fischer et al. [42], more alternative values for ΔH100%,PLA have been reported, considering the individual crystal polymorphs of PLA (α and α′) [43,44].
Polarized Light Microscopy
The PLM technique was employed to follow the effect of the copolymer composition on the semicrystalline morphology. PLM micrographs were recorded isothermally during melt- and cold-crystallization at the same Tanneal,mc and Tanneal,cc as those employed in DSC (Table 1). The micrographs were recorded by means of a Nikon Optiphot-1 polarizing microscope equipped with a Linkam THMS 600 heated stage, a Linkam TP91 control unit and a Jenoptik Gryphax Arktur camera.
From the PLM data, the spherulitic growth rate was followed during isothermal melt-crystallization, at the same Tanneal,mc as in DSC. In the isothermal crystallization step, a minimum of three spherulites were followed during their free growth, before they impinged on one another. The radius of each spherulite was then measured and plotted as a function of time. The slope of this plot represents the spherulitic growth rate (G) at the selected Tanneal,mc. The latter estimation was technically impossible for the case of cold-crystallization.
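The radius-versus-time procedure above can be sketched numerically. This is a minimal illustration, not the authors' script, and the radius readings below are hypothetical example values chosen to mimic a PLA-like growth rate:

```python
# Estimating the spherulitic growth rate G as the least-squares slope of
# spherulite radius vs. time, as described in the text.

def linear_slope(t, r):
    """Ordinary least-squares slope of r(t); here, the growth rate G."""
    n = len(t)
    t_mean = sum(t) / n
    r_mean = sum(r) / n
    num = sum((ti - t_mean) * (ri - r_mean) for ti, ri in zip(t, r))
    den = sum((ti - t_mean) ** 2 for ti in t)
    return num / den

# Hypothetical radius readings of one spherulite, taken every 0.1 min
times = [0.0, 0.1, 0.2, 0.3, 0.4]          # min
radii = [5.0, 32.0, 59.0, 86.0, 113.0]     # micrometres

G = linear_slope(times, radii)             # micrometres/min
print(f"G = {G:.0f} um/min")
```

In practice, the slope would be averaged over at least three freely growing spherulites, as stated above.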
Broadband Dielectric Spectroscopy
The molecular mobility, with emphasis on segmental mobility, was investigated by BDS [45], employing a Novocontrol BDS setup (Novocontrol GmbH, Montabaur, Germany). Pieces of the produced samples were initially placed and melted between finely polished brass electrodes of 14 mm in diameter. Silica spacers of ~100 µm in thickness were used in order to prevent the electrical contact of the electrodes and to keep them parallel to each other. Consistently with the above techniques, three thermal protocols were adopted for BDS, namely, preparing (1) amorphous samples by melting and fast cooling, (2) isothermally melt-crystallized samples at Tanneal,mc and (3) isothermally cold-crystallized samples at Tanneal,cc. For each sample and protocol, the complex dielectric permittivity, ε* (Equation (2)), was recorded in N2 (g) flow, isothermally as a function of frequency, f, in the range from 10^-1 to 10^6 Hz and in the temperature range between −150 and 120 °C, upon heating in steps of 5 and 10 K.

ε*(f) = ε′(f) − iε″(f)    (2)

The permittivity spectra are mainly complex as they consist of multiple contributions (relaxation mechanisms). Therefore, the spectra were analysed by the fitting of model functions, mainly the Havriliak-Negami (HN) function [45,46] (Equation (3)).
Therein, ε∞ describes the value of the real part of the dielectric permittivity, ε′, for f >> f0, Δε is the dielectric strength, f0 is a characteristic frequency related to the frequency of maximum dielectric loss, and αHN and βHN are shape parameters, for width and symmetry, respectively. Upon this analysis, we constructed the timescale map of the local and segmental relaxations. The local processes usually obey the Arrhenius equation [45,47] (Equation (4)), as they exhibit a temperature-independent activation energy, Eact. On the other hand, the segmental relaxations, related to the glass transition, demonstrate a different timescale due to their cooperative character, usually described by the Vogel-Fulcher-Tammann-Hesse (VFTH) expression [45,48] (Equation (5)), within which D is the so-called fragility strength parameter [48] and is related to the measure of cooperativity, namely, the fragility index, m (Equation (6)).
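For reference, the standard forms of the HN, Arrhenius, VFTH and fragility expressions referred to above as Equations (3)-(6), written with the symbols defined in the text (a reconstruction in common notation, e.g., following [45,48], not a verbatim copy of the authors' typesetting):

```latex
% Havriliak–Negami function (Eq. (3)):
\varepsilon^{*}(f) = \varepsilon_{\infty}
  + \frac{\Delta\varepsilon}{\left[1 + \left(\mathrm{i} f/f_{0}\right)^{\alpha_{\mathrm{HN}}}\right]^{\beta_{\mathrm{HN}}}}

% Arrhenius equation for local (secondary) relaxations (Eq. (4)):
f_{\max}(T) = f_{0}\,\exp\!\left(-\frac{E_{\mathrm{act}}}{R\,T}\right)

% VFTH expression for cooperative (segmental) relaxations (Eq. (5)):
f_{\max}(T) = f_{0}\,\exp\!\left(-\frac{D\,T_{0}}{T - T_{0}}\right)

% Fragility index m from the VFTH parameters (Eq. (6)):
m = \left.\frac{\mathrm{d}\,\log_{10}\tau}{\mathrm{d}\,(T_{g}/T)}\right|_{T=T_{g}}
  = \frac{D\,T_{0}\,T_{g}}{\ln(10)\,(T_{g}-T_{0})^{2}}
```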
Crystallization and Glass Transition
In Figure 3, we present the calorimetric results for Scans 2 and 3, upon erasing any thermal history. During cooling at 10 K/min (Figure 3a), all copolymers crystallize at ~100 °C, similarly to neat PLA. Initial PPAd crystallizes at −10 °C, exhibiting an enthalpy change of ~21 J/g. During the subsequent heating at 10 K/min, PPAd exhibits a Tg of −61 °C, with a corresponding heat capacity change Δcp = 0.42 J/(g·K). On the other hand, the Tg of neat PLA equals 43 °C. The addition of PPAd in the copolymers results in a decrease in the Tg from 39 °C down to the quite low value of −42 °C. This suppression, along with the absence of a second glass transition step in the copolymers, suggests the homogeneity (no significant micro-phase separation) of the copolymers, which was actually the goal of said synthesis.
The results shown in Figure 3b involve the effects of the copolymer composition as well as the effects of crystallinity on Tg. Therefore, to assess the direct effects of composition on Tg, we performed measurements on amorphous samples. These are shown in Figure 3c, i.e., via the heating scan recorded upon a prior fast cooling. With the increase in PPAd content from 0 to 25%, the glass transition temperature decreases from 51 down to 11 °C.
The drop in the Tg of PLA by the addition of PPAd on the same polymer chain was rationalized in terms of the plasticization effect of the small PPAd blocks. The effect is also facilitated by the overall shortening of the copolymer chains [26]. The surprising effect is the further lowering of Tg with the implementation of crystallinity, as, conversely, the presence of crystals would be expected to hinder the diffusion of the polymer chains and, subsequently, to elevate the Tg [16,49-51] (and references therein). So far, these results confirm previous recordings on the same systems [26], which actually generated the interest for the present 'follow-up' study.
Based on the results of Scan 2 (Figure 3a), in particular, from the temperature range wherein the sample is neither melted nor has crystallization begun, we have selected suitable temperatures (Table 1) to perform the isothermal melt-crystallization annealings, Tanneal,mc. Then, based on the results of Scan 3 (Figure 3c), we have chosen suitable temperatures to perform the isothermal cold-crystallization annealings, Tanneal,cc (Table 1), namely, above Tg and below the event of cold-crystallization. In particular, the Tanneal values were chosen as the temperature just before (3-5 K) the initiation of each non-isothermal crystallization event. The corresponding DSC results are shown in Figure 4.
In Figure 4a,c, we present the time evolutions of crystallization, while in Figure 4b,d, we show the subsequent heating scans. Almost all systems, including neat PLA, exhibit similar crystallization rates. The exception to this behaviour is the cold-crystallization of 75/25, which is somewhat retarded. These effects are actually due to the different Tanneal selected for the different samples; thus, the results on the crystallization rate should not be compared between the different samples for drawing conclusions on the direct effect of the copolymeric structure on the nucleation and crystal growth (e.g., via the performance of Avrami analysis [52]).
For the clearest conclusions, the results of Figure 4 (Scans 4 and 5) have been evaluated and are discussed in terms of the crystalline fraction (CF, Equation (1)), and the overall results (Scans 2-5) in terms of characteristic temperatures, namely, the crystallization temperature, Tc, the onset of crystallization, Tc,onset, the cold-crystallization temperature, Tcc, the onset of cold-crystallization, Tcc,onset, and the glass transition temperature. Please note that the Tg is always estimated from the corresponding heating scan.
It is important to note that, based on the results of Figure 3a, the temperature range of the crystallization of the copolymers coincides with that of neat PLA, not PPAd. Therefore, we have concluded that the recorded crystallization should involve the PLA-rich phases. Thus, in Equation (1), the crystallization enthalpy (ΔH) used for the calculations has been normalized to the PLA mass content, wPLA (ΔH/wPLA). Otherwise, it would not be correct to compare the recorded ΔH to the heat of fusion of PLA.
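The normalization step can be made concrete with a minimal numerical sketch. ΔH100%,PLA = 93 J/g is taken from [42] as in the text; the measured enthalpy and mass fraction below are hypothetical illustration values, not data from this work:

```python
# Crystalline fraction per Equation (1), with the measured enthalpy normalized
# to the PLA mass content (dH / w_PLA), as described in the text.

DH_100_PLA = 93.0  # J/g, theoretical heat of fusion of 100% crystalline PLA [42]

def crystalline_fraction(dH, w_PLA, dH_100=DH_100_PLA):
    """CF (%) = (dH / w_PLA) / dH_100 * 100, i.e. Eq. (1) after normalizing
    the crystallization enthalpy dH (J/g) to the PLA mass fraction w_PLA."""
    return (dH / w_PLA) / dH_100 * 100.0

# Hypothetical example: 35 J/g measured for a 75/25 copolymer (w_PLA = 0.75)
cf = crystalline_fraction(35.0, 0.75)
print(f"CF = {cf:.1f} %")
```

Without the wPLA normalization, the same enthalpy would understate CF of the PLA-rich phase by a factor equal to the PLA mass fraction.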
In Figure 5, we follow the effects of the PPAd loading and of each thermal treatment on CF. CF = 0 upon the fast cooling, which denotes that the cooling rate of ~100 K/min (in the temperature range of the expected crystallization) is sufficient to eliminate crystallization [21,23]. However, this rate is not sufficient to prevent nucleation, which is also expected for conventional cooling in PLA [23]. For Scans 3-5, in Figure 5a, CF drops monotonically by ~10% with the addition of PPAd. The same happens with the characteristic temperatures of crystallization (melt and cold) in Figure 5b, which drop by about 10-40 K (depending on the thermal treatment). The results suggest that both the amounts as well as the rates of crystallization (nucleation/lamellae packing) are lower/hindered in the copolymers. The retarded lamellae packing should have an impact on the quality (density and/or size) of the spherulites. This is partly confirmed by the lowering of the melting temperature, Tm, with PPAd loading (see Figures 3b,c and 4b,d). The situation seems to be more complex for the case of Scan 2 (non-isothermal crystallization), within which both CF and Tc do not seem to vary significantly. In our previous work [26] on the same systems, we presented PLM results for the crystallization during the cooling of Scan 2, which showed non-systematic effects on the rate of crystallization (faster for PLA and 85/15 and slower for 95/05 and 75/25), while the final size of the spherulites was found to increase in the copolymers. Additionally, compared to most scans, the isothermal cold-crystallization has resulted in both lower CF and Tc. This is most probably due to the poor mobility of the chains during crystallization, which is reflected in the doubled (on average) crystallization times needed (Figure 4c), compared to the case of melt-crystallization.
From the results of PLM (Figure 6) during melt-crystallization, we were able to estimate the spherulitic growth rate, G, which is presented in Figure 7. G increases in the copolymers from ~270 (neat PLA) up to ~520 µm/min (75/25) with the increase in PPAd. The effect could be correlated with the easier diffusion of the chains, as manifested by the lowering of Tg.
Regarding the semicrystalline morphology upon the isothermal cold-crystallization (right side of Figure 6), we could not draw conclusions on the effects on the size of the formed spherulites. However, from a more careful look at the data and upon repeating the measurements on various spots and different samples, we observed that, contrary to PLA, 95/05 and 85/15, in the case of 75/25 the formed spherulites are smaller and the sample volume is not completely filled with crystals. The latter also seems true in the case of melt-crystallization. We will come back to this point later. Finally, we observe for the copolymers that the spherulites exhibit the so-called 'ring-banded' structure [53]. This is quite clear in 85/15 and 75/25. The phenomenon is expected when the polymer chains consist of both crystallizable and non-crystallizable segments and are of low Mn, in general, such as in our case. Ring-banded spherulites have been observed for both PLA [54] and poly(butylene adipate) [55].
We may come now to the most interesting effect recorded herein. The basis for this discussion is Figure 8. Therein, we have plotted together the heating curves of Scans 3, 4 and 5 (Figure 8a). The results clearly show that the glass transition step is sharp and strong (high Δcp) in the case of the amorphous samples. Upon crystallization (both types), the glass transition becomes broader and weaker, which is expected; nevertheless, it migrates toward lower temperatures. In terms of Tg, Figure 8b shows that, whereas in the amorphous state the addition of PPAd leads to a maximum drop of Tg by ~40 K (75/25), the implementation of crystallinity additionally lowers the Tg by 8 K in PLA, 15 K in 95/05 and ~50 K in 85/15 and 75/25. The effect seems controversial considering the expected effects of crystallinity, usually imposed on conventional polymers, hindering the mobility of the chains and elevating the Tg [16,49-51]. Before attempting to provide physically rational explanations for this, we will discuss the results in terms of the molecular dynamics obtained via BDS and the corresponding critical analysis.
Molecular Mobility (BDS)
In Figures 9-11, we present raw BDS data in various forms. The molecular mobility is usually assessed in BDS by following the imaginary part of the dielectric permittivity, ε″, which is related with the dielectric loss [45,56]. An example of raw ε″(f) is shown in Figure 9a for initial PPAd. At T < Tg, the dipolar relaxation mechanisms recorded as peaks in ε″(f) are considered to arise from local molecular motions of the corresponding polar groups. These secondary relaxation mechanisms are generally named β, γ, δ, etc. [45]. Then, as T increases and approaches Tg, the dielectric signal increases by one or more orders of magnitude and the strong main relaxation enters the frequency window. This is the dielectric analogue of the glass transition, usually called 'α relaxation', as it monitors the segmental mobility of the polymer chains via the relaxation of the dipoles perpendicularly distributed on the main polymer chain [45].
In Figure 9a, we can follow, by the naked eye, the local γPPAd located between 10^2 and 10^3 Hz at −110 °C and the segmental αPPAd located between ~10^2 and 10^3 Hz at −50 °C for PPAd. The relaxation peaks migrate toward higher frequencies upon increasing temperature (increasing the provided thermal energy) due to the acceleration of the corresponding molecular groups. To facilitate a more direct comparison with calorimetry, the raw BDS data (isothermal curves, Figures 9a and 10) can be replotted in the form of 'isochronal' ε″(T) curves. These are shown in Figure 9b for all samples in the initially amorphous state and at f ~ 125 Hz. Therein, PLA exhibits a local and a segmental relaxation, βPLA and αPLA, respectively.
Regarding the situation in the copolymers, the recorded relaxations can be followed in the isothermal (Figure 10) and isochronal (Figure 11) plots, which are used for comparisons both between the different copolymer compositions and between the amorphous and semicrystalline states.
Since the copolymers' spectra are more complex than those of the individual homopolymers, we have performed a critical analysis of the spectra and constructed the molecular mobility maps. These maps show the timescale of all relaxations (i.e., the dynamics), in terms of the peak frequency maxima, logfmax, against the inverse temperature, 1000/T (otherwise called Arrhenius plots).
In Figure 12, we present the overall maps for the initial PPAd and neat amorphous PLA. Therein, for PLA, βPLA is shown, a local process originating from fluctuations or twisting motions of the -C=O group at the backbone of PLA (inset to Figure 12) [57-59]. βPLA exhibits linear behavior (obeying the Arrhenius law) and an activation energy of ~50 kJ/mol. At higher temperatures, the αPLA is recorded, with its timescale points well fitted by the VFTH equation (curved line in Figure 12), denoting the cooperative character of the main relaxation. The fragility index for αPLA was estimated as mα = 178. Please also note the good agreement between the BDS points of the αPLA relaxation and the calorimetric Tg (=51 °C, DSC line in Figure 12). From the BDS points, we may estimate the 'dielectric glass transition temperature' as the point where the extrapolation of the VFTH fitting meets the equivalent frequency of DSC, feq, with logfeq ~ −2.8, as the relaxation time in DSC is ~100 s. This way, Tg,diel was estimated as ~50 °C for PLA.
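The Tg,diel extrapolation and the fragility index described above can be sketched numerically. The VFTH parameters below (f0, D, T0) are illustrative assumptions, not the fitted values for PLA in this work; given such parameters, one solves log fmax(T) = logfeq ≈ −2.8 for T and evaluates m from Equation (6):

```python
import math

# VFTH: f_max(T) = f0 * exp(-D*T0 / (T - T0)); hypothetical parameters only.
log_f0 = 12.0   # log10 of the pre-exponential frequency (Hz)
D = 5.0         # fragility strength parameter
T0 = 250.0      # Vogel temperature (K)

log_feq = -2.8  # DSC-equivalent frequency (relaxation time ~100 s)

# log10 f_max(T) = log_f0 - D*T0 / (ln(10)*(T - T0)); solved for T at log_feq:
Tg_diel = T0 + D * T0 / (math.log(10) * (log_f0 - log_feq))

# Fragility index (Eq. (6)): m = D*T0*Tg / (ln(10)*(Tg - T0)^2)
m = D * T0 * Tg_diel / (math.log(10) * (Tg_diel - T0) ** 2)

print(f"Tg,diel = {Tg_diel:.1f} K, m = {m:.0f}")
```

A higher m (as the 178 reported for αPLA) corresponds to a steeper, more 'fragile' departure from Arrhenius behavior near Tg.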
In Figure 12, we present the overall maps for the initial PPAd and neat amorphous PLA. Therein, for PLA, βPLA is shown, a local process originating from fluctuations or twisting motions of the -C=O group at the backbone of PLA (inset to Figure 12) [57][58][59]. βPLA exhibits linear behavior (obeying the Arrhenius law) with an activation energy of ~50 kJ/mol. At higher temperatures, αPLA is recorded, with its timescale points well fitted by the VFTH equation (curved line in Figure 12), denoting the cooperative character of the main relaxation. The fragility index for αPLA was estimated as mα = 178. Please note also the good agreement between the BDS points of the αPLA relaxation and the calorimetric Tg (= 51 °C, DSC line in Figure 12). From the BDS points, we may estimate the 'dielectric glass transition temperature' as the point where the extrapolation of the VFTH fit meets the equivalent frequency of DSC, feq, with log feq ~ −2.8, since the relaxation time in DSC is ~100 s. In this way, Tg,diel was estimated as ~50 °C for PLA.
To the best of our knowledge, the full-range timescale map for PPAd is presented here for the first time. In Figure 12, the fastest relaxation is γPPAd, which is expected to screen the most localized mobility of the polymer. Unfortunately, data in the literature on the dynamics of this relatively new class of polymers are still limited. By comparing the structure-dynamics of PPAd with other polymers exhibiting partial structural similarities, such as poly(ethylene glycol)s and poly(n-alkylene acrylate)s [60][61][62][63], we have suggested [26] that γPPAd could arise from crankshaft motions of methylene sequences at the chain backbone (inset scheme to Figure 12). γPPAd follows the Arrhenius trend and the corresponding Eact is ~43 kJ/mol. At higher temperatures, another local-like relaxation was revealed, only via the fitting, as the process is quite weak. Due to the latter, this relaxation could not be resolved (safely) within the copolymers [26]. This process is named βPPAd. The timescale points of βPPAd are almost identical to those of βPLA; therefore, we suspect that it has similar molecular origins, as PLA and PPAd both carry backbone -C=O groups (Figure 12, inset scheme). Results from previous works on the similar polymer poly(butylene adipate) [32,39] support the proposed origins of βPPAd. At T ≥ Tg, the segmental αPPAd is recorded. We should report, from the methodological point of view, that αPPAd was fitted by an asymmetric HN term (Equation (3)) with αHN ~ 0.6-0.7 and βHN ~ 0.6. As in the case of neat PLA, the dielectric and calorimetric Tg values of initial PPAd, −62 and −61 °C, respectively, are quite alike.
Finally, another more retarded and weaker relaxation located close to αPPAd was resolved. This relaxation can also be identified by the naked eye, e.g., in Figures 9b and 11c,d, as a shoulder of αPPAd. The process could be fitted by a symmetric HN term (βHN = 1) with αHN ~ 0.4-0.6. In Figure 12, its timescale denotes cooperative character, whereas its extrapolation to the feq of DSC meets the region of the calorimetric Tg. These facts denote that this process depends on segmental mobility and is either coupled with αPPAd or a modified version of αPPAd. Thus, the process is named here α′. Recalling the low Mn ~ 6 kg/mol of neat PPAd, this 'coupling' of αPPAd and α′ resembles the situation between the main relaxation and the so-called Normal Mode relaxation [64][65][66]. The Normal Mode arises from the fluctuation of the polymer chain end-to-end vector [57] and is mainly recordable for short polymer chains (low Mn), such as those in our case. However, Normal Mode relaxations are generally stronger and narrower (higher αHN) than α′. More work is needed to shed light on the molecular origins of α′.
At this point, we turn the focus onto the copolymers. The changes imposed on the local dynamics in the PLA/PPAd copolymers have been discussed in our previous work [26]; the main focus here is on the segmental dynamics. Before that, however, we should comment on βPLA. In Figures 10b,d and 11a, βPLA is quite strong in the amorphous state. Upon melt-crystallization, strikingly, the relaxation is almost eliminated. This suggests that the corresponding molecular group (the backbone carbonyl) is strongly involved in, and immobilized within, the semicrystalline regions.
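As an illustration of the two fitting procedures used above, the Arrhenius slope for a local relaxation and the extrapolation of a VFTH fit down to the DSC-equivalent frequency, the following minimal sketch may help; all parameter values are hypothetical, not the fitted values of this work.

```python
import numpy as np

R_GAS = 8.314  # gas constant, J/(mol K)

def arrhenius_ea(T_K, f_max_Hz):
    """Activation energy (kJ/mol) from the slope of ln(f_max) vs 1/T."""
    slope = np.polyfit(1.0 / np.asarray(T_K), np.log(np.asarray(f_max_Hz)), 1)[0]
    return -slope * R_GAS / 1000.0

def vfth_logf(T_K, logf0, B, T0):
    """VFTH law in the form log10(f_max) = logf0 - B / (T - T0)."""
    return logf0 - B / (T_K - T0)

def tg_dielectric(logf0, B, T0, logf_eq=-2.8):
    """Temperature (K) where the VFTH extrapolation meets log f_eq,
    the equivalent frequency of DSC (relaxation time ~100 s)."""
    return T0 + B / (logf0 - logf_eq)
```

For example, synthetic Arrhenius data generated with Eact = 50 kJ/mol are recovered by `arrhenius_ea` to within the fit precision.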
Interestingly, upon cold-crystallization, βPLA is suppressed as compared to the amorphous state, however to a lesser extent than in the melt-crystallized state. This difference can be rationalized by the lower CFcc as compared to CFmc (Figure 5a) and/or by the expected looser lamellar packing in the case of cold-crystallization. In previous excellent works by Ezquerra and co-workers [39,67], who studied polyesters including poly(butylene adipate), similar effects on the local relaxations were revealed and assigned to changes in the chain-chain associations, also related to crystallinity.
Focusing on the segmental mobility, we turn to the comparative dynamics maps of Figure 13, which show that in all cases the involvement of crystallization leads to accelerated segmental mobility. In neat PLA (Figure 13a), cold-crystallization accelerates αPLA, while melt-crystallization accelerates it even more. Simultaneously, the cooperativity of αPLA seems to vanish, as manifested by the transition from VFTH behavior (amorphous) to linear Arrhenius-like behavior (semicrystalline). Qualitatively similar effects are recorded for the α relaxation in the copolymers, αcopol, although the ordering is opposite to that in neat PLA: in the copolymers, cold-crystallization leads to more extensive acceleration than melt-crystallization. Obviously, the strength of the main relaxation is suppressed upon crystallization in all cases. Due to this strength suppression, we were able to distinguish double segmental dynamics in the semicrystalline copolymers 85/15 and 75/25. We recall that, in the amorphous state, only 75/25 exhibited two relaxations, αcopol and αPPAd [26]. This was evaluated as an indication of nanophase separation in 75/25 [26].
In Figure 14, we summarize the overall dynamics data in terms of Tg,diel (Figure 14a) and fragility (cooperativity, Figure 14b). The impact of PPAd addition is systematically an acceleration of the dynamics and a suppression of the fragility (mainly the vanishing of cooperativity). These effects are strongly enhanced, i.e., act in the same direction, when crystallization is implemented.
These are non-trivial effects imposed by crystallization, at least for conventional polymers/homopolymers, as the presence of crystallites is considered a factor that tends to decelerate the segmental mobility [68].
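For reference, the fragility (steepness) index plotted in Figure 14b follows directly from the VFTH parameters. The following is a minimal sketch, assuming the common form τ(T) = τ0·exp[B/(T − T0)] and the definition m = d log10 τ / d(Tg/T) at T = Tg, with purely hypothetical parameter values.

```python
import math

def tg_from_vfth(tau0, B, T0, tau_g=100.0):
    """Glass transition taken as the temperature (K) where tau(T) = 100 s,
    for tau(T) = tau0 * exp(B / (T - T0)) with B and T0 in K."""
    return T0 + B / math.log(tau_g / tau0)

def fragility(tau0, B, T0):
    """Steepness index m = d(log10 tau) / d(Tg/T) at T = Tg, which
    reduces analytically to B * Tg / (ln(10) * (Tg - T0)**2)."""
    Tg = tg_from_vfth(tau0, B, T0)
    return B * Tg / (math.log(10.0) * (Tg - T0) ** 2)
```

Strong (Arrhenius-like) behavior corresponds to the lower bound m ≈ 16, while values such as mα = 178 reported above for amorphous PLA mark a fragile, cooperative relaxation.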
We should keep in mind that, upon crystallization, the fraction of mobile amorphous PLA decreases. Thus, to a first approximation, the plasticization effect of PPAd within the amorphous areas could be more pronounced, as compared to the fully amorphous state (Scan 3). One scenario that could further explain the overall data on mobility and crystallization is in terms of spatial confinements and constraints imposed by crystallization on the amorphous PLA and PLA-PPAd chains. The spatial confinement would have a strong impact when the amorphous zones formed between the crystallites are of dimensions comparable to the cooperativity length, ξ [16,69], namely, a few nanometers [70][71][72][73][74]. Such values cannot be easily checked, in particular the dimensions of the inter-chain distances. The advanced structural characterization technique of small-angle X-ray scattering could provide further insight on this point [75,76].
The results shown in Figures 13 and 14 can also be discussed from an alternative point of view for the copolymers. It is interesting that, upon crystallization, the segmental dynamics tend to approach the fast dynamics of the initial PPAd. Considering together the mobility maps, the shape parameters and the temperature dependences of αcopol (not shown), we suspect that upon crystallization the mobility of PLA (i.e., the contribution of αPLA to αcopol) vanishes. The effect is stronger for cold-crystallization in the copolymers. Due to that, a weaker relaxation was revealed, the timescale of which resembles that of αPPAd. This could be the modified, decelerated version of bulk αPPAd in the copolymers [26,33]. According to this scenario, partly compatible with the previous one, the majority of PLA is 'dielectrically immobilized' within the formed crystals, and both the local backbone groups and the overall chains are dielectrically inactive (vanishing of βPLA and αPLA). In this context, the PLA-like segmental dynamics disappears and the 'semicrystalline' α relaxation is dominated by the PPAd phases. This implies that crystallization of the PLA-rich phases leads to some kind of further phase separation of the PPAd segments, which are more active, at least dielectrically.
In this context, it is worth noting, from the methodological point of view, the significant mismatch between the calorimetric and the dielectric Tg in the case of the copolymers upon crystallization, shown in Figure 14a. Despite the general comment that the two techniques 'in principle' follow different modes (thermal events vs. dielectric relaxations) [56,59,77], we gain indications that, most probably, DSC is able to record the amorphous part of PLA (high calorimetric Tg), whereas BDS is not. Thus, Tg,diel in the semicrystalline state mainly follows the dynamics of PPAd (lower Tg in Figure 14a).
A final point worthy of discussion refers to the ionic conductivity effects in Figures 10 and 11. Therein, a sharp increase in the dielectric signal is recorded at temperatures well above Tg. As mentioned previously, this originates from the transport of small charges (ions) throughout the sample. Obviously, the transport of ions can take place only via the rubbery domains. In the copolymers, the ionic conductivity always dominates the signal at T > Tg of the copolymer. In none of the samples, in either the amorphous or the semicrystalline state, do we record a contribution of the ionic conductivity at lower temperatures, in particular at T > Tg of initial PPAd. This more macroscopic observation suggests, on the one hand, that the PLA-PPAd distribution is excellent and, on the other hand, that in no state is there continuity of the pure PPAd phase throughout the copolymer's volume. A similar situation had been recorded in a previous work on PLA/poly(butylene adipate) diblock copolymers [32]. On the contrary, in polymeric blends of PLA and poly(ethylene adipate) (PEAd), exhibiting partial miscibility, we recorded significant continuous paths of PEAd throughout the blend volume, as manifested by strong ionic conductivity arising from PEAd [33].
Conclusions
Diblock copolymers of PPAd and PLA, prepared by ROP of lactic acid onto PPAd segments of low Mn, were studied here regarding crystallization-induced effects on molecular mobility. In the amorphous state, the materials were found to be homogeneous, with PPAd playing a plasticizing role and Tg systematically decreasing with the PPAd content. Upon carefully chosen crystallization treatments, isothermal melt- and cold-crystallization as well as non-isothermal crystallization, an interesting effect was revealed: Tg is significantly suppressed in the presence of crystals, by 8 to 50 K. This effect is, at first glance, controversial, as crystals usually hinder the chains' mobility, leading to an elevation of Tg. The Tg drop was, interestingly, found to be facilitated by increasing PPAd amount in the copolymers. For the highest amount of PPAd (25%), partial phase nano-separation was revealed, only by BDS [26], via the individual recording of the weak αPPAd relaxation next to the bulk-like αcopol. This is proposed here to be responsible for the formation of smaller crystals, not able to completely fill the whole sample volume. The phase separation was also recorded upon crystallization, for 25% and 15% PPAd, as the PLA-originating dielectric response was suppressed overall, while that of PPAd rose 'artificially'. Upon crystallization, the fragility index of the segmental dynamics was found to decrease severely and, in some cases, the cooperativity even vanished. The overall effects suggest the involvement of spatial nanoconfinement of the amorphous polymer between the spherulites or, in a more complex situation, additional PLA/PPAd separation driven by crystal formation. Overall, crystallization seems to make PPAd the dominant polymer over PLA in terms of the mobility of the copolymers.
This was found true for both the segmental and the local mobility; for example, the local βPLA is quite strong in the amorphous state and almost vanishes upon crystallization. This scenario enabled us to rationalize the significant discrepancy in Tg as recorded between DSC and BDS, with the dielectric Tg being lower than the calorimetric one in all copolymers.
A Parallel Restoration for Black Start of Microgrids Considering Characteristics of Distributed Generations
The black start capability is vital for microgrids, which can potentially improve the reliability of the power grid. This paper proposes a black start strategy for microgrids based on a parallel restoration strategy. Considering the characteristics of distributed generations (DGs), an evaluation model, which is used to assess the black start capability of DGs, is established by adopting the variation coefficient method. Thus, the DGs with good black start capability, which are selected by a diversity sequence method, are restored first in parallel under the constraints of DGs and network. During the selection process of recovery paths, line weight and node importance degree are proposed under the consideration of the node topological importance and the load importance as well as the backbone network restoration time. Therefore, the whole optimization of the reconstructed network is realized. Finally, the simulation results verify the feasibility and effectiveness of the strategy.
Introduction
In recent years, due to the gradual depletion of fossil fuels and increasing environmental pressure, distributed generation technology based on renewable energy has been developing rapidly. As an important application form of distributed generation (DG), microgrids have received widespread attention [1,2]. Although the reliability of microgrids has been greatly improved, all kinds of uncertainties resulting in blackouts are inevitable [3], so research on the black start of microgrids in islanding mode is of great significance for speeding up the restoration process and reducing outage losses.
Black start of microgrids refers to the technology whereby, when a blackout caused by an external or internal fault occurs in a microgrid, the restoration process does not rely on large power systems or other microgrids, but on the distributed generations with black start capability driving other DGs without such capability and gradually expanding the scope of restoration [4,5]. The black start of microgrids includes three stages, namely, DG restoration, network reconfiguration and load restoration. As the connecting stage of the restoration process, network reconfiguration restores the DGs' connection to the network so that they generate electricity again as fast as possible. It lays a foundation for the restoration of load by restoring important nodes and reconstructing the backbone network with key lines [6][7][8]. In conventional power systems, there are two types of black start strategies: the build-up strategy and the build-down strategy. In the first, the high-voltage grid is energized first and then used to energize the lower-voltage systems, while the second starts from individual generating units with black start capability and critical loads and synchronizes them later [9]. This paper gives a new interpretation of the two strategies for microgrid restoration, namely, serial restoration and parallel restoration. In serial restoration, when a blackout occurs, one of the DGs with good black start capability within the blackout area is selected and restored first, providing stable voltage and frequency to energize other DGs. This strategy gives priority to the reconstruction of the backbone network, and gradually expands the scope of restoration under the balance of active power. In parallel restoration, when a blackout occurs, the DGs with good black start capability within the blackout area are restored in parallel, and the nearby DGs without black start capability are energized later. The scope of restoration is gradually expanded under the balance of active power, and the backbone network is reconstructed through the interconnection of the restored areas.
At present, the black start of power systems has been widely studied, but the focus is mainly on the traditional bulk power grid; research on the black start of microgrids is still at an early stage. Ref. [10] analyses the feasibility of selecting microgrids as black start power sources. It adopts the Dijkstra algorithm to search for the extended black start paths. However, it mainly focuses on the traditional bulk power grid with DGs and microgrids in grid-connected mode. Ref. [11] studies the categories of micro sources and their control strategies. It presents a black start strategy based on the serial restoration strategy, with a long restoration time. Ref. [12] tackles the problem of black start restoration sequences to be used for microgrids after a blackout occurs. The control strategies are analyzed, and the identification of the set of rules and conditions is derived. Ref. [13] establishes dynamic models for micro sources and inverters, respectively. It proposes a restoration strategy for microgrids' black start and subsequent islanded operation based on multi-agent technology. The identified set of rules and conditions is derived and evaluated by numerical simulations. Ref. [14] verifies the feasibility of adopting a hierarchical architecture of a multi-agent system in microgrid operation. However, the above references focus mainly on the control of the black start process and pay little attention to the number of DGs, loads and lines.
This paper proposes a black start strategy for microgrids based on a parallel restoration strategy. Compared to the serial restoration strategy, which involves the charging of the network and the reconstruction of the backbone network, parallel restoration, which divides the system into several small systems and restores them in parallel, can shorten the restoration time and reduce the outage loss. In consideration of the characteristics of DGs, a model of DGs is established to evaluate their black start capability, adopting the diversity sequence method to select the DGs with good black start capability. The black start DGs adopt a constant voltage and constant frequency control (V/f control) strategy to provide reference voltage and frequency to the DGs to be restored. Line weight and node importance degree are introduced as the indices to select the recovery path. The paths from the restored power supply area to the DGs to be restored are searched until all DGs have been interconnected to form the backbone network. Subsequently, the scale of the network structure is gradually expanded until the microgrid is fully restored to normal operation. Compared to the serial restoration strategy, this strategy focuses on the parallel restoration of DGs, which can not only ensure the synchronization of the whole network restoration, but also take into consideration the hierarchy of the restoration of the microgrid. This parallel restoration strategy is a combination of complex network theory and power system analysis. It also provides a new approach to the black start of microgrids.
The organization of this paper is as follows: Section 2 analyzes the characteristics of DGs and proposes a black start capability evaluation model of DGs based on a variation coefficient method.The parallel restoration strategy for microgrid reconfiguration is explained in detail in Section 3. Simulation results to prove the effectiveness of the strategy are demonstrated in Section 4. Section 5 concludes the paper.
Characteristics of DGs
The difference between conventional power systems and microgrids mainly comes from the difference between DGs and conventional thermal power generating units, so microgrids cannot directly copy the black start strategy of conventional power systems. Therefore, it is necessary to take into account the characteristics of DGs when developing the black start strategy of microgrids. Compared with thermal power generating units, DGs have the following unique characteristics:
(1) The output power of DGs is intermittent. Conventional thermal power generating units can adjust their output according to the generation plan, while the output of renewable energy sources is intermittent. Taking wind power and photovoltaic power as examples, the output will fluctuate with the change of wind speed and light intensity.
(2) DGs have various control strategies. Most DGs are not suitable for direct connection to the grid, so power electronic interfaces are needed. DGs' control strategies mainly include constant power control (PQ control), V/f control, droop control, virtual synchronous generator control and so on. Different control strategies are adopted according to the specific situation [15][16][17].
(3) DGs start up without external power supply. Conventional thermal power generating units, with their high-power auxiliary loads, need a starting power supply. In contrast, DGs have the ability to operate independently, as they mainly rely on the surrounding natural conditions. Taking a wind turbine as an example, when the wind speed reaches the minimum requirement, electricity is automatically generated without external power.
(4) DGs start up free from starting time constraints. The startup of conventional thermal power generating units is divided into hot start, warm start and cold start according to the cylinder temperature. In order to protect the cylinder from the thermal stress caused by large temperature differences, the cylinder temperature needs to be controlled, leading to longer starting times for warm start and cold start. Therefore, the black start strategy of a conventional power system needs to consider the starting time constraints of thermal power generating units. In contrast, DGs are free from starting time constraints, so the recovery time of microgrids is shorter.
(5) DGs can improve power quality by using energy storage devices. Energy storage devices can avoid large frequency and voltage deviations and provide stable voltage and frequency for microgrids by injecting (or absorbing) active power in proportion to the frequency deviation [18], so the V/f control mode is adopted for DGs with storage devices. The output power of DGs without energy storage devices is intermittent, and such DGs cannot generate electricity in accordance with the load demand; therefore, the PQ control strategy is usually adopted for them.
Black Start Capability Evaluation Model of DGs Based on Variation Coefficient Method
The black start capability of a DG means that the black start process does not rely on external power, but on the DG's own starting power source, so that it can start up and provide a stable power supply for a certain number of loads. Considering the characteristics of DGs, a black start capability evaluation model of DGs based on the variation coefficient method is established in this paper. It selects the DGs' real-time output power, starting time, load capacity, variable voltage and variable frequency capability (VVVF capability) and State of Charge (SOC) of the DGs with energy storage devices as the indices to evaluate the black start capability. The variation coefficient method is adopted to assign a weight to each index, and the DGs with good black start capability are selected by the diversity sequence method.
Among the above evaluation indices, real-time output power should be taken into consideration because restoring DGs with high real-time output power first can generate more electricity in a short time, so as to guarantee power supply to more loads, which is beneficial to the restoration of the network. As a DG's capacity depends on the ambient conditions, its capacity does not mean the real-time output power, but the maximum capability of its output power, which is different from conventional thermal power generating units, whose capacity is the maximum of the real-time output power controlled by the input fuel. Since restoring DGs with short starting times can accelerate the whole restoration process, the starting time of DGs is an important index to be considered. Because restoring DGs with better load capacity can feed more loads during the same amount of time, the load capacity of DGs also needs to be taken into account. DGs with good VVVF capability can provide the network with a voltage and frequency reference, making it more stable at the initial stage of black start. DGs with higher SOC can maintain stable power output for a longer time, which is conducive to maintaining the stability of the network.
The variation coefficient [19], as an objective weighting method, assesses the weight of each index by calculating the index data. According to the variation coefficient method, an index with a large difference among the objects can better indicate the quality of the evaluated objects and reflects more objective information about the change of the index data, so such an index is given more weight. The specific steps of the variation coefficient method are as follows:
(1) Establish the evaluation matrix. Let there be n objects and m evaluation indices; the evaluation index vector of object i is denoted as X_i = [x_i1, x_i2, x_i3, ..., x_im], and thus the evaluation matrix is denoted as X = [x_ij]_(n×m). The weight of index j is expressed by ω_j, and the index weight vector is ω = [ω_1, ω_2, ..., ω_m]^T.
(2) Normalize the indices. In order to eliminate the effect of index dimension, it is necessary to normalize the original index values. A positive index, also called an "efficiency" index, is better when its value is larger:

r_ij = (x_ij − min_i x_ij) / (max_i x_ij − min_i x_ij)    (1)

A negative index, also called a "cost" index, is better when its value is smaller:

r_ij = (max_i x_ij − x_ij) / (max_i x_ij − min_i x_ij)    (2)

In Equations (1) and (2), i = 1, 2, ..., n; j = 1, 2, ..., m; and r_ij is the normalized value of index j of object i.
(3) Calculate the weight of each index. After normalizing each index, the average value and standard deviation of each index are calculated by Equations (3) and (4):

r̄_j = (1/n) Σ_{i=1..n} r_ij    (3)

s_j = sqrt( (1/(n − 1)) Σ_{i=1..n} (r_ij − r̄_j)^2 )    (4)

The variation coefficient of each index is then calculated by Equation (5):

V_j = s_j / r̄_j    (5)

where V_j is the variation coefficient of index j, r̄_j is the average value of index j, and s_j is the standard deviation of index j.
Then, the weight of each index is:

ω_j = V_j / Σ_{j=1..m} V_j    (6)

(4) Obtain the comprehensive score. The product of the evaluation matrix X and the weight vector ω is the comprehensive score vector G of the objects:

G = Xω    (7)

After scoring the DGs by this comprehensive evaluation method, the DGs with good black start capability are further screened out by the diversity sequence method and selected to be restored first.
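Steps (1)-(4) can be sketched as follows; the DG index data, the choice of which indices are "cost" type, and the use of the normalized matrix for the final score are illustrative assumptions only.

```python
import numpy as np

def normalize(X, positive):
    """Min-max normalize each index column; 'positive' marks efficiency
    indices (larger is better), the rest are treated as cost indices."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    R = (X - lo) / span
    return np.where(positive, R, 1.0 - R)

def variation_coefficient_weights(R):
    """Weights from the variation coefficient V_j = s_j / mean_j of each column."""
    mean = R.mean(axis=0)
    std = R.std(axis=0, ddof=1)
    V = np.where(mean > 0, std / mean, 0.0)
    return V / V.sum()

# Hypothetical data: rows = DGs, columns = [output power (kW), starting
# time (s), load capacity (kW), VVVF capability score, SOC (%)]
X = [[80.0, 30.0, 100.0, 0.9, 85.0],
     [50.0, 10.0,  60.0, 0.8, 70.0],
     [95.0, 60.0, 120.0, 0.6, 90.0]]
positive = np.array([True, False, True, True, True])  # starting time is a cost index

R = normalize(X, positive)
w = variation_coefficient_weights(R)
scores = R @ w                      # comprehensive score vector G
ranking = np.argsort(scores)[::-1]  # DGs ordered by black start capability
```

The DGs at the top of `ranking` would then be the candidates passed to the diversity sequence method.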
Index and Model of Microgrid Reconstruction Based on Parallel Restoration
The main purpose of network reconfiguration is to reconstruct the backbone network in the shortest time by restoring the important nodes and key lines, so that the DGs to be restored can be connected to the network again. This paper introduces the line weight and node importance degree as the indices for selecting the recovery path. The line weight is evaluated by the line operation time, and the node importance degree is evaluated by combining the importance of the node topology and of the nodal load. This provides guidance for the reconstruction of the network.
Line Weight
In the black start process of conventional power systems, the overvoltage problem may easily occur when charging transmission lines without load. A long-distance high-voltage transmission line contains distributed capacitance and inductance. The phase difference between the inductive voltage and the capacitive voltage is 180°. The capacitive reactance will raise the terminal voltage of the transmission line, while the inductive reactance will lower it. As the capacitive reactance is larger than the inductive reactance, a capacitive current will flow through the line, so the capacitance effect of a non-loaded line may cause the overvoltage problem. The charging capacitance of the line reflects the risk of overvoltage well, so it is often used to weight the lines in the black start of conventional power systems. However, the scale of microgrids is small, the distributed capacitance and inductance are much smaller, and the overvoltage problem is less likely to occur. In order to shorten the restoration time, the line operation time is used to weight the lines when selecting the recovery path [20]. The path with short operation time is selected to speed up the restoration process. According to the experience of operators, the optimistic operation time D, the pessimistic operation time A and the most probable time M are determined. The actual operation time of a line follows a beta distribution between D and A [21,22], so the expectation E(t_i) and variance σ_i of the operation time t_i of line i are defined as:

E(t_i) = (D + 4M + A) / 6

σ_i = ((A − D) / 6)^2

where E(t_i) is the expectation of the operation time of line i, σ_i is the variance of the operation time of line i, D is the optimistic operation time, A is the pessimistic operation time and M is the most probable operation time.
In this paper, the operation time is random. It is easily proved that, for two such random times a and b, if E(a) > E(b) then Pr(a > b) ≥ Pr(a < b). Therefore, the expectation of the operation time is selected as the weight of the recovery path.
Node Importance Degree
This paper evaluates the node importance degree in two respects: the electrical connection between nodes, referred to as the importance of node topology, and the properties of the nodes themselves, referred to as the importance of nodal load. Taking both aspects into consideration makes the evaluation of node importance degree more objective.
For the importance of node topology, node betweenness is used as the quantitative measure in this paper. The concept of betweenness was proposed by Freeman in 1979, and it reflects the core role of a node in a network [23]. A node with larger betweenness plays a more important role in the network, and removing it makes the distance between a large number of node pairs longer. This paper defines node betweenness as the proportion of the shortest paths via the node among all the shortest paths:

B_k = Σ_{i,j∈ψ_N, i≠j} N_ij(k) / Σ_{i,j∈ψ_N, i≠j} N_ij,

where B_k is the betweenness of node k; N_ij(k) is the number of shortest paths between nodes i and j that pass via node k; N_ij is the number of shortest paths between nodes i and j; and ψ_N is the set of all nodes in the network.
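The betweenness defined above can be computed by exhaustive shortest-path enumeration. A minimal pure-Python sketch on a toy graph follows (BFS plus backtracking; this is an illustration of the definition in the text, not the authors' implementation):

```python
from collections import deque
from itertools import combinations

def shortest_paths(adj, s, t):
    """Enumerate all shortest paths from s to t in an unweighted graph
    given as an adjacency dict {node: [neighbours]}."""
    dist, parents = {s: 0}, {s: []}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                parents[v] = [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:
                parents[v].append(u)
    if t not in dist:
        return []
    paths = []
    def back(v, suffix):
        if v == s:
            paths.append([s] + suffix)
        else:
            for p in parents[v]:
                back(p, [v] + suffix)
    back(t, [])
    return paths

def betweenness(adj):
    """B_k: fraction of all shortest paths (over all node pairs) that pass
    through k as an interior node, following the definition in the text."""
    nodes = list(adj)
    through = {k: 0 for k in nodes}
    total = 0
    for i, j in combinations(nodes, 2):
        sps = shortest_paths(adj, i, j)
        total += len(sps)
        for p in sps:
            for k in p[1:-1]:
                through[k] += 1
    return {k: through[k] / total for k in nodes}

# toy chain 1 - 2 - 3: only node 2 lies in the interior of a shortest path
adj = {1: [2], 2: [1, 3], 3: [2]}
B = betweenness(adj)   # B[2] = 1/3, B[1] = B[3] = 0
```

For larger networks a Brandes-style algorithm would be preferable; the brute-force version above is only meant to make the definition concrete.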
During the black start process, the re-energization of a certain number of loads mainly serves to ensure stable operation of the network and to keep the system voltage within the allowable range. As the re-energization of loads requires coordination and a certain operation time, only a small number of nodes with important loads should be restored. For the importance of nodal load, load capacity and load level are considered in this paper. Loads are generally divided into three levels [24]. For nodes with the same number of loads, the higher the load level, the greater the outage loss and the greater the importance of the load. Therefore, the importance of nodal load is defined as:

L_k = ω_k · P_{L.k},   (11)

where L_k is the load importance of node k; ω_k is the load weight of node k; and P_{L.k} is the load capacity of node k.
By adjusting the value of load weight ω k , the nodes with important loads can be restored first.This paper assumes that the weight of the first level load is 1, while the second is 0.3 and the third is 0.08.
Since betweenness and load importance cannot be compared directly due to their different dimensions, normalization is essential. Therefore, the maximum values of node betweenness and nodal load importance are selected as their respective reference values. The node importance is defined as:

W_k = α · B_k/B_base + β · L_k/L_base,   (12)

where W_k is the importance degree of node k; B_k is the betweenness of node k; B_base is the reference value of node betweenness; L_k is the load importance of node k; L_base is the reference value of load importance; and α and β are proportional coefficients used to adjust the relative importance of B_k and L_k, with α + β = 1. Considering that restoring important nodes with greater betweenness is conducive to speeding up the reconstruction of the network, this paper takes α = 0.6 and β = 0.4.
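The two indices can be combined as sketched below. The level weights (1, 0.3, 0.08) and α = 0.6, β = 0.4 are the values stated in the text; the product form of the load importance (ω_k · P_{L.k}) is an assumption about Equation (11) based on the listed symbols, and the example inputs are invented.

```python
LEVEL_WEIGHT = {1: 1.0, 2: 0.3, 3: 0.08}   # load-level weights from the paper

def load_importance(level, capacity):
    """L_k = omega_k * P_L.k (product form assumed for Equation (11))."""
    return LEVEL_WEIGHT[level] * capacity

def node_importance(B, loads, alpha=0.6, beta=0.4):
    """W_k = alpha*B_k/B_base + beta*L_k/L_base, with the maxima as
    reference values. B: {node: betweenness};
    loads: {node: (load_level, capacity)}."""
    L = {k: load_importance(*loads[k]) for k in B}
    B_base, L_base = max(B.values()), max(L.values())
    return {k: alpha * B[k] / B_base + beta * L[k] / L_base for k in B}

# hypothetical two-node example: node 1 carries a first-level load
W = node_importance({1: 0.2, 2: 0.4}, {1: (1, 100.0), 2: (3, 100.0)})
```

Using the maxima as reference values keeps both normalized terms in [0, 1], so α and β directly set the relative weight of topology versus load.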
Optimization Model of Recovery Path
After screening out the DGs for black start, the next step is to select a suitable recovery path for the DGs to be restored.
Objective Function
The recovery path consists of a number of nodes and lines. In order for the recovery path to restore more important nodes in less time, this paper considers the importance of node topology, the importance of load and the recovery time of the network when selecting the recovery path. The optimization objective function of the recovery path is therefore defined as:

f = W_se / C_se,   (13)

where s and e are the start node and end node of the recovery path, respectively; W_se is the average importance of the nodes to be recovered along the recovery path; and C_se is the total weight of the lines to be restored along the path.
In the objective function (13), the numerator is the average importance degree of the nodes to be restored along the recovery path: the larger it is, the more "efficient" the recovery path. The average value is used rather than the sum because a longer recovery path that passes through more important nodes would also raise the risk of operation failure. The denominator is the total weight of the lines to be restored, which stands for the "cost" of the recovery path: the larger it is, the greater the restoration time. Selecting the recovery path with maximum f restores as many important nodes as possible with as few line operations as possible, which is conducive to the reconfiguration of the backbone network.
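Selecting among candidate paths by f = W_se/C_se can be sketched as follows. The node importances W and the per-line operation-time weights are assumed to be precomputed, and the candidate paths pre-enumerated; the numeric values are illustrative only.

```python
def path_f(path, W, line_weight):
    """f = W_se / C_se: average importance of the path's nodes divided by
    the summed operation-time weight of its lines (Equation (13))."""
    W_se = sum(W[n] for n in path) / len(path)
    C_se = sum(line_weight[frozenset(e)] for e in zip(path, path[1:]))
    return W_se / C_se

def best_recovery_path(candidates, W, line_weight):
    """Pick the candidate path with the maximum f value."""
    return max(candidates, key=lambda p: path_f(p, W, line_weight))

# hypothetical 3-node example: the detour via node 2 wins because its
# lines are much faster to operate than the direct line 1-3
W = {1: 1.0, 2: 0.5, 3: 0.8}
lw = {frozenset({1, 2}): 2.0, frozenset({2, 3}): 1.0, frozenset({1, 3}): 4.0}
best = best_recovery_path([[1, 2, 3], [1, 3]], W, lw)   # -> [1, 2, 3]
```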
Constraint Condition
The recovery process needs to satisfy the following constraints:

(1) DG output constraint:

P^min_Gi ≤ P_Gi ≤ P^max_Gi, i ∈ ψ_G,   (14)
Q^min_Gi ≤ Q_Gi ≤ Q^max_Gi, i ∈ ψ_G,   (15)

where ψ_G is the set of all DGs in the network; P_Gi and Q_Gi are the active and reactive power of DG i, respectively; P^min_Gi and P^max_Gi are the lower and upper bounds of the active power of DG i; and Q^min_Gi and Q^max_Gi are the lower and upper bounds of the reactive power of DG i.
(2) Node voltage constraint:

U^min_i ≤ U_i ≤ U^max_i, i ∈ ψ_N,   (16)

where ψ_N is the set of all nodes in the network; U_i is the voltage magnitude of node i; and U^min_i and U^max_i are its lower and upper bounds.
(3) Line constraint:

P_li ≤ P^max_li, i ∈ ψ_L,   (17)

where ψ_L is the set of all lines in the network; P_li is the active power flowing in line i; and P^max_li is the maximum active power of line i.
(4) Node power balance constraint:

P_i = U_i Σ_{j∈ψ_N} U_j (G_ij cos θ_ij + S_ij sin θ_ij),   (18)
Q_i = U_i Σ_{j∈ψ_N} U_j (G_ij sin θ_ij − S_ij cos θ_ij),   (19)

where P_i and Q_i are the active and reactive power injected at node i, respectively; U_i and U_j are the voltages of nodes i and j; G_ij is the conductance between nodes i and j; S_ij is the susceptance between nodes i and j; and θ_ij is the voltage phase difference between nodes i and j.
In Equation (18), P_i equals the output active power of the DG at node i minus the active power of the load on node i. The result of Equation (18), which can be adjusted by controlling the loads of node i, should be 0 to maintain a stable system frequency. In Equation (19), Q_i equals the output reactive power of the DG at node i minus the reactive power of the load on node i. The result of Equation (19), which can be adjusted by using reactive power compensation equipment, should be 0 to maintain a stable system voltage.
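The constraint checks can be sketched as below. The injection formulas are the standard AC power-flow equations matching the symbol list for Equations (18) and (19) (with S_ij taken as susceptance, as the text states); all numbers are illustrative.

```python
import math

def within(x, lo, hi):
    """Generic bound check used for constraints (14)-(17)."""
    return lo <= x <= hi

def injections(i, U, theta, G, S, nodes):
    """Active/reactive power injected at node i per Equations (18)-(19);
    G is the conductance matrix, S the susceptance matrix, theta the
    voltage angles, U the voltage magnitudes."""
    P = U[i] * sum(U[j] * (G[i][j] * math.cos(theta[i] - theta[j])
                           + S[i][j] * math.sin(theta[i] - theta[j]))
                   for j in nodes)
    Q = U[i] * sum(U[j] * (G[i][j] * math.sin(theta[i] - theta[j])
                           - S[i][j] * math.cos(theta[i] - theta[j]))
                   for j in nodes)
    return P, Q

# degenerate single-node check: at theta = 0, P = U^2 * G, Q = -U^2 * S
P, Q = injections(0, [1.0], [0.0], [[2.0]], [[0.5]], [0])
```

In the restoration loop, a candidate line operation would be accepted only if the resulting state passes `within(...)` for every DG output, node voltage and line flow, and the injection mismatches at every node are driven to zero.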
Stability
During the restoration process of microgrids, the weakness of the backbone network means stability problems may occur that affect restoration [25]. The specific analysis is as follows:

(1) Transient stability of small independent systems. The parallel restoration of a microgrid black start inevitably creates a number of discrete, independently operating small systems. It is necessary to analyze the stability of these small systems during the initial recovery period. A short-circuit fault is generally used to analyze a small system's anti-disturbance capability. According to the "Guide on Security and Stability for Power System" [26], the transient stability of a small system in the initial stage of black start is usually investigated under a single fault of a single-phase grounding fault.
(2) Stability of loop closing and grid connection. The parallel restoration of microgrids faces a synchronization problem when the discrete, independent small systems connect to the backbone network, and it inevitably generates a loop-closing current, which affects the safety and stability of grid operation. The main cause of the loop current is the voltage difference across the bus or line on both sides of the loop switches. A reactive power compensation device can be added at an appropriate position to adjust the distribution of reactive power flow in the network, thus changing the voltage at both ends of the loop and reducing the influence of the loop-closing current on the system.
Communication
The microgrid is controlled by a microgrid central controller (MGCC) installed in the low-voltage substation. The MGCC exchanges information with the local distribution management system (DMS), which needs to be enhanced with new features related to microgrid operation.
Communication between the MGCC and the DMS includes information related to the upstream grid status and economic issues for efficient management of the microgrid. At a second hierarchical control level, controllers located at loads or groups of loads (smart switches) and controllers located at DGs, i.e., microsource controllers (MCs), exchange information with the MGCC and control local devices.
During normal operation, the MGCC periodically receives information from the smart switches and MCs about consumption levels and power generation, storing this information in a database. After a general blackout, the MGCC performs service restoration based on the stored information about the last microgrid load scenario by controlling a sequence of actions of the smart switches and MCs.
Parallel Restoration Strategy for Microgrid Reconfiguration
In summary, the steps of the parallel restoration strategy are as follows:

Step 1: Read in the parameters of each DG in the microgrid, including real-time output power, starting time, load capacity, VVVF capability and the SOC of the energy storage device of the DG. For a wind turbine or photovoltaic unit without energy storage, the SOC index is 0; for a micro gas turbine, assuming the gas supply is sufficient to maintain stable output power, the SOC index is 1. The black start capability score of each DG is then evaluated based on the evaluation model of black start capability.
Step 2: According to the comprehensive score of black start capability, DGs with good black start capability are selected via a diversity sequence method to be restored first.
Step 3: Read in the network parameters of the microgrid and weight the lines by their operation times.
Step 4: Construct a weighted adjacency matrix in accordance with the weighted network. The number of shortest paths via each node in the network is calculated through the betweenness-centrality algorithm, and the betweenness of each node is determined. The importance of nodal load is calculated by Equation (11). Finally, the importance degree of each node is calculated by Equation (12). Since the restoration of the power-supply node is more important in the process of network reconfiguration, this paper selects the maximum value of the node importance degree, max W_k, as the power-supply node importance degree.
Step 5: Centered on the DGs screened out for black start, restoration proceeds in parallel. Search the paths from a power-supply area to the DGs to be restored and select the path with the maximum f value as the recovery path. Black start DGs adopt a V/f control strategy to provide stable voltage and frequency, while DGs to be restored are connected to the grid in PQ control mode. Once the connection between black start DGs is set up, DGs with high black start scores adopt a V/f control strategy to provide stable voltage and frequency, while those with low scores switch to PQ control mode.
Step 6: In the restoration process, it is necessary to supply a proper number of loads to ensure stable operation of the network; in addition, it is necessary to check whether the DG output constraints (14) and (15), the node voltage constraint (16), the line constraint (17), and the node power balance constraints (18) and (19) are satisfied. If a line operation fails during restoration, return to Step 5 on the basis of the already-restored lines and nodes and select a new recovery path.
Step 7: Repeat Steps 5 and 6 until all the DGs are interconnected to form the backbone network. Then, energize the remaining loads until the black start succeeds and the microgrid is fully restored to normal operation.
The flow chart of this restoration strategy is shown in Figure 1.
Numerical Results
According to Ref. [27], a modified Institute of Electrical and Electronics Engineers (IEEE) 30-bus microgrid system is selected as an example, as shown in Figure 2. The system consists of seven DGs, 30 load nodes and 41 lines. The feasibility of the proposed parallel restoration strategy is verified by MATLAB (R2013b (8.2.0.701), The MathWorks, Inc., Natick, MA, USA) simulation.
(1) The assumptions of the simulation are as follows: the real-time output power (Output), starting time, VVVF capability, load capacity, and SOC of each DG are shown in Table 1.
(2) The expectation E(t) of operation time for each line is shown in Table 2. For each load, 1 min is needed for the transient regulation process.
(3) Considering that restoring important nodes with greater betweenness is conducive to speeding up the reconstruction of the network, this paper takes α = 0.6 and β = 0.4.
(4) By adjusting the value of the load weight ω_k, the nodes with important loads can be restored first.
This paper assumes that the weight of the first level load is 1, while the second is 0.3 and the third is 0.08.
According to Section 2.2, the black start capability evaluation model of each DG is established and the comprehensive score of each DG's black start capability (Score) is obtained, as shown in Table 3. DGs are divided into two types via the diversity sequence method, namely black start DGs and DGs without black start capability; the DGs screened out for black start are restored first. The classification results of the diversity sequence method and the control strategy of each DG are shown in Table 3. From Table 3, although DG3 is a wind turbine with an energy storage device and can supply a certain number of loads, the SOC of its energy storage device is too low to provide stable output power for a long time, so DG3 is not suitable as a black start power source.
The number of shortest paths via each node in the network is calculated through the betweenness-centrality algorithm, and the node betweenness is then obtained. The normalized values of node betweenness are shown in Table 4: the larger the betweenness, the more important the node is in the network. Combined with the importance of nodal load, the importance degree of each node is further calculated and shown in Table 5. The maximum value of the node importance degree is selected as the power-supply node importance degree. The larger the node importance value, the greater the number of shortest paths passing via the node and the more important the loads at that node. Restoring nodes with large importance values can speed up restoration and reduce outage loss.

Black start power sources DG2, DG22 and DG27 are restored first, while DGs without black start capability wait to be restored. Centered on the black start DGs, the DGs without black start capability are restored in parallel. Among all the paths from the black start DGs to the DGs to be restored, the ones with the maximum f value are selected; they are listed in Table 6. The larger the f value, the more important nodes the recovery path contains and the lower its "cost". However, at this point the DGs have not yet been interconnected, so part of the lines must be restored to interconnect all DGs into the backbone network. A further search for the paths with maximum f value among the black start power sources yields the recovery paths 2 → 6 → 10 → 22 and 27 → 28 → 6.
The restoration strategy is shown in Figure 3, in which the solid lines are the restored lines while the dotted lines are the ones to be restored.
From Figure 3, all DGs and some important nodes have been restored, and the DGs have been interconnected to form a stable backbone grid. Thirteen nodes and 12 lines are restored during the interconnection of the DGs. Among the nodes that are not restored, only nodes 11, 19 and 26 are two lines away from the backbone network; the rest are all one line away. The recovery paths thus cover the microgrid well, and the remaining nodes and lines can be restored rapidly at later stages.

In order to further verify the effectiveness of the parallel restoration strategy, the same procedure is applied to the IEEE 57-bus microgrid system [28], which contains seven DGs, 57 nodes and 80 lines; its structure is shown in Figure 4. According to the proposed parallel restoration strategy, the black start capability score of each DG is first evaluated based on the DG parameters, and DG2, DG6 and DG12 are selected as the black start DGs. These black start DGs are restored first, adopting a V/f control strategy to provide the reference voltage and frequency. After that, the paths from the power-supply area to the DGs to be restored are searched and the paths with maximum f value are selected as the recovery paths. Although the path 1 → 2 → 3 → 4 → 6 → 8 → 9 → 12 is the shortest path that restores all DGs and forms a backbone network, it contains few of the network's shortest paths, meaning it restores few important nodes with large betweenness values, so its f value is low. Moreover, the backbone network formed by this path lies far from most nodes in the network, which would lengthen the restoration of the remaining nodes. Therefore, it is not suitable as the recovery path.

The recovery paths with maximum f values for the DGs to be restored are as follows: DG1: 2 → 1; DG3: 2 → 3; DG8: 6 → 7 → 8; DG9: 12 → 13 → 49 → 38 → 37 → 39 → 57 → 56 → 41 → 11 → 9. It is then necessary to restore part of the lines to interconnect all DGs into the backbone network. A further search for the paths with maximum f values among the black start power sources yields the recovery paths 13 → 14 → 15 → 3 and 7 → 29 → 28 → 27 → 26 → 24 → 23 → 22 → 38. The final restoration strategy is shown in Figure 5, in which the solid lines are the restored lines and the dotted lines are the ones to be restored. There are 26 nodes restored during the interconnection of the DGs, and the five nodes with the largest node importance degrees are all restored. Among the remaining 21 unrestored nodes, except for node 32, which is four lines away from the backbone network, and node 33, which is five lines away, 84.2% are no more than two lines from the backbone network. The backbone network has no redundant nodes or lines, providing the prerequisite and guarantee for the restoration of the remaining loads in the next step.
Conclusions
This paper proposes a parallel restoration strategy for microgrid black start. First, the evaluation model of DGs' black start capability is established by the variation coefficient method, and the comprehensive score of each DG's black start capability is obtained; DGs with good black start capability are screened out by the diversity sequence method. Then, under the DG and network constraints, the node importance degree and line operation time are introduced as the indices for selecting the recovery path, comprehensively considering the importance of node topology, the importance of load and the recovery time of the network. The black start power sources adopt a V/f control strategy to provide the reference voltage and frequency to the DGs restored in parallel. Among all the paths from the black start DGs to the DGs to be restored, those with the maximum objective function value are selected as the recovery paths until the DGs are interconnected to form the backbone network. Finally, the feasibility of the proposed strategy is verified on a modified IEEE 30-bus microgrid system and the IEEE 57-bus microgrid system.
Figure 1 .
Figure 1.Flow chart of parallel restoration strategy.
Figure 3 .
Figure 3.The final restoration backbone network.
Figure 5 .
Figure 5.The final restoration backbone network.
Table 1 .
The parameters of distributed generations (DGs).VVVF: variable voltage and variable frequency; SOC: State of Charge; PV: photovoltaics.
Table 2 .
The expectation of operation time for each line.
Table 3 .
The classification result of DGs.PQ: constant power; V/f: constant voltage and constant frequency.
Table 6 .
The restoration path with maximum f value.
Colossal magnetoresistance in a nonsymmorphic antiferromagnetic insulator
Here we investigate antiferromagnetic Eu5In2Sb6, a nonsymmorphic Zintl phase. Our electrical transport data show that Eu5In2Sb6 is remarkably insulating and exhibits an exceptionally large negative magnetoresistance, which is consistent with the presence of magnetic polarons. From ab initio calculations, the paramagnetic state of Eu5In2Sb6 is a topologically nontrivial semimetal within the generalized gradient approximation (GGA), whereas an insulating state with trivial topological indices is obtained using a modified Becke−Johnson potential. Notably, GGA + U calculations suggest that the antiferromagnetic phase of Eu5In2Sb6 may host an axion insulating state. Our results provide important feedback for theories of topological classification and highlight the potential of realizing clean magnetic narrow-gap semiconductors in Zintl materials.
INTRODUCTION
Narrow-gap semiconductors exhibit a breadth of striking functionalities ranging from thermoelectricity to dark matter detection [1,2]. More recently, the concept of topological insulating phases in bulk materials has renewed the interest in this class of materials [3-5]. Independent of the target application, a primary goal from the experimental point of view is the synthesis of genuine insulators free of self-doping. Materials design is usually guided by simple electron count (e.g. tetradymite Bi2Te3 [6]), correlated gaps (e.g. Kondo insulators SmB6 [7] and YbB12 [8]) or the Zintl concept (e.g. Sr2Pb [9] and BaCaPb [10]). Zintl phases are valence-precise intermetallic phases formed by cations (alkaline, alkaline-earth and rare-earth elements) and covalently bonded (poly)anionic structures containing post-transition metals. The electron transfer between these two entities gives rise to an insulating state, whereas the inclusion of rare-earth elements allows for magnetism, which breaks time-reversal symmetry and may promote new quantum ground states [11-13].
The myriad of crystal structures within the Zintl concept provides a promising avenue to search for clean semiconductors. Here we experimentally investigate Zintl Eu5In2Sb6 in single-crystalline form. Low-carrier-density magnetic materials containing europium are prone to exhibiting colossal magnetoresistance (CMR) [14-18]. The strong exchange coupling between the spin of the carriers and the spins of the Eu2+ background causes free carriers at low densities to self-trap in ferromagnetic clusters around the Eu sites, which gives rise to a quasiparticle called a magnetic polaron [19]. This quasiparticle has been identified in several Zintl materials ranging from simple cubic EuB6 [17,20] to monoclinic Eu11Zn4Sn2As12 [18]. Most CMR compounds have a ferromagnetic ground state, including the doped manganites RE1−xAxMnO3 (RE = rare earth, A = divalent cation) in which CMR was first observed [21,22]. EuTe and Eu14MnBi11, however, revealed the possibility of realizing CMR in antiferromagnets [14,16,23], which also brings promise for applications due to their small stray fields [24].
Additionally, nonsymmorphic symmetries are expected to be particularly powerful in creating protected band crossings and surface states, which provides an additional organizing principle within the Zintl concept [25,26]. For instance, Wieder et al. predicted that Zintl Ba5In2Sb6, the non-f analog of Eu5In2Sb6, hosts fourfold Dirac fermions at M connected to an hourglass fermion along ΓX [27]. Recent attempts to theoretically catalog all known uncorrelated materials indicate that Ba5In2Sb6 may be classified as a topological insulator [28,29] or a trivial insulator [30,31]. This discrepancy begs for an experimental investigation.
Eu5In2Sb6, just like its Ba analog, crystallizes in space group Pbam. As expected from 4f localized moments on multiple sites, Eu5In2Sb6 orders antiferromagnetically at T_N1 = 14 K in a complex magnetic structure. Remarkably, CMR sets in at 15T_N1 and is accompanied by an anomalous Hall component. Our data collectively point to the presence of magnetic polarons. To shed light on the topology of the band structure of Eu5In2Sb6, we have performed first-principles calculations using different functionals and magnetic phases. Although an insulating state with trivial topological indices is obtained using the modified Becke−Johnson (mBJ) functional in the paramagnetic state, topologically nontrivial states with strong indices emerge in the generalized gradient approximation (GGA) + U calculations within putative antiferromagnetic states.
Magnetic susceptibility measurements
We first discuss the thermodynamic properties of Eu5In2Sb6 single crystals. Figure 1a highlights the complex anisotropy in the low-temperature magnetic susceptibility of Eu5In2Sb6. Two magnetic transitions can be identified, at T_N1 = 14 K and T_N2 = 7 K, in agreement with previous measurements on polycrystalline samples [32]. One can also infer that the c-axis is the hard magnetization axis and that the moments lie in the ab-plane. No hysteresis is observed between zero-field-cooled and field-cooled measurements at 0.1 T, which rules out hard ferromagnetic order or spin-glass behavior; however, a small in-plane ferromagnetic component (0.06 μB) is observed at very low fields (B ≤ 0.1 T), indicative of a complex magnetic structure with canted moments (see Supplementary Fig. 1).
The inset of Fig. 1a shows the product of magnetic susceptibility and temperature as a function of temperature. At high temperatures (T > 225 K), a Curie-Weiss (CW) fit yields a ferromagnetic (FM) Weiss temperature of θ = 30 K despite the antiferromagnetic (AFM) order at low temperatures, which further corroborates the presence of a complex magnetic configuration with multiple exchange interactions. The inverse of the magnetic susceptibility is shown in Supplementary Fig. 3. The CW fit also yields an effective moment of 8 μ B Eu −1 , in good agreement with the Hund's rule moment of 7.94 μ B Eu −1 for Eu 2+ . In fact, our X-ray absorption spectra at the Eu L edges 33 confirm that all three Eu sites are divalent (see Supplementary Fig. 5). Previous X-ray absorption studies observed a finite Eu 3+ component, which could be due to an impurity phase present in polycrystalline samples 32 . The fully divalent character of europium in Eu 5 In 2 Sb 6 has been recently confirmed by Mössbauer measurements 34 .
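The Curie−Weiss analysis described above can be illustrated with a short fit. The sketch below is illustrative only: it uses synthetic susceptibility data generated from the quoted values (θ = 30 K, μ_eff = 7.94 μB per Eu), not the measured data, and exploits the fact that 1/χ is linear in T under the CW law.

```python
import math

# Curie-Weiss law: chi(T) = C_cw / (T - theta), so 1/chi is linear in T:
#   1/chi = T/C_cw - theta/C_cw
# A straight-line fit of 1/chi vs T therefore yields both the Curie
# constant C_cw and the Weiss temperature theta.

def curie_weiss_fit(T, chi):
    """Least-squares line fit of 1/chi vs T; returns (C_cw, theta)."""
    y = [1.0 / c for c in chi]
    n = len(T)
    mx = sum(T) / n
    my = sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(T, y)) / \
            sum((xi - mx) ** 2 for xi in T)
    intercept = my - slope * mx
    C_cw = 1.0 / slope          # units follow the input susceptibility data
    theta = -intercept * C_cw   # Weiss temperature in K
    return C_cw, theta

# Synthetic high-T data (T > 225 K, as in the text) from assumed values:
mu_eff_in = 7.94                # mu_B per Eu (Hund's rule value for Eu2+)
C_in = mu_eff_in ** 2 / 8.0     # cgs relation: mu_eff = sqrt(8 * C_cw)
theta_in = 30.0                 # K, ferromagnetic Weiss temperature
T = [225.0 + 5.0 * i for i in range(16)]
chi = [C_in / (t - theta_in) for t in T]

C_fit, theta_fit = curie_weiss_fit(T, chi)
mu_eff = math.sqrt(8.0 * C_fit)
print(f"theta = {theta_fit:.1f} K, mu_eff = {mu_eff:.2f} mu_B/Eu")
```

Applying the same linear fit to measured χ(T) above 225 K would yield the values reported in the text; here the input values are simply recovered.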
Notably, our magnetic susceptibility data deviate from the CW fit at temperatures well above the ordering temperature (inset of Fig. 1a). In purely divalent compounds such as Eu 5 In 2 Sb 6 , Eu 2+ is a localized S-only ion (J = S = 7/2), which implies that crystal-field and Kondo effects are negligible to first order. As a result, the deviation from the CW fit indicates the presence of short-range magnetic interactions, as observed previously in the manganites RE 1−x A x MnO 3 (RE = rare earth, A = divalent cation). Based on small-angle neutron scattering measurements, this deviation was argued to be due to the formation of magnetic polarons 35 . As temperature decreases, magnetic polarons are expected to grow in size and eventually overlap when nξ 3 ≈ 1, where n is the carrier density and ξ is the magnetic correlation length 36 . The inset of Fig. 1a shows a sharp decrease in χ(T)T at T* ≈ 40 K, which reflects the onset of strong antiferromagnetic correlations between polarons.

Figure 1b shows the low-temperature anisotropic magnetization of Eu 5 In 2 Sb 6 . The hard c-axis magnetization increases linearly with field, whereas a field-induced transition is observed within the basal plane before saturation is reached at about 10 T (inset of Fig. 1b).

Figure 1c shows the temperature dependence of the specific heat, C, at zero field. In agreement with the magnetic susceptibility data, C/T exhibits two phase transitions at T N1 and T N2 as well as a magnon contribution below T N2 , typical of Eu 2+ compounds. The entropy recovered at T N1 is about 90% of Rln8 (not shown), the expected entropy from the Eu 2+ (J = 7/2) ground state. The extrapolation of the zero-field C/T to T = 0 gives a Sommerfeld coefficient of zero within the experimental error, indicating that Eu 5 In 2 Sb 6 is an insulator with a very small amount of impurities. A Schottky-like anomaly at about 35 K indicates the presence of short-range correlations, in agreement with the magnetic susceptibility data at T*.
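As a rough consistency check (an order-of-magnitude estimate of our own, not a result from the paper), the overlap criterion nξ³ ≈ 1 quoted above can be inverted to estimate the correlation length at which polarons would begin to overlap, taking the Hall-derived carrier density of about 10¹⁷ cm⁻³ reported later in the text:

```python
# Polaron-overlap criterion: n * xi^3 ≈ 1, so the magnetic correlation
# length at which polarons start to overlap is roughly xi ≈ n^(-1/3).
n_h = 1e17                         # carriers per cm^3 (Hall value from the text)
xi_cm = n_h ** (-1.0 / 3.0)        # correlation length in cm
xi_nm = xi_cm * 1e7                # 1 cm = 1e7 nm
print(f"xi ≈ {xi_nm:.1f} nm")      # a few tens of nm, i.e. many unit cells
```

The resulting length scale of a few tens of nanometres spans many unit cells, consistent with the picture of extended ferromagnetic clusters.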
The inset of Fig. 1c displays the field dependence of the low-temperature transitions when field is applied along the b-axis. The transitions are mostly suppressed by 9 T, in agreement with the saturation in magnetization.
Electrical transport measurements
We now turn our attention to electrical transport data. Figure 2a shows the temperature-dependent electrical resistivity, ρ(T), of Eu 5 In 2 Sb 6 measured with current along the c-axis. Remarkably, ρ(T) rises by almost six orders of magnitude in the paramagnetic state, in agreement with the clean insulating response observed in C/T but in stark contrast to ρ(T) measurements in polycrystals 37 . Below T N1 , ρ(T) decreases by three orders of magnitude, pointing to the overlap of magnetic polarons within the antiferromagnetic state. Finally, at lower temperatures ρ(T) rises again, and a small kink is observed at T N2 .
Fig. 1 Thermodynamic properties of Eu 5 In 2 Sb 6 crystals. a Magnetic susceptibility, χ(T), in both zero-field-cooled (ZFC) and field-cooled (FC) sweeps. Inset shows χT. Black solid line shows the high-temperature CW fit. b Magnetization vs applied field at 2 K. Inset shows high-field magnetization data at 4 K. c Zero-field specific heat as a function of temperature. Inset shows C/T at different applied fields.

The high-temperature electrical resistivity can be fit to an activated behavior given by ρ = ρ 0 T̃ n exp(E a /k B T̃) (inset of Fig. 2a), where T̃ is the reduced temperature. For n = 0, the Arrhenius plot yields a narrow gap of 40 meV, whereas a slightly larger energy is extracted when n = 1 for adiabatic small-polaron hopping conduction 38 . From these data alone, it is not possible to differentiate between the two mechanisms. Nevertheless, the activated behavior breaks down at about T* ≈ 40 K, indicating that another mechanism is present. This energy scale is more pronounced in the log plot shown in Fig. 2b.
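A minimal sketch of how an activation energy is extracted from the Arrhenius form (the n = 0 case above). The data here are synthetic, generated from the 40 meV gap quoted in the text; with real ρ(T) data one would fit the whole high-temperature range rather than use two points.

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

# Arrhenius form (n = 0): rho(T) = rho0 * exp(E_a / (k_B * T)).
# Two points on the curve suffice to extract the activation energy:
#   E_a = k_B * ln(rho1 / rho2) / (1/T1 - 1/T2)
def activation_energy(T1, rho1, T2, rho2):
    return K_B * math.log(rho1 / rho2) / (1.0 / T1 - 1.0 / T2)

# Synthetic data from the 40 meV gap quoted in the text (rho0 is arbitrary).
E_a_in, rho0 = 0.040, 1.0e-3          # eV, ohm*cm
T1, T2 = 100.0, 300.0                  # K, in the paramagnetic regime
rho1 = rho0 * math.exp(E_a_in / (K_B * T1))
rho2 = rho0 * math.exp(E_a_in / (K_B * T2))

E_a_meV = 1e3 * activation_energy(T1, rho1, T2, rho2)
print(f"E_a = {E_a_meV:.1f} meV")
```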
The evolution of the CMR in Eu 5 In 2 Sb 6 is summarized in Fig. 2d. Though the negative magnetoresistance is small at room temperature, it rapidly increases below about 15T N1 . At liquid nitrogen temperatures (T~75 K), for instance, the MR reaches −50% at only 3 T and −94% at 9 T. Ultimately, the MR peaks at −99.999% at 9 T and 15 K. This is, to our knowledge, the largest CMR observed in a stoichiometric antiferromagnetic compound.
Hall measurements provide valuable information on the type of carriers and the scattering mechanisms in a material. Figure 3 shows the Hall resistivity, R H ≡ ρ xz , for fields applied along the b-axis of Eu 5 In 2 Sb 6 . At room temperature, R H is linear in field, as expected from a nonmagnetic single-band material (inset of Fig. 3a). The positive slope, R 0 , implies positive (hole) carriers and a carrier density of n h = 1/(R 0 e) = 10 17 /cm 3 , typical of narrow-gap semiconductors.
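The single-band relation n_h = 1/(R₀e) used above is a one-line computation. The sketch below is illustrative: the Hall-coefficient value is a hypothetical number chosen to reproduce the ~10¹⁷ cm⁻³ density quoted in the text, not a measured slope.

```python
E_CHARGE = 1.602176634e-19  # elementary charge in C

# Single-band Hall relation: n = 1 / (R_0 * e), with R_0 the Hall slope.
def hall_density_cm3(R0_m3_per_C):
    """Carrier density in cm^-3 from a Hall coefficient in m^3/C."""
    n_m3 = 1.0 / (R0_m3_per_C * E_CHARGE)
    return n_m3 * 1e-6  # convert m^-3 -> cm^-3

# Hypothetical slope chosen so the density matches ~1e17 cm^-3 from the text.
R0 = 6.24e-5  # m^3/C (illustrative, not a measured value)
n_h = hall_density_cm3(R0)
print(f"n_h ≈ {n_h:.2e} cm^-3")
```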
As the temperature is lowered, however, a nonlinear R H component sets in at about 15T N1 , the same temperature at which CMR emerges. As the band structure of this band insulator is not expected to change dramatically in this temperature range, our result may indicate that the formation of magnetic polarons is responsible for the anomalous Hall effect (AHE). We note, however, that the presence of multiple carriers cannot be ruled out at this time. Though the ferromagnetic nature of the magnetic polaron cluster is a natural explanation for the anomalous contribution, a quantitative analysis of the various intrinsic and extrinsic contributions to the AHE will require determining the anisotropic conductivity tensor using micro-fabricated devices, including the region below 50 K.
Electron spin resonance measurements
We complete our experimental investigation with microscopic electron spin resonance (ESR) measurements. Electron spin resonance is a site-specific spectroscopic technique, and Eu 2+ ions are particularly suitable paramagnetic probes because of their S-only state 39,40 . The Eu 2+ ESR spectra of Eu 5 In 2 Sb 6 in the paramagnetic state, shown in Fig. 4, consist of a single unresolved resonance (i.e., no fine or hyperfine structure). The ESR linewidth, ΔH, provides information on the interactions of the spins with their environment and on their motion. In the case of semimetallic EuB 6 , the Eu 2+ ΔH was claimed to be dominated by spin-flip scattering due to the exchange between 4f and conduction electrons 39 . As a result, ΔH narrows at higher fields due to a reduction in the spin-flip scattering, consistent with the presence of magnetic polarons. The linewidth of Eu 5 In 2 Sb 6 also narrows at higher fields (Q-band) when compared to low fields (X-band), though not as strongly as in EuB 6 39 . This narrowing further indicates that the resonance is homogeneous in the paramagnetic state. In the case of a small-gap insulator such as Eu 5 In 2 Sb 6 , the Eu 2+ ESR linewidth is dominated by spin−spin interactions 39,41,42 . The resulting relaxation mechanism is set by T 2 , the spin−spin relaxation time, which in turn is affected by the distribution of Eu−Eu exchange interactions and internal fields. An applied magnetic field causes an increase in T 2 as the size of the ferromagnetic polaron grows, which results in the observed ESR line narrowing. At the same time, the g-value decreases as a function of magnetic field, which indicates an antiferromagnetic inter-polaron coupling. Therefore, our ESR results are also consistent with the presence of magnetic polarons in Eu 5 In 2 Sb 6 . More detailed ESR measurements will be the focus of a separate study.
Band structure calculations
To shed light on the possible topological nature of the band structure of Eu 5 In 2 Sb 6 , we perform band structure calculations in the paramagnetic state by treating the 4f orbitals of Eu as core states, as shown in Fig. 5. Both barium and europium are divalent in the 526 structure, and our experimental results imply that europium has a well-localized f-electron contribution. One would therefore naively expect the band structure and topology of Eu 5 In 2 Sb 6 to be similar to those of Ba 5 In 2 Sb 6 , whose topology is not indicated by any symmetry indicators but can be characterized by a nontrivial connecting pattern in the Wilson bands 27 .
Remarkably, GGA + SOC calculations in the paramagnetic state of Eu 5 In 2 Sb 6 indicate a semimetallic state with one extra band inversion at the Γ point compared to Ba 5 In 2 Sb 6 . Because there are no symmetry-protected band crossings between the valence and conduction bands at any k-point, a k-dependent chemical potential can be defined, which yields a fully gapped state. By calculating the topological indices of the bands below the k-dependent chemical potential, we find that the extra band inversion at the Γ point yields a strong topological insulator with (z 2 ; z 2w,1 z 2w,2 z 2w,3 ) = (1; 000), where z 2 is the strong index and z 2w,i are the weak indices 43 , as shown in Fig. 5a. Compared with our experimental results, however, the ab initio calculation with the GGA functional incorrectly predicts Eu 5 In 2 Sb 6 to be semimetallic. Considering the possible underestimation of the band gap in semiconductors by the GGA functional, we have also performed band structure calculations using the mBJ potential with a coefficient c mBJ = 1.18, which was obtained self-consistently. As shown in Fig. 5b, the band inversion near the Γ point disappears, and a small gap opens along the Γ−Y path. The topological indices (z 2 ; z 2w,1 z 2w,2 z 2w,3 ) are computed to be (0; 000). Consistently, surface states are not detected by our electrical transport measurements. Scanning tunneling microscopy and angle-resolved photoemission measurements will be valuable to confirm the absence of in-gap states.
We now investigate the topology of Eu 5 In 2 Sb 6 in the magnetically ordered state. Because the magnetic structure of Eu 5 In 2 Sb 6 has not been solved yet, we investigate theoretically, using the GGA + U + SOC approach, three A-type AFM phases with the easy axis along different directions. All of these antiferromagnetic phases are characterized by so-called Type-IV magnetic space groups (MSGs) with inversion symmetry. The magnetic topological quantum chemistry theory therefore describes the topology of these MSGs by an index group Z 4 × Z 2 × Z 2 × Z 2 , as proposed recently 44 . From the calculations detailed in Supplementary Fig. 5, the magnetic moment is about 7 μ B /Eu, and the energy difference between the different phases is within 3 meV per unit cell. From the results tabulated in Supplementary Table 1, all three AFM phases are axion insulators with strong indices (z 4 ; z 2,1 ; z 2,2 ; z 2,3 ) = (2; 0; 0; 0). Comparing the band structures of the three AFM phases shows that the polarized 4f states do not change the band inversion characteristics of the paramagnetic state but induce a small exchange splitting near the Fermi level. Though the AFM structure at low temperatures has yet to be determined experimentally, we propose that this phase is an axion insulator candidate that preserves inversion symmetry.
DISCUSSION
The magnetic polaron picture is fully consistent with our data. At high temperatures (~15T N1 = 210 K), the formation of isolated magnetic polarons is manifested in magnetic susceptibility measurements via a deviation from the Curie−Weiss law (inset of Fig. 1a) and in electrical resistivity measurements via the onset of negative magnetoresistance (Fig. 2c). As the temperature is further lowered, these polarons increase in size until they start to interact at T*, giving rise to a sharp decrease in the χT plot, a Schottky anomaly in the specific heat data (Fig. 1c), and an anomaly in electrical resistivity measurements (Fig. 2b). At T N1 , the polarons coalesce and become delocalized, which gives way to a drastic increase in conductivity. Though the delocalization temperature virtually coincides with T N1 at zero field, delocalization is expected to occur at higher temperatures as the size of the polarons increases in field. The antiferromagnetism-driven T*, however, is suppressed in field. This opposite field dependence causes the delocalization temperature and T* to merge into one at about 3 T, which gives rise to a resistivity maximum above T N1 that moves to higher temperatures in field (see Supplementary Fig. 7). Importantly, the increase in size of magnetic polarons in applied fields also promotes the large negative (termed colossal) magnetoresistance in the paramagnetic state. In fact, CMR sets in at about 200 K and peaks just above T N1 , as shown in Fig. 2c.
Another characteristic of CMR materials is the scaling of the low-field MR with the square of the reduced magnetization, Δρ/ρ 0 = C(M/M sat ) 2 , where M sat is the saturation magnetization 36,45 . Just above T N1 , this scaling is valid and yields C = 50 (inset of Fig. 2c). When electron scattering is dominated by magnetic fluctuations, the scaling constant C is proportional to n −2/3 , n being the carrier density 36 . The scaling constant calculated this way (using n ~ 10 12 /cm 3 at 15 K) is four orders of magnitude higher than the experimentally determined constant, which is an indication of a distinct mechanism. Another notable exception is EuB 6 , for which the field-dependent resistivity was argued to be dominated by the increase in polaron size with field rather than by the suppression of critical scattering 17,46 . In fact, recent scanning tunneling microscopy measurements have directly imaged the formation of magnetic polarons in EuB 6 20 .

In summary, we investigate the thermodynamic and electrical transport properties of single crystalline Eu 5 In 2 Sb 6 , a nonsymmorphic Zintl antiferromagnetic insulator. Colossal magnetoresistance sets in at temperatures one order of magnitude higher than the magnetic ordering temperature, T N1 = 14 K, and peaks just above T N1 , reaching −99.7% at 3 T and −99.999% at 9 T. This is, to our knowledge, the largest CMR observed in a stoichiometric antiferromagnetic compound. Our combined electrical transport and microscopic ESR measurements point to the presence of magnetic polarons that generate an anomalous Hall component. Our first-principles band structure calculations yield an insulating state with trivial topological indices in the paramagnetic state, whereas an axion insulating state emerges within putative antiferromagnetic states.
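The low-field scaling Δρ/ρ₀ = C(M/M_sat)² discussed above can be explored numerically. The sketch below is illustrative (C = 50 as quoted, with the sign convention that the MR is negative) and shows why the quadratic form can only hold in the low-field limit: the scaling formally saturates at a small fraction of the saturation magnetization.

```python
# Low-field CMR scaling: Delta_rho/rho_0 = -C * (M / M_sat)^2, with the
# scaling constant C = 50 quoted just above T_N1 in the text.
C_scale = 50.0

def mr_from_magnetization(m_ratio):
    """Fractional magnetoresistance for a reduced magnetization M/M_sat."""
    return -C_scale * m_ratio ** 2

# The quadratic form reaches |MR| = 100% already at M/M_sat = sqrt(1/C),
# i.e. well below saturation, so it can only describe the low-field regime.
m_limit = (1.0 / C_scale) ** 0.5
print(f"|MR| reaches 100% at M/M_sat ≈ {m_limit:.2f}")
```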
Our results highlight that Zintl phases could provide truly insulating states in the search for topological insulators, and rare-earth elements provide a route for the discovery of topological interacting phenomena. In fact, Zintl EuX 2 As 2 (X = In, Sn) have been recently proposed to be antiferromagnetic topological insulators 47,48 . The metallic-like behavior observed in electrical resistivity, however, suggests that these materials have a semimetallic ground state akin to EuB 6 49 .
Experimental details
Single crystalline samples of Eu 5 In 2 Sb 6 were grown using a combined In-Sb self-flux technique. The crystallographic structure was verified at room temperature by both single-crystal diffraction using Mo radiation in a commercial diffractometer (see Supplementary Fig. 6) and powder diffraction using Cu radiation in a commercial diffractometer. Eu 5 In 2 Sb 6 crystallizes in an orthorhombic structure (space group 55) with lattice parameters a = 12.553(5) Å, b = 14.603(2) Å and c = 4.635(1) Å. As shown in Supplementary Fig. 6, the observed mosaicity of the Bragg reflections is limited by the resolution of the diffractometer. The crystals have a rod-like shape, the c-axis is the long axis, and typical sizes are 0.5 mm × 0.5 mm × 3 mm. In addition, the stoichiometry of the crystals was checked by energy dispersive X-ray spectroscopy (EDX). Magnetization measurements were performed in a commercial SQUID-based magnetometer. Specific heat measurements were made using the thermal relaxation technique in a commercial measurement system. Because of the difficulties in the synthesis of phase-pure Ba 5 In 2 Sb 6 , no phonon background was subtracted from the data. A four-probe configuration was used in the electrical resistivity experiments performed using a low-frequency AC bridge. High-field magnetization measurements were performed in the 65 T pulsed-field magnet at 4 K at the National High Magnetic Field Laboratory at Los Alamos National Laboratory. Details of the magnetometer design are described in ref. 50 . The sample was mounted in a plastic cup oriented with the b-axis parallel to the magnetic field. The data were normalized by the low-field data obtained from a commercial SQUID magnetometer. ESR measurements were performed on single crystals in X-band (f = 9.5 GHz) and Q-band (f = 34 GHz) spectrometers equipped with a goniometer and a He-flow cryostat in the temperature range of 4 K < T < 300 K.
Theoretical details
First-principles calculations were performed using the Vienna ab initio simulation package (VASP), and the GGA with the Perdew−Burke−Ernzerhof (PBE) exchange-correlation potential was adopted. The Brillouin zone (BZ) sampling was performed using a 7 × 7 × 9 k-point mesh in the self-consistent calculations. In the paramagnetic state, we employed a europium pseudopotential with the seven f electrons treated as core electrons. In the antiferromagnetic states, we performed LSDA + U calculations with U = 5 eV for the three distinct magnetic structures.
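For reference, the 7 × 7 × 9 Brillouin-zone mesh mentioned above could be specified in VASP with a KPOINTS file along the following lines. This is a hypothetical sketch: Γ-centered generation is assumed here and is not stated in the text, and Monkhorst-Pack generation would be specified analogously.

```
Automatic k-mesh for Eu5In2Sb6 self-consistent run
0              ! 0 = automatic generation scheme
Gamma          ! Gamma-centered mesh (assumed; "Monkhorst-Pack" also possible)
7 7 9          ! subdivisions along the reciprocal lattice vectors
0 0 0          ! no shift
```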
DATA AVAILABILITY
Data presented in this study are available from the authors upon request.
Recruitment of multiple stakeholders to health services research: Lessons from the front lines
Background Self-administered surveys are an essential methodological tool for health services and knowledge translation research, and engaging end-users of the research is critical. However, few documented accounts exist of the efforts invested in recruiting multiple different stakeholders to one health services research study. Here, we highlight the challenges of recruiting key stakeholders (policy-makers, clinicians, guideline developers) to a Canadian Institutes of Health Research (CIHR) funded health services research (HSR) study that aimed to develop an updated and refined version of a guideline appraisal tool, the AGREE. Methods Using evidence-based methods of recruitment, our goal was to recruit 192 individuals: 80 international guideline developers, 80 Canadian clinicians and 32 Canadian policy/decision-makers. We calculated the participation rate and the recruitment efficiency. Results We mailed 873 invitation letters. Of 838 approached, our participation rate was 29% (240) and recruitment efficiency 19% (156). One policy-maker manager did not allow policy staff to participate in the study. Conclusions Based on the results from this study, we suggest that future studies aiming to engage similar stakeholders in HSR oversample by at least 5 times to achieve their target sample size and allow for participant withdrawals. We need continued efforts to communicate the value of research between researchers and end-users of research (policy-makers, clinicians, and other researchers), integration of participatory research strategies, and promotion of the value of end-user involvement in research. Future research to understand methods of improving recruitment efficiency and engaging key stakeholders in HSR is warranted.
Background
Expectations for well-designed self-administered surveys are high [1] and results can only be drawn and generalized based on the quantity, quality and representativeness of the information returned [2]. Therefore, achieving a high participation rate is a significant precursor to ensuring the validity of survey results and minimizing the risk of bias. Studies show a trend towards decreased participation in survey research [3]. Thus, we need methods to facilitate participation. A Cochrane systematic review and meta-analysis identified several key methods to enhance response rates to postal questionnaires, including a more versus less interesting questionnaire, recorded delivery, and receipt of a monetary incentive [2]. In contrast to research aimed at improving response rates, however, there are few documented accounts of the efforts invested in participant recruitment and the resultant participation rates for this investment. The purpose of this short report is to outline our experiences recruiting practice guideline developers/researchers, clinicians and policy-makers to a Canadian Institutes of Health Research (CIHR) funded health services research (HSR) study. The focus of this study was the Appraisal of Guidelines Research and Evaluation (AGREE) Instrument, a tool used to evaluate the quality of practice guidelines (PGs) reporting [4].
Identification of target participants and sampling strategy
Following an a-priori sample size calculation for our primary outcome, our total recruitment target was 192: 80 Canadian clinicians (oncology, cardiovascular, and critical care), 80 international guideline developers/researchers, and 32 Canadian policy/decision-makers. Based on previous specialist response rates to the 2004 Canadian National Physician Survey, we expected to approach 4 physicians for every physician we needed to recruit [5]. We also applied the same oversampling rate to the guideline developers/researchers and policy-makers. We identified potential participants using membership lists from professional associations, known research/clinician collaborations, and professional entities found on the Internet (see Table 1). From this population, we invited a random sample of clinicians and guideline developers/researchers with e-mail addresses to participate. As we had fewer candidates, we invited all identified policy-makers to participate. Informed consent was implied with the return of completed survey materials. The Hamilton Health Sciences/McMaster University Faculty of Health Sciences Research Ethics Board approved this study.
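The oversampling plan described above amounts to a simple multiplication, sketched below for illustration (the 4:1 ratio comes from the prior physician-survey response rates cited in the text; the comparison with the 873 letters actually mailed is ours):

```python
# Oversampling plan: approach ~4 candidates for every participant needed,
# based on prior specialist response rates (2004 Canadian National
# Physician Survey, as cited in the text).
targets = {
    "clinicians": 80,
    "developers/researchers": 80,
    "policy/decision-makers": 32,
}
oversample = 4  # candidates approached per participant needed

invites = {group: n * oversample for group, n in targets.items()}
total_invites = sum(invites.values())
print(f"planned invitations: {total_invites}")  # compare: 873 letters mailed
```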
Description of self-administered questionnaire
Our research protocol involved four parts: i) reading a PG; ii) assessing the PG using either the AGREE Instrument and the Global Rating Scale (Condition 1) or the Global Rating Scale alone (Condition 2); iii) completing a survey of perceptions of the usefulness of the instrument(s) from (ii); and iv) completion of a short demographic section. The PGs included 10 documents from 3 clinical areas (4 oncology, 4 cardiovascular, 2 critical care), and all PGs were 50 pages or less. We randomized participants to either Condition 1 (134 items total) or Condition 2 (41 items). For clinicians, we stratified randomization to their corresponding area of expertise (e.g., oncologists randomly assigned to condition 1 or condition 2, and randomized to 1 of 4 oncology PGs). We randomized policy-makers to oncology PGs alone, because of a smaller pool of participants. Finally, we randomly allocated developers/researchers to condition and guideline. Further details about the primary research protocol and survey instruments are described elsewhere [6].
From pilot testing, the estimated time to complete all three parts was no more than two hours for those in Condition 1, and approximately 1.5 hours for Condition 2. We sent the initial survey by personally addressed e-mail, which included direct electronic links to the study materials. Participants had the option of completing the survey electronically or by paper. In turn, participants could choose to submit their completed survey materials electronically via the secure online data portal http://www.vovici.com, by electronic mail word processing document, by post mail or by fax.
To inform our recruitment efforts, we used a systematic review summarizing evidence-based strategies for recruitment [2] and a narrative review of key methodological steps in survey administration [1]. We incorporated a modified Dillman approach [7] in our recruitment strategies: we pre-contacted participants via personally addressed letters on McMaster University letterhead followed by a personally addressed e-mail or individual telephone call 10 days later to ascertain their participation [2,8]. We offered participants a $100 CDN gift certificate incentive upon completion of study materials. All participants submitting data received a personalized note of thanks. For all participants with outstanding submissions, we followed up with two reminder e-mails and/or telephone calls and resent the complete study package with the second email reminder, as per our protocol. Our protocol allotted and resourced for 6.5 months to complete participant recruitment and data collection.
Outcomes
Using a screening log, we recorded the number of eligible people and those approached to participate in the study [9]. Of those approached, we recorded the number of undeliverable letters, affirmative responses, active declines, and non-responses. We calculated the participation rate (number who agreed to participate over the total number approached) [10], and the recruitment efficiency (proportion of completed data submissions as a function of the number of letters sent) [9].
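The two metrics defined above reduce to simple ratios. The sketch below computes them from the counts reported in the Results section (note that the Results compute recruitment efficiency against the 838 approached rather than the 873 letters mailed, and that convention is followed here):

```python
# Recruitment metrics as defined in the text:
#   participation rate     = agreed to participate / total approached
#   recruitment efficiency = completed data submissions / total approached
def participation_rate(agreed, approached):
    return agreed / approached

def recruitment_efficiency(submitted, approached):
    return submitted / approached

# Counts reported in the Results section.
approached, agreed, submitted = 838, 240, 156
print(f"participation rate: {participation_rate(agreed, approached):.0%}")
print(f"recruitment efficiency: {recruitment_efficiency(submitted, approached):.0%}")
```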
Results
Recruitment and data collection took nearly twice as long as we anticipated. Of 838 pre-contacted, our participation rate was 29% (240). We received data from 65% (156/240) of the individuals who agreed to participate, representing a recruitment efficiency of 19% (156/838) of the original sample invited to participate. Of those who submitted data, 95% (148) used the online data portal, 7 submitted their data by electronic mail (word processing document), and 1 submitted their data by post. No respondents returned their data via fax. Of those participating and submitting data, we actively monitored each submission for complete data. We had no missing data for the main study primary outcomes.
We followed-up with 333 reminder e-mails and 61 telephone calls. Of the reminder e-mails sent, 215 were second follow-ups and contained a complete electronic survey package as per our protocol. Developers/researchers were more likely to participate than clinicians and policy-makers. Of those initially agreeing to participate, 8% (19) actively withdrew from the study and from 26% (63) we received no data. One policy-maker manager did not allow the participation of policy staff who already gave consent, accounting for 5 of 8 policy-maker withdrawals. Of the 19 withdrawals, 26% (5) occurred before randomization, 42% (8) were allocated to Condition 1 (the longer condition), and the remaining 32% (6) were allocated to Condition 2. Of the 63 who did not submit data, 43% (27) were allocated to Condition 1 and 57% (36) to Condition 2.
Discussion
Research productivity is dependent on timely receipt, analysis, and publication of data, which is ultimately dependent on study sample participation. The validity and generalizability of survey results are dependent on a high participation rate and representative sample. We incorporated the best available evidence to optimize our participation rates [2] and used previously reported response rate estimates to guide our recruitment efforts [5]. While the number of individuals who originally agreed to participate was 25% higher than our target sample size, we still missed our target by 17%.
Based on our experiences, and in contrast to previous research [5], we received 1 person's data for every 5 letters of invitation. Guideline developer/researcher recruitment was highest, probably reflecting their existing interests in this area. Clinician and policy-maker recruitment was more challenging. Our clinician recruitment rates were much lower than previous studies, where recruitment rates for medical oncologists, radiation oncologists, and cardiologists were 33.0%, 36.9%, and 28.4%, respectively (response rates unavailable for critical care) [5]. We found similar responses for policy-makers.
As has been found elsewhere [3,11], reasons for our low recruitment rate might include seasonality, lack of interest, limited time or lack of perceived relevance. Despite the three-fold difference in the total number of questionnaire items between Condition 1 (n = 134) and Condition 2 (n = 41), there was little impact as a function of study load; more participants who did not complete data came from Condition 2, the less demanding study condition. Of particular interest in our case, we learned that some policy-makers were actually dissuaded by their superiors from participating. Although this may be an isolated incident, this is an interesting finding nonetheless and suggests further fostering the much needed collaboration between the research and policy/decision-making entities. Further, lack of anonymity may have dissuaded others from participating in the study.
Health services research often relies on the participation of different stakeholder groups "in the field" to yield findings that can be useful and relevant to improve the system. Knowledge translation efforts depend on stakeholder involvement [12]. We need continued efforts to communicate the value of research between researchers and end-users of research (policy-makers, clinicians, and other researchers), integration of participatory research strategies [13], and promotion of the value of end-user involvement in research. Our research team included perspectives from each of the target groups we sought to recruit. However, given the breadth of coverage of stakeholder groups we sought to recruit (perspectives and geography), it may be that we did not include all "typical" phenotypes.
Conclusions
Based on the results from this study, we suggest that future studies aiming to engage similar stakeholders in HSR oversample by at least 5 times to achieve their target sample size and allow for participant withdrawals. Continued use of appropriate evidence-based strategies to increase survey response rates is important, with a particular emphasis on highlighting the relevance of the study to the prospective participants and the importance of their participation. Further, we suggest ongoing dialogue about how to best engage end-users. While our recruitment strategies for physicians and policy-makers were specific to the Canadian health care system, we suggest that the underlying principles are applicable to any systematic effort at identifying a population sample. Future research to understand methods of improving recruitment efficiency and engaging key stakeholders in HSR is warranted.

Table 1 notes: Oncologists included medical and radiation oncologists. Declined participation includes active declines and no responses. Two developers, who were not on our recruitment lists, volunteered to participate in the study. One developer, who also held cardiology credentials, was grouped into cardiologists. One oncologist, who was also a policy-maker, was grouped into policy/decision-makers. Participation rate = Agreed to participate/Total approached; Recruitment efficiency = Data received/Total approached.
Central Nervous System Vasculitis: Still More Questions than Answers
The central nervous system (CNS) may be involved by a variety of inflammatory diseases of blood vessels. These include primary angiitis of the central nervous system (PACNS), a rare disorder specifically targeting the CNS vasculature, and the systemic vasculitides, which may affect the CNS among other organs and systems. Both situations are severe and convey a guarded prognosis. PACNS usually presents with headache and cognitive impairment. Focal symptoms are infrequent at disease onset but are common in more advanced stages. The diagnosis of PACNS is difficult because, although magnetic resonance imaging is almost invariably abnormal, findings are nonspecific. Angiography has limited sensitivity and specificity. Brain and leptomeningeal biopsy may provide a definitive diagnosis when disclosing blood vessel inflammation and are also useful to exclude other conditions presenting with similar findings. However, since lesions are segmental, a normal biopsy does not completely exclude PACNS. Secondary CNS involvement by systemic vasculitis occurs in less than one fifth of patients but may be devastating. Prompt recognition and aggressive treatment are crucial to avoid permanent damage and dysfunction. Glucocorticoids and cyclophosphamide are recommended for patients with PACNS and for patients with secondary CNS involvement by small- and medium-sized vessel systemic vasculitis. CNS involvement in large-vessel vasculitis is usually managed with high-dose glucocorticoids (giant-cell arteritis) or glucocorticoids and immunosuppressive agents (Takayasu's disease). However, in large-vessel vasculitis, where CNS symptoms are usually due to involvement of extracranial arteries (Takayasu's disease) or proximal portions of intracranial arteries (giant-cell arteritis), revascularization procedures may also have an important role.
INTRODUCTION
The central nervous system (CNS) vasculature may be targeted by a heterogeneous group of inflammatory diseases. In its isolated, primary form, angiitis of the CNS (PACNS) is a rare form of vasculitis of unknown etiology primarily affecting small- and medium-sized vessels supplying the brain parenchyma, spinal cord and leptomeninges [1][2][3]. PACNS results in signs and symptoms of CNS dysfunction with no clinically apparent participation of other organs. The CNS may also be targeted, among other territories, by systemic vasculitides [4,5]. This review will focus on diagnostic and therapeutic aspects of PACNS and secondary CNS involvement by systemic vasculitides in adulthood. Primary and secondary CNS vasculitis in childhood have been addressed in excellent recent reviews [6][7][8].
Epidemiology
Because of the rarity of PACNS and the absence of definitive diagnostic tests, epidemiologic studies are virtually nonexistent. An annual incidence of 2.4 per million people has recently been estimated in North America [9]. PACNS has been reported in children [6][7][8] and in the elderly. However, it appears to be more frequent in males in their fourth and fifth decades of life [2,9]. PACNS may represent 1.2% of vasculitis involving the CNS [3].
*Address correspondence to this author at the Systemic Autoimmune Diseases Department, Hospital Clínic, Villarroel 170, 08036 Barcelona, Spain; Tel: +34 93 2275774; Fax: +34 93 2271707; E-mail: mccid@clinic.ub.es
Pathogenesis
The pathogenesis of PACNS is unknown. Similar to other chronic inflammatory or autoimmune diseases, PACNS is thought to be triggered by infection. Cytomegalovirus, Epstein-Barr virus, varicella-zoster virus, human immunodeficiency virus, mycoplasma and chlamydia have been considered given the ability of these agents to produce vasculitic lesions [10][11][12][13][14][15]. However, in the majority of patients with PACNS a potential relationship with these or other infectious agents cannot be demonstrated.
The granulomatous nature of the vascular inflammatory lesions in most cases suggests a Th1-mediated response [3,16]. Th1-related cytokines may promote vascular inflammation in PACNS as suggested by several experimental models. Intracerebral injections of interferon-gamma have been shown to trigger inflammatory lesions and vasculitis in rats [17]. Tumor necrosis factor (TNF) and interleukin-6 proinflammatory functions may also contribute to vascular inflammation in PACNS [18,19]. TNF/TNF receptor p75 transgenic mice develop multifocal CNS ischemic injury secondary to vasculitis [18]. Elevated CSF IL-6 has been found in 3 patients with different types of vasculitis (polyarteritis nodosa, temporal arteritis and Behcet's disease) involving the CNS [19]. Current knowledge of the pathophysiology of PACNS is very limited, delaying progress in the diagnosis and management of affected patients.
Pathology
PACNS typically involves small- and medium-sized arteries and veins, especially those located in the leptomeninges and subcortical areas. The characteristic histopathologic findings consist of inflammatory infiltration of vessel walls by T lymphocytes and activated macrophages which undergo granulomatous differentiation with giant-cell formation [3,16]. Inflammatory cells infiltrate the adventitia and subsequently progress through the artery wall causing fragmentation of the internal elastic lamina. Intimal proliferation and fibrosis leading to vascular occlusion is frequently observed [3,16] (Fig. 1). This granulomatous pattern is the most commonly seen and led to the previously used term granulomatous angiitis of the CNS [3,16,20]. However, granulomatous features may not always be observed and some specimens disclose the so-called atypical CNS angiitis patterns consisting of predominantly lymphocytic infiltrates (lymphocytic pattern), necrotizing vasculitis with fibrinoid necrosis (necrotizing pattern) or mixed patterns [20]. In some cases, B lymphocytes and plasma cells can also be observed [21]. Vascular amyloid deposits may be found in a subset of patients [20].
Although most patients with PACNS present primarily with CNS dysfunction, necropsy studies may disclose clinically asymptomatic vasculitis in additional locations including lungs, kidneys and gastrointestinal tract [3,5,16]. Distinction from systemic vasculitis with prominent CNS involvement may be sometimes difficult to establish.
Clinical Manifestations
Depending on the areas of the brain involved, PACNS may convey a wide variety of clinical findings. Moreover, disease severity and rapidity of progression may be highly variable among patients, increasing heterogeneity in clinical presentation.
In order to facilitate clinical recognition and early diagnosis, clinical manifestations have been grouped in three major phenotypes: 1) Acute or more commonly subacute encephalopathy, presenting as a confusional syndrome with progression to stupor and coma; 2) Disease presentation resembling atypical multiple sclerosis with a variety of focal symptoms such as optic neuropathy, brain stem episodes, seizures, headaches, encephalopathic episodes or hemispheric stroke-like events and 3) Intracranial mass lesions, with headache, drowsiness, focal signs and elevated intracranial pressure [24,25].
It has also been suggested that predominant involvement of small versus medium-sized vessels may influence disease presentation. Small-vessel PACNS manifests as a subacute or acute encephalopathy with persistent headaches, cognitive impairment, confusion, and seizures. MRI usually discloses marked meningeal contrast enhancement whereas angiography may not reveal changes because the affected vessels are small, beyond the detection threshold [26,27]. This form of PACNS may respond to glucocorticoid monotherapy but 25% of patients relapse. In contrast, when medium-sized vessels are involved, in addition to headaches and general CNS dysfunction, focal neurologic deficits and stroke are more common and angiography is more likely to reveal vascular abnormalities [9,26,27]. Four clinical features are associated with an increased mortality in patients with PACNS: focal neurological deficit, cognitive impairment, cerebral infarction and involvement of larger vessels [9].
General symptoms and findings suggesting some extent of systemic involvement may occur. Fever, weight loss, livedo reticularis, rash, peripheral neuropathy, arthritis and night sweats may be recorded in 20% of patients [2,9].
Diagnosis
The diagnosis of PACNS is a challenge because of the lack of highly sensitive and specific diagnostic tests. Clinical, analytical, neuroimaging, and histopathologic data are important, both in supporting the diagnostic suspicion and in excluding other conditions which may present with similar features.
Laboratory Test Abnormalities
Routine laboratory tests are frequently within the normal range [2,9,28]. In some patients features of systemic inflammatory response including anemia, leukocytosis and moderately increased acute phase reactants (ESR, C-reactive protein and platelet counts) can be observed [2,9]. Laboratory tests are useful to rule out other diseases which may present with similar symptoms such as infection, systemic vasculitis, malignancy, drug abuse and hypercoagulability states [5,28,29].
Cerebrospinal fluid (CSF) is abnormal in 80-90% of patients [9]. Increased protein concentration is the most common finding. In a series of 101 patients, mean CSF protein concentration was 7 g/L (range 1.5-10.3 g/L) [9]. Pressure is increased in 50% of patients and elevated lymphocyte counts may be observed in 50-80%. CSF oligoclonal immunoglobulins may be found in up to 50% of individuals with PACNS [5,23]. CSF pleocytosis is modest, rarely exceeding 250 cells/μL. Higher leukocyte counts and the presence of neutrophils are uncommon and, when present, should alert for possible infection [2]. CSF analysis is useful to exclude infection and malignancy and appropriate bacterial and fungal stains, viral polymerase chain reactions, and flow cytometry studies should be performed.
Magnetic Resonance Imaging (MRI) and Magnetic Resonance Angiography (MRA)
MRI is sensitive but not specific in revealing changes associated with PACNS [30]. Lesions are frequently multiple and bilateral and include parenchymal or meningeal enhancing areas, ischemic areas or infarcts in the cortex, deep white matter, or periventricular white matter (Fig. 1A). It may also disclose hemorrhagic lesions [31,32]. The sensitivity of MRI in biopsy-proven PACNS is very high, disclosing abnormalities in 97% of cases [22,[32][33][34] but abnormal findings are nonspecific. Diffusion weighted imaging is highly sensitive in detecting diffusion abnormalities and may be useful in patients with normal MRI [35]. MRA has limited sensitivity and is only able to disclose abnormalities in the largest intracranial vessels. The same limitations apply to CT-angiography [33,34].
Conventional Angiography
Conventional angiography is the most specific imaging technique for the diagnosis of PACNS and, compared to MRA, is able to detect abnormalities in smaller vessels. Typical angiographic features of PACNS include multiple "beading" or segmental narrowing in large, intermediate, or small arteries with interposed regions of ectasia or normal luminal architecture [31][32][33] (Fig. 1D). Beading may be smooth or irregular and typically occurs bilaterally. Additional changes include aneurysms, collateral flow, isolated areas of vessel narrowing in multiple branches, circumferential or eccentric vessel irregularities, multiple occlusions with sharp cutoffs, and apparently avascular mass lesions [31][32][33].
Although findings from CNS conventional angiograms may support the diagnosis of PACNS and can be used to direct the site of biopsy, none of these findings alone is diagnostic because similar images can be present in other diseases (Tables 1 and 2) [2,5,22,28,[36][37][38].
Although essential for diagnosis, angiography has limited sensitivity and specificity. Patients with biopsy-proven PACNS may have normal-appearing angiograms and, conversely, biopsies of angiographically abnormal vessels have been reported as normal [2,5,28]. The sensitivity of angiography in detecting PACNS ranges from 20% to 90% [1,9,31,35,37,38] and specificity from 20% to 60% [1,9,31,34]. The sensitivity of cerebral angiography decreases along with the caliber of the involved vessels, being most sensitive for involvement of large- and medium-sized vessels. Angiography is not free of side effects. About 0.8% of patients subjected to angiography experience additional neurologic deficits as an adverse event related to the procedure [32]. However, given the severity of PACNS and the difficulties in achieving an accurate diagnosis, the risk/benefit balance is acceptable and conventional angiography is recommended as a key diagnostic procedure.
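The practical consequence of the limited specificity quoted above can be made concrete with a standard Bayes' theorem calculation of the positive predictive value. The sensitivity, specificity, and pre-test probability below are assumed illustrative values chosen from within the ranges given in the text, not figures from any study:

```python
# Positive predictive value (PPV) of a diagnostic test from sensitivity,
# specificity and pre-test probability, via Bayes' theorem.
# The numbers used below are illustrative assumptions only.

def ppv(sensitivity: float, specificity: float, pretest: float) -> float:
    """P(disease | positive test)."""
    true_pos = sensitivity * pretest            # diseased and test-positive
    false_pos = (1 - specificity) * (1 - pretest)  # healthy but test-positive
    return true_pos / (true_pos + false_pos)

# Assume 60% sensitivity, 40% specificity, and a generous 10% pre-test
# probability of PACNS in the tested population (all hypothetical).
print(f"PPV: {ppv(0.60, 0.40, 0.10):.1%}")
```

With these assumed numbers most angiographic "positives" would be false positives, which is consistent with the text's insistence that angiographic findings alone are never diagnostic and must be weighed alongside clinical, CSF, and histopathologic data.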
Histopathologic Examination
Brain biopsy is considered the gold standard for the diagnosis of PACNS but reveals diagnostic histopathologic abnormalities in only 50% to 75% of cases [1] (Fig. 1B and C). The role of brain biopsy in PACNS is not limited to proving inflammation of blood vessels: it is also important for excluding other conditions such as infection, malignancy, or degenerative diseases for which completely different treatment approaches are required (Table 1) [5,27].
In the largest series of PACNS patients undergoing surgical biopsy, including 43 patients, diagnostic sensitivity of brain biopsy was 63% [20]. In this series, the distribution of the various morphologic patterns was as follows: acute necrotizing (14%), purely lymphocytic (28%) and granulomatous (58%), with no statistically significant differences in disease aggressiveness or response to treatment among them. Interestingly, 78% of the biopsies directed to an imaging abnormality were diagnostic, whereas none of the blind biopsies demonstrated vasculitis. Biopsies including leptomeninges were slightly more sensitive in detecting vasculitis than those not including it (58% vs. 40%). In accordance with these results, other authors have reported a sensitivity of brain biopsy around 50% [2,16]. The high proportion of negative biopsies in patients with clinical and radiographic features highly suggestive of PACNS may be explained by the segmental nature of lesions. Moreover, biopsies are usually taken from the superficial parenchyma and leptomeninges and, in some instances, involved vessels are of greater size and are located deeper from these areas [20]. To maximize the diagnostic sensitivity of the procedure it is recommended that biopsies are performed in abnormal areas detected by previous imaging and include leptomeninges. Stereotactic biopsy is recommended for mass lesions only [20,25].
Occasionally, amyloid deposits can be observed [20,25]. These are more frequently found in samples with a granulomatous pattern and those presenting as mass lesions [20,25]. Clinically, patients with amyloid deposits are older and more frequently present with acute onset and cognitive impairment [39]. Clinical outcome and response to treatment seem to be similar to those of patients with no amyloid deposits [39].
Diagnostic Criteria
Since histopathologic confirmation of PACNS is not always feasible, Calabrese and Mallek proposed a series of diagnostic criteria combining clinical, imaging and histopathologic findings [1]. These include: 1) neurologic deficit that remains unexplained after a vigorous diagnostic workup, including lumbar puncture and neuroimaging studies, 2) angiographic abnormalities highly suggestive of vasculitis or histopathologic evidence of vasculitis within the CNS and 3) no evidence of systemic vasculitis or any other condition to which the angiographic or pathologic findings can be attributed. These conditions are listed in Table 1 (Fig. 2).
Treatment
No randomized controlled trials or prospective studies have been performed with patients with PACNS. Therefore, therapeutic recommendations are based on extrapolation of data obtained from trials performed in other severe systemic vasculitides, retrospective studies, small case series and expert opinion [2,5,40]. In a retrospective review of treatments received by 101 patients diagnosed with PACNS (70 by angiography, 31 by biopsy), Salvarani et al. found that 97 patients were treated with glucocorticosteroids, 25 of them with 1 g intravenous methylprednisolone pulses and the remaining with oral prednisone at a median dose of 60 mg/day [9]. Forty-nine patients received an immunosuppressive agent: 46 cyclophosphamide (oral at 150 mg/day or intravenous at around 1 g/month) and 3 azathioprine. A favorable response was observed in 81% of the patients treated with glucocorticoids alone and in 81% of those receiving both prednisone and cyclophosphamide. Given the retrospective nature of the survey it is not possible to conclude that immunosuppressive agents are not necessary since the group receiving cyclophosphamide may have been considered more severe by treating physicians.
Treatment with glucocorticoids (oral prednisone or equivalent at 60 mg/day preceded by three 1 g intravenous pulses in severe cases) should, then, be started as soon as CNS vasculitis (primary or secondary) is clinically suspected and infectious diseases reasonably excluded. Prednisone can be quickly tapered if the diagnosis is eventually ruled out. When the diagnosis of CNS vasculitis is also supported by angiography or biopsy and mimics are convincingly excluded, cyclophosphamide (oral at 150 mg/day or 1 g monthly pulses) is recommended. Pulse intravenous cyclophosphamide has equivalent efficacy in inducing remission but is less toxic than daily oral cyclophosphamide in systemic vasculitis [40]. By analogy to severe systemic vasculitis, switch to a safer immunosuppressive agent (azathioprine, methotrexate or mycophenolate) may be considered after 4-6 months of cyclophosphamide treatment [40][41][42][43]. All patients should be given calcium and vitamin D, bone protection agents and Pneumocystis infection prophylaxis [5].
Recently, it has been shown that rituximab is as effective as cyclophosphamide in inducing remission in severe ANCA-associated systemic vasculitis [44,45]. Rituximab has also been successful in treating SLE patients with CNS involvement [46], but there is no experience with rituximab in PACNS. Two glucocorticoid- and cyclophosphamide-refractory cases responding to TNF blockade have been reported [47].
Immunosuppressive treatment should be maintained for 2-3 years [2,5]. It is important to keep in mind that about 25% of patients may relapse [9]. Response to treatment must be monitored by periodic neurologic evaluation and serial MRI examination every 3-4 months [2,28].
REVERSIBLE CEREBRAL VASOCONSTRICTION SYNDROME (RCVS)
RCVS is a recently proposed term to describe the physiopathologic substrate of a group of conditions characterized by prolonged but reversible vasoconstriction of the cerebral arteries [48]. Previously, these syndromes were referred to as benign angiopathy of the central nervous system and, for many years, there has not been a clear distinction between RCVS and true primary angiitis of the CNS. RCVS has received a variety of names: Call-Fleming syndrome, thunderclap headache with reversible vasospasm, migrainous vasospasm or migraine angiitis, postpartum angiopathy, or drug-induced cerebral arteritis or angiopathy [48].
RCVS may occur spontaneously but in most instances is associated with precipitating factors including the use of vasoactive substances (e.g., ergotamine derivatives, amphetamines and nasal decongestants), other drugs (e.g., selective serotonin-reuptake inhibitors, contraceptives), recreational drugs (cannabis, ecstasy, LSD, cocaine, alcohol), late pregnancy or puerperium, sexual intercourse, and catecholamine-producing tumors [48][49][50]. The most characteristic initial clinical manifestations include hyperacute, severe and recurrent headache that can be associated with neurologic symptoms and signs [48]. Headache is usually diffuse although it may also be localized, preferentially in the occipital area, and may be associated with nausea, vomiting and photosensitivity. Other clinical manifestations include visual dysfunction, transient ischemic attacks and seizures [48]. The major complication of RCVS is stroke that can eventually lead to permanent sequelae and even death [48,49]. Although the pathophysiology of RCVS is not known, the prevailing hypothesis considers that there is a transient disturbance in the control of cerebral vascular tone [48].
In the largest series reported, including 67 patients [49], there was a female predominance (67%) with a mean age at diagnosis of 42.5±11.8 years (range 19-70 years). Precipitating factors were identified in 63%, the use of vasoactive substances being the most frequent (55%). The presenting symptom in all cases was recent severe headache, and this was the only symptom in 76%. Among the 67 patients, 94% had multiple thunderclap headaches (mean of 4.5 episodes) that recurred over a mean period of 1 week. In this series, early complications (within the first week) included cortical subarachnoid hemorrhage (22%), reversible posterior leukoencephalopathy (9%), intracerebral bleeding (6%) and seizures (3%). Delayed complications (after the first week) included transient ischemic attack in 16% and cerebral infarcts in 4%. The overall outcome in this series was good, with no relapses during a 16±12.4 month follow-up period, and only 4% of patients had persistent neurological deficits.
In the absence of validated diagnostic criteria, Calabrese et al. [48] proposed a set of key elements required for the diagnosis of RCVS. These include severe, acute headaches, with or without additional neurologic signs or symptoms, normal or near-normal cerebrospinal fluid analysis, neuroimaging tests (transfemoral angiography, CT angiography or MRA) documenting multifocal segmental cerebral artery vasoconstriction, with no evidence for aneurysmal subarachnoid hemorrhage, and reversibility of angiographic abnormalities within 12 weeks [47][48][49]. Treatment usually consists of calcium-channel blockers [48][49][50][51] and brief glucocorticoid courses [50,52].
The distinction of PACNS and RCVS is important because of the different prognosis and treatment requirements. Key elements for distinction have been proposed [2,48] and are summarized in Table 2. PACNS typically affects middle-aged men whereas RCVS is primarily a disease of women between 20-40 years. In the latter, almost 60% of patients report a precipitating event [48], usually exposure to vasoactive substances. Headache in PACNS is indolent and progressive [9] whereas headache in RCVS is acute and severe [2,48,49]. Unless complicated by bleeding or infarct, MRI does not disclose major changes in RCVS whereas MRI is abnormal in 97% of cases with PACNS [9,50]. By definition, angiographic abnormalities substantially or completely reverse within approximately 3 months.
SYSTEMIC VASCULITIDES INVOLVING THE CNS
The CNS vasculature can be targeted by systemic vasculitis ( Table 3). Usually CNS involvement coexists with other clearly apparent systemic manifestations but some patients may present primarily with prominent symptoms of CNS dysfunction [4,5,53]. In systemic vasculitis targeting small- and medium-sized vessels, CNS involvement is a predictor of poor/guarded prognosis [54,55] and is one of the factors considered to recommend aggressive treatment with cyclophosphamide in addition to high-dose steroids [40,54,55]. However, in large-vessel vasculitis, CNS involvement may benefit from vascular intervention procedures (angioplasty, derivative surgery), antiplatelet or anticoagulation treatment in addition to high-dose glucocorticoids rather than intensification of immunosuppressive therapy [56][57][58].
CNS Involvement by Small and Medium Sized Vessel Vasculitis
Globally, CNS involvement is infrequent in small- and medium-sized vessel vasculitis, including Wegener's granulomatosis, microscopic polyangiitis, Churg-Strauss syndrome, polyarteritis nodosa, cryoglobulinemic vasculitis, and Behçet's disease. CNS involvement occurs in less than 15% of patients in most series.
Cerebral vasculitis is the most frequent CNS lesion and may present with headache, visual disturbances, seizures, confusion, ischemic stroke, intracerebral or subarachnoid haemorrhage, venous thrombosis or dementia [62,63]. Granulomatous inflammation and thickening of the dura mater (pachymeningitis) may present with chronic headache, multiple cranial nerve palsies, seizures, meningeal signs, encephalopathy, proptosis, limb palsy or ataxia [62][63][64][65]. Pituitary involvement leads to central diabetes insipidus, panhypopituitarism or a combination of hormone deficiencies [66]. In these patients, MRI is the imaging technique of choice because it can reveal ischemic or hemorrhagic lesions, dural thickening, pituitary involvement or enhancement of inflamed orbital and paranasal mucosa [63]. In the case of dural involvement, tissue biopsy may disclose granulomatous pachymeningitis [66].
Microscopic Polyangiitis (MPA)
In a series of 85 patients, CNS involvement was present in 10 cases (11.8%) and CNS vasculitis was the cause of death of one of them [67].
There are only scattered case reports of CNS manifestations related to MPA in the literature. Multiple bilateral cerebral infarctions [68], multiple hemorrhagic infarction of the cerebral cortex caused by CNS vasculitis [69], capsular warning syndrome and subsequent stroke [70] and pachymeningitis have been occasionally reported [71,72].
Cerebral infarction is the most frequently reported manifestation of CNS involvement [75,77], probably as a result of cerebral vasculitis (Fig. 3). Additional less commonly reported CNS events include intracerebral haemorrhage [78,79] and pachymeningitis [80,81]. Fig. (3A). Multiple brain infarcts in a patient with Churg-Strauss syndrome. B) CT scan from the same patient disclosing pulmonary infiltrates and bilateral pleural effusion. Thoracocentesis disclosed predominance of eosinophils in pleural fluid exudate.
Polyarteritis Nodosa (PAN)
In a recent series of 348 patients diagnosed with PAN over a 42-year period, 4.6% presented with central nervous system-related abnormalities [82]. Earlier studies reported a higher prevalence, between 15 and 65% [83]. Perhaps earlier recognition of the disease with prompt treatment now prevents the development of severe complications. It is important to remark that widespread ANCA and cryoglobulin testing has led to re-classification of a substantial proportion of patients with necrotizing vasculitis previously diagnosed with PAN, which, in fact, has become a much more infrequent disease [84].
In an extensive literature review, three major clinical presentations related to CNS involvement have been recognized in PAN: 1) diffuse encephalopathy characterized by cognitive impairment, disorientation or psychosis (8% to 20%), 2) seizures (focal or generalized) and 3) focal neurologic deficits [83]. Accelerated hypertension may also contribute to diffuse encephalopathy in some patients [83]. Abnormal findings reported in neuroimaging studies (MRI and CT scan) include cerebral infarctions located in the brain (cortical or subcortical), cerebellum or brainstem and cerebral hemorrhages [85,86] (Fig. 4). Fig. (4A). Hemorrhagic brain infarct in a patient with systemic polyarteritis nodosa. This patient also had hypertension, postprandial abdominal pain, multineuritis and livedo reticularis. B) Skin biopsy of the same patient disclosing necrotizing arteritis in the subcutaneous tissue.
Cryoglobulinemia
CNS involvement is uncommon in cryoglobulinemic vasculitis. In a retrospective series of 209 patients [87], CNS involvement was detected in 3. In a prospective study of 40 patients with mixed type II cryoglobulinemia vasculitis [88] specifically investigating signs of CNS dysfunction, 89% of the patients had some cognitive impairment, with attention being the aspect most commonly altered (70.3%), followed by alterations in executive functions and visual construction. Whether these abnormalities are due to CNS vasculitis, co-morbidities, glucocorticoid, immunosuppressive or antiviral treatments, or a combination of factors is unclear.
Clinical features of CNS involvement in cryoglobulinemia include encephalopathy, stroke, transient ischemic attacks, lacunar infarctions and hemorrhage [89,90]. Most of the cases reported are associated with hepatitis C virus infection.
Behçet's Disease
The frequency of neurological involvement in Behçet's disease ranges from 5.3% to 14.3% in prospective studies [91,92]. Neuro-Behçet occurs more frequently in patients aged 20 to 40 years and is 2-8 times more frequent in men than in women. Neurological manifestations commonly appear when other systemic features are present. CNS involvement is the first disease manifestation in less than 6% of patients with neuro-Behçet [93]. CNS involvement in Behçet's disease may occur through 2 major mechanisms: meningoencephalitis and vascular disease.
Meningoencephalitis is usually subacute and predominantly involves the brainstem but may extend to basal ganglia, thalamus, cortex and white matter [93,94]. The spinal cord and cranial nerves may also be affected. In the largest series of patients with neuro-Behçet [92] the most common clinical symptoms were pyramidal signs (96%), hemiparesis (60%), behavioural changes, headache and sphincter disturbance or impotence. Less common manifestations were paraparesis, meningeal signs, movement disorders, brainstem signs, seizures, hemianopsia, aphasia, psychiatric disturbances or cerebellar syndrome. CSF analysis was abnormal in 70-80% of cases, disclosing moderately elevated protein concentration and pleocytosis with neutrophilia at early stages [89]. MRI discloses hyperintense T2 lesions with contrast enhancement and edema. Lesions are usually unilateral and are located in the upper brainstem extending towards the thalamus and basal ganglia [95]. Tumor-like lesions may occasionally occur [93].
The most common manifestation of vascular neuro-Behçet is central venous thrombosis with signs and symptoms of intracranial hypertension, including papilledema. Intracranial aneurysms and ischemic stroke may also occur but are infrequent complications. Combined parenchymal and vascular involvement may be seen in 20% of patients with neuro-Behçet [93]. Patients with neuro-Behçet are treated with high-dose glucocorticoids and cyclophosphamide. Blocking TNF with infliximab may be useful in refractory patients.
Large Vessel Vasculitis
Both giant-cell arteritis of the elderly and Takayasu's disease may involve the CNS.
Giant Cell Arteritis
GCA preferentially targets the cranial vessels. Consequently, the most common ischemic complications occur in territories supplied by the carotid and vertebral arteries. Although GCA is considered a large- to medium-sized vessel vasculitis, small cranial vessels are frequently affected [96] and the most frequent ischemic complication, visual loss, derives from involvement of the small arteries supplying the optic nerve [97][98][99][100]. Visual loss occurs in 15-20% of patients [97][98][99][100]. In 80-90% of cases visual impairment is due to anterior ischemic optic neuropathy secondary to involvement of the posterior ciliary arteries supplying the optic nerve [101,102]. Occlusion of the retinal artery is less frequent and underlies visual loss in 10% of cases [99,100].
Ischemic stroke or multi-infarct dementia occurs in 3-6% of patients and is due to inflammatory involvement of the intracranial branches of the carotid and vertebral arteries [97,100,103,104]. When performed, ultrasonography of the supraaortic branches is frequently normal [103,104]. Usually, inflammation is limited to the most proximal, extradural part of these arteries. In some series, strokes are more frequent in the vertebrobasilar territories, in contrast to atherosclerotic occlusions, which are more frequent in the carotid branches [103]. Brain infarcts are frequently multiple, indicating involvement of various branches, reduced flow from proximal stenosis, distal embolization of proximal thrombi, or a combination of these [97,103,104] (Fig. 5A). Although thrombosis is uncommonly seen in temporal artery biopsies, necropsy studies of patients dying from GCA-related stroke frequently disclose thrombosis as a precipitating event [100]. Mortality of GCA-related stroke is about 30% [103,104].
Stroke is more frequent among individuals with visual loss, indicating that some individuals may be more prone to develop intracranial involvement and related complications [97,98]. Several studies indicate that individuals with prominent extracranial large-vessel involvement are less prone to develop cranial ischemic complications, suggesting heterogeneity in the patterns of vascular targeting by GCA [105][106][107]. Several studies also indicate that traditional vascular risk factors are more frequent and the systemic inflammatory response weaker in patients with GCA-related ophthalmic and neurologic ischemic complications, making early diagnosis and follow-up more difficult [97][98][99][100]108]. High-dose glucocorticoids usually prevent progression of visual impairment. Intravenous methylprednisolone pulses are usually administered in this setting, but there is no proof that this approach is more effective than the standard daily 60 mg dose. In 10-27% of patients presenting with visual symptoms, vision may continue to deteriorate during the first 1-2 weeks after the beginning of glucocorticoid treatment [99]. Antiplatelet or anticoagulant therapy is usually given in these circumstances, with variable results [99,101,102]. After this initial period, the risk of developing subsequent disease-related visual loss is low, about 1% over 5 years [109].
Stroke frequently occurs during the first weeks after the initiation of glucocorticoid treatment. Besides adding antiplatelet agents, anticoagulants, or both, the classical approach to this situation has been to intensify glucocorticoid and immunosuppressive therapy. However, a recent report indicates that some patients with proximal lesions may benefit more from percutaneous intracranial angioplasty [57] (Fig. 5B).
Takayasu Arteritis
Nonspecific neurologic manifestations such as headache, dizziness of variable intensity, and lightheadedness are highly frequent in patients with Takayasu's arteritis, occurring in 57-90% of patients in most series [58,110,111] (Fig. 6). More severe complications include visual disturbances or visual loss, syncope, transient ischemic attacks and stroke. Most of these symptoms and complications can be related to extracranial steno-occlusive lesions in the subclavian (with subsequent arm-steal syndrome), carotid and vertebral arteries, which result in decreased cerebral blood flow [112,113]. Stroke occurs in less than 10% of cases in large cohorts, but it is among the leading causes of premature death in these patients [58]. Strokes are usually ischemic, and secondary thrombosis of stenotic vessels with subsequent embolization may be a precipitating event. It is important to note that cardiomyopathy secondary to aortic valve insufficiency due to aortic root dilatation or hypertension occurs in about 10% of patients with Takayasu's disease and may also result in thromboembolic strokes [58]. Hemorrhagic stroke related to hypertension has also been reported [112]. Intracranial artery involvement seems to be uncommon. A recent prospective study using ultrasonography and MRI in 17 patients with neurologic symptoms disclosed signs of intracranial involvement in 7 patients [113]. However, no angiography was performed, and it was not possible to discern whether these findings were related to vasculitis or previous embolization. Autopsy studies including the brain are scarce in Takayasu's disease, but intracranial involvement seems to be unusual. Nevertheless, at least one patient with vasculitis of intracranial arteries has been reported [114].
Glucocorticoids and, in most instances, immunosuppressive agents are mandatory to induce and maintain remission in patients with Takayasu disease. Cyclophosphamide and methotrexate have been useful in open-label studies, and mycophenolate has also been tried in small case series [58,110,115]. Because of its side effects, cyclophosphamide is usually avoided and other immunosuppressive agents are preferred, since Takayasu disease is a relapsing condition usually targeting young women [58,110,115]. TNF blockade has provided benefit to patients refractory to other therapies [116]. Angioplasty, stenting and bypass surgery are very important in the management of severe neurological involvement [56,58]. For better results, revascularization procedures should, when possible, be avoided during periods of active disease and performed in patients in remission [115].
CONCLUSIONS
CNS vasculitis, either primary or complicating systemic vasculitis, is uncommon. However, CNS involvement is a major determinant of severity, morbidity and mortality in patients with vasculitis. Diagnosis of PACNS is a challenge and requires a high index of clinical suspicion. Diagnosis is supported by neuroimaging and histologic data but requires exclusion of other conditions with the appropriate work-up. Neuroimaging techniques are pivotal not only to support the diagnosis but also in the follow-up of affected patients. PACNS or CNS involvement by systemic vasculitis requires prompt recognition and aggressive treatment in order to reduce mortality and preserve function.
How to Enhance MSMEs Readiness? an Empirical Study in Semarang Municipality
The lack of full readiness among the existing Micro, Small and Medium Enterprises (MSMEs) in Semarang City to face the ASEAN Economic Community (AEC) calls for an effort to improve the quality and competitiveness of these enterprises. Given this problem, this research had two objectives: 1) to evaluate the readiness of Semarang Municipality's MSMEs from various aspects in facing the ASEAN Economic Community (AEC), and 2) to prepare strategies to improve the business quality and capacity of Semarang Municipality's MSMEs. This research uses descriptive and SWOT analyses. The results indicate that MSMEs in Semarang Municipality are ready to face AEC. The obstacles standing in their way include poorly arranged administration, the absence of product standardization, and marketing. The strategies practicable to improve MSME businesses in Semarang Municipality include improving and extending the marketing network and organizing training, assistance and technology upgrades for MSMEs to improve their product standard, value and quality.
INTRODUCTION
Indonesia is currently racing against time in welcoming the implementation of the free market of Southeast Asia, the so-called ASEAN Economic Community (AEC), which began on December 31, 2015. In facing the implementation of AEC 2015, Indonesia still faces both external and internal challenges. The external challenges include the increasingly competitive level of trade, the growing deficit of Indonesia's trade balance with other ASEAN countries, and the question of how Indonesia can increase its investment attractiveness. Meanwhile, Indonesia's internal challenges include the low public understanding of the AEC, the unpreparedness of the regions facing the AEC, the widely varying levels of regional development, and the condition of Indonesia's human resources and employment.
Thus, in facing the 2015 AEC, Indonesia still has much homework to do in order to achieve sustainable national economic growth. This is because national economic growth is largely determined by the dynamics of the regional economy, while the regional economy is generally supported by small and medium-sized economic activities.
The Micro, Small and Medium Enterprises (MSMEs) sector has proven resilient in facing crises, and has even shown rapid growth. Based on the HSBC survey of the 51 million registered MSME businesses, 37 percent will expand their business and 16 percent will increase their number of employees.
This shows that MSMEs have a large multiplier effect in the national economy. In addition, approximately 60 percent of current GDP is related to the MSME sector (Saefudin and Dinar, 2013).
Business units included in the MSME category are the lifeblood of the regional and national economy. It is thus evident that MSMEs are a formidable business sector amid slowing economic growth. Currently, around 97% of economic actors are MSME business actors, which continue to grow significantly and have become a business sector capable of supporting the stability of the national economy. In other words, MSMEs have become more resilient and remain optimistic in the midst of crisis.
The Micro, Small and Medium Enterprises (MSMEs) sector is a tough business amid the current decelerating economic growth. With the potential that MSMEs have, they are expected to be able to survive in the free-market era of AEC. According to Skokan Karel (2013), the very existence of MSMEs in a nation's economic arena ought to have the government's full attention, given the magnitude of the benefits these enterprises bring to the nation's economic growth. They should always be encouraged in order to be able to overcome their various weaknesses, compete, and not be overwhelmed by competitors from other countries.
The number of micro and small-sized enterprises in Semarang Municipality, Central Java Province, has been increasing every year, indicating that productive economic growth is taking place, as shown by the increasingly favourable and conducive growth and climate of micro and small-sized businesses. Reality shows that when an economic crisis occurs, micro and small-sized enterprises are more resistant than their bigger counterparts. The problems of Semarang Municipality's MSMEs generally lie with human resources, capital, and mastery of modern technology, leading to weak competitiveness against imported products. This will get even harder after AEC takes effect. Without intervention, MSMEs, which have been known for their tenacity and resilience, will eventually fall. Considering the problems above, there is a need for the government of Semarang Municipality, relevant offices, and society to pay more attention to the development of Semarang Municipality's MSMEs to enable them to grow to be even more competitive alongside other economic agents.
There is also a need for the government of Semarang Municipality to pass future policies which make the situation more conducive for MSMEs to grow and develop. This research aims at (1) evaluating the readiness of Semarang Municipality's MSMEs from various aspects in facing AEC, and (2) preparing strategies to improve the business quality and capacity of Semarang Municipality's MSMEs in multiple aspects.
RESEARCH METHODS
This study used primary and secondary data. The data were collected by sampling each business field through proportional sampling, where the numbers of samples and respondents taken were proportional to the population of micro, small and medium enterprises (MSMEs) in each type of business. Meanwhile, the total population in this case was the number of outstanding MSMEs in Semarang City, namely 110 MSMEs. The sample size was determined using the Slovin formula with an error tolerance of 10%, yielding a sample of 53 MSMEs.
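The Slovin calculation above can be reproduced as follows; this is a minimal illustration (the function name and the rounding-up convention are our own, but the formula n = N / (1 + N·e²) is the standard Slovin formula the study cites):

```python
import math

def slovin_sample_size(population: int, error_tolerance: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to whole respondents."""
    n = population / (1 + population * error_tolerance ** 2)
    return math.ceil(n)

# 110 outstanding MSMEs in Semarang City with a 10% error tolerance
print(slovin_sample_size(110, 0.10))  # -> 53
```

With N = 110 and e = 0.10, the formula gives 110 / 2.1 ≈ 52.4, which rounds up to the 53 respondents reported in the study.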
This study used two approaches, quantitative and qualitative. The quantitative approach was used to determine the weights of the strengths, weaknesses, opportunities and threats of MSMEs in Semarang City. Meanwhile, the qualitative approach was used to describe the phenomena occurring among MSMEs in Semarang City.
To evaluate the readiness of MSMEs in facing the AEC, descriptive analysis was used. Meanwhile, to identify the efforts to be made by the Government to support the readiness of MSMEs in facing the AEC, the researchers used SWOT analysis.
SWOT analysis was needed to determine the strategies and efforts to be undertaken by the Semarang City Government to support the readiness of MSMEs in facing AEC.
RESULTS AND DISCUSSION
The development of MSMEs in the city of Semarang is very rapid. The number of micro and small businesses in Semarang City has increased annually, indicating productive economic growth driven by an increasingly favourable and conducive climate for micro and small enterprises.
In addition, the most dominant MSME sector in Semarang City was food. This business accounted for 48 percent of the total number of MSMEs, while the handicraft sector ranked second with 29 percent, followed by the fashion sector with 23 percent.
In facing the current AEC challenges, MSMEs should be able to prepare themselves to survive in situations that may be difficult for them. The most common form of self-preparation, improving product quality, has been undertaken by most MSME actors in the city of Semarang, namely 73 percent of the total. However, preparation in terms of marketing and improvement of human resource quality was undertaken by only a few MSME actors, namely 13 percent and 14 percent respectively. Regarding the constraints faced by MSMEs in the city of Semarang in facing AEC, the study found that 47% faced capital constraints. Therefore, if MSMEs want to increase their business capacity, they need larger capital. Another obstacle faced by MSMEs in the city of Semarang was the lack of reliable human resources. Given that most MSMEs in Semarang City are home-based businesses, their human resources are limited in ability. The responses also revealed a pragmatic side: as many as 41% of businessmen claimed that AEC would not give any opportunity for business sustainability, while 37% stated that AEC would bring challenges in the form of competition, including from foreign competitors, and 20% of participants declared that AEC would offer opportunities for export.
Internal identification was undertaken to understand the strengths and weaknesses influencing the readiness of Semarang's MSMEs for AEC. Table 1 shows that the total weighted score for the strength factors was 1.91, while the total weighted score for the weakness factors was 0.68. This implies that, in terms of readiness for AEC, the strength factors outweigh the weakness factors, so MSMEs can utilize these strengths to improve their ongoing business.
The measurement results in Table 1 show that Semarang's strategic geographical location is the biggest strength of MSME businessmen in facing AEC.
This location is a key influence on MSMEs' development and sustainability. The strategic location results in low transportation costs, since access to the harbour and airport is close. It also means that Semarang is accessible and easily reached by consumers, so the potential for products to be bought is high.
Meanwhile, undisciplined administration and bookkeeping (with its impact on TIN ownership and access to capital) was the worst weakness suffered by MSME businessmen. This condition proved that businessmen did not have appropriate planning.
Disciplined bookkeeping and financial statements are in fact essential for Semarang's MSME businessmen if they are to be ready for AEC. Bookkeeping and financial statements can represent the development of the business. That way, when the business is growing, MSMEs can make improvement plans; conversely, if progress is declining, MSMEs can promptly attempt to prevent the business from going downhill.
The IFE matrix total score of 2.59 shows that Semarang's MSMEs endeavoring to face AEC competition are in an average condition. This condition demands that MSME businessmen optimize their strengths to overcome their weaknesses. External identification was undertaken to understand the opportunities and threats influencing the readiness of Semarang's MSMEs for AEC. The ratings given were based on the strength of the response shown by MSMEs to each opportunity and threat. Table 2 shows that the total weighted score for the opportunity key factors was 1.48, whereas the total weighted score for the threat key factors was 1.33. This shows that the opportunity key factors outweigh the threat factors, so MSME businessmen have to optimize the available opportunities to overcome the threats.
Based on the EFE matrix results on the readiness of MSMEs for AEC, a total score of 2.81 was obtained.
This shows that Semarang's MSME businessmen have sufficient ability to seize external opportunities and evade threats during the business process. Table 2 shows that financial organizations, both government and private, offering micro Credit for Business (KUR) are the prime opportunity and motivation for MSME businessmen to keep improving their business. Credit for Business is a credit program channeled using a guarantee pattern; it is intended for MSME businessmen without collateral whose businesses are nevertheless worthy of bank financing. Credit for Business is given in the form of working capital and investment, supported by a guarantee facility for productive enterprises.
Credit for Business was expected to solve the main problem faced by MSMEs in Semarang, namely the financial aspect (initial capital mobilization and access to working capital). This financial aspect is crucial in the long term, as investment is essential for long-term output growth.
The worst weakness of Semarang's MSME businessmen in facing AEC was the absence of product standardization. This factor is a threat that must be anticipated because it can hinder the readiness of MSMEs for AEC: for a product to compete in AEC, it needs to fulfil certain standards and quality requirements. Based on the IFE and EFE analyses, the total score of each factor is as follows: Strength (1.91), Weakness (0.68), Opportunity (1.48) and Threat (1.33). The Strength score is therefore above that of Weakness, with a difference of (+) 1.23, and the Opportunity score is above that of Threat, with a difference of (+) 0.15. Based on the SWOT diagram presented in Figure 3, Semarang Municipality's MSMEs are placed in quadrant I, meaning that the MSME development strategy should be an SO strategy, i.e. a strategy which employs strengths to take the existing opportunities. The strategy which should be implemented under this condition is to support an aggressive growth policy (Growth Oriented Strategy).
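The quadrant placement described above can be reproduced directly from the four factor scores. The sketch below assumes the usual SWOT-diagram convention (x-axis: strength minus weakness; y-axis: opportunity minus threat); the function name, variable names and the labels for quadrants II-IV are illustrative rather than taken from the study:

```python
def swot_quadrant(strength, weakness, opportunity, threat):
    """Place a SWOT result on the classic four-quadrant diagram.

    x-axis: internal position (strength - weakness)
    y-axis: external position (opportunity - threat)
    """
    x = strength - weakness
    y = opportunity - threat
    if x >= 0 and y >= 0:
        quadrant, strategy = "I", "SO (aggressive growth)"
    elif x < 0 and y >= 0:
        quadrant, strategy = "II", "WO (turnaround)"
    elif x < 0 and y < 0:
        quadrant, strategy = "III", "WT (defensive)"
    else:
        quadrant, strategy = "IV", "ST (diversification)"
    return x, y, quadrant, strategy

# Scores from the IFE/EFE analyses: S=1.91, W=0.68, O=1.48, T=1.33
x, y, quadrant, strategy = swot_quadrant(1.91, 0.68, 1.48, 1.33)
print(round(x, 2), round(y, 2), quadrant, strategy)
```

With the study's scores, x = +1.23 and y = +0.15, which are both positive and therefore land in quadrant I, matching the SO strategy conclusion.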
In the face of the current AEC challenge, MSMEs should be capable of preparing themselves to survive under circumstances which may be difficult for them. The research results indicate that MSMEs in Semarang Municipality are prepared to face AEC. However, only a few of these MSMEs have prepared themselves in terms of marketing and HR quality improvement, at only 13 and 14 percent respectively.
What stands in the way of these MSMEs in Semarang Municipality in the face of AEC is that 47% of them have a limited amount of capital.
Therefore, if these MSMEs want to improve their business capacity, they have to have a large amount of capital. Another issue standing in the way of these MSMEs in Semarang Municipality is that their human resources are less reliable. Since most MSMEs in Semarang Municipality are household industries, the human resources they use have limited ability. Regarding society's view of AEC, it is found that most of the society and employers think that AEC will give them challenges as well as opportunities, as indicated by a preference score of 31%. Moreover, 33% of employers say they believe AEC will give them profits, because this era will present them access to a greater market where they can sell their products, even to foreign countries. Nevertheless, 12% of respondents acknowledge that AEC will only bring more threats and even has the potential of harming the businesses they are running. In between these two groups, there remains a pragmatic group, in this case 24% of respondents, who say that they do not really understand what AEC is all about and therefore fail to give an objective preference.
The result of the IFE calculation shows that the strategic geographical location of Semarang Municipality is the greatest strength of MSMEs in facing AEC. This location influences MSMEs' growth and continuance.
Meanwhile, poorly arranged administration and bookkeeping (with their effect on NPWP ownership and access to capital) are the biggest weaknesses that MSME owners feel. This condition is proof that business owners do not have appropriate planning. Well-arranged bookkeeping and financial statements are actually important for MSME owners in Semarang Municipality if they want to be prepared for AEC competition. The total score of the IFE matrix is 2.59, indicating that Semarang Municipality's MSMEs are in an average condition in facing AEC. Such a condition demands that MSME owners optimize their strengths even more in dealing with their weaknesses.
Based on the result of the EFE matrix assessing the readiness of MSMEs in facing AEC, a total score of 2.81 is obtained. This shows that MSME owners in Semarang Municipality have fairly high ability in utilizing external opportunities and avoiding the threats they face during the business process.
The result of the EFE calculation indicates that the many micro KUR (people's business loan) offers from a number of both public and private financial institutions, together with MSME owners' motivation to keep developing their business, have been the main opportunities. On the other hand, the biggest obstacle standing in the way of MSME owners in Semarang Municipality in facing AEC is the absence of product standardization. This factor is the threat which needs anticipation because it has the potential of preventing MSMEs from being well prepared in the face of AEC: to be able to compete in AEC, the products they market should meet HSE (Health, Safety, and Environment) standards and quality. The corresponding strategies include optimizing existing resources and increasing efficiency to save production time and costs through mastery of technology and information, creating products with uniqueness and character, and efforts to facilitate and expand product marketing.
Source: Processed primary data, 2016. The formulation of alternative strategies for Semarang Municipality's MSME owners to face AEC competition can be made using the SWOT matrix as in Table 3. MSMEs in Semarang Municipality state that they are ready to face AEC. The first attempt they make to prepare for AEC is to improve their product quality. In running their business, the products and services these MSMEs sell should be of fine quality, or worth the tagged price, in order for their businesses to survive the harsh competition to come, particularly from the quality perspective. Kotler (2012: 49) suggests that quality is the entire set of characteristics and properties of a product or service which influence its ability to satisfy explicit or implied needs.
In the SWOT matrix analysis results (Table 4), combined with quantitative model analysis, it was found that the effective strategy for MSMEs in Semarang City in facing the AEC is the SO strategy, that is, a strategy that uses strengths to seize opportunities.
Further, the implementation of the SO strategy is as follows: 1) improve and expand the marketing network through promotions, exhibitions, fashion shows, inter-agency cooperation and electronic media to meet the growing market demand; 2) conduct training, mentoring and technological enhancement for MSMEs to improve the standard, grade and quality of products so that they meet Work Health, Safety and Environment (K3L) requirements and the growing global market demand and share.
In Indonesia, MSMEs have a very important role. Urata (2000), who has observed the development of MSMEs in Indonesia, confirms that MSMEs play several important roles in Indonesia.
Some of these roles are: (1) employment provider; (2) important actor in local economic development and community development; (3) creator of markets and innovation through flexibility and sensitivity, as well as dynamic interconnection among companies' activities; (4) contributor to the increase of non-oil exports; and (5) reducer of income inequality. In facing the ASEAN Economic Community (AEC), MSMEs need to prepare themselves to engage in broader production processes. One of these is being able to contribute to regional-scale production process chains. The discussion by The Asia Foundation with SMEs and regional economic experts in Bangkok in 2014 suggested that regional economic integration can benefit MSMEs by opening access to wider raw materials, more efficient economies of scale and increased demand potential. Therefore, the main requirement for MSMEs in facing the AEC is to equip MSMEs with concrete information and actual issues of the establishment of the ASEAN Community, including an understanding of the concept of a single market and a regional production process.
On the other hand, Tambunan's study (2013) explains that regional free trade such as AEC is double-edged, presenting both opportunities and challenges for MSMEs. Tambunan maps both sides as follows: 1) the opening of regional markets can sharpen competition at the local level, as the loss of trade barriers provides incentives for non-domestic products to enter; 2) without significant trade barriers, economic actors will enjoy a decrease in production costs if the raw materials used are imported products.
In order to benefit from the ASEAN Free Market 2015, there are still many opportunities for MSMEs to gain market share and investment opportunities. To take advantage of these opportunities, the biggest challenge for MSMEs in Indonesia in facing the ASEAN Free Market is determining the right strategy to win the competition.
MSMEs in Semarang City are ready to face AEC. The first effort made in preparing for AEC is improving product quality. In running a business, the products or services sold must be of good quality, or in accordance with the price offered, for the business or company to survive competition, especially competition in terms of quality. According to Kotler (2012: 49), quality is the overall set of traits and properties of a product or service that affects its ability to satisfy expressed or implied needs. The definition of product quality itself, according to Kotler and Armstrong (2012: 283), is the ability of a product to demonstrate its function; this includes overall durability, reliability, accuracy, ease of operation, and reparability, as well as other product attributes. Additionally, MSMEs need to continuously improve the quality of their products or services, because improved product quality makes consumers feel satisfied with the products or services they buy and influences them to purchase again.
To achieve the desired product quality, it is necessary to create quality standardization. This research found a contradiction between the effort to improve the quality of MSME products and the constraint that the products produced by MSMEs are not standardized. Moreover, MSMEs produce products that do not meet the applicable international standards. This is caused by MSME actors who simply produce products without regard to the standardization provisions for the products they produce.
In the AEC era, all products produced by MSMEs should refer to predetermined international standards. The Indonesian National Standard (SNI) is one of the standards that serve as a reference in product manufacturing. Product standardization can ensure the safety and comfort of consumers in consuming the products produced by MSMEs.
To make MSMEs in Semarang City aware of the standardization of the products they produce, the Government must also play an active role in socializing and mentoring MSMEs regarding product standardization. This is intended to ensure the products meet the standards that have been set, so that consumers will not lose confidence in the products that MSMEs produce.
Quality standardization is needed in order to achieve the desired product. It is intended that products maintain their quality and meet the standards that have been set, so that consumers do not lose confidence in the products offered.
MSMEs which do not pay attention to the quality of the products they offer will end up with disloyal consumers, so sales of their products will tend to decline. If they pay attention to quality, reinforced by advertising and a reasonable price, consumers will not think twice before purchasing the product (Kotler and Armstrong, 2012: 284).
In addition to product quality, the improvement of human resources (HR) is very necessary. HR will drive MSMEs to participate actively in AEC. Reliable human resources, able to manage MSMEs, responsive to technology and creative, are necessary to maintain the existence of their business in the AEC era.
One of the strengths of MSMEs in Semarang City lies in its strategic location. Semarang City's strategic position as a development corridor in Central Java Province comprises four gate nodes: the north coast corridor, south corridor, east corridor and west corridor. The strategic location of Semarang City is also supported by the presence of Tanjung Mas Port, Ahmad Yani Airport, Terboyo Terminal, and Tawang and Poncol Railway Stations, which strengthen the role of Semarang City as a development activity node in Central Java Province and the central part of Java Island, Indonesia.
This strategic location makes it easier for MSMEs to conduct their production activities, market their products abroad and minimize the cost of transportation from the place of business to the airport or port, because these facilities are close and easy to reach.
In facing a more open and competitive market mechanism, market mastery is a prerequisite for improving the competitiveness of MSMEs. To be able to master the market, MSMEs need to obtain information easily and quickly, both about the product market and about the factor market. Information on the product market is necessary to expand the marketing network of the products produced by MSMEs. Product or commodity market information should cover: (1) the kinds of goods or products needed by consumers in certain areas, (2) consumers' purchasing power for the product, (3) the existing market price, and (4) consumers' tastes in local, regional and international markets. Thus, MSMEs can anticipate various market conditions so that, in running their business, they will be more innovative.
Meanwhile, information on the factor market is also needed in order to know: (1) the sources of raw materials needed, (2) the prices of raw materials to be purchased, (3) where and how to obtain business capital, (4) where to find professionals, (5) reasonable wage or salary levels for workers, (6) where to obtain the necessary tools or machinery (Effendi Ishak, 2005).
Comprehensive and accurate market information can be used by MSMEs to plan their businesses properly, for example to: (1) create product designs favored by consumers, (2) set competitive market prices, (3) identify the target market, among other benefits. The government's role is therefore needed to help MSMEs gain the access required to expand their marketing networks. This is not only the responsibility of the Semarang City Government; higher education institutions in Semarang City should also help MSMEs and the city government by conducting training and bookkeeping assistance.
In this regard, if MSMEs can keep simple books in accordance with accounting rules, they can access the credit already provided by the government and banks to increase their business capacity.
The AEC requires competent human resources (HR) and superior products. Superior products can be generated from cooperation, linkages and strong supporting synergies. Strong support requires the involvement of A, B, G, C (Academics, Business, Government, Community) and banking.
Universities can engage with all of B, G, C and banking because they produce the human resources these sectors need. In this way, the government can open the taps wide to boost employment through the creation of Micro, Small and Medium Enterprises (MSMEs), including through the provision of capital.
From the research results and discussion, it can be concluded that: (1) MSMEs in Semarang Municipality indicate their readiness to face competition in the AEC era; (2) the strategies taken by MSMEs to face the AEC are improving and expanding their marketing networks, and organizing training, assistance and technology upgrades for MSMEs in order to improve their product standards, value and quality.
Virtual Screening and Binding Analysis of Potential CD58 Inhibitors in Colorectal Cancer (CRC)
Human cell surface receptor CD58, also known as lymphocyte function-associated antigen 3 (LFA-3), plays a critical role in the early stages of immune response through interacting with CD2. Recent research identified CD58 as a surface marker of colorectal cancer (CRC), which can upregulate the Wnt pathway and promote self-renewal of colorectal tumor-initiating cells (CT-ICs) by degradation of Dickkopf 3. In addition, it was also shown that knockdown of CD58 significantly impaired tumor growth. In this study, we developed a structure-based virtual screening pipeline using Autodock Vina and binding analysis and identified a group of small molecular compounds having the potential to bind with CD58. Five of them significantly inhibited the growth of the SW620 cell line in the following in vitro studies. Their proposed binding models were further verified by molecular dynamics (MD) simulations, and some pharmaceutically relevant chemical and physical properties were predicted. The hits described in this work may be considered interesting leads or structures for the development of new and more efficient CD58 inhibitors.
Introduction
As one of the most common gastrointestinal cancers, colorectal cancer (CRC) ranks fourth in incidence and fifth in mortality among malignant tumors in China [1]. CRC incidence and death rates have been stabilizing or decreasing in some developed countries, but rapid growth has been seen in many developing countries, including China [2].
Increasing evidence suggests that tumor-initiating cells (T-ICs) exist in different tumors, such as brain, breast, prostate, pancreatic, and colorectal tumors [3][4][5][6]. T-ICs are tumor cells that have the ability to regrow the tumor from isolation [7], and are characterized by the distinctive features of self-renewal, proliferation, multi-lineage differentiation, and strong tumorigenicity [8]. Recent studies have revealed that colorectal tumor-initiating cells (CT-ICs) play a crucial role in tumorigenesis, metastasis, recurrence, and treatment resistance of colorectal cancer [9]. The most important and characteristic feature of T-ICs is their increased self-renewal potential [10], which is dominantly regulated by the Wnt pathway in CRC [11]. CD58, a glycosylated adhesion molecule, was identified as a surface marker of CT-ICs [12]. CD58 activation upregulated the Wnt/β-catenin pathway through degradation of Dkk-3 and facilitated the self-renewal ability of CT-ICs. In addition, knockdown of CD58 significantly impaired sphere formation and prevented tumor growth [12].
CD58, also known as lymphocyte function-associated antigen 3 (LFA-3), is a cell adhesion molecule expressed on antigen-presenting cells (APCs) [13]. Protein-protein interaction between CD58 and CD2 increases the sensitivity of immune recognition and facilitates the adhesion between T cells and APCs, as well as the contacts between cytolytic T cells, natural killer (NK) cells, and their target cells [14][15][16]. It has been shown that CD58/CD2 interaction stimulates the synergistic secretion of CXC chemokine ligand 8 (CXCL-8/IL-8) by human intestinal CD3+ TCRαβ+ CD8+ intraepithelial lymphocytes (IELs) [17], and CXCL-8 induces cell proliferation and migration, promoting tumor cell growth in CRC [18,19]. CD58-CD2 interaction is related to maintaining the self-renewal ability of CT-ICs by promoting the secretion of CXCL-8 from T cells [12]. Therefore, effective blocking of CD58 and intervention in the Wnt pathway have been proposed as a potential strategy to treat CRC driven by CT-ICs.
As shown in Figure 1, the extracellular region (171 residues) consists of two immunoglobulin-like domains [20], and domain I is responsible for the adhesion to CD2 [21][22][23]. The crystal structure of a CD2-binding chimeric form of CD58 (Figure 1A, PDB ID: 1CCZ) reveals that the CD2-binding domain (domain I) has the Ig superfamily V-set AGFCC′C″:DEB domain topology (Figure 1B) and shares several unique structural features with CD2 [24]. Recent findings suggest that domain I is a promising drug target, and peptides have been designed to modulate immune response by targeting the structural epitope of domain I [25][26][27][28][29][30][31][32][33].
Structure-based virtual screening is a computer-aided screening approach to discover novel inhibitors against a selected target by evaluating chemical structure and binding affinity [34]. High throughput virtual screening computationally investigates a large set of chemical compounds or materials and discriminates drug candidates from non-candidates by their binding affinities with the target protein [35]. It has become a popular tool for molecular discovery due to the exponential growth of available computational resources and the constant improvement of simulation and machine-learning techniques [36]. Molecular dynamics (MD) simulations can provide important dynamic and structural information about ligand-protein interactions in a flexible manner and have been combined with molecular docking in drug design [37]. In many cases, MD simulations have been applied to improve and reinforce the performance of molecular docking [38][39][40].
The study focused on identifying, from natural products, a group of small molecular compounds with the potential to become CD58 inhibitors. We screened over 183,000 small molecular compounds from the ZINC database [41,42] and the Traditional Chinese Medicine (TCM) Database@Taiwan [43]. The screening was followed by cell assays to test the activities of the candidates in vitro. The proposed binding models were further validated by MD simulations, and ADMET prediction was then made to assess toxicity associated with structural conformation. Five small molecular compounds were identified as potential inhibitors of CD58, and future work will be directed at experimental verification (e.g., a cell adhesion inhibition assay to verify the binding models with CD58) and structure optimization of the good candidates.
Active Site Analysis
The extracellular region of CD58 is shown in Figure 1A. Domain I is the CD2-binding domain, and the ligand binding sites are located on the AGFCC′C″ interface (Figure 1B). The highly acidic AGFCC′C″ β-sheet interface, containing ten negatively charged residues and six positively charged residues, shows overall electrostatic complementarity with the ligands [24]. In addition, as reported, in the CD58-CD2 'hand-shake' binding model, the CD58 and CD2 adhesion domains contact each other from opposite ends in an orthogonal orientation, and residues Glu25, Lys29, Lys32, Asp33, Lys34, Glu37, Glu39, and Phe46 are important in the binding [30,44,45] (Figure 2A). Several peptide inhibitors targeting the AGFCC′C″ interface of CD58 have been designed successfully to inhibit cell adhesion and modulate immune response [25][26][27][28][29][30][31][32][33]. This provides a rational approach to modulate the activation of CD58 and inhibit CD2-CD58 interaction by designing compounds that bind to the AGFCC′C″ interface of CD58. Therefore, we selected the AGFCC′C″ interface for virtual screening (Figure 2B), to search for small molecular inhibitors of CD58.
The inhibitors could have the potential to suppress the self-renewal potential of CT-ICs and prevent CRC tumor growth by blocking CD58-CD2 interaction as well as the Wnt pathway.
Virtual Screening and Proliferation Inhibition of SW620 Cell Line
A custom high throughput virtual screening pipeline is shown in Figure 3. In this process, both the binding affinity from Autodock Vina and commercial availability (e.g., price) were considered in selecting 9 candidates from the NP library and 4 from the TCM library for further cell experiments. To investigate the cytotoxic effects of these compounds on the SW620 cell line (human colorectal adenocarcinoma epithelial cells), cells were incubated with various concentrations of each compound for 48 h, and a CCK-8 assay was applied to analyze cell viability. After the exposure, significant decreases in cell viability were observed with increasing compound concentration. Compared to the approved anticancer drug Nimustine Hydrochloride, five of them (DY6, DY7, DY10, DY11, and DY12) showed relatively good performance, inhibiting the growth of SW620 cells with IC50 values ranging from 26.33 ± 0.23 µM to 104.99 ± 7.86 µM (Table 1, Supplementary Figure S1A-F).
With resource and funding limitations, only a cell assay was performed on SW620 cell lines to validate the inhibition in vitro. However, the cell is too complex a system to prove our in silico hypothesis that the compounds selectively bind to the active site of CD58. Future work will focus on additional experiments to verify the binding models of the candidate molecules. For example, the ability of the candidates to inhibit CD2-CD58 interaction can be evaluated by a cell adhesion assay using model systems [31]. In addition, a cell assay on normal human colorectal cells will also be performed to test toxicity.
Binding Analysis of the Five Good Candidates
The possible binding models of the five good candidates (DY6, DY7, DY10, DY11, and DY12) are shown in Figure 4B-F. As shown in Figure 4A, four of the five hits, including DY6, DY10, DY11, and DY12, fit into the GFCC′ cavity, while DY7 (represented as yellow sticks) interacted with the CC′C″ region. DY7 formed two hydrogen bonds with residues Lys29 and Asp33 of the active site of CD58. Its benzene ring interacted with residue Phe46 through a π-π interaction, and the whole skeleton interacted with the key residues Lys29, Asp33, Glu37, Glu39, and Phe46 of the active site through Van der Waals (VDW) interactions.
The predicted binding models of DY6, DY10, DY11, and DY12 differed from that of DY7, as these compounds interacted with the GFCC′ region via VDW interactions (Figure 4C-F). All of them formed hydrogen bonds with residues Val26, Glu37, Leu38, Glu39 and Glu78, among which Glu37 and Glu39 are key residues [30]. In addition to the above interactions, DY10 formed an extra hydrogen bond with residue Glu76. In general, the hydrogen bonds with residues Val26, Glu37, Leu38, and Glu39 stabilized the benzenediol group, and those with residues Glu76 and Glu78 stabilized the saccharide ring. These hydrogen bonds made DY10 stable and well fitted into the GFCC′ cavity, forming strong VDW interactions (Figure 4D). The cyclic amide of DY6 formed two hydrogen bonds with the carboxyl of Glu78, which is in the center of the GFCC′ cavity, so these hydrogen bonds may be crucial in the interaction. Moreover, the hydroxyl of the steroid part of DY6 formed a hydrogen bond with the carbonyl of residue Thr83 (Figure 4C). In DY12, the hydroxyls of the saccharide ring formed hydrogen bonds with the carbonyls of residues Ile82, Thr83, and Asp84 (Figure 4E). The only difference between DY10 and DY11 was the orientation of the hydroxyl group in the saccharide ring. As shown in Figure 4D, the saccharide ring of DY10 showed better electrostatic complementarity with the GFCC′ cavity, which is an electronegative pocket. The negatively charged hydroxyl group of the saccharide ring interacted with the electropositive sides of the pocket in DY10 (Figure 4D), while in DY11, the hydroxyl group formed a hydrogen bond with residue Glu78 and pointed toward the electronegative pocket (Figure 4F).
Overall, our binding analysis showed that the five good candidates bind CD58 by interacting with the CC′C″ region or the GFCC′ cavity, and thus could have the potential to affect CD58-CD2 binding and modulate downstream signaling pathways. As three of them (DY10, DY11, and DY12) fit into the GFCC′ cavity and share the same skeleton, differing only in the saccharide ring, we speculate that: the hydrogen bonds with residues Val26, Glu37, Leu38, Glu39, Glu76, and Glu78 are necessary for compound-CD58 binding; and the orientation of the hydroxyl groups in the saccharide ring may influence binding affinity through hydrogen bonding or electrostatic interactions.
MD Simulations
MD simulation was applied to further investigate the stability of the proposed binding models of the ligand-protein complexes using GROMACS version 2023 [46] at time points up to 100 ns. Overall, all the complexes were stable throughout the simulation (RMSD range: 0.2-0.5 nm, Figure 5A). Among them, the DY6-, DY10- and DY11-CD58 complexes had relatively lower RMSD and RMSF values, which could be explained by their proposed binding models. RMSF analysis provides important information about the flexibility of different regions of the complexes, and the results suggested that all the complexes experienced nearly the same fluctuation throughout the time scale (Figure 5B). The radius of gyration (ROG) spectrum describes the compactness of the system during the MD simulation, i.e., the behavior of the complex in a biological system [47]. As shown in Figure 5C, the systems sampled nearly the same compactness throughout. In addition, we used the H-bond functionality in GROMACS to determine whether the hydrogen bonds mediating the ligand-protein interactions in the molecular docking were sustained in the MD simulation. The results indicated that usually 2-4 hydrogen bonds were formed in the complexes, and in support of the molecular docking and the predicted binding models, DY7 mediated fewer hydrogen bonds than the other compounds (Figure 5D).
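For reference, the radius of gyration used above as the compactness measure can be computed directly from atomic coordinates. A minimal NumPy sketch on toy coordinates (not the actual trajectory data):

```python
import numpy as np

def radius_of_gyration(coords, masses):
    """Mass-weighted radius of gyration:
    Rg = sqrt( sum_i m_i * |r_i - r_com|^2 / sum_i m_i )."""
    coords = np.asarray(coords, dtype=float)
    masses = np.asarray(masses, dtype=float)
    com = np.average(coords, axis=0, weights=masses)   # centre of mass
    sq_dist = np.sum((coords - com) ** 2, axis=1)      # |r_i - r_com|^2 per atom
    return float(np.sqrt(np.sum(masses * sq_dist) / masses.sum()))

# Toy system: four equal-mass atoms at the corners of a unit square (z = 0);
# each atom sits 1/sqrt(2) from the centre of mass, so Rg = sqrt(0.5).
coords = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
masses = [1.0, 1.0, 1.0, 1.0]
rg = radius_of_gyration(coords, masses)
```

In practice this is what `gmx gyrate` reports per frame; tracking it over the trajectory yields the ROG spectrum discussed above.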
ADMET Prediction
As shown in Table 2, we estimated various pharmaceutically relevant properties and physical descriptors of the ADME profiles for the five good candidates. All compounds showed good ADME parameters. An in silico toxicity risk assessment was also performed to check hERG inhibition, AMES toxicity, carcinogenicity, and acute oral toxicity. The compounds were found to be of good safety, except that compound DY6 was predicted positive for AMES toxicity. Future structure optimization could focus on increasing their activities as well as decreasing their toxicities.
Library Download
The structural information was collected from the ZINC database [41,42] and the Traditional Chinese Medicine (TCM) Database@Taiwan [43]. ZINC is a curated collection of more than 230 million commercially available chemical compounds prepared for virtual screening [41,42]. TCM Database@Taiwan is a large and comprehensive small-molecule database of traditional Chinese medicine for virtual screening [43]. Compound structures (mol2 or sdf format) were converted to pdbqt format with Open Babel 2.3 [48]. To provide potential active compounds for the subsequent structure-based virtual screening, we constructed two 3D structural libraries: a traditional Chinese medicine (TCM) library containing 33,765 small molecular compounds from 8445 active ingredients of traditional Chinese medicines, and a natural product (NP) library containing 149,515 small molecular compounds from the ZINC database. The small molecular compounds were categorized based on their physical and chemical properties (e.g., molecular weight, hydrogen bond donors/acceptors, number of rotatable bonds, cLogP, etc.) and stored on parallel nodes to accelerate the virtual screening.
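The property-based partitioning described above might be sketched as follows; the binning key (molecular weight only) and the compound entries are illustrative assumptions, not the study's actual scheme:

```python
# Hypothetical sketch: group library compounds into molecular-weight bins so
# that each bin can be screened on a separate node. IDs and weights are
# made-up illustrative values, not real library entries.
def bin_by_molecular_weight(compounds, bin_width=100.0):
    """Group compounds (dicts with 'id' and 'mw') into MW bins of bin_width Da."""
    bins = {}
    for cpd in compounds:
        key = int(cpd["mw"] // bin_width) * int(bin_width)  # e.g. 250.3 -> 200
        bins.setdefault(key, []).append(cpd["id"])
    return bins

library = [
    {"id": "ZINC000001", "mw": 250.3},
    {"id": "ZINC000002", "mw": 310.4},
    {"id": "TCM000001",  "mw": 180.2},
]
bins = bin_by_molecular_weight(library)
```

The same pattern extends to multi-property keys (rotatable bonds, cLogP ranges) by using tuples as dictionary keys.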
Protein Preparation and High Throughput Virtual Screening
The 3D structure of the CD2-binding domain of CD58 [24] (PDB ID: 1CCZ) was downloaded from the Protein Data Bank (PDB; http://www.rcsb.org/pdb, accessed on 3 August 2015). All water and solvent molecules were removed from the structure, and hydrogen atoms and Gasteiger charges were added using AutoDock Tools [49]. The protein file was prepared in pdbqt format. The grid-enclosing box was centered on the AGFCC′C″ interface, which is the binding site of CD58 for its ligand CD2. Center coordinates (center_x: −13.24, center_y: 54.06, center_z: 28.1) and box size (size_x: 26, size_y: 22, size_z: 24) were chosen to enclose the whole interface. The NP and TCM libraries were screened using Autodock Vina [49]. The top 200 compounds (binding affinity range: −10 kcal/mol to −7 kcal/mol) from each library were selected, and their binding models with the residues of the AGFCC′C″ interface were manually analyzed to exclude false positives. Two criteria qualified a small molecular compound as a good candidate: shape or molecular electrostatic potential matching the active site, or hydrogen bonds with the critical residues (Glu25, Lys29, Lys32, Asp33, Lys34, Glu37, Glu39, and Phe46) of the active site. Commercial availability and price were also considered in the screening process. Finally, nine candidates from the NP library and four from the TCM library were chosen for the subsequent cell assay.
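As a rough illustration, the grid parameters above can be written into a standard Vina configuration file; the receptor and ligand file names below are placeholders, and the docking itself is assumed to be run with a separately installed Vina binary:

```python
# Grid parameters quoted in the text (center and box size around the
# AGFCC'C'' interface of 1CCZ).
GRID = {
    "center_x": -13.24, "center_y": 54.06, "center_z": 28.1,
    "size_x": 26, "size_y": 22, "size_z": 24,
}

def vina_config(receptor, ligand, out, exhaustiveness=8):
    """Render a Vina config file body for one receptor/ligand pair."""
    lines = [f"receptor = {receptor}", f"ligand = {ligand}", f"out = {out}"]
    lines += [f"{k} = {v}" for k, v in GRID.items()]
    lines.append(f"exhaustiveness = {exhaustiveness}")
    return "\n".join(lines)

cfg = vina_config("1ccz.pdbqt", "ZINC000001.pdbqt", "ZINC000001_out.pdbqt")
# In practice: write cfg to vina.conf and run with subprocess.run(command).
command = ["vina", "--config", "vina.conf"]
```

Looping this over every pdbqt file in a library directory and parsing the reported affinities gives the ranking from which the top 200 per library were taken.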
In Vitro Assay on SW620 Cell Lines
Cell viability was measured by the Cell Counting Kit-8 (CCK-8) assay as previously described [50,51]. Briefly, SW620 cells were plated into 96-well plates with 200 µL DMEM medium containing 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin solution (PS) at a density of 5 × 10⁴ cells/well. A compound solution was made at the concentration of 2 µL by dissolving with DMEM medium. The cells were incubated with the 13 selected compounds, as well as the positive control Nimustine Hydrochloride [52] (an approved nitrosourea-derived anticancer agent effective against CRC), at different concentrations (1 mg/mL, 200 µg/mL, 40 µg/mL, 8 µg/mL, 160 ng/mL and 32 ng/mL) in a humidified incubator with 5% CO2 at 37 °C for 48 h. At the end of each treatment, the supernatant was discarded and 200 µL DMEM medium with 10% FBS and 1% PS was added. Then 20 µL of CCK-8 reagent was added to each well and the cells were cultured for a further 4 h. Absorbance was measured at a wavelength of 450 nm with a Synergy 2 multimode microplate reader (BioTek, Winooski, VT, USA). The inhibition rate (%) was calculated by the formula: Inhibition% = (1 − F450,compound/F450,control) × 100%. Three biological replicates were made. IC50 values were calculated from the inhibition curves, and standard deviations were calculated.
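The inhibition formula above, together with a crude IC50 estimate by interpolation, can be sketched as follows. The study fits full inhibition curves; the interpolation here is a simplified stand-in, and the concentration and inhibition values are synthetic:

```python
import numpy as np

def inhibition_percent(f_compound, f_control):
    """Inhibition% = (1 - F450,compound / F450,control) * 100, per the formula above."""
    return (1.0 - f_compound / f_control) * 100.0

def ic50_log_interp(concs, inhibitions):
    """Rough IC50: linearly interpolate 50% inhibition on a log10-concentration
    axis. Assumes inhibition increases monotonically with concentration."""
    logc = np.log10(np.asarray(concs, dtype=float))
    order = np.argsort(logc)
    return float(10 ** np.interp(50.0, np.asarray(inhibitions)[order], logc[order]))

# Synthetic dose-response: 50% inhibition falls between 10 and 100 uM.
concs = [1, 10, 100, 1000]   # uM (illustrative)
inhib = [10, 30, 70, 90]     # % inhibition (illustrative)
ic50 = ic50_log_interp(concs, inhib)
```

For real data, a four-parameter logistic fit (e.g., with scipy.optimize.curve_fit) would replace the interpolation and also yield the reported standard deviations.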
MD Simulations
Molecular dynamics (MD) simulation was applied to further validate the dynamics and binding models of the target protein with the five good candidates, using GROningen MAchine for Chemical Simulations (GROMACS) version 2023 [46]. CHARMM36 (Chemistry at Harvard Macromolecular Mechanics) was applied as the all-atom force field [53], and the CHARMM General Force Field (CGenFF) server was used to retrieve the topologies of the hits [54,55]. Each complex was placed in a 10 nm box. After solvation (TIP3P water model) and neutralization (Na+ and Cl− ions), energy minimization was carried out for the neutralized complexes using the steepest descent algorithm, followed by equilibration [canonical (NVT) and isothermal-isobaric (NPT) ensembles for 500 ps] to progressively bring the systems to 310 K and 1 bar. For each complex, a 100 ns production MD run (0.002 ps/step, 50,000,000 steps) was performed on the equilibrated system.
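The workflow above maps onto a standard sequence of GROMACS commands. A hypothetical sketch follows, with placeholder file names and the grompp calls for the equilibration and production stages omitted for brevity; the commands are built as lists here rather than executed (run each with subprocess.run in practice):

```python
# Hypothetical GROMACS command sequence implied by the methods text.
# File names (complex.gro, topol.top, *.mdp) are placeholders.
steps = [
    ["gmx", "editconf", "-f", "complex.gro", "-o", "boxed.gro",
     "-c", "-box", "10", "10", "10"],                      # 10 nm box
    ["gmx", "solvate", "-cp", "boxed.gro", "-o", "solv.gro",
     "-p", "topol.top"],                                   # TIP3P water
    ["gmx", "grompp", "-f", "ions.mdp", "-c", "solv.gro",
     "-p", "topol.top", "-o", "ions.tpr"],
    ["gmx", "genion", "-s", "ions.tpr", "-o", "neutral.gro",
     "-p", "topol.top", "-neutral"],                       # Na+/Cl- neutralisation
    ["gmx", "grompp", "-f", "minim.mdp", "-c", "neutral.gro",
     "-p", "topol.top", "-o", "em.tpr"],
    ["gmx", "mdrun", "-deffnm", "em"],                     # steepest-descent minimisation
    ["gmx", "mdrun", "-deffnm", "nvt"],                    # 500 ps NVT equilibration
    ["gmx", "mdrun", "-deffnm", "npt"],                    # 500 ps NPT (310 K, 1 bar)
    ["gmx", "mdrun", "-deffnm", "md"],                     # 100 ns production, dt = 2 fs
]
```

The 310 K / 1 bar targets and the 2 fs timestep would live in the corresponding .mdp files, which are not shown here.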
ADMET Prediction
We evaluated the ADMET properties of the five good candidates using admetSAR, a comprehensive resource and free tool for evaluating chemical ADMET properties [56]. It is widely considered a useful tool for in silico screening of the ADMET profiles of drug candidates and environmental chemicals [57,58]. Thirteen properties related to absorption, distribution, metabolism, elimination, and toxicity were estimated for the selected compounds.
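admetSAR itself is a web service. As a purely illustrative local pre-filter in the same spirit (not the method used in the study), Lipinski's rule of five can flag compounds whose descriptors fall outside typical oral-drug ranges; the descriptor values below are hypothetical:

```python
# Illustrative local drug-likeness check (Lipinski's rule of five), NOT the
# admetSAR prediction used in the study. Input descriptors are hypothetical.
def lipinski_violations(mw, logp, h_donors, h_acceptors):
    """Count violations of Lipinski's rule of five:
    MW > 500, logP > 5, H-bond donors > 5, H-bond acceptors > 10."""
    return sum([mw > 500, logp > 5, h_donors > 5, h_acceptors > 10])

v = lipinski_violations(mw=480.5, logp=3.2, h_donors=4, h_acceptors=8)
```

Such a cheap filter is often applied before docking or ADMET prediction to prune obviously non-drug-like compounds from large libraries.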
Conclusions
The study screened a group of small molecular compounds as potential CD58 inhibitors from the NP and TCM libraries using high throughput virtual screening and binding analysis, and five of the 13 commercially available candidates showed significant proliferation inhibition on SW620 cell lines. MD simulations further verified the predicted CD58 interactions. In silico prediction of ADMET properties indicated that the selected compounds have good pharmaceutical properties, although some toxicity predictions remain unsatisfactory. Further experiments (e.g., a cell adhesion inhibition assay, a cell assay on normal human colorectal cells, etc.) are needed to verify the predicted binding models, and structure optimization could be made to improve their activities and decrease their toxicities. The hits discovered in this work could provide novel scaffolds for further hit-to-lead optimization and lay a foundation for the development of therapeutic candidates for CRC treatment.
Figure 2. (A) Overlay of the CD2-binding domain (domain I) of CD58 (PDB ID: 1CCZ; yellow) and the CD58-CD2 complex (PDB ID: 1QA9; purple). Critical residues of the AG-FCC′C″ interface are shown as cyan sticks and labeled. (B) The active site selected for virtual screening is indicated by the red box.
Figure 4. (A) Electrostatic potential diagram of CD58 (PDB ID: 1CCZ) interacting with the five good candidates (DY6: pink; DY7: yellow; DY10: white; DY11: cyan; DY12: green). Predicted binding models of DY7 (B), DY6 (C), DY10 (D), DY12 (E), and DY11 (F) with the GFCC′ cavity of CD58. CD58 is shown as a yellow cartoon, relevant residues as yellow sticks, and the critical binding residues in cyan. The five compounds are shown as pink sticks. Red dashed lines represent potential hydrogen bonds.
Table 1. Structures and IC50 values of the 13 candidates.
a Molecular weight. b The SW620 cell line is a human colorectal adenocarcinoma epithelial cell line.
Table 2. In silico prediction of ADMET properties.
Exploiting tumor epigenetics to improve oncolytic virotherapy
Oncolytic viruses (OVs) comprise a versatile and multi-mechanistic therapeutic platform in the growing arsenal of anticancer biologics. These replicating therapeutics find favorable conditions in the tumor niche, characterized among others by increased metabolism, reduced anti-tumor/antiviral immunity, and disorganized vasculature. Through a self-amplification that is dependent on multiple cancer-specific defects, these agents exhibit remarkable tumor selectivity. With several OVs completing or entering Phase III clinical evaluation, their therapeutic potential as well as the challenges ahead are increasingly clear. One key hurdle is tumor heterogeneity, which results in variations in the ability of tumors to support productive infection by OVs and to induce adaptive anti-tumor immunity. To this end, mounting evidence suggests tumor epigenetics may play a key role. This review will focus on the epigenetic landscape of tumors and how it relates to OV infection. Therapeutic strategies aiming to exploit the epigenetic identity of tumors in order to improve OV therapy are also discussed.
INTRODUCTION
While genetic information establishes the primary blueprint for cellular identity, multiple regulatory layers responsive to extra and intra-cellular signals ultimately control the manifestation of this blueprint. Changes in cellular state, including initiation of DNA synthesis, activation of apoptotic programs, or triggering of antiviral defense mechanisms, result from an integrated response to stimuli received by the cell. These are controlled in large part by gene/protein expression profiles unique to each cell. It is now well understood that activation of transcription factors that bind in a DNA sequence-specific manner at promoter and enhancer elements is responsible for many of the changes in gene expression that occur in response to environmental or developmental cues. However transcription factors and their associated gene targets are themselves further regulated by the accessibility of DNA sequences. Since the genome resides in the finite space provided by the nucleus, it interacts with proteins known as histones to form chromatin and facilitate its compaction. The configuration of chromatin compaction is modulated by epigenetic modification and is a key determinant for transcription factor-mediated activation of gene transcription (Magnani et al., 2011).
Epigenetic modifications create a reversible imprint that may be inherited through cell division. For example, DNA methylated at promoter CpG islands is associated with gene silencing and can be reversed by treatment with DNA methyltransferase inhibitors such as 5-AZA (5-aza-2′-deoxycytidine), leading to the reactivation of silenced genes (Baylin and Jones, 2011; Krecmerova and Otmar, 2012). Similarly, chromatin structure can alter accessibility to the DNA template and can be readily remodeled by histone post-translational modifications (PTMs). PTMs including acetylation, methylation, phosphorylation, ubiquitination, and many others can be added to numerous residues of histone proteins (Bannister and Kouzarides, 2011). Some PTMs favor chromatin compaction while others increase its accessibility to DNA-binding proteins. Histone modifications and DNA methylation are highly interdependent processes and define the epigenetic code (Cedar and Bergman, 2009). The epigenetic code is regulated by a complex interplay of enzymatic erasers, readers, and writers that exhibit specificities toward different histones and residues (Rice and Allis, 2001). For example, the level of histone acetylation is regulated by the relative activity of histone acetyltransferases (HATs) and histone deacetylases (HDACs), proteins with opposing enzymatic activities that are often found in the same protein complexes (Johnsson et al., 2009; Peserico and Simone, 2010). The same applies to histone lysine methyltransferases (KMTs) and lysine demethylases (KDMs). Consequently, modulating the activity of histone-modifying enzymes can profoundly alter the epigenetic profile of a cell (Egger et al., 2004; Yoo and Jones, 2006).
Given their critical role in the regulation of normal cellular physiology, it is not surprising that aberrations in epigenetic modifications can contribute to the manifestations of human disease. For example, a cell's epigenetic profile can impact the progression of acute microbial diseases (discussed in more detail below) as well as the development and treatment of chronic diseases such as cancer. DNA hypermethylation is often observed in cancer cells (Patel et al., 2012). The genome-wide distribution of histone modifications can also be altered in the course of cancer development (Akhtar-Zaidi et al., 2012;Magnani et al., 2013). As well, the activity of various histone-modifying enzymes can be altered through mutations (Taylor et al., 2011), aberrant expression (Schildhaus et al., 2011;Bennani-Baiti et al., 2012) and/or recruitment to target histone residues via oncogenic fusion proteins (Lubieniecka et al., 2008). Consequently, many cancers are sensitive to epigenetic modulators such as 5-AZA, HDAC, or KDM inhibitors (Hurtubise et al., 2008;Taylor et al., 2011;Schenk et al., 2012) and epigenetic modifications have been shown to influence the response to chemotherapy (Glasspool et al., 2006;Magnani et al., 2013).
ONCOLYTIC VIROTHERAPY
While epigenetic modulators hold promise as anticancer agents, it is clear that like for many other cancer therapies, tumor-specificity is of paramount importance. Tremendous efforts have been made over the past decades to tackle the difficult task of developing more selective cancer therapies, aiming to exploit the sometimes-subtle differences between normal tissues and tumors. One promising new class of therapeutics comes to us from the field of virology. Since the early 1900s it has been observed that cancers can be uniquely susceptible to virus infection (Dock, 1904). While the first clinical trials using replication-competent viruses to treat cancer began in the seventies (Asada, 1974;Kelly and Russell, 2007;Pol et al., 2013), approval of the first oncolytic virus (OV) is only now in the foreseeable future in North America (Carroll, 2011;Galanis et al., 2012;Heo et al., 2013). The more recent clinical success of OVs is in large part due to our more complete understanding of the molecular biology of both cancer cells and viruses that allowed us to create virus strains with improved selectivity and anti-tumor activity, and clinical safety profile (Breitbach et al., 2011). Rapid proliferation and deregulated metabolism (Fritz and Fajas, 2010), disorganized vasculature (Jain, 2005), and defective antiviral innate immune responses (Dunn et al., 2006) in malignant tumors are hallmarks that not only define cancer, but also favor viral growth. Building on these observations, several OVs have been engineered or selected to take advantage of one or more of these features (Russell et al., 2012). A variety of OV platforms are currently under clinical evaluation including those based on herpes simplex virus (HSV), Reovirus, vaccinia virus (VV), Adenovirus, Measles virus, and vesicular stomatitis virus (VSV; U.S. National Library of Medicine, 2013).
ONCOLYTIC VIROTHERAPY AND THE CELLULAR INNATE ANTIVIRAL RESPONSE
It is now well established that cancer cells that evolve to frank malignancies often acquire defects in their ability to mount a successful antiviral response, and this attribute/deficit contributes to the selectivity of many if not all OVs (Norman and Lee, 2000; Stojdl et al., 2000, 2003). This is often a consequence of the observation that approximately 65-70% of tumors are unable to produce or respond to type I interferon (IFN), a key mediator of the cellular antiviral response (Stojdl et al., 2003; Dunn et al., 2006). IFNs are antiviral cytokines induced following recognition of viral proteins and nucleic acids by cellular pattern recognition receptors such as Toll-like receptors (TLRs) that signal through to transcription factors such as interferon regulatory factors (IRFs). There are many isoforms of IFN, which can be functionally subdivided into at least three types (types I/II/III). While type I/III IFNs (e.g., IFN-α, IFN-β/IFN-λ) stimulate cellular antimicrobial immunity, type II IFNs (e.g., IFN-γ) coordinate the host immune response. IFNs elicit their transcriptional effects through autocrine and paracrine activation of IFN receptors and signaling through the Jak/STAT pathway. This induces the transcriptional up-regulation of interferon-stimulated genes (ISGs), many of which have direct antiviral/pro-apoptotic activities (e.g., RNAseL, TNF-α, TRAIL) and/or immune-stimulatory properties (e.g., components of the major histocompatibility complex).
ONCOLYTIC VIRUSES AND THE GENERATION OF AN ANTI-TUMOR IMMUNE RESPONSE
In addition to taking advantage of a niche provided by aberrations unique to cancer and the tumor microenvironment, OVs have been used as platforms to express a range of therapeutic transgenes, from suicide genes to immune-stimulatory cytokines (Merrick et al., 2009; Maldonado et al., 2010; Chai et al., 2012; Stephenson et al., 2012; Lange et al., 2013). In this regard, it is now well recognized that beyond simply lysing infected tumor cells, OVs effectively "de-cloak" tumors by stimulating immune cells to recognize cancer antigens, ultimately leading to tumor destruction and, in some cases, long-term cures (Sobol et al., 2011; Huang et al., 2012). Many tumors evade immune recognition due to a dysfunctional antigen presentation pathway, which is under tight multilayered transcriptional control ultimately dictated by type I/II IFNs and the class II transactivator (CIITA). This transcription factor controls the expression of numerous genes involved in antigen presentation, including class I and II MHC molecules, which display tumor- or pathogen-derived peptides to killer T cells (CD4+/CD8+; LeibundGut-Landmann et al., 2004).
The antigen presentation pathway is influenced by both tumorigenesis and OV therapy. Many tumor cells, including leukemias, lymphomas, and carcinomas, avoid immune recognition due to a dysfunctional antigen presentation pathway, largely caused by epigenetic silencing (e.g., histone deacetylation or DNA methylation) of MHC2TA, the gene encoding CIITA (LeibundGut-Landmann et al., 2004). OV therapies can enhance tumor-associated antigen presentation through various mechanisms. In response to OV infection, type I and II IFN secretion by infected cells within the tumor environment (which also includes normal tumor-infiltrating cells) leads to the up-regulation of hundreds of ISGs including IRF-1, which up-regulates CIITA expression (Muhlethaler-Mottet et al., 1998). Notably, this response is dependent upon the ability to respond to IFN, which can be limited in many cancer cells (Stojdl et al., 2003; Dunn et al., 2006).
Oncolytic virotherapy can have a positive influence on antigen presentation and the anti-tumor response. Some OVs, including HSV, reovirus, and measles virus, induce syncytia formation in infected and neighboring cells. These large multinucleated tumor cells secrete an abundance of "syncytiosomes," exosome-like vesicles that present tumor-associated antigens via MHC molecules (Bateman et al., 2000, 2002). Finally, destruction of cancer cells following infection by OVs provides an additional source of tumor antigens available for capture by antigen-presenting immune cells. The immunostimulatory nature of the virus itself, through activation of TLRs and subsequent cellular production of pro-inflammatory cytokines, stimulates the recruitment of antigen-presenting cells that sample tumor-derived and virus-expressed antigens. Presentation of tumor antigens to killer T cells (CD4+/CD8+) through MHC molecules in the presence of inflammatory cytokines can thus lead to the generation of a robust and long-lasting immune response directed against the tumor.
To capitalize on these beneficial immunological effects, some groups have developed OV/vaccine hybrid strategies. These strategies are designed specifically to re-educate the adaptive immune system to recognize and respond to tumor antigens. Thus, OVs can be engineered to express not only immune-stimulatory cytokines but also tumor-specific antigens to further stimulate an anti-tumor immune response following OV infection of cancer cells (Diaz et al., 2007; Pulido et al., 2012). Indeed, several studies have shown that this "tumor antigen vaccination" effect can be further amplified using a prime-boost strategy, by priming with an antigen and then boosting the response using an OV expressing the same antigen (Bridle et al., 2010, 2013). As discussed below, it is possible to use epigenetic modifiers to further fine-tune this oncolytic vaccine approach. It is also possible to take advantage of this vaccine effect by infecting cancer cells ex vivo and re-injecting the inactivated "oncolysate" to generate prophylactic and even therapeutic anticancer immune responses. The resulting up-regulation of MHCs and co-regulatory factors, the presentation of tumor antigens at the surface of OV-infected cells, and the presence of immune-stimulating virus are thought to be at the root of this effect (Lemay et al., 2012). Overall, these studies emphasize the important role of antigen expression/presentation in OV-stimulated anti-tumoral responses.
TUMOR HETEROGENEITY: INHERENT BARRIER TO OV THERAPY
Despite promising clinical data, it is clear that there is considerable inter-(and likely intra-) tumor heterogeneity in the responsiveness to OV therapy in vitro as well as in vivo in both pre-clinical and clinical settings (Breitbach et al., 2011;Sobol et al., 2011). Because overcoming the innate cellular antiviral response and generating a robust anti-tumor response are critical to observe meaningful therapeutic benefits from oncolytic virotherapy, it is important to understand what tumorigenic processes influence these closely linked pathways in order to manipulate them to improve therapeutic outcomes.
Given the profound epigenetic divergence that prevails in tumor cells (Akhtar-Zaidi et al., 2012; De Carvalho et al., 2012), it is foreseeable that tumor-specific gene expression response profiles induced by virus infection may be altered by epigenetic modifications and that this could contribute to the heterogeneity of tumor responsiveness to OVs. As discussed previously, epigenetic reprogramming is well known to play an important role in oncogenic transformation, and numerous reviews extensively cover the role of epigenetics in cancer (Muntean and Hess, 2009; Baylin and Jones, 2011; Hatziapostolou and Iliopoulos, 2011; Suva et al., 2013). Thus, the remainder of this review aims to highlight current knowledge of genes epigenetically regulated in cancer that are also involved in pathways critical for OV therapy, namely the IFN-mediated antiviral response and antigen presentation (Table 1), and how this contributes to tumor heterogeneity (Figure 1).
THE ROLE OF EPIGENETICS IN HOST SUSCEPTIBILITY TO VIRAL INFECTION
Epigenetic regulation of innate and adaptive immune processes is emerging as a key determinant of susceptibility to viral infection. Several reports suggest that cell type-specific epigenetic regulation of antiviral ISGs leads to differences in permissiveness to virus infection in both normal and tumor cells (Naka et al., 2006; Nguyen et al., 2008; Fang et al., 2012; Chen et al., 2013; Cho et al., 2013). Recently, histone H3K9 di-methylation, a repressive heterochromatin mark, was found to be elevated within IFN genes and ISGs in non-professional IFN-producing cells (e.g., fibroblasts) as compared to professional IFN-producing plasmacytoid dendritic cells (pDCs). Interestingly, inhibiting the KMT G9a by both genetic and pharmacological means led to increased IFN production and responsiveness in fibroblasts. In line with this, G9a-ablated fibroblasts were also rendered more resistant to infection by viruses (Fang et al., 2012; Figure 1).
Another recent study in mice harboring the murine viral susceptibility locus Tmevp3 revealed the intriguing role of NeST, a long non-coding RNA (lncRNA) adjacent to the IFN-γ locus in both mice and humans (Vigneau et al., 2001). NeST was found to function as an epigenetically driven enhancer element (Gomez et al., 2013), leading to increased IFN-γ production in mouse CD8+ T cells by directly interacting with the H3K4 histone methyltransferase complex and increasing H3K4 trimethylation, an activating mark. This novel epigenetic modification culminated in heightened susceptibility to persistent viral infection in mice (Gomez et al., 2013; Figure 1). Although the role of NeST in human epigenetic regulation is currently unknown, it is likely that lncRNAs contribute to epigenetic regulation and the manifestation of cell phenotypes, including permissiveness to virus infection and cancer.
CANCER EPIGENETICS IMPACT THE REGULATION OF ANTIVIRAL RESPONSE GENES
As previously discussed, the majority (but not all) of cancer cells are dysfunctional in their ability to produce and/or respond to IFN (Dunn et al., 2006). While crosstalk between oncogenic signals and the antiviral response pathways has been shown to play a role (Farassati et al., 2001; Shmulevitz et al., 2005), epigenetic events are also likely contributors to this phenotype. One indication of this comes from a series of studies on cells derived from cancer-prone Li-Fraumeni syndrome patients. Cells from these patients spontaneously immortalize when serially passaged in tissue culture due to mutations in the tumor suppressor p53; however, transformation is inhibited upon treatment with 5-AZA (Kulaeva et al., 2003; Fridman et al., 2006). DNA methylation profiling of these immortalized cells revealed hypermethylation at the promoters of numerous genes involved in the type I IFN pathway, including IRF7 (Kulaeva et al., 2003; Fridman et al., 2006; Li et al., 2008). Interestingly, these immortalized Li-Fraumeni patient-derived cells were inherently more sensitive to VSV infection (Fridman et al., 2006; Figure 1). Indeed, epigenetic repression of IFN and associated genes correlates with IFN insensitivity in many cancers. IRFs 4, 5, 7, and 8 are the target of DNA methylation, leading to dysfunctional responsiveness to type I and II IFNs in gastric cancer (Yamashita et al., 2010), while IRF8 is silenced by the same mechanism in several carcinomas (Lee et al., 2008). Similarly, IFN responsiveness was found to be suppressed in colon carcinoma cells due to DNA methylation at STAT1, STAT2, and STAT3, which can be restored following 5-AZA treatment (Karpf et al., 1999; Figure 1). Along the same signaling axis, epigenetic silencing of JAK1 in prostate adenocarcinoma cells was associated with unresponsiveness to both type I and type II IFNs (Dunn et al., 2005).

FIGURE 1 | Impact of cancer epigenetics on oncolytic virotherapy. The integration of repressive epigenetic marks such as DNA CpG methylation (Me, circle flags) and histone H3K9 methylation (Me, square flags), and of activating epigenetic marks such as histone H3K4 methylation and histone H3K27 acetylation (Ac, square flags), leads to higher-order nucleosome packaging and repression (red flags) or to open chromatin and gene expression (green flags). In cancer cells, dysregulation of epigenetic processes leads to various possible epigenetic states with respect to genes involved in the antiviral response (e.g., type I IFN, interferon-stimulated genes or ISGs) as well as those involved in antigen presentation (e.g., MHC I/II expression, represented by a semi-circle at the end of a stick). This ultimately leads to a variety of cancer cell phenotypes (A-D) and, subsequently, a variety of potential therapeutic responses to oncolytic viruses (OVs, represented by spiked green circles).
Overall, these studies highlight multiple epigenetic mechanisms that transcriptionally repress IFN-associated genes, culminating in dysfunctional and non-responsive IFN signaling across various cancer subtypes. However, in some instances alterations to epigenetic modifications in cancer lead to the up-regulation of antiviral factors. In both gastric tumors and gliomas, overexpression of the ISG IFITM1 promotes cancer cell migration and invasion, and its elevated expression is linked to reduced CpG methylation levels (Yu et al., 2011; Lee et al., 2012). Alongside its oncogenic properties, IFITM1 has antiviral activity through its ability to inhibit viral membrane fusion (Li et al., 2013; Figure 1).
It is also notable that while most cancers display IFN pathway defects, approximately a third of cancer cells are fully functional in their ability to produce and respond to IFN (Stojdl et al., 2003; Norman and Lee, 2000). Importantly, several studies have shown that HDAC inhibition using a variety of chemical inhibitors modulates IFN-induced expression of ISGs, type I IFN, and TLR3/4 (Génin et al., 2003; Nusinzon and Horvath, 2003; Chang et al., 2004; Klampfer et al., 2004; Sakamoto et al., 2004; Suh et al., 2010), which leads to increased OV activity in resistant cells (Nguyen et al., 2008). This further highlights the key role of epigenetic regulation in the generation of an antiviral response and suggests that it may be possible to improve OV efficacy in resistant tumors by manipulating the cancer epigenome, as will be discussed shortly.
CANCER CELLS EPIGENETICALLY REGULATE GENES INVOLVED IN ANTIGEN PRESENTATION
In addition to inactivating the antiviral response to escape antiproliferative/pro-death signals, tumors must also evade immune recognition and clearance. To this end, many tumor types epigenetically suppress CIITA expression by mechanisms including histone deacetylation/methylation and DNA promoter methylation, resulting in suppressed IFN-γ-mediated MHC-I and MHC-II gene expression and dysfunctional antigen presentation (Morris et al., 2000; Kanaseki et al., 2003; Morimoto et al., 2004; Satoh et al., 2004; Chou, 2005; Holling et al., 2007; Radosevich et al., 2007; Meissner et al., 2008; Londhe et al., 2012; Truax et al., 2012; Figure 1). Interestingly, treatment of cancer cells with HDAC inhibitors can promote antigen presentation and ultimately help to induce anti-tumor immunity (Khan et al., 2004; Chou, 2005). For example, trichostatin A (TSA)-treated irradiated B16 melanoma cells administered prophylactically as a cancer vaccine are significantly more effective than control irradiated B16 cells at protecting against a subsequent challenge with live B16 tumor cells (Khan et al., 2007). Cancer immune evasion can also be mediated by dampened expression of the transporter associated with antigen processing 1 (TAP1), a key factor for antigen presentation by MHC molecules (Johnsen et al., 1999). In carcinoma cells, decreased TAP1 expression was attributed to reduced levels of histone H3 acetylation at the TAP1 promoter (Setiadi et al., 2007; Figure 1).
In addition to these direct epigenetic effects on components of the antigenic response within cancer cells, the tumor microenvironment has also been shown to epigenetically drive tumor infiltrating CD4 + T cells to tolerance. In colon cancer, infiltrating CD4 + lymphocytes displayed high levels of DNA methylation at the IFN-γ promoter, and consequently required treatment with 5-AZA to enable tumor antigen-stimulated IFN-γ production (Janson et al., 2008; Figure 1). Overall, these studies highlight the role of epigenetic control in conferring "stealth" status to tumor cells such that they may evade the immune surveillance.
HDAC INHIBITORS CAN ALTER SUSCEPTIBILITY TO ONCOLYTIC VIRUSES
As alluded to earlier, defects in the IFN pathway are common in many malignancies but a significant proportion of tumors retain an active antiviral response (Stojdl et al., 2003;Dunn et al., 2006). Overcoming this antiviral response has been identified as a key barrier to the success of OV therapy and is the focus of many research groups including our own (Parato et al., 2005;Chiocca, 2008;Diallo et al., 2010;Liikanen et al., 2011;Russell et al., 2012). To overcome this barrier, many groups have looked at the possibility of using HDAC inhibitors in combination with OV therapy due to their repressive effects on the IFN-mediated antiviral response.
In one of the earliest reports, the anti-tumor effect of the oncolytic adenovirus OBP-301 in human lung cancer cells was found to synergize with FR901228 (romidepsin), a class I HDAC inhibitor (Watanabe et al., 2006). However, in this report, the increased activity was attributed to the up-regulation of coxsackie-adenovirus receptor (CAR) expression in cancer cells as opposed to direct effects on the antiviral response. Intriguingly, valproic acid, a class I/II HDAC inhibitor, was found by another group in parallel to inhibit oncolytic adenovirus through the up-regulation of p21 (WAF1/CIP1; Hoti et al., 2006). Subsequently, TSA and valproic acid, two pan-HDAC inhibitors, were found to enhance HSV oncolysis in squamous cell carcinoma and glioma cells (Otsuki et al., 2008; Katsura et al., 2009). Around the same time, Nguyen et al. (2008) showed that several HDAC inhibitors (HDIs) could synergize with oncolytic VSVΔ51, an attenuated VSV mutant that is incapable of blocking IFN production (Stojdl et al., 2003). Combination treatment with HDIs resulted in synergistic cell killing, due to both enhanced induction of cell death and increased viral output (typically over 100-fold). Enhanced spread of VV and Semliki Forest virus (SFV) was also observed in this study. Subsequently, TSA was shown to be particularly effective for improving VV-based OVs in several resistant cancer cell lines in vitro and in subcutaneous xenograft and syngeneic lung metastasis mouse models (MacTavish et al., 2011). Importantly, the effects of HDAC inhibitors on OV spread and efficacy remain restricted to tumors and not normal cells, presumably because cancer cells exhibit a number of additional aberrations, such as increased metabolism, that promote viral growth independent of the status of the antiviral response.
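Synergy claims like the ones above are commonly benchmarked against a null model of independent action. A minimal sketch of the Bliss-independence calculation follows; the kill fractions are made up for illustration and are not data from the cited studies.

```python
def bliss_expected(f_drug, f_virus):
    """Combined kill fraction expected if the two agents act independently."""
    return f_drug + f_virus - f_drug * f_virus

def bliss_excess(f_drug, f_virus, f_combo):
    """Observed minus expected kill; positive values suggest synergy."""
    return f_combo - bliss_expected(f_drug, f_virus)

# Hypothetical fractions of cells killed by HDI alone, OV alone, and both:
excess = bliss_excess(f_drug=0.30, f_virus=0.40, f_combo=0.85)
# expected independent effect = 0.30 + 0.40 - 0.12 = 0.58, so excess = 0.27
```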
HDAC INHIBITORS AS MODULATORS OF ONCOLYTIC VIRUS-ASSOCIATED ANTI-TUMOR IMMUNITY
While initial experiences with HDAC inhibitors in combination with OVs exploited mainly the ability of these epigenetic modifiers to improve the infectivity of resistant tumors, at least in part by dampening the innate cellular antiviral response, more recent studies have further exploited the broader immunological effects of HDAC inhibitors. For example, one report showed that valproic acid suppresses NK cell activity by blocking STAT5/T-BET signaling, leading to enhanced oncolytic HSV activity (Alvarez-Breckenridge et al., 2012). Also of note, a recent report by Bridle et al. (2013) demonstrated significant improvements in the generation of an anti-tumor immune response elicited against aggressive melanoma following a heterologous prime-boost vaccination strategy. After the establishment of intracranial melanomas, immune-competent mice were primed with a non-replicating adenovirus expressing the dopachrome tautomerase (hDCT) melanoma antigen, and then boosted with oncolytic VSV expressing hDCT. While this prolonged survival, mice were fully cured (64%) only when VSV-hDCT was administered in combination with the class I HDAC inhibitor MS-275. Remarkably, MS-275 reduced VSV-specific neutralizing antibodies and memory CD8+ T cells while maintaining prime-induced levels of humoral and cellular immunity against the tumor antigen. Interestingly, MS-275 also ablated the autoimmune vitiligo typically observed following immunization against this melanocyte-expressed antigen (Bridle et al., 2013).
USE OF OTHER EPIGENETIC MODULATORS TO IMPROVE ONCOLYTIC VIROTHERAPY?
Given the epigenetic regulation of the antiviral response and antigen presentation pathways, it is tempting to speculate that other epigenetic modulators, in addition to HDAC inhibitors, may also be used to amplify therapeutic responses in combination with OVs. To this end, a recent study by Okemoto et al. (2012) showed that 5-AZA treatment could enhance HSV replication when co-administered with IL-6 (Figure 1). However, given numerous reports of cancers epigenetically silencing antiviral genes by DNA methylation (Table 1), we would expect that, in general, 5-AZA and other DNA methyltransferase inhibitors should be ineffective at overcoming the cellular antiviral response. On the other hand, the advent of new pharmacological inhibitors of KMTs and KDMs brings forth new possibilities for improving OV efficacy. For example, given the finding that histone H3K9 di-methylation observed at ISGs correlates with repression and reduced IFN response/expression, investigating the potential utility of H3K9 demethylase inhibitors for enhancing OV spread in resistant tumors seems warranted. However, it is of critical importance that, as is observed for HDAC inhibitors, OV-enhancing effects remain tumor-selective.
CONCLUSION
While genetic mutations are believed to be essential initiators of carcinogenesis, it is clear that epigenetic deregulation plays a key role in augmenting and/or maintaining the tumor phenotype. OVs are promising biotherapeutics that, among other mechanisms, take advantage of the epigenetic silencing of cellular antiviral response genes, and in many ways unmask cancer antigens as they destroy cancer cells and promote an inflammatory response. While additional studies on the impact of epigenetic regulation on the antiviral and immunological responses are needed, it is already recognized from studies using HDAC inhibitors that epigenetic modulators can positively impact OV efficacy. Additional in vitro and in vivo studies evaluating the effect of other epigenetic modulators are needed to determine whether these could be used in combination with promising OV platforms anticipated to reach the clinic in the near future, to further improve their therapeutic impact.
ACKNOWLEDGMENT
This work was supported by grants from the Terry Fox Research Institute (Jean-Simon Diallo and John C. Bell).
When Non-Dominant Is Better than Dominant: Kinesiotape Modulates Asymmetries in Timed Performance during a Synchronization-Continuation Task
There is a growing consensus regarding the specialization of the non-dominant limb (NDL)/hemisphere system to employ proprioceptive feedback when executing motor actions. In a wide variety of rhythmic tasks the dominant limb (DL) has advantages in speed and timing consistency over the NDL. Recently, we demonstrated that the application of Kinesio® Tex (KT) tape, an elastic therapeutic device used for treating athletic injuries, significantly improves the timing consistency of isochronous wrist's flexion-extensions (IWFEs) of the DL. We argued that the augmented precision of IWFEs is determined by a more efficient motor control during movements due to the extra-proprioceptive effect provided by KT. In this study, we tested the effect of KT on the timing precision of IWFEs performed with the DL and the NDL, and we evaluated the efficacy of KT in counteracting possible timing precision differences between limbs. Young healthy subjects performed with and without KT (NKT) a synchronization-continuation task in which they first entrained IWFEs to paced auditory stimuli (synchronization phase), and subsequently continued to produce motor responses with the same temporal interval in the absence of the auditory stimulus (continuation phase). Two inter-onset intervals (IOIs) of 550-ms and 800-ms, one within and the other beyond the boundaries of the spontaneous motor tempo, were tested. Kinematics was recorded and temporal parameters were extracted and analyzed. Our results show that limb advantages in proficiently performing rhythmic movements are not side-locked but also depend on movement speed. The application of KT significantly reduces the timing variability of IWFEs performed at the 550-ms IOI. KT not only cancels the disadvantages of the NDL but also makes it even more precise than the DL without KT.
The superior sensitivity of the NDL to the extra-sensory information provided by KT is attributed to a greater competence of the NDL/hemisphere system in relying on sensory input. The findings of this study add a new piece of information to the motor timing literature. The performance asymmetries demonstrated here as preferred temporal environments could reflect limb differences in the choice of sensorimotor control strategies for the production of human movement.
INTRODUCTION
Motor timing coordination refers to the ability of individuals to perceive and generate motor responses at appropriate time intervals (Buhusi and Meck, 2005), and, like any motor behavior, it is characterized by some degree of variability (Fitts, 1954). The extent of this variability depends on the type of task performed, but also on the limb performing the task. In fact, behavioral research has revealed numerous advantages of the dominant (or preferred) limb in the generation of motor output, including increased strength (Armstrong and Oldham, 1999; Farthing et al., 2005), rate (Todor and Kyprie, 1980; Noguchi et al., 2006) and consistency of movement (Peters, 1976; Todor et al., 1982; Schmidt et al., 2000). For instance, Armstrong and Oldham (1999), comparing maximum grip forces in healthy subjects, showed that forces produced with the dominant arm were approximately 10% larger than those produced with the non-dominant arm. Also, in reaching tasks, when the rate at which pegs can be moved on a pegboard was evaluated, the dominant hand proved superior to the non-dominant one (Noguchi et al., 2006). Moreover, numerous finger-tapping experiments have demonstrated that the dominant limb (DL) has advantages, in terms of speed and timing consistency, over the non-dominant limb (NDL) when sequential actions are performed at maximal speed (Peters, 1976; Todor et al., 1982; Schmidt et al., 2000). This asymmetric motor skill in favor of the DL has been explained not only by increased use and training of the hand muscles (Ozcan et al., 2004) but also by the relatively enlarged excitability of the dominant motor cortex (De Gennaro et al., 2004) as well as by the increased excitability of the motoneuronal pool at the level of spinal circuitry (Adam et al., 1998).
The execution of a motor task may also be influenced by the different senses of the somatosensory system (Avanzino and Fiorio, 2014). Proprioception, defined as the ability to sense the position and movement of a limb in space along with muscular effort and tension (Proske and Gandevia, 2009), is central to determining the accuracy of motor performance (Guigon et al., 2008; Rosenkranz et al., 2009). With respect to timing control specifically, evidence for the relevance of somatosensory feedback in timing coordination comes from studies that investigated basic mechanisms of timing using a tapping paradigm. In fact, damage to peripheral or central structures for sensory/proprioceptive information processing results in increased timing variability. For instance, timing skills were found to be impaired in a deafferented patient with respect to those of healthy subjects (LaRue et al., 1995). Also, Spencer et al. (2003) showed a deleterious reduction of timing precision in individuals with lesions of the cerebellum, a nervous structure strongly implicated in the processing of proprioceptive information (Tinazzi et al., 2013).
Given that sensory feedback plays an essential role in motor control, it is logical to hypothesize that the use of a device able to influence proprioceptive information may modify performance precision. We examined the effect of Kinesio Tex (KT) tape, as a sensory device, on the precision of motor timing coordination. KT taping is a kinesthetic method currently used in clinical practice to relieve some symptoms of athletic injuries and a variety of physical disorders (Kase et al., 2013). Developed by the Japanese chiropractor Dr. Kenso Kase in the 1970s (Morris et al., 2013), KT is a specially designed tape with elastic properties and stretching capability, intended to mimic the thickness and flexibility of the skin (Kase et al., 2013). It is claimed that, while movement occurs, KT application provides a constant pulling force to the skin over which it is applied and lifts the skin away from the tissue beneath, favoring the release of pressure from the tender tissues underneath (Morris et al., 2013). Recently, a magnetic resonance imaging study objectively quantified the mechanical effects of KT on the skin and soft tissues over which it is applied (Pamuk and Yucesoy, 2015).
KT application has been shown to significantly influence proprioception (Pelosin et al., 2013; Seo et al., 2016). KT was found to induce a modification in the ability of sensory discrimination, which is abnormal in patients with dystonia (Pelosin et al., 2013). Moreover, Seo et al. (2016) found that, in normal adults with sprained ankles, KT improved position sense in dorsiflexion and inversion of the ankle joint. At first it was proposed that the extra-proprioceptive effect provided by KT is due to the stimulation of cutaneous mechanoreceptors via stretching/deformation of the skin (Kase et al., 2013). However, the recent study by Pamuk and Yucesoy (2015), showing that KT application causes deformations of targeted and deeper muscle tissues, made more plausible the assumption that KT may also stimulate muscle spindles during movement (Chang et al., 2010).
Recently, our group has devoted attention to improving the understanding of how KT is able to modulate the motor control, and namely the variability, of a rhythmic motor behavior. We investigated the effect of KT application on timing coordination in healthy individuals by studying repeated isochronous wrist's flexion-extensions (IWFEs) performed with no direct surface opposition and while minimizing visual information (Bravi et al., 2014a,b, 2015, 2016). We showed that KT, when applied on the dominant arm, was able to reduce the timing variability of IWFEs performed under various auditory conditions and during their recall (Bravi et al., 2014b). In addition, we showed that sensorimotor coordination was significantly improved independently of the direction and tension of KT application (Bravi et al., 2016). We attributed the effect of KT to augmented afferent proprioceptive information via the stimulation of mechanoreceptors.
Although the dominant right upper limb has been found to be faster (Flowers, 1975; Elliott et al., 1999), more accurate (Carson et al., 1993) and less variable (Elliott et al., 1999) than the non-dominant left arm, there is a growing consensus regarding the specialization of the NDL/hemisphere system for utilizing proprioceptive feedback (Colley, 1984; Riolo-Quinn, 1991; Goble et al., 2006; Goble and Brown, 2007, 2010). Conversely, the dominant system has been suggested to function more in a feedforward fashion (Goble and Brown, 2007). While over the past decades between-hand differences during rhythmic cyclical movements have been explored in considerable depth (Peters, 1976; Todor et al., 1982; Schmidt et al., 2000), how flexible they are, and whether and by what means they can be modulated, is much less known. Therefore, in the current study we aimed to investigate the impact of KT, when applied on the DL and the NDL, on the timing variability of IWFEs.
Furthermore, spontaneous rhythmic activity is a pervasive behavior of the nervous system in animals and humans (Brown, 1914; Yates et al., 1972; Sternad et al., 2000). Spontaneous motor tempo is defined as the frequency that a moving organism prefers when performing rhythmic actions (MacDougall and Moore, 2005). Although each individual has their own spontaneous tempo, it has been shown that humans prefer to perform rhythmic motor behaviors with different motor effectors within a frequency region around 2 Hz, ranging from 2.2 Hz to 1.66 Hz (Vanneste et al., 2001; MacDougall and Moore, 2005; McAuley et al., 2006; Bisio et al., 2015). Spontaneous motor tempo is speculated to reflect the intrinsic rate of a spinal central generator (MacDougall and Moore, 2005). Central pattern generators (CPGs) are spinal neuronal networks thought to contribute to the execution of rhythmic motor patterns, such as locomotion, by generating periodic motor commands (Frigon et al., 2004; Zehr et al., 2007). While CPGs have been well ascertained in invertebrates, primitive fish, and quadrupeds such as cats (Arshavsky et al., 1985; Grillner, 1985; Baev et al., 1991), it is hard to locate elements of such circuits in higher vertebrates due to the complexity of the nervous structures and their additional modulation by higher brain centers (Schaal et al., 2004). Although the existence of CPGs in humans is only inferred indirectly, recent evidence suggests that neuronal networks are generally well preserved throughout evolution (Lamb and Yang, 2000; Marder, 2001; Zehr et al., 2007; Guertin, 2013).
We previously found that the reduction in timing variability of IWFEs provided by KT is concomitant with a modulation of the neural processes elicited to govern the temporal production of rhythmic movements (Bravi et al., 2014b). Specifically, mean lag-1 autocorrelation values were biased towards positive values when KT was applied, indicating a reinforcement of the dynamic control of non-temporal movement parameters (Spencer and Ivry, 2005; Huys et al., 2008). This leads us to suspect that the application of KT, by augmenting proprioceptive information during movement, reinforces the efficiency of spinal motor circuitry, rendering the production of IWFEs less dependent on central drive (Bravi et al., 2014b). Therefore, to pursue our hypothesis, we evaluated the effect of KT on sets of IWFEs with interval durations of 550-ms and 800-ms (corresponding to 1.81 Hz and 1.25 Hz, respectively). These durations were chosen because we were interested in investigating two movement frequencies falling, respectively, within and beyond the boundaries of the spontaneous motor tempo.
In this study, our interest was the assessment of an inexpensive wearable sensory device such as KT in influencing rhythmic motor behavior. Specifically, we tested the effect of KT on the timing variability of IWFEs performed with the DL and the NDL, and we evaluated the efficacy of KT in counteracting possible timing precision differences between limbs. Also, since numerous past experiments have demonstrated the superiority of the DL over the NDL when sequential actions are performed at maximal speed (Peters, 1976; Todor et al., 1982; Schmidt et al., 2000), this study provides the opportunity to test whether such timing precision asymmetry is still preserved when the speed of rhythmic movement is not maximal.
We thus performed an experiment in which healthy subjects, tested with KT and without KT (NKT), participated in two sessions (KT and NKT cases) in which sets of IWFEs were performed with the DL and the NDL, in a synchronization-continuation task at the two inter-onset intervals (IOIs) of 550-ms and 800-ms. As in our previous studies (Bravi et al., 2014a,b, 2015, 2016), participants were asked to perform movements in a natural way (Huys et al., 2008).
Our first experimental hypothesis is that the effect of KT should be greater on the NDL, since the NDL/hemisphere system is specialized for utilizing proprioceptive feedback (Goble et al., 2006; Goble and Brown, 2007, 2010). Additionally, in the event of a specific action of KT on spinal circuitry, we expect to observe a more prominent effect of KT when participants perform movements within the spontaneous motor tempo range of frequencies (MacDougall and Moore, 2005; McAuley et al., 2006).
Participants
Twenty-five healthy adults were recruited for this study (age: 22.7 ± 2.5 years; 12 males and 13 females). All participants were right-handed (laterality score from the Edinburgh Handedness Inventory: 82.1 ± 23.4; Oldfield, 1971); they were naive to the task and the purpose of the study, and knew nothing about the KT method. None were musically trained, and all were free of documented auditory, motor, or neurological impairments. Participants were not paid. The study protocol was approved by the Institutional Ethics Committee (Comitato Etico Area Vasta Centro AOUCareggi, Florence, Italy; Prot. N. 2015/0018234, Rif. 63/12). All subjects gave written informed consent in accordance with the Declaration of Helsinki.
Set Up
The set up is fully described elsewhere (Bravi et al., 2014a,b, 2015, 2016) and is summarized here. Every participant was tested individually, sitting upright on a chair with the feet on a leg rest. Each participant was asked to wear an eye mask, to prevent interference from visual information, as well as headphones (K 240 Studio, AKG Acoustics GmbH, Wien, Austria) through which audio files could be heard. The participant's forearms rested on the armrests of the chair in a relaxed horizontal position. The wrist and hand were free to move in mid-air with no direct opposition, thus minimizing tactile information. The angle of the elbow joint resulted from the subject sitting in a comfortable position while keeping the wrist and hand free to move without any possibility of touching the armrest with any part of the hand during the task. In any case, the elbow joint angle, measured by a goniometer, averaged around 100° (±5°). A triaxial accelerometer (ADXL330, Analog Devices Inc., Norwood, MA, USA) was placed on the dorsal aspect of the hand performing the rhythmic task, over the proximal part of the 2nd-3rd metacarpal bones (Figure 1A). Sensor output was acquired and digitized at 200 Hz through a PCI-6071E board (12-Bit E Series Multifunction DAQ, National Instruments, Austin, TX, USA). Streams of clicks were generated using Audacity, via the Generate Click Track function. Each sequence contained 16 clicks with constant IOIs of 550- and 800-ms. Each click sound of 20 ms duration (set to white noise) was followed by 530-ms of silence for the IOI of 550-ms or by 780-ms of silence for the IOI of 800-ms.
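The click streams described above can be sketched in Python as a minimal reconstruction (the original stimuli were made with Audacity's Generate Click Track; the 44.1-kHz sample rate and uniform-noise clicks are assumptions for illustration):

```python
import numpy as np

def click_track(ioi_ms, n_clicks=16, click_ms=20, fs=44100, seed=0):
    """Stream of white-noise clicks with a constant inter-onset interval.

    Each click lasts click_ms and is followed by silence, so successive
    click onsets are ioi_ms apart (e.g. 20-ms click + 530-ms silence
    for the 550-ms IOI).
    """
    rng = np.random.default_rng(seed)
    click_n = int(fs * click_ms / 1000)  # samples per click
    ioi_n = int(fs * ioi_ms / 1000)      # samples per inter-onset interval
    stream = np.zeros(n_clicks * ioi_n)
    for k in range(n_clicks):
        stream[k * ioi_n : k * ioi_n + click_n] = rng.uniform(-1, 1, click_n)
    return stream

track_550 = click_track(550)
track_800 = click_track(800)
print(len(track_550) / 44100)  # 8.8 s of synchronization stimuli
print(len(track_800) / 44100)  # 12.8 s
```

The 16-click streams last 8.8 s and 12.8 s, which is consistent with the set durations reported in the Sessions section once the 1-min continuation phase is added.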
KT Application
The KT tape (Kinesio Holding Company, Albuquerque, NM, USA) consists of a polymer elastic strand wrapped in 100% cotton fibers with waterborne acrylic pressure-sensitive adhesive. KT is applied to its paper substrate with approximately 25% tension, and the adhesive is 100% acrylic (Kase et al., 2013). Following previously described protocols (Bravi et al., 2014b, 2016), the KT application was designed to cover the open kinetic chain including the wrist, metacarpal and finger joints (Figure 1B). To accomplish this, the participant, already seated on the chair with the forearm in full pronation and resting on the armrest, was asked to keep the wrist in full flexion. After manually assessing the origin (i.e., lateral epicondyle) and insertion (i.e., distal phalanges) of the extensor muscles of the kinetic chain as a whole, the distance between the lateral epicondyle of the humerus and the distal end of the third phalanx of the middle finger was measured with a tape meter. The strip of KT was cut 5 cm longer than the maximum length of the measured kinetic chain (Kase et al., 2013). The course of the tendons of the extensor muscles (for each finger) was then identified on the back of the participant's hand, and the distances between the distal end of each phalanx and the wrist were measured. These measurements were used to cut the distal side of the elastic band into five branches to be placed over the metacarpal area and fingers, following the course of the tendons. Once the tape was cut to the desired configuration, KT was applied from origin to insertion of the wrist and finger extensors of the arm. Specifically, KT was applied from the lateral epicondyle of the humerus to the metacarpal area and fingers with moderate length tension (50% of the maximum available tension).
FIGURE 1 | (C) The no KT (NKT) and the KT sessions (color-coded in beige and pink, respectively) each comprised a total of 24 IWFEs sets and were divided into two blocks. One block was performed exclusively with the dominant limb (DL), the other with the non-dominant limb (NDL). Each block consisted of 12 IWFEs sets performed in two conditions, six at the 800-ms inter-onset interval (IOI) and six at the 550-ms IOI. The performance of the second block started after a 5-min rest interval from the end of the first block.

In order to identify the percentage of KT tension, we considered the length of KT when the tape is off the paper (expressed in cm) as a reference point (0%). KT was stretched to its maximum available tension. Since the technique required a length tension of 50%, during application this translated to 50% of the difference (expressed in cm) between the maximum available length and the reference-point length. KT was applied to all participants by the same investigator to ensure consistency throughout the study (Bravi et al., 2016). This procedure was repeated twice in order to apply the strip of KT on the wrist and finger extensors of both the dominant and the non-dominant arm.
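The tension rule above (applied length = off-paper length plus 50% of the available stretch) amounts to simple arithmetic; a sketch with illustrative lengths (the centimetre values below are hypothetical, not measurements from the study):

```python
def applied_length(rest_cm, max_cm, tension=0.50):
    """Length at which KT is applied: the off-paper (reference, 0%) length
    plus `tension` times the available stretch (max - rest).
    50% tension = halfway between rest length and maximum stretch."""
    return rest_cm + tension * (max_cm - rest_cm)

# Hypothetical strip: 40 cm off the paper, 60 cm at maximum stretch.
print(applied_length(40.0, 60.0))               # 50.0 cm at 50% tension
print(applied_length(40.0, 60.0, tension=0.0))  # 40.0 cm (no stretch)
```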
Sessions
All individuals participated in two sessions, one with no KT (the NKT case) and one with KT application (the KT case; Figure 1C). Sessions were performed at least 3 days apart (Bravi et al., 2014b, 2016). Since the time of day has been found to affect people's timing performance (Lotze et al., 1999), participants executed the sessions during daylight hours, with each participant performing the two sessions systematically at the same hour of the day. The order of the two sessions was randomized between participants. In the KT session, the test started after the strip of KT had been applied on both the dominant and the non-dominant arm, starting with the arm with which the sets of IWFEs would be performed first. The synchronization-continuation paradigm was adopted for this experiment (Repp and Steinman, 2010; Braun Janzen et al., 2014). Each participant was asked to entrain IWFEs to the clicks so that the point of wrist flexion peak would coincide with the presentation of the discrete auditory event (synchronization phase). When the stream of clicks ceased, participants continued to produce movements with the same temporal interval for 1 min, until a vocal stop signal was given by the experimenter announcing the end of the set of IWFEs (continuation phase; Figure 2). The duration of the continuation phase in each set of IWFEs was controlled by a stopwatch.
Each session began with instructions on the rhythmic motor task to complete as well as on how the sets of wrist's flexion-extensions should be performed (for criteria see ''Set Up'' Section). This phase was followed by a short practice test to familiarize the participant with the task. Before data collection started, it was verified that the instructions were understood and that the participant felt comfortable with the task. Each session consisted of two blocks of synchronization-continuations. One block was performed exclusively with the DL (Figure 1C), the other with the NDL (Figure 1C). Each block consisted of 12 IWFEs sets, six for each of the two IOI (550- and 800-ms) conditions. A whole set of IWFEs lasted approximately 68.8 s in the 550-ms IOI condition or 72.8 s in the 800-ms IOI condition. The passage from the first block to the second occurred after a 5-min rest interval. The order of blocks and IOI conditions was randomized to obtain a balanced number of subjects executing the sets of IWFEs with one hand or the other first and receiving one IOI or the other first. A set began when the experimenter asked whether the participant was ''ready'', after which the stream of clicks started and the participant moved in synchrony with it. A 60-s rest interval separated consecutive sets of IWFEs to avoid fatigue during performance (Bonassi et al., 2016; Figure 1C).
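The reported set durations follow directly from the design: 16 synchronization clicks at the given IOI plus the 1-min continuation phase. A quick check:

```python
def set_duration(ioi_ms, n_clicks=16, continuation_s=60):
    """Approximate duration of one IWFE set: the synchronization phase
    (n_clicks * IOI) plus the 1-min continuation phase, in seconds."""
    return n_clicks * ioi_ms / 1000 + continuation_s

print(set_duration(550))  # 68.8 s, matching the reported value
print(set_duration(800))  # 72.8 s
```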
Data Format and Statistical Analysis
Kinematic parameters were evaluated from a total of 1200 sets of movements (48 sets per participant). Data from the accelerometer were stored on a computer and analyzed off-line. The signal extracted from the accelerometer presented a minimum when the wrist reached maximum flexion and a maximum when it reached maximum extension. The duration of a single wrist's flexion-extension (i.e., IWFE duration) was calculated as the distance between two consecutive flexion-extension minima (custom software developed in Matlab). Only the data from the continuation phase were analyzed, since the synchronization phase was used only to induce the desired frequency of IWFEs.
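A minimal sketch of this extraction step (the study used custom Matlab software; this NumPy reconstruction uses a simple three-point local-minimum rule, which assumes a smooth trace — real accelerometer data would typically be low-pass filtered first):

```python
import numpy as np

def iwfe_durations(accel, fs=200.0):
    """IWFE durations: time between consecutive flexion minima of an
    accelerometer trace sampled at fs Hz. A local minimum is a sample
    strictly lower than both neighbours (assumes a smooth signal)."""
    a = np.asarray(accel, dtype=float)
    interior = (a[1:-1] < a[:-2]) & (a[1:-1] < a[2:])
    minima = np.where(interior)[0] + 1       # indices of flexion minima
    return np.diff(minima) / fs              # durations in seconds

# Synthetic 800-ms-period movement, 10 s at the study's 200-Hz sampling rate
fs = 200.0
t = np.arange(0, 10, 1 / fs)
accel = np.cos(2 * np.pi * 1.25 * t)
d = iwfe_durations(accel, fs)
print(round(float(d.mean()), 3))  # 0.8
```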
In addition, since changes in timing were commonly observed at the transition from the synchronization to continuation phase (Flach, 2005), the first 5 s of the continuation phase in each recording were excluded from analysis. The last IWFE before or across the vocal stop signal was also excluded.
To assess the effect of KT and of the limb on the observed IWFEs durations, we adopted a random effect analysis of variance (ANOVA) model for repeated measurements (Pinheiro and Bates, 2000; Diggle et al., 2002), as previously used in Bravi et al. (2014b).
Two separate random effect ANOVA models were performed on data collected in the 550-ms and 800-ms IOI conditions. The response variable was the difference between the observed IWFEs duration and the expected duration in each condition. In the following this variable will be called error duration. The explanatory factors were the KT (presence/absence) and the limb (dominant/non-dominant).
The random effect ANOVA model adopted for the analyses has parameters that can be partitioned into two parts: the fixed effect part and the random effect part. The fixed effect parameters model the average response as dependent on the explanatory factors and their interaction, as in an ordinary ANOVA model. We used dummy coding for the factors in the fixed effect part, setting NKT = 0 and KT = 1, DL = 0 and NDL = 1. In addition, as the response variables were recorded several times for each performance and for each individual, random effect parameters had to be included to take into account the lack of independence among the observations. The random effect part was specified in order to separately measure the variability within individuals and within performances. In particular, we adopted a random effect ANOVA model with both a random intercept and a random slope, in which the random effect variability (measured by the standard deviation, SD) depends on the explanatory factors and their interaction. This model accounts for possible residual heteroscedasticity. A lower random effect residual SD reflects a stronger proficiency in the production of the IWFEs durations. Specifically, the model has three levels of variation: (1) single IWFEs durations, on which error duration is measured; (2) series of IWFEs durations as sets of movements (48 sets per participant); and (3) individuals, performing the 48 sets of IWFEs durations. At the IWFEs duration level (1), within each set, we adopted an autoregressive AR(1) model for the random effects: the covariance between the errors of duration i and duration j in set k is the variance in set k times ρ^|i−j|, where |i−j| is the absolute difference between the indices and ρ is the parameter measuring the correlation between two subsequent durations. At the set level (2), the random effects have different variances for each combination of treatments (NDL-NKT, NDL-KT, DL-NKT and DL-KT).
The combination NDL-NKT was considered the baseline category. The SD for combination h (h = NDL-KT, DL-NKT or DL-KT) was parametrized as SD_h = baseline SD × ratio_h. Individuals were considered independent, with constant variance.
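The AR(1) within-set covariance structure described above, cov(e_i, e_j) = variance × ρ^|i−j|, can be made concrete with a small sketch (the variance and ρ values below are illustrative, not fitted estimates):

```python
import numpy as np

def ar1_cov(n, variance, rho):
    """Within-set covariance matrix implied by the AR(1) random-effect
    model: cov(e_i, e_j) = variance * rho**|i - j|."""
    idx = np.arange(n)
    return variance * rho ** np.abs(idx[:, None] - idx[None, :])

C = ar1_cov(4, variance=2.0, rho=0.5)
print(C[0, 0], C[0, 1], C[0, 3])  # 2.0 1.0 0.25
```

Correlation between errors decays geometrically with the number of intervening durations, which is what makes the model suitable for consecutive IWFE durations within a set.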
FIGURE 2 | Synchronization-continuation task. Each participant was required to entrain isochronous wrist's flexion-extensions (IWFEs) to the paced clicks so that the wrist flexion peak coincided with the presentation of the discrete auditory event (synchronization phase). When the stream of clicks ceased, participants continued to produce movements with the same temporal interval for 1 min, until a vocal stop signal given by the experimenter announced the end of the set of IWFEs (continuation phase). Streams of paced audio stimuli had IOIs of 550 and 800 ms, respectively.

In order to display the effect of KT on the variability of each individual's timed performance, we constructed Poincaré maps, or return maps, of the time series of IWFEs durations (Shenker, 1982; Mendez-Balbuena et al., 2012). A return map is a graph of the IWFEs duration x_(i+1) vs. the previous IWFEs duration x_i, where i is the current observation. In a return map, a timed performance with lower timing variability shows a smaller dispersion of the points in the graph.
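Constructing the return-map points is a one-line pairing of each duration with its successor; a minimal sketch (the example durations are illustrative):

```python
import numpy as np

def return_map(durations):
    """Points (x_i, x_(i+1)) of the Poincaré/return map of a series of
    IWFE durations; tighter clustering means lower timing variability."""
    x = np.asarray(durations, dtype=float)
    return np.column_stack([x[:-1], x[1:]])

pts = return_map([0.80, 0.79, 0.81, 0.80])
print(pts.shape)  # (3, 2)
```

The dispersion of `pts` around the diagonal can then be summarized (e.g. by the SD along and across the identity line) or plotted directly.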
In addition, we investigated whether KT influences the short-term dependencies in the sets of IWFEs durations by computing a lag-one autocorrelation, ρ(1), analysis. ρ(1) is the autocorrelation of a series with itself, shifted by a lag of 1 observation. A positive ρ(1) describes a series in which adjacent observations generally move in the same direction; in the presence of a strong linear trend, a value of ρ(1) close to 1 would be expected. Conversely, a negative ρ(1) reflects swings in the set, in which high values are immediately followed by low values and vice versa (Dunn, 2005). A slow natural change in tempo, resulting in differing expected tap intervals at different points in time (drift), occurs in sets of long duration (Collier and Ogden, 2004). Since such drift in a time-interval series could be a source of positive autocorrelations in long continuation series (Collier and Ogden, 2004), we performed a series of detrended windowed lag-one autocorrelations, herein abbreviated wρ(1), for each set of IWFEs (Lemoine and Delignières, 2009; Bravi et al., 2014b). We computed wρ(1) over a window of the first 30 points, moving the window by one point along the entire set. To analyze the observed wρ(1) we adopted a random effect ANOVA model for repeated measurements. In order to allow an appropriate use of parametric statistical tests, Fisher's Z-transformation was used to normalize the distribution of wρ(1) (Nolte et al., 2004; Freyer et al., 2012).
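The wρ(1) computation can be sketched as follows (an illustrative reconstruction: the 30-point window and one-point step are from the text, while per-window linear detrending and the synthetic series below are assumptions):

```python
import numpy as np

def windowed_lag1(series, window=30):
    """Detrended windowed lag-one autocorrelations, w-rho(1): a linear
    trend is removed from each window before computing rho(1); the
    window slides along the series one observation at a time."""
    x = np.asarray(series, dtype=float)
    t = np.arange(window)
    out = []
    for start in range(len(x) - window + 1):
        w = x[start:start + window]
        w = w - np.polyval(np.polyfit(t, w, 1), t)   # remove linear drift
        out.append(np.corrcoef(w[:-1], w[1:])[0, 1]) # lag-1 autocorrelation
    return np.array(out)

def fisher_z(r):
    """Fisher's Z-transformation, used to normalize the rho(1) values."""
    return np.arctanh(r)

# Synthetic set of 100 IWFE durations around 550 ms (illustrative only)
rng = np.random.default_rng(1)
series = 0.55 + 0.01 * rng.standard_normal(100)
wr = windowed_lag1(series)
print(wr.shape)  # (71,)
```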
The significance level was set at p ≤ 0.05 for the analyses in the fixed effect part of the random effect ANOVA model for repeated measurements.
RESULTS
To assess the effects of KT and of the limb on the observed IWFE durations, we performed random effect ANOVA models separately for the 800-ms and 550-ms IOI conditions. The estimates of the error duration for the fixed effect part of the models in the two IOI conditions are reported in Table 1, together with their p-values, t-values and confidence intervals. As mentioned in the "Data Format and Statistical Analysis" section, for these models the error duration is the difference between the observed and the expected IWFE duration (800 ms and 550 ms, respectively).
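The sign convention of the error duration (negative values indicate intervals shorter, i.e., faster, than the target) can be made explicit with a trivial sketch; the helper name is ours, not from the paper.

```python
def error_durations(observed_ms, expected_ms):
    """Per-cycle error duration: observed minus expected IWFE duration.

    Negative values mean the interval was shorter (faster) than the target."""
    return [obs - expected_ms for obs in observed_ms]

# A set performed against the 800-ms target, on average slightly too short:
print(error_durations([784, 800, 812], 800))  # → [-16, 0, 12]
```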
For the 800-ms IOI condition, the intercept, i.e., the estimate of the error duration when movements were performed with the DL and without KT, was found to be negative and significant (−16.2 ms; p-value = 0.0366; Table 1), indicating that observed IWFE durations were, on average, shorter than expected. The effect of NDL on the error duration was highly significant and negative (−8.9 ms; p-value = 0.0027), implying that IWFE durations produced with the NDL were on average about 9 ms shorter than those achieved with the DL. The application of KT on the DL (with KT; Table 1) significantly corrected the durations toward the expected IWFE durations (5.9 ms; p-value = 0.0438). The interaction between NDL and KT was not significant (estimate: 3.1 ms; p-value = 0.5984), indicating that the effect of KT in modeling the error duration does not vary with the limb on which it is applied.

For the 550-ms IOI condition, neither the intercept (1.8 ms; p-value = 0.6872; Table 1) nor the effect of NDL (0.9 ms; p-value = 0.5754) was significant. Consequently, participants were on average slightly slower and almost equally accurate at producing the expected IWFE durations with the dominant or the NDL when KT was not applied. Conversely, KT, when applied on the DL (with KT), had a highly significant effect on the error duration (Table 1).

Additionally, random effect residual SD estimates were computed for each condition. The ratio of residual SD estimates was analyzed to determine whether there were significant differences among cases. Coding the effect of a factor on a SD via a ratio guarantees that the derived SD is positive. The residual SD for the NDL-NKT case was considered as baseline (residual SD = 1.00) and compared with the residual SDs for the other three cases (NDL-KT, DL-NKT and DL-KT). A SD ratio lower or greater than one means that the timing variability of a specific case is reduced or augmented with respect to the baseline (NDL-NKT) case.
Two conditions are considered significantly different when their confidence intervals do not overlap (for details on random effect models for heterogeneous populations, see Muthén, 1989). Estimates and confidence intervals resulting from the ratio between the residual SD for NDL-NKT and the residual SDs for the other three cases (NDL-KT, DL-NKT and DL-KT) are given in Table 1 (see Random effect part).
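The SD-ratio comparison can be illustrated with a small sketch. Note that the paper derives the ratios and their confidence intervals from a random effect model; the bootstrap used below is only a simple stand-in for demonstration, and all names are ours.

```python
import numpy as np

def sd_ratio_ci(baseline, other, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate and bootstrap CI for SD(other)/SD(baseline).

    A ratio below 1 means the condition is less variable than the
    baseline (here, the NDL-NKT case)."""
    rng = np.random.default_rng(seed)
    b, o = np.asarray(baseline, float), np.asarray(other, float)
    ratios = [np.std(rng.choice(o, o.size), ddof=1) /
              np.std(rng.choice(b, b.size), ddof=1) for _ in range(n_boot)]
    lo, hi = np.quantile(ratios, [alpha / 2, 1 - alpha / 2])
    return np.std(o, ddof=1) / np.std(b, ddof=1), (lo, hi)

def overlap(ci_a, ci_b):
    """Non-overlapping intervals are read as a significant difference."""
    return not (ci_a[1] < ci_b[0] or ci_b[1] < ci_a[0])

# Example: a hypothetical condition clearly less variable than baseline
rng = np.random.default_rng(42)
baseline = rng.normal(550, 20, 300)   # e.g., NDL-NKT error durations
with_kt = rng.normal(550, 10, 300)    # hypothetical lower-variability case
ratio, ci = sd_ratio_ci(baseline, with_kt)
assert ratio < 1 and ci[1] < 1        # CI excludes 1: reduced variability
```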
For the 800-ms IOI, the residual SD was found to be 34.29 ms in the NDL-NKT case, 34.15 ms in the NDL-KT case, 35.45 ms in the DL-NKT case, and 34.88 ms in the DL-KT case (Figure 3A). According to the confidence intervals of the SD ratios over the experimental conditions, KT reduced the variability of IWFEs performed with the dominant or the NDL, but in both cases the decrease did not reach the significance level (i.e., NDL-NKT vs. NDL-KT or DL-NKT vs. DL-KT; Figure 3B). Conversely, significant differences were shown in the NKT cases when comparing the non-dominant and the DL (i.e., NDL-NKT vs. DL-NKT; Figure 3B). Individuals were found to be more precise in performing slow rhythmic movements with the NDL (Figures 3A,B). Significant differences were also maintained in the KT cases when comparing the non-dominant and the DL (i.e., NDL-KT vs. DL-KT; Figure 3B).
For the 550-ms IOI, the residual SDs of all cases showed, in general, smaller values than those achieved in the 800-ms IOI condition. The residual SD was found to be 21.25 ms in the NDL-NKT, 19.60 ms in the NDL-KT, 20.57 ms in the DL-NKT, and 19.59 ms in the DL-KT case (Figure 3C). According to the confidence intervals of the SD ratios over the experimental conditions, significant differences were shown in the NKT cases when comparing the non-dominant and the DL (i.e., NDL-NKT vs. DL-NKT; Figure 3D). However, differently from what was observed in the 800-ms IOI condition, subjects were more precise in performing IWFEs with the DL (Figures 3C,D). Also, significant differences were found between the NDL-NKT and NDL-KT cases, and between the DL-NKT and DL-KT cases (Figure 3D). Unlike at the 800-ms IOI, the application of KT significantly decreased the timing variability of IWFE durations in both limbs (Figures 3C,D), suggesting that the effect of KT is influenced by the frequency of the movement being performed. Finally, the NDL-KT and DL-NKT cases presented significant differences (Figure 3D), showing that KT applied on the NDL, differently from what happens for the slower movements, not only counteracts the precision disadvantage with respect to the DL but makes the NDL more precise than the dominant one. These significant differences were lost when KT was applied on the DL (i.e., NDL-KT vs. DL-KT; Figures 3C,D).
To visualize the effect of KT on reducing the timing variability of IWFE durations performed at the 550-ms IOI, we used a qualitative analysis of the Poincaré (return) maps. Figure 4 displays the return maps of seven subjects in the four conditions: NDL-NKT, NDL-KT, DL-NKT and DL-KT. The dispersion of the points in the maps, per subject, is smaller in the KT cases than in the NKT cases, meaning that in the KT cases the IWFEs were performed more proficiently than in the NKT cases, in which a larger dispersion of the points is shown.
FIGURE 3 | Mean residual SDs of IWFE durations and residual standard deviation (SD) ratios. (A,B) In the NKT cases, participants are more precise in performing slow rhythmic movements (800-ms IOI) with the NDL. The NDL proficiency is maintained also in the KT (color-coded in pink) cases. (C,D) In the NKT (color-coded in beige) cases, subjects were more precise at performing faster rhythmic movements (550-ms IOI) with the DL. The application of KT counterbalances the between-hand differences in performance. Panels (B,D) show the estimates and confidence intervals (vertical bars) resulting from the ratio between the residual SD for NDL-NKT and the residual SDs for the other three cases (NDL-KT, DL-NKT and DL-KT).

In addition, we explored whether and to what extent KT and the limb modulated the short-term dependencies in the sets of IWFE durations by studying wρ(1). We adopted random effect ANOVA models separately for the 800-ms and 550-ms IOI conditions. The estimates of the wρ(1) of the IWFE durations for the fixed effect parameters in the two IOI conditions are reported
in Table 2, together with their p-values, t-values and confidence intervals. For the 800-ms IOI condition, the intercept, i.e., the estimate of the wρ(1) of IWFE durations when movements were performed with the DL and without KT, was found to be positive and significantly different from 0 (0.180; p-value = 0.0000; Table 2). The NDL had no significant effect on wρ(1) (0.024; p-value = 0.3078). In addition, KT, when applied on the DL (with KT; Table 2), did not significantly modulate the wρ(1) of IWFE durations (−0.028; p-value = 0.2228), and the interaction between NDL and KT was not significant (estimate: 0.012; p-value = 0.7179). These results suggest that KT, when applied on the dominant or the NDL, does not influence the short-term dependencies in the sets of IWFE durations performed at the 800-ms IOI.
Higher positive values of wρ(1) were found in the 550-ms IOI condition with respect to the 800-ms IOI condition (Table 2). The intercept was found to be positive and significantly different from 0 (0.316; p-value = 0.0000; Table 2). The NDL did not significantly influence the wρ(1) of IWFE durations (−0.014; p-value = 0.4213). Differently from what happens for the slower movements at the 800-ms IOI, the effect on wρ(1) of KT applied on the DL was highly significant and positive (0.066; p-value = 0.0002). Also in this case, the interaction between NDL and KT was not significant (estimate: −0.011; p-value = 0.6448). Overall, at the 550-ms IOI the application of KT significantly influences wρ(1), and this effect does not vary with the limb on which it is applied.
DISCUSSION
The results show that timing precision asymmetries between the dominant and non-dominant hands are present when IWFEs are performed at the two frequencies investigated. Work by Peters (1976), although on a single subject, showed a difference between sides for finger tapping executed at maximal rate in terms of timing variability of intertap intervals, with the DL performing more regularly than the NDL. Todor et al. (1982) showed that side differences in rate and variability of tapping exist not only for distal joints (i.e., finger), but also when the movements are performed at more proximal joints (i.e., wrist and shoulder). In addition, Schmidt et al. (2000) demonstrated that the asymmetry in intertap variability is significantly greater in right-handers than in left-handers when performing with the DL. They also confirmed the earlier observations of DL superiority in executing rhythmic movements with higher precision when the performance requires maximal speed (Peters, 1976; Todor et al., 1982; Schmidt et al., 2000). Consistent with this literature, and as shown by our findings, there seems to be a precision advantage of the DL, reflected by a smaller timing variability, when IWFEs are performed at faster rates (550-ms IOI). However, this precision-based DL superiority is lost when IWFEs are performed at slower rates (800-ms IOI). The opposite is true for the NDL, with a more precise performance, compared to the DL, at slower rates. These findings may suggest different preferred temporal environments, specific to the dominant and non-dominant motor effectors, when performing sequential motor actions. At present, we can only speculate on the reasons for this phenomenon.
One possibility is that the mode of temporal processing for motor control (Peters, 1976; Todor and Kyprie, 1980) differs between the two arms, depending on the exploitation of the different sensorimotor processes and neuromuscular resources that each arm has strengthened for the execution of habitual functional movements. It has been suggested that for sequential rhythmic actions of supra-second durations, a more cognitive control is employed, whereas for sub-second durations the circuitry used to ensure the consistency of rhythmic movements is assumed to be ingrained more tightly within the motor system (Lewis and Miall, 2003). This hypothesis rests on the fact that voluntary movements are typically of sub-second durations and can be reproduced with extreme temporal precision (Lewis and Miall, 2003). Recently, it was shown that cognitive control processes might also influence sub-second repetitive motor timing actions (Holm et al., 2017). Optimal control of goal-directed arm movements is proposed to reflect two strategies, feedforward and feedback control (Kawato, 1999; Shadmehr et al., 2010). Feedback and feedforward sensorimotor control of human movements, rather than working independently, complement each other to guarantee motor performance with high precision (Gritsenko et al., 2009; Ao et al., 2015). It has also been shown that control strategies during voluntary goal-directed movements are influenced by speed, shifting from feedback to feedforward control as speed increases (Kawato, 1999; Gerisch et al., 2013; Ao et al., 2015).
Furthermore, the leading theories attempting to describe the neurophysiological basis of interlimb performance differences are the so-called open vs. closed loop theory and the dynamic dominance theory. The former speculates that arm differences derive from the specialization of the dominant and non-dominant systems for different motor control mechanisms: the dominant system for feedforward processes and the non-dominant system for sensory-feedback-mediated error correction mechanisms (Haaland and Harrington, 1994; Hermsdörfer et al., 1999). The second hypothesizes that the dominant arm, by relying on predictive dynamic control, is specialized for optimizing the dynamic features of movement, whereas the non-dominant arm, by employing feedback- and impedance-based positional control mechanisms, is specialized in stabilizing tasks and corrective movements (Bagesteiro and Sainburg, 2002; Mutha et al., 2013). According to the open vs. closed loop theory, the differences between the dominant and non-dominant side that we found for movements with temporal durations of 550 and 800 ms could reflect the specialization of each arm for specific motor control mechanisms. In particular, we speculate that, for fast rhythmic movements, a better proficiency of the dominant arm in relying on feedforward processes could favor the reduction of the variability of temporal movements; vice versa, below a certain threshold of speed, there is a greater dependency on feedback processes and, consequently, the non-dominant arm, by being more feedback dependent, will produce a better performance.
On the other hand, when considering the dynamic dominance theory, the speed of movement is critical in influencing the shape of rhythmic actions (Huys et al., 2008; Repp, 2008). Rhythmic movements performed in a natural way (i.e., with no specific indication) at a slow pace were shown to have a discrete shape (i.e., characterized by singularly occurring events preceded and followed by periods of stabilizing posture in the absence of motion), whereas fast movements were demonstrated to possess a continuous configuration (Huys et al., 2008). Figure 5A shows two typical examples of kinematic parameters of sequences of movements performed with the DL by a participant in the 800-ms and the 550-ms IOI conditions. It is possible to observe that movements in the 800-ms IOI condition are characterized by a pause after each downstroke, while movements in the 550-ms IOI condition are performed in a rather continuous way. Therefore, in agreement with the dynamic dominance theory, it is also possible that the NDL, by engaging feedback- and impedance-based positional control mechanisms, could perform more proficiently than the dominant hand in a rhythmic task, like the 800-ms IOI condition, in which stabilizing postures and dynamic movements are both present. Conversely, a rhythmic task at the 550-ms IOI, in which the dynamic features of movement are preponderant, could be a condition particularly fitting for the DL due to its greater proficiency in employing predictive dynamic mechanisms.
Another possibility is that participants, perceiving slower IWFEs as less stable when performed with the non-dominant hand, try to compensate by allocating more attention to the execution of isochronous motor actions. However, this alternative hypothesis seems unlikely given evidence showing that increased cognitive load (i.e., working memory and executive load) increases the variability of rhythmic motor performance (Holm et al., 2013, 2017; Bravi et al., 2014a). A recent study by Holm et al. (2017) tested the influence of executive functions on repetitive motor timing by using a synchronization-continuation task. In this study, participants were asked to repeat a fixed three-finger sequence (low executive load) or a pseudorandom sequence (high executive load) executed at different tempi. It was shown that high load increased timing variability for the 524-ms and 733-ms IOIs, but not for the longer IOIs (1024-ms and 1431-ms). Therefore, the data available in the literature reinforce our hypothesis that the precision asymmetry between hands, here demonstrated as a preferred temporal environment, could reflect limb differences in the exploitation of different sensorimotor processes for the production of movement. Whatever the case may be, our results add a new piece of information to the motor timing literature, revealing that hand advantages/preferences in proficiently performing rhythmic movements are not side-locked but also depend on the speed of movement.
The use of KT in our experiments is designed to add sensory feedback through a wearable device able to influence proprioceptive information and modify performance precision. Our results, besides confirming previous data (Bravi et al., 2014b, 2016), show that KT improves the consistency of IWFEs.

FIGURE 5 | (A) The trace is cut since only IWFEs pertaining to the continuation phase of the recording are shown. A gray vertical line marks the onset of each IWFE. The duration of a single IWFE is the distance between two consecutive flexion-extension minima. It is possible to observe that movements in the 800-ms IOI condition (upper trace) are characterized by a pause after each downstroke (marked in red), while movements in the 550-ms IOI condition (lower trace) are performed in a rather continuous way. SDs of IWFE durations for the corresponding sets are also given. (B) In the lower panel, sets of IWFE durations performed by a participant with the DL without KT (color-coded in beige) and with KT (color-coded in pink). Note that the variability of IWFE durations is remarkably reduced when KT is applied. It is also illustrated that the reduced variability of IWFE durations when KT was applied is associated with a tendency of IWFE durations to decrease during the performance.
However, the frequency at which IWFEs are performed is crucial in determining the extent of the KT effect. We found that, while KT, on average, significantly reduced the timing variability of 550-ms (1.81 Hz) IWFEs, it was not able to improve the consistency of IWFEs with a duration of 800 ms (1.25 Hz). In addition, the effect of KT was hand-independent. Figure 5B illustrates sets of IWFE durations performed by a participant with the DL without and with KT; the remarkable reduction of the variability of IWFE durations when KT is applied can be noted. We ascribe the observed KT effect in the 550-ms IOI condition to the extra proprioceptive information provided by the KT application. In fact, KT was shown to significantly influence proprioception (Pelosin et al., 2013; Seo et al., 2016). Also, somatosensory feedback was shown to be critical in influencing the variability of movements in timing coordination tasks (LaRue et al., 1995; Spencer et al., 2003; Bravi et al., 2014b). Specifically, we speculate that KT, due to its elastic properties, applies a pulling force during the phase of wrist flexion that, in turn, provides an additional stimulation of cutaneous, and presumably muscle, mechanoreceptors by stretching and deforming the skin as well as targeting deeper muscle tissues (Bravi et al., 2014b, 2016; Pamuk and Yucesoy, 2015). Such an effect of KT would augment the coordination of the wrist joint during the rhythmic motor performance and, consequently, contribute to the reduction in timing variability of the IWFEs (Bravi et al., 2014b). The extra-proprioceptive hypothesis is plausible since, in our experimental paradigm, IWFEs were performed with no direct surface opposition and while minimizing visual information, thus accentuating the role of the sensory component that provides limb position and movement senses to produce rhythmic actions as accurately as possible (Guigon et al., 2008; Bravi et al., 2014a,b, 2016).
In Bravi et al. (2016), we investigated whether different directions and tensions of KT application differently influenced the precision of sensorimotor synchronization, showing a highly significant effect of KT in improving the precision of IWFEs with durations of 500 and 400 ms (2 and 2.5 Hz, respectively). Therefore, if the data obtained previously (Bravi et al., 2016) and in this study are compared, it may be possible to locate a time region of optimal adaptability of the motor output to the sensory information provided by KT. It seems that KT manages rhythmic behavior more efficiently within specific temporal windows, in which a control mechanism has been hypothesized to operate in an optimal, or preferential, state of activity for the production of rhythmic motor behaviors (McAuley et al., 2006). This preferential state is identified as the spontaneous motor tempo, that is, a preferred rate at which rhythmic actions are performed. Although each individual has his or her own spontaneous tempo, rhythmic motor actions in humans were shown to be performed, on average, at a preferred/spontaneous frequency of around 2 Hz. Locomotion studies conducted both in laboratory and natural settings showed a highly tuned resonant frequency of human locomotion at 2 Hz (Murray et al., 1964; MacDougall and Moore, 2005). A predilection for a 2-Hz frequency of movement has also been observed in subjects freely tapping out a rhythm (Collyer et al., 1994; Vanneste et al., 2001; Bisio et al., 2015). Collyer et al. (1994) reported a bimodal distribution of spontaneous motor tempi, the main mode of which was around 2.2 Hz (equivalent to a 450-ms duration), while McAuley et al. (2006) showed that the spontaneous motor tempo changes across the life span, and that adults aged between 18 and 38 (very similar to the age of the group in our study) preferred to perform rhythmic movements at 1.66 Hz (equivalent to a 600-ms interval duration).
Moreover, it has been speculated that the spontaneous motor tempo reflects the intrinsic rate of a spinal central pattern generator (MacDougall and Moore, 2005). Evidence suggests that in humans both the arms and legs are regulated by CPGs, and that sensory feedback contributes strongly to the modulation of the putative CPG output (Van de Crommert et al., 1998; Marder, 2001; Harischandra et al., 2011) and assists in mediating interlimb coordination (Zehr and Duysens, 2004). Kuo (2002), using a model of a single pendulum driven to oscillate in a manner analogous to limb motion, explored how feedforward and feedback can be combined to control rhythmic limb movements. He demonstrated that a cooperation of these mechanisms could improve performance in systems subject to both unexpected disturbances and sensor noise. In this model, a CPG acts as an internal model by making a sensory prediction of limb movement that, in turn, drives the activation of the feedback mechanism. During motion, the magnitude of the incongruity between the commanded and the occurring movement results in sensory error signals that are fed back to the oscillator, which entrains a feedforward component to the actual movement (Kuo, 2002). The adjustment of the expected state is used to produce the appropriate feedback command. However, sensory information provided by proprioceptors is not as perfectly accurate as in the pendulum model, and such uncertainty, coupled with motor noise, translates directly into performance variability (van Beers et al., 2002; Guigon et al., 2008). Therefore, consistent with this model, changes in the sensory signal provided by the application of KT during movement could reduce the performance variability of IWFEs by compensating for the discrepancy between the commanded and the occurring movement, which, in turn, would favor the generation of the appropriate feedback command for an augmented motor performance.
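The logic of such an error-correction loop can be caricatured with a toy discrete-time sketch (ours, and far simpler than Kuo's pendulum model): on each cycle, a feedforward command is corrected by feedback computed on the sensed, noisy interval, so cleaner sensory information yields lower timing variability.

```python
import numpy as np

def simulate_intervals(n=2000, target=550.0, gain=0.5,
                       motor_sd=15.0, sensory_sd=10.0, seed=0):
    """Toy feedforward+feedback timing loop (illustrative only).

    Each cycle the produced interval is the planned interval plus motor
    noise; the sensed interval adds proprioceptive noise; the next plan
    is corrected by feedback on the sensed error. Noisier sensing yields
    noisier corrections, hence higher interval variability."""
    rng = np.random.default_rng(seed)
    planned, produced = target, np.empty(n)
    for i in range(n):
        produced[i] = planned + rng.normal(0, motor_sd)   # motor noise
        sensed = produced[i] + rng.normal(0, sensory_sd)  # proprioceptive noise
        planned -= gain * (sensed - target)               # feedback correction
    return produced

# Sharper sensory information (as hypothesized for KT) -> lower timing SD:
noisy_sense = simulate_intervals(sensory_sd=20.0)
sharp_sense = simulate_intervals(sensory_sd=2.0)
assert np.std(sharp_sense) < np.std(noisy_sense)
```

The comparison at the end mirrors the paper's hypothesis qualitatively: reducing sensory uncertainty reduces the variability of the produced intervals, all else being equal.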
Additionally, we performed a detrended windowed lag-one autocorrelation analysis and found positive values of wρ(1) in both IOI conditions. In line with previous studies (Huys et al., 2008; Repp and Steinman, 2010; Bravi et al., 2014b, 2015), the highest wρ(1) values were yielded for fast IWFEs. When the NKT and KT cases were compared, we found that KT influences the short-term dependencies of IWFE durations. Interestingly, KT biased the wρ(1) values of IWFEs toward higher positive values in the 550-ms IOI condition, but not in the 800-ms IOI condition. Also, our participants performed IWFEs faster than the expected interval durations (i.e., 550-ms IOI; see Table 1). Ivry and Keele (1989) reported that their trials showed a positive lag-one covariance after detrending and that the mean intertap intervals were less than the 550-ms target. Together with their findings, our present data indicate that some acceleration and, thus, some residual drifting tempo may persist even after linear detrending. Our wρ(1) analysis (see Table 2, 550-ms IOI) substantiates their remark of a "drift effect" when KT is applied. To summarize, the application of KT, while enhancing the precision of performance, seems, paradoxically, to be associated with a loss of cognitive control (Holm et al., 2017) during the production of repetitive motor actions.
In Bravi et al. (2014b), it was demonstrated that the improvement in the timing precision of IWFEs provided by KT was associated with a modulation of the timing processes. By providing extra proprioceptive information and stabilizing the wrist joint, the production of IWFEs could become less dependent on central drive (Bravi et al., 2014b). It is believed that the potential for interference between areas of the cerebral cortex increases with the degree to which these areas are activated (Kinsbourne and Hicks, 1978; Carroll et al., 2001). Therefore, an augmented activity of the lower circuitry appointed to the optimization of sensorimotor behavior could allow, at least in part, the release of control from time-bearing higher centers, including those for cognition (Fischer et al., 2016), which would allow a net augmentation of motor control efficiency and, ultimately, an improvement in timing precision. By extending and partly revisiting the hypothesis proposed in Bravi et al. (2014b), the increased consistency of the rhythmic motor behavior following the application of KT could be ascribed to combined adaptation effects occurring at both lower spinal and higher central sites handling the production of IWFEs.
Finally, our results can also be explained from another perspective, which remains to be confirmed in future experiments. Recent studies have shown that tactile-proprioceptive noise is capable of improving the stability of sensorimotor performance when appropriate amounts of noise are used (e.g., Mendez-Balbuena et al., 2012; Trenado et al., 2014). The augmented performance precision is speculated to be due to an increased stimulation of cutaneous mechanoreceptors, causing, via internal stochastic resonance, an enhancement of neuronal firing synchronization at the spinal and cortical level (Manjarrez et al., 2002) and at the corticospinal level (Mendez-Balbuena et al., 2012). This neuronal firing synchronization is reflected in spinal-cortical and corticospinal coherence. Higher corticospinal coherence has been shown to be associated with better motor performance (Baker, 2007; Kristeva et al., 2007; Pogosyan et al., 2009).
Therefore, similarly to tactile noise, the enhanced stimulation of cutaneous mechanoreceptors provided by KT could reduce IWFE timing variability by increasing the coherence between spinal and cortical neuronal activity within the somatosensory system. A similar increase in spinal-cortical coherence was found in cats when a particular intensity of tactile noise was applied on the skin (Manjarrez et al., 2002). As shown by Fisher et al. (2002), sensory information from cutaneous receptors enhances oscillatory synchrony in the motor system. Therefore, KT could increase sensorimotor integration at the cortical level, leading to a greater cortical motor synchrony and a stronger motor cortex drive to the muscles (Mendez-Balbuena et al., 2012). It would be interesting to examine in future studies the effect of KT on corticomuscular coherence during a synchronization-continuation task, and whether a combination of KT and tactile noise could provide further stimulation to cutaneous receptors in order to improve the efficiency of motor control for a better performance.
By studying the dominant and non-dominant upper limbs, we evaluated the differential effect of KT in influencing a rhythmic motor behavior and in counteracting the timing precision difference between limbs. A significant effect of KT application was observed only at the 550-ms IOI; consequently, we focus on this condition. In the 550-ms IOI condition, participants not wearing KT show a reduced ability to perform IWFEs consistently with the NDL. The application of KT not only cancels this precision disadvantage but makes the non-dominant hand even more precise than the dominant one without KT. KT also augmented the timing skills of the dominant hand, but only enough to neutralize the gap created by KT on the non-dominant hand.
Research on the contribution of sensory input to motor performance asymmetries between arms denotes, as mentioned above, a non-dominant left arm/right hemisphere "sensory dominance" for the utilization of proprioceptive feedback in right-handed individuals (Colley, 1984; Riolo-Quinn, 1991; Goble et al., 2006; Goble and Brown, 2007, 2010). Conversely, the dominant system is suggested to function in a feedforward fashion (Goble and Brown, 2007), relying more on visual feedback (Honda, 1982). This asymmetry between the upper limbs in exploiting proprioceptive feedback is speculated to stem from functional differences in the roles of the dominant and non-dominant hands during bimanual tasks (Han et al., 2013). For instance, early results by Roy and MacKenzie (1978), who investigated arm differences in the ability to match thumb and multi-joint arm positions after depriving the subjects of visual information, revealed a non-dominant arm advantage for matching end positions of the thumb, with no arm differences for multi-joint arm matching (Roy and MacKenzie, 1978). Later, Colley (1984) and Riolo-Quinn (1991) confirmed the presence of a non-dominant thumb advantage in accomplishing proprioceptive-guided matches, and Kurian et al. (1989) demonstrated a non-dominant arm supremacy for accurately reproducing elbow angles. More recently, Goble et al. (2006), using a memory-based proprioceptive matching task in which participants were required to memorize a limb position and match it with the ipsilateral and the contralateral arm, showed a specialization of the right hemisphere/left arm for proprioceptive feedback processing that is either position- or dynamic position-related (Goble and Brown, 2007, 2010).
Although the lower level of timing precision of the NDL can impact the effect of KT, the superior sensitivity of the NDL to KT, able even to overturn the original between-hand asymmetries, could be explained by a specific proficiency of the NDL in using the extra sensory information provided by KT to correct ongoing movement.
CONCLUSION
The results from this study shed light on the working mechanism of KT in rhythmic movement around the spontaneous tempo. The effect of KT appears more pronounced for certain temporal intervals, which are reminiscent of those encountered in human walking (MacDougall and Moore, 2005; Styns et al., 2007). As such, the implementation of KT as an added measure in rehabilitation protocols where rhythmic movement is impaired may prove efficient. Although further investigations of the effect of KT are needed, for example, analysis of goal-directed movements (Kuling et al., 2016), an additional application of the KT method could be coupled with motor rehabilitation protocols for impairments of the non-dominant motor system to enhance the use of movement-related proprioceptive information (Goble and Brown, 2007). Finally, at the other end of the motor system, in individuals with peripheral neuropathy, a condition known to reduce asymmetries in inter-limb transfer (Pan and Van Gemmert, 2016), KT could be thought of as an effective means of enhancing motor performance. These latter speculations remain to be confirmed or rejected by future experimentation.
AUTHOR CONTRIBUTIONS
RB: substantial contributions to the conception or design of the work, the acquisition, analysis and interpretation of data for the work; final approval of the version to be published. EJC: substantial contributions to the conception or design of the work and interpretation of data for the work; drafting the work or revising it critically for important intellectual content. AM: substantial contributions to the analysis; drafting the work or revising it critically for important intellectual content. AG: substantial contributions to the inferential statistics analysis; drafting the work or revising it critically for important intellectual content. DM: substantial contributions to the drafting the work or revising it critically for important intellectual content; agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
|
v3-fos-license
|
2018-12-12T11:09:33.752Z
|
2015-06-05T00:00:00.000
|
153259643
|
{
"extfieldsofstudy": [
"Economics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.ccsenet.org/journal/index.php/ass/article/download/46280/26827",
"pdf_hash": "2e9d3e8e2b30b18166ec01781a1dde258550657a",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2139",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Economics",
"Business"
],
"sha1": "2e9d3e8e2b30b18166ec01781a1dde258550657a",
"year": 2015
}
|
pes2o/s2orc
|
Can Indonesia Cocoa Farmers Get Benefit on Global Value Chain Inclusion? A Literature Review
The purpose of this paper is to analyze the potential benefit to Indonesian cocoa farmers of inclusion in the global value chain. The gap between the international price and the farmer's price is due to the existence of power asymmetry, especially on the supply side. This paper uses a global value chain perspective and considers internet innovation in the food supply chain network as a means of reducing power asymmetry, with benchmarks from developed and emerging markets. We recommend several issues that need to be addressed and resolved by the government before farmers can benefit from global value chain inclusion through internet adoption. In parallel, telecommunication infrastructure deployment and increasing internet penetration are necessities, and education on the benefits of ICT inclusion in daily operations is the next step. This conclusion supports the role of ICT in agriculture in providing access to markets, financial inclusion, and access to technology.
Introduction
In Indonesia, cocoa is the third seed export commodity after palm oil and rubber. According to the Ministry of Economic Affairs of the Republic of Indonesia (2011), Indonesia is among the top two cocoa producers, contributing 18% of the global market, while according to ICCO (2013) Indonesia is among the top three world producers, after Cote d'Ivoire and Ghana. At the national level, cocoa is also the third-largest export commodity after palm oil and rubber. Cocoa export trading in 2009 reached USD 1.38 billion (derived from cocoa beans and processed products). Demand for cocoa beans from America and Europe reached 2.5 million tons per year (MoEARoI, 2011).
Most cocoa producers are developing countries, while the final-product manufacturers are based in developed countries such as the USA, Europe, and Japan, as shown in Table 1 and Table 2.

Table 1. World producer countries of cocoa beans (ICCO, 2014)

Based on interviews by KPPU (2009) with a government agency, the Association of Indonesia Cocoa, the Association of Indonesia Cocoa Farmers, and academics, the top seven problems in Indonesian cocoa are: (1) processing industries have difficulties finding fermented cocoa; (2) low productivity and quality of Indonesian cocoa plants; (3) cocoa pests and diseases; (4) lack of financial support for farmers; (5) too many intermediaries in the cocoa trading chain; (6) importers' demand for non-fermented cocoa; and (7) increasing demand for cocoa exports.

Table 2. Top ten chocolate producers (ICCO, 2014)

Research by Rifin (2013), using Revealed Comparative Advantage (RCA) and Almost Ideal Demand System (AIDS) tools, shows that Indonesia has a comparative advantage in producing cocoa beans, although it is still below Ivory Coast, Ghana, and Nigeria. The research also concludes that Indonesian and Ghanaian cocoa beans are complementary in the international market, and recommends cooperation between the two countries to benefit most from the increase in world demand.
Global Value Chain
A value chain is the full range of activities that firms and workers perform to bring a product or service from its conception to its end use and beyond. The activities included in a value chain are design, production, marketing, distribution, and service/support to the end user. All the activities in a value chain can be done by a single firm or divided among a number of firms. They can be contained within a single geographical location or spread over wider geographic regions/countries. When the chain of interrelated activities to bring a product or service from concept to complete production and delivery to final consumers is divided among multiple firms in different geographic locations, it is known as a Global Value Chain (CTCS, 2013).
Through globalization, the majority of developing countries, including the poorest, are increasingly participating in GVCs. GVC links in developing countries can play an important role in economic growth, and the domestic value added created from GVC trade can be very significant relative to the size of local economies. The developing-country share in global value-added trade increased from 20% in 1990 to 30% in 2000 and to over 40% in the 2010s (UNCTAD, 2013).
Global Value Chain analysis gives us an understanding of the nature of the interaction between the demand side and the supply side in a specific sector, and provides an analytical tool for developing interventions to include small farmers in the value chain (Zylberberg, 2013). We will use the GVC perspective to identify opportunities for cocoa farmers to move up their value chain by producing higher-value products and processes, as well as an effective tool for farmer empowerment.
Indonesia Cocoa Value Chain
The cocoa industry plays an important role in Indonesia's economic growth: it creates revenue sources, a trading commodity, agro-industry development, and employment opportunities. The government believes it should revitalize and develop this sector, including investment in the cocoa base, upstream-downstream supply, and infrastructure to support this role (MoEARoI, 2011).
The Final Report on Cocoa Cluster Performance Appraisal by SEADI (2012), a USAID project in West Sulawesi and South Sulawesi, provides a simple trading chain of cocoa in Sulawesi: small farmers sell their product to traders, bigger farmers sell directly to district wholesalers, traders bring the products to larger merchants, and finally wholesalers sell the product to exporters. The Indonesian cocoa supply chain consists of farmers, cocoa collectors, local traders, exporters, multinationals, and both local processors and manufacturers (Syahrudin, 2012). There are three types of farmers based on cocoa plantation ownership: farmers as owners, sharecroppers, and farm managers. Collectors take the role of buying and collecting cocoa beans; they usually have the financial capability to pay upfront, and they take advantage of the limited capability of the farmers in terms of logistics. Traders act as the marketing point for exporters, local processors, and MNC processors. Panlibuton (2004), in the report on the Indonesia Cocoa Value Chain Assessment, identifies the following stakeholders as part of the cocoa value chain: (1) Research/Extension, conducted by government institutions, research agencies, and private research; (2) Input Supply, providing basic and supporting materials to the farmers (i.e. seed and planting material/tools, fertilizers, and pesticides); (3) Growing, by smallholder farmers and large estates; (4) Collecting/Bulking, by local collectors and buying stations; (5) Trading, buying from collectors or directly from the farmers in some cases and sending the beans to the main cities.
(6) Local Processing of dried beans into a variety of processed cocoa products (i.e. cocoa paste or liquor, cake, powder, and butter).
(7) Local Manufacturing of final finished chocolate products.
(8) Exporting of beans bought from collectors and traders and sold to regional buyers.
(9) Importing/Trading by international agents, primarily based in the commodity market, as the sources for large multinational processors and manufacturers.
(10) Regional/International Processing by affiliates/subsidiaries of some multinational processors, which directly supply beans and products to them.
(11) International Manufacturing of the final chocolate product, mostly in well-developed countries.
All of the previous studies on the Indonesian cocoa value chain imply that Indonesian cocoa farmers are part of the global cocoa value chain, in which most cocoa beans are processed by more developed countries before the final chocolate products are sent to global consumers. The chart in Figure 1 shows a gap of 15%-25% between the farmer-level and international-market prices of cocoa beans.
Figure 1. Price comparison between international and farmer prices of dried cocoa beans (Rifin, 2013)

Governance in Global Value Chain

Gereffi et al. (1994) divided value chain governance into producer-driven chains, in which the barriers to entry are capital and proprietary knowledge due to the presence of high technology, and buyer-driven chains, in which the key barriers to entry are marketing costs, product design, and market information, found in labor-intensive sectors.
Sturgeon et al. (2001) used the degree of standardization of product and process to divide supply relationships into three types: (1) commodity suppliers, which depend on generalized assets, often produce standard products, do not connect directly with customers, compete mainly on price, and can be switched easily; (2) captive suppliers, which depend on dedicated assets, have high connectivity with customers, and tend to be found within symbiotic supplier networks; and (3) turn-key suppliers, which take a relatively independent stance toward their customers, have a high level of competence, and are able to serve many types of customers and/or businesses. Gereffi et al. (2005) in turn define five types of value chain governance: (1) Market, involving transactions that are relatively simple, typical of spot markets, with repeated transactions and low switching costs for both parties; (2) Modular, where suppliers make products to a customer's specification, use generic machinery that limits transaction-specific investment, and make capital outlays for components and materials on behalf of customers.
(3) Relational, existing when buyers and sellers rely on complex information that creates mutual dependence and a high level of asset specificity; such linkages require trust and generate mutual reliance regulated through reputation, social and spatial proximity, and family and ethnic ties.
(4) Captive, in which small suppliers are transactionally dependent on much larger buyers and face significant switching costs; such networks are frequently characterized by a high degree of monitoring and control by the lead firm. (5) Hierarchy, characterized by vertical integration and dominated by managerial control, such as from headquarters to subsidiaries and affiliates.
He also identifies some dynamics of global value chain governance (Gereffi, 2011), such as: (1) shifting from market to relational governance through increasing complexity of transactions and reduced supplier competence in relation to new demands; (2) shifting from relational to market governance by reducing the complexity of transactions and increasing the ease of codification; (3) better codification of transactions, shifting from relational to modular; (4) the reverse, through de-codification of transactions; (5) increasing supplier competence, shifting from captive to modular; and (6) the reverse, through decreasing supplier competence. With reference to the Indonesian cocoa value chain, all five archetypes of global value chain governance exist, as do opportunities to upgrade linkages and benefits according to these dynamics. Kaplinsky (2000) uses the GVC framework to show that inequality has expanded markedly in spite of the increasing integration of developing countries into the world economy, due to these issues of governance and power asymmetry; and Humphrey et al. (2010) state that smallholders are generally at a disadvantage when participating in GVCs for a multitude of reasons: they lack information about market opportunities and technology, they generally work through intermediaries, they see marginal benefits from inclusion in value chains, and they are not part of the high-value activities concentrated in developed countries. He concludes that to capture the potential gains for farmers, the governance of the chain needs to change, given the very fragmented production of small farmers and the varied quality of intermediaries in agricultural markets.
Potential Upgrading in the Dynamics of Global Value Chain Governance
As smallholders tend to participate in buyer-driven value chains, the power asymmetries present in these trading relationships hamper possibilities for upgrading into higher value-added activities (Zylberberg, 2013), and shifting from market governance to more relational governance reduced the power asymmetries substantially but pushed intermediaries on the supply side to produce more from their own farms rather than purchase from small farmers (Gereffi et al., 2005). An innovative smallholder-based business model is needed as a viable path out of poverty in countries with low labor costs, suitable climatic conditions, and basic infrastructural capacities (Zylberberg, 2013).
Potential Contribution from ICT Inclusion
Could ICT inclusion contribute to addressing smallholders' issues in global value chain participation? FAO (2013) defines the contribution of ICT in agribusiness in areas such as access to better technology for production system management, access to markets, and access to financial institutions. Porter (2001) implies that the internet will reduce competitive advantage in competition by making information widely available, reducing barriers to entry such as physical stores, sales forces, and distribution channels, and creating a virtual market for more buyers and sellers.
A combination of the global value chain governance framework with internet innovation in the food supply chain network provides an opportunity for the supply side (farmers) to benefit from global value chain inclusion and internet adoption by lowering the degree of power asymmetry, as can be seen in Figure 3. Implicitly, we need to "commoditize" a "generic" specification of the product in a "virtual" market.
Consequently, by providing the product at a basic level, farmers will be located at the bottom of the value chain. There is some evidence of initiatives to move farmers up the Indonesian cocoa value chain: the Ministry of Agriculture RoI released a specific regulation to push sales of fermented cocoa at the farmer level starting in 2016; the AMARTA (Agribusiness Market and Support Activity) project in Indonesia provided training and supporting activities for cocoa farmers, including development of a fermented-cocoa community; and the CIP (Cocoa Innovation Project) plans to push ICT usage among cocoa farmers and to provide access to financial services through a mobile application.
Figure 3. Combination of GVC Governance and Internet Innovation (Gereffi, 2005 & van der Vorst, 2005)

Even though there is a possibility of utilizing the internet for the benefit of farmers, there are some issues in ICT adoption by smallholders. Stuart (2004) states that the success factor in information technology adoption by farmers in New Zealand was government projects related to the development of broadband infrastructure, such as e-government and e-procurement. Aleke et al. (2010), based on the results of research on the adoption of ICT from the perspective of small-scale agribusiness in Nigeria, state that to ensure the successful diffusion of an innovation, a balance must be maintained between the design of the information and communication technology and social factors such as language and lifestyle. Sangha et al. (2010), examining the role of ICT in the agriculture sector in India, state that the barriers to the adoption of information and communication technologies by farmers are lack of training, inadequate infrastructure, and equipment costs. Taragola et al. (2004) compare the perceptions of ISHS symposium participants from developed and developing countries: in developing countries there is no perception of the economic benefit, no understanding of the value of the benefits, and no time for information and communication technology, while developed countries have already moved past questions of benefit and time and place more emphasis on the availability of infrastructure and technology costs. Burke (2010), in research on the adoption of information and communication technologies by small-scale agribusiness in Hawaii, states that growth in company size and increasing operational complexity and structure increase the role of information and planning, which needs to be supported by more advanced information and communication technology. Adamides et al. (2013) state that almost 98% of farmers in Cyprus use the mobile phone as an agricultural resource, and advise service providers to take advantage of this as a dissemination tool for agricultural information. Kumar (2012), based on research on the use of information and communication technology for the development of remote areas and agriculture in India, concludes that ICT plays a very important role in agricultural education, research, and service development; farmers become better informed and are continuously aided by the availability of digital information systems.
Ofusu-Asare (2011), researching the utilization of mobile phones by cocoa farmers in Ghana, concludes that the device is used to meet social and economic needs, including arranging inputs and sales, information sharing, social media, and cost reduction, especially in transportation.
Conclusion
Participation in the cocoa global value chain does not automatically improve cocoa smallholders' quality of life, but there is room for improvement by riding the dynamics of global value chain governance. Information and communication technology could help farmers handle a greater complexity of transactions, and increase their ability to codify transactions, by giving them access to the (virtual) market and to the latest technology and information about market needs. Stuart (2004) states that widely available broadband infrastructure is a necessity for creating an ICT ecosystem for farmer communities, and Sangha (2010) adds the importance of device penetration in the market. Aleke (2010) adds that the right applications should be in place to complete the three pillars of the ICT ecosystem. Broadband infrastructure deployment in farming (rural) areas could face a profitability problem; the decreasing trend of internet device prices will automatically push device penetration; and there are many internet applications on the market that provide relevant information on technology (from cultivation to post-harvest processing). Last but not least come adequate training in their use (Sangha, 2010) and the induction of local context into the applications (Aleke, 2010). Given the breadth of the cocoa value chain, there is an opportunity for small farmers to shift their selling product to a more advanced product along the value chain by adopting the proper technology. Government and business communities could help them by providing access to technology and financial services, while academics could help in the technology adoption process, and the formation of farmer associations could strengthen their position in many aspects.
Gereffi et al. (2005) use three key determinants of value chain patterns: the complexity of transactions, the ability to codify information, and the capability of suppliers. Based on these variables, Gereffi et al. (2005) define the five types of value chain governance structures described above: market, modular, relational, captive, and hierarchy.
|
v3-fos-license
|
2022-12-15T15:39:51.062Z
|
2016-03-22T00:00:00.000
|
254645574
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10895-016-1789-0.pdf",
"pdf_hash": "e150466748c193b275c46703115094fd65c4cf86",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2140",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "e150466748c193b275c46703115094fd65c4cf86",
"year": 2016
}
|
pes2o/s2orc
|
Simultaneous Surface-Near and Solution Fluorescence Correlation Spectroscopy
We report the first simultaneous measurement of surface-confined and solution fluorescence correlation spectroscopy (FCS). We use an optical configuration for tightly focused excitation and separate detection of light emitted below (undercritical angle fluorescence, UAF) and above (supercritical angle fluorescence, SAF) the critical angle of total internal reflection of the coverslip/sample interface. This creates two laterally coincident detection volumes which differ in their axial extent. While detection of far-field UAF emission produces a standard confocal volume, near-field-mediated SAF produces a highly surface-confined detection volume at the coverslip/sample interface which extends only ~200 nm into the sample. A characterization of the two detection volumes by FCS of free diffusion is presented and compared with analytical models and simulations. The presented FCS technique allows bulk solution concentrations and surface-near concentrations to be determined at the same time.
Introduction
For the study of processes at surfaces and interfaces, standard confocal FCS has the inherent problem that the ellipsoidal observation volume suffers from low axial confinement. As a result, surface processes remain concealed by the background produced by the bulk fluorescence.
Optical near fields have been successfully used to confine observation volumes to interfaces. FCS has, for instance, been performed using evanescent waves produced at optical nanostructures called zero-mode waveguides [1][2][3], or more commonly using TIRF [4][5][6][7]. TIR-FCS uses objective-type TIRF illumination to restrict the excitation to a thin section less than 200 nm above the interface, in combination with standard confocal detection to ensure the lateral confinement of the detection volume. TIR-FCS has proven very useful for the study of processes close to a surface/solution interface. In theory, it can give access to a number of properties, including local fluorophore concentrations and local fluorophore translational mobility [8], or kinetic rate constants for reversible association of fluorophores with the interface [9]. The determination of these quantities by TIR-FCS, however, relies on a priori knowledge of the fluorescent solution concentration. Many biological applications, though, such as studies of the interaction of proteins with membranes or of membrane proteins, rely on the use of fluorescent fusion proteins whose cellular expression levels are not precisely known [10]. While the advantages of SAF-CS have already been described [11], in this report we provide an extension of the technique which allows FCS to be performed in close proximity to the sample/solution interface as well as deeper in solution simultaneously.
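The sub-200 nm axial confinement quoted above for TIRF-type illumination can be related to the standard evanescent-field penetration depth formula. As a rough illustration only (the wavelength, incidence angle, and refractive indices below are assumed values, not taken from this report):

```python
import math

def penetration_depth(wavelength_nm, theta_deg, n_glass=1.52, n_water=1.33):
    """1/e intensity penetration depth of the evanescent field for total
    internal reflection at a glass/water interface (textbook formula:
    d = lambda / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2)))."""
    s = n_glass * math.sin(math.radians(theta_deg))
    if s <= n_water:
        raise ValueError("angle is below the critical angle; no evanescent field")
    return wavelength_nm / (4 * math.pi * math.sqrt(s**2 - n_water**2))

# Assumed example: 640 nm excitation at 70 degrees incidence,
# giving a depth of roughly 100 nm, i.e. well under 200 nm.
d = penetration_depth(640, 70)
```

For angles only slightly above the critical angle the depth grows rapidly, which is why TIRF illumination angles are typically chosen well beyond it.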
Results
We make use of a previously reported custom setup and microscope objective [12] (Fig. 1) for tightly focused, undercritical-angle excitation and parallel, well-separated collection of SAF and UAF. SAF collection yields a highly surface-confined detection volume, while UAF collection yields a conventional confocal volume which extends deeper into the sample. The simultaneous measurement of SAF and UAF has been used for determining axial emitter positions with nanometer accuracy [13] as well as to reduce artifacts in membrane FCS related to a non-planar geometry of the membrane [14].
Quantitative results in FCS rely on the size and shape of the detection volume. The most common way of calibrating the detection volume is to perform FCS on a fluorescent species with known diffusion coefficient and concentration. While the temporal decay of the autocorrelation function (ACF) depends on the shape of the observation volume, the amplitude of the ACF gives direct access to the size of the detection volume through the relationship Veff = 1/(G0 × C). Here, Veff is the so-called effective volume, G0 the amplitude of the ACF, and C the concentration of the sample. In turn, it is possible to determine concentrations of fluorescent species with a calibrated effective volume. We carried out diffusion measurements on the red fluorescent dye Atto655 (in its carboxylic acid form, -COOH), which has negligible triplet-state contributions and a precisely determined diffusion coefficient [15]. A difficulty when trying to probe the detection volume at the coverslip/solution interface by free diffusion arises from non-specific interaction of the fluorophore with the coverslip glass. This distorts the ACF, shifting it to longer decay times while decreasing its amplitude. Accordingly, great care needs to be taken in the preparation of the coverslip. Figure 2 (bottom graph) shows the parallel detection of SAF and UAF of a 10 nM solution of Atto655 with a plasma-treated coverslip and at high ionic strength (200 mM NaCl) to shield the electrostatic repulsion between the negatively charged dye (net charge of -1) and the glass [16]. In comparison, Fig. 2 (top graph) shows the intensity tracks of SAF and UAF using a non-plasma-treated coverslip with pronounced non-specific adsorption. A 63° cut-off was used for SAF (critical angle for water/glass: 61.9°). Figure 3a shows the parallel FCS measurement with SAF and UAF of freely diffusing Atto655.
The amplitude (G0) of the ACF for SAF was over thirty times larger than for UAF, given the substantially smaller SAF detection volume (Fig. 2, inset). The UAF ACF was fitted to the standard three-dimensional Gaussian model (Eq. 1 from Ref. [11]), while the SAF ACF was evaluated according to Eq. 5 from Ref. [11]. The average of six separate FCS measurements, each at a different lateral position on the coverslip and with newly adjusted focus, gave an effective volume Veff = 144.0 ± 1.3 aL for SAF and Veff = 5.49 ± 0.07 fL for UAF. Notably, the relative error for both the SAF and UAF effective volumes is around 1%. Theoretical values for Veff were calculated directly from the spatial profile of the observation volume according to Eq. 23 in Ref. [17] and gave Veff = 136.7 aL for SAF and Veff = 6.50 fL for UAF, in good agreement with the experimentally determined values.
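The calibration relation Veff = 1/(G0 × C) can be checked numerically. In the sketch below, the ACF amplitudes G0 are hypothetical values, constructed only so that the 10 nM dye concentration used here reproduces the reported effective volumes; they are not measured numbers from the paper:

```python
N_A = 6.022e23  # Avogadro's number, to convert molar concentration to molecules/L

def effective_volume_L(G0, conc_molar):
    """Veff = 1/(G0 * C), with C converted to molecules per liter -> Veff in liters."""
    return 1.0 / (G0 * conc_molar * N_A)

def concentration_molar(G0, Veff_L):
    """Inverted relation: recover the concentration from a calibrated volume."""
    return 1.0 / (G0 * Veff_L * N_A)

C = 10e-9  # 10 nM Atto655, as in the measurement
# Hypothetical ACF amplitudes chosen to reproduce the reported volumes
# (1 aL = 1e-18 L, 1 fL = 1e-15 L):
G0_saf = 1.0 / (144.0e-18 * C * N_A)  # -> Veff = 144.0 aL
G0_uaf = 1.0 / (5.49e-15 * C * N_A)   # -> Veff = 5.49 fL

veff_saf_aL = effective_volume_L(G0_saf, C) * 1e18  # 144.0 by construction
amplitude_ratio = G0_saf / G0_uaf  # ~38, consistent with ">30x larger" for SAF
```

Once Veff is calibrated this way, an unknown concentration follows from a measured G0 via `concentration_molar`.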
The comparatively large effective volume for UAF is because we used the larger photosensitive area of the detector, 180 μm in diameter, as a pinhole (corresponding to 4.5 Airy units). This ensured that the excited area at the coverslip/sample interface coincided with both detection volumes, so that SAF and UAF interrogated the same area.
With SAF and UAF measured in parallel, we additionally evaluated the cross-correlation functions SAF⋆UAF and UAF⋆SAF (Fig. 3b). We compared the experimental cross-correlation functions with simulations and found good agreement. Although a model for the cross-correlation of SAF and UAF is currently lacking, it is conceivable that it contains information on directional transport along the z-axis or on irreversible binding processes.
It is possible to further confine the SAF detection volume axially by increasing the cut-off angle of fluorescence collection. To show this, we performed FCS measurements of freely diffusing Atto655 with higher SAF cut-off angles. Experimental SAF ACFs for cut-off angles of 66° and 70° are shown in Fig. 4. The experimentally determined effective volumes for SAF were Veff = 114.5 ± 1.0 aL (theory: 112.2 aL) for the 66° and Veff = 127.7 ± 4.2 aL (theory: 98.1 aL) for the 70° SAF aperture. While the decays of the SAF ACFs are in good agreement with the analytical model, the experimental value for the 70° aperture is significantly larger than the theoretical value, and even larger than that for the 66° aperture. However, fluorescence collection this far above the fluorescence maximum at the critical angle comes at a larger loss of fluorescence signal and statistical accuracy and is therefore less practicable. For freely diffusing Atto655, a count rate per molecule (cpm) of 54.4 kHz for SAF and 28.5 kHz for UAF was calculated for a measurement using 67 μW excitation intensity. This corresponds to a molecular brightness (mB) of 8.2 × 10^5 W^-1 and 4.3 × 10^5 W^-1 for SAF and UAF, respectively.
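As a sanity check (my arithmetic, not from the text), the molecular brightness is simply the count rate per molecule divided by the excitation power; with the cpm in kHz and the power in watts, this approximately reproduces the quoted values, which suggests the reported W^-1 figures are implicitly kHz per watt:

```python
def molecular_brightness(cpm_kHz, power_W):
    """Molecular brightness as count rate per molecule per unit excitation
    power, in kHz per watt (assumed units for the quoted figures)."""
    return cpm_kHz / power_W

P = 67e-6  # 67 uW excitation intensity
mb_saf = molecular_brightness(54.4, P)  # ~8.1e5, close to the quoted 8.2e5
mb_uaf = molecular_brightness(28.5, P)  # ~4.3e5, matching the quoted value
```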
In summary, the first simultaneous measurement of surface-near and solution FCS was described and a detailed quantification of the custom optics by FCS was provided. It is noteworthy that the method is not restricted to our specialized optics. It could in principle be performed with conventional high NA objectives as the separate detection of SAF and UAF has already been demonstrated [18]. Our approach can be used for measuring weak or transient interactions at surfaces or membranes with unknown solution concentrations by FCS.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
|
v3-fos-license
|
2018-04-03T00:30:50.053Z
|
2016-12-14T00:00:00.000
|
12368121
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0168129&type=printable",
"pdf_hash": "dfae2923f13341f1895e30db060dfe021770b452",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2141",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "da825e994e06e8ba6d274a29ac3f48f5952dc8be",
"year": 2016
}
|
pes2o/s2orc
|
Experimental Infection of Mink Enforces the Role of Arcanobacterium phocae as Causative Agent of Fur Animal Epidemic Necrotic Pyoderma (FENP)
Fur Animal Epidemic Necrotic Pyoderma (FENP) is a severe, often lethal infectious disease affecting all three fur animal species: mink (Neovison vison), foxes (Vulpes lagopus) and Finnraccoons (Nyctereutes procyonoides). Previous studies showed an association between Arcanobacterium phocae and FENP. An experimental infection was conducted to confirm the ability of A. phocae to infect mink, either alone or concurrently with a novel Streptococcus sp. found together with A. phocae in many cases of FENP. Different inoculation methods were tested to study possible routes of transmission. Typical signs and gross and histopathological findings of FENP were detected when naïve mink were infected with tissue extract from mink with FENP using a subcutaneous/intradermal infection route. Edema, hemorrhage, necrosis and pus formation were detected at the infection site. A pure culture preparation of A. phocae, alone or concurrently with the novel Streptococcus sp., caused severe acute signs of lethargy, apathy and anorexia, and even mortality. The histopathological findings were similar to those found in naturally occurring cases of FENP. In contrast, the perorally infected mink presented no clinical signs nor any gross or histopathological lesions. This study showed that A. phocae is able to cause FENP. The study also indicated that predisposing factors such as the environment, the general condition of the animals, temperature and skin trauma contribute to the development of the disease.
Introduction
Fur Animal Epidemic Necrotic Pyoderma (FENP) is a newly discovered, emerging disease which affects animals in the fur industry. At present, the etiology remains unconfirmed, but a strong correlation was found between Arcanobacterium phocae (A. phocae) and FENP [1]. The disease is known to affect the major species in this industry, including mink (Neovison vison), foxes (Vulpes lagopus) and Finnraccoons (a raccoon dog bred for fur, Nyctereutes procyonoides). In all species, FENP infection causes necrotic pyoderma; however, the manifestations differ slightly between species. In mink, lesions are detected in the paws and head; in foxes, a conjunctivitis spreads to the eyelids and facial skin; and in raccoon dogs, abscesses develop in the paws [1]. The disease develops rapidly, has no effective treatment and has rapidly evolved into a severe animal welfare problem as well as a cause of financial losses to the farmers and the fur industry.
Nordgren et al. (2014) detected the bacterium A. phocae in tissues of animals with FENP. In some cases it was found together with a novel Streptococcus sp. [1]. A. phocae is a gram-positive, small, pleomorphic, coccobacillary rod. On blood agar it grows, within 24 hours, as small, pinpoint-like colonies with a strong hemolytic zone around each colony. Cultivation in a CO2-enriched environment does not enhance growth. It is catalase-positive, oxidase-negative, and elicits a positive CAMP reaction to Rhodococcus equi and Streptococcus agalactiae, and an inverse CAMP reaction to Staphylococcus aureus [2]. In addition to fur animals, A. phocae has been isolated from marine mammals, including harbor seals, common seals, dolphins, and sea lions. It is generally found in superficial mixed infections, abscesses and wounds. Many of the infections are secondary to skin tissue damage, such as bites, bullet wounds or other traumatic injuries of the skin tissue [2,3]. The discovery of A. phocae in tissues of animals showing signs of FENP (referred to below as FENP affected animals) is particularly interesting, as FENP-related signs were first seen in mink fed with seal byproducts in the USA in 1970 and in Canada in 1996 [4]. The bacterium can also be found on Canadian farms suffering from a chronic pododermatitis but not on farms with healthy animals [5].
An experimental infection was performed to investigate the ability of A. phocae to infect mink and cause FENP either alone or concurrently with the novel Streptococcus sp. It is crucial to confirm the causative agent of this disease in order to develop appropriate diagnostics, prevention protocols, and future treatments.
Ethics statement
This study was performed in strict accordance with the Finnish Act on Animal Experimentation 62/2006, with the European Convention for the protection of vertebrate animals used for experimental and other scientific purposes (Directive 86/609/EEC) fully implemented. The housing and management of mink fulfilled all the requirements for the housing of mink (VNa 1084/2011). All the experimental procedures of the study were approved by the Animal Experiment Board in Oulu (Permit Number: ESAVI/6780/04.10.03/2012). The animals were monitored regularly and euthanized when severe systemic signs such as fever, anorexia or apathy, or inflammatory signs in the skin, were detected, to avoid unnecessary prolonged pain, distress or suffering. The animals were euthanized by CO gas, a method described in the legislation concerning culling of animals ((EU) No 1099/2009).
Animals
Mink (Neovison vison) were bought from an ordinary fur farm since mink are not bred for experimental use. The farmer was informed of the experimental use of the purchased mink. The farm had no history of FENP, and the animals were free from plasmacytosis (Aleutian mink disease), which is a common viral mink disease that affects immunity and is suspected to predispose animals to other diseases [6,7]. Black color type was selected because it was known to be susceptible to FENP. One hundred females, born the previous spring, were used in the experiment. The breeder females were selected for the experiment as there were no males left for sale during the winter season.
The bacterial inocula
The inocula were prepared from cultures of A. phocae and the novel Streptococcus sp. isolated from clinical cases of FENP [1]. Six separate A. phocae isolates, two from each fur animal species, were used in a mixed preparation to account for possible differences in virulence. Correspondingly, a mixture of three separate isolates of the novel Streptococcus was used for the dual infections. The bacterial species were confirmed by sequencing a partial 16S rDNA gene (all nine isolates), and a partial 16S-23S ribosomal RNA intergenic spacer sequence of the three Streptococcus isolates. The unique sequences have been submitted to NCBI GenBank under the accession numbers KX966275-KX966278. The bacteria were grown overnight on blood agar plates containing 4% defibrinated sheep blood. They were scraped off the plates and suspended in injection grade saline solution. The concentration was estimated by spectroscopy, using calibration experiments relating optical density to cfu, and adjusted to the target concentrations based on the viability experiment (described below). The bacteria remained in suspension for approximately five hours prior to infection.
To ensure the viability and infectivity of the bacterial strains to be used in the experiment, they were pretested by a six-hour incubation in saline suspension at +4˚C at different, known concentrations based on experiments correlating OD600 with cfu. The temperature of +4˚C was used in the testing because of the subzero temperatures at the experimental farm. The suspensions, both pre- and post-incubation, were plated onto blood agar plates with 4% defibrinated sheep blood and incubated overnight at +37˚C. The colonies were counted the following day.
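The dose-adjustment arithmetic described above (estimating viable cfu from an OD600 reading and compensating for viability loss during cold storage in saline) can be sketched as follows. Note that the calibration constant `cfu_per_od` and the survival fraction used here are purely illustrative assumptions, not values reported in the study:

```python
# Hypothetical sketch of the dose-adjustment arithmetic: estimate viable
# cfu/ml from OD600 via a strain-specific linear calibration, then
# compensate for viability loss during the hours the suspension spends
# in cold saline. Calibration constant and survival fraction are
# illustrative only, not measured values from this study.

def viable_cfu_per_ml(od600: float, cfu_per_od: float = 1e8) -> float:
    """Estimate cfu/ml from an OD600 reading using a linear calibration."""
    return od600 * cfu_per_od

def dose_volume_ml(target_cfu: float, od600: float,
                   survival_fraction: float, cfu_per_od: float = 1e8) -> float:
    """Volume of suspension (ml) needed so that roughly target_cfu viable
    bacteria remain after storage losses."""
    effective_conc = viable_cfu_per_ml(od600, cfu_per_od) * survival_fraction
    return target_cfu / effective_conc

# Example: aiming for a 3 000 000 cfu dose with OD600 = 0.5 and 60% survival
volume = dose_volume_ml(3_000_000, od600=0.5, survival_fraction=0.6)
# volume == 0.1 ml under these illustrative numbers
```

In practice the calibration curve and the survival fraction would each be fitted from plate counts such as the pre- and post-incubation counts described above.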
In addition to pure cultures, tissue suspension from foot lesions of three FENP affected mink was used for inoculation of the experimental mink. These three animals were obtained from a farm with a confirmed history of FENP. The diseased tissues were cut and homogenized with mortar and pestle, diluted in saline and filtered through a 5 μm syringe filter (Millex, Merck, Darmstadt, Germany). Samples of the diseased tissues used in the infection were taken for PCR studies (described in more detail by Nordgren et al [1]) and also plated onto blood agar plates to confirm the presence of the bacteria of interest. In all the experiments, the doses remained in suspension for three to five hours, which could reduce the number of infectious bacteria; this loss was accounted for in the amounts suspended.
Experimental infections
The experimental infection was performed on a fur farm not used for housing mink at the time of the experiment. The farm had no previous history of FENP. The shelter building used in the experiment had been empty for 14 months and the cages were mechanically cleaned before the trial. The mink were transported to the experimental farm, caged, marked individually, and allowed to adjust to the environment for two weeks prior to the infections. One mink died during the adaptation period.
The experiment was performed from January to March, because clinical experience had shown that cold weather predisposes the animals to FENP. During the two pilot studies the temperature varied between 0˚C and +4˚C. The temperature dropped to -20˚C during the main experiment. The winter season is optimal for an infection trial also because the density of the animals is at its lowest thus limiting the possibility of unintentional transmission.
Two pilot experiments were performed to determine the optimal dose and route for the main study; we used either a peroral (p.o.) infection route or subcutaneous (s.c.)/intradermal (i.d.) injections to the right hind foot (Table 1). For the first pilot study thirty mink were used, and they were penned in one end of the shed. The trial lasted for four weeks. Four different doses were tested in the first trial, two for each infection route. The p.o. inoculum was mixed in feed as a dose of either 200 cfu or 10 000 cfu of A. phocae. The potential oral and/or intranasal infection routes were mimicked by infecting via feed; both routes might be possible if feed were a source of infection. The mink were fed once a day and the infection suspension was mixed in the total feed portion. On the day of infection, a slightly smaller feed portion, approximately 150 g, was used to ensure that the mink ate the whole portion including the infection dose. The s.c./i.d. infection was performed by injecting 0.5 ml of either a dose of 50 cfu or 2500 cfu with a 20 G needle into the right hind leg. In addition, the mink were given homogenized skin tissue filtrate from lesions of FENP affected mink, both via feed and as a s.c./i.d. injection. Control animals were fed (two mink) or injected s.c./i.d. (two mink) with the saline solution used for bacterial suspensions (9 mg/ml NaCl solution, Fresenius Kabi, Bad Homburg, Germany). Two further mink were left as controls without any procedures to make sure that the group did not become infected from the environment. Animals were monitored for signs of disease at least twice a day during the experimental period.
Based on the results of the first pilot study, we continued to a second pilot study to test higher doses (Table 1): 9 million cfu p.o. and 4 million cfu s.c./i.d. of A. phocae. We also used a mixture of 2 million cfu of A. phocae and 2 million cfu of the novel Streptococcus. In the second pilot study ten mink were used. A combination of A. phocae and the novel Streptococcus sp. was used in pilot study 2 because the results of the previous study of FENP (Nordgren et al 2014) had shown that, especially in FENP affected mink, A. phocae was often isolated in mixed cultures with a novel Streptococcus sp. Two mink, housed between the pilot study animals and the rest of the animals awaiting the main experiment, were left as controls. The second pilot study lasted for four weeks. The main study was designed based on the information obtained from these two pilot studies. Based on the findings in the pilot infections, we proceeded to the main study, in which we used 59 mink (Table 2). Blood serum samples were taken prior to the infection by drawing blood from a clipped nail into a capillary tube containing heparin (VWR Microhematocrit Capillary Tubes, VWR, USA, PA). The blood samples were tested by PCR for A. phocae and the novel Streptococcus sp. after the experiment. Blood samples instead of swabs were taken because the blood samples could later be used to detect other causative agents, such as viruses, whose possible role in the pathogenesis of FENP is as yet unknown.
The p.o. infection route by feed was substituted by direct application onto the facial area and mouth, because in the pilot studies it was difficult to mix the infection dose in feed under the cold conditions, and thus the actual dose eaten was uncertain. The animals were divided into five groups, A-E. Group A was infected by applying, with a syringe, one ml of a suspension of A. phocae and the novel Streptococcus sp. (5 000 000 cfu each) or of pure A. phocae (5 000 000 cfu) to the nasal area and mouth of the mink. Group B received a dose of 300 000 cfu or a dose of 3 000 000 cfu of A. phocae s.c./i.d. in 0.5 ml of saline solution. This was to test the ability of A. phocae alone to cause the typical signs of FENP. Group C received either the 300 000 cfu or the 3 000 000 cfu dose to a skin wound artificially created by scraping the area with a scalpel. This route caused more tissue damage than s.c./i.d. injections alone, thus better mimicking a trauma to the skin, which was earlier noted to predispose the animals to FENP. Group D mink received a mixture of A. phocae (500 000 cfu) and the novel Streptococcus sp. (500 000 cfu) in an artificial skin wound (as in Group C) to further test the possible synergy of the two bacteria. In Group E, mink were injected s.c./i.d. with tissue suspension from three FENP affected mink. An 18 G needle was used to facilitate the passing of the suspension, thus creating a larger than normal injection trauma. Artificial skin wounds were also created in six control mink, and one mink was left completely untouched. In addition, four mink were transferred to the cages of animals that died or were euthanized during the second pilot test, to study a possible infection route by contact with the cage of a diseased animal.
Aliquots of the inocula were plated on blood agar plates after the infection to confirm the cfu. The plates were then taken to a laboratory and incubated overnight at +37˚C. PCR and subsequent sequencing were performed on the tissues from diseased animals used in the experiment to confirm the presence of A. phocae and the novel Streptococcus sp.
Signs of disease and gross pathology
All animals showing systemic signs or signs of inflammation in the skin were immediately euthanized and necropsied. At the end of the experiment all remaining animals were euthanized and selectively necropsied. The clinical signs of FENP, in the acute form, were determined to be general signs of illness, i.e. fever, malaise, anorexia and lethargy, combined with one or more of the specific signs of FENP. These were considered to be discharge from the eyes and nose, swelling of the foot and reluctance to use the limb, followed by skin lesions in the feet and/or facial area. The lesions were considered specific for FENP when they were exudative and necrotic and a typical crust could be detected. The gross lesions were described by the location, extension and severity (mild, moderate, severe) of the lesion and the duration (acute, chronic) of the process. Any lesions in the internal organs were also recorded.
Histopathology
Samples of brain, heart, lung, trachea, spleen, liver, kidney, bladder, duodenum, jejunum, ileum, colon, local lymph nodes and skin tissue were fixed in 10% phosphate buffered formalin for at least 24 h, embedded in paraffin and sectioned at 4 μm. All tissues were stained with hematoxylin and eosin (H&E). All the animals with clinical signs were submitted to a complete necropsy; four selected controls and nine infected animals without signs were also fully necropsied. Samples of brain, heart, lung, spleen, liver, kidney, duodenum, jejunum, ileum, colon, local lymph nodes and skin were sent for bacteriological studies and cultured on blood agar plates containing 5% defibrinated sheep blood and incubated aerobically at 37˚C for 24-48 hours. Intestinal samples were cultured to detect salmonella, campylobacter and anaerobic pathogens. Bacteriological culturing was performed on all fully necropsied animals.
Experimental infection protocol
In this experiment both pure cultures of the bacteria and tissue suspensions were used to infect animals. The culture suspensions of A. phocae and the novel Streptococcus sp. suspected to be associated with FENP in mink were first quantified by plating and counting colony forming units (cfu), to ensure the quality of the inocula after dilution, processing and storage at ambient temperatures. The inocula were successfully adjusted to account for the lowered viability. The tissue suspension from animals with FENP, which was used as an inoculum for the experimental infections, was confirmed to contain both A. phocae and the novel Streptococcus sp. by PCR and sequencing. The feed used in the experiment was found to contain DNA of streptococcal (Streptococcus agalactiae) and arcanobacterial (Trueperella bernardiae, previously known as Arcanobacterium bernardiae) origin, but not the species used in the experiment. When plated, the feed did not produce colonies except those expected to be found in the environment.
The PCR tests done on the pre-infection blood samples taken from the animals before the main experiment showed that one mink had been exposed to A. phocae and another one to the novel Streptococcus before infection. The first mink, pre-exposed to A. phocae, belonged to group A and received A. phocae and the Streptococcus p.o. The Streptococcus carrier belonged to group E, which received the tissue suspension from a diseased animal.
Clinical outcome
In the first pilot study, one mink infected s.c./i.d. with FENP tissue suspension developed a typical FENP lesion at the injection site. The lesion was a 0.5 x 0.5 cm necrotic pyoderma with brownish exudate and crust formation. All the other animals (29/30) remained healthy (Table 1). In the second pilot, all mink infected s.c./i.d. with a higher dose of A. phocae, or a mixture of A. phocae and the novel Streptococcus sp., had severe systemic signs of lethargy, anorexia and apathy, and pyoderma with edema, hemorrhages and necrosis at the injection site. Two of the animals died of sudden, peracute infection before euthanasia could be performed. In the main study, almost all (26/30) of the animals inoculated either s.c./i.d. or via the artificial wound (groups B, C, D) developed a lesion at the inoculation site, as did half of the mink (3/6) inoculated with tissue of FENP affected animals (Table 2). Mink inoculated p.o. did not develop FENP in any of the substudies, nor did any of the control animals.
Pathology
Mink with clinical signs (34) in the pilot studies and the main study presented edema at the inoculation site, and brownish exudate on the skin. Mink from the first pilot inoculated with FENP tissue and mink from the second pilot with an artificially created wound had crust formation typical of FENP as well (Fig 1). Edema, hemorrhage, necrosis and pus were detected in the subcutaneous tissues (Fig 2). The spleens were dark red, enlarged (approx. four times the size of a normal spleen) and the surfaces were slightly mottled in 26/34 cases. Slight fatty liver changes were observed in 26/34 mink.
Mink with clinical signs and gross pathological findings (33/34) had histopathological changes in the skin samples as well. Lesions were similar to those seen in clinical cases of FENP. In the epidermal layers an acute inflammation was observed; necrosis, fibrin, clusters of gram-positive bacteria and mainly diffuse neutrophilic inflammation were recorded. Inflammation spread to the subcutaneous tissue, where hemorrhage, necrosis and edema were detected. Vasculitis and subcorneal micropustular inflammation were also occasionally detected.
Hemorrhages, necrosis and fibrin in the spleen samples were detected in 28/34 mink; 19/34 had mild to moderate vacuolar degeneration in liver samples, typical of fatty liver; 16/34 had congestion and moderate perivascular lympho- and plasmacytic inflammation; and 11/34 had fibrin and neutrophilic granulocytes in the lung samples. Mink inoculated s.c./i.d. with tissue of FENP affected animals (1 from the pilot test, 3 from the main study) developed histopathological lesions in the skin similar to those seen in FENP. Three mink inoculated with the tissue of FENP affected mink (1 p.o. and 2 s.c./i.d.) without clinical signs were necropsied and did not show any macroscopic changes. Three controls and the mink which died in the pre-experiment adaptation period were necropsied as healthy controls. Six mink inoculated with A. phocae perorally and one inoculated through wounded skin showed no signs of disease and were selected for necropsy. One mink inoculated with both A. phocae and the novel Streptococcus sp. also failed to show clinical signs and was necropsied as well. No histopathological changes were detected in any of these animals.
Microbiology
In the second pilot study and the main study, 23 mink inoculated with A. phocae developed skin lesions; from 17 of these, A. phocae was isolated from the skin sample in mixed culture, and from two as a pure culture. Four had bacterial growth with A. phocae in the internal organs (liver, kidney) as well. The isolation of A. phocae was unsuccessful in four mink that were inoculated with A. phocae s.c./i.d. Two of these had Proteus sp. overgrowth and two had abundant growth of bacteria belonging to the Staphylococcus intermedius group. One inoculated animal that remained asymptomatic had A. phocae growth in the skin sample.
Seven out of eight mink inoculated with a mixture of A. phocae and the novel Streptococcus developed systemic signs and skin lesions. Both of the bacteria were grown from a skin sample of one of these mink, A. phocae alone from two, Streptococcus alone from three mink. Those mink which were inoculated but did not develop visible systemic signs or skin lesions had growth of both bacteria in the injection site as well. Four animals inoculated with both bacteria had the novel Streptococcus in the internal organs (kidney, spleen and liver) alone, and in one case with Escherichia coli.
Altogether three (3/6) mink inoculated with FENP affected tissue developed systemic signs and skin lesions, and the novel Streptococcus sp. together with Streptococcus canis was isolated from the skin samples of all these animals; one also had Proteus sp. Two of these had bacterial growth in internal organs (liver, kidney, and lung) as well (the novel Streptococcus, E. coli, S. canis, Proteus sp.). In the lung, neutrophilic granulocytes and fibrin were detected, and the liver showed mild steatosis. A. phocae could only be detected by PCR (see below). Samples from the skin and organs of three healthy controls were cultured with no microbiological findings. The mink that died in the pre-experiment adaptation period and six inoculated animals without signs were sent for necropsy, but no bacteria were isolated.
PCR
The PCR tests performed on the pre-inoculation blood samples of the main experiment showed that one mink had been exposed to A. phocae and another one to the novel Streptococcus before the inoculation. The first mink, pre-exposed to A. phocae, belongs to group A and received A. phocae and the Streptococcus p.o. The Streptococcus carrier belonged to group E which received the tissue suspension from a diseased animal. While other animals belonging to these groups developed clinical signs, these two remained healthy.
The animals that were inoculated but showed no signs of infection, and the negative control animals, were found to be negative, or the PCR copy numbers were extremely low (four negative controls). Two mink had completely eliminated even the genetic material derived from the inoculum. The three mink inoculated with both bacteria and presenting with clinical signs, but found positive only for the novel Streptococcus sp. by culturing, were found positive for A. phocae by PCR.
Discussion
Here we describe the setup of an experimental infection model for FENP by inoculation of mink with A. phocae and a novel Streptococcus sp. In our previous report on the disease [1], an association between FENP and A. phocae, and to a lesser extent Streptococcus spp., was detected. Our experimental infection model showed that tissue suspension from a FENP affected mink, and A. phocae either alone or together with the novel Streptococcus sp., can cause lesions and signs which mimic those detected in FENP. These were observed in the animals inoculated both by the s.c./i.d. route and by inoculation of artificially wounded skin. These findings resemble the infections detected in the marine mammals, where A. phocae causes inflammation in wounds [2]. No signs or lesions were detected in the groups inoculated through the p.o. route; however, some of these animals were PCR positive, which suggests that feed may be one of the routes by which the bacteria colonize the animals, but this alone is not enough to induce disease. The mink transferred to the cages that had housed animals showing signs of FENP also remained healthy, again suggesting that environmental exposure alone is not sufficient to cause clinical disease. These animals were not tested by PCR, so it remains unclear whether they were colonized without any detectable signs of disease. All of this indicates that trauma, stress or an underlying infection predisposes to FENP and reinforces the previous theory of a multifactorial disease.
An ordinary mink farm was chosen for the experiment because no experimental laboratory premises suitable for mink existed in Finland. By this approach, we also replicated natural conditions on the farms, which was considered an advantage since practical experience suggests that environmental factors may predispose to the infection. On the other hand, a farm is suboptimal for an infection experiment because it compromises the biosafety of the experimental environment. The bacteria may be able to spread in the greater farm area much as in natural cases of FENP despite measures taken to prevent this. Indeed, the PCR results of the negative controls suggested that the bacteria had been able to spread within the experimental group of animals, supporting the high potential for transmission and rapid spread of these pathogens.
In the main study three out of six mink inoculated with tissues of an animal with FENP developed macroscopic lesions and histopathologic changes typical for FENP. One mink developed chronic lesions more typical for the FENP detected in clinical cases. The macroscopic signs and histologic lesions of FENP are distinct from other bacterial skin infections. Other factors separating FENP are the localization, early clinical signs, necrosis and severity of the lesions and also the epidemic nature of this disease. The bacteriological studies revealed the novel Streptococcus in all cases but no presence of cultivable A. phocae. All the cultures were mixed, and as A. phocae is easily overgrown by other bacteria it may be missed in conventional diagnostics. The PCR showed the presence of both bacteria in the animals with typical lesions. Animals inoculated with direct injection of A. phocae, a mixture of the two bacteria or the tissue, but showing no signs of the disease were negative for A. phocae both by culture and PCR showing that the PCR positive results were unlikely to come from the inoculum and were more likely indicative of bacterial growth in the tissues.
A. phocae has been detected by PCR in samples of clinically healthy animals [5], indicating that it is an opportunistic pathogen and can be part of the normal flora of the animals, causing problems only in favorable circumstances, for instance in the case of trauma to the skin or in the presence of other pathogens. The colonization of the fur animals appears to be recent, as no isolates of A. phocae were found prior to the emergence of FENP, further suggesting a link between this bacterium and the disease. It is of course possible, if unlikely, that the bacterium could have gone undiagnosed until the arrival of FENP. In this study, two animals shown to be carriers prior to the main experiment had actually been housed close to the pilot animals, suggesting a possible source of infection at the site of the experiment. These animals also remained healthy despite experimental infection, which might suggest immunological protection stemming from previous low level exposure. Samples from control animals at the end of the experiment were PCR positive at low quantities for both bacteria, whereas samples from control animals in the pilot studies were not. This suggests that the bacteria started to spread spontaneously among the test animal population, correlating with the suspected, highly contagious nature of the bacteria. This could also explain the positive findings of the Streptococcus in experimental animals inoculated solely with A. phocae.
The inoculation doses were determined and adjusted in pilot studies 1 and 2. Suboptimal doses were probably used in pilot test 1, as no signs or lesions were detected, whereas the relatively high dosage in pilot test 2 caused severe systemic signs of apathy, lethargy and anorexia as well as sudden mortality. The clinical picture was subacute to acute, and no chronic lesions developed apart from one mink inoculated with tissue suspension of mink with FENP. The doses used for inoculation should be further optimized, as the clinical signs varied between subacute and severely acute, causing deaths or leading to euthanasia. Sudden mortality is reported in natural cases of FENP and was seen also in the study animals. In such cases A. phocae is mostly isolated in the skin samples and no growth of microbes is detected in the organ samples. Even though the clinical outcome differed in severity and duration from the typical, more chronic, cases of FENP, both the gross and histopathological findings in the study animals, including pyoderma and profound necrosis in the skin, resembled the findings typical for FENP.
The difficulty in reaching correct dosing was partly attributable to the poor viability and longevity of the bacteria in suspension. The viability is reduced in direct correlation with the time of incubation in saline solution, and also correlates with temperature, which is more difficult to standardize in field conditions. Therefore, the amount of infectious bacteria may have varied within the experiment. The growth of A. phocae is relatively poor in laboratory conditions in general, further complicating diagnostic approaches during experiments and, to a lesser degree, in natural cases of FENP, as the sample material is prone to environmental contaminants.
The mink used in the study were in excellent health and the environment and management of the animals was good. This probably influenced the general immunological status of the animals in the beginning of the trial. During the main study weather conditions changed notably, with a sudden cold weather period with -20˚C temperatures, which may have stressed and weakened the mink thus resulting in the signs observed during the main experiment.
It may be hypothesized that the systemic signs and lesions detected in diseased and succumbed animals could be explained by the effect of bacterial toxins. Bacteria belonging to the genus Arcanobacterium are known to produce toxins with dermonecrotic activity, which could explain some of the features of the pathogenesis of FENP [8]. The presence and role of such toxin(s) requires further investigation; the possible role of the Streptococci also remains unclear. Bacteriological examinations of naturally occurring cases of FENP can show A. phocae alone or together with a Streptococcus, including the novel Streptococcus species that is closely related to Streptococci of marine origin. Our experiment somewhat reinforces the hypothesis that the two bacteria have synergy, but the disease does not necessarily require the presence of the Streptococcus to develop. Further studies are under way to characterize and define the role of the Streptococcus. Additionally, more research should be done on the differences in viability and infectivity of different bacterial isolates, and on how to overcome their poor longevity in saline solutions at low ambient temperatures. This is unlikely to be a characteristic of the wild type bacteria, as they were originally found in marine mammals.
This study demonstrated that tissue suspension of a mink with FENP, and A. phocae alone or together with the novel Streptococcus sp., can cause lesions and mortality in mink. The signs and lesions as well as the gross and histopathological findings in the study animals were similar to those typical of FENP, even in the more acute forms of the disease developed by the study animals. The study confirmed that predisposing factors such as skin trauma and environmental factors are needed to evoke the disease.
Precise Synthesis, Properties, and Structures of Cyclic Poly(ε-caprolactone)s
Cyclic PCL (c-PCL) has drawn great attention from academia and industry because of the unusual structural and property characteristics that arise from its lack of end groups, in addition to the biocompatibility and biodegradability it shares with its linear analogue. As a result of much research effort, several synthetic methods for producing c-PCLs have been developed. Their chain, morphological, and property characteristics have been investigated, although only on a very limited basis. This feature article reviews the research progress made in the synthesis, morphology, and properties of c-PCL; all results, with their pros and cons, are discussed in terms of purity and molecular weight distribution in addition to the cyclic topology effect. In addition, we attempted to synthesize a series of high-purity c-PCL products by using intramolecular azido-alkynyl click cyclization chemistry followed by precise, controlled separation and purification, and we investigated their thermal degradation and phase transitions in terms of the cyclic topology effect.
The research interest has recently been extended to topological PCLs. In particular, the cyclic topology, namely cyclic PCL (c-PCL), has drawn attention because of its unique properties arising from the absence of end groups [6][7][8][9][10][11][12]. These unusual properties render c-PCL more versatile in various fields. Research effort was first directed at developing synthetic methods for c-PCL, and several synthetic schemes have been reported. The synthesized products were examined in terms of properties and structures [6][7][8]10,[16][17][18]20,27,28,[31][32][33][34][35]. Nevertheless, c-PCL synthesis still faces critical issues: unreacted linear PCL precursor residues and their removal, byproducts and their removal, low reaction yields, long reaction times, limited ring sizes, and so on. Additionally, the properties and structures of c-PCL products have been investigated only on a very limited basis.
In this feature article, we review the synthetic schemes, properties, and morphological structures of c-PCL reported in the literature so far. In addition, we discuss new results on the precise synthesis and thermal properties of c-PCL recently achieved in our laboratory. In our study, a series of c-PCL products was synthesized by using intramolecular click chemistry on linear PCL precursors bearing an azido group on one chain end and an alkynyl group on the other; their thermal properties were examined by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC).
Synthesis
Much research effort has been devoted to synthesizing c-PCLs. As a result, several synthetic methods have been developed, which can be classified into three major families: (i) ring-expansion polymerizations; (ii) intermolecular cyclizations; (iii) intramolecular cyclizations. In this section, the synthetic schemes reported in the literature are reviewed and their pros and cons are discussed. The purity of c-PCL is critical to understanding its structure and properties and to developing its applications, so the workup processes are discussed in detail from the viewpoint of purity. At the end of this section, our own precise-synthesis work, which provides highly pure c-PCL products with low polydispersity indices (PDIs), is introduced together with the purification processes.
Ring-Expansion Polymerization
A ring-expansion polymerization (REP) of ε-caprolactone (CL) was first achieved in 1998 by using 2,2-dibutyl-2-stanna-1,3-dioxepane [n-Bu 2 Sn(OCH 2 CH 2 CH 2 CH 2 O)], a cyclic initiator synthesized from di-n-butyltin dimethoxide [n-Bu 2 Sn(OMe) 2 ] and 1,4-dihydroxybutane (Scheme 1a) [13][14][15][16][17]. The REP reaction was carried out at 60~180 °C in bulk. The degree of polymerization (DP) was proportional to the [monomer]/[initiator] ratio (=[M]/[I]) up to 1000 (which corresponds to a number-average molecular weight M n of 190,000 g/mol). The polydispersity index (PDI) ranged from 1.42 to 1.71 and increased with the [M]/[I] ratio, reaction time, and temperature. The reaction conversion was >90%. Overall, this REP is a good approach to synthesizing c-PCL. However, the method has three drawbacks. The first is the formation of cyclic oligomers as byproducts via thermal back-biting degradation; the amount of such byproducts increased with reaction temperature and time, and the lowest byproduct yield was 1~2 wt %. The second is that the resulting c-PCL contains the 2,2-dibutyltin component in the ring structure; the presence of this tin component may cause toxicity in some applications. The tin component could be eliminated via an insertion-elimination mechanism using 2-oxo-1,3-dithiane; however, the c-PCL so obtained has low stability because of the carboxylic anhydride unit formed in the ring. The final drawback is that the c-PCL is formed with relatively high PDI values.
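The proportionality between DP and [M]/[I] quoted above corresponds to the standard theoretical estimate for a living polymerization, M n ≈ ([M]/[I]) × conversion × M 0, with M 0 = 114.14 g/mol for the CL repeat unit. The sketch below is this generic estimate, not the calibration used in the cited work; reported GPC-based values (e.g., 190,000 g/mol at [M]/[I] = 1000) can deviate from it because GPC is calibrated against linear standards.

```python
# Generic theoretical Mn estimate for a living polymerization of CL.
# Assumption: one growing chain per initiator; CL repeat-unit mass 114.14 g/mol.
# Illustrative sketch only, not the GPC calibration of the cited study.

CL_REPEAT_MASS = 114.14  # g/mol

def theoretical_mn(m_over_i, conversion=1.0, repeat_mass=CL_REPEAT_MASS):
    """Mn ~ DP * M0, with DP = [M]/[I] * fractional conversion."""
    return m_over_i * conversion * repeat_mass

for ratio in (100, 500, 1000):
    # e.g. at >90% conversion, as reported for this REP
    print(ratio, round(theoretical_mn(ratio, conversion=0.90), 1))
```

Comparing such an ideal estimate with the GPC-determined M n gives a quick sense of how far a given calibration departs from the feed-ratio expectation.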
For the REP of CL, monoalkyl-organoaluminums were introduced in 2013 as a new initiator system (Scheme 1b) [18]. The cyclic initiator form is thought to be generated through nucleophilic attack on CL by the scorpionate ligand, followed by macrolactonization via continuous insertion of CL monomers, without involvement of the alkyl (methyl or ethyl) group directly attached to the aluminum center. The REP produced c-PCL via an intramolecular chain-transfer reaction with regeneration of the monoalkyl-organoaluminum initiator. The reactions were conducted in toluene at 70~130 °C. The reaction yield ranged from 91 to 99%, depending on the alkyl substituent, [M]/[I] ratio, temperature, and time; the c-PCL products were obtained with M n = 18,690~52,020 g/mol and PDI = 1.02~1.37, and both M n and PDI increased with reaction time. These results collectively indicate that the monoalkyl-organoaluminum initiator is quite novel for the REP of CL and performs much better than the 2,2-dibutyl-2-stanna-1,3-dioxepane initiator. Moreover, it is notable that this synthetic method can produce c-PCLs without any chemical heterogeneity in the ring. However, some concerns remain. First, an intramolecular chain-transfer reaction was proposed as the termination step; this chain transfer may cause side reactions, possibly producing byproducts including cyclic oligomers. Second, it is still necessary to verify whether an aluminum-containing component, or parts of the initiator, remains within the obtained c-PCL. Third, how can the metal-containing initiator be eliminated from the c-PCL after the REP? Fourth, can c-PCL with M n higher than 60,000 g/mol be achieved? Lastly, the PDI values are still large; can they be lowered?
Intermolecular Cyclization
α,ω-Anthracene-terminated linear PCLs were synthesized at 80~110 °C in toluene under an argon atmosphere by a one-pot, two-step reaction. In the first step, mono-anthracene-terminated PCLs were synthesized from CL with the aid of a 9-anthracenemethanol initiator and a tin(II) 2-ethylhexanoate (Sn(Oct) 2 ) catalyst; in the second step, these mono-anthracene-terminated PCLs were coupled by using 1,6-hexane diisocyanate. The obtained dianthracene-terminated PCLs had M n = 3270~6780 g/mol and PDI = 1.38~1.99, depending on the [M]/[I] ratio, temperature, and time of the polymerization; the polymerization yields ranged between 72% and 98%. The dianthracene-terminated PCL underwent a [4+4] cyclization reaction in tetrahydrofuran (THF) over the polymer concentration range of 10 to 100 mg/mL upon ultraviolet (UV) exposure (365 nm wavelength); the exposure time was 120~168 h. The cyclic PCL products were purified by precipitation in methanol and subsequent filtration and drying. The obtained product was a cyclic dimer to cyclic tetramer, depending on the linear precursor concentration and UV-exposure time in the photocyclization; the c-PCL products revealed PDI = 2.06~2.26 depending on ring size, with larger rings showing higher PDI values. Overall, photoinduced cyclization is a good approach for producing cyclic multimers. However, this method raises several concerns. First, the α,ω-anthracene-terminated linear PCL precursors were synthesized with relatively high PDIs, which should be improved. Second, all cyclic multimer products were obtained with PDI values larger than those of their linear PCL precursors, an indication that each cyclic multimer might include a certain level of impurities (perhaps coupled linear precursors and cyclic multimers of different sizes).
Third, the cyclic multimers inherently include bulky [4+4]-cyclized anthracene dimer units, which can influence their structure and properties. Lastly, this method still faces the challenge of producing a monomeric cyclization product; no possibility of monomeric cyclization was shown or discussed. Scheme 2. Ring-opening polymerization of CL using an aromatic alcohol as an initiator; the post-modification and intermolecular cyclization of the linear PCL product [19].
Intramolecular Cyclizations
2.3.1. Pseudo [2]rotaxane-Initiated Cyclization
c-PCL was synthesized via the ring-opening polymerization of CL initiated by a pseudo [2]rotaxane initiator with the aid of a diphenyl phosphate catalyst, followed by capping of the propagating chain end with a bulky isocyanate (3,5-dimethylphenyl isocyanate or 3,5-bis(trifluoromethyl)phenyl isocyanate) to afford a macromolecular [2]rotaxane (Scheme 3a) [10]. Here, the pseudo [2]rotaxane initiator was prepared from a sec-ammonium salt with both pentenyl and benzyl alcohol termini and a dibenzo-24-crown-8-ether bearing a pentenyl substituent. The PCL precursor polymers, with M n = 5500~6700 g/mol, were obtained in 75~80% yield with PDI = 1.12~1.17. The subsequent intramolecular cyclization to a macromolecular [1]rotaxane at the precursor polymer terminus was carried out in 80~82% yield via ring-closing metathesis with a Grubbs second-generation catalyst. The attractive interaction of the terminal ammonium/crown ether moiety was then removed via N-acetylation, which enabled movement of the crown ether wheel along the axle PCL chain to the urethane region of the other terminus in solution. The obtained c-PCLs showed PDI values very close to those of the linear precursor polymers, indicating that the purification process was effective. This approach nicely demonstrated the formation of c-PCLs, but several concerns remain. First, the linear PCL precursors still have relatively high PDI values. Second, the c-PCLs possess a very bulky crown ether ring; such a bulky unit may affect their properties and structure. Lastly, the ring closure relies on interactions between the crown ether's oxygen atoms in the wheel and the amide linker in the axle, which are somewhat weaker than covalent bonds; this non-covalent cyclic coupling may also affect the properties and structure of the c-PCL. Scheme 3. Synthesis of c-PCL from CL: (a) linear PCL and its cyclization using a rotaxane [10]; (b) zwitterionic polymerization of CL [20][21][22]; (c) linear PCL and its cyclic amidation [23].
Zwitterionic Polymerization and Cyclization
c-PCL was prepared in THF or toluene at room temperature by the zwitterionic polymerization of CL with an N-heterocyclic carbene (NHC) initiator (1,3,4,5-tetramethylimidazol-2-ylidene, 1,3-diethyl-4,5-dimethylimidazol-2-ylidene, or 1,3-diisopropyl-4,5-dimethylimidazol-2-ylidene) and subsequent cyclization via intramolecular backbiting of the terminal alkoxides on internal esters of the zwitterions (Scheme 3b) [20][21][22]. The yield of the polymerization with [M]/[I] = 100 was in the range of 30~80%, depending on the initiator and time. The c-PCL products had M n = 41,000~114,000 g/mol and PDI = 1.36~2.16. It is quite notable that this synthetic method can produce c-PCLs without any chemical heterogeneity in the ring. However, the polymerization yields are relatively low, and the PDI values are relatively large. There may be three reasons for these results. The first possibility is that during the polymerization the cyclization reaction, namely intramolecular backbiting of the terminal alkoxide on the internal ester near the cationic terminus, competes with propagation. The second is that the intramolecular backbiting of the terminal alkoxide can occur at multiple internal ester units of the growing polymer chain. The third is that the terminal alkoxide can also attack the ester linkers of other growing polymer chains.
Cyclic Amidation
c-PCL bearing an amide linker was prepared in a four-step reaction (Scheme 3c) [23]. In the first step, linear PCL possessing α-tert-butoxycarbonyl and ω-hydroxyl groups was prepared by the tert-butyl 6-hydroxyhexanoate-initiated ring-opening polymerization of CL in toluene at room temperature under an argon atmosphere with the aid of diphenyl phosphate (DPP) [24] as a metal-free organocatalyst. In the second step, the ω-hydroxyl group was reacted with N-(tert-butoxycarbonyl)-β-alanine (N-Boc-β-alanine), converting it to a Boc-NH terminus. In the third step, the α-tert-butoxycarbonyl group was converted to COOH while the ω-Boc-NH was converted to NH 2 . In the last step, the α-COOH-ω-NH 2 -PCL was cyclized by a solid-phase amidation method using a silica-supported carbodiimide condensation agent (SiliaBond ® Carbodiimide, Si-DCC) [25,26]. The cyclic amidation was reported to give a yield of around 30%. However, no details of purification or characterization were given, so serious concerns remain about the purity of the c-PCL product.
Alkene Metathetic Cyclization
In the first step, a linear α-vinyl-ω-hydroxy-PCL precursor with M n = 3600 g/mol and PDI = 1.08 was prepared in a yield of 75.4% by polymerization of CL with an undecylenyl alcohol initiator and Sn(Oct) 2 catalyst in toluene at 110 °C for 24 h. The product was purified by precipitation in methanol and subsequent drying. It is noteworthy that the α-vinyl-ω-hydroxy-PCL precursor was obtained with a relatively low PDI value even though Sn(Oct) 2 was used as the catalyst. In the second step, the hydroxyl end group of the linear precursor was reacted with 10-undecenoyl chloride with the aid of triethylamine in dichloromethane (CH 2 Cl 2 ) under a nitrogen atmosphere. The α,ω-divinyl-PCL product was obtained in a yield of 70%, with M n = 3900 g/mol and PDI = 1.11. In the third step, the intramolecular cyclization of the α,ω-divinyl-PCL at a dilute concentration of 0.0005 M was carried out with a Grubbs first-generation catalyst in CH 2 Cl 2 at 40 °C for 48 h under a nitrogen atmosphere. The c-PCL product, with M n = 3500 g/mol and PDI = 1.70, was obtained in 90% yield after precipitation in methanol, filtering, and drying. Overall, an oligomeric c-PCL product was prepared. These results raise some questions. First, the α,ω-divinyl-PCL was obtained in a relatively low yield, suggesting that some unreacted α-vinyl-ω-hydroxy-PCL precursor remained; how could it be eliminated by such a simple precipitation process? Second, in the cyclization reaction the c-PCL product was obtained in an exceptionally high yield of 90%, yet with a higher PDI value; the simple precipitation process probably could not eliminate all possible impurities, including unreacted α,ω-divinyl-PCL precursors. Overall, the results collectively suggest that the c-PCL product still includes a certain level of impurities such as unreacted precursors, linear dimers and multimers, and cyclic dimers and multimers. Lastly, it is noted that this c-PCL contains an ethenyl linker in the ring, which may serve as a reactive site for post-chemical modification.
Alkyne-Alkene Metathetic Cyclization
In the first step, a linear α-ethynyl-ω-hydroxy-PCL precursor with M n = 4100 g/mol and PDI = 1.09 was prepared in a yield of 77% by polymerization of CL with a propargyl alcohol initiator and Sn(Oct) 2 catalyst in toluene at 110 °C for 24 h (Scheme 4b) [27]. In the second step, the α-ethynyl-ω-hydroxy-PCL precursor was converted to α-ethynyl-ω-vinyl-PCL through a reaction with 10-undecenoyl chloride under a nitrogen atmosphere; the reaction yield was 72% and the product had PDI = 1.07. In the final step, the α-ethynyl-ω-vinyl-PCL precursor in dilute solution (0.0005 M) underwent cyclization with a Grubbs first-generation catalyst under a nitrogen atmosphere. The c-PCL product was obtained in 89.5% yield after precipitation in methanol, filtering, and drying; it had PDI = 1.6. These results raise some concerns. First, the relatively low yield of α-ethynyl-ω-vinyl-PCL indicates that a certain amount of unreacted α-ethynyl-ω-hydroxy-PCL precursor could remain; how could such unreacted precursor be removed by the simple precipitation process employed? Second, the yield of c-PCL is exceptionally high given that the intramolecular cyclization was performed under highly dilute conditions. Only a simple precipitation process was employed to purify the cyclic product; this could not completely remove possible unreacted precursors and byproducts from the target product, so the high product yield might be overestimated owing to impurities. Third, the PDI of the c-PCL was considerably larger than that of the precursor polymer, again indicating that the c-PCL product still included impurities such as unreacted precursors, linear dimers and multimers, and cyclic dimers and multimers. Lastly, it is noted that the c-PCL product possesses two reactive sites: the vinyl linker in the ring and the vinyl side group.
Ethyne-Azide Click Cyclization
In the first step, a linear α-ethynyl-ω-hydroxy-PCL precursor with M n = 4100 g/mol and PDI = 1.09 was prepared according to the method in Scheme 4c. In the second step, the α-ethynyl-ω-hydroxy-PCL precursor was converted to α-ethynyl-ω-azido-PCL through a reaction with 11-azidoundecanoyl chloride under a nitrogen atmosphere, in 70% yield. In the final step, the α-ethynyl-ω-azido-PCL precursor in dilute solution (0.00067 M) underwent ethyne-azide click cyclization with the aid of a copper(I) bromide (CuBr) catalyst and α,α′-bipyridyl (bPy) in N,N-dimethylformamide (DMF) at 120 °C for 48 h under a nitrogen atmosphere; the precursor solution was added slowly by using a dropping funnel. The c-PCL product was obtained in 91% yield after precipitation in methanol, filtering, and drying, and had PDI = 1.1. Overall, the synthesis of c-PCL was carried out reasonably well. However, some questions still arise. First, the α-ethynyl-ω-azido-PCL precursor was prepared in a relatively low yield, suggesting that a certain portion of the α-ethynyl-ω-hydroxy-PCL precursor remained unreacted; how could a simple precipitation process remove the unreacted precursor from the reaction product? Second, the yield of the c-PCL product was remarkably high considering that the cyclization was performed in a very dilute solution; if unreacted precursors remained and other polymeric byproducts were generated, they could not easily be removed by a simple precipitation process alone, so the purity of the c-PCL product is doubtful. Third, the PDI of the c-PCL product increased to 1.1 (from 1.03 for the precursor polymer), an indication that the product still includes polymeric impurities. Lastly, the c-PCL product possesses a triazolyl linker in the ring; such a planar and bulky triazolyl linker may affect the structure and properties of the cyclic polymer.
Azide-Alkyne Click Cyclization (I)
α-Azido-ω-hydroxy-PCL precursors were prepared by Sn(Oct) 2 -catalyzed polymerizations of CL with a 3-azidopropanol initiator at 110 °C (Scheme 4d) [28]. The obtained linear precursors had M n = 3830~14,900 g/mol and PDI = 1.07~1.14, depending on the [M]/[I] ratio and reaction time. The α-azido-ω-hydroxy-PCL precursors were converted to α-azido-ω-ethynyl-PCLs by reactions with 4-pentynoic anhydride in CH 2 Cl 2 at 40 °C with the aid of 4-(dimethylamino)pyridine and pyridine; the α-azido-ω-ethynyl-PCL products had PDI = 1.07~1.20. The click cyclizations of the precursors in dilute solution were conducted in CH 2 Cl 2 by using a CuBr catalyst and N,N,N′,N″,N″-pentamethyldiethylenetriamine (PMDETA); each precursor solution (0.0044 M) was added at a rate of 2 mL/h to the CuBr/PMDETA solution by using a syringe pump. After complete addition of the precursor solution, the reaction mixture was stirred for an additional 2 h. The reaction mixture was extracted from saturated aqueous NaHSO 4 into CH 2 Cl 2 ; the organic layer was dried over anhydrous MgSO 4 , filtered, and concentrated prior to precipitation from CH 2 Cl 2 into a 1:1 mixture of chilled hexanes and diethyl ether. The product was isolated via filtration and dried. The yields were around 57%, with M n = 3780~15,000 g/mol. All c-PCL products showed very low PDI values (1.07~1.13), comparable to those of the precursor polymers. These results collectively suggest that c-PCLs were synthesized with high purity by the azide-alkyne click cyclization. Several questions, however, arise about the cyclization reactions and subsequent purification. First, the cyclization yields are not high (only around 60%), suggesting that each reaction mixture might include a substantial amount of unreacted precursor polymer. Second, how could such unreacted precursors be removed from the c-PCL product, and was the employed workup process efficient enough to do so? Third, can such low PDI values alone assure the purity of the c-PCL products, and could each product include other impurities such as linear dimers and multimers and cyclic dimers and multimers? Lastly, the analytical gel permeation chromatography (GPC) profiles confirmed that the c-PCL products include a certain level of impurities, even if as minor components. Thus, the c-PCL products still need further purification although they have relatively low PDI values.
Azide-Alkyne Click Cyclization (II)
In the third step, N 3 -PCL 90 -C≡CH was cyclized as follows. The N 3 -PCL 90 -C≡CH (510 mg, 15.5 µmol) in degassed CH 2 Cl 2 (36 mL) (0.00043 M precursor concentration) was added at a rate of 0.5 mL/h to a mixture of CuBr (386 mg, 0.837 mmol) and PMDETA (0.68 mL, 1.01 mmol) in degassed CH 2 Cl 2 (350 mL) at 25 °C under flowing argon, using a syringe pump (model Legato 100, KD Scientific, Holliston, MA, USA) equipped with a fine hypodermic needle (21 G × 30 cm). After the addition was completed, the reaction mixture was stirred for an additional 3 h. Then, propargyl-functionalized polystyrene (PS-C≡CH) resin (2.5 g, 7.35 mmol) and a solution of CuBr (772 mg, 0.837 mmol) and PMDETA (1.36 mL, 1.01 mmol) were added in order to react away unreacted precursor polymers as well as any linear dimers and multimers formed; the PS-C≡CH resin had been prepared by treatment of 4-(hydroxymethyl)phenoxymethylpolystyrene resin (polystyrene resin cross-linked with 1% divinylbenzene, 200-400 mesh) with propargyl bromide [29,30]. After being stirred for 24 h, the reaction mixture was filtered, removing the unreacted precursors and possible linear dimers and multimers together with the used PS resin. The filtrate was concentrated on a rotary evaporator. The crude product was purified on aluminum oxide columns (eluent, THF), then further purified using a preparative GPC system and dried at room temperature under vacuum. The recycling preparative GPC runs were carried out with THF (7.5 mL/min) at 25 °C using a JAI GPC system (model LC-9260 II Next, Japan Analytical Industry, Tokyo, Japan) equipped with a JAI JAIGEL-2.5HH column (600 × 20.0 mm), a JAI JAIGEL-2HH column (600 × 20.0 mm), and a JAI RI-700 II NEXT refractive index detector. The target cyclic product (c-PCL 90 ) was obtained. Yield: 38.9%. The c-PCL 90 product was characterized by NMR spectroscopy and GPC. M n, NMR = 10,200 g/mol; PDI = 1.08.
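The cyclization conditions just quoted can be cross-checked with simple arithmetic: precursor molarity, catalyst equivalents relative to the precursor, and total addition time. The sketch below uses only the figures stated in the text and introduces no new data:

```python
# Back-of-the-envelope check of the click-cyclization conditions quoted above:
# precursor concentration, catalyst equivalents, and total addition time.
# All input values are taken from the text; this is a consistency check only.

precursor_mol = 15.5e-6      # mol (15.5 umol of N3-PCL90-C#CH)
solution_ml   = 36.0         # mL of degassed CH2Cl2 holding the precursor
addition_rate = 0.5          # mL/h, syringe-pump injection rate
cubr_mol      = 0.837e-3     # mol CuBr (first charge)
pmdeta_mol    = 1.01e-3      # mol PMDETA (first charge)

conc_M       = precursor_mol / (solution_ml / 1000.0)  # mol/L
addition_h   = solution_ml / addition_rate             # h to inject everything
cubr_equiv   = cubr_mol / precursor_mol                # CuBr per precursor chain
pmdeta_equiv = pmdeta_mol / precursor_mol

print(f"precursor concentration: {conc_M:.5f} M")   # ~0.00043 M, as stated
print(f"total addition time:     {addition_h:.0f} h")
print(f"CuBr equivalents:        {cubr_equiv:.0f}")
print(f"PMDETA equivalents:      {pmdeta_equiv:.0f}")
```

The computed concentration reproduces the stated 0.00043 M, and the 0.5 mL/h rate implies roughly three days of continuous addition for the 36 mL precursor solution, which illustrates why such high-dilution cyclizations are slow.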
In the ¹H NMR spectrum, the triazole ring proton appeared at δ 7.31 (s, 1H). As described above, we adopted diphenyl phosphate as a metal-free, nontoxic organocatalyst and 6-azido-1-hexanol as an initiator for the ring-opening polymerization of CL. Linear PCL products were obtained in very high yields (96.5~97.1%) over the range [M]/[I] = 20~200 and showed very low PDI values (1.06~1.10), as shown in Table 1. These high yields and low PDI values are much better than those of the PCL products obtained by Sn(Oct) 2 -catalyzed polymerizations [27,28], and are further improved over those of the pseudo [2]rotaxane-initiated polymerizations with the diphenyl phosphate catalyst [10]. These results collectively indicate the following key features. First, for the polymerization of CL, diphenyl phosphate is the better catalyst in terms of catalytic performance compared to Sn(Oct) 2 ; moreover, being non-metallic, it is suitable for producing linear PCLs free of metallic residues. Second, the chemical characteristics and bulkiness of the chosen initiator can significantly influence the yield, molecular weight, and molecular weight distribution of the PCL product. The obtained α-azido-ω-hydroxy-PCLs were fully converted to α-azido-ω-ethynyl-PCLs via reactions with excess 5-hexynoic acid. The product yields ranged from 96.8 to 98.1% (Table 2); in fact, the conversion was 100% for each reaction, and the small losses occurred during the purification processes.
For the α-azido-ω-ethynyl-PCLs, intramolecular cyclizations were attempted via azide-alkyne click chemistry with the aid of CuBr and PMDETA. Similar cyclizations had previously been done at precursor concentrations of 0.00067~0.00440 M, with a dropping funnel or a syringe pump used to add the precursor solutions in a controlled dropwise manner [27,28]; in the syringe-pump case, an injection rate of 2 mL/h was used [28]. In those studies, simple precipitation methods were employed to purify the cyclic products. For high precision and control in the addition of precursor solutions, a digital syringe-pump system is much better than a dropping funnel; and to obtain c-PCL of the highest purity, a high-performance purification process, rather than a simple precipitation method, is absolutely necessary. Therefore, in our study, we decided to adopt a highly dilute precursor solution (0.00043 M), a high-precision syringe pump with a very fine needle for the controlled addition of precursor solution (0.5 mL/h injection rate), a PS-C≡CH resin treatment with subsequent filtration to remove unreacted precursors and linear byproducts, a chromatographic treatment to remove the used catalyst, and a recycling preparative GPC system to purify the target cyclic product, as described above (Figure 2). The purified c-PCL products were further characterized using an analytical GPC system calibrated with polystyrene standards (Figure 3). The c-PCL products were obtained in yields ranging from 37.7 to 41.1%; cyclic byproducts were additionally formed in yields ranging from 0.2 to 8.7% (Figure 2; Tables 2 and 3). The yield data for the c-PCL products are somewhat scattered, as are the cyclic byproduct yields. Nevertheless, the c-PCL products show a trend of slightly increasing yield with increasing precursor molecular weight, whereas the cyclic byproducts show a trend of decreasing yield with increasing precursor molecular weight. Meanwhile, the total amount of unreacted precursors and their linear dimers and multimers ranges from 52.2 to 60.9%. These results collectively provide important information on the intramolecular cyclization via the azide-alkyne click reaction, as follows.
First, the generation of byproducts could not be avoided, even with the cyclization run at very high dilution (0.00043 M, the lowest concentration among those reported in the literature) and with very slow addition of very small drops (0.5 mL/h, the slowest addition rate and smallest drop size among those reported in the literature).
Second, the target cyclic product was obtained in a relatively low yield, which is attributed both to the unavoidable amount of unreacted precursor remaining because of the very dilute precursor concentration and to byproduct formation.
Third, cyclic byproducts were formed in much lower fractions than the sum of unreacted precursors and linear byproducts (dimers and multimers).
Fourth, the unreacted precursor polymers and linear byproducts could be eliminated effectively by their reactions with alkyne- or azide-functionalized polymer resins and subsequent filtration; their removal is not feasible through a conventional, simple precipitation process alone.
Fifth, the removal of cyclic byproducts is not easy: they can be removed only via tedious, labor-intensive separation and fractionation using a preparative or analytical chromatography system, and their removal is likewise impossible through a conventional, simple precipitation process alone.
Lastly, the intramolecular azide-alkyne click cyclization method still finds it challenging to increase the target cyclic product yield and to reduce byproducts, despite the generally known high efficiency of azide-alkyne click chemistry. Moreover, effective elimination of unreacted precursors and byproducts is essential to obtain the desired cyclic polymer product in high purity; and to obtain a higher-quality cyclic polymer (namely, a cyclic product with the lowest PDI), the synthesis of a higher-quality linear precursor polymer is absolutely necessary. a Reaction mixture obtained by the click cyclization reaction. b Cyclic PCL obtained by separation and fractionation using a preparative GPC system after treatment with PS-C≡CH resin. c Cyclic byproducts obtained by separation and fractionation using a preparative GPC system after treatment with PS-C≡CH resin. d Sum of the unreacted precursor polymers and possible linear byproducts removed by treatment with PS-C≡CH resin; estimated from the yield of the cyclic PCL product and the preparative GPC analysis result.
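Footnote d states that the unreacted-precursor/linear-byproduct fraction was estimated by difference from the cyclic-product yields, which amounts to a trivial mass-balance closure. The paired percentages below are illustrative picks from the ranges quoted above, not per-sample values from Tables 2 and 3:

```python
# Mass-balance sketch for the cyclization workup: cyclic product + cyclic
# byproducts + (unreacted precursors and linear byproducts) = 100 wt %.
# The example percentages are illustrative picks from the quoted ranges.

def linear_plus_unreacted(cyclic_pct, cyclic_byproduct_pct):
    """Fraction scavenged by the PS-C#CH resin treatment, by difference."""
    return round(100.0 - cyclic_pct - cyclic_byproduct_pct, 1)

print(linear_plus_unreacted(38.9, 8.7))  # 52.4, inside the 52.2~60.9% window
print(linear_plus_unreacted(41.1, 0.2))  # 58.7, also inside the window
```

Any pairing whose remainder falls outside the reported 52.2~60.9% window would signal an inconsistency in the tabulated yields.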
Chain Structure Characteristics
Information on chain structure characteristics is necessary to understand c-PCL and its structure and properties at the molecular level, and to develop its applications. However, the chain characteristics of c-PCL have been investigated on a very limited basis. The chain characteristics of c-PCL evident in hydrodynamic volume, solution viscosity, melt viscosity, and chain mobility in the melt state are reviewed below.
Hydrodynamic Volume
GPC is widely used to determine the Mn,GPC and PDI (= Mw,GPC/Mn,GPC, where Mw,GPC is the weight-average molecular weight determined by GPC analysis) of a polymer with respect to standard polymers (linear polystyrene (PS) or poly(methyl methacrylate) (PMMA) standards are widely employed). Because of its size-exclusion ability, GPC is also a good analytical tool for probing the geometrical dimensions of c-PCLs, in addition to determining molecular weights and PDI. In this study, we have conducted GPC analysis on the series of c-PCLs and their linear precursors synthesized in high purity in the section above; here the GPC system was calibrated with linear PS standards rather than cyclic PS standards, because cyclic PS standards are not available, and THF was used as the eluent. The individual N3-PCLn-C≡CH polymers reveal Mn,GPC values almost the same as those of their N3-PCLn-OH precursors, as shown in Figure 4a (data also available in Tables 1 and 2). These results indicate that the individual N3-PCLn-C≡CH polymers have essentially the same hydrodynamic volumes in THF solution as their N3-PCLn-OH precursors, despite the difference in ω-functional groups.
In contrast, the individual c-PCL products exhibit smaller Mn,GPC values than their N3-PCLn-C≡CH precursors (Figure 4a and Table 2) even though their molecular weights (Mn,NMR values) are the same as those of their precursors (Figure 4b and Table 2). The c-PCL products reveal longer elution times (telution) than their linear counterparts (Figure 4c). Similar GPC analysis results were reported for c-PCLs and their linear analogues in the literature [10,16-18,27,28,31-35]; however, it is noted that the telution differences were influenced by the impurity levels in the c-PCLs. Overall, the GPC analysis results confirm that c-PCLs have smaller hydrodynamic volumes than their linear analogues. This hydrodynamic volume difference, due to the cyclic topology, tends to increase as the molecular weight increases.
Solution Viscosity
A series of c-PCLs and their linear analogues have been characterized as dilute solutions in THF by viscometry combined with a GPC system equipped with light scattering [18]; here all c-PCLs were synthesized by a ring-expansion method with the aid of alkyl-organoaluminum initiators. The intrinsic viscosity [η] increased with molecular weight for the c-PCLs as well as for the linear PCL analogues. In comparison, the individual c-PCLs always showed lower viscosities than their linear counterparts. These results again prove that c-PCLs have smaller hydrodynamic volumes than their linear counterparts, and they correlate well with the GPC analysis results discussed above.
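The viscosity contrast between ring and linear chains is commonly summarized by the contraction factor g' = [η]cyclic/[η]linear, whose theoretical theta-solvent value is about 0.66. A minimal sketch of this bookkeeping follows; the Mark-Houwink constants below are placeholders for illustration, not measured values for PCL in THF:

```python
def intrinsic_viscosity(m, k, a):
    """Mark-Houwink relation: [eta] = K * M**a."""
    return k * m ** a

def contraction_factor(eta_cyclic, eta_linear):
    """g' = [eta]_cyclic / [eta]_linear; below 1 for ring polymers."""
    return eta_cyclic / eta_linear

# Placeholder Mark-Houwink constants (illustrative only, not PCL/THF values)
K, A = 1.4e-4, 0.70          # K in dL/g, a dimensionless
M = 20_000                   # g/mol
eta_lin = intrinsic_viscosity(M, K, A)
eta_cyc = 0.66 * eta_lin     # apply the theoretical theta-solvent g'
print(round(contraction_factor(eta_cyc, eta_lin), 2))  # 0.66
```

Experimental g' values extracted from GPC-viscometry data such as those of ref. [18] can then be compared against this theoretical baseline.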
Melt Viscosity and Chain Mobility
Melt rheology analysis was carried out on c-PCLs and their linear counterparts with Mn,GPC = 63,400 g/mol (PDI 1.6) and 69,200 g/mol (PDI 1.8) at 60 and 80 °C; here the c-PCLs were prepared by ring-expansion polymerizations with the aid of a stannane initiator [31]. This analysis found that the melt viscosities η of the c-PCLs are lower than those of the linear analogues by nearly a factor of 2; even allowing for a low level of impurities, such lower η values of the c-PCLs may result from their shear motions (i.e., chain segmental motions and molecular diffusion) being much easier than those of the linear analogues. Such higher segmental mobilities of c-PCLs in the melt state were confirmed by NMR Hahn-echo and more advanced multiple-quantum analysis [31].
In fact, linear PCLs have been intensively investigated from the viewpoint of melt rheology [35-40]. Two research groups found that the critical molecular weight Me for the development of molecular entanglements in linear PCL is close to Mw = 2000~6000 g/mol [35-39]. However, another research group found that linear PCLs of Mw = 2000~7000 g/mol are still not completely free of entanglements [35,40]. Linear PCL is further known to have a Kuhn segment length of 0.7 nm [41]. A similar Kuhn length is expected for c-PCL; however, the cyclic topology may impose a certain degree of geometrical confinement on the chain conformation, even if its impact is low, so a slightly larger Kuhn length may be expected for c-PCL. Taking these points into consideration, the relatively lower melt viscosities of c-PCLs suggest that c-PCL has a larger Me than its linear counterpart. Furthermore, c-PCLs of Mw > 2000 g/mol may behave with a lower degree of entanglement than their linear analogues. These interesting features evidently originate from the cyclic topology of c-PCLs, which is free of end-groups. However, more research is still necessary to understand the chain characteristics of c-PCLs in the melt.
Properties
Research efforts have been made to understand the properties of c-PCL; however, they have mainly focused on hydrolysis, thermal degradation, and physical phase transitions. These properties are reviewed in this section. In addition, we discuss new results on the thermal degradation and phase transitions which we have measured for the series of c-PCL products and their linear analogues synthesized in high purity as discussed above.
Acid-Catalytic Degradation
The synthesis of c-PCL has been studied relatively intensively, as discussed above. In contrast, its catalytic degradation behavior has very rarely been investigated. For example, the degradation characteristics of a c-PCL with Mn,MALDI-TOF = 6180 g/mol and its linear azide-ethynyl precursor were examined in a methanol/dichloromethane mixture at room temperature with toluenesulfonic acid as a catalyst [28]. Samples were taken at degradation time intervals and characterized by GPC and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Overall, c-PCL degrades more slowly than the linear precursor polymer. In particular, c-PCL showed significant retardation in the early stages of degradation (up to ca. 3 h). Such retarded degradation could be attributed to the need to first open the ring structure, evidencing the effect of the cyclic topology, which is free of chain ends.
Thermal Degradation
Only one report on the thermal degradation of c-PCL is available in the literature; two c-PCLs with different molecular weights (Mn,MALDI-TOF = 4940 and 15,000 g/mol) were investigated [28]. That study concluded that the cyclic topology does not have a significant effect on thermal degradation. Overall, the thermal degradation behavior of c-PCL has been very rarely studied compared to its synthesis and to its linear forms; a greater understanding of the cyclic topology effect on the thermal degradation of PCL is required. Therefore, we conducted TGA analysis on the series of c-PCLs with different molecular weights and their linear precursors synthesized above (Tables 1 and 2). The N3-PCLn-OH precursors (Mn,NMR = 2680~24,300 g/mol) began to lose weight around 242 °C, continued with minor weight losses, and then showed major weight losses above 337 °C (Figure 5a). Namely, they revealed two-step thermal degradation under a nitrogen atmosphere: Td1,onset = 242 °C (the onset temperature of the first-step degradation) and Td2,onset = 337 °C (the onset temperature of the second-step degradation). It is noted that the first-step degradation was weak and thus discernible for some precursors but not clearly discernible for others because of their continuous weight-loss behavior. In comparison, the major degradation occurred in the second step. Similar degradation characteristics were previously observed for linear α-azido-ω-hydroxyl-PCLs (Mn,MALDI-TOF = 4940 and 15,000 g/mol) [28] and also for linear α-isopropyl-ω-hydroxyl-PCLs (Mn,osmometry = 1800~42,450 g/mol) [42]. TGA analysis coupled with mass and IR spectroscopies confirmed that the first-step degradation results from statistical rupture of the ester backbone chains via an ester pyrolysis reaction, while the second-step degradation is led by the formation of CL monomers through an unzipping depolymerization process [42].
Considering these facts, our N3-PCLn-OH precursors may follow a similar mechanism during thermal degradation, regardless of molecular weight. It follows that the weight-loss signals in the first-step degradation depend on how many volatile fragments are generated by such pyrolysis-induced chain rupture. Indeed, our results show that relatively small amounts of volatile fragments were generated in the first-step event, and their amounts varied with the precursors' molecular weights in an irregular manner.
Similar two-step degradation behavior was observed for the N3-PCLn-C≡CH precursors, even though the ω-hydroxyl end was capped with 5-hexynoic acid in addition to the azido-capped chain end (Figure 5b). These results suggest that the N3-PCLn-C≡CH precursors followed a thermal degradation mechanism similar to that of the N3-PCLn-OH precursors. Interestingly, these precursors showed a higher Td1,onset (257 °C) but a similar Td2,onset compared to their starting polymers (N3-PCLn-OH). These results confirm that the thermal stability of the linear PCL precursor polymer was improved by ethynyl-capping of one chain end in addition to azido-capping of the other. Overall, these results differ from those claimed previously in the literature [28]. The c-PCL polymers exhibited single-step rather than two-step thermal degradation, regardless of molecular weight (Figure 5c). The onset degradation temperature was 337 °C, the same as the Td2,onset values of their N3-PCLn-C≡CH and N3-PCLn-OH precursors. These TGA results indicate that the c-PCL polymers underwent a thermally-induced unzipping depolymerization process, producing volatile CL monomers. The c-PCL polymers did not undergo statistical rupture of the ester backbone chains as a prelude to the main thermal degradation, as was observed for the linear PCL polymers.
Collectively, the TGA analysis confirmed that the thermal stability of PCL can be enhanced significantly by cyclization. This result is quite different from a claim in the literature that the cyclic topology does not have a significant effect on thermal degradation [28].
Phase Transitions (I)
The phase transition behaviors of c-PCLs and their linear analogues have been investigated by several research groups [6,20,31,32], mainly by differential scanning calorimetry (DSC). All DSC results are plotted together as a function of molecular weight (in particular, Mn) in Figure 6. These plots reveal several features, and raise related questions, as follows.
First, c-PCL reveals a crystallization exotherm in DSC analysis. The crystallization temperature Tc (the temperature at which the exothermic peak reaches its maximum during cooling at 10 or 20 °C/min from the melt state) ranges from 27 to 42 °C depending on molecular weight (Figure 6a); however, the Tc values are scattered over that range. In comparison, the linear analogues show Tc values scattered over the range 19~37 °C. Overall, it is very hard to find a correlation between Tc and Mn over the molecular weight range of 2000 to 168,000 g/mol for the c-PCLs as well as their linear analogues. These features are puzzling. Such unusual results might be caused by several factors, including differences in polydispersity, impurity level, incomplete elimination of thermal history, and differences among the DSC instruments and their measurement conditions. However, it is clearly shown that for a given molecular weight, c-PCL exhibits a higher Tc than its linear analogue; the difference varies with molecular weight and with the paired sample set. In particular, the cyclic and linear pairs with Mn < 30,000 g/mol show relatively large differences in Tc compared to the higher molecular weight pairs. At this moment, this feature is not understood.
Second, during cooling from the melt state, a c-PCL of Mn = 2000 g/mol reveals an exothermic heat of fusion ΔHc (78 J/g) comparable to that (79 J/g) of its linear analogue; accordingly, they show almost the same crystallinity Xc,c. However, all other c-PCLs with Mn > 2000 g/mol exhibit lower ΔHc values than their linear counterparts; as a result, the same trend is observed for the Xc,c values of the c-PCLs and their linear analogues (Figure 6b,c). Moreover, they reveal a molecular weight dependency whereby both ΔHc and Xc,c decrease as Mn increases. A similar molecular weight dependency is observed for linear PCLs. Here, at least three questions are raised. Why does the cyclic topology effect appear in the ΔHc and Xc,c of PCL with Mn > 2000 g/mol? Why is such a topology effect not observed in the ΔHc and Xc,c of PCL with Mn = 2000 g/mol? And why do the ΔHc and Xc,c of c-PCL, as well as of its linear analogue, decrease with increasing molecular weight?
Third, c-PCL shows a crystal melting endotherm in the heating run. The crystal melting temperature Tm (the temperature at which the endothermic peak reaches its maximum during heating at 10 or 20 °C/min) ranges from 53 to 63 °C (Figure 6d). In comparison, the linear analogues show Tm values over the range 48~62 °C. These results indicate that neither c-PCL nor its linear analogue reveals a regular dependency of Tm on molecular weight. Furthermore, in three pairs the c-PCLs show lower Tm values than their linear analogues, while in the other pairs the c-PCLs exhibit higher Tm values. Collectively, these observations suggest that more quantitative studies are needed to understand the crystal melting transitions of c-PCLs and their linear analogues from the viewpoint of the cyclic topology effect. Fourth, the endothermic heat of fusion ΔHm and crystallinity Xc,m of c-PCL measured in a heating run range from 53~63 J/g and 48~68%, respectively, depending on the samples and molecular weights (Figure 6e,f). Both ΔHm and Xc,m tend to decrease roughly with increasing molecular weight; in fact, they vary with molecular weight in an irregular manner for the c-PCLs with Mn < 30,000 g/mol. Similar trends in ΔHm and Xc,m are observed for the linear PCLs. These unusual dependencies of ΔHm and Xc,m on molecular weight are not yet understood. Furthermore, some c-PCLs exhibit lower ΔHm and Xc,m values than their linear analogues, whereas the others exhibit larger values. These results are also very hard to understand from the standpoint of the cyclic topology effect.
Sixth, additional research efforts have been made to determine the equilibrium melting temperature T°m. Linear PCLs were estimated to have T°m values ranging from 69 to 98 °C [20,32,35,48-50]. In comparison, c-PCLs were determined to have T°m values ranging from 81 to 92 °C [20,32,35]. A cyclic topology effect may thus be discernible in the T°m of PCL, but its impact does not seem significant. More detailed investigation may be necessary.
Lastly, linear PCLs are known to exhibit glass transitions at about −60 °C [37,38]. However, the glass transition behavior of c-PCL has not yet been investigated in detail.
Phase Transitions (II)
As discussed above, the cyclic topology effect has not been clearly discernible in the thermal transitions of PCL. Furthermore, the thermal transition behaviors of c-PCL itself, as well as of linear PCL, are not understood in detail. Thus, in this study we investigated them quantitatively with a series of c-PCLs and their linear analogues. The DSC analysis results are shown in Table 4 and Figure 7. a Tables 1 and 2. b Number-average molecular weight determined by 1H NMR spectroscopy analysis (Tables 1 and 2). c Polydispersity index determined by GPC analysis calibrated with PS standards (Tables 1 and 2). d Crystallization temperature at the peak maximum of the crystallization transition in DSC analysis with a cooling rate of 10.0 °C/min. e Heat of fusion for crystallization in a cooling run from the melt at 10.0 °C/min. f Crystallinity estimated from the heat of fusion of crystallization by assuming ΔH°c = −139.5 J/g. g Temperature at the peak maximum of the crystal melting transition in DSC analysis with a heating rate of 10.0 °C/min. h Heat of fusion for crystal melting in a heating run at 10.0 °C/min. i Crystallinity estimated from the heat of fusion of crystal melting by assuming ΔH°m = 139.5 J/g [51].
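The crystallinity values in Table 4 follow directly from the footnoted relation Xc = |ΔH| / ΔH° × 100 with ΔH° = 139.5 J/g for 100% crystalline PCL [51]. A minimal check that reproduces the reported range:

```python
DELTA_H_100 = 139.5  # J/g, heat of fusion of 100% crystalline PCL [51]

def crystallinity(delta_h):
    """Percent crystallinity from a measured heat of fusion (J/g)."""
    return abs(delta_h) / DELTA_H_100 * 100.0

# Reproduce the c-PCL crystallization range reported in Table 4
print(round(crystallinity(-87.3), 1))  # 62.6
print(round(crystallinity(-88.7), 1))  # 63.6
```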
In nonisothermal crystallization at a cooling rate of 10.0 °C/min from the melt state, c-PCL reveals a Tc of 37.4~38.6 °C, a ΔHc of −87.3~−88.7 J/g, and an Xc,c of 62.6~63.6% over the molecular weight range of 2770 to 23,270 g/mol. These results collectively indicate that the phase transition characteristics of c-PCL due to non-isothermal self-assembly show no molecular weight dependence over the range considered. This feature is quite unique and differs, in part or in full, from those reported in the literature [6,20,31,32]. In light of our results, the variations in the Tc, ΔHc, and Xc,c of c-PCL with molecular weight reported in the literature might in fact originate from substantial, non-negligible amounts of impurities (for example, unreacted precursors and others). In the case of linear PCLs, no molecular weight dependence of the phase transition characteristics in non-isothermal self-assembly is observed over the molecular weight range 2770 to 23,200 g/mol. These results likewise differ, in part or in full, from those that have appeared in the literature [6,20,31,32]. Thus, the variations in thermal characteristics reported in the literature may be a strong indication that those linear PCL products still included certain levels of impurities and had relatively large PDI values.
However, the c-PCLs clearly reveal a higher Tc (ca. 5 °C higher), a larger ΔHc (ca. 10 J/g larger), and a larger Xc,c (ca. 14% larger) than the linear PCL analogues. These results collectively confirm that the cyclic topology effect is quite significant in the crystallization-induced phase transition behaviors of PCL.
On melting, at a heating rate of 10.0 °C/min, of the nonisothermally crystallized crystals, c-PCL exhibits a Tm of 60.9~62.6 °C, a ΔHm of 86.8~88.6 J/g, and an Xc,m of 62.2~63.5% over the molecular weight range 2770 to 23,200 g/mol. These phase-transition characteristics again show no molecular weight dependence, and none is observed for the crystal melting transition characteristics of the linear PCL analogues either. However, the c-PCL polymers clearly reveal a higher Tm (ca. 6 °C higher), a larger ΔHm (ca. 10 J/g larger), and a larger Xc,m (ca. 14% larger) than the linear PCL analogues. These comparisons confirm that the cyclic topology effect is also quite significant in the crystal melting transition behaviors of PCL. Overall, this study again confirms that our results differ, in part or in full, from those reported in the literature [6,20,31,32].
In this study, we have additionally found interesting new features of c-PCLs and their linear analogues as follows.
First, c-PCL20 is an oligomer with an Mn of only 2770 g/mol and a DPn (number-average degree of polymerization) of 20; nevertheless, it reveals crystallization- and crystal-melting-induced phase transition characteristics almost the same as those of the higher molecular weight c-PCLs. These results suggest that in c-PCL crystals, crystallites of a certain thickness (namely, the lamellar crystal thickness) are sufficiently well developed even with cyclic chains of DPn = 20. The formation of such crystallites appears to be very fast in the low molecular weight polymer as well as in the high molecular weight polymers; thus, the crystallite formation kinetics cannot be distinguished by varying the molecular weight. Furthermore, a surface of a certain quality seems to form on such crystallites regardless of molecular weight; namely, the surface characteristics of the crystallites do not vary significantly with molecular weight.
Second, N3-PCL20-C≡CH is also an oligomer, with an Mn of 2770 g/mol and a DPn of 20, yet it exhibits crystallization- and crystal-melting-induced phase transition characteristics almost the same as those of its higher molecular weight analogues. These interesting results likewise suggest that crystallites of a certain thickness develop sufficiently even with linear chains of only DPn = 20. Such crystallites may result from a very fast crystallization rate, and such fast crystallization kinetics cannot be distinguished by molecular weight. The crystallite surface characteristics would not easily be differentiated by varying molecular weight either.
Third, all N3-PCLn-OH polymers exhibit almost the same phase transition behaviors as the N3-PCLn-C≡CH polymers. Therefore, very similar crystallite and surface characteristics are expected for the N3-PCLn-OH polymers. Namely, the phase transition characteristics are apparently independent of the ω-hydroxyl and ethynyl end groups. This surprising behavior may be driven by the very fast crystallization rate of the PCL main chain; under such fast crystallization, the effect of the chain end groups cannot easily be discerned.
Fourth, ΔHc ≅ ΔHm for a given c-PCL as well as for a chosen linear PCL. These results are good evidence that the crystallization rate is very fast for c-PCLs as well as for their linear analogues.
Fifth, such fast crystallization characteristics of the linear PCLs, as well as of their cyclic analogues, may originate from a highly optimized chain mobility (which is essential for self-assembly) due to their short persistence length, or Kuhn length (namely, high chain flexibility; this information is available in the literature [38]), and from favorable interchain and intrachain interactions through the ester linkers in the repeat units.
Sixth, one question arises here: why does c-PCL always reveal a higher Tc (ca. 5 °C higher) and a larger ΔHc (ca. 10 J/g larger) than its linear analogues in non-isothermal crystallization from the melt? Several factors must be considered: (i) the degree of supercooling ΔTc (set by T°m); (ii) chain mobility; (iii) degree of entanglement; (iv) degree of molecular preordering; and (v) the presence or absence of chain ends. In general, a higher T°m in a polymer can induce a higher Tc, giving a larger ΔTc in crystallization. Taking this into account, c-PCL may have a higher T°m than its linear analogues; a higher T°m (2 to 11 °C higher) has indeed been reported for c-PCL [20,33], but this remains controversial [48-50]. Crystallites with a lower ΔHc are generally formed under crystallization conditions with a higher ΔTc; however, c-PCLs reveal a larger ΔHc than their linear analogues. The observation of such a larger ΔHc suggests that the c-PCLs underwent much faster crystallization than their linear analogues; faster crystallization rates have indeed been reported for c-PCLs [6,20,33,42]. c-PCLs have been reported to reveal higher chain mobility (i.e., faster diffusion) in the melt state than their linear analogues because of their compact size, which arises from the cyclic chain system without chain ends. This factor may make a positive contribution to the larger ΔHc values of the c-PCLs. In fact, it is well known that the two ends of a linear polymer can accelerate chain mobility through their high entropic gains. However, it seems that the impact of the two chain ends on the chain mobility of linear PCL is relatively small compared with that of the compact, end-free dimensions on the chain mobility of its cyclic analogue. The faster diffusion of c-PCLs could also be attributed to a lower degree of entanglement (or to disentanglement), in addition to the compact geometrical dimensions.
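The supercooling argument in factor (i) can be made concrete with ΔTc = T°m − Tc. Using T°m values quoted in this review (91.2 °C for c-PCL [32] and 80.0 °C for linear PCL) and Tc values approximated from the nonisothermal data in Table 4 (ca. 38 °C cyclic, ca. 33 °C linear; the exact pairing of these numbers is our assumption), a rough comparison:

```python
def supercooling(t_m_eq, t_c):
    """Degree of supercooling: equilibrium melting temperature minus
    crystallization temperature, both in deg C."""
    return t_m_eq - t_c

# T_m(eq) values quoted in this review; T_c approximated from Table 4
dT_cyclic = supercooling(91.2, 38.0)
dT_linear = supercooling(80.0, 33.0)
print(round(dT_cyclic, 1), round(dT_linear, 1))  # 53.2 47.0
```

On this reading, the cyclic samples crystallize under roughly 6 °C more supercooling, consistent with the claim of ref. [32] that supercooling, and not chain mobility alone, contributes to the faster crystallization of rings.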
In particular, all c-PCLs of this study were synthesized under very dilute conditions, so they are largely disentangled or only lightly entangled. c-PCLs might also be in a preordered state, rather than a completely random-coil-like state, because the ester linkers in the repeat units can induce a certain level of attractive intra- and intermolecular interactions. The presence of such preordered polymer chains can make a positive contribution to faster crystallization and to higher ordering in the resulting crystallites. This effect is more favorable for c-PCLs than for their linear analogues, whose two chain ends can have a significant negative impact on such chain preordering. All the factors considered here originate in the absence of chain ends in c-PCLs; namely, the cyclic topological effect in PCL is significant.
Lastly, another question arises: why does c-PCL always exhibit a higher Tm (ca. 6 °C higher) with a larger ΔHm (ca. 10 J/g larger) than its linear analogues in the crystal melting process? In general, higher-quality crystals in a polymer reveal a higher Tm and a larger ΔHm. Therefore, the higher Tm and larger ΔHm of c-PCLs may be attributed to crystals possibly associated with (i) larger crystallite thickness; (ii) higher lateral ordering; (iii) a lower defect level in the crystallites and their surfaces; and (iv) a positive contribution of the entropy term originating from the cyclic topology free of chain ends, compared with their linear analogues. However, these speculations still need to be confirmed experimentally. Moreover, to rank these factors by their contributions, more information is necessary.
Crystallization Characteristics
Córdova et al. [35] and Pérez [6] reported isothermal crystallizations of c-PCLs with Mn,MALDI-TOF = 2000~22,000 g/mol (prepared by azide-alkyne click cyclization) and their linear analogues with α-azido and ω-hydroxyl or ω-acetyl end groups (synthesized with the aid of a tin octanoate catalyst) over the temperature range of 38-56 °C using DSC and polarized light optical microscopy (PLOM). They found that regardless of chain topology, all oligomeric PCLs crystallize, forming spherulites instantaneously (Avrami exponent n = 3) or sporadically (n = 4). For the linear analogues, the crystallization behavior was not influenced by the difference between the ω-end groups. However, the c-PCLs always showed faster crystallization rates than their linear counterparts. The PLOM analysis further found that the nucleation density at saturation (long times) was 7% higher for c-PCL (Mn,MALDI-TOF = 4900 g/mol) than for the linear counterpart at the same Tc (54 °C); the nucleation rate was much higher for c-PCL, a clear sign of a more instantaneous nucleation process in c-PCL compared with the linear counterpart. Overall, the nucleation and spherulite growth analysis by PLOM, as well as the DSC data analysis with the Lauritzen-Hoffman theory, confirmed that the faster overall crystallization rates of c-PCLs are mainly driven by the crystal growth rates and attributable in part to the nucleation rates. T°m was determined to be 81 °C for the c-PCLs and 80 °C for the linear analogues by Hoffman-Weeks analysis. Therefore, the authors attributed the faster crystallization rates to the higher chain diffusion (i.e., mobility) of c-PCLs in their more collapsed conformations (due to the cyclic topology free of chain ends), rather than to differences in the degrees of supercooling (ΔT) arising from the T°m values associated with the chain topologies. The above PCL samples were reexamined by Su et al.
[32]. They measured T°m = 91.2 °C for the c-PCLs (much higher than the 81 °C reported previously [35]) and 80.0 °C for the linear analogues using Thomson-Gibbs analysis. Namely, they found a large difference (11.2 °C) between the T°m values of the c-PCLs and the linear counterparts. Thus, they claimed that the faster crystallization rates of c-PCLs at given temperatures could be attributed to the significantly larger degree of supercooling. Moreover, Wang et al. [8] and Li et al. [7] measured slower crystallization rates for c-PCLs (Mn,MALDI-TOF = 2000 and 4900 g/mol) in the low-temperature region than for the linear counterparts. They claimed that this result could be attributed to the higher mobility of the free chain ends in the linear analogue compared with the c-PCL. Such faster crystallization rates of linear PCLs were not observed for a linear PCL with Mn,MALDI-TOF = 22,000 g/mol because of the diluted chain-end effect in long-chain polymers. NMR and DSC analyses were conducted on the crystallization of c-PCLs with Mn,GPC = 50,000~80,000 g/mol and PDI 1.6~2.1 (prepared by ring-expansion polymerization with a cyclic tin catalyst, and thus containing a tin linker in the ring) and their linear analogues [31]. The analyses confirmed relatively higher crystallinities for the isothermally crystallized c-PCLs than for the linear PCL analogues. That study attributed the higher crystallinities to the higher chain segmental mobility of the c-PCLs, measured in the melt state by NMR and rheology analyses. The enhanced overall segmental mobility in the melt could lead to a more perfect morphology, resulting in higher crystallinity. Crystallization analysis with the aid of nucleating agents further found that the c-PCLs underwent faster crystal growth than the linear analogues.
Together with X-ray scattering, DSC analysis was extended to the crystallization behaviors of c-PCLs with Mw,light scattering = 75,000~142,000 g/mol (weight-average molecular weight measured by light scattering) and PDI 1.83~2.03 (prepared by zwitterionic polymerization and cyclization, and thus free of heterogeneous linkers such as tin and triazole) and their linear counterparts [20]. This study determined T°m = 84.2 °C for c-PCL and 82.0 °C for linear PCL by Hoffman-Weeks analysis, but T°m = 121 °C for c-PCL and 97 °C for linear PCL by Thomson-Gibbs analysis. The study confirmed that all c-PCLs crystallized more rapidly than the linear analogues. X-ray scattering analysis found that the c-PCLs, as well as the linear counterparts, show increases in crystal thickness as the crystallization temperature increases. Overall, this study concluded that the cyclic topology can have a significant influence on the rate of crystallization from the melt but does not significantly influence the crystal structure or morphology.
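The two extrapolation schemes differ in their input data: Hoffman-Weeks extrapolates observed Tm versus Tc to the Tm = Tc line, while Thomson-Gibbs extrapolates Tm versus reciprocal lamellar thickness, which partly explains the discrepant T°m values above. A minimal Hoffman-Weeks sketch on synthetic data (the points below are illustrative, generated from an assumed T°m of 84.2 °C, not the published measurements):

```python
def hoffman_weeks_tm0(tc, tm):
    """Least-squares fit tm = a + b*tc, then T_m0 = a / (1 - b),
    the intersection of the fitted line with the tm = tc line."""
    n = len(tc)
    mx, my = sum(tc) / n, sum(tm) / n
    b = sum((x - mx) * (y - my) for x, y in zip(tc, tm)) / \
        sum((x - mx) ** 2 for x in tc)
    a = my - b * mx
    return a / (1.0 - b)

# Illustrative data generated with T_m0 = 84.2 degC and slope b = 0.4:
# tm = T_m0 * (1 - b) + b * tc
tc = [38.0, 42.0, 46.0, 50.0, 54.0]
tm = [84.2 * 0.6 + 0.4 * t for t in tc]
print(round(hoffman_weeks_tm0(tc, tm), 1))  # 84.2
```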
As reviewed above, all studies found that at a given crystallization temperature from the melt, c-PCLs always exhibited faster crystallization rates than their linear counterparts, although this has been investigated only on a limited basis and for rings both free of and containing heterogeneous linkers. However, the interpretations and understanding of these crystallization behaviors remain controversial. Regarding the degree of supercooling in crystallization, the T°m values determined for c-PCL, as well as for linear PCL, are also controversial. Therefore, detailed investigation is still needed to understand the crystallization behaviors of c-PCL in depth.
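The Avrami exponents quoted in the isothermal studies above (n = 3 for instantaneous and n = 4 for sporadic nucleation with three-dimensional growth) come from fitting the relative crystallinity to 1 − X(t) = exp(−k·t^n). A minimal sketch that recovers n from synthetic isothermal data by the usual double-log linearization:

```python
import math

def avrami_exponent(times, X):
    """Fit ln(-ln(1 - X)) = ln k + n ln t by least squares; return n."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - x)) for x in X]
    n_pts = len(xs)
    mx, my = sum(xs) / n_pts, sum(ys) / n_pts
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data with k = 1e-3 min^-3 and n = 3 (instantaneous nucleation)
k, n = 1e-3, 3
times = [2, 4, 6, 8, 10]                       # min
X = [1 - math.exp(-k * t**n) for t in times]   # relative crystallinity
print(round(avrami_exponent(times, X), 2))  # 3.0
```

In practice the fit is restricted to the primary-crystallization window (roughly 3-20% conversion), since secondary crystallization bends the Avrami plot.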
Morphological Characteristics
In contrast to the synthesis and properties of c-PCLs, their morphological characteristics have rarely been investigated. Only two reports are available in the literature, as reviewed below.
Transmission Electron Microscopy (TEM), Electron Diffraction and Atomic Force Microscopy (AFM) Analyses
Transmission electron microscopy (TEM) and electron diffraction analyses, as well as atomic force microscopy (AFM) analysis, were conducted on the lamellar crystals of a c-PCL (which was prepared by azide-alkyne click cyclization; Mn,MALDI-TOF = 2320~7000 g/mol and PDI = 1.08~1.13) and its linear analogues with α-azido and ω-hydroxyl or ω-acetyl end groups (which were synthesized with the aid of a tin octanoate catalyst; Mn,MALDI-TOF = 2040~7340 g/mol and PDI = 1.11~1.17), grown successfully in n-hexanol and N,N-dimethylacetamide (DMAc) solutions [32]. The TEM analysis found that both the c-PCL and its linear counterparts formed distorted hexagonal lamellar crystals 2~10 µm in size through crystallization in the dilute solutions. The solution-grown c-PCL crystal revealed a typical single-crystal electron diffraction image, identical to those of the linear counterpart crystals; more than 25 reflection spots were clearly discernible and indexed with an orthorhombic lattice (a = 0.747 nm, b = 0.498 nm and c = 1.705~1.729 nm) with a space group of P2₁2₁2₁. These single crystals exhibited relatively higher Tm values (3~4 °C higher) compared to those of the melt-crystallized ones. The AFM analysis found that the interlamellar distance L (i.e., long period) ranged from 5.4 to 13.2 nm for the c-PCL crystals and from 8.7 to 11.6 nm for the linear PCL crystals (here, α-azido-ω-hydroxyl-PCL crystals). Both c-PCL and linear PCL crystals showed a trend of increasing L with increasing molecular weight. The c-PCL of Mn,MALDI-TOF = 2320 g/mol made relatively thinner lamellar crystals (L = 5.4 nm) than those (8.7 nm) of the linear counterpart. In contrast, the c-PCLs of Mn,MALDI-TOF > 2320 g/mol formed thicker lamellar crystals than their linear counterparts.
These results collectively suggest that a cyclic topology effect is present in the solution-grown morphology of PCL, but that its impact varies, negatively or positively, with molecular weight; these statements, however, rest on very limited observations. Thus, more detailed investigation is still needed to understand the cyclic topology effect on the solution-grown lamellar structure of PCL.
Transmission Small-Angle X-ray Scattering (TSAXS) Analysis
Shin et al. [20] reported a time-resolved transmission small-angle X-ray scattering (TSAXS) analysis of the crystallization of c-PCLs (Mw,light scattering = 75,000~142,000 g/mol and PDI = 1.83~2.03; all were prepared by zwitterionic polymerization and cyclization) and their linear counterparts. They analyzed the measured TSAXS data by adopting a correlation-function approach with a two-phase model. The TSAXS analysis confirmed that c-PCLs formed lamellar structures via crystallization from the melt, as the linear PCLs did. During the isothermal crystallization of c-PCL at a chosen temperature Tc, the long period L (i.e., the interlamellar distance) decreased as the crystallization time increased, but the crystal thickness lc apparently did not vary with crystallization time. The lc value increased slightly with Tc in an irregular manner; namely, the lc values scattered with varying Tc. This Tc dependency of lc may correlate directly with the unusually high Tm° (121 °C) of c-PCL estimated by Gibbs-Thomson analysis. Similar morphological characteristics were observed for the linear PCLs; however, their lc value increased with Tc in a regular rather than an irregular manner. For specimens crystallized isothermally at a chosen Tc, c-PCL revealed L and lc values very similar to those of the linear PCL. For example, both c-PCLs and their linear counterparts crystallized at 45 °C showed L = ca. 20 nm and lc = ca. 9 nm. Overall, this study concluded that the lamellar structure characteristics of PCL were insensitive to the cyclic topology.
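The long period quoted from such TSAXS data is commonly read off the Lorentz-corrected profile as L = 2π/q_max; the study itself used the more rigorous correlation-function approach, so the following is only a simplified sketch on a synthetic single-peak profile (the peak position is chosen arbitrarily to land near the L ≈ 20 nm quoted above).

```python
import numpy as np

def long_period_nm(q, intensity):
    """Estimate the long period L from a 1-D SAXS profile using the
    position of the maximum of the Lorentz-corrected intensity I(q)*q^2:
    L = 2*pi / q_max, with q in nm^-1."""
    lorentz = intensity * q**2
    q_max = q[np.argmax(lorentz)]
    return 2.0 * np.pi / q_max

# Synthetic profile with a single interference peak near q = 0.314 nm^-1,
# i.e. L = 2*pi/0.314 ~ 20 nm (illustrative, not measured data).
q = np.linspace(0.05, 1.0, 500)
intensity = np.exp(-((q - 0.314) / 0.05) ** 2) / q**2

L = long_period_nm(q, intensity)
print(round(L, 1))
```

The correlation-function route additionally yields the crystal thickness lc from the shape of the self-correlation triangle, which the simple peak-position estimate above cannot provide.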
Su et al. [32] extended TSAXS analysis to a c-PCL (Mn,MALDI-TOF = 7000 g/mol and PDI = 1.11) and its linear analogue (α-azido-ω-acetyl-PCL; Mn,MALDI-TOF = 7340 g/mol and PDI = 1.13) crystallized isothermally at various temperatures. The c-PCL showed a trend in which L increased almost linearly from 12.4 to 14.9 nm as Tc was increased from 20 to 55 °C. In the case of the linear PCL, L also increased with increasing Tc but varied in an irregular manner; the L values ranged from 13.5 to 15.0 nm over Tc = 25~53 °C. The lc value was further estimated in a qualitative rather than a quantitative way. For c-PCL, lc increased linearly from 7.25 to 8.75 nm as Tc was increased from 20 to 55 °C. For the linear PCL, lc also increased linearly, from 7.30 to 8.40 nm over Tc = 25~53 °C. Considering the irregular variations of L with Tc, such linear increases of lc with Tc are quite surprising. Overall, c-PCL revealed relatively smaller L values than the linear PCL but slightly larger lc values. These results collectively indicate that the morphological characteristics of PCL could be influenced by the cyclic topology.
As reviewed above, only two X-ray scattering studies have been reported for c-PCLs so far. Unfortunately, their results are contradictory with respect to the cyclic topology effect. Therefore, more detailed X-ray scattering analysis is still necessary to better resolve the cyclic topology effect on the morphological structure of PCL.
Summary and Perspectives
Several synthetic schemes have been successfully developed to produce c-PCLs, as a result of rigorous research effort: (a) two ring-expansion polymerization schemes; (b) seven intramolecular cyclization schemes; (c) one intermolecular cyclization scheme. In addition, it was demonstrated that a comprehensive, highly efficient purification process, combining several essential workup methods including chromatographic separation and purification, is essential for obtaining high-purity c-PCL. Nevertheless, several issues remain in the synthesis of c-PCL. First, all of the developed synthetic methods still struggle to minimize or completely eliminate byproducts. Second, the synthetic schemes using metal-containing initiators and/or catalysts need further development to produce metal-free c-PCL products. Third, the cyclization schemes need further improvement in reaction yield. Lastly, a more efficient workup process is needed to produce high-purity c-PCL products in large quantities.
The chain structure characteristics and properties of c-PCL have been examined on a limited basis: hydrodynamic volume, solution and melt viscosities, chain mobility in the melt, hydrolysis, thermal degradation, crystallization, nucleation, spherulitic growth, crystal melting, equilibrium melting temperature, and equilibrium heat of fusion. The effect of cyclic topology was evident in the chain characteristics (hydrodynamic volume, solution and melt viscosities, and chain mobility in the melt) as well as in some of the properties; for the other properties, however, the reported cyclic topology effects are contradictory. The morphological structures of c-PCLs and their linear counterparts have been investigated on a very limited basis, and the structural results are likewise contradictory with respect to the cyclic topology effect. Such conflicting results may be attributed in part to non-negligible amounts of impurities in the c-PCL products. Therefore, more quantitative and comprehensive investigations of high-purity c-PCLs and their linear counterparts are urgently required to gain a better understanding of these materials and to develop appropriate applications.
Author Contributions: M.R. and H.K. designed the research and initiated the study. W.R. synthesized all polymers; L.X. and W.R. characterized the materials. W.R. and L.X. did literature surveys in a comprehensive manner. L.X. and M.R. prepared the manuscript. All authors contributed to the discussions and finalization of the manuscript.
Exploration of quality variation and stability of hybrid rice under multi-environments
Improving quality is an essential goal of rice breeding and production. However, rice quality is not solely determined by genotype but is also influenced by the environment. Phenotype plasticity refers to the ability of a given genotype to produce different phenotypes under different environmental conditions and can serve as a measure of trait stability. Seven quality traits of 141 hybrid combinations, derived from the test-crossing of 7 thermosensitive genic male sterile (TGMS) lines and 25 restorer lines, were evaluated at 5 trial sites in Southern China, with three to five intermittent sowings per site. In the Yangtze River Basin, delaying the sowing time of hybrid rice combinations was observed to improve their overall quality. Twelve parents were identified with lower plasticity general combining ability (GCA) values, indicating an increased ability to produce hybrids with more stable quality. Parents with superior quality tend to exhibit lower GCA values for plasticity. The genome-wide association study (GWAS) identified 13 and 15 quantitative trait loci (QTLs) associated with phenotype plasticity and BLUP measurements, respectively. Notably, seven QTLs simultaneously affected both phenotype plasticity and the BLUP measurement. Two cloned rice quality genes, ALK and GL7, may be involved in controlling the plasticity of quality traits in hybrid rice. The direction of the genetic effect of QTL6 (ALK) on alkali spreading value (ASV) plasticity varies across cropping environments. This study provides novel insights into the dynamic genetic basis of quality traits in response to different cropping regions, cultivation practices, and changing climates. These findings establish a foundation for precise breeding and production of stable and high-quality rice. Supplementary Information The online version contains supplementary material available at 10.1007/s11032-024-01442-3.
Introduction
Rice is one of the most important staple crops globally, and hybrid rice technology has greatly increased rice production, ensuring global food security. Along with the progression of society and the enhancement of living standards, there has been a gradual increase in the demand for rice quality (Li et al. 2023). Rice quality is an important and complex trait, comprising eating and cooking quality, milling quality, appearance quality, and nutritional quality. The quality of rice directly impacts its commercial value and palatability. Improving the quality of hybrid rice has always been a goal pursued by breeders. Over the years, both traditional and modern molecular breeding techniques have been employed to steadily enhance the quality of hybrid rice (Tian et al. 2009; Zhang et al. 2016). However, the quality traits of rice are susceptible to environmental factors such as light, temperature, and humidity, resulting in variations (Liu et al. 2013; Lu et al. 2022). Therefore, mitigating the impact of environmental factors on rice quality and enhancing its stability is also a key concern in improving hybrid rice.
Phenotype plasticity refers to the ability of the same genotype to produce different phenotypes in different environments (Sultan 2000), reflecting the relationship between organisms and their environment, and is widespread in plants (Bradshaw 1965). Phenotype plasticity is related to the adaptability and stability of plants (Chevin et al. 2013; Finlay and Wilkinson 1963). From an evolutionary perspective, varieties with high phenotype plasticity exhibit stronger adaptability to the environment (Des Marais et al. 2013; Bonamour et al. 2019). However, in the context of crop production, plants with lower phenotype plasticity exhibit greater stability. Consequently, implementing techniques that decrease the phenotype plasticity of crops and enhance their stability is crucial for enabling the expression of desired traits across a broader range of locations.
Phenotype plasticity is under genetic control and can be targeted for artificial improvement in crop breeding (Gage et al. 2017). To achieve this goal, building on scientific quantification methods for plasticity, efforts have been made to study the genetic architecture of crop plasticity and to map the underlying QTLs in various crops (Wang et al. 2015a; Kadam et al. 2017; Jin et al. 2023). Exploratory studies on rice have revealed the genetic structure and potential QTLs underlying the plasticity of yield-related traits (Kikuchi et al. 2017; Mu et al. 2022). Although the precise functions of these QTLs remain unclear, they provide new insights and methods for artificially selecting and improving crop phenotype plasticity. Quality-related traits are influenced by the environment and thus exhibit phenotype plasticity. However, previous studies on phenotype plasticity in rice have primarily focused on yield-related traits, and there has been limited research on the patterns and genetic architecture of phenotype plasticity in quality traits, which are crucial for breeding advancements. Enhancing rice quality stability has implications for enhancing the potential and commercial value of rice varieties. Therefore, understanding the patterns of phenotype plasticity and the genetic structure of quality-related traits in rice will provide better references for breeding improvements.
Parent selection is crucial in hybrid rice breeding (Chen et al. 2019). However, selecting ideal parental materials from a large population to induce strong heterosis is a significant challenge. Therefore, breeders use combining ability to assess the breeding value of parental materials in hybrid production (Sprague and Tatum 1942). By identifying the combining ability of parents for phenotype plasticity, breeders can predict the performance of hybrid combinations, thereby enhancing the efficiency and stability of hybrid rice production (Abd El-Aty et al. 2022).
Large-scale phenotypic analysis is an important foundation for plasticity research. In this study, a total of 141 hybrid rice combinations were obtained from 7 TGMS lines and 25 restorer lines. These combinations were planted at five locations in Southern China in the 2020 summer season, with 3 to 5 intermittent sowings arranged at each trial location. Seven quality traits and their phenotype plasticity were investigated, including amylose content (AC), alkali spreading value (ASV), gel consistency (GC), chalkiness degree (CD), percentage of grains with chalkiness (PGWC), transparency (TP), and milled rice ratio (MRR). We analyzed the combining ability of phenotype plasticity for 32 parental materials and elucidated the genetic structure of phenotype plasticity for quality-related traits. Genetic effect and candidate gene analyses were conducted on the identified QTLs. The statistical results suggest that delaying the sowing date is beneficial for enhancing rice quality in the Yangtze River basin. We used a model to evaluate the phenotype plasticity of each hybrid and to identify TGMS and restorer lines that confer improved quality stability in hybrid rice breeding. Furthermore, our research uncovered the genetic basis of phenotypic plasticity in rice quality traits, discovered QTLs associated with quality plasticity, and predicted candidate genes. These findings offer theoretical guidance for determining the optimal sowing date of high-quality rice and enhancing the quality stability of hybrid rice.
The field experiments were conducted in the summer season of 2020 at five locations: SC-GH (Guanghan, Sichuan Province, 104°25′ E, 30°99′ N), HN-LS (Lingshui, Hainan Province, 109°45′ E, 18°22′ N), HN-CS (Changsha, Hunan Province, 112°59′ E, 28°12′ N), HB-EZ (Ezhou, Hubei Province, 114°52′ E, 30°23′ N), and AH-HF (Hefei, Anhui Province, 117°17′ E, 31°52′ N). Among the five trial locations, HN-LS is situated in the south China rice cropping region, SC-GH is located in the upper Yangtze River rice cropping region, and AH-HF, HN-CS, and HB-EZ are situated in the middle and lower Yangtze River rice cropping region. In SC-GH, three intermittent sowings were arranged from April 1 to May 1, with a 10-day interval between sowings. In HB-EZ and AH-HF, four intermittent sowings were arranged from April 15 to June 1, with a 15-day interval between sowings. In HN-CS, five intermittent sowings were arranged from April 10 to June 10, with a 15-day interval between sowings. In HN-LS, four intermittent sowings were arranged from June 15 to July 30, with a 15-day interval between sowings (Supplementary Table S3). Seedlings at the 5-leaf stage were transplanted. Each material was planted in a five-row plot with eight individuals per row at a spacing of 20 cm × 26.5 cm, with standard management practices applied throughout the growing period. At maturity, nine uniform plants in the middle of each plot were harvested.
Parental DNA extraction and whole genome sequencing
After germination of parental seeds, young leaves were collected and immediately flash-frozen in liquid nitrogen, and the samples were stored at −80 °C for future use. DNA extraction was performed using the FastPure Plant DNA Isolation Mini Kit (Vazyme, Jiangsu, China). The concentration of the extracted DNA was evaluated using a NanoDrop spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA) and a Qubit 3.0 fluorometer (Life Technologies, Carlsbad, CA, USA). To assess the purity and integrity of the DNA, 1% agarose gel electrophoresis was conducted. For library preparation, a short-read library with a DNA-fragment insert size of 200-400 bp was generated using 1 μg of genomic DNA. Library preparation was carried out following the manufacturer's instructions using a library preparation kit compatible with DNBSEQ-T7 (BGI, Shenzhen, China). Subsequently, paired-end (PE) sequencing was performed on a DNBSEQ-T7 platform using the PE 150 model.
Measurement of quality traits
A minimum of 200 g of seeds from each accession was used for measuring the seven quality traits: AC, ASV, GC, CD, PGWC, TP, and MRR. CD, PGWC, and TP were measured according to NY/T 2334-2013 (Chinese Ministry of Agriculture standard), using a Microtek Scan Wizard EZ scanner and the rice quality analyzer SC-E software (Hangzhou Wanshen Detection Technology Co., Ltd., Hangzhou, China). AC, GC, and ASV were measured according to Chinese National Standards GB/T 15683-2008, GB/T 22294-2008, and NY/T 83-2017, respectively.
Measurement of overall quality and meteorological condition
To evaluate the overall quality of the hybrids, we followed the quality standards for cooking rice varieties set by the Chinese Ministry of Agriculture and Rural Affairs (NY/T 593-2021). Here, a record is defined as the quality performance of a specific hybrid planted at a given trial location on a particular sowing date. We assessed the quality of all 2774 records in our experiment by applying the standard to each of them.
To analyze the correlation between quality characteristics and meteorological conditions, we summarized the meteorological data collected at different times within each day into the following meteorological factors: minimum temperature, maximum temperature, average temperature, day/night temperature difference, surface solar radiation, accumulated rainfall, and number of rainy days (Cheng and Zhu 1998). In our analysis, a day with more than 1 mm of rainfall was considered a rainy day. Spearman correlations between each quality trait and the meteorological factors were computed using the SciPy library in Python.
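This correlation step can be sketched minimally as follows. The records are synthetic: `mean_temp` and `amylose` are hypothetical stand-ins for one meteorological factor and one quality trait, generated with an assumed negative trend purely for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical records for 30 sowing-date/location combinations:
# mean temperature during grain filling (C) and amylose content (%).
mean_temp = rng.uniform(22, 32, size=30)
amylose = 30.0 - 0.5 * mean_temp + rng.normal(0, 0.3, size=30)

# Rank-based (Spearman) correlation, appropriate because the quality
# trait data are not normally distributed.
rho, p = spearmanr(mean_temp, amylose)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```

Because Spearman correlation operates on ranks, it captures any monotonic relationship, not only a linear one, which is why it is preferred over Pearson correlation here.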
Analysis of phenotype plasticity
According to the reaction norm model, the phenotypic record y_ij of the i-th hybrid line observed under the j-th environment can be modeled as follows:

y_ij = μ + g_i + h_j + b_i·h_j + ε_ij    (1)

where μ is the mean value of the trait, g_i is the main effect of the i-th line, and h_j is the main effect of the j-th environment. The term b_i·h_j represents the interaction (G-by-E) between the i-th hybrid line (genotype) and the j-th environment, and ε_ij is the error term.
Here, we introduced the Finlay-Wilkinson regression (FW) as the representation of phenotype plasticity. FW reorganizes Eq. 1 as follows:

y_ij = μ + g_i + (1 + b_i)·h_j + ε_ij    (2)

If we consider h_j as the independent variable of the function y_ij = f(h_j), then the slope of this function, (1 + b_i), indicates the expected trait variation (Δy_i) per unit change in the environmental effect (Δh_j = 1), and is thus capable of indicating the phenotype plasticity.
To calculate the plasticity of quality traits at different locations and sowing stages, we selected the hybrid lines with multi-location field trials and at least three intermittent sowings per location from the 141 hybrids. We used the FW package implemented in R by Lian et al. (Lian and De Los Campos 2016) and applied the Bayesian method for the regression.
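Outside the Bayesian FW package, the slope (1 + b_i) can be estimated by ordinary least squares. The sketch below uses toy numbers (not the study's data), with the environment effect h_j taken as the environment mean minus the grand mean; slopes above 1 indicate higher-than-average plasticity.

```python
import numpy as np

def finlay_wilkinson(y):
    """Minimal least-squares Finlay-Wilkinson regression.

    y: (n_lines, n_envs) matrix of phenotypic records.
    Returns the per-line slope (1 + b_i): the expected trait change per
    unit change in the environment effect h_j.  By construction the
    slopes average to 1 across lines."""
    h = y.mean(axis=0) - y.mean()              # environment effects
    yc = y - y.mean(axis=1, keepdims=True)     # centre each line
    return (yc * h).sum(axis=1) / (h * h).sum()

# Toy data: 3 hybrid lines x 4 environments (illustrative numbers only).
y = np.array([
    [10.0, 11.0, 12.0, 13.0],   # moderately responsive line
    [10.0, 10.5, 11.0, 11.5],   # least responsive (most stable)
    [10.0, 12.0, 14.0, 16.0],   # most responsive (most plastic)
])
slopes = finlay_wilkinson(y)
print(np.round(slopes, 2))
```

The second line has the smallest slope and would therefore be scored as the most stable, mirroring how low-plasticity hybrids are identified in the text.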
Analysis of combining ability
Combining ability analysis is a statistical technique employed in plant breeding to assess the genetic potential of parents and predict the performance of their hybrids. The analysis considers both general combining ability (GCA), which measures the average performance of a parent across different crosses, and specific combining ability (SCA), which measures the performance of a specific parent in specific crosses. Since there were no extra replications for each sowing management in our experiment, we could not properly evaluate the SCA for each hybrid; for convenience, we simply treated the SCA as 0. Consequently, the phenotype plasticity p_ij of a hybrid with parents i and j can be modeled as follows:

p_ij = μ + GCA_i + GCA_j + ε_ij

where μ is the mean value of the phenotype plasticity, ε_ij is the error term, and GCA_i represents the breeding value of phenotype plasticity for the i-th parent. Since the distribution of p_ij has a mean value of μ, the sum of the GCAs of all parents is 0 (i.e., Σ_i GCA_i = 0). We used the sommer package (Covarrubias-Pazaran 2016) in R to calculate the GCA in a mixed model, in which the parents and crosses are considered random effects. If no significant additive effects are detected from the parents for the plasticity of hybrid traits, the model returns a GCA of 0, which is displayed as such later in the paper.
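For a complete TGMS × restorer cross table with SCA fixed at 0, the least-squares GCA of each parent reduces to its marginal mean minus the grand mean, with GCAs summing to zero within each parental group. The study itself fitted a mixed model with sommer, so the following is only a simplified sketch on toy plasticity values.

```python
import numpy as np

def gca_from_cross_table(p):
    """Estimate GCA from a complete female x male table of hybrid
    phenotype-plasticity values under the model p_ij = mu + GCA_i +
    GCA_j + e_ij (SCA treated as 0, as in the text).  For a complete
    factorial, the least-squares GCA of a parent is its marginal mean
    minus the grand mean."""
    mu = p.mean()
    gca_female = p.mean(axis=1) - mu   # TGMS (row) parents
    gca_male = p.mean(axis=0) - mu     # restorer (column) parents
    return mu, gca_female, gca_male

# Toy 3 TGMS x 4 restorer table of plasticity values (illustrative only).
p = np.array([
    [1.2, 0.8, 1.0, 1.0],
    [0.9, 0.5, 0.7, 0.7],
    [1.5, 1.1, 1.3, 1.3],
])
mu, gf, gm = gca_from_cross_table(p)
print(round(mu, 3), np.round(gf, 3), np.round(gm, 3))
```

In this toy table the second TGMS line and the second restorer carry negative plasticity GCAs, so their crosses would be predicted to give the most stable hybrids, which is the selection logic applied in the Results.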
Analysis of BLUP
The analysis of the various traits at each location was conducted by fitting a linear mixed model and computing best linear unbiased predictors (BLUPs) with the lme4 R package (Bates et al. 2015):

Y ~ (1|LINE) + (1|ENV) + (1|LINE:ENV)

where Y represents the phenotypic records, the parentheses indicate random effects, "1|" denotes groups, and ":" refers to interactions. LINE refers to the hybrid lines, and ENV refers to the different sowing stages or different locations. We used the fitted random effects of the hybrid lines as the representation of the overall genetic effect of each genotype for these traits while eliminating the environmental effects.
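The shrinkage behaviour of BLUP can be illustrated with a one-way sketch in which the variance components are assumed known (lme4 estimates them by REML); this is not the three-term model used in the study, only the core idea that each line's mean deviation is shrunk toward zero by its reliability.

```python
import numpy as np

def blup_line_effects(y, var_g, var_e):
    """One-way random-effects BLUP sketch: shrink each line's mean
    deviation from the grand mean by the reliability
    n*var_g / (n*var_g + var_e), where n is records per line.
    var_g and var_e are assumed known variance components."""
    n = y.shape[1]
    dev = y.mean(axis=1) - y.mean()
    shrink = n * var_g / (n * var_g + var_e)
    return shrink * dev

# Toy data: 3 lines x 3 records each (illustrative numbers only).
y = np.array([
    [12.0, 13.0, 14.0],
    [10.0, 11.0, 12.0],
    [14.0, 15.0, 16.0],
])
blups = blup_line_effects(y, var_g=1.0, var_e=1.0)
print(np.round(blups, 3))
```

With 3 records per line and equal variance components, the reliability is 0.75, so each raw deviation is pulled a quarter of the way toward zero; lines with fewer or noisier records would be shrunk more strongly.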
Genome-wide association analysis
The GEMMA software (Zhou and Stephens 2014) was used to conduct genome-wide association studies (GWAS) for the BLUP and FW values of the different traits and locations, fitting a mixed linear model (MLM). The kinship matrix was used as a random effect, while the first three principal components from principal component analysis (PCA) were included as fixed effects in the MLM. PCA was performed using the smartPCA program implemented in the Eigensoft package (Patterson et al. 2006). The significance threshold for the GWAS analysis was determined using the Bonferroni correction, i.e., 0.05 divided by the number of SNPs (n) in the analysis. Here, the threshold was set at p < 5.32 × 10⁻⁸ (i.e., −log10(p) > 7.27). The entire genome was partitioned into LD blocks based on a linkage disequilibrium (LD) threshold of r² = 0.6 using the gpart R package (Kim et al. 2019). LD blocks separated by less than 1 Mb were merged, and LD blocks containing significant SNPs were considered candidate QTLs.
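The Bonferroni arithmetic is straightforward; the SNP count below is hypothetical, chosen only so that 0.05/n reproduces the quoted 5.32 × 10⁻⁸ threshold (the paper does not state the exact count).

```python
import math

def bonferroni_threshold(alpha, n_snps):
    """Genome-wide significance threshold: alpha divided by the number
    of tested SNPs, also returned on the -log10 scale used for
    Manhattan plots (a SNP is significant when -log10(p) exceeds it)."""
    p = alpha / n_snps
    return p, -math.log10(p)

# Hypothetical SNP count (~9.4e5), chosen so 0.05/n ~ 5.32e-8.
p_thresh, neglog10 = bonferroni_threshold(0.05, 939_850)
print(f"{p_thresh:.2e}", round(neglog10, 2))
```

Note that on the −log10 scale the inequality flips: a p value below 5.32 × 10⁻⁸ corresponds to −log10(p) above 7.27.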
Effect of sowing date and meteorological factors on quality of hybrid rice
The rice quality traits of the hybrids collected from the 5 trial locations, each with multiple intermittent sowing arrangements, were measured, and the distributions of the trait values are illustrated in Fig. 1a and Supplementary Table S4. Nearly all quality traits in these rice cropping regions exhibited significant correlations with the sowing date, suggesting a general contribution of sowing date to rice quality trait variation (Supplementary Table S5). As sowing dates were delayed, we observed the mean AC increasing by 2.37-3.97% in AH-HF, 3.82-5.12% in HN-CS, 4.37-5.32% in HB-EZ, and 0.77-0.89% in SC-GH, while decreasing by 0.32-0.75% in HN-LS. In contrast, the mean CD decreased by 1.31-6.2 in AH-HF, 2.71-5.03 in HN-CS, 2.85-7.82 in HB-EZ, and 2.07-2.34 in SC-GH, while increasing by 0.84-1.89 in HN-LS in tandem with the delay in sowing dates (Fig. 1b-g).
We assessed the overall quality of the hybrids by referring to the Chinese Ministry of Agriculture and Rural Affairs's quality standard for cooking rice varieties (NY/T 593-2021). Of the 2774 records examined, 872 exhibited high quality, with grade 3 or superior performance. These high-quality records came from 124 hybrid varieties and appeared at all five trial locations and all sowing dates. For all trial locations except HN-LS, the improvement in rice quality correlated with the postponement of sowing dates, with a greater number of records achieving quality grade 3 or higher (Fig. 1h-l).
To assess the meteorological variation more precisely, we decomposed it into the following meteorological factors: minimum temperature, maximum temperature, average temperature, day/night temperature difference, surface solar radiation, accumulated rainfall, and number of rainy days (Cheng and Zhu 1998) (Supplementary Fig. S1). Existing studies have shown that the meteorological conditions during the grain-filling stage, particularly the initial 15 days following full heading, have a considerable influence on rice quality (Wu et al. 2016; Yan et al. 2021). Thus, we employed the mean values of the aforementioned meteorological factors during this period for further evaluation. We performed an analysis to investigate the correlation between meteorological factors and quality traits. Because the quality trait data were not normally distributed, we computed Spearman correlation coefficients between the meteorological factors and these quality traits (Fig. 2). Nearly all quality traits displayed significant correlations with multiple meteorological factors at each trial location, and the correlation trend of each meteorological factor with each quality trait differed across trial locations. More specifically, at HN-LS the minimum, maximum, and average temperatures exhibited significant positive and negative correlations with AC and GC, respectively, whereas at the other four locations these meteorological factors showed significant negative and positive correlations with AC and GC, respectively. Likewise, at HN-LS surface solar radiation was positively correlated with AC and negatively correlated with GC, while contrary correlation trends were observed at the other four trial locations. In terms of rainy days and accumulated rainfall, our results suggest that more rainy days are potentially associated with lower AC and higher CD. A significant impact of accumulated rainfall on quality traits was observed, but no more general pattern emerged in our experiment. Moreover, we considered rice quality grade as a trait and investigated its correlation with meteorological factors as a reference. For the trial locations in the rice cropping regions of the upper, middle, and lower reaches of the Yangtze River (i.e., SC-GH, AH-HF, HN-CS, and HB-EZ), better rice quality correlated significantly with the minimum, maximum, and average temperatures.
General combining ability analysis for phenotype plasticity of quality traits
In this study, we evaluated the stability of quality traits for the 141 hybrid combinations across the five trial locations using the Finlay-Wilkinson regression (FW), a measurement of phenotypic plasticity (Supplementary Table S6) (Fig. 3). A combining ability analysis for phenotypic plasticity was conducted to identify potential parents capable of producing hybrid combinations with stable quality traits. Figure 4 shows the GCA values of the hybrid parents in relation to the plasticity of the seven quality traits. Notably, parents with a low GCA value for plasticity are more likely to yield hybrid combinations with stable quality performance.

Fig. 4 The heatmap illustrates the GCA values for phenotype plasticity in quality traits, as observed between TGMS and restorer lines across the various trial locations. Each grid number represents the GCA value for the trait; red grids denote positive GCA values, blue grids negative GCA values, and white grids zero GCA values.

Three TGMS lines (ZXS, HX302S, and HY468S) and nine restorer lines (HH8012, HH7503, DZ, LKSM13, HH2646, HH5106, HH8549, YZ, and WSSM534) consistently exhibited comprehensively low GCA values (≤ 0) for plasticity across more than three trial locations, indicating their potential to consistently produce hybrid combinations with stable quality traits across different cropping regions and sowing managements. Interestingly, parents with superior quality tend to exhibit low GCA values for plasticity. The 12 parental lines with low plasticity GCA values are all of high quality and have been widely used in high-quality hybrid rice development. We analyzed the quality statistics from yield-testing trials of nationally approved hybrid rice varieties derived from all tested parental lines from 2016 to 2022. Notably, the hybrid rice varieties derived from parents with low plasticity GCA values exhibited a higher rate of high quality compared to those derived from parents with high plasticity GCA values (Supplementary Table S7).
Genome-wide association analysis for phenotype plasticity and BLUP measurement of quality traits
The genome-wide association study (GWAS) was conducted independently for the plasticity of each quality trait across the different trial locations. This analysis revealed a total of 13 QTLs associated with grain quality plasticity (Fig. 5a-b; Table 1). Among these, four QTLs were detected in two trial locations, two QTLs in three trial locations, and the remaining seven QTLs in only a single trial location. These findings underscore the complex genetic basis of plasticity for quality traits in different regions.
For AC plasticity, six QTLs were identified on chromosomes 3, 9, 10, 11, and 12. Among these, four were identified in both HN-CS and AH-HF, while two were exclusively identified in HN-CS. For CD plasticity, five QTLs were detected on chromosomes 2, 3, 4, 7, and 8. Notably, all of these QTLs were identified in a single trial location, either HB-EZ or AH-HF.
We identified QTL6 on chromosome 6 as being associated with the plasticity of two key taste quality traits (ASV and GC). The effect of QTL6 on ASV plasticity was detected in both AH-HF and SC-GH, while its effect on GC plasticity was observed exclusively in HB-EZ. These findings suggest that the plasticity of ASV and GC may share a similar genetic basis.
A single QTL, QTL7, located on chromosome 7, was found to be associated with the plasticity of MRR. This association was observed across three trial locations: HB-EZ, HN-LS, and SC-GH.
Furthermore, we employed the best linear unbiased prediction (BLUP) approach to mitigate the impact of non-genetic factors and calculate the genetic effect values for these quality traits (Supplementary Table S8). We also conducted a GWAS for the BLUP measurements. This analysis identified a total of 15 QTLs for the BLUP measurements of the seven quality traits across five trial locations (Fig. 5c-d, Supplementary Table S9). Among these QTLs, seven (QTL2, QTL3, QTL6, QTL7, QTL11, QTL12, and QTL13) were also identified in the previous GWAS of plasticity (Supplementary Table S10). These overlapping QTLs, identified in both GWAS analyses, could play a crucial role in regulating quality traits and in the response underlying the quality plasticity observed in diverse cropping environments.
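The BLUP idea of separating genetic from non-genetic effects can be sketched as shrinkage of genotype means toward the grand mean. This toy version assumes known variance components and balanced replication; the study's actual mixed model may differ.

```python
# Toy sketch of the BLUP idea: genotype means are shrunk toward the
# grand mean by a factor depending on the genetic (var_g) and residual
# (var_e) variance components. Variance components and means below are
# assumed for illustration, not estimated from real data.

def blup_effects(means, n_rep, var_g, var_e):
    """means: dict genotype -> raw mean over n_rep observations.
    Returns a shrunken (BLUP-like) genetic effect per genotype, using
    shrinkage factor k = n*Vg / (n*Vg + Ve)."""
    mu = sum(means.values()) / len(means)
    k = n_rep * var_g / (n_rep * var_g + var_e)
    return {g: k * (m - mu) for g, m in means.items()}

effects = blup_effects({"A": 18.0, "B": 15.0, "C": 12.0},
                       n_rep=5, var_g=1.0, var_e=4.0)
print(effects)
```

The shrinkage factor approaches 1 as replication or heritable variance grows, so well-replicated, strongly heritable traits are shrunk the least.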
Analysis of QTLs genetic effects and prediction of candidate genes
We evaluated the genetic effects of all 13 plasticity QTLs (Fig. 6a-f, Fig. S2). Among these, two QTLs (QTL6 and QTL7) were detected in three trial locations, establishing them as stable major QTLs for rice quality plasticity. As these two QTLs also demonstrated an effect on the BLUP measurements, further investigation into their genetic effects could lay a foundation for molecular breeding strategies aimed at enhancing both the quality and the stability of hybrid rice.
QTL6 was found to be associated with ASV and GC. The leading SNP (Chr6:6,722,905) of QTL6 had a −log10(p value) of 7.83 in AH-HF, 13.61 in HB-EZ, and 31.03 in SC-GH. Three genotypes of the leading SNP of QTL6 were identified. A significance analysis revealed that CC genotypes showed significantly lower GC plasticity compared to CT and TT genotypes in HB-EZ. It is worth noting that the direction of the genetic effects of the identical ASV plasticity allele varied across cropping environments. In AH-HF, CC genotypes exhibited significantly lower ASV plasticity compared to varieties with CT and TT genotypes. Conversely, in SC-GH, CC genotypes showed significantly higher ASV plasticity compared to varieties with CT and TT genotypes (Fig. 6a-c). QTL7 was associated with MRR. The leading SNP (Chr7:24,665,290) of QTL7 had a −log10(p value) of 15.69 in HN-LS, 9.93 in HB-EZ, and 7.67 in SC-GH. The leading SNP alleles exhibited significantly different MRR plasticity, as shown in Fig. 6d-f. Notably, the AA genotypes displayed the lowest MRR plasticity value.
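A genotype-class comparison like the one above can be screened with a one-way F statistic across the three genotype groups. This is only a sketch with hypothetical plasticity values; the study's significance analysis converts such statistics to p-values against the F distribution and may use a different model.

```python
# Sketch: one-way F statistic comparing trait plasticity among the
# three genotype classes of a leading SNP. Values are hypothetical.

def one_way_f(groups):
    """groups: dict genotype class -> list of plasticity values.
    Returns (F, df_between, df_within)."""
    all_vals = [v for vs in groups.values() for v in vs]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(vs) * (sum(vs) / len(vs) - grand) ** 2
                     for vs in groups.values())
    ss_within = sum((v - sum(vs) / len(vs)) ** 2
                    for vs in groups.values() for v in vs)
    df1, df2 = k - 1, n - k
    return (ss_between / df1) / (ss_within / df2), df1, df2

f, df1, df2 = one_way_f({
    "CC": [0.20, 0.30, 0.25, 0.28],
    "CT": [0.50, 0.55, 0.60, 0.52],
    "TT": [0.58, 0.62, 0.60, 0.57],
})
print(f, df1, df2)
```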
Within the associated genomic regions of QTL6 and QTL7, a total of 6 and 18 annotated genes were found, respectively. Among these genes, ALK and GL7, known for their functional annotation in controlling rice quality, were pinpointed as candidate genes for further analysis of rice quality plasticity. The ALK gene, located within QTL6, is a key gene controlling rice gelatinization temperature and encodes soluble starch synthase II-3. The GL7/GW7 gene, located within QTL7, encodes a protein homologous to the LONGIFOLIA proteins of Arabidopsis thaliana, which regulate cell elongation and thereby affect grain length and grain shape.
Discussion
Improving rice quality is one of the important goals of hybrid rice breeding. The formation of rice grain quality is under genetic control and is also influenced by the environment. Numerous studies have explored the impact of sowing date on rice quality. However, due to differences in location, tested varieties, climatic conditions, and other factors, the outcomes of these studies lack consistency. Yao et al. observed that a delay in the sowing date leads to an improvement in the appearance quality of rice but results in a decrease in eating and cooking quality (Yao et al. 2011). Wang et al. found that a delay in the sowing date causes CD, PGWC, AC, and GC to exhibit a declining trend (Wenting et al. 2021).
In this study, we analyzed rice quality across multiple sowing dates in five trial regions located in Southern China and gathered the meteorological data for each region. We observed that in the Yangtze River basin trials, the number of high-quality rice varieties (≥ grade 3, as set by the Chinese Ministry of Agriculture) increases as the sowing date is delayed. Different sowing dates essentially change the weather conditions under which the rice grows. Previous studies indicated that an inappropriate sowing date exposes rice to unfavorable climatic conditions during the grain filling stage, resulting in a decline in rice quality (Cheng and Zhong 2001).
It has been shown that light, temperature, and rainfall during the grain filling stage are the pivotal climatic factors influencing rice quality (Resurreccion et al. 1977; Cheng and Zhu 1998). This study quantified and analyzed several crucial meteorological factors during the grain filling stage to assess their relationship with rice quality, and endeavored to improve the precision of selecting the appropriate sowing date in high-quality hybrid rice production. Deng et al. found that an average daily temperature of 22-27 °C during the grain filling stage is recommended to achieve high grain yield and quality for irrigated rice in the Yangtze River basin (Deng et al. 2015). Our results suggest that a lower average daily temperature can contribute to improving rice quality, supporting the conclusion of Deng et al. In addition, we found that the impact of meteorological factors on rice quality exhibited varied trends among trial locations, and even a single factor may affect multiple quality traits in contrasting directions, resulting in a complex effect on overall rice quality. For instance, surface solar radiation was positively correlated with AC and negatively correlated with GC at HN-LS, while the contrary correlation trends were observed at the other four trial locations. This difference may be due to the fact that HN-LS is located in the South China rice cropping region, with persistent high temperatures and abundant rainfall in the dry season, and has distinct climate conditions compared to the other trial locations within the Yangtze River basin. At the same time, the decomposed meteorological factors are unavoidably interrelated. For instance, rainfall consistently results in a more pronounced temperature decrease in AH-HF, HN-CS, and HB-EZ, in the middle and lower reaches of the Yangtze River basin, than in SC-GH and HN-LS; this discrepancy can be attributed to the lower altitude and the prevalence of hot and sunny weather in summer. Concurrently, rain reduces solar radiation, so SC-GH in the upper reaches of the Yangtze River receives low solar radiation due to cloudy and rainy weather. As rice quality results from the combined effects of multiple meteorological factors, further analysis and modeling based on more experiments are necessary.
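The meteorology-quality relationships discussed here rest on Spearman rank correlations (Fig. 2), which can be sketched as follows; the temperature and chalkiness series below are hypothetical.

```python
# Sketch of the Spearman rank correlation used to relate a meteorological
# factor to a quality trait; data are hypothetical.

def ranks(xs):
    """Ranks (1-based), averaging the ranks of tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

temps = [22.1, 23.5, 24.0, 25.2, 26.8, 27.5]   # mean filling-stage T (°C)
chalk = [1.1, 1.4, 1.3, 2.0, 2.6, 3.1]          # chalkiness degree (%)
print(round(spearman(temps, chalk), 2))  # 0.94
```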
It is crucial to develop varieties with stable phenotypes that are less sensitive to environmental changes. This will help mitigate the adverse effects of frequent extreme climate events on rice yield and quality, guaranteeing the production of high-quality rice. This study employed phenotypic plasticity measurements to evaluate the stability of different quality traits in hybrid rice combinations. Combining ability analysis is a useful approach for selecting ideal parent lines in hybrid rice breeding (Chen et al. 2019). Through the analysis of combining ability for phenotypic plasticity, we discovered that the TGMS lines ZXS, HX302S, and HY468S and the restorer lines HH8012, HH7503, DZ, LKSM13, HH2646, HH5106, HH8549, YZ, and WSSM534 showed low GCA effects on the plasticity of five or more quality traits across three rice cropping regions; they could be recommended for utilization in hybrid rice breeding programs to improve the stability of rice quality. Notably, parents with superior quality tend to exhibit low GCA values for plasticity and have greater potential to produce hybrids with optimal quality stability. ZXS is a high-quality indica TGMS line approved in Hunan Province in 2020. Its milled rice ratio is 63.2%, chalky grain rate 7%, chalkiness degree 1.3%, amylose content 16.1%, gel consistency 60 mm, alkali digestion value 7.0, and transparency grade 1 (China Rice Data Center). A total of ten new hybrid rice varieties derived from ZXS have been nationally or provincially approved and certificated, and their quality has all reached grade 2 or grade 1. The quality of ZhenLiangYouYuZhan (ZXS/YZ), a nationally approved new hybrid rice variety derived from ZXS, reached grade 1 in the new variety regional trial and was awarded the Gold Award for Taste Evaluation of High-Quality Rice at the 4th National High-Quality Rice Competition in 2023. In our experimental field settings, the ZXS/YZ combination showed high quality, with grade 3 or superior performance in 12 out of 19 records; these records occurred in all locations except HB-EZ.
To uncover the genetic basis of phenotypic plasticity in rice quality traits, we performed a GWAS for quality plasticity and identified 13 plasticity QTLs. In line with previous studies (Zan and Carlborg 2020), we determined that plasticity is polygenic and exhibits a variable genetic basis for rice quality traits across regions. The genetic effects of a plasticity QTL can change across locations. For instance, QTL6 is a multi-effect QTL associated with the plasticity of ASV and GC, whose genetic effects on ASV differ among trial locations. We also identified seven plasticity QTLs that overlapped with the GWAS of BLUP measurements of quality traits, indicating that these QTLs could both regulate the quality traits and contribute to their plasticity. This further supports previous research suggesting a connection between the genetic regulation of traits and their plasticity (Zan and Carlborg 2020; Jin et al. 2023).
In the present study, we endeavored to identify the candidate genes of the two major plasticity QTLs for rice quality. The regions of these two major QTLs contain 24 annotated genes, and 2 of them (ALK and GL7) are known to regulate rice quality. ALK is the key gene controlling rice gelatinization temperature, which is closely associated with eating and cooking quality in rice (Gao et al. 2011). According to previous studies, there are three main alleles of ALK, namely ALKa, ALKb, and ALKc, with ALKc controlling high gelatinization temperature and ALKa and ALKb controlling low gelatinization temperature (Chen et al. 2020; Huang et al. 2021). In this study, we identified two alleles of ALK, ALKb and ALKc, in our parental rice accessions. Interestingly, we found that the leading SNP genotype of QTL6 was completely linked with ALK. Specifically, the CC genotype of the leading SNP of QTL6 was linked with ALKc, while the TT genotype was linked with ALKb (Fig. 6g-h). Sequence analysis showed that, among the low plasticity GCA parents described above, all carried ALKb except HH7503 and HH5106, which carried ALKc. To some extent, this result explains why, although rice varieties with low to medium GT are preferred by consumers, those carrying ALKc alleles, which confer high GT, are still commonly employed in rice breeding, potentially owing to their plasticity levels.
The other plasticity candidate gene, the GL7 locus, plays a significant role in grain size diversity and has been utilized in rice breeding. Wang et al. demonstrated that copy number variation (CNV) at the GL7 locus leads to differences in rice grain size (Wang et al. 2015b). Whole-genome sequencing analysis of the parental rice accessions revealed that the CNV at GL7 was linked with the leading SNP of QTL7. The varieties containing the CNV at GL7 carry the AA genotype of the leading SNP of QTL7, and the remainder carry the GG genotype (Fig. 6i). Rice varieties with multiple copies at the GL7 locus exhibit lower plasticity in milled rice ratio, which can be attributed to their long-grain phenotype. Further studies will be necessary to determine the effect of these candidate genes on phenotypic plasticity and their pleiotropic effects on quality traits and plasticity. These QTLs and candidate genes have the potential to contribute to improving rice quality and maintaining quality stability in breeding.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Fig. 1
Fig. 1 Variation in the trait of hybrid rice combinations across different locations and sowing stages in the southern rice region of China.a The five surveyed sites span China's southern rice region, where 7 quality traits were phenotyped for hybrid rice varieties.b-g Boxplot of seven quality traits including b chalkiness degree (CD), c alkali spreading value (ASV), d amylose content (AC), e gel consistency (GC), f percentage of grains with chalkiness (PGWC), and g milled rice ratio (MRR) measured at the five sites in different sowing stages.h-j The number of rice varieties reached quality grade 3 or superior quality according to NY/T 593-2021 at each trial location on different sowing dates
Fig. 2
Fig. 2 The heatmap illustrates Spearman's correlation coefficients between meteorological factors and quality traits across five trial locations.T denotes temperature.The green color of each box indicates positive relationships between meteorological factors and the corresponding quality trait, while the red color indicates negative relationships.Significant correlations are denoted by asterisks, as determined by a two-tailed t-test (**p < 0.001; ***p < 0.0001).r stands for correlation coefficients
Fig. 3
Fig. 3 The boxplot illustrates the phenotype plasticity of hybrid rice across five trial locations for the following traits: a chalkiness degree (CD), b percentage of grains with chalkiness (PGWC), c transparency (TP), d milled rice ratio (MRR), e amylose content (AC), f alkali spreading value (ASV), g gel consistency (GC)
Fig. 5
Fig. 5 QTLs associated with plasticity and BLUP for seven traits across five trial locations. a QTLs associated with the plasticity of different traits across different locations. Each dot represents an SNP, and the size of the dot is proportional to its −log10(p)-value, as indicated on the right. Loci with a −log10(p)-value exceeding the genome-wide significance threshold are highlighted in red. b Manhattan plot overlaying GWAS results of plasticity for the seven traits across five locations. The black horizontal dashed line indicates Bonferroni-corrected genome-wide significance, and the vertical gray lines indicate the positions of detected QTLs. c Manhattan plot overlaying GWAS results of BLUP for the seven traits across five locations. The vertical red lines indicate the positions of detected QTLs that overlap in both plasticity and BLUP. d QTLs associated with the BLUP of different traits across different locations
Fig. 6
Fig. 6 Genetic effects of QTLs and prediction of candidate genes.a-f Boxplots of plasticity of quality traits in hybrid combinations containing the different leading SNPs of QTLs. a Plasticity of GC in hybrid combinations containing the different leading SNPs of QTL6 in HB-EZ.b-c Plasticity of ASV in hybrid combinations containing the different leading SNPs of QTL6 in AH-HF and SC-GH.d-f The plasticity of MRR in hybrid combinations containing the different leading SNPs of QTL7 in HB-EZ, HN-LS, and SC-GH.Uppercase letters indicate statistically significant differences at p < 0.01; lowercase letters indicate statistically significant differences at p < 0.05.g The relationships between the QTL6 leading SNP and its candidate gene.h The haplotypes identified by combinations of QTL6 leading SNP and candidate gene FNPs.i The relationships between the QTL7 leading SNP and its candidate gene
Table 1
The QTLs associated with plasticity of quality traits in different environments and summary of candidate genes
DERL2 (derlin 2) stabilizes BAG6 (BAG cochaperone 6) in chemotherapy resistance of cholangiocarcinoma
DERL2 (derlin 2) is a critical component of the endoplasmic reticulum quality control pathway system whose mutations play an important role in carcinogenesis, including cholangiocarcinoma (CHOL). However, its role and its underlying mechanism have yet to be elucidated. Herein, we revealed that DERL2 was highly expressed in CHOL and considered as an independent prognostic indicator for inferior survival in CHOL. DERL2 ectopically expressed in CHOL cells promoted cell proliferation and colony formation rates, and depleting DERL2 in CHOL cells curbed tumor growth in vitro and in vivo. More interestingly, the knockout of DERL2 augmented the growth-inhibitory effect of gemcitabine chemotherapy on CHOL cells by inducing cell apoptosis. Mechanistically, we discovered that DERL2 interacted with BAG6 (BAG cochaperone 6), thereby extending its half-life and reinforcing the oncogenic role of BAG6 in CHOL progression.
Introduction
Cholangiocarcinoma (CHOL) is a heterogeneous epithelial cell tumor that represents approximately 10 to 20% of hepatic cancers and 2% of all cancers [1]. It mainly arises from peripheral locations within the intrahepatic bile ducts, with cholangiocyte differentiation features [2]. Surgical resection combined with traditional therapy is the first option for treating CHOL patients [3,4]. However, only a few CHOL patients respond well and achieve favorable long-term prognoses [5]. The risk of neoplastic development is potentiated and exacerbated by chronic inflammation, infections, and cholestasis [6]. Additionally, genetic disorders sustain tumor cell proliferation, migration, and survival, and are consequently identified as risk factors [7,8]. Hence, there is a paramount need to explore novel therapeutic targets for CHOL patients.
Persistent endoplasmic reticulum (ER) stress is regarded as either a friend or a foe of tumorigenesis and cancer development, exerting context-dependent effects on tumor cell growth or cell death [9][10][11][12][13]. The ER, an essential organelle in eukaryotic cells, assumes critical functions in protein synthesis, folding, and transportation, intricately coordinating protein folding and export processes [14][15][16]. The ER protein quality control system is responsible for maintaining adequate ER proteostasis via the endoplasmic reticulum-associated degradation (ERAD) machinery [17][18][19]. This intricate proteostasis is indispensable in determining cellular function and behavior [20][21][22]. However, when external stressors disrupt this delicate equilibrium, misfolded proteins accumulate within the ER, leading to ER stress, which contributes to the survival and proliferation of cancer cells [23,24]. Additionally, the ER plays a crucial role in lipid metabolism and calcium signaling, as well as in cancer angiogenesis and invasion. Among the membrane protein family known as Derlins, which form dislocation pores through transmembrane domain oligomerization and facilitate ER degradation of misfolded glycoproteins, three highly homologous members have been identified: Derlin1 (derlin 1), Derlin2 (derlin 2), and Derlin3 (derlin 3). Recent studies have suggested that amplification of the DERL1 protein is involved in the cell behavior and functionality of breast cancer [25], colon cancer [26], and bladder tumors [27]. Silencing of DERL3 confers unlimited proliferative potency on cells and drives the progression of breast cancer [28], lung adenocarcinoma [29], and cervical cancer [30]. DERL2, an ER membrane-associated and luminal protein characterized by three predicted loops, has been shown to cause perinatal lethality in whole-body DERL2 deletion mice, with the surviving mice developing skeletal dysplasia due to abnormal accumulation of collagen matrix proteins within the ER lumen [31]. In chronic lymphocytic leukemia, preclinical and clinical evidence has shown amplification of DERL2 mRNA levels [32]. However, its role and mechanism in cancers remain largely undisclosed.
In this study, we employed RNA-sequencing data from TCGA (The Cancer Genome Atlas) to investigate the relationship between the expression of the Derlin protein family and the progression of CHOL, shedding light on the potential oncoprotein role of DERL2 in CHOL. Additionally, we elucidated the influence of DERL2 on CHOL growth and uncovered the therapeutic potential of targeting the DERL2-mediated signaling axis through preclinical investigations.
DERL2 expression in TCGA pan-cancers and its prognostic implication
The RNA sequencing data of pan-cancer samples were collected from TCGA normal and TCGA tumor datasets (https://portal.gdc.cancer.gov/) [33]. After log2 transformation, the expression data were analyzed by the Mann-Whitney U test and plotted with the "ggplot" package of the R language. The clinical information of the TCGA pan-cancer cohort was used to evaluate the impact of DERL2 expression on the clinical outcome of CHOL patients.
Gene set enrichment analysis
A gene set enrichment analysis (GSEA) was conducted on the GSEA website (https://www.broadinstitute.org/gsea/) [34] to identify the signaling cascades related to DERL2.
Immune infiltrate correlation analysis using Tumor Immune Single-cell Hub database
A correlation analysis with immune checkpoints was performed via Tumor Immune Single-cell Hub (TISCH) online database to examine the impact of different variants of DERL2 on tumor immune infiltration (immune cell and immune checkpoint molecules) [35].
Analyzing DERL2 expression in CHOL cells
DERL2 mRNA data were retrieved from the Cancer Cell Line Encyclopedia (CCLE) website (http://www.broadinstitute.org/ccle) [36]. Its expression in a panel of CHOL cells was visualized and plotted.
The constructed vectors carrying DERL2-Myc and BAG6 (BAG cochaperone 6)-HA were generated using PCR and cloned into pCDNA5/FRT/TO-Myc (Thermofisher, USA) or pCDNA5/FRT/TO-HA (Thermofisher, USA), respectively.To overexpress DERL2 in QBC939 cells, the cDNA sequence of DERL2 was amplified from QBC939 cells and inserted into pCDNA5 vectors (Thermofisher, USA).The produced vectors were transfected into QBC939 cells.The corresponding empty vectors were transfected into QBC939 cells and served as the controls.
RT-qPCR
RNAs were isolated using a Trizol-chloroform method at a 1:5 ratio (chloroform:Trizol) (TRIzol, Invitrogen, USA) [38]. RNA concentration was determined using a NanoDrop spectrophotometer. Reverse transcription of RNA into cDNA was performed using the SuperScript™ IV One-Step RT-PCR system (Thermo Fisher Scientific, Inc.). The levels of gene transcripts were assessed using Luna Universal qPCR Master Mix (NEB). The 2^(−ΔΔCt) relative quantification method was used. The DERL2 primer is as
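The 2^(−ΔΔCt) relative quantification mentioned above can be sketched directly; the Ct values below are hypothetical and assume one reference gene and a calibrator (control) sample.

```python
# Sketch of 2^(-ddCt) relative quantification: normalize the target
# gene's Ct to a reference gene in both sample and calibrator, then
# exponentiate the difference. Ct values are hypothetical.

def rel_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    d_ct_sample = ct_target - ct_ref          # dCt of the sample
    d_ct_cal = ct_target_cal - ct_ref_cal     # dCt of the calibrator
    dd_ct = d_ct_sample - d_ct_cal            # ddCt
    return 2 ** (-dd_ct)                      # fold change vs. calibrator

# Target amplifies 2 cycles "earlier" relative to the reference than in
# the calibrator -> ~4-fold higher expression.
print(rel_expression(ct_target=22.0, ct_ref=18.0,
                     ct_target_cal=24.0, ct_ref_cal=18.0))  # 4.0
```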
Construction of DERL2 knockout CHOL cells
Using the CRISPOR tool (http://crispor.tefor.net/), two sgRNAs targeting DERL2 exon 5 and exon 6 were designed and then synthesized by Tianyi Huiyuan Biotechnology Co., Ltd., China. The two complementary strands of each sgRNA were annealed at 95 °C, resulting in the formation of double-stranded duplexes. Subsequently, the duplexes underwent restriction digestion and were inserted into the pLentiCRISPRv2 vector to generate the Lentiv2-sgRNA vector. Lentiv2-sgRNA or Lentiv2 vectors were packaged in 293T cells with psPAX2 and pMD2.G. Forty-eight hours later, the cell culture supernatant was filtered through 0.22-μm filters (Anotop, Whatman). The medium, containing virus particles, was used to infect CHOL cells for another 48 h. The infected cells were selected with 3 μg/ml puromycin (Sigma, USA). After 2 weeks, positive clones were collected and amplified, followed by confirmation through Sanger sequencing and western blot analysis. The sgRNA sequences were: 5′-GAGCTTAGTTTTCTTGGGCCAGG-3′ and 5′-GTATTTCCCAATCAACCTGGTGG-3′.
CCK8 assays
A total of 1000 cells were seeded in each well of a 96-well plate.After 24 h, 48 h, or 72 h, 100 μL/well solution of CCK8 was added and incubated for 1 h.The plates were subsequently analyzed using a microplate reader at a wavelength of 450 nm.
To evaluate the impact of different drugs on CHOL cell proliferation, varying concentrations of Gemcitabine (0, 1, 2, 5, or 5 ng) were added to the 96-well plates and maintained for 48 h.Following the incubation period, cell proliferation was assessed using CCK8 assays.
Colony formation assays
In a 12-well plate, 400 cells were seeded and grown for 14 days. Subsequently, the cells were fixed and stained with 1 mL of 100% ethanol containing 0.25% crystal violet. The staining was allowed to stand for 20 min, and the colonies were then counted.
Apoptosis assay
An Annexin V-FITC/PI Apoptosis Detection Kit (enzyme, China) was applied to determine QBC939 cell apoptosis with or without DERL2 knockout. In short, the transfected QBC939 cells (KO-1 and KO-2) and wild-type (WT) QBC939 cells were plated in 6-well plates. Forty-eight hours later, the cells were harvested and trypsinized without EDTA. Subsequently, 100 μl of 1× binding buffer was added to resuspend the cells, followed by the addition of propidium iodide (PI; 5 μl) and Annexin V-FITC (5 μl). The staining reaction was conducted in the dark at room temperature, and after 10 min, the cells were treated with 200 μl of 1× binding buffer. The apoptotic cells were counted using a BD FACScan™ flow cytometer (BD Biosciences) within 1 h, and the data were read using BD FACSuite™ (BD Biosciences).
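The quadrant logic behind Annexin V-FITC/PI double staining can be sketched as follows; the fluorescence thresholds and events below are hypothetical, and real gating is done interactively in the cytometer software.

```python
# Sketch of quadrant gating for Annexin V-FITC / PI double staining:
# Annexin-/PI- live, Annexin+/PI- early apoptotic, Annexin+/PI+ late
# apoptotic, Annexin-/PI+ necrotic. Thresholds and events are hypothetical.

def gate(events, ann_cut=1000.0, pi_cut=1000.0):
    """events: list of (annexin_signal, pi_signal) per cell.
    Returns the percentage of cells in each quadrant."""
    counts = {"live": 0, "early": 0, "late": 0, "necrotic": 0}
    for ann, pi in events:
        if ann < ann_cut and pi < pi_cut:
            counts["live"] += 1
        elif ann >= ann_cut and pi < pi_cut:
            counts["early"] += 1
        elif ann >= ann_cut:
            counts["late"] += 1
        else:
            counts["necrotic"] += 1
    total = len(events)
    return {k: 100.0 * v / total for k, v in counts.items()}

cells = [(200, 300), (150, 400), (2500, 500), (3000, 2500), (120, 2200)]
print(gate(cells))
```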
Cell cycle analysis
Cell cycle distribution was examined using a Cell Cycle and Apoptosis Analysis Kit (PI staining, MedChemExpress, China). The transfected cells (2×10^5 cells/ml) were plated in 6-well plates. Two days later, the cells were centrifuged, harvested, rinsed with PBS, and fixed with 70% pre-chilled ethanol at 4 °C. On the following day, the ethanol was removed by centrifugation, and the cells were further washed with PBS and resuspended in a pre-prepared PI working solution (PI/RNase A ratio, 1:9). After incubation with the PI working solution at room temperature for 1 h, the cells were analyzed using a BD FACScan flow cytometer.
In vivo assays
Thirty BALB/c nude mice (4-5 weeks of age, 16-20 g) were purchased from the Wuhan University Center for Animal Experiment/Animal Biosafety Level III laboratory (ABSL-III lab) (Wuhan, Hubei, China). The study was approved by the Institutional Ethics Committee of Hainan Medical University (No. GKJ190015). The mice were assigned to three groups (n=10): a wild-type (WT) group and KO-1 and KO-2 groups. Subcutaneous injections of the indicated QBC939 cells were performed on the backs of the mice. Tumor size was measured every 5 days. Tumor weight was recorded following euthanasia with CO2. Tumor volume = tumor length × tumor width² / 2.
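The tumor volume formula stated above is simple enough to compute directly; the caliper measurements below are hypothetical.

```python
# Sketch of the tumor volume formula above (V = L * W^2 / 2),
# applied to hypothetical caliper measurements in mm.

def tumor_volume(length_mm, width_mm):
    return length_mm * width_mm ** 2 / 2

measurements = [(8.0, 6.0), (10.0, 7.5), (12.5, 9.0)]
volumes = [tumor_volume(l, w) for l, w in measurements]
print(volumes)  # [144.0, 281.25, 506.25]
```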
Co-immunoprecipitation assays
For co-immunoprecipitation assays, we prepared QBC939 cells transfected with Flag-DERL2 vectors, as well as 293T cells co-transfected with HA-BAG6 and Flag-DERL2 vectors. Following centrifugation of the cell lysates at 16,000 g for 15 min at 4 °C in a microcentrifuge, the protein concentration was determined using the Pierce BCA Protein Assay Kit. Subsequently, 50-100 μg of total cell lysate, along with the recommended amount of antibody, was mixed on ice and incubated overnight at 4 °C with gentle rotation. Resuspended protein A/G PLUS-Agarose (Santa Cruz, Cat. #sc-2003) was added and incubated at 4 °C for 5-6 h. After centrifugation at 2500 g for 5 min at 4 °C, the immunoprecipitates were subjected to western blot analysis.
Protein half-life analysis
Two days post-transfection, cells were exposed to CHX (cycloheximide, MKBio, China) at the indicated concentrations and harvested at the indicated time points. Western blot analysis was carried out to detect DERL2 or BAG6 expression.
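A half-life can be estimated from such a CHX chase by assuming first-order decay and fitting the log band intensity against time; t1/2 = ln2/k. The densitometry values below are hypothetical, and the study's quantification workflow may differ.

```python
import math

# Sketch of estimating protein half-life from a CHX chase: band
# intensities are assumed to decay exponentially, so a log-linear
# least-squares fit of intensity vs. time gives the decay rate k,
# and t1/2 = ln(2) / k. Intensities are hypothetical values.

def half_life(times_h, intensities):
    ys = [math.log(i) for i in intensities]
    n = len(times_h)
    tx, ty = sum(times_h) / n, sum(ys) / n
    k = -sum((t - tx) * (y - ty) for t, y in zip(times_h, ys)) \
        / sum((t - tx) ** 2 for t in times_h)   # decay rate (1/h)
    return math.log(2) / k

t = [0, 2, 4, 6, 8]
signal = [1.00, 0.63, 0.40, 0.25, 0.16]   # roughly a 3 h half-life
print(round(half_life(t, signal), 2))
```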
Statistical analysis
Statistical significance was determined at P < 0.05. Data are presented as mean ± standard error of the mean (SEM). Statistical analysis was performed using Prism 8. For comparisons involving more than two groups, one-way ANOVA followed by the Tukey test was employed. The unpaired Student t-test was used for comparisons between two groups.
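The unpaired Student t-test mentioned above can be sketched as a pooled-variance t statistic; the two groups below are hypothetical, and a real analysis (as done in Prism) would convert t to a p-value from the t distribution with the stated degrees of freedom.

```python
# Sketch of the unpaired (pooled-variance) Student t statistic for a
# two-group comparison; data are hypothetical.

def student_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    vp = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled
    t = (ma - mb) / (vp * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2   # t statistic and degrees of freedom

wt = [1.00, 0.95, 1.05, 1.02]   # e.g. normalized viability, WT
ko = [0.55, 0.60, 0.50, 0.58]   # e.g. normalized viability, KO
print(student_t(wt, ko))
```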
Analysis of the Derlin proteins in CHOL malignancy
Considering the significance of the Derlin proteins in CHOL malignancy, we analyzed their expression profiles and prognostic implications in patients with CHOL, utilizing data from the TCGA CHOL cohort. Figure 1A illustrates that the expression levels of the three Derlin genes were notably elevated in CHOL tissues compared to their corresponding normal tissues. While the expression of DERL1 and DERL3 did not exhibit a significant impact on the overall survival of CHOL patients (P=0.516 and P=0.983, respectively), patients with high expression of DERL2 demonstrated worse overall survival in comparison to those with low DERL2 expression (P=0.008) (Fig. 1B-C). These findings piqued our interest in investigating the functional role of DERL2 in CHOL patients.
Subsequent bioinformatics analysis was performed to elucidate the pan-cancer expression landscape of DERL2, as the availability of normal tissues and paired tumor tissues was limited for differential expression analysis across various cancers. Utilizing the TCGA pan-cancer cohort's normal/tumor data, we observed prevalent DERL2 mRNA expression across a wide range of cancer types (Fig. 2A). Consistently, a comparison of DERL2 expression in TCGA tumors and normal tissues revealed elevated mRNA levels of DERL2 in pan-cancer samples (Fig. 2B-D). To further analyze its expression pattern, we retrieved data from the GEO database (GSE107943), confirming the high expression of DERL2 in CHOL tissues (Fig. 2E). To assess the impact of DERL2 on the clinical outcome of CHOL patients, we analyzed the TCGA-CHOL cohort. Notably, DERL2 amplification was associated with adverse prognosis across multiple survival measures, including overall survival, disease-free survival, disease-specific survival, and progression-free survival (Fig. 3). However, no significant differences were observed in DERL2 expression among patient subgroups categorized by gender, age, and TNM stage (Fig. 4). To unravel the signaling mechanism underlying the role of DERL2 in CHOL progression, we conducted GSEA using transcriptome data from the TCGA-CHOL cohort. Our analysis revealed a selective positive enrichment of gene sets associated with DNA repair, Myc targets, and Myc targets V2, indicating their potential involvement in DERL2-mediated phenotypes (Fig. 5).
DERL2, a crucial factor in ER-associated degradation (ERAD) pathways known for its involvement in host innate immunity [39], prompted us to investigate its impact on the immune microenvironment using samples from the TCGA-CHOL cohort, with the analysis conducted on the TISCH platform. The resulting lollipop diagram (Fig. 6A) revealed noteworthy correlations between DERL2 mRNA levels and various immune cell populations. Specifically, DERL2 expression exhibited negative associations with macrophages (Fig. 6B), mast cells (Fig. 6C), type 2 T helper (Th2) cells (Fig. 6D), Th1 cells (Fig. 6E), and CD56 bright cells (Fig. 6F). Additionally, a positive correlation between DERL2 and CD cells was observed (Fig. 6G). These findings shed light on the potential involvement of DERL2 in shaping the immune landscape within the tumor microenvironment.
DERL2 influences CHOL cell proliferation
Before the functional assays, we investigated the expression pattern of DERL2 in CHOL using genomics data obtained from the CCLE. Analysis of the CCLE data revealed distinct expression profiles of DERL2 in extrahepatic (shown in red) and intrahepatic (shown in black) CHOL cells (Fig. 7). Notably, the CHOL cell lines RBE, QBC939, HUCCT1, and HCCC9810 exhibited relatively higher levels of DERL2 protein expression compared with HiBEC cells (Fig. 8A).
To elucidate the specific role of DERL2 in CHOL cell growth, we introduced DERL2-HA vectors into RBE and HCCC9810 cells, and subsequently assessed the ectopic expression of the DERL2 protein through Western blot analysis (Fig. 8B). Overexpression of DERL2 resulted in a significant increase in cell proliferation (Fig. 8C-D), accompanied by a marked enhancement in colony formation capacity (Fig. 8E-F).
Depletion of DERL2 led to a significant reduction in the colony formation rate of QBC939 cells (Fig. 9D). Consistent with the in vitro findings, mice transplanted with DERL2-deficient QBC939 cells exhibited a substantial decrease in tumor weight and size (Fig. 9E-G), providing further evidence of the crucial role played by DERL2 in CHOL tumorigenesis.

DERL2 deficiency induces apoptosis and suppresses cell cycle transition
Notably, the depletion of DERL2 in QBC939 cells resulted in a significant increase in cell apoptosis (Fig. 10A). Furthermore, consistent with these findings, DERL2-deficient QBC939 cells exhibited cell-cycle arrest at the S and G2 phases (Fig. 10B). These observed phenotypic changes strongly suggest the significance of the DERL2 gene in CHOL progression.
DERL2 interacts with BAG6
To investigate the mechanistic basis of DERL2's influence on CHOL cell proliferation, we explored its underlying protein interactions. We introduced DERL2-Flag recombinant vectors or empty vectors into 293T cells and performed co-immunoprecipitation (co-IP) experiments. Mass spectrometry analysis of the DERL2-Flag complex captured by anti-Flag antibodies identified BAG6 as a potential DERL2-interacting protein (Fig. 11A). To further confirm the interaction between DERL2 and BAG6 during CHOL progression, we performed co-IP experiments in 293T cells co-transfected with Flag-DERL2 and HA-BAG6 vectors. Western blot analysis using HA or Flag antibodies was employed to assess the expression of the HA- or Flag-tagged proteins in the transfected cell lysates. Figure 11B and C validate the interaction between DERL2 and BAG6. Moreover, the distribution of Flag-DERL2 and HA-BAG6 in QBC939 cells was examined by immunofluorescence staining; as depicted in Fig. 11D, both proteins colocalized in QBC939 cells. This finding was further substantiated by western blot analysis, which demonstrated that depletion of DERL2 in QBC939 cells reduced BAG6 expression (Fig. 11E). Furthermore, increasing the amount of Flag-DERL2 vector in QBC939 cells transfected with HA-BAG6 vectors enhanced HA-BAG6 expression (Fig. 11F). To assess whether DERL2 affects BAG6 protein stability, cycloheximide (CHX), a protein translation inhibitor, was employed. HA-BAG6 vectors were transfected into 293T cells with or without the Flag-DERL2 vector, followed by western blot analysis of BAG6 expression in 293T cells treated with CHX for varying durations. As shown in Fig. 11G, BAG6 expression gradually decreased with prolonged exposure to CHX, indicating that DERL2 influenced the half-life of BAG6. Intriguingly, the tight correlation between DERL2 and BAG6 in CHOL was further substantiated by Pearson correlation analysis using the GEPIA website (Fig. 12A, B). In summary, these findings suggest that DERL2 drives the oncogenic properties of BAG6 to promote CHOL progression.

Fig. 13 Drug sensitivity analysis in DERL2-high and DERL2-low groups using the GDSC database. Chemotherapeutic drugs included: QS11 (A), PF-562271 (B), Shikonin (C), BAY 61-3606 (D), 5-Fluorouracil (E), Bleomycin (F), Epothilone B (G), AS601245 (H), Genentech Cpd (I), and FMK (J). Significance: ***p < 0.001
DERL2 silencing attenuates CHOL cell chemoresistance
Previous studies have reported BAG6-mediated chemotherapy resistance in breast cancer and colorectal cancer [40,41]. To investigate the relationship between DERL2 and drug sensitivity in CHOL patients, we performed drug sensitivity analysis using the GDSC database, stratifying CHOL patients by DERL2 expression. Our findings revealed that DERL2-high CHOL patients exhibited higher sensitivity to several chemotherapeutic drugs compared with DERL2-low CHOL patients. The specific chemotherapeutic drugs included QS11, PF-562271, BAY 61-3606, 5-Fluorouracil, Bleomycin, Epothilone B, AS601245, Genentech Cpd, and FMK (Fig. 13). Additionally, other members of the Derlin family have been implicated in chemoresistance in bladder cancer [42]. Gemcitabine is a standard chemotherapeutic agent used in cancer therapy; therefore, we further investigated the impact of DERL2 expression on the sensitivity of QBC939 cells to Gemcitabine. First, we assessed DERL2 expression in QBC939 cells after 24 h of exposure to different concentrations (0, 1, 2.5, and 5 nM) of Gemcitabine. Western blot analysis revealed an increase in DERL2 expression with increasing doses of Gemcitabine (Fig. 14A). Subsequently, we examined whether DERL2 deficiency affected the sensitivity of QBC939 cells to Gemcitabine. DERL2-deficient QBC939 cells were treated with various concentrations of Gemcitabine (0, 1, 2.5, and 5 nM), and cell proliferation was assessed using CCK8 assays. As depicted in Fig. 14B, DERL2 depletion enhanced the inhibitory effect of Gemcitabine on QBC939 cell proliferation. Furthermore, we evaluated key apoptotic effector molecules, including PARP1, cleaved PARP1, caspase-3, and cleaved caspase-3, and observed that DERL2 depletion increased the expression of cleaved PARP1 and cleaved caspase-3; this increase was further potentiated by additional Gemcitabine treatment (Fig. 14C). Importantly, after 5 nM Gemcitabine treatment, apoptosis of DERL2-deficient QBC939 cells was also augmented compared with normal QBC939 cells (Fig. 14C). Collectively, these findings suggest that DERL2 plays a role in determining the sensitivity of CHOL to Gemcitabine.
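The CCK8 readout described above is typically converted to an inhibition rate from optical-density (OD) measurements. The paper does not state its exact formula, so the following is a minimal sketch using the conventional definition; the function name and the optional blank correction are our assumptions:

```python
def cck8_inhibition_rate(od_treated, od_control, od_blank=0.0):
    """Conventional CCK8 inhibition rate (%), an assumed standard formula
    (not stated in the paper):
    (1 - (OD_treated - OD_blank) / (OD_control - OD_blank)) * 100.
    """
    if od_control - od_blank == 0:
        raise ValueError("control OD must differ from blank OD")
    return (1.0 - (od_treated - od_blank) / (od_control - od_blank)) * 100.0
```

For example, a treated well reading half the control OD corresponds to a 50% inhibition rate under this definition.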
Discussion
Herein, using different bioinformatic analyses, we found that DERL2 is highly expressed during CHOL progression and is tightly associated with worse clinical outcomes of CHOL patients. Furthermore, DERL2 was associated with several oncogenic signaling pathways. The results also suggested that highly expressed DERL2 might shape the tumor immune infiltrates of CHOL. Mechanistically, we found that DERL2 might interact with BAG6 and stabilize BAG6 expression, which ultimately promoted CHOL cell proliferation. Additionally, highly expressed DERL2 induced chemotherapy resistance to Gemcitabine in CHOL cells. Our findings suggest that targeting DERL2 might effectively interfere with CHOL progression. Cancer cells rely on favorable endoplasmic reticulum (ER) stress conditions for their survival; however, excessive ER stress can trigger apoptosis in these cells [21,43,44]. To counterbalance this stress, certain cells employ the ER-associated protein degradation (ERAD) mechanism, which facilitates the clearance of misfolded and/or mislocalized proteins, including glycoproteins (ERAD substrates), within the ER lumen [17]. DERL2 has been identified as a crucial component of the ER-resident dislocation complex responsible for degrading misfolded glycoproteins in the ER. Notably, a previous investigation in chronic lymphocytic leukemia mice examined the distinct expression patterns of DERL2 in cancerous tissues and cells. In the present study, we made novel observations regarding the differential expression of DERL2 mRNA across various cancer types, accompanied by the presence of DERL2 mutants in different cancers. In the case of CHOL, we detected elevated DERL2 expression in cancerous tissues, which exhibited a strong correlation with poor clinical outcomes. Patients with high DERL2 expression may derive potential benefits from immunotherapy but displayed strong chemoresistance to conventional chemotherapy. In in vitro cell functional assays, we provide compelling evidence that DERL2
overexpression promotes cell proliferation, whereas its knockout yields the opposite effect. Moreover, our in vitro experiments assessing cellular chemoresistance demonstrate that silencing DERL2 enhances the sensitivity of CHOL cells to Gemcitabine. Importantly, it is worth noting that other members of the Derlin family have also been implicated in modulating sensitivity to chemotherapeutic drugs [42]. Collectively, our data underscore the pivotal role of DERL2 in driving CHOL cell proliferation and chemoresistance in vitro.
BAG6, a member of the BAG gene family, exhibits widespread expression in various tissues, including the testis, spleen, and 25 other tissues. Originally identified within the human major histocompatibility complex class III domain [45][46][47], the BAG6 gene encodes a nuclear protein that plays a significant role in cell apoptosis and autophagy [48]. Furthermore, the BAG6 complex, in conjunction with the E1A binding protein p300, assumes a critical role in the acetylation of p53 or forkhead box protein O1 (FoxO1) in response to DNA damage [49]. This complex, formed by BAG6 and a co-chaperone, facilitates the biogenesis and quality control of hydrophobic proteins [50]. Notably, BAG6 has been implicated in driving colorectal cancer progression as a nucleocytoplasmic shuttling protein [45,51]. In a study by Ragimbeau et al., silencing of BAG6 was demonstrated to disrupt the phospho-ubiquitylation of mitochondrial proteins, thereby inhibiting cancer progression [45]. However, the precise identity of the critical BAG6 modulators in cancer progression remains elusive. In our current investigation, we discovered that DERL2 stabilizes BAG6, potentially contributing to cancer progression.
In conclusion, our findings demonstrate for the first time the oncogenic function of robustly expressed DERL2 during CHOL progression. Furthermore, DERL2 interacted with BAG6 to promote the drug resistance of CHOL cells. Therefore, blocking the DERL2/BAG6 axis may provide a strong rationale for therapies against CHOL progression.
Luzheng Liu and Jincai Wu contributed equally to the work.
Key Points
• Amplification of DERL2 negatively impacted the four survival measures.
Fig. 1 Expression and prognosis of the Derlin proteins in CHOL malignancy. A The expression of three members of the Derlin family in the TCGA CHOL cohort. B Overall survival by DERL1 expres-
Fig. 5 Identification of DERL2-associated signaling pathways. A GSEA of DNA repair-associated genes. B GSEA of the Myc target gene set. C GSEA of the Myc targets V2 gene set. D GSEA of the Kras signaling path-
Fig. 6 DERL2 expression correlates with immune cell infiltration in CHOL. Analyses of the correlation of DERL2 mRNA levels with immune cell infiltration in CHOL using the immune microenvironment module on TISCH (http://tisch.comp-genomics.org). A Lollipop diagram
Fig. 7 Cancer Cell Line Encyclopedia (CCLE) data analysis of DERL2 transcription across a panel of CHOL cells
Fig. 8 Overexpressing DERL2 boosts CHOL cell proliferation. A Western blot analysis of DERL2 expression in a panel of CHOL cells and HiBECs. B Western blot analysis of DERL2 expression in RBE and HCCC9810 cells transfected with HA-DERL2 vectors or not. C CCK8 assays analyzing HCCC9810 cell proliferation with or without DERL2 overexpression. D CCK8 assays analyzing RBE cell proliferation with or without DERL2 overexpression. E Colony formation assays analyzing the HCCC9810 colony formation rate with or without DERL2 overexpression. F Colony formation assays analyzing the RBE colony formation rate with or without DERL2 overexpression. Significance: *p < 0.05, **p < 0.01
Fig. 11 DERL2 interacts with BAG6. A Schematic flow chart of the experimental design. B 293T cells were transfected with Flag-DERL2 and/or HA-BAG6 vectors. Forty-eight hours later, the complex immunoprecipitated with anti-HA beads was analyzed using the corresponding antibodies. C 293T cells were transfected with Flag-DERL2 and/or HA-BAG6 vectors. Forty-eight hours later, the complex immunoprecipitated with anti-Flag beads was analyzed using the corresponding antibodies. D Fluorescence confocal microscopy analysis of the colocalization of DERL2 with BAG6 in
Fig. 14 DERL2 deficiency increases the sensitivity of QBC939 cells to Gemcitabine. A Western blot analysis of DERL2 expression in QBC939 cells treated with different doses of Gemcitabine. B The inhibition rate of Gemcitabine on QBC939 cells with or without DERL2 deficiency. Western blot analysis of the PARP1,
Phenotypic Stability of Sex and Expression of Sex Identification Markers in the Adult Yesso Scallop Mizuhopecten yessoensis throughout the Reproductive Cycle
Simple Summary
Bivalve sex is thought to fluctuate depending on environmental conditions. So far, there has been no investigation on the phenotypic stability of sex in the commercially important Yesso scallop Mizuhopecten yessoensis. The present study revealed that the sex of the Yesso scallop is stable after initial sex differentiation and that this species maintains a sex-stable maturation system throughout its life. In addition, gonad differentiation for each sex was precisely characterized by using molecular markers throughout the maturational cycle.
Abstract
The objective of the present study was to analyze the phenotypic stability of sex after sex differentiation in the Yesso scallop, which is a gonochoristic species that has been described as protandrous. So far, no study has investigated in detail the sexual fate of the scallop after completion of sex differentiation, although bivalve species often show annual sex change. In the present study, we performed a tracking experiment to analyze the phenotypic stability of sex in scallops between one and two years of age. We also conducted molecular marker analyses to describe sex differentiation and gonad development. The results of the tracking experiment revealed that all scallops maintained their initial sex phenotype, as identified in the last reproductive period. Using molecular analyses, we characterized my-dmrt2 and my-foxl2 as sex identification markers for the testis and ovary, respectively. We conclude by proposing that the Yesso scallop is a sex-stable bivalve after its initial sex differentiation and that it maintains a sex-stable maturation system throughout its life. The sex-specific molecular markers identified in this study are useful tools to assess the reproductive status of the Yesso scallop.
Introduction
Systems of sex differentiation and phenotypic stability of sex have evolved into diverse forms in molluskan species and often show species-specific features. Owing to the diversity of sex-controlling systems, there is still controversy about the molecular mechanisms of sex differentiation, particularly in bivalves. The Yesso scallop (Mizuhopecten yessoensis, previously called Patinopecten yessoensis) is a gonochoristic species that has been described as protandrous [1]. Previous studies [2][3][4][5] have proposed that all juveniles first differentiate into males possessing a small amount of sperm, and then, in the next reproductive season, some of the males undergo a sex change to female via a hermaphroditic transition phase. The sex differentiation of the Yesso scallop is generally completed within one year in most parts of Japan where aquaculture of this species is performed [5]. However, this hypothetical protandrous model may require further validation because Maru [6] pointed out that no study has confirmed whether the hermaphroditic gonad eventually transforms into the ovary of a female. Interestingly, many population-based studies [2,4,5] have reported that after the completion of sex reversal, the Yesso scallop (1-5 years of age) normally showed a sex ratio of approximately 1:1 in various culture conditions, while hermaphroditic gonads were very rarely observed [3]. These findings imply that sex determination in the Yesso scallop is strictly controlled by genetic factors rather than by environmental ones. These observations suggest that the Yesso scallop has a system that firmly regulates phenotypic sex consistently throughout life, even though no sex chromosomes have yet been identified in mollusks.
To understand the phenotypic stability of sex in bivalves, Park et al. [7] directly confirmed annual sex reversal in the Pacific oyster (Crassostrea gigas) by tag tracking. They confirmed that sub-populations of both males and females can re-change sex after the first sex reversal. In fact, the results revealed that the phenotypic sex of the Pacific oyster fluctuates annually and is regulated by external factors (i.e., temperature, food availability, exogenous steroids, and pollutants), as reviewed elsewhere [8]. Thus, it can be said that the Pacific oyster has high plasticity of sex differentiation throughout its life. In addition, a similar annual sex change has been reported in a bloody clam (Tegillarca granosa) [9]. For scallops, Coe [10] reported variations in sexuality of several European species of Pecten that are mostly hermaphroditic. In addition, a recent book [1] reviewed sexuality in various scallops that show species-specific varieties in sexuality. However, to the best of our knowledge, no study has investigated the phenotypic stability of sex in the Yesso scallop, despite the strict control of sex determination mentioned above. Therefore, we aimed to analyze the phenotypic change of sex after the first sex reversal in the Yesso scallop by performing a tag-tracking experiment. In previous studies, the judgment of sex was performed by histological observation. After sex differentiation, the seasonal changes of gonad development were classified into seven stages [4,11,12]. During the reproductive period, the sex of an adult scallop can be determined by visual judgment of gonad color. Therefore, the Yesso scallop is a good model species that can be used for understanding the mechanism of bivalve reproduction [11,13]. 
During the annual sex-differentiating phase, however, both males and females exhibit immature and transparent gonads, wherein only a very small number of undifferentiated germ cells exists (i.e., spermatogonia for males or oogonia for females), which are not distinguishable histologically [8]. Hence, thus far, it has not been possible to perform definitive judgments of sex during the sex-differentiating period at every age. Therefore, a sex-specific molecular marker is required to confirm sex in order to study sex differentiation during gonad development in the Yesso scallop.
In vertebrates, the early stages of gonad sex differentiation have been described using several molecular markers, alongside studies in teleosts [14]. Ijiri et al. [15] confirmed the identity of sex differentiation-related molecules (i.e., steroidogenic enzymes, steroid receptors, transcription factors, and anti-Müllerian hormone (Amh) during gonadal differentiation and development) in tilapia, which is a gonochoristic fish with an XX/XY sex-determining system. Doublesex/male-abnormal-3-related transcription factor 1 (Dmrt1) exhibited male-specific expression, suggesting that it plays an important role in testicular differentiation. Indeed, Dmrt-related molecules have been found in several bivalves and are thought to be key players in male gonadal development (as in the case of vertebrates) in the Pacific oyster [16], Chlamys nobilis [17], lion-paw scallop (Nodipecten subnodosus) [18], and Akoya pearl oyster (Pinctada fucata) [19]. In ovarian differentiation, Nagahama [20] found that the forkhead transcription factor (Foxl2) activates P450arom transcription to lead to ovary differentiation, whereas Dmrt1 inhibits this in teleosts. In addition, sexually dimorphic expression of foxl2 has been reported in bivalves, namely, Chlamys farreri [21] and pearl oyster (Pinctada margaritifera) [22]. In the Yesso scallop, Li et al. [23] recently developed a method for scallop sex identification during the sex-differentiating phase by using log10(DMRT1L/FOXL2). Therefore, mechanistically, the involvement of Dmrt and Foxl2 could be critical for testis and ovary differentiation, respectively, after the completion of sex differentiation in the Yesso scallop. However, there is no information regarding the phenotypic stability of sex in the Yesso scallop after sex differentiation, even though this scallop is a good model species for the analysis of phenotypic sex.
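The log10(DMRT1L/FOXL2) ratio used by Li et al. [23] can be sketched as a simple score, with positive values indicating male-biased (dmrt-dominant) expression and negative values female-biased expression. This is a minimal illustration only; the function names and the zero decision threshold are our assumptions, not the cutoff reported by Li et al.:

```python
import math

def sex_ratio_score(dmrt1l_expr, foxl2_expr):
    """log10(DMRT1L/FOXL2) expression score.
    Positive -> male-biased expression; negative -> female-biased.
    Inputs are relative expression levels (must be > 0)."""
    return math.log10(dmrt1l_expr / foxl2_expr)

def classify_sex(dmrt1l_expr, foxl2_expr, threshold=0.0):
    """Hypothetical classifier on the score; the 0.0 threshold is an
    assumption for illustration, not the published cutoff."""
    return "male" if sex_ratio_score(dmrt1l_expr, foxl2_expr) > threshold else "female"
```

For example, a gonad with 100-fold higher DMRT1L than FOXL2 expression scores +2 and would be called male under this assumed threshold.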
Tracking Experiment for the Analysis of Phenotypic Stability of Sex in the Yesso Scallop
In March 2016, 10-month-old farmed Yesso scallops (M. yessoensis) were purchased (approximately 300 scallops) from a local commercial supplier (Ogatsu Bay, Miyagi, Japan). This population in Miyagi Prefecture already undergoes sex differentiation to female or male at 10 months of age (Figure 1A). This scallop species has a clear reproductive period in spring, as described elsewhere [11]. During this reproductive period, the gonads exhibit a sex-specific color (i.e., milky white for the testis in males, as shown in Figure 1B, and orange-red for the ovary in females, as shown in Figure 1C). Meanwhile, in summer, male and female gonads exhibit the same beige color during the spent stage (Figure 1D) and are not distinguishable histologically. For the tracking experiment, sexing was performed for all individuals in March 2016 (10 months of age) by visual judgment of gonad color on shore (Figure 1E). If there was uncertainty about the sex, we discarded the scallops (n < 5). In total, male (n = 140) and female scallops (n = 150) were identified, divided into two groups (Figure 1F), hung on a nylon rope, each with a plastic pin (Figure 1G,H), and used for the subsequent rearing study as group A (pro-male population) and group B (pro-female population). Both groups were hung on separate ropes and reared for an extra nine months until the next reproductive period (Figure 1I, Table 1). Sampling for sexing in the subsequent reproductive period was performed in October (16 months of age) and December 2016 (19 months of age). For all specimens sampled, shell length (SL) and softbody weight (BW) were measured. Then, the softbody and gonad were dissected, weighed for calculation of the gonad index (GI; 100 × gonad weight/softbody weight (%)), and fixed for sexing by subsequent histological analysis.
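The gonad index defined above is a straightforward ratio; as a minimal sketch (the function name is ours, not from the study):

```python
def gonad_index(gonad_weight_g, softbody_weight_g):
    """Gonad index (GI) as defined in the study:
    GI = 100 x gonad weight / softbody weight (%)."""
    if softbody_weight_g <= 0:
        raise ValueError("softbody weight must be positive")
    return 100.0 * gonad_weight_g / softbody_weight_g
```

For example, a 5 g gonad in a 25 g softbody gives a GI of 20%.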
The sampled gonads were fixed with Davidson's solution (artificial sea water/glycerin/formalin/ethanol/acetic acid, 3:1:2:3:1, v/v) at 4 °C for 24 h, rinsed with distilled water, dehydrated through an ascending ethanol series, and then embedded in paraffin. Cross sections, 6 µm thick, were prepared, mounted on glass slides (Matsunami Glass, Tokyo, Japan), and used for hematoxylin and eosin (HE) (Muto Pure Chemicals, Tokyo, Japan) staining. Histological observation was performed on the specimens sampled in October and December 2016. Because the gonad color was still faint in October, at the beginning of the reproductive period, we carefully evaluated sex by histological observation.
Sample Preparation for mRNA Expression Analyses
One- to two-year-old farmed scallops (M. yessoensis) were purchased several times from local commercial suppliers (Mutsu Bay, Aomori, and Ogatsu Bay, Miyagi Prefecture, Japan) from September 2016 to March 2017. The gonads were sampled and stored in RNAlater stabilization solution (Thermo Scientific, Waltham, MA, USA) at −30 °C for subsequent RNA extraction. At the same time, another piece of gonad was fixed in Davidson's fixative overnight at 4 °C and processed as formalin-fixed paraffin-embedded tissue for in situ hybridization (ISH) detection with HE staining. Total RNA was extracted from various tissues using the RNeasy mini kit (Qiagen, Tokyo, Japan) in accordance with the manufacturer's protocol and quantified by spectrophotometry with a NanoDrop ND-1000 instrument (Thermo Scientific). RNA integrity was assessed by electrophoresis on a 1% (w/v) agarose gel. Total RNA (1 µg) was reverse-transcribed to cDNA using high-capacity cDNA reverse-transcription kits (Life Technologies, Tokyo, Japan).
Transcriptomic Survey and cDNA Cloning
A transcriptomic survey for dmrts and foxl2 was conducted by local blasting with the Yesso scallop transcriptome datasets (SRX047537 [24] and our previous data [25]). Known protein sequences reported in related species (e.g., Akoya pearl oyster, C. farreri, Pacific oyster, limpet) were used as queries, and candidate unigenes were carefully assessed by several bioinformatic analyses.
Bioinformatic Analyses
Deduced amino acid sequences of Dmrts and Foxl2 were generated from candidate contigs obtained from the above transcriptomic survey. Amino acid sequences were aligned with Clustal W2 and used for Bayesian inference (MrBayes v3.1.2, mrbayes.csit.fsu.edu) under a mixed model of amino acid substitutions (1,000,000 generations, sampling every 10th generation, and burn-in of the first 10,000 trees). Graphical representations of the phylogenetic trees were obtained with FigTree (http://tree.bio.ed.ac.uk/software/figtree/). Domain structure analysis was also performed using SMART (http://smart.embl-heidelberg.de/).
Semi-Quantitative RT-PCR Assay
Total RNA and cDNA were prepared from the gonads during the early differentiating stage of the adult Yesso scallops in November 2015, as described previously [11]. Semi-quantitative RT-PCR was performed with GSP sets (Table 2) and Takara Ex Taq HS (TaKaRa-Bio, Kusatsu, Shiga Prefecture, Japan). Thermocycling parameters were 95 °C for 5 min, followed by 35 cycles (dmrts: Unigene22131, Unigene26880, and Unigene30667) or 30 cycles (my-foxl2) of 30 s at 95 °C, 30 s at 58 °C, and 1 min at 72 °C, with a final elongation step at 72 °C for 5 min. The PCR products were electrophoresed on a 2% (w/v) agarose gel and photographed. The PCR amplicons were cloned into pGEM-T Easy vectors (Promega, Madison, WI, USA), and the cloned plasmids were extracted with the Zyppy plasmid miniprep kit (Zymo Research, Irvine, CA, USA) and sequenced (Macrogen, Seoul, South Korea).
qPCR Assay
Total RNA and cDNA were prepared from gonads during the maturation period of the adult Yesso scallops (n = 5 for each sex in September, October, and December 2016, and in January, February, and March 2017; farmed in Miyagi, Japan). The mRNA levels of my-dmrt2, my-foxl2, my-soxb1, my-tesk, and my-vtg were quantified in scallop testes and ovaries during the reproductive period using a qPCR system (7300 Real-time PCR System; Applied Biosystems, Warrington, UK), as described elsewhere [25], with specific primer sets (Table 2). The program was initially set at 50 °C for 2 min and 95 °C for 10 min, followed by 40 cycles of 15 s at 95 °C and 60 s at 60 °C. Dissociation curve analysis was set at 95 °C for 15 s, 60 °C for 60 s, and 95 °C for 15 s. The cycle threshold values were set at 0.2 to define the level of arbitrary fluorescence intensity in the 7300 System SDS software. Amplification efficiency (E, %) was calculated from a standard curve constructed from a dilution series of pooled cDNA samples (1:1, 1:2, 1:4, 1:8, and 1:16). Three stable reference genes (DEAD-box RNA helicase (heli), ubiquitin (ubq), and 60S ribosomal protein L16 (rpl16)), reported by Feng et al. [27], were validated with all gonad cDNA specimens sampled. Using geNorm software (https://genorm.cmgg.be/), the two best reference genes (rpl16 and heli) were selected among the three (geNorm stability values (M) were 0.617, 0.632, and 0.716 for rpl16, heli, and ubq, respectively) and used for normalization. The mRNA expression levels were calculated using the relative standard curve method with the normalization factors calculated above. For comparisons of means within the same sex, one-way ANOVA was used. If the one-way ANOVA was significant, Tukey's multiple comparison test was used as a post-hoc test. For one-way ANOVA tests, significance levels were set at p < 0.05. For comparisons of means between sexes at a particular sampling point, two-way ANOVA was used.
If the two-way ANOVA was significant, Bonferroni post-tests were performed.
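The two numerical steps above, estimating amplification efficiency from the dilution-series standard curve and computing a geNorm-style normalization factor from the selected reference genes, can be sketched as follows. This is a minimal illustration under standard definitions (E from the Ct-vs-log10(amount) slope; normalization factor as the geometric mean of reference-gene levels); the function names and fitting details are ours, not the exact SDS/geNorm implementations:

```python
import math

def amplification_efficiency(dilutions, ct_values):
    """Estimate amplification efficiency E (%) from a standard curve:
    least-squares fit of Ct against log10(relative template amount),
    then E = (10**(-1/slope) - 1) * 100.
    A perfect doubling per cycle (slope = -3.32) gives E = 100%."""
    x = [math.log10(d) for d in dilutions]
    n = len(x)
    mx = sum(x) / n
    my = sum(ct_values) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, ct_values))
             / sum((xi - mx) ** 2 for xi in x))
    return (10.0 ** (-1.0 / slope) - 1.0) * 100.0

def normalization_factor(ref_gene_levels):
    """geNorm-style normalization factor: the geometric mean of the
    relative expression levels of the selected reference genes
    (here, e.g., rpl16 and heli)."""
    logs = [math.log(v) for v in ref_gene_levels]
    return math.exp(sum(logs) / len(logs))
```

For example, a 1:1 to 1:16 dilution series in which Ct increases by exactly one cycle per two-fold dilution yields E = 100%, and a target's expression would then be divided by the normalization factor of its sample.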
Tracking of the Sex Phenotype in the Yesso Scallops between One and Two Years of Age
To analyze the sex phenotype in the next reproductive period, we designed a tracking experiment (Figure 1A). We purchased and sexed the cultured Yesso scallops at 10 months of age in March 2016. During sexing, we confirmed that most scallops possessed colored gonads, indicating that they had reached maturity with sex-differentiated gonads (Figure 1B-D). At the beginning of the experiment (March 2016), the scallops were 10 months old, and the sex proportion of the 290 sexed scallops was 140:150 (males/females), showing an approximately equal sex ratio (Table 1). Approximately 10 scallops were not counted because of uncertain sexing. Next, we set up two rearing ropes, one for each sexed scallop population, and performed additional rearing until the next reproductive season to confirm their subsequent sex phenotype (Figure 1I). During the following nine months of rearing, the scallops exhibited normal increases in shell size (Figure 2A) and softbody weight (Figure 2B). The GI dropped from the beginning of the fully mature phase (Figure 2C). No sex differences in growth or reproductive histories were observed (Figure 2A-C). At 16 months of age, in October 2016 (six months later), histological analysis was performed for each group (n = 10 per group) and identified that all individuals of each group exhibited the initial sex identified at 10 months of age (Table 1, Figure 2), at the beginning of the maturation stage (October in Figure 2D). At 19 months of age, in December 2016 (nine months later), they had reached the fully mature phase (December in Figure 2D). Another histological analysis identified that they all maintained their initial sex phenotype (n > 21 per group) (Table 1).
Characterization of dmrts and foxl2 in the Yesso Scallops
By performing local blasting with the Yesso scallop transcriptome datasets, we found three contigs of dmrt cDNAs and one contig of foxl2 cDNA. Three dmrt and one foxl2 candidate contigs that mostly covered the open reading frames were obtained. Next, we confirmed their sequences by blasting against the genome sequence resource deposited in NCBI [28]. Sequences identical to the above contigs were also identified in NCBI resources (GenBank accession numbers are presented in Table 2; my-dmrt2_Unigene22131: XM_021498039, my-dmrt4-5_Unigene26880: XM_021521599, my-dmrtmab3_Unigene30667: XM_021513113, my-foxl2_Unigene30321: XM_021497746). Domain analysis revealed that the three Dmrt candidates had a Doublesex/Mab-3 (DM) domain at the N terminus and two had a DMRTA-specific C-terminal (DMA) domain (Figure 3A). The Foxl2 candidate had a Forkhead (FH) domain, which acts as a sequence-specific DNA-binding transcription factor (Figure 4A).
Validation of Molecular Markers of Sex Identification for Testis/Male or Ovary/Female in the Adult Yesso Scallop
Sex-specific mRNA expression of my-dmrts and my-foxl2 in the gonads was examined by RT-PCR analysis with testis and ovary cDNAs of adult scallops. To evaluate sex identification markers, we chose gonads at the early differentiating stage as specimens, because their sex could not be distinguished by visual judgment of gonad color (data not shown). For the nine Yesso scallops, histological observation of the gonads was first performed to carefully distinguish their sex (Figure 5A). Then, RT-PCR screening for the my-dmrt and my-foxl2 candidates was performed (Figure 5B). Of the three my-dmrts, Unigene22131 (my-dmrt2) showed dominant expression in the testes rather than in the ovaries, whereas the two other dmrts (Unigene26880 (my-dmrt4/5) and Unigene30667 (my-dmrt/mab3)) showed uniform expression in testes and ovaries. In addition, Unigene30321 (my-foxl2) was specifically expressed in the ovaries, with no mRNA expression in the testes. ISH detection supported the sex-specific expression of my-dmrt2 and my-foxl2 in the testis and ovary, respectively.
At the early differentiating stage, the testes were filled with proliferating spermatogonia, whereas the ovaries were filled with growing primary oocytes and fewer oogonia ( Figure 6). As a testis/male-specific marker, my-dmrt2 mRNA was detected in spermatogonia in the testis ( Figure 6A-a), while its faint signals were seen in some ovarian cells (inset in Figure 6C). Regarding ovary/female-specific markers, my-foxl2 mRNA was detected in follicle cells attached to growing oocytes in the ovary, and its expression was absent in testicular cells ( Figure 6D-d,F). In addition, no signal was observed in sense probe conditions for my-dmrt2 and my-foxl2 in both testis and ovary ( Figure 6B,E).
Expression Profiles of Sex Identification Markers in the Yesso Scallop Gonads during the Reproductive Cycle
During the reproductive cycle, the mRNA expression of my-dmrt2, my-foxl2, my-soxb1, my-tesk, and my-vtg in testes and ovaries was quantified by real-time qPCR (Figure 7). The maturation stages were defined by GI (Figure 7A) as follows: September (early sex differentiation), November (growing), December (early mature), January (middle mature), February (fully mature), and March (fully mature and spawning), as reported elsewhere [11]. my-dmrt2 was consistently expressed at a higher level in testes than in ovaries throughout the reproductive cycle (Figure 7B). The mRNA expression of my-dmrt2 in testes drastically increased in November, after gonad sex differentiation in September, and then gradually decreased until the fully mature stage in March. The mRNA expression of my-tesk showed no sex-biased pattern throughout the reproductive cycle (Figure 7C): my-tesk mRNA was expressed at a higher level in testes in November and February, whereas no differences in mRNA levels were observed in ovaries. In contrast, my-foxl2 mRNA was expressed at a higher level in ovaries at all maturational stages, while a much lower expression was seen in testes (Figure 7D).

Figure 7. History of (A) GI and quantification of (B) dmrt2, (C) tesk, (D) foxl2, (E) soxb1, and (F) vtg mRNAs in the gonads of the Yesso scallops throughout the reproductive cycle. White and black bars show the relative mRNA levels in testes and ovaries, respectively. Error bars show SEM at each sampling point (n = 5 for each sex). Different superscript letters indicate significant differences within sex (lower case for ovaries and upper case for testes). Asterisks *, **, and *** indicate significant differences between testes and ovaries at p < 0.05, p < 0.01, and p < 0.001, respectively.
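Relative mRNA levels of the kind quantified here are commonly computed with the Livak 2^-ΔΔCt method. The paper does not spell out its normalization scheme in this section, so the reference gene and Ct values in the sketch below are purely illustrative assumptions:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^-ddCt relative quantification: the target gene's Ct is
    normalized to a reference gene, then expressed relative to a
    calibrator sample (e.g., a September early-differentiation gonad)."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** -ddct

# hypothetical Ct values: a November testis sample vs. a September calibrator
fold_change = relative_expression(20.0, 18.0, 24.0, 18.0)
print(fold_change)  # 16.0, i.e., a 16-fold increase over the calibrator
```

A lower Ct means more template, so a sample whose normalized Ct is four cycles below the calibrator's corresponds to a 2^4 = 16-fold higher relative expression.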
Phenotypic Stability of Sex in the Yesso Scallop
The present study is the first to report on the analysis of phenotypic stability of sex after the sex differentiation phase in the Yesso scallop. In most situations, one-year-old Yesso scallops exhibited a 1:1 sex ratio under culture conditions [2,4,5]. Kawamata [5] reported the sex differentiation pattern in the Yesso scallop. After birth, all scallops differentiate into males, with spermiation at 4-5 months of age. At 7 months, some of the males enter a regressing phase of testis where sperm are phagocytized and then start a transition process with the generation of ovarian germ cells in germinal acini. At 8 months of age, approximately 35% and 57% of scallops are males and females, respectively. The remaining 8% are hermaphrodites. It was proposed that hermaphrodites found before one year of age are in a transition phase of sex reversal. In addition, it was suggested that females do not directly differentiate from sexually immature scallops. However, no study has traced the history of sex reversal in the same individual over the first three years after birth to confirm the following points: (i) How does the hermaphroditic gonad arise from the testis? (ii) Does the hermaphroditic gonad differentiate into a normal ovary? To resolve these issues, tag tracking to confirm the sexual fate in the subsequent reproductive season is an optimal approach.
In the present study, we conducted confirmatory sexing with sexually mature scallops cultured in Ogatsu Bay (Miyagi Prefecture, Japan) at one year of age. The experiment with accurate judgments of sex identified an almost 1:1 sex ratio (females/males, 1:0.93) in 300 individuals. This equal sex ratio indicates the completion of the first sex differentiation at one year of age, as similarly observed in several studies [2,4,5]. In the present study, additional rearing (nine months) was performed for both female and male populations to confirm their subsequent sex phenotype in the next reproductive season at two years of age. Surprisingly, after the additional nine months of rearing, both female and male populations at two years of age exhibited the same sex phenotype as determined at one year of age. This is the first observation that the Yesso scallops maintained their sex phenotype after their first sex differentiation, indicating that their sex does not fluctuate depending on external factors, unlike that of the Pacific oyster [7] (Figure 8).
Figure 8. Scheme of phenotypic plasticity of sex in two bivalve species from tracking studies and expression of sex identification markers. The Pacific oyster Crassostrea gigas exhibits alternative sexuality [7], while the Yesso scallop Mizuhopecten yessoensis shows sex-stable sexual maturation from one to two years of age. In the scallop, my-dmrt2 is dominantly expressed in the male gonads during male differentiation, whereas my-foxl2, my-soxb1, and my-vtg are consistently expressed at a higher level in the female gonads during female differentiation.
In the Pacific oyster, the morphological sex of the gonad is affected not only by genetics but also by environmental factors [29]. Oysters cultured under food-rich conditions often show female-biased sex differentiation, in contrast to wild oysters in an oligotrophic environment [30]. Consistently, a second sex reversal was observed in the Pacific oyster, namely, from male to female from one to two years of age, then from female to male from two to three years of age [7]. It was proposed that the Pacific oyster exhibits flexibility in terms of sex differentiation. In contrast, the present study suggests that the Yesso scallop has very low sex plasticity after sex differentiation. The sex ratio in this scallop may thus be less influenced by food availability. Although food-rich conditions might be expected to promote differentiation to females, no study has shown that the proportion of females was more than double that of males under any rearing condition. These observations indicate that the sex of this scallop is not notably influenced by food availability, unlike that of oysters, suggesting that sex determination in the Yesso scallop is under strict genetic control.
In addition, Kawamata [5] reported that the timing of sex differentiation differed slightly among aquaculture sites (i.e., Lake Saroma, Funka Bay, and Mutsu Bay), particularly the timing of the appearance of females. Specifically, at Lake Saroma, the timing of sex differentiation was brought forward over the decade starting from 1978. We believe that this shift to early maturation was caused by eutrophication, resulting in an increase in phytoplankton. If so, food-rich conditions accelerate sex differentiation for both females and males. In addition, Wakui and Obara [2] reported that one-year-old scallops in Lake Saroma were all male, suggesting that food-scarce conditions result in an abundance of males. Taken together, however, these observations do not indicate that food-rich conditions increase the ratio of females to males but, rather, that they simply accelerate ovarian differentiation. Hence, females were observed earlier, but the sex ratio was always equal under different culture conditions at most aquaculture sites. These findings suggest that male sex differentiation requires less energy than female sex differentiation, as observed for teleosts, and that oogenesis involves higher energy consumption and a longer maturation time in the Yesso scallop.
my-dmrt2 as a Testis Marker in the Yesso Scallop
To describe the process of gonad sex differentiation in the Yesso scallop, molecular markers are crucial for understanding the molecular basis of gonad development, as proposed [23]. In the present study, we aimed to characterize dmrt and foxl2 as markers of testis or ovary differentiation, respectively. To identify all paralogs and/or isoforms of both dmrt and foxl2, we performed a transcriptomic survey using a local blast system, as reported elsewhere [25]. Our survey identified three paralogs of dmrt and one of foxl2 from the Yesso scallop transcriptome datasets.
For invertebrate dmrts, Chen et al. [31] summarized the genetic diversity of Dmrt members in metazoans and reported that mollusks should have four dmrt paralogs. Interestingly, Bellefroid et al. [32] reported that the snail Lottia gigantea possesses four Dmrt paralogs with a DM domain, and three of the four Dmrt paralogs have a DMA domain, called the DMRTA (DMRTA-specific C-terminal) motif [33]. Of the my-dmrts found in the Yesso scallop, all three my-dmrt paralogs possessed a DM domain, but only my-dmrt2 (Unigene22131) lacked a DMA domain, as found for vertebrate Dmrt1 and Dmrt2 [32]; the two other scallop my-dmrt paralogs possessed a DMA domain, which is typical of DMRTA proteins, whose function is unknown (Pfam; http://pfam.xfam.org). Roles of DMRTA proteins (e.g., Dmrt3, Dmrt4, and Dmrt5) in neurogenesis and patterning of the developing nervous system have been proposed in vertebrates [32]. Notably, our RT-PCR analysis identified that my-dmrt2 showed testis-dominant expression, whereas the two other my-dmrts exhibited no sex-biased mRNA expression. Among the three my-dmrts, none showed ovary-dominant mRNA expression, unlike in stony coral [31]. Dmrt1 is well known to be involved in male sex determination and differentiation in a wide range of vertebrates [34,35]. In contrast, this study reports that my-dmrt2, previously named DMRT1L [23], showed a testis-dominant expression pattern similar to those of vertebrate dmrt1 and Akoya pearl oyster dmrt2 [36]. Because no dmrt1 has been found in mollusks [17,31], we believe that my-dmrt2 is a functional ortholog of vertebrate dmrt1. Our ISH analysis revealed that my-dmrt2 mRNA was localized in male germ cells (i.e., spermatogonia), differently from a previous report [23]. Similar germ cell-dominant expression was seen in the Akoya pearl oyster [36] and for zebrafish dmrt1 [37], dmrt3 [38], and dmrt5 [39], whereas Sertoli cell-specific expression of dmrt1 was observed in teleosts [40,41] and a tetrapod [42].
In addition, human Dmrt1 showed dynamic expression in both Sertoli cells and spermatogonia [34]. These observations suggest the genetic divergence of dmrt-led molecular regulation for sex determination and differentiation in a species-specific manner.
Our qPCR results indicate that my-dmrt2 was dominantly expressed in testes throughout the reproductive period and showed a higher expression level in November (growing) and December (early mature) when spermatogonia were mainly proliferating [11]. However, my-tesk did not show consistent sex-biased expression [43]. Therefore, my-dmrt2 is a marker of testis differentiation in the Yesso scallop. Because my-dmrt2 was detected in the spermatogonia from the early differentiating stage, this marker is applicable for sexing at an early stage of sex differentiation.
my-foxl2 as an Ovary Marker in the Yesso Scallop
For my-foxl2, our in silico survey found one candidate contig with a highly significant e-value, indicating that the Yesso scallop has no other foxl2 sub-isoforms. Our RT-PCR analysis indicated that my-foxl2 showed ovary-specific expression, with no expression detected in testes. In contrast, my-dmrt2, a testis marker, was slightly expressed in ovaries. This slight expression may occur in oogonia, but our ISH detection method did not identify it, possibly because it was below the detection limit. Notably, ISH detection revealed that my-foxl2 mRNA was specifically localized in the ovarian follicle cells attached to oocytes, and no expression was seen in germ cells, differently from what was previously reported [23]. To the best of our knowledge, this is the first observation in a bivalve species of my-foxl2 expression strictly confined to ovarian follicle cells, implying tight regulation of sex differentiation in the Yesso scallop. For instance, Pacific oyster foxl2 (cg-foxl2) mRNA was detected in male and female gonads [44]. Specifically, cg-foxl2 mRNA was localized in spermatogonia to spermatids in male gonads and in oogonia to vitellogenic oocytes in female gonads. In addition, Zhikong scallop (C. farreri) foxl2 (cf-foxl2) mRNA was seen not only in follicle cells but also in ovarian and testicular germ cells [21]. Comparison of the above bivalve foxl2 mRNA localizations suggests that the Yesso scallop has a robust system of female differentiation led by foxl2 expression. In addition, although previous studies [21,44] have suggested the possible presence of natural antisense mRNA of foxl2 in bivalve gonadal cells, the present study identified no such signal.
Our qPCR results revealed that my-foxl2, my-soxb1, and my-vtg were consistently expressed at a higher level in ovaries throughout the reproductive period, and their mRNA levels generally increased with ovarian maturation, indicating their potential for use as ovary markers. Among these three genes, my-foxl2 is the more appropriate marker for sexing at the early differentiating stage, whereas my-vtg would be a useful indicator of oocyte maturation during the late stage of maturation [45]. Surprisingly, my-soxb1 showed ovary-dominant expression in the present study. Because Sox transcription factors were characterized as Sry-related high-mobility group box members and found to be essential for testis development [46], my-soxb1 was originally proposed as a testis marker in a previous study, like sox2 [43]. Based on the qPCR results, we believe that my-soxb1 is a functional ortholog of sox3 [47], which shows ovary-specific expression, as reported in Nile tilapia [48].
Conclusions
The present study focused on the phenotypic stability of sex in the adult Yesso scallop after sex differentiation, suggesting that this scallop is sex-stable after sex differentiation and maintains a sex-stable maturation system throughout its life. Importantly, the present study assessed all dmrt paralogs identified from the transcriptome of the Yesso scallop and eventually characterized my-dmrt2 and my-foxl2 as testis and ovary markers, respectively. Notably, our ISH results provided for the first time the sex-specific spatial expression patterns of my-dmrt2 and my-foxl2, which had not been reported previously [23]. It is noteworthy that our qPCR validations were performed with specimens whose sex phenotype in the previous reproductive season was known. This advantage enabled robust quantitative analysis, especially at the early differentiating stage, by avoiding incorrect sex judgements based on histological observation alone. Therefore, qPCR-based sexing with the sex identification markers reported in this study should be applicable for distinguishing scallop sex throughout the reproductive cycle using gonad biopsies. We believe that these molecular markers can be a powerful tool not only for the early evaluation of bivalve sex in broodstock management for seed production in shellfish aquaculture but also in fundamental research on bivalve gonad development and sex differentiation, including the neuroendocrinological regulation exerted by sex steroids and neuropeptides [13,25,49-51].
Magnetic Resonance Image-Based Modeling for Neurosurgical Interventions
Surgeries such as implantation of deep brain stimulation devices require accurate placement of devices within the brain. Because placement affects performance, image guidance and robotic assistance techniques have been widely adopted. These methods require accurate prediction of brain deformation during and following implantation. In this study, a magnetic resonance (MR) image-based finite element (FE) model was proposed by using a coupled Eulerian-Lagrangian method. Anatomical accuracy was achieved by mapping image voxels directly to the volumetric mesh space. The potential utility was demonstrated by evaluating the effect of different surgical approaches on the deformation of the corpus callosum (CC) region. The results showed that the maximum displacement of the corpus callosum increases with an increase of the interventional angle with respect to the midline. The maximum displacement of the corpus callosum was also predicted for different interventional locations and is related to the brain curvature and the distance between the interventional area and the CC. The estimated displacement magnitude of the CC region was consistent with clinical observations. The proposed method provides an automatic pipeline for generating realistic computational models for interventional surgery. Results also demonstrated the potential of constructing patient-specific models for image-guided, robotic neurological surgery.
Introduction
Image-guided surgery can achieve optimal performance with accurate target location in the brain. It can also be integrated with various robotic manipulation systems for remote surgery and delicate operations [1]. Among all forms of surgical procedures, image-guided brain intervention is a common practice for deep brain stimulation (DBS). However, deformation of brain tissue could occur during the interventional process due to its soft and viscoelastic properties. The deformed brain tissue poses a challenge for the current image guidance, which requires registration between the pre-operative and intra-operative images [2]. Therefore, quantification of the brain tissue deformation during the intervention is significant for accurate image guidance.
Intraoperative magnetic resonance (MR) imaging techniques can provide accurate information for image guidance in neurological surgery. Studies on MR imaging techniques for brain intervention can provide quantitative information for both image guidance and device manipulation by considering the brain tissue deformation [3,4]. For different modeling methods of predicting the brain deformation, finite element (FE) method could provide the most accurate estimation [5,6]. In addition, computational results of the FE model can also provide information for force feedback, which is crucial for robotic manipulations.
Most of the current FE models of the brain are based on general anatomical structures, which lack geometrical details of the brain [7]. Although general brain models can provide important information for quantifying brain responses during the interventional process, patient-specific models are still needed for clinical applications. In DBS, electrodes inserted into specific positions in the brain require sub-millimeter placement accuracy. Therefore, with pre- and intra-operative MR images available, a new FE model suited for surgery is still needed.
In this study, the estimated displacement magnitude of the corpus callosum (CC) region was evaluated by finite element (FE) simulation of an interventional needle inserted into a brain. In Section 2, an MR image-based finite element (FE) model of the brain with anatomical details is proposed. The brain tissues were treated as hyperelastic, viscoelastic materials. The coupled Eulerian-Lagrangian (CEL) method was adopted for the fluid environment of the brain tissue. The effects of interventional angles and locations for optimal surgery are studied in Section 3. In Section 4, the interventional results are presented and discussed. Preliminary results provide suggestions for the optimal interventional course to minimize deformation of the brain tissue.
Mesh Generation and Simulation Configuration
The image-based human brain model was generated based on the reconstruction of MR atlas images from the UCLA Brain Mapping Center [8]. Considering biomechanical fidelity, the anatomical regions of white matter (WM), gray matter (GM), brain stem (BS), cerebellum (CB), cerebrospinal fluid (CSF), corpus callosum (CC), and blood vessels (VE) were segmented from the brain MR images and constructed in the model. Briefly, each voxel from the MR image was extracted and mapped into a physical coordinate system. Then the connections of each node were established to build the mesh. Isotropic 2 mm hexahedral meshes for the different brain regions were generated in this study. The details of the brain model generation process are shown in Fig. 1.
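The voxel-to-mesh mapping described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it assumes a labeled 3D segmentation array, turns every nonzero voxel into one 8-node hexahedron, and deduplicates shared corner nodes; a real pipeline would add surface smoothing and export to a solver input format.

```python
import numpy as np

def voxels_to_hex_mesh(labels, spacing=2.0):
    """Map labeled image voxels to a hexahedral mesh: each nonzero voxel
    becomes one 8-node brick element at the given isotropic spacing (mm);
    corner nodes shared between neighboring voxels are created only once.
    Returns (nodes: (N, 3) coords, elements: (M, 8) node ids, regions: (M,))."""
    node_index = {}
    nodes, elements, regions = [], [], []
    # corner offsets of a unit cube, in a consistent hexahedron ordering
    corners = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
               (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    for i, j, k in zip(*np.nonzero(labels)):
        elem = []
        for dx, dy, dz in corners:
            key = (i + dx, j + dy, k + dz)
            if key not in node_index:          # deduplicate shared nodes
                node_index[key] = len(nodes)
                nodes.append([c * spacing for c in key])
            elem.append(node_index[key])
        elements.append(elem)
        regions.append(int(labels[i, j, k]))   # material/region label per element
    return np.array(nodes), np.array(elements), np.array(regions)
```

For example, two face-adjacent voxels yield two elements sharing four nodes (12 nodes total rather than 16), which is what makes the mesh conforming across region boundaries.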
For the interventional FE model construction, a rigid interventional needle was constructed and inserted at the midline position, perpendicular to the brain transverse plane. Both the brain and the needle were placed in a fluid environment by using the Eulerian domain, as shown in Fig. 2(a). For the FE simulation, a total intervention depth of 10 mm was simulated with an increment of 1 mm, and the brain stem was fixed as a boundary condition, as shown in Fig. 2(b). In order to study the interaction between the interventional needle and the surrounding CSF of the brain, the coupled Eulerian-Lagrangian (CEL) analysis method was introduced. For the CEL method, the complex material distributions in the Eulerian element mesh were assigned with the help of the volume fraction tool in Abaqus/CAE. All simulations in the current study were conducted with Abaqus/Explicit 6.17 (Simulia, Providence, RI).
Model Computation
Considering the large deformation and rate effects of brain tissue, a hyper-viscoelastic material model was employed [9-11]. In the current study, a second-order Ogden model was employed to describe the hyperelastic behavior of brain tissue [12]:

$$W = \sum_{i=1}^{2} \frac{2\mu_i}{\alpha_i^2}\left(\lambda_1^{\alpha_i} + \lambda_2^{\alpha_i} + \lambda_3^{\alpha_i} - 3\right) + \frac{K}{2}\left(J - 1\right)^2 \quad (1)$$

where $\mu_i$ and $\alpha_i$ are the material constants, $K$ is the bulk modulus, $\lambda_i$ are the three principal stretches, and the Jacobian $J = \det\mathbf{F} = \lambda_1\lambda_2\lambda_3$. The total second Piola-Kirchhoff stress in the brain can be obtained as [13]

$$S_{ij} = S_{ij}^{e} + S_{ij}^{v} \quad (2)$$

where $S_{ij}^{e}$ is the hyperelastic part of the second Piola-Kirchhoff stress, which can be calculated directly from Eq. (1). The viscous part of the second Piola-Kirchhoff stress can be described as [14]

$$S_{ij}^{v} = \int_{0}^{t} G_{ijkl}(t - \tau)\,\frac{\partial E_{kl}}{\partial \tau}\,\mathrm{d}\tau \quad (3)$$

where $E_{kl}$ are the components of the Green-Lagrangian strain tensor, $G_{ijkl}$ are the stress relaxation functions, and $t$ is the current time. The general form of $G_{ijkl}$ can be further written as [13,14]

$$G_{ijkl}(t) = g_1(t)\,\delta_{ij}\delta_{kl} + g_2(t)\left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\right) \quad (4)$$

where $g_1(t)$ and $g_2(t)$ are two independent relaxation functions. Each of the relaxation functions can be described using a Prony series equation [15]

$$g(t) = G_0\left[1 - \sum_{i=1}^{N} g_i\left(1 - e^{-t/\tau_i}\right)\right] \quad (5)$$

where $G_0$ is the instantaneous shear relaxation modulus, $N$ is the number of terms, $g_i$ is the relaxation modulus, and $\tau_i$ is the relaxation time. The material parameters are given in Tab. 1 and Tab. 2. In the current study, CSF was described as a hyperelastic material using a Neo-Hookean model; the material parameters are given in Tab. 3.
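The Ogden strain energy and Prony-series relaxation described above can be sanity-checked numerically. The sketch below uses placeholder parameter values, not the values from Tab. 1 and Tab. 2:

```python
import math

def ogden_energy(stretches, mu, alpha, K):
    """Second-order Ogden strain energy plus a simple volumetric penalty
    K/2 * (J - 1)^2; mu and alpha each hold two material constants."""
    l1, l2, l3 = stretches
    J = l1 * l2 * l3                      # Jacobian: volume change ratio
    W = 0.5 * K * (J - 1.0) ** 2
    for mu_i, a_i in zip(mu, alpha):
        W += (2.0 * mu_i / a_i ** 2) * (l1 ** a_i + l2 ** a_i + l3 ** a_i - 3.0)
    return W

def prony_modulus(t, G0, g, tau):
    """Time-dependent shear relaxation modulus from a Prony series:
    G(t) = G0 * (1 - sum_i g_i * (1 - exp(-t / tau_i)))."""
    return G0 * (1.0 - sum(gi * (1.0 - math.exp(-t / ti))
                           for gi, ti in zip(g, tau)))

# undeformed state stores no energy; at t = 0 the full modulus is recovered
print(ogden_energy((1.0, 1.0, 1.0), mu=[1e3, 5e2], alpha=[4.7, -3.1], K=1e6))  # 0.0
print(prony_modulus(0.0, G0=2.0e3, g=[0.5, 0.3], tau=[0.01, 1.0]))             # 2000.0
```

Two limiting behaviors make useful checks: the energy vanishes in the undeformed state, and the Prony modulus relaxes from $G_0$ at $t = 0$ toward the long-term value $G_0(1 - \sum_i g_i)$.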
The pressure in the CSF fluid domain is governed by the Mie-Grüneisen equation of state in its Us-Up form,

p = \frac{\rho_0 c_0^2 \eta}{(1 - s\eta)^2}\left(1 - \frac{\Gamma_0 \eta}{2}\right) + \Gamma_0 \rho_0 E_m,

where p is the current pressure, ρ0 is the initial density, η is the nominal volumetric strain, c0, s and Γ0 are material constants, and Em is the internal energy per unit mass of the fluid. c0 and s have the following relation:

U_s = c_0 + s U_p,

where Us is the shock velocity and Up is the particle velocity (Tab. 4). The interventional needle is treated as a rigid body in the model (Tab. 5).
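A minimal sketch of the Mie-Grüneisen pressure and the linear Hugoniot relation described above; the parameter values used for checking are illustrative, not the Tab. 4 constants:

```python
def mie_gruneisen_pressure(eta, rho0, c0, s, gamma0, Em):
    """Mie-Grüneisen (Us-Up form) pressure for a fluid domain.

    eta: nominal volumetric strain; rho0: initial density;
    c0, s, gamma0: material constants; Em: internal energy per unit mass.
    p = rho0*c0^2*eta/(1 - s*eta)^2 * (1 - gamma0*eta/2) + gamma0*rho0*Em
    """
    return (rho0 * c0**2 * eta / (1.0 - s * eta) ** 2) * (1.0 - 0.5 * gamma0 * eta) \
        + gamma0 * rho0 * Em

def shock_velocity(c0, s, Up):
    """Linear Us-Up Hugoniot relation: Us = c0 + s*Up."""
    return c0 + s * Up

# Illustrative water-like values (not the paper's Tab. 4 parameters)
print(mie_gruneisen_pressure(0.0, 1000.0, 1450.0, 1.99, 0.11, 0.0))  # zero strain -> zero pressure
```

At zero volumetric strain and zero internal energy the pressure vanishes, and compression (eta > 0) yields a positive pressure, which is the expected physical behaviour.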
Effect of Interventional Angles for Optimal Surgery
Interventional angles ranging from 0 to 45 degrees with an increment of 10 degrees were simulated (Figure 3). As a demonstration, the displacement distribution of the CC region was compared. It is seen from Fig. 3 that the maximum displacement values in the CC increase with an increase of the interventional angle with respect to the midline.
Effect of Interventional Locations for Optimal Surgery
The brain deformation under different interventional locations was investigated by varying the interventional position on top of the brain with an incline angle of 30 degrees (Figure 4). It is seen from Fig. 4 that the interventional location close to the midline (P1) produced the largest magnitude of displacement for the CC region, followed by locations P2, P7, P5, P4 and P3. The smallest deformation of the CC was from the interventional location at P6.
Discussion and Conclusions
Brain deformation during the interventional process affects the accuracy and performance of the surgery. To accurately quantify the deformation of the brain during the intervention, a model construction scheme based on MR images was proposed. Intervention process was simulated based on the FE model generated from the MR images. By varying the angles and locations of the intervention, deformation of the CC region was quantified and analyzed.
For varying angles of intervention, simulation results showed that the largest deformation of the CC resulted from the 45-degree insertion angle with respect to the midline. This indicates that a smaller insertion angle produces a smaller displacement of the corpus callosum during the interventional process. For varying intervention positions, simulation results for the CC region showed a decreasing order of displacement for locations P1, P2 and P3, which is likely related both to the decreasing inclined angles caused by the brain curvature and to the shorter distance between the insertion needle and the corpus callosum at P2 and P3. Similarly, locations P7, P4 and P6 show a decreasing trend due to the brain curvature effect. Locations P4 and P5 have similar displacements of the corpus callosum due to the symmetry of the brain. According to the simulation results, location P6 has the smallest displacement of the corpus callosum, which may make it the optimal location for interventional surgery.
In this paper, a workflow to construct brain interventional FE model based on MR images was proposed. Within this framework, patient-specific models could be constructed for specific surgical applications. For illustration, the deformation of the corpus callosum is investigated. Using CEL modeling, brain deformation was quantified with improved computational accuracy. Results showed that the maximum displacement of CC region increased with an increase of interventional angles with respect to the midline. However, the maximum displacement of CC for different interventional locations depended on the brain curvature and interventional depth. This study demonstrated the potential of using MR image-based modeling for optimal interventional planning of image-guided, robotic neurological surgery.
|
v3-fos-license
|
2020-12-03T09:06:27.719Z
|
2020-12-15T00:00:00.000
|
229389116
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://sajid.co.za/index.php/sajid/article/download/196/464",
"pdf_hash": "a0546cabc3b6a3eace97152acef555203171b0d6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2158",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "f60f9fc87d741f460cc5469db44f8b8d2b10343b",
"year": 2020
}
|
pes2o/s2orc
|
Description of non-polio enteroviruses identified in two national surveillance programmes in South Africa
Background Human enteroviruses (EV) consist of 106 serotypes and four species: EV-A, EV-B, EV-C and EV-D. Enteroviruses cause clinical symptoms varying from severe to mild. Knowledge of EV burden in South Africa is limited, and as non-polio EV are important causes of acute flaccid paralysis (AFP) and meningitis, information on the circulating serotypes is vital. Methods Between 2010 and 2012, a total of 832 stool and viral isolate specimens were obtained from two national surveillance programmes at the National Institute for Communicable Diseases: the Rotavirus Sentinel Surveillance Programme (RSSP) and the AFP surveillance programme. Real-time polymerase chain reaction and Sanger sequencing were performed to detect and serotype EV. Results Non-polio EV were detected in 446 specimens, of which 308 were sequenced. Stool specimens yielded a greater variety of serotypes than viral cultures. EV-B viruses were predominant (58.44%), whilst EV-C viruses were detected in 31% of the specimens tested. South African prevalence for these viruses was higher than other countries, such as France with less than 2%, and Spain and the United States with less than 10%. The most common serotype detected was Enterovirus 99 (EV-C, 8.63%), which has not been reported in other regions. Conclusion Direct sequencing from stool specimens yields a broader, more comprehensive description of EV infections compared to sequencing from viral cultures. Disease-associated serotypes were detected, but only in small numbers. This study provides a baseline for EV strain circulation; however, surveillance needs to be expanded to improve EV knowledge in South Africa.
Introduction
Human enteroviruses (EV), part of the Picornaviridae family in the Enterovirus genus, 1 are divided into four species: EV-A, EV-B, EV-C and EV-D. 2 Although EV infections occur in early childhood, they may also occur in later years because of the high number of EV serotypes. 3 Despite most infections being asymptomatic, large numbers of symptomatic infections are estimated to occur every year, contributing to morbidity and mortality. 3 Most EV infections are mild, causing headaches, rhinitis and rash; however, some infections can lead to serious diseases such as myocarditis, flaccid paralysis and diarrhoea, particularly in infants 4 and the immunocompromised. 5 Enteroviruses are the leading cause of viral aseptic meningitis, 6 and are also implicated in a wide range of acute and chronic infections ranging from non-febrile disease, conjunctivitis and upper respiratory infections to hand-foot-and-mouth disease. 6 Severe infections can cause damage that results in permanent paralysis or death. 5 Studies investigating EV outbreaks in South Africa have been short term, looking at one particular virus-associated disease. 12,13 An earlier retrospective study on a meningitis outbreak described the prevalence of non-polio EV in Cape Town in 1993. 11 A recent study from South Africa examined the prevalence of EV in respiratory disease patients from 2009 to 2014, using respiratory swabs and lavages. 9,10 These studies give an incomplete view of EV prevalence and have limited use in outbreak control or disease surveillance. Typing of EV has evolved with the development of molecular analysis technology. Polymerase chain reaction (PCR) and genetic sequencing are now used to genotype EV into the conventionally assigned species and serotypes. 15 Detection and elimination of PV has been achieved through the AFP surveillance network, which surveys all AFP cases detected, including those where polio is not the cause.
Poliovirus was eliminated from South Africa in 1989, and thus the investigation of AFP cases remains essential because other non-polio EV may be causative organisms. 16,17 The circulation and changes in predominance of EV serotypes are complex, and surveillance programmes may aid in tracking and identifying EV serotypes. 18 Two routine surveillance systems in South Africa, the AFP surveillance network and the Rotavirus Sentinel Surveillance Programme (RSSP), provide potential specimens to investigate circulating EV. Specimens from these two programmes may enable detection of EV potentially causing AFP, as well as those causing diarrhoea or gastroenteritis.
The AFP surveillance network collects stool specimens from children under the age of 15 with AFP, or adults with AFP where polio is suspected. The RSSP collects stool specimens from patients under the age of 5 admitted to hospital for diarrhoea to determine the effectiveness of the rotavirus vaccine introduced in 2009. 19 Whilst this study sought to obtain detailed information on EV circulation in South Africa, the specimen type was limited. Cerebrospinal fluid (CSF), conjunctivitis swabs, rash vesicle fluid and respiratory and stool specimens should be surveyed to establish a complete picture of EV circulation and disease burden. 20 This study aimed to determine the epidemiology of non-polio EV circulating in South Africa from 2010 to 2012. We investigated any serotype-disease association in stool specimens obtained from AFP suspected infections, which may give an indication of EV associated with neurological infections; and from patients with diarrhoea, elucidating EV involvement in enteric diseases and the expansion of EV surveillance.
Specimen sources
Eight hundred and thirty-two stool and viral isolate specimens, collected between January 2010 and December 2012, were sourced from the AFP surveillance programme and the RSSP at the National Institute for Communicable Diseases (NICD), Johannesburg, South Africa.
Specimens from the AFP surveillance programme were selected by obtaining all the positive non-polio EV viral isolates determined through EV-like cytopathic effect (CPE). 21 One stool specimen that showed no CPE on cell culture from each of the nine provinces in South Africa each month was also included for direct detection. A total of 175 non-polio EV-positive cultures were collected between 2010 and 2012, and a further 95 culture-negative stools were obtained from January to December 2012. Specimens were discarded on an annual basis and thus no raw stools were available for 2010 and 2011.
The RSSP supplied 562 stool specimens from four provinces (Gauteng, Kwa-Zulu Natal, Western Cape and Mpumalanga), covering a mixture of rural, peri-urban and urban populations. 19 The hypothesised percentage frequency was based on the number of non-polio EV detected in the AFP surveillance network for South African patients per year. We selected the first four specimens arriving at the NICD from each site per month for the years 2010-2012, with the sample size calculated by Equation 1:

n = Z² p (1 − p) / c²   [Eqn 1]

where:
• n = sample size
• p = hypothesised percentage frequency of outcome (15%), at a confidence level of 95% (Z = 1.96)
• c = margin of error (0.05)

The Western Cape started collecting specimens in May 2010, resulting in a lower total specimen number for 2010 (175 specimens) from that site.
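The sample-size calculation described above (p = 0.15, c = 0.05, 95% confidence, hence Z = 1.96) can be reproduced with the standard formula n = Z²p(1−p)/c²; this sketch assumes the result is rounded up to the next whole specimen:

```python
import math

def sample_size(p, c, z=1.96):
    """Minimum sample size for estimating a proportion: n = z^2 * p * (1 - p) / c^2.

    p: hypothesised proportion; c: margin of error; z: normal deviate (1.96 for 95% CI).
    Rounded up, since a fraction of a specimen cannot be collected.
    """
    return math.ceil(z**2 * p * (1.0 - p) / c**2)

print(sample_size(0.15, 0.05))  # -> 196
```

With the study's parameters this gives a target of 196 specimens, which is consistent with the several hundred RSSP specimens collected over the three-year period.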
Specimen preparation
Viral ribonucleic acid (RNA) extractions were conducted on culture samples and stool samples using the automated Maxwell 16 system (Promega, Madison, Wisconsin, United States), or manually using the Qiagen QIAamp Viral Mini Kit (Qiagen, Venlo, Netherlands). For stool samples, both manual and automated extractions were preceded by stool dilution in stool transport and recovery (STAR) buffer (Roche, Mannheim, Germany) to ensure adequate removal of PCR inhibitors. The treated stool specimens were centrifuged at 1500 g for 1 min at room temperature to sediment the solids, and the supernatant was aliquoted. Specimens that failed to yield a usable nucleotide sequence were processed manually and re-sequenced.
Polymerase chain reaction and sequencing
The real-time PCR protocol from Nijhuis et al. 22 was used to screen specimens for the presence of EV, followed by amplification and sequencing of EV-positive specimens using a semi-nested assay and degenerate PCR primers, sequencing primers and protocols designed by Nix et al. 23 Sanger sequencing was conducted as per the BigDye Terminator (version 3.1) Cycle Sequencing Kit (Life Technologies, Carlsbad, California, United States) and analysed on the ABI 3130 genetic analyser (Life Technologies, Carlsbad, California, United States).
Specimens were serotyped using Oberste's criteria for EV typing, that is, greater than 75% nucleotide sequence homology to the published sequences. 15 The National Centre for Biotechnology Information (NCBI) database was utilised to compare the EV sequences obtained in the study using the BLAST (Basic Local Alignment Search Tool) function.
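Oberste's criterion of greater than 75% nucleotide sequence homology can be illustrated with a toy typing function. The sequences and serotype names below are fabricated for demonstration; in practice the query VP1 fragment is compared against published sequences with BLAST:

```python
def percent_identity(seq1, seq2):
    """Pairwise nucleotide identity over an (assumed pre-aligned) region."""
    if len(seq1) != len(seq2) or not seq1:
        raise ValueError("sequences must be aligned and non-empty")
    matches = sum(a == b for a, b in zip(seq1.upper(), seq2.upper()))
    return 100.0 * matches / len(seq1)

def assign_serotype(query, references, threshold=75.0):
    """Return the best-matching reference serotype if it exceeds the
    >75% homology criterion, otherwise None (untypeable)."""
    name, score = max(
        ((n, percent_identity(query, r)) for n, r in references.items()),
        key=lambda t: t[1],
    )
    return name if score > threshold else None

# Toy 10-nt "reference" sequences, fabricated for illustration only
refs = {"EV99": "ATGGCCATTG", "CVB3": "ATGGTTTTAA"}
print(assign_serotype("ATGGCCATTA", refs))  # 9/10 = 90% identity -> "EV99"
```

Note the strict inequality: a query at exactly 75% identity is not assigned, mirroring the "greater than 75%" wording of the typing criterion.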
Statistics
The gender prevalence and median age of the selected cases were calculated from surveillance data available, with the interquartile range determined by subtracting the lower quartile from the upper quartile. One-way Analysis of Variance (ANOVA) and T-tests were done on GraphPad Prism (University of Leicester, United Kingdom). A p value of 0.05 and lower was considered statistically significant.
Ethical consideration
Ethics approval was obtained from the University of Witwatersrand Ethics Committee (M120467, M119034 and M111145).
Results
The EV PCR screen was conducted on 832 specimens, with 446 (53.61%) specimens reported positive for EV. Male patients constituted 55.51% (246/446) of the EV-positive cases; most specimens came from children under the age of 5, with a median age of 1 year and an interquartile range of 1.48. Patients under the age of 1 made up 49.33% (220/446) of the positive specimens, 41.26% (184/446) were between the ages of 1 and 5 years, and the remainder (7.62%; 34/446) were over the age of 5 (Table 1). Age was unknown for 1.79% (8/446) of cases and gender for 2.47% (11/446).
Sixty-three serotypes were detected from three species groups: EV-A, EV-B and EV-C. No EV-D serotypes were identified. EV-A, EV-B and EV-C were detected in 10.7% (33/308), 58.4% (180/308) and 30.8% (95/308) of specimens, respectively. In EV-A, there were 12 serotypes identified, with CVA5 most frequently detected (6/32 detections, 18.2%). Thirty-seven EV-B serotypes were detected, with the most common serotype being CVB3 (15/180 specimens, 8.3%), followed by Ec6 (14/180 specimens, 7.8%). EV-C had 14 serotypes detected. The distribution of EV serotypes was varied and widespread across the country, and no distinct distribution pattern in the different provinces was observed. Most specimens were collected from the provinces that included the RSSP collection sites as well as from the AFP surveillance programme: Western Cape, Gauteng, Kwa-Zulu Natal and Mpumalanga (Figure 2).
FIGURE 1: Infections of each serotype in males and females.

There was no clear seasonality in the distribution of the positive EV specimens detected in the RSSP. In the case of the AFP surveillance, seasonality was observed, with January to March showing a peak in infections (Figure 3), although this was not statistically significant (p-value = 0.4433; CI: 95%).
Discussion
No predominant EV serotype was detected, and a wide distribution of serotypes across EV-A, EV-B and EV-C was observed. Serotype distribution followed for the most part gender and age patterns seen historically across the world. 2,3,5,6 Some known disease-associated serotypes were detected but were not more prevalent than other serotypes.
The AFP surveillance programme obtained specimens from all nine provinces, which allowed for description of a countrywide EV distribution over a 3-year period. The use of this programme was advantageous as the specimens originated from all provinces and most districts in the country, and the infrastructure for specimen collection and transport had already been established.
The RSSP 19 sites covered different regions in South Africa and overlapped areas covered by the AFP surveillance programme. The sites were in the Gauteng, Western Cape, Kwa-Zulu Natal and Mpumalanga provinces. 24 Using the national surveillance systems in place ensured good population coverage and direct stool specimen screening, without a virus isolation intermediate step. This allowed for detection of serotypes that are impossible or difficult to grow in cell cultures.
The pattern of EV distribution seen in the cell culture specimens was consistent with other studies worldwide, 25,26,27,28,29,30,31 with EV-B being the predominant group and EV-D being the least prevalent group. However, the stool specimens screened in our study yielded additional EV-C viruses (47.20% in the RSSP specimens and 63.64% in the AFP stool specimens). Our data show that using only cell culture for EV identification may limit and bias the results towards isolating EV-B viruses.
Although EV-B was the most prevalent species detected, EV99 (EV-C) was the most prevalent serotype detected over the 3 years, typed in 27 out of 308 specimens (8.63%). Within each year, EV99 was the most prevalent serotype found, along with Ec6 (9 viruses, 2010), CVB5 and Ec14 (6 viruses each, 2011) and Ec13 (10 viruses, 2012). This finding was unusual, as EV99 has not been previously detected as a common serotype in other studies in China and Finland. 32,33 Despite many serotypes detected in our study being associated with disease, these serotypes did not contribute significantly to the total number of viruses detected. Strain EV71 (isolated from the AFP culture-positive specimens) has been associated with aseptic meningitis, 34,35 and although its presence in patients presenting with neurological symptoms is expected, only two cases were detected over the three-year surveillance period. The more recently classified EV, namely EV80, EV88, EV102 and EV114, were also detected in this study. These viruses are rarely detected and/or newly discovered and have no clear disease association.
The distribution of the serotypes across the country did not show any distinct pattern, although many more types were detected in Gauteng, Mpumalanga, Kwa-Zulu Natal and Western Cape. This is likely because of the larger numbers of specimens obtained from these provinces, as well as their mixed populations. Further studies, with specimens collected more evenly across the nine provinces, will be required to confirm this. Ideally, a surveillance programme tailored to detect EV symptoms including hand-foot-and-mouth disease, aseptic meningitis, myocarditis and respiratory disease would be required for improved detection. 9,10,18,36 Serotype distribution varies over time within a geographical location, 37 as well as over large distances, such as between continents. Echovirus 30 (Ec30) is the predominant serotype in Europe, 38 whilst EV71 is the predominant serotype worldwide. 39 In our study, EV99 was found to be the most common serotype in South Africa. Other studies 32,33 have not definitively linked EV99 to a disease, and with the virus still relatively unknown, further investigations are required to discover any clinical relevance. The higher levels of EV99 detected compared to other countries may be because of a lack of serotyping studies, as well as the use of stool specimens in this study instead of viral isolates. 7,8,40,41 Enterovirus 99 does not grow in cell cultures as readily as viruses from EV-B. Detection of EV directly from stool specimens allows for a more accurate distribution of EV within a population. This genotyping method is faster than virus isolation, so results from an outbreak can be combined with epidemiological information to prevent further transmission. 23 Hellferscee and colleagues 9,10 published two studies describing EV strains in respiratory patients from 2009 to 2014 in South Africa, with a wide variety of serotypes detected.
Their results correlate with our findings, although we did not detect EV68, which is more common in respiratory specimens. As EV68 has been shown to be associated with respiratory disease, its absence from our stool-based specimens is not unexpected. 42,43 All serotypes detected by Hellferscee et al., except CVA3 and EV68, were detected in our study.
This study provides a baseline for EV strain circulation and epidemiology in South Africa. With polio on the brink of eradication, other causes of AFP need to be investigated. Enteroviruses are a potential cause of these symptoms 39 , and this study supplies a baseline for determining which EV are circulating in the South African population. Whilst EV71, a meningitis-associated virus, and various other echoviruses were detected in this study, routine and outbreak surveillance will determine the clinical importance of these serotypes in the South African population. Inclusion of a surveillance programme would assist in detecting EV outbreaks, and utilisation of various specimen types would ensure that EV that replicate in different organs (e.g. EV68 found in respiratory lavages and/or swabs) were not missed.
Limitations
Limitations of this study include no control group of healthy individuals for comparison, although a study in the Philippines shows no difference between diseased and asymptomatic groups. 44 The differences in serotype distribution between the surveillance programmes may have been because of the specimen type. The less severe symptom types associated with EV are not currently covered by any surveillance group in South Africa.
The current surveillance programmes in South Africa used to collect specimens target specific age groups (mostly children under 5 or 15) and are passive systems; consequently, specimen collection is only triggered when syndromes are detected. The AFP surveillance programme met all surveillance targets for the number of specimens collected in all provinces except the Northern Cape. This may underrepresent the number of viruses typed in this province.
Only one type of specimen, namely faeces, was collected from the surveillance programmes in this study, making it more difficult to obtain results (PCR inhibitors are difficult to remove and there is a risk of mixed EV infections).
Enterovirus genotyping has become more complex with the discovery of more serotypes. A small fragment of the VP1 gene was used for typing the EV, which was sufficient for basic typing differentiation, but more in-depth genetic analysis is required for a comprehensive description of EV in South Africa.
Conclusion
The epidemiology of EV in South Africa showed a general concordance with other studies 16,25,29 , and the study provided a baseline of circulating EV strains. In South Africa, various serotypes were shown to co-circulate, although EV99 was the most common virus throughout the 2010-2012 period. Strains CVA24 and EV99 accounted for 14% of all viruses detected over the 3 years and the predominance may be explained by the natural continental differences in serotype circulation.
Specimen type influences the ability to detect different serotypes, and disease presentation affects the serotypes observed. Future surveillance may assist in determining how serotype affects disease burden. Unlike cell culture, an assay that will detect EV directly from the specimen may give a more comprehensive idea of EV strain circulation and epidemiology.
A dedicated EV surveillance programme would provide a more accurate idea of the EV disease burden on symptoms such as AFP, meningitis and encephalitis. This would be useful for outbreak detection and virological investigation. 36 The development of a new vaccine to lessen the disease burden of serotypes with the association of serious effects on patients may be a by-product of the knowledge gained from EV surveillance, as predominant serotypes can be investigated as vaccine candidates.
|
v3-fos-license
|
2019-05-13T13:06:03.507Z
|
2019-04-21T00:00:00.000
|
151141995
|
{
"extfieldsofstudy": [
"Psychology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/nop2.286",
"pdf_hash": "289685551dc20d0ad98e03bff3804e244c92d38c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2159",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"sha1": "02c79fc98a7e19708b4b110c199672af89517368",
"year": 2019
}
|
pes2o/s2orc
|
A psychometric analysis of the Caring Assessment Tool version V
Abstract Aim The aim of this study was to examine the factor structure and construct validity of the Caring Assessment Tool version V (CAT‐V) for patients in Australian hospitals. Design Secondary analysis of CAT‐V surveys from the Australian Nursing Outcomes Collaborative (AUSNOC) data set was used. The CAT was originally developed in the United States of America. Methods The 27‐item CAT‐V was administered to patients prior to discharge from eight wards in three Australian hospitals in 2016. The psychometric properties of the CAT were evaluated using item analysis and exploratory factor analyses. Results Item analysis of surveys from 476 participants showed high levels of perceived caring behaviours and actions. Exploratory factor analysis revealed a two‐factor structure consisting of: Nurse–patient communication; and Feeling cared for. The CAT‐V is a reliable and valid instrument for measuring patients’ perceptions of the attitudes and actions of nurses in Australia.
There is a strong global commitment to improving health care and ensuring that the care provided by nurses is of the highest possible standard (McCance, Wilson, & Kornman, 2016). Recent reports into health system failures have highlighted how fragile the healthcare system can be and made recommendations for nurses to improve patient outcomes by focusing on the culture of caring and the development of person-centred approaches to care delivery (Francis, 2013; Garling, 2008). National regulation bodies and indus- There is only limited empirical research that examines links between improved patient outcomes and the presence of caring cultures (Feo & Kitson, 2016). Research that examines this phenomenon is usually related to person-centred care. This is seen in the positive associations between person-centred care and patient outcomes for people who have experienced an acute myocardial infarction (Meterko, Wright, Lin, Lowy, & Cleary, 2010) and for haematology-oncology patients (Radwin, Cabral, & Wilkes, 2009). The patient-nurse relationship is less frequently studied, but is seen as pivotal in examining the effectiveness of person-centred cultures (Duffy et al., 2014).
There are several approaches used to examine patient-nurse relationships and the caring attitudes and actions of nurses from a patient's perspective. A discussion of the theoretical foundations of these instruments is beyond the scope of this paper. The most frequently used instruments for assessing the caring behaviours and actions of nurses from the patients' perspective in acute care hospitals are the CBI and the CAT (Kuis, Hesselink, & Goossensen, 2014).
The CBI was originally developed by Wolf and colleagues in 1994 and assesses patients' and nurses' perceptions of caring using identical self-report surveys with a six-point Likert scale (Wolf, Giardino, Osborne, & Ambrose, 1994). The CBI was revised in 2006 to a 24-item scale (CBI-24) for both patient and nurse surveys (Wu, 2006).
The CAT was originally developed by Duffy in 1990 as a 100-item survey to assess patients' perceptions of nurse caring behaviours (Duffy, 1990). The CAT has been iteratively revised (Duffy et al., 2014; Duffy, Hoskins, & Seifert, 2007) and is currently (CAT-V) a unidimensional 27-item survey. The CAT is supported by the Quality Caring Model© (Duffy & Hoskins, 2003), which combines multiple theories from multiple disciplines to help explore the nurse's relationship with the patient and the contribution that nursing attitudes and actions make to patient outcomes (Kim, 2016). The CAT is completed by patients using either a paper-and-pencil approach (Duffy & Brewer, 2011) or an electronic survey (Duffy, Kooken, Wolverton, & Weaver, 2012). Iterative versions of the CAT have had different numbers of items (100, 36 and 27) and different factor structures (between 8 factors and 1), and each version has reported appropriate reliability and validity (Duffy et al., 2014, 2007; O'Nan, Jenkins, Morgan, Adams, & Davis, 2014). However, all of the studies using the CAT have been undertaken in different population groups in the USA.
The CAT was chosen as the data collection instrument in this study because of its conceptual link with the Quality Caring Model© and the use of the Quality Caring Model© as the foundational model for evaluating nursing practice in over 40 hospitals in the USA (Duffy et al., 2012). In addition, the CAT had previously been used in an electronic format and this was an important factor in this study (Duffy et al., 2012). Once the decision to use the CAT in the Australian Nursing Outcomes (AUSNOC) data registry had been made, it became appropriate, given the differences between the healthcare systems in the USA and Australia, to test the construct validity of the CAT-V in the Australian healthcare context. Therefore, the purpose of this study was to examine the factor structure and construct validity of the CAT-V using exploratory factor analysis (EFA).
| Aim
The aim of this study was to examine the factor structure, reliability and construct validity of the CAT version V (CAT-V) in the Australian healthcare setting using survey data collected in the AUSNOC data registry.
| Design
The AUSNOC data registry is a multi-site repository of structure, process and outcome measures that explore the quality and safety of nursing practice (Sim, Crookes, Walsh, & Halcomb, 2018). This study used cross-sectional data from patients at the time of discharge in three hospitals who were participating in the feasibility testing of the AUSNOC data registry. The feasibility testing of the AUSNOC data registry is described elsewhere (Sim, Joyce-McCoach, Gordon, & Kobel, 2019). Hospitals were chosen based on convenience and willingness to participate in the AUSNOC project. The data from the CAT-V are focused on measuring patients' perceptions of the caring attitudes and actions of nurses and the nurse-patient relationship.
| Sample
Patients being discharged from three hospitals between March-December 2016 were approached to complete the CAT-V survey.
All hospitals included in this study were private hospitals providing acute care services in the state of New South Wales, Australia.
Patients discharged from four surgical wards, three medical wards and one rehabilitation ward participated in the study.
| Survey instrument
The CAT was originally developed in 1990 (Duffy, 1990) and is based on Watson's Theory of Human Caring (Watson, 2008). Several different versions of the CAT have been tested in hospitalized adults (Duffy & Brewer, 2011;Duffy et al., 2012;O'Nan et al., 2014), emergency department settings (Anosike, 2016), settings outside the USA (Melby, 2005), education settings to assess student relationship competency (CAT-Edu) (Duffy, 2005) and among nurses to assess the caring behaviours of their managers (CAT-Adm) (Wolverton, 2016). The most recent version of the CAT is referred to as CAT-V and was validated by Duffy et al. (2014) for use with hospitalized adults. Table 1 provides an overview of the evolution of the CAT.
The CAT-V consists of 27 items and a single factor structure.
Participants rate how often each item occurred in their healthcare experience on a five-point Likert scale where 1 = never, 2 = rarely, 3 = occasionally, 4 = frequently and 5 = always. The CAT-V includes items related to caring, person-centred care and the nurse-patient relationship (Duffy et al., 2014). All items are directly related to the concept of caring which is defined by Duffy (2013) as "a process that involves the person of the nurse relating with the person of the patient" (p.32). No items in the CAT-V are reverse scored. Summed scores for the overall scale range from 27-135, with higher scores indicating higher ratings of caring and person-centred care (Duffy et al., 2014). In this research, pilot testing was undertaken using the CAT-V with a sample of 40 patients from participating hospitals in February 2016. No changes were made to the wording of any items, and data from the pilot testing were not included in the final sample. Permission to use the CAT-V was obtained under licence from QualiCare on 17/9/2015 (Licence #000915).
This study was approved by the Health and Medical Human Research Ethics Committee at the University of Wollongong and Illawarra Shoalhaven Local Health District (Approval No HE15/425). All participants were given a participant information sheet by a staff member in the ward and had the opportunity to ask questions about the study. Participants were free to choose whether they wanted to participate and provided informed consent prior to completing the survey. No identifiable data were collected from any participant. All data obtained in the survey were stored securely on password-protected computer systems at the University of Wollongong.
| Data collection
Participants completed the survey within 24 hr prior to discharge from the ward. Surveys were completed either by using an online survey tool in RedCap software (Harris et al., 2009) via an iPad™, or using a paper-based form that was subsequently entered into the online survey tool by a nominated staff member in each ward. The survey consisted of demographic questions and the 27 item CAT-V survey. All paper-based forms were given a unique identifier, and data entry accuracy was verified in a random selection of surveys.
| Data analysis
Prior to the psychometric analysis, missing value imputation and descriptive analyses were undertaken. The expectation-maximization technique was used to impute missing values as it is reported to be the best method for producing unbiased estimates (Allison, 2012). Descriptive statistics were then used to summarize the demographic data. A two-step approach involving both confirmatory factor analysis (CFA) and exploratory factor analysis (EFA), adopted in previous studies (Bhagwat, Kelly, & Lambert, 2012; Servidio, 2017), was then used to examine the psychometric properties of the CAT-V.
The two-step process is more feasible than a study replication in that it enables researchers to run CFA and EFA independently on both samples to compare and confirm the results (Schumaker & Lomax, 2004). The data (N = 476) were randomly split into two subsamples of approximately 50% of the cases. Cronbach's alpha was computed for the CAT-V as an index of internal consistency; generally, an acceptable alpha is >0.75 (Cronbach, 1951). All analyses were conducted using SPSS for Windows version 22 software and AMOS version 22 software (IBM Corp, 2013).
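The random 50/50 split and the internal-consistency index described above can both be sketched briefly. This is a minimal illustration with our own function names, assuming standard formulas (it does not reproduce the SPSS/AMOS workflow used in the study).

```python
import random
import statistics

def split_sample(cases, seed=0):
    """Randomly split cases into two ~50% subsamples (e.g. CFA / EFA)."""
    rng = random.Random(seed)
    shuffled = cases[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores is a list of per-item score lists
    (items x respondents), using the standard k/(k-1) formula."""
    k = len(item_scores)
    item_vars = [statistics.variance(item) for item in item_scores]
    totals = [sum(col) for col in zip(*item_scores)]
    total_var = statistics.variance(totals)
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Perfectly correlated items yield alpha = 1.0; values above 0.75 are treated as acceptable in the text.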
| Descriptive statistics
The means and standard deviations for each item in the CAT-V (N = 476) are displayed in Table 2. The responses were negatively skewed, with most participants responding either "Frequently" or "Always" on most items (mean = 4.52, SD = 0.71). The CAT-V inter-item correlations ranged between 0.44 and 0.81, demonstrating that most items measure related phenomena. The two subsamples were similar, with no significant differences in the mean scores for any of the 27 CAT-V items.
| Exploratory factor analysis (EFA)
The second sample (N = 242) was used to explore the dimensionality of the CAT-V using EFA. Bartlett's test of sphericity was statistically significant (χ2 = 7587.05, df = 351, p < 0.0001), indicating that the correlation matrix was suitable for an evaluation of the potential factor structure. The Kaiser-Meyer-Olkin (KMO) index was 0.961, indicating excellent sampling adequacy for factor analysis.
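The two adequacy checks reported here can be computed directly from the item correlation matrix. The sketch below uses the standard textbook formulas (Bartlett's chi-square from the determinant of the correlation matrix; KMO from correlations versus partial correlations); the function names and simulated data are ours.

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test statistic; data is (n respondents x p items)."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) / 2
    return chi2, dof

def kmo_index(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (0..1)."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d            # anti-image (partial) correlations
    np.fill_diagonal(corr, 0)
    np.fill_diagonal(partial, 0)
    r2, q2 = (corr ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + q2)
```

A KMO near 1 means partial correlations are small relative to raw correlations, the condition under which factor analysis is considered appropriate.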
Two factors had eigenvalues greater than one and accounted for 72.44% of the variance of the total factor loading. The inflexion on the scree plot and further analysis suggested a departure from linearity that was consistent with a two-factor solution. Further attempts at different factor structures did not significantly change the number of residuals. Therefore, a two-factor structure was considered the best fit for these data. A summary of the EFA for the two subscales of the 27-item CAT-V is presented in Table 4. All items loaded 0.5 or higher on their respective factors. The two-factor model was therefore retained.
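The eigenvalue-greater-than-one (Kaiser) criterion and the percent-of-variance figure reported above can be illustrated on a small correlation matrix. The matrix below is an invented toy example with a two-block structure, not the study's data.

```python
import numpy as np

def kaiser_retained(corr):
    """Eigenvalues of a correlation matrix, sorted descending, the number
    retained under the Kaiser criterion (eigenvalue > 1), and the percent
    of total variance those retained factors explain."""
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    retained = eigvals[eigvals > 1]
    pct = 100 * retained.sum() / eigvals.sum()
    return eigvals, len(retained), pct

# Illustrative 4-item correlation matrix with two correlated pairs:
corr = np.array([[1.0, 0.8, 0.1, 0.1],
                 [0.8, 1.0, 0.1, 0.1],
                 [0.1, 0.1, 1.0, 0.8],
                 [0.1, 0.1, 0.8, 1.0]])
eigvals, n_factors, pct = kaiser_retained(corr)
print(n_factors, round(pct, 1))  # 2 90.0
```

Two eigenvalues exceed one here, so two factors are retained, together explaining 90% of the total variance, mirroring the logic of the analysis above.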
| Reliability and criterion-related validity analysis
The Cronbach's alpha (α) reliability coefficient was 0.97 for "Nurse-patient communication."
| DISCUSSION
The purpose of this study was to evaluate the psychometric properties of the CAT-V in the Australian healthcare setting. The CAT-V was assessed using (a) a pilot study with 40 participants; (b) analysis of data from 476 participants to establish a data set; and (c) a cross-validation study to confirm the factor structure and to ensure reliability of the scale. Using CFA, the hypothesized unidimensional factor of the 27 item CAT-V was rejected. The follow-up EFA suggested a two-factor model. Review of the items that loaded ≥0.50 on factor 1 led to the conceptual label "Nurse-patient communication." Revision of the items that loaded ≥0.60 on factor 2 led to the conceptual label "Feeling cared for."
| Reliability
Internal consistency of the CAT-V was demonstrated, as the Cronbach's α coefficient was higher than 0.75 (Cronbach, 1951) both for the whole instrument and for each factor.
| Validity
The criterion-related validity of the CAT-V was supported by evidence of a high correlation between the two factors (r = 0.83, p < 0.001, two-tailed).
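The correlation underpinning this validity evidence is an ordinary Pearson product-moment coefficient between the two subscale scores. A self-contained sketch (our function name, illustrative data only):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related subscale scores give r = 1:
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 3))  # 1.0
```

In practice the two inputs would be each participant's "Nurse-patient communication" and "Feeling cared for" subscale totals.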
| Development of the Caring Assessment Tool
Prior research has examined the factor structure of various versions of the CAT using EFA (Duffy et al., 2007, 2014). To the best of our knowledge, this is the first study to assess the factor structure of the CAT-V, the first to assess any version of the CAT in a data registry, and the first in the Australian context. Previous versions of the CAT have had a range of different subscales. The CAT-IV had eight subscales (mutual problem-solving; attentive reassurance; human respect; encouraging manner; appreciation of unique meanings; healing environment; affiliation needs; and basic human needs) (Duffy et al., 2007). The CAT-V was reported as evaluating a unidimensional construct, described as an expression of the nurse-patient relationship in which the attitudes, skills and behaviours of nurses are assessed in the caring relationships they have with their patients (Duffy et al., 2014). The unidimensional CAT-V described 73% of the variance in the construct and had a high Cronbach's alpha coefficient of 0.97 (Duffy et al., 2014). Our study has produced a two-factor solution with an explained variance of 72.44% and a high Cronbach's alpha (α) coefficient of 0.98.
This study builds on prior research and provides a valid instrument to advance the research in the field. This study has evaluated the psychometric properties of the CAT-V and proposes a two-factor solution in the Australian healthcare context. Data obtained for this study were obtained from multiple sites which enables generalizability of the results.
| Study limitations
There are several limitations to consider when interpreting the results of this study. Firstly, a convenience sample from three hospitals in one state in Australia was used; as such, our results may not generalize to other locations. In addition, this study used self-reported data, which may limit the validity of findings as participants may have had various reasons for over- or underestimating their responses, including social desirability and inaccurate recall. It is also possible that a substantial proportion of patients were not invited to participate in this study at the time of discharge due to factors such as unexpected discharge, absence of key staff, busyness of the wards and staff not providing relevant information to potential participants at the time of discharge.
Despite these limitations, our findings make meaningful contributions to the body of knowledge and support the ongoing use of the CAT-V to evaluate patients' perceptions of the caring attitudes and actions of nurses at the time of discharge from an acute care hospital. Further evaluation of the CAT-V with different types of patients and various age groups is required.
| CONCLUSION
The results of this study support the usefulness of the 27-item CAT-V as a brief, reliable and psychometrically sound instrument for measuring patients' perceptions of the caring attitudes and actions of nurses. In evaluating the CAT-V, a two-factor structure was identified which highlights the ability to assess "Nurse-patient communication" and "Feeling cared for." The two-factor, 27-item CAT-V provides important information at unit level about nurse caring that can be used to evaluate and improve the quality of nursing care provided to patients in hospital settings.
Assessment of nursing care quality is complex and multi-faceted.
In this study, the CAT-V has been used to evaluate patients' perceptions of the caring attitudes and actions of nurses during hospitalization. The CAT-V provides important information about the quality of the patient-nurse relationship, communication and the perceptions of being cared for. These elements are essential to evaluate the quality and safety of nursing care in a holistic way (Sim et al., 2018). The two subscales of "Nurse-patient communication" and "Feeling cared for" describe meaningful constructs that provide opportunities for hospitals to obtain more precise measures of the quality of nursing care. Additional studies that examine the factor structure of the CAT-V and other measures of quality of nursing care are critically needed.
ACK N OWLED G EM ENTS
The authors gratefully acknowledge participants who completed surveys, and staff in participating hospitals who championed data collection.
CO N FLI C T O F I NTE R E S T
None declared.
Deep Inelastic Scattering on an Extremal RN-AdS Black Hole II: Holographic Fermi Surface
We consider deep inelastic scattering (DIS) on a dense nucleus described as an extremal RN-AdS black hole with holographic quantum fermions in the bulk. We evaluate the 1-loop fermion contribution to the R-current on the charged black hole, and map it onto scattering off a Fermi surface of a dense and large nucleus with fixed atomic number. Near the black hole horizon, the geometry is that of AdS2 x R3, where the fermions develop a Fermi surface with anomalous dimensions. DIS scattering off these fermions yields anomalous partonic distributions mostly at large-x, as well as modified hard scattering rules. The pertinent R-ratio for the black hole is discussed. For comparison, the structure functions and the R-ratio in the probe or dilute limit, with no back-reaction on the geometry, are also derived.
I. INTRODUCTION
Many years ago, the EMC collaboration at CERN revealed that DIS scattering on an iron nucleus deviates substantially from deuterium [1], contrary to established lore. Since then, many other collaborations using both electron and muon probes have confirmed this observation [2][3][4]. Although the nucleus is a collection of loosely bound nucleons with confined quarks, DIS scattering is much richer in a nucleus. The nuclear structure functions display shadowing at low-x, a depletion at intermediate-x, and an enhancement due mostly to Fermi motion at large-x.
QCD supports the idea that hadrons are composed of quarks and gluons, as revealed by DIS scattering of electrons on nucleons at SLAC. The scaling laws initially reported follow from scattering on point-like objects or partons. Because of asymptotic freedom, the partons interact weakly at short distances, leading to relatively small scaling violations at intermediate-x. At low-x, perturbative QCD predicts a large enhancement in the nucleon structure functions due to the rapid growth of the gluons [5] that eventually saturate [6]. This observation has been confirmed at HERA [7,8].
DIS in holography at moderate-x is different from weak coupling as it involves hadronic and not partonic constituents [9]. The large gauge coupling causes the charges to rapidly deplete their energy and momentum, making them invisible to hard probes. However, because the holographic limit enjoys approximate conformal symmetry, the structure functions and form factors exhibit various scaling laws, including the parton-counting rules [10]. In contrast, DIS scattering at low-x on a non-extremal thermal black hole was argued to be partonic and fully saturated [11]. This paper is a follow-up on our recent investigation of DIS scattering on a nucleus as an extremal RN-AdS black hole [12]. In the double limit of a large number of colors and large gauge coupling, the leading contribution amounts to the Abelian part of the R-current being absorbed in bulk by the black hole. After mapping to the boundary, the ensuing nuclear structure functions show strong shadowing at low-x. At next-to-leading order, the R-current scatters off a virtual halo of charged fermionic pairs forming a holographic Fermi liquid around the black hole. The purpose of this paper is to detail DIS scattering on this dense holographic liquid as the analogue of DIS scattering on a nucleus described as a Fermi liquid. Some aspects of this liquid were initially discussed in [13].
This paper presents several new results: 1/ an explicit derivation of the structure functions for DIS scattering on the emerging holographic Fermi surface around an extremal black hole; 2/ the characterization of these structure functions both at large-x and low-x, with the identification of new anomalous exponents at large-x; 3/ an explicit derivation of the R-ratio for DIS scattering on the extremal black hole as a model for DIS scattering on a dense nucleus; 4/ an explicit derivation of the same structure functions in the probe fermion limit as a model for DIS scattering on a dilute nucleus; 5/ a comparative study of the R-ratio in the probe limit.
The organization of the paper is as follows: in section II, we briefly review the setting for the extremal RN-AdS black hole and the key characteristics of the holographic Fermi liquid. In section III, we derive the contribution to the boundary effective action of an R-photon scattering off bulk quantum fermions. The result is quantum and dominant at large-x, thereby correcting the classical and leading contribution from the bulk black hole, which is mostly supported at low-x. In section IV, we analyze the contribution stemming from the quantum fermions at low-x and show that it is vanishingly small near the horizon. In section V, we detail our derivation of the R-ratio for the black hole in the dense regime. For comparison, we discuss in section VI the probe or dilute limit, with the bulk fermions carrying a finite density in AdS without affecting the underlying geometry; the pertinent R-ratio in this regime is derived and analyzed. Our conclusions are in section VII. Some useful details are found in several Appendices.
II. EXTREMAL BLACK HOLE : DENSE LIMIT
In this work we address DIS scattering on a cold nucleus as a dual to an RN-AdS black hole, following our recent analysis [17]. Conventional DIS scattering on cold nuclei, with many of the conventions used here, is reviewed in [18]. In holography, DIS scattering on a nucleus as an RN-AdS black hole is illustrated in Fig. 1. In the holographic limit, the leading contribution is Fig. 1a, with the structure functions being the absorbed parts of the R-current. To this order, the structure functions have support only at low-x [12] (see below). At next-to-leading order, the R-current is absorbed through the virtual fermionic loop shown in Fig. 1b. This loop describes a fermionic halo around the RN-AdS black hole that acts as a holographic Fermi liquid. Below we detail how this contribution leads to structure functions with support at large-x. This description is complementary to our recent analysis based on a generic density expansion around a trapped Fermi liquid [17]. The RN-AdS black hole is described by effective gravity coupled to a U(1) gauge field in a 5-dimensional curved AdS space [14]. The Ricci scalar is R, and kappa^2 = 8 pi G_5 and Lambda = -6/R^2 are the gravitational and cosmological constants. The curvature radius of the AdS space is R, with a line element and warping factor f(r) such that r_+ > r_-, the outer and inner horizons, satisfy f(r_+-) = 0.
The black hole is charged and sources the R-potential, provided that the electric charge Q and the geometrical charge q satisfy the appropriate relation for a D3-D7 U(1) vector charge. The temperature of the RN-AdS black hole follows, with gamma^2 = 1/12 pi^2 alpha-tilde. The chemical potential mu is fixed by the zero-potential condition on the outer horizon, A_t(r_+) = 0, or mu = Q/r_+^2. At extremality, where T = 0, the outer and inner horizons coincide.
B. Holographic Fermi liquid
The fermionic fields in bulk are characterized by the Dirac action in the charged AdS black hole geometry, with a long (covariant) derivative that includes both the spin connection and the R-charge coupling. The indices M, N, ... or mu, nu, r, ... refer to space-time indices, and underlined indices a, b, ... correspond to tangent-space indices. Therefore, for example, Gamma_a denotes the gamma matrices in the tangent space, and Gamma_M denotes the gamma matrices in the curved space-time. They are given explicitly in Appendix A.
A bulk fermion field of mass m and R-charge e_R is dual to a composite boundary field of conformal dimension Delta = 3/2 + mR. Since the horizon of the extremal charged RN-AdS black hole is characterized by a finite U(1) electric field, fermionic pair creation takes place through the Schwinger mechanism. As a result, a black hole with, say, positive R-charge absorbs the negative part of the pairs and expels the positive part. Since AdS is hyperbolic and confining, the positive charge falls back to the surface of the black hole, accumulating into a halo or holographic Fermi liquid.
The characteristics of the low-lying excitations of the holographic Fermi liquid for low frequencies |k_0| < mu and low momenta k = |k| have been discussed in [15,16]. In particular, near the horizon the AdS_5 geometry factors into AdS_2 x R^3.
The fermions exhibit strong distortion in the AdS_2 geometry [15]. Note that k_R^2 < 0 in this case, i.e., for e_R^2 alpha-tilde < (mR)^2/2. Throughout, we will use the block notation to refer to the fermionic retarded (Feynman) propagators. For k_R^2 > 0 and k <= k_R, the corresponding holographic spectral function exhibits oscillatory behavior and gapless excitations, with comparable real and imaginary parts. In other words, the excitations in this oscillating region are short-lived, as they form and quickly fall into the extremal RN-AdS black hole.
Further arguments [15,16] show that the fermionic density diverges near the horizon, causing strong back-reaction. As a result, the near-horizon geometry becomes a Lifshitz geometry, whereby the Fermi-like volume is resolved into concentric Fermi spheres, each describing heavy fermions with narrow widths, thereby explaining the gapless-like excitations. This resolution occurs only for |k_0|/mu ~ e^(-N_c^2) and resorbs for |k_0|/mu ~ N_c^0.
For k_R^2 > 0 and k >= k_R, localized and long-lived fermionic states emerge that are characterized by a Fermi momentum k_F > k_R. In this case, the retarded propagator near the Fermi surface reads as in (II.14) [15,16]. Depending on the coefficients, and in particular on whether nu_{k_F} > 1/2, nu_{k_F} = 1/2 or nu_{k_F} < 1/2, we have a Fermi liquid, a marginal Fermi liquid or a non-Fermi liquid, respectively. Note that the transition from a non-Fermi liquid to a Fermi liquid occurs for omega ~ omega_c, which is fixed by the condition omega_c ~ |v_F Pi(omega_c)|.
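The Fermi/marginal/non-Fermi classification invoked here can be made explicit. As a hedged sketch in our own notation, following the conventions of the holographic Fermi-surface literature cited as [15,16], the near-Fermi-surface retarded propagator takes the form:

```latex
% Schematic near-Fermi-surface form (notation ours): k_\perp = k - k_F,
% h_1 and v_F real constants, and the self-energy inherits the AdS_2
% scaling exponent \nu_{k_F}:
G_R(\omega, k) \;\approx\; \frac{h_1}{\,k_\perp - \omega/v_F - \Sigma(\omega)\,},
\qquad \Sigma(\omega) \propto \omega^{2\nu_{k_F}} .
```

For nu_{k_F} > 1/2 the linear omega/v_F term dominates at small omega (Fermi liquid); nu_{k_F} = 1/2 is the marginal case; for nu_{k_F} < 1/2 the non-analytic self-energy dominates and the quasi-particle picture breaks down (non-Fermi liquid).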
A schematic description of the poles of (II.13) is given in Fig. 2. For a sufficiently large effective charge e_R sqrt(alpha-tilde), some of the largely damped quasi-normal modes (QNM) of the RN-AdS black hole transmute into narrow quasi-bound states (QBS) close to the real axis for fixed k < k_F. For increasing k -> k_F, the narrow QBS start crossing the origin omega = 0, turning into equally spaced holographic Fermi surfaces (here 4 Fermi surfaces), as discussed in [16].
For fermions with larger effective charge, i.e., for e_R^2 alpha-tilde > (mR)^2/2 or k_R^2 > 0, pair creation takes place near the horizon as noted earlier. A halo of charged fermions forms at the Fermi surface with k_F > k_R > 0, supporting quasi-particles with G^{11}_F given in (II.13). For hard R-probes with large q^0 in the DIS kinematics, only G^{11}_R(k_0, k) is modified close to the horizon, since G^{11}_R(omega_1, k + q) carries a large momentum and is mostly unmodified in the ultraviolet.
III. HOLOGRAPHIC STRUCTURE FUNCTIONS
The holographic structure functions on an extremal black-hole in leading order have been discussed in [17], to which we refer for further details. For completeness, the results will be summarized below, and extended to allow for the next to leading order contributions from the holographic Fermi liquid at the horizon.
A. Structure functions
We recall that the scattering amplitude of an R-photon of longitudinal momentum q^mu = (omega, q, 0, 0) on a black hole at rest in the Lab frame, with n^mu = (1, 0, 0, 0) as in (III.35), can be tensorially decomposed into two invariant functions G-tilde_{1,2} [12]:

G^F_{mu nu}(q) = (eta_{mu nu} - q_mu q_nu / Q^2) G-tilde_1 + (n_mu n_nu - (n.q)/Q^2 (n_mu q_nu + n_nu q_mu) + (n.q)^2/(Q^2)^2 q_mu q_nu) G-tilde_2    (III.16)

with Q^2 = q^2, thanks to current conservation and covariance. The corresponding DIS structure functions for an R-photon on a black hole are defined as in (III.17). As in [12], the rest frame of a cold and extremal black hole will be dual to the rest frame of a cold nucleus at the boundary with fixed energy E_A = (3/4) A mu. Since the binding energy in a nucleus is small, we also have E_A ~ A m_N and therefore the chemical potential mu ~ (4/3) m_N. In our mapping, m_N and mu are interchangeable for estimates.
B. Classical black-hole in leading order
As noted earlier, the leading-order contribution to the structure functions (III.17) in DIS scattering is classical and of order N_c^2, as illustrated in Fig. 1. It does not involve scattering off the fermions near the holographic Fermi surface, which is of order N_c^0. In the regime Q << q << Q^2, the leading contribution to the structure functions vanishes, as the probe spin-1 R-field is prevented from falling into the black hole by an induced potential barrier [11]. The R-current correlator is purely real, with an exponentially vanishing imaginary part. In the regime q >> Q^3, the barrier wanes away, with the classical and leading contribution to the un-normalized structure function F-tilde_2 of the form given in [12]. This result was shown to hold for low-x, or x <~ A sqrt(mu E_A)/Q, with the Callan-Gross-like relation F-tilde_2 = 2 x_A F-tilde_1. The normalized structure functions follow as in [12] after using the black-hole equation of state. More specifically, we have (Q^2 = q^2 > 0) C-tilde_{T,L}/C_{T,L} = pi^5 (48 alpha-tilde)^2 / 2 N_c^2. The normalization in (III.21) amounts overall to normalizing F-tilde_{1,2} by the density of the black hole, canceling part of the model dependence of the equation of state. In a way, the normalized F_{1,2} are the un-normalized black-hole structure functions F-tilde_{1,2} per degree of freedom. (III.22) is dominated by the first contribution at low-x. We now show that the next contribution is dominated by scattering off bulk fermions at large-x from a holographic Fermi liquid close to the horizon.
C. Quantum fermions in sub-leading order
The contribution of the sub-leading fermions to the induced effective action can be obtained through the holographic dictionary. The shift of the R-field A_M -> A^0 delta_M^0 + a_M amounts to a shift in the Dirac action density in (II.8), which is at the origin of the minimal coupling of the R-field. In terms of (II.8) and (III.23), the bulk effective action for the 1-loop contribution in Fig. 1b at zero temperature reads as in (III.24). The routing of the momenta in (III.24) corresponds to the hard fermion carrying k + q and the soft fermion carrying k.
The R-field in bulk a(r, q) relates to the R-field at the boundary A^(0)_mu(q) through the bulk-to-boundary propagator K_A(r; q), which satisfies K_A(r -> infinity; q) = 1. This allows the re-writing of (III.24) in the form of a boundary action with dressed bulk vertices. We have approximated the bulk-to-boundary propagator K_A(r; q) by its vacuum contribution, with K_1(x) the modified Bessel function.
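Since the explicit vacuum propagator is only alluded to here, it may help to record the standard AdS_5 expression consistent with the stated normalization K_A(r -> infinity; q) = 1. This form is our assumption of the standard expression, not a quote from the paper:

```latex
K_A(r; q) \;=\; \frac{qR^2}{r}\, K_1\!\left(\frac{qR^2}{r}\right),
\qquad
x\,K_1(x) \xrightarrow[\;x \to 0\;]{} 1
\;\;\Longrightarrow\;\; K_A(r \to \infty; q) = 1 .
```

The boundary limit r -> infinity sends the argument of the Bessel function to zero, where x K_1(x) -> 1, reproducing the normalization quoted in the text.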
In the DIS regime Q << q << Q^2 with Q^2 = q^2, the spin-1/2 fermion field remains localized near the boundary as a potential barrier develops in bulk, a phenomenon also observed for spin-1 boson fields [11]. In this regime, we approximate the hard part of the fermion propagator by its vacuum (AdS_5) result [19]. The soft part of the fermion propagator can be separated into its contribution deep in the infrared, which is modified by the induced holographic Fermi surface through the geometrical reduction to AdS_2 x R^3, and its ultraviolet completion. More specifically, near the AdS_2 x R^3 geometry, the infrared part of the soft propagator takes the form (III.33).
Note that only G^{11}_R(k_0, k) has a singular or Fermi-like structure near k -> k_F. Hence, we will ignore the contribution from G^{22}_R(k_0, k) to the current correlator. The normalizable wave functions are given in (X.123).
The time-ordered correlation function for the R-current follows from the functional derivative of the effective action. Using the spectral form of the Feynman propagator (III.32-III.33), we can re-write (III.34) in the more compact form (III.37). We recall that at zero temperature, the general Feynman and retarded propagators G_{F,R} are related by (III.38). Using (III.38) and the fact that G_F(k_0, k) is analytic in the upper complex k_0-plane allows for the re-writing of the imaginary part of (III.35) in the form (III.39). This result shows that for q_0 = 0 the imaginary part vanishes, as it should, since the effective action induced by the R-current (III.26) is real. For q_0 != 0, this result is clearly negative, as it should be, since its contribution to (III.26) amounts to a self-energy for the R-field, corresponding to damped oscillations in time.
D. Large-x near the horizon
Using the vertex (XI.133) for momenta near k_F, we can re-write (III.39) as (III.41), which can be simplified by enforcing the delta function with the overall constant of (III.42). We re-arranged the hypergeometric function 2F1 using the Pfaff identity (III.49). Note that for the special value nu_k = mR + 1, the x_k dependence of the integrand in (III.42) reduces to the one in [20] before the multiplication by the trace (for our case the trace is sqrt(s_k)). However, for general nu_k the same partonic content as in (III.50) is noted. For narrow quasi-particles, we may use a quasi-particle substitution and undo the k_0 integration in (III.42), with the result (III.45), where k_0 in x_k is the solution of the transcendental equation (III.47), and we have defined a dimensionless constant. In arriving at (III.45), we have made use of the Pfaff identity with 2 tau_+- = tau +- (nu_k + 1/2) and the twist parameter tau = mR + 3/2. Near the black hole horizon, the parton distribution function develops a modified scaling law, but it is still seen to vanish at the end points x_k = 0, 1. In Fig. 3 we show the modified behavior of the partonic distribution function in (III.50) for fixed q^2, tau = 3 and nu_k = 1/2 versus x_k as the light-solid curve (green). The comparison is with the large-x dependence of the nucleon at weak coupling (dashed curve, red) and at strong coupling (dark-solid curve, blue). Near the black hole horizon, the distribution function is shifted to intermediate-x. With our choice of parameters, the holographic result (III.50) reduces to a simple power law in x, as do the strong and weak coupling vacuum results. Remarkably, the formation of a holographic fermionic surface through the AdS_2 x R^3 reduction shifts the holographic partonic distribution to intermediate-x and modifies the hard scattering rule. For our choice of DIS kinematics, the non-normalized F_2 structure function (III.17) follows from (III.45) in the form (III.51) (q_0 ~ q_x), with again Q^2 = q^2 > 0.
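For reference, the Pfaff identity invoked above (the equation numbering (III.49) is the paper's; the identity itself is the standard hypergeometric transformation) reads:

```latex
{}_2F_1(a, b; c; z) \;=\; (1 - z)^{-a}\,
{}_2F_1\!\left(a,\, c - b;\, c;\, \frac{z}{z - 1}\right).
```

It maps the argument z to z/(z-1), which is what allows the x_k dependence of the integrand to be recast into the power-law form quoted in the text.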
Modulo the dispersion relation and the anomalous exponents that characterize the holographic fermions in the reduced AdS_2 x R^3 geometry, the results (III.45) and (III.51) are similar to the ones we derived recently in [17] using general arguments.
IV. FERMIONIC CONTRIBUTION AT LOW-X
In the DIS regime with q >> Q^2, or low-x, the structure functions are dominated by the exchange of a Pomeron, a multi-gluon exchange with vacuum quantum numbers. In holography, this exchange is described either through a closed-surface exchange [22] or a graviton [23] in bulk. For the latter, this regime was identified in the range e^(-sqrt(lambda)) <~ x <~ 1/sqrt(lambda), where the exchange involves the string scattering amplitude. Since x >~ e^(-sqrt(lambda)), the strings are small compared to the size of the AdS space, so that the scattering amplitude is quasi-local with an almost flat-space signature.
A. General set up
The 10-dimensional tree-level effective action that describes the scattering of an R-photon off bulk quantum fermions at low-x is given in (IV.52) [23], where v_a are the Killing vectors for the compact part of the 10-dimensional space. The forward R-current scattering amplitude follows from the pertinent variation with respect to the R-field. Here K refers to the kinematical factor involving the fermions psi and the R-field strength F, and V is the exchanged flat-space 10-dimensional Virasoro-Shapiro string amplitude, as illustrated in Fig. 4. The 10-dimensional Mandelstam variables s-tilde, t-tilde are related to the 4-dimensional ones s, t with the warping made explicit. The imaginary part of the string amplitude (IV.53) involves a delta-function summing over the closed-string Regge trajectory. At low-x we have s ~ 1/x, so that the trajectory sum is dominated by the leading logarithms in 1/x. We now recall that the field strength F_mn describes the bulk-to-boundary R-field strength with incoming momentum q_mu and outgoing momentum q_mu, while psi describes the bulk fermion with incoming and outgoing momentum k_mu on the anomalous Fermi surface. The low-x regime with x << 1 corresponds to the kinematical regime q.k >> q^2, k^2, so that the dominant contribution in K is the term with the spin contraction of the form (q.k), i.e. the first term in (IV.52). Normalizing with Y(v) a spherical harmonic on S^5 in (IV.52), we can write down the one-loop effective action S_F for the diagram shown in Fig. 4 as in (IV.59). We now choose the polarizations to be transverse, with the additional axial gauge condition a_r = 0, so that the boundary-to-bulk R-field is

a_mu(r, q) = (q R^2/r) K_1(q R^2/r) n_mu(q) e^(i q.x)    (IV.60)

The corresponding field strengths and their contraction follow directly.
B. Low-x near the horizon
To analyze the low-x contribution of the fermions near the horizon, we focus on the graviton exchange and make use of warped momenta q-tilde throughout this section.
For small energy transfer q-tilde_0 << mu, the bulk-to-bulk propagator for the transverse graviton h^y_x(q-tilde_0, q-tilde_x) can be written as

G_{xy,xy}(q-tilde_0, q-tilde_x, r_1, r_2) = phi(q-tilde_0, q-tilde_x, r_1) G^B_{xy,xy}(q-tilde_0, q-tilde_x) phi(q-tilde_0, q-tilde_x, r_2)    (IV.63)

where phi(q-tilde_0, q-tilde_x, r) is the normalizable wave function of the graviton, and G^B_{xy,xy} is its boundary Green's function, with Re G(q-tilde_0, q-tilde_x) = f(q-tilde_x, mu) determined from the low-frequency expansion, and Im G(q-tilde_0, q-tilde_x) given by a low-frequency expression [26], with C a proportionality constant and e_0(q-tilde_x/mu) a function to be determined from the low-frequency expansion coefficients. Note that for zero energy and momentum transfer (q-tilde_0 = 0 and q-tilde_x = 0), the bulk-to-bulk propagator of the graviton exchange vanishes, Im G_{xy,xy}(0, 0, r_1, r_2) = 0, since G_+-(0, 0) = 0. Therefore, the t-channel contribution of the graviton to the current-current correlation function, or forward deeply virtual Compton scattering, away from the probe limit vanishes. Its Reggeized form through higher spins (closed-string exchange) vanishes as well.
V. R-RATIO FOR THE BLACK-HOLE
A. Particle and energy density at the horizon

Having assessed the structure functions both at large-x and small-x near the black hole horizon, we now need to normalize them. For that we need to evaluate the contribution of the bulk fermions near the horizon to the particle and energy densities, much like we did in the probe limit. More specifically, we define n and ε as the boundary expectation values of the time component of the R-current and the energy-momentum tensor. The expectation values follow from the holographic correspondence in the tadpole approximation in AdS, with I K (qz ≪ 1) ≈ I K (qz ≪ 1) playing the role of a spectral weight, as defined in (VI.107). Evaluating the momentum integral near the Fermi surface, we find the quoted result with C θ = 1/2π and a dimensionless constant. Since I K = I K , we have ε = n k 0 . Note that k 0 is the solution of the transcendental equation (III.47), which near the Fermi surface k → k F can be solved as k 0 ∼ C 0 /z − with the dimensionless constant C 0 . Therefore, for the dense limit near the horizon, we make the identification E A ≡ ε V A = A ε/n = A k 0 .
B. Normalized structure functions: dense regime
Having determined n, in the dense limit near the horizon, we can now normalize the corresponding structure function (III.45) through the substitution. The integral in (III.45) can be evaluated near the Fermi surface k → k F with the result. Here k 0 ∼ C 0 /z − plays the role of the Fermi energy of the quasi-particles in the holographic Fermi liquid near the horizon z − . We recall that N c ≫ N f for a D3-D7 U(1) vector charge.
Using (III.51) together with (V.73-V.74), we can extract the normalized structure function of the holographic Fermi liquid (x k F k 0 = xm N ), where we defined the dimensionless constant. Note that for a large effective charge e R √α → ∞, we have γ F = −2πν k F and C 0 → 0, which implies that the structure function (V.76) vanishes in the probe limit α ∼ N c /N f → ∞, which is also the regime where the backreaction from the flavor branes can be ignored.
C. R-ratio in the dense regime
We define the R-ratio of the nucleus in the dense limit as the ratio of the dense structure function following from (V.76) to the nucleon structure function as derived below (see VI.115). To quantify each of the contributions in the R-ratio, we now need to fix the parameters entering this expression, many of which are tied by holography. We first fix the explicit holographic parameters: α = N c /4N f = 1 (ratio of branes), 2π 2 c 5 / √ 4πλ = 0.01 (strong coupling) and e R = 0.3 (charge of the probe fermions). Next, we fix the scaling parameters entering in the nucleon pdf: τ = 3 (hard scaling law) and j = 0.08 (Pomeron intercept). The nucleon confining scale enters β = 1/(m N z − ) 2 as in (VI.115) below. We fix it to β = 7.3. Finally, we fix the parameters of the emergent Fermi surface: v F = 1 (Fermi velocity), µ/m N = 1.2 (chemical potential) and two values for k F /m N = 0.8, 3.5 (Fermi momentum).
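The defining formula is elided in the extracted text; schematically (our notation), the EMC-style R-ratio compares the per-nucleon structure function of the dense system to that of a free nucleon:

```latex
R^{\rm dense}_A(x,q^2) \;=\; \frac{F_2^{\rm dense}(x,q^2)}{A\,F_2^{\rm nucleon}(x,q^2)} \,.
```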
With these parameters fixed, we show in Fig. 5 the behavior of the dense R-ratio versus x for a large Fermi momentum k F /m N = 3.5 and fixed momentum q/m N = 1. The low-x part is dominated by the black hole, with the emergent Fermi surface contributing only at large-x. This behavior is expected. At strong coupling, most of the partonic content is shifted to very low-x while scattering off the bulk black hole in leading order. The subleading and quantum correction due to scattering off the emergent Fermi surface only contributes at large-x for a sufficiently large Fermi momentum. In Fig. 6 we show the same behavior for a smaller Fermi momentum k F /m N = 0.8. The contribution from the emergent Fermi surface is dwarfed by the scattering off the black hole.
Remarkably, the essential features of scattering off a nucleus are seen in Fig. 5 when the emergent Fermi surface is made visible through a large Fermi momentum. The R-ratio exhibits shadowing for x < 0.1, anti-shadowing for 0.1 < x < 0.3, the EMC-like effect for 0.3 < x < 0.8 and Fermi motion for x > 0.8.
Finally, we note that since x k F /x = m N /k 0 = m N z − /C 0 , most of the dependence on matter (hence A through the location of the horizon z − ) comes from this contribution in the dense limit. We note that this contribution is finite for α = 1 for a U(1) R-charge, and is vanishingly small for α = N c /N f for D3-D7 probe branes at large N c since γ F → 0 and therefore C 0 → 0 in (V.78). For the latter, the R-ratio for dense nuclei (V.79) takes a limiting value, with coefficients defined for the dense part and for the nucleon part (V.84). We recall that β = 1/(m N z − ) 2 . The q-dependence does not drop in the dense limit.
VI. DIS IN THE PROBE LIMIT: DILUTE REGIME
We now consider scattering in the probe limit, where the bulk fermions carry a density without affecting the underlying AdS 5 geometry (with or without a wall), i.e. µ √α → 0 with µ √α × e R √α = µ e fixed, where α ∼ N c /N f ≫ 1 and µ is the chemical potential. This is the dilute limit, which amounts to using the free spectral form (III.29) with the substitution of the Fermi occupation factor for a fermion of momentum k, mass ω 1 and Fermi energy µ e , and the vacuum (AdS 5 ) wavefunctions. For the confining case, the mass ω 1 is quantized. This analysis complements the one we have discussed recently using generic arguments based on a density expansion of a trapped Fermi liquid [17].
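At zero temperature the Fermi occupation factor referred to above reduces to a step function; in our schematic notation, for a fermion of momentum k, mass ω₁ and Fermi energy µ_e,

```latex
n_F(\omega_1,k) \;=\; \theta\!\left(\mu_e - \sqrt{k^2+\omega_1^2}\right).
```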
A. Large-x
With this in mind, consider the case of scattering in the ultraviolet region of the black hole, with the hard fermion of momentum k + q and the remaining fermion of momentum k treated in the probe approximation. This example will help clarify the relationship between our analysis and that in [20]. For that we use the vacuum propagator (III.28) for the hard fermion, and the density-modified propagator (X.128) and (VI.85) for the soft fermion in (III.39), where the hard vertices are defined accordingly. Here ψ(r, ω) and ψ̄(r, ω) are the hard wave functions [20] ψ(z, ω) = z 5/2 [J mR−1/2 (ωz)P + + J mR+1/2 (ωz)P − ], with the chiral projectors P ± = (1 ± γ 5 )/2. In this regime, the imaginary parts are reducible to on-shell delta functions after using (VI.85). Recall that n F (ω, k) is the Fermi distribution for a fermion of mass ω and momentum k near the boundary in the probe limit. With this in mind, (VI.86) becomes subject to the physical condition ω 1 + ω 2 < q (i.e., a meson or virtual photon of mass q decaying into KK-fermions of masses ω 1 and ω 2 ), where we made use of the approximation ω 2 z ≪ 1. Note that without the approximation ω 2 z ≪ 1, the above integral I zν involves [27,28] the fourth Appell series F 4 of hypergeometric functions, which is indeed convergent only for ω 1 + ω 2 < q.
The integral in (VI.90) is in agreement with the R-current scattering on a dilatino in [20]. Evaluating the integral in (VI.89) over ω 1 using the delta-function δ(ω 2 1 − s k ), and using (VI.90), we have (VI.92). The evaluation of the remaining k 0 -integral in (VI.92) using the last delta-function yields (VI.93), where E k = (| k| 2 + ω 2 2 ) 1/2 < |q 0 |, and k 0 = E k . To extract the structure functions (III.17) from (VI.93) we carry out the spin trace by contracting with the time-like frame vector n µ = (1, 0, 0, 0), where we have assumed n · q ≈ 0 and k 2 ≈ 0. Note that the trace in (VI.94) is the same trace evaluated in [20] for ω 2 = 0 (see their Eq. 72). Using (III.17) with x A = −q 2 /2P A · q, we can now extract the structure functions F 1,2 (x A , q 2 ) of a state with momentum P µ A from (VI.94). We have x k = −q 2 /2k · q, s k = −(k + q) 2 ≈ −q 2 (1 − 1/x k ) and k 0 = E k = (| k| 2 + ω 2 2 ) 1/2 < |q 0 |. We can reduce I zν ( √ s k , ω 2 , q) in (VI.90) in terms of x k for the mass range ω 2 ≤ Λ. Using (VI.96), we can re-write the structure functions (VI.95) in terms of x k , with the twist parameter τ = mR + 3/2, following the approximation ω 2 ≪ q. In contrast to the dense limit in (IV.64), the bulk-to-bulk graviton propagator in the probe limit carries t = −q 2 = q 2 0 − q 2 x . Therefore, G xy,xy (q 0 = 0, q x = 0, z, z ) does not vanish in the probe limit. In this limit, the graviton exchange Reggeizes by including higher spin-j (stringy) exchange.

B. Small-x

With this in mind, we now consider the case of the one-loop fermionic contribution in the probe limit at low-x. In this regime, the bulk-to-bulk fermion propagator is of the form [19]. Note that only in this section, we have added an extra factor of i in the gamma matrix in comparison to (III.31) and replaced ω by −ω to make the comparison with standard results easier.
Inserting (IV.62) in (IV.59), we obtain the on-shell one-loop effective action S F [a µ ≡ n µ ] = n µ n ν Im G µν F (q). Here E k = (| k| 2 + ω 2 ) 1/2 < |q 0 |, k 0 = E k , and x k = −q 2 /2k · q. Also we have set r j = R √ α s/2 √ j, w j = qR 2 /r j = qz j , and defined the integrals I 0,n and I 1,n with ν = (n + 1)/2. They are related to each other recursively by (n − 1)I 1,n = (n + 1)I 0,n .
The structure functions of the nuclei at small-x in the probe limit are given by the corresponding Reggeized expressions. The effects caused by diffusion in the radial direction on the structure functions far from the black hole at low-x are discussed in Appendix E.
C. Normalized structure functions
To normalize the structure functions in the probe limit, we recall that the bulk density and bulk energy density follow from the holographic principle, with k F (ω) = (µ 2 e − ω 2 ) 1/2 , after taking qzK 1 (qz ≪ 1) ≈ 1. The ω-integration in (VI.106) is carried over the bulk spectral density I K (0, ω) with an upper cut-off Λ. In the conformal case, the cutoff is a priori arbitrary. In the conformally broken case, say a hard wall at z = z − , we can set z − Λ = z − m N in (VI.106) and, assuming Λ large, pick only the nucleon ground state. A higher cutoff would include higher excited states of the nucleon. With this in mind, we can first undo the k-integration by approximating it near the Fermi surface, and then undo the ω-integration by keeping only the leading contribution. We now identify the bulk density ñ = A/V A as the density of a fixed target, say a nucleus, with A nucleons in a fixed volume V A and a total energy E A = AE F . The normalized structure functions F 1,2 are then related to our earlier and un-normalized structure functions F̃ 1,2 through (VI.109). The normalized structure functions at large-x follow by inserting (VI.105) and (VI.108) into (VI.109) (VI.110). We define the x-fractions and note that the large-x structure functions in (VI.110) in the probe approximation obey the analogue of the Callan-Gross relation F 2 = 2x A F 1 for a holographic and dilute nucleus. Also, we note that (VI.110) are analogous to the structure functions of the nucleus obtained through the so-called x-scaling of the structure functions of the nucleon.
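For reference, the Callan-Gross-type relation quoted above reads, with x_A = −q²/2P_A·q as defined earlier,

```latex
F_2^A(x_A,q^2) \;=\; 2\,x_A\,F_1^A(x_A,q^2)\,.
```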
Case-2 (Small-x):
Doing the momentum integrals in (VI.105) near k → k F and doing the appropriate normalization as in the large-x regime, we find (VI.112).

D. R-ratio in the probe limit

We define the R-ratio of the nucleus in the probe (dilute) limit as in (VI.113), where F dilute 2 (x, q 2 ) is given by the sum of (VI.110) for large-x and (VI.112) for small-x, and F nucleon 2 (x, q 2 ) is given by (VI.115). Recall that βm 2 N = 1/z 2 − is related to the confining scale here, and that x F E F = xm N . Note that in (VI.112) j = 1 in the absence of transverse diffusion or curvature corrections. When the latter are included, j → 1 − O(1/ √ λ). The structure function of the proton follows from (VI.115) by setting k F = 0 or through the substitution x F → x. The R-ratio for the probe or dilute limit is independent of q 2 . Note that the first contribution is proportional to e 2 R λ 0 while the second is proportional to e 0 R / √ λ, independent of the R-charge.
In Fig. 7 we show the behavior of the dilute R-ratio (VI.113) versus x for fixed Fermi momentum k F /m N = 1. The holographic parameters used are fewer but consistent with those used for the dense R-ratio in Fig. 5. Specifically, we have used: e R = 0.3 (R-charge of the bulk fermion), 2π 2 c 5 / √ 4πλ = 0.01 (strong coupling), τ = 3 (hard scaling exponent), j = 0.08 (Pomeron intercept). In the dilute case, the R-ratio is dominant at large-x and asymptotes to 1 at small-x. Clearly visible are the EMC-like effect for 0.2 < x < 0.8 and the Fermi motion for x > 0.8. We have checked that the overall features of Fig. 7 remain unchanged for smaller values of e R but fixed k F /m N , in conformity with the probe limit. This holographic behavior is very similar to the one we presented recently using general arguments [17].
VII. CONCLUSIONS
In the double limit of a large number of colors and strong coupling, DIS scattering off an extremal black hole is of order N 2 c following from the absorption of the bulk R-current by the black hole. Through a suitable mapping onto a nucleus, the ensuing structure functions are dominated by low-x. Scattering off the black hole is the ultimate coherent scattering off a dense nucleus with strong shadowing as we noted in [12].
To order N 0 c , DIS scattering is off holographic fermions hovering around the black-hole horizon due to quantum pair creation. In this regime, the geometry is that of AdS 2 ×R 3 with an emergent Fermi surface and anomalous scaling laws. DIS scattering off these bulk fermions shows that their partonic distribution functions on the boundary exhibit anomalous exponents and modified hard scattering rules in comparison to scattering off bulk fermions in the dilute or probe limit. In both limits, DIS scattering exhibits the EMC-like effect at intermediate-x and Fermi-like motion at large-x.
The fermionic contribution in the probe limit exhibits many similarities with our recent analysis of DIS scattering off a dilute nucleus using the reduction formula and the holographic identification in the dilute approximation [17]. The partonic content of the bulk fermions is found to shift to intermediate-x. Remarkably, our results both in the dense and dilute limit exhibit the essential features observed in the reported DIS scattering on real nuclei [1,2].

VIII. ACKNOWLEDGEMENTS

IX. APPENDIX A: GAMMA MATRICES

The gamma matrices in curved and tangent space used to analyze the Dirac equation in the extremal RN-AdS black hole will be made explicit here. For that, consider the generic line element in curved space. If we refer to the indices in curved space by µ, ν (also t, i) and those in the tangent space by a, b (also t, i), then the gamma matrices are related through the vierbeins. If we set the vierbeins as [21] e t = √ g tt dt, e i = √ g ii dx i (IX.118), then the curved-space gamma matrices follow. In the tangent space, the gamma matrices read as in [16]. The non-vanishing spin connections follow accordingly.
X. APPENDIX B: SOFT SPINORS
The soft normalizable wavefunctions were constructed in [21]; we reproduce them here for completeness. The Dirac equation in the AdS 2 ×R 3 geometry is solved by the rescaled spinors, with W = −iv T + σ 2 v − and a + (k 0 , k) = c 1 (k − k F ) + c 2 k 0 + · · · , where c 1,2 ∼ R 2 /r − . The explicit spinors are given in (X.125). Note that for pure AdS 2 , the soft wave-functions simplify. Finally, note that the Feynman propagator for the soft part in (III.32-III.33) is given in terms of the boundary spectral function ρ B (ω, k) and the normalizable wave function ψ α (r, k) for the Dirac equation in curved AdS 5 .
XI. APPENDIX C: EFFECTIVE VERTICES
The soft-to-hard transition vertices entering in the bulk DIS amplitude involve (X.127) for the reduction to AdS 2 , or (X.124) in general for the soft part, with the hard part of the wave-function given above. More specifically, for pure AdS 2 , the transition vertex simplifies. In general, the transition vertices are of the form (XI.132). Using the gamma matrices explicitly, we can simplify the effective vertices (XI.132). More specifically, we have Λ x 11 (r 2 ; ω 1 ; q; k) = ie R (r − /R 2 ) × · · · , with the rest of the vertices following by symmetry: Λ x 11 (r 1 ; k; q; ω 1 ) = −Λ x 11 (r 2 ; ω 1 ; q; k), Λ x 22 (r 1 ; k; q; ω 1 ) = Λ x 22 (r 2 ; ω 1 ; q; k) ≡ 0 (XI.134), and all other components vanishing. Performing the change of variable r = R 2 /z and setting z ≪ z − , we can re-write the integral in (XI.133) as Λ x 11 (z 2 ; ω 1 ; q; k) = C(ν k ) a + (k 0 , k) I z (ω 1 ; q; k). The integration can be carried out analytically. Note that for the special value ν k = ν * k = mR, the integrand reduces to the one in [20] and can be evaluated exactly, as in (XI.139), with C z (ν * k ) = 2^{mR+1/2} Γ(mR + 3/2).
Extracellular Vesicle Release Promotes Viral Replication during Persistent HCV Infection
Hepatitis C virus (HCV) infection promotes autophagic degradation of viral replicative intermediates for sustaining replication and spread. Without proper regulation, excessive activation of autophagy can induce cell death and terminate infection. A prior publication from this laboratory showed that an adaptive cellular response to HCV microbial stress inhibits autophagy through beclin 1 degradation. How secretory and degradative autophagy are regulated during persistent HCV infection is unknown. This study was performed to understand the mechanisms of viral persistence in the absence of degradative autophagy, which is essential for virus survival. Using HCV infection of a CD63-green fluorescent protein (CD63-GFP)-labeled, stably transfected Huh-7.5 cell line, we found that autophagy induction at the early stage of HCV infection increased the degradation of CD63-GFP, which favored virus replication. However, the late stage of persistent HCV infection showed impaired autophagic degradation, leading to the accumulation of CD63-GFP. We found that impaired autophagic degradation promoted the release of extracellular vesicles and exosomes. The impact of blocking the release of extracellular vesicles (EVs) on virus survival was investigated in persistently infected cells and sub-genomic replicon cells. Our study illustrates that blocking EV and exosome release severely suppresses virus replication without affecting host cell viability. Furthermore, we found that blocking EV release triggers interferon lambda 1 secretion. These findings suggest that the release of EVs is an innate immune escape mechanism that promotes persistent HCV infection. We propose that inhibition of extracellular vesicle release can be explored as a potential antiviral strategy for the treatment of HCV and other emerging RNA viruses.
Introduction
The Hepatitis C virus (HCV) is a blood-borne pathogen causing chronic inflammation of the liver without any significant symptoms over several decades that, if untreated, leads to cirrhosis and, potentially, to the development of hepatocellular carcinoma (HCC) [1][2][3]. Approved direct-acting antivirals (DAAs) can cure most cases of chronic HCV infection and, if prescribed early enough, can prevent the progression of cirrhosis [4]. An HCV cure reduces liver inflammation, the progression of liver fibrosis, and HCC development, which decreases HCV-associated mortality [5,6]. The mechanisms of virus and host interaction that dictate the pathogenesis of chronic liver disease, cirrhosis, and HCC are not well understood. This knowledge is essential for developing a biomarker for the early detection of cirrhosis and HCC after an HCV cure.
The replication of the HCV genome occurs predominantly in endoplasmic reticulum (ER)-derived membranes, the most abundant and elaborate membrane-rich organelles [7]. HCV extensively utilizes the ER during all stages of chronic liver disease, leading to cellular stress. Chronic HCV infection leads to the increased accumulation of misfolded proteins in the hepatocytes. The accumulation of viral proteins and replicative intermediates generates an innate stress response. This maladaptive stress response leaves infected cells vulnerable to additional stress, including metabolic and oxidative stress. The low-level accumulation of misfolded proteins in the ER is cleared by ubiquitin-proteasome degradation, referred to as type I ER-associated protein degradation (type I ERAD). When type I ERAD is not sufficient for reducing chronic stress, the ER initiates a second line of protein degradation through the induction of autophagy (type II ERAD). The adaptive cellular response through autophagy is activated to reduce stress and improve the survival of the infected cell [8,9]. The cellular stress response is an integral part of liver homeostasis that leads to different types of cell death pathways, which fuel the common tumor suppressor mechanism. Autophagy comprises a set of evolutionarily conserved degradation pathways that deliver cytosolic cargoes to the lysosome or endosomes for degradation [10,11]. Autophagy connects two major cell death pathways (necrosis and apoptosis); therefore, it serves a tumor-suppressive role in the liver during chronic HCV infection. Autophagy starts declining during the chronic stage of liver disease, leading to a reduced clearance of misfolded proteins and cellular constituents that results in the development of liver fibrosis and HCC [12]. Nevertheless, how the adaptive cellular response to HCV microbial stress modulates the autophagy pathway to improve cell survival, and how this contributes to HCC development during chronic HCV infection, is unknown.
This knowledge is essential to prevent virus-associated pathological conditions, such as liver cirrhosis and HCC.
There are three types of autophagy: macroautophagy, chaperone-mediated autophagy (CMA), and microautophagy. These processes can compensate for each other to improve cell survival under stress [13,14]. Consistent with these reports, previous publications from this laboratory demonstrated that excessive HCV microbial stress inhibits autophagy by impairing the fusion of the autophagosome with the endosome or lysosome [12,15]. It was observed that CMA is activated in compensation in HCV-infected culture to improve cell survival. However, CMA cannot degrade aggregated misfolded proteins, lipids, and nucleic acids. It is unclear how dysfunctional viral and cellular components are removed from chronically infected cells when autophagic degradation is inhibited.
Extracellular vesicles (EVs) are a heterogeneous group of vesicles released by cells under physiological and pathological conditions. They can originate through the endosomal pathway from multivesicular bodies (MVBs), or pinch off from the cell membrane. MVBs are membrane-bound organelles generated from the invagination of the late endosomal membrane and contain intraluminal vesicles [16]. They are involved in the transporting, storing, sorting, recycling, and releasing of many substances derived from the Golgi complex, ER, and mitochondria [17][18][19]. They also participate in autophagy through the degradation of organelles, proteins, and RNA [20]. During autophagy, MVBs fuse with an autophagosome, generating a hybrid organelle called an amphisome, which fuses with lysosomes for degradation [21][22][23]. Based on these pieces of evidence, we propose that the extracellular release of misfolded protein aggregates, nucleic acids, molecular chaperones, cytosolic proteins, lipids, and small RNAs through exosomes is an efficient cellular adaptive mechanism to improve cell survival during persistent HCV infection.
This study was performed to test the hypothesis of whether EV release is a part of the adaptive cellular response to microbial stress that promotes virus-cell survival during chronic infection. Our results showed that increased cellular stress during persistent infection impairs autophagic degradation. Impaired autophagic degradation of HCV replicative intermediates promotes the release of EVs and exosomes. The inhibition of EV release by small-molecule inhibitors dramatically suppressed virus replication and activated an innate antiviral program, leading to interferon (IFN) production. Collectively, our study's results illuminate an adaptive cellular response to HCV microbial stress that promotes the release of EVs which contribute to the persistent viral infection.
Western Blotting
Western blotting was performed using a standard protocol established in our laboratory. Infected cells were harvested by trypsin-EDTA treatment (Life Technologies, Carlsbad, CA, USA) at different time points and were washed twice with PBS, then lysed in ice-cold RIPA buffer (Sigma-Aldrich, St. Louis, MO, USA) with a protease inhibitor (ThermoFisher Scientific, Waltham, MA, USA) and phosphatase inhibitor cocktail (Sigma-Aldrich, St. Louis, MO, USA). The total protein content of the extract was quantified using a NanoDrop TM 2000 (ThermoFisher Scientific, Waltham, MA, USA). Cell lysates (approximately 20 µg of protein) were resolved by SDS-PAGE and transferred onto a nitrocellulose membrane (0.45 µm pore size, ThermoFisher Scientific, Waltham, MA, USA). The membrane was blocked using 0.05 g/mL blotting-grade milk powder (Bio-Rad, Hercules, CA, USA) for two hours, then incubated with primary antibody overnight on an orbital shaker. After overnight incubation, the antigen-antibody complex was visualized with HRP-conjugated goat anti-rabbit or anti-mouse IgG (Cell Signaling, Beverly, MA, USA), then developed with an ECL detection system (SuperSignal TM West Pico PLUS, ThermoFisher Scientific, Waltham, MA, USA) using the Bio-Rad ChemiDoc imaging system.
Confocal Microscopy
The relationship of HCV core and CD63-GFP expression was verified by confocal microscopy. Huh-7.5 cells stably transfected with CD63-GFP (Huh-7.5-CD63-GFP) were infected with JFH1-RLuc chimera HCV using a standard protocol [25]. These cells were examined for GFP expression on day 9 and day 21 after virus infection. Cells were incubated for 1 h at 4 °C with HCV core antibody (1:100 dilution) in ice-cold DMEM containing 1% FBS with gentle shaking. Cells were then washed once with DMEM containing 1% FBS and incubated with Texas Red (ThermoFisher Scientific, Waltham, MA, USA) at a 1:1000 dilution at 4 °C with shaking for 30 min. Finally, cells were washed twice with DMEM containing 1% FBS and fixed with 4% paraformaldehyde. The cell suspension was then examined for HCV core and CD63-GFP expression using confocal microscopy. In addition, 4′,6-diamidino-2-phenylindole (DAPI) staining was used for nuclear imaging. Finally, the expression of GFP, as well as the HCV core, was monitored using a Nikon A1 confocal microscope. Transfected cells were examined using a fluorescence microscope (Olympus IX73, Tokyo, Japan) at 484 nm for green fluorescence, 563 nm for red fluorescence, and 340 nm for DAPI. For each area, two sets of pictures were generated. Combined images were then generated by superimposing the different fluorescence images using Olympus cellSens Dimension version 1.15 software.
Exosome Isolation and Quantification
For EV isolation, infected cells were cultured in exosome-depleted medium (ThermoFisher Scientific, Waltham, MA, USA). Culture supernatant was collected at multiple time points and centrifuged at 2000× g for 30 min to remove cellular debris. The supernatant was transferred to a new sterile tube and 0.5 volumes of total exosome isolation reagent (for cell culture medium) from Invitrogen were added. Samples were incubated at 4 °C overnight and subsequently centrifuged at 10,000× g for 60 min at 4 °C. The supernatant was removed without touching the exosome-containing pellet. The pellet was resuspended in 1× phosphate-buffered saline (PBS) and stored at −20 °C for downstream analysis. The absolute concentration and size distribution of EVs from infected cultures were measured using NanoSight (Model NTA3300 with 532 nm green laser module, Malvern, Worcestershire, UK), which is a laser-based light scattering system [26,27]. The general nanoparticle quantification range was 10 6 to 10 9 particles/mL. Exosome pellets were serially diluted 1:100 to 1:10,000 with particle-free water before starting the analysis. If the particle count was above the detection limit in the initial analysis, we then used the next dilution level. Final exosome concentrations were calculated according to the dilution factor. Exosome count and size were calculated by NanoSight Nanoparticle Tracking Analysis (NTA) software version 2.3 (Malvern, Worcestershire, UK). For every sample, at least three different 30-s video image sets were captured.
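The dilution-factor bookkeeping described above amounts to a one-line back-calculation; the following is an illustrative sketch (the helper name and numbers are ours, not part of the NTA software):

```python
def final_concentration(measured_particles_per_ml, dilution_factor):
    """Back-calculate the stock EV concentration from a diluted NTA reading."""
    return measured_particles_per_ml * dilution_factor

# A 1:1000 dilution reading of 5e7 particles/mL (inside the 1e6-1e9
# quantification window) implies a stock of 5e10 particles/mL.
stock = final_concentration(5e7, 1000)
```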
Cryo-TEM for Exosome Characterization
Transmission Electron Microscopy (TEM) and Cryo-TEM analyses were performed using exosomes purified from cell culture supernatants using an ultracentrifugation protocol [28]. Culture supernatants were centrifuged at 1000× g at room temperature for 10 min. The supernatants were collected and spun again at 10,000× g at room temperature for 30 min to remove cellular debris. The supernatants were filtered through 0.22 µm filters (Sigma-Aldrich, St. Louis, MO, USA), and exosomes were precipitated by ultracentrifugation at 100,000× g at room temperature for 2 h (Beckman Ultracentrifuge). The exosome pellet was resuspended in PBS for downstream analysis. Cryo-TEM was performed to demonstrate the purity and size of exosomes released from the HCV-infected cell culture using an FEI G2 F30 Tecnai TEM operating at 150 kV. The exosome samples were prepared on a lacey carbon-coated copper grid (200-mesh, Electron Microscopy Sciences) using an automated plunging station (FEI Vitrobot). The sample solution was applied to the grid. The excess liquid was blotted with the attached blotting papers for 2 s to produce a thin sample film that was immediately vitrified by plunging into liquid ethane. The grid with the cryogenically immobilized sample was transferred onto a single-tilt cryo-specimen holder for imaging.
Transmission Electron Microscopy (TEM)
Uninfected and infected Huh-7.5 cells on days 9 and 21 were harvested using trypsin-EDTA. Cell pellets were washed with PBS and then suspended in 3% glutaraldehyde fixative (Sigma-Aldrich, St. Louis, MO, USA). Cell pellets were fixed in 1% osmium tetroxide and dehydrated with an ethyl alcohol series. Samples were infiltrated and embedded in Eponate-12 resin and polymerized at 60 °C for 24 h. Thin sections (70 nm) of the samples were placed on copper grids. Cells were examined using a G2 F30 Tecnai TEM at 200 kV. We captured cytoplasmic areas of 10 different cells under the grid, and the number of autophagic vacuoles (AVs) and multivesicular bodies (MVBs) was counted in uninfected, early, and late-infected Huh-7.5 cells.
MTT Proliferation/Viability Assay
Cells were counted with an EVE automated cell counter (NanoEnTek, Seoul, Korea). Huh-7.5 cells were seeded in 96-well plates at a density of 7500 cells/well. Each treatment group was prepared in triplicate. After 72 h, 20 µL of MTT solution (5 mg/mL) was added per well into the culture medium. The plate was incubated for 3.5 h at 37 °C in a culture hood. After removing the medium, 150 µL of MTT solvent was added per well, and the plate was covered with tinfoil and incubated on an orbital shaker for 15 min. Absorbance measurements were performed at 595 nm with a reference filter of 655 nm in an iMark Microplate Reader (Bio-Rad, Hercules, CA, USA). MTT solution and MTT solvent were prepared according to a previously described protocol [29].
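The conversion from background-corrected absorbance to relative viability implied by this assay can be sketched as follows (hypothetical readings; the function is our illustration, not part of the assay kit):

```python
def percent_viability(a_treated, a_control, a_blank=0.0):
    """Viability of treated wells relative to untreated controls (%)."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Example: treated A595 = 0.60, control A595 = 0.80, blank = 0.05
viability = percent_viability(0.60, 0.80, a_blank=0.05)  # ~73.3%
```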
Enzyme Linked Immunosorbent Assay (ELISA)
After exosome inhibitor treatment, the IFNL1 protein concentration in cell culture supernatants was measured using a Human IL-29 ELISA kit (Invitrogen/ThermoFisher, Waltham, MA, USA; cat# 88-7296-22). In brief, wells were incubated with 50 µL of culture supernatant, and the remaining steps were carried out following the sandwich ELISA protocol. The plate was read at 450 nm. The concentration of IFNL1 was calculated using the standard curve of the internal control supplied with the ELISA kit.
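Reading concentrations off the kit's standard curve can be illustrated with simple linear interpolation between standard points (our sketch; real kits are typically fitted with a 4-parameter logistic, and the standard values below are invented):

```python
def concentration_from_od(od, standards):
    """standards: (OD, pg/mL) pairs sorted by OD; linear interpolation."""
    for (od_lo, c_lo), (od_hi, c_hi) in zip(standards, standards[1:]):
        if od_lo <= od <= od_hi:
            frac = (od - od_lo) / (od_hi - od_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("OD outside the standard curve; re-dilute the sample")

standards = [(0.1, 0.0), (0.5, 125.0), (1.0, 250.0), (2.0, 500.0)]
ifnl1 = concentration_from_od(0.75, standards)  # 187.5 pg/mL
```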
Statistical Analysis
Statistical analysis was performed using GraphPad Prism software version 8 (GraphPad Software, Inc., La Jolla, CA, USA). All experiments were performed 3 independent times with fresh cultures of cells each time to obtain 3 replicates. Because Huh-7.5 is the only cell line in which these experiments can be done, this approach provides measurements with the maximum possible biological independence. However, to avoid violating independence assumptions, 2-tailed paired t tests were used to compare means of variables. Statistical significance is shown as * p < 0.05, ** p < 0.01, *** p < 0.001.
Autophagic Degradation and Exosome Release Support Virus-Cell Survival during Persistent HCV Infection
Initially, we tested the impact of autophagy, lysosomal degradation, and exosome release on HCV replication and dissemination. We used an infectious full-length GFP reporter-based chimera HCV virus and sub-genomic replicon cell culture models established in our laboratory (Figure 1A). Huh-7.5 cells were infected with the HCV-GFP chimera virus and treated with the autophagy inducer (Torin1), lysosomal inhibitor (HCQ), or MVB inhibitor (GW4869) for 72 h. The MTT assay determined the optimal dosage for each drug: 100 nM for Torin1, 10 µM for HCQ, and 10 µM for GW4869 (Figure S1). GFP positivity was examined under fluorescence microscopy and then quantified by flow analysis at different time points. As expected, we found that the induction of autophagy by Torin1 treatment increased HCV replication and spread (Figure 1B,C). Inhibition of lysosomal degradation by HCQ treatment decreased the percentage of HCV-positive cells, and inhibition of exosome release by GW4869 treatment inhibited HCV replication (Figure 1B,C). The impact of autophagy induction, HCQ, and GW4869 treatment on extracellular vesicle release was quantified by NTA. We found that Torin1 and HCQ treatment increased extracellular vesicles, whereas GW4869 inhibited extracellular vesicle release (Figure 1D). We examined the impact of Torin1, HCQ, and GW4869 treatment on intracellular HCV RNA replication in a sub-genomic HCV replicon (R4GFP) cell line that does not produce virus since it lacks the structural genes. This cell line is resistant to interferon alpha (IFNA) but sensitive to interferon lambda 1 (IFNL1). IFN-resistant R4GFP cells were treated with either Torin1, HCQ, or GW4869 for 72 h. The expression of GFP was quantified by flow analysis and fluorescence microscopy (Figure 1E,F). The results show that, while autophagy induction supports replication of HCV sub-genomic RNA, HCQ and GW4869 treatment inhibit replication.
Inhibition of lysosomal degradation and exosome release decreased HCV replication. We found that HCQ, which inhibits lysosomal degradation, increased exosome release. Taken together, our data from both models suggest that autophagy induction promotes HCV infection and spread, which is consistent with an earlier study on poliovirus, where autophagy induction correlated with viral replication [30].
Persistent HCV Infection Inhibits CD63-Mediated Autophagic Endosome-Lysosomal Degradation
Tetraspanins (TSPANs) CD9, CD63, and CD81 are enriched in the membrane of exosomes and are often used as exosomal markers [31]. The TSPAN protein CD63 plays a crucial role in endosomal cargo sorting and EV production [32,33]. To understand the impact of autophagy-lysosome pathways regulating MVB degradation and exosome release, we developed a persistent HCV replication model using Huh-7.5-CD63-GFP liver cell culture. The replication of the HCV genotype 2a virus (JFH-deltaV3-Rluc chimera) in CD63-GFP cells was studied by the measurement of luciferase activity for over one month. Consistent with our previous report, we were able to demonstrate the high-level replication of this chimera virus in Huh-7.5 cells and Huh-7.5-CD63-GFP cells (Figure 2A). We found that most of the cells in culture at day 9 showed high-level expression of NS3 by Western blotting (Figure 2B) and viral core protein expression by immunostaining (Figure 2C). The number of core protein-expressing cells quantified by ImageJ software can be seen in Figure 2D. We examined the impact of autophagy inhibition on MVB formation and degradation by quantifying CD63-GFP expression. Huh-7.5-CD63-GFP cells were infected with HCV, and GFP and HCV core expression was measured at 3, 6, 9, and 21 days by confocal microscopy (Figure 3A,B). The impact of HCV replication in the Huh-7.5-CD63-GFP cells on cellular autophagy flux and MVB degradation was confirmed by the measurement of NS3 and CD63 expression by Western blot analysis (Figure 3C).
Western blot analysis showed that autophagic flux (p62, LC3BI/II ratio) gradually decreased due to HCV replication until day 9. The levels of p62 and the LC3BI/II ratio increased after day 12, suggesting that the late stage of persistent HCV infection inhibits autophagic flux (Figure 3C). These results are consistent with our previous publication showing that persistent HCV replication inhibits autophagy [15]. The HCV-induced effect on CD63-GFP expression was verified by flow analysis on day 9 and day 21. We found that the early stage of persistent HCV infection efficiently degraded CD63-GFP (expression decreased from 39% to 27.5%), whereas late-infected culture showed increased expression (39% to 65.6%) (Figure 3D). All these results indicate that the early stage of HCV replication degrades MVBs and that the late stage of persistent HCV replication inhibits the degradation of CD63-GFP.
Autophagy Induction Promotes Degradation of MVBs
MVBs are membrane-bound organelles that belong to the endosomal pathway. They play a major role in cellular metabolism, such as transporting, storing, sorting, recycling, and releasing proteins, lipids, and small RNAs derived from the Golgi, ER, and mitochondria. We examined whether autophagy induction or inhibition through small-molecule drugs alters the expression of CD63-GFP without any viral infection. Huh-7.5-CD63-GFP cells were treated with an autophagy inducer (Torin1) or lysosome inhibitor (HCQ) for 24 h. Torin1 is a potent and selective ATP-competitive inhibitor of mTOR (mechanistic target of rapamycin) kinase. HCQ is an alkalinizing drug that inhibits lysosomal degradation by increasing pH. The next day, cells were incubated for one hour with DQ-BSA, bovine serum albumin conjugated to a self-quenched red fluorophore (RFP), and the amount of GFP and RFP expression was quantified by flow analysis. Autophagy induction in Huh-7.5-CD63-GFP cells by Torin1 treatment increased CD63-GFP degradation, since the percentage of GFP-positive cells decreased (72.9% to 16.1%), whereas the rate of DQ-BSA-positive red-fluorescent cells increased (0.2% to 46.3%). Autophagy inhibition by HCQ treatment impaired lysosomal degradation, leading to an increase in CD63-GFP-positive cells (72.9% to 84.6%) and only 2.6% red-fluorescence-positive cells by flow analysis (Figure 4A). A morphological evaluation of cells treated with Torin1 and HCQ under fluorescence microscopy showed that autophagy induction degraded, whereas autophagy inhibition accumulated, CD63-GFP (Figure 4B). Quantification of these data from three separate experiments revealed that the autophagy inducer Torin1 degrades CD63-GFP, whereas HCQ treatment causes its accumulation (Figure 4C). DQ-BSA fluorescence was strong in the Torin1-treated culture, but not in the HCQ-treated culture.
The impact of autophagy modulation by Torin1 treatment on the degradation of the MVB protein CD63-GFP was verified by Western blot analysis (Figure 4D). These results are consistent with our observation in HCV-infected culture showing that autophagy inhibition promotes accumulation of CD63-GFP due to its impaired lysosomal degradation at the level of autophagosome-lysosome fusion.
Persistent HCV Replication Decreased Autophagic Vacuoles and Autophagosome-Lysosome Fusion without Affecting Lysosomal Activity
The impact of HCV replication on autophagosome-lysosome fusion (autolysosome formation) was verified using alternative approaches. Monodansylcadaverine (MDC) was used previously for the labeling of active AVs. DQ-BSA was used to quantify lysosomal protease activity. These two reagents were used to measure autophagy in early (day 9) and late-infected (day 21) cultures by flow cytometry. We found that the number of DQ-BSA-positive cells was comparable between early and late-infected cultures (52.4% vs. 55.1%), whereas there was a marked difference in the MDC-positive fluorescence observed between early and late-infected cultures (33.1% vs. 0.4%) (Figure 5A). Uninfected Huh-7.5 cells and HCV-infected Huh-7.5 cells without any DQ-BSA or MDC treatment did not show any positive staining. These results were confirmed by the visualization of infected cells under fluorescence microscopy. Only the early stage of persistent HCV infection produced AVs that stained with MDC; similar staining was not observed in late-infected culture, suggesting the absence of AVs at that stage (Figure 5B). The red fluorescence staining displayed comparable active lysosomal protease activity between the early and late stages of the persistently infected culture. A statistical analysis of three separate experiments showed that persistent HCV replication results in significant inhibition of autophagy induction without compromising cellular lysosomal degradation (Figure 5C).
Ultrastructural Analysis of Huh-7.5 Cells Infected with HCV
The presence of HCV-induced AVs and MVBs was compared in the early and late stages of HCV infection by transmission electron microscopy (TEM). Cytoplasmic areas of 10 different cells were imaged under the grid, and the numbers of autophagosomes and MVBs were counted. The number of AVs per field per cell was compared among uninfected cells, the early stage of infection (day 9), and the late stage of infection (day 21) (Figure 6A-C). The number of MVBs returned to control levels after a reduction at day 9 (Figure 6D). In contrast, large numbers of AVs with partial dissolution of the double membrane due to lysosome fusion were present in the early-infected culture (Figure 6E). The ultrastructural analysis was consistent with the liver cell imaging studies of persistently infected HCV culture.
Persistent HCV Infection Promotes Release of Extracellular Vesicles and Exosomes
We studied the effect of autophagy on the release of EVs and exosomes in a persistently infected HCV cell culture model. For this purpose, we used Huh-7.5 cells infected with the HCV-luciferase chimera virus. Infected cells were cultured in exosome-depleted cell culture media. Cell-free supernatant was collected at different time intervals, and exosomes were isolated from 1 mL of cell culture media using a total exosome isolation kit (Invitrogen) and characterized through multiple approaches. Cryo-TEM examination confirmed the purity and size of exosomes released from the HCV-infected cells (Figure 7A). NTA was used to count the absolute number of particles secreted into the culture supernatants over 28 days post-HCV infection. We found that exosome release gradually increased with time in the persistently infected HCV culture (Figure 7B) and that infected cultures release exosomes of very uniform size (60-80 nm) (Figure 7C). A Western blot analysis found enrichment of TSPANs (CD63 and CD9) in the exosomes isolated from HCV-infected culture over time (Figure 7D).
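Assessing how "uniform" an NTA size distribution is amounts to asking what fraction of detected particles falls inside the reported 60-80 nm window. A sketch of that check (our illustration; the particle diameters below are hypothetical, not NTA data from the study):

```python
def exosome_fraction(diameters_nm, lo=60, hi=80):
    """Fraction of NTA-detected particle diameters falling inside
    the reported exosome size window (60-80 nm by default)."""
    in_range = [d for d in diameters_nm if lo <= d <= hi]
    return len(in_range) / len(diameters_nm)

# Hypothetical diameters (nm) of 10 detected particles
sizes = [55, 62, 70, 75, 78, 81, 90, 65, 72, 68]
print(exosome_fraction(sizes))  # 0.7
```

A fraction near 1.0 would indicate the uniform exosome-sized population described here, whereas a broad microvesicle contribution would lower it.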
Release of Extracellular Vesicles Promotes Virus Replication during Persistent HCV Infection
Virus infection also releases vesicles that originate either from the intraluminal vesicles (ILVs) of the multivesicular endosome, called exosomes, or as microvesicles budding directly from the plasma membrane. Microvesicle biogenesis is modulated by membrane lipids and the organization of the peripheral actin cytoskeleton, both known to alter membrane fluidity, membrane invagination, and fusion [34]. Actin polymerization and myosin contraction are involved in the biogenesis of microvesicles and in their intracellular movement and cargo transport [35,36]. MVBs primarily fuse with lysosomes for degradation, including degradation of viral double-stranded RNA replicative intermediates. The late stage of persistent HCV infection prevents MVB degradation, therefore allowing exosome release. We investigated the impact of inhibiting EV and exosome release on HCV replication in the infected-cell and replicon models. A number of pharmaceutical agents were selected to inhibit EV release, and their mechanisms of action are shown in Figure 8.
Figure 8. The mechanisms of action of the drugs used to inhibit extracellular vesicle release. Extracellular vesicles originate from the endosomal pathway or pinch off from the cell membrane. Exosomes are produced from MVBs by ESCRT-dependent and ESCRT-independent pathways; manumycin A inhibits the ESCRT-dependent pathway, whereas GW4869 inhibits the ESCRT-independent pathway. Microvesicle biogenesis is modulated by lipids and cytoskeletal proteins. Lipid rafts and cholesterol play an important role in the budding of cell membranes, and enzymes that transfer lipids from one leaflet of the cell membrane to the other are potential targets for inhibiting vesicle release. Calpeptin is an inhibitor of the calcium-dependent neutral cytosolic cysteine proteases (calpains) used as a microvesicle inhibitor. Y27632 is a competitive inhibitor of both ROCK1 and ROCK2 and is able to compete with ATP in binding to the catalytic site of these kinases.
By blocking these two kinases, Y27632 inhibits microvesicle release; ROCK signalling reorganizes the cytoskeleton and mediates cellular contractility by regulating the activity of actin filaments. Imipramine is a well-known antidepressant that promotes membrane fluidity by inhibiting acid sphingomyelinase (aSMase), thereby preventing the generation of microvesicles. D-Pantethine inhibits cholesterol synthesis as well as fatty acid synthesis; the fluidity of the cell membrane is important during membrane bilayer reorganization and microvesicle formation. This drug also blocks the translocation of phosphatidylserine to the outer membrane leaflet, an essential step in microvesicle formation. Cytochalasin D is an alkaloid produced as a toxin by many fungi; it binds the ends of actin filaments to prevent actin polymerization, which is essential for the formation of membrane-derived microvesicles and their intracellular movement.
Seven different compounds with known mechanisms of action were selected. An MTT assay determined their cellular toxicities using Huh-7.5 cells (Figure S2). Persistently HCV-infected Huh-7.5 cells were treated with each drug (imipramine 10 µM, D-Pantethine 100 µM, Y27632 10 µM, calpeptin 30 µM, manumycin A 2 µM, cytochalasin D 2 µM, and GW4869 20 µM) for 72 h, and the extracellular vesicles released into the cell culture supernatants were quantified by NTA. We found that almost all of the inhibitors used in our assay decreased extracellular vesicle release (Figure 9A). NTA was used to capture the movement of particles in the liquid phase; the Brownian motion of the purified exosomes from the infected culture with and without treatment with the different inhibitors was recorded (Figure 9B). Exosome release was also decreased when the R4GFP sub-genomic replicon cell line was treated with each inhibitor (Figure 9C,D). The inhibition of extracellular release was more prominent in the sub-genomic replicon cell line as compared to the infected cells.
In the next step, we determined whether inhibition of EV release by inhibitor treatment at viable concentrations could affect replication. The impact of inhibiting exosome release on HCV replication was examined using late-stage infected culture on day 21. Initially, the effect of inhibiting EV release on host and virus survival was examined using an infectious full-length GFP reporter-based chimera HCV virus. Huh-7.5 cells were infected with the HCV-GFP chimera virus and, on day 21, infected cells were treated with individual inhibitors for 72 h. The number of GFP-positive cells was examined under fluorescence microscopy (Figure 10A) and then quantified by flow analysis (Figure 10B,C).
These data show that inhibition of EV release decreased HCV replication by more than 50%. Among all the inhibitors, inhibition of actin polymerization (cytochalasin D) and GW4869 had the strongest inhibitory effect on HCV replication in the infected culture. In the next step, we performed a similar analysis to determine the impact of inhibiting extracellular release on intracellular HCV RNA replication using the stable R4GFP replicon cell line. For this purpose, R4GFP cells were treated with each drug for 72 h. The antiviral effect was determined by examining GFP expression (Figure 10D) and then by flow analysis (Figure 10E,F). All these data from the persistently infected and R4GFP replicon models suggest that blocking EV release indeed decreases HCV replication.
The effect of the EV inhibitor treatment on the viability of late-infected HCV culture and the stable sub-genomic replicon cell line was studied after 72 h. The cell viability and antiviral efficacy results were compared to determine whether blocking the release of EVs is conducive to virus replication or the survival of infected cells. The cell survival and antiviral efficacy of each drug were compared in the infected and replicon cell culture models (Figure 11). Inhibition of EV release had a dramatic effect on HCV RNA replication, whereas cell viability was not affected at the drug concentrations used in this assay.
The cell survival and antiviral efficacy of each drug was compared in infected and replicon cell culture models ( Figure 11). It appears that inhibition of EVs release has dramatic effect on HCV RNA replication. Cell viability was not affected in the concentration of drugs used in this assay. We then examined the impact of exosome inhibitors on HCV replication at an early stage of HCV infection. Huh-7.5 cells infected with the JFH-GFP virus on day 3 were treated with a similar concentration of each inhibitor for 72 h. The HCV-GFP expression was measured by fluorescence microscopy and quantified by flow cytometry ( Figure S3). These analyses show that only cytochalasin D, imipramine, GW4869 and D-Pantethine show some antiviral effects in the early infected culture. Among those, cytochalasin D showed the strongest inhibitory effect on viral replication. This is probably because cytochalasin D affects all kinds of cellular processes involved in exosome biogenesis, starting with membrane curvature, vesicle formation, and vesicle movement, because all these processes require actin polymerization. Additionally, imipramine and D-Pantethine likely affect the whole secretory pathway (which is strongly dependent on phosphatidyl serine, sphingomyelin, ceramide, and other lipids), and the HCV replication organelles are derived from, or are part of, the secretory pathway. GW4869 that inhibits the ESCRT independent pathway appears to be important in HCV replication.
Inhibition of Extracellular Vesicles Induced an Innate Antiviral Response in HCV Culture through Interferon-Lambda (IFNL1) Production
HCV replication accumulates replicative intermediates and viral proteins, which activate intracellular pattern recognition receptors (PRRs). A previous study by Grünvogel et al. [37] showed that double-stranded HCV replicative intermediates (negative-strand RNA) are released through EVs or exosomes; inhibition of EV or exosome release leads to the accumulation of the HCV replicative intermediate and, therefore, to the activation of the innate antiviral program. We examined whether activation of the innate antiviral program through IFN production is the reason for the antiviral suppression. A previous publication from this laboratory showed that interferon lambda induces a potent antiviral response against HCV. In that study, we showed that HCV induced an ER stress and autophagy response that degrades interferon alpha and beta receptor subunit 1 (IFNAR1), whereas the expression of interferon lambda receptor 1 (IFNLR1) was not altered [25]. This explains why interferon-alpha is not effective in clearing HCV infection. We showed that IFNL1 inhibits HCV replication in IFNA-resistant cells, suggesting that the IFNL axis could play an essential role in inducing HCV clearance [38]. R4GFP cells are resistant to IFNA since they express a truncated IFNAR1. For this reason, levels of IFNL1 production were measured in the cell supernatants of both infected and R4GFP cells by ELISA. We found that IFNL1 expression was increased in the R4GFP as well as the HCV-infected culture when EV secretion was inhibited (Figure 12A,B). These results explain why inhibiting the release of EVs decreased HCV replication. Taken together, all these data suggest that inhibition of EV release suppresses HCV replication in infected cells as well as in the replicon cell line, and that EV release is critical to sustain persistent HCV replication.
Our results indicate that inhibiting EV release has minimal impact on cell viability but significantly decreased HCV replication in both models. Blocking EV release activates the innate antiviral program through the induction of interferon lambda production. Our data support the conclusion that exosome release supports virus replication during persistent infection by escaping the innate antiviral response.
Discussion
Autophagy occurs at basal levels in every cell, including hepatocytes, under non-pathological conditions. Hepatic autophagy levels increase several-fold after HCV infection to alleviate the microbial stress associated with virus replication, and also to meet the metabolic demands of infection by generating ATP, amino acids, sugars, and fatty acids. Data presented in this study are consistent with previous reports of other investigators suggesting that autophagy induction is beneficial for HCV replication [30,39]. Initially, we examined the role of autophagy induction, extracellular vesicle release, and lysosomal degradation in HCV replication. We found that the autophagy inducer Torin1 promoted HCV replication and extracellular vesicle release, as well as lysosomal DQ-BSA degradation. The importance of lysosomal degradation and extracellular vesicle release in HCV replication is supported by the results of HCQ and GW4869 treatment. HCQ, which inhibits lysosomal degradation but not extracellular vesicle release, inhibited HCV replication, suggesting that lysosomal degradation is important for sustaining HCV replication. Likewise, GW4869 inhibits extracellular vesicle release, which in turn inhibits replication. All these results indicate that autophagy-induced lysosomal degradation and extracellular vesicle release are two important cellular events required for sustaining HCV replication.
Some researchers, including our laboratory, have shown that HCV infection can inhibit autophagy at the level of autophagic degradation. Sir et al. [40] demonstrated that the adaptive cellular response to HCV infection induces the UPR and the accumulation of incomplete autophagosomes; autophagy is impaired due to inefficient fusion between autophagosomes and lysosomes. The autophagy process is also impaired in cells replicating a sub-genomic HCV replicon, where the accumulation of large aggregates with aberrant cytoplasmic vacuole formation impairs autolysosome maturation [41]. Our laboratory showed that persistent HCV replication in Huh-7.5 cells inhibited autophagy through BECN1 degradation by CMA [15]. BECN1 loss impairs autophagosome-lysosome fusion, leading to the accumulation of MVBs. CMA-associated BECN1 degradation in HCV-infected cells inhibits endocytosis and degradation of epidermal growth factor receptor (EGFR). We demonstrated that CMA activation compensates for impaired autophagy due to HCV-induced microbial stress [15]. Since CMA cannot degrade unfolded proteins, protein aggregates, or non-protein cargoes such as lipids and nucleic acids, we examined whether exosome release is a potential autophagy-compensatory mechanism for virus and cell survival under excessive HCV-induced microbial stress.
This study provides evidence suggesting that the early stage of persistent HCV infection induces autophagy and MVB degradation. TSPAN-CD63 bridges the degradation of MVBs through autophagic-endosomal fusion in the HCV model. We found that persistent HCV infection blocks the degradation of MVBs and promotes exosome release. Increased CD63-GFP expression was observed at the late stage of persistent HCV infection, suggesting that MVB degradation was impaired. All mammalian cells release EVs, 50-100 nm lipid-bilayer vesicles that can contain aggregated proteins and RNA. A growing body of evidence suggests that cells infected with enveloped or non-enveloped viruses release EVs [28]. These vesicles carry some viral proteins, viral RNAs, and viral genetic materials. The EVs isolated from HCV culture were characterized by TEM and NTA. We showed a time-dependent release of EVs in HCV-infected cell culture. TEM images revealed that the exosomes were approximately 100-140 nm in size. The EVs isolated from the persistently infected HCV culture are characteristic of exosomes, as they express CD63 and CD9. As expected, we found that EV secretion increased during persistent HCV infection. The peak of EV release coincides with autophagy impairment at the late stage of ongoing HCV infection.
We aimed to understand the importance of EV release in viral survival by blocking EV release. Here, we show that several inhibitors suppressed EV release in HCV culture. Blocking EV and exosome release suppressed HCV replication in both the infected and replicon cell culture models. Furthermore, our data show that EV release is favorable for sustaining virus replication during persistent infection. Our study investigated a potential virus-cell survival program under extreme microbial stress during chronic HCV infection. Interferons (IFNs) play a vital role in the antiviral defense mediated by innate immunity. The IFN family comprises three types: type I, type II, and type III, which differ in their mechanisms of expression and in their antiviral potency in HCV infection. While type I IFNs trigger a strong antiviral response against some viruses, they are not very effective in clearing chronic HCV infection, whereas type III IFNs have a prominent effect on HCV clearance [42-45]. We speculate that TLR3-mediated recognition of double-stranded RNA (dsRNA) formed during extracellular vesicle accumulation activates type III IFN production. Future investigations will examine whether activation of the Toll-like receptor (TLR) family, Retinoic Acid-Inducible Gene-1 (RIG-1), or Melanoma Differentiation-Associated Protein 5 (MDA5) is involved in type III IFN induction when extracellular vesicle release is inhibited.
Extracellular vesicles are also involved in the pathogenesis of chronic HCV infection. During the last few years, many researchers have reported that exosomes released by HCV-infected cells play multiple roles in human liver disease progression [46]. Those studies claim that exosomes produced by HCV-infected cells carry small RNA and protein cargoes that can transfer information for cell-to-cell communication. Some studies demonstrated that exosomes contain HCV RNA and virus particles that can initiate new infections [47-49]. Exosomes released by HCV-infected cells are involved in the modulation of dendritic cell function and inhibit the innate immune response, leading to immune escape [50,51]. Another report suggested that exosomes produced during HCV infection could hamper the adaptive immune response and T cell function, which contributes to the development of chronic HCV infection [52,53]. A few publications showed that exosomes produced in HCV-infected cells could activate hepatic stellate cells implicated in the pathogenesis of hepatic fibrosis [54-58]. Our results indicate that extracellular vesicle release is an adaptive cellular response to chronic HCV infection. We propose that inhibiting EV release can be explored as a potential therapeutic strategy for treating chronic HCV infection, as well as infection by other positive-strand RNA viruses. Furthermore, we propose that liver-derived extracellular vesicles can be used as a marker for monitoring the hepatic stress response and liver disease progression during chronic HCV infection.
|
v3-fos-license
|
2018-11-30T17:26:21.254Z
|
2018-01-04T00:00:00.000
|
53973814
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/jnm/2018/9509126.pdf",
"pdf_hash": "74418d3adacb26316b5431196244de1989df4873",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2166",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"sha1": "74418d3adacb26316b5431196244de1989df4873",
"year": 2018
}
|
pes2o/s2orc
|
Enhancement of Capacitive Performance in Titania Nanotubes Modified by an Electrochemical Reduction Method
Highly ordered titania nanotubes (TNTs) were synthesised by an electrochemical anodization method for supercapacitor applications. However, the capacitive performance of the TNTs was relatively low and comparable to the conventional capacitor. Therefore, in order to improve the capacitive performance of the TNTs, a fast and facile electrochemical reduction method was applied to modify the TNTs (R-TNTs) by introducing oxygen vacancies into the lattice. X-ray photoelectron spectroscopy (XPS) data confirmed the presence of oxygen vacancies in the R-TNTs lattice upon the reduction of Ti4+ to Ti3+. Electrochemical reduction parameters such as applied voltage and reduction time were varied to optimize the best conditions for the modification process. The electrochemical performance of the samples was analyzed in a three-electrode configuration cell. The cyclic voltammogram recorded at 200 mV s−1 showed a perfect square-shaped voltammogram indicating the excellent electrochemical performance of R-TNTs prepared at 5 V for 30 s. The total area of the R-TNTs voltammogram was 3 times larger than the unmodified TNTs. A specific capacitance of 11.12 mF cm−2 at a current density of 20 μA cm−2 was obtained from constant current charge-discharge measurements, which was approximately 57 times higher than that of unmodified TNTs. R-TNTs also displayed outstanding cycle stability with 99% capacity retention after 1000 cycles.
Introduction
Global energy crisis, the depletion of fossil fuels, and ever-increasing environmental pollution have all led to an urgent search for efficient, clean, and sustainable alternative energy supply and storage. Moreover, due to the increased power demand worldwide, there has been a need to develop high-power and high-energy devices that are robust and able to withstand thousands of charging/discharging cycles without being degraded. In recent years, electrochemical capacitors, also known as ultracapacitors and supercapacitors, have attracted significant attention, mainly due to their promising properties: higher power density than batteries, higher energy density than conventional capacitors, fast charging-discharging rates, and prolonged cycle life [1]. Supercapacitors are characterized as electric double-layer capacitors (EDLC) and pseudocapacitors. Electrical energy storage in EDLC occurs at the phase boundary between the electrode (active material) and the electrolyte solution (liquid ionic conductor) [2] with no charge-transfer involvement. Electrodes that use the displacement current arising from charge rearrangement are known as ideally polarized electrodes [3,4]. As for pseudocapacitors, charge storage is caused by fast faradaic redox reactions due to redox-active materials, such as metal oxides and conducting polymers, on the surface and in the bulk near the surface of the electrode [4,5].

Table 1: Voltage and time used in the electrochemical reduction of R-TNTs.

Label           Voltage (V), time (s)
R-TNTs 4V30s    4 V, 30 s
R-TNTs 5V30s    5 V, 30 s
R-TNTs 6V30s    6 V, 30 s
R-TNTs 7V30s    7 V, 30 s
R-TNTs 5V10s    5 V, 10 s
R-TNTs 5V20s    5 V, 20 s
R-TNTs 5V40s    5 V, 40 s

Nanostructured materials have drawn great interest in the field of supercapacitors as they offer a combination of nanoscale dimensions with highly defined geometry and high surface areas. Titania has been considered one of the best candidates for many applications over the past few decades due to its remarkable properties and strikingly high potential for practical applications [6-9]. Recently, titania nanotubes have been of much interest in energy storage owing to their capacity to offer large surface areas and greatly improved electron-transfer pathways compared to non-oriented structures, which in turn leads to higher charge propagation in active materials [6,10,11]. Among the various methods reported for the synthesis of titania nanotubes, electrochemical anodization has been the most promising, as it offers back-connected nanotubes on the Ti foil substrate which can be used directly as a binder-free supercapacitor electrode [12]. However, TNT electrodes suffer from very low specific capacitance (less than 1 mF cm−2) due to their poor electrical conductivity, similar to conventional electric double-layer capacitors [13,14].
The capacitive performance of TNTs can be enhanced by electrochemical approaches and thermal treatments. Salari et al. (2011) reported a remarkable specific capacitance for TNTs modified by thermal treatment optimization under an argon atmosphere [13]. The annealing process in a low-oxygen atmosphere induced the evolution of oxygen vacancies, which improved the specific capacitance. They disclosed that the highest specific capacitance of 2.6 mF cm−2 was obtained at 600 °C, which was higher than the previously reported capacitance of TiO2 (100-911 F cm−2) [6,13]. Besides argon, Endut et al. reported a specific capacitance of 0.118 mF cm−2 for titania nanotubes optimized by thermal treatment in ammonia (NH3) [15]. It was concluded that by reducing Ti4+ to Ti3+ during NH3 annealing, the electrochemical performance of the sample was enhanced.
However, a simple, fast, and cost-effective approach to modifying TNTs to enhance their electrochemical properties is much more desirable. Macak et al. reported that a higher-conductivity layer formed at the bottom of the titania nanotube layers through a reductive doping process [16]. This simple electrochemical method can be carried out at ambient temperature and requires simple preparation steps. In the doping process, Ti4+ is reduced to Ti3+, which acts as a donor center, resulting in a highly conducting barrier layer [16-19]. They claimed that only 1% of the Ti4+ in the titania nanotube layers could be reduced to Ti3+ and that the color change in the films from light grey to black was a side effect of the reductive doping process. In this study, the focus was on utilizing the electrochemical reduction method to fabricate R-TNTs that meet the requirements for effective supercapacitor electrodes.
Materials and Methods
2.1. Preparation of Titania Nanotubes. Pure Ti foil (0.125 mm thick, 99.7% purity, Sigma-Aldrich) was degreased by sonicating in acetone, isopropanol, and deionized (DI) water for 15 min each, followed by etching in 3 M HNO3 (65%, MERCK) for 10 min to form a fresh smooth surface. The foils were then rinsed with excess DI water and dried in air. The titania nanotubes were fabricated in a two-electrode electrochemical cell with high-density graphite as the cathode and Ti foil as the anode. An ethylene glycol (EG) (99.8% purity, initial water content < 0.03 wt%, Fisher Scientific) solution containing 0.5 wt% NH4F (FLUKA) and 5 vol% water was used as the electrolyte. Anodization was carried out at a constant voltage of 40 V for 1 h using a direct current (DC) power supply (Consort Mini, Cleaver Scientific Ltd). The distance between the two electrodes was fixed at 3 cm in all experiments. After anodization, the samples were immediately rinsed with DI water and dried in air. Finally, the samples were calcined at 500 °C in air for 2 h with a heating rate of 2 °C min−1.
2.2. Electrochemical Reduction of Titania Nanotubes.
The electrochemical reduction of TNTs was also performed in the same two-electrode electrochemical cell, with the TNTs as the cathode and the high-density graphite electrode as the anode. A supporting electrolyte containing 0.5 M Na2SO4 (MERCK) was used, and the distance between the two electrodes was fixed at 3 cm. The sample preparation parameters are shown in Table 1.
2.3. Material Characterization and Electrochemical Measurements.
The morphology and microstructure of the samples were examined by field emission scanning electron microscopy (FESEM, JSM-7600F, JEOL, Japan). X-ray diffraction patterns of the samples were collected using an X-ray diffractometer (Shimadzu, D60000, Japan) with Cu Kα radiation (λ = 1.5406 Å) to investigate the phase and composition of the prepared samples. The chemical states of the prepared samples were investigated using X-ray photoelectron spectroscopy (XPS, PHI Quantera II).
The electrochemical performance of the prepared samples was evaluated by cyclic voltammetry (CV), galvanostatic charge-discharge tests, and electrochemical impedance spectroscopy (EIS) using an Autolab PGSTAT204/FRA32M module. All electrochemical analyses were carried out using a three-electrode cell system. A platinum wire and an Ag/AgCl (3 M KCl) electrode were used as the counter and reference electrodes, respectively, and the prepared samples were used as the working electrode. The prepared nanotube films were measured at applied potentials ranging from −0.4 V to 0.8 V versus Ag/AgCl in 1.0 M KCl aqueous electrolyte. The specific capacitance (SC) was calculated from the charge-discharge curves using the following equation:

SC = (I × Δt)/(ΔV × A)

where I is the discharge current in amperes, Δt is the discharge time in seconds, ΔV is the difference in discharge voltage in volts, and A is the area of the active electrode in square centimetres.
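As a minimal numerical sketch of the specific-capacitance relation above (not part of the original methods; the discharge time, voltage window, and electrode area are hypothetical values, chosen only so that the result reproduces the reported 11.12 mF cm−2):

```python
def specific_capacitance(i_amps, dt_s, dv_volts, area_cm2):
    """Areal specific capacitance SC = I * dt / (dV * A), in F cm^-2."""
    return (i_amps * dt_s) / (dv_volts * area_cm2)

# Hypothetical discharge: 20 uA over 667.2 s across a 1.2 V window, 1 cm^2 electrode
sc = specific_capacitance(20e-6, 667.2, 1.2, 1.0)
print(f"{sc * 1e3:.2f} mF cm^-2")  # 11.12 mF cm^-2
```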
The stability of the R-TNTs was investigated for up to 1000 cycles at a current density of 200 μA cm−2.

Results and Discussion

The average geometrical dimensions of the R-TNTs measured from FESEM images, including the inner diameter, wall thickness, and tube length, were approximately 80 nm, 9 nm, and 4 μm, respectively. The geometrical size of the nanotubes tended to decrease after electrochemical reduction, but the nanotube structure remained intact. The nanotubes were highly ordered, vertically aligned, and covered a large area of the Ti substrate. The color change of the samples from dark grey (TNTs) to black (R-TNTs) could be observed with the naked eye. This phenomenon was due to the presence of Ti3+ and an increase in defect density during the reduction process [20-22]. Figure 2 displays the XRD patterns of the as-anodized TNTs and R-TNTs. The patterns obtained for the TNTs and R-TNTs were barely distinguishable from each other, showing a single TiO2 phase indexed to anatase (JCPDS: 21-1272), except for the peaks at 35.43°, 38.74°, 40.49°, and 53.32°, which originated from the Ti metal substrate. The crystallite size was calculated using the Debye-Scherrer equation:

D = 0.9λ/(β cos θ)

where D is the crystallite size, λ is the wavelength of the Cu Kα radiation (1.542 Å), θ is the Bragg diffraction angle, and β is the full width at half maximum (FWHM) of the diffraction peak. The crystallite sizes for the TNTs and R-TNTs were 35 nm and 38 nm, respectively, based on the (101) plane. From these data, no phase changes were observed by XRD, and the difference in crystallite size after electrochemical reduction was insignificant. XPS was employed to further verify the reduction of the samples and to determine the chemical composition and oxidation state of Ti in the TNTs and R-TNTs. Figure 3(a) displays the Ti 2p XPS spectra for both samples, which consisted of two broad peaks centered at 459.1 eV and 465.0 eV, corresponding to the Ti 2p3/2 and Ti 2p1/2 peaks of Ti4+ for the TNT sample [13,18,20]. In comparison, the peaks for the R-TNTs showed a slight negative shift in binding energy, to 459.0 eV and 464.8 eV, suggesting the presence of Ti3+ ions in the lattice [16,23]. The proportion of Ti3+ ions in the R-TNTs was approximately 13%, whereas no traces of Ti3+ were detected in the TNTs. A comparison of the O 1s XPS spectra was carried out for both samples, and the data are displayed in Figure 3(b); a common feature was seen in both samples, with a higher intensity for the R-TNTs, which could be attributed to the presence of Ti-OH species in the R-TNTs sample. The XPS data suggested that oxygen vacancies were introduced into the R-TNTs lattice during the electrochemical reduction process. The impurity states induced by Ti3+ and oxygen vacancies in the lattice structure of the R-TNTs could enhance the capacitive performance of the samples.
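For illustration, the Debye-Scherrer estimate can be reproduced numerically. This is a sketch, not the authors' code; the FWHM value and the shape factor K = 0.9 are assumptions (the paper does not state them), chosen so the result falls near the 35 nm reported for the TNTs:

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_angstrom=1.542, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)),
    with the FWHM beta converted to radians; returns D in nm."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    d_angstrom = k * wavelength_angstrom / (beta * math.cos(theta))
    return d_angstrom / 10.0  # 1 nm = 10 Angstrom

# Hypothetical FWHM of 0.23 deg for the anatase (101) peak near 2-theta = 25.3 deg
d = scherrer_size_nm(0.23, 25.3)
print(round(d, 1))
```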
Electrochemical Measurements.
The effect of electrochemical reduction on the capacitive performance of the R-TNTs was investigated via CV, galvanostatic charge-discharge, and EIS measurements. It is well established that an ideal capacitor exhibits a perfectly rectangular cyclic voltammogram, independently of the scan rate [14,24]. The shape may transform slightly into a trapezoid as a result of series resistance and diffusion. Figure 4 shows a comparison of the cyclic voltammograms of the TNTs and R-TNTs in a three-electrode configuration collected at a scan rate of 200 mV s−1. In contrast to the TNTs sample, which displayed poor electroactivity, the R-TNTs sample displayed an obvious capacitive characteristic curve with no peaks associated with redox reactions [13,17]. The dramatic increase in the cyclic voltammogram area for the R-TNTs, approximately 3 times larger than that of the TNTs, revealed a remarkable improvement in capacitive performance. Moreover, the nearly rectangular cyclic voltammogram of the R-TNTs was as expected for an ideal capacitor [25,26]. The data indicated that the R-TNTs sample achieved a specific capacitance of 2.81 mF cm−2 at a scan rate of 200 mV s−1, which was 18 times higher than that of the unmodified TNTs (0.16 mF cm−2). This enhancement was attributed to the improvement in electrical conductivity of the samples, which led to an enhancement of the carrier density, while the oxygen vacancies in the R-TNTs boosted the electrochemical activity of the sample [18]. The formation of oxygen vacancies in the TNT lattice proceeds through possible chemical processes at the anode and the cathode, with a corresponding overall reaction. Further investigations to find the optimum conditions for the electrochemical reduction process were carried out by varying the voltage and time as listed in Table 1. Based on the FESEM images in Figures 1(a) and 1(b), the R-TNTs samples were stable and did not undergo any destruction or damage after the
electrochemical reduction. The cyclic voltammograms of R-TNTs 4V30s, R-TNTs 5V30s, R-TNTs 6V30s, and R-TNTs 7V30s, shown in Figure 5(a), displayed a rectangular shape, suggesting typical capacitive behaviour. The specific capacitance of the samples increased as the applied voltage of the electrochemical reduction increased from 4 V to 5 V, followed by a slight drop at the higher applied voltages of 6 V and 7 V, as depicted in Figure 5(a′).
A short-circuit error was detected in the DC device at higher applied voltages during the electrochemical reduction, which could have damaged the films and thus contributed to the lower capacitive performance. The cyclic voltammograms of the R-TNTs samples prepared at different electrochemical reduction times (10 s to 40 s at an applied potential of 5 V) in Figure 5(b) displayed an increase in integrated area for the samples from 10 s to 30 s. However, the cyclic voltammogram area was reduced for the 40 s sample. Figure 5(b′) shows the overall variation of the specific capacitance with increasing electrochemical reduction time at 5 V. Longer electrochemical reduction times caused the specific capacitance to decrease, which was attributed to an increase in the surface defect density and the corresponding recombination rate [14]. The optimum voltage and time for the electrochemical reduction of TNTs for the highest capacitive performance were 5 V and 30 s, respectively. Figure 6 shows the current density versus potential profiles at different scan rates for the R-TNTs 5V30s sample (denoted as R-TNTs in further discussion). The cyclic voltammograms maintained their rectangular profiles, indicating a capacitive charge-storage mechanism. As the scan rate increased, the integrated area became larger, and it gradually decreased at lower scan rates. It is postulated that at a higher scan rate the electron flow increased, which led to rapid charging of the sample; thus, a smaller integrated area was observed at a low scan rate. This suggests that the R-TNTs can withstand extreme cycling conditions.
The charge-discharge curves for the TNTs and R-TNTs, conducted at a current density of 30 μA cm−2, are displayed in Figure 7 for comparison. The charge-discharge curve of the R-TNTs was symmetric and covered a significantly longer time than that of the TNTs sample, revealing good capacitive behaviour. The R-TNTs sample delivered a remarkably improved specific capacitance of 6.31 mF cm−2 at the same current density of 30 μA cm−2, which was approximately 57 times higher than that of the TNTs. The specific capacitance values of both samples were profoundly influenced by the current density. As observed in Figure 8, the capacitance decreased with increasing current density. At the lowest current density of 20 μA cm−2, the R-TNTs sample delivered an average specific capacitance of 11.12 mF cm−2, while at the highest current density of 500 μA cm−2, a specific capacitance of 2.78 mF cm−2 was obtained. This demonstrates that the R-TNTs sample exhibits excellent rate capability at high current density.
Specific capacitance and coulombic efficiency values with respect to charge-discharge cycle number, at a constant current density of 10 μA cm−2 for the TNTs and 200 μA cm−2 for the R-TNTs, are plotted in Figures 9(a) and 9(b). This analysis was performed to evaluate the stability of the samples under long cycling conditions of up to 1000 cycles. As the cycle number increased, the specific capacitance of the TNTs sample decreased, while the R-TNTs sample exhibited an insignificant difference in the observed capacitance values. The initial specific capacitance of the TNTs, 0.19 mF cm−2, declined to approximately 0.09 mF cm−2 (about 47% of its initial value). In comparison, the R-TNTs sample showed excellent cycle stability in this analysis, with a retention of 99%. It is worth highlighting that the electrochemically reduced sample exhibited better electrochemical stability and enhanced capacitive behaviour compared to the unmodified TNTs. The coulombic efficiency was calculated using the following equation:

CE = (td/tc) × 100%

where tc and td represent the charging and discharging times, respectively. The coulombic efficiency of the R-TNTs sample remained at a high plateau, indicating that the electrochemical reaction at the film-electrolyte interface was fast and reversible, which supports the cycle stability data discussed previously. Electrochemical impedance spectroscopy (EIS) was performed to study the resistance behaviour of the samples under alternating current. From the Nyquist plots in Figures 10(a) and 10(b), the TNTs and R-TNTs samples displayed two important features: cell-electrolyte resistance (Rs) and charge-transfer resistance (Rct).
Rs is given by the first intercept of the semicircle at the lower part of the plot, corresponding to the high-frequency region. The charge-transfer resistance (Rct) at the interface between the electrode and electrolyte was acquired directly from the diameter of the semicircle. Both samples exhibited a characteristic semicircular arc, from which the Rs and Rct values were determined using an electrochemical circle fit. The Rs values for the TNTs and R-TNTs were 2.50 Ω and 1.44 Ω, respectively, as shown in Table 2. The R-TNTs sample recorded the lower Rs value, as the oxygen vacancies in the R-TNTs facilitate efficient access of electrolyte ions to the R-TNTs surface, thus enhancing the ion-diffusion pathways. In addition, the R-TNTs also recorded a much lower Rct value of 1.74 Ω compared to the TNTs value of 790 Ω, consistent with their higher specific capacitance. These data are supported by the CV and galvanostatic charge-discharge analyses.
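As a rough illustration of how Rs and Rct are read from a Nyquist plot, the sketch below generates a synthetic semicircle for a simple parallel Rct-Cdl element in series with Rs, using the fitted resistances from Table 2; the double-layer capacitance of 1e-4 F and the frequency range are assumed values, not from the paper:

```python
import math

def rs_rct_from_nyquist(z_real):
    """Estimate Rs (high-frequency real-axis intercept) and Rct (semicircle
    diameter) from the real part of Nyquist-plot impedance data."""
    rs = min(z_real)
    rct = max(z_real) - rs
    return rs, rct

# Synthetic R-TNTs-like spectrum: Rs = 1.44 ohm and Rct = 1.74 ohm (Table 2),
# with an assumed double-layer capacitance of 1e-4 F (not stated in the paper).
tau = 1.74 * 1e-4                                   # time constant Rct * Cdl
freqs = [10 ** (5 - 6 * k / 399) for k in range(400)]  # 1e5 Hz down to 0.1 Hz
z_real = [1.44 + 1.74 / (1 + (2 * math.pi * f * tau) ** 2) for f in freqs]

rs, rct = rs_rct_from_nyquist(z_real)
print(round(rs, 2), round(rct, 2))  # 1.44 1.74
```

A real circle fit (as in the paper) would also use the imaginary part, but the two intercepts already recover both resistances for an ideal semicircle.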
Conclusions
A fast and simple electrochemical reduction method in an aqueous electrolyte has been presented in this work to enhance the capacitive performance of TNTs as a supercapacitor electrode. The reduction of the samples was verified by XPS analysis, which confirmed the presence of Ti3+ in the R-TNTs lattice. It is postulated that the presence of Ti3+ and oxygen vacancies due to the reduction process enhanced the conductivity and electrical performance of the sample, hence improving its electrochemical properties. The higher conductivity lowers the resistance of the samples, which leads to higher capacitive performance. The R-TNTs exhibited an excellent capacitive performance of 11.12 mF cm−2 at 20 μA cm−2 and 2.78 mF cm−2 at 500 μA cm−2, approximately 57 times higher than the TNTs. The cyclic voltammogram of the R-TNTs displayed a nearly perfect rectangular shape with no evidence of faradaic reactions. The outstanding chemical stability of the R-TNTs sample was confirmed by the galvanostatic charge-discharge test, as the sample retained 99% of its capacitance after 1000 cycles. Thus, the promising properties of the R-TNTs presented here make them potentially applicable to other technical applications, including energy storage as a supercapacitor.
Figure 5: Cyclic voltammograms of R-TNTs prepared at different (a) voltage and (b) time, conducted at a scan rate of 200 mV s−1; specific capacitance of R-TNTs with respect to (a') voltage and (b') time.
Figure 7: Galvanostatic charge-discharge curves of the TNTs and R-TNTs at a current density of 30 A cm−2.
Figure 8: Specific capacitance of the TNTs and R-TNTs as a function of current density.
Figure 6 shows the current density versus potential profiles at different scan rates for the R-TNTs 5V30s sample (denoted as R-TNTs in the following discussion). The cyclic voltammograms maintained their rectangular profiles, indicating a capacitive charge-storage mechanism. As the scan rate increased, the integrated area became larger, and it gradually decreased at lower scan rates. It is postulated that at a higher scan rate the electron flow increased, leading to rapid charging of the sample; thus, a smaller integrated area was observed at a low scan rate. This indicates that the R-TNTs can withstand extreme cycling conditions. The charge-discharge curves for the TNTs and R-TNTs conducted at a current density of 30 A cm−2 are displayed in Figure 7 for comparison. The charge-discharge curve of the R-TNTs was symmetric and covered a significantly longer time than that of the TNTs sample, revealing good capacitive behaviour. The R-TNTs sample delivered a markedly improved specific capacitance of 6.31 mF cm−2 at the same current density of 30 A cm−2, approximately 57 times higher than the TNTs. The specific capacitance of both samples was strongly influenced by the current density. As observed in Figure 8, the capacitance decreased with increasing current density. At the lowest current density of 20 A cm−2, the R-TNTs sample delivered an average specific capacitance of 11.12 mF cm−2, while at the highest current density of 500 A cm−2, a specific capacitance of 2.78 mF cm−2 was obtained. This demonstrates that the R-TNTs sample exhibits excellent rate capability at high current density. Specific capacitance and coulombic efficiency values with respect to charge-discharge cycle number at a constant current density of 10 A cm−2 for the TNTs and 200 A cm−2 for the R-TNTs are plotted in Figures 9(a) and 9(b). This analysis was performed to evaluate the stability of the samples under long cycling conditions of up to 1000 cycles.
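The specific capacitance values quoted above follow from the standard galvanostatic relation C = j·Δt/ΔV. A minimal sketch of the arithmetic, using a hypothetical discharge time and potential window chosen only for illustration (not taken from the paper):

```python
def areal_capacitance(j_discharge, dt, dv):
    """Areal capacitance (F cm^-2) from a galvanostatic discharge:
    C = j * dt / dv, where j is the discharge current density (A cm^-2),
    dt the discharge time (s) and dv the potential window (V)."""
    return j_discharge * dt / dv

# Hypothetical discharge: 30 uA cm^-2 for 168 s over a 0.8 V window
C = areal_capacitance(30e-6, 168.0, 0.8)
print(f"{C * 1e3:.2f} mF cm^-2")  # -> 6.30 mF cm^-2
```

The same relation, rearranged, gives the discharge time expected for a target capacitance at a given current density.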
Figure 9: Variation of specific capacitance of (a) TNTs and (b) R-TNTs with respect to charge-discharge cycle number, investigated in 1.0 M KCl at a current density of 10 A cm−2 for TNTs and 200 A cm−2 for R-TNTs.
Azole-Resistance Development; How the Aspergillus fumigatus Lifecycle Defines the Potential for Adaptation
In order to successfully infect or colonize human hosts or survive changing environments, Aspergillus fumigatus needs to adapt through genetic changes or phenotypic plasticity. The genomic changes are based on the capacity of the fungus to produce genetic variation, followed by selection of the genotypes that are most fit to the new environment. Much scientific work has focused on the metabolic plasticity, biofilm formation or the particular genetic changes themselves leading to adaptation, such as antifungal resistance in the host. Recent scientific work has shown advances made in understanding the natural relevance of parasex and how both the asexual and sexual reproduction can lead to tandem repeat elongation in the target gene of the azoles: the cyp51A gene. In this review, we will explain how the fungus can generate genetic variation that can lead to adaptation. We will discuss recent advances that have been made in the understanding of the lifecycle of A. fumigatus to explain the differences observed in speed and type of mutations that are generated under different environments and how this can facilitate adaptation, such as azole-resistance selection.
Introduction
Aspergillus fumigatus is a filamentous fungus that can cause human diseases ranging from allergic bronchopulmonary aspergillosis to chronic pulmonary aspergillosis or even life-threatening acute invasive aspergillosis (IA). The interactions between A. fumigatus and the host environment are dynamic and complex. A. fumigatus is ubiquitous in our environment, and its asexual spores can be dispersed over wide geographic distances by air currents. A. fumigatus is a saprotrophic fungus that is found in soil and grows on decaying organic matter, and it has an important role in carbon and nitrogen recycling. Humans are estimated to inhale at least 100 conidia each day, and due to their small size, a fraction of inhaled conidia will reach the alveoli of the lungs [1]. In immune-competent individuals, airway epithelial cells and resident alveolar macrophages remove inhaled conidia; however, in individuals who are unable to clear these conidia, germination can occur, subsequently leading to Aspergillus disease [2]. Among the infections caused by Aspergillus species, A. fumigatus is the leading etiological agent in most geographic regions. IA may occur in patients with specific immune defects, such as neutropenia, and infection is associated with substantial morbidity and mortality. Recently, cases of IA have been increasingly reported in patients with severe viral pneumonia, including influenza and coronavirus disease 2019 [3][4][5][6]. Despite the considerable advances made over the past decades in the management of fungal infection, such as diagnostics and antifungal therapy [7], the management of IA has become increasingly complex for various reasons, including acquired antifungal resistance [8]. Resistance to voriconazole was shown to be associated with excess mortality in voriconazole-treated IA patients, compared to patients with voriconazole-susceptible IA [9]. Furthermore, A. fumigatus is capable of persisting in the lungs of patients with structural lung diseases, such as COPD and cystic fibrosis (CF).
In order to successfully infect or colonize human hosts, A. fumigatus may initially establish lung colonization due to its physiological versatility and can subsequently adapt, through genetic changes, to the human lung environment and stressors, such as antifungal agents. Much scientific work has focused on the metabolic plasticity, biofilm formation [2] or the particular genetic changes themselves leading to adaptation, such as antifungal resistance in the host [10,11]. In this study, we review what is currently known of the lifecycle of A. fumigatus in relation to various environments. We focus on the various aspects of the lifecycle and their relevance for contributing to adaptive genetic variation by mutation and recombination under the different environmental conditions. Furthermore, we discuss the implications of genetic variation for azole resistance, a selection factor found both in human medicine and the environment.
Asexual Lifecycle
An A. fumigatus colony is initiated from a single spore or hypha on a suitable substrate. Within four to six hours, conidia can develop short hyphae, known as germ tubes, that, by mitotic division and branching, form an extended mycelial network. Within a few days, asexual spores, also called conidia, are formed on specialized hyphal structures, called conidiophores. The spore head of a conidiophore can produce up to 10^4 asexual spores, and the dispersal of asexual A. fumigatus conidia in the environment is highly efficient, as these conidia are very hydrophobic compared to those of other Aspergillus species and are very efficiently spread by air [2,12]. Conidia of A. fumigatus do not germinate or swell in distilled water but germinate under nutrient- and oxygen-rich conditions [7]. Germ tubes or tubular hyphae grow in a polar fashion by apical extension and branching to form a network of interconnected cells, known as a mycelium. A colony consists of septate multinucleated cells that grow in a radial shape [13]. The structure of the cell wall of the mycelial or vegetative form of filamentous fungi differs from the cell wall structures of conidia and conidiophores, the latter of which function for survival and dissemination (Figure 1). The fact that A. fumigatus has a high mycelial growth rate and rapidly produces abundant airborne conidia promotes the colonization of multiple environmental niches [2].
Parasexual Lifecycle
Parasexual recombination in fungi, i.e., recombination outside the sexual cycle, has been suggested as an alternative to sex in nature for generating diversity [14]. The parasexual cycle was initially described to have the following elements: (1) heterokaryon formation following anastomosis of hyphae from vegetatively compatible but genetically different colonies; (2) heterozygous diploid formation by nuclear fusion of genetically unlike nuclei, multiplication of the diploid nuclei in the heterokaryon and segregation into diploid spores that, after dispersal, may establish diploid colonies; (3) recombination during mitotic divisions of diploid nuclei by crossing over; (4) nondisjunction, leading to haploidization. First observed in A. nidulans, parasex has since been discovered in many other fungi, including A. fumigatus [15], and was considered to be especially relevant for fungi that are predominantly or completely asexual. However, when analyzing natural isolates of various fungi, it was found that most are not capable of forming stable heterokaryons, a prerequisite for parasex. Heterokaryon formation between different natural isolates is commonly restricted by heterokaryon incompatibility, a common fungal allorecognition mechanism limiting fusion of hyphae to those with the same genetic heterokaryon-compatibility allele combination [16,17]. This severely limits the potential for parasex among natural isolates to generate recombinants, and interest in researching the role of parasex in nature has declined. The parasexual cycle has mainly been studied and used in laboratory experiments with isogenic strains, e.g., for constructing a mitotic genetic map in the asexual A. niger [18] and for strain construction and complementation of recessive deficiency markers.
Figure 1. A. fumigatus can enter an asexual (orange arrow), parasexual (purple arrow) or sexual (blue arrow) reproductive cycle. During the sexual cycle, the mycelium forms a fruiting body, the cleistothecium, which holds the ascospores that, once released into the environment, can colonize ecological niches. Environmental plant waste material, or compost, is an ideal environment for A. fumigatus to sporulate, grow and reproduce, as ample resources are present, and the moisture and high temperatures inside the compost heap all favor A. fumigatus. New mycelium growth colonizes this ecological niche and can enter the asexual cycle. Aerial dispersal of A. fumigatus asexual spores from either an environmental niche or from patient-to-patient transmission via coughing (probably rare but should not be excluded) facilitates the spread to patients at risk for opportunistic Aspergillus disease. In contrast to acute IA, A. fumigatus can also colonize the lung and form biofilms in patients with structural lung disease, such as CF. The hyphae of an A. fumigatus fungal mycelium contain multiple nuclei. Hyphal fusions between compatible monokaryons with genetically different nuclei can yield hyphae with mixed populations of nuclei, called heterokaryons. Subsequently, dissimilar nuclei can fuse to form heterozygous diploids, which can undergo mitotic recombination by crossing over and haploidization, also known as the parasexual cycle.
Sexual Lifecycle
The sexual cycle of A. fumigatus was discovered in vitro in 2009, with the description of the teleomorph Neosartorya fumigata [20]. The sexual cycle requires two haploid nuclei of opposing mating types, MAT1-1 and MAT1-2, which regulate sexual compatibility. After fertilization, a network of dikaryotic cells is formed, which, upon nuclear fusion, produces diploid zygotes, each of which undergoes two meiotic cell divisions, followed by a post-meiotic mitotic division, yielding eight haploid ascospores contained in an ascus (Figure 1) [21]. A fruiting body, termed a cleistothecium, may contain several hundred to thousands of such asci and up to 10^5 ascospores. A culture may have many cleistothecia, each resulting from a single fertilization event. Growth conditions for sexual reproduction are very specific, namely 30 °C on oatmeal agar for 3-6 months in the dark [20]. The natural niche of ascospores has not yet been detected, but they have been hypothesized to exist in composting plant waste material, an environment specifically beneficial for A. fumigatus, which thrives at temperatures of up to 60 °C [22]. The contribution of the sexual cycle to genetic diversity in the life cycle of A. fumigatus is difficult to estimate with the currently available knowledge base. The sexual cycle is not as uncommon as may be thought, and it may be the root of the almost infinite variation in A. fumigatus demonstrated in genotyping studies, but it may be constrained by the environmental conditions under which it is possible.
In the initial study of O'Gorman et al. [20], strains were used that had been isolated from air samples in Dublin, Ireland. A few years later, Sugui et al. [23] described the fertility of strains originating from five geographic regions (the United States, Hong Kong, India, England and Ireland), as well as the discovery of highly fertile strains that can complete the sexual cycle in only four weeks, the so-called supermater strains [20,23], showing that, as expected, the fertility of A. fumigatus is not restricted to Irish strains. More recently, a large screening of fertility was undertaken by Swilaiman et al. [24], in which a global collection of 131 isolates was screened for the ability to undergo a sexual cycle. Ninety-seven percent of isolates were found to produce cleistothecia with at least one of the available mating partners. Interestingly, large variation was seen in the number of cleistothecia produced per cross, suggesting differences in the opportunity for genetic exchange between strains in nature. Many studies have shown that random sets of A. fumigatus strains repeatedly show a 1:1 ratio of the two mating types [22]. This is consistent with sex occurring in nature and maintaining a balance of the two mating types, together with evidence of sexual recombination [24]. However, sexual reproduction seems highly unlikely to occur within the human body, as opposing mating types are required to initiate a sexual cycle, as well as very specific environmental and nutritional conditions.
Mutation and Recombination Contribute to Genetic Variation
Genetic variation is created by mutation and recombination and is the source of adaptive evolution. Adaptive variants that arise spontaneously may expand in the population until they are either fixed or replaced by an even fitter variant. The selection pressures, the frequencies of mutation and recombination and the population size together determine the population structure. Although various whole-genome sequencing projects have been undertaken on worldwide-sampled A. fumigatus isolates as well as clinical samples, the sequencing data have so far not been sufficient to fully understand the population structure. Critical factors required to characterize population structure include the extent and nature of standing genetic variation, the contributions of mutation and recombination, and which parts of the lifecycle provide this variation and in which environments. Many of these determinants are unclear or only partially characterized for A. fumigatus. In this section, we explain, for each part of the lifecycle, the potential for providing genetic variation that can contribute to adaptation (e.g., azole resistance) and thereby shape the population structure of A. fumigatus, next to other known factors that impact population structure, such as environmental pressures.
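The expansion-until-fixation dynamic described above can be illustrated with the standard deterministic recurrence for a beneficial variant under selection in a large haploid population (drift ignored); this is a textbook sketch, not a model fitted to A. fumigatus data.

```python
def selection_trajectory(p0, s, generations):
    """Deterministic allele-frequency change under selection in a large
    haploid population: p' = p(1+s) / (1 + p*s), where s is the
    selective advantage of the variant."""
    p, traj = p0, [p0]
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
        traj.append(p)
    return traj

# A variant with a 20% fitness advantage starting at 1% frequency
traj = selection_trajectory(0.01, 0.20, 60)
print(round(traj[-1], 3))  # -> 0.998: near-fixation within ~60 generations
```

The recurrence multiplies the odds p/(1-p) by (1+s) each generation, so even a rare beneficial mutation (e.g., an azole-resistance allele under drug pressure) spreads exponentially at first and approaches fixation unless lost early by drift.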
Asexual Genetic Variation
Asexual reproduction is a common reproductive mode for many fungi, including A. fumigatus. With 10^4 asexual spores per spore head, a colony initiated by the germination of a single spore can easily produce up to 10^9 asexual spores after one week of growth [12]. Although the number of nuclei needed to populate the mycelium is limited, upon sporulation the formation of hundreds of millions of spores requires many mitotic divisions and rounds of DNA replication, during which replication errors may occur. Despite a very high fidelity, with, for example, an estimated SNP mutation rate of 1.67 × 10^−10 per bp per generation for yeast [25], given a genome size of ~30 Mb, millions of the ~10^9 A. fumigatus spores of a three-day-old colony are likely to contain a de novo mutation in their genome. By chance, some of these mutations may confer resistance to azoles. Due to the high number of new cells generated during sporulation, de novo azole-resistance mutations are more likely to occur than in the expanding mycelium. A large spore production is therefore likely to generate numerous unique genotypes carrying mutations that will be tested in the current environment by natural selection. In addition, beneficial mutations are more likely to be selected through asexual sporulation because the single-celled nature of the spores removes the burden of the (partial) recessivity of mutations that shields the full expression of resistance in a multicellular mycelium [16]. When a beneficial mutation occurs during vegetative growth in the multinucleate mycelium, the resulting resistant nucleus is initially surrounded by wild-type nuclei in a heterokaryotic cell. Mutations may not be fully expressed in the phenotype when wild-type nuclei are also present in the mycelium, and such recessive mutations may segregate, form homokaryotic sectors and sporulate.
In addition, upon formation of uninucleate asexual spores, the mutant nucleus can escape from the heterokaryotic mycelium so that, after dispersal and germination, a potentially beneficial phenotype is fully expressed [26]. The asexual sporulation process thus releases mutations from the mycelium, allowing efficient selection and expression of beneficial traits. Blocking sporulation could thus reduce evolvability. This aligns with the hypothesis that asexual sporulation is essential for both the mutation supply and the phenotypic expression of azole-resistance mutations in A. fumigatus [12]. This has also been shown by Zhang et al. in a study where evolutionary lines exposed to antifungal azole compounds showed far less evolvability during mycelial growth and elongation than cultures passaged by transferring conidia [12]. In addition to mutation, the asexual cycle may also provide an opportunity for recombination by unequal sister-chromatid exchange [27], which may explain the expansion of tandem repeats and azole-resistance development in A. fumigatus [28].
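The back-of-the-envelope mutation-supply estimate above can be made explicit. Note that it uses the yeast-derived SNP rate as a proxy and counts only a single round of replication per spore, so it is a conservative sketch.

```python
# Mutation supply during asexual sporulation, using the figures quoted
# in the text (yeast SNP rate as a proxy for A. fumigatus).
mu = 1.67e-10          # SNP mutations per bp per generation (yeast estimate)
genome_bp = 30e6       # ~30 Mb genome
spores = 1e9           # asexual spores from a mature colony

mut_per_spore = mu * genome_bp                 # expected de novo SNPs per spore
spores_with_mutation = mut_per_spore * spores  # expected mutation-carrying spores
print(f"{mut_per_spore:.4f} expected SNPs per spore")       # -> 0.0050
print(f"~{spores_with_mutation:.2e} spores with a de novo mutation")  # -> ~5.01e+06
```

About five million of the 10^9 spores carry at least one new mutation, consistent with the "millions" stated in the text; counting the full chain of mitotic divisions behind each spore would raise this figure further.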
Parasexual Genetic Variation
Parasexual reproduction is defined as a ploidy change without meiosis and is often accompanied by mitotic recombination. Parasexuality was first described by Guido Pontecorvo in the 1950s while investigating A. nidulans [29,30]. During parasex, fusion of haploid nuclei yields relatively stable diploid cells, which can produce haploid recombinants through mitotic crossing-over and loss of whole chromosomes. Mitotic recombination, in contrast to meiotic recombination, therefore results in the nonsexual exchange of genetic material. In A. nidulans, the potential benefits of a parasexual cycle have been shown by Schoustra et al., who revealed through long-term passaging of A. nidulans cultures that diploid strains attained higher fitness than the corresponding haploid strains [31]. Diploids accumulated recessive deleterious mutations that only became beneficial in recombinant haploid populations. Mutations that are individually neutral or deleterious but beneficial in combination are an example of sign epistasis. The longer the diploid can evolve, the more mutations it can accumulate and the more possible genotypes it can provide that could potentially enhance fitness. Parasex therefore promotes adaptation and evolution even in the absence of a sexual cycle with its fastidious requirements. A main difference between meiotic recombination and mitotic recombination during parasex is that chromosome dynamics are highly coordinated during meiosis but not during parasex. Coordinated movement of chromosomes has not been reported during parasex, and current models suggest that ploidy reduction occurs by chromosome nondisjunction, leading to uneven segregation of chromosomes during cell division [21]; for A. fumigatus, this has yet to be investigated.
Despite the unknown and possibly limited relevance of parasex in nature, it was recently shown that heterokaryons can evolve during long-lasting fungal colonization of the human lung [32]. Here, it was hypothesized that an isolate, initiated from a single spore and persisting for many years in the patient, may accumulate somatic mutations during mitotic divisions of the nuclei in the originally homokaryotic culture. Subsequently, heterozygous diploid nuclei may be formed, allowing parasexual recombination. In addition, fusion of isogenic nuclei can yield a homozygous diploid nucleus, which can subsequently evolve into a heterozygous diploid by de novo mutations. A heterozygous diploid can continue to accumulate or buffer mutations and finally, after haploidization, segregate into new recombinant genotypes. All these processes may occur coincidentally and contribute to the genetic variability of the culture; the duration and likelihood of this sequence of events are, however, unknown, and future studies will hopefully provide new insights.
Since asexual spores of A. fumigatus are uninucleate, newly formed colonies start as homokaryons and may produce a heterokaryon during mycelial growth through mutations in some of the nuclei. Heterokaryosis is thus a transient characteristic of the mycelium that is lost upon asexual sporulation and dispersal of spores by air. Therefore, heterokaryons may form and persist particularly in long-lived mycelial cultures in the human lung (Figure 1). Such is the case in chronic A. fumigatus infections, or in colonized patients, where the fungus has been shown to persist, sometimes for many years, without active sporulation [16,32]. Evidence for parasexual recombination was found in isolates obtained from CF patients, in whom A. fumigatus is confined to hyphal networks in a biofilm in the epithelial lining of the lung [32].
Sexual Genetic Variation
For A. fumigatus, the sexual cycle takes place through the formation of sexual fruiting bodies, or cleistothecia, which, upon completion of the full sexual cycle, can contain up to 10^4-10^5 genetically unique ascospores. Haploid progeny analysis revealed extensive genetic recombination in A. fumigatus [20]; the genetic variation present in the parental strains is thus enhanced through the reshuffling of these genotypes by recombination. Completion of meiosis is accompanied by formation of recombinant haploid offspring, increasing the genetic variation upon which natural selection can act. The reshuffling of the genome can, at the same time, also be a disadvantage, by breaking up beneficial combinations of gene variants (recombination load). Limitation of the sexual cycle therefore appears to be a common strategy for fungi, enabling the generation of clonal populations well adapted to host and environmental niches, yet retaining the ability to engage in sexual or parasexual reproduction to respond to changing environments when necessary [33]. A further limitation may be the heterothallic lifestyle, which requires two strains of opposite mating type that may not come together frequently. This would explain why the sexual cycle of A. fumigatus has still not been detected in the environment and could indicate that it is indeed infrequent.
Antifungal Resistance Selection and the Role of the A. fumigatus Lifecycle
The medical triazoles voriconazole, itraconazole, isavuconazole and posaconazole are the most important drugs in the treatment of Aspergillus diseases. Triazoles, more commonly called azoles, are a group of antifungal drugs that inhibit the synthesis of ergosterol, a major component of the fungal cytoplasmic membrane [34]. Itraconazole was first introduced for use in patients in 1987, and voriconazole, posaconazole and isavuconazole, as second-generation azole antifungals, were introduced in 2002, 2006 and 2015, respectively. Azole antifungal therapy is recommended for prophylaxis, empiric or preemptive therapy of acute disease and long-term maintenance therapy for allergic and chronic pulmonary aspergillosis. The emergence of azole resistance therefore threatens the effective treatment of aspergillosis [8]. In the past two decades, azole resistance in A. fumigatus has been increasingly reported, in both clinical and environmental strains, with tandem repeat mutations in the target gene of the azoles, the cyp51A gene [35,36]. The application of azoles is not limited to clinical use but extends to agrochemical applications, as well as use in corrosion inhibitors, dyestuffs and wood preservatives [26]. As azoles are used abundantly in the environment, this use is thought to have selected for the tandem repeat azole-resistance mechanisms in A. fumigatus and their global spread [26,37]. Furthermore, the use of similar chemical structures for these various applications has caused cross-resistance to develop between medical and non-medical azoles, the so-called environmental route of resistance selection [37]. However, recent studies have challenged the concept of the environmental route, and the importance of the patient route should be reconsidered, since the resistant genotypes TR34/L98H and TR120 have both been recovered from patients under long periods of clinical azole selection [38,39].
Patient-to-patient transmission has also been investigated, whereby patients can spread A. fumigatus into their direct environment by coughing, exposing other patients in the same environment [40][41][42]. The same genotype of A. fumigatus was recovered from cough aerosols and sputum samples in two out of 15 patients [40]. The assumption that tandem repeat selection via the patient route and patient-to-patient transmission of A. fumigatus cannot occur should be reconsidered, even if these account for only a minor part.
Clearly, azole resistance is a growing concern, as patients with azole-resistant A. fumigatus have a high probability of treatment failure, and alternative treatment options are limited. To obtain a full understanding of azole-resistance development, we need to characterize which types of resistance mechanisms emerge, how resistance can spread and if or how resistant genotypes can persist in environments without azoles. The molecular mechanisms that cause antifungal resistance are either naturally occurring in less susceptible species (intrinsic resistance) or acquired in susceptible strains (acquired resistance). Drug-resistance mechanisms can include altered drug-target interactions, reduced cellular drug concentrations mediated by drug efflux transporters, or shielding mechanisms, such as biofilm formation [43]. Adaptation is the process by which populations of organisms evolve in such a way as to become better suited to their environments as advantageous traits become predominant. Genetic adaptation can generally be achieved by either spontaneous mutation or recombination. Azoles are not known to be mutagenic or recombinogenic per se, but they present a stress factor for A. fumigatus and provoke a strong selection pressure for resistance, as has been shown by many in vitro evolutionary azole-resistance selection experiments [12]. In addition to drug resistance, other stress factors (e.g., pH) present in the environment will select adaptive traits.
Adaptation requires genetic variation, which increases the probability that emerging progeny are better suited to survive. As indicated, A. fumigatus can benefit from three modes of reproduction to generate variation, but the adaptation potential depends on the availability of these modes in the specific environment, population size and time to complete the life cycle (Table 1). There are some restrictions to each of these modes of reproduction: asexual reproduction requires specific conditions (including humidity, temperature, light, and oxygen); parasexual processes are mostly limited by heterokaryon incompatibility between genetically dissimilar isolates; for sexual reproduction, opposing mating types are required. In human infection, various morphotypes can be observed in different Aspergillus diseases. IA is characterized by hyphal growth that causes tissue invasion. The infection is acute and the ability for in-host adaptation seems very limited due to the short duration of disease. In pulmonary cavities, hyphal biofilms may be formed, and asexual sporulation may occur. Cavitary Aspergillus diseases are usually chronic, thus creating the potential for high mutation rates. As a consequence, various lineages may develop, including azole-resistant traits [44]. Selection of azole-resistant traits may occur when patients receive antifungal therapy, which may cause the resistant clone to become dominant. Conversion from an azole-susceptible phenotype to azole-resistant phenotype has been repeatedly described in patients with chronic cavities. Genotyping may indicate that the phenotype switch occurred in an isogenic background, supporting in-host selection. However, these observations do not reveal through which reproduction mode the adapted phenotype emerged. Other chronic Aspergillus diseases that may provide environments that support adaptation include sinusitis and otitis, although the number of studies that report adaptation remains limited. 
(From Table 1: sexual reproduction requires mating between isolates of opposite mating types; ~10⁵-10⁶ ascospores may be formed after ~6 weeks.)
Another patient group with chronic Aspergillus colonization is CF patients. Although lung cavities may develop in CF patients, A. fumigatus is thought to primarily form biofilms in the epithelial mucus. Although this biofilm formation provides shielding from stressors like other microorganisms, antifungal drugs and host immune effectors [45], the fungus cannot benefit from asexual reproduction to adapt to the lung environment, as it is confined to the hyphal morphotype. Nevertheless, antifungal drug resistance reported in azole-treated and azole-naïve CF patients indicates that A. fumigatus employs other strategies that enable adaptation [46]. The study of Engel et al. [32] showed that in chronically colonized patients, A. fumigatus can undergo parasexual recombination, characterized by diploid formation. Naturally occurring diploids had never been detected before, but in this study, a large set of 799 A. fumigatus isolates was screened, recovered from contrasting environments including chronic colonization (i.e., CF and chronic pulmonary aspergillosis), acute IA and the environment. Diploids were detected in isolates from CF patients but were absent in those from patients with acute infection and from the environment. As CF patients may be chronically colonized with isogenic A. fumigatus isolates that form hyphal networks in epithelial mucus, this study showed that the CF lung might represent a specific niche for parasex to occur. Diploid formation was associated with the accumulation of mutations and variable haploid offspring after crossing-over, including a voriconazole-resistant isolate. Thereby, it was shown that parasexual recombination played a role in azole resistance development in at least one of the detected diploids, which provides a possible explanation for the recovery of azole-resistant A. fumigatus isolates from azole-naïve CF patients.
Characteristics of the azole resistance mutations might also provide clues to which reproduction mode was involved. While single resistance mutations in the coding part of the cyp51A gene were found in cases of in-host resistance selection, more complex mechanisms involving a combination of SNPs in the coding gene and tandem repeats in the promoter region were observed in resistant isolates, mostly from environmental resistance selection. Although in-host selection (the patient route of resistance selection) may be considered synonymous with asexual reproduction, and environmental resistance selection with a combination of sexual and asexual reproduction, over time it has become clear that various resistance mechanisms may develop in both human and non-human environments. A TR34³/L98H variant was recovered from a CF patient who also harbored a corresponding isogenic TR34/L98H isolate, suggestive of a non-sexual genetic alteration. Experimental evolutionary lines showed that elongation of the 34 bp tandem repeat in the promoter is possible via asexual reproduction upon exposure to voriconazole. The TR34³/L98H variant first emerged after five cycles and subsequently became the dominant genotype, with higher azole resistance levels than its ancestor strain. This study showed that under strong azole selection, the tandem repeat copy number may increase through asexual reproduction. Mechanistically, this can be explained by replication slippage or unequal sister chromatid recombination during mitosis [37]. Such events may be rare per single mitotic division, but growth and sporulation of A. fumigatus cultures involve numerous mitotic divisions, each offering an opportunity for the tandem repeat number to increase [28]. The observation in the study of Zhang et al. is in line with observations of in-host selection of triazole resistance mutations in a different study, including a 120 bp tandem repeat in the promoter region of the cyp51A gene that could be matched with an isogenic ancestor isolate without a tandem repeat from the same patient [38].
There is accumulating evidence of an active sexual cycle for A. fumigatus that, at least in part, explains the genetically diverse population structure in nature [47,48]. As direct observations or sampling of sexual structures in nature have not been reported to date, the implications of sex for azole resistance development have not yet been elucidated. In 2017, a study by Zhang et al. provided evidence for a role of sex in azole resistance selection [22]. Heaps of composting organic waste material were shown to potentially provide the right conditions for sexual reproduction. In addition to the commonly detected 46 bp tandem repeat variant, a triple repeat was detected in the samples from this study. An experimental sexual cross between two 46 bp tandem repeat strains yielded a triple 46 bp variant among the progeny isolates. Through mispairing in the 46 bp repeat region during meiosis, longer tandem repeats can evolve in the promoter region. In addition to these observations that support sexual reproduction, it is important to note that compost heaps provide favorable conditions for sex, such as a warm, dark, low-O₂ and high-CO₂ environment resulting from biological metabolic activity. A dynamic composting process with temperature gradients (20 to 70 °C) and gas changes might therefore stimulate sexual reproduction of A. fumigatus. The extent to which these favorable conditions are present may differ between compost samples, as shown by the differences in growth between compost samples after heat shock [22]. A high-temperature heat shock is required to induce ascospore germination, and in this study, several samples showed growth after a one-hour heat shock [22].
As composting waste materials contain azole residues from agricultural use, this azole-containing habitat could serve as an evolutionary incubator, with selective pressure favoring recombination that benefits the fungus through increased fitness or increased resistance, thus facilitating its survival. This could explain the emergence of the TR34 and TR46 azole resistance mechanisms that have been found across the globe in a wide range of genetic backgrounds [49,50].
Final Remarks
Adaptation can be defined as the acquisition of adaptive traits through natural selection, which enables the organism to adjust to live in changing environments. As fungal adaptation results in treatment failure and fungal persistence, the biology of A. fumigatus during human infection and colonization needs to be understood to design strategies that can prevent or overcome adaptation. A. fumigatus may employ the asexual, parasexual or sexual cycle to adapt to its changing environments. As any change in the environment can provoke adaptation, switching between azoles in patient therapy or in agricultural settings might result in multi-azole-resistant A. fumigatus strains through the accumulation of several resistance mutations. When triazole application is stopped, an azole-free environment is created that could prompt selection for compensatory mutations that overcome any fitness costs that are expected to accompany resistance development. As a consequence, there is a risk of selecting for highly resistant strains with wild-type fitness. Future research should investigate the genomic dynamics during infection, as well as in our environment, to understand the key factors facilitating adaptation of A. fumigatus.
Author Contributions: E.S. and J.Z., writing-original draft preparation; E.S., P.E.V., J.Z. and A.J.M.D., writing-review and editing. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Identification and Characterization of HTLV-1 HBZ Post-Translational Modifications
Human T-cell leukemia virus type-1 (HTLV-1) is estimated to infect 15–25 million people worldwide, with several areas including southern Japan and the Caribbean basin being endemic. The virus is the etiological agent of debilitating and fatal diseases, for which there is currently no long-term cure. In the majority of cases of leukemia caused by HTLV-1, only a single viral gene, hbz, and its cognate protein, HBZ, are expressed and their importance is increasingly being recognized in the development of HTLV-1-associated disease. We hypothesized that HBZ, like other HTLV-1 proteins, has properties and functions regulated by post-translational modifications (PTMs) that affect specific signaling pathways important for disease development. To date, PTM of HBZ has not been described. We used an affinity-tagged protein and mass spectrometry method to identify seven modifications of HBZ for the first time. We examined how these PTMs affected the ability of HBZ to modulate several pathways, as measured using luciferase reporter assays. Herein, we report that none of the identified PTMs affected HBZ stability or its regulation of tested pathways.
Introduction
Human T-cell leukemia virus type 1 (HTLV-1) was the first human retrovirus discovered to be associated with diseases [1,2], including the aggressive CD4+ T-cell malignancy adult T-cell leukemia (ATL) [3], as well as the neurodegenerative disease HTLV-1-associated myelopathy/tropical spastic paraparesis and other inflammatory diseases [4]. HTLV-1 encodes the structural and enzymatic proteins Gag, Pol, and Env, as well as the regulatory proteins Tax and Rex. The virus also encodes accessory proteins that are required for efficient infection and persistence in vivo, but are dispensable for T-cell immortalization in vitro [5]. The accessory/regulatory protein HBZ is unique in that it is the only viral protein encoded by the minus strand of the proviral genome, while the rest of the viral proteins are encoded by the plus strand [6][7][8]. HBZ is expressed in all HTLV-1 cell lines and cases of ATL; in fact, in 60% of those ATL cases, hbz is typically the only viral gene expressed (e.g., no Tax) [9]. This finding is attributed to deletion or hyper-methylation silencing of the promoter in the 5′ LTR or a non-functional mutation in the Tax transactivator, which significantly disrupts plus-strand transcription [9,10].
Post-translational modifications (PTMs) are chemical modifications added to proteins that can alter many aspects of a protein, including conformation, localization, and activity. This common mechanism of cellular regulation is utilized by several pathogens, including HTLV-1, to alter the expression of their own proteins. Tax contains several PTMs; for example, phosphorylation of Tax both stabilizes the protein [28] and inhibits its activity [29]. In addition, a phosphorylation site is required for the addition of an acetyl group that activates Tax to enhance NF-κB and induce transformation [30,31]. Furthermore, our lab has shown phosphorylation to be vital for the regulation of Rex function [32].
There currently are no published data about whether HBZ is post-translationally modified; however, it is known that HBZ interacts with acetyl-transferases [12,33]. Therefore, we hypothesized that HBZ, like Tax and Rex, would contain PTMs that regulate important functions. In this study, we purified an affinity-tagged-HBZ protein and analyzed this protein by LC-MS/MS. A high percentage of the protein, including the majority of the key leucine-zipper domain at the C-terminus, was covered in this analysis. This approach identified 7 modifications, which were further characterized by mutational analysis to determine if they regulated known HBZ functions.
Cells
293T cells were maintained in Dulbecco's modified Eagle's medium and Jurkat T-cells were maintained in RPMI medium at 37 °C in a humidified atmosphere of 5% CO₂ and air. Media were supplemented with 10% fetal bovine serum (FBS), 2 mM glutamine, penicillin (100 U/ml), and streptomycin (100 µg/ml). Cells were originally obtained from ATCC.
Plasmids
To generate the Flag-6xHis-HBZ construct, the HBZ cDNA was inserted downstream of an N-terminal Flag-6xHis affinity tag and expression was driven by a CMV promoter. Amino acid exchanges were made using the QuickChange site-directed mutagenesis kit (Stratagene, La Jolla, CA). All mutations were confirmed by DNA sequencing and expression was verified by transfection and Western blot analysis. The pCMV-c-Jun and pLG4-10-6xAP-1-Luc plasmids were graciously provided by Dr. John C. McDermott of York University. The p65 expression plasmid and kB-Luc plasmid were a kind gift from Dr. Dean Ballard of Vanderbilt University. The IRF-1 expression plasmid and IRF-1 luciferase reporter plasmid were graciously provided by Dr. John Yim of the Beckman Research Institute.
Protein Purification
293T cells were plated in six 100 mm dishes, three per condition, and each plate was transfected with 10 µg of empty vector or Flag-6xHis-HBZ plasmid using Lipofectamine (Invitrogen, Carlsbad, CA). Twenty-four hours post-transfection, cells were collected, combined, washed in cold 1x PBS, and lysed following the FLAG fusion protein immunoprecipitation and SDS-PAGE buffer elution protocols of the FLAG M Purification Kit (Sigma Aldrich, St. Louis, MO). Samples were loaded on a large 12% SDS-PA gel and electrophoresed for 3 hours at 55 mA. The gel was washed with Millipore water and stained using GelCode Blue Stain (Thermo Scientific, Rockford, IL). The HBZ band was excised from the gel for further proteomic analysis.
Mass Spectrometry and Proteomic Analysis
LC-MS/MS analysis was performed as described previously [34] with the following modifications. Excised HBZ gel slices were cut into small pieces (2-3 mm cubes) and incubated on a shaker overnight in 50% acetonitrile to destain the gel pieces of Coomassie dye. Samples were reduced with 7.5 mM DTT in 75 mM ammonium bicarbonate solution at 50 °C for 30 min, after which the DTT was removed and the protein was alkylated with 40 mM iodoacetamide in 75 mM ammonium bicarbonate solution for 20 min at room temperature in the dark. The gel pieces were washed with acetonitrile and desiccated in a speed-vac. Aliquots were subjected to in-gel proteolysis using the following endoproteinases (5 ng/µl): i) sequencing grade modified trypsin (Promega); ii) sequencing grade chymotrypsin (Roche); iii) sequencing grade endoproteinase Asp-N (Roche); and iv) a trypsin/Asp-N combination. The resulting peptides were extracted in 100 µl of acetonitrile by vortexing for 10 min. The solution was transferred to new small microcentrifuge tubes and desiccated in a speed-vac. Dried samples were resuspended in 6 µl of buffer A (2% acetonitrile, 0.2% formic acid), and 5 µl were separated on a 15 cm × 0.075 mm fused silica capillary column packed with reversed-phase 3 µm ReproSil-Pur C18-AQ resin (Dr. Maisch GmbH, Ammerbuch-Entringen, Germany) using a nano EASY HPLC. Peptides were eluted over 50 min by applying a 0-30% linear gradient of buffer B (80% acetonitrile, 19.8% water and 0.2% formic acid) at a flow rate of 350 nL/min. The Orbitrap (Thermo Fisher Scientific, San Jose, CA) was run in data-dependent mode with 10 data-dependent scan events for each full MS scan. Normalized collision energy was set at 35; activation Q was 0.250. The AGC target for MS was 1×10⁶ and the AGC target for MS/MS was 5×10⁴. Dynamic exclusion was set to 60 s and early expiration was disabled.
Sequence analysis was performed with MASCOT (Matrix Science, London, UK) software using an indexed human subset database of SwissProt, supplemented with HTLV sequences, 263 contaminants and 114,960 decoy sequences.
Reporter Assays
Each functional reporter assay had its own set of conditions for plasmid concentrations. In brief, 293T cells were seeded in 6-well plates at 2×10⁵ cells per well. Twenty-four hours post-plating, cells were transfected with 10 or 20 ng of renilla-TK, a luciferase reporter, an expression plasmid for a specific transcription factor, and an HBZ expression plasmid at one of two concentrations (1:5 or 1:10 ratio). Empty vector was added to make the total DNA concentration equal among all transfections. Transfections were performed using Lipofectamine (Invitrogen, Carlsbad, CA). Twenty-four hours post-transfection, cells were collected and analyzed using a dual luciferase assay kit (Promega, Madison, WI). Levels of firefly luciferase and renilla luciferase were measured using a Packard LumiCount luminometer. Each experiment was performed three independent times in duplicate. Jurkat T-cells were plated in 6-well dishes at 3.5×10⁵ cells per well. Twenty-four hours post-plating, cells were transfected using TransFectin lipid reagent (Bio-Rad Laboratories, Hercules, CA) following the manufacturer's guidelines. Forty hours post-transfection, cell lysates were collected and analyzed as described above.
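The normalization underlying such dual-luciferase readouts can be sketched as follows: the firefly (pathway reporter) signal is divided by the renilla (transfection control) signal, then expressed relative to the empty-vector baseline. The counts below are invented for illustration and are not data from this study:

```python
# Sketch of dual-luciferase normalization: the firefly (pathway reporter)
# signal is divided by the renilla (transfection control) signal, then
# expressed relative to the empty-vector baseline.
# ASSUMPTION: all counts below are invented for illustration; they are
# not measurements from this study.

def relative_activity(firefly: float, renilla: float, baseline_ratio: float) -> float:
    """Renilla-normalized firefly signal as a fraction of the control."""
    return (firefly / renilla) / baseline_ratio

# Empty-vector control wells define the baseline firefly/renilla ratio.
baseline_ratio = 50_000 / 1_000

# A hypothetical HBZ titration repressing the reporter dose-dependently.
wells = [("empty vector", 50_000, 1_000),
         ("HBZ 1:10",     30_000, 1_000),
         ("HBZ 1:5",      15_000, 1_000)]
for label, firefly, renilla in wells:
    print(f"{label:>12}: {relative_activity(firefly, renilla, baseline_ratio):.2f}")
```

Dividing by the renilla signal corrects for well-to-well differences in transfection efficiency and cell number, which is what makes comparisons between HBZ doses meaningful.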
Western Blot Analysis
Transfected cells were lysed in 1x Passive Lysis Buffer (Promega, Madison, WI) with protease inhibitor cocktail (Roche, Mannheim, Germany). Protein concentrations were measured using a Nanodrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA). SDS dye (6x solution) was added to the lysates and samples were boiled for 10 min. Twenty micrograms of protein were resolved by SDS-PAGE and transferred to nitrocellulose membranes. Blots were probed with a rabbit polyclonal anti-HBZ antiserum (1:1000), a mouse anti-Flag M2 antibody (1:5000) (Sigma Aldrich, St. Louis, MO), or mouse anti-Actin (1:10000) according to standard procedures. Secondary antibodies used included goat anti-rabbit and goat anti-mouse conjugated with horseradish peroxidase (Santa Cruz Biotechnology, Santa Cruz, CA) at a dilution of 1:2000. Blots were developed using Immunocruz luminol reagent (Santa Cruz Biotechnology) and imaged using the Fuji LAS 4000 imaging system (GE Healthcare Life Sciences, Piscataway, NJ). Densitometry was measured using Multi Gauge version 3.0 software (Fujifilm, Tokyo, Japan).
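Densitometry of the kind performed here reduces to integrating pixel intensities over a band's region of interest and normalizing the target band to the loading control. A minimal sketch on a synthetic image follows; the ROIs and intensities are stand-ins, not values exported from the Multi Gauge analysis:

```python
import numpy as np

# Minimal densitometry sketch: integrate pixel intensities over a band's
# rectangular region of interest (ROI) and normalize the target band to
# the actin loading control.
# ASSUMPTION: the image and band intensities are synthetic stand-ins,
# not values from the Multi Gauge analysis.

def band_density(image: np.ndarray, rows: slice, cols: slice,
                 background: float = 0.0) -> float:
    """Background-subtracted integrated intensity of one band ROI."""
    roi = image[rows, cols]
    return float(roi.sum() - background * roi.size)

blot = np.zeros((100, 100))
blot[10:20, 10:40] = 5.0    # synthetic HBZ band
blot[60:70, 10:40] = 10.0   # synthetic actin band

hbz = band_density(blot, slice(10, 20), slice(10, 40))
actin = band_density(blot, slice(60, 70), slice(10, 40))
print(hbz / actin)  # HBZ level relative to the loading control
```

Normalizing to actin corrects for unequal protein loading between lanes, so ratios, rather than raw band intensities, are what get compared across samples.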
Prediction of HBZ PTMs
There are currently no reports on whether HBZ is post-translationally modified, but it is well known that PTMs can play a major role in the properties and functions of proteins. Two major types of modification are phosphorylation and acetylation. These modifications are reversible and are used to modify the activity of many transcription factors [35]. Using the online phosphorylation prediction tool NetPhos 2.0 Server [36], we found 6 potential phosphorylation sites on HBZ (Table 1). We also used the online acetylation prediction tool PAIL [37] (Table 2), which predicted 16 acetylated lysines. These data suggested that HBZ was likely modified by the cell. Instead of mutating all predicted modified residues, we first set out to identify modified residues by mass spectrometry (MS). This approach would allow us to identify both phosphorylation and acetylation added to HBZ within a eukaryotic cell in a single assay. This analysis was dependent on the production and purification of substantial quantities of HBZ protein.
Function and purification of Flag-6xHis-HBZ
Varying amounts of the Flag-6xHis-HBZ construct were transfected into 293T cells along with a Tax expression plasmid and the HTLV-1-LTR-luciferase reporter plasmid (Figure 1A). As expected, Flag-6xHis-HBZ was able to repress Tax transactivation in a dose-dependent manner, similarly to untagged, wild-type HBZ. We next verified that we would be able to adequately purify HBZ for mass spectrometry analysis. Using components of Sigma's FLAG M Purification kit, lysates from HBZ-transfected 293T cells were collected and HBZ was purified using agarose beads conjugated with mouse anti-Flag antibody. SDS-PAGE and GelCode Blue visualization revealed a specific band corresponding to tagged HBZ that could be processed for mass spectrometry (Figure 1B).
Identification of PTMs
Multiple runs of LC-MS/MS were performed with protein digestion schemes described in the Materials and Methods section. Overall, we were able to obtain 68% coverage of the amino acid sequence, including the majority of the key leucine-zipper functional domain and identified several PTMs (Figure 2). We detected phosphorylation on S49, acetylation on K66 and K155, and methylation on K35, K37, K181 and K186 (Figure 2 and Tables 1-3). The majority of these modifications occur in the important protein-protein interaction domains of HBZ. Of the predicted phosphorylation sites, we covered 5 of the 6 predicted sites (Table 1), and 5 of the 16 predicted acetylation sites (Table 2). We compared MS spectrum counts for modified peptides with their unmodified counterparts, which allowed semiquantitative analysis for the frequency of modifications (Tables 1-3). Our data suggest that the phosphorylation of HBZ is an infrequent occurrence since S49 showed limited phosphorylation. The addition of an acetyl group to K155 also seems to be a rare event, being detected approximately 3% of the time. Of the discovered methylations, our data indicate that only K35 is methylated with some consistency. Furthermore, we provide evidence that K66 is constitutively acetylated, neutralizing the positive charge of this amino acid. All these identified modifications are novel and, we hypothesized, could regulate the properties and/or functions of HBZ.
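The semi-quantitative frequency estimate described above amounts to dividing modified spectral counts by the total counts observed for a site. A small sketch with invented counts follows; they are chosen only so that K155 acetylation lands near the ~3% figure mentioned in the text:

```python
# Semi-quantitative modification frequency from MS spectral counts:
# modified peptide-spectrum matches divided by total matches for a site.
# ASSUMPTION: the counts below are invented, chosen only so that K155
# acetylation lands near the ~3% figure mentioned in the text.

def modification_frequency(modified: int, unmodified: int) -> float:
    total = modified + unmodified
    return modified / total if total else 0.0

spectral_counts = {
    "K66-acetyl":  (95, 5),   # near-constitutive
    "K155-acetyl": (3, 97),   # rare
    "S49-phospho": (2, 98),   # infrequent
}
for site, (mod, unmod) in spectral_counts.items():
    print(f"{site}: {modification_frequency(mod, unmod):.0%}")
```

Spectral counting is only semi-quantitative because ionization efficiency can differ between modified and unmodified peptides, so these fractions indicate relative frequency rather than exact site occupancy.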
PTMs do not affect HBZ steady-state levels
In the current study, we decided to examine the roles of phosphorylation and acetylation individually by mutating modified amino acids to mimetic (S→D and K→Q, respectively) and inhibitory (S→A and K→R, respectively) residues for each PTM. We also created a phospho-/acetyl-mimetic mutant (PhAc-mim: S49D-K66Q-K155Q) and a phospho-/acetyl-inhibitory mutant (PhAc-inh: S49A-K66R-K155R) for the discovered modified sites to investigate whether they act in concert. The approach of having mimetic and inhibitory mutants allowed us to compare each mutant to the wild-type protein and to the paired residue mutation. It also is important to examine both the mimetic and inhibitory mutations, as both phosphorylation and acetylation can positively or negatively regulate the functions of proteins. We decided not to focus on the discovered methylation sites at present, since all methylated states were found in less than half the cases of detected peptides and we cannot create a methylated lysine mimetic mutation. If future studies identify methyltransferases that interact with HBZ, it would be interesting to see whether over-expression of these enzymes modifies these residues of HBZ and is important for HBZ function.
After the creation of the mutant forms of HBZ in the Flag-6xHis vector, Western blot analysis was performed to examine if any of the modifications affected the steady-state levels of the protein. We hypothesized that acetylation of K66 would be important for protein stability as it was found to be constitutively modified. However, probing for the affinity-tag showed that none of the modifications affected the steady state level of the protein (Figure 3).
PTMs do not affect inhibition of viral regulatory proteins
HBZ inhibits Tax transactivation of the LTR promoter by binding to the co-activators CREB and p300 [11,12] and by upregulating the ubiquitin E3 ligase PDLIM2, which targets Tax for degradation [38]. To examine whether the phosphorylation and acetylation of HBZ were important for this function, 293T cells were transfected with an HTLV-1 LTR-luciferase reporter along with Tax and titrating amounts of wild-type HBZ, the PTM mutants, and a ΔLZ mutant (previously shown to repress Tax transactivation to a lesser extent than wild-type) [11,12] (Figure 4A). As expected, wild-type HBZ and, to a lesser degree, HBZ-ΔLZ were able to repress Tax transactivation of the HTLV-1 LTR promoter. We observed that all PTM mutants tested were able to inhibit Tax activity to a similar degree as wild-type HBZ, and we found no significant difference between paired mutants. It should be noted that this result for the S49 mutants was not unexpected, because these mutations were reported previously to bind to p300, inhibiting it from interacting with Tax [12]. To examine whether the PTMs affected HBZ's functions within a T-cell, the luciferase assay was repeated in Jurkat T-cells (Figure 4B). Results in Jurkat T-cells were similar to those in 293T cells; all the tested PTM mutants functioned similarly to wild-type HBZ and repressed Tax transactivation of the HTLV-1 LTR promoter. These data suggest that the tested PTMs do not affect the ability of HBZ to modulate Tax activity. More recently, it has been reported that HBZ modestly represses Rex function in a dose-dependent manner in HeLa cells [39]. We confirmed this repression of Rex activity by wild-type HBZ and found that all of the PTM mutants functioned similarly to wild-type HBZ (data not shown). These data suggest that the tested PTMs do not affect the ability of HBZ to modulate Rex activity. It previously was reported that HBZ represses the classical NF-κB pathway by inhibiting the DNA-binding of p65 and inducing p65 degradation [18].
This finding was important because this binding stopped cells from entering Tax-induced senescence [19]. We used a reporter assay to examine if the discovered PTMs affected the ability of HBZ to repress p65 in both 293T cells and Jurkat T-cells (Figure 5A and 5B). These data demonstrated that the PTMs of HBZ, individually and in combination with each other, did not affect HBZ's ability to repress p65 transcriptional activity in either cell type.
We next examined how the PTMs affect the ability of HBZ to repress c-Jun transcriptional activity, because HBZ and APH-2, the HBZ counterpart in non-pathogenic HTLV-2, differentially regulate this cellular pathway [13,40]. 293T cells were transfected with a 6xAP-1-luciferase construct along with pCMV-c-Jun and titrating amounts of HBZ. Because HBZ interacts with c-Jun through its leucine zipper domain [13], we also included the HBZ-ΔLZ mutant (Figure 6). Our results show that WT HBZ was able to repress c-Jun-mediated transcription, whereas the HBZ-ΔLZ mutant was unable to repress c-Jun. All PTM mutants acted in a manner similar to WT HBZ, indicating that none of the PTMs affected the interaction of HBZ with c-Jun. Taken together, our data indicate that the phosphorylation and acetylation state at these residues is not important for the ability of HBZ to modulate the classical NF-κB and AP-1 pathways.
Repression of IRF-1 is not dependent on HBZ PTMs
After testing whether PTMs regulate the ability of HBZ to repress viral expression and growth pathways, we next turned our attention to a component of the innate immune system. Interferon (IFN) regulatory factors (IRFs) are key components of the immune system, as they control interferon production and the development of immune cells, but they also play a role in regulating oncogenesis [41]. IRF-1 induces the expression of type-I IFN and acts as a tumor suppressor by inducing apoptosis [42]. Clinical data have shown that IRF-1 expression is lost in many cases of leukemia [43]. Since HBZ is typically the only HTLV-1 protein expressed in cases of ATL, Mukai et al. investigated whether HBZ and IRF-1 interact [20]. They discovered that the N-terminus of HBZ was important for binding IRF-1 and repressing its activity. We performed a reporter assay to assess whether the PTMs of HBZ regulated the repression of IRF-1 transcriptional activity (Figure 7). All PTM mutants were able to repress IRF-1 in a dose-dependent manner and there were no significant differences between paired mutations. These data suggest that PTMs are not involved in the regulation of IRF-1 activity by HBZ.
Discussion
Our present research is the first to report PTMs of HBZ and to define the potential role of these PTMs in the known functions of HBZ. Using online prediction tools, we found 6 potential sites of phosphorylation and 16 sites of acetylation. Our MS data covered 68% of the amino acids of HBZ, including 5 of the 6 potential phosphorylation sites and 5 of the 16 acetylation sites. In total, 7 modifications were identified. Only acetylation of K66 occurred at a high frequency, with the 6 other modifications occurring at low frequency. Predicted sites that were covered but not found to be modified cannot be fully ruled out, but we are confident that any modifications there do not occur at a high frequency. Of the uncovered predicted sites, acetylation would seem likely to occur more frequently than phosphorylation, because more sites are available and HBZ is known to interact with acetyl-transferases.
We first examined the effect that these PTMs had on the steady-state levels of the protein, but found no difference between samples and controls. We next tested how the PTMs affect the ability of HBZ to repress Tax transactivation. Because none of the mutants acted differently than the wild-type at a low or high concentration, we can infer two aspects of the identified phosphorylation and acetylation: 1) they do not affect the interaction of HBZ with p300, and 2) they do not affect the interaction of HBZ with CREB. The cellular signaling pathways AP-1 and NF-κB, along with IRF-1-mediated transcription, were also examined. Although the modifications found lie in domains that modulate the activity of the tested transcription factors, they did not play any role in the ability of HBZ to repress these selected pathways.
HBZ is known to interact with several proteins and affect various cellular pathways. While we could not identify any role for the PTMs in the pathways examined, it remains possible that these PTMs have a function. Although the enzymes that add PTMs to their cognate proteins within the cell are not perfectly specific, their promiscuity is still expected to be limited, given the importance of strict regulation and localization. The possibility that a combination of identified and unidentified PTMs may be necessary cannot be ruled out at this point. Furthermore, it is important to note that there could be unknown functions of HBZ that are regulated by these three PTMs. Future studies should focus on modifications that cannot be readily detected by MS, such as SUMOylation [44,45], as these have also been shown to be important for regulating protein functions.
DETERMINATION OF FUNCTIONAL AND TECHNOLOGICAL PROPERTIES OF BEEF BASED ON THE ANALYSIS OF COLOR DIGITAL IMAGES OF MUSCULAR TISSUE SAMPLES
The paper considers the kinetics of changes in the values of pH and temperature of beef from slaughtered Holstein bull calves aged 15 months during cold storage. It has been established that the rate of pH decrease during autolytic maturation is greatly influenced by the rate of temperature decrease. This was observed in the two muscles taken as an example: m. Longissimus dorsi and m. Semimembranosus. A method is suggested for analysing digital images of beef muscular tissue samples in the color coordinate space to study the beef's color characteristics during cold storage. It has been found that this method, with second-order polynomial fitting, provides a mean-square approximation error of 5.6% on average from the minimum coordinate of the maximum level of the red component of color. This suggests the objectivity of its use to assess the color of the meat. An analytical dependence has been established between the beef color and the term of refrigeration with the use of information technologies. In accordance with it, it has been determined that, due to biochemical processes, intense oxidation of myoglobin takes place, which results in a dark color of the muscle tissue. In the course of time (up to 120 hours of storage), the red color intensity increases. This is accompanied by decomposition of the myoglobin forms that have appeared, and by the appearance of MbO₂. When beef is stored for more than 140 hours, deeper iron oxidation begins, with formation of metmyoglobin, and the brightness of the meat decreases. The developed method allows automating the registration and processing of images of muscle tissue in real time, increases the productivity of the assessment, and gives an opportunity to obtain reliable objective results about the meat properties during its storage.
Introduction. Formulation of the problem
The effectiveness of systems for monitoring the technological process at food enterprises depends on the timeliness of receiving information. Such a system is particularly important for technological operations that are critical control points (CCP). For a CCP, the promptness and accuracy of assessing the controlled parameters determine the quality and safety of the manufactured products. The initial CCP at any food enterprise is the receipt of raw materials, because the properties of food raw materials are labile and depend on many factors. That is why, at this stage, it is so important to have an online assessment of the initial functional and technological characteristics of raw materials.
It is known that, when received at an enterprise, the quality of meat raw material, especially in carcasses or half-carcasses, is usually assessed on the basis of temperature, pH, and sensory parameters. The temperature and pH level are controlled with measuring instruments, while determining complex sensory parameters requires workers of a certain qualification. That is why the sensory characteristics obtained are subjective and contradictory. The situation is further complicated by the fact that the sensory parameters are the basis for determining the sanitary condition of the surface, veterinary safety, and functional properties of the meat, and the latter determine its further use.
At the same time, it is known that one of the most informative sensory parameters that characterizes meat as a complex biochemical system, and is directly related to its technological properties, is the color of the fat and muscle tissue. The color characteristics of muscle tissue depend on the amount and state of meat's basic chromoproteins: myoglobin (Mb) (90% of the total number of pigments) and hemoglobin (10%) [1]. These proteins determine the intensity of the color of the raw material. The biochemical state of meat chromoproteins is influenced by pH, the presence of oxygen in the tissues, and the level of autolytic changes. These factors are decisive not only for the pigments, but for all other muscle tissue proteins that determine the functional and technological properties of meat. This is especially true for beef, which has an intense red color. That is why determining the color indicators of beef during cold storage on the basis of a method for analyzing digital images of beef muscle tissue samples is a topical scientific and practical problem. Its solution will eliminate experts' subjectivity and allow obtaining mathematically grounded dependences between the functional and technological properties of meat and its color.
Analysis of recent research and publications
The problem of operational control of the quality indicators of meat raw materials has long been relevant. Scientists in different countries are looking for effective express methods to assess meat. The possible methods fall into instrumental and organoleptic ones. By their underlying principles, they are classified into chemical, physico-chemical, physical, and biological. By means of special devices and reagents, one can determine the qualitative and quantitative composition, the state of proteins, lipids, and moisture, the structural and mechanical properties, the color characteristics, and other parameters of raw materials and finished products. Almost all control methods require sampling and take time to conduct an analysis. For operational control, express methods are needed that allow online monitoring of the process. The most popular with scientists is the spectral method of analyzing the quality of meat products using different wavelengths.
According to Yu. G. Sazonov and K. G. Pankratov, spectroscopy in the near-infrared spectrum allows a significant number of parameters to be determined in products of complex chemical composition [2]. With the help of infrared analyzers, solid, liquid, and paste-like products can be analyzed. This was proved experimentally, and the results were compared with those of other types of control. The authors developed methods for processing the results of analyses with modern computer programs. The studies of V. V. Zinchenko and V. A. Bogomolov [3,4] took the same direction and proved that near-infrared spectroscopy can be used to determine the composition and properties of food products. This determination was based on previously accumulated data on samples whose composition and properties were already known. The idea of the infrared spectroscopy method is to determine the composition of a sample from its spectrum without separating the components. Different components of the sample selectively absorb light at different wavelengths, meaning that they have unique spectra. Thus, from the spectrum of an unknown sample, the concentrations of its components can be determined.
Bruce W. Mossa and others [5] studied whether Raman spectroscopy could be used to assess the marbling of meat. The essence of the method is to irradiate the sample with a laser beam with a wavelength of 785 nm and measure the light scattered by the surface. In this case, a certain number of molecules on the surface of the product become excited, and it is this energy that changes in the diffused light (Stokes scattering) measured with the device. The suggested method was used to assess the fatty acid composition of marbling fats and to determine the quality of meat stored for up to 3 days. This method can be recommended for on-line meat quality assessment after additional research.

Volume 12 Issue 3/ 2018
N. Prieto and others used near-infrared spectroscopy [6] and X-ray computed tomography [7] to predict the chemical composition of meat, to classify it according to quality classes, and to determine the fatty acid composition of intramuscular fat. The suggested methods are potentially promising for use in the meat-processing industry, but require establishing additional functional and sensory parameters.
Fabıola Manhas and others studied the qualitative characteristics of beef by means of nuclear magnetic resonance spectroscopy [8]. In selected samples, sensory (aroma, juiciness, tenderness) and physico-chemical (moisture content, fat, instrumental tenderness, shear force) parameters were analyzed with standard methods and with NMRS. In this work, Carr-Purcell-Meiboom-Gill (CPMG) and Continuous Wave-Free Precession (CWFP) sequences were used. The regression models were described and calculated with the partial least squares (PLS) method using CPMG. The results obtained allowed the authors to conclude that the qualitative indicators assessed by the standard methods and by the predictive ones did not show significant differences, with a confidence interval of 95%.
C. C. Correa and others [9] studied the intramuscular fat content by means of high-performance, nondestructive nuclear magnetic resonance. The method is based on data on the spatial and transverse relaxation times obtained with continuous wave-free precession (CWFP). This method makes it possible to estimate the fat content of beef quickly and conveniently.
Wavelengths in the visible and near-infrared range were used to estimate the moisture content, native protein, and intramuscular fat in lamb and chicken [10]. The objects of the research were both whole-muscle products and minced meat. The samples were scanned in reflection on a NIRSystems 6500 (NIRSystems, Silver Spring, MD, USA). The regression equations were calculated by means of modified partial least squares (MPLS) with internal cross-validation. The smallest error was observed when dealing with minced meat. It was concluded that analysis of spectral characteristics ensures high reliability of the data on the chemical composition of raw materials.
D. Dow and others [11] analyzed various methods for determining the fat content of beef. The methods compared included extraction with chloroform:methanol (2:1), microwave drying followed by ether extraction, and nuclear magnetic resonance. All regression equations for determining the percentage of fat, regardless of the method of extraction, were linear. The highest accuracy was obtained with ether extraction, but the difference in the coefficients of the equation between ether extraction and the use of NMR is insignificant. So NMR can be recommended to enterprises as a quick way to assess the total fat content in muscle tissue.
A. G. Shleikin [12] suggested a method of nondestructive control of meat raw materials and unification of the assessment of the color characteristics of pork and beef muscle tissue as an integral indicator of the quality and safety of meat and meat products. The objects of his study were meat of various anatomical locations (bacon, undercut, thigh, cutlet, fillet, scapula, etc.) at different refrigeration and storage stages (cooled and defrosted meat after 1, 7, and 120 days of storage at a temperature of -18°C) [13]. The experiments were carried out using two spectrophotometers: an SF-26 (destructive method), which records the absorbing capacity of extracts (by the method of determining myoglobin), and an SF-18 (remote method), which records the reflection from the cut surface of a muscle. At the same time, functional and technological characteristics, such as pH and the moisture-retaining and wetting ability of the meat, were determined. The research was conducted at wavelengths of 488 and 640 nm. The results obtained revealed the superiority of the remote color estimation method, which is much faster and eliminates contamination of the test sample.
In recent years, researchers have found applications of computer vision technology for the automatic detection of meat marbling [13]. Image processing typically includes such operations as segmentation of m. Longissimus dorsi from a picture of a steak, segmentation of the marbling from the soft part, and extraction of the marbling. The advantage of computer assessment is the speed and simplicity of the method, which allows automating the process of sorting meat raw material.
The use of spectrophotometers of various designs does not always allow an objective assessment of the quality of meat. That is why, in recent years, digital cameras and scanners have become popular with scientists. A. A. Kulakov [14] suggested a method for assessing scanned color images in which the concentration of a dye was accurately determined. The dye (malachite green) was spread on filter paper, dried, and scanned with an Epson Stylus CX3900 multifunctional scanner. The resulting digital images were processed by the program "Blot-1", developed by the authors and based on Borland ImageProcessForm. The relative error of the color intensity was no more than 0.4%. Taking into account the results obtained, the authors recommend using scanning devices, rather than expensive spectrophotometers, to assess color intensity.
A general analysis of the literature reveals a lack of universal methods for assessing the color of meat. Existing devices have advantages and disadvantages. So, developing an objective express method for assessing the quality of meat, in particular beef, is necessary and timely, taking into account the dynamic development of the food industry.
The purpose of the research is to determine the functional-technological and color parameters of beef during refrigeration on the basis of the development of a method for analyzing digital images of beef muscle tissue samples. To achieve this goal, the following tasks have been solved: giving reasons for the selection and obtaining of the samples to be studied; studying the changes in the pH and temperature parameters of beef during refrigeration; developing a method for analyzing digital images of beef muscle tissue samples in the RGB color coordinate space for the study of the color characteristics of beef during cold storage; and establishing an analytical dependence between the color of beef and the term of refrigeration using information technologies.
Research Materials and Methods
When solving the first problem, chilled half-carcass beef of Holstein bulls slaughtered at the age of 15 months was taken as the object from which to obtain the research samples. The Holstein breed complies with all the main characteristics of bovine cattle productivity, but its main distinctive quality is that cows and bulls gain weight quickly and can be freely used in the production of meat. The slaughtered animals under study were 480-500 kg of live weight and 265-280 kg of slaughter weight. Cooling of the half-carcasses was carried out by the single-stage method under the following conditions: temperature +4°C; air speed 0.5-1.0 m/s; relative humidity 90%; total duration 36 hours, until the temperature in the center of the pelvis was no higher than +4°C. Storage: temperature +4°C; air speed 0.5 m/s; relative humidity 95%; total duration 172 hours from the moment of slaughter. The character of the autolytic changes was determined by the pH [15], by the temperature in the thickness of the muscles, and by the change in the meat color. The points of control of the functional indicators were m. Longissimus dorsi and m. Semimembranosus. The color characteristics were assessed on slices of m. Semimembranosus. The samples were taken from the half-carcasses at certain time intervals (see Table 1). The change in the meat color was determined by analyzing the digital color images obtained by scanning the samples of the muscle sections with an HP ScanJet 5590P (L1912A) scanner (CCD matrix, color depth 48 bits). Each image contained 3 to 9 beef cuts. What the image of a cut looked like at the second time point (see Table 1) is shown in Figure 1. The algorithm for analyzing the images of the meat cuts is as follows:
- Obtaining output images for processing.
Table 1 – Time after the slaughter of animals, h
- Construction of a red color distribution histogram in the RGB color system.
- Construction of the spatial distribution of the red component on the basis of the average of all histograms, depending on the time of obtaining the samples.
- Determination of the maximum intensity coordinate for each averaged histogram.
- Construction of the point dependence (data) of the coordinates of the red color maximum intensity on the time of obtaining the samples.
- Smoothing the obtained experimental data by first- and second-order polynomials.
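The first two steps above can be sketched in Python with NumPy (the original processing was done in Matlab; this is only an illustrative sketch, and the synthetic array stands in for a real scanned cut):

```python
import numpy as np

def red_histogram(image, patch=256, bins=256):
    """Take a patch x patch fragment of an RGB image (H x W x 3, uint8)
    and return the histogram of its red (R) component."""
    red = image[:patch, :patch, 0]          # red channel of the fragment
    hist, _ = np.histogram(red, bins=bins, range=(0, bins))
    return hist

# Synthetic stand-in for a scanned meat cut: the red channel clusters
# around intensity 180 (a hypothetical bright-red sample).
rng = np.random.default_rng(0)
cut = np.zeros((256, 256, 3), dtype=np.uint8)
cut[..., 0] = rng.normal(180, 10, size=(256, 256)).clip(0, 255).astype(np.uint8)

hist = red_histogram(cut)
peak = int(np.argmax(hist))  # coordinate of the red maximum intensity
```

With a real scanner image, `cut` would instead be loaded from file (e.g. with Pillow) before the same histogram step is applied.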
Results of the research and their discussion
The quality of the finished product is determined by the initial properties of the main raw material. The main raw materials for meat products are beef, pork, poultry meat, and fat derived from processing slaughtered animals. Leaving aside the in-life factors, the functional characteristics of meat are largely influenced by primary processing and refrigeration. In the production of cooled raw materials, refrigeration has two stages: cooling and storage in a cooled state. When cooled, complex biochemical changes take place in the meat. According to the stage of autolysis, they allow classifying it roughly into fresh (pre-rigor) meat, meat in a state of rigor mortis, and meat at the stage of maturation. Products made from fresh meat and matured meat have excellent sensory characteristics. Meat in rigor mortis is not suitable for industrial processing, since it is characterized by low pH, low moisture-retaining ability, rigidity, and a darker color. Some of these parameters can be determined directly, in the work area, without complex physical or chemical research. In this series of studies, the pH and temperature changes in the longest muscle of the back and in the internal muscle of the pelvis were studied.
Analysis of the dependencies presented in Fig. 2 shows that the pH level for m. Longissimus dorsi throughout the trial period is higher than that for m. Semimembranosus. An explanation for this is the difference in the initial content of glycogen. The muscles of the pelvis carry a significant dynamic load; that is why they contain 1.5 times more glycogen than the muscles of the back [16,17]. Accordingly, in amylolysis and the subsequent phosphorolysis of glycogen, more lactic acid is formed, which shifts the pH of the meat toward the acidic side. Rigor mortis for m. Longissimus dorsi is observed in the interval of 35 to 44 hours from the moment of slaughter; for m. Semimembranosus, 21 to 31 hours. Previous studies [18,19] have found that rigor mortis for m. Longissimus dorsi, which is the standardized pH measurement point, typically takes place 40 hours after the moment of slaughter. The completion of the rigor mortis process, when the functional indicators stabilize, is observed in the interval of 50 to 55 hours. That is why it is practical to send beef for processing no earlier than two days after the moment of slaughter.
A close relationship is observed between the kinetics of change in pH and temperature. The highest biochemical activity in muscle tissue after slaughter occurs within the first 12 hours. The pH reduction rate in this period for m. Longissimus dorsi is 0.085 units/h, and for m. Semimembranosus 0.094 units/h. The rates of decrease in temperature in the thickness of the corresponding muscles are: for m. Longissimus dorsi 1.99 degrees/h, for m. Semimembranosus 1.79 degrees/h. The higher rate of temperature change in m. Longissimus dorsi (Fig. 3) slows the activity of the enzyme system of the muscle tissue, and, accordingly, the reduction of pH is smaller; conversely, in the pelvis the temperature decreases more slowly, and the activity of the enzymes promotes an intensive reduction of pH.

When developing the technique for analyzing digital images of beef muscle tissue samples in the RGB color coordinate space during cold storage, the fact was taken into account that the pH indicator characterizes the biochemical state of the meat and determines the color of the raw material. The natural color of meat is due to myoglobin and hemoglobin. The non-protein portion of myoglobin, the heme, consists of iron and four heterocyclic pyrrole rings connected by methylene bridges. The iron atom can easily be oxidized by giving up an electron, and this determines the forms of myoglobin: myoglobin proper (Mb), oxymyoglobin (MbO2), and metmyoglobin (MetMb). In the presence of oxygen in the air, Mb oxidizes with the formation of oxymyoglobin, MbO2, which gives meat a pleasant, bright red color. MbO2 is an unstable compound and, under the influence of light and air in an acidic medium, it is converted into MetMb, while the heme iron passes from the bivalent to the trivalent state. The meat becomes brownish-gray [1]. It acquires this color when it contains more than 70% MetMb of the total proportion of muscle pigments, which is characteristic of raw material at a deep stage of autolysis. At the same time, in the post-slaughter period, free carbon dioxide is released due to the destruction of the bicarbonate system in the cells [16]. This process goes on most intensively in the first hours of autolysis. Carbonic acid decomposes into water and carbon dioxide. Carbon dioxide together with myoglobin forms cherry-red carboxymyoglobin, which explains the dark color of the muscles at the stage of rigor mortis. The peculiarities of the change in the color of the meat during cold maturation are shown in Table 2.
The obtained image fragments (Table 2), besides the intensity of the coloring of the muscle tissue, allow us to estimate the texture of the fibers. It should be noted that these images confirm other scientists' results on the connective tissue density in the initial period of maturation and in the hydrated period, at the stage of completion of rigor mortis. Obtaining scanned images of meat cuts makes it unnecessary to carry out histological studies using a microscope. It can, at the controller's request, provide information about the degree of cell destruction, which determines the direction of further use of the raw material.
Thus, the obtained fragments of meat cuts make it possible not only to characterize the correlation between the main forms of myoglobin and the color of meat, but also to determine the degree of destruction of the muscle tissue.
Typically, an automated process of determining the color characteristics of meat raw material includes the preparation of a sample; receiving and registering its color digital image in RGB format; transforming it into another digital format; and further computer processing of the information by algorithms that extract color characteristics. In the review of existing (instrumental and organoleptic) methods for assessing meat by color index [20,21], it is noted that changes in meat color are associated with such a color characteristic as lightness. That is why, in many systems, there is an additional transition from the original RGB color space to others, such as HSI or Lab [22]. When creating the method for analyzing color digital images, scanned cuts of chilled flesh of beef (see Figure 1) were used as inputs. They were made at certain (irregular) intervals of time over eight days (see Table 1). The developed method for analyzing digital images of beef muscle tissue samples in the RGB color coordinate space consists of three stages. Modeling of the stages of the methodology was carried out in the Matlab software environment.
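Where such a transition out of RGB is needed, Python's standard colorsys module offers HLS as a readily available lightness-bearing space (HLS here is a stand-in for HSI or Lab, not necessarily the space used by the cited systems; the pixel values are hypothetical):

```python
import colorsys

def lightness(r, g, b):
    """Convert an 8-bit RGB triple to HLS and return its lightness (0..1)."""
    _, l, _ = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return l

# Hypothetical pixels: bright-red fresh meat vs. a darker muscle sample.
bright = lightness(200, 30, 30)
dark = lightness(90, 20, 20)
```

The lightness channel separates the two samples regardless of their hue, which is the property such conversions are used for.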
Stage 1. Receiving fixed-size input images for further analysis. From each of the presented images of meat cut samples (Fig. 1), a 256×256-pixel fragment was selected in an arbitrary manner (in Table 2, the samples of the incoming images were magnified 4 times). The complete input set consisted of 74 output images.

Stage 2. Construction of red color distribution histograms in the RGB color system. For all fragments corresponding to a certain storage time, red color distribution histograms were constructed in the RGB color system (Fig. 4a). Then the histogram values were averaged and normalized, so that the maximum value of the R component did not exceed 1. For each averaged histogram, the coordinate of the maximum intensity was determined (Fig. 4b). Figure 4c shows the spatial distribution of the maximum intensity of the red color component, depending on the time of sampling, built on the basis of all the averaged histograms.
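Stage 2 — averaging the per-fragment histograms for one storage time, normalizing so the R maximum equals 1, and reading off the peak coordinate — might look as follows (a sketch with hypothetical histograms, not the study's data):

```python
import numpy as np

def peak_coordinate(histograms):
    """Average several red-channel histograms, normalize so the maximum
    value does not exceed 1, and return the intensity at the maximum."""
    mean_hist = np.mean(histograms, axis=0)
    mean_hist /= mean_hist.max()        # normalization: max R value = 1
    return int(np.argmax(mean_hist))

# Three hypothetical fragment histograms, each peaking near intensity 120.
rng = np.random.default_rng(1)
hists = [
    np.bincount(
        rng.normal(120, 8, 10000).astype(int).clip(0, 255), minlength=256
    )
    for _ in range(3)
]
coord = peak_coordinate(hists)
```

The returned coordinate is the value plotted against storage time in Stage 3.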
Stage 3. Construction of the point and analytical dependence (data) of the values of the coordinates of the red color maximum intensity on the time of obtaining the samples (Table 1). During a computer experiment based on the results of processing a test sample of 74 images of meat cuts and the analysis of the spatial distribution of the red color component, the point graphic dependence of the values of the coordinates of the red color maximum intensity on the time of obtaining the samples was constructed. The experimental data obtained were smoothed by first- and second-order polynomials. Smoothing with the second-order polynomial secured a lower mean-square error of approximation compared with the linear approximation: on average 5.6% of the minimum value of the coordinate of the maximum level of the red color component in the RGB color coordinate space (Fig. 5).
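The comparison of first- and second-order smoothing in Stage 3 can be reproduced with numpy.polyfit; the time points below mimic the Table 1 sampling scheme, but the peak coordinates are made-up illustration values, not the measured ones:

```python
import numpy as np

# Hypothetical (storage time, red-peak coordinate) data with the
# dip-then-rise shape described in the text.
t = np.array([6.0, 21, 31, 44, 55, 80, 120, 140, 172])
y = np.array([150.0, 135, 128, 125, 130, 145, 160, 158, 150])

def rms_error(degree):
    """Least-squares polynomial fit of the given degree; return the
    root-mean-square deviation of the fit from the data."""
    coeffs = np.polyfit(t, y, degree)
    residuals = np.polyval(coeffs, t) - y
    return float(np.sqrt(np.mean(residuals ** 2)))

linear_err = rms_error(1)
quadratic_err = rms_error(2)
```

Because the dependence is non-monotonic, the quadratic fit tracks it more closely than the straight line, which is the effect the 5.6% figure quantifies for the real data.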
In accordance with the obtained dependence, the spatial distribution of the red color component depending on the time of obtaining the samples (Fig. 4 and Fig. 5), we can assume that during the first 120 hours of storage, as a result of biochemical processes, not only oxygen is released in the meat, but other gases as well (including carbon dioxide) that oxidize myoglobin and are responsible for the dark color of the muscle tissue [1,16]. These processes go on most intensively before the onset of rigor mortis. Over time, the intensity of the red color increases, which is accompanied by the decomposition of the formed myoglobin forms and the appearance of MbO2. When stored for more than 140 hours, there is a deeper oxidation of iron with the formation of metmyoglobin, and the brightness of the meat decreases. It can be assumed that, after 120 hours of cold storage, the enzyme-driven glycolytic transformations in the muscle tissue are almost fully completed. With prolonged storage, hydrolytic enzymes are activated, and highly reactive compounds are formed (hydrogen sulfide, ammonia, and others) that are capable of oxidizing myoglobin to metmyoglobin, with a change in the red color intensity.
Using the method of scanning muscle tissue cuts in the production environment, with further information processing of the images, will make it possible to obtain data on the degree of development of autolytic changes in meat and to determine the direction of its use. This method can be recommended for differentiated sorting of beef for DFD and RSE defects at a certain time point.
However, to obtain objective results using this method of assessing the quality of meat by color characteristics, it is necessary to have regional databases on the color of raw materials, depending on age, on methods of fattening, slaughter, refrigeration.
Conclusions
1. Muscle tissue samples were taken from half-carcasses of Holstein bulls aged 15 months. The kinetics of the changes in the pH and temperature of beef during cold storage has been studied. It has been found that the higher rate of temperature change in m. Longissimus dorsi contributes to a slowdown in the activity of the enzyme system of the muscle tissue and, accordingly, provides a less intense reduction of pH. Conversely, in the pelvis the temperature decreases more slowly, so the activity of the enzymes contributes to an intense reduction of pH.
2. A method has been developed for the analysis of digital images of specimens of beef muscle tissue in the RGB color coordinate space, so as to study beef's color characteristics during cold storage. It has been established that the use of the suggested method, when smoothing with a second-order polynomial, provides a mean-square approximation error of 5.6% on average, relative to the minimum value of the coordinate of the red color component's maximum level, which confirms its objectivity when used to assess the color of meat. An analytical dependence has been established between the color of beef and the term of refrigerated storage with the use of information technologies. According to it, biochemical processes cause an intense oxidation of myoglobin, which produces a dark color of the muscle tissue. Over time, up to 120 hours of storage, the intensity of red increases, which is accompanied by the decomposition of the formed myoglobin forms and the appearance of MbO2. When stored for more than 140 hours, a deeper oxidation of iron begins, with the formation of metmyoglobin, and the brightness of the meat decreases.
3. The developed method allows automating the processes of registration and processing of images of muscle tissue in real time, increases the productivity of the assessment, and gives an opportunity to obtain objective, reliable results about the properties of meat during storage. It is recommended for application in systems for the automated processing of digital images of meat cuts when assessing meat quality, as well as in the development of colorimetric analyzers.
Fig. 1. Example of an image of muscle tissue cuts obtained 6 hours after the moment of slaughter.
|
v3-fos-license
|
2016-04-26T18:22:02.481Z
|
2013-09-24T00:00:00.000
|
8731593
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1038/msb.2013.48",
"pdf_hash": "b18fdbd9129c68d08d5f83b3b1baeac8ff79c900",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2172",
"s2fieldsofstudy": [
"Biology",
"Engineering",
"Medicine"
],
"sha1": "b18fdbd9129c68d08d5f83b3b1baeac8ff79c900",
"year": 2013
}
|
pes2o/s2orc
|
Biomedically relevant circuit-design strategies in mammalian synthetic biology
The development and progress in synthetic biology has been remarkable. Although still in its infancy, synthetic biology has achieved much during the past decade. Improvements in genetic circuit design have increased the potential for clinical applicability of synthetic biology research. What began as simple transcriptional gene switches has rapidly developed into a variety of complex regulatory circuits based on the transcriptional, translational and post-translational regulation. Instead of compounds with potential pharmacologic side effects, the inducer molecules now used are metabolites of the human body and even members of native cell signaling pathways. In this review, we address recent progress in mammalian synthetic biology circuit design and focus on how novel designs push synthetic biology toward clinical implementation. Groundbreaking research on the implementation of optogenetics and intercellular communications is addressed, as particularly optogenetics provides unprecedented opportunities for clinical application. Along with an increase in synthetic network complexity, multicellular systems are now being used to provide a platform for next-generation circuit design.
Introduction
Mammalian synthetic biology has established itself in only a few years as one of the strongest and most innovative biological disciplines (Grushkin, 2012). What began as simple transcriptional gene switches responding to supplied inducers has become an ever-expanding toolbox of genetically encoded circuits with highly complex functionality. The design of such mammalian circuits has proliferated, and is now able to apply regulatory mechanisms at the DNA, RNA or protein levels, or in some combination thereof (Keefe et al, 2010;Wieland and Fussenegger, 2012;Wang et al, 2013). The arsenal of circuits now available includes genetic toggle switches (Kramer et al, 2004b;Greber et al, 2008), band-pass filters (Greber and Fussenegger, 2010), time delay circuits (Weber et al, 2007b), memory devices (Burrill et al, 2012), oscillators (Tigges et al, 2009) and biocomputers (Benenson, 2011;Auslander et al, 2012a;Daniel et al, 2013). Circuits have been designed for diverse purposes, including to perform logic calculations (Rinaudo et al, 2007;Auslander et al, 2012a), screen for anti-tuberculosis compounds , control T-cell proliferation (Chen et al, 2010), kill cancer cells (Xie et al, 2011) or treat metabolic disorders (Kemmer et al, 2010;Ye et al, 2011Ye et al, , 2013. However, despite increased complexity and highly innovative circuit design, synthetic biology's current state is still that of a 'proof of concept' discipline. To progress toward clinically relevant applications, synthetic biology design has changed drastically in recent years. Of crucial importance are both the design of regulatory circuits and the biocompatibility of regulatory compounds. The original gene switches were constructed to respond to compounds with potential pharmacological side effects, such as antibiotics (Fussenegger et al, 2000;Weber et al, 2002). 
Newer circuits aim to reduce potentially negative impacts on patients, and therefore use food components and food additives such as vitamins and amino acids (Weber et al, 2007b;Bacchus et al, 2012), cell metabolites (Weber et al, 2007a;Wang et al, 2008), signaling transduction partners (Culler et al, 2010) and even endogenous cell type-specific transcription factors (Nissim and Bar-Ziv, 2010) to regulate the circuit function. This enables synthetic circuits to be directly integrated with the patient's metabolic networks to interface and respond to endogenous signals already present in the patient ).
Part of the success of mammalian synthetic biology has been due to its ability to constantly improve and create more advanced and robust genetic circuits. But another part of its success has been its ability to interact with other emerging biological disciplines (Ehrbar et al, 2008;Milias-Argeitis et al, 2011;Guo et al, 2012;Heng et al, 2013), the most obvious example being optogenetics (Chow et al, 2010;Chow and Boyden, 2011). Research at the interface with optogenetics has led to the development of non-invasive traceless methods of regulating various cellular functions by simple light irradiation (Levskaya et al, 2005;Tyszkiewicz and Muir, 2008;Yazawa et al, 2009;Kennedy et al, 2010;Ye et al, 2011;Wang et al, 2012;Bugaj et al, 2013;Muller et al, 2013aMuller et al, , 2013b. Photoreceptors, the sensory building blocks of optogenetic circuits, are abundant in nature. They are continually being identified, characterized and genetically modified by researchers, and therefore provide a constant flow of novel building blocks for constructing light-responsive synthetic biology tools (Airan et al, 2009;Chow et al, 2010).
Multicellular organisms consist of consortia of specialized cells that have evolved to execute specific activities and to coordinate them by intercellular communication, distributing highly complex tasks and workload to increase the overall fitness of the organism. Likewise, with synthetic biology-based circuits becoming increasingly complex and multilayered (Auslander et al, 2012a; Moon et al, 2012), a single designer cell will no longer be able to cope with the complexity of programmed functionalities. To overcome this limitation, engineered activities and metabolic workload will need to be distributed among different communicating designer cell populations that coordinate their activities to provide concerted actions. The design and construction of synthetic intercellular communication has thus far provided a ready and sustainable solution (Li and You, 2011). Engineering specialized and interconnected cell populations allows for a plug-and-play approach in which the combinations themselves determine the overall function of the cellular consortium (Regot et al, 2011; Tamsir et al, 2011). Synthetic multicellular consortia of communicating cell populations show increased control precision and reliability (Koseska et al, 2009) and will foster advances in tissue engineering, the assembly of complex cellular patterns with novel functionalities, and the design of synthetic hormone systems (Weber et al, 2007a). Moreover, the distribution of synthetic circuits among specialized cell populations may overcome apparent limitations in the engineering capacity and metabolic activities of individual cells and will enable the design of increasingly complex multicellular gene networks (Bacchus et al, 2012; Rusk, 2012).
In this review, we cover the novel repertoire of mammalian synthetic circuit design. We discuss regulatory circuits that enable a direct link between synthetic biology and endogenous cellular activities, continuing advances in circuit design, synthetic circuits that implement optogenetic features, and conclude with a discussion of synthetic intercellular communication and prosthetic networks.
Synthetic circuits based on rewired cell-signaling pathways
To integrate synthetic circuits with endogenous signaling pathways, cells are engineered to express transmembrane receptors that respond via endogenous signal transduction pathways. In this way, the circuits use the natural signaling machinery of the cell to regulate cellular functions. This can be done in a direct way, via elevated levels of second messengers (Airan et al, 2009), or in an indirect way, via activation of synthetic promoters (Kemmer et al, 2011; Ye et al, 2011, 2013; Stanley et al, 2012). This design enables a generic strategy for constructing synthetic control systems, which can be designed to respond to either endogenous or externally applied stimuli depending on which receptor is used. This strategy was adopted to construct a synthetic circuit for the treatment of the metabolic syndrome, a collection of interdependent pathologies including hypertension, hyperglycemia, obesity and dyslipidemia. Cells were engineered to express a chimeric trace amine-associated receptor 1 (cTAAR1), which produced a stronger cAMP response than its native counterpart to the clinically licensed antihypertensive drug Guanabenz (Wytensin®) (Ye et al, 2013). Increased intracellular cAMP levels triggered transgene expression from a synthetic promoter (P_CRE) via the cAMP-responsive element binding protein 1 (CREB1). In this way, the oral dose of Guanabenz simultaneously controlled hypertension and the expression of a bifunctional therapeutic peptide hormone, GLP-1-Leptin, which combines the anorexic and insulin secretion-stimulating effects of glucagon-like peptide 1 (GLP-1) with the lipid level-, food intake- and body weight-controlling capacity of leptin. Implanting the circuit in mice that were developing symptoms of the metabolic syndrome (ob/ob mice) enabled simultaneous correction of all associated pathologies (Figure 1A) (Ye et al, 2013).
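The logic of such receptor-rewired designs can be summarized as a two-step dose-response cascade. The toy Python sketch below illustrates the idea; all parameter values and the Hill-type dose-response shapes are illustrative assumptions, not values from the study:

```python
def hill(x, k, n):
    """Fraction of maximal response at input level x (simple Hill activation)."""
    return x ** n / (k ** n + x ** n)

def rewired_circuit(guanabenz):
    """Toy model of the cTAAR1 -> cAMP -> CREB1 -> P_CRE cascade.

    The receptor step and the promoter step are each modeled as a Hill
    activation (illustrative parameters), so transgene output rises only
    when the drug dose is high enough to raise cAMP appreciably.
    """
    camp = hill(guanabenz, k=1.0, n=2)    # receptor raises intracellular cAMP
    glp1_leptin = hill(camp, k=0.3, n=2)  # CREB1 drives P_CRE-controlled output
    return glp1_leptin
```

In this sketch, output is zero without drug and approaches its maximum at saturating doses, mirroring the dose-dependent control described above.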
Melanopsin, the photopigment of retinal ganglion cells that interacts with retinal (vitamin A), has been utilized to induce light sensitivity in otherwise non-sensitive cells (Melyan et al, 2005). In retinal ganglion cells, blue-light stimulation of melanopsin activates transient receptor potential channels (TRP channels) via a G-protein signaling cascade, resulting in calcium influx. By linking melanopsin signal transduction to the endogenous signaling pathway of the nuclear factor of activated T cells (NFAT), which is responsive to elevated calcium levels, Ye et al (2011) constructed a blue light-responsive circuit that enabled transgene expression from an NFAT-responsive promoter. Expression of GLP-1 under the control of the NFAT-responsive promoter resulted in blue light-controlled blood-glucose homeostasis in type 2 diabetic mice (Figure 1B) (Ye et al, 2011).
In a similar manner, Stanley et al (2012) utilized the endogenous NFAT signaling pathway to regulate gene expression by engineering the control of TRP-channel activation, in an approach that combined synthetic biology with nanotechnology. Iron oxide nanoparticles coated with anti-His antibodies were targeted to a temperature-sensitive TRP channel that had been modified to display extracellular His-epitope tags (TRPV1_His). The metal nanoparticles absorb radio-wave energy and transfer the heat to the temperature-sensitive TRPV1_His, which opens the channel and triggers calcium influx. The elevated calcium levels resulted in transgene expression from an NFAT-responsive promoter, and when used in mice, radio wave-heated activation of a modified human insulin gene was able to regulate glucose levels in the animals (Figure 1C) (Stanley et al, 2012).
Culler et al (2010) reported a highly sophisticated strategy that uses the recognition of disease markers to reprogram cell fate. They constructed an RNA-based device composed of specific aptamers designed to recognize endogenous signaling partners such as the subunits p50 and p65 of the transcription factor NF-κB. The aptamers were placed into key intronic locations near an alternatively spliced exon that harbored a stop codon. The exclusion of the alternative exon, which was part of a three-exon, two-intron minigene fused to a suicide gene (HSV-TK), was dependent on the binding of the p50 and p65 subunits to the aptamers. In the presence of tumor necrosis factor-α, the NF-κB pathway was induced, leading to the translocation of p50 and p65 to the nucleus. Their presence in the nucleus then regulated exclusion of the alternative exon and HSV-TK expression, ultimately resulting in cell death (Figure 1D) (Culler et al, 2010).
Sophisticated two-/multi-input design allows for increased circuit complexity
The successful development of synthetic gene circuits mainly rests on the construction of gene regulation systems where one specific input is converted by the circuit into a specific genetic output. These circuits are likely to have limitations in therapeutic settings, as disease states typically have complex biological profiles (Evan and Littlewood, 1998;Banegas et al, 2007). Recent work in synthetic biology is therefore focused on constructing two-input or even multiple-input circuits where combinations of the input signals determine the final genetic response.
A simple yet efficient strategy for designing two-input circuits that obey AND-gate logic was illustrated by Nissim and Bar-Ziv (2010). They exploited the differing activation strengths of the synthetic promoters CXCL1, SSX1 and H2A1 in various cancer cell lines. Each promoter regulated the expression of one of the two components of a split transcription factor, enabling functional gene activation only when both promoters used were sufficiently active. The split transcription factor consisted of two fusion proteins, one of which was the bacterial DocS fused to the viral VP16 transactivation domain, and the other the bacterial Coh2 fused to the yeast Gal4-DNA-binding domain. DocS-Coh2 association, and the subsequent activation of a Gal4 synthetic promoter by the associated transcription factor, were dependent on the combined activity of the CXCL1, SSX1 and H2A1 promoters. As the levels of endogenous transcription factors in turn controlled the activity of these promoters, this system allowed for cancer cell-specific recognition and the production of a response modifying subsequent cancer cell fate (Figure 2A) (Nissim and Bar-Ziv, 2010).
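The AND-gate behavior of this split-transactivator design can be captured in a few lines of boolean logic. The Python sketch below is a deliberate simplification that treats each promoter as simply 'active' or 'inactive':

```python
def split_transactivator_output(cxcl1_active: bool, ssx1_active: bool) -> bool:
    """AND-gate sketch of the two-input split-transactivator design.

    One promoter drives DocS-VP16, the other drives Gal4BD-Coh2; only when
    both halves are expressed does DocS-Coh2 association reconstitute a
    functional transactivator on the Gal4 synthetic promoter.
    """
    docs_vp16 = cxcl1_active          # half 1 made only where promoter 1 is active
    gal4bd_coh2 = ssx1_active         # half 2 made only where promoter 2 is active
    return docs_vp16 and gal4bd_coh2  # killer gene fires only upon association
```

Only the cell line in which both promoters are sufficiently active triggers the output, reproducing the truth table of a two-input AND gate.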
Figure 1 Synthetic circuits based on rewired cell-signaling pathways. (A) Guanabenz-induced synthetic circuit for the treatment of metabolic syndrome. Cells engineered to express the chimeric trace amine-associated receptor (cTAAR1) respond to Guanabenz by activating endogenous cAMP signaling. Increased levels of cAMP activate P_CRE-driven transgene expression of GLP-1-Leptin via the cAMP-responsive element binding protein 1 (CREB1). When implanted in mice developing symptoms of metabolic syndrome, the circuit enabled simultaneous targeting of several metabolic disorders (Ye et al, 2013). (B) Blue light- and (C) radio wave-induced synthetic circuits enabling glucose homeostasis. (B) Cells engineered to trigger calcium influx through transient receptor potential channels (TRPCs) by expressing blue light-responsive melanopsin link blue-light sensing to transgene expression via an NFAT-responsive promoter (P_NFAT). Implanted in diabetic mice, the circuit enabled blue light-controlled glucose homeostasis when expressing glucagon-like peptide 1 (Ye et al, 2011). (C) Cells engineered to trigger calcium influx through temperature-sensitive, His-tagged TRPCs (TRPV1_His). Antibody-coated nanoparticles for His-tag recognition (NP) enabled local nanoparticle heating of TRPV1_His, consequently allowing for calcium influx and linking radio-wave exposure to transgene expression via an NFAT-responsive promoter (P_NFAT). Implanted in mice, the circuit enabled radio wave-controlled regulation of blood glucose levels by expressing insulin (Stanley et al, 2012). (D) Synthetic circuit responsive to endogenous proteins allows for disease-targeted cell death. The RNA-based device is composed of specific aptamers for p50/p65 recognition (white circle), localized at key intronic positions near an alternatively spliced exon harboring a stop codon (red area) in a three-exon, two-intron minigene fused to a suicide gene (HSV-TK). Activation of the NF-κB pathway by stimulation of the tumor necrosis factor receptor (TNFR) with tumor necrosis factor-α (TNFα) enables p50/p65 regulation of exon exclusion, thereby linking disease markers to the killing of the diseased cells (Culler et al, 2010).

A highly sophisticated multi-input circuit, which allowed for specific cancer cell recognition and destruction, has been reported by Xie et al (2011). They constructed a cell-type classifier that scored high and low levels of cancer cell-specific microRNAs and, when these matched a predetermined profile, programmed the identified cancer cells for apoptosis. The high-level microRNA markers, miR-21, miR-17 and miR-30a, targeted the mRNA of the transactivator rtTA and the transrepressor LacI. rtTA was designed to activate expression of LacI, while LacI in turn was designed to repress expression of the apoptosis-inducing hBax by binding to the CAGop promoter. High levels of all three high-level microRNA markers would therefore be required for the expression of hBax. The low-level microRNA markers, miR-141, miR-142(3p) and miR-146a, were set to target the translation of hBax directly. This enabled the apoptosis-inducing transgene to be translated only if the levels of all three low-level microRNA markers were indeed low. When the cell classifier locked onto the specific high- and low-level microRNA profile, it executed specific destruction of the matching cancer cells (Figure 2B) (Xie et al, 2011). The possibility of designing synthetic circuits capable of performing logic-gate calculations was a landmark advance in synthetic biology (Kramer et al, 2004a; Rinaudo et al, 2007). Building on this, Auslander et al (2012a) reported the engineering of combinatorial circuits, using integrated two-molecule inputs, capable of performing complex logic calculations.
For the construction of such circuits they used the transcription factors ET1 and TtgA1, which respond to erythromycin and phloretin, as well as the RNA-binding proteins MS2 and L7Ae, which inhibit the translation of transcripts containing the specific RNA target motifs MS2 box and C/D box. In a plug-and-play fashion, implementing these simple transcription-translation control elements, trigger-programmable circuits able to process NOT, AND, N-AND and N-IMPLY logics were constructed. XOR computations were achieved by different combinations of two N-IMPLY gates, and the combination of three logic gates enabled cells to perform calculations as complex as additions (one AND gate and two N-IMPLY gates) and subtractions (three N-IMPLY gates) (Figure 2C) (Auslander et al, 2012a).
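The gate compositions described above can be checked with ordinary boolean logic. The following minimal Python sketch is an abstraction of the transcription-translation layers, not a biochemical model:

```python
def n_imply(a: bool, b: bool) -> bool:
    """N-IMPLY: output only when a is present and b is absent."""
    return a and not b

def xor(a: bool, b: bool) -> bool:
    """XOR assembled from two N-IMPLY gates, as in the combinatorial design."""
    return n_imply(a, b) or n_imply(b, a)

def half_adder(a: bool, b: bool):
    """Addition from one AND gate and two N-IMPLY gates: (sum, carry)."""
    return xor(a, b), a and b

def half_subtractor(a: bool, b: bool):
    """Subtraction from three N-IMPLY gates: (difference, borrow) for a - b."""
    return xor(a, b), n_imply(b, a)
```

Composing two N-IMPLY gates yields XOR, and adding one more gate yields the half-adder and half-subtractor behaviors described in the text.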
Light-responsive synthetic circuits
Light-sensing proteins are abundant in nature, and they permit light energy to be transduced into specific cellular responses (Sharrock, 2008; Do et al, 2009). Examples include the microbial light-sensitive ion channels called opsins, which become permeable to ion fluxes in response to light. The introduction of opsins into mammalian cells has in recent years grown into a powerful biological toolset called optogenetics, which allows for spatiotemporal control of cellular functions (Zhang et al, 2006). When cells are engineered to express light-responsive opsins, the activation state of single neuronal cells can be regulated (Boyden et al, 2005), heart function can be controlled (Arrenberg et al, 2010; Bruegmann et al, 2010) and vision restored (Doroudchi et al, 2011), simply by applying light. The potential of light as a non-invasive regulator of functions at the cellular, organ and even organism level has not gone unnoticed by synthetic biologists (Schroder-Lang et al, 2007; Airan et al, 2009; Chaudhury et al, 2013). Optogenetics has proven to be a powerful tool in mammalian synthetic biology, allowing easy control of cell fate with spatiotemporal precision.
Blue light-controlled circuits
Certain photosensitive proteins found in nature dimerize when exposed to light, and this property is being used to generate light-responsive synthetic circuits in mammalian cells. Such light-responsive elements include photoreactive light-oxygen-voltage (LOV) domains bound to the co-factor flavin mononucleotide (FMN), which upon blue-light absorption enable protein-protein interactions in prokaryotes, fungi and plants, and in doing so regulate various cellular functions (Demarsy and Fankhauser, 2009; Herrou and Crosson, 2011). Implementing the blue light-dependent interaction of the Arabidopsis-derived flavin-binding kelch repeat F-box 1 (FKF1), which contains an LOV domain, with the GIGANTEA protein (GI) resulted in the first light-regulated transgene expression system in mammalian cells (Yazawa et al, 2009). Yazawa et al fused GI to a Gal4-DNA-binding protein and FKF1 to a VP16 transactivation domain. Upon blue-light illumination, the FKF1-VP16 fusion protein was recruited to the GI-Gal4-DNA-binding protein, thereby enabling activation of gene expression from its cognate promoter containing Gal4-specific operator sites (Figure 3A) (Yazawa et al, 2009). Replacement of the Gal4-DNA-binding protein with a zinc finger protein (ZFP) made it possible to target specific sequences with engineered ZFPs, thereby opening the possibility of also regulating endogenous genes in response to light (Polstein and Gersbach, 2012).
The smallest LOV domain-containing protein, VIVID (VVD), derived from Neurospora crassa, incorporates the co-factor flavin adenine dinucleotide (FAD). VVD was utilized by Wang et al (2012) to engineer blue light-inducible gene regulation. A modified version of VVD was fused to a monomeric variant of the Gal4-DNA-binding domain and the p65 transactivation domain. Upon blue-light illumination, VVD was able to dimerize, consequently allowing the reconstituted Gal4-DNA-binding domain dimer to bind to its cognate promoter and activate gene expression. This design allowed for spatial control of gene expression in mice (Figure 3B) (Wang et al, 2012).
Blue light-induced protein-protein interaction found in Arabidopsis thaliana, between cryptochrome 2 (CRY2), which requires FAD as a co-factor, and the cryptochrome-interacting basic helix-loop-helix protein (CIB1), was implemented to regulate transgene expression by fusing the dimerization partners to the two parts of an artificially split Cre recombinase (Kennedy et al, 2010). Blue light enabled these parts to combine and thus produce Cre activity, which eliminated a stop sequence flanked by two loxP sites, thereby allowing for gene expression (Figure 3C) (Kennedy et al, 2010). CRY2 has further been implemented to achieve blue light-mediated protein oligomerization and photoactivation of the endogenous β-catenin pathway (Bugaj et al, 2013). Blue light has also been utilized to directly control protein function by stimulating enzymatic activity (Wu et al, 2009; Zhou et al, 2012), to achieve blue light-responsive migration of stem cells in synthetic extracellular matrices (Guo et al, 2012) and to enable blue light-guided protein localization (Strickland et al, 2012).

Figure 2 Multi-input design for increased circuit complexity. (A) Two-input circuit for cancer cell recognition and destruction. The synthetic promoters CXCL1, SSX1 and H2A1, which show diverse activation strengths in various cancer cell lines, are engineered to control the expression of either one of two subunits, DocS-VP16 and Gal4BD-Coh2, which together comprise a split transactivator. As the activities of the synthetic promoter combinations used (P1: CXCL1, SSX1 or H2A1; P2: CXCL1, SSX1 or H2A1) are regulated by endogenous, cell-specific transcription factors (TF1, TF2), the split transactivator is only expressed in a cell line where sufficient activities of both promoters are obtained. The association of DocS and Coh2 produces a functional transactivator that activates expression of a killer gene (TK1) from a Gal4 synthetic promoter (P_Gal4), thus leading to cell death (Nissim and Bar-Ziv, 2010). (B) Multi-input circuit for cancer cell recognition and destruction. A cell-type classifier for HeLa cells was constructed by implementing endogenously expressed microRNA profiles consisting of high- or low-expressed microRNAs (high/low sensors). Three high-expressed microRNAs (miR-21, miR-17 and miR-30a) targeted the mRNA of the activator rtTA and the repressor LacI (target sites miR-21t, miR-17t and miR-30at). rtTA was designed to activate the expression of LacI, and LacI in turn was designed to repress the final expression of an output gene (GOI), thereby only allowing activation of the gene in the presence of all three high-expressed microRNAs. Three low-expressed microRNAs (miR-141, miR-142(3p) and miR-146a) further targeted the mRNA of the output gene (target sites miR-141t, miR-142(3p)t and miR-146at), only allowing its expression at low levels of all three of the microRNAs. Regulation of a killer gene (hBax) with this cancer cell classifier enabled cell type-specific destruction of the HeLa cells (Xie et al, 2011). (C) Two-input circuits enable construction of plug-and-play assemblies performing sophisticated computations. The transcription factors ET1 and TtgA1, which repress the promoter activity of P_ETR2 and P_TtgR1 in response to erythromycin (E) and phloretin (P), were combined with the RNA-binding proteins MS2 and L7Ae, which inhibit the translation of transcripts containing the specific target motifs MS2 box and C/D box, to construct circuits capable of performing simple computations such as N-IMPLY logic, which is induced in the presence of only one specific input molecule. Assembling such simple circuits in a plug-and-play fashion allowed the construction of complex circuits capable of performing half-subtractor and half-adder computations (Auslander et al, 2012a).
Red light-controlled circuits
Figure 3 (B) In the dark, the monomeric Gal4-DNA-binding domain (GBD)-VVD fusion is unable to bind to the Gal4 promoter ((UAS_G)_5) and activate gene expression. Blue-light illumination enables VVD dimerization via its chromophore flavin adenine dinucleotide (FAD), thus reconstituting the GBD dimer and consequently activating gene expression (Wang et al, 2012). (C) Fusion proteins of CRY2 and CIBN to each part of a split Cre recombinase lacking enzymatic activity (CreN and CreC) enabled reconstituted Cre activity through the blue light-dependent interaction of CRY2, which requires FAD, and CIBN. The functional Cre acts by eliminating a stop sequence flanked by loxP sites, subsequently permitting gene expression (Kennedy et al, 2010). (D) Red light-controlled circuit. The two proteins PhyB and PIF6 interact upon red-light illumination, while far-red light inhibits the interaction. Fusions of PhyB, which uses the chromophore phytochromobilin (PCB), to VP16 and of PIF6 to the TetR repressor enabled red light-dependent association of the split transactivator, consequently activating gene expression from a TetR promoter ((TetO)_13). This action was reversed using far-red light, which caused dissociation of the PhyB and PIF6 fusions (Muller et al, 2013a). (E) UVB light-controlled circuit. Fusion proteins of UVR8 to the E repressor and of WD40 to VP16 enabled association of the split transactivator upon UVB illumination, as the UVR8 homo-dimer is released, allowing for WD40-VP16 recruitment. The reconstituted transactivator enables gene expression from a promoter containing an E-responsive operator motif ((etr)_8) (Muller et al, 2013b).

Blue light-responsive tools are not the only optogenetic tools to have been introduced into mammalian cell-based synthetic biology. Concurrent with the first blue light-based systems came red light-based systems, in which precise spatiotemporal control of cellular morphology was demonstrated by a system utilizing the plant phytochrome B (PhyB) and its interaction with phytochrome-interacting factor 6 (PIF6) upon exposure to red/far-red light.
Biomedically relevant circuit-design strategies W Bacchus et al
This system required exogenous addition of the co-factor phytochromobilin (PCB) (Levskaya et al, 2009). Capitalizing on the interaction mechanism of PhyB and PIF6, the first mammalian gene regulation system responsive to red light was constructed (Muller et al, 2013a). Muller et al engineered a split transcription factor based on fusion proteins of the tetracycline repressor TetR to PIF6 and of PhyB to the VP16 transactivation domain. Red light enabled the reconstitution of the split transcription factor, thereby activating gene expression from a TetR-specific target promoter. Far-red light illumination resulted in the dissociation of PhyB from PIF6 and the de-activation of gene expression (Figure 3D). The authors further showed its utility by inducing spatially controlled angiogenesis in chicken embryos using red light-controlled expression of the human vascular endothelial growth factor splice variant 121 (hVEGF_121) (Muller et al, 2013a).
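The red/far-red toggling behavior can be summarized as a simple state machine. The Python sketch below is an assumption-laden abstraction: it treats illumination as discrete pulses and ignores dark-reversion kinetics:

```python
def phyb_pif6_state(light_pulses):
    """Toggle-switch sketch of the PhyB-PIF6 red light system.

    'red' associates PhyB-VP16 with TetR-PIF6 (transgene ON); 'far_red'
    dissociates the pair (transgene OFF); 'dark' leaves the last state
    unchanged. Returns True when the TetR-specific promoter is active.
    """
    expressing = False
    for pulse in light_pulses:
        if pulse == "red":
            expressing = True       # split transactivator reconstituted
        elif pulse == "far_red":
            expressing = False      # far-red light releases PIF6 from PhyB
        # 'dark': state persists in this simplification
    return expressing
```

The state persists in the dark and is only flipped by the next red or far-red pulse, capturing the reversible ON/OFF logic described above.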
Ultraviolet B light-controlled circuits
A genetic circuit responding to ultraviolet B (UVB) light has recently been reported (Muller et al, 2013b). The authors used the A. thaliana photoreceptor protein UV resistance locus 8 (UVR8), which homo-dimerizes in the absence of UVB light, and the WD40 domain of its interacting partner COP1. By fusing UVR8 to the macrolide repressor E and WD40 to the VP16 transactivation domain, they constructed a split transcription factor that was activated upon exposure to UVB light, which disrupted UVR8 homo-dimerization and allowed for WD40-VP16 recruitment. The reconstituted transcription factor then activated gene expression from a chimeric promoter containing the E-responsive operator motif ((etr)_8) (Figure 3E). Finally, multichromatic control of gene expression was established by combining light-control circuits responding to blue, red and UVB light. Such a multichromatic system was implemented in a circuit used to control angiogenesis signaling processes (Muller et al, 2013b).
Engineering intercellular communication
Humans communicate via speech, but simpler organisms such as bacteria have developed ways to communicate through the direct exchange of molecules to monitor and adapt to their environment. For example, quorum sensing enables bacteria to synchronize activities, such as motility and gene expression, within a large group of cells, thereby adapting population-wide behavior (Bassler, 1999; Waters and Bassler, 2005). At the cellular level of the human body, specialized cells, such as those of the immune or endocrine systems, communicate through signaling molecules to regulate crucial biological processes. The natural existence of specialized cells, performing specific tasks that are coordinated by intercellular signaling, has in recent years inspired the design of synthetic multicellular assemblies (Weber et al, 2007a; Bacchus et al, 2012; Ortiz and Endy, 2012; Rusk, 2012). Not only do synthetic intercellular communication networks represent a way for synthetic biologists to build, and thereby understand, naturally existing systems (Weber et al, 2007a; Balagadde et al, 2008; Song et al, 2009), they also allow the design of gene-network topologies with increasing complexity and new control dynamics (Bacchus et al, 2012). Intercellular communication enables the engineering of genetic circuits that allow for robust and timely gene expression in entire cellular populations (Prindle et al, 2012), the possibility of programmed pattern formation (Basu et al, 2005; Liu et al, 2011), as well as the creation of interconnected multicellular assemblies very similar to those found in nature (Bacchus et al, 2012; Macia et al, 2012).
The first synthetic intercellular communication system in mammalian cells, constructed by Weber et al (2007a), allowed engineered sender cells to produce a metabolic signal in a cell density-dependent manner, and engineered receiver cells to respond to that signal with a distinct genetic response. The sender cells were engineered to express mouse-derived alcohol dehydrogenase (ADH), allowing supplemented ethanol to be converted into the volatile metabolite acetaldehyde. The receiver cells were engineered with an acetaldehyde-inducible regulation system based on the genetic components derived from Aspergillus nidulans, which enabled gene expression upon reception of acetaldehyde. Replacement of the engineered mammalian sender cells with those of E. coli, S. cerevisiae and L. sativum, organisms naturally expressing ADH, allowed for interkingdom communication, as the produced acetaldehyde was routed to the mammalian receiver cells. When microencapsulated circuit-transgenic designer cells were implanted into mice, the mammalian sender and receiver cells functioned in a manner similar to hormones. Ethanol provided through drinking water was converted by the sender cells into acetaldehyde and broadcast to the receiver cells, thereby triggering transgene expression ( Figure 4A) (Weber et al, 2007a).
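The sender/receiver division of labor can be sketched as two functions coupled through a shared medium. In this toy Python model, the conversion yield and induction threshold are illustrative assumptions, not measured values:

```python
def sender_cells(ethanol_mM, conversion_yield=0.8):
    """Sender: ADH converts supplemented ethanol into acetaldehyde (toy yield)."""
    return ethanol_mM * conversion_yield

def receiver_cells(acetaldehyde_mM, induction_threshold=0.5):
    """Receiver: an AlcR-type switch activates reporter expression above a threshold."""
    return acetaldehyde_mM > induction_threshold

def coculture_response(ethanol_mM):
    """Acetaldehyde is broadcast from senders to receivers through the medium."""
    return receiver_cells(sender_cells(ethanol_mM))
```

Because the sender and receiver are separate modules coupled only by the diffusible signal, the sender can be swapped for any acetaldehyde-producing organism without modifying the receiver, which is the essence of the interkingdom design.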
The generic design for constructing intercellular communication in mammalian cells (Weber and Fussenegger, 2011) by implementing distinct sender and receiver cell populations has been adapted to create intercellular communication systems responding to L-arginine (Weber et al, 2009), biotin (Weber et al, 2007a), nitric oxide (Wang et al, 2008) and L-tryptophan (Bacchus et al, 2012). The latter system was composed of sender cells engineered to express the bacterial gene tryptophan synthase (TrpB), allowing for the conversion of supplemented indole into the amino acid L-tryptophan. The receiver cells expressed a target gene via a constructed L-tryptophan-inducible regulation system based on the genetic components derived from Chlamydia trachomatis. The potential of intercellular communication in bioreactor settings, which could be important in manufacturing pharmaceuticals or biofuels, was illustrated by programming gene expression profiles to be dependent on inoculated cell concentrations. Combining the acetaldehyde and L-tryptophan intercellular communication systems allowed for complex multicellular assemblies to be constructed. Natural signaling systems of multicellular assemblies such as multistep information processing cascades, feed forward-based signaling loops, and two-way communication were mimicked simply by implementing the same genetic building blocks in different cellular configurations ( Figure 4B). For example, two-way communication was used in a model for angiogenesis by controlling vascular endothelial cell permeability (Bacchus et al, 2012).
Prosthetic networks
Prosthetic networks are synthetic devices that act as molecular prostheses, sensing, monitoring and scoring (disease-)relevant metabolites, processing off-level concentrations and coordinating adjusted diagnostic, preventive or therapeutic responses in a seamless, automatic and self-sufficient manner (Auslander and Fussenegger, 2013; Perkel, 2013). In contrast to the aforementioned transgene control devices, prosthetic networks are directly linked to host metabolism and triggered by the disease metabolite itself. The potential to use prosthetic networks as therapy was demonstrated in a pioneering example reported by Kemmer et al (2010). Elevated levels of uric acid are associated with pathological conditions such as tumor lysis syndrome and gout, so they constructed a genetic circuit for controlling uric acid homeostasis in mice. The circuit was composed of a modified Deinococcus radiodurans-derived protein (mUTS) able to relieve repression of its cognate promoter (P_UREX8) upon elevated levels of uric acid. After insulation of the circuit-transgenic cells by encapsulation in immunoprotective microcontainers (Auslander et al, 2012b) and implantation in urate oxidase-deficient mice developing gout, the circuit auto-connected to the peripheral circulation, sensed the pathologically high levels of uric acid in the bloodstream of the animals, and activated the P_UREX8-driven expression of a secretion-engineered version (smUox) of the clinically licensed Aspergillus flavus urate oxidase (Rasburicase), thereby reducing the levels of uric acid to subpathological levels (Figure 5A) (Kemmer et al, 2010).

Figure 4 Engineering of intercellular communication. (A) Acetaldehyde-based intercellular communication system enables interkingdom communication. Sender cells (SCs) able to produce acetaldehyde were composed of E. coli, S. cerevisiae, L. sativum or mammalian cells engineered to express alcohol dehydrogenase. Mammalian receiver cells (RCs) were engineered with an acetaldehyde-responsive element consisting of AlcR, which in the presence of acetaldehyde activates gene expression from a P_AIR promoter. Implanting the mammalian sender and receiver cells in mice allowed for the production of acetaldehyde by the sender cells, converting ethanol supplemented in the drinking water. The acetaldehyde was broadcast to the receiver cells, allowing for expression of secreted alkaline phosphatase (SEAP) (Weber et al, 2007a). (B) L-tryptophan-based intercellular communication system enables multicellular assemblies. Sender cells were engineered to express tryptophan synthase (TrpB), converting supplemented indole into L-tryptophan. The receiver cells were engineered with an L-tryptophan-responsive element consisting of the transactivator TRT, which activates gene expression from P_TRT in the presence of L-tryptophan. Combining the genetic components of the acetaldehyde- and L-tryptophan-based intercellular communication systems allowed for various sender (SC), processor (PC), receiver (RC) and sender/receiver cells (S/RC) to be constructed. Assembling these components in a plug-and-play manner allowed the creation of multicellular architectures mimicking natural phenomena (Bacchus et al, 2012).
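Returning to the uric acid circuit described above, its closed-loop behavior can be sketched as a simple feedback simulation. All rates, units and the threshold in this Python sketch are illustrative assumptions, not values taken from the study:

```python
def simulate_urate_homeostasis(initial_urate, hours,
                               threshold=6.0, production=0.4, clearance=0.3):
    """Closed-loop sketch of the urate-responsive prosthetic network.

    Each hour the host produces urate; when the level exceeds the threshold,
    mUTS de-represses P_UREX8 and the secreted urate oxidase (smUox) removes
    a fraction of the circulating urate. Below threshold the circuit idles.
    """
    urate = initial_urate
    for _ in range(hours):
        urate += production             # constitutive host production
        if urate > threshold:           # sensor: uric acid inactivates mUTS
            urate -= clearance * urate  # actuator: smUox degrades urate
    return urate
```

Starting from a pathologically high level, the simulated concentration falls and then hovers near the set point, mirroring the self-sufficient sense-and-respond behavior of the implanted circuit.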
Prosthetic networks have also been developed as a tool for artificial insemination (Kemmer et al, 2011). This was achieved by rewiring the luteinizing hormone receptor (LHR) to activate CREB1, enabling transgene expression when the receptor was stimulated by luteinizing hormone binding to it. Stimulation of LHR triggered a classic G protein-coupled receptor response, which increased levels of intracellular cAMP and triggered CREB1 binding to a synthetic promoter (P CRE) controlling expression of a secretion-engineered cellulase. Sperm were co-encapsulated with cells containing this circuit into cellulose-based capsules, which were implanted in the uterus of cows. At ovulation, elevated levels of luteinizing hormone were produced; the secreted cellulase degraded the cellulose-based capsules, rupturing the implants and ultimately resulting in successful fertilization (Figure 5B) (Kemmer et al, 2011).
Conclusion
Starting from the basic construction of transcriptional gene regulation systems utilizing native bacterial gene switches that respond to antibiotics, synthetic biology today includes novel circuits based on transcriptional, translational or post-translational regulation (Auslander and Fussenegger, 2013). These complex circuits are designed for use in therapy, as they are engineered to respond to metabolites of the human body, to native cell-signaling pathways and to disease- or cell-specific markers, and thus to target specific disease states (Wei et al, 2012). This development is by no means coincidental, as researchers have worked to find synthetic biology solutions for real clinical issues (Dean et al, 2009; Ruder et al, 2011; Folcher and Fussenegger, 2012; Weber and Fussenegger, 2012).
Yet, can synthetic biology deliver what it promises outside the laboratory as well? To achieve these ambitious goals, it is crucially important to solidify the advances that have been made in standardized genetic circuit design, and to create still more robust and complex circuits, as these in turn will ensure safe and reliable usage (Endy, 2005; Gardner, 2013). Capitalizing on the most recent technological advances in synthetic biology, the time has now come for these designer devices to be implemented and validated in clinical settings. The designer cells will therefore have to traverse the same clinical phases, and will likely meet with similar technical challenges, as gene- and cell-based therapies. However, with over two decades of records in gene-based treatment strategies, clinical implementation of synthetic biology devices may be more straightforward (de Amorim, 2009; Laurencot and Ruppel, 2009; Deplazes-Zemp and Biller-Andorno, 2012; Philp et al, 2013).
Figure 5 (A) Cells are engineered to respond to elevated levels of uric acid by dissociation of the mUTS repressor from the P UREX8 promoter, thereby enabling transgene expression. Expression of a urate transporter (URAT1) enhanced intracellular urate concentrations and circuit sensitivity. When implanted in urate oxidase-deficient mice, the circuit sensed pathologically high levels of uric acid in the blood stream, activated transgene expression of a secreted urate oxidase (smUox), and thus reduced the elevated levels of uric acid (Kemmer et al, 2010). (B) Prosthetic network for artificial insemination. Cells engineered to express the luteinizing hormone receptor (LHR) respond to luteinizing hormone by activating endogenous cAMP signaling, allowing for the activation of P CRE-driven transgene expression of cellulase via cAMP-responsive element binding protein 1 (CREB1). Engineered cells are co-encapsulated with sperm into cellulose-based implants and positioned in the uterus of cows. Ovulation-coordinated activation of cellulase expression in response to elevated levels of luteinizing hormone results in capsule degradation and sperm release (Kemmer et al, 2011).
In just a few years, optogenetics has become of marked importance for synthetic biology (Knopfel et al, 2010; Chow and Boyden, 2011). If applied in clinical settings, the regulation of crucial therapeutic proteins in response to light could become a reality for patients. This would provide an optimal solution for biopharmaceutical production, as such therapy could be used to induce protein production at a specific cell density without the addition of chemical inducers (Ye et al, 2011). While most mammalian light circuits are controlled by blue light, red light systems could prove to be highly influential, as red light penetrates tissue more efficiently than blue light does (Muller et al, 2013a). However, the clinical utility of light-controlled circuits is limited by their chromophores. For example, the red light-controlled circuit requires phytochromobilin, which is not only difficult to produce and to administer but also unlikely to become clinically licensed due to side effects caused by this plant-derived co-factor. Light-controlled devices assembled from human components are also preferred, to eliminate the risk of immune responses and other undesired side effects. With its all-human design and the ubiquitous co-factor vitamin A, the blue light-responsive melanopsin-derived optogenetic device meets the high standards of clinical compatibility (Ye et al, 2011) (Figure 1B). The development of multichromatic control circuits will further broaden the biomedical utility of light-controlled circuits and enable more accuracy in the implementation of the circuits (Muller et al, 2013b).
With the introduction of synthetic intercellular communication systems, synthetic biologists have not only found an innovative way to tackle the current processing limitations of single cells, but have also found a solution for designing the circuits of the future, which will likely continue to increase in complexity and thus require more components (Perkel, 2013). As intercellular communication allows for spatial separation of the cell populations, it could hold great promise for biomedical applications such as advanced tissue engineering. Implementation of multiple and interconnected cell implants in vivo could allow for remote control of differing functions, very much like the natural regulatory processes in the body. The application of engineered intercellular communication systems for therapeutic purposes is not restricted to mammalian cell design (Anderson et al, 2006; Duan and March, 2010; Wu et al, 2013). With their cell density-dependent transgene expression responses, intercellular communication systems represent a powerful asset for synthetic biology (Mitchell et al, 2011; Miller et al, 2012; Shong et al, 2012).
Increased complexity, reliability and accuracy of genetic circuit devices, in combination with newly developed technologies, will ensure synthetic biology's place among the biological engineering disciplines of the 21st century. This century is likely to mark mammalian synthetic biology's advance from a 'proof of concept' discipline to a tool commonly used in clinical medical practice.
|
v3-fos-license
|
2014-11-28T21:26:58.000Z
|
2014-01-15T00:00:00.000
|
36960744
|
{
"extfieldsofstudy": [
"Physics",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1364/oe.23.006379",
"pdf_hash": "a8a3d5655347e4af01cdbe3f3a311ec4ee2fd50a",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2173",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "a8a3d5655347e4af01cdbe3f3a311ec4ee2fd50a",
"year": 2014
}
|
pes2o/s2orc
|
Diffraction manipulation by four-wave mixing
We suggest a scheme to manipulate paraxial diffraction by utilizing the dependency of a four-wave mixing process on the relative angle between the light fields. A microscopic model for four-wave mixing in a Lambda-type level structure is introduced and compared to recent experimental data. We show that images with feature size as low as 10 micrometers can propagate with very little or even negative diffraction. The mechanism is completely different from that conserving the shape of spatial solitons in nonlinear media, as here diffraction is suppressed for arbitrary spatial profiles. At the same time, the gain inherent to the nonlinear process prevents loss and allows for operating at high optical depths. Our scheme does not rely on atomic motion and is thus applicable to both gaseous and solid media.
I. INTRODUCTION
The diffraction of light during propagation in free space is a fundamental and generally unavoidable physical phenomenon. Because of diffraction, light beams do not maintain their intensity distribution in the plane transverse to the propagation direction, unless belonging to a particular class of non-diffracting (Bessel) beams [1]. In nonuniform media, waveguiding is possible for specific spatial modes [2,3], or equivalently arbitrary images may revive after a certain self-imaging distance [4]. However in such waveguides, the suppression of diffraction for multimode profiles is not trivial, as each transverse mode propagates with a different propagation constant or group velocity, resulting in spatial dispersion of the profile.
Recently, a technique was suggested [5] and experimentally demonstrated [6] to manipulate (eliminate, double, or reverse) the diffraction of arbitrary images imprinted on a light beam for arbitrary propagation distances. The technique is based on electromagnetically induced transparency (EIT) [7] in a thermal atomic gas. Unlike other methods utilizing EIT [2][3][4][8][9][10][11][12][13][14][15][16][17], which rely on spatial non-uniformity, this technique operates in, and prescribes non-uniformity in, k⊥ space. Here, k⊥ denotes the transverse wave vectors, i.e., the Fourier components of the envelope of the field in the transverse plane, which is the natural basis for paraxial diffraction. The technique of Refs. [5,6] relies on the diffusion of the atoms in the medium and on the resulting so-called Dicke narrowing [18,19]. Due to Dicke narrowing, the linear susceptibility becomes quadratic in k⊥ and results in motional-induced diffraction that can counteract the free-space diffraction. Unfortunately, in the currently available experimental conditions, the resolution limit of motional-induced diffraction is on the order of 100 µm, preventing it from being of much practical use. Higher resolution requires a denser atomic gas, in which strong absorption is unavoidable due to imperfect EIT conditions. Very recently, Zhang and Evers proposed to circumvent the absorption by generalizing the model of motional-induced diffraction to a four-wave mixing (FWM) process in combination with EIT [20]. The FWM process further allows the frequency conversion of the image and increases the available resolution [20].
In this paper, we propose a scheme to manipulate diffraction using FWM [21][22][23] without the need for motional-induced diffraction. The mechanism we study originates from phase matching in k ⊥ space and does not require a gaseous medium; it is therefore directly applicable to solid nonlinear media. For our model to be general and accommodate motional-broadening mechanisms (not important in solids), we here still concentrate on describing atomic gases and validate our model against relevant experiments. The inherent gain of the FWM process allows us to improve the spatial resolution by working with relatively higher gas densities while avoiding loss due to absorption.
In Sec. II, we introduce a microscopic model of FWM in a Λ system, based on Liouville-Maxwell equations, similar to the one used in Ref. [24]. In Sec. III, we compare the model to recent experimental results of FWM in hot vapor [24,25]. We use our model in Sec. IV to show that, with specific choice of frequencies, the k ⊥ dependency of the FWM process can be used to eliminate the diffraction of a propagating light beam. We also present a demonstration of negative diffraction, implementing a paraxial version of a negative-index lens [26], similar to the one in Ref. [6] but with positive gain and higher resolution. Finally, we analyze the resolution limitation of our scheme and propose ways to enhance it. We show that, for cold atoms at high densities (∼10 12 cm −3 ), diffraction-less propagation of an image with a resolution of ∼10 µm can be achieved.
A. Model
We consider an ensemble of three-level atoms in a Λ configuration depicted in Fig. 1a. The atomic states are denoted as |u⟩, |l⟩, and |r⟩, for the up, left, and right states, and the corresponding energies are ǫu, ǫl, and ǫr. We introduce the optical transition frequencies ωul = (ǫu − ǫl)/ℏ and ωur = (ǫu − ǫr)/ℏ, taken to be much larger than the ground-state splitting ωlr = ωul − ωur = (ǫl − ǫr)/ℏ. To simplify the formalism, we assume the same dipole moment for the two optical transitions, µ = µul = µur, where µαα′ = ⟨α|µ·x̂|α′⟩, µ being the dipole-moment operator.
The atom interacts with three external, classical electromagnetic fields, propagating in time t and space r. Here ωi are the frequencies of a weak 'probe' (i = p) and two 'control' fields (i = c); εj are the polarization vectors (with j = p, cl, cr); k0i ≡ ωi/c are the wave vectors in the case of plane waves and otherwise the carrier wave vectors; and Ωi(r, t) are the slowly varying envelopes of the Rabi frequencies, satisfying |∂²t Ωi(r, t)| ≪ |ωi ∂t Ωi(r, t)| and |∂²z Ωi(r, t)| ≪ |k0i ∂z Ωi(r, t)|. We shall analyze the case of two identical control fields with the same Rabi frequency Ωc, wave vector k0c, and frequency ωc. The strong control and weak probe fields stimulate a weak classical 'Stokes' field (or 'conjugate') at a frequency ωs. The resonances are characterized by the one-photon detuning ∆1p = ωc − ωur and the two-photon detuning ∆2p = ωp − ωc − ωlr (see Fig. 1a). The population of the excited level |u⟩ decays to the ground levels |l⟩ and |r⟩ with rates Γl and Γr. The atomic coherence between the excited level |u⟩ and each of the ground levels |l⟩ and |r⟩ decays with rates Γd,l and Γd,r. For simplicity, we assume Γl = Γr = Γd,l = Γd,r ≡ Γ. Within the ground state, we consider population relaxation with symmetric rates Γl↔r and decoherence with rate Γlr.
In a frame rotating with the control frequency ωc, the equations of motion for the local density matrix ρ(r, t) are better written in terms of the slowly varying density matrix R(r, t), where Ru,j(r, t) = ρu,j(r, t) e^(iωct − ik0c z) for j = l, r and Rα,α′(r, t) = ρα,α′(r, t) for all other matrix elements. Here γcj = Γ − i(ωc − ωuj) are the complex one-photon detunings of the control fields (j = l, r). Assuming non-depleted control fields, constant in time and space, Ωc(r, t) = Ωc, we complete the description of the atom-field interaction with the propagation equations under the envelope approximation for the probe field (5a) and the Stokes field (5b), where ∇²⊥ ≡ ∂²/∂x² + ∂²/∂y² is the transverse Laplacian, g = 2πN|µ|²q/ℏ is the coupling strength proportional to the atomic density N, and q ≡ k0c ≈ k0p ≈ k0s. To obtain Eqs. (5), we neglected the second-order t and z derivatives of the envelopes.
B. Steady-state solution
The evolution of the fields is described by a set of non-linear, coupled differential equations for the density-matrix elements Rα,α′ and the weak fields Ωp and Ωs [Eqs. (3)-(5)], which require further approximations to be solved analytically. The solutions and assumptions are detailed in the Appendix, where we find the steady state of the system to first order in the weak fields, with R(0)α,α′ and R(1)α,α′ being the zero- and first-order steady-state solutions. The most important assumption is the proximity to two-photon resonance, such that δω is on the order of the ground-state frequency splitting ωlr and much larger than any detuning, Rabi frequency, or pumping rate in the system. Plugging Eqs. (A5)-(A8) and (6) for Ru,r and Ru,l into the propagation equations (5) and discarding terms rotating at δω and 2δω, we obtain the well-known FWM form including paraxial diffraction. Here αj = g(nl/γjl + nr/γjr) are the linear absorption coefficients of the probe (j = p) or Stokes (j = s) fields, with ni ≡ Ri,i the populations of the i = r, l levels; βp = g(nl/γpl + nr/γ*cr) and βs = g(nr/γ*sr + nl/γcl) are two-photon interaction coefficients; γjk = Γ − i(ωj − ωuk) [j = p, c, s; k = l, r] are complex one-photon detunings, together with the complex two-photon detuning. Eqs. (7) are similar to those obtained by Harada et al. [24], but here including the diffraction term ±i∇²⊥/(2q), which we require in order to explore the spatial evolution of the FWM process.
We start with the simple case of a weak plane-wave probe, Ωp(r) = f(z) e^(i kp⊥·r⊥) (Fig. 1). We assume that the generated Stokes field is also a plane wave, Ωs(r) = g(z) e^(i ks⊥·r⊥) e^(i(ksz − k0s)z). Substituting into Eqs. (7), the phase-matching condition ks⊥ = −kp⊥ is easily obtained, together with the resulting equations for f and g [23]. Assuming f(0) = 1 and g(0) = 0, we follow Ref. [23] and find the evolution of f and g along the medium in terms of the eigenvalues λ1,2. In the limit where |B| and |C| are much smaller than |A| and |D|, the solution is governed by independent EIT for the probe and Stokes fields with little coupling between them. In the opposite limit, the fields experience strong coupling, and the real part of the eigenvalues λ1,2 can be made positive and result in gain.
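Since the plane-wave equations for f and g form a linear 2×2 system, d/dz (f, g)ᵀ = M (f, g)ᵀ with M = [[A, B], [C, D]], their evolution can be sketched numerically. The following is an illustrative check only, with made-up coefficient values rather than the physical coefficients of Eq. (8):

```python
import numpy as np

def fwm_plane_wave(A, B, C, D, z):
    """Solve d/dz [f, g] = [[A, B], [C, D]] [f, g] with f(0) = 1, g(0) = 0
    by diagonalizing the (generically diagonalizable) 2x2 coefficient matrix."""
    M = np.array([[A, B], [C, D]], dtype=complex)
    w, V = np.linalg.eig(M)
    expMz = V @ np.diag(np.exp(w * z)) @ np.linalg.inv(V)
    f, g = expMz @ np.array([1.0, 0.0], dtype=complex)
    return f, g

def fwm_eigenvalues(A, B, C, D):
    """lambda_{1,2} = (A + D)/2 +/- E, with 2E = [(A - D)^2 + 4 B C]^{1/2}."""
    E = 0.5 * np.sqrt((A - D) ** 2 + 4 * B * C + 0j)
    return (A + D) / 2 + E, (A + D) / 2 - E
```

With B = C = 0 the fields decouple and the probe simply sees its linear response, f(z) = e^(Az); sufficiently strong cross-coupling BC can push Re λ1 above zero, giving gain to both fields, as stated in the text.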
III. COMPARISON WITH EXPERIMENTS
To verify our model, we have calculated the probe transmission as a function of the two-photon detuning and compared it to the data published in Refs. [24,25]. The Doppler effect due to the motion of the thermal atoms is taken into account by averaging the FWM coefficients, Q = A, B, C, D in Eq. (8), over the Doppler profile [27]. Assuming nearly collinear beams, the mean coefficients are obtained by weighting Q with a Gaussian velocity distribution of width v_th = √(kB T/m), where T is the cell temperature and m the atomic mass. Fig. 2 shows the transmission spectrum in (a) rubidium vapor and (b) sodium vapor (cell length L ≃ 5 cm). Our model reproduces the experimental spectra, including the Doppler-broadened absorption lines and the gain peaks for both the rubidium and sodium experiments. The missing peak in Fig. 2b is due to anti-Stokes generation, which is not included in the model.
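Numerically, this averaging amounts to weighting the coefficient by a Maxwell-Boltzmann velocity distribution. A minimal sketch, assuming the Doppler shift of nearly collinear beams enters only through the one-photon detuning (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def doppler_average(Q, delta_1p, q, v_th, n=4001, cutoff=5.0):
    """Average a FWM coefficient Q over the 1-D Doppler profile.

    Q        : function of the one-photon detuning (rad/s)
    delta_1p : one-photon detuning seen by an atom at rest (rad/s)
    q        : optical wave number (rad/m)
    v_th     : thermal velocity sqrt(k_B * T / m) (m/s)
    """
    v = np.linspace(-cutoff * v_th, cutoff * v_th, n)
    w = np.exp(-v**2 / (2 * v_th**2))
    w /= w.sum()                       # discrete Maxwell-Boltzmann weights
    # nearly collinear beams: a moving atom sees the detuning shifted by -q*v
    return np.sum(Q(delta_1p - q * v) * w)
```

Averaging a coefficient that is narrow compared with the Doppler width q·v_th strongly reduces its on-resonance value, which is how the model produces the Doppler-broadened lines of Fig. 2.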
IV. DIFFRACTION MANIPULATION BY FWM
We now concentrate on a specific choice of frequency detunings, for which the phase dependency of the FWM process can be used to manipulate the diffraction of the propagating probe and Stokes fields. To this end, we study the evolution of an arbitrary image F(r⊥) imprinted on the probe beam, with the boundary conditions Ωp(r⊥, 0) = F(r⊥) and Ωs(r⊥, 0) = 0. Our prime examples shall be the propagation of the image without diffraction or with reverse diffraction, in both cases while experiencing gain.
A. Image propagation
We start by solving Eqs. (7) in the transverse Fourier basis, where Ωp/s(k⊥, z) = ∫ d²r⊥ e^(−ik⊥·r⊥) Ωp/s(r⊥, z), and the coefficients Ā, B̄, C̄, and D̄ are Doppler averaged according to Eq. (12). We notice that Eqs. (13) are identical to Eqs. (9) with k∆ = 0 and with the substitutions Ā → Ā − ik²⊥/(2q) and D̄ → D̄ + ik²⊥/(2q). The evolution of the probe and Stokes fields then follows from Eq. (10) with these modified coefficients (Eq. (14)). We choose |e^(λ2z)| ≫ |e^(λ1z)| and obtain Ωp(k⊥, z) = Ωp(k⊥, 0) e^Z, where the exponent Z determines the changes in the spatial shape of the probe along its propagation. Re Z is responsible for the k⊥-dependency of the gain/absorption, and Im Z is responsible for the k⊥-dependency of the phase accumulation, that is, the diffraction-like evolution.
B. Suppression of paraxial diffraction
In general, in order to minimize the distortion of the probe beam, one is required to minimize both the real and imaginary k⊥-dependencies of Z. To better understand the behavior of Z, we expand it in orders of k²⊥. Taking the limit of small k²⊥/(2q) compared to 2E = [(Ā − D̄)² + 4B̄C̄]^(1/2), we expand Z and find its quadratic coefficient Z(2). The k⊥-dependency, governed by Z(2), can be controlled through the FWM coefficients Ā, B̄, C̄, and D̄ given in Eq. (8), by manipulating the frequencies of the probe and control fields (ωp, ωc), the control amplitude Ωc, and the density N. We demonstrate this procedure in Fig. 3, using for example the experimental conditions of the sodium experiment detailed in Fig. 2. First, we observe the gain of the probe and the Stokes fields in Figs. 3a and 3b, as a function of the one- (∆1p) and two-photon (∆2p) detunings. The gain is achieved around the two-photon resonance (∆2p ≈ 0), either when the probe is at the one-photon resonance (∆1p ≈ 0) or the Stokes is (∆1p ≈ ωlr, here ≈ 2 GHz); the latter exhibits higher gain, since the probe sits outside its own absorption line. The real and imaginary parts of Z(2) are plotted in Figs. 3c and 3d. When Re Z(2) = 0 (dashed line), the gain/absorption is not k⊥-dependent, whereas when Im Z(2) = 0 (solid line), the phase accumulation along the cell is not k⊥-dependent. When both vanish, Z(2) = 0, and a probe with a spectrum confined within the resolution limit k⊥ ≪ k0 propagates without distortion. The exact propagation exponent Z as a function of k⊥ for the point Z(2) = 0 (∆2p ≈ 0.4 MHz, ∆1p ≈ 0) is plotted in Fig. 3e. As expected, both the real (blue solid line) and imaginary (red dashed line) parts of Z are constant for k⊥ ≪ k0 (deviation of 1% within k⊥ < k0/2 and 0.1% within k⊥ < k0/4). In the specific example of Fig. 3, the probe's gain is ∼1.4, the Stokes' gain is ∼4, and k0 ≈ 40 mm−1.
To illustrate the achievable resolution, we shall employ a conservative definition for a characteristic feature size in the image in area units, a = (2π/k⊥)². [For example: for a Gaussian beam, a^(1/2) shall be twice the waist radius, and, for the field pattern E = 1 + cos(k⊥x) cos(k⊥y), the pixel area is a/2. The Rayleigh length is qa/8.] Fig. 4 presents numerical calculations of Eqs. (14) in the conditions found above for a probe beam in the shape of the symbol (R) with features of a ≈ 0.025 mm² (corresponding to k⊥ = k0 = 40 mm−1). The propagation distance is L = 45 mm, equivalent to 2 Rayleigh distances, as evidenced by the substantial free-space diffraction. Indeed, when Z(2) = 0, the FWM medium dramatically reduces the distortion of the image due to diffraction. Note that the image spectrum (black dashed-dotted line) lies barely within the resolution limit and that the Stokes distortion due to diffraction is also reduced. Direct numerical solutions of Eqs. (7) give exactly the same results. For the hot sodium system, the required control-field intensity is on the order of 100 mW for beams with a waist radius of a few mm, which is practically a plane wave on the length scale of the image.
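The role of the propagation exponent can be illustrated with a short Fourier-basis propagation routine. This is a toy model, not the full FWM calculation: Z is reduced to a purely quadratic exponent Z2·k⊥²·z, so Z2 = −i/(2q) reproduces free-space paraxial diffraction and Z2 = 0 models ideal suppression (grid parameters and names are illustrative):

```python
import numpy as np

def propagate_quadratic(field, dx, length, Z2):
    """Propagate a transverse profile a distance `length` when each k_perp
    component evolves as exp(Z2 * k_perp^2 * length).
    Z2 = -1j / (2 * q) is free-space paraxial diffraction;
    Z2 = 0 models perfect suppression of paraxial diffraction."""
    n = field.shape[0]                       # square n x n grid, pixel size dx
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)  # transverse wave numbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    spectrum = np.fft.fft2(field)
    return np.fft.ifft2(spectrum * np.exp(Z2 * k2 * length))
```

Because the multiplier has unit modulus for purely imaginary Z2·k⊥², the total power is conserved while the profile spreads; with Z2 = 0 the profile is returned unchanged, which is the diffraction-less case of Fig. 4 without its gain.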
C. Negative paraxial diffraction
Another interesting application of diffraction manipulation is imaging by negative diffraction, similar to the one proposed in Ref. [6]. Using the same tools as above, one can find the conditions for the reversal of paraxial diffraction, namely when Re Z (2) vanishes and Im Z (2) = 1 (free space diffraction is equivalent to Z (2) = −i).
At these conditions, as demonstrated in Fig. 5, the FWM medium of length L focuses the radiation from a point source at a distance u < L to a distance v behind the cell, where u + v = L. The mechanism is simple: each k⊥ component of the probe accumulates outside the cell the phase −ik²⊥(u + v)/(2q) = −ik²⊥L/(2q) and inside the cell the phase ik²⊥L/(2q), summing up to zero phase accumulation. The probe image thus 'revives', with some additional gain, at the exit face of the cell.
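The revival condition can be checked numerically. In the sketch below (a 1-D paraxial toy model without the gain term; grid sizes and names are illustrative), the free-space segments apply the phase −ik⊥²d/(2q) and the medium applies the opposite phase over its length L, so a field propagated a distance u, then through the medium, then v = L − u, returns to its input profile:

```python
import numpy as np

q = 2 * np.pi / 589e-9            # carrier wave number (sodium line, for scale)
n, dx = 512, 5e-6                 # 1-D transverse grid
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

def free_space(field, d):
    """Paraxial free-space propagation over a distance d."""
    return np.fft.ifft(np.fft.fft(field) * np.exp(-1j * k**2 * d / (2 * q)))

def negative_medium(field, L):
    """Medium with reversed paraxial diffraction (Im Z(2) = +1, no gain)."""
    return np.fft.ifft(np.fft.fft(field) * np.exp(+1j * k**2 * L / (2 * q)))

x = (np.arange(n) - n // 2) * dx
src = np.exp(-x**2 / (2 * (20e-6) ** 2))   # narrow 'point-like' source

L, u = 45e-3, 15e-3
out = free_space(negative_medium(free_space(src, u), L), L - u)
# the phases cancel exactly: exp(-ik^2 u/2q) exp(+ik^2 L/2q) exp(-ik^2 (L-u)/2q) = 1
```

Because all three operators are diagonal multiplications in k⊥ space, the cancellation for u + v = L is exact, mirroring the imaging argument above.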
V. CONCLUSIONS AND DISCUSSION
We have suggested a scheme utilizing the k⊥-dependency of the four-wave mixing process for manipulating diffraction during propagation. The inherent gain of the FWM process allows one to take advantage of high optical depths while avoiding absorption and, by that, achieving higher resolution than with previous EIT-based schemes [5,6]. As opposed to a recent proposal incorporating FWM [20], our scheme does not require atomic motion and is expected to work even more efficiently in its absence. We have introduced a microscopic model for the FWM process, based on Liouville-Maxwell equations and incorporating Doppler broadening, and verified it against recent experimental results. We have delineated the conditions for which, according to the model, the FWM process suppresses paraxial diffraction. We have also demonstrated the flexibility of the scheme to go beyond suppressing the regular diffraction and reverse it, yielding an imaging effect while introducing gain. Our proposal was designed with experimental limitations in mind, and its demonstration should be feasible in many existing setups.
The resolution limit a−1 ∝ k0² of our scheme (and thus the number of 'pixels' S/a for a given beam area S) is proportional to the resonant optical depth. In practice, the latter can be increased either with higher atomic density N or with narrower optical transitions. For example, using a density of N = 5·10¹² cm−³, 10 times higher than in the sodium setup of Ref. [24], the limiting feature size would be 250 µm² (k0 ≈ 125 mm−1). As long as NL = const, the other parameters required for the suppression of diffraction remain the same. At the same time, avoiding Doppler broadening by utilizing cold atoms or perhaps solids would substantially increase the resolution limit. Assuming cold atoms with practically no Doppler broadening (and a ground-state relaxation rate Γlr = 100 Hz), the same limiting feature of 250 µm² can be obtained at a reasonable density of 10¹² cm−³. Finally, we note that the best conditions for suppression of diffraction are not always achieved by optimizing Z(2) alone (first order in k²⊥/k0²); in some cases, one could improve significantly by working with higher orders, as demonstrated in Fig. 6. Combining the aforementioned methods for resolution enhancement with N = 10¹² cm−³ cold atoms, a resolution-limited feature size of down to about 100 µm² with unity gain can be achieved. Going beyond this resolution towards the 1-10 µm² scale, for applications in microscopy or lithography, would require further work aimed at lifting the paraxial assumption.
The FWM process conserves quantum coherence at the level of single photons, as was previously shown theoretically [28] and experimentally [29] by measuring the spatial coherence (correlation) between the outgoing probe and Stokes beams. An intriguing extension of our work would thus be the generalization of the scheme to the single-photon regime. Specifically, the main limitation in the experiment of Ref. [29] was the trade-off between focusing the beams to the smallest possible spot and keeping the 'image' from diffracting throughout the medium. Our scheme could circumvent this trade-off by maintaining the fine features of the image along much larger distances.
In addition, our scheme can be utilized in optical trapping experiments for the production of traps that are long along the axial direction and tight in the transverse direction. An intricate transverse pattern can be engineered, for example, a thin cylindrical shell or a 2D array of narrow wires, that will extend over a large axial distance to allow for high optical depths. This can be further extended by modulating the control fields along the axial direction, such that the probe (and Stokes) diffracts in the absence of the controls and 'anti-diffracts' in their presence. In this arrangement, non-uniform traps in the axial direction can be designed.
Appendix
Assuming control fields constant in time and space and much stronger than the probe and Stokes fields, the steady-state solution of Eqs. (3)-(5) can be approximated to lowest orders in the weak fields as Rα,α′ ≃ R(0)α,α′ + R(1)α,α′, where R(0)α,α′ is the zero-order and R(1)α,α′ the first-order steady-state solution. We find R(0)α,α′ from the zero-order equations of motion by solving (∂/∂t)R(0)α,α′ = 0. Under the assumption |Ωc/ωlr| ≪ 1, we have R(0)r,l = 0, and we obtain expressions for the other elements, sharing a common denominator, with the optical pumping rates Al/r = |Ωc|² Im[(ωul/ur − ωc − iΓd)−1].
To find R(1)α,α′, we start from the first-order equations of motion, eliminating the explicit dependency on time. The steady-state solution is then obtained from the complete set of linear algebraic equations for the slowly varying first-order variables. The exact solution of Eqs. (A6) is easily obtained but is unmanageable and bears no physical intuition. Rather,
|
v3-fos-license
|
2019-04-27T13:13:16.943Z
|
2019-02-20T00:00:00.000
|
134485033
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://se.copernicus.org/articles/10/637/2019/se-10-637-2019.pdf",
"pdf_hash": "debd7f64d6135b99336b5f8cb7347d0b3acf9a6f",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2174",
"s2fieldsofstudy": [
"Geology"
],
"sha1": "08739793961919fd21b5734232c6e051fb1eed18",
"year": 2019
}
|
pes2o/s2orc
|
Migration of Reflector Orientation Attributes in Deep Seismic Profiles: Evidence for Decoupling of the Yilgarn Craton Lower Crust
Interpretation of deep seismic data is challenging due to the lack of direct geological constraints from drilling and the more limited amount of data available from 2-D profiles in comparison to hydrocarbon exploration surveys. Thus other constraints that can be derived from the seismic data themselves can be of great value. Though the origin of most deep seismic reflections remains ambiguous, an association between seismic reflections and crustal strain, e.g. shear zones, underlies many interpretations. Estimates of the 3-D orientation of reflectors may help associate specific reflections, or regions of the crust, with geological structures mapped at the surface whose orientation and tectonic history are known. In the case of crooked 2-D onshore seismic lines, the orientation of reflections can be estimated when the range of azimuths in a common midpoint gather is greater than approximately 20 degrees, but integration of these local orientation attributes into an interpretation of migrated seismic data requires that they also be migrated. Here we present a simple approach to the 2-D migration of these orientation attributes that utilises the apparent dip of reflections on the unmigrated stack and maps reflector strike, for example, to a short linear segment depending on its original position and a migration velocity. This interpretation approach has been applied to a seismic line shot across the Youanmi Terrane of the Australian Yilgarn Craton, and indicates that the lower crust behaved differently from the overlying middle crust as the newly assembled crust collapsed during the Late Archean. Some structures related to approximately east-directed shortening are preserved in the middle crust, but the lower crust is characterized by reflectors that suggest N-NNE-oriented ductile flow.
Deployment of off-line receivers during seismic acquisition allows the recording of a larger range of source-receiver azimuths, and should produce more reliable future estimates of these reflector attributes.
Introduction
Deep seismic reflection surveys that image the entire continental crust are typically acquired as 2-D profiles due to cost and are able to provide subsurface images with a resolution of the order of 100 m or better. The interpretation of these deep seismic profiles, however, is often limited by the presence of reflections that can originate from locations out of the plane of the seismic profile, resulting in cross-cutting reflections in the migrated seismic section. In such situations it is difficult to identify which reflection, if any, should be included in an interpretation. Many onshore profiles have a crooked geometry because they are acquired along existing access roads. By using a 3-D travel time equation to determine the coherence of reflections, Bellefleur et al. (1997) showed how this limited 3-D geometry could be exploited to estimate the true 3-D orientation of subsurface reflectors where the acquisition line was particularly crooked, for example at sudden large bends in the road. Taking advantage of the increase in computing power over the last two decades, Calvert (2017) extended this method to every common depth point (CDP) in a crooked seismic profile, additionally providing quantitative estimates of the relative errors in the estimated angles of reflector dip and strike, and potentially also stacking velocity. These results, for example the angles and error estimates, are displayed as a function of time at each CDP on unmigrated seismic sections. Although it is possible to make general inferences on the distribution of subsurface reflectors, more detailed interpretation requires that the angle estimates be represented closer to their true subsurface position, i.e.
on migrated seismic sections, which is an issue that was not addressed by Calvert (2017).The purpose of this paper is to present an approach to the migration of these reflector orientation attributes that allows their use in the interpretation of conventional 2-D migrated deep seismic sections; for example, by migrating more steeply dipping reflections into the middle crust, the predominant orientation of lower crustal reflections can be clarified.The importance of obtaining more accurate orientation estimates for positioning reflectors in 3-D, by for example deploying additional crossline receivers, will also be discussed.
Reflector orientation estimation
When a crooked seismic reflection line is processed, it is necessary to choose a slalom line through the distribution of source-receiver midpoints, and to define the CDP bins together with their dimensions along this line. Within a CDP bin, the conventional 2-D hyperbolic travel time equation may not accurately represent out-of-plane reflections due to the varying source-receiver azimuths. In these circumstances and under the straight-ray assumption used in stacking velocity analysis (Taner and Koehler, 1969), reflection travel times are better described by a 3-D travel time equation that includes the dip and strike of the reflector (Levin, 1971). When the seismic line is linear, the angles representing dip and strike cannot be uniquely determined, but along a crooked seismic profile, the distribution of source-receiver azimuths within a CDP gather varies, allowing the dip and strike to be well determined if a sufficiently large range of azimuths is present, for example where there is a large change in the direction of a seismic line (Bellefleur et al., 1997). In practice, most single CDP gathers on a crooked seismic line contain an insufficient number of traces, but this limitation can be mostly overcome by combining multiple CDP gathers into a much larger supergather that can be used for the estimation process; both Bellefleur et al. (1997) and Calvert (2017) provide examples of how the use of a large supergather permits the independent recovery of both dip and strike angles in many situations. The estimation method assumes that reflections within the supergather originate from a locally planar interface; as more CDP gathers are combined, this assumption can break down, especially where the geology is complex, for example where folded reflectors are present; for the crooked lines tested, supergathers of 40-80 CDPs appear to be adequate. If the algorithm were applied to every CDP gather with the output comprising the stacked trace computed using a moveout correction based on the estimated values of dip and strike, then this process could be viewed as an automated version of the cross-dip correction that is often applied manually to crooked seismic profiles, e.g. Nedimović and West (2003a) or Beckel and Juhlin (2018). It should, however, be noted that this cross-dip correction usually makes the assumption of linear moveout in the cross-dip direction within a CDP gather, which is not necessarily the case, especially where the line is particularly crooked.
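The 3-D travel time idea can be made concrete with a small sketch: under the straight-ray, constant-velocity assumptions stated above, the reflection time from a planar reflector follows from the image-source construction. The coordinate conventions (x = east, y = north, z up) and function names below are our own, not those of Levin (1971) or the authors' implementation.

```python
import numpy as np

def plane_normal(dip_deg, strike_deg):
    """Unit normal of a planar reflector with the given dip and strike.
    Strike is measured clockwise from north (the y axis); x = east,
    z = up; the dip direction is taken 90 degrees clockwise from strike."""
    dip = np.radians(dip_deg)
    dip_dir = np.radians(strike_deg) + np.pi / 2.0
    return np.array([np.sin(dip) * np.sin(dip_dir),
                     np.sin(dip) * np.cos(dip_dir),
                     np.cos(dip)])

def reflection_time(src, rec, n_hat, d, v):
    """Straight-ray reflection travel time from the plane n_hat . x = d
    in a constant-velocity medium: mirror the source across the plane
    and take the direct distance from the image to the receiver."""
    src, rec = np.asarray(src, float), np.asarray(rec, float)
    s = src @ n_hat - d                 # signed distance of source to plane
    image = src - 2.0 * s * n_hat       # image source across the plane
    return np.linalg.norm(image - rec) / v
```

For a horizontal reflector 3 km below a coincident source and receiver in a 6000 m/s medium this gives the expected two-way time of 1 s; scanning trial dip/strike pairs through `plane_normal` supplies the predicted moveouts used in the grid search of the next paragraphs.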
Thus in a CDP gather and assuming a root mean square (RMS) velocity function, the semblance of a reflection (Neidell and Taner, 1971) can be calculated using a small time window, e.g. 40 ms, at each zero offset time for a range of trial angles of dip and strike, which is measured from the north. At each zero offset time, the estimated dip and strike correspond to the angles with the maximum semblance, i.e. the most coherent reflection (Bellefleur et al., 1997). Although the searched strike angle varies from −180 to +180°, only values between 0 and 180° are output in the algorithm employed here because negative values are increased by 180° to ensure that the same value is output for reflectors with parallel strike directions but an opposite sense of dip; for example, reflectors dipping to the north and south will both be assigned a strike of 90° (Calvert, 2017). Since this method is a grid search, the relative error in the orientation angles can be characterized by defining a threshold, for example 90 % of the maximum semblance, and finding the largest difference in angle from this maximum to any other angle with a semblance greater than the threshold. These error values characterize at each time sample the size of the semblance maximum as a function of dip and strike; as an example, for a horizontal reflector and a survey geometry with a broad range of source-receiver azimuths, the dip angle will likely be well resolved, but the error for the estimated strike could be as large as ±90°, because the strike is not well defined in this specific case. It is additionally possible to extend the method to velocity analysis by repeating the estimation of an optimal dip and strike angle for a range of trial RMS velocity values; more details on the error estimation can be found in Calvert (2017). Since all the estimated attributes, angle, velocity, and error, are found for each zero offset time sample within a CDP gather, they are represented on a seismic section that corresponds to the unmigrated stack, i.e. the attributes are not positioned at their true subsurface position.
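A minimal sketch of the semblance measure and the 90 % threshold error characterization follows; the array layout and function names are our own, and a real implementation would also apply the dip/strike moveout correction before windowing.

```python
import numpy as np

def semblance(window):
    """Semblance (Neidell and Taner, 1971) of moveout-corrected traces;
    window is a 2-D array of shape (time samples, traces).
    Returns 1.0 for identical traces, smaller values otherwise."""
    num = np.sum(np.sum(window, axis=1) ** 2)
    den = window.shape[1] * np.sum(window ** 2)
    return float(num / den) if den > 0 else 0.0

def angle_error(semb_grid, angles, axis, threshold=0.9):
    """Relative angular error along one axis (0 = dip, 1 = strike) of a
    dip/strike semblance grid: the largest angular distance from the
    global maximum to any node whose semblance exceeds threshold * max.
    `angles` is the scan vector for the chosen axis."""
    imax = np.unravel_index(np.argmax(semb_grid), semb_grid.shape)
    mask = semb_grid >= threshold * semb_grid.max()
    idx = np.nonzero(mask)[axis]
    return float(np.max(np.abs(angles[idx] - angles[imax[axis]])))
```

A broad plateau of high semblance along the strike axis, as in the horizontal-reflector case discussed above, yields a large strike error even when the dip error is small.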
Attribute migration
It is possible to apply 3-D prestack time migration to crooked 2-D reflection profiles; in some cases, for example where the deviation from 2-D is not great, the result is readily interpretable, but in others, the output 3-D volume can be dominated by artefacts from wave equation migration, with most structures incompletely imaged due to the limited amount of data recorded in the cross-line direction (Nedimović and West, 2003b). Thus crooked 2-D seismic profiles are usually migrated in 2-D for interpretation. To better integrate orientation angle estimates into the seismic interpretation it is therefore desirable to reposition these attributes in a way that is analogous to seismic data migration, so that they can be superimposed on their corresponding reflections. Attributes do not satisfy the assumptions necessary for wave-equation migration, and the result of applying such an algorithm to an unmigrated section containing attribute values would be meaningless. However, if the apparent dip of reflections on the unmigrated section is known, then a line migration or segment migration algorithm can be used to position the attribute value at a new output location corresponding to the migrated position of the corresponding reflection (Hagedoorn, 1954; Calvert, 2004). The sample value at each time and CDP location on the unmigrated section can be mapped to a small linear segment whose output position and dip is determined by the input position, apparent dip, and migration velocity (Raynaud, 1988). With seismic data, when multiple reflections are mapped to the same output location they are summed together, but for the attribute migration algorithm presented here, the output value is modified to be the attribute with the greatest semblance, implying that some less coherent attribute values will not be represented in the migrated output.
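For constant velocity, this segment (map) migration reduces to the classical zero-offset relations: a sample at (x0, t0) with apparent time dip p = dt/dx images at x_m = x0 − v²·t0·p/4 and t_m = t0·cos θ, where sin θ = v·p/2. The sketch below is our own code, not the authors' algorithm; the default segment length mirrors the 320 m used later for line 10GA-YU2.

```python
import numpy as np

def migrate_sample(x0, t0, p, v, seg_len=320.0):
    """Constant-velocity map migration of one unmigrated sample in the
    spirit of Hagedoorn (1954) / Raynaud (1988): the sample at
    (x0 [m], t0 [s]) with apparent time dip p = dt/dx [s/m] moves
    up-dip onto a short linear segment.  Returns the segment end
    points ((x1, t1), (x2, t2)), or None if the dip is unmigratable."""
    sin_th = v * p / 2.0                    # sine of the true dip angle
    if abs(sin_th) >= 1.0:
        return None                          # steeper than physically possible
    cos_th = np.sqrt(1.0 - sin_th ** 2)
    xm = x0 - v ** 2 * t0 * p / 4.0          # migrated lateral position
    tm = t0 * cos_th                         # migrated two-way time
    pm = 2.0 * sin_th / (v * cos_th)         # time dip of the migrated segment
    dx = seg_len / 2.0
    return (xm - dx, tm - dx * pm), (xm + dx, tm + dx * pm)
```

A flat event (p = 0) stays put, while dipping events move up-dip and to earlier times, which is exactly the behaviour described for the strike attribute in Fig. 3b; the attribute value rides along with its sample, and where output segments overlap the text's rule of keeping the most coherent attribute applies.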
In principle, input attributes could be mapped to planar facets within a 3-D volume, but with narrow-azimuth crooked line surveys the uncertainties in determining the dip and strike along individual reflections are likely to be too large, resulting in the fragmentation of individual reflections after migration. Though 3-D migration is preferable in theory, the interpretation of an incomplete, sparse set of reflections in a 3-D volume is also likely to be challenging, and a better approach may be to forward model the appearance of 3-D structures in the crooked 2-D seismic profile.
Yilgarn Craton example
This pragmatic approach to attribute migration is illustrated using a high-quality seismic line, 10GA-YU2, which was shot in 2010 over the Youanmi Terrane of the Archean Yilgarn Craton of Western Australia as a collaborative project between Geoscience Australia (GA) and the Geological Survey of Western Australia (Fig. 1; Wyche et al., 2014). The Youanmi Terrane, which contains several north-northeast striking greenstone belts, is the 3.05-2.70 Ga core of the craton (Pidgeon and Wilde, 1990; Van Kranendonk et al., 2013). It is separated by the Ida Fault from the > 2.95-2.66 Ga Eastern Goldfields Superterrane (Czarnota et al., 2010), which was accreted during a period of intermittent crustal shortening from > 2.73 to 2.65 Ga (Myers, 1995). The seismic line extends from the 2.81 Ga Windimurra Igneous Complex (Ivanic et al., 2014) in the west into the Eastern Goldfields Superterrane in the east and is mostly located over granitoid plutons and tonalitic gneiss, but it also crosses the Sandstone greenstone belt. The interior of the Yilgarn Craton was unaffected by any large-scale post-Archean tectonic events, but was intruded by four sets of mafic dyke swarms during the Proterozoic.
Youanmi seismic survey
Line 10GA-YU2 was shot every 80 m using a source array of three Hemi60 vibrators and recorded by a 300-channel symmetric split spread with receiver groups every 40 m. A 12 s long Varisweep technique was used with either two or three sweeps recorded at each vibration point (VP). The seismic data were originally processed by Geoscience Australia using a conventional sequence of crooked-line geometry, refraction statics, geometric spreading correction, spectral equalization, velocity analysis, normal moveout, residual statics, dip moveout correction, stretch mute, stack, and Kirchhoff migration; further details on the seismic acquisition and processing are provided by Costelloe and Jones (2014).
Reflector orientation estimation and migration
The preprocessing of the prestack seismic data for orientation analysis included resampling to 8 ms, refraction statics, residual statics, amplitude recovery with a T^1.2 gain (to 12 s), time-variant spectral whitening, automatic gain control (AGC) with a 0.5 s window, zero-phase Ormsby filtering to 5-10-30-40 Hz, trace muting, and the combination of 64 adjacent CDPs into supergathers every 2 CDPs; an additional mute of data stretched more than 30 % is included in the orientation estimation analysis. The reflector orientation analysis was performed on each supergather using a 56 ms time window every 3° of dip and 3° of strike, using an RMS velocity function that increased from 6000 m s^−1 at 0.0 s to 6500 m s^−1 at 12.0 s, and to 7250 m s^−1 at 20.0 s. At each time sample, an estimate of the dip and strike of the most coherent reflection in the prestack data is obtained, together with an estimate of the relative error (Calvert, 2017).
Values of local reflector strike can complement an interpretation based on a conventional seismic section, and estimates of reflector strike along line 10GA-YU2 are shown in Fig. 2a, but only for reflections with a semblance greater than 0.005 and for which the error in estimated strike angle is less than 30°, in order to remove less reliable estimates. Where the seismic line is almost linear, reflector orientation cannot be estimated accurately, and these large errors, which are shown in Fig. 2b, result in the vertical white, no-data bands in Fig. 2a. The error depends on the distribution of sources and receivers in the supergather used for the estimate, and their relation to the CDP bin centre; however, in practice, those parts of the seismic line where it is difficult to obtain orientation angles are reasonably well predicted by the range of useful source-receiver azimuths, which is defined to be the number of 1° azimuth bins for which there are seismic data available (Fig. 2). This definition was chosen to account for a (perhaps unlikely) situation in which a single orthogonal trace could result in a large range of source-receiver azimuths, but would not contribute significantly to the reflector orientation estimate due to a low signal-to-noise ratio. With the geometry of this seismic line, an azimuth range greater than ∼ 20° seems sufficient to obtain most strike estimates, but 30° is a preferable minimum. Though most strike estimates with large uncertainties have been excluded, there remain some parts of the seismic line where strike values are judged to be unreliable. The most evident areas are where very similar strike values extend through much of the crust in a vertical column on the unmigrated section, for example, the ∼ 60° values (green in Fig. 2a) visible between 5 and 8.5 s at CDP 14 000 and ∼ 50° values (yellow-green in Fig. 2a) from 6 to 12 s at CDP 6150.
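The azimuth-range measure can be computed directly from the acquisition geometry. One detail below is our assumption rather than the published definition: azimuths 180° apart are folded into the same bin, on the grounds that a reversed ray path adds no orientation information.

```python
import numpy as np

def azimuth_range(src_xy, rec_xy):
    """Number of distinct 1-degree source-receiver azimuth bins in a
    gather (x = east, y = north).  Azimuths 180 degrees apart are
    folded together -- an assumption, see the lead-in."""
    d = np.asarray(rec_xy, float) - np.asarray(src_xy, float)
    az = np.degrees(np.arctan2(d[:, 0], d[:, 1])) % 180.0
    return int(np.unique(np.floor(az).astype(int)).size)
```

Against the ∼20° working minimum quoted above, a perfectly straight spread scores a single bin; as the text notes, per-trace signal quality would still need to be considered before trusting a nominally large range produced by one or two off-line traces.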
The 2-D migration of an orientation attribute requires two input datasets: the attribute and an estimate of the apparent dip. To ensure consistency with the conventional migration, the apparent dip was estimated from the GA-processed stack section by determining the most coherent dip in a local slant stack across an 800 m window at each time sample and CDP (Calvert, 2004). Using these apparent dips and the 1-D stacking velocity function, each attribute sample was migrated to a 320 m long linear segment centred on its output location, with only the most coherent event retained at each position, as described earlier. The length of the output segment was selected to provide a degree of overlap for points migrated from the same reflection, creating some continuity on the output image without producing long linear segments that would be incapable of mimicking the geometry of a curved reflector. The trace spacing of the input datasets was 40 m, and dips greater than 50° were excluded to remove some steeply dipping coherent noise present in the data. Migrated reflector strike is shown in Fig. 3b, with the repositioning of reflector strike due to the migration process clear from a comparison with Fig. 2a; moderately dipping events have moved up-dip to earlier times where they exhibit a shorter length; reflections with a strike of 0° (coloured red) that occur at times of 9-11 s in the lower crust on the unmigrated section have moved into the middle crust after migration; other reflections have moved into the vertical white bands where strike values could not be well determined. After migration, anomalous columns of similar strike are much harder to identify due to the differential movement of strike values associated with different apparent dips.
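The apparent dip that drives the migration can be estimated with a local slant stack of the kind described above; the scan below is our own minimal version (a simple amplitude stack rather than a full coherence measure, with illustrative parameters).

```python
import numpy as np

def local_dip(section, it, ix, dips, dt, dx, half_w=10):
    """Most coherent apparent dip (s/m) at sample (it, ix) of a stacked
    section, found by slant-stacking amplitudes along trial dips over
    +/- half_w neighbouring traces and keeping the strongest stack."""
    nt, nx = section.shape
    best_val, best_p = -1.0, 0.0
    for p in dips:
        total, n = 0.0, 0
        for j in range(-half_w, half_w + 1):
            jx = ix + j
            jt = it + int(round(p * j * dx / dt))   # time shift along dip p
            if 0 <= jx < nx and 0 <= jt < nt:
                total += section[jt, jx]
                n += 1
        if n and abs(total) / n > best_val:
            best_val, best_p = abs(total) / n, p
    return best_p
```

With 40 m traces, half_w = 10 spans roughly the 800 m window used for line 10GA-YU2, and excluding trial dips steeper than the 50° cut-off simply means restricting the `dips` scan.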
The correspondence between reflections after frequency-wavenumber (F-K) migration and the migrated strike attribute can be assessed by superimposing the strike values on the migrated seismic data, as demonstrated by a section at the east end of line 10GA-YU2 that shows the upper crust near the boundary between the Youanmi Terrane and the Eastern Goldfields Superterrane (Fig. 4). In general, local reflector strike estimates that appear laterally consistent over > 2 km overlie clear reflections, but there are many events for which no strike estimates are available due to the limited range of source-receiver azimuths here. Changes in strike value, i.e. colour, along a reflection may indicate variation in the estimated strike due to limited constraint from the available source-receiver positions within adjacent CDPs or, alternatively, actual variation in strike due to the geometry of the reflector.
Reflector strike and crustal structure
The estimates of reflector orientation are derived under the assumption that the reflector is locally planar. In the case of complex 3-D structures, for example a dome adjacent to the seismic line, the estimated strike will vary laterally, and inferring the nature of the subsurface structure will not be straightforward. In this situation, one approach would be to model in 3-D the seismic responses of a range of realistic features, estimate the local strikes from the synthetic data, and compare the synthetic results with the observations. However, where a large region of crust is dominated by reflectors with similar local strikes, this characteristic, laterally extensive reflective fabric is likely to have arisen during a large-scale tectonic process. For example, a tectonic process such as crustal exhumation during shortening or the collapse of thickened crust can produce a thick band of pervasive seismic reflectivity that internally exhibits broadly similar reflector orientations. In this paper, we focus on the identification and interpretation of crustal domains in which a single reflector strike predominates.
Despite the absence of reliable estimates of reflector orientations at many locations along line 10GA-YU2, due to the relatively straight road along which the survey was carried out, it is possible to make some general inferences on the distribution of reflector strike. Specifically, the shallowly dipping reflections in the lower crust between 9 and 11 s are commonly characterized by values that differ significantly from the overlying middle crust; from CDP 6000-11 000 strikes range mostly from 80 to 110°, and from CDP 11 700-16 000 values are approximately 90-120°, whereas there is a much wider range of values in the middle crust (Fig. 2). Due to the steeper dips of reflections from the middle crust, some of which arrive at times corresponding to the lower crust, it is necessary to interpret them from the migrated section, on which an interpretation (Calvert and Doublier, 2018) has been superimposed (Fig. 3b). In this interpretation anticlines A1 and A2 are inferred to have been formed in an episode of earlier crustal shortening that was then largely overprinted elsewhere in the line by subsequent extension and lower crustal flow. Unfortunately A1 is located in an area of the seismic line where reliable strike estimates could not be recovered, but reflections just to the east associated with, and above, the interpreted thrust have strike values of 0-30°, suggesting that this package of approximately east-dipping mid-crustal reflections between CDP 7300 and CDP 8700 may represent an imbricate stack created in this relatively early thrusting event. Reflections overlying the thrust below A2 exhibit a strike of 120-150°, and may have been created in the same event; however, since this orientation is characteristic of other mid-crustal reflections to the east, it is also possible these reflections were created during the later pervasive collapse of the crust. The listric geometry of many mid-crustal reflections that flatten into the lower crust at different levels between 8 and 10 s has been interpreted as indicating extension and ductile flow of the crust (Calvert and Doublier, 2018); a normal sense of motion is inferred, for example, from the offset of reflections R1 and R2, which appear to occur at the top of the reflective middle crust. Since much of the lower crust exhibits reflector strikes of 80-120°, flow is inferred here to be in a N-NNE direction, almost orthogonal to the earlier direction of shortening, under the assumption that reflector strike is perpendicular to the direction of ductile flow. The origin of the reflectors is not known, but they could be due to shear zones or syn-tectonic magmatic intrusions whose orientation is controlled by strain fabrics and the stress regime prevailing at that time.
The origin of the large amplitude reflection R1, which is important to the interpretation summarized above, is unclear, because it truncates some underlying reflections, but also appears to cut across others (Fig. 4). After migration, estimates of reflector strike indicate that R1 exhibits a strike of approximately 120° over a distance of more than 20 km; the strike of R1 changes to 000° where it merges with the base of the "fan" of reflections (F in Fig. 4b) that project up into the Waroonga shear zone. Both the underlying, abruptly truncated reflections and the cross-cut reflections have a strike of 000-030° (T and C in Fig. 4b respectively). Since most reflections above R1 have an opposite sense of apparent dip to those below, R1 represents an angular unconformity, but the high amplitude of R1 and its lateral continuity also suggest that it is a sill. These two perspectives can be reconciled if a sill exploited an existing boundary, perhaps a thrust fault or the base of the brittle upper crust, during its emplacement, but further intruded the package of reflections near its western end, producing the cross-cutting relationship. There remains, however, some uncertainty due to the fact that while reflections from the approximately north-striking reflectors occur close to the seismic line, those from the 120°-striking section of R1 likely occur further away, and are not coincident. Overlying reflection S, which has a fairly consistent strike of 120-130°, may be another subparallel sill that was intruded in the same event. Consequently some reflections may be part of a network of intrusions, perhaps including reflection T, that could have directed melt upward, as has been found by drilling in younger tectonic settings (Juhlin et al., 2016).
Field recording for reflector orientation estimation
Along much of line 10GA-YU2, the range of source-receiver azimuths available for the orientation analysis is quite limited, resulting in the exclusion of many estimates due to their large errors. This problem is due to the mostly linear geometry of the road along which the seismic line was shot. Deep seismic lines are typically acquired along existing roads to minimize the cost, which in the case of vibroseis surveys is often determined by the number of shot points that can be acquired per day, i.e. the source effort. When sufficient recording channels are taken to the field, the incremental cost of deploying additional receivers can be relatively small. If additional recording channels can be placed along crossing roads or readily accessible land through which the survey passes, then the range of available source-receiver azimuths can be greatly increased, from < 15 to > 120° in the synthetic example presented by Calvert (2017). Thus instead of sporadic estimates of strike along a reflection, as shown in Fig. 4b, the continuous variation in a reflector's orientation can be determined. This is particularly important when trying to correlate dipping upper crustal reflections with structures mapped at the surface or trying to distinguish between late sills and the pervasive crustal reflectivity. As an example, the Waroonga shear zone contains steep, north-trending foliations and slivers of greenstone that are subparallel to the pattern of gneissic foliation (Zibra et al., 2017). The shear zone is underlain by a set of reflections that approach the surface with an apparent westerly dip (F in Fig. 4b). Perhaps the reflections arise from the contrast between the mafic greenstones and the surrounding more felsic tonalite, but it has not been possible to confirm that the reflector orientations are consistent with the mapped geological structures due to the limited range of source-receiver azimuths in the seismic survey. In the crystalline basement where reflection geometries can be complex, the availability of complementary reflector orientation attributes can assist an interpretation, perhaps at a basic level by allowing cross-cutting, out-of-plane reflections to be excluded, or potentially by revealing the origin of some enigmatic reflectors in the upper crust.
Conclusions
In this paper, a method of 2-D line migration that can be applied to any attribute continuously derived from seismic data has been presented. This algorithm uses the apparent dip obtained from the unmigrated stack section to move the attribute to a migrated position where it is represented by a short linear segment. (An alternative approach that iterates over an output 3-D migrated volume to identify the most coherent reflections would be much more costly and create artefacts, because an input location can contribute to multiple output locations.) Nevertheless the use of reflector orientation information to correctly position reflectors and their attributes throughout a 3-D volume, perhaps as planar facets, remains a long-term goal, but such an approach requires more accurate orientation estimates, which can be achieved by the use of additional off-line recording during 2-D onshore surveys.
By estimating and migrating the strike of subsurface reflectors along line 10GA-YU2, it has been possible to demonstrate that the lower crust of the eastern Youanmi Terrane of the Yilgarn Craton exhibits a systematic orientation of shallowly dipping reflectors, which mostly dip to the N-NNE or S-SSW, in contrast to the middle crust, which is characterized by a broad range of azimuths. Given that much of the crust here has been previously interpreted as reworked during extension and crustal collapse in the Late Archean, we suggest that the orientation of lower crustal reflections is consistent with approximately orogen-normal lower crustal flow at this time.
Figure 1. Major terranes of the Yilgarn Craton in Australia with locations of deep seismic lines. The Youanmi Terrane represents the older core of the craton to which terranes of the Eastern Goldfields Superterrane were accreted during the late Archean. Greenstone belts are shown in green.
Figure 2. (a) Unmigrated reflector strike estimated for line 10GA-YU2. The range of source-receiver azimuths available for the orientation analysis is indicated in degrees in the overlying panel. Vertical white bands correspond to unreliable values that have been excluded due to low reflection semblance or high angular uncertainty. (b) Relative error in estimated strike, i.e. the range of angles within 90 % of the global semblance maximum.
Figure 3. Line 10GA-YU2: (a) F-K migration, (b) migrated reflector strike with interpretation from Calvert and Doublier (2018); only arrivals with semblance greater than 0.008 and strike estimation error less than 30° are included. Pink to dark red: granitoid rocks and gneiss; green, dark green: mafic volcanic rocks; purple: ultramafic volcanic rocks; grey: sedimentary rocks; yellow and blue: Proterozoic sill.
Figure 4. Section of line 10GA-YU2 across the boundary between the Youanmi Terrane and the Eastern Goldfields Superterrane: (a) F-K migration, (b) migrated reflector strike superimposed on F-K migration; only arrivals with semblance greater than 0.008 and strike estimation error less than 30° are included. Dark pink: granitoid rocks; red: tonalitic gneiss; green: mafic volcanic rocks; purple: ultramafic volcanic rocks; grey: sedimentary rocks; yellow: Proterozoic sill.
Dietary patterns of 6-24-month-old children are associated with nutrient content and quality of the diet.
Abstract We determined the associations of dietary patterns with energy/nutrient intakes and diet quality. Previously collected single 24‐hr dietary recalls for children aged 6–11 months (n = 1,585), 12–17 months (n = 1,131), and 18–24 months (n = 620) from four independent studies in low socio‐economic populations in South Africa were pooled. A maximum‐likelihood factor model, with the principal‐factor method, was used to derive dietary (food) patterns. Associations between dietary pattern scores and nutrient intakes were determined using Kendall's Rank Correlations, with Bonferroni‐adjusted significance levels. For both 6–11 months and 12–17 months, the formula milk/reverse breast milk pattern was positively associated with energy and protein intake and mean adequacy ratio (MAR). The family foods pattern (6–11 months) and rice and legume pattern (12–17 months) were positively associated with plant protein, fibre, and PU fat; both for total intake and nutrient density of the complementary diet. These two patterns were also associated with the dietary diversity score (DDS; r = 0.2636 and r = 0.2024, respectively). The rice pattern (18–24 months) showed inverse associations for nutrient intakes and nutrient densities, probably because of its inverse association with fortified maize meal. The more westernized pattern (18–24 months) was positively associated with unfavourable nutrients, for example, saturated fat and cholesterol. These results highlight that underlying dietary patterns varied in terms of energy/nutrient composition, nutrient adequacy, nutrient densities of the complementary diet, and dietary diversity.
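The Kendall rank correlations with Bonferroni-adjusted significance described in the abstract can be sketched in a few lines of pure Python; this is tau-a, ignoring ties, purely for illustration, and is not a reproduction of the study's actual statistical analysis.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs.
    No tie correction -- illustrative only."""
    conc = disc = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (len(x) * (len(x) - 1) // 2)

def bonferroni(p_value, n_tests):
    """Bonferroni-adjusted p-value when n_tests associations are tested."""
    return min(1.0, p_value * n_tests)
```

A pattern score that rises monotonically with, say, protein intake gives tau = 1; testing one pattern against dozens of nutrients is what makes the Bonferroni adjustment necessary.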
distinct Western-like dietary pattern and health-conscious dietary pattern are already present at this young age (Kiefte-de Jong et al., 2013).
Dietary patterns based on predefined dietary indices or derived from factor or cluster analyses examine the whole diet rather than individual foods and/or nutrients (Hu, 2002). In factor analyses, various arbitrary decisions are taken, including grouping of foods into food groups and the naming of the dietary pattern (Hu, 2002;Newby & Tucker, 2004).
Dietary patterns derived through factor analysis may therefore not necessarily be comparable between studies or even age groups, and associations between dietary patterns and nutrient intakes are complex and may be difficult to interpret. For example, one study identified a home-made traditional pattern for young children at ages 6-8 and 15 months, but the association of this dietary pattern with the nutrient profile was inconsistent between the two age groups.
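The factor-analytic step itself can be illustrated with a bare-bones principal-factor sketch: an eigendecomposition of the food-group correlation matrix, keeping the leading factors. Real analyses (including, presumably, this study's) add communality iteration and rotation, so this is only a schematic, and all names here are our own.

```python
import numpy as np

def principal_factor_loadings(data, n_factors):
    """Rough principal-factor sketch: loadings are the leading
    eigenvectors of the correlation matrix scaled by the square root
    of their eigenvalues.  data has shape (subjects, food groups)."""
    corr = np.corrcoef(data, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1][:n_factors]   # largest eigenvalues first
    return vecs[:, order] * np.sqrt(vals[order])
```

Two food groups that are always eaten together load near ±1 on a shared factor, which is how named patterns such as "rice and legume" emerge; a subject's pattern score is then the projection of their intakes onto these loadings.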
Dietary patterns have been shown to be associated with infant growth outcomes such as length-for-age z-scores and BMI z-scores (Wen et al., 2014). Understanding the energy and nutrient content and nutritional quality of specific dietary patterns therefore will provide valuable insight that may guide the development of appropriate nutrition messages/policies in terms of infant and young child feeding, particularly against the background of the triple burden of malnutrition in South Africa (stunting, overweight/obesity, and micronutrient deficiencies).
In vulnerable populations in South Africa, dietary intake in 6-24month-old children can range from predominantly maize-based to predominantly based on commercial infant foods (Faber, 2005;Faber, Laubscher, & Berti, 2016;Swanepoel et al., 2018). Pooling diverse dietary intake data would potentially provide a dataset with sufficient variation to determine the nutrient profile of a variety of dietary patterns.
The aim of this study was to determine whether distinct dietary patterns are associated with energy/nutrient intakes and nutritional quality in 6-24-month-old South African children of low socio-economic status.
| Study design
This study consisted of pooled single 24-hr dietary recalls for 6-24 month-old children previously collected in four independent studies.
All study sites were of low socio-economic status. In Study 1 (Smuts et al., 2005) and Study 2 (Faber, 2005;Faber, Kvalsvig, Lombard, & Benadé, 2005), dietary intake data were collected for children who participated in two independent randomized controlled trials (RCT) that were done in rural sites in KwaZulu-Natal province. Study participants were recruited through an NGO-driven community-based health programme that operated through 12 health posts. Exclusion criteria were birth weight <2500 g and haemoglobin concentration < 80 g/L (both studies), premature birth (<37-week gestation), and weight-for-length z-score < -3 (Study 1 only). Data collection was done at baseline (at age 6-12 months) and follow-up (at age 12-18 months). In Study 1, additional data were collected 6 months after the completion of the RCT (at age 18-24 months). In Study 3 (Swanepoel et al., 2018), dietary intake data were collected for children who participated in an RCT that was done in a peri-urban site in North West province. Study participants were recruited through primary health care facilities and house-to-house visits. Exclusion criteria included haemoglobin concentration <70 g/L, weight-for-length z-score < -3, severe congenital abnormalities, infant known to be HIV positive, and infants known to be allergic/intolerant to peanuts, soy, cow's milk protein, or fish. Data were collected at baseline (at age 6 months), follow-up (at age 12 months), and 6-month post RCT (at age 18 months). In all three studies, dietary intake data were missing for children whose caregiver could not provide reliable information because the child was not in her permanent care during the 24-hr recall period. Study 4 (Faber et al., 2016) was a cross-sectional dietary assessment study.
Primary caregivers of randomly selected children, stratified per age category (6-11 months, 12-17 months, and 18-24 months), were recruited through house-to-house visits in two study sites, one rural and one urban, in KwaZulu-Natal province.
Previously collected 24-hr dietary recalls were recoded to ensure that coding and analysis were standardized across all dietary surveys and that all records were analysed with the same version of the food composition database. Estimated intake of breast milk was assumed according to age: 675 ml for partially breastfed infants at age 6-11 months, 615 ml at age 12-17 months and 550 ml at age 18-24 months (WHO, 1998). Exclusively breastfed or formula-fed infants were excluded. The complementary diet was defined as all foods and beverages consumed, excluding breast milk and formula milk feeds. Formula milk powder mixed into porridge/infant cereal may affect the nutrient density of the complementary diet and was therefore coded separately from formula milk feeds, using dummy food codes. This allowed for formula milk powder mixed into food to be included when calculating the nutrient density of the complementary diet. Food intake was converted to energy and nutrients using Stata software and the 2017 South African Food Composition Database (SAFOODS, 2017), which includes an updated section on infant foods.
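The handling of the complementary diet described above can be sketched as follows. This is a minimal illustration, not the study's coding system: food names, energy values, and nutrient values are invented placeholders, and only the exclusion of breast milk and formula milk feeds (while retaining formula powder mixed into food) follows the text.

```python
# Sketch of the complementary-diet definition described above. Breast milk
# volume is assumed by age (WHO, 1998); the complementary diet excludes
# breast milk and formula milk *feeds* but keeps formula powder mixed into
# porridge. Food records and nutrient values below are invented placeholders.

ASSUMED_BREAST_MILK_ML = {"6-11": 675, "12-17": 615, "18-24": 550}

def complementary_diet(records):
    """Keep all foods/beverages except breast milk and formula milk feeds."""
    return [r for r in records
            if r["food"] not in ("breast_milk", "formula_feed")]

def nutrient_density(records, nutrient, per_kcal=100.0):
    """Nutrient per 100 kcal of the complementary diet."""
    comp = complementary_diet(records)
    energy = sum(r["kcal"] for r in comp)
    return per_kcal * sum(r[nutrient] for r in comp) / energy

day = [
    {"food": "breast_milk", "kcal": 450, "iron_mg": 0.2},
    {"food": "maize_porridge_with_formula_powder", "kcal": 300, "iron_mg": 2.4},
    {"food": "infant_cereal", "kcal": 100, "iron_mg": 1.6},
]
print(round(nutrient_density(day, "iron_mg"), 2))  # iron per 100 kcal
```

The separate dummy food code for formula powder mixed into porridge is what allows that record to stay inside `complementary_diet` while a formula feed would be dropped.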
Key Messages
• The association of formula milk/reverse breast milk pattern scores and MAR suggests that breastfeeding children are more likely to consume a diet of lower nutrient adequacy.
• Associations of formula milk/reverse breast milk pattern scores and nutrient densities of the complementary diet suggest that breastfeeding children consume a complementary diet of lower nutrient density.
• The more westernized dietary pattern was associated with unfavourable nutrients such as saturated fat, cholesterol, and sugar, as well as certain micronutrients.
• Although associations of dietary pattern scores with dietary quality indicators could be explained by the foods with high factor loadings in most cases, this was not always the case.
Nutrient adequacy ratios (NAR) were calculated using age-appropriate estimated average requirements (EAR) or, where there is no EAR, the Adequate Intakes (AI) of the Dietary Reference Intakes (DRIs) (Otten, Hellwig, & Meyers, 2006). The mean adequacy ratio (MAR) was calculated as the average of the NARs.

Individual food items were grouped into 36 foods (or groups) based on nutritional composition and similarity of foods. Energy contribution of the foods was calculated and expressed as a percentage of total energy intake. Daily energy intake values (expressed as percentage of total intake) for the 36 foods were used in a maximum-likelihood factor model, with the principal factor method to derive estimates of dietary patterns; a varimax (orthogonal) rotation of the factor-loading matrix was done to make interpretation easier. Derived components with an eigenvalue > 1.00 that also contained two or more original foods with a loading factor ≥ 0.35 or ≤ -0.35 were retained and named. Regression scoring was used for the set of retained factors; a higher factor score indicates higher adherence to the corresponding dietary pattern. These factor scores (continuous variables) were then used to determine associations of the dietary patterns with energy and nutrient intakes, MAR, nutrient densities of the complementary diet, and the DDS, using Kendall's rank correlations with Bonferroni-adjusted significance levels.
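As a concrete illustration of the NAR/MAR calculation, the following sketch truncates each NAR at 1 before averaging. The nutrient names, intakes, and reference values are hypothetical, not data or DRI values from this study.

```python
# Sketch of the NAR/MAR calculation described above. Intakes and EAR/AI
# reference values below are illustrative placeholders, not study data.

def nutrient_adequacy_ratio(intake, requirement):
    """NAR = intake / requirement (EAR, or AI where no EAR exists),
    truncated at 1 so oversupply of one nutrient cannot offset another."""
    return min(intake / requirement, 1.0)

def mean_adequacy_ratio(intakes, requirements):
    """MAR = average of the truncated NARs across nutrients."""
    nars = [nutrient_adequacy_ratio(intakes[n], requirements[n])
            for n in requirements]
    return sum(nars) / len(nars)

# Hypothetical daily intakes (one child) and reference values:
intakes = {"iron_mg": 5.0, "zinc_mg": 2.0, "vitamin_a_ug": 500.0}
requirements = {"iron_mg": 6.9, "zinc_mg": 2.5, "vitamin_a_ug": 400.0}

mar = mean_adequacy_ratio(intakes, requirements)
print(round(mar, 3))
```

Truncation at 1 is the standard device that keeps an abundant nutrient (here vitamin A) from masking shortfalls in iron or zinc.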
Data were further explored by stratifying the children according to dietary pattern tertiles (Ts) and then calculating the percentage consumers for the 36 foods within each tertile. Differences across the tertiles were determined using the Fisher exact test.
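The tertile stratification can be sketched as follows. Pattern scores and consumption indicators are invented for illustration, and the Fisher exact test itself is left to statistical software.

```python
# Sketch of the tertile stratification described above: children are split
# into thirds by dietary-pattern score, then the percentage of consumers of
# a food is computed within each tertile. All values below are hypothetical.

def tertiles(scores):
    """Assign each observation to tertile 1, 2 or 3 by rank of its score."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    labels = [0] * len(scores)
    n = len(scores)
    for rank, i in enumerate(order):
        labels[i] = 1 + (3 * rank) // n   # thirds by rank
    return labels

def pct_consumers(consumed, labels, tertile):
    """Percentage of children in a tertile who consumed the food."""
    group = [c for c, t in zip(consumed, labels) if t == tertile]
    return 100.0 * sum(group) / len(group)

scores = [0.2, -1.1, 0.9, 1.5, -0.3, 0.1, 2.0, -0.8, 0.4]  # pattern scores
formula = [0, 0, 1, 1, 0, 0, 1, 0, 1]           # consumed formula milk (0/1)
labels = tertiles(scores)
print([pct_consumers(formula, labels, t) for t in (1, 2, 3)])
```

The output mirrors the kind of gradient reported in the text, such as 7.6% formula-milk consumers in T1 versus 86.2% in T3.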
Ethical considerations
Ethical approval was not required as we used pooled data from previous studies.
| Dietary patterns
In each of the three age categories, three dietary patterns were identified, which explained 38.6% (6-11 months), 37.8% (12-17 months), and 32.7% (18-24 months) of the variance (Table 1). The percentage of children who consumed foods during the recall period according to dietary pattern tertiles is given in Tables 2-4. For significant associations of these patterns with energy and nutrient intakes, MAR, DDS (Table 5), and nutrient densities of the complementary diet (Table 6), a correlation coefficient (r) between -0.3 and 0.3 is considered weak, and associations with r ≤ -0.3 and r ≥ 0.3 will mostly be highlighted hereafter.
| Age 6-11 months:
Factor 1, named the 'formula milk/reverse breast milk' pattern, had a very high positive loading for formula milk and a very high negative loading for breast milk (Table 1), indicating an inverse association between formula milk and breast milk. In terms of pattern score tertiles (Table 2), 7.6% of children consumed formula milk in T1 versus 86.2% in T3.
The opposite was observed for breast milk, with all children in T1 receiving breast milk, versus 11.9% in T3. The 'formula milk/reverse breast milk' pattern was positively associated with energy, protein and most micronutrients, and ultimately MAR (Table 5), as well as with the nutrient density of the complementary diet for various nutrients (Table 6), although these associations were weak (r > -0.3 and r < 0.3).
Factor 2, named the 'family foods' pattern, had high positive loadings for maize meal, rice, and legumes and a high negative loading for infant cereal. The 'family foods' pattern was inversely associated with all commercial infant products and positively associated with several family foods (Table 2). In terms of nutrients, the 'family foods' pattern was positively associated with plant protein, fibre, and polyunsaturated (PU) fat, both for total intake (Table 5) and the nutrient density of the complementary diet (Table 6). This pattern was positively associated (r ≥ 0.3) with magnesium and vitamin B6, both for total intake and nutrient density of the complementary diet, and inversely associated with the nutrient density of the complementary diet for vitamin C and, to a lesser extent, calcium (r = -0.2742) and iron (r = -0.2476).
Factor 3, named the 'maize meal and sugar' pattern, had a high loading for maize meal. The 'maize meal and sugar' pattern was inversely associated with all commercial infant products (Table 2). This dietary pattern showed statistically significant correlations with a few nutrient intakes (Table 5) and nutrient densities for various micronutrients (Table 6), but most of these correlations were weak (r > -0.3 and r < 0.3), except for the nutrient densities for carbohydrates, magnesium, and folate (r ≥ 0.3).
| Age 12-17 months:
Factor 1, named the 'tea and sugar' pattern, had high loadings for sugar and rooibos tea. This pattern was not associated with energy and nutrient intakes, or MAR (Table 5), but it was associated with the nutrient density of the complementary diet for several micronutrients (Table 6).
Factor 2, named the 'rice and legumes' pattern, had high loading for rice, legumes, and tea (Table 1). This pattern was positively associated with plant protein, fibre, and PU fat, both for total intake (Table 5) and the nutrient density of the complementary diet (Table 6).
Factor 3, named the 'formula milk/reverse breast milk' pattern, had a high positive loading for formula milk and a high negative loading for breast milk (Table 1). In terms of pattern score tertiles (Table 3), 4.2% of children consumed formula milk in T1 versus 58.1% in T3. The pattern was positively associated with energy, protein and most micronutrients, and ultimately MAR (Table 5). This pattern was also positively associated with the nutrient density of the complementary diet for various nutrients (Table 6), although these associations were weak (r > -0.3 and r < 0.3).
| Age 18-24 months:
Factor 1, named the 'tea and sugar' pattern, had high loadings for tea, rooibos tea, and sugar and a high negative loading for breast milk (Table 1). This dietary pattern showed several statistically significant inverse correlations with nutrient intakes, but these correlations were weak (r > -0.3 and r < 0.3).

Factor 2, named the 'more westernized' pattern, had high loadings for breakfast cereal and milk and high negative loadings for rice and legumes (Table 1), indicating a less traditional, more westernized diet. This pattern was associated with a higher percentage of consumers of unhealthy food items such as sweets, cake and cookies, cold drinks, and salty snacks (Table 4). In terms of nutrients (Table 6), this pattern was associated with saturated fat, cholesterol, and riboflavin intakes.
Factor 3, named the 'rice' pattern, had a high loading for rice and a high negative loading for maize meal (Table 1). This pattern showed several statistically significant inverse correlations, with |r| ≥ 0.3 only for magnesium, thiamine, and folate.
| DISCUSSION
In this paper, we describe dietary patterns for 6-24-month-old children, using a large dataset of pooled single 24-hr recalls previously collected in four independent studies done in areas of low socio-economic status. Distinct dietary patterns were identified. The associations of the 'formula milk/reverse breast milk' pattern scores with the nutrient densities of the complementary diet, although weak (r > -0.3 and r < 0.3), suggest that breastfeeding children consume a complementary diet of lower nutrient density. We can only speculate on why this is the case. A study in South Africa reported that mothers who were breastfeeding were more likely to be unemployed compared with mothers who formula fed (Nieuwoudt, Manderson, & Norris, 2018), suggesting that income may be a factor. Nonetheless, these results suggest that a stronger focus is needed on the nutritional quality of the complementary foods for breastfeeding babies.
The 'family foods' pattern (age 6-11 months) was positively associated with plant protein and fibre for total intake as well as the nutrient density of the complementary diet, indicating a mostly plant-based diet. The association with PU fat can most probably be ascribed to oil used when preparing legumes. This pattern was positively associated with maize meal and inversely associated with infant cereals, both of which are fortified, so its micronutrient associations can largely be traced to the foods with high loadings in these patterns.
The positive association of both the 'formula milk/reverse breast milk' pattern and the 'tea and sugar' pattern with the nutrient density of the complementary diet may suggest a perception that as long as children are being breastfed, the quality of the complementary diet is not of that high importance. Although this is pure speculation, it warrants further investigation.
Dietary patterns identified in our study are based on a single 24-hr recall, which has several inherent limitations (Murphy, Guenther, & Kretsch, 2006). Studies reporting dietary patterns in children of a similar age group used either a single 24-hr recall (Gatica, Barros, Madruga, Matijasevich, & Santos, 2012; Melaku et al., 2018) or a food frequency questionnaire (Betoko et al., 2013; Wen et al., 2014), while Robinson et al. (2007) derived infant dietary patterns using principal component analysis. In our study, all records were recoded and analysed with the same version of the food composition database, so variability due to different versions of the database being used to convert food intake data to nutrient intake data was avoided.
In conclusion, dietary patterns varied in terms of energy and nutrient composition, MAR, nutrient densities of the complementary diet, and DDS. Interpretation of the associations between pattern scores and indicators of dietary quality is complex, for various reasons.
Firstly, although in most cases the associations could be explained by the foods with high loadings, this was not always the case. Secondly, some dietary patterns had both positive and negative associations with key micronutrients, particularly in the younger age group, probably because both infant cereals and maize meal are fortified.
Lastly, the associations of the 'formula milk/reverse breast milk' pattern score with various indicators of dietary quality need further attention, as these associations imply poorer dietary quality for breastfeeding babies.
ACKNOWLEDGEMENTS
We acknowledge the role of the dietary coders and data capturers.
CONFLICT OF INTEREST
The authors declare that they have no conflicts of interest.
CONTRIBUTIONS
The authors' responsibilities were as follows: M.F. conceptualized the study, wrote the first draft, and was responsible for collecting dietary intake data for all the original studies; M.R. contributed to dietary coding and was involved in collecting dietary data in one of the original studies and writing of the manuscript; R.L. did the data analyses; and C.M.S. was the principal investigator for two of the original studies.
All authors read and approved the final manuscript.
FUNDING INFORMATION
The study was funded by the South African Sugar Association (Project 247).
Future Distribution of Suitable Habitat for Pelagic Sharks in Australia Under Climate Change Models
Global oceans are absorbing over 90% of the heat trapped in our atmosphere due to accumulated anthropogenic greenhouse gases, resulting in increasing ocean temperatures. Such changes may influence marine ectotherms, such as sharks, as their body temperature concurrently increases toward their upper thermal limits. Sharks are high trophic level predators that play a key role in the regulation of ecosystem structure and health. Because many sharks are already threatened, it is especially important to understand the impact of climate change on these species. We used shark occurrence records collected by commercial fisheries within the Australian continental Exclusive Economic Zone (EEZ) to predict changes in future (2050–2099) relative to current (1956–2005) habitat suitability for pelagic sharks based on an ensemble of climate models and emission scenarios. Our predictive models indicate that future sea temperatures are likely to shift the location of suitable shark habitat within the Australian EEZ. On average, suitable habitat is predicted to decrease within the EEZ for requiem and increase for mackerel sharks; however, the direction and severity of change were highly influenced by the choice of climate model. Our results indicate the need to consider climate change scenarios as part of future shark management and suggest that more broad-scale studies are needed for these pelagic species.
INTRODUCTION
Climate change is predicted to have unprecedented effects on the marine environment, with changes in ocean temperature increasing extinction risk for many species (Dulvy et al., 2003; Barnosky et al., 2011; Bruno et al., 2018; Pinsky et al., 2019) and altering the global distribution of marine life (Tittensor et al., 2010; García Molinos et al., 2016). Changes in species distribution (Perry et al., 2005; Poloczanska et al., 2013) and community structure (Doney et al., 2012) are already being observed in marine ecosystems due to temperature shifts associated with rising emissions and accumulation of atmospheric carbon dioxide (Hoegh-Guldberg and Bruno, 2010; Doney et al., 2012; Gattuso et al., 2015). Recent modeling of biodiversity under different future climate change scenarios, across a wide range of marine and terrestrial ecosystems, predicts abrupt and irreversible ecosystem disruption during the late 21st century (Trisos et al., 2020). With predicted increases of up to ∼5°C in worldwide sea-surface temperature (SST) by the end of the 21st century (IPCC, 2015), there is a critical need to investigate how marine species will be affected, especially ectotherms which are dependent on external sources for body heat. As ectotherms, sharks may be influenced by climate change (Bernal et al., 2012; Rosa et al., 2014, 2017; Sydeman et al., 2015; Pinsky et al., 2019), with higher temperatures increasing their metabolism and oxygen demand (Pistevos et al., 2015; Lawson et al., 2019). The exception to this may be Lamnid mackerel sharks, which have some endothermic capability (Watanabe et al., 2015).
Many shark species are already globally threatened due to fisheries overexploitation (Queiroz et al., 2019) coupled with their low fecundity, late age at maturity, and slow growth (Cortés, 2000; Garcia et al., 2008; Yokoi et al., 2017). In fact, 16.6% of shark species are estimated to be threatened with extinction, and another 37.9% of shark species are categorized as "Data Deficient" by the International Union for Conservation of Nature (IUCN, 2020). Nevertheless, sharks are known to have direct economic value in fisheries (Dulvy et al., 2017) and ecotourism (Cisneros-Montemayor et al., 2013; Huveneers et al., 2017). They also play a key role in ecosystem functioning and stability, connecting distant ecosystems via their long-distance migrations (Rogers et al., 2015), and altering prey behavior, distribution and energy use (Heupel et al., 2015; Roff et al., 2016; Dulvy et al., 2017). Climate change may exacerbate existing threats for sharks; for example, suitable pelagic shark habitat in the north Pacific Ocean is projected to decline by the year 2100 (Hazen et al., 2013).
Future projections based on existing observations and modeling techniques can be used to investigate the effects of climate change on pelagic sharks (Barange et al., 2016). Using Earth System Models from the Coupled Model Intercomparison Project Phase 5 (CMIP5; hereafter called "climate models"), complex relationships between ecosystem health, human activities and global climate can be included to evaluate alternative future scenarios with varying severity of emissions (Moss et al., 2010; Freer et al., 2017). There are four emission scenarios commonly referred to as Representative Concentration Pathways (RCP 2.6, RCP 4.5, RCP 6, and RCP 8.5) (IPCC, 2013). These RCP scenarios are used to predict radiative forcing values, a measure of absorbed and retained energy in the lower atmosphere, for the year 2100 (Moss et al., 2010; Vuuren et al., 2011). RCP 4.5, also referred to as the "stabilization scenario," is an optimistic scenario assuming a decline in overall energy usage from fossil fuel sources that limits emissions and radiative forcing. Conversely, RCP 8.5, also referred to as "business-as-usual," is the most pessimistic scenario assuming minimal stabilization of greenhouse gas emissions alongside a large human population with high energy demands.
The Australian Exclusive Economic Zone (EEZ) is already being impacted by climate change, with waters off south-east Australia warming at almost four times the global average (Oliver et al., 2017) and range extensions already documented in several fish species (Last et al., 2011). Australia has one of the world's most diverse communities of sharks, with 182 recognized species (Simpfendorfer et al., 2019), and SST has been shown consistently to be a strong predictor of pelagic shark occurrence in Australian waters (Rogers et al., 2009, 2015; Stevens et al., 2010; Heard et al., 2017; Birkmanis et al., 2020). It is therefore important to investigate the likely impact of temperature changes on pelagic shark distribution and the location of suitable habitat on a continental scale if these species are to be appropriately managed into the future, especially if such changes may require a reassessment of interactions with fisheries. Sharks comprise approximately 27% of the total catch (by number) of Australian pelagic longline fisheries (Gilman et al., 2008), with Australian stocks of the IUCN classified "Critically Endangered" oceanic whitetip (Carcharhinus longimanus), "Endangered" shortfin mako (Isurus oxyrinchus), and "Endangered" longfin mako (Isurus paucus) sharks listed respectively as "overfished," "depleting," and "undefined" due to a lack of data (Simpfendorfer et al., 2019; IUCN, 2020).
This study follows on from Birkmanis et al. (2020) in which occurrence records of pelagic sharks belonging to the Carcharhinidae and Lamnidae families (hereafter "requiem" and "mackerel, " respectively) were obtained from commercial fisheries and used to develop generalized linear models with which to predict suitable habitat for these species within the Australian continental EEZ. After accounting for fishing effort bias, these models showed that SST was an important predictor of shark distributions, with the highest ranked model also including turbidity. Here, we extend our modeling to assess the impact of climate change on pelagic shark habitat across the entire continental Australian EEZ.
Shark Occurrence
Catch records of 3,973 individual sharks from two families: requiem (silky Carcharhinus falciformis, oceanic whitetip Carcharhinus longimanus, dusky Carcharhinus obscurus, and blue Prionace glauca) and mackerel (shortfin mako Isurus oxyrinchus, longfin mako Isurus paucus, and porbeagle Lamna nasus), were obtained through the Global Biodiversity Information Facility online database (GBIF.org, 2017), as per details included in Birkmanis et al. (2020). These oceanic sharks were caught predominantly using commercial longlines in Commonwealth managed fisheries (more detailed data unavailable), with catch locations depicted in Supplementary Figure S1.
Predictors for Modeling Baseline and Future Climate Environmental Data
A climatological baseline was used as a reference point for projected future climate changes. According to Birkmanis et al. (2020), SST and turbidity were the most suitable predictors of requiem and mackerel shark occurrence within the Australian EEZ. We therefore focused on these two predictors to develop a climatological baseline to use as a reference for projected future climate changes. To calculate the SST baseline data, we downloaded monthly SST values for the years 1956-2005, covering the time period of our observed shark occurrence data, from the Integrated Marine Observing System (IMOS, 2016). We then averaged the SST values for each 0.1° grid-cell in the study area using ArcGIS 10.5 from Environmental Systems Research Institute (ESRI, 2017). We incorporated the observed turbidity values (measured as mean diffuse attenuation coefficient at wavelength 490 nm, downloaded using the Marine Geospatial Ecology Tool; Roberts et al., 2010) from 2000 to 2002 into our models with the assumption that turbidity will remain unchanged in the future.
Future SST data were taken from 24 CMIP5 climate models, using only one realization per climate model, under two emission scenarios, RCP 4.5 and RCP 8.5, amounting to 48 total simulations (Table 1). We downloaded the SST field and the anomaly statistic for each climate model (see Scott et al., 2016 for details). We used the portal to calculate the difference in the mean SST between the future climate (2050-2099) and the model baseline reference period (1956-2005), hereafter called "anomaly" data. We then added these anomaly data to our baseline data across the extent of the Australian EEZ using ArcGIS and included this as the SST predictor for the future values.
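The anomaly ("delta-change") construction described above reduces to simple per-grid-cell arithmetic; a minimal sketch, with invented grid values:

```python
# Sketch of the delta-change ("anomaly") approach described above:
# future SST per grid-cell = observed climatological baseline SST
# + (model future mean SST - model baseline mean SST).
# All grid values below are invented for illustration.

def future_sst(baseline_obs, model_future, model_baseline):
    """Add each climate model's anomaly to the observed baseline, cell by cell."""
    return [obs + (fut - base)
            for obs, fut, base in zip(baseline_obs, model_future, model_baseline)]

baseline_obs = [18.2, 21.5, 25.0]   # observed 1956-2005 mean SST (°C)
model_future = [19.9, 23.8, 27.1]   # model mean SST, 2050-2099
model_base   = [18.0, 21.0, 24.9]   # model mean SST, baseline period

print(future_sst(baseline_obs, model_future, model_base))
```

Adding the model anomaly to the observed climatology, rather than using raw model output, keeps each model's systematic temperature bias out of the future predictor.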
Modeling Habitat Suitability for Baseline and Future Climate Data
We developed binomial generalized linear models with a logit link function for each of the two pelagic shark families, following Birkmanis et al. (2020). In brief, the probability of shark occurrence (calculated as the number of sharks caught divided by the number of fishing boats occurring in the same grid-cell) was used as the response variable, with turbidity and SST values for either the climatological baseline or the future used as predictors. We included effort, defined as the number of boats recorded in each grid-cell from the same time period as the occurrence data (2000-2002), as a model weight to account for differing amounts of catch per unit effort (CPUE) within the entire EEZ. As in Birkmanis et al. (2020), weighting our models by fishing effort when estimating the probability of finding a shark in each grid-cell within the Australian EEZ minimized the effect of fisheries effort on the data. To stabilize parameter estimation, we standardized both predictors to z-scores using the scale function in R statistical software (R Core Team, 2017) before inclusion in our models (James et al., 2015). We also included a quadratic term for SST, using the poly function from the stats package (R Core Team, 2017), to account for likely preferential SST ranges. We then quantified the goodness-of-fit for each model using the percentage of deviance explained, and used the predict function from the stats package to predict shark habitat suitability for the baseline data and also for the end of the century using the future climate data. To calculate the amount of change in suitable habitat under each climate model and emission scenario, we subtracted the number of grid-cells with suitable habitat (suitability ≥ 0.5) in the future climate scenarios from those obtained in the baseline scenario.
Differences between baseline and future scenarios show the change in suitable habitat area predicted for each family under possible future conditions.
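The prediction pipeline above can be sketched as follows. The coefficients and grid values are hypothetical stand-ins for a model fitted as in Birkmanis et al. (2020); the sketch only illustrates z-scoring, the quadratic SST term, the inverse-logit prediction, and counting grid-cells at the ≥0.5 suitability threshold.

```python
import math

# Sketch of the habitat-suitability pipeline described above (binomial GLM,
# logit link, z-scored predictors, quadratic SST term, 0.5 threshold).

def zscore(xs):
    """Standardize to z-scores (sample SD), returning the scaling for reuse."""
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    return [(x - mean) / sd for x in xs], mean, sd

def suitability(sst_z, turb_z, b0=-0.2, b_sst=0.8, b_sst2=-1.0, b_turb=0.3):
    """Inverse-logit of a linear predictor with a quadratic SST term.
    Coefficients are hypothetical, not the fitted values from the study."""
    eta = b0 + b_sst * sst_z + b_sst2 * sst_z ** 2 + b_turb * turb_z
    return 1.0 / (1.0 + math.exp(-eta))

sst = [16.0, 19.0, 22.0, 25.0, 28.0]    # grid-cell SST (°C), invented
turb = [0.05, 0.04, 0.06, 0.08, 0.07]   # Kd490 turbidity, invented

sst_z, sst_mu, sst_sd = zscore(sst)
turb_z, _, _ = zscore(turb)

baseline = [suitability(s, t) for s, t in zip(sst_z, turb_z)]
# Future: a uniform +2 °C anomaly, standardized with the baseline scaling
# so predictions stay on the scale the model was fitted on.
future = [suitability((x + 2.0 - sst_mu) / sst_sd, t)
          for x, t in zip(sst, turb_z)]

# Net change in the number of "suitable" (>= 0.5) grid-cells.
change = sum(p >= 0.5 for p in future) - sum(p >= 0.5 for p in baseline)
print(change)
```

The negative quadratic coefficient gives the hump-shaped response implied by "preferential SST ranges": warming pushes some cells past the optimum, so the count of suitable cells can fall even as others warm toward it.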
RESULTS
Anomalies in SST in the Australian EEZ varied according to the climate model and emission scenario used (Figure 1 and Table 1).
The mean SST anomaly for all climate models was 2.27°C (SD: 0-1.2) for RCP 4.5 and 3.78°C (SD: 0-1.21) for RCP 8.5. Our results show that the predicted mean SST anomaly ranged from minima of 0.93°C (for climate model GISS-E2-R, RCP 4.5) and 0.69°C (for climate model HADGEM2-AO, RCP 8.5), to mean maxima of 1.83°C (for climate model IPSL-CM5A-MR for both emission scenarios) at the end of the century (Figure 1 and Table 1).
The climate model MPI-ESM_LR resulted in the maximum SST anomaly projected by all climate models (3.76°C for RCP 4.5 and 5.71°C for RCP 8.5). Despite model-to-model variation in the magnitude of anomalies, all climate models predicted south-eastern Australia would experience the greatest SST increases by the end of the century (Figure 1). Both the baseline and future habitat suitability models explained slightly higher deviance for requiem than mackerel sharks, but all values were around 30% (Supplementary Table S1). The baseline models explained 31.13 and 27.33% for requiem and mackerel sharks, respectively. The future models explained 29.91 and 31.76% for RCP 4.5 and RCP 8.5, respectively, for requiem sharks, and 26.47 and 26.22% for RCP 4.5 and RCP 8.5, respectively, for mackerel sharks. The resulting predicted habitat suitability maps are presented as the mean across all climate models for requiem (Figure 2) and mackerel sharks (Figure 3), with mean predicted habitat suitability similar for requiem and mackerel sharks (0.65 and 0.63, respectively). For both requiem and mackerel sharks, the maximum habitat suitability (∼0.8) was predicted by climate model NORESM1-ME under both emission scenarios (Figure 4). Regions where habitat was predicted to be suitable (i.e., ≥0.5) at the end of the century varied by family, with southern Australia suitable for mackerel sharks, and north-eastern Australia for requiem sharks (Figures 2, 3).
Based on 48 climate simulations, our results suggest a shift in suitable habitat for both requiem and mackerel sharks within the Australian EEZ in the last half of the twenty-first century. The severity and direction of this shift varied, with suitable habitat for requiem sharks predicted to decrease under most climate models, while habitat suitability for mackerel sharks varied to a greater degree depending on the climate model and emission scenario. On average, predicted suitable habitat for requiem sharks under RCP 4.5 extended south on the north-eastern (∼600 km) and south-western coast (∼200 km), but decreased in the north-west (∼400 km). For RCP 8.5, suitable habitat was projected to extend south on the north-eastern coast (∼650 km) and decrease across the north-west (∼500 km), with similar increases on the south-western coast (Figure 2). For mackerel sharks, the average of all climate models predicted an increase in suitable habitat across the southern coast (∼900 km) and off the southern extent of the EEZ south of Tasmania (∼400 km) for RCP 4.5, with increases also projected to occur under RCP 8.5 (∼700 km across and ∼200 km south along the southern coast and ∼150 km south off the southern extent of the EEZ south of Tasmania) (Figures 3, 4).
DISCUSSION
Significant shifts in the distributions of marine organisms are being observed in the global ocean due to anthropogenic climate change (Poloczanska et al., 2013). Our results highlight that shifts in the location of suitable habitat for requiem and mackerel sharks by the end of the century are to be expected, with a decrease in predicted suitable habitat for requiem sharks off the south-western coast under both emission scenarios. This agrees with predicted habitat shifts for silky and blue (both requiem family; Cheung et al., 2015; Lezama-Ochoa et al., 2016) and mako sharks (mackerel family; Hazen et al., 2013) in other areas. The waters of south-western and south-eastern Australia are warming at an increased rate, almost three and four times higher than the global average, respectively (Hartmann et al., 2013; Robinson et al., 2015a), as indicated in Figure 1. Our models predict that this area will become unsuitable for both requiem and mackerel sharks, likely due to the water temperatures at the end of the century exceeding the thermal tolerance of these pelagic sharks. In our analysis, and those of Robinson et al. (2015b) and Hobday (2010), southward shifts in suitable habitat for blue and mako sharks on the eastern coast of Australia are predicted. This is in line with ocean climate zones (areas with distinct climate, based on annual SST values) shifting southwards by 200 km along the north-eastern coast and approximately 100 km along the north-western coast in tropical Australian waters (Lough, 2008). In the north Pacific Ocean, suitable habitat loss was predicted for both blue and mako sharks by the end of the century (Hazen et al., 2013). Such differences in predictions may be due to currents and northern-latitude prey species being able to migrate poleward along the coastline (Perry et al., 2005).
Due to the east-west orientation of the temperate Australian coastline and limited continental shelf area to the south of the continent (Urban, 2015), there are few opportunities for continental shelf marine organisms, including fish that are shark prey species, to move to higher latitudes and avoid increased water temperatures. Even with suitable habitat available for pelagic sharks within Australian waters these predators will follow prey species, such as tuna (Hobday and Poloczanska, 2010), which are expected to decline in the tropics and shift poleward in response to a warming ocean (Erauskin-Extramiana et al., 2019).
Although relatively little is known about how elevated temperatures will affect sharks (Pistevos et al., 2015), pelagic sharks are vulnerable to climate change impacts (Jones and Cheung, 2018), and life history strategies may play a part in determining ultimate patterns of species distribution. For relatively sedentary, benthic shark species, exposure to projected end-of-century temperatures has been shown to result in both positive and negative impacts. Port Jackson sharks (Heterodontus portusjacksoni) exposed to elevated temperatures exhibited an increase in mortality, altered behavior, increased learning performance and feeding, but reduced growth and embryonic development time (Pistevos et al., 2015; Vila et al., 2018, 2019). Conversely, brownbanded bamboo sharks (Chiloscyllium punctatum) showed decreased survival alongside significantly increased embryonic growth and ventilation rates (Rosa et al., 2014), while juvenile epaulet sharks (Hemiscyllium ocellatum) showed significantly decreased growth rates and 100% mortality. It is likely that the physiological impacts of increasing ocean temperature will be greater for more active pelagic sharks than for benthic species (Rosa et al., 2014), given their reliance on ram ventilation and continuous movement (Lawson et al., 2019). Sharks already at their provisioning limit may be faced with starvation if temperature-driven increases in metabolic rates are not met with higher food intake (Pistevos et al., 2015), and this risk will be heightened should environmental perturbations concurrently influence prey availability and abundance.

FIGURE 4 | Change in predicted habitat suitability for requiem and mackerel sharks in the Australian EEZ between the baseline time period and at the end of the twenty-first century (2050-2099) under two emission scenarios (RCP 4.5 and 8.5).
However, the thermal tolerance of requiem and mackerel sharks (Francis and Stevens, 2002;Last and Stevens, 2009;Corrigan et al., 2018;Hueter et al., 2018;Young and Carlson, 2020) may enable them to cope with changing temperatures.
Even though we predicted an overall increase in the amount of suitable habitat for mackerel sharks at the end of the twenty-first century, temperature acclimatization comes with an energetic cost that impacts other functions such as reproduction, growth, foraging and swimming. Changes in the marine environment may result in novel ecosystems requiring predators to alter foraging behaviors and adapt to new prey species (Nagelkerken and Munday, 2016; Rivest et al., 2019). Under such stresses, individuals become less competitive, with decreases in reproduction and population density (Beaugrand and Kirby, 2018), and may exploit habitat heterogeneity by undertaking vertical migrations to suitable temperatures to maximize biological efficiency and minimize physiological adjustment costs (Chin et al., 2010; Beaugrand and Kirby, 2018). The endothermic ability to swim faster and farther (Watanabe et al., 2015) may allow mackerel sharks to migrate longer distances and forage over wider areas with greater access to prey and seasonal resources, although at higher energetic costs than ectothermic species. However, the ability of pelagic sharks to move and follow shifting suitable habitat outside their current ranges may alter their interactions with fisheries. It is worth noting that latitudinal species shifts in response to warming can be misleading, with some pelagic species migrating vertically rather than latitudinally (Perry et al., 2005; Beaugrand and Kirby, 2018), and this may be the case for some pelagic shark species. In Australian waters, pelagic sharks have been recorded regulating their depth to occupy regions of favorable temperatures, although this behavior could also be related to prey movements (Rogers et al., 2009; Stevens et al., 2010; Heard et al., 2017) as well as habitat suitability.
Our study predicts changes in habitat suitability for pelagic sharks in the Australian EEZ, but predictions at the end of the century are highly dependent on the climate model and emission scenario chosen to represent future conditions. This has been the case for similar studies on other species, for example, freshwater fish assemblages (Buisson et al., 2010) and mesopelagic lanternfish in the Southern Ocean (Freer et al., 2019), highlighting the benefit of using an ensemble approach to capture high climate uncertainty. Moreover, the SST anomalies across the Australian EEZ also vary according to the climate model and emission scenario used in the analysis. Our analysis was done at the family level due to the sample size available. Analysis at the family level, whilst valuable for relatively homogeneous species groups, inevitably results in loss of information at lower taxonomic levels. Further research is needed in more localized areas, including telemetry studies on single species, to add greater certainty to species distribution model predictions. There is no consensus about how turbidity may vary under a changing climate, and in our models we assumed that turbidity levels would remain stable at the end of the century. However, a predicted increase in extreme rainfall influenced by changes in atmospheric circulation may increase coastal turbidity due to terrestrial-derived nutrient and pollutant input (Harley et al., 2006). Additionally, turbidity is correlated with chlorophyll-a in pelagic systems, and warmer water temperatures drive phytoplankton blooms, with elevated temperatures increasing both cyanobacterial and algal chlorophyll-a concentrations (Lürling et al., 2018; Trombetta et al., 2019).
As aquatic nutrients have a greater impact on chlorophyll-a concentrations than temperature, and salinity and wind are also correlated with plankton blooms (Lürling et al., 2018; Trombetta et al., 2019), the impact this may have on pelagic systems in Australian waters is still unclear. Despite the uncertainties associated with predicting future conditions, studies such as ours, using remotely sensed environmental information and occurrence data from fisheries over a large spatial scale, are important for understanding how pelagic species with broad geographic ranges might fare in the future. Such studies are a first component of broader research in which the distributions of multiple species are predicted in a likely altered future marine environment.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/Supplementary Material.
AUTHOR CONTRIBUTIONS
CB, AS, and JP conceived the study. CB, JP, AS, and LS designed the methodology with assistance from JF. CB collated and analyzed the data and led the writing of the manuscript with significant contributions from AS. All authors contributed critically to the drafts and gave final approval for publication.
ACKNOWLEDGMENTS
We thank R. Summerson for assistance with accessing the fisheries data and acknowledge Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES) as the source of the fisheries data, originally supplied by Australian Fisheries Management Authority (AFMA) and state fisheries management agencies.
Elucidating the Diversity and Potential Function of Nonribosomal Peptide and Polyketide Biosynthetic Gene Clusters in the Root Microbiome
We identified distinct secondary-metabolite-encoding genes that are enriched (relative to adjacent bulk soil) and expressed in root ecosystems yet almost completely absent in human gut and aquatic environments. Several of the genes were distantly related to genes encoding antimicrobials and siderophores, and their high sequence variability relative to known sequences suggests that they may encode novel metabolites and may have unique ecological functions.
Soil is an extremely diverse ecosystem that contains a myriad of micro- and macroorganisms, including nematodes, arthropods, fungi, and bacteria. The rhizosphere is a narrow region of soil directly influenced by root exudates and mucilage (1,2). This "hot spot" of organic matter and nutrients "enriches" a specific fraction of the soil microbial community known as the root microbiome, which is significantly different from the surrounding soil microbiome (3). Over the past 2 decades, several studies have linked specific constituents of the root microbiome to enhanced plant growth and development and inhibition of soilborne plant pathogens (4) by direct antagonism and/or induced systemic resistance (5). These functions are often facilitated by the vast array of secondary metabolites (SMs) produced by root-associated bacteria, which play a key role in inter- and intraspecies interactions (6,7).
Many important soil and root-associated bacterial SMs are nonribosomal peptides (NRPs) or polyketides (PKs), produced by nonribosomal peptide synthetases (NRPSs) or polyketide synthases (PKSs), respectively. These are encoded on large biosynthetic gene clusters (BGCs) that often exceed 50,000 bp (8). Enzymatic complexes in these families follow a similar biosynthetic logic wherein molecules are assembled in an iterative building process using conserved domains that are organized in modules (9,10). NRPSs and PKSs are responsible for the synthesis of a wide array of siderophores, toxins, pigments, and antimicrobial compounds (11) that are believed to play pivotal roles in bacterial adaptation to soil and rhizosphere ecosystems and in plant health and development (12). Despite their ecological (rhizosphere competence) and translational (biocontrol agents and novel antimicrobial compounds for plant protection) importance, little is known about the occurrence, diversity, and dynamics of NRPSs and PKSs in root ecosystems or their role in intra-and intermicrobial and plant-bacterium interactions.
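The modular, assembly-line logic described above can be caricatured in a few lines of code: each module contributes exactly one monomer to the growing chain, in strict module order. This is a deliberately minimal sketch with invented module and monomer names; condensation, carrier, tailoring, and chain-release chemistry are not modeled.

```python
# Toy model of NRPS/PKS assembly-line biosynthesis (illustrative only).
# Each module is reduced to the monomer its adenylation/ketosynthase-like
# domain selects; real modules also condense, modify, and release the chain.

def assemble(modules):
    """Build a chain from an ordered list of (module name, monomer) pairs."""
    chain = []
    for name, monomer in modules:
        chain.append(monomer)  # one module, one extension step
    return "-".join(chain)

# A hypothetical three-module NRPS specifying a tripeptide:
nrps = [("M1", "Ser"), ("M2", "Thr"), ("M3", "Gly")]
print(assemble(nrps))  # Ser-Thr-Gly
```

The point of the sketch is only that product identity follows directly from module order and specificity, which is why amplifying the specificity-conferring AD and KS domains (as done below) is informative about the synthesized metabolite.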
A major challenge in exploring the role and function of SMs in soil stems from the fact that the majority of bacteria cannot be cultivated using conventional methods, making it difficult to study these bacteria and the diversity, expression, and function of the metabolites that they produce (13). Despite the progress made in culturing techniques, our capacity to isolate soil and root-associated bacteria is highly constrained, primarily because it is challenging to mimic the natural conditions required for growing these bacteria (14). Furthermore, many bacterial BGCs are silent under laboratory conditions, and therefore, the metabolites that they encode are extremely challenging to isolate (15).
To circumvent the above-mentioned barriers, a myriad of culture-independent sequencing-based and omics tools have been developed to reveal the scope and composition of soil-derived BGCs encoding NRPSs and PKSs (16,17) and to infer the chemical composition and structure of the metabolites produced by these synthases (18,19). For instance, amplicon sequencing-based approaches have been developed to target short fragments within adenylation (AD) (in NRPS) and ketosynthase (KS) (in PKS) domains. These amplicons can be used to ascertain the diversity and abundance of bacterial BGCs in complex environments as both AD and KS domains are important (in concert with other components) for the assembly, and thus the identity and activity, of the synthesized metabolites (20,21). To date, a few studies have explored the diversity and composition of bacterial SM-encoding BGCs in soil, demonstrating the vast genetic diversity and novelty of NRPS and PKS genes (22,23). However, little is known regarding the distribution of these gene families in the root microbiome, and their functional role in this complex community remains an enigma (24).
This study proposes a unique approach to analyze the diversity and potential functions of NRPSs and PKSs in the root, specifically focused on elucidating (i) the composition and diversity of NRPS-and PKS-encoding genes in the root environment relative to adjacent bulk soil, (ii) NRPS and PKS composition and expression in the root as a function of plant type, (iii) the sequence and inferred SM structures of whole bacterial BGCs that are highly abundant or expressed in root environments, and (iv) the occurrence of root-enriched bacterial BGCs in other ecosystems.
RESULTS
Composition and diversity of NRPS and PKS genes in roots versus bulk soil. To determine the composition and diversity of NRPSs and PKSs in tomato and lettuce root samples relative to bulk soil (previous studies targeting this controlled lysimeter system showed that bulk soils from tomato and lettuce microbiomes were almost identical, and therefore, only tomato soil was analyzed here), we applied a previously described amplicon sequencing approach to amplify the conserved adenylation (AD) and ketosynthase (KS) domains of NRPSs and PKSs, respectively (23). Overall, sequencing yielded totals of 1,850,442 and 2,174,020 raw KS and AD reads with average read lengths of 280 bp and 235 bp, respectively (see Table S1 in the supplemental material). Further filtering steps using QIIME2 and DADA2 denoising methods resulted in 2,980 and 3,269 nonredundant KS and AD domain sequences, respectively.
We observed significantly greater diversity of both AD and KS domains in the bulk tomato soil than in the adjacent roots (Fig. S1A and C). In contrast, no difference in diversity was observed between tomato and lettuce roots for either of the SM-encoding domains (Fig. S1B and D). To assess differences in AD and KS domain diversity between samples, a principal-coordinate analysis (PCoA) and analysis of similarity (ANOSIM) using the Bray-Curtis similarity index were performed (Fig. 1A and B). The KS and AD domain profiles of the roots (from both tomato and lettuce) formed distinct clusters, which were significantly different from those of the adjacent bulk soil (R = 0.332 [P < 0.05] for PKS; R = 0.308 [P < 0.01] for NRPS).
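The Bray-Curtis index underlying the PCoA and ANOSIM above can be computed directly from two abundance profiles. Below is a minimal pure-Python sketch of the formula; the study itself used standard ecology tooling, and the toy profiles are invented for illustration.

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors:
    1 - 2 * sum(min(u_i, v_i)) / (sum(u) + sum(v)).
    0 means identical profiles; 1 means no shared domains/taxa."""
    shared = sum(min(a, b) for a, b in zip(u, v))
    total = sum(u) + sum(v)
    return 1.0 - 2.0 * shared / total

# Two toy KS-domain abundance profiles (three domain variants each):
print(bray_curtis([6, 0, 4], [3, 2, 5]))  # 0.3
print(bray_curtis([1, 2, 3], [1, 2, 3]))  # 0.0
```

PCoA then ordinates the full pairwise dissimilarity matrix, and ANOSIM tests whether between-group dissimilarities exceed within-group ones; both are available in packages such as scikit-bio.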
To explore the potential novelty of root-associated SM-encoding genes, the amplified AD and KS domain sequences from the root and soil samples were first aligned against the MIBiG (Minimum Information about a Biosynthetic Gene Cluster) repository (25) using blastp, with a >50% amino acid sequence identity cutoff, and then grouped according to their identity to the MIBiG reference genes (Table 1). On average, more than 25% of the AD and almost 13% of the KS domain sequences in the root environment showed less than 50% amino acid identity with genes found in the MIBiG database (characterized as "unassigned"), whereas fewer than 1% and 6% of the AD and KS sequences, respectively, shared over 85% similarity to the reference MIBiG genes. These results demonstrate the profusion of potentially novel SM-encoding genes in both root and soil environments.
Pinpointing predicted root-enriched NRPs and PKs. As SMs are known to play critical roles in bacterium-bacterium and bacterium-plant interactions, we were interested in the associated metabolites synthesized by BGCs whose AD or KS domains were highly abundant and enriched in tomato or lettuce roots relative to bulk soil. To do so, the MIBiG-aligned amplicons were annotated to the corresponding BGC-associated metabolites, using a cutoff E value of <10^-40. Sequences that did not meet these criteria were defined as "unknown." Previous analyses have shown that compared to reference KS or AD domain sequences, amplicons with E values of <10^-40 are likely derived from the same BGC family as the reference sequence and thus may be inferred to encode a similar function (26-29).
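The two-step annotation rule described above (identity binning against MIBiG, then the E-value cutoff for metabolite association) can be sketched as follows. The hit records, field names, and metabolite assignments below are illustrative, not actual alignment output.

```python
# Sketch of the annotation rule: hits under 50% amino acid identity to a
# MIBiG reference are "unassigned"; hits that align but with E value >= 1e-40
# are "unknown"; only strong hits inherit the reference BGC's metabolite.

IDENTITY_CUTOFF = 50.0   # percent amino acid identity
EVALUE_CUTOFF = 1e-40

def annotate(hit):
    if hit["identity"] < IDENTITY_CUTOFF:
        return "unassigned"
    if hit["evalue"] >= EVALUE_CUTOFF:
        return "unknown"
    return hit["metabolite"]

hits = [  # toy blastp-style hit records
    {"identity": 42.0, "evalue": 1e-60, "metabolite": "stenothricin"},
    {"identity": 88.0, "evalue": 1e-20, "metabolite": "griselimycin"},
    {"identity": 91.0, "evalue": 1e-75, "metabolite": "griselimycin"},
]
print([annotate(h) for h in hits])  # ['unassigned', 'unknown', 'griselimycin']
```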
A differential abundance analysis using DESeq2 of the top 20 highly abundant AD and KS amplicons revealed that 55% (11/20) and 70% (14/20) of the amplicons in both of the plant root samples (tomato and lettuce, respectively) (adjusted P value of <0.1; log2 fold change of >5) were not associated with known BGCs; thus, their associated metabolites cannot be inferred (Fig. 2). Nonetheless, several root-enriched AD and KS domain sequences (9 in tomato and 6 in lettuce) were above these threshold values and thus can be considered congeners to known metabolites. These included the nonribosomal peptides stenothricin (30) and griselimycin (31), whose BGC NRPS analogues were highly abundant in the tomato and lettuce roots, respectively, and were less profuse in the bulk soil. While we cannot determine the actual role of the metabolites potentially encoded by these enriched BGCs, both stenothricin and griselimycin are known for their antimicrobial activity.
Next, we calculated the relative abundances of amplicons that were associated with known metabolites (based on the criteria described above) in the different root and soil samples (Fig. S2). To pinpoint associated BGCs that may play a role in adaptation to the root environment, we focused our analysis on inferred metabolites that were present in at least four of the root samples (tomato and lettuce) and in no more than one soil sample (Fig. 3). In addition, to identify BGCs specifically relevant to soil, we also selected inferred metabolites that were present in all three soil samples and in no more than one root-associated sample. For NRPs, we found BGCs associated with four metabolites that were highly abundant in both of the root samples (e.g., the Streptomyces-derived antibiotic macrolide family streptovaricin). For PKs, we again found several highly abundant Streptomyces-derived inferred metabolites, among others. These included lasalocid, sanglifehrin A, and azalomycin A. Interestingly, amplicons associated with the two former metabolites were also found to be highly enriched in roots relative to soil for lettuce (Fig. 2B). Overall, we found that 26 associated metabolites were present in at least one of the root-associated samples and completely absent in the soil samples, e.g., diaphorin (in lettuce) and basiliskamides (present in both tomato and lettuce) (Fig. S2).
Due to the potential biases associated with the above-described PCR-based approach, we analyzed previously reported (32) shotgun metagenomes of the same tomato and lettuce root samples (n = 3 each). Assembled open reading frame (ORF) sequences from the metagenomes identified using Prodigal were aligned against the MIBiG database, generating a list of the 50 most abundant NRPSs and PKSs in each of the root data sets (representing the normalized abundance within samples by plotting the coefficient of variation [CV] for each gene) (Fig. S3A and B). In addition, as gene clusters are often silent or expressed under very specific conditions, we evaluated gene expression in parallel to gene occurrence to uncover active BGCs with ecological importance in the highly dynamic root ecosystem. Thus, in parallel to the shotgun metagenome analysis described above, we applied a similar analytical approach using the previously collected shotgun metatranscriptomes (32) to identify NRPSs and PKSs with enhanced expression in lettuce and/or tomato root microbiomes. Interestingly, 60% (30/50) and 46% (23/50) of the AD and KS domains that were highly abundant in the tomato and lettuce root samples, respectively, were highly expressed as well (Fig. S3). Next, we filtered the highly abundant hits based on their CV values (<50) in order to analyze sequences with lower dispersion levels within tomato and lettuce root-associated samples, followed by taxonomic annotation using MEGAN. The resulting 42 sequences were clustered into two main phyla: Actinobacteria (13/42) and Proteobacteria (25/42). Several sequences could not be assigned a taxonomic affiliation, and one was assigned to the Bacteroidetes phylum (Fig. 4). While we could not infer the associated metabolites synthesized by most of these highly abundant sequences (including all of those assigned to the phylum Proteobacteria), suggesting their potential novelty, we managed to annotate several of the Actinobacteria-associated BGCs.
These were distantly associated with ossamycin (5 hits) and polycyclic tetramate macrolactams (PTMs) (2 hits, including the most highly abundant NRPS/PKS-related sequence and the 5th most expressed). Ossamycin is a known antifungal and cytotoxic macrocyclic polyketide originally isolated from soil-associated Streptomyces hygroscopicus subsp. ossamyceticus (33,34), while PTMs are a family of biologically important metabolites, including HSAF (heat-stable antifungal factor), ikarugamycin, and clifednamides, generally associated with different isolates of Actinobacteria and Gammaproteobacteria (35).
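The CV-based dispersion filter (CV < 50) used above can be sketched as below. It is assumed here that CV means 100 × standard deviation / mean with a population standard deviation (the paper does not state which estimator was used), and the gene names and abundances are invented for illustration.

```python
from statistics import mean, pstdev

CV_CUTOFF = 50.0  # percent; genes with higher within-group dispersion are dropped

def cv_percent(values):
    """Coefficient of variation as a percentage: 100 * sd / mean."""
    return 100.0 * pstdev(values) / mean(values)

abundances = {  # toy normalized abundances across three replicate root samples
    "nrps_ossamycin_like": [10.0, 11.0, 9.0],   # stable across replicates
    "pks_uncharacterized": [1.0, 40.0, 2.0],    # highly dispersed
}
kept = {g: v for g, v in abundances.items() if cv_percent(v) < CV_CUTOFF}
print(sorted(kept))  # ['nrps_ossamycin_like']
```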
Finally, to evaluate the extent to which the amplicon sequencing method was able to detect NRPSs and PKSs in the targeted samples relative to the PCR-independent shotgun metagenomic analyses, we analyzed the distribution of total MIBiG-associated genes in all four data sets (lettuce and tomato NRPS and PKS amplicons and tomato and lettuce metagenomes) (Fig. S4). In general, approximately 34% and 25% of the MIBiG-characterized genes were found in both amplicon sequences and metagenomes of the lettuce and tomato roots, respectively. In contrast, approximately 33% and 20% of the tomato and lettuce genes, respectively, were detected only in the shotgun metagenomic data sets. Fewer than 4% and 5% of the NRPSs and PKSs were found only within the tomato and lettuce amplicon sequencing data sets (28 and 35 genes in tomato and lettuce, respectively), and fewer than one-fifth (139 genes or 17.2%) were common to all four culture-independent data sets.
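The four-way data set comparison above reduces to set operations over gene identifiers. A sketch with toy IDs (the real analysis compared MIBiG-characterized genes across the two amplicon and two shotgun data sets):

```python
# Toy MIBiG gene-ID sets for the four culture-independent data sets.
datasets = {
    "tomato_amplicon":  {"g1", "g2", "g3", "g5"},
    "lettuce_amplicon": {"g1", "g2", "g4"},
    "tomato_shotgun":   {"g1", "g2", "g3", "g6"},
    "lettuce_shotgun":  {"g1", "g2", "g4", "g7"},
}

common = set.intersection(*datasets.values())   # genes in all four data sets
union = set.union(*datasets.values())           # genes detected anywhere
shotgun_only = (datasets["tomato_shotgun"] | datasets["lettuce_shotgun"]) - \
               (datasets["tomato_amplicon"] | datasets["lettuce_amplicon"])

print(sorted(common))                         # ['g1', 'g2']
print(round(100 * len(common) / len(union)))  # 29
print(sorted(shotgun_only))                   # ['g6', 'g7']
```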
Extraction and environmental distribution of whole SM-encoding gene clusters. The identification of NRPSs and PKSs that were either enriched in roots relative to adjacent bulk soil or abundant and/or highly expressed in lettuce and/or tomato roots encouraged us to capture whole BGCs associated with these sequences in order to shed light on their phylogenetic affiliation and potentially infer their function and chemical structure. This was achieved by screening the root-associated NRPS and PKS candidate sequences identified in this study against a large set of previously collated soil and rhizosphere cosmid libraries using the bioinformatic platform eSNaPD (29) (see Materials and Methods for the full pipeline). Five cosmid library targets showed low E values and high nucleotide sequence identities (>75%) to candidate NRPSs or PKSs. Sequencing and annotation of the metagenomic insert captured in each cosmid revealed three NRPS and two hybrid NRPS/PKS gene clusters (Fig. 5). Based on gene content and sequence identity, the identified gene clusters were not identical to any BGCs associated with known metabolites. The NRPS and PKS ORFs of two recovered clones (B326 and B385) were not affiliated with any known bacterial taxa (<50% nucleic acid identity to the NCBI database), while the other three clones were related to genes from Actinobacteria (Table 2). Of the cosmids recovered from the metagenomic libraries, clone B481 was nearly identical to an uncharacterized NRPS BGC found in the genome of Streptomyces cyaneogriseus (Fig. 6A). The only predicted chemical structure that we could infer from the recovered BGCs was for clone B893, which was related to an uncharacterized PKS gene cluster found in the genome of Saccharothrix saharensis, a filamentous actinobacterium isolated from desert soil. A detailed bioinformatic analysis of its PKS domains revealed that the gene cluster likely encodes an extended polyene substructure (Fig. 6B).
The seven PKS modules captured on the clone all contain dehydratase (DH) and ketoreduction (KR) domains, indicating that each module introduces a double bond into the polyketide backbone (Fig. 6B). While polyene substructures like this are seen in a number of natural products (36,37), they are most commonly seen in polyene antifungal agents, including many that are derived from Streptomyces species (e.g., cyphomycin, nystatin, filipin, and pimaricin). This may suggest that the BGC encodes an antifungal compound. The five recovered BGCs were initially targeted due to their abundance in tomato and lettuce root data sets, suggesting a link to root ecosystems. To test this hypothesis, we assessed the abundances of the five BGCs in a large collection of publicly available shotgun metagenomes (20 metagenomes from each environment) from four distinct environments, targeting gut (animal and human), aquatic (freshwater and marine), soil (different soil types), and root-associated (various plant species) data sets. Our analysis demonstrated that the recovered BGCs are ubiquitous in most of the queried root samples and in some of the soil samples (Fig. 7A). Only clone B893 showed a significantly higher abundance in the root samples than in the soil samples (P < 0.05 by a Wilcoxon test) (Fig. S5). B893 was found in 16/20 of the root-associated data sets that we examined (compared with 8/20 soil data sets) (Fig. 7B). In contrast, none of the five BGCs were found in any of the gut microbiome communities analyzed, and very few were detected in the aquatic environments (Fig. 7B).
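The root-versus-soil abundance comparison uses a Wilcoxon (rank-sum) test. A self-contained normal-approximation sketch is shown below; it omits tie correction and exact small-sample p-values (in practice one would use scipy.stats.mannwhitneyu or R's wilcox.test), and the per-sample abundance values are invented.

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.
    Ties receive average ranks; no tie correction of the variance."""
    combined = sorted((v, 0 if i < len(x) else 1)
                      for i, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):            # assign average ranks to tied runs
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + j + 1) / 2.0         # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    r1 = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    n1, n2 = len(x), len(y)
    u = r1 - n1 * (n1 + 1) / 2.0        # Mann-Whitney U for group x
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p

root = [16, 18, 15, 20, 17, 19, 16, 18]  # toy per-sample abundances of a BGC
soil = [4, 6, 5, 3, 7, 5, 4, 6]
print(rank_sum_p(root, soil) < 0.05)  # True
```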
DISCUSSION
NRPs and PKs produced by root-associated microbial communities play an important role in plant root ecosystems (38,39). Several studies have identified and characterized BGCs and/or associated metabolites in prominent plant-growth-promoting and biocontrol agents originally isolated from plant roots (40,41). However, the large fraction of uncultivated bacteria in root ecosystems and the limitations associated with culturing bacteria encouraged us to examine the composition of genes encoding NRPS and PKS in root environments using culture-independent approaches. These methods have been applied previously to understand SM-encoding gene diversity and distribution in bulk soil (17,26,42,43), but NRPSs and PKSs have not been comprehensively explored in root ecosystems. Furthermore, to the best of our knowledge, this is the first study to explore the expression of secondary-metabolite-associated genes in root ecosystems.
Our results demonstrate that the composition of NRPSs and PKSs in plant (tomato and lettuce) roots is distinct from that in adjacent bulk soil, that these genes in root microbial communities are less diverse than those found in soil microbiomes, and that a fraction of these NRPSs and PKSs are highly expressed in root ecosystems. This is consistent with several previous studies showing that the phylogeny and functionality of root-associated microbial communities are significantly different from those in adjacent soil (44-46). It is well established that plants interact with soil bacterial communities through the secretion of root exudates (47,48), resulting in the selection of specific microbial populations from the soil microbiome. This appears to be the case for the recruitment of plant-growth-promoting rhizobacteria (PGPR), which are known to harbor specific SM-encoding genes (4,49,50). Thus, while at this point we cannot infer the actual function of the highly abundant and expressed NRPSs and PKSs in the root environment, we can infer that they likely play a role in various processes, e.g., competition and root colonization (51,52).
While most of the detected SM-associated genes were too novel to link to any known metabolites, a few of the highly abundant, expressed, and/or root-enriched NRPSs and PKSs were associated with known metabolites. Azalomycin F, for instance, found to be associated with sequences in both of the plant microbiome samples, is a polyketide with reported antifungal activity against a variety of phytopathogens, which is produced by different soil-and root-associated Streptomyces isolates (53,54). Diaphorin, associated with sequences found only in the tomato root microbiome, is a pederin analogue known to be produced by the psyllid Diaphorina citri endosymbiont "Candidatus Profftella armatura" (Betaproteobacteria), with potential antifungal and cytotoxic activity (55). In this regard, the described culture-independent approaches are a promising platform for identifying novel BGCs and elucidating their roles in soil ecosystems and within the framework of drug discovery efforts, despite their current limitations (16,17).
A large fraction of the highly abundant and highly expressed NRPSs and PKSs identified in this study were not assigned, or had low identity (50 to 70%), to previously characterized genes in the MIBiG repository. Recently, a previously unidentified hybrid NRPS/PKS gene cluster was found to be essential for Rhizoctonia suppression by an endophyte Flavobacterium sp. (56), highlighting the vast amount of root-associated SMs with unidentified functional roles, which undoubtedly play a pivotal role in bacterium-plant interactions.
We screened a large set of soil cosmid libraries with candidate sequences from our amplicon sequencing and metagenomic analyses that were enriched (relative to bulk soil) and/or highly abundant and expressed in tomato and lettuce roots, taking advantage of a unique culture-independent platform capable of extracting and analyzing long NRPSs and PKSs (27,57). Five clones containing uncharacterized gene clusters with no known function, including two that were not associated with any known taxonomic group, were identified. The fact that all of these BGCs were rather common in various root-associated environmental metagenomes but rare or completely absent in other environments suggests their potential importance in these habitats. While we can only speculate as to their synthesized metabolites' actual activity, they were associated with bacterial groups well known for their antimicrobial capacity. Saccharothrix saharensis (Actinobacteria), for instance, which contains a BGC closely related to clone B893, is a soil-dwelling bacterium known to produce an array of antimicrobials (58). This BGC likely encodes a polyene substructure, often seen in antifungal agents, as it is capable of directly disrupting the fungal membrane (59). Of particular interest is clone B481, associated with SM-associated genes from Streptomyces cyaneogriseus, known for its ability to produce the biopesticide nemadectin (60). We speculate that this BGC may be associated with bacterium-fungus competition in the root ecosystems.
At the broader level, our results emphasize the need to look beyond basic descriptive diversity and composition information regarding the SM capacity of microbial communities. The pipeline adopted in this study, where potentially important NRPSs and PKSs are first identified, followed by the extraction of BGCs from cosmid libraries in order to identify potentially novel BGCs, has the advantage of being resource-efficient while yielding deeper knowledge regarding potentially important gene clusters. Future studies will focus on expressing these cosmid library BGCs in suitable hosts, enabling us to characterize their encoded metabolites and test their in vitro and in planta activities against various phytopathogens. Our results coincide with studies conducted in other plants, showcasing the as-yet-unexplored diversity of NRPSs and PKSs (43,61).
Overall, this study indicates that the root microbiome harbors a unique, diverse, and potentially novel array of SM-synthesizing genes, which are significantly different from those in the bulk soil microbiome. To enhance our current understanding, future research should focus on identifying additional factors shaping the occurrence and expression of SMs in the root microbiome (e.g., plant health, the presence of phytopathogens, and plant growth). This will undoubtedly help expose the ecological role of SMs in root ecosystems and provide a platform for drug discovery and novel and environmentally sustainable compounds for plant protection. Figure S6 in the supplemental material presents a conceptual description of the pipeline applied in this study.
MATERIALS AND METHODS
Amplicon sequencing, shotgun metagenomics, and metatranscriptomics analyses. Tomato soil and root samples and lettuce root samples were collected as previously described (62). Briefly, tomato (Solanum lycopersicum Heinz 4107) and lettuce (Lactuca sativa [romaine] Assaph) seedlings were planted and grown for 42 days in a random-block lysimeter experiment at the Lachish agricultural research station in Kiryat-Gat, Israel. Each sample type (soil, tomato roots, and lettuce roots) was analyzed using triplicate samples from three different lysimeters (thus, n = 3 for each sample type). Each triplicate consisted of a composite sample collected from 2 to 4 plants or 3 soil samples, taken from the distant edges of the lysimeters and away from plant roots. As the same soil was used throughout the experiment, and given that soil samples were collected at a sufficient distance from growing plants, bulk soil from the tomato lysimeters served as a reference point for both tomato and lettuce soils. A previous study showed that they harbored almost identical microbial communities (32). Soil samples were collected, frozen in liquid nitrogen on-site, and stored at −80°C until further analysis. Root samples were collected intact, and soil particles were removed by shaking and briefly rinsing. The roots were then lightly dried, immediately frozen in liquid nitrogen on-site, and kept at −80°C until processed. In this study, extracted DNA was used as the template for NRPS and PKS PCR amplification using degenerate primers (A3F/A7R for AD-NRPS and degKS2F/degKS2R for KS-PKS domains, as previously reported [23]). The resulting barcoded libraries were pooled and sequenced on an Illumina MiSeq instrument, employing V2 chemistry, at the University of Illinois-Chicago Sequencing Core (UICSQC). A total of 18 samples were sequenced, covering sampling location (tomato soil versus tomato and lettuce roots) and SM family (AD and KS domains), with three replicates for each treatment.
The resulting sequences were processed and demultiplexed using the QIIME2 pipeline and the integrated DADA2 method (63). Exact sequence variants (ESVs) represented by fewer than 3 sequences were removed from downstream analyses. Raw amplicon sequences are available via the MG-RAST data repository under accession number mgm4862150.3. In addition, shotgun metagenome and metatranscriptome data sets of lettuce and tomato roots (n = 3 for each data set type [hence, 6 for tomato and 6 for lettuce]) previously generated and analyzed from the same samples were also used for NRPS and PKS identification as described below (32). Shotgun sequence data are available via the NCBI Sequence Read Archive (SRA) data repository under project accession number PRJNA602301.
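The low-abundance filtering step above (dropping ESVs supported by fewer than 3 sequences) can be sketched in Python; the count table below is a hypothetical stand-in for the QIIME2/DADA2 feature table, not data from the study.

```python
# Sketch: remove exact sequence variants (ESVs) represented by fewer than
# 3 reads across all samples, mirroring the filtering step described above.
# The counts are invented; real input would be the DADA2 feature table.

def filter_rare_esvs(counts, min_total=3):
    """Keep only ESVs whose summed read count across samples is >= min_total."""
    return {esv: per_sample
            for esv, per_sample in counts.items()
            if sum(per_sample) >= min_total}

counts = {
    "ESV_1": [10, 4, 7],   # abundant: kept
    "ESV_2": [1, 1, 0],    # 2 reads total: removed
    "ESV_3": [0, 3, 0],    # exactly 3: kept
}
kept = filter_rare_esvs(counts)
```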
Identification and annotation of NRPSs and PKSs. For chemical diversity analysis of NRPS and PKS gene clusters, the different data sets (ESVs generated via amplicon sequencing and metagenome/metatranscriptome-assembled genes) were aligned against the Minimum Information about a Biosynthetic Gene Cluster (MIBiG) repository (version 6 August 2018). Only core NRPS and PKS genes (AD and KS domains) were included in the analysis. Alignment was performed using the diamond blastx command line, with >50% amino acid sequence identity. To associate each NRPS or PKS hit with its potentially synthesized metabolite, an E value of <10⁻⁴⁰ was used as a cutoff. Hits that did not pass this threshold were regarded as "unknown." For taxonomic annotation, sequences were aligned against the nonredundant (nr) BLAST NCBI database, followed by lowest-common-ancestor (LCA) classification using the MEGAN 6.15 Ultimate edition by taking the top 10% of hits and filtering for a minimum score of 50 and a maximum E value of 0.01 (64).
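The two-cutoff hit assignment described above (identity and E value, with failing hits labeled "unknown") can be sketched as follows; the field values and metabolite names are invented, and the real input would be DIAMOND's tabular output.

```python
# Sketch of the MIBiG hit-assignment logic described above: a hit is
# associated with a metabolite only if it passes BOTH the >=50% amino-acid
# identity and the E value < 1e-40 cutoffs; otherwise it is "unknown".
# All example values are invented.

def assign_metabolite(percent_identity, evalue, metabolite,
                      min_identity=50.0, max_evalue=1e-40):
    if percent_identity >= min_identity and evalue < max_evalue:
        return metabolite
    return "unknown"

hits = [
    ("ESV_1", 72.5, 1e-60, "surfactin"),     # passes both cutoffs
    ("ESV_2", 45.0, 1e-80, "bacillaene"),    # identity too low
    ("ESV_3", 90.0, 1e-20, "erythromycin"),  # E value too high
]
calls = {esv: assign_metabolite(pid, ev, met) for esv, pid, ev, met in hits}
```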
Root Microbiome Secondary Metabolite Diversity. mSystems, November/December 2020, Volume 5, Issue 6, e00866-20. msystems.asm.org
Conversion of gene identifiers to taxonomic path was done using the mapping files provided by MEGAN as of October 2016.
Soil library amplicon generation screening. Metagenomic libraries were constructed as previously reported (65). Briefly, in each library, crude environmental DNA (eDNA) was extracted directly from field-collected soil, gel purified, blunt ended, ligated into cosmid pWEB::TNC (Epicenter), packaged into lambda phage, and transferred into E. coli EC100 (Lucigen). Each library was expanded to contain 20 × 10⁶ unique cosmid clones with ~30- to 45-kb eDNA inserts and then arrayed into 768 subpools (two 384-well plates) containing ~25,000 unique cosmid clones per well. Each subpool was then stored as a glycerol stock for the clone recovery of interesting hits and as cosmid DNA to facilitate PCR-based screening. To generate an amplicon sequence database of NRPSs and PKSs, the following two sets of degenerate primers (AD and KS) were applied to amplify the conserved regions in the adenylation and ketosynthase domains in the biosynthetic gene cluster: AD forward primer 5′-SATBTAYACSTCVGGHWCSAC-3′ and reverse primer 5′-CCANRTCNCCBGTSYKGTACA-3′, and KS forward primer 5′-TGYTCSDSSTCGCTSGTSGCS-3′ and reverse primer 5′-GTNCCSGTSCCRTGBGCYTCS-3′. The 5′ ends of the primers were augmented with MiSeq sequencing adapters followed by unique 8-bp barcode sequences identifying the soil metagenome from which they were amplified. Amplicons were pooled as collections of 96 samples and cleaned using magnetic beads. Cleaned, pooled amplicons were used as the template in a second PCR. Prior to sequencing, all PCR amplicons were quantified by gel electrophoresis and mixed in an equimolar ratio. The resulting pool was fluorometrically quantified with HS D1000 ScreenTape and sequenced on an Illumina MiSeq instrument. Reads were debarcoded and trimmed to 240 bp.
The reads from each sample were clustered at 95% identity using UCLUST (66) to generate representative sequence tags.
Recovery of BGC clones from metagenomic library pools. The library well locations for target AD or KS domains were identified using well-specific barcodes incorporated into the degenerate primers (27). Specific primers with melting temperature (Tm) values of ~60°C (18 to 20 bp) were designed to amplify each unique conserved sequence of interest. To recover the single cosmid clone from each library subpool, a serial dilution of the whole-cell PCR strategy was used (17). Briefly, library glycerol stocks that contained target hits from eSNaPD analysis were inoculated into 3 ml LB broth (kanamycin and chloramphenicol) and grown overnight at 37°C to confluence. The cells cultured overnight were diluted to 2,000 CFU ml⁻¹, calculated from the optical density at 600 nm (OD600). The 384-well plates were inoculated with 50 µl of the resulting diluent (600 CFU/well) with an Eppendorf epMotion 5075 liquid handler, grown overnight, and screened using real-time PCR with a touchdown PCR program to identify wells containing target clones. Target-positive wells were diluted to a concentration of ~5 CFU ml⁻¹, and the process was repeated to identify new wells containing target clones. Five clone pools were then plated on a solid-agar plate, and target single clones were identified by clone PCR.
Analysis of recovered gene clusters. Recovered single-cosmid clones were miniprepped using a QIAprep kit and sequenced using MiSeq technology. The M13-40FOR and T7 universal primers were utilized to sequence both ends of the insert sequences. Reads, amplicons, and end sequences were assembled together to generate contigs using Newbler 2.6 (67). Fully assembled contigs were then analyzed using an in-house annotation script consisting of open reading frame (ORF) prediction with MetaGeneMark, HMM Scan, and BLAST search. The annotation script was developed using Python and is available at the GitHub open-source repository (https://github.com/brady-lab-rockefeller/gene_annotation). Putative functions and source organisms of genes in the BGC were assigned based on the closest characterized gene in the NCBI database. KnownClusterBlast in antiSMASH 5.0 (68) was utilized to analyze the relationship between known characterized gene clusters and recovered BGCs. The structures of the adenylation and ketosynthase domains in the BGCs were predicted by antiSMASH, which employs three prediction algorithms: NRPSPredictor2, the Stachelhaus code, and SVM (support-vector machine) prediction. These predicted building blocks were then utilized to predict a final structure in combination with known characterized BGCs from cultured bacteria.
Recovered clone search in environmental shotgun metagenomes. AD and KS domains from all five recovered gene clusters were searched against shotgun metagenomes from four different environments: animal and human feces (gut), aquatic, soil, and root associated. We selected 20 Illumina-sequenced shotgun metagenomes from each of these ecosystems using the JGI IMG/MER advanced search option, followed by a Blastn search using the IMG website online tool. Additional filtering was performed based on an E value threshold (<10⁻⁴⁰) and identity (>85%). For each hit, counts were normalized using gyrB and rpoB housekeeping gene counts (obtained via the IMG/MER platform). A relative-abundance heat map was created using the pheatmap R package (69). For gene clusters with more than one AD/KS domain, results are shown only for data sets that contained all cluster-belonging domains. A gene cluster presence/absence plot was created using the ggplot2 R package. Further information regarding selected metagenome data sets is shown in Table S2.
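The paper does not spell out the normalization formula, so the sketch below assumes one plausible scheme: dividing each metagenome's domain-hit count by the mean of its gyrB and rpoB counts. The numbers are invented.

```python
# Sketch of housekeeping-gene normalization, ASSUMING hit counts are
# divided by the mean of the gyrB and rpoB counts for that metagenome
# (the study does not state the exact formula). Values are invented.

def normalize(domain_hits, gyrB, rpoB):
    """Relative abundance of BGC-domain hits per 'average genome'."""
    housekeeping = (gyrB + rpoB) / 2.0
    return domain_hits / housekeeping

rel = normalize(domain_hits=12, gyrB=40, rpoB=60)  # 12 / 50
```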
Statistical analyses. Alpha (Simpson index) and beta (Bray-Curtis) diversity indices across environments (bulk soil versus roots and soil versus tomato versus lettuce) were calculated using the R package vegan (70). Variation in NRPS/PKS-associated genes was visualized by principal-coordinate analysis (PCoA) using the same R package. To obtain this figure, we performed ordination on an ESV count table (constructed by QIIME2) using Bray-Curtis distances, followed by plotting using the R ggplot2 package. Significance of differences between groups was determined using ANOSIM (analysis of similarity) in vegan.
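The Bray-Curtis distance and PCoA ordination performed above with vegan can be sketched from first principles; the count table here is invented, and classical metric scaling (double-centering plus eigendecomposition) stands in for vegan's implementation.

```python
# Sketch: Bray-Curtis dissimilarity between samples, then a classical
# PCoA (metric MDS) on the distance matrix. The 3x3 count table
# (samples x ESVs) is invented.
import numpy as np

def bray_curtis(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.abs(u - v).sum() / (u + v).sum()

def pcoa(D, k=2):
    """Classical PCoA: double-center squared distances, eigendecompose."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # ascending eigenvalues
    idx = np.argsort(w)[::-1][:k]            # take the k largest
    w = np.clip(w[idx], 0.0, None)           # guard tiny negatives
    return V[:, idx] * np.sqrt(w)            # sample coordinates

counts = np.array([[10, 0, 5],
                   [8, 1, 6],
                   [0, 9, 2]])
n = len(counts)
D = np.array([[bray_curtis(counts[i], counts[j]) for j in range(n)]
              for i in range(n)])
coords = pcoa(D)
```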
Enrichment of ESVs between soil and tomato and lettuce roots and of NRPS/PKS-related sequences between root shotgun metagenomes and metatranscriptomes was determined using DESeq2 (71). Only sequences with an adjusted P value of <0.1 (Wald test P values corrected for multiple testing by the Benjamini-Hochberg method [72]) were chosen.
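The Benjamini-Hochberg step-up correction applied above to the Wald-test P values can be sketched as follows; the P values are invented.

```python
# Sketch of the Benjamini-Hochberg procedure: adjusted p_i = min over
# j >= rank(i) of p_(j) * m / j, clamped to be monotone. Sequences with
# an adjusted P < 0.1 would be retained, as in the text.

def benjamini_hochberg(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running_min = 1.0
    for offset, i in enumerate(reversed(order)):  # largest p first
        rank = m - offset                          # 1-based rank
        running_min = min(running_min, pvals[i] * m / rank)
        adj[i] = running_min
    return adj

adj = benjamini_hochberg([0.01, 0.04, 0.03, 0.5])
significant = [p < 0.1 for p in adj]
```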
Data availability. Raw AD and KS amplicons from the tomato root, lettuce root, and tomato bulk soil microbial communities sequenced in this study are available via the MG-RAST data repository under accession number mgm4862150.3. Previously sequenced shotgun metagenome and metatranscriptome data sets (32) are available via the NCBI SRA data repository under project accession number PRJNA602301.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only.
Feasibility Study of Hypernucleus Production at NICA/MPD
The NICA (Nuclotron-based Ion Collider fAcility) project at the Joint Institute for Nuclear Research (JINR, Dubna, Russia) is aimed at the construction of a new accelerator complex for heavy ions and polarized particles. Heavy-ion collisions at NICA are planned to be studied in the region of the highest net-baryon density, which favors the formation of bound nuclear systems with strangeness, i.e., hypernuclei. The multipurpose detector (MPD) at NICA is designed to reconstruct interactions of relativistic nuclei in a high-multiplicity environment. In this paper, we report the feasibility study results for the reconstruction of 3 Λ H, 4 Λ H, and 4 Λ He in Bi+Bi collisions at the nucleon-nucleon center-of-mass energy √s_NN = 9.2 GeV.
Introduction
Heavy-ion collisions offer a unique method to create hot and dense nuclear matter in the laboratory. If the temperature in the medium exceeds 150 MeV, quark and gluon degrees of freedom appear, and quark-gluon plasma (QGP) can be formed. The deconfinement phase transition is also possible at densities of a few times the normal nuclear density. The production of strange quarks relative to non-strange ones changes in the partonic reactions; thus, strangeness production was proposed as a QGP signature [1]. Moreover, it was suggested that the nature of the matter created in heavy-ion interactions can be characterized via baryon-strangeness correlations inside the medium [2,3]. Hypernuclei are bound systems of nucleons and Λs; thus, their production rates are sensitive to the initial hyperon-baryon phase-space correlation [4]. Furthermore, to understand the basic properties of neutron stars, a good knowledge of the dense matter equation of state is crucial. Due to the large density in the core of a neutron star, strange degrees of freedom (hyperons) are expected to appear [5,6]. The presence of hyperons and their role in the properties of neutron stars depend strongly on the in-medium hyperon-nucleon potential. Hypernuclei offer a unique opportunity to elucidate strong interactions involving hyperons. For instance, precise measurements of hypernucleus lifetimes can provide valuable information on the hyperon-nucleon interaction strength (for recent results on hypernucleus lifetimes, see [7,8]).
The NICA (Nuclotron-based Ion Collider fAcility) project [9] is aimed at the construction of a new accelerator complex for heavy ions and polarized particles at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. The NICA complex will be capable of providing ion beams (from protons to bismuth ions) in the energy range from 4 to 11 GeV (in the nucleon-nucleon center-of-mass system) at the nominal luminosity of L = 10²⁷ cm⁻² s⁻¹. NICA offers a unique possibility to study the properties of strongly interacting matter in the region of high net-baryon density. In particular, precise measurements of hypernuclei, including their yields, lifetimes, and binding energies, are among the key objectives of the NICA physics program. The benefit is that statistical thermal models predict the highest production rates of (hyper)nuclei in the NICA energy range [10].
The main goal of this paper is to perform a feasibility study aimed at testing the MPD's performance in the reconstruction of hypernuclei in heavy-ion collisions.
The MPD at the NICA Complex
The multipurpose detector (MPD) is the main experimental setup for the study of heavy-ion collisions at the NICA collider [11]. It comprises (see Figure 1) a set of subdetectors within a superconducting solenoid, which provides a homogeneous axial magnetic field of 0.56 T. The inner tracker system (IT) surrounds the beam pipe and consists of 6 layers of silicon pixel detectors. Its main goal is to allow very precise tracking in the high-track-occupancy region near the primary interaction vertex as well as accurate reconstruction of the decay vertices of short-lived particles. Three-dimensional tracking in the MPD experiment is performed with a time-projection chamber (TPC). The TPC is a cylinder 3.4 m long and 2.8 m in diameter. Its active gas volume, which is separated into two equal parts by a cathode membrane in the center, is filled with a mixture of argon and methane gases. The ionized electrons drift towards the end plates under the applied uniform electric field. Each end plate consists of a multiwire proportional chamber with a cathode pad readout. The TPC has the pseudorapidity coverage |η| < 1.3. Tracks with the maximal radial length have 53 measurements on the trajectory, and it was found that a relative momentum resolution of 3% can be achieved within the transverse momentum interval up to 2 GeV/c, where c denotes the speed of light. In addition to its tracking capability, the TPC provides ionization loss measurements in the gas with a resolution of the order of 8%. The MPD phase-space coverage in the forward region is achieved with the forward end cap tracker (ECT). The ECT is situated behind the TPC end plates and is made of several layers of cathode-pad chambers (CPC). The time-of-flight (TOF) system, which is made of multi-gap resistive-plate chambers (MRPC), is situated after the TPC. It covers the pseudorapidity range |η| < 1.4 and has an intrinsic time resolution of 60 ps.
Exploiting the time-of-flight information allows powerful discrimination between pions, kaons, protons, and light nuclei in the momentum range 0.1-4 GeV/c. The goal of the electromagnetic calorimeter (ECAL) is the detection of electrons and photons. It also allows the detection of neutral mesons via their decay in two photons. The ECAL is made of sampling (lead+scintillator) modules with a light readout with silicon photomultipliers. Two arrays of the forward hadronic calorimeter (FHCAL) are used to perform event centrality selection. Two arms of Cherenkov counters of the forward detector (FD) are meant to provide an online trigger and start timing for the TOF detector.
Reconstruction of Hypernuclei in the MPD Experiment
The NICA complex is planned to operate with beams of bismuth ions during the start-up period. The NICA operation time will be divided between accelerator studies, beam commissioning, and data collection for physics. During this period, beam collisions will be performed at 9.2 GeV (4.6 GeV per beam) with a reduced luminosity of 5 × 10²⁵ cm⁻² s⁻¹. The MPD team has defined a plan to collect 40-50 million Bi+Bi collisions per week during the first year of NICA operation. In this study, we use a similar amount of simulated events. As an input for our feasibility study, we use the parton-hadron quantum molecular dynamics (PHQMD) microscopic model [12]. This event generator uses an n-body transport approach, which describes heavy-ion collisions, including the formation of bound systems containing strangeness. As reported in Ref. [13], PHQMD reproduces the existing experimental data on hadron and (hyper)nucleus production in a broad collision energy range and over a wide phase-space region. We use a set of 4 × 10⁷ Bi+Bi collisions at √s_NN = 9.2 GeV. In order to have sufficient statistics of 4 Λ H and 4 Λ He nuclei in the analysis, their yields in the model are enriched by a factor of 40. All produced particles from the model are propagated through the MPD detector with the GEANT program, which simulates interactions in the material. All the simulated energy deposits are transformed into the detector response (space points) using a realistic description of physics processes in the MPD detector elements. The produced space points are then reconstructed by system-specific cluster-finding procedures. For example, a TPC cluster is a collection of registered charges in several neighboring space and time bins (the bin size is 5 mm in the direction perpendicular to the beam axis and 100 ns in the time direction). The reconstructed clusters are then combined into tracks using the Kalman filter approach [14].
To find the main collision vertex, all the reconstructed tracks are extrapolated toward the detector center. By extrapolating tracks from the MPD center to the surface of the TOF detector, one can find the matches of TPC tracks with the hits in the TOF system, and for all matched cases, the mass squared (divided by the magnitude of the particle's charge) can be calculated as m² = p²(c²t²/l² − 1), where l denotes the track length, p the total momentum, q the magnitude of the particle's charge, and t the time of flight. Identification of charged hadrons and light nuclei in the analysis relies on the combination of the information on the ionization energy loss, dE/dx (the energy deposited by charged particles in layers of thickness dx), in the TPC gas and the mass squared from the TOF. The quantity of interest (dE/dx or m²) is compared to the expectation for a given species. For the case of the ionization loss, the expected value is taken from the Bethe-Bloch distribution, while for the case of the mass squared, the 'measured' value is compared to the particle's rest mass. In Figure 2, the specific energy loss and mass squared for hadrons and light nuclei from Bi+Bi collisions are shown as a function of rigidity, p/q. The red lines indicate the ±3σ boundaries from the expected positions used to separate particles of different types; here, σ denotes the corresponding resolution. It should be noted that momentum reconstruction relies on the assumption that all particles are singly charged. Thus, for doubly charged nuclei (3 He and 4 He), the reconstructed momentum is half the actual value, and the calculated mass squared is less than the nominal value by a factor of 4. In this case, 4 He and deuteron candidates have the same m², and discrimination between these two species is achieved using only the ionization loss information.
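The time-of-flight mass determination above can be sketched numerically: with track length l, momentum p, and flight time t, the velocity is β = l/(ct) and m² = p²(1/β² − 1) in units with c = 1 for masses and momenta. The track parameters below are invented.

```python
# Sketch of the TOF mass-squared formula m^2 = p^2 (c^2 t^2 / l^2 - 1),
# with momenta and masses in GeV (c = 1), lengths in metres, times in ns.
# The example track is a hypothetical proton-like candidate.
import math

C = 0.299792458  # speed of light in metres per nanosecond

def mass_squared(l_m, p_gev, t_ns):
    beta = l_m / (C * t_ns)                 # measured velocity in units of c
    return p_gev ** 2 * (1.0 / beta ** 2 - 1.0)

m_true = 0.938272                            # proton mass, GeV
p = 1.5                                      # momentum, GeV/c
beta = p / math.hypot(p, m_true)             # beta = p / E
t = 3.0 / (C * beta)                         # expected flight time over 3 m
m2 = mass_squared(3.0, p, t)                 # recovers m_true^2
```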
Once a helium candidate is selected within the expected boundaries, the correct momentum information can be obtained from the refit procedure using the proper value for the electric charge of the candidate.
Hypertritons are reconstructed through the topological decay 3 Λ H → 3 He + π −. Once a pair of tracks identified as 3 He and a negatively charged pion is selected, the distance of closest approach (DCA) between the candidates is determined, thus defining the decay point of the hypertriton candidate. If the DCA value is larger than a given threshold, the pair is rejected. For the surviving candidates, each daughter track is propagated back to the main vertex, requiring it to have a minimum DCA to the primary vertex to avoid selecting primary particles as daughters. Using the values of the reconstructed momentum components of the tracks, the momentum and the invariant mass of the hypertriton candidate are calculated. Further selection regards the direction of the reconstructed momentum vector of the candidate, which is required to point to the main vertex position by applying a cut on the cosine of the pointing angle. The analysis of reconstructed invariant mass distributions is performed in bins of transverse momentum, p_T = √(p_x² + p_y²), of 0.5 GeV/c width, where p_x (p_y) are the particle momentum components in the x (y) direction. An example distribution for p_T = 2.0-2.5 GeV/c is shown in Figure 3 (left). Blue symbols indicate the reconstructed data. The shown invariant mass distribution can be described (fitted) by a sum of a Gaussian and a polynomial. The Gaussian corresponds to the signal peak, while the polynomial represents the background (uncorrelated combinations of candidates which passed the selection criteria) in a certain range around the mass peak. The resulting fit is plotted in Figure 3 (left) by the red line. The figure also indicates the signal parameters (mass and sigma) as well as the signal-to-background ratio (S/B) and significance (S/√(S + B)). The raw signal is extracted by summing the bin content of the histogram over the 4σ region around the nominal peak position.
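The invariant-mass calculation for a (3He, π−) pair mentioned above can be sketched as M² = (E₁ + E₂)² − |p⃗₁ + p⃗₂|², with daughter energies computed from the assigned rest masses. The daughter momenta below are invented (masses in GeV/c², c = 1).

```python
# Sketch of the two-body invariant mass for 3_Lambda_H -> 3He + pi-.
# Daughter momenta are invented example values, not detector data.
import math

M_HE3, M_PI = 2.80892, 0.13957  # approximate rest masses, GeV/c^2

def inv_mass(p1, m1, p2, m2):
    """Invariant mass of a two-track candidate from momenta and masses."""
    e1 = math.sqrt(sum(c * c for c in p1) + m1 * m1)
    e2 = math.sqrt(sum(c * c for c in p2) + m2 * m2)
    psum = [a + b for a, b in zip(p1, p2)]
    return math.sqrt((e1 + e2) ** 2 - sum(c * c for c in psum))

m = inv_mass((1.2, 0.3, 0.1), M_HE3, (0.15, -0.05, 0.02), M_PI)
```

A candidate's invariant mass is always at least the sum of the daughter rest masses, with equality only when the daughters are relatively at rest.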
The amount of the combinatorial background, which is estimated from the fit, is subtracted from the bin-counted signal. The extracted signal value in each p_T bin is then corrected by applying the efficiency coefficient. This overall efficiency was obtained from the Monte Carlo data and includes the detector acceptance, the branching ratio, the reconstruction efficiency of the daughter particles, the particle identification efficiency of the daughters, as well as the efficiencies of the topological cuts applied in the secondary vertex reconstruction procedure. Figure 3 (right) shows the overall reconstruction efficiency for hypertritons as a function of p_T. A fully corrected invariant transverse momentum spectrum of hypertritons from Bi+Bi collisions is plotted in Figure 4. The reconstructed points, which are shown by red symbols, are compared to the initial distribution from the model (blue symbols). As can be seen, both spectra agree within the errors. Hypertritons are unstable particles; thus, one expects that the yields of hypertritons in proper time intervals drop off exponentially, with the lifetime as a slope parameter. In order to extract the value of the lifetime τ₀, one has to fit the yields with N(τ) = N₀ exp(−τ/τ₀), where τ = t/γ = LM/(pc) is the proper time, the factor γ = 1/√(1 − (v/c)²), v is the velocity, L is the decay distance, p is the particle momentum, and M = 2.991 GeV/c² is the hypertriton rest mass. We extract the hypertriton signal in several bins over the proper time interval [0.1, 1.5] ns. An example invariant mass distribution for τ = [0.1, 0.3] ns is plotted in Figure 5 (left). The signal in each bin is then corrected by the overall efficiency, and the final distribution is plotted in Figure 5 (right). The distribution is fitted with this exponential, and the result of the fit is shown by a solid line. The extracted lifetime parameter ('p1' in Figure 5) is obtained to be 265 ± 4 ps, which is quite close to the theoretical (model) value of 263 ps.
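The lifetime extraction just described can be sketched with a log-linear least-squares fit of N(τ) = N₀ exp(−τ/τ₀); the binned yields below are invented and noise-free, so the fit recovers the input τ₀ exactly.

```python
# Sketch: proper time tau = L * M / (p * c) per candidate, then a
# log-linear fit of exponentially falling binned yields to extract tau0.
# Yields are invented, noise-free values, not MPD data.
import math

C = 0.299792458          # m/ns
M = 2.991                # hypertriton rest mass, GeV/c^2

def proper_time(L_m, p_gev):
    """Proper time in ns from decay distance and momentum."""
    return L_m * M / (p_gev * C)

TAU0 = 0.263             # input lifetime, ns
taus = [0.2, 0.4, 0.6, 0.8, 1.0]
yields = [1000.0 * math.exp(-t / TAU0) for t in taus]

# Fit log(N) = log(N0) - tau / tau0 by ordinary least squares.
xs, ys = taus, [math.log(n) for n in yields]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
tau0_fit = -1.0 / slope
```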
Heavier hypernuclei are reconstructed through the decay 4 Λ H → 4 He + π − for 4 Λ H nuclei and through the decay mode 4 Λ He → 3 He + p + π − for 4 Λ He. This analysis was performed in the full MPD phase space (without subdivision into p_T or proper time intervals) with the goal of obtaining an estimate of the overall MPD efficiency for these hypernucleus species. Figure 6 shows invariant mass distributions for 4 Λ H (left) and for 4 Λ He (right). As can be seen, the efficiency for 4 Λ He is lower because the three-body topology is much more complicated than the two-body one. Finally, one can estimate the number of registered MPD events for 3 Λ H during the first running year of the NICA complex (when the collider luminosity is expected to be 5 × 10²⁵ cm⁻² s⁻¹), taking into account the results obtained in this study for the reconstruction of hypernuclei. Exploiting the MPD reconstruction efficiency and model predictions for 3 Λ H, one expects to register about 10³ hypertritons per week of data taking. Since the production rates of heavier hypernuclei (4 Λ H and 4 Λ He) are much lower, a real study of their production will start once the NICA collider achieves its nominal luminosity of 10²⁷ cm⁻² s⁻¹. Figure 6. Invariant mass distribution for (4 He, π −) pairs (left) and for the combination of 3 He, proton, and π − (right). The blue symbols indicate the reconstructed data, while the red line shows a fit by a sum of a Gaussian and a polynomial.
Summary
In this paper, we present the results of the evaluation of the MPD setup performance in reconstructing hypernuclei in heavy-ion collisions. The assessment was performed by means of Monte Carlo simulation of Bi+Bi collisions from the PHQMD model. The yield of hypertritons is obtained in the transverse momentum interval p_T = 1.0-4.5 GeV/c and over a proper time range of 0.1-1.5 ns. In addition, the MPD's efficiency for the reconstruction of 4 Λ H and 4 Λ He hypernuclei is estimated. An approximate estimate of the number of reconstructed hypertritons during the first data-taking period at NICA is made.
Conflicts of Interest:
The authors declare no conflict of interest.
|
Molecular Detection and Epidemiological Features of Selected Bacterial, Viral, and Parasitic Enteropathogens in Stool Specimens from Children with Acute Diarrhea in Thi-Qar Governorate, Iraq
Knowledge of etiology causes of diarrheal illness is essential for development and implementation of public health measures to prevent and control this disease syndrome. There are few published studies examining diarrhea in children aged <5 years in Iraq. This study aims to investigate the occurrences and epidemiology of selected bacterial (Salmonella spp. and Campylobacter spp.), viral (adenovirus, norovirus GI and GII, and astrovirus), and parasitic (Entamoeba spp. and Giardia spp.) agents in stool samples from 155 child diarrheal cases enrolled between March and August 2017, in a hospital-based cross-sectional study in Thi-Qar, southeastern Iraq. Using molecular techniques and sequence-based characterization, adenovirus was the most frequently detected enteropathogen (53/155 (34.2%)), followed by Salmonella spp. (23/155 (14.8%)), Entamoeba spp. (21/155 (13.5%)), and Campylobacter spp. (17/155 (10.9%)). Mixed infection with Salmonella spp. and Campylobacter spp. was evident, and the same was revealed between various enteric viruses, particularly adenovirus and norovirus. The most frequent co-infection pattern was between adenovirus and Campylobacter spp., in seven cases (7/155 (4.5%)). Whole-genome sequencing-derived typing data for Salmonella isolates (n = 23) revealed that sequence type 49 was the most prevalent in this sample set (15/23 (65.2%)). To the best of our knowledge, this study provides the first report on detection and identification of floR, blaCARB-2, and mphA antimicrobial resistance genes in Salmonella isolated from children in the Middle East region. Logistic regression analysis pointed to few enteropathogen-specific correlations between child age, household water source, and breastfeeding patterns in relation to the outcome of detection of individual enteropathogens. This study presents the first published molecular investigation of multiple enteropathogens among children <5 years of age in Iraq. 
Our data provide supporting evidence for planning of childhood diarrhea management programs. It is important to build on this study and develop future longitudinal case-control research in order to elaborate the epidemiology of enteropathogens in childhood diarrhea in Iraq.
Introduction
Diarrheal diseases accounted for 8% of all deaths in children under five years of age in 2016, and this translates to over 1300 young children dying each day, or approximately 480,000 children a year [1]. In Iraq, the impact of war, sanctions, and sectarian violence left a dysfunctional health system and an on-going public health emergency impacting vulnerable sections of the population, particularly children. Several viral, bacterial, and parasitic infections are among the most common causes of acute diarrheal cases in children [2]. Published studies on childhood diarrhea are lacking in Iraq and, therefore, the pathogen spectrum associated with diarrheal disease requires investigation.
Among enteric viruses, rotavirus is the most commonly identified cause of severe diarrhea among children in Iraq, as well as in many developing countries [3,4]. Adenoviruses are also implicated in several viral outbreaks and sporadic cases across all age groups, causing a broad spectrum of clinical symptoms and occurring throughout the year [5][6][7]. Within the adenovirus F subgenera, serotypes HAdV-40 and HAdV-41 are associated with significant outbreaks of disease in infants and children [7]. Other viruses commonly associated with acute gastroenteritis globally include noroviruses (NoVs) and human astroviruses (HAstVs) [8,9]. After enteric viruses, bacterial causes are ranked as the second most common cause of diarrhea in developing countries. Campylobacter is a potential etiological agent of bacterial enteritis both in children and adults, and it is second in prevalence to Salmonella and similar to Shigella in many countries [10]. Non-typhoidal Salmonella spp. are among the leading causes of gastroenteritis worldwide, with an increased incidence observed in children less than five years old [11,12]. Invasive cases of non-typhoidal Salmonella are frequently reported in infants and young children with a higher risk of secondary complications such as bacteremia and meningitis [13]. In addition, the recent increase of multidrug resistance (MDR) among non-typhoidal Salmonella species is a serious problem worldwide, due to the widespread use of traditional antibiotics in human and veterinary medicine, raising global public health concern [14]. Next to viral and bacterial causes, amebiasis and giardiasis are among the major intestinal parasitic infections causing childhood diarrhea in many developing countries [15], and are endemic throughout socio-economically deprived communities [16,17]. 
Given the multifactorial nature of diarrheal illnesses, it is suggested that enteric pathogen co-infections play an important role in gastroenteritis; however, research efforts often focus on a small range of species belonging to a few pathogen groups [13][14][15][16][17][18]. Thus, studies oriented at investigating the role of co-infections with enteric pathogens in cases of acute diarrhea are required.
In Iraq, the morbidity and mortality associated with diarrhea is high, particularly among children <5 years [4]. Elevated morbidity and mortality is predominantly due to serious challenges facing the delivery of basic public health and environmental sanitation services across Iraq, after decades of war and political instability. Previously, we investigated gastroenteritis caused by Salmonella infection among children aged below five years in Thi-Qar, southeastern Iraq [18]. Thi-Qar is one of the least developed and poorest governorates in Iraq, and it is important to investigate the spectrum of infectious causes of children diarrhea in such an unprivileged setting in Iraq. Hence, we transported aliquots of fecal samples from child diarrheal cases recruited in Thi-Qar (Iraq) to the Antimicrobial Resistance and Infectious Disease (AMRID) laboratory at Murdoch University (Australia). The present study is pilot in nature, and aims to conduct a comprehensive molecular screening survey of selected viral, bacterial, and parasitic agents. This molecular-based survey hopes to explore the coexistence between several infectious pathogens, along with their related clinical and epidemiological features among children with acute diarrhea in Thi-Qar.
Study Setting and Design
The study population consisted of children below five years of age presenting with acute diarrhea to the Enteric Diseases Clinics of two referral children's hospitals in Thi-Qar, a regional governorate situated in southeastern Iraq, between March and August 2017. This survey is a follow-up to an initial hospital-based cross-sectional study that focused on culture-based screening and characterization of non-typhoidal Salmonella [18]. The initial study included 320 diarrhea cases of children below five years; details of case enrolment, stool specimen collection, and the questionnaires administered to each child's parent or guardian to gather basic socio-demographic information and potential risk factors for infection are presented in full detail elsewhere [18].
For the present study, aliquots of fecal samples from approximately half of the 320 diarrhea cases enrolled in the primary study [18] were selected for further molecular screening of a panel of enteropathogens. The decision to select half of the cases was based on feasibility and cost-effectiveness. Random selection of the cases was done using the "select cases" tool in the Statistical Package for the Social Sciences software (SPSS for Windows, version 15.0), with the option of selecting approximately 50% of the cases. Thus, in this study, 155 aliquots of stool specimens from children below five years presenting with acute diarrhea were selected for molecular screening of a panel of viral, bacterial, and parasitic enteropathogens.
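The random ~50% selection step can be sketched as follows. This is a minimal illustration, not the SPSS procedure itself; the case IDs and the fixed seed are hypothetical:

```python
import random

def select_cases(case_ids, fraction=0.5, seed=2017):
    """Randomly select a fixed fraction of enrolled cases, mirroring the
    'select approximately 50% of cases' step performed in SPSS."""
    rng = random.Random(seed)               # fixed seed for reproducibility
    k = round(len(case_ids) * fraction)
    return sorted(rng.sample(case_ids, k))

all_cases = list(range(1, 321))             # the 320 enrolled diarrhea cases
subset = select_cases(all_cases)
print(len(subset))                          # exactly half here; the study retained 155
```

SPSS's "approximately 50%" option draws each case independently, so the retained count varies around half (155 in the study); the exact-k sketch above is the deterministic variant.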
Stool Samples Processing and DNA Extraction
The sampled stool specimens were kept at 4 °C at the hospital facility, and each sample was divided into two aliquots; one aliquot was placed in Amies transport medium with charcoal (COPAN, Italy), labeled, and transported under cold chain to the Microbiology Laboratory, University of Thi-Qar, for Salmonella detection using the culture-based method [18]. The second aliquot was stored in RNAlater® solution (Ambion, USA) as per the manufacturer's instructions and then shipped from Iraq to Australia. Molecular analysis was conducted at the AMRID Laboratory of Murdoch University. Genomic DNA was extracted from all fecal samples suspended in RNAlater® solution using a Bioline fecal DNA kit (ISOLATE II Genomic DNA Kit, Bioline), according to the manufacturer's recommended protocol. Purified DNA was stored at −20 °C until further analysis. The following panel of enteropathogens was screened (Table 1): (a) Salmonella spp. and Campylobacter spp. as targeted common bacterial enteric pathogens; (b) adenovirus, norovirus (GI and GII), and astrovirus as representative viral causes of diarrhea; and (c) Entamoeba spp. and Giardia spp. as targeted parasitic causes. In this study, we use the term co-infection to denote cases where different classes of enteropathogens were detected together, for example, a bacterium and a virus in the same stool specimen. We use the term mixed infection to refer to cases where different agents from the same class were detected together, such as two bacterial species.
Bacteria
In the present study, the randomly selected aliquots of fecal samples encompassed 23 of the total 33 non-typhoidal Salmonella isolates identified in the previous study [18]. We further characterized those 23 non-typhoidal Salmonella isolates using whole-genome sequencing (WGS). WGS was used to validate the previous serotype identities of the Salmonella isolates, to screen for antimicrobial resistance genes and match them against the resistance phenotypes, and to determine multilocus sequence types (MLST). For WGS, library preparation was performed using an Illumina Nextera® XT library preparation kit (Illumina) as per the manufacturer's instructions. Sequencing was performed on an Illumina NextSeq platform using a mid-output 2 × 150 bp kit. Reads were de novo assembled using SPAdes 3.11.1 software [19]. Contig files were uploaded to the Center for Genomic Epidemiology (http://www.genomicepidemiology.org/) to determine MLST and serotypes, and to extract antimicrobial resistance gene data. In all transferred stool aliquots (n = 155), we screened for the S. enterica gene invA using conventional PCR as previously described by Swamy et al. [20]. Screening for Campylobacter was undertaken using a conventional PCR assay targeting the 16S ribosomal RNA (rRNA) gene according to the protocol described by Barletta et al. [21].
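As a rough sketch of the assembly step, a SPAdes invocation for one isolate's paired-end reads could be built like this. The file and directory names are hypothetical; the flags are the standard SPAdes paired-end options, not a record of the study's exact command line:

```python
def spades_command(r1, r2, outdir, threads=8):
    """Build a SPAdes de novo assembly command for Illumina paired-end
    reads (the study used SPAdes 3.11.1 on NextSeq 2 x 150 bp data)."""
    return ["spades.py", "-1", r1, "-2", r2, "-o", outdir, "-t", str(threads)]

cmd = spades_command("isolate01_R1.fastq.gz", "isolate01_R2.fastq.gz", "isolate01_asm")
print(" ".join(cmd))
```

The resulting `contigs.fasta` in the output directory is what would then be uploaded to the Center for Genomic Epidemiology tools for MLST, serotype, and resistance-gene calls.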
Viruses
Reverse-transcription PCR (RT-PCR) assays were performed using SuperScript III One-Step Platinum® Taq (Invitrogen, USA) for detection of three enteric viruses: human astrovirus, norovirus group 1 (GI), and norovirus group 2 (GII) [22], while adenovirus was detected by conventional PCR [23]. The specific primers used in these reactions are outlined in Table 1. Bands of the expected size from each assay were excised from 1.5% agarose gels and DNA was purified through filter tips. DNA sequencing was performed at the Australian Genome Research Facility (Perth, WA). Sequencing results were analyzed and edited using FinchTV (Version 1.4), then compared to the most similar sequences deposited in the National Center for Biotechnology Information GenBank database using the Basic Local Alignment Search Tool (BLASTn).
Two randomly chosen samples which were positive on the adenovirus screening PCR were analyzed using WGS, utilizing the same procedures described above. Initial screening for adenovirus genomes was performed using SPAdes, and generation of complete genomes was performed using Geneious V10.2.3 to map raw read data against a representative HAdV-41 genome (GenBank Accession KY316161). Annotation of adenovirus genomes was performed using Geneious V10.2.3. The adenovirus sequences were submitted to NCBI GenBank under the accession numbers MG925782 (MU22) and MG925783 (MU35).
Parasites
A nested PCR assay for the detection of Entamoeba species in stool aliquots was utilized according to the procedure previously described by Al-Areeqi et al. [24]. The presence of Giardia spp. in all samples was screened at the glutamate dehydrogenase (gdh) locus using a quantitative PCR (qPCR) [25].
Statistical Analysis
Descriptive data analysis was used to determine the frequency of enteropathogen occurrence and its distribution over a range of variables related to the study subjects. Statistical analyses were performed using univariable logistic regression (STATA software package, version 11.0). Univariable logistic regression models were used to examine the association between demographic characteristics, household features, and breastfeeding patterns and the binary outcome variable of pathogen detection (presence vs. absence of a given pathogen in diarrheal stool samples). The analysis examined the association between the predictor variables and each of adenovirus, Salmonella spp., Campylobacter spp., and Entamoeba spp., the four enteropathogens most frequently detected in the diarrheal stool samples. The analysis did not include the other enteropathogens, which were detected in less than 10% of the diarrheal samples, nor the various mixed- and co-infection combinations, which were detected in low numbers (between 1 and 7) of samples.
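For a single binary predictor, the univariable logistic-regression odds ratio equals the cross-product odds ratio of the 2×2 exposure-by-outcome table, with a Wald 95% confidence interval on the log scale. A minimal sketch (the counts below are hypothetical, not the study's raw data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & pathogen-positive, b = exposed & negative,
    c = unexposed & positive,        d = unexposed & negative."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration
or_, lo, hi = odds_ratio_ci(10, 40, 20, 30)
print(round(or_, 3))  # 0.375
```

An OR below 1 with a CI excluding 1, as for some associations reported below, indicates significantly lower odds of detection in the exposed group.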
Ethics and Consent Approval
The study protocol was approved by the Murdoch University Human Research Ethics Committee (Permit No. 2015/224). Permission to conduct the study was also obtained from the Ministry of Health, Iraq (Permit No.11/5/393) and the children's hospitals in Thi-Qar Governorate (Permit No.1/4/26885). As the study subjects were children under the age of five, informed verbal consent was obtained from their caregivers (parents/guardians) before enrolment. Movement of samples from Iraq to Australia was granted by the Department of Agriculture (Australian Government), under quarantine import permit number 0000369563.
Results
In this study, we tested a total of 155 stool samples from children with acute diarrhea. Of all cases, the male:female ratio was 1.4:1 and 93 (60%) were under two years of age (Table 2). Descriptive information about demographic characteristics of the cases, their breastfeeding patterns in the first six months of age, and recorded household features, together with information about caregivers' hygiene practices, is presented in Table 2.
Among all samples, adenovirus was the most frequently detected enteropathogen (53/155 (34.2%)), followed by Salmonella spp., Entamoeba spp., and Campylobacter spp. (Table 3). Those four etiologic agents accounted for 73.4% of the spectrum of enteropathogens detected in the study samples. Group I noroviruses were the least detected (5/155 (3.2%)) among the panel of enteropathogens screened for in this study (Table 3). WGS analysis of two adenovirus PCR-positive samples using SPAdes de novo assembly of raw reads returned large contigs consistent with HAdV-41 adenovirus genomes. Raw read files were mapped to the HAdV-41 KY316161 genome to obtain an entire genomic sequence for each strain. BLASTn analysis of strains MU22 and MU35 demonstrated the highest homology to existing HAdV-41 strains, with 98.6% pairwise homology to GenBank accessions AB728839 and KY316161, and 98.56% homology to each other.
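The detection frequencies quoted here and below follow directly from the raw counts over the 155 screened samples:

```python
def pct(count, total=155):
    """Detection frequency as a percentage of the 155 screened samples."""
    return round(100 * count / total, 1)

assert pct(53) == 34.2   # adenovirus, the most frequent
assert pct(5) == 3.2     # norovirus GI, the least frequent
assert pct(7) == 4.5     # adenovirus + Campylobacter co-infection
assert pct(9) == 5.8     # bacterial + viral + parasitic co-infection
print(pct(53))           # 34.2
```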
Results presented in Table 3 highlight the diverse nature of pathogens among cases of acute diarrhea in Iraqi children. Mixed infection with the bacterial pathogens Salmonella spp. and Campylobacter spp. was evident, and the same was revealed between various enteric viruses, particularly adenoviruses and noroviruses (Table 3). Nevertheless, there was no mixed infection between the two parasitic agents Entamoeba spp. and Giardia spp. (Table 3). Moreover, co-infection with different classes of enteropathogens was common among the tested samples. Of interest, co-infection with adenovirus and Campylobacter spp. was detected in seven cases (7/155 (4.5%)). Co-infection with bacterial, viral, and parasitic etiologic agents all together was detected in nine cases (9/155 (5.8%)). Table 4 summarizes the WGS-derived typing data for the Salmonella isolates (n = 23). A total of four multilocus sequence types (STs) were characterized among the 23 isolates, of which S. typhimurium ST49 was the most common (15/23 (65.2%)). All of the whole-genome-sequenced Salmonella isolates harbored at least one tet gene, with the tetB gene having the highest frequency (n = 12), followed by tetA (n = 8) and tetG (n = 3). Five groups of streptomycin-resistance genes were detected among 17 of the 23 Salmonella isolates, consisting of aadA7 (n = 5), strB (n = 4), strA (n = 3), aadA2 (n = 3), and aadA1 (n = 2). For aminoglycoside resistance genes, 11 of the 23 Salmonella isolates carried aph(3')-Ic (n = 7) or aac(3)-Id (n = 4). The sul1 gene was identified in eight sulfonamide-resistant Salmonella isolates. The β-lactamase (blaCARB-2) and kanamycin (aph(3')-Ia) resistance genes were each found in four Salmonella isolates. Furthermore, trimethoprim (dfrA14), azithromycin (mphA), erythromycin (erm42), and florfenicol (floR) resistance genes were also detected in a few isolates (Table 4).
Few enteropathogen-specific associations were significant based on logistic regression analysis (Figure 1). Among the cases enrolled in this study, detection of Entamoeba spp. was less likely (p < 0.001) to occur among children younger than two years (odds ratio (OR) = 0.12, 95% confidence interval (CI) = 0.03-0.37). Lower odds (p = 0.051; OR = 0.46, 95% CI = 0.21-0.99) of adenovirus detection were associated with children exclusively breastfed compared to children exclusively bottle-fed. Among the present study subjects, the likelihood of PCR detection of Campylobacter spp. in children from households supplied with pipe water was 5.12 (95% CI = 1.12-23.27) times higher (p = 0.034) than in children from households supplied with (purchased) reverse osmosis-treated water. Figure 2 shows the relationship between caregivers' hygienic practices and the occurrence of the four most frequently detected enteropathogens in diarrheal cases. According to the logistic regression model, the odds of Entamoeba spp. detection in children whose caregivers reported always washing hands after cleaning child defecations were roughly three times lower (p = 0.030; OR = 0.34, 95% CI = 0.14-0.90) than in children whose caregivers did not (at all/not always) wash hands.
Discussion
The majority of enteric bacterial and viral pathogens are not routinely screened for in Iraqi hospitals, due to a lack of basic diagnostics and sufficiently trained personnel [26]. The war in the last decade destroyed substantial capacities of hospitals and public health laboratories in Iraq, and nearly two-thirds of its qualified medical personnel emigrated [27]. In general, published research on diarrheal illnesses in the Iraqi population is very limited, and has mainly focused on screening for single-pathogen infections or, at best, infections with pathogen groups [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18]. In the present study, we report the first molecular epidemiological investigation describing the occurrence and co-existence of several enteropathogens in stool samples from diarrheal children <5 years of age in one of the least developed governorates in Iraq. In addition, we profiled sequence types and genes conferring resistance to several antimicrobial groups among non-typhoidal Salmonella isolated from the diarrheal cases. This study demonstrates the value of WGS as a tool for comprehensive analysis of bacterial and viral pathogens commonly detected in diarrheal patients.
In the present study, human adenovirus (HAdV) was the most common enteropathogen, detected in 53 (34.2%) cases. A survey of patients under five years in Australia from 2007 to 2010 revealed that adenovirus was also the most common cause of gastroenteritis in the studied population, emphasizing the importance of this virus in childhood diarrhea in both developing and developed countries [8]. Detection rates in the present study are considerably higher than those published for HAdV in children with diarrhea from other Middle East and North African countries such as Kuwait (4%) [28], Qatar (6.25%) [29], Saudi Arabia (8%) [16], and Egypt (20%) [30]. Our finding is also higher than the published occurrence of HAdV in East Asia (9.8-20%) [31] and in Bangladesh (10.7%) [32]. The reasons for the higher HAdV detection rate are not clear. However, it is important to note that the pan-adenovirus PCR assay used in this study detects all adenovirus serotypes and not just the enteric serotypes F40 and F41 [33]. Nevertheless, WGS analysis of two of the PCR-positive samples collected in this study allowed the extraction of complete genomes of HAdV-41 serotypes, providing evidence that this enteric serotype, as expected, is circulating in the study population in Iraq. Also worth highlighting is that our study design may have influenced the observed frequency of enteropathogens, including HAdV. Because the diarrheal cases were sampled in the warmer summer months, it is possible our observed frequencies are higher than at other times of year, as described elsewhere [5]. Hence, future case-control studies are needed to accurately estimate the frequency of pathogens and potential changes due to seasonality.
Non-typhoidal Salmonella was detected in 14.8% of the stool samples from children with diarrhea, and was the second most frequently detected enteric pathogen in this study. This result is consistent with studies conducted previously in Iraq (15%) [34], and in neighboring countries such as Kuwait (18%) [35] and Saudi Arabia (15.3%) [16]. Our results re-emphasize the importance of non-typhoidal Salmonella in the epidemiology of childhood bacterial diarrhea in Iraq. We previously demonstrated that a higher likelihood of positive isolation of non-typhoidal Salmonella from child diarrheal cases in Thi-Qar was associated with the source of water and the presence of domestic animals in the household, as well as with caregiver education level and hygienic practices [18].
Entamoeba spp. ranked third among the seven enteropathogens screened for in the present research. This finding is consistent with previous surveillance data from Saudi Arabia [16], Oman [36], Yemen [24], and Libya [17], where Entamoeba spp. were commonly isolated from children's diarrheal samples. Intestinal parasitic infection is a significant public health burden, especially in poor and socio-economically deprived communities [16], which is relevant to the situation in Thi-Qar, where 37.8% of the population lives below the poverty line of USD $2.5 per day [37]. In addition, the proportion of the population in Thi-Qar using an improved sanitation facility is very low, with only 20.8% utilizing the public sewage system as the primary system, while 39.4% rely on a covered canal outside the house and 30% primarily use a septic tank [37]. An alarming 54.8% of the population in Thi-Qar disposes of garbage in open areas [37]. A number of case-control and cohort studies on diarrhea in children have demonstrated that unsafe water supply and poor sanitation are important risk factors associated with enteric parasite infections [16][17][18][19][20][21][22][23][24].
Very limited research has been conducted on Campylobacter occurrence in childhood diarrhea in Iraq, as it is not screened for in pediatric hospitals, hampering our understanding of the role of Campylobacter spp. in diarrheal illness in this setting. Our results indicate positive PCR detection of Campylobacter spp. in 10.9% (17/155) of the screened stool samples from child diarrheal cases. Interestingly, recent findings from the Global Enteric Multicenter Study [38] indicate that the fraction of severe diarrheal cases in infants attributed to Campylobacter jejuni or Campylobacter coli ranged from 6% in Kenya to 12% in Bangladesh, which is comparable to the present finding from Iraq. Based on logistic regression analysis, we also conclude that the likelihood of detection of Campylobacter spp. was higher in children from households supplied with pipe water than in those supplied with reverse osmosis-treated water. This finding is in accordance with a cross-sectional study on Campylobacter infections among diarrheic children in Ethiopia, where the highest rates of infection were reported in children whose families did not use a protected water source [39]. In spite of growing evidence regarding the burden of Campylobacter-attributed diarrhea in developing countries, we know little about what, how, and where children contract infection [40]. Further research is urgently required to investigate the roles of supplied household water and of domestic animals in the transmission of Campylobacter jejuni, especially in populations living in poor sanitary conditions, similar to those in Thi-Qar in southern Iraq.
The spectrum of co-existence of enteric pathogens and their role in diarrheal illnesses could be better understood by utilizing recent advances in diagnostic tools. The utilization of molecular tools in the present study shed light on the potential occurrence of mixed infection between the bacterial pathogens Salmonella spp. and Campylobacter spp., and the same was revealed between various enteric viruses (Table 3). Children can be exposed to multiple pathogens at home, at the playground, and in daycare [41]. The presence of mixed infections complicates the diagnosis of the specific pathogen responsible for disease and may result in an additive impact, leading to more severe clinical disease [42]. Our results also point to an intriguing frequency (4.5%) of co-infection between adenovirus and Campylobacter spp. This co-infection pattern should be viewed in parallel with the results of the logistic regression modeling, as our results pointed to higher odds of adenovirus detection in children exclusively bottle-fed (compared to exclusively breastfed), as well as a higher likelihood of PCR detection of Campylobacter spp. in children from households supplied with pipe water (compared to reverse osmosis-treated water). In settings where potable water may be limited or surfaces contaminated, cleaning feeding bottles adequately may be impossible, placing infants at a heightened risk of infectious disease. The role water plays in the epidemiology of adenoviruses and Campylobacter, as well as the potential health risks posed by these pathogens in water environments, is widely recognized [43][44][45]. Adenoviruses are the only DNA viruses in the enteric virus group. They are robust, non-enveloped viruses with a double-stranded DNA (dsDNA) genome and are, thus, more persistent in the environment, including water sources, than other enteric viruses [44].
A recent multi-country study suggests that treatment of drinking water and improved sanitation reduced risk associated with Campylobacter infection [46]. The frequent co-infection that we report in the present study between adenoviruses and Campylobacter warrants a hypothesis that an interaction between hygiene and contaminated water might be a possible route of children co-exposure to both pathogens in Thi-Qar.
Our results demonstrate the usefulness of WGS-derived data in providing in-depth insight into non-typhoidal Salmonella isolated from children with diarrhea. To the best of our knowledge, this is the first published WGS-based characterization of Salmonella from clinical samples from a Middle-Eastern country. ST49 was the most frequent genotype, followed by ST198 and ST52. The standardization of data and the portable nature of sequence-based typing allow this method to be used as a worldwide epidemiological tool to study source attribution of enteric pathogens. The STs characterized among the Salmonella isolates in our study were recently reported in human salmonellosis cases from neighboring Qatar [47], as well as in the United Kingdom [48]. In several studies, ST49, ST198, and ST52 were also frequently carried in cattle and poultry sources contaminated with Salmonella, which might have played an important role in human exposure to infection through food and environmental sources [47][48][49][50].
Analysis of WGS data also revealed that tetracycline, streptomycin, and aminoglycoside resistance genes were commonly harbored by the Salmonella isolates characterized in this study (Table 4). The emergence and spread of antimicrobial resistance in Salmonella is a threat to human public health [14]. The high resistance rates to traditional antibiotics in the current study could be explained by the fact that many of these antibiotics in Iraq, as in other developing countries, are still indiscriminately prescribed in human medicine due to their low cost and wide availability [51]. Three tetracycline resistance genes (tetB, tetA, and tetG) were detected among the sequenced Salmonella isolates. A study in Iran found a similar pattern, as the same three genes were the most commonly identified in tetracycline-resistant Salmonella from human stool samples [52]. Florfenicol is a synthetic broad-spectrum antibiotic related to the chloramphenicol class and is mainly used in veterinary medicine [53]. In this study, WGS identified floR in 4.3% of S. enterica isolates, which is lower than the findings of a study in Taiwan, where floR was identified in 19% of Salmonella isolates from children [54]. In the isolates characterized in the present study, the blaCARB-2 and floR genes were all associated with S. typhimurium, with the exception of one strain of S. hadar that harbored the blaCARB-2 resistance gene. A similar finding was demonstrated by Randall et al. [55], who also found the blaCARB-2 and floR genes to be linked with S. typhimurium isolated from humans and animals. Using WGS, Nair et al. [48] observed that resistance to azithromycin among Salmonella serovars isolated from humans was linked with the presence of the mphA gene. This is consistent with our results, in which mphA genes in two isolates were associated with the azithromycin resistance profile.
To the best of our knowledge, this is the first report of detection and identification of floR, bla CARB-2 , and mphA in Salmonella isolated from children in the Middle East region.
Effective hand hygiene is essential to prevent the spread of microbes from person to person and to reduce cross-contamination from hands to food [56]. In the present study sample, the likelihood of Entamoeba spp. detection was significantly lower in children whose caregivers reported always washing hands after cleaning child defecations (compared with children whose caregivers did not). However, our study data could not establish a relationship between caregivers' hygienic practices and the occurrence of the other frequently detected bacterial and viral pathogens (Figure 2). It is possible that caregivers' hygienic practice is a limited route of exposure compared to other sanitary and environmental routes, and, hence, no tangible relationship emerged in the present study sample. It is worth noting that it is not uncommon to experience difficulty in establishing statistical relationships between hygiene-related factors and infections that are multifactorial in nature, as is often the case for diarrheal illnesses. For instance, recently concluded randomized controlled trials that tested the efficacy of improvements in drinking water, sanitation, and hand washing (WSH) in low- and middle-income countries found no significant effects on gut markers of environmental enteric dysfunction, growth at 18 months of age, or diarrhea incidence in two out of three sites [57]. The lack of statistical significance in individual studies should not be taken as implying that the totality of evidence supports no effect. Hence, it is recommended that mothers always be encouraged to wash their hands after using the toilet, after cleaning the child's bottom following defecation, and before feeding the child, as inadequate hand hygiene can transfer contamination to surfaces and foods in the home [58].
Conclusions
To the best of our knowledge, this study presents the first published molecular investigation of multiple enteropathogens among children <5 years of age in Iraq. Although this was not a case-control study, the frequency of detection of adenovirus, Salmonella, Campylobacter, and Entamoeba suggests that these organisms are important causes of diarrhea in this population. More information is needed about the sources, modes of transmission, and risk factors of enteropathogens in Iraqi children in order to develop methods to control these infections. In future work, it is important to build on the present study and plan longitudinal case-control research to investigate in depth the epidemiology of enteropathogens in childhood diarrhea, and to perform environmental, water source, and animal sampling. The phenotypic and genotypic characterization of Salmonella resistance to several clinically important antimicrobials emphasizes the need for long-term monitoring. Overall, this work fills a gap in research on the frequency of a range of enteropathogens and could be used by public health authorities to inform diarrhea control programs among infants and children in Iraq.
Antiplasmodial activity of chalcone derivatives compound through phagocytosis of kupffer cells in experimental malaria hosts
Open a cess: www.balimedicaljournal.org and ojs.unud.ac.id/index.php/bmj
INTRODUCTION
Malaria is a global infectious disease caused by the Plasmodium and transmitted by the Anopheles mosquito. Malaria still posed a considerable burden on global health, especially in Africa and Asiatic countries. In Indonesia, there was a decreasing trend of malaria morbidity in 2009-2016 from 1.8/1,000 at risk population in 2009 to 0.84/1,000 at risk population in 2016. However, some provinces in Indonesia still have a considerable burden from malaria. For example, Papua is one of the provinces with the highest Annual Parasite Incidence (API) with 45.85 per 1,000 inhabitants. 1 The main problem of malaria is not only because of its mortality and impact on national productivity but also due to the drug resistance found in P.falciparum and P. Vivax. Furthermore, vaccine development is also hampered by the complex life cycle of P. Falciparum. 2, 3 This phenomenon has ignited extensive research to find an effective alternative of current antimalarials. The ideal antimalarial drugs should fulfill several criteria, including low toxicity and high efficacy in combating the major species of Plasmodium.
Fortunately, there have been several promising compounds that had been evaluated for their antimalarial properties. Chalcones (1,3-diaryl-2-propen-1-ones) are secondary metabolites of flavonoids found in several plant species and have antimalarial activity. 4,5 Antimalarial activity of chalcones was investigated after a report on the research results stating potent antimalarial activity in vitro and in vivo from a compound, namely Licochalcone A, an isolated compound from the root of Chinese licorice. 6,7 Suwito et al. had designed several chalcones derivatives as an inhibitor in ferredoxin (Fd) interaction with ferredoxin-NADP + reductase (NFR), which is a crucial redox system in P. Falciparum's survival. 8 The result indicated that one of the synthesized compounds (E)1-(4 aminophenyl)-3-(2,3-dimethoxy phenyl) prop-2-en-1-one had a remarkable inhibitory activity against Plasmodium through molecular docking. They also found that the amino group of the aminomethoxy derivative of chalcones played an essential role in inhibition through electrostatic interactions and can form a more stable complex with NFR compared to Fd. Likewise, an in vitro study found that (E)1-(4 aminophenyl)-3-(2,3-dimethoxy
Antiplasmodial activity of chalcone derivatives compound through phagocytosis of kupffer cells in experimental malaria hosts
Lilik Wijayanti 1 , Paramasari Dirgahayu 2 , Yulia Sari 3 , Danus Hermawan 4 , Ida Nurwati 5 Introduction: Malaria is still one of the major health problems, specifically due to drug resistance in Plasmodium, which encourages extensive research to find effective alternatives. One of the new antimalarial compounds is chalcone-derivative compound (E)-1-(4-aminophenyl)-3-(2,3-dimethoxy phenyl)prop-2-en-1-one. However, its potency is still needed to be evaluated. Therefore, this study aimed to determine the efficacy and identify the pharmacological mechanism of this chalcone-derivative. phenyl)prop-2-en-1-one had good antiplasmodial activity, selectivity index, and plasmodial growthinhibiting effect. 9 Finally, the in vivo study also confirmed the efficacy of this substance, in which it was shown that the effective dose (ED50) for this substance was at 17.36 mg/kgBW/day. 10 Overall, the initial evidence showed that (E)1-(4 aminophenyl)-3-(2,3-dimethoxy phenyl)prop-2-en-1-one has a potent antimalarial ability.
Other studies have also revealed that (E)-1-(4-aminophenyl)-3-(2,3-dimethoxy phenyl)prop-2-en-1-one can alter the formation of hemozoin and stomatocytes. 11 However, this compound's effect on host immunity, especially immunity mediated by Kupffer cells, has not yet been investigated. It is known that Plasmodium infection induces host immunity, and one of the immune responses involved is mediated by Kupffer cells in the liver. 12 Therefore, this study aimed to evaluate the effect of (E)-1-(4-aminophenyl)-3-(2,3-dimethoxy phenyl)prop-2-en-1-one on Kupffer cell phagocytosis in the presence of Plasmodium infection.
In vivo antiplasmodial activity testing
The in vivo antiplasmodial activity was evaluated in Swiss mice infected with Plasmodium berghei using the classical 4-day suppressive test. The research protocol was approved by the Health Research Ethics Committee, Dr. Moewardi General Hospital/Faculty of Medicine, Universitas Sebelas Maret, Surakarta. The P. berghei strain was obtained from the Department of Parasitology, Faculty of Medicine, Universitas Sebelas Maret, Surakarta. The Swiss mice were obtained from the Integrated Research and Testing Laboratory, Universitas Gadjah Mada, Yogyakarta. P. berghei-infected mouse erythrocytes were obtained from donor mice and resuspended in RPMI 1640 medium to a volume of 0.2 ml a day before inoculation. Ninety male mice (20-25 g, 6-8 weeks old) were inoculated intraperitoneally with 10^7 P. berghei-infected erythrocytes. The mice were then divided into nine groups of ten. The first four groups received the tested compound at 10, 20, 40, or 80 mg/kgBW/day. The next four groups were treated with doxycycline at 0.25, 0.5, 1, or 2 mg/kgBW as the positive control. The last group (negative control) received only aquadest. Each dose of the tested compound or doxycycline was given daily for four consecutive days, starting two hours after inoculation and continuing until the third day. A day after the last treatment, a Giemsa-stained thin blood smear from the tail vein was prepared, and the parasitemia level was determined microscopically by counting the number of parasitized erythrocytes out of 200 erythrocytes in random microscopic fields.
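The parasitemia count described above, and the percent suppression conventionally derived from it in the 4-day suppressive test, can be sketched as follows. The cell counts used here are hypothetical, not values from the study:

```python
def parasitemia_percent(parasitized: int, total: int = 200) -> float:
    """Parasitemia level as the percentage of parasitized erythrocytes
    among `total` counted cells (200 in this protocol)."""
    return 100.0 * parasitized / total

def suppression_percent(treated: float, negative_control: float) -> float:
    """Percent suppression of parasitemia relative to the untreated
    (aquadest) group, as commonly reported for the 4-day test."""
    return 100.0 * (negative_control - treated) / negative_control

# Hypothetical counts: 24/200 parasitized cells in a treated mouse,
# 60/200 in a negative-control mouse.
treated = parasitemia_percent(24)   # 12.0 %
control = parasitemia_percent(60)   # 30.0 %
print(suppression_percent(treated, control))  # 60.0
```

Group-level suppression would then be computed from mean parasitemia per group rather than from single animals.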
Histological preparations
After treatment, the mice were sacrificed on the determined day by neck dislocation. The livers were then processed for histology using the paraffin-block method and stained with hematoxylin and eosin (HE). The right lobe of the liver was sliced at the middle of the lobe to obtain uniform preparations. Three sections, 3-8 µm thick and spaced about 25 sections apart, were made from each right lobe. Three preparations were thus made from each experimental host, and Kupffer cells were counted at 400X magnification. Additionally, the number of Kupffer cells that had phagocytosed infected erythrocytes was also counted.
Data analysis
The data were compiled and the study groups were compared using ANOVA, followed by a post-hoc multiple-comparisons test (Tamhane) to compare the groups against each other. The numbers of Kupffer cells were compared using the Kruskal-Wallis and Mann-Whitney tests. A p-value ≤ 0.05 was considered significant.
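To make the screening step of this pipeline concrete, here is a minimal pure-Python sketch of the one-way ANOVA F statistic used to detect between-group differences. The Kupffer-cell counts below are hypothetical, and in practice the Tamhane post-hoc and the non-parametric tests would be run with a statistics package:

```python
from statistics import mean

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: ratio of the between-group
    mean square to the within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)   # between-group variance estimate
    ms_within = ss_within / (n - k)     # pooled within-group variance
    return ms_between / ms_within

# Hypothetical Kupffer-cell counts per field for three groups
groups = [
    [10, 11, 9, 10, 12],    # aquadest (negative control)
    [21, 23, 22, 20, 24],   # chalcone derivative, one dose level
    [30, 31, 29, 32, 30],   # doxycycline, one dose level
]
F = one_way_anova_F(groups)
print(round(F, 1))  # a large F indicates the groups differ
```

The F value is then compared against the F distribution with (k−1, n−k) degrees of freedom to obtain the p-value.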
The Relationship between the test substance and the number of Kupffer cells
After analyzing the effect of the chalcone derivative (E)-1-(4-aminophenyl)-3-(2,3-dimethoxy phenyl)prop-2-en-1-one on parasitemia, we assessed its effect on the number of Kupffer cells in the liver. As depicted in Table 2, the number of Kupffer cells doubled in the mice that received the chalcone derivative at 10 mg/kgBW compared to the negative control group (aquadest), doubled again at 20 mg/kgBW, and began to plateau at higher doses. Doxycycline was also effective in increasing the number of Kupffer cells and appeared to have higher efficacy than the tested chalcone derivative: the number of Kupffer cells tripled at 0.25 mg/kgBW and increased further at higher doses. The ANOVA indicated significant differences between the study groups. In-depth analysis using the post-hoc test revealed that the chalcone derivative at 20 mg/kgBW had an effect comparable to 0.25 mg/kgBW doxycycline. Also, 80 mg/kgBW of the chalcone derivative did not differ significantly from 40 mg/kgBW in increasing the number of Kupffer cells, and 40 mg/kgBW of the chalcone derivative had an effect comparable to 0.5 and 1 mg/kgBW doxycycline, while 2 mg/kgBW doxycycline proved superior even to 80 mg/kgBW. The visual depiction of the Kupffer cells in the liver slices is presented in Figure 3.

DISCUSSION

Malaria is a disease caused by Plasmodium parasites and is transmitted by the Anopheles mosquito. Malaria can cause anemia, reduce productivity, and cause death, especially in high-risk groups, namely infants, toddlers, and pregnant women. Many of these deaths occur in children in Africa, and it is estimated that every minute one child dies of malaria. The main problem in the treatment or eradication of malaria to date is the increasing resistance of P. falciparum and P. vivax to antimalarial drugs. Meanwhile, the development of malaria vaccines is hampered by the complexity of P. falciparum's life cycle, and efforts therefore need to be made to develop multistage vaccines. 2 Drug resistance in Plasmodium is continuously reported from various parts of the world.

Consequently, the World Health Organization (WHO) has instructed that monotherapy protocols be stopped and recommended Artemisinin Combination Therapy (ACT) to improve treatment outcomes and reduce mortality. Meanwhile, rigorous drug-discovery research continues in order to find new antimalarial agents as alternatives to current ones. An ideal antimalarial drug should meet several criteria, one of which is to have mild side effects with low toxicity. Chalcones (1,3-diaryl-2-propen-1-ones) are secondary metabolites of flavonoids that can be found in several types of plants. 13 Chalcones and their derivatives are known to have various interesting biological activities such as antiviral, anti-inflammatory, antimicrobial, antitumor, cytotoxic, analgesic, antifungal, antioxidant, anticancer, and antiplasmodial effects. 14, 15 In line with previous studies, this study found that the chalcone derivative effectively reduced parasitemia and increased the number of stomatocyte-containing Kupffer cells. Compared to doxycycline, the tested compound was still inferior: 40 mg/kgBW of (E)-1-(4-aminophenyl)-3-(2,3-dimethoxy phenyl)prop-2-en-1-one was comparable to 0.5 and 1 mg/kgBW doxycycline, the highest dose (80 mg/kgBW) did not differ from 40 mg/kgBW, and 2 mg/kgBW doxycycline remained superior to all tested doses. Nevertheless, we identified a novel compound with a promising prospect as an alternative antimalarial agent.

Although the mechanism of action of chalcone was not part of this study, it has been proposed that chalcone and its derivatives act as inhibitors of the interaction between ferredoxin (Fd) and ferredoxin-NADP+ reductase (NFR). This inhibition blocks electron transfer to LytB, disrupting the synthesis of isoprenoids and isoprenoid precursors. 8 Additionally, chalcone inhibits hemoglobin digestion by binding to and inhibiting falcipain, an enzyme involved in hemoglobin digestion in Plasmodium's food vacuole. 16, 17 The free heme released as a byproduct of hemoglobin digestion is normally aggregated into hemozoin by the Plasmodium to avoid its toxic effect. Chalcone is also known to inhibit hemozoin synthesis, enhancing its antiplasmodial effect through heme intoxication of the Plasmodium. 18 Chalcone also appears to inhibit the glutathione (GSH)-mediated degradation of hemin that leaks into the cytoplasm. 18 Other reports have shown that chalcone attacks the bc1 complex and complex II (succinate-ubiquinone reductase) of Plasmodium's mitochondria. 19 This effect completely disrupts the electron transport chain, which is highly lethal for Plasmodium. Other reported effects include inhibition of plasmodial development from the ring form into the schizont form, inhibition of sorbitol-induced hemolysis of parasite-infected erythrocytes, alteration of the ultrastructure of parasitic mitochondria, and inhibition of mitochondrial function. 6, 16 Additionally, our study showed an immunological effect of chalcone: its administration significantly increased the number of stomatocyte-containing Kupffer cells, indicating increased phagocytic activity. However, this immunological effect is contrary to previous reports on the role of chalcone in host immunity. Chalcone contains two aromatic rings connected by an α,β-unsaturated ketone, and the reactive keto-ethylenic group (-CO-CH=CH-) is responsible for its anti-inflammatory and antimalarial effects. 20, 21 An anti-inflammatory effect of chalcone was reported by Arya et al., who showed that phenyl-sulfonyl-uranyl-chalcone derivatives inhibited PGE2 production in LPS-induced RAW 264.7 macrophages through selective inhibition of COX-2 activity. Additionally, Singh et al. reported that 2′,5′-dimethoxy-4-hydroxychalcone and 3,4-dichloro-2′,5′-diethoxychalcone inhibited NO production in RAW 264.7 macrophages and LPS-induced microglial cells. 22 Therefore, further studies are needed to clarify the differences between our substance and those of previous studies and to detail chalcone's immunological effects, so that a complete picture of its action can be obtained and a treatment strategy devised for clinical trials.
CONCLUSION
In conclusion, (E)-1-(4-aminophenyl)-3-(2,3-dimethoxy phenyl)prop-2-en-1-one exhibited potent antimalarial activity, possibly via enhancement of the phagocytic activity of Kupffer cells. Further studies are needed to assess the efficacy and toxicity of this compound in vivo and to evaluate its immunological effects in detail so that a complete pharmacological picture of this compound can be obtained.
AUTHOR CONTRIBUTION
Danus Hermawan: processed the ethical clearance, prepared the research materials, and performed statistical testing; Yulia Sari and Ida Nurwati: performed the research procedures; Lilik Wijayanti: performed the statistical analysis and prepared the manuscript; Paramasari Dirgahayu: finalized the manuscript.
CONFLICT OF INTEREST
The authors declared that there is no conflict of interest regarding any aspect of the study.
ETHICS APPROVAL
This study has been approved by the Health Research Ethics Committee, Dr. Moewardi General Hospital/ Faculty of Medicine, Universitas Sebelas Maret, Surakarta, with letter number 104/II/HREC/2016.
DiGeorge Syndrome With Absence of Speech: A Rare Case
DiGeorge syndrome (DGS) is a rare genetic disorder caused by a deletion or abnormality of a small piece of chromosome 22. This condition can affect multiple organs in the body, including the heart, thymus, and parathyroid glands. While speech and language difficulties are common in individuals with DGS, the complete absence of speech is a rare presentation. This case report presents the clinical features and management of a child with DGS who presented with an absence of speech. The child underwent a multidisciplinary intervention approach, including speech and language therapy, occupational therapy, and special education, to improve their communication skills, motor coordination, sensory integration, academic performance, and social skills. The interventions resulted in some improvement in their overall function; however, speech improvement was not significant. This case report contributes to the literature on DGS by highlighting the potential underlying causes of speech and language difficulties in patients with this condition, and the possible etiologies that may lead to a complete absence of speech, which is a severe manifestation. It also emphasizes the importance of early recognition and intervention with a multidisciplinary approach to management, as early intervention can lead to better outcomes for patients with DGS.
Introduction
DiGeorge syndrome (DGS), also known as velocardiofacial syndrome, is a rare genetic disorder inherited in an autosomal dominant fashion [1]. Dr. Angelo DiGeorge first described it in 1965 in a group of infants with congenital absence of the thymus and parathyroid glands [1]. The syndrome is caused by a deletion in chromosome 22q11.2 and is characterized by abnormal development of the third and fourth pharyngeal pouches [2]. It classically presents with congenital thymic and parathyroid hypoplasia, manifesting as immunodeficiency to viral and fungal pathogens and hypocalcemia due to hypoparathyroidism [2]. Other common manifestations include conotruncal heart malformations such as tetralogy of Fallot; dysmorphic facial features, including cleft palate and lip, low-set ears, and short palpebral fissures; and neurodevelopmental problems like autism spectrum disorder and attention deficit hyperactivity disorder (ADHD) [2].
The incidence of DGS is approximately one in 3,000-6,000 and continues to increase annually [3]. There are no significant risk factors in relation to gender or race since males and females of any ethnicity are equally affected but the risk of developing DGS is increased in relationships where at least one parent is affected [1]. The severity of symptoms ranges, and patients may not present typically, causing this disease to be overlooked and underdiagnosed without proper screening methods [3]. Developmental delay of the gross and fine motor skills as well as speech is a common manifestation of DGS and affects approximately 90% of patients [4].
Speech delay is considered in children whose development is below the norm for children of the same age [5]. In a study by Baylis et al., the authors found that 58.8% and 82.4% of the 17 adolescent participants with the 22q deletion met the criteria for speech delay and motor speech disorder, respectively [4]. Delays in speech may improve with speech therapy; however, in rare cases, patients have a complete absence of speech where there is no articulation of verbal expression. Absent speech occurs when there is no speech or language development, rendering patients nonverbal [6]. Complete absence of speech is a severe manifestation of DGS and has not been reported in the literature, with most cases having a delayed speech onset that improved with speech therapy. Herein, we present a case of a patient with DGS who had absent speech instead of the typical speech delay with no significant improvement despite speech therapy.
Case Presentation
A seven-year-old male child was brought to the psychiatry outpatient department of a tertiary care hospital with a complaint of an absence of speech. The patient was diagnosed with DGS at birth. On examination, the patient had dysmorphic facial features, including a small chin, a short nose, and widely spaced eyes. He had no spontaneous speech, and his vocalizations were limited to grunts, moans, and occasional cries. He had no signs of oral motor dysfunction and could produce a range of non-speech sounds, including clicks and whistles. He showed no signs of hearing loss on the audiological evaluation.
The child was born of a non-consanguineous marriage and delivered by emergency lower (uterine) segment cesarean section (LSCS) due to meconium-stained amniotic fluid. The child cried immediately after birth. The mother's antenatal history was significant for polyhydramnios at seven months of gestation. On the third day after birth, the baby was found to have a holosystolic murmur of grade 3/6 when examined with a stethoscope. This prompted further investigation, which included an echocardiography that revealed multiple heart defects, including tetralogy of Fallot, a ventricular septal defect, a small atrial septal defect, and pulmonary atresia. Additionally, a chest X-ray did not detect the presence of a thymus, and the infant also had a submucosal cleft palate. The baby experienced a seizure episode during the second week of life and was later diagnosed with hypocalcemia, which was responsible for the seizures. Lab tests indicated low levels of parathyroid hormone and serum calcium, high levels of serum phosphorus, and normal levels of 25-hydroxyvitamin D, confirming hypoparathyroidism. A medical geneticist performed further tests as part of a comprehensive medical evaluation, in which fluorescent in situ hybridization (FISH) was done to check for the TUPLE gene deletion on chromosome 22q11.2, and the diagnosis of DGS was confirmed.
Due to primary immunodeficiency and a cleft palate, the patient experienced recurrent episodes of lower respiratory tract infection (LRTI), mainly caused by Burkholderia cepacia, and aspiration pneumonia. These conditions were managed with IV antibiotics, steroids, and frequent nebulization. However, the patient continued to suffer from recurrent infections, which necessitated long-term Ryle's tube (RT) feeds for a duration of 18 months. Despite this intervention, poor weight gain was observed in the infant, prompting a videofluoroscopic swallow study (VFSS), which revealed a reduced suckling reflex during the oral stage. A barium swallow test did not reveal any signs of reflux or hiatus hernia. The patient was advised to have regular follow-ups every six months to monitor for major infections due to a low T-cell count. However, the child's poor immunity and long-term RT feeds resulted in multiple episodes of LRTI, leading to percutaneous endoscopic gastrostomy (PEG) tube insertion at the age of 19 months. The patient was advised to continue taking prophylactic antibiotics and seek urgent care if experiencing signs of high fever or respiratory distress. After the PEG placement, the patient was gradually introduced to feeds, and once it was established that they were well-tolerated, the feeds were increased to full feeds. Subsequently, the child had fewer episodes of LRTI and aspiration pneumonia during follow-up care and gained adequate weight for his age. Definitive corrective cardiac surgery was performed when the patient reached 10 kg of body weight at the age of 43 months. His initial hypocalcemia and parathyroid hypofunction were well controlled with adequate daily supplementation of calcium and active vitamin D.
The developmental milestones were reported to be delayed. Later in the course of his development, he showed global developmental delay with an inability to achieve head control even at six months of age, rolled over at eight months, and was only able to sit with support at the age of one year. The child learned to sit without support only at 18 months of age and started to walk alone only at 48 months. He completely lacks speech and can only make sounds and show gestures by pointing when he needs something.
At the moment, the child is receiving speech therapy, which focuses on augmentative and alternative communication strategies, including the use of sign language and picture-based communication boards, and also receiving occupational therapy to help with his gross motor skills. Despite undergoing speech and language training, there has not been any noticeable progress in his ability to speak or write words; instead, he makes sounds like hooting. On the other hand, his gross motor skills have improved with activity-based occupational therapy. The child is currently enrolled in a special school linked to the hospital's psychiatry department, but he struggles with social cues and interacting with peers.
Discussion
The pharyngeal pouches derive from the endoderm and form within the fourth week of fetal development [7]. The third pharyngeal pouch forms the thymus and inferior region of the parathyroid gland, while the superior portion of the parathyroid gland arises from the fourth pharyngeal pouch [7]. The thymus is a lymphoid organ that aids in the development of T lymphocytes, which participate in the adaptive cellularmediated immune response [8]. The parathyroid gland regulates calcium and phosphate concentrations via the parathyroid hormone (PTH), which acts on bones and the gastrointestinal system [9]. The classic triad of DGS includes hypocalcemia due to hypoparathyroidism, congenital cardiac anomalies, and immunodeficiency [10]. The severity of immunodeficiency ranges from mild to moderate due to either thymic hypoplasia or aplasia, respectively [11]. Hypocalcemia due to parathyroid hypoplasia manifests as paresthesia, muscle spasms, tetany, and seizures, which occurred in this patient's second week of life [11]. This syndrome has a heterogeneous presentation, with commonly affected areas including organs of the renal, ocular, gastrointestinal, and nervous systems [10].
Approximately 90% of DGS cases result from microdeletion of the long arm (q) at locus 11.2 of chromosome 22, also known as 22q11.2 [12]. There are over 90 different genes at this locus that can be deleted, leading to DGS, including the TUP-like enhancer of split gene 1 (TUPLE1) and the T-box transcription factor 1 (TBX1) genes [12]. TBX1 stimulates the embryologic formation of the pharyngeal pouches; therefore, if affected, this leads to severe features of DGS and is commonly associated with severe defects in the heart, thymus, and parathyroid glands [12]. Studies also show that the TUPLE1 and TBX1 genes are expressed in multiple tissues, including the mesoderm of the developing brain, and microdeletion may lead to irregular neuromicrovascular formation resulting in developmental abnormalities in children [1].
Although developmental delay is a common feature of DGS, the absence of speech is not commonly seen. DGS can affect speech in two ways: (i) facial deformities such as cleft palate and lip may affect articulation and phonation or (ii) cerebral abnormalities may cause a delay in speech onset [13,14]. In this case, the patient's cleft palate was small and due to the severity of his speech defect, it is inferred that an underlying neurological defect was most likely the cause of his lack of speech. Some speech manifestations associated with DGS include delayed speech emergence, dysarthria, velopharyngeal dysfunction, childhood apraxia of speech, and phonological disorders, all of which may persist into adolescence [10]. This is the first reported case of a complete absence of speech in a patient with DGS. The various neurological effects of DGS are not well known; however, studies show that changes in the anatomy of the brain lobes, basal ganglia, and cerebellum may lead to developmental delay. A study by Campbell et al. investigated the areas of the brain affected by DGS in 39 children with cognitive deficits using voxel-based morphometry (VBM) [15]. Results showed a significant reduction in the cerebellar gray matter as well as the white matter in the cerebellum, internal capsule, and frontal lobe [15]. The frontal lobe is responsible for voluntary movement, expressive language, and managing cognitive skills; therefore, if affected, this may lead to a delay in achieving gross and fine motor skill milestones as well as a delay or absence of speech onset [16].
Due to the risk of a decreased quality of life, a high index of clinical suspicion is required. A confirmatory diagnosis of this syndrome can be achieved with genetic testing via FISH [17]. FISH is the gold-standard diagnostic tool for DGS and uses fluorescent DNA probes to evaluate for microdeletions at a specific chromosomal location [17]. It is useful in genetic counseling as well as prenatal and postnatal diagnosis [17]. After confirmation, extensive evaluation of the various organ systems is required. Tests include an echocardiogram to assess for conotruncal abnormalities, a chest X-ray to evaluate for thymic hypoplasia, T-lymphocyte panels, serum ionized calcium and phosphorus levels, PTH levels, and renal ultrasound [12]. The management of DGS is symptomatic, with the goal of preventing complications. Intravenous immunoglobulins, prophylactic antibiotics, and either a thymic or hematopoietic cell transplant can be used to manage the immunodeficiency [12]. Surgical correction of life-threatening cyanotic heart disease and cleft palate improves circulation, breathing, speech, and feeding [12]. Supplementation with calcium and vitamin D aids in the prevention of seizures and other hypocalcemic complications. Occupational and speech therapy is useful in children with developmental delays and proves beneficial in helping children meet their developmental milestones [12].
In this case, the patient had DGS, characterized by tetralogy of Fallot, cleft palate, thymic aplasia, and hypoparathyroidism, which led to a hypocalcemic-induced seizure. The diagnosis was confirmed with FISH, which showed the deletion of the TUPLE gene. Global developmental delay was noted in the patient as he aged; however, there was a severe speech deficit where the patient was nonverbal and showed minimal signs of improvement with speech therapy alone. This case of DGS with a complete absence of speech was a unique and unusual finding that shows the extensive range of symptoms caused by this genetic disease. A complete absence of speech could be one of the rarer symptoms of DGS, and further research into this association and the possible underlying causes is required.
Conclusions

Developmental speech delay is a common manifestation of DGS that affects many children in various ways. Speech defects normally range from delayed onset to apraxia; however, the complete absence of speech in the setting of DGS is a new finding that should be noted. Microdeletions of the TUPLE1 and TBX1 genes on chromosome 22 are possible underlying genetic etiologies contributing to this severe symptomatic variant. Possible neurological findings in patients with DGS and a complete absence of speech include reduced gray and white matter throughout the cerebrum. Early diagnosis with genetic testing and symptomatic treatment is crucial to improving the patient's quality of life. Speech therapy is the initial management strategy for patients with speech delay; however, in some cases, such as this one, improvement may not be significant.
More research needs to be conducted on the effectiveness of treatment and alternative therapies that can be utilized to improve speech outcomes. Further evaluation into the long-term outcomes for patients with DGS and the absence of speech is also required. Physicians should understand that since this disorder presents with a variety of symptoms, the treatment plan should be unique and tailored to the needs of the patient as they transition into adulthood; therefore, using a holistic multidisciplinary approach to address their medical, behavioral, and psychological needs can be beneficial.
Additional Information

Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Design and Experimental Study of an Embedded Controller for a Model-Based Controllable Pitch Propeller
Introduction
With advances in the shipping industry, improved ship maneuverability and propulsion efficiency are required. The controllable pitch propeller is widely applied in ship thrust systems due to its good properties, and control research on this propeller is fundamental for enhancing the navigational safety and economic performance of ships. Extensive research has been conducted on controller algorithms. Traditional methods, such as proportional-integral-derivative (PID) control and fuzzy control, have been widely applied to control controllable pitch propellers [1,2]. By optimizing control parameters and structures, researchers have improved the stability and response speed of control systems. For instance, PID control accurately regulates the propeller speed and pitch angle by adjusting the proportional, integral, and derivative coefficients. Thanks to advances in artificial intelligence, intelligent control algorithms (i.e., neural networks and genetic algorithms) have been introduced for controllable pitch propellers. Scholarly efforts have been made to address the uncertainty in evaluating the reliability and availability of controllable pitch propeller hydraulic systems. For instance, Bai [3] applied D-S evidence theory and dynamic Bayesian networks to establish a new method for assessing their reliability and availability. Fang [4] predicted the system's reliability using the failure rate prediction method and obtained each unit's failure rate and reliability curve. Zhang [5] designed an adjustable pitch propeller control hydraulic system comprising an electro-hydraulic directional control valve and a proportional directional control valve, and proposed the corresponding control strategies. Rosenkranz [6] employed the fuzzy method to control a controllable pitch propeller; however, the accuracy of this method is difficult to guarantee. Ji [7] reported that the gain loop could be treated as a nonlinear function within a specific error range; moreover, research on ship controllable pitch propeller control with nonlinear PID was conducted, further improving accuracy and efficiency. Chen [8] constructed a joint controller with load protection for adjustable pitch propellers to achieve maximum thrust effectiveness. Wang [9] conducted simulation research on a ship's controllable pitch propeller control system based on PID. Existing control algorithms for the controllable pitch propeller control system still have limitations regarding timeliness. At the same time, the traditional design process for marine control systems faces practical issues such as long development cycles, complex procedures, and low development efficiency, which make it increasingly difficult to meet development demands. In fields like automotive electronics and aerospace, the model-based design (MBD) method has been proven to be an effective controller design approach, so transplanting this method into the design and development of controllable pitch propeller control systems represents a promising direction. The model predictive control (MPC) algorithm handles constraints efficiently and readily solves the resulting optimal control problems [10,11]. The algorithm optimizes the control iteratively and online. It has the characteristics [12-14] of a control structure based on model prediction, rolling optimization, and feed-forward feedback, enabling fast and accurate controller responses. Based on the above analysis, MPC is adopted to design the controller within the controllable pitch propeller control system.
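As a rough, self-contained illustration of the rolling-optimization idea behind MPC (not the controller designed in this paper), the sketch below applies an unconstrained receding-horizon controller to a hypothetical first-order pitch-actuator model x+ = a·x + b·u. The model parameters, horizon, and weights are all assumed for demonstration:

```python
def mpc_step(x0, r, a, b, N=5, lam=0.001):
    """One receding-horizon step for the scalar model x+ = a*x + b*u:
    minimize sum_i (x_i - r)^2 + lam * sum_i u_i^2 over the horizon N,
    then return only the first control move (rolling optimization)."""
    # Free response f[i] = a^(i+1) * x0 and forced-response matrix G
    f = [a ** (i + 1) * x0 for i in range(N)]
    G = [[b * a ** (i - j) if j <= i else 0.0 for j in range(N)]
         for i in range(N)]
    # Normal equations of the quadratic cost: (G'G + lam*I) u = G'(r - f)
    A = [[sum(G[k][i] * G[k][j] for k in range(N)) + (lam if i == j else 0.0)
          for j in range(N)] for i in range(N)]
    rhs = [sum(G[k][i] * (r - f[k]) for k in range(N)) for i in range(N)]
    # Gaussian elimination (A is symmetric positive definite here)
    for c in range(N):
        for row in range(c + 1, N):
            m = A[row][c] / A[c][c]
            for j in range(c, N):
                A[row][j] -= m * A[c][j]
            rhs[row] -= m * rhs[c]
    u = [0.0] * N
    for i in reversed(range(N)):
        u[i] = (rhs[i] - sum(A[i][j] * u[j] for j in range(i + 1, N))) / A[i][i]
    return u[0]

# Closed-loop simulation: drive the pitch toward the reference r = 1.0
a, b, r = 0.9, 0.1, 1.0
x = 0.0
for _ in range(40):
    x = a * x + b * mpc_step(x, r, a, b)
print(round(x, 3))  # settles close to the reference
```

At each sample only the first optimized move is applied and the optimization is repeated at the next sample, which is exactly the rolling-optimization structure described above; a practical design would add actuator constraints, turning each step into a constrained quadratic program.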
The structure of this paper is as follows. Section 2 presents the simulation model of the controllable pitch propeller. In Section 3, the MPC design is introduced. Section 4 outlines the semi-physical simulation experiment and experimental tests. The concluding remarks are presented in Section 5.
Structure
The structure of the controllable pitch propeller is schematically shown in Figure 1. The blade pitch angle is adjusted by pushing the hub with the oil pipe inside the propeller shaft to rotate the blades. A hydraulic system serves as the primary power source for a controllable pitch propeller. As illustrated in Figure 2, the oil tank supplies hydraulic oil to the system. The oil pressurized by the pump enters the oil circuit and, after filtration, flows into a three-position four-way valve in the valve block. The valve spool is controlled by the controllable pitch propeller controller, which alters the pressure of the hydraulic oil on both sides of the piston in the hub cylinder, enabling forward or reverse motion.
Figure 3 shows the working principle when adjusting the pitch in the forward direction. The high-pressure hydraulic oil flows into the rear hydraulic cylinder chamber of the controllable pitch propeller hub through the oil distributor, and the piston is pushed forward toward the bow. The oil returns on the other side of the piston. The blade pitch ratio of the controllable pitch propeller advances toward a positive value, at which the controllable pitch propeller produces a favorable thrust on the hull.
Thrust Calculation
The effective thrust of the controllable pitch propeller F_e and the open-water thrust F_s can be calculated according to the following standard propeller relations (the display equations were lost in extraction and are reconstructed here from the variable definitions below):

F_s = K_T ρ n² D⁴,  F_e = (1 − t) F_s

where K_T stands for the thrust coefficient, ρ indicates the seawater density (kg/m³), n denotes the diesel engine speed (r/s), D marks the pitch paddle diameter (m), and t represents the thrust deduction factor. To facilitate the calculation, the hull block coefficient C_B is typically adopted in engineering to approximate the thrust deduction factor t of a single-propeller ship, and their relationship is presented in Table 1. The thrust coefficient is related to the advance ratio of the pitched paddle and the pitch angle:

K_T = f(J, θ)

The propeller advance speed V_a can be written as:

V_a = (1 − ω) V_s

where V_s denotes the ship speed (m/s), and ω represents the wake fraction coefficient. Typically, only the effect of the hull block coefficient C_B is considered, and other factors are ignored; according to C_B, Table 2 gives the approximate value of ω. The pitch drag torque is determined by the torque coefficient K_Q, pitch diameter D, water density ρ, and diesel engine speed n:

M_P = K_Q ρ n² D⁵

The torque coefficient K_Q is influenced by the pitch paddle advance ratio J and the pitch angle θ:

K_Q = f(J, θ)

Due to the calculation complexity, the flow characteristic curve of the four-blade pitch paddle torque coefficient K_Q(J, θ) is introduced to obtain K_Q, as plotted in Figure 4.
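The thrust and torque relations above can be sketched directly in code. This is an illustrative sketch only: the coefficient values K_T and K_Q passed in below are placeholders, since in the paper they are read from the characteristic curves K_T(J, θ) and K_Q(J, θ) (Figure 4), and the numerical inputs are assumed, not taken from the paper.

```python
def advance_speed(v_s, omega):
    """Propeller advance speed V_a = (1 - omega) * V_s."""
    return (1.0 - omega) * v_s

def advance_ratio(v_a, n, d):
    """Advance ratio J = V_a / (n * D)."""
    return v_a / (n * d)

def thrust(k_t, rho, n, d, t):
    """Open-water thrust F_s and effective thrust F_e = (1 - t) * F_s."""
    f_s = k_t * rho * n**2 * d**4
    return f_s, (1.0 - t) * f_s

def torque(k_q, rho, n, d):
    """Pitch drag torque M_P = K_Q * rho * n^2 * D^5."""
    return k_q * rho * n**2 * d**5

# Example with assumed values (not from the paper):
rho = 1025.0      # seawater density, kg/m^3
n, d = 2.0, 3.0   # shaft speed (r/s) and propeller diameter (m)
v_a = advance_speed(v_s=6.0, omega=0.25)   # 4.5 m/s
j = advance_ratio(v_a, n, d)               # 0.75
f_s, f_e = thrust(k_t=0.2, rho=rho, n=n, d=d, t=0.2)
m_p = torque(k_q=0.03, rho=rho, n=n, d=d)
```

In a full simulation, K_T and K_Q would be interpolated from the Figure 4 curves at the current (J, θ) operating point before calling these helpers.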
Simulation Model
The mechanistic modeling of the controllable pitch propeller system is cumbersome and lacks intuitive clarity, and many parameters in the resulting transfer function are challenging to measure. In addition, the Simulink/SimHydraulics toolbox contains a considerable number of commonly used hydraulic modules and commercial component-based modules, allowing for the physical modeling of hydraulic and hydro-mechanical systems. Herein, the Simulink/SimHydraulics toolbox is adopted for the physical modeling to establish an accurate and user-friendly mathematical model of the controllable pitch propeller system. The parameters of the hydraulic cylinder and the load settings are summarized in Table 3. During the transmission of the controllable pitch propeller, the friction between the drive shaft and the bearings generates the friction loss torque M_f. This torque is related to the rotational speed of the controllable pitch propeller. Compared with the diesel output torque M_s and the pitching paddle resistance torque M_P, the friction loss torque M_f is small. To simplify the calculations, the friction loss torque can be considered constant, with a magnitude of M_f = 0.02 M_H, where M_H represents the output torque of the diesel engine under rated operating conditions.
The simulation model of the hydraulic system is presented in Figure 5. When the external control signal is input to the three-position four-way valve via the digital-to-analog converter module, the direction of the hydraulic fluid between the pipelines is changed, pushing the hydraulic cylinder. The mass block, spring, and damper simulate the external force on the pitch paddle blade in seawater and are connected to the hydraulic cylinder, achieving a better replication of the operating environment at sea and a more realistic simulation. Considering that obtaining the state information of the controllable pitch propeller system is challenging, the identification target is set as the transfer function of the input and output signals. Then,
this function is transformed into the state space during the system identification. The input and output signals are imported using MATLAB's system identification toolbox. The zeros and poles of the transfer function are set to two and three, respectively.
After the identification, the final transfer function aligns well with the model input and output, with 93% conformity (Figure 6). This is consistent with engineering accuracy, so the physical model of the controllable pitch propeller is successfully identified. The transfer function can be expressed as:

φ = (2.1408s² + 0.2282s + 0.0278) / (s³ + 1.3793s² + 0.1654s + 0.0144)   (8)
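A quick sanity check on the identified transfer function in Equation (8) is to evaluate φ(s) at s = 0, which gives the steady-state (DC) gain of the hydraulic system. The polynomial coefficients below are taken directly from the identified model; the helper functions are a minimal sketch.

```python
num = [2.1408, 0.2282, 0.0278]       # numerator coefficients, highest power first
den = [1.0, 1.3793, 0.1654, 0.0144]  # denominator coefficients, highest power first

def polyval(coeffs, s):
    """Horner evaluation of a polynomial with coefficients in descending order."""
    acc = 0.0
    for c in coeffs:
        acc = acc * s + c
    return acc

def tf_eval(num, den, s):
    """Evaluate the rational transfer function num(s) / den(s)."""
    return polyval(num, s) / polyval(den, s)

dc_gain = tf_eval(num, den, 0.0)   # = 0.0278 / 0.0144, roughly 1.93
```

The same `tf_eval` helper can also be used at s = jω to sketch a frequency response of the identified model.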
Mathematical Modeling
In designing an MPC-derived controller, a mathematical model allowing for the prediction of the controlled system's state is necessary. This model is typically described in state-space form in modern control. At the moment k, the state of the controlled system x(k) ∈ R^n, the control input u(k) ∈ R^l, and the output y(k) ∈ R^q can be obtained. Correspondingly, the controlled model at k can be written in the standard discrete-time state-space form (the display equation was lost in extraction and is reconstructed here):

x(k + 1) = A x(k) + B u(k),  y(k) = C x(k)   (9)

According to Equation (9), p is taken as the prediction horizon, the measured output y(k) can be considered the starting point of the output, and x(k) can be regarded as the beginning of the prediction state. The output over the prediction horizon p can be estimated at any moment from k + 1 to k + p based on the moment k. The output at k + i can be expressed as:

y(k + i) = C x(k + i) = C [ A^i x(k) + Σ_{j=0}^{i−1} A^{i−1−j} B u(k + j) ]

Subsequently, the system's output over the prediction horizon p is recorded as the stacked vector:

Y(k) = [y(k + 1), y(k + 2), …, y(k + p)]^T

Similarly, the control input within the prediction horizon p is defined as:

U_k = [u(k), u(k + 1), …, u(k + p − 1)]^T

Appl. Sci. 2024, 14, 3993

Each component of the control input vector U_k is independent and needs to be solved as an optimization problem.
The controller's goal is to minimize the deviation of the output from the desired one. The reference input over the control horizon is:

R(k) = [r(k + 1), r(k + 2), …, r(k + p)]^T

To maximize the consistency between the predicted and expected outputs, the differences between the components of the two output vectors should be accumulated when establishing the optimization objective function, e.g.:

J = Σ_{i=1}^{p} ||y(k + i) − r(k + i)||²

For the control inputs and system outputs, constraints are generally imposed in engineering applications:

u_min ≤ u(k + i) ≤ u_max,  y_min ≤ y(k + i) ≤ y_max

Solving for the optimal control inputs can be divided into two parts. First, the controller inputs to the system should minimize the output deviation, considering the reference inputs. Second, by solving for the control inputs U_k, the control inputs and the system outputs must satisfy the control constraints u_min ≤ u(k + i) ≤ u_max and the output constraints y_min ≤ y(k + i) ≤ y_max. Ultimately, an optimal solution to the above problem at k can be obtained. Based on the mechanical modeling and system identification of the pitch paddle hydraulic system in Section 2, Equation (9) can be rewritten in incremental form, and the hydraulic system can be represented in state space, where the state matrices contain the identified coefficients (−1.9480, −1.7010, …), y_c(k) stands for the controlled output variable, and d(k) ∈ R^{n_d} signifies the measurable external disturbance variable.
When the MPC controller works, the speed of solving the optimization problem affects the controller's timeliness. To accelerate the computation, the number of independent variables in the optimization problem is typically reduced. The control horizon m is smaller than the prediction horizon p, and the control quantity is assumed to remain unchanged outside the control horizon m, i.e., ∆u(k + i) = 0 for i = m, …, p − 1. Since the upcoming disturbance at the current moment k is unknown, the measurable disturbance is assumed to be constant after k, i.e., ∆d(k + i) = 0 for i ≥ 1. At k, the system state is x(k), and ∆x(k) is considered the starting value for predicting the controlled system state. According to Equation (17), the state from k + 1 to k + p can be predicted. From Equations (17) and (18), the measured output y_c(k) at k is selected as the starting point for the system's controlled output prediction, which can be obtained from k + 1 to k + p.
The prediction output vector Y_p(k + 1 | k) and input vector ∆U(k) are defined, Equation (19) is converted to matrix form, and the system prediction output can then be calculated from the current state and the input increments.
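The prediction step described above can be sketched for a scalar state-space model x(k+1) = a·x(k) + b·u(k), y(k) = c·x(k): starting from the measured state x(k), the outputs y(k+1), …, y(k+p) are predicted for a given input sequence U_k. The scalar model is illustrative only; the paper's identified system is third order.

```python
def predict_outputs(a, b, c, x0, u_seq):
    """Return [y(k+1), ..., y(k+p)] for the input sequence u_seq of length p."""
    x, ys = x0, []
    for u in u_seq:
        x = a * x + b * u   # state update x(k+1) = a*x(k) + b*u(k)
        ys.append(c * x)    # output y = c*x
    return ys

# Example: a stable first-order system driven by a constant input.
ys = predict_outputs(a=0.5, b=1.0, c=1.0, x0=0.0, u_seq=[1.0, 1.0, 1.0])
# states evolve 1.0, 1.5, 1.75 -> ys == [1.0, 1.5, 1.75]
```

For the vector-valued case, the same recursion applied with matrices A, B, C yields exactly the stacked prediction Y_p(k + 1 | k).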
Optimized Design
The optimization problem design is based on two main considerations. On the one hand, the controlled output is required to track the reference input. On the other hand, constraints must be imposed. For the first aspect, the objective function accumulates, over i = 1, 2, …, p, the weighted differences between each component of the controlled output and the corresponding component of the reference input, where Γ_{y,i} is the weighting factor for the i-th difference. During controller design, the performance of the controllable pitch propeller is influenced by the deviation between the controlled output and the reference input. When the error requirement is strict, the weighting factor Γ_{y,i} can be increased. Depending on the needs of the controllable pitch propeller, the weighting factor can be time-varying.
A mathematical description of the constraints on the control action is needed for proper control. The difference in the control input ∆u can be introduced into the objective function design, so that the control action is constrained by weighting.
Ultimately, the optimization problem combining Equations (23) and (24) can be described as a minimization over ∆U(k). According to Equation (20), Equation (25) is transformed into a quadratic program in the input increments, from which the predicted control gain is defined.
The control increment can then be computed from the predicted control gain.
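The unconstrained increment computation can be sketched for the scalar model x(k+1) = a·x + b·u, y = c·x with control horizon m = 1: a single input increment du is held over the prediction horizon p, and the quadratic cost Σ_i (y_i − r)² + λ·du² is minimized in closed form (du = −Σαβ / (Σβ² + λ)). This is an illustrative sketch under those simplifying assumptions, not the paper's full constrained solver.

```python
def mpc_increment(a, b, c, x, u_prev, r, p, lam):
    """Closed-form optimal increment for the scalar, m = 1, unconstrained case."""
    alphas, betas = [], []
    x_free, x_sens = x, 0.0
    for _ in range(p):
        x_free = a * x_free + b * u_prev   # free response (du = 0)
        x_sens = a * x_sens + b            # sensitivity dx/d(du) along the horizon
        alphas.append(c * x_free - r)      # tracking error of the free response
        betas.append(c * x_sens)           # output sensitivity to du
    # Cost J(du) = sum_i (alpha_i + beta_i*du)^2 + lam*du^2 is quadratic in du.
    num = -sum(al * be for al, be in zip(alphas, betas))
    den = sum(be * be for be in betas) + lam
    return num / den

du = mpc_increment(a=0.5, b=1.0, c=1.0, x=0.0, u_prev=0.0, r=1.0, p=5, lam=0.1)
```

In a receding-horizon loop, only this first increment would be applied before re-solving at the next sample, which is exactly the rolling-optimization behavior described above.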
Simulation
Based on the above theoretical derivation, a comparative simulation of the MPC controller and the PID controller is explored, as shown in Figure 7. The error between the input signal and the output at the moment k is defined as ∆y_mpc/PID = y − y_mpc/PID, where y indicates the input signal and y_mpc/PID represents the output of the MPC or PID controller. The accumulated absolute error between the output value and the input signal from the initial moment to k is defined as E_mpc/PID = Σ_{i=0}^{k} |y − y_mpc/PID|.
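The two error metrics used in the comparison can be computed directly from sampled signals: the pointwise error ∆y(k) = y(k) − y_ctrl(k) and the accumulated absolute error E(k) = Σ_{i=0}^{k} |y(i) − y_ctrl(i)|. The sample values below are hypothetical, not the paper's measured data.

```python
def cumulative_abs_error(reference, output):
    """Return the list E(0), E(1), ..., E(k) of accumulated absolute errors."""
    total, history = 0.0, []
    for r, y in zip(reference, output):
        total += abs(r - y)
        history.append(total)
    return history

# Hypothetical samples: a unit reference tracked with shrinking error.
E = cumulative_abs_error([1.0, 1.0, 1.0, 1.0], [0.5, 0.8, 0.9, 1.0])
# E is approximately [0.5, 0.7, 0.8, 0.8]
```

Applying this helper to the MPC and PID output logs reproduces the absolute-error comparison plotted in Figure 10.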
According to the basic principle of MPC, the controller acts on the controlled system by applying only the first element of the optimal control sequence at each moment k.
The tracking signal results are shown in Figure 8. It can be seen that the PID controller tracks the reference signal more closely over time, whereas the output signal of the MPC controller maintains a certain error with respect to the tracking signal. The relative error curves of the two controllers are presented in Figure 9. In the 12-14 s and 18-21 s intervals, the relative error of the PID controller is smaller than that of the MPC controller, while it varies greatly for the remaining time. The absolute error curves of the two controllers are plotted in Figure 10. Specifically, the PID controller accumulates an increasingly larger absolute error than the MPC controller over time. This suggests that the MPC controller outperforms the PID controller in terms of accuracy and stability.
The simulation results with a step signal applied to the system are shown in Figure 11. Two error bands of 5% and 2% were set during the simulation. The regulation times of the MPC were 6.716 s and 6.896 s, respectively, and no overshoot was detected. The regulation times of the PID were 6.705 s and 6.885 s, with a 1% overshoot. These results indicate that the MPC had good accuracy and response speed under the step signal.
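The regulation (settling) times quoted above can be extracted from a step response as the first time after which the output stays inside a given error band around the final value. The sketch below uses a synthetic first-order response; the signal and time constant are illustrative, not the paper's data.

```python
import math

def settling_time(t, y, band):
    """First time after which |y - y_final| stays within band * |y_final|."""
    y_final = y[-1]
    tol = band * abs(y_final)
    for i in range(len(y)):
        if all(abs(v - y_final) <= tol for v in y[i:]):
            return t[i]
    return None  # never settles within the band

dt = 0.01
t = [i * dt for i in range(2000)]
y = [1.0 - math.exp(-ti) for ti in t]   # first-order step response, tau = 1 s

t5 = settling_time(t, y, 0.05)   # close to 3*tau for a first-order system
t2 = settling_time(t, y, 0.02)   # close to 4*tau
```

Running the same function over the MPC and PID step-response logs yields the 5% and 2% regulation times reported for Figure 11.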
Experimental Studies
A semi-physical simulation experiment was designed to verify the effectiveness of the MPC controller. The controllable pitch propeller was implemented in the form of a virtual prototype, and the controller used the embedded pitch control board designed in the current work. The two achieved bi-directional data communication through TCP/IP.
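The bidirectional TCP/IP exchange between the virtual prototype and the embedded controller can be sketched on a loopback socket: one thread plays the "prototype" (returning a measured pitch value), while the main thread plays the "controller" (sending a pitch command). The port assignment, message format, and the 0.98 actuator factor are all invented for illustration; the real link runs between the host PC and the STM32 board.

```python
import socket
import threading

def prototype_server(sock):
    """Accept one connection, read a pitch command, reply with 'measured' pitch."""
    conn, _ = sock.accept()
    with conn:
        command = float(conn.recv(1024).decode())   # pitch command from controller
        feedback = command * 0.98                   # pretend actuator with small error
        conn.sendall(f"{feedback}".encode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))    # OS-assigned free port on loopback
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=prototype_server, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as ctrl:
    ctrl.sendall(b"0.5")                            # send pitch command
    measured = float(ctrl.recv(1024).decode())      # receive pitch feedback
```

In the actual experiment this request/response cycle runs once per control period, with the controller computing the next command from the returned pitch measurement.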
Virtual Prototype
The virtual prototype of the controllable pitch propeller was fabricated using three-dimensional modeling. Briefly, a specific type of controllable pitch propeller device in the laboratory was selected as the prototype. Subsequently, a three-dimensional model of its components was established using SolidWorks (a Windows-based 3D design software). Based on the coordination relationships between the actual parts, the hub assembly, double oil pipe assembly, and oil distributor assembly were constructed. These components were assembled to form the controllable pitch propeller. Finally, the three-dimensional model was imported into the virtual experimental scene constructed using Unity3D, a fully integrated professional game engine that allows the creation of interactive content such as 3D video games, architectural visualizations, and real-time 3D animations. The virtual prototype is schematically illustrated in Figure 12.
Hardware Design
An open STM32F4X Target system was designed (STM32F4X is a high-performance microcontroller family developed by STMicroelectronics) to establish Simulink simulation models with one-click automatic code generation and download them to the embedded development board for real-time simulation. It primarily consists of stm32f4x.tlc, stm32f4x_file_process.tlc, stm32f4x_callback_handler.m, stm32f4x_make_rtw_hook.m, as well as the TLC and C files of the various hardware resource drivers on the embedded board. Their purpose is to set the target parameters during code generation, customize user code, invoke Keil uVision5 (a microcontroller software development platform developed by the German software company Keil), etc. The STM32F4X Target supports Simulink Coder and Embedded Coder for Simulink model code generation, which controls the code build through the target system file. The Keil uVision5 compiler is automatically invoked in the background to compile, link, and download the code without human intervention. The generated code can be deployed to the supporting embedded development board by clicking Build on the Simulink toolbar; the working process is illustrated in Figure 13.
Semi-Physical Simulation Test
The virtual experiments were conducted using the virtual experiment software on the host computer, and the control experiment was initiated by clicking Start Simulation. During the simulation experiment, the simulation curve was observed through the real-time simulation curve window. According to the test needs, the controller parameters can be modified. The simulation experiment procedures are shown in Figure 14.
After applying the command signals from zero pitch to the positive limit pitch and the negative limit pitch, respectively, the results are shown in Figures 15 and 16.
According to the definitions of the relative and absolute errors, the experimental results are illustrated in Figures 17 and 18 for positive pitch as well as in Figures 19 and 20 for negative pitch.
From Figures 17 and 19, it can be found that the pitch adjusted by the controllable pitch propeller exhibits a relative error within 0.002 cm. Figures 18 and 20 demonstrate that in the first ten seconds, a more obvious increase is observed in the absolute error of the pitch over time. When the adjustment ends, the absolute error is negligible, indicating the good stability of the controller.
The curve for positive step input signals is plotted in Figure 21. Accordingly, the pitch enters the 5% error band at 3.7 s, and no overshoot is detected. Figure 22 portrays the curve for negative step input signals; in this case, the pitch value falls within the 5% error band at 2.7 s. Similarly, no apparent overshoot is observed. It can be concluded that the response of the controllable pitch propeller controller to signals is relatively fast.
Conclusions
In this paper, the structural composition and working principle of a controllable pitch propeller were introduced. The controllable pitch propeller hydraulic system has the inherent features of high constraint and nonlinearity. To address this, an MPC-based controller was designed. A physical model of the hydraulic system was established using MATLAB/Simulink. The transfer function of the hydraulic system was derived from multiple sets of input and output data using system identification tools. The system identification simplified the mathematical modeling process for the controlled object and shortened the design cycle. Custom signal and step signal tests were conducted in the MATLAB/Simulink environment. By comparing the relative and absolute errors of the two controllers, it was found that the MPC controller displayed a shorter regulating duration, lower overshoot, and higher control accuracy than the traditional PID controller. Under the existing conditions, a hardware-in-the-loop test platform for the controllable pitch propeller controller was fabricated. Embedded Coder was adopted to realize one-click generation of the embedded code. The semi-physical simulation experiment verified the stability of the designed controller; the control algorithm ran smoothly and met the control requirements.
The crux of the model-based predictive control algorithm lies in the accuracy of the model. With the development of artificial intelligence and advanced machine learning, data-driven methods can be used to establish a more accurate mathematical model from the real-time data of the controllable pitch propeller. On this basis, the performance of the controllable pitch propeller can be improved by tuning the parameters of the model predictive control algorithm.
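The data-driven modeling suggested above can be sketched as least-squares identification of a first-order ARX model y(k) = a·y(k−1) + b·u(k−1) from recorded input/output data. The sketch below solves the 2-parameter normal equations in pure Python on noise-free synthetic data; a real application would use higher model orders, noisy measurements, and regularization.

```python
def identify_arx1(u, y):
    """Fit y(k) = a*y(k-1) + b*u(k-1) by least squares; return (a, b)."""
    s_yy = s_uu = s_yu = s_ty = s_tu = 0.0
    for k in range(1, len(y)):
        phi_y, phi_u, target = y[k - 1], u[k - 1], y[k]
        s_yy += phi_y * phi_y
        s_uu += phi_u * phi_u
        s_yu += phi_y * phi_u
        s_ty += target * phi_y
        s_tu += target * phi_u
    # Solve the 2x2 normal equations by Cramer's rule.
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_ty * s_uu - s_tu * s_yu) / det
    b = (s_tu * s_yy - s_ty * s_yu) / det
    return a, b

# Data generated by a known system a=0.8, b=0.5 (noise-free for the sketch):
u = [1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0]
y = [0.0]
for k in range(1, len(u)):
    y.append(0.8 * y[k - 1] + 0.5 * u[k - 1])

a, b = identify_arx1(u, y)   # recovers (0.8, 0.5) up to rounding
```

With logged pitch commands and measured pitch from the real propeller, the same fit would supply an updated prediction model for the MPC controller online.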
Additional Points
(1) A control law based on the MPC algorithm is designed. The MPC and PID control systems are compared in simulation to verify the effectiveness of the MPC controller. (2) The virtual prototype of the controllable pitch propeller is fabricated using three-dimensional modeling. Additionally, the embedded controller is created using the C-MEX S-Function and the TLC programming language. (3) A semi-physical simulation experiment is conducted. The results show that the controllable pitch propeller with the embedded controller runs reliably and has good anti-interference, achieving the control function of the pitch propeller under various working conditions.
Figure 1 .
Figure 1. Structure of the controllable pitch propeller.
Figure 3 .
Figure 3. Hydraulic system in the forward direction.
Figure 4 .
Figure 4. Flow characteristic curves of the four-blade pitch paddle torque coefficient.
Figure 5 .
Figure 5. Hydraulic system simulation model of the controllable pitch propeller.
Figure 7 .
Figure 7. Connection to the model predictive control (MPC) controller.
Figure 8 .
Figure 8. Tracking signal results of the two controllers.
Figure 10 .
Figure 10. Absolute errors of both controllers.
Figure 11 .
Figure 11. Step signal outcomes of the two controllers.
Figure 16 .
Figure 16. Pitch adjustment in the opposite direction.
Figure 17 .
Figure 17. Relative errors in the case of forward pitch adjustment.
Figure 18 .
Figure 18.Absolute errors when the pitch is undergoing adjustment in the forward direction.
Figure 18 .
Figure 18.Absolute errors when the pitch is undergoing adjustment in the forward direction.
Figure 19 .
Figure 19.Relative errors during pitch modification in the opposite direction.
Figure 19 .
Figure 19.Relative errors during pitch modification in the opposite direction.
Figure 19 .
Figure 19.Relative errors during pitch modification in the opposite direction.
Figure 20 .
Figure 20.Absolute errors in the case of pitch adjustment in the opposite direction.
Figure 20 .
Figure 20.Absolute errors in the case of pitch adjustment in the opposite direction.
Table 1 .
The relationship between C B and t.
Table 2 .
The relationship between C B and ω.
Table 3 .
Hydraulic cylinder and load parameters.
|
v3-fos-license
|
2019-08-26T17:35:56.000Z
|
2019-08-26T00:00:00.000
|
201666698
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-45231-5_1.pdf",
"pdf_hash": "6995bebe595971d02dc8f12146f5b008ba8bfdff",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2184",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "1a071561ab51153890c8033b3f5aa5302ca2d84e",
"year": 2020
}
|
pes2o/s2orc
|
Neural Flocking: MPC-Based Supervised Learning of Flocking Controllers
We show how a symmetric and fully distributed flocking controller can be synthesized using Deep Learning from a centralized flocking controller. Our approach is based on Supervised Learning, with the centralized controller providing the training data, in the form of trajectories of state-action pairs. We use Model Predictive Control (MPC) for the centralized controller, an approach that we have successfully demonstrated on flocking problems. MPC-based flocking controllers are high-performing but also computationally expensive. By learning a symmetric and distributed neural flocking controller from a centralized MPC-based one, we achieve the best of both worlds: the neural controllers have high performance (on par with the MPC controllers) and high efficiency. Our experimental results demonstrate the sophisticated nature of the distributed controllers we learn. In particular, the neural controllers are capable of achieving myriad flocking-oriented control objectives, including flocking formation, collision avoidance, obstacle avoidance, predator avoidance, and target seeking. Moreover, they generalize the behavior seen in the training data to achieve these objectives in a significantly broader range of scenarios. In terms of verification of our neural flocking controller, we use a form of statistical model checking to compute confidence intervals for its convergence rate and time to convergence.
Introduction
With the introduction of Reynolds' rule-based model [16,17], it became possible to understand the flocking problem as one of distributed control. Specifically, in this model, at each time-step, each agent executes a control law given in terms of the weighted sum of three competing forces to determine its next acceleration. Each of these forces has its own rule: separation (keep a safe distance from your neighbors), cohesion (move towards the centroid of your neighbors), and alignment (steer toward the average heading of your neighbors). Reynolds' controller is distributed; i.e., it is executed separately by each agent, using information about only itself and nearby agents, and without communication. Furthermore, it is symmetric; i.e., every agent runs the same controller (same code).
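The three-force control law described above can be sketched in a few lines. This is a minimal illustration of Reynolds-style rules, not code from the paper; the function name and the unit weights are my own.

```python
import math

def reynolds_accel(p, v, i, nbrs, w_sep=1.0, w_coh=1.0, w_align=1.0):
    """Weighted sum of Reynolds' three forces for agent i, given the
    positions p and velocities v of its neighbours nbrs.
    Weights are illustrative placeholders."""
    sep = [0.0, 0.0]; coh = [0.0, 0.0]; ali = [0.0, 0.0]
    for j in nbrs:
        dx, dy = p[i][0] - p[j][0], p[i][1] - p[j][1]
        d2 = dx * dx + dy * dy
        sep[0] += dx / d2; sep[1] += dy / d2       # push away, stronger when close
        coh[0] += p[j][0]; coh[1] += p[j][1]       # accumulate for centroid
        ali[0] += v[j][0]; ali[1] += v[j][1]       # accumulate headings
    n = len(nbrs)
    coh = [coh[0] / n - p[i][0], coh[1] / n - p[i][1]]   # toward neighbour centroid
    ali = [ali[0] / n - v[i][0], ali[1] / n - v[i][1]]   # toward average heading
    return [w_sep * sep[0] + w_coh * coh[0] + w_align * ali[0],
            w_sep * sep[1] + w_coh * coh[1] + w_align * ali[1]]
```

Note that the controller only reads the states of agent i and its neighbours, which is exactly what makes it distributed and symmetric.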
We subsequently showed that a simpler, more declarative approach to the flocking problem is possible [11]. In this setting, flocking is achieved when the agents combine to minimize a system-wide cost function. We presented centralized and distributed solutions for achieving this form of "declarative flocking" (DF), both of which were formulated in terms of Model-Predictive Control (MPC) [2].
Another advantage of DF over the ruled-based approach exemplified by Reynolds model is that it allows one to consider additional control objectives (e.g., obstacle and predator avoidance) simply by extending the cost function with additional terms for these objectives. Moreover, these additional terms are typically quite straightforward in nature. In contrast, deriving behavioral rules that achieve the new control objectives can be a much more challenging task.
An issue with MPC is that computing the next control action can be computationally expensive, as MPC searches for an action sequence that minimizes the cost function over a given prediction horizon. This renders MPC unsuitable for real-time applications with short control periods, for which flocking is a prime example. Another potential problem with MPC-based approaches to flocking is its performance (in terms of achieving the desired flight formation), which may suffer in a fully distributed setting.
In this paper, we present Neural Flocking (NF), a new approach to the flocking problem that uses Supervised Learning to learn a symmetric and fully distributed flocking controller from a centralized MPC-based controller. By doing so, we achieve the best of both worlds: high performance (on par with the MPC controllers) in terms of meeting flocking flight-formation objectives, and high efficiency leading to real-time flight controllers. Moreover, our NF controllers can easily be parallelized on hardware accelerators such as GPUs and TPUs. Figure 1 gives an overview of the NF approach. A high-performing centralized MPC controller provides the labeled training data to the learning agent: a symmetric and distributed neural controller in the form of a deep neural network (DNN). The training data consists of trajectories of state-action pairs, where a state contains the information known to an agent at a time step (e.g., its own position and velocity, and the position and velocity of its neighbors), and the action (the label) is the acceleration assigned to that agent at that time step by the centralized MPC controller.
We formulate and evaluate NF in a number of essential flocking scenarios: basic flocking with inter-agent collision avoidance, as in [11], and more advanced scenarios with additional objectives, including obstacle avoidance, predator avoidance, and target seeking by the flock. We conduct an extensive performance evaluation of NF. Our experimental results demonstrate the sophisticated nature of NF controllers. In particular, they are capable of achieving all of the stated control objectives. Moreover, they generalize the behavior seen in the training data in order to achieve these objectives in a significantly broader range of scenarios. In terms of verification of our neural controller, we use a form of statistical model checking [5,10] to compute confidence intervals for its rate of convergence to a flock and for its time to convergence.
Background
We consider a set of n dynamic agents A = {1, . . . , n} that move according to the following discrete-time equations of motion:

p_i(k+1) = p_i(k) + dt · v_i(k),    v_i(k+1) = v_i(k) + dt · a_i(k),

where p_i(k), v_i(k), a_i(k) ∈ R² are the position, velocity and acceleration of agent i ∈ A, respectively, at time step k, and dt ∈ R+ is the time step. The magnitudes of velocities and accelerations are bounded by v̄ and ā, respectively. Acceleration a_i(k) is the control input for agent i at time step k. The acceleration is updated after every η time steps, i.e., η · dt is the control period. The flock configuration at time step k is given by the configuration vectors p(k), v(k), and a(k) (in boldface), obtained by stacking the per-agent positions, velocities, and accelerations; without time indexing, these are referred to as p, v, and a. The neighborhood of agent i at time step k, denoted by N_i(k) ⊆ A, contains its N nearest neighbors, i.e., the N other agents closest to it. We use this definition (in Section 2.2, to define a distributed-flocking cost function) for simplicity, and expect that a radius-based definition of neighborhood would lead to similar results for our distributed flocking controllers.
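The double-integrator update and the N-nearest-neighbor definition can be sketched directly (a minimal illustration; the helper names and the clipping of velocities/accelerations to their magnitude bounds are my own, with dt, v̄, ā taken from the evaluation section):

```python
import math

DT = 0.1                  # time step dt (value from the evaluation section)
V_MAX, A_MAX = 2.0, 1.5   # magnitude bounds v-bar and a-bar

def clip(vec, bound):
    """Scale a 2-D vector down so its magnitude does not exceed bound."""
    m = math.hypot(*vec)
    if m > bound:
        return (vec[0] * bound / m, vec[1] * bound / m)
    return vec

def step(p, v, a):
    """One update: p(k+1) = p(k) + dt*v(k), v(k+1) = v(k) + dt*a(k)."""
    a = [clip(ai, A_MAX) for ai in a]
    new_p = [(pi[0] + DT * vi[0], pi[1] + DT * vi[1]) for pi, vi in zip(p, v)]
    new_v = [clip((vi[0] + DT * ai[0], vi[1] + DT * ai[1]), V_MAX)
             for vi, ai in zip(v, a)]
    return new_p, new_v

def neighbors(p, i, n_nearest):
    """Indices of the N nearest other agents to agent i."""
    others = [j for j in range(len(p)) if j != i]
    others.sort(key=lambda j: math.hypot(p[i][0] - p[j][0], p[i][1] - p[j][1]))
    return others[:n_nearest]
```

For example, with agents at (0,0), (1,0), and (5,0), `neighbors(p, 0, 1)` returns `[1]`.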
Model-Predictive Control
Model-Predictive control (MPC) [2] is a well-known control technique that has recently been applied to the flocking problem [11,19,20]. At each control step, an optimization problem is solved to find the optimal sequence of control actions (agent accelerations in our case) that minimizes a given cost function with respect to a predictive model of the system. The first control action of the optimal control sequence is then applied to the system; the rest is discarded. In the computation of the cost function, the predictive model is evaluated for a finite prediction horizon of T control steps.
MPC-based flocking models can be categorized as centralized or distributed. A centralized model assumes that complete information about the flock is available to a single "global" controller, which uses the states of all agents to compute their next optimal accelerations. The following optimization problem is solved by a centralized MPC controller at each control step k:

min over a(k|k), …, a(k+T−1|k) of  J(k) + λ Σ_{t=0}^{T−1} ||a(k+t|k)||²,   subject to ||a(k+t|k)|| ≤ ā for all t.

The first term, J(k), is the centralized model-specific cost, evaluated for T control steps (this embodies the predictive aspect of MPC), starting at time step k; it encodes the control objective of minimizing the cost function J(k). The second term, scaled by a weight λ > 0, penalizes large control inputs; a(k+t|k) denotes the prediction made at time step k for the acceleration at time step k+t.
In distributed MPC, each agent computes its acceleration based only on its own state and its local knowledge, e.g., information about its neighbors:

min over a_i(k|k), …, a_i(k+T−1|k) of  J_i(k) + λ Σ_{t=0}^{T−1} ||a_i(k+t|k)||²,   subject to ||a_i(k+t|k)|| ≤ ā for all t,

where J_i(k) is the distributed, model-specific cost function for agent i, analogous to J(k). In a distributed setting where an agent's knowledge of its neighbors' behavior is limited, an agent cannot calculate the exact future behavior of its neighbors. Hence, the predictive aspect of J_i(k) must rely on some assumption about that behavior during the prediction horizon. Our distributed cost functions are based on the assumption that the neighbors have zero accelerations during the prediction horizon. While this simple design is clearly not completely accurate, our experiments show that it still achieves good results.
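The receding-horizon pattern — optimize an action sequence over the horizon, apply only the first action — can be sketched generically. The paper solves this with MATLAB's fmincon; here I substitute a naive random-shooting search purely for illustration, and all names are my own:

```python
import random

def mpc_step(state, cost_fn, predict, horizon=3, a_max=1.5,
             lam=1.0, n_candidates=200, rng=random.Random(0)):
    """Receding-horizon step: sample candidate acceleration sequences,
    score cost J + lambda * sum ||a||^2 by rolling the predictive model
    forward, and return only the FIRST action of the best sequence."""
    best_seq, best_cost = None, float("inf")
    for _ in range(n_candidates):
        seq = [(rng.uniform(-a_max, a_max), rng.uniform(-a_max, a_max))
               for _ in range(horizon)]
        s, cost = state, 0.0
        for a in seq:
            s = predict(s, a)                       # roll the model forward
            cost += cost_fn(s) + lam * (a[0] ** 2 + a[1] ** 2)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]                              # the rest is discarded
```

With a single-agent double-integrator model and a squared-distance-to-origin cost, the returned first action accelerates the agent toward the origin, mirroring how the real controller applies only the head of the optimal sequence at each control step.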
Declarative Flocking
Declarative flocking (DF) is a high-level approach to designing flocking algorithms based on defining a suitable cost function for MPC [11]. This is in contrast to the operational approach, where a set of rules is used to capture flocking behavior, as in Reynolds' model. For basic flocking, the DF cost function contains two terms: (1) a cohesion term based on the squared distance between each pair of agents in the flock; and (2) a separation term based on the inverse of the squared distance between each pair of agents. The flock evolves toward a configuration in which these two opposing forces are balanced. The cost function J^C for centralized DF, i.e., centralized MPC (CMPC), is as follows:

J^C(p) = (2 / (|A| · (|A| − 1))) · Σ_{i∈A} Σ_{j∈A, j>i} ( ||p_ij||² + ω_s / ||p_ij||² ),

where p_ij = p_i − p_j and ω_s is the weight of the separation term, which controls the density of the flock. The cost function is normalized by the number of pairs of agents, |A| · (|A| − 1) / 2; as such, the cost does not depend on the size of the flock. The control law for CMPC is given by the centralized MPC problem of Section 2.1.
The basic flocking cost function for distributed DF is similar to that for CMPC, except that the cost function J^D_i for agent i is computed over its set of neighbors N_i(k) at time k:

J^D_i(p) = (1 / |N_i(k)|) · Σ_{j∈N_i(k)} ( ||p_ij||² + ω_s / ||p_ij||² ).

The control law for agent i is given by the distributed MPC problem of Section 2.1.
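The two DF cost functions — pairwise cohesion plus ω_s-weighted separation, and the neighborhood-restricted per-agent version — translate directly into code (a sketch; function names and the per-neighbor normalization in the distributed variant are my own reading of "analogous"):

```python
import itertools

def sq_dist(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def j_centralized(p, w_s):
    """J^C: average squared pairwise distance (cohesion) plus
    w_s times average inverse squared pairwise distance (separation)."""
    pairs = list(itertools.combinations(range(len(p)), 2))
    cohesion   = sum(sq_dist(p[i], p[j]) for i, j in pairs) / len(pairs)
    separation = sum(1.0 / sq_dist(p[i], p[j]) for i, j in pairs) / len(pairs)
    return cohesion + w_s * separation

def j_distributed(p, i, nbrs, w_s):
    """J^D_i: the same two terms, but only over agent i's neighbour set."""
    cohesion   = sum(sq_dist(p[i], p[j]) for j in nbrs) / len(nbrs)
    separation = sum(1.0 / sq_dist(p[i], p[j]) for j in nbrs) / len(nbrs)
    return cohesion + w_s * separation
```

Because both terms are averaged over the pairs considered, the cost is insensitive to flock size, which is the normalization property the text emphasizes.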
Additional Control Objectives
The cost functions for basic flocking given in Eqs. (7) and (8) are designed to ensure that in the steady state, the agents are well-separated. Additional goals such as obstacle avoidance, predator avoidance, and target seeking are added to the MPC formulation as weighted cost-function terms. Different objectives can be combined by including the corresponding terms in the cost function as a weighted sum.
Cost-Function Term for Obstacle Avoidance. We consider multiple rectangular obstacles which are distributed randomly in the field. For a set of m rectangular obstacles O_1, …, O_m, we define the cost function term for obstacle avoidance in terms of the distances between the agents and the nearest points on the obstacle boundaries, where O is the set of points on the obstacle boundaries and o^(i)_j is the point on the boundary of the j-th obstacle O_j that is closest to the i-th agent.
Cost-Function Term for Target Seeking. This term is the average of the squared distance between the agents and the target. Let g denote the position of the fixed target. Then the target-seeking term is defined as

J_target(p) = (1/n) · Σ_{i∈A} ||p_i − g||².

Cost-Function Term for Predator Avoidance. We introduce a single predator, which is more agile than the flocking agents: its maximum speed and acceleration are a factor of f_p greater than v̄ and ā, respectively, with f_p > 1. Apart from being more agile, the predator has the same dynamics as the agents, given by Eq. (1). The control law for the predator consists of a single term that causes it to move toward the centroid of the flock with maximum acceleration. For a flock of n agents and one predator, the cost-function term for predator avoidance is the average of the inverse of the cube of the distances between the predator and the agents. It is given by:

J_pred(p) = (1/n) · Σ_{i∈A} 1 / ||p_i − p_pred||³,

where p_pred is the position of the predator. In contrast to the separation term in Eqs. (5)-(6), which we designed to ensure inter-agent collision avoidance, the predator-avoidance term has a cube instead of a square in the denominator. This is to reduce the influence of the predator on the flock when the predator is far away from the flock.
NF Cost-Function Terms. The MPC cost functions used in our examination of Neural Flocking are weighted sums of the cost function terms introduced above. We refer to the first term of our centralized DF cost function J C (p) (see Eq. (7)) as J cohes (p) and the second as J sep (p). We use the following cost functions J 1 , J 2 , and J 3 for basic flocking with collision avoidance, obstacle avoidance with target seeking, and predator avoidance, respectively.
J_1(p) = J_cohes(p) + ω_s · J_sep(p),
J_2(p) = J_1(p) + ω_o · (obstacle-avoidance term) + ω_t · (target-seeking term),
J_3(p) = J_1(p) + ω_p · (predator-avoidance term),

where ω_s is the weight of the separation term, ω_o is the weight of the obstacle-avoidance term, ω_t is the weight of the target-seeking term, and ω_p is the weight of the predator-avoidance term. Note that J_1 is equivalent to J^C (Eq. (7)). The weight ω_s of the separation term is experimentally chosen to ensure that the distance between agents, throughout the simulation, is at least d_min, the minimum inter-agent distance representing collision avoidance. Similar considerations were given to the choice of values for ω_o and ω_p. The specific values we used for the weights are: ω_s = 2000, ω_o = 1500, ω_t = 10, and ω_p = 500. We experimented with an alternative strategy for introducing inter-agent collision avoidance, obstacle avoidance, and predator avoidance into the MPC problem, namely, as constraints of the form d_min − ||p_i − p_j|| < 0, d^obs_min − ||p_i − o^(i)_j|| < 0, and d^pred_min − ||p_i − p_pred|| < 0, respectively. Using the theory of exact penalty functions [12], we recast the constrained MPC problem as an equivalent unconstrained MPC problem by converting the constraints into a weighted penalty term, which is then added to the MPC cost function. This approach rendered the optimization problem difficult to solve due to the non-smoothness of the penalty term. As a result, constraint violations in the form of collisions were observed during simulation.
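The individual cost terms and their weighted-sum composition are easy to sketch. The inverse-cubed predator term and average-squared target term follow the text's verbal definitions; the function names, the single-point-per-agent obstacle interface, and the `w_s_term` shorthand for the already-weighted basic-flocking part are my own:

```python
def sq(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def j_target(p, g):
    """Target seeking: average squared agent-target distance."""
    return sum(sq(pi, g) for pi in p) / len(p)

def j_predator(p, p_pred):
    """Predator avoidance: average inverse CUBED distance, so the
    predator's influence fades faster with range than the separation term."""
    return sum(1.0 / sq(pi, p_pred) ** 1.5 for pi in p) / len(p)

def j_obstacle(p, closest_pts):
    """Obstacle avoidance: closest_pts[i] is the nearest obstacle-boundary
    point to agent i (the paper computes this per rectangular obstacle)."""
    return sum(1.0 / sq(pi, o) for pi, o in zip(p, closest_pts)) / len(p)

def j2(p, g, closest_pts, w_o, w_t, w_s_term):
    """J_2-style composition: basic-flocking core (already weighted, passed
    in as w_s_term) plus weighted obstacle and target terms."""
    return w_s_term + w_o * j_obstacle(p, closest_pts) + w_t * j_target(p, g)
```

Swapping cost terms in and out of the weighted sum is exactly the flexibility the declarative approach is claimed to provide.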
Neural Flocking
We learn a distributed neural controller (DNC) for the flocking problem using training data in the form of trajectories of state-action pairs produced by a CMPC controller. In addition to basic flocking with inter-agent collision avoidance, the DNC exhibits a number of other flocking-related behaviors, including obstacle avoidance, target seeking, and predator avoidance. We also show how the learned behavior exhibited by the DNC generalizes over a larger number of agents than what was used during training to achieve successful collision-free flocking in significantly larger flocks.
We use Supervised Learning to train the DNC. Supervised Learning learns a function that maps an input to an output based on example sequences of input-output pairs. In our case, the trajectory data obtained from CMPC contains both the training inputs and the corresponding labels (outputs): the state of an agent in the flock (and that of its nearest neighbors) at a particular time step is the input, and that agent's acceleration at the same time step is the label.
Training Distributed Flocking Controllers
We use Deep Learning to synthesize a distributed and symmetric neural controller from the training data provided by the CMPC controller. Our objective is to learn basic flocking, obstacle avoidance with target seeking, and predator avoidance. Their respective CMPC-based cost functions are given in Sections 2.2 and 3. All of these control objectives implicitly also include inter-agent collision avoidance by virtue of the separation term in Eq. 7.
For each of these control objectives, DNC training data is obtained from CMPC trajectory data generated for n = 15 agents, starting from initial configurations in which agent positions and velocities are uniformly sampled from [−15, 15]² and [0, 1]², respectively. All training trajectories are 1,000 time steps in duration.
We further ensure that the initial configurations are recoverable; i.e., no two agents are so close to each other that they cannot avoid a collision by resorting to maximal accelerations. We learn a single DNC from the state-action pairs of all n agents. This yields a symmetric distributed controller, which we use for each agent in the flock during evaluation.
Basic Flocking. Trajectory data for basic flocking is generated using the cost function given in Eq. (7). We generate 200 trajectories, each of which (as noted above) is 1,000 time steps long. The input to the NN is the position and velocity of each agent along with the positions and velocities of its N nearest neighbors. This yields 200 · 1,000 · 15 = 3M total training samples.
Let us refer to the agent (the DNC) being learned as A_0. Since we use neighborhood size N = 14, the input to the NN is of the form

[p^x_0, p^y_0, v^x_0, v^y_0, p^x_{1…14}, p^y_{1…14}, v^x_{1…14}, v^y_{1…14}],

where p^x_0, p^y_0 are the position coordinates and v^x_0, v^y_0 the velocity coordinates of agent A_0, and p^x_{1…14}, p^y_{1…14} and v^x_{1…14}, v^y_{1…14} are the position and velocity vectors of its neighbors. Since this input vector has 60 components, the input to the NN consists of 60 features.
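Flattening one agent's state plus its 14 nearest neighbors' states into the 60-feature training input can be sketched as follows (the exact ordering of components within the vector is my assumption):

```python
def features(p, v, i, n_nearest=14):
    """Flatten agent i's state plus its N nearest neighbours' states into
    one training input. With N = 14 this gives 4 * (1 + 14) = 60 features.
    (The component ordering is an assumption, not from the paper.)"""
    order = sorted((j for j in range(len(p)) if j != i),
                   key=lambda j: (p[i][0] - p[j][0]) ** 2
                               + (p[i][1] - p[j][1]) ** 2)
    x = [p[i][0], p[i][1], v[i][0], v[i][1]]
    for j in order[:n_nearest]:
        x += [p[j][0], p[j][1], v[j][0], v[j][1]]
    return x
```

Pairing each such vector with the CMPC-assigned acceleration at the same time step yields the labeled state-action training samples.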
Obstacle Avoidance with Target Seeking. For this objective, the basic-flocking input is extended with obstacle and target information: o^x_0, o^y_0 is the closest point on any obstacle to agent A_0; o^x_{1…14}, o^y_{1…14} give the closest such points for the 14 neighboring agents; and g^x, g^y is the target location.
Predator Avoidance. The CMPC cost function for predator avoidance is given in Eq. (12c). The position, velocity, and acceleration of the predator are denoted by p_pred, v_pred, and a_pred, respectively. We take f_p = 1.40; hence the predator's speed and acceleration bounds are v̄_pred = 1.4 · v̄ and ā_pred = 1.4 · ā.
Experimental Evaluation
This section contains the results of our extensive performance analysis of the distributed neural flocking controller (DNC), taking into account various control objectives: basic flocking with collision avoidance, obstacle avoidance with target seeking, and predator avoidance. As illustrated in Fig. 1, this involves running CMPC to generate the training data for the DNCs, whose performance we then compare to that of the DMPC and CMPC controllers. We also show that the DNC flocking controllers generalize the behavior seen in the training data to achieve successful collision-free flocking in flocks significantly larger in size than those used during training. Finally, we use Statistical Model Checking to obtain confidence intervals for DNC's correctness/performance.
Preliminaries
The CMPC and DMPC control problems defined in Section 2.1 are solved using the MATLAB fmincon optimizer. In the training phase, the size of the flock is n = 15. For obstacle avoidance with target seeking, we use 5 obstacles, with the target located at [60, 50]. The simulation time is 100 time units, dt = 0.1, and η = 3, where (recall) η · dt is the control period. Further, the agent velocity and acceleration bounds are v̄ = 2.0 and ā = 1.5. We use d_min = 1.5 as the minimum inter-agent distance for collision avoidance, d^obs_min = 1 as the minimum agent-obstacle distance for obstacle avoidance, and d^pred_min = 1.5 as the minimum agent-predator distance for predator avoidance. For initial configurations, recall that agent positions and velocities are uniformly sampled from [−15, 15]² and [0, 1]², respectively, and we ensure that they are recoverable; i.e., no two agents are so close to each other that they cannot avoid a collision when resorting to maximal accelerations. The predator starts at rest from a fixed location at a distance of 40 from the flock center.
For training, we used 200 trajectories of 15 agents, each trajectory 1,000 time steps in length; with one sample per agent per time step, this yielded a total of 3,000,000 training samples. Our neural controller is a fully connected feed-forward deep neural network (DNN) with 5 hidden layers, 84 neurons per hidden layer, and ReLU activation functions. We chose the DNN hyperparameters and architecture iteratively, refining the network until we observed satisfactory performance by the DNC.
For training the DNNs, we use Keras [3], a high-level neural network API written in Python and capable of running on top of TensorFlow. To generate the NN model, Keras uses the Adam optimizer [8] with the following settings: lr = 10⁻², β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸. The batch size (number of samples processed before the model is updated) is 2,000, and the number of epochs (number of complete passes through the training dataset) used for training is 1,000. For measuring training loss, we use the mean-squared-error metric. To test the trained DNC, we generated 100 simulations (runs) for each of the desired control objectives: basic flocking with collision avoidance, flocking with obstacle avoidance and target seeking, and flocking with predator avoidance. The results presented in Table 1 were obtained using the same number of agents and obstacles and the same predator as in the training phase. We also ran tests showing that DNC controllers can achieve collision-free flocking with obstacle avoidance when the numbers of agents and obstacles are greater than those used during training.
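The stated architecture — 60 inputs, 5 hidden ReLU layers of 84 units each, and a 2-unit linear output for the 2-D acceleration — is small enough to sketch without a framework (the paper trains it with Keras/Adam as described above; the placeholder weight initialization here is mine):

```python
import random

LAYERS = [60, 84, 84, 84, 84, 84, 2]   # input, 5 hidden layers, output (ax, ay)

def init_weights(rng=random.Random(0)):
    """Placeholder random weights; real training uses Keras with Adam
    and a mean-squared-error loss. Each row is [weights..., bias]."""
    return [[[rng.uniform(-0.05, 0.05) for _ in range(n_in)] + [0.0]
             for _ in range(n_out)]
            for n_in, n_out in zip(LAYERS, LAYERS[1:])]

def forward(weights, x):
    """Fully connected forward pass: ReLU on hidden layers, linear output."""
    for li, layer in enumerate(weights):
        y = [sum(w * xi for w, xi in zip(row[:-1], x)) + row[-1]
             for row in layer]
        x = y if li == len(weights) - 1 else [max(0.0, v) for v in y]
    return x

def n_params():
    """Total trainable parameters (weights plus biases)."""
    return sum((n_in + 1) * n_out for n_in, n_out in zip(LAYERS, LAYERS[1:]))
```

The parameter count makes the efficiency argument concrete: evaluating this network is a fixed, modest number of multiply-adds, which is why the DNC's per-step cost is so much lower than solving an MPC optimization.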
Results for Basic Flocking
We use flock diameter, inter-agent collision count, and velocity convergence [20] as performance metrics for flocking behavior. At any time step, the flock diameter D(p) = max_{(i,j)∈A×A} ||p_ij|| is the largest distance between any two agents in the flock. We calculate the average converged diameter by averaging the flock diameter over the final portion of each run. The results in Table 1 compare the performance of the DNC on the basic-flocking problem for 15 agents to that of the MPC controllers. Although DMPC and CMPC outperform the DNC, the difference is marginal. An important advantage of the DNC over DMPC is that it is much faster: executing a DNC controller requires a modest number of arithmetic operations, whereas executing an MPC controller requires simulating a model and optimizing over the prediction horizon. In our experiments, on average, CMPC takes 1209 msec of CPU time for the entire flock and DMPC takes 58 msec of CPU time per agent, whereas the DNC takes only 1.6 msec.
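The flock-diameter metric is a one-liner over all agent pairs (function name mine):

```python
import itertools, math

def flock_diameter(p):
    """D(p): the largest distance between any two agents in the flock."""
    return max(math.dist(a, b) for a, b in itertools.combinations(p, 2))
```

Averaging this quantity over the converged portion of a run gives the average converged diameter reported in the tables.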
Results for Obstacle and Predator Avoidance
For obstacle and predator avoidance, collision rates are used as a performance metric. An obstacle-agent collision (OC) occurs when the distance between an agent and the closest point on any obstacle is less than d^obs_min. A predator-agent collision (PC) occurs when the distance between an agent and the predator is less than d^pred_min. The OC rate (OCR) is the average number of OCs per test-trajectory time step, and the PC rate (PCR) and inter-agent collision rate (ICR) are defined similarly. Our test results show that the DNC, along with the DMPC and CMPC, is collision-free (i.e., each of ICR, OCR, and PCR is zero) for 15 agents, with the exception of DMPC for predator avoidance, where PCR = 0.013. We also observed that the flock successfully reaches the target location in all 100 test runs.
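A collision rate of this kind — average violations per time step over a trajectory — can be computed as below. This sketch handles a single hazard point per step (e.g., the predator); the obstacle version would use each agent's own closest boundary point instead. Names are mine:

```python
import math

def collision_rate(traj, hazard_traj, d_min):
    """Average number of agent-hazard violations per time step.
    traj[k] is the list of agent positions at step k; hazard_traj[k]
    is the hazard position (e.g., the predator) at step k."""
    hits = 0
    for agents, hazard in zip(traj, hazard_traj):
        for a in agents:
            if math.dist(a, hazard) < d_min:
                hits += 1
    return hits / len(traj)
```

A rate of zero over all 100 test runs corresponds to the "collision-free" results reported for the DNC.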
DNC Generalization Results
Tables 2-3 present DNC generalization results for basic flocking (BF), obstacle avoidance (OA), and predator avoidance (PA), with the number of agents ranging from 15 (the flock size during training) to 40. In all of these experiments, we use a neighborhood size of N = 14, the same as during training. Each controller was evaluated with 100 test runs. The performance metrics in Table 2 are the average converged diameter, convergence rate, average convergence time, and ICR.
The convergence rate is the fraction of successful flocks over 100 runs. The collection of agents is said to have converged to a flock (with collision avoidance) if the value of the global cost function is less than the convergence threshold. We use a convergence threshold of J_1(p) ≤ 150, which was chosen based on its proximity to the value achieved by CMPC. We use the cost function from Eq. 12a to calculate our success rate because we are showing the convergence rate for basic flocking. The average convergence time is the time when the global cost function first drops below the success threshold and remains below it for the rest of the run, averaged over all 100 runs. Even with a local neighborhood of size 14, the results demonstrate that the DNC can successfully generalize to a large number of agents for all of our control objectives.

Statistical Model Checking Results

The main idea of Monte Carlo (MC) approximation is to use N random variables, Z_1, . . . , Z_N, also called samples, IID distributed according to a random variable Z with mean µ_Z, and to take the mean µ̂_Z = (Z_1 + . . . + Z_N)/N as the value approximating µ_Z. Since an exact computation of µ_Z is almost always intractable, an MC approach is used to compute an (ε, δ)-approximation of this quantity.
Additive Approximation [6] is an (ε, δ)-approximation scheme in which the mean µ_Z of an RV Z is approximated with absolute error ε and probability 1 − δ:

Pr[ µ_Z − ε ≤ µ̂_Z ≤ µ_Z + ε ] ≥ 1 − δ,

where µ̂_Z is an approximation of µ_Z. An important issue is to determine the number of samples N needed to ensure that µ̂_Z is an (ε, δ)-approximation of µ_Z. If Z is a Bernoulli variable expected to be large, one can use the Chernoff-Hoeffding instantiation of the Bernstein inequality and take N = 4 ln(2/δ)/ε², as in [6]. This results in the additive approximation algorithm [5], defined in Algorithm 1:

Algorithm 1 (Additive Approximation)
  Input: (ε, δ) with 0 < ε < 1 and 0 < δ < 1; random variables Z_i, IID
  Output: µ̂_Z, an approximation of µ_Z
  N = 4 ln(2/δ)/ε²; S = 0
  for (i = 1; i ≤ N; i++) do S = S + Z_i
  µ̂_Z = S/N
  return µ̂_Z

We use this algorithm to obtain a joint (ε, δ)-approximation of the mean convergence rate and mean normalized convergence time for the DNC. Each sample Z_i is based on the result of an execution obtained by simulating the system starting from a random initial state, and we take Z = (B, R), where B is a Boolean variable indicating whether the agents converged to a flock during the execution, and R is a real value denoting the normalized convergence time. The normalized convergence time is the time when the global cost function first drops below the convergence threshold and remains below it for the rest of the run, measured as a fraction of the total duration of the run. The assumptions about Z required for validity of the additive approximation hold, because RV B is a Bernoulli variable, the convergence rate is expected to be large (i.e., closer to 1 than to 0), and the proportionality constraint of the Bernstein inequality is also satisfied for RV R.
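The sample-size formula and the algorithm itself are short enough to implement directly (function names mine; the `sample` argument stands for one simulated execution's outcome):

```python
import math, random

def sample_count(eps, delta):
    """Chernoff-Hoeffding sample size N = 4 ln(2/delta) / eps^2."""
    return math.ceil(4 * math.log(2 / delta) / eps ** 2)

def additive_approx(sample, eps, delta):
    """Algorithm 1: draw N IID samples and return their mean, an
    (eps, delta)-approximation of the true mean."""
    n = sample_count(eps, delta)
    return sum(sample() for _ in range(n)) / n
```

With ε = 0.01 and δ = 0.0001 this gives N = 396,140, matching the number of simulations the paper reports performing per flock size.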
In these experiments, the initial configurations are sampled from the same distributions as in Section 5.1, and we set ε = 0.01 and δ = 0.0001, to obtain N = 396,140. We perform the required set of N simulations for 15, 20, 25, 30, 35 and 40 agents. Table 4 presents the results, specifically, the (ε, δ)-approximations µ̂_CR and µ̂_CT of the mean convergence rate and the mean normalized convergence time, respectively. While the results for the convergence rate are (as expected) numerically similar to the results in Table 2, the results in Table 4 are much stronger, because they come with the guarantee that they are (ε, δ)-approximations of the actual mean values.
Related Work
In [18], a flocking controller is synthesized using multi-agent reinforcement learning (MARL) and natural evolution strategies (NES). The target model from which the system learns is Reynolds' flocking model [16]. For training purposes, a set of metrics called entropy is chosen, which provide a measure of the collective behavior displayed by the target model. As the authors of [18] observe, this technique does not quite work: although it consistently leads to agents forming recognizable patterns during simulation, the agents self-organize into a cluster instead of flowing like a flock.
In [9], reinforcement learning and flocking control are combined for the purpose of predator avoidance, where the learning module determines safe spaces in which the flock can navigate to avoid predators. Their approach to predator avoidance, however, is not distributed, as it requires a majority consensus by the flock to determine its action to avoid predators. They also impose an α-lattice structure [13] on the flock. In contrast, our approach is geometry-agnostic and achieves predator avoidance in a distributed manner.
In [7], an uncertainty-aware reinforcement learning algorithm is developed to estimate the probability of a mobile robot colliding with an obstacle in an unknown environment. Their approach is based on bootstrapped neural networks using dropout, allowing it to process raw sensory inputs. Similarly, a learning-based approach to robot navigation and obstacle avoidance is presented in [14]. They train a model that maps sensor inputs and the target position to motion commands generated by the ROS [15] navigation package. Our work, in contrast, considers obstacle avoidance (and other control objectives) in a multi-agent flocking scenario under the simplifying assumption of full state observation.
In [4], an approach based on Bayesian inference is proposed that allows an agent in a heterogeneous multi-agent environment to estimate the navigation model and goal of each of its neighbors. It then uses this information to compute a plan that minimizes inter-agent collisions while allowing the agent to reach its goal. Flocking formation is not considered.
Conclusions
With the introduction of Neural Flocking (NF), we have shown how machine learning in the form of Supervised Learning can bring many benefits to the flocking problem. As our experimental evaluation confirms, the symmetric and fully distributed neural controllers we derive in this manner are capable of achieving a multitude of flocking-oriented objectives, including flocking formation, inter-agent collision avoidance, obstacle avoidance, predator avoidance, and target seeking. Moreover, NF controllers exhibit real-time performance and generalize the behavior seen in the training data to achieve these objectives in a significantly broader range of scenarios.
Ongoing work aims to determine whether a DNC can perform as well as the centralized MPC controller for agent models that are significantly more realistic than our current point-based model. For this purpose, we are using transfer learning to train a DNC that can achieve acceptable performance on realistic quadrotor dynamics [1], starting from our current point-model-based DNC. This effort also involves extending our current DNC from 2-dimensional to 3-dimensional spatial coordinates. If successful, and preliminary results are encouraging, this line of research will demonstrate that DNCs are capable of achieving flocking with complex realistic dynamics.
For future work, we plan to investigate a distance-based notion of agent neighborhood as opposed to our current nearest-neighbors formulation. Furthermore, motivated by the quadrotor study of [21], we will seek to combine MPC with reinforcement learning in the framework of guided policy search as an alternative solution technique for the NF problem.
Case Report: Diagnosis and Treatment of a Giant Retroperitoneal Liposarcoma Presenting as an Irreducible Inguinal Hernia
Background: A retroperitoneal liposarcoma protruding into the inguinoscrotal area and presenting as an irreducible inguinal hernia is extremely rare. Because such cases are rare and experience in their diagnosis and treatment is limited, clinical guidelines are lacking. We report the successful management of a giant retroperitoneal liposarcoma extending into the inguinoscrotal area. Case presentation: A 55-year-old male patient was admitted to our hospital in August 2018 with a large left inguinal mass without abdominal pain or digestive symptoms. Preoperative contrast-enhanced computed tomography revealed a huge abdominopelvic mass, and ultrasound-guided biopsies showed liposarcoma. The patient also suffered from dilated cardiomyopathy, with a left ventricular ejection fraction of only 39%. The left renal pedicle was compressed by the mass, and the left glomerular filtration rate was as low as 29.25 ml/min. Intraoperatively, the mass was incarcerated in the inguinal canal and involved the left testis. We performed a radical tumor resection through two incisions, comprising resection of the retroperitoneal tumor, resection of the scrotal tumor, and a tension-free repair of the left inguinal hernia. The resected retroperitoneal specimen measured 50*28*9 cm and weighed 13.5 kilograms; the scrotal specimen measured 16.5*7*4.5 cm and weighed 6.2 kilograms. Pathologically, the tumor was diagnosed as a well-differentiated liposarcoma originating from perirenal fat. The patient did not undergo adjuvant therapy post-operation and remains in complete clinical remission fifteen months after the operation. Conclusions: Careful distinction of inguinoscrotal masses is essential to minimize complications and improve patient prognosis. The prime principle in treating well-differentiated retroperitoneal liposarcomas is radical resection with protection of vital organs and vessels.
Background
Liposarcoma is a rare malignant tumor derived from adipocytes and is most often located in the extremities, retroperitoneum or inguinal region [1]. A retroperitoneal liposarcoma protruding into the inguinoscrotal area and mimicking an inguinal hernia is extremely infrequent and rarely diagnosed preoperatively [2]. Radical resection is the mainstay of treatment for well-differentiated liposarcoma because of its insensitivity to radiotherapy and chemotherapy [3]. However, the operation can be challenging because the tumor is huge and widely distributed, and the adjacent anatomical relationships are markedly changed. Organ dysfunction in the patient worsens this situation.
Herein, we report a case of a giant retroperitoneal liposarcoma that presented as a giant irreducible inguinoscrotal mass in a patient at high operative risk.
Case Presentation
A 55-year-old male presented to our hospital with a painless, growing mass in the left inguinal region that had gradually extended into the scrotum. He had first noticed the mass 4 years prior to the consultation, while undergoing medical treatment for heart failure caused by dilated cardiomyopathy at a local hospital. The surgeons at the local hospital recommended postponing the operation because of the recent heart failure. Since then, the patient had taken Bisoprolol, Trimetazidine and Perindopril tert-butylamine to control the heart failure. He suffered an open fracture of his left tibia and fibula 3 years after the heart failure, and the external fixation had not been removed at the time of this visit. There were no abnormalities on physical examination apart from the left inguinoscrotal mass (Fig. 1). We diagnosed the inguinal mass as a left irreducible inguinal hernia. Because of the large size of the mass, the patient underwent abdominal contrast-enhanced computed tomography (CT) to rule out other intraabdominal abnormalities. The contrast-enhanced CT revealed a giant mass of fat density extending below the internal ring orifice and descending into the scrotum (Fig. 2). We suspected the mass to be a liposarcoma, and B-ultrasound-guided biopsies confirmed our suspicion. Because of the history of heart failure, we carefully assessed cardiac, pulmonary and renal function before the operation.
The left ventricular ejection fraction was only 39%, but the electrocardiogram (ECG), Brain Natriuretic Peptide (BNP) and Troponin I (TNI) were within normal ranges. The New York Heart Association (NYHA) functional class was grade II, as assessed by the cardiology physician. Chest CT revealed multiple areas of emphysema in the lungs, but pulmonary function was essentially normal. The left renal pedicle was compressed by the retroperitoneal mass, and the left glomerular filtration rate was as low as 29.25 ml/min. The operation was performed in cooperation with the department of anesthesiology, the department of cardiology, and the Surgical Intensive Care Unit (SICU). In order to radically resect the giant tumor, we performed the operation through two incisions: one from the xiphoid process to the symphysis pubis, and the other a classic incision for inguinal hernia repair. The whole procedure comprised resection of the retroperitoneal tumor, resection of the scrotal tumor and a tension-free repair of the left inguinal hernia.
During the operation we found a giant encapsulated mass occupying the left retroperitoneum (Fig. 3). The tumor was derived from the perirenal fat tissue of the left kidney, and the upper urinary tract, including the renal pedicle, was tightly wrapped by the tumor. Complete resection of the tumor and adjacent abdominal organs, including the left kidney, left ureter, and left testis and epididymis, was performed. A hernia patch was then placed into the preperitoneal space to repair and strengthen the local defect. The operating time was about 300 min with less than 100 ml of blood loss, and the patient was stable throughout the operation. The resected retroperitoneal specimen measured 50*28*9 cm and weighed 13.5 kilograms; the scrotal specimen measured 16.5*7*4.5 cm and weighed 6.2 kilograms (Fig. 3). The patient was admitted to the SICU for close monitoring after the surgery but was transferred to the general ward the next day. There were no postoperative complications, and the patient was discharged one week after the surgery. Pathologically, the tumor was diagnosed as a well-differentiated liposarcoma originating from the retroperitoneum. Adjuvant therapy was not administered in consideration of the radical resection and the insensitivity of the well-differentiated subtype to radiotherapy and chemotherapy [1]. The patient is well and shows no evidence of recurrence fifteen months after the operation (supplementary materials, CT scan).
Discussion
Inguinal hernia is a common and prevalent disease that mainly presents as an inguinal or scrotal mass [4]. Diagnosing an inguinal hernia is not complicated, but attention must be paid to some special types of inguinal hernia and special hernia contents. For example, in a sliding hernia, the caecum, sigmoid colon or bladder becomes part of the hernia sac wall, and these organs can easily be damaged if the hernia sac wall is incised without caution [5][6]. Sometimes the hernia contents extending from the abdominal cavity are not normal tissues but retroperitoneal or intraperitoneal tumors, such as a liposarcoma or neurofibroma from the retroperitoneum, or a lymphoma from the small intestine [2,7]. Special hernia contents should be considered if a huge inguinal or scrotal mass is irreducible without signs of abdominal pain or intestinal obstruction. This patient consulted at the outpatient clinic for his football-sized scrotal mass, and we immediately sent him to the radiology department for an abdominal CT scan to rule out other abdominal abnormalities. In the reported literature, three cases were misdiagnosed preoperatively as simple inguinal hernias and required reoperation [8][9][10]. Reoperation increases the mortality and recurrence risk of these patients. Non-inguinal-hernia disorders presenting as an inguinoscrotal mass should also prompt caution; these include testicular or spermatic hydrocele, varicocele, enlarged lymph nodes in the groin, spermatic cord lipoma or liposarcoma, undescended testis, and cold abscess of the psoas muscle [11][12]. Hence, potential pitfalls should be avoided when diagnosing inguinal masses.
A retroperitoneal liposarcoma presenting as an inguinal hernia is extremely rare, and only eleven cases have been reported worldwide to date (supplementary table 1). Histologically, the majority of these liposarcomas were well-differentiated. Most of the cases underwent surgical resection and had a favorable prognosis. Hence, radical resection remains the mainstay of therapy, particularly for the well-differentiated subtype [3]. However, the operation can be a major challenge: the tumor is huge and widely distributed, the surrounding organs are compressed and deformed, and the adjacent anatomical relationships are changed. It is important to radically resect the tumor while preserving the vital organs and vessels. Intraoperatively, bilateral ureteral intubation should be performed first to protect the ureters. The pulsation of important blood vessels should always be palpated, and the tumor should be dissected more than 1 cm away from any arterial pulsation, so as to avoid damage to the large blood vessels. Large retroperitoneal tumors often adhere severely to surrounding tissues and organs, or even completely envelop some of them. In such cases, combined resection of the involved viscera can avoid residual local tumor and the spread of tumor cells, ensuring a negative surgical margin [13]. For patients with cardiopulmonary insufficiency, multidisciplinary cooperation must be sought. Serdar Yol et al. successfully cured a patient with dyspnea and cachexia by providing respiratory support in the early postoperative period [14]. In our case, the major operation was performed with the support of a cardiologist, an anesthetist, and an intensive care physician. There is still debate about postoperative radio-chemotherapy for well-differentiated liposarcomas [15]. Adjuvant therapy was not administered in consideration of the radical resection and the low risk of recurrence of the well-differentiated subtype.
In conclusion, careful distinction of inguinoscrotal masses is essential to minimize complications and improve patient prognosis. Further imaging examination is mandatory to screen for other intraabdominal abnormalities if a huge inguinal or scrotal mass is irreducible without signs of abdominal pain or intestinal obstruction. The prime principle in treating well-differentiated retroperitoneal liposarcomas is radical resection with protection of vital organs and vessels. Since radical resection is a challenge for a retroperitoneal mass protruding into the inguinoscrotal area, interdisciplinary discussion must be considered.
Consent for publication
The manuscript is approved for publication by all the authors. Written informed consent was obtained from the patient for publication.
Availability of data and materials
The datasets used during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare that they have no conflict of interest.
Methodology for a Comprehensive Health Impact Assessment in Water Supply and Sanitation Programmes for Brazil
Based on the broader concept of health proposed by the Pan-American Health Organization/World Health Organization (PAHO/WHO) in 2018, and given the absence in the literature of indices that capture the causal relationship between sanitation and health, a methodology for assessing the health impact of water and sanitation programmes, known as a Health Impact Assessment (HIA), was developed specifically for the Brazilian context and focused on a school in the northeast of the country. Through exploratory and descriptive evidence, and using documentary research as a method, a retrospective survey of documents proposing evaluation methodologies was carried out for the period 2000 to 2022. A single document was found to fit the research objective, and it was used to develop the proposed HIA methodology. Development of the methodology consisted of two stages: definition of the health dimensions and selection of the indicators making up each dimension. The HIA methodology was then applied to a school in northeast Brazil to test its use, before a water-efficient management intervention was implemented. The overall score of 46% indicated that there was room for improvement, which the new management approach could facilitate. This methodology is therefore proposed as an instrument for the evaluation of public water and sanitation policies, assisting managers in the decision-making process and in guiding sanitation programs and plans.
Introduction
The health-sanitation approach is related to the level of social development of a country [1]. Where there is inadequate water supply and sanitation services, it is likely that indicators of health are poor, reflecting the country's low economic development. This highlights the importance of establishing the impact of water supply and sanitation programmes on health [1]. However, how is this relationship to be established if there is no unified concept of health? While sanitation is basically established as access to facilities and services that provide collection, transport, treatment, and disposal of human excreta, wastewater, and solid waste [2], definitions of health range from the limited concept of "absence of disease" to the more comprehensive concept of "state of complete physical, mental and social well-being" [3]. However, [4] states that the World Health Organisation (WHO) definition of health is not fit for purpose, and instead proposes that it is defined as " . . . the experience of physical and psychological well-being." Additionally, it states that "Good health and poor health do not occur as a dichotomy, but as a continuum".
Figure 1. Socioeconomic, cultural, and environmental factors impacting an individual's health. After Dahlgren and Whitehead, 1991, in [8].
Provision of adequate water, sanitation, and health is related to the concept of quality of life, which is considered to be "an eminently human notion" related to the satisfaction found in family, love, and social and environmental life, synthesising all the elements that a society considers its standards of comfort and well-being [9]. It is acknowledged, however, that there are many different location-specific definitions of quality of life from an individual and societal point of view, which are beyond the scope of this paper. Provision of adequate water and sanitation impacts quality of life, as it contributes to positive impacts on the health and well-being of the benefited population [10]. The difficulty of disadvantaged populations accessing adequate water and sanitation infrastructure makes them vulnerable, particularly when these aspects are taken in combination, leading to a low quality of life, emphasising social inequality and allowing the proliferation of diseases related to inadequate sanitation [10]. These diseases include those associated with diarrhoea, such as cholera and dysentery, as well as typhoid, intestinal worm infections and polio. The WHO [11] also states that inadequate sanitation can lead to resistance to antimicrobial treatments and can exacerbate stunted growth.
In general, "sanitation" is defined as being able to access the means to safely dispose of human waste (blackwater containing faeces and urine), which usually means provision of toilet facilities, and also includes the disposal of menstrual blood, the collection and disposal of solid waste, the management of industrial/hazardous waste, and the treatment and disposal of wastewater [12,13]. The latter includes any water produced by households which does not go into the toilet, such as water from personal washing, clothes washing, kitchen preparation, etc., and is also commonly called "greywater". Thus, assessing the health impact of improvements in sanitation conditions is important because it measures the effectiveness of this type of action on the quality of life of the benefited populations, while also assisting with monitoring the morbidity and mortality rates of related diseases. This impact can be measured in several ways, including by the population served, as carried out by the Brazilian Ministry of Regional Development, or by the length of the network implemented [14], although this does not account for small-scale, decentralized systems, or the quality of the service provided. The Ministry of Health, in contrast, measures the incidence of water-borne disease, and thus there is little interaction between these organisations when it comes to measuring the health impact of sanitation actions.
The Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS) are worldwide sources of information on the health status of children and women carried out by the United Nations Children's Fund (UNICEF). DHS covers selected nutritional indicators, fertility, and issues of health around reproduction, maternal and child health, HIV and AIDS, maternal and child mortality, malaria, and other indicators. MICS covers health, education status, protection of children, and prevalence of HIV/AIDS according to geographic, social and demographic characteristics (https://inddex.nutrition.tufts.edu/data4diets/data-source/demographic-and-health-surveys-dhs-multiple-indicator-cluster-surveysmics, accessed on 30 September 2022). These data are statistically robust and comparable globally [15]; however, a survey of the online databases did not include Brazil or the subject of sanitation. The MICS surveys were carried out at intervals, and in the case of Brazil, there are records for 1986 (MICS1) [16], 1991 (MICS2) [17] and 1996 (MICS3) [18]. There should have been a further survey, or MICS4, between 2010 and 2012, but Brazil was one of the countries, including India, which did not carry this out [15], which may explain their absence from the UNICEF databases. The information contained in the MICS1-3 databases is specific to Brazil and is therefore in Portuguese. Sanitation was not considered during the first two MICS; by MICS3, however, sanitation was included, and it was reported that for the country overall, whilst over 70% of households had access to a water supply, just over 40% of them did not have a bathroom and were not connected to a sewerage network. Table 1 shows the breakdown of this data by region and land use, and illustrates that whilst urban areas had nearly 85% coverage for water supply, over half of households were nonetheless not connected to a sewerage network, and may not have had a bathroom.
In comparison with the urban areas, Table 1 shows that rural households in the 1996 MICS3 had far less access to a water supply (nearly 25%), and only 6.3% had a bathroom or were connected to a sewerage network. In comparison with other areas of Brazil, the north of the country had the lowest percentage of households with a bathroom or connection to a sewerage management system; this may reflect the population density, which is very low. There are areas where there is just one inhabitant per hectare, and as a consequence, it is not possible to install a sewage system; instead, a septic tank is used for each individual household. The United Nations Millennium Development Goal (MDG) sanitation target was to halve the number of people who did not have access to basic sanitation (water and sanitation) by 2015 [19]; however, it is debatable whether this was achieved [19,20]. The MDGs were superseded by the United Nations' Sustainable Development Goals (SDGs) [21], under which the current ambition is that everyone will have "adequate and equitable" water and sanitation and basic hygiene by 2030. However, as stated by [12], "the world is alarmingly off track to deliver sanitation for all by 2030" (p5). Yet authors such as Pereira and Marques have recently asked of SDG 6 (the "water" SDG) "Are we there yet?" [22], concluding that UN Member States were in fact closer than widely perceived, in that the gap between the Best and Worst Performance Frontiers had closed, although in [23] they did acknowledge that in the specific case of Brazil, the gap had widened.
Additionally, because the targets set by the SDGs are global and aspirational, in essence, each country is individually responsible for organising specific mechanisms to achieve them, and there are no country-specific metrics to enable the achievement of these goals. According to [24], there are several countries which are not using best practice in order to achieve the aims of SDG 6; Brazil is second only to China for this record in terms of its size and population. As a result, the Brazilian National Sanitation Information System reported that 16% of the population of Brazil cannot access a water supply network, 46% are not connected to a sanitation network, and 22% of the wastewater produced is not treated [24]. According to Cavalcanti et al. [25], by improving the performance of those companies involved in the integrated management of basic sanitation, and thereby addressing these issues, there is potential for access to a sanitation network across Brazil to increase to 76.5%. Ferreira et al. [26] identify that the provision of adequate drinking water facilities would improve human health by reducing the numbers of hospitalisations due to water-related disease, could positively impact the whole population, and would bring Brazil closer to the situation in developed countries.
In Brazil, the National Guidelines for Basic Sanitation were established by Law nº 14026 (2020) [27], which addresses, among other aspects, issues around drinking water and sanitary sewerage. These services must be provided based on principles such as: universal access; completeness; safety, quality and protection of the environment and public health; availability; local specificities; economic efficiency and sustainability; and communication with other policies aimed at improving quality of life, for which basic sanitation is a determining factor. However, this is a law, and as such it does not present a methodology for measuring the impacts resulting from the implementation of basic sanitation actions. This, therefore, goes some way towards justifying the proposition of a methodology for evaluating the health impact specifically of sanitation programs in the Brazilian context, a comprehensive Health Impact Assessment (HIA). A study by Abe and Miraglia [28] found that the use of the HIA approach was not common across Latin America in general or Brazil in particular, although Thandoo et al. [29] acknowledge that, across Latin America as a whole, only Mexico and Brazil have published HIA guidelines; as these guidelines are published solely in Portuguese, they are not readily accessible internationally. The authors of [28] also found that the monitoring and subsequent analysis of health impacts was not robust, both of these issues implying that encouragement in the application of HIA was needed. Abe and Miraglia also published a study in 2016 [30] utilising HIA in the Brazilian context, but this was in terms of identifying the impacts of air pollution on health and did not consider water-resource management. Silveira and Fenner [31] also found that HIA was not commonly used in Brazil, particularly highlighting the benefits of engaging with multiple stakeholders in HIA as opposed to the standard Brazilian approach of the environmental licensing process.
In general, HIAs evaluate the absence of disease, taking an epidemiological perspective [30], but the proposed, comprehensive methodology uses seven dimensions of health, not just epidemiology. These seven dimensions are: sanitary, environmental, technological, sociocultural, epidemiological, mental well-being and economic; their development is discussed in Section 3. The present study applies this extended methodology to a situation in Brazil where the water supply system needs to be improved. The relationship between water/sanitation and health is a complex one, affecting all aspects of health. According to Heller [32], there is no sufficiently comprehensive assessment to support any relationship between water supply and sanitation interventions and health indicators.
The first aim of this article is to propose a HIA methodology which can be used to specifically assess the impacts on human health of water supply and sanitation projects in Brazil but has the potential to be applied elsewhere in the world with similar issues regarding sanitation. This will be achieved by proposing indicators which represent the status of human health in the Brazilian context. Whilst HIA is a well-recognised technique to show the results of suitable interventions to address inadequacies [33], there is a dearth of its application to instances of insufficient provision of water and sanitation in communities, and thus, this study provides an extension to its use in a specific context. This paper therefore begins by considering both qualitative and quantitative impacts related to water supply and sanitation programmes translated into indicators that address the various dimensions of health. By carrying out this comprehensive survey of health dimensions, the HIA produced enables the engagement of multiple stakeholders simultaneously, encouraging dialogue in the development of associated policies and guidelines. The second aim is to test this methodology in a school setting where sanitation is inadequate and to present preliminary, baseline results of applying the HIA before sanitation issues have been addressed.
Materials and Methods
For the construction of the comprehensive HIA evaluation methodology, exploratory and descriptive research was used by means of documentary evidence. It was characterised as exploratory because it sought to understand and characterise problems experienced in the local context, i.e., Brazil specifically. It was also descriptive to the extent that it detailed each dimension of health and its indicators. Primary sources of information were used, which consisted of official Brazilian reference documents that had not been analysed previously. It is acknowledged that there are many full literature surveys either of global HIA approaches (e.g., Harris-Roxas et al.'s [34] state-of-the-art review) or of individual countries worldwide (e.g., Dannenberg [35]); a further such survey was therefore considered beyond the scope of this study, and the Brazilian documents alone provided the basis of the health dimensions used to construct a comprehensive HIA.
In order to identify methodologies for evaluating the health impacts caused by water supply and sanitation programmes in Brazil, the databases accessed were those of the Ministry of Health, the Ministry of Environment and the Ministry of Cities, currently the Ministry of Regional Development, since they are responsible for establishing standards, proposing, monitoring, and implementing policies, guidelines, and actions for basic sanitation. Since these databases contain Brazilian information, much of the corresponding documentation is available not in English but only in Portuguese. A retrospective survey was conducted for the period 2000 to 2022, spanning the publication of the MDGs dating from 2000 and the SDGs, which were produced between 2014 and 2015. The following keywords were used as locators: "health impact assessment", "social determinants of health", "water supply", "sanitation" and "assessment methodology". The inclusion criterion was that documents proposed methodologies for assessing health impacts caused by the provision of water supply and sanitation. The exclusion criterion was documents that evaluated public policies related to water supply and sanitation alone, without taking account of dimensions of health.
The search was only able to identify three possible sources of information based on the criteria and sources of information given above:
1. "Plano Nacional de Saneamento Básico" (National Basic Sanitation Plan) (PLANSAB) [36], coordinated by the Ministry of Regional Development; in Portuguese, with relevant sections translated into English in Supplementary Material S1.
2. "Pesquisa Nacional de Saneamento Básico" (National Survey on Basic Sanitation) (PNSB) [37], a survey applied by the Brazilian Institute of Geography and Statistics (IBGE); in Portuguese, with relevant sections translated into English in Supplementary Material S1.
3. "Avaliação de Impacto na Saúde das Ações de Saneamento" (Health Impact Evaluation of Sanitation Actions) [38], a methodological proposal by the Pan-American Health Organization/World Health Organization (PAHO/WHO) in Brazil, together with the Ministry of Health in 2014; in Portuguese, with relevant sections translated into English in Supplementary Material S1.
It was observed through the search that evaluation of government programs related to the provision of basic sanitation was lacking, with overviews of local sanitation conditions in certain cities and/or regions and their historical evolution predominating. As PLANSAB [36] and PNSB [37] are public policy evaluation instruments for the development of resource investment plans, they did not present a methodology for the evaluation of sanitation actions and their impacts on health and were therefore excluded. Thus, only the methodology developed by PAHO/WHO [38], which discussed health dimensions and indicators, was applicable to this study.
The comprehensive Health Impact Assessment methodology comprises the following steps:
a) Definition of the dimensions of health that make up the evaluation methodology.
b) Selection of indicators for each dimension, representing the variables or attributes that enable its description and measurement, both quantitatively and qualitatively.
c) Use of evaluation tools to assign a grade to each indicator.
d) Calculation of the value of each dimension as the average value of its indicators.
e) Assignment of a weight to each dimension (depending on the type of sanitation intervention).
f) Calculation of the weighted average of the dimensions, which gives the value of the health condition.
g) Calculation of the health impact as the difference between the health condition before and after the intervention.
Steps (c) to (f) need to be performed both before and after the intervention so that the health impact can be calculated in step (g).
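The calculation implied by steps (c) to (g) can be sketched as a small computation. This is a minimal illustration only, not the official PAHO/WHO tooling; the dimension names, indicator grades, and weights below are hypothetical placeholders.

```python
# Minimal sketch of HIA steps (c)-(g): grade indicators (0-1), average
# them per dimension, take a weighted average of the dimensions, and
# compute the impact as the before/after difference.
# All numbers below are hypothetical placeholders.

def dimension_score(indicator_grades):
    """Step (d): a dimension's value is the average of its indicator grades."""
    return sum(indicator_grades) / len(indicator_grades)

def health_condition(dimensions, weights):
    """Step (f): weighted average of the dimension scores."""
    total_weight = sum(weights[name] for name in dimensions)
    return sum(dimension_score(grades) * weights[name]
               for name, grades in dimensions.items()) / total_weight

# Hypothetical indicator grades before and after an intervention.
weights = {"sanitary": 3, "environmental": 2, "sociocultural": 1}
before = {"sanitary": [0.0, 1.0], "environmental": [0.5], "sociocultural": [0.5, 0.5]}
after = {"sanitary": [1.0, 1.0], "environmental": [0.8], "sociocultural": [0.5, 1.0]}

# Step (g): health impact of the intervention.
impact = health_condition(after, weights) - health_condition(before, weights)
print(round(health_condition(before, weights), 3), round(impact, 3))
```

The same before/after structure reappears in the case study below, where the baseline is assessed first and the post-intervention condition is measured later.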
The HIA methodology tackles the relationship between water supply and sanitation programmes and health. It uses indicators as a tool to relate the implementation of water supply and sanitation interventions to improvements in each dimension of health, by assessing how each intervention affected each indicator. The proposed methodology, illustrated in Figure 2, provides a formal and specific measurement of the impact of water supply or sanitation programmes on health improvement. The first step in the methodology was to determine the dimensions of health, followed by determining the indicators that reflect the condition of each dimension. The indicators depend on the nature and scope of each water supply or sanitation intervention; those related to each health and sanitary dimension are listed in Sections 4.1 and 4.2. Many indicators can be assessed by comparison with official standards (for example, water-quality determinands for the sanitary dimension), while others can be estimated by expert field analysts, through perception surveys (covering the epidemiological, mental well-being, environmental, and sociocultural dimensions) and information from institutions linked to the health sector at both federal and municipal or local level, where the latter has information from family health programs. The technological dimension can be observed on a site visit by examination of the infrastructure, and also in available manuals and guidelines. For the economic dimension, data can be obtained from the municipalities on water consumption, the costs of deliveries by water tanker and the school's electricity bill, with the health secretariat supplying data on health spending. Further information linked to the environmental dimension includes climate data and geography specifically related to the site [5,38].
Using this methodology, a baseline scenario can be determined, against which it may be possible to assess whether there is an associated improvement in health conditions after the sanitation action. However, the health dimensions differ in nature and cannot be directly compared on individual merit; the score for each one therefore needs to be examined and taken into account. For each water supply or sanitation intervention, decision-makers need to establish a proper hierarchy for the separate health dimensions.
The overall score comprised a weighted average of the health dimension scores. Weights between 1 and 3 were assigned to reflect the nature of each dimension; they were chosen by the expert team carrying out the study, based on observations at the study site and on information gathered from the interviews. For example, an intervention that focused on environmental education entailed a greater weight in the sociocultural dimension. A decision on the type of system to be implemented gave greater weight to the technological dimension. The decision to construct an improvement for water treatment, on the other hand, would carry greater weight for the sanitary dimension. Drilling wells or using rivers for supply would make sense in a region subject to regular extreme droughts, leading to the environmental dimension having greater weight.
Development of the Dimensions Applied to Health
Seven dimensions were defined when developing the methodology: sanitary, environmental, technological, sociocultural, epidemiological, mental well-being, and economic. These were chosen based on the concept of health adopted by the WHO, "the most complete physical, mental and social well-being", and its concept of water and sanitation, "the control of all factors that may interfere with the most complete physical, mental and social well-being" [3]. They were developed from those proposed in [38], i.e., anthropology, sanitation, epidemiology, and economics, combined with health indicators from [5], including environment and socioeconomic (divided into sociocultural and economic). Mental well-being was also included, as the literature highlights issues such as embarrassment, fear (particularly of assault), lack of privacy, shame, anxiety, and safety, particularly in women and girls, which negatively impact mental well-being when sanitation is inadequate or lacking, as discussed by Sclar et al. [39].
These are discussed in turn in the following sections.
Sanitary
The sanitary dimension draws on environmental surveillance to identify and monitor factors in water, air, and soil that can impact human health, with consequences for the incidence of disease. This, together with epidemiological studies of the incidence and prevalence of diseases and their interrelationship with water and sanitation actions, can be used to control and eliminate risks [40].
Environmental
Society's impact on the natural environment alters the dynamics of the landscape and reduces its capacity to respond, generating a degraded environment that can lead to disasters. These issues cause concern and affect sanitation conditions and, consequently, health.
Pruss-Ustun and Corvalan [41] point out that there are several environmental risk factors that can contribute to the incidence of disease; 24% of diseases in the world can be attributed to environmental risk factors. Data from WHO [40] show that, in 2004, 85 of 102 health problems and injuries were attributed to poor water supply and sanitation. In addition, 24% of illnesses and 23% of premature deaths resulted from exposure to unhealthy environments and unsanitary care.
Disasters can also impact provision of drinking water and can disrupt the sanitation system. According to the UN Office for Disaster Risk Reduction [42], from 1990 to 2014, the most frequent disasters in Brazil were: floods (65.2%), landslides (11.3%), drought (8.7%), storm (7.8%), extreme temperature (3.5%), fire (2.6%) and others (0.9%). Floods caused the greatest number of deaths and homelessness (82.2%), followed by landslides (15.7%). In this context, waterborne diseases are the most frequently observed, followed by those caused by poor water supply and sanitation conditions and food contamination, as well as those caused by changes in the behaviour of disease vectors and infectious agents [10,43]. Therefore, the frequency of occurrence of extreme events is an aspect to be taken into account.
Technological
Technological dimensions are related to the process of selection, conception, and discussion of the technologies to be implemented, preparation of infrastructure projects, and the suitability of technologies adopted. They are linked to other dimensions, since they consider the population's sociocultural context and verify if the technology is appropriate for the environment in which the sanitation system is to be implemented, from social, cultural, environmental, and economic standpoints.
This approach summarizes the extent to which the technological solution used in sanitation projects impacts health. Technology is tied to a given context and territory and represents the level of societal development, which is influenced by cultural, political, and economic factors [44]. For this reason, the term appropriate technology can be used in specific contexts to mean "technically correct, culturally acceptable and economically viable" [45]. Thus, the implementation of technologies involves risks concerning acceptance and social control; the balance between costs and benefits; the specificity and scope of application; and the requirement for behavioural change in the benefited community [46]. Technology must not be merely technical: the population should be engaged so that the solution is grounded in the trust of the community [47,48].
Over time, the need to construct water and sanitation systems has caused the emergence of several technological solutions, which needed to be appropriate to benefit each situation and community and to be properly designed for the identified sanitary problems [44]. Therefore, the feasibility of a water and sanitation system depends on several factors, such as: the number of people to be served; aspects related to the community (culture, beliefs, habits, etc.); local environmental conditions; available technology; technical requirements; human, materials, and financial resources [44].
Sociocultural
This is related to the benefits accrued by the population due to the provision of basic sanitation, aimed at understanding the technologies to be used and the way they impact the life and health of the communities involved. Universal access to water and sanitation services is still a challenge to be achieved; the Brazilian National Agency of Water and Basic Sanitation (ANA: https://www.gov.br/ana/pt-br, accessed on 30 September 2022) has to establish targets to achieve it, guaranteed by Brazil's New Sanitation Legal Framework: Federal Law No. 14.026/2020. For this service to be provided in an equitable way, popular participation, engagement, and awareness are essential factors, which involves communities influencing the creation, implementation, monitoring, and evaluation of public policies [49,50].
Social participation reverses the logic whereby the government plans and executes sanitation policies and the population merely receives these services. The population instead starts to demand the provision of basic sanitation, according to its needs and priorities, and to monitor its implementation [11,51]. From this perspective, the sociocultural component seeks to analyse the behaviour, interest, and involvement of the community in relation to water and sanitation, verifying the importance of understanding the relationship between the provision of infrastructure and its impact on health. Thus, as with the technological aspects, culture, beliefs and habits, as well as the economic aspects of the community, can affect the results of this analysis.
Epidemiological
Poor water and sanitation conditions promote the transmission of biological agents, which may be present in the secretions and excretions of sick individuals or carriers of infectious diseases. Studies show that places with low coverage of basic sanitation services have high incidence rates of diseases such as diarrhoea [52,53], cholera [54], hepatitis [55], intestinal parasitosis [56], and typhoid fever [57], among others. Pruss-Ustun and Corvalan [41] reviewed data from 145 low- and middle-income countries, focused on the prevention of diarrhoea. Their findings highlighted the importance of improving water and sanitation services for reducing the burden of disease in these contexts. The importance of decreasing the incidence rates of these diseases led the National Health Foundation (FUNASA) to classify them as Diseases Related to Inadequate Environmental Sanitation (DRSAI) [58], based on the classifications of disease proposed by Cairncross and Feachem [59] and Mara and Feachem [60].
The presence of these diseases makes visible the precariousness of local basic sanitation systems and whether they constitute a risk to the population, especially the poorest, who live in unhealthy conditions. This was shown in a study conducted between 2000 and 2010 [61], which aimed to understand the spatial behaviour of DRSAI throughout Brazil. The authors applied the Moran Index to measure the intensity of spatial autocorrelation between each Brazilian municipality and its neighbours, in order to relate four socioeconomic variables (Municipal Gross Domestic Product per capita and the percentages of people with access to water supply, sanitary sewerage, and rubbish collection) to average hospitalisation rates between 2000 and 2010. The study found a negative correlation between rates of hospitalisation and the provision of sanitation services [61].
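The Moran Index used in [61] measures global spatial autocorrelation of a variable across neighbouring areas. The sketch below implements the standard global Moran's I formula; the four-municipality adjacency matrix and coverage values are made-up illustrations, not the study's municipal data.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I:
    I = (n / W) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2
    where w is a (symmetric) spatial weights matrix and W = sum_ij w_ij."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()                      # deviations from the mean
    num = (w * np.outer(z, z)).sum()      # neighbour cross-products
    return (n / w.sum()) * num / (z ** 2).sum()

# Four hypothetical municipalities on a line, adjacency 1-2-3-4.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

coverage = [90, 85, 30, 25]  # % access to water supply (hypothetical)
print(round(morans_i(coverage, w), 3))
```

Here similar values cluster among neighbours, so I comes out positive; values near zero indicate spatial randomness, and negative values indicate that dissimilar values are neighbours.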
Mental Well-Being
Mental well-being is an important part of overall health [3]; inadequate sanitation can exert harmful effects on individual and community well-being, and thus it forms a key part of the HIA. The impacts of inadequate sanitation on mental health include suffering, anxiety, stress, and even depression. A study by Sclar et al. [39] used aspects related to mental health and well-being, such as dignity, privacy, shame, embarrassment, anxiety, fear, violence, and safety, to make a broader assessment of the impacts of sanitation on health. The authors concluded that lack of sanitation has a great influence on privacy and safety, aspects that affect anxiety, shame, and embarrassment, especially in women and girls, demonstrating that inadequate sanitation impacts mental health.
A study by Caruso et al. [62] in Sierra Leone, carried out with adolescent girls in eight public schools in two communities, assessed factors influencing hygiene during menstrual periods in schools. Although all schools had toilet facilities, their quality varied. The girls indicated that they did not like to use the school toilets because the facilities were inadequate and smelt. There was also a lack of privacy because toilets were not separated by gender. Aspects such as exposure of the body, lack of privacy, and violence were addressed. It was found that during the menstrual period, girls felt greater stigmatisation and marginalisation, irritation, anxiety, and distraction. There were also negative impacts on school attendance.
Economic
The economic component examines costs incurred due to the absence of sanitation infrastructure. Since 2000, the UN MDGs, followed by the SDGs, have been committed to increasing the proportion of the population with permanent and sustainable access to drinking water and sanitation. A document entitled Progress on household drinking water, sanitation and hygiene 2000-2020: five years into the SDGs, by WHO and the United Nations Children's Fund (UNICEF) [63], highlights that for every dollar spent on the provision of safe drinking water and of adequate sanitation, there is an average global return of USD 2.0 and USD 5.5, respectively, in terms of health improvement [64].
Evaluation of the impacts of poor sanitation was conducted by the Lixil Group Corporation in 2016 [65]. Four variables were used in their analysis: premature infant mortality, lost productivity, health expenditure, and value of time lost due to poor sanitation. This study found that there were four regions most affected: Asia and the Pacific, with a cost of USD 172.3 billion, followed by Latin America and the Caribbean (USD 22.2 billion), Africa (USD 19.3 billion) and Eastern Europe, the Middle East and the former USSR (USD 9 billion). The largest expenditure was on premature infant mortality, which accounted for around 50% of the costs in all regions.
From the questions raised in this study about the economic components of water supply, the indicators to be examined include water and energy consumption and spending on water, energy, and health. For sewage treatment, the energy consumed in collecting and treating sewage, the sewage tariff (included in the water tariff), health expenditure, and the loss of work or absenteeism from school due to sewage-related illness all need to be considered.
The Development of Indicators for Each Health Dimension
The indicators developed were mainly a combination of those identified by publications such as [5,38] as well as by the fieldwork strategy and expertise of the team as detailed in Section 2. As suggested by [34], many indicators were site-specific, in other words, they were developed "relevant to the proposal in question" (page 12). A WHO database of "Health Indicators" for Brazil (https://data.humdata.org/dataset/who-data-for-brazil?#, accessed on 30 September 2022) concentrated on the following: Disability-adjusted Life Years (per 100,000), distribution of years of life lost by major cause, adult mortality rate (15-60 years, per 1000), deaths per 1000 live births, causes of children's death <5 years (%), number of deaths. None of these indicators were relevant to the context of a school, requiring specific indicators to be developed as described below.
Indicators associated with the sanitary dimension focused on water quality and the analysis of specific determinands, which were undertaken during the site visit. The quality of water for human consumption should be monitored in the supply system, distribution networks, reservoirs, or surface sources; this includes operation and maintenance, including at the water treatment plant. Since 1990, the National Program for the Surveillance of the Quality of Water for Human Consumption has monitored the quality of drinking water, guided by Consolidation Ordinance GM/MS nº888 [66] and the National Guideline for the Sampling Plan of the Surveillance of the Quality of Water for Human Consumption [67], with samples collected and evaluated for the specific parameters identified in the indicators [68]. The frequency of this analysis was assessed by interaction with the school respondents, since the Standard of Water Potability recommends it is undertaken every 6 months. Food safety was also chosen, since the case study was carried out in a school, and it is important to have water of potable quality for the preparation of school meals. This was determined during the survey from the perceptions of the respondents.
The environmental indicators concentrated on the quantity of water provided, as the case study was in the Brazilian semiarid region, i.e., what quantity was delivered, how frequently, whether it was sufficient, and, if not, why. The frequency of extreme events was also important, such as periods without rainfall leading to drought. Food preparation can be affected by drought if there is insufficient water to prepare the food. Much of the information for these indicators was obtained during the school survey and from observations during the field visit. Climate information, such as rainfall at the research location, was obtained from Internet searches.
Indicators related to the technological dimension were selected to evaluate the existing water supply system at the school. The safest source of supply would be the public network [63], but in rural areas of Brazil there is generally no connection to the network; therefore, wells or water tankers may be used, which may not be safe. A further indicator was how water was stored and, if this was in a container, what material it was made of and how often it was cleaned. The best option would be tanks made of polyethylene, although many rural schools have rainwater reservoirs, which, if present, need to be maintained regularly to ensure cleanliness. The quality of potable supplies is cross-referenced with the chemical analyses conducted in the Sanitary dimension, as well as with maintenance of the supply infrastructure, such as changing the drinking fountain filter to ensure there is water of sufficient quality in the distribution system and at the point of consumption. Water quantity was related to assessments carried out in the Environmental dimension, including water consumption (gained from the perception of the respondents) and the number of distribution points (observed during the field visit).
In the sociocultural dimension, the proposed indicators covered habits and customs, such as the amount of water ingested daily, habits such as frequency of hand washing, and the washing of fruit and vegetables before eating. An assessment needed to be made of any awareness of the need for water rationing with educational sessions in the school about the importance of water, particularly in terms of hygiene and associated water-related disease. Much of this information was gained from the respondents during the field visit.
For the epidemiological dimension, the focus was on the symptoms and diagnosis of water-related diseases such as cholera, verminosis, typhoid fever, gastroenteritis, and leptospirosis. The symptoms were separated from the actual diseases, since a diagnosis has to be made by a qualified medical doctor, whereas the symptoms were canvassed from the school community and included incidences of diarrhoea, sickness, headache, etc., as listed below; toothache was also included, as it is related to oral hygiene, the incidence of caries, and tooth-brushing habits. In semi-arid areas, water from wells is likely to be brackish and, if used for drinking, may lead to hypertension, incidences of which needed to be assessed.
Mental health can be adversely affected by a lack of water [69], leading to difficulties with concentration and memory, in addition to physical and mental fatigue. It may also cause irritability, depression, and apathy, which can lead to absenteeism from work and school, potentially resulting in job losses; hence the importance of the indicators in this dimension. Thus, the mental well-being indicators reflected issues associated with the provision of an adequate supply of water and were related to the perceptions of the respondents.
Indicators for the economic dimension were identified due to their association with the cost of providing improvements to the water supply, and thus were structured around a before/after scenario for water, energy supply, and incidence of disease. Often, schools in rural areas of Brazil rely on water deliveries by tanker, which is expensive; if the implementation of a new system is cheaper, the school will save money. However, it is possible that other expenses would be incurred, such as energy, and thus the "after" scenario in the HIA would be able to indicate this. Reduction in expenses due to water-related disease, particularly in the event of hospitalisation, can be substantial, and can also be associated with lost income and livelihoods. Information related to water and energy expenditure could be obtained from the municipal education offices, through the analysis of water and energy bills, whereas health expenditure data can be obtained via the perceptions of the school community.
Each indicator received a grade from 0 to 1 in order to compute the average grade for each health dimension, and so produce the overall score for health. Physical and chemical parameters are graded one if they conform to legal standards and zero if they do not. For other indicators, different evaluation tools are used, depending on the nature of the indicator; for each tool an example is given below, but the tool is not limited to that example. In Tables 2 and 3, each indicator is related to its relevant evaluation tool (a, b, c, or d):
a. Measuring study: water samples will be collected at the main points of the water supply system.
b. Observational study: the behaviour of water and sanitation facilities will be evaluated and verified to determine whether there is significant damage to the structure of the system and whether the system is working properly.
c. Perception study: aspects of reality will be determined by applying questionnaires in order to raise objective and subjective data on the community affected by the absence of water and sanitation systems.
d. Survey study: the health data of the community will be surveyed, aimed at identifying the main symptoms and diseases.
Table 2. A summary of the components and indicators related to water supply used in the HIA showing impacts on health, where (a.) to (d.) relate to the evaluation tools given in Section 4.
Due to the different focus of the two kinds of interventions, indicators have been grouped separately for water supply and sanitation.
• Physical parameters of water quality (temperature, colour, and turbidity).
• Physical parameters for perception of water quality (taste, odour, colour); these may affect the approval of the water for human consumption.
• Chemical water-quality parameters (pH, total and free residual chlorine).
• Microbiological water-quality parameters (total coliforms and faecal coliforms, Escherichia coli). It should be noted that there are many types of biological agents. Some are important in the transformation processes of organic matter in biogeochemical cycles, but others are responsible for causing disease and generating health concerns. For this type of analysis, the most important micro-organisms are the coliform bacteria, which are associated with water-borne disease [69].
• Frequency of water analysis.
• Food safety: related to water quality in food preparation.
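The pass/fail (1 or 0) grading of determinands such as these against legal standards can be sketched as follows. The limit values and sample readings below are illustrative assumptions only, not the official limits of the Brazilian potability standard (Ordinance GM/MS nº888).

```python
# Sketch of 1/0 conformity grading of sanitary indicators against limits.
# All limits and readings here are illustrative assumptions.

def grade(value, conforms):
    """Return 1 if the reading conforms to the standard, else 0."""
    return 1 if conforms(value) else 0

# Hypothetical readings from a single sampling point.
sample = {"pH": 7.2, "free_chlorine_mg_L": 0.1, "e_coli_per_100mL": 0}

grades = {
    "pH": grade(sample["pH"], lambda v: 6.0 <= v <= 9.5),
    "free_chlorine_mg_L": grade(sample["free_chlorine_mg_L"], lambda v: v >= 0.2),
    "e_coli_per_100mL": grade(sample["e_coli_per_100mL"], lambda v: v == 0),
}

# The sanitary dimension's value is the average of its indicator grades.
sanitary_score = sum(grades.values()) / len(grades)
print(grades, round(sanitary_score, 2))
```

In this hypothetical sample the chlorine residual fails its assumed limit, so the sanitary dimension averages two conforming indicators out of three.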
Environmental
• Frequency of extreme events.
• Frequency of supply.
• Quantity of water/source supply.
• Water uses: how extreme events (drought or flooding) affect water uses.
• Impact of extreme events on food: related to water quantity during food preparation and to types of vegetables and fruits more resistant to extreme events such as drought.
Technological
As water supply is linked to the existing community supply system, the whole process needs to be monitored.
• Type of source (public network, well, water tanker).
• Type of storage (water tank, cistern).
• Material the storage container is made of (polyethylene, fibreglass, metal, asbestos, cement).
The indicators to assess water contamination include:
• Physical parameters of water quality (temperature, taste, odour, colour, turbidity, total solids (suspended and dissolved)).
• Frequency of water analysis.
• Food safety: related to water quality in food preparation.
Environmental
Due to extreme events (for example, drought or flood):
• Reduction in volume due to an extreme event.
• Impact on sewage collection.
• Impact on the sewage treatment system.
Technological
• Individual or collective solution for sewage (septic tank or collection network).
• Treatment system adopted (primary, secondary, or tertiary treatment of sewage).
• Final disposal adopted (drying bed, incineration of sewage sludge).
• Operation and maintenance (necessary precautions for the operation of the system, both for the collection network and the treatment system, the need to dispose of the sludge, care with gas leaks (e.g., from a biodigester), etc.).
• Loss of work or absence from school due to illness caused by sewage.
The dimensions and indicators for sanitation are summarised in Table 3.
Results of a Case Study Applying the HIA Methodology
The HIA methodology was applied in a case study based in a school located in the Brazilian north-eastern semi-arid region, where there is a deficiency in water supply. In this school, equipment donated by the Israeli government, which extracts water from humid air, was to be installed, thereby improving the quality of the water consumed in the school. The HIA methodology will be used to assess the health impact before and after the deployment of the equipment. Thus, health conditions were assessed before the deployment of the equipment to establish a baseline. The evaluation of the health impact after the deployment of the equipment will be carried out at the end of 2022.
Field data collection included 4 teachers/educational coordinators, 2 general service assistants (GSA: these are people who work in the kitchens and also have cleaning duties in the school) and 19 students.
In terms of the specific case study, water supply was included as applied to a sanitary intervention in a school, whose dimensions were: sanitary (water quality), environmental (water quantity), technological (the water supply system itself), epidemiological (impact on health due to water-related diseases), mental well-being (conditions due to lack of water), sociocultural (habits of the community), and economic (financial issues).
The weights of the dimensions were assigned from 1 to 3, based on the relevance of each dimension to the intervention carried out. The weightings were chosen based on observation, the perceptions of the individuals in the study, and the expertise of the team who carried out the survey. As this was in a school, the weightings were also site specific and included information gathered during the site visit. The sanitary, epidemiological, mental well-being, and technological dimensions were assigned a weight of three because they were central to sanitation and its impacts on health. The environmental dimension was assigned a weight of two because it refers to the amount of water, which also influences disease incidence. The sociocultural and economic dimensions were assigned a weight of one because, despite their importance, they are not directly related to water quality. The score for each indicator varied from 0 to 1, with 1 always representing the best health condition. Tables 4-9 summarise the results of the surveys carried out for each dimension and its indicators. Tables showing the calculations underlying the scores are provided in Supplementary Material S2.
In terms of the Economic dimension, during the site visit it was found that the average monthly energy consumption of the school was 421 kWh and that the school is supplied by a 5000-litre water truck every 15 to 20 days. However, it was not possible to score this dimension, since the school budget was not known, and therefore neither were the costs in relation to the budget. The Secretary of Education has been asked for information so that this dimension can be calculated. The calculation of the current overall HIA is a preliminary assessment; it is likely to change once a score for the Economic dimension can be included.
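A preliminary overall score of this kind can be computed by renormalising the weights over the dimensions that could be scored, omitting the unscored Economic dimension. The weights below follow the case study (three for sanitary, epidemiological, mental well-being, and technological; two for environmental; one for sociocultural and economic), but the dimension scores are hypothetical placeholders, not the study's actual values.

```python
# Weighted overall HIA score with the unscored Economic dimension omitted.
# Weights follow the case study; the dimension scores are hypothetical.
weights = {"sanitary": 3, "epidemiological": 3, "mental well-being": 3,
           "technological": 3, "environmental": 2, "sociocultural": 1,
           "economic": 1}
scores = {"sanitary": 0.6, "epidemiological": 0.7, "mental well-being": 0.3,
          "technological": 0.55, "environmental": 0.26, "sociocultural": 0.2}

# Only dimensions that could be scored contribute to the weighted average,
# so the weights are renormalised over the scored dimensions.
total_w = sum(weights[d] for d in scores)
overall = sum(scores[d] * weights[d] for d in scores) / total_w
print(round(overall, 3))
```

Once the Economic dimension can be scored, adding it back to the `scores` dictionary automatically restores its weight of one to the denominator.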
Food safety (as related to water quality in food preparation): the kitchen water sample was contaminated with total coliforms and was thus not suitable for food preparation (score 0). Average overall score = 0.39.
This overall result, placing the school at 46% on the health assessment, shows that there are improvements to be made, and therefore a high likelihood that the intervention will provide a major boost to this rate. Taking each dimension separately, however, illustrates the scale of their impacts on health, as shown in Table 10: the lowest scores were obtained by the sociocultural, mental well-being and environmental dimensions, and the highest by the epidemiological, sanitary, and technological dimensions. It must be noted that these scores are associated with some indicators which have been estimated, and thus may reflect a certain amount of imprecision in the health dimension of the study community. To contextualise these results, as stated by PAHO/WHO [5] (page 6): "Measuring dimensions of health in a population requires estimations, and therefore there is a certain degree of imprecision." "Every health indicator is an estimate (a measurement with some degree of imprecision) of a given health dimension in a target population." Additionally, as [29] (page 42) states: "HIA does not try to uncover absolute and incontrovertible truths."
Rainfall: annual rainfall index of 641.7 mm, with droughts lasting 6 to 8 months. The site is classified as a hot semi-arid climate with rainfall varying between 250 and 750 mm per year (Brazil, 2020). As the maximum rainfall in Brazil is 1800 mm per year, 641.7 mm is equivalent to 35.65%.
0.263
Frequency of water supply (4 teachers and 2 GSAs only, n = 6) 3 (50%) of respondents said that water arrives every day. This question was only asked of 4 teachers and 2 GSAs.
Water supply was not regular. Water in the well was brackish; the school was supplied by a water truck every 15 to 20 days.*2
Types of water use (n = 25; GSA n = 2)
12 (48%): water was used for drinking, washing hands, brushing teeth and flushing toilets; it was also used in the kitchen. 2 (100%): toilet cleaning occurred daily using water and cleaning materials (GSA). Average overall score 0.48.
*1 Documentary evidence of rainfall indices from the area confirmed this. *2 This only considered information from the school community and the percentage of people who said that there were daily water deliveries, which is considered to be the best situation for health. However, the site visit found that the water tanker supplied the school only every 15 to 20 days. There was a cistern on the ground and a raised water tank.
0.92*2
Maintenance of the water supply system (teachers only; n = 4): 4 (100%) said the water tank had a lid. The supported cistern had a lid, but the water tank was uncovered.*3 0.5
The two boxes were made of concrete, with the cistern painted white.*4 0.505
Water distribution points (n = 25)
3 (12%): water was distributed via a drinking fountain, hand washing sink, the shower, toilet, kitchen sink and tank.
The places where water was delivered were verified, i.e., the drinking fountain, sinks, shower, toilet, kitchen sink and tank. However, there was no filter on the kitchen tap.
0.56
Water consumption points (n = 25): 14 (56%): drinking fountain; 6 (24%): kitchen tap with filter.*5 0.80
Treatment, GSA only (n = 2): 2 (100%): washing was performed whenever the candle was dirty, or every month. 1
Average overall score 0.64
*1 Average of the sum of sources and observations; *2 average of all perceptions and observations; *3 scored zero since the tank had no lid (a score of 0 reflects hygiene of the water tank and 1 reflects treatment, as the filter candle was frequently changed); *4 all answers were added together because the reservoirs were concrete, one of which was painted white; *5 there was both a water dispenser and a kitchen tap, so the two responses were added together.
Discussion
This paper presents the details of constructing a HIA methodology based on sanitation actions and their potential to impact health, specifically in a Brazilian context, and applied to the situation of a school in the north-east of the country, where sanitation is inadequate. The purpose of developing the HIA methodology was to assess any improvements in the health of a population as a result of the implementation of a water supply or sanitation programme, in order to address inadequate sanitation, which "reduces human well-being, social and economic development" (WHO [11]). This relationship, although seemingly obvious, has not been sufficiently substantiated until now. To this end, the methodology took as its starting point the concept of health established by PAHO/WHO (2018) [5] for its view of health in different dimensions. Whilst HIAs have been used to assess other impacts (e.g., transport [70]; climate change [71]) and in other contexts [35], this is the first time such an approach has been taken in Brazil and the first time health has been considered in terms of seven dimensions, providing a more comprehensive approach than is usually taken.
The indicators assigned to each dimension allowed an assessment to be made of the extent to which health benefits from various water supply or sanitation programmes could be identified, monitored, evaluated, noted, and acted upon. It should be noted that these indicators were identified based on a provisional conceptual and operational understanding formed during the preliminary field visit, and thus will be subject to modification and improvement with subsequent visits. This reinforces the need for constant feedback throughout the development and evaluation process, in which case studies are fundamental to the development of the strategy.
Harris-Roxas et al. [34] evaluated the possibilities of HIA from the perspectives of its strengths, weaknesses, opportunities, and threats. In terms of the strengths of the proposed HIA, it provides an instrument that government agencies can use to evaluate public sanitation and environmental policies, programmes, and actions. In the Brazilian context, Brazil's environmental impact assessment (EIA) also contains an evaluation of health outcomes, but in terms of the production of disease only. Whilst the proposed HIA includes epidemiology, it goes beyond this single dimension, as considered in the Brazilian EIA, and provides an assessment of a further six, enabling a comprehensive review of the circumstances both before and after the introduction of a sanitation action or programme. It also provides the opportunity to identify specific dimensions which require extra attention. In this case, as shown in Table 10, the sociocultural, mental well-being, and environmental dimensions would need more effort than the epidemiological, sanitary, and technological dimensions. These lower health scores represent the urgency of any intervention, whereas the higher scores indicate that the community is able to manage adequate water supplies and sanitation and that improvement is less of a priority. With the lack of other studies using HIA to assess the efficacy of water and sanitation interventions, it is not easy to contextualise an overall score of 46%. However, as this reflects the current situation, before the application of an intervention to improve drinking water quality, the outcome indicates substantial room for improvement. It is difficult to predict what the overall score could be, or the effect on individual dimensions, at the follow-up visits to assess the sanitation intervention.
However, the scores should change to reflect any positive or negative impacts and will give an overview, particularly on health, of the effectiveness of the intervention in improving health and well-being. Any new intervention, even at the same site, but certainly at different ones, would require a further baseline to be established in order to effectively assess its impacts on the health of the community.
It is important to establish the specific parameters which determine the focus of the HIA [34]. In this specific case, it was to produce a broad HIA which included different dimensions, not only epidemiology, and which has the potential to be applicable in other countries with similar issues around inadequate sanitation and health impacts. A further strength lies in the ability to target decision-making that delivers value for money. At many levels, national and international, and at the individual school level, financial resources are scarce; thus the benefit of using a comprehensive HIA is the ability to select appropriate water supply and sanitation programmes in the context of their health benefits, as well as to obtain the necessary support from both the social and political sectors. This methodology can additionally support the process of environmental surveillance, providing evidence of negative impacts, not only on health but also on the surrounding environment and hence on quality of life.
In terms of weaknesses, Harris-Roxas et al. [34] highlight the complexity around scoring different types of impact; the current study utilised perceptions of the school community and observation as the basis of the study (Section 2). It also illustrates the methodology and the preliminary investigation before the intervention is in use. In a follow-up, once the sanitary action is in use, the HIA will be applied once more. A further weakness in rolling out such a methodology lies in the financial and human resources required, since Brazilian municipalities are short of both, particularly trained technicians. Therefore, if HIA became public policy whereby each municipality had to undertake sanitation work, in all likelihood the Ministry of Health would be in charge of the work. However, as shown by Ferreira et al. [26], the increased efficiency of Brazil's sanitation system could go some way towards offsetting the costs of sanitation actions by monetising the reductions in hospitalisation due to water- and sanitation-related diseases. Thus, the number of cases not requiring hospital treatment could be as much as 157 thousand per BRL 100 million invested in sanitation, with a potential 26 thousand per BRL 100 million invested in drinking water supplies. Such substantial reductions in hospital admissions could be achieved with relatively little financial outlay to improve the provision of adequate sanitation and water supplies.
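As a sanity check on the scale of these figures, the two headline numbers cited from Ferreira et al. [26] imply a cost per avoided hospitalisation; the sketch below is purely illustrative arithmetic, not an analysis from the study.

```python
# Back-of-envelope arithmetic on the figures cited from Ferreira et al. [26]:
# avoided hospitalisations per BRL 100 million invested, and the implied
# investment per avoided hospitalisation.

AVOIDED_PER_100M_SANITATION = 157_000  # hospitalisations avoided per BRL 100 million
AVOIDED_PER_100M_WATER = 26_000

def cost_per_avoided_case(avoided_per_100m):
    """BRL of investment per hospitalisation avoided."""
    return 100_000_000 / avoided_per_100m

print(round(cost_per_avoided_case(AVOIDED_PER_100M_SANITATION)))  # -> 637
print(round(cost_per_avoided_case(AVOIDED_PER_100M_WATER)))       # -> 3846
```

On these cited figures, each avoided hospitalisation corresponds to an investment of only a few hundred to a few thousand BRL, which is the point the paragraph makes about relatively small financial outlay.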
Opportunities of the HIA revolve around its flexibility in that it provides the opportunity to undertake an assessment of the seven dimensions and can identify areas in which more urgent action is needed. It is, therefore, an opportunity to evaluate the situation before and after the application of a sanitation action and make a judgement of the best approach to take once the weaker dimensions have been identified. The method also enables engagement of the community in discussions around their perceptions of the situation, enabling contextualisation and the employment of a more site-specific strategy.
The final perspective of [34] is that of threat, and in this case, it is the fact that government organisations do not necessarily engage with one another; for example, the health sector is not involved with the planning of other sectors, such as provision of sanitation. This HIA methodology covers many government sectors and organisations, and thus encourages dialogue between them to support joint and integrated planning efforts.
Once the overall health score has been defined, the weights assigned to each of the health dimensions depending on the type of intervention can be used to define the work to be carried out. Another application is the verification of the health impact caused by the intervention, in order to support and defend its implementation for legislators, politicians, and society at large. This, therefore, has the potential to be a valuable tool to support decision making with regard to investment in water supply and sanitation programmes by providing a comprehensive set of indicators related to health dimensions in terms of sanitation actions. This firstly assesses any issues with regard to sanitation provision in a specific context, and secondly provides a numerical measure for any improvements gained by managing inadequate sanitation.
The limitations of the study include the fact that it presents the development of the methodology, and the case study illustrating its use is preliminary, providing information to establish a baseline only, i.e., before an intervention is applied to improve a situation where sanitation is inadequate. The Economic dimension has not been addressed due to a lack of information on the school budget; however, this will be accessed during further field visits. A further limitation is that the methodology has been applied in only one school; however, two more evaluations in the same school are planned, with testing of the methodology followed up in three more schools, in other states, for which permission has been granted. This will enable the assessment of any improvement in individual dimension scores, as well as the overall HIA score. Currently, a further limitation is that the methodology has only been used in the Brazilian context, but once the HIA has been trialled, it will be assessed for its utility in other countries with similar issues of lack (or inadequate provision) of sanitation.
Conclusions
The rationale for proposing a methodology for an integrated HIA focused on Brazil is that, for many years, the effectiveness of public policies related to water and sanitation has been measured by counting the population benefited or the financial investment made, or even through the efficiency of the system itself. The greatest impact of improved water and sanitation infrastructure, however, is on the improvement in the population's health, not simply on the absence of disease. This paper therefore proposes a methodology that can be used as a tool to discuss with those who will benefit the type of water and sanitation intervention to be used, and then to measure the consequences of the intervention itself on health, while also enabling policymakers and legislators to engage with the process. The proposed methodology has only been used in Brazil thus far, but it has the potential to be used in other developing countries with similar issues around inadequate sanitation and its impacts on health.
Effect of Benzothiadiazole-Based π-Spacers on Fine-Tuning of Optoelectronic Properties of Oligothiophene-Core Donor Materials for Efficient Organic Solar Cells: A DFT Study
In this work, five novel A-π-D-π-A type molecules D1–D5 were designed by adding unusual benzothiadiazole derivatives as π-spacer blocks to the efficient reference molecule DRCN5T for application as donor materials in organic solar cells (OSCs). Based on a density functional theory approach, a comprehensive theoretical study was performed with different functionals (B3LYP, B3LYP-GD3, B3LYP-GD3BJ, CAM-B3LYP, M06, M062X, and wB97XD) and with different solvent types (PCM and SMD) at the extended basis set 6-311+g(d,p) to evaluate the structural, optoelectronic, and intramolecular charge transfer properties of these molecules. The B3LYP-GD3BJ hybrid functional was used to optimize the studied molecules in CHCl3 solution with the SMD model solvent as it provided the best results compared to experimental data. Transition density matrix maps were simulated to examine the hole–electron localization and the electronic excitation processes in the excited state, and photovoltaic parameters including open-circuit photovoltage and fill factor were investigated to predict the efficiency of these materials. All the designed materials showed promising optoelectronic and photovoltaic characteristics, and for most of them, a red shift. Out of the proposed molecules, [1,2,5]thiadiazolo[3,4-d]pyridazine was selected as a promising π-spacer block to evaluate its interaction with PC61BM in a composite to understand the charge transfer between the donor and acceptor subparts. Overall, this study showed that adding π-spacer building blocks to the molecular structure is undoubtedly a potential strategy to further enhance the performance of donor materials for OSC applications.
INTRODUCTION
Nowadays, the demand for energy is increasing significantly due to human activity.1,2 Traditional energy resources such as fossil fuels or nuclear energy are criticized because of the pollution generated during production and the danger they pose to the environment.3 Therefore, renewable sources of energy have emerged as a promising alternative in the last two decades thanks to several benefits on the social, environmental and economic sides, as they are inexhaustible, reliable and nontoxic, with the potential to improve public health and partially mitigate global warming.4,5 Solar energy is one of the most promising renewable alternatives to fossil and nuclear energies due to its great potential to respond to the planet's energy needs.6 Photovoltaic technology has thus grown and developed in recent years.−12 OSCs also offer promising manufacturing characteristics over their inorganic counterparts due to roll-to-roll (R2R) processing based on flexible substrate technology.13 R2R fabrication affords various benefits such as increased efficiency, multiple sequential processing steps, high production yields, and reduced manufacturing costs.14 Recently, OSCs based on a bulk heterojunction (BHJ) architecture have gained great attention. In BHJ structures, the wide extension of the contact between the donor and acceptor materials leads to a considerable increase in exciton dissociation and therefore in the power conversion efficiency (PCE) of OSCs.15,16 Advances in BHJ OSCs have been linked to the development of new organic materials with tremendous semiconducting and electro-optical properties. Intense research has been conducted in the search for efficient building blocks for the synthesis of new polymers and small molecules (SMs) exhibiting excellent properties for OPV applications.17 Polymers have long been favored materials in the field of organic solar cells, primarily due to their advantageous characteristics.
18 Their inherent structural flexibility, ease of processing, and tunable optoelectronic properties have made them compelling choices for the design and fabrication of efficient organic photovoltaic devices.19 Over the years, significant progress has been made in improving the energy conversion efficiency of polymer-based solar cells, making them a major player in the field of renewable energy. However, recent advancements in materials science have unveiled a compelling alternative: namely, small organic molecules. Compared to polymers, the interest in studying SMs stems from their well-defined molecular structure, higher purity, and simple synthetic procedures. Recently, small-molecule donor systems have successfully enabled high PCE in OPV devices.20−23 Particularly, the linear conjugated architecture of the A-π-D-π-A type has been widely used as an effective molecular design in which the central core of an electron-rich donor block (D) is covalently linked with two electron-deficient terminal acceptor blocks (A) through two π-conjugated bridges. This structure efficiently reduces the band gap energy, tunes the optoelectronic properties, and enhances the intramolecular charge transfer (ICT) among the different moieties of the donor material.24,25 In this contribution, we designed and characterized five A-π-D-π-A SMs based on the A-D-A reference structure. The reference DRCN5T chosen in this work is characterized by high photovoltaic performance and is composed of an oligothiophene core donor block and two 2-(3-ethyl-4-oxothiazolidin-2-ylidene)malononitrile end-capped acceptor blocks.26,27 The designed materials are push−pull systems, where the alternating arrangement of electron-deficient and electron-rich blocks along the conjugated framework efficiently extends the electron delocalization, enhances the light-harvesting abilities, and improves charge dissociation for efficient OPV applications.
28,29 The selected π-spacer for the new materials design is 2,1,3-benzothiadiazole (BT), which is one of the most popular fused heterocyclic building blocks for organic electronics thanks to its outstanding optoelectronic properties.30 As depicted in Figure 1, the designed materials, labeled D1−D5, contain different BT derivatives in which carbon atoms are replaced by electron-donating groups such as nitrogen atoms, or electron-withdrawing groups such as fluorine or cyano groups.31 Detailed computational investigations were carried out to evaluate the influence of the introduction of such modifications on the structural, electronic, and optical properties of donor materials for efficient OSC devices.
COMPUTATIONAL METHODOLOGY
Gaussian16 software32 was used to perform the theoretical calculations using the density functional theory (DFT) and time-dependent density functional theory (TDDFT) approaches.33,34 We started by choosing the appropriate functional to reproduce the experimental data of the reference molecule (DRCN5T). The ground-state optimization of DRCN5T was performed using different exchange−correlation (XC) functionals, such as the Becke-3-Lee−Yang−Parr (B3LYP) XC functional, B3LYP coupled with Grimme's D3 atomic pairwise dispersion correction (B3LYP-GD3) to estimate noncovalent interactions, B3LYP-GD3 combined with Becke−Johnson (BJ) damping (B3LYP-GD3BJ),35,36 B3LYP combined with the Coulomb-attenuating method (CAM-B3LYP),37 the M06-class functionals M06 and M062X,38 and the long-range-corrected functional wB97XD.39 In all cases, the extended basis set 6-311+g(d,p), including polarization and diffuse functions, was used during the molecular optimization.40 For solvent effects, a chloroform solution (CHCl3) was chosen with the solvation model density (SMD)41 or the polarizable continuum model (PCM).42 The results in Table 1 show that B3LYP-GD3BJ with the SMD solvent model is the optimal functional, reproducing the experimental data sufficiently well. Compared with the highest occupied molecular orbital (HOMO) value, the error in the calculated lowest unoccupied molecular orbital (LUMO) value relative to experiment is noticeably larger, due to the generally greater difficulty of calculating unoccupied orbitals.43 Accordingly, the B3LYP-GD3BJ/6-311+g(d,p) level of theory in CHCl3 with the SMD solvent model (SMD-CHCl3) was selected for the optimization of all designed structures.
Next, based on the optimized ground-state structure of DRCN5T at the DFT/B3LYP/6-311+g(d,p) level, TD-DFT calculations were performed using different functionals to theoretically investigate the absorption properties of the reference molecule. Figure 2 presents the simulated spectra, which show maximum absorption wavelengths λmax of 770, 770, 770, 587, 717, 588, and 560 nm using B3LYP, B3LYP-GD3, B3LYP-GD3BJ, CAM-B3LYP, M06, M062X, and wB97XD, respectively. Experimentally, the λmax of DRCN5T is found at 531 nm. From Figure 2, the TD-DFT computations based on λmax values suggest that wB97XD is the functional that most accurately reproduces the experimental absorption data. Thus, the optical properties of the studied molecules in SMD-CHCl3 solution were computed at the TD-DFT/wB97XD/6-311+g(d,p) level of theory.
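The functional-selection step above amounts to picking the functional whose simulated λmax lies closest to the experimental value; a minimal sketch using the λmax values reported in the text:

```python
# Select the TD-DFT functional whose simulated absorption maximum is
# closest to experiment, using the lambda_max values (nm) reported
# for DRCN5T (experimental lambda_max = 531 nm).

lambda_max = {
    "B3LYP": 770, "B3LYP-GD3": 770, "B3LYP-GD3BJ": 770,
    "CAM-B3LYP": 587, "M06": 717, "M062X": 588, "wB97XD": 560,
}
EXPERIMENTAL = 531  # nm

best = min(lambda_max, key=lambda f: abs(lambda_max[f] - EXPERIMENTAL))
print(best, abs(lambda_max[best] - EXPERIMENTAL))  # -> wB97XD 29
```

With these numbers the smallest deviation (29 nm) indeed belongs to wB97XD, consistent with the choice made in the text.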
Subsequently, we calculated the hole mobility, which is a crucial parameter of donor materials in organic solar cells. Reorganization energies were calculated from the neutral and cationic states. To determine the hole transfer integrals, we used the M062X functional to optimize adjacent molecular pairs and obtain the optimal π-stacking distance. Finally, the donor/acceptor complex, necessary for the evaluation of charge transfer in the organic photovoltaic active layer, was constructed manually from the selected donor molecule and PC61BM as the acceptor molecule. During manual placement, the starting configuration allowed the closest proximity of the molecules while avoiding any steric conflict. Subsequently, we optimized the donor/PC61BM complex using the same method as previously employed for molecular optimization (DFT/B3LYP-GD3BJ) to ensure an accurate representation of its structure in the simulations.
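Hole mobility from reorganization energies and transfer integrals is conventionally estimated via Marcus hopping theory. The paper does not spell out the working equation here, so the sketch below uses the standard Marcus rate expression with hypothetical λ and t values, not results from the study.

```python
import math

# Marcus-theory hopping rate, the standard route from the reorganisation
# energy (lam) and transfer integral (t) to a charge hopping rate:
#   k = (t^2 / hbar) * sqrt(pi / (lam * kB * T)) * exp(-lam / (4 * kB * T))
# The lam and t values below are hypothetical placeholders.

HBAR = 6.582119569e-16  # reduced Planck constant, eV*s
KB = 8.617333262e-5     # Boltzmann constant, eV/K

def marcus_rate(t, lam, temp=298.0):
    """Hopping rate k (1/s) for transfer integral t and
    reorganisation energy lam, both in eV, at temperature temp (K)."""
    return ((t**2 / HBAR)
            * math.sqrt(math.pi / (lam * KB * temp))
            * math.exp(-lam / (4.0 * KB * temp)))

k = marcus_rate(t=0.05, lam=0.30)  # order of 1e12 1/s for these inputs
print(f"{k:.2e} 1/s")
```

Smaller reorganization energies and larger transfer integrals both raise the rate, which is why planar, tightly π-stacked donors like those designed here are expected to favor hole transport.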
RESULTS AND DISCUSSION
3.1. Optimized Structure. Planarity of conjugated materials for OSC applications is necessary as it promotes intramolecular π-orbital overlap and improves π−π stacking, leading to efficient intermolecular interactions.44 To ascertain this, ground-state optimizations were performed at B3LYP-GD3BJ/6-311+g(d,p) in chloroform solution. The optimized structures, which exhibit a large degree of planarity, are depicted in Figure 3, and the relevant parameters are tabulated in Table 2. The bridge bond length between the π-spacer and the acceptor (lb1), between the donor and acceptor in R, and the bridge bond between the donor and the π-spacer (lb2) were calculated to gain insight into the electronic interaction strength within the conjugated framework. The calculated bond lengths fall in the range of 1.42−1.44 Å, i.e., between the typical C−C single bond length (1.54 Å) and the C=C double bond length (1.33 Å).
The short length of these bonds indicates large delocalization of π-electrons in these structures, which leads to further intramolecular charge transfer (ICT).45 To ascertain the impact of the π-spacer on the overall planarity of the π-conjugated frameworks, the molecular planarity parameter (MPP) and the span of deviation from plane (SDP) were calculated using Multiwfn,46 and the corresponding structures were plotted using VMD.47 The MPP provides an estimate of the deviation of the whole structure from the plane, while the SDP is an indicator of the deviation of different blocks of the structure from planarity.48 The low values of MPP and SDP, approximately 0.6 and 3.5 Å respectively, denote large planarity of the structures and low deviation from the fitted plane. A schematic representation of the structures' deviation is illustrated in Figure 4, with blue/red indicating deviation above/below the plane, respectively.
The extension of the conjugated framework and the creation of relevant noncovalent interactions (NCIs) by adding the π-spacer increase the planarity of the molecular structure. According to Table 2, molecule D2 exhibits the lowest MPP value of 0.50, indicating its superior planarity, which is better than that of the reference molecule. In contrast, the D4 π-spacer increases the MPP slightly to 0.64, which is consistent with the largest deviation from planarity among the considered molecules.
The low MPP is likely due to a large conjugated framework and the strong NCIs between the different building blocks.
Analyzing the SDP, we notice the out-of-plane deviation of the π-bridge within the D4 and D5 structures, which might be explained by steric hindrance49 generated by the presence of the electron-withdrawing groups (−CN and −F). In general, however, this validates that the added π-spacer enhances the planarity of the structure, with the small exception of the D4 molecule.
Noncovalent Interaction and Reduced Electron Density Gradient Analysis.
To further investigate the designed structures, we evaluated the NCIs together with the reduced density gradient (RDG). The NCI-RDG analyses are useful tools for gaining insight into the intermolecular interactions, the repulsive interactions, and the nonlocalized dispersion within the reacting moieties. The RDG is generated from the electron density (ρ) and its gradient as RDG(r) = |∇ρ(r)| / [2(3π²)^(1/3) ρ(r)^(4/3)].50 The RDG scatter graph plots RDG versus sign(λ2)ρ, where λ2 is the second eigenvalue of the electron-density Hessian, whose sign is useful to discern the nature of the nonbonding interaction, while ρ provides information about the strength of these interactions.51 The graphical illustration of the isosurfaces and their respective RDG scatter plots for D1−D5 and R was performed using Multiwfn and is shown in Figure 5. The value and sign of sign(λ2)ρ are interpreted as follows: sign(λ2)ρ > 0 defines repulsive interaction from steric effects, present in aromatic rings and nonbonding interactions; sign(λ2)ρ < 0 refers to attractive hydrogen-bonding interaction; and sign(λ2)ρ around zero corresponds to weak van der Waals interaction.
52 The vertical color code of the RDG scatter spectra, ranging from −0.025 to 0.025 au, presents the λ2(r) values. The red spikes in the RDG scatter plots between 0.01 and 0.05 au manifest themselves inside the rings of the oligothiophene-centered core, the π-spacer, and the acceptor moieties, as shown in the gradient isosurfaces (Figure 5). The blue spikes are weak, confirming the absence of intermolecular hydrogen-bonding interactions. The green and mixed red-green spikes observed between −0.02 and 0.01 au indicate the presence of noncovalent interaction between the constructive fragments. As seen from the isosurfaces, the introduction of the π-spacer moiety results in relevant intramolecular noncovalent interactions between its ending groups, i.e., hydrogen and nitrogen atoms, the cyano group, and fluorine atoms, and the sulfur atoms on the adjacent oligothiophene and acceptor blocks. This "conformational lock" thus leads to enhanced planarity.53,54 The designed materials have shown high planarity generated by the NCIs, which makes them rigid and stable. This proper planarity promotes molecular π−π stacking in the active layer of OSCs.
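A minimal numeric sketch of the RDG quantity underlying these plots, RDG = |∇ρ| / [2(3π²)^(1/3) ρ^(4/3)], evaluated on an analytic hydrogen-like density; this is an illustrative test case, not the molecular densities used in the paper.

```python
import math

# Reduced density gradient: RDG = |grad rho| / (2 (3 pi^2)^(1/3) rho^(4/3)).
# Evaluated for a hydrogen-like density rho(r) = exp(-2r)/pi (atomic units),
# for which |grad rho| = 2*rho, so the RDG has a simple closed form.

PREFACTOR = 2.0 * (3.0 * math.pi**2) ** (1.0 / 3.0)

def rdg(rho, grad_rho_norm):
    """RDG at a point with density rho and gradient magnitude grad_rho_norm."""
    return grad_rho_norm / (PREFACTOR * rho ** (4.0 / 3.0))

def hydrogen_rdg(r):
    """RDG of the hydrogen-like density at radius r (a.u.)."""
    rho = math.exp(-2.0 * r) / math.pi
    return rdg(rho, 2.0 * rho)

for r in (0.5, 1.0, 2.0):
    print(r, round(hydrogen_rdg(r), 4))
```

The RDG grows with radius for this exponential density (low-density tails give large RDG), which is why NCI analysis looks for *low*-RDG regions at low density as the signature of noncovalent contacts.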
Frontier Molecular Orbitals.
The frontier molecular orbitals (FMOs) of conjugated materials are described by the highest occupied and lowest unoccupied molecular orbitals. FMO analysis is a helpful tool to examine electronic properties and anticipate the optical behavior as well as the ICT within the conjugated backbone.55 The HOMO/LUMO charge distributions of the optimized ground-state geometries were computed at the DFT/B3LYP-GD3BJ/6-311+g(d,p) level in SMD-CHCl3 solution and are depicted in Figure 6. The HOMO, LUMO, and band gap energies are presented in Figure 7 and listed in Table 3.
As seen in Figure 6, all of the designed molecules exhibit similar electron density distributions of the HOMO and LUMO states. The HOMOs of R and D1−D5 are characterized by a broad distribution of electron density mainly over the donor moiety. In contrast, the LUMOs are predominantly concentrated over the π-spacer and acceptor moieties. These distributions clearly demonstrate the transport of electrons from the donor over the HOMO to the acceptor over the LUMO through the π-spacer. The electron-withdrawing nature of the added π-spacer fragments shows its effectiveness for electron migration from the donor to the acceptor.56 OSCs require absorber materials with reduced band gaps to maximize their photovoltaic performance.57 From Table 3, we note that the designed materials possess smaller band gap energies (Eg) in comparison to R. As seen in Figure 7, the results show a decreasing order of the band gap energies as follows: 1.97 eV (R) > 1.58 eV (D5) > 1.57 eV (D1) > 1.48 eV (D2) > 1.47 eV (D4) > 1.33 eV (D3). The fluorine atoms in D5, the cyano group in D4, and the pyridine and pyridazine aromatic rings in D2 and D3 exhibit a larger influence on tuning the electronic properties as compared to the bare benzothiadiazole moiety in D1. This is related to the high delocalization of electrons and the increased push−pull mechanism.58 D3 exhibits the lowest band gap (1.33 eV), resulting from higher planarity and the strong electron-withdrawing behavior of the pyridazine moiety.59 These results denote the promising abilities of D1−D5 in OSC applications.
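The band-gap ordering quoted above can be reproduced directly from the reported Eg values:

```python
# Band gap ordering from the reported E_g = E_LUMO - E_HOMO values (eV)
# for the reference R and the designed molecules D1-D5.

e_gap = {"R": 1.97, "D1": 1.57, "D2": 1.48, "D3": 1.33, "D4": 1.47, "D5": 1.58}

ordered = sorted(e_gap, key=e_gap.get, reverse=True)
print(" > ".join(ordered))  # -> R > D5 > D1 > D2 > D4 > D3
```

Sorting in descending Eg recovers exactly the sequence stated in the text, with D3 (the pyridazine-containing spacer) at the low-gap end.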
Density of States.
The density of states (DOS) provides explicit details on charge occupation and possible electronic excitations over the various energy levels[60] and helps quantify the contribution of different molecular fragments to the formation of the FMOs. Specifically, in Figure 8 we plot both the total DOS (tDOS) of all electrons in the systems and the partial DOS (pDOS) projected onto the three distinct building blocks of the considered molecules (donor, acceptor, and π-spacer) described in Figure 1. The DOS of R and D1−D5, calculated at the B3LYP-GD3BJ/6-311+g(d,p) level of theory, is depicted in Figure 8 and summarized in Table 4. The shape of the tDOS reflects the distribution of electronic energy levels within the molecule: sharp peaks correspond to localized electronic states, while smooth curves indicate a delocalized electronic structure. The studied systems show the latter, i.e., broad tDOS curves that indicate significant delocalization. In the tDOS curves, the HOMO and LUMO energy levels are easily identifiable. The HOMO levels lie at about −5.0 eV, where distinct peaks in the electron density of states are observed. The LUMOs, on the other hand, appear at around −3.0 eV, representing the lowest level in the higher-energy region. The HOMO/LUMO energy gap, approximately 1.9 eV for R and 1.5 eV for D1−D5, is a crucial determinant of the electronic behavior of the molecule, indicating the energy required for electronic transitions.
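Broadened tDOS curves of the kind shown in Figure 8 are conventionally generated by convolving the discrete orbital spectrum with Gaussians. A minimal sketch (the orbital energies below are illustrative placeholders, not values from the paper, and the 0.2 eV broadening width is an assumption):

```python
import numpy as np

def broadened_dos(orbital_energies, grid, sigma=0.2):
    """Total DOS: sum of unit-area Gaussians (width sigma, eV) centred on each level."""
    e = np.asarray(orbital_energies, dtype=float)[:, None]
    g = np.asarray(grid, dtype=float)[None, :]
    return (np.exp(-((g - e) ** 2) / (2 * sigma ** 2))
            / (sigma * np.sqrt(2 * np.pi))).sum(axis=0)

# Illustrative levels with a HOMO near -5.0 eV and a LUMO near -3.0 eV (cf. Figure 8).
levels = [-6.1, -5.4, -5.0, -3.0, -2.4]
grid = np.linspace(-8.0, 0.0, 801)
dos = broadened_dos(levels, grid)
```

Integrating the resulting curve over energy recovers the number of states, which is a convenient consistency check on any plotted DOS.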
The tDOS plots are decomposed into three pDOS contributions for the donor (red), acceptor (blue), and π-spacer (green) moieties of the considered materials, with the band gap clearly visible between the HOMO edge peak on the left and the LUMO level on the right. Strong push−pull interactions between the different fragments appear as increased relative peak intensities, which enhance the electron density and the electronic transition probability.[61] Fragments that contribute strongly to HOMO/LUMO formation exhibit a larger electron density, seen as large peaks in the DOS plot around the HOMO/LUMO region. These plots clearly show the powerful electron-withdrawing nature of the π-spacer fragments, which leads to an alternating distribution arrangement around the HOMO and LUMO levels.
From Table 4, for R, we find a significant 85% contribution of the donor to the HOMO, with the acceptor providing only a minor 15% contribution. The contributions of donor and acceptor to the LUMO are, conversely, equal. These results demonstrate partial electron-density migration from the donor core block to the end-capping acceptor moieties. The added π-spacer changes the electron-density distribution of the molecular orbitals and leads to significant electron delocalization and a large charge transfer from the donor to the acceptor moieties. Hence, for all of the studied molecules, the HOMO levels are raised mainly by the influence of the donor. In contrast, the rise in the LUMO levels results from the higher percentage contributions of the acceptors and bridges. We find that the addition of the π-spacer blocks does not alter the donor contribution to the HOMO densities. However, the donor contribution to the LUMOs decreases considerably (by more than half): from 50% in R to 20−30% for the modified molecules, with the smallest contribution noted for D2. Because the π-spacers exhibit large conjugation and a high π−π* transition probability, the charge conductivity is higher within the conjugated framework. For all the designed materials, the π-spacer blocks contribute only around 8% to the HOMOs, while they contribute above 50% to the LUMOs. This simultaneously decreases the acceptor contribution to the HOMO to 3%. The above discussion demonstrates the role played by the added π-spacer blocks in improving charge transfer from the core donor to the acceptor moieties.
Optical Properties.
To estimate the optical properties of the studied molecules, TD-DFT was used as a cost-effective method.[62] The optical absorption spectra were simulated together with the corresponding oscillator strengths at the TD-DFT/wB97XD/6-311+g(d,p) level in SMD-CHCl3 solution and are illustrated in Figure 9. The calculated excitation energy (E_ex), maximum absorption wavelength (λ_max), oscillator strength (f), main transitions, full width at half-maximum (fwhm), and light-harvesting efficiency (η_λ) are tabulated in Table 5.
As shown in Figure 9a, the molecules under investigation exhibit broad absorption spectra that cover a significant portion of the visible region, with a notable red-shift compared to R. From Table 5, the maximum absorption wavelengths (λ_max) are 560, 605, 649, 651, 629, and 586 nm for R and D1−D5, respectively, in good agreement with the observed trend in E_gap (Table 3). These λ_max values correspond to π → π* electronic transitions involving electron migration from the HOMOs, mainly located over the oligothiophene donor unit, to the LUMOs, mainly distributed over the end-capped acceptor and π-spacer moieties.
The λ_max values of the designed compounds D1−D5 are red-shifted by 45, 89, 91, 69, and 26 nm, respectively, compared to R. This red-shift indicates a significant contribution of the added π-spacer units to improving the intramolecular charge-transfer (ICT) properties. It originates from the electron-withdrawing groups (−F and −CN) and the acceptor character of the nitrogen atoms within the π-spacer moiety. The red-shift points to an improved light-harvesting ability of the investigated molecules and thus to improved OSC efficiency.
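The quoted red-shifts follow directly from the Table 5 wavelengths, and each λ_max can be converted to the corresponding photon energy via E = hc/λ ≈ 1239.84 eV·nm / λ. A short check in Python:

```python
# λ_max values (nm) taken from Table 5.
lmax = {"R": 560, "D1": 605, "D2": 649, "D3": 651, "D4": 629, "D5": 586}

HC_EV_NM = 1239.84  # hc in eV·nm

def redshift_nm(lmax, ref="R"):
    """Red-shift of each designed molecule relative to the reference."""
    return {k: v - lmax[ref] for k, v in lmax.items() if k != ref}

def photon_energy_ev(wavelength_nm):
    """Photon energy (eV) at a given absorption wavelength (nm)."""
    return HC_EV_NM / wavelength_nm

print(redshift_nm(lmax))  # {'D1': 45, 'D2': 89, 'D3': 91, 'D4': 69, 'D5': 26}
```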
The main contribution to the absorption peaks comes from the HOMO−LUMO electronic transition, as noted in Table 5, showing strong electron displacement from the ground state (S_0) to the first excited state (S_1). The excitation energy (E_ex) is a key factor in predicting the efficiency of the material in OSCs: E_ex defines the energy required to excite an electron from S_0 to S_1. A lower E_ex is beneficial, leading to easier electronic excitation and smoother charge migration.[63] The increased ability of molecules D1−D5 to transport electrons efficiently in comparison to R is illustrated in Figure 9b by a significant decrease of E_ex. The excitation energies are larger than the corresponding gap energies because the HOMO → LUMO transition contributes only on the order of 50% to the main absorption peak, with additional contributions from weaker transitions between other energy levels. A further improvement of the photovoltaic performance stems from the increased width of the absorption peak, as reflected in the fwhm (cf. Table 5). Overall, based on E_ex, the fwhm, and a narrow gap of 1.33 eV, the D3 molecule is the most appropriate for the intended electronic application, as it exhibits the best optical and charge-transport properties owing to the presence of a strong electron-accepting entity in the conjugated chain.[64]
The spectral range and intensity of solar absorption are decisive parameters for estimating the short-circuit current density (J_SC) of OSCs. Basically, J_SC is a function of the external quantum efficiency (EQE) and the photon number S(λ) integrated over the solar spectrum, expressed as[65]

J_SC = e ∫ EQE(λ) S(λ) dλ    (2)

where EQE is defined as the product of the light-harvesting efficiency (η_λ), the exciton diffusion efficiency (η_ED), the charge separation efficiency (η_CS), and the charge collection efficiency (η_CC). The light-harvesting efficiency η_λ depends on the oscillator strength (f) at the specific optical absorption wavelength[66]

η_λ = 1 − 10^(−f)    (3)

and, together with a broad absorbance, is one of the main factors that determine the efficiency of photovoltaic devices.[67] The oscillator strength f is critical in determining the propensity of a donor material to absorb and convert incoming photons into excitations. The value of f relies heavily on the choice of functionals used to describe the electronic behavior of the donor material.[68] From eqs 2 and 3 it is clear that donor materials with large f yield high η_λ and provide superior light-harvesting capabilities. As listed in Table 5, η_λ exhibits values close to 1. This convergence is indicative of precise tuning of the density functionals to accurately capture the excitonic behavior of the donor materials. In all cases, D1−D5 exhibit larger η_λ than R, which is explained by the increased degree of π-conjugation. The obtained results show that all of the designed materials are promising candidates for improving the photocurrent and J_SC in OSC devices.
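The light-harvesting relation η_λ = 1 − 10^(−f) is cheap to evaluate; a one-function sketch shows why a strong absorption band saturates η_λ near unity:

```python
def light_harvesting_efficiency(f):
    """η_λ = 1 - 10**(-f): fraction of incident photons absorbed
    at a band with oscillator strength f."""
    return 1.0 - 10.0 ** (-f)

# A weak band (f = 0.1) absorbs ~21% of photons; a strong one (f = 2) ~99%.
for f in (0.1, 1.0, 2.0):
    print(f, round(light_harvesting_efficiency(f), 3))
```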
3.6. Transition Density Matrix. The transition density matrix (TDM) is a useful tool to analyze electronic excitations, electron−hole localization, and interactions between the donor, π-spacer, and acceptor moieties. Using Multiwfn, we performed a TDM analysis of the investigated molecules at the first excited state (S_1) to quantify its composition, identify the atoms most affected by the electronic transition, and evaluate the hole−electron coherence during the transition.[69,70] As shown in Figure 10, we divided the TDM maps into three parts representing the different moieties (A, D, and π-spacer) of the conjugated frameworks, with the colorbar denoting the electron-density coefficient values. Locally excited (LE) state components are marked by the bright diagonal parts, while the off-diagonal elements represent the intramolecular charge-transfer (ICT) state components.
The TDM of the reference molecule shows large electron−hole coherence, with the pair localized in the D−D block, indicating the predominance of the local state. Only a very weak ICT is present between the donor and acceptor elements within R. However, the TDM maps of D1−D5 show charges dispersed over both the on-diagonal and off-diagonal segments, demonstrating effective exciton dissociation and significant ICT from the donor to the acceptor and π-spacer elements compared to R. In fact, efficient exciton separation within the donor materials increases the number of photogenerated charge carriers and thus improves J_SC.[71] The weakest coherence is noted for D4, which contains a strong electron-withdrawing group (−CN) leading to effective exciton dissociation.
Subsequently, we calculated electron density difference (EDD) plots[72] between S_0 and S_1 to study the ICT and charge separation in these materials after electronic excitation. The blue and purple colors of the EDD maps represent the regions of decreasing and increasing electron density upon excitation, respectively. As seen in Figure 10, the donor unit exhibits the minimum electron density, while the π-spacer and acceptor units exhibit the maximum electron density. The decrease of the electron density over the donor is larger for the modified molecules than for R. Simultaneously, the addition of the π-spacer causes a smaller decrease of the electron density over the acceptor unit for D1−D5 compared to R, since the contribution of the acceptor to the HOMO state is reduced by the π-spacer. These plots validate the ICT from the donor core block to the π-spacer and acceptor blocks during the S_0 → S_1 transition, demonstrating the contribution of the π-spacer to increasing the electron-density difference between the central and external parts of the molecule and to improving exciton dissociation into free charges.
Charge Transfer Properties.
The charge transfer characteristics of the donor material are used to assess its ability to dissociate excitons into free charges. Following light harvesting in a BHJ solar cell, the excitons created in the active layer are dissociated at the donor/acceptor interfaces into free charges (electrons and holes), with the electrons being injected into the acceptor and the holes being transferred into the donor material to reach the hole-transport layer. Notably, the J_SC of the solar cell is mainly governed by the efficiency of exciton dissociation and the ability of the active layer to transport charge carriers.[73] Hence, to ensure a good yield of the active layer in OSCs, the donor material should exhibit large hole transport. The process of hole transfer can be described as a sequence of uncorrelated hopping events, and the relationship between the hole mobility μ_hole and the hole transfer rate k_hole is obtained from

μ_hole = (e r² / (2 k_B T)) k_hole    (4)

with e, r, k_B, and T being the electron charge, the intermolecular distance between the π-stacked molecules, the Boltzmann constant, and the temperature (298 K), respectively.
To obtain insight into these properties, we considered exclusively a face-to-face parallel π-stacking pattern to approximate the charge transport characteristics, as it contributes most to the process.[74,75] The M06-2X functional with the 6-31g(d) basis set was used to optimize the dimers and obtain the optimal π-stacking distance,[76] with the optimized geometries depicted in Figure 11. The center-to-center π-stacking distances are approximately 3.9 Å, with a slight perturbation of the structures due to their mutual interaction.
The mobility is directly related to the hole transport rate between neighboring molecules, which is calculated based on Marcus theory[77]

k_hole = (4π²/h) t_hole² (4π λ_hole k_B T)^(−1/2) exp(−λ_hole / (4 k_B T))    (5)

where h is the Planck constant and the temperature is assumed to be 298 K. The relevant parameters in eq 5 for estimating the hole transport abilities are the hole transfer integral (t_hole) and the hole reorganization energy (λ_hole). The transfer integral represents the electronic coupling strength between adjacent molecules. According to the Marcus−Hush two-state model, t_hole is approximated as[78,79]

t_hole = (E_H − E_{H−1}) / 2    (6)

Here, E_{H−1} and E_H describe the HOMO−1 and HOMO energies of the adjacent molecules in the neutral state, respectively. The hole reorganization energy λ_hole is calculated from the neutral and cationic states following

λ_hole = [E_0(G_+) − E_0(G_0)] + [E_+(G_0) − E_+(G_+)]    (7)

where E_0(G_0) and E_+(G_+) represent the energies of the neutral and cationic species in their lowest-energy geometries, respectively. Likewise, E_0(G_+) and E_+(G_0) are, respectively, the energies of the neutral and cationic states at the geometries of the cationic and neutral species. As sketched in Figure 12, obtaining the reorganization energy involves four single-point calculations. The neutral reorganization energy (λ_1) equals the difference between the neutral-state energies at the optimized charged and neutral geometries, and the cation reorganization energy (λ_2) equals the difference between the cation-state energies at the optimized neutral and charged geometries.[82]
From Table 6, the similar reorganization energies of the newly designed materials reflect the similar relaxation of their geometries. The highest λ_hole, found for D5, is likely caused by the presence of the electronegative fluorine groups in the π-spacer, which enhance structural relaxation. Molecules with −CN and −F substitutions show higher hole transfer integrals. This increase in t_hole is attributed to the electron-withdrawing properties of the fluorine atoms and cyano groups within the conjugated frameworks: these groups promote better stacking between conjugated molecules and improve the electronic coupling. The increasing trend of k_hole (D1 < D2 < D3) is related to the increasing number of nitrogen atoms substituted into the benzene ring of the BT block, from zero (D1) to two (D3). This substitution increases the electron deficiency of the π-spacer and thus enhances electron movement within the conjugated backbone. The hole mobility increases in the order R < D5 < D1 < D2 < D4 < D3. The highest μ_hole value, 5.53 cm² V⁻¹ s⁻¹, is found for D3, which combines a low reorganization energy with high intermolecular interactions. In conclusion, these results indicate that incorporating electron-withdrawing groups into a conjugated compound improves its ability to transport holes efficiently. Therefore, according to the hole-mobility calculations, the newly designed materials are expected to exhibit higher transport capabilities, potentially leading to higher J_SC values in OSC applications.
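The hopping-transport workflow described above (transfer integral, four-point reorganization energy, Marcus rate, Einstein-relation mobility) can be strung together numerically. The sketch below uses illustrative inputs only (t_hole = 0.05 eV, λ_hole = 0.20 eV, r = 3.9 Å are assumptions, not values from Table 6), and the one-dimensional prefactor μ = e r² k / (2 k_B T) is an assumed form of the mobility relation:

```python
import math

HBAR = 6.582119569e-16  # reduced Planck constant, eV*s
KB = 8.617333262e-5     # Boltzmann constant, eV/K

def transfer_integral(e_homo, e_homo_minus_1):
    """Energy-splitting-in-dimer estimate: t = |E_H - E_{H-1}| / 2 (eV)."""
    return abs(e_homo - e_homo_minus_1) / 2.0

def reorganization_energy(e0_g0, e0_gp, ep_gp, ep_g0):
    """Four-point scheme: lam = [E0(G+) - E0(G0)] + [E+(G0) - E+(G+)] (eV)."""
    return (e0_gp - e0_g0) + (ep_g0 - ep_gp)

def marcus_rate(t_hole, lam, temp=298.0):
    """Hole hopping rate (s^-1) from semiclassical Marcus theory."""
    return (t_hole ** 2 / HBAR) * math.sqrt(math.pi / (lam * KB * temp)) \
           * math.exp(-lam / (4.0 * KB * temp))

def hole_mobility(k_hole, r_angstrom, temp=298.0):
    """Assumed 1-D hopping mobility mu = e r^2 k / (2 kB T), in cm^2 V^-1 s^-1."""
    r_cm = r_angstrom * 1e-8
    return r_cm ** 2 * k_hole / (2.0 * KB * temp)

# Illustrative numbers only (not the paper's computed values):
k = marcus_rate(t_hole=0.05, lam=0.20)
mu = hole_mobility(k, r_angstrom=3.9)
```

With these inputs the rate lands in the 10¹²–10¹⁴ s⁻¹ range and the mobility in the tenths of cm² V⁻¹ s⁻¹, the typical order of magnitude for π-stacked organic semiconductors.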
3.8. Photovoltaic Properties. Bulk heterojunction solar cells are typically composed of a blend of a π-conjugated electron-donor material with a fullerene derivative, [6,6]-phenyl-C61-butyric acid methyl ester (PC61BM), as the electron acceptor. The power conversion efficiency (PCE) is an important parameter that captures the efficiency of photovoltaic devices and is defined as the ratio of the electrical output to the incident solar power (P_in)[83]

PCE = (J_sc V_oc FF) / P_in

where J_sc, V_oc, and FF are the short-circuit current density, the open-circuit voltage, and the fill factor, respectively. The FF and V_oc values can be determined theoretically from the computed electronic properties. To achieve a high PCE, the donor should exhibit large FF and V_oc values, which results in a trade-off for materials with a low energy gap that cover a large area of the solar spectrum. FF is estimated as[84]

FF = (v_oc − ln(v_oc + 0.72)) / (v_oc + 1)

with v_oc = eV_oc/(k_B T) denoting the dimensionless voltage, and V_oc being[85]

V_oc = (|E_HOMO^Donor| − |E_LUMO^Acceptor|)/e − 0.3

where 0.3 is an empirical factor and E_HOMO^Donor and E_LUMO^Acceptor define the HOMO of the donor and the LUMO of the acceptor, respectively.
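The FF and V_oc estimates described above reduce to a few arithmetic steps. A minimal sketch (the frontier-orbital energies used below are illustrative assumptions, not the paper's computed values):

```python
import math

KB_T = 8.617333262e-5 * 298.0  # kB*T at 298 K, in eV (~0.0257)

def open_circuit_voltage(e_homo_donor, e_lumo_acceptor):
    """Scharber-style estimate: V_oc = (|E_HOMO^D| - |E_LUMO^A|)/e - 0.3 V."""
    return abs(e_homo_donor) - abs(e_lumo_acceptor) - 0.3

def fill_factor(v_oc_volts, kbt=KB_T):
    """FF from the dimensionless voltage v_oc = e*V_oc/(kB*T)."""
    v = v_oc_volts / kbt
    return (v - math.log(v + 0.72)) / (v + 1.0)

# Illustrative: donor HOMO -5.2 eV against a PC61BM LUMO at -3.7 eV (assumed values).
voc = open_circuit_voltage(-5.2, -3.7)  # 1.2 V
ff = fill_factor(voc)
```

Because v_oc is large (tens of k_B T per volt), the FF expression saturates near 0.9, which is why donors with similar HOMO levels yield similar FF values, as noted for R and D1−D5.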
The computed FF and V_oc of the studied materials are tabulated in Table 7. The HOMO levels of the investigated donor materials, aligned with the LUMO of the PC61BM acceptor, together with the V_oc values, are depicted in Figure 13. The V_oc, indicated by the arrows in Figure 13, represents the maximum voltage that an OSC can provide to an external circuit after exciton dissociation. Interestingly, the designed materials exhibit V_oc and FF values comparable to those of R, owing to the close HOMO values of these materials. Combining these results with those found previously, it may be deduced that integrating the π-spacer does not necessarily improve all of the properties of the conjugated molecule. Specifically, the π-spacer modification studied here mainly improved the optical and charge transfer properties. In contrast, the photovoltaic properties, which are directly related to the HOMO and LUMO levels of the donor material, are enhanced only slightly. Nevertheless, all the developed molecules, with their sufficient V_oc and FF values, can be considered good candidates for the active layer in BHJ OSC devices.
3.9. Donor/PC61BM Interfacial Charge Transfer. To demonstrate the efficiency of the studied materials as donors, we investigated the charge transfer efficiency of the D3/PC61BM composite at the DFT/B3LYP-GD3BJ/6-31g(d,p) level of theory. The D3 donor was selected for its superior charge transfer properties, low hole reorganization energy, and high hole transport rate. Efficient charge transfer across the interface requires that the composite structure remain planar and that the HOMO and LUMO distributions be located entirely over the donor and acceptor, respectively.[86] The optimized D3/PC61BM structure, illustrated in Figure 14a, shows that the donor conforms to the acceptor, improving the intermolecular interaction and facilitating charge transfer within the composite.
To ascertain the possible interactions between D3 and PC61BM, a reduced density gradient (RDG) analysis was carried out. As depicted in Figure 14b, high repulsion due to steric effects is located over the aromatic rings of D3 and over PC61BM. The RDG map clearly shows the absence of hydrogen bonding. A high degree of van der Waals interaction is seen between D3 and PC61BM, which favors π−π stacking between the donor and acceptor subparts and thus improves the configurational stability and enhances the ICT. The frontier molecular orbital distribution pattern is illustrated in Figure 14c. Specifically, the HOMO density is distributed entirely over D3, while the LUMO is located entirely over PC61BM, demonstrating the donor and acceptor character of these moieties.
CONCLUSIONS
In this study, we report a DFT-based computational study of the electronic, optical, and charge transport properties of five novel small molecules intended as donors in BHJ OSCs. The five molecular structures are derived from the DRCN5T reference by inserting benzothiadiazole-derived π-spacer groups into its main framework between the oligothiophene donor core and the end-capping acceptor groups. The results show that the addition of π-spacers profoundly influences the electronic and absorption characteristics. The [1,2,5]thiadiazolo[3,4-d]pyridazine group (D3) as the π-spacer leads to the largest decrease in gap energy and the largest red-shift of the absorption spectrum by increasing the NCIs and enhancing the π-electron delocalization, while the difluorobenzothiadiazole has a weaker effect. The optical absorption covers most of the visible part of the solar spectrum with a high light-harvesting efficiency. The charge transport analysis shows the effect of the π-spacer units on exciton dissociation in the first excited state and on enhancing the charge-carrier mobility. The newly designed materials show enhanced properties in all of the studied aspects of OSCs compared to the reference molecule. Owing to the considerable electronegativity of the nitrogen atoms within the π-spacer and the high ICT, D3 exhibits the largest maximum absorption wavelength of 651 nm and the largest hole mobility of about 5.35 cm² V⁻¹ s⁻¹. Accordingly, the D3/PC61BM composite was studied to evaluate the charge transfer between the donor and acceptor subparts. Overall, this study shows that adding π-spacer building blocks to the molecular structure is a promising strategy for further improving the photovoltaic properties of donor materials for highly efficient OSC devices.
■ ASSOCIATED CONTENT Data Availability Statement
All data underpinning the results of this study are available from the authors on reasonable request. Basic data files necessary to reproduce the main results of this study are also available at 10.5281/zenodo.10137672.
■ AUTHOR INFORMATION
Corresponding Authors
Figure 1. Molecular structures of the reference molecule R (DRCN5T) and the five specific π-spacer groups inserted between the donor and acceptor groups of R. The newly designed molecules are referred to by the name of the employed spacer, D1−D5.
Figure 2. Comparative analysis of experimental and computed maximum absorption wavelengths of DRCN5T with five different functionals. The functional wB97XD is the most suitable for reproducing the experimental data.
Figure 3. Optimized structures of the studied molecules at the DFT/B3LYP-GD3BJ/6-311+g(d,p) level. The symbols l_b1 and l_b2 denote the acceptor−π-spacer and donor−π-spacer bridge bonds in the modified D1−D5 molecules and the donor−acceptor bond in the reference molecule. Note the large degree of planarity of all considered structures.
Figure 4. Molecular planarity parameter (MPP) and span of deviation from plane (SDP) plots of R and D1−D5 indicate a large degree of planarity for all considered molecules.
Figure 5. RDG scatter and isosurface plots of the R and D1−D5 molecules. The red color indicates repulsion from aromatic steric effects, and the green color indicates the noncovalent interactions.
Figure 6. FMO distribution plots of R and the designed materials D1−D5. The HOMOs are distributed over the whole structure, while the LUMOs are located over the π-spacer and acceptor, indicating charge transfer between the building blocks.
Figure 7. HOMO/LUMO energy levels and band gap energies of R and the designed compounds. Note the decrease in gap energy with the added π-spacer.
Figure 8. Density of states plots of the reference R and the designed molecules D1−D5 at the DFT/B3LYP-GD3BJ/6-311+g(d,p) level of theory. The electron density distribution changes upon adding the π-spacer block, leading to higher electron delocalization.
Figure 9. (a) Simulated optical absorption spectra of D1−D5 at the TD-DFT/wB97XD/6-311+g(d,p) level of theory in CHCl3 solution. Note the red-shift of the absorption spectra depending on the nature of the π-spacer. (b) Maximum absorption wavelengths and excitation energies of R and D1−D5.
Figure 10. Electron density difference maps and transition density matrix plots of compounds R and D1−D5: donor (D), acceptor (A), and π-bridge (π). For D1−D5, the charges are dispersed over the on-diagonal and off-diagonal segments compared to R, showing enhanced exciton dissociation and higher ICT.
Figure 11. Optimized geometries of the π-stacked configurations of the considered reference and modified molecules at the M06-2X/6-31g(d) level of theory.
Figure 12. Potential energy curves of the intermolecular transfer reaction between the neutral and cationic states of a conjugated molecule.
Figure 13. Graphical representation of the open-circuit voltage (V_oc) of the reference and designed molecules with respect to the acceptor PC61BM.
Table 1. Theoretical and Experimental E_HOMO, E_LUMO, and Gap Energy E_gap of the Reference Molecule with the 6-311+g(d,p) Basis Set in CHCl3 Solution Using the PCM and SMD Solvent Models
Table 2. Parameters of the Optimized Molecular Structures
Table 4. Percentage Involvement of the Different Segments in Raising the FMOs
Table 6. Calculated Hole Reorganization Energies (λ_hole), Hole Transfer Integrals (t_hole), Hole Transport Rates (k_hole), and Hole Mobilities (μ_hole) of the Studied Materials R and D1−D5
Table 7. Photovoltaic Parameters Calculated for the Studied Compounds
P09-07 Physical activity and sedentary behaviour patterns among French adults during the COVID-19 health crisis
Abstract Background The COVID-19 health crisis and the various restrictions (lockdowns) implemented may have impacted individuals' behaviours (e.g. physical activity [PA] and sedentary behaviour [SB]) and psychological health (e.g., self-esteem or adjustment strategies to cope with stressful events). The objective of this study was to identify PA and SB patterns and to investigate their associations with socioeconomic and psychological characteristics among French adults during the COVID-19 health crisis. Methods Cross-sectional data of French adults were collected during the COVID19 health crisis (between March 2020 and February 2021). PA and SB were measured using the International Physical Activity Questionnaire. The Rosenberg Self-Esteem Scale and the Brief Cope questionnaire were used to measure self-esteem and coping strategies, respectively. PA and SB cross-sectional patterns were identified using latent class analysis. Multivariable logistic regression models were used to investigate associations between identified patterns and adults' socioeconomic factors, self-esteem, and coping strategies. Results Among the 241 included adults (mean age ± standard deviation: 29.6 ± 13.1 years), three cross-sectional PA and SB patterns were identified: sedentary walker (n = 141; 58.5%); varied PA practitioner (n = 68; 28.2%); walker with intense PA (n = 32; 13.3%); Compared to the sedentary walker pattern, the walker with intense PA one was overrepresented by socially less advantaged adults, using more planning and less religion as coping strategies to stressful events, and those in the varied PA practitioner pattern used more denial as coping strategy. Conclusions More than half of adults were in the least healthy pattern (sedentary walker). These results suggest using PA and SB as levers to cope with stressful life events.
Background
Sedentary time (sitting) has been associated with adverse cardio-metabolic consequences. The general recommendation is to interrupt long periods of sitting. In order to successfully develop interventions and policies to decrease sedentary behaviour, high-risk groups as well as the context of sitting should be identified. The aim of this study was to investigate sedentary behaviour among (subgroups of) the Dutch population and to identify in which domains most sedentary time was spent. Methods Data from the 2017 Dutch national Health Interview Survey was used, which includes a nationally representative sample of 8,441 Dutch citizens aged 4 years and older. Sedentary time on an average day was assessed using an adjusted version of the Marshall questionnaire. Sitting domains were defined as: 1) traveling, 2) at work, 3) at school or studying 4) watching television, 5) using a computer/smartphone at home, and 6) otherwise. Total sedentary time was analysed stratified by age, sex and level of education with ANOVA and Bonferroni correction.
Results
On average the Dutch population accumulates 9,0 hours/day of sedentary time. Overall, participants accumulated most sedentary time while watching television (2.2 hours/day) followed by sitting at work and other activities (both 1.7 hour/day). Significant differences (p > 0.001) were found by sex, age group and level of education. Men reported slightly more sedentary hours than women (9.2 vs. 8.8 hours/day). With respect to age groups, adolescents (12-17 years old) reported the highest, whereas children (4-11 years old) reported the lowest sedentary hours (10.1 vs. 7.3 hours/day). Finally, sedentary hours were high for higher educated people (9.7 vs. 8.2 hours/day in lower educated people). Adolescents accumulated most sedentary time at school or during studying (4.0 hours/day), higher educated people accumulated most sedentary time at work (3.4 hours/day).
Conclusions
Our study showed that in general Dutch people spend a lot of time sedentarily, especially adolescents and higher educated people. Most sedentary times was spent while watching television, at school or during studying, and at work. Therefore interventions aiming to decrease sedentary beha- [SB]) and psychological health (e.g., self-esteem or adjustment strategies to cope with stressful events). The objective of this study was to identify PA and SB patterns and to investigate their associations with socioeconomic and psychological characteristics among French adults during the COVID-19 health crisis. Methods Cross-sectional data of French adults were collected during the COVID19 health crisis (between March 2020 and February 2021). PA and SB were measured using the International Physical Activity Questionnaire. The Rosenberg Self-Esteem Scale and the Brief Cope questionnaire were used to measure self-esteem and coping strategies, respectively. PA and SB crosssectional patterns were identified using latent class analysis. Multivariable logistic regression models were used to investigate associations between identified patterns and adults' socioeconomic factors, self-esteem, and coping strategies.
Results
Among the 241 included adults (mean age AE standard deviation: 29.6 AE 13.1 years), three cross-sectional PA and SB patterns were identified: sedentary walker (n = 141; 58.5%); varied PA practitioner (n = 68; 28.2%); walker with intense PA (n = 32; 13.3%); Compared to the sedentary walker pattern, the walker with intense PA one was overrepresented by socially less advantaged adults, using more planning and less religion as coping strategies to stressful events, and those in the varied PA practitioner pattern used more denial as coping strategy.
Conclusions
More than half of adults were in the least healthy pattern (sedentary walker). These results suggest using PA and SB as levers to cope with stressful life events.

Keywords: Physical activity, Sedentary behaviour, Adults, COVID-19 health crisis

European Journal of Public Health, Volume 32, Supplement 2, 2022, ii114
Effect of Amoxicillin and Clavulanate Potassium Combined with Bazhengsan on Pediatric Urinary Tract Infection
Objective To explore the therapeutic effect of amoxicillin and clavulanate potassium combined with Bazhengsan on pediatric urinary tract infection (UTI). Methods The data of 120 UTI children treated in Wuhan Xinzhou District People's Hospital from February 2019 to February 2020 were retrospectively analyzed. They were equally split into an experimental group (EG) and a control group (CG) according to the order of admission. All children were treated with amoxicillin and clavulanate potassium for suspension (twice a day), and EG was additionally treated with one dose of Bazhengsan daily. Both groups were treated for 10 days. The immune function indexes, inflammatory factor levels, and clinical efficacy were compared before and after treatment. Results No remarkable differences in the general data such as blood routine and urine routine results were observed between the two groups before treatment (P > 0.05). After treatment, EG achieved obviously better immune function indexes (P < 0.001) and lower levels of inflammatory factors (P < 0.05) compared with CG. Besides, the treatment effective rate in EG (96.7%) was higher than that in CG (P < 0.05). Conclusion Amoxicillin and clavulanate potassium combined with Bazhengsan can improve the immune function of UTI children and reduce the levels of inflammatory factors, with remarkable effects, which should be popularized in practice.
Introduction
Pediatric urinary tract infection (UTI) refers to urinary tract inflammation caused by pathogens invading the urinary tract mucosa or tissue. Its clinical manifestations mainly include abnormal urination, such as frequency and urgency of urination, as well as urinary incontinence and retention in some children [1,2]. If not treated in time, it may trigger chronic urinary system infection and lead to renal fibrosis, seriously endangering children's physical and mental health. Antibiotics are the main treatment measure in clinic since UTI is mostly caused by bacteria. However, the wide application of antibiotics has resulted in antibiotic resistance in more than half of the strains due to the production of β-lactamase [3,4]. Therefore, children can be treated with β-lactamase inhibitors in practice to protect the activity of β-lactam antibiotics. Amoxicillin and clavulanate potassium is a mixture of the β-lactam antibiotic (amoxicillin) with the β-lactamase inhibitor (clavulanate potassium), which can enhance the sensitivity of pathogens to amoxicillin and inhibit the emergence of drug-resistant bacteria [5]. At present, many reports have shown that amoxicillin and clavulanate potassium can reduce the clinical symptoms of UTI children, and especially oral administration of this drug can reduce the incidence of complications such as phlebitis, with definite efficacy [6,7]. However, UTI can recur in children due to factors such as immunocompromise, and recurrence is an important reason for the development of UTI into chronic renal failure [8,9]. Moreover, amoxicillin and clavulanate potassium cannot improve the immune function of children, so it is extremely important to combine it with other therapeutic drugs.
In recent years, traditional Chinese medicine (TCM), with its holistic view, has shown unique advantages in the treatment of urinary tract diseases. TCM classifies UTI into the category of stranguria and holds that the disease in children is caused by excessive milk and food and accumulation of heat and stagnation, which triggers disturbance of qi transformation and urinary tract obstruction, resulting in frequent urination and pain [10,11]. The treatment should be based on clearing heat, eliminating accumulation, promoting urination, and removing stranguria. Ning treated pediatric UTI with Bazhengsan and found that Polygonum aviculare and fringed pink in the medicine inhibited Staphylococcus and Bacillus and turned bacteriuria negative [12]. In addition, Changli Xue found that the total effective rate (98.3%) of children was significantly improved compared with the control group after the addition and subtraction treatment of Bazhengsan, suggesting the remarkable effects of this drug on UTI [13]. However, research on Bazhengsan in UTI treatment has focused on the short-term efficacy, and its impact on immune function and inflammatory factor levels in children remains unclear. Besides, there is no study on the application of Bazhengsan combined with amoxicillin and clavulanate potassium. Based on this, this paper explores the actual effect of the combined treatment on pediatric UTI, reported as follows.
Study Design.
This retrospective study was conducted in Wuhan Xinzhou District People's Hospital from February 2019 to February 2020, aiming to explore the efficacy of amoxicillin and clavulanate potassium combined with Bazhengsan in the treatment of pediatric UTI.
Recruitment of Research Subjects.
The data of UTI children treated in Wuhan Xinzhou District People's Hospital from February 2019 to February 2020 were retrospectively analyzed. Children meeting the following criteria were included: (1) children who were diagnosed with UTI by examination, meeting the criteria of the Guidelines for the Clinical Research of Chinese Medicine New Drugs [14] and Zhu Futang Practical Pediatrics (7th Edition) [15], that is, white blood cell (WBC) count in urine routine >5/HP and midstream urine culture colony count >1 × 10⁶/mL; (2) children with typical urinary tract irritation symptoms; (3) children who were treated throughout the whole period in our hospital without transferring or stopping treatment; (4) children with complete clinical data; and (5) children between 1 and 12 years old. Children were excluded according to the following criteria: (1) children with urinary calculi, urinary deformity, deformity of the kidney, chronic pyelonephritis, or other serious organic diseases; (2) children quitting the treatment halfway and changing the treatment plans; (3) children with simple urethral syndrome; (4) children who were allergic to the drugs involved in the study; (5) children with missing clinical data; and (6) children who received antibacterial drug therapy before participating in the study.
Steps.
A total of 120 children were enrolled in this study and were equally split into experimental group (EG) and control group (CG) according to the order of admission. On the day when the family members agreed to participate in the study, the research group collected social demographic data and clinical data of the children and tested their blood routine, urine routine, immune function, and inflammatory factor levels. At 10 days after treatment, the research group tested their immune function and inflammatory factor levels again.
Ethical Considerations.
This study is in line with the principles of the Declaration of Helsinki (as revised in 2013) [16] and was approved by the ethics committee of Wuhan Xinzhou District People's Hospital. After the children were recruited, the research group explained the purpose, significance, content, and confidentiality of the study to their families and asked them to sign the informed consent.
Withdrawal Criteria.
As judged by the research group, children with the following conditions were unsuitable to continue participating in the experiment, and their medical records would be kept but not used for data analysis: (1) adverse events or serious adverse events occurred; (2) the condition deteriorated during the experiment; (3) the subjects had some serious comorbidities or complications; and (4) the families of the children were unwilling to continue the clinical trial and requested withdrawal from the research group.
Methods.
All children took amoxicillin and clavulanate potassium for suspension (Guangzhou Baiyunshan Pharmaceutical Co., Ltd., Baiyunshan Pharmaceutical General Factory, National Medical Products Administration approval no. H20041109, each pack containing 200 mg of amoxicillin and 28.5 mg of clavulanate, a 7:1 ratio), with the specific administration as follows: (1) 14.3 mg/kg each time for children with a body weight of less than 13 kg and age under 2 years; (2) one pack each time for children with a body weight of 13-21 kg; and (3) two packs each time for those with a body weight over 21 kg. After the symptoms disappeared, children continued to take the suspension orally. The total treatment time was 10 days.
EG was additionally treated with Bazhengsan, consisting of plantain seed, Polygonum aviculare, fringed pink, talc, ural licorice root tip, Gardenia, rhubarb, dandelion, and Hedyotis diffusa. With the addition and subtraction of herbs in Bazhengsan, Cortex Phellodendri and Bupleurum were added for children with fever and chills, Lalang Grass Rhizome and field thistle were added for children with hematuria, peony and Cyperus rotundus were added for children with abdominal distention, and Astragalus mongholicus and Codonopsis were added for those with qi deficiency. Bazhengsan was decocted by the research group and administered as follows: (1) 1 dose every 2 days, with frequent administration every day, for children aged under 2 years; (2) 1 dose every 2 days, three times a day, for children aged 2-5 years; and (3) 1 dose every day, three times a day, for children over 5 years. The total treatment time was 10 days.
Observation Criteria
The general data extraction forms were established by the children's families, including inpatient number, name, gender, age, urine culture results, blood routine results, urine routine results, residence, family monthly income, parents' marital status, and parents' educational level.
Immune Function Indexes.
Five milliliters of fasting venous blood were taken from the children before treatment (T1), 5 days after treatment (T2), and 10 days after treatment (T3). The levels of T lymphocyte subsets (CD8+ and CD4+/CD8+) were detected by flow cytometry (ACEA BIO Hangzhou Co., Ltd., Zhejiang Medical Products certified no. 20142400581), and the levels of immunoglobulins (IgA and IgG) were measured with a nephelometry immunoassay kit (Nanjing Getein Bio-Pharmaceutical Co., Ltd., Jiangsu Medical Products certified no. 20122400146).
Clinical Efficacy.
The therapeutic efficacy was evaluated according to the Guidelines for Clinical Research on Antibiotics [17] issued by the Pharmaceutical Administration of the Ministry of Health. If the symptoms, signs, laboratory tests, and etiological tests were normal, the children were regarded as cured; if the condition of the children was remarkably improved while one index did not return to a normal level, the treatment was deemed markedly effective; if the condition was improved while more than one index did not return to normal levels, the treatment was classified as effective; if the condition was not improved, or even aggravated, the treatment was ineffective.
Statistical Processing.
The data in this study were processed with SPSS 20.0 software and graphed with GraphPad Prism 7 (GraphPad Software, San Diego, USA). The data included in the study comprised enumeration data (clinical efficacy) and measurement data (immune function indexes and inflammatory factor levels), tested with the χ² test and the t-test, respectively. Differences were considered statistically significant at P < 0.05.
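To make the enumeration-data comparison concrete, the sketch below computes the Pearson χ² statistic for a hypothetical 2 × 2 efficacy table; the counts are invented for illustration and are not the study data.

```python
# Minimal sketch of the chi-square test used for enumeration data.
# The counts below are invented for illustration; they are NOT the study data.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (rows = groups, columns = outcome categories), no continuity correction."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical efficacy counts: rows EG/CG, columns effective/ineffective
stat = chi_square_2x2([[58, 2], [50, 10]])
```

For these invented counts the statistic is about 5.93, which exceeds the 3.84 critical value for 1 degree of freedom at α = 0.05, so such a difference would be reported as significant at P < 0.05.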
Comparison of General Data of Children.
No remarkable differences in the general data such as blood routine and urine routine results were observed between the two groups before treatment (P > 0.05) (see Table 1).
Comparison of Clinical Efficacy in Children.
The clinical efficacy in EG was remarkably better compared with CG (P < 0.05) (see Table 2).
Discussion
The incidence of pediatric urinary tract infection (UTI) is 3%-5% in China [18], and children present with different symptoms and signs depending on age and urinary infection site. Gram-negative bacteria are the most common pathogens, and the proportion of Gram-positive bacteria, represented by Streptococcus faecalis and Staphylococcus, has also increased in recent years. Antibiotics are still the main treatment measures. Antibiotics are secondary metabolites with antipathogen effects, which can selectively act on specific links in the synthesis of deoxyribonucleic acid, ribonucleic acid, and protein in bacterial cells, so as to inhibit, kill, and dissolve bacteria. Early antibiotic treatment of pediatric UTI has achieved remarkable results. However, with the long-term administration of antibiotics, drug-resistant bacteria have secreted a large amount of β-lactamase against β-lactam antibiotics, which can cleave the β-lactam ring, destroy the antibacterial activity, and subsequently enhance bacterial resistance to antibiotics such as penicillin and cephalosporin. In order to stabilize the antibacterial efficacy of antibiotics, children can be clinically treated with β-lactamase inhibitors, which irreversibly combine with β-lactamase to preserve the role of antibiotics [19]. Amoxicillin and clavulanate potassium is a mixture of the β-lactam antibiotic (amoxicillin) with the β-lactamase inhibitor (clavulanate potassium), in which the former has an antibacterial effect on Gram-negative and Gram-positive bacteria, while the latter has a strong broad-spectrum enzyme-inhibitory function. Their combination can enhance the sensitivity of antibiotics and reduce the possibility of drug-resistant bacteria. It has been well documented that amoxicillin and clavulanate potassium can alleviate the clinical symptoms of UTI children and improve the short-term efficacy [20], but it cannot reduce the recurrence rate of pediatric UTI.
About 50% of children will relapse after 1 month of treatment due to complex factors, and kidney scars can be formed in severe cases, triggering secondary hypertension and chronic renal failure [21], with poor prognosis.
There are many reasons for the recurrence of pediatric UTI, and low immune function is one of the most critical factors. Once the immune balance is damaged, normal bacteria can become opportunistic pathogens, triggering the recurrence of UTI. Carmen and Maria have shown in their study that the levels of T lymphocyte subsets in patients with chronic UTI are markedly lower than those in healthy people. They have also stated that the imbalance of CD4+/CD8+ is an important factor leading to immune disorders, and that immunoglobulin also plays an important role in resisting bacterial invasion [22]. Bazhengsan in this study is derived from an ancient prescription, including plantain seed, Polygonum aviculare, fringed pink, talc, ural licorice root tip, Gardenia, rhubarb, dandelion, and Hedyotis diffusa. Plantain seed promotes urination, removes stranguria, clears heat, and brightens the eye because its outer epidermal cell wall contains a large number of hydrophilic polysaccharide colloids; these can improve the intensity of the delayed allergic reaction and increase the level of hemolysin in mice with low immune function, indicating that the substance can enhance immune function. In addition, the water extract and low-polarity extract can also regulate the secretion of human immunoglobulin, while rhubarb can enhance the IgA level secreted by the intestinal tract of burned mice and accelerate the secretion of immune-related substances [23]. Therefore, Bazhengsan has an immune-enhancing effect, and the immune function indexes of EG after treatment were significantly better compared with CG (P < 0.001).
At present, scholars have studied the application of Bazhengsan in pediatric UTI, but the effect of the drug on the levels of inflammatory factors in children remains unclear. Polygonum aviculare in Bazhengsan significantly inhibits Shigella flexneri, Escherichia coli, Staphylococcus aureus, and Staphylococcus, while the water and ethanol extracts of fringed pink also restrain Escherichia coli and Salmonella paratyphi. Moreover, dandelion and Hedyotis diffusa have strong inhibitory effects on a variety of bacteria and cocci, while rhubarb can also hinder the nucleic acid synthesis of bacterial cells and plays an anti-anaerobic role. Cao et al. have shown in their study that rhubarb can improve serum TNF-α and IL-6 levels, indicating that the drug can effectively prevent the amplification of inflammatory mediators and avoid their biological effects [24]. Therefore, the levels of inflammatory factors after treatment in this study were lower in EG than in CG (P < 0.001), with markedly better clinical efficacy in EG (P < 0.05).
It is worth noting that some scholars have found that rhubarb can inhibit the expression of intercellular attachment molecules in the glomerulus, reduce the proliferation of human renal fibroblasts induced by the mitogen PMA, and hinder the secretion of IL-6, thereby preventing renal fibrosis [25] or protecting the renal function of UTI children.
This study did not discuss the renal function of children, and the protective effect of Bazhengsan on the renal function of UTI children needs to be further explored.
In addition, amoxicillin and clavulanate potassium combined with Bazhengsan can enhance the comprehensive efficacy in children and should be popularized in practice.
Data Availability
The data used to support the findings of this study are available upon reasonable request from the corresponding author.
Conflicts of Interest
The authors declare that they have no conflicts of interest.

Figure 2: Comparison of inflammatory factors in children (x̄ ± s). The abscissa from left to right shows T1, T2, and T3; the black area is EG and the gray area is CG. # indicates P < 0.001.
Experimental Verification of a Simple Method for Accurate Center of Gravity Determination of Small Satellite Platforms
We propose a simple and relatively inexpensive method for determining the center of gravity (CoG) of a small spacecraft. This method, which belongs to the class of suspension techniques, is based on dual-axis inclinometer readings. By performing two consecutive suspensions from two different points, the CoG is determined, ideally, as the intersection of two lines uniquely defined by the respective rotations. We performed an experimental campaign to verify the method and assess its accuracy. Building on a quantitative error budget, we obtained an error distribution through simulations, which we then verified with experimental tests. The retrieved experimental error distribution agrees well with the results predicted by the simulations, which in turn indicate a CoG error norm smaller than 2 mm at the 95% confidence level.
Introduction
The growing interest in the development of light, small, highly capable spacecraft (S/C) platforms for a wide range of missions demands a boost in performance beyond the standards established by the multitude of low-cost micro/nanosatellites. Often developed as part of university educational programs, these have dominated this segment over the last two decades. In this respect, it is known that accurate attitude and orbit control systems rely on precise knowledge of the spacecraft CoG. However, the development of such a class of S/C is highly cost-driven, whereas the methods for measuring the CoG commonly employed for larger platforms [1], while highly accurate, require rather complex and expensive equipment. Thus, cost-effective and easy-to-implement alternatives shall be pursued.
Typically, the methods for measuring the CoG of an S/C fall into two broad categories, namely static methods and dynamic methods [2]. Static methods are often based on the pivoting-axis system: the payload under test (PUT) is mounted on an instrument featuring a pivoting axis. In principle, the offset of the CoG from the pivoting axis can be retrieved by measuring the force acting on a point at a certain distance from the axis itself, once the total mass of the payload is known.
Complete CoG localization is then obtained by repeating the measurement after rotating the PUT. The most accurate instruments exploiting the static balancing principle consist of rotary platforms featuring a closed-loop self-balancing controller that holds the platform in its neutral position [3]. The torque required for rebalancing is the measured output from which the CoG location can be retrieved, leading to submillimeter accuracies. Another common static measurement method is multipoint weighting, achieved by placing the PUT on a multipoint weight platform equipped with 3 (or 4) high-accuracy force transducers. The forces measured by the transducers, whose locations are known, allow the in-plane coordinates of the CoG to be computed. This concept is employed at the NOVA test facility (Utah University) to measure the mass properties of nanosatellites, with a reported accuracy of 1 mm in localizing the CoG [4].
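The in-plane computation mentioned above is just a force-weighted average of the transducer positions, since the moments about each axis must balance. A minimal sketch (the positions and readings below are invented for illustration, not data from [4]):

```python
import numpy as np

def inplane_cog(positions, forces):
    """In-plane CoG from a multipoint weight platform: moment balance gives
    x_cog = sum(F_i * x_i) / sum(F_i), and likewise for y.
    positions: (N, 2) transducer x, y locations; forces: (N,) readings."""
    positions = np.asarray(positions, dtype=float)
    forces = np.asarray(forces, dtype=float)
    return forces @ positions / forces.sum()

# Invented example: three transducers at the corners of a right triangle
cog_xy = inplane_cog([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]], [1.0, 1.0, 2.0])
```

With these readings the weight is shifted toward the third transducer, so the estimate lands at (0.25, 0.5) rather than at the triangle centroid.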
Dynamic methods are based on the principle of dynamic balancing: the PUT is placed on a spin balance, which estimates the CoG location by measuring the centrifugal forces. High sensitivity, however, is achieved only at high rotational speeds, which makes such a method of limited applicability for CoG measurements of space vehicles [1].
Although various measurement instruments based on all the methods listed above are commercially available, these are quite expensive: even when aiming at a relatively low total weight capacity and moderate accuracy, the cost reaches several thousand euros. The concept of suspending a body to measure its CoG, which is pursued in this work, is certainly not new; rather, it is one of the oldest. Suspension was employed, for example, in the NASA X-38 project [5]. In that case, the CoG localization was obtained by combining weight distribution (as in a multipoint weight method) with inclination measurements. Recent examples involving the suspension concept are the trifilar torsional pendulum [6] and the photogrammetry technique [7], applied by NASA engineers to locate the Orion capsule CoG. The trifilar pendulum is a quite simple mechanism, allowing the joint determination of the CoG and the inertia matrix. The reported accuracy in locating the CoG is 1.5 mm, but this was obtained after a careful calibration of the mechanism and the use of a tricoordinate measuring machine to determine the distance between some predefined points [6]. In [7], the authors suspended a full-scale Orion crew module from an asymmetric bifilar lifting strap and retrieved the CoG position from triangulation of the plumb lines. These, in turn, were determined from a set of images gathered by a multicamera system and processed through a set of custom-designed data reduction functions. The authors' indications suggest an accuracy in the order of a few millimeters.
In this paper, we aim at the experimental verification of the method devised by the authors in [8], which relies upon two consecutive monofilar suspensions of the object under test to determine its CoG, using as measured quantities the angle outputs of a dual-axis inclinometer. To this end, we first generalize the method, relaxing some of the constraints outlined in the original formulation. The experimental verification approach is that of applying the method to determine the barycenter of a known mass distribution, that is, a proof mass. To enforce experiment repeatability and smooth out systematic errors, we perform measurements from several couples of suspension points. The error of the method is then quantified as the distance between the computed barycenter of the proof mass and the true one.
The main contribution of this work is twofold: (1) to investigate an extremely low-cost method for determining the CoG, with minimum hardware and calibration requirements and with an accuracy suitable for many practical applications, and (2) to provide a comprehensive error analysis which is validated through experiments. To this end, the paper is organized as follows: first, the double suspension method is outlined (Section 2). Then, an error budget is presented, first qualitatively to justify the experimental setup design (Section 3) and later quantitatively by introducing the test facility and the assumed statistical distributions of errors (Section 4). The verification method, which combines Monte Carlo error analysis and experiments, is presented in Section 5. Once the theory is set, results are presented in Section 6, and finally, our conclusions are drawn in Section 7.
The Double Suspension Method
In recalling and generalizing the method presented in [8], we first define the inclinometer frame of reference. Consider the inclinometer in Figure 1, with its top face up; ẑi is perpendicular to the top face, positive outward, ŷi is directed along the cable connection, and x̂i completes the right-handed frame.
The dual-axis inclinometer selected for the experiment (Posital Fraba ACS-060) provides as output the direction sines of the gravity vector (g) with respect to x̂i (call the angle X) and ŷi (call the angle Y), that is,

sin X = ĝ · x̂i,  sin Y = ĝ · ŷi,  (1)

where ĝ = g/‖g‖. We define as body frame x̂b, ŷb, and ẑb the frame of reference fixed to the proof mass to be suspended. It results from a simple translation of the inclinometer reference frame. We defer the definition of the location of its origin to later in the manuscript, after the justification of the suspension mass shape. Lastly, we define the laboratory reference frame x̂l, ŷl, and ẑl as a pseudoinertial frame of reference with ẑl parallel and opposite to the local gravity vector, x̂l pointing northward, and ŷl completing the frame.
The CoG determination method can be summarized as follows: for a given suspension point, the body-frame components of the upward local vertical can be computed starting from the inclinometer readings. Two suspensions then determine two such unit vectors, which identify two lines ideally passing through the CoG of the assembly under test. These lines are not going to intersect exactly because of measurement errors; however, the midpoint of the segment of closest approach can be taken as the estimated CoG. In what follows, a step-by-step procedure towards the computation of such an estimate is presented.
For solving the problem under discussion, we first need to express the direction of the upward local vertical, ẑl, in body-frame components ûb as a function of X and Y, while the orientation of the body about this direction is not important. From (1), it follows that

ûb = [−sin X, −sin Y, ±√(1 − sin²X − sin²Y)]ᵀ,  (2)

where the last component of ûb is computed to enforce the unit norm, and its sign depends on the orientation of the inclinometer: it is + when the inclinometer has its top face up, − otherwise.
It shall be noticed that, in the form above, (2) may lead to unphysical results. In fact, measurement errors can produce angular readings such that sin²X + sin²Y > 1. We can handle such an occurrence by normalizing the sines of the sensor readings by the factor 1/√(sin²X + sin²Y) whenever this sum exceeds one.

Taking two suspension points, P1 and P2, on any face of the proof mass, we obtain two body-frame representations of the upward vertical vector, û1 and û2. If we call L1 and L2 the lines stemming from the suspension points and passing through the barycenter, their parametric equations are

Lj(tj) = Pj + tj ûj,  j = 1, 2,  (3)

where, here and in the following, we omit the indication of the body frame of representation for P and û for ease of notation. The intersection occurs when L1(t1) = L2(t2), or [9] when

P1 + t1 û1 = P2 + t2 û2.  (4)

Subtracting P1 from both sides of (4) and crossing with û2 yields

t1 (û1 × û2) = (P2 − P1) × û2.  (5)

Equation (5) can be solved for the parameter t1 by dot-multiplying by û1 × û2 and dividing by ‖û1 × û2‖² to get

t1* = [((P2 − P1) × û2) · (û1 × û2)] / ‖û1 × û2‖².  (6)

Operating symmetrically for t2 yields

t2* = [((P2 − P1) × û1) · (û1 × û2)] / ‖û1 × û2‖².  (7)

A useful property of the above solution for the point of intersection is that, if the two lines are skew, as will certainly happen with actual noisy measurements, t1* and t2* represent the parameters of the points of closest approach, that is, the extremal points of the minimum-distance segment. This suggests defining the CoG estimate from the double suspension technique as the midpoint of the segment of closest approach between the two suspension lines:

Ĉ = ½ [(P1 + t1* û1) + (P2 + t2* û2)].  (8)

It is interesting to note that expressing the solution for the CoG location through (8) is nothing but performing a triangulation, that is, a point localization from two angular measurements: this is a well-known concept in the field of angle-only navigation ([10] and references therein, [11]).
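A compact numerical sketch of the procedure: the first function converts inclinometer readings into the body-frame upward vertical, with the normalization safeguard for unphysical readings, and the second estimates the CoG as the midpoint of the segment of closest approach between the two suspension lines. Variable names and the sign convention for the in-plane components are illustrative assumptions, not the authors' code.

```python
import numpy as np

def upward_vertical(X, Y, top_face_up=True):
    """Body-frame upward local vertical from dual-axis inclinometer angles
    X, Y (radians). If noise makes sin^2(X) + sin^2(Y) exceed 1, the sines
    are renormalized as discussed in the text. The minus signs reflect the
    assumed convention that the readings are direction sines of gravity."""
    sx, sy = np.sin(X), np.sin(Y)
    n = sx * sx + sy * sy
    if n > 1.0:                      # unphysical reading: rescale the sines
        sx, sy = sx / np.sqrt(n), sy / np.sqrt(n)
        n = 1.0
    uz = np.sqrt(1.0 - n)
    return np.array([-sx, -sy, uz if top_face_up else -uz])

def cog_estimate(P1, u1, P2, u2):
    """Midpoint of the segment of closest approach between the lines
    L1(t) = P1 + t*u1 and L2(t) = P2 + t*u2."""
    P1, u1, P2, u2 = (np.asarray(v, dtype=float) for v in (P1, u1, P2, u2))
    c = np.cross(u1, u2)             # common normal direction
    c2 = c.dot(c)                    # squared norm of u1 x u2
    d = P2 - P1
    t1 = np.cross(d, u2).dot(c) / c2
    t2 = np.cross(d, u1).dot(c) / c2
    return 0.5 * ((P1 + t1 * u1) + (P2 + t2 * u2))
```

For two noise-free lines through a common point the estimate reproduces that point exactly; with noisy measurements the lines are skew and the midpoint of their minimum-distance segment is returned.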
Experiment Requirements
To assess the accuracy of the proposed double suspension method, it is fundamental first to identify the error sources and then to design the experiment in such a way as to minimize their detrimental effect on the CoG estimate.
Based on the analysis performed in [8], the main error contributions are expected to be: (1) the shift of the assembly CoG induced by the measurement equipment; (2) the inclinometer measurement error; (3) the uncertainty in the location of the suspension points; and (4) the uncertainty in the CoG of the proof mass itself.

Having its own mass distribution and being integral to the proof mass, the measurement equipment (ME) induces a shift of the CoG location for the assembly under test, whose CoG, C_tot, differs from the CoG of the proof mass alone, C, according to

C_tot = (m C + m_me C_me) / m_tot,  (9)

where C_tot is the estimated CoG location from the double suspension method, C_me is the CoG of the measuring equipment alone, m is the proof mass mass, m_me is the mass of the ME, and m_tot = m + m_me. Clearly, we can use (9) to compensate for the ME presence; however, the outcome of the experiment will be affected by how accurate our knowledge of the ME mass and of its CoG location (C_me) is. Since we expect an accuracy of the method in the order of 1 mm, we shall design the experiment so that the ME introduces an uncompensated perturbation on the measured C_tot one order of magnitude lower, that is, 0.1 mm or less. Indeed, from (9) we derive the proof-mass CoG coordinates as

C = (m_tot C_tot − m_me C_me) / m.  (10)

C_me can be estimated from a CAD model; however, such an estimate is affected by a modelling error. For our purposes, we assume a conservative error window for C_me and design the experiment so as to make this uncertainty negligible in the CoG computation. Minimizing the sensitivity to the above uncertainties reduces to placing the ME as close as possible to the CoG and building the proof mass as heavy as possible; that is, a bulk mass is the best choice.
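The ME compensation described above is a one-line weighted-average inversion. A sketch with invented numbers (the 9.2 kg proof mass echoes Section 4, but the ME mass and offsets here are made up):

```python
import numpy as np

def compensate_me(C_tot, m, m_me, C_me):
    """Recover the proof-mass CoG from the measured assembly CoG:
    C = (m_tot * C_tot - m_me * C_me) / m, with m_tot = m + m_me."""
    C_tot = np.asarray(C_tot, dtype=float)
    C_me = np.asarray(C_me, dtype=float)
    return ((m + m_me) * C_tot - m_me * C_me) / m

# Invented numbers: 9.2 kg proof mass, 0.3 kg ME offset 20 mm from the CoG
C_true = np.array([39.0, 37.5, 100.0])           # mm, proof-mass CoG
C_me   = C_true + np.array([20.0, 0.0, 0.0])     # mm, ME CoG from CAD
m, m_me = 9.2, 0.3                               # kg
C_tot = (m * C_true + m_me * C_me) / (m + m_me)  # what the method measures
C_recovered = compensate_me(C_tot, m, m_me, C_me)
```

Because the forward mixing and the compensation are exact inverses, any error in the recovered CoG comes only from the uncertainty in m_me and C_me, which is why the text budgets that uncertainty at 0.1 mm or less.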
Error source 2 is intrinsic to the measuring equipment and cannot be reduced by proper design once the ME is chosen. Error source 3 is caused by both the manufacturing precision and the suspension mechanism.
International Journal of Aerospace Engineering
The manufacturing precision contribution can easily be minimized by measuring the effective dimensions after manufacturing. The suspension mechanism shall be designed to minimize the uncertainty in the suspension location and hinge moment. We examined different solutions such as a universal joint, a uniball, and wire suspension. A thin wire proved the best compromise between cost, ease of manufacture, and expected accuracy. Error source 4 is the easiest to control: in fact, one can take a homogeneous material of known shape (say, a parallelepiped of cast iron) and machine it to strict geometric tolerances. This ensures a negligible CoG shift from the geometric centroid, which can thus be assumed to be the true barycenter. Considering all the above points, we finally opted for a bulk parallelepiped suspended by means of a wire as the experimental setup to verify the proposed method. A rough computation with pessimistic assumptions on the C_me error led to a proof mass of about 10 kg with an 80 × 80 mm base, needed to bound the uncompensated CoG perturbation due to the ME within 0.1 mm. A detailed quantification of all error sources is provided in the next section.
Experiment Design and Error Source Models
As outlined in Section 3, for the experimental verification we employed a steel parallelepiped as proof mass. After machining, the effective dimensions were 78 × 75 × 200 mm and the mass was 9.2 kg. We assumed the body frame origin lying on one of the vertices, with the positive x-axis along the longest edge, the positive y-axis along the shortest edge, and the z-axis completing the right-handed frame.
The overall CoG measuring instrumentation can be regarded as the combination of a suspension mechanism plus the measurement equipment (i.e., the inclinometer and the acquisition hardware); see Figure 2. Each of these two parts introduces errors in the measurements, which are discussed, together with the respective implementation details, in the following subsections.
4.1. Measurement Equipment. The measurement equipment (ME) consisted of the inclinometer plus some acquisition hardware, namely an Arduino Uno, a wireless transmitter, and a power supply. The boards were assembled together with the inclinometer on an Arduino prototyping board to form the complete ME. The Arduino reads the analog inclinometer output, converts it to digital, and sends it through the transmitter to a computer. Wireless transmission was needed to avoid running cables, which would otherwise induce systematic errors in the measurements. The ME was placed on the x-y plane of the proof mass with the inclinometer's top face up. The exact location, which in principle is free, shall nevertheless be selected accounting for the range of the inclinometer (±60° in our application) and the position of the suspension points, to guarantee that when the body is hung, the inclination angles lie within the measurement range.
Errors introduced by the ME are of two kinds, namely the one due to the imperfect knowledge of the location of its own CoG, and the inclinometer measurement errors. A 3D CAD model of the ME was used to get an estimate of its CoG. Clearly, the CAD model is never an exact replica of the real ME, so we needed to assign an error to its CoG estimate. The measuring equipment is a stacked structure with non-uniformly distributed mass, as shown in Figure 3.
To justify the assumed error on C_me, we can think of the ME as built up of 4 volumes: the Arduino Uno volume, the WiFi transmitter volume, the prototyping board plus inclinometer volume, and the battery volume. In each of these volumes, we can conservatively assume that the real CoG of the pertinent mass lies anywhere inside a cube of 10 mm edge around the CAD CoG. This is equivalent to saying that the error random variable of the CoG in each volume has a uniformly distributed probability density function (PDF) in a 10 mm edge cube. Due to the linearity of the CoG expression, the error in C_me (i.e., ΔC_me) is the mass-weighted sum of the 4 random variables:

$$\Delta\mathbf{C}_{me} = \sum_{i=1}^{4} \frac{m_{v_i}}{m_{me}}\,\Delta\mathbf{C}_{v_i}. \tag{11}$$

This means that the PDF of ΔC_me is the convolution integral of the 4 weighted random variables ΔC_{v_i}. Performing a random simulation, we concluded that the total PDF closely resembles a normally distributed PDF with zero mean and standard deviation 1.66 mm (reasonable by the central limit theorem).
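The claim that the mass-weighted sum of four uniform cube errors is approximately normal is easy to check numerically. The mass fractions below are hypothetical (the paper reports 1.66 mm for its actual weights; the assumed fractions here give a comparable figure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mass fractions of the 4 ME volumes (hypothetical; must sum to 1):
# Arduino Uno, WiFi transmitter, prototyping board + inclinometer, battery.
w = np.array([0.25, 0.15, 0.20, 0.40])

# Per-volume CoG error: uniform in a 10 mm edge cube, i.e. each
# coordinate uniform in [-5, 5] mm around the CAD location.
n = 200_000
err = rng.uniform(-5.0, 5.0, size=(n, 4, 3))

# Mass-weighted sum of the 4 volume errors, as in the text.
dC_me = np.einsum('i,nij->nj', w, err)

print(dC_me.mean(axis=0))   # close to zero
print(dC_me.std(axis=0))    # per-axis standard deviation [mm]
```

For a uniform cube of edge L the per-axis standard deviation is L/√12, so the weighted sum has per-axis standard deviation (10/√12)·√(Σw_i²) ≈ 1.54 mm with the assumed fractions; a histogram of any component is visually close to a Gaussian.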
As far as the measurement error is concerned, it is due to both the error affecting the (analog) inclinometer output and the discretization error. The inclinometer accuracy (maximum error) is rated at 0.1°. Since we lack any statistical information, we shall assume a uniform distribution for this source between −0.1° and +0.1°. The voltage signal from the inclinometer is then processed by the Arduino Uno 10-bit analog-to-digital converter. The resulting discretization step is Σ_AD = 0.16°, whose effect can be modelled as a uniformly distributed random variable between ±0.08°, that is, having zero mean and standard deviation equal to Σ_AD/√12 ≈ 0.046°.
The accuracy and discretization errors can be considered independent and additive, so the global PDF can be computed as the convolution of the individual PDFs. The resulting PDF resembles a triangular distribution with zero mean, since the two uniform distributions have comparable widths. To smooth the effect of random errors, we can average many measurements (say n = 20); by the central limit theorem, the PDF of the sample mean resembles a normal distribution with standard deviation reduced by a factor √n.
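The convolution-and-averaging argument can be checked numerically. The error bounds come from the text; the sample counts are arbitrary simulation choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

acc = rng.uniform(-0.10, 0.10, n)   # inclinometer accuracy error [deg]
qnt = rng.uniform(-0.08, 0.08, n)   # A/D quantization error [deg]
meas = acc + qnt                    # sum of two comparable uniforms: ~triangular PDF

sigma_single = meas.std()
print(sigma_single)                 # ~sqrt((0.2**2 + 0.16**2) / 12) deg

# Averaging n_avg = 20 readings shrinks the standard deviation by sqrt(20).
n_avg = 20
avg = meas.reshape(-1, n_avg).mean(axis=1)   # 500000 / 20 = 25000 averaged readings
print(avg.std())                    # ~sigma_single / sqrt(20)
```

The single-reading standard deviation is about 0.074°, dropping to about 0.017° after averaging 20 readings, in line with the √n reduction.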
4.2. Suspension Mechanism.
As anticipated in Section 3, a wire suspension was selected. To keep the suspension points as localized as possible, a 0.8 mm multiwire cable and a bolt with a 1 mm pass-through hole in its center were employed, as shown in Figure 4.
Although there is a small play between the wire and the hole (0.2 mm), we can think of the wire as being clamped onto the top of the bolt. This way, we can model the wire as a clamped beam. The clamp reacts the vertical weight of the proof mass and a moment. The moment originates from the flexural rigidity of the wire, which prevents the suspension point from aligning exactly with the barycenter. Rather, we can more accurately say that the barycenter aligns with the holding point of the wire. The net effect of the bending moment at the clamp is then a shift of the suspension point from the nominal location to a virtual point lying above it by a quantity δ (see Figure 5). This is the second major contribution to the error in locating the suspension point, after the small play around the wire.
To have an educated guess of the maximum δ, we express the displacement Δ as a function of the beam parameters and of the face inclination with respect to the vertical, α_0, obtaining (12) (see Appendix). In (12), F is the proof mass weight, and the product EJ of the elastic modulus (E) and the cross-section moment of inertia (J) is the bending stiffness of the beam. In principle, since we use a multiwire cable, EJ is dependent on the load F; however, in our experiment the tension is constant, so we can assume a fixed F/EJ. Thus, the virtual suspension point is offset above the bolt surface by about (13). Equation (13) shows that δ gets smaller as α_0 decreases (since EJ and F are constant); thus, we can have a worst-case estimate of δ by considering a situation with large α_0. Figure 3 depicts an experiment at high α_0, from which one can visually estimate Δ ≅ 0.8 mm, the cable thickness being 0.8 mm. We then obtain that δ ≅ 1 mm is a conservative upper bound of the offset. The observations above allow us to define an uncertainty volume for the suspension point, which can be assumed to have a square base of 1 mm edge on the plane of the bolt head (due to the play), centered on the measured suspension point, with a vertical height of 1 mm. We can assume an error around the theoretical suspension point, belonging to this volume and drawn from a uniform PDF in that volume.
5. Verification Process
The verification of the proposed method employs a combination of numerical simulations and experiments according to the following steps: (1) Take as input the assumed distributions of the error sources outlined in Section 4.
(2) Perform Monte Carlo simulations to estimate the PDF and the cumulative distribution function (CDF) of the CoG distance error ΔC.
(3) Verify experimentally the error budget: perform many experiments and check whether the results are compatible (in a sense to be soon specified) with the PDF found in simulation.
Such an approach was preferred to a simpler direct estimation of the error distribution through multiple independent trials. In fact, we aim at experimentally verifying the method rather than characterizing the measuring equipment, for which a more extensive test campaign would be needed. Furthermore, characterizing the measuring equipment in absolute terms would be complicated by (10), as the estimation accuracy also depends on the mass under test. We considered 6 suspension points, which yield 15 possible suspension couples, enough for our scope. The number was also selected to limit the perturbation induced by the drilled holes on the true CoG location: we estimated a worst-case shift in the order of 0.01 mm.
A workflow diagram of the entire validation process is given in Figure 6, while the suspension point coordinates are reported in Table 1.
5.1. Monte Carlo Simulations. The Monte Carlo simulation scenario was developed in the MATLAB® environment, according to the following approach. We can regard the double suspension algorithm as a function taking as input a perturbed vector of parameters, p = p_true + δp, and whose output is the CoG estimate. We denote by p_true the vector collecting the true values of such parameters, namely, the suspension point coordinates, the inclinometer readings, and the CoG coordinates of the measuring equipment: p_true = [P_1, P_2, X_1, Y_1, X_2, Y_2, C_me]; by δp, we denote the vector containing the error affecting each parameter (masses were not taken as error sources, as we know them accurately enough to safely neglect the impact of their uncertainties on the CoG computation). Within a simulation, the ideal inclinometer output obtained by hanging the proof mass from a given suspension point can be calculated by applying the algorithm of Section 2 in reverse. Thus, for a given couple of suspension points, the true parameter vector can be computed and then perturbed with random errors drawn from the corresponding PDFs (Section 4). The estimated CoG location is retrieved by direct application of the solution algorithm to the perturbed parameters.
When running the Monte Carlo simulations according to the procedure above, we randomly distributed a large number (25) of suspension points on the surface of the proof mass, to avoid as much as possible any dependency of the outcome on the specific geometric configuration. In fact, as pointed out in [8], the accuracy of the double suspension method also depends on the mutual configuration of the suspension points, getting worse when the CoG and the suspension points become nearly collinear. In such a case, we would be attempting to intersect two lines which are almost parallel: the evaluation of the CoG through (6) and (7) would become an ill-conditioned operation (when û_1 ∥ û_2, the denominators approach 0). For each couple of the 25 suspension points, we generated 1000 perturbed input vectors, which were supplied to the CoG estimation algorithm; the resulting estimate was then compared to the geometric center of the proof mass to compute the error ΔC. Due to the relatively large number of random error contributions, the components of the CoG location error ΔC can be regarded as normally distributed, by the central limit theorem. As a consequence, the norm of ΔC approximately follows a Rayleigh distribution. Hence, we fitted a Rayleigh distribution to the results, obtaining as output the desired PDF of ΔC.
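The structure of one perturbed double-suspension trial can be sketched as follows. The geometry, perturbation magnitudes, and the line-intersection routine below are illustrative assumptions (the paper's actual algorithm is the one of Section 2 via equations (6) and (7)); the sketch only shows the loop: draw errors, intersect the two hanging lines, record ‖ΔC‖, and fit a Rayleigh shape parameter.

```python
import numpy as np

rng = np.random.default_rng(2)

def closest_point(P1, u1, P2, u2):
    """Midpoint of the common perpendicular of two 3D lines P_i + t_i * u_i."""
    d = P1 - P2
    b = u1 @ u2
    e, f = u1 @ d, u2 @ d
    t1 = (b * f - e) / (1.0 - b * b)   # ill-conditioned when u1 is parallel to u2
    t2 = f + b * t1
    return 0.5 * ((P1 + t1 * u1) + (P2 + t2 * u2))

C_true = np.array([100.0, 37.5, 39.0])   # geometric centre of the block [mm], assumed
susp = [np.array([30.0, 0.0, 78.0]),     # suspension points on the top face [mm], assumed
        np.array([170.0, 75.0, 78.0])]

errs = []
for _ in range(2000):
    lines = []
    for P in susp:
        u = C_true - P
        u /= np.linalg.norm(u)
        # perturb the hanging direction (inclinometer error, assumed ~0.05 deg std)
        u += rng.normal(0.0, np.deg2rad(0.05), 3)
        u /= np.linalg.norm(u)
        # perturb the suspension point location (assumed ~0.3 mm std)
        lines.append((P + rng.normal(0.0, 0.3, 3), u))
    (Pa, ua), (Pb, ub) = lines
    est = closest_point(Pa, ua, Pb, ub)
    errs.append(np.linalg.norm(est - C_true))

errs = np.array(errs)
B = np.sqrt(np.mean(errs ** 2) / 2.0)   # maximum-likelihood Rayleigh shape parameter
print(f"Rayleigh shape parameter: {B:.2f} mm")
```

With the assumed perturbation levels, the fitted shape parameter comes out at a fraction of a millimetre, the same order as the paper's B = 0.8 mm.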
5.2. Experimental Verification Method.
The outcome of the Monte Carlo simulations was checked against a test campaign carried out using the experimental setup in Figure 2. For the subsequent analysis, we can regard the series of suspensions performed as a Bernoulli process. Assume we performed n trial experiments, each of whose outcomes could be either success (S) or failure (F); in each trial, we had a probability p of success and q = 1 − p of failure. For our experiments, we call success the event in which the error norm is lower than a given threshold w_e, and failure otherwise. Then, if we denote by f_err the PDF of ΔC obtained from simulation and by F_err the corresponding CDF, for an error window w_e, F_err(w_e) provides the probability p that the error belongs to the error window (i.e., success): p = F_err(w_e). To validate the CDF obtained through simulations against the experiments, we check that for some w_e

$$p_{exp}(w_e) \approx p(w_e). \tag{14}$$

In (14), p_exp is the experimental probability of success given by the maximum likelihood estimator for the Bernoulli process parameter p [12], that is,

$$p_{exp}(w_e) = \frac{s(w_e)}{n}, \tag{15}$$

n being the number of trials and s the number of successes given w_e.
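The bookkeeping of equations (14)-(15) can be sketched as follows. The trial errors here are emulated by sampling the simulated Rayleigh distribution itself, purely to illustrate the check (in the paper, the errors come from the physical suspension experiments):

```python
import numpy as np

rng = np.random.default_rng(3)

B = 0.8     # Rayleigh shape parameter from the simulations [mm]
w_e = 1.0   # error window under test [mm]

# Predicted success probability from the Rayleigh CDF: p = F_err(w_e).
p = 1.0 - np.exp(-w_e ** 2 / (2.0 * B ** 2))

# "Experimental" trials: emulated here by inverse-CDF sampling of the same
# Rayleigh distribution (in the paper, they are physical suspensions).
n = 15                                                  # suspension couples
trial_errs = B * np.sqrt(-2.0 * np.log(rng.uniform(size=n)))
s = int(np.sum(trial_errs < w_e))                       # successes

p_exp = s / n   # maximum-likelihood Bernoulli estimate, as in (15)
print(p, p_exp)
```

With only n = 15 trials, p_exp scatters around p with a binomial standard deviation of roughly √(p(1−p)/n) ≈ 0.13, which is why only an approximate agreement (14) can be demanded.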
Note that the larger the number of experiments n, the higher the confidence in the estimator (15) for p_exp(w_e). However, our number of trials is already constrained by the considerations made in the previous section.
Results
Considering all possible couples of the 25 suspension points, the Monte Carlo simulation explored 300 suspension pairs for a total number of 3·10^5 trials. Figure 7 depicts the histogram of the resulting CoG error.
The best-fit Rayleigh distribution has a shape parameter B = 0.8 mm. If we assume the error to be isotropic in space, B corresponds to the (common) standard deviation of the three scalar error components.
During the experimental campaign, the CoG was measured through all possible combinations of suspension point couples. The results are depicted in Figure 8 and summarized in Table 2, which compares the CDF obtained from the simulated best-fit Rayleigh distribution (p) to the one from the experiments (p_exp), according to the method outlined in Section 5.2.
The agreement is very good at central w_e and poorer, but still reasonable, at extremal w_e. This is to be expected, since the "front" and "tail" of the Rayleigh distribution are low-probability regions; that is, it is less likely to obtain results there. We can conclude that both experimentally and in simulation the method works as expected, reaching accuracies in the order of 1 mm.
The error analysis performed so far is specific to the assumed ME and to the mass ratio between the proof mass and the ME itself; strictly speaking, these conditions are necessary for the estimated error PDF to be valid. It is of interest to briefly assess to what extent the results obtained can be extrapolated to a generic experiment for estimating the CoG of a small spacecraft. To this end, consider (10) reformulated in terms of error variables:

$$\Delta\mathbf{C} = \frac{m_{tot}\,\Delta\mathbf{C}_{tot} - m_{me}\,\Delta\mathbf{C}_{me}}{m}, \tag{16}$$

where ΔC_tot is the error incurred by the method when estimating the CoG of the entire assembly (ME + PUT), and ΔC_me is the error affecting the knowledge of the ME CoG location.
Figure 1: Inclinometer adopted in the experiment.

Error sources: (1) true barycenter shift due to the measurement equipment; (2) measurement errors (inclinometer error plus analog-to-digital conversion); (3) knowledge of the suspension point location; (4) small geometric errors of the proof mass.

Figure 2: Experimental setup; two consecutive suspensions as the one shown are required to locate the CoG.

Figure 4: Detailed view of the suspension mechanism.

Figure 5: Schematic representation of the virtual suspension point concept.

Figure 7: Histogram of the occurrences of the CoG estimation error.

Figure 8: Cumulative distribution function of the CoG measurement error: experimental (markers) versus numerical (full line).

Table 2: Bernoulli checks for different error windows.
The Puerto Rico Alzheimer Disease Initiative (PRADI): A Multisource Ascertainment Approach
Introduction: Puerto Ricans, the second largest Latino group in the continental US, are underrepresented in genomic studies of Alzheimer disease (AD). To increase representation of this group in genomic studies of AD, we developed a multisource ascertainment approach to enroll AD patients and their family members living in Puerto Rico (PR) as part of the Alzheimer's Disease Sequencing Project (ADSP), an international effort to advance broader personalized/precision medicine initiatives for AD across all populations.

Methods: The Puerto Rico Alzheimer Disease Initiative (PRADI) multisource ascertainment approach was developed to recruit and enroll Puerto Rican adults aged 50 years and older for a genetic research study of AD, including individuals with cognitive decline (AD, mild cognitive impairment), their similarly aged family members, and cognitively healthy unrelated individuals aged 50 and up. Emphasizing identification and relationship building with key stakeholders, we conducted ascertainment across the island. In addition to reporting on PRADI ascertainment, we detail admixture analysis for our cohort by region, and group differences in age of onset and cognitive level by region and ascertainment source.

Results: We report on 674 individuals who met standard eligibility criteria: 282 AD-affected participants (42% of the sample), 115 individuals with mild cognitive impairment (MCI) (17%), and 277 cognitively healthy individuals (41%). There are 43 possible multiplex families (10 families with 4 or more AD-affected members and 3 families with 3 AD-affected members). Most individuals in our cohort were ascertained from the Metro, Bayamón, and Caguas health regions. Across health regions, we found differences in ancestral backgrounds and select clinical traits.

Discussion: The multisource ascertainment approach used in the PRADI study highlights the importance of enlisting a broad range of community resources and providers.
Preliminary results provide important information about our cohort that will be useful as we move forward with ascertainment. We expect that results from the PRADI study will lead to a better understanding of genetic risk for AD among this population.
INTRODUCTION
Alzheimer disease (AD) is a progressive neurodegenerative disorder that affects 1 in 9 Americans over the age of 65. The disease has a significant impact on individuals with AD and their families and poses a huge financial and social burden on society. To date, over 20 loci have been identified as risk factors for AD in non-Hispanic White (NHW) genome-wide association studies (GWAS), with limited GWAS in other populations (Lambert et al., 2013). In addition, the only large AD sequencing effort to date, the Alzheimer's Disease Sequencing Project (ADSP) (Beecham et al., 2017), has focused its efforts on individuals of NHW descent, including only a limited number of Hispanic and African American individuals. The importance of examining AD in other populations (Ramirez et al., 2008) is highlighted by findings showing that Caribbean Hispanics from the Dominican Republic are twice as likely as NHW to have late-onset Alzheimer disease (LOAD) (Tang et al., 1998, 2001). Furthermore, the incidence of new LOAD cases in families from the Dominican Republic is three times that found in NHW families (Vardarajan et al., 2014), even though the genetic risk of LOAD is similar. Despite clear evidence that points to the importance of investigating AD in underserved populations, this work has lagged.
Although comparisons of risk among different ethnic groups are complicated by differences in the assessment of cognitive decline across studies and by population differences in willingness to participate in medical research, there are several possible explanations for increased incidence in these specific ethnic groups (e.g., lower educational attainment, higher rates of cardio- and cerebrovascular disease, and metabolic syndrome). While the importance of diversity and inclusion in genomic research has been emphasized for more than two decades (NIH Revitalization Act of 1993, Public Law 103-143), many groups, including Hispanics, are underrepresented in biomedical research (Shavers et al., 2002; Sheppard et al., 2005; Calderon et al., 2006), including genomic and translational studies (Armstrong et al., 2005; Ricker et al., 2006; Armstrong et al., 2012). Further, this lack of participation has the potential to delay the application of novel treatments that may be relevant to these populations, exacerbating existing health disparities in a variety of diseases, including AD. Specifically, given the importance of genomic research in the development and implementation of precision medicine initiatives (Hampel et al., 2017), there is an urgency to engage with and include underserved and underrepresented groups in such research to enable access to these advanced treatments (Wilkins, 2018).
Alzheimer disease is the most common form of dementia and the fourth leading cause of death in Puerto Rico (PR) (Friedman et al., 2016). The population of PR was estimated at 3,474,182 individuals in 2015, with 617,007 over the age of 65, and AD prevalence of 12.5% (Puerto Rico Department of Health, 2015). Further, according to Perreira et al. (2017) the population of PR is aging and struggles with high rates of comorbid conditions (e.g., hypertension and diabetes) that contribute to dementia. These numbers underscore the need to investigate early risk factors and develop the necessary research to study the neurobiology of cognitive decline in Puerto Ricans and more broadly Hispanics. Furthermore, enriching AD genomic studies with Hispanic populations is fundamental for reducing health disparities, delivering precision medicine, and ultimately improving health outcomes for this community.
To address the range of disparities experienced by Hispanics due to under-representation in genomic studies of AD, we developed the Puerto Rico Alzheimer Disease Initiative (PRADI). The goal of this National Institute on Aging funded project is twofold. First, the PRADI study examines genomic risk for AD in Puerto Ricans and adds to the growing body of knowledge regarding Hispanic risk for AD. Second, the PRADI study makes comparisons using two types of controls: family-based (related controls) and case-control (unrelated controls), paralleling and building on the ongoing work of the ADSP (Beecham et al., 2017). Furthermore, Puerto Ricans are an admixed population, enriched for at least three ancestries (European Caucasian, Western African, and Amerindian/Taino), resulting in complex population substructure (Claudio-Campos et al., 2015; Rajabli et al., 2018). The use of population substructure (i.e., global and local ancestry) allows models to be adjusted to improve genetic analyses. The importance of examining ancestral contributions in Hispanics can be seen in studies of complex diseases, including asthma (Gignoux et al., 2019), multiple sclerosis (Amezcua et al., 2018), and cancer (Salgado-Montilla et al., 2017; Diaz-Zabala et al., 2018). The usefulness of understanding and incorporating genotypic and admixture information into the conceptualization and management of disease among Puerto Ricans is becoming increasingly apparent (Morales-Borges, 2017; Diaz-Zabala et al., 2018).
In contrast to other studies of Puerto Ricans (Tucker et al., 2010), the current study focuses exclusively on participants from the island of PR. We describe the design and implementation of our multisource method for recruiting individuals for the genetic study of AD and our corresponding work in the community to increase study participation among eligible Puerto Ricans. Equally important, we describe our cohort with respect to clinical features and ancestral proportions by region. These results provide a preliminary picture of our PRADI cohort.
MATERIALS AND METHODS
A multisource ascertainment approach was implemented to recruit and enroll participants into the PRADI study. As described below, the approach consisted of different phases that revolved around community engagement and included: (a) identification and relationship building with key stakeholders from several organizations; (b) collaborative agreement on ascertainment methods and formalization using memorandums of understanding; (c) targeted actions and recruitment events; and (d) education and dissemination of information about AD to health professionals and the general public. This approach was designed to establish and strengthen collaborative relationships with key stakeholders to facilitate ascertainment for this study and future studies.
Ascertainment efforts were carried out in PR and encompassed all seven health regions (Arecibo, Bayamón, Caguas, Fajardo, Mayagüez, Metro, and Ponce) as defined by the Puerto Rico Department of Health. Only bilingual personnel were sent to the sites and plain Spanish was used for all verbal and written study-related communication (materials for public dissemination were developed for a third-grade reading level). Standard screening and evaluation activities were performed, which included collection of clinical, family, and medical history and neurocognitive testing. Individuals were determined to be cases or controls with further specification depending on whether they were family history positive or negative for AD.
Finally, to investigate potential differences among our participants from different parts of the island, we tested for differences in age of onset and 3MS scores by health region and ascertainment source (i.e., AD specialist, adult care center, or community event/activity). We also conducted admixture analysis to examine the population substructure of our Puerto Rican cohort by region to evaluate differences in ancestry proportions among the health regions.
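A test for group differences such as those described above can be sketched as a one-way ANOVA. The data below are synthetic and the region means are hypothetical, used only to illustrate the computation (not PRADI data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic age-of-onset samples for three hypothetical health regions
# (illustrative only -- not PRADI data).
groups = [
    rng.normal(74.0, 6.0, 40),   # e.g., "Metro"
    rng.normal(76.0, 6.0, 35),   # e.g., "Bayamon"
    rng.normal(75.0, 6.0, 30),   # e.g., "Caguas"
]

# One-way ANOVA F statistic, computed from first principles.
all_x = np.concatenate(groups)
grand = all_x.mean()
k, N = len(groups), len(all_x)
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
F = (ss_between / (k - 1)) / (ss_within / (N - k))
print(f"F({k - 1}, {N - k}) = {F:.2f}")
```

The F statistic would then be compared against the F(k−1, N−k) distribution to decide whether mean age of onset differs across regions.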
Ascertainment Phase One: Getting to Know the Field Stakeholders From Multiple Sectors
In the initial phase of our multisource ascertainment approach, the local team identified potential sources of participants within PR communities by interacting with groups and providers that serve the AD population. There are multiple groups and ongoing community initiatives working to increase AD awareness in PR. Our goal was to establish collaborative relationships with stakeholders from different sectors (Figure 1). These interactions served as a starting point to disseminate information about the study, to identify sources for cases and controls, to build networks with potential collaborators, and to create opportunities for direct ascertainment. In addition, these initial meetings served as a venue for discussing the importance of inclusive recruitment in genetics research, especially how a lack of diversity can delay specific populations' access to personalized/precision medicine. The primary groups we approached included:
Governmental Stakeholders
We contacted central and local government representatives, including the PR Office of the Ombudsman for the Elderly (OPPEA, for its Spanish acronym), a legal affairs office for older adults, and the AD Registry of the Health Department of PR. As an initial step, local team members joined the Health Department's Alzheimer's Advisory Board. This process allowed us to meet with key stakeholders to discuss the PRADI study. Through these initial contacts, OPPEA provided us with additional contacts at the provider level, including various programs and adult care centers for older adults and those with AD and other cognitive problems. Through these contacts, we established ties with additional local government representatives of the municipalities, including Cidra, Fajardo, Carolina, Aguadilla, and Arecibo, among others.
Community Non-profit Organizations (NPO)
To establish community based collaborations in the non-profit sector, we contacted multiple groups that serve older adults in PR, including the Puerto Rican Chapter of the AARP; Mente Activa (Active Mind), which is a non-profit organization that promotes physical and mental activity for older adults and those with dementia; and Organización Pro Ayuda a Personas con Alzheimer (OPAPA), another non-profit organization that provides education and support to people with AD and their families. Our team met with leadership in these organizations to provide information about the PRADI study.
Religious Groups
Our primary religious contact was Lutheran Social Services of PR, a non-profit faith-based organization involved in providing services to older adults. It is funded to provide programs that train dementia-capable personnel and service providers, as well as programs to identify older adults with early signs of AD. In addition, we contacted the Catholic Church, especially the Seminary of PR and the Caguas Cathedral. Both groups agreed to assist with the study by providing access to participants and disseminating information about our study during religious services and through print media.
Ascertainment Phase Two: Defining and Formalizing Collaborations With Stakeholders
The next phase in our multisource ascertainment approach was seeking and using input from the stakeholders and organizations about best practices for ascertainment. This process typically involved in person discussions between the local team (headed by Dr. Feliciano, a neurologist who specializes in the care of older individuals) and the organizations. This allowed us to define our ascertainment practices in alignment with accepted practices for the respective organizations, groups, etc. In addition, it allowed us to address any concerns at the outset. Based on these discussions, we constructed memorandums of understanding (MOUs) to specify the nature of the relationship and outline collaborative activities with the stakeholders from different sectors. MOUs were signed with OPPEA, the Puerto Rican Chapter of the AARP and Lutheran Social Services of PR. In addition, we established MOUs with Mayors and their staff from several municipalities, including Cidra, Fajardo, Carolina, Aguadilla, and Arecibo. As part of the MOU, the Universidad Central del Caribe provided insurance endorsements for the use of their venues during recruitment events.
Ascertainment Phase Three: Targeted Actions and Direct Recruitment
Working with the various groups with whom we had MOUs, we set up multiple recruitment events. Depending on the site, pre-recruitment conferences were scheduled to educate center personnel (e.g., primary doctors, nurses, social workers, psychologist, and others dementia specialists) or the public. These pre-recruitment meetings were used to provide general information about AD and to clarify aspects of the study in person to healthcare providers as well as potential participants and their families. At meetings involving the public, potential participants, or family members we gathered contact information for further follow up, leading to recruitment of interested individuals. This also allowed us to estimate the number of participants and to plan our ascertainment resources accordingly.
Ascertainment Phase Four: Giving Back: Dissemination and Education
We conducted a number of follow-up events to provide information for caregivers and center personnel at the various recruitment sites. For physicians, we were able to provide continuing medical education through the Puerto Rican College of Physicians and Surgeons; for health professional staff, we provided participation certificates covering early detection of AD and culturally relevant adaptation of comprehensive, evidence-based community support strategies.
This follow-up allowed us to disseminate information about AD to the community. The provision of information about AD to non-AD healthcare workers and general communities will help us build local resource networks and empower them with knowledge about dementia capabilities to improve the quality of life of the participants and their caregivers. In addition, at select venues we have also organized educational outreach activities where we served as expert speakers, providing information about dementia research and care. Typical audiences included healthcare providers (e.g., nurses, social workers, case managers, and primary care physicians) and the public. We have also engaged in dementia-related initiatives via social media, like "Un café por el Alzheimer" (A cup of coffee for Alzheimer) (Friedman et al., 2016), which shares our study information on their social media platforms.
Study Population
A convenience sampling method with a geographic distribution throughout the island was used. PRADI participants were self-reported Puerto Rican adults aged 50 years and older, with no restrictions on gender or socioeconomic status. While the majority of participants were residents of PR, a small fraction of relatives from Puerto Rican families living in the continental United States (Florida, New York, Connecticut, and Massachusetts) were enrolled. In addition, some individuals less than 50 years of age were enrolled. When conducting our analyses, we included only residents of PR who were 50 years of age or greater.
Our cohort is further specified based on seven health regions as defined by the PR Department of Health. These seven regions contain multiple municipalities and place this cohort in the context of the previously established health-related structure. Each of the health regions is labeled by the major municipality within each region (with the exception of the Metro region). As seen in Figure 1, the most heavily populated areas per the 2010 census are the Metro, Caguas, and Bayamón regions, containing 22, 16, and 16% of the total population, respectively.
Per the same census period, ∼15% of individuals in PR were over 65 years of age.
Ascertainment Sources
All participants were ascertained via three main sources: AD specialists, adult care centers, and community events. This approach allowed us to capture a wide range of AD cases from varied socioeconomic backgrounds and education levels. All individuals were recruited using site-specific IRB approved protocols.
AD Specialists
Several AD specialists (neurologists, psychiatrists, and geriatricians) served as collaborators and referred patients who met inclusion criteria and were interested in participating in the PRADI study. These included patients with AD, mild cognitive impairment (MCI), and dementia. As described below in the screening and evaluation section, we obtained clinical and medical records for patients who were recruited via AD specialists.
AD Centers and Adult Care Centers
To date, we have recruited participants from seven AD dedicated centers and advanced age nursing homes across the island, identified through the OPPEA directory of services website. The AD centers and nursing homes serve between 20 and 40 individuals who are typically older than 60 years of age (with or without the diagnosis of AD) on a daily basis. These centers focus on providing therapeutic, social, and recreational activities to improve quality of life, as well as educating, and supporting caregivers or family members.
Community Groups
We conducted recruitment events in various municipalities. Typically, these recruitment events were preceded by a prerecruitment event. The actual recruitment visits were then conducted at various centers or in private spaces. During these events, our multi-disciplinary teams consented participants (or their proxies), conducted cognitive screenings, and drew blood samples. These events ranged in size from small venues that attracted 20 or so individuals to much larger events that drew 60 or more individuals. We were able to enroll cases and controls during these events.
Inclusion/Exclusion Criteria
Participants were enrolled in the following categories: cases (AD and MCI), unaffected family members of cases, or unrelated individuals with no cognitive problems. To be enrolled, participants had to meet basic inclusion criteria. All individuals had to: (a) be of Puerto Rican ancestry (with at least one grandparent born on the island); (b) be ≥50 years of age; and (c) be willing to participate and provide informed consent or, in cases of serious cognitive impairment, have a family member consent on their behalf as a proxy.
To be included as a case, we required that individuals have a previous clinical diagnosis of AD, MCI, dementia, or show evidence of a memory disorder, and meet standard criteria for AD or MCI (McKhann et al., 1984; Albert et al., 2011; McKhann et al., 2011). We included cases from families (family history positive) as well as sporadic or isolated cases (family history negative). We excluded individuals whose memory and cognitive problems were secondary to other causes (e.g., stroke, psychoses, etc.) and those with a known mutation (e.g., PS1, PS2, or APP).
To be included as a control, individuals had to meet basic inclusion criteria, have no prior clinical diagnoses of a memory disorder or subjective memory complaints, demonstrate no cognitive problems on neurocognitive screening and assessment, and be unrelated to our cases. Unaffected family members had to meet the same inclusion criteria as the controls in addition to being a first-or second-degree relative of a case. For unaffected family members, we typically included the oldest available individual.
Screening and Evaluation
For participants enrolled as cases (i.e., with suspected memory problems or known dementias), we conducted a detailed chart review during which we corroborated clinical diagnoses and extracted current and past medical histories, current and past medications, family histories (pedigrees), and sociodemographic information. In addition, we collected clinical neurologic and neuropsychological test data, neuroimaging results, and pertinent lab values (e.g., hematology, thyroid function, lipid profile, vitamin D and B12 levels, and liver function tests) used to rule out secondary causes such as hypothyroidism and vitamin deficiency.
For presumptive cases, we conducted an initial screening with the Modified Mini-Mental State Examination (3MS) (Folstein et al., 1975; Teng and Chui, 1987) followed by a cognitive evaluation that included the NIA-LOAD cognitive battery (Morris et al., 2006; Weintraub et al., 2009). In addition, we administered the Clinical Dementia Rating Scale (CDR) (Yesavage, 1988). Individuals who were deemed cognitively normal were screened with the 3MS (Folstein et al., 1975; Teng and Chui, 1987) and the CDR. For most cognitively normal individuals, we administered the NIA-LOAD battery.
Adjudication
All clinical, historical, and screening/evaluation test data (e.g., laboratory tests, neurologic examination, neuroimaging, and neuropsychological screening and testing) from individuals with a known or suspected dementia were reviewed by a clinical adjudication panel consisting of a neurologist, a neuropsychologist, and clinical staff. The panel reviewed all data and assigned best-estimate diagnoses. To be classified as AD, individuals had to meet the current NIA-AA criteria (McKhann et al., 2011). They were further classified as definite (neuropathologic confirmation), probable, or possible AD. Diagnoses of MCI were assigned using the NIA-AA criteria (Albert et al., 2011). Cognitively normal individuals with no history of memory problems and MMSE or 3MS scores above clinical cutoffs were designated as unrelated controls for the study. Family-based controls were evaluated similarly for inclusion in family-based analyses (Beecham et al., 2017). In the course of adjudication meetings, team members discussed cases until a diagnostic classification was determined. For those cases in which the team was unable to arrive at a final decision, the team stipulated the reason and corrective actions were taken (e.g., obtaining a more detailed history, retesting, etc.). In the event of a disagreement, the team consulted with an independent dementia specialist.
Analysis
To test for possible differences in our cohort related to where participants live and how they were ascertained, we compared mean 3MS scores and mean age of onset (AAO) for cases by region and recruiting source. Cases consisted of both AD and MCI phenotypes. In addition, for our controls we were able to compare mean 3MS scores by region. All analyses were performed using one-way ANOVA in SAS and SPSS (SAS Institute Inc., 2011;SPSS, 2013). P values lower than 0.05 were considered statistically significant.
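The one-way ANOVA comparisons described above can be sketched with scipy; the group scores below are hypothetical stand-ins, not PRADI data.

```python
from scipy import stats

# Hypothetical 3MS scores for cases from three ascertainment sources
# (illustrative values only, not the study data).
specialist = [55, 48, 62, 50, 58]
care_center = [60, 52, 57, 49, 63]
community = [54, 59, 51, 61, 56]

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(specialist, care_center, community)
print(f"F(2,12) = {f_stat:.2f}, p = {p_value:.3f}")
print("significant" if p_value < 0.05 else "not significant")
```

The same call generalizes to the seven-region comparison by passing seven score lists.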
In addition, we conducted an admixture analysis to estimate the proportions of admixture (European, African, and Native American) in our cohort. Genotyping and quality control methods are described elsewhere (Alexander et al., 2009; Rajabli et al., 2018). Briefly, genotyping was performed on the Expanded Multi-Ethnic Genotyping Array and Global Screening Array (Illumina, San Diego, CA, United States) and quality was assessed using PLINK software, v.2. Using reference panels (African, European, and Native American populations) from the Human Genome Diversity Project, we conducted admixture analysis, using ADMIXTURE software (Alexander et al., 2009; Rajabli et al., 2018), to generate average ancestry proportions across PR's seven health regions.
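The per-region averaging of ancestry proportions can be sketched from an ADMIXTURE Q-matrix, whose rows are per-individual ancestry fractions at K = 3. The matrix values, region labels, and column order below are hypothetical, not the real output.

```python
import numpy as np

# Hypothetical Q-matrix (K = 3): each row is one individual's estimated
# African, European, and Native American ancestry fractions (rows sum to 1).
Q = np.array([
    [0.15, 0.72, 0.13],
    [0.30, 0.58, 0.12],
    [0.10, 0.75, 0.15],
    [0.20, 0.66, 0.14],
])
# Hypothetical health-region label for each individual.
regions = np.array(["Metro", "Metro", "Ponce", "Ponce"])

# Average ancestry proportions per region.
for region in np.unique(regions):
    mean_anc = Q[regions == region].mean(axis=0)
    print(region, dict(zip(["AFR", "EUR", "NAT"], mean_anc.round(3))))
```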
RESULTS
We have enrolled 770 individuals over a 30-month period, 710 of whom were from PR. After removing individuals <50 years of age (35 unaffected, 1 MCI), our current dataset consisted of 674 individuals. The distribution of enrollment across the seven health regions of PR, as seen in Figure 1, shows the heaviest ascertainment in the Metro (44%; N = 295), Caguas (20%; N = 134), and Bayamón (16%; N = 106) regions, which reflects the greater population densities of these regions and cities. Enrollment numbers for the seven health regions are presented in Table 1, which also provides the numbers for the respective municipalities within those health regions.
Participants were recruited from three sources: AD specialists (N = 261, 39%), adult care centers (N = 201, 30%), and community events (N = 202, 30%). Not surprisingly, as seen in Table 2, most of the AD cases were recruited via the AD specialists, while the largest number of MCI cases were ascertained through community events. Figure 2 provides additional information regarding enrollment sources per the respective health regions. Finally, our cohort can be further delineated by whether individuals were part of a family or ascertained as an isolated/sporadic case. Of the 43 multiplex families that have been completed to date, 10 families contain four or more living individuals with AD, 3 families contain 3 living individuals with AD, and 31 families contain 2 living individuals with AD. The mean number of LOAD cases per multiplex family is 3.9. Among the 198 individuals from those multiplex families, 73 (37%) meet the criteria for LOAD, 19 (9%) meet the criteria for EOAD, 31 (16%) meet the criteria for MCI, and 75 (38%) meet the criteria for no cognitive problems.
Admixture Results
We examined the population structure of Puerto Ricans using supervised ADMIXTURE analysis at K = 3. Figure 3A illustrates the results from the ADMIXTURE analysis in a bar-plot figure. Each vertical bar represents an individual and the corresponding estimates of the fractions of continental ancestries (African, European, and Native American). On average, Puerto Ricans have mostly European ancestry, with a mean value of 69.3% (SD = 12.2). Mean values for African and Native American ancestry are 17.3% (SD = 12.2) and 13.4% (SD = 4.2), respectively, as seen in the box plots (Figure 3B). Figure 4A illustrates the bar-plots of admixed individuals across the Puerto Rican health regions and shows heterogeneous admixture patterns. Results of the admixture analysis are in general agreement with recent genetic studies showing a three-way admixture (European, African, and Native American) structure in Puerto Ricans (Via et al., 2011).
Clinical Comparisons
Separate one-way ANOVAs were conducted to test if mean values for AAO and the 3MS differed by (a) ascertainment region (i.e., the seven health regions of PR) and (b) ascertainment source (AD specialist, adult care center, and community).
Age at Onset (AAO)
The mean AAO values for our AD and MCI cases were 74.1 (SD = 9.4) and 71.2 (SD = 8.5), respectively. As noted above, for the purposes of analysis we combined these into one group (cases), which had a mean AAO value of 73.2 (SD = 9.2). The mean AAO values for the seven ascertainment regions and three sources are shown in Table 3.
Across the different regions, mean AAO values ranged from 70.3 (SD = 7.4) in Mayagüez to 75.9 (SD = 9.6) in Fajardo. Results of one-way ANOVA found no statistically significant differences in AAO across the different health regions, F(6,385) = 0.92, p = 0.48. The mean AAO values for the three ascertainment sources ranged from 70.6 (SD = 4.6) for AD specialists to 76.4 (SD = 9.0) for cases ascertained through adult care centers. The results of the one-way ANOVA found significant group differences among the three ascertainment sources, F(2,382) = 16.29, p < 0.001. Post hoc tests showed mean AAO was higher in patients recruited from the community sites (+4.1 years) and adult care centers (+6.0 years) than it was for patients ascertained from AD specialists.
Modified Mini Mental State Examination (3MS)
The mean 3MS scores for our AD and MCI cases were 52.6 (SD = 23.5) and 80.1 (SD = 12.2), respectively; the overall mean 3MS score for all cases was 63.5 (SD = 24). The mean 3MS scores for the seven ascertainment regions and sources are seen in Table 3.
Among the health regions, mean 3MS scores ranged from 46.3 (SD = 28) in Mayagüez to 69.5 (SD = 19.9) in the Metro region. Note that we dropped the Fajardo region, as there were only three 3MS scores. For these comparisons, the homogeneity of variances assumption was violated, as assessed by Levene's Test of Homogeneity of Variance (p = 0.008). The one-way Welch ANOVA results show statistically significant differences in mean 3MS scores between the health regions, Welch's F(5,52.96) = 3.81, p = 0.005. Games-Howell post hoc analysis revealed only one statistically significant comparison (p < 0.01), between the Metro and Mayagüez regions (23.3 ± 5.8 [mean ± standard error]). For source, the mean values ranged from 63.1 (SD = 24) for cases ascertained via the community to 64.2 (SD = 27.6) for cases ascertained through AD specialists. Again, Levene's Test of Homogeneity of Variance was significant (p = 0.03), indicating that the homogeneity of variances assumption was violated, prompting use of Welch's ANOVA.
Results of Welch's ANOVA found no statistically significant differences in mean 3MS scores across ascertainment sources, Welch's F(2,130.5) = 0.04, p = 0.96.
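Welch's ANOVA, used above because the homogeneity-of-variances assumption was violated, can be computed directly from its standard formula. The 3MS scores below are hypothetical, not the study data.

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's one-way ANOVA, robust to unequal group variances."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])
    w = n / variances                                  # precision weights
    grand_mean = np.sum(w * means) / np.sum(w)
    num = np.sum(w * (means - grand_mean) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) * tmp / (k ** 2 - 1)
    f = num / den
    df1 = k - 1
    df2 = (k ** 2 - 1) / (3 * tmp)                     # Welch-adjusted df
    p = stats.f.sf(f, df1, df2)
    return f, df1, df2, p

# Hypothetical 3MS scores for two regions (illustrative only).
metro = [70, 68, 75, 66, 72, 69]
mayaguez = [45, 50, 40, 52, 44]
f, df1, df2, p = welch_anova(metro, mayaguez)
print(f"Welch's F({df1},{df2:.1f}) = {f:.2f}, p = {p:.4f}")
```

With two groups, Welch's F reduces to the square of the Welch t-statistic, which gives a convenient sanity check.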
DISCUSSION
Using a multisource approach that emphasized community engagement and was tailored to the Puerto Rican population, we were able to enroll eligible participants and their family members across PR. A major feature of our community engagement efforts was the development of partnerships with leaders of health initiatives in municipalities and resources within those municipalities. These included the health department, governmental organizations, community-based organizations, religious groups, and various healthcare providers. Establishing strong community partnerships allowed us to develop strategies with input from different parts of the community to achieve an ascertainment approach that was sensitive to the local culture. Our multisource approach emphasizes community engagement beginning with the identification of and establishment of relationships with key stakeholder groups and organizations. This allowed us to develop mutually agreed upon ways to implement research activities and create memorandums of understanding to formalize implementation. Working with these stakeholders and organizations enabled us to conduct outreach and ascertainment activities in the respective municipalities. Concurrent with the outreach activities and recruiting events (and as a way of giving back to the communities), we provided information and educational opportunities to healthcare providers and the public. This community engagement approach, developed for PRADI by AD clinicians and researchers in Puerto Rico and Miami, is a platform for our ongoing ascertainment efforts.
Using this approach, we have enrolled 674 individuals from PR over the age of 50 for our PRADI study. These individuals were recruited fairly evenly from the three ascertainment sources and were concentrated in the three health regions with the largest numbers of individuals: Metro, Bayamón, and Caguas. We also observed that the main ascertainment sources varied by health region, reflecting different resources in the respective regions. Further, while the percentage of individuals ascertained in select regions paralleled the percentage of the total population for the region, the Metro and Ponce regions were disparate: 44% of our participants were ascertained in the Metro region, which constitutes 22% of the population, vs. 3% of our participants ascertained in the Ponce region, which constitutes 14% of the population. These ascertainment figures have already begun to inform our subsequent recruitment efforts, as we emphasized the need to engage other sectors of PR (e.g., Ponce).
The importance of recruiting in regions such as Ponce and Mayagüez is also reflected in the results of our admixture analysis showing differences in the proportion of European and African ancestry among individuals from these regions. The failure to ascertain participants from regions with different ancestral backgrounds could potentially limit the applicability of important findings to these groups. The significance of this for the PRADI study is reinforced by work showing that different ancestral backgrounds may play a significant role in modifying the effect of APOE on risk for AD (Rajabli et al., 2018). These results are preliminary and will need further investigation, in particular to specify area of origin for participants vs. current area.
In addition to potential ancestral differences across the different regions, we observed clinical differences in our cohort in relation to ascertainment region and source. For instance, participants' mean 3MS scores varied by ascertainment region, although the only significant difference was between the Mayagüez and Metro regions. This may reflect differences in the sources of these participants, as most of the individuals from Mayagüez were ascertained in the community. While there were no significant differences in AAO among participants from these different regions, we observed that AAO varied according to ascertainment source. Specifically, individuals who had been seen by AD specialists were more likely to have been identified as having cognitive/memory problems at younger ages. Aside from differences in sample size, the observed differences in AAO and 3MS values by ascertainment region and source most likely reflect the complex interplay of multiple influences, including access to AD specialists, availability of dementia-related resources, and general knowledge and acceptance of AD.
The influence of knowledge and acceptance of AD is an important issue that is intertwined with efforts to recruit and enroll participants for genetic studies of AD in PR. While genetic studies of AD in PR have been undertaken by several groups as part of a larger emphasis on understanding AD in Caribbean Hispanics (Lee et al., 2006;Barral et al., 2015), the ascertainment approach developed for PRADI focuses solely on the island and intends to create a program that enhances knowledge of AD in PR.
Efforts to increase knowledge of AD in PR have grown in recent years, and our multisource approach to recruitment and enrollment is aligned with programs such as the Un Café por el Alzheimer program in PR, which provides education about AD at coffee shops and through social media (Friedman et al., 2016). The educational component that we include as part of our larger ascertainment approach is crucial for providing information about AD to healthcare providers and the public across the various communities and will potentially impact participation in biomedical research, including genetic studies (Karlawish et al., 2011).
The goal of the PRADI study is to investigate the genetics of AD in Puerto Ricans. AD is a complex disease with a substantial burden on the population, particularly in PR, where there is a large aging population suffering from chronic diseases that may exacerbate existing risk (Perreira et al., 2017). To date, there has been a scarcity of genetic studies of complex traits (e.g., AD) in Puerto Ricans, which could exacerbate existing health disparities. Exceptions to this are the Boston Puerto Rican Health Study (BPRHS), a longitudinal cohort study which examines non-genetic and genetic influences on multiple health outcomes among mainland Puerto Ricans (Tucker et al., 2010), and the Hispanic Community Health Study (HCHS), a large longitudinal multi-cohort project which studies a variety of health outcomes among different Hispanic-Latino groups in the US, including Puerto Ricans (Lavange et al., 2010); both have extensive phenotypic and genotypic data. Using data from these cohorts, investigators have found links between select genes and obesity and asthma (Guo et al., 2018), lipid profiles (Graff et al., 2017), and blood pressure traits (Sofer et al., 2017). A large body of research has examined genetic factors contributing to asthma and other pulmonary traits, which are a major health problem in Puerto Ricans. The involvement of Puerto Ricans in this work can lead to greater understanding of genetic contributions to disease in this population and to intervention opportunities. Central to the success of this research is ensuring participation (Karlawish et al., 2011).
Our results suggest the importance of engaging multiple stakeholders and communities across municipalities. Including stakeholders in the development of outreach and recruitment was an important part of the PRADI ascertainment approach. Another important aspect of our ascertainment approach was the provision of AD and dementia information to providers, care centers, and the public. While our ascertainment results cannot be directly attributed to our multisource approach, we have preliminary data that can guide a more systematic evaluation of what works best as the PRADI study moves forward. Ultimately, this study and others like it are intended to inform and improve health outcomes and reduce health disparities for Puerto Ricans and other Hispanic-Latino populations who have been consistently underserved.
ETHICS STATEMENT
This study was carried out in accordance with the recommendations of the National Institutes of Health Guiding Principles for Ethical Research Pursuing Potential Research Participants Protection and the 2016 National Institutes of Health Single Institutional Review Board (sIRB) Policy. This study received ethical approval from the University of Miami Institutional Review Board (approved protocol #20070307) and the Universidad Central del Caribe Institutional Review Board (approved protocol #2016-26). The Universidad Central del Caribe is relying on the designated UM-IRB through an Institutional Review Board Authorization Agreement (Protocol: Genetic Studies in Dementia). All subjects (participants or proxies) gave written informed consent. This study was carried out in accordance with the Declaration of Helsinki and its amendments.
AUTHOR CONTRIBUTIONS
MC helped with study design, assisted with clinical adjudication of patient and control data, and wrote and proofread the manuscript. BF-A and KC assisted with study design, ascertainment, and clinical adjudication of patient and control data, and wrote and proofread the manuscript. JR and FR performed statistical analyses and helped to write the manuscript. LA and JV helped with study design, ascertainment, and clinical adjudication of patient and control data. PB, PRM, AR, and VR helped with ascertainment and clinical adjudication of patient and control data. CS, PM, AG, MP, and JM helped with ascertainment of patient and control data. KH-N compiled data for the publication and ran clinical queries. NF helped with ascertainment of patient and control data, and proofread the manuscript. AC and HA helped with ascertainment of patient and control data, diagnosis, and adjudication. GB conceived of and implemented the study design. MP-V conceived of and implemented the study design, assisted with ascertainment and clinical adjudication of patient and control data, and helped to write the manuscript.
FUNDING
Financial support for the research, authorship, and publication of this article was provided by the grant "Genomic Characterization of Alzheimer's Disease Risk in the Puerto Rican Population" (1RF1AG054074-01) from the National Institutes of Health (NIH) and National Institute on Aging (NIA).
Secoisolariciresinol diglucoside regulates estrogen receptor expression to ameliorate OVX-induced osteoporosis
Objective Secoisolariciresinol diglucoside (SDG) is a phytoestrogen that has been reported to improve postmenopausal osteoporosis (PMOP) caused by estrogen deficiency. In this work, we aimed to investigate the mechanism by which SDG regulates the expression of ERs in PMOP model rats. Methods Ovariectomy (OVX) was used to establish the PMOP model in rats. Rats were allocated to Sham, OVX, SDG and raloxifene (RLX) groups. After 12 weeks of treatment, micro-CT was used to examine the transverse section of bone. Hematoxylin and Eosin staining and Safranine O-Fast Green staining were used to assess the femoral pathological morphology of rats. Estradiol (E2), interleukin-6 (IL-6), and bone formation and bone catabolism indexes in serum were measured using ELISA. Alkaline phosphatase (ALP) staining was used to assess the osteogenic ability of chondrocytes. Immunohistochemistry and Western blot were applied to detect the protein expression of estrogen receptors (ERs) in the femur of rats. Results Compared with the OVX group, micro-CT results showed that SDG could lessen bone injury and improve femoral parameters, including bone mineral content (BMC) and bone mineral density (BMD). Pathological results showed that SDG could reduce pathological injury of the femur in OVX rats. Meanwhile, SDG decreased the level of IL-6 and regulated bone formation and bone catabolism indexes. Besides, SDG increased the level of E2 and reversed the OVX-induced decrease in the expression of ERα and ERβ. Conclusion The therapeutic effect elicited by SDG in OVX rats was due to the reduction of injury and inflammation and the improvement of bone formation indexes, via regulation of the expression of E2 and ERs.
Introduction
As a systemic bone disease, osteoporosis (OP) is characterized by decreased bone mass, damaged bone microarchitecture and increased bone fragility [1]. In developing countries, especially in Asia, the incidence of osteoporotic fracture appears to be on the rise [2]. Postmenopausal osteoporosis (PMOP) is one of the primary forms of OP. Its occurrence is closely related to the plummeting of estrogen levels in postmenopausal women, which can cause problems such as accelerated bone turnover, destroyed bone microarchitecture, weakened bone strength and increased bone fragility and fracture risk [3,4]. The decline in ovarian estrogen after menopause is an important cause of rapid bone loss and the early stages of OP in women [5]. Estrogen replacement therapy is commonly used in the clinical prevention and treatment of PMOP [6]. However, long-term use of estrogen can cause endometrial hyperplasia, vaginal bleeding and other adverse reactions, and even increase the risk of endometrial or breast cancer [7,8]. Several trials [9,10] and several meta-analyses [11,12] have supported denosumab, pamidronate and zoledronate for the treatment of PMOP. In addition, the effects of biomarkers in the treatment of PMOP are also beginning to be investigated [13,14]. In consequence, it is beneficial to explore more natural sources of estrogen for the treatment of PMOP.
Estrogen receptor (ER) is a ligand-activated nuclear transcription factor that mediates the action of 17-β estradiol (E2). ERs are abundant in the human body and are thought to be an important regulator of bone metabolism [15], with two subtypes, ERα and ERβ [16]. Estrogens bind to ERs to form dimers that bind to estrogen response elements on the genome, thereby regulating the transcription of estrogen-responsive genes and allowing estrogen to exert its effects [17]. Estrogen plays an important role in bone growth, bone maturation and bone turnover to maintain bone metabolic balance [18]. Estrogen selectively activates intracellular signaling pathways depending on the receptor subtype to which it binds, and has a variety of biological activities. ERα is mainly expressed in the uterus, testis, pituitary gland, kidney, epididymis, adrenal gland, mammary gland, bone and some other target organs. ERβ is distributed in the ovary, prostate, testis, bone and other organs, but its affinity differs among tissues owing to differences between the ER subtypes [19]. In PMOP, reduced estrogen levels result in increased osteoclast activity, bone resorption and bone loss, leading to fractures. Research has shown that women should start taking estrogen at menopause and continue taking it indefinitely to prevent fractures [20].
Secoisolariciresinol diglucoside (SDG) is a phytoestrogen found in the mature seed of Linum usitatissimum L. that is similar to human estrogen [21]. As a phenolic ingredient of the herb, SDG has preventive effects on estrogen-dependent diseases such as breast cancer [22], prostate cancer [23], menstrual syndrome [24] and OP. The clinical efficacy of SDG in the prevention and treatment of PMOP in women has been confirmed: it can increase serum calcium content and bone mass, improve the sensitivity of bone to parathyroid hormone, promote the formation of new bone matrix and significantly control bone loss [25]. SDG has also been reported to improve ovarian reserve function in aging mice by inhibiting oxidative stress [26]. In addition, SDG metabolites may play a direct role in the effect of flaxseed combined with low-dose estrogen therapy on OP in ovariectomized (OVX) rats [27]. However, the mechanism by which SDG affects the expression of ERs is not clear, and its effect has not been directly compared with that of raloxifene (RLX), a drug commonly used in clinical practice for the treatment of PMOP. In this study, a PMOP rat model was established by OVX. After intervention with SDG or RLX, serum E2 was determined by ELISA, the pathological changes of bone tissue were detected by staining, and the expression of ERα and ERβ was detected by immunohistochemistry (IHC) and Western blot to explore the mechanism of SDG's effect on ERα and ERβ expression in PMOP in castrated rats.
Animals
SD rats were provided by Shanghai SLAC Laboratory Animal Co., Ltd under the animal license permit number SYXK (Zhe) 2021-0033. The rats were reared in a well-ventilated environment for adaptive feeding with a 12-h light/dark cycle at 23 ± 2 °C and 60 ± 5% humidity in the animal room of Zhejiang Eyoung Pharmaceutical Research and Development Co., Ltd for 7 days.
Model establishment and administration
Twenty-four female non-pregnant SD rats (220 ± 20 g, 12 weeks) were randomly divided into four groups: Sham, OVX, SDG (30 mg/kg) and RLX (1 mg/kg). After one week of adaptive feeding, the OVX, SDG and RLX groups underwent OVX surgery and the Sham group underwent sham surgery [28]. All animals received 3% pentobarbital sodium for anesthesia before surgery. A longitudinal incision was made into the abdomen from both sides of the back, 1.5 cm beside the lumbar spine, and the ovaries were removed. The sham surgery followed a similar procedure but without removing the ovaries. All rats received an intramuscular injection of penicillin (Hefei Dragon God Animals Pharmaceutical Co., Ltd.) once a day for three days. One week after surgery, the SDG group was given SDG (30 mg/kg) [29] by gavage twice a week, the RLX group was administered RLX (1 mg/kg) [30] by intragastric administration twice a week, and the Sham and OVX groups received the same volume of normal saline by intragastric administration, all for 12 weeks. The day after the last administration, the rats were anesthetized with 1.5% isoflurane by inhalation; blood was immediately drawn from the abdominal aorta, allowed to stand for 30 min, and then centrifuged at 3500 r/min for 15 min, and the serum was collected and stored at − 80 °C. The rats were then killed by carbon dioxide euthanasia, and the femur tissue was isolated and stored at − 80 °C.
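The random allocation step described above can be sketched as follows; the rat IDs, the seed, and the equal group size of six are illustrative assumptions, not details stated in the text.

```python
import random

# Illustrative random allocation of 24 rats into four equal treatment groups
# (assumed group size of 6; seed fixed only to make this example reproducible).
random.seed(0)
rat_ids = list(range(1, 25))
random.shuffle(rat_ids)
groups = {name: rat_ids[i * 6:(i + 1) * 6]
          for i, name in enumerate(["Sham", "OVX", "SDG", "RLX"])}
for name, ids in groups.items():
    print(name, sorted(ids))
```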
Micro-computed tomography (Micro-CT)
The femurs of rats were scanned using a Micro-CT imaging system (model MCT-III, ZKKS, China). Image acquisition and analysis of the femur followed the published guidelines for micro-CT evaluation of rodent bones [31]. After scanning, the 3D image of the rat femur was reconstructed using the NRecon software. A fixed volume of interest was selected in the specimen center for histomorphometric analysis. Variables measured included bone mineral content (BMC, mg), bone mineral density (BMD, mg/cc), bone volume fraction (BVF, BV/TV, %), trabecular number (Tb.N, 1/mm), trabecular thickness (Tb.Th, mm) and trabecular separation (Tb.Sp, mm).
Hematoxylin and Eosin (H&E) staining
The femur tissues were fixed in 4% paraformaldehyde. After dehydration, they were embedded in paraffin wax, and the paraffin blocks were cut into 5-μm sections. The sections were dewaxed with xylene, washed with water and stained with hematoxylin and eosin. Images of femur tissue were recorded at 200× and 400× magnification using a microscopic imaging system (Nikon DS-Fi2, Nikon, Japan) with an optical microscope (Nikon Eclipse Ci-L, Nikon, Japan). A random number method was used to select five doctors from the pathological diagnostic physician database to score the staining. The scoring criteria were as follows: 0, normal tissue structure; 1, slight injury of the trabecular structure, increased trabecular separation, pathological damage area < 25%; 2, moderate damage to the trabecular structure, increased trabecular separation, pathological damage area < 50%; 3, severe damage to the trabecular structure, disordered distribution of bone cells, pathological damage area < 75%; 4, severely damaged trabecular structure, severely reduced mature bone cells, pathological damage area > 75%.
Safranine O-fast green staining
The femur tissue sections were dewaxed with xylene and graded ethanol, washed with water and then stained with safranine O and fast green. Images of femur tissue were recorded at 200× and 400× magnification using a microscopic imaging system (Nikon DS-U3, Nikon, Japan) combined with an optical microscope (Nikon Eclipse E100, Nikon, Japan).
ELISA
The serum levels of E2, interleukin-6 (IL-6), procollagen I N-terminal peptide (P1NP), alkaline phosphatase (ALP), C-terminal telopeptide of collagen type 1 (CTX-1) and cross-linked N-telopeptide of type 1 collagen (NTX-1) were quantified using ELISA kits according to the manufacturer's instructions. All samples were assayed in duplicate, and the absorbance was read at 450 nm using a CMaxPlus microplate reader (Molecular Devices, USA).
ALP staining
Chondrocytes were derived from the cartilage of the distal femur and proximal tibia of 5-day-old rats and digested with 0.2% type II collagenase at 37 °C for 8 h. For cell culture, the chondrocytes were seeded at a density of 5.7 × 10⁵ cells/cm², cultured in DMEM/F12 supplemented with 10% fetal bovine serum, 100 U/ml penicillin and 0.1 mg/ml streptomycin, and incubated at 37 °C with 5% CO₂. After fixation with 4% paraformaldehyde, ALP staining was performed, and ALP-positive osteoblasts were examined under a microscope (ICX41, Ningbo Shunyu Co., Ltd, China) to assess the ALP activity of the chondrocytes.
Immunohistochemistry (IHC)
The femur sections were dewaxed with xylene, hydrated through a graded ethanol series and water, inactivated with 3% H₂O₂ and antigen-retrieved with EDTA. The sections were blocked with 5% BSA and then incubated overnight at 4 °C with primary antibodies against ERα and ERβ. The sections were washed with PBS and incubated with the secondary antibody for 2 h at room temperature. Subsequently, the sections were developed with DAB for 5 min, counterstained with hematoxylin for 30 s and washed with water. The sections were then dehydrated through a graded ethanol series, cleared with xylene and mounted, and images were recorded using a microscopic imaging system (Nikon DS-U3, Nikon, Japan) with an optical microscope (Nikon Eclipse E100, Nikon, Japan).
Western blot
Total protein was extracted from bone tissue, and the protein concentration was measured using a BCA protein quantification kit. Proteins (30 μg per well) were separated by SDS-PAGE at 80–120 V and transferred to a 0.45 μm PVDF membrane at 200 mA for 2 h. The membranes were blocked with 5% BSA for 1.5 h at room temperature, incubated with primary antibodies against ERα and ERβ overnight at 4 °C, and then incubated with the secondary antibody for 2 h at room temperature. Peroxidase-labeled bands were detected with ultra-signal ECL chemiluminescent solution, and images were captured using a ChemiDoc XRS imaging system (610,020-9Q, Shanghai Qinxiang Scientific Instrument Co., Ltd, China). Image-Pro Plus 6.0 was used to quantify the signal intensity of the protein bands of interest, normalized to β-actin.
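The final quantification step is a per-lane ratio of target band intensity to the β-actin band from the same lane. As a minimal illustrative sketch (not part of Image-Pro Plus; the function name and the intensity values are hypothetical):

```python
def normalize_to_loading_control(target_bands, actin_bands):
    """Express each target band intensity relative to the β-actin band
    measured in the same lane (simple ratio normalization)."""
    if len(target_bands) != len(actin_bands):
        raise ValueError("need one loading-control band per lane")
    return [t / a for t, a in zip(target_bands, actin_bands)]

# Hypothetical densitometry readings for three lanes (e.g. Sham, OVX, SDG):
era_relative = normalize_to_loading_control([120.0, 60.0, 100.0],
                                            [100.0, 100.0, 80.0])
```

Dividing by the loading control corrects for unequal protein loading between lanes before groups are compared.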
Statistical analysis
Data were processed using SPSS. When the data from multiple groups followed a normal distribution and passed the homogeneity-of-variance test, one-way ANOVA was used, with Tukey's test for comparisons between groups. Dunnett's T3 test or the independent-samples t test was used when the distribution was normal but the variances were not homogeneous. The Kruskal-Wallis H test was used when the distribution was not normal. All data are expressed as mean ± standard deviation (mean ± SD). The significance level was α = 0.05; a value of P < 0.05 was considered statistically significant.
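The paper's analyses were run in SPSS; purely as an illustration of the ANOVA branch of this decision procedure, the one-way F statistic can be computed from between- and within-group sums of squares with the standard library alone:

```python
from statistics import mean

def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total sample size
    grand = mean(x for g in groups for x in g)   # grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The F value is then compared against the F distribution with (k − 1, n − k) degrees of freedom; statistics packages such as SPSS additionally report the P value.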
Effect of SDG on femoral parameters of the femur in OVX rats
As shown in Fig. 1A, compared with the Sham group, micro-CT images showed that trabecular bone mass decreased significantly and the cancellous bone microarchitecture deteriorated in the OVX group. Bone histomorphometric analysis (Fig. 1B-G) revealed that BMC, BMD, BV/TV, Tb.N and Tb.Th were decreased (P ## < 0.01) and Tb.Sp was significantly increased (P ## < 0.01) in the OVX group. SDG or RLX intervention improved these parameters (P * < 0.05 or P ** < 0.01).
SDG reduced femur pathological damage and serum inflammation in OVX rats
As shown in Fig. 2A, the Sham group showed normal bone microarchitecture and neatly arranged bone trabeculae, while in the OVX group the femoral tissue was seriously damaged, the trabeculae were sparsely arranged, the bone microarchitecture was severely damaged, and mature bone cells were greatly reduced. Compared with the OVX group, the SDG and RLX groups showed less damage to the bone microarchitecture and a more regular trabecular arrangement. In Fig. 2B, compared with the Sham group, the HE scores of femur tissue in the OVX group were significantly increased (P ## < 0.01). The HE scores decreased significantly in both the SDG group (P * < 0.05) and the RLX group (P ** < 0.01) compared with the OVX group.
Safranine O-fast green staining can detect pathological changes in cartilage tissue: basophilic cartilage stains red with safranine O, and eosinophilic bone stains blue with fast green. As shown in Fig. 2C, in the Sham group the bone microarchitecture was normal, with a smooth femoral surface and no obvious injury. In the OVX group, the femur tissue showed a large loss of safranin O staining, serious loss of proteoglycan, a rough bone surface and severe damage to the bone microarchitecture. Compared with the OVX group, in the SDG and RLX groups the loss of safranin O staining in femur tissue was reduced, proteoglycan was only partially lost, and cartilage damage was attenuated.
ELISA results (Fig. 2D) showed that, compared with the Sham group, the serum level of IL-6 in the OVX group was significantly increased (P ## < 0.01). SDG or RLX decreased the level of IL-6 (P ** < 0.01).
SDG regulated the indexes of bone formation and bone catabolism in OVX rats
ELISA results (Fig. 3A-D) showed that serum levels of P1NP and ALP in the OVX group were significantly decreased (P ## < 0.01), while CTX-1 and NTX-1 were significantly increased (P ## < 0.01). Compared with the OVX group, the levels of P1NP and ALP in the SDG and RLX groups were increased (P ** < 0.01), while CTX-1 and NTX-1 were decreased (P ** < 0.01).
Effects of SDG on osteoblastic capacity of chondrocytes
The nuclei and cytoplasm of chondrocytes with positive ALP staining appeared deep purple. As shown in Fig. 4, compared with the Sham group, the degree of ALP staining of chondrocytes in the OVX group was significantly decreased (P ## < 0.01). Compared with the OVX group, ALP staining of chondrocytes in the SDG and RLX groups was significantly increased (P ** < 0.01).
SDG promoted the level of E2 and the expression of ERα and ERβ of the femur in OVX rats
The IHC results (Fig. 5A-D) showed that OVX lowered the expression of ERα and ERβ (P ## < 0.01), while SDG or RLX treatment increased their expression (P ** < 0.01). As shown in Fig. 5E, ELISA showed that the serum level of E2 in the OVX group was significantly decreased (P ## < 0.01), and SDG or RLX treatment increased it (P ** < 0.01). Western blot (Fig. 5F-H) gave the same results as IHC.
Discussion
PMOP, a disease caused by estrogen deficiency, occurs in postmenopausal women and in women with a family history of OP, ovariectomy or premature menopause [32]. Research into this disease is becoming increasingly detailed, and previous studies have found that estrogen plays a crucial role in the balance of bone remodeling [33]. Estrogen reduction alters the binding of estrogen to ERs, which enhances osteoclast differentiation, decreases osteoblast proliferation and differentiation, increases cell apoptosis, and ultimately leaves the capacity for bone resorption greater than that for bone formation, so that OP occurs [34]. In recent studies, researchers have focused on the treatment of OP caused by estrogen deficiency. Some studies have investigated the effects of Chinese herbal extracts or their major constituents on the prevention and treatment of OP in OVX estrogen-deficient model rats [35]. SDG is a phytoestrogen very similar to human estrogens [36]. Clinical studies have shown that patients treated with SDG have significantly increased BMD and serum calcium, enhanced parathyroid hormone sensitivity and inhibited bone resorption, with a reduced rate of bone turnover after treatment [37]. In addition, SDG could increase the weight of the femur and the content of serum calcium, serum phosphorus and bone calcium [38]. Bone histopathology studies showed that SDG could increase Tb.N, Tb.Th and BV/TV, increase the mean cortical bone thickness and reduce the mean Tb.Sp in the treatment of OP [39]. Therefore, we performed bilateral ovariectomy in SD rats [28] to establish an animal model of PMOP with reduced estrogen levels. Consistent with previous research [40], in our study the femurs of rats that underwent OVX surgery demonstrated significant trabecular bone loss, decreased BMC, BMD, BV/TV, Tb.N and Tb.Th, and increased Tb.Sp. We found that SDG treatment could significantly reduce OP by increasing BMC, BMD, BV/TV, Tb.N and Tb.Th and decreasing Tb.Sp, suggesting a substantial improvement in the pathological damage to the bone microarchitecture.

Fig. 1 Effect of secoisolariciresinol diglucoside (SDG) on femoral parameters of the femur in OVX rats. A The micro-CT images of the femur in rats. B-G The bone histomorphometric analysis results of the femur in rats, including bone mineral content (BMC), bone mineral density (BMD), bone volume fraction (BV/TV), trabecular number (Tb.N), trabecular thickness (Tb.Th) and trabecular separation (Tb.Sp). All data are expressed as the mean ± SD (n = 6). P ## < 0.01 vs. the Sham group, P ** < 0.01 vs. the OVX group
Several studies have shown that almost all chronic diseases are associated with inflammation, including OP [41]. B cells and T cells, as immune cells, are involved in the normal processes of bone formation and bone resorption. The decrease in estrogen level after menopause leads to the expansion of T cells and significantly increased levels of the pro-inflammatory factors IL-1, IL-6, IL-17 and TNF-α. Osteoclast function is then enhanced and the bone-depositing capacity of osteoblasts is reduced, making bone resorption greater than bone formation and leading to decreased bone mass and OP [42]. Our experimental results showed that OVX increased the serum IL-6 content, which may unbalance the functions of osteoclasts and osteoblasts, resulting in OP. IL-6 levels decreased after administration of SDG, suggesting that SDG ameliorates OP by reducing the expression of pro-inflammatory factors.
Bone turnover biomarkers (BTMs) are by-products of bone remodeling that indicate the rate of bone turnover [43]. BTMs are mainly classified into bone formation markers and bone resorption markers. Our results showed that OVX caused a decrease in bone formation and an increase in bone resorption. After the administration of SDG, the trend caused by OVX was reversed, suggesting that SDG could improve bone formation, reduce bone resorption and restore the bone formation/resorption balance, thus alleviating the OP caused by PMOP.
Estrogen, a sex steroid, can enter the nucleus or cytoplasm of the target cell directly and bind to ERs. E2, the major estrogen secreted by the female ovaries, can improve the bone microstructure and femoral BMD in PMOP rats, increase serum bone formation indexes and contribute to bone preservation [45]. As mentioned above, ERs play an important role in bone formation and bone resorption [34,46], and many researchers have reported that ERs influence BMD and PMOP [47]. Pharmacological experiments have shown that SDG can affect hormonal parameters and increase the serum level of E2 [48]. In this study, OVX decreased the content of E2 and the expression of ERα and ERβ, indicating that serum estrogen levels and ER expression decreased after OVX surgery, leading to the occurrence of PMOP. After administration of SDG, we found that SDG significantly restored the E2 content and the ERα and ERβ expression reduced by OVX, suggesting that SDG could inhibit PMOP by increasing the expression of ERs.
Our current research still has some limitations. Although we have found that SDG can regulate the expression of ERα and ERβ to treat PMOP, the specific mechanism linking them has not been established; further experiments are needed to clarify it, and there is still a long way to go before clinical application. In subsequent studies, molecular docking, fluorescent probes and gene silencing will be effective methods to elucidate the mechanism in in vivo and in vitro experiments.
Conclusion
In conclusion, our results show that SDG has a therapeutic effect in PMOP model rats by regulating bone microarchitecture, reducing inflammatory damage and increasing bone formation markers, through activating the expression of ERα and ERβ.
Fig. 3 SDG regulated the indexes of bone formation and bone catabolism in OVX rats. A-D The serum levels of P1NP, ALP, CTX-1 and NTX-1 were detected by ELISA. All data are expressed as the mean ± SD (n = 6). P ## < 0.01 vs. the Sham group, P ** < 0.01 vs. the OVX group
FluReF, an automated flu virus reassortment finder based on phylogenetic trees
Background Reassortments are events in the evolution of the genome of influenza (flu), whereby segments of the genome are exchanged between different strains. As reassortments have been implicated in major human pandemics of the last century, their identification has become a health priority. While such identification can be done “by hand” on a small dataset, researchers and health authorities are building up enormous databases of genomic sequences for every flu strain, so that it is imperative to develop automated identification methods. However, current methods are limited to pairwise segment comparisons. Results We present FluReF, a fully automated flu virus reassortment finder. FluReF is inspired by the visual approach to reassortment identification and uses the reconstructed phylogenetic trees of the individual segments and of the full genome. We also present a simple flu evolution simulator, based on the current, source-sink, hypothesis for flu cycles. On synthetic datasets produced by our simulator, FluReF, tuned for a 0% false positive rate, yielded false negative rates of less than 10%. FluReF corroborated two new reassortments identified by visual analysis of 75 Human H3N2 New York flu strains from 2005–2008 and gave partial verification of reassortments found using another bioinformatics method. Methods FluReF finds reassortments by a bottom-up search of the full-genome and segment-based phylogenetic trees for candidate clades—groups of one or more sampled viruses that are separated from the other variants from the same season. Candidate clades in each tree are tested to guarantee confidence values, using the lengths of key edges as well as other tree parameters; clades with reassortments must have validated incongruencies among segment trees. Conclusions FluReF demonstrates robustness of prediction for geographically and temporally expanded datasets, and is not limited to finding reassortments with previously collected sequences. 
The complete source code is available from http://lcbb.epfl.ch/software.html.
Introduction
Influenza (the "flu") is an RNA virus with an extremely high mutation rate [1][2][3] that causes fever and respiratory problems in humans and other animals. It is responsible for half a million human deaths every year [4]. Flu populations typically experience a seasonal bottleneck event, as host-to-host transmission in the temperate regions drops to very low levels during the warm season. According to the source-sink hypothesis [2], new strains of viruses are seeded from a flu reservoir in the tropics, called the source, and spread seasonally to the temperate zones, called sinks [5][6][7], thus creating multiple coexisting generations of flu strains in the temperate regions [8].
The genome of the flu virus is composed of eight segments. Reassortment of segments among flu virus strains, i.e., mixing of segments from one or more strains to produce new strains, is a frequent event [9][10][11]. Strains resulting from such reassortments have been responsible for two of the three great pandemics of the 20th century [12].
A large number of fully sequenced flu genomes recently became publicly available [13], but large amounts of data cannot be processed by the most widely used reassortment-finding techniques, as these involve human scrutiny of phylogenetic trees. In these methods, one constructs a phylogenetic tree based on the full genomes, as well as a tree based on each of the eight segments, for a total of nine trees; one then examines these trees, looking for strains that have segments on different branches of their respective trees [12,14]. While the visual inspection method is intuitive and logical, it is prohibitively time-consuming, and outright inapplicable when thousands of samples are to be examined.
A few methods have been proposed to process the flu data automatically. Rabadan et al. [11] postulated that, for any two strains, the Hamming distance between their respective first segments and the Hamming distance between their respective second segments should be equal (after normalization) in the absence of reassortment, while different distances should point to a reassortment. Niranjan et al. [10] used phylogenies, but considered distributions of phylogenetic trees for each segment, instead of the consensus tree. Their method enumerates the maximal bicliques on a bipartite graph of tree edges for the distributions of the two segments; these bicliques represent sets of mutually incompatible choices, indicating that the two segments may have had different evolutionary histories. Both methods are limited to detecting reassortments between collected and sequenced strains. While these approaches were used to detect meaningful reassortment events [11,15], they are not scalable to large datasets when all reassorted segments need to be identified, because they use pairwise comparisons which must then be manually aggregated.
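Rabadan et al.'s criterion can be sketched in a few lines: compare the normalized Hamming distances of two segments between the same pair of strains, and flag a possible reassortment when they disagree by more than some tolerance. This sketch is ours, not code from [11], and the tolerance of 0.1 is an arbitrary placeholder:

```python
def normalized_hamming(a, b):
    """Fraction of positions at which two aligned, equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(x != y for x, y in zip(a, b)) / len(a)

def distances_disagree(seg1_pair, seg2_pair, tol=0.1):
    """Without reassortment, the normalized distances of any two segments
    between the same pair of strains should roughly agree; a large
    discrepancy points to different evolutionary histories."""
    d1 = normalized_hamming(*seg1_pair)
    d2 = normalized_hamming(*seg2_pair)
    return abs(d1 - d2) > tol
```

Because this test is pairwise, applying it to every segment of every strain pair is exactly the kind of manual aggregation step that does not scale, which motivates FluReF's tree-based approach.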
Our contribution
Our new, fully automated, flu reassortment finding algorithm, FluReF, embodies and parameterizes the structural observations used in visual reassortment finding. We describe an algorithm that examines the reconstructed phylogenetic trees of individual segments and of the full genome, selects candidate reassortment groups through a bottom-up search of the full phylogenetic tree, and confirms candidates that meet preset thresholds and cause demonstrated incongruencies among segment trees.
FluReF is designed to find all segments involved in all reassortment events in a dataset. The method is scalable, running in time quadratic in the number of full genome sequences. Furthermore, FluReF is not limited to finding reassortments among sequenced strains, as it searches for reassortments with ancestral source strains.
We also present a simple simulator for the evolution of flu genomes, in terms of point mutations and reassortment, and incorporating both strain isolation and bottleneck events. On data produced by this simulator, FluReF tuned for a 0% false positive rate has consistently demonstrated a 10% false negative rate. On sequence data from influenza databases, FluReF corroborated two new reassortments identified during our visual analysis of the 75 Human H3N2 New York flu strains from 2005–2008. FluReF demonstrated robustness of prediction with temporally and geographically expanded datasets. We obtained partial verification of reassortments found by Holmes et al., who performed a phylogenetic analysis of 156 Human H3N2 New York flu strains from 1999–2004 [8].
Results
We first describe the principles behind our method. Next, we present the FluReF algorithm, including the description of tunable parameters. We then present the results on various flu datasets from the state of New York. We conducted a visual analysis of a collection of 2005-2008 New York flu genomes, identifying two reassortments, and ran FluReF on this dataset. We then expanded the temporal and geographic scope of the data to test the robustness of FluReF by augmenting our dataset with (i) a large number of sequences from the same area (New York) from a prior year (2004) and (ii) sequence data from all over the United States. We then ran FluReF on another, unrelated flu dataset from Holmes et al. [8]. Finally, we experimented with a larger set of simulated sequences.
FluReF: principles
FluReF exploits certain characteristics of phylogenetic trees of the flu genome. The trees produced from samples taken over a number of years in the same geographical location follow a well-established pattern: sequences from the same year tend to cluster together, sometimes forming a clade with sequences from the year before or the year after [3]. Another common feature of localized phylogenetic trees is that sequences collected in earlier years tend to be closer to the root than those collected in later years, as they had less time to evolve away from the common ancestor at the root. In visual inspection methods [12,14], the exploration starts by examining the full genome tree, looking for individual sequences, or small groups of sequences, that do not fit these characteristics.
For example, we may find some sequences that are not grouped with the others from the same year, but with sequences sampled in an earlier year: Figure 1 (A) shows a toy example where clade E from year 3 is grouped together with the sample from year 2. Similarly, we may find some sequences that, while grouped together with the rest of the season, are separated from them by a significantly large distance: Figure 1 (B) shows clade E correctly grouped with the other samples from year 3, but at a significant evolutionary distance from them. In either case, sequences phylogenetically separated from their seasonal grouping are candidates for reassortment. We postulate that this genetic disparity is possible if a strain from the sampling year, the survivor of the previous bottleneck event, has reassorted some of its segments with a strain that re-emerged from the source population. We assume that the lower selective pressure in the source population results in slower evolutionary change, so that a re-emerging sample from the source population would be more genetically similar to the sink population from prior sampling years.
To test for a reassortment, we examine the eight segment trees, searching for an isolated candidate clade. If the candidate clade remains isolated in all individual segment trees, the reason is unrelated to reassortment. One of the possible explanations is that such candidate strains infected the human host in a geographic area far away from the sampling area and thus have a somewhat different evolutionary history. If, however, the candidate clade is grouped together with the other samples from its season in some of the segments, but is isolated in others, we have identified a probable reassortment. Figure 2 shows a toy example with three sampled years. Segments in the isolated candidate clade E3 (3, 5, and 6) have come from the seasonal migration of the source strain, while the rest of the segments for E3 (1, 2, 4, 7, and 8) came from the local seasonal population.
FluReF: algorithm
FluReF carries out an exhaustive bottom-up search of the phylogenetic tree reconstructed from the full genome sequences. As the search proceeds, various measures are checked to ensure that candidate reassortments satisfy parameter thresholds motivated by the visual inspection.
In the main loop of the algorithm, each leaf node (a single sequence) is considered if it was not already identified as part of a candidate reassortment. A candidate group is grown upward from the leaf, with expansion terminating upon reaching the noise threshold, a tunable parameter which dictates when the candidate group would encompass an unacceptably heterogeneous sample from different years.
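The bottom-up growth step can be sketched over a tree given as parent pointers with precomputed leaf sets. This is our illustration of the idea, not FluReF's code, and the 20% noise threshold is a placeholder since the paper does not publish its tuned value:

```python
def grow_candidate(parent, leaves_under, year_of, start_leaf, noise=0.2):
    """Walk upward from start_leaf, accepting each enclosing clade as long as
    the fraction of its leaves sampled in other years stays within `noise`."""
    node = start_leaf
    target_year = year_of[start_leaf]
    while node in parent:
        bigger = parent[node]
        leaves = leaves_under[bigger]
        off_year = sum(1 for leaf in leaves if year_of[leaf] != target_year)
        if off_year / len(leaves) > noise:
            break  # expanding further would make the group too heterogeneous
        node = bigger
    return node  # root of the largest acceptable candidate clade
```

With precomputed leaf sets, each upward step is checked in time proportional to the clade size, consistent with the overall quadratic bound stated later in the paper.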
Once a candidate group is identified, the Least Common Ancestor (LCA) is found for all leaves sampled in the year that contains the majority of sequences in this candidate group, as shown in Figure 3.
Next, the Least Common Ancestor excluding the candidate group (LCA_Without) is found. Various metrics for the path from the candidate group to the LCA_Without, via the LCA, are checked to ensure that the separation distance is nontrivial and that the path has strong support.
In the visual reassortment search method, the path is examined to ensure that it contains several edges with very high confidence values as provided by the phylogenetic reconstruction software. In general, it is desirable to have a majority of edges on the path with reasonably high confidence values, generating trust in the existence of the candidate group separation. FluReF translates this intuition into several tunable parameters which minimize the rate of false positives by ensuring that only paths with high confidence values from the phylogenetic reconstruction are considered in a reassortment search. During the visual reassortment search, the candidate group is also assessed for its distance away from the rest of the season, compared to the rest of the tree. FluReF captures this observation with a couple of separation parameters, tuned to ignore candidate groups with a trivial genetic separation from the rest of the season. For each candidate group which satisfies all parameters, the algorithm then attempts to find the analog of this candidate group in each of the individual segment trees. If a group is found in a segment tree, it is again checked against various parameter thresholds, typically lower than those used with the tree based on the full genome sequences, because the confidence values from the phylogenetic reconstruction software tend to be lower for individual segment trees. The candidate group is output as a reassortment if it is found to be isolated from the rest of the year's sample in some segment trees, but grouped with the rest of the year's sample in other segment trees, pointing to different evolutionary histories. (Preference may be given to certain segments, as there is evidence that some segments are more commonly involved in reassortments than others [11].) FluReF runs in at most quadratic time.
The main loop traverses a tree, taking time proportional to the size of the tree, i.e., proportional to n, the number of leaves; if each leaf (strain) is considered as a separate candidate group, the main loop will iterate n times.
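Once per-segment isolation has been established, the final decision over the eight segment trees reduces to a simple rule. A sketch of that rule (the function name and input shape are ours, not FluReF's API):

```python
def classify_candidate(isolated_in_segment):
    """isolated_in_segment maps each segment number (1-8) to True when the
    candidate clade is isolated from its season in that segment's tree.
    Isolated in every tree (e.g. a geographically distant strain) or in no
    tree means no reassortment; isolated in a proper, non-empty subset means
    the segments in that subset likely came from a different parent strain."""
    isolated = [s for s, iso in sorted(isolated_in_segment.items()) if iso]
    if 0 < len(isolated) < len(isolated_in_segment):
        return "reassortment", isolated
    return "no reassortment", []
```

On the toy example of Figure 2, where segments 3, 5 and 6 of clade E3 came from the source strain, the rule would report a reassortment involving exactly those segments.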
Experiment 1: confirming visual inspection
We examined a dataset of 75 Human H3N2 strains collected between 2005 and 2008 in New York. The visual inspection of full-sequence and individual segment phylogenetic trees revealed two reassortments. Clade A from 2006, shown in Figure 4, was grouped separately from the rest of its season in the full genome tree, as well as in individual trees for segments 1, 2, 3, 5, and 6.
Clade B from 2007, also shown in Figure 4, was grouped separately from the rest of its season in the full genome tree, and in individual trees for segments 3 and 4. We applied FluReF to this data set; it produced no false positives and output both Clades A and B as reassortment groups, with the same segments identified as in the visual analysis. This result confirms that FluReF properly applies the principles of the visual analysis of phylogenetic trees.
Experiment 2: increasing the temporal scope
To test the robustness of FluReF, we augmented the dataset from Experiment 1 with human H3N2 strains sampled in 2004 from New York. The new data set thus contains 118 sequences-at the limit of what visual inspection can handle. FluReF run on this dataset returned the same output as on the unaugmented dataset used in Experiment 1, once again matching visual inspection results.
Experiment 3: increasing the geographic scope
The inclusion of geographically separated strains can lead to the isolation of subgroups from their seasonal cohort and thus potentially cause false positive identifications. We augmented the dataset from Experiment 2 with the rest of the 2005-2008 human H3N2 strain sequences collected all over the United States. The resulting data set contains 180 sequences, beyond our ability to inspect visually. FluReF once again returned the same output as on the unaugmented dataset from Experiment 1, a reassuring result in that it was not misled by geographically isolated strains.
Experiment 4: validating prior work
In 2005, Holmes et al. performed a phylogenetic analysis of 156 complete genomes of human H3N2 influenza A viruses collected between 1999 and 2004 from New York State and found several reassortment events between the various clades [8]. Aside from between-clade reassortments, which are currently not targeted by FluReF, Holmes et al. identified three reassortment groups. Run on the same data, FluReF confirmed one of these candidate reassortment groups: a small clade containing two strains from 1999 [GenBank:CY001120-27, GenBank:CY000989-96]; another candidate group was considered by the algorithm, but rejected due to low confidence scores. We have tuned the parameters of FluReF to be very conservative, so the absence of false positives and the occurrence of some false negatives are to be expected; a more sensitive tuning is possible, especially one that favors certain segments over others, a bias adopted by Holmes et al. in their analysis.
Experiment 5: scaling
While the quadratic limit makes FluReF scalable in terms of runtime, care must be taken to ensure that the accuracy of the algorithm does not suffer as the datasets grow. We performed a first scaling experiment, with a set of 420 simulated sequences containing a single reassortment event. FluReF found this reassortment and reported no false positives.
External software and materials
All influenza A sequences were downloaded from the NCBI Influenza Virus Sequence database [13]. GenBank sample identifier strings were modified to include the year of sampling and a short unique identifier, to aid in the visual inspection of the phylogenetic trees. MAFFT, a multiple alignment program based on Fast Fourier transforms [16], was used to align the sequences for all experiments using real data. The RAxML web server [17] was used to reconstruct the phylogenetic trees for all real and synthetic datasets using the Maximum Likelihood approach.
Tuning FluReF
Parameter thresholds were tuned on simulated data generated by our simulator (described below) to keep the number of false positives at zero while minimizing the number of false negatives. Tuning was done using two dozen small datasets of 40 strains each. With tuned parameters, FluReF, when run on these simulated datasets, finds no false positives (nonexistent reassortments) and fails to find 10% of the existing reassortments, for a 0% false positive rate and a 10% false negative rate.
FluReF usage
FluReF input consists of nine files with phylogenetic trees reconstructed by RAxML from aligned virus sequences; these sequences should come from sampling, at regular intervals, of the flu genome within a well-defined geographic area (or, of course, from simulations). The first eight files contain the trees for their respective segments, while the ninth file contains a tree reconstructed from the full genome sequence of each virus strain. The output provides the identifier(s) of all viruses in groups that have undergone a reassortment. For each group, the names of the segments that participated in the reassortment are also output.
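Since the nine input trees must describe the same set of strains, a small helper can validate the input before the search begins. The sketch below is illustrative only and not part of FluReF itself; `newick_leaves` and `consistent_taxa` are hypothetical helpers using a simplified view of the Newick format (quoted labels and other edge cases are not handled).

```python
import re

def newick_leaves(newick: str) -> list[str]:
    """Extract leaf labels from a Newick tree string.

    In Newick, a leaf label directly follows '(' or ','; internal-node
    labels follow ')' and are therefore not matched. Quoted labels are
    not handled in this simplified sketch.
    """
    return re.findall(r"[(,]\s*([^(),:;\s]+)", newick)

def consistent_taxa(trees: list[str]) -> bool:
    """Check that every tree (e.g. the nine FluReF input trees)
    contains exactly the same set of strain names."""
    taxa = [set(newick_leaves(t)) for t in trees]
    if not taxa:
        return True
    return all(s == taxa[0] for s in taxa[1:])
```

For example, `consistent_taxa(["((A,B),C);", "(C,(A,B));"])` holds even though the two trees have different shapes, because only the leaf sets are compared.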
Viral evolution simulator
In the absence of sufficient verified reassortment data, we needed a viral evolution simulator to produce synthetic data sets with known reassortments, so as to be able to test and tune our algorithm. We developed a very simple simulator, which incorporates some recent theories of flu evolution, and used it to produce synthetic datasets that resulted in realistic phylogenies.
We begin the simulation by initializing the start source population. The start source population consists of any number of real virus sequences, downloaded from NCBI, preferably collected from the same geographic and temporal location. The input is separated into eight files, each containing aligned sequences of all source population viruses for their respective segments. The output consists of nine files with aligned sequences sampled at regular intervals from the sink population. The eight files each contain virus sequences for their respective segments, and the ninth file contains a full genome sequence for each virus strain, all in the format described for input to FluReF.
We model the viral evolution by maintaining two groups of viruses. The first group is kept at a stable size, to mimic the viral source in the tropics. The second group models a local virus sink; it expands every sampling interval and then contracts in a bottleneck event, to mimic the seasonal flu cycles. Instead of maintaining individual viral strains, we maintain populations of relatively small size. A population consists of up to one hundred viruses with an identical sequence, a simple way of modelling closely related strains or the viral quasispecies population.
We model point mutations using the Kimura two-parameter substitution model [18]. We introduce an operation we call "global parallel mutate," which makes identical mutations in any number of populations. We apply this operation to all viral populations in the source and sink to mimic conditions that make certain mutations more advantageous during a particular season, as well as to mimic the super-viral strains that rapidly spread through the world during a particular season. We also use a regular, "divergent" mutate operation, which makes unique mutations in each viral population and is responsible for individual variation between the populations.
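As a concrete illustration of the mutation step, the following sketch applies the Kimura two-parameter model per site: transitions (A<->G and C<->T) with probability alpha, and each of the two possible transversions with probability beta. "Global parallel mutate" can then be mimicked by reusing the same random seed across populations. The function and parameter names are hypothetical and not taken from the simulator's actual code.

```python
import random

TRANSITION = {"A": "G", "G": "A", "C": "T", "T": "C"}
TRANSVERSIONS = {"A": "CT", "G": "CT", "C": "AG", "T": "AG"}

def mutate_k2p(seq: str, alpha: float, beta: float, rng: random.Random) -> str:
    """One round of Kimura two-parameter point mutation.

    alpha: per-site transition probability; beta: per-site probability
    of each of the two possible transversions.
    """
    out = []
    for base in seq:
        r = rng.random()
        if r < alpha:
            out.append(TRANSITION[base])                  # transition
        elif r < alpha + 2 * beta:
            out.append(rng.choice(TRANSVERSIONS[base]))   # transversion
        else:
            out.append(base)                              # no change
    return "".join(out)

def global_parallel_mutate(populations: list[str], alpha: float,
                           beta: float, seed: int) -> list[str]:
    """Apply the same pattern of mutations to every population by
    reusing one random seed, mimicking 'global parallel mutate'."""
    return [mutate_k2p(p, alpha, beta, random.Random(seed))
            for p in populations]
```

Because every site consumes the random stream in the same way regardless of the base, populations mutated with the same seed receive the same substitution pattern, which is what makes the parallel variant "global."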
We perform a reassortment between one population in the source and one population in the sink once per sampling period. The genetic transfer is unidirectional, as gene flow is thought to run from the source to the sink regions. At the end of each sampling period, after the bottleneck event, we output a small, randomly selected sample of survivor sequences.
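The once-per-period reassortment can be sketched as a one-way segment transfer: the sink strain acquires some of the source strain's eight segments, while the source is left untouched. This is an illustrative sketch; the function and argument names are hypothetical.

```python
def reassort(source: tuple, sink: tuple, segment_idxs: list[int]) -> tuple:
    """Return a new sink strain carrying the listed segments
    (indices 0-7) from the source strain; gene flow is one-way,
    so the source strain itself is left unchanged."""
    child = list(sink)
    for i in segment_idxs:
        child[i] = source[i]
    return tuple(child)
```

A strain is represented here simply as a tuple of eight segment sequences; swapping, say, segments 2 and 5 produces a child that is detectable as a reassortant because those two segments now share the source's phylogenetic history while the other six follow the sink's.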
Discussion
FluReF builds upon the visual inspection of reconstructed phylogenetic trees, which is the most commonly used and best accepted method for finding reassortments. However, whereas visual inspection is limited to datasets of around a hundred samples, FluReF is designed to be scalable to very large datasets. Its running time is at most quadratic in the number of samples, so that analyses of datasets with tens or hundreds of thousands of samples can be carried out rapidly; the computational bottleneck is the reconstruction of the nine phylogenetic trees, not the search for reassortments.
The FluReF model does not tie us to searching for reassortments between pairs (or triplets) of sampled and sequenced strains in the same dataset. This is very important, as it is entirely possible that only one of the strains involved in a reassortment has been sampled.
While our results are promising, much work remains to be done. FluReF will benefit from extensive parameter tuning on real datasets with proven reassortments. Focusing the search on segments more likely to be involved in reassortments will both speed up the search and increase its sensitivity; our current implementation is quite conservative and favors specificity over sensitivity. Finally, reassortments between clades could also be sought by a similar approach, with its own set of parameters.
Conclusions
The recent swine flu epidemic, along with discoveries of correlations between different circulating strains of flu, underscore the importance of genomic analyses for future influenza surveillance. Even reassortments in the same lineage can cause a severe outbreak and failure of vaccine coverage. As our databases of flu genomes increase very rapidly, computational approaches must be deployed to analyze the large volume of data and help identify candidate events, such as reassortments, that may pose new health threats. We developed FluReF, an automatic reassortment finder algorithm inspired by the visual identification approach. FluReF parameters were tuned on the synthetic datasets produced by our simple virus evolution simulator to yield no false positives; even at this very conservative setting, FluReF had very high sensitivity, with false negative rates consistently below 10%. With these parameter values, FluReF corroborated the two reassortments we found during a visual analysis of the 75 Human H3N2 New York strains from 2005-2008; it also demonstrated robustness of prediction with temporally and geographically expanded datasets, and verified some of the reassortments found using another bioinformatics method. FluReF is only as good as the data for which its parameters have been tuned; while flu databases accumulate ever more sequences, quality annotation of reassortments has been very limited to date. Any approach to reassortment finding based on statistics or machine learning will benefit from additional reference data.
Authors' contributions
AY wrote the algorithms and carried out the experiments. BM provided guidance and direction. Both authors worked on the manuscript.
Human Papilloma Virus and oral cancer : Narrative review of the literature
Cite as: Fernández A, Marshall M & Esquep A. Human Papilloma Virus and oral cancer: Narrative review of the literature. J Oral Res 2014; 3(3): 190-197. Abstract: Human Papilloma Virus (HPV) infection is now among the most common sexually transmitted diseases, with an incidence of 5.5 million worldwide and 85% of the adult population estimated to carry the virus. Its oncogenic potential and the increase in oral lesions associated with oral HPV infection have led us to carry out a narrative review of the literature on the role of HPV in oral cancer, especially types 16 and 18. We discuss the possible routes of infection, the oncogenic mechanisms, the benign and potentially malignant oral lesions associated with the infection, and the different methods used for detection, prognosis and prevention of infection. We stress the importance of the role of the dentist in identifying individuals considered high risk, and the ease of performing detection in the oral cavity through a quick and simple method such as exfoliative cytology.
The Human Papilloma Virus (HPV) is the most common sexually transmitted infection in the world. It has an incidence of 5.5 million people worldwide 1 , and an estimated 85% of the adult population carries this virus. However, carrying the virus does not necessarily mean having any kind of lesion.
HPV consists of double-stranded DNA containing 8,000 base pairs covered by a non-enveloped capsid, in which three regions are identified. The first is called the "Early Region". It corresponds to 45% of the viral genome and contains the E1, E2, E4, E5, E6 and E7 genes 2 , which are responsible for cellular regulation and transformation 3 . The second is called the "Late Region". It corresponds to 40% of the viral DNA and contains the L1 and L2 genes, which encode the viral capsid proteins. The third, called the "Long Control Region" (LCR), is responsible for regulating cell functions 2 (Table 1). Variation in the E6 and E7 genes allows the identification of more than 120 subtypes of HPV 3 , which are classified according to their oncogenic potential (Table 2) 1 .
These various subtypes of HPV can cause benign, potentially malignant and malignant lesions in the skin, the anogenital area, the oral cavity and especially the oropharyngeal area. In 1907, HPV infection was associated with benign lesions for the first time. Then, in 1976, zur Hausen proposed that HPV has a causal role in cervical cancer and, in 1983, Syrjanen et al. proposed that it plays a role in head and neck cancers 4 . Among the benign lesions are the papilloma, the common wart and condyloma acuminatum 3 . Dysplastic and neoplastic lesions have mainly been associated with HPV subtype 16; among these lesions, squamous cell carcinoma (SCC) of the oropharynx stands out 5 . This association is based primarily on the epitheliotropic characteristics of HPV, on the morphological resemblance between the genital and oropharyngeal epithelium, and on the etiologic role this virus has in cervical cancer 4 .
The importance of determining the presence and type of HPV in oral and pharyngeal SCC samples rests on the epidemiological, histopathological, molecular and even post-treatment evolution differences between SCC that is or is not associated with HPV infection. These differences can be observed in Table 3 1 . From the above, it can be inferred that HPV-associated SCC has a less aggressive biological behavior.
The increase in HPV-associated oral lesions and their oncogenic potential has led us to conduct a narrative literature review on HPV and its role in oral cancer. Below, we discuss the possible routes of infection, the oncogenic mechanisms, the oral lesions associated with the infection (benign, potentially malignant and malignant), and the different methods used for its detection, prognosis and prevention, highlighting the role dentists play. Finally, the authors' comments on the issue are presented.
HPV FEATURES. Transmission.
The mode of transmission of HPV to the oral mucosa is not yet clearly known. The infection usually starts early in life, and the presence of this virus has been demonstrated in 6% of children, 13% of adolescents and 23% of adults 2 . It can be spread vertically and/or horizontally. The first is a perinatal transmission of cervical origin. The second, considered the primary means of transmission, is sexual and occurs through oro-genital contact producing microtraumas in the mucosa in contact with the virus 3 .
Recently, HPV has been detected in histopathology sections of most patients diagnosed with oral cancer who have a sexual history of more than one partner coupled with oro-genital contact, compared with histopathology sections of patients also diagnosed with oral cancer who have not engaged in these kinds of practices. This could therefore be considered important evidence regarding virus transmission 6 .
HPV infects squamous cells because it has an affinity for the epithelium 7 , which is where virus replication takes place. Replication depends on the proteins encoded by the viral genome and on the differentiation degree of the infected cell 3 . First, HPV infects the basal epithelial cells, but viral gene expression is limited in these cells, and additional viral genes start being expressed as the epithelial cells differentiate [7][8] . Mature virions are produced in the granular layer, resulting in the perinuclear vacuolization characteristic of the epithelial cells called koilocytes, which are finally released into the stratum corneum 7 .
Oncogenic mechanism.
HPV infection was first associated with cervical cancer, where HPV types 16 and 18 have been found to be mainly involved 9 .
Whether the Human Papilloma Virus becomes involved in the carcinogenic mechanism depends on the type of virus infecting the epithelial cell, its synergistic action with different agents (physical, chemical and/or biological), the genetic constitution, and the host immune response 3 .
High oncogenic risk viruses insert their viral genome into the host keratinocytes. The part of the viral genome integrated into the cell corresponds to the E1, E6 and E7 regions, while the E2, E3, E4 and E5 regions are lost (Table 1). The loss of the E2 region during the integration process results in loss of control of the E6 and E7 regions, which are directly related to the cell cycle through inhibition of the normal functions of the p53 and pRb proteins, respectively. These two proteins regulate normal cell division. Rb sequesters E2F, the transcription factor required for progression through the cell cycle; this makes Rb a suppressor which prevents the cell from dividing until conditions for division are met. When HPV infects a cell, the E7 protein binds to Rb so that E2F is released, an advance signal for the cell cycle. So, as long as E7 remains bound to Rb, the cell cycle continues in an uncontrolled way, which is a characteristic of malignancy 8 . Furthermore, when the DNA of a cell is damaged, p53 stops cell division and repairs the DNA. If this is not possible, the protein induces apoptosis, ensuring that the damaged cell dies and does not reproduce. The viral protein E6 can bind to p53 and inactivate it. This allows the virus to replicate in the cell, since p53, inhibited by the virus, can neither stop replication nor start the process of cell death [8][9] .
The role of the immune response against HPV is unclear 3 . This response is both cellular and humoral. The first is characterized by the involvement of natural killer cells and CD4 lymphocytes, inducing an adaptive cytotoxic immune response mainly directed at the early E6 and E7 proteins 10 . Regarding the humoral response, IgA, IgM or IgG antibodies reach their maximum values 6 to 12 months after infection [3][4][5][6][7][8][9][10][11] . Antibodies are raised against the viral capsid proteins and early viral proteins, including the E6 and E7 oncogenic proteins [7][8][9][10][11] . The cell-mediated presentation of viral antigens is so minimal that the infection can persist for months or years without being detected clinically. Therefore, the absence of clinical evidence is not synonymous with the absence of the virus 12 .
Squamous papilloma.
It corresponds to a papillary growth of the squamous epithelium 13 and is mainly associated with HPV types 6, 11 [13][14] and 18 13 .
It has an estimated prevalence of 1 in 250 adults 13 . This lesion is more common in children and has the same frequency in men and women. It is preferably found on the soft palate, tongue and lips, although other areas of the oral mucosa can be affected as well.
Clinically, it presents as a soft exophytic mass with numerous verrucoid projections; it is pedunculated, of normal, white or slightly red color, asymptomatic, and approximately 0.5 cm in diameter.
Histopathologically, an intense proliferation of squamous epithelium in papillary growths with projections surrounding connective tissue is seen. Generally, the basal epithelial layer shows hyperplasia and mitotic activity that may involve higher strata. Sometimes koilocytes can be observed in the stratum spinosum, corresponding to clear epithelial cells with pyknotic nuclei altered by the viral infection 15 .
Verruca vulgaris.
The common wart is the most prevalent of the different HPV-associated lesions and affects both the skin and the oral mucosa. The most affected areas in the oral mucosa are the keratinized ones, i.e., the hard palate and gingiva, because of their histological similarity to the skin. The HPV types most closely associated with oral mucosal warts are 2, 4 and 57 16 .
Clinically, a papule or nodule with papillary projections is observed. It may be sessile or pedunculated. Most of them are white but they can also be pink.
Histologic features include intense epithelial proliferation with papilliform projections on the connective tissue, hypergranulosis and koilocytes in the spinous layer 16 .
Condyloma acuminate.
It is a warty lesion associated with infection by HPV 6 and 11 and occurs in the oral and anogenital mucosa. In adults it is sexually transmitted, and it can be passed on through contaminated objects in children under two years old 5 .
The oral lesions are usually found on the labial mucosa, soft palate and lingual frenulum. The lesion is seen as a well-defined exophytic mass; it is sessile, pink and tends to be larger than the common wart (1 to 1.5 cm).
Histopathologic features include intense epithelial proliferation with papillary projections and crypts of keratin between them. Koilocytes are observed in the surface layers and are less prominent than in genital lesions 15 .
Focal epithelial hyperplasia. This lesion was first described as multiple nodules in the oral mucosa in 1965 by Archard et al. It is most prevalent in children in India (37%) and has been associated with HPV types 13 and 32.
It affects the labial, buccal and lingual mucosa, where multiple rounded nodular lesions with a flat surface and pale pink color are observed.
Histopathologically, hyperplasia of the stratified squamous epithelium with thick, elongated ridges and koilocytes in the surface layer can be observed; sometimes the nuclei of the keratinocytes are altered, resembling mitotic figures [15][16] .
POTENTIALLY MALIGNANT LESIONS (CANCERIZABLE).
The presence of HPV in potentially malignant lesions is important because it suggests that the virus may play a role in malignant transformation, most notably in non-homogeneous leukoplakia. It is estimated that the prevalence of this virus in these lesions ranges from 0 to 85% and is mainly associated with virus types 16 and 18 [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] . This wide variation in range may be related to demographic variables, differences in the categorization of the studied lesions, differences in sampling, and the sensitivity of the technique employed for molecular detection 2 .
The role this virus plays in oral leukoplakia, as well as in its pathogenesis and malignant transformation, is unclear. A possible mechanism involves the action of the HPV E6 and E7 proteins, which prompt keratinocytes to re-enter the S phase of the cell cycle, resulting in altered epithelial proliferation and maturation 2,3,12,18,19 .
The most frequent morphologic features in HPV-associated dysplasia are the presence of eosinophilic cells distributed throughout the thickness of the epithelium, due to apoptosis, and cytological changes such as hyperchromatism 20 .
MALIGNANT LESIONS.
Intraoral SCC associated with HPV occurs especially at the base of the tongue 2,6,17,19 . Nevertheless, when considering the larynx and oropharynx, it is more frequent in the latter. The prevalence of HPV-associated SCC is statistically significantly higher in the oropharynx (48.5%) than in the oral cavity (32.5%) and larynx (30%). Comparing its presence across geographical areas, a prevalence of 49.1% has been reported for Asia, 25.6% for Europe and 23.8% for America 21 .
From these data, it can be inferred that there is a high incidence of HPV infection in patients with SCC, suggesting a relevant role in its etiology. The stronger association of HPV infection with the oropharynx can be understood through the morphological difference with the epithelium of the oral cavity and its similarity to the epithelial and lymphoid tissue of the uterine endocervix. Furthermore, the differences registered by location can be attributed to different ethnic groups, environments, lifestyles and health conditions. In Europe and America, infection is primarily associated with sexual activity and number of partners 21 .
According to an analysis by the International Agency for Research on Cancer (IARC), there is an increased incidence of cancer in the world's poorest regions: Africa, Southeast Asia, India and Latin America. There are also divisions across socioeconomic levels within each country, showing that men from the most vulnerable groups have a greater risk of developing and dying from oropharyngeal cancer, while the most vulnerable women are at higher risk of contracting and dying from esophageal cancer 22 .
Patients with HPV-associated oropharyngeal cancer are characteristically younger (by 5-10 years) than patients with carcinomas not associated with HPV. They are usually not smokers or alcohol drinkers, and the risk of developing the disease is equal in men and women 23 .
DETECTION METHODS.
There is no standard method for the detection of HPV in the oral cavity. Currently, however, virological laboratories provide a kit containing a sterile cytology brush and a transportation tube. The oral mucosa is brushed to obtain epithelial cells, in which viral DNA is detected through the polymerase chain reaction (PCR). It is important to note that the quality of nucleic acids in formalin-fixed, paraffin-embedded samples is low; obtaining viral DNA from such samples is therefore not recommended 25 .
Other molecular techniques, such as in situ hybridization and Southern blot, are also known to detect nuclear DNA. From among these methods, however, PCR is preferable, since the other procedures mentioned have been seen to detect a lower amount of viral DNA 6 .
PROGNOSIS.
SCC associated with the presence of HPV has a better prognosis 11,24,26 and a lower mortality rate than SCC not associated with HPV 13 . The reason for this is unclear. It might be explained by the ability of HPV-positive cancer cells to undergo apoptosis in response to DNA damage. Another reason could be the inability of the virus to induce neoplastic transformation of uninfected cells into cancer cells; thus, it is easy to understand why an HPV-positive SCC does not induce multifocal lesions 11 .
PREVENTION.
Prevention of HPV infection has recently been achieved by developing vaccines based on virus-like particles (VLP). VLP are obtained by in vitro synthesis and self-assembly of the major capsid proteins of HPV. They are morphologically identical to HPV virions but contain no viral DNA; thus they cannot transmit the virus or cause disease, but they do induce the generation of neutralizing antibodies and confer protection against HPV.
There are two types of prophylactic vaccines: the bivalent Cervarix, which protects against types 16 and 18, and the quadrivalent Gardasil, which protects against types 6, 11, 16 and 18. These vaccines are administered intramuscularly in three doses, at two and six months after the initial dose, and are effective for 3 to 5 years 2 .
Given the natural history of HPV infection and mainly of cervical neoplasia, it seems logical that vaccination should occur before the initiation of sexual activity. The Advisory Committee on Immunization Practices (ACIP) recommends primary vaccination between 11 and 12 years of age, but a range of 9 to 26 years is also approved by the Food and Drug Administration (FDA) 27 .
Herrero et al. reported that vaccination against the high-risk subtypes (16 and 18) decreases the prevalence of oral infection. The efficacy of vaccination is estimated at about 93% (95% CI 62.2% to 99.7%). The authors also suggested that the vaccine is effective when given to patients who have not been exposed to HPV 28 .
It must be considered that these vaccines are not a treatment for infections. Instead, they provide a benefit if the person receives them before becoming sexually active 13 .
Routine vaccination programs have been implemented in several countries with satisfactory results. Australia showed that a routine program for men and women is effective and successful, with a 77% reduction in genital warts among women aged 12-17 years and a 44% reduction observed in men. The same results were observed in Rwanda, Africa. It has been observed that vaccination against HPV-16 to prevent infection of the uterine cervix could also have an effect on HPV-16 infection at the oral level; however, further studies are needed to confirm this hypothesis 29 .
FINAL COMMENT.
The relationship between HPV infection and cervical cancer has been well established, and its presence has been observed in 90% of intraepithelial neoplasias. This has increased interest in discovering whether there is a positive association between HPV infection and oral cancer. Despite several studies on this subject, the results are not conclusive to date. For example, Kreimer et al. found a prevalence of 24% of HPV presence in SCC, while other studies have found that the prevalence of HPV-associated oropharyngeal cancer is 35%. Näsman et al. observed an increased incidence of tonsillar cancer associated with the presence of HPV, from 23% in 1970 to 93% in 2007 24 . A study by Lopes et al. attempted to establish the prevalence of and relationship between HPV and HPV 16-18 in oral carcinomas by using PCR and qPCR; it suggested that HPV as an oncogenic factor is uncommon and that only high viral loads would have a causal relationship 30 .
In Chile, in both SciELO Chile and the Health Ministry database, no epidemiological data on oral lesions associated with HPV are available, so we are working on detection in keratinized mucosa in groups with and without risk factors. However, according to data obtained from the IARC on the incidence and mortality of oral cavity cancer in South America, the incidence for men is 2.5% and the mortality rate 1.8%, while the incidence for women is 1.4% and the death rate 0.9% 31 .
It is noteworthy that this year the vaccine against HPV will be incorporated into the national immunization program, benefiting all girls aged 9. It will be administered in schools, with a booster a year later, in order to protect against cervical and oropharyngeal cancer. Although the development of HPV-associated genital lesions is more important in women, vaccinating men could also be promoted, since oral health indicators show some increase in oral SCC and HPV involvement.
Considering that the major route of virus inoculation is sexual and that the infection is asymptomatic, significant health policies and appropriate preventive efforts involving both the medical and dental professions are required. A correct history and clinical examination by the dentist are therefore important in order to identify individuals considered high risk, such as those who start their sexual history early, have promiscuous sexual behavior or oral-genital contact, or have a history of warty lesions in other anatomic areas.
Because of the association of this virus with the development of potentially malignant lesions and oral carcinomas, it is recommended to search primarily in verrucoid-like lesions, as this virus tends to produce epithelial proliferation and malignancy, and in every malignant lesion of epithelial nature. This can be done quickly and easily in the clinic by a general dentist using a kit with a cytology brush provided by laboratories, with the presence of the virus then confirmed through PCR tests.
Finally, the importance of promoting sex education campaigns and prevention with the vaccine against the virus should be highlighted. It has been noted in the literature that public policies on education and awareness about HPV are primary preventive measures which have a large social impact and are effective and simple to develop. In the gynecological area, prevention of HPV infection is widely described in the literature, but not in the dental area 29 .
Table 1 .
Viral genome and protein functions 1 . L1: encodes viral capsid proteins for the production of new virus. L2: encodes viral capsid proteins for the production of new virus.
Table 2 .
Human Papilloma virus genotype and its oncogenic risk 1 .
Table 3 .
Difference between HPV positive and HPV negative oropharyngeal cancer 1 .
Unpacking the Dynamics of Urban Transformation in Heritage Places through ‘Critical System Dynamics’: The Case of Beresford Square, Woolwich
: Rapidly growing research in urban heritage studies highlights the significance of incorporating participatory approaches in urban transformation projects. And yet, participation tends to be limited, including only certain segments of the population. It is also acknowledged that cities are 'dynamic' and 'complex' systems. However, there is extremely limited research that captures the dynamic transformation mechanisms in historic urban environments. This paper aims to illustrate a novel, mixed-method and dynamic approach to unfolding the dynamics of urban heritage areas, focusing on the historic area of Woolwich, a south-east suburb of London, UK. We apply 'critical system dynamics' to the analysis of a mixed dataset which incorporates architectural surveys, interviews, online surveys, social media data and visual observations of material change through light archaeology. Within the framework of 'deep cities', the article argues that the transformation of a place is a complex process that can be captured not only based on 'what we see' but also on 'what we cannot see'. In other words, the invisible (values, emotions, and senses) is as significant as the visible. This is of paramount importance as most urban planning policies tend to be based on material, visible remains and less on the spirit or soul of a place.
Introduction
This article explores how the constant and 'deep' transformative character of a historic urban environment can be captured through participatory and dynamic methods to inform present and future sustainable urban transformation practices. We do so by approaching a historic urban environment through the lens of a 'deep city', that is, a city that consists not only of the historic, visible material layers we can see but also the 'layers' we cannot see, yet feel and experience [1]. Given the complex nature of urban transformation, our investigation draws on a novel, cross-disciplinary theoretical and methodological approach which conceptualises heritage and transformation as dynamic social practices, the continuation or interruption of which depends on the dynamic interactions of several social, cultural, economic, political and other variables [2]. In this article, we conventionally name this approach 'urban heritage dynamics' in order to stress that heritage and transformation, as any form of dynamic practice and process, constitute a dynamic, complex system of interrelated elements [2]. The scope of 'urban heritage dynamics' is thus to explore how urban heritage interconnects over time with the wider socio-economic, environmental and other dimensions of an 'urban system' [2].
At this point, it is worth noting that there has been a remarkable shift in how heritage, and in particular urban heritage, has been conceptualised over the years. In academic literature and practice, the concept of heritage has evolved from a 'thing' that needs to be 'managed' due to 'threats' and 'risks' [3,4] into a socio-cultural, meaning-making process [5,6]. More recently, and partially in response to emphasis on the 'discursive' nature of heritage [5], heritage has been approached as an 'assemblage' of both human and 'non-human' agents as well as material and immaterial qualities [7][8][9][10]. In all these approaches and conceptualisations, the dynamic nature of heritage is recognised [2]. Heritage is indeed subject to constant change. This change is sometimes viewed as a threat that merits the development of management [11] or adaptation strategies [2]. At other times, change is viewed as a value in itself that can be studied and integrated into conservation and urban planning [1]. However, despite the universal recognition of the complex and dynamic nature of heritage, very few studies unpack this theoretically and methodologically [2]. Furthermore, these studies are still at an early stage, only 'scratching the surface' in terms of using relevant theories [12] or methods [13]. A limited number of published works [2,14] take a step further in unfolding methodologically and theoretically the systemic, complex, dynamic nature of heritage in diverse heritage contexts by utilising the method of 'critical system dynamics', including a study on modelling change through the same method [15]. It should be noted here that the lack of such studies may be explained by the fact that gathering longitudinal qualitative data can be a resource-heavy and time-consuming process. We are thus in the unique position to contribute an article that is based on observations and surveys gathered over the last 10 years on the urban heritage regeneration of Woolwich.
If sustainable heritage management is about developing strategies for heritage that enable adaptation to changing socio-economic and environmental challenges while also contributing to sustainable development [16], then it is imperative to apply and further refine theories and methods that can best capture the dynamic nature of heritage. In view of this, we contend that 'critical system dynamics' is a suitable method to this end. As explained below, 'critical system dynamics' denotes a method of system dynamics that allows pluralistic and multi-method approaches to data collection and analysis. It is usually applied in contexts where critical matters such as social justice and equality are of research interest. We intend to apply this method in the analysis of the dynamic transformation of the historic urban area of Woolwich Town Centre, South-East London, UK. The case of Woolwich was chosen because it has been one of the most deprived areas in London but also an area that has been subject to rapid transformation over the last twenty years. In addition, as aforementioned, the lead author has been monitoring the transformation of the area over the last ten years through annual visits with postgraduate students. While our original intention was to conduct an analysis of the Town Centre, Beresford Square and the Royal Arsenal (see Figure 1), we soon realised that such an in-depth and complex dynamic analysis in the scope and timeframe of the project could only be achieved if we narrowed down the focus. Hence, we decided to focus on Beresford Square and its Gate, a critical zone lying between the traditional Town Centre and the conservation area of the Royal Arsenal, which is currently undergoing rapid redevelopment.

Figure 1. (b) A Google map view pinpoints the location of Beresford Square and Beresford Gate (red star); Plumstead Road (green star) separates Beresford Square and its nearby town centre from the Royal Arsenal area (yellow star).
Spatial Context
As aforementioned, the focal point of our study in this article is Beresford Square, to which the High Street (Powis Street) of Woolwich Town Centre leads. Beresford Square is marked by the presence of the Royal Arsenal Gatehouse. The Gatehouse, also known as the Beresford Gate, provided the access point for thousands of workers employed in the factories manufacturing weaponry at the site of the Royal Arsenal (Figure 1).
Both Beresford Square and its Gate are located in the conservation area surrounding the Town Centre, which was designated in 2019. The Royal Arsenal is located on the other side of Plumstead Road, which currently divides Woolwich Town Centre from the Royal Arsenal. The Royal Arsenal, a conservation area since the 1980s, has been redeveloped since the early 2000s [17]. The redevelopment has been characterised by the conservation and adaptive reuse of the existing historic buildings as well as by the construction of high-rise blocks of apartments.
The Greater London Council's plans to widen Plumstead Road emerged around the time the Royal Arsenal site was closed in the 1960s. The closure of the Arsenal triggered discussions for new developments on the site and its surroundings. Indeed, as soon as the Arsenal closed, the Royal Arsenal Gatehouse 'was earmarked for demolition by 1969 to permit the widening of Plumstead Road when the Greater London Council was building Thamesmead' [17] (p. 163). However, plans for the demolition faced not only delays but also opposition from the local community and heritage groups. Eventually, Beresford Gate was listed in 1979, alongside a number of buildings located within the main Royal Arsenal site [18]. The listing and consequent protection of Beresford Gate led to the rerouting of Plumstead Road to its north in 1984-1986, the reuse of the Gate as a backdrop to the square, and the pedestrianisation of the square [17] (p. 163), which, at that time, was accessed by trams and other public transportation.
Conceptual Approach
Although our analysis will zoom in on the area of Beresford Square, correlations with the wider area's transformations will also be made. As aforementioned, our analysis will be framed within the concept of 'urban heritage dynamics' [2]. This approach draws on diverse but interrelated theories, including urban dynamics, complexity, systems thinking (with emphasis on participatory systems thinking), grounded theory and assemblage theories, e.g., [19]. These theories derive from different traditions, including hard and soft science traditions, but merge into a common theoretical toolbox being developed within heritage-led urban studies. A common ground for these approaches is critical urban theory [20]. The phrasings within critical urban theory, such as 'right to the city' and 'cities for citizens' through the reinvigoration of participatory urban civil societies, resonate with the aim of implementing a complex methodological approach captured through theories of urban dynamics, complexity, systems and assemblages.
Urban dynamics is one of the fundamental theoretical underpinnings of 'urban heritage dynamics'. Urban dynamics looks at the growth or decline of cities as the result of dynamic interconnections between available land, housing, industries and populations [21]. Jay Forrester was the first to develop an urban dynamics theory, still in use today, simulating how 'empty land' evolves into a dense urban area through the construction of new housing and businesses [21]. According to Forrester, new housing attracts a managerial, professional population, a phenomenon that contributes to the overall attractiveness of the area and the growth of new businesses. This urban growth, though, reaches an equilibrium followed by gradual decline due to the deterioration of housing and business infrastructure. The attractiveness of the area declines, with inhabitants abandoning the area. The newcomers are inhabitants occupying lower-paid jobs while overcrowding the already deteriorated housing stock [21]. As a result, depopulation occurs due to the deterioration of the material fabric, closure of businesses, overcrowding of houses and unemployment. Demolition is often proposed by urban planners as the way forward for regrowth and revitalisation [22].
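The growth-and-decline dynamic Forrester describes can be illustrated with a toy simulation. The variable names, rates and functional forms below are our own simplified assumptions for illustration, not Forrester's published Urban Dynamics model:

```python
# Toy Forrester-style urban dynamics sketch (illustrative only: the
# variable names, rates and functional forms are simplified assumptions,
# not Forrester's published Urban Dynamics model).

def simulate(years=100, dt=1.0):
    housing = 100.0     # housing stock (arbitrary units)
    population = 80.0   # resident households (arbitrary units)
    history = []
    for _ in range(int(years / dt)):
        # Attractiveness rises when housing is plentiful relative to demand.
        attractiveness = housing / max(population, 1.0)
        construction = 0.05 * housing * min(attractiveness, 2.0)
        deterioration = 0.04 * housing  # ageing stock decays over time
        in_migration = 0.03 * population * attractiveness
        out_migration = 0.05 * population / max(attractiveness, 0.1)
        housing += dt * (construction - deterioration)
        population += dt * (in_migration - out_migration)
        history.append((housing, population))
    return history

trajectory = simulate()
```

Depending on the rates chosen, such a model can exhibit growth, equilibrium-seeking and decline phases, which is the qualitative behaviour Forrester describes.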
Through an 'urban heritage dynamics' perspective, we intend to offer an alternative approach by which urban 'renewal' occurs on land occupied by 'obsolete structures' of heritage value, revived through adaptive reuse. Through the adaptive reuse of 'obsolete' and 'abandoned' heritage buildings and sites, a socio-economic and cultural revival is achieved [2]. It should be noted here that this 'revival', often termed 'heritage-led' or 'heritage-driven' regeneration, is not without its unintended, negative consequences, with gentrification leading to the displacement of local, deprived communities being one of them [23,24]. It is, therefore, imperative to examine the dynamic interplay of the various factors mobilised during urban transformation programmes in order to prevent unintended socio-economic and cultural consequences. The method of 'critical system dynamics' can provide a useful tool in this direction [25]. The use of this method calls for a re-conceptualisation of heritage as a dynamic, complex 'system' or 'assemblage'. Such a reconceptualisation can benefit from reviewing 'assemblage', 'systems thinking' and 'complexity' theories.
The idea that heritage is a 'complex', 'dynamic' 'assemblage' subject to constant change and transformation implies that heritage is not a 'thing' but a 'process', or, even better, a socio-cultural practice. As a socio-cultural practice, the ways in which urban heritage emerges, evolves, persists, disappears or revives over time can be the object of study of 'urban heritage dynamics'. For Shove et al. [26], who have been studying the dynamics of social practices, there are three key fundamental elements upon which the continuation or disappearance of social practices depends. These elements include materials, competencies (knowledge/skills) and meanings. In the context of urban heritage as a social practice, Fouseki [2] has identified additional elements that are critical for the continuation or disappearance of urban heritage transformation. These include 'senses and emotions', 'space/place/environment', 'time' and 'resources'. The 'agents' driving these practices are omnipresent [2] (p. 8).
In detail, 'materials' connote the physical fabric associated with heritage, such as buildings, ruins and archaeological remains. 'Values' and 'meanings' refer to the significance attributed by individuals or groups to heritage objects, sites, places and practices. 'Senses' and 'emotions' connote the deeper feelings of those who have a deeper connection to a place. 'Place' and 'space' denote, on a micro-level, individual homes/settings and, on a macro-level, a neighbourhood area. 'Competencies' or 'skills' include the background knowledge of how to perform a practice. 'Time' is conceptualised in its broadest possible sense. It can, for instance, connote the ways in which time is experienced or the time span that a heritage practice covers. 'Resources' refer to the financial, human or other resources needed to materialise a heritage action [2].
The dynamic continuation of urban heritage practices will depend on how the aforementioned dynamic elements are perceived by the various stakeholders as connected, disconnected or reconnected over time [2]. These dynamic elements are connected through non-linear relationships, the investigation and mapping of which requires mixed data, including both quantitative and qualitative information [27,28]. This approach aligns with principles of complexity theory, critical realism and 'critical systems thinking'. In critical realism, the object or subject of observation is imbued with non-linear relationships, multiple causality, and interplay between structures and agency [29] (p. 2). In 'critical systems thinking', the world is conceptualised as composed of complex systems, that is, groups 'of interacting, interrelated, or interdependent parts that form a complex and unified whole that has a specific purpose' [30] (p. 2).
'Complexity theory' and 'critical systems thinking' share elements in common with assemblage theories [31,32]. DeLanda, whose work has been particularly influential in the field of heritage studies, e.g., [7,8,33], regards assemblages as 'wholes' whose properties emerge from the interactions of their parts [32]. It is worth noting here that Buchanan criticises such an approach as object- and material-centric, which contradicts the initial assemblage theory of Deleuze and Guattari, who approached assemblage as a dynamic arrangement between two (or more) semi-autonomous formations that encompasses the organisation of bodies and the organisation of discourses [34] (p. 113). However, what all the above theories share in common is that an 'entity' or 'process' is subject to constant transformation, a transformation that depends on the dynamic interactions of several factors and parameters covering the wider spectrum of social, environmental, economic, cultural and other dimensions [2]. This complexity thus calls for the application of a method that can capture the dynamic and complex nature of urban heritage, and in particular of a historic urban environment over time.
Materials and Methods
Given our conceptual approach of 'urban heritage dynamics', we decided to deploy the method of 'critical system dynamics' for the unveiling of community power dynamics and social relationships in the case study analysis. 'Critical system dynamics' denotes the use of 'system dynamics' for the analysis of problems related to disadvantaged and oppressed groups. The ultimate goal of using this method is to 'advance democracy and justice using system dynamics tools' [35] (p. 23).
The underpinning conceptual foundation of 'critical system dynamics' is 'critical systems thinking'. 'Critical systems thinking' questions the methods, practice and theory framing a social problem. It is also committed to pluralism, in that it insists that all systems approaches, qualitative or quantitative, have a contribution to make [36] (p. 12); see also [37]. Our analytical approach aligns with the principles of 'critical systems thinking' in that we have critically debated and questioned our methodological tools and techniques while adopting a pluralistic methodological approach that combines qualitative and quantitative data, remaining aware of the need to improve urban policies through our results; see also [15].
It is worth clarifying at this point that the term 'system' refers to a set of things and/or people interconnected in such a way that they produce their own pattern of behaviour over time [36,37]. In other words, the events and patterns we observe are driven by systemic structures and hidden mental models [38]. By using 'critical system dynamics' in complex social contexts, we intend to gain an in-depth understanding of how certain elements interconnect to form a pattern, behaviour or phenomenon [39]. As mentioned in the previous section, the notion of a 'dynamic system' connotes a 'complex entity' of interconnected elements which change over time. This 'entity' can be a social practice, a building or a city. The underlying premise is that changes to any of the dynamic elements of the 'system' will affect the entire system, because a complex system comprises non-linear, multiple, interconnected loops which change over time, with some loops disappearing or re-appearing under certain conditions [40].
The application of 'critical system dynamics' requires first delineating the problem under investigation. From a 'critical system dynamics' point of view, the focal problem usually revolves around social justice and equality issues. In this article, the problem whose complexity we endeavour to unfold relates to the gradual decline of Beresford Square and its market in Woolwich Town Centre, as well as the growing social disparity between Beresford Square and the nearby 'Royal Arsenal' riverside, which has been subject to rapid transformation over the last twenty years. By deploying 'critical system dynamics', the factors that have contributed to the growth and decline of Beresford Square and its market, as well as to the social division between the two areas, can be unveiled. By revealing these factors, we can inform future urban planning strategies and policies for the wider area.
Having identified the problem, the next methodological step is to map the non-linear cause-and-effect relationships between the 'dynamic elements' that have been instrumental in the dynamic transformation of the urban area. We accomplished this by looking at the thorough Survey of London (Volume 48), which focuses specifically on the development of Woolwich and its built heritage since the 17th and 18th centuries. The information extrapolated from the Survey of London was complemented by qualitative data on the perceptions and attitudes of those living or working in Woolwich towards the local heritage and the urban transformation in the area, with particular emphasis on Beresford Square, Beresford Gate and its market. More specifically, we drew on 29 in-depth, lengthy, semi-structured interviews (with an average duration of 1.5 h each) conducted by the UCL (University College London) and University of Edinburgh/University of Stirling teams of the CURBATHERI project (19 conducted in 2018 by the UCL team and 10 conducted in 2021 by the Edinburgh/Stirling team). This large number of in-depth interviews resulted in more than 100,000 words for thematic analysis. The longitudinal interviews allowed us to capture any perceived change in heritage values over the last three years of the most intense and rapid transformation programmes. The interview data were further complemented by 98 responses to an online questionnaire carried out by postgraduate students at UCL's Institute for Sustainable Heritage in 2020 during the COVID-19 pandemic.
The interview data were first thematically coded using NVivo 12, a software package for qualitative analysis. The data were coded independently by each individual UCL researcher and the lead author before the researchers, as a team, concluded the final thematic analysis. Hence, by conducting double coding, we minimised inevitable interpretation biases. Following the principles of grounded theory, according to which the data drive the theory [41], the interview data were initially coded through an open coding process, identifying as many variables and themes as possible related to the key research questions.
A series of 'cause and effect' relationships between the various codes were then identified and mapped in the NVivo 12 software [42]. This analysis was further corroborated by findings emerging from the online questionnaire and the Survey of London. The identification of the 'cause and effect' relationships formed the basis for drawing a 'causal loop diagram' in the Vensim PLE 64 software, visualising the feedback loops identified as having caused the behaviour of key variables over time [43]. In other words, causal loop diagrams depict the causal links among variables with arrows from cause to effect, creating a series of non-linear relationships and loops. The loops are cause-and-effect relationships which can grow exponentially (reinforcing loops) or start declining and bridging the gap between a desired and an actual goal (balancing loops). Each cause-effect relationship is indicated with + or − depending on whether the relationship is positive and reinforcing (e.g., the more ... the more) or balancing (e.g., the more ... the less).
The causal loop diagram provided the basis for developing a 'stock' and 'flow' system dynamics model. The 'stock' and 'flow' model simulates what accumulates over time (stocks) and what drives this accumulation (flows). Each relationship between 'stocks' and 'flows' is described with simple mathematical equations, enabling the simulation of the dynamic hypothesis created. This can prove particularly challenging [15]. From a 'critical system dynamics' perspective, it could be argued that abstract concepts, such as heritage values or cultural meanings, cannot be represented via mathematical equations. Through critical discussions with system dynamics modellers, it became apparent that the effort to map the relationships of different variables with simple mathematical equations forced us to think even more about how these interrelationships behave [15].
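As a minimal illustration of how a 'stock' and 'flow' relationship reduces to simple equations, consider a single stock adjusted by a goal-seeking (balancing) inflow. The stock name, goal and adjustment rate below are hypothetical placeholders, not variables from our Woolwich model:

```python
# Minimal stock-and-flow sketch: one stock driven by a balancing
# (goal-seeking) loop, integrated with Euler steps. All names and
# rates are hypothetical placeholders.

def run(goal=100.0, stock=20.0, adjust_rate=0.2, steps=30, dt=1.0):
    trajectory = [stock]
    for _ in range(steps):
        gap = goal - stock          # balancing loop: the larger the gap...
        inflow = adjust_rate * gap  # ...the larger the corrective flow
        stock += dt * inflow        # the stock accumulates the flow over time
        trajectory.append(stock)
    return trajectory

trajectory = run()
# The stock rises from its initial value of 20 towards the goal of 100
# and levels off: the characteristic behaviour of a balancing loop.
```

A reinforcing loop would instead make the flow proportional to the stock itself, producing exponential growth rather than goal-seeking behaviour.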
The socio-cultural data were further complemented with material data related to the material change and transformation of Beresford Gate. To this end, the method of light archaeology was utilised. Light archaeology is a stratigraphic and non-destructive research method of historical archaeology used for the investigation of complex spatial contexts (and not just single sites) [44]. It developed progressively between the 1970s and 2000s within the (mainly) medievalist Italian archaeological community. In light archaeology, the 'stratigraphical observatory' is key. This refers to a targeted building or architectural complex selected to investigate historical themes and territorial systems on a regional scale [44]. For the analysis of the Woolwich case, the targeted building had to be one representing various chronological phases and associated material changes that echo the wider transformation moments experienced by the area as a whole. The 'stratigraphical observatory' then becomes an additional information source that enables a deeper understanding of how an urban area materially evolves, unveiling known and unknown layers of history and material transformation as well as the social values associated with these layers [45].
For the case study of Woolwich, we concluded that Beresford Gate fulfils the selection criteria for conducting light archaeology. Beresford Gate lies on the boundaries that physically, socially and symbolically separate the 'gentrified' Royal Arsenal from Woolwich Town Centre. It is an integral component of Beresford Square and yet also feels disconnected from it. Drawing on cartographic and iconographic sources (old paintings, architectural sketches, historical photographs), the analysis of the south elevation of the front of the gate facing the square evidenced five main transformation phases of the building, dating from 1828 to the 1990s. Some of these detected phases clearly evidenced that the gate is a tangible witness of the material and social changes of Beresford Square, and the whole Woolwich area, further supporting its preservation (Figure 2). It should be noted here that the other front elevations revealed additional phases of the gate's individual history which are still subject to analysis.

Since this piece of research relies mainly on qualitative data, we performed a series of validation strategies, including a thorough historic review of the case study; regular discussions, through critical self-reflection, on the biases that researchers from different disciplines bring to the study; use of peer debriefing; and independent coding before discussing the various themes together [46]. One of the main means of validation is to test and apply the model in a real-life context in order to inform decision-making processes. This is the next step of our research, for which we are seeking funding. However, we did discuss the modelling process with stakeholders associated with the case study of Woolwich as well as the case studies of the entire project, including Florence, Barcelona and Oslo. This enabled us to explore the cross-cultural reflections of decision-makers towards the methodology and the key findings emerging from the analysis.

The key methodological steps followed are summarised in the diagram (Figure 3) below.
Results from the Survey of London Analysis
Volume 48 of the Survey of London for Woolwich was originally published for English Heritage in 2012 by Yale University Press, New Haven and London, on behalf of the Paul Mellon Centre for Studies in British Art, London, and edited by Peter Guillery. The Survey details the changes of, mainly, the built environment across Woolwich since the 18th century. It is a dense document, the information of which we had to categorise in such a way that the 'deep history' and 'heritage' of the area could be captured. The information associated with Beresford Square was organised into chronological phases. One of the key findings emerging from the analysis of the Survey of London is that all attempts to reverse the gradual decline of Beresford Square and its market, through the 'canonisation' and relandscaping of the square, the pedestrianisation of the main High Street (Powis Street) or even the unsuccessful re-opening of the covered market, failed. Our system dynamics analysis, as shown below, indicates that one of the main reasons for failing to reverse the decline is the failure to capture and revive the 'deep values' and 'deep sense of place' which, in this case, were associated with a vibrant, albeit chaotic, environment characterised by people passing by and through, connecting, socialising, hopping onto trams and buses or stopping to buy diverse products from the market stalls. This vibrant atmosphere is succinctly encapsulated in the storytelling of a local resident who recalls his childhood in the market [47]: "Excitement, anticipation, of sights and sounds. Hurry to the bus stop; hop on the bus or the tram. The trip in; familiar sights. And in the distance... The Market, all noise and smells, crowds, bustling, shuffling, smiling folk. People looking, and touching, and asking, meeting and talking, buying and selling, calling and shouting".
Light Archaeology
As aforementioned in Section 2, the analysis of the south elevation front of the gate, facing the square, evidenced five main transformation phases of the building, dating from 1828 to the 1990s. Some of these detected phases clearly evidenced that the gate is a tangible witness of the material and social changes of Beresford Square, and the whole Woolwich area, further supporting its preservation.
In detail, Phase 1 represents the date the gate was built (i.e., 1828), following the clearance of cottages in town to open up the road to the Arsenal. This first gateway in plain yellow-stock brick is clearly recognisable today as the lower floor of the present building, even if some alterations were made to accommodate the later additions (Figure 4).
The early phases of the gate can be observed in historical photographs from the end of the 19th century, where Phase 2 is also visible; this phase relates to the addition of the western superintendent's office in 1859 and its bell tower, as clearly illustrated in the old illustration "The procession leaving the Arsenal, Woolwich, United Kingdom, funeral of the prince Louis Napoleon" from the magazine The Graphic, dated 1879 [48]. Phase 3 is dated to 1891 and corresponds to the superstructure abutting the first gate. This addition in red brick, with dressed stone, machicolated cornices and a clock over the central of three openings, gave the gate a new monumental façade. As mentioned before, this phase altered the first structure. For instance, the buttresses flanking both the central and the side footways appear more raised and more protruding compared to the former ones. Subsequently, all the memorial plaques, the ones inserted in the buttresses inscribed '1829 B' and the Arsenal's coat of arms between the King's monogram 'G.R. IV', originally above the footways, were placed higher in the new walls. The footways were widened in 1936 (Phase 4), and the iron gate was replaced with a spear-headed one [48]. The last alteration (Phase 5) is the most severe: the demolition of the Arsenal's walls abutting the gate on both sides left the gate completely isolated and, in fact, at risk of demolition during the widening of Plumstead Road in the 1980s.
While Phases 2 and 4 concern more the Arsenal's internal use of the building, Phases 1, 3 and 5 can also be connected to relevant socio-economic transformations of Beresford Square and Woolwich as a whole. Phases 1 and 3, albeit at different scales, represent two crucial moments in the relationship between the 'inside' (the Arsenal) and the 'outside' (Woolwich). The gate, named after and commissioned by Master General Beresford in 1829, was the only major work that took place in the Arsenal following the end of the Napoleonic Wars, and it changed the Woolwich townscape, replacing cottages and resulting in the creation of a road. This connected the Gate and the Arsenal with the Square. At the end of the 19th century (Phase 3), the main entrance was monumentalised, possibly mirroring the Arsenal's dominant role in local economic growth, as well as the expansion of Woolwich and the growth of its market and square [48]. With the demolition of the walls in the 1980s (Phase 5), the 'inside-outside' relationship irrevocably changed. However, even if these two spheres materially disappeared, the isolated gate, framed within other relevant urban transformations following the closure of the Arsenal and the rerouting through Plumstead Road, seems to still represent for some people a symbol of physical, social and symbolic disconnection [2].
Interview Findings
The rich interview data were thematically coded into 68 core categories classified into the following broader overarching categories: values and perceptions towards Beresford Square, Beresford Market, Beresford Gate and the Royal Arsenal, and the factors that enhance or inhibit the attractiveness of Beresford Square, the Royal Arsenal and the Town Centre. For instance, the overarching category of 'values' included sub-categories specifying the particular values attached to the area, such as historic, aesthetic, social and economic values. The idea of 'attractiveness' here refers to the extent to which those living or working in Woolwich are willing to keep living in the aforementioned places.
The socio-cultural data revealed, as expected, great diversity among residents' perceptions of Beresford Square, its Gate and the Royal Arsenal. Beresford Gate was recognised by almost all respondents as a monument of iconic and symbolic value despite the loss of its original function, i.e., as the 'passing by' and connecting point between the Royal Arsenal and Woolwich Town Centre.
Similarly, official voices in the area highlighted the need to enhance the presence of Beresford Gate through its adaptive reuse, lighting at night, and the planting of trees in its surroundings.
In contrast to Beresford Gate, Beresford Market was associated with both negative and positive perceptions. For most Royal Arsenal residents, the market was not viewed as holding a special heritage value, while the products sold at the market stalls were perceived as being of low quality. On the other hand, other respondents living outside the Royal Arsenal noted the 'friendly' atmosphere of the market as well as its distinct community identity and cultural diversity:

"I think most of the people [here], they have their own market, their things here, so they are important. For example, African communities, they have their own culture, own food, own shopping, but this doesn't stop us from going here". (Interviewee 15, local resident of Woolwich living outside the Royal Arsenal and the Town Centre)

Interestingly, the values with which the market is attributed are further projected onto the wider Beresford Square where the market is located. One of the interviewees, for instance, commented on how the square as a whole reflects the rich cultural diversity of Woolwich.
"And when you look at all different groups, and the nationalities. You know, there are not many squares we can get so many communities in one space. So it's very diversified in Woolwich". (Interviewee 2, local resident living outside the Royal Arsenal)

Having identified some of the key values attributed to the key locations under study, the next step was to identify themes related to what factors the various communities believe enhance or inhibit the attractiveness and growth of each area. Although most local markets in Woolwich have been suffering from an overall decline, our system dynamics analysis demonstrated that the rerouting of the buses played a detrimental role in the decline of Beresford Market, a much more significant role than the closure of the Royal Arsenal. The rerouting led to the physical and social disconnection of the market from the rest of Woolwich. Recently, official authorities have made noticeable attempts to boost new and distinctive international food stalls while reinforcing the nature of the market as a passing-through point. These attempts are noted by residents in the area:

"There is a Bulgarian shop and a lot of Bulgarian people go to that shop and also one specializes in Afro Caribbean". (Interviewee 10, local resident living outside the Royal Arsenal)

"If you just watch the different trucks here, each of them seems to show a different part of the world". (Interviewee 21, local resident living outside the Royal Arsenal)

"For example, there's a Nepalese community, there's Nepalese food truck and you will see all these Nepalese and they'll sit together. Yes. They're very nice, yeah, so for them this is their community, they go there". (Interviewee 40, local resident living outside the Royal Arsenal)

In addition to international food stalls, performances and festivals taking place in Beresford Square are noted as a distinct feature of this 'side of the road' in contrast to the Royal Arsenal.
"Last week, there were a mini music festival, local school and communities. They built a stage and there were performing. Well, do they do these in the other side? I'm not sure". (Interviewee 2, local resident living outside the Royal Arsenal)

The festivals, in particular, seem to have acted as a 'contact zone' between the two sides, as Royal Arsenal residents have been attending them.
"There was a festival there, one time. And you could see that everybody from this part (Royal Arsenal) went down there, it was like Jamaica, African festival in the Square. Everybody was there. So if there keep on doing that, people will start mixing". (Interviewee 3, resident of the Royal Arsenal)

In addition to the festivals, the built heritage of the Royal Arsenal has the potential to bring the two communities together. Residents living outside the Royal Arsenal highlighted that the built heritage is one of the key reasons why they would like to visit the area, despite the unaffordable prices of the local restaurants/pubs and of the Royal Arsenal's farmers' market.
"We lived in Woolwich and we saw the gates open. That's always been a wall... people never went behind that wall for years and years and years. And then one day, the gate was open and then we went in there... they were these incredible old universal original warehouses and you know, officers mess and all that. We just thought it was fabulous". (Interviewee 8, local resident living outside the Royal Arsenal)

Overall, the physical disconnection of Beresford Square and Gate from the Royal Arsenal reinforces social disconnection between the communities living on both sides, whose socio-economic profiles also differ, with affluent communities gradually inhabiting the Royal Arsenal. However, over time, there are signs indicating that the built heritage, and the intangible heritage associated with the cultural diversity of the market, have the power to act as catalysts of social cohesion.
Online Survey
The interview data were complemented with data collected via an online survey carried out by a team of MSc Sustainable Heritage postgraduate students supervised by the lead author during the COVID-19 pandemic. Not unexpectedly, the questionnaires also revealed distinctly different opinions between those living in the Royal Arsenal and those living in the Town Centre or elsewhere in Woolwich, with the latter groups tending to feel more dissatisfied with the rapid degree of change and transformation occurring on Beresford Square than the Royal Arsenal residents (Figure 5). Overall, the respondents tend to view changes occurring on Beresford Square positively, while they seem uncertain about changes on Powis Street (Figure 6). The diversity of responses seems to depend on the number of years each respondent has been living in Woolwich (Figure 7). For instance, residents who have been living in Woolwich for more than 10 years seem to favour the transformation of Beresford Square and less so the changes on Powis Street. On the other hand, recent incomers seem to be satisfied with the recent changes on Powis Street.

The Royal Arsenal residents, more specifically, tend to be either satisfied with the transformation of Beresford Square or dissatisfied with all the changes happening (mostly associated with townscape transformation). This may be explained by the fact that the majority of Royal Arsenal residents avoid spending time on Powis Street and hence may not have experienced the transformation processes in this area, unlike in Beresford Square, which provides a 'passing by' point for catching the DLR (Docklands Light Railway) train. Those living in the Town Centre are generally dissatisfied with all changes. However, those inhabiting other areas of Woolwich seem to be satisfied with the overall changes, especially with changes occurring at Beresford Square. Interestingly, despite its vibrance, the Town Centre has received the lowest number of positive responses. This may be explained by the fact that only recently has the Town Centre been going through a targeted regeneration programme, as part of Historic England's Heritage Action Zone scheme (Figure 8).

Hence, what the online survey findings illustrate is that differences in perceptions among residents depend largely on the area in which respondents live, as well as on the number of years they have been living in each area. As mentioned in the introduction, we deployed 'critical system dynamics' for three main reasons. First, we aimed to capture the plurality and diversity of views of segments of the population instead of homogenising the population as one entity. Secondly, we opted for synthesising mixed (qualitative and quantitative) data. Thirdly, we focused on the critical role that issues related to
social justice, equality and cultural diversity play in urban transformation policies. Having identified the critical non-linear cause-and-effect relationships in the thematic analysis of the interview data using the 'Relationships' function in NVivo (Figure 9) and the Survey of London, we developed a causal-loop diagram in the Vensim software. The causal-loop diagram we created represents the dynamic transformation of Beresford Square, its Gate and its Market, although we acknowledge that Beresford Square occupies only a small section of the wider area. However, by zooming in, we are able to look more closely at how the area transformed over the years and what key values and attributes were missed by planners during these transformation processes, leading to the gradual decline of the square and its market.
In our causal-loop diagram, we mapped six main 'stocks' (highlighted in blue): the Royal Arsenal population (newcomers), the Woolwich local community residing outside the Royal Arsenal, the culturally diverse population, the number of market stalls, the transportation infrastructure, and the sense of connectivity/social cohesion. Unlike the traditional urban dynamics model, which focuses on the dynamic interaction of housing, business and population dynamics [21], in the case of Beresford Square the critical elements proved to be, in addition to the population, the transportation infrastructure, the market and the sense of social connectedness. In order to better comprehend the evolution of the square and its market, it is important to look at how the market historically emerged as a social and cultural practice. The in-depth study of the morphological changes to Beresford Square, as recorded by the Survey of London and further complemented with information extrapolated from old images, testimonials and light archaeology analysis, unveiled how the 'open land' occupied by the square evolved organically, in a grassroots manner, into a public space which initially hosted a grassroots market not subject to a particular legal framework; this market existed from the beginning of the 19th century, before the opening of the Royal Arsenal. The market gradually grew and eventually acquired a legal status in 1879. Attempts to 'regulate' the square and its market have been continuous since then. Indeed, the square has been marked by a continuous 'tension' between the respective authorities of each period, who sought to 'regulate' and 'put an order' to the open space by endeavouring to impose a 'rectangular shape' on it through the demolition of the small number of houses and pubs built back in the early 19th century, and the communities that resisted this regularisation process (Figure 10).
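The feedback structure implied by these stocks (transport passing through the square drives footfall; footfall sustains market stalls; a busy market reinforces the sense of connectivity, which in turn attracts footfall) can be sketched as a minimal stock-and-flow simulation. The sketch below is purely illustrative: the stock names mirror those in our causal-loop diagram, but every coefficient, rate and initial value is a hypothetical assumption chosen only to show how a one-off rerouting shock can tip a reinforcing loop into decline; it is not calibrated to the study's data.

```python
# Toy stock-and-flow model loosely inspired by the causal-loop diagram.
# All parameters below are illustrative assumptions, not empirical estimates.

def simulate(years=40, dt=1.0):
    market_stalls = 135.0      # stock: number of market stalls
    connectivity = 1.0         # stock: sense of connectivity / cohesion (index)
    buses_through = 1.0        # stock: transport routed through the square (1 = yes)
    population = 10_000.0      # stock: local population around the square

    history = []
    for t in range(int(years / dt)):
        # One-off shock: buses rerouted away from the square at t = 10
        # (echoing the 1984 Plumstead Road rerouting).
        if t == 10:
            buses_through = 0.0

        # Footfall depends on through-traffic and connectivity (assumed form).
        footfall = population * (0.2 + 0.6 * buses_through) * connectivity

        # Stalls grow slightly with footfall and decay when footfall is low.
        shortfall = 1.0 - min(footfall / 5000.0, 1.0)
        market_stalls = max(
            market_stalls + (0.002 * footfall / 1000.0
                             - 0.08 * shortfall * market_stalls) * dt,
            0.0,
        )

        # A busy market reinforces connectivity; losing through-traffic erodes it.
        connectivity = max(
            connectivity + (0.01 * market_stalls / 135.0
                            - 0.01 * (1.0 - buses_through)) * dt,
            0.1,
        )

        population *= 1.005  # steady population growth (Royal Arsenal development)
        history.append((t, round(market_stalls, 1), round(connectivity, 2)))
    return history

hist = simulate()
```

Under these assumed parameters, the stalls hold steady while buses cross the square, then decline steadily after the rerouting shock even though the population keeps growing, which is the qualitative pattern the causal-loop diagram is meant to capture.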
Thus, the square and its market have been historically characterised by a sense of 'randomness', 'irregularity', 'informality' and 'disorder', which was further exacerbated by the inclusion of trams and buses crossing the square amidst market stalls and thousands of Royal Arsenal workers or residents doing their shopping (Figure 11). This is an additional 'deep feature' of the square. Beresford Square has been functioning for years as the 'passing-by' or 'connecting point' between the Town Centre, the Royal Arsenal and the rest of Woolwich. Trams and buses were cutting through the 'square' amidst hundreds of stalls and thousands of people (Figure 12). In a way, this very nature of the square and its market as a 'meeting point' is one of the 'deep values' of the place, a value that is not materially visible and one which can be traced through the unveiling of the 'deep layers' of the area. The 'disorder' and almost 'chaotic' nature of the square and its market were at the very heart of this space. The
'Beresford Gate' is possibly one of the few tangible reminders of this 'passing by', 'meeting point' function of Beresford Square. It could be argued that the square has not totally lost this 'passing by' feature, given that the DLR (Docklands Light Railway) station is located on the side of the square. However, this side location means that most people move directly to the station without crossing through the square. In addition, the recent opening of an Elizabeth line station in the Royal Arsenal has further contributed to the loss of the 'passing by' nature of Beresford Square. In a way, despite the physical disconnection of the Gate from the Royal Arsenal, the Gate became a reminder of the 'deep values' of the place. As such, the Gate has a huge potential role in reviving the 'deep spirit' of the place.
The widening of Plumstead Road in 1984, intended partly to reroute the buses outside Beresford Square in order to provide a safer environment, seems to have constituted one of the key drivers, if not the main one, of the decline that the square and its market have since been experiencing. The character of the place, which had evolved organically in a grassroots manner, was rapidly transformed by the Plumstead Road project. As illustrated by memoirs of former residents posted on blogs (see quote in the previous section), the square stopped being the connecting or 'passing-by' point, the number of market traders started declining and whole families of traders began disappearing. The bus rerouting was one of many interventions to Beresford Square. Other interventions included the demolition of buildings around the square and the closure of Beresford Gate, which ran counter to the 'deep values' of the area (Figure 12). Indeed, the area and its market gradually declined: the number of market stalls decreased from 135 to 10 despite the growth of the population resulting from the Royal Arsenal development (Figure 13).
As the existing market declined, the need for a new market emerged in the context of the Royal Arsenal area. A farmers' market was established in 2015 and currently takes place twice a month within the Royal Arsenal complex, standing in juxtaposition with Beresford Market. A promising point of reversal is provided by the gradual increase in stalls offering diverse food at Beresford Square. Stories of local traders have been featured in a Museum of Docklands exhibit and on hoardings at the Royal Arsenal site prior to the launch of the Woolwich Works cultural centre at the Royal Arsenal.
Discussion
Our 'critical system dynamics' analysis enabled us to capture not just the 'deep urban layers' 'we can see' through material traces of the past but, more importantly, what 'we can feel' and 'experience' [1]. In other words, we contend that through 'critical system dynamics' we can identify the often 'invisible', intangible 'deep values' of a place. Although the resulting model merits application and validation on the ground, the process of mapping the 'deep transformation' of Beresford Square through historical and social research made the unfolding of the 'deep values' of the place feasible. By doing so, urban planning and urban conservation policies can build upon the deeply embedded features of a place, alongside contemporary perceptions of this place, to provide sustainable solutions for the present and future.
It could be argued that the 'deep values' and 'deep features' of a place we refer to echo, to some extent, the notion of 'genius loci' [49,50]. The concept of 'genius loci' has been defined as the 'intangible quality', the 'spirit' or 'soul' of a place, which can be perceived physically and/or spiritually [49] (p. 225). 'Genius loci' reveals itself 'through visible tangible and perceivable non-material features' and is 'made known by underlying processes', as it signifies a process that is happening and which 'cannot intentionally be created' [49] (p. 225). Hence, the 'genius loci' or 'spirit of the place' is a temporal, dynamic concept, as it encompasses all the temporal visible or invisible layers that have shaped the 'soul' of the place. However, as an urban space, and even more so a historic urban environment, is in constant flux, we should approach the notion of 'deep values' (or genius loci) as a 'complex' and 'systemic' concept, a dynamic 'assemblage' whose analysis can benefit from systems thinking and the respective methods.
The critical role that 'deep values' can play in urban transformation projects advances our 'urban heritage dynamics' perspective. We offered the 'urban heritage dynamics' perspective in the earlier sections of this article as an alternative approach to urban 'renewal', by which land occupied by 'obsolete structures' of heritage value is revived through adaptive reuse [2]. The focus was intentionally a material one in order to mark the shift from traditional urban dynamics, focused on demolition, to adaptive reuse. The in-depth analysis of the case study calls for an expansion of this approach by including the 'deep values' and 'deep features' of a place. Hence, an 'urban heritage dynamics' approach refers to the sustainable transformation of historic urban areas through the 'adaptive reuse' of existing material traces of the past in ways that build upon the 'deep values' of a place. To do so, it is important to investigate how 'materials', 'competencies' and 'meanings' have interplayed over time [26] alongside 'senses and emotions', 'space/place/environment', 'time' and 'resources' [1,2]. In the case of Beresford Square, the materiality of the place shifted through the construction and demolition of residential or leisure buildings, the presence and rerouting of trams and buses, and the increase and decrease in market stalls. With the appearance or disappearance of certain material manifestations of the area, certain competencies emerged or vanished, such as driving trams, selling particular products in the market, or cooking certain foods. This affected the ways in which the place was 'sensed' over time by the local inhabitants. The place was transformed from a place of connectivity, social transaction and a 'passing by' point into just a 'passing by' or 'through' area. Resources invested in the revival of the area proved inadequate when they failed to capture the 'deep values' of the place. Hence, among all the dynamic elements of urban heritage dynamics, the element of
'senses' and 'emotions', in this instance, proved to be amongst the most significant aspects of the dynamic transformation.
Conclusions
This article aimed to explore the dynamic transformation of Beresford Square in Woolwich, its historic Gatehouse and its market through the use of 'critical system dynamics'. By doing so, the article unveiled the visible and invisible 'deep layers' of the historic transformation of the area. A system dynamics (causal-loop) diagram was created demonstrating the non-linear relationships between the socio-cultural and economic factors that contributed to the sustainable growth or decline of Beresford Square, Beresford Gate and its market. By 'zooming in' on a focal area, that of Beresford Square, we were able to capture the micro-dynamics of this site, which are often disregarded but which are key for developing sustainable future conservation and transformation strategies. However, it is important to keep in mind that this area is also part of a wider urban environment that is going through rapid transformation and change. Indeed, as shown through the causal-loop diagram, wider socio-physical changes have directly impacted Beresford Square.
Why is tracing the 'deep values' of a historic area important for urban planning and conservation policies? In order to answer this question, we need to consider the unintended consequences of failing to do so in the case of Beresford Square, Gate and Market. Our 'critical system dynamics' analysis showed that the 'deep character' of Beresford Square lies in the ways the square functioned as a vibrant 'meeting' and 'passing by' point in space, occupied by trams, buses and market stalls. While this is an 'immaterial' quality, it could be argued that it is echoed materially in the presence of Beresford Gate. Hence, although Beresford Gate appears as a monument disconnected from its original site and function, in reality it functions as a 'material reminder' of the 'deep values' of the place. As shown in the previous sections, former attempts to revive the square and its market failed to build upon the 'deep features' of the place. Most revitalisation endeavours focused on how to 'aesthetically' enhance the square by imposing an 'ordered', almost 'pristine' arrangement of the space layout. However, these efforts ran against the very nature of the place, which evolved organically over the years, creating a space of 'disorder', 'fusion' and even 'chaos' at times, but one that accommodated the social practices and cultural heritage of the people using the space.
Recent strategies to revive the square and its market seem to be starting to address the 'deep values' of the place by exploring ways in which the Gate can be transformed into a 'passing by' point, allowing connectivity and social interactions between Woolwich Town Centre and the Royal Arsenal [51]. Although Beresford Gate was originally listed due to its historic significance, as a means to preserve it from demolition during the widening plans for Plumstead Road, it can now play a significant role in reviving the 'deep spirit' of the place. There are also efforts to increase the number of market stalls through the introduction of new food stalls reflecting the cultural diversity of the area [51]. This may contribute to the revival of the character of the place as a space of connectivity, gathering and interaction.
The in-depth, dynamic examination of the transformation of Beresford Square can showcase a series of implications for future urban planning policies. The complexity, for instance, of the layers of values attached over time to a heritage area requires the application of appropriate techniques and methods that move beyond tick-box consultation exercises. The 'Deep Cities' toolbox (www.deepcities-toolbox.unifi.it (accessed on 25 August 2023)) that has been designed by the research team aims to address this gap by offering a suite of participatory methods and dynamic tools that can be applied in tight timeframes. Recommendations emerging from the 'Deep Cities' project include the wider adoption of people-centred methods, to understand the complex social values associated with cities, and the need for a holistic approach to urban heritage management, ensuring involvement by a wide range of stakeholders throughout urban change processes and recognition of grassroots, community heritage practices. A detailed paper on recommendations for practitioners and policymakers emerging from the 'Deep Cities' project is currently in preparation.
We would like to conclude this article by stating that the causal-loop diagram developed in this case cannot represent the dynamics of every single urban heritage area. On the contrary, the dynamics of each area will be peculiar and distinct. However, the process of applying the method can be generalised and applied in similar case studies. We thus hope that we have offered a new conceptual and methodological approach to the understanding of the dynamic transformation of historic urban areas, as well as a tool to communicate the results of such studies with planners and conservation officers.
Figure 1. On the top (a), Beresford Gate is depicted with Beresford Market in the foreground and the Royal Arsenal development beyond (Photo by Kalliopi Fouseki, Date: 22 July 2023). On the bottom (b), a Google map view pinpoints the location of Beresford Square and Beresford Gate, highlighted with a red star. Opposite the square, Plumstead Road can be seen (green star), which separates Beresford Square and its nearby town centre from the Royal Arsenal area (yellow star).
Figure 2. Beresford Gate-matrix and activities. This figure showcases the changes in uses over time, with each material change outlined in boxes. (Designed by Elisa Broccoli and Michele Nucciotti).
(i) Early construction phase and emergence of a grass-roots market (1720-1780);
(ii) First attempts to organise the square and construction of Beresford Gate (1812-1865);
(iii) Growth of the market, square and transportation infrastructure, initially with trams and then with buses (1867-1913);
(iv) Closure of cinemas and other buildings, including the closure of the Royal Arsenal, but with the market thriving (1936-1984);
(v) Gradual decline of the market, triggered by the re-routing of buses and trams through the opening of a new main road (Plumstead Road) (1984 until today).
Figure 4. Beresford Gate-building archaeology analysis. This figure summarises in one diagram (unlike Figure 2, which shows the linear material changes) the changes through which Beresford Gate went. The material changes are represented by different colours. (Designed by Elisa Broccoli and Michele Nucciotti).
Land 2023, 22
Figure 5. Comparison of views towards the transformation of Beresford Square based on living area.
Figure 6. Perceptions of change and transformation at Beresford Square and Powis Street.
Figure 7. Perceptions of change based on the number of years living in the area.
Figure 8. Perceived area change based on residents' area of living.
3.5. Synthesising Mixed Data by Mapping the Dynamic Transformation of Beresford Square through 'Critical System Dynamics'
Figure 10. The forces leading to market growth and decline are illustrated. Reinforcing loops are marked in red; balancing loops are marked in green. (Designed on Vensim PLE ×64 by Kalliopi Fouseki and Lorika Hisari).
Figure 11. This picture dates back to approx. 1900 and depicts Beresford Square and the Gatehouse through which individuals entered the Royal Arsenal complex. The gate fronted directly onto the old Plumstead Road, which in the past used to lead straight into Beresford Square, but which today passes behind the Gate (Wikimedia Commons).
Figure 12. Sense of connectivity and interaction at Beresford Square as a critical dynamic factor is illustrated. (Designed on Vensim PLE ×64 by Kalliopi Fouseki and Lorika Hisari).
Figure 13. Market decline over time is depicted in the form of a causal loop. (Designed on Vensim PLE ×64 by Kalliopi Fouseki and Lorika Hisari).
Design of Shaped Beam Planar Arrays of Waveguide Longitudinal Slots
Elliott's procedure for the design of a pencil beam waveguide longitudinal slot array has been generalized to encompass the design of shaped beam planar slot arrays. An extended set of design equations, taking into account in an operative way the feeding part of the array, has been devised. From this set of equations, a general and effective design procedure has been set up, shedding light on the constraints posed by a complex aperture distribution. The results of the proposed synthesis procedure have been validated through comparison with commercial FEM software.
1. Introduction
Planar arrays of waveguide slots (Figure 1) have a very long history, since their use dates back at least to the 1940s [1], and they are still a very popular choice for high-performance antenna systems, especially in the higher part of the microwave range [2]. Therefore, their pros and cons, from both the mechanical and the electromagnetic point of view, are well known [3]. The main advantages of these antennas are high efficiency, polarization purity [2-4], considerable mechanical strength, small size, and the great accuracy achievable both in the design and in the realization phase. These features make such antennas an effective solution in a wide range of applications [2], such as radar, aerospace, and satellite antennas. The most common drawbacks of waveguide slot arrays are the high realization cost, the small useful bandwidth, and the lack of flexibility, since once the array is realized its electromagnetic behavior cannot be changed (though some countermeasures are possible [5,6]).
The most popular slot array configuration is the resonant array of longitudinal shunt slots [1], in which the slot spacing is half the guided wavelength (at the center, or "resonant", frequency) in the slotted waveguide. Therefore, we consider here this kind of array, which allows quite general feeding architectures. A single array (or subarray) consists of parallel slotted waveguides (called radiating waveguides), with a transverse feeding guide, as in Figure 1. The feeding and radiating guides are coupled using series-series slots. Such slots are also spaced half the guided wavelength in the feeding waveguide, which is, therefore, the array spacing in the -plane. Since the array bandwidth depends on the array size, a popular solution is to divide large arrays into subarrays, each one with its own matched input port. In this case, a beam-forming network (BFN) is required to properly feed the subarrays. A (quadrantal) subarray architecture is also required for monopulse radar antennas.
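The half-guided-wavelength spacing can be made concrete by evaluating the standard TE10 guided-wavelength formula. The specific numbers below (a WR90 guide, broad wall a = 22.86 mm, at 9.375 GHz) are an illustrative assumption, not values taken from this paper:

```python
import math

def te10_guided_wavelength(freq_hz, a_m):
    """Guided wavelength of the TE10 mode in an air-filled rectangular waveguide."""
    c = 299_792_458.0
    lam0 = c / freq_hz                      # free-space wavelength
    fc = c / (2.0 * a_m)                    # TE10 cutoff frequency
    if freq_hz <= fc:
        raise ValueError("operating frequency is below TE10 cutoff")
    return lam0 / math.sqrt(1.0 - (fc / freq_hz) ** 2)

# assumed example: WR90 (a = 22.86 mm) at 9.375 GHz
lam_g = te10_guided_wavelength(9.375e9, 22.86e-3)
slot_spacing = lam_g / 2.0                  # resonant-array slot spacing
```

At this frequency the spacing comes out at roughly 22.4 mm, noticeably larger than the free-space half wavelength of about 16 mm, since the guided wavelength always exceeds the free-space one above cutoff.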
A slot array design splits naturally into an "internal" problem and an "external" one. The former is the physical design of the array which realizes an aperture distribution fulfilling the array pattern requirements, and whose input port is well matched. The latter is the computation of this aperture distribution, fulfilling the constraints imposed by the slot array technology. The most accurate model describing the behavior of a resonant, pencil-beam array of longitudinal slots has been proposed by Elliott in [16]. Elliott's work [16] is renowned since a highly accurate model of a slot array including mutual coupling is set there for the first time. Furthermore, it contains a second, less developed though equally important, point. As a matter of fact, the last section of [16] suggests that the system of nonlinear design equations of a slot array can be solved in a very effective (and physically meaningful) way, evaluating all mutual couplings on the data of the previous iterative steps, since in this way the nonlinear equations for each slot can be decoupled. Actually, the decoupling details are not described in [16], also because they strongly depend on the array specifications. As a matter of fact, Elliott himself partly developed in [17, eq. (56)-(58)] the decoupling details for a planar equiphase array, but without taking into account the feeding network. After that, useful remarks on large arrays with a pencil beam are reported in [18], and some results on shaped beam arrays [19] have been reported for the first time in [20,21], where the array is designed using an optimization technique, and, subsequently, in [22], using Elliott's synthesis procedure. Unfortunately, the design procedures described in [20-22] have been applied by the authors only to linear arrays, but a large number of practical applications call for planar shaped beam arrays.
As a matter of fact, most of the antennas used for satellite applications, radar systems, aerospace applications, and telecommunication systems are usually designed to produce a shaped beam in order to illuminate a selected geographical area with maximum gain. These antennas require a complex aperture distribution, with a phase distribution spanning up to 360°.
The design of a planar array is a completely different matter compared to the design of a linear array. Actually, in the planar case the radiating slots interact both through the external mutual coupling and through the complex feeding network. Moreover, since the array geometry is not separable, the design of a shaped beam array is significantly different from the design of an array with equiphase slot voltages.
To the best of our knowledge, a complete synthesis procedure for a planar waveguide slot array with a complex distribution has not been described in the international literature. The aim of this paper is to fill this gap by providing an effective and accurate design technique for a shaped beam planar array, which takes into account not only the external mutual coupling, but also the strong slot interaction due to the feeding network.
The first problem to face in the design of a waveguide slot array with a shaped beam pattern is that the excitation phase of each slot cannot span 360°. A very limited solution is to realize a shaped beam array with real excitations, as in [23], but the results are not satisfactory. On the other hand, array pattern synthesis procedures able to take into account excitation constraints have been proposed [24], and these procedures could be exploited by suitably limiting the amplitude and phase variation of the slot excitations, in order to get an aperture distribution achievable with a waveguide slot array.
In this work, we propose an "internal" design procedure for shaped beam planar arrays of waveguide longitudinal slots, obtained by extending Elliott's model and devising from it a new synthesis procedure. This is not straightforward, since the original Elliott's equations [17] must be properly modified to take into account a further degree of freedom, namely, the phase of the slot excitations. In Sections 2 and 3, the newly derived design equations and the array synthesis procedure are described in detail. In Section 4, this procedure has been validated and tested, in a way independent of Elliott's model, using a commercial FEM solver, namely, HFSS 13 by Ansoft. The results obtained with this FEM CAD are in very good agreement with experimental results, as reported in the open literature for a wide range of applications (see, e.g., [25,26]).
A number of shaped beam arrays with different patterns have been designed, and two of them are discussed in detail.
The analysis performed with HFSS shows that the presented examples fulfill the required specifications.
2. Design Equations for Slot Arrays
The behavior of a planar slot array is ruled by a set of design equations, linking the electrical variables of the array to the geometrical ones. These equations describe (1) the slot excitation due to the radiating guide [16]; (2) the external mutual coupling between the slots [16]; and (3) the interaction between the radiating slots due to the BFN [27-29].
Interaction (3) is strongly frequency dependent. Therefore, since a resonant array is always designed at the center frequency, only the corresponding equations at this frequency will be used in this work. We consider here a planar array composed of radiating waveguides, each one carrying a possibly different number of radiating slots. The first radiating slot of the th radiating waveguide is indicated bŷ, and the numbering proceeds arbitrarily from right to left, whilê and̂ denote, respectively, the slot immediately to the right of the feeding coupling slot and the last slot of the th radiating waveguide, as shown in Figure 2.
This numbering makes it possible to design arrays with regular or irregular aperture shapes in the same way. In this reference system, the array -plane is vertical, and the axis of the radiating waveguides is horizontal. Each slot is completely characterized by its length and its offset with respect to the waveguide axis, which is assumed positive upward.
An array of longitudinal slots can also be divided into subarrays (as in the example in Figure 3). Each subarray is made of a number of radiating waveguides, fed by a feeding waveguide (orthogonal to the radiating ones) through a sequence of series-series inclined coupling slots (one for each radiating waveguide of the subarray) [30]. Each feeding waveguide is then fed at its input node through a series-series inclined coupling slot. All the coupling slots have been chosen resonant. Therefore, a generic array is composed of N radiating slots, radiating waveguides, and subarrays, and consequently has feeding waveguides and input ports.
In the example shown in Figure 3(a), the array is divided into 4 subarrays. Each subarray is composed by 4 radiating waveguides, and the design procedure allows each radiating waveguide to contain a different number of radiating slots. Figure 3(b) shows the four waveguides, each one feeding a subarray, and the input port is shown for each subarray. Let be the (TE 10 fundamental) mode voltage on the th radiating waveguide. The array design equations can be written taking into account that the mode voltage at the position of the th radiating slot is different in each radiating waveguide and can be written as = (−1) −̂. Therefore, the first two sets of design equations for a slot array are [17] wherein the { } are the slot excitations required by the aperture distribution, and = sin , wherein and are the waveguide transverse dimensions, is the wavenumber in free space, and 10 are the equivalent admittance and the propagation constant of the TE 10 fundamental waveguide mode, and , , and are, respectively, the self-admittance, the length, and the offset of the th slot of the array.
In (6), is the sum of the external coupling between the radiating slots [16] and of the internal coupling due to the interaction between the radiating slots through higher-order waveguide modes [31].
At the input node of the th radiating waveguide (Figure 4), the inclined coupling slot feeding the waveguide is modeled (being resonant) as an ideal transformer, with a current transformation ratio equal to [30,32-34]. The input impedance seen at the input of this series-series transformer follows from the transformation ratio. Since the mode voltages are not independent for radiating waveguides fed by the same feeding waveguide, we must take into account the equations of the feeding line. The feeding waveguide is also fed by a series-series transformer, with known input current. The subsequent equations will be clearer if we consider each half of the feeding guide as a separate guide. With this convention, two feeding waveguides are represented in Figure 5, namely, the th and ( + )th waveguides. The index therefore assumes values between 1 and , where represents the number of feeding waveguides of the array (namely, the number of input ports of the array). Let ̃ be the current flowing into the last coupling slot of the th feeding waveguide (the farthest from the feeding node), having the same direction as (see Figure 5). This coupling slot feeds the first radiating waveguide, which we denote bŷ. The current flowing on thê+ subsequent coupling slots will be (−1) −̂̃, wherê+ 1 ≤ ≤̂, and̂ represents the coupling slot corresponding to the last radiating waveguide fed by the th feeding guide.
The current 0 (with =̂) flowing on the first radiating waveguide (see Figure 4 for the generic th radiating guide) fed by the th feeding guide is therefore given bỹ, while (−1) −̂̃ is the current flowing on the other radiating waveguides, witĥ + 1 ≤ ≤̂.
The mode voltage at the slot̂(which, as shown in Figure 4, is the radiating slot immediately at the right of the coupling slot feeding the th radiating waveguide) is then given by Since (from Figure 2)̂= , = (−1)̂−̂, the mode voltage on the th radiating waveguide can be expressed as Finally, as indicated in Figure 5, the input impedance at the port is given by where the notations [ ], [ + ] indicate that the sums are extended to all the radiating waveguides fed by the th and ( + )th feeding waveguides, respectively (see Figure 5); is the equivalent admittance of the TE 10 fundamental mode in the feeding waveguide.
3. Synthesis Procedure
In order to design a slot array, we have to solve the nonlinear systems (1) and (11), which require an iterative solution.
The input data of the design procedure are the radiating slot excitations (namely, the slot voltages ) and the input impedances IN at each input node of the array. The procedure gives as output the lengths and offsets of all the radiating slots.
Following Elliott's suggestion [16], it is convenient to evaluate the mutual coupling coefficients, given by (6), using the data of the previous iterative step, since small changes in offsets and lengths cause only a small change in the mutual coupling. With this choice, the equations are decoupled, and it is possible to recompute the new parameters of each slot independently of the other slots.
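This decoupling strategy, freezing the coupling terms at the values of the previous iterate so that each unknown can be updated independently, is in essence a Jacobi-style fixed-point iteration. The sketch below is generic: the coupling function g is a toy stand-in, not Elliott's actual admittance equations.

```python
import numpy as np

def decoupled_fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_i = g(i, x_prev): each component is updated independently,
    with all coupling evaluated on the previous iterate (Jacobi-style)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_prev = x.copy()
        # per-component update; coupling reads x_prev only, so updates decouple
        x = np.array([g(i, x_prev) for i in range(len(x))])
        if np.max(np.abs(x - x_prev)) < tol:
            break
    return x

# toy coupled system: each component equals 0.5 plus 10% of the others' mean
def g(i, xp):
    others = np.delete(xp, i)
    return 0.5 + 0.1 * others.mean()

sol = decoupled_fixed_point(g, np.zeros(4))  # converges to 0.5/(1-0.1) = 5/9
```

Because each component update reads only the previous iterate, the per-slot solves are independent of one another and could even run in parallel.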
A shaped beam array requires a complex aperture distribution, therefore the Elliott's design equations (1) must be properly modified because a further set of requirements, the phase of the slot excitations, must be taken into account. On the other hand, no further degrees of freedom are available, so a different strategy must be devised in order to extend the Elliott's procedure [16] to the shaped beam case. Let = the slot voltage of the th radiating slot of the array. A complex slot voltage distribution, such as (12), requires that some other electrical quantities are complex. Among them, there are the feeding currents̃. We include also a sign variable into their definition, which reads =̃exp ( ) , where = +/−1 is a sign to be determined, and is defined by = arc tan(Im[̃]/Re[̃]), so that − /2 ≤ ≤ /2. With this choice, in the limit case of a complex distribution, but with all phases of the slot voltages equal to zero ( = 0), we come back to the equiphase case, being̃= |̃| and = 0, without ambiguity. Since is a real number we get, from (10) As a consequence, all the mode voltages on the radiating waveguides fed by the same feeding waveguide (identified by the index ) are equiphase.
The active admittance, using (1), can therefore be expressed as The input impedances IN required at the feeding nodes of the array are real numbers in almost all practical applications. Therefore, it follows from (11) ) −1 must have an opposite imaginary part. As a consequence, the problem is not determined, and the simpler choice is to require that all the have a real value. Now, from (15), it follows that as also found in [20,21], for the linear case ( ≡ 0). The left-hand side of (16) depends only on the slot length , since the offsets are fixed to the values of the previous iterative step. Therefore, (16) is the sought equation for the new value of this length. If̃is the solution of (16), then is real. Using (17) in (15), we obtain the following expression for the active admittances: Then, comparing (18) with (15), we get Finally, by replacing (19) in (15), the active admittance can be expressed as If we put in (8) the active admittances given by (20), the input impedance seen at the primary of the feeding transformer can be written as Since the input node is a series one, the relation between the currents |̃| and |̃+ | can be expressed as Let + be a real positive parameter defined bỹ Using + , we can write From (25), it follows that and + must have the same phase (apart from the sign), once the convergence of the iterative design procedure has been reached.
Finally, we must fulfill the requirement on the input impedance IN at the secondary of the transformer feeding the waveguides and + . This impedance is the sum of the input impedances of the two waveguides: which are (see (11) and (21)) as follows: eq = [ The input impedance IN must have real and positive values, while eq and eq + can be real or complex. However, we have enough available degrees of freedom to force both eq and eq + to have real and positive values. With this choice, we can fix the phases and + as follows: = − arg ( ) , The input impedance IN can be finally expressed as Equation (30) allows to determine |̃| from the required value of IN , thus terminating the iterative step. It is worth noting that, in order to avoid convergence problems, the initial values of must be properly connected to the values of the voltage distribution. Therefore, even in the first iterative step, the phases of the active admittances must be kept relatively small, avoiding problems of oscillating or trapped solutions.
The synthesis procedure proposed in this section has no limitations by itself, since it can design the array geometry for every aperture distribution that a slot array can radiate. On the other hand, the excitation phase achievable with a longitudinal radiating slot cannot span the whole 360°, but is limited, where MAX depends slightly on the waveguide dimensions but is always not larger than 60°. However, to prevent convergence problems, it can be safer to choose a smaller MAX, for example 50°.
As a consequence, an arbitrary voltage distribution cannot always be radiated by a slot array. However, since different aperture distributions can radiate essentially equivalent shaped patterns, this "hardware" limitation can be circumvented using array pattern design techniques which allow the introduction of appropriate constraints on both the voltage amplitudes and phases (compare [24]). The amplitude constraints prevent the synthesis procedure from obtaining too-small slot lengths and/or offsets, while the phase constraints take into account the limited excitation phase that each slot can span.
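The two constraints can be pictured as a projection of a candidate excitation set onto a realizable window. The clipping rule below uses the limits quoted in the text (phases within ±50°, normalized amplitudes of at least 0.1), but the projection itself is a simplified stand-in for the constrained pattern synthesis of [24]:

```python
import numpy as np

def project_excitations(v, phase_max_deg=50.0, amp_min=0.1):
    """Clip complex slot voltages to a realizable amplitude/phase window.

    Amplitudes are normalized to the largest element and floored at amp_min;
    phases are clipped to +/- phase_max_deg. A simplified projection, not the
    constrained synthesis method of [24].
    """
    v = np.asarray(v, dtype=complex)
    amp = np.abs(v) / np.abs(v).max()       # normalized amplitudes
    amp = np.clip(amp, amp_min, 1.0)        # enforce the minimum amplitude
    phase = np.clip(np.angle(v, deg=True), -phase_max_deg, phase_max_deg)
    return amp * np.exp(1j * np.deg2rad(phase))

# one in-window element, one too weak with excessive phase, one compliant
v = np.array([1.0,
              0.05 * np.exp(1j * np.deg2rad(80.0)),
              0.30 * np.exp(-1j * np.deg2rad(20.0))])
vc = project_excitations(v)
```

After projection, the second element is raised to amplitude 0.1 and its phase is clipped to +50°, while the compliant elements pass through unchanged.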
4. Results
In order to assess the synthesis procedure, a number of planar arrays, with different sizes and aperture distributions, have been designed with the procedure of Section 3. In-house software has been used to evaluate both the slot self-admittance [35] and the mutual coupling [36,37].
Once the geometry of the designed array has been determined, an analysis has been performed to check whether the array requirements are fulfilled. This has been done using both [28] and a commercial FEM solver, namely, HFSS. Since the former is based on Elliott's model, while the latter is independent of it and is considered essentially equivalent to experimental verification [25,26], we present here only the HFSS results, which fulfill all the requirements and therefore fully assess our procedure. It is worth noting that the results obtained by the procedure of [28] are equivalent to the ones simulated with HFSS.
The architecture of the arrays presented in this section is shown in Figure 1, where both the radiating waveguides and the feeding waveguide are half-height WR90 waveguides (22.86 mm × 5.08 mm) with 1 mm wall thickness. The feeding waveguide has been fed at its side by a waveguide port, and the radiating waveguides have been fed through a series-series inclined resonant coupling slot. All the coupling slots have a length equal to 17.07 mm, a width of 1.5 mm, and a tilt angle with respect to the feeding waveguide axis of 45°, corresponding to a coupling coefficient equal to 1 (see [30,32,33] for details).
Starting from a specified, arbitrarily shaped beam pattern, we have used the array pattern synthesis procedure described in [24] to compute the required excitations. According to the considerations made at the end of Section 3 about the excitations achievable with a longitudinal radiating slot, appropriate constraints on both the amplitude and the phase of the slot excitations are required, in order to get an aperture distribution achievable with an array of slots. In particular, the maximum phase and the minimum normalized amplitude of the slot excitations have been set, respectively, to MAX = 50° and | , min| = 0.1.
In the first example, we present an 8 × 8 planar array, fed by a single feeding waveguide containing 8 coupling slots, designed requiring a circular pattern, with a radius equal to 0.25 in the ( , ) plane, with −20 dB sidelobes, and a ripple of ±0.5 dB. The normalized amplitudes and the phases of the required slot excitations are reported in Tables 1(a) and 1(b), respectively, while the corresponding designed slot lengths and offsets are shown in Tables 1(c) and 1(d).
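A quick way to preview what a given excitation set radiates is a direct array-factor evaluation over direction cosines. The uniform 8 × 8 distribution and half-wavelength spacings below are placeholders; the actual amplitudes and phases are those reported in Tables 1(a) and 1(b):

```python
import numpy as np

def array_factor(excitations, dx_wl, dy_wl, u, v):
    """|Array factor| of a planar array on a (u, v) direction-cosine grid.

    Spacings dx_wl, dy_wl are in wavelengths; mutual coupling and element
    patterns are ignored, so this is only a first-order preview.
    """
    M, N = excitations.shape
    m = np.arange(M)[:, None, None, None]
    n = np.arange(N)[None, :, None, None]
    phase = 2j * np.pi * (m * dx_wl * u[None, None, :, None]
                          + n * dy_wl * v[None, None, None, :])
    return np.abs((excitations[:, :, None, None] * np.exp(phase)).sum(axis=(0, 1)))

exc = np.ones((8, 8), dtype=complex)   # placeholder for the Table 1 distribution
u = np.linspace(-1.0, 1.0, 101)
v = np.linspace(-1.0, 1.0, 101)
af = array_factor(exc, 0.5, 0.5, u, v)
```

For the uniform placeholder the factor peaks at broadside (u = v = 0) with value 64, the element count; a shaped distribution such as the one in Table 1 would instead spread the energy over the required circular region.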
The contour plot of the simulated (HFSS) far field pattern for the designed array is shown in Figure 6. In Figure 7, the 3D far field pattern is depicted, and Figure 8 shows the difference between the pattern obtained using the required slot voltages and the designed pattern. In the shaped region, this difference is less than 0.3 dB. Figure 9 shows an enlargement of the shaped region, with a ripple of about ±0.5 dB. The frequency response of the array is shown in Figure 10, while Figures 11 and 12 show, respectively, the E-plane and the H-plane far field patterns within the working frequency bandwidth. The behaviour of the shaped radiation pattern remains very good even at the upper and lower ends of the bandwidth.
The second example is a 10 × 10 planar array, fed by a single feeding waveguide containing 8 coupling slots, designed by requiring an arrow-shaped pattern (see the dashed line in Figure 16 for the required geometry) with −15 dB sidelobes. The normalized amplitudes and the phases of the required slot excitations are reported in Tables 2(a) and 2(b), and the corresponding designed slot lengths and offsets are shown in Tables 2(c) and 2(d). The contour plot of the simulated (HFSS) far field pattern for the designed array is shown in Figure 13. In Figure 14, the 3D far field pattern is depicted, and Figure 15 shows the difference between the pattern obtained using the required slot voltages and the designed one. In the shaped region, this difference is less than 0.2 dB. Figure 16 shows an enlargement of the shaped region. Finally, the frequency response of the array is shown in Figure 10, while Figures 17 and 18 show, respectively, the E-plane and the H-plane far field patterns in the array frequency bandwidth. Also in this case, the behaviour of the shaped radiation pattern remains very satisfactory even at the upper and lower ends of the useful bandwidth. The results of the performed simulations, both at the design frequency and within the operating frequency bandwidth of the designed arrays, are in very good agreement with the required specifications, and this fully validates the proposed synthesis procedure.
Conclusion
The use of shaped beam waveguide slot arrays is required in various antenna applications, such as radar and aerospace applications. A synthesis procedure for shaped beam planar slot arrays has been presented. Starting from the well-known Elliott's model for a (pencil-beam) slot array, an extended set of design equations has been set up to include both the interaction between the radiating slots through the feeding waveguide and the provision of a complex aperture distribution. Then, a design procedure for shaped beam planar arrays has been devised and assessed through validation against a commercial FEM solver.
Unravelling intrusion-induced forced fold kinematics and ground deformation using 3D seismic reflection data
Sills emplaced at shallow levels are commonly accommodated by overburden uplift, producing forced folds. We examine ancient forced folds developed above saucer-shaped sills using 3D seismic reflection data from the Canterbury Basin, offshore SE New Zealand. Seismic-stratigraphic relationships indicate sill emplacement occurred incrementally over ~31 Myr between the Oligocene (~35–32 Ma) and Early Pliocene (~5–4 Ma). Two folds display flat-topped geometries and amplitudes that decrease upwards, conforming to expected models of forced fold growth. Conversely, two folds display amplitudes that locally increase upwards, coincident with a transition from flat-topped to dome-shaped morphologies and an across-fold thickening of strata. We suggest these discrepancies between observed and expected forced fold geometry reflect uplift and subsidence cycles driven by sill inflation and deflation. Unravelling these forced fold kinematic histories shows complex intrusion geometries can produce relatively simple ground deformation patterns, where magma transgression corresponds to localisation of uplift.
INTRODUCTION
Uplift of Earth's surface in response to shallow-level magma movement provides crucial insights into volcano activity, potentially warning of impending eruptions (e.g., Sturkell et al., 2006; Biggs et al., 2009; Sparks et al., 2012; van Wyk de Vries et al., 2014). Inverting ground deformation patterns recorded at monitored volcanoes to map magma movement is difficult, however, because we cannot directly observe the host rock deformation mechanisms accommodating intrusion or validate models (Galland, 2012). We thus typically assume that ground deformation results from elastic bending of the overburden (i.e. forced folding), such that the area of surface uplift is expected to directly correlate to the location and size of an underlying intrusion (Galland, 2012). Importantly, analyses of forced folds above sills and laccoliths exposed at Earth's surface, generated in analogue models, modelled analytically, or imaged in seismic reflection data reveal that a combination of elastic bending and inelastic processes (e.g., faulting, fluidisation, and pore collapse) can accommodate magma emplacement (e.g., Pollard and Johnson, 1973; Johnson, 1987; Galland and Scheibert, 2013; Jackson et al., 2013; Magee et al., 2013; van Wyk de Vries et al., 2014; Montanari et al., 2017). The likely occurrence of inelastic deformation processes implies that traditional inversion of ground deformation data assuming pure elastic bending of the host rock will underestimate magma volumes (e.g., Schofield et al., 2014). It thus remains challenging to compare active and ancient systems because the dynamic deformation processes that cumulatively build a forced fold are difficult to deduce when magmatism has long-since ceased.
Here, we analyse a magma plumbing system imaged in 3D seismic reflection data from the petroliferous Canterbury Basin, offshore SE New Zealand (Fig. 1), and identify four saucer-shaped sills intruded into Cretaceous-to-Eocene strata. The sills are overlain by dome-shaped forced folds and generated hydrothermal vents above their lateral tips. Because intrusion-induced forced folds and hydrothermal vents are expressed as topographic or bathymetric highs at the contemporaneous surface, numerous studies have used the age of overlying strata that onlap onto these structures as a method for determining the timing of magmatism (e.g., Trude et al., 2003; Jamtveit et al., 2004; Hansen and Cartwright, 2006; Magee et al., 2013). Whilst most studies assume that onlap of strata onto the top of forced folds marks the age of instantaneous emplacement (Trude et al., 2003), we show that multiple onlap events can be recognised throughout the folded sedimentary succession. Our analysis of seismic-stratigraphic relationships between the hydrothermal vents, forced folds, and overlying strata reveals three main phases of forced fold growth and thus sill emplacement in the Oligocene (~35–32 Ma), Miocene (~19–16 Ma), and Pliocene (~5–4 Ma); these phases of emplacement indicate magmatism overlapped with and may have impacted petroleum generation, migration, and accumulation. Seismic-stratigraphic onlap onto intrusion-induced forced folds is thus a powerful tool for determining the timing of magmatic activity (e.g., Trude et al., 2003), although we demonstrate that we should not rely solely on strata onlapping onto the top of forced folds to constrain emplacement age (Magee et al., 2014). Identifying seismic-stratigraphic relationships throughout folded sequences allows forced fold kinematics to be unravelled, and we show, for the first time, that intermittent subsidence can play an important role in intrusion-induced forced folding.
Figure 1: Location map of the study area and the seismic reflection and borehole data used.
GEOLOGICAL SETTING
The Canterbury Basin, located offshore SE New Zealand (Fig. 1), is bound by the Chatham Rise to the north-east and the Bounty Trough to the south-east. Basin formation occurred in response to rifting between New Zealand, Antarctica, and Australia in the Late Albian-to-Early Campanian (Fig. 2) (e.g., Fulthorpe et al., 1996; Lu and Fulthorpe, 2004). The basement broadly corresponds to the Torlesse Supergroup, a series of Permian-to-Early Cretaceous greywacke and argillite metasedimentary rocks (Uruski, 2010). Graben and half-graben formed during the middle Cretaceous phase of rifting broadly strike E-W and were infilled by fluvial and paralic sediments, including coal that forms the main source rock in the region (Fig. 2) (i.e. the Horse Range and Katiki formations; Carter, 1988; Killops et al., 1997; Uruski, 2010; Ghisetti and Sibson, 2012). The onset of passive subsidence and a marine transgression in the Late Cretaceous defined the transition to the post-rift period, characterised stratigraphically by the upwards progression from terrestrial sandstone and coal (i.e. the Pukeiwihai Formation) to deposition of marine sandstone, mudstone, and siltstone (Fig. 2) (i.e. the Katiki, Moreaki, and Hampden formations; Carter, 1988; Killops et al., 1997). Some of the Paleogene mudstones represent potential source rocks (Fig. 2) (Bennett et al., 2000).
Overlying these formations is the marine Amuri Limestone (Fig. 2) (Fulthorpe et al., 1996). The point of maximum transgression at ~29 Ma is marked in the Canterbury Basin by a regional unconformity (Fig. 2) (e.g., Carter, 1988; Fulthorpe et al., 1996). Continued uplift and an increase in the supply of terrigenous silt and sand drove the eastward progradation of continental shelf and slope deposits from the Early Miocene to Recent (Fig. 2) (i.e. the Tokama Siltstone; Lu et al., 2005).
Hydrocarbon generation, migration, and accumulation in the Canterbury Basin likely began in the ~Middle Miocene, when Middle-to-Late Cretaceous coals were buried to sufficient depths (Fig. 2) (e.g., Bennett et al., 2000). Most plays rely on stratigraphic traps within Upper Cretaceous sandstone reservoirs, although Eocene sandstone reservoirs within Miocene fault- and fold-related structural traps also form viable prospects (Fig. 2) (Bennett et al., 2000).
DATASET AND METHODOLOGY
We use a pre-stack time-migrated (PSTM) 3D seismic reflection survey (Waka) tied to three regional boreholes (i.e. Galleon-1, Endeavour-1, and Cutter-1) by PSTM 2D seismic surveys (Fig. 1). The 3D seismic data cover a ~1428 km² area, of which we focus on ~314 km² (Fig. 1). Inline (NE-SW) and crossline (NW-SE) spacing is 25 m and 12.5 m, respectively. The data are displayed with SEG normal polarity, whereby a downward increase in acoustic impedance corresponds to a positive (red) reflection. Within the focused study area, the water depth is 863–1948 ms TWTT (two-way travel time), or 647–1461 m assuming a water velocity of 1480 m s⁻¹. Three NW-trending submarine canyons are developed at the seabed (Fig. 3A), with seismic reflections directly beneath them being down-warped, decreasing in amplitude with depth, and mirroring the channel plan-view morphology (Fig. 3B). We consider that the apparent expression of the submarine channels within the underlying reflections is a geophysical artefact attributable to velocity push-down, caused by acoustically slow seawater being juxtaposed against shallowly buried, but still acoustically faster, sediment/rock. We use borehole data to define the age and lithology of ten mapped stratigraphic horizons (H1–H10) (Figs 2 and 3); four sills (S1–S4) were also mapped (Fig. 3). All three wells display consistent time-depth relationships, suggesting that the area of interest has a simple velocity structure (Fig. 4). We use a 2nd-order polynomial best-fit line to the checkshot data from the three boreholes to broadly define interval velocities for the Seabed–H10 (1800 m s⁻¹), H10–H2 (2800 m s⁻¹), and H2–H1 (3600 m s⁻¹) intervals. However, the boreholes are located on the continental shelf where stratigraphy is ~700 ms TWTT shallower and thinner than in the area covered by the 3D seismic survey (Fig.
1), implying that these velocities are probably minimum estimates for those encountered in our study area. We use our simple velocity model to depth-convert structural maps and measurements from time to depth. Depth-conversion of the seismic data using the derived velocities was attempted in order to remove the velocity push-down artefacts, which hinder our geometric interpretation of the seismically imaged geology. Whilst we were unable to fully remove the imprint of the velocity push-downs, which suggests the simple model utilised does not fully capture the true velocity structure of the study area, the depth-conversion significantly reduced their imaging impact and thereby facilitated greater confidence in structural interpretations (Fig. 3B). Using our simple velocity model, we created depth-structure and isopach maps for and between key stratigraphic horizons, respectively, thereby highlighting lateral variations in stratal thickness that may be related to tectonics and magmatism. A dominant frequency that decreases downwards from ~35 Hz to 25 Hz within the interval of interest, coupled with the inferred velocity structure, suggests that the limit of separability within the data increases with depth from 13 m to 36 m; we calculate the limit of visibility to increase from 2 m to 5 m (Brown, 2004). Assuming an interval velocity of 5550 m s⁻¹ for the mapped intrusions (Skogly, 1998) and taking the local dominant frequency of ~25 Hz, we estimate that the limits of separability and visibility are 55 m and 7 m, respectively. Sills between 7–56 m thick will therefore be expressed in seismic data as tuned reflection packages, i.e. where reflections from the top and base intrusion contacts constructively interfere and cannot be distinguished, meaning we cannot calculate true sill thickness.
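The resolution limits quoted above follow from the standard rules of thumb that the limit of separability is roughly a quarter of the dominant wavelength and the limit of visibility roughly 1/30 of it (Brown, 2004). A minimal sketch reproducing the stated values (function names are illustrative, not from the paper):

```python
def wavelength(velocity_ms, frequency_hz):
    """Dominant seismic wavelength (m) = velocity / frequency."""
    return velocity_ms / frequency_hz

def limit_of_separability(velocity_ms, frequency_hz):
    """~lambda/4: thinnest bed whose top and base reflections are distinct."""
    return wavelength(velocity_ms, frequency_hz) / 4.0

def limit_of_visibility(velocity_ms, frequency_hz, fraction=30.0):
    """~lambda/30: thinnest bed that still produces a detectable reflection."""
    return wavelength(velocity_ms, frequency_hz) / fraction

# Sedimentary interval: ~35 Hz at 1800 m/s (shallow) to 25 Hz at 3600 m/s (deep)
print(limit_of_separability(1800, 35))  # ~13 m
print(limit_of_separability(3600, 25))  # 36 m
print(limit_of_visibility(1800, 35))    # ~2 m
print(limit_of_visibility(3600, 25))    # ~5 m

# Intrusions: 5550 m/s (Skogly, 1998) at the local ~25 Hz dominant frequency
print(limit_of_separability(5550, 25))  # ~55 m
print(limit_of_visibility(5550, 25))    # ~7 m
```

The λ/4 and λ/30 fractions are conventional approximations; the visibility fraction in particular varies with data quality and noise level.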
The basal strata-concordant sections of S2–S4 typically coincide with H1 (Figs 3 and 7). S1 and S2 are elongated ENE-WSW and ESE-WNW, and their long axes and plan-view aspect ratios are 6.3 km and 7.5 km, and 1.5 and 1.7, respectively; the inner sill length of both S1 and S2 is 4.5 km (Fig. 5). In detail, S3 has an ESE-WNW oriented long axis and consists of several saucer-like depressions bound by transgressive inclined limbs, which become shallower towards the NE (Figs 5 and 7A). S4 occurs between S1 and S2, displays a rather irregular inner sill morphology, is roughly elongated ESE-WNW, and shallows to the NE (Fig. 5). S3 and S4 extend beyond the limits of the 3D seismic survey, thus we cannot determine their true dimensions. However, their long axes are a minimum of 9.1 km (S3) and 14.5 km (S4) (Fig. 5).
Supra-sill structure
The top of the basement (H1) in the study area is dominated by a NE-trending, ~29 km long, ~0.5 km high ridge along its south-eastern boundary, but also displays a series of smaller, variably shaped structural highs (Fig. 8). Overlying strata onlap the basement (H1) and dip gently eastward (Fig. 3B). Superimposed onto the regional structure of H2–H8 are three prominent elliptical folds (i.e. folds 1–3) that have long axes of 6.2 km, 6.4 km, and 4.6 km, respectively (Fig. 8). The true geometry of Fold 3 is difficult to ascertain because its south-eastern limit appears to coincide with an area of velocity push-downs related to seabed submarine canyons (Figs 3B and 8). A broad, 11 km long elliptical dome is also observed between H2–H8 (i.e. Fold 4; Fig. 8). The outlines of folds 1–2 overlie the lateral terminations of S1 and S2, respectively, Fold 3 overlies a relatively shallow portion of S3, and the central part of Fold 4 is underlain by S4 (Fig. 5).
Fold geometries between H8-H10
Within Fold 1 and between H8–H10 (i.e. the fold top), we observe numerous seismic-stratigraphic onlap and truncation relationships at various structural levels, particularly onto H8, H9, and H10 (Fig. 6). The base of S1 is currently located ~1.95 km beneath H10. From H8 to H10, there is a gradual transition in the morphology of Fold 1 from flat-topped to dome-shaped, which corresponds to an increase in fold amplitude from 59 m at H8, to 120 m at H9, and 90 m at H10 (Figs 3B, 6, and 8).
This change in Fold 1 morphology occurs between H8–H9, where the thickness of this stratal package increases from ~230 m beyond the immediate fold periphery up to ~303 m across the fold crest (Fig. 9). There are several reflections between H8–H9 that apparently downlap onto underlying reflections and only occur within the limits of Fold 1 (Fig. 6).
Fold 2 displays onlap and truncation patterns from just below H8 to H10, where it has a maximum amplitude of 64 m, but its geometry remains flat-topped and the H8–H10 strata thin across the fold (Figs 7A, 8, and 9). The base of S2 is currently located ~1.83 km beneath H10. Onlap and truncation patterns are also observed in Fold 3 between H7 and H9 (i.e. the top of the fold), where it has an amplitude of 125 m (Fig. 7B). The base of S3 is currently located ~1.61 km beneath H9. We only observe onlap onto the top of Fold 4 at H8 (Fig. 3B). Folds 1–3 are, in places, incised by presumably deep-marine channels (e.g., Figs 3 and 7B).
Mound-like structures
Associated with folds 1 and 2 are a series of craters and dome- and eye-shaped mounds that truncate and/or downlap onto various stratigraphic horizons between H8–H10, and are onlapped by overlying strata (e.g., Figs 7A and 10). These mounds have diameters and heights of ~200–500 m and ~30–80 m, respectively (e.g., Figs 7A and 10). All mounds are located at the fold peripheries and underlain by a zone of low-amplitude, chaotic reflections that extends down to the lateral sill terminations (e.g., Fig. 10).
Magma emplacement
Space to accommodate magma intrusion is commonly generated by deformation of the host rock. At shallow levels in sedimentary basins, intrusions often develop sill-like geometries as magma is emplaced along mechanical contrasts between layered strata or weak sedimentary rocks, and/or where the minimum principal stress axis rotates to vertical (e.g., Kavanagh et al., 2006; Gudmundsson, 2011; Schofield et al., 2012; Magee et al., 2016; Walker et al., 2017). As intrusion continues and the sill inflates, space can be generated by uplift of the overburden and free surface to form dome-shaped forced folds (e.g., Pollard and Johnson, 1973; Hansen and Cartwright, 2006). Ground deformation driven by intrusion-induced forced folding is akin to the uplift observed at active volcanoes generated by magma movement and accumulation (e.g., Castro et al., 2016; Magee et al., 2017a). Given the broad spatial coincidence between fold outlines and sill terminations (e.g., Figs 3 and 5–7), we suggest that folds 1–3 formed in response to the intrusion of S1–S3, respectively (Stearns, 1978; Hansen and Cartwright, 2006). This forced fold interpretation is supported by evidence of onlap onto folds 1–3 at various stratigraphic levels (Figs 3, 6, and 7), which indicates that the domes had a bathymetric expression (e.g., Trude et al., 2003; Hansen and Cartwright, 2006). S4 is broadly overlain by a dome-shaped fold, which is onlapped at H8 by overlying strata, but the fold extends beyond the limit of the sill to the SE by up to ~6 km (Fig. 5). We suggest that part of Fold 4 was generated in response to sill emplacement but has interfered and merged with a differential compaction fold developed over the NE-SW oriented basement high (Figs 3, 5, and 8).
Biostratigraphic dating of the sedimentary horizons within the interval of interest indicates that sill emplacement principally occurred in the Oligocene (i.e. H7–H8, ~35–32 Ma), the Early Miocene (i.e. H9, ~19–16 Ma), and the Early Pliocene (i.e. H10, ~5–4 Ma) (Fig. 2). The occurrence of subtle onlap and truncation observed within folded strata deposited between these principal phases of magmatism implies that sill emplacement occurred intermittently over ~31 Myr (Figs 3, 6, and 7), consistent with previous observations that sills and sill-complexes can assemble incrementally via the accumulation of relatively small-volume magma pulses intruded across protracted periods of time (e.g., Annen, 2011; Magee et al., 2014; Annen et al., 2015; Magee et al., 2016; Magee et al., 2017a). We cannot constrain the precise volumes and timing of individual sill emplacement events because: (i) we cannot seismically image presumably thin sills fed by discrete magma pulses; and (ii) we lack detailed biostratigraphic data to constrain the precise ages of the key onlap surfaces and strata deposited during periods of forced folding.
Given that S1–S4 are elongated ~E-W (Fig. 5), we consider it plausible that magma ascent (e.g., via dykes) and emplacement was influenced by the E-W striking Cretaceous normal faults that formed the Canterbury Basin and dissect the basement (Ghisetti and Sibson, 2012); basement-involved normal faults have also been shown to affect magma input and sill geometry in the Faroe-Shetland Basin, NE Atlantic (Schofield et al., 2017). Initial formation of S1–S4 likely occurred in the Katiki or Moreaki formations, at structural levels where host rock properties or stress conditions favoured sill emplacement (e.g., Kavanagh et al., 2006; Gudmundsson, 2011; Schofield et al., 2012), and was at least partly accommodated by forced folding (e.g., Figs 2 and 3). We suggest that later magma pulses utilised previous pathways into the basin (e.g., dykes) and became trapped by pre-existing components of S1–S3, where the new pulses promoted further sill construction and were accommodated by the reactivated growth of the forced folds. The trapping mechanism of later pulses will have principally been controlled by the relative timing of the different magma pulses and the thermal history of the intrusions and host rock (e.g., Annen, 2011; Annen et al., 2015; Magee et al., 2016). For example, if there is sufficient time for previous magma pulses to fully crystallise, their basal contact with the underlying sedimentary host rock will act as a rigidity barrier that can deflect and trap intruding magma along its surface (e.g., Kavanagh et al., 2006; Annen, 2011; Annen et al., 2015). Alternatively, if the time interval between emplaced magma pulses is short and/or earlier intrusions have not yet thermally equilibrated with the host rock (i.e.
they are crystalline mushes that retain residual melt), new magma injections may rejuvenate and mix with the partly solidified, pre-existing sill(s) (Annen, 2011). Whilst unravelling sill construction is critical to assessing their structural and thermal evolution, as well as that of the host rock, the limited spatial and temporal resolution of seismic reflection data means these hypotheses cannot be tested without additional information (e.g., biostratigraphic data from boreholes) or improvements in seismic imaging. However, because our observations indicate emplacement of S1–S4 occurred over ~31 Myr at relatively shallow levels, probably <2.5 km considering current basal sill depths beneath H8–H10 are <2 km and typically ~1.6 km, we consider it most likely that the low-temperature host rock would have promoted full solidification of magma pulses before later magma pulses intruded.
Fold amplitude as a proxy for sill thickness
Assuming that shallow-level sill emplacement is fully accommodated by elastic bending of the overburden implies that the amplitude of a forced fold is equivalent to the thickness of the forcing intrusion (Fig. 11A) (e.g., Pollard and Johnson, 1973; Goulty and Schofield, 2008; Jackson et al., 2013). Inversion of ground deformation data collected from active volcanoes and related to subsurface magma movement also typically assumes that host rock deformation occurs via elastic bending, such that the size and location of the surface uplift and/or subsidence is expected to broadly reflect the volume and position of the magma body (e.g., Biggs et al., 2011; Galland, 2012; Pagli et al., 2012). If space for magma emplacement is also generated by the contemporaneous occurrence of inelastic host rock deformation processes (e.g., fluidisation and porosity reduction), fold amplitude will be less than the thickness of the intrusion (e.g., Jackson et al., 2013; Magee et al., 2013; Magee et al., 2017b).
Figure 11: (A) Schematic summarising the expected fold geometry and onlap relationships for forced folds, specifically folds 1 and 3. (B) Schematic describing how evacuation of a tabular sill and formation of inclined limbs can drive subsidence across the crest of a forced fold, which can accommodate depositing sediments. Repeated sill inflation/deflation and forced fold uplift/subsidence cycles could produce the observed upward increase in fold amplitude from H8 to H9 and thickening of the H8–H9 strata across the fold. (C) Schematic showing how the occurrence of seismically undetected, thin sills within the fold could produce the observed upward increase in fold amplitude from H8 to H9 and thickening of the H8–H9 strata across the fold. All figure parts are drawn to the same relative scale, such that differences in deformation style between the models can be compared. Also note that although the schematics only depict one sill, which grows through injection of new magma, it is plausible that the actual imaged intrusions consist of multiple accreted sills (e.g., Annen, 2011).
The sills imaged in seismic reflection data here are expressed as tuned reflection packages and are therefore probably <56 m thick, assuming the intrusions have an average interval velocity of 5550 m s⁻¹. However, all maximum fold amplitudes measured at identified fold tops are ≥59 m and up to 125 m (i.e. Fold 3 at H9); if sill thickness is at the limit of detectability (i.e. 7 m), differences between fold amplitude and sill thickness could thus be up to ~120 m. Furthermore, because the folded sedimentary succession has been compacted during burial, the measured fold amplitudes and, thus, the discrepancies between sill thickness and fold amplitude are minimum estimates. These unexpected discrepancies, where fold amplitude is greater than sill thickness, could arise because: (i) the sills have a faster average interval velocity than 5550 m s⁻¹, which would increase the limit of separability (e.g., an interval velocity of ≥5900 m s⁻¹ would mean the sills could be ≥59 m thick; Fig. 12); (ii) the seismic velocity of the sedimentary sequence is overestimated, meaning that depth-converted fold amplitudes are accentuated, although we note that the increased depth of the study area relative to the boreholes implies the velocities used are minimum end-members; and/or (iii) multiple, seismically undetectable sills (i.e. <7 m thick) contributed to fold generation.
Figure 12: Graph showing the limits of separability and detectability for the seismic data, which have a dominant frequency of ~25 Hz, if the velocity of the igneous intrusions ranges from 4000–7500 m s⁻¹ (Magee et al., 2015). The minimum fold amplitude measured (i.e. 59 m) is shown, revealing that intrusion velocities of ≥5900 m s⁻¹ are required for sill thickness to equal measured fold amplitudes (grey).
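The ≥5900 m s⁻¹ threshold in Figure 12 follows from inverting the quarter-wavelength tuning relationship: for the limit of separability to equal a given thickness, the interval velocity must be v = 4 × f × t. A short sketch assuming the ~25 Hz dominant frequency (function name illustrative):

```python
def min_velocity_for_separability(thickness_m, frequency_hz=25.0):
    """Interval velocity at which the lambda/4 limit of separability
    equals the given bed (sill) thickness, i.e. v = 4 * f * t."""
    return 4.0 * frequency_hz * thickness_m

# For sill thickness to equal the 59 m minimum measured fold amplitude:
print(min_velocity_for_separability(59.0))  # 5900.0 (m/s)
```

At the 5550 m s⁻¹ interval velocity assumed in the text, the same relation gives the ~55 m tuning thickness quoted earlier.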
In addition to the discrepancy between maximum forced fold amplitude and sill thickness, our observations reveal that amplitude varies with stratigraphic level. For example, Fold 4 has an amplitude of 103 m at H7 but 58 m at H8 (i.e. the top of the fold) (Fig. 3B). Because Fold 4 is only onlapped at H8 (Fig. 3B), suggesting it formed in a single intrusion event, the upwards decay in fold amplitude may relate to a syn-kinematic increase in ductile strain and inelastic deformation (e.g., compaction) towards the top of the fold (e.g., Pollard and Johnson, 1973; Hansen and Cartwright, 2006). Fold 2 also decreases in amplitude upwards, from 78 m at H8 to 64 m at H10 (Figs 7A and 8), but developed across multiple intrusion events. The upper portions of Fold 2, between H8–H10, are thus expected to have been superimposed and added onto the original forced fold generated in the Oligocene. For Fold 2, the formation of a 64 m high fold during the Early Pliocene implies that the Oligocene fold had an original amplitude of 14 m.
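The incremental-growth arithmetic for Fold 2 can be made explicit: assuming later uplift is simply superimposed on the earlier structure, the Oligocene amplitude is the difference between the H8 and H10 amplitudes. A trivial sketch (names illustrative):

```python
def original_fold_amplitude(total_amp_m, later_amp_m):
    """Amplitude of the earlier fold, assuming later growth
    is superimposed on the original forced fold."""
    return total_amp_m - later_amp_m

# Fold 2: 78 m total at H8, of which 64 m formed during the Early Pliocene
print(original_fold_amplitude(78.0, 64.0))  # 14.0 m
```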
In contrast to folds 2 and 4, the amplitude of folds 1 and 3 increases with stratigraphic height; i.e. Fold 1 increases in amplitude from 59 m at H8 to 120 m at H9, decreasing to 90 m at H10, whereas Fold 3 has an amplitude of 110 m at H8 but 125 m at H9 (Figs 6, 7B, and 8). These increases in amplitude are associated with a change in fold geometry from flat-topped to dome-shaped and a subtle increase in thickness of the H8–H9 sequence across folds 1 and 3 (Figs 6, 7B, 8, and 9). Within Fold 1, where the change in fold style from H8 to H9 is more prominent, the increased number of reflections within the fold and the presence of seismic-stratigraphic onlap and apparent downlap (i.e. rotated onlaps) suggest that there are several thin packages of material that only occur across the fold crest (Fig. 6). These additional rock packages, which are restricted to the fold, accommodate the observed increase in amplitude and H8–H9 thickness (Fig. 6). It is important to note that these increases in amplitude and thickness, the change in fold morphology (i.e. from flat-topped to dome-shaped), and the occurrence of additional material solely within the folded sequence contrast with our conceptual model of intrusion-induced forced folding (Fig. 11A) (cf. Pollard and Johnson, 1973; Hansen and Cartwright, 2006; Galland, 2012; Magee et al., 2014). For example, because the geometry and growth of forced folds are controlled by a directly underlying forcing member, it is expected that whatever happens to the upper layers within a forced fold must also happen to the lower layers (Fig. 11A) (Stearns, 1978).
We suggest that the protracted development of Fold 1, and to a lesser extent Fold 3, involved repeated episodes of uplift and subsidence related to several discrete periods of sill injection and evacuation (Fig. 11B). In particular, we envisage that the intrusion and inflation of tabular sills uplifted the overburden to form flat-topped folds, which were expressed at the palaeosurface (Fig. 11B; Time 1). It is likely that Fold 1 formation was facilitated by circumferential reverse faulting and elastic bending (Figs 5, 6, and 11). Whilst many previous seismic-based studies have not recognised reverse faults associated with forced folding (e.g., Trude et al., 2003; Hansen et al., 2008; Jackson et al., 2013; Magee et al., 2013), analogue modelling experiments show that reverse faults can accompany forced fold formation (e.g., Galetto et al., 2017; Montanari et al., 2017).
With inflation and bending of the overburden, eventual tensile fracturing of the host rock immediately overlying the lateral terminations of the tabular sill allows magma to transgress upwards and form the inclined limbs of a widening saucer-shaped sill (Fig. 11B; Time 1). Exploitation of reverse faults by magma may also promote inclined limb development (Figs 6 and 11).
If the melt supply to the entire sill wanes during the emplacement of the inclined limbs, their propagation could be further driven by magma evacuating from the inner, tabular sill in response to roof subsidence; i.e. magma pressure decreases below the lithostatic load, promoting relaxation (subsidence) of the elastically bent strata and compression of the inner sill (Fig. 11B; Time 2). Such a redistribution of magma would maintain or enhance the original flat-topped fold around its rim but promote subsidence of the fold crest, which may be infilled by depositing sediment as the underlying inner sill thins (Fig. 11B; Time 2). Where a later injection of magma into the inner sill or along its contact re-inflates the forced fold, the strata deposited within the folded sequence will rotate and appear to downlap onto the underlying surface, producing a more dome-shaped fold geometry (Fig. 11B; Time 4). The seismic imaging of these stratal packages restricted to the folded sequence implies that there was time between intrusion events for a sufficiently thick sedimentary succession to be deposited (e.g., Fig. 11B). Unfortunately, we lack the high-resolution lithological and biostratigraphic data required to determine the sedimentation rate of these fold-restricted strata and, thereby, cannot constrain the time between distinct periods of sill emplacement. Overall, repeated periods of sill injection and magma evacuation into the inclined limbs over a protracted period of time could explain the observed increases in fold amplitude and stratal thickness, as well as the occurrence of fold-restricted reflections, as observed in folds 1 and 3 between H8–H9 (Figs 6, 7B, 8, 9, and 11B). Similar uplift and subsidence patterns have been observed to affect forced folds at active volcanoes, albeit on a much smaller spatial and temporal scale (Pagli et al., 2012; Magee et al., 2017a).
The injection of multiple, seismically undetectable, thin sills (i.e. <5 m thick) into the H8-H10 succession may also produce the observed fold geometries (Fig. 11C); this model could, to some extent, also explain the seismic-stratigraphic relationships if emplacement occurred incrementally. However, for Fold 1, a cumulative sill thickness of ~61 m is required to increase the fold amplitude from 59 m at H8 to 120 m at H9. Whilst borehole data from the Faroe-Shetland Basin suggest that a significant proportion of sills may not be resolved or detected in seismic reflection data (Schofield et al., 2017), perhaps supporting the thin sill model, a recent study has proposed that the high acoustic impedance contrast between igneous and sedimentary rocks means that even very thin sills should be detected in seismic data (Eide et al., 2017). We thus consider it unlikely that multiple, thin sills (<5 m thick) occur within the H8-H9 folded sequence of folds 1 and 3.
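The resolution argument above can be illustrated with the standard tuning-thickness rule of thumb (limit of separability ≈ λ/4). The interval velocity and dominant frequency below are assumed, illustrative values, not parameters reported in this study:

```python
# Vertical seismic resolution (tuning thickness) sketch. The interval
# velocity and dominant frequency are assumed values for illustration only.

def tuning_thickness(velocity_ms: float, dominant_freq_hz: float) -> float:
    """Limit of separability ~ lambda / 4, where lambda = v / f."""
    wavelength_m = velocity_ms / dominant_freq_hz
    return wavelength_m / 4.0

v_sill = 5500.0  # m/s, plausible interval velocity for a basalt sill
f_dom = 50.0     # Hz, plausible dominant frequency at these depths

limit = tuning_thickness(v_sill, f_dom)
print(limit)  # 27.5 m: sills thinner than this are poorly resolved
```

Under these assumptions a 5 m sill sits well below the ~27 m separability limit, consistent with Schofield et al. (2017), although a strong impedance contrast can still make such sills detectable (if not resolvable) as tuned reflections (Eide et al., 2017).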
Tectono-magmatic context
Initial emplacement of S1-S3 during the Oligocene (35-32 Ma) was concurrent with emplacement of the Waiareka-Deborah volcanics and/or the Cookson volcanics (Fig. 2) (Timm et al., 2010). This magmatic event coincides with, and may be genetically related to, the opening and separation of Australia and Antarctica, which occurred ~33-30 Ma (e.g., Jenkins, 1974), and/or the northwards propagation of the Emerald Basin spreading zone (Uruski, 2010). Sill emplacement during the Early Miocene (~19-16 Ma) likely correlates to either the onshore development of the 27-12 Ma Oxford Volcanics in Central Canterbury or the 16-11 Ma Dunedin Volcano on the Otago Peninsula, which is located only ~50 km to the WSW of the study area (Fig. 2). It is difficult to link Early Pliocene sill emplacement (5-4 Ma) to other magmatic events that occurred in and around the Canterbury Basin, although it may relate to the ~2.6 Myr old basaltic Geraldine and Timaru lavas (Timm et al., 2010).
Implications for using seismic reflection data to inform interpretation of ground deformation at active volcanoes
Reflection seismology is the only technique that allows the entire 3D geometry of natural, shallow-level intrusions and associated host rock structures to be visualised and quantified at a relatively high resolution (e.g., Smallwood and Maresh, 2002; Hansen and Cartwright, 2006; Magee et al., 2016). Seismic reflection data thus provide a unique opportunity to investigate how overburden uplift (i.e. forced folding) and subsidence accommodates intrusions and is expressed at the contemporaneous surface (e.g., Trude et al., 2003; Hansen and Cartwright, 2006; Jackson et al., 2013). For example, discrepancies between fold amplitudes and intrusion thicknesses measured in seismic reflection data, coupled with field observations, have highlighted that inelastic deformation processes can play an important role in accommodating magma volumes (e.g., Jackson et al., 2013; Magee et al., 2013). To date, however, the vast majority of seismic-based studies examining intrusion-induced forced folds adopt an interpretation framework that assumes magma emplacement and fold growth occurred instantaneously (e.g., Trude et al., 2003; Hansen and Cartwright, 2006; Jackson et al., 2013). Whilst this instantaneous model may be appropriate for forced folds developed during single, short-lived magma injection events, observations of active emplacement and host rock deformation from field-, geophysical-, and geodetic-based studies reveal that forced folds can evolve through multiple uplift and subsidence episodes (e.g., Sturkell et al., 2006; Magee et al., 2017a). It is thus difficult to reconcile insights into the processes controlling ground deformation obtained from seismic reflection data, which only provide a snapshot of the cumulative strain accommodating ancient intrusions, with the dynamic uplift and subsidence recorded at active volcanoes. We show that mapping of intra-fold strata and identification of seismic-stratigraphic relationships can be used to unravel the incremental development of sill
intrusions and overlying forced folds (see also Magee et al., 2014). Furthermore, our results provide the first evidence from seismic reflection data that the dynamic interplay between uplift and subsidence can control forced fold geometries. We suggest that broad areas of uplift likely correspond to the inflation of magma reservoirs, whereas the transition to broad subsidence and localised uplift (e.g., above the inclined limbs of saucer-shaped sills) marks the onset of magma transgression. Importantly, our observations also emphasise that relatively simple, transient uplift and subsidence patterns can be produced by complex intrusion morphologies (Galland, 2012; Magee et al., 2017a).
Implications for hydrocarbon exploration
Deciphering how the host rock deforms and accommodates the intruded magma volume is also important from a hydrocarbon exploration perspective because: (i) elastic folding of the overburden and free surface above intruding, shallow-level (<2 km depth) sills can produce forced folds that may result in the formation of structural (i.e. four-way dip closures) and stratigraphic (i.e. pinchout) traps (e.g., Reeckmann and Mebberson, 1984; Smallwood and Maresh, 2002; Schutter, 2003; Schmiedel et al., 2017); (ii) intrusion-induced faulting and fracturing, which may accompany folding, can increase local permeability and potentially breach traps or compartmentalise reservoirs (e.g., Reeckmann and Mebberson, 1984; Holford et al., 2012; Holford et al., 2013); and (iii) inelastic deformation processes involving porosity reduction (e.g., compaction and fluidization) can inhibit hydrocarbon migration and reduce reservoir quality (Schofield et al., 2017). Sill emplacement in the petroliferous Canterbury Basin throughout the Oligocene-to-Early Pliocene overlapped with the onset of hydrocarbon generation and expulsion in the mid-Miocene (Fig. 2) (Bennett et al., 2000). The sills are spatially restricted and therefore likely to influence any active petroleum system only on a local scale. Sills intrude Cretaceous-to-Palaeogene strata, where the principal source rocks (e.g., coals) are expected (Figs 2, 3 and 6). The imaged sills are probably <55 m thick, but their impact on source rock maturity is unknown; e.g., sill intrusion could mature or overmature any surrounding source rocks (e.g., Rodriguez Monreal et al., 2009; Holford et al., 2013). Furthermore, it is probable that igneous bodies below the resolution of the seismic data are present and could impact maturation dynamics (Schofield et al., 2017). The forced folds deform potential Late Cretaceous and Eocene reservoir rocks, creating possible structural traps (Figs 2, 3 and 6). Other potential traps associated with the forced folds are created by the onlap of strata onto the domes (Fig. 6) (Smallwood and Maresh, 2002; Magee et al., 2017b). Overall, whilst it is difficult to assess whether sill emplacement had a beneficial or adverse effect on petroleum system development, our study highlights that it is critical not only to elucidate magma emplacement mechanics, but also to determine the timing of magmatism relative to hydrocarbon generation and migration.
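To give a sense of scale for the maturation question, a hedged sketch: contact-metamorphic aureoles are commonly reported at roughly 0.3-2x the sill thickness. That scaling range is an assumed rule of thumb from the wider literature, not a value from this study:

```python
# Hypothetical aureole-extent estimate for a sill of given thickness.
# The 0.3-2x scaling range is an assumed rule of thumb, not study data.

def aureole_extent(sill_thickness_m: float, lo: float = 0.3, hi: float = 2.0):
    """Return the (min, max) aureole extent for an assumed scaling range."""
    return sill_thickness_m * lo, sill_thickness_m * hi

lo_m, hi_m = aureole_extent(55.0)  # the imaged sills are probably <55 m thick
print(round(lo_m, 1), round(hi_m, 1))  # roughly 16.5 to 110.0 m of host rock
```

Even under these loose assumptions, a single <55 m sill could thermally affect tens of metres of adjacent source rock, which is why sub-seismic intrusions complicate the maturity picture.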
CONCLUSIONS
Emplacement of shallow-level sills in sedimentary basins is commonly accommodated by overburden uplift to produce a forced fold that is expressed at the contemporaneous surface. The geometry and kinematics of these intrusion-induced forced folds reflect sill emplacement processes and thus shed light on how ground deformation relates to magma movement at active volcanoes. Here, we use 3D seismic reflection data from the Canterbury Basin, offshore SE New Zealand, to analyse the timing and formation of four saucer-shaped sill and forced fold pairs. Seismic-stratigraphic onlap and truncation relationships reveal that sill emplacement initially occurred in the Oligocene (~35-22 Ma), followed by two other major intrusive phases in the Early Miocene (~19-16 Ma) and Early Pliocene (~5-4 Ma); these observations indicate that we should not rely on simply identifying onlap relationships at the top of forced folds to assess the age of sill emplacement. Evidence of forced fold growth between these main magmatic events indicates that sill emplacement occurred incrementally over a protracted timespan (~31 Myr). Whilst two of the forced folds conform to the traditional conceptual models of forced fold growth, i.e. fold amplitude decreases up and away from the underlying forcing body, two folds exhibit an upward increase in fold amplitude and a change in morphology from flat-topped to dome-shaped. These changes in fold geometry correspond to the occurrence of additional seismic reflections across, and restricted to, the fold crests, which locally thicken the folded sequence. We suggest that this unexpected increase in fold amplitude and thickening of strata can be attributed to either: (i) repeated episodes of sill injection and inflation followed by magma evacuation into the inclined limbs of the saucer-shaped sill, which promoted fold subsidence and locally accommodated deposition of sediments restricted to the deformed sequence; or (ii) the emplacement of seismically undetectable, thin sills within the folded sequence. Furthermore, by unravelling forced fold kinematics, we demonstrate that sill emplacement spanned the generation, migration, and accumulation of hydrocarbons, potentially influencing local petroleum system development. Our observations show that changes in ground deformation patterns, specifically the localisation of uplift and onset of broad subsidence, may indicate magma transgression. Overall, our study shows that analysing structural and stratigraphic relationships across the entire height of a forced fold can provide critical insight into the long-term and dynamic evolution of sill emplacement and associated ground deformation.
AUTHOR CONTRIBUTIONS
JR conducted the bulk of the seismic interpretation, analysis, and write-up as part of her MSci project at Imperial College London. CM designed the project, aided interpretation, and contributed significantly to the manuscript writing. CJ helped design the project, advised on interpretations, and edited the manuscript.
Figure 3 :
Figure 3: (A) Map of the seabed in the study area highlighting the presence of three, deep seafloor canyons. (B) Time-migrated and depth-converted seismic sections showing the effect of velocity push-downs related to the seafloor canyons and the four sills and forced folds studied. Depth-converted seismic sections with vertical exaggeration (VE), to better highlight the fold geometries, and without are shown for comparison. See Figures 1 and 3A for location.
Figure 5 :
Figure 5: Depth-structure map of S1-S4 highlighting the location of reverse faults around sill edges and the positions of folds 1-4.
Figure 6 :
Figure 6: Seismic sections and line interpretations through S1 and Fold 1. See Figure 5 for locations.
Figure 7 :
Figure 7: Seismic sections and line interpretations through S2 and Fold 2 (A), and S3 and Fold 3 (B). See Figure 5 for locations.
“Influence of the minimum salary level increase on the business entities activity in the context of the transition to the sustainable development”
In the context of the transition to sustainable development, well-justified and economically balanced managerial decisions should be introduced into the activity of business entities. First of all, this is connected with the formation of social standards by the Ukrainian government. The establishment of a minimum salary for employees of the national economic complex of the country is one of the main components of these standards. This indicator influences both the level of the population's social welfare and the economics of economic entities, including business representatives. The research was conducted in Ukraine. The article analyses the main trends in the social welfare provision of business sector entities, drawing on the experience of Hungary and Russia. The main rules of effective social welfare provision, accounting for the needs of the business environment, are formulated. An economic analysis of retrospective and forecast information on payroll payments and social contributions is performed, and the influence of the raised minimum salary on the activity of business entities is assessed. A regression model of the dependence of the payroll budget on the minimum salary and the level of social contributions is designed. The calculation results show a high level of tax burden on business sector entities, so organisational-economic measures to decrease the tax burden on business entities, taking into account minimum salary growth for their employees, are offered in the context of the transition to sustainable development. Recommendations concerning further scientific research on the topic of the article are also given.
Introduction
In the context of constant transformations in the real, private, state and external sectors of the economy, which determine the transition to sustainable development, business entities should pay special attention to forming an effective policy of employees' social welfare provision. We use the term sustainable development to mean the formation of new patterns for introducing the concept of economy ecologisation and ensuring a balanced variant of its development (Strochenko, 2014). Many factors negatively influence the realisation of this policy by business organisations. Given that the study was conducted in Ukraine, the main statistical markers of the social development of the country's economic sectors, including the business sector, in 2015-2016 should be analysed. Thus, the average gross wage in August 2016 was 5202 UAH (203 USD at the National Bank of Ukraine exchange rate at the moment of estimation), 3.6 times higher than the minimum wage (1450 UAH (57 USD)). Relative to August 2015, the minimum wage increased by 23.7% (Anti-crisis Program of Joint Actions of Government and Business: Urgent Solutions, 2016). From January to September 2016, wage arrears amounted to 1902.3 mln UAH (~74.2 mln USD). The rate of increase of wage arrears was 94.2% (September 2016 relative to September 2015). The number of officially unemployed workers in 2015 was 490.8 thousand (multi-industry statistical information, 2016). The level of wage arrears is a negative factor: it reduces the motivation of the staff of business organisations to work effectively and, in turn, affects the economic indicators of business entities' activity.
At the current level of development of productive forces, the Government of Ukraine, in its social-economic programs, pursues a policy of raising social standards. This policy sets the minimum wage for employees across different business patterns, but it does not take into account the resulting increase in the tax burden on business entities.
It should be noted that, under modern market relations in the country's economy, the negative consequences of a wage increase include: a low level of real income of the population, a low level of pension provision, inflation risks, a high tax burden, strengthened fiscal control, and a rise in corruption (News of the Ukrainian League of Industrialists and Entrepreneurs, 2016).
All of the above determines the timeliness of the research topic and its main aim: a comprehensive economic justification of the influence of an increase in the level of labour remuneration on the economic activity of business entities in the context of the transition to sustainable development. The subject of the research is the development of theoretical and methodological issues concerning the formation of social standards in business organisations' activity. By the term "business entities" we mean entities of economic relations that perform business activities (Makhinchuck, 2013).
Brief literature review
1.1. Social-economic systems in the context of the transition to sustainable development. When investigating a social-economic system, it is necessary to emphasise the importance of its formation and functioning in the context of the transition to sustainable development. In the work of L.G. Melnik, the preconditions for the self-organisation of open steady-state systems are examined, the main rules of a system's functioning and development are formulated, and the factors that determine the speed of development of social-economic systems are analysed (Melnik, 2010). The formation of innovation strategies in the development of social-economic systems to increase the efficiency of the "green" economy was investigated in the joint work of L.G. Melnyk, O.V. Shkarupa and M.O. Kharchenko (Melnyk, Shkarupa and Kharchenko, 2013). Moreover, the creation of a favourable social-economic and regulatory climate for further sustainable development is an important condition for the effective functioning of a social-economic system. In the article by I. Sotnyk, T. Kurbatova, and G. Khlyap (Sotnyk, Kurbatova and Khlyap, 2014), economic, social and legislative problems of improving the activity of state renewable energy sector entities in the context of the transition to sustainable development were studied. It is also necessary to mention the role of business organisations, as entities of the social-economic system, in its functioning, which means distinguishing the organisational-economic and social interaction between business companies and state authorities. The improvement of the administration of small and medium-sized business in a crisis, by means of forming an interaction pattern between enterprises and regional authorities in order to guarantee business entities' viability and to increase the efficiency of regional economic policy, including the raising of social standards and norms, was studied by A.V. Kundenko, M.S. Dorosh, I.A. Baraniuk and D.M. Ilchenko (Kundenko, Dorosh, Baraniuk and Ilchenko, 2015).
1.2. Social determinants in business entities' activity. One of the main features of a system's sustainable development is an increase in the level of employees' social protection, which is also characterised by growth of the payroll budget at enterprises of various forms of incorporation. In this research, it is necessary to take into account the interference of the "grey" economy in social welfare provision within business organisations' activity. The influence of the level of the "grey" economy on the financial provision of social protection was studied, using economic-statistical methods, by the economists M.I. Malyovanyi, O.V. Rolinskyi and N.V. Lysa (Malyovanyi, Rolinskyi and Lysa, 2016). The obtained results show that countries with a higher level of "grey" economy have lower social expenditures per person, primarily because of the reduction of funding sources for social protection (Malyovanyi, Rolinskyi and Lysa, 2016). The study of human capital in social business activity is an important issue, investigated by S. Estrin, S. Mickiewicz, and U. Stephan (Estrin, Mickiewicz, and Stephan, 2016). In our opinion, this allows attention to be focused on the formation of intellectual potential among employees of business sector companies, and on an effective system of labour remuneration within the company. The individual motivation component should be taken into account when investigating the role of social entrepreneurship in creating an effective system of employees' labour remuneration; the article by O. Irengun and S. Arikboga was devoted to these issues (Irengun and Arikboga, 2015).
1.3. Labour remuneration in the activities of business entities.
The level of employees' labour remuneration plays an important role in the effective social welfare provision of an enterprise. J. Cheslock and T. Callie researched changes in labour remuneration in consulting companies and educational establishments (Cheslock and Callie, 2015), establishing differences between business sectors and the levels of state funding of those sectors. When studying the key features of labour remuneration, it is necessary to pay attention to the salary structure across various categories of employees. Thus, in the works of M. Malul and A. Shoham (Malul and Shoham, 2013), special features of salary setting for top management, depending on its real qualifications, are studied. Those authors consider qualification to be the basic criterion for establishing the level of a company's top-management remuneration and its ability to adopt effective managerial decisions. A valuable issue in the research of remuneration changes in business structures is the study of the resource possibilities of business organisations that can be directed towards increasing employees' labour remuneration. This problem was studied by V. Ng and D. Feldman (Ng and Feldman, 2014), who examined the preservation of financial resources and delays in reaching a high level of labour remuneration over a career. The issues of negotiations between employer and employee concerning changes in remuneration, namely the development of a flexible approach to the relationship between chief and subordinate in establishing a level of remuneration optimal for both sides, were studied by D. More (More, 2014).
Among the research works devoted to the role of labour remuneration and social welfare provision in the activity of business organisations, little attention is given to establishing the dependence between payroll budget growth and the social contributions that entrepreneurs are obliged to pay. That is why this issue is studied in this article. The scientific hypothesis implies improving the methodology for assessing the influence of labour remuneration growth on the activity of business sector entities in the context of the transition to sustainable development.
Research approach.
To achieve the main goal of the research, the method of comparative analysis of the main economic indicators characterising labour remuneration processes in entrepreneurial activity was used. A methodological approach involving economic modelling of the influence of labour remuneration growth on the tax burden, which in turn affects the economic activity of business organisations in general, was also applied. This modelling establishes a regression functional relation between the payroll budget, accounting for minimum wage growth, and social contributions.
Participants.
Information for the economic substantiation of the influence of labour remuneration growth on the tax burden in the activity of business organisations was taken from companies' financial reporting, namely from the reports on financial results of business sector companies that perform project activities. The companies are TOV "BBB" (Sumy), TOV "Avtogazproject" (Dnipro) and TOV "LMG" (Sumy) (all limited liability partnerships). All staff categories of these companies were examined: in total, administrative officers 20%, engineers 70%, and junior labour 10%. Information about the labour remuneration of these employees is taken into account when calculating the tax burden on the economic entities.
Measuring toolkit.
For the economic substantiation of the influence of labour remuneration growth on the tax rate in the activity of business organisations, the authors used the comparative method of economic indexes, the special features of which were researched by T.A. Gorodnia and N.S. Kanyuka (Gorodnia and Kanyuka, 2012). The principle of the method is the use of economic markers that characterise the level of labour remuneration, namely the salary itself, salary supplements and rewards, which together comprise the main and additional labour remuneration of employees, as well as the level of social contributions, characterised by the united social tax, the personal income tax rate and the war tax. On the basis of this comparative analysis, an economic model is formed. It characterises the regression functional relation between four variables (payroll budget, united social tax, personal income tax, and war tax) and establishes the mathematical dependence between the payroll budget and social contributions. The features of forecasting with regression functional relations proposed by A.V. Kalinichenko and Y.V. Shmigol (Kalinichenko and Shmigol, 2012) were accepted as the background of the estimation. The features of the regression relation between employees' remuneration and their skills offered by G. Gilpin (Gilpin, 2012) are also included in this toolkit.
Procedure.
During the research, data from the business entities' statistical reporting were collected. These data concern the remuneration of employees of the business organisations during 2015-2016. Staff categories receiving the minimum salary within the corresponding period were taken into account. The obtained results were used for research purposes only.
Statistical analysis.
A comparative analysis of the statistical reporting of business organisations on labour remuneration was conducted. Regression analysis was used to form the regression functional relation between the payroll budget, which is influenced by the minimum level of labour remuneration, and the amount of social contributions of the business entities under examination.
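The regression step described above can be sketched as follows. The payroll figures are synthetic, and the combined 41.5% rate simply sums the statutory 2017 rates (UST 22% + PIT 18% + war tax 1.5%); this illustrates the method, not the authors' fitted model:

```python
# Ordinary least squares sketch: social contributions vs. payroll budget.
# Payroll values are synthetic; the rate sums the 2017 statutory rates.

def fit_line(x, y):
    """Fit y = a + b*x by ordinary least squares; return (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

RATE = 0.22 + 0.18 + 0.015  # UST + PIT + war tax = 0.415
payroll = [96000.0, 120000.0, 150000.0, 210000.0]  # UAH, hypothetical
contributions = [p * RATE for p in payroll]

intercept, slope = fit_line(payroll, contributions)
print(round(slope, 3))  # the slope recovers the 0.415 combined tax rate
```

Because the statutory rates are flat, contributions are strictly proportional to payroll here; on real reporting data the slope and intercept would absorb exemptions, caps and arrears.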
Analytical and theoretical issues.
When studying the organisational-economic issues, the level of social standards in the country should be considered, and the dynamics of minimum salary growth in Ukraine during 2004-2015 analysed. It was established that in 2004-2015 the minimum labour remuneration in dollar equivalent initially increased, while in 2012-2015 inclusive the dynamics were negative, as the minimum labour remuneration decreased from 134.29 USD to 57.42 USD (a difference of 76.87 USD). According to the Ministry of Social Policy of Ukraine, the minimum salary is officially received in Ukraine by 3.7 mln employees (2.6 mln in the private sector). Given that in 2015 the working population was 16.4 mln people, the minimum salary was received by more than 20% of employees. For comparison, in 2014, among 77.2 mln US employees who were paid wages, about 3 mln were remunerated at the federally adopted minimum wage of 7.25 USD per hour or below (3.8% in total). In Poland, about 10.5% of employees were remunerated at or below the minimum wage (Gorodnichenko and Talaver, 2016).
A situation that leads to the shadowing of relations between employer and employee is observed: it is better for business entities to declare the minimum salary in the income declaration than to set a high salary for employees and pay high taxes. As a result, these processes decrease budget revenues. The government's policy of raising social standards in order to increase budget revenues is therefore quite understandable.
M. Tonin (Tonin, 2011) studied the Hungarian practice of minimum salary growth. In 2000, the minimum salary in Hungary was 25,000 HUF; in 2001, 40,000 HUF. It was found that, as social welfare provision increases, so does the share of employees who lose their jobs because of the growth of employer expenditure on labour resources. The practice of the minimum salary increase introduced by the Russian government in 2007 should also be noted; it was studied by the economists O. Muravyev and O. Oshchepkov (Muravyev and Oshchepkov, 2015). It was discovered that minimum salary growth led to growth of shadow payments, decreased youth employment, the creation of new shadow jobs, and a decrease in the guaranteed level of social welfare provision.
When forming an effective social policy aimed at raising social standards, it is necessary to work out the main rules of effective social welfare provision that take into account the needs of business entities. They characterise the effective direction of the minimum labour remuneration increase for all participants in this process.
The authors refer to the following rules:
- the equilibrium principle, which characterises a balanced increase of social standards, accounting for the economic situation in the country, the price level, and the paying capacity of the population;
- the pro rata principle, which establishes the dependence between the labour remuneration increase and the level of the tax burden, which has to be optimal for effective business activity;
- the principle of equality, which includes equal rights before the law among all participants in the process (entrepreneurs, employees, taxation bodies) and their compliance with laws and regulations;
- the principle of non-refoulement, which covers the realisation of the components of sustainable development in entrepreneurs' activity, meaning that the business processes of business organisations have to be aimed at stable growth, social responsibility and an increase in business competitiveness.
It should be noted that the problems of violation of legislative regulation by enterprise administrations in minimum salary accounting, which relate to the principle of equality, were studied by O.M. Pyshulina (Pyshulina, 2007), who examined the functions of labour remuneration and its legislative provision. The authors argue that the developed rules of effective social welfare provision in the activity of business entities advance the theoretical basis of social welfare provision in the context of sustainable development.
Descriptive statistics.
The practical consequences of the minimum salary increase for business sector entities should be analysed. For entrepreneurs paying the united tax, the minimum compulsory tax payments, set at 10% of the minimum salary for the first group and 20% for the second group, will more than double. So, in 2017, these entrepreneurs will have to pay the state 3840 UAH (143 USD) and 7680 UAH (286 USD) in united tax alone, compared with 1653.6 UAH (61 USD) and 3307.2 UAH (123 USD), respectively, in 2016.
There are also compulsory united social tax (UST) payments for individual entrepreneurs under the simplified tax system. In 2017 these payments will amount to 8448 UAH (314 USD), compared with 3797.64 UAH (141 USD) in 2016 (Zhuck, 2016).
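As a quick cross-check of the figures above, the quoted annual payments can be reproduced from the 2017 minimum monthly salary. A minimal Python sketch; the 3200 UAH monthly base is an assumption inferred from the quoted totals, not stated in the text:

```python
# Cross-check of the 2017 annual payments quoted above.
# ASSUMPTION: minimum monthly salary of 3200 UAH in 2017 (inferred
# from the quoted totals, not stated in the excerpt).
MIN_SALARY_2017 = 3200  # UAH per month (assumed)

def annual_payment(rate, base=MIN_SALARY_2017, months=12):
    """Annual compulsory payment at a given rate of the monthly base."""
    return base * rate * months

group1_united_tax = annual_payment(0.10)  # first group, 10%
group2_united_tax = annual_payment(0.20)  # second group, 20%
ust = annual_payment(0.22)                # united social tax, 22%
print(group1_united_tax, group2_united_tax, ust)
```

Each result matches the totals quoted in the text (3840, 7680, and 8448 UAH per year).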
The practical side of social welfare provision should be analyzed in the activity of three business entities: TOV "BBB" (Sumy), TOV "Avtogazproject" (Dnipro), and TOV "LMG" (Sumy). The input data in Table 1 are based on labour remuneration reports and reports on the amount of accrued salary at TOV "BBB" (Sumy), TOV "Avtogazproject" (Dnipro), and TOV "LMG" (Sumy) for the period 2015-2016.
Table 1. Input data on labour remuneration and social tax payments made by the business organizations TOV "BBB", TOV "Avtogazproject", and TOV "LMG" (according to financial reporting data). With the input data obtained, a comparative analysis of the business entities' activity can be carried out.
Comparative statistical analysis.
The level of salary growth and social contributions should be studied for the fourth quarters of 2015 and 2016 and the predicted first quarter of 2017 in the activity of TOV "BBB", TOV "Avtogazproject", and TOV "LMG" (Table 2). The tax burden was estimated using the rates set by regulations for 2017: united social tax, 22% (united social tax, 2016); personal income tax, 18% (PIT, 2016); and war tax, 1.5% (war tax, 2016).
Table 2. Comparative analysis of the payroll budget and social contributions in the activity of TOV "BBB", TOV "Avtogazproject", and TOV "LMG" (developed by the authors). Taking into account the National Bank of Ukraine exchange rate as of the date of estimation (USD/UAH 1:23.13 in 2015; 1:26.89 in 2016), a sharp jump in labour remuneration expenditures in the predicted period was established: 215% in TOV "BBB", 205% in TOV "Avtogazproject", and 210% in TOV "LMG". This growth of expenditures is certainly a negative factor, as it raises the cost of all business processes and the cost of the services rendered for all participants of the business sector.
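The conversion behind these comparisons can be sketched as follows. Only the exchange rates come from the text; the payroll amounts are hypothetical placeholders (the enterprises' actual figures are in Table 2):

```python
# Converting UAH payroll figures to USD at the National Bank of Ukraine
# rates quoted above, then computing the growth percentage.
RATE_2015 = 23.13  # UAH per USD, 2015
RATE_2016 = 26.89  # UAH per USD, 2016

def to_usd(uah, rate):
    """Convert a UAH amount to USD at the given exchange rate."""
    return uah / rate

# Hypothetical payroll budgets (UAH); real values are in Table 2.
payroll_2015 = 100_000.0
payroll_2017 = 215_000.0  # chosen to illustrate a 215% level

growth_pct = payroll_2017 / payroll_2015 * 100
print(round(to_usd(payroll_2015, RATE_2015), 2), round(growth_pct, 1))
```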
Regression analysis.
A regression analysis should be conducted that builds a regression function of four variables, i.e., a mathematical equation (y) expressing the dependence of the payroll budget (accounting for minimum salary growth) on social contributions.
(1:26.89). The linear equations presented in Fig. 2 were chosen according to the max R² criterion, i.e., by maximizing the goodness of fit of the approximation, which increases the reliability of the obtained data.
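The max R² selection criterion can be sketched as follows; the data points are illustrative only (the actual series are in Table 2 and Fig. 2), and the fit is ordinary least squares:

```python
import numpy as np

# Fit a linear trend and report R-squared; among candidate equations,
# the one with the highest R-squared is retained (the max R^2 criterion).
# Data points are illustrative, not the enterprises' actual figures.
x = np.array([1.0, 2.0, 3.0, 4.0])       # e.g. reporting periods
y = np.array([10.2, 20.1, 29.8, 40.3])   # e.g. payroll budget (hypothetical)

slope, intercept = np.polyfit(x, y, 1)   # ordinary least-squares line
y_hat = slope * x + intercept
ss_res = float(np.sum((y - y_hat) ** 2))
ss_tot = float(np.sum((y - np.mean(y)) ** 2))
r_squared = 1.0 - ss_res / ss_tot
print(round(r_squared, 4))
```

A near-perfect linear trend yields R² close to 1; candidates with lower R² would be discarded under this criterion.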
Discussion.
The authors proved that the established dependence between the growth of the payroll budget (driven by minimum labor remuneration growth) and the increase of social contributions negatively influences the economic activity of business entities, because it raises the tax burden rate in their activity. The economists I.M. Sotnik and T.V. Yakushko (Sotnik, Yakushko, 2016) likewise state that the level of the minimum salary in 2015 remained at the level of 2010, does not fulfill its social functions, and leads to employees' demotivation. One more important issue influencing the increase of social standards is the availability of a sufficient amount of current capital for the effective activity of business entities. When forming the function of the dependence of the payroll budget on social contributions, it is necessary to note the limitation of this function, namely that the tax burden rates must be constant. Meeting this condition makes it possible to increase the reliability of the obtained results and to decrease the inaccuracy of calculations. The constructed economic models, which characterize the functions of the dependence of the payroll budget on social contributions, make it possible to distinguish the interrelation among the four variables and to study the level of deviation from the control (trend) value of the dependence of the payroll budget on social contributions.
Consequences for administration.
The authors note that a sharp increase of the minimum salary leads to an increase in the level of social contributions for the business sector and, as a result, to staff reduction and the introduction of shadow schemes of salary payment. Thus, there is a need to optimize the tax burden in the context of the minimum salary increase in business entities' activity. The authors propose organizational-economic measures for business administration, oriented toward decreasing the tax burden in the context of the transition to sustainable development. These measures are the following: a decrease of the tax rates of social contributions for business entities, which can lead to the unshadowing of business and the payment of legal salaries without shadow schemes; the introduction of tax holidays for entrepreneurs whose business is younger than one year, giving them an opportunity to build up working capital for effective further administration; a decrease of the loan rate for the business sector to 10% per year, with a subsequent decrease to 3% in the context of sustainable development, a measure that can stimulate investment in the business sector; the introduction of business patterns with collateralized property, which will create a capital stock for the business organization in the case of loan debt and decrease the risks of non-payment for material assets and other payments by business entities; and the implementation of state programs for attracting investors to the business sector, aimed at the creation of new working places in the region and an increase of salaries for employees.
Accomplishing these organizational-economic measures will solve the administration problems that currently exist in the social sphere for business entities and implement the main principles of sustainable development.
Conclusions and directions for further researches.
The main problematic aspects of social provision in the social sphere were studied. Existing scientific views on the social provision process and its role in the economic activity of business entities were examined. A comparative analysis of the payroll budget and social contributions in the work of TOV "BBB", TOV "Avtogazproject", and TOV "LMG" was performed. A direct linear regression between payroll budget growth, accounting for the level of the minimum salary, and social contributions was established. Organizational-economic measures for decreasing the tax burden on business entities were offered. For further scientific research on this topic, the authors propose not to be limited to the examination of monetary policy alone, but to study this problem in combination with an analysis of the investment and financial provision of business organizations' activity and the building of organizational-economic provision for employee motivation rewards in the work of business entities.
Leonid Melnyk, Leonid Taraniuk, Olga Kozmenko, Lina Sineviciene, 2017. Leonid Melnyk, D.Sc. (Economics), Professor, Head of the Economics and Business Administration Department, Sumy State University, Ukraine. Leonid Taraniuk, D.Sc. (Economics), Professor, Associate Professor of the Economics and Business Administration Department, Sumy State University, Ukraine. Olga Kozmenko, Dr., Professor, Department of Finance, Kharkiv National University of Economics, Ukraine. Lina Sineviciene, Dr. in Economics, Lecturer, Department of Finance, School of Economics and Business, Kaunas University of Technology, Lithuania.
Table 2 (cont.). Comparative analysis of the payroll budget and social contributions in the activity of TOV "BBB", TOV "Avtogazproject", TOV "LMG" (developed by the authors)
Morphine Withdrawal Stress Modulates Lipopolysaccharide-induced Interleukin 12 p40 (IL-12p40) Expression by Activating Extracellular Signal-regulated Kinase 1/2, Which Is Further Potentiated by Glucocorticoids*
Withdrawal stress is a common occurrence in opioid users, yet very few studies have examined the effects of morphine withdrawal (MW) on immune functioning or the role of glucocorticoids in MW-induced immunomodulation. This study investigated for the first time the role of glucocorticoids in MW modulation of LPS-induced IL-12p40, a key cytokine playing a pivotal role in immunoprotection. Using WT and μ-opioid receptor knock-out mice, we show that MW in vivo significantly attenuated LPS-induced IL-12p40 mRNA and protein expression. The role of glucocorticoids in MW modulation of IL-12p40 was investigated using a murine macrophage cell line, CRL2019, in an in vitro MW model. Interestingly, MW alone in the absence of glucocorticoids resulted in a significant reduction in IL-12p40 promoter activity and mRNA and protein expression. EMSA revealed a concurrent decrease in consensus binding to transcription factors NFκB, Activator Protein-1, and CCAAT/enhancer-binding protein and Western blot analysis demonstrated a significant activation of LPS-induced ERK1/2 phosphorylation. Interestingly, although glucocorticoid treatment alone also modulated these transcription factors and ERK1/2 activation, the addition of glucocorticoids to MW samples resulted in a greater than additive reduction in the transcription factors and significant hyperactivation of LPS-induced ERK1/2 phosphorylation. ERK inhibitors reversed MW and MW plus corticosterone inhibition of LPS-induced IL-12p40. The potentiating effects of glucocorticoids were non-genomic because nuclear translocation of glucocorticoid receptor was not significantly different between MW and corticosterone treatment. This study demonstrates for the first time that MW and glucocorticoids independently modulate IL-12p40 production through a mechanism involving ERK1/2 hyperactivation and that glucocorticoids can significantly augment MW-induced inhibition of IL-12p40.
support previous studies by the Eisenstein group (9, 10, 19) showing that mice subjected to morphine withdrawal stress in the context of LPS stimulation displayed decreased levels of IFN-γ and IL-12. The protective role of IL-12 in human infectious diseases, including leprosy (20), tuberculosis (21), and leishmaniasis (22), has been well characterized. On the contrary, overexpression of IL-12 may contribute to the development of chronic inflammatory disorders (23), including Crohn disease and rheumatoid arthritis. Thus, the regulated expression of IL-12 in antigen-presenting cells is a critical event in the pathogenesis of infectious and inflammatory diseases.
Although several studies using dexamethasone as a surrogate for stress in vitro reported inhibition of IL-12p40 production in LPS-stimulated monocytic cells, thus far, to our knowledge, there have been no studies that have systematically investigated the role of corticosterone in MW-induced immunosuppression and specifically IL-12p40 synthesis.
In the current investigation, we studied the effects of MW in vivo in WT and MORKO mice and in vitro in the presence of corticosterone, to simulate stress, in primary murine macrophage cells and macrophage cell lines to delineate the role of corticosterone in MW-induced inhibition of IL-12p40 expression in LPS-stimulated cells.
EXPERIMENTAL PROCEDURES
Animals-8–10-week-old B6129SF2 and B6129PF1 male mice and MORKO male mice were used in the experiments described herein. Animals were housed 4 animals/cage under controlled conditions of temperature and lighting (12-h light/dark cycle) and given free access to standard food and tap water. All animals were allowed to acclimate to their environment for at least 7 days prior to any experimental manipulations. Mice were sacrificed by carbon dioxide asphyxiation, and spleen tissues were harvested aseptically. Discomfort, distress, and injury to the animals were minimized. The Institutional Animal Care and Use Committee at the University of Minnesota approved all protocols in use, and all procedures are in agreement with the guidelines set forth by the National Institutes of Health Guide for the Care and Use of Laboratory Animals.
In Vivo Withdrawal Model-Mice were subjected to a well-established model for both generating morphine dependence and producing withdrawal (24). Animals were anesthetized by inhaling isoflurane (3%), followed by implantation with morphine pellets (75 mg each) or placebo pellets (kindly provided by NIDA, National Institutes of Health, Rockville, MD), depending on the experiment. The implantation procedure consisted of making a small incision on the dorsal side of the animal and inserting a pellet (placebo or morphine) into the subcutaneous space created by the incision. Pellets were wrapped with nylon mesh and secured with surgical thread to facilitate easy removal. The incision was closed with the use of stainless steel wound clips. Following the morphine exposure period (72 h), the pellets were removed by opening the wound clips and taking out the pellets wrapped in nylon mesh. The wound was again closed with a wound clip. Removal of the pellets initiated spontaneous withdrawal in these animals; this technique is a widely utilized and accepted model for eliciting withdrawal (3). Classic withdrawal symptoms, including diarrhea, wet dog shakes, tremors, lack of grooming, increased agitation, and up to a 5% reduction in body weight occurred in morphine-withdrawn mice. The morphine withdrawal period consisted of either 4, 8, or 24 h, and at the initiation of withdrawal, animals were administered 20 μg of LPS intraperitoneally (Sigma). At the conclusion of all procedures, animals were returned to their home cages, separated by experimental groups, and not housed more than 4 animals/cage. Following the withdrawal period, animals were sacrificed by CO2 asphyxiation, and spleens were harvested as described below. Prior to sacrifice, blood was collected via the retro-orbital plexus or cardiac puncture.
Preparation of Murine Macrophages-Primary peritoneal macrophages were aseptically collected by flushing the peritoneal cavity with PBS with a 10-ml syringe. Collected cells were pelleted by low speed centrifugation and maintained in RPMI 1640 (Invitrogen) supplemented with 10% FBS and 1% penicillin/streptomycin. Spleens were removed aseptically, and suspensions were prepared by forcing the tissue through a cell strainer (70 μm) with a sterile syringe plunger. Cell suspensions were maintained in culture dishes with RPMI 1640 but without FBS to facilitate macrophage attachment. Following attachment, cells were washed to remove contaminating cell populations. Cells were collected and counted and were plated in 24-well culture plates at a concentration of 2 × 10⁶ cells/ml in triplicate. Cells were then stimulated with LPS and incubated overnight at 37 °C, 5% CO2.
Cell Culture-The mouse alveolar macrophage cell line CRL2019 (American Type Culture Collection, Manassas, VA) was used for in vitro experiments. The murine peritoneal macrophage cell line J774.1 was also used for EMSA experiments. Cells were maintained in RPMI 1640 (CRL2019) or DMEM (J774.1) supplemented with 10% FBS and 1% penicillin/streptomycin. Cells were plated at a concentration of 0.5 × 10⁶ cells/ml in 10-cm culture plates. Cells were subjected to the in vitro withdrawal method described below in triplicate and incubated at 37 °C, 5% CO2.
In Vitro Withdrawal-To replicate conditions tested in vivo, either primary cells or the cell lines were plated as described above. Following plating, cells were treated with 100 nM morphine sulfate (NIDA, National Institutes of Health, Rockville, MD) once per day for three consecutive days. On the fourth day, the cells were washed 3-5 times with PBS to simulate withdrawal. The cells were incubated in serum-free RPMI 1640 medium for 24 h, followed by LPS (100 ng/ml) treatment for 6 h. In in vitro studies, before the end of LPS treatment, cells were also treated with corticosterone (300 ng/ml) for 30 min.
Corticosterone Radioimmunoassay-WT and MORKO mice were sacrificed following MW experiments, and plasma samples were collected and stored at −70 °C until analyzed. Plasma concentrations of corticosterone in WT and MORKO mice were analyzed using a ¹²⁵I-coupled, double antibody radioimmunoassay (ICN Biochemicals, Costa Mesa, CA) according to the manufacturer's instructions. The concentrations were expressed as ng/ml. RNA Extraction and RT-PCR Analysis-Cells (CRL2019 and J774.1) following treatments were collected in 1 ml of TRIzol reagent (Invitrogen), and total RNA was extracted as per the manufacturer's protocol. Total RNA was quantified and frozen at −80 °C until used. RNA was reverse transcribed using Moloney murine leukemia virus RT (New England Biolabs, Ipswich, MA) together with random hexamers (GeneLink, Hawthorne, NY). One hundred nanograms of cDNA was used for real-time PCR and gel-based PCR to study the expression profile of mouse IL-12p40 and β-actin or GAPDH. Sense and antisense oligonucleotide primers were designed for RT-PCR using DNA sequence information obtained from the Genome Database (National Center for Biotechnology Information) and were synthesized at the Bio-Medicine Genomic Center facility, University of Minnesota. The following specific primers were used for real-time as well as gel-based PCR: for IL-12p40, sense (5′-TCATCAGGGACATCATCAAAC-3′) and antisense (5′-TGAGGGAGAAGTAGGAATGGG-3′); for β-actin, sense (5′-ATATCGCTGCGCTGGTCGTC-3′) and antisense (5′-AGGATGGCGTGAGGGAGAGC-3′); and for GAPDH, sense (5′-CGACTTCAACAGCAACTCCCACTCT-3′) and antisense (5′-TGGGTGGTCCAGGGTTTCTTACTC-3′). The real-time PCR analysis was performed using SYBR Green master mix (Applied Biosystems, Carlsbad, CA) on a 7500 real-time PCR station (Applied Biosystems). Primers for β-actin/GAPDH were used as internal controls. The results for real-time PCR were calculated using the ΔΔCt method and were expressed as fold expression.
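The fold-expression calculation referenced above follows the standard 2^(−ΔΔCt) form. A minimal sketch with illustrative Ct values (not data from the study):

```python
# Sketch of the delta-delta-Ct calculation used for the real-time PCR
# results above (fold expression of a target relative to a control
# sample, normalized to an internal control such as beta-actin/GAPDH).
# Ct values below are illustrative, not from the study.
def fold_expression(ct_target_treated, ct_ref_treated,
                    ct_target_control, ct_ref_control):
    """2^-(ddCt): fold change of target vs. control, internal-control normalized."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Target Ct drops 2 cycles relative to the reference -> 4-fold induction
print(fold_expression(20.0, 15.0, 22.0, 15.0))  # 4.0
```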
Enzyme-linked Immunosorbent Assays-Quantikine ELISA kits were obtained from R&D Systems (Minneapolis, MN) and were performed according to the manufacturer's directions. Briefly, 50 μl of sample supernatant or serum was assayed in triplicate per experimental condition. Following the incubation period, plates were washed, and 100 μl of detection antibody conjugated to horseradish peroxidase was added to the wells. Finally, 100 μl of substrate solution was added, and absorbance was read at 450 nm using a plate reader (Flostar, BMG Labtech). Optical density measurements for the standards were used to generate a standard curve, and the concentration of the particular cytokine in each of the samples was extrapolated from this standard curve.
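The standard-curve step can be sketched as follows. The standards and OD450 readings are illustrative (deliberately collinear here), and a straight-line fit is a simplification; real kits are typically fitted with a 4-parameter logistic curve:

```python
import numpy as np

# Sketch of reading sample concentrations off an ELISA standard curve,
# as described above. All numbers are hypothetical.
standards = np.array([0.0, 62.5, 125.0, 250.0, 500.0])  # pg/ml (hypothetical)
od450 = np.array([0.00, 0.18, 0.36, 0.72, 1.44])        # absorbance (hypothetical)

slope, intercept = np.polyfit(od450, standards, 1)      # fit conc = a*OD + b

def concentration(od):
    """Interpolate a sample's concentration (pg/ml) from its OD450 reading."""
    return slope * od + intercept

print(round(float(concentration(0.72)), 1))
```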
EMSAs-Transcription factor interactions with DNA response elements were assessed using EMSAs. Nuclear extracts were prepared with an NXTract nuclear extraction kit (Sigma). Briefly, cells were resuspended in lysis buffer containing dithiothreitol (DTT) and protease inhibitors. Cells were then lysed using mild detergent and centrifuged to separate the cytoplasmic protein pool and pellet. The pellet was dissolved in extraction buffer to extract the nuclear protein pool. Activator Protein-1 (AP-1), NFκB, and C/EBP consensus oligonucleotides were synthesized and labeled with IR700 at the 5′-end, and both strands were duplexed at the Bio-Medical Genomics Center, University of Minnesota. The sequences for the probes were as follows: for probe AP-1, sense (IR700-5′-CGCTTGATGACTCAGCCGGAA-3′) and antisense (5′-TTCCGGCTGAGTCATCAAGCG-3′); for probe NFκB, sense (IR700-5′-AGTTGAGGGGACTTTCCCAGGC-3′) and antisense (5′-GCCTGGGAAAGTCCCCTCAACT-3′); and for probe C/EBP, sense (IR700-5′-TGCAGATTGCGCAATCTGCA-3′) and antisense (5′-TGCAGATTGCGCAATCTGCA-3′). Mutant probes for NFκB, AP-1, and C/EBP were also synthesized at the same facility (AP-1, 5′-CGCTTGATGACTTGGCCGGAA-3′; C/EBP, 5′-TGCAGAGACTAGTCTCTGCA-3′; NFκB, 5′-AGTTGAGGCGACTTTCCCAGG-3′). Underlined bases are mutated bases. Unlabeled probes were purchased from Promega and were used at a 70-fold excess of labeled probe. EMSA was performed using an Odyssey Infrared EMSA kit (LiCor Biosciences, Lincoln, NE) according to the manufacturer's instructions. Approximately 10 μg of nuclear extract was incubated with 50 fmol of labeled probe in binding buffer. The probe and nuclear proteins were incubated for 30 min at room temperature. DNA-protein complexes were resolved on 4.5% non-denaturing acrylamide gels. Gels were then scanned directly in an Odyssey scanner (LiCor Biosciences) to visualize DNA-protein interaction.
Transient Transfections-The hIL-12p40 promoter-luciferase reporter plasmid was kindly provided by Dr. A. Kumar (University of Ottawa), and the construct has been described previously (25). CRL2019 cells were transfected with plasmids using Effectene reagent (Qiagen, Valencia, CA) according to the manufacturer's instructions. Briefly, 10 μg of IL-12p40 promoter-firefly luciferase reporter plasmid and 0.5 μg of pRL-TK-Renilla reniformis luciferase reporter internal control plasmid (Promega, Madison, WI) were incubated for 10 min with Effectene reagent in standard RPMI medium to allow formation of complexes. Complexes were added directly to each well of a 6-well plate, and cells were maintained in 37 °C, 5% CO2 culture conditions. Transfections were performed on day 3 of the 5-day experiment. After treatment, cells were lysed, and luciferase activity was measured using a Dual-Luciferase reporter assay system (Promega) and a Turner Biosystems TD 20/20 luminometer according to the manufacturer's instructions. Data are presented as standardized luciferase activity, determined by the ratio between firefly luciferase and R. reniformis luciferase.
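The normalization described above is a simple ratio; a minimal sketch with illustrative luminometer counts (not data from the study):

```python
# Sketch of the dual-luciferase normalization: promoter-driven firefly
# counts are divided by the Renilla internal-control counts to correct
# for transfection efficiency. Counts below are illustrative.
def standardized_luciferase(firefly, renilla):
    """Firefly/Renilla ratio; a higher ratio means higher promoter activity."""
    return firefly / renilla

lps_only = standardized_luciferase(120_000, 4_000)      # 30.0
lps_after_mw = standardized_luciferase(54_000, 4_500)   # 12.0
print(lps_only, lps_after_mw)
```

Because each well carries its own Renilla control, wells with different transfection efficiencies remain directly comparable.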
Western Blot Analysis-Cells were treated with LPS (100 ng/ml, 6 h) after MW treatment and, at the end of LPS treatment, treated with corticosterone (300 ng/ml, 30 min). In experiments with PD98059, calyculin A, and/or wedelolactone, cells were treated with the different inhibitors for 1 h prior to LPS treatment. Total cell lysate was extracted for MAPK experiments, whereas proteins were isolated from nuclear and cytosolic cell extracts in glucocorticoid receptor translocation experiments. Fifty micrograms of total protein was loaded on a 7.5% denaturing gel, electrophoresed, and then transferred to a PVDF membrane. The antibodies used were glucocorticoid receptor (Millipore, Billerica, MA), β-actin, α-tubulin, and RNA polymerase II (Santa Cruz Biotechnology, Inc., Santa Cruz, CA), pERK1/2, ERK1/2, p-p38, p38, pSAPK-JNK, and SAPK-JNK (Cell Signaling, Danvers, MA). PD98059 and calyculin A were purchased from Cell Signaling (Danvers, MA), and wedelolactone was purchased from Sigma. β-Actin, α-tubulin, and RNA polymerase II antibodies were used as loading controls for cytosolic and nuclear extracts, respectively.
Statistics-Each cytokine supernatant protein concentration was expressed as the percentage change versus placebo ± S.D., and comparisons between group means were assessed using an unpaired Student's t test. Significance was defined as p < 0.05.
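The analysis above can be sketched as follows: express values as percentage change versus placebo and compare group means with an unpaired, pooled-variance t test. The data are illustrative, and 2.447 is the two-tailed critical t for df = 6 at alpha = 0.05:

```python
import math

# Illustrative cytokine readings (pg/ml); not data from the study.
placebo = [100.0, 110.0, 95.0, 105.0]
withdrawn = [60.0, 55.0, 70.0, 65.0]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Percentage change of each withdrawn value versus the placebo mean.
baseline = mean(placebo)
pct_change = [(v - baseline) / baseline * 100 for v in withdrawn]

# Unpaired Student's t test with pooled variance.
n1, n2 = len(placebo), len(withdrawn)
sp2 = ((n1 - 1) * sample_var(placebo) + (n2 - 1) * sample_var(withdrawn)) / (n1 + n2 - 2)
t = (mean(placebo) - mean(withdrawn)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

print(abs(t) > 2.447)  # True -> significant at p < 0.05 for df = 6
```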
Morphine Withdrawal Inhibits LPS-induced IL-12p40 Cytokine Expression, and This Effect Is Abrogated in MORKO Mice-Wild type (WT) mice were subjected to placebo withdrawal (sham withdrawal) or morphine withdrawal by the removal of either placebo or morphine pellet in the presence of LPS stimulation. Following the withdrawal periods, the animals were sacrificed, and serum samples were collected to assess IL-12p40 protein levels by ELISA. MW significantly decreased LPS-induced IL-12p40 production at 8 h postwithdrawal. It was interesting to observe that although in the MORKO mice MW did not result in significant inhibition of IL-12p40, the base-line IL-12p40 levels in the placebo group were significantly greater in the MORKO animals compared with the WT animals. The increased baseline levels in the MORKO group may be attributed to endogenous opioids that may be binding to MOR in the WT animals to modulate IL-12p40. These data further support the role of MOR in MW-induced inhibition of IL-12p40 (Fig. 1A). Peritoneal macrophages were also collected from these animals, cultured overnight, and restimulated ex vivo with LPS. Results demonstrated a significant decrease in IL-12p40 production in peritoneal macrophages harvested from morphine-withdrawn animals (24 h) when compared with placebo-withdrawn animals. When the same manipulations were performed in MORKO mice, there was no significant difference between any of the groups examined, indicating that morphine-induced decrements in IL-12p40 production involved the classical μ-opioid receptor (Fig. 1B).
IL-12p40 Message Levels in Primary Splenic Macrophage Cells Are Also Inhibited in Morphine-withdrawn Samples-Primary splenic macrophage cells were extracted from LPS-stimulated morphine-withdrawn and placebo-withdrawn WT and MORKO mice (n ≥ 3). The message level of IL-12p40 was measured using quantitative RT-PCR (Fig. 1C). Our results show that MW decreased LPS-induced message levels of IL-12p40 almost 5-fold, whereas this decrease was abolished in MORKO mice, indicating that an intact μ-opioid receptor is essential for withdrawal-mediated modulation of LPS-induced IL-12p40 production and that modulation of IL-12p40 following morphine withdrawal is transcriptionally regulated.
Morphine Withdrawal Stress Induces a Transient Increase in Plasma Corticosterone in WT Mice through a MOR-dependent Pathway-To determine if morphine withdrawal resulted in HPA activation and corticosterone release, WT and MORKO mice were morphine-withdrawn as described under "Experimental Procedures." Corticosterone levels in plasma were evaluated at varying time points. We show that MW resulted in an increase in corticosterone levels, which peaked at around 60 min following withdrawal and returned back to base line at around 240 min following withdrawal (Fig. 2). The MW-induced increase in plasma corticosterone levels was completely abolished in the MORKO mice (Fig. 2), once again implicating the role of MOR in MW-induced activation of the HPA axis.

FIGURE 1 (legend fragment). Cytokine-specific ELISAs were performed on the serum samples to assess levels of IL-12p40. Peritoneal macrophages were collected from WT mice and MORKO mice (B) that underwent 24 h of withdrawal in the presence of LPS (n ≥ 3/group). Cytokine-specific ELISAs were performed on the cell supernatants to assess the protein levels of IL-12p40. Data are presented as pg/ml ± S.D. (error bars) and are representative of at least three independent experiments. C, splenocytes extracted from WT and MORKO mice (n ≥ 3/group) were either placebo- or morphine-withdrawn in the presence or absence of LPS (100 ng/ml). Total RNA was extracted from splenocytes, reverse transcribed, and used for real-time PCR. Data are presented as fold expression of IL-12p40 ± S.E. (error bars) and are representative of three independent experiments. ***, p < 0.001.
Morphine Withdrawal Inhibits LPS-induced Consensus Sequence Binding to NFκB, C/EBP, and AP-1 in Splenocytes Derived from WT Mice but Not in MORKO Mice-Spleen-derived macrophages were harvested from WT and MORKO mice and used in the following experiments, and nuclear protein extracts from these samples were subjected to EMSA analysis. The transcription factors tested were selected because they were all known to have binding sites on the IL-12p40 promoter. LPS treatment resulted in a significant increase in the binding of NFκB, C/EBP, and AP-1 to their respective consensus binding sequences when compared with untreated samples. MW alone did not result in a significant change in consensus oligonucleotide binding; however, MW in the presence of LPS resulted in a significant decrease in LPS-induced binding to all three consensus oligonucleotides (Fig. 3A). MW-mediated inhibition of LPS-induced binding of transcription factors to the consensus oligonucleotides was completely abolished in macrophages harvested from MORKO splenocytes (Fig. 3B) because there were no differences in interaction between LPS and LPS plus morphine-withdrawn samples. These results implicate MW modulation of LPS-induced transcriptional regulation of IL-12p40 as the mechanism by which MW inhibits IL-12p40 protein levels.
Morphine Withdrawal Reduces Consensus Sequence Binding of LPS-induced Interaction of NFκB, C/EBP, and AP-1 in Macrophage Cells in Vitro, and Corticosterone Treatment Further Attenuates It-The consequences of withdrawal were also investigated in vitro in the murine peritoneal macrophage cell line J774.1 and the alveolar macrophage cell line CRL2019. Similar to in vivo MW, in vitro MW in the presence of LPS also resulted in a significant decrease in the binding of the transcription factors NFκB, C/EBP, and AP-1 to their respective consensus oligonucleotides in both CRL2019 cells (Fig. 4A) and J774.1 cells (Fig. 4B). To determine the contribution of corticosterone to MW-induced modulation of IL-12p40 transcription, cells were subjected to morphine withdrawal in the presence or absence of corticosterone. The corticosterone groups were treated with 300 ng/ml corticosterone (a concentration that was attained in mice following morphine withdrawal) 30 min prior to lysis. Although corticosterone treatment alone resulted in a decrease in LPS-induced binding of NFκB, C/EBP, and AP-1 to their respective consensus oligonucleotides, the effect was more dramatic when cells were subjected to both MW and corticosterone treatment. Interestingly, the inhibition observed with corticosterone treatment was more dramatic in the CRL2019 cells when compared with J774.1 cells. LPS-induced NFκB binding was dramatically and consistently higher in both cell lines when compared with AP-1 and C/EBP, underscoring its role in the transcriptional regulation of IL-12p40. MW in the presence of LPS and corticosterone reduced NFκB binding to basal levels. Although treatment with corticosterone alone did not dramatically decrease LPS-induced binding of AP-1 and C/EBP to their consensus oligonucleotides, the combination of MW and corticosterone treatment reduced binding of these transcription factors following LPS stimulation to base-line levels.
These data show that NFκB is a strong modulator of transcriptional regulation of IL-12p40 in addition to AP-1 and C/EBP.
To determine the specificity of the binding interactions, labeled probes were competed with either excess unlabeled consensus oligonucleotide or mutated consensus oligonucleotide (Fig. 4C). Cold probe competed with the labeled probe and decreased LPS-induced shift in the oligonucleotides. Mutated probes failed to induce a shift in the binding, validating the specificity of the binding to the consensus oligonucleotides.
FIGURE 4 (legend fragment). Cells were treated with corticosterone and/or LPS following morphine withdrawal. Nuclear extracts were incubated with IR700-labeled DNA probes for NFκB, AP-1, and C/EBP. DNA-protein complexes were run on 4.5% non-denaturing acrylamide gels and visualized on an Odyssey scanner (LiCor Biosciences). Each scanned figure is representative of three independent experiments. C, specificity of probe binding. Nuclear extracts from CRL2019 cells treated with LPS and/or corticosterone following morphine withdrawal were incubated with IR700-labeled consensus sequence probes only, with mutated probes, and with unlabeled cold probes to validate specific binding of probes. DNA-protein complexes were run on 4.5% non-denaturing acrylamide gels and visualized on an Odyssey scanner (LiCor Biosciences). Each scanned figure is representative of three independent experiments.

In Vitro Morphine Withdrawal in the Presence of Corticosterone Potentiates a Decrease in IL-12p40 Promoter Activity-In order to understand in more detail how MW inhibits LPS-induced IL-12p40 mRNA expression, a previously established in vitro MW model was used (24). In this set of experiments, CRL2019 macrophages were transfected with the IL-12p40 promoter-luciferase constructs and treated according to the in vitro withdrawal methodology described under "Experimental Procedures." Cell lysates were prepared, and luciferase activity, an indicator of IL-12p40 activation, was measured. As expected (Fig. 5A), LPS treatment resulted in a significant increase in IL-12p40 promoter activity when compared with no treatment controls. Cells subjected to MW showed a significant decrease in LPS-induced IL-12p40 promoter activity when compared with LPS alone treatment groups, although MW by itself had no effect on IL-12p40 promoter activity. Corticosterone-treated cells also showed a significant decrease in LPS-induced IL-12 promoter activity.
Interestingly, when cells were subjected to both MW and corticosterone treatment, an additive decrease in IL-12p40 promoter activity was observed. These results suggest that corticosterone may contribute to the decreased promoter activity but, more importantly, that MW can also act independently of corticosterone and inhibit LPS-induced IL-12p40 promoter activity. These results were interesting, given that in many cases, it was the subsequent production of corticosterone caused by either morphine treatment or withdrawal that resulted in decrements of several physiological functions.
We further investigated the role of corticosterone in MW-induced modulation of LPS-induced IL-12p40 message levels in CRL2019 cells using real-time PCR and gel-based PCR. LPS treatment resulted in a 550-fold induction in IL-12p40 message levels (Fig. 5B). In cells subjected to MW, LPS-induced IL-12p40 message levels were significantly decreased (46% reduction). Corticosterone treatment also resulted in a significant reduction in LPS-induced IL-12p40 mRNA levels (27%), but the effects were less than those observed with MW. However, when cells were subjected to both MW and corticosterone treatment, there was an additive decrease (82%) in IL-12p40 mRNA levels. Similar results were obtained with gel-based PCR (Fig. 5C). These data clearly indicate that MW-mediated modulation of LPS-induced IL-12p40 transcriptional regulation at the message level is independent of corticosterone, but the presence of corticosterone potentiates MW effects.
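The fold inductions and percent reductions above come from real-time PCR. As a hedged illustration, the standard 2^(-ΔΔCt) relative-quantification arithmetic behind such numbers can be sketched as follows; all Ct values here are hypothetical, not taken from the paper:

```python
# Hedged sketch of the standard 2^-ΔΔCt relative-quantification arithmetic
# used to report real-time PCR fold changes. All Ct values are hypothetical.
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold induction of a target gene, normalized to a housekeeping gene
    (e.g., beta-actin) and expressed relative to an untreated control."""
    d_ct_sample = ct_target - ct_ref              # normalize within the sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize within the control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# A target amplifying ~9.1 cycles earlier than in the control corresponds
# to roughly the 550-fold LPS induction quoted in the text.
print(round(fold_change(20.0, 18.0, 29.1, 18.0)))  # -> 549
```

A percent reduction such as the 46% quoted for MW then follows directly as 100 × (1 − fold_MW / fold_LPS).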
In Vivo Morphine Withdrawal Has No Effect on Glucocorticoid Receptor Translocation in WT Mice-Corticosterone, a glucocorticoid, binds to cytosolic glucocorticoid receptors.
Ligand-activated receptors then translocate to the nucleus in order to bind the glucocorticoid response element and modulate gene transcription. To further determine whether modulation of glucocorticoid receptor nuclear translocation is a possible mechanism for the potentiating effect of corticosterone, we investigated the effect of MW on glucocorticoid receptor translocation in morphine-withdrawn samples. WT mice were treated with LPS or saline for 8 h following the morphine withdrawal paradigm. Nuclear and cytosolic proteins were extracted from splenic macrophages, electrophoresed, and transferred to a PVDF membrane. The membranes were probed with antibody against glucocorticoid receptor. LPS did not induce glucocorticoid receptor translocation in either placebo-withdrawn or morphine-withdrawn splenocyte-derived macrophages (Fig. 6A), indicating that HPA activation with the release of glucocorticoids does not act at the transcriptional level to modulate MW-induced inhibition of LPS-induced IL-12p40 expression.

FIGURE 5. In vitro morphine withdrawal and corticosterone treatment decreased IL-12p40 promoter activity and message levels in LPS-stimulated CRL2019 macrophages. CRL2019 macrophages were transfected with the IL-12p40 promoter construct, subjected to the morphine withdrawal paradigm, and stimulated for 6 h with 100 ng/ml LPS. Promoter activity (A) is presented as standardized luciferase activity. Total RNA was extracted, and message levels were analyzed using real-time PCR (B) and gel-based PCR (C) for IL-12p40. The housekeeping gene β-actin was amplified as an internal control, and data were normalized to β-actin expression. Each treatment group was tested in triplicate, and results are representative of at least three independent experiments ± S.D. (error bars). #, p < 0.01 versus control samples; *, p < 0.01; **, p < 0.05; ***, p < 0.001. RLU, relative luciferase units.
When the effects of MW and corticosterone treatment were tested in an in vitro MW model, corticosterone treatment alone, as expected, resulted in significant translocation of the glucocorticoid receptor into the nucleus in CRL2019 cells (Fig. 6B). However, similar to the in vivo studies, nuclear translocation of the glucocorticoid receptor was not observed in morphine-withdrawn samples, and no additive effect on nuclear translocation was observed in MW samples that were treated with corticosterone. These data clearly indicate that the potentiating effects of corticosterone on morphine withdrawal were not mediated through modulation of gene transcription via glucocorticoid receptor translocation but may instead be mediated through a non-genomic pathway.
Morphine Withdrawal Results in LPS-induced Hyperactivation of ERK1/2 and Induction of SAPK/JNK Activation, Which Is Potentiated in the Presence of Corticosterone-Macrophage
activation by microbial components involves cascades of intracellular signaling pathways, including those that lead to activation of different MAPKs such as ERK1/2, p38, and SAPK/JNK. LPS-induced activation of SAPK/JNK was observed in the MW samples and corticosterone-treated samples. However, we did not observe augmented activation when cells were subjected to both MW and corticosterone (Fig. 7A), suggesting that the augmented suppression of IL-12p40 observed when both insults were present may not be mediated by SAPK/JNK activation. Interestingly, LPS treatment did not result in any significant activation of p38 MAPK, and the effect was not dramatically modulated by MW or corticosterone (Fig. 7A). We then determined whether ERK1/2 kinases play a role in MW-induced inhibition of LPS-induced IL-12p40 expression. Our results show that MW alone in the presence of LPS results in significant activation of ERK1/2 when compared with vehicle-withdrawn samples. Interestingly, a dramatic hyperactivation was observed in MW samples that were treated with corticosterone (Fig. 7B). From these data, we conclude that hyperactivation of ERK1/2 may be a potential mechanism by which MW and corticosterone negatively regulate LPS-induced IL-12p40 expression.
Treatment with ERK1/2 Inhibitor PD98059 Rescues Morphine Withdrawal-mediated Suppression of LPS-induced IL-12p40
Expression-To determine whether ERK1/2 hyperactivation is the mechanism underlying MW modulation of LPS-induced IL-12p40 expression, we tested whether the ERK1/2 inhibitor PD98059 (40 μM) would rescue MW-induced modulation of LPS-induced IL-12p40. CRL2019 cells were pretreated with either PD98059 or vehicle and subjected to MW alone or MW in the presence of corticosterone. As shown before, LPS treatment induced IL-12p40 protein levels in a time-dependent manner, and MW and MW plus corticosterone significantly inhibited LPS-induced IL-12p40 protein levels. However, when cells were pretreated with PD98059 (40 μM) and then subjected to MW or MW plus corticosterone, the MW- or MW plus corticosterone-induced inhibition was completely reversed at every time point tested (Fig. 7C). These data clearly indicate that MW- or MW plus corticosterone-induced suppression of LPS-induced IL-12p40 expression was mediated through a mechanism involving ERK1/2 activation.
Treatment with the Phosphatase PP2A Inhibitor Hyperactivates ERK1/2 Phosphorylation and Further Attenuates IL-12p40 Message Levels-To identify what regulates ERK1/2 hyperactivation, cells were treated with the PP2A phosphatase inhibitor calyculin A prior to LPS treatment following the MW paradigm. PP2A is a positive modulator of cellular IKK activity and interacts with the regulatory subunit IKKγ. Our data show that MW inhibits NF-κB activation, implying inhibition of IKK activation. We further demonstrate that calyculin A treatment led to greater ERK1/2 phosphorylation in the MW and corticosterone-treated samples, suggesting a role for PP2A in modulating ERK1/2 activation (Fig. 8A). Calyculin A treatment hyperactivated ERK1/2 phosphorylation beyond basal levels even in resting cells. In LPS- and corticosterone-treated MW samples, calyculin A phosphorylated ERK1/2 to such a high ceiling that the basal hyperactivation was masked and no longer discernible in these samples; for the same reason, basal ERK1/2 phosphorylation in samples not treated with calyculin A was not visible.

FIGURE 6. Morphine withdrawal did not increase glucocorticoid receptor (GR) translocation to the nucleus in WT mice or CRL2019 macrophage cells. Splenic macrophages from WT mice (A) (n ≥ 3/group) or CRL2019 cells (B) were treated with the morphine withdrawal paradigm as discussed under "Experimental Procedures." Nuclear and cytosolic extracts were prepared, and 50 μg of total protein was loaded onto a 7.5% denaturing gel. The membranes were probed for glucocorticoid receptor. Blots were reprobed with β-actin and RNA polymerase II antibodies as loading controls for the cytosolic and nuclear extracts, respectively. Each blot is representative of at least three independent experiments.
This overactivation of ERK1/2 kinase caused by calyculin A treatment was inhibited when the ERK1/2 inhibitor PD98059 was used, establishing that PP2A acts upstream of ERK1/2 activation (lane 16). To further establish that ERK1/2 hyperactivation was the potential mechanism involved in MW-induced IL-12p40 inhibition, we investigated the effects of calyculin A and the IKK inhibitor wedelolactone on IL-12p40 expression. CRL2019 cells were treated with the inhibitors, followed by LPS treatment for 3 h. Our data show that calyculin A treatment completely suppressed LPS-induced IL-12p40 message levels as early as 3 h of LPS treatment (Fig. 8, B and C). Although significant inhibition of LPS-induced IL-12p40 was observed in wedelolactone-treated samples, the effect was not as dramatic as with calyculin A treatment. The effects of both calyculin A and wedelolactone were inhibited by PD98059, indicating that the final common downstream signal is ERK1/2 activation. This confirms that LPS plus corticosterone treatment in the presence of MW inhibits the IKK-NF-κB signal transduction pathway, and this dysregulation of IKK activity means that less PP2A is available for repressing the MEK1/2-ERK1/2 pathway, therefore leading to hyperactivation of ERK1/2.
DISCUSSION
The aim of the current investigations was to delineate the mechanism underlying MW-induced inhibition of IL-12p40, a key cytokine that is produced by phagocytic macrophages for the regulation of antigen-presenting cells and effector lymphocytes during an immune response to pathogens. We demonstrate, using an in vivo model of MW, a significant decrease in LPS-induced IL-12p40 production in the plasma of MW animals. Ex vivo stimulation with LPS of peritoneal macrophages harvested from MW mice also showed a significant decrease in IL-12p40 production. MW-mediated modulation of LPS-induced IL-12p40 was completely abolished in the MORKO animals, implicating μ-opioid receptors in MW-induced changes.

FIGURE 7. Morphine withdrawal hyperactivates ERK1/2 with corticosterone treatment in LPS-stimulated CRL2019 cells, and ERK1/2 activation is inhibited by PD98059. CRL2019 cells were treated with PD98059 for 1 h before treatment with 100 ng/ml LPS (6 h) and/or 300 ng/ml corticosterone (30 min before the end of LPS treatment) following the morphine withdrawal paradigm as discussed under "Experimental Procedures." Total cell extract was prepared, and 50 μg of total protein was loaded onto a 10% denaturing gel. A, the membranes were probed for the phosphorylated and total isoforms of the p38 and SAPK/JNK MAPKs. The blots were reprobed with α-tubulin as a loading control. B, the membranes were probed with antibodies against the phosphorylated and total isoforms of ERK1/2 MAPK. The blots were reprobed with α-tubulin as a loading control. C, cells were treated with PD98059 (40 μM) for 1 h before treatment with LPS (100 ng/ml) for 0, 24, 48, and 72 h following morphine withdrawal. Corticosterone treatment was performed for 30 min at the end of LPS treatment, and the culture medium was collected for IL-12p40 ELISA. IL-12p40 protein is expressed in pg/ml for LPS alone, LPS with morphine withdrawal, LPS with corticosterone (CS), LPS plus corticosterone in MW samples, and LPS plus corticosterone with MW in the presence of PD98059 (PD); symbols are as defined in the figure. Each experiment is representative of at least three independent experiments ± S.D. (error bars). *, p < 0.05; **, p < 0.01 compared with LPS-alone samples.
To delineate the molecular mechanism underlying MW effects on LPS-induced IL-12p40, mRNA and promoter activity of IL-12p40 were investigated following MW. Our data show that LPS-induced IL-12p40 mRNA levels were significantly inhibited in peritoneal macrophages harvested from MW animals when compared with placebo withdrawal animals. Because the promoter of the IL-12p40 gene contains functional cis-acting sequences, including response elements for NF-κB, AP-1, and C/EBP (26-29), which are key transcription factors in inflammation, we used EMSA to understand how MW modulated IL-12p40 production at the transcriptional level. We demonstrate that MW resulted in a significant blunting of LPS-induced binding of the transcription factors NF-κB, C/EBP, and AP-1 to DNA consensus oligonucleotide sequences. These results suggest that MW may result in the modulation of signal transduction pathways that converge on LPS/TLR4 signaling to post-translationally modify transcription factors, leading to disruption in their nuclear translocation.
The role of stress in drug addiction is well established. The noradrenergic system and the HPA axis comprise two major adaptive mechanisms to stress. Like stressors, morphine withdrawal activates the HPA axis in rats (16), which results in the release of adrenocorticotropin from the pituitary with a subsequent increase in corticosterone secretion (16, 30, 31). Similarly, we show a significant but transient increase in corticosterone following MW in the WT animals. The role of corticosterone as an immunosuppressor has been well documented. At the molecular level, at least three mechanisms have been proposed to mediate glucocorticoid effects on immunity and inflammation. Glucocorticoid receptor-induced transactivation and transrepression are the "classical" mechanisms whereby ligand-activated glucocorticoid receptors bind to glucocorticoid response elements and either activate or repress transcription of the targeted gene. Transactivation of genes encoding inhibitory proteins and transrepression of inflammatory genes have been described. However, the majority of anti-inflammatory effects are due to so-called cross-talk, in which glucocorticoid-bound glucocorticoid receptors interact with transcription factor proteins, such as NF-κB and AP-1, interfering with their ability to activate transcription of target genes. To determine the role of corticosterone in MW-mediated modulation of LPS-induced IL-12p40, the macrophage cell lines CRL2019 and J774.1 were subjected to MW in the absence and presence of corticosterone. Interestingly, MW in the absence of corticosterone was able to significantly inhibit LPS-induced binding of the transcription factors NF-κB, C/EBP, and AP-1 to DNA consensus oligonucleotide sequences, suggesting a mechanism that is independent of corticosterone. However, a significant and more than additive effect is seen when corticosterone is present at the time of MW, suggesting a potentiating effect of corticosterone. Although glucocorticoids have also been shown to decrease IL-12p40 production through strong inhibition of NF-κB and AP-1, there are no reports of glucocorticoid response element sites in the murine IL-12p40 promoter (25, 27). However, mutation of these transcription factor binding sites in the promoter of human IL-12p40 abrogated luciferase activity, indicating the importance of these transcription factors in regulating the IL-12p40 gene following glucocorticoid treatment (25). As a complement to the classic genomic theory, a non-genomic mechanism has been proposed for the rapid action of glucocorticoids (32, 33). It is speculated that glucocorticoids might affect the expression of genes by modulating cell signaling pathways that are not activated via glucocorticoid receptor activation (i.e., non-genomic pathways). In our study, although corticosterone treatment resulted in a significant increase in glucocorticoid receptor nuclear translocation, MW produced no significant increase in glucocorticoid receptor translocation either in vivo or in vitro, suggesting that corticosterone may potentiate MW effects through a non-genomic pathway.

FIGURE 8. Treatment with the PP2A inhibitor calyculin A hyperactivates basal ERK1/2 phosphorylation and inhibits message levels of IL-12p40. A, CRL2019 cells were treated with the PP2A inhibitor calyculin A (10 nM) and the ERK1/2 inhibitor PD98059 (40 μM) for 1 h before treatment with LPS (100 ng/ml) for 6 h, followed by corticosterone treatment (300 ng/ml) for 30 min, following morphine withdrawal. Total protein was extracted, and 50 μg was loaded onto a 10% denaturing gel. The membranes were probed with antibodies against the phosphorylated and total isoforms of ERK1/2 MAPK. The blots were reprobed with α-tubulin antibody as a loading control. Each blot is representative of three independent experiments. B and C, CRL2019 cells were treated with the PP2A inhibitor calyculin A (10 nM), the IKK inhibitor wedelolactone (20 μM), and the ERK1/2 inhibitor PD98059 (40 μM) for 1 h, followed by LPS (100 ng/ml) treatment for 3 h. Following treatment, cells were washed, and total RNA was extracted. cDNA was used to analyze message levels of IL-12p40 using real-time PCR (B) and gel-based PCR (C). The housekeeping gene GAPDH was amplified as an internal control. Error bars, S.D.
Previous studies have reported that inhibition of p38, ERK, or JNK in primary monocytes results in enhanced binding of AP-1 and Sp1 to the IL-12p40 promoter (34). In contrast, p38 and ERK inhibition had essentially no effect on NF-κB binding. Contrary to that, in THP-1 cells, inhibition of p38, ERK, and JNK significantly enhanced LPS-induced binding of NF-κB and Sp1 to the IL-12p40 promoter, whereas AP-1 binding decreased (34). In another report, IL-12p40 production was regulated by NF-κB and AP-1 through the activation of upstream calcium and PI3K pathways (35). In macrophages, C/EBP is involved in the inducible expression of several genes that are important for inflammation and immunity, including IL-12p40. In our experiments, MW decreased LPS-induced C/EBP binding to its consensus sequence. The activity and expression of three C/EBP members (α, β, and δ) are regulated by a number of inflammatory signals, including LPS and a range of cytokines (36, 37). Interestingly, a recent study showed that triptolide-induced inhibition of IL-12p40 transcription was preceded by sustained phosphorylation of ERK1/2 and that blocking the activity of ERK1/2, but not those of p38 and JNK, can substantially rescue triptolide-induced inhibition of IL-12p40 production (38). Our data show that MW resulted in a dramatic and significant increase in LPS-induced ERK1/2 activation. A similar LPS-induced ERK1/2 activation was also observed with corticosterone treatment alone. Interestingly, LPS-induced ERK1/2 was significantly hyperactivated in the presence of MW and corticosterone. ERK is a family of serine/threonine protein kinases that have been functionally linked to addiction through phosphorylation of transcription factors leading to changes in target gene expression. ERK phosphorylates various substrates, including many enzymes and transcription factors.
Following activation, ERK dissociates from cytoplasmic anchors, such as MEK, and translocates to the nucleus, where it phosphorylates its nuclear substrates. Activated ERK does not always localize to the nucleus, however; several transcription factors are activated by ERK in the cytoplasm and translocate to the nucleus after phosphorylation (39, 40). MW has been shown to up-regulate ERK1/2 phosphorylation in neuronal cells. Naloxone-precipitated withdrawal in morphine-dependent rats, which produces an intense behavioral reaction (41), induced a robust stimulation of MEK1/2 in the cerebral cortex and corpus striatum. Other studies have shown that activation of the spinal ERK1/2 pathway may contribute to the development of morphine dependence and withdrawal and that the function of pERK1/2 is partly accomplished via CREB-dependent gene expression (42). Our present findings further implicate ERK1/2 activation as a potential mechanism in MW-induced IL-12p40 inhibition. We show that the MW-induced increase in ERK activity was due to an enhancement in the phosphorylation state of the enzyme, without changes in total ERK immunoreactivity. This suggests that the effects of MW that may be mediated by ERK1/2 are likely exerted through the activation (via phosphorylation) of ERKs. This conclusion was further supported by the use of PD98059, an inhibitor of MEK1/2, the upstream signaling proteins of ERK1/2, which rescued MW-induced inhibition of IL-12p40 expression. LPS stimulation in macrophages triggers cascades of intracellular signaling events, including those that lead to activation of IKK. IKK activation leads to activation of the phosphatase PP2A, thereby dephosphorylating ERK1/2. This was further supported by real-time PCR data in which calyculin A hyperactivated ERK1/2 phosphorylation but at the same time suppressed IL-12p40 message levels as early as 3 h following LPS treatment. Wedelolactone, the inhibitor of IKK that is upstream of PP2A, also suppressed IL-12p40 message levels, albeit to a lesser degree when compared with calyculin A. This inhibition of IL-12p40 expression was reversed when cells were treated with the ERK1/2 inhibitor PD98059. Therefore, we speculate that MW inhibits LPS-induced IKK activity, thereby inactivating PP2A activity. This leads to persistent activation of MEK1/2, leading to hyperactivation of ERK1/2 and suppression of IL-12p40 expression (Fig. 9).

FIGURE 9. Proposed mechanism for negative regulation of IL-12p40 expression by hyperactivation of ERK1/2 in the presence of LPS, corticosterone, and MW. Key mediators downstream of the LPS signaling pathway lead to expression of IL-12p40 (solid lines). MW in the presence of corticosterone inhibits LPS-induced IKK activation and the subsequent phosphorylation of IκBα, therefore decreasing translocation of p65 and p50. At the same time, inhibited IKK inactivates PP2A recruitment and its phosphatase activity, thereby promoting hyperactivation of MEK1/2 and subsequently of ERK1/2 (dotted lines). Inhibition of PP2A activity hyperactivates ERK1/2 phosphorylation and down-regulates IL-12p40 message levels. GC, glucocorticoid.
The unique role of IL-12p40 in the regulation of IL-12 suggests that it is critically involved in the immunopathogenesis of Th1-mediated inflammatory and autoimmune disorders. Our investigations into the mechanism underlying the inhibitory effects of MW on IL-12p40 production have revealed the role of different transcription factors (in particular NF-κB, AP-1, and C/EBP) in regulating IL-12 production, which is in turn regulated by the ERK1/2 pathway via a feedback mechanism that keeps the system in check. These results suggest that MW can disrupt normal immune function and may lead to enhanced susceptibility to infection. Understanding the underlying mechanisms may help clarify host responses and cellular immune function in HIV-infected drug abusers and in relapsed patients undergoing withdrawal.
Bandwidth-Controllable Third-Order Band Pass Filter Using Substrate-Integrated Full- and Semi-Circular Cavities
The article presents a novel circular substrate-integrated waveguide (SIW) bandpass filter (BPF) with controllable bandwidth. The proposed BPF was configured using two microstrip feed lines, semi-circular SIW cavities, capacitive slots, and inductive vias. The circular cavity was divided into two halves, and the two copies were cascaded; the resulting bisected and cascaded structures were then connected back-to-back. Finally, by introducing two inductive vias into the central circular cavity, a transmission zero was generated. To examine the design concept, a coupling matrix was generated. To demonstrate the theory, a third-order BPF was realized, fabricated, and experimentally validated. The BPF prototype features a wide passband of 8.7%, a low insertion loss of 1.1 dB, and a stopband of 1.5 f0 with a rejection level better than 20 dB, which makes it a potential candidate for microwave sensing and communication applications.
Introduction
Remarkable advancements in wireless communication technology have significantly influenced the development of bandpass filters (BPFs) featuring low fabrication cost, good frequency selectivity, and broadband suppression. Waveguide structures are commonly employed for base-station filter designs due to their high power-handling capacity, high Q-factor, and low loss. A conventional waveguide, however, is expensive and difficult to integrate with planar microwave components. Substrate-integrated waveguides (SIWs) have gained significant attention in recent times due to their numerous advantages, such as low cost, light weight, low insertion loss, easy fabrication, and compatibility with various planar circuits. An SIW is a type of transmission line that effectively implements a rectangular waveguide in planar form: two rows of conducting cylinders embedded in a dielectric substrate connect two parallel metal plates. In this way, the non-planar rectangular waveguide is converted into a planar structure suitable for standard planar processing approaches, such as conventional printed circuit board (PCB) or low-temperature co-fired ceramic (LTCC) technology. The field distribution and dispersion properties of propagation in SIW structures are similar to those in traditional rectangular waveguides. The benefits of traditional metallic waveguides, such as a high quality factor and power-handling capacity with self-consistent electrical shielding, are also maintained in SIW structures. The ability of SIW technology to integrate passive devices, active devices, and antennas on a single substrate is its most significant advantage; multiple chip sets can also be mounted on a single substrate. Losses and parasitics are reduced because no transitions between separately fabricated components are required.
These characteristics make them ideal for meeting the high-performance demands placed on filtering structures [1][2][3].
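Since an SIW behaves like a dielectric-filled rectangular waveguide, its dominant-mode behavior is often estimated with the widely used empirical equivalent-width rule w_eff = w − d²/(0.95 p), where w is the via-row spacing, d the via diameter, and p the via pitch. The short sketch below illustrates this; the geometry and permittivity values are hypothetical, not taken from this design:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def siw_effective_width(w, d, p):
    """Commonly used empirical equivalent width of an SIW:
    w_eff = w - d**2 / (0.95 * p), with via diameter d and pitch p."""
    return w - d**2 / (0.95 * p)

def te10_cutoff(w_eff, eps_r):
    """TE10 cutoff frequency of the equivalent dielectric-filled
    rectangular waveguide of width w_eff."""
    return C0 / (2.0 * w_eff * math.sqrt(eps_r))

# Hypothetical geometry: 12 mm via-row spacing, 0.8 mm vias,
# 1.5 mm pitch, substrate permittivity eps_r = 2.2.
w_eff = siw_effective_width(12e-3, 0.8e-3, 1.5e-3)
print(f"{te10_cutoff(w_eff, 2.2) / 1e9:.2f} GHz")  # -> 8.75 GHz
```

The same equivalence is what allows classical rectangular-waveguide filter synthesis to be reused for SIW cavities.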
Numerous types of BPFs have been developed utilizing SIW technology. A three-pole BPF with adjustable transmission zeros was designed using a dual-mode circular SIW cavity in [4]. Based on SIW technology, a triple-mode BPF was designed in [5]. In [6], microwave low-phase-noise oscillators based on SIW BPF technology were designed; a perturbing via hole was employed in the SIW circular cavity to enhance the filter selectivity. In [7], single- and dual-band bandpass filters were designed based on circular SIW cavities. Folded circular substrate-integrated waveguide cavity (FCSIWC) filters were analyzed and implemented in [8]. In [9], box-like BPFs with a broad stopband response were proposed using dual-mode SIW cavities. In [10], a balanced filter was implemented using a multi-layer dual-mode SIW. In [11], a broad-stopband SIW filter was implemented using a modified mode-suppression approach. In [12], high-order BPFs were developed using perturbed SIW cavities. In [13], half-mode SIW cavities were employed to build dual-mode miniaturized BPFs.
In [14], higher-order modes of substrate-integrated waveguide (SIW) bandpass filters were suppressed using a multi-layer method. The demonstrated apertures engraved on the middle metal layer allowed vertical coupling of rectangular SIW resonators on multi-layer substrates through magnetic and/or electric coupling. In [15], an SIW filter that can be switched on and off was demonstrated, with its operational frequency range switchable between the S-band and the X-band. In [16], a bandpass filter with a broad upper stopband and a weaker electric field was created using the fundamental mode of post-loaded SIW resonators. The SIW coaxial cavity was used to construct both narrow-band and wide-band BPFs, as shown in [17]. The investigation and realization of quarter-mode SIW (QMSIW) filters were reported in [18]. Triple-mode bandpass filters utilizing an SIW square cavity loaded with complementary split-ring resonators (CSRRs) were developed in [19]. In [20], a compact BPF with a broad stopband response was achieved by combining microstrip and SIW technologies. Half-mode SIW (HMSIW) cavities were utilized in the design of the bandpass filters in [21].
In [22], a bandpass filter with a wide upper-stopband response was created using multi-layered SIWs. In [23], a compact BPF with a wide stopband response was achieved by combining QMSIW and eighth-mode SIW (EMSIW) cavities. A wideband BPF was implemented in [24] by utilizing dual-mode SIW radial cavities. In [25], the proposed filter had three transmission zeros that could be independently tuned; this was achieved by implementing mixed coupling between the source and load. Another approach to generating additional transmission zeros involves cascading two adjacent dual-mode cavities.
In [26], it was demonstrated that an HMSIW cavity can be employed to construct compact planar bandpass filters. These fourth-order filters have a footprint of 0.159 λg², an FBW of 31.8%, and one or two transmission zeros. Third-order bandpass filters, as described in [27], were realized using a T-septum HMSIW cavity; this filter has three transmission zeros, a broad stopband, and excellent selectivity. In [28], an SIW cavity with perturbing vias and a CSRR was used to make a bandpass filter for sub-6 GHz applications. This filter offers excellent selectivity, an insertion loss of 2.9 dB, and an FBW of 1.16%. An X-band bandpass filter based on a dual-mode SIW cavity is described in [29]; it operates at 12 GHz and features two transmission zeros, at 10.75 GHz and 13.3 GHz, with a fractional bandwidth of 11%. In [30], a bandpass filter based on a double-layer HMSIW resonator is presented; to achieve its broad stopband response, this filter employs a defected microstrip structure. The dual-band bandpass filter described in [31] uses a rectangular SIW cavity with a D-shaped ring resonator and covers 2.66 to 3.54 GHz, within the sub-6 GHz range. Asymmetric SIW filter responses are given in [32]; this filter also uses a non-resonant node and positive coupling to enhance its selectivity. In [33], the construction of a bandpass filter using SIW cavities is explained; the SIW cavity and interdigital resonators function together to accomplish harmonic suppression. A broadband bandpass filter based on miniaturized HMSIW cavities was developed in [34].
In that filter, a three-stage stepped-impedance resonator was used to provide transmission zeros and a practical stopband response. As illustrated in [35], a narrowband bandpass filter may be built using an inline HMSIW cavity; it improves selectivity by producing quasi-elliptic responses through interdigital slots, which introduce transmission zeros. In [36], a 6 GHz bandpass filter based on a rectangular SIW cavity was created for 5G networks; it uses D-shaped resonators to provide compactness and a broad stopband response. In [37], the authors detailed a bandpass filter using a rectangular SIW cavity loaded with an array of mutually coupled split-ring resonators; its stopband extended from 6.4 to 7.8 GHz, its fractional bandwidth was 30%, and its insertion loss was 1.5 dB. In [38], a SIW cavity was loaded with a combination of right- and left-handed transmission lines and complementary split-ring resonators to make a dual-frequency bandpass filter with bandwidths of 3% at its 5 GHz resonance and 4.2% at its 7.5 GHz resonance. In [39], a tunable bandpass filter using a SIW hexagonal resonator was reported, with an insertion loss of 2.01 dB and a fractional bandwidth of 2.92%. Two resonators and three inverters were combined to create a SIW bandpass filter in [40]. The design of a bandpass filter based on a SIW cavity with iris resonators was elaborated in [41]; that filter operates at 9.77 GHz with a fractional bandwidth of 12.17% and an insertion loss of 1.19 dB.
To obtain a wideband response, the study in [42] demonstrates a bandpass filter using a SIW cavity that incorporates a defected ground structure; the filter has a reduced footprint, a passband from 3.0 GHz to 11.0 GHz, an insertion loss of 1.2 dB, and a notched band. In [43], a narrow bandpass filter was achieved using a rectangular SIW cavity with inductive posts on its upper surface, exhibiting a center frequency of 12.2 GHz, an insertion loss of 1.22 dB, and a fractional bandwidth of 1.475%. The authors of [44] constructed a dual-mode bandpass filter using a SIW cavity loaded with cross-shaped slots; it exhibited a fractional bandwidth of 9.1% at 7.5 GHz and two transmission zeros, at 12.5 GHz and 15 GHz. The authors of [45] used a rectangular SIW cavity and stepped-impedance resonators to create a compact bandpass filter operating at 4.8 GHz with four transmission zeros, a fractional bandwidth of 13%, and a footprint of 0.3 λg². A dual-mode bandpass filter at 5.8 GHz was created using a SIW cavity loaded with a circular patch slot [46]. The authors of [47] utilized a SIW cavity in a non-resonant mode to create a bandpass filter offering a high degree of design flexibility. In [48], a SIW cavity was loaded with two rectangular complementary split-ring resonators; that filter's 3 dB bandwidth was 320 MHz, its insertion loss was 2.4 dB, and its transmission zero was at 5.9 GHz.
Beyond bandpass filtering, substrate-integrated waveguides (SIWs) can also serve as microwave sensors. The sensing mechanism exploits parameter-induced changes in the bandpass filter's frequency response [49]. A SIW bandpass filter can be used in various sensing tasks involving temperature, humidity, pressure, and chemical detection. By observing the shifts in frequency response caused by changes in temperature [50], a SIW bandpass filter can be used as a temperature sensor. Because of its sensitivity to moisture-induced changes in the dielectric characteristics of the substrate, a SIW bandpass filter can function as a humidity sensor [51]. It can also work as a pressure sensor, since pressure changes are translated into variations in the substrate's dielectric properties [52]. Chemical sensing applications require incorporating chemically sensitive materials into the substrate or resonant structures [53].
Despite the aforementioned developments, the reported circuits exhibit high insertion loss and narrow fractional bandwidth. In fact, there are still significant challenges to be addressed in terms of the development of SIW-based third-order bandpass filters with adjustable bandwidth and low insertion loss.
In this paper, a novel circular substrate-integrated waveguide (SIW) bandpass filter (BPF) with controllable bandwidth was developed. The working theory of the filter was derived from the field distributions, coupling, and full-wave simulations of the proposed BPF filter topology. A third-order BPF was realized, fabricated, and experimentally validated to demonstrate the theory. The filter exhibits the following key features:
1. A fractional bandwidth of 8.7%, which compares favorably with previously reported SIW BPFs;
2. A stopband response up to 1.5f0 with a rejection level of 20 dB.
Design and Analysis of the Third-Order BPF

Configuration and Working Principle
The architecture of the proposed third-order bandpass filter is depicted in Figure 1. The proposed BPF was configured using two microstrip feed lines, semi-circular SIW cavities, capacitive slots, and inductive vias. The evolution of the proposed model is depicted in Figure 2, which shows six successive transformative phases.
Initially, a full-mode circular SIW cavity was created at frequency f_c, such that it acted as a dual-mode resonator. The resonant frequency of the degenerate TM110 modes was calculated using the formula [18]:

f_TM110 = c · χ11 / (2πR √(μr εr)),

where c is the speed of light in a vacuum; R is the equivalent radius of the substrate-integrated circular cavity (SICC); μr and εr denote the relative permeability and relative permittivity of the substrate, respectively; and χ11 ≈ 3.832 is the first root of the Bessel function J1. Subsequently, the cavity is bisected into two halves along the line of zero electric or magnetic field (the field null). The operating frequency of the cavity does not change significantly upon bisection, as the bisecting line does not disturb the electric field distribution of the cavity. The resulting halves are then cascaded in reverse order, as illustrated in Step 2, and two 50 Ω microstrip lines feed the input and output of the filter. In Step 3, the cascaded structure enables the modes to couple, owing to the overlapping cavities at the junction, resulting in a two-pole filter. When slot lines are placed in the coupling window (Step 4), they alter the electric field distribution and increase the coupling between the two adjacent cavities; this effect is due to the increased electric field intensity in the slot-line region. The simulated S-parameters of the circuit in Steps 3 and 4 are shown in Figure 3. To create a three-pole filter, the design obtained in Step 4 is cascaded back-to-back, and the vias are removed from the center of the newly formed structure (Step 5) to allow wave propagation. Finally, the introduction of two inductive vias into the central circular cavity generates a transmission zero in the upper stopband. The diameters d of the vias and their spacings S are chosen by applying the criteria S/λ ≤ 0.1 and S ≤ 2d, which keeps the radiation losses reasonably low. The coupling topology of the proposed circular SIW filter is illustrated in Figure 4.
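As a numerical sanity check, the standard TM110 resonance of a circular cavity, f = c·χ11/(2πR√(μr·εr)) with χ11 ≈ 3.8317 (first root of the Bessel function J1), can be evaluated directly. The equivalent radius used below is an illustrative assumption, not a dimension quoted in the paper; it is chosen so the resonance lands near the filter's 5.6 GHz passband on the RO4003 substrate (εr = 3.55).

```python
import math

def f_tm110(radius_m: float, eps_r: float, mu_r: float = 1.0) -> float:
    """Resonant frequency (Hz) of the degenerate TM110 modes of a circular
    cavity: f = c * chi_11 / (2 * pi * R * sqrt(mu_r * eps_r))."""
    c = 299_792_458.0   # speed of light in vacuum (m/s)
    chi_11 = 3.8317     # first zero of the Bessel function J1
    return c * chi_11 / (2 * math.pi * radius_m * math.sqrt(mu_r * eps_r))

# Assumed equivalent radius of ~17.2 mm on RO4003 (eps_r = 3.55):
print(f_tm110(0.0172, 3.55) / 1e9)  # -> ~5.64 (GHz)
```

As expected, the resonance scales inversely with both the cavity radius and the square root of the permittivity, which is why the bisected half-cavities keep roughly the same operating frequency.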
Figure 5 shows the E-field distribution of the proposed model. The electromagnetic (EM) simulator was set up with an incident power of one watt. The power density, that is, the rate of energy transfer per unit area, is the product of the electric field strength (E) and the magnetic field strength (H). Inside the full-mode circular cavity, where the magnetic field strength is relatively low, the highest electric field is computed to be 15 kV/m, as depicted in Figure 5. A parametric study was carried out to determine the impact of the slot dimensions on the filter performance; it involved varying the length and width of the slot and observing the resulting changes in the S-parameters of the filter over a range of frequencies. As shown in Figures 6 and 7, increasing the slot length (v1) enlarges the filter bandwidth: the 3 dB fractional bandwidth increased by 27% when v1 varied from 2 mm to 4 mm. The results shown in Figures 8 and 9 indicate that increasing the slot width also enhances the filter bandwidth: altering v2 from 0.2 mm to 1 mm increases the 3 dB fractional bandwidth by 15.6%. Based on the trade-off between return loss and bandwidth, the filter's optimal slot length and width were determined to be 3.5 mm and 1 mm, respectively.
The source and load external quality factors, QS and QL, and the coupling coefficient K(i,i+1) can be calculated as [54-56]:

QS = QL = f0 / Δf3dB,   K(i,i+1) = (f_m2² − f_m1²) / (f_m2² + f_m1²),

where f0 stands for the resonant frequency, Δf3dB is the 3 dB bandwidth, and f_m1 and f_m2 represent the two split-mode frequencies.
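A minimal numerical illustration of these two standard relations, Q = f0/Δf3dB and K = (f_m2² − f_m1²)/(f_m2² + f_m1²); all frequencies below are made-up examples, not values extracted from the paper's figures.

```python
def external_q(f0: float, bw_3db: float) -> float:
    # External quality factor from resonant frequency and 3 dB bandwidth.
    return f0 / bw_3db

def coupling_coefficient(f_m1: float, f_m2: float) -> float:
    # Inter-resonator coupling extracted from the two split-mode frequencies.
    return (f_m2**2 - f_m1**2) / (f_m2**2 + f_m1**2)

# Illustrative numbers: a 5.6 GHz resonance with a 0.336 GHz 3 dB bandwidth,
# and a mode splitting of 5.54 / 5.66 GHz.
print(external_q(5.6e9, 0.336e9))            # -> ~16.7
print(coupling_coefficient(5.54e9, 5.66e9))  # -> ~0.0214
```

Note that a wider mode split gives a larger K, which is the behavior traced out in the parametric curves of Figures 10-12.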
Applying eigenmode analysis, the unloaded quality factor Qu is computed as 256. For verification, a third-order BPF was synthesized. Figures 10-12 show the variation of the coupling coefficients with respect to v1, v2, and d3, respectively; these parameters control the main coupling coefficients (K12 and K23) and the cross-coupling coefficient (K13). As shown in Figure 10, when v1 increases from 2.5 mm to 4 mm, K12 shifts from 0.018 to 0.024. When v2 changes from 0.6 mm to 1.2 mm, K23 changes from 0.026 to 0.016. Finally, the cross-coupling coefficient K13 decreases from 0.07 to 0.05 as d3 increases from 0.2 mm to 0.6 mm.
A third-order BPF was synthesized for testing purposes. The design exhibits a relative bandwidth of 6%, a return loss of 18.2 dB, a transmission zero at 6.2 GHz, and a center frequency of 5.6 GHz. The coupling coefficients, quality factors, and coupling matrix were computed using the synthesis approach explained in [54]; the self-couplings are the non-zero diagonal elements m_ii of the resulting coupling matrix. Figure 13 depicts the calculated S-parameters of the proposed BPF, obtained from both the coupling matrix and EM simulation. Three types of losses (radiation, dielectric, and conductor) account for the overall loss of the proposed BPF; the estimated losses of the third-order BPF are shown in Figure 14. The finite conductivity of the top and bottom metal plates and of the metallic via holes causes conductor loss in the SIW, the loss tangent tan δ of the substrate accounts for the dielectric loss, and radiation loss is caused by electromagnetic power leakage through the gaps between adjacent vias. As shown in Figure 14, the total loss and the sum of the dielectric and radiation losses are less than 0.25 and 0.2, respectively, while the radiation loss alone is below 0.08. Consequently, the proposed BPF exhibits a minimal insertion loss of 1.1 dB.
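As a rough cross-check of this loss budget, Cohn's classical formula estimates the midband insertion loss of a narrowband coupled-resonator filter from its unloaded Q: IL ≈ 4.343 · Σgi / (FBW · Qu) dB. The Chebyshev low-pass prototype g-values below (0.1 dB ripple, third order) are an assumption, not taken from the paper; the FBW (8.7%) and Qu (256) are from the text.

```python
def midband_insertion_loss_db(g: list, fbw: float, qu: float) -> float:
    """Cohn's estimate of midband insertion loss (dB) for a coupled-resonator
    bandpass filter with uniform unloaded quality factor qu."""
    return 4.343 * sum(g) / (fbw * qu)

# Assumed 0.1 dB-ripple Chebyshev prototype values for a third-order filter:
g = [1.0316, 1.1474, 1.0316]
print(midband_insertion_loss_db(g, 0.087, 256))  # -> ~0.63 (dB)
```

The estimate covers only dissipation in the resonators, so landing below the reported 1.1 dB (which also includes radiation, feed, and mismatch contributions) is plausible under these assumptions.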
Fabrication, Measurement, and Results
The proposed third-order BPF based on the SICC is fabricated on a Rogers RO4003 substrate with a relative dielectric constant of 3.55, a thickness h = 0.8 mm, and a loss tangent tan δ = 0.0027. Figure 15 displays a photograph of the fabricated filter prototype. The simulated and measured S-parameters of the circuit are shown in Figure 16. The measurement results indicate a return loss better than 18 dB, an insertion loss of 1.1 dB, and a fractional bandwidth of 8.7%. The filter's transmission zero is located at 6.2 GHz. Table 1 presents a comparison between the proposed BPF and state-of-the-art BPFs reported in the literature. The salient features of the proposed third-order bandpass filter are as follows:
1. A low insertion loss of 1.1 dB;
2. A stopband of 1.5f0 with a rejection level of 20 dB;
3. A fractional bandwidth of 8.7%.
Conclusions
In this paper, a third-order circular substrate-integrated waveguide (SIW) bandpass filter (BPF) was presented. The filter design achieves a three-pole response with one transmission zero in the upper stopband. The experimental results align well with the simulations, confirming the effectiveness of the proposed design concept. In addition, the operating principle, field distribution, coupling matrix, and loss computation were all discussed. Finally, a third-order BPF was fabricated and experimentally validated. The prototype features a low insertion loss of 1.1 dB, a fractional bandwidth of 8.7%, and a stopband up to 1.5f0 with a rejection level better than 20 dB.
Pro-inflammatory cytokines and bone fractures in CKD patients. An exploratory single centre study
Background Pro-inflammatory cytokines play a key role in bone remodeling. Inflammation is highly prevalent in CKD-5D patients, but the relationship between pro-inflammatory cytokines and fractures in CKD-5D patients is unclear. We studied the relationship between inflammatory cytokines and incident bone fractures in a cohort of CKD-5D patients. Methods In 100 CKD-5D patients (66 on HD, 34 on CAPD; males: 63, females: 37; mean age: 61 ± 15; median dialysis vintage: 43 months) belonging to a single renal unit, we measured at enrolment bone metabolic parameters (intact PTH, bone and total alkaline phosphatase, calcium, phosphate) and inflammatory cytokines (TNF-α, IL-6, CRP). Patients were followed up until the first non-traumatic fracture. Results During follow-up (median: 74 months; range 0.5-84.0) 18 patients experienced fractures. On categorical analysis these patients, compared to those without fractures, had significantly higher intact PTH (median: 319 pg/ml, IQ range: 95-741 vs 135 pg/ml, IQ: 53-346; p = 0.04) and TNF-α levels (median: 12 pg/ml, IQ: 6.4-13.4 vs 7.8 pg/ml, IQ: 4.6-11; p = 0.02). Both TNF-α (HR for 5 pg/ml increase in TNF-α: 1.62, 95% CI: 1.05-2.50; p = 0.03) and intact PTH (HR for 100 pg/ml increase in PTH: 1.15, 95% CI: 1.04-1.27; p = 0.005) predicted bone fractures on univariate Cox regression analysis. In restricted (bivariate) models adjusting for previous fractures, age, sex and other risk factors, both PTH and TNF-α maintained an independent association with incident fractures. Conclusions In our bivariate analyses TNF-α was significantly associated with incident fractures. Analyses in larger cohorts with an adequate number of events are needed to firmly establish the TNF-α-fracture link that emerged in the present study.
Background
Bone mineral disorders are pervasive in patients with kidney failure on dialysis (CKD stage 5D) and the risk of bone fractures is quadrupled in this population [1,2]. Deranged parathyroid function is currently considered the fundamental alteration responsible for bone disease in CKD [3]. Past exposure to steroids applied to treat immunological renal diseases or administered in previous kidney transplants represents an additional major factor in the pathogenesis of bone fractures in these patients [1]. Apart from parathyroid hormone (PTH) and other major hormonal regulators of bone metabolism, during the last two decades pro-inflammatory cytokines have fully emerged as major players in bone remodeling [4]. In particular, Tumor Necrosis Factor Alpha (TNF-α), a cytokine endowed with a large repertoire of biological effects, is one of the most powerful inducers of the receptor activator of NF-kB ligand (RANKL), i.e. a key trigger of osteoclast activation and bone resorption [4][5][6][7]. High cytokine levels may contribute to an increased risk of osteoporosis and bone fractures in chronic inflammatory diseases including COPD [8] and inflammatory bowel disease [9], and the relevance of the RANKL pathway in bone health is indicated by the efficacy of drugs impinging upon RANKL in the treatment of osteoporosis in elderly women [10], including patients in CKD stages 2-4 [11]. Inflammation is a feature of advanced CKD [12][13][14][15], but the relationship between pro-inflammatory cytokines and fractures in CKD-5D patients is still unclear. To explore the hypothesis that inflammation may contribute to the high risk of bone fracture in CKD, we tested the relationship of inflammatory markers and other bone metabolic parameters with incident bone fractures in a cohort of stable CKD-5D patients without inter-current clinical infectious processes.
Study population
The study protocol was approved by the Ethics Committee of the Azienda Ospedaliera "Bianchi-Melacrino-Morelli" di Reggio Calabria. All patients provided informed consent.
All prevalent patients in January 1995 and incident patients in 1996-1997 [66 on haemodialysis (HD) and 34 on continuous ambulatory peritoneal dialysis (CAPD), 63 males and 37 females] belonging to a single renal Unit, who had been on regular dialysis treatment (RDT) for at least 6 months and were without inter-current clinical problems requiring hospitalization, were recruited for the study. Patients' mean age was 61 ± 15 years and the median duration of dialysis treatment was 43 months (interquartile range 18-99 months). Further clinical details about the study population are given in Table 1. Hemodialysis patients were being treated thrice weekly with standard bicarbonate dialysis (Na 138, HCO3 35, K 1.5, Ca 1.25, Mg 0.75 mmol/L) and 1.1-1.7 m² dialysers (89% cuprophan, 11% semi-synthetic membranes). The average fractional urea clearance (Kt/V) in these patients was 1.28 ± 0.31. Dialysis fluid was produced by a reverse osmosis system and aluminium never exceeded 5 μg/L, which is well below the safety limit recommended by the European Council. Patients on CAPD were all on a 4 exchanges/day schedule with standard dialysis bags containing 1.75 mmol/L calcium. The average weekly Kt/V in these patients was 1.67 ± 0.30. Sixteen patients were diabetic and 48 were habitual smokers. Forty-nine patients were on treatment with erythropoietin and 60 were taking various anti-hypertensive drugs (42 on mono-therapy with ACE inhibitors, calcium channel blockers or beta blockers and the remaining 18 on double or triple therapy with various combinations of these drugs). Eighteen of the 60 patients on antihypertensive therapy were on treatment with beta blockers (alone or in combination with other drugs). Eighty-six patients were taking calcium-containing phosphate binders (either calcium carbonate or calcium acetate). Forty-seven patients were being treated with calcitriol. None of the patients who took part in the study had undergone parathyroidectomy.
Laboratory methods
Fasting blood sampling was performed on a midweek non-dialysis day for HD patients. Samples were stored in prechilled vacutainers containing edetic acid, placed immediately on ice, and centrifuged within 30 min at 4°C; plasma was stored at −80°C until required. Serum calcium, serum phosphate, haemoglobin and alkaline phosphatase were measured using standard methods in the routine clinical laboratory. Intact PTH was measured by a specific immunoassay.
Note to Table 1: Data are expressed as mean ± SD, median and inter-quartile range or as percent frequency, as appropriate. Patients are divided into 2 groups on the basis of incident fracture occurrence. P tests the differences among the groups. Significant differences between groups are indicated in bold. Intact PTH thresholds of <100 and >800 pg/ml identify low and high bone turnover, respectively [18]. ESAs = erythropoiesis-stimulating agents.
Follow-up study
Fractures were defined as non-traumatic events documented by imaging techniques (i.e., radiography, computerized axial tomography or nuclear magnetic resonance). We considered fractures of the femoral neck (intertrochanteric or subtrochanteric fractures), fractures of other parts of the femur (condyle, supracondylar, and epiphysis), vertebral fractures, and fractures involving other skeletal segments. Vertebral fractures were diagnosed using a semi-quantitative approach [17].
After enrollment, patients were followed up until the first fracture; those who died, underwent transplantation, or were free of fractures at the end of the study were censored. No patient was lost to follow-up. Median follow-up was 74 months (range 0.5-84.0 months).
Statistical analysis
Data are reported as mean ± SD, median and inter-quartile range or as prevalence rate, and differences between groups were analyzed by the t-test, the Mann-Whitney test or the chi-squared test, as appropriate. The association of incident fractures with serum PTH, total and bone alkaline phosphatase, calcium, phosphate, TNF-α, CRP, IL-6 and other potential risk factors was preliminarily analyzed by dividing patients into two groups (patients with and without incident fractures) and testing the differences between them. To identify patients with low and high bone turnover we used the intact PTH thresholds suggested by the European Best Practice Group [18]. The predictive value of biomarkers of inflammation, mineral and bone disorder and other potential risk factors was analyzed by the univariate Cox proportional hazards method. In the Cox analysis the proportional hazards assumption was tested by the analysis of Schoenfeld residuals and no violation was found. Owing to the small number of fractures, to assess the independent link between TNF-α and fractures we entered this risk factor into (restricted) bivariate models considering other risk factors one at a time. Furthermore, we computed a risk score [19] for each patient by summing the individual profile of 5 risk factors for fractures, each dichotomized as follows: female sex = 1, male = 0; previous fractures = 1, none = 0; previous transplants = 1, none = 0; intact PTH > median value = 1, otherwise 0; age > median value = 1, otherwise 0. The potential confounding effect of this score, reflecting the combined effect of major risk factors for fractures, was then tested in an additional bivariate model.
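The composite risk score described above can be sketched as a simple sum of five dichotomized factors. The field names, the example patient, and the cohort medians below are illustrative assumptions, not values from the study.

```python
def risk_score(patient: dict, pth_median: float, age_median: float) -> int:
    """Composite fracture risk score (0-5): the sum of five dichotomized
    risk factors, as described in the Methods."""
    return (
        (1 if patient["sex"] == "F" else 0)
        + (1 if patient["previous_fractures"] else 0)
        + (1 if patient["previous_transplants"] else 0)
        + (1 if patient["intact_pth"] > pth_median else 0)
        + (1 if patient["age"] > age_median else 0)
    )

# Hypothetical patient, with illustrative cohort medians:
p = {"sex": "F", "previous_fractures": True, "previous_transplants": False,
     "age": 67, "intact_pth": 210.0}
print(risk_score(p, pth_median=160.0, age_median=61.0))  # -> 4
```

Collapsing the five factors into one covariate keeps the bivariate Cox models parsimonious, which matters given only 18 fracture events.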
Data are expressed as hazard ratio (HR) and 95% confidence interval (CI). All calculations were done using a standard statistical package (SPSS for Windows).
Results
C-Reactive Protein, IL-6 and TNF-α levels above the upper limit of the normal range were observed in 65%, 77% and 78% of patients, respectively (Figure 1). During the follow-up period (median 74 months; range: 0.5-84.0 months), 18 patients had incident fractures (vertebral = 10; pelvic = 4; femoral neck = 1; humerus = 1; costal = 1; clavicle = 1). Patients with incident fractures had higher levels of serum intact PTH and TNF-α than those without (Table 1, Figure 2), while CRP and IL-6 levels were similar in the two groups. The proportion of patients with PTH in the range denoting low bone turnover (<100 pg/ml) according to current guidelines [18] did not differ, while the proportion of those with high turnover (>800 pg/ml) was higher among patients with incident fractures (Table 1). The proportion of patients who had suffered a previous fracture was markedly higher among those with incident fractures (Table 1).
On Kaplan-Meier analysis, fracture-free survival was longer in patients with TNF-α in the lower tertile than in those with levels in the upper tertile (Figure 3). Similarly, PTH levels in the lower tertile were associated with longer fracture-free survival (Figure 4). On univariate Cox analysis the association between TNF-α and incident fractures was significant (HR for 5 pg/ml increase in TNF-α: 1.62, 95% CI: 1.05-2.50; p = 0.03), as was the association between fractures and PTH (HR for 100 pg/ml increase in PTH: 1.15, 95% CI: 1.04-1.27; p = 0.005). Plasma TNF-α data were adjusted in bivariate Cox models including TNF-α and each risk factor listed; the latter factors were used to compute the risk score (see Methods). In these reduced bivariate models the association of TNF-α with fractures proved to be independent of intact PTH, age, sex, history of previous transplants and previous fractures, and of the risk score composed of the same risk factors (Table 2).
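Because the hazard ratios above are quoted on different exposure scales (per 5 pg/ml for TNF-α, per 100 pg/ml for PTH), they can be put on a common per-unit basis using HR = exp(β × units). A minimal sketch of that rescaling; the conversion itself is standard Cox-model arithmetic, not a calculation from the paper.

```python
import math

def rescale_hazard_ratio(hr: float, from_units: float, to_units: float) -> float:
    """Convert a Cox hazard ratio quoted per `from_units` of exposure into
    the equivalent ratio per `to_units`, via the log-linear relation
    HR = exp(beta * units)."""
    beta_per_unit = math.log(hr) / from_units
    return math.exp(beta_per_unit * to_units)

# Reported HRs rescaled to a per-1 pg/ml basis:
print(rescale_hazard_ratio(1.62, 5, 1))    # TNF-alpha -> ~1.10 per pg/ml
print(rescale_hazard_ratio(1.15, 100, 1))  # PTH       -> ~1.0014 per pg/ml
```

This makes explicit that the per-unit effect size of TNF-α is much larger than that of PTH, even though the two quoted HRs look numerically similar.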
Discussion
Our exploratory observations, while confirming the central role of PTH, show that TNF-α is associated with incident fractures in the CKD-5D population. These findings generate the hypothesis that systemic inflammation might contribute to the increased bone fracture risk in these patients.
In the last two decades the biology of osteoclast activation has been intensively investigated and the RANKL/RANK pathway has emerged as a fundamental modulator of osteoclastogenesis [4]. Inflammatory cytokines are well-established potent activators of the RANKL/RANK pathway [4][5][6][7] and play a direct role in osteoclastogenesis in post-menopausal women [20]. Furthermore, recent longitudinal studies [21][22][23] coherently suggest that high cytokine levels may contribute to bone loss and fractures in elderly women and men.
We confirm once again that inflammation is pervasive in CKD-5D patients [12][13][14][15]. Indeed, CRP levels were above the upper limit of normal in as many as 65% of patients at the time of enrolment. In the present study TNF-α was substantially increased, being above the upper normal range in 78% of patients, a figure close to that of the other biomarkers of inflammation, yet it was the only cytokine linked to fracture risk. This finding is consistent with studies showing that, among cytokines, TNF-α is the most powerful stimulator of osteoclastogenesis [4][5][6][7].
Our study has limitations. First, we measured TNF-α and other cytokines only once. Since the precision of the estimate of the usual level of inflammation biomarkers increases with repeated measures, the link between TNF-α and bone fractures might be even stronger than emerged in the present analysis. On the other hand, because of the low number of events, the possibility that the TNF-α-fracture link may merely represent a false-positive finding cannot be dismissed. We controlled for confounding by adopting a parsimonious approach based on bivariate Cox models and a composite risk score [19], and found that the relationship between TNF-α and the risk of incident fractures was independent of PTH, as well as of major confounders like age, sex, previous fractures and previous kidney transplants. Although these adjustments did not materially change the risk of fractures associated with high TNF-α, we cannot exclude residual confounding. A second obvious limitation is that we did not measure the full set of hormones controlling mineral balance, including 25-hydroxy vitamin D, 1,25-dihydroxy vitamin D and FGF23. Moreover, we did not evaluate serum magnesium levels and did not study the relationships between inflammation and vascular calcifications. Finally, our data, collected in a single renal Unit, cannot be generalized to the greater HD population. Due to these limitations our data are merely hypothesis-generating. The relationship between fractures and inflammatory cytokines in CKD-5D patients needs to be confirmed in larger cohorts gathering a substantially higher number of bone events. The issue is of relevance because bone fractures in CKD-5D patients not only engender disabling orthopaedic problems but are also associated with increased mortality [24].
Conclusions
In conclusion, our observations generate the hypothesis that TNF-α plays a role in the increased risk of bone fractures in CKD-5D patients. Analyses in larger cohorts with an adequate number of events are needed to firmly establish the TNF-α-fracture link that emerged in the present study.
High Maternal Neonatal Mortality and Morbidity in Pregnancy with Eisenmenger Syndrome
Objectives This study is aimed at evaluating the maternal and perinatal characteristics and pregnancy outcomes of ES. Material and Methods This is a retrospective cohort study of pregnancy with Eisenmenger syndrome (ES) in Dr. Soetomo Hospital from January 2018 to December 2019. Total sampling was used. We collected all baseline maternal-perinatal characteristic data, cardiac status, and pregnancy outcomes as primary outcomes. The maternal death cases were also evaluated, and we compared characteristics based on defect size (< or >3 cm). Results During the study period, we collected 18 cases of ES from a total of 152 pregnancies with heart disease. The underlying heart disease types include atrial septal defect (ASD), ventricular septal defect (VSD), and patent ductus arteriosus (PDA). All cases suffered pulmonary hypertension (PH), 3 cases moderate and 15 cases severe. 94% of cases fell into heart failure (DC FC NYHA III-IV) during treatment. The majority of cases were delivered by cesarean section (88.9%). Pregnancy complications found include preterm birth (78%), low birthweight (94%), intrauterine growth restriction (55%), oligohydramnios (16%), severe preeclampsia (33%), and placenta previa (5.5%). The large-defect group had older maternal age (30.18 ± 4.60 vs. 24.15 ± 2.75; p = 0.002), a higher rate of clinical signs (100 vs. 40%, p = 0.003), and a higher preterm delivery rate (100% vs. 69%, p = 0.047) compared to the small-defect group. An R-to-L or bidirectional shunt was significantly more frequent in the large-defect group (13 vs. 5 cases, p = 0.006, 95% confidence interval: -1.156 to -0.228). There were seven maternal deaths caused by cardiogenic shock. Conclusions Pregnancy with ES is still associated with very high maternal and neonatal mortality and morbidity. A larger defect size correlates with clinical performance and pregnancy outcomes. Effective preconception counseling is the best strategy to reduce the risk of maternal and neonatal death in women with ES.
Introduction
Maternal heart disease is the second most common cause of maternal death after preeclampsia. One of the most frequent types of heart disease in pregnancy is congenital heart disease, in which 5% of patients have pulmonary hypertension [1]. Eisenmenger syndrome (ES) represents the severe end of the disease spectrum of pulmonary arterial hypertension associated with congenital heart defects (PAH-CHD), and ES due to PAH-CHD is classified into group one. It is defined by systemic-to-pulmonary shunting of blood through any large congenital cardiac defect at any location permitting increased pulmonary blood flow, which progresses to severe elevation of pulmonary vascular resistance (PVR), resulting in reversed (pulmonary-to-systemic) or bidirectional shunting [2]. The defect is considered inoperable.
Hemodynamically, ES is defined as the elevation of PVR to 12 Wood units or to a pulmonary-to-systemic resistance ratio equal to or greater than 1.0. The size of the shunt and the exact diameter of the defect do matter. The threshold is 2-3 cm at the atrial level, 1-1.5 cm at the ventricular level, and 0.5-0.7 cm at the arterial level. Fifty percent of patients with a large defect at the ventricular level develop ES, and 13 percent of patients with a large defect at the atrial level develop ES [3].
The prevalence of ES is not well known. Recent data showed that 4.2% of adult CHD patients develop PAH-CHD, and one percent have ES. The incidence of ES is even lower in pregnancy, approximately 3% among pregnant women with congenital heart disease [4]. Patients with ES who become pregnant have a very high risk of adverse pregnancy outcomes. Maternal mortality is reported at 30-50%, mostly caused by rapidly progressive cardiopulmonary decompensation, thrombotic complications, and sudden death due to malignant arrhythmia [5]. Because of the high risk of maternal mortality, pregnancy is contraindicated in women with ES [6]. Adverse pregnancy outcomes found in ES are abortion, intrauterine growth restriction/IUGR (30%), and preterm birth (50-60%), all related to maternal chronic hypoxia [7]. Eisenmenger syndrome is often diagnosed and managed late, especially in developing countries. This is related to low awareness and knowledge in the community and to social and economic factors that delay access to public health services [8]. This study is aimed at evaluating the maternal and perinatal characteristics and pregnancy outcomes of ES.
Study Population and Outcomes

This is a retrospective cohort study of pregnancies with ES at Dr. Soetomo General Academic Hospital, Surabaya, East Java, Indonesia, from January 2018 to December 2019. Dr. Soetomo General Academic Hospital is the top referral tertiary center hospital in East Java, Indonesia. The study population was pregnant women with heart disease who were managed in our hospital during the study period. The inclusion criteria were all pregnant or postpartum women with ES. There were no exclusion criteria. We used a total sampling method. The ethical clearance of this study was approved by the Ethical Committee Board of Dr. Soetomo Hospital. Written informed consent was acquired from all participants. The primary outcomes of this study are the maternal, perinatal, and pregnancy outcomes in Eisenmenger syndrome. The pregnancy outcomes evaluated consist of maternal mortality, obstetric complications (abnormal cardiotocography test, oligohydramnios, severe preeclampsia, and IUGR), heart failure, preterm delivery rate, mode of delivery, baby birthweight, and Apgar score. We also assessed the maternal characteristics and the maternal cardiac status. Maternal characteristics include maternal age, gestational age at diagnosis and delivery, gravidity, antenatal care history, heart disease type, and heart failure. Cardiac status includes heart disease type, murmur sign, clubbing finger, arterial oxygen saturation (SaO2), defect size, and pulmonary arterial systolic pressure (mmHg). The defect size and pulmonary arterial systolic pressure were determined by echocardiography, while the other cardiac parameters were obtained from physical examination. We also evaluated the correlation between maternal and perinatal characteristics and pregnancy outcomes and the cardiac defect size. We divided the samples into two groups based on defect size: small (<3 cm) and large (>3 cm) defect groups.
The clinical characteristics of all maternal death cases were also described.
We defined abnormal cardiotocography as a finding of category two or three based on the National Institute of Child Health and Human Development criteria [9]. Oligohydramnios was diagnosed as an amniotic fluid index < 5 cm on fetal ultrasound [10]. We defined preeclampsia as gestational hypertension accompanied by one or more of the following new-onset conditions after 20-week gestation: proteinuria, maternal organ dysfunction, or uteroplacental dysfunction [11]. IUGR was defined based on an ultrasound finding of estimated fetal weight < 10th percentile [12]. Heart failure was categorized based on the New York Heart Association (NYHA) functional class [13]. Preterm was defined as delivery before 37-week gestation [14][15][16]. The Apgar score was used to evaluate general health and signs of hemodynamic compromise in the newborn. A score < 7 was defined as a low Apgar score [17].
Echocardiography Procedure

Transthoracic echocardiography (TTE) was performed by adult cardiology fellows. All procedures were supervised, and all results were discussed with a senior cardiologist from the echocardiography and adult congenital heart disease division to minimize inter-operator variation bias. The scanning machine was a Vivid E9 with the EchoPAC Dimension system (General Electric Healthcare, US), with the following procedure: (1) all TTE parameters were examined based on American Society of Echocardiography (ASE) recommendations; (2) two-dimensional modes were scanned from all views, including parasternal long axis (PLAX), parasternal short axis (PSAX), apical 4-chamber, apical 2-chamber, subcostal, and suprasternal; (3) pulsed-wave (PW) Doppler and continuous-wave (CW) Doppler modes were scanned from all views to detect any cardiac lesion and shunts; (4) left ventricle (LV) systolic function was estimated using the Teichholz method from M-mode in PLAX and also using the modified Simpson's method in apical 4-chamber and 2-chamber views; (5) LV diastolic function was measured using PW Doppler and tissue Doppler imaging (TDI) in the 4-chamber view; (6) the probability of pulmonary hypertension related to a congenital heart defect was estimated by the addition of the tricuspid regurgitation maximal pressure gradient (TR max. PG) and the estimated right atrial pressure (est. RAP). The estimated pulmonary artery systolic pressure (Est. PASP) was also assessed from the peak velocity of tricuspid regurgitation (TR Vmax) and other PH signs from the ventricle, pulmonary artery, inferior vena cava, and right atrium.

Statistical Analysis

The data on maternal and perinatal characteristics were evaluated using descriptive analysis. Categorical variables were compared between groups using the chi-square test or Fisher exact test based on their distribution. Numerical variables were compared using an independent t-test.
All statistical tests were performed using IBM SPSS Statistics 25.
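The group comparisons described above (chi-square or Fisher exact test for categorical variables, independent t-test for numerical ones) can be sketched with SciPy in place of SPSS. The 2x2 table and age values below are hypothetical illustrations, not the study's raw data.

```python
# Sketch of the group comparisons described above, using SciPy instead of SPSS.
# The table and arrays below are HYPOTHETICAL, not the study's raw data.
from scipy import stats

# Categorical outcome (e.g. preterm delivery) as a 2x2 contingency table:
# rows = defect group (large, small), columns = (outcome present, absent)
table = [[8, 1], [4, 7]]
odds_ratio, p_fisher = stats.fisher_exact(table)  # exact test, suitable for small counts

# Numerical variable (e.g. maternal age), compared with an independent t-test
large_ages = [27, 31, 35, 29, 33]      # hypothetical
small_ages = [22, 24, 26, 23, 25, 27]  # hypothetical
t_stat, p_ttest = stats.ttest_ind(large_ages, small_ages)

print(f"Fisher exact p = {p_fisher:.3f}")
print(f"t = {t_stat:.2f}, p = {p_ttest:.4f}")
```

Fisher's exact test is the usual fallback when expected cell counts are too small for the chi-square approximation, which matches the "based on its distribution" choice described above.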
Maternal and Perinatal Characteristics
During the study period, we collected a total of 152 pregnant and postpartum women with heart disease. Among them, there were 18 cases (11.8%) of ES, consisting of fifteen pregnancies (83%) and three postpartum cases (17%). Most cases (90.5%) were identified as ES on admission to the hospital. The average maternal age was 27.17 years, and most were in the 20-29-year interval (83%). The majority of cases were primigravida (61%). Fifteen (83%) cases were referred from lower-level hospitals/public health services. Only 17% of cases had regular antenatal care in our hospital and were managed by a multidisciplinary team until delivery (Table 1). Most patients were first diagnosed with ES at gestational ages of 28-36 weeks (61%). The heart diseases found in the study were atrial septal defect (ASD), ventricle septal defect (VSD), and patent ductus arteriosus (PDA) (Table 1). Pregnancy complications included preterm birth (78%), low birthweight (94%), intrauterine growth restriction (55%), oligohydramnios (16%), severe preeclampsia (33%), and placenta previa (5.5%). The majority of cases were delivered by cesarean section, and the newborns had low birthweight related to preterm birth. Most patients underwent sterilization directly during or after delivery (72%) (Table 1).
All cases suffered pulmonary hypertension (PH), divided into moderate (3 cases) and severe (15 cases). Moderate PH was defined by a pulmonary artery systolic pressure (PASP) of 60-80 mmHg, and severe PH by a PASP > 80 mmHg. Echocardiography results showed a common defect size of 1-2 cm in the PDA (1 case) and VSD cases (4 cases). Defects of 2-3 cm (7 cases) and large defects > 3 cm (5 cases) were found only in ASD cases. The shunting pattern in the heart was most commonly right to left (10 cases), followed by bidirectional (5 cases) and left to right (3 cases) flow (Table 2). Unfortunately, most cases fell into heart failure during gestation (94%), in NYHA functional class III and IV. Laboratory parameters, including hemoglobin (Hb), leukocyte count, platelet count, hematocrit (Hct), and oxygen saturation, can be seen in Table 2. The mean value of each parameter was as follows: Hb 13.9 g/dL, Hct 41.68%, leukocytes 10,971 cells/μL, platelets 200,667 cells/μL, and oxygen saturation 88%. Most patients had normal Hb levels, and only two women had anemia (Hb < 11 g/dL). Only one woman had an abnormal leukocyte count (>17,000), and four women had thrombocytopenia (<150,000). Only five cases had normal oxygen saturation on admission (≥95%), while the others were already in a hypoxic state. Seven cases even showed signs of respiratory failure, with an oxygen saturation < 85% (Table 2).
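The PASP estimate used in the TTE protocol (tricuspid regurgitation gradient plus estimated right atrial pressure, via the simplified Bernoulli relation 4·v²) and the severity cut-offs applied in this study (moderate: PASP 60-80 mmHg; severe: >80 mmHg) can be sketched as follows. The function names and the numeric inputs are illustrative assumptions, not part of the study.

```python
def estimated_pasp(tr_vmax_m_s: float, est_rap_mmhg: float) -> float:
    """Estimated pulmonary artery systolic pressure (mmHg).

    Simplified Bernoulli equation: TR maximal pressure gradient = 4 * v^2,
    added to the estimated right atrial pressure, as in the TTE protocol above.
    """
    return 4.0 * tr_vmax_m_s ** 2 + est_rap_mmhg


def ph_severity(pasp_mmhg: float) -> str:
    """Classify PH severity using the cut-offs applied in this study."""
    if pasp_mmhg > 80:
        return "severe"
    if pasp_mmhg >= 60:
        return "moderate"
    return "below moderate threshold"


# Hypothetical example: TR Vmax 4.5 m/s, estimated RAP 10 mmHg
pasp = estimated_pasp(4.5, 10)   # 4 * 20.25 + 10 = 91.0 mmHg
print(pasp, ph_severity(pasp))   # 91.0 severe
```

Under these cut-offs, a TR jet velocity above roughly 4.2 m/s already puts a patient in the severe range even with a normal right atrial pressure, which is consistent with 15 of the 18 cases here being classified as severe.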
Relationship between Maternal-Perinatal Characteristics and Cardiac Defect Size

We divided all cases into two groups based on the defect size in the heart: small defect (<3 cm) and large defect (≥3 cm) groups. All maternal and perinatal characteristics were compared between these two groups. Maternal age in the large defect group was significantly older than in the small defect group (30.18 ± 4.60 vs. 24.15 ± 2.75). All cases in the large defect group showed the clinical signs of clubbing finger and cyanosis; however, these manifested in only forty-six percent of the small defect group. Regarding heart disease type, the large defect group consisted entirely of ASD (100%), while the small defect group included ASD, VSD, and PDA. All cases in the large defect group ended in preterm delivery, compared to sixty-nine percent in the small defect group. All other parameters showed no statistically significant difference (Table 3).
Relationship between Defect Size, Shunt Type, and Pulmonary Hypertension

This study evaluated the relationship between pulmonary hypertension, R to L or bidirectional shunt, and defect size. The small defect group (<3 cm) showed a significantly higher proportion of PH compared to the large defect group (13 vs. 5 cases, p = 0.037, 95% confidence interval: 1.040 to -0.037), but no correlation was found between the two parameters (p = 0.063). The R to L or bidirectional shunt was significantly more frequent in the large defect group (13 vs. 5 cases, p = 0.006, 95% confidence interval: -1.156 to -0.228). The contingency coefficient correlation test resulted in r = 0.527, indicating a strong correlation between bidirectional or R to L shunt and defect size.
Maternal Death Cases

We found seven maternal death cases in this study (38.8%), all in ASD cases. The cause of death in all cases was cardiogenic shock after delivery. All cases were referred late to our hospital, in the third trimester and in an already deteriorated condition. All cases were complicated by severe PH, three cases by severe preeclampsia, and one case by cardioembolic stroke. All maternal deaths happened less than two weeks after delivery; the fastest was 16 hours after delivery. The clinical characteristics of the maternal deaths can be seen in Table 4.
Discussion
ES is an acquired elevation of pulmonary vascular resistance and pulmonary artery pressure due to a left-to-right intracardiac shunt. These pathological changes lead to a reversed right-to-left or bidirectional shunt, with subsequent cyanosis and polycythemia. As shown in Table 3, over two years, most of the cases we found were severe PH (89%), dominated by right-to-left shunt (55.6%). However, most PH cases arose from the small defect group (72.2%). In this study, most ES patients were rural residents of East Java Province, with low socioeconomic-educational level and poor physical status. They acquired a chronic tolerance to their congenital heart disease, with no symptoms over a long period, which eventually manifested in pregnancy. Due to a lack of general and medical knowledge, these people lack access to health services, especially cardiologists. Even among the patients who could receive cardiac health services, the majority refused the advice of cardiac surgery. The reason is a lack of awareness of the increased risk of morbidity and mortality of pregnancy with cardiac disease and its associated ES complications. There is also a lack of preconception counseling and of antenatal care with an obstetrician and cardiologist before pregnancy [18].
The clinical manifestations of ES (clubbing finger or cyanosis) also significantly correlated with the defect size. The larger the defect, the higher the possibility of clinical signs appearing. The defect size also correlates with the disease's risk of progression, shown by the reversed flow pattern in the heart shunt (R to L or bidirectional). The reversed or bidirectional shunt was found prominently in our large defect group compared to the small one. In ES, especially with large septal defects, lesions are characterized by high pulmonary pressure and a high pulmonary flow state. ES refers to any untreated congenital cardiac defect with intracardiac communication that leads to pulmonary hypertension, reversal of flow, and cyanosis. The previous left-to-right shunt is converted into a right-to-left shunt secondary to elevated pulmonary artery pressures and associated pulmonary vascular disease [19,20].
El Kayam et al. explained that, due to increased resistance and decreased compliance of the pulmonary vessels, elevated pulmonary pressures eventually cause right ventricular hypertrophy (RVH). ES begins when RVH causes right heart pressures to exceed left heart pressures, leading to a reversal of blood flow through the shunt. Consequently, deoxygenated blood returning from the body bypasses the lungs through the reversed shunt and passes directly into the systemic circulation, leading to cyanosis and resultant organ damage [20].
In pregnant women, the congenital heart diseases that cause pulmonary vascular disease and evolve into ES are mainly VSD, followed by ASD and PDA. Pregnant women with ES may present with clubbing fingers, cyanosis, dyspnea, fatigue, dizziness, or even right heart failure. This study shows that cyanosis, clubbing fingers, and cardiac septal lesions were more prominent with larger defect sizes (>3 cm). Blood gas analysis, complete blood count, and oxygen saturation are important parameters in pregnant women with ES. Previous studies showed that oxygen saturation < 65%, Hct > 60%, and Hb > 18 g/dL are predictors of adverse maternal outcomes in pregnancy [21,22]. The majority of this study's cases had oxygen saturation < 85%, the lowest being 66%. The mean hematocrit value in this study was relatively high (42%) compared to a similar study from India (35.3%) [23]. Most cases showed polycythemia (Hb > 16 g/dL), indicating chronic hypoxia (Table 2). Pregnant women with ES should be hospitalized after the 20th week of pregnancy, or earlier if clinical deterioration occurs [20,[24][25][26]. A person with ES is paradoxically subject to both uncontrolled bleeding, due to damaged capillaries and high pressure, and spontaneous clots, due to hyperviscosity and blood stasis [27].
ES in pregnancy can cause severe complications, although successful deliveries have been reported. Maternal mortality ranges from 30% to 60% and may be attributed to heart failure, venous thromboembolic events, hypovolemia, or cardiogenic shock. Six cases had a defect size > 2 cm, and three of them even > 3 cm. Most deaths occur either during or within the two weeks after delivery. One study shows that 10% of women with ES die within 14 days after delivery. In ES related to congenital heart disease, this number increases to 28%, and the average survival time is only six days [28]. All maternal deaths in this study were caused by cardiogenic shock in patients suffering severe PH. Most maternal deaths in this study happened less than two weeks after delivery, related to the redistribution of fluid in the postpartum period. This volume overload exceeds the capacity of a heart with a defect and leads to complications such as heart failure, atrial fibrillation, pulmonary edema, or cardiogenic shock. The study by Katsurahgi et al. mentioned that, of 73 cases of ES, the majority of deaths occurred in the postpartum period rather than antepartum (23 vs. 3 cases) [29]. Three maternal death cases coincided with severe preeclampsia, which complicated the hemodynamic changes in patients with ES. Maternal mortality in heart disease remains high until three to four weeks after delivery [30]. The first two postpartum weeks carry the highest risk of maternal death in pregnancy with heart disease. This finding may suggest the need to keep women with heart disease in the intensive care unit for longer after delivery. All maternal death cases presented in heart failure (FC NYHA III-IV) and with low oxygen saturation. Although a multidisciplinary team managed the patients in the ICU and the pregnancies were immediately terminated, the disease's progression could not be stopped. All these patients died <10 days after delivery. Most patients were delivered by cesarean section (88%).
Only two women delivered vaginally; both were at nonviable or periviable gestational ages. One case was a pregnancy with severe PH in the first trimester (12 weeks), which was terminated due to the increased risk of maternal mortality if the pregnancy continued. The woman was 24 years old and not aware of her congenital heart disease before pregnancy. The other case was a first pregnancy at 27-week gestation complicated by perimembranous VSD, moderate PH, and ES. Delivery was indicated due to the worsening condition of the mother, and an 800-gram baby was born with a low Apgar score. Unfortunately, this mother passed away 48 hours after delivery. Although there is no evidence of a superior mode of delivery in heart disease, cesarean delivery is preferable in cases of severe heart disease with a poor maternal condition, such as ES. Vaginal delivery has the benefits of lower blood loss, lower risk of thromboembolism, and lower risk of infection compared to cesarean section. However, vaginal delivery should be performed with a device (forceps or vacuum extraction) to accelerate the second stage in a term gestation [31]. In our hospital, the preferred mode of delivery is decided by the multidisciplinary team, including obstetricians, intensivists, anesthesiologists, cardiologists, and perinatologists. In most ES conditions, cesarean delivery is favorable because the procedure is faster, monitoring is tighter, and the maternal condition is more controllable. In a cesarean delivery, the mother also does not have to face uterine contractions, which massively increase cardiac output and heart failure risk [21].
The outcomes of pregnant women with ES in this study were poor. Ninety-four percent (17 cases) of babies were born with birthweight < 2500 grams, most at preterm gestation. Ten babies were confirmed to have IUGR after birth based on the Ballard-Lubchenco assessment [32]. ES in pregnancy is a significant risk factor for IUGR, and the majority deliver at preterm gestation [2,4,26,29]. However, the prevalence of IUGR in pregnancy with corrected congenital heart disease (without severe residual defects) is only 8 out of 50 pregnancies (16%) [33]. Another study shows that neonatal outcomes with corrected hearts are much better than with uncorrected ones, including prematurity (0 vs. 40%), IUGR (20% vs. 40%), neonatal death (0 vs. 10%), and baby birthweight (2.17 kg vs. 1.62 kg) [34]. Unfortunately, in this study, all cardiac lesions were uncorrected, for the reasons already explained. The neonatal complications found include respiratory distress syndrome (9 cases), early-onset sepsis (1 case), and necrotizing enterocolitis (1 case).
Conclusions
Pregnancy with ES, especially when complicated by PH, is still associated with very high maternal and fetal morbidity and mortality. Effective preconception counseling is essential so that heart disease can be corrected before pregnancy. Termination of pregnancy in the first trimester is advisable in severe heart diseases such as ES with PH. Maternal risks increase significantly if the pregnancy continues into the third trimester. If an ES patient with PH wishes to carry on the pregnancy, she should be monitored closely and managed in a tertiary center, with collaborative efforts among obstetricians, cardiologists, anesthesiologists, pediatricians, and intensivists. There is no standardized approach to the management of ES in pregnancy; successful perinatal outcomes seem heavily dependent on the individualization of each patient's treatment.
Data Availability
The data are available on request via the institutional board of Dr. Soetomo General Academic Hospital. The authors do not own the data. The rights to the data are held by Dr. Soetomo General Academic Hospital.
Ethical Approval
All procedures in this study were approved by the Ethical Committee of Dr. Soetomo General Academic Hospital (Surabaya, Indonesia), following the ethical standards of the institutional and national research committee and the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Consent
Informed consent was obtained from all individual participants in this study.
Comparative evaluation of the effect of the Quercus cortex extract and biologically active substances of plant origin on health and rumen digestion
The paper studies the effects of a Quercus cortex water extract (group II) and synthesized biologically active substances of the Quercus cortex extract (group III) on dry matter digestibility, hematological parameters, and the elemental composition of the rumen fluid. It was identified that the additives have a dose-dependent effect on dry matter digestibility. They improve digestibility by 12.46% (P≤0.001) (group II) and 17.68% (group III). Among the hematological parameters, the number of lymphocytes increased by 34.07% (group II) and by 44.74% (group III); the hemoglobin concentration increased by 5.11% (group III). Serum iron decreased by 23.26% (P≤0.05) (group II) and increased by 7.29% (group III). The experimental additives influence the microelement composition of the ruminal fluid, reducing the concentrations of Fe, Co, Cr, and Ni and increasing the concentrations of Mn, Cu, and Zn. The results obtained require further research.
Introduction
Currently, biologically active substances of plant origin, whose valuable components are well absorbed, are used to correct natural resistance and normalize metabolism in cattle. In scientific and folk medicine, preparations from aqueous plant extracts of larch wood, Siberian cedar, and oak [1][2] are used.
Scientists found that biologically active substances have a positive effect on the mucous membrane of the digestive tract: they have an astringent effect similar to the tannic one, and contribute to the formation of a layer that reduces irritation of the mucous membrane [3].
When screening twenty medicinal plants used in medicine, a significant ability to inhibit the quorum sensing system of wild and mutant C. violaceum strains was found in Quercus cortex extract, Betula verrucosa buds, and Eucalyptus viminalis leaves [4]. The content of biologically active substances in plants varies depending on various factors: plant parts, harvest seasons, geographic location, and the methods used to produce the additives [5]. Scientists conduct research on plant extracts and then carry out experiments adding isolated compounds of biologically active substances [6].
In addition, there is little information about the identification of compounds present in the additives, since most additives are complex extracts [7]. Therefore, it is recommended to identify the chemical components in plant extracts in order to understand their effect on the elemental status of animals [8][9].

Data were expressed as mean values ± standard error of the mean. Statistical analysis was performed using Statistica 10.0 (StatSoft Inc., USA) and Microsoft Excel (Microsoft, USA). The significance of group differences was estimated using Student's t-test with p≤0.05.
In vitro study results
In vitro studies identified that dry matter digestibility increases with the addition of biologically active substances of Quercus cortex and an oak bark extract at various doses (Table 1). When adding the Quercus cortex extract at the minimum dose, dry matter digestibility exceeded the control by 5.42% (P<0.01). An increase in the dose to 3.3 mg/ml increased digestibility by 12.46% (P≤0.001). A further increase decreased digestibility.
Comparing the two factors, it is clear that the biologically active substances increase digestibility more than the aqueous Quercus cortex extract does.

In vivo study results.
Morpho-biochemical blood parameters.
The Quercus cortex extract (group II) decreased the number of granulocytes and platelets by 28.94 and 3.41% (P≤0.05), respectively. The number of lymphocytes and erythrocytes increased by 34.07% and 0.34%, respectively.
Biologically active substances (group III) increased the hemoglobin content by 5.11% and decreased platelets by 48.3% (P≤0.01) (Table 2). The Quercus cortex extract (group II) increased ALT activity by 8.10%, while biologically active substances (group III) reduced the activity of the enzyme by 18.81% (P≤0.01) (Table 2).
Biologically active substances (group III) decreased the total protein content by 19.80% (P≤0.01) and 13.51% (P≤0.001). The remaining biochemical parameters were close to the control values.
The elemental composition of rumen fluid. The number of trace elements changed. Three hours after the addition of the Quercus cortex extract (group II), the As concentration decreased by 50% (P≤0.001), Ni by 26.92% (P≤0.001), and Fe by 32.81% (P≤0.01). Cu decreased by 66.67% (P≤0.001), Mn by 28.13% (P≤0.001), and Zn by 111.46% (P≤0.05) (Fig. 1). In the experimental groups, the content of trace elements in the rumen fluid changed (Fig. 4). Their higher content was observed 6 hours after the addition of the extract, due to the accumulation of chemicals during digestion.
Discussion
The Quercus cortex extract and its biologically active substances increased digestibility by 12.46% (P≤0.001) and 17.68%, respectively; Quercus cortex, as a source of quercetin, has antioxidant and anti-inflammatory effects, increasing the digestibility of nutrients in the rumen [12].
Quercus cortex as a source of tannin has no negative effect on fermentation. It has a positive effect on dry matter digestibility, energy metabolism and the use of protein in the rumen [13]. Inhibiting properties of medicinal plants and their extracts are associated with the participation of phenols and polyphenols (flavonoids) in reactions with radicals that accompany some diseases [14].
Quercus cortex and biologically active substances of plant origin changed morphological and biochemical blood parameters; the number of leukocytes, lymphocytes, monocytes, granulocytes increased [15] due to an increase in immunological activity and phagocytosis [16].
An increase in the content of leukocytes in group III is consistent with previous studies [17] performed using the thyme extract, where there was no significant increase in the level of leukocytes, but immunological reactions improved.
Feeding broiler chickens with Quercus cortex + artificially synthesized substances has a positive effect on the immunomodulating state and antioxidant activity, increases the content of in-lysine, serum superoxide dismutase and catalase [18].
The data obtained are similar to the data in [19], where a decrease in plasma iron was observed when feeding with Grape seed extracts.
Protein plays a leading role in complex biochemical processes; its content in the blood plasma indicates the physiological well-being of the body [20]. The concentration of total protein in the blood of calves increases as an adaptive-mobilization response. In the blood of the experimental animals, the level of total protein was lower than the control value, which indicates higher metabolism in the animals of the experimental groups [21]. Adding 50 g of dry matter of the oak bark extract into the feed increased calcium and sodium twofold, and phosphorus and magnesium 1.7- and 1.8-fold [22].
The Quercus cortex extract increased the Mn concentration due to its high concentration in the extract and its ability to form weak systems with chemical elements in the gastrointestinal tract [23].
In the experimental groups, the iron content decreased. This is consistent with the report of a decrease in the content of zinc and copper in the liver of monogastric animals fed plant products (extracts from grape marc) containing polyphenolic substances [24].
Conclusion
It was established that the Quercus cortex extract and the synthesized biologically active substances of the Quercus cortex extract, when used in cattle feeding, have a dose-dependent effect on dry matter digestibility. Digestibility increased by 12.46% (P≤0.001) (group II) and 17.68% (group III). The number of lymphocytes increased by 34.07% (group II) and 44.74% (group III); the hemoglobin concentration increased by 5.11% (group III). Serum iron decreased by 23.26% (P≤0.05) (group II) and increased by 7.29% (group III). The experimental additives affected the microelement composition of the rumen fluid, reducing the concentrations of Fe, Co, Cr, and Ni and increasing the concentrations of Mn, Cu, and Zn. The results require further research.
Theoretical and Experimental Models to Evaluate the Possibility of Corrosion Resistant Concrete for Coastal Offshore Structures
This study built theoretical and practical models to evaluate the corrosion resistance of concrete for coastal offshore structures in Vietnam. A mathematical model was developed in the form of a system of nonlinear partial differential equations characterizing the diffusion “free calcium hydroxide” in a solid of a concrete structure. The model describes the process of non-stationary mass conductivity observed in the “concrete structure—marine environment” system under non-uniform arbitrary initial conditions, as well as combined boundary conditions of the second and third kind, taking into account the nonlinear nature of the coefficients of mass conductivity k and mass transfer β. It was shown that the solution of the boundary value problem of non-stationary mass conductivity allows us to conclude about the duration of the service life of a concrete structure, which will be determined by the processes occurring at the interface: in concrete—mass conductivity, depending on the structural and mechanical characteristics of hydraulic structures, and in the liquid phase—mass transfer, determined by the conditions of interaction at the interface of the indicated phases.
Introduction
The South China Sea plays an important role in Vietnam's history; the coastline from north to south is about 3260 km long. Many important economic centers and defense facilities of the country are located on the coast. Accumulated experience shows that many hydrotechnical reinforced concrete structures develop damage caused by corrosion processes in the aggressive marine environment after 5 to 10 years of operation. The rate of corrosion damage is quite high, especially in tidal waters. Thus, increasing the reliability and durability of hydrotechnical facilities in the coastal zone of Vietnam is highly relevant and of great economic and social importance for the country [1].
The destruction of reinforced concrete structures occurs due to corrosion processes caused by diffusion (mass transfer) between the concrete components and the ions of aggressive components of the liquid phase [2][3][4]. Recent studies stop at modeling the penetration of chloride ions from the aggressive environment into reinforced concrete structures [5][6][7][8]; there have been no specific studies of the corrosion process caused by diffusion between the concrete components.
This paper considers the basic methods of physical-mathematical modeling used to describe the processes of non-stationary mass transfer of "free calcium hydroxide" in concrete structures placed in a liquid environment with a defined flow rate. The boundary value problem of "free calcium hydroxide" mass conductivity is formulated in dimensionless variables. To demonstrate the possibilities of the obtained solution, a numerical experiment was carried out in which the evolution of the field of dimensionless concentrations C(x, Fo_m) at different values of the Fourier number is, in accordance with the theory of similarity, treated as an indicator of the process time. The study presents the calculated distributions of "free calcium hydroxide" concentration over the dimensionless thickness of the concrete structure at Fourier numbers of 0.01, 0.1, 0.2, 0.5, and 1, and provides an example of determining the time at which the critical concentration of "free calcium hydroxide" is reached on the coastal structure surface.
The theoretical model is applied to corrosion resistant concrete (CRC) with a modified structure based on sulfate resistant Portland cement using mainly local materials, suitable for the construction of offshore structures in coastal areas. The modification is achieved by compaction and strengthening of the structure of the cement stone due to the combined effect of modifying admixtures (MA) introduced into the concrete mixture: a water-reducing polycarboxylate superplasticizer (SP), as well as silica fume (SF), mechanically activated low-calcium fuel fly ash (FA), and rice husk ash (RHA), finely dispersed mineral admixtures in the composition of multicomponent adhesives that have high pozzolanic activity due to a significant content of amorphous silica [9,10].
Materials and Methods
Sulphate resisting Portland cement type CEM I 42.5N CC (SC), produced by the Tam Diep plant, the leading cement manufacturer in Vietnam using the most modern world technologies, was used. The main characteristics of the clinker and the Portland cement based on it met the requirements of ASTM C150-07 [11], GOST 22266-2013 [12] (State standard of Russia), and TCVN 6067:2018 [13] (State standard of Vietnam).
The physical and mechanical properties, as well as the chemical and mineral compositions, of the cement used are shown in Tables 1-3. Active mineral admixtures make it possible to reduce the consumption of cement, to compact the structure of concrete by reducing the porosity of the cement stone and thereby improving its operational properties, and, in addition, to avoid stratification of the concrete mixture when water-reducing superplasticizers are used [14][15][16]. The local active mineral ingredients used in the work included fly ash class F from Vung Ang (FA), conforming to the standards TCVN 10302:2014 [17] and GOST 25818-2017 [18], Vina Pacific SF-90 silica fume (SF), and rice husk ash (RHA) conforming to the standard TCVN 8827:2011 [19]. Their composition and properties are shown in Tables 4 and 5. The granulometric composition of FA, SF, and RHA, shown in Figure 1, was determined by laser granulometry. Silica sand (SS) of the Lo River (Vietnam) was used as a fine aggregate. It is a popular construction sand in Vietnam with good quality and low price.
The grain size composition of sand is important for the preparation of concrete mixtures of the required consistency, since it has a significant effect on their workability and the amount of mixing water required for this. The regulatory requirements for the physical and mechanical properties of sand are set out in the Russia and Vietnam standards GOST 8736-2014 [20] and TCVN 7570:2006 [21]. The results of their determination are presented in Table 6. As a coarse aggregate, we used crushed stone (CS) with D max = 10 mm, which is mined in open pits in Ninh Binh (Vietnam) and whose properties corresponded to the requirements of the standards GOST 8267-93 [22] and TCVN 7570:2006 [21]. The physical and mechanical properties of the used crushed stone are shown in Table 7. A special requirement is imposed on the cleanliness of the aggregate, since dusty, silty and clay particles envelop the surface of the grains and impair their adhesion to the cement stone. Therefore, the content of such particles in a coarse aggregate should not exceed 3%.
The superplasticizer SR 5000P (SP) from SilkRoad (Vietnam), with a density of 1.1 g/cm3 at a temperature of 20 ± 5 °C, was used as a plasticizing additive in concrete mixtures; it reduces the water demand of equally mobile concrete mixtures by 30-40%, which meets the requirements of GOST 24211-2008 [23] and ASTM C494/C494M-19 [24]. The main characteristics are shown in Table 8. According to the passport data provided by the manufacturer, the optimal dosage of the superplasticizer SR5000P for obtaining a concrete mixture with the highest mobility is in the range of 0.9-1.2% of the mass of the adhesives. If the SP consumption exceeds this amount, water separation and stratification of the concrete mixture can occur. Therefore, the work used the average value of the recommended dosage of the superplasticizer, 1% by weight of the adhesives.
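The dosage rule above reduces to simple arithmetic; the sketch below is illustrative (the function name and the binder mass in the example are assumptions; the 0.9-1.2% range and the 1% default come from the text):

```python
def sp_dosage_kg(adhesives_kg, pct=1.0):
    """Mass of superplasticizer at a given dosage, expressed as a percentage
    of the mass of the adhesives (manufacturer's recommended range 0.9-1.2%;
    the work uses the average value of 1%)."""
    if not (0.9 <= pct <= 1.2):
        raise ValueError("dosage outside the recommended 0.9-1.2% range")
    return adhesives_kg * pct / 100.0
```

For example, 450 kg of adhesives at the default 1% dosage corresponds to 4.5 kg of SP.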
Water (W) used for the preparation of concrete mixtures complied with the requirements of GOST 23732-2011 [25] and TCVN 4506:2012 [26]. Such water should not contain impurities that affect the setting of concrete, as well as reduce the durability of structures, above the permissible limit, have a pH value of at least 4 and contain no more than 5.6 g/L of mineral salts, including no more than 2.7 g/L sulfates. In addition, the water should be free of sludge and oil flakes, as well as organic matter of more than 15 mg/L.
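The water-quality limits just listed can be bundled into a simple admissibility check; a minimal sketch (the function name and argument layout are illustrative; the numeric limits are those quoted from GOST 23732-2011 / TCVN 4506:2012 above):

```python
def mixing_water_ok(ph, mineral_salts_g_per_l, sulfates_g_per_l, organics_mg_per_l):
    """Check mixing water against the limits quoted in the text:
    pH of at least 4, no more than 5.6 g/L of mineral salts (of which
    no more than 2.7 g/L sulfates), and no more than 15 mg/L of organic matter."""
    return (ph >= 4.0
            and mineral_salts_g_per_l <= 5.6
            and sulfates_g_per_l <= 2.7
            and organics_mg_per_l <= 15.0)
```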
Building Theoretical Models
Sea water is a highly corrosive environment containing a large amount of dissolved salts and causing chemical corrosion of both concrete itself and steel reinforcement in reinforced concrete structures. The aggressive marine environment has a significant impact on the durability of concrete and reinforced concrete structures of hydraulic structures of the coastal zone. At the same time, in reinforced concrete, the penetration of liquid aggressive media through capillary pores causes cracking and peeling of the protective concrete layer above the surface of the reinforcing bars, which leads to corrosion of the reinforcement [27][28][29][30].
To experimentally determine the chemical composition of seawater at different depths in the coastal zone, in the area of Halong port in the north of Vietnam, samples were taken (Figure 2), the results of chemical analysis of which are presented in Table 9.
Table 9 shows that the content of solutes in seawater tends to increase in the bottom layer, especially the content of Ca 2+ ions. This is due to the fact that Halong Bay rests on a limestone base, as a result of which the seawater of the bottom layer, dissolving calcium-containing rocks, has a higher concentration of Ca 2+ ions.
For the most part, all offshore hydraulic structures are made of concrete or reinforced concrete, complex composite materials whose viability, performance, and durability depend to a decisive extent on the structure of the constructions and their physicochemical, structural, mechanical, and operational properties. An important influence is exerted by the salinity of sea water, the presence of inorganic salts in it, and the presence of biological microorganisms in different climatic seasons. From the point of view of the theories of physicochemical hydrodynamics and heat and mass transfer, the nature of the interaction of the composite of a hydraulic structure with the components of seawater is determined by the laws of chemical kinetics and diffusion in the bulk of the concrete and at the solid-liquid interface, as well as by the laws of mass transfer (in this case, the transfer of substances from the interface into the volume of the seawater basin).
To develop effective methods for protecting concrete from leaching by a marine environment containing a range of different ingredients that have a significant effect on the rate of decomposition of highly basic compounds and the removal of decomposition products into the marine environment, it is necessary to develop mathematical models of unsteady mass conductivity (diffusion in a solid) under non-uniform arbitrary initial conditions and combined boundary conditions of the 2nd and 3rd kind. Particular attention should be paid to taking into account the nonlinearity of the coefficients of mass conductivity and mass transfer.
In accordance with the classification of Professor V.M. Moskvin [31], the simplest form of development of corrosion processes in concrete is leaching. In this case, the aggressive component does not penetrate deep into the material of the concrete (reinforced concrete) structure. The rate of the process is determined by the diffusion of calcium hydroxide from the pores of the inner layers of the structure to the external solid-liquid interface, and then by mass transfer from the interface to the liquid mass.
In this case, it is assumed that the target component, which is free calcium hydroxide in the processes of corrosion of cement concrete, is removed from the surface of a concrete or reinforced concrete structure by a liquid medium as a result of convective mass transfer. If the medium is stationary, then the mass transfer will be characterized by natural convection, and if the surface of the structure is washed with a liquid at a certain speed of its movement, then there is a forced flow of the liquid. In both cases, the mass transfer of the target component will be determined by two processes: mass conductivity from the inner layers to the interface and mass transfer from the interface to the liquid phase [32][33][34][35][36][37][38][39].
The model of the problem of mass transfer with initial and boundary conditions for an unbounded plate concrete (reinforced concrete) can be schematically illustrated in Figure 3.
The problem of mass transfer of calcium hydroxide from a concrete structure into an aqueous medium can be formulated by the following system of Equations (1)-(4):

∂C(x,τ)/∂τ = k·∂²C(x,τ)/∂x², 0 < x < δ, τ > 0, (1)

C(x,0) = C0, (2)

∂C(0,τ)/∂x = 0, (3)

−k·∂C(δ,τ)/∂x = β·[C(δ,τ) − Cp], (4)

where: C0 is the initial concentration of free calcium hydroxide in concrete, in terms of calcium oxide, kg CaO/kg concrete; C(x,τ) is the concentration of free calcium hydroxide in concrete at the moment τ at any point with the coordinate x, in terms of calcium oxide, kg CaO/kg concrete; k is the coefficient of mass conductivity in the solid phase (diffusion), m²/s; β is the mass transfer coefficient in a liquid medium, m/s; Cp is the equilibrium concentration of the transferred component on the surface of the solid, kg CaO/kg concrete; δ is the wall thickness of the structure, m. Equation (1) is the differential equation of non-stationary mass transfer in the body of a reinforced concrete structure. Equation (2) defines the initial condition of the process: the distribution of calcium hydroxide concentrations at the time instant taken as the initial one. Equations (3) and (4) define the conditions at the interface. Equation (3), a condition of the 2nd kind also called the "non-penetration condition", expresses the fact that calcium hydroxide does not diffuse into the internal premises of the hydraulic structure located to the left of the enclosing concrete (reinforced concrete) construction. Equation (4) characterizes the interaction of the surface layer of the structure with the liquid medium; it is a condition of the 3rd kind, also called "Newton's condition".
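The paper treats this boundary value problem analytically; purely as a hedged numerical illustration, the same system (diffusion with a zero-flux condition on the inner face and a 3rd-kind convective condition on the outer face) can be integrated with an explicit finite-difference scheme. All parameter values below are illustrative assumptions, not values from the paper:

```python
def solve_leaching(c0=1.0, cp=0.0, k=1e-10, beta=1e-7, delta=0.3,
                   nx=31, t_end=3.15e7, safety=0.4):
    """Explicit FTCS integration of dC/dt = k*d2C/dx2 on [0, delta] with
    dC/dx = 0 at x = 0 (2nd kind, non-penetration) and
    -k*dC/dx = beta*(C - cp) at x = delta (3rd kind, Newton's condition).
    Illustrative parameters; returns the final concentration profile."""
    dx = delta / (nx - 1)
    dt = safety * dx * dx / (2.0 * k)          # explicit stability limit
    c = [c0] * nx
    t = 0.0
    while t < t_end:
        new = c[:]
        for i in range(1, nx - 1):
            new[i] = c[i] + k * dt / dx**2 * (c[i+1] - 2*c[i] + c[i-1])
        new[0] = new[1]                        # zero-flux inner face
        # Robin condition via one-sided difference:
        # -k*(C_n - C_{n-1})/dx = beta*(C_n - cp)
        new[-1] = (k * new[-2] / dx + beta * cp) / (k / dx + beta)
        c = new
        t += dt
    return c
```

With these placeholder values (one year of exposure, Bi_m = βδ/k = 300), the profile stays near c0 deep in the wall and approaches the equilibrium value at the washed face.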
The use of dimensionless variables allows us to pass to the dimensionless formulation, Equation (5):

C(x, Fo_m) = (C(x,τ) − Cp)/(C0 − Cp), (5)

where: C(x, Fo_m) is the dimensionless concentration of the transferred component across the concrete thickness; x = x/δ is the dimensionless coordinate; Fo_m = k·τ/δ² is the Fourier mass transfer criterion; Bi_m = β·δ/k is the Biot mass transfer criterion. In this case, the system of Equations (1)-(4), also called the "boundary value problem of non-stationary mass transfer", is transformed to the form:

∂C(x, Fo_m)/∂Fo_m = ∂²C(x, Fo_m)/∂x², 0 < x < 1, Fo_m > 0, (6)

C(x, 0) = 1, (7)

∂C(0, Fo_m)/∂x = 0, (8)

∂C(1, Fo_m)/∂x + Bi_m·C(1, Fo_m) = 0. (9)

The purpose of solving this boundary value problem is to find a function C(x, Fo_m) that allows one to calculate the concentration profiles of the transferred component over the thickness of the structure as they change over time. This is the so-called "direct problem of the dynamics of the mass transfer process" [40]. The solution of the abovementioned problem is given in [32,41].
The solution can be written as Equation (10):

C(x, Fo_m) = Σ (m = 1…∞) [2·sin µ_m/(µ_m + sin µ_m·cos µ_m)]·cos(µ_m·x)·exp(−µ_m²·Fo_m), (10)

where µ_m are the roots of the characteristic Equation (11):

µ·tan µ = Bi_m. (11)

Some results of calculations by Equation (10) are shown in Figure 4.
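The series solution with roots of a characteristic equation can be evaluated numerically. The sketch below assumes the classical plate solution from the heat/mass-transfer analogy (coefficients 2·sin µ/(µ + sin µ·cos µ) and characteristic equation µ·tan µ = Bi_m); these forms are assumptions consistent with the boundary conditions described above, not code from the paper:

```python
import math

def char_roots(bi, n):
    """First n positive roots of mu*tan(mu) = bi (assumed characteristic
    equation); one root lies in each interval (m*pi, m*pi + pi/2)."""
    roots = []
    for m in range(n):
        f = lambda mu: mu * math.sin(mu) - bi * math.cos(mu)
        lo = m * math.pi + 1e-9
        hi = m * math.pi + math.pi / 2 - 1e-9
        for _ in range(80):                    # bisection
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

def c_bar(x_bar, fo, bi, n=50):
    """Dimensionless concentration from the assumed Fourier-series solution."""
    total = 0.0
    for mu in char_roots(bi, n):
        a = 2.0 * math.sin(mu) / (mu + math.sin(mu) * math.cos(mu))
        total += a * math.cos(mu * x_bar) * math.exp(-mu * mu * fo)
    return total
```

Evaluating `c_bar` at Fo_m = 0.01, 0.1, 0.2, 0.5, and 1 reproduces the kind of profile family the text describes for Figure 4.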
Results
Based on the studied causes and nature of corrosion of offshore hydraulic structures, and in order to increase the durability of the cement stone of concrete and reinforced concrete structures, experimental studies were carried out in the laboratories of the Civil Engineering Faculty of the Hanoi Mining and Geological University and the Institute of Construction Sciences and Technologies of the Ministry of Construction of Vietnam. In accordance with the requirements of ACI 211.4R-08 [42], corrosion resistant concretes were developed, the compositions of which are shown in Table 10.
Investigation of the Physical and Mechanical Properties and Performance Indicators of the Developed Corrosion Resistant Concrete
The experimental results of determining the physical, mechanical, and performance indicators of the developed concretes of the above compositions and, for comparison, the requirements for concrete in accordance with SP 41.13330.2012 [43] are presented in Table 11. The test results in Table 11 show that the increased density of the concretes of Mix 3 and 4, achieved by using fine mineral fillers in the form of silica fume and mechanically activated rice husk ash containing amorphous silica capable of binding free calcium hydroxide (CH) into the less soluble low-basic calcium hydrosilicate (CSH), contributes not only to an increase in strength but also to an increase in the water-resistance of the concrete and a decrease in water absorption. At the same time, the concrete of Mix 3 has the highest compressive strength of the developed concretes. This is due to the increased content of CSH formed as a result of the pozzolanic reaction owing to the high content of SiO 2 (89.9%) in the silica fume used as the mineral sealing additive, which is confirmed by the results of X-ray phase analysis.
These results are fully consistent with the results of the analysis of the microstructure of concrete of the developed compositions, obtained using the method of electron microscopy and showing a denser structure in CRC of Mix 3 and 4 compared to Mix 1 and 2 ( Figure 5).
Study of the Effect of Finely Dispersed Active Mineral Admixtures on the Composition of Hydration Products by X-ray Phase Analysis
In order to assess the pozzolanic properties of the used mineral admixtures (FA, SF, and RHA), the method of X-ray phase analysis was used to study their influence on the phase composition of the adhesive's hydration products during the hardening of concrete of Mix 1, 2, 3 and 4.
The obtained results of studying the influence of these active mineral admixtures on the change in the phase composition of new hydrated formations in the cement stone of concrete of the developed compositions at the age of 28 days of hardening are presented in Figure 6. The presented figures show that, in contrast to Mix 1 and 2, the intensity of the peaks of free Ca(OH) 2 (portlandite) decreases in concretes of Mix 3 and 4, and at the same time, the intensity of the peaks of calcium hydrosilicate increases. This can be explained by the occurrence of the pozzolanic reaction of SF, RHA, and FA with portlandite, the rate of which increases with the hardening of these concretes, as a result of which the absorption of calcium hydroxide and its transformation into hydrosilicate occurs more intensively than the formation of Ca(OH) 2 as a result of hydration of Portland cement clinker minerals. It was found that at the age of 28 days of normal hardening, the highest intensity of CSH peaks and the weakest CH peaks are observed in the concrete of Mix 3 containing silica fume, which can be explained by its high pozzolanic activity.
The results obtained allow us to conclude that mechanically activated ash and silica fume has a positive effect on the formation of low-basic calcium hydrosilicate in the structure of hardening concretes, which will increase their density, strength, and corrosion resistance under operating conditions.
Determination of the Coefficient of Mass Conductivity of Calcium Hydroxide by the Thickness of the Concrete Structure and the Forecast of the Duration of Operation of Concrete and Reinforced Concrete Structures
Determination of the coefficient of mass conductivity is not limited to purely technological problems (the coefficient enters the governing equations of the processes involved); it is also of great scientific importance, since it allows one to study the mechanism of the process and the influence of various factors on the rate of transfer of matter [44].
To determine the content of calcium hydroxide over the thickness of the concrete structure, an experimental scheme was developed, as shown in Figure 7. For the tests, concrete samples with dimensions of 100 × 100 × 100 mm were used. The surfaces of the samples, except for one, were protected with a waterproof paint coating, leaving one face unprotected to allow the diffusion of calcium hydroxide into a bath with an aqueous solution simulating the composition of seawater in the bottom layer of the South China Sea in the area of Halong port. The specified solution contained chlorides of calcium, magnesium, and potassium, as well as sodium sulfate with a total concentration of calcium ions equal to 0.7 g/L, chlorine ions 18.2 g/L, sodium ions 11.3 g/L, potassium ions 0.4 g/L, magnesium ions 1.4 g/L and sulfate anions 2.7 g/L.
The content of calcium hydroxide was determined by thermogravimetric analysis in the central zone of the samples every 25 mm of thickness at 14-day intervals during 70 days of testing. As a result, profiles of calcium hydroxide concentrations over the sample thickness in the aqueous medium were obtained (Figure 8).
Analyzing the concentration profiles of calcium hydroxide over the thickness of concrete samples of different composition, we determined the concentration gradients of calcium hydroxide at the interface and, using Equation (1) and the Matlab and Origin 2018 software, calculated the value of the mass conductivity coefficient k of calcium hydroxide. The calculation results are shown in Table 12.
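The paper recovers k from the measured profiles with Matlab and Origin; as a hedged sketch of the same idea, k can be estimated from two successive concentration profiles via a discrete form of Equation (1), dC/dτ = k·d²C/dx² (the function name, tolerance, and averaging strategy are illustrative assumptions):

```python
def estimate_k(profile_t1, profile_t2, dx, dt):
    """Rough estimate of the mass conductivity coefficient k from two measured
    concentration profiles taken dt apart, equating the finite-difference
    time derivative to k times the finite-difference curvature at interior
    points, then averaging (a simplified inverse use of Equation (1))."""
    ks = []
    for i in range(1, len(profile_t1) - 1):
        d2c = (profile_t1[i-1] - 2*profile_t1[i] + profile_t1[i+1]) / dx**2
        dcdt = (profile_t2[i] - profile_t1[i]) / dt
        if abs(d2c) > 1e-12:
            ks.append(dcdt / d2c)
    return sum(ks) / len(ks) if ks else float("nan")
```

On synthetic profiles generated with a known k, the estimate recovers that k to within the truncation error of the finite differences.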
The change of mass conductivity coefficient with time is shown in Figure 9. From the obtained results, it can be seen that the mass conductivity coefficient decreases sharply in the period from 14 to 42 days. From 42 days to 70 days, the mass conductivity coefficient continued to decrease but not significantly.
As an illustration, the following is a specific example of calculating the corrosion time of a concrete structure:
1. Thickness of the concrete structure of the hydraulic structure: δ = 0.3 m;
2. Coefficient of mass conductivity of calcium hydroxide in concrete k according to Table 12 at the moment of time τ = 56 days.
Calculations using the proposed method show that the critical value of the dimensionless concentration of the transferred component, free calcium hydroxide, over the thickness of the concrete C(x, Fo_m) is reached at the mass transfer Fourier number Fo_m,crit equal to one (curve 6 in Figure 4). In accordance with the accepted designations, the corrosion time is calculated according to Equation (12):

τ_crit = Fo_m,crit·δ²/k. (12)
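With the standard mass-transfer Fourier number Fo_m = k·τ/δ², the critical time follows as τ_crit = Fo_m,crit·δ²/k; the sketch below assumes this form, and the k value used is a placeholder, not a value from Table 12:

```python
def corrosion_time_years(delta_m, k_m2_s, fo_crit=1.0):
    """Time to reach the critical dimensionless concentration, assuming
    tau_crit = Fo_crit * delta^2 / k (standard Fourier-number definition).
    delta_m: wall thickness [m]; k_m2_s: mass conductivity coefficient [m^2/s]."""
    seconds = fo_crit * delta_m ** 2 / k_m2_s
    return seconds / (365.25 * 24 * 3600.0)

# delta = 0.3 m from the worked example; k = 1e-11 m^2/s is a placeholder
service_life = corrosion_time_years(0.3, 1e-11)
```

A real service-life estimate requires the measured k from Table 12; the computation shows only how strongly the predicted time scales with δ² and 1/k.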
The results of calculating the corrosion time of concrete structures from the four investigated concrete compositions are shown in Table 13.
Discussion
To solve the problem of protecting a reinforced concrete structure from the aggressive effects of the marine environment, it is necessary to use the obtained expression to solve the "inverse problem of non-stationary mass transfer" in order to find conditions under which the processes of mass transfer would be carried out with a minimum leaching rate. It is possible to control this process by influencing the structure of the concrete in the structure. Obviously, the time parameter is the Fourier mass transfer criterion. It is also obvious that the time τ is included in the exponential function, which is a factor in each term of the Fourier series. Therefore, the solution to the "inverse mass transfer problem" is possible only with the use of the iteration method [41].
From the results of the calculations given in Tables 12 and 13, it can be seen that the mass conductivity coefficient of calcium hydroxide in the concrete of Mix 3 is lower than that of the control concrete of Mix 1. Consequently, a concrete structure made from Mix 3 will have a longer service life. This result is explained by the fact that the concrete of Mix 3 has a dense structure due to the simultaneous use of mechanically activated FA in the composition of multicomponent adhesives and SF as an active, finely dispersed mineral sealing additive. The research results show that the developed experimental model can be used to solve the inverse problem of unsteady mass conductivity in order to determine the mass conductivity coefficient of calcium hydroxide in a concrete structure. This model can serve as a basis for predicting the service life of the concrete and reinforced concrete structures of hydraulic structures in the marine aquatic environment.
Conclusions
(1) Compositions of corrosion resistant concretes have been developed for the construction of the underwater parts of offshore hydraulic structures of the coastal zone, the structure of which is modified with active mineral admixtures (including silica fume, mechanically activated rice husk ash, and fly ash) and a water-reducing polycarboxylate superplasticizer.
(2) A mathematical model has been developed in the form of nonlinear partial differential equations characterizing the unsteady mass conductivity-diffusion of free calcium hydroxide in the solid of a concrete (reinforced concrete) structure, observed in the "concrete (reinforced concrete) structure-marine environment" system under nonuniform arbitrary initial conditions and combined boundary conditions of the 2nd and 3rd kind, taking into account the nonlinear nature of the mass conductivity coefficient k and the mass transfer coefficient β.
(3) It is shown that the solution of the boundary value problem of unsteady mass conductivity allows conclusions about the durability of the concrete (reinforced concrete) structure, which is determined by the processes occurring at the interface: in the concrete, mass conductivity, which depends on the structural and mechanical characteristics of the hydraulic structure; and in the liquid phase, mass transfer, determined by the conditions of interaction at the interface between these phases. At the same time, analysis of the liquid phase makes it possible to assess the duration of serviceability of these structures; as a result, it becomes possible to design optimal compositions of corrosion-resistant concretes intended for the construction of durable offshore structures with high resistance to corrosion in seawater.
Framework for State-Aware Virtual Hardware Fuzzing
1. Introduction

1.1. Background. Virtualization technology is widely used in cloud computing, software testing, daily office work, and many other scenarios. It provides users with convenient privilege-isolation protection and is an effective way to reduce the cost of configuring multiple physical computing instances [1]. To enable a virtualization guest machine to access essential hardware (network cards, graphics cards, sound cards, etc.), the virtualization platform either grants access to the physical hardware (which requires real hardware connected to the host machine) or provides fully virtualized hardware [2]. Software-implemented virtual hardware runs at the same privilege level as the hypervisor, which makes it convenient for guest users to access. However, attackers can execute an exploit program in the guest machine to trigger vulnerabilities hidden in the virtual hardware and gain the same privilege as the hypervisor, which may lead to virtual machine escape [3,4]. Attackers also deliver exploit programs to victims through channels such as spam. Guo et al. use a collaborative neural network for robust spammer detection [5]. Blocking the spread of exploit programs can effectively reduce the possibility of attack, but software vulnerabilities with independent propagation capabilities still pose a major threat to network security.
To counter this threat, fuzzing has become the most popular method for software vulnerability detection, mainly due to its high efficiency [6]. Using this technology, researchers have discovered many software vulnerabilities. Fuzzing was first proposed in the 1990s; early research focused on black-box fuzzing, while the current main research directions are greybox and white-box fuzzing. Studies have intensified since AFL (American Fuzzy Lop) was first released in 2013 [7]. At present, most work on fuzzing focuses on performance improvement and on adapting it to new classes of targets.
In terms of performance improvement, the main research topics are testcase mutation methods, testcase-set minimization, instrumentation methods, and testcase evaluation methods. Representative studies include Driller (augmenting fuzzing through selective symbolic execution) [8], AFLGo (directed greybox fuzzing) [9], and AFLFast (coverage-based greybox fuzzing as a Markov chain) [10]. Driller is a greybox fuzzing framework that combines symbolic execution: it uses symbolic execution to handle conditional branches that the mutated testcases generated by fuzzing find difficult to enter. AFLGo uses LLVM (Low Level Virtual Machine) to generate the call graph and control flow graph of the test target and proposes a fuzzing strategy based on the simulated annealing algorithm, allowing it to direct testing at code related to a specified code block. AFLFast proposes a fuzzing strategy based on a Markov chain model, which intelligently controls the number of mutations of testcases in the corpus, thereby giving more opportunities to low-frequency paths. Aschermann et al. use the processing state of the program to guide testcase mutation [11].
In terms of adaptability, the main research topics are kernel fuzzing, browser fuzzing, virtualization platform fuzzing, etc. Representative studies include kAFL (hardware-assisted feedback fuzzing for OS kernels) [12], syzkaller (an unsupervised coverage-guided kernel fuzzer) [13], and fuzzilli (a coverage-guided fuzzer for dynamic language interpreters based on a custom intermediate language) [14]. kAFL utilizes KVM (Kernel-based Virtual Machine) and Intel PT (Intel Processor Trace) to fuzz the kernel, using the image-mounting function as the interaction point, and it also develops a decoder that is more efficient than the official PT decoder released by Intel. Syzkaller uses a declarative language to describe kernel system calls and uses system calls as the interaction point to fuzz the kernel. Fuzzilli is a fuzzer for the JavaScript interpreter engines of browsers: it generates testcases from grammar templates, mutates them in an intermediate representation language, and uses code branch feedback to improve coverage.
1.2. Related Works. The literature contains some earlier studies on fuzzing for virtual hardware vulnerability discovery. For instance, some researchers proposed ways to adapt traditional fuzzing frameworks to virtual hardware. Tang et al. presented a framework for adapting AFL to devices in a virtualization platform at the Black Hat conference in 2016 [15]. This framework is an early concept of virtual hardware fuzzing and provides a rough plan for it; however, the state information of the virtual hardware is neglected, resulting in inaccurate testcase evaluation [15].
The state condition of virtual hardware is the change of processing logic determined by its control registers. To address this, VDF (Virtual Device Fuzzing) [16] replays an initialization testcase sequence to eliminate the state conditions of virtual hardware. VDF implements a fuzzing framework for virtual devices that uses recording and playback mechanisms: before each testcase is input, it resets the virtual hardware to its initial state through playback. VDF does not use a guest to interact with the virtual hardware; instead, it uses the memory access interfaces provided by Qemu and implements input/output (I/O) with the virtual hardware based on the qtest accelerator. Limited by this memory access method, VDF can only fuzz the MMIO BARs (Memory-Mapped I/O Base Address Registers) of the virtual hardware.
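The record-and-playback reset idea can be sketched as follows. The device model and names here are hypothetical stand-ins for illustration, not VDF's code: the recorded initialization sequence is replayed against a fresh device before each testcase, so every input is evaluated from the same initial state without restarting the whole virtualization platform.

```python
class ToyDevice:
    """Hypothetical virtual device: a write to offset 0 sets its mode register."""
    def __init__(self):
        self.mode = 0

    def io_write(self, address, size, value):
        # Only the control register at offset 0 affects device state here.
        if address == 0x0:
            self.mode = value

def reset_by_replay(init_sequence):
    """Rebuild the recorded initial state on a fresh device by replaying
    the initialization I/O sequence, instead of restarting the platform."""
    dev = ToyDevice()
    for (address, size, value) in init_sequence:
        dev.io_write(address, size, value)
    return dev

init_sequence = [(0x0, 4, 7)]         # recorded during device initialization
dev = reset_by_replay(init_sequence)  # replayed before each new testcase
```

The trade-off is that replay cost grows with the length of the recorded sequence, which is one motivation for the state-aware strategy proposed later in the paper.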
To address the interference of virtual hardware state conditions, we propose SAVHF (a Framework for State-Aware Virtual Hardware Fuzzing). In this work, we provide an instrumentation method to obtain the state information of virtual hardware, and we use a fuzzing strategy to reduce state condition interference and improve the code coverage of the target virtual hardware. We try to answer the following key questions.
(1) How to obtain the state condition of virtual hardware?
(2) How to reduce the state condition interference of virtual hardware?
(3) How to use the state condition information to improve the adaptability of the fuzzing to the virtual hardware?
1.3. Contributions. SAVHF uses a syntax-based instrumentation method to monitor the state transitions of virtual hardware and includes a state-based fuzzing strategy that guides fuzzing to traverse the states of the virtual hardware and performs efficient testcase mutation and input in each state. Since SAVHF does not depend on features provided by a specific virtualization platform, it can be applied to a variety of open-source virtualization software. The contributions of this paper can be summarized as follows:

(1) SAVHF. We propose SAVHF, a virtual hardware fuzzing framework that can perceive the state transitions of virtual hardware and effectively detect vulnerabilities in it.

(2) Source-to-source instrumentation based on the abstract syntax tree. We propose a syntax-based instrumentation method to effectively monitor the state transitions of virtual hardware. By analyzing the abstract syntax tree of the target virtual hardware, we insert instrumentation at the code that references the key structure recording the virtual hardware control registers, so as to detect state transitions during the fuzzing process.

(3) State-based fuzzing strategy. We propose a strategy that traverses the states of the virtual hardware and generates testcases as an operation queue for it. It is a state-based optimization strategy driven by the feedback provided by instrumentation, and it reduces the interference of virtual hardware state conditions on the evaluation of fuzzing testcases [17].

2.1.1. Address Space. The method of accessing the MMIO address space is the same as accessing the normal memory address space, which makes it convenient for the guest system to access the I/O address space of hardware mapped via MMIO. In contrast, to access the PMIO address space, the guest system must use specific instructions provided by the CPU instruction set.
For systems running on x86/amd64, the OUT and IN instructions can be used to access the PMIO address space. The Linux operating system provides a pseudo-filesystem called sysfs to export kernel objects, offering an interface to kernel data structures. The directory /sys/devices exposes the hierarchy of the Linux kernel device tree, so that users can obtain device model information and device address spaces by accessing the filesystem nodes in this directory [18]. For example, after a user maps the file /sys/devices/pci0000:00/0000:00:0f.0/resource1 into the virtual address space of a process using the mmap system call, read and write operations on that virtual address are translated by the Linux kernel into I/O operations on BAR 1 of the graphics card device mounted on PCI bus 0000:00, which is a memory-mapped device address space. Thus, with the help of Linux sysfs, users can access a device address space using a unified I/O representation: a tuple consisting of address and size for a read operation, or a triple consisting of address, size, and value for a write operation.
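The unified I/O representation described above can be sketched as a small data type. The class and field names are illustrative, not from the paper:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class IOOp:
    """Unified I/O representation: an (address, size) tuple for reads,
    or an (address, size, value) triple for writes."""
    address: int
    size: int                     # access width in bytes: 1, 2, 4, or 8
    value: Optional[int] = None   # None marks a read operation

    @property
    def is_write(self) -> bool:
        return self.value is not None

# A 4-byte read at BAR offset 0x40, and a write setting bit 16 of that register.
read_tcr = IOOp(address=0x40, size=4)
write_tcr = IOOp(address=0x40, size=4, value=1 << 16)
```

Encoding every interaction with the device as such tuples is what later allows testcases to be recorded, replayed, and mutated uniformly.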
2.1.2. State Transition. The implementation of virtual hardware must follow the specification of the simulated hardware to provide the same functions as the real hardware. Virtual hardware has state information at runtime, and when it is in different states, its logic for handling commands or external events may differ [19]. Generally, the operational logic of the virtual hardware is controlled by its configuration registers, which can be set by users. Configuration registers and other registers are accessed through address space mapping, which can be described by a unified representation.
For example, users can disable the CRC checksum appended after each packet by setting bit 16 (the CRC-disable bit) of the rtl8139 transmit configuration register to 1. With the help of the Linux pseudo-filesystem sysfs, users only need to map the file resource0 in the filesystem directory corresponding to the rtl8139 into the process's virtual memory and set bit 16 of the 4-byte word at offset 0x40 (the 64th byte) of the mapped memory to 1 [20].
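The read-modify-write of that register can be sketched against an in-memory stand-in for the mapped BAR. In practice the bytearray below would be an mmap of the sysfs resource file; the offset and bit number follow the example in the text, but the code is only illustrative:

```python
import struct

# Stand-in for the memory-mapped BAR of the device; in real use this
# would be the result of mmap-ing the sysfs resource file.
bar = bytearray(256)

TCR_OFFSET = 0x40        # transmit configuration register offset (64th byte)
CRC_DISABLE_BIT = 16     # bit that disables the appended CRC checksum

def set_bit(region: bytearray, offset: int, bit: int) -> None:
    """Read-modify-write a 32-bit little-endian register in the region."""
    (reg,) = struct.unpack_from("<I", region, offset)
    struct.pack_into("<I", region, offset, reg | (1 << bit))

set_bit(bar, TCR_OFFSET, CRC_DISABLE_BIT)
```

Note that bit 16 only fits in a 32-bit access, which is why the register is treated as a 4-byte word.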
As the configuration registers change, the operational logic of the hardware changes, which means the hardware has moved from its prior state to a new one. The state of the hardware can also be changed by external events, which may not be user-controllable. Normally, the hardware returns to a waiting state after completing its current task, ready to handle the next user instruction or external event.
2.1.3. Vulnerability. Virtual hardware is one of the relatively independent and direct ways for users to interact with the software implementation of specific modules in the virtualization platform. Combined with the wide variety of virtual hardware, errors in their implementations make virtual hardware a vulnerable target for attackers. Hardware with high usage rates, such as display adapters, network adapters, and disk drives, is the main target. As shown in Table 1, statistics on high-risk Qemu virtual hardware vulnerabilities exposed in the CVE vulnerability database before 2019 [21] show that display adapters, network adapters, disk drives, and Virtio devices account for more than half of virtual hardware vulnerabilities. Virtual hardware vulnerabilities generally have low exploitation complexity and great danger, and most of them can lead to code execution attacks.
2.2. Greybox Fuzzing Framework
2.2.1. Instrumentation and Feedback. Instrumentation and feedback are necessary conditions for greybox fuzzing and are a specific feature distinguishing it from black-box fuzzing. Instrumentation is used to obtain feedback information from the target during testing. For targets with and without source code, compile-time and runtime instrumentation are used, respectively, to gain feedback information. For test targets with source code, instrumentation can be adapted to the target through custom strategies, including instrumentation based on basic blocks [7], on function call operations, and on system call operations [22]. For test targets without source code, binary translation and just-in-time compilation are typically used to place instrumentation code between the basic blocks of the test target (Qemu, DynamoRIO, Intel Pin, etc.). With the promotion of Intel PT technology [23], feedback acquisition based on Intel PT is also applied to greybox fuzzing.
Based on different instrumentation strategies, the obtained feedback is also different, but the feedback must provide a matching basis for the optimization function of the greybox fuzzing.
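As a concrete illustration of block-level instrumentation feedback, the AFL-style edge-counting scheme can be sketched as follows. This is a simplified model, not AFL's actual implementation; the names and map size are only conventional:

```python
MAP_SIZE = 1 << 16  # size of the shared coverage bitmap

coverage = bytearray(MAP_SIZE)
prev_loc = 0

def hit_block(cur_loc: int) -> None:
    """Instrumentation stub executed at each basic block: the AFL-style
    scheme hashes the (previous, current) block pair into one bitmap slot,
    so the feedback distinguishes edges, not just visited blocks."""
    global prev_loc
    idx = (cur_loc ^ prev_loc) % MAP_SIZE
    coverage[idx] = min(coverage[idx] + 1, 255)   # saturating hit counter
    prev_loc = cur_loc >> 1    # shift so edge A->B differs from B->A

# Simulate executing three basic blocks along one path.
for block in (0x1234, 0x5678, 0x1234):
    hit_block(block)

new_edges = sum(1 for b in coverage if b)
```

A fuzzer then compares this bitmap against the union of all previous runs: any newly set slot means a new edge, which is the "matching basis" the optimization function needs.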
2.2.2. Testcase Input and Target Reset. Testcases for fuzzing are usually applied to the target software in the form of standard I/O or files, which is also the way most software processes input. To accurately determine the effect of each testcase on the target software, each testcase must have an independent effect, which is achieved by resetting the target software each time a testcase is entered. Resetting the target software by restarting it costs considerable computing and time resources; to remedy this, AFL proposed the fork server mechanism, which forks the target after it has been loaded into memory and initialized, avoiding repeated startup costs.
In greybox fuzzing with virtual hardware as the target, the state of the virtual hardware changes with the input of testcases. If the state information of the virtual hardware is not monitored, it is difficult to evaluate the effectiveness of testcases and to effectively guide the optimization of fuzzing. Without resetting the target virtual hardware module, and if the testcases that change the state of the virtual hardware are ignored, all testcase sequences must be recorded to ensure the validity of the final testcase in the event of a crash; this wastes storage resources, makes the assessment of each testcase's role inaccurate, and has an unknowable influence on the fuzzing strategy. However, the fork server mechanism cannot adapt to multiprocess or multithreaded test targets, and the most straightforward way to reset virtual hardware, restarting the virtualization platform, consumes a lot of time.
3. State-Aware Fuzzing Framework
The key problem of virtual hardware fuzzing is the interference of state transitions with the evaluation of fuzzing testcases. The state transitions of virtual hardware are complicated: the behavior of the virtual hardware under the current testcase is affected both by its previous state and by the testcase itself, and the current testcase may itself cause a state transition, thereby producing a new state.
To briefly illustrate the state condition of fuzzing virtual hardware, we give an example, shown in Figure 1. When testcase[x] is input to the virtual hardware, its initial state is state[n]. The virtual hardware processes testcase[x] and generates corresponding feedback; however, the testcase fails to trigger a state transition, so when testcase[x+1] is input, the virtual hardware is still in state[n]. Testcase[x+1] triggers a state transition, so when testcase[x+2] is input, its initial state is state[n+1]. The impact of testcases on the virtual hardware is fed back to the fuzzer. The feedback for a testcase in virtual hardware fuzzing thus depends on its initial state, which interferes with testcase evaluation and PoC generation. At the same time, utilizing the state transition information of virtual hardware can improve the efficiency of virtual hardware fuzzing and further explore its code branches.
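This interference can be reproduced with a toy stateful device (hypothetical, for illustration only): the same testcase produces different feedback depending on the state left behind by earlier inputs, so feedback alone misattributes behavior to the testcase.

```python
class StatefulDevice:
    """Toy stand-in for a virtual device whose processing logic is
    selected by a control register (its 'state')."""
    def __init__(self):
        self.state = 0

    def process(self, testcase):
        address, value = testcase
        if address == 0x10:      # control register: triggers a state transition
            self.state += 1
        # The feedback (here, a branch id) depends on both input and state.
        return ("branch_A", value) if self.state == 0 else ("branch_B", value)

dev = StatefulDevice()
fb1 = dev.process((0x00, 42))    # state[n]: no transition triggered
dev.process((0x10, 0))           # this testcase moves state[n] -> state[n+1]
fb2 = dev.process((0x00, 42))    # identical testcase, different feedback
```

Here fb1 and fb2 differ even though the testcase is byte-for-byte identical, which is exactly why a state-unaware fuzzer cannot evaluate testcases reliably.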
The state of the virtual hardware is determined by its control registers, so we can effectively detect state transitions by monitoring modifications of these registers. To find the state transitions of virtual hardware accurately by detecting modification of the control registers, we propose syntax-based source-to-source instrumentation in Subsection 3.1.
In order to utilize the state transition information of virtual hardware to discover more code branches, we propose a state-based fuzzing strategy in Subsection 3.2. It is the basic optimization strategy of our fuzzing framework and guides the fuzzing process, allowing us to explore more states and more code branches.
3.1. Syntax-Based Instrumentation. In this module, we use a compiler to generate the abstract syntax tree of the source code of the target virtual hardware and insert instrumentation at the root node of each abstract syntax subtree that causes a change to the key structure of the target virtual hardware. As shown in Figure 2, the routine of this module includes the following steps.
(1) Abstract syntax tree generation. We use a compiler to generate the abstract syntax tree of the source code of the target virtual hardware module. The abstract syntax tree provides convenient interfaces for key structure determination, reference code determination, and instrumentation code insertion. Its nodes include declarations, statements, expressions, and other types: a declaration is the root node of a variable (or function) declaration, a statement is the root node of a basic statement of the program, and an expression is the root node of a basic expression.

(2) Key structure determination. The state of software-implemented virtual hardware is determined by its key structure, so we monitor the state of the virtual hardware by monitoring changes to this structure. According to the specifics of the virtualization platform in which the target virtual hardware is implemented, we determine the key structure to be monitored manually. Taking vga-pci as an example, its key structure is PCIVGAState; the state-related registers of vga-pci, such as the mmio region and mrs, are stored in this structure.

(3) Determination of instrumentation locations to monitor state transitions. We search the abstract syntax tree for subtrees that may modify the key structure and take their root nodes as instrumentation locations. We search for code that modifies the key structure through matching rules: a rule matches a reference to the key structure (or its member variables) appearing as the modified operand of an assignment operation. After locating a node that causes the key structure to be modified, we use the nearest statement-type node among its ancestors as the instrumentation root node.

(4) Determination of instrumentation locations at code branches. We take the root nodes of logical branches in the abstract syntax tree as instrumentation locations. Code branch jumps are formed by statement nodes such as conditional judgments or loops. For conditional-judgment statement nodes, we use the statement root node of each conditional branch as the instrumentation root node; for loop statement nodes, we use the first statement node of the loop body.

(5) Source-to-source instrumentation. According to the type of each instrumentation location, instrumentation code is inserted into the source code of the target virtual hardware module.
For code-branch instrumentation locations, branch-jump monitoring code is inserted; for key-structure monitoring locations, the corresponding structure monitoring code is inserted. The branch-jump monitoring code feeds branch jump information back to the fuzzer while the target virtual hardware is running, and the structure monitoring code reports state change information to the fuzzer whenever the key structure is modified, which means the state of the virtual hardware has changed.

(6) Syntax correction and compilation. We modify the newly generated code to ensure that its syntax is correct and compile it to generate the virtual hardware module with instrumentation inserted. This module is linked into the corresponding executable of the virtualization platform during compilation, so that it is used when the virtualization platform runs.

Remark 1. Through the above instrumentation method, modifications of the control registers of the virtual hardware are detected during the fuzzing process, and the code branches covered while processing testcases are accurately recorded. Code branch feedback and state transition feedback together provide the basis for a fuzzing strategy tailored to the characteristics of virtual hardware.
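SAVHF performs this matching on C source via clang's AST interfaces; as an analogy only, the same idea of step (3) can be sketched with Python's built-in ast module, locating statements that assign to a member of a key structure variable (all names here are hypothetical):

```python
import ast

SOURCE = """
def handle_write(dev, value):
    dev.tcr = value        # modifies the key structure -> instrument here
    tmp = value + 1        # unrelated assignment -> ignore
"""

def find_key_struct_writes(source: str, struct_name: str):
    """Return line numbers of statements that assign to a member of the
    key structure, i.e. candidate instrumentation locations."""
    locations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                # Match `struct_name.<member> = ...` assignments only.
                if (isinstance(target, ast.Attribute)
                        and isinstance(target.value, ast.Name)
                        and target.value.id == struct_name):
                    locations.append(node.lineno)
    return locations

locations = find_key_struct_writes(SOURCE, "dev")
```

A source-to-source pass would then insert a monitoring call immediately after each reported statement, mirroring steps (3) and (5) above.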
3.2. State-Based Fuzzing Strategy. A change of the key structure does not necessarily mean that the state of the virtual hardware has changed, but a change of state essentially depends on the key structure. By monitoring modifications of the key structure, we obtain the effect of testcases on the state of the virtual hardware. We define testcases that change the state of the target virtual hardware as high-value testcases. The current state of the virtual hardware is the state produced by the high-value testcase sequence applied after the virtual hardware is reset; replaying a prefix of this sequence rolls the virtual hardware back to an earlier state. Starting from a given state, we can fully explore the processing logic the virtual hardware performs in that state by entering mutated testcases. As shown in Algorithm 1, our fuzzing strategy includes the following steps.
(1) Reset the virtual hardware by restarting the guest instance and then use randomly mutated testcases to find testcases that may independently cause changes in the key structure of the virtual hardware. Whenever a new high-value testcase is found, the virtual hardware is reset, and provided the testcase does not duplicate one already in the corpus, it is added to the high-value testcase corpus. If the number of testcases in the high-value testcase corpus exceeds the threshold T_high, go to step 2.

(2) Conduct one round of fuzzing with R_init as the length of the rollback initialization testcase sequence for this round, and traverse the permutations of R_init testcases from the current high-value testcase corpus to generate candidate rollback testcase sequences. After rolling back with the testcase sequence Q_now as the current initialization sequence, we enter mutated testcases to fully explore the processing logic of the target virtual hardware in this state. If the number of input mutated testcases exceeds the threshold N_explore, we discard this rollback sequence and try the next one.

(3) During exploration, if a testcase triggers a modification of the key structure, we append it to S_current and, provided it does not duplicate an entry in the high-value testcase corpus, add it to that corpus. If a testcase finds a new code branch, or reaches some code branches with less consumption, we add it to the normal testcase corpus. If a testcase causes the target virtualization platform to crash, we append it to the current testcase sequence S_current and generate the PoC from S_current.

(4) After one round of fuzzing, the results are tallied and the high-value testcase corpus is optimized.
Increase the length of the initialization testcase sequence by 1, jump to step 2, and start the next round of fuzzing.

Testcases for virtual hardware can be described by triples (address, size, value) for write operations and tuples (address, size) for read operations. The main mutation method is to randomize the address, size, and value. For mutation based on a high-value testcase, we perform random addition and subtraction on the address, size, or value of the basis testcase. For randomization mutation, we draw the address from the input address range, the size from {1, 2, 4, 8}, and the value from the range corresponding to the size. Figure 3 shows the fuzzing strategy when three state transitions have been found, each corresponding to a high-value testcase. When performing rollback operations, we replay these testcases to achieve random recombination of the virtual hardware state. This strategy traverses the states of the virtual hardware as thoroughly as possible and discovers its behavior in various states, thereby improving the code coverage of fuzzing.
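The two mutation modes described above can be sketched as follows; the delta range and probabilities are illustrative choices, not the paper's parameters:

```python
import random

SIZES = (1, 2, 4, 8)   # legal access widths in bytes

def mutate_random(rng, addr_range):
    """Pure randomization: address from the device's address range,
    size from the legal access widths, value from that size's range."""
    size = rng.choice(SIZES)
    address = rng.randrange(addr_range)
    value = rng.randrange(1 << (8 * size))
    return (address, size, value)

def mutate_from_base(rng, base, delta=16):
    """Mutation based on a high-value testcase: random add/subtract on
    the address or value of the basis testcase (size kept here)."""
    address, size, value = base
    if rng.random() < 0.5:
        address = max(0, address + rng.randint(-delta, delta))
    else:
        value = max(0, value + rng.randint(-delta, delta))
    return (address, size, value)

rng = random.Random(0)
t = mutate_random(rng, addr_range=0x100)   # fresh random I/O triple
t2 = mutate_from_base(rng, t)              # small perturbation of it
```

Keeping base-derived mutations close to a known high-value testcase biases the search toward inputs likely to interact with the same control register.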
Remark 2. The above fuzzing strategy explores the various states of the virtual hardware by traversing sequences of state-changing testcases and using them as initialization operations for the test target. The strategy reduces the interference of state information with the evaluation of testcases by periodically rolling back the fuzzing target. Meanwhile, it feeds both the testcases that discover new branches and the testcases that discover new states into the testcase mutation function, so that state information and branch information jointly guide the fuzzing.

Algorithm 1:
1: S_high = ∅;
2: S_normal = ∅;
3: R_init = 0;
4: while Count(S_high) < T_high do
5:   t_now = TestcaseRandomGen();
6:   feedback = FuzzOne(t_now);
7:   if AffectKeyStruct(feedback) then
8:     ResetHardware();
9:     if NotDuplicated(S_high, t_now) then
10:      AddTo
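The discovery phase of the strategy (step 1, corresponding to the opening lines of Algorithm 1) can be exercised end-to-end against a toy device. The device model, thresholds, and helper names below are illustrative stand-ins, not SAVHF's code:

```python
import random

class ToyVirtualHardware:
    """Hypothetical target: writes to even addresses modify the key structure."""
    def __init__(self):
        self.key_struct = {}

    def reset(self):
        self.key_struct.clear()

    def fuzz_one(self, testcase):
        address, size, value = testcase
        changed = False
        if address % 2 == 0:                 # pretend: control-register write
            changed = self.key_struct.get(address) != value
            self.key_struct[address] = value
        return {"key_struct_changed": changed}

def discover_high_value(dev, rng, t_high=5, addr_range=0x20):
    """Phase 1 of the state-based strategy: collect testcases that
    independently change the key structure, resetting after each hit
    so each effect is evaluated from the same initial state."""
    s_high = []
    while len(s_high) < t_high:
        t = (rng.randrange(addr_range), 4, rng.randrange(256))
        feedback = dev.fuzz_one(t)
        if feedback["key_struct_changed"]:
            dev.reset()                      # keep each effect independent
            if t not in s_high:
                s_high.append(t)
    return s_high

corpus = discover_high_value(ToyVirtualHardware(), random.Random(1))
```

Every collected testcase is a state-changing write, which is exactly the corpus the rollback phase later permutes into initialization sequences.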
Implementation
Based on the model we introduced in Section 3, we implement SAVHF. The key components are elaborated as follows.
(1) Fuzzing framework. As shown in Figure 4, our fuzzing framework is composed of three modules: the instrumentation inserter, the fuzzing back-end, and the fuzzing front-end. The instrumentation inserter performs instrumentation at the key-structure modification points and branch jump points of the target virtual hardware source code based on abstract syntax tree analysis; the fuzzing back-end is responsible for obtaining and analyzing the feedback information of fuzzing, generating mutated testcases, and guiding the fuzzing process according to the optimization strategy; the fuzzing front-end is responsible for applying the testcases transmitted by the fuzzing back-end to the target virtual hardware.

(2) Instrumentation inserter. We implemented the instrumentation inserter based on the abstract syntax tree analysis interfaces provided by the clang compiler. The key structure is determined by a simple analysis of the source code of the target virtual hardware; we found that virtual hardware in the same virtualization platform follows the same implementation rules, so it is relatively easy to find the key structure. We then feed the key structure information into the source-to-source instrumentation component, which is based on the AST matcher interfaces provided by clang and uses defined rules to find the root nodes of the abstract syntax subtrees that may cause the key structure to change. By traversing the abstract syntax tree of the target virtual hardware source code, we also find the code branches. After analyzing the types of the abstract syntax tree nodes that need instrumentation, the instrumentation code is inserted into the source code. Finally, after correcting possible syntax errors in the instrumented code, the target virtual hardware is compiled.

(3) Fuzzing back-end.
The fuzzing back-end interacts with the instrumentation code in the target virtual hardware through shared memory and pipes. Shared memory is used to transfer the description information of the key structure and code branch jump information, and pipes are used to send and receive commands between the fuzzing back-end and the instrumentation code. Testcases are sent to the fuzzing front-end in the form of uniform I/O tuples, and the mutation methods include randomization, splicing, insertion, and segmentation. To reduce the memory consumption of fuzzing, we save only the changed parts of the key structure corresponding to high-value testcases, not the entire structure.

(4) Fuzzing front-end. To ensure that the fuzzing environment interacts with the virtual hardware under the same conditions as a real environment, the fuzzing front-end is placed in a guest instance of the virtualization platform hosting the target virtual hardware. According to the device vendor number and device number specified by the fuzzing back-end, the front-end finds the corresponding file node in sysfs and interacts with the virtual hardware through it. The front-end communicates with the back-end through a network adapter in the virtualization platform that is unrelated to the target virtual hardware, eliminating the uncertainties that front-end/back-end interaction would otherwise introduce into fuzzing. Because the abstract-syntax-tree-based method inserts instrumentation only into code closely related to the target virtual hardware, the operation of this network adapter during back-end/front-end interaction does not trigger the inserted instrumentation.
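The back-end/front-end exchange of uniform I/O tuples can be sketched with a single-process pipe. This only illustrates the encoding; in SAVHF the two ends live in different machines and also use shared memory, and all names here are hypothetical:

```python
import json
import os

# In SAVHF the back-end and front-end are separate processes/machines;
# a single-process OS pipe is enough to illustrate the uniform encoding.
r, w = os.pipe()

def backend_send(fd, op):
    """Serialize one I/O tuple (address, size[, value]) as a line of JSON."""
    os.write(fd, (json.dumps(op) + "\n").encode())

def frontend_recv(fd):
    """Read one newline-terminated JSON message and decode the tuple."""
    data = b""
    while not data.endswith(b"\n"):
        data += os.read(fd, 1)
    return tuple(json.loads(data))

backend_send(w, (0x40, 4, 1 << 16))   # a write testcase for the front-end
op = frontend_recv(r)                 # front-end applies this to the device
```

Line-delimited messages keep the protocol trivially framed, which matters when commands and testcases share the same pipe.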
Evaluation
We selected a set of representative virtual hardware of the open-source virtualization software Qemu as the experimental dataset; it contained network devices, display adapters, sound cards, bus controllers, and a virtual disk drive under version 4.0 of Qemu. The virtual network devices we selected included ne2000, rtl8139, eepro100 (i82550), vmxnet3, and rocker; the selected virtual display adapters included cirrus-vga, vmware-svga, vga-pci, ati-vga, and bochs-display; the selected virtual sound cards included ac97, es1370, and intel-hda; the selected bus controllers included piix4-usb-uhci, pvscsi, and sdhci-pci; the selected virtual disk drive was nvme. The dataset included widely used virtual hardware; in total, it contained 17 devices.
We ran our experiments on a workstation equipped with a modern Intel processor and 64 gigabytes of RAM. For each virtual hardware targeted by fuzzing, we allocated 8 gigabytes of RAM and 2 processor cores and continued fuzzing for more than 18 hours. During the experiment, we recorded the code branch coverage of the target virtual hardware, the number of triggered state transitions, and the number of crashes found during fuzzing.
Code Coverage and Crash Discovery

In our tests, we collected four metrics to objectively reflect the performance of SAVHF: the number of target virtual hardware code branches discovered so far, code coverage (the percentage of basic code blocks found), the number of crashes found, and the number of unique crashes found.
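Counting "unique" crashes implies some deduplication scheme. The text does not specify SAVHF's; a common heuristic, shown here purely as an illustration, keys each crash on its faulting location plus the tail of the basic-block trace:

```python
import hashlib

def crash_key(crash_site, trace, n=3):
    # Deduplication key: faulting location plus the last n basic blocks
    # executed before the crash (a common heuristic; the scheme actually
    # used by SAVHF is not specified in the text).
    material = crash_site + "|" + "|".join(trace[-n:])
    return hashlib.sha1(material.encode()).hexdigest()

def count_unique(crashes):
    # crashes: iterable of (crash_site, basic_block_trace) pairs
    return len({crash_key(site, trace) for site, trace in crashes})
```

Under this key, two crashes at the same location reached through the same final basic blocks count as one unique crash even if their full paths differ.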
We counted and averaged the code coverage across all tests, as shown in Figure 5. The first hour of fuzzing is very efficient, and the efficiency gradually decreases with time. The code coverage eventually stabilizes around a value and grows slowly; averaged over all tests, this value is about 61%. Figure 6 shows the trend of unique-crash detection over time during fuzzing. The total number of unique crashes increased gradually, with step-like growth after some new crashes were discovered. On the one hand, this indicates that defective code may be triggered in many ways; on the other hand, it indicates that the discovery of unique crashes builds on improving coverage.
After the fuzzing test for each virtual hardware, we recorded the final data for these metrics, shown in Table 2. Although each fuzzing experiment generally took only about 18 hours, the average code coverage reached more than 61%, with the highest coverage reaching 89.41% (es1370). Of the 17 virtual hardware devices in the test dataset, we found unique crashes (or hangs) in 4. Considering that the tested Qemu virtualization platform was a relatively new stable version, this shows that SAVHF performs well at detecting virtual hardware bugs.
As presented in Table 3, all the unique crashes we obtained were manually analyzed one by one and divided into the following 3 categories according to the effect of the crash. A number of unique crashes were caused by the failed assertion g_assert_not_reached and triggered a SIGABRT error. 7 unique crashes of ati-vga were assertion errors and finally crashed at the assertion assert(bpp != 0). The only unique crash produced by fuzzing with pvscsi as the target was an assertion error, caused by the failed assertion assert(s->rings_info_valid).
(3) Hang. Crashes in the hang category generally refer to crashes that cause the program to fall into long-term processing (or an endless loop) due to logic errors, preventing users from interacting with it. 2 unique crashes of ati-vga were of this category and were caused by an unchecked recursive call in the ati_mm_read function, which ultimately caused the virtual platform to die. 3 unique crashes of cirrus-vga were caused by an unchecked loop limitation.

5.2. PoC Availability. For each unique crash, SAVHF generated a PoC program that may trigger the crash, together with the basic-block passing flow of the target virtual hardware. SAVHF records the testcase sequence that affected the virtual hardware state, and the PoC program was generated by processing the pre-testcase sequence and the testcase that triggered the crash. To analyze the effectiveness of the PoCs generated in tests, we recompiled the corresponding version of the original Qemu virtualization platform, launched it with the same configuration, and ran each PoC program in the guest environment. We divided the PoCs according to their corresponding virtual hardware and crash types. The results are given in Table 4. The availability of PoCs in the assertion and memory-corruption categories reached 100%, but the availability of hang-category PoCs was lower, averaging 41.67%. This result indicates that hang-category crashes of the virtual hardware are relatively complicated and may be affected by conditions other than the virtual hardware state.
Strategy Performance.
To evaluate the performance of our fuzzing strategy, we conducted comparative experiments. We used the path-based fuzzing strategy, a basic optimization strategy for greybox fuzzing, on the same virtual hardware. In addition, we used the same code-branch instrumentation method to ensure the same instrumentation granularity. Under the same conditions, differences in fuzzing performance are mainly caused by the fuzzing strategy. We evaluated the strategies primarily by the number of virtual hardware code branches found within the same time.
For each virtual hardware, the difference in the number of code branches found under the guidance of these two strategies is shown in Figure 7. Compared with the path-based fuzzing strategy, the state-based strategy proposed in this paper increases the number of discovered virtual hardware branches by 11.04% on average. In particular, with the virtual display adapter cirrus-vga as the test target, the increase exceeded 40%.
We compared the code branches found during fuzzing using these two strategies. The results are given in Figure 8.
Wireless Communications and Mobile Computing
The parts marked in red in Figure 8 are the code branches and basic code blocks discovered using the state-based fuzzing strategy; the parts marked in blue are the code branches and basic code blocks discovered using the path-based strategy; the remaining parts were discovered by both strategies. The branches and basic blocks found only by the state-based fuzzing strategy show clustering characteristics. We inspected these code branches and found that most of them require the virtual hardware control registers to hold specific values. The control registers used by these checks include cirrus_blt_srcaddr and cirrus_srccounter, which control whether cirrus-vga can perform certain logic-processing activities.
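The Figure 8 comparison reduces to a set partition of the branches each strategy discovered; a minimal sketch (branch identifiers are placeholders):

```python
def partition_branches(state_found, path_found):
    # Split discovered branches as in the Figure 8 comparison:
    # found only by the state-based strategy (red), only by the
    # path-based strategy (blue), or by both.
    s, p = set(state_found), set(path_found)
    return {"state_only": s - p, "path_only": p - s, "both": s & p}

def relative_gain(state_found, path_found):
    # Relative increase in branch count of the state-based strategy
    # over the path-based baseline (e.g. 0.1104 for the reported 11.04%).
    s, p = len(set(state_found)), len(set(path_found))
    return (s - p) / p
```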
The behavior of the virtual hardware is determined by both the state and input of the virtual hardware. Traversing more states helps to discover the behavior of virtual hardware in various states, thereby improving the code coverage of fuzzing.
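One way to read this as a feedback rule (an assumed sketch, not the paper's exact algorithm): a testcase is retained as high-value if it exercises any (state, branch) pair not seen before, so reaching an already-known branch in a new device state still counts as progress.

```python
def make_state_feedback():
    # Minimal sketch of state-based coverage feedback. `state_id` stands
    # for some digest of the key structure; `branches` are the code
    # branches hit by the testcase. Both identifiers are placeholders.
    seen = set()

    def is_interesting(state_id, branches):
        new = {(state_id, b) for b in branches} - seen
        seen.update(new)
        return bool(new)

    return is_interesting
```

With such a rule, the same branch hit under two different key-structure states yields two coverage points, which matches the observation that traversing more states exposes more behavior.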
Conclusion and Discussion
In this paper, we propose SAVHF, a state-based virtual hardware fuzzing framework. To effectively detect the state transitions of the virtual hardware, we proposed a source-to-source instrumentation method based on the abstract syntax tree. We also introduced a state-based fuzzing strategy, which can effectively reduce the interference of virtual hardware state conditions on fuzzing and increase its efficiency. We used SAVHF to fuzz 17 representative virtual hardware devices of Qemu and found 16 unique crashes. For the ones that had not been patched, we actively contacted the vendor and obtained CVE vulnerability numbers or positive responses. During testing, SAVHF covered an average of more than 61% of virtual hardware code branches within 18 hours and improved the average code coverage by 11.04% compared with the path-based fuzzing strategy.
The instrumentation method based on the abstract syntax tree proposed in this paper detects the state transitions of the virtual hardware mainly by detecting modifications of its key structure. However, the effect of each member variable in the key structure on the processing logic is complicated. The framework proposed in this paper does not further analyze the specific effect of each control register on the processing logic, but only monitors the modification of control registers. The main purpose of this method is to add state-transition information to the optimization strategy of fuzzing, so as to further improve its efficiency. If methods such as symbolic execution can be used to analyze the state conditions, the efficiency of fuzzing may be improved further.
Data Availability
The [evaluation] data used to support the findings of this study have been deposited in the https://github.com/F1r/SAVHF_data.git repository.
Conflicts of Interest
The author(s) declare(s) that they have no conflicts of interest.
Sleep restriction acutely impairs glucose tolerance in rats
Abstract Chronic sleep curtailment in humans has been related to impairment of glucose metabolism. To better understand the underlying mechanisms, the purpose of the present study was to investigate the effect of acute sleep deprivation on glucose tolerance in rats. A group of rats was challenged by 4‐h sleep deprivation in the early rest period, leading to prolonged (16 h) wakefulness. Another group of rats was allowed to sleep during the first 4 h of the light period and sleep deprived in the next 4 h. During treatment, food was withdrawn to avoid a postmeal rise in plasma glucose. An intravenous glucose tolerance test (IVGTT) was performed immediately after the sleep deprivation period. Sleep deprivation at both times of the day similarly impaired glucose tolerance and reduced the early‐phase insulin responses to a glucose challenge. Basal concentrations of plasma glucose, insulin, and corticosterone remained unchanged after sleep deprivation. Throughout IVGTTs, plasma corticosterone concentrations were not different between the control and sleep‐deprived group. Together, these results demonstrate that independent of time of day and sleep pressure, short sleep deprivation during the resting phase favors glucose intolerance in rats by attenuating the first‐phase insulin response to a glucose load. In conclusion, this study highlights the acute adverse effects of only a short sleep restriction on glucose homeostasis.
Introduction
Recent evidence convincingly shows that sleep is important for metabolic and physiological health. Results from epidemiological studies indicate that short sleep duration for a long period is correlated with obesity and type 2 diabetes (Gottlieb et al. 2005;Chaput et al. 2008;Van Cauter and Knutson 2008;Spiegel et al. 2009;Watanabe et al. 2010). For example, habitual sleep duration of <5-6 h leads to increased body mass index and impaired glucose tolerance or even type 2 diabetes (Vioque et al. 2000;Chaput et al. 2007;Watanabe et al. 2010). In addition to epidemiological studies that are mainly focused on mild chronic sleep deprivation, laboratory experiments in both human subjects and experimental animals have also linked sleep shortening with metabolic abnormalities in a more acute setting (Spiegel et al. 1999; Barf et al. 2012).
Animal experiments have shown that prolonged sleep deprivation leads to behavioral and physiological changes such as modifications in body temperature, body weight, food consumption, and energy expenditure (Rechtschaffen and Bergmann 1995;Banks and Dinges 2007;Nedeltcheva et al. 2009;Vaara et al. 2009; Barf et al. 2012;Markwald et al. 2013). Studies in humans have shown that the secretion of anabolic (growth hormone, prolactin, and testosterone) and catabolic hormones (glucocorticoids and catecholamines) may be affected by sleep disturbances (Nedeltcheva and Scheer 2014). Moreover, sleep restriction lowers plasma levels of the anorexigenic hormone leptin and elevates those of the orexigenic hormone ghrelin (Spiegel et al. 2004;Taheri et al. 2004; Barf et al. 2012). Both quality and quantity of sleep duration may affect glucose metabolism (Donga et al. 2010; Stamatakis and Punjabi 2010; Barf et al. 2012). Furthermore, a number of experimental studies with human volunteers suggest that even partial sleep disturbance leads to impaired glucose tolerance and insulin sensitivity, that is, indicators of a prediabetic condition (Spiegel et al. 1999;Tasali et al. 2008;Donga et al. 2010;Schmid et al. 2011;Robertson et al. 2013). Of note, the metabolic profile observed after sleep deprivation shares several similarities with type 2 diabetes, including decreased muscle glucose uptake, increased liver glucose output, and pancreatic β-cell dysfunction (Spiegel et al. 1999;Buxton et al. 2010;Donga et al. 2010;Buxton et al. 2012).
Most of the experiments conducted in humans and animals focused on partial or complete sleep restriction during part or the whole resting period. So far, no study has assessed how acute, short-term sleep deprivation affects glucose regulation. Therefore, we aimed to investigate the acute effect of short-term sleep deprivation on glucose homeostasis in rats. In order to do so, rats were kept in a light-dark cycle and transferred to constant darkness. On the first day of constant darkness, animals were subjected to an intravenous glucose tolerance test (IVGTT) immediately after a 4-h sleep deprivation period in either the beginning or middle of the rest period.
Methods
All the experiments were performed in accordance with the U.S. National Institutes of Health Guide for the Care and Use of Laboratory Animals (1996) and the French National Law (implementing the European Directive 2010/63/EU), and were approved by the Regional Ethical Committee of Strasbourg for Animal Experimentation (CREMEAS) and the French Ministry of Higher Education and Research (#01050.01).
Animals
Male Wistar rats (Janvier Laboratories, Le Genest-Saint-Isle, France) were maintained at 23°C under a 12-h light/12-h dark cycle (light intensity during light and dark periods [red light on] was 200 lux and <3 lux, respectively). Lights on at 07:00 AM and lights off at 07:00 PM defined zeitgeber time (ZT) 0 and ZT12, respectively. Animals had ad libitum access to food and water and were housed individually in Plexiglas cages (28 × 28 × 40 cm) throughout the experiments. On the day of the experiment, animals were transferred into constant darkness (DD; red light, <3 lux).
Experimental design
After a week of habituation, but only when they had reached a body weight of >300 g, animals were implanted with an intravenous silicone catheter through the right jugular vein, according to the method of Steffens (1969). Two weeks after the surgery, when animals had regained presurgery body weight, all animals were transferred to DD. Rats (n = 6 per group) were either sleep deprived (SD) from circadian time (CT) 0 (defining the projected time of lights on during the previous light-dark cycle) to CT4 (for early subjective day sleep deprivation) or allowed to sleep from CT0 to CT4 and sleep deprived from CT4 to CT8 (for middle of subjective day sleep deprivation) by gentle handling, or left undisturbed as controls (CTR). Four hours of sleep deprivation by gentle handling is enough to enhance slow-wave sleep during the recovery period in rats (Kostin et al. 2010). An IVGTT was performed immediately after sleep deprivation. During the final hour of sleep deprivation, the jugular vein catheter was connected to a blood sampling catheter on the top of the head. This blood sampling catheter was attached to a metal collar and guided outside the animal cage. Blood sampling catheter and metal collar were kept out of reach of the rats using a counterbalanced beam. This system allowed all manipulations to be performed outside the cage without any further handling of the animals. During the experiment (including sleep deprivation and blood sampling) no food was kept in the cages. We used a red headlamp during the blood sampling in DD. A glucose solution (0.5 mL, 500 mg kg−1 body weight) was injected as a bolus via the blood sampling and jugular vein catheter. First, a blood sample (0.2 mL) was collected (t = 0), immediately followed by the glucose injection. Subsequently, blood samples (0.2 mL) were taken at t = 5, 10, 20, 40, and 60 min. Samples were used to determine plasma concentrations of glucose, insulin, and corticosterone at these time points.
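For concreteness, the bolus parameters imply the following per-animal dose (a small helper of our own; the variable names are ours, and the fixed 0.5 mL volume is our reading of the parenthetical):

```python
def glucose_bolus(body_weight_kg, dose_mg_per_kg=500.0, volume_ml=0.5):
    # Dose scales with body weight at 500 mg per kg; with a fixed 0.5 mL
    # bolus, the solution concentration must be adjusted per animal.
    dose_mg = dose_mg_per_kg * body_weight_kg
    concentration_mg_per_ml = dose_mg / volume_ml
    return dose_mg, concentration_mg_per_ml

# e.g. a 0.3 kg rat receives 150 mg of glucose (a 300 mg/mL solution)
```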
The total amount of glucose in plasma and the total amount of insulin released after the glucose bolus injection were calculated from the area under the curve (AUC) of every individual animal and averaged for the experimental groups.
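The AUC over the sampling schedule can be computed with the trapezoidal rule; the study does not state its exact numerical method, so the standard approach is assumed here:

```python
def auc(times, values):
    # Area under the concentration curve by the trapezoidal rule;
    # `times` in minutes, `values` the measured plasma concentrations.
    return sum(
        (t1 - t0) * (v0 + v1) / 2.0
        for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:]))
    )

# IVGTT sampling times from the protocol (minutes after the bolus)
IVGTT_TIMES = [0, 5, 10, 20, 40, 60]
```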
Laboratory method/analysis
During the experiment, blood glucose concentrations were determined by a glucometer (Accu-Chek, Roche Diagnostics, Meylan, France). Blood samples were collected in tubes on ice containing heparin and later centrifuged at +4°C. Plasma was isolated and stored at −20°C for further analysis of insulin and corticosterone. Plasma concentrations of insulin and corticosterone were measured employing radioimmunoassay kits (Millipore, Billerica, for insulin and MP Biomedicals, Orangeburg, for corticosterone).
Statistical analysis
Data are presented as mean ± standard error of the mean (SEM). Statistical analysis was performed by SigmaPlot (version 12, SPSS Inc., Chicago, IL). Significance was defined at P < 0.05. Two-way analyses of variance (ANOVAs) with repeated measures (rmANOVA) were performed to compare glucose, insulin, and corticosterone levels for different samples. Three-way ANOVAs were performed to compare glucose, insulin, and corticosterone levels according to sample timing and sleep status at the two CTs. Two-way ANOVAs were performed to compare basal glucose, insulin, corticosterone, AUCs, I/G 5-0 , and I/G 10-5 between the experimental groups at the two CTs. If appropriate, post hoc analysis was performed using Tukey's test.
Results
Intravenous glucose tolerance tests were performed immediately after the sleep deprivation (SD), at the beginning as well as in the middle of the subjective day. SD in both the early and midsubjective day caused impaired glucose tolerance. Injection of the glucose bolus resulted in an immediate and pronounced increase in plasma concentrations of glucose and insulin in both control and sleep-deprived animals (Fig. 1A, B, D and E). The highest glucose concentrations were detected 5 min after the bolus injection, directly followed by a rapid decrease. Within 20 min after injection, glucose concentrations had returned to preinfusion concentrations. Both during the early and midsubjective day, ANOVA showed significant effects of SD (F 1,50 = 6.42, P = 0.03 and F 1,50 = 13.42, P = 0.004), sample timing (F 5,50 = 34.85, P < 0.001 and F 5,50 = 35.63, P < 0.001), and their interaction (F 5,50 = 5.56, P < 0.001 and F 5,50 = 12.31, P < 0.001). Post hoc analysis revealed that plasma glucose levels were significantly elevated at t = 5 min in sleep-deprived compared to control animals at both CT4 and CT8 (P < 0.001). The three-way ANOVA showed no significant effects of time of day (P = 0.194) or the interactions of sample timing × time of day (P = 0.885), SD × time of day (P = 0.512), or sample timing × SD × time of day (P = 0.587), indicating that the glucose responses at both time points were very similar. Plasma insulin levels also increased in response to the glucose bolus in both the sleep-deprived and control groups, at t = 5 min. ANOVA showed no significant effect of SD during either the early or midsubjective day (F 1,50 = 0.73, P = 0.41 and F 1,50 = 0.25, P = 0.62), but sample timing (F 5,50 = 18.72, P < 0.001 and F 5,50 = 2.44, P = 0.046) and the interaction (F 5,50 = 2.89, P = 0.02 and F 5,50 = 4.46, P = 0.002) did show significant effects at both time points.
Post hoc analysis revealed that insulin levels were significantly higher in the sleep-deprived group at t = 10 min during both the beginning and the middle of the rest period (P = 0.025 and P = 0.004). In addition, at t = 40 min plasma insulin levels were increased in the control group (CT4-8) (P = 0.02). The three-way ANOVA showed significant effects of the sample timing × time of day (P = 0.035) and sample timing × SD × time of day (P = 0.012) interactions, but not of time of day (P = 0.838) or SD × time of day (P = 0.392), indicating small time-course differences in the insulin responses during IVGTT at CT4 and CT8 (see Fig. 1B and E).
To test the possibility of activation of the hypothalamo-pituitary-adrenal (HPA) axis due to sleep deprivation and the IVGTT intervention, we measured corticosterone levels before and during IVGTTs. Basal levels of plasma corticosterone were not affected by sleep deprivation (F 1,20 = 0.03, P = 0.8), but basal levels were higher at CT8 than at CT4 (F 1,20 = 9.1, P = 0.007) (Fig. 2C). During IVGTTs, ANOVA showed no significant effect of SD during either the early or midsubjective day (F 1,50 = 0.67, P = 0.43 and F 1,50 = 0.38, P = 0.54) (Fig. 1C and F). The three-way ANOVA did not show significant effects of SD × time of day (P = 0.99), SD × sample timing (P = 0.92), or sample timing × SD × time of day (P = 0.9), but it detected an effect of time of day (P = 0.01) and sample timing × time of day (P = 0.04), reflecting the higher mean corticosterone levels during the CT4-8 IVGTT.
Basal levels of plasma glucose did not change due to sleep deprivation (F 1,20 = 2.54, P = 0.12), though the effect of time of day on plasma glucose concentration was apparent (F 1,20 = 10.33, P = 0.004), with higher levels later in the day ( Fig. 2A). Like basal glucose, basal plasma concentrations of insulin also depended on the time of day (F 1,20 = 7.55, P = 0.012) (Fig. 2B). Post hoc analysis showed that especially in the SD group, basal insulin was higher at CT8 compared to CT4 (P = 0.017).
To estimate the ability of the β cells to respond to a glucose challenge, we calculated insulin secretion over the first 5 min after the injection (ΔI5-0) divided by the difference between the glucose concentrations during the same time period (ΔG5-0), that is, I/G 5-0 . ANOVA showed significant effects of SD (F 1,20 = 12.91, P = 0.002) and time of day (F 1,20 = 6.40, P = 0.02) (Fig. 2D, Table 1), with I/G 5-0 being lower at CT8. Post hoc analysis revealed that SD significantly decreased the I/G 5-0 in both the early and middle subjective day (P = 0.008 and P = 0.044).
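The I/G indices are simple increment ratios over a sampling interval; a helper of our own (illustrative, not the authors' code, with made-up example values):

```python
# IVGTT sampling times (min) used in the study
TIMES = (0, 5, 10, 20, 40, 60)

def ig_index(insulin, glucose, t_start, t_end, times=TIMES):
    # e.g. I/G_5-0 = (I_5 - I_0) / (G_5 - G_0): the insulin increment
    # divided by the glucose increment between two sampling times.
    a, b = times.index(t_start), times.index(t_end)
    return (insulin[b] - insulin[a]) / (glucose[b] - glucose[a])
```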
We further tested the ability of the β cells to respond to a glucose load at 10 min after injection. For this, we calculated the I/G 10-5 . ANOVA showed significant effects of SD (F 1,20 = 10.8, P = 0.004), but not of time of day (F 1,20 = 1.2, P = 0.27) or their interaction (F 1,20 = 0.48, P = 0.49) (Table 1). Post hoc analysis revealed that SD significantly decreased the I/G 10-5 only during the early subjective day (P = 0.01).
We also analyzed the AUCs as an estimate of the amount of glucose and insulin released after the bolus injection of glucose (Fig. 2E and F). ANOVA showed that SD significantly affected the AUC of glucose (F 1,20 = 17.77, P < 0.001). Post hoc analysis revealed that SD significantly increased the glucose AUC in both the early and middle of the subjective day (P = 0.01 and P = 0.006). In contrast, the insulin AUCs were not significantly affected by SD (F 1,20 = 1.18, P = 0.29) or time of day (F 1,20 = 0.64, P = 0.43).
Discussion
There is increasing evidence from human and animal studies that disturbed sleep is associated with perturbations in glucose homeostasis (Spiegel et al. 1999;Barf et al. 2010). It is not clear, however, how acute sleep deprivation in terms of duration and timing during the rest period impacts on glucose metabolism. In the present study, we show in rats that a short period (4 h) of sleep deprivation is sufficient to impair glucose tolerance and reduce the early-phase insulin response to an intravenous glucose load.
Methodological considerations
The detrimental impact on glucose metabolism of short sleep duration over many days, together with misaligned or irregular sleep, has been reported in several studies. A few studies also investigated the effects of acute sleep restriction (i.e., within one circadian cycle) on glucose homeostasis in humans (Schmid et al. 2009;Donga et al. 2010) and rats (Barf et al. 2010). In both cases, the effects of sleep restriction were tested at only one time point. In both humans and rats glucose homeostasis is strongly influenced by time of day (Kumar Jha et al. 2015). Among others, glucose tolerance improves from the beginning of the rest period to the onset of the activity period (la Fleur et al. 2001). Such daily variations may thus modulate the effects of sleep deprivation on glucose metabolism. Therefore, we set out to investigate whether the effects of sleep deprivation are influenced by time of day. For sleep deprivation during the early daytime, rats were forced to be awake during the first 4 h of the usual resting period (CT0-4), thus prolonging the period of wakefulness from about 12 h to 16 h. For sleep deprivation later during the light period, there are two options: either keeping the rats awake for a longer time span (e.g., 20 h) or allowing sleep during the early part of the rest period followed by sleep restriction during the latter part. We chose the latter option as it permits testing the effect of a similar period of sleep deprivation (i.e., 4 h) occurring at a different time of day. Notwithstanding the fact that sleep propensity during sleep deprivation in the latter part of the light period was probably decreased compared with the rats sleep deprived in the early morning, glucose tolerance was similarly altered in rats sleep deprived in either the early or middle part of the rest period. Thus, independent of time of day and sleep pressure, sleep restriction is capable of altering glucose homeostasis.
In fact, the adverse effect of sleep deprivation on glucose tolerance was much stronger than the diurnal variation in glucose tolerance. Thus, the effect of sleep deprivation completely overruled the improvement of glucose tolerance during the light period as seen in the control animals.
In human studies that investigated the effects of acute sleep restriction (Schmid et al. 2009;Donga et al. 2010), lights were on during sleep restriction, which could stimulate wakefulness and inhibit melatonin secretion (Redlin 2001;Chellappa et al. 2011). In rats light exposure has been reported to stimulate glucocorticoid release (Buijs et al. 1999) and increase plasma glucose . Thus, to avoid interferences with the outcomes studied, lights were turned off during the present experiment. Moreover, to rule out any putative bias due to changes in food intake of sleep-deprived rats, food was removed before the start of sleep deprivation and during blood sampling.
Several procedures have previously been used to induce sleep deprivation in rodents, including forced locomotion, gentle handling, and a short platform over water. Gentle handling has the advantage of avoiding the confounding effect of hyperactivity triggered by forced locomotion. In addition, it is thought to prevent the stressful effects of the platform-over-water and forced-locomotion methods. Our assumption that gentle handling is suitable for short periods of sleep deprivation is supported by the finding of similar levels of basal blood corticosterone in control and sleep-deprived rats, indicating that the experimental groups were not stressed by gentle handling.
Glucose tolerance and hormonal changes
In our study, a single period of 4 h of sleep deprivation in either the early or the middle of the light period did not modify the basal levels of plasma glucose, insulin, and corticosterone, a finding consistent with the lack of a significant effect of a single night of sleep limited to 4.5 h in human subjects (Schmid et al. 2009). The data from the IVGTT show that sleep deprivation in rats strongly reduces glucose tolerance, as evidenced by the rise in plasma glucose concentrations to higher levels and for a longer time. Several effects may contribute to the reduced glucose tolerance. First, although the total amount of insulin released in the sleep-deprived group was not changed (Fig. 2F), the reduced early insulin responses at both time points investigated indicate a reduced, or at least inadequate, sensitivity of the β cells. During the first 5 min after injection of the glucose bolus, insulin levels were similar in the control and sleep-deprived groups, but the responsiveness of the β cells to the glucose load was significantly reduced in the sleep-deprived group at CT4 and CT8, and at CT4 this effect even remained present over the next 5 min. These findings are very similar to those of a previous rat study using a much longer period of sleep deprivation (i.e., 20 h) (Barf et al. 2010).
The decreased glucose tolerance in sleep-deprived animals during the IVGTT may result from either higher glucose production or lower glucose uptake. The data from the present study cannot differentiate whether the hyperglycemia is due to reduced glucose uptake or increased glucose production. The reduced early insulin response in the sleep-deprived groups will result both in reduced glucose uptake and in a lesser inhibition of glucose production. To understand the mechanism of the hyperglycemia further, experiments using the stable isotope dilution technique to determine endogenous glucose production need to be done. Sleep deprivation might trigger glucagon release, which would subsequently result in higher endogenous glucose production. Although at first glance this hypothesis appears unlikely, because acute sleep deprivation has an inhibitory effect on circulating glucagon levels in humans (Schmid et al. 2009), assays of plasma glucagon are needed to evaluate this possibility. An alternative explanation for the increased glucose levels during the IVGTT could be an increased activity of the HPA axis, as a consequence of stress during acute sleep deprivation. In humans, most studies reported no acute change in glucocorticoid levels after sleep deprivation (Everson and Crowley 2004;Donga et al. 2010), although delayed effects (i.e., the day after) have been reported (Leproult et al. 1997). By contrast, depending on the sleep deprivation procedure used in animal studies, sleep disturbances can increase glucocorticoid release (Baud et al. 2013). However, no differences were reported in plasma corticosterone levels between sleep-deprived and control rats (Barf et al. 2010). In the present study, basal levels of plasma corticosterone and corticosterone release during IVGTTs were not different in sleep-deprived rats as compared to undisturbed controls, ruling out the possibility of major acute activation of the adrenal via the HPA or sympatho-adrenal axis.
Possible mechanisms
Our results revealed an altered insulin response to the glucose load during the first 5 min in sleep-deprived animals. The diminished early-phase insulin response after sleep deprivation suggests a reduced or impaired sensitivity of the β cells to a glucose challenge. This defect may depend on disturbances in the sensitivity of the pancreatic β cells to glucose and/or in their control by the autonomic nervous system. The latter possibility is supported by the fact that sleep deprivation results in sympathetic activation and the release of catecholamines into the general circulation (Levy et al. 2009). Hyperactivity of the sympathetic branch of the autonomic nervous system may lead to insulin resistance (Egan 2003). Thus, the reduction in the early-phase insulin response to glucose might be related to increased sympathetic and/or decreased parasympathetic activity. Moreover, increased activity of the sympathetic nervous system would also stimulate glucose production. Future work should determine possible changes in the sympathovagal balance under the present conditions of sleep deprivation.
Considering that some actions of sleep deprivation on peripheral functions may result from sympathetic activation, what could be the central structures mediating these effects? A likely candidate is the hypothalamic orexin system, because this neuropeptide is involved not only in the regulation of the sleep/wake cycle, but also in the daily rhythm of glucose metabolism (Sakurai 2007;Kalsbeek et al. 2010b). Activity of orexin neurons in the perifornical region of the hypothalamus is highest during the wake period and during sleep deprivation (Estabrooke et al. 2001). These orexin neurons also participate in the control of endogenous glucose production in the liver via the autonomic nervous system (Yi et al. 2009). Furthermore, orexin appears to regulate insulin sensitivity, because mice lacking orexin show an age-related development of systemic insulin resistance (Hara et al. 2005;Tsuneki et al. 2008). Finally, orexin has bidirectional effects on hepatic gluconeogenesis via the autonomic nervous system (Tsuneki et al. 2015). To test whether orexin neurons are involved in the autonomic control of hepatic glucose production and/or pancreatic sensitivity to glucose, orexin antagonist and organ-specific denervation studies should be performed during sleep deprivation.
Like orexin, also the serotonin system is involved in arousal and the regulation of glucose metabolism (Asikainen et al. 1997;Versteeg et al. 2015). Injection of serotonin leads to hypoglycemia in rats and mice (Yamada et al. 1989;Sugimoto et al. 1990). Mice deficient in serotonin reuptake transporters and the 5-HT 2c receptor in pro-opiomelanocortin neurons of the arcuate nucleus in the hypothalamus show impaired glucose metabolism (Xu et al. 2010;Chen et al. 2012). The daily rhythm of SCN serotonin was shown to be severely impaired in glucose intolerant hamsters, indicating a functional link between the SCN, serotonin, and glucose metabolism (Luo et al. 1999). However, additional experiments are needed before the hypothalamic serotonin system can firmly be implicated in the sleep deprivation-induced changes in glucose metabolism.
NPY is another hypothalamic neuropeptide involved in the control of feeding, arousal, and glucose metabolism (Szentirmai and Krueger 2006;Kalsbeek et al. 2010a;Wiater et al. 2011). Chronic sleep deprivation studies have shown increased expression of hypothalamic NPY (Koban et al. 2006;Martins et al. 2010). Central administration of NPY results in an increase in EGP in rats, probably by increasing hepatic glucose production (Kalsbeek et al. 2010a). The i.c.v. administration of NPY causes insulin resistance via activation of sympathetic output to the liver (van den Hoek et al. 2008). NPY-containing neurons in the arcuate nucleus also project to the paraventricular nucleus of the hypothalamus (PVN), which is a relay center for the hypothalamic integration of glucose metabolism. Therefore, the presently observed impaired glucose tolerance might have been mediated through an enhanced stimulation of NPY receptors in the hypothalamus.
Biomedical perspectives
The present study investigated the acute effects of sleep deprivation on glucose homeostasis in rats. Our data show that disturbance of the sleep-wake rhythm by short sleep deprivation during the early or late subjective day acutely affects glucose metabolism by impairing glucose tolerance. Prolonged wakefulness (sleep deprivation during the early resting period) and short-duration sleep deprivation (in the middle of the rest period) impaired glucose tolerance to the same extent.
The sleep-wake cycle is oppositely phased in nocturnal and diurnal species according to the astronomical light/ dark cycle, while plasma glucose concentrations also show oppositely phased rhythms between nocturnal and diurnal rodents (Dardente et al. 2004). Therefore, it would be interesting to determine whether acute sleep deprivation during the resting period induces the same alterations of glucose metabolism in a diurnal rodent, that is, being active during the light period as are humans.
Unraveling the mechanisms that underlie the deleterious effects of sleep deprivation on glucose metabolism in rodents under tightly controlled conditions may be ultimately relevant for applications in humans.
THE BASING OF STABILIZATION PARAMETERS OF A FORTIFIED RAILWAY BED
Dep. «Tunnels, Bases and Foundations», Dnipropetrovsk National University of Railway Transport named after Academician V. Lazaryan, Lazaryan St., 2, Dnipropetrovsk, Ukraine, 49010, tel. +38 (050) 708 50 69, e-mail petrenko1937@mail.ru, ORCID 0000-0002-5902-6155 Dep. «Tunnels, Bases and Foundations», Dnipropetrovsk National University of Railway Transport named after Academician V. Lazaryan, Lazaryan St., 2, Dnipropetrovsk, Ukraine, 49010, tel. +38 (066) 290 45 18, e-mail tutkin@mail.ru, ORCID 0000-0003-4921-4758 Dep. «Tunnels, Bases and Foundations», Dnipropetrovsk National University of Railway Transport named after Academician V. Lazaryan, Lazaryan St., 2, Dnipropetrovsk, Ukraine, 49010, tel. +38 (096) 992 15 81, e-mail murzilka891@mail.ru, ORCID 0000-0002-6077-1689 Dep. «Tunnels, Bases and Foundations», Dnipropetrovsk National University of Railway Transport named after Academician V. Lazaryan, Lazaryan St., 2, Dnipropetrovsk, Ukraine, 49010, tel. +38 (066) 290 45 18, e-mail a.alkhduor@inbox.ru, ORCID 0000-0001-5845-2710
Introduction
At present, railway transport plays the leading role in the unified transport system of Ukraine in meeting the demands of freight and passenger traffic. Under modern conditions, railway operation concentrates on ensuring the necessary level of track reliability; the roadbed, as the basis of the embankment, largely determines the normal operation of the railway as a whole under the influence of rolling stock. As is known, the main causes of traffic accidents on railways are the state of the track (50%), the state of the rolling stock (43%), and the human factor. It is therefore necessary to develop new investigations into the reinforcement of the subgrade with different materials and to determine the parameters of their efficiency, especially at higher train speeds. To determine the basic parameters of the stress-strain state needed to stabilize the soil of an embankment subgrade reinforced with special materials, the following research tasks must be solved: 1) analyse previous studies in the field of strengthening the subgrade with reinforcing materials; 2) investigate the effect of a reinforcing layer of geomaterials on the deformation properties of subgrade strengthening in various designs; 3) determine the distribution of stresses in a subgrade reinforced with geomaterials under static load; 4) carry out a complex of experimental studies to explore the nature of model subgrade deformation at different stress levels [2][3][4].
Purpose
Analysis of previous work on the reinforcement of subgrade embankments shows that traditional methods of strengthening railway subgrades are not always effective and that new methods need to be developed [1,[5][6][7][8]12].
Methodology
To study the effect of reinforcing geomaterial inclusions on the subgrade soil, a test method was outlined and, in the course of the study, its materials were loaded up to a level adequate to modern rolling stock.
The results of previous studies [5][6][7][11][12][13][14][15][16] indicate that traditional methods of strengthening railway subgrades are not always effective and lead to higher costs and longer implementation times. Modern methods of strengthening the railway subgrade have several disadvantages of a technological or economic nature and therefore, in many cases, do not solve the problem of reducing roadbed deformability. Theoretical and experimental studies carried out in different countries, as well as monitoring of test sites, have revealed that geotextile placed on the main subgrade surface works together with the ballast layer and the soil and is the main location at which the stress-strain state of the subgrade changes.
Thus, there is currently the problem of assessing the quality of reinforcement of the railway roadbed, especially with geosynthetic materials [6,7,10,12]. The task is complicated by the fact that there is no single concept of strengthening the subgrade body over its depth, especially in combined versions [6]. It is therefore necessary to develop the strengthening method and to evaluate its stress-strain state.
The tests were conducted in a closed system, i.e., at constant soil moisture. From the test results, plots of stress versus relative strain were constructed. Based on the results of compression tests on samples, the effectiveness of placing a geotextile to reduce sample deformation was verified, and laboratory studies of the stabilization of the reinforced roadbed were performed. The plan dimensions of the model were 680×120 mm. The front wall of the tray was made of transparent Plexiglas to allow observation of the development of deformations. The models were loaded through a linkage system with a lever ratio of 1:10. The load was transmitted to a stamp with an area of 155.3 cm². During loading, the level of absolute displacements was monitored; the vertical load on the stamp ranged from 10 to 50 N, with the stresses under the stamp changing from 0.0644 MPa to 0.332 MPa (the normative loading of railways considered in the strengthening is 0.16 MPa). The settlement of the stamp was measured by three dial gauges with a scale division of 0.01 mm, mounted symmetrically on the stamp. The displacement of the stamp was recorded after each load step, after which readings were taken from the indicators on the deformed samples and the model was photographed. Displacements of the subgrade were recorded from rules fixed on the side faces of the tray and from the strain of the model using a grid printed on its face. To substantiate methods for substantially reducing the strains of the subgrade for different types of reinforcement, experimental studies were conducted in the tray at a geometric modelling scale of 1:20. Several series of model tests were conducted, with their deformation characteristics specified depending on the nature of the reinforcement (Fig. 1).
Analysis of the parameters of the experimental studies of the geotextile-reinforced subgrade showed the following. The deformation of the unreinforced model, variant 0 (Fig. 1, a), manifested itself as the appearance of a compression core under the stamp, which was clear from the distortion of the 2×2 cm mesh on the front face of the model. In variant 1 (Fig. 1, b), during compression the soil matrix with the geotextile fabric significantly changed its shape, detached from the main site, and lost its form as a result of critical strain, indicating that this placement of the reinforcing element is irrational. Analysis of variant 2 (Fig. 1, c) showed that considerable forces in the geotextile formed a separation zone between the matrix and the reinforcement. In variants 3 and 4 (Fig. 1, d, e) the deformation was homogeneous, since no critical strains such as delamination or pull-out of the reinforcement were found. The subgrade reinforcement models with these inline variants are therefore optimal for stabilizing the railway subgrade. In variant 5 (Fig. 1, f), at significant stresses, deformation of the soil matrix was detected at the edges of the stamp and the ballast layer and, consequently, cracks appeared in the heave zone, at the edge of the ballast and under the edge of the stamp, which is a negative effect.
The stabilization parameters accepted in this work for the tray tests also showed that the combined variant 5 is the most effective in terms of the stabilization parameters. Additional geometric constructions that simplify comparison are shown in Figs. 2 and 3. At the normative maximum stress on the main subgrade surface, equal to 0.08 MPa, the relative deformations of the geotextile strengthening variants are, respectively: variant 0, 0.0078; variant 1, 0.0066; variant 2, 0.0065; variant 3a, 0.0053; variant 3b, 0.0067; variant 4, 0.0046; variant 5, 0.0044. That is, the introduction of the geotextile reduces deformations by a factor of 1.2…1.8 (the maximum decrease of strains is in variant 5). The modulus of elasticity, as one of the stabilization parameters, varied in the following ranges: variant 0, 2.78 MPa; variant 1, 12.5 MPa; variant 2, 5 MPa; variant 3a, 25 MPa; variant 3b, 3.57 MPa; variant 4, 12.5 MPa; variant 5, 25 MPa. That is, in variants 3a and 5, the introduction of geotextile increased the modulus of elasticity by a factor of 9; accordingly, the deformation characteristics of the subgrade were improved in the conducted tests.
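The reduction factors quoted above follow directly from the measured values. As a quick arithmetic check (a sketch only; the variant labels and numbers are taken from the text, not from any additional dataset):

```python
# Relative deformations at 0.08 MPa for each geotextile variant (from the text).
strain = {"0": 0.0078, "1": 0.0066, "2": 0.0065, "3a": 0.0053,
          "3b": 0.0067, "4": 0.0046, "5": 0.0044}
# Moduli of elasticity, MPa (from the text).
modulus = {"0": 2.78, "1": 12.5, "2": 5.0, "3a": 25.0,
           "3b": 3.57, "4": 12.5, "5": 25.0}

# Strain-reduction factor of each variant relative to the unreinforced variant 0.
reduction = {v: strain["0"] / s for v, s in strain.items() if v != "0"}
for v, r in sorted(reduction.items(), key=lambda kv: kv[1]):
    print(f"variant {v}: strain reduced {r:.1f}x, "
          f"modulus increased {modulus[v] / modulus['0']:.1f}x")
```

Running this reproduces the quoted range of 1.2…1.8 for strain reduction (maximum in variant 5) and the roughly ninefold modulus increase for variants 3a and 5.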
In this series of experiments, tests were also carried out on models strengthened with a slightly deformable layer of rubble-soil mixture located at different depths below the main subgrade surface (Fig. 2).
Findings
Analysis of the results of the experimental studies of the stress-strain state of a subgrade reinforced with a slightly deformable layer of crushed stone-soil mixture established the following. In variant 1 (Fig. 2, a) the deformation picture is slightly, though not essentially, improved; still, compared to the unreinforced variant 0 (Fig. 1, a), the damping of stresses is significant. Increasing the thickness of the slightly deformable layer to 2 cm (0.4 m at full scale) revealed its positive effect on the reduction of vertical deformations, as well as the influence of its location on these values. In variants 2, 3, and 4 (Fig. 2, b, c, d), the influence of the position of the layer on the change in vertical displacements was recorded. In variant 5 (Fig. 2, e), critical deformations in the form of delamination and pull-out of the matrix and reinforcement were recognized (the slightly deformable layer at a depth of 2 cm, 0.4 m at full scale, from the main surface). In variant 6 (Fig. 2, f), the slightly deformable layer was located at a depth of 4 cm (0.8 m at full scale) from the main surface; no critical deformations in the form of delamination or loosening of the reinforcement were detected, so this variant is effective for stabilizing the subgrade. As a result, the stabilization parameters for roadbed strengthening with a slightly deformable layer and for combined strengthening were accepted, with relative deformations at the normative maximum stress equal to, respectively: variant 0, 0.0078; variant 1, 0.0062; variant 2, 0.0034; variant 3, 0.0034; variant 4, 0.0034; variant 5, 0.0015; variant 6, 0.0034. It can be said that using the combined variant, as the most effective, reduces strains by a factor of 5.2 (the other strengthening variants, only by a factor of 1.3…2.3).
Originality and Practical value
Based on the analysis of the results of the performed experiments, the following conclusions were drawn.
The analysis of existing studies in the field of strengthening the subgrade with reinforcing materials established a lack of developed methods for strengthening railway subgrades.
In the experimental studies, the distribution of stresses in a subgrade reinforced with geomaterial under static load was determined on a model in the tray. On the basis of the experimental results, the parameters of the stress-strain state were established for a roadbed reinforced with a slightly deformable layer of rubble-soil mixture, with relative deformations ranging from 0.008 to 0.017.
As established experimentally, reinforcing the roadbed with separate horizontal geotextile panels increases its strength by a factor of 1.5…1.6, but the emergence of separation zones at the ends of the cloth shows the irrationality of these variants, regardless of their location within the height of the matrix.
In addition, it was found that reinforcement with a closed shell also cannot be a rational choice, even though it increases strength by a factor of 1.6, since the deformation of the subgrade with this reinforcement variant is accompanied by significant cracking.
Conclusions
A complex of experimental studies was conducted to explore the nature of the deformation of the model subgrade at various stress levels. Based on these experimental studies, a method of strengthening the roadbed by reinforcement with a rubble-soil mixture wrapped in geotextile with bends was proposed, and its position at a distance of 0.4 m from the main subgrade surface was justified; this will increase the strength of the roadbed by a factor of 1.8-2.0 and improve its stability, allowing train speeds to be increased.
TWIN SISTER OF FT (TSF) Interacts with FRUCTOKINASE6 and Inhibits Its Kinase Activity in Arabidopsis
In flowering plants, the developmental switch to the reproductive phase is tightly regulated and involves the integration of internal and external signals. FLOWERING LOCUS T (FT) and TWIN SISTER OF FT (TSF) integrate signals from multiple pathways. FT and TSF function as florigenic substances, and share high sequence similarity with mammalian Raf kinase inhibitor protein (RKIP). Despite their strong similarity to RKIP, the kinase inhibitory activity of FT and TSF remains to be investigated. We performed a yeast two-hybrid screen and found that TSF interacted with FRUCTOKINASE6 (FRK6), which phosphorylates fructose for various metabolic pathways. Among the seven Arabidopsis FRKs, FRK6 and FRK7 have high sequence similarity; therefore, we investigated whether TSF interacts with FRK6 and FRK7. In vitro pull-down assays and bimolecular fluorescence complementation assays revealed that TSF interacts with FRK6 in the nucleus, but not with FRK7. Kinase activity assays suggested that TSF inhibits the kinase activity of FRK6, whereas FT does not. By contrast, neither TSF nor FT inhibits the kinase activity of FRK7. The frk6 and frk7 mutants show slightly delayed flowering, but only under short-day (SD) conditions. Plastochron length is also affected in both frk6 and frk7 mutants under SD conditions. FT expression levels decreased in frk6 mutants, but not in frk7 mutants. Taken together, our findings suggest that TSF physically interacts with FRK6 and affects its kinase activity, whereas FT does not, although these proteins share high sequence similarity.
INTRODUCTION
Plants have evolved mechanisms that adjust their flowering time by integrating diverse internal or external signals (Srikanth and Schmid, 2011). Numerous genetic studies have revealed the interconnected pathways that control the floral transition in Arabidopsis thaliana, namely, the photoperiod, vernalization, gibberellic acid, autonomous, and ambient temperature pathways (Sung and Amasino, 2004;Corbesier et al., 2007;Lee et al., 2008;Wellmer and Riechmann, 2010). FLOWERING LOCUS T (FT), a well-known floral activator and a potential florigenic substance (Zeevaart, 2008;Putterill and Varkonyi-Gasic, 2016), acts as an integrator of the multiple signals that are transduced via various pathways and transmits the signals to trigger the onset of flowering.
Sucrose, a primary end product of photosynthesis, plays a pivotal role as the carbon source for most metabolic pathways (Rolland et al., 2002). Because sucrose is a disaccharide of glucose and fructose, it must be cleaved by invertase or sucrose synthase prior to its use as a substrate in metabolism (Sturm, 1999;Koch, 2004). The free hexoses generated by these sucrose-cleaving enzymes must be phosphorylated by specific kinases, such as fructokinase (FRK) and hexokinase (HXK), before entering the metabolic process (Smeekens, 2000). Hence, hexose-phosphorylating enzymes have essential functions for maintaining plant metabolism and development.
The hexose-phosphorylating enzyme FRK plays an important role in the production of functional metabolites. HXK also has fructose phosphorylating activity, but the affinity of HXK for fructose is much lower than that of FRK (Renz and Stitt, 1993). Among higher plants, the functions of FRKs are best characterized in tomato (Solanum lycopersicum). Tomato FRKs play a role in the development of vascular tissue and pollen (German et al., 2003). Furthermore, the suppression of FRK1 via RNA interference caused delayed flowering in tomato (Odanaka et al., 2002). Consistent with the important roles of FRKs in plant development, plant genomes contain multiple FRK or FRK-like genes. In particular, the A. thaliana genome contains seven FRK genes. Arabidopsis FRK6 and FRK7 play a role in accumulation of seed storage proteins, and Arabidopsis FRK1, FRK4, FRK6, and FRK7 are important for development of vascular tissue (Stein et al., 2017).
Arabidopsis FT/TSF family proteins are small globular proteins (approximately 175 amino acids) that play important regulatory roles in flowering. The FT/TSF genes include FT, TWIN SISTER OF FT (TSF), TERMINAL FLOWER1, ARABIDOPSIS THALIANA CENTRORADIALIS HOMOLOG, MOTHER OF FT AND TFL1, and BROTHER OF FT AND TFL1 (Kardailsky et al., 1999; Kobayashi et al., 1999; Yoo et al., 2004; Yamaguchi et al., 2005; Huang et al., 2012). TSF has high sequence similarity to FT; their amino acid sequences are 82% identical, and TSF shows functional redundancy with FT. Overexpression of TSF or FT leads to extremely early flowering (Yamaguchi et al., 2005). Interestingly, tsf mutants show strongly delayed flowering under short-day (SD) conditions, but the effect of the tsf mutation is very limited under long-day (LD) conditions (Yamaguchi et al., 2005). TSF plays a role in the promotion of flowering by cytokinin under non-inductive conditions (D'Aloia et al., 2011). These findings suggest that TSF plays an important role in the regulation of flowering time under SD conditions. The FT/TSF family members were originally classified as phosphatidylethanolamine-binding proteins. These proteins share strong amino acid sequence similarity with mammalian Raf kinase inhibitor protein (RKIP) (Schoentgen et al., 1987; Grandy et al., 1990; Bradley et al., 1996). In mammals, RKIP functions as a negative factor in Raf/MEK/ERK signaling, which helps ensure cell differentiation, growth, and survival in response to extracellular signals (Yeung et al., 1999, 2000). In the unstimulated state, RKIP associates with Raf and interferes with the phosphorylation activity of Raf toward MEK/ERK (Corbit et al., 2003). Extracellular stimulus-induced phosphorylation of RKIP causes the release of Raf from RKIP, subsequently activating the MEK/ERK cascade (Corbit et al., 2003). Thus, it appears that RKIP is strongly linked to various physiological processes in higher organisms, from plants to mammals.
As FT/TSF family proteins contain an evolutionarily conserved ligand-binding domain that is present in RKIP (Kardailsky et al., 1999), circumstantial evidence suggests that FT and TSF also function as kinase inhibitors in Arabidopsis. However, this potential function of these proteins has not been investigated.
In this study, we show that TSF, but not FT, interacts with FRK6 and inhibits its kinase activity. The frk6 mutants showed slightly delayed flowering under SD conditions, which was attributed to reduction in FT expression. Our findings therefore suggest that TSF functions as a FRK inhibitor in Arabidopsis.
Plant Materials and Flowering Time Measurements
The frk6-1 (SALK_143725), frk6-2 (SALK_044085), and frk7-2 (SALK_203384) mutants were obtained from the ABRC and were grown at 23°C. The T-DNA insertions in these mutants were confirmed via PCR genotyping using primers flanking the T-DNA (p1 and p2 for frk6-1, p3 and p4 for frk6-2, and p5 and p6 for frk7-2, Supplementary Table S1). Total leaf number and plastochron length were measured under both LD and SD conditions. Total leaf number was counted when the size of the primary inflorescence reached approximately 5 cm. Box plots were constructed to represent flowering time distribution (Williamson et al., 1989; Spitzer et al., 2014).
Yeast Two-Hybrid Screening
The full-length TSF gene was cloned into the SmaI/SalI sites of the pB2TK vector, which contains the DNA-binding domain of GAL4. The junction of the GAL4 DNA-binding domain and TSF was confirmed by sequencing. Screening was performed on 4.0 × 10⁶ colonies from an Arabidopsis whole-seedling cDNA library. The yeast PBN204 strain, containing three reporter genes (URA3, lacZ, and ADE2) under the control of different GAL promoters, was used. Yeast cells transformed with the TSF bait vector and an Arabidopsis cDNA AD library were spread onto selection medium (SD medium lacking leucine, tryptophan, and uracil; SD-LWU), which supports the growth of yeast harboring bait and prey plasmids encoding proteins that interact with each other. To confirm the interactions, the prey DNA portions from URA3+, ADE2+, and lacZ+ candidates were amplified by PCR, and the resulting amplified prey sequences were re-introduced into yeast with the TSF bait plasmid. Yeast two-hybrid screening was conducted by PanBionet Corp. (Pohang, South Korea).
Phylogenetic Analysis
Amino acid sequence alignment was performed using MUSCLE (Edgar, 2004). A phylogenetic tree was constructed using the maximum likelihood method implemented in the PhyML program of the phylogeny.fr platform with default parameters (Guindon and Gascuel, 2003; Dereeper et al., 2008). The tree was visualized using TreeDyn (Chevenet et al., 2006) with mid-point rooting.
mRNA Expression Analyses
FRK6 and FRK7 mRNA levels were analyzed by semi-quantitative RT-PCR. FT expression was analyzed via qPCR. Total RNA was extracted from 5-day-old Arabidopsis seedlings sampled at ZT14 (unless otherwise indicated) using Plant RNA purification reagent (Invitrogen). The RNA (1 µg) was reverse transcribed into cDNA using a Transcriptor First Strand cDNA Synthesis kit (Roche). For qPCR, expression analysis was performed using SYBR Green I Master mix (Roche) in a LightCycler 480 (Roche). The data were normalized against two stable reference genes, PP2AA3 (AT1G13320) and a SAND family gene (AT2G28390) (Hong et al., 2010). All qPCR data are presented as the mean of two biological replicates with three technical replicates each, and the error bars indicate the standard deviation. Statistical significance of differences in gene expression levels between the samples was assessed using Student's t-test; differences at P < 0.05 were considered significant. Information about the primers used in this study is presented in Supplementary Table S1.
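The text specifies normalization against two stable reference genes but not the exact quantification formula; a common choice for such designs is the 2^-ΔΔCt method with the reference Ct taken as the geometric mean of the reference genes. A minimal sketch under that assumption (the function name and all Ct values are hypothetical, for illustration only):

```python
from statistics import geometric_mean

def relative_expression(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    """2^-ΔΔCt with the reference Ct taken as the geometric mean of
    several reference genes (here two, matching the normalization scheme
    described in the text); the 'cal' values are the calibrator sample."""
    d_ct = ct_target - geometric_mean(ct_refs)
    d_ct_cal = ct_target_cal - geometric_mean(ct_refs_cal)
    return 2 ** -(d_ct - d_ct_cal)

# Hypothetical Ct values: FT in a mutant vs. the wild-type calibrator,
# normalized to two reference genes (e.g., PP2AA3 and the SAND gene).
fold = relative_expression(26.0, [21.0, 22.0], 24.5, [21.0, 22.0])
print(f"FT expression relative to wild type: {fold:.2f}")
```

With these made-up numbers the target gene comes out at roughly a third of the calibrator level; the actual values reported in the paper come from its own qPCR runs.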
Recombinant Protein Expression and Purification
To prepare His-tagged FRKs, the full-length FRK6 (At1g66430) coding sequence (CDS), including a predicted chloroplast transit peptide (cTP), and the full-length FRK7 (At5g51830) CDS were PCR-amplified, and the products were cloned into the pET21a vector (EMD Biosciences). The recombinant constructs were introduced into Escherichia coli BL21 cells. After overnight culture at 28°C with 0.2 mM IPTG, the transformed cells were harvested and resuspended in lysis buffer (50 mM Tris-HCl pH 8.0, 300 mM NaCl, 20 mM imidazole, 2% N-lauroylsarcosine sodium salt). The lysates were collected after sonication and centrifugation and loaded onto a HisTrap column (GE Healthcare). Further purification was performed according to the manufacturer's instructions.
To prepare His-tagged FT and His-tagged TSF, the CDSs of FT (At1g65480) and TSF (At4g20370) were cloned into the pET28a vector after restriction enzyme digestion. E. coli BL21 cells transformed with each recombinant plasmid were grown at 28°C with 0.1 mM IPTG for induction. Protein purification was conducted using the lysis buffer and procedure described above.
In Vitro GST Pull-down Assays
To prepare glutathione S-transferase (GST)-tagged TSF for the in vitro pull-down assays, the full-length CDS of TSF was cloned into the pGEX-5X-1 vector and introduced into E. coli BL21 cells. GST only or GST-tagged TSF was expressed in E. coli BL21 cells at 28°C with 0.15 mM IPTG. After the protein extracts were sonicated in GST lysis buffer (50 mM Tris-HCl pH 7.5, 0.1 M NaCl, 0.05% Tween-20, 1 mM EDTA pH 8.0, 1 mM PMSF, and protease inhibitor cocktail), the cell lysates were incubated with a glutathione-Sepharose 4B (GE Healthcare) slurry for 1 h at 4°C and washed three times with the same lysis buffer.
For the in vitro pull-down experiment, purified His-tagged FRK6 and His-tagged FRK7 were incubated with equal amounts of GST only or GST-fused TSF immobilized on glutathione-Sepharose 4B beads for 1 h at 4°C. After the binding reaction, the beads were washed four times with GST lysis buffer. Proteins bound to the beads were dissociated by adding SDS-PAGE sample buffer and loaded onto a 15% SDS-PAGE gel. Immunoblotting was performed using anti-His (Santa Cruz) or anti-GST (Santa Cruz) primary antibodies and goat anti-rabbit IgG secondary antibodies. The bands were visualized by applying Enhanced Chemiluminescence solution (AbClon).
Bimolecular Fluorescence Complementation (BiFC) Assays
To generate the constructs used for the BiFC experiments, full-length TSF, FRK6 (including a predicted cTP), and FRK7 CDSs were PCR-amplified from cDNA prepared from wild-type plants. The PCR products were cloned in the BamHI/XhoI sites of the pUC-SPYNE and pUC-SPYCE vectors, respectively. Protoplasts were isolated from 4-week-old Arabidopsis leaves as described previously (Yoo et al., 2007). Recombinant plasmids for BiFC containing N- and C-terminal YFP fragments were co-transfected into the protoplasts using the polyethylene glycol transformation method (Yoo et al., 2007). The transformed protoplasts were incubated for 12 h, and YFP signals were detected by confocal microscopy (Zeiss LSM700). bZIP63 (At5g28770) was used as a positive control for the BiFC experiments. bZIP63 was fused with N- and C-terminal YFP fragments; thus, YFP fluorescence was detected in the nucleus only if bZIP63 formed a homodimer. YFP and autofluorescence were excited at 513 nm and visualized at 530-590 nm and 650-710 nm, respectively.
Fructokinase Enzyme Activity Staining Assays
The effect of TSF and FT on FRK enzyme activity was investigated using a previously described staining method (Harris and Hopkinson, 1976; Gonzali et al., 2001). Electrophoresis of 1 µg FRK6-His (or FRK7-His), or of a mixture of 1 µg FRK6-His (or FRK7-His) and 1 µg His-TSF (or His-FT), was performed in a native PAGE gel. Raf1 kinase inhibitor I (2 nmol; Millipore 553003) and 1 µg of purified recombinant His-COP9 Signalosome 5A (His-CSN5a) were also used in the enzyme activity staining assay. A staining mixture in 1% agarose solution, at the concentrations suggested by Gonzali et al. (2001), was poured on top of the native gel. After the overlaid agarose gel solidified, the enzymatic reaction was conducted in the dark at room temperature for 1 h, followed by the addition of 1% acetic acid solution to stop the reaction. The intensity of formazan, the end product of the FRK reaction, was analyzed using ImageJ (Schneider et al., 2012).
Yeast Two-Hybrid Screening Identifies FRK6 as an Interactor of TSF
To identify interactors of TSF, we performed yeast two-hybrid screening using TSF as bait. Full-length TSF cloned in pGBKT shows self-transcriptional activity (data not shown); therefore, we cloned TSF in the pB2TK vector, which allows the bait protein to be expressed at lower levels. Among the 231 URA3+ colonies, 198 lacZ+ colonies, and 155 ADE2+ colonies obtained, we identified 60 colonies that were URA3+, ADE2+, and lacZ+. After confirming the interactions by reintroducing the amplified prey portion of DNA from these 60 candidates, we identified 32 positive clones (Figures 1A,B), including: JAB1 HOMOLOG 1 (AJH1; At1g22920), an armadillo/beta-catenin-like repeat-containing protein (ARM repeat superfamily protein; At1g01830), FRK6 (At1g66430), THYLAKOID FORMATION 1 (THF1; At2g20890), and a tetratricopeptide repeat (TPR)-like superfamily protein (At1g26460). In the case of FRK6, the activation domain was fused to the 5′ UTR of FRK6, and the N-terminal 14 amino acids (M1 to G14) were found to interact with TSF. Among these clones, we decided to investigate FRK6, because FT, the closest homolog of TSF, has sequence similarity to mammalian RKIP (Kardailsky et al., 1999). Therefore, we reasoned that analyzing the interaction between TSF and FRK might reveal a role in inhibition of kinase function.
Phylogenetic Analysis of Arabidopsis FRKs
Before analyzing the relationship between FRK6 and TSF, we analyzed the sequence similarity of the FRKs to identify any close homologs of FRK6 in the Arabidopsis genome. The Arabidopsis genome contains seven FRK genes encoding proteins with fructose-phosphorylating activity: FRK1 (At2g31390), FRK2 (At1g06030), FRK3 (At1g06020), FRK4 (At3g59480), FRK5 (At4g10260), FRK6 (At1g66430), and FRK7 (At5g51830) (Stein et al., 2017). The encoded proteins comprise 329, 345, 326, 324, 384, and 343 amino acids. Notably, FRK6 contains an additional 46 amino acids at its N-terminus that are predicted to form a cTP. We classified the seven FRK genes according to evolutionary distances (Figure 2A). As shown in the phylogram, FRK7 is more closely aligned with FRK6 than with the five other FRKs, suggesting that FRK6 and FRK7 are homologs. Consistent with this notion, among the Arabidopsis FRK genes, only FRK6 and FRK7 have seven exons, whereas FRK1-FRK5 have four or five exons. The amino acid sequences of FRK6 and FRK7 share 75.1% sequence similarity and 63.1% sequence identity (Figure 2B). Furthermore, the exon/intron boundaries of FRK6 and FRK7 are also conserved. We reasoned that TSF might also interact with FRK7; thus, we used both FRK6 and FRK7 for further protein-protein interaction analyses.
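As an aside on the percent-similarity and percent-identity figures quoted above: the two measures are computed differently from the same pairwise alignment (identity counts exact matches only; similarity also counts conservative substitutions). A minimal Python sketch, using short hypothetical sequences rather than the real FRK6/FRK7 alignment and a simplified set of conservative substitution groups:

```python
# Sketch: percent identity vs. percent similarity from a pairwise alignment.
# Identity counts exact matches; similarity also counts conservative
# substitutions (approximated here with simple residue groups). The aligned
# sequences below are short, hypothetical fragments, not real FRK6/FRK7.

CONSERVATIVE_GROUPS = [set("ILVM"), set("FYW"), set("KRH"), set("DE"),
                       set("ST"), set("NQ"), set("AG")]

def conserved(a, b):
    return any(a in g and b in g for g in CONSERVATIVE_GROUPS)

def identity_similarity(seq1, seq2):
    """Both sequences must be pre-aligned (equal length, '-' for gaps)."""
    assert len(seq1) == len(seq2)
    aligned = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    ident = sum(a == b for a, b in aligned)
    simil = sum(a == b or conserved(a, b) for a, b in aligned)
    n = len(aligned)
    return 100.0 * ident / n, 100.0 * simil / n

pid, psim = identity_similarity("MKTILVGA-SD", "MKSILVGAWTD")
print(f"identity {pid:.1f}%, similarity {psim:.1f}%")
```

Real analyses (e.g. the T-Coffee alignment used for Figure 2B) use full substitution matrices rather than fixed residue groups, but the counting logic is the same.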
Protein-Protein Interactions between TSF and FRK6
To test the interaction between TSF and FRK6/FRK7, we performed in vitro pull-down assays. We expressed FRK6 and FRK7 proteins with a 6X His tag in E. coli as prey for the pull-down experiments, followed by purification through a His column (Figures 3A,B). Following gel electrophoresis of purified FRK6-His protein, Coomassie brilliant blue staining of the gel revealed additional minor bands near the putative FRK6-His protein. We therefore performed immunoblot analysis using anti-His antibody to confirm that the purified product contained FRK6-His protein. Anti-His antibody successfully detected FRK6-His protein at the expected size (∼42 kDa) after blotting (Figure 3A, right panel). We also induced the production of FRK7-His under the same conditions used for FRK6-His (Figure 3B, left panel). FRK7-His was highly enriched in the purification eluate, as shown by Coomassie brilliant blue staining and immunoblot analysis (Figure 3B, right panel). To prepare the bait protein for the pull-down assays, we expressed TSF with a GST tag in E. coli and immobilized the protein onto glutathione-Sepharose 4B beads (Figure 3C). The purified GST-TSF and FRK6-His/FRK7-His proteins were used for pull-down assays.
FIGURE 2 | Sequence alignment of Arabidopsis FRKs. (A) Phylogenetic tree of Arabidopsis FRK genes. The phylogenetic tree was constructed using the maximum likelihood method with mid-point rooting (Guindon and Gascuel, 2003; Dereeper et al., 2008). Scale bar: the number of amino acid changes per site. The exon/intron structure of each fructokinase gene is shown on the right. (B) Alignment of FRK6 and FRK7 amino acid sequences using the T-Coffee Multiple Sequence Alignment tool. FRK5, the next most closely related FRK, is included in this alignment to show that FRK6 and FRK7 are closely related. Black and gray shading indicate identical and conserved residues, respectively. Inverted triangles denote exon/intron boundaries. Dashes were introduced to maximize amino acid alignment.
Our in vitro pull-down assays revealed that although GST and GST-TSF were present in almost equal amounts, only GST-TSF bound to FRK6-His and was detected in the co-precipitated fraction by immunoblot analysis (Figure 3D). However, immunoblot analyses using anti-His antibody did not detect any co-precipitating FRK7-His, and neither FRK6-His nor FRK7-His interacted with GST alone. These results suggest that TSF interacts with FRK6, but not with FRK7, in vitro.
To further validate the TSF-FRK6 protein interaction, we conducted BiFC assays. We co-transfected constructs encoding TSF fused with the N-terminal fragment of YFP and FRKs fused with the C-terminal fragment of YFP into protoplasts; bZIP63 fused with N-terminal and C-terminal YFP fragments was included as a positive control for protein-protein interactions. YFP signal was only observed in the nucleus of protoplasts co-expressing TSF-NYFP and FRK6-CYFP (Figure 3E), reproducing our in vitro GST pull-down results in the BiFC assays. By contrast, no fluorescent signal was detected in protoplasts co-transfected with TSF-NYFP and FRK7-CYFP, although we confirmed TSF-NYFP and FRK7-CYFP expression in the co-transfected protoplasts by western blot analysis (Supplementary Figure S1), suggesting that TSF does not interact with FRK7. Protoplasts co-expressing bZIP63-NYFP and bZIP63-CYFP (positive control) showed fluorescent signals in the nucleus. Therefore, our GST pull-down and BiFC results suggest that TSF directly interacts with FRK6, but not with FRK7.
TSF Inhibits the Phosphorylation of Fructose by FRK6
After confirming the binding of TSF to FRK6, we performed an enzyme activity staining assay to investigate whether TSF inhibits the kinase activity of FRK6 (Harris and Hopkinson, 1976; Gonzali et al., 2001). The basic principle of this method is shown in Figure 4A. If active FRK is present in the reaction mixture, it phosphorylates fructose to fructose-6-phosphate, which phosphoglucose isomerase (PGI) converts into glucose-6-phosphate, the primary substrate of the staining reaction. When the primary substrate is produced via FRK, the downstream reactions occur consecutively in the reaction mixture. Ultimately, the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) in the mixture is reduced to the purple compound formazan; thus, FRK enzyme activity can be measured by analyzing the intensity of formazan staining in the native PAGE gel. We prepared His-TSF and His-FT proteins for this assay (Figure 4B), along with purified FRK6 and FRK7 fused with a 6X His tag.
To investigate whether TSF reduces FRK6 activity, we incubated FRK6-His with His-TSF; FRK6-His protein was also combined with Raf1 kinase inhibitor I and His-CSN5a proteins as controls (Gusmaroli et al., 2004). Purple formazan staining was observed in the lane containing only FRK6-His (Figure 4C), indicating that the purified FRK6-His protein was functional. The intensity of formazan staining was reduced by the addition of Raf1 kinase inhibitor I, indicating that Raf1 kinase inhibitor I inhibits the activity of FRK6, whereas the addition of His-CSN5a did not affect FRK6 activity. As shown in Figure 4C, the formation of formazan was reduced approximately twofold by the addition of His-TSF in two biological replicates, whereas no reduction in formazan level was detected after the addition of His-FT. These results suggest that only His-TSF inhibits FRK6 enzymatic activity. By contrast, the addition of His-TSF to FRK7-His did not reduce the formation of formazan in either biological replicate (Figure 4D). Finally, the addition of His-FT to FRK7-His also failed to affect the formation of formazan. These results suggest that TSF inhibits the fructose-phosphorylating activity of FRK6 via a direct physical interaction.
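The fold-change values reported under each band in such assays are simple ratios of background-corrected band intensities relative to the enzyme-only control lane. A minimal sketch of that calculation, using hypothetical ImageJ-style integrated densities rather than measurements from this study:

```python
# Sketch: fold-change of formazan band intensity relative to the
# enzyme-only control lane, as in gel-based activity quantification.
# Intensity values are hypothetical ImageJ-style integrated densities,
# not measurements from the paper.

def fold_changes(intensities, control_lane, background=0.0):
    """Return each lane's intensity relative to the control lane,
    after subtracting a common background value."""
    ctrl = intensities[control_lane] - background
    return {lane: (v - background) / ctrl for lane, v in intensities.items()}

lanes = {
    "FRK6":       12000.0,  # enzyme alone (control lane)
    "FRK6 + TSF":  6000.0,  # reduced staining
    "FRK6 + FT":  11800.0,  # essentially unchanged
    "FRK6 + RKI":  3000.0,  # kinase-inhibitor control
}
fc = fold_changes(lanes, control_lane="FRK6", background=2000.0)
for lane, value in fc.items():
    print(f"{lane}: {value:.2f}")
```

Note that background subtraction matters: the raw TSF lane here is half the control, but the background-corrected fold-change is 0.40.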
frk6 Mutants Show Late Flowering under SD Conditions
We next investigated whether the mutation of FRKs has a visible effect on the plant. To investigate the effect of FRK6 and FRK7 on flowering time, we obtained the frk6 (SALK_143725 and SALK_044085) and frk7 (SALK_203384) T-DNA mutants from the ABRC. SALK_143725 and SALK_044085 contain a T-DNA insertion in the first exon and first intron of FRK6, respectively (Figure 5A, top). SALK_203384 contains a T-DNA insertion at the end of the second intron of FRK7 (Figure 5A, bottom).
FIGURE 3 (legend, continued) | In vitro pull-down assays using GST-TSF and FRK6-His/FRK7-His. Note that GST-TSF co-precipitated with FRK6-His, but not with FRK7-His. (E) BiFC assays showing that TSF interacts with FRK6 and that this complex localizes to the nucleus (upper arrow). bZIP63 was used as a positive control for protein-protein interaction in the nucleus (lower arrow). BF: bright field.
FIGURE 4 | TSF inhibits FRK6 activity in vitro. (A) Schematic diagram of the enzyme assay used to measure FRK6 and FRK7 activity in this study. If active fructokinase is present in the reaction mixture, MTT (yellow) is converted into formazan (purple); however, if FRK activity is inhibited, the production of the purple compound is reduced. G6PDH: glucose-6-phosphate dehydrogenase; MTT: 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; PGI: phosphoglucoisomerase; PMS: phenazine methosulfate. (B) Purification of His-TSF and His-FT proteins for enzyme activity assays. Asterisks indicate purified His-TSF and His-FT proteins. F-T: flow-through, W: wash. (C,D) The effect of TSF on FRK6 (C) and FRK7 (D) activity. The numbers below each band indicate the fold-change relative to the formazan level under FRK6-His or FRK7-His treatment only. Note that the production of formazan is reduced by His-TSF (C, asterisks), suggesting that TSF inhibits the activity of FRK6. By contrast, His-FT protein does not inhibit formazan production. Neither His-TSF nor His-FT inhibits FRK7 activity (D). RKI: Raf1 kinase inhibitor I.
We confirmed the T-DNA insertions via PCR-genotyping using primers flanking both sides of the T-DNA (data not shown). FRK6 and FRK7 expression was severely affected by the T-DNA insertion in the mutants (Figure 5B), suggesting that these mutants are loss-of-function alleles of FRK6 and FRK7. We therefore named SALK_143725, SALK_044085, and SALK_203384 as frk6-1, frk6-2, and frk7-2, respectively, and subjected these alleles to further analyses.
We measured flowering time and plastochron length in the frk6 and frk7 mutants under both LD and SD conditions at 23°C. None of the mutants showed visible differences in flowering compared to the wild type under LD conditions (Figure 5C); frk6-1, frk6-2, and frk7-2 mutants flowered when the plants had 17.0, 16.5, and 16.6 leaves, respectively, whereas wild-type plants flowered when they had 15.6 leaves under the same conditions. However, under SD conditions, both frk6 and frk7 mutants showed a slight but significant delay in flowering compared to the wild type (Figure 5D). Under SD conditions, frk6-1, frk6-2, and frk7-2 mutants flowered when they had 66.7, 64.0, and 67.8 leaves, respectively, whereas wild-type plants flowered when they had 55.7 leaves. Consistent with their altered flowering time, all mutants showed a slightly reduced plastochron length (increased leaf initiation rate) under SD conditions, which was more apparent in frk7-2 mutants (Figure 5E). These observations suggest that FRK6 and FRK7 play a role in regulating flowering time under SD conditions. Because the frk6 and frk7 mutants showed delayed flowering under SD conditions, we investigated the expression levels of flowering time genes in these plants via qPCR. Under SD conditions, FT mRNA levels were reduced only in frk6-1 and frk6-2 mutants (Figure 5F), not in frk7-2 mutants. These results suggest that the late flowering phenotype of frk6-1 and frk6-2 mutants could be attributed to reduced FT expression levels under SD conditions.
DISCUSSION
In this study, we investigated whether TSF functions as a kinase inhibitor. We detected protein-protein interactions between TSF and FRK6 via in vitro pull-down and BiFC assays. TSF likely inhibits the fructose-phosphorylating activity of FRK6 via physical interaction. We also found that the frk6 mutation affects the expression of FT, which appears to cause delayed flowering under SD conditions.
Although structural similarities suggest that FT and TSF, as well as their homologs, play roles similar to that of mammalian RKIP (Kardailsky et al., 1999; Yeung et al., 1999), their potential roles as kinase inhibitors had not been investigated in plants. In this study, we showed that TSF binds to FRK6 (Figures 1, 3) and inhibits its activity in an enzyme activity staining assay (Figure 4). Arabidopsis FRK6 has high sequence similarity to FRK7 (Figure 2); however, TSF inhibits the activity of FRK6, but not FRK7, suggesting that the interaction between TSF and FRK6 is specific. Another interesting observation is that although TSF is homologous to FT, FT does not inhibit the activity of FRK6 or FRK7. All Arabidopsis FRKs except FRK1 exhibit substrate inhibition (Riggs et al., 2017), and FRKs are thought to play a role in regulating starch synthesis via sucrose synthase in the sink tissue of plants (Odanaka et al., 2002). Therefore, it would be interesting to further investigate a possible role for TSF in sink tissue.
According to our BiFC assay results, TSF likely interacts with FRK6 in the nucleus (Figure 3E), which is inconsistent with the results of a previous report (Stein et al., 2017). The majority of FRKs in tomato and Arabidopsis localize to the cytosol, except for tomato FRK3 (LeFRK3) and Arabidopsis FRK6, which localize to the plastid (Damari-Weissler et al., 2006; Riggs et al., 2017). Perhaps FRK6 interacts with TSF only in the nucleus, even if only a fraction of FRK6 localizes there. Indeed, our confocal microscopy analyses showed that FRK6-GFP signal was seen in the nucleus as well as in chloroplasts, whereas GFP-TSF signal was observed in the nucleus (Supplementary Figure S2). However, unlike FRK6 and TSF, GFP-FRK7 signal was found in the cytosol. Thus, although FRK6 mainly localizes to the plastid, a small fraction of FRK6 may be present in the nucleus, where it might interact with TSF. Consistent with this notion, Arabidopsis HXK1, another hexose-phosphorylating enzyme, localizes to the nucleus, where it forms a distinct protein complex with its interactors (Cho et al., 2006). Thus, it is tempting to speculate that nucleus-localized FRK6 directly interacts with TSF, which may be required for a novel nuclear-specific function.
Although a previous report suggested that frk6 single mutants have no apparent mutant phenotype (Stein et al., 2017), we observed a slight delay in flowering time in the frk6 mutants under SD conditions (Figure 5). This late flowering is likely due, at least in part, to the reduced levels of FT mRNA in these mutants. Mutants impaired in both FRK6 and FRK7 function exhibit an altered seed phenotype (Stein et al., 2017), suggesting that the two genes act redundantly in seed development. However, we found that both the frk6 and frk7 mutants showed a visible flowering time phenotype under SD conditions. This raises the possibility that the TSF-FRK6 module might play a role in modulating the juvenile-to-adult phase transition, as was observed for TFL1 (Matsoukas et al., 2013).
The observations that TSF inhibits FRK6 activity (Figure 4C) and that impaired FRK6 function caused late flowering under SD conditions (Figure 5D) appear to be inconsistent with the known role of TSF as a floral activator (Yamaguchi et al., 2005). A possible scenario to explain this discrepancy is as follows: although TSF inhibits FRK6, which subsequently delays flowering, the inductive effect on flowering caused by the translocation of TSF to the shoot apical meristem is much stronger and overrides the effect of the frk6 mutation on flowering. Indeed, like FT, TSF likely moves toward the shoot apical meristem to trigger flowering under non-inductive conditions (Corbesier et al., 2007; Jin et al., 2015). Thus, the promotive effect of long-distance movement of TSF likely overrides the effect of the frk mutation under SD conditions. Another possible scenario is that the TSF-FRK6 module acts in tissues that are not involved in flowering, for instance, in sink tissue such as seeds (Stein et al., 2017). Further investigation is required to clarify the molecular mechanism underlying the activity of the TSF-FRK6 module in plants.
An important question is how the potential alteration of carbon assimilation caused by the frk6 mutation is connected to the changes in flowering time. A possible scenario is that changes in carbon-assimilate partitioning caused by the frk6 mutation do not play the main role in the regulation of flowering time (unlike FRK7); rather, FRK6 plays a role in the transcriptional regulation of downstream flowering time genes. Consistent with this notion, HXK, a hexose-phosphorylating enzyme, regulates the developmental transition via miR156. The level of miR156, which plays a pivotal role in the transition from the juvenile to the adult phase, is affected by HXK1 in response to sugar (Yang et al., 2013; Yu et al., 2013). Nuclear-localized HXK1 directly or indirectly regulates miR156 expression via association with nuclear factors, for instance, VHA-B1 and RPT5B (Cho et al., 2006). The effect of HXK1 on miR156 changes the levels of SQUAMOSA-promoter binding protein-like (SPL) transcripts (Wang et al., 2008), thereby affecting FT transcription (Kim et al., 2012). Notably, we found that FRK6 localizes not only to the chloroplast but also to the nucleus, as seen for HXK1; furthermore, the frk6 mutation caused a reduction in FT expression. It is tempting to speculate that FRK6 may regulate the miR156-SPL module to eventually adjust FT transcript levels and control flowering time.
In summary, we identified a possible biochemical role of TSF as an inhibitor of FRK6 in plants. FT/TSF family members participate in various fundamental developmental processes in plants; however, no molecular evidence for their role as kinase inhibitors had previously been obtained, despite their homology to an animal kinase inhibitor protein. Our results open new avenues for investigating the biochemical functions of FT/TSF family proteins.
AUTHOR CONTRIBUTIONS
SJ and SYK performed the experiments. JHA designed and supervised the study. SJ and JHA wrote the manuscript.
National Library of Norway's new Database of 22 Manuscript Maps concerning the Swedish King Charles XII's Campaign in Norway in 1716 and 1718
INTRODUCTION
The National Library of Norway is planning to digitise approximately 1,500 manuscript maps. Two years ago we started working on a pilot project, and for this purpose we chose 22 maps small enough to be photographed in one piece. We made slides 6 x 7 cm in size, converted the slides into PhotoCDs, and produced JPEG files at four different resolutions. To avoid large file sizes, we had to divide the highest-resolution version into four pieces. The preliminary work was done in Photoshop; the database on the web is made in Oracle. You can click on the map to zoom. The 22 maps were drawn by Norwegians, and probably Swedes, during the Great Northern War, when the Swedish King Charles XII unsuccessfully attempted to conquer Norway in 1716 and 1718.
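The derivative-generation plan described above (four resolution levels from one master, with the highest level split into four tiles) can be sketched as a simple size calculation. This is illustrative only; the master image dimensions below are assumptions, not the library's actual scan sizes:

```python
# Sketch of the derivative-image plan described above: four resolution
# levels from one master scan, with the highest-resolution level split
# into four quadrant tiles to keep individual JPEG files small.
# The master dimensions are illustrative, not the library's actual scans.

def pyramid_levels(width, height, n_levels=4):
    """Each level halves the previous one; level 0 is full resolution."""
    return [(width >> i, height >> i) for i in range(n_levels)]

def quadrant_tiles(width, height):
    """(left, upper, right, lower) crop boxes for the four quadrants."""
    w2, h2 = width // 2, height // 2
    return [(0, 0, w2, h2), (w2, 0, width, h2),
            (0, h2, w2, height), (w2, h2, width, height)]

levels = pyramid_levels(4096, 3072)
print(levels)  # [(4096, 3072), (2048, 1536), (1024, 768), (512, 384)]
print(quadrant_tiles(*levels[0]))  # four crop boxes for the biggest level
```

An imaging library (e.g. Pillow) would then resize to each level and crop the top level with these boxes before saving each piece as a JPEG.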
The database is now accessible on the National Library of Norway's web site. The database is in Norwegian, but we are working on an English version as well.
The maps are searchable on different topics, countries, counties, geographical names, shelfmarks, or a combination of these. We are planning to expand the database to other manuscript maps later. This is the reason why it is possible to search for such obvious subjects as Charles XII and the Great Northern War.
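The combined-field search described above can be sketched as a dynamically built query. The schema, sample rows, and use of SQLite here are illustrative assumptions for the sketch; the actual database is implemented in Oracle:

```python
# Sketch: the kind of combined-field search the map database supports,
# modeled with an in-memory SQLite table. The schema and rows are
# hypothetical illustrations, not the library's actual Oracle schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE maps (
    shelfmark TEXT, topic TEXT, country TEXT, county TEXT, place TEXT)""")
con.executemany("INSERT INTO maps VALUES (?, ?, ?, ?, ?)", [
    ("K1", "Great Northern War", "Norway", "Østfold",   "Halden"),
    ("K2", "Great Northern War", "Norway", "Trøndelag", "Stene"),
    ("K3", "Fortifications",     "Norway", "Østfold",   "Fredriksten"),
])

def search(topic=None, county=None, place=None):
    """AND-combine whichever criteria the user filled in."""
    clauses, params = [], []
    for col, val in (("topic", topic), ("county", county), ("place", place)):
        if val is not None:
            clauses.append(f"{col} = ?")
            params.append(val)
    where = " AND ".join(clauses) or "1=1"
    rows = con.execute(f"SELECT shelfmark FROM maps WHERE {where}", params)
    return [r[0] for r in rows]

print(search(topic="Great Northern War", county="Østfold"))  # ['K1']
```

Only the user-supplied values are passed as bound parameters; the column names come from a fixed whitelist, which keeps the dynamically assembled WHERE clause safe.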
FIRST CAMPAIGN 1716
King Charles' first campaign to conquer Oslo in 1716 was not a success. Charles failed, retreated, and tried instead to conquer the fortress of Fredriksten in the town of Halden. The citizens managed to get rid of Charles by setting their own city on fire, an event mentioned in the National Anthem of Norway. At the same time, General Armfeldt led 10,000 men towards Trondheim. Sweden had been at war for so long that the quality of Armfeldt's soldiers was quite low. Many of them were Swedish rural boys from the counties of Jemtland and Herjedalen, the area just across the border from Trondheim. Jemtland and Herjedalen were lost to Sweden at the peace of Brømsebro in 1645, so motivating the soldiers to fight against their own relatives was quite difficult. For this reason, Armfeldt brought Finnish mercenaries with him.
Armfeldt crossed the Norwegian border northeast of Trondheim, behaving like armies usually do, plundering and burning. Armfeldt came to the Stene entrenchment and beat the Norwegians in September 1718. The victory at Stene opened a gateway through to Trondheim, but because of a combination of epidemic diseases and a lack of heavy artillery, Armfeldt failed to take the city. At the same time, Charles besieged Halden. Many trenches were dug, and on 11 December 1718 the King carelessly raised his head too high and was killed by a bullet. Historians are still wondering whether it was a Norwegian or a Swedish bullet. A simple map shows where he was killed and the distance from the fortress.
THE RETREAT
The retreat from Østfold did not cause any losses. But in the north, in Trøndelag, General Armfeldt got the message three weeks later. On 12 January 1719 he started the retreat. The army split, but the main part went south and tried to cross the Swedish border through an area called the Tydal Mountains. This is an alpine landscape; the mountains are 1000-1700 metres high, with a cold climate in the wintertime, which makes it quite a rough area if you are not properly dressed. The database contains maps showing different routes.
This map was probably made by a cartographer called Klüwer. It is not signed, but it is possible to recognise his handwriting. The Klüwers were a family of officers from the area around Trondheim, including at least two cartographers. The map was probably made in the 1750s-60s. The army took some locals as pilots (guides) and their families as hostages. Historians believe that the pilots led the army above the forest line (600 metres above sea level) and around the mountain dairy farms, to prevent the soldiers from finding shelter or food and from deserting. Up in the mountains the army was surprised by a snowstorm. The 17th and 18th centuries are known as the 'Little Ice Age', and even today the temperature inland often reaches 30 degrees below zero. General Armfeldt's army was dressed for August temperatures. After the wind had dropped, locals went up and found 3000 men and 500 horses frozen to death. Armfeldt himself survived and reached first Duved and then the Jerpe entrenchment in Sweden. There was not enough shelter for all of them, and many of the survivors perished. We know that only a few men from the Armfeldt campaign reached their homes.
MAPKAPK2 plays a crucial role in the progression of head and neck squamous cell carcinoma by regulating transcript stability
Background Head and neck squamous-cell carcinoma (HNSCC) ranks sixth among cancers worldwide. Though several molecular mechanisms of tumor initiation and progression in HNSCC are known, others remain unclear. The significance of the p38/MAPKAPK2 (mitogen-activated protein kinase-activated protein kinase-2) pathway in cell stress and inflammation is well established, and its role in tumor development is being widely studied. Methods We have elucidated the role of MAPKAPK2 (MK2) in HNSCC pathogenesis using clinical tissue samples, MK2-knockdown (MK2KD) cells and a heterotopic xenograft mouse model. Results In patient-derived tissue samples, we observed that MK2 is reproducibly overexpressed. Increased stability of cyclin-dependent kinase inhibitor 1B (p27) and mitogen-activated protein kinase phosphatase-1 (MKP-1) transcripts, and a decreased half-life of tumor necrosis factor-alpha (TNF-α) and vascular endothelial growth factor (VEGF) transcripts, in MK2KD cells suggest that MK2 regulates their transcript stability. In vivo xenograft experiments established that knockdown of MK2 attenuates the course of tumor progression in immunocompromised mice. Conclusion Altogether, MK2 regulates transcript stability and is functionally important in modulating HNSCC pathogenesis. Electronic supplementary material The online version of this article (10.1186/s13046-019-1167-2) contains supplementary material, which is available to authorized users.
Background
Globally, head and neck squamous cell carcinoma (HNSCC), with an estimated annual burden of 633,000 new cases and 355,000 deaths, is the sixth most common cancer, with a male-to-female ratio ranging from 2:1 to 4:1 [1]. The majority of head and neck cancers (~90%) are HNSCCs, which comprise malignancies at multiple anatomic subsites such as the oral cavity, oropharynx, hypopharynx, larynx and nasopharynx [2]. In India, 77,000 cases of HNSCC are diagnosed every year, making it the second most common cancer in the subcontinent, with various environmental and lifestyle risk factors as the primary causes [3]. The treatment for early-stage HNSCC is either a single modality or various combinations of surgery, radiation and chemotherapy, based on the stage and primary site of the tumor [4]. Despite advances in surgical and other conventional treatment strategies in recent years, HNSCC continues to have a dismal prognosis, with a 30-47% recurrence rate and a 5-year survival rate that is among the lowest of all cancers [5].
Systemic side effects, such as hepatic and cardiac toxicity and central nervous system disorders, caused by small-molecule p38 inhibitors have hindered their translational use. This might be attributed to the fact that p38 regulates more than sixty substrates, so its direct inhibitors have failed clinically due to undesired side effects [6]. This has prompted researchers to look for novel therapeutic targets among the downstream regulators of this signaling pathway, prominent among them being mitogen-activated protein kinase-activated protein kinase-2 (MAPKAPK2 or MK2).
MK2, the downstream substrate of p38 mitogen-activated protein kinase (MAPK), governs the activation and deactivation of RNA-binding proteins (RBPs) [7]. RBPs modulate the expression of mRNAs encoding several proto-oncogenes, cytokines, chemokines and pro-inflammatory factors that control cell-cycle progression, proliferation, angiogenesis, metastasis and cell death [8]. The p38/MK2 signaling pathway has been implicated in cell-cycle regulation, cell migration and inflammation [9]. Experimental evidence indicates that MK2, the prime target of p38, regulates the stability of essential genes involved in tumor pathogenesis that harbor adenine/uridine-rich elements (AREs) in their 3′-untranslated regions (3′-UTRs) [10]. It has been established that MK2 plays a significant role in a variety of cellular processes, including cytoskeleton reorganization, chromatin remodeling, cell-cycle regulation and cell migration, as indicated by its downstream substrates [7].
In this study, we observed overexpression and activation of MK2 in human HNSCC tissues as well as cell lines. Further, we investigated the expression levels of selected genes in clinical tissue samples that harbor binding sites for MK2-regulated RBPs in their 3′-UTRs and regulate HNSCC pathogenesis. We established that MK2 knockdown (MK2KD) under normoxia stabilized cyclin-dependent kinase inhibitor 1B (p27) but destabilized tumor necrosis factor-alpha (TNF-α) and vascular endothelial growth factor (VEGF) transcripts. Furthermore, we found that MK2KD under hypoxic conditions mimicking the tumor milieu stabilized p27 and mitogen-activated protein kinase phosphatase-1 (MKP-1) but destabilized TNF-α. These in vitro findings were further validated in vivo in a xenograft non-obese diabetic/severe combined immunodeficiency (NOD/SCID) mouse model. Taken together, our findings show for the very first time that MK2 regulates transcript stability and is functionally important in modulating HNSCC pathogenesis.
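Transcript half-life, as discussed above, is conventionally estimated by fitting first-order decay, N(t) = N0·e^(−kt) with t1/2 = ln(2)/k, to a time course of transcript abundance (e.g. qPCR after transcriptional arrest). A minimal sketch of that fit, using synthetic data rather than values from this study:

```python
# Sketch: estimating a transcript's half-life from a decay time course,
# assuming first-order decay N(t) = N0 * exp(-k t), so t1/2 = ln(2)/k.
# The abundance values below are synthetic, generated for a true
# half-life of 2 h; they are not measurements from this study.
import math

def half_life(times_h, abundances):
    """Least-squares fit of ln(abundance) vs. time; returns t1/2 in hours."""
    logs = [math.log(a) for a in abundances]
    n = len(times_h)
    t_mean = sum(times_h) / n
    y_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times_h, logs))
             / sum((t - t_mean) ** 2 for t in times_h))
    return math.log(2) / -slope

times = [0, 1, 2, 4, 6]
true_t_half = 2.0
data = [100.0 * 2 ** (-t / true_t_half) for t in times]
print(f"fitted half-life: {half_life(times, data):.2f} h")
```

Comparing such fitted half-lives between control and MK2-knockdown cells is the kind of comparison behind the stabilized/destabilized calls above.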
Clinical tissue samples
HNSCC tissue samples along with adjacent normal samples (n = 100) were surgically obtained from patients in the Department of Otorhinolaryngology, Head and Neck Surgery, RPGMCH, Kangra, India, with the patients' appropriate prior written informed consent. The samples were not checked for human papillomavirus infection. Similarly, formalin-fixed and paraffin-embedded (FFPE) human HNSCC and normal tissue blocks (n = 50) were obtained from the Department of Pathology, RPGMCH (Additional file 1: Table S1 contains patient details). Patients' identities were kept anonymous throughout, and the study was approved by the Institutional Ethics Committee (IEC) of CSIR-IHBT, Palampur, India (Approval No. IEC/IHBTP-3/Jan.2014).
Tissue pathology and immunohistochemistry
Collected samples and tissue blocks belonging to various subsites of the head and neck region (n = 50) were cut into 5 μm sections by microtomy and mounted on plain and lysine-coated glass slides for hematoxylin and eosin (H&E) and immunohistochemistry (IHC) staining, respectively. For H&E staining, the sections were deparaffinized, rehydrated and then stained with hematoxylin dye followed by eosin counterstaining, using standard reagents and protocols. The expression levels and activation status of specific proteins (listed in Additional file 1: Table S2) in the collected clinical samples were analyzed using IHC staining (protocol detailed in Additional file 1). Sections were then analyzed and imaged by a pathologist for cellular changes related to HNSCC pathogenesis using a bright-field microscope (Leica DM 3000 with Leica Application Suite V4 image capture software).
Cell lines and cell culture
Human HNSCC (FaDu, A-253, CAL27) and normal cell lines (HEPM, Hs680.Tr) were acquired from the American Type Culture Collection (ATCC), USA. Cells were cultured at 37°C and 5% CO2 in specific growth media (Eagle's Minimum Essential Medium (EMEM) for FaDu and HEPM; Dulbecco's Modified Eagle Medium (DMEM) for CAL27 and Hs680.Tr; McCoy's 5a modified medium for A-253; all procured from Invitrogen) supplemented with 10% fetal bovine serum (FBS) and 1% antibiotic-antimycotic solution (Invitrogen). All cell lines were properly quarantined, and cell morphology and growth were monitored under phase contrast before the start of any experimentation. We further analyzed the population doubling time and confirmed the cell lines free of contamination, as assessed by the MycoFluor™ Mycoplasma Detection Kit (Invitrogen) and Cell Culture Contamination Detection Kit (Invitrogen), at the time of their use in experiments. All procured cell lines were pre-authenticated by ATCC and used within 6 months of receipt for all experimental work. For hypoxia exposure, cells plated in Petri dishes were incubated for 48 h in 0.5% O2 at 37°C in a hypoxia chamber (Bactrox, Shel-Lab). After 24 and 48 h, the hypoxia-exposed cells were subjected to gene and protein expression analysis to validate the generation of hypoxia.
Western blotting
For protein expression analysis, clinical tissue samples (n = 20)/cultured cells were lysed in protein lysis buffer, resolved on acrylamide gels and transferred onto membranes using a standard protocol detailed in Additional file 1 [11]. Bands of specific proteins were detected and visualized using Clarity™ Western enhanced chemiluminescence (ECL) Substrate (Bio-Rad) with an ECL imager (Azure). For quantification of protein bands, ImageJ 1.49v software (NIH, USA) was used, and statistical analysis was performed by one-way ANOVA using GraphPad Prism software (version 7.00).
Sulforhodamine B assay
Sulforhodamine B (SRB) colorimetric assay was performed for cytotoxicity analysis of Actinomycin D (ActD) following standard protocol [12]. CAL27 cells were exposed to different concentrations of ActD (0.5, 1, 2.5, 5 and 10 μM) for 24, 48 and 72 h to evaluate its cytotoxicity. The experimental procedure has been detailed in Additional file 1.
Transfection of CAL27 cells and stable shRNA knockdown experiments
CAL27 cells at about 60% confluence were transfected, with the aid of Attractene reagent (Qiagen), with psi-U6.1 vectors expressing different 19-mer MK2-specific short hairpin RNA (shRNA) constructs (Additional file 1: Figure S1 and Table S3). A non-specific scrambled control shRNA in the psi-U6 vector (GeneCopoeia) was also used, and transfection was performed per the manufacturer's recommended protocol detailed in Additional file 1.
Before conducting assays, transfected cells were selected (1 μg/ml puromycin) to obtain stable transfectants and allowed to grow for at least two generations. Transfection was confirmed by imaging green fluorescent protein (GFP) reporter expression inside the cells. Cells stably expressing shRNAs with almost negligible MK2 expression were designated MK2 KD cells. Selected cells were further expanded, and MK2 KD was confirmed by quantitative real-time PCR (qRT-PCR) and Western blotting (WB).
qRT-PCR and determination of mRNA stability
Total RNA was extracted from collected surgical samples (n = 30 each for tumor and normal, taking 5 samples each from six different head and neck subsites) and from cell lines using the RNeasy mini kit (Qiagen) following the manufacturer's protocol (detailed in Additional file 1). Extracted RNA was quantified spectrophotometrically using a Nanodrop (Thermo Fisher Scientific) before qRT-PCR analysis using the Verso One-Step SYBR qRT-PCR kit (Invitrogen) according to the manufacturer's recommended protocol (detailed in Additional file 1). Primers for all selected human genes were custom synthesized by Integrated DNA Technologies (Additional file 1: Table S4), while TaqMan probes and primers were from Applied Biosystems (Additional file 1: Table S5). GAPDH was used as an endogenous control for relative quantification of qRT-PCR data [13].
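The relative quantification used here is the 2^-ΔΔCt method (as stated in the figure legends), with GAPDH as the endogenous control. A minimal sketch of the calculation follows; the Ct values are invented for illustration only and do not come from the study.

```python
# Sketch of 2^-ΔΔCt relative quantification (illustrative Ct values,
# not data from the study).

def ddct_fold_change(ct_target_tumor, ct_ref_tumor,
                     ct_target_normal, ct_ref_normal):
    """Relative fold change (R) of a target gene in tumor vs. normal,
    each normalized to an endogenous reference gene (e.g. GAPDH)."""
    dct_tumor = ct_target_tumor - ct_ref_tumor      # ΔCt in tumor
    dct_normal = ct_target_normal - ct_ref_normal   # ΔCt in normal
    ddct = dct_tumor - dct_normal                   # ΔΔCt
    return 2 ** (-ddct)                             # R = 2^-ΔΔCt

# Example: target crosses threshold 4 cycles earlier in tumor than
# in normal (after normalization), i.e. ΔΔCt = -4 → R = 16
print(ddct_fold_change(22.0, 18.0, 26.0, 18.0))  # → 16.0
```

A lower Ct means more starting template, which is why a negative ΔΔCt corresponds to up-regulation in the tumor sample.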
To evaluate transcript stability, CAL27-MK2 KD cells along with non-transfected controls were treated with 1 μM ActD (a sub-lethal dose). Total RNA was extracted at 0 min, 30 min, 1 h, 2 h, 4 h and 8 h after ActD treatment under both normoxic and hypoxic conditions. qRT-PCR was performed, and the relative change in gene expression was evaluated to assess the mRNA stability of specific genes.
Xenograft HNSCC mouse model
We developed a biologically relevant heterotopic xenograft model of HNSCC in immunocompromised mice. For this purpose, male NOD/SCID mice, 6-8 weeks of age, were procured from the Experimental Animal Facility of the Advanced Centre for Treatment, Research and Education in Cancer (ACTREC), Navi Mumbai, India. The animal study was approved by the Institutional Animal Ethics Committee (IAEC) of CSIR-IHBT, Palampur, India (Approval No. IAEC/IHBT-3/Mar 2017). The animals were housed in groups of four per individually ventilated cage (Tecniplast) under controlled conditions of 50 ± 10% humidity, 23 ± 2°C temperature, and a 12 h light/12 h dark cycle. The mice were randomly assigned into experimental or control groups and subjected to specific treatments according to the protocols (mouse grouping is detailed in Additional file 1: Table S6). For xenograft generation, one million cultured cells suspended in 100 μl of 1X PBS were injected subcutaneously into the right flank of each animal. Tumor growth and animal weights were regularly monitored. Seven weeks after graft inoculation, animals were sacrificed by CO2 inhalation and tumors were excised aseptically, weighed, used for RNA and protein extraction, and processed for paraffin embedding. Tumor sections were further analyzed using H&E and IHC staining.
Statistical analysis
All experimental procedures were conducted in triplicate unless indicated otherwise. The results presented here are expressed as means ± standard errors of the mean. Statistical significance between groups was analyzed by two-tailed, unpaired t-test using GraphPad Prism software (version 7.00). p-values < 0.05 were considered statistically significant.
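The two-tailed unpaired (Student's) t-test used for the pairwise comparisons above can be sketched as follows; the group values are invented for demonstration, and in practice the p-value would be read from the t distribution (as Prism does).

```python
# Sketch of the unpaired two-sample (Student's) t statistic with pooled
# variance; the two groups below are illustrative, not study data.
from statistics import mean, variance

def unpaired_t(a, b):
    """Return (t statistic, degrees of freedom) for two independent
    samples, assuming equal variances (classic unpaired t-test)."""
    na, nb = len(a), len(b)
    # pooled sample variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2

t, df = unpaired_t([1, 2, 3], [4, 5, 6])
print(round(t, 3), df)  # → -3.674 4
```

The two-tailed p-value is then the probability of |t| or larger under the t distribution with the returned degrees of freedom.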
Patient characteristics and histopathological confirmation of HNSCC
In the present study, we obtained 100 HNSCC and adjoining normal clinical tissue samples (mean age 56.4; range 19-85 years) from patients comprising 75 males (mean age 58.5; range 19-79 years) and 25 females (mean age 49.7; range 26-85 years), along with 50 FFPE HNSCC and normal tissue blocks. The clinical samples originated from ~15 distinct subsites within the head and neck region, the majority belonging to the glottis/epiglottis, pharynx, tongue, nasal cavity and larynx. In males, the age group with the highest number of incidences of HNSCC was 61-70 years (26 cases), while for females it was 41-50 years (6 cases). Anamnesis of 34 randomly selected patients revealed a history of smoking in 19 of 23 males and 1 of 11 females, as well as alcohol consumption (10 males). Of these 34 patients, 8 died in due course after surgery while 7 are living without complications; no final information regarding the survival status of the other 19 patients was available after their successful completion of post-operative treatment. Detailed patient information is presented in Additional file 1: Table S1.
Histopathological evaluation of the clinical samples confirmed tumors comprising squamous cells with moderate to severe nuclear pleomorphism. Destruction of the basement membrane with invasion of cells into the underlying submucosal to sub-epithelial region was observed in most cases (Additional file 1: Figure S2). Mucosal epithelial dysplastic change in certain sections suggested carcinoma in situ, as previously postulated [14]. Keratin pearl formation was observed in most of the tumors. In contrast, normal sections showed stratified squamous epithelium with an underlying intact basement membrane (Additional file 1: Figure S2).
MK2 and its downstream target RBPs are overexpressed and activated in HNSCC clinical tissue samples and cells

The p38/MAPK pathway is widely implicated in invasion and metastasis of various tumors [15]. IHC staining confirmed that MK2 expression and phosphorylation are comparatively higher in most of the tumor samples (Fig. 1a and b). Similarly, the upstream factor p38/MAPK was also overexpressed and activated in tumor samples. Normal tissue sections of the head and neck region showed consistent negative staining compared to their tumor counterparts (Fig. 1c-f). To examine the interaction of MK2 with RBPs, we determined the expression of MK2-regulated RBPs (CEBPδ, AUF1, HuR, CUGBP1 and TTP) using IHC and found elevated expression and activation in tumor samples. Overexpression of hypoxia-inducible factor-1 alpha (HIF-1α) in tumor sections confirmed hypoxic conditions in the tumor core (Fig. 1c-f).
Next, we evaluated and quantified MK2 protein expression and activation in HNSCC and adjacent normal tissues using WB analysis. The activation status and overexpression of the target proteins were calculated by normalizing them against their phospho-forms/β-tubulin, as shown in Fig. 2. Consistent with our histopathological and immunohistochemical findings, WB analysis confirmed that p38 and MK2 were significantly overexpressed and activated in the majority of tumor samples compared to adjacent normal tissues (Fig. 2a). Similarly, RBPs showed consistent overexpression and activation in clinical tumors compared to adjacent normal tissues (AUF1 was the only exception, with lower activation in tumor tissues) (Fig. 2a). These observations in clinical samples were validated and quantified in vitro by WB analysis in human HNSCC cell lines (FaDu, ATCC HTB-43, pharynx squamous cell carcinoma; A-253, ATCC HTB-41, submaxillary salivary gland epidermoid carcinoma; CAL27, ATCC CRL-2095, tongue squamous cell carcinoma) and normal head and neck cell lines (HEPM, ATCC CRL-1486, palatal mesenchyme; Hs680.Tr, ATCC CRL-7422, normal trachea). In agreement with our earlier findings, we observed a significant increase in p38 and MK2 protein levels and their activation in HNSCC cells compared to normal cell lines (Fig. 2b). Based on these findings, CAL27 cells, which showed significantly high expression and activation of MK2, were selected for further experimentation. Furthermore, in consonance with our previous observations in tumor tissues, we found that RBPs (with the same exception of AUF1) showed higher expression and activation in HNSCC cells relative to normal head and neck cells (Fig. 2b). β-tubulin was used as a loading control in this study.
In a nutshell, the expression and activation of p38, MK2 and MK2-regulated RBPs are elevated compared with respective normal controls, strengthening the hypothesis that MK2 regulates the pathogenesis of HNSCC via a probable interaction with RBPs.
Changes in the expression levels of genes involved in the pathogenesis of HNSCC in clinical tissue samples
To elucidate the role of MK2 in regulating the expression of key genes involved in HNSCC tumorigenesis, we performed expression analysis of selected genes involved in critical cellular processes such as cell-cycle regulation, angiogenesis, metastasis and cell death. These genes play crucial roles in HNSCC pathogenesis and harbor ARE sites in their 3′-UTRs for the binding of specific MK2-regulated RBPs. Total RNA extracted from the collected clinical samples served as template for determination of the RNA copy number of the target genes by qRT-PCR (listed in Additional file 1: Table S7). qRT-PCR analysis using SYBR Green chemistry showed that all 22 selected genes varied significantly in expression, with 15 genes up-regulated and 7 down-regulated in tumor tissues compared to controls (p < 0.05). Cyclin A was the most up-regulated gene, with a relative fold change (R) of ~40 compared to normal controls, while MKP-1 (R ≈ 0.2) was the most down-regulated gene (Fig. 3a and Additional file 1: Table S7). Additional file 1: Figure S3 graphically represents the qRT-PCR results (SYBR Green chemistry), showing the top five significantly up-regulated and down-regulated genes, respectively.

Fig. 1 p38, MK2, RBPs and HIF-1α are overexpressed and activated in HNSCC. Representative IHC staining images of clinical tissue samples determining the expression and activation status of p38, p-p38, mitogen-activated protein kinase-activated protein kinase-2 (MK2), p-MK2, CCAAT/enhancer-binding protein delta (CEBPδ), p-CEBPδ, AU-rich element binding factor-1 (AUF1), p-AUF1, human antigen R (HuR), p-HuR, CUG triplet repeat RNA binding protein-1 (CUGBP1), tristetraprolin (TTP), and hypoxia-inducible factor-1 alpha (HIF-1α) in: (a, c, e) normal tissue sections of the head and neck region showing consistent negative staining in normal stratified squamous epithelium, and (b, d, f) HNSCC tissue sections showing consistent positive staining. Sections were subjected to IHC staining using specific primary antibodies followed by appropriate secondary antibody as described in Materials and Methods. Expression of the above proteins was relatively high in tumors (brown colour) compared to normal controls. Images were captured at 200x; the scale bar denotes 50 μm.
Further, qRT-PCR results obtained using SYBR Green chemistry were validated using the more specific and reliable TaqMan chemistry for the top 10 significantly up/down-regulated genes (Fig. 3b and Additional file 1: Table S8). Our findings confirmed the validation for the 10 selected genes, which showed expression levels consistent with those observed using SYBR Green chemistry. Here, Cyclin A2 (R ≈ 22) was the most up-regulated gene in tumor samples, while c-Fos (R ≈ 0.1) was the most down-regulated gene (Fig. 3b and Additional file 1: Table S8). Statistical analysis affirmed that the expression of these genes in tumor tissues varied significantly relative to normal samples (p < 0.05) (Fig. 3b). Additional file 1: Figure S4 graphically represents the qRT-PCR (TaqMan chemistry) results for the up/down-regulated genes. Taken together, our results indicated that pro-inflammatory genes such as VEGF and TNF-α are up-regulated while tumor suppressor genes such as p27 and MKP-1 are down-regulated in tumor samples compared to normal controls. In brief, our findings suggested that the expression levels of these genes varied significantly in HNSCC compared to normal samples, supporting their crucial role in HNSCC pathogenesis.

Fig. 2 Western blot analysis confirmed higher levels of expression of specific proteins. Western blotting was performed to evaluate the expression levels and activation status of p38, p-p38, MK2, p-MK2, HuR, p-HuR, CEBPδ, p-CEBPδ, AUF1, p-AUF1, CUGBP1 and TTP proteins in extracts prepared from: (a) human clinical surgical samples and normal adjacent controls; (b) human HNSCC cell lines (FaDu, A-253 and CAL27) and normal human cell lines of the head and neck region (HEPM and Hs680.Tr). We observed higher expression levels and activation status of these proteins in tumor samples and HNSCC cells compared with normal control samples and cell lines. β-tubulin served as a loading control. The graphs represent change in protein expression/activation calculated as a ratio (arbitrary units). The results are expressed as means ± standard errors of the mean, n = 3. α, p < 0.05; αα, p < 0.01; ααα, p < 0.001 represent the statistical significance of protein expression in tumor tissue 1 compared with normal control 1; β, p < 0.05 represents tumor tissue 2 compared with normal control 2; γ, p < 0.05; γγ, p < 0.01; γγγ, p < 0.001 represent tumor tissue 3 compared with normal control 3; δδδ, p < 0.001 represents tumor tissue 4 compared with normal control 4; and θθθθ, p < 0.0001 represents tumor tissue 5 compared with normal control 5. Similarly, *, p < 0.05; **, p < 0.01; ***, p < 0.001 and ****, p < 0.0001 represent the statistical significance of protein expression in human HNSCC cell lines compared to HEPM, while #, p < 0.05; ##, p < 0.01; ###, p < 0.001 and ####, p < 0.0001 represent human HNSCC cell lines compared to Hs680.Tr.
MK2 regulates the expression of important genes and plays a crucial role in HNSCC pathogenesis
To ascertain the role of MK2 in regulating the expression of the genes involved in HNSCC pathogenesis identified above, MK2 expression levels in MK2 KD cells were confirmed by WB analysis against non-transfected controls using an MK2-specific antibody. Transfection was confirmed by imaging GFP reporter expression inside the cells using immunofluorescence microscopy (Carl Zeiss), an imaging flow cytometer (Amnis, Merck) and the EVOS FL Auto 2 imaging system (Thermo Fisher Scientific) (Additional file 1: Figure S5A-C). Negligible expression in shRNA 2 and Mix (an equal-quantity combination of shRNAs 1, 2, 3 and 4) transfected cells compared to normal and scrambled controls confirmed that MK2 protein expression was significantly suppressed, thereby validating MK2 KD (Fig. 4a). GFP was used as an input control in this case. Furthermore, RNA was extracted from MK2 KD cells to evaluate the percentage of MK2 knockdown by qRT-PCR analysis. Our findings confirmed that, compared to the non-transfected control, the shRNA-transfected cells showed ~80% MK2 knockdown (Fig. 4b).
Finally, we examined the effect of MK2 KD on the transcript copy number of the previously identified genes playing critical roles in HNSCC pathogenesis. As expected, qRT-PCR analysis employing both SYBR Green and TaqMan chemistry revealed a reversal in the expression levels of the previously validated genes in MK2 KD cells (Additional file 1: Figure S6 and Table S9). This finding further affirmed the pivotal role of MK2 in regulating the expression of HNSCC pathogenesis-linked genes.
MK2 regulates the transcript stability of TNF-α, VEGF, p27 and MKP-1 transcripts
To further investigate the transcript-regulatory role of MK2, the stability of the qRT-PCR-validated transcripts was assessed under both normoxic and tumor-microenvironment-mimicking hypoxic conditions. qRT-PCR showed that after 48 h of hypoxia exposure, CAL27 cells exhibited a ~6-fold increase in transcript levels of HIF-1α (a hypoxia indicator) (Additional file 1: Figure S7). WB further confirmed higher HIF-1α protein expression compared to normoxic cells (Additional file 1: Figure S8), validating the generation of hypoxia in the cultured cells, which were then used in the mRNA decay study in the presence and absence of MK2.
CAL27 cells cultured under both normoxia and hypoxia were treated with different concentrations of ActD to assess its effect on the cells. Cytotoxicity evaluation through the SRB assay established the cytotoxic potential of ActD (Additional file 1: Figure S9) and enabled us to choose a sub-lethal concentration of ActD for the transcript stability experiment. To further test our hypothesis that MK2 is directly involved in regulating the transcript stability of key genes involved in HNSCC pathogenesis, we evaluated the mRNA turnover of the ten previously validated transcripts in both normoxia- and hypoxia-exposed CAL27-MK2 KD cells. Cells were treated with ActD to block transcription, and transcript levels were determined by qRT-PCR at different time points.

Fig. 3 Relative gene expression levels of HNSCC pathogenesis-specific genes in clinical tissues. (a) Graphical representation of qRT-PCR results (SYBR Green chemistry). (b) Graphical representation of qRT-PCR results (TaqMan chemistry). The graphs show the relative fold change values/relative gene expression of various genes involved in HNSCC pathogenesis. Histograms represent the levels of up/down-regulation of a gene compared to control samples. Relative gene expression was obtained after normalization with endogenous human GAPDH, and the difference in threshold cycle (Ct) between tumor and normal tissues was determined using the 2^-ΔΔCt method. All qRT-PCR assays were performed in triplicate. The results are expressed as means ± standard errors of the mean. *, p < 0.05; **, p < 0.01; ***, p < 0.001 and ****, p < 0.0001 represent the statistical significance compared with control.
Results of the kinetic study, analyzed using linear regression of the mRNA decay rate, established that MK2 KD increased the half-life (t1/2) of p27 and MKP-1 transcripts, while an opposing effect was observed for TNF-α and VEGF transcripts in CAL27 cells. Under normoxic conditions, the t1/2 of p27 transcripts increased from ~0.13 to ~1.3 h, while in hypoxia it increased from ~2.7 to ~4 h (Fig. 5a and b). Similarly, hypoxia tended to stabilize MKP-1 transcripts, increasing t1/2 from ~1 to ~3 h. Our findings revealed that MK2 KD destabilized TNF-α transcripts, with t1/2 decreasing from ~1.8 to ~0 h in normoxia and from ~3.4 to ~1.7 h in hypoxia. In normoxia, we observed a robust decay that could not be captured by our best-fit linear regression equation. Similarly, destabilization of VEGF transcripts occurred under normoxic conditions, where t1/2 decreased from ~3.4 to ~0.3 h in CAL27-MK2 KD cells. Statistical analysis of transcript t1/2 revealed that the destabilization of VEGF transcripts in normoxia was significant (p < 0.05). Similarly, under hypoxic conditions, the stabilization of MKP-1 transcripts and the decay of TNF-α transcripts in CAL27-MK2 KD cells were significant (Fig. 5a and b).

Fig. 4 Validation of shRNA-GFP construct transfection into CAL27 cells and MK2 knockdown. (a) For validation of MK2 KD, MK2 expression levels were confirmed by Western blot analysis against non-transfected controls using an MK2-specific antibody. Negligible expression in shRNA 2 and shRNA Mix transfected cells compared to normal and scrambled controls confirmed that MK2 protein expression was downregulated in these. C = non-transfected control; 1, 2, 3, 4, SC = shRNA-GFP constructs 1, 2, 3, 4 and scrambled control transfected, respectively; Mix = co-transfected with shRNA-GFP constructs 1, 2, 3, 4. GFP served as an input control while β-tubulin was used as a loading control. (b) qRT-PCR analysis of the CAL27-MK2 KD cells established that shRNA transfection led to ~80% knockdown of MK2 compared to the non-transfected control, as shown in the histograms. All qRT-PCR assays were performed at least three times. The results are expressed as means ± standard errors of the mean.

Cultured cells exposed to hypoxia confirmed the role of MK2 in post-transcriptional gene regulation in the tumor microenvironment. In a nutshell, hypoxia stabilized p27 and MKP-1 transcripts while causing destabilization of TNF-α in CAL27-MK2 KD cells. Similarly, normoxia led to the decay of TNF-α and VEGF transcripts, while stabilizing p27 in the absence of MK2. Taken together, our results indicated that MK2 directly controls mRNA turnover at the post-transcriptional level by regulating transcript stability.
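The half-life estimation described above, first-order decay fitted by linear regression of log-transformed relative mRNA levels over the ActD time course, can be sketched as follows. The decay series below is invented for illustration (constructed to halve roughly every 2 h); it is not data from the study.

```python
# Sketch: t1/2 from linear regression of ln(relative mRNA level) vs. time.
# For first-order decay, ln(level) = a + b*t, and t1/2 = ln(2)/(-b).
import math

def half_life(times_h, rel_levels):
    """Least-squares fit of ln(level) against time; returns t1/2 in hours."""
    ys = [math.log(v) for v in rel_levels]
    n = len(times_h)
    mx = sum(times_h) / n
    my = sum(ys) / n
    slope = (sum((t - mx) * (y - my) for t, y in zip(times_h, ys))
             / sum((t - mx) ** 2 for t in times_h))
    return math.log(2) / -slope

# Illustrative series over the sampling points used in the study
times = [0, 0.5, 1, 2, 4, 8]                    # h after ActD treatment
levels = [1.0, 0.84, 0.71, 0.5, 0.25, 0.0625]   # ~halving every 2 h
print(round(half_life(times, levels), 2))        # → 2.0
```

A longer t1/2 in MK2 KD versus control cells (as seen for p27 and MKP-1) indicates transcript stabilization; a shorter t1/2 (TNF-α, VEGF) indicates destabilization.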
MK2 KD attenuates tumor progression in xenograft model
In order to closely mimic the tumor microenvironment and validate our in vitro findings, we generated a xenograft model in immunocompromised mice. Gross tumor growth was observed in all animals grafted with CAL27-MK2 wild type (MK2 WT ) and CAL27-MK2 KD cells, indicating successful xenograft generation. The tumors were well circumscribed and totally encapsulated, and showed no metastasis to any other organ or interaction with surrounding tissues over our study period of seven weeks post-grafting. Tumors were moderately to well differentiated with mild microinvasion of capsules. The in vitro results were recapitulated in vivo. The xenografts showed slower and less aggressive tumor progression in CAL27-MK2 KD derived tumors compared to CAL27-MK2 WT derived tumors, suggesting that loss of MK2 attenuated tumor growth. Histopathological analysis suggested that tumors derived from the CAL27-MK2 WT group were more aggressive and less differentiated (Fig. 6a). Further, IHC showed that the expression and activation status of RBPs was more prevalent in the CAL27-MK2 WT group than in the CAL27-MK2 KD group (Fig. 6b). Similarly, protein expression analysis using tumor lysates showed that the expression of p38, MK2 and RBPs was higher in the CAL27-MK2 WT group than in the CAL27-MK2 KD group, suggesting a prominent role of MK2 in HNSCC progression via RBP-mediated gene regulation (Fig. 6c). Furthermore, qRT-PCR analysis showed that VEGF, TNF-α and MKP-1 transcripts were upregulated while p27 was downregulated in tumors derived from CAL27-MK2 KD cells compared with CAL27-MK2 WT tumors (Fig. 6d). These results are in consonance with our in vitro findings in CAL27-MK2 KD cells. Parameters such as mouse weight, gross tumor weight, and biochemical and haematological analyses were also evaluated (Additional file 1: Tables S10 and S11).

Fig. 5 Results of the kinetic study, analyzed using linear regression of the mRNA decay rate, are graphically presented. Following transcriptional inhibition with ActD, qRT-PCR was performed and the results were used to plot the decay curves and half-lives of (a) p27, TNF-α and VEGF in normoxia and (b) p27, TNF-α and MKP-1 in hypoxia. GAPDH serves as an endogenous control. The graphs show that MK2 KD increased the stability of p27 and MKP-1 transcripts but decreased the half-life (t1/2) of TNF-α and VEGF transcripts in CAL27 cells. All qRT-PCR assays were performed in triplicate. The results are expressed as means ± standard errors of the mean. *, p < 0.05; **, p < 0.01 and ***, p < 0.001 represent the statistical significance compared with control.
Discussion
HNSCC accounts for 4.3% of all cancer cases globally, with an estimated half a million new cases worldwide annually, ranking HNSCC sixth among all cancers in incidence [16]. Post-transcriptional regulation of gene expression in tumor versus normal tissues is a largely unexplored area and is especially poorly understood in HNSCC. Transcript processing is increasingly recognized as one of the most important regulatory steps of gene expression in mammals. It is believed that specific interactions between cis-acting structural elements (AREs) located in the 3′-UTRs of proto-oncogenes, growth factors, cytokines, transcription factors and other important proteins and trans-acting RBPs change the protein translation landscape of stressed cells [10,17].

Fig. 6 Xenograft establishes that MK2 KD attenuates tumor progression. (a) Histopathological examination revealed less differentiated and more aggressive tumors in CAL27-MK2 WT than CAL27-MK2 KD. Images were captured at 100x; the scale bar denotes 100 μm. (b) IHC showed that the expression and activation status of RBPs was more prevalent in the CAL27-MK2 WT group than in the CAL27-MK2 KD group. Images were captured at 880x (40x objective); the scale bar denotes 100 μm. (c) Protein expression analysis using tumor lysates showed that the expression of p38, MK2 and RBPs was higher in CAL27-MK2 WT than in CAL27-MK2 KD. β-tubulin served as a loading control. The graphs represent change in protein expression/activation calculated as a ratio (arbitrary units). The results are expressed as means ± standard errors of the mean, n = 3. Significant differences between the CAL27-MK2 WT and CAL27-MK2 KD groups are indicated by different letters (p < 0.05). (d) qRT-PCR analysis showed that VEGF, TNF-α and MKP-1 transcripts were upregulated while p27 was downregulated in CAL27-MK2 KD compared to CAL27-MK2 WT tumors (as evaluated by SYBR Green and TaqMan chemistry). Relative gene expression was obtained after normalization with endogenous human GAPDH, and the difference in threshold cycle (Ct) between the CAL27-MK2 WT and CAL27-MK2 KD groups was determined using the 2^-ΔΔCt method. All qRT-PCR assays were performed in triplicate. The results are expressed as means ± standard errors of the mean. ***, p < 0.001 represents the statistical significance compared with control.

p38/MAPK, a signal-transducing enzyme present in all eukaryotes, is the prime regulatory hub where inflammation and stress responses are regulated [18]. It plays a major role in regulating MK2 expression in response to diverse stimuli and triggers elaborate biological signal transduction cascades, allowing cells to interpret a wide range of external signals [19,20]. MK2 activation generates a plethora of biological effects targeting diverse cellular processes, including cell-cycle progression, cytoskeletal architecture, transcript stability and protein translation, via regulating the activation and deactivation cycles of RBPs [10]. Surprisingly, to date, the biological significance of MK2 in cancer is not well elucidated. A better understanding of the role of MK2 in tumor progression could provide new insights into the enigma of post-transcriptional gene regulation in cancer.
To this end, our study aimed to explore the role of MK2 in post-transcriptional control of crucial genes involved in HNSCC pathogenesis. Here, we demonstrate that MK2 plays an essential role in post-transcriptional gene expression in HNSCC by regulating mRNA turnover. p38/MK2 signaling establishes a pivotal inflammatory axis, with substantial reports affirming its critical role in stress responses [21,22]. Recent reports of MK2 overexpression in tumors suggest that its oncogenic activity is required for malignant growth [23,24]. In consonance with these findings, we have identified that MK2 is consistently overexpressed in HNSCC and regulates the transcript stability of genes involved in HNSCC progression.
RBPs such as TTP, HuR, AUF1, CUGBP1 and CEBPδ can directly or indirectly control the turnover of mRNAs encoding tumor pathogenesis-related factors. Aberrant expression of RBPs can alter gene expression patterns and, subsequently, contribute to carcinogenesis [25,26]. The complex mechanisms of post-transcriptional regulation of cytokines via MK2-dependent phosphorylation of RBPs have been discussed in several excellent reviews [18,20]. Here we have established significant overexpression of MK2 in tumor tissues and HNSCC cells. Further, we observed that MK2 activates TTP, HuR, CUGBP1 and CEBPδ while deactivating AUF1. These activation and deactivation cycles of RBPs in turn control the downstream genes in this pathway. In this report, we have also found significant up/down-regulation in the transcript levels of crucial genes regulating HNSCC pathogenesis in clinical samples compared to adjacent normal tissues. We also investigated the role of MK2 in modulating the mRNA turnover of specific genes in HNSCC cells under a hypoxic tumor microenvironment and normoxia. Hypoxia, a common feature in the majority of solid tumors, supports more aggressive disease and acts as a strong driving force in inducing survival responses. In comparison to non-transformed cells, tumor cells tend to overcome cell-cycle arrest and sustain proliferation to thrive in the hypoxic tumor milieu [27]. We have elucidated the role of MK2 in regulating mRNA turnover by reporting that MK2 controls the stability of TNF-α, VEGF, p27 and MKP-1 transcripts in the tumor microenvironment. MK2 KD destabilized TNF-α and VEGF transcripts, while the increase in t1/2 of p27 and MKP-1 transcripts established that, in addition to changing the transcriptional landscape of mRNAs, MK2 is critically involved in the regulation of HNSCC pathogenesis.
To the best of our knowledge, this is the first study detailing the p38-mediated signaling leading to MK2 activation and its putative role in HNSCC progression.
Recently it has been shown that post-transcriptional control of TNF-α synthesis is mainly MK2-mediated via AREs in the 3′-UTR of its mRNA [28]. Here, we report that TNF-α transcripts are destabilized in MK2 KD cells. Our findings are consistent with a past report showing that MK2 deficiency down-regulates TNF-α production [29]. Previous investigations have suggested the involvement of MK2 in VEGF-induced cell migration [30], and several reports have proposed MK2-RBP-mediated stabilization and elevation of VEGF expression in hypoxic tumors [9]. Along these lines, our results demonstrate that MK2 KD facilitates post-transcriptional decay of VEGF mRNA, supporting the above-mentioned role of MK2 in the regulation of VEGF in tumors. Our hypothesis is further strengthened by reports of an impaired inflammatory response in MK2-deficient mice [31]. p27, a critical factor controlling cellular proliferation, functions as a tumor suppressor, and its reduced expression is associated with poor patient survival. Moreover, its loss has been implicated in tumorigenesis and is linked to a more severe phenotype [32]. The MAPK pathway appears to be involved in the negative control of this inhibitor by augmenting its degradation, thereby presumably supporting unrestricted cell growth. Our findings agree well with this hypothesis, as we have shown that MK2 KD tends to stabilize p27 by increasing the t 1/2 of its transcripts. Various reports have confirmed that in aggressive tumors such as HNSCC, low levels of p27 are due to its decreased stability, further validating our findings [33]. Through its phosphatase action, MKP-1 regulates the magnitude and duration of MAPK signaling through negative-feedback regulation, and it has been well documented that the short-lived MKP-1 mRNA is rapidly induced by different stresses [34].
Our results suggest that MK2 KD tends to stabilize MKP-1 transcripts, thus supporting the hypothesis that mRNA stabilization of this negative regulator possibly
|
v3-fos-license
|
2023-02-17T14:25:19.434Z
|
2017-11-09T00:00:00.000
|
256938587
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-017-15663-4.pdf",
"pdf_hash": "a7ca3ea42c166bb0d51839cd968fc04f53292a81",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2235",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "a7ca3ea42c166bb0d51839cd968fc04f53292a81",
"year": 2017
}
|
pes2o/s2orc
|
Toxoplasma gondii tachyzoite-extract acts as a potent immunomodulator against allergic sensitization and airway inflammation
Epidemiological and experimental studies have shown an inverse relationship between infections with certain parasites and a reduced incidence of allergic diseases. We and others have shown that infection with Toxoplasma gondii prevents the development of allergy in mice. To establish whether this beneficial effect could be recapitulated by soluble products of this parasite, we tested an extract derived from T. gondii tachyzoites. Immunization of BALB/c mice with tachyzoites lysate antigen (TLA) elicited mixed Th1/Th2 responses. When TLA was applied together with the sensitizing ovalbumin (OVA), the development of allergic airway inflammation was reduced, with decreased airway hyperresponsiveness associated with reduced peribronchial and perivascular cellular infiltration, reduced production of OVA-specific Th2 cytokines in lungs and spleens and reduced levels of serum OVA-specific IgG1 as well as IgE-dependent basophil degranulation. Of note, TLA retained its immunomodulatory properties, inducing high levels of IL-6, TNFα, IL-10 and IL-12p70 in bone marrow-derived dendritic cells after heat-inactivation or proteinase K-treatment for disruption of proteins, but not after sodium metaperiodate-treatment that degrades carbohydrate structures, suggesting that carbohydrates may play a role in immunomodulatory properties of TLA. Here we show that extracts derived from parasites may replicate the benefits of parasitic infection, offering new therapies for immune-mediated disorders.
TLA triggers the production of pro- and anti-inflammatory cytokines in naïve splenocytes in vitro.
To determine the immunostimulatory capacity of TLA in vitro, splenocytes from naïve BALB/c mice were cultured for three days with 5 or 10 µg/ml TLA. In comparison to cells stimulated with medium only, TLA-stimulated cells produced significant levels of pro-inflammatory cytokines such as IL-6, IFNγ and TNFα, as well as the anti-inflammatory cytokine IL-10 (Fig. 1).
Immunization with TLA elicits mixed Th1/Th2 humoral and cellular immune responses.
To test the humoral and cellular immune responses induced by TLA in vivo, BALB/c mice were immunized with TLA in alum or sham-treated with PBS in alum (Fig. 2a). Levels of TLA-specific antibodies in serum were measured on days 0 and 21. TLA-immunization led to significant production of TLA-specific IgG2a (a Th1-associated isotype), as well as TLA-specific IgG1 (a Th2-associated isotype), when compared to levels in pre-immune serum collected on day 0 and to levels in serum of sham-treated mice collected on day 21 (Fig. 2b). However, TLA-specific levels of another Th2-associated antibody, IgE, remained unchanged (the measured OD for the TLA group was 0.17 ± 0.005 and 0.18 ± 0.007 on days 0 and 21, respectively; P = 0.2481). Restimulation of spleen cells from TLA-immunized mice with TLA ex vivo led to increased levels of the Th1-related cytokines IL-6, IFNγ and TNFα, as well as the Th2 cytokines IL-5 and IL-4 and regulatory IL-10, compared to cultures incubated with medium only. Similarly, stimulation of splenocytes from TLA-immunized mice with TLA induced higher production of most of these cytokines in comparison to levels detected in spleen cell cultures from TLA-stimulated sham-treated mice (Fig. 2c).
TLA reduces AHR and prevents allergic inflammation in lungs.
To investigate whether TLA can modulate allergic lung inflammation, we used a mouse model of systemic sensitization and intranasal challenge with OVA. Mice were immunized i.p. with alum-adjuvanted OVA in the presence (TLA + OVA/OVA group) or absence (OVA/OVA group) of TLA. Control groups were immunized with PBS (PBS/PBS group) or TLA (TLA/PBS group) in alum only and challenged with PBS as indicated in Fig. 3a. A dose-dependent increase in AHR to methacholine, a nonspecific bronchoconstrictor, is one of the characteristics of allergic asthma 33 . As expected, intranasal challenge with OVA led to increased AHR in OVA/OVA mice in comparison to PBS/PBS mice, as demonstrated by increased levels of methacholine-induced PenH (Fig. 3b). Furthermore, OVA challenge led to increased numbers of eosinophils in BALF and increased numbers of inflammatory cells and goblet cell hyperplasia in lungs in comparison to PBS/PBS mice ( Fig. 3c-f). Intraperitoneal treatment of sensitized mice with TLA (TLA + OVA/OVA) significantly suppressed the development of AHR compared to OVA/OVA mice, resulting in PenH values similar to those seen in negative controls (Fig. 3b). Reduced AHR was associated with reduced eosinophils in BALF (Fig. 3c,d) and reduced infiltration of inflammatory cells into perivascular and peribronchiolar connective tissues ( Fig. 3e; H&E staining). In parallel, TLA reduced goblet cell hyperplasia in lungs in comparison to OVA/OVA group ( Fig. 3e; PAS staining). Immunization with TLA without OVA-sensitization and challenge (TLA/PBS) led to a similar outcome as sham-treatment (PBS/PBS) (Fig. 3b-f). Stimulation of lung cell cultures of OVA/OVA mice with OVA ex vivo led to high levels of IL-5, IL-4, IL-13, and IL-10 ( Fig. 4). Lung cells of TLA + OVA/OVA mice exhibited reduced IL-5 and IL-13, but increased levels of IFNγ in response to OVA (Fig. 4). Levels of IL-4 and IL-10 were reduced as well, but the difference did not reach statistical significance (Fig. 4).
TLA inhibits the development of Th2-type allergen-specific immune responses. To determine whether TLA could also affect the induction of allergen-specific humoral responses, we measured the levels of OVA-specific antibodies in serum. Samples were collected before sensitization (day 0) and after sensitization and challenge, at sacrifice (day 25). As shown in Fig. 5a, mice sensitized and challenged with OVA (OVA/OVA) exhibited increased levels of specific antibodies in sera collected on day 25 in comparison to levels in samples collected on day 0 (Fig. 5a). Although TLA-treatment in the TLA + OVA/OVA group had no impact on the levels of OVA-specific IgG2a (Fig. 5a), it markedly reduced levels of IgG1 as well as the IgE-dependent basophil degranulation in the RBL assay (Fig. 5a). Furthermore, we aimed to exclude the possibility that certain soluble factors in sera of TLA + OVA/OVA mice interfere with basophil degranulation in the RBL assay. Therefore, RBL cells were preincubated with sera of TLA-immunized mice prior to incubation with sera of OVA-sensitized mice. The results indicate that the reduced levels of β-hexosaminidase in cultures treated with sera from TLA + OVA/OVA mice, in comparison to levels obtained by incubation with sera from OVA/OVA mice, are not caused by soluble factors present in the sera of TLA-treated mice (Supplementary Fig. S1). The levels of OVA-specific antibodies in TLA/PBS mice were comparable with levels detected in PBS/PBS mice (Fig. 5a). Additionally, restimulation of spleen cell cultures of TLA + OVA/OVA mice with OVA led to reduced production of the Th2 cytokines IL-5 and IL-4, as well as of regulatory IL-10, in comparison to cultures of OVA/OVA mice. However, TLA-treatment of OVA-sensitized mice led to increased levels of IFNγ in TLA + OVA/OVA mice in comparison to mice in the OVA/OVA group (Fig. 5b).
In these experiments, the PBS/PBS and OVA/OVA groups were evaluated as negative and positive controls, respectively, to determine the effect of co-application of TLA and the sensitizing allergen on the development of airway inflammation. Comparison to the PBS/PBS group makes it possible to resolve the potential consequences of TLA-treatment on immunological parameters in the lungs. However, in order to determine the baseline for measurement of TLA-induced effects (interference during sensitization) on allergen-induced airway inflammation, we additionally measured humoral and cellular immune responses in non-sensitized, OVA-challenged mice and compared them to the PBS/PBS and OVA/OVA groups. Here, we could clearly show that non-sensitized OVA-challenged mice exhibit comparable levels of AHR, eosinophils in BAL, inflammatory infiltrates, Th2 cytokines in the lungs and OVA-specific antibodies in serum when compared to PBS/PBS mice. Furthermore, all measured parameters were markedly different when compared to levels observed in OVA/OVA mice (Supplementary Fig. S2-4).

Figure 1. Cytokine production of TLA-stimulated splenocytes. Splenocytes from naïve mice were cultured with media only (Med) or with 10 or 5 µg/ml TLA for 72 h. Ultra-pure lipopolysaccharide from E. coli (LPS; 1 µg/ml) and Pam3CSK4 (Pam3; 1 µg/ml) were used as positive controls. Levels of cytokines in culture supernatants were determined by ELISA. Three replicate cultures with cells from individual mice were measured. All data are representative of at least three independent experiments performed using different batches of TLA. Error bars show mean ± SEM. Results of Student's t test: **P < 0.01, ***P < 0.001.
Immunostimulatory potential of TLA is not impaired by heat-inactivation. To investigate whether the TLA-components responsible for immunomodulation are heat-stable, TLA was heated at 96 °C for 15 min to denature proteins (TLA H) and tested in vitro and in vivo. Stimulation of splenocytes from naïve mice with TLA or TLA H led to comparable production of IL-6, TNFα and IL-10. TLA H was more potent in the induction of IFNγ than native TLA (Fig. 6a). In order to investigate the effect of heat-inactivation in vivo, mice were immunized twice with TLA or TLA H, and TLA-specific humoral and cellular responses were compared between the two groups (Fig. 6b,c). As shown in Fig. 6b, levels of TLA-specific antibodies on day 21 were comparable between the two experimental groups. Stimulation of spleen cells of mice immunized with TLA or TLA H with native TLA led to comparable levels of IL-6, TNFα, IL-5, IL-4 and IL-10 (Fig. 6c). Interestingly, immunization with heat-inactivated TLA led to increased production of IFNγ in stimulated splenocytes compared to immunization with native TLA.

TLA loses its immunostimulatory potential upon deglycosylation. Our results suggest that proteins are not the key players in TLA-induced immunomodulation; therefore, TLA was exposed to different biochemical treatments and tested in vitro. In order to test the hypothesis that carbohydrates rather than proteins are involved in the immunomodulation, TLA was treated with proteinase K (TLA-ProtK) to digest proteins into peptides, or with sodium metaperiodate (TLA D) to modify glycan moieties. Sodium metaperiodate treatment destroys carbohydrate integrity by altering the three-dimensional structures of the molecules 34 . Using a panel of nine biotinylated lectins in an ELISA assay, we could show that TLA contains a wide range of carbohydrate moieties, which were removed by sodium metaperiodate treatment (Supplementary Fig. S5).
TLA-ProtK, TLA D, as well as TLA and TLA H, were used for in vitro stimulation of BMDC, which were isolated from naïve BALB/c mice. Levels of cytokines IL-6, TNFα, and IL-12p70 did not differ between TLA, TLA H and TLA-ProtK and were significantly increased compared to medium levels. Levels of IL-10 were reduced in TLA-ProtK-stimulated cells in comparison to levels observed in cultures with native TLA. Interestingly, TLA D failed to stimulate cytokine production in BMDC, which indicates that glycan moieties might be responsible for the immunomodulatory effect of TLA (Fig. 7).
Discussion
Studies in both humans and animal models have demonstrated that certain parasitic infections modulate host immune responses, which is often associated with protection against allergic diseases 6,8,35 . For example, respiratory allergies are less frequent among individuals infected with the protozoan parasite T. gondii 21,23 . In experimental settings, we have previously shown that T. gondii infection, as well as an inactivated extract of T. gondii oocysts, prevented the development of airway inflammation in a mouse model of birch pollen allergy 24,26 . Similarly, Fenoy et al. have shown that T. gondii infection reduced airway inflammation in a mouse model of OVA-induced allergy 25,36 . In the present study, we tested the immunomodulatory properties of an extract derived from T. gondii tachyzoites in vitro and in vivo, in naïve mice and in an OVA-allergy model. Although the immune response to acute infection with T. gondii in humans and mice is shifted towards the Th1 type, which is mandatory to control parasite replication 24,37,38 , we observed that immunization of mice by i.p. injection of TLA in alum induced mixed Th1/Th2 humoral and cellular immune responses. We detected substantial levels of T. gondii-specific serum IgG2a, a Th1-associated isotype, as well as IgG1, a Th2-associated isotype, in samples collected at sacrifice after two immunizations. Similarly, high levels of both Th1 and Th2 cytokines, such as IFNγ and TNFα, or IL-4 and IL-5, respectively, were detected in TLA-stimulated splenocytes. These results are in agreement with our previous findings, as well as with the study by Costa-Silva et al., who described that immunization of mice with T. gondii-derived products induced mixed humoral and cellular Th1 and Th2 responses 39,40 .
Furthermore, we could show that TLA application in combination with allergic sensitization and challenge suppressed the allergic immune response to OVA, demonstrated by reduced airway hyperresponsiveness, reduced influx of eosinophils, reduced peribronchial and perivascular infiltration of inflammatory cells, as well as reduced Th2 cytokines in lungs compared to sensitized controls. In parallel, levels of Th2 cytokines in OVA-restimulated splenocytes, as well as of the Th2-associated specific serum antibody IgG1, were reduced in TLA-treated mice in comparison to sham-treated sensitized controls. The potential of TLA to reduce Th2 responses has been documented by Liesenfeld et al. in a mouse model of infection with Nippostrongylus brasiliensis 37 . The authors showed that administration of TLA prior to N. brasiliensis infection reduced levels of total IgE in serum and numbers of eosinophils in the periphery. However, the suppressive effect on Th2 responses was short-lived. The experimental setup in our study does not allow us to investigate whether the anti-allergic effects of TLA are long-lasting; follow-up experiments will therefore focus on this aspect. Here we have shown that TLA-treatment led to increased levels of IFNγ in OVA-restimulated lung and spleen cell cultures. A similar effect was observed previously in allergic mice with T. gondii infection 24,36 . Although high levels of IFNγ have been associated with exacerbated symptoms of asthma in humans and mice 41 , the importance of this cytokine in allergy prevention was demonstrated, for example, in the study by Brand et al., who could show that the farm-derived bacterial strain Acinetobacter lwoffii F78 prevented the development of allergy in an IFNγ-dependent manner 42 . Also, induction of IFNγ has proven to be crucial in the prevention of house dust mite allergy 43 .
On the other hand, a regulatory cytokine IL-10 has been shown to play an important role in parasite-induced suppression of allergy [44][45][46] . TLA induced high levels of IL-10 in vitro in spleen cell cultures derived from naïve or TLA-immunized mice and in cultures with BMDC. However, in TLA-treated allergic mice this cytokine was not detectable in OVA-stimulated lung and spleen cell cultures. This suggests that IL-10 is not a key cytokine involved in TLA-induced allergy prevention and this is in agreement with findings by Fenoy et al., who could show that T. gondii-induced suppression of allergy is IL-10-independent 47 . We have previously shown that decreased levels of antigen-specific IL-10 in cell cultures could be explained by increased uptake via upregulated IL-10 receptor 48 . However, the cellular source of IFNγ and its role, as well as the exact role of IL-10 in allergy-suppression by TLA still remains to be investigated.
In order to investigate the nature of molecules responsible for immunomodulatory effects, TLA was heat-inactivated or treated with proteinase K to elucidate the relevance of proteins, or treated with sodium metaperiodate to clarify the role of glycans. We have demonstrated that deglycosylated TLA failed to induce IL-6, TNFα, IL-10 or IL-12p70 production in stimulated dendritic cells, whereas interference with protein activities did not influence stimulatory properties of TLA, suggesting that heat-stable carbohydrates might play an important role in parasite-host interaction and immunomodulatory effect of TLA.
Parasite-derived glycans have been shown to play an important role in immunomodulation. For example, glycoconjugates from Trichuris suis and Fasciola hepatica, or the major immunogenic glycan element of schistosomes, Lewis x , have been shown to induce activation of antigen-presenting cells and to participate in the modulation of host immunity by interacting with C-type lectins [49][50][51] . Of interest, glycosylation has been shown to be a common modification of T. gondii proteins 52 . In tachyzoites, enzymes involved in O-glycosylation are constitutively expressed 53 . The presence of an N-glycosylation pathway in T. gondii tachyzoites has also been demonstrated, and several key proteins involved in invasion and motility are N-linked glycoproteins 30,54 . Our data indicate that glycans of T. gondii tachyzoites play a role in immunomodulation, and studies on the detailed characterization of the glycosylation pattern of TLA are in progress.
Taken together, we could show that TLA exhibits strong immunostimulatory properties in vitro and in vivo, inducing mixed Th1/Th2 profile in BALB/c mice. Furthermore, co-application of TLA with the allergen reduced allergic sensitization and airway inflammation in a mouse model of OVA allergy. The fact that the immunostimulatory potential of TLA is lost upon deglycosylation indicates that its immunomodulatory effects are glycan-mediated. Our findings contribute to the understanding of host-parasite interactions and may pave the way for the design of novel well-tolerated immunotherapies for human immune-mediated disorders.
Materials and Methods
Animals. Female

Heat-inactivation, proteinase K and periodate treatment. Heat-inactivation of TLA was achieved by incubation at 96 °C for 15 min (TLA H). Furthermore, TLA was enzymatically treated with 1 mg/ml proteinase K (QIAGEN GmbH, DE) at 37 °C overnight, followed by a 20 min incubation at 96 °C to inactivate the enzyme (TLA-ProtK). Sodium metaperiodate-mediated modification of glycan moieties in TLA (TLA D) was performed as follows: 450 µg TLA was treated with 50 µl of 100 mM sodium metaperiodate (Sigma-Aldrich), to yield a final concentration of 10 mM, at pH 4.5 for 45 min at room temperature (RT) in the dark. The oxidation reaction was stopped with 100 µl of sodium borohydride (Merck) at a final concentration of 50 mM, at pH 4.5 for 30 min at RT in the dark. Excess salt was removed by exchanging the reaction buffer with PBS using desalting columns (Zeba Spin Desalting Columns, 7 K MWCO, Pierce Biotechnology, Thermo Scientific, USA), and the protein concentration was assessed as above.
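The metaperiodate step embeds a standard dilution calculation (conservation of moles, C1·V1 = C2·V2). As a sanity check of the stated volumes and concentrations, a short sketch; the function names are mine, and the borohydride stock concentration is inferred from the stated final concentration, not given in the text:

```python
def total_volume_for_dilution(v_stock_ul, c_stock_mM, c_final_mM):
    """Total reaction volume implied by diluting a stock to a final
    concentration (conservation of moles: C_stock * V_stock = C_final * V_total)."""
    return v_stock_ul * c_stock_mM / c_final_mM

# 50 ul of 100 mM sodium metaperiodate diluted to a 10 mM final concentration:
v_total = total_volume_for_dilution(50, 100, 10)  # 500 ul reaction volume
v_sample = v_total - 50                           # so the TLA sample occupied 450 ul

# Stopping step: adding 100 ul of borohydride gives 600 ul total; a 50 mM
# final concentration then implies a stock of (inferred, not stated):
c_borohydride_stock = 50 * (v_total + 100) / 100  # 300 mM
```

The same one-line relation checks any of the dilutions in the protocol.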
Bone marrow precursor cells were isolated from the femurs and tibias of naïve female BALB/c mice and cultured in complete media supplemented with GM-CSF (20 ng/ml; Peprotech, USA) as described previously 16 .

Immunization with TLA and heat-inactivated TLA. Mice were immunized intraperitoneally with 10 µg TLA in 140 µl alum (Alu-Gel-S Suspension, Serva Electrophoresis, DE) (TLA group), 10 µg heat-inactivated TLA in alum (TLA H group), or sham-treated with PBS in alum (Sham group) on days 0 and 10. Blood samples were collected one day before the first immunization (day −1) and at the end of the experiment (day 21). Serum obtained after blood coagulation and centrifugation for 10 min at 1,500 × g was stored at −20 °C for further analysis. Levels of TLA-specific IgG1 and IgG2a antibodies were measured by ELISA. Briefly, microtiter plates (Nunc, DK) were coated with 4 µg/ml TLA in coating buffer (0.1 M carbonate-bicarbonate buffer, pH 8.4) overnight at 4 °C. Coated plates were incubated with blocking buffer (PBS/0.05% Tween/1% BSA) for 6 h at RT. Plates were washed and incubated with diluted serum samples overnight at 4 °C. Sera were diluted 1:1000 for IgG1, 1:500 for IgG2a and 1:10 for IgE. On the following day, plates were washed and incubated with rat-anti-mouse IgG1, IgG2a or IgE (1:500; Pharmingen, USA) for 6 h at RT. Plates were washed and incubated with horseradish peroxidase-conjugated mouse-anti-rat IgG (1:2000; Jackson ImmunoResearch Laboratories Inc., USA) for 1 h at 37 °C and then for 1 h at 4 °C. Plates were washed and incubated with chromogenic substrate (1 mM ABTS in 70 mM citric-phosphate buffer, pH 4.2; Sigma-Aldrich). Absorbance was measured at 405 nm on a SparkControl Magellan plate reader (Tecan GmbH, AT). At sacrifice (day 21), spleens were collected, and single-cell suspensions were prepared and cultured (5 × 10^6/ml) with 5 µg/ml TLA or media alone at 37 °C for 72 h. Supernatants were collected and cytokine levels measured by ELISA as indicated above.
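Serum dilution factors such as 1:1000 are typically reached in stages, since pipetting sub-microlitre serum volumes directly is impractical. A small illustrative helper, not part of the protocol, that breaks a large dilution factor into feasible stages of at most 1:10:

```python
def serial_dilution_plan(target_factor, step=10):
    """Break a large dilution factor into successive stages of at most
    1:step, e.g. 1:1000 -> three successive 1:10 dilutions."""
    stages = []
    remaining = target_factor
    while remaining > step:
        stages.append(step)
        remaining //= step
    stages.append(remaining)
    return stages

plan_igg1 = serial_dilution_plan(1000)  # -> [10, 10, 10]
plan_igg2a = serial_dilution_plan(500)  # -> [10, 10, 5]
```

The product of the returned stages equals the target factor, so each stage stays within comfortable pipetting volumes.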
Mouse model of OVA-induced airway inflammation.
Mice were immunized intraperitoneally with a mixture of 50 µg TLA and 10 µg OVA (grade V; Sigma-Aldrich) in 90 µl alum on days 0 and 14, and then challenged three times intranasally with 100 µg OVA in a final volume of 30 µl on days 21-23 (TLA + OVA/OVA group). Before each challenge, mice were anesthetized with 5% isoflurane (Isocare; 100% w/v; inhalation vapour, Animalcare Ltd, UK) in a UniVet Porta anaesthesia machine (Groppler Medizintechnik, DE) with the airflow set at 3 L/min. Control groups were sensitized with 10 µg OVA and challenged with OVA (OVA/OVA group); treated with 50 µg TLA and challenged with 30 µl PBS (TLA/PBS group); or sham-treated with PBS and challenged with PBS (PBS/PBS group). Mice were terminally anesthetized and organs were excised on day 25. To prepare single-cell suspensions, spleens were forced through a cell strainer and hemolyzed for 1 min in 3 ml hemolysis buffer (150 mM ammonium chloride, 10 mM potassium bicarbonate and 0.1 mM EDTA). Lung cell isolation was performed as previously described 55 . Briefly, lungs were excised and minced in 6 ml serum-free RPMI-1640 media containing Liberase TL (0.5 mg/ml; Roche, DE) and DNase I (0.5 mg/ml; Sigma-Aldrich), incubated for 45 min at 37 °C, and finally the remaining tissue was forced through a 70 µm cell strainer (Falcon, Corning Inc., USA). Spleen and lung single-cell cultures (5 × 10^6/ml) were restimulated with media or 50 µg/ml endotoxin-free OVA (EndoGrade Ovalbumin, Hyglos GmbH, DE) for 72 h. Supernatants were collected and cytokines were measured by ELISA as described above. Levels of OVA-specific IgG1 and IgG2a were measured by ELISA. Microtiter plates were coated with 5 µg/ml OVA in coating buffer and the ELISA was carried out as above.
Lung histology. Excised lung tissue was fixed with 7.5% formaldehyde-PBS and paraffin-embedded. 5 µm thick sections were stained with H&E and periodic acid-Schiff (PAS) stain. The histological pathology score was evaluated using light microscopy according to the method adapted from Zaiss et al. 58

Statistical analysis. Data were statistically analyzed with GraphPad Prism software (GraphPad Software, USA) using the unpaired Student's t-test and two-way ANOVA. All data are shown as mean ± standard error of the mean (SEM), and differences were considered significant at P < 0.05.
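The comparisons reported as mean ± SEM with an unpaired Student's t-test can be sketched directly from their textbook definitions (here in the standard library rather than GraphPad); the cytokine values below are illustrative, not the study's data:

```python
import statistics as st
from math import sqrt

def mean_sem(x):
    """Mean +/- standard error of the mean, the summary shown in the figures."""
    return st.mean(x), st.stdev(x) / sqrt(len(x))

def students_t(a, b):
    """Unpaired two-sample Student's t statistic (pooled equal variances)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / sqrt(pooled * (1 / na + 1 / nb))

# Illustrative cytokine readings (pg/ml) for two groups of five mice.
treated = [820.0, 910.0, 760.0, 880.0, 840.0]
control = [310.0, 280.0, 350.0, 300.0, 330.0]

t = students_t(treated, control)
# For df = 8, the two-tailed 5% critical value of Student's t is 2.306,
# so |t| > 2.306 corresponds to P < 0.05.
significant = abs(t) > 2.306
```

In practice a statistics package also returns the exact P value; the critical-value comparison above is the equivalent decision at the 0.05 threshold.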
|
v3-fos-license
|
2018-04-03T06:00:32.701Z
|
2015-07-24T00:00:00.000
|
16164907
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.497",
"pdf_hash": "c40613d8c484d287f589cae24d256f0bfd05098f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2236",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "c40613d8c484d287f589cae24d256f0bfd05098f",
"year": 2015
}
|
pes2o/s2orc
|
Phase I/II trial of capecitabine, oxaliplatin, and irinotecan in combination with bevacizumab in first line treatment of metastatic colorectal cancer
Phase III studies have demonstrated the efficacy of FOLFOXIRI regimens (5-fluorouracil/leucovorin, oxaliplatin, irinotecan) with/without bevacizumab in metastatic colorectal cancer (mCRC). Capecitabine is an orally administered fluoropyrimidine that may be used instead of 5-fluorouracil/leucovorin. We evaluated a triple-chemotherapy regimen of capecitabine, oxaliplatin, and irinotecan, plus bevacizumab, in 53 patients with mCRC. A Phase I study identified the maximum tolerated dose of irinotecan as 150 mg/m2. Median follow-up in a subsequent Phase II study using this dose was 28 months (74% progressed). For all patients, a complete response was achieved in 4% and a partial response in 60%; median progression-free survival (PFS) was 16 months and median overall survival (OS) was 28 months. Median PFS was longer for patients with an early treatment response (28 vs. 9 months for others; P = 0.024), with early tumor shrinkage (25 vs. 9 months for others; P = 0.006), or for patients suitable for surgical removal of metastases with curative intent (median not reached vs. 9 months for others; P = 0.001). Median OS was longer for patients with early tumor shrinkage (median not reached vs. 22 months for others; P = 0.006) or surgery (median not reached vs. 22 months for others; P = 0.002). K-ras mutation status did not influence PFS (P = 0.88) or OS (P = 0.82). Considerable Grade 3/4 toxicity was encountered (36% diarrhea, 21% vomiting and 17% fatigue). In conclusion, the 3-weekly triple-chemotherapy regimen of capecitabine, oxaliplatin, and irinotecan, plus bevacizumab, was active in the first-line treatment of mCRC, although at the expense of a high level of toxicity.
Introduction
Colorectal cancer (CRC) is the third most commonly diagnosed cancer in males and the second most common in females; more than 1.2 million new cases and 608,700 deaths occurred worldwide in 2008 [1]. Chemotherapy remains the primary therapeutic option for patients with metastatic CRC (mCRC). Combinations of fluoropyrimidines, irinotecan, and oxaliplatin have been shown to be effective in this setting, along with the more recently introduced targeted therapies using monoclonal antibodies against vascular endothelial growth factor (bevacizumab) or the epidermal growth factor receptor (cetuximab and panitumumab). The addition of a targeted agent to first-line chemotherapy has improved progression-free survival (PFS) and overall survival (OS) in randomized trials [2][3][4][5][6] and is now considered the standard of care for mCRC.
It is therefore of interest to study the therapeutic profile of capecitabine within a triple-therapy regimen, with the addition of targeted therapy, in mCRC. Accordingly, we evaluated the combination of capecitabine, oxaliplatin, and irinotecan with bevacizumab in the first-line management of mCRC in a Phase I/II trial.
Patients
The study was conducted at a single institution. Patients eligible for inclusion in the study were men or women aged ≥18 years with histologically confirmed colorectal adenocarcinoma presenting as unresectable metastatic or locally advanced disease; Eastern Cooperative Oncology Group performance status (ECOG PS) of 0-2; measurable disease as defined by Response Evaluation Criteria in Solid Tumors (RECIST); no previous chemotherapy or bevacizumab for metastatic disease; adequate hematological, renal, and hepatic function (absolute neutrophil count ≥1.5 × 10^9/L, platelet count ≥100 × 10^9/L, normal serum creatinine, normal serum bilirubin, serum transaminases ≤2.5 times the upper limit of normal [ULN; ≤5.0 times ULN if elevated secondary to liver metastases]); and urine dipstick for proteinuria <2+. Patients who had received prior adjuvant 5-FU or oxaliplatin chemotherapy were eligible if they had remained free of disease for at least 12 months after the completion of adjuvant therapy.
Exclusion criteria included known or suspected dihydropyrimidine dehydrogenase deficiency; the presence of central nervous system metastasis; previous malignancy within the last 5 years (except adequately treated nonmelanomatous skin cancer or in situ cervical cancer); severe cardiovascular disease; major bleeding disorder; significant traumatic injury or major surgery within 28 days of starting therapy; minor surgery within 7 days of starting therapy; recent significant hemoptysis; active uncontrolled infection; uncontrolled hypertension; pregnancy or breastfeeding; any other serious medical condition (in the judgment of the investigator); treatment with other experimental drugs within 30 days of entry into the trial; treatment with other anticancer therapy; known hypersensitivity to any of the study drugs; and any psychological, familial, geographic, or social circumstances which could impair the patient's ability to participate in the trial and comply with follow-up, including legal incapacity.
Treatment
Pretreatment baseline evaluation included a complete medical history and physical examination, full blood count and chemistry profile including carcinoembryonic antigen, and a CT scan of the chest, abdomen, and pelvis.
The Phase I trial was designed to find the maximum tolerated dose of irinotecan, with a design based on the standard 3-week XELOX/CAPOX regimen, in order to minimize the requirements for hospital attendance while maintaining efficacy. All patients received oral capecitabine 1000 mg/m² twice daily on days 1-14, with intravenous oxaliplatin 130 mg/m² and bevacizumab 7.5 mg/kg body weight on day 1.
The prespecified dose levels for irinotecan were 150, 200, and 250 mg/m², given intravenously on day 1 of each cycle. The starting dose was based on previous clinical experience with the use of irinotecan within three-drug combinations, which involved administration of this agent at starting doses of 150-180 mg/m² given every 2 weeks [20,21]. Accordingly, we adopted 150 mg/m² as the starting dose to explore the maximum tolerated dose within the more standard three-weekly administration regimen employed here.
At least three patients were included sequentially at each dose level, and no intrapatient dose escalation was allowed. Dose escalation was permitted if no dose-limiting toxicity (any grade 4 hematological toxicity and/or grade 3 or 4 nonhematological toxicity) was encountered by the end of the first cycle. If one of three patients experienced dose-limiting toxicity, three additional patients were enrolled at the same dose level. The maximum tolerated dose was defined according to the occurrence of dose-limiting toxicity in at least 2/3 or at least 4/6 patients.
The recommended dose for the Phase II study was the dose level immediately below the maximum tolerated dose. Additional patients were then enrolled to confirm the safety profile of the combination [22]. Treatment was administered every 21 days. A total of 5-8 cycles of the four-drug combination was planned, followed by maintenance capecitabine and bevacizumab at the same dose level until disease progression.
During either phase, the feasibility of surgical resection of metastatic sites was assessed every 2 months and strongly recommended when feasible. Treatment was withdrawn in the event of disease progression, unacceptable toxicity, or withdrawal of patient consent.
Dose modification
Dose modifications were made according to the most serious toxicity observed during the previous cycle, graded according to the National Cancer Institute Common Terminology Criteria (version 3) [23]. Only the capecitabine dose was modified for hand-foot syndrome and mucositis; the capecitabine and irinotecan doses could be modified for diarrhea; and the oxaliplatin dose was modified for neuropathy. Bevacizumab doses were not modified. Chemotherapy treatment was delayed until the neutrophil count was ≥1.0 × 10⁹/L and the platelet count was ≥100 × 10⁹/L prior to the start of the next cycle. Patients were withdrawn from the trial if toxicity required treatment to be delayed by more than 2 weeks.
Study endpoints
The primary endpoint for the Phase I study was to identify the maximum tolerated dose of irinotecan. Primary outcomes for the Phase II study were the response rate and toxicity profile in patients with mCRC. Secondary endpoints were PFS and OS. PFS was calculated from the day of treatment start to the first observation of disease progression or death from any cause. OS was calculated from the day of treatment start until death from any cause, censoring patients where necessary at the last date known to be alive.
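The PFS and OS definitions above amount to a simple right-censored time-to-event calculation. The sketch below is a minimal illustration; the function names and example dates are ours, not part of the study protocol. Each patient maps to a (time-in-days, event-observed) pair:

```python
from datetime import date

def pfs(start, progression=None, death=None, last_alive=None):
    """PFS: days from treatment start to first progression or death
    from any cause; censored at the last date known progression-free."""
    events = [d for d in (progression, death) if d is not None]
    if events:
        return (min(events) - start).days, True    # event observed
    return (last_alive - start).days, False        # censored

def overall_survival(start, death=None, last_alive=None):
    """OS: days from treatment start to death from any cause;
    censored at the last date known to be alive."""
    if death is not None:
        return (death - start).days, True
    return (last_alive - start).days, False
```

A patient who is alive and progression-free at last contact contributes a censored observation to both curves.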
Evaluation of response
Assessment of response was done according to RECIST criteria, version 1.1 [24]. A CT or MRI scan of the chest, abdomen, and pelvis was done after the second, fifth, and eighth cycles of chemotherapy and then every 2 months. This schedule facilitated detection of early tumor shrinkage after the first 6 weeks, followed by subsequent regular, two-monthly evaluation. Deepness of response (DpR) was defined as the percentage of tumor shrinkage observed (if shrinkage occurred) at the nadir (best response) using the longest diameter based on RECIST criteria [25]. Early tumor shrinkage was defined as a ≥20% decrease in the maximum tumor dimension by RECIST criteria at the time of the first evaluation of response [26].
Follow-up and end of study visits
All patients underwent measurement of complete blood count with differential, renal and hepatic profile, carcinoembryonic antigen (CEA), and urine proteinuria on day 1 of each cycle. A blood count was also done on days 10-14 for the first two cycles. Toxicity was recorded prior to starting treatment and on day 1 of each cycle of chemotherapy. Patients were followed up until the study was closed upon reaching the planned number of events.
Statistics
The number of patients to be recruited in the Phase I trial depended on the maximum tolerated dose of irinotecan. The number of patients for the Phase II trial followed a two-stage Simon optimal design, including the patients recruited in Phase I. For a lower activity level of 40% (P0 = 0.40, percentage of patients free of progression at 10 months under the null hypothesis) and a higher activity level of 60% (P1 = 0.60, percentage of patients free of progression at 10 months under the alternative hypothesis), with α and β errors of 0.05 and 0.20, the first stage was planned to recruit 16 patients. If fewer than seven of these patients achieved an objective response, the trial would close, as the study treatment would be no more effective than standard chemotherapy. If seven or more patients in the first stage achieved an objective response, a total of 46 patients would be recruited.
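The two-stage stopping rule can be checked numerically with exact binomial tail probabilities. The sketch below assumes a continuation threshold of seven or more responders among the first 16 patients, per the description above; it illustrates the decision rule only and is not a re-derivation of the Simon design:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

N1, R1, N_TOTAL = 16, 7, 46   # stage-1 size, assumed threshold, total size

def continue_to_stage_two(responders):
    """Trial proceeds to full accrual only with >= R1 stage-1 responders."""
    return responders >= R1

# Chance of stopping after stage 1 under each hypothesis
early_stop_h0 = binom_cdf(R1 - 1, N1, 0.40)  # true rate 40% (null)
early_stop_h1 = binom_cdf(R1 - 1, N1, 0.60)  # true rate 60% (alternative)
```

Under the null the design stops early roughly half the time, while under the alternative the chance of wrongly stopping is small (well under 10%).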
Kaplan-Meier survival curves were compared using log-rank tests. Statistical analyses were performed using SPSS version 17.0 (IBM Corporation, Armonk, NY, USA). The efficacy analysis was performed on the intention-to-treat population, which comprised all patients who received at least two cycles of study treatment.
Ethics
The study was carried out fully in accordance with the requirements of Good Clinical Practice and the Declaration of Helsinki. The protocol was approved by the ethics committees of our institution. Patients were informed of the investigational nature of the study and provided written informed consent before registration. The trial was registered at clinicaltrials.gov (NCT01311050).
Patients
Fifty-four patients were entered into the Phase I and Phase II studies between 24 January 2009 and 14 December 2011. One patient was found at a later stage to have a concurrent cholangiocarcinoma, rather than metastatic colon cancer, and was excluded from all analyses other than the Phase I toxicity evaluation. All other analyses included the remaining 53 patients.
Capecitabine-Platin-Irinotecan-Bevacizumab in mCRC
The study population was roughly equally divided between men and women (Table 1). The majority of patients had ECOG PS 1-2, with tumors in the colon or rectosigmoid. About half had undergone surgery, a minority had previously received adjuvant chemotherapy, but none had received radiotherapy. Similar numbers of patients had single or multiple metastases, most commonly in the liver. Wild-type K-ras and K-ras mutations were also found in similar numbers of patients.
Maximum tolerated dose of irinotecan in the Phase I study
Three patients received irinotecan at a dose of 150 mg/m², of whom one developed Grade 4 diarrhea and fatigue in cycle 2. Three further patients received irinotecan 200 mg/m², of whom one developed Grade 3 diarrhea and neutropenia and one developed Grade 3 vomiting. The maximum tolerated dose of irinotecan was therefore 150 mg/m². Recruitment commenced for the Phase II trial using this dose level, and an additional 47 patients were enrolled. However, a high incidence of Grade 3 and 4 toxicity (mainly diarrhea) led to a reduction in the capecitabine dose to 800 mg/m² twice daily after 30 patients had been enrolled.
Treatments
A total of 230 cycles of treatment were administered, with a median of five cycles per patient (range 1-8). Six patients received only one cycle of chemotherapy, either due to withdrawal of consent or Grade 4 toxicity. Thirty-four patients received the planned 5-8 cycles of induction triplet chemotherapy. Reasons for receipt of fewer than five cycles included toxicity in 11 patients, progression in four patients, withdrawal of consent in three patients, and temporary loss to follow-up in a further patient. The relative dose intensity was 92% of that planned for irinotecan and oxaliplatin, and 79% of that planned for capecitabine. Maintenance treatment with capecitabine and bevacizumab was administered to 32 patients (60%).
Surgical resection of metastatic disease (Table 2) was attempted with curative intent in 13 patients (24.5%): four (7.5%) had surgical resection of the primary tumor, three (5.7%) had liver resection only, and six (11.3%) underwent cytoreductive surgery with hyperthermic intraperitoneal chemotherapy. Radical (R0) resection was achieved in 10 patients (18.9%), with a pathological complete response (pCR) in two of these patients (each had pCR of the primary tumor and of the metastasis in the liver or peritoneum).
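As a quick arithmetic check, the percentages in this paragraph are computed relative to the 53 analyzed patients (counts as reported; note that 3/53 rounds to 5.7%):

```python
N = 53  # patients included in the analyses
counts = {
    "curative-intent surgery": 13,
    "primary tumor resection": 4,
    "liver resection only": 3,
    "cytoreductive surgery + HIPEC": 6,
    "R0 resection": 10,
}
# Percentage of the analyzed population, rounded to one decimal place
pct = {label: round(100 * n / N, 1) for label, n in counts.items()}
```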
Efficacy
Of the 53 patients included in efficacy analyses, 45 were evaluable for response (four patients withdrew consent, two were discontinued for Grade 4 toxicity, and two died). Two patients (4.4%) had a complete response and 27 patients (60%) had a partial response, for an overall response rate of 64.4%. Stable disease was observed in 31.1% and progressive disease in 4.4%. Fifteen patients (33.3%) achieved a response at the first evaluation (early treatment response) and 27 (60%) achieved early tumor shrinkage. The median time to best response was 48 days (range 18-1041) and the median DpR was 33% (range −12 to 100).
The median follow-up duration was 28 months (range 1-50; 95% CI 23-33), at which time 39 patients (74%) had progressed. Median PFS was 16 months and median OS was 28 months in the overall population (Table 3, Fig. 1). Median PFS was significantly longer in patients with early treatment response or early tumor shrinkage, or in subjects who underwent surgery with curative intent (Table 3, Fig. 2). OS was significantly prolonged in subjects with early tumor shrinkage or surgical resection. K-ras mutations did not influence PFS or OS (Table 3, Fig. 2).
Toxic death occurred in three patients, two after the first cycle and one after the fifth cycle of chemotherapy. All deaths occurred outside our institution: one patient developed fever at home and refused to go to a hospital, one death was secondary to a cerebrovascular accident, and one patient died from non-neutropenic septic shock. There was no significant difference in toxicity between patients who received capecitabine at a dose of 1000 or 800 mg/m².
Discussion
The results of this trial are consistent with the results of previous randomized trials that demonstrated enhanced efficacy of triple-therapy regimens in mCRC [7,8]. The response rate in our trial of 64.4% is similar to the roughly 60-80% response rates observed with other triple-therapy regimens [7,8,20,21,27,28]. The PFS obtained with our regimen (16 months) represents one of the best results reported so far in mCRC, although this did not translate into a similarly high OS (28 months). Receipt of no more than five cycles by half of the patients and the high toxicity of the regimen may account for this finding.
We observed marked toxicity, with Grade 3 and 4 diarrhea (35%) requiring frequent dose reduction, especially of capecitabine, resulting in a low relative dose intensity for capecitabine of 79%, although the dose reduction did not translate into a reduced frequency of Grade 3/4 diarrhea. Febrile neutropenia was also common (17%) compared with other evaluations of triple regimens in mCRC [7,8,20,21,27,28]. It is apparent that the planned dose intensity of capecitabine in both reported bi-weekly triplet regimens (5000 mg/m² per week [20] and 7000 mg/m² per week [21]) was lower than in our trial, even after the capecitabine dose was reduced to 800 mg/m² twice daily (7466 mg/m² per week). This probably explains the difference in toxicity between our study and these earlier trials. In addition, clinical experience in the Middle East suggests that the tolerance of Saudi patients to standard doses of capecitabine may be lower than that reported for Western populations. The toxicity encountered with our regimen might also be explained in part by 87% of our patients having PS 1-2. Nevertheless, other regimens combining capecitabine and irinotecan have yielded similarly high rates of toxicity, with incidence rates for diarrhea approaching 40% [18,19]. These regimens maintained activity with decreased toxicity by lowering the doses of both irinotecan and capecitabine [29,30].
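The weekly dose-intensity figures quoted above follow from averaging the total per-cycle dose over the cycle length; a minimal sketch (the helper function is ours, not from the protocol):

```python
def weekly_dose_intensity(dose_mg_m2, doses_per_day, days_on, cycle_days):
    """Planned dose intensity in mg/m2 per week, averaged over one cycle."""
    total_per_cycle = dose_mg_m2 * doses_per_day * days_on
    return total_per_cycle * 7.0 / cycle_days

# Capecitabine on days 1-14 of a 21-day cycle, twice daily
reduced  = weekly_dose_intensity(800, 2, 14, 21)   # after dose reduction
original = weekly_dose_intensity(1000, 2, 14, 21)  # protocol starting dose
```

Even the reduced schedule (≈7466.7 mg/m² per week) exceeds the 5000-7000 mg/m² per week planned for capecitabine in the bi-weekly triplet comparators.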
We also demonstrated that the concept of early tumor shrinkage, reported previously, is not limited to antiepidermal growth factor receptors (EGFR) regimens [26,31,32]. Patients who achieved ≥20% tumor shrinkage at the first evaluation (after two cycles of induction therapy) had improved PFS and OS compared with patients who did not. Our study also confirmed the lack of influence of K-ras mutation status on prognosis in patients treated with a bevacizumab-containing triple-chemotherapy regimen, as reported elsewhere [27,33,34].
It has been reported that surgery for resectable metastasis improves survival in patients with mCRC [35-37], and our data provide further confirmation of this benefit. In addition, our study confirms the high rate of R0 resection (approaching 19%) in patients treated with triple-chemotherapy regimens. The high percentage of surgical resection confirms the feasibility of such procedures in patients receiving bevacizumab containing regimens.
The main limitations of our study were that it was a Phase I/II noncomparative trial, conducted at a single institution, and in a relatively small patient population. On the other hand, our trial enrolled an unselected patient population who were similar to the patients we see in daily practice, as shown by the relatively high percentage of patients with PS 2 (20%) and multiple organ involvement (58%).
In conclusion, the three-weekly triple-chemotherapy regimen of capecitabine, oxaliplatin, and irinotecan, combined with bevacizumab, was active in the first-line treatment of mCRC, although at the expense of a high level of toxicity. We do not recommend further application of this regimen at the doses described above. Further evaluation of this regimen in a more selected group of patients with mCRC with better PS, and with adjusted doses of capecitabine and irinotecan, may yield lower toxicity while maintaining therapeutic activity.
Limb Salvage in Chondrosarcoma of the Proximal Humerus: A Case Report
Chondrosarcoma is the third most common primary malignant bone tumor. The proximal humerus is the most common site. Since it is resistant to chemotherapy and radiotherapy, the mainstay of treatment is surgery. Due to the extensive involvement of long bones, it requires reconstruction with either a prosthetic implant or a bone graft. We present the case of a 43-year-old female who presented with chondrosarcoma involving 15 cm of the humerus. The patient was managed with resection of 15 cm of the humerus and reconstruction with the same resected bone after autoclaving. It was secured with long fixation, resulting in arthrodesis of the glenohumeral joint. The patient was followed for one year, and there was evidence of callus formation on ultrasound and computed tomography (CT) scan.
Introduction
Chondrosarcoma is the third most common primary malignant tumor of the bone after myeloma and osteosarcoma. It accounts for 20% of all sarcomas [1]. The humerus is the most common site for chondrosarcoma [2]. However, this tumor is known for its resistance to radiotherapy and chemotherapy [3]. Therefore, surgery is the best and only option available for its management.
Chondrosarcoma is a malignant tumor and, therefore, can progress rapidly if not treated in the early stages. Surgical management requires removal with care that the resected part of the bone has tumor-free margins [4]. However, the location and size of the tumor are the most important factors for the outcome. Most chondrosarcomas involve the proximal part of the humerus. Tumor resection may require the removal of soft tissue structures such as the rotator cuff, deltoid, and other ligaments [5]. After that, reconstruction is needed to improve the quality of life of the patient. There are various methods of reconstruction, especially if the tumor size is large: allograft [5], irradiated autograft [6], and autoclaved autograft [7]. Apart from these, prostheses for reconstruction are available too.
In developing countries, many patients cannot afford tumor prostheses, and allografts are not easily available. We present a case of chondrosarcoma involving a large part of the humerus, treated with resection of the tumor and reconstruction with an autoclaved autograft.
Case Presentation
Our case is a 43-year-old female who presented with biopsy-proven chondrosarcoma involving 15 cm of the humerus, including the proximal part. Her main complaint was pain in her arm and a restricted shoulder range of motion due to pain. There was no distal neurovascular deficit. Radiographs were performed to assess the extent of the tumor and the involvement of neurovascular bundles. The radiographs showed 15 cm of involvement of the proximal humerus by tumor (Figure 1).
FIGURE 1: Pre-operative radiographs showing chondrosarcoma involving part of the humerus
The surgery was planned, and the procedure and arthrodesis were explained to the patient. En bloc resection was performed and the deltoid muscle was sacrificed, as it was found to be involved in the tumor tissue (Figure 2).
FIGURE 2: Intra-operative picture after resection of the bone
The resected bone with tumor was cleaned of soft tissue and autoclaved at 120°C for 10 minutes (Figure 3).
FIGURE 3: Resected part of the humerus
Then it was cleaned with copious saline and the tumor was curetted (Figure 4).
FIGURE 4: Resected part of the humerus after autoclaving
The gap was filled with polymethylmethacrylate. The bone was reimplanted and shoulder arthrodesis was done using an 18-hole 4.5 mm recon plate from the spine of the scapula to the distal humerus. The junctional area was surrounded with Prolene mesh filled with cortico-cancellous graft. The remnant of the rotator cuff muscles was sutured to the Prolene mesh using Ethibond. Post-operatively, a shoulder spica was applied and a window was cut for cleaning and dressing of the wound. Suture removal was done after two weeks and the shoulder spica was continued. The patient had been followed up for one year at the time this manuscript was written. Evaluation was done with serial radiographs (Figure 5).
FIGURE 5: Radiographs at a one-year follow-up
In between, computed tomography (CT) scans and ultrasound were done to look for callus formation at the junctional area and the humero-glenoid area. We found a callus at both junctions after approximately three months. The shoulder spica was removed after six months.
Discussion
Surgical management is the mainstay of treatment for chondrosarcoma. Since it is a highly malignant tumor, removal of the complete tumor with disease-free margins is mandatory for successful treatment. Failure to do so may lead to recurrence of the tumor. The most common site is the proximal humerus. The literature supports resection of the tumor along with reconstruction, which can restore functional ability to the patient. Surgical reconstruction aims to restore the structural integrity and functional capacity of the humerus. The various options for reconstruction are endoprosthetic reconstruction, allograft, autograft, and composite reconstruction. Endoprosthetic reconstruction has become the reconstruction of choice for most bone tumors [8]. Prostheses provide immediate structural support and allow immediate mobilization. Also, the patient can achieve near-normal functional capacity [9]. Allograft is another option that also provides good structural compatibility and biological integration over time [10]. However, the availability of allograft is a challenge. Not all centers are equipped with facilities to store allografts. Gomez et al. published a case report of the successful treatment of chondrosarcoma of the humerus using a hemicortical allograft. They report an excellent follow-up of 56 months with good functional outcomes [5]. However, they also mention the problems associated with allograft. The biggest limitation is non-union, which has been a common cause of hardware failure and revision surgery [11]. Even if union occurs, the sections of allograft weaken over time and there is a high chance of fracture [12].
Another option for reconstruction is using the same resected bone obtained from the patient. This has been explored by many surgeons and can be done successfully. However, the tumor tissue needs to be eliminated from the autograft to prevent recurrence. The options are irradiation or autoclaving of the autograft. Chen et al. [6] published a series of malignant bone tumors where resection and reconstruction using autograft were done. Of the 14 cases in their series, 3 had chondrosarcoma. They suggested that extracorporeal irradiation of bone at 300 Gy is useful for halting the growth of malignant cells. They observed no local recurrence after implantation of irradiated bone. Although they did not encounter any infection in their series, they mentioned that infection is the biggest risk in this irradiation method. They also mentioned that it took them 50 minutes to transport and irradiate the bone, after which it was available to implant. However, not all centers have a facility for irradiation with this efficiency in terms of time.
Autoclaving is a good alternative for preventing the growth of tumor cells and preventing recurrence. The advantages of using autograft are biological compatibility, preservation of the original bone architecture, and cost-effectiveness [13]. Smith et al. [7] presented a case series of eight patients with chondrosarcoma managed with reimplantation of autograft after autoclaving. One of the patients in their series has a follow-up of 24 years. This patient had chondrosarcoma of the femur and was in great functional condition at 24 years of follow-up. The autoclaving protocol used by those authors was different from ours: they used a temperature of 135°C for 12-15 minutes, whereas we autoclaved at 120°C for 10 minutes. We have followed our patient for one year, which is less than the follow-up of Smith et al.; however, we could not find anything in the literature on the end effects of this difference in autoclaving. We used this temperature and time because we assumed that using a higher temperature for a longer duration would make the bone completely non-viable.
One case in the Smith et al. [7] series underwent a total hip replacement. It was observed that the entire autoclaved bone was viable except for a few necrotic spicules. This was also confirmed by histopathological examination. Of the eight cases in their series, five involved the proximal humerus. The same procedure was done in all of them, and on long-term follow-up they had acceptable functional mobility at the glenohumeral joint. Due to extensive involvement by tumor tissue in our patient, we found the glenohumeral joint to be non-salvageable. Therefore, we decided to extend our fixation and fuse the joint. Another reason for fusion of the glenohumeral joint was to gain a mechanical advantage. Since the resected bone was around 15 cm and autoclaving reduced its mechanical strength, long spanning by the implant would give better mechanical strength to the whole construct. As a result, we could not comment on the functional outcome of the patient. However, we confirmed the presence of callus formation using ultrasound and CT scan at three months. This was similar to the outcome in the case series of Smith et al. [7], where callus was visible in all cases at 2-3 months.
The main challenges with autoclaved bone are that it loses its mechanical properties, which increases the chance of fracture, and that it loses its biological properties, as autoclaving also destroys living cells, which can increase the chances of non-union and the risk of infection. Adjuvant therapies such as bone grafting and the use of bone morphogenetic protein (BMP) can reduce the risk of non-union and can improve mechanical strength [14,15]. However, more long-term studies are required to assess the success of this procedure and to suggest modifications to make it better.
Conclusions
Reconstruction with autoclaved bone is a viable option for limb salvage in patients with chondrosarcoma of the humerus. While it offers the advantages of biological compatibility and cost-effectiveness, it also presents challenges that need to be carefully managed. Advances in surgical techniques and post-operative care continue to improve the outcomes of this reconstruction method, making it a valuable tool in the armamentarium of orthopedic oncologists.
Design Methodology and Experimental Study of a Lower Extremity Soft Exosuit
Flexibility and light weight have become development trends in the field of exoskeleton research. With high movement flexibility, low moving inertia and excellent wearable comfort, this type of system is gradually becoming a leading candidate for applications such as military defense, rehabilitation training and industrial production. In this paper, aiming at assisting the walking of human lower limbs, a soft exosuit is investigated and developed based on considerations of fabric structure, sensing system, cable-driven module, and control strategy. Evaluation experiments are also conducted to verify its effectiveness. A fabric optimization of the flexible suit is performed to realize a tight bond between human and machine. Through the configuration of sensor nodes, a motion intention perception system is constructed for the lower limb exosuit. A flexible actuation unit with a Bowden cable is designed to improve the efficiency of force transmission. In addition, a position control strategy based on division of the gait phase is applied to achieve active assistance during plantar flexion of the ankle joint. Finally, to verify the assistive effectiveness of the proposed lower extremity exosuit, experiments including a physiological metabolic test and a muscle activation test are conducted. The experimental results show that the exosuit proposed in this paper can effectively reduce the metabolic consumption and muscle output of the human body. The design and methodology proposed in this paper can be extended to similar application scenarios.
Introduction
Wearable robot technology belongs to a comprehensive research field that uses fused sensor information to control and assist joint movement, in attempting to break through the limits of human physical activity [1-3]. Its main application scenarios are physical enhancement for healthy people [4,5] and rehabilitation training for non-healthy groups [6-8].
In both aspects, rigid exoskeleton systems have made great progress, but there are still problems such as the large size and mass of the device and limited movement for the wearer. Different from a rigid exoskeleton, the soft exosuit can provide assistance for humans while being more comfortable and lightweight [9]. These soft wearable systems mainly show the advantages of extremely high movement flexibility and a large movement range that rigid systems do not have [10]. It has the following three characteristics when applied to rehabilitation training for patients or power assistance for healthy people.
(1) It can enhance movement ability. In fact, the soft exosuit is slightly inferior to the rigid exoskeleton in terms of maximum output torque. Considering that both patients and healthy people generally have mild or moderate needs for power assistance in daily life, the soft exosuit can meet the basic functional requirements of physical enhancement to a certain extent; (2) It has a large degree of freedom of movement. The soft exosuit has no rigid links on the limbs and no slewing mechanism at the joints, due to the use of a flexible suit and cables to transmit power [11,12]. This allows the entire device not to restrict joint movement at all. The range of motion is the same as when the exosuit is not worn; (3) It is simple and light in form, like other soft robotic structures for rehabilitation [13-15]. Unlike rigid systems, flexible ones do not drive limbs through bulky structures such as rotary mechanisms at joints and rigid links parallel to the limbs. As a result, they are lighter and more comfortable for the wearers.
Harvard University has been an important institution for the research and design of exosuits in recent years, and has achieved considerable technical results [16-19]. In 2013, it started to design a new cable-driven lower extremity exosuit with the goal of assisting human walking [20]. This system detects gait phase information through a foot switch, and uses a geared motor to pull a Bowden cable whose end is connected near the ankle, thereby generating assistive torque at the ankle and hip joints. In 2017, the University of Zurich and ETH Zurich jointly developed a soft wearable device called "Myosuit", which aims to provide continuous assistance to the gravity-bearing hip and knee joints during daily activities [21]. The system combines active and passive elements with a closed-loop force controller to design a mechanism similar to an external muscle. It uses the tendon force and linearized fabric stiffness to estimate the joint angle, based on which positive and negative power is generated. Thereby, a certain degree of gravity compensation is provided to the wearer. In 2016, the European Union planned to design a soft bionic exosuit "XoSoft" for the elderly and the disabled, proposing concepts such as modular joints, intelligent sensing, flexible drive, bionic control, and user monitoring. In 2018, following user-centered design principles and targeting people with mild to moderate gait impairments, the first prototype of XoSoft was constructed, consisting of a flexible woven garment, an elastic band controlled by an electromagnetic clutch (to support knee and hip flexion), and a backpack that houses the system's sensor and control modules [22]. After that, they launched a series of iterative versions [23,24]. In 2020, the Daegu Gyeongbuk Institute of Science and Technology in South Korea developed a soft lower limb exosuit specially designed for assisting up and down stairs, which can provide additional strength to the knee joint [25].
In 2020, the University of Arizona conducted an experimental study to explore the potential of adaptive assistance for ankle flexion and extension with an exosuit, and to analyze the feasibility of improving ground walking performance in patients with cerebral palsy [26].
In general, the soft exosuit needs more specialized design and optimization analysis of the fabric configuration to ensure a close fit with the human body and to avoid excessive deformation during force transfer. The driving device should have a sufficiently high output capacity while remaining as light as possible, to achieve high-efficiency power transmission. The control strategy plays an important role in ensuring human-machine collaboration, which requires careful design and testing so that the control system can respond appropriately and in a timely manner after understanding the intention of the wearer. Our research focuses on the above aspects and analyzes the performance of the related methods through several evaluation experiments. This paper is organized as follows. In Section 2, the structural design method of the exosuit is described, which mainly includes the soft garment and the actuator. In Section 3, the scheme of the control system is described in terms of software and hardware. In Section 4, efficacy evaluation experiments are performed to test the metabolic cost and muscle activation. Conclusions and future work are in Section 5.
Soft Suit
The main components of the lower extremity exosuit's flexible suit include the skirt belt, leg binding and foot accessories, as shown in Figure 1. The webbing along the leg length is approximately 70 cm long and allows for a deformation of more than 14 cm. It can accommodate people of 170-180 cm in height. The belt is designed with a skirt shape that wraps around the entire waist and hips. To fit the human skin and restrict the movement limit of the joint, it adopts a tailoring method that conforms to the structure characteristics of the waist and hip, which has certain protection functions for sports injury. The belt is made up of two layers of polyester fabric and an intermediate layer of nylon fabric, with a sponge pad on the back, which carries the control box and motors comfortably.
The leg binding contains thigh leg binding and calf leg binding, which fits the shape of the human thigh and calf muscles and is designed according to their circumference. It is made up of high-strength Oxford cloth and adopts an adhesive buckle to realize wearing and fixation. There is a point in the middle of the back of the leg binding which is connected to the actuator through the cable sheath that is used to apply assistant force. The leg binding restrains the direction of the elastic ribbon, so that it starts from the front of the thigh and reaches the middle and lower part of the calf along both sides of the leg over the knee. The thigh leg binding at the anchor point attachment adopts the load bearing belt, forming a triangle shape, so that the force on this point is distributed over the whole thigh part.
The foot accessories consist of an ankle bearing and a heel bearing. The ankle bearing is the fixed end of the cable sheath. The heel bearing part is the fixed end of the Bowden cable, which serves as the bearing end of the cable-driven system.
Cable-Driven Module
To meet the requirements of light weight, low power consumption and high performance of the flexible system, the motor and the Bowden cable fitted to the human leg are used as an active driving unit to enhance the movement ability of the corresponding muscles. The total weight of the cable-driven module is less than 1 kg. The cable diameter is 1.6 cm, which can withstand the maximum tension of about 1800 N. Through reasonable setting of the force transmission path and accurate control of retraction and release, it simulates the force generation process of human muscle to assist the plantar-flexion movement of the ankle joint. The design of the driving unit mainly focuses on the integration of the driving device, transmission device and executive device.
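The rated cable tension above invites a quick back-of-the-envelope check of the torque the module could deliver. In the sketch below, the 5 cm moment arm from the heel anchor to the ankle axis is an illustrative assumption, not a measured parameter of this exosuit:

```python
# Back-of-the-envelope sketch: cable tension -> ankle plantar-flexion torque.
# The 5 cm moment arm (heel anchor to ankle axis) is an illustrative
# assumption, not a measured parameter of this exosuit.

def ankle_assist_torque(cable_tension_n: float, moment_arm_m: float = 0.05) -> float:
    """Approximate assistive torque (N*m) = tension * moment arm."""
    return cable_tension_n * moment_arm_m

# At the cable's rated maximum of about 1800 N, the torque upper bound
# under this assumed geometry is roughly 90 N*m.
print(ankle_assist_torque(1800.0))
```

In practice only a fraction of this bound is usable, since sheath friction and fabric deformation absorb part of the transmitted force.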
As shown in Figures 2 and 3, the driving device is mainly composed of a driver, encoder, motor, gear reducer and cable wheel. In the process of operation, the motor decelerates through the reducer and increases the output torque according to the output parameters of the control device, thus driving the cable wheel to rotate, and finally pulling the Bowden cable to provide assistance for the ankle joint. The transmission device is a flexible scheme that adopts the combination of a cable and sheath between the power source and load. This method combines the advantages of cable transmission and gear transmission, and has the characteristics of large load bearing force, small recoil force and a wide range of reachable motion.
The executive device consists of the Bowden cable and the joint anchor point. It sets clamping for the Bowden cable sheath on the calf strap and anchor point at the heel. During the phase for joint assistance, the Bowden cable pulls the anchor point along the sheath, thus exerting an assistive torque on the ankle.
Hardware Composition
As shown in Figure 4, the sensor composition of the lower extremity exosuit is relatively simple, including only inertial measurement units (IMUs) and encoders (Renishaw, MB039 + MRA039). The IMUs are sewn onto the calf straps, and are mainly used to measure the calf swing angle over the gait cycle, which provides a basis for the subsequent gait phase division. The encoders are integrated in the driving units to measure the rotation angle of the motors (Kollmorgen, TBM7615), so that the length of the Bowden cable can be calculated to provide data feedback for the closed-loop position control.
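The encoder-to-cable-length conversion mentioned above can be sketched as follows; the gear ratio and cable-wheel radius are illustrative assumptions, not the drivetrain's actual parameters:

```python
import math

# Sketch: converting the encoder-measured motor angle into Bowden-cable
# retraction length. The gear ratio and cable-wheel radius below are
# illustrative assumptions, not the actual drivetrain parameters.

def cable_retraction_m(motor_angle_rad: float,
                       gear_ratio: float = 50.0,
                       wheel_radius_m: float = 0.02) -> float:
    """Arc length wound onto the cable wheel: s = r * (motor angle / ratio)."""
    wheel_angle_rad = motor_angle_rad / gear_ratio  # reducer scales the angle down
    return wheel_angle_rad * wheel_radius_m

# One full motor revolution through the assumed 50:1 reducer and 2 cm wheel
# retracts roughly 2.5 mm of cable.
print(cable_retraction_m(2 * math.pi))
```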
In view of the characteristics of the system, such as various sensors and different data structures, the standardization and normalization of the system software and hardware interfaces are carried out. Based on controller area network (CAN) bus, a perception and control system architecture is built, and a scheme suitable for information transmission between each module of the system is studied. On the application layer, we carried out an investigation on real-time transmission control protocol technology for data collection, data encoding and decoding, control command sending or receiving, etc., to achieve a high efficiency of data interaction between various modules, and to further improve the real-time transmission of system information.
The system contains a two-channel CAN bus, which is used for communication between the core controller (STM32), multi-sensor group, and servo driver (Elmo, Gold Twitter). It is mainly responsible for collecting sensor information and human body motion data, completing the delivery of servo control instructions, and feeding back the location of the actuators, etc. The system information flow chart is shown in Figure 5.
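As a rough illustration of the application-layer encoding and decoding described above, the following sketch packs one IMU sample into a payload that fits the 8-byte data field of a classic CAN frame; the field layout and scaling factors are assumed conventions, not the system's actual protocol:

```python
import struct

# Sketch: packing one IMU sample into a payload that fits the 8-byte data
# field of a classic CAN frame. The field layout and scaling factors are
# assumed conventions, not the system's actual application-layer protocol.

FMT = "<Bhhh"  # node id (uint8), angle*100, gyro*10, sequence counter: 7 bytes

def encode_imu_frame(node_id: int, angle_deg: float, gyro_dps: float, seq: int) -> bytes:
    return struct.pack(FMT, node_id, round(angle_deg * 100), round(gyro_dps * 10), seq)

def decode_imu_frame(payload: bytes):
    node_id, angle_q, gyro_q, seq = struct.unpack(FMT, payload)
    return node_id, angle_q / 100.0, gyro_q / 10.0, seq

payload = encode_imu_frame(3, 12.34, -150.0, 7)
assert len(payload) <= 8            # fits a classic CAN data field
print(decode_imu_frame(payload))    # -> (3, 12.34, -150.0, 7)
```

Fixed-point scaling keeps each field within an int16 while preserving enough resolution for gait-phase decisions.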
Motion Control Strategy
The time between any two identical motion states in the gait process is defined as the gait cycle, which is usually regarded as the process between two adjacent heel touches, as shown in Figure 6.
Figure 6. The sketch of the gait division.
In a cycle, lower extremity movement can be divided into the supporting phase (with contact between foot and ground) and the swing phase (without contact between foot and ground), accounting for approximately 60% and 40% of the time, respectively. It can be further subdivided according to whether the left and right legs are in the supporting phase or the swing phase. When both the left and right legs are in the supporting phase, it is called the double-leg supporting phase. When one of the legs is in the swing phase, it is called the single-leg supporting phase. A more specific subdivision can divide the supporting phase into four sub-phases, including the double leg supporting phase, the initial supporting phase, the mid-supporting phase and the terminal supporting phase, according to the order of occurrence. Similarly, the swing phase can be divided into three subphases, which include the initial swing phase, the mid-swing phase, and the terminal swing phase, in the order of occurrence. According to the analysis above, we developed a position control strategy based on the gait phase division, which is shown in Figure 7. After obtaining information such as limb angle, angular velocity and angular acceleration, the gait phase of each leg during walking can be divided. The system will identify the leg-lifting stages (initial swing and mid-swing) based on that, so as to provide a judgment basis for determining the timing and duration of the active power assistance.
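The phase identification described above can be approximated with a simple threshold rule on shank angular velocity; the 50 deg/s threshold and the two-phase labelling below are illustrative assumptions, not the system's actual classifier:

```python
# Sketch: a threshold rule on shank angular velocity as a stand-in for the
# IMU-based gait phase division described above. The 50 deg/s threshold and
# two-phase labelling are illustrative assumptions, not the actual classifier.

SWING_THRESHOLD_DPS = 50.0  # the shank rotates quickly in swing, slowly in stance

def label_phases(gyro_dps_series):
    """Label each sample 'swing' when |angular velocity| exceeds the threshold."""
    return ["swing" if abs(w) > SWING_THRESHOLD_DPS else "stance"
            for w in gyro_dps_series]

samples = [5, 12, 80, 150, 120, 40, 8]  # synthetic shank gyro readings, deg/s
print(label_phases(samples))
# -> ['stance', 'stance', 'swing', 'swing', 'swing', 'stance', 'stance']
```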
Then, the length of Bowden cable that needs to be retracted theoretically is estimated through the current angle information, and then the actual value is calculated using the motor rotation angle fed back by the encoder. The difference between both will be output to a proportional-integral-differential (PID) controller to realize the regulation of the motor state, ensuring the effectiveness of the power assistance.
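A minimal sketch of the position loop just described, with illustrative (untuned) gains:

```python
# Sketch of the position loop: the error between the theoretically required
# and the encoder-derived actual cable length drives a PID controller.
# The gains and sample values are illustrative, not tuned parameters.

class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1)
target_m, actual_m, dt = 0.010, 0.004, 0.01   # cable lengths (m), 100 Hz loop
command = pid.step(target_m - actual_m, dt)   # positive -> retract more cable
print(round(command, 6))
```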
Physiological Metabolic Test
Oxygen consumption rate is a quantitative indicator that characterizes the metabolic level of human exercise. If, under the same motion conditions, this physiological data of the human body is significantly reduced after wearing the wearable robot, it can indicate that this equipment can save a part of physical energy consumption. In order to prove the effectiveness of the system proposed in this study, we have used a portable respiratory metabolism monitoring system (K5) to monitor the oxygen consumption data of motion under three conditions: without the exosuit, with the exosuit powered off and with the exosuit powered on. In each test condition, the wearer is required to carry a 5 kg load and walk on a treadmill for nearly 30 min at a speed of 4 km/h, as shown in Figure 8.
Figure 8. (a) Test without exosuit; (b) Test with exosuit not powered on; (c) Test with exosuit powered on.

Figure 9 shows the change curves of the oxygen consumption rate during the experiment. It can be qualitatively seen that after the subject put on this wearable system, the oxygen consumption rate with the exosuit powered on is significantly lower than that with the exosuit powered off, which proves the feasibility of saving physical energy through the cable-driven method. However, the oxygen consumption rate with the exosuit powered on is not intuitively lower than that without the exosuit. This is because the extra energy expenditure associated with the system weight and other factors can offset some of the energy savings and even increase metabolic expenditure.
For quantitative illustration, we calculate the average values of the corresponding oxygen consumption rates (V_No_suit, V_Unpowered and V_Powered), which are 1069.9 mL/min, 1203.0 mL/min and 1032.4 mL/min, respectively. In order to exclude the difference caused by the different initial physiological state, the oxygen consumption data of the wearer at rest is also measured, with an average value (V_Rest) of 264.1 mL/min. Referring to the relevant literature, the net metabolic change and gross metabolic change brought by the exoskeleton can be obtained by the following formulas.
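The formulas referenced above did not survive extraction; the sketch below computes net and gross metabolic changes from the reported averages under commonly used definitions, which are an assumption here rather than the paper's exact expressions:

```python
# Sketch: net and gross metabolic changes computed from the reported average
# oxygen consumption rates. The paper's exact formulas were lost from the
# text, so the definitions below are commonly used assumptions.

V_NO_SUIT, V_UNPOWERED, V_POWERED, V_REST = 1069.9, 1203.0, 1032.4, 264.1  # mL/min

# Net change: powered vs. unpowered walking, with resting metabolism removed.
net_change = ((V_UNPOWERED - V_REST) - (V_POWERED - V_REST)) / (V_UNPOWERED - V_REST)

# Gross change: powered walking vs. walking without the suit at all.
gross_change = (V_NO_SUIT - V_POWERED) / V_NO_SUIT

print(f"net reduction:   {net_change:.1%}")   # ~18.2% under these assumptions
print(f"gross reduction: {gross_change:.1%}") # ~3.5% under these assumptions
```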
Muscle Activation Test
Assisting in muscle output and providing additional force to the joint are the basic goals for designing a wearable robot. To measure the compensation effect of the lower extremity exosuit on muscle activation, we measured the physiological data of gastrocnemius muscle using sEMG sensors. The test was also divided into three groups: without the exosuit, with the exosuit powered off and with the exosuit powered on. Other experimental conditions are the same as those in the physiological metabolic test.
The change curves of the sEMG signal of the gastrocnemius under different test conditions are shown in Figure 10. Root mean square (RMS) is introduced to quantitatively describe muscle activation, as shown in Table 1. When subjects put on the exosuit without turning it on, it is equivalent to carrying an extra load, which naturally increases muscle activation. However, once the system starts working, the additional assistive torque reduces muscle activation significantly below the level measured without the exosuit.
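The RMS metric used in Table 1 can be sketched directly; the sample window below is synthetic, not recorded sEMG data:

```python
import math

# Sketch: the root-mean-square (RMS) metric used to quantify muscle
# activation in Table 1, applied to a synthetic window of sEMG samples
# (illustrative values, not recorded data).

def rms(signal):
    """RMS = sqrt(mean of squared samples)."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

window = [0.1, -0.3, 0.2, -0.1, 0.4]  # synthetic sEMG window (mV)
print(round(rms(window), 4))  # -> 0.249
```

Because sEMG is zero-mean and oscillatory, RMS over a sliding window is a standard proxy for activation intensity.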
Conclusions and Future Work
In this study, a lower extremity soft exosuit for the ankle joint was designed. Through the synthesis of several technical strands such as structure, sensing and control, the goal of walking assistance was achieved. In terms of structure, the work focuses on the seamless integration of the soft suit, and realizes a close combination of the human-robot system by configuring a variety of fabrics. A compact cable-driven device was then designed to deliver power to the ankle by rewinding the Bowden cables in an orderly manner. In the aspect of perception, multiple sensors are used to feed back the human movement information and system state data, from which the gait phase can be divided, laying a foundation for determining the assistance stage. In terms of control, the hardware architecture for human-robot cooperative control was designed, and a motion control strategy based on a closed position loop was adopted to realize accurate following and effective assistance of ankle joint movement. Compared with previous studies [27,28], the main contribution of this paper is to propose a novel fabric structure that can work closely with the muscles of the lower limb, and to design an effective motion control strategy based on gait detection.
In order to measure the assistive effect of the system, we carried out the physiological metabolic test and the muscle activation test, using K5 and surface EMG sensors to detect and analyze the oxygen consumption rate and EMG signals under three walking conditions. The results show that the exoskeleton can reduce oxygen consumption during exercise to a certain extent, thus achieving the goal of reducing human metabolism. At the same time, it can also reduce the activation of relevant muscles, demonstrating the effectiveness of compensating joint output.
In the near future, we will design a compensation strategy to improve performance based on the analysis of the nonlinear characteristics of the transmission system. At the same time, it is necessary to provide all-round assistance to multiple joints to enhance the assistive effect.
Nature and frequency of prescription modifications: An evaluation from the community pharmacy
Medication errors can occur at any point in the medication use process. The present study investigated the frequency and nature of prescription modifications and the outcomes of pharmacists' interventions at the community pharmacy. A descriptive, prospective study was conducted, and data were collected for all prescriptions modified by the pharmacy during the study. All medicines were classified into therapeutic groups using the Anatomical Therapeutic Chemical classification. A total of 20,205 prescriptions were processed during the study, and the overall incidence of modifications by the community pharmacy was 10.9% (2216 prescriptions). The majority (1676; 75.6%) of the reasons for modification concerned the clarification of an insufficiently specified prescription. Drug-drug interactions (32.5%), contraindications (6.5%) and double medications (40.6%) were prevalent. The findings of this study reinforce the importance of prescription screening and interventions by pharmacists in reducing preventable adverse events attributed to medication errors. They also emphasize the necessity of interdisciplinary communication and cooperation in identifying and resolving prescribing errors and irregularities in order to achieve optimal therapeutic outcomes for the patient.
INTRODUCTION
Patient safety has become a major concern since the November 1999 release of the Institute of Medicine (IOM) report, To Err Is Human. Health care practitioners may have been surprised to learn from this report that errors involving prescription medications are responsible for up to 7,000 American deaths per year and that the financial costs of drug-related morbidity and mortality may approach $77 billion a year (Grissinger et al., 2003).
Medication errors can occur at any point in the medication use process. The prescribing step involves clinical decision making, selecting a treatment or drug regimen, documenting information in the medical record, and ordering the selected drug treatment (IOM, 2007). Some of the reasons that errors occur during this stage are that prescribers do not use currently available treatment evidence or available patient information (i.e., allergy information, other medications, other conditions), do not follow set policies or procedures, fail to document appropriate information in the patient chart, or do not communicate the prescription appropriately (Giampaolo and Pietro, 2009; Ross et al., 2012). The dichotomous nature of community pharmacy practice is a critical dilemma for the profession. The role of community pharmacists has traditionally been characterized by dispensing prescription medicines, selling over-the-counter medication and offering healthcare advice. Community pharmacists are often not viewed as a core part of the primary healthcare team. Perceptions of being both a retailer and a healthcare provider create uncertainty in the minds of the medical profession, funders and consumers. Pharmacy is the only health profession that is reimbursed for its sale of a product rather than provision of a service (Rigby, 2010). In contrast, pharmacists are placed in an excellent position to promote the rational use of medicines (for example, prescribing, dispensing, and use of drugs).
The literature on prescribing errors is gaining momentum, and the data so far suggest that the problem is not limited to any specific health care environment or defined practice setting. For example, a study conducted in Galway (Ireland) to estimate the seriousness and level of prescribing errors occurring in general practice reported a prescribing error rate of 12.4% (Sayers, 2009). Similarly, the pharmacist interventions demonstrated to be most effective for influencing prescribing practice include audit and feedback, reminders, educational outreach visits, and patient-mediated interventions (Grindrod et al., 2006).
According to Hopper, although there is published evidence on prescribing errors, there is still a paucity of research reporting the role of pharmacists in identifying these errors and the prevalence of near-miss incidents in the prescribing process (Hopper et al., 2009). Therefore, the present study was undertaken to investigate the frequency and nature of prescription modifications and the outcomes of pharmacists' interventions at the community pharmacy.
Setting and design
The study was conducted over a 4-month period (February 5 to June 15, 2011) at an urban community pharmacy in Madrid (Spain). The community pharmacy operates a 12-hour shift, is attached to an ambulatory health center, and dispenses about 4,000 prescriptions each month. Like all community pharmacies in Spain, it is privately owned. In Spain, the pharmacy office (community pharmacy) is a private health establishment run in the public interest, subject to health planning by the autonomous communities, in which the owner-pharmacist works with aides or assistants. The pharmacies dispense drugs to patients covered by the National Health System under the conditions set forth in the regulations.
The professional functions of pharmacists have changed from a passive to a more active role; pharmacists now personally follow up with patients (Bosch, 2000). The pharmacy technician assists the pharmacist in the dispensing of pharmaceutical products, controls inventory and the organization of pharmaceutical products, and evaluates the user's physiological parameters and vital signs under the pharmacist's supervision (Martínez-Sanchez, 2012). Ethical approval was obtained from the local research ethics committee.
The community pharmacy offers services such as compounding, weight and blood pressure measurement, and cholesterol and glucose testing. A population of about 2,000 inhabitants is served. The pharmacists and pharmacy technicians who worked there were invited to participate (3 pharmacists and 2 pharmacy technicians); eventually, 2 pharmacists and 2 pharmacy technicians agreed to take part. All participants received a pretested study protocol with the definitions used, the objectives and the methods to be applied during the study period. The participating pharmacy had to collect all modified prescriptions (cases) during this period.
Selection of cases
All prescriptions for other health care products (such as dressings, incontinence materials, syringes and needles) presented to the community pharmacy by patients in the predetermined period were excluded. The data comprised all prescriptions that were modified by the pharmacy during the study. Reasons for including a prescription modification as a case were defined in the protocol and in the case registration form. If there were two or more reasons for modifying a prescription, the pharmacist had to select the one he or she considered most relevant. The protocol excluded the following modifications because of their lack of potential impact on patient care: incorrect or absent address, missing or incorrect insurance data, and product not in stock. In this study, a prescription error is defined as the result of a prescribing decision or prescription-writing process that leads to an unintentional, significant reduction in the probability of treatment being timely and effective, or to an increased risk of harm.
During data management, the nature of the prescription modifications was divided into three groups. In the first group, a clarification was needed to carry out the prescription order. In most cases, an essential administrative feature of the prescription was missing or obviously incorrect; in fact, the pharmacy could not have dispensed the drug without clarification. In the second group, identified as 'correction of prescription error', the prescription was administratively correct but could potentially have had clinical consequences if not altered. Cases identified as 'wrong dose' are an important example, for which there are several possible reasons, such as a dose too high or too low according to standard references or in conflict with the patient's own records. The third group included reasons for modification not covered by the first two categories. Classifications of the reported causes and types of error were adapted from Ashcroft et al. (2005).
All medicines were classified into therapeutic groups using the Anatomical Therapeutic Chemical (ATC) classification of the WHO Collaborating Centre for Drug Statistics Methodology (Anonymous, 1999). After inspection, data from the registration forms were entered into a Microsoft Access database and statistically analyzed (van Mil, 1999). The outcome of the modification (at the prescriber or patient level) was recorded as an intervention: a) approved and prescription changed, b) approved and no prescription changed, c) rejected, information only. The community pharmacy anonymised patients and healthcare providers.
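As a sketch of this classification-and-tallying step (the study used a Microsoft Access database; the snippet below is an illustrative Python equivalent), each modified prescription is tagged with the first-level ATC class of its drug and an intervention outcome, then counted. The drug codes and outcomes shown are invented sample cases, not data from the study.

```python
# Illustrative sketch of recording and classifying modified prescriptions
# by first-level ATC class and intervention outcome. Records are invented.
from collections import Counter

ATC_CLASS_NAMES = {
    "B": "Blood and blood forming organs",
    "C": "Cardiovascular system",
    "J": "Antiinfectives for systemic use",
    "N": "Nervous system",
    "M": "Musculo-skeletal system",
}

# (full ATC code of the drug, intervention outcome) -- invented examples
records = [
    ("C09AA02", "approved_changed"),      # enalapril
    ("J01CA04", "approved_changed"),      # amoxicillin
    ("N02BE01", "approved_not_changed"),  # paracetamol
    ("C07AB02", "rejected_info_only"),    # metoprolol
    ("B01AC06", "approved_changed"),      # acetylsalicylic acid
]

by_class = Counter(code[0] for code, _ in records)  # first ATC level
by_outcome = Counter(outcome for _, outcome in records)

for letter, n in by_class.most_common():
    print(f"({letter}) {ATC_CLASS_NAMES.get(letter, 'Other')}: {n}")

total_processed = 20205  # totals reported in the study
total_modified = 2216
print(f"incidence: {100 * total_modified / total_processed:.2f}%")
```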
RESULTS
A total of 20,205 prescriptions were processed during the study, and the overall incidence of modifications by the community pharmacy was 10.9% (2216 prescriptions). Modifications of prescriptions were most frequently found in the following therapeutic domains: (B) Blood and blood forming organs, (C) Cardiovascular system, (J) Antiinfectives for systemic use, (N) Nervous system, and (M) Musculo-skeletal system (Table 1).
Table 2 shows the nature of the prescription modifications. The majority (1676; 75.6%) of the reasons for modification concerned the clarification of an insufficiently specified prescription (e.g. no specification, insufficient patient data, wrong strength or strength not specified), whereas in 123 cases (5.5%) a prescription error was corrected that might have had clinical consequences ('correction of prescription error'). Drug-drug interaction (32.5%), contraindication (6.5%) or double medication (40.6%) were more prevalent in this latter group than other interventions, for example, dose corrections (20.3%). Table 3 presents some individual examples of modifications.
At the prescriber's level, 1,551 prescriptions (70%) of all modifications made were accepted and the prescription modified. Other outcomes in this category were as follows: prescriber asked for clarification (5%), prescriber informed only (10%), and intervention not accepted (15%). At the patient level, written information was provided to the patient in over 70% of the modifications made, and medication counseling (over and above the routine instructions given at the dispensing window) took place in 20% of all interventions in this category.
DISCUSSION
Our study reports an incidence of 10.9% for prescription modifications at the community pharmacy. About 70% of the interventions made during the study period were accepted by prescribers. Prescribing errors were the most frequent type of error (75.6%), related to clarifications needed to carry out the prescription order. Correction of prescription errors was the second most common cause of modification (5.5%). Wrong patient data, double medication, interaction with other medicines, contraindication in pregnancy or children, and contraindication due to allergy were significantly higher (92.4%). The prescribing error incidence is comparable to those reported in other studies (Taylor, 2005). In an Ireland-based study by Sayers et al. (2009), of a total of 3,948 prescriptions, 491 (12.4%) contained one or more errors, and of a total of 8,686 drug items, 546 (6.2%) contained one or more errors. In a UK-based study at the primary care level, Sandars and Esmail (2003) revealed that prescribing and prescription errors occur in up to 11% of all prescriptions, mainly related to errors in dose. A Taiwan-based study identified prescription errors in 18.3% (n = 560) of prescriptions in the community setting; potential prescribing errors included errors of omission (25.5%), errors of commission (53.4%), and others (21.1%). The top three errors were incorrect dosage (27.5%), missing indication (23.6%), and insufficient or unavailable drug information (18.9%) (Ho et al., 2012). Pharmacists' intervention rates are similarly comparable. Hopper et al.
(2009) found prescription errors in 0.71% of the total 82,800 prescriptions received in primary health care. The intercepted prescriptions generated 890 interventions related to drug-related problems (DRPs); the prescriber accepted the intervention in 53% of all cases, and the treatment was changed accordingly (Hopper et al., 2009). In a Canada-based study, Young et al. (2012) reported a pharmacist intervention rate of 2.8%, with the prescriber contacted for 69% of the interventions; seventy-two percent of prescriptions were changed and 89% of the problems resolved (Young et al., 2012).
Interventions that were more likely to be accepted by the prescribing physicians were those involving dosage errors, duplicate therapy, and requests for clarification of the patient's prescription. We have not attempted to trace the fate of rejected interventions in this study. In the absence of a structured validation process, we were unable to investigate the basis of rejected interventions.
Our findings refer only to actual modifications of the prescriptions presented on the study day, as our protocol did not ask for the recording of other potentially relevant interventions, such as the modification or discontinuation of an already dispensed drug or an instruction to the patient to avoid certain drug problems. At the prescriber level, similar results were described by Hopper et al. (2009): prescriber asked for clarification (3%). The nature of the pharmacist interventions reported is comparable to that described in other studies. In a US-based study by Warholak et al., the most common reason for pharmacists' interventions was to supplement omitted information (31.9%), especially missing directions. Dosing errors were also quite common. The most common response by pharmacists was to contact the prescriber (64.1%) (Warholak et al., 2009). In most cases (56%), the prescription order was changed and the prescription was ultimately dispensed. Another Malaysia-based study reported that 24.2% of the pharmacist interventions carried out were related to contacting the prescribers, and 19.4% to clarifying with the patient or his/her representative (Chua et al., 2003). In this study, at the patient level, the most frequent pharmacist interventions were providing information to patients about prescribing modifications and medication counseling. Similar findings have been described in recent studies carried out in the US by Kuo et al. (2013) and Carole and Kimberlin (2011).
In fact, our findings are consistent with other studies of pharmacist interventions and prescribing problems, showing that in a primary care setting the focus is most often on prescription problems (Ekedahl, 2010; Mandt et al., 2009; Leemans et al., 2003). At the same time, as this is a descriptive study, we have been cautious in comparing these results with other studies, owing to the varying methodologies and definitions of interventions that characterize them (Pottegard et al., 2011). Likewise, from the perspective of the causes of prescribing errors reported in the scientific literature (Chen et al., 2005b; Lewis et al., 2009; Chen et al., 2005a), two basic considerations can be made. First, despite the computer revolution, many prescriptions continue to be handwritten, and this is a reality in the Spanish health system (Rodríguez-Vera et al., 2002). A number of European countries, such as Britain and Spain, are still struggling to implement an integrated digitized module (Heise, 2011). Some studies show an association between handwritten prescriptions and the incidence of errors (Al Shahabi et al., 2012; Gandhi et al., 2005; Yuosif, 2011; Tully, 2012). This topic must be taken into consideration in future research evaluating the nature of prescribing errors at the community pharmacy.
Second, pharmacist-physician communication is a vital component of quality health care. Enhanced communication can reduce costs, promote patient safety, and prevent medical errors (Schenkel, 2000). However, the community pharmacist and the physician play separate roles in the delivery of prescription drugs to patients, and a protocol for pharmacist-physician communication and a standardized process to manage the patient's pharmacotherapy are lacking. From our point of view, the outcomes of the study reinforce the importance of prescription screening and interventions by pharmacists in minimizing preventable adverse events attributed to medication errors. At the same time, the impact of interdisciplinary communication and cooperation in identifying and resolving prescribing errors and irregularities, in order to achieve optimal therapeutic outcomes for the patient, should be taken into account in future research. Professional cooperation between pharmacist and physician should combine the unique knowledge of both professions and thereby achieve optimal drug therapy for the patient (Saanum and Mellbye, 1996).
Conclusion
The findings of this study reinforce the importance of prescription screening and interventions by pharmacists in reducing preventable adverse events attributed to medication errors. They also emphasize the necessity of interdisciplinary communication and cooperation in identifying and resolving prescribing errors and irregularities in order to achieve optimal therapeutic outcomes for the patient. A systematic and more uniform registration of medication errors in community pharmacy will strengthen the quality of the data, help optimize the possibilities to learn from the described incidents and, hence, improve patient safety.
Table 1 .
Characteristics of the modified prescriptions according to the distribution of ATC classes. All interventions and their outcomes were later reviewed by one member of the research team, who also categorized the interventions according to the Pharmaceutical Care Network of Europe (PCNE) classification into broad drug-related problem (DRP) classes.
Table 2 .
Nature of prescription modifications.
*double medication is a combination of the same substance or different substances from the same therapeutic group.
Table 3 .
Some examples of modifications of prescription
|
v3-fos-license
|
2022-07-10T15:12:03.506Z
|
2022-07-08T00:00:00.000
|
250400232
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4344/12/7/754/pdf?version=1657264075",
"pdf_hash": "1c2bdd42aa5b9eb99036e8537430fc7672e2e970",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2244",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"sha1": "1669c068b85165893060f441c511aff2c036f6bf",
"year": 2022
}
|
pes2o/s2orc
|
Sonophotocatalysis—Limits and Possibilities for Synergistic Effects
Advanced oxidation processes are promising techniques for water remediation and the degradation of micropollutants in aqueous systems. Since single processes such as sonolysis and photocatalysis exhibit limitations, combined AOP systems can enhance degradation efficiency. The present work addresses the synergistic intensification potential of ultrasound-assisted photocatalysis (sonophotocatalysis) for bisphenol A degradation with a low-frequency sonotrode (f = 20 kHz) in a batch system. The effects of energy input and suspended photocatalyst dosage (TiO2 nanoparticles, m = 0–0.5 g/L) were investigated. To understand the synergistic effects, the sonication characteristics were investigated by bubble-field analysis, hydrophone measurements, and chemiluminescence of luminol to identify cavitation areas arising from the generation of hydroxyl radicals. Comparing sonophotocatalysis with sonolysis and photocatalysis (incl. mechanical stirring), synergies of up to 295% and degradation rates of up to 1.35 min−1 were achieved. Besides the proof of synergistic intensification, the investigation of energy efficiency for a degradation degree of 80% shows that a process optimization can be realized. Thus, it could be demonstrated that there is an effective limit of energy input depending on the TiO2 dosage.
Introduction
In the last decade, ultrasound-assisted photocatalysis has received a lot of attention for the degradation of micropollutants in aqueous systems such as endocrine disruptors (EDCs), pharmaceutically active compounds (PhACs), and other persistent organic substances [1][2][3]. As a part of advanced oxidation processes (AOP), sonophotocatalysis produces highly reactive oxygen species (ROS, e.g., •OH) to oxidize organic pollutants with the help of heterogeneous catalysts and suitable irradiation in combination with acoustic cavitation [4]. The major aim of combined AOP techniques such as sonophotocatalysis is to enhance the overall efficiency of a single AOP degradation by overcoming its limiting factors and disadvantages [5,6].
In the case of heterogeneous photocatalysis with suspended catalytic particles, three aspects must primarily be considered: (I) the mass transfer of pollutants to and from the catalytic surface (diffusion limited); (II) agglomeration of suspended particles, resulting in a decreased catalytic surface area; and (III) homogenization to avoid concentration gradients in the reaction volume.
Several studies have shown that the introduction of low-frequency ultrasound can enhance the photocatalytic degradation process and lead to synergistic effects [3,4,7–9].
Due to cavitation phenomena, known as the formation (nucleation), growth, and collapse of water-vapor-filled microbubbles, various chemical [10] and mechanical effects [11,12] occur, which can be useful for synergistic interactions by overcoming the aforementioned photocatalytic limitations [9]. Chemical effects arise from the pyrolytic conditions (~5000 K and ~1000 bar) during bubble collapse [13]. These can be used to degrade organic compounds [14,15] or to initiate homolytic splitting of water molecules into •OH and •H, generating additional ROS [9,10]. Mechanical effects depend on the collapse characteristics of the cavitation bubble: an asymmetric collapse leads to microjets, a symmetric collapse to shockwaves. Both kinds of collapse accelerate the fluid and cause shear forces in the sonicated medium, enhancing the overall mass transfer of pollutants and ensuring steady homogenization and particle deagglomeration [11,12,16]. In general, both the chemical and the mechanical effects increase with increasing energy input.
However, does this mean that the sonophotocatalytic degradation process can be steadily enhanced and maximized by increasing the ultrasonic energy? Conversely, is there a minimum energy input for reaching any synergistic effects? What synergy can be achieved by acoustic cavitation, apart from non-cavitating mixing effects? And what does the synergy tell us about optimized process conditions?
Furthermore, procedural parameters such as the implemented ultrasound transducer (type and frequency), catalytic particle dosage, or the developed reactor concept must be taken into account.
However, there is a lack of knowledge concerning sonophotocatalysis with regard to economic optimization potential versus synergistic intensification, considering optimized degradation kinetic constants and energy consumption [27,28].
Therefore, this work investigates the dependence of the sonolytic, photocatalytic, and sonophotocatalytic degradation of the model pollutant bisphenol A in water on the ultrasound energy input and catalyst dosage. Since synergistic effects have been shown in the literature with tip sonotrodes [29][30][31][32][33][34][35][36], such a transducer type (20 kHz) was coupled with a UVB-LED system (300 nm) to achieve maximal overlap of the expected cavitation fields and light irradiation. The degradation experiments were carried out with TiO2 (Degussa P25) in a batch reactor. Correlations were made between degradation processes, kinetic constants, and synergy, with respect to the reactor concept. The sonotrode was characterized by analysis of the bubble field, mapping of oxidizing cavitation areas, and measurement of cavitation intensity. Furthermore, the absorption of light at various TiO2 dosages was approximated. Finally, the relationship between degradation efficiency, synergy, energy consumption, and optimal degradation parameters was evaluated.
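A note on how a synergy figure of this kind is commonly quantified: one widespread definition for combined AOPs divides the first-order rate constant of the combined process by the sum of the single-process constants. Whether this exact formula is the one applied in this work is an assumption, and the k-values below are illustrative (the sonolysis value lies within the range reported later).

```python
# Sketch of a commonly used synergy index for combined AOPs:
#   S = k_sonophotocatalysis / (k_sonolysis + k_photocatalysis)
# S > 1 indicates a synergistic (more than additive) effect. This exact
# definition and the k-values below are illustrative assumptions.

def synergy_index(k_spc: float, k_s: float, k_pc: float) -> float:
    """Ratio of the combined first-order rate constant to the sum of the
    single-process constants (dimensionless; >1 means synergy)."""
    return k_spc / (k_s + k_pc)

k_s = 0.08    # sonolysis, min^-1 (illustrative)
k_pc = 0.26   # photocatalysis, min^-1 (illustrative)
k_spc = 1.0   # sonophotocatalysis, min^-1 (illustrative)

s = synergy_index(k_spc, k_s, k_pc)
print(f"synergy index: {s:.2f} ({100 * s:.0f}%)")
```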
Sonolysis
For ultrasound-assisted advanced oxidation processes, various transducer systems can be used. Since the mechanical cavitation effects are desired, sonophotocatalytic degradation procedures are commonly performed with low-frequency ultrasound. In this work, a tip sonotrode was chosen, with a circular vibrating area of 1.3 cm² and a maximal energy density of 95 W/cm².
The tip sonotrode is characterized by an accelerated, narrowly defined bubble field located directly in front of the vibrating area, resulting in strong mixing effects in the sonicated medium (Figure 1a–c). The intensity depends on the applied amplitude, between 38 µm (25% of maximum) and 114 µm (75% of maximum). For identical sonolytic conditions, the oxidizing cavitation zones were visualized by the chemiluminescent reaction of luminol due to hydroxyl radical generation (Figure 1d–f). The oxidizing cavitation zones are restricted to the region in front of the vibrating area and correspond partly to the bubble fields. With an intensified bubble field and correspondingly higher energy input, an enlargement of the oxidizing cavitation zones can be observed.
This enlargement can be monitored by an increasing cavitation noise level measured with a hydrophone (Figure 2). For the near field around the bubble field, a linear dependence of the cavitation noise level on the applied amplitude and energy input can be derived. However, if the cavitation noise is considered at a greater distance from the sonotrode, this linear correlation loses its validity. Thus, it can be concluded that the main cavitation effects occur in the bubble field and in the area close to it. For more distant regions, non-oxidizing cavitation with reduced cavitation intensity can partly be presumed. However, the possibility that the measured cavitation noise is related to acoustic echoes or reflections ("cavitational artifacts") should be kept in mind due to the nonlinear behavior.
The reactive cavitation occurs in the visualized chemiluminescence areas and corresponds to the amount of generated hydroxyl radicals, which can be confirmed by the correlation with kinetic degradation constants (Figure 3). The sonolytic degradation of bisphenol A can be correlated linearly (R² = 0.995) with the ultrasonic energy input, considering the energy-conversion efficiency (Figure 3a). The kinetic constants of the sonolysis of bisphenol A lie in the range of kS = 0.056–0.091 min−1. Furthermore, it was found that the cavitation noise level also shows a linear correlation with the kinetic constants (Figure 3b).
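Kinetic constants of this type are first-order rate constants, typically obtained by fitting ln(C0/C) against time. A minimal sketch of such a fit on synthetic, noise-free data follows; the fitting procedure (a least-squares slope through the origin) is a generic assumption, not taken from the paper.

```python
# Minimal sketch of how first-order degradation constants such as
# kS = 0.056-0.091 min^-1 are typically obtained: a linear least-squares
# fit of ln(C0/C) versus time. The concentration data below are synthetic.
import math

def fit_first_order(times, concentrations):
    """Least-squares slope of ln(C0/C) vs t through the origin -> k."""
    c0 = concentrations[0]
    y = [math.log(c0 / c) for c in concentrations]
    # Slope through the origin: k = sum(t*y) / sum(t^2)
    num = sum(t * yi for t, yi in zip(times, y))
    den = sum(t * t for t in times)
    return num / den

# Synthetic data generated with k = 0.075 min^-1 (no noise).
k_true = 0.075
t = [0, 5, 10, 15, 20, 30]
c = [1.0 * math.exp(-k_true * ti) for ti in t]

k_fit = fit_first_order(t, c)
print(f"fitted k = {k_fit:.3f} min^-1")   # -> 0.075
```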
Photolysis and Photocatalysis
In this section, the photon-induced degradation of bisphenol A with TiO2 (Degussa P25, mcat = 0–0.5 g/L) as the photocatalyst and a UVB-LED system (λ = 300 nm) is presented for stationary (without mechanical stirring) and nonstationary (with mechanical stirring, without cavitation effects) systems (Figure 4).
Photolysis
The photolysis (without any catalyst loading) shows a degradation rate of kP = 0.022 min−1 (Figure 4a). Due to the absorption behavior of bisphenol A (<285 nm) [37] and the dissociation energy of water required for homolytic splitting to generate radical oxidizing species (H2O → •OH + •H; ΔH = 492 kJ/mol, corresponding to λ < ~240 nm) [38], the photolytic degradation of bisphenol A is not effective and can be neglected compared to the photocatalytic degradation procedures with TiO2.
Photolysis and Photocatalysis
In this section, the photon-induced degradation of bisphenol A including TiO 2 (Degussa P25, m cat. = 0-0.5 g/L) as the photocatalyst and an UVB-LED system (λ = 300 nm) is presented for stationary (without mechanical stirring) and nonstationary (applying mechanical stirring without cavitation effects) systems ( Figure 4). it was found that the cavitation noise level also shows a linear correlation to the kinetic constants ( Figure 3b).
Photolysis and Photocatalysis
In this section, the photon-induced degradation of bisphenol A including TiO2 (Degussa P25, mcat. = 0-0.5 g/L) as the photocatalyst and an UVB-LED system (λ = 300 nm) is presented for stationary (without mechanical stirring) and nonstationary (applying mechanical stirring without cavitation effects) systems ( Figure 4).
Photolysis
The photolysis (without any catalyst loading) shows a degradation rate of kP = 0.022 min −1 (Figure 4a). Due to the absorption behavior of bisphenol A (<285 nm) [37] and the dissociation energy of water for homolytic splitting (H2O → •OH + •H; dH = 492 kJ/mol ~ <240 nm) [38] to achieve radical oxidizing species, the photolytic degradation of bisphenol A is not effective and can be neglected compared to the photocatalytic degradation procedures with TiO2.
Photolysis
The photolysis (without any catalyst loading) shows a degradation rate of kP = 0.022 min−1 (Figure 4a). Due to the absorption behavior of bisphenol A (<285 nm) [37] and the dissociation energy of water required for homolytic splitting (H2O → •OH + •H; dH = 492 kJ/mol, corresponding to <240 nm) [38] to generate radical oxidizing species, the photolytic degradation of bisphenol A is not effective and can be neglected compared to the photocatalytic degradation procedures with TiO2. After adding TiO2, a strong increase can be noticed, with a maximum k-value of 0.262 min−1 for 0.025 g/L TiO2. Further addition of TiO2 results in a decrease in the degradation rate (Figure 4a). The behavior of the photocatalytic degradation can be associated with the UV irradiation and the absorbed light intensity, which depend on the suspended TiO2 nanoparticles (Figure 4b). Similar to previous studies, it was found that the degradation efficiency strongly depends on the free active catalytic surface and is limited by the turbidity of the suspension [39,40]. A detailed investigation of the absorbed light intensity at the different TiO2 dosages showed that photocatalytic conditions are optimal when the whole reactor depth is illuminated and a light intensity of ~10% remains. Although the illumination of the reactor is more efficient for 0.01 g/L TiO2, the available catalytic surface is simply too low. On the other hand, the dosage of 0.05 g/L TiO2 may compensate for this limitation, but the estimated illumination of the reactor only takes place within the first centimeter, which corresponds to 33% of the reactor depth. The remaining reactor volume can be approximated as a "dead volume" that is not involved in the degradation process.
A low catalytic surface area and/or strong turbidity are the main issues in heterogeneous photocatalysis of organic micropollutants, since diffusion is the rate-determining step. To overcome the "dead-volume" issue, the simplest solution is to implement a commercial stirrer to ensure steady homogenization and avoid concentration gradients in the reaction volume.
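The illuminated-depth argument above can be sketched with a simple Beer-Lambert estimate. The effective attenuation coefficient below is an assumption, calibrated so that 0.025 g/L leaves roughly 10% intensity at the full 3 cm reactor depth as described in the text; real TiO2 suspensions also scatter light, which is why the reported depth for 0.05 g/L (~1 cm) is smaller than this linear model predicts.

```python
import math

# Effective Beer-Lambert attenuation coefficient in L/(g*cm), an assumed
# value calibrated to the ~10% residual intensity at 3 cm for 0.025 g/L
EPS = math.log(10) / (3.0 * 0.025)  # ≈ 30.7

def transmitted_fraction(dosage_g_per_l, depth_cm):
    # fraction of incident light remaining at a given depth
    return math.exp(-EPS * dosage_g_per_l * depth_cm)

def illuminated_depth(dosage_g_per_l, cutoff=0.10):
    # depth (cm) at which the intensity falls to the cutoff fraction
    return -math.log(cutoff) / (EPS * dosage_g_per_l)
```

Under this rough model, doubling the dosage halves the illuminated depth, which illustrates the trade-off between catalytic surface area and the growing "dead volume" at the bottom of the reactor.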
Nonstationary Photocatalysis (Mechanically Stirred)
In this work, a magnetic stirrer was used to investigate a simple method for intensifying photocatalysis without cavitation effects. The mixing speed was chosen so that no vortex occurred; thus, an enlargement of the illuminated suspension surface was avoided and the results are comparable to those of the stationary photocatalysis (Figure 4a). It was found that a stirring unit reduces the degradation efficiency for the previously determined optimal reaction conditions at 0.025 g/L and 0.05 g/L TiO2, and enhances the degradation rates for the highly suspended reaction solutions of 0.1-0.5 g/L to maximal k-values of 0.450 min−1. It can be concluded that in a stirred system the "dead-volume" limitation is resolved, resulting in higher degradation efficiency under disadvantageous photocatalytic conditions. However, no intensification was observed for photocatalytic conditions that already showed nearly optimal efficiency due to an effective balance between light absorption and catalytic surface area in the stationary system. This can be explained by the fact that the UV-LED did not illuminate the whole reactor: due to stirring, the effectively illuminated reaction volume increased and the degradation rates were slightly reduced.
Furthermore, the increasing catalytic surface near the UV irradiation source (higher light intensity) in combination with a steady circulation of the reaction volume has more impact on the degradation efficiency than the illumination of a broader range of the reactor, but with diminished light intensity.
Sonophotocatalysis
In this section, the ultrasonic intensification of the photocatalytic degradation of the model pollutant bisphenol A is investigated and evaluated depending on the ultrasonic energy input and the TiO2 dosage. Figure 5a,b shows the sonophotocatalytic rate constants compared to those of the stirred photocatalysis for the TiO2 dosages of 0.01-0.5 g/L and for the amplitudes 38.25 µm (AUS 25%max), 76.5 µm (AUS 50%max), and 114.75 µm (AUS 75%max). The sonophotocatalytic degradation behavior can be split into two regimes depending on the applied TiO2 dosage: (i) 0.01-0.1 g/L ("low dosage") and (ii) 0.3-0.5 g/L ("high dosage").
Low-Dosage Sonophotocatalysis
It can be noticed that the TiO2 dosage of 0.01 g/L is not adequate to achieve further degradation intensification via sonication. Although at this condition the irradiation of the reaction volume is maximized (Figure 4b), the available catalytic surface does not allow any enhancement of the photocatalytic degradation via ultrasound. By adding further TiO2 up to a loading of 0.025 g/L, a threshold is exceeded and the interaction with ultrasound leads to strong improvements. Rate constants of k = 0.629 min−1 (AUS 38.25 µm), 0.908 min−1 (AUS 76.5 µm), and 0.857 min−1 (AUS 114.75 µm) were achieved. Comparing the sonophotocatalytic kinetic constants to those of the stirred photocatalysis, the sonication enhances the degradation by 314% (AUS 38.25 µm) and 465-499% (AUS 76.5 and 114.75 µm) (Figure 5b). Including the sonolytic degradation, synergies of 200% (AUS 38.25 µm) and 250-300% (AUS 76.5 and 114.75 µm) were obtained, which prove the synergistic intensification of the photocatalysis beyond mechanical stirring effects (Figure 5c). Furthermore, it can be observed that there is no significant difference between the amplitudes of AUS 76.5 µm and 114.75 µm regarding the sonophotocatalytic degradation, although the higher energy input leads to more intensive cavitation effects and should enhance the degradation.
Thus, it can be concluded that there is an upper limit for an intensification potential of the sonophotocatalysis (depending on the applied reactor concept) regarding the amplitude and energy input, respectively.
High-Dosage Sonophotocatalysis
In the high-dosage suspension range, the overall global maximum of the sonophotocatalysis was found at 0.3 g/L TiO2 with kmax,SPC = 1.349 min−1 (AUS 76.5 µm) and 1.219 min−1 (AUS 114.75 µm). Compared to the corresponding maximum of the stirred photocatalysis (k = 0.450 min−1) at equal TiO2 dosage, the degradation potential was enhanced by ~200% (Figure 5b), with an overall synergy value of ~125-150% (Figure 5c). Further addition of TiO2 leads to a decrease in the degradation, which is attributed to the excessive turbidity and indicates the upper boundary condition for the applied reactor concept, similar to the results of the photocatalytic degradation methods. As in the low-dosage suspension range, the rate constants of AUS 76.5 µm and 114.75 µm do not differ significantly. However, for AUS 38.25 µm, the sonophotocatalytic degradation decreases below the photocatalytic efficiency. Negative synergy values were obtained (sy = −50%), indicating the inferior energetic boundary condition for highly suspended reaction solutions. Thus, it is concluded that the low-intensity sonication is not able to interact with the sonophotocatalytic reaction area, since the UV irradiation takes place only in the first 2.5 mm beneath the LED system. Furthermore, an effective circulation of the high-turbidity suspension cannot be ensured to enhance the photocatalytic degradation, as it can with the higher energy inputs.
Based on the results of the low- and high-dosage suspended sonophotocatalysis, it can be argued that (i) the enhancing cavitation effects approach a limit, and further increasing the energy does not lead to further enhancement; and/or (ii) the cavitation itself is immaterial, and the synergistic effects are mainly achieved by mixing, which exceeds its maximum effective level at AUS 76.5 µm.
Comparing Figure 5b,c, a combination of (i) and (ii) can be suggested, since synergistic effects caused by ultrasound were proved beyond the pure mechanical stirring effects. Because the largest difference between stirred photocatalysis and sonophotocatalysis occurs at 0.025 g/L TiO2, it can be presumed that ultrasound can be used effectively for low-dosage suspensions, even at low ultrasonic energy inputs (Figure 5b). This is underlined by the corresponding maximum synergy values of 200-300% (Figure 5c). However, with increasing TiO2 dosage, the circulation of the reaction medium acquires more influence on the degradation, while the cavitation effects make only a diminished contribution. This can be seen especially for the TiO2 dosages of 0.3 and 0.5 g/L, where the sonophotocatalysis is less effective than the stirred photocatalysis for AUS 38.25 µm (25%max), contrary to AUS 76.5 µm (50%max) and 114.75 µm (75%max) (Figure 5a).
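The synergy values discussed above appear consistent with a common definition in sonophotocatalysis: the relative excess of the combined rate constant over the sum of the individual rate constants, which becomes negative when the combination underperforms. The exact formula used in this work is not restated here, so the following is only a hedged sketch; the rate constant for AUS 38.25 µm at 0.3 g/L is a hypothetical illustration, not a reported value.

```python
def synergy_percent(k_spc, k_pc, k_s):
    # relative excess of the combined sonophotocatalytic rate constant
    # over the sum of photocatalysis and sonolysis alone, in percent;
    # negative values mean the combination underperforms the sum
    base = k_pc + k_s
    return 100.0 * (k_spc - base) / base

# Reported values at 0.3 g/L TiO2: k_SPC = 1.349 min^-1 (AUS 76.5 µm),
# stirred photocatalysis k = 0.450 min^-1, sonolysis k = 0.091 min^-1
sy_high = synergy_percent(1.349, 0.450, 0.091)

# Hypothetical k_SPC for AUS 38.25 µm at high dosage, chosen only to
# illustrate how a negative synergy (sy ≈ -50%) can arise
sy_negative = synergy_percent(0.27, 0.450, 0.091)
```

With the reported 0.3 g/L values, this definition yields a synergy of roughly 150%, which falls inside the ~125-150% range given above.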
According to the degradation behavior, the negative synergies signify that low-intensity sonication is not appropriate for highly suspended reaction conditions. Nevertheless, with more intensive sonication, positive synergies of 125-150% can be found for 0.3 g/L TiO2. At the same conditions, the overall degradation efficiency reaches its maximum and indicates the optimized reaction conditions, although the synergy value is minor compared to 0.025 g/L TiO2. Thus, a clear distinction must be made between degradation potential and synergy when evaluating sonophotocatalytic processes.
Energy Assessment
For a target value of 80% degradation, based on the Swiss environmental law for upgrading municipal wastewater-treatment plants [41], the required reaction times are presented in Figure 5d and were calculated from the kinetic constants and the pseudo-first-order kinetics by the following equation:

t80% = −ln(1 − 0.8)/k = ln(5)/k

Including the energy consumption, the efficiency of the sonophotocatalysis is compared to that of the photocatalysis with regard to the highest degradation rates of the low- and high-suspension range, with TiO2 dosages of 0.025 g/L and 0.3 g/L (Figure 6a,b). For the low-dosage suspension, the degradation time was reduced by 76-83%, from 10.5 min to 1.9-2.6 min, with sonication and led to synergistic effects of 202-295%. It can be noticed that no additional energy is required (Figure 6a). Thus, the degradation time of bisphenol A was effectively reduced without any increase in energy consumption. However, for the higher suspension, additional energy consumption was required to achieve synergistic effects of 154% (AUS 76.5 µm) and 124% (AUS 114.75 µm), reducing the treatment time by 66% and 63%, respectively (Figure 6b). Due to the similar degradation results, the optimized sonophotocatalytic process is found at AUS 76.5 µm with 55% extra energy (0.009 vs. 0.014 kWh/L) compared to the stirred photocatalysis.
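The treatment-time and energy figures above follow directly from the pseudo-first-order law; a small sketch (the power and volume values in the example are assumptions for illustration, not values taken from the paper):

```python
import math

def t_for_degradation(k_per_min, target=0.80):
    # pseudo-first-order: c(t)/c0 = exp(-k*t) → time to reach `target`
    # fractional degradation; for 80% this reduces to ln(5)/k
    return -math.log(1.0 - target) / k_per_min

def energy_per_litre(power_w, minutes, volume_l):
    # electrical energy demand in kWh per litre treated
    return power_w * (minutes / 60.0) / 1000.0 / volume_l

# Reported sonophotocatalytic rate constants at 0.025 g/L TiO2 (min^-1)
t_fast = t_for_degradation(0.857)   # ≈ 1.9 min (AUS 114.75 µm)
t_slow = t_for_degradation(0.629)   # ≈ 2.6 min (AUS 38.25 µm)

# Assumed power (W) and volume (L), purely illustrative
e_demo = energy_per_litre(100.0, 3.0, 0.5)
```

Note that ln(5)/k reproduces the 1.9-2.6 min window reported for the low-dosage sonophotocatalysis, and ~10.5 min for a photocatalytic rate constant of about 0.153 min−1.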
Materials
The reagent bisphenol A (>97%) was purchased from Alfa Aesar (Karlsruhe, Germany). Titanium(IV)dioxide nanopowder (P25-Degussa) with a primary particle size of 21 nm was purchased from Sigma-Aldrich (Steinheim, Germany). Methanol and acetonitrile were purchased from VWR in HPLC grade. Luminol (97%) and sodium hydroxide were purchased from Sigma-Aldrich (Steinheim, Germany). All chemicals were used without further purification.
Setup and Reactor Concept
The batch reactor contained a tip sonotrode (Bandelin electronic GmbH & Co. KG, Berlin, Germany, Sonopuls GM3200 with generator UW 2200 and SH213G and VS/70T, fUS = 20 kHz, A = 1.3 cm2, PA,max = 95 W/cm2) or a magnetic stirrer, and a UV-LED system (Epigap Optronic GmbH, Berlin, Germany, λ = 300 nm, PA,max = 52 mW/cm2) (Figure 7). The UV system was placed on top of the reactor. The tip sonotrode was immersed 1 cm into the sonicated medium at an angle of 40 degrees to apply homogenization and cavitation effects in the expected sonophotocatalytic reaction area beneath the UV system.
Mapping of Ultrasound-Induced Bubble Fields
For imaging the bubble fields, a Panasonic Lumix G81 with a Sigma Contemporary F1.4/16 mm lens was used. Instead of the UV-LED system, an ordinary LED panel was implemented to illuminate the inner reactor section. The photo was taken through the reactor wall, prepared with an acrylic glass plate, perpendicular to the illumination to avoid interference with the LED panel. To achieve suitable images, a fast exposure time and a suitable aperture (shutter priority) were set. Afterwards, the images were edited (exposure compensation −2.5) to level the background and bring out the bubble fields.
Mapping of Oxidative Species by Chemiluminescence
For imaging the oxidative cavitation zones of the sonotrode, a Panasonic Lumix G81 with a Sigma Contemporary F1.4/16 mm lens was used. A solution of 170 mg/L luminol and 150 mg/L sodium hydroxide was prepared for the chemiluminescence experiments. The luminol images were taken in complete darkness, from a constant distance and with constant settings (f/5.6, 128 s, ISO 400). With the free image software "ImageJ", the blue color components were isolated from the original luminol images (RGB type) and transformed using lookup tables. In the selected lookup table "Royal", every blue tone (256 color gradations) was replaced by shaded RAL colors, resulting in a heat map based on the blue intensity.
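The recoloring step can be approximated in a few lines: isolate the blue channel of each pixel and quantize it into discrete heat levels. This is only a simplified stand-in for ImageJ's actual "Royal" lookup table, which maps all 256 blue gradations to shaded colors.

```python
def blue_heatmap(rgb_image, n_levels=8):
    # Map each pixel's blue value (0-255) to one of n_levels discrete
    # heat levels, mimicking a lookup-table recoloring of the luminol
    # images; rgb_image is a nested list of (r, g, b) tuples.
    out = []
    for row in rgb_image:
        out.append([min(b * n_levels // 256, n_levels - 1)
                    for (_, _, b) in row])
    return out
```

Applying this to a tiny synthetic image, a fully blue pixel (b = 255) lands in the hottest level and a dark pixel in the coldest.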
Measuring the Cavitation Noise via Hydrophone
Hydrophonic measurements were carried out with a Reson TC4043 hydrophone. The software tool from ELMA Schmidbauer GmbH (KaviMeter V5.12.20, Elma GmbH & Co. KG, Singen, Germany, 2016) was used to record the cavitation noise level dBcnl. For this, the reactor section below the light source was divided into 4 layers (each spaced 0.5 mm from the reactor bottom) with 9 measuring points per layer, creating a three-dimensional grid. Each of the 36 points was obtained as the arithmetic mean of 10 single measurements.
Estimation of Light Absorption by TiO 2 Nanoparticles
The light absorption of the TiO2 catalyst suspensions (0-0.5 g/L) was measured with a photometer (Newport Optical Power Meter Model 1916-C with optical detector Model 818-UV-L, Irvine, CA, USA). The UV-LED system was placed on an assembly (4 × 4 cm2) of variable height (0.25-3 cm) containing the suspended nanoparticles. The detector was placed at the bottom of this assembly beneath a quartz glass sheet.
Sono(photo)catalytic Degradation Experiments of Bisphenol A
All degradation experiments were carried out in a batch system with an aqueous bisphenol A solution. For catalytic processes, a specific amount of TiO2 (mTiO2 = 0.01-0.5 g/L) was suspended in distilled water and homogenized in an ultrasonic bath (EMAG, Emmi-H60, 40 kHz, 180 W) before adding the bisphenol A to obtain an initial concentration of cBPA = 1 µM. After filling the reactor, the reaction volume was stirred under dark conditions to reach an adsorption equilibrium (<8%max) (Table 1). The temperature was held between 20 and 25 °C, depending on the energy input of the ultrasound. Samples were taken with a needle for up to 5 min of reaction time and were immediately centrifuged (Hettich Universal 320 R, 13,500 rpm, 2 × 5 min). The quantification of bisphenol A was monitored by HPLC measurements. The kinetic degradation constants were calculated with the pseudo-first-order kinetics postulated for advanced oxidation processes by the following equation:

ln(c(t)/c(0)) = −k·t
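Given the sampled concentrations, k follows from a least-squares fit of ln(c(t)/c(0)) against t, e.g.:

```python
import math

def fit_first_order_k(times, concentrations):
    # pseudo-first-order kinetics: ln(c(t)/c(0)) = -k*t
    # → k is the negative least-squares slope of ln(c/c0) vs. t
    c0 = concentrations[0]
    ys = [math.log(c / c0) for c in concentrations]
    n = len(times)
    mt = sum(times) / n
    my = sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times, ys))
             / sum((t - mt) ** 2 for t in times))
    return -slope
```

For an ideal first-order decay, the fit recovers the rate constant exactly; with real HPLC data, the residuals of the fit also indicate how well the pseudo-first-order assumption holds.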
Quantification of Bisphenol A with HPLC/FD
Quantitative analysis of bisphenol A was carried out with an HPLC system (JASCO, 2000 series) consisting of an AS-2055 Plus autosampler, two PU-2080 Plus pumps, a 2060 Plus oven, a DG-2080/53 degasser, and an FP-2020 Plus fluorescence detector. A reversed-phase C18 column (Dr. Maisch, 250 mm × 5 mm) was used. An isocratic mobile phase of acetonitrile and water (65:35) with a flow rate of 1.5 mL/min was set, and the oven temperature was held constant at 40 °C. The excitation and emission wavelengths were 275 nm and 305 nm, respectively. An injection volume of 10 µL with a 100 µL sample loop was selected.
Conclusions
In this work, the ultrasonic intensification of a heterogeneous photocatalysis with suspended TiO2 nanoparticles via low-frequency ultrasound (20 kHz) was investigated, proved, and evaluated in a batch system. Synergy values of up to 200-300%, depending on the energy input of the ultrasound transducer, were obtained in low-suspended (0.025 g/L TiO2) reaction systems. Increasing the catalyst dosage leads to a decrease in the overall synergy once the photocatalytic degradation reaches its maximum. When a TiO2 threshold is exceeded, a certain amount of ultrasonic energy must be applied to continue generating positive synergistic effects. However, it was shown that the synergy alone does not indicate optimized degradation conditions: despite the decreasing synergy values, the highest degradation rates were obtained with 0.3 g/L TiO2 at a synergy of 100%. Furthermore, it was shown that the sonophotocatalytic degradation cannot be maximized indefinitely by applying ever more energy; rather, there is an effective energy-input limit. Thus, an economic optimization can be derived in connection with process intensification. Aiming for a degradation degree of 80%, a reduction in degradation time of 83% was achieved in low-suspended reaction systems with high synergy values at equal energy consumption compared to the photocatalysis. In high-suspended reaction systems with minor synergy values, a disproportionately large reduction in degradation time relative to the energy consumption was achieved.
The Atacama toad (Rhinella atacamensis) exhibits an unusual clinal pattern of decreasing body size towards more arid environments
Background The causes of geographic variation of body size in ectotherms have generally been attributed to environmental variables. Research in amphibians has favored mechanisms that involve water availability as an explanation for the geographic variation of body size. However, there are few studies at the intraspecific level on amphibians that inhabit desert or semi-desert environments, where hydric restrictions are stronger. Here, we describe and inquire into the causes of the geographic variation of body size in the semi-desert toad Rhinella atacamensis, a terrestrial anuran that is distributed over 750 km along a latitudinal aridity gradient from the southern extreme of the Atacama Desert to the Mediterranean region of central Chile. We measured the snout-vent length of 315 adults from 11 localities representative of the entire distribution of the species. Then, using an information-theoretic approach, we evaluated whether the data support eight ecogeographic hypotheses proposed in the literature. Results Rhinella atacamensis exhibits a gradual decrease in adult body size towards the north of its distribution, where the climate is more arid, which conforms to a Bergmann's cline. The best model showed that the data support mean annual precipitation as the predictor of body size, favoring the converse water availability hypothesis. Conclusions Most studies in amphibians show that adult size increases in arid environments, but we found the converse of the pattern expected under the hydric constraints imposed by this type of environment. The evidence in R. atacamensis favors the converse water availability hypothesis, whose mechanism proposes that the foraging activity determined by the precipitation gradient has produced the clinal pattern of body size variation. The variation of this trait could also be affected by the decreasing productivity that exists towards the north of the species distribution.
In addition, we found evidence that both pattern and mechanism are independent of sex. Lastly, we suggest that behavioral traits, such as nocturnal habits, might also play an important role determining this differential response to aridity. Therefore, the support for the converse water availability hypothesis found in this study shows that amphibians can respond in different ways to water restrictions imposed by arid environments. Supplementary Information The online version contains supplementary material available at 10.1186/s40850-021-00090-w.
identified as ecogeographic rules [3,4]. Among these, the relation between body size and environmental variables stands out [5], since body size is a trait strongly linked to the ecology and evolution of organisms [6]. The most studied generalization about geographic variation of body size is Bergmann's rule [7][8][9][10][11][12][13][14]. This rule proposes that, among closely related endotherms, body size increases with lower temperatures since animals with larger body size have less heat exchange with the environment (lower area-volume ratio), and thus will be able to conserve heat in colder areas [7] (translation in supplementary material of [15]). However, latitude has traditionally been the predictor of body size due to its high correlation with temperature at large scales [16,17].
Distinction between pattern and mechanism is key when dealing with ecogeographic rules [16,18]. In the case of the Bergmann's rule, it is necessary that temperature and its conservation mechanism be identified in the study system. When the pattern of increase in body size with latitude is present, but the mechanism is different to heat conservation, it has been proposed to use the term Bergmann's size cline instead [19]. Thus, several mechanisms have been proposed to explain this and other geographic patterns of body size in ectotherms, among which the most frequent are related to heat balance [20,21], water availability [22,23], resource availability [24,25], environment seasonality [26] and life-history attributes [27]. This variety of proposed mechanisms highlights the difficulties associated with providing conclusive evidence for any of them [28].
The study of these mechanisms in amphibians has favored explanations mainly related to water availability [22,23,28-30], given their strong dependence on water due to the permeability of their skin [31]. This has led to the formulation and evaluation of mechanisms that predict different body size variation patterns related to hydric constraints (Table 1). One important contribution in this context was to consider that thermal and water balance are related (e.g. water loss from the surface leads to a simultaneous heat loss). This relationship suggests that a better descriptor of body size would involve a measure of water loss through the skin, as well as the energy present in the environment (potential evapotranspiration), instead of temperature or water availability by themselves [29]. However, these hypotheses have seldom been evaluated formally at the intraspecific level in species distributed along aridity gradients.

Table 1 Hypotheses proposed in the literature to explain the geographic variation of body size in amphibians. Along with the predictor variables, the corresponding bioclimatic variables used in the information-theoretic approach are shown in parentheses.

Hypothesis | Predictor variable | Predicted relation with body size | Mechanism(s) (key references)
Water availability | Mean annual precipitation (BIO12) | Negative | A lower area-volume relation given by greater body size will produce less surface for water loss ([22])
Converse water availability | Mean annual precipitation (BIO12) | Positive | Amphibian activity is strongly related to high water availability and humid periods, allowing more foraging time that promotes greater body size in areas with more precipitation
Other important factors that influence geographic tendencies of body size in amphibians are their geographic context and habitat preferences [29,[32][33][34]. For example, amphibians from desert or semi-desert environments will likely respond to the pressures imposed by aridity and large daily temperature variation with morphological and/or behavioral adaptations [35]. By contrast, in lineages of aquatic amphibians it is expected that the effect of water availability on body size will be irrelevant [29]. Thus, amphibians of desert and semi-desert environments offer the opportunity to study the effects of water availability on the geographic variation of body size. Nevertheless, there are still few intraspecific studies that have used species that inhabit this kind of environment as models [22,36,37] and none have explicitly evaluated ecogeographic hypotheses.
The Atacama Desert, located in the extreme north of Chile, is considered one of the driest places on the planet [38]. Although it is postulated that the extreme aridity of this desert originated in the late Miocene [39], its current climatic conditions would have been established during the Plio-Pleistocene [40]. The only amphibian that has colonized the extreme south of the Atacama Desert is the Atacama toad (Rhinella atacamensis), a terrestrial species endemic to the semi-desert zone of Chile. Its distribution extends latitudinally over more than 750 km from the desert (25°S) to the Mediterranean zone of the center of the country (32°S), in a climatic transition zone in which precipitation (means of the localities of the extreme north and south vary between 14 and 228 mm/year; Fig. 1A) and productivity gradually increase southwards, while seasonality is accentuated [41,42]. In much of its distribution it is sympatric with Pleurodema thaul, but only R. atacamensis is distributed in the most arid part of this range (25-27°S), hence it is considered a true desert inhabitant [43][44][45]. It was originally thought that this species was restricted to a few isolated localities between Paposo (Antofagasta Region) and the Huasco River (Atacama Region) (25-29°S), where it lives closely associated with water systems (some of which have a very small extension), but subsequently its distribution was considerably expanded to the south (~32°S, Coquimbo Region; reviewed in [45]).

Fig. 1 (caption): Colors represent different levels of precipitation (mean annual precipitation) and the dashed line delimits the approximate distribution of the species. B Geographic variation of body size (mean and standard deviation of SVL of each locality) as a function of latitude (R2 = 0.91). The photographs show the differences in size of males and females of three representative localities (numbered according to Table 2). Black bars indicate 10 cm. The map is own elaboration.
Rhinella atacamensis shows notable geographic variation in body size and color pattern [44-48]. Almost 60 years ago, the first studies of the populations in the extreme north of its range revealed differences among them in coloration, in sexual dimorphism (less evident in the Paposo population), and in body size [43,44]. The degree of intraspecific variation in coloration and body size is even greater when populations in the south of the distribution are considered (Coquimbo Region, ~30-32°S [47,51]). The adults of the southern populations are larger and have different dorsal coloration ([46]; represented by individuals of locality 9 in Fig. 1B) than those of the extreme north, different enough for Cei [43,44] to consider them a different species (R. arunco) in his seminal studies of the genus Bufo (now Rhinella) in Chile. Nowadays, the taxonomic status of the species is clear [50] and its geographic distribution with respect to its sister species (R. arunco) is better known [45], but the amount and form of the variation of body size (e.g. clinal pattern or discrete groups) had not been studied across its entire distribution.
The high level of phenotypic variation among the populations of R. atacamensis, which are distributed along an extensive aridity gradient, offers an interesting opportunity to study the causes of intraspecific body size variation in amphibians. Firstly, geographic variation of body size in this species is described through its entire distribution range. Then, using data from representative localities and an information-theoretic approach, we evaluate the predictions derived from the hypotheses related to water availability as the principal mechanism. However, since body size is a complex trait and its geographic variation may be influenced by multiple factors [1], hypotheses involving temperature, resource availability and seasonality (Table 1) are also considered. We hypothesize that precipitation will be the main predictor of body size in the Atacama toad, considering that this factor becomes limiting towards the north of its current distribution (Fig. 1A) and the antiquity of the aridity gradient where this species is distributed.
Results
The overall mean snout-vent length (SVL) of the 315 adult individuals (190 males, 125 females) was 84 mm (S.D. = 13.2 mm); mean SVL was 83.2 mm (± 13.9 mm) in males and 85.1 mm (± 12 mm) in females, which were not significantly different (W = 10.382; p = 0.0591). Student's t-tests showed differences between male and female SVL in five localities, where females were larger (Table 1 in Additional file 2). The smallest mean SVL for males and females was found in Las Breas (65.4 and 67.8 mm, respectively) and the largest in Canela Alta (102.5 and 109.1 mm, respectively; Table 1 in Additional file 2). It should be noted that there was no overlap in the ranges measured in these two localities, and only a small overlap in the ranges of the extreme localities (Paposo and Palquial) (Table 2). In all localities (except Mostazal) the smallest individuals were males, while the largest were always females (except Paposo; Table 1 in Additional file 2). Model II regression did not show a significant departure from the isometric slope (Fig. 2; slope = 0.979, p = 0.725), indicating that the degree of sexual dimorphism does not change with body size. Therefore, males and females show a similar pattern of decrease of body size with latitude (Table 1 in Additional file 2). Considering this, and because the aim was to assess the pattern of variation in body size at the species level regardless of sex, Fig. 1B shows the relationship between mean body size (both sexes pooled) by locality and latitude.
A clear pattern of decrease in SVL was observed northwards in the species distribution (R² = 0.91, slope = 4.92), consistent with a Bergmann's body size cline (Fig. 1B). According to Moran's I, body size was positively autocorrelated at short distances and negatively at long distances, both with sexes pooled by locality (Fig. 3) and with each sex considered separately (Figs. 1 and 2 in Additional file 2), which indicates that only nearby populations had similar body sizes. This is concordant with the clinal pattern of geographic variation observed in Fig. 1B.
The best model ranked by AICc for all data by locality (Table 3), and the best one for each sex considered separately (Table 1 in Additional file 2), included only mean annual precipitation (BIO12) as a predictor. In each of these analyses, mean annual precipitation alone explains over 73% of the body size variation (Table 3 and Table 2 in Additional file 2). The residuals of these models were not spatially autocorrelated (Monte Carlo permutation test, p > 0.05), indicating that spatial autocorrelation does not bias our results.
In addition, the second-best models for the pooled and sex-separated data by locality included BIO12 together with potential evapotranspiration (PET) (Table 3 and Table 2 in Additional file 2). Therefore, the main bioclimatic variable explaining the variation in body size of R. atacamensis, regardless of sex, is precipitation (Fig. 1 in Additional file 2). In fact, the linear regression in Fig. 4 shows that precipitation explains a substantial part of body size variation in the species (R² = 0.76, p < 0.01).
In general, very similar results were obtained with the data pooled or separated by sex, so only those obtained with all the data are described below. The first and second models accounted for a substantial share of the Akaike weight (0.39 and 0.30, respectively), which indicates that both models receive similar support from the data [55]. According to evidence ratios, the best model was 1.32 and 4.73 times more probable than the second and third highest ranked models, respectively. Considering the independent contribution of each environmental variable, mean annual precipitation explained 42% of the body size variability of the Atacama toad, followed by NDVI with 31.2% (Fig. 5). All results of the analyses for each sex are shown in Additional file 2.
Discussion
Previous inter- and intraspecific studies on geographic variation in amphibians have shown that they respond to water restrictions imposed by the driest habitats by increasing body size [22,28,29,34,36,37,55,56]. In contrast, our results show a pattern of decreasing body size in R. atacamensis towards the north of its distribution, as the environment becomes more arid (Fig. 1A). Moreover, this pattern fits a Bergmann's size cline (Fig. 1B). Therefore, we corroborate the observations of Cei [43,44] in the northern part of the species distribution, where a clinal increase with latitude was described, and we show that this pattern extends throughout its entire distribution and is directly associated with precipitation (Fig. 4 and Fig. 1 in Additional file 2). The presence of this variable in the best models (Table 3; see results separated by sex in Table 1 in Additional file 2) and its high independent effect (Fig. 5) favor the converse water availability hypothesis (Table 1). Moreover, the results show an isometric pattern of sexual dimorphism (no evidence of Rensch's rule [54], Fig. 2) and that both males and females have responded identically to the decrease of precipitation northwards of the species distribution (Fig. 1 in Additional file 2).
Considering the biogeographic scenario of R. atacamensis, the activity of this species towards the north of its distribution would be lower due to the increase in aridity, negatively impacting its foraging time and resulting in smaller body sizes.

[Table 3. Linear regression models of climate variables on body size of adult males and females of Rhinella atacamensis (mean snout-vent length by locality), ranked by AICc from best to worst. Only models with small differences in AICc relative to the best model (Δi < 7) are shown. Columns: Model (predictor variables with the sign of their regression coefficients), adjusted R², number of estimated parameters (K), AICc, delta AICc (Δi) and Akaike weights (AICw). The environmental variables were mean annual temperature (BIO1), temperature seasonality (BIO4), mean annual precipitation (BIO12), normalized difference vegetation index (NDVI) and potential evapotranspiration (PET).]

Experimental evidence in ectothermic vertebrates points in the same direction. Foraging activity is limited under conditions of lower environmental humidity [57], as well as less efficient [58], which in turn affects net energy gain and leads to a low growth rate [59]. In fact, reduced foraging efficiency and activity would explain the dwarfism of two species of terrestrial toads that inhabit sandy substrates [60]. Furthermore, models predict that when growth rate is reduced by a decrease in food quality, body size also decreases [61]. For example, limited activity and foraging opportunities in Pelobates cultripes resulted in a lower growth rate and smaller body sizes [62]. Similar patterns have been observed in a snake species that inhabits arid regions [63], but in that case the explanation has focused on food availability rather than foraging ability. Both explanations are related and, although they involve different underlying mechanisms (Table 1), both may be affecting the Atacama toad in a non-exclusive way. In this species, NDVI is positively related to body size, it has the second largest independent effect (Fig. 5) and it is included in the third best model of the AICc ranking (Table 3). Considering this variable, the decrease in foraging activity in the Atacama toad could be driven simultaneously by a reduction in foraging area and/or food supply. The preponderance of precipitation as the main predictor of body size in R. atacamensis is consistent with previous intraspecific studies of amphibians that inhabit arid regions and precipitation gradients [22,34,36,37,55,56]. However, a converse pattern of body size variation, like that exhibited by the Atacama toad, has rarely been described in amphibians.
Interestingly, one of the few examples comes from a co-distributed species (from 27°S to the south), Pleurodema thaul [64]. In that study, the pattern was explained by arguing that higher minimum temperatures and lower precipitation towards the north of its distribution would have reduced hydroperiods, resulting in small postmetamorphic sizes [64]. This mechanism seems less plausible for R. atacamensis, because temperature was not an important bioclimatic variable for explaining its variation in body size (BIO1 in Table 3 and Fig. 5), but it could be a non-exclusive explanation. However, the parallel pattern in these co-distributed species provides an important opportunity to investigate, with a common-garden design, the ultimate causes of body size variation in both species [65]. In addition, parallel patterns in sympatric populations of different species suggest that these clines may be adaptive [66] and that similar processes could be producing them.
The converse pattern described in the present study suggests that the response does not directly involve water economy (i.e. water availability and conservation), as would be expected under an aridity gradient. Thus, other ecological processes could be affecting body size variation, or could even be more important than the effect of water conservation [67]. Although very little is known about the natural history of R. atacamensis, some aspects of its habitat and behavior could be important in this context. Populations in the northern part of the species' distribution (north of 29°30'S) inhabit mainly isolated streams with permanent flow [45] and show associated behaviors such as hiding under rocks in running water or near the stream edges [44]. The species also has nocturnal habits [68], which allows it to avoid the greater dehydration rates produced by diurnal temperatures [35]. Such behavioral buffering could explain the lack of evidence in favor of the hypotheses related to temperature and water economy in R. atacamensis (Table 3).
Data collected in the present study allowed us to evaluate sex differences across the entire distribution of R. atacamensis. Although this was not the principal aim of the study, we were able to reevaluate some conclusions from the seminal studies of Cei [43,44] and to compare them with newer studies [68]. For instance, the sexual dimorphism skewed towards females found in Llanos de Challe (28°S [68]) was confirmed. However, the pattern of sexual dimorphism is isometric when populations across the entire distribution are compared (Fig. 2). The differences in sexual dimorphism between localities could reflect different processes occurring at the microhabitat level [69,70] or could be due to the small sample sizes of some localities (see Table 1 in Additional file 2). To evaluate this possibility at different spatial scales, we recommend using substantially larger samples of both sexes and carrying out field studies such as that of Pincheira-Donoso et al. [68] in other localities. We highlight that, even with differences between some localities (Table 1 in Additional file 2), the pattern of variation in body size along the precipitation gradient was similar in both sexes (Fig. 1 in Additional file 2) and the same ecogeographic hypothesis explained the pattern regardless of sex (Table 1 in Additional file 2).
Although correlations and the explicit evaluation of multiple hypotheses are useful to identify the environmental factors that may modulate the variation of traits such as body size, experimental studies are required to determine the underlying mechanisms and to evaluate directly the genetic component of geographic variation [65]. However, the historical persistence of the aridity gradient, directly linked to the antiquity of the Atacama Desert, the parallel clinal pattern exhibited by P. thaul [64], and the ancestral distribution of R. atacamensis inferred from the distribution of its sister species R. arunco, which replaces it to the south (~32-38°S [46,47]), suggest that the body size cline of R. atacamensis would have been an adaptive response to more arid conditions as its populations expanded further north. In fact, the current distribution ranges of both sister species allow us to assign a spatial direction to the process of body size reduction in R. atacamensis, but the time frame in which this process occurred is unknown.
Conclusion
We described an intraspecific clinal pattern of geographic variation in body size contrary to that expected from the literature on amphibians distributed in desert or semi-desert environments. This is the clearest example of this type of cline (i.e., a Bergmann's size cline) described so far in amphibians, as well as the only case in which the converse water availability hypothesis is favored; notably, these results are independent of sex.
Moreover, this is the first study of amphibians that inhabit desert and/or semi-desert environments in which the putative mechanisms (i.e., ecogeographic hypotheses) were explicitly evaluated within a framework of multiple competing hypotheses. Hence, the converse water availability hypothesis emerges as an alternative to the water availability hypothesis, showing that amphibians can respond in different ways to cope with the water restrictions imposed by arid environments.
Sampling
The SVL of 315 adults of R. atacamensis from 11 localities representative of its entire distribution was measured (Fig. 1A). Most measurements were made in situ by the same person (individuals were measured, photographed, and released at the capture sites), but specimens from the DBGUCH (Universidad de Chile) and MZUC (Universidad de Concepción) collections were also included. Measurements were made with a digital caliper with 0.01 mm precision and then rounded to one decimal place. The field campaigns were performed during the reproductive season, which takes place over a few weeks between August and November, depending on the locality (C. Correa and M. Méndez, personal observations). Searches for individuals generally began a few minutes before sunset (approximately 19:30 h) and lasted until midnight. The southern limit of the distribution of R. atacamensis is not clear, since around 32°S there is a zone of hybridization with its sister species R. arunco [49]; sampling was therefore extended only to the Choapa River watershed (Palquial) so as to include only pure populations of R. atacamensis. In each locality, individuals were sampled within the same stream system over a maximum distance of 4 km (Palquial), except in Llanos de Challe, where we included a few individuals from another site 22.5 km to the east (Canto del Agua) located in the same watershed. The sampled localities are shown on the map in Fig. 1 within a dashed line that represents the approximate distribution range of the species. This map was prepared by the authors using the QGIS program [71]. The sex and maturity of the individuals were determined using external characters and the presence (males) or absence of vocal activity. Adult males have nuptial pads on fingers one and two of the forelimbs, and generally have a yellowish background color and smooth skin.
Adult females generally have a whitish color with marked dark patches, skin with small spines in the dorsal area, and a more robust build [43,44]. Data used in this study are shown in Additional file 1.
Statistical analyses
Data normality was tested by sex, by locality, and for each sex within each locality with Shapiro-Wilk tests. Then, because the data of males and females were not normally distributed (Shapiro-Wilk for males: W = 0.941, p < 0.05; females: W = 0.964, p < 0.05), we examined sexual dimorphism across all samples using Mann-Whitney U tests. Differences in SVL between males and females within each locality were evaluated with Student's t-tests. In addition, to evaluate how the degree of sexual dimorphism varies with body size, a major axis regression (model II) was performed by fitting the log10 of mean body size of males and females [72]. The resulting slope was compared to one, which represents the null hypothesis of isometry. The model II regression was performed with the smatr package [73]. This analysis also allowed us to evaluate another ecogeographic rule, Rensch's rule [54].
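As a sketch of the model II isometry test (the SVL values below are hypothetical, not the study's data), the major-axis slope can be computed directly from sums of squares and cross-products:

```python
import math

def major_axis_slope(x, y):
    """Major-axis (model II) regression slope.

    Unlike ordinary least squares, major-axis regression minimises
    perpendicular distances, appropriate when both variables (here,
    log10 body sizes of males and females) carry error.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    d = syy - sxx
    return (d + math.sqrt(d * d + 4 * sxy * sxy)) / (2 * sxy)

# Hypothetical mean SVL (mm) per locality -- NOT the study's data.
male_svl = [65.4, 72.0, 78.5, 83.2, 90.1, 96.0, 102.5]
female_svl = [67.8, 74.1, 80.2, 85.1, 92.3, 98.7, 109.1]
slope = major_axis_slope([math.log10(v) for v in male_svl],
                         [math.log10(v) for v in female_svl])
# A slope indistinguishable from 1 indicates isometry (no Rensch's rule).
```

A formal test, as implemented in smatr, compares the slope's confidence interval with 1 rather than inspecting the point estimate alone.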
Mean, minimum and maximum SVL for each locality were highly correlated in all pairwise comparisons (Pearson's r > 0.89, p < 0.001), so only the mean SVL was used in the analyses. This allows comparison with studies of other anuran species, since mean SVL has often been used in studies of intraspecific geographic variation in body size (e.g. [52,74]). Only the data from the locality of Los Pajaritos were not normally distributed (W = 0.932, p = 0.04). Then, a linear model of mean SVL per locality against latitude was fitted to evaluate the form and magnitude of the geographic variation of body size in R. atacamensis.
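The latitudinal cline is an ordinary least-squares fit of mean SVL on latitude; a minimal sketch with hypothetical locality values (the study itself reports slope ≈ 4.92 and R² = 0.91):

```python
def ols_fit(x, y):
    """Ordinary least-squares slope, intercept and R^2 for y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical (latitude in °S, mean SVL in mm) pairs -- NOT the
# study's 11 localities.
lat = [25.0, 26.5, 27.8, 29.0, 30.2, 31.1, 31.9]
svl = [66.0, 73.5, 80.0, 84.0, 92.0, 99.0, 101.0]
slope, intercept, r2 = ols_fit(lat, svl)  # positive slope: larger toads southwards
```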
Environmental variables and hypotheses testing
Geographic coordinates of each locality were used to obtain the climatic variables from the climate surfaces constructed by [75]. These surfaces were constructed with monthly temperatures and precipitation from 1950 to 2000 and are available at a spatial resolution of 1 km². The relationship between the environmental variables and body size was analyzed to determine which variable best explains the geographic variation of body size, as shown in Table 1. The Normalized Difference Vegetation Index (NDVI) provides values that are highly correlated with photosynthetic mass and primary productivity [76]. The NDVI data, available at a spatial resolution of 30 arc-seconds, were downloaded from [77], and the maximum NDVI for each locality was obtained. Potential evapotranspiration (PET) was obtained from the CGIAR-CSI Soil-Water Balance Database [78], following the proposal of [29]. The package raster 3.4.5 [79] was used to extract the values of the climate variables.
We used an information-theoretic approach [80] to identify the potential mechanisms that have produced the geographic variation of body size in R. atacamensis. For this, the bioclimatic data, NDVI and PET were used as predictor variables, generating 32 candidate linear regression models (simple, or multiple when more than one predictor was included) covering the six hypotheses (Table 3) and all possible combinations of bioclimatic variables (excluding interactions). The models were evaluated using the Akaike Information Criterion corrected for small sample sizes (AICc [81]), comparing the AICc value of each model with the minimum AICc (∆AICc) [80]. Following the rule of thumb suggested by [81], we report models with ∆AICc < 7; models with ∆AICc < 2 have substantial support, whereas those approaching ∆AICc = 7 have considerably less support. We also used Akaike weights (AICw) to evaluate the uncertainty of each model [80]. Evidence ratios were included to compare the relative likelihood of the models (w_a/w_b, where w_a is the likelihood of model a and w_b that of model b [82]). Considering that males and females may respond differently to climatic variables (e.g. [23]), the AIC analyses were also performed by sex.
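The AICc ranking described above can be reproduced with a few lines; a sketch using hypothetical log-likelihood and AICc values (not the study's):

```python
import math

def aicc(log_lik, k, n):
    """AIC corrected for small samples: AIC + 2k(k+1)/(n-k-1)."""
    return -2 * log_lik + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

def akaike_table(aicc_values):
    """Delta AICc and Akaike weights across a candidate model set."""
    best = min(aicc_values)
    deltas = [a - best for a in aicc_values]
    rel_lik = [math.exp(-d / 2) for d in deltas]
    total = sum(rel_lik)
    return deltas, [r / total for r in rel_lik]

# Hypothetical model with log-likelihood -48.0 and K = 3 parameters,
# fitted to n = 11 localities.
example_aicc = aicc(log_lik=-48.0, k=3, n=11)

# Hypothetical AICc values for three candidate models.
deltas, weights = akaike_table([100.0, 100.6, 103.1])
evidence_ratio = weights[0] / weights[1]  # how much more probable model 1 is than model 2
```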
The relative contribution of the environmental factors to body size was assessed with a hierarchical partitioning analysis [83,84], considering the mean body size of all individuals (males and females) by locality. This analysis identifies the percentage of variation explained independently by each causal factor [83,85,86], eliminating the problems produced by multicollinearity. For this we used the package hier.part 1.0.4 [87].
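Hierarchical partitioning averages each predictor's R² increment over every order of entry into the model; a minimal sketch with hypothetical R² values for two predictors (the hier.part package generalizes this to larger sets):

```python
from itertools import permutations

def independent_effects(predictors, r2):
    """Hierarchical partitioning: the independent effect of each
    predictor is its average R^2 increment over all orderings in
    which predictors can enter the model. `r2` maps each frozenset
    of predictor names to the goodness-of-fit of that model.
    """
    effects = dict.fromkeys(predictors, 0.0)
    orderings = list(permutations(predictors))
    for order in orderings:
        included = frozenset()
        for p in order:
            effects[p] += r2[included | {p}] - r2[included]
            included = included | {p}
    return {p: v / len(orderings) for p, v in effects.items()}

# Hypothetical R^2 values for every subset of two predictors.
r2 = {
    frozenset(): 0.0,
    frozenset({"BIO12"}): 0.76,
    frozenset({"NDVI"}): 0.55,
    frozenset({"BIO12", "NDVI"}): 0.80,
}
effects = independent_effects(["BIO12", "NDVI"], r2)
# Independent effects sum to the full-model R^2, splitting the
# jointly explained (collinear) portion between the predictors.
```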
Spatial autocorrelation of body size and of the residuals of the best model was assessed using Moran's I, with a Monte Carlo permutation test (199 permutations) for significance evaluation, using the package ncf 1.2.9 [88]. Spatial correlograms were then created for eight distance classes, for each sex and for the pooled data by locality. All analyses were performed in R (version 4.0.3) [89].
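Moran's I and its Monte Carlo permutation test can be sketched as follows (the sites, sizes and weight matrix are hypothetical):

```python
import random

def morans_i(values, weights):
    """Moran's I for values observed at n sites, given an n x n
    spatial weight matrix (e.g. binary neighbour weights)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(sum(row) for row in weights)
    return (n / w_sum) * (num / den)

def permutation_test(values, weights, n_perm=199, seed=1):
    """Monte Carlo significance test: shuffle values across sites
    and count permutations at least as extreme as the observed I."""
    rng = random.Random(seed)
    observed = morans_i(values, weights)
    extreme = 0
    for _ in range(n_perm):
        shuffled = values[:]
        rng.shuffle(shuffled)
        if abs(morans_i(shuffled, weights)) >= abs(observed):
            extreme += 1
    return observed, (extreme + 1) / (n_perm + 1)

# Hypothetical mean body sizes at four sites along a line, with
# binary neighbour weights (adjacent sites are neighbours).
sizes = [66.0, 80.0, 92.0, 101.0]
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
observed, p_value = permutation_test(sizes, W)  # observed > 0: nearby sites are similar
```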
Additional file 1. Raw data used in this study.
Additional file 2. Results of the analyses by sex.
Recent Advances of m6A Demethylases Inhibitors and Their Biological Functions in Human Diseases
N6-methyladenosine (m6A) is a post-transcriptional RNA modification and one of the most abundant types of RNA chemical modification. m6A functions as a molecular switch and is involved in a range of biomedical contexts, including cardiovascular diseases, the central nervous system, and cancers. Conceptually, m6A methylation can be dynamically and reversibly modulated by RNA methylation regulatory proteins, resulting in diverse fates for modified mRNAs. This review focuses on the m6A demethylases fat-mass- and obesity-associated protein (FTO) and alkB homolog 5 (ALKBH5), which erase m6A modifications from target mRNAs. Recent advances have highlighted that FTO and ALKBH5 play an oncogenic role in various cancers, such as acute myeloid leukemia (AML), glioblastoma, and breast cancer. Moreover, studies in vitro and in mouse models have confirmed that FTO-specific inhibitors exhibit anti-tumor effects in several cancers. Accumulating evidence suggests that FTO and ALKBH5 are possible therapeutic targets for specific diseases. In this review, we aim to illustrate the structural properties of these two m6A demethylases and the development of their specific inhibitors, and to summarize their biological functions in various types of cancers and other human diseases.
Introduction
Epigenetics mainly involves reversible chemical modifications of DNA, histones, and RNA, which can be inherited through cell division without changing the DNA sequence. So far, over 100 kinds of post-transcriptional RNA modifications have been defined, including N1-methyladenosine (m1A), N6-methyladenosine (m6A), and N6,2'-O-dimethyladenosine (m6Am) [1,2]. An important discovery was that m6A is the most abundant internal RNA modification across different species [1,3], and it is also present in eukaryotic mRNAs [4] and noncoding RNAs [5]. As with other epigenetic modifications, m6A methylation can be dynamically installed, removed, and recognized by the so-called "writers", "erasers", and "readers", respectively (Figure 1) [6,7]. The occurrence of m6A methylation is controlled by a core methyltransferase complex, i.e., the "writers", composed of several core proteins, including methyltransferase-like 3 and 14 (METTL3 and METTL14) and Wilms tumor 1-associated protein (WTAP) [8]. Only two m6A demethylase "erasers", fat-mass- and obesity-associated protein (FTO) [9] and alkB homolog 5 (ALKBH5) [10], have been found so far, which can specifically eliminate m6A sites from target mRNAs. The "readers", including YT521-B homology (YTH) domain family 1-3 (YTHDF1-3), YTH domain containing 1-2 (YTHDC1-2), insulin-like growth factor 2 mRNA-binding proteins (IGF2BPs, including IGF2BP1-3), and eukaryotic initiation factor 3 (EIF3), control the fate of the target mRNA by recognizing and binding to m6A sites. In recent decades, epigenetic studies have explicitly highlighted the relationship between m6A demethylases and RNA metabolism, which can affect gene expression and animal development, as well as human disease progression [6,8,9]. FTO and ALKBH5 remove m6A modifications with the assistance of the cofactors 2-oxoglutarate (2OG) and Fe2+; both belong to the AlkB subfamily of the 2OG dioxygenase superfamily [13,14].
Structurally, both FTO and ALKBH5 contain a highly conserved double-stranded β-helix (DSBH) fold (also called the jelly-roll motif), providing a scaffold for the conserved ferrous ion (HXD/E...H motif) and 2OG binding site (Figure 2A-C). Moreover, structural insights into FTO and ALKBH5 identified active-site residues that are critical to substrate-binding specificity and selectivity. These specific residues may help to design inhibitors that are selective over other AlkB family members.
We performed an extensive search of the PubMed, Google Scholar, Web of Science, and SciFinder databases to extract literature from 1974 to 2021. The keywords used included: "m6A demethylases"; "FTO"; "ALKBH5"; "inhibitors"; "crystal structure"; "human diseases"; "cancer"; and "therapy responses". Moreover, we extracted protein crystal structures from the Protein Data Bank (RCSB PDB), and the Molecular Operating Environment (MOE, 2019.0102) software was then used to generate two-dimensional (2D) and three-dimensional (3D) ligand-protein interaction diagrams. This review will discuss and compare the structural characteristics of FTO and ALKBH5 according to the recent literature, and will summarize the advances in their inhibitors and their functions in various biological processes and diseases.

[Fig. 2 caption fragment interleaved here in the source: ... is colored in red; the L1 loop in purple; 3-meT in green; N-oxalylglycine (NOG) in orange; Fe2+ in grey. (C) Crystal structure of ALKBH5 66-292 with 2OG (PDB ID: 4NRO), generated with MOE; the motif 1 and motif 2 regions are colored in dark green; the L2 loop in purple; 2OG in cyan; Mn2+ in light blue; N: N-terminus, C: C-terminus. (D,E) Detailed interactions of the active centers of FTO (PDB ID: 3LFM) and ALKBH5 (PDB ID: 4NRO), generated with MOE; NOG and 2OG are colored orange and cyan, respectively; Fe2+ and Mn2+ are drawn as grey and blue balls, respectively.]
Structures and Functions of FTO
The complete human FTO protein contains 498 amino acids, mainly composed of an N-terminal domain (NTD, residues 1-326) and a C-terminal domain (CTD, residues 327-498) (Figure 2A,B). Han et al. first reported a crystal complex of FTO∆31 and 3-methylthymidine (3-meT) (PDB ID: 3LFM), where 3-meT is a mononucleotide substrate for FTO (Figure 2B) [15]. They found an interaction between the CTD and NTD of FTO, which contributes to stabilizing the NTD and promoting its catalytic activity. Like other AlkB family members, the active center of FTO contains a conserved DSBH (β5-β12) domain located in its NTD. Structural analysis showed that two α-helices support the DSBH on one side, while the other side is covered by the L1 loop (a long loop comprising residues 213-224) (Figure 2B). Notably, structural data indicated that this L1 loop is highly conserved across FTO proteins from different species and is involved in substrate selection. Specifically, structural comparison showed that the L1 loop and the unmethylated strand of double-stranded DNA compete to bind FTO, implying that FTO prefers single-stranded nucleic acids as catalytic substrates.
Structures and Functions of ALKBH5
To date, several crystal structures of human and zebrafish ALKBH5 with different ligands have been resolved [16-19]. The full-length human ALKBH5 protein contains 394 amino acids and consists of two components: the NTD and the CTD (Figure 2A). Unlike in FTO, removal of the CTD of ALKBH5 (residues 293-394) has no significant effect on its m6A demethylase activity. In the truncated construct ALKBH5 66-292 (PDB ID: 4NJ4) reported by Aik and colleagues, the conserved DSBH core fold is constituted by eight anti-parallel β-strands forming two β-sheets: the major β-sheet and the minor β-sheet (Figure 2C) [16]. Importantly, several research groups have identified two major structural features of ALKBH5 [16-18]. The first is the "nucleotide recognition lid" (NRL). The NRL region of ALKBH5 includes two peptide stretches (named motif 1 and motif 2), which dynamically assist in recognizing nucleic acid substrates. Interestingly, while motif 1 of ALKBH5 leaves a large open space above the active center, the long motif 2 is more flexible and undergoes a conformational change after interacting with the nucleic acid substrate: once the substrate is bound, motif 2 flips up into the open area exposed by motif 1 to accommodate it. The second significant structural feature is the unique disulfide bond between residues Cys230 and Cys267. In particular, this disulfide bond restricts the conformation of the L2 loop (the counterpart of the L1 loop of FTO), thereby preventing double-stranded substrates from entering the ALKBH5 active center. Together, these structural insights demonstrate the importance of these features in maintaining the substrate specificity of ALKBH5.
Structural Comparisons of FTO and ALKBH5
Even though they share the same catalytic mechanism and DSBH active domain, FTO and ALKBH5 exhibit several differences in substrate specificity and small-molecule inhibitor selection. The identification of some conserved binding residues partially explains these functional differences. Most notably, both enzymes carry conserved nucleic acid-binding residues that exhibit different affinities and selectivities toward their substrates. Therefore, we used MOE software to generate two 3D protein-ligand interaction diagrams for the crystal complexes of FTO and ALKBH5 (PDB IDs: 3LFM and 4NRO), which display the binding information between the ligand and the protein. Creating the 3D protein-ligand interaction diagrams consisted of the following steps: (1) open the Visualization Setup panel and change the theme to "Standard White"; (2) click "Site View" to isolate the ligand and pocket in 3D; (3) press "Ribbon", "Style", and "×" to hide the protein backbone ribbon; (4) click "Render", "Atoms", and "Residue" to display residue names; (5) click "Render", "Atoms" to change the color and display mode of the ligand, residues, and metal ion; (6) click "Effects & Text" in the Visualization Setup panel and adjust the "Text Size" to 1; and (7) click "Save Picture" at the bottom of the Visualization Setup panel to export in TIF format. In the crystal complex of FTO∆31-3-meT (PDB ID: 3LFM), residues Tyr108 and His231 sandwiched the nucleobase ring of 3-meT, while Leu109 and Val228 packed against the sugar ring through hydrophobic interactions. Likewise, residue Tyr141 in ALKBH5 was identified as corresponding to Tyr108 in FTO. Furthermore, there were three hydrogen bonds between FTO and 3-meT in this complex; two of them (between O2 of 3-meT and the conserved Arg96, and between O4 of 3-meT and the amide nitrogen of Glu234) were vital for FTO selectivity against differently methylated nucleobases.
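The MOE workflow above is interactive, but the underlying question — which residues contact the ligand in a crystal complex — can also be checked programmatically. Below is a minimal sketch (not part of the original analysis) that flags residues having any atom within a distance cutoff of a ligand in PDB-format text; the embedded coordinates and the ligand code `3MT` are illustrative placeholders, not values taken from PDB 3LFM.

```python
# Minimal sketch: flag residues whose atoms lie within a cutoff of any
# ligand atom, using fixed-column PDB parsing (stdlib only). Coordinates
# below are illustrative placeholders, NOT taken from PDB 3LFM.
import math

PDB_TEXT = """\
ATOM      1  CA  TYR A 108      10.000  10.000  10.000  1.00  0.00           C
ATOM      2  CA  LEU A 109      11.500  10.200   9.800  1.00  0.00           C
ATOM      3  CA  GLY A 150      30.000  30.000  30.000  1.00  0.00           C
HETATM    4  N3  3MT A 501      12.000  11.000  10.500  1.00  0.00           N
"""

def parse_atoms(pdb_text):
    """Yield (record, resname, chain, resseq, x, y, z) from ATOM/HETATM lines."""
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            yield (
                line[:6].strip(),     # record type
                line[17:20].strip(),  # residue name
                line[21],             # chain ID
                int(line[22:26]),     # residue number
                float(line[30:38]), float(line[38:46]), float(line[46:54]),
            )

def contact_residues(pdb_text, ligand_resname, cutoff=4.0):
    """Return residues with any atom within `cutoff` angstroms of the ligand."""
    atoms = list(parse_atoms(pdb_text))
    ligand = [a for a in atoms if a[1] == ligand_resname]
    protein = [a for a in atoms if a[0] == "ATOM"]
    hits = set()
    for rec, resname, chain, resseq, x, y, z in protein:
        for *_, lx, ly, lz in ligand:
            if math.dist((x, y, z), (lx, ly, lz)) <= cutoff:
                hits.add((resname, chain, resseq))
    return sorted(hits, key=lambda r: r[2])

print(contact_residues(PDB_TEXT, "3MT"))
```

To apply this to a real structure, one would read the downloaded PDB file (e.g. 3LFM) into `PDB_TEXT` and pass the actual ligand residue name; a cutoff of roughly 4 Å captures contacts of the kind described above, such as the sandwiching and hydrophobic packing residues.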
In line with Arg96 of FTO, residues Arg130 and Lys132 of ALKBH5, which might interact with m6A, contributed to its higher binding affinity and specificity for m6A. Additionally, residue Phe234 from the L2 loop of ALKBH5 was indispensable for flipping the m6A base into the active site. Furthermore, the active pockets of FTO and ALKBH5 carry conserved residues that coordinate the metal ion and the 2OG cofactor. In FTO, three conserved residues (His231, Asp233, and His307) of the key HX(D/E) motif directly coordinated Fe²⁺ ( Figure 2D). Similarly, in the crystal structure of ALKBH5 (PDB ID: 4NRO), Mn²⁺ was octahedrally coordinated by three residues (His204, Asp206, and His266) from the DSBH side chains of ALKBH5, together with water molecules and a cofactor ( Figure 2E). Apart from chelating the metal ion, the 2OG cofactor also interacted with multiple residues in the functional pockets of different AlkB members: Asn193, Tyr195, Lys132, Arg283, and Arg277 in ALKBH5, and Arg316, Ser318, Tyr295, Asn205, and Arg96 in FTO. Notably, crystallographic and biochemical studies revealed a much smaller active cavity for ALKBH5 than for FTO (490.2 Å³ versus 817.5 Å³) [17]. Therefore, inhibitors of ALKBH5 may need to be smaller compounds. In short, the structural analysis of FTO and ALKBH5 and the evaluation of their substrate-binding specificity and selectivity paved the way for studies of catalytic activity and the design of selective inhibitors.
FTO in Adipogenesis and Metabolism-Related Diseases
Initially, FTO was linked to obesity in a genome-wide search for type 2 diabetes-susceptibility genes [20]. Since obesity is an accepted risk factor for several common diseases, it was reasonable to assume that FTO contributes to human disease. Subsequently, cumulative studies have confirmed this hypothesis: FTO is markedly associated with the risk of many diseases, such as leukemia [21], cardiovascular disease [22], breast cancer [23], melanoma [24,25], and endometrial cancer [26].
As described in more recent research, FTO inactivation may prevent obesity, while overexpression of FTO induced the expression of ghrelin, which controls food-intake behavior [27,28]. Further, FTO participated in adipogenesis by mediating the mRNA production of adipogenesis regulatory factors. For instance, FTO promoted adipocyte differentiation by controlling the RNA splicing of Runt-related transcription factor 1 (RUNX1T1, an adipogenesis-related transcription factor) in an m6A-erase manner [29]. Peng et al. recently validated that treating hepatic cells with the FTO inhibitor entacapone significantly decreased the expression of forkhead box protein O1 (FOXO1) by inhibiting the m6A demethylase activity of FTO [30]. Inspired by previous studies showing that FOXO1 is crucial for hepatic gluconeogenesis in the fasting state, they evaluated the specific effect of the FTO-FOXO1 axis in mice [31]. Intriguingly, an in vivo experiment demonstrated that FTO deficiency decreased body weight and fasting blood glucose concentration in diet-induced obese mice by acting on the FTO-FOXO1 regulatory axis.
Krüger et al. recently verified the relationship between FTO and obesity-induced metabolic and vascular changes [32]. Endothelial FTO knockout protected mice under obese conditions from insulin resistance in endothelial cells and skeletal muscle, as well as from hyperglycemia and hypertension. The loss of FTO increased the m6A level and stabilized lipocalin-type prostaglandin D synthase (L-PGDS) mRNA, thus upregulating its expression. Subsequently, L-PGDS promoted the synthesis of prostaglandin D2 (PGD2), and PGD2 stimulated AKT phosphorylation in endothelial cells. In parallel with previous reports, obesity decreased AKT phosphorylation in endothelial cells, impairing glucose and insulin tolerance [33]. These results indicate that FTO plays a metabolic and vascular role independent of the gene's role in obesity. Therefore, selective inhibition of FTO may be used to treat dysregulated metabolic homeostasis.
FTO in Heart Failure
The association between FTO-mediated m6A demethylation and cardiovascular disease, including heart failure, has also drawn the attention of researchers. Berulava et al. reported that genetic ablation of FTO can lead to an accelerated progression of heart failure [34]. Consistently, Mathiyalagan and co-workers found decreased expression of FTO in failing hearts, which resulted in decreased cardiomyocyte contractile function [35]. Both sustained and transient overexpression of FTO attenuated ischemia-induced cardiac dysfunction in mouse models of myocardial infarction. Further studies revealed that FTO overexpression prevented the degradation of cardiac contractile transcripts and improved their protein expression under ischemic conditions through selective demethylation. More recently, a study revealed that FTO could alleviate cardiac dysfunction in mice with pressure overload-induced heart failure by regulating glucose uptake and glycolysis, facilitating the expression of glycolysis-related genes such as phosphoglycerate mutase 2 (PGAM2) [36]. These findings suggest that FTO may be a therapeutic target for heart failure treatment.
FTO in Leukemia
More recent studies confirmed that FTO exerts an oncogenic effect in acute myeloid leukemia (AML) [37,38]. Importantly, Li et al. discovered that FTO was highly expressed in subtypes of AML cells [37]. AML is a disease driven by a small proportion of leukemic stem cells (LSCs), and the persistence of LSCs is considered a principal cause of disease recurrence. Functionally, FTO played its oncogenic role predominantly by promoting leukemic oncogene-mediated cell transformation and leukemogenesis. In particular, FTO suppressed the expression of tumor suppressors, including ankyrin repeat and SOCS box containing 2 (ASB2) and retinoic acid receptor alpha (RARA), by eliminating m6A sites. Further research confirmed that FTO suppressed all-trans retinoic acid (ATRA)-induced leukemia cell differentiation by reducing ASB2 and RARA expression [37]. Subsequently, Su et al. found that R-2-hydroxyglutarate (R-2HG) ( Figure 3) displayed anti-leukemic activity by inhibiting FTO demethylase activity, thus altering the expression of MYC and CCAAT/enhancer-binding protein alpha (CEBPA) [39]. In R-2HG-sensitive leukemic cells, R-2HG treatment significantly upregulated the m6A levels of MYC/CEBPA mRNA by blocking the binding of FTO to its mRNA targets. It was further found that YTHDF2 specifically recognized the increased m6A modification, thereby destabilizing MYC/CEBPA transcripts and triggering their degradation. This study demonstrated that R-2HG could effectively inhibit FTO demethylase activity, thereby reducing the proliferation/survival of leukemic cells with highly expressed FTO by modulating m6A/MYC/CEBPA signaling [39].
Shortly after tyrosine kinase inhibitors (TKIs) were introduced in the clinical treatment of leukemia, rapidly acquired resistance to TKIs became a significant obstacle. Yan et al. revealed that the formation of a drug-resistant phenotype during TKI therapy corresponded to the over-expression of FTO in leukemic cells [40]. When FTO was inactivated by gene knockdown or chemical inhibitors, resistant cells became sensitive again to TKIs, accompanied by reduced expression of MER proto-oncogene tyrosine kinase (MERTK) and B-cell lymphoma-2 (Bcl-2). Furthermore, they confirmed that inhibiting FTO demethylase activity with rhein ( Figure 3) in combination with TKIs greatly inhibited tumor growth in nude mice. These findings suggested that the inhibition of FTO by small-molecule compounds might be an effective approach to treating leukemia.
FTO in Glioblastoma
Cui et al. recently confirmed that reversible m6A modification in mRNA was crucial for the tumor-promoting role of glioblastoma stem cells (GSCs), mainly by maintaining self-renewal capacity and improving tumorigenesis [41]. Both knockdown of METTL3/METTL14 and over-expression of FTO facilitated GSC growth and self-renewal by reducing the overall level of m6A modification. In particular, they found that decreased m6A modification directly led to increased expression of oncogenes such as A disintegrin and metalloproteinase 19 (ADAM19), ephrin type-A receptor 3 (EPHA3), and Krüppel-like factor 4 (KLF4). In addition, in vivo experiments showed that inhibiting FTO activity with the FTO inhibitor MA2 ( Figure 4) significantly repressed GSC-initiated tumorigenesis and prolonged the lifespan of GSC-grafted mice.
FTO in Breast Cancer
FTO plays a pro-tumor role in human breast cancer [42]. In previous studies, several FTO single-nucleotide polymorphisms (SNPs) showed an association with breast cancer risk [23,43,44], and FTO is highly expressed in human breast cancer tissues. The inhibition of FTO by MO-I-500 ( Figure 3) reduced the proliferation of breast cancer cells, whereas the over-expression of FTO significantly promoted breast cancer progression [45,46]. Additionally, Niu and colleagues reported that FTO exerted its pro-tumor activity by downregulating the expression of Bcl-2 nineteen kilodalton interacting protein 3 (BNIP3), a pro-apoptotic member of the Bcl-2 family [46]. As described in a recent study, BNIP3 functioned as a tumor suppressor in breast cancer by inducing cell apoptosis [47]. In this case, FTO erased m6A methylation in the 3′-UTR of BNIP3 mRNA, resulting in the degradation of BNIP3. Hence, we propose that FTO may be an effective target for the treatment of breast cancer.
FTO in Melanoma
Earlier findings from 2013 suggested that FTO genetic variations were associated with an increased risk of melanoma [24,25]. More recently, Yang et al. examined the regulation of FTO as m6A demethylase on malignant melanoma samples and multiple melanoma cell lines [48]. Mechanistic studies showed that FTO played a pro-tumor role in vitro and in vivo. Primarily, they found that FTO could be upregulated in response to metabolic stress and starvation via autophagy and nuclear factor kappa B (NF-κB) pathways. In addition, FTO suppression increased the m6A methylation of critical pro-tumorigenic melanoma cell-intrinsic genes (programmed cell death-1, PD-1; C-X-C motif chemokine receptor 4, CXCR4; and sex-determining region Y-box 10, SOX10). After that, the m6A reader YTHDF2, which mainly mediated RNA degradation, significantly reduced their expression. In addition, FTO displayed an effect on anti-PD-1 resistance. The inactivation of FTO reduced the drug resistance to anti-PD-1 treatment of melanoma in mice. These results suggest that the inhibition of FTO activity combined with anti-PD-1 could be an effective strategy in immunotherapy.
FTO in Endometrial Cancer
In recent years, the inherent link between FTO and endometrial cancer has also attracted the attention of scientists. Researchers examined the expression of FTO in endometrial tumor tissues, and immunohistochemistry staining showed that FTO was highly expressed in endometrial cancer tissues [49,50]. Mechanistic studies showed that β-estradiol (E2) increased FTO expression and subsequently promoted proliferative and invasive phenotypes by activating the PI3K/AKT and MAPK signaling pathways [49]. Moreover, estrogen favored FTO nuclear localization through the mTOR signaling pathway in an estrogen receptor α (ERα)-dependent manner [50]. More recent work by Zhang's group found that FTO promoted endometrial cancer metastasis by activating the Wnt signaling pathway [51]. Mechanistically, FTO blocked YTHDF2-mediated mRNA degradation by eliminating m6A methylation in the 3′-UTR of homeobox B13 (HOXB13) mRNA, thereby increasing HOXB13 expression. As a result, high expression of HOXB13 activated the Wnt signaling pathway, leading to tumor metastasis and invasion. These findings suggested the possible use of FTO as a therapeutic target for endometrial cancer.
FTO in Gastric Cancer
A study published in 2017 demonstrated that FTO was highly expressed in gastric cancer cell lines and correlated with poor prognosis in patients with gastric cancer [52]. More recently, by performing a univariate Cox regression analysis on expression levels in the Cancer Genome Atlas (TCGA) dataset, Su et al. found that high FTO expression was related to poor survival of gastric cancer patients [53]. Likewise, another report claimed that reduced m6A modification contributed to malignant phenotypes in gastric cancer [54]. Mechanistic studies showed that knockdown of METTL14 suppressed m6A modification, thereby promoting the proliferation and invasion of gastric cancer cells and activating Wnt/PI3K-Akt signaling; these phenotypic and molecular changes could be attenuated by upregulating m6A content through knockdown of FTO. Moreover, very recent work by Yang et al. demonstrated that histone deacetylase 3 (HDAC3) promoted gastric cancer progression through the forkhead box transcription factor A2 (FOXA2)-mediated FTO-m6A-MYC axis [55]. In vitro experiments showed that HDAC3 facilitated the proliferation, migration, and invasion of gastric cancer cells by inhibiting the expression of FOXA2. Mechanistic studies confirmed that FOXA2 bound directly to the promoter region of FTO, significantly inhibiting FTO transcription and expression. In turn, FTO stabilized MYC mRNA by eliminating m6A methylation, thus increasing its expression. Further investigation highlighted that depletion of HDAC3 impeded tumor growth, reduced the protein levels of FTO and MYC, and elevated the expression of FOXA2 in nude mice. Collectively, FTO may be an epigenetic modification target for the treatment of gastric cancer.
FTO in Bladder Cancer
FTO played an oncogenic role in bladder cancer in an m6A-dependent manner [56,57]. Bioinformatic analyses and Western blotting assays showed that FTO was highly expressed in bladder cancer tissues and cell lines. Moreover, FTO stimulated tumor growth of bladder cancer in vivo and in vitro. Tao et al. illustrated that FTO could increase the expression level of metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) mRNA through m6A demethylation [56]. A further mechanistic study demonstrated that FTO promoted the tumorigenesis of bladder cancer by suppressing the microRNA miR-384 and inducing mal, T cell differentiation protein 2 (MAL2) expression. In the same year, a study by Zhou found that FTO facilitated bladder cancer progression via the FTO/miR-576/cyclin-dependent kinase 6 (CDK6) axis [57]. An RNA immunoprecipitation (RIP) assay revealed that FTO mediated the synthesis of miR-576 by regulating the maturation of pri-miR-576. Additionally, CDK6 was identified as a direct target of miR-576, which down-regulated it. In bladder cancer tissues, the protein level of FTO was positively correlated with CDK6 and negatively correlated with miR-576. These findings indicated the possibility of FTO as a diagnostic or prognostic biomarker in bladder cancer.
FTO in Esophageal Squamous Cell Carcinoma (ESCC)
Cui et al. discovered that FTO cooperated with the lncRNA LINC00022 to promote the tumorigenesis of ESCC by upregulating LINC00022 expression [58]. Clinically, they found that LINC00022 was significantly elevated in primary ESCC samples and correlated with poor prognosis in ESCC patients. Mechanistically, FTO reduced the m6A modification of LINC00022 and promoted its expression, thereby accelerating the proliferation of ESCC cells. In contrast, forced expression of YTHDF2 led to a decrease in LINC00022 levels in ESCC cells. On the whole, this research demonstrated that FTO-mediated up-regulation of LINC00022 drives ESCC progression in a YTHDF2-dependent manner.
FTO in Multiple Myeloma (MM)
A study has shown that FTO is associated with the progression and metastasis of MM [59]. Transcriptome array analysis indicated that FTO was expressed at higher levels in patients with multiple myeloma than in healthy subjects. Moreover, FTO functionally increased the expression of heat shock factor 1 (HSF1), a reported metastasis-promoting gene in melanoma, thereby facilitating the proliferation, migration, and invasion of MM cells [60]. FTO regulated HSF1 by blocking YTHDF2-mediated RNA degradation through the elimination of m6A modification. Conversely, inhibition of FTO with MA2 decreased the expression of HSF1 and its target genes in mice. Additionally, combination treatment with MA2 and bortezomib, a first-line chemotherapeutic agent for MM, exhibited stronger synergistic cytotoxic effects on MM occurrence and extramedullary metastasis in vivo. These results suggest that the FTO-HSF1 axis could be a potential therapeutic target in MM.
ALKBH5 in Breast Cancer
ALKBH5 is the second discovered m6A demethylase, and m6A is its only known catalytic substrate [10]. It is well documented that ALKBH5 can be induced by hypoxia-inducible factor 1α (HIF-1α) under hypoxic conditions [14]. Zhang et al. recently showed that hypoxia enriched breast cancer stem cells (BCSCs) by inducing the expression of ALKBH5 [61]. Notably, the BCSC phenotype is characterized by several core pluripotency factors, such as Nanog homeobox (NANOG), which is crucial for maintaining cancer stem cells [62]. In vitro experiments showed that the high expression of ALKBH5 induced by hypoxia improved the stability and expression of NANOG mRNA by eliminating the m6A modification in its 3′-UTR. In addition, in vivo experiments have shown that knockdown of ALKBH5 expression impaired tumor formation and reduced the BCSC population in breast tumors.
ALKBH5 in Glioblastoma
ALKBH5 is required for maintaining the tumorigenicity of GSCs by sustaining expression of the transcription factor Forkhead Box M1 (FOXM1) [63]. FOXM1 is highly expressed in cancer and is indispensable for the self-renewal and tumorigenesis of GSCs [64,65]. Mechanistic analysis revealed that ALKBH5 eliminated m6A modification on the 3′-UTR of FOXM1 pre-mRNA, accompanied by increased protein levels of FOXM1. Further, in vivo examinations showed that depletion of ALKBH5 significantly inhibited brain tumor formation, and this growth inhibition was reversed by ectopic expression of the FOXM1 coding sequence, which restored brain tumor growth. Another team recently demonstrated an important role for ALKBH5 in promoting the radioresistance and invasiveness of GSCs [66]. On the one hand, ALKBH5 drove radioresistance by increasing the expression of genes involved in homologous recombination. On the other hand, ALKBH5 upregulated Yes-associated protein 1 (YAP1) expression, thereby contributing to the aggressiveness of GSCs.
ALKBH5 in AML
Recent advances revealed that ALKBH5 plays a critical role in promoting tumorigenesis in AML [67]. Analysis of gene expression profiling datasets showed that ALKBH5 is highly expressed in various subtypes of AML, and TCGA AML database analysis further showed that higher expression of ALKBH5 was related to shorter overall survival in AML patients. Moreover, ALKBH5 was found to be indispensable for the self-renewal ability of leukemia stem/initiating cells (LSCs/LICs). The underlying mechanism through which ALKBH5 functioned in AML was the regulation of the expression of transforming acidic coiled-coil-containing protein 3 (TACC3). It has been documented that TACC3 plays a critical oncogenic role in various cancers and is required for cancer stem cell self-renewal [68]. In this case, ALKBH5 improved the stability of TACC3 mRNA, thus promoting its expression in an m6A-erase manner. These data highlight the tumor-promoting role of ALKBH5 through the ALKBH5-m6A-TACC3 axis in AML.
ALKBH5 in Ischemic Heart Disease
Abnormal autophagy is associated with many diseases, especially cardiovascular disease [69]. Song et al. recently provided evidence that reversible m6A methylation plays a role in the autophagy of cardiomyocytes [70]. Their primary finding was that hypoxia/reoxygenation (H/R) treatment increased m6A levels in the total RNA of cardiomyocytes and that METTL3 was the main factor causing the elevated m6A modification. Increased METTL3 in H/R-treated cardiomyocytes inhibited autophagic flux and promoted apoptosis. Furthermore, they demonstrated that transcription factor EB (TFEB) was a direct target of METTL3. At the mechanistic level, METTL3 methylated TFEB mRNA at two m6A sites in its 3′-UTR, which subsequently facilitated the binding of the RNA-binding protein heterogeneous nuclear ribonucleoprotein D (HNRNPD) to TFEB pre-mRNA, resulting in its degradation. In contrast, the m6A demethylase ALKBH5 showed an opposite effect on TFEB. Interestingly, further research confirmed that TFEB regulated METTL3 and ALKBH5 in opposite directions, increasing ALKBH5 and decreasing METTL3. This association between METTL3/ALKBH5 and autophagy implied their potential as therapeutic targets for treating ischemic diseases.
ALKBH5 in Lung Cancer
Additionally, recent papers have shown that ALKBH5 plays diverse roles in lung cancer. Chao et al. found that intermittent hypoxia-induced high expression of ALKBH5 contributed to the proliferation and invasion of lung adenocarcinoma cells [71]. The mechanism relies on the m6A demethylation activity of ALKBH5 to induce FOXM1 expression; as previously reported, FOXM1 expression is essential for cancer progression [72]. Further experiments indicated that ALKBH5 exerted its pro-tumor role in lung tissue by upregulating FOXM1. Moreover, Zhu et al. demonstrated that ALKBH5 could accelerate the malignant progression of non-small cell lung cancer (NSCLC) [73]. RNA immunoprecipitation sequencing (RIP-Seq) identified TIMP metallopeptidase inhibitor 3 (TIMP3) as a direct target of ALKBH5. This work confirmed that ALKBH5 was involved in NSCLC oncogenesis by reducing the mRNA stability and protein synthesis of TIMP3 in a manner dependent on its m6A demethylation activity. On the contrary, recent literature by Jin et al. discovered that ALKBH5 inhibited tumor growth and metastasis by decreasing Yes-associated protein (YAP) expression in NSCLC [74]. On the one hand, ALKBH5 inhibited YAP expression by removing m6A methylation from YAP pre-mRNA. On the other hand, ALKBH5 reduced YAP activity by modulating the miR-107/large tumor suppressor kinase 2 (LATS2) axis in a human antigen R (HuR)-dependent manner. Collectively, the specific mechanisms of ALKBH5 in lung cancer still need to be explored in future studies.
ALKBH5 in Epithelial Ovarian Cancer
A study by the Zhu group proposed that ALKBH5 promoted epithelial ovarian cancer by controlling autophagic flux [75]. On the one hand, by activating the EGFR-PIK3CA-AKT-mTOR signaling pathway, ALKBH5 suppressed autophagy to enhance the proliferation and invasion of human ovarian cancer cells. On the other hand, through its m6A demethylation capability, ALKBH5 stabilized Bcl-2 mRNA and promoted the formation of the Beclin1-Bcl-2 complex, which resulted in the inhibition of autophagy. Moreover, high expression of ALKBH5 was found in cisplatin-resistant epithelial ovarian cancer cells to promote cell proliferation and chemoresistance to cisplatin in vivo and in vitro [76]. Research demonstrated that ALKBH5 formed a positive regulatory loop with homeobox A10 (HOXA10), thereby maintaining the overexpression of both ALKBH5 and HOXA10. Collectively, these results showed that ALKBH5 upregulation significantly decreased the m6A abundance of Janus kinase 2 (JAK2) mRNA, thus increasing its expression by inhibiting YTHDF2-mediated mRNA degradation. Notably, they discovered that ALKBH5 overexpression promoted tumor growth and chemoresistance to cisplatin in epithelial ovarian cancer by activating the JAK2/STAT3 pathway, consistent with previous studies reporting that activation of the JAK2/STAT3 signaling pathway contributes to pro-tumor effects and chemotherapy resistance in several cancer types [77][78][79]. These results indicate that ALKBH5 could be a potential therapeutic target in epithelial ovarian cancer.
ALKBH5 Serves as Pancreatic Cancer (PC) Suppressor
Recent advances have defined ALKBH5 as a suppressor in PC [80,81]. Guo et al. initially investigated the expression of ALKBH5 in PC tissues and found that it was significantly lower than in the corresponding noncancerous tissues [80]. Conversely, over-expression of ALKBH5 dramatically suppressed the proliferation, migration, and invasion of PC cells and reduced tumor volume in a PC xenograft model. They then found that ALKBH5 acted on transcriptome regulation and identified period circadian regulator 1 (PER1) as a downstream target of ALKBH5. Mechanistically, decreased ALKBH5 increased the m6A enrichment of PER1 mRNA and resulted in YTHDF2-mediated mRNA degradation, consistent with the previously reported reduction in PER1 expression in PC [82]. In short, this paper demonstrated that ALKBH5 functioned as a tumor suppressor in PC by modulating transcriptional fate via the ALKBH5/m6A/YTHDF2/PER1 axis. In addition, Tang et al. showed that the expression of ALKBH5 was reduced in a gemcitabine-treated patient-derived xenograft (PDX) model [81]. In comparison, over-expression of ALKBH5 suppressed the proliferation, migration, and invasion of PC cells and reversed the chemotherapy resistance of pancreatic ductal adenocarcinoma (PDAC) cells. Further experiments showed that ALKBH5 over-expression inactivated Wnt signaling by increasing Wnt inhibitory factor 1 (WIF-1) expression after erasing m6A sites in the 3′-UTR of WIF-1 mRNA. These findings highlight the possibility of an m6A eraser-based approach for the diagnosis and treatment of PC.
ALKBH5 Suppresses Malignancy of Hepatocellular Carcinoma (HCC)
Additionally, Chen et al. proposed that ALKBH5 functions as a tumor suppressor in HCC in an m6A-dependent and IGF2BP1-associated manner [83]. Research showed that ALKBH5 was down-regulated in HCC patients, and low expression of ALKBH5 predicted a poor prognosis of HCC. Moreover, inhibition of ALKBH5 promoted HCC proliferation and accelerated invasion/metastasis in vitro and in vivo, verifying the anti-oncogenic role of ALKBH5 in HCC. They also identified LY6/PLAUR domain-containing 1 (LYPD1) as a direct target of ALKBH5: ALKBH5-mediated m6A demethylation blocked IGF2BP1 binding to m6A-containing mRNAs, thereby reducing the stability of LYPD1. It is noteworthy that LYPD1 was subsequently demonstrated to induce oncogenic behaviors in HCC. Collectively, this study addresses the important role of the ALKBH5/LYPD1 axis in HCC progression and provides novel insights into therapeutic strategies for HCC.
ALKBH5 in Osteogenesis and Osteosarcoma
Recent papers proposed that ALKBH5 is beneficial to osteogenesis [84,85]. Wang et al. discovered that both ALKBH5 and m6A-containing bone morphogenetic protein 2 (BMP2) transcripts were upregulated in ligamentum flavum cells. During the development of ossification of the ligamentum flavum (OLF), over-expressed ALKBH5 induced BMP2 expression and activated the AKT signaling pathway, thereby promoting the osteogenesis of ligamentum flavum cells [84]. In contrast, METTL3 suppressed the progression of osteogenesis by activating myeloid differentiation factor 88 (MYD88)-mediated NF-κB activity [85]. Previous studies had confirmed that upregulation of MYD88 activates the NF-κB pathway to suppress osteogenesis. In short, METTL3 promoted the expression of MYD88 by increasing m6A methylation, whereas ALKBH5 had the opposite effect.
Surprisingly, Chen et al. recently found that ALKBH5 promoted osteosarcoma tumorigenesis [86]. RNA immunoprecipitation and RNA pull-down assays confirmed that ALKBH5 interacted with plasmacytoma variant translocation 1 (PVT1), an oncogenic long noncoding RNA (lncRNA), thereby inducing its expression. In particular, ALKBH5 removed the m6A modification of the PVT1 transcript and prevented YTHDF2-mediated degradation, thus increasing PVT1 expression. This paper demonstrated that the tumor-promoting effect of ALKBH5 in osteosarcoma was partly mediated through the regulation of PVT1, consistent with previous reports of an oncogenic role of PVT1 in cancers [87].
ALKBH5 in Other Diseases
Li et al. recently examined the expression of ALKBH5 in placental villous tissue from recurrent miscarriage (RM) patients [88]. They found that highly expressed ALKBH5 impaired trophoblastic cell invasion in human trophoblasts. Interestingly, ALKBH5 downregulated the stability of cysteine-rich 61 (CYR61) mRNA in an m6A-dependent manner. According to previous researchers, CYR61 played a critical role in the progression of embryogenesis. They inferred from this work that ALKBH5 regulated the pathogenesis of RM by regulating the expression of CYR61.
Additionally, Zhang et al. recently claimed that ALKBH5 was involved in gastric cancer [89]. Both ALKBH5 and the lncRNA nuclear paraspeckle assembly transcript 1 (NEAT1) were highly expressed in gastric cancer cells and gastric cancer tissues. ALKBH5 was beneficial to the expression of NEAT1 by eliminating m6A modification. Moreover, over-expressed NEAT1 combined with the enhancer of zeste homologue 2 (EZH2) to upregulate the expression of downstream genes of EZH2, thereby promoting gastric cancer invasion and metastasis. These results illustrate that ALKBH5 could be a feasible therapeutic target for these diseases.
m6A Demethylases in Chemotherapy Resistance
Recently, Zhou et al. confirmed that FTO played a role in facilitating the chemoradiotherapy resistance of cervical squamous cell carcinoma (CSCC) [90]. They initially demonstrated that FTO was highly expressed in CSCC tissue and was beneficial to chemoradiotherapy resistance of CSCC. In addition, FTO stimulated β-catenin expression by erasing the m6A modification of β-catenin mRNA. Further research indicated that FTO may contribute to the chemoradiotherapy resistance of CSCC by inducing β-catenin expression and subsequently activating excision repair cross-complementation group 1 (ERCC1). Moreover, treatment of CSCC cells with the FTO inhibitor MA2 (Figure 4) improved chemoradiotherapy sensitivity.
Shriwas et al. recently reported that human RNA helicase DEAD-box helicase 3 (DDX3) was involved in cisplatin resistance in oral squamous cell carcinoma (OSCC) by modulating FOXM1 and NANOG expression via increased ALKBH5 expression [91]. Importantly, DDX3 expression was elevated in cisplatin-resistant cells and chemotherapy non-responder tumors. It has been highlighted that an enhanced population of cancer stem cells (CSCs) contributes to chemoresistance and recurrence, and that ALKBH5 promoted CSC properties through increasing FOXM1 and NANOG [61,63]. In this case, they confirmed that the specific inhibition of DDX3 decreased the CSC population in chemoresistant cells and significantly suppressed FOXM1 and NANOG expression in an ALKBH5-m6A-dependent manner. In summary, these findings provide new insights for studying the role of m6A demethylases in chemotherapy resistance.
ALKBH5 in Cancer Immunity
Recent research has illustrated the functions and mechanisms of ALKBH5 in cancer immunotherapy. During cancer development, tumor cells evade immune surveillance by expressing inhibitory checkpoint molecules, which is a major mechanism for suppressing immune responses [92]. For example, programmed cell death 1 ligand 1 (PD-L1), a main inhibitory immune checkpoint molecule on tumor cells, contributes to immune evasion by binding to programmed death receptor-1 (PD-1) on T cells [92]. A study found that ALKBH5 was involved in suppressing antitumor T-cell immunity in intrahepatic cholangiocarcinoma by upregulating PD-L1 expression [93]. This work also demonstrated that ALKBH5 reduced the m6A abundance of PD-L1 mRNA, thereby inhibiting YTHDF2-mediated mRNA degradation.
Li et al. proposed that ALKBH5 played an important role in the resistance to immune checkpoint blockade therapy [94]. Importantly, ALKBH5 deletion enhanced the efficacy of anti-PD-1 therapy and significantly prolonged the survival of ALKBH5-deficient tumor-bearing mice. Moreover, ALKBH5 deletion increased m6A abundance in mRNAs, which promoted protein synthesis of several target genes, including monocarboxylate transporter 4 (Mct4)/solute carrier family 16 member 3 (Slc16a3). Further investigation showed that ALKBH5 modulated Mct4 expression and induced lactate content, thereby reducing immune cell populations in the tumor microenvironment during GVAX vaccination and anti-PD-1 antibody therapy. In addition, another team validated that tumor-intrinsic ALKBH5 was responsible for the recruitment of tumor-associated macrophages (TAMs) and immunosuppressive phenotypes under hypoxic conditions in glioblastoma multiforme [95]. Interestingly, hypoxia-induced ALKBH5 stabilized lncRNA NEAT1 through m6A demethylation, which subsequently induced the expression and secretion of C-X-C motif chemokine ligand 8 (CXCL8)/interleukin-8 (IL8). CXCL8, a cytokine-encoding gene in humans, has been widely studied in cancer cells and TAM recruitment [96]. Mechanistically, NEAT1 modulated paraspeckle assembly, which in turn induced the relocation of the splicing factor proline- and glutamine-rich (SFPQ) protein from the CXCL8 promoter, ultimately upregulating CXCL8 expression [95].
Strategies Used for Developing m6A Demethylases Inhibitors
Since FTO and ALKBH5 rely on cofactors 2OG and Fe 2+ for their m6A demethylation activity, early studies focused on screening a series of 2OG analogues and related compounds as their inhibitors [97]. Structure-based virtual screening of different compound libraries was an important way to obtain potent FTO/ALKBH5 inhibitors [30,[98][99][100][101][102]. Interestingly, a high-throughput fluorescence polarization (FP) assay was performed for compounds that competed with FTO/ALKBH5 for binding to m6A-containing singlestranded nucleic acids, and meclofenamic acid (MA) was found to be a selective inhibitor of FTO over ALKBH5 [103]. Later on, Svensen and Jaffrey reported an approach to identify FTO inhibitors by using a fluorometric RNA substrate based on broccoli aptamer [104]. Das and co-workers designed a multi-protein dynamic combinatorial chemistry (DCC) system for screening FTO inhibitors [105]. More recently, Zhang et al. developed a single quantum dot-based Förster resonance energy transfer (FRET) nanosensor for FTO inhibitor screening [106]. Chang's team identified several types of compounds that inhibit FTO activity through fluorescence quenching and molecular modeling studies [107][108][109]. Moreover, combining the information from crystal structures of ligand-protein complexes and structure-based drug designs was also an efficient approach to discover potent inhibitors with distinct chemical scaffolds [110][111][112].
To better understand ligand-protein interactions, we generated 2D protein-ligand interaction diagrams from crystal structure complexes retrieved from the Protein Data Bank using MOE software. The 2D protein-ligand interaction diagrams were processed in four steps: (1) load the PDB file of the crystal complex into the MOE software; (2) rotate the crystal structure to a suitable angle and click "Compute" and "Ligand Interactions" buttons to create 2D diagrams; (3) change the "Legend" dropdown to "Rendering Options", increase the residue size to 1.8 angstroms, and click "Apply"; and (4) save the diagram as an image in the TIF format with default parameters. Structures for FTO in complexes with NOG (PDB ID: 4IDZ) and 2,4-PDCA (PDB ID: 4IE0) showed that both of them are bound to metal ions ( Figure 5A,B). Moreover, they further interacted with residues Arg316, Ser318, and Tyr295 of the side chains. In the complex of FTO with 4 (PDB ID: 4IE5), the pyridine ring of 4 nearly reached the substrate-binding site of FTO, which might spatially compete with the catalytic substrate ( Figure 5C). Moreover, 8-QH (compound 5, Figure 3) was a relatively potent FTO inhibitor with an IC 50 value of 3.3 µM. The crystal structure of FTO-8-QH (PDB ID: 4IE4) showed that 8-QH doubly chelated the Zn 2+ ion with hydroxyl and nitrogen of the hydroxyquinoline in a similar way to NOG ( Figure 5D). IOX3 (compound 6, Figure 3) and FG-4592 (compound 7, Figure 3) were known as prolyl-hydroxylase inhibitors [114], which also showed good inhibitory activity against FTO with IC 50 of 2.8 and 9.8 µM, respectively [97,115]. The crystal structure of FTO-IOX3 (PDB ID: 4IE6) indicated that its chlorine atom of the isoquinoline group reached the substrate-binding site ( Figure 5E). In 2012, Chen et al. identified the natural product rhein (compound 8, Figure 3) (IC 50 = 21 µM) as a competitive substrate inhibitor of FTO [98]. 
Further, rhein was the first discovered cell-active FTO inhibitor, able to inhibit cellular FTO demethylase activity. In molecular modeling of FTO-rhein (PDB ID: 4IE7), rhein occupied the binding sites of 3-meT, 2OG, and Fe 2+ ; importantly, this blocked the binding of m6A-containing ssDNA/ssRNA substrates to FTO ( Figure 5F). Compound 9a (Figure 3) acted as a selective inhibitor of FTO (IC 50 = 0.6 µM) compared to ALKBH5 (IC 50 = 96.5 µM) and other AlkB subfamilies [116]. Superimposition of the FTO-3-meT-NOG structure (PDB ID: 3LFM) with that of FTO-9a (PDB ID: 4CXW) showed that 9a occupied both the 2OG and nucleotide binding sites ( Figure 5G). The fumarate hydrazide of 9a bound in the same manner as NOG, while the 4-benzyl pyridine side chain sat in the nucleotide-binding site. The interaction between the pyridine nitrogen atom of 9a and Glu234 of FTO appeared to be the key factor for its high binding selectivity for FTO; in other AlkB subfamilies, this interaction was significantly weakened or absent. In particular, both compound 9a and its ethyl ester derivative 9b (Figure 3) showed low cytotoxicity and significantly increased the global level of m6A in HeLa cells. Shishodia et al. used knowledge of the interaction of FTO with 2OG and substrates to design synthetic FTO inhibitors, of which compound 10 (IC 50 = 1.5 µM, Figure 3) exhibited the best inhibitory activity [110].
Compound MO-I-500 (compound 11, Figure 3), a dihydroxyfuran sulfonamide [117], was first identified as an FTO inhibitor displaying anticonvulsant activity. In the superposition of MO-I-500 onto the NOG-FTO complex (PDB ID: 3LFM), this compound is located at the 2OG active site, and the hydroxyl oxygens of the dihydroxyfuran chelated the metal ion in opposite directions. MO-I-500 displayed anticonvulsant activity in the 6 Hz mouse model at a nontoxic dose, increased the total m6A level of cellular RNA, and altered the production of related microRNAs. Through a multi-protein DCC strategy, compound 12 ( Figure 3) was identified as an FTO-selective inhibitor (IC 50 = 2.6 µM), in comparison with ALKBH5 (IC 50 = 201.3 µM) [105]. The structural model of FTO-12 revealed that compound 12 coordinated with Fe 2+ in a bidentate manner, which was further stabilized by a combination of hydrogen-bonding and salt bridge interactions with side chains Arg96, Arg319, Tyr295, and Ser318 of FTO. Two compounds, 13a (IC 50 = 1.46 µM, Figure 3) and 13b (IC 50 = 28.9 µM, Figure 3), were identified as FTO inhibitors through a virtual screening of the ZINC compound library [99]. Molecular docking calculations revealed specific interactions between the two compounds and FTO residues Asp233, Tyr106, Glu234, Arg96, and Arg322. Importantly, compounds 13a and 13b are the first FTO inhibitors demonstrated to support the survival of dopamine neurons and rescue them from growth factor deprivation-induced apoptosis in vitro.
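The selectivity figures quoted above follow directly from the ratio of the two IC 50 values. A minimal sketch, using the values reported in the text (the helper function name is ours, not from the source):

```python
# Fold-selectivity of an inhibitor for FTO over ALKBH5,
# computed as IC50(ALKBH5) / IC50(FTO); both values in µM.
def fold_selectivity(ic50_fto_um: float, ic50_alkbh5_um: float) -> float:
    return ic50_alkbh5_um / ic50_fto_um

# Compound 9a: IC50 = 0.6 µM (FTO) vs 96.5 µM (ALKBH5)
print(round(fold_selectivity(0.6, 96.5), 1))    # ~160.8-fold FTO-selective
# Compound 12: IC50 = 2.6 µM (FTO) vs 201.3 µM (ALKBH5)
print(round(fold_selectivity(2.6, 201.3), 1))   # ~77.4-fold FTO-selective
```

Such ratios are a common shorthand for on-target selectivity, though they say nothing about binding mode or cellular activity.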
Substrate Competitive Inhibitors
Meclofenamic acid (MA) (compound 14a, Figure 4) and its derivatives were determined to be substrate-competitive selective inhibitors of FTO [103,118,119]. Structural superimposition of the FTO-MA (PDB ID: 4QKN) and FTO-3-meT (PDB ID: 3LFM) complexes showed that MA partially covered the binding site of 3-meT in an L shape. In addition, there were stable hydrophobic interactions between a part of the FTO NRL and the carboxylic acid substituent of MA ( Figure 5H) [103]. However, these hydrophobic interactions did not appear in the NRL of ALKBH5, which reduced the binding of MA to ALKBH5. MA2 (compound 14b, Figure 4), an ethyl ester derivative of MA, was a cell-active inhibitor of FTO that could enhance the overall level of m6A in HeLa cells. Inspired by the specific binding of MA to FTO, fluorescein (compound 15a, Figure 4) and its derivatives, i.e., FL2 and FL2-DZ (compound 15b and compound 15c, Figure 4), were explored as FTO inhibitors with IC 50 = 3.23 µM, 1.72 µM, and 4.49 µM, respectively. In the FTO-fluorescein crystal (PDB ID: 4ZS2), fluorescein sat in the nucleotide-binding site of FTO, similar to MA ( Figure 5I). Among them, FL2-DZ could selectively inhibit the demethylation activity of FTO. FL2-DZ also showed specific photo-affinity labeling of intracellular FTO because of its diazirine unit [119]. Thus, these fluorescein derivatives have the dual functions of inhibiting FTO activity and labeling FTO. More recently, the selective inhibitors FB23 (compound 16a, Figure 4) and FB23-2 (compound 16b, Figure 4) were synthesized by extending the dichloride-substituted benzene of MA. They were more efficient, with IC 50 values of 0.06 µM and 2.6 µM, respectively [118]. In the FTO-FB23 crystalline complex (PDB ID: 6AKW), FB23 occupied the entire binding position of MA in a similar L shape ( Figure 5J). In FB23, the phenyl carboxylic acid substituent of MA was retained, forming several hydrophobic interactions with the nucleotide recognition cap.
Hence, FB23 showed specific recognition of FTO over ALKBH5. Moreover, several hydrogen bonds were found between nitrogen or oxygen atoms in the heterocyclic ring of FB23 and Glu234 of FTO, which was beneficial for the inhibitory activity of FB23 against FTO. In vitro and in vivo research confirmed that FB23-2 exerted anti-proliferative activity against AML cell lines and inhibited primary AML LSCs in mouse models.
A series of benzene-1,3-diol derivatives were identified as selective inhibitors of FTO: N-CDPCB (compound 17, Figure 4) [111], CHTB (compound 18, Figure 4) [120], and radicicol (compound 19, Figure 4) [100]. The IC 50 values of N-CDPCB, CHTB, and radicicol were 4.95 µM, 39.24 µM, and 16.04 µM, respectively. In the FTO-N-CDPCB crystal complex (PDB ID: 5DAB), N-CDPCB was sandwiched between the β-sheet and the L1 loop of FTO at the extension of the 2OG binding site ( Figure 5K) [111]. In addition, the chlorine group was crucial for strengthening the N-CDPCB-FTO complex [111]. The binding pocket of N-CDPCB revealed a novel binding site that partly overlapped with that of the inhibitor MA, rather than the 3-meT position. Interestingly, CHTB occupied the entire MA binding site in a similar L-shaped fashion in the FTO-CHTB crystal (PDB ID: 5F8P) [120]. There were visible interactions between the chlorine atom on the chroman ring and several residues (Val83, Ile85, Leu90, and Thr92) of FTO in the hydrophobic pocket ( Figure 5L). A hydrogen bond was also formed between residue Glu234 and the benzene hydroxyl group. Moreover, both N-CDPCB and CHTB were able to increase m6A abundance in total mRNA in 3T3-L1 cells. Inspired by the common structural features of N-CDPCB and CHTB, Chang's group performed a structure-based virtual screening of compounds containing the 4-Cl-1,3-diol group and identified the natural compound radicicol as an effective FTO inhibitor [100]. In the FTO-radicicol crystal complex, radicicol bound FTO in an L-shaped conformation within a cavity similar to that occupied by N-CDPCB. One obvious difference between these two crystal complexes was that the conserved 4-Cl-1,3-diol group bound to FTO in different orientations.
Additionally, by using Schrödinger software for molecular docking to target the MA binding site of FTO, a study designed and synthesized chemically distinct FTO inhibitors, of which FTO-04 (compound 20, Figure 4) was identified as a competitive inhibitor of FTO (IC 50 = 3.39 µM) over ALKBH5 (IC 50 = 39.4 µM) [112]. Importantly, this research demonstrated that FTO-04 could impair the self-renewal properties of GSCs, inhibiting neurosphere formation without altering the growth of human neural stem cell neurospheres. Prakash and co-workers synthesized compound 21a (Figure 4) as a potent FTO-selective inhibitor (IC 50 = 0.087 µM) by merging the key fragments of compound 9a and MA [121]. Moreover, the ester prodrug 21b of compound 21a could reduce the viability of AML cells by downregulating MYC and upregulating RARA, consistent with previous reports on the anticancer effect of pharmacological FTO inhibition [37,39].
In 2020, Chen and co-workers identified CS1 (compound 22a, Figure 4) and CS2 (compound 22b, Figure 4) as potent and selective FTO inhibitors through structure-based virtual screening [101]. Both CS1 and CS2 displayed much higher anti-leukemic efficacy in comparison to FB23-2 in vitro and in vivo by modulating the expression of FTO target genes, including MYC, RARA, and ASB2. Moreover, this study also confirmed that CS1 and CS2 reprogrammed the immune response by reducing immune checkpoint gene expression, especially leukocyte immunoglobulin-like receptor B4 (LILRB4) [101]. In the same year, diacerein (compound 23, Figure 4) was identified as an FTO inhibitor, with an IC 50 value of 1.51 µM, by using a single quantum dot-based FRET nanosensor [106]. Molecular modeling studies suggested that diacerein possibly competed with m6A-containing ssDNA for FTO binding through hydrogen bonding with amino acid residues of the FTO protein. In addition, researchers validated the anti-proliferative effects of Saikosaponin-d (SsD, compound 24, Figure 4) in AML by targeting the m6A demethylation activity of FTO [122]. In vitro experiments showed that SsD exhibited good inhibitory activity on FTO demethylation, with a low IC 50 value of 0.46 µM. Importantly, they also demonstrated that SsD could overcome resistance to tyrosine kinase inhibitors by suppressing FTO-mediated m6A RNA methylation pathways.
Additionally, through structure-based virtual screening of U.S. Food and Drug Administration (FDA)-approved drugs, Peng et al. discovered that entacapone (compound 26a, Figure 6) was a substrate, as well as the 2OG cofactor competitive inhibitor of FTO [30]. Entacapone was structurally distinct from any reported inhibitors of FTO, the IC 50 value of which was 3.5 µM. In the crystal of entacapone bound with FTO (PDB ID: 6AK4), hydrogen bonds could be discovered between the heterotopic hydroxyl group on the nitrocatechol ring with residues from the substrate-binding site ( Figure 5M). Additionally, the nitrile group of the compound could chelate with Zn 2+ , which was recently reported in histone demethylase protein-ligand complex cases. Interestingly, the flexible tail of diethyl-propanamide was embedded deeply in the cofactor binding site. Furthermore, compounds 26b ( Figure 6) and 26c ( Figure 6) were designed and synthesized by replacing the flexible diethyl tail of entacapone with alicyclic groups, enhancing the inhibitory activity of FTO with IC 50 values of 1.2 and 0.7 µM, respectively.
Using fluorescence quenching technology, several further inhibitors were found to decrease the demethylase activity of FTO, including nafamostat mesylate (compound 27, Figure 6) [107] and clausine E (compound 28, Figure 6). Molecular docking analysis showed that the binding affinity between FTO and these molecules was driven mainly by hydrophobic and hydrogen-bond interactions with residues in the active cavity of FTO, similar to the binding modes between FTO and other inhibitors.
Conclusions
Numerous epigenetic studies in recent decades have revealed the potential of m6A demethylases as therapeutic targets for human diseases, including cancers. However, these two demethylases are likely to display different effects in various diseases because they are abundant in different tissues and differ in their sites of action. Given the above, FTO is closely related to a range of biomedical processes, including obesity-related diseases, metabolic homeostasis, tumor promotion, the self-renewal ability of CSCs, immunotherapy resistance, and chemotherapy resistance. Correspondingly, ALKBH5 shows a clear association with tumor promotion, cancer suppression, the self-renewal ability of CSCs, autophagy, chemotherapy resistance, and other diseases. Nevertheless, few studies have reported on the regulation by m6A demethylases of cellular contexts such as immunity, DNA damage, autophagy, and apoptosis. Thus, future work is certainly required to determine the regulatory genes of each m6A demethylase in various cancers.
At present, reported FTO/ALKBH5 inhibitors mainly comprise 2OG analogs and substrate-competitive inhibitors. Several potent FTO inhibitors have been demonstrated to suppress the proliferation of cancer cells, such as R-2HG, FB23-2, CS1, CS2, and SsD in leukemia cells [39,101,118,122]; MA2 in GSCs [41]; and MO-I-500 in breast cancer cells [45]. Moreover, in vivo studies have also shown that several selective FTO inhibitors can significantly inhibit tumor growth and prolong survival in mice. However, few of the currently developed small-molecule FTO inhibitors are suitable for clinical application, due to limited bioavailability, low sensitivity, and/or poor selectivity. Therefore, much work remains to develop more potent FTO inhibitors and to improve their biological functions, inhibitory effects, and therapeutic potential for human disease treatment. In addition, designing and synthesizing FTO inhibitors based on existing small molecules, and discovering FTO inhibitors with distinct frameworks or compounds that bind to novel binding sites, should be considered an important research strategy in future work. The lack of selectivity also remains a challenge for ALKBH5 inhibitors. One significant hindrance is ALKBH5's flexible motif 2 and unique disulfide bond, which result in a smaller active pocket and thus restrict inhibitors to smaller scaffolds. Therefore, identifying new lead ALKBH5 inhibitors is urgently needed.
In conclusion, this review underlines the recent advances of m6A demethylases in many human diseases. However, some issues remain to be resolved. Firstly, the underlying mechanisms of m6A demethylases in some cancers are not fully understood. Secondly, some findings have shown that m6A demethylases can be used as therapeutic targets, but specific clinical trials remain to be conducted. Thirdly, the potential of m6A demethylase inhibitors for clinical application, alone or in combination with clinical drugs for specific diseases, should be carefully explored.
Conflicts of Interest:
The authors declare no conflict of interest.
Fabrication and Characterisation of Calcium Sulphate Hemihydrate Enhanced with Zn- or B-Doped Hydroxyapatite Nanoparticles for Hard Tissue Restoration
A composite based on calcium sulphate hemihydrate enhanced with Zn- or B-doped hydroxyapatite nanoparticles was fabricated and evaluated for bone graft applications. The investigations of their structural and morphological properties were performed by X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, scanning electron microscopy (SEM), and energy dispersive X-ray (EDX) spectroscopy techniques. To study the bioactive properties of the obtained composites, soaking tests in simulated body fluid (SBF) were performed. The results showed that the addition of 2% Zn resulted in an increase of 2.27% in crystallinity, while the addition of boron caused an increase of 5.61% compared to the undoped HAp sample. The crystallite size was found to be 10.69 ± 1.59 nm for HAp@B, and in the case of HAp@Zn, the size reached 16.63 ± 1.83 nm, compared to HAp, whose crystallite size was 19.44 ± 3.13 nm. The mechanical resistance of the samples doped with zinc was the highest and decreased by about 6% after immersion in SBF. Mixing HAp nanoparticles with gypsum improved cell viability compared to HAp for all concentrations (except for 200 µg/mL). Cell density decreased with increasing nanoparticle concentration, compared to gypsum, where the cell density was not significantly affected. The degree of cellular differentiation of osteoblast-type cells was more accentuated in samples treated with G+HAp@B nanoparticles compared to HAp@B. Cell viability in these samples decreased with increasing concentration of administered nanoparticles, and the cell density observations confirmed these quantitative data.
Introduction
Bones are the major part of the musculoskeletal system that supports body weight, performs motion, and protects the internal organs [1]. They have functional adaptation ability, meaning that bones can adjust their mass and architecture according to their mechanical environment or other mechanical stimuli. Bone grafting is one of the most commonly used strategies for treating bone defects and is widely applied for bone regeneration in orthopaedic surgeries. An ideal bone graft must augment the process of bone healing. Since the selective substitution of its cations (Ca2+) and anions (OH− and/or PO43−) is allowed in HAp, it can become an ideal host for ionic doping [23]. Because HAp is weak in osteoinduction, ion doping can improve its biological activities. Different dopants have been studied, and research is ongoing to tailor synthetic HAp for different medical applications [24]. Zinc is an essential mineral that is also the second-most abundant trace mineral in the human body. Zinc deficiency can affect the central nervous, skeletal, and reproductive systems, as well as physical growth, and increase the risk of infection. Hydroxyapatite doped with zinc has been extensively studied in the last few years. Uysal et al. [24] published an extensive review of the different synthesis methods and sintering parameters for doped hydroxyapatite reported over the last 20 years [18,[25][26][27][28][29][30]. Negrila et al. [25] and Predoi et al. [26] used sol-gel methods to produce HAp doped with Zn2+. Predoi et al. [26] also analysed the influence of the stability of Zn-HAp solutions on antibacterial properties. They found that Zn content has a significant impact on solution stability and prevents bacterial colonisation. Begam et al. [27] found that Zn2+ doping changes the lattice parameters of HAp, increasing cell adhesion and growth.
Boron is an important trace element for plants, but it is not as important for animals. However, boron has been shown to play a role in osteoblastic activity [31]. Tunçay et al. [32] first attempted microwave-assisted biomimetic precipitation of B-HAp and showed that B-HAp accelerates cell attachment and differentiation and facilitates early mineralisation.
The present study is focused on the synthesis of composites prepared by mixing CSH with Zn-or B-doped hydroxyapatite and the investigation of their structural and morphological properties by X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, scanning electron microscopy (SEM), and energy dispersive X-ray (EDX) spectroscopy techniques. The results provided from this study could lead to the production of bone cement that could be used as complementary materials for filling bone defects and for their effective healing. To study the bioactive properties of the obtained composites, soaking tests in simulated body fluid (SBF) were performed. Moreover, the biological behaviour and cell proliferation of osteoblast-like MG-63 cells were investigated following incubation with nanostructured CSH-zinc/boron-doped hydroxyapatite composites.
After mixing, the pH of the solution was measured (Table 1) and adjusted to 10.5 with the addition of ammonium hydroxide (NH4OH). Afterward, the mixture was subjected to a hydrothermal treatment for 12 min at 120 °C, washed with distilled water, and dried at 60 °C for 24 h. To obtain hemihydrate gypsum powder, calcium sulphate dihydrate was subjected to hydrothermal treatment at 132 °C for 30 min and left overnight at 50 °C in the oven. The composites were obtained by mixing calcium sulphate with the previously obtained hydroxyapatites. The composition and notations of the obtained samples are presented in Table 2. Each composite was obtained by mixing undoped or doped hydroxyapatite with gypsum (1:1 weight ratio) and a 10% glycerol aqueous solution. The amount of solution used for each mixture was dosed judiciously to keep the same paste consistency.
The necessary volume of liquid was related to the particle size of the powders obtained; the doped hydroxyapatites showed smaller particles. The obtained mixture was poured into a cylindrical mould (ø = 10 mm, h = 10 mm) and characterised after hardening.
Samples Characterisation
The X-ray diffraction (XRD) technique was used to determine the degree of crystallinity, the crystallite size, and the phases present in the samples. The analysis was carried out using a PANalytical Empyrean diffractometer (Almelo, The Netherlands) at room temperature with a characteristic Cu X-ray tube (λCuKα1 = 1.541874 Å). The samples were scanned in a Bragg-Brentano geometry with a scan step increment of 0.02° and a counting time of 100 s/step. The XRD patterns were recorded in the 2θ angle range of 5-80°. Rietveld quantitative phase analysis was performed using the X'Pert HighScore Plus 3.0 software (PANalytical, Almelo, The Netherlands). After refining, values were obtained between 1.44% and 1.78% for goodness of fit, 6.65% and 7.08% for R-expected, and 6.67% and 7.59% for R-profile. The crystallite size was determined by the Debye-Scherrer equation (1):

s = kλ/(β cos θ) (1)

where s = crystallite size (nm), k = the Scherrer constant (0.98), λ = the wavelength (0.154 nm), β = the full width at half maximum (FWHM) in radians, and θ = the Bragg diffraction angle. Morphological aspects were studied via scanning electron microscopy (SEM) with a Quanta Inspect F50 microscope coupled with an energy dispersive spectrometer (EDS) and a Titan Themis 200 transmission electron microscope (TEM) with a line resolution of 90 pm in high-resolution transmission electron microscopy (HRTEM) mode (Thermo Fisher, Eindhoven, The Netherlands). The mechanical compression strength was determined using Shimadzu Autograph AGS-X 20 kN equipment (Shimadzu, Tokyo, Japan). Fourier transform infrared spectroscopy (FTIR) investigation was performed using a Nicolet iS50R spectrometer (Thermo Fisher, Waltham, MA, USA). The measurements were made at room temperature using the attenuated total reflectance module. Each sample was scanned 32 times between 4000 and 400 cm−1, at a resolution of 4 cm−1.
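The Scherrer estimate in Equation (1) is easy to reproduce numerically. A minimal sketch using the stated constants k = 0.98 and λ = 0.154 nm (the FWHM and 2θ values below are illustrative, not taken from the paper):

```python
import math

# Scherrer crystallite size: s = k * lambda / (beta * cos(theta)),
# where beta is the FWHM in radians and theta is half the 2-theta angle.
def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float,
                     k: float = 0.98, wavelength_nm: float = 0.154) -> float:
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle in radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative: a 0.45 deg FWHM for a reflection near 31.8 deg 2-theta
print(f"{scherrer_size_nm(0.45, 31.8):.1f} nm")  # ≈ 20.0 nm
```

A broader peak (larger β) yields a smaller crystallite size, which is consistent with the trend reported for the doped samples.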
In order to evaluate the stability in a wet environment [33], the cylinders obtained from each composite were placed in 20 mL of simulated body fluid (SBF) with pH = 7.4, prepared according to the Kokubo recipe [34]. The initial mass was recorded, and the samples were immersed in the SBF solution. After set immersion intervals, the samples were removed, dried, and weighed again. The mass loss was calculated using Equation (2): mass loss (%) = (wi − wt)/wi × 100, where wi = sample weight before immersion and wt = dried sample weight after t min of immersion in SBF. The mechanical compressive strength was determined by pressing the samples until the breaking point at a speed of 1 mm/min using Shimadzu Autograph AGS-X 20 kN equipment (Shimadzu, Tokyo, Japan). The test was performed in triplicate on samples hardened for 3, 7, and 28 days, according to the standard [35]. In addition to the initial composites (hardened in air), samples immersed for 72 h in SBF were also tested.
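Equation (2) reduces to a simple percentage weight loss; a minimal sketch with hypothetical weights:

```python
def mass_loss_percent(w_initial_g: float, w_after_g: float) -> float:
    """Mass loss (%) = (w_i - w_t) / w_i * 100, per Equation (2)."""
    return (w_initial_g - w_after_g) / w_initial_g * 100.0

# Hypothetical cylinder: 1.200 g before immersion, 0.756 g after drying
print(f"{mass_loss_percent(1.200, 0.756):.1f}%")  # -> 37.0%
```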
The hydroxyapatite nanoparticles were suspended in deionised water at a concentration of 0.012 g/mL by dispersing with an ultrasound probe and then sterilised by gamma radiation. MG-63 cells were seeded in 96-well plates at a concentration of 2000 cells/well and incubated under standard conditions to allow cell attachment. After 24 h, the culture medium was removed and replaced with culture medium with nanoparticles at different concentrations (0, 25, 50, 100, and 200 µg/mL, previously prepared by dispersing in complete culture medium with an ultrasound bath). The cells were then incubated in the presence of hydroxyapatite nanoparticles for 7 days under standard temperature and humidity conditions. Following incubation in the presence of nanoparticles, investigations related to cell morphology, viability, and cell differentiation were carried out. Investigations of cell viability and proliferation were performed using the MTT tetrazolium salts assay (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) [36]. Seven days after treatment, the nanoparticle solution was removed and replaced with 10% MTT (5 mg/mL) in cell culture medium. After 2 h incubation, the reacted formazan crystals were solubilised using DMSO, and the absorbance of the supernatant was measured at 570 nm.
Cell differentiation was tested using the Alizarin red assay [37]. Following incubation, the nanoparticle-containing cell culture medium was removed, and the cells were fixed using 4% paraformaldehyde in PBS for 1 h. The supernatant was then removed, and the cells were washed several times with PBS to remove any remaining nanoparticle residue. Following this step, 40 mM Alizarin red was added to each well and incubated for 45 min at room temperature. The supernatant was then removed, and the cells were thoroughly washed with deionised water. This method is based on the specific staining of the Ca deposits that result from osteoblast cell differentiation. The stained Ca deposits were then dissolved in 10% acetic acid, and the supernatant was collected and heated at 85 °C for 10 min. Following centrifugation at 20,000× g for 10 min, the supernatant was neutralised with 10% ammonium hydroxide. The absorbance was measured at 405 nm.
Experiments were performed in triplicate, and cell viability and differentiation ability, respectively, were calculated by relating the data obtained for each sample to the negative control samples (to which a value of 100% was assigned). Data were expressed as mean ± SEM (standard error of the mean), and statistical evaluation was performed using Student's t-test, with * p < 0.05, ** p < 0.01, and *** p < 0.001. Blank samples (nanoparticles without cells at the investigated concentrations) were included for all quantitative determinations, and their absorbance was subtracted from that of the cellular samples.
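The normalisation described above (blank subtraction, then scaling so that the negative control equals 100%) can be sketched as follows; the absorbance values are hypothetical:

```python
def viability_percent(sample_abs, blank_abs, control_abs):
    """Per-replicate viability (%) relative to the negative control.

    Blank absorbance (nanoparticles without cells) is subtracted from each
    cellular reading before normalising to the control mean (= 100%).
    """
    control_mean = sum(control_abs) / len(control_abs)
    return [(a - blank_abs) / control_mean * 100.0 for a in sample_abs]

def mean_sem(values):
    """Mean and standard error of the mean (SEM) of replicate values."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return mean, (var / n) ** 0.5

# Hypothetical MTT triplicate at one nanoparticle concentration
v = viability_percent(sample_abs=[0.62, 0.58, 0.60], blank_abs=0.05,
                      control_abs=[0.80, 0.78, 0.82])
m, sem = mean_sem(v)
print(f"{m:.1f} +/- {sem:.1f} %")  # -> 68.8 +/- 1.4 %
```

In practice the p-values reported in the study would come from a two-sample t-test on such replicate sets (e.g. via `scipy.stats.ttest_ind`); the sketch above covers only the normalisation step.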
HAp, HAp@Zn and HAp@B Powder Characterisation
The XRD profiles of the prepared HAp, HAp@B, and HAp@Zn powders are displayed in Figure 1. The XRD pattern of HAp shows a single crystalline phase identified through the characteristic diffraction maxima. Phase identification was performed using the PDF 04-008-4763 file. Miller indices corresponding to the positions of the crystallographic planes and directions could be associated with each important peak, such as (002), (121), (030), (222), and (123). The X-ray diffractogram indicates the presence of two high-intensity peaks located near 26° and 32°, which are in good agreement with the literature data for HAp [38,39].
Figure 1b shows the magnification of the area with the most intense peaks for the studied samples. The diffractograms for the HAp@Zn and HAp@B samples are similar to those for the HAp sample. However, the diffraction peaks are smaller in intensity, with the maximum being shifted and broader, which, according to Miyaji et al. [40], suggests that the crystallinity of apatite increases in the presence of a dopant.
In order to determine the degree of crystallinity, the average crystallite size, the structural microstrains, and the volume of the elementary cell, Rietveld analysis was performed. The results of the refinement are presented in Table 3. As can be seen, the addition of the dopant tends to increase the crystallinity of the powders. The addition of 2% Zn increases crystallinity by approximately 2.27%, while the addition of B causes an increase of approximately 5.61% compared to the undoped HAp sample. The crystallite size also decreases as a result of doping: in the case of B, a crystallite size of 10.69 ± 1.59 nm is recorded, and in the case of Zn doping, the value reaches 16.63 ± 1.83 nm, while undoped HAp has a crystallite size of 19.44 ± 3.13 nm.
Considering the atomic radii of the dopants (Zn = 135 pm and B = 85 pm), their addition to the crystalline network of hydroxyapatite changes the volume of the elementary cell and the microstrains of the structure. Since B has an atomic radius closer to that of P (100 pm), it is assumed to prefer the P positions in the network rather than the Ca (180 pm) positions. Therefore, a stronger strain caused by crystal defects present in the network is observed, as shown by the microstrain value of 0.86% compared to 0.47% for undoped HAp.
In order to complete the information from the XRD studies related to the structure of the samples, Raman spectroscopy investigations were performed. The obtained spectra are presented in Figure 2.
Following Raman spectroscopy, it is possible to observe the bands located at 431 and 445 cm−1, which can be attributed to the doubly degenerate bending vibration of the O-P-O bonds (ν2). The bands located at 578 and 590 cm−1 were attributed to the triply degenerate bending vibration of the O-P-O bonds (ν4). The intense band located at 961 cm−1 was attributed to the non-degenerate symmetric stretching of the P-O bond (ν1). Additionally, the bands located at 1046 and 1075 cm−1 were attributed to the triply degenerate antisymmetric stretching of the P-O bond (ν3) [41,42].
Analysing the band located at 961 cm−1, characteristic of the vibration of the PO4 tetrahedra, it is observed that it broadens with the addition of the dopant. The ratio between the intensities of the ν1 and ν2 vibration modes decreases from 3.59 for HAp to 2.98 for HAp@Zn and 2.48 for HAp@B, which indicates distortion and an increase in the degree of disorder of the crystalline network [43].
The shape and size of the particle aggregates were determined by SEM. The images obtained for HAp, HAp@Zn, and HAp@B are presented in Figure 3 at different magnifications. In the case of HAp, medium-sized particles are observed around the value of 67.11 nm, while the values decrease with the addition of dopants, reaching 54.67 nm for the sample doped with B and 52.64 nm for the sample doped with Zn. Also, due to the small size and the pseudo-acicular shape, the samples show an accentuated agglomeration tendency.
The presence of characteristic hydroxyapatite elements (Ca, P) and dopants was demonstrated by the EDS spectra and the elemental composition of the powder presented in Table 4. Boron was found in HAp particles in a proportion of 3.72 wt.% (Table 4), while Zn was found in a proportion of 0.33 wt.%. The presence of carbon can be attributed to sample preparation for SEM analysis. However, it can also be attributed to the possible carbonation of the HAp particles under the influence of the synthesis conditions and the CO 2 present in the atmosphere [25].
The TEM images obtained on the HAp samples highlight well-defined particles with a polyhedral, irregular shape, with sizes between 30-70 nm for the HAp (Figure 4a), 15-75 nm for the HAp@Zn sample, and 40-65 nm for the HAp@B sample (Figure 4d). Also, in the case of doping with Zn, the particles are presented in acicular hexagonal form (Figure 4g). The polycrystalline character of all samples is highlighted by SAED analysis (Figure 4c,f,i).
Performing the measurements on the HRTEM images (Figure 4b,e,h), it was found that the distances between the atomic chains vary between 3.5 Å for the HAp sample, 3.4 Å for HAp@Zn, and 3.3 Å for HAp@B.
The mapping of the main elements on the sample surface is presented in Figure 5. As shown in Figure 5, in addition to the characteristic elements of hydroxyapatite (Ca, P, and O), the presence of carbon could also be identified in the sample. This can be attributed to superficial carbonation of the sample, most likely during synthesis following interaction with atmospheric CO 2 . The elemental distribution of the dopants indicates that they are distributed uniformly among the powder particles.
Gypsum Analysis
Gypsum is a biomaterial that has been used as bone cement for many years [44]. It is a versatile material used in several medical applications due to its biocompatibility, biodegradability, and easy availability. Gypsum is a naturally occurring calcium sulphate mineral, widely used in construction and in medical applications such as making casts and moulds. It can also be easily modified to adjust its properties, which makes it an excellent candidate for bone cements that must meet specific mechanical and biological requirements. When used as bone cement, gypsum is mixed with water to create a paste that can be moulded to the shape of the bone; the paste then hardens to create a rigid bond, much like traditional cement. Gypsum can also be reinforced with other materials, such as carbon fibres, to increase its strength and durability. Figure 6 shows the X-ray diffractogram of the gypsum powder. The specific peaks of calcium sulphate hemihydrate are observed, with the most prominent peak at an angle of 11.7°, according to the PDF 04-015-7420 file. Each important peak also has Miller indices associated with it, corresponding to the plane position and crystallographic direction.
After carrying out the FTIR analysis, the spectrum indicates the structural composition of calcium sulphate. Thus, one can observe the vibration bands characteristic of the stretching of the O-H bond at 3200-3600 cm−1, but also between 1500-1700 cm−1, and the characteristic bands of the sulphate (S-O) bond between 850-1230 cm−1 and 400-800 cm−1 [45,46].

The EDS analysis performed on the plaster sample shows specific chemical elements such as oxygen (O) at 0.53 keV, sulphur (S) at 2.31 keV, and calcium (Ca) at 3.7 keV. The percentage composition of the elements is presented in Figure 7.
Composites Material Characterisation
After hardening, the composites, obtained by mixing the gypsum with the HAp-type particles and the aqueous solution, were matured for 28 days and characterised by FTIR. The results are presented in Figure 8.
In the FTIR spectra of the hardened composites (Figure 8), characteristic bands are also observed between 1370-1520 cm−1, and the PO4 3− group gives bands between 900-1220 cm−1 and 400-740 cm−1.

Scanning electron microscopy was performed on each composite sample, and the results are shown in Figure 9. The morphology of the composite samples is similar and is predominantly formed by the characteristic forms of calcium sulphate dihydrate. The samples are formed by crystals with sizes between 200-350 nm. As shown in Figure 9, hydroxyapatite does not significantly influence the behaviour of the samples; HAp particles are uniformly distributed on the surface of the gypsum crystals as a result of the mechanical homogenisation process. At low magnifications, one can see the porosity obtained by the intercalation of these polyhedral crystals.
To perform in vitro tests, the composites were moulded into cylindrical shapes and, after hardening, immersed in SBF solution. The weight loss variation of composite samples G+HAp, G+HAp@B, and G+HAp@Zn during 72 h of immersion in SBF is presented in Figure 10.
The rate of degradation of the composites in the SBF environment is accelerated in the first 15 h when a decrease in the weight of the samples of approximately 37% is recorded. After 15 h, a decrease in degradation rate is observed, with the sample reaching a maximum of approximately 47% after 72 h of contact with the liquid. This rate of disintegration of the resulting paste may coincide with the beginning of cell proliferation to form new bone tissue.
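The two-phase degradation profile can be summarised numerically, for example by linearly interpolating the weight-loss curve to estimate when a given loss is reached. The time/loss pairs below are hypothetical readings consistent with the reported ~37% at 15 h and ~47% at 72 h:

```python
def time_to_loss(times_h, losses_pct, target_pct):
    """Linearly interpolate the immersion time at which weight loss
    first reaches target_pct (assumes a monotonically increasing curve)."""
    for i in range(len(times_h) - 1):
        t0, t1 = times_h[i], times_h[i + 1]
        l0, l1 = losses_pct[i], losses_pct[i + 1]
        if l0 <= target_pct <= l1:
            return t0 + (target_pct - l0) / (l1 - l0) * (t1 - t0)
    raise ValueError("target loss not reached within the measured interval")

times = [0, 5, 10, 15, 24, 48, 72]    # immersion time, h (hypothetical)
losses = [0, 15, 28, 37, 41, 45, 47]  # weight loss, % (hypothetical)
print(f"{time_to_loss(times, losses, 30.0):.1f} h")  # -> 11.1 h
```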
It is also noted that G+HAp@Zn samples tend to disintegrate more slowly in the presence of SBF than the other types of studied composites, which showed similar behaviour. The visual aspects of the samples before and after immersion in SBF for 72 h are shown in Figure 11, and SEM images are presented in Figure 12.

Keeping the samples in SBF leads to a rearrangement of the microstructure. As can be seen in the SEM images, the gypsum crystals, initially formed following the hardening process, decrease in size and flatten due to the dissolution process. This process is followed by the mineralisation of the crystal surface through the deposition of apatite phases that lead to a more compact structure. This formation of new apatite phases is also confirmed by the FTIR analysis (Figure 13).
The bands detected at 566, 601, 962, 1039, and 1089 cm −1 belong to the phosphate phase [47,48]. The band corresponding to the hydroxyl group can be observed at 631 and 3550 cm −1 [49], and the band attributed to water molecules is observed around 3050-3550 cm −1 .
Also, specific bands for carbonate groups were detected at 872, 1421, and 1467 cm −1, and the band of calcium sulphate is present at a value of 3380 cm −1 [50][51][52].
Compared to the FTIR performed on the composites before immersion in SBF (see Figure 8), a decrease in the characteristic bands for calcium sulphate and water and an increase in absorption for the PO4 3− bands are observed. This indicates the formation of a new apatite phase.

Regarding the mechanical resistance of the samples at various hardening intervals (Table 5), it was found that the most resistant was the sample doped with Zn. The values for the samples immersed in SBF are 1.08 MPa for the G+HAp sample, 0.74 MPa for the G+HAp@B sample, and 1.93 MPa for the G+HAp@Zn sample. These values show that the samples gain strength for up to 28 days; after immersion in SBF, the resistances decrease by about 6%.
The biological evaluation of the gypsum/hydroxyapatite-modified nanoparticles was conducted for MG-63 osteoblast-like cells in terms of proliferation measurements and the ability of the cells to differentiate and form bone tissue following mineralisation. Osteoblast proliferation in the presence of nanovehicles is an important parameter to follow in terms of studying the stimulation abilities of osteoblasts. In this regard, the MTT tetrazolium-salt viability assay was performed following 7 days of incubation in the presence of nanoparticles. This method measures mitochondrial metabolism. The ability to metabolise the substance is proportional to cell viability. The analysis performed 7 days following cell seeding can also measure osteoblast cell proliferation in the presence of the nanoparticles because this incubation time is longer than one cell cycle.
The results are comparatively shown in Figure 14, depending on the type of HAp. In the case of (G)/HAp, the cell metabolic activity decreased proportionally with the increase in nanoparticle concentration (p < 0.05 for 200 µg/mL). Gypsum alone proved to have a biocompatible behaviour, as cell viability did not decrease below the 70% threshold at any of the investigated concentrations. With the addition of gypsum to the hydroxyapatite nanoparticles, cell viability was considerably improved compared to HAp alone; only at the highest concentration employed in the study (200 µg/mL) was a reduction in cell metabolism observed. For 200 µg/mL of G+HAp, a reduction in mitochondrial metabolism was observed compared to HAp; however, the difference between these two samples was not statistically significant (G+HAp vs. HAp, NS). Overall, all samples proved biocompatible at concentrations below 100 µg/mL. A statistically significant proliferation of osteoblast-like cells was noticed in the case of G+HAp at 50 µg/mL (p < 0.01 compared to the negative control).
The modification of HAp with B did not produce any significant changes in cell viability or proliferation compared to (G)/HAp. The cells' metabolic activity decreases with increasing nanoparticle concentration. HAp@B showed biocompatible behaviour at concentrations of up to 100 µg/mL, while 200 µg/mL induced a cytotoxic effect in osteoblasts following 7 days of incubation (p < 0.05 compared to the negative control).
In the case of HAp@Zn nanoparticles, the reduction in cells' metabolism was more pronounced at all of the investigated concentrations compared to HAp and HAp@B samples. This inhibitory effect is probably induced by the presence of Zn in the composition of the nanoparticles, which is a well-known effect determined by Zn 2+ ion dissolution [53,54].
The addition of gypsum showed an improved effect on osteoblast cell viability at 25 µg/mL (HAp@Zn vs. G+HAp@Zn, p < 0.01).
Differentiation of osteoblast cells involves the mineralisation of the extracellular matrix in order to form bone matrix, similar to the in vivo environment [55]. The Ca deposits were evidenced in the nanoparticle-treated osteoblasts using a specific reaction between alizarin red and the mineralised areas in the extracellular matrix of the cells (Figure 15). Using this test, the native ability of HAp alone to induce differentiation of osteoblast cells at 7 days of incubation was clearly evidenced. For all samples, blank samples were prepared at equivalent concentrations in order to remove the possible interferences induced by the presence of nanoparticles alone. Making a correlation with MTT assay data, when differentiation occurs in osteoblast cells, proliferation is usually inhibited, and a reduction in the cells' metabolism takes place [56,57]. Thus, the reduction in metabolic activity of MG-63 exposed to HAp nanoparticles is correlated with differentiation data, where a significant amount of Ca deposits was measured.
Although gypsum alone inhibited the differentiation of osteoblasts in terms of calcium deposition, this suppressing effect did not exceed 30% for all concentrations (p < 0.05). In consequence, this effect was also translated to cells exposed to gypsum-modified HAp, where a decrease in Ca deposit production was measured compared to control cells. However, this effect was statistically significant only for 200 µg/mL concentration.
The addition of B to the composition of HAp nanoparticles clearly improved the ability of osteoblast-like cells to differentiate and mineralise the extracellular matrix in vitro. The amount of Ca deposits was significantly higher compared to negative control samples at all concentrations. The high ability of HAp@B and G+HAp@B to induce mineralisation in MG-63 cell culture was correlated with their metabolism inhibition.

A similar effect of differentiation improvement was measured in the cases of HAp@Zn and G+HAp@Zn compared to the negative control. However, the addition of gypsum to the composition of HAp@Zn samples did not increase the amount of resulting calcium deposits as compared to HAp@Zn alone. Similarly, the differentiation of osteoblasts exposed to HAp@Zn and G+HAp@Zn was correlated with a reduction in the cells' metabolism compared to the negative control.

Conclusions

The study presents the results of the preparation and characterisation of the composites obtained by mixing gypsum with Zn- or B-doped hydroxyapatite nanoparticles for hard tissue restoration.
Conclusions
The study presents the results of the preparation and characterisation of the composite obtained by mixing gypsum with Zn-or B-doped hydroxyapatite nanoparticles for hard tissue restoration.
The experiments showed that the addition of the dopant led to better crystallisation. By adding 2% Zn, a 2.27% increase in crystallinity was observed, while the addition of B led to an increase of 5.61% compared to HAp. The crystallite size decreases as a result of doping. For B, a crystallite size of 10.69 ± 1.59 nm was measured, and in the case of Zn, the size value was 16.63 ± 1.83 nm, compared to HAp, where the crystallite size value was 19.44 ± 3.13 nm.
In the case of HAp, a mean particle size of about 67.11 nm is observed, while the values decrease with the addition of dopants, reaching 54.67 nm for the B-doped sample and 52.64 nm for the Zn-doped sample. Also, due to their small size and pseudo-acicular shape, the samples show a pronounced tendency to agglomerate.
After hardening, the composites, obtained by mixing the gypsum with the HAp-type particles and the aqueous solution, were matured for 28 days and characterised by FTIR.
The morphology of the composite samples is similar and is predominantly formed by the characteristic forms of calcium sulphate dihydrate. Samples are formed by crystals with sizes between 200-350 nm. HAp particles are uniformly distributed on the surface of the gypsum crystals due to the mechanical homogenisation process.
The degradation of the composites in the SBF environment is accelerated in the first 15 h, when a decrease in the weight of the samples by approximately 37% is recorded. Afterwards, the degradation rate decreases, and the weight loss reaches a maximum of approximately 47% after 72 h of contact with the liquid. This rate of disintegration of the resulting paste may coincide with the beginning of cell proliferation to form new bone tissue.
It is also noted that G+HAp@Zn samples tend to disintegrate more slowly in the presence of SBF compared to the other types of studied composites that showed similar behaviour. After the introduction of samples in SBF, these resistances decreased by about 6%.
The ability of HAp, HAp@B, and HAp@Zn nanoparticles, respectively, to mineralise MG-63 osteoblast-like cells was clearly highlighted at all concentrations involved in the study. This effect is correlated with the reduction in the cells' metabolism following the differentiation process, which normally takes place when the mineralisation of bone cells occurs.
Supervised Learning Applied to Graduation Forecast of Industrial Engineering Students
The article aims to develop a machine-learning algorithm that can predict students' graduation in the Industrial Engineering course at the Federal University of Amazonas based on their performance data. The methodology uses a data package of 364 students admitted between 2007 and 2019, considering characteristics that can directly or indirectly affect the graduation of each one: type of high school, number of semesters taken, grade-point average, lockouts, dropouts and course terminations. The data treatment considered the manual removal of several characteristics that did not add value to the output of the algorithm, resulting in a package composed of 2184 instances. Logistic regression, MLP and XGBoost models were then developed and compared to predict a binary output of graduation or non-graduation for each student, using 30% of the dataset for testing and 70% for training. It was thus possible to identify a relationship between the six attributes explored and to achieve, with the best model, 94.15% accuracy on the predictions.
Introduction
The development of technologies using machine learning has shown explosive growth in the processes of creating products or services currently delivered to the market. This area of research emerges as a branch of artificial intelligence and exists as a basic principle of technologies aimed at speech recognition on smartphones, forecasting prices for the stock exchange, recommending films on streaming platforms, identifying diseases from the recognition of ultrasound images, among other applications, so that it can be described as "the science (and art) of programming computers in such a way that they can learn from data" (Géron, 2017).
Among the machine learning methods currently available, there are supervised and unsupervised learning. The first will be the basis for this research because it involves learning from the labels (answers) of part of the data, so that the algorithm can predict new labels for the test data based on the relationships it found in the training data.

In addition, learning methods, in general terms, seek to find answers to certain types of problems: those in which the relationships between the input data generate a continuous response (such as car prices), those in which this relationship generates a discrete response (such as vehicle types), or those in which the answer is unknown, so that it is necessary for the algorithm to search for a pattern according to the characteristics delivered to it.
In this context, a problem common to several undergraduate courses at public universities is identified: the low rate of student graduation. Veenstra et al. (2009) emphasize that in the science and engineering fields the retention rate is even lower, and the reasons can be cognitive, such as GPA or high school grades in general; non-cognitive, such as family support, financial difficulties or health; or characteristics related to the color, gender, habits or expectations of each student.

Within the Industrial Engineering course at the Federal University of Amazonas (UFAM), where this research was developed, the statistics involved in this problem showed that, in a database of 364 students admitted between 2007 and 2019, the graduation rate up to the second semester of 2019 was 17.8% of the universe of possible graduates (considering a five-year course duration), in addition to an index of 12.4% of retired or dropout students (considering the universe of evaluated samples).
Thus, the research starts from the hypothesis that certain parameters related to the performance of each student throughout the course can describe their result (graduation or non-graduation). Therefore, it aims to identify this binary output through a classifying machine learning algorithm that will learn the relationship between these data by visualizing them and predicting the results for data not previously seen.
Literature Review
The engineering retention problem

The problem involved in this work has been extensively addressed in recent decades. For example, Lin et al. (2008) sought, through cognitive (GPA, grades in science or mathematics, among others) and non-cognitive (motivation, leadership, expectations, among others) characteristics, to estimate the retention of first-year engineering students using neural networks. That work achieved about a 78% probability of detecting student retention, but only 40% for non-retention, indicating that non-cognitive and cognitive characteristics could only partially describe the problem, and that it would be necessary to combine other variables that can also influence the persistence of these students, such as family, economic and health issues.
Similarly, using data from the University of Michigan, Veenstra et al. (2009) consider attributes similar to those of Lin et al. (2008), such as: High School Academic Achievement, Quantitative Skills, Study Habits, Commitment to Career and Educational Goals, Confidence in Quantitative Skills, Commitment to Enrolled College, Financial Needs, Family Support and Social Engagement. Starting from the hypothesis that there is a direct correlation between the retention of engineering students and their GPA, they identified, using logistic regression, that High School Academic Achievement, Quantitative Skills, Commitment to Career and Education Goals and Confidence in Quantitative Skills predicted student success (GPA).
More recently, also seeking to understand the issues involved in retaining engineering students, French et al. (2021) examine how both genders perceive the psychological cost involved in the course, given that students' perceptions may differ when the focus is on minorities, such as women in engineering. This work shows that there is such a perception on the part of this minority group, but that it is a group with a greater probability of graduation compared to male respondents. Santacroce (2018) also emphasizes these results, showing that women are more likely to graduate with an engineering degree if they remain in the course after two years, even though their self-confidence is negatively affected by the stereotypes and majority-male environment they face during the course (Jagacinski, 2013). Thus, gender is an attribute to be considered as having great influence on student results, as is skin color (Davis & Finelli, 2007; Green et al., 2019; Palmer et al., 2011; Reichert & Absher, 1997; Ye et al., 2021).
Besides that, a study carried out by Craig (2011) also emphasizes factors influencing retention and attrition of engineering students at historically black colleges and universities, such as: students working long hours brought on by insufficient financial aid; difficulty of the curriculum and poor teaching styles. In this study, the author also highlights possible strategies to solve these questions: enhanced advisement, tutorial, and mentoring activities.
Similarly, Fletcher and Anderson-Rowland (2000) also considered strategies related to mentoring and advisement to improve the performance of a group of sixteen engineering students and achieved remarkable results, including an increase of 79% in the cumulative GPA of these students, indicating feasible ways to solve the retention problem, which were also extensively investigated over the years (Chelberg & Bosman, 2019; Desai & Stefanek, 2017; Hartman et al., 2019; Lisberg & Woods, 2018; Shahhosseini et al., 2020; Stromei, 2000).
Additionally, Honken and Ralston (2013) also discuss potential challenges that might be affecting the low engineering retention rate and highlight the lack of preparation in math and science as the top reason for students transferring out of engineering, followed by financial challenges and lack of time to study as the main reasons for leaving university. This last reason is interesting since engineering courses differ from other majors mainly in their programmatic activities, where the curriculum requires engineering students not only to participate in educationally enriching activities, but also to gain marketable experiences (Lichtenstein et al., 2010), which strongly affects the daily time these students dedicate to study.
Furthermore, several other studies have sought to understand the motivations behind the retention of students in Science, Technology, Engineering, and Mathematics (STEM) courses, considering different attributes, especially the cognitive ones, as addressed in this work (Coletti et al., 2014; García-Ros et al., 2019; Hieb et al., 2015; Koenig et al., 2012; Wang et al., 2015).

Comparing this common problem across STEM majors, Almatrafi et al. (2017) and Godfrey et al. (2010) found that, even though engineering retention is an issue to be worked on for multiple reasons, such as difficulties in understanding academic concepts, self-esteem or time constraints, the persistence rates for this major are higher compared to the college of sciences, highlighting the importance of studies focused not only on engineering but also on the STEM fields in general.
Logistic regression and XGBoost
First, logistic regression can be described as a linear model adequate when we have dichotomous outcome variables (Lemon et al., 2003; Subasi & Ercelebi, 2005). Kurt et al. (2008) emphasize that this model is competent at predicting the presence (or absence) of a characteristic or outcome based on the values of predictor variables, a concept that allows us to understand the method as appropriate for finding out the relationship between the characteristics involved in this research.

Second, XGBoost is a supervised learning algorithm based on gradient-boosted decision trees (Dhaliwal et al., 2018). It has achieved state-of-the-art results in many machine learning competitions (Chen & Guestrin, 2016; Nielsen, 2016), especially those involving tabular data. Dhaliwal et al. (2018) and Chen and Guestrin (2016) also highlight that XGBoost is highly effective in reducing computing time by making optimal use of memory resources, which is one of its great benefits, since computational cost is still a challenge when dealing with neural network solutions. This is why the algorithm also has the potential to be an adequate solution to the graduation forecast problem presented.
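To make the logistic regression idea concrete, the sketch below fits a dichotomous outcome with batch gradient descent in pure Python. It is only an illustration, not the authors' implementation: the feature values (a hypothetical GPA and dropout count) and all function names are invented for the example.

```python
import math

def sigmoid(z):
    # Logistic function, computed in a numerically stable form.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_logistic(X, y, lr=0.05, epochs=5000):
    # Batch gradient descent on the log-loss; w[0] is the bias term.
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grads = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi  # derivative of the log-loss w.r.t. z
            grads[0] += err
            for j, xj in enumerate(xi):
                grads[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grads)]
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if sigmoid(z) >= 0.5 else 0

# Hypothetical rows: [grade-point average, dropouts]; label 1 = graduated.
X = [[8.5, 0], [7.9, 0], [6.8, 1], [4.1, 2], [3.9, 3], [5.0, 2]]
y = [1, 1, 1, 0, 0, 0]
w = train_logistic(X, y)
```

A new student profile is then classified with `predict(w, [gpa, dropouts])`, the fitted probability being thresholded at 0.5.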
Artificial Neural Networks -MLP
According to Akhgar et al. (2019) and Ngah and Bakar (2017), artificial neural networks, as the nomenclature suggests, are inspired by the human brain. This tool has been used as a prediction strategy in areas such as maintenance management (Susto et al., 2015), identification of defective parts (Wang et al., 2019), diagnosis of diseases (Jiang et al., 2020) and landslide prediction (Pham et al., 2017), among other applications. In simplified terms, this artificial simulation models the information transfer process through a standardized scheme composed of multiple processing units (Gehr et al., 2018). According to Tiwari and Khare (2015), these units are organized into layers that combine weights and inputs in order to identify the ideal outputs of the model.

The diagram below shows one of the earliest artificial neuron models, the Perceptron (Géron, 2017). It indicates the inputs of an artificial neuron (the characteristics of a problem), its synaptic weights (with which the inputs are combined in a linear transformation), the summation of these combinations, and an activation function responsible for performing a transformation that is commonly nonlinear.
Figure 1. Perceptron Artificial Neuron Model
This model follows the vector representation below:

$\hat{y} = f(\mathbf{w}^{\top}\mathbf{x})$

where $\mathbf{w} = [w_0, w_1, \ldots, w_n]$ and $\mathbf{x} = [1, x_1, \ldots, x_n]$, the constant first input pairing with the bias weight $w_0$. Thus, the model admits linear classification, being able to predict (binary) outputs that can be separated by means of a hyperplane.
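A minimal sketch of this model in pure Python, using the classic step activation and the perceptron learning rule on a linearly separable toy problem (the logical AND); the data and names are illustrative, not taken from the paper.

```python
def step(z):
    # Step activation: the Perceptron emits one of two classes.
    return 1 if z >= 0 else 0

def perceptron_output(w, x):
    # x = [1, x1, ..., xn]: the leading 1 pairs with the bias weight w0.
    return step(sum(wi * xi for wi, xi in zip(w, x)))

def train_perceptron(samples, labels, lr=1, epochs=20):
    # Perceptron rule: nudge each weight by the prediction error times its input.
    w = [0] * len(samples[0])
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            err = target - perceptron_output(w, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable toy data: the logical AND of two binary inputs.
samples = [[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
labels = [0, 0, 0, 1]
w = train_perceptron(samples, labels)
```

Because AND is separable by a hyperplane, the rule converges to weights that classify all four cases correctly; XOR, famously, would not converge.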
Activation Functions
Activation functions are used to perform linear or nonlinear transformations of the output. These functions are important in an artificial neural network because they limit the output of a neuron (Ngah et al., 2016), influencing the network's flexibility and, consequently, its efficiency. Géron (2017) shows that sigmoidal functions, such as the hyperbolic tangent and the logistic function, have long been used as activation functions due to their satisfactory behavior in each layer of the network.

However, with the advent of deep learning techniques, the ReLU (Rectified Linear Unit) activation function proved more cost-effective for the approximations (weight adjustments) performed during training, in addition to reducing computational cost (Lomuscio & Maganti, 2017). Currently, there are several variations of this function.
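The contrast between the two families can be seen directly: sigmoidal functions saturate for large inputs (their gradients vanish), while ReLU passes positive values through unchanged. A small stdlib-only illustration:

```python
import math

def logistic(z):
    # Sigmoidal activation: squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    # ReLU: identity for positive inputs, zero otherwise.
    return max(0.0, z)

print(logistic(10))   # ~0.99995: saturated, almost flat gradient
print(math.tanh(10))  # ~1.0: also saturated
print(relu(10.0))     # 10.0: unchanged, gradient stays 1
print(relu(-3.0))     # 0.0: negative inputs are zeroed out
```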
The architecture of a neural network can be divided into two modes of propagation: the Feedforward Neural Network (FNN) and the Recurrent Neural Network (RNN). For the first, according to Jiang et al. (2020), the network computes between the layers in a single, forward direction until the output, following the equation below:

$y = f(x; \theta)$

so that the network maps the inputs ($x$) to an output ($y$) through the set of parameters $\theta$. The second, according to Wanto et al. (2017), involves feedback in the network that allows the model to recalculate its synaptic weights. This characteristic defines a recurrent network as a network that has memory: as new information is fed in, the network draws on past information to characterize it more accurately. This model can be described by the equation below:

$y_t = f(x_t, y_{t-1}; \theta)$

where the mapping of the inputs ($x_t$) considers the immediately previous output ($y_{t-1}$) and the corresponding adjustment parameters $\theta$ for the calculation of the output ($y_t$). This means we are dealing with a highly flexible network, depending on the number of layers and their depth, which can be a challenge when working with small datasets.
Research Goal
The article aims to develop a machine-learning algorithm that can predict the graduation of students in the Industrial Engineering course at the Federal University of Amazonas based on their performance data.
Research Strategy
The data collection considered the performance report of students in the Industrial Engineering course available on the e-Campus portal.
Eligibility Criteria
The target audience of the research consists of students from the Industrial Engineering Course at the Federal University of Amazonas, whose data were evaluated.
Sample and Data Collection
The data collection and analysis methodology considered the performance report of students in the Industrial Engineering course available on the e-Campus portal. This report was generated for the enrollment period between 2007 and 2019 and consists of information from 364 students. The extracted characteristics were then evaluated based on their possible influence on the characteristic of interest in this work: the student's graduation.

The analysis methodology, as well as the supervised learning models, were based on the development of an algorithm in the Python programming language, version 3.8.5, on Google Colaboratory (code available in the GitHub repository SL-Algorithms), using three different methods: logistic regression, a shallow MLP, and the decision-tree-based XGBoost, with the aim of comparing them and choosing the best one. The construction of the prediction models followed the flow shown in Figure 2. First, the input attributes file was converted to the appropriate reading format, a .csv (comma-separated) file. After that, the complete data were plotted in graphs describing each of the attributes separately, together with the attribute-attribute and attribute-output correlations, as previously described. The data were then randomly divided into two parts, training and testing, with 30% of the data destined for the testing stage. The test set has the objective of reproducing an on-campus application and cannot be used during the training stage, to avoid any bias in the predictions.
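The 70/30 split described above can be sketched as follows. This is a stdlib-only illustration with invented placeholder rows, not the notebook's actual code (which would typically rely on a library routine such as scikit-learn's `train_test_split`).

```python
import random

def train_test_split(rows, labels, test_ratio=0.3, seed=42):
    # Shuffle indices once, then slice: the held-out test rows must not
    # be seen during training, so no row appears in both partitions.
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(rows) * test_ratio)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return ([rows[i] for i in train_idx], [labels[i] for i in train_idx],
            [rows[i] for i in test_idx], [labels[i] for i in test_idx])

# Placeholder dataset with the paper's size of 2184 instances.
rows = [[i] for i in range(2184)]
labels = [i % 2 for i in range(2184)]
X_train, y_train, X_test, y_test = train_test_split(rows, labels)
```

Fixing the shuffle seed makes the partition reproducible across runs, which matters when several models are compared on the same test set.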
Thus, before submitting the data to the models, it was necessary to assess their balance, considering that the number of graduated students is lower than the number of non-graduated students, and this situation can lead to biased learning and impact the models' performance on the test set. Figure 3 illustrates this situation.
Figure 3. Unbalanced Data
As can be seen, only 9.4% of the data represent graduated students (class 1), which makes it difficult for the model to learn the characteristics that lead to this class. Thus, it was necessary to apply a resampling step to the training set in order to artificially increase the amount of data whose output is equal to 1 (graduated students). This process can be seen in Figure 4, which indicates a balance between the two outputs.
Figure 4. Balanced Data
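One simple way to realize this resampling is random oversampling of the minority class, duplicating class-1 rows until the two classes match in size. The sketch below is an illustration under that assumption (the paper does not name the exact resampling routine), with invented toy data mirroring the ~9.4% imbalance.

```python
import random

def oversample_minority(X, y, seed=0):
    # Duplicate minority-class rows (sampled with replacement) until both
    # classes contribute the same number of training instances.
    rng = random.Random(seed)
    min_rows = [x for x, t in zip(X, y) if t == 1]
    maj_rows = [x for x, t in zip(X, y) if t == 0]
    extra = [rng.choice(min_rows) for _ in range(len(maj_rows) - len(min_rows))]
    X_bal = maj_rows + min_rows + extra
    y_bal = [0] * len(maj_rows) + [1] * (len(min_rows) + len(extra))
    return X_bal, y_bal

# Toy imbalance: 9 graduated (class 1) out of 100, roughly the paper's 9.4%.
X = [[i] for i in range(100)]
y = [1 if i < 9 else 0 for i in range(100)]
X_bal, y_bal = oversample_minority(X, y)
```

Only the training partition is resampled; duplicating rows before the split would leak copies of training examples into the test set.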
With the data balanced and the initial parameters defined, the models were trained and the predictions along with the test data were performed for the Logistic Regression, MLP and XGBoost models.
Analyzing of Data
The analysis of the input characteristics considered the following attributes: grade-point average, semesters taken, course terminations, type of high school, lockouts and dropouts, as shown in Figure 5 and Figure 6.
Figure 5. Input Characteristics
The attributes related to the grade-point average and the semesters taken were plotted as histograms. In the first case, it is possible to notice a shifted distribution, since the passing average considered at the university is 5.0. In the second, two distinct peaks can be noted, the first at point 5 and the second at point 10, indicating that the data do not show uniform numbers of students per admission period.
Figure 6. Input Characteristics
In contrast, the attributes that are divided into classes are presented in the graphs above. First, the course terminations show a high rate of retired students, an even greater number than graduates (as will be seen later), whether in the current course or not. Beside it, the graph was divided into only two classes, private schools (type 1) and public schools (type 2), with students from type-two schools being more numerous.

Then, the graph relating the number of students to the number of enrollment lockouts indicates a low index, in contrast with the adjacent graph, which lists dropouts from the course and shows a high index. It is important to highlight that enrollment lockouts and course dropouts are actions that can be performed more than once by the same student, since there is a period for canceling the action in the university system. With this, the correlation between the attributes and the data output could be assessed, as can be seen in Figure 7.
Figure 7. Correlation Map
The attribute-attribute and attribute-output correlation map makes it possible to identify the influence between all characteristics and their outputs, even those represented by classes. In the map above, correlation levels close to zero can be identified at most points; however, positive and negative correlations, which indicate an influence between the data, can also be seen. For example, the relationship between the grade-point average and the number of course terminations is inversely proportional, while the relationship between the grade-point average and the output appears to be directly proportional.
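Each cell of such a map is a Pearson correlation coefficient between two columns. A stdlib-only sketch, with invented toy values chosen so that GPA and graduation correlate positively (these are not the paper's data):

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy columns: grade-point average vs. graduation flag (1 = graduated).
gpa =       [8.2, 7.5, 6.9, 5.1, 4.0, 3.5]
graduated = [1,   1,   1,   0,   0,   0]
print(round(pearson(gpa, graduated), 2))  # ~0.94: strong direct proportion
```

An inversely proportional pair, such as terminations versus GPA, would yield a coefficient near -1 instead.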
Findings / Results
First, the training set was analyzed using logistic regression, as this is a categorical problem, and this analysis yielded an adequate accuracy value given the limited amount of data. Then, the data were trained on a shallow MLP with only two intermediate layers, containing 6 and 5 neurons respectively, using ReLU as the activation function and Adam as the optimizer. In this case, the model performed similarly to the logistic regression but presented a very bumpy error surface (many local minima). Finally, the data were processed using XGBoost, which achieved an accuracy of around 94.15%, higher than the previous models. Therefore, considering XGBoost the best estimator, its confusion matrix can be plotted to analyze the behavior of this classifier on the test set, which simulates the on-campus operation of the proposed model.
Figure 8. Confusion Matrix for XGBoost
As can be seen, the matrix above relates the model's errors and hits (y_pred) to the test data (y_test). On the one hand, the main diagonal indicates the model's correct classifications, 103 true negatives and 90 true positives; on the other hand, the secondary diagonal shows the incorrect classifications performed by the model, 6 false positives and 6 false negatives. With that, some metrics could be calculated, as can be seen below, indicating the performance of the model on the test set.
The precision, the proportion of examples attributed to the positive class that were correctly classified, reached a good percentage of 93.75%, close to the accuracy achieved.

The recall, the true-positive rate relating the proportion of examples of the positive class correctly classified, was also calculated and obtained 93.75%, in agreement with the precision, since the numbers of false negatives and false positives are equal.

The f-measure relates the two previous metrics through a weighted harmonic mean, where 'm' is the weighting factor responsible for the importance given to each of the metrics involved in the calculation. The value of 'm' used was 1.0, indicating that recall and precision have the same importance. Thus, the metric obtained, as expected, 93.75%, close to 100%, which is the ideal value of the f-measure.

The model's error rate, the complement of the accuracy, was around 5.85%. The specificity, indicating the true-negative rate, was one of the best results among all the calculated metrics, around 94.50%, showing how the class with real values, in contrast to the artificially created values of class 1, represents a gain to the model even though the data were balanced.

Finally, the false-negative and false-positive rates were calculated so that the classification differences could be viewed more closely: the first was around 6.25%, compared to 5.50% for the second, a good performance considering the set size and the number of attributes used.
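All of the figures above follow directly from the four confusion-matrix counts reported for XGBoost; the snippet below recomputes them as a consistency check.

```python
# Confusion-matrix counts reported for XGBoost on the test set.
TP, TN, FP, FN = 90, 103, 6, 6
total = TP + TN + FP + FN  # 205 test instances

accuracy    = (TP + TN) / total              # 0.9415 -> 94.15%
error_rate  = (FP + FN) / total              # 0.0585 ->  5.85%
precision   = TP / (TP + FP)                 # 0.9375 -> 93.75%
recall      = TP / (TP + FN)                 # 0.9375 -> 93.75%
f_measure   = 2 * precision * recall / (precision + recall)  # 93.75%
specificity = TN / (TN + FP)                 # 0.9450 -> 94.50%
fnr         = FN / (FN + TP)                 # 0.0625 ->  6.25%
fpr         = FP / (FP + TN)                 # 0.0550 ->  5.50%
```

Precision, recall and the f-measure coincide here precisely because the false-positive and false-negative counts are both 6.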
Discussion
The comparative analysis between the supervised learning methods showed, first through accuracy, that logistic regression proved effective in the given classification task, although it was not the model with the highest performance. This result is interesting given that the method has a simple, linear formulation, used both in earlier research, such as Veenstra et al. (2009), and in more recent work, such as French et al. (2021), confirming its importance as a competitive estimator. In addition, it presented a better result than the more flexible, fully connected neural network, indicating that the training data can, to some extent, be separated by a straight line.

Furthermore, the results presented by the MLP network showed that, even though it has greater flexibility and can learn through adjustments of the network's synaptic weights, this learning can be impacted by the amount of data available: the error surface can present numerous local minima, preventing an adequate search for the solution with the highest performance on the test set.
Finally, we highlight that the performance of the decision tree-based algorithm, XGBoost, could confirm the statement made by Chen and Guestrin (2016) and Nielsen (2016) about its ability to generate state-of-the-art results in various machine learning problems, especially those involving tabular data, such as the one presented in this work. As seen, the algorithm was placed as an intermediate solution from the point of view of flexibility, that is, it is not as rigid as logistic regression, nor as flexible as the neural network, generating the most adequate solution for the binary classification problem we have.
In this way, the supervised learning techniques could identify the existence of a relationship between the six characteristics used and the data output: grade-point average, semesters taken, course terminations, high school type, lockouts and dropouts. This allows us to state that these attributes have a strong influence on student results and are able to describe the presented problem quite adequately, highlighting the potential of the estimator in comparison with several works that seek to predict, in some way, the graduation or retention of students until the end of their courses, such as the surveys mentioned previously: Lin et al. (2008), Veenstra et al. (2009) and French et al. (2021).

Furthermore, the existence of a relationship between these attributes makes it possible to discuss the ideal characteristics to invest in with strategies to increase the students' graduation rate, within a small universe of possibilities (a total of six).

In other words, we can focus on the characteristics that could be improved by the university, such as: grade-point average, by investing in different methodologies and surveying students and teachers about what could be improved or maintained; type of high school, by investing in extra courses to remedy basic gaps in math or science subjects; and course terminations, lockouts and dropouts, by implementing scholarships and assistance programs and upgrading laboratories and common areas to enable greater dedication of academic time as well as the development of practical skills inside the university.

Finally, we highlight that the semesters-taken variable is a fixed attribute used to indicate to the learning algorithm that the engineering course has a mean completion time. So, for now, the main strategies can be applied to five of the six attributes used in this research, and the implementation order can be decided by considering the correlation between each of them and the output of interest (graduation or non-graduation).
Conclusion
The development of a machine-learning model through supervised learning algorithms is part of the range of predictive methodologies created under the aspect of artificial intelligence. This possibility is based primarily on the development of computer systems that are capable of storing a large amount of data.
This application started from the hypothesis that several factors are responsible for the graduation (or not) of students of the Industrial Engineering course at the Federal University of Amazonas, and as could be seen, the selected attributes, taken together, did influence the study output.
Beyond the methodology used to create a supervised learning algorithm that learns from this dataset, the work presented the academic community with a new way of applying information already available in the university's own system: using these data not only for statistical studies but for studies aligned with Industry 4.0, such as artificial intelligence, bringing the novelty of a rising area that the Industrial Engineer, given the multidisciplinarity intrinsic to the profession, needs to explore further.
Thus, the way the machine learning area has developed in recent years indicates that implementing a learning model is not limited to engineering or computer science; it can currently be applied to various problems, including those common to education, as seen in previous research where the authors explored the reasons behind engineering student retention using different techniques yet reaching similar results, which implies we already have ways to understand this problem and propose effective solutions.
Accordingly, the good performance of the developed algorithm, as discussed in the previous section, indicates the value of using these methods within the university itself as a base methodology for implementing improvements in the Industrial Engineering department, or in any other department that can use this algorithm for performance analysis.
According to what was discussed, one of the main characteristics related to the model output is the grade-point average, whose influence is proportional to the output. The study of the teaching methodologies used in each discipline is therefore important for individual student performance, and this performance is in turn an important response to psychological aspects such as student interest, excitement about the course, encouragement from teachers and self-esteem within the department. As a result, the department's investment in methods that can reduce dropouts, lockouts and course terminations is essential to student success, since improvement in the grade-point average becomes a consequence of these somewhat correlated factors.
In addition, as seen in the present work, the model will improve as the number of examples in both classes (0 and 1) grows, since more examples improve learning and a greater number of students in class 1 would naturally balance the data.
With that, the Industrial Engineering department will also be able to use the algorithm developed in this work, enlarging the database to improve the model's predictions and then checking the impacts on the department of newly implemented strategies.
Recommendations
Although this research carried out an analysis of the performance of students from the Industrial Engineering Course during the years 2007 to 2019, it is recommended that the same analysis be carried out for the entire period of activity of the course, from its institutional creation to the present day. In addition, a similar study can be carried out within the Faculty of Technology and the courses it offers to the community, enabling analysis by course and the development of an institutional strategic plan.
Also, it would be interesting to collect the final grades of the department's students in each of the subjects offered, so that the model output could test the hypothesis that each subject has a direct influence on student success or failure in the course, pointing the department towards improvements specific to each subject.
Limitations
As limitations, this study analyzed only students from the Industrial Engineering course at the Faculty of Technology, among the other courses in the academic unit, and only from 2007 onwards rather than since the course's creation in 2004. There were also limitations related to data treatment, since the data were not in a format suitable for the algorithm; interested teachers or researchers should therefore first set up a project, with a dedicated team, for the analysis and organization of these data before use.
Developing a powerful In Silico tool for the discovery of novel caspase-3 substrates: a preliminary screening of the human proteome
Background: Caspases are a family of cysteinyl proteases that regulate apoptosis and other biological processes. Caspase-3 is considered the central executioner member of this family with a wide range of substrates. Identification of caspase-3 cellular targets is crucial to gain further insights into the cellular mechanisms that have been implicated in various diseases including cancer, neurodegenerative, and immunodeficiency diseases. To date, over 200 caspase-3 substrates have been identified experimentally. However, many are still awaiting discovery.

Results: Here, we describe a powerful bioinformatics tool that can predict the presence of caspase-3 cleavage sites in a given protein sequence using a Position-Specific Scoring Matrix (PSSM) approach. The present tool, which we call CAT3, was built using 227 confirmed caspase-3 substrates that were carefully extracted from the literature. Assessing prediction accuracy using 10-fold cross validation, our method shows an AUC (area under the ROC curve) of 0.94, sensitivity of 88.83%, and specificity of 89.50%. The ability of CAT3 to predict the precise cleavage site was demonstrated in comparison to existing state-of-the-art tools. In contrast to other tools, which were trained on cleavage sites of various caspases as well as other similar proteases, CAT3 showed a significant decrease in the false positive rate. This cost-effective and powerful feature makes CAT3 an ideal tool for high-throughput screening to identify novel caspase-3 substrates. The developed tool, CAT3, was used to screen 13,066 human proteins with assigned gene ontology terms. The analyses revealed the presence of many potential caspase-3 substrates that are not yet described. The majority of these proteins are involved in signal transduction, regulation of cell adhesion, cytoskeleton organization, integrity of the nucleus, and development of nerve cells.
Conclusions: CAT3 is a powerful tool that is a clear improvement over existing similar tools, especially in reducing the false positive rate. Human proteome screening using CAT3 indicates the presence of a large number of possible caspase-3 substrates, exceeding the anticipated figure. In addition to their involvement in various expected functions such as cytoskeleton organization, nuclear integrity and adhesion, a large number of the predicted substrates are remarkably associated with the development of nerve tissues.
Background
Caspases are a family of intracellular cysteinyl aspartate-specific proteases that are highly conserved in multicellular organisms and are key regulators of apoptosis initiation and execution. At least 14 members of the caspase family have been identified in mammals and they are grouped into two major sub-families, namely inflammatory caspases and apoptotic caspases. Apoptosis-associated caspases can be further classified into two groups: initiator caspases, including caspase-2, -8, and -9, which act upstream of apoptosis signalling pathways; and executioner (effector) caspases-3, -6, and -7 [1][2][3].
Initiator caspases-8 and -9 are activated through an auto-cleavage process that is mediated by large adaptor-caspase complexes known, respectively, as the death-inducing signalling complex (DISC) and the apoptosome. These complexes are usually formed in response to an intrinsic or extrinsic cell death stimulus [4]. The main targets of the activated initiator caspases are the executioner procaspases. It is interesting to note that substrates of initiator caspases are limited to their own precursors, executioner procaspases-3, -6, and -7, and a few more proteins [5]. On the other hand, executioner caspases target a large number of cellular proteins to control the dismantling process of the cell [6]. In addition to their essential role in apoptosis, recently accumulated evidence demonstrates various non-apoptotic functions of executioner caspases including regulation of the immune response, cell proliferation, differentiation and motility [7,8].
Caspases are characterized by high substrate selectivity. They recognize a specific sequence signal in their target proteins. Resolving substrate specificity for caspases was initially investigated using a combinatorial approach with positional scanning of synthetic tetrapeptidyl-aminomethyl coumarin derivatives. The results of this approach determined the absolute requirements for aspartic acid at position P1 [9,10]. In addition, P2 to P4 positions demonstrate high preference for certain amino acids. Based on positional scanning of synthetic tetrapeptides, the preferred recognition sequences for caspases -1, -4, and -5 were determined to be (W/L)EHD, whereas caspases -3, and -7 recognize the sequence DEVD, while caspases-8, -6, -9, and -10 recognize the sequence (D/L)E(H/T)D.
It is important to emphasize that the in vitro caspase substrate specificity, determined by the synthetic tetrapeptide method, is not absolutely representative of the cleavage conditions in vivo. The cleavage specificities of caspases in vivo are influenced by sequence-dependent conformational features, flanking the cleavage tetrapeptide motif, which can control the molecular electrostatic potential and the steric accessibility of the enzyme to its target protein. For example, and in spite of their identical preference for the DEVD tetrapeptide cleavage motif, caspase-3 and 7 show a clear differential preference for various natural substrates [11,12]. Demon et al [13] demonstrated that in addition to the tetrapeptide cleavage core, DEVD (P4-P1), several amino acid positions located outside this core such as P6, P5, P2' and P3' are critical in the discrimination of caspase-7 and caspase-3 for their specific substrates.
Shen et al [14] reported another interesting example which proves that the relatively similar tetrapeptide cleavage motif of caspase-1 and caspase-9, which are functionally distinct, does not imply a similar recognition preference for their natural substrates. Via a thorough statistical analysis of a window size of P10-P10' for a collection of caspase-1 and caspase-9 natural substrates, Shen et al have determined the significance of various amino acids and/or certain physiochemical properties at certain positions outside the canonical tetrapeptide motif [14].
Among executioner caspases, caspase-3 is considered the major enzyme with a wide array of cellular substrates. While immunodepletion of caspase-3 abolishes the majority of proteolytic events observed during apoptosis, immunodepletion of other executioner caspases shows a minimal impact on apoptosis markers and its proteolytic cleavage outcomes [11]. In the last decade, extensive research on caspases led to the identification of more than 200 caspase-3 substrates and the list is still growing. With the increasing number of proteins that have been discovered, thanks to the sequencing of the human genome and the genomes of many other organisms, there is a need for efficient methods that can help in discovering new caspase-3 substrates. The identification of new cellular substrates for caspase-3 would lead to further insights into the cellular mechanisms that regulate apoptosis, proliferation, and other biological processes.
Bioinformatics tools would allow high-throughput analyses of proteomic data in order to screen for putative caspase-3 substrates. In addition, such tools can provide researchers with an accurate map of the potential cleavage site(s) for a given sequence of interest. In the last few years, several computer-based tools were developed with the aim of predicting caspase substrates. Prediction of Endopeptidase Substrates (PEPS) [15] was among the initial tools and it was developed in order to predict putative caspase-3, cathepsin B and cathepsin L cleavage sites using cleavage site scoring matrices (CSSM). PeptideCutter [16] is another tool that was designed with the objective of predicting cleavage sites for a wide range of proteases including various caspases. GraBCas [17] is a tool that uses a position specific scoring matrix for caspases 1-9 and granzyme B, based on substrate specificities that were determined by positional scanning of synthetic peptides. CaSPredictor [18] was developed based on the assumption that sequences rich in the amino acids Ser (S), Thr (T), Pro (P), Glu or Asp (D/E) (collectively called PEST) are favoured caspase cleavage sites. CaSPredictor was built based on 137 experimentally verified natural substrates for caspase-1, -2, -3, -6, -7, -8, -9, and -10.
In addition to the aforementioned scoring-matrix-based approaches, several groups recently reported the development of tools built mainly using the support vector machine (SVM) technique. Wee et al. [19] described an SVM-based approach (called CASVM) using 195 substrates for different caspases from various organisms. Cascleave [20] is an interesting tool that was recently developed utilizing primary sequence, as well as secondary structure features, of the cleaved sites based on the SVM approach. Cascleave was built using a dataset of 370 substrates of the different caspases. Piippo et al [21] described another tool (termed Pripper) using three different pattern recognition classifiers, namely SVM, a decision-tree-based method known as J48, and the Random Forest classifier. The three classifiers were trained on 443 different caspase cleavage sites. Li et al [22] proposed a hybrid SVM-PSSM method based on an extended dataset. Unfortunately, some of these tools are not available for testing and comparison purposes.
Despite the substantial efforts to develop in silico systems to predict sites of caspase cleavage, the accuracy of such tools is still a challenging issue. The major drawback of the early tools is the use of training datasets that represent synthetic peptides or limited natural substrates for various proteases including caspases. On the other hand, the recently developed SVM-based tools were built using a mixture of heterogeneous data that represent cleavage sites of the different caspases including non-canonical sites as well as some unverified caspase substrates. In general the SVM-based tools such as Cascleave achieve good levels of sensitivity, yet they suffer from high rates of false positive results. It is generally expected that training a prediction tool on data representing distinctive patterns can lead to overgeneralization and hence a high rate of false positive results. It is important to recall that although different caspases share the primary sequence-requirement to cleave at the carboxyl terminal of aspartate residues in their protein targets, each one of these proteases recognizes a unique context surrounding the cleavage position. Even the caspases that appear to have identical tetrapeptide cleavage specificities such as caspase-3 and -7 are actually distinct in terms of the amino acids preferences outside the tetrapeptide core sequence [13]. Based on this assumption, we decided to develop a prediction tool focusing on data that represent substrates of a single caspase. Caspase-3 was selected for this objective as it represents the major executioner caspase with a considerable number of substrates.
In this work, we present a novel tool designated Caspase Analysis Tool 3 (CAT3), which was developed based on an extensive and highly curated dataset of caspase-3 substrates. CAT3 showed an obvious improvement in the overall prediction accuracy as well as a marked reduction of the false positive rate. Using CAT3, a high-throughput screening was performed on a large set of human proteins with assigned Gene Ontology (GO) annotations. The screening results reveal the existence of a large number of potential caspase-3 substrates.
Caspase-3 substrates
The PubMed literature database [23] was used to search for papers that describe human, mouse, and rat caspase-3 substrates. Each paper was critically analyzed to determine the experimentally demonstrated cleavage position in the relevant protein. The confirmed caspase-3 substrates comprised 227 proteins with a total of 267 cleavage sites. The amino acid sequences of the proteins were then obtained from the Universal Protein Resource Knowledgebase (UniProtKB) [24]. Of the 267 cleavage sites, 17 sequences were randomly set aside to be used later to compare the performance of CAT3 against existing similar tools; the remaining 250 sequences, which we refer to as the positive (+) peptide data, were used for training and validation of CAT3.
Definition of study controls
The following datasets were established as controls in this study.

The negative uncleaved peptides: this dataset consists of all the peptides that contain aspartic acid residues and are presumed to be uncleaved. This control group was established based on the assumption that any D residue in a caspase-3 substrate, apart from the mapped cleavage site(s), is most likely uncleaved. After excluding the positive peptides present in the training data, the remaining 8968 D residues were used to create this negative (-) control.

Amino acids natural frequency: this control represents the frequency of each of the 20 amino acids in a group of 20,224 human proteins that were available as reviewed proteins in the UniProtKB [24] as of March 2011.

Amino acid R-groups frequency: this control represents the frequencies of the different R-groups of amino acids: acidic (D and E), basic (H, K and R), polar (N, Q, S, T and Y), and non-polar (A, L, P, M, G, V, I, F, W and C). The frequencies were calculated based on the above-mentioned 20,224 reviewed proteins.
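Assembling the negative control amounts to collecting, for each positive substrate, every D residue that is not a mapped cleavage site. A minimal sketch, with an invented sequence and cleavage position:

```python
def uncleaved_d_positions(sequence, cleavage_sites):
    """Indices of aspartate (D) residues that are NOT mapped cleavage
    sites; peptides centred on these form the negative (uncleaved) control."""
    sites = set(cleavage_sites)
    return [i for i, aa in enumerate(sequence) if aa == "D" and i not in sites]

# Invented substrate with D residues at indices 3, 9, 12 and 15; only
# index 12 (the P1 of the DEVD motif) is a mapped cleavage site:
negatives = uncleaved_d_positions("MKTDLAVKADEVDQSDW", [12])
print(negatives)  # → [3, 9, 15]
```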
Physiochemical characteristics flanking the cleavage site
The positive peptide sequences were aligned in reference to the cleaved aspartic acid residues. The resulting multiple sequence alignment was divided into three regions: the central tetrapeptide cleavage motif (P4P3P2P1), the N-terminal region preceding the motif and the C-terminal region following the motif that was designated "before-motif" and "after-motif", respectively.
The analyses for the regions flanking the motif were made serially: 50, 30, 20, 10, and 5 amino acids before and after the motif (Figure 1). The analysis included: the frequencies of amino acids represented by their R-groups (acidic, basic, polar and non-polar), the frequencies of hydrophobic and hydrophilic amino acids and, finally, the frequencies of each single amino acid. For the tetrapeptide motif, the different frequencies were calculated separately for each position: P4, P3, P2 and P1. In contrast, the frequencies within 50, 30, 20, 10, and 5 amino acids before and after the motif were calculated collectively for each region.
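The per-position frequency analysis for the tetrapeptide motif can be illustrated as follows; the aligned motifs are hypothetical, and the hydrophobic grouping simply reuses the non-polar R-group from the controls section:

```python
# Non-polar R-group (A, L, P, M, G, V, I, F, W, C), treated as "hydrophobic".
HYDROPHOBIC = set("ALPMGVIFWC")

def hydrophobic_fraction_per_position(motifs):
    """For aligned P4P3P2P1 motifs, return the fraction of hydrophobic
    residues at each of the four positions (P4 first, P1 last)."""
    n = len(motifs)
    return [sum(m[i] in HYDROPHOBIC for m in motifs) / n for i in range(4)]

# Hypothetical aligned cleavage motifs:
freqs = hydrophobic_fraction_per_position(["DEVD", "DMQD", "DSVD", "DEPD"])
print(freqs)  # → [0.0, 0.25, 0.75, 0.0]
```

The same loop, with a different residue set, yields the acidic, basic and polar breakdowns.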
Establishment of scoring matrices
The peptides that fulfil the length criterion P9-P5', i.e. having 8 amino acids before and 5 amino acids after the aspartate residue of interest, were used to build the scoring matrices. Both the positive and the negative peptide data sets were used. The first step was to generate position-specific frequency matrices from the multiple sequence alignments of the relevant set of peptides. Each matrix consists of 14 rows, representing positions P9 P8 P7 P6 P5 P4 P3 P2 P1 P1' P2' P3' P4' P5', where a D amino acid is at position P1, and 20 columns, one per amino acid, holding the observed frequency of each amino acid at each position.
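Constructing such a position-specific frequency matrix is straightforward; the sketch below uses two invented 14-mer peptides with D fixed at P1 (index 8):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def frequency_matrix(peptides):
    """Build a position-specific frequency matrix from aligned 14-mers
    (P9..P5', aspartate fixed at P1 = index 8).  Returns one row per
    position; each row maps the 20 amino acids to observed frequencies."""
    length = len(peptides[0])
    counts = [{aa: 0 for aa in AMINO_ACIDS} for _ in range(length)]
    for pep in peptides:
        assert len(pep) == length, "peptides must be aligned to equal length"
        for pos, aa in enumerate(pep):
            counts[pos][aa] += 1
    n = len(peptides)
    return [{aa: c / n for aa, c in row.items()} for row in counts]

# Two invented positive peptides sharing the canonical D at P1:
fm = frequency_matrix(["AAAAADEVDGAAAA", "AAAAADMQDGAAAA"])
```

Each row sums to 1, and turning these frequencies into the weighted scoring matrices is then a matter of the element-wise operations described in the next section.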
From the multiple sequence alignment of the positive peptides, we noticed the presence of two possibly different patterns; the first pattern has a D at P4 (P9...P5-D-X-X-D...P5') and the second has any amino acid except D at P4 (P9...P5-X-{D}-X-X-D...P5'). To represent this subtle difference, we decided to construct amino acid frequency matrices to represent each sub-pattern.
Two weighting systems were used in order to correct the probability of overrepresented and underrepresented amino acids in the frequency matrices so as to establish the scoring matrices: i) Calculating log odd ratio: This weighting system involves calculating log odd ratio for each element in the frequency matrix by dividing the observed frequency of a given amino acid over its corresponding natural frequency (see the definition of study controls above).
ii) Subtraction of negative control background: Instead of relying only on the common log odd weighting system and in order to minimize scoring bias, we decided to add a second normalization approach. The method relies on comparing the positive peptides with the negative peptides to further remove the noise signals around the cleaved aspartate residues.
Four scoring matrices are involved in the overall calculation of the final score of the CAT3 tool. We propose the following notation to define each scoring matrix and the overall score. First, let FM1+ denote the frequency matrix that was constructed from all the positive (+) peptides. The corresponding scoring matrix 'A' is defined as:

A = log(FM1+ / Ω)    (Equation 1)

where Ω is the matrix of natural frequencies of the amino acids and the division is applied element-wise. In addition to the above scoring matrix, we define FM1- as the frequency matrix generated from the negative (-) peptides. A new scoring matrix 'B' is defined as:

B = FM1+ - FM1-    (Equation 2)

Furthermore, let FM2[·]c denote a frequency matrix calculated from a subset of peptides that fulfil the constraint 'c'. Here, [·] is either + or - as explained before. Therefore, we define the following scoring matrices:

C1 = FM2+{D at P4} - FM2-{D at P4}
C2 = FM2+{P4 is not D} - FM2-{P4 is not D}

CAT3 implementation and scoring

The CAT3 tool was built using the Perl language. The input protein can be entered either as a FASTA-format sequence or as a text file. Once a P14 peptide with a D residue at P1 is identified, it is analyzed to calculate the final score 'S' by combining three component scores, where 'a' and 'b' are scores generated from the scoring matrices 'A' and 'B' in Equation 1 and Equation 2, respectively. The 'c' score is generated from the scoring matrix 'C1' if the peptide contains the amino acid D at P4, or from 'C2' if the amino acid at P4 is not D. The three scores (a, b, and c) are normalized to a 100% score by dividing each score by the maximum score that could be obtained from each formula.

Figure 1 Sequence analyses. This drawing shows the regions surrounding the tetra-peptide motif (P4P3P2P1) that were included in the physiochemical analyses. In each step, a given length of amino acids (bold dashed lines) at both N- and C-directions were analyzed.
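The scanning-and-scoring step can be sketched as follows. For simplicity, this sketch scores with a single matrix rather than the combined A/B/C1/C2 scheme; the matrix and protein are invented:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def scan_protein(sequence, pssm):
    """Score every aspartate that has a full P9-P5' context (8 residues
    before, 5 after).  Each 14-mer window is scored by summing matrix
    weights and normalising to a percentage of the maximum attainable
    score, mirroring the 100%-normalisation described in the text."""
    max_score = sum(max(row.values()) for row in pssm)
    hits = []
    for i, aa in enumerate(sequence):
        if aa != "D" or i < 8 or i + 5 >= len(sequence):
            continue  # skip D residues without a complete 14-mer window
        window = sequence[i - 8:i + 6]
        raw = sum(pssm[pos][res] for pos, res in enumerate(window))
        hits.append((i, 100.0 * raw / max_score))
    return hits

# Toy single matrix that perfectly matches one invented 14-mer window:
target = "AAAAADEVDGAAAA"
pssm = [{aa: (1.0 if aa == target[pos] else 0.0) for aa in AMINO_ACIDS}
        for pos in range(14)]
hits = scan_protein("CCC" + target + "CCC", pssm)
```

The embedded target window scores 100%, while the other aspartate with full context scores much lower.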
CAT3 validation
To examine the prediction power of CAT3, a k = 10 fold cross validation was performed. The positive data were the actual cleavage sites, whereas the negative data were obtained from the uncleaved dataset. In each fold, four PSSM matrices were created from 9/10 of the positive substrates; the remaining 1/10 of the positive and negative substrates were then used for testing. Since the number of negative peptides was much larger than that of positive peptides, an equal number of negative peptides was randomly sampled. The whole 10 fold cross validation experiment was repeated 10 times to ensure good coverage of the negative dataset. The sensitivity (SEN), specificity (SPE), positive predictive value (PPV), negative predictive value (NPV), accuracy (ACC) and the Matthews correlation coefficient (MCC) were calculated as in [25].
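The listed measures follow directly from the confusion-matrix counts; the counts below are invented for illustration, chosen to roughly echo the reported sensitivity and specificity:

```python
import math

def classifier_metrics(tp, fp, tn, fn):
    """Binary-classification measures from confusion-matrix counts."""
    sen = tp / (tp + fn)                   # sensitivity (true positive rate)
    spe = tn / (tn + fp)                   # specificity (true negative rate)
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)  # overall accuracy
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"SEN": sen, "SPE": spe, "PPV": ppv, "NPV": npv,
            "ACC": acc, "MCC": mcc}

# Invented fold counts, roughly echoing ~89% SEN / ~90% SPE:
m = classifier_metrics(tp=89, fp=10, tn=90, fn=11)
```

Averaging these per-fold values over the 10 repeats of the 10-fold procedure yields the reported figures.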
The areas under the receiver operating characteristic (ROC) curves were calculated by plotting the sensitivity against the corresponding 1-specificity. The optimal cutoff point was defined as that measurement that corresponded to the point on the ROC curve closest to the top left corner, i.e., closest to having sensitivity = 1 and specificity = 1.
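Picking the cut-off closest to the top-left corner of the ROC curve is a small computation; the (cutoff, sensitivity, specificity) triples below are hypothetical values from a threshold sweep:

```python
import math

def optimal_cutoff(points):
    """Return the cutoff whose ROC point lies closest to the top-left
    corner, i.e. closest to sensitivity = 1 and specificity = 1."""
    return min(points,
               key=lambda p: math.hypot(1.0 - p[1], 1.0 - p[2]))[0]

# Hypothetical (cutoff, sensitivity, specificity) triples:
roc = [(10, 0.99, 0.55), (20, 0.95, 0.78), (30, 0.89, 0.90), (40, 0.70, 0.97)]
print(optimal_cutoff(roc))  # → 30
```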
Performance comparison
A performance comparison was carried out for CAT3 versus two recently published prediction tools, namely CASVM and Cascleave [19,20]. The aim of the test was to assess how accurate the three tools were in predicting caspase-3 cleavage sites. The comparison was made on 16 caspase-3 substrates that were randomly excluded from the training dataset.
Since CAT3 is a prediction tool specific for caspase-3 cleavage sites, whereas CASVM and Cascleave were developed to predict cleavage sites of different caspases, there is a possibility of misjudging true positive sites of other caspases by assigning them to the false positive category for CASVM and Cascleave. To avoid such an unfair comparison, the 16 substrates were carefully inspected to find all caspase cleavage sites. The search was performed using the PubMed database, the Google search engine, the Caspase Substrate database Homepage (CASBAH) [26], and MEROPS - the Peptidase Database [27].
The protein sequences of the 16 substrates were analyzed individually and the prediction results for each tool were counted according to software default parameters. The true positives are the positively predicted caspase-3 cleavage sites, whereas the false positives are the positively predicted aspartates that are actually not recognized by any caspase.
High-throughput screening
The UniProtKB [24] was used to retrieve human proteins with known biological processes. Two filters of the advanced search option were used: the first was organism: Homo sapiens (UniProt ID: 9606) and the second was Gene Ontology GO: biological process (GO ID: 0008150). After excluding the 215 experimentally verified human caspase-3 substrates, a total of 13,066 reviewed human proteins with a defined Gene Ontology (biological process) were obtained. The protein sequences were analyzed by CAT3 to screen for potential novel caspase-3 substrates. Only results with scores ≥ 45 were considered for further analyses. Proteins that were predicted as potential caspase-3 substrates were further analyzed using the ToppGene Suite tool [28] to retrieve the most significant Gene Ontology (GO) terms.
Caspase 3 substrates
Our search of the PubMed literature database for caspase-3 substrates revealed 227 proteins: 215 of human origin, 9 of mouse origin, and 3 of rat origin. All the substrates were experimentally verified as natural substrates and their cleavage sites were mapped. Of the 227 substrates, the cleavage sites of 189 proteins were mapped by the site-directed mutagenesis technique, while the remaining 38 were mapped by different high-throughput proteomic screening approaches. The full list and description of the obtained substrates are available in the additional materials (Additional file 1). The obtained caspase-3 substrates, as well as other caspase substrates, will be available in the Caspase Substrates Comprehensive Database (CaspoSome Database) that has been developed at our institute (unpublished results).
Tetrapeptide cleavage motif analysis
The tetrapeptide cleavage motifs (P4P3P2P1) of the training group were analyzed to determine physiochemical properties and frequencies of amino acids at each position. The examination of amino acid frequencies within the tetrapeptide motif revealed a unique distribution pattern of hydrophobic and hydrophilic amino acids (Figure 2.A). Hydrophilic amino acids are 8.6 times more frequent in P4 than hydrophobic amino acids. Interestingly, P3 has an opposite pattern to P2. In P3 hydrophilic amino acids are nearly two times more frequent than hydrophobic amino acids, whereas in P2 the converse is true. Figure 2.B shows the results of analyzing the frequencies of acidic, basic, polar and non-polar amino acids. In addition to the obvious difference in amino acid group distribution between the four positions and the corresponding control, it is important to notice the lack of basic amino acids in P4 and the high frequency of acidic amino acids in P3 compared to the control.
Features surrounding the cleavage site
The amino acid sequences surrounding the tetrapeptide cleavage motifs were thoroughly analyzed to identify necessary feature(s) for caspase-3 recognition. The analyses include: secondary structure, amino acids physicochemical properties and amino acid composition.
The secondary structure prediction method GOR4 [29] was used to investigate the cleavage motif and its flanking regions for any common secondary structure(s). The analysis of GOR4 results showed that the majority (80%) of the cleaved sites are located within unstructured context, while 18% are located within alpha helical regions, and only 2% are located in beta sheets.
We then analyzed the biochemical properties of amino acids that flank the tetrapeptide cleavage motif to determine amino acid preferences for caspase-3 substrate recognition. No significant differences in the frequencies of acidic, basic, polar and non-polar amino acids between the tested region and the corresponding control group were found when examining 50, 30, 20, and 10 amino acids before and after the tetrapeptide cleavage motif. When testing the region of 5 amino acids before the cleavage motif, a slightly higher percentage of acidic amino acids was noticed, while basic and polar amino acids were strongly disfavoured. In the region of 5 amino acids after the cleavage motif, lower percentages of acidic and basic amino acids were noticed (data not shown).
To further explore the characteristic biochemical properties, we examined individual amino acid frequencies for the entire cleavage vicinity: the 5-amino acids before and after the tetrapeptide motif. As shown in Figure 3, the frequencies of glycine, alanine, serine and proline have altered distributions in regions before and after the tetrapeptide motif that may indicate size and charge requirements for caspase-3 recognition and binding to the substrates.
Position specific scoring matrices
In order to determine the most appropriate window size for constructing efficient scoring matrices for CAT3, a series of gradually increasing window sizes ranging from P3-P1 to P23-P19' was evaluated (Figure 4). It is obvious that window sizes equal to or shorter than the tetrapeptide cleavage motif (P4-P1) are not adequate to develop a reliable prediction tool. Despite a marginally higher MCC at the window size of P6-P2', the overall prediction efficiency of the window sizes ranging from P5-P1' to P9-P5' is quite similar. However, the efficiency seems to decrease gradually when the window size is extended beyond P9-P5'.
We actually preferred the window size P9-P5' over other seemingly comparable shorter alternatives for several reasons. First, the critical analysis of amino acids over-/under-representation scores demonstrated the significance of all the positions in this extended window P9-P5' (Additional file 2). Second, a careful analysis of natural caspase substrates, available in MEROPS database, with cleavage positions near to N-or C-terminals, indicates that minimal adequate N-and C-terminal spacers comparable to the length of P9 and P5', respectively, are required for efficient recognition. Therefore, our scoring matrices were developed by calculating the weight of each amino acid in the 14-mer peptide sequence from P9 to P5'.
To evaluate the contribution of the different amino acids at the positions surrounding the cleaved aspartate, the scoring matrix 'A' (see Equation 1 in methods section) was drawn as a heat-map (Additional file 2). Analyzing the heat-map shows that apart from the tetrapeptide cleavage motif, all positions have either overrepresented or underrepresented amino acids.
Prediction power of CAT3
A 10-fold cross validation was used to evaluate the predictive power of CAT3. Figure 5 shows the ROC curve representing the average of the 10 experiments of the 10-fold cross validation. The optimal cut-off score was found to be 30; the statistical measures of prediction at this cut-off are shown in Table 1.
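The statistical measures at a chosen cut-off can be computed as below. The cut-off of 30 follows the text, while the example scores and labels in the usage note are invented:

```python
import math

def confusion_at_cutoff(scores, labels, cutoff=30):
    """Classify each site as cleaved when its score reaches the cutoff,
    then derive the usual binary classification statistics. In practice
    the scores and labels would come from the cross-validation folds."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and not y)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"sensitivity": sensitivity, "specificity": specificity, "mcc": mcc}
```

Sweeping `cutoff` over the observed score range and plotting sensitivity against (1 - specificity) yields the ROC curve of Figure 5.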
To demonstrate the specificity of CAT3 for caspase-3 cleavage sites, a group of 25 non-caspase-3 substrates was examined by CAT3. The substrates included 17, 12 and 5 cleavage sites of caspase-1, caspase-8 and caspase-9, respectively. We avoided using any cleavage site known to be a shared target with caspase-3. Interestingly, 33 of the 34 cleavage sites (97%) showed CAT3 scores below 30, the minimum cut-off for predicting a caspase-3 cleavage site. This result provides clear evidence of the very high specificity of CAT3 for predicting caspase-3 substrates. Evaluating the performance of different binary classifiers is frequently done by comparing their reported statistical measures, such as specificity and sensitivity, which are usually calculated under different conditions. We avoided such a comparison, as it can lead to biased conclusions. Instead, we compared the performance of CAT3 versus two recently reported tools, namely CASVM and Cascleave, on a group of 16 caspase-3 substrates that were initially excluded from our training data. It is worth mentioning that some of these substrates could have been used in the training of the other tools, which could offer an unfair advantage to the other two tools versus CAT3. A thorough examination using different databases revealed that the 16 substrates contain a total of 537 aspartate residues, of which 17 are caspase-3 cleavage sites, 4 are cleavage sites of other caspases, and 516 are evidently not cleaved by any caspase.
Out of the 17 actual caspase-3 cleavage sites, the predicted true positives for the three tools were as follows: CAT3 14/17 (82.3%), CASVM 8/17 (47%), and Cascleave 16/17 (94.1%). However, CAT3 was the best performer once false positives are taken into account. Figure 6 shows the result of comparing CAT3 versus CASVM and Cascleave. It is noteworthy that both CASVM and Cascleave correctly predicted two of the 4 non-caspase-3 cleavage sites. The detailed results of the comparison are available in the additional materials (Additional file 4).

Figure 3 Amino acid frequencies around the cleavage motif. The overall frequency of each amino acid was calculated for the two regions: 5 amino acids before (gray bars) and 5 amino acids after (black bars) the tetrapeptide cleavage motif. The observed frequencies were compared to the normal frequency of each amino acid (white bars). Frequencies were obtained as described in the definition of study controls in the Methods section.
High-throughput screening for novel caspase-3 substrates
Screening of 13066 reviewed human proteins with ascribed Gene Ontology (biological process) annotations using our CAT3 tool showed that 3320 proteins are predicted to be caspase-3 substrates, with a total of 4903 potential caspase-3 cleavage sites (Additional file 5). To further investigate the function of these potential substrates we used ToppFun, an annotation-based gene list functional enrichment analysis tool [28]. Of the 3320 genes, only 3013 had annotations in ToppFun. The analysis revealed a group of 308 biological processes that showed significant enrichment in the predicted proteins versus the whole human genome as a control (Additional file 6 and Additional file 7). A careful analysis of these biological processes was performed to shortlist the most significant ones by excluding general roots (parents) and detailed leaves (children) of GO terms. The most significant biological processes are shown in Table 2. ToppFun was also used to examine the enriched cellular component GO terms. Interestingly, the majority of the predicted proteins are located in different nuclear components, the cytoskeleton, cell projections, the membrane fraction, cell junctions, and the extracellular matrix, where most typical apoptotic morphological and biochemical changes are observed. The detailed list of the enriched cellular component GO terms is available in the additional materials (Additional file 8).
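A proteome-wide scan of the kind described above reduces to sliding over every aspartate and scoring its P9-P5' window. This sketch assumes a `score_fn` standing in for the CAT3 scoring matrix; the terminal-spacer check mirrors the requirement noted earlier:

```python
def scan_protein(seq, score_fn, cutoff=30):
    """Visit every aspartate (candidate P1) in a protein sequence, extract
    the surrounding P9..P5' 14-mer and keep positions whose score reaches
    the cutoff. Sites too close to either terminus are skipped, mirroring
    the minimal N-/C-terminal spacer requirement."""
    hits = []
    for i, aa in enumerate(seq):
        if aa != 'D':
            continue
        start, end = i - 8, i + 6        # P9..P1 spans 9 residues, P1'..P5' spans 5
        if start < 0 or end > len(seq):
            continue                      # insufficient terminal spacer
        window = seq[start:end]
        s = score_fn(window)
        if s >= cutoff:
            hits.append((i + 1, window, s))   # report 1-based P1 position
    return hits
```

Applying such a function to each of the 13066 reviewed proteins is what makes the large-scale in silico screen fast compared to single-sequence tools.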
Discussion
In addition to its well-known key function in apoptosis, caspase-3 has been shown to play a crucial role in the regulation of various biological processes such as cell differentiation, adhesion, neurodevelopment and neuronal signalling [30-32]. Recognition of caspase-3 substrates is becoming vital for understanding the molecular mechanisms behind many disorders, including cancer, autoimmune and neurodegenerative diseases. Currently, most known caspase-3 substrates have been identified using in vitro proteolytic cleavage assays coupled with site-directed mutagenesis to determine the exact cleavage position. In recognition of the physiological importance of caspase-3, many labs have begun to perform high-throughput proteomic screening to identify novel substrates of this major caspase [33-36]. Such techniques are cumbersome and relatively expensive, and the number of identified substrates is usually limited to proteins that are relatively abundant in the examined cell type.
Recently, several computer-based prediction tools such as CASVM, Cascleave, and Pripper were developed to help discover novel caspase substrates [19-21]. These tools were trained on data representing substrates of different caspases and, in some cases, non-caspase endopeptidases. Although different caspases share the primary sequence requirement of cleaving at the carboxyl terminus of aspartate residues in their protein targets, each of these proteases needs a specific context surrounding the cleavage position. Even caspases that appear to have identical tetrapeptide cleavage specificities, such as caspase-3 and -7, are actually distinct in their amino acid preferences outside the tetrapeptide core sequence [13]. Therefore, we believe that building a single algorithm to predict cleavage by multiple caspases would likely yield low prediction specificity. Based on this hypothesis, we decided to develop an algorithm focusing only on substrates of caspase-3, the major executioner caspase with a considerable number of targets. Our caspase-3-specific approach (CAT3) has indeed outperformed other multi-caspase prediction tools on an independent comparison dataset (Figure 6).
CAT3 has three distinctive features. Firstly, it was developed using a PSSM instead of other, relatively complex approaches. A PSSM is practical, requires little computational power and represents the statistical weights of amino acids at each position. In addition, it can easily be combined with other machine learning tools to generate hybrid approaches that might further enhance prediction performance.
Secondly, instead of using data for different caspases, which are actually a mixture of heterogeneous patterns, we used an extended set of highly curated caspase-3 natural substrates. We believe that inclusion of data representing other proteases, or cleavage sites of caspases with very few substrates, or cleavage positions representing non-canonical patterns, can lead to overgeneralization. In this situation, the classification model is forced to loosen its decision boundary to increase sensitivity, but at the cost of more false positive results. It is therefore generally accepted that improvement in prediction accuracy is more likely to come from the quality of the data used than from the complexity of the classification method. CAT3 indeed showed a very low rate of false positive results in comparison to existing state-of-the-art tools, namely CASVM and Cascleave.
Thirdly, CAT3 is a straightforward sequence-based scoring system that offers an easy-to-use reference scale for determining the potential cleavage site(s), instead of offering a yes/no answer or suggesting many cleavage sites without any score to rank them. In contrast to tools that can execute only a single sequence per query, CAT3 is fast and can process both single and multiple sequence inputs: a feature that helps biologists perform large-scale in silico screening to identify novel caspase-3 substrates.
Our secondary structure analyses of caspase-3 substrates showed that regardless of cleavage patterns, aspartic acid residues are predominantly located in unstructured regions and to a lesser extent within alpha-helices. In addition, we found the amino acids D, E, A, G and S appear more frequently in natively unstructured regions no matter whether they lie within or outside cleavage motifs. These findings are in agreement with various reports that used statistical analysis to determine the natural distribution of these amino acids and their influence on secondary structure [37][38][39]. This interesting result raises a question about the benefit of using local secondary structure properties of the cleavage sites as additional features to enhance the discrimination between cleaved and noncleaved patterns [20].
Careful evaluation of amino acid preferences at the positions surrounding the tetrapeptide cleavage motif points to a general trend in which the unfavourable amino acids carry greater weight than the favoured ones, especially at P7, P6, P1', P3', P4', and P5' (Additional file 2). Nevertheless, P1' has a remarkable preference for specific amino acids, namely glycine and serine. In addition to their role in determining the molecular electrostatic potential and the steric accessibility of the enzyme, the post-translational modification potential of these two amino acids is vital for determining the timing and functional consequences of cleavage. Tözsér et al. [40] demonstrated that phosphorylation of serine residues in close proximity to the tetrapeptide cleavage core can determine caspase-3 cleavability. On the other hand, the high preference for glycine at P1' can be crucial to the acquisition of a myristic acid at this residue.

Table 2 note: The GO terms of the listed biological processes were manually filtered to reduce redundancy by removing general roots (parents) and detailed leaves (children) of the enriched GO terms obtained with the ToppFun tool. The P-value of each GO term in the predicted caspase-3 proteins was derived by random sampling of the whole genome.
Myristoylation is a co-translational reaction that occurs after the removal of the initiator methionine residue. It can also occur as a post-translational modification when internal glycine residues become exposed after caspase cleavage. The addition of a myristate moiety can alter the subcellular localization of the cleaved proteins by facilitating their attachment to membranes and other proteins [41]. Using CAT3, we carried out a large-scale proteomic screen to identify novel potential caspase-3 substrates. The initial screening showed that 3320 human proteins are potential caspase-3 substrates. Even after normalizing this result by excluding the noise from the presumed false positive rate, the percentage of potential caspase-3 substrates in the human proteome would be roughly ~14%. This means that only a small fraction (less than ~10%) of caspase-3 substrates has so far been discovered.
The results of GO term enrichment analysis using ToppFun showed that the majority of the predicted caspase-3 substrates are involved in cell adhesion, signal transduction, cell cycle, cytoskeleton organization, chromosome organization, neurogenesis, embryo development, cell morphogenesis, and DNA metabolism (Table 2). It is interesting to note the direct association of some of these processes with the biochemical events that lead to the characteristic morphological changes in an apoptotic cell. These changes include the breakup of the nuclear envelope and of actin filaments in the cytoskeleton, blebbing of the plasma membrane, cell shrinkage, nuclear fragmentation, chromatin condensation, and chromosomal DNA fragmentation [42,43].
It is interesting to note the remarkable representation of some biological processes that are not related to apoptosis, such as cell development and neurogenesis. Careful analysis of the enriched biological process GO terms suggests a possible significant role of caspase-3 in the development and differentiation of nerve cells. In fact, several reports have shown strong expression of non-apoptotic active caspase-3 in various proliferating and differentiating neuronal cells [44,45]. Further investigation focusing on the role of caspases in nerve tissue may reveal new pathways that are necessary for the development and differentiation of nerve cells.
The results of enriched cellular component GO terms showed that most of the predicted substrates are distributed to nuclear components (nucleoplasm, nucleolus, chromosome, and nuclear envelope), cytoplasmic components (cytoskeleton and cell projection), and plasma membrane part (cell projection and membrane fraction). This distribution is correlated with the normal subcellular localization of caspase-3. Although the procaspase-3 is localized in the cytoplasm, active caspase-3 plays essential roles both in the cytoplasm and nucleus [46].
Feng et al. [47] have shown that the activated caspase-3 is first observed close to the inside surface of the cellular membrane, then transferred to the cytoplasm, and finally translocated to the nucleus.
An interesting fraction of the predicted caspase-3 substrates are proteins of the extracellular matrix. The cleavage of such proteins can be achieved through their cytoplasmic embedded domains. Further investigations are needed to shed light on the biological importance of extracellular matrix proteins and their association with apoptotic and non-apoptotic roles of caspases.
Conclusions
In this work, we introduce a significant improvement to the in silico prediction of caspase substrates. Based on our results, and in order to increase prediction specificity, we advocate a caspase-specific approach rather than one that treats the substrates of different caspases as a single common pattern. CAT3 can be considered a prototype system that could easily be adapted to develop prediction tools for other caspases and endopeptidases. The predicted cellular targets of CAT3 might be used to explore new pathways and gain further insight into the cellular mechanisms that regulate apoptosis, proliferation, and other biological processes. In addition, the discovery of such targets might have significant implications for the development of drugs for various diseases, including cancer, autoimmune disorders and neurodegenerative pathologies.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2011-05-11T00:00:00.000
|
15973562
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/archive/2011/468237.pdf",
"pdf_hash": "1bffc9558150738e139676c6ebccf89b378cb7bb",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2254",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "1bffc9558150738e139676c6ebccf89b378cb7bb",
"year": 2011
}
|
pes2o/s2orc
|
Role of Endobronchial Ultrasound in the Diagnosis of Bronchogenic Cysts
Diagnosis of bronchogenic cysts is possible with computed tomography, but half of all cases present as soft tissue densities. Two such cases are highlighted in which asymptomatic bronchogenic cysts presenting as soft tissue masses were evaluated by endobronchial ultrasound (EBUS). After studying the ultrasound image characteristics, the diagnosis was confirmed using EBUS-guided transbronchial needle aspiration (EBUS-TBNA). The first case had ultrasound findings of an anechoic collection, and the aspirate was serous with negative microbiologic cultures. The second was an echogenic collection within a hyperechoic wall. The needle aspirate was purulent and cultured Haemophilus influenzae. The diagnosis of a bronchogenic cyst complicated by infection was made, and the lesion was surgically resected. This potential of EBUS in the diagnosis of bronchogenic cysts and in identifying complications such as infection should be considered in the management of such cases.
Introduction
Bronchogenic cysts are ventral foregut duplication cysts that arise from aberrant embryonic development. They are typically located near the large airways just posterior to the carina. Although many are asymptomatic and discovered incidentally, complications such as infection, airway obstruction, fistulation, vascular compression, and malignant transformation have been reported [1].
Bronchogenic cysts are lined with cartilage and pseudostratified columnar epithelium. Noninvasive diagnosis is possible with computed tomography (CT) using the typical characteristics of a round, well-circumscribed, unilocular mass [1]. Approximately half of the cysts are homogeneous with a near-water density (0-20 Hounsfield Units) that reflects their serous nature. The remainder have attenuation values of soft tissue because of viscid mucoid contents, making them indistinguishable from neoplasms [2]. Whilst the additional use of magnetic resonance may help to further characterize these lesions, they can still be confused with malignancies with central necrosis. An exploratory thoracoscopy or thoracotomy may then be necessary to confirm the diagnosis [3].
We present two cases where endobronchial ultrasound (EBUS) was performed on patients with asymptomatic bronchogenic cysts. After studying the ultrasonographic images, transbronchial needle aspiration (TBNA) was performed at the same sitting to confirm the diagnosis. Both were outpatient procedures that were completed under local anesthesia and conscious sedation without any complications. The EBUS ultrasound images correlated with TBNA aspirate findings and led to two different management strategies for each of the bronchogenic cysts.
Case 1
A 54-year-old ex-smoker presented with an abnormal chest radiograph. He was asymptomatic, had a normal physical examination, and the radiograph had been taken as part of a license application. CT scan revealed a well-circumscribed lesion in the right lower paratracheal region abutting the azygos vein. EBUS-TBNA was performed using the 7.5 MHz convex probe bronchoscope (BF-UC260F; Olympus Ltd, Tokyo, Japan). The lesion was identified as a round, anechoic structure and distinguished from surrounding blood vessels using color Doppler (Figure 1). A dedicated 22-gauge needle (NA-202, Olympus Ltd, Tokyo, Japan) was used to puncture and aspirate the lesion under direct visual guidance. To ensure complete drainage of the lesion, the gradual shrinkage of the cyst was observed on ultrasound while 50 milliliters of serous fluid was aspirated (Figure 1). Cytological examination of the aspirate yielded metaplastic squamous cells with negative microbiological cultures. He has remained clinically and radiologically stable on follow-up for 18 months.
Case 2
An 18-year-old male presented with an abnormal chest radiograph that was performed before military enlistment.
He was also asymptomatic and had a normal physical examination. CT scan revealed a mass in the right hilum. EBUS-TBNA was performed, and ultrasound identified a round lesion with an echogenic centre and thickened, hyperechoic walls (Figure 2). Ten milliliters of purulent aspirate was drained. Cytological examination identified numerous neutrophils, occasional squamous cells and an amorphous granular background. Bacterial cultures grew Haemophilus influenzae. He was treated with antibiotics and referred for surgical resection. A lobulated mass at the root of the right upper lobe with a cystic cavity was found on right thoracotomy. Histopathological examination revealed a well-defined cystic space lined by inflamed respiratory epithelium, consistent with the diagnosis of an intrapulmonary bronchogenic cyst.
Discussion
Surgical resection of bronchogenic cysts with complete removal of the secreting mucosal lining is recommended as the therapeutic procedure of choice in cases presenting with complications [1,4]. However, many cases are asymptomatic, and the long-term prognosis in such instances is uncertain, with a few reports of late complications. The role of preventative surgery remains controversial, and asymptomatic patients have also been managed successfully by clinical observation. Endoscopic aspiration has been attempted via conventional transbronchial needle aspiration, radial (20 MHz) EBUS, endoscopic ultrasound (EUS), and convex-probe real-time EBUS-TBNA [5-7]. Needle aspiration is usually associated with a high recurrence rate because the lining of the cyst is not removed. However, this procedure remains a possible palliative measure in patients presenting with airway obstruction who are poor surgical candidates [6]. Furthermore, ultrasound facilitates visualization during aspiration and enables complete aspiration of the cyst, which is not always possible when "blind" techniques are utilized. This causes collapse of the cystic space and may facilitate adhesion between the mucosal surfaces lining the cavity, consequently reducing recurrence rates [6,7].
Ultrasound provides excellent delineation between solid and fluid structures. The echogenicity of cystic lesions is related to their cellular content and is by convention described with reference to soft tissue such as lymph nodes or tumors. These structures are grey in color and are termed isoechoic. Darker (hypoechoic) or black (anechoic) images denote either cysts with serous fluid or blood vessels. Echogenic images within cysts identify complicated fluid collections that may contain either frank pus or blood clots [8]. The presence of septations, thickened walls, or floating debris within the cyst may give further radiological clues to an infected cyst [8].
Therefore, EBUS can be used to diagnose bronchogenic cysts that present as soft tissue densities on CT imaging. Uncomplicated cysts can be identified using ultrasound image analysis and subsequently managed by observation. Infected cysts may also be recognized and managed aggressively with early surgical resection. If there is any doubt in the diagnosis, then EBUS-TBNA can be considered and aspirate sent for bacterial cultures and cytological analysis.
Antibiotic prophylaxis is recommended should TBNA be performed because of the risk of introducing infection into the cysts and mediastinum [9]. The TBNA needle may become contaminated with airway commensals as the bronchoscope passes through the oropharyngeal region and these organisms can be transferred during needle aspiration. There are concerns that this infection risk may be especially elevated if the TBNA needle is used in full extension (36 mm). At such depths, the tip of the needle is poorly visualized on ultrasound and may inadvertently puncture surrounding tissues [9].
Conclusion
EBUS has a potential role when a confident diagnosis of bronchogenic cyst cannot be made by CT. Uncomplicated cysts appear as rounded, anechoic lesions on ultrasound scans. Early identification of complications such as infection is possible because such lesions are echogenic with thickened walls.
|
v3-fos-license
|
2018-04-03T03:47:32.421Z
|
2017-09-04T00:00:00.000
|
31559407
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41467-017-00362-5.pdf",
"pdf_hash": "0d784427855d1fe955adb3387c42fb97d7a02253",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2256",
"s2fieldsofstudy": [
"Physics",
"Materials Science",
"Chemistry"
],
"sha1": "0d784427855d1fe955adb3387c42fb97d7a02253",
"year": 2017
}
|
pes2o/s2orc
|
White light emission from a single organic molecule with dual phosphorescence at room temperature
The development of single-molecule white light emitters is extremely challenging for pure phosphorescent metal-free systems at room temperature. Here we report a single pure organic phosphor, namely 4-chlorobenzoyldibenzothiophene, emitting white room-temperature phosphorescence with Commission Internationale de l'Éclairage coordinates of (0.33, 0.35). Experimental and theoretical investigations reveal that the white light emission emerges from dual phosphorescence emitted from the first and second excited triplet states. We also demonstrate the validity of the strategy of achieving metal-free, pure phosphorescent single-molecule white light emitters by intrasystem mixing of dual room-temperature phosphorescence arising from the low- and high-lying triplet states.
Supplementary Methods
All the chemicals and reagents were purchased from Aldrich and used as received without further purification. All the synthesized molecules were purified by column chromatography, recrystallized twice from dichloromethane and hexane, and fully characterized by 1H NMR, 13C NMR, high-resolution mass spectrometry and elemental analysis. 1H and 13C NMR spectra were recorded on a Bruker AV 400 spectrometer at 400 and 100 MHz, respectively, in CDCl3. Tetramethylsilane was used as the internal standard. High-resolution mass spectra (HRMS) were recorded on a GCT premier CAB048 mass spectrometer operating in MALDI-TOF mode. Elemental analysis was performed on a Thermo Finnigan Flash EA1112. Gel filtration chromatography was performed using a ZORBAX SB-C18 column (Agilent) connected to an Agilent 1260 Infinity HPLC system. Before running, each sample was passed through a 0.22 µm filter to remove any aggregates. The flow rate was fixed at 1.0 mL/min, the injection volume was 20 µL and each sample was run for 6 min. The absorption wavelength was set at 330 nm, and 100% acetonitrile was used as the running buffer. The photoluminescence spectra were measured on a PerkinElmer LS 55 spectrophotometer. The lifetimes, time-resolved excitation spectra, steady-state and time-resolved emission spectra, temperature-dependent photoluminescence spectra and absolute luminescence quantum yields were measured on an Edinburgh FLSP 920 fluorescence spectrophotometer equipped with a xenon arc lamp (Xe900), a microsecond flash-lamp (uF900), a picosecond pulsed diode laser (EPL-375), a closed-cycle cryostat (CS202*I-DMX-1SS, Advanced Research Systems) and an integrating sphere (0.1 nm step size, 0.3 second integration time, 5 repeats), respectively. Mean decay times (τP) were obtained from the individual lifetimes τi and amplitudes ai of a multi-exponential evaluation.
Powder X-ray diffraction patterns were recorded on an X'Pert PRO MPD diffractometer with Cu Kα radiation (λ = 1.5418 Å) at 25 °C (scan range: 4.5−50°). Single crystal data were collected on a Bruker Smart APEXII CCD diffractometer using graphite-monochromated Cu Kα radiation (λ = 1.54178 Å). The photos and videos were recorded with a Canon EOS 60D.
The amorphous solids of the phosphors were prepared by heating the samples to melt with a heating gun and quenching the melt with liquid nitrogen.
All the crystalline samples were obtained by slow evaporative crystallization from a hexane/chloroform mixture (3:1, v/v). To further check the purity of the solid samples, all solid samples were dissolved in 100% acetonitrile to prepare sample solutions (50 µM), which were then run on the HPLC.
To check the optical stability of the powder samples, a pile of powder was exposed to 365 nm UV light (8 W UV tube) for 30 min to 12 h; the solid samples were then dissolved in 100% acetonitrile, sample solutions were prepared (50 µM), and finally run on the HPLC.
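The mean decay time from a multi-exponential fit, mentioned in the methods above, is commonly computed as an amplitude-weighted average; the weighting convention below is an assumption on our part, since the intensity-weighted form Σaᵢτᵢ²/Σaᵢτᵢ is also widely used:

```python
def mean_decay_time(amplitudes, lifetimes):
    """Amplitude-weighted mean decay time from a multi-exponential fit:
    tau_P = sum(a_i * tau_i) / sum(a_i).
    Note: this weighting is an assumed convention, not necessarily the
    exact formula used in the original work."""
    num = sum(a * t for a, t in zip(amplitudes, lifetimes))
    den = sum(amplitudes)
    return num / den
```

For example, two equal-amplitude components with lifetimes of 2 and 4 ms give a mean decay time of 3 ms.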
Synthesis
Synthetic route for BDBT, FBDBT, ClBDBT, BrBDBT and BCBP.

The picosecond pulsed diode laser (EPL-375) was used as the light source. The impulse response function (IRF) (green) was taken from the output of the pulsed diode laser, which has a typical pulse width of less than 100 ps. All the decay profiles almost coincide with the IRF, suggesting that no obvious nanosecond fluorescence decay was detected. As persistent lifetimes were detected in all the emission bands, the photoluminescence of BDBT, FBDBT, ClBDBT, BrBDBT and BCBP was confirmed to be pure phosphorescence.
Computational methods and results
The computational models were built from the crystal structure shown below. The quantum mechanics/molecular mechanics (QM/MM) method was used to treat the electronic structures in the crystal, employing ChemShell 3.5 [1] to interface Turbomole 6.5 [2] for QM and DL_POLY [3] with the general Amber force field (GAFF) [4] for MM. The atomic partial charges were generated by the restrained electrostatic potential (RESP) method [5]. Molecular geometry optimizations were performed for the ground state (S0) at the B3LYP/6-31G(d) level and for the triplet states (T1 and T2) at the TDDFT/B3LYP/6-31G(d) level. The excitation energies were calculated using TDDFT for the electronically excited singlet and triplet states. We further calculated the vibrational frequencies of the S0, T1 and T2 states at the (TD)B3LYP/6-31G(d) level in order to determine the vibrational emission spectra of T1→S0 and T2→S0. At the same level, the oscillator strengths of the triplet states were obtained with the Beijing Density Functional (BDF) program [6-8]. The emission spectra were calculated with the MOMAP program [9], with the detailed formulation described in our previous work [10]. Because we failed to obtain the single crystal structure of FBDBT, no QM/MM model calculations could be carried out for it. BDBT and BrBDBT have different energy gaps, electronic transition characters for the T1 and T2 states, and corresponding frontier molecular orbitals. Briefly, BDBT has a large energy gap between the T1 and T2 states. BrBDBT has energy gaps similar to ClBDBT, but its T2 state shows a smaller (n,π*) transition contribution.
|
v3-fos-license
|
2020-10-29T09:08:54.044Z
|
2020-10-22T00:00:00.000
|
226351308
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-3417/10/21/7388/pdf",
"pdf_hash": "3af70ae5c2107459bf0864581608da38bedb0ad1",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2257",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"sha1": "c48f82836369e657cf3abf6be94259d4f00ae29f",
"year": 2020
}
|
pes2o/s2orc
|
Risk Factors Impacting the Project Value Created by Green Buildings in Saudi Arabia
Green buildings are playing a pivotal role in sustainable urban development around the world, including in Saudi Arabia. Green buildings are subject to various sources of risk that influence the potential outcomes of the investments or services added in their design. The present study developed a structured framework to examine the various risks that may lead to the destruction of green buildings' value in Saudi Arabia. The framework begins with the identification of 66 potential risk factors from the reported literature. A questionnaire compiling the identified risk factors was hand-delivered to 300 practitioners (managers, engineers, and architects) with knowledge of value engineering in the construction industry, and an overall response rate of 29.7% was achieved. Descriptive statistics were then used to rank the risk factors based on the scores given by the respondents. Principal component analysis extracted 16 components based on the likelihood of risk factors impacting the value created by green building design. Finally, factor analysis grouped the 35 most significant risk factors into 5 clusters: 8 in the functional risk, 13 in the financial risk, 3 in the operational risk, 3 in the environmental risk, and 8 in the management risk cluster. The study enhances understanding of the impact of these risk factors on value creation. Based on the results, value management (or engineering) teams and top-level management can identify, manage, and control the risk factors that significantly impact the project value created by green building design.
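The component-extraction step described above can be illustrated with a minimal principal component analysis over a synthetic respondents-by-risk-factors rating matrix. The Kaiser criterion (retain components with eigenvalue > 1) used here is a common retention rule and an assumption on our part, not necessarily the rule applied in the study:

```python
import numpy as np

def pca_components(ratings, var_threshold=1.0):
    """PCA of a respondents-by-risk-factors matrix of survey ratings.
    Components with eigenvalue above var_threshold are retained (Kaiser
    criterion, assumed here for illustration). Returns the retained
    eigenvalues and the corresponding loadings (one column per component)."""
    X = ratings - ratings.mean(axis=0)        # centre each risk factor
    corr = np.corrcoef(X, rowvar=False)       # factor-factor correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)   # symmetric eigen-decomposition
    order = np.argsort(eigvals)[::-1]         # sort by explained variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > var_threshold
    return eigvals[keep], eigvecs[:, keep]
```

Grouping risk factors by their largest loadings within the retained components is then what yields clusters such as the functional, financial, operational, environmental, and management risk groups reported.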
Introduction
Green buildings possess a wide range of advantages over conventional buildings across their life cycle, e.g., minimal environmental impacts and low maintenance costs [1]. In the recent past, rapidly growing urbanization has subjected the world to unusual climate change and environmental deterioration [2]. Due to global warming, the urban heat island (UHI) phenomenon has already raised temperatures in many cities around the globe by 2 to 5 °C compared with their rural surroundings [3]. The potential impacts of global warming on temperature and rainfall patterns are expected to be highly significant in warm and arid regions like Saudi Arabia [4]. Decision-makers need to take proactive measures to minimize the production of greenhouse gases from all types of development activities, particularly the growing building sector in the country [5].
As per Shin et al. [6], certified green buildings reduce the UHI intensity by around half a degree. To further mitigate the impact of buildings on climate change, He [2] proposed a concept of Zero UHI impact building. In addition to revising the sustainability assessment standards and economic regulations, the idea needs serious efforts towards technological interventions in building envelopes, materials, and construction equipment. Dwaikat and Ali [7] estimated the economic benefits of
Risk Classification in Green Building Development
There is no clear consensus in the existing literature on how to classify the risks that may emerge from the development of green buildings. In an early study, Tiong [20] clustered risks into financial, political, and technical aspects of green buildings. Boussabaine and Kirkham [21] presented a systemic life cycle risk classification based on the design, construction, operation, and disposal of building facilities. Medda [22] suggested additional classes, such as commercial, regulatory, and economic risks. Zou et al. [23] classified risks based on time, cost, quality, safety, and environmental aspects of performance. Yang et al. [24] classified previously defined risks into several categories and introduced the ethics/reputation risk cluster. Zou and Couani [25] classified a number of risks from the perspective of the stakeholders involved in the green building supply chain.
Risk classification largely depends on the intended purpose of the investigation [26]. Based on the objectives of the present research, Table 1 groups the identified risks into: (i) functional risks, related to how the building and its components will function in use; (ii) financial risks, related to a project's financing parameters, capital and operational costs, and return on investments; (iii) operational risks, related to safety and how easy and efficient the green assets are to operate; (iv) environmental risks, related to protecting the environment from the impacts of the development; and (v) management risks, related to stakeholders' interaction, knowledge, and contractual and organizational relationships. It should be noted that the purpose of our classification is to facilitate risk identification in value creation in green building development. These risks will aid the process of value engineering analysis. The sources of the identified risks in each cluster are described in the following sub-sections.
Functional Risks
Functionality in a green building's use plays a pivotal role in optimizing the operational cost of core services and the productivity of the occupants. From a value point of view, the function of a building and its components is related to the purpose of their design and existence. The design of green buildings must be subject to risk analysis in order to assess and understand the uncertainties associated with the function and design parameters. Changes in the assumptions behind these parameters may lead to different levels of performance and reliability. Functional risks can be attributed to the state of the product or services provided. If the service or product fails to fulfil its functional requirements as expected, then all or most of the invested value will be lost. Table 1 lists the functionality risks that may impact value creation in green building development. Physical risks are related to the building asset's condition over the life span of the capital value investment. Typical risks include loss due to fire, corrosion, explosion, structural defect, war, etc. Technical risks are due to increased reliance on technology in manufacturing, communications, and data handling, and the interdependency of manufacturers, methods of storage, stock control, and distribution. These risks could also be associated with physical aspects of the green building's development. Building components and their function are subject to obsolescence in terms of service life, design life, and functional purpose over time, leading to the loss of both tangible and intangible value. The monetary value generated from a building's assets is directly related to the quality and durability of the assets [27]. The building's systems should be easy to operate, robust, and efficient. It is essential that a green building's facilities easily accommodate any changes in activity that are likely to occur throughout its life cycle, including when user requirements change in the future. This is essential to guard against the risks of obsolescence.
Zou and Couani [25] compiled a list of key risk factors that may have a consequential impact throughout the green building's supply chain. Al-Yousefi [28] had previously highlighted risks due to the lack of quality, reliability, and performance in sustainable development. Non-complying products and materials and change in technologies due to green building are the two major risks that can impact the green building development [25]. Isa et al. [29] pointed out that physical risks in green buildings might result from events such as earthquake, flood, wear and tear, and user damage. The authors report that these risks will have an impact on the economic value of a property. The Green Building Council [30] stressed the importance of the following regulatory risks: (i) property value decrease due to changes in planning/transport policy; (ii) inability to compete with newer, greener properties; (iii) decrease in value due to low energy rating; and (iv) inability to lease due to new regulations.
The risk of obsolescence of the asset function or its components will result in higher operating expenditure and may undermine the value of the asset, leading to lower income, smaller capital receipts, higher costs, and the possibility of legal action. Furthermore, new technologies that can change demand-side behavior (e.g., wireless measurement of energy use at the appliance level) could increase the risk of obsolescence and missed opportunities for reduced operational costs [30].
Financial Risks
Financial risks are attributed to inadequate inflation forecasts, incorrect marketing decisions, and credit policies. Zurich [31] notes that "the additional costs of green buildings may affect completing projects on time and on budget, but must be weighed against the cost of not going green". Table 1 lists the financial risks that may have an impact on value creation in green building development. Haghnegahdar and Asgharizadeh [32] reported that 75% of projects are not accomplished within the allocated financial resources and time schedules. Zou and Couani [25] identified "higher investment costs to go green" and "costs of investment in skills development" as additional costs in the development of green buildings. Zurich [31] claimed that the additional costs spent on the design and construction of green buildings can be too costly for some companies and delay the completion of projects within the specified budget. According to the NAO [33], buildings that consume large amounts of capital in their development and operation will end up having a negative impact on the user's business and performance.
Thus, if the budgets for both capital and whole-life costs are not estimated correctly and justified in the business case to be sustainable and affordable over the life of the green building facility, it leads to the risks of failure to recognize cost-value mismatches, failure to identify cost-value relationships, and losing potential revenue from the investment. It is also important that operational and maintenance costs are evaluated and kept within the budget. The investment appraisal must address various options for creating the required value from green buildings. Failure to consider the implications of economic conditions and to recognize the cost as resource expenditure will certainly lead to the risk of affordability and to the risk that whole-life cost estimates are not realistic but based on unreliable evidence.
Indirect factors, such as the inflation rate, liquidity, and financing risks, will impact the capital and operational costs of developing green buildings. Lower economic activity may influence both the asset's economic value and the rental return [34]. Higher financing costs also result in value loss, leading to a longer period being required to recover the invested capital. Sustainability features in green buildings attract funding at competitive rates.
Investments in green buildings offer higher returns in the form of higher rent, capital appreciation, and cost savings [29]. It is becoming standard procedure for real estate valuations to take into consideration the value difference if environmental features are not incorporated into the construction and operation of the building estates [30]. The Council also suggests that failure by potential investment partners to meet the benchmarking criteria of sustainability may result in different potential market risks, including: (i) brown discounts (i.e., reduction in rent and asset value), (ii) increased speed of depreciation, (iii) lower occupancy rates, and (iv) shorter tenancies.
A marked link has emerged between the market value and the associated performance of the green features of a building [35]. Ashuri and Durmus-Pedini [36] further compiled a list of financial risks associated with green buildings, including the possible unforeseen conditions of retrofitting existing buildings.
Operational Risks
Operational risks are concerned with maintaining, operating, and cleaning a green building facility once it is in use. Table 1 lists the operational risks that may have an impact on value creation in green building development. The operational performance of green buildings has a significant impact on their market value, both rental and capital. Project owners are starting to require additional contract provisions regarding the energy efficiency of green buildings; breach of contract can increase exposure to legal liabilities, such as tortious, statutory, and contractual liabilities [31]. Lutkendorft and Lorenz [37] proposed that value should be attributed to the quality of the indoor environment and its relationship with efficiency in employees' productivity. Low energy costs reduce the potential occupants' operating costs, which minimizes the vacancy risk and improves the rental value.
Environmental Risks
The whole endeavor of the green building ethos is to create facilities that must minimize waste and energy use during construction and operation stages. Investors are concerned with the inherent risk from the environmental perspective to the real estate portfolio [30]. The building should provide a comfortable and healthy working and living environment for people. The methods and materials used in construction should be selected based on their potential risk impacts on the environment [38]. The frequency by which building materials are replaced will have an influence/increase in carbon emissions over the life cycle of the green assets [39][40][41]. This stems from the fact that the replaced materials need to be disposed of, new materials have to be manufactured and transported, equipment must be utilized, and energy must be expended to rebuild or renovate the asset.
The waste from such activities increases the building's environmental impacts, such as: global warming, from the building machinery and the operation of the transport and construction vehicles; acidification, caused by emissions from the diesel burned by the building machinery and the transport and construction vehicles; eutrophication, caused by indirect emissions from the source of electricity supply and the diesel burned by the building machinery and transport; winter smog, from waste transportation and the production of natural gas; heavy metals, due to the toxic effects of heavy metals from disposing of and recycling materials; and energy, from electricity and oil usage and production impacts [42]. These risks can be mitigated through the design and specification of robust structures and construction. Therefore, failure to consider maintainability and reparability increases the environmental risk.
The performance of new products and technologies that are being developed for green construction can also pose a risk [31]. This view is based on the fact that green materials are developed rapidly without robust testing of their performance and environmental credentials. This might lead to litigation over specifications, materials that are unfit for purpose, or product failure. The legal liability risks, related to tortious, statutory, and contractual liabilities, eventually reduce the client's investment value. The risk of obsolescence due to a green building's non-conformance with sustainability issues and consumption of resources may undermine the value of the green real estate [30].
Management Risks
Effective management by an integrated project team is essential to create value in the development and operation of green and traditional facilities [43]. Risk management is an important aspect of designing and operating green buildings [44]. There are opportunities to maximize value and minimize waste in each stage of a building project, i.e., planning, design, procurement, construction, and operations [33]. If the process of development is not well managed, risks may emerge from a lack of integration, coordination, and communication within the project team. The project team should have the foresight to develop and communicate a clear brief and make a realistic budget and cost estimation from the outset. Also, the team should be given enough time, as needed during the whole project cycle, to plan and complete the project.
If the project execution plan is poorly conceived, it may lead to risks related to poor definition of scope and output specification, poor communication, and poor lines of decision-making. Other management risks include psychological risks associated to the choice of service or product selection and procurement. If the wrong product or service is chosen, capital value may be lost and it might have other negative effects on the whole life cycle chain. In addition, lack of coordination is considered one of the most prevalent endemic risks in a construction project's development. Furthermore, stakeholder involvement and teamwork is essential for adding and creating value throughout the life cycle of green building development [45]. Table 1 lists 17 managerial risks that have an impact on value creation in green building development.
The identified risk factors in Table 1 were put forward for evaluation by industry professionals to test their impact on value creation in green buildings. Figure 1 illustrates the methodological framework adopted in the present research. Through a literature survey and expert judgment, 66 risk factors were found relevant to the context of Saudi Arabia and elsewhere (refer to Table 1). These risk factors were grouped under five main categories in Section 2, as described above. A questionnaire was developed to obtain the views of professionals in the country on the importance and likelihood of the identified risk factors. Subsequently, the significance of the selected risk factors was evaluated through a hand-delivered questionnaire survey. The responses were statistically analyzed with the help of the Statistical Package for the Social Sciences (SPSS). Finally, the risk factors with the highest importance were ranked and grouped into clusters to help shareholders and designers enhance performance efficiency and obtain more value from investment in green building assets. All these steps are discussed in detail in the subsequent sub-sections.
Identification of Risk Factors
A detailed literature review developed a list of risks that may have a negative impact on value creation. The extracted risk factors were classified according to value driver groups. A set of 66 risk factors was classified into financial risks, functional risks, operational risks, environmental risks, and management risks. These risk factors, listed in Table 1, were used for developing the questionnaire to assess how each risk will impact value creation in green building development.
Questionnaire Design and Development
The questionnaire form began by giving an overview and the objectives of the research. The first part of the questionnaire gathered general information about the respondents, while the second part asked the respondents to evaluate and rate the list of identified risk factors. Part 1 obtained two types of information. In the first type, the respondents were asked to provide some general information (optional), such as their names, organization names, email addresses, phone numbers, and postal addresses. The second type of information was related to the job titles and years of experience of the respondents. Based on their job titles, respondents were categorized into three groups: manager, engineer, or architect. In order to facilitate subsequent statistical analysis, the job types were coded with ordinal numbers for discretion. In part 2, a Likert scale ranging from 1 to 5 (very unlikely, unlikely, neutral, likely, and very likely) was used for rating the list of risk factors, because it is easy to construct and modify and can be used directly for statistical inference on the numerical measurements. The questionnaire asked respondents to rate the likelihood of the risk factors impacting the project value created by green building design. A sample of the questionnaire is given in Appendix A. Details on questionnaire development and validation can be seen in Alattyih et al. [19].
The population and the sample size were based on the number of professionals with knowledge of the application of value engineering approaches in the Saudi Arabian construction industry. As per SAVE International, more than 1356 people have obtained value engineering certificates in Saudi Arabia [52,53]. Approximately 30 of them are Certified Value Specialists (CVS), i.e., 16% of the globally certified population. Annually, 60-80 value engineering training workshops and more than 80 VE study programs are offered in Saudi Arabia and the Arab Gulf countries. Using a confidence interval of 10% and a confidence level of 95% for the population of 1356, the research needed at least 76 respondents. Based on an anticipated response rate of less than 50%, the questionnaires were hand-delivered, in person, to a sample of 300 professionals with value engineering knowledge and experience in the Saudi Arabian construction industry. The participants were randomly selected from various cities in Saudi Arabia, in order to develop research data covering all the country's major cities.
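The minimum sample size can be approximated with a standard formula. The sketch below (the function name and formula choice are my own; the paper does not state which calculator was used) applies Cochran's formula with a finite-population correction for z = 1.96, p = 0.5, and a 10% margin of error; it yields a slightly different figure (about 90), since different calculators apply different corrections.

```python
import math

def required_sample_size(population: int, margin: float = 0.10,
                         z: float = 1.96, p: float = 0.5) -> int:
    """Cochran's sample-size formula with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)        # correct for the finite population
    return math.ceil(n)

# Population of VE-certified professionals reported in the paper
print(required_sample_size(1356))
```

Hand-delivering 300 questionnaires against a required minimum in the 76-90 range gives comfortable headroom for a sub-50% response rate.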
Statistical Analysis
Factor analysis and data reduction were performed using SPSS and Microsoft Excel. The collected data were processed through two statistical procedures: (i) data ranking, based on the mean weighted rating, standard deviation, severity index, and coefficient of variation of the risk factors; and (ii) factor analysis and data reduction, used to decrease the number of variables so that the task could be handled more easily and efficiently.
Descriptive Statistics and Data Ranking
The degree of significance of the risk factors, in the context of the value created by green building designs in construction projects, is described in the following sections. The data rankings were compared based on the severity indices, average weighted mean, and standard deviation of each risk factor. Further analysis of the data was conducted for ranking based on the respondents' answers, their experience (0-5 years, 6-10 years, and more than 10 years), and their professional job (manager, engineer, or architect).
A mean weighted rating for each value attribute and risk factor was computed to indicate the importance of each indicator, using Equation (1). Since ratings range from 1 to 5, the midpoint for value attributes and the neutral point for risk factors is 3.

Mean weighted rating = Σ(R × F) / n (1)

where R is the rating of each value attribute and risk factor (1, 2, 3, 4, 5), F is the frequency of responses, and n is the total number of responses (n = 89). A severity index (S.I.) is employed to rank the indicators according to their significance: the higher the percentage (%), the more significant the attribute/factor. Equation (2) shows how the S.I. is calculated, where W is the weight of each rating (1/5, 2/5, 3/5, 4/5, 5/5):

S.I. = (Σ(W × F) / n) × 100% (2)

The coefficient of variation (COV) expresses the standard deviation as a percentage (%) of the mean and is used to compare the relative variability of the responses; a lower coefficient of variation indicates less variability. The COV was computed as the ratio between the standard deviation and the mean.
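As a worked illustration of these three statistics (the response frequencies below are invented, not survey data; whether the paper uses the population or the sample standard deviation for the COV is not stated, so the population form is assumed here):

```python
import math

# Hypothetical response frequencies for one risk factor on the 1-5 Likert scale
freq = {1: 0, 2: 5, 3: 10, 4: 40, 5: 34}   # n = 89 responses in total
n = sum(freq.values())

# Equation (1): mean weighted rating = sum(R * F) / n
mean = sum(r * f for r, f in freq.items()) / n

# Equation (2): severity index = (sum(W * F) / n) * 100, with W = R / 5
si = sum((r / 5) * f for r, f in freq.items()) / n * 100

# COV: population standard deviation as a percentage of the mean
var = sum(f * (r - mean) ** 2 for r, f in freq.items()) / n
cov = math.sqrt(var) / mean * 100

print(f"mean={mean:.2f}, S.I.={si:.2f}%, COV={cov:.2f}%")
```

Note that because W = R/5, the severity index is simply the mean weighted rating rescaled to a percentage of the maximum score (S.I. = mean/5 × 100), which is why the two statistics always rank factors identically.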
Testing the Hypotheses
The means, standard deviations, and coefficients of variation were quite close together for the three groups of respondents (managers, engineers, and architects). Therefore, SPSS was used for further analysis: an ANOVA test was conducted to examine the statistical differences between the groups' responses.
The SPSS software was used with a significance level of 0.05 to examine the differences between the groups regarding the likelihood of the risks impacting the project value creation of green building development, using the following hypotheses:

H0 (p > 0.05): there is no significant difference among the respondents' ratings for the likelihood of risk factors impacting the value created by green building design.

H1 (p < 0.05): there is a significant difference among the respondents' ratings for the likelihood of risk factors impacting the value created by green building design (at least one of the groups is significantly different from the other groups).
After that, a follow-up test was conducted to make multiple comparisons where a significant difference existed among the respondents. The follow-up test used in this research was the post hoc multiple comparison test; the Tukey test was chosen because the sample sizes are uneven.
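The one-way ANOVA underlying this comparison can be sketched from first principles (the group ratings below are illustrative only; the study itself used SPSS on the full dataset):

```python
# One-way ANOVA F-statistic computed from first principles.
# Ratings per respondent group for a single risk factor -- illustrative values.
groups = {
    "managers":   [1, 2, 3],
    "engineers":  [2, 3, 4],
    "architects": [3, 4, 5],
}

def anova_f(samples):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    all_vals = [x for g in samples for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares (weighted by group size)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in samples)
    # Within-group sum of squares
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in samples for x in g)
    df_b = len(samples) - 1
    df_w = len(all_vals) - len(samples)
    return (ssb / df_b) / (ssw / df_w), df_b, df_w

f_stat, df_b, df_w = anova_f(list(groups.values()))
print(f"F({df_b}, {df_w}) = {f_stat:.2f}")
```

The resulting F would then be compared against the critical value of the F(df_between, df_within) distribution at α = 0.05 (about 5.14 for F(2, 6)) to decide between H0 and H1; SPSS reports the equivalent p-value directly.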
Factor Analysis and Data Reduction
A number of risk factors with the highest likelihood of impacting project value creation might be considered representative of the whole set of data. Therefore, the most significant factors are extracted and treated as representative of the whole set of risk indicators. Based on the factors' relationships and correlations, the outcome of the data reduction is a few clusters consisting of the most important risk factors from the original large group of 66. A clear understanding of the new risk clusters and their implications will be instrumental in assessing and evaluating value creation and performance in green building design.
The factor analysis technique investigated whether there is an underlying relationship between the different factors within the questionnaire. Figure 2 illustrates the overall analysis process. The factor analysis process determines the strength of the relationship between the variables, extracts a matrix of correlation coefficients, and finally extracts from this matrix the components that have an eigenvalue of 1 or more, which is the most common method of extraction in principal component analysis. The data reduction process identifies the variables that correlate highly with a set of other variables in order to cluster them in a meaningful way. The next stage generates a rotated component matrix to find out which risk factors have a more effective influence on each component. Through identifying redundant data, the existing 66 risk factors were reduced to the 35 most important risk factors for green building design.
Figure 2 shows that, through the use of data reduction in SPSS, the risk factors have been categorized into five clusters.
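The "eigenvalue of 1 or more" (Kaiser) extraction rule described above can be sketched with NumPy; the response matrix below is a synthetic stand-in for the 89 × 66 survey data, with a small block of correlated items playing the role of one underlying factor:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the survey matrix: 89 respondents x 8 items,
# where the first four items share a common underlying factor.
base = rng.normal(size=(89, 1))
data = np.hstack([base + 0.5 * rng.normal(size=(89, 4)),   # 4 correlated items
                  rng.normal(size=(89, 4))])               # 4 independent items

corr = np.corrcoef(data, rowvar=False)          # item correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]        # eigenvalues, descending

# Kaiser criterion: retain components with eigenvalue >= 1
retained = int((eigvals >= 1).sum())
print(f"eigenvalues: {np.round(eigvals, 2)}; retained components: {retained}")
```

The eigenvalues of a correlation matrix always sum to the number of items, so a component with an eigenvalue above 1 explains more variance than a single original variable; that is the rationale behind the cutoff. Rotation (e.g., varimax) and factor loading inspection, which SPSS performs next, are omitted from this sketch.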
Usually, a few components account for most of the variation, and these components can be considered to replace the original variables [54-56]. The results presented in the following section extract the most important risk factors, which are essentially representative of the whole set of risk factors. The degree of significance of each risk factor in green building design varies according to its impact on a construction project.
Reliability analysis practically validates the properties of a measurement scale and checks the reliability of the items. Low reliability shows that the items that make up the scale do not correlate strongly enough; thus, they might not be measuring the same construct domain. As a measure of reliability, Cronbach's Alpha was calculated to check the consistency of the research items and to identify problem items that need to be excluded from the scale [57]. Based on George and Mallery's [58] measures, Cronbach's Alpha is assessed in Table 2. The reliability of the data for the risk factors was checked using Cronbach's test. The Cronbach's Alpha value for the risk factors was 0.969, showing good to excellent internal consistency of the components.
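Cronbach's Alpha can be reproduced in a few lines of Python (the item scores below are invented for illustration; the study's value of 0.969 came from the full 89-respondent dataset):

```python
import statistics

def cronbach_alpha(items):
    """items: list of per-respondent score lists (rows = respondents)."""
    k = len(items[0])                          # number of items on the scale
    # Sample variance (ddof=1) of each item column
    item_vars = [statistics.variance(col) for col in zip(*items)]
    # Variance of each respondent's total score
    total_var = statistics.variance([sum(row) for row in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Invented Likert responses: 4 respondents x 3 items
scores = [[4, 5, 4],
          [3, 4, 3],
          [5, 5, 4],
          [2, 3, 2]]
print(round(cronbach_alpha(scores), 3))
```

By the commonly cited George and Mallery rule of thumb (α > 0.9 excellent, > 0.8 good, > 0.7 acceptable), the reported 0.969 falls in the excellent band.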
Descriptive Statistics and Data Ranking
Identification of the risk factors associated with value, and evaluation of their level of influence, plays a pivotal role in the project value created by green building design. The risk factors were ranked based on their likelihood of impacting the project value created. Out of 300 professionals, 89 returned fully completed questionnaires. This is an acceptable response rate (29.7%), as the typical response rate for a postal questionnaire survey in the construction industry is 20-30% [59]. The survey responses revealed that 45% of the respondents were managers, 38% were engineers, and 17% were architects. Around 16% of the respondents were young, having 0-5 years of experience, and 28% had between 6 and 10 years of experience, while the largest contribution was from senior respondents possessing more than 10 years of experience in the profession. The following discussion is limited to the top 3 and the bottom 10 of the top 30 risk factors.
As the detailed statistical ranking results cannot be presented due to space limitations, a summary of the overall ranking and the ranking by each expert group, along with their years of experience, is presented in Appendix B. The appendix shows that the average weighted mean for the risk factors varies from 3.33 to 4.24, with an overall mean of 3.78. The severity indices range from 66.52% to 84.72%. The highest-ranked factor is R35 (failure to identify low-value, long-lead-time items), with a mean of 4.24 and a severity index of 84.72%. An overall examination of the first 30 ranked risk factors indicates that all of them have a minimum mean value of 3.81 (which is higher than the overall mean of 3.78) and severity indices of at least 76.18%. This means that the first 30 ranked risk factors are viewed as important by the respondents. They are: R1, R3, R5, R6, R7, R8, R9, R12, R17, R20, R21, R22, R28, R33, R35, R36, R39, R44, R45, R49, R50, R51, R53, R54, R57, R63, R64, R65, and R66.
Factor R64 (i.e., incorrect time estimate) has the second highest rank for the risk factors with a mean of 4.18 and a severity index of 83.60% and it is ranked as first out of 66 by the engineers and architects, and also by the experts with more than 10 years' experience. The managers ranked it fifth out of 66, and the other rankings based on years of experience are: 0 to 5 and 6 to 10 years ranked it ninth and second out of 66 respectively. The third overall ranking was for R28 (i.e., poor design that may lead to higher operation costs), and the six groups of respondents also ranked it as one of their top 20 highest-ranked risk factors. R28 has a mean of 4.10, severity index of 82.02% and low coefficient of variation of 22.97%.
The last 10 factors among the top 30 have average weighted means that vary from 3.81 to 3.84 and severity indices that range between 76.18% and 76.85%. Their overall rankings are as follows: R54 (poor team relationships) is ranked 21st out of 66; R50 (poor definition of the scope and objectives of projects) is ranked 22nd; R20 (failure to consider construction implications during design) is ranked 23rd; R9 (uncertainty about prices) is ranked 24th; R33 (failure to integrate the various systems to achieve the lowest life-cycle costs) is ranked 25th; R66 (incorrect estimated cost of energy used) is ranked 26th; R12 (failure to appropriately locate cost-to-function allocation) is ranked 27th; R45 (failure to consider increases in life cycle replacement) is ranked 28th; R3 (failure to consider the implications of economic conditions) is ranked 29th; and R11 (failure to recognize cost-value mismatches) is ranked 30th out of 66.
Testing the Hypothesis
This section examines the mind-sets of construction project professionals who were involved in value management/engineering, in relation to how they value and perceive the likelihood of the risk factors having an impact on value creation in green building design.
Overall, the ANOVA results showed no significant difference (p > 0.05) amongst the different groups of participants (i.e., managers, engineers, and architects) for all the factors, so the H1 hypothesis is not supported. Figure 3 shows the overall perspectives of the average rating for the likelihood of risk factors having an impact on the project value created by green building design. The architects give a higher overall average mean than the engineers and managers: all of the top 20 factors ranked by the architects have a mean of over 4.13, and the average overall mean for all the factors is 3.91. In contrast, the mean average for the managers' responses is 3.79, and for the engineers it is 3.72. Based on these results, it is clear that all three groups of respondents agree that most of the risk factors have a high impact on the project value created by green building design.
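The group comparison described above corresponds to a one-way ANOVA on each factor's ratings. A sketch with synthetic ratings (group sizes roughly matching the reported 45%/38%/17% split of about 89 respondents), using scipy's `f_oneway`:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
# Synthetic 1-5 ratings for one risk factor, drawn from the same distribution
# for all three groups to mimic the "no significant difference" outcome
managers = rng.integers(3, 6, size=40)
engineers = rng.integers(3, 6, size=34)
architects = rng.integers(3, 6, size=15)

f_stat, p_value = f_oneway(managers, engineers, architects)
# H1 (the groups' mean ratings differ) is not supported when p > 0.05
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```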
Factor Analysis
Two statistical tests were carried out on the data before conducting factor analysis to indicate its suitability for structure detection. The first test is the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, which indicates the proportion of variance in the variables that might be caused by underlying factors. KMO values close to 1.0 indicate that a factor analysis is useful for the data, while values less than 0.50 indicate that the results of the factor analysis will not be useful. Values between 0.5 and 0.7 are good, between 0.7 and 0.8 very good, and above 0.8 excellent. The Bartlett test of sphericity tests the hypothesis that the correlation matrix is an identity matrix; a significance level of less than 0.05 indicates the need for factor analysis [55][56][57]. Field [55] mentions that a value close to 1 indicates that the patterns of correlations are relatively compact and the factor analysis will provide distinct and reliable factors. Kaiser [60] recommended values greater than 0.5 as acceptable. In the present research, the KMO value was 0.702 and the Bartlett test returned a significance value of 0. As the KMO value is close to 1, factor analysis is appropriate and acceptable.
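Of the two tests, Bartlett's test of sphericity is straightforward to sketch: it compares the statistic chi2 = -((n - 1) - (2p + 5)/6) * ln|R| against a chi-square distribution with p(p - 1)/2 degrees of freedom, where R is the p x p correlation matrix and n the sample size. (KMO additionally requires the anti-image correlation matrix, which is omitted here.) A minimal sketch for three hypothetical survey items:

```python
import math

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity statistic and degrees of freedom for a
    3x3 correlation matrix R from n observations (p-value lookup omitted)."""
    p = len(R)
    # Explicit 3x3 determinant, sufficient for this small sketch
    (a, b, c), (d, e, f), (g, h, i) = R
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    chi2 = -((n - 1) - (2 * p + 5) / 6) * math.log(det)
    df = p * (p - 1) // 2
    return chi2, df

# Hypothetical correlation matrix for three survey items, n = 89 respondents
R = [[1.0, 0.6, 0.5],
     [0.6, 1.0, 0.4],
     [0.5, 0.4, 1.0]]
chi2, df = bartlett_sphericity(R, n=89)
print(f"chi2 = {chi2:.2f} on {df} df")  # well above the 0.05 critical value of 7.81
```

An identity correlation matrix has determinant 1, giving chi2 = 0, i.e., no evidence against the null hypothesis of sphericity.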
Bartlett's test evaluates the null hypothesis (H0) that the original correlation matrix is an identity matrix; factor analysis requires some relationships between the variables, i.e., a significance value of p < 0.05. At the 0.05 significance level, Bartlett's test showed that the p-values for the likelihood of risk impact were highly significant. This shows that the correlation matrix is not an identity matrix, so there are relationships between the variables. Both the KMO and Bartlett tests demonstrated that factor analysis is appropriate for these data. Table 3 presents the components extracted by the principal component analysis (PCA). For the likelihood of risk factors impacting the value created by green building design, 16 components carry an eigenvalue of more than 1 and account for nearly 79.939% of the whole variance. Consequently, the 16 components can be considered representative of the 66 factors included in this study. In the subsequent phase of factor analysis, a rotated component matrix was extracted to find the risk factors with the highest level of influence on project value creation. Table 4 presents the summary of the factor analysis results, and the matrix loading scores are given in Table 3. Table 3 also shows the strength of the relationship between the variables and the extracted matrix of correlation coefficients, from which the components with an eigenvalue of more than 1 were extracted. The results present the variables that correlate highly with a set of other variables. The eigenvalues of the components varied between 33.91 and 1.54, and the rotated variance load varied between 9.13% and 2.63%. Each component has more than two factors with a loading score of more than 0.4.
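The extraction rule described above follows the Kaiser criterion: keep every principal component whose eigenvalue exceeds 1, and report the variance those components jointly explain. A sketch using synthetic ratings in place of the survey data (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic responses: 89 respondents x 10 items (a stand-in for the 66 factors)
X = rng.integers(1, 6, size=(89, 10)).astype(float)
R = np.corrcoef(X, rowvar=False)            # 10 x 10 correlation matrix

eigenvalues = np.linalg.eigvalsh(R)[::-1]   # sorted in descending order
retained = eigenvalues[eigenvalues > 1.0]   # Kaiser criterion: eigenvalue > 1

explained = retained.sum() / eigenvalues.sum() * 100
print(f"{len(retained)} components retained, explaining {explained:.1f}% of variance")
```

The eigenvalues of a correlation matrix sum to the number of variables, so each retained eigenvalue can be read directly as "variance of more than one original variable".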
The value attributes with a loading score of more than 0.4 were reduced and redundant data were eliminated in the clustering stage, in order to obtain a few variables that represent the risk characteristics and their impacts on the value created by green building design. Further reduction was carried out in the subsequent section based on their ranking by the professionals. Table 4 presents the factor analysis and data reduction results for the five new clusters that were formed from the 16 extracted components and their most important risk factors in Table 3. The new clusters can be considered risk indicator clusters that impact value creation and can be used for managing project risks at tolerance level. The variance percentage of each risk factor was extracted from Table 3, while the variance of each cluster was calculated by summing the variances of the components in the same cluster. For example, the functional risk cluster in Table 4 is one of the five clusters for the impact of risk factors on value creation. The cluster encompasses component 8 (variance of 4.76%), which presents R20, R19, and R17; component 11 (variance of 3.69%), which presents R21 and R22; component 10 (variance of 4.34%), which presents R33; and component 14 (variance of 3.501%), which presents R35 and R36 as the main indicators of its set. Consequently, the percentage of variance for this cluster (functional risk) in Table 5 was calculated as 4.762 + 3.686 + 4.338 + 3.501 = 15.876%. In Table 5, the risk factors grouped in the five clusters are highly manageable without losing a large amount of data: just 100% − 79.9% = 20.1% of the existing information was compromised. Using factor analysis and data reduction, the questionnaire's 66 factors were reduced to 16 components and then grouped into five fundamental clusters.
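The cluster-variance calculation described above is just a sum over each cluster's member components. A sketch with hypothetical component variances (the grouping mirrors the text; the variance values do not):

```python
# Hypothetical variance (%) explained by each extracted component
component_variance = {8: 4.8, 10: 4.3, 11: 3.7, 14: 3.5, 2: 7.0, 5: 6.0, 12: 4.2}

# Component IDs grouped into clusters, following the groupings named in the text
clusters = {
    "Functional risk": [8, 10, 11, 14],
    "Environmental risk": [2, 5, 12],
}

# A cluster's explained variance is the sum of its member components' variances
cluster_variance = {
    name: sum(component_variance[c] for c in ids) for name, ids in clusters.items()
}
for name, var in cluster_variance.items():
    print(f"{name}: {var:.1f}%")
```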
The five fundamental clusters include just 35 original factors from the questionnaire that represent the most relevant data on risk indicators that impact value creation.
Cluster 1: Financial Risk
The cluster of financial risk comprises components 1, 3, 7, 13, and 15 and represents 28.44% of the total explained variance, covering 13 risk indicators. These risks would have a large influence on financial investment, especially capital expenditure (CAPEX) and operating expenditure (OPEX). The risk indicators in this cluster should be considered during the early stage of design because they reflect the impact on economic and monetary aspects over the project's life. The selected indicators are: failure to recognize cost-value mismatches, failure to identify cost-value relationships, failure to consider the cost of losing potential revenue, failure to appropriately locate cost-to-function allocation, and failure to consider future operational costs and economic conditions, as well as incorrect estimated cost and/or insufficient funding.
Cluster 2: Functional Risk
The functional risk cluster identifies the risk indicators that impact functional performance and affect the asset's functional reliability. The cluster consists of eight risk indicators distributed across four components (i.e., 8, 10, 11, and 14) with a total variance of 15.876%. The risks relate to design considerations such as construction implications, specifications, systems, and/or changes in design.
Cluster 3: Operational Risk
The operational risk cluster has a variance of 6.613% and comprises three risk indicators: failure to consider increase in routine maintenance, failure to consider increase in life cycle replacement, and failure to consider design impact on operating efficiency. These indicators might have a large effect on project life efficiency.
Cluster 4: Environmental Risk
The environmental risk cluster consists of components 2, 5, and 12, with a variance of 17.259% and three risk indicators. These risks could have a negative effect on the building's efficiency and might also have an adverse environmental impact. The selected risk indicators in the environmental cluster include failure to consider implication of environmental risks, failure to consider the impact of maintainability and reparability, and failure to consider the impact of obsolete equipment.
Cluster 5: Management Risk
The management risks influence the project management performance, which reduces the project's ability to deliver within the required objectives. In this cluster, eight risk indicators were distributed into components 6, 9, and 16 with a total variance of 12.295%. The management risk indicators should be considered at an early stage of a project in order to avoid any obsolescence and to manage the risk at tolerance level. The risk indicators in this cluster concern poor project management, project definition, planning, team relationships, and design; incorrect time estimates; as well as lack of coordination and decision-making.
Financial Risks
The financial or economic value of green assets can be undermined if the risk factors are not factored into the design and operation of these assets [30]. Furthermore, financial risks in green buildings are attributed to the additional capital cost of including green strategies in the design. From a value engineering analysis point of view, financial risks relate to the fact that stakeholders are unable to take into consideration the risks, and the opportunities associated with the risks, shown in Figure 4. The figure indicates that engineers perceived the R11 "Failure to recognize cost-value mismatches", R10 "Failure to identify cost-value relationships", and R12 "Failure to appropriately locate cost-to-function allocation" risks as less important in identifying value. This result is not in keeping with the literature, because these three risks are the foundation on which value engineering analysis is based. One plausible explanation of this anomaly is that, compared to Western practices, practices in the Kingdom of Saudi Arabia (KSA) do not treat these as important.
The results in Figure 4 also show that managers did not rank the R7 "Failure to recognize cost as resource expenditure", R8 "Failure to consider the cost of losing potential revenue", and R66 "Incorrect estimated cost of energy used" risks highly. Although these risks are important in value engineering analysis, the KSA managers perceived them differently. This could be attributed to the fact that most buildings in KSA are publicly owned, hence issues like revenue and energy costs are not considered important. Although future energy cost, performance of new green technologies, and functional performance are uncertain, architects thought the R9 "Uncertainty about prices" risk may not have much impact on value. All respondents agreed that the R1 "Insufficient funding", R65 "Incorrect estimated cost of maintenance", and R63 "Incorrect cost estimate" risks will influence value analysis. Other risk factors such as R5 "Inappropriate cost evaluation criteria", R6 "Failure to consider future operational costs", and R3 "Failure to consider implication of economic conditions" were perceived as important by the respondents. This reiterates the view expressed in the literature that the investment appraisal must address various options for creating the value required from green buildings. Failure to consider the implications of economic conditions and to recognize cost as a resource expenditure will certainly lead to the risk of affordability and to the risk that whole-life cost estimates are not realistic and are based on unreliable evidence or assumptions. This in turn will affect income and the future value of real estate assets.
Functional Risks
Functional risks are associated with the building asset's condition over its life span. The building's design function and its components' specifications are generally based on assumptions, and changes in these assumptions may lead to different levels of performance and reliability. Thus, the design of green buildings must be subject to risk analysis to assess the uncertainties associated with the function and design parameters. Figure 5 portrays the respondents' perceptions of the functionality risks. The figure clearly indicates that architects' views on R19 "Failure to design to brief/specification", R20 "Failure to consider construction implications during design", R21 "Design changes", R22 "Redesign/rework", and R36 "Failure to consider design risks" are in keeping with the other respondents' views. This may be because these risk factors are design-related and architects generally tend to approach their design with a degree of bias.
The results also showed that there is total agreement on the importance of R35 "Failure to identify low-value, long-lead-time items". However, the engineers' view was in accordance with that of the architects for R33 "Failure to integrate the various systems to achieve the lowest life-cycle costs".
Again, this result is not in keeping with the literature regarding Western economies, where lowest life-cycle costs are considered an important value generator [21]. This reaffirms that the risk of obsolescence of the green building functions, or those of its components, will result in larger OPEX and may undermine the value of the asset, leading to lower income, smaller capital receipts, higher costs, and the possibility of legal action [30].

Operational Risks

The operational performance of green buildings has a significant impact on their rental and market value. Investors are protecting and increasing the value of their investment in green real estate by incorporating initiatives to improve the energy efficiency and sustainability of their portfolios [30]. Nevertheless, many in the construction industry view the performance of green products, systems, and buildings as a risk [31]. Thus, considering operational features during the early stages of value planning will probably go a long way to protect and increase the investment value in green real estate, by taking into consideration risks and initiatives to improve the energy efficiency and sustainability of assets throughout their entire life cycle. Figure 6 portrays the respondents' perceptions of the operational risks that might have an impact on value creation in green building development if not taken into consideration during the value engineering analysis process. Figure 6 shows that only the R45 "Failure to consider increase in life cycle replacement", R44 "Failure to consider increase in routine maintenance", and R39 "Failure to consider design impact on operating efficiency" risks were scored highly by the respondents, whereas the R42 "Failure to consider component repair and replacement" and R46 "Limited knowledge of maintenance issues" risks were viewed as not important. In fact, it is surprising to see that risk R42 is ranked 64th out of 66 risks. Component repair and replacement is an integral part of whole-life-cycle management strategies; both have an impact on the OPEX budget and on asset availability (directly associated with vacancy and rental value risks).
Environmental Risks
The environmental risks are associated with minimizing the impacts on the environment throughout the development and operation of green building facilities. The ethos behind green building development is to create facilities that minimize waste and energy use throughout their life cycle stages. To cope with the rapid development of the green building sector, green materials have been developed without robust testing of their performance and environmental credentials, which may lead to liability litigation over unfit-for-purpose specifications, material or product failure, obsolescence, durability, etc. Figure 7 illustrates that architects thought the R47 "Failure to consider implication of environmental risks" risk ought to be taken into consideration in value engineering analysis, whereas engineers and managers considered R38 "Failure to consider obsolescence of equipment impact" an important risk to consider. The results might have been influenced by professional bias. Architects ranked R43 "Failure to consider maintainability and reparability impact" slightly higher than the other respondents. The way in which green buildings are conceived, constructed, operated, and disposed of will influence (or increase) the environmental impacts during their lifecycle, including global warming, acidification, eutrophication, winter smog, heavy metals, and energy [42]. The ANOVA results in Table 6 show that there were significant differences between the respondents regarding R38 (failure to consider obsolescence of equipment impact).
Management Risks
There is ample evidence in the literature to suggest that a skilled and integrated project team, coupled with effective management processes, is essential to unlock value during the early stage of value planning as well as during the development and operation of green facilities. The project endeavor should be geared towards identifying risks and opportunities to maximize value and minimize waste at every stage of the construction and procurement process, from the minute that the need for a building is identified to when it is ready for operation [33,61]. Figure 8 illustrates the respondents' perceptions of the management risks that might have an impact on the value created by green building development if not taken into consideration during the value engineering analysis process. Out of the 17 managerial risk factors identified in the literature, only half were found to have a negative impact on the value creation attributes of green buildings, i.e., R28, R49, R50, R51, R53, R54, R57, and R64.
All respondents agreed on considering the R28 and R49 risks during the appraisal of green buildings. However, engineers viewed the R51 and R50 risks differently from the other participants. This might suggest that engineers in KSA are not often involved in the very early stages of the design process. Nevertheless, the findings here are consistent with Shen and Liu [51], who listed the factors that might influence the success of using value analysis in construction projects.
Conclusions
The present research analyzed various aspects of risk to optimize value creation in the development of green buildings. Sixty-six (66) risk factors were classified into 5 risk categories (functional, financial, operational, environmental, and management) with the objective of evaluating the impacts of these risks on the value creation of green building design. A questionnaire compiling the list of identified risk factors was hand-delivered to 300 practitioners working in value engineering in the Saudi Arabian construction industry. The overall response rate was 29.7%. Among the respondents, 45% were managers, 38% were engineers, and 17% were architects. The participation of young professionals with 0-5 years of experience was 16%, and that of mid-career professionals with 6-10 years of experience was 28%. Interestingly, senior professionals with more than 10 years of experience made the largest contribution (56%) among all the respondents.
Based on the participants' responses, descriptive statistics identified important risk factors with a minimum mean value of 3.81 (i.e., higher than the average overall mean of 3.78) and severity indices of at least 76.18%. Furthermore, the principal component analysis (PCA) extracted 16 components, based on the likelihood of risk factors impacting the value created by green building design, that carry an eigenvalue of more than 1 and account for nearly 79.939% of the whole variance. Finally, the factor analysis grouped the 35 most significant risk factors into 5 clusters: 8 in the functional risk cluster, 13 in the financial risk cluster, 3 in the operational risk cluster, 3 in the environmental risk cluster, and 8 in the management risk cluster.
Due to the differences in perception regarding the risk factors, there is a need for improved communication between the decision-makers' groups to develop a shared understanding of project value creation and its associated risks. The absence of such understanding may raise the possibility of conflicts amongst different groups, which ultimately affects the expected outcomes of the project. Future work can establish the interaction between various value drivers (identified in the authors' previous work) and the risk factors (selected in the present study) using an effective framework to enhance value creation in green buildings. Furthermore, the impact of the risk factors on project constraints (i.e., quality, cost, and time) can also be investigated.
A clear understanding of new risk clusters and their implications will be instrumental in assessing the design indicators and evaluating the impact of risk factors on value creation of green building in Saudi Arabia and elsewhere in the world.